
OpenStack Foundation will tackle infrastructure barriers to enterprise AI adoption

The OpenStack community will help enterprises overcome infrastructure barriers to adopting artificial intelligence technologies in 2019, as demand for GPU- and FPGA-based set-ups grows

The OpenStack Foundation (OSF) has confirmed that its community of supporters will start work in 2019 on helping enterprises overcome the infrastructure-level barriers preventing them from adopting artificial intelligence (AI) and machine learning technologies.

During a Q&A session with the press at the OpenStack Summit in Berlin, OSF executive director Jonathan Bryce confirmed that AI and machine learning would be classified as Strategic Focus Areas for the foundation next year.

Introduced in December 2017, the OSF’s Strategic Focus Areas initiative is geared towards focusing the minds of the foundation’s supporters and contributors on addressing common problems that blight users of certain technologies.

At launch, these included datacentre cloud infrastructures, continuous integration systems, edge computing environments, containers and serverless infrastructures, and now the OSF is adding AI and machine learning to the mix.

What these areas have in common, said Bryce, is that the OpenStack user community is already actively using them in its own infrastructure deployments, but perhaps not in the most consistent or effective way.

“These are areas where we want to drive collaboration, improve integration, improve testing and, in some cases, build technology either in OpenStack or in an additional project to support that,” he said.

OSF chief operating officer Mark Collier said there would be particular emphasis on addressing the demands that running AI and machine learning workloads place on IT infrastructures, rather than the community creating a deep learning framework of its own.

“You might get off track if you think that, as a community, we’re going to build some kind of tool that is going to be a competitor, like TensorFlow or Caffe2 or any of these tools. That is not at all what we’re trying to do,” he said.

“Those tools are putting new demands on infrastructure and we see that already. And it’s that infrastructure piece that we are primarily working on.”

The growing use of AI and machine learning technologies is fuelling adoption of OpenStack bare metal services, and of projects that provide users with frameworks to tap into GPU resources and reprogrammable chip technology, such as field-programmable gate arrays (FPGAs), said Collier.


And the results of the latest OpenStack user survey, made public on the first day of the show, appear to bear this out. According to its findings, production use of OpenStack’s bare metal cloud service has risen from 9% in 2016 to 24% in 2018.

How these technologies support such use cases was showcased during the second OpenStack Summit keynote, in the form of two demonstrations. The first showed how OpenStack Nova could be used to manage GPU resources to run a speech recognition app that delivered real-time closed captioning using the public cloud.
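The keynote did not publish its configuration, but Nova typically exposes GPUs to instances through PCI passthrough requested via a flavor extra spec. A minimal sketch, assuming the compute node's Nova `[pci]` configuration already defines an alias named `gpu` (the flavor, image and network names here are illustrative, not from the demo):

```shell
# Sketch only: assumes a Nova [pci] alias called "gpu" is already
# configured on the compute node; all names below are placeholders.

# Create a flavor that requests one passthrough GPU per instance
openstack flavor create --ram 16384 --vcpus 8 --disk 80 gpu.sketch
openstack flavor set gpu.sketch --property "pci_passthrough:alias"="gpu:1"

# Boot an instance with GPU access
openstack server create --flavor gpu.sketch --image ubuntu-18.04 \
  --network private gpu-demo
```

Nova's scheduler then places the instance only on hosts that can satisfy the requested PCI device, which is what makes GPU capacity manageable as ordinary cloud inventory.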

The second demonstrated how OpenStack Cyborg could be used to co-ordinate FPGA resources for video and image recognition purposes, hosted in an OpenStack-based private cloud.

Zhipeng Huang, a contributor to the OpenStack Cyborg project, told the keynote attendees that the OpenStack technology effectively acts as a management framework for accelerators such as FPGA chips, and helps to plug a “significant gap” in the infrastructure stack.

“All these types of accelerators are being used more and more to support applications like AI, edge computing, HPC and stuff like that,” he said.

“There is a significant gap between these infrastructures and the management software if you want to build a system to support your service, and you have to fill that gap.”
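Cyborg's way of filling that gap is to model accelerators as device profiles that a flavor can request, so a workload asks for "an FPGA" rather than a specific host device. A hedged sketch of that workflow, assuming a deployment where Cyborg manages the FPGA inventory (the profile name, resource amount and flavor name are illustrative):

```shell
# Sketch only: assumes Cyborg is deployed and tracking FPGA devices;
# profile and flavor names below are placeholders.

# Register a device profile describing one FPGA per instance
openstack accelerator device profile create fpga-video \
  '[{"resources:FPGA": "1"}]'

# Ask Nova for that profile via a flavor extra spec
openstack flavor set fpga.sketch --property "accel:device_profile"="fpga-video"
```

With the profile attached to the flavor, Nova and Cyborg together handle scheduling, attachment and lifecycle of the accelerator, which is the management layer Huang described.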
