An overview of deep learning tools

As AI and deep learning mature, we look at advanced tools that enable developers to start building sophisticated, intelligent applications


The technology industry is currently in the midst of an artificial intelligence (AI) renaissance. Initial work in this field, dating back to around the 1980s, fell short of its longer-term potential due to limitations in the technology platforms of the day.

As such, the first age of AI was ignominiously relegated to movies where it powered talking cars, humanoid cyborgs and a selection of other fancifully imagined products.

The current reawakening of AI has been facilitated by advances in hardware, from processing to memory to data storage, but also by our ability to develop complex algorithmic structures capable of running on these new super-powered backbones.

As IT departments start working to apply AI enablement to enterprise software stacks, it is worth taking a step back and examining what is actually happening inside the synaptic connections that make our AI “brains” so smart.

By knowing more about the software structures being architected, developers can, in theory, apply AI advancements more intelligently to the applications engineered for tomorrow.

Google TensorFlow 

Key among the “tools” many AI developers will be learning now is TensorFlow. Built and open sourced by Google, TensorFlow is a symbolic mathematics library used to build machine learning systems, most commonly in the Python programming language. TensorFlow can, for example, be used to build a “classifier” – a visual image scanning component that can recognise handwritten numerical digits – in under 40 lines of code.
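
As a rough illustration of that claim – a sketch rather than Google’s own sample code – a minimal handwritten-digit classifier built with TensorFlow’s bundled Keras API might look like this:

```python
# A minimal sketch of the kind of digit classifier described above,
# using TensorFlow's bundled Keras API and the MNIST dataset.
import tensorflow as tf

# Load the MNIST handwritten-digit dataset (28x28 greyscale images).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small feed-forward network: flatten the image, one hidden layer,
# then a 10-way softmax over the digits 0-9.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))
```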

Describing the principles behind deep learning, Rajat Monga, engineering director for TensorFlow at the Google Brain division, says: “Deep learning is a branch of machine learning loosely inspired by how the brain itself works. We’re focused on making it easier for humans to use the devices around them and we think that making TensorFlow an open source tool helps and speeds that effort up.”

TensorFlow is used heavily in Google’s speech recognition systems, the latest Google Photos product and, crucially, in the core search function. It is also used to provide the latest AI functionality extensions inside Gmail – many users may have noticed an increasing number of auto-complete suggestions in Gmail, a development known as Smart Compose.

Perceptual breakthroughs 

The toolsets and libraries being developed in this area are focused on what is often referred to as “perceptual understanding”. This is the branch of AI model coding devoted to letting a computer vision system pointed at a roadway directions sign know that it is looking at a signboard and not just letters on a wall. Applied context, then, is key to this element of AI.

Scale is also key to many of these types of AI and machine learning libraries, so they need to be able to run on multiple CPUs, multiple GPUs and even multiple operating systems concurrently. TensorFlow is good at this, and the capability is common to much of the code discussed here.
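
To make that concrete, TensorFlow’s tf.distribute module (in the framework’s more recent releases) lets a single model definition be replicated across however many GPUs are visible; a minimal sketch:

```python
# A sketch of scaling a model across multiple GPUs using TensorFlow's
# tf.distribute API (available in the framework's 2.x releases).
import tensorflow as tf

# MirroredStrategy replicates the model on each visible GPU and
# keeps the copies in sync during training.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Any model built inside the scope is mirrored across devices.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# model.fit(...) then trains on all replicas transparently.
```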

“Most strong deep learning teams today use one of the more popular frameworks – and I’m talking about technologies like TensorFlow, Keras, PyTorch, MXNet or Caffe.

“These frameworks enable software engineers to build and train their algorithms and create the ‘brains’ inside AI,” explains Idan Bassuk, head of AI at Aidoc, a Tel Aviv-based specialist firm using AI to detect acute cases in radiology.

In addition to those mentioned, there are several categories of tools that enable deep learning engineers to actually “do” their work faster and more effectively. Examples include tools for automating DevOps-related tasks around deep learning (such as MissingLink.ai), tools for accelerating algorithm training (such as Uber's Horovod and Run.ai), and others, according to Bassuk. 

The other big contenders 

Microsoft’s work in this space comes in the shape of the Microsoft Cognitive Toolkit (the artist formerly known as CNTK). This library describes neural networks as a series of computational steps arranged in a directed graph, an approach intended to improve their modularisation and maintenance.

This toolkit can be used to build reinforcement learning (RL) functions that allow an AI to grow cumulatively better over time. It can also be used to develop generative adversarial networks (GANs), a class of AI algorithms used in unsupervised machine learning.

IBM has a very visible hand in this space with its Watson brand. Despite the firm’s recent acquisition of Red Hat, the IBM approach is rather more proprietary than some: the firm offers developers access to a collection of representational state transfer application programming interfaces (REST APIs) and software development kits (SDKs) that use Watson cognitive computing to solve complex problems.
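
In practice, that means developers call Watson over HTTP rather than embedding a framework in their code. The sketch below shows the general shape of such a call only – the endpoint URL and request body here are placeholders, not IBM’s documented API:

```python
# Illustrative only: the general shape of calling a Watson-style REST API.
# The endpoint URL and the request/response fields below are placeholders,
# not IBM's documented values - consult the Watson API reference for those.
import requests

API_KEY = "your-ibm-cloud-api-key"                  # placeholder credential
URL = "https://api.example.com/watson/v1/analyze"   # hypothetical endpoint

response = requests.post(
    URL,
    auth=("apikey", API_KEY),                # API-key-based basic auth
    json={"text": "Deep learning frameworks are maturing fast.",
          "features": {"sentiment": {}}},    # hypothetical request body
)
response.raise_for_status()
print(response.json())                       # parsed JSON result
```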

A selection of AI and deep learning tools

  • Caffe: Caffe is an open source framework for deep learning that supports various types of software architectures and was designed with image segmentation and image classification in mind.
  • DeepLearning4J: DeepLearning4J is an open source, distributed deep learning library for the JVM. Its developers claim it is well suited to training distributed deep learning networks and can process huge datasets without losing pace.
  • IBM Watson: IBM has positioned Watson as “deep learning for business”.
  • Keras: Keras is an open-source neural network library written in Python.
  • Microsoft Cognitive Toolkit: A deep learning framework developed by Microsoft Research that describes neural networks as a series of computational steps via a directed graph.
  • MLflow: A tool from Databricks to support machine learning experiments.
  • MXNet: Apache MXNet is a scalable training and inference framework with a concise API for machine learning. 
  • PyBrain: An open-source, modular machine learning library.
  • Scikit-Learn: Scikit-learn is an open-source machine learning framework for Python that is useful for data mining, data analysis, and data visualisation.
  • TensorFlow: This is an open source library for high-performance computation. It combines several machine learning and deep learning techniques to support applications such as face and handwriting recognition.
  • Theano: Theano is a Python library for defining, optimising, manipulating and evaluating mathematical expressions using a computer algebra system.
  • Torch: Torch is an open-source framework for scientific computing that supports machine learning algorithms. 

Facebook is also in the big brand group for AI and machine learning. The social networking company is (perhaps unsurprisingly) very keen to work on AI functions and is known for its PyTorch deep learning framework, which was open sourced in early 2017. PyTorch runs on Python and so is regarded as a competitor to TensorFlow.
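
The contrast is easiest to see in code. The following is a minimal sketch of a PyTorch classifier and a single training step – illustrative only, not Facebook’s own sample code:

```python
# A minimal sketch of a digit-style classifier expressed in PyTorch.
import torch
import torch.nn as nn

# The network is a plain Python class - PyTorch's "eager",
# define-by-run style, often contrasted with static graphs.
class DigitClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, 10),  # raw logits for 10 digit classes
        )

    def forward(self, x):
        return self.net(x)

model = DigitClassifier()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One illustrative training step on a random batch of fake images.
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()   # autograd computes the gradients
optimizer.step()  # update the weights
print(loss.item())
```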

Facebook also open sourced its Horizon reinforcement learning (RL) platform in late 2018. According to the developer team behind Horizon, “machine learning (ML) systems typically generate predictions, but then require engineers to transform these predictions into a policy (i.e. a strategy to take actions). RL, on the other hand, creates systems that make decisions, take actions and then adapt based on the feedback they receive.”
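
That distinction between a prediction and a policy can be sketched in a few lines of generic Python – this is not Horizon’s API, just an illustration of how predicted action values become a policy via a simple epsilon-greedy rule:

```python
# Illustrative sketch of the distinction the Horizon team describes:
# a model's *predictions* (estimated values per action) turned into a
# *policy* (a rule for choosing actions). Generic code, not Horizon's API.
import random

def epsilon_greedy_policy(predicted_values, epsilon=0.1):
    """Pick the highest-value action most of the time, but explore a
    random action with probability epsilon so feedback keeps flowing."""
    if random.random() < epsilon:
        return random.randrange(len(predicted_values))
    return max(range(len(predicted_values)), key=lambda a: predicted_values[a])

# Suppose a model predicts the expected reward of three possible actions.
action = epsilon_greedy_policy([0.2, 0.7, 0.5])
print("chosen action:", action)
```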

Other notable toolsets

Any overview of the neural nodes in the AI brain would be incomplete without mentioning a number of other key libraries and toolsets. Caffe is an open source framework for deep learning that can be used to build convolutional neural networks (CNNs), which are typically used for image classification. Caffe goes down well with some developers due to its support for various types of software architectures.

DeepLearning4J is another useful tool for the AI developer’s toolbox. This is an open source, distributed deep learning library for the Java Virtual Machine. For Python developers, there is scikit-learn, a machine learning framework used for tasks including data mining, data analysis and data visualisation.
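
Part of scikit-learn’s appeal is how little ceremony a working model requires; a minimal sketch using the library’s bundled digits dataset:

```python
# A minimal scikit-learn sketch: train and evaluate a classifier on
# the library's bundled digits dataset in a few lines.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # 8x8 handwritten-digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)  # a simple baseline model
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```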

There is also Theano, a Python library for defining and managing mathematical expressions, which enables developers to perform numerical operations involving multi-dimensional arrays for large, computationally intensive calculations.
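
A minimal sketch of that symbolic style – define an expression, compile it, then feed it real arrays:

```python
# A minimal Theano sketch: define a symbolic expression over matrices,
# compile it into a callable function, then evaluate it on real arrays.
import numpy as np
import theano
import theano.tensor as T

x = T.dmatrix("x")              # symbolic matrix of doubles
y = T.dmatrix("y")
z = x * y + T.sqr(x)            # a symbolic expression, not yet computed

f = theano.function([x, y], z)  # Theano compiles (and optimises) the graph

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])
print(f(a, b))                  # numerical evaluation happens here
```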


In the real world (but still the AI world), we can see firms using a number of different toolsets, libraries and code methodologies across their development teams as they attempt to build the machine intelligence they seek.

According to a Databricks CIO survey, 87% of organisations invest in an average of seven different machine learning tools – and this, of course, adds to the organisational complexity around managing these tools and their data.

Databricks has attempted to address part of this challenge by producing and open sourcing a project called MLflow. The goal of MLflow is to help manage machine learning experiments and organise them into what effectively becomes a lifecycle. It also strives to make it easier to share project setups and to get those models into production.

The company insists that if we want AI to be easier to adopt and evolve over time, we need more standardised approaches to managing the tools, data, libraries and workflows in one place. MLflow was released in alpha status in June 2018.
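
For a sense of what that lifecycle management looks like in code, here is a minimal sketch of MLflow’s tracking API – the parameter and metric names are illustrative, not prescribed by the tool:

```python
# A minimal sketch of MLflow's tracking API: record the parameters and
# metrics of an experiment run so it can be compared and reproduced later.
# The parameter and metric names below are illustrative choices.
import mlflow

with mlflow.start_run(run_name="baseline-experiment"):
    # Hyperparameters for this run (illustrative values).
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 5)

    # ... train a model here ...

    # Metrics observed during or after training.
    mlflow.log_metric("accuracy", 0.93)

# Runs can then be browsed and compared in the MLflow UI, started with:
#   mlflow ui
```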

The neural road ahead 

As these tools develop, we are witnessing some common themes surfacing. Flexibility in these software functions often comes at the cost of performance or the ability to scale, or indeed both. If a toolset is tightly coupled to one language or deployment format, it is typically harder to reshape it to be bigger, wider, faster or fatter.

Over time, there is likely to be some consolidation of platforms or some wider community-driven migration to the most efficient, most powerful, most open, most intelligent and most “trainable” toolsets.
