AI developer toolset series: Shutterstock captures the moment of AI evolution & democratisation
The Computer Weekly Developer Network is in the engine room, covered in grease and looking for Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) tools for software application developers to use.
This post is part of a series which also runs as a main feature in Computer Weekly.
With so much AI, ML & DL power in development and so many new neural network brains to build for our applications, how should programmers ‘kit out’ their AI toolbox?
How much grease and gearing should they get their hands dirty with… and which robot torque wrench should they start with?
The following text is written by Peter Silvio in his capacity as vice president of engineering for platform solutions at Shutterstock, the company that hosts millions of stock images, photos, videos and music tracks on its portal website.
Silvio writes on the evolution and democratisation of AI/Deep Learning development as follows…
When it comes to AI, much of the focus has been on the realm of pure science and mathematics, as well as on researching, developing and training models.
Many businesses are attempting to leverage AI and deep learning in their applications… however, engineering and executing a production-ready build that is scalable and performant is a difficult challenge.
This presents interesting challenges as the applications move to the cloud.
Microservices architectures enjoy robust infrastructure capabilities and tooling, such as Docker and Kubernetes; however, the equivalent toolset support for deep learning-based applications is still nascent.
Offline Tasks & Online Tasks
In computer vision application development, for example, there are generally two types of tasks needed to complete an application: ‘offline tasks’ and ‘online tasks’.
Offline tasks are those which do not impact current production environments, including model design, development, training and even initial testing and model validation. In terms of tooling, this is the most mature space within deep learning, with diverse frameworks, from TensorFlow and Keras to PyTorch, and a growing ecosystem of supporting libraries.
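To give a flavour of what an offline task looks like in code, the sketch below defines and trains a small classifier in PyTorch; the random tensors are hypothetical stand-ins for a real training set.

```python
import torch
from torch import nn

# A minimal offline-task sketch: define and train a small classifier.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(64, 784)            # hypothetical mini-batch of features
labels = torch.randint(0, 10, (64,))     # hypothetical class labels

for epoch in range(5):                   # training: the core offline task
    optimiser.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```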
Additionally, model development and training are beginning to see democratisation through the availability of commercial software and managed services that reduce the overhead of the model development lifecycle.
A key tool many data scientists have used for years is the Jupyter notebook. In recent years, many vendors, including the major cloud providers, have focused their attention on providing managed services which aim to make machine learning/deep learning design and development more efficient as well as accessible to data scientists and engineers alike.
Amazon SageMaker, Google Cloud Datalab and Azure Machine Learning Studio all provide fully managed cloud Jupyter notebooks for designing and developing machine learning and deep learning models by leveraging serverless cloud engines.
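By way of illustration, a training job on one of these managed services might be launched from a notebook roughly as follows, here using the SageMaker Python SDK; the role ARN, S3 bucket, training script and version numbers are placeholder assumptions and vary by account and SDK release.

```python
from sagemaker.tensorflow import TensorFlow

# Hypothetical role and paths: substitute real account values.
estimator = TensorFlow(
    entry_point="train.py",          # your model-training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",   # a GPU-backed training instance
    framework_version="2.11",
    py_version="py39",
)
# Kicks off a fully managed training job against data held in S3.
estimator.fit({"training": "s3://my-bucket/training-data"})
```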
By leveraging these platforms, businesses can take advantage of a full spectrum of capabilities: developers can focus on building truly differentiated models where the business need or opportunity requires, leverage pre-trained models or managed APIs for more standard, commoditised capabilities, or sit in between by applying transfer learning.
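At the commoditised end of that spectrum, a managed vision API can label an image in a few lines of client code. Below is a minimal sketch using Google's Cloud Vision client library; the credentials setup and the photo.jpg file are assumptions.

```python
from google.cloud import vision

# A managed, pre-trained image-labelling API: no model to build or train.
client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:               # hypothetical local image
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)   # commoditised capability
for label in response.label_annotations:
    print(label.description, label.score)
```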
Transfer learning
Given the time and resources required to build and train deep learning models, transfer learning is an incredibly efficient and powerful method that has become popular in deep learning.
Transfer learning allows us to use a pre-trained model on new problems. Leveraging models such as Inception V3 for image recognition reduces overall development time while still producing highly accurate results.
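A minimal Keras sketch of that pattern follows: the pre-trained Inception V3 backbone is frozen and only a small classification head is trained on the new problem. NUM_CLASSES and the commented-out dataset are hypothetical stand-ins for the new task.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # hypothetical: number of categories in the new problem

# Pre-trained ImageNet weights, with the original classification head removed.
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False  # freeze the transferred layers

# Only this small head is trained, which is what makes transfer learning fast.
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # hypothetical dataset
```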
Online tasks consist of the productionisation and operation of AI/deep learning capabilities within business applications and platforms. At Shutterstock, we have applied our research in image recognition to develop a platform which offers our customers rich features such as ‘Reverse Image Search’ and ‘Similar Image Search’.
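Shutterstock has not published its implementation here, but a common pattern for this kind of feature, offered purely as a hedged sketch, is to embed every image with a pre-trained network and answer queries by nearest-neighbour search over those embeddings; catalogue_images and query_image below are hypothetical, pre-loaded arrays.

```python
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input

# Pre-trained backbone used as a fixed feature extractor (2048-d embeddings).
embedder = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def embed(batch):
    """batch: float32 array of shape (n, 299, 299, 3), RGB pixels in [0, 255]."""
    vecs = embedder.predict(preprocess_input(batch.copy()), verbose=0)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalise

# Index the catalogue once (an offline task), then serve queries online.
catalogue_vecs = embed(catalogue_images)          # hypothetical image batch
query_vec = embed(query_image[np.newaxis, ...])   # hypothetical single image
scores = catalogue_vecs @ query_vec.T             # cosine similarity
top_ten = np.argsort(scores.ravel())[::-1][:10]   # indices of best matches
```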
When it comes to executing a production build for deep learning services, such as custom computer vision models, the toolchain available to developers is still fairly nascent and relies heavily on more generalised tools and custom development.
Until recently, deploying graphics processing unit (GPU)- and input/output (I/O)-heavy deep learning applications generally required expensive, intricate configurations and did not take full advantage of modern capabilities such as containerisation, autoscaling and tools such as Kubernetes.
This will change rapidly: there is already great framework support across Python, TensorFlow, MXNet and others, and as the offline tasks are simplified, so too will the online tasks be. With the continued evolution of open source projects such as Kubernetes and Istio, building, deploying, testing and scaling are becoming streamlined.
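To make that concrete, the sketch below uses the official Kubernetes Python client to declare a GPU-backed deployment for a containerised model service; the image name and replica count are assumptions, and the nvidia.com/gpu resource limit presumes a cluster with the NVIDIA device plugin installed.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig on this machine

# A containerised model service that requests one GPU per pod.
container = client.V1Container(
    name="vision-service",
    image="registry.example.com/vision-service:latest",  # hypothetical image
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1"},  # schedule onto a GPU node
    ),
)
spec = client.V1DeploymentSpec(
    replicas=2,  # autoscaling could adjust this via an HPA
    selector=client.V1LabelSelector(match_labels={"app": "vision-service"}),
    template=client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "vision-service"}),
        spec=client.V1PodSpec(containers=[container]),
    ),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="vision-service"),
    spec=spec,
)
client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                body=deployment)
```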
About the author
Peter Silvio is a passionate technology leader with over 20 years of experience. He has spent the past decade architecting and building distributed service platforms and designing data architecture, not just for internal applications but also for defining API products to power future company solutions.