Hazelcast: Distilling real-time machine inference secret sauce
This is a guest post for the Computer Weekly Developer Network written by Dale Kim in his capacity as senior director for technical solutions at Hazelcast – the company is known for its real-time stream processing and in-memory data platform.
Kim writes in full as follows…
All brands and companies have some kind of secret sauce: something that truly sets them apart.
But can your business processes play their part in safeguarding that secret sauce? If so, does that include your machine learning (ML) initiatives? And if not, why not? Whether it’s a liquid formula, a secret blend of herbs and spices, or the way you build and operate the ML infrastructure behind artificial intelligence (AI)-driven automation, a secret sauce makes all the difference.
Without a secret sauce, useful ML/AI deployments can be a struggle.
DevOps teams have been told to incorporate ML into their organisation’s technology stack – but we all know how this goes: ML flies through the training phase, then struggles once it needs to get into production.
How, for example, does your carefully trained ML model deal with a deluge of credit card transactions on the last Friday before Christmas, scoring a torrent of streamed data to identify fraudulent activity? For accurate analysis, the model needs to understand each customer’s past spending – and that means combining real-time analysis with sub-millisecond data processing.
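To make that concrete, here is a minimal sketch in Java of the kind of lookup-and-score step involved. The in-memory spending history, the customer ID and the scoring heuristic are all illustrative stand-ins, not a real fraud model or any particular product’s API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: enrich an incoming card transaction with the customer's
// recent spending held in memory, then apply a toy scoring heuristic where a
// real deployment would call a trained model.
public class FraudScoringSketch {

    // In-memory lookup of each customer's recent transaction amounts.
    static final Map<String, List<Double>> recentSpend = new ConcurrentHashMap<>();

    static double score(String customerId, double amount) {
        List<Double> history = recentSpend.getOrDefault(customerId, List.of());
        double avg = history.stream().mapToDouble(Double::doubleValue).average().orElse(amount);
        // The further this amount sits from the customer's average, the higher the score.
        return Math.min(1.0, Math.abs(amount - avg) / (avg + 1.0));
    }

    public static void main(String[] args) {
        recentSpend.put("cust-42", List.of(25.0, 40.0, 31.5));
        System.out.println(score("cust-42", 2_500.0)); // unusually large purchase -> high score
    }
}
```

The hard part is doing that lookup and scoring at streaming speed, for every transaction, without the history lookup becoming the bottleneck.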
To get there, you need the right infrastructure.
Anybody can bolt together storage, computation and ML tools, but that doesn’t necessarily deliver the outcome people want. This matters because ML is destined to drive more business and IT functions towards faster and more accurate operations. So, while your boss might well have assembled a team of top data scientists, the real work lies in being able to consume and process the right data at the right time – in real time.
Why deployments can struggle
This is a giga-scale issue and there are two main reasons ML/AI deployments struggle.
The first is human.
The two groups responsible for the ML that underpins AI-driven automation – the data scientists who build and train models and the DevOps teams responsible for deployment – have different ways of working. Data scientists build machine learning models in languages like Python, while DevOps teams use languages such as Java. This stores up trouble when it comes to deploying and updating the model – for example, the additional coding required to get Python models into a high-performance, Java-based system. That typically means rewriting code from one language to another, a task nobody seems excited about, especially given the difference in skills required.
The second issue is technical.
Data scientists and DevOps teams are both challenged with building a data-processing model that enriches streaming data with legacy data for the context AI needs. This means that stream processing capabilities need to be tied closely to historical data, which typically requires a significant amount of custom code that must be optimised for performance. A further problem comes from the distributed nature of the cloud, where data is generated and stored everywhere from the centre to the edge, with a range of processing and storage power in between. You’ve got data ping-ponging across the network between storage and processing – meaning latency.
What’s needed is an infrastructure capable of absorbing waves of real-time data from – say – Kafka, IoT, database events, message queues and shopping baskets; of processing the data – cleaning, filtering and enriching it – and feeding it into an ML model for real-time inference.
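As a rough sketch of that pipeline shape – ingest, clean and filter, enrich with reference data, then infer – the Java below uses plain collections as stand-ins for Kafka topics and a stream processing engine; the Event record, the reference data and the infer() placeholder are invented for illustration.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: plain Java collections stand in for Kafka topics and a
// stream processing engine, and infer() stands in for a real ML model call.
public class PipelineSketch {

    record Event(String customerId, double amount) {}

    // Reference data used to enrich each event before inference.
    static final Map<String, String> customerTier = Map.of("cust-42", "gold-tier");

    static double infer(Event e, String tier) {
        double base = "gold-tier".equals(tier) ? 0.1 : 0.5; // placeholder logic
        return Math.min(1.0, base + e.amount() / 10_000);
    }

    public static void main(String[] args) {
        List<Event> incoming = List.of(new Event("cust-42", 99.0), new Event(null, -1.0));

        List<Double> scores = incoming.stream()
                .filter(e -> e.customerId() != null && e.amount() > 0)   // clean / filter
                .map(e -> infer(e, customerTier.get(e.customerId())))    // enrich + infer
                .collect(Collectors.toList());

        System.out.println(scores);
    }
}
```

In production the same stages run continuously over unbounded streams rather than a finite list, which is exactly what the ingredients below are meant to provide.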
ML for data scientists & DevOps
We need to automate as much of that as possible by removing the human and technical hurdles and integrating the process of delivering and updating ML models with high levels of performance. The secret sauce? A runtime that works for both the data scientist and DevOps.
It has three ingredients.
The first is a high-speed stream processing engine that can take real-time data and perform all the data preparation work – ingestion, filtering, aggregation, transformation, enrichment – to get data ready for ML models. This typically means using streaming SQL – SQL queries that run on incoming, unbounded data sets – to prepare the data, which simplifies pipeline creation for both data scientists and DevOps professionals.
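For a flavour of what such a streaming SQL statement might look like, the hedged example below sketches a windowed aggregation over an unbounded transactions stream, joined to a customers reference table. The table and column names are invented and the exact dialect varies by engine, so the query is only assembled and printed from Java here.

```java
// Illustrative only: the table and column names are invented and the exact
// windowing syntax varies by engine, so the query is simply printed here
// rather than submitted to a real SQL service.
public class StreamingSqlSketch {
    public static void main(String[] args) {
        String query = """
            SELECT t.customer_id,
                   COUNT(*)      AS txn_count,
                   SUM(t.amount) AS txn_total
            FROM TABLE(TUMBLE(TABLE transactions, DESCRIPTOR(event_time), INTERVAL '1' MINUTE)) AS t
            JOIN customers AS c ON c.id = t.customer_id
            GROUP BY t.customer_id, window_start, window_end
            """;
        System.out.println(query);
    }
}
```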
The second is a high-speed, in-memory data store capable of holding data in RAM and providing co-located compute that allows you to execute code on that held data for massively parallel, sub-millisecond response times across millions of complex transactions. This component is critical for quickly retrieving historical and reference data to enrich the real-time data that you’re acting on.
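The sketch below illustrates the co-located compute idea in plain Java: a small function is shipped to where the entry lives and only the result comes back, so the full history never crosses the network. An in-process map stands in for a distributed in-memory store – this is a pattern sketch, not any specific product’s API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative only: an in-process map stands in for a distributed in-memory
// store; the point is the pattern of sending the function to the data rather
// than pulling the data to the function.
public class ColocatedComputeSketch {

    record SpendHistory(double total, int count) {
        double average() { return count == 0 ? 0.0 : total / count; }
    }

    static final Map<String, SpendHistory> store = new ConcurrentHashMap<>();

    // Run a task "next to" the stored entry and return only the result.
    static <R> R computeOnKey(String key, Function<SpendHistory, R> task) {
        return task.apply(store.getOrDefault(key, new SpendHistory(0, 0)));
    }

    public static void main(String[] args) {
        store.put("cust-42", new SpendHistory(96.5, 3));
        double avg = computeOnKey("cust-42", SpendHistory::average); // only the average comes back
        System.out.println(avg);
    }
}
```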
Finally, you need an inference interface that is language-agnostic. This gives you the ability to call ML models that are created in a language other than the one used for the core stream processing engine. More specifically, this means you need to be able to call Python-based ML models from a Java-based data processing system. This avoids the extra step of having to convert ML models from Python to Java, just to make them compatible with the core data processing system.
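One hedged way to picture that bridge is shown below: a Java process runs Python-hosted model logic in a child process and exchanges one line per inference request over stdin/stdout. It assumes python3 is on the PATH, and the tiny in-script "model" is a placeholder – a real deployment would load a trained model inside the Python script and would more likely use gRPC or a purpose-built runtime bridge.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: assumes python3 is on the PATH; the in-script "model" is a
// placeholder where a real script would load a trained Python model.
public class PythonInferenceBridge {
    public static void main(String[] args) throws Exception {
        Path script = Files.createTempFile("model", ".py");
        Files.writeString(script, """
                import sys
                while True:
                    line = sys.stdin.readline()        # one request per line
                    if not line:
                        break
                    amount = float(line)
                    score = min(1.0, amount / 10_000)  # placeholder for model.predict(...)
                    print(score, flush=True)
                """);

        Process py = new ProcessBuilder("python3", script.toString()).start();
        try (Writer toPython = new OutputStreamWriter(py.getOutputStream());
             BufferedReader fromPython = new BufferedReader(new InputStreamReader(py.getInputStream()))) {
            toPython.write("2500.0\n");
            toPython.flush();
            System.out.println("fraud score = " + fromPython.readLine());
        } finally {
            py.destroy();
        }
    }
}
```

The point is the shape of the interaction, not the transport: the Python model stays in Python, and the Java-based processing layer simply calls it.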
ML secret sauce
ML/AI is the recipe for intelligent, automated business operations, but without a holistic approach to building and operating ML inference, automation will not live up to expectations.
To get more productivity and results from your ML/AI initiatives, explore the ingredients above for an ML secret sauce that simplifies and streamlines your ML model deployment efforts.