Starburst [Data]: Neural Neurotics & home truths in Deep Learning

This is a guest post for the Computer Weekly Developer Network written by Ken Pickering, SVP of engineering at Starburst – a company known for its analytics engine that powers the Data Mesh product.

Pickering writes in full as follows…

For me, a lot of engineering and software development in the past decade has involved working with large data sets in some capacity.

Whether it was security, insure-tech, e-commerce, travel fintech, or even my current role at a company that produces a large-scale analytics platform, we were capturing data at an exponential rate and delving more and more into how to automate business decisions based on it.

So, it’s no real surprise that the teams I’ve been a part of have explored both traditional Machine Learning (ML) and predictive analytics as well as Deep Learning (DL) and neural networks.

In recent years, there has been a definitive shift in the industry towards focusing less on traditional ML and putting more emphasis on Deep Learning, even if many businesses still find solid results with the former.

A deep DL shift

The reasons for this are pretty straightforward:

Size of the data: Neural networks will (generally) improve the more data you feed into them. Traditional ML models hit a point where adding more data doesn’t realistically improve performance (the sketch after this list shows one way you might compare the two).

Algorithmic advancement: There have been several recent breakthroughs in the underlying algorithms of Deep Learning systems that have made them perform better.

Computation power: With the cloud and advances in modern computing (like the more widespread development of ARM processors), there is more computational power available than ever before.

Marketing: Find me a day where OpenAI/ChatGPT isn’t making the news in some capacity.
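To make the data-size point above a little more concrete, here is a minimal sketch, in Python with scikit-learn rather than anything Starburst-specific, of how you might compare a traditional model against a small neural network as the training set grows. The synthetic dataset, the model choices and the hyperparameters are illustrative assumptions only, not a recipe from the article.

```python
# A minimal sketch: compare how a traditional model and a small neural network
# respond to increasing amounts of training data. All choices here are
# illustrative assumptions, not a definitive benchmark.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for "a large data set"
X, y = make_classification(n_samples=20_000, n_features=40,
                           n_informative=15, random_state=0)

models = {
    "traditional (logistic regression)": LogisticRegression(max_iter=1000),
    "deep-ish (small MLP)": MLPClassifier(hidden_layer_sizes=(64, 64),
                                          max_iter=300, random_state=0),
}

for name, model in models.items():
    # learning_curve retrains the model on growing fractions of the data
    # and reports cross-validated accuracy at each training-set size.
    sizes, _, val_scores = learning_curve(
        model, X, y,
        train_sizes=np.linspace(0.1, 1.0, 5),
        cv=3, scoring="accuracy", n_jobs=-1,
    )
    for n, score in zip(sizes, val_scores.mean(axis=1)):
        print(f"{name}: {n:>6} samples -> accuracy {score:.3f}")
```

In practice you would expect the traditional model to plateau earlier while the network keeps benefiting from extra samples, though the exact curves depend entirely on the problem and the data.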

It can be pretty tempting to just fall completely into the ‘we should use neural networks for everything’ camp, too. In general, they can definitely outperform more purpose-specific models and are much more adaptable as they incorporate newer information.

Glaring challenges

But, the real challenges they run into are pretty glaring overall:

They’re a total black box: You can get a result from your neural network and have literally no idea how or why it came up with that answer. It’s great when the answer is verifiably correct, but if it’s wrong, there’s almost no real way to trace why. In some cases that’s fine, but in many applications of computing, it’s not (a quick sketch of this contrast follows this list).


Amount of data: You need to be working with a massive data set to train a neural network correctly. In many cases, you can get away with a much smaller data set if you’re building a more specific model for an application.

Expense: Related to the data point above, they’re much more expensive to train and leverage than a more traditional approach. If you don’t need a neural network, you’re probably going to find it much more cost-effective to use a traditional algorithm.

Hype: A lot of people have wildly unrealistic expectations of what they can deliver – and, because of the black-box issue in the first point, expectations need to be set on what they can handle accurately.
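To illustrate the black-box point in the list above, here is a small, hypothetical Python/scikit-learn sketch contrasting a traditional model whose reasoning can be printed and audited with a neural network that only exposes raw weight matrices. The dataset and model settings are assumptions chosen purely for illustration.

```python
# A minimal sketch of the traceability gap: a decision tree's rules can be
# printed and audited, while an MLP gives no comparable trace of "why".
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
feature_names = list(load_breast_cancer().feature_names)

# Traditional model: every prediction traces back to explicit threshold rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Neural network: the learned weights exist, but they don't explain why a
# particular answer came back - the black-box problem described above.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)
print("MLP exposes only raw weight matrices:",
      [w.shape for w in mlp.coefs_])
```

The point is not that trees beat networks, only that one of them can show its working when an answer needs to be defended.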

Style of data ‘approach’

In light of this, while companies can really unlock the power of their data, it’s clear that the ‘data approach’ also matters. 

While deep learning is impressive, it’s also a relatively new technology and its lack of visibility makes it difficult, or even dangerous in some scenarios, to leverage.

But, at the same time, it can have distinct advantages over more traditional ML approaches. It really comes down to the application, the skill of the engineering group applying it and being honest about what risk they’re willing to take and on what timeline.