AI In Code Series: SUSE, Databricks, Sinch - Developer keenness & cautions

The Computer Weekly Developer Network has been running a series of guest posts from technical authors who understand the application of Artificial Intelligence (AI) and Machine Learning (ML) not just at the end-user level, but closer to the command line interface for real developer advancement.

The overall take here is (as you might hope) that software application development professionals should embrace as many of these functions as possible.

From low-code tools to code-completion functions to error-checking layers designed to crunch through log file analytics, there are plenty of advantages to be had here.
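To make the log file analytics point concrete, here is a minimal, purely illustrative sketch in Python of the kind of rule-based error-checking pass such a layer would start from (the file name and patterns are hypothetical); an AI-driven tool would add anomaly detection and suggested fixes on top of a baseline like this.

```python
import re
from collections import Counter

# Hypothetical illustration of an "error-checking layer" over logs:
# a plain, rule-based pass that an ML-driven tool would extend.
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|Traceback)\b")

def summarise_errors(log_path: str) -> Counter:
    """Count error-like lines per matched keyword in a log file."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            match = ERROR_PATTERN.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # 'app.log' is a placeholder path for this sketch.
    for keyword, count in summarise_errors("app.log").most_common():
        print(f"{keyword}: {count}")
```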

But, and here is the caveat, the commentators in this space also agree that developers should still go to school and learn how to code by hand in the first instance.

Gerald Pfeifer, CTO, SUSE

CTO at SUSE Gerald Pfeifer agrees that AI is a broad field. 

“Some approaches are more on the deterministic end of the spectrum, some deeply into heuristics and probabilistics. Neither is better per se – it depends. What is at stake? If an AI-based fashion advisor keeps suggesting combinations I don’t enjoy – not much harm done. Plus anyway, maybe there’s some juicy option I never would have thought of? But if lives depend on a complex system, better make sure those responsible really know what is going on, in particular in corner cases, and that the humans who interact with it are aware and properly trained – think Boeing 737 MAX or Air France 447,” said Pfeifer.

He asks us to consider where, and whether, full-blown AI should actually be used in any given use case and urges us to adopt the KISS (keep it simple, stupid) principle where needed.

Clemens Mewald, data science at Databricks

Clemens Mewald is director of product management for machine learning and data science at Databricks. Mewald warns us to think about the IKEA effect when you [the developer] pick an AI stack.

“It is not surprising many enterprises are struggling with adopting ML Platforms, let alone transforming their companies into an ‘AI first’ business model. Those who try to transform often suffer from the IKEA effect and developers should be mindful of this,” said Mewald.

The IKEA effect refers to the phenomenon that people attribute more value to products they helped create. 

“What I am suggesting is that the same effect is predominant in companies with a strong engineering culture. An engineering team that built their own ML platform from the ground up, flawed as it may be, will attribute more value to it,” he added. 

Mewald says that engineers love building things and they love acquiring new skills. As a result, many engineers take a way-too-low-level ML course online. 

“Emboldened by their newly acquired knowledge about the nitty-gritty details of ML, engineers then go out and try to apply them to their enterprise business problems. This is where the IKEA effect comes in. The particular challenge with ML platforms is that, because we lack a dominant design and common form factor… and people don’t know what they don’t know, it is far too easy to think that you can build something meaningful with just a few engineers,” said Databricks’ Mewald.

Pieter Buteneers, ML & AI at Sinch

Sinch’s Pieter Buteneers says that although writing code might seem the holy grail of developer AI, we should not underestimate the impact of automated code tests… and so developers need to be careful about how these are implemented at the coal face.

“Most developers don’t mind writing a few extra lines of code, but testing all the edge cases remains a tedious task. As a developer, I see the biggest breakthroughs coming from test automation where AI figures out for us which edge cases we might not have considered and which security vulnerabilities we might have introduced,” said Buteneers, while at the same time urging caution over the wider toolset being applied.
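As a present-day flavour of that edge-case hunting, the sketch below uses the Hypothesis property-based testing library, our choice of example rather than a tool Buteneers names, to let the machine generate awkward inputs instead of the developer enumerating them by hand.

```python
# A minimal sketch of machine-assisted edge-case discovery with the
# Hypothesis property-based testing library (illustrative choice only).
from hypothesis import given, strategies as st

def safe_divide(numerator: int, denominator: int) -> float:
    """Toy function under test: returns 0.0 instead of raising on zero."""
    if denominator == 0:
        return 0.0
    return numerator / denominator

@given(st.integers(), st.integers())
def test_safe_divide_never_raises(numerator, denominator):
    # Property: the function must return a float for any integer pair,
    # including the denominator == 0 case the generator will find.
    result = safe_divide(numerator, denominator)
    assert isinstance(result, float)
```

Run under pytest, Hypothesis will quickly hit the denominator == 0 case and shrink any failing input to a minimal reproduction, which is the spirit of the automation Buteneers describes.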

He further advises that recent years have brought massive breakthroughs in machine learning, and that the next frontier where AI is really close to breaking the barrier of human performance is Natural Language Processing (NLP). But again, with great power comes great responsibility.

“At least on one rather academic task (SQuAD v1.1), deep learning models have broken that barrier in linking answers to questions. Every few months a new paper comes out where deep learning algorithms inch closer and closer to human performance on more realistic tasks. It is only a matter of time until we have machine learning algorithms that can write text just as well as we do,” he concluded.
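For readers who want to poke at the SQuAD-style task Buteneers mentions, the snippet below is a minimal sketch using the Hugging Face transformers library, an assumed toolchain not named in the piece, whose default question-answering pipeline ships a model fine-tuned on SQuAD data.

```python
# A minimal sketch of SQuAD-style question answering ("linking answers to
# questions") using the Hugging Face transformers library; the model is
# downloaded on first run.
from transformers import pipeline

qa = pipeline("question-answering")  # default model is SQuAD-fine-tuned

context = (
    "SUSE, Databricks and Sinch contributed guest commentary on how "
    "developers should weigh the benefits and risks of AI tooling."
)

result = qa(question="Who contributed guest commentary?", context=context)
print(result["answer"], round(result["score"], 3))
```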