AI In Code Series: O’Reilly - From telemetry tension to sublime pipelines
We use Artificial Intelligence (AI) almost every day, often without even realising it. A large proportion of the apps and online services we connect with have some degree of Machine Learning (ML) and AI in them, providing predictive intelligence, autonomous internal controls and smart data analytics, all designed to make the User Interface (UI) a more fluid and intuitive experience for the end user.
That’s great. We’re glad the users are happy and getting some AI-goodness. But what about the developers?
What has AI ever done for the programming toolsets and coding environments that developers use every day? How can we expect developers to build AI-enriched applications if they don't have the AI advantage at hand at the command line, inside their Integrated Development Environments (IDEs) and across the Software Development Kits (SDKs) they use on a daily basis?
What can AI do for code logic, function direction, query structure and even basic read/write functions? What tools are in development? In this age of components, microservices and API connectivity, how should AI work inside coding tools to steer programmers toward more efficient streams of development, so that they don't have to 'reinvent the wheel' every time?
This Computer Weekly Developer Network series features a set of guest authors who will examine this subject. This post comes from Rachel Roumeliotis in her role as AI and data expert at O'Reilly — the company is known for its media and learning resources, which are created and curated by O'Reilly's own experts and others.
Roumeliotis writes as follows…
So what has AI even really done for the developer?
While AI in general is a hot area, software developers in particular may be asking, "What's in it for me?" I think there are two key areas where AI has the potential and promise to unlock a better-behaved, more useful set of tools for the software developer.
One of the key areas is better, more automated error correction.
The bug hunt is an eternal one; a journey, not a destination. Some bugs are minor enough that even when they crop up, they can be worked around by the end users without necessitating a patch from the developer. Other bugs are serious enough to disable functionality, fault an application, or worse, trash an operating system instance.
AI can help prevent both types of bugs in specific scenarios:
Catching errors before they get checked in
If you pipe telemetry from end user deployments back to your office, you have a large corpus of data to analyse and tease out trends. But wouldn’t it be nice to have a third or fourth set of (electronic) eyes looking over your code for obvious issues?
For internal applications and distributed external applications alike, AI brings the possibility of catching errors and correcting them on the fly, automatically, without involving developers in the bug hunt, saving critical time and expense for the developer and their shop.
Tools already exist that run at commit time, when developers are checking code into a tree, and help identify mistakes, either flagging them or, in some cases, automatically rectifying them before the code is committed to the larger base. These tools use machine learning to analyse years of previous code, understand which errors developers corrected and what code those developers used to correct them… and learn to identify the same classes of error in the future.
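The simplest end of this spectrum can be sketched without any machine learning at all: a static rule of the kind such commit-time checkers learn to apply. The sketch below is a hypothetical, minimal example (not any particular vendor's tool) that scans Python source for a classic recurring mistake, mutable default arguments, and reports the lines to flag before the code is committed.

```python
import ast

def find_mutable_default_args(source: str) -> list[int]:
    """Return line numbers of function definitions that use a mutable
    default argument (list/dict/set) -- a classic bug that commit-time
    checkers flag, since the default is shared across all calls."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    flagged.append(node.lineno)
    return sorted(flagged)

# A pre-commit hook would run this over each staged file and either
# warn or block the commit when the list is non-empty.
snippet = "def good(x, items=None):\n    pass\n\ndef bad(x, items=[]):\n    pass\n"
print(find_mutable_default_args(snippet))
```

An ML-driven tool generalises this idea: instead of hand-written rules, it derives the patterns to flag from the history of fixed bugs in the codebase.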
One report on this approach found that a video game company using AI in this way saved 70% of the cost of fixing bugs after code had shipped.
Finding errors from log output
Developers can use machine learning models to process the telemetry and logging produced by their application, predict the causes of failure and proactively suggest proper workarounds to their users. Disk space exhaustion, memory errors and leaks, garbage collection issues and more are reasonably predictable, which makes them ideal candidates for this scenario.
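As a rough illustration of the idea, here is a minimal, hypothetical sketch: a tiny naive Bayes scorer trained on invented labelled log lines (where the label records whether a failure followed), which then scores new lines by how strongly their tokens suggest an impending failure. A real system would use far richer features and far more data; this only shows the shape of the technique.

```python
from collections import Counter
import math

# Hypothetical labelled history: log lines paired with whether a
# failure followed shortly after (entirely invented data).
history = [
    ("gc pause exceeded threshold", True),
    ("heap usage at 97 percent", True),
    ("disk volume nearly full", True),
    ("request served in 12ms", False),
    ("user login succeeded", False),
    ("cache warmed successfully", False),
]

def train(samples):
    """Count token occurrences per label."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for line, label in samples:
        for tok in line.split():
            counts[label][tok] += 1
            totals[label] += 1
    return counts, totals

def failure_score(line, counts, totals):
    """Log-odds that this line precedes a failure, using naive Bayes
    with add-one smoothing; positive means 'looks like trouble'."""
    vocab = len(set(counts[True]) | set(counts[False]))
    score = 0.0
    for tok in line.split():
        p_fail = (counts[True][tok] + 1) / (totals[True] + vocab)
        p_ok = (counts[False][tok] + 1) / (totals[False] + vocab)
        score += math.log(p_fail / p_ok)
    return score

counts, totals = train(history)
print(failure_score("heap usage climbing", counts, totals) > 0)
```

In practice the prediction would trigger a proactive suggestion to the user (free disk space, restart the service) before the failure actually lands.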
Using cloud infrastructure makes it a relatively simple task to set up a telemetry infrastructure that is highly available and reachable from many points. A nice bonus is that many cloud providers offer data wrangling, machine learning and analysis tools that can help you make sense of the telemetry data right there in the cloud tenant, without having to construct a suite of analysis tools in your own shop. You can even archive the data to cold storage, such as Amazon S3 Glacier, at very low cost.
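On AWS, for instance, the cold-storage archival can be a one-off lifecycle rule rather than custom code. The snippet below is a sketch under assumed names (the bucket name and `telemetry/` prefix are invented) showing the shape of such a rule; applying it would use boto3's `put_bucket_lifecycle_configuration` call.

```python
# Hypothetical S3 lifecycle rule: move objects under the invented
# "telemetry/" prefix to Glacier storage after 90 days.
lifecycle = {
    "Rules": [{
        "ID": "archive-telemetry",
        "Filter": {"Prefix": "telemetry/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
    }]
}

# With boto3 this would be applied as (requires AWS credentials):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-telemetry-bucket", LifecycleConfiguration=lifecycle)
print(lifecycle["Rules"][0]["Transitions"][0]["StorageClass"])
```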
Creating a better pipeline of features
One of the most critical sets of decisions that needs to be made is what software will actually do and which features and functions will be delivered with any given release.
Whether it is an internal tool designed to optimise a business process, or a major release of an enterprise software application, understanding the pipeline and backlog of features against the resources you currently have and can reasonably expect will come online is a familiar dance.
Using AI to understand how users interact with current releases of your application, through intelligent telemetry, natural language-based help and modelling techniques, can shed light on critical decisions about what goes into the next release or sprint and what must stay behind. AI-enabled decision-making helps prioritise overall development effort and design, reducing costs and sometimes shortening shipping times.
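Even before modelling enters the picture, telemetry alone can drive a crude version of this prioritisation. The sketch below is a hypothetical example with invented event names, backlog items and effort figures: it ranks candidate features by observed user demand per unit of estimated effort, the kind of signal a richer AI pipeline would refine.

```python
from collections import Counter

# Hypothetical click-stream telemetry: one event name per interaction.
events = (["export_csv"] * 120 + ["dark_mode_toggle"] * 45 +
          ["bulk_edit"] * 300 + ["share_link"] * 8)

# Invented backlog items mapped to the telemetry event they would
# improve, plus a rough effort estimate in developer-days.
backlog = {
    "faster bulk edit": ("bulk_edit", 5),
    "CSV export templates": ("export_csv", 3),
    "share-link permissions": ("share_link", 8),
}

usage = Counter(events)

def prioritised(backlog, usage):
    """Rank backlog items by observed demand per unit of effort."""
    return sorted(
        backlog,
        key=lambda item: usage[backlog[item][0]] / backlog[item][1],
        reverse=True,
    )

for item in prioritised(backlog, usage):
    event, effort = backlog[item]
    print(f"{item}: {usage[event]} uses / {effort} days")
```

The ranking is only as good as the telemetry and the effort estimates feeding it, which is exactly where the modelling techniques described above add value.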
Rachel Roumeliotis, a vice president of content strategy at O'Reilly Media, leads an editorial team that covers a wide variety of programming topics, ranging from data and AI, to open source in the enterprise, to emerging programming languages. She is a program chair of OSCON. Rachel has been working in technical publishing for 14+ years, acquiring content in many areas, including software development, UX, computer security and AI.