The low-no-code series – Professor Adrian Cheok: An aye-eye on low-code AI

The Computer Weekly Developer Network gets high-brow on low-code and no-code (LC/NC) technologies in an analysis series designed to uncover some of the nuances and particularities of this approach to software application development.

Looking at the core mechanics of the applications, suites, platforms and services in this space, we seek to understand not just how apps are being built this way, but also… what shape, form, function and status these apps exist as… and what the implications are for enterprise software built this way, once it exists in live production environments.

This post is written by Professor Adrian David Cheok AM, Member of the Order of Australia, Professor, i-University, Tokyo, Japan.

Professor Cheok writes as follows…

It could be argued that the current generation of no-code started (at least in the public eye) on September 10, 2021, when the New York Times ran a piece titled ‘A.I. Can Now Write Its Own Computer Code. That’s Good News for Humans.’ describing OpenAI’s freely available Codex AI.

In terms of provenance, Codex is a descendant of GPT-3, one of the most advanced natural language models available… but let’s examine things on a wider scale.

Codex power play

Codex is trained on more than 50 million GitHub repositories, representing the vast majority of the Python code available on GitHub. It can take English-language task descriptions and automatically generate code in numerous programming languages. It can also translate code between programming languages, explain (in English) the purpose of code supplied as input… and, further still, it can state the complexity of the code it generates. 

It also has the capacity to use APIs, permitting it to, for instance, send emails and access data in databases. 
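To make this concrete, here is an illustrative pairing of the kind of English-language prompt Codex accepts and the sort of Python it tends to return – constructed for this article rather than captured from the model itself:

    # Prompt: "Write a function that returns the n largest values in a list."

    def n_largest(values, n):
        """Return the n largest items from values, biggest first."""
        return sorted(values, reverse=True)[:n]

    # Prompt: "Explain the complexity of the function above."
    # A Codex-style explanation would note that the sort dominates:
    # O(k log k) for a list of length k, plus O(n) for the final slice.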

Codex is available via the OpenAI API and it also powers GitHub Copilot… a technology that’s billed as an ‘AI pair programmer’, a reference to pair programming, a technique popular in computing education research and practice. 
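For readers who want a feel for the developer-facing side, the following is a minimal sketch of requesting a code completion through the OpenAI API, assuming the legacy (pre-1.0) openai Python client and a Codex-family model name of that era; model names and the client interface have since changed, so treat it as illustrative rather than definitive:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    response = openai.Completion.create(
        model="code-davinci-002",  # Codex-family model of the period (since retired)
        prompt='"""Write a function to check whether a string is a palindrome."""',
        max_tokens=150,
        temperature=0,  # low temperature keeps code output close to deterministic
    )

    print(response.choices[0].text)  # the generated code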

The NYT article supplied numerous samples of Codex output, including ‘make snowfall on a black background’ – for which Codex produced JavaScript code to draw a black rectangle with randomly (but appropriately) sized and located white shapes, which then move downward at an expected speed. 

Codex was also mentioned in a September 2021 Lex Fridman podcast, in an interview with Donald Knuth. Fridman said that, “This puts the human in the seat of solving problems as opposed to writing from scratch”. 

Lack of AI code control?

Knuth was sceptical about AI-generated code in terms of its potential to lead to a loss of control as systems are built on auto-generated code, largely because we are talking about executing code whose behaviour won’t always be fully understood.

Professor Adrian David Cheok: Don’t worry, even with low-code AI, we still need humans… for now at least.

This generation of software should be of prime interest to all computer programmers. 

What we’re talking about here is a tool that can take a casually described English-language problem specification (i.e. much like an ordinary examination question) and return often-correct, well-structured code that could pass as human-written. 

It can also translate code into different languages, return complexity details alongside code answers and offer either an entire program or a standalone function to complete a designated task, depending on whether the input states ‘write a program to’ or ‘write a function to’. Either way, it’s pretty fast. 
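As a rough illustration of that distinction (the prompts and the sample completion below are constructed for this article, not actual Codex output), the phrasing chosen steers the scope of what comes back:

    program_prompt = "Write a program to read numbers from input and print their average."
    # -> typically yields a complete script with its own input handling and output

    function_prompt = "Write a function to compute the average of a list of numbers."
    # -> typically yields a standalone function, along the lines of:

    def average(numbers):
        """Return the mean of a non-empty list of numbers."""
        return sum(numbers) / len(numbers)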

The genie is already out

We are not able to put the genie back in the bottle. AI is already routinely producing answers of human-like quality.

We can assume that in the near future such tools will be able to generate answers to increasingly complex computing problems, which may be used by everyone from expert programmers to engineers and students. 

Results show that Codex performs well when tasked with most of the code-writing questions set for students in ordinary first-year programming exams. It also performs moderately well on variations of the classic Rainfall Problem. 
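For readers unfamiliar with it, the Rainfall Problem is a classic programming-education exercise; one common formulation (exact specifications vary between studies) asks for the average of the non-negative values in a sequence of readings, stopping at a sentinel value. A minimal Python solution of the sort being tested looks like this:

    def rainfall(readings):
        """Average the non-negative readings seen before the 99999 sentinel."""
        total, count = 0, 0
        for value in readings:
            if value == 99999:  # sentinel: stop processing
                break
            if value >= 0:  # ignore invalid (negative) readings
                total += value
                count += 1
        return total / count if count else 0

    print(rainfall([12, -3, 7, 99999, 40]))  # -> 9.5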

The answers generated by Codex show some variation, which in itself could make it tough for teachers to detect its use. So we may get to a point where computing will change substantially in the next decade due to tools like Codex. 

However, all isn’t lost. 

While tools like Codex present clear threats and challenges to academic integrity, they also present notable opportunities to rethink current curricula. We expect a good deal of future work on tools like Codex to appear globally. 

Anticipating this shift, we put the following sentence into GPT-3: ‘The robots are taking over.’ It returned… ‘Yes, you read that right. The robots are coming for us.’ We have been warned.