Why we need a new ethical standard for the development and use of robots

This is a guest blogpost by Oded Karev, General Manager of NICE Advanced Process Automation

In the myth of Daedalus and Icarus, father and son seek to escape from Crete on wings of Daedalus’ making; Icarus flies too close to the sun, falls into the sea and drowns. In true Greek style, though, death is not the real punishment. That is saved for Icarus’ father – the creator of the wings. The gods punish Daedalus for the lack of consideration he gives to the consequences of his invention. It is his hubris that leads to his child’s demise and, in Greek eyes, a punishment worse than death: eternal grief.

In robotics, peak hubris is the common movie plot: human creates AI-powered robot, human loses control of AI-powered robot. Fortunately, we are not in this dystopian future, but that is not to say robotics in its current form can do no harm. Generally speaking, we are at a stage where most are designing and building robots with net good in mind. Some robotics are now even commonplace: they help answer customer questions and flag potential financial fraud. Gartner predicts that 70% of white-collar workers will interact daily with digital communication platforms powered by bots by 2022. Yet despite the ubiquity of robotics in our lives, no regulatory framework exists at a national or supranational level.

In a rare meeting of interests, the technology industry and regulators have converged on a common path: asking for more oversight and regulation. Both recognise the manifest dangers of not addressing the challenges posed by robotics and AI, and yet no one with the requisite level of clout has taken up the gauntlet and driven the change required.

Intended as a rallying call to both industry and government, we at NICE recently unveiled our Robo Ethical Framework, promoting responsibility and transparency in the design, creation and deployment of AI-powered robots. Building on Asimov’s Three Laws of Robotics, we’ve produced five principles of our own that underlie every interaction with process robots. From planning to implementation, we live and breathe this doctrine to drive ethically sound human-robot partnerships in the workplace.

One of our clients – a US multinational financial services company – uses a feature that splits how role authorisation is delivered. By splitting the roles and responsibilities of the automation contributors, it becomes very difficult to manipulate the authorisation process in a biased way without being exposed. In this instance, we have designed a product that can expose fraud, benefit a client, and help meet financial services regulations. But we are not complying with any ethical robotics regulation in the process, for the simple reason that there isn’t one.
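The role-splitting idea described above is, in essence, a separation-of-duties control. As a minimal sketch only – the names `AutomationChange`, `approve` and `is_authorised` are our own illustration, not the product’s actual API – it might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AutomationChange:
    """A proposed change to a process robot, awaiting authorisation."""
    change_id: str
    author: str
    approvals: set = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        # Separation of duties: the contributor who built the automation
        # may not also authorise it.
        if reviewer == self.author:
            raise PermissionError("author cannot authorise their own change")
        self.approvals.add(reviewer)

    def is_authorised(self, required_approvals: int = 2) -> bool:
        # Deployment requires multiple independent approvers, so no single
        # contributor can bias the authorisation process unnoticed.
        return len(self.approvals) >= required_approvals


change = AutomationChange("bot-42", author="alice")
change.approve("bob")
change.approve("carol")
print(change.is_authorised())
```

Because every approval is recorded against a named reviewer, any attempt to manipulate the process leaves a trace – which is what makes the bias hard to hide.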

Although our Ethical Framework is shared with every customer along with their robotic licence, as vendors we ultimately have no control over how our customers use the robots we design. At NICE we can say in good conscience that we are doing something, but that is not enough. Change needs to happen quickly, led by national governments and supranational organisations. Pioneers in robotics and artificial intelligence will be glad that life does not imitate art. But the allegory of Daedalus and Icarus should remind us that history judges those who create and then fail to mind, or in this case regulate, their creations.