
Computer says no. Will fairness survive in the AI age?

New forms of regulation will be needed to safeguard against the risks posed by AI

Hollywood has colourful notions about artificial intelligence (AI). The popular image is a future where robot armies spontaneously turn to malevolence, pitting humanity in a battle against extinction.

In reality, the risks posed by AI today are more insidious and harder to unpick. They are often a by-product of the technology's seemingly endless application in modern society and its increasing role in everyday life, perhaps best highlighted by Microsoft's latest multi-billion-dollar investment in ChatGPT-maker OpenAI.

Either way, it's unsurprising that AI generates so much debate, not least in how we can build regulatory safeguards to ensure we master the technology, rather than surrender control to the machines.

Right now, we tackle AI using a patchwork of laws and regulations, as well as guidance that doesn't have the force of law. Against this backdrop, it's clear that current frameworks are likely to change – perhaps significantly.

So, the question that demands an answer: what does the future hold for a technology that is set to refashion the world?

Ethical dilemmas

As the application of AI tools spreads rapidly across industries, concerns have inevitably been raised about these systems' ability to detrimentally – and unpredictably – affect someone's fortunes.

A colleague observed recently that there's an increasing appreciation among businesses and regulators about the potential impacts of AI systems on individuals' rights and wellbeing.

This growing awareness is helping identify the risks, but we haven't yet moved into a period where there's consensus about what to do about them. Why? In many cases, because those risks are ever-changing and hard to foresee.

Often, the same tools used for benign purposes can be deployed with malign intent. Take facial recognition: the same technology that applies humorous filters on social media can be used by oppressive regimes to restrict citizens' rights.

In short, risks arise not only from the technology, but from its application. And with a technology like AI, where the number of new applications is growing exponentially, solutions that fit today might not fit tomorrow.

A prominent example is the Australian Government's Robodebt scheme, which used an unsophisticated AI algorithm that automatically, and in many cases erroneously, sent debt notices to welfare recipients who it determined had received overpayments.

Though intended as a cost-saving exercise, the scheme's persistent attempts to recover debts that were not owed, or were incorrectly calculated, led many to raise concerns over its impact on the physical and mental health of debt notice recipients.
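The mechanism at fault, as widely reported, was income averaging: annual earnings data was smoothed evenly across the year's fortnights, treating casual and seasonal workers as if they had earned the same amount in every pay period. The sketch below is a heavily simplified, hypothetical illustration of how that approach can manufacture a debt for someone who was paid correctly; the payment rates, thresholds and earnings figures are all invented.

```python
# Hypothetical, simplified illustration of the income-averaging flaw widely
# reported in the Robodebt scheme. All figures and rules are invented.

FORTNIGHTS_PER_YEAR = 26
FULL_BENEFIT = 550.0        # hypothetical fortnightly welfare payment
INCOME_FREE_AREA = 300.0    # fortnightly earnings allowed before any reduction
TAPER_RATE = 0.6            # reduction per dollar earned above the free area


def entitlement(fortnightly_income: float) -> float:
    """Benefit payable for a fortnight, given income actually earned in it."""
    excess = max(0.0, fortnightly_income - INCOME_FREE_AREA)
    return max(0.0, FULL_BENEFIT - TAPER_RATE * excess)


# A casual worker earns nothing for 13 fortnights (and correctly claims the
# full benefit), then $1,200 a fortnight for the rest of the year (claiming
# nothing).
actual_income = [0.0] * 13 + [1200.0] * 13
paid_correctly = sum(entitlement(i) for i in actual_income[:13])

# Income averaging spreads the annual total evenly, as if the worker had
# earned $600 in every fortnight -- including the ones they claimed for.
averaged = sum(actual_income) / FORTNIGHTS_PER_YEAR
assessed = entitlement(averaged) * 13  # only 13 fortnights were claimed

print(f"Benefit correctly paid:       ${paid_correctly:,.2f}")
print(f"Entitlement under averaging:  ${assessed:,.2f}")
print(f"Phantom 'overpayment' raised: ${paid_correctly - assessed:,.2f}")
```

In this toy example, averaging makes it look as though the worker earned income during the very fortnights they claimed support, so the system raises a debt running to thousands of dollars that was never actually owed.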

Add to this the further complication of ‘black box’ AI systems, which can conceal processes or infer incomprehensible patterns, making it very difficult to explain to individuals how or why an AI tool led to an outcome. Absent this transparency, the ability to identify and challenge outcomes is diminished, and any route to redress effectively withdrawn.
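To make the transparency point concrete, the following hypothetical sketch contrasts a bare "computer says no" outcome with the same decision decomposed into per-factor reasons. The weights, threshold and applicant details are invented; many black-box systems expose only the first kind of output, which is precisely what makes their outcomes hard to challenge.

```python
# A minimal, invented scoring example: the same decision shown first as an
# unexplained yes/no, then broken down into reasons an individual could check.

FEATURE_WEIGHTS = {            # hypothetical credit-style scoring weights
    "years_at_address": 12.0,
    "missed_payments": -85.0,
    "income_thousands": 3.5,
}
BASELINE = 40.0
APPROVAL_THRESHOLD = 100.0

applicant = {"years_at_address": 2, "missed_payments": 1, "income_thousands": 28}

contributions = {f: FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS}
score = BASELINE + sum(contributions.values())

# Black-box style output: an unexplained decision.
print("Decision:", "approved" if score >= APPROVAL_THRESHOLD else "declined")

# Transparent output: per-factor contributions behind the same decision.
for feature, points in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {points:+.1f} points")
print(f"  baseline: {BASELINE:+.1f} points -> total {score:.1f} "
      f"(threshold {APPROVAL_THRESHOLD})")
```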

Filling the gap

Another complication is that in many jurisdictions, these risks are not addressed by a single AI-related law or regulation. They are instead subject to a patchwork of existing laws covering areas such as employment, human rights, discrimination, data security and data privacy.

While none of these specifically target AI, they can still be used to address its risks in the short to medium term. However, by themselves, they are not enough.

A number of risks fall outside these existing laws and regulations, so while lawmakers wrestle with the far-reaching ramifications of AI, industry bodies and other groups are driving the adoption of guidance, standards and frameworks – some of which might become standard industry practice even without the force of law.

One illustration is the US National Institute of Standards and Technology's (NIST) AI risk management framework, which is intended "for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems".

Similarly, the International Organisation for Standardisation (ISO) joint technical committee for AI is working to add more than 20 as-yet-unpublished standards to the 16 non-binding standards it has already produced.

The current focus of many of these initiatives on the ethical use of AI is squarely on fairness, and bias is a particularly important element. The algorithms at the centre of AI decision-making may not be human, but they can still absorb the prejudices that colour human judgement.
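One simple, if crude, way to express fairness quantitatively is demographic parity: comparing the rate at which a model grants favourable outcomes to different groups. The sketch below applies it to invented recruitment-style data; the groups, decisions and the four-fifths threshold are illustrative assumptions rather than requirements of any particular law.

```python
# A minimal sketch of a demographic parity check on hypothetical hiring
# decisions produced by a screening model. Data and threshold are invented.

from collections import Counter

# (group, model_decision) pairs: True = shortlisted for interview
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_a", True), ("group_b", False), ("group_b", False), ("group_b", True),
    ("group_b", False), ("group_b", False),
]

def selection_rate(group: str) -> float:
    """Fraction of applicants in a group that the model shortlists."""
    totals = Counter(g for g, _ in decisions)
    selected = Counter(g for g, d in decisions if d)
    return selected[group] / totals[group]

rate_a = selection_rate("group_a")
rate_b = selection_rate("group_b")
disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Selection rate, group A: {rate_a:.0%}")
print(f"Selection rate, group B: {rate_b:.0%}")
print(f"Disparate impact ratio:  {disparate_impact:.2f}")
if disparate_impact < 0.8:  # illustrative 'four-fifths' threshold only
    print("Potential adverse impact -- the model's outcomes warrant review.")
```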

Thankfully, policymakers in the EU appear to be alive to this risk. The bloc's draft Artificial Intelligence Act addresses a range of issues around algorithmic bias, arguing that the technology should be developed to avoid repeating “historical patterns of discrimination” against minority groups, particularly in contexts such as recruitment and finance.

It is expected many other jurisdictions will look to tackle this issue head-on in future AI laws, even if views on how to balance regulation and innovation in practice differ widely from country to country.

The race to regulate

What is interesting is how the EU looks to be putting the rights of its citizens at the centre of its approach, in apparent contrast to the more laissez-faire attitude to technology and regulation typically adopted in the US.

The European Commission further supplemented the draft Act in September 2022, with proposals for an AI Liability Directive and revised Product Liability Directive that would streamline compensation claims where individuals suffer AI-related damage, including discrimination.

In comparison, some commentators argue that it is currently unclear where the UK wants to go. Its stated desire to be a global leader in AI regulation hasn't really come through, partly because of the inherent tension between deregulating after Brexit and bringing other countries along by creating new UK regulations.

There are, however, some signs of the UK seeking global leadership in this space. The Information Commissioner's Office (ICO) recently fined software business Clearview AI £7.5 million after the company scraped online images of individuals into a global database for its somewhat controversial facial recognition tool.

Clearview has since launched an appeal. But, in addition to underlining the increasing scrutiny of how even publicly available biometric data is used, the ICO's action sends a clear message to the market: UK regulators will act swiftly to address the risks of AI where they deem it necessary.

Out of the box

The next five years will likely mark an implementation phase in which soft guidance morphs into hard law, potentially building on progress already made through the OECD AI principles and UNESCO Recommendation on the Ethics of AI. But many observers expect it to be much longer before the emergence of something that resembles a comprehensive global AI framework.

As much as some in the industry will chafe at intrusive oversight from policymakers, as individuals' appreciation of the technology's ethical implications expands alongside its application, it is hard to see how businesses can retain public confidence without robust and considered AI regulation in place.

In the meantime, discrimination and bias will continue to command attention, demonstrating the most immediate risks of this technology being applied not only with ill intent, but also with a simple lack of diligence around unintended consequences.

But such factors are ultimately just pieces of a much larger puzzle. Industry, regulators and professional advisers face years of piecing together the full picture of legal and ethical risks if we want to remain the master of this technology, and not the other way around.
