AI experts question tech industry’s ethical commitments

The massive proliferation of ethical frameworks for artificial intelligence has done little to change how the technology is developed and deployed, with experts questioning the tech industry’s commitment to making it a positive social force

From healthcare and education to finance and policing, artificial intelligence (AI) is becoming increasingly embedded in people’s daily lives.

Despite being posited by advocates as a dispassionate and fairer means of making decisions, free from the influence of human prejudice, the rapid development and deployment of AI has prompted concern over how the technology can be used and abused.

These concerns include how it affects people’s employment opportunities, its potential to enable mass surveillance, and its role in facilitating access to basic goods and services, among others.

In response, the organisations that design, develop and deploy AI technologies – often with limited input from those most affected by their operation – have attempted to quell people’s fears by setting out how they are approaching AI in a fair and ethical manner.

Since around 2018, this has led to a deluge of ethical AI principles, guidelines, frameworks and declarations being published by private organisations and government agencies around the world.

However, ethical AI experts say the massive expansion of AI ethics has not necessarily led to better outcomes, or even a reduction in the technology’s potential to cause harm.

The emerging consensus from researchers, academics and practitioners is that, overall, such frameworks and principles have failed to fully account for the harms created by AI, because they have fundamentally misunderstood the social character of the technology, and how it both affects, and is affected by, wider political and economic currents.

They also argue that to bridge the gap between well-intentioned principles and practice, organisations involved in the development and deployment of AI should involve unions, conduct extensive audits and submit to more adversarial regulation.

Meanwhile, others say that those affected by AI’s operation should not wait for formal state action, and should instead consider building collective organisations to challenge how the technology is used and help push it in a more positive direction.

Abstract, contested concepts

According to a 2019 paper published by Brent Mittelstadt, data ethicist and director of research at the Oxford Internet Institute (OII), the vast majority of AI principles are highly abstract and ambiguous, to a point where they are almost useless in practice.

He says, for example, that although organisations have presented their high-level principles and value statements as “action-guiding”, in practice they “provide few specific recommendations and fail to address fundamental normative and political tensions embedded in key concepts”.

Luke Munn, a research fellow at the Digital Cultures & Societies Hub at the University of Queensland, has also been highly critical, similarly arguing in a paper in August 2022 that there “is a gulf between high-minded ideals and technological development on the ground”.

“These are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas,” he wrote.

Speaking to Computer Weekly about the proliferation of AI ethics, Sandra Wachter, a professor of technology and regulation at the OII, makes similar arguments about the highly abstract nature of ethical AI principles, which she says makes them almost impossible to implement in any meaningful way.

Noting a number of common principles that appear in some form in almost every framework – such as fairness, transparency, privacy and autonomy – Wachter says that although nobody can really disagree with these on a surface level, operationalising them is another matter.

“Nobody’s going to say, ‘I want racist, sexist, unfair, privacy-invasive, fully autonomous killer robots’, but they’re essentially contested concepts,” she says. “We’ll both agree fairness is a good thing, but what you and I think about fairness probably could not be further apart.”

Wachter says there will also inevitably be tension between different principles in different contexts, adding: “At the end of the day, these principles are fine, but when confronted with a situation where you have to make a decision, well, then… you’re going to have to make a trade-off – transparency versus privacy, fairness versus profitability, explainability versus accuracy. There’s probably not a situation where every principle can be obliged or complied with.”

In October 2022, Emmanuel R Goffi, co-founder of the Global AI Ethics Institute, published an article in The Yuan criticising the “universalist” approach to ethical AI, which he argues is anything but, because it is all decided “by a handful of Western stakeholders promoting their vested interests”, and otherwise imposes uniformity where there should be cultural diversity.

“The problem with this kind of universalism is manifold,” wrote Goffi. “First, even though it stems from goodwill, it has essentially turned into an ideology. Consequently, it has become almost impossible to question its relevance and legitimacy. Second, the word ‘universal’ often gets improperly used to shape perceptions. This means that universal values are commonly presented as values that are shared by a majority, even though ‘universal’ and ‘majority’ are far from being the same thing.

“Third, universalism is often presented as being morally acceptable, and as a desirable counterweight to relativism. Yet the moral absolutism that is breaking at the horizon is not any more desirable than absolute relativism. Quite the contrary!”

All bark, no bite

Aside from the overt ambiguity and flawed appeals to universality, Alex Hanna, director of research at the Distributed AI Research Institute (DAIR), says these ethical frameworks are also typically non-binding, with the threat of bad PR as the main incentive for companies to act within the spirit of the principles they outline.

“I think it would be helpful to have some kind of independent body, like a regulator, that could have hands-on access to the model to see the inputs and examine the outputs, and to test it adversarially,” she says. “The only incentive that companies have for these things not to blow up is the bad PR they’re going to get, and even then bad PR doesn’t typically affect their stock price or the market valuation.”

The “enforcement gap” is also highlighted by Gemma Galdon-Clavell, director of algorithmic auditing firm Eticas, who says there are no incentives for tech firms to be ethical in spite of such frameworks proliferating, because “you don’t pay a price for being a bad player”.

She says technological development in recent decades has been dominated by the idiosyncratic ideas of Silicon Valley, whereby innovation is defined very narrowly by the primacy of scalability above all else.

“The Silicon Valley model has basically taken over data innovation and is limiting the ability of other sorts of innovation around data to emerge, because if you’re not about ‘moving fast and breaking things’, if you’re not prioritising profit above everything else, if you don’t have a scalable product, then you’re not seen as innovative,” says Galdon-Clavell. She adds that this has led to a situation where AI developers, in order to secure funding, promise grand things of the technology that simply cannot be achieved.

“It’s allowed us to make very quick progress on some things, but it’s got to a point where it’s being harmful,” she says. “When we audit systems [at Eticas], what we find behind the flashy systems that are advertised as the future of thought are very rudimentary systems.”

But she adds that more AI-powered systems should be rudimentary, and even “boring”, because the algorithms involved are simpler and make fewer mistakes, thus reducing the possibility of the systems producing negative social impacts.

Relating it back to the development of vaccines during the Covid-19 pandemic, Galdon-Clavell adds: “Innovation only makes sense if it goes through systems and procedures that protect people, but when it comes to technological innovation and data-related innovation, for some reason, we forget about that.”

Wachter adds that although the principles published so far provide a good starting point for discussions around AI ethics, they ultimately fail to deal with the core problems around the technology, which are not technical, but embedded directly into the business models and societal impetuses that dictate how it is created and used.

A technology of austerity and categorisation

Although the history of AI can be traced back to at least the 1950s, when it was formalised as a field of research, widespread real-world applications of the technology only began to emerge at the start of the 2010s – a time of global austerity immediately following the Great Recession.

Dan McQuillan, a lecturer in creative and social computing and author of Resisting AI: an anti-fascist approach to artificial intelligence, says it is no surprise that AI started to emerge at this particular historical juncture.

“It can’t escape the conditions in which it is emerging,” he says. “If you look at what AI does, it’s not really a productive technology – it’s a mode of allocation. I would even say it’s a mode of rationing, in a sense, as its way of working is really around scarcity.

“It reflects its times, and I would see it as an essentially negative solution, because it’s not actually solving anything, it’s just coming up with statistically refined ways to divide an ever smaller pie.”

Hanna also characterises AI as a technology of austerity, the politics of which she says can be traced back to the Reagan-Thatcher era – a period dominated by what economist David Harvey describes as “monetarism and strict budgetary control”.

Hanna adds: “The most common, everyday uses of AI involve predictive modelling to do things like predict customer churn or sales, and then in other cases it’s offered as a labour-saving device by doing things like automating document production – so it fits well to the current political-economic moment.”

For Wachter, the “cutting costs and saving time” mindset that permeates AI’s development and deployment has led practitioners to focus almost exclusively on correlation, rather than causation, when building their models.

“That spirit of making something quick and fast, but not necessarily improving it, also translates into ‘correlation is good enough – it gets the job done’,” she says, adding that the logic of austerity that underpins the technology’s real-world use means that the curiosity to discover the story between the data points is almost entirely absent.

“We don’t actually care about the causality between things,” says Wachter. “There is an intellectual decline, if you will, because the tech people don’t really care about the social story between the data points, and social scientists are being left out of that loop.”

She adds: “Really understanding how AI works is actually important to make it fairer and more equitable, but it also costs more in resources. There is very little incentive to figure out what is going on [in the models].”

Taking the point further, McQuillan describes AI technology as a “correlation machine” that, in essence, produces conspiracy theories. “AI decides what’s in and what’s out, who gets and who doesn’t get, who is a risk and who isn’t a risk,” he says. “Whatever it’s applied to, that’s just the way AI works – it draws decision boundaries, and what falls within and without particular kinds of classification or identification.

“Because it takes these potentially very superficial or distant correlations, because it datafies and quantifies them, it’s treated as real, even if they are not.”

Describing AI as “picking up inequalities in our lives and just transporting them into the future”, Wachter says a major reason why organisations may be hesitant to properly rectify the risks posed by their correlation-based AI models is that “under certain circumstances, unfortunately, it is profitable to be racist or sexist”.

Relating this back to the police practice of stop and search, whereby officers use “unconscious filters” to identify who is worth stopping, McQuillan says: “It doesn’t matter that that’s based on spurious correlations – it becomes a fact for both of those people, particularly the person who’s been stopped. It’s the same with these [AI] correlations/conspiracies, in that they become facts on the ground.”

While problems around correlation versus causality are not new, and have existed within social sciences and psychology for decades, Hanna says the way AI works means “we’re doing it at much larger scales”.

Using the example of AI-powered predictive policing models, Hanna says the data that goes into these systems is already “tainted” by the biases of those involved in the criminal justice system, creating pernicious feedback loops that lock people into being viewed in a certain way.

“If you start from this place that’s already heavily policed, it’s going to confirm that it’s heavily policed,” she says, adding that although such predictive policing systems, and AI generally, are advertised as objective and neutral, the pre-existing biases of the institutions deploying it are being hyper-charged because it is all based on “the faulty grounds of the [historic] data”.
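The feedback loop Hanna describes can be sketched in a few lines of code. The toy simulation below is entirely hypothetical and not based on any real predictive policing system: it assumes two districts with identical underlying offence rates but a biased historical record, and shows how an allocation rule that follows past recorded incidents simply reproduces that bias.

```python
# Toy illustration of a predictive policing feedback loop.
# Hypothetical numbers only: two districts with the SAME underlying
# offence rate, but district A starts with more recorded incidents
# because it has historically been patrolled more heavily.
true_offence_rate = {"A": 0.1, "B": 0.1}   # identical ground truth
recorded = {"A": 100, "B": 50}             # biased historical record

for year in range(5):
    total = sum(recorded.values())
    # "Predictive" allocation: patrols follow past recorded incidents
    patrols = {d: 1000 * recorded[d] / total for d in recorded}
    # More patrols mean more offences are observed and added to the record,
    # so the skewed data confirms itself year after year
    for d in recorded:
        recorded[d] += int(patrols[d] * true_offence_rate[d])
    print(year, {d: round(p) for d, p in patrols.items()})
```

Even though both districts offend at the same rate, the model never allocates patrols evenly, because the only evidence it sees is the record its own allocations helped produce.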

Given AI’s capacity to categorise people and assign blame – all on the basis of historically biased data that emphasises correlation rather than any form of causality – McQuillan says the technology often operates in a way that is strikingly similar to the politics of far-right populism.

“If you take a technology that’s very good at dividing people up and blaming some of them, through imposing quite fixed categories on people, that becomes uncomfortably close to a kind of politics that’s also independently becoming very popular, which is far-right populism,” he says. “It operates in a very similar way of ‘let’s identify a group of people for a problem, and blame them’. I’m not saying AI is fascist, but this technology lends itself to those kinds of solutions.”

In October 2022, Algorithm Watch, a non-profit research and advocacy organisation committed to analysing automated decision-making systems, published a report on how the Brothers of Italy – a neo-fascist political party whose leader, Giorgia Meloni, had recently become Italy’s prime minister – previously proposed using AI to assign young people mandatory jobs.

Speaking with Algorithm Watch, sociologist Antonio Casilli noted that similar systems had been proposed by other European governments, but none of them was really effective at fixing unemployment issues: “This kind of algorithmic solution to unemployment shows a continuum between far-right politicians in Italy, politicians in Poland and centre-right politicians like Macron,” he said.

“They are different shades of the same political ideology. Some are presented as market-friendly solutions, like the French one; others are presented as extremely bureaucratic and boring, like the Polish one; and the Italian proposal, the way it is phrased, is really reactionary and authoritarian.”

AI’s guilty conscience

Apart from failing to grapple with these fundamental logics of AI and their consequences, those Computer Weekly spoke to said virtually none of the published ethical frameworks or principles takes into account the fact that the technology could not exist without extensive human labour.

Rather than being trained by machine processes, as many assume or claim, AI algorithms are often trained manually through data labelling carried out by people working in virtual assembly lines.

Known as clickwork or microwork, this is frequently defined by low wages, long hours, poor conditions and a complete geographical separation from other workers.

McQuillan says: “I doubt that AI practitioners would think of it this way, but I would say that AI would be unimaginable if it wasn’t for the decades-long destruction of the labour movement. AI would just not be thinkable in the way that it is at the moment.” He adds that he is “shocked” that none of the ethical frameworks take into account the human labour that underpins the technology.

“I think they themselves think of it as an unfortunate vestigial effect of AI’s evolution that it just awkwardly happens to depend on lots of exploitative clickwork,” he says.

Hanna says the framing of such labour by employers as an easy source of supplemental income or a fun side-hustle also helps to obfuscate the low pay and poor working conditions many face, especially those from more precarious economic situations across the Global South.

“In the discussion around AI ethics, we really don’t have this discussion of labour situations and labour conditions,” she says. “That is a giant problem, because it allows for a lot of ethics-washing.”

Hanna says part of the issue is the fact that, like Uber drivers and others active throughout the gig economy, these workers are classified as independent contractors, and therefore not entitled to the same workplace protections as full-time employees.

“I think unions definitely have a role to play in raising labour standards for this work, and considering it to even be work, but at the same time it’s difficult,” she says. “This is an area many unions have not paid attention to because it is hard to organise these individuals who are so [geographically] spread out. It’s not impossible, but there are lots of structural designs that prevent them from doing so.”

Collective approaches

Using the example of Google workers challenging the company’s AI-related contracts with the Israeli government, Hanna says that although Google’s ethical AI principles did not stop it from taking on the controversial contract in the first place, the fact that they were openly published meant they were useful as an organising tool for unions and others.

A similar sentiment is expressed by Wachter, who says unions can still play a powerful role in strengthening legal rights around the gig economy and industrial action, despite the globally “dispersed and isolated” nature of microwork making collective action more difficult.

She adds that because there is a distinct lack of corporate ethical responsibility when it comes to AI, companies must be forced into taking action, which can be done through better laws and regulation, and regular audits.

“I am impressed [with the Google workers] and deeply respect those people, and I am grateful they did that, but the fact we need them means policy is failing us,” says Wachter. “Do I need to rely on people risking their social capital, their financial capital, just to do something that is ethical? Or is it not the job of a legislator to protect me from that?

“You also need to have people who have oversight and can audit it on a regular basis, to make sure that problems don’t come in at a later stage. I think there is probably a hesitancy because it would mean changing current business practices, which are making a lot of money.”

McQuillan, however, is more sceptical of the effectiveness of improved laws and regulations, arguing instead for an explicit rejection of the liberal notion that laws provide a “neutral and objective rule set that allows everyone to compete equally in society”, because it often projects the idea of a level playing field onto “a situation that’s already so asymmetric, and where the power is already so unevenly distributed, that it actually ends up perpetuating it”.

Instead, on top of workplace self-organising like that of the Google staff, McQuillan suggests people could organise citizen assemblies or juries to rein in the use of AI in specific domains – such as the provision of housing or welfare services – so that they can challenge AI themselves in lieu of formal state enforcement.

“Because AI is so pervasive, because you can apply it to pretty much anything – self-organising assemblies of ordinary people around particular areas – it is a good way to organise against it,” he says. “The way to tackle the problems of AI is to do stuff that AI doesn’t do, so it’s about collectivising things, rather than individualising them down to the molecular level, which is what AI likes to do.”

McQuillan adds that this self-organising should be built around principles of “mutual aid and solidarity”, because AI is a “very hierarchical technology” which, in a social context, leads to people being divided up along lines of “good and bad”, with very little nuance in between.

Hanna also takes the view that a more participatory, community-informed approach to AI is needed to make it truly ethical.

Comparing the Montreal Declaration for Responsible AI produced by the University of Montreal in 2018 to the work of the Our Data Bodies collective, Hanna says the former started from the position that “we’re going to develop AI, what’s a responsible way [to do that]?” while the latter started from the position of how to defend people and their information against datafication-as-a-process.

“The individuals in that project were not focused on AI, were not AI researchers – they were organisers with organising roots in their own cities,” she says. “But they were focusing on what it would take to actually defend against all the data that gets scraped in and sucked up to develop these tools.

“Another example is Stop LAPD Spying, which starts from a pretty principled spot of, as the name suggests, [opposing] the datafication and surveillance by the Los Angeles Police Department. These aren’t starting from AI, they’re starting from areas of community concern.

“We know our data is being gathered up, we anticipate that it’s being used for either commercial gain or state surveillance. What can we do about that? How can we intervene? What kind of organising collectives do we need to form to defend against that? And so I think those are two very different projects and two very different horizons on what happens in the future.”

Practical steps to take in lieu of wider change

So what can organisations do in the meantime to reduce the harms caused by their AI models? Galdon-Clavell says it is important to develop proactive auditing practices, which the industry still lacks.

“If you have regulation that says your system should not discriminate against protected groups, then you need to have methodology to identify who those protected groups are and to check for disparate impacts – it’s not that difficult to comply, but again the incentives are not there,” she says.
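As a rough illustration of the kind of check Galdon-Clavell describes, the sketch below uses hypothetical column names and made-up decision data to compute approval rates per protected group, flagging any group whose rate falls below 80% of the best-treated group’s rate. The “four-fifths” threshold is borrowed from US employment practice and is offered here only as one possible, and contested, heuristic.

```python
import pandas as pd

# Hypothetical audit data: one row per automated decision,
# with the protected attribute and the model's outcome
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   1,   0,   0 ],
})

# Selection (approval) rate per protected group
rates = decisions.groupby("group")["approved"].mean()

# Flag groups whose rate falls below 80% of the best-treated group's rate
ratios = rates / rates.max()
flagged = ratios[ratios < 0.8]

print(rates)
print("Potential disparate impact:", list(flagged.index))
```

A real audit would go much further – checking statistical significance, intersectional groups and the provenance of the data – but even this minimal test requires knowing who the protected groups are and recording the model’s decisions, which is exactly the groundwork Galdon-Clavell says is often missing.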

The main problem that Eticas comes across during algorithmic audits of organisations is how the model was built, says Galdon-Clavell: “No one documents, everything is very much trial and error – and that’s a problem.”

She adds: “Just documenting why decisions are made, what data you are using and for what, what procedures have been followed for approving certain decisions or rules or instructions that were built into the algorithm – if we had all that in writing, then things would be a lot easier.”
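What such documentation could look like is sketched below. The fields and values are purely illustrative, not an Eticas or industry template; the point is simply that design decisions, data choices and sign-offs get written down as they happen.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative record of a single design decision made while building a model.
# Field names are hypothetical placeholders.
@dataclass
class ModelDecisionRecord:
    decision: str            # what was decided
    rationale: str           # why it was decided
    data_used: list          # which datasets or fields it relies on
    approved_by: str         # who signed it off
    date_recorded: date = field(default_factory=date.today)

log = [
    ModelDecisionRecord(
        decision="Exclude postcode from input features",
        rationale="Acted as a proxy for protected characteristics in earlier tests",
        data_used=["applications_2021.csv"],
        approved_by="model governance board",
    ),
]
print(log[0])
```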

Galdon-Clavell also says that auditing should take a holistic systems approach, rather than a model-specific approach: “AI cannot be understood separately from its context of operation, and so what is really important is that you are not just testing the technical aspects of the algorithm, but also the decisions and the processes that went into choosing data inputs, all the way up to implementation issues.”

Wachter’s own peer-reviewed academic work has focused on auditing, specifically around how to test AI systems for bias, fairness and compliance with the standards of equality law in both the UK and the European Union.

The method developed by Wachter and her colleagues – dubbed “counterfactual explanations” – shows why and how a decision was made (for example, why a person was sent to prison) and what would need to change to produce a different result, which can be a useful basis for challenging decisions. All of this is done without infringing on companies’ intellectual property rights.
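The idea can be illustrated with a deliberately simplified sketch: a brute-force search on a toy model with made-up inputs, rather than the optimisation-based approach set out in Wachter’s papers. The question it answers is the counterfactual one, namely how little an input would need to change for the decision to flip.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy model standing in for an opaque decision system
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # e.g. two standardised inputs
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # hypothetical approval rule
model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.2, 0.5]])      # a case the model refuses
print("Current decision:", model.predict(applicant)[0])

# Counterfactual-style question: what is the smallest increase to the
# first input that would flip the decision?
for delta in np.arange(0.0, 3.0, 0.01):
    candidate = applicant + np.array([[delta, 0.0]])
    if model.predict(candidate)[0] == 1:
        print(f"Decision would flip if input 0 were higher by {delta:.2f}")
        break
```

Crucially, the explanation is expressed in terms of inputs and outcomes rather than the model’s internals, which is why this style of explanation can be given without exposing trade secrets.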

“I think ethics is actually cheaper than people who make it think it is – it just requires sometimes thinking outside of the box, and the tools that we have developed provide a way of allowing you to be fair and equitable without revealing trade secrets, but still giving meaningful information to people and holding them accountable at the same time,” she says.
