Autonomous weapons reduce moral agency and devalue human life
Military technology experts gathered in Vienna have warned about the detrimental psychological effects of AI-powered weapons, arguing that deploying systems of algorithmically enabled killing dehumanises both the user and the target
Using autonomous weapons systems (AWS) to target humans will erode moral agency and lead to a general devaluing of life, according to military technology experts.
Speaking at the Vienna Conference on Autonomous Weapons Systems on 30 April 2024 – a forum set up by the Austrian government to discuss the ongoing moral, ethical, legal and humanitarian challenges presented by artificial intelligence (AI)-powered weapons – experts talked about the impact of AWS on human dignity, and how algorithmically enabled violence will ultimately dehumanise both its targets and operators.
Specific concerns raised by experts throughout the conference included the potential for dehumanisation when people on the receiving end of lethal force are reduced to data points and numbers on a screen; the risk of discrimination during target selection due to biases in the programming or criteria used; as well as the emotional and psychological detachment of operators from the human consequences of their actions.
Speakers also touched on whether there can ever be meaningful human control over AWS, given the combination of automation bias and the way such weapons accelerate the tempo of warfare beyond the pace of human cognition.
The ethics of algorithmic killing
Highlighting his work on the ethics of autonomous weaponry with academic Elke Schwarz, Neil Renic, a researcher at the Centre for Military Studies in Copenhagen, said a major concern with AWS is how they could further intensify the broader systems of violence they are already embedded within.
“Autonomous weapons and the systematic killing they’ll enable and accelerate are likely to pressure human dignity in two different ways, firstly by incentivising a moral devaluation of the targeted,” he said, adding that the “extreme systematisation” of human beings under AWS will directly impose, or at least incentivise, the adoption of pre-fixed and overly broad targeting categories.
“This crude and total objectification of humans leads very easily to a loss of essential restraints, so the stripping away of basic rights and dignity from the targeted. And we can observe these effects by examining the history of systematic killing.”
For Fan Yang, an assistant professor of international law in the Law School of Xiamen University, the problem of bias in AWS manifests in terms of both the data used to train the systems and how humans interact with them.
Noting that bias in AWS is likely to result in direct human casualties, Yang said this makes it orders of magnitude worse than, for example, being the victim of price discrimination in a retail algorithm.
“Technically, it’s impossible to eradicate bias from the design and development of AWS,” he said. “The bias would likely endure even if there is an element of human control in the final code because psychologically the human commanders and operators tend to over-trust whatever option or decision is recommended by an AWS.”
Yang added that any discriminatory targeting – whether a result of biases baked into the data or the biases of human operators to trust in the outputs of machines – will likely exacerbate conflict by further marginalising certain groups or communities, which could ultimately escalate violence and undermine peaceful solutions.
An erosion of human agency
Renic added that the systemic and algorithmic nature of the violence inflicted by AWS also has the potential to erode the “moral agency” of the operators using the weapons.
“Within intensified systems of violence, humans are often disempowered, or disempower themselves, as moral agents,” he said. “They lose or surrender their capacity to self-reflect, to exercise meaningful moral judgement on the battlefield, and within systems of algorithmic violence, we are likely to see those involved cede more and more of their judgement to the authority of algorithms.”
Renic further added that, through the “processes of routinisation” encouraged by computerised killing, AWS operators lose both the capacity and inclination to morally question such systems, leading to a different kind of dehumanisation.
Commenting on the detrimental effects of AWS on its operators, Amal El Fallah Seghrouchni, executive president of the International Centre of Artificial Intelligence of Morocco, said there is a dual problem of “virtuality” and “velocity”.
Highlighting the physical distance between a military AWS user and the operational theatre where the technology is deployed, she noted that the consequences of automated lethal decisions are not visible to the operator in the same way, and that the sheer speed at which these systems make decisions can leave operators with little real awareness of what has taken place.
On the question of whether targets should be autonomously designated by an AWS based on their characteristics, no speaker came out in favour.
Anja Kaspersen, director for global markets development and frontier technologies at the Institute of Electrical and Electronics Engineers (IEEE), for example, said that with AI and machine learning in general, systems will often have an acceptable error rate.
“You have 90% accuracy, that’s okay. But in an operational [combat] theatre, losing 10% means losing many, many, many lives,” she said. “Accepting targeting means that you accept this loss in human life – that is unacceptable.”
Renic added that while there may be some less problematic scenarios in which an AWS can more freely select its targets – such as a maritime setting away from civilians, where the characteristics being identified are a uniformed enemy on the deck of a ship – there are innumerable scenarios where ill-defined or contestable characteristics can be computed to form the category of “targeted enemy”, with horrendous results.
“Here, I think about just how much misery and unjust harm has been produced by the characteristic of ‘military-age male’,” he said. “I worry about that characteristic of military-age male, for example, being hard coded into an autonomous weapon. I think that’s the kind of moral challenge that should really discomfort us in these discussions.”
The consensus among Renic and other speakers was that the systematised approach to killing engendered by AWS, and the ease with which various actors will be able to deploy such systems, will ultimately lower the threshold for resorting to violence.
“Our issue here is not an erasure of humanity – autonomous weapons aren’t going to bring an end to human involvement,” said Renic. “What they will do, however, along with military AI more broadly, is rearrange and distort the human relationship with violence, potentially for the worse.”
In terms of regulating the technology, the consensus was that fully autonomous weapons should be completely prohibited, while every other type and aspect of AWS should be heavily regulated, including the target selection process, the scale of force deployed in a given instance, and the ability of humans to meaningfully intervene.
Read more about military AI
- Lords split over UK government approach to autonomous weapons: During a debate on autonomous weapons systems, Lords expressed mixed opinions towards the UK government’s current position, including its reluctance to adopt a working definition and commit to international legal instruments controlling their use.
- UK, US and Australia jointly trial AI-enabled drone swarm: British, American and Australian military organisations have trialled the use of artificial intelligence (AI) in drones in a collaboration designed to drive their adoption of AI-powered military tools.
- MoD sets out strategy to develop military AI with private sector: The UK Ministry of Defence has outlined its intention to work closely with the private sector to develop and deploy a range of artificial intelligence-powered technologies, committing to ‘lawful and ethical AI use’.