Lords split over UK government approach to autonomous weapons

During a debate on autonomous weapons systems, Lords expressed mixed opinions towards the UK government’s current position, including its reluctance to adopt a working definition and commit to international legal instruments controlling their use

Lords are split in their opinions of the UK government’s approach to autonomous weapons, with some arguing for a much greater degree of caution and others calling for less restraint to get ahead of adversaries.

At the start of December 2023, a Lords committee urged the UK government to “proceed with caution” when deploying autonomous weapons systems (AWS) and other artificial intelligence (AI)-powered military capabilities, after finding that the government’s promise to approach military AI in an “ambitious, safe and responsible” way has not lived up to reality.

Key recommendations of the AI in Weapon Systems Committee included the government ensuring human control at all stages of an AWS’s lifecycle; adopting an operational, tech-agnostic definition of AWS so that meaningful policy decisions can be made; and appropriately designing procurement processes for an AI-driven world so there is proper accountability. It also recommended a complete prohibition of AI in nuclear command and control.

Responding to the findings of that committee in late March 2024, the government insisted it is already acting responsibly with due caution, and that the Ministry of Defence’s (MoD) priority with AI is to maximise military capability in the face of potential adversaries, which it claimed “are unlikely to be as responsible”.

The government added that while it welcomes the “thorough and thought-provoking analysis”, the overall message of the committee that it must proceed with caution already “mirrors the MoD’s approach to AI adoption”.

During a debate on the committee’s findings and government response held on 19 April 2024, Lords expressed conflicting opinions on the UK’s approach to autonomous weapons. While some criticised the government for its reluctance to place limits on the use of AWS, for example, others said undue caution would inhibit progress and put the UK behind its geopolitical opponents.

The government, however, maintains that the use of autonomous weapons is already governed by international humanitarian law (IHL), so there is no need to set an operational definition or commit to any international legal instruments controlling their use.

Regulation ‘a gift to our adversaries’

Elaborating on the government’s position during the Lords debate, the minister of state for defence, the Earl of Minto, said that setting an operational definition for AWS or creating new international instruments to control their use would only benefit the UK’s enemies.

“These systems are already governed by international humanitarian law so, unfortunately, defining them will not strengthen their lawful use. Indeed, it is foreseeable that, in international negotiations, those who wilfully disregard international law and norms could use a definition to constrain the capabilities and legitimate research of responsible nations,” he said.

“It is also for that reason that, after sincere and deep consideration, we do not support the committee’s call for a swift agreement of an effective international instrument on lethal autonomous weapons systems – that would be a gift to our adversaries.”

Minto added that instead of relying on a definition or a document, the key safeguard over military AI is ensuring human involvement throughout the lifecycle of a given system. In line with this, he also clarified that the government has no intention to either create fully autonomous weapons or cede “political control” of the UK’s nuclear capabilities to AI.

“The British Ministry of Defence will always have context-appropriate human involvement and, therefore, meaningful human control, responsibility and accountability,” he said. “We know, however, that other nations have not made similar commitments and may seek to use these new technologies irresponsibly.”

As such, he added that the UK will be working with allies to establish standards for responsible military AI – which he said will be grounded in IHL – while also working to identify and attribute dangerous military uses of the technology to help hold “irresponsible parties to account”.

Despite there being no support in government for either a definition or legal instrument, Minto said the UK will continue to engage in international dialogues around regulating autonomous weapons, particularly the United Nations’ (UN) group of governmental experts working under the scope of the Convention on Certain Conventional Weapons.

Lords react

During the debate, Lords expressed mixed opinions towards the government’s position and its response to the committee.

Committee chair Lord Lisvane, for example, said the government’s aim to create “ambitious, safe and responsible” military AI must now be translated into practical implementation, and was particularly critical of the decision to not adopt an operational definition of AWS.

“I hear what the government says, but I am not convinced. I believe it is possible to create a future-proofed definition,” he said. “Doing so would aid the UK’s ability to make meaningful policy on autonomous weapons and engage fully in discussions in international fora. It would make us a more effective and influential player.”

He also said the government should lead by example on international discussions around the regulation of AWS, noting that whether the outcome is a legally binding treaty or more informal measures clarifying how IHL should be applied, the agreement of “an effective international instrument must be a high priority”.

Lisvane added that while the government accepted a number of the committee’s recommendations – including around the importance of parliamentary accountability, human control over nuclear command, and improving the MoD’s procurement practices around software and data – its response provides little to no detail on how these points will be achieved.

“Overall, the government’s response to our report was ‘of constructive intent’. I hope that that does not sound too grudging,” he said. “They have clearly recognised the role of responsible AI in our future defence capability, but they must embed ethical and legal principles at all stages of design, development and deployment. Technology should be used when advantageous, but not at an unacceptable cost to the UK’s moral principles.”

Lord Clement-Jones, who sat on the committee after having pushed for its creation, similarly noted that despite the near-global consensus on the need to regulate AWS, the UK government is yet to officially endorse any limitations on their use beyond a series of vague commitments around human oversight and ethical red lines.

Highlighting the use of AI systems to designate targets in Gaza, as well as the use of autonomous drones in Libya, Syria and Ukraine, Clement-Jones said these examples showed the urgency of the need to regulate AWS, a process he added should start with at least a working definition.

“The inconsistency in how we define and understand AWS has significant implications for the development and governance of these technologies. However, the committee demonstrated that a working definition is possible, distinguishing between fully and partially autonomous systems. This is clearly still resisted by the government, as their response has shown.”

Lord Browne, who also sat on the committee, said the government’s explanation around why a definition of AWS is unnecessary “is plainly insufficient”, adding: “How can we actively seek to engage with policy in regulating AWS if we cannot find even provisional words with which to define it? It is like attempting to make a suit for a man whose measurements are shrouded in secrecy and whose very existence is merely a rumour.”

He added that while this is a complex topic, “complexity should not be a refuge but a rebuke” in good policy-making. “It is the job of governments of any political stripe to be able to articulate their approach and have it tested by experts and dissenting voices.”

However, other Lords were less critical of the government’s position, arguing instead for less caution and less negativity around the potential consequences of military AI.

Lord Houghton, who sat on the committee but stressed that he was speaking in a personal capacity during the debate, said, for example, that while the requirements of any regulation “will undoubtedly constrain us in ways that patently will not trouble many of our potential enemies”, the strategic advantages that can be derived from AWS are so great that “we would be mad not to proceed with ways to exploit it”.

He also called for reassurances from the government that it “will not allow undue caution to inhibit progress”, and took aim at the negativity expressed towards the risks of military AI, and AWS in particular, during the committee’s investigation.

“The negativity was not among the committee’s membership but rather among many of our expert witnesses, some of whom were technical doom-mongers, while others seemed to earn their living by turning what is ultimately a practical problem of battlefield management into an ethical challenge of Gordian complexity,” he said.

Speaking about AI in the context of the navy, Lord Stevens argued that AI-enabled systems will be needed as an effective force multiplier, and that caution is needed to ensure regulation of AWS is not overly restrictive.

“Parliament would be making a category mistake if we attempted to regulate AI as a discrete category of weapon system, when in fact it is a spectrum of rapidly evolving general-purpose technologies,” he said.

“AI systems clearly offer enormous potential benefits in the maritime environment. Parliament can and should help our nation capitalise on them… the signal we should send to the Royal Navy should be: continue to proceed with speed.”

Lord Hamilton similarly noted the UK would be left at “a serious disadvantage if our enemies adopt AI with enthusiasm and we do not”.

He added it is “extremely important” for the UK’s armed forces to adopt AI, so it can be used to “save the lives of our troops and improve our chances of winning wars”.
