Lords Committee to investigate use of AI-powered weapons systems

House of Lords to investigate the use of artificial intelligence in weapons systems, following the UK government's publication of its AI defence strategy in June 2022

The House of Lords Artificial Intelligence (AI) in Weapon Systems Committee has published a call for evidence as part of its inquiry into the use of autonomous weapons.

Autonomous weapons systems (AWS), also known as lethal autonomous weapons systems (LAWS), are weapons systems that can detect, select and engage targets with little or no human intervention.

Established on 31 January 2023, the committee will explore the ethics of developing and deploying autonomous weapons, including how they can be used safely and reliably, their potential for escalating conflict, and their compliance with international law.

The committee will also specifically look at the technical, legal and ethical safeguards that are necessary to control the use of AWS, as well as the sufficiency of current UK policy and the state of international policymaking in this area generally.

“Artificial intelligence features in many areas of life, including armed conflict. One of the most controversial uses of AI in defence is the creation of autonomous weapon systems that can select and engage a target without the direct control or supervision of a human operator,” said committee chair Lord Lisvane.

“We plan to examine the concerns that have arisen about the ethics of these systems, what are the practicalities of their use, whether they risk escalating wars more quickly, and their compliance with international humanitarian law.

“Our work relies on the input of a wide range of individuals and is most effective when it is informed by as diverse a range of perspectives and experiences as possible. We are inviting all those with views on this pressing and critical issue, including both experts and non-experts, to respond to our call for evidence by 10 April 2023.”

Following evidence submissions, the committee will begin interviewing witnesses in public session between March and July, with the aim of concluding its investigation by November 2023. A UK government response is expected in January 2024.

UK government approach

In June 2022, the Ministry of Defence (MoD) unveiled its Defence artificial intelligence strategy, outlining how the UK will work closely with the private sector to prioritise research, development and experimentation in AI to “revolutionise our Armed Forces capabilities”.

Regarding the use of LAWS, the strategy claimed the UK was “deeply committed to multilateralism” and will therefore continue to engage with the United Nations (UN) Convention on Certain Conventional Weapons (CCW).

Although details of its approach to autonomous weapons were light in the 72-page strategy document, an annex on LAWS in an accompanying policy paper said systems that can identify, select and attack targets without “context-appropriate human involvement” would be unacceptable.

“Sharing the concerns of governments and AI experts around the world, we…oppose the creation and use of systems that would operate without meaningful and context-appropriate human involvement throughout their lifecycle,” it said.

“The use of such weapons could not satisfy fundamental principles of International Humanitarian Law, nor our own values and standards as expressed in our AI Ethical Principles. Human responsibility and accountability cannot be removed – irrespective of the level of AI or autonomy in a system.”

It added that the UK government would continue working with international allies and partners to address the “opportunities and risks” around LAWS.

The UK Stop Killer Robots campaign said at the time that while it was significant the government recognised that lines need to be drawn around the use of LAWS, it gave no indication of how “context appropriate human involvement” is to be assessed or understood.

“In this new formulation, ‘context appropriate human involvement’ could mean almost anything and the lack of detail about where the UK draws the line amounts to the UK public being told by the military ‘leave it to us’ to determine what is appropriate,” it said.

“This is emblematic of our previous concerns that this policy position, as well as the wider Defence AI strategy, was formulated without public consultation or any attempt by ministers to have a national conversation on this issue.”

Lack of international consensus

However, commenting in July 2022 on the state of international cooperation around LAWS, the German Institute for International and Security Affairs said that expert talks at the UN level were facing failure.

“The Group of Governmental Experts [GGE] has been discussing… AWS in the UN arms control context since 2017,” it said. “Regulation of AWS is an increasingly remote prospect, and some representatives even admit privately that the talks may have failed.”

It added that because the GGE requires unanimity to make decisions, the lack of Russian involvement in talks since the February 2022 invasion of Ukraine means alternative forums will need to be found for the international debate around LAWS.

However, it noted that even before Russia’s actions in Ukraine, “it was clear that differences of substance…precluded rapid agreement”, including disagreements over the exact definitions and terminology.

“Another fault line is the arms race between the US, Russia and China, which is especially pronounced in the sphere of new technologies,” it added.

In a report on “emerging military technologies” published in November 2022 by the Congressional Research Service, analysts noted that roughly 30 countries and 165 non-governmental organisations (NGOs) have called for a pre-emptive ban on LAWS, citing ethical concerns including the potential lack of accountability and the inability of such systems to comply with the international laws governing armed conflict.

A September 2019 study by Justin Haner and Denise Garcia said that oversight of autonomous weapons capabilities is “critically important” given the technology is likely “to proliferate rapidly, enhance terrorist tactics, empower authoritarian rulers, undermine democratic peace, and is vulnerable to bias, hacking, and malfunction”.

They also found that the main players pushing the technology globally are the US, China, Russia, South Korea and the European Union.

“The US is the outright leader in autonomous hardware development and investment capacity. By 2010, the US had already invested $4bn into researching AWS with a further $18bn earmarked for autonomy development through 2020,” they said.

Despite the lack of explicit international rules and safeguards around LAWS, there are reports that autonomous weapons have already been deployed in combat situations.

A UN Security Council report published in March 2021, for example, describes an engagement between the Government of National Accord Affiliated Forces (GNA-AF, which are backed by the UN) and the Haftar Affiliated Forces (HAF) in Tripoli, Libya.

“Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 and other loitering munitions,” it said.

“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

It added that HAF units “were neither trained nor motivated to defend against the effective use of this new technology and usually retreated in disarray. Once in retreat, they were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems.”

However, it is unclear from the wording what level of autonomy these weapons systems had, and the report does not explicitly mention whether people were killed as a result of their use.

New Scientist reported in June 2021 that the Israel Defense Forces (IDF) had also used a swarm of AI-powered drones to locate, identify and attack targets in Gaza.
