UK, US and Australia jointly trial AI-enabled drone swarm
British, American and Australian military organisations have trialled the use of artificial intelligence (AI) in drones in a collaboration designed to drive their adoption of AI-powered military tools
The UK government has deployed a “collaborative swarm” of autonomous drones to detect and track military targets using artificial intelligence (AI), as part of a joint trial with Australia and the US.
Organised by the UK’s Defence Science and Technology Laboratory (Dstl) and held in April 2023, the trial involved deploying the drones in a real-time “representative environment” and re-training their AI models mid-flight.
It also involved the “interchange” of different machine learning (ML) models between the drones of participating countries, and the deployment of those same models in a range of ground vehicles to further test their target identification capabilities.
“The trilateral teams collaborated to develop joint machine-learning (ML) models, apply test and evaluation processes, and fly on different national UAVs,” said the UK Ministry of Defence (MoD). “The ML models were quickly updated to include new targets and shared among the coalition and AI models retrained to meet changing mission requirements.”
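The MoD did not detail how this model interchange was carried out in practice. Purely as an illustration, one common way to move a trained model between otherwise incompatible platforms is to export it to a framework-neutral format such as ONNX, which any ONNX-capable runtime on the receiving side can then load. The minimal sketch below shows that general pattern with a hypothetical placeholder network; none of the names or architecture choices come from the trial itself.

```python
# Illustrative only: a stand-in for a target-detection model, exported to
# ONNX so a different platform's runtime can load it without the original
# training framework. All names here are hypothetical.
import numpy as np
import torch
import torch.nn as nn


class TinyDetector(nn.Module):
    """Placeholder network standing in for a real detection model."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))


model = TinyDetector()
model.eval()

# Sending side: export once to a framework-neutral format.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "detector.onnx",
                  input_names=["image"], output_names=["scores"])

# Receiving side: load and run the shared model with only an ONNX runtime.
import onnxruntime as ort

session = ort.InferenceSession("detector.onnx")
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
scores = session.run(["scores"], {"image": image})[0]
print(scores.shape)  # (1, 4)
```

The appeal of a neutral interchange format is that the receiving platform needs only a compatible runtime, not the originating nation’s training stack, which is what makes sharing and quickly retraining models across different national UAVs plausible.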
The MoD further claimed that the use of autonomous systems to independently detect and track enemy targets “will have a massive impact on coalition military capability,” and that military AI needs to be developed “at pace if we are to maintain our operational advantage”.
Run under the AUKUS agreement – a trilateral security pact between the Australian, UK and US governments to advance military cooperation throughout the Indo-Pacific region in areas such as nuclear submarines, hypersonic weapons and AI – the trial formed part of the coalition’s Advanced Capabilities Pillar, otherwise known as Pillar 2.
The aim of this work is to accelerate the three governments’ collective understanding of AI in a military context and, ultimately, to field the technology in operations.
“This trial demonstrates the military advantage of AUKUS advanced capabilities, as we work in coalition to identify, track and counter potential adversaries from a greater distance and with greater speed,” said Lieutenant General Rob Magowan, the UK deputy chief of defence staff.
“Accelerating technological advances will deliver the operational advantages necessary to defeat current and future threats across the battlespace. We are committed to collaborating with partners to ensure that we achieve this while also promoting the responsible development and deployment of AI.”
Abe Denmark, a US senior adviser to the secretary of defence for AUKUS, added that advanced AI technologies have the potential to transform how the three governments approach defence and security challenges.
“This capability demonstration is truly a shared effort and is thus a critical step in our collective initiative to stay ahead of emerging threats,” he said. “By pooling our expertise and resources through our AUKUS partnerships, we can ensure that our militaries are equipped with the latest and most effective tools to defend our nations and uphold the principles of freedom and democracy around the world.”
More than 70 military and civilian defence personnel and industry contractors were involved in the trial, including personnel from drone suppliers Blue Bear and Insitu.
The MoD previously announced in January 2021 that it had conducted a trial of autonomous “swarming drones” in Cumbria under Dstl’s Many Drones Make Light Work programme, which consisted of 20 drones operating collaboratively to deliver six different payloads.
Further drone swarm trials were carried out by the Royal Marines under the Autonomous Advance Force 4.0 programme in July 2021, during which six drones were tasked with reconnaissance and resupplying ground units.
In June 2022, the MoD unveiled its Defence artificial intelligence strategy, outlining how the UK will work closely with the private sector to prioritise research, development and experimentation in AI to “revolutionise our Armed Forces capabilities”.
Although the 72-page strategy document was light on detail about the MoD’s approach to autonomous weapons, the annex of an accompanying policy paper said systems that can identify, select and attack targets without “context-appropriate human involvement” would be unacceptable.
In a report on “emerging military technologies” published in November 2022 by the Congressional Research Service, analysts noted that roughly 30 countries and 165 nongovernmental organisations (NGOs) have called for a pre-emptive ban on the use of autonomous weapons due to the ethical concerns surrounding their use, including the potential lack of accountability and inability to comply with international laws around conflict.
House of Lords scrutinises AI weapons
In January 2023, the House of Lords established an AI in Weapon Systems Committee to explore the ethics of developing and deploying autonomous weapons, including how they can be used safely and reliably, their potential for conflict escalation, and their compliance with international laws.
During the committee’s first evidence session in March 2023, Lords were warned that the potential benefits of using AI in military operations should not be conflated with better international humanitarian law compliance.
Subsequent committee sessions – available online via official recordings and transcripts – raised further concerns.
On 20 April, for example, James Black, assistant director of the defence and security group at RAND Europe, noted that while conversations around the use of AI weapons by non-state actors tend to conjure “images of violent extremist organisations”, they should also include “large multinational corporations, which are the types of organisations that are at the forefront of developing this technology”.
He added: “Moving forward, a lot of this stuff is going to be difficult to control from a counter and non-proliferation perspective, due to its inherent software-based nature.
“A lot of our export controls and counter-proliferation or non-proliferation regimes that exist are focused on old-school, traditional hardware such as missiles, engines or nuclear materials. This sort of thing is a different proposition and, clearly, a challenge.”
Kenneth Payne, a professor of strategy at King’s College London, added, however, that the private sector’s increasing dominance of AI development is not inevitable, and that governments can begin taking steps to address the issue.
“One concrete example is increasing the amount of compute that is available to researchers in a university setting. It is part of the new national computing strategy to do that. A sovereign foundation model AI capability would be a concrete step, as part of a wider project of democratising these activities again,” he said.
Payne further told Lords that war-game studies of military engagements run by “human-machine teams” show there could be “quite a rapid escalation spiral” because “uncertainty about how much the adversary had outsourced to automatic decision-makers meant that you had to jump the gun and get your retaliation in first”.
He added that while we generally have a good understanding of how humans think about deterrence, escalation and coercion, we do not have a similar understanding of how machines would navigate these complex dynamics.
Payne also noted that AI in this context will not simply replace what humans do, but will enable military organisations to take “qualitatively different” actions: “There is no human equivalent of a 10,000-strong aerial swarm or of a submersible shoal that can stay at sea indefinitely.”
During a separate session on 27 April, Mariarosaria Taddeo, associate professor at the Oxford Internet Institute, said that the unpredictability of AI is “intrinsic to the technology itself”, adding: “If we are going to use AI in warfare, we have to make sure that we can apply regulations, including with respect to the responsibilities that people have, which is very hard to do and we are very far from finding a solution.”
Read more about artificial intelligence
- AI interview: Elke Schwarz, professor of political theory: Elke Schwarz speaks with Computer Weekly about the ethics of military artificial intelligence and the dangers of allowing governments and corporations to push forward without oversight or scrutiny.
- AI interview: Michael Osborne, professor of machine learning: Artificial intelligence researcher speaks with Computer Weekly about the implications of a market-driven AI arms race and the overwhelming dominance of the private sector over the technology.
- Worker-focused AI Bill introduced by backbench MP Mick Whitley: Using the 10-minute motion rule, backbench Labour MP Mick Whitley has introduced a worker-focused artificial intelligence (AI) bill to Parliament, which outlines a similar approach to AI being advocated by unions in the UK.