
Will autonomous weapons make humans passive participants in war?

Experts warn the emergence and adoption of AI-powered military systems may eventually push battlefield decision-making beyond the limits of human cognition

Throughout history, humans have applied new technology to gain an edge in war. The current rush by nations worldwide to develop and deploy lethal autonomous weapons systems (AWS) is no different. Masters of this technology will acquire dazzling hard-power capabilities that proponents insist will promote peace through deterrence. Critics contend the technology will instead incentivise war while dehumanising combatants and civilians alike, surrendering decisions over life and death to cold algorithmic calculations.

It’s possible both perspectives prove correct on a case-by-case basis. Much will depend on the context in which the technology is used. Central to the issue is how much control human operators cede to machines – especially as conflict scenarios unfold at a much higher tempo. If there’s one area of consensus around AWS, it’s that these systems will vastly accelerate warfare.

More than 100 experts in artificial intelligence (AI) and robotics signed an open letter to the United Nations (UN) in 2017 warning that AWS threaten to enable war “to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”. Indeed, this dynamic is prompting a major arms control dilemma. It’s one rife with uncertainty and disagreement around whether humans can rein in lethal technologies that can think faster than they can, and might one day act independently.

According to the Campaign to Stop Killer Robots, strict international rules are necessary to curb the proliferation and abuse of AWS. This position is backed by dozens of smaller countries and Nobel Peace Prize laureates, as well as numerous peace and security scholars. By contrast, military powers are resisting legally binding safeguards.

Nations such as Britain, China, India, Israel and the US are instead advocating responsible use via human-in-the-loop principles. This, in theory, commits militaries to having a human operator oversee and approve the use of force by AWS units at all times.

But new iterations of AWS are already expediting the OODA cycle – military jargon for the sequence of observation, orientation, decision and action that governs an attack.

What’s more, automation bias is known to routinely displace human judgement in the use of emerging technology. Combine these two factors – enhanced speed and deference to machines – and it’s an open question whether even hands-on operators of AWS will have complete control of the weapons they wield.
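To see how those two factors interact, consider a deliberately simple thought experiment, sketched below. Everything in it is hypothetical – the numbers and names are invented for illustration, not drawn from any real weapons system – but it shows how a fixed human decision window is overwhelmed once a faster system surfaces more recommendations than an operator can carefully check, leaving the rest to go through on the machine’s say-so.

```python
# Toy model (illustrative only): a human reviewer with a fixed time window
# either carefully checks each machine recommendation or, when time runs out,
# defaults to the machine's call.
import random

random.seed(0)

def review(recommendations: int, window_s: float, careful_review_s: float) -> dict:
    """Count how many recommendations get a careful check versus a rubber stamp."""
    careful, rubber_stamped = 0, 0
    time_left = window_s
    for _ in range(recommendations):
        # Assumption: a careful check takes roughly a fixed time per item.
        cost = careful_review_s * random.uniform(0.8, 1.2)
        if time_left >= cost:
            time_left -= cost
            careful += 1
        else:
            rubber_stamped += 1  # no time left: defer to the machine
    return {"careful": careful, "rubber_stamped": rubber_stamped}

# Same 60-second window; the faster system surfaces ten times as many decisions.
print(review(recommendations=5, window_s=60, careful_review_s=10))
print(review(recommendations=50, window_s=60, careful_review_s=10))
```

The point of the toy is not the arithmetic but the shape of the problem: once the tempo outpaces the reviewer, “oversight” quietly becomes ratification.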

‘Computer says kill’

Automation bias is generally defined as a situation in which users accept computer-generated decisions despite contradictory evidence or their own perceptions.

“The most dangerous AI isn’t the Terminator-type,” Pat Pataranutaporn, a technologist at MIT, said in an email. “Because its evil intent is obvious.” Rather, according to Pataranutaporn, an expert in human-AI interaction, “the real danger lies in AI that appears friendly but subtly manipulates our behaviour in ways that we can’t anticipate”.

In early August, he and a colleague wrote an essay describing the dangerous allure of “addictive intelligence” – systems that are simultaneously superior and submissive to their human operators.

And while Pataranutaporn’s research focuses on AI companions, similar risks are clear when it comes to the use of AWS by state and non-state actors. This is especially true if the computer models underpinning intelligent weapons recommend or select scorched-earth tactics as the shortest path to victory.

Multiple wargames run using large language models in the past year, for example, have shown that AI bots roleplaying as military commanders display an alarming affinity for crushing adversaries by launching nuclear strikes.

Automating conflict

Indeed, AI is already revolutionising warfare. Moscow’s imperial gambit in Ukraine has coincided with huge leaps in the affordability and accessibility of machine learning, visual recognition tools, digital network connectivity and robotics. Aside from sparking the largest interstate war of this century so far, the result has been an unparalleled, data-rich combat environment in which to conceive, test and validate AWS.

Much of Ukraine’s defence against Russian invaders has by now been distilled down to drone-to-drone combat using systems that verge on fully autonomous capability. In June, Kyiv established a new Unmanned Systems Forces branch in its military – the world’s first. A month later, Ukraine and Nato, the western military alliance, announced a joint €45m (£37m) Drone Coalition fund. Led by the UK and Latvia, it is meant to expedite the procurement and delivery of drones to Ukrainian fighters on the frontlines.

But drones are omnipresent far beyond eastern Europe, too. Unmanned systems have seen extensive use in conflict zones ranging from Gaza and Myanmar to Sudan, Ethiopia, Nagorno-Karabakh in Azerbaijan, northern Iraq and Syria. The US military maintains a fleet of AI-controlled surface vessels monitoring the Strait of Hormuz, a strategically vital corridor for global energy supplies that borders Iran.

Autonomous sentry guns dot the South Korean side of the demilitarised zone with North Korea. A report from UN experts suggests the world’s first truly autonomous killer robot – a drone produced by Turkish state-owned defence conglomerate STM – was used in Libya as far back as 2020 to hunt down rebel fighters loyal to rogue army general Khalifa Haftar.

Advanced software components are also enhancing the fighting efficiency of conventional forces, be it learning-enabled electronic warfare or rapid target identification and acquisition programs. A significant portion of this innovation is being driven by startups in Silicon Valley and elsewhere seeking to disrupt the global defence industry. A report from the US military predicts the sum of all this new technology will be a dramatic increase in the lethality of large-scale combat operations, to the point where it will likely force a change in modern military doctrine.

AWS’s escape velocity

Writing in Foreign Affairs magazine, former Google CEO and chairman Eric Schmidt and former US military chief Mark A Milley say the US and its allies should embrace a maximalist strategy for AWS. “Future wars will no longer be about who can mass the most people or field the best jets, ships and tanks,” they argue. “Instead, they will be dominated by increasingly autonomous weapons systems and powerful algorithms.”

Key to this, say Schmidt and Milley, will be outsourcing a massive amount of military planning to artificial agents. “AI systems could, for instance, simulate different tactical and operational approaches thousands of times, drastically shortening the period between preparation and execution,” they write.
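What such machine-speed planning might look like, in heavily simplified form, is sketched below. It is an illustrative toy, not anything Schmidt and Milley propose concretely: a handful of invented courses of action are each simulated thousands of times against a crude random outcome model and ranked by average result – the kind of brute-force search a machine can run in seconds.

```python
# Toy Monte Carlo planner (illustrative only; all names and numbers invented).
import random
from statistics import mean

random.seed(1)

# Hypothetical candidate courses of action and their invented baseline odds of success.
BASELINE = {"option_a": 0.45, "option_b": 0.55, "option_c": 0.50}

def simulate_outcome(course: str) -> float:
    """Stand-in for a far richer model of terrain, logistics and attrition."""
    return BASELINE[course] + random.gauss(0, 0.1)

def rank_courses(runs_per_course: int = 10_000) -> list[tuple[str, float]]:
    """Simulate each course of action many times and rank by average outcome."""
    scores = {
        course: mean(simulate_outcome(course) for _ in range(runs_per_course))
        for course in BASELINE
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_courses())
```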

However, others are wary of this type of future. They caution that the wholesale integration of AWS into military functioning risks aggravating the inherent brutality and fog of war in armed conflicts.

The human element of combat – be it the risk of casualties, differing opinions or bureaucratic chains of command – typically moderates governments’ use of force, albeit imperfectly and sometimes unintentionally. At the very least, it slows things down enough that alternative courses of action can be considered. The enthusiastic adoption of AWS would remove some of those speed bumps.

Much of this boils down to the software and data training protocols underpinning the hardware of weapons systems. “Even when such systems are nominally under human control, there may be difficulties,” Peter Burt, a researcher at Drone Wars UK, said in an email. “If too many functions of a system are automated, operators may not be able to properly monitor the process and override the system, if necessary.”

This matters greatly because it means AWS could distort accountability around the use of force. The explainability and traceability of wartime decisions are “already in extremely short supply”, Oxford University senior fellow and Obama-era member of the US National Security Council Brianna Rosen said in a podcast in March.


The introduction of AI into this process, she predicts, will make it much worse, as the opaque web of different algorithms, data sets and procedures preferred by different military branches and intelligence agencies means “no single person is going to fully understand how this technology works”.

Elke Schwarz, a professor of political theory at Queen Mary University of London who focuses on the ethics of military applications of AI, echoes these views.

“The issue with automation bias is quite thorny, because it is not something that can easily be overcome,” she says. “Operating an AI-enabled weapons system is not a simple case of command and control.”

Rather, when the human operator is “embedded within a setting of screens and interfaces and digital technologies, the workings of which are not always readily intelligible”, the human who is supposed to oversee the system becomes reliant on information presented via a black box of AI analysis. Then, says Schwarz, “killing becomes a matter of efficient data processing. Killing as a form of workflow management.”

Human oversight

Aside from Russia – which told the UN last year that it isn’t overly concerned about maintaining total direct human control of AWS – most military powers have said publicly they want the technology to always be subject to human oversight. But these positions are being asserted with little detail attached.

When it comes to keeping humans in the loop, “everyone can get on board with that concept, while simultaneously, everybody can disagree what it actually means in practice,” Rebecca Crootof, a law professor and AWS expert, told The Guardian earlier this year. “It isn’t that useful in terms of actually directing technological design decisions.”

Already, as AI systems evolve, each new design inherently distances human control by another degree or two.

“We could look back 15 or 20 years from now and realise we crossed a very significant threshold,” warns Paul Scharre, executive vice-president and director of studies at the Center for a New American Security think tank.

Autonomous drones

Startups in Ukraine, for example, have developed autonomous drones that can operate in swarms, with each unit communicating and coordinating with the others, or that can be programmed to carry out an attack even if the internet connection with a human operator is severed.

Meanwhile, critics of AWS say Israel’s use of AI-powered weapons in Gaza dispels the notion that the technology will make warfare more precise and humane – key selling points among its advocates. As of late September, more than 42,000 people in Gaza have been killed by IDF forces and nearly 100,000 wounded, the vast majority of them innocent civilians, according to local health authorities.

To correct for this in the future, Schmidt, the former CEO and chair of Google, and Milley, the ex-US military chief, suggest AWS and their human operators should be subject to relentless training. They advise that weapons systems must be continuously tested and assessed to confirm they operate as intended in real-world conditions.

Schmidt and Milley also recommend that Washington impose economic sanctions against countries that don’t follow this principle. “The next generation of autonomous weapons must be built in accordance with liberal values and a universal respect for human rights – and that requires aggressive US leadership,” they warn.

Rosen, the Oxford academic and sceptic of military applications of AI, likewise suggests that liberal democracies initiate a more prominent public debate about the use of intelligent weapons. This, she says, can form the basis for domestic policies and legal instruments to reduce their harms. And once these governance mechanisms are in place, they might provide the credibility necessary to try to find an international consensus around AWS.

A few weeks ago, 61 countries, including the US, endorsed a so-called “blueprint for action” at a summit on the responsible military use of AI held in Seoul. The non-binding framework details 20 commitments, divided into three categories, that seek to lay out a common understanding of how to address the impact of AI on international peace and security. These include maintaining human control of AWS and a vision for future modes of governance of AI in the military domain.

All told, autonomous weapons look set to become a fixture of 21st-century conflict. It remains to be seen whether, a decade or two from now, human decision-making will still be one, too.
