Government insists it is acting ‘responsibly’ on military AI

The government has responded to calls from a Lords committee that it must “proceed with caution” when it comes to autonomous weapons and military artificial intelligence, arguing that caution is already embedded throughout its approach.

The UK government insists it is already acting “responsibly” in the development of lethal autonomous weapons systems (LAWS) and military artificial intelligence (AI), following a warning from Lords to “proceed with caution”.

However, critics say the government is failing to engage with alternative ethical perspectives, and that the response merely confirms its commitment to the course of action already decided on.

Established in January 2023 to investigate the ethics of developing and deploying military AI and LAWS, the Lords Artificial Intelligence in Weapon Systems committee concluded on 1 December that the government’s approach to military AI has not lived up to its promise to be “ambitious, safe and responsible”.

Lords specifically noted that discussions around LAWS and military AI in general are “bedevilled by the pursuit of agendas and a lack of understanding”, and warned the government to “proceed with caution” when deploying AI for military purposes.

Responding to the findings of the committee, the government said it is already acting responsibly, and that the Ministry of Defence’s (MoD) priority with AI is to maximise military capability in the face of potential adversaries, which it claimed “are unlikely to be as responsible”.

The government added that while it welcomes the “thorough and thought-provoking analysis”, the overall message of the committee that it must proceed with caution already “mirrors the MoD’s approach to AI adoption”.

It also said it is already “committed to safe and responsible use of AI in the military domain”, and that it “will always comply with our national and international legal obligations”.

Specific recommendations

The government response addressed specific recommendations made by the committee, many of which focused on improving oversight and building democratic support for military AI, including making more information available to Parliament so that proper scrutiny can take place, and undertaking work to understand public attitudes.

The committee also called for a specific prohibition on the use of AI in nuclear command, control and communications; meaningful human control at every stage of an AI weapon system’s lifecycle; and the adoption of an operational, tech-agnostic definition of autonomous weapon systems (AWS) so that meaningful policy decisions can be made.

“[The UK government] must embed ethical and legal principles at all stages of design, development and deployment, while achieving public understanding and democratic endorsement,” said committee chair Lord Lisvane. “Technology should be used when advantageous, but not at unacceptable cost to the UK’s moral principles.”

Response details

Giving examples of how it is already approaching the issue with caution, the government said it has set out its commitment to meaningful human control over weapon systems; is actively conducting research into the most effective forms of human control; and has publicly committed to maintaining “human political control” of its nuclear arsenal.

Commenting directly on recommendations that it must explicitly outline how it will adhere to ethical principles and become a leader in responsible military AI, the government said it is already taking “concrete steps” to deliver these outcomes.

“We are determined to adopt AI safely and responsibly because no other approach would be in line with the values of the British public; meet the demands of our existing rigorous approach around safety and legal compliance; or allow us to develop the AI-enabled capability we require,” it said.

“It is important to be clear that the adoption and integration of novel technologies and capabilities is not a new challenge for Defence. We have established and effective risk management systems in place with clear lines of accountability, and assurance and controls frameworks embedded throughout the lifecycle of any military capability.

“Where unique requirements are required, owing to the nature or functionality of AI, we will review and augment these approaches, working with established frameworks wherever possible.”

Maximising military capability

The government said that while it was mindful of the need to uphold international law and its own ethical principles, “our priority is to maximise our military capability in the face of growing threats”.

Given this priority, it said it does not agree with the committee on the need for a definition of AWS – and that it therefore does not intend to adopt a definition – because the “irresponsible and unethical behaviours and outcomes about which the Committee is rightly concerned are already prohibited under existing [international] legal mechanisms”.

It added that using a definition of LAWS as the starting point for a new legal instrument prohibiting certain types of systems (as some have argued, but not the committee) “represents a threat to UK Defence interests, and at the worst possible time, given Russia’s action in Ukraine and a general increase in bellicosity from potential adversaries”.

“We maintain that meaningful human control, exercised through context-appropriate human involvement, must always be considered across a system’s full lifecycle,” it said. “We and our key partners have been clear that we oppose the creation and use of AWS that would operate in any other way, but we face potential adversaries who have not made similar commitments and are unlikely to be as responsible.

“Rather, as we have seen in other domains, adversaries will seek to use international pressure and legal instruments to constrain legitimate research and development while actively pursuing unsafe and irresponsible use cases. It is important that the UK maintains the freedom of action to develop legal and responsible defensive capabilities to protect our people and our society against such hostile activities.”

The government said that while integration of novel technologies is not a new challenge for the MoD, it will “shortly” publish further guidance on how ethical principles can be operationalised, which will specifically outline the governance, accountabilities, processes and reporting mechanisms it believes are necessary to translate the principles into practice.

Critical voices

Commenting on the government’s response, Peter Burt of non-governmental organisation Drone Wars (which gave evidence to the committee in April 2023) said there is little new information in the response and nothing in it would be surprising to observers of government policy in this area.

“The response merely outlines how the government intends to follow the course of action it had already planned to take, reiterating the substance of past policy statements such as the Defence Artificial Intelligence Strategy and puffing up recent MoD activity and achievements in the military AI field,” he wrote in a blog post.

“As might be imagined, the response takes a supportive approach to recommendations from the Lords which are aligned to its own agenda, such as developing high-quality data sets, improving MoD’s AI procurement arrangements, and undertaking research into potential future AI capabilities.”

However, he noted that in the rare instances where the committee “substantially” challenged the MoD’s approach, the government ultimately rejected those challenges – most notably in its refusal to adopt a definition of AWS on the basis that, in Burt’s words, “the UK would be making itself more vulnerable to Russian military action. Really?”

Elke Schwarz, an associate professor of political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies, told Computer Weekly that while there are some encouraging moves and signposts in the government response, as an ethicist, she finds it disappointing.

“The problem of ethics is entirely under-addressed,” she said. “Instead, the recent pivot to ‘responsible AI’, as under-defined and over-used as this concept is, is doing a lot of the heavy lifting.

“What we have seen in recent years is that the relatively thin notion of ‘responsibility’ has become a stand-in for a more in-depth engagement with ethics. This lateral shift toward ‘responsibility’ in lieu of trying to come to grips with more thorny ethical questions clearly present with AI, especially AI in targeting systems, has a number of implications for those state actors wanting to build AWS.”


Schwarz added that the focus on “responsibility” also creates a strategically useful distinction between responsible actors (i.e. Western powers) and intrinsically irresponsible actors (usually China or Russia): “We see this over and over and over again in the discourse,” she said. “It, again, is effective in dismissing challenges raised by critics about the possible ethical pitfalls, or indeed the possible lack of evidence that these systems are actually effective.”

Burt similarly noted that the government’s emphasis on dialogue with “likeminded nations” around military AI and AWS “takes little effort and is likely to pay limited dividends”, and that engaging with those who have differing views is essential to achieving genuine change.

“The UK has clearly made no effort to do this on the international stage,” he said. “The response is peppered with judgements such as, ‘We know some adversaries may seek to misuse advanced AI technologies, deploying them in a manner which is malign, unsafe and unethical,’ indicating that the UK intends to take an antagonistic approach to these adversaries and has no interest in engaging in dialogue with them.”

Ultimately, Schwarz said that, given how fast the goalposts are shifting in terms of what is and isn’t acceptable for AI weapons, it is difficult not to feel “a little bit jaded” by the government response, and to read it as an effort to circumvent the ethical issues raised and carry on as before.

“There are so many linguistic contortions, generalities, vague phrasings and so little concrete engagement with the specifics of the ethical issues raised not only by the report but by the many, many experts on this subject matter,” she said. “One gets the sense that a strong confirmation bias is at work with this issue, whereby only those aspects of the debate are acknowledged that the government wants to hear.”

Key partners

Both Burt and Schwarz noted that the government response explicitly highlights Israel as a “key partner” on military AI – a country that has been using a machine learning-based system known as Habsora (“the Gospel” in English) to designate targets for its bombing campaign in Gaza.

At the time the AI system became public knowledge at the start of December 2023, the Israel Defense Forces (IDF) had already killed more than 15,000 people in Gaza – a figure that stands at more than 30,000 at the time of publication, according to estimates by Gaza’s Ministry of Health.

“Rather than hold Israel to account for its use of targeting and warfighting methods, which have led to inflated levels of civilian casualties, the UK is wholeheartedly supporting its war effort,” said Burt.

“Gaps like this between the government policies set out in the response to the Lords Committee and the reality of what is being experienced during the attack on Gaza expose the response for what it really is – a public relations document. The ultimate purpose of the response is not to convince members of the House of Lords that the government is listening to their concerns, but to ‘sell’ existing MoD policy and investment in military AI to the media and the public.”

Linking this to the marginalising effect AI systems can have on human judgement, Schwarz said that placing a human in the loop may not be effective at all, given what the world has been witnessing with the IDF in Gaza.

“Human involvement means an accelerated approval process – serial acceptances of targets without much scrutiny will be the norm,” she said. “This is an obvious trajectory given the logic of the AI system and the allure of speed it offers. Such human involvement is, however, a far cry from meaningful.”

Schwarz concluded that despite the shortcomings of the government response, she will be keen to see the forthcoming guidance documents. “There might still be hope that the government is not entirely persuaded by the very strong and increasingly forceful technology lobby selling their as-yet-untested wares for a future war,” she said.
