
Next UK government must be prepared to legislate on AI, say MPs

The House of Commons Science, Innovation and Technology Committee says the next government should be ready to legislate on artificial intelligence to plug any regulatory gaps

The next UK government should be ready to legislate on artificial intelligence (AI) if it finds gaps in the powers of regulators to protect the public interest from fast-moving technological developments, said the House of Commons Science, Innovation and Technology Committee (SITC).

Following its inquiry into the UK’s governance of AI – launched in October 2022 to examine how the UK is ensuring the technology is used in an ethical and responsible way – the SITC said the next government must be ready to legislate if the current hands-off approach proves insufficient to address the current and future harms associated with the technology.

The committee added that while it largely agrees with the current approach of using existing regulators to manage the proliferation of AI in their sectors, the looming end of the current Parliamentary session means there is no time to bring forward updates to regulators’ remits or powers if gaps are identified.

Given the huge financial disparities between regulators and leading AI developers that “can command vast resources”, the SITC said the next government will also need to offer more material support to UK regulators to help them hold these companies accountable.

“It is right to work through existing regulators, but the next government should stand ready to legislate quickly if it turns out that any of the many regulators lack the statutory powers to be effective. We are worried that UK regulators are under-resourced compared to the finance that major developers can command,” said SITC chair Greg Clark.

The SITC also raised concerns about reports from Politico that the UK’s AI Safety Institute (AISI) has been unable to access some AI developers’ models for pre-deployment safety testing, as was voluntarily agreed at the UK government’s AI Safety Summit at Bletchley Park in November 2023.

Plugging gaps

In an interim report published in August 2023, the SITC warned that the UK risks being left behind by AI-related legislation being developed in other jurisdictions, such as the European Union (EU), and pushed the government to confirm if and when it would see fit to formally legislate.

However, in follow-up sessions, the SITC heard from senior Whitehall officials and digital secretary Michelle Donelan that the government did not accept the need for AI-specific legislation, and that it was more important to improve the technology’s safety and build regulatory capacity in support of the “pro-innovation” framework outlined in its March 2023 AI white paper.

Under this approach, the UK government would rely on existing sectoral regulators to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise, consistent with laws already on the books.

While the government has largely reaffirmed its commitment to hold off on AI-specific laws until the time is right, it said in February 2024 that it would consider binding legal requirements for companies developing the most powerful AI systems, noting that any voluntary measures would likely be “incommensurate to the risk” presented by the most advanced capabilities.

“The government should in its response to this report provide further consideration of the criteria on which a decision to legislate will be triggered, including which model performance indicators, training requirements such as compute power or other factors will be considered,” said the SITC in its recommendations.

It added that the next government should commit to laying quarterly reviews of its approach to AI regulation before Parliament, including a summary of technological developments related to its stated criteria for triggering a decision to legislate, as well as an assessment of whether those criteria have been met.

Given these disparities between regulators and AI companies, the SITC added that current funding levels are “clearly insufficient to meet the challenge”, and that the next government must “announce further financial support, agreed in consultation with regulators, that is commensurate to the scale of the task”.

While the SITC welcomed the government’s commitment to conduct a “regulatory gap analysis” to see whether further powers for regulators are required, it reiterated that the approaching end of the Parliamentary session leaves no time to update regulators’ remits or powers should gaps be identified.

In July 2023, a regulatory gap analysis conducted by the Ada Lovelace Institute found that because “large swathes” of the UK economy are either unregulated or only partially regulated, it is not currently clear who would be responsible for scrutinising AI deployments in a range of different contexts.

This includes recruitment and employment practices, which are not comprehensively monitored; education and policing, which are monitored and enforced by an uneven network of regulators; and activities carried out by central government departments that are not directly regulated.

“In these contexts, there will be no existing, domain-specific regulator with clear overall oversight to ensure that the new AI principles are embedded in the practice of organisations deploying or using AI systems,” said the institute.

It added that independent legal analysis conducted for the institute by data rights agency AWO found that, in these contexts, the protections currently offered by cross-cutting legislation such as the UK GDPR and the Equality Act often fail to protect people from harm or give them an effective route to redress.

Pre-deployment testing

On pre-deployment testing, the SITC said it is “concerned” by suggestions that the AISI has been unable to access unreleased models to undertake safety testing before they are rolled out, which was agreed with the firms on a voluntary basis during the AI Safety Summit in November 2023.

“If true, this would undermine the delivery of the institute’s mission and its ability to increase public trust in the technology,” it said. “In its response to this report, the government should confirm which models the AI Safety Institute has undertaken pre-deployment safety testing on, the nature of the testing, a summary of the findings, whether any changes were made by the model’s developers as a result, and whether any developers were asked to make changes but declined to do so.

“The government should also confirm which models the institute has been unable to secure access to and the reason for this. If any developers have refused access – which would represent a contravention of the reported agreement at the November 2023 Summit at Bletchley Park – the government should name them and detail their justification for doing so.”

Computer Weekly contacted the Department for Science, Innovation and Technology (DSIT) for comment on the SITC report – and specifically on whether pre-deployment safety testing has been conducted on all the models the companies agreed to provide access to – but received no response.

During the AI Seoul Summit in South Korea in May 2024, 16 global AI firms signed the Frontier AI Safety Commitments, a voluntary set of measures setting out how they will safely develop the technology.

Specifically, they voluntarily committed to assessing the risks posed by their models across every stage of the AI lifecycle; setting unacceptable risk thresholds to deal with the most severe threats; articulating how mitigations will be identified and implemented to ensure those thresholds are not breached; and continually investing in their safety evaluation capabilities.

Under one of the key voluntary commitments, the companies will not develop or deploy AI systems if the risks cannot be sufficiently mitigated.

Yoshua Bengio – a Turing Award-winning AI academic and member of the UN’s Scientific Advisory Board, who is heading up the frontier AI State of the Science report agreed at Bletchley – said that while he is pleased to see so many leading AI companies sign up (and particularly welcomes their commitments to halt models where they present extreme risks), the commitments will need to be backed up by more formal regulatory measures down the line.

“This voluntary commitment will obviously have to be accompanied by other regulatory measures, but it nonetheless marks an important step forward in establishing an international governance regime to promote AI safety,” he said.
