No UK AI legislation until timing is right, says Donelan
The UK government will not legislate on artificial intelligence until it has a better understanding of the technology, so is instead focusing on building up regulatory capacity and conducting safety-focused research, says digital secretary
Digital secretary Michelle Donelan says the UK government will not legislate on artificial intelligence (AI) until the timing is right, and is focused in the meantime on improving the technology’s safety and building regulatory capacity in support of its proposed “pro-innovation” framework.
On 13 December, Donelan appeared before the Science, Innovation and Technology Committee to answer MPs’ questions about the UK’s current approach to fostering and regulating the development of AI.
The committee previously published an interim report on the UK’s developing AI governance arrangements at the end of August, which warned there is a danger the UK will be “left behind” by legislation being developed elsewhere, specifically the European Union’s (EU) AI Act.
The report identified 12 challenge areas for AI legislation, relating to various competition, accountability and social issues associated with AI’s operation, including bias, privacy, misrepresentation, access to data, access to computing power and open source.
Waiting to legislate
Commenting on the lack of AI legislation in the King’s Speech (which sets out the legislative timetable for the next Parliamentary session, running until 2025), Donelan said: “What we are doing is investing more than any other nation in AI safety.”
Noting the government’s recent convening of 28 countries at its AI Safety Summit in Bletchley Park, the agreements that were made during that event around the need for pre-deployment testing and evaluation, and the AI whitepaper it released in March, Donelan added: “We’ve been ensuring that we’ve really doubled down on getting a better handle on what exactly are the risks.
“I do think it is important to remember that this is an emerging technology that is emerging quicker than any technology we’ve ever seen before. No country has a full handle on exactly what the risks are.”
Pressed by committee chair Greg Clark on whether the government will follow through on its commitment to place a statutory duty on regulators to have due regard for the AI whitepaper’s five principles (safety and security; transparency and explainability; fairness; accountability and governance; and contestability and redress), Donelan said timing was key with any legislation, and that there is a risk of stifling innovation by acting too quickly without a proper understanding of the technology.
“To properly legislate, we need to better be able to understand the full capabilities of this technology,” she said, adding that while “every nation will eventually have to legislate” on AI, the government decided it was more important to be able to act quickly and get “tangible action now”.
“We don’t want to rush to legislate and get this wrong. We don’t want to stifle innovation…We want to ensure that our tools can enable us to actually deal with the problem in hand, which is fundamentally what we’ll be able to do with evaluating the models.”
Highlighting the AI Safety Institute launched by prime minister Rishi Sunak at the end of October (previously known as the Frontier AI Taskforce, and before that the Foundation Model Taskforce), Donelan said the government has been busy bringing experts from industry, academia and civil society into the new research and evaluation body so that it can fully get to grips with the range of risks the technology presents.
Asked by Clark how placing a statutory duty of due regard on regulators would stifle innovation, Donelan did not directly respond, but said regulators already have legislation to adhere to, and that the goal of the whitepaper was to provide consistency and cohesion for those working in the AI space.
“What we put in that bill has not been determined as of yet and the timing hasn’t been set, because we in this country are taking a really proportionate and agile approach, one that’s going to be based on evidence, gathering the information and properly understanding the risk before we lurch to legislate,” she said.
“We have seen the impact that that can have, look at the EU AI Act and the ramifications and the response by industry to that…This isn’t as simple as ‘Legislation is the only tool in the toolbox’ – it certainly isn’t, and there are downsides with legislation, [like] the fact that it takes so long.”
She added that the EU and US “will be looking to us to fill that gap in time” because of the UK’s work on AI safety already underway.
While Donelan said the institute, on top of its safety-related research, is already in a position to carry out AI model evaluations, Whitehall officials previously told the committee in mid-November that none of the institute’s work to date has been peer reviewed.
Emran Mian, director general of digital technologies and telecoms at the Department for Science, Innovation and Technology (DSIT), said, however, that the institute has already been helpful to policymakers in driving forward conversations around AI safety, as well as kickstarting international collaboration on the issue.
“We clearly need to keep building the science around what good evaluation looks like,” he said.
Regulatory gaps?
A major element of the government’s AI proposals, as set out in the whitepaper, is to rely on existing regulators – including the Information Commissioner’s Office (ICO), the Health and Safety Executive, the Equality and Human Rights Commission (EHRC) and the Competition and Markets Authority (CMA) – to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise.
In its interim report, the committee recommended conducting a “gap analysis” of these regulators’ capacities and powers before any legislative attempts are made.
Donelan defended the approach before the committee, arguing a standalone AI regulator would create a lot of overlap and duplication of work, and that it is therefore better to use the UK’s “existing fleet of regulators” as they are best placed to understand the different contexts of how AI is used in their sectors.
However, she added that the government was not conducting a formal gap analysis, and was instead taking an iterative approach to identifying issues: “This is an ongoing process, surely, because AI is an emerging technology. So, if we were to do one gap analysis now, in a few months’ time it might be totally different. We need to constantly be doing that gap analysis.”
Donelan told MPs her department is already in the process of setting up a “central regulatory function” as per the whitepaper, which will act partly as a horizon-scanning body and partly as a support function for other regulators.
She added the government is also supporting the Digital Regulation Cooperation Forum (DRCF), a coalition of four regulators with remits over different aspects of digital life, with £2m extra funding.
“If we found that there were gaps with our regulators, we would be open to thinking of other fora or support functions to be able to assist them,” she said. “What we want to do is make sure that we have the regulation and the right mechanisms in place to be on the front foot on this agenda.”
In July 2023, a regulatory gap analysis conducted by the Ada Lovelace Institute found that because “large swathes” of the UK economy are either unregulated or only partially regulated, it is not clear who would be responsible for scrutinising AI deployments in a range of different contexts.
This includes recruitment and employment practices, which are not comprehensively monitored; education and policing, which are monitored and enforced by an uneven network of regulators; and activities carried out by central government departments that are not directly regulated.
“In these contexts, there will be no existing, domain-specific regulator with clear overall oversight to ensure that the new AI principles are embedded in the practice of organisations deploying or using AI systems,” said the institute.
It added that independent legal analysis conducted for the institute by data rights agency AWO found that, in these contexts, the protections currently offered by cross-cutting legislation such as the UK GDPR and the Equality Act often fail to protect people from harm or give them an effective route to redress.
“This enforcement gap frequently leaves individuals dependent on court action to enforce their rights, which is costly and time-consuming, and often not an option for the most vulnerable,” it said.
Read more about artificial intelligence
- EU AI Act: The wording of the act is finalised: The EU AI Act has climbed another rung up the legislative ladder, with its wording having been finalised.
- Competition and Markets Authority looks into Microsoft/OpenAI after Altman fiasco: The firing and rehiring of OpenAI’s CEO, and the fallout, which could have seen Microsoft hire all its staff, has the regulator spooked.
- Lords committee urges caution on UK use of autonomous weapons: UK government must ensure proper democratic oversight of its development and use of AI-powered weapon systems, says Lords committee.