
Government reaffirms commitment to hold off on AI laws

The UK government's reaffirmed commitment to hold off on artificial intelligence legislation has been received positively by industry for balancing innovation and safety

The UK government has reaffirmed its commitment not to bring forward legislation to regulate the use of artificial intelligence (AI) until absolutely necessary, an approach the tech industry welcomed for striking the right balance between safety and innovation.

Following the publication of its AI whitepaper in March 2023, the UK government opened a public consultation so that interested parties could provide feedback on its proposed “pro-innovation” framework for regulating AI.

Responding to that consultation on 6 February 2024, the government doubled down on its approach of empowering existing regulators to make targeted interventions without the need for specific AI legislation, claiming this approach would ensure the UK remains more agile than its competitors, while also putting it on course to be a world leader in safe, responsible AI innovation.

The consultation response also outlined the government’s “initial thinking” on the introduction of binding requirements for select companies developing the most powerful AI systems, noting that any voluntary measures for AI companies would likely be “incommensurate to the risk” presented by the most advanced capabilities.

However, the government was also clear that it would not rush to legislate for binding measures, and that any future regulation would ultimately be targeted at the small number of developers of the most powerful general-purpose systems.

In an appearance before the Lords Communications and Digital Committee the same day, digital secretary Michelle Donelan defended the government’s long-awaited response to the AI whitepaper consultation, claiming that “lurching towards legislation” could “bind our hands” and undermine the agility of the current approach.

Asked by Lords whether, for example, the government will bring forward legislation to better protect copyright holders from having their material ingested by large language models (LLMs), Donelan said that while this was clearly having a “real impact” on the UK’s creative industries, it was equally important to protect the UK’s burgeoning AI sector.

“We cannot rush and get this wrong,” she said, adding that while the government is not ruling out legislating, and may eventually have to do so to achieve its desired outcomes, the preferred approach for the time being is to continue engaging stakeholders on a path forward.

Donelan further added that while industry executives from both sectors were unable to agree on a voluntary code of conduct for the use of copyrighted material in AI training data due to their different perspectives on what would be beneficial, government-led efforts are already underway to find a solution that works for each.

“It’s not a case of us just sitting back and waiting, it’s about us finding the best way forward,” she said.

Commenting on the balance between safety and innovation, Donelan added that they were “two sides of the same coin”, and that safety is needed to unlock the opportunities presented by AI technologies.

“One of the biggest risks that AI presents is that we turn people away from its potential and they become too scared to adopt it… and then we won’t be getting those fantastic [business] opportunities,” she said. “The heart of our entire approach is about trying to bolster the industry itself to develop that innovation.”

As part of its consultation response, the government also announced just over £100m of funding that will be allocated to various AI safety-related projects, as well as a nationwide series of research hubs. Around £10m of this will go towards preparing and upskilling UK regulators for the task ahead.

On the role of regulators in lieu of specific AI legislation, Donelan said the UK already has “a plethora of regulation and legislation” that interfaces with various aspects of AI, and that the government will focus its efforts on ensuring regulators have the support and skills they need to deal with the increasing use of AI throughout the economy.

However, she was also clear that should any regulatory gaps emerge, or should it become apparent that the approach is not working in some respect, the UK government will consider legislation.

Industry reacts

The tech industry – which has been generally supportive of the government’s approach in the whitepaper – was largely welcoming of the consultation response, with many praising the decision to hold off on legislating for the time being.

“The government should be applauded for listening to the CBI and other industry voices in avoiding the temptation to rush to legislate in the AI space,” said Benjamin Reid, director of technology and innovation at the Confederation of British Industry (CBI).

“Setting a clear path towards an agile, principles-based approach to regulation will not only allow firms to make the most of this important emerging technology, but will create vital space and flexibility for further innovation. 

“The announcement of additional funding to advance research and support regulators is also welcome and will help encourage many of the UK’s high growth sectors to adopt and integrate AI processes into their operations.”

John Boumphrey, Amazon’s UK country manager, similarly said that the e-commerce and cloud giant “supports the UK’s efforts to establish guardrails for AI, while also allowing for continued innovation.

“As one of the world’s leading developers and deployers of AI tools and services, trust in our products is one of our core tenets and we welcome the overarching goal of the white paper.”

Lila Ibrahim, chief operating officer at Google DeepMind, also welcomed the balance between innovation and safety in the government’s response, adding: “The hub and spoke model will help the UK benefit from the domain expertise of regulators, as well as provide clarity to the AI ecosystem – and I’m particularly supportive of the commitment to support regulators with further resources.”

Praise from industry has not been unanimous, however, with some figures questioning whether the government’s response goes far enough, and others warning of the practical challenges that still face businesses.

“Although £100m may sound like a lot, it is actually very little to support AI research & development [R&D] and won’t go far at all,” said Michael Queenan, CEO and co-founder of UK data firm Nephos Technologies.

“It shows how little the UK government thinks about AI – they’ve just committed to spend £330m with Palantir for the NHS data platform project, which AI can be built on top of, so why commit less than a third of that to spend on AI regulation and R&D? As with everything in this space, the focus is on the inputs of AI rather than on the possible downstream effects.”

Harvey Lewis, a partner at consultancy firm Ernst & Young, said that the government’s response is a positive step, but warned that rapid advancements in generative AI mean regulation will be an ongoing challenge.

“The technology and how it’s used is also continually evolving, so ongoing collaboration across the public, private and third sectors will be crucial in harnessing the full potential of AI while also prioritising safe adoption,” he said.

Commenting on the UK and European Union’s (EU) diverging approaches to AI regulation, Sarah Pearce, a partner at law firm Hunton Andrews Kurth, said that it is still an open question “whether it is better to implement prescriptive regulation which is actively enforced, or take a more restrained approach based on principles to instil best practices and encourage innovation”.

She added that with no specific AI legislation, companies developing or deploying AI systems in their businesses “need to be aware of and comply with existing data protection legislation”.

Union perspective

Unions have criticised the government response for failing to introduce new workplace AI laws, which they have long argued are needed to protect workers from algorithmically induced discrimination; automated decision-making that affects people’s employment or work conditions; work intensification; and “spiralling” surveillance.

“AI is already making life-changing decisions about the way we work – like how people are hired, performance-managed and even fired. That’s why we need employment-specific legislation to ensure AI is used fairly in the workplace,” said TUC general secretary Paul Nowak.

“But the government is still ducking this issue by refusing to pass new laws and to give workers and business the certainty they need. A minimalist approach to regulating AI is not going to cut it. It will just leave many at risk of exploitation and discrimination.”

In mid-January 2024, a review by Wales TUC found that asymmetric power dynamics at work are fuelling Welsh workers’ negative experience of AI, and are making it difficult to meaningfully challenge the imposition of new technologies in the workplace.

“AI presents novel technical, legal and operational challenges that threaten to deepen power asymmetries in the workplace and wider economy,” it said. “However, this dynamic should be seen in a general context of some of the harshest laws governing industrial relations in Western Europe, and employment rights that are not designed to empower workers to be active stakeholders in their workplaces, regarding AI or any other issue.”
