Experts call for government to look beyond AI safety
Experts have warned that near-term technology concerns, such as facial recognition and the explosion of deepfakes, are more pressing than hypothetical AI safety risks
Members of the House of Lords Communications Committee have been told of confusion surrounding the national artificial intelligence (AI) strategy.
Wendy Hall, regius professor of computer science at the University of Southampton, said: “I cannot tell you if there is a functioning AI strategy – I don’t have an official role and the AI Council has been disbanded.”
In September, the government also disbanded the advisory board of its Centre for Data Ethics and Innovation, opting instead to draw AI and data expertise from a pool of external experts. “I don’t get updates on what is happening on skills,” said Hall, adding that the government is now largely focused on the AI safety summit taking place on 1-2 November.
When asked whether the original strategy is still relevant, Hall replied: “With the Office for AI, we developed the 2021 strategy, which is still very relevant. My worry is that a lot of things on that strategy are not the focus of today’s [strategy]. I worry, because as a nation, I think we are slipping.”

She added that the UK needs to focus on training people, tackling the crisis that will emerge when AI automates jobs, and showing those affected a new career route.

Discussing the UK’s standing globally, Hall referenced the Tortoise Index, which ranks the US and China first and second in terms of progress in AI development, infrastructure and training. The UK was third, but she said it has now slipped to fourth, behind Singapore.
Although the government is focused on AI safety, Hall described those risks as “hypothetical” compared with the more immediate societal risks posed by new technology. “We need to focus on things that can do us more harm today,” she said of the emphasis on AI safety. “This isn’t as important as regulating facial recognition.”
Jeremy Silver, CEO of Digital Catapult, described the UK’s approach to AI as “somewhat fragmented”. “The origins of the strategy still stand; we still want to generate a vibrant ecosystem,” he said. “The surprise is that ChatGPT has shifted the agenda.”
This shift, he said, does not work in the UK’s favour. “Many of the players are globally active, and it is not for us to become arbiters,” said Silver.

During the session, the expert witnesses also discussed one of the problems the UK faces: much of the technology is coming out of the US, which means AI businesses are likely to turn to US regulators for guidance first, rather than to the UK.
Valuing access to public training data
Muffy Calder, vice-principal and head of the college of science and engineering at the University of Glasgow, highlighted the rich datasets the UK holds, which large language models (LLMs) such as ChatGPT require for training.
“Looking at public sector data for LLMs, we have fabulous data,” she said. “It is a precious resource. Our point to the government is that we need to be able to value that data; we need to make decisions across government that are not fragmented.”
Calder said emphasis needs to be placed on how public sector data is used, who has access to that data, and who has access to the models built on it. “It is a huge national resource,” she added.
Hall questioned why US tech firms were being given access to this public data. “We have a fabulous legacy in AI research,” she said. “Apart from compute, you need access to vast amounts of data to train. But the data is all in 10 big companies and we have to redress that balance.”
Hall also said the UK is a leader in open and accessible data. “We need to learn how to use that data,” she added. “Why are we giving it away to the US? We need to learn to use it to generate good stuff ourselves.”
This led to a further debate exploring the UK’s appetite for nurturing tech startups. “We have to change the tax regime,” said Hall, adding that investors go to the US because they get a quicker return.
Funding AI development
On the prospect of the government developing joint initiatives to match industry funding for AI development, Silver said: “We’re in a difficult economic moment. Timing isn’t great, but this direction of travel is definitely something we should follow.”
Committee members were also keen to understand why the majority of UK AI startups end up being acquired by US firms. “It does matter when our startups are bought up and sold to the US,” said Silver, reflecting on Hall’s point about return on investment.
“There is a desire here to make a quick return,” said Silver, before urging the committee to consider how the UK can encourage investors to stay in for longer. “We should also try to create a greater culture in UK industry to make the kind of investments that US tech makes.”
LLMs need massive computing power, and there is funding available for high-performance computing (HPC) resources that could be used for LLM training. “It was always expected in the past that SMEs would gain access to this kind of compute,” said Silver.
However, Silver pointed out that it is extremely difficult for the UK government to set aside the kind of funding that the likes of Microsoft and Google can put into HPC resources, which limits how effectively homegrown UK LLMs can be developed and trained.
HPC also tends to be focused either on research and development efforts in major companies or on pure research. “The question is how we make it available,” said Calder. “We need some kind of national resource. When you draft a research proposal, it is hard to anticipate what your [computational] needs will be.” She added that there also needs to be a conversation across government, industry and academic institutions about “fair access” to these compute resources.
What is clear from the experts giving evidence to the Communications Committee is that the UK appears to be falling behind, its AI strategy has shifted focus, and not enough is being done to encourage investors to nurture AI startups.