UK government unveils AI safety research funding details

Through its AI Safety Institute, the UK government has committed an initial £4m to fund research into various risks associated with AI technologies, a pot that will increase to £8.5m as the scheme progresses

The UK government has formally launched a research and funding programme dedicated to improving “systemic AI safety”, which will see grants of up to £200,000 given to researchers working on making the technology safer.

Launched in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, part of UK Research and Innovation (UKRI), the Systemic Safety Grants Programme will be delivered by the UK’s Artificial Intelligence Safety Institute (AISI), which is expected to fund around 20 projects through the first phase of the scheme with an initial pot of £4m.

Additional cash will then be made available as further phases are launched, with £8.5m earmarked for the scheme overall.

Established in the run-up to the UK AI Safety Summit in November 2023, the AISI is tasked with examining, evaluating and testing new types of AI, and is already collaborating with its US counterpart to share capabilities and build common approaches to AI safety testing.

The £8.5m in grant funding was initially announced during the second day of the AI Seoul Summit in May 2024 by then digital secretary Michelle Donelan, but the new Labour government has now provided more detail on the ambitions and timeline of the scheme.

Focused on how society can be protected from a range of AI-related risks – including deepfakes, misinformation and cyber attacks – the grants programme will aim to build on the AISI’s work by boosting public confidence in the technology, while also placing the UK at the heart of “responsible and trustworthy” AI development.

Critical risks

The research will further aim to identify the critical risks of frontier AI adoption in essential sectors such as healthcare and energy services, pinpointing potential solutions that can then be developed into long-term tools to tackle risks in these areas.

“My focus is on speeding up the adoption of AI across the country so that we can kickstart growth and improve public services,” said digital secretary Peter Kyle. “Central to that plan, though, is boosting public trust in the innovations which are already delivering real change.

“That’s where this grants programme comes in,” he said. “By tapping into a wide range of expertise from industry to academia, we are supporting the research which will make sure that as we roll AI systems out across our economy, they can be safe and trustworthy at the point of delivery.” 

UK-based organisations will be eligible to apply for the grant funding via a dedicated website, and the programme’s opening phase will aim to deepen understanding of the challenges AI is likely to pose to society in the near future.

Projects can also include international partners, boosting collaboration between developers and the AI research community while strengthening the shared global approach to the safe deployment and development of the technology.  

The initial deadline for proposals is 26 November 2024, and successful applicants will be confirmed by the end of January 2025 before being formally awarded funding in February.

“This grants programme allows us to advance broader understanding on the emerging topic of systemic AI safety,” said AISI chair Ian Hogarth. “It will focus on identifying and mitigating risks associated with AI deployment in specific sectors which could impact society, whether that’s in areas like deepfakes or the potential for AI systems to fail unexpectedly.

“By bringing together research from a wide range of disciplines and backgrounds into this process of contributing to a broader base of AI research, we’re building up empirical evidence of where AI models could pose risks so we can develop a rounded approach to AI safety for the global public good.”

A press release from the Department for Science, Innovation and Technology (DSIT) detailing the funding scheme also reiterated Labour’s manifesto commitment to introduce highly targeted legislation for the handful of companies developing the most powerful AI models, adding that the government would ensure “a proportionate approach to regulation rather than new blanket rules on its use”.

In May 2024, the AISI announced it had opened its first international office in San Francisco to make further inroads with leading AI companies headquartered there, such as Anthropic and OpenAI.

In the same announcement, the AISI also publicly released its AI model safety testing results for the first time.

It found that none of the five publicly available large language models (LLMs) tested were able to do more complex, time-consuming tasks without humans overseeing them, and that all of them remain highly vulnerable to basic “jailbreaks” of their safeguards. It also found that some of the models will produce harmful outputs even without dedicated attempts to circumvent these safeguards.

However, the AISI claimed the models were capable of completing basic to intermediate cyber security challenges, and that several demonstrated a PhD-equivalent level of knowledge in chemistry and biology, meaning their replies to science-based questions were on par with those given by PhD-level experts.
