
AI Seoul Summit: 10 nations and EU recommit to safe inclusive AI

During the latest AI Summit in South Korea, the participating governments reaffirmed their prior commitments to deepening international cooperation on AI safety, and agreed to launch an international network of ‘safety institutes’

Ten governments and the European Union (EU) gathered at South Korea’s AI Seoul Summit have signed a joint declaration laying out their “common dedication” to international cooperation on artificial intelligence, affirming the need for them to “actively include” a wide range of voices in the ongoing governance discussions.

Signed on 21 May 2024, the Seoul Declaration for safe, innovative and inclusive AI builds on the Bletchley Declaration signed six months ago by 28 governments and the EU at the UK’s inaugural AI Safety Summit.

Affirming the need for an inclusive, human-centric approach to ensure the technology’s trustworthiness and safety, the Bletchley Declaration said international cooperation between countries would be focused on identifying AI safety risks of shared concern; building a shared scientific and evidence-based understanding of these risks; developing risk-based governance policies; and sustaining that understanding as capabilities continue to develop.

While the Bletchley Declaration noted the importance of inclusive action on AI safety, the Seoul Declaration – which was signed by Australia, Canada, the EU, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the UK, and the US – has explicitly affirmed “the importance of active multi-stakeholder collaboration” in this area, and committed the governments involved to “actively” including a wide range of stakeholders in AI-related discussions.

Despite the positivity of government officials and tech industry representatives in the wake of the last AI Summit, there was concern from civil society and trade unions about the exclusion of workers and others directly affected by AI, with more than 100 of these organisations signing an open letter branding the event “a missed opportunity”.

While there are some new additions, the latest Seoul Declaration primarily reiterates many of the commitments made at Bletchley, particularly around the importance of deepening international cooperation and ensuring AI is used responsibly to, for example, protect human rights and the environment.

It also reiterated the previous commitment to develop risk-based governance approaches, adding that these will now need to be interoperable with one another, and committed the signatories to further building out the international network of scientific research bodies established during the last Summit, such as the UK’s and the US’s separate AI Safety Institutes.

Linked to this, the same 10 countries and the EU signed the Seoul Statement of Intent toward International Cooperation on AI Safety Science, which will see publicly backed research institutes that have already been established come together to ensure “complementarity and interoperability” between their technical work and general approaches to AI safety – something that has already been taking place between the US and UK institutes. 

“Ever since we convened the world at Bletchley last year, the UK has spearheaded the global movement on AI safety, and when I announced the world’s first AI Safety Institute, other nations followed this call to arms by establishing their own,” said digital secretary Michelle Donelan.

“Capitalising on this leadership, collaboration with our overseas counterparts through a global network will be fundamental to making sure innovation in AI can continue with safety, security and trust at its core.”

Ahead of the Seoul Summit, the UK AI Safety Institute (AISI) announced that it would be establishing new offices in San Francisco to access leading AI companies and Bay Area tech talent, and publicly released its first set of safety testing results.

It found that none of the five unnamed large language models (LLMs) it had assessed were able to do more complex, time-consuming tasks without humans overseeing them, and that all of them remain highly vulnerable to basic “jailbreaks” of their safeguards. It also found that some of the models will produce harmful outputs even without dedicated attempts to circumvent these safeguards.

In a blog post from mid-May 2024, the Ada Lovelace Institute (ALI) questioned the overall effectiveness of the AISI and the dominant approach of model evaluations in the AI safety space, and further questioned the voluntary testing framework that means the Institute can only gain access to models with the agreement of companies.

“The limits of the voluntary regime extend beyond access and also affect the design of evaluations,” it said. “According to many evaluators we spoke with, current evaluation practices are better suited to the interests of companies than publics or regulators. Within major tech companies, commercial incentives lead them to prioritise evaluations of performance and of safety issues posing reputational risks (rather than safety issues that might have a more significant societal impact).”
