AI Seoul Summit: 27 nations and EU to set red lines on AI risk
The countries will now work together to identify thresholds at which the risks presented by an AI model or system would be unacceptable without safeguards in place, as well as develop interoperable safety testing regimes for the technology
More than two dozen countries have committed to developing shared risk thresholds for frontier artificial intelligence (AI) models to limit their harmful impacts, as part of an agreement to promote safe, innovative and inclusive AI.
Signed on the second day of the AI Seoul Summit by 27 governments and the European Union (EU), the Seoul ministerial statement for advancing AI safety, innovation and inclusivity sets out their commitment to deepening international cooperation on AI safety.
This will include collectively agreeing on risk thresholds where the risks posed by AI models or systems would be severe without appropriate mitigations; establishing interoperable risk management frameworks for AI in their respective jurisdictions; and promoting credible external evaluations of AI models.
On severe risks, the statement highlighted the potential for AI model capabilities that would allow systems to evade human oversight or otherwise act autonomously without explicit human approval or permission, as well as help non-state actors advance the development of chemical or biological weapons.
Noting “it is imperative to guard against the full spectrum of AI risks”, the statement added that the AI safety institutes being set up around the world will share best practices and evaluation datasets, and collaborate to establish interoperable safety testing guidelines.
“Criteria for assessing the risks posed by frontier AI models or systems may include consideration of capabilities, limitations and propensities, implemented safeguards, including robustness against malicious adversarial attacks and manipulation, foreseeable uses and misuses, deployment contexts, including the broader system into which an AI model may be integrated, reach, and other relevant risk factors,” it said.
While the statement lacked specificity on these points, it did affirm the signatories’ commitment to relevant international law, including United Nations (UN) resolutions and international human rights law.
The full list of signatories
- Australia
- Canada
- Chile
- France
- Germany
- India
- Indonesia
- Israel
- Italy
- Japan
- Kenya
- Mexico
- Netherlands
- Nigeria
- New Zealand
- The Philippines
- Republic of Korea
- Rwanda
- Kingdom of Saudi Arabia
- Singapore
- Spain
- Switzerland
- Türkiye
- Ukraine
- United Arab Emirates
- United Kingdom
- United States of America
- European Union
UK digital secretary Michelle Donelan said the agreements reached in Seoul mark the beginning of “phase two of the AI safety agenda”, in which countries will be taking “concrete steps” to become more resilient to various AI risks.
“For companies, it is about establishing thresholds of risk beyond which they won’t release their models,” she said. “For countries, we will collaborate to set thresholds where risks become severe. The UK will continue to play the leading role on the global stage to advance these conversations.”
Innovation and inclusivity
The statement also stressed the importance of “innovation” and “inclusivity”. On the former, it specifically highlighted the need for governments to prioritise AI investment and research funding; facilitate access to AI-related resources for small and medium-sized enterprises, startups, academia and individuals; and account for sustainability when developing AI.
“In this regard, we encourage AI developers and deployers to take into consideration their potential environmental footprint such as energy and resource consumption,” it said. “We welcome collaborative efforts to explore measures on how our workforce can be upskilled and reskilled to be confident users and developers of AI to enhance innovation and productivity.
“Furthermore, we encourage efforts by companies to promote the development and use of resource-efficient AI models or systems and inputs such as applying low-power AI chips and operating environmentally friendly datacentres throughout AI development and services.”
Commenting on the sustainability aspects, South Korean minister of science and ICT Lee Jong-Ho said: “We will strengthen global cooperation among AI safety institutes worldwide and share successful cases of low-power AI chips to help mitigate the global negative impacts on energy and the environment caused by the spread of AI.
“We will carry forward the achievements made in ROK [the Republic of Korea] and the UK to the next summit in France, and look forward to minimising the potential risks and side effects of AI while creating more opportunities and benefits.”
On inclusivity, the statement added that the governments are committed to promoting AI-related education through capacity-building and increased digital literacy; using AI to address some of the world’s most pressing challenges; and fostering governance approaches that encourage the participation of developing countries.
Day one
During the first day of the summit, the EU and a smaller group of 10 countries signed the Seoul Declaration, which builds on the Bletchley Declaration signed six months ago by 28 governments and the EU at the UK’s inaugural AI Safety Summit.
While the Bletchley Declaration noted the importance of inclusive action on AI safety, the Seoul Declaration explicitly affirmed “the importance of active multi-stakeholder collaboration” in this area, and committed the governments involved to “actively” include a wide range of stakeholders in AI-related discussions.
The same 10 countries and the EU also signed the Seoul Statement of Intent Toward International Cooperation on AI Safety Science, which will see publicly backed research institutes come together to ensure “complementarity and interoperability” between their technical work and general approaches to AI safety – something that has already been taking place between the US and UK institutes.
On the same day, 16 global AI firms signed the Frontier AI Safety Commitments, a voluntary set of measures for how they will safely develop the technology.
Specifically, they committed to assessing the risks posed by their models across the entire AI lifecycle; setting unacceptable risk thresholds to deal with the most severe threats; articulating how mitigations will be identified and implemented to ensure those thresholds are not breached; and continually investing in their safety evaluation capabilities.
Under one of the key voluntary commitments, the companies will not develop or deploy AI systems if the risks cannot be sufficiently mitigated.
Commenting on the companies’ commitment to risk thresholds, Beth Barnes, founder and head of research at AI model safety non-profit METR, said: “It’s vital to get international agreement on the ‘red lines’ where AI development would become unacceptably dangerous to public safety.”
Read more about artificial intelligence
- Autonomous weapons reduce moral agency and devalue human life: Military technology experts gathered in Vienna have warned about the detrimental psychological effects of AI-powered weapons, arguing that implementing systems of algorithmic-enabled killing dehumanises both the user and the target.
- Big tech’s cloud oligopoly risks AI market concentration: The oligopoly that big tech giants have over cloud computing could translate to a similar domination in the AI market, given the financial and compute resources needed to make the technology effective at scale.
- Creative workers say livelihoods threatened by generative AI: Computer Weekly speaks with various creative workers about the impact generative artificial intelligence systems are having on their work and livelihoods.