UK and others sign first ‘binding’ treaty on AI and human rights
The UK, US and EU have all signed a treaty from the Council of Europe that aims to mitigate the threat AI poses to human rights, democracy and the rule of law, but commentators say it lacks enforcement mechanisms and creates loopholes
The UK government has signed the world’s first “legally binding” treaty on artificial intelligence (AI) and human rights, which commits states to implementing safeguards against various threats posed by the technology.
Drawn up by the Council of Europe – an international organisation set up in 1949 to uphold human rights throughout the continent – the treaty has now been signed by the UK, alongside Andorra, Georgia, Iceland, Norway, the Republic of Moldova and San Marino, as well as Israel, the US and the European Union (EU).
Officially titled the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the treaty outlines a number of principles that states must adhere to throughout the entire lifecycle of an AI system, including privacy and data protection; transparency and oversight; equality and non-discrimination; safe innovation; and human dignity.
To ensure these principles are protected, the treaty further requires countries to put in place measures to assess and mitigate any potentially adverse impacts of AI, as well as provide effective remedies where violations of human rights do occur as a result of its operation.
“Artificial intelligence has the capacity to radically improve the responsiveness and effectiveness of public services, and turbocharge economic growth,” said Lord Chancellor and justice secretary Shabana Mahmood.
“However, we must not let AI shape us – we must shape AI. This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.”
The secretary of state for science, innovation and technology, Peter Kyle, added that the treaty will be key to realising the potential of AI in boosting economic growth and transforming public services: “Once in force, it will further enhance protections for human rights, rule of law and democracy – strengthening our own domestic approach to the technology while furthering the global cause of safe, secure and responsible AI.”
Although the agreement applies to all public sector-related AI use – including where private companies are acting on public bodies’ behalf – the text itself does not explicitly cover private sector use of the technology, and leaves it up to individual states to determine the extent to which companies must adhere to the requirements and obligations laid out.
The text also includes an explicit carve-out for national security interests. “A Party shall not be required to apply this Convention to activities within the lifecycle of artificial intelligence systems related to the protection of its national security interests, with the understanding that such activities are conducted in a manner consistent with applicable international law,” it says.
While a member state may ban particular use cases of AI where it believes they are incompatible with human rights, the text does not detail any particular sanctions for a government’s non-compliance.
No strict enforcement
Lawyers at Bird & Bird, for example, noted there is only a vague compliance mechanism in the form of reporting on the activities undertaken to meet the treaty’s requirements, “but there are no strict enforcement criteria and so the effectiveness and impact of the AI Convention could be limited”.
However, the treaty does contain dispute mechanisms for governments that disagree on the interpretation or application of the framework, and does allow countries to “denounce” (in other words, opt out of) the convention if they provide a notification to the secretary general of the Council of Europe.
Nick Reiners, a senior geo-technology analyst at Eurasia Group, told Gzero Media that the opt-in nature of the treaty means it is not especially legally binding, despite how it is being billed by its signatories. He added that the national security carve-out also waters down how stringent it is, noting, for example, that it would not affect how Israel is deploying AI in Gaza to select and attack targets.
He added that the EU will have signed in an attempt to “internationalise the AI Act”, so that companies and governments outside the continent fall in line with its priorities on the technology.
The treaty will enter into force three months after it has been ratified by at least five signatories, including at least three Council of Europe members, after which governments from across the world will be eligible to join it.