EU’s AI Act fails to protect the rule of law and civic space
Analysis reveals that the AI Act is ‘riddled with far-reaching exceptions’ and its measures to protect fundamental rights are insufficient
The European Union’s (EU) Artificial Intelligence (AI) Act “fails to effectively protect the rule of law and civic space”, according to an assessment by the European Center for Not-for-Profit Law (ECNL).
The study identifies “significant gaps and legal uncertainty” in the AI Act, which it states was “negotiated and finalised in a rush”. It also concludes that the Act prioritises “industry interests, security services and law enforcement bodies” over the rule of law and civic space.
The ECNL’s evaluation of the Act identifies five fundamental flaws, where gaps in the legislation, loopholes and secondary legislation could “easily undermine the safeguards established by the AI Act, further eroding the fundamental rights and rule of law standards in the long term”.
These include the blanket exemption for national security AI use cases, including “remote biometric identification”; limited avenues of redress for individuals; and weak impact assessment requirements.
The ECNL has monitored and participated in discussions surrounding the EU’s AI Act since its initial proposal in 2021, in response to AI systems being used in the surveillance of activists, the profiling of airline passengers and the appointment of judges to court cases.
After a three-year legislative process, the European Parliament approved the Act last month.
The Act’s loopholes
Though Europe has laid out its first legal framework targeting the AI industry, ECNL’s report notes that there are no “guidelines and delegated acts to clarify the often vague requirements”, leaving “too much to the discretion of the Commission, secondary legislation or voluntary codes of conduct”.
It added that many of the Act’s prohibitions are filled with loopholes that render them “empty declarations”, due to “far-reaching exceptions”. Additionally, a number of other loopholes allow companies and public authorities to escape the scope of the Act’s list of high-risk systems.
“Despite promises that the EU’s AI Act would put people at its centre, the harsh reality is that we have a law with very little to protect us from the threats and harms posed by the proliferation of AI systems in practically all areas of life,” said Ella Jakubowska, head of policy at non-governmental organisation European Digital Rights (EDRi).
In practice, civil society organisations (CSOs) can only represent individuals whose rights have been violated when consumer rights are involved, meaning they “could file a complaint on behalf of a group of people harmed, e.g. by credit scoring systems, but not on behalf of protestors whose civic freedoms have been violated by the use of biometric surveillance in the streets”.
The Act does not guarantee the right to participation – “public authorities or companies will not be required to engage with external stakeholders when assessing fundamental rights impacts of AI”.
State authority usage of AI
Furthermore, the standards for the Act’s fundamental rights impact assessment (FRIA) for public authorities planning to use high-risk AI systems are weak, with three significant shortcomings: there is no explicit obligation to assess these impacts; CSOs do not have a “direct, legally binding avenue” to contribute to impact assessments; and law enforcement and migration authorities will not have to reveal whether they use risky AI processes.
More broadly, AI systems developed for national security purposes are given a “blanket exemption”, which means governments could practically “invoke national security to introduce otherwise prohibited systems, such as mass biometric surveillance” and evade the Act’s regulations on risky AI systems.
“The overly broad exemption for national security also gives countries and companies a get-out-of-jail-free card to ignore the entire law, and the lack of application of the rules to tech exports further shows the limited thinking behind this Act,” said Jakubowska.
According to Caterina Rodelli, an EU policy analyst at digital rights group Access Now, the AI Act does not provide meaningful safeguards in the border and migration context, as “the most dangerous systems are not meaningfully banned when used by police and migration authorities, such as remote biometric identification and lie-detectors”.
She added, with regards to police and migration authorities being exempt from transparency obligations, that “this is a dangerous precedent that will empower state authorities to use AI systems against the unwanted in society: first racialised people, but most likely then human rights defenders, journalists and political opponents”.
Setting precedent
Alexandra Geese MEP, vice-president of the Greens/European Free Alliance, said: “During the negotiations, we worked hard to achieve a complete ban on real-time biometric surveillance. Unfortunately, the list of bans was watered down by the Council in the final stages and even after the final trilogue. The European Council missed the opportunity to effectively protect its citizens against AI surveillance.
“But, without the AI Act, we would have no instrument to hold the AI industry accountable,” she said. “The EU has created the world’s first legal framework with solid rules that will allow us to improve AI tools in the future. With clear standards for the development and use of AI systems, Europe is paving the way for a future in which technology serves people while respecting our fundamental rights and democratic principles.”
Jakubowska argued that while there should be a continued push to fight for the strongest application of the EU’s AI Act rules, it’s clear the law should not become the global blueprint for how to protect rights and democracy in the face of society’s increasing digitisation.
Rodelli added: “The AI Act sets a dangerous precedent as it creates a separate parallel framework for law enforcement and migration authorities who use AI systems. This second standard will impact first of all the most marginalised in society, namely migrants and racialised people in the EU.”
She further stated that the repressive use of AI systems under other legislation following this precedent has already begun, pointing to the vote on the EU New Pact on Migration and Asylum on April 10. “This set of reforms is underpinned by a variety of surveillance technologies that would fall under the scope of the AI Act, but that will not de facto be regulated as such because of this parallel lax regulation system that has been put in place for police and migration authorities,” said Rodelli.
Access Now has released a statement on the link between the AI Act and Migration Pact, noting how both will foster greater surveillance in the EU.
Read more about artificial intelligence
- Inclusive approaches to AI governance needed to engage public: Technology practitioners and experts gathered at an annual Alan Turing Institute-run conference discussed the need for more inclusive approaches to AI governance that actually engage citizens and workers.
- Lord Holmes: UK cannot ‘wait and see’ to regulate AI: Legislation is needed to seize the benefits of artificial intelligence while minimising its risks, says Lord Holmes – but the government’s ‘wait and see’ approach to regulation will fail on both fronts.
- Government insists it is acting ‘responsibly’ on military AI: The government has responded to calls from a Lords committee that it must “proceed with caution” when it comes to autonomous weapons and military artificial intelligence, arguing that caution is already embedded throughout its approach.