
MEPs vote in raft of amendments to EU AI Act

The proposed amendments to the EU’s AI Act have garnered a mixed reception from both industry and civil society, with the former seeing it as too stringent and the latter as not stringent enough in many areas, despite positive progress in others


MEPs in two European Parliament committees have overwhelmingly voted for a raft of amendments to the Artificial Intelligence Act (AIA), including a number of bans on “intrusive and discriminatory” systems, but there are still concerns around lingering loopholes and the potential for state overreach.

The list of prohibited systems deemed to represent “an unacceptable level of risk to people’s safety” now includes the use of live facial recognition in publicly accessible spaces; biometric categorisation systems using sensitive characteristics; and the use of emotion recognition in law enforcement, border management, the workplace and educational institutions.

Members of the Committees for Internal Market and Consumer Protection (IMCO) and for Civil Liberties, Justice and Home Affairs (LIBE) also opted for a complete ban on predictive policing systems (including both individual and place-based profiling, the latter of which was not previously included), and on the indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

Retrospective remote biometric identification systems are now also prohibited, although MEPs kept an exception for law enforcement, which they said would apply only to the prosecution of serious crimes and only after official judicial authorisation.

On top of prohibitions, the MEPs also voted to expand the definition of what is considered “high risk” to include AI systems that harm people’s health, safety, fundamental rights or the environment, as well as measures to boost the accountability and transparency of AI deployers.

This includes an obligation to perform fundamental rights impact assessments before deploying high-risk systems (which public authorities will have to publish), and an expansion of the scope of the AIA’s publicly viewable database of high-risk systems to also include those deployed by public bodies.

Completely new measures around “foundation” models and generative AI systems have also been introduced. The creators of such systems will be obliged to assess a range of risks related to their systems, including the potential for environmental damage and whether they guarantee protection of fundamental rights, and will be forced to disclose “a sufficiently detailed summary of the use of training data protected” by copyright laws.

“It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level,” said AIA co-rapporteur Brando Benifei. “We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

However, the amendments only represent a “draft negotiating mandate” for the European Parliament, and are still subject to a plenary vote of the entire Parliament in mid-June 2023. Following this vote, behind-closed-doors trilogue negotiations will begin between the European Parliament, the Council of the EU and the European Commission – all of which have adopted different positions.

Daniel Leufer, a senior policy analyst at Access Now, said, for example, that the Council’s position includes a much wider range of exemptions for the use of AI by law enforcement and immigration authorities, adding: “It’s hard to know what’s a real position that someone’s not going to move from.”

Initial reactions

Responding to the amendments, the Computer & Communications Industry Association (CCIA Europe) – whose members include the likes of Meta, Google, Amazon, BT, Uber, Red Hat and Intel, among many other tech firms – said that although there were some “useful improvements”, such as the definition of AI being aligned to that of the Organisation for Economic Co-operation and Development (OECD), “other changes introduced by Parliament mark a clear departure from the AI Act’s actual objective, which is promoting the uptake of AI in Europe.”

It specifically claimed that “useful AI applications would now face stringent requirements, or might even be banned” due to the “broad extension” of prohibited and high-risk use cases: “By abandoning the risk-based structure of the act, Members of the European Parliament dropped the ambition to support AI innovation.”

CCIA Europe’s policy manager, Boniface de Champris, added that the association is now calling on “EU lawmakers to maintain the AI Act’s risk-based approach in order to ensure that AI innovation can flourish in the European Union.

“The best way for the EU to inspire other jurisdictions is by ensuring that new regulation will enable, rather than inhibit, the development of useful AI practices.”

Tim Wright, a tech and AI regulatory partner at London law firm Fladgate, similarly noted that the AIA “may take the edge off” European AI companies’ ability to innovate.

“US-based AI developers will likely steal a march on their European competitors given news that the EU parliamentary committees have green-lit its ground-breaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset,” he said.

“The US tech approach (think Uber) is typically to experiment first and – once market and product fit is established – to retrofit to other markets and their regulatory framework. This approach fosters innovation, whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.

“The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset; however, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”

Civil society groups that have been campaigning around the AIA, on the other hand, welcomed a number of the new amendments, but warned there are still a number of issues, particularly around industry self-assessment and carve-outs for national security or law enforcement.

Griff Ferris, senior legal and policy officer at non-governmental organisation Fair Trials – which has been explicitly calling for a ban on the use of AI and other automated systems to “predict” criminal behaviour since September 2021 – described the prohibition of predictive policing as a “landmark result” that will protect people from an “incredibly harmful, unjust and discriminatory” practice.

“We’ve seen how the use of these systems repeatedly criminalises people, even whole communities, labelling them as criminals based on their backgrounds. These systems automate injustice, exacerbating and reinforcing racism and discrimination in policing and the criminal justice system, and feeding systemic inequality in society,” he said.

“The EU Parliament has taken an important step in voting for a ban on these systems, and we urge them to finish the job at the final vote in June.”

Ella Jakubowska, senior policy adviser at European Digital Rights (EDRi), added: “We are delighted to see Members of the European Parliament stepping up to prohibit so many of the practices that amount to biometric mass surveillance. With this vote, the EU shows it is willing to put people over profits, freedom over control, and dignity over dystopia.”

Leufer similarly welcomed the two committees’ amendments, which he said better protect people’s rights: “Important changes have been made to stop harmful applications like dangerous biometric surveillance and predictive policing, as well as increasing accountability and transparency requirements for deployers of high-risk AI systems.

“However, lawmakers must address the critical gaps that remain, such as a dangerous loophole in Article 6’s high-risk classification process.”

Self-assessment

Speaking with Computer Weekly ahead of the vote, Leufer said Article 6 was previously amended by the Council to exempt systems from the high-risk list (contained in Annex III of the AIA) that would be “purely accessory”, which would essentially allow AI providers to opt out of the regulation based on a self-assessment of whether their applications are high-risk or not.

“I don’t know who is selling an AI system that does one of the things in Annex III, but that is purely accessory to decision-making or outcomes,” he said. “The big danger is that if you leave it to a provider to decide whether or not their system is ‘purely accessory’, they’re hugely incentivised to say that it is and to just opt out of following the regulation.”

Leufer said the Parliament text voted on by the two committees includes “something much worse… which is to allow providers to do a self-assessment to see if they actually pose a significant risk”.

EDRi shared similar concerns around Article 6, noting it would incentivise under-classification and provide a basis for companies to argue that they should not be subject to the AIA’s requirements for high-risk systems.

“Unfortunately, the Parliament is proposing some very worrying changes relating to what counts as ‘high-risk’ AI,” said Sarah Chander, a senior policy adviser at EDRi. “With the changes in the text, developers will be able to decide if their system is ‘significant’ enough to be considered high risk, a major red flag for the enforcement of this legislation.”

On high-risk classifications generally, Conor Dunlop, the European public policy lead at the Ada Lovelace Institute, told Computer Weekly that the requirements placed on high-risk systems – including the need for quality data sets, technical documentation, transparency, human oversight, et cetera – should already be industry standard practices.

“There’s been a lot of pushback from industry to say that this is overly burdensome,” he said, adding that a solution would be to simply open more systems up to third-party assessments and conformity checks: “I think that would compel safer development and deployment.”

State overreach

Regarding the prohibitions on live and retrospective facial recognition, Leufer added that while the Parliament has deleted all the exemptions for the former, it has not done so for the latter, which can still be used by law enforcement with judicial authorisation.

“Any exception means that the infrastructure needs to be there for use in those exceptional circumstances. Either that requires permanent infrastructure being installed in a public space, or it requires the purchase of mobile infrastructure,” he said. “They’re not going to leave it sitting around for three years and not use it, it’s going to be incentivised to show results that it was a worthwhile investment, and it will lead to overuse.”

Pointing to a joint opinion on the AIA published by two pan-European data protection authorities, Leufer added that those bodies called for a ban on remote biometric identification in any context, and clearly stated that both live and retrospective facial recognition are incompatible with Europe’s data protection laws.  

“It’s already illegal, we [at Access Now] have been saying that for a long time, so it would be good if the AI Act put it to rest and had an explicit prohibition,” he said. “Anything less than a full ban is actually worse than not having anything, because it could be seen as providing a legal basis for something that’s already illegal.”

Leufer added that part of the problem is that lawmakers have fallen into the trap of seeing live facial recognition as somehow more dangerous than retrospective facial recognition: “There is something visceral about being matched on the spot by this thing and then having the instant intervention, but I really think the retrospective is much more dangerous, as it weaponises historic CCTV footage, photos, all of this content that’s lying around, to just destroy anonymity.”

There are also concerns about the AIA allowing the development and deployment of AI for national security or military purposes without placing any restrictions on its use.

In a conversation with Computer Weekly about the ethical justifications of military AI, Elke Schwarz – an associate professor of political theory at Queen Mary University of London and author of Death machines: The ethics of violent technologies – described the AIA’s approach to military AI as “a bit of a muddle”.

This is because, while AI systems specifically designed for military purposes are exempt from the AIA’s requirements, the vast majority of AI systems are developed in the private sector for other uses and then transferred into the military domain afterwards.

“Palantir works with the NHS and works with the military, you know, so they have two or three core products of AI systems that obviously change based on different data and contexts, but ultimately it’s a similar logic that applies,” she said.

“Most big ambitious AI regulations end up weirdly bracketing the military aspect. I think there’s also a big lobby not to regulate, or let the private sector regulate ultimately, which is not very effective usually.”

In a legal opinion prepared for the European Center for Not-for-Profit Law in late 2022, Douwe Korff, emeritus professor of international law at London Metropolitan University, said: “The attempts to exclude from the new protections, in sweeping terms, anything to do with AI in national security, defence and transnational law enforcement contexts, including research into as well as the ‘design, development and application of’ artificial intelligence systems used for those purposes, also by private companies, are pernicious: if successful, they would make the entire military-industrial-political complex a largely digital rights-free zone.”

Describing the national security exemption as “a huge potential loophole”, Ferris also noted it would “undermine all other protections” in the AIA, “particularly in the context of migration, policing, and criminal justice, because those are all issues which governments see as issues of national security”.

Access Now and EDRi are also calling for the national security and military exemptions to be dropped from the AIA.
