EU lawmakers propose limited ban on predictive policing systems

MEPs’ joint report on the European Artificial Intelligence Act sets out a limited ban on predictive policing systems, alongside a raft of further amendments to improve redress mechanisms and extend the list of AI systems deemed high-risk

Two MEPs jointly in charge of overseeing and amending the European Union’s forthcoming Artificial Intelligence Act (AIA) have said that the use of AI-powered predictive policing tools to make “individualised risk assessments” should be prohibited on the basis that it “violates human dignity and the presumption of innocence”.

Ioan-Dragoş Tudorache, co-rapporteur on behalf of the Civil Liberties, Justice and Home Affairs (LIBE) committee, and Brando Benifei, co-rapporteur on behalf of the Internal Market and Consumer Protection (IMCO) committee, confirmed their support for a partial ban on predictive policing AI systems in a draft report.

“Predictive policing violates human dignity and the presumption of innocence, and it holds a particular risk of discrimination. It is therefore inserted among the prohibited practices,” said the 161-page report.

As it currently stands, the AIA lists four practices considered to pose “an unacceptable risk” and therefore prohibited: systems that distort human behaviour; systems that exploit the vulnerabilities of specific social groups; systems that provide social “scoring” of individuals; and the remote, real-time biometric identification of people in public places.

Critics have previously told Computer Weekly that while the proposal provides a “broad horizontal prohibition” on these AI practices, such uses are still allowed in a law enforcement context.

Although the rapporteurs’ suggested prohibition does limit law enforcement’s use of such systems, the ban would extend only to systems that “predict the probability of a natural person to offend or reoffend”, and not to place-based predictive systems used to profile areas and locations.

Sarah Chander, a senior policy adviser at European Digital Rights (EDRi), told Computer Weekly: “Prohibiting predictive policing is a landmark step in European digital policy – never before has data-driven racial discrimination been so high on the EU’s agenda. But the predictive policing ban has not been extended to predictive policing systems that profile neighbourhoods for the risk of crime, which can increase experiences of discriminatory policing for racialised and poor communities.”

Non-governmental organisation (NGO) Fair Trials also welcomed the proposal, but similarly took issue with the exclusion of place-based predictive analytics.

“Time and time again, we’ve seen how the use of these systems exacerbates and reinforces discriminatory police and criminal justice action, feeds systemic inequality in society, and ultimately destroys people’s lives,” said Griff Ferris, legal and policy officer at Fair Trials. “However, the ban must also extend to include predictive policing systems that target areas or locations, that have the same effect.

“We now call on all MEPs to stay true to their mandate to protect people’s rights by supporting and voting in favour of the ban on all uses of predictive AI in policing and criminal justice.”

On 1 March 2022, Fair Trials, EDRi and 43 other civil society organisations collectively called on European lawmakers to ban AI-powered predictive policing systems, arguing that they disproportionately target the most marginalised people in society, infringe fundamental rights and reinforce structural discrimination.

Fair Trials had previously called for an outright ban on the use of AI and automated systems to “predict” criminal behaviour in September 2021.

Apart from the amendments relating to predictive policing, the text of the draft report suggests a number of further changes to the AIA.

These include extending the list of high-risk applications to cover AI use cases in medical triaging, insurance and deepfakes, as well as systems designed to interact with children; and creating a two-tiered approach whereby the European Commission would take on greater responsibility for assessing AI systems in cases of “widespread infringements”, ie where a system affects individuals in three or more member states.

The rapporteurs have also widened the mechanisms for redress, adding the right for people to complain to supervisory authorities and to seek both individual and collective redress when their rights have been violated. Consumer groups, for example, would be able to bring legal proceedings under the Representative Actions Directive.

The draft report also proposes amendments to recognise people “affected” by AI, whereas the AIA currently only recognises “providers” – those putting an AI system on the market – and “users” – those deploying the AI system.

This is in line with recommendations published by the Ada Lovelace Institute on 31 March 2022, which said the AIA should recognise “affected persons” as distinct actors.

The Ada Lovelace Institute also recommended reshaping the meaning of “risk” within the AIA to judge systems based on their “reasonably foreseeable” purpose, which the Tudorache-Benifei report has now written into its suggested amendments.

In terms of governance, the report proposes a number of obligations for public authorities – but not private, commercial entities – including the need to conduct fundamental rights impact assessments, to inform people affected by high-risk AI systems, and to register any high-risk use cases in the public database defined in Article 60 of the AIA.

“The European parliament negotiators fill an important gap – the right of affected persons to complain when AI systems violate our rights,” said EDRi’s Chander. “However, they can go further and require that all users of high-risk AI, not just public authorities, should be transparent about their use.”

The Tudorache-Benifei report will set the terms of the debate around the AIA, with both the LIBE and IMCO committees set to discuss its conclusions on 11 May before voting on the amendments at the end of November 2022.

However, it is currently unclear whether the committees will adopt the report’s proposed amendments, given European lawmakers’ diverging opinions on predictive policing.

On 5 October 2021, for example, the European Parliament approved a LIBE committee report on the use of AI by police in Europe, which opposed using the technology to “predict” criminal behaviour and called for a ban on biometric mass surveillance.

But two weeks later, the Parliament voted in favour of a LIBE committee proposal to extend the mandate of Europol, the EU’s law enforcement agency, which would allow it to exchange information with private companies more easily and to develop AI-powered policing tools.

Civil rights groups said at the time that the proposed mandate represented a “blank cheque” for the police to create AI systems that risk undermining fundamental human rights.

There are also points of divergence between Benifei and Tudorache themselves. They could not agree on remote biometric identification, for example, so the issue has been left out of the report.
