Ban predictive policing systems in EU AI Act, says civil society

A coalition of civil society groups has called on European lawmakers to use the upcoming Artificial Intelligence Act as an opportunity to ban predictive policing systems

Civil society groups are calling on European lawmakers to ban artificial intelligence (AI)-powered predictive policing systems, arguing they disproportionately target the most marginalised in society, infringe on fundamental rights and reinforce structural discrimination.

In an open letter to European Union (EU) institutions – which are currently attempting to regulate the use of AI through the bloc’s upcoming Artificial Intelligence Act (AIA) – the 38 civil society organisations said the increasing use of automated decision-making systems to predict, profile or assess people’s risk or likelihood of criminal behaviour presents an “unacceptable risk” to people’s fundamental rights.

This includes the right to a fair trial and the presumption of innocence, the right to private and family life, and various data protection rights.

The group, led by Fair Trials and European Digital Rights (EDRi), said: “These predictions, profiles, and risk assessments, conducted against individuals, groups and areas or locations, can influence, inform, or result in policing and criminal justice outcomes, including surveillance, stop and search, fines, questioning, and other forms of police control.”

The letter added that because the underlying data used to create, train and operate predictive policing systems often reflects historical structural biases and inequalities in society, their deployment will “result in racialised people, communities and geographic areas being over-policed, and disproportionately surveilled, questioned, detained and imprisoned across Europe”.
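The dynamic the letter describes can be illustrated with a minimal, entirely synthetic sketch. In the hypothetical example below (the area names, rates and scoring rule are assumptions for illustration, not taken from any real deployment), two areas have identical underlying offending rates, but one is patrolled far more heavily, so a naive risk score trained on the resulting arrest records rates it as far riskier.

```python
import random

random.seed(0)

# Synthetic population: two areas with the SAME underlying offending rate.
TRUE_OFFENDING_RATE = 0.05
# Hypothetical patrol intensities: area_b is watched four times as heavily,
# so offences there are far more likely to end up in the arrest data.
DETECTION_RATE = {"area_a": 0.10, "area_b": 0.40}

def simulate_arrest_records(n_per_area=10_000):
    """Generate arrest records that reflect policing intensity, not crime."""
    records = []
    for area, detection in DETECTION_RATE.items():
        for _ in range(n_per_area):
            offended = random.random() < TRUE_OFFENDING_RATE
            arrested = offended and random.random() < detection
            records.append({"area": area, "arrested": arrested})
    return records

def naive_risk_score(records):
    """A 'predictive' score: arrest frequency per area in the training data."""
    scores = {}
    for area in DETECTION_RATE:
        area_records = [r for r in records if r["area"] == area]
        scores[area] = sum(r["arrested"] for r in area_records) / len(area_records)
    return scores

records = simulate_arrest_records()
print(naive_risk_score(records))
# Typical output: area_b scores roughly four times higher than area_a,
# even though the simulated offending rate is identical in both areas.
```

In this toy model the score faithfully reproduces the pattern of past policing rather than any difference in offending, which is the core of the campaigners’ objection.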

Legal and policy officer at Fair Trials, Griff Ferris, said: “The only way to protect people from these harms and other fundamental rights infringements is to prohibit their use.”

In September 2021, Fair Trials had already called for an outright ban on the use of AI and automated systems to “predict” criminal behaviour.


As it currently stands, the AIA lists four practices that are considered to pose “an unacceptable risk” and are therefore prohibited: systems that distort human behaviour; systems that exploit the vulnerabilities of specific social groups; systems that provide social “scoring” of individuals; and the remote, real-time biometric identification of people in public places.

However, critics have previously told Computer Weekly that while the proposal provides a “broad horizontal prohibition” on these AI practices, such uses are still allowed in a law enforcement context and are “only prohibited insofar as they create physical or psychological harm”.

In their letter, published 1 March, the civil society groups explicitly call for predictive policing systems to be included in this list of prohibited AI practices, which is contained in Article 5 of the AIA.

“To ensure that the prohibition is meaningfully enforced, as well as in relation to other uses of AI systems which do not fall within the scope of this prohibition, affected individuals must also have clear and effective routes to challenge the use of these systems via criminal procedure, to enable those whose liberty or right to a fair trial is at stake to seek immediate and effective redress,” it said.

Lack of accountability

Gemma Galdon-Clavell, president and founder of Barcelona-based algorithmic auditing consultancy Eticas, said her organisation signed the letter to European lawmakers because of the current lack of accountability around how AI systems are developed and deployed.

“If we are to trust AI systems to decide on people’s future and life chances, these should be transparent as to how they work, those developing them should prove they have taken all possible precautions to remove bias and inefficiencies from such systems, and public administrations seeking to use them should develop and enforce redress systems for those who feel their rights are being infringed upon by such systems,” she told Computer Weekly.

“As algorithmic auditors, at Eticas we often see systems that work very differently to what they advertise and what is socially acceptable, and we fear that expanding AI into high-risk and high-impact contexts should not happen unless a regulatory ecosystem is in place.

“We believe that the possibilities of AI are being hindered by commercial AI practices that minimise risks and over-promise results, without any transparency or accountability.”

A group of more than 100 civil society organisations signed an open letter in November 2021, calling for European policymakers to amend the AIA so that it properly protects fundamental human rights and addresses the structural impacts of AI.

Long-standing critiques

Similar arguments have long been made by critics of predictive policing systems. In March 2020, for example, evidence submitted to the United Nations (UN) by the UK’s Equality and Human Rights Commission (EHRC) said the use of predictive policing could replicate and magnify “patterns of discrimination in policing, while lending legitimacy to biased processes”.

It added: “A reliance on ‘big data’ encompassing large amounts of personal information may also infringe on privacy rights and result in self-censorship, with a consequent chilling effect on freedom of expression and association.”

In their book Police: A Field Guide, which analyses the history and methods of modern policing, authors David Correia and Tyler Wall also argue that crime rates and other criminal activity data reflect the already racialised patterns of policing, creating a vicious circle of suspicion and enforcement against black and brown minorities in particular.

“Predictive policing … provides seemingly objective data for police to engage in those same practices, but in a manner that appears free of racial profiling … so it shouldn’t be a surprise that predictive policing locates the violence of the future in the poor of the present,” they said.

On 7 September 2021, a number of academics warned the UK’s House of Lords Home Affairs and Justice Committee (HAJC) about the dangers of predictive policing.

Rosamunde Elise van Brakel, co-director of the Surveillance Studies Network, noted that the data “often used is arrests data, and it has become very clear that this data is biased, especially as a result of ethnic profiling by the police”, and that for as long as “this data has this societal bias baked in, the software will always be biased”.

“The first step here is not a technological question, it is a question about how policing and social practices are already discriminatory or are already biased,” she said. “I do not think you can solve this issue by tweaking the technology or trying to find AI to spot bias.”
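The “vicious circle” described by Correia and Wall, and the baked-in bias van Brakel points to, can also be sketched as a simple feedback loop. The toy model below is a hypothetical illustration (all figures are assumed, not measured): patrols are allocated in proportion to previously recorded arrests, and recorded arrests in turn scale with patrol presence rather than with underlying crime.

```python
# Toy feedback loop: patrols follow past arrests, arrests follow patrols.
# All numbers are illustrative assumptions, not measurements.

TRUE_CRIME_RATE = {"area_a": 1.0, "area_b": 1.0}   # identical underlying rates
arrest_history = {"area_a": 10, "area_b": 12}      # small initial imbalance
TOTAL_PATROLS = 100

for round_no in range(1, 6):
    total_arrests = sum(arrest_history.values())
    # Predictive allocation: send patrols where arrests were recorded before.
    patrols = {a: TOTAL_PATROLS * n / total_arrests for a, n in arrest_history.items()}
    # Recorded arrests scale with patrol presence, not with underlying crime.
    for area, crime in TRUE_CRIME_RATE.items():
        arrest_history[area] += crime * patrols[area]
    share_b = arrest_history["area_b"] / sum(arrest_history.values())
    print(f"round {round_no}: share of arrests in area_b = {share_b:.2f}")

# The initial imbalance never corrects itself: area_b keeps drawing more patrols
# and recording more arrests than area_a, even though the crime rates are identical.
```

In this linear toy model the disparity simply locks in; the campaigners’ concern is that in real deployments, where predictions also feed into stop and search, questioning and other forms of police control, the gap can widen further.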

Power discrepancies

Speaking to the HAJC in October 2021, Karen Yeung – an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School – noted that the use of predictive policing technologies has the potential to massively entrench existing power discrepancies in society, as “the reality is we’ve tended to use the historic data that we have, and we have data in the masses, mostly about people from lower socio-economic backgrounds”.

“We’re not building criminal risk assessment tools to identify insider trading, or who’s going to commit the next kind of corporate fraud, because we’re not looking for those kinds of crimes,” she said.

“This is really pernicious … we are looking at high-volume data, which is mostly about poor people, and we are turning them into prediction tools about poor people, and we are leaving whole swathes of society untouched by these tools.

“This is a serious systemic problem and we need to be asking those questions,” said Yeung. “Why are we not collecting data, which is perfectly possible now, about individual police behaviour? We might have tracked down rogue individuals who are prone to committing violence against women. We have the technology, we just don’t have the political will to apply it to scrutinise the exercise of public authority.” 
