
NGO Fair Trials calls on EU to ban predictive policing systems

The use of artificial intelligence to predict criminal behaviour should be banned because of its discriminatory outcomes and high risk of further entrenching existing inequalities, claims Fair Trials

The European Union (EU) should place an outright ban on the use of artificial intelligence (AI) and automated systems to “predict” criminal behaviour, says a non-governmental organisation (NGO) campaigning for fair and equal criminal justice systems globally.

According to Fair Trials, the use of automated decision-making systems to predict, profile or assess people’s risk or likelihood of criminal behaviour – otherwise known as predictive policing – is reinforcing discrimination and undermining fundamental human rights, including the right to a fair trial and the presumption of innocence.

The group’s call for an outright ban on such systems comes ahead of the European Parliament’s debate on the use of AI in criminal matters by police and judicial authorities, due to take place between 4 and 6 October 2021.

“The use of AI and automated systems to predict people’s future behaviour or alleged criminality is not just subject matter for dystopian futuristic films but is currently an existing operational strategy of police and criminal justice authorities across Europe,” said Griff Ferris, legal and policy officer at Fair Trials.

“These systems are being used to create predictions, profiles and risk assessments that affect people’s lives in a very real way. Among other serious and severe outcomes, they can lead to people, sometimes even children, being placed under surveillance, stopped and searched, questioned, and arrested – even though no actual crime has been committed.”

Fair Trials charted how predictive policing systems lead to discriminatory outcomes in its Automating injustice report, released on 9 September 2021, finding through numerous case studies that they “almost inevitably” use data that either relies heavily on, or is made up entirely of, records originating from law enforcement authorities themselves.

“These data and records do not represent an accurate record of criminality, but merely a record of law enforcement, prosecutorial or judicial decisions – the crimes, locations and groups that are policed, prosecuted and criminalised within that society, rather than the actual occurrence of crime,” it said.


“The data may not be categorised or deliberately manipulated to yield discriminatory results, but it will reflect the structural biases and inequalities in the society which the data represents. For example, policing actions resulting from or influenced by racial or ethnic profiling, or the targeting of people on low incomes, can result in biased data concerning certain groups in society.”
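The dynamic the report describes can be illustrated with a minimal, hypothetical sketch in Python (all figures below are invented and are not drawn from any real force’s data or any actual system): a “hotspot” score trained only on arrest records ends up ranking districts by how heavily they are patrolled, rather than by how much offending actually occurs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two districts with the SAME underlying rate of offending...
true_offence_rate = np.array([0.05, 0.05])

# ...but very different shares of police attention.
patrol_share = np.array([0.8, 0.2])

# Recorded arrests depend on both offending and patrol presence, so the
# dataset records enforcement decisions rather than crime itself.
population = np.array([10_000, 10_000])
arrests = rng.binomial(population, true_offence_rate * patrol_share)

# A naive "predictive" score ranks districts by historical arrest counts.
hotspot_score = arrests / arrests.sum()
print("arrests per district:", arrests)
print("hotspot score:       ", hotspot_score.round(2))
# The heavily patrolled district is flagged as the 'high crime' area even
# though offending is identical in both: the score has learned the patrol
# pattern, not the crime pattern.
```

In this toy example, both districts offend at the same rate, yet the district receiving 80% of police attention generates roughly four times as many recorded arrests and so dominates the “hotspot” ranking.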

Long-standing critiques

Similar arguments have long been made by critics of predictive policing systems. In March 2020, for example, evidence submitted to the United Nations (UN) by the UK’s Equality and Human Rights Commission (EHRC) said the use of predictive policing could replicate and magnify “patterns of discrimination in policing, while lending legitimacy to biased processes”.

It added: “A reliance on ‘big data’ encompassing large amounts of personal information may also infringe on privacy rights and result in self-censorship, with a consequent chilling effect on freedom of expression and association.”

On 7 September 2021, a number of academics warned the House of Lords Justice and Home Affairs Committee about the dangers of predictive policing.

Rosamunde Elise Van Brakel, co-director of the Surveillance Studies Network, for example, noted that the data “often used is arrests data, and it has become very clear that this data is biased, especially as a result of ethnic profiling by the police”, adding that for as long as “this data has this societal bias baked in, the software will always be biased”.

She added: “The first step here is not a technological question, it is a question about how policing and social practices are already discriminatory or are already biased. I do not think you can solve this issue by tweaking the technology or trying to find AI to spot bias.”

In their book Police: a field guide, which analyses the history and methods of modern policing, authors David Correia and Tyler Wall also argue that crime rates and other criminal activity data reflect already racialised patterns of policing, creating a vicious cycle of suspicion and enforcement against black and brown minorities in particular.


“Predictive policing… provides seemingly objective data for police to engage in those same practices, but in a manner that appears free of racial profiling… so it shouldn’t be a surprise that predictive policing locates the violence of the future in the poor of the present,” they write.

“Police focus their activities in predominantly black and brown neighbourhoods, which results in higher arrest rates compared to predominately white neighbourhoods. [This] reinforces the idea that black and brown neighbourhoods harbour criminal elements, which conflates blackness and criminality, [and] under CompStat [a data-driven police management technique] leads to even more intensified policing that results in arrest and incarceration.”

While Correia and Wall write in the context of policing in the US, Fair Trials’ report looked specifically at the use of predictive policing systems in the EU, and found similar disparities in which sections of the population are targeted and over-represented.

In the UK, for example, Fair Trials noted that the Harm Assessment Risk Tool (HART) is used by Durham Constabulary to profile criminal suspects and predict their “risk” of re-offending, but that its data is drawn from financial information, commercial marketing profiles and postcodes – all of which can act as proxies for race or socio-economic status.

A separate academic study from April 2018 on the “lessons” of the HART system also noted that such variables risk “a kind of feedback loop that may perpetuate or amplify existing patterns of offending”.
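That feedback loop can be sketched in similarly hypothetical terms (again with invented numbers, not the HART system’s actual code, variables or data): if patrols are concentrated wherever the model last predicted the highest risk, and the resulting arrests become next year’s training data, a small initial skew in the records hardens into a persistent gap.

```python
import numpy as np

rng = np.random.default_rng(1)

true_offence_rate = np.array([0.05, 0.05])   # offending identical in both districts
population = np.array([10_000, 10_000])
predicted_risk = np.array([0.55, 0.45])      # small initial skew in the historical records

for year in range(1, 6):
    # Hotspot policing: the district ranked riskier receives most of the patrols.
    patrol_share = np.where(predicted_risk >= predicted_risk.max(), 0.7, 0.3)
    # Recorded arrests depend on patrol presence as well as on offending.
    arrests = rng.binomial(population, true_offence_rate * patrol_share)
    # The new arrest records become next year's "risk" prediction.
    predicted_risk = arrests / arrests.sum()
    print(f"year {year}: patrol share {patrol_share}, predicted risk {predicted_risk.round(2)}")

# A marginal difference in the starting records locks in a large, persistent
# gap in both patrolling and predicted risk, even though the underlying
# offence rate never differed between the districts.
```

Under these assumptions, an initial 55/45 split in recorded risk settles at roughly 70/30 within a year or two and stays there, because the records now reflect where the patrols were sent rather than where offences took place.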

Other systems detailed in the Fair Trials report, which it claims produce similarly discriminatory results, as evidenced by its case studies, include: Delia, a crime analysis and prediction system used by police in Italy; Top600, a system used by Dutch police to predict and profile those most at risk of committing violent crimes; and the System for Crime Analysis and Anticipation (SKALA), a geographic crime prediction tool used by German authorities.

Ferris added that simply having oversight mechanisms in place to stop abuses is not a sufficient measure: “The EU must ban the use of AI and automated systems that attempt to profile and predict future criminal behaviour. Without an outright ban, the discrimination that is inherent in criminal justice systems will be reinforced and the fundamental rights of millions of Europeans will be undermined.”

Europe’s current approach to regulating predictive policing

In April 2021, the European Commission published its proposed Artificial Intelligence Act (AIA), which focuses on creating a risk-based, market-led approach to regulating AI and is replete with self-assessments, transparency procedures and technical standards.

However, digital civil rights experts and organisations told Computer Weekly that although the regulation was a step in the right direction, it ultimately failed to protect people’s fundamental rights or to mitigate the technology’s worst abuses.

Giving the example of Article 10 of the proposal, which dictates that AI systems must be trained on high-quality datasets, Sarah Chander, senior policy advisor at European Digital Rights (EDRi), said the requirement was too focused on how AI operates at a technical level to be useful in fixing what is, fundamentally, a social problem.

“Who defines what high quality is? The police force, for example, using police operational data that will be high-quality datasets to them because they have trust in the system, the political construction of those datasets [and] in the institutional processes that led to those datasets – the whole proposal overlooks the highly political nature of what it means to develop AI,” she said.

“A few technical tweaks won’t make police use of data less discriminatory, because the issue is much broader than the AI system or the dataset – it’s about institutional policing [in that case],” said Chander.

While the proposal identified predictive policing systems as “high risk”, the experts said police forces across Europe would still be able to deploy these systems with relative ease because of, for example, the lack of human rights impact assessments and the fact that developers are themselves in charge of determining the extent to which their systems align with the regulation’s rules.

