Police failing to consult public on new technologies
A freedom of information campaign has revealed that UK police are largely failing to consult the public on their use of new technologies, a failure that risks undermining the principle of policing by consent
Police forces in the UK are failing to adequately engage with the public on their growing use of artificial intelligence (AI), with just one force having held consultations on its use of the technology, according to a freedom of information (FoI) campaign.
The campaign was conducted by the Royal Society of Arts, Manufactures and Commerce (RSA), and its findings were published in a report on 11 May that looks specifically at how police forces are communicating with the public on their use of AI and automated decision systems (ADS).
A number of police systems use these technologies, including both live and retrospective facial recognition, predictive policing applications (whereby data is used to identify locations or individuals at higher risk of criminal activity), and case assessment tools (which algorithmically assess cases based on their supposed ‘solvability’).
The report noted “a conspicuous and concerning lack of public engagement around AI and ADS”, the absence of which “will serve to further distance the public from decisions made using these broadly unfamiliar technologies”.
“Aside from allowing the end users of these services to voice their concerns, public engagement is an educational process for both sides and a necessary recognition that the issues are more than just operational in nature,” it said.
“These technologies and their myriad uses are alien to much of the public – in 2018, RSA research found that just 9% of the public are aware that AI is being used in criminal justice.”
Policing by consent?
Out of the 43 police forces in England and Wales, just seven confirmed they are using or trialling the technologies in question, with only one – South Wales Police – confirming it had consulted with the public on its use of AI or ADS at all.
Although the Metropolitan Police Service (MPS) later responded in mid-January that it was not using these technologies for policing decisions at that time, its operational roll-out of facial recognition was announced a week later, bringing the total number of police forces using AI or ADS to eight.
In a statement to the report’s authors, the MPS suggested there were plans being made for public consultations, although no evidence of this could be provided in response to a follow-up FoI request.
Five forces (Northamptonshire, North Yorkshire, Nottinghamshire, Sussex and Warwickshire) never responded to the requests for information, while the other 31 forces responded that they did not use either AI or ADS for policing decisions, contrary to public statements by the Home Office.
“We were concerned by the relative unwillingness of forces to detail their use of retrospective facial recognition through the freedom of information process. This is a matter of public record – the Home Office has noted that all police forces use retrospective facial recognition as recently as September 2019,” said the report.
“These two items taken together point – if not to a culture of quiet – then to a lack of understanding by police information offices about what facial recognition constitutes – they may, for example, have assumed that using ‘facial recognition’ only pertains to [live facial recognition].”
It added that none of the four forces identified as deploying or planning to deploy predictive policing systems confirmed they had consulted with the public, with one information office telling the authors that public input was not necessary as the system tracked trends rather than individuals.
However, at the start of 2019, campaign group Liberty identified 14 forces that were using or planning to use predictive policing applications.
“We suspect, bar a recent change in strategy, some of those who do use these technologies may have not responded with information about their programs. Whether this difference in results suggests that predictive policing is on the wane, or whether it is because forces are changing how they report on this potentially problematic technology, is difficult to ascertain,” said the report.
It added that the authors, who started to make FoI requests in November 2019, had “significant difficulties in receiving responses” from some forces, and found it particularly difficult to map covert surveillance efforts due to the “national security” and “law enforcement” exemptions used by authorities.
“This is not limited to the police per se – rather, it is an example of how government generally is failing to adapt to the uptake of new and radical technologies. A consistent theme across this investigation was the inconsistency and paucity of information provided by police forces regarding how they are using AI,” it said.
Cutting costs and corners
One of the authors’ main concerns is that AI technologies are being used as a means to increase the efficiency, rather than the quality, of policing, as well as to cut costs, which one respondent cited as a direct reason for developing the systems.
“The worry here is that artificial intelligence systems allow for cost-saving, which decreases the availability of less measurable benefits of policing, such as relationship and community building. There is also a danger that efficiency gains may be misleading or can produce unintended consequences,” it said.
“Racial and gender biases can be exacerbated by technologies as they are based on historic data, and we fear that a lack of transparency could undermine the principle of policing by consent.”
A February 2020 report by security think tank the Royal United Services Institute (Rusi), which called for national guidance on the use of police algorithms “as a matter of urgency”, detailed how the use of these data-intensive technologies has exploded in recent years.
It explained that a major reason was Britain’s ongoing austerity measures, which every police officer interviewed for the report described as the primary driver of developing new data capabilities.
“As discussed in a recent report from Cardiff University, these developments are part of a wider trend across the UK public sector of the use of algorithms and ‘data scoring’ to inform decision-making and resource allocation in an age of austerity,” it said.
While the RSA report agrees that clear guidelines are needed on the use of these technologies, it adds that guidance alone is not enough, and that a new cultural framework is needed to shift entrenched attitudes that exclude the public from decision-making about the deployment of police technology.
“Adopting new technologies without adequate cultural safeguards – especially around deliberation and transparency – risks storing up considerable problems for the future, around both community cohesion but also truly innovative technological uptake,” it said.
“Deliberative methods could provide a bridge from the machinations of predictive policing and facial recognition to the end users who are affected by their decisions.
“We are interested in exploring how deliberative methods can be further applied to deal with the deficit in engagement identified in this report. We note that engagement is a broad term that is best satisfied when diverse, multidisciplinary groups come together and challenge the complex social, philosophical and practical issues around these technologies.”
Read more about police technology
- The Met Police commissioner has called for a legislative framework to govern police use of new technologies, while defending the decision to use live facial recognition technology operationally without one.
- A research project being conducted by UK universities in collaboration with the Home Office and Metropolitan Police could produce facial recognition systems that allow users of the technology to identify people whose faces are covered.
- The Equality and Human Rights Commission says police use of automatic facial recognition and predictive algorithms is discriminatory, stifles freedom of expression and lacks a proper legislative framework.