Met Police purchase new retrospective facial-recognition system
Retrospective facial-recognition software purchased for £3m by the Met Police will be deployed in coming months amid continuing controversy around the use of biometric technologies by law enforcement bodies
The Metropolitan Police Service (MPS) is deploying a new retrospective facial-recognition (RFR) technology in the next three months, allowing the force to process biometric information contained in historic images from CCTV, social media and other sources.
Unlike live facial-recognition (LFR) technology, which the MPS began deploying operationally in January 2020, RFR is applied retrospectively to images that have already been captured.
Both versions of facial recognition work by scanning faces and matching them against sets of selected images, known as “watch lists”; the difference is that LFR does this in real time, scanning people as they pass the camera.
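At its core, the matching step is the same in both modes; only the source of the probe image differs. The sketch below is a minimal, generic illustration of embedding-based watch-list matching – the function names, data structures and threshold are invented for illustration, and make no claim about how the Met’s or NEC’s systems actually work.

```python
# Illustrative sketch only - not any vendor's actual system.
# A recognition model turns each face image into an "embedding" (a numeric
# vector); matching means comparing a probe embedding against the
# embeddings of everyone on a watch list.
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # hypothetical value; real systems tune this


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_against_watch_list(probe: np.ndarray,
                             watch_list: dict[str, np.ndarray]) -> list[str]:
    """Return the watch-list identities whose similarity to the probe
    embedding meets the threshold."""
    return [identity for identity, reference in watch_list.items()
            if cosine_similarity(probe, reference) >= SIMILARITY_THRESHOLD]

# LFR: probe embeddings are computed in real time from a live camera feed.
# RFR: probe embeddings are extracted later from stored CCTV footage,
# social media images or stills submitted by the public. The matching
# logic itself is identical.
```

In a sketch like this, the threshold is where false matches enter: set it too low and more innocent faces are flagged, which is one reason vendor algorithms are benchmarked on false-match rates in independent testing.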
A procurement proposal approved by the Mayor’s Office for Policing and Crime (MOPAC) at the end of August 2021 shows a £3m, four-year contract was awarded to Northgate Public Services for the provision of updated RFR software, which the MPS said will help support “all types of investigations”.
The main purpose of RFR is to assist in identifying suspects from still images or specific frames extracted from video, which must be lawfully held by the force, said the MPS in its MOPAC submission.
“These may be images that have been captured by cameras at burglaries, assaults, shootings and other crime scenes. They could also be images shared by or submitted by members of the public,” it said.
“As well as assisting in preventing and detecting crime, RFR searching could also be used to help in the identification of missing or deceased persons. RFR reduces the time taken to identify offenders and supports the delivery of improved criminal justice outcomes.”
A spokesperson for the Mayor of London said the technology stands to play a vital role in keeping Londoners safe, and that RFR will “reduce the time taken by officers to identify those involved, and help police take criminals off our streets and help secure justice for victims of crime”.
Human rights concerns
The use of facial recognition and other biometric technologies, especially by law enforcement bodies, has long been a controversial issue.
In June 2021, two pan-European data protection bodies – the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) – jointly called for a general ban on the use of automated biometric identification technologies in public spaces, arguing that they present an unacceptable interference with fundamental rights and freedoms.
“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places,” said Andrea Jelinek, EDPB chair, and Wojciech Wiewiórowski, the EDPS, in a joint statement.
“Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms.”
A number of digital rights campaign groups, including Big Brother Watch, Liberty, Access Now, and European Digital Rights, have also previously called for bans on the use of biometric technologies, including both LFR and RFR, on similar grounds.
Speaking to Computer Weekly, Daniel Leufer, a Europe policy analyst at Access Now, said a major issue with facial-recognition technology generally is who it is used against: “It’s not going to be rich, white, middle- or upper-class people from posh areas of London who will have a high representation in those databases [the watch lists are drawn from].
“We know that black people are picked up more often in stop and search, [and] have a much higher chance of ending up on the police radar because of extremely petty crimes…whereas white people get off much more easily. All of these things will lead to the overrepresentation of marginalised groups in the watch lists, leading to more matches and further entrenching that pattern.”
In July 2021, the UK’s former biometrics commissioner Paul Wiles told the House of Commons Science and Technology Committee that an explicit legislative framework was needed to govern the use of biometric technologies, and highlighted the retention of custody images in the Police National Database (PND) as a major problem.
According to Wiles, the PND currently holds 23 million images taken while people were in custody, regardless of whether they were subsequently convicted. These custody images are then used as the basis for the police’s facial-recognition watch lists, despite a 2012 High Court ruling finding the PND’s six-year retention period to be disproportionate and therefore unlawful.
Computer Weekly asked the MPS whether the PND’s custody images will be used as the basis for the RFR watch lists, as well as how it is dealing with the retention and deletion of custody images, but received no response by the time of publication.
The introduction of RFR at scale is also worrisome from a human rights perspective, Leufer added, because it smooths out the various points of friction associated with conducting mass surveillance.
“One of the things that’s stopped us being in a surveillance nightmare is the friction and the difficulty of surveilling people. You look at the classic example of East Germany back in the day, where you needed this individual agent following you around, intercepting your letters – it was expensive and required an awful lot of manpower,” he said.
“With CCTV, it involved people going through images, doing manual matches against databases…that friction, the time that it actually took to do that, meant that CCTV wasn’t as dangerous as it is now. The fact that it can now be used for this purpose requires a re-evaluation of whether we can have those cameras in our public spaces.”
Leufer added that the proliferation of video-capturing devices, from phones and social media to smart doorbell cameras and CCTV, is creating an “abundance of footage” that can be fed through the system. He also noted that, unlike LFR, where police deploy specially equipped cameras with at least some warning, RFR can be applied to footage or images captured by ordinary cameras without any public knowledge.
“CCTV, when it was initially rolled out, was cheap, easy and quick, and retroactive facial recognition wasn’t a thing, so that wasn’t taken in as a concern in those initial assessments of the necessity, proportionality, legality and ethical standing of CCTV systems,” he said. “But when they’re coupled with retroactive facial recognition, they become a different beast entirely.”
MPS defends RFR
In its submission to MOPAC, the MPS said that the force would need to conduct a data protection impact assessment (DPIA) of the system, which is legally required for any data processing that is likely to result in a high risk to the rights of data subjects. It must also be completed before any processing activities begin.
While the DPIA is yet to be completed, the MPS added that it has already begun drafting an equality impact assessment (EIA) under its Public Sector Equality Duty (PSED) to consider how its policies and practices could be discriminatory.
It further noted that “the MPS is familiar with the underlying algorithm, having undertaken considerable diligence to date”, and that the EIA “will be fully updated once a vendor has been selected and the product has been integrated”.
In August 2020, South Wales Police’s (SWP’s) use of LFR technology was deemed unlawful by the Court of Appeal, in part because the force did not comply with its PSED.
It was noted in the judgement that the manufacturer in that case – Japanese biometrics firm NEC, which acquired Northgate Public Services in January 2018 – did not divulge details of its system to SWP, meaning the force could not fully assess the tech and its impacts.
“For reasons of commercial confidentiality, the manufacturer is not prepared to divulge the details so that it could be tested. That may be understandable, but in our view it does not enable a public authority to discharge its own, non-delegable, duty under section 149,” said the ruling.
In response to questions from Computer Weekly about what due diligence it has already undertaken, as well as whether it had been granted full access to Northgate’s RFR systems, the MPS said potential vendors were asked to provide information which demonstrated how their respective RFR products would enable compliance with legal requirements, including the relevant data protection and equalities duties.
“The selected vendor was able to point to a very strong performance in the large-scale face-recognition vendor tests undertaken by the National Institute of Standards and Technology [NIST],” it said.
“In line with the ongoing nature of the legal duties, the Met will continue to undertake diligence on the algorithm as the new system is integrated into the Met to ensure high levels of real-world performance will be achieved.”
It added that “in line [with the SWP court ruling] Bridges, the Met has an obligation to be satisfied ‘directly, or by way of independent verification that the software programme does not have an unacceptable bias on the grounds of race or sex’. Prior to using the NEC RFR technology operationally, as part of its commitment to using technology transparently, the Met has committed to publish the DPIA and how it is satisfied that the algorithm meets the Bridges requirements.”
Ethical design
To mitigate any potentially discriminatory impacts of the system, the MPS also committed to embedding “human-in-the-loop” decision-making into the RFR process, whereby human operators intervene to interrogate the algorithm’s decision before action is taken.
However, a July 2019 report from the Human Rights, Big Data & Technology Project based at the University of Essex Human Rights Centre – which marked the first independent review into trials of LFR technology by the MPS – highlighted a discernible “presumption to intervene” among police officers using the tech: they tended to trust the system’s outcomes and engage individuals it flagged as matching the watch list in use, even when they did not.
In terms of how it is dealing with the “presumption to intervene” in the context of RFR, the MPS said the use case was “quite different” because “it does not result in immediate engagement” and is instead “part of a careful investigative process with any match being an intelligence lead for the investigation to progress”.
It added: “In any event, the NEC system offers a number of ‘designed in’ processes (relating to how a match is viewed, assessed and confirmed), which help protect the value of the human-in-the-loop process. Now NEC has been selected, these can be considered as the RFR system is brought into the Met and will be a key part of the DPIA.”
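In the abstract, a human-in-the-loop gate means the algorithm’s output is only ever a candidate that a person must review and confirm before anything is acted upon. The sketch below is a generic illustration of that pattern, using invented types and names; it is not drawn from NEC’s actual “designed in” processes.

```python
# Illustrative sketch of a generic "human-in-the-loop" gate - an assumption
# about how such a safeguard could be structured, not a description of the
# NEC product.
from dataclasses import dataclass, field


@dataclass
class CandidateMatch:
    probe_image_id: str
    watch_list_identity: str
    similarity: float
    reviewed: bool = False
    confirmed: bool = False


@dataclass
class ReviewQueue:
    pending: list[CandidateMatch] = field(default_factory=list)

    def submit(self, match: CandidateMatch) -> None:
        # Algorithm output never triggers action directly; it only
        # enters a queue awaiting human assessment.
        self.pending.append(match)

    def review(self, match: CandidateMatch, operator_confirms: bool) -> None:
        # A human operator interrogates the match and records a decision.
        match.reviewed = True
        match.confirmed = operator_confirms


def actionable_leads(queue: ReviewQueue) -> list[CandidateMatch]:
    """Only matches a human has reviewed and confirmed become intelligence
    leads for an investigation to progress."""
    return [m for m in queue.pending if m.reviewed and m.confirmed]
```

The value of such a gate depends on the operator genuinely interrogating each match rather than rubber-stamping it, which is precisely what the “presumption to intervene” finding above calls into question.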
While the MPS’ submission said that the force will be consulting with the London Policing Ethics Panel (LPEP) about its use of the technology, the decision to purchase the software was made without this process taking place.
Asked why the procurement proposal was approved before the panel had been consulted, a spokesperson for the Mayor of London said: “While this is clearly an important policing tool, it’s equally important that the Met Police are proportionate and transparent in the way it is used to retain the trust of all Londoners.
“The London Policing Ethics Panel will review and advise on policies supporting the use of RFR technology, and City Hall will continue to monitor its use to ensure it is implemented in a way that is lawful, ethical and effective.”
The MPS said that, as noted in its submission, the panel will still be engaged: “As this is not a new technology to the Met, it will be important for LPEP to consider the safeguards in the context of the NEC product. This is because different vendors take quite different ‘privacy-by-design’ approaches and therefore require different controls and safeguards for use. These could only be put in place and considered by LPEP following the selection of a vendor.”
According to a report in Wired, previous versions of the MPS’ facial-recognition web page on the Wayback Machine show references to RFR were added at some stage between 27 November 2020 and 22 February 2021.
However, while the MPS said on this page that it was “considering updating the technology used” for RFR, very little information is publicly available about its existing capabilities. Computer Weekly asked how long the MPS has been using RFR technology, and whether it has been deployed operationally, but received no response by the time of publication.
Will RFR be used against protesters?
A March 2021 report by Her Majesty’s Inspectorate of Constabulary and Fire & Rescue Services (HMICFRS), which looked at how effectively UK police deal with protests, noted that six police forces in England and Wales are currently deploying RFR technology, although it did not specify which forces these were.
“Opinions among our interviewees were divided on the question of whether facial-recognition technology has a place in policing protests. Some believed that the system would be useful in identifying protesters who persistently commit crimes or cause significant disruption. Others believed that it breached protesters’ human rights, had no place in a democratic society and should be banned,” it said.
“On balance, we believe that this technology has a role to play in many facets of policing, including tackling those protesters who persistently behave unlawfully. We expect to see more forces begin to use facial recognition as the technology develops.”
According to Access Now’s Leufer, facial-recognition technology can have a “chilling effect” on completely legitimate protests if there is even a perception that it will be used to surveil those participating.
“If you as a citizen start to feel like you’re being captured everywhere you go by these cameras and the police, who do not always behave as they should, have the potential to go through all of this footage to track you wherever you go, it just places a really disproportionate amount of power in their hands for limited efficacy,” he said.
On whether it will place limits on when RFR can be deployed, including whether it will be used to identify people attending demonstrations or protests, the MPS said “the submission does provide some examples as to when RFR may be used – for example, in relation to images showing burglaries, assaults, shootings and other crime scenes.
“However, to ensure that the public can foresee how the Met may use RFR, the Met will publish, prior to operational use, details of when RFR may be used. This publication will follow engagement with LPEP – this is because when RFR may be used is an important ethical and legal question.”
Read more about police use of facial-recognition technology
- Civil society groups have called for a ban on the use of live facial-recognition (LFR) technology amid claims that the government and the police are introducing intrusive surveillance measures without parliamentary scrutiny.
- Three major technology companies have committed to not selling facial-recognition tech to law enforcement, but remain silent on how other technologies can be used to fuel racial injustice and discriminatory policing.
- Facial-recognition supplier claims new system can accurately identify masked faces, therefore promoting public health during the pandemic. But questions remain about whether its existing UK law enforcement clients will be deploying the technology.