UK police facial recognition explained: What you need to know
In this essential guide, Computer Weekly looks at the use of facial recognition technology by UK police forces, including the points of contention around its deployment so far and where it’s heading next
UK police have been using live facial recognition (LFR) technology for the best part of a decade, with the Met being the first force to deploy it at Notting Hill Carnival in 2016.
Since then, the use of the biometric surveillance and identification tool by the Met has ramped up considerably. While the initial deployments were sparse, happening only every few months, they are now run-of-the-mill, with facial recognition-linked cameras regularly deployed to events and busy areas of London.
Similarly, South Wales Police (SWP) – the only other force in England and Wales to have officially deployed the “live” version of facial recognition – used the technology much more extensively than the Met during its initial roll-outs through 2017, and is now also deploying it with much greater frequency.
From the police’s perspective, the main operational benefits of facial recognition include the ability to find people they otherwise would not be able to (whether that be for safeguarding or apprehending offenders), and as a preventative measure to deter criminal conduct.
Almost immediately, however, the technology proved controversial. Out of the gate, police facial recognition was derided for having no firm legal basis, poor transparency and questionable accuracy (especially for women and those with darker skin tones), all while being rolled out with zero public or Parliamentary debate.
The Met’s choice to first deploy the technology at Carnival – the biggest Afro-Caribbean cultural event in Europe and the second-largest street carnival in the world, behind Rio de Janeiro’s – also attracted criticisms of institutional racism.
In the case of SWP, its use of live facial recognition against activists protesting an arms fair in Cardiff eventually led to a legal challenge.
In August 2020, the Court of Appeal concluded that SWP’s use of the tech up until that point had been unlawful, because the force had failed to conduct an appropriate Data Protection Impact Assessment (DPIA) and comply with its Public Sector Equality Duty (PSED) to consider how its policies and practices could be discriminatory.
Although the court also concluded that SWP had violated the privacy rights of the claimant, the judgement ultimately found that the problem lay in how the technology had been approached and deployed by police, rather than in the technology itself.
In this essential guide, learn about how the police have been approaching the technology, the ongoing concerns around its proportionality, necessity and efficacy, and the direction of travel set for 2024 and beyond.
What is facial recognition?
While LFR has received the most public attention and scrutiny, other facial recognition techniques have also started gaining popularity among UK law enforcement.
With LFR, the technology essentially acts as a biometric police checkpoint, with a facial recognition-linked camera scanning public spaces and crowds to identify people in real time by matching their faces against a database of images compiled by police.
Otherwise known as a “watchlist”, these databases are primarily composed of custody photos and can run into thousands of images for any given LFR deployment, but are deleted after each operation along with any facial images captured during it.
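To illustrate the matching step at the heart of such a system, below is a minimal sketch of how a face detected by the camera might be compared against a watchlist of stored face “embeddings” using a similarity threshold. Every name here is a hypothetical illustration, not the vendor software actually used by UK forces.

```python
import numpy as np

# Hypothetical sketch of the core LFR matching step. In a real system,
# each embedding would come from a vendor-supplied deep learning model
# that maps a cropped face image to a fixed-length numeric vector.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(
    face_embedding: np.ndarray,
    watchlist: dict[str, np.ndarray],
    threshold: float = 0.6,  # operational setting: higher means fewer false alerts
) -> str | None:
    """Return the watchlist ID of the best match above the threshold, or None.

    A None result means no alert is raised for that passer-by; an
    above-threshold match is flagged to a human operator for review
    rather than acted on automatically.
    """
    best_id, best_score = None, threshold
    for person_id, ref_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, ref_embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

The threshold in this sketch is the kind of operational “setting” referenced later in this guide: raising it reduces false matches at the cost of missing genuine ones, which is why research into the accuracy of the forces’ algorithms has examined how performance varies with the settings used.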
The second technique is retrospective facial recognition (RFR). While it works in a similar fashion to LFR by scanning faces and matching them against a watchlist, RFR can be applied to any already-captured images retroactively.
Unlike LFR, which is used overtly with specially equipped cameras atop a visibly marked police van, RFR use is much more covert, and can be applied to footage or images behind closed doors without any public knowledge the surveillance has taken place.
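As a further hedged sketch – again using hypothetical names and building on the match_against_watchlist function above, not any force’s actual tooling – a retrospective search is essentially the same matching step run as a batch job over already-captured material:

```python
from pathlib import Path
import numpy as np

def detect_and_embed_faces(image_path: Path) -> list[np.ndarray]:
    """Stand-in for a face detector plus embedding model; a real system
    would return one embedding per face found in the stored image."""
    raise NotImplementedError  # hypothetical helper, not a real library call

def retrospective_search(
    archive_dir: Path,
    watchlist: dict[str, np.ndarray],
) -> list[tuple[Path, str]]:
    """Search stored images (CCTV stills, doorbell footage, social media
    captures) against a watchlist, returning (file, person ID) hits.

    Unlike LFR, nothing here is tied to a live camera or an announced
    deployment: the search can run behind closed doors at any time
    after the images were captured.
    """
    hits = []
    for image_path in sorted(archive_dir.glob("*.jpg")):
        for embedding in detect_and_embed_faces(image_path):
            person_id = match_against_watchlist(embedding, watchlist)
            if person_id is not None:
                hits.append((image_path, person_id))
    return hits
```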
Critics are particularly concerned by the increasing use of this technology, because the sheer number of image- and video-capturing devices in the modern world – from phones and social media to smart doorbell cameras and CCTV – is creating an abundance of material that can be fed into the software.
There is also concern about what its operation at scale means for human rights and privacy, as it smooths out the various points of friction that have traditionally been associated with conducting mass surveillance.
The newest iteration being rolled out to UK police is operator-initiated facial recognition (OIFR), which works via an app on officers’ phones that allows them to automatically compare photos they’ve taken in the field against a predetermined watchlist.
While national plans to equip officers with OIFR tools were only announced by UK police chiefs in November 2023, South Wales, Gwent and Cheshire police are already conducting joint trials of the tech.
Why is facial recognition so controversial?
A major question hanging over the police’s use of facial recognition is whether it is actually necessary and proportionate in a democratic society, especially given the lack of public debate about its roll-out.
Before they can deploy any facial recognition technology, UK police forces must ensure their deployments are “authorised by law”, that the consequent interference with rights (such as the right to privacy) is undertaken for a legally “recognised” or “legitimate” aim, and that this interference is both necessary and proportionate. This must be assessed for each individual deployment of the tech.
For example, the Met’s legal mandate document – which sets out the complex patchwork of legislation that covers use of the technology – says the “authorising officers need to decide the use of LFR is necessary and not just desirable to enable the MPS to achieve its legitimate aim”.
Responding to questions about how the force decided each individual deployment was both necessary and proportionate, the Met has given the same answer to Computer Weekly on multiple occasions.
“The deployment was authorised on the basis of an intelligence case and operational necessity to deploy, in line with the Met’s LFR documents,” it said, adding in each case that “the proportionality of this deployment was assessed giving due regard to the intelligence case and operational necessity to deploy, whilst weighing up the impact on those added to the watchlist and those who could be expected to pass the LFR system”.
However, critics have questioned whether scanning tens of thousands of faces every time LFR is used is both a necessary and proportionate measure, particularly when other, less intrusive methods are already available to police.
While there are a number of legally recognised purposes (such as national security, prevention of disorder or public safety) that state authorities can use to intrude on people’s rights, proportionality and necessity tests are already well established in case law, and exist to ensure these authorities do not unduly interfere.
“In the case of police, they’re going to say ‘it’s prevention of disorder or crime, or public safety’, so they get past first base, but then one of the questions is, ‘is this necessary in a democratic society?’” said Karen Yeung, an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School.
“There’s a very rich case law about what that means, but the core test is you can’t use a hammer to crack a nut. Even though a machete might be perfectly good for achieving your task, if a pen knife will do, then you can only use the pen knife, and the use of a machete is unlawful because it’s disproportionate … the basic way of explaining it is that it has to go no further than necessary to achieve the specified goal.”
In the case of RFR, while it has its own separate legal mandate document, there are similarities in the need to establish the purpose and grounds of every search made with the software, as well as the proportionality and necessity of doing so in each case.
There is currently no legal mandate published for OIFR tools, but police chiefs have said this version of the tech won’t be rolled out to forces until sometime in 2024.
Is facial recognition biased or discriminatory?
Closely linked with necessity and proportionality, there is also the question of who the cameras are ultimately aimed at and why. This in turn brings up questions about bias and discrimination, which from the police and government perspective can be solved via improved algorithmic accuracy.
When UK police first began deploying LFR, one of the major concerns was its inability to accurately identify women and people with darker skin tones, which led to a number of people being wrongly identified during the technology’s first few years of use.
However, as the accuracy of the algorithms in use by UK police has improved, the concerns have shifted away from questions of algorithmic bias towards deeper questions of structural bias in policing, and how that bias is reflected in its technology practices.
Civil society groups maintain, for example, that the technology is “discriminatory and oppressive” given repeated findings of institutional racism and sexism in the police, and that it will only further entrench pre-existing patterns of discrimination.
Others have argued the point further, saying that accuracy is a red herring. Yeung, for example, has argued that even if LFR technology gets to the point where it is able to identify faces with 100% accuracy 100% of the time, “it would still be a seriously dangerous tool in the hands of the state”, because “it’s almost inevitable” that it would continue to entrench existing power discrepancies and criminal justice outcomes within society.
How do facial recognition watchlists work?
Watchlists are essentially collections of facial images that the software compares against people passing the camera to determine whether there is a match. While images can come from a range of sources, most are drawn from custody images stored in the Police National Database (PND).
Given the well-documented disproportionality in policing outcomes across different social groups in the UK, the concern is that – in using historic arrest data and custody images to direct where facial recognition should be deployed and who it’s looking for respectively – people from certain demographics or backgrounds then end up populating the watchlists.
“If you think about the disproportionality in stop and search, the numbers of black and brown people, young people, that are being stopped, searched and arrested, then that starts to be really worrying because you start to get disproportionality built into your watchlists,” London Assembly member and chair of the police committee, Caroline Russell, previously told Computer Weekly.
Further, appearing before a Lords committee in December 2023, senior officers from the Met and SWP confirmed that both forces use generic “crime categories” to determine targets for their LFR deployments.
This means watchlists are selected based on the crime type categories linked to images of people’s faces (which are mostly custody images), rather than based on intelligence about specific individuals that are deemed a threat.
Another issue with the watchlists is that millions of these custody images are held in the PND unlawfully, meaning people never convicted of a crime could potentially be included.
In 2012, a High Court ruling found that the police’s retention of custody images was unlawful, because unconvicted people’s information was being kept in the same way as that of people who were ultimately convicted. It also deemed the minimum six-year retention period to be disproportionate.
While the Met’s LFR DPIA says that “all images submitted for inclusion on a watchlist must be lawfully held by the MPS”, millions of custody images are still being unlawfully retained.
Writing to other chief constables to outline some of the issues around custody image retention in February 2022, the National Police Chiefs’ Council (NPCC) lead for records management, Lee Freeman, said the potentially unlawful retention of an estimated 19 million images “poses a significant risk in terms of potential litigation, police legitimacy and wider support and challenge in our use of these images for technologies such as facial recognition”.
In November 2023, the NPCC confirmed to Computer Weekly that it had launched an as-yet-unpublicised programme to establish a management regime for custody images, alongside a review of all such data currently held by UK police forces. This will be implemented over a two-year period.
Is facial recognition effective?
Outside of these issues, there are open questions about the effectiveness of facial recognition in policing.
Speaking with Computer Weekly, the former biometrics and surveillance camera commissioner, Fraser Sampson, for example, questioned the ways facial recognition has been deployed by police, noting the thinness of the evidential basis around its effectiveness in tackling serious crimes.
He said that on the one hand, there are arguments from critics that UK police “never really seem to catch anyone significant using it, let alone very dangerous or high-harm offenders”, but on the other, those in policing will argue this is because it has been deployed so infrequently, and on relatively few people, that “we’re not going to have very spectacular results, so therefore, we’ve got to use it more to prove the case more”.
Given the Home Office’s repeated claim that LFR is a valuable crime prevention tool capable of stopping terrorists, rapists and other violent offenders, others have also questioned its effectiveness for this stated purpose, since the majority of arrests made are for other offences, such as drug possession, not appearing in court or traffic violations.
Sampson has said the overt nature of the deployments – whereby police forces are required to publicly state when and where they are using it – can also hinder effectiveness, because it means wanted people will simply avoid the area.
He added that the argument then becomes about making the capability more covert to avoid this pitfall: “Then it becomes very sinister … you can’t just avoid one town, because it could be looking for you anywhere. The use case has made itself on that argument.”
Sampson further challenged the technology’s crime prevention capabilities on the basis that authorities are largely relying on its chilling effect, rather than its actual effectiveness in identifying wanted individuals. He said the logic here is that people “might behave” if they know the police have a certain capability and might be using it.
“It’s really challenging for the police then to find the evidence that it can work when used properly, without having to throw away all the safeguards to prove it, because once they’re gone, they’re gone,” said Sampson.
Is facial recognition legal?
There is no dedicated legislation in the UK governing police use of facial recognition technologies.
According to the Met Police’s legal mandate for LFR, the tech is regulated by a patchwork of the Police and Criminal Evidence Act (PACE) 1984; the Data Protection Act 2018; the Protection of Freedoms Act 2012; the Equality Act 2010; the Regulation of Investigatory Powers Act 2000; the Human Rights Act 1998; and common law powers to prevent and detect crime.
“These sources of law combine to provide a multi-layered legal structure to use, regulate and oversee the use of LFR by law enforcement bodies,” it says.
While the mandate also specifically references the Surveillance Camera Code of Practice as one of the “secondary legislative instruments” in place to regulate LFR use, the code is set to be abolished without replacement under the UK government’s data reforms.
Both Parliament and civil society have repeatedly called for new legal frameworks to govern law enforcement’s use of biometrics. Those calling for change include the Lords Justice and Home Affairs Committee (JHAC), following its official inquiry into police use of advanced algorithmic technologies; two of the UK’s former biometrics commissioners, Paul Wiles and Fraser Sampson; Matthew Ryder QC, in an independent legal review; the UK’s Equality and Human Rights Commission; and the House of Commons Science and Technology Committee, which called for a moratorium on LFR as far back as July 2019.
During his time in office before resigning in October 2023, Sampson also highlighted a lack of clarity about the scale and extent of public space surveillance, as well as concerns around the general “culture of retention” in UK policing around biometric data.
Throughout the JHAC inquiry – which described the police use of algorithmic technologies as a “new Wild West” characterised by a lack of strategy, accountability and transparency from the top down – Lords heard from expert witnesses that UK police are introducing new technologies with very little scrutiny or training, continuing to deploy them without clear evidence about their efficacy or impacts, and have conflicting interests with their own tech suppliers.
In a short follow-up inquiry, this time looking exclusively at facial recognition, the JHAC found that police are expanding their use of LFR without proper scrutiny or accountability, despite lacking a clear legal basis for their deployments. The committee also specifically called into question whether LFR is even legal.
The committee added that, looking to the future, there is a real possibility of networked facial recognition cameras capable of trawling entire regions of the UK being introduced, and that nothing is in place to regulate this potential development.
Despite myriad calls for a new legislative framework from different quarters, government ministers have claimed on multiple occasions that there is a sound legal basis for LFR in the UK, and that “a comprehensive network of checks and balances” already exists.
What are police doing next with facial recognition?
Despite open questions about the legality of police facial recognition tools, the UK government has not been deterred from pushing for much wider adoption of the technology.
In November 2023, for example, NPCC chair Gavin Stephens said it would play a “significant role” in helping UK policing become “an effective science-led service”.
In May 2023, an interim report into upcoming UK government data reforms revealed that policing minister Chris Philp was pushing for facial recognition technology to be rolled out by police forces across England and Wales, and would likely push to integrate the tech with police body-worn video cameras.
He later wrote to police chiefs in October 2023 setting out the importance of harnessing new technologies for policing, urging them to double the amount of RFR searches they are conducting and deploy LFR much more widely.
At the start of the same month, Philp, speaking at a fringe event of the Conservative Party Conference, outlined his plans to integrate data from the PND, the Passport Office and other national databases with facial recognition technology to help catch shoplifters and other criminals.
The plan was met with criticism from campaigners, academics and Scottish biometrics commissioner Brian Plastow, who said the “egregious proposal” to link the UK’s passport database with facial recognition systems is “unethical and potentially unlawful”.
Going forward, there are major concerns about what the UK government’s proposed data reforms mean for police technologies like facial recognition.
Some have argued, for example, that the forthcoming Data Protection and Digital Information Bill will weaken oversight of the police’s intrusive surveillance capabilities if enacted as is, because it would abolish the surveillance camera code of practice and collapse facial recognition into a mere data protection issue under the purview of the Information Commissioner’s Office (ICO).
Computer Weekly’s police facial recognition coverage
- 1 February 2019: Met Police deploys live facial recognition technology for ninth time.
- 22 March 2019: Live facial recognition technology should not be deployed by UK law enforcement under current circumstances, and the retention of millions of custody images would not hold up under another legal challenge, MPs have been told.
- 26 June 2019: As adoption of facial recognition systems continues to grow worldwide, there is increasing concern that this technology could undermine fundamental privacy rights, and questions over how it can be kept in check.
- 12 August 2019: South Wales Police’s decision to start trialling a new facial recognition mobile app has been described by campaigners as “shameful”, given the force’s use of the technology is currently subject to ongoing court proceedings.
- 16 August 2019: The Information Commissioner’s Office has reiterated its warnings about the use of facial recognition technology amid concerns about privacy, and begun an investigation into use of the technology at King’s Cross in London.
- 1 November 2019: The Information Commissioner’s Office is calling for a statutory code of practice to govern how police in the UK deploy live facial recognition technology while controversy surrounding its use continues.
- 27 January 2020: The Metropolitan Police Service has announced that it will begin using facial recognition technology operationally for the first time, and has moved to assure privacy experts that its deployments will not infringe on civil liberties.
- 28 January 2020: Live facial recognition will be rolled out operationally by the Met Police, but police monitoring group Netpol believes it will hamper people’s ability to exercise their rights to protest.
- 11 February 2020: Police use of facial recognition ‘unjustifiable’, says Scottish Justice Committee.
- 5 March 2020: Collaboration between police forces and private entities on facial recognition technology comes under scrutiny by the Home Office’s Biometrics and Forensics Ethics Group.
- 13 March 2020: Equality and Human Rights Commission says use of automatic facial recognition and predictive algorithms by police is discriminatory, stifles freedom of expression and lacks a proper legislative framework.
- 18 March 2020: A research project being conducted by UK universities in collaboration with the Home Office and Metropolitan Police could produce facial recognition systems that allow users of the technology to identify people with their faces covered.
- 21 May 2020: The Covid-19 coronavirus pandemic has prompted a slew of biometric firms to update their facial-recognition technology to identify people wearing face masks.
- 10 June 2020: IBM’s CEO writes to the US Congress about the firm’s decision to stop using and selling facial-recognition technology, and says its sale to law enforcement agencies should be re-evaluated.
- 11 June 2020: Amazon is implementing a one-year moratorium on police use of its facial-recognition software following an international backlash over the e-commerce giant’s ties to law enforcement.
- 19 June 2020: Three major technology companies have committed to not selling facial-recognition tech to law enforcement, but remain silent on how other technologies can be used to fuel racial injustice and discriminatory policing.
- 12 August 2020: In a landmark decision, the Court of Appeal has ruled that South Wales Police’s facial recognition deployments breached human rights and data protection laws.
- 8 January 2021: Facial-recognition supplier claims new system can accurately identify masked faces, therefore promoting public health during the pandemic. But questions remain about whether its existing UK law enforcement clients will be deploying the technology.
- 21 January 2021: Tony Porter speaks to Computer Weekly about the changes in facial-recognition during his time as surveillance camera commissioner, the ethics of using the technology, and his new role as chief privacy officer at Corsight AI.
- 29 January 2021: Home Office’s Biometrics and Forensics Ethics Group releases briefing note on the use of live facial recognition in public–private collaborations following a year-long investigation.
- 18 June 2021: The UK information commissioner is “deeply concerned” about the inappropriate and reckless use of live facial recognition technologies in public spaces, noting that none of the organisations investigated by her office were able to fully justify its use.
- 22 June 2021: European Union’s data protection regulators say facial recognition and other biometric identification technologies present too great a risk to fundamental human rights to be allowed in public spaces.
- 2 July 2021: The UK’s former biometrics commissioner has told MPs that Parliament should explicitly legislate on the use of biometric technologies so it is clear to both police and private companies which uses are acceptable.
- 18 August 2021: Human rights group Liberty has criticised the UK government’s proposed update to its “surveillance camera code of practice”, claiming it does not properly take into account court findings on the use of live facial-recognition technology by police, or the dangers such a surveillance tool presents.
- 1 October 2021: Retrospective facial-recognition software purchased for £3m by the Met Police will be deployed in coming months amid continuing controversy around the use of biometric technologies by law enforcement bodies.
- 12 October 2021: Experts giving evidence to the House of Lords have said that UK police use of facial-recognition technology is disproportionate and ineffective, and further questioned the utility of algorithmic crime ‘prediction’ tools.
- 20 October 2021: Criminal justice sector bodies procuring artificial intelligence technologies should use their purchasing power to demand access to suppliers’ systems to test and prove their claims about accuracy and bias, an expert witness has told a House of Lords inquiry.
- 12 November 2021: Biometrics and surveillance camera commissioner Fraser Sampson has panned the UK government’s proposed plan to absorb the functions of those roles under the remit of the information commissioner.
- 24 November 2021: A Lords inquiry into the adoption of advanced algorithmic technologies by police in England and Wales has been told that new tools are being introduced without proper training and with little scrutiny of their impacts.
- 17 January 2022: Police algorithms and their impacts should be tested and challenged in court, rather than controlled by new laws, says policing minister.
- 7 February 2022: Met Police deploys facial recognition in Westminster.
- 30 March 2022: Lords inquiry finds UK police are deploying artificial intelligence and algorithmic technologies without a thorough examination of their efficacy or outcomes, and are essentially ‘making it up as they go along’.
- 12 April 2022: Expert witness in Lords police tech inquiry welcomes committee’s findings but questions whether its recommendations on how to end the ‘Wild West’ of police artificial intelligence and algorithmic technologies in the UK would be implemented.
- 29 June 2022: Independent review says new framework is needed to clear up legal and ethical concerns over the use of biometric data and technologies, which can impact privacy, freedom of expression and other human rights.
- 4 July 2022: The government has largely rejected the findings and recommendations of a House of Lords inquiry into police tech, which called for an overhaul of how police deploy artificial intelligence and algorithmic technologies.
- 13 July 2022: Met Police deploys facial-recognition technology in Oxford Circus.
- 26 July 2022: Privacy campaign group Big Brother Watch has filed a legal complaint against the Southern Co-op supermarket chain for its use of live facial recognition in stores, marking the first legal challenge brought against a retailer for using the technology in the UK.
- 10 August 2022: A Home Office scheme to biometrically scan the faces of convicted migrants who have already served their sentences has come under fire from privacy and human rights groups for being discriminatory.
- 12 August 2022: The Metropolitan Police Service is ramping up its deployments of live facial recognition despite ongoing concerns about the proportionality and necessity of the technology, as well as its impact on vulnerable or marginalised communities.
- 19 January 2023: Live facial-recognition technology should not be used by police in Newham until biometric and anti-discrimination safeguards are in place, according to a motion passed unanimously by the council, but the Met Police and the Home Office have indicated they will not suspend its use.
- 13 February 2023: Clear, comprehensive and coherent frameworks are needed to regulate police use of artificial intelligence and biometrics in the UK, the biometrics and surveillance camera commissioner has said.
- 24 February 2023: A culture of retention around biometric data in UK policing is damaging public trust, says UK biometrics commissioner, who is calling for clear regulation to govern police use of biometric technologies.
- 12 April 2023: The Met and South Wales Police have doubled down on their use of facial recognition technology after research found improved accuracy in their algorithms when using certain settings.
- 17 May 2023: An independent report commissioned by the biometrics commissioner of England and Wales reveals that the UK policing minister is pushing for wider adoption of facial-recognition technology by police, and further criticises the government’s proposed changes to surveillance oversight.
- 1 June 2023: The Met Police’s director of intelligence has appeared before MPs to make the case for its continuing use of facial-recognition technology, following announcements from the force and the Home Office that they intend to press on with its adoption.
- 6 July 2023: The biometrics and surveillance camera commissioner is calling for a review of public space surveillance to gain a clearer picture about the proliferation of Chinese surveillance technology across the public sector, but warns against applying double standards on companies just because they are from China.
- 15 August 2023: Biometrics and surveillance camera commissioner resigns.
- 31 August 2023: The UK’s Defence and Security Accelerator is running a ‘market exploration’ exercise on behalf of the Home Office to identify new facial-recognition capabilities for security and policing bodies in the UK.
- 5 October 2023: The policing minister’s plans to integrate the UK’s passport database with police facial-recognition systems have been met with criticism from campaigners, academics, and the biometrics commissioner for England and Wales.
- 27 October 2023: The policing minister’s “egregious proposal” to link the UK’s passport database with facial recognition systems is “unethical and potentially unlawful”, according to the Scottish biometrics commissioner.
- 16 November 2023: The outgoing biometrics and surveillance camera commissioner for England and Wales discusses police deployment of powerful new surveillance technologies, and the declining state of oversight in this area.
- 24 November 2023: UK police chiefs plan to expand use of facial-recognition technology through a nationwide roll-out of mobile-based tools and increases in retrospective facial-recognition, but oversight problems persist.
- 15 December 2023: Senior police officers confirm to Lords committee that facial recognition watchlist image selection is based on crime categories attached to people’s photos, rather than a context-specific assessment of the threat presented by a given individual.
- 10 January 2024: Scotland risks “sleepwalking” into a surveillance state that places every citizen in a permanent digital line-up if it does not deviate from the UK government’s data reform plans in 2024, according to the biometrics watchdog.
- 29 January 2024: UK police are expanding their use of live facial recognition technology without proper scrutiny or accountability, despite lacking a clear legal basis for their deployments, according to a Lords committee.