ICO issues guidance on facial recognition in public spaces
The information commissioner’s concern over the problematic use of facial recognition in public spaces has prompted her to publish official guidance on its deployment, while civil society calls for an outright ban
The UK information commissioner is “deeply concerned” about the inappropriate and reckless use of live facial recognition (LFR) technologies in public spaces, noting that none of the organisations investigated by her office were able to fully justify their use.
In a blog post published on 18 June 2021, information commissioner Elizabeth Denham said that although LFR technologies “can make aspects of our lives easier, more efficient and more secure”, the risks to privacy increase when they are used to scan people’s faces in real time and in more public contexts.
“When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” Denham wrote, adding that although “it is not my role to endorse or ban a technology”, there is an opportunity to ensure its use does not expand without due regard for the law.
“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you,” she said. “It can be used to instantly profile you to serve up personalised adverts or match your image against known shoplifters as you do your weekly grocery shop.
“It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”
Informed by her interpretation of data protection law and six separate investigations into LFR by the Information Commissioner’s Office (ICO), Denham has also published an official “Commissioner’s Opinion” to act as guidance for companies and public organisations looking to deploy biometric technologies.
“Today’s Opinion sets out the rules of engagement,” she wrote in the blog. “It builds on our Opinion into the use of LFR by police forces and also sets a high threshold for its use.
“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work.”
In the Opinion, Denham noted that any organisation considering deploying LFR in a public place must also carry out a data protection impact assessment (DPIA) to decide whether or not to go ahead.
“This is because it is a type of processing which involves the use of new technologies, and typically the large-scale processing of biometric data and systematic monitoring of public areas,” she wrote. “Even smaller-scale uses of LFR in public places are a type of processing which is likely to hit the other triggers for a DPIA as set out in ICO guidance.
“The DPIA should begin early in the life of the project, before any decisions are taken on the actual deployment of the LFR. It should run alongside the planning and development process. It must be completed prior to the processing, with appropriate reviews before each deployment.”
Read more about biometric technologies
- Privacy and human rights organisations have asked data protection regulators in the UK, France, Austria, Italy and Greece to investigate controversial facial recognition company Clearview AI.
- Sweden’s data watchdog has found that Swedish police failed to conduct the data protection checks required by law before using a controversial facial recognition tool.
- The European Commission’s proposal to regulate artificial intelligence is a step in the right direction but fails to address the fundamental power imbalances between those who develop and deploy the technology, and those who are subject to it, experts have warned.
On 7 June 2021, Access Now and more than 200 other civil society organisations, activists, researchers and technologists from 55 countries signed an open letter calling for legal prohibitions on the use of biometric technologies in public spaces, whether by governments, law enforcement or private actors.
“Facial recognition and related biometric recognition technologies have no place in public,” said Daniel Leufer, Europe policy analyst at Access Now. “These technologies track and profile people as they go about their daily lives, treating them as suspects and creating dangerous incentives for overuse and discrimination. They need to be banned here and now.”
On top of a complete ban on the use of these technologies in publicly accessible spaces, the civil society coalition is also calling on governments around the world to stop all public investment in biometric technologies that enable mass surveillance and discriminatory targeted surveillance.
“Amazon, Microsoft and IBM have backed away from selling facial recognition technologies to police,” said Isedua Oribhabor, US policy analyst at Access Now. “Investors are calling for limitations on how this technology is used. This shows that the private sector is well aware of the dangers that biometric surveillance poses to human rights.
“But being aware of the problem is not enough – it is time to act. The private sector should fully address the impacts of biometric surveillance by ceasing to create or develop this technology in the first place.”
The European Data Protection Supervisor has also been highly critical of biometric identification technologies, previously calling for a moratorium on their use and now advocating that they be banned from public spaces.
Speaking at CogX 2021 about the regulation of biometrics, Matthew Ryder QC, of Matrix Chambers, said that although governments and companies often claim to deploy the technologies only in limited, tightly controlled circumstances, without retaining or repurposing the data, legislation will often build in a range of exceptions that allow exactly that to happen.
“The solution to that can be much harder-edged rules than we would normally expect to see in a regulatory environment, because both governments and companies are so adept at gaming the rules,” said Ryder, adding that although it may not be a malicious exercise, their constant “stress testing” of the regulatory system can lead to use cases which, “on the face of it, you normally wouldn’t be allowed to do”.
He added that regulators and legislators both need to get comfortable setting “hard lines” for tech companies looking to develop or deploy such technologies. “I would err on the side of harder regulations which then get softer, rather than allowing a relatively permissive regulatory view with lots of exceptions,” he said.