
AI researcher says police tech suppliers are hostile to transparency

Expert witness in Lords police tech inquiry welcomes committee’s findings, but questions whether its recommendations on how to end the ‘Wild West’ of police artificial intelligence and algorithmic technologies in the UK will be implemented

Artificial intelligence (AI) researcher Sandra Wachter says that although the House of Lords inquiry into police technology “was a great step in the right direction” and succeeded in highlighting the major concerns around police AI and algorithms, the conflict of interest between criminal justice bodies and their suppliers could still hold back meaningful change. 

Wachter, who was invited to the inquiry as an expert witness, is an associate professor and senior research fellow at the Oxford Internet Institute who specialises in the law and ethics of AI.

Speaking with Computer Weekly, Wachter said she is hopeful that at least some of the recommendations will be taken forward into legislation, but is worried about the impact of AI suppliers’ hostility to transparency and openness.

“I am worried about it mainly from the perspective of intellectual property and trade secrets,” she said. “There is an unwillingness or hesitation in the private sector to be completely open about what is actually going on for various reasons, and I think that might be a barrier to implementing the inquiry’s recommendations.”

Following a 10-month investigation into the use of advanced algorithmic technologies by UK police, including facial recognition and various crime “prediction” tools, the Lords Home Affairs and Justice Committee (HAJC) found that there was “much enthusiasm” about the use of AI systems from those in senior positions, but “we did not detect a corresponding commitment to any thorough evaluation of their efficacy”.

The HAJC also noted a range of “dubious selling practices” stemming from a conflict of interest between private sector suppliers and police forces, which are obliged under the Public Sector Equality Duty (PSED) to consider how their policies and practices could be discriminatory.

To deal with issues around procuring from private suppliers, the HAJC recommended giving extra support to police buyers so they could become “proficient customers” of new technologies, and establishing a national body to certify new technology.

“Pre-deployment certification could, in itself, reassure them about the quality of the products they are procuring. Enhanced procurement guidelines are also needed,” the committee said, adding that local and regional ethics committees should also be established on a statutory basis to investigate whether any given technology’s proposed and actual uses are “legitimate, necessary and proportionate”.

It also noted that although there were currently “no systemic obligations” on law enforcement bodies to disclose information about their use of advanced technologies, a “duty of candour” should be established, alongside a public register of police algorithms, so that regulators and the general public alike can understand exactly how new tools are being deployed.

Promoting openness and meaningful transparency

Wachter – who told the HAJC in October 2021 that UK law enforcement bodies procuring AI technologies should use their purchasing power to demand access to suppliers’ systems to test and prove their claims about accuracy and bias – pointed out that suppliers’ lack of transparency about their systems is very unlikely to be a “technical problem of we cannot explain it”, but rather a case of “we don’t necessarily want to tell you”.

In August 2020, the use of live facial recognition technology by South Wales Police (SWP) was deemed unlawful by the Court of Appeal, in part because the force did not comply with its PSED.

It was noted in the judgment that the manufacturer in that case – Japanese biometrics firm NEC – did not divulge details of its system to SWP, which meant the force could not fully assess the technology and its impacts.

“For reasons of commercial confidentiality, the manufacturer is not prepared to divulge the details so that it could be tested,” said the ruling. “That may be understandable, but in our view it does not enable a public authority to discharge its own, non-delegable, duty.”

Asked about the example of SWP, Wachter said she thinks there is a middle ground. “When people talk about transparency, they usually talk like one or zero – so either everything is transparent or nothing is transparent,” she said. “I think that’s a little bit misguided – not everybody needs to know everything, but the right people need to know enough.”


Wachter said part of the issue is that police users are buying the private suppliers’ arguments that certain aspects of the technology simply cannot be disclosed or discussed.

To get around this, she said, the focus should be on building in trustworthiness and reliability, and she agreed with the HAJC on the need for a third-party certification system – much like an MoT for vehicles – in which qualified and trusted experts analyse the technology to understand exactly how it works and to ensure it is not causing harm.

As for how much information should be included in the proposed public registers of police algorithms, Wachter said that while there must always be open information about what technology is being used by police, she suggested going further by making companies publish their test results for the tech.

“The general public has a right to know what their tax money is being spent on,” she said. “And if it’s being used to deter people, send them to prison, to surveil them, then I have a right to know that this technology is working as intended.”

Wachter’s own peer-reviewed academic work has revolved around how to test AI systems for bias, fairness and compliance with the standards of equality law in both the UK and the European Union (EU).

The method developed by Wachter and her colleagues – dubbed “counterfactual explanations” – shows why and how a decision was made (for example, why a person was sent to prison), and what would need to be different to achieve a different result, which can be a useful basis for challenging decisions. All of this is done without infringing on companies’ intellectual property rights.
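As a rough illustration of the idea, the sketch below searches for a counterfactual against a stand-in scoring model: it nudges one input feature until the decision flips, reporting the smallest change found. The model, feature names and thresholds here are hypothetical assumptions for illustration only, not the method or data from Wachter’s published work.

```python
# Hypothetical sketch of a counterfactual explanation: given a black-box
# decision function, find a small change to one input that flips the outcome.
# This is an illustrative toy, not the method described in Wachter's papers.

def decision(features: dict) -> bool:
    """Stand-in for a supplier's black-box risk model (assumed, for illustration)."""
    score = 0.6 * features["prior_offences"] - 0.1 * features["years_employed"]
    return score > 1.0  # True = "high risk"

def counterfactual(features: dict, feature: str, step: float, max_steps: int = 100):
    """Search for the smallest change to one feature that flips the decision."""
    original = decision(features)
    candidate = dict(features)
    for _ in range(max_steps):
        candidate[feature] += step
        if decision(candidate) != original:
            return candidate  # e.g. "with one fewer prior offence, the risk flag flips"
    return None  # no counterfactual found within the search budget

subject = {"prior_offences": 3, "years_employed": 2}
print("decision:", decision(subject))
print("counterfactual:", counterfactual(subject, "prior_offences", step=-1))
```

The point of such an explanation is that it tells the person affected what would have had to differ for the outcome to change, without requiring the supplier to disclose the model’s internals.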

“When you run that test, we say you should publish the results to show to the outside world that your algorithm is adhering to that,” she said, adding that suppliers are always obliged to be legally compliant. “If your system is racist and you don’t know about it, that doesn’t matter – you’re still going to be liable. So the incentive structure is that you should be testing, testing, testing, because you cannot tell a regulator afterwards ‘oh, I didn’t know what was going on’ – if you have to do it anyway, then you might as well publish it.”
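The kind of published test result Wachter describes could, for instance, look something like the following sketch, which simply compares positive-decision rates across demographic groups on an evaluation set. The group labels, data and choice of metric are assumptions made here for illustration, not figures from any real system or a prescription from the inquiry.

```python
# Hypothetical sketch of a publishable bias test: compare the rate at which a
# system flags people in different demographic groups on an evaluation set.
from collections import defaultdict

def disparity_report(decisions):
    """decisions: list of (group, was_flagged) pairs; returns flag rate per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, was_flagged in decisions:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {group: flagged[group] / totals[group] for group in totals}

# Toy evaluation data, assumed purely for illustration
evaluation = [("group_a", True), ("group_a", False), ("group_b", True), ("group_b", True)]
print(disparity_report(evaluation))  # {'group_a': 0.5, 'group_b': 1.0} – a gap a supplier would need to explain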

Potential government resistance to change

Although the government is yet to formally respond to the inquiry’s findings – and has until 30 May 2022 to do so – policing minister Kit Malthouse has previously suggested to the HAJC that the use of new technologies by police should be tested in court rather than defined by new legislation, which he said could “stifle innovation”.

This is in line with previous government claims about police technology. For example, in response to a July 2019 Science and Technology Committee report, which called for a moratorium on police use of live facial recognition technology until a proper legal framework was in place, the government claimed in March 2021 – after a two-year delay – that there was “already a comprehensive legal framework for the management of biometrics, including facial recognition”.

But according to Wachter, while the approach suggested by Malthouse could be acceptable in some limited circumstances – such as “if we are not sure if and when harm could actually arise” – the harm caused by tools such as facial recognition and “predictive” police analytics has already been well documented.

“We know the data is problematic,” she said. “We know the systems are problematic. Nobody can really pretend there is no problem.”

Wachter added that the vast majority of people in the UK simply do not have the resources to challenge police in court on their use of technology. “To say, ‘well, let’s just try and see who comes and complains’, that’s not what a legislator should do,” she said. “You should be protecting everybody because everybody’s freedom is at stake.”

Responding to the argument that legislation would “stifle innovation”, Wachter said: “It’s such a boring argument – most of the time, when people say innovation, they mean profit. Let’s not confuse those two things.

“It doesn’t matter what laws there are, I can do research on and develop whatever I want. The hold-back is whether or not something is being deployed in practice, and then we’re talking about profits. I think that’s very often what people mean.”

She added: “Good law is geared to guide ethical and responsible innovation and is trying to prevent harmful innovation. Those who do not want to follow ethical guidelines and those rules, I’m not sure if they are the ones I want to do business with, especially in the criminal justice sector.”

Although the HAJC concluded, in line with a number of expert witnesses to the inquiry, that those responsible for deploying police tech are essentially “making it up as they go along” without due consideration for the efficacy and impacts of systems, both Malthouse and the Strategic Review of Policing, published in March 2022, show a clear commitment to procuring more technology for police.

As for why UK police are so committed to rolling out new tech despite its often questionable effectiveness, Wachter said: “I think it’s very much driven by ideas of austerity and trying to cut costs. The public sector has always been, but especially now, under massive pressure to cut costs, and new technology is seen as a way of achieving that.”
