
IBM divests from facial-recognition market

IBM’s CEO writes to US Congress about the company’s decision to stop using and selling facial-recognition technology, and calls for a re-evaluation of whether it should be sold to law enforcement agencies

IBM will no longer sell facial-recognition technology and is calling for a “national dialogue” on whether and how it should be deployed by US law enforcement agencies.

In an open letter to Congress dated 8 June, IBM CEO Arvind Krishna said that while technology can increase transparency and help police protect communities, it must not be used to promote discrimination or racial injustice.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” he said.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

IBM later told The Verge that it will also stop developing and researching the technology.

“Artificial intelligence [AI] is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported,” said Krishna.

What do the experts think?

During the Centre for Data Ethics and Innovation’s (CDEI’s) facial-recognition panel on 9 June at CogX, an annual global leadership summit focused on AI and other emerging technologies, independent researcher and broadcaster Stephanie Hare reiterated her call from last year’s event for a moratorium on the technology’s rollout.

“By all means, if companies want to continue researching this so they could at least solve the accuracy problem, go for it. But we haven’t solved the social problem – not a single country in the world has solved that problem of what this does in terms of the chilling effect on democracy…how [people’s behaviour] changes when you’re being surveyed, we’ve done nothing to address the private sector’s use, and the police again come under quite a lot of criticism, but they at least have to announce when they’re using it,” she said.

Commenting on IBM’s decision, Hare added that tech companies drop unprofitable technologies all the time, “but they don’t write to US Congress and make a political point about it – so I think that’s the substantive change that IBM has done”.

According to Peter Fussey, a professor of sociology at the University of Essex who conducted the first independent study of the Metropolitan Police’s facial-recognition trials, the IBM decision is interesting because of how it changes the debate.

“Who knows what the motivation is, it could be ethics-washing, whatever it is, but I think something that’s quite interesting about that announcement today is it’s harder to therefore argue that regulating facial recognition is somehow anti-innovation,” he said.

“This has been an argument that’s been used a lot when people have sought to regulate the tech industry. But if, like Stephanie said, Google/Alphabet is calling for regulation, if IBM is pulling away, it’s harder to make that argument and I think it’s an interesting moment on that basis alone.”

Others were less convinced. Respected American attorney and legal analyst Jonathan Turley said: “I don’t think it’s going to make a difference – there are 1,000 companies that are spending billions on facial-recognition research.”

The facial-recognition industry shows no sign of slowing down: since the start of the Covid-19 coronavirus pandemic, the number of companies claiming to have developed facial-recognition tools that can identify masked faces has skyrocketed.

Using technology to increase police accountability

IBM’s announcement follows mass protests in the US against the police murder of George Floyd, a 46-year-old African-American who was killed in Minneapolis during an arrest for allegedly using a counterfeit note.

As the protests have spread, first to every state in the US and then internationally, technology companies have come under increased scrutiny for their contracts with law enforcement, with questions being raised about their complicity in police brutality and institutional racism.

Amazon, for example, has repeatedly refused to answer questions on how its own facial-recognition technology is used in policing, despite founder and CEO Jeff Bezos publicly coming out in support of the Black Lives Matter protests.

In 2018, a report by the American Civil Liberties Union (ACLU) found Amazon’s Rekognition software to be racially biased: in a test conducted by the civil rights organisation, it falsely matched 28 members of Congress with mugshot photos, with people of colour disproportionately represented among the false matches.
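The ACLU’s test is essentially a disaggregated false-match measurement: run probe faces against a gallery they are not in, and compare the rates at which different groups are wrongly matched. A minimal sketch of that idea in Python follows; the embeddings, similarity threshold and group labels are all hypothetical, and the code bears no relation to how Rekognition works internally.

```python
# Minimal sketch of a disaggregated false-match evaluation for a face
# matcher. All data here is hypothetical random noise, standing in for
# real face embeddings.
import numpy as np

rng = np.random.default_rng(0)

def false_match_rate(probes, gallery, threshold=0.3):
    """Fraction of probe faces wrongly matched to anyone in the gallery."""
    # Normalise embeddings so the dot product is cosine similarity
    probes = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    similarity = probes @ gallery.T
    # By construction, no probe is genuinely in the gallery, so any hit
    # above the (arbitrary, illustrative) threshold is a false match.
    return float(np.mean(similarity.max(axis=1) >= threshold))

# Hypothetical 128-dimensional embeddings: one mugshot gallery, plus
# probe sets for two demographic groups. A bias audit of the kind
# Krishna describes would compare these rates and report the gap.
gallery = rng.normal(size=(1000, 128))
for group in ("group_a", "group_b"):
    probes = rng.normal(size=(200, 128))
    print(group, false_match_rate(probes, gallery))
```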

In the letter, Krishna also expressed support for “police reform” through “modifications to the qualified immunity doctrine”, which shields police and other state officials from being held personally liable for actions that do not violate a clearly established statutory or constitutional right.

He added that national policy should be made to “encourage and advance the use of technology that bring[s] greater transparency and accountability to policing, such as body cameras and modern data analytics techniques”.

However, in November 2015, a trial of body-worn cameras conducted by the Metropolitan Police, alongside the Mayor’s Office for Police and Crime, the College of Policing and the Home Office, found the technology had little-to-no impact on several areas of policing.

The trial revealed the cameras had “no overall impact” on the “number or type of stop and searches”, “no effect” on the proportion of arrests for violent crime, and “no evidence” that the cameras changed the way officers dealt with either victims or suspects.

It added that while body-worn cameras were also associated with a reduction in the number of allegations against officers, the effect “did not reach statistical significance”.

It is also unclear how “modern data analytics techniques” could increase police transparency and accountability, as when police use data analytics, it is typically for “predictive policing” – a technique that attempts to identify potential criminal activity and patterns, either in individuals or in geographical areas, depending on the model.
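To make that technique concrete, the sketch below shows place-based prediction in its simplest form: counting past recorded incidents per map grid cell and flagging the busiest cells for extra patrols. The incident data, grid and scoring are all hypothetical, and commercial systems are far more elaborate, but even this toy version exposes the feedback structure critics point to.

```python
# Toy sketch of place-based "predictive policing": rank map grid cells
# by historical incident counts and flag the top cells for patrols.
from collections import Counter

def hotspots(incident_log, top_k=3):
    """Rank grid cells by recorded incident count; return the top k."""
    counts = Counter(incident_log)
    return [cell for cell, _ in counts.most_common(top_k)]

# Hypothetical log of recorded incidents, one (grid_x, grid_y) cell each
incident_log = [(2, 3), (2, 3), (5, 1), (2, 3), (5, 1), (0, 0)]
print(hotspots(incident_log))  # cells that would receive extra patrols

# The feedback risk: extra patrols in flagged cells record more incidents
# in exactly those cells, raising their scores in the next round and
# entrenching historical patterns of enforcement.
```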

According to evidence submitted to the United Nations (UN) by the Equality and Human Rights Commission (EHRC), the use of predictive policing can replicate and magnify “patterns of discrimination in policing, while lending legitimacy to biased processes”.

It added: “A reliance on ‘big data’ encompassing large amounts of personal information may also infringe upon privacy rights and result in self-censorship, with a consequent chilling effect on freedom of expression and association.”

