Developers have a moral duty to create ethical AI
Corsight AI publishes report on how organisations can develop more human-centric AI and biometric technologies
Developers of artificial intelligence (AI), machine learning (ML) and biometric-related technologies have “a moral and ethical duty” to ensure the technologies are only used as a force for good, according to a report written by the UK’s former surveillance camera commissioner.
Developers must be cognisant of both the social benefits and risks of the AI-based technologies they produce, and have a responsibility to ensure they are used only for the benefit of society, said the whitepaper, which was published by facial-recognition supplier Corsight AI in response to the European Commission’s (EC) proposed Artificial Intelligence Act (AIA).
“Organisational values and principles must irreversibly commit to only producing technology as a force for good,” it said. “The philosophy must surely be that we put the preservation of internationally recognised standards of human rights, our respect for the rule of law, the security of democratic institutions and the safety of citizens at the heart of what we do.”
It added that a ‘human in the loop’ development strategy is key to assuaging public concerns over the use of AI and related technologies, in particular facial-recognition technology.
“The most important ingredient of… [developing facial-recognition systems] is the human at the centre of the process,” it said. “Training, bias awareness, policies upon deployment, adherence to law, rules, regulations and ethics are key ingredients.
“Developers must work with the human to create a product that is human intuitive and not the other way around. Consideration of providing legal and regulatory support in the use of such sophisticated software must be a foremost consideration for developers.”
To make the technology more human-centric, the report further encourages developers to work closely alongside their client base to understand user requirements and the legitimacy of the project, as well as to support “compliance to statutory obligations and to build appropriate safeguards where vulnerabilities may arise”.
Speaking to Computer Weekly, the paper’s author Tony Porter – Corsight’s chief privacy officer and the UK’s former surveillance camera commissioner – said that when AI-related technologies such as facial recognition have been used unlawfully, it is because of how they were deployed in a particular context rather than the technology in and of itself.
He added that part of his role at Corsight is to explain “the power of the technology, but also the correct and judicious use of it” to clients, which for Porter includes placing humans at the heart of the technology’s development and operation.
With police use of the tech in particular, Porter reiterated that it is important for suppliers to “support the end user’s positive and enduring obligation to follow” the Public Sector Equality Duty, mainly through greater transparency.
“My view is they [the developers] should be open about it, they should release the figures and the stats, they should explain where they think there’s a problem and an issue, because if we’re talking about trust, then how do you get trust? If we hand the black box over and don’t let anybody know what’s in it, you just can’t,” he said.
A ‘human in the loop’ approach
Porter said that regulators and lawmakers need to focus more on ensuring there is a human in the loop throughout the development and operation of various AI-based technologies.
He added that while algorithms are obviously central to the operation of facial-recognition systems, “the biggest part is the human, the training, their understanding of the software. That is very, very complex, so there needs to be a very significant piece of work around that.
“I've been urging and encouraging Corsight to get ahead of that curve, to be in a position where clients who come to us have their hand held in relation to what they need to do, what the legalities are, what the pitfalls are, and how to maximise use.”
However, a July 2019 report from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre – which marked the first independent review into trials of live facial-recognition (LFR) technology by the Metropolitan Police – highlighted a discernible “presumption to intervene” among police officers using the tech, meaning they tended to trust the outcomes of the system and engage individuals it said matched the watchlist in use, even when they did not.
When asked how organisations deploying LFR can avoid this situation – whereby the humans looped in are prompted by the technology to make a poor decision – Porter said developers can set up processes to highlight where risks can arise to human operators.
“The human operator – who we assume does not know how software is developed or how an algorithm is developed – is told what the risks are, what the variances are, what they need to know,” he said.
“If you work on the basis that no software is 100% bias free, and as I understand it can never be, what we can do is close that gap, take the hand of the human and say, ‘Look, if there is a risk, however small that risk is, you need to understand it’.
“What would that give a human operator? Well, it gives the operator and the management the opportunity to challenge presumptions, to focus training, to allow any responders to be aware of the variance, to know they’re perhaps operating in a less certain world.”
The whitepaper further noted that, however flawless the design of a technology, it can of course be abused when “operated by a dysfunctional or oppressive end user”, and encouraged developers to work collaboratively with clients to understand their use case, its requirements and, ultimately, its legitimacy.
“Inclusion and diversity must be central to a developer’s efforts to ensure that any potential for the technology to discriminate against people or harm their human rights is removed,” it said. “Linked to this, companies need to develop policies that clearly stipulate they will not trade with customers who do not support and uphold internationally recognised standards of human rights.”
The UK’s patchwork legal framework for biometrics
In July 2019, the House of Commons Science and Technology Committee published a report that identified the lack of legislation surrounding LFR, and called for a moratorium on its use until a framework was in place.
In its official response to the report, which was given after a delay of nearly two years in March 2021, the UK government claimed there was “already a comprehensive legal framework for the management of biometrics, including facial recognition”.
Outlining the framework, the government said it included police common law powers to prevent and detect crime, the Data Protection Act 2018 (DPA), the Human Rights Act 1998, the Equality Act 2010, the Police and Criminal Evidence Act 1984 (PACE), the Protection of Freedoms Act 2012 (POFA), and police forces’ own published policies.
In early July 2021, the UK’s former biometrics commissioner Paul Wiles told the Science and Technology Committee that while there is currently a “general legal framework” governing the use of biometric technologies, Parliament needs to create legislation that explicitly deals with the use of these technologies in the UK.
Porter also told Computer Weekly that he agreed with Wiles that the current framework is too complicated and should be simplified.
“It would be quite straightforward to encapsulate this in a harmonious, legal and statutory framework that gives clarity to regulators, to operators, to police, and the private citizen,” he added.
However, unlike the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) – which have called for a complete ban on biometrics in public spaces on the basis they present an unacceptable interference with people’s fundamental rights and freedoms – Porter errs on the side of having a dedicated regulator that governs use of the tech against a dedicated legal framework instead.
Responding to the EC’s AIA proposal, civil society groups and digital rights experts have previously told Computer Weekly that while the proposal was a step in the right direction, it ultimately fails to address the fundamental power imbalances between those who develop and deploy the technology, and those who are subject to it.
For Porter, who recognises that not everybody is on board with many AI use cases, particularly facial recognition, it ultimately comes down to building trust in the technology: “You may think it’s a great idea to stop a paedophile when you can in certain circumstances, but not at the risk of damaging hundreds and hundreds of people that are out of or not loved by the system.
“[Biometrics] has to be overseen by a body that demands public trust, that has an irrefutable, irrebuttable reputation for honesty, integrity, capability… [because marginalised communities] will know if the state hasn’t got a proper mechanism around it.”
Read more about facial recognition and other biometric technologies
- Information commissioner’s concern over the problematic use of facial recognition in public spaces has prompted her to publish official guidance on its deployment, while civil society calls for an outright ban.
- Facial-recognition supplier claims new system can accurately identify masked faces, therefore promoting public health during the pandemic. But questions remain about whether its existing UK law enforcement clients will be deploying the technology.
- Met Police commissioner has called for a legislative framework to govern police use of new technologies, while defending the decision to use live facial recognition technology operationally without it.