Auditing for algorithmic discrimination
Despite the abundance of decision-making algorithms with social impacts, many companies are not conducting the specific bias and discrimination audits that could help mitigate those algorithms' potentially negative consequences
Artificial intelligence (AI) systems and algorithmic decision-making are mainstays of every sector of the global economy.
From search engine recommendations and advertising to credit scoring and predictive policing, algorithms can be deployed in an expansive range of use cases, and are often posited by advocates as a dispassionate and fairer means of making decisions, free from the influence of human prejudice.
However, according to Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, in practice many of the mathematical models that power this big data economy “distort higher education, spur mass incarceration, pummel the poor at nearly every juncture, and undermine democracy”, all while “promising efficiency and fairness”.
“Big data processes codify the past. They do not invent the future. We have to explicitly embed better values into our algorithms, creating big data models that follow our ethical lead,” she wrote. “Sometimes that means putting fairness ahead of profit.”
Although awareness of algorithms and their potential for discrimination has increased significantly over the past five years, Gemma Galdon Clavell, director of Barcelona-based algorithmic auditing consultancy Eticas, tells Computer Weekly that too many in the tech sector still wrongly see technology as socially and politically neutral, creating major problems in how algorithms are developed and deployed.
On top of this, Galdon Clavell says most organisations deploying algorithms have very little awareness or understanding of how to address the challenges of bias, even if they do recognise it as a problem in the first place.
The state of algorithmic auditing
Many of the algorithms Eticas works on are “so badly developed, oftentimes our audit work is not just to audit but to actually reassess where everything’s being done”, Galdon Clavell says.
While analysing and processing data as part of an algorithm audit is not a particularly lengthy process, Eticas’s audits take “six to nine months” because of how much work goes into understanding how algorithm developers are making decisions and where all the data is actually coming from, she adds.
“Basically all these algorithms have a really messy back end, like someone’s not even been labelling the data or indexing everything they’ve been using. There’s so many ad-hoc decisions we find in algorithms with a social impact – it’s just so irresponsible, it’s like someone building a medicine and forgetting to list the ingredients they used,” she says, adding that 99% of the algorithms she comes across are in this state.
However, there is a distance between “being aware and actually knowing what to do with that awareness”, she says, before pointing out that while the technology ethics world has been good at identifying problems, it has not been very constructive in offering solutions or alternatives.
“What we do is work with the [client’s] team, ask them, ‘What is the problem you want to solve, what data have you been gathering, and what data did you want to gather that you couldn’t gather?’, so really trying to understand what it is they want to solve and what data they’ve been using,” she says.
“Then what we do is look at how the algorithm has been working, the outcomes of those algorithms, and how it’s been calculating things. Sometimes we just re-do the work of the algorithm to make sure that all the data we caught is accurate and then spot whether there’s any particular groups that are being affected in ways that are not statistically justified.”
From here, Eticas will also bring in “specific experts for whatever subject matter the algorithm is about”, so that an awareness of any given issue’s real-world dynamics can be better translated into the code, in turn mitigating the chances of that harm being reproduced by the algorithm itself.
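As an illustration of the kind of statistical check Galdon Clavell describes, the sketch below compares positive-outcome rates between two groups in a decision log and flags disparities that look statistically unjustified. The group names, data and thresholds are hypothetical, invented for the example rather than drawn from Eticas’s methodology.

```python
import math
from collections import defaultdict

# Hypothetical audit records: (protected_group, algorithm_said_yes)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
    # ... a real audit would replay the algorithm's full decision log
]

def outcome_rates(records):
    """Positive-outcome rate and sample size per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}, totals

def two_proportion_z(p1, n1, p2, n2):
    """Z statistic for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se else 0.0

rates, counts = outcome_rates(decisions)
(g1, r1), (g2, r2) = sorted(rates.items())
z = two_proportion_z(r1, counts[g1], r2, counts[g2])
impact_ratio = min(r1, r2) / max(r1, r2) if max(r1, r2) else 1.0

print(f"{g1}: {r1:.0%}, {g2}: {r2:.0%}, impact ratio {impact_ratio:.2f}, z = {z:.2f}")
# An impact ratio well below 0.8, or |z| above roughly 1.96, would prompt a closer look.
```

In practice the protected attributes, statistical tests and thresholds would depend on the domain and on the subject-matter experts brought into the audit.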
How can bias enter algorithmic decision-making?
According to Galdon Clavell, bias can manifest itself at multiple points during the development and operation of algorithms.
“We realise there are problems throughout the whole process of thinking that data can help you address a social issue. So if your algorithm is for, say, organising how many trucks need to go somewhere to deliver something, then maybe there’s no social issues there.
“But for most of the algorithms we work with, we see how those algorithms are making decisions that have an impact on the real world,” she says, adding that bias is already introduced at the point of deciding what data to even use in the model.
“Algorithms are just mathematical functions, so what they do is code complex social realities to see whether we can make good guesses about what may happen in the future.
“All the critical data that we use to train those mathematical functions comes from an imperfect world, and that’s something that engineers often don’t know and it’s understandable – most engineers have had no training on social issues, so they’re being asked to develop algorithms to address social issues that they don’t understand.
“We’ve created this technological world where engineers are calling all the shots, making all the decisions, without having the knowledge on what could go wrong.”
Galdon Clavell goes on to say that many algorithms are based on machine learning models and require periodic evaluation to ensure the algorithm has not introduced any new, unexpected biases into its own decision-making as it learns.
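To make the idea of periodic evaluation concrete, here is a minimal sketch that tracks a simple per-group outcome gap across monthly snapshots of a self-learning model and flags when the gap widens. The figures, group names and threshold are hypothetical, not taken from any Eticas engagement.

```python
# Hypothetical monthly snapshots: per-group positive-outcome rates observed
# in each month's decisions, recorded after the model has kept learning.
monthly_rates = {
    "2024-01": {"group_a": 0.62, "group_b": 0.58},
    "2024-02": {"group_a": 0.63, "group_b": 0.55},
    "2024-03": {"group_a": 0.66, "group_b": 0.49},
}

MAX_GAP = 0.10  # tolerated absolute gap between groups; a policy choice, not a standard

def outcome_gap(rates):
    """Largest absolute difference in positive-outcome rates between groups."""
    values = list(rates.values())
    return max(values) - min(values)

for month, rates in sorted(monthly_rates.items()):
    gap = outcome_gap(rates)
    status = "REVIEW" if gap > MAX_GAP else "ok"
    print(f"{month}: gap {gap:.2f} [{status}]")

# A gap that widens from one retraining cycle to the next is the kind of
# drift a periodic audit would surface for human review.
```

The point is less the specific metric than the cadence: a model that keeps learning needs its fairness properties re-checked on a schedule, not just at launch.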
“Interestingly, we’re also seeing issues of discrimination at the point of conveying the algorithmic decision,” says Galdon Clavell, explaining how human operators are often not properly able to interrogate, or even understand, the machine’s choice, therefore exposing the process to their own biases as well.
As a real-world example of this, in January 2020 Metropolitan Police commissioner Cressida Dick defended the force’s operational rollout of live facial-recognition (LFR) technology, an algorithmically powered tool that uses digital images to identify people’s faces, partly on the basis that human officers will always make the final decision.
However, the first and only independent review of the Met’s LFR trials from July 2019 found there was a discernible “presumption to intervene”, meaning it was standard practice for officers to engage an individual if told to do so by the algorithm.
“Through algorithmic auditing what we’re trying to do is address the whole process, by looking not only at how the algorithm itself amplifies problems, but at how you have translated a complex social problem into code, into data, because the data you decide to use says a lot about what you’re trying to do,” says Galdon Clavell.
Barriers to auditing
While companies regularly submit to and publish the results of independent financial audits, Galdon Clavell notes there is no widespread equivalent for algorithms.
“Of course, a lot of companies are saying, ‘There’s no way I’m going to be publishing the code of my algorithm because I spent millions of dollars building this’, so we thought why not create a system of auditing by which you don’t need to release your code, you just need to have an external organisation (that is trusted and has its own transparency mechanisms) go in, look at what you’re doing, and publish a report that reflects how the algorithms are working,” she says.
“Very much like a financial audit, you just go in and certify that things are being done correctly, and if they’re not, then you tell them, ‘Here’s what you need to change before I can say in my report that you’re doing things well’.”
While Galdon Clavell notes it is not difficult to find companies that do not care about these issues, in her experience most understand they have a problem, but do not necessarily know how to go about fixing it.
“The main barrier at the moment is people don’t know that algorithmic auditing exists,” she says. “In our experience, whenever we talk to people in the industry about what we do, they’re like, ‘Oh wow, so that’s a thing? That’s something that I can do?’, and then we get our contracts out of this.”
Galdon Clavell says algorithmic audits are not common knowledge because, particularly over the past five years, the tech ethics world has focused on high-level principles rather than practice.
“I’m just tired of the principles – we have all the principles in the world, we have so many documents that say the things that matter, we have meta-analysis of principles of ethics in AI and technology, and I think it’s time to move beyond that and actually say, ‘OK, so how do we make sure that algorithms do not discriminate?’ and not just say, ‘They should not discriminate’,” she says.
Re-thinking our approach to technology
While Galdon Clavell is adamant more needs to be done to raise awareness and educate people on how algorithms can discriminate, she says this needs to be accompanied by a change in how we approach technology itself.
“We need to change how we do technology. I think the whole technological debate has been so geared by the Silicon Valley idea of ‘move fast and break things’ that when you break our fundamental rights, it doesn’t really matter,” she says.
“We need to start seeing technology as something that helps us solve problems. Right now, technology is like a hammer always looking for nails – ‘Let’s look for problems that could be solved with blockchain, let’s look for problems that we can solve with AI’ – actually, no, what problem do you have? And let’s look at the technologies that could help you solve that problem. But that’s a completely different way of thinking about technology than what we’ve done in the past 20 years.”
As an alternative, Galdon Clavell highlights how AI-powered algorithms have been used as a ‘bias diagnosis’ tool, showing how the same technology can be re-purposed to reinforce positive social outcomes if the motivation is there.
“There was this AI company in France that used the open data from the French government on judicial sentencing, and they found some judges had a clear tendency to give harsher sentences to people of migrant origin, so people were getting different sentences for the same offence because of the bias of judges,” she says.
“This is an example where AI can help us identify where human bias has been failing specific groups of people in the past, so it’s a great diagnosis tool when used in the right way.”
However, she notes the French government’s response to this was not to address the problem of judicial bias, but to forbid the use of AI to analyse the professional practices of magistrates and other members of the judiciary.
“When technology can really help us put an end to some really negative dynamics, oftentimes that’s uncomfortable,” she says.
However, Galdon Clavell adds that many companies have started to view consumer trust as a competitive advantage, and are slowly starting to change their ways when it comes to developing algorithms with social impacts.
“I’ve certainly found that some of the clients we have are people who really care about these things, but others care about the trust of their clients and they realise that doing things differently, doing things better, and being more transparent is also a way for them to gain a competitive advantage in the space,” she says.
“There’s also a slow movement in the corporate world that means they realise they need to stop seeing users as this cheap resource of data, and see them as customers who want and deserve respect, and want commercial products that do not prey on their data without their knowledge or ability to consent.”
Read more about ethics and technology
- Immigration rights campaigners have filed a judicial review against the Home Office, challenging its use of a selective algorithm in the processing of visa applications.
- The UK government should act immediately to deal with a ‘pandemic of misinformation’ and introduce a draft online harms bill as a matter of urgency to rebuild trust in democratic institutions, warns a report from the Lords Democracy and Digital Technologies Committee.
- A new set of nationally approved guidelines is needed to ensure police algorithms are deployed in lawful and ethical ways, claims a report by the Royal United Services Institute (Rusi) security think tank.