Home Office drops ‘racist’ visa algorithm

The Home Secretary has pledged to stop using an algorithmic decision-making system to assess visa applications after migrant welfare and technology justice groups challenged the tool, arguing that it helped to create a hostile environment for migrants

The Home Office has agreed to scrap its “visa streaming” algorithm in response to the threat of legal action from the Joint Council for the Welfare of Immigrants (JCWI) and digital rights group Foxglove.

From 7 August, the Home Secretary, Priti Patel, will suspend use of the algorithm “pending a redesign of the process”, which she has pledged will take into account issues of unconscious bias and discrimination, as well as the use of nationalities to automatically assign people a risk score.

The Home Office has said that the redesign will be completed by 30 October 2020, and that it has plans to undertake equality impact and data protection impact assessments for processes adopted in the interim period.

The algorithm assigns each visa applicant a red, yellow or green risk rating, which plays a major role in determining the outcome of the application. According to JCWI and Foxglove, it is racially discriminatory because applicants holding “suspect” nationalities are allegedly given higher risk scores.
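To make that description concrete, the kind of “streaming” rule being described, in which nationality feeds directly into a red, yellow or green rating, might look roughly like the sketch below. This is purely illustrative: the country list, weights and thresholds are invented, and the Home Office has not published how its tool actually scored applicants.

```python
# Hypothetical sketch of a nationality-weighted visa "streaming" rule.
# Country names, weights and thresholds are invented for illustration only;
# they are not the Home Office's actual (unpublished) rules.

HIGH_RISK_NATIONALITIES = {"Country A", "Country B"}  # placeholder "suspect" list

def stream_application(nationality: str, prior_refusals: int) -> str:
    """Return a red/yellow/green rating for a visa application."""
    score = prior_refusals
    if nationality in HIGH_RISK_NATIONALITIES:
        score += 10  # nationality alone is enough to push an applicant towards "red"
    if score >= 10:
        return "red"     # intensive scrutiny, more likely to be refused
    if score >= 5:
        return "yellow"
    return "green"       # light-touch review ("speedy boarding")
```

The point of the sketch is that once nationality carries this much weight, two otherwise identical applications can be routed to very different levels of scrutiny.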

The Home Office has admitted it holds a list of countries considered more likely to pose a “risk”, but has declined to identify publicly which countries are on it.

“We’re delighted the Home Office has seen sense and scrapped the streaming tool. Racist feedback loops meant that what should have been a fair migration process was, in practice, just ‘speedy boarding for white people’,” said Cori Crider, founder and director of Foxglove.

“What we need is democracy, not government by algorithm. Before any further systems get rolled out, let’s ask experts and the public whether automation is appropriate at all, and how historic biases can be spotted and dug out at the roots.”
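The “feedback loop” Crider refers to can be pictured as a cycle: applicants from a flagged nationality are streamed towards red, attract more scrutiny and more refusals, and those refusals then count as further evidence that the nationality is “risky”. The toy simulation below, with entirely made-up numbers and no connection to the real system, shows how such a loop keeps amplifying an initial disparity.

```python
# Toy simulation of a self-reinforcing risk-score feedback loop.
# All figures are invented; this is not a model of the Home Office system.

refusal_rate = {"flagged": 0.10, "other": 0.10}  # both groups start identical

def risk_score(group: str) -> float:
    # the risk score is derived from the group's historical refusal rate
    return refusal_rate[group]

for year in range(5):
    for group in refusal_rate:
        # groups over the threshold get extra scrutiny, producing extra refusals,
        # which in turn raise next year's risk score for that group
        extra_refusals = 0.05 if risk_score(group) > 0.12 else 0.0
        refusal_rate[group] += extra_refusals
    if year == 0:
        # a single biased intervention against the flagged group is enough
        # to tip it over the threshold permanently
        refusal_rate["flagged"] += 0.05

print(refusal_rate)  # the flagged group's rate climbs each year; the other stays flat
```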

Foxglove and JCWI filed for a judicial review of the algorithm’s use in June, but the Home Office’s decision came before the case could be heard in court, with government lawyers writing to inform the claimants on 3 August.

However, the government’s lawyers were clear that the decision to scrap the algorithm should not be taken as an admission of guilt.

“For clarity, the fact of the redesign does not mean that the Secretary of State for the Home Department accepts the allegations in your claim form. However, the redesign will be approached with an open mind in considering the concerns you have raised,” said the letter.

“Given the relatively low volumes of visa applications since the onset of the Covid-19 pandemic and the impact that the pandemic has had on international travel, it has been decided that it is timely to take advantage of the low number of overseas applications to redesign the system now.”

According to Chai Patel, legal policy director at JCWI, the Home Office’s own independent review of the Windrush scandal found that the organisation was “oblivious to the racist assumptions and systems it operates”.

“This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor for such bias and to root it out,” he said.

Foxglove added on Twitter: “The bias, discrimination and distorting ‘feedback loops’ we identified here are also a risk with other government algorithms – more scrutiny is urgently needed.

“Sadly, scrapping the visa algorithm will not, on its own, mean an end to racism and bias in the visa system… There will now be a battle to make sure that the new process is better than the awful one it replaces.”

