Swedish authorities urged to discontinue AI welfare system
Amnesty International is calling on Sweden’s social insurance agency to immediately discontinue its machine learning-based welfare system, following an investigation by Lighthouse Reports and Svenska Dagbladet that found it to be discriminatory
Sweden’s algorithmically powered welfare system is disproportionately targeting marginalised groups in Swedish society for benefit fraud investigations, and must be immediately discontinued, Amnesty International has said.
An investigation published by Lighthouse Reports and Svenska Dagbladet (SvB) on 27 November 2024 found that the machine learning (ML) system used by Försäkringskassan, Sweden’s Social Insurance Agency, is disproportionately flagging certain groups for further investigation over social benefits fraud, including women, individuals with “foreign” backgrounds, low-income earners and people without university degrees.
Based on an analysis of aggregate data on the outcomes of fraud investigations where cases were flagged by the algorithms, the investigation also found the system was largely ineffective at identifying men and wealthy people who had actually committed some kind of social security fraud.
To detect social benefits fraud, the ML-powered system – introduced by Försäkringskassan in 2013 – assigns risk scores to social security applicants, automatically triggering an investigation if the score is high enough.
Those with the highest risk scores are referred to the agency’s “control” department, which takes on cases where there is suspicion of criminal intent, while those with lower scores are referred to case workers, where they are investigated without the presumption of criminal intent.
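In outline, the flagging process described by the investigation amounts to threshold-based routing on a model-assigned score. The Python sketch below illustrates that shape only; the threshold values, names and routing labels are assumptions for illustration, since Försäkringskassan has not disclosed how its model or cut-offs actually work.

```python
# Minimal, hypothetical sketch of threshold-based routing as described above.
# All names and threshold values are assumptions for illustration; the agency's
# actual model, features and cut-offs have not been made public.

CONTROL_THRESHOLD = 0.8   # assumed cut-off for referral to the "control" department
CASEWORK_THRESHOLD = 0.5  # assumed cut-off for referral to ordinary case workers

def route_application(risk_score: float) -> str:
    """Route a benefits application based on a model-assigned risk score."""
    if risk_score >= CONTROL_THRESHOLD:
        return "control department (suspicion of criminal intent)"
    if risk_score >= CASEWORK_THRESHOLD:
        return "case worker review (no presumption of criminal intent)"
    return "no fraud investigation"

# Example: a score above the assumed upper cut-off goes to the control department.
print(route_application(0.83))
```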
Once cases are flagged to fraud investigators, they then have the power to trawl through a person’s social media accounts, obtain data from institutions such as schools and banks, and even interview an individual’s neighbours as part of their investigations. Those incorrectly flagged by the social security system have complained they then end up facing delays and legal hurdles in accessing their welfare entitlement.
“The entire system is akin to a witch hunt against anyone who is flagged for social benefits fraud investigations,” said David Nolan, senior investigative researcher at Amnesty Tech. “One of the main issues with AI [artificial intelligence] systems being deployed by social security agencies is that they can aggravate pre-existing inequalities and discrimination. Once an individual is flagged, they’re treated with suspicion from the start. This can be extremely dehumanising. This is a clear example of people’s right to social security, equality and non-discrimination, and privacy being violated by a system that is clearly biased.”
Testing against fairness metrics
Using the aggregate data – which was only possible as Sweden’s Inspectorate for Social Security (ISF) had previously requested the same data – SvB and Lighthouse Reports were able to test the algorithmic system against six standard statistical fairness metrics, including demographic parity, predictive parity and false positive rates.
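For context, those three named metrics compare, for each group, how often applications are flagged (demographic parity), how often flagged cases turn out to be genuine fraud (predictive parity) and how often people who committed no fraud are wrongly flagged (false positive rate). The sketch below shows how such comparisons can be made from aggregate outcome counts; the data structure and figures in it are illustrative placeholders, not the investigators’ code, data or results.

```python
# Sketch of group-level fairness metrics computed from aggregate outcome data.
# The structure and all numbers below are illustrative placeholders only.

from dataclasses import dataclass

@dataclass
class GroupOutcomes:
    flagged: int          # applications from the group flagged by the algorithm
    total: int            # all applications from the group
    true_positives: int   # flagged cases where fraud was confirmed
    false_positives: int  # flagged cases where no fraud was found
    negatives: int        # applications from the group involving no fraud

def selection_rate(g: GroupOutcomes) -> float:
    """Demographic parity compares this flagging rate across groups."""
    return g.flagged / g.total

def precision(g: GroupOutcomes) -> float:
    """Predictive parity compares the share of flagged cases that were real fraud."""
    return g.true_positives / g.flagged

def false_positive_rate(g: GroupOutcomes) -> float:
    """Share of non-fraudulent applicants who were wrongly flagged."""
    return g.false_positives / g.negatives

# Placeholder counts purely to show the calculation, not real figures.
group_a = GroupOutcomes(flagged=400, total=10_000, true_positives=80,
                        false_positives=320, negatives=9_800)
group_b = GroupOutcomes(flagged=150, total=10_000, true_positives=75,
                        false_positives=75, negatives=9_850)

for name, g in (("group A", group_a), ("group B", group_b)):
    print(name,
          round(selection_rate(g), 3),
          round(precision(g), 3),
          round(false_positive_rate(g), 3))
```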
They noted that while the findings confirmed the Swedish system is disproportionately targeting already marginalised groups in Swedish society, Försäkringskassan has not been fully transparent about the inner workings of the system, having rejected a number of freedom of information (FOI) requests submitted by the investigators.
They added that when they presented their analysis to Anders Viseth, head of analytics at Försäkringskassan, he did not dispute it, arguing instead that it did not identify a problem.
“The selections we make, we do not consider them to be a disadvantage,” he said. “We look at individual cases and assess them based on the likelihood of error and those who are selected receive a fair trial. These models have proven to be among the most accurate we have. And we have to use our resources in a cost-effective way. At the same time, we do not discriminate against anyone, but we follow the discrimination law.”
Computer Weekly contacted Försäkringskassan about the investigation and Amnesty’s subsequent call for the system to be discontinued.
“Försäkringskassan bears a significant responsibility to prevent criminal activities targeting the Swedish social security system,” said a spokesperson for the agency. “This machine learning-based system is one of several tools used to safeguard Swedish taxpayers’ money.
“Importantly, the system operates in full compliance with Swedish law. It is worth noting that the system does not flag individuals but rather specific applications. Furthermore, being flagged does not automatically lead to an investigation. And if an applicant is entitled to benefits, they will receive them regardless of whether their application was flagged. We understand the interest in transparency; however, revealing the specifics of how the system operates could enable individuals to bypass detection. This position has been upheld by the Administrative Court of Appeal (Stockholms Kammarrätt, case no. 7804-23).”
Read more about public sector algorithms
- Lords to challenge controversial DWP benefits bank account surveillance powers: Members of the House of Lords are pressing for amendments to the Data Protection and Digital Information Bill following concerns over government powers to monitor the bank accounts of people receiving benefits.
- Accountability in algorithmic injustice: Computer Weekly looks at the growing number of injustices involving algorithms and automated decision-making, and what can be done to hold governments and companies accountable for the failures of computer systems they deploy.
- Ban predictive policing and facial recognition, says civil society: A coalition of civil society groups is calling for an outright ban on predictive policing and biometric surveillance.
Nolan said if use of the system continues, then Sweden may be sleepwalking into a scandal similar to the one in the Netherlands, where tax authorities used algorithms to falsely accuse tens of thousands of parents and caregivers from mostly low-income families of fraud, which also disproportionately harmed people from ethnic minority backgrounds.
“Given the opaque response from the Swedish authorities, not allowing us to understand the inner workings of the system, and the vague framing of the social scoring ban under the AI Act, it is difficult to determine where this specific system would fall under the AI Act’s risk-based classification of AI systems,” he said. “However, there is enough evidence to suggest that the system violates the right to equality and non-discrimination. Therefore, the system must be immediately discontinued.”
Under the AI Act – which came into force on 1 August 2024 – the use of AI systems by public authorities to determine access to essential public services and benefits must meet strict technical, transparency and governance rules, including an obligation on deployers to carry out a human rights risk assessment and guarantee mitigation measures are in place before use. Systems considered to be tools for social scoring are prohibited outright.
Sweden’s ISF previously found in 2018 that the algorithm used by Försäkringskassan “in its current design [the algorithm] does not meet equal treatment”, although the agency pushed back at the time, arguing that the analysis was flawed and based on dubious grounds.
A data protection officer who previously worked for Försäkringskassan also warned in 2020 that the system’s operation violated the European Union’s General Data Protection Regulation (GDPR), because the authority had no legal basis for profiling people.
On 13 November, Amnesty International exposed how AI tools used by Denmark’s welfare agency are creating pernicious mass surveillance, risking discrimination against people with disabilities, racialised groups, migrants and refugees.