Denmark’s AI-powered welfare system fuels mass surveillance
Research reveals the automated tools used to flag individuals for benefit fraud violate individuals’ privacy and risk discriminating against marginalised groups
Artificial intelligence (AI) tools used by the Danish welfare authority violate individual privacy, risk discrimination and breach the European Union (EU) AI Act's prohibition on social scoring systems, according to analysis from Amnesty International.
Udbetaling Danmark (UDK, or Payout Denmark) – established in 2012 to centralise the payment of various welfare benefits across five municipalities – uses AI-powered algorithms to flag individuals who are considered at the highest risk of committing social benefits fraud for further investigation. These were developed in partnership with ATP, Denmark’s largest pensions processing company, and various private multinational corporations.
The report details how UDK’s fraud control algorithms breach the human rights of social security benefits recipients, including their rights to privacy, equality and social security. It also concludes that the system creates a barrier to accessing social benefits for certain marginalised groups, including people with disabilities, low-income individuals and migrants.
“This mass surveillance has created a social benefits system that risks targeting, rather than supporting, the very people it was meant to protect,” said Hellen Mukiri-Smith, Amnesty International’s researcher on artificial intelligence and human rights.
“The way the Danish automated welfare system operates is eroding individual privacy and undermining human dignity. By deploying fraud control algorithms and traditional surveillance methods to identify social benefits fraud, the authorities are enabling and expanding digitised mass surveillance.”
Amnesty argues that UDK’s fraud detection system likely falls under the “social scoring” ban under the EU’s AI Act, which came into force on 1 August 2024.
The act defines AI social scoring systems as those that “evaluate or classify” individuals or groups based on social behaviour or personal traits, causing “detrimental or unfavourable treatment” of those people.
Mukiri-Smith said: “The information that Amnesty International has collected and analysed suggests that the system used by the UDK and ATP functions as a social scoring system under the new EU Artificial Intelligence law – and should therefore be banned.”
UDK and ATP provided Amnesty with redacted documentation on the design of certain algorithmic systems, but rejected Amnesty's requests for a collaborative audit, refusing to provide full access to the code and data used in their algorithms.
The Danish authority also rejected Amnesty's assessment that its fraud detection system likely falls under the AI Act's social scoring ban, but did not explain its reasoning.
In response to this, Amnesty has called on the European Commission to issue clear guidelines on which AI practices constitute a social scoring system in its AI Act guidance. The organisation has also requested that the Danish authorities stop using the system until it can be confirmed that it does not fall under this ban.
Mukiri-Smith added: “The Danish authorities must urgently implement a clear and legally binding ban on the use of data related to ‘foreign affiliation’ or proxy data in risk scoring for fraud control purposes. They must also ensure robust transparency and adequate oversight in the development and deployment of fraud control algorithms.”
Computer Weekly contacted UDK about the claims made by Amnesty International but received no response by the time of publication.
Violation of privacy
Alongside ATP, UDK uses a system of up to 60 algorithms to identify fraudulent social benefit applications and flag individuals for further investigation by Danish authorities.
To power these models, Danish authorities have enacted laws enabling the extensive collection and merging of personal data from public databases of millions of Danish residents. This includes information on residency status, citizenship, and other data that can also serve as proxies for a person’s race, ethnicity or sexual orientation.
Mukiri-Smith added: “This expansive surveillance machine is used to document and build a panoramic view of a person’s life that is often disconnected from reality. It tracks and monitors where a social benefit claimant lives, works, their travel history, health records, and even their ties to foreign countries.”
Individuals interviewed by Amnesty described the psychological impact of being subjected to surveillance by fraud investigators and case workers. Describing the feeling of being investigated for benefits fraud, Stig Langvad of Dansk Handicap Foundation told Amnesty that it is like “sitting at the end of a gun”.
UDK stated that its collection and merging of personal data to detect social benefits fraud is “legally grounded”.
Exacerbation of structural marginalisation
The report also reveals that the benefits fraud control system developed by UDK and ATP is built on inherently discriminatory structures in Denmark's legal and social systems, which categorise people and communities based on difference.
According to the report, Danish law already creates a "hostile environment for migrants and people who have been granted refugee status", imposing residency requirements on benefit claimants that disproportionately affect people from non-Western countries – including Syria, Afghanistan and Lebanon, from which many refugees in Denmark originate.
The Really Single fraud control algorithm predicts a person's family or relationship status to assess the risk of benefit fraud in pensions and childcare schemes. One of the parameters employed by the algorithm covers "unusual" or "atypical" living patterns or family arrangements, but the system offers no definition of what constitutes such situations, leaving room for dangerously arbitrary decision-making.
Mukiri-Smith added: “People in non-traditional living arrangements – such as those with disabilities who are married but live apart due to their disabilities; older people in relationships who live apart; or those living in a multi-generational household, a common arrangement in migrant communities – are all at risk of being targeted by the Really Single algorithm for further investigation into social benefits fraud.”
Gitte Nielsen, the chairperson of the social and labour market policy committee at Dansk Handicap Foundation, described the feeling of being constantly scrutinised and reassessed: “It is eating you up. A lot of our members … have depression because of this interrogation.”
UDK and ATP additionally use inputs related to “foreign affiliation” in their algorithmic models. For example, the Model Abroad algorithm identifies groups of beneficiaries deemed to have “medium and high-strength ties” to non-EEA countries and prioritises these groups for further investigation.
Amnesty’s research found that algorithms such as these discriminate against people based on factors such as national origin and migration status.
In a response to Amnesty, UDK stated that the use of "citizenship" as a parameter in its algorithms does not constitute processing of sensitive personal information.
Read more about automated decision-making, algorithms and AI
- Lord introduces bill to regulate public sector AI and automation: A private members’ bill seeking to regulate the use of artificial intelligence (AI) and other automated technologies throughout the public sector has been brought to Parliament.
- Scrutinising AI requires holistic, end-to-end system audits: Understanding the full impacts of artificial intelligence requires organisations to conduct end-to-end social and technical audits of their systems, but the process comes with a number of challenges.
- AI disempowers logistics workers while intensifying their work: Conversations on the algorithmic management of work largely revolve around unproven claims about productivity gains or job losses - less attention is paid to how AI and automation negatively affect low-paid workers.