
Welcome minister, the next Horizon scandal is here in your department

Ministers in the new UK government should take prompt action to ensure that incorrect use of AI and algorithmic decision-making does not turn into the next Post Office Horizon scandal

To: All incoming secretaries of state

Algorithmic prediction and classification systems are probably in use in your department. They may or may not be called “AI”, and there may be other types of system labelled “AI” performing various functions. There is a high risk that they are causing harm to citizens, and that some uses are unlawful. Left unscrutinised, they could lead to your appearing before a future public inquiry like the Horizon one, with an indefensible position.

Recommendations:

You should immediately require that an Algorithmic Transparency Reporting Standard (ATRS) document is completed in full for every algorithmic system used in your department (and its agencies and arm's-length bodies) that affects decisions or actions concerning legal persons or policy. Each document should include a full analysis of the legal basis for the system's use and its compliance with all relevant legislation, together with an assessment of its accuracy and of whether that degree of accuracy is appropriate for the purpose for which it is used. Any use of a product called “AI” or use of “generative AI” products such as chatbots must be included within the scope of this exercise. You should stop the use of any system where there is any risk of harm to any citizen, where lawfulness is in any doubt, or where sufficient accuracy is not proven.
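
In practical terms, this recommendation amounts to building a structured register of systems, with a stop decision falling out of the answers. The Python sketch below is a hypothetical illustration of the kind of record each system might need; the field names are illustrative assumptions, not the actual ATRS schema.

from dataclasses import dataclass

# Hypothetical record for one algorithmic system in a departmental register.
# Field names are illustrative assumptions, not the actual ATRS schema.
@dataclass
class AlgorithmicSystemRecord:
    name: str
    owning_body: str                     # department, agency or arm's-length body
    decisions_affected: str              # decisions or actions concerning legal persons or policy
    legal_basis: str                     # legal basis for use; empty string if none identified
    compliant_with_legislation: bool     # assessed against all relevant legislation
    accuracy_assessment: str             # how accuracy was measured, and the result
    accuracy_adequate_for_purpose: bool  # is that accuracy appropriate for this use?
    risk_of_harm_to_citizens: bool       # any identified risk of harm to any citizen

    def should_be_stopped(self) -> bool:
        # The stop test proposed above: halt use where there is any risk of
        # harm, any doubt about lawfulness, or unproven accuracy.
        return (
            self.risk_of_harm_to_citizens
            or not self.legal_basis
            or not self.compliant_with_legislation
            or not self.accuracy_adequate_for_purpose
        )

Whatever form the register takes, the point is that “completed in full” means every question answered for every system, with a clear stop decision following from the answers.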

Further options:

You may also wish to consider:

  1. Saving money by stopping spending on “AI” unless and until its value is proven.
  2. Refusing to accept any official documents produced using “generative AI” products.
  3. Banning predictive systems outright.

Argument:

You are aware of the public inquiry into how problems with the Post Office Horizon accounting software led to multiple miscarriages of justice and devastating consequences for innocent subpostmasters and their families. You will also be aware of the consequent embarrassment (at least) to many former ministers for the Post Office who, for a variety of reasons, failed to identify or get to grips with the problem. This submission argues that you should immediately act to ensure that no system in your department is at risk of creating similar problems, given that there is currently no visible, comprehensive source of information to assure you otherwise.

Challenges to commonly presumed benefits

The National Audit Office (NAO) published a report in March 2024 on the use of artificial intelligence (AI) in government, and the Public Accounts Committee subsequently initiated an inquiry based on it. The Committee called for written evidence and published the responses in May. Some of the responses supported the oft-stated presumption of benefits that AI might bring to government and public services. Many were more sceptical, describing problems with existing systems - in particular, algorithms that may not be called “AI” but that make predictions about people - and fundamental problems with the use of statistical predictive tools in public administration. Specific harms arising from specific systems were mentioned.

Issues with legality and transparency

Some submissions contained extensive legal and constitutional arguments that many such methods were likely to be unlawful and conflict with the rule of law and human rights. While views were mixed, there was a strong sense that stakeholders are very alert to the risks posed by the use of algorithmic and AI methods by government. One scholar mounted a strong argument that they should be banned by law; another argued that they may already be unlawful. One submission noted that, “Transparency about the use of algorithmic methods by central government is almost totally absent”. It is in this light that this advice is offered to you.

Hype, inaccuracy and misuse

The other piece of context is the extensive hype surrounding “AI”, in particular “generative AI”. Typically, most discussion about the use of AI in the public sector is couched in terms of “may” or “could” or “has potential to”, with claims of significant, transformational benefits in prospect. Little evidence yet exists to substantiate these claims. Critics counter that specific proposed uses are implausible, undermining many of the asserted benefits.


For example, the Government Digital Service experimented with a chatbot interface to Gov.uk and found that answers did not reach the level of accuracy demanded for a site where factual accuracy is crucial. For the same reason, compounded by their lack of explainability and consistency, such tools are not suitable for use in statutory administrative procedures.

There are claims that such tools could summarise policy consultations. However, a chatbot summary will not capture the nuanced positions that stakeholder groups take on the policy. Further, even if accurate, an automated summary does not fulfil the democratic function of a consultation: to allow all voices to be heard, and to be seen to be heard. Similar issues apply to using these tools to generate policy advice.

Worldwide, instances have been found of bias and inaccuracy in predictive and classification systems used in public administration. Recent guidance from the Department for Education and the Responsible Technology Adoption Unit (RTA) in the Department for Science, Innovation and Technology on the use of data analytic tools in children’s social care specifically warns about predictive uses, citing findings that they have not demonstrated effectiveness in identifying individual risks. Methods in use for fraud detection probably have similar problems, particularly with “false positive” predictions that lead to innocent people being interfered with or punished. Related classification methods have alarming political and social implications.
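
To see why false positives dominate in this setting, consider some illustrative base-rate arithmetic. The figures below are assumptions for the sake of the example, not measurements from any real system: even a classifier that is right 95% of the time on both fraudulent and innocent cases will, at a 1% fraud base rate, flag far more innocent people than fraudsters.

# Illustrative base-rate arithmetic; all figures are assumptions,
# not measurements from any real fraud-detection system.
base_rate = 0.01     # assumed share of cases that are genuinely fraudulent
sensitivity = 0.95   # assumed true positive rate of the classifier
specificity = 0.95   # assumed true negative rate of the classifier

flagged_fraud = sensitivity * base_rate                  # true positives
flagged_innocent = (1 - specificity) * (1 - base_rate)   # false positives
precision = flagged_fraud / (flagged_fraud + flagged_innocent)

print(f"Flagged cases that are actually fraud: {precision:.1%}")      # ~16.1%
print(f"Flagged cases that are innocent:       {1 - precision:.1%}")  # ~83.9%

On these assumptions, roughly five in six people the system flags are innocent - precisely the mechanism by which such tools end up interfering with or punishing the wrong people.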

Mitigating your risk

The RTA and the Central Digital and Data Office developed and published the ATRS as “a framework for accessible, open and proactive information sharing about the use of algorithmic tools across the public sector”. On 29 March 2024, the then government's response to the consultation on its AI white paper announced that the ATRS would become a requirement for UK government departments, but this has not yet been implemented for either current or future systems. A tool therefore exists that could significantly improve the visibility, and help assure the safety, of government uses of algorithms and AI; it awaits effective deployment.

Your position in relation to the potential harms to the public and the government is therefore very exposed. In any future inquiry, “I didn't know” is not going to be an adequate response to challenges to a minister's inaction. As a start to remedy this, an ATRS document needs to be completed, and critically examined for risks, for every relevant system in use or proposed. This is urgent, as many external organisations are on the lookout for cases of harm to people that they can challenge in court.

Your first few weeks in office are the window of opportunity to scrutinise your inheritance, identify any problems and act decisively to shut down any potentially harmful systems. Full publication of the ATRS documents, and of any decisions you take on the basis of them, will contribute significantly to increasing public trust in the work of your department.

Paul Waller is research principal at Thorney Isle Research.
