Local councils need ‘better support’ to buy AI responsibly

The Ada Lovelace Institute says there is a lack of adequate support on how councils can safely and responsibly procure artificial intelligence systems in the public interest

Local authorities need more support to responsibly procure artificial intelligence (AI) systems, as existing government guidance does not provide a clear or comprehensive account of how to procure AI in the public interest, the Ada Lovelace Institute (ALI) has warned.

According to a research report published by the civil society group, councils face significant challenges in navigating the existing guidance and legislation because these documents are open to interpretation and lack clarity over key aspects of AI, including how to operationalise ethical considerations.

ALI said its research comes amid rising expectations and optimism about the potential of AI in the public sector, but warned the benefits of the technology will only be achieved if the public sector can ensure its adoption is safe, effective, and in the public interest. 

Based on an analysis of 16 guidance, legislation and policy documents relevant to the procurement of AI and data-driven systems – all of which were published between 2010 and 2024 – the report said there is “insufficient clarity” on how to apply concepts like fairness, how to define public benefit, and how to ensure that the use of AI is transparent and understandable to those affected by its use.

It added that because many AI technologies are supplied by the private sector, the procurement process can and should play an important role in assessing the effectiveness of potential solutions, anticipating and mitigating risks, and ensuring that any deployment is proportionate, legitimate and in line with broader public sector duties.

However, the report noted this also raises questions about technical expertise within local government – which the ALI said is a gap in the current guidance – and how procurement teams can be equipped and empowered to interrogate suppliers about the impacts of their technology and routes for redress during the procurement process.

“Our research highlights the need for greater clarity around guidelines and responsibilities, and enforceable redress. Procurement teams need better support and clearer guidance so that they can procure AI that is effective, ethical and in the interest of people and society,” said lead author Anna Studman, a senior researcher at the ALI.

“AI and data-driven systems can severely damage public trust and reduce public benefit if the predictions or outcomes they produce are discriminatory, harmful or simply ineffective. Procurement provides a vital opportunity for local authorities to interrogate suppliers on the possible societal impacts of their technologies.”

To help local councils improve their AI procurement efforts, the ALI said there must be a consolidation of central government guidance that provides clarity on legal obligations and best practice across the procurement lifecycle of a system; the creation of an algorithmic impact assessment standard that councils can use when buying; and consensus building around how the public sector defines key terms like "fairness" and "transparency".

On transparency specifically, the ALI added that local government bodies will need to take a “holistic approach” that looks at internal processes and fair competition, as well as how communities are informed of and empowered to challenge decisions made about them by automated systems using their data.

“It’s essential that the procurers in the public sector are confident about the products they are buying in – and neither they nor the public are put at risk,” said Imogen Parker, associate director of the ALI.

“Embedding a robust, ethical procurement process in the context of reduced budgets is of course a significant challenge. But it is important to also consider the cost of not doing this, both financially and ethically, something demonstrated all too clearly by the Post Office Horizon scandal.”

The report further recommended upskilling local government procurement teams in how to both use and audit AI systems, and for government to complete the rollout of the Algorithmic Transparency Recording Standard (ATRS) across the entire public sector, not just central government departments.

While the ATRS was created by the Central Digital and Data Office – then part of the Cabinet Office – in cooperation with the government’s Centre for Data Ethics and Innovation in November 2021, it has not been widely adopted and was not promoted by the Conservative government in its March 2023 AI whitepaper, which set out governance proposals for the technology.

The ALI previously made similar warnings about the deployment of "foundation" or large language models (LLMs) throughout the public sector in October 2023, noting the risks around issues such as bias and discrimination, privacy breaches, misinformation, security, over-reliance on industry, workforce harms and unequal access.

It further added that there is a real risk such models are adopted by the public sector because they are a new technology, rather than because they are the best solution to a problem. 

“Public-sector users should therefore carefully consider the counterfactuals before implementing foundation models. This means comparing proposed use cases with more mature and tested alternatives that might be more effective, provide better value for money or pose fewer risks – for example, employing a narrow AI system or a human employee to provide customer service rather than building a foundation model-powered chatbot.”
