Regulating automated decision-making in the Data (Use and Access) Bill
Government proposals around the use of automated decision-making through AI and algorithms require amendment or risk exposing people to a diminishing of their rights
The House of Lords is currently going through the Data (Use and Access) Bill line by line. There is much to mull on and amend. This week we considered automated decision-making (ADM).
It has become clear that, in just the past two years, solely automated decisions have come to be used increasingly across our society and economy. To take just two examples: whether you get that loan (or not) and whether you get that job (or not) may well be in the “hands” of a black box rather than a human.
ADM is now deployed at scale across many sectors, often with no regulatory oversight or redress. Employment and recruitment are just one example; whether you are next on the list for a liver transplant is another.
It has thus become even more critical that the safeguards around ADM, which exist only in data protection law, ensure that people know when a significant decision about them is being automated, understand why that decision has been made, and have routes to challenge it or ask for it to be decided by a human.
Article 22 of the UK General Data Protection Regulation (UK GDPR) prohibits the processing of personal data for decisions about individuals with “legal or similarly significant” effects based solely on automated processing. Unless the person concerned has given explicit consent or narrow legal exemptions apply, there must be a “human in the loop” to review significant decisions made by algorithms.
Previous regulatory interventions and documented incidents show the necessity of these legal protections, which safeguard against biased decisions and unfair power imbalances between algorithmic systems and data subjects.
One such incident was Deliveroo’s use of the Frank platform to manage more than 8,000 gig worker riders through ADM, which the Italian Data Protection Authority found to be unlawful.
Safeguards and protections
When considering the amendments to the current ADM provisions in the Data Bill tabled by colleagues and myself, it is clear there is some way to go before we can all be convinced that the bill offers the right safeguards, protections and redress for citizens across the piece.
In the current draft of the bill, Clause 80: Automated decision-making sets out substantial reforms to ADM, removing the protections under Article 22 of the UK GDPR, which state that solely automated decisions are permitted in only three circumstances: where necessary for entering into a contract, where authorised by law, or with the explicit consent of the data subject.
It is clear that under the proposed reforms, restrictions on solely automated decisions are materially relaxed - permitting them as long as individuals affected by those decisions can make representations, ask for meaningful human intervention, and challenge decisions made by ADM. These rights for data subjects are set out under a new Article 22C.
Lastly, the secretary of state is also given new regulation-making powers to determine “when meaningful involvement [in relation to ADM] can be said to have taken place in light of constantly emerging technologies, as well as changing societal expectations of what constitutes a significant decision in a data protection context.”
In effect, this could give government scope to waive restrictions where novel data-driven technologies cannot practicably be influenced by a human in the loop, or when doing so simply undermines desired performance.
Material dilution of rights
As a consequence of this material dilution of rights and protections I proposed two amendments to increase the cover individuals could rely on in the bill.
I suggested the bill should give every individual the right to a personalised explanation of any automated decision they find themselves on the receiving end of.
Further, the personalised explanation must:

- be clear, concise and in plain language of the individual’s choice, be understandable, and assume limited technical knowledge of algorithmic systems;
- address how the decision affects the individual personally, explaining which aspects of the individual’s data have likely influenced the automated decision - or, alternatively, offer a counterfactual of what change in their data would have resulted in a more favourable outcome;
- be available free of charge and without being time-consuming for the individual to access;
- be in a readily accessible format that complies with equality duties;
- be provided through an accessible user interface, easily findable and free of deceptive design patterns; and
- enable meaningful challenge if needed.
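To make the counterfactual idea concrete, here is a minimal sketch of how such an explanation could be generated. Everything in it is hypothetical: the scoring rule, threshold and feature names are invented for illustration, and real ADM systems are far more complex than this toy loan model.

```python
# Hypothetical sketch of a counterfactual explanation for an
# automated loan decision. The scoring rule and threshold are
# invented for illustration only.

def decide(income: float, debt: float) -> bool:
    """Toy automated decision: approve if the score clears a threshold."""
    score = 0.6 * income - 0.9 * debt
    return score >= 30.0

def counterfactual_income(income: float, debt: float, step: float = 1.0):
    """Find the smallest income increase (in steps) that flips a
    refusal into an approval - the 'what change would have led to a
    more favourable outcome' part of a personalised explanation."""
    if decide(income, debt):
        return None  # already approved; no counterfactual needed
    extra = 0.0
    while not decide(income + extra, debt) and extra < 1000.0:
        extra += step
    return extra if decide(income + extra, debt) else None

applicant = {"income": 40.0, "debt": 10.0}
if not decide(**applicant):
    delta = counterfactual_income(**applicant)
    print(f"Refused; an income roughly {delta} higher would have "
          "led to approval.")
```

The point of the sketch is that a counterfactual explanation is phrased in terms of the individual's own data ("a higher income would have changed the outcome") rather than the internals of the model, which is what makes it understandable to someone with limited technical knowledge.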
In a further amendment I also suggested that data controllers must ensure human reviewers of algorithmic decisions have adequate capabilities, training and authority to challenge and rectify automated decisions. There is precious little point in giving someone in an organisation this responsibility without the authority needed to make it meaningful.
ADM safeguards are critical to public trust in AI. The level of control and agency that people have over significant decisions made about their lives by AI will have a profound and lasting impact on public attitudes to these technologies. It’s why colleagues and I put forward amendments in this area, and it’s why I put public engagement and public trust at the heart of my proposed AI Regulation Bill earlier this year.
Significant debate
When it came to the minister’s response to our amendments, she stated that “we have had a really profound and significant debate on these issues”. However, she also went on to add, “The Information Commissioner’s Office already has guidance on how human review should be provided, and this will be updated after the bill to ensure that it reflects what is meant by ‘meaningful human involvement’.”
In addressing our proposals on Clause 80 and ADM as a whole, the minister concluded, “That is precisely what the new provisions in this bill attempt to address, in order to make sure that there is meaningful human involvement, and people’s futures are not being decided by an automated machine.”
It’s fair to say that colleagues and I were not convinced, at this stage, that the bill does indeed ensure this.
This is the third time we have had a data bill before Parliament. Despite the previous two attempts, there is still much to improve in this latest version.
We will continue to press for these improvements as the bill progresses - improvements for the citizen, the consumer, the innovator and the investor, for everyone in all our many manifestations - if we are truly to have something approaching a data-driven society and economy for the benefit of all.
Read more about automated decision-making
- DWP ‘fairness analysis’ reveals bias in AI fraud detection system - Information about people’s age, disability, marital status and nationality influences decisions to investigate benefit claims for fraud, says the Department for Work and Pensions.
- AI disempowers logistics workers while intensifying their work - Conversations on the algorithmic management of work largely revolve around unproven claims about productivity gains or job losses - less attention is paid to how AI and automation negatively affect low-paid workers.
- Denmark’s AI-powered welfare system fuels mass surveillance - Research reveals the automated tools used to flag individuals for benefit fraud violate individuals’ privacy and risk discriminating against marginalised groups.