UK taskforce calls for cutting GDPR protections
The UK’s Taskforce on Innovation, Growth and Regulatory Reform has recommended scrapping safeguards against automated decision-making contained in the General Data Protection Regulation
A government taskforce is calling for key protections that safeguard people from automated decision-making to be cut from the UK’s implementation of the General Data Protection Regulation (GDPR), claiming they hamper “much-needed progress” in the development of Britain’s artificial intelligence (AI) industry.
The Taskforce on Innovation, Growth and Regulatory Reform (TIGRR) – chaired by former Conservative leader Sir Iain Duncan Smith – was asked by prime minister Boris Johnson to identify and develop regulatory proposals that can drive innovation, growth and competitiveness for a post-Brexit UK.
In its final report, released 16 June 2021, TIGRR recommends ditching the UK GDPR’s Article 22 protections, which give people “the right not to be subject to a decision based solely on automated processing, including profiling.”
According to the authors – who also include former life sciences minister George Freeman and former environment secretary Theresa Villiers – the requirement “makes it burdensome, costly and impractical for organisations to use AI to automate routine processes,” because separate manual processes must be created for those who decide to opt out of automatic data processing.
“Article 22 of GDPR applies solely to automated decision-making. It does not apply when the output of algorithms is subject to meaningful human review. There are many examples of automated decision-making that involve human review, but where the output itself may well be wrong, not explainable or biased,” they wrote, adding that the use of automated decision-making that performs better than human decision-makers is often not allowed.
“Article 22 of GDPR should be removed. Instead a focus should be placed on whether automated profiling meets a legitimate or public interest test, with guidance on how to apply these tests and the principles of fairness, accountability and an appropriate level of transparency to automated decision-making provided by the Information Commissioner’s Office [ICO].”
They added: “If removing Article 22 altogether is deemed too radical, GDPR should at a minimum be reformed to permit automated decision-making and remove human review of algorithmic decisions.”
Decision-making
Aside from loosening protections around algorithmic decision-making, the authors also want to overhaul how consent functions, arguing for a new framework that would be less intrusive and give “people more control over the use of their data, including its resale”.
“The kind of privacy self-management where consumers have to read, consent to and manage options in individual privacy policies to use products and services is simply not scalable,” they wrote. “The overemphasis on consent has led to people being bombarded with complex consent requests. An illustration of this is the cookie consent banner that appears every time you visit a website.”
Ultimately, they recommend solving the issue “through the creation of regulatory architecture that enables ‘Data Trusts’ or ‘Data Fiduciaries’ to be formed – private and third sector organisations to whom consumers would delegate their data authorisations and negotiations.”
In a letter to the Taskforce, Johnson welcomed the report’s recommendations and thanked the authors for “responding with substantive plans that will really put a TIGRR in the tank of British business.”
Johnson added that while it is “obvious that the UK’s innovators and entrepreneurs can lead the world in the economy of the future… this can only happen if we clear a path through the thicket of burdensome and restrictive regulation.”
He further added that this was only the start of the process, and that a “Brexit Opportunities Unit” would be set up under Lord Frost to generate new ideas for post-Brexit Britain.
“Your bold proposals provide a valuable template for this, illustrating the sheer level of ambitious thinking needed to usher in a new golden age of growth and innovation right across the UK,” he wrote.
The dangers of abandoning Article 22
Reacting to the report and Johnson’s letter, Andrew Pakes, director of communications and research at the union Prospect, said it is “deeply concerning that data rights risk becoming a sacrificial victim” as politicians look for ways to revive the economy.
“We’ve been here before, with previous administrations trying to claim consumer and workers’ rights are a block to innovation, when the reality could not be further from the truth. GDPR is the foundation on which we should be building our data economy and protecting human rights,” he said.
“Scrapping Article 22 could be the green light to the expansion of automated processing, profiling and transfer of personal data into private hands. We need data laws fit for the challenges of the digital economy, not a race to the bottom on standards.
“We need urgent clarity from government that GDPR is safe in their hands and that they are looking to work with social partners to build the UK’s reputation on data and workers’ rights.”
The Trades Union Congress (TUC) also published an “AI manifesto” in March 2021 calling for greater transparency and protections around the use of automated and AI-based decision-making.
“Every worker must have the right to have AI decisions reviewed by a human manager. And workplace AI must be harnessed for good – not to set punishing targets and rob workers of their dignity,” said TUC general secretary Frances O’Grady at the time.
Read more about algorithmic decision-making
- A default judgment handed down by a Dutch court has ordered Uber to reinstate six drivers with compensation following unevidenced accusations of fraud and automated firing via algorithm.
- The public sector is increasingly allowing algorithms to make vital decisions that affect people’s lives – but how do we know those algorithms work or what they do?
- The Home Secretary has pledged to stop using an algorithmic decision-making system to assess visa applications after migrant welfare and technology justice groups challenged the tool for helping to create a hostile environment for migrants.
Gemma Galdon Clavell, director of Barcelona-based algorithmic auditing consultancy Eticas, said that while the attempt to throw out Article 22 is “somewhat expected” – as there have been rumours for some time that the UK would use Brexit as an excuse to lower data protections – it is surprising that the taskforce sees the need for human oversight as a problem.
“Human oversight and intervention, in practice, is mainly about accountability and liability. Oftentimes, when algorithmic decisions make mistakes, those affected by such mistakes find it hard or impossible to seek redress and compensation, and legal systems struggle to assign liability in automated processes,” she said, adding that a “human in the loop” is not only there to manually review algorithmic decisions, but to represent the bodies that need to take responsibility for these decisions.
“They are so thorough in pointing out why it should be removed, but provide so little detail on how to protect the issues that human oversight is meant to address.”
Galdon Clavell further added that while she has seen in her work auditing algorithms how human intervention can sometimes re-introduce bias, this is largely due to bad practice at the moment of human-AI interaction.
“The issue is not Article 22, which is crucial to ensure that data subjects have a right to understand how decisions are made and have redress mechanisms that link the decision to a person and therefore to an organisation,” she said, adding that it is concerning that consent and purpose limitation are being viewed as problems.
“Could Article 22 be developed further? Sure. Is removing it altogether a good decision? Absolutely not. The risks in AI without meaningful human intervention are far greater than its problems.
“What is currently hindering innovation is not GDPR, but an industry that often fails to understand the social context its innovations impact on. GDPR is an opportunity to rebuild trust with AI innovation by ensuring that data subjects have a say in how their data is used. Not seeing and seizing this opportunity is short-sighted.”
Impact on data adequacy
The validity of the UK’s data adequacy decision, which European Union (EU) member states unanimously voted in favour of on 17 June 2021, is contingent on the UK maintaining a high level of data protection. On 16 July 2020, the European Court of Justice (ECJ) struck down the EU-US Privacy Shield data-sharing agreement, ruling that it failed to ensure European citizens an adequate right of redress when their data is collected by the US National Security Agency (NSA) and other US intelligence services.
The ruling, colloquially known as Schrems II after the Austrian lawyer who took the case to the ECJ, also established that a “standard of essential equivalence” is needed for adequacy decisions under the GDPR, meaning people must be offered the same level of protection they would receive in the bloc.
According to Estelle Massé, global data protection lead at digital civil rights group Access Now, while we have known for some time that the UK government’s freedom to legislate post-Brexit could lower data protection standards, the government has been adamant at every turn that any new measures would actually be used to strengthen people’s rights.
“We’re now getting closer and closer to a reality where the measures suggested to the government are actually going in the direction of removing protection for people, with the justification that there would be less red tape, less barriers to trade, and more opportunities for businesses,” she said, adding that the UK will have to choose between keeping the free flow of data with its closest partner and going its own way entirely.
“For the UK to be saying on the same day [as the adequacy decision] that ‘actually we might diverge and that divergence might mean lowering standards’ is a little bit incomprehensible… it’s obviously within the freedom of the United Kingdom to change their framework, but by changing it in a way that would alter already agreed levels of protection for people is not a positive move for human rights.”
Massé further added that the UK government has been using the uncertainty around data flows to its advantage, with the risk being that “as soon as they get the adequacy they will diverge, and basically force the EU to take the hard decision of removing an adequacy decision – it’s a huge power play, I feel.”
She said now that an adequacy decision has been granted, only the European Commission has the power to suspend it if the UK decides to diverge: “We have no certainty what the UK Government is going to do, but the signal they’re sending us is that they actually want to change [data protection] in a way that would not be positive for people. Until the UK make up their mind on what they want to do, we feel that the EU should not have given this adequacy.”