Parliamentary committee criticises big tech response to election threats
Parliamentary committee says tech companies ‘regurgitated publicly available content’ and failed to address questions raised by MPs and peers
Big tech and social media companies have come under fire for their failure to adequately address the risks posed by disinformation and artificial intelligence (AI) deepfakes to the integrity of elections in the UK.
A joint Parliamentary committee said today that big tech companies, including X, TikTok, Snap, Meta, Microsoft and Google, had taken an “uncoordinated, siloed approach” to the potential harms and threats facing democracy.
The Joint Committee on the National Security Strategy (JCNSS) said big tech companies had responded to the committee’s inquiries on the risk to democracy posed by AI deepfakes and disinformation with “opaque and diffuse” statements that made it difficult for them to be held to account.
The intervention by JCNSS chair Margaret Beckett comes amid an “escalating threat” from malicious actors using emerging technologies such as AI to create misleading deepfakes and to spread fake narratives to interfere with national elections.
This year is expected to see record numbers of people go to the polls around the world, with elections due in more than 70 countries, including the UK, the US, India and several across Europe.
Evidence submitted to the inquiry by big tech companies on how they are tackling the risks – published today and reviewed by Computer Weekly – appears heavy on PR and light on hard evidence of coordinated progress against election interference.
“I am concerned to see the huge disparity in approaches and attitudes to managing harmful digital content in written evidence we have received across companies, from X, TikTok, Snap and Meta to Microsoft and Google,” said Beckett.
She said that while others were developing technology to help people decipher the “dizzying variety” of information available online, the Parliamentary committee would have expected the tech companies profiting from spreading that information to take responsibility and be similarly proactive.
“Much of the written evidence that was submitted shows – with few and notable exceptions – an uncoordinated, siloed approach to the many potential threats and harms facing UK and global democracy,” she said.
“The cover of free speech does not cover untruthful or harmful speech, and it does not give tech media companies a get-out-of-jail-free card for accountability for information propagated on their platforms,” she added.
With inquiries continuing, the committee chair said there was far too little evidence from global tech companies of the foresight the committee expected in tackling threats to elections. Despite the multiple elections due around the world this year, tech companies are not proactively anticipating threats or developing transparent, independently verifiable and accountable policies.
“There is far too little evidence of the learning and cooperation necessary for an effective response to a sophisticated and evolving threat,” said Beckett.
Tech companies have taken a siloed approach, she said, and much of the evidence shows companies developing individual policies, each based on their own set of principles, rather than coordinating standards and best practice.
The committee is also concerned about the lack of regulation of the algorithms big tech and social media companies use to promote content. These algorithms have the potential to create “echo chambers” that limit social media users’ access to the information they need to make informed judgements during an election period.
“The committee understands perfectly well that many social media platforms were at least nominally born as platforms to democratise communications to allow and support free speech and to circumvent censorship,” she said.
Beckett added that tech companies, or the owners who profit from them – in many cases by monetising information spread through addictive technologies – did not have the right or authority to arbitrate on what legitimate free speech is. “That is the job of democratically accountable authorities,” she said.
She criticised tech companies for failing to proactively engage with Parliament’s inquiries into the risks posed to elections by AI, disinformation, misinformation and deepfakes.
“If we must pursue a company operating and profiting in the UK to engage with a Parliamentary inquiry, we expect much more than a regurgitation of some of its publicly available content which does not specifically address our inquiry,” she said.
The communications regulator, Ofcom, told the committee in March that new powers to combat online threats were likely to come into force only after the next UK election.
The committee’s inquiry into defending democracy is continuing.
Read more about the risk to elections
- Political candidates, election officials and others at high risk of being targeted online are being offered protection against phishing and malware attacks in the run-up to the next UK general election.
- Britain’s democracy under threat from Chinese cyber attackers, government warns.
- The UK is highly vulnerable to misinformation and disinformation in the lead-up to a general election, according to a report from independent fact-checking charity Full Fact.
- Disinformation and misinformation are the top risks facing businesses, governments and the public over the next two years.