
Proposals for scanning encrypted messages should be cut from Online Safety Bill, say researchers

Automatic scanning of messaging services for illegal content could lead to one billion false alarms each day in Europe

Proposals in the Online Safety Bill to give the telecoms regulator Ofcom powers to mandate technology companies to use scanning software to monitor encrypted messages for illegal content should be dropped, it was claimed this week.

According to Ross Anderson, Cambridge University professor of security engineering, proposals that could require tech companies to use software to bulk-scan messages on encrypted services such as WhatsApp to catch violent criminals were “entirely implausible”.

A policy paper written by Anderson and Sam Gilbert argues that using artificial intelligence (AI)-based scanning to examine the content of messages would raise an unmanageable number of false alarms and prove “unworkable”.

The paper, which was presented at a panel discussion hosted by the Adam Smith Institute at the Conservative Party Conference, argues that although the Online Safety Bill is right to impose a duty of care on technology and social media companies, the cost of some of its proposed measures would outweigh any benefits.

Anderson and Gilbert argue that “last resort” powers in the draft bill for Ofcom to mandate tech companies to use “proactive technologies”, such as client-side scanning, should be abandoned.

They claim the technology is “technically ineffective and impractical as a means of mitigating violent online extremism and child sexual abuse material”.

The paper follows a discussion document by Ian Levy, technical director of the UK National Cyber Security Centre (NCSC), and Crispin Robinson, technical director for cryptanalysis at GCHQ, in July 2022 that argued in favour of client-side scanning.

The GCHQ officials wrote that it was possible for tech companies to police encrypted messaging services for possible child abuse while still preserving the privacy and security of the people who use them.

Their proposals were criticised by Facebook owner Meta, academics and campaign groups.

Online Safety Bill

The Online Safety Bill aims to protect people who use online services from material that is legal but harmful by imposing a duty of care on large technology companies that provide online services, in addition to a responsibility to remove illegal content.

The bill creates high compliance costs that only large technology companies such as Facebook, Google and Twitter will be able to afford, according to Anderson and Gilbert.

But they argue that the Online Safety Bill should be extended to cover online gaming platforms, which can expose children to financial risks and abuse by older players.

Ofcom has indicated that it expects to regulate between 30 and 40 service providers, which could face fines of 10% of their annual turnover or £18m, whichever is greater, for failing to comply with codes of conduct. Repeat offenders could be blocked.

Child protection

According to the paper, some online services, such as Gmail and Facebook, already scan communications for images that are known to be illegal.

Some services have recently started to use AI to scan for unknown images that might be illegal, but the technology has a higher error rate, resulting in a large number of false negatives and false positives.

In one case, when a father took a picture of his son at the request of a nurse, he later received a visit from the police and lost access to his Google accounts because the company’s AI had flagged the photograph as abusive.

European proposals

The European Commission has separately proposed a regulation, which is under consideration by the European Parliament, to extend AI scanning from images to text, and to require automated scanning of messages sent by end-to-end encrypted services such as WhatsApp.

This would greatly increase the number of false alarms and the number of people caught up in the “surveillance dragnet”, Anderson and Gilbert argue in the paper, published by the Bennett Institute for Public Policy at the University of Cambridge.

In an internal paper, the European Commission conceded that there might be a false alarm rate of 10%, but calculated that if there were one million grooming messages, that would lead to 100,000 false alarms – a number that could be managed.

However, Anderson and Gilbert point out that this calculation is wrong: a false alarm rate applies to every message scanned, not just to the genuinely abusive ones. With 10 billion text messages sent every day in Europe, a 10% rate would produce one billion false alarms.

“Europe’s 1.6 million police officers would have to scan 625 of them every day. Such a system would be simply unworkable,” they state in the paper.
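
The arithmetic behind that claim is straightforward. The following minimal sketch uses only the figures quoted in the paper – a 10% false alarm rate, 10 billion messages a day and 1.6 million officers – and is illustrative only:

    # Worked example of the false alarm arithmetic quoted above.
    # All figures are those reported in the paper; nothing else is assumed.
    messages_per_day = 10_000_000_000  # text messages sent daily in Europe
    false_alarm_rate = 0.10            # rate the Commission reportedly assumed
    officers = 1_600_000               # police officers in Europe

    false_alarms = messages_per_day * false_alarm_rate  # 1,000,000,000 a day
    per_officer = false_alarms / officers               # 625.0 a day each
    print(f"{false_alarms:,.0f} false alarms/day, {per_officer:.0f} per officer")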

Ofcom and GCHQ have proposed using client-side scanning technology on phones and laptops to detect illegal images and potential grooming content on encrypted messaging services, before messages are encrypted or after they have been received.
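
In outline, the scan happens on the user’s device, outside the encrypted channel. The sketch below illustrates the send-side case only and is hypothetical rather than any vendor’s actual implementation; deployed systems match perceptual hashes of known images, such as Microsoft’s PhotoDNA, rather than the exact cryptographic hash used here for brevity:

    # Hypothetical sketch of client-side scanning before encryption.
    import hashlib

    KNOWN_ILLEGAL_HASHES: set[str] = set()  # placeholder match list

    def scan_before_encrypt(attachment: bytes) -> bool:
        """Return True if the attachment may be encrypted and sent."""
        digest = hashlib.sha256(attachment).hexdigest()
        if digest in KNOWN_ILLEGAL_HASHES:
            # A deployed client would report the match instead of sending.
            return False
        return True

Every false match in a check like this becomes a report that a human must review, which is where the false alarm arithmetic above begins to bite.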

The technology has also been suggested as a way of detecting the activities of terrorists online.

Anderson and Gilbert argue that client-side scanning would be a significant departure from existing UK law, such as the Investigatory Powers Act 2016, which prohibits bulk interception against UK citizens, and would fall foul of the European Court of Human Rights.

More effective solutions

They say the problem would be better tackled by mandating tech companies to provide an effective way for users to report illegal content and have it rapidly taken down.

“Tech companies already do this for copyright holders [so] the law should compel them to treat vulnerable users, such as women or children, with the same consideration,” the paper says.

Users should be able to contact moderators quickly so they can preserve evidence, remove illegal material and block people who are trying to exploit them.

“Child protection is a complex problem, embedded in local communities, and requires coordinated action by parents, police, social workers and schools,” it says.

The paper also argues that there are more effective tools than scanning messages to combat terrorism. For example, research suggests the strongest predictor of violent political extremism is violence against women.

That suggests local police work and other local interventions, rather than online surveillance, are the best way to tackle threats, the paper argues.

“It is hard to see any case for breaking everyone’s privacy, in contravention of the settled British tradition, to intercept communications at a great scale when more effective routes are available,” it says.

A report by 15 leading computer scientists, Bugs in our pockets: the risks of client-side scanning, published by Columbia University in October 2021, identified multiple ways that states, malicious actors and abusers could turn client-side scanning technology around to cause harm to others or society.
