Online Safety Bill updated to deal with anonymous abuse

Social media companies will be forced to tackle anonymous abuse online under new measures introduced in the Online Safety Bill

The UK government is giving social media users the option to block all anonymous accounts that choose not to verify their legal identity, as well as to opt out of seeing harmful content, under new duties added to its forthcoming Online Safety Bill (OSB).

The government claimed that the measures would remove the ability of anonymous accounts to target other users with abuse, helping to tackle the issue “at its root” and complementing existing duties within the OSB.

As it stands, the draft OSB would impose a statutory “duty of care” on technology companies that host user-generated content or allow people to communicate, meaning they would be legally obliged to proactively identify, remove and limit the spread of both illegal and legal but harmful content, such as child sexual abuse, terrorism and suicide material.

At the start of February 2022, the government expanded the list of “priority illegal content” – which refers to content that service providers are required to proactively seek out and minimise the presence of on their platform – to include revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling, and sexual exploitation.

Under the new measures, “category one” companies (those with the largest number of users and highest reach, which are considered to represent the greatest risk) must offer ways for their users to verify their identities, as well as to control who can interact with them.

This could include giving users options to tick a box in their settings to only receive direct messages and replies from verified accounts. The government added that the onus will be on platforms to decide which methods to use to fulfil this identity verification duty, but that users must be given the option to opt in or out.

Category one social media companies will also have to make new tools available to adult users so they can choose whether to see legal but harmful content. The government said this includes racist abuse, the promotion of self-harm and eating disorders, and dangerous anti-vaccine disinformation.

These tools could include new settings or functions that prevent users from receiving certain recommendations, or place sensitivity screens over content to blur it out.

“Tech firms have a responsibility to stop anonymous trolls polluting their platforms,” said digital secretary Nadine Dorries. “We have listened to calls for us to strengthen our new online safety laws and are announcing new measures to put greater power in the hands of social media users themselves.

“People will now have more control over who can contact them and be able to stop the tidal wave of hate served up to them by rogue algorithms.”

In response to the new measures, Neil Brown, a tech lawyer at law firm decoded.legal, told iNews that requiring users to verify their legal identities could risk relegating those who refuse to second-class status.

“If you don’t identify yourself, you could be grouped with millions of others, and with one click your comments will no longer be seen,” he said. “Those who are already willing to harass or spread misinformation under their own names are unlikely to be affected. The additional step of showing ID is unlikely to be a barrier to them.”

Although the government highlighted in its press release the racist abuse England footballers received during Euro 2020, Twitter said that 99% of all accounts linked to the abuse were not anonymous.

“Our data suggests that ID verification would have been unlikely to prevent the abuse from happening – as the accounts we suspended themselves were not anonymous,” it said in a blog post at the time.

On 24 February, consumer watchdog Which? reiterated its call for the government to tackle fraudulent paid-for advertising in the OSB – which remains absent – after conducting a survey that found that an estimated nine million people had been targeted by a scam on social media, and that only one in five consumers feel protected online.

Which? previously urged the government to include protection from online scams in the OSB in May 2021, when it wrote a joint letter alongside a coalition of other organisations representing consumers, civil society and business.

In a report published in December 2021 by the joint parliamentary committee for the Online Safety Bill – which was set up to scrutinise the forthcoming bill and propose improvements before it goes to Parliament for final approval – MPs and Lords said the exclusion of paid-for advertising from the draft bill “would obstruct the government’s stated aim of tackling online fraud and activity that creates a risk of harm more generally”.

They added that “excluding paid-for advertising will leave service providers with little incentive to remove harmful adverts, and risks encouraging further proliferation of such content”, and that “Ofcom should be responsible for acting against service providers who consistently allow paid-for advertisements that create a risk of harm to be placed on their platform”.
