
Online Safety Bill puts user protection onus on platform providers

The Online Safety Bill will place new duties and responsibilities on online platforms accessible from the UK, but as it currently stands, it contains several grey areas


The forthcoming Online Safety Bill is a proposal by the Department for Digital, Culture, Media and Sport to reduce harmful content online. This ranges from illegal content, such as child sexual abuse material, to legal but harmful content, such as cyber bullying. The bill focuses on online platforms accessible from the UK and will mark a major shift in how those platforms are regulated.

More people than ever are accessing the internet, with 92% of UK adults regularly using it last year. However, just over half of 12- to 15-year-olds have had some form of negative online experience.

There is also a concern about online content that is considered harmful, but still legal, such as online bullying and disinformation. Although this behaviour is generally not criminal, the government believes it can have a damaging effect on society.

The e-Commerce Directive (eCD) had previously prevented the UK government from imposing liability on platform providers, as long as a provider did not have knowledge of illegal activity on its platform and removed any illegal content as soon as it became aware of it.

The status of the eCD following Brexit is governed by the European Union (EU) Withdrawal Act 2018. While the withdrawal agreement contains some provisions for the continued synchronisation with EU law, there is no legal obligation for the UK to legislate in line with the provisions of the eCD.

Other governments are introducing similar legislative measures to tackle harmful online content. Meanwhile, the European Commission published its proposal for the Digital Services Act at the end of last year, which will update liability and safety rules for digital platforms.

“We have long called for new regulations and share the government’s objective of making the internet safer while maintaining the vast social and economic benefits it brings,” says a Facebook spokesperson. “Facebook has more than 16 years of experience in developing rules which seek to strike the right balance between protecting people from harm without undermining their freedom of expression. We want to support the government and Parliament in making this bill as effective as possible at meeting our shared objective.”

What will the Online Safety Bill do?

The Online Safety Bill intends to make online platforms safer by placing responsibility on the providers of those services. As such, any content that is illegal, or that, although legal, is considered harmful to children or adults, will be the responsibility of the service provider as well as the original poster.

Once the Online Safety Bill is implemented, platform providers will be subject to these new responsibilities:

  • Illegal content risk assessment and illegal content duties.
  • Rights to freedom of expression and privacy duties.
  • Duties about reporting and redress.
  • Record-keeping and review duties.
  • Children’s risk assessment and duties to protect children’s online safety.
  • Adults’ risk assessment duties and duties to protect adults’ online safety.
  • Duties to protect content of journalistic and/or democratic importance.

Certain types of services, which are associated with a low risk of harm, are exempt. These include internal business services, such as intranets, and certain services provided by public bodies.

There are also certain types of content that are exempt. These include emails, text messages, live aural communications, paid-for advertisements, comments and reviews, and content from recognised news publishers.

Age verification

There is a concern that many platforms will require age verification technology to comply with the demands of the Online Safety Bill. Age verification was previously mandated under the Digital Economy Act 2017, but the relevant provisions were dropped two years later (and will be formally repealed by the Online Safety Bill).

According to the Online Safety Bill, providers will need to assess whether it is possible for children to access their platform. If it can be accessed by children, the platform will have to comply with the safety duties for child protection. Failure to conduct an assessment will result in the platform being treated as accessible to children until the assessment has been carried out.
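That default can be pictured as a simple conditional. The sketch below is purely illustrative, not anything the bill prescribes: the bill imposes a legal duty rather than an algorithm, and the class and function names here are hypothetical.

```python
# Hypothetical sketch of the bill's default rule: an unassessed
# service is treated as accessible to children.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChildAccessAssessment:
    children_can_access: bool  # outcome of the provider's own assessment

def child_safety_duties_apply(assessment: Optional[ChildAccessAssessment]) -> bool:
    """Return True if the child-protection safety duties apply."""
    if assessment is None:
        # No assessment carried out: the platform is treated as
        # accessible to children until one has been completed.
        return True
    return assessment.children_can_access

print(child_safety_duties_apply(None))   # True: the default applies
print(child_safety_duties_apply(ChildAccessAssessment(False)))  # False
```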

Although not explicitly stated, it can be inferred that any platform wanting to display adult material will be required to have a process in place for preventing children from accessing their website.

“It might be a tick box, I suspect that’s the line that people will go down,” says David Varney, technology team director at Burges Salmon. “Maybe some indication of parental consent will be needed where there’s violence.”

Defining ‘harmful’

One of the more opaque parts of the Online Safety Bill is its definition of content that is legal but considered harmful.

Clause 45 describes content that is harmful to children as content that the service provider has reasonable grounds to believe risks causing (directly or indirectly) a “significant adverse physical or psychological impact on a child of ordinary sensibilities”. A similar definition is given for adults of ordinary sensibilities.

“The government’s asking these providers to take a censorship role,” says Varney. “It is looking at those platform providers to step up and make sure their internal processes are tight enough to ensure that any kind of hate speech is addressed.”

The Online Safety Bill defines harmful content with the following broad categories:

  • Illegal user-generated content such as child sexual exploitation and abuse, terrorism, hate crime, and sale of illegal drugs and weapons.
  • Legal but harmful content that gives rise to a risk of psychological or physical harm, such as abuse or eating disorder content.
  • Underage exposure to content that gives rise to a foreseeable risk of psychological or physical harm to children, such as pornography and violent content.

It is the second point that causes the greatest challenge, as platform providers will need to take steps to remove content that, although legal, they believe could be harmful. The broad nature of this definition, combined with the financial risk of doing too little, means platform providers may well err on the side of caution and block or remove innocent content.

“Free speech is based on you saying something that someone can subsequently take offence at and pursue as a hate crime,” says Jim Killock, director of the Open Rights Group. “The Online Safety Bill removes that and will screen for anything deemed harmful.”

Paying for the privilege

According to Clause 52 of the bill, Ofcom will charge platform providers accessible from the UK an annual fee to fund their regulation, provided a provider meets the relevant financial threshold. The fee will be proportionate to the size of the company, with the threshold set by Ofcom, subject to approval by the secretary of state. However, platforms that only host the exempt content described above will not be subject to regulation or the associated fees.

It has been estimated that the initial compliance costs to UK businesses will be £9.2m for reading and understanding the regulations, £12.4m for putting reporting mechanisms in place, and £14.7m for updating terms of service.

Meanwhile, the ongoing compliance costs for UK businesses over 10 years have been estimated at £31m for producing risk assessments, £1.27bn for additional content moderation, £3.6m for transparency reporting and a total £346.7m industry fee.
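Taken together, those estimates put the one-off cost at around £36m and the 10-year ongoing cost at roughly £1.65bn. The following back-of-the-envelope sum uses only the figures quoted above; the grouping and labels are ours.

```python
# Aggregating the compliance cost estimates quoted in this article.
# All figures are in millions of pounds.
initial_costs_m = {
    "reading and understanding the regulations": 9.2,
    "putting reporting mechanisms in place": 12.4,
    "updating terms of service": 14.7,
}
ongoing_costs_10yr_m = {
    "producing risk assessments": 31.0,
    "additional content moderation": 1270.0,  # £1.27bn
    "transparency reporting": 3.6,
    "industry fee": 346.7,
}

print(f"Initial: £{sum(initial_costs_m.values()):.1f}m")                      # £36.3m
print(f"Ongoing over 10 years: £{sum(ongoing_costs_10yr_m.values()):,.1f}m")  # £1,651.3m
```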

“Larger companies will already have the legal teams in place, but this will be an additional hurdle for new platforms,” says Killock.


There will also be severe financial penalties for non-compliance, such as for failing to comply with information requests from Ofcom. These can be up to £18m or 10% of qualifying worldwide revenue, whichever is higher.
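The “whichever is higher” rule is straightforward arithmetic. As an illustration (the function name is ours, and “qualifying worldwide revenue” is a term the bill defines in detail):

```python
# Maximum fine: the higher of £18m or 10% of qualifying worldwide revenue.
def max_penalty(qualifying_worldwide_revenue: float) -> float:
    """Return the maximum fine in pounds for a given revenue figure."""
    return max(18e6, 0.10 * qualifying_worldwide_revenue)

print(max_penalty(100e6))  # 18000000.0, the £18m floor applies
print(max_penalty(1e9))    # 100000000.0, 10% of £1bn exceeds the floor
```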

The Online Safety Bill also allows the secretary of state, through Ofcom, to deal with threats by directing Ofcom’s media literacy activity and giving public notice statements to service providers. As part of this, Clause 35 allows Ofcom to prepare minor amendments to codes of practice without consultation, and for these to be issued without being laid before Parliament. The bill also allows the secretary of state to direct Ofcom to modify a code of practice to ensure it reflects, among other things, government policy.

Potential benefits but significant hurdles

Correctly implemented, the Online Safety Bill could provide a useful tool for revitalising the UK’s digital economy. Potential benefits include fewer people leaving platforms due to bullying, as well as parents being happier for their children to use regulated platforms. Minority groups are also more likely to join a platform if they are less concerned about harassment.

There would also be a reduced risk of harm leading to bad publicity and reputational damage – for example, a tragedy such as suicide being linked to harmful content viewed on a platform.

“There are a lot of queries about how to interact with free speech and the rights of opinion – that’s a balancing act that online providers are going to have to address,” says Varney. “Harmful but legal content, like online bullying, is exactly the thing this is set up to address.”

Harmful content remains a significant concern for the UK and the rest of the world. Removing such content would boost trust in, and therefore use of, a platform’s services.

However, as it currently stands, the Online Safety Bill contains several grey areas that will be difficult to implement, while the financial burden on platform providers and online services is also likely to be significant.

“This is one step towards more regulation for big technology,” says Varney. “We’re going to see more and more regulation of technology companies to make sure people can use technology safely.”
