
UK Online Safety Bill published: our overview
The United Kingdom (“UK”) government recently published its draft Online Safety Bill (“Bill”), landmark legislation designed to protect young people and clamp down on racist abuse online, while safeguarding freedom of expression.
Background
The Bill has been in development for over two years, starting in April 2019, when the UK government released the Online Harms White Paper. The White Paper put forward ambitious plans for addressing abuse, bullying and extremism in the online world, underpinned by a government drive to make the UK “the safest place in the world to be online while defending free expression”.
Following stakeholder consultations and internal reviews, the Government published a full response to the White Paper in December 2020, which became the backbone of the newly introduced Bill.
It’s worth noting that the Bill closely follows the proposed European Digital Services Act, published by the European Commission in December 2020, which similarly aims to create a safer and more open digital space for European Union citizens.
Who does the Bill apply to?
The Bill applies to all service providers that fall into one of the categories below. If a service provider is based outside the UK but its service is used by people in the UK, the Bill still applies. The categories are:
- Category 1: Large companies, or companies with a large user base, that provide user-to-user services, such as instant messaging, or enable users to upload, share and interact with content. Most notably, this will apply to global social media providers like Facebook and Instagram.
- Category 2A: Companies that provide search engine services, which allow users to search multiple websites and databases.
- Category 2B: Like category 1, this tier covers companies that provide user-to-user services, but on a smaller scale than category 1 companies.
It is expected that the majority of companies will fall into category 2. These organisations face less stringent obligations than category 1 companies. Category 1 aims to regulate ‘Big Tech’, where much of the risk and controversy around online harm and illegal activity centres today.
The Bill also states that the following types of organisations are out of scope:
- News publishers
- Business-to-business services
- Internet service providers and other providers that play only a functional role in the online sphere
- Online retailers and product review sites
New duties for service providers
Online service providers that are subject to the Bill must adhere to certain duties of care. These duties fall under two headings: content duties and safeguarding duties.
Content duties
The Bill considers two types of content from which service providers must protect users: “illegal” and “harmful” content. Illegal content encompasses content relating to criminal offences, such as terrorism, the sale of drugs and child sexual abuse.
Harmful content is not explicitly defined in the Bill, but broadly covers content that a service provider identifies as presenting a risk of adverse physical or psychological impact on a child or adult. It also includes user-generated fraud, such as romance scams and fraudulent investment opportunities.
For service providers in all categories, the Bill outlines obligations in relation to illegal content, including to:
- Undertake illegal content risk assessments
- Take action to mitigate the risks of illegal content identified in risk assessments
- Put in place systems and processes to reduce the publishing and spread of illegal content, and take it down quickly once it is identified
- Introduce terms of service clauses that set out how users are protected from illegal content
- Introduce reporting and complaints procedures so that users can report illegal/harmful content, and keep written records of these reports
Service providers have obligations in relation to harmful content when their services are likely to be accessed by children and/or they fall into category 1. As well as the obligations above, these service providers must also take steps to mitigate the risk of their users viewing and being impacted by harmful content.
Safeguarding duties
Companies that fall within the scope of the Bill will also need to balance their content duties with the duty to protect users’ rights to freedom of expression. The safeguarding duties, several of which apply to category 1 providers only, are to:
- Protect user rights to freedom of expression and privacy (all categories)
- Protect democratically important content and journalistic content (category 1 providers only)
- Conduct assessments to check democratically important content is being effectively protected (category 1 providers only)
Who will enforce the Bill?
Ofcom will act as the regulator under the Bill and will oversee enforcement and compliance. As part of its duties, it will establish a register of all regulated providers and prepare codes of practice to assist organisations in meeting their duties of care.
Should an organisation fail to meet its duties of care, Ofcom will have enforcement powers, including the ability to fine companies up to £18 million or 10% of their annual worldwide revenue, whichever is greater. Ofcom will set the amount of a fine based on the gravity of the compliance failure.
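To put those figures in context, here is a minimal illustrative sketch of how the maximum possible fine could be worked out under the draft Bill. The function name and the example revenue figure are ours, not the Bill’s; the assumption reflected here is that the cap is the greater of the fixed sum and the revenue-based sum:

```python
def max_penalty_gbp(annual_worldwide_revenue_gbp: float) -> float:
    """Illustrative cap on a fine under the draft Online Safety Bill.

    Assumes the maximum fine is the greater of a fixed sum
    (GBP 18 million) and 10% of annual worldwide revenue. The
    actual fine Ofcom imposes would depend on the gravity of the
    compliance failure, up to this cap.
    """
    FIXED_CAP_GBP = 18_000_000
    REVENUE_SHARE = 0.10
    return max(FIXED_CAP_GBP, REVENUE_SHARE * annual_worldwide_revenue_gbp)


# Example: a provider with GBP 500m worldwide revenue could face a fine
# of up to GBP 50m, since 10% of revenue exceeds the GBP 18m fixed cap.
print(max_penalty_gbp(500_000_000))  # 50000000.0
```

For smaller providers, the £18 million fixed sum would be the operative ceiling; for the largest platforms, the revenue-based figure dominates.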
Analysis
The current draft of the Bill gives a good overview of the pending duties for service providers subject to the legislation. However, to help service providers be confident of achieving compliance, it will need to be supplemented by further codes of practice from Ofcom, as well as further legislation to cement the definition of harmful content. This definition is currently vague, making harmful content difficult for service providers to recognise in practice.
Even once defined, it’s likely that the main challenge with this law will be the delicate balance of protecting freedom of expression while also safeguarding users from harmful content. It will be interesting to see how the codes of practice develop for determining what constitutes harmful content, and how service providers should mitigate this risk.
It’s also important to note the continued emphasis on protecting users under the age of 18, with additional obligations for service providers whose services are “likely to be accessed by children”. This follows similar thinking to the ICO’s Age-Appropriate Design Code, which we wrote about recently.
Next steps
The next step is for a parliamentary joint committee to review the Bill. It will then be debated by Parliament. Following this, a final draft of the Bill will be prepared for parliamentary approval.
As this occurs, businesses that believe they are subject to the Bill should begin taking foundational steps towards compliance, in line with the obligations currently set out in the draft. Organisations should put in place systems and processes to identify harmful and illegal content, and explore safeguarding options to protect user safety.
As with the UK General Data Protection Regulation, compliance with this Bill will not be a tick-box exercise, but an ongoing practice involving regular risk assessments, reporting and improvement.
Need help?
If you are a service provider concerned about complying with the upcoming Online Safety Bill, get in touch. As a specialist data protection consultancy, Evalian is well placed to help you navigate this uncertain and constantly changing regulatory landscape. If you would like an informal conversation about how we can assist, please contact us. We can steer you in the right direction or, if you need more support, assist at every stage to ensure you are covered.
Image Design by upklyak / Freepik – modified by Evalian