Online Safety Bill and the “harmful but legal” debate
- by Professor Lorna Woods, Professor of Internet Law, University of Essex; William Perrin, Trustee, Carnegie UK; Maeve Walsh, Carnegie Associate
- 9 November 2022
There’s been a lot written about the Online Safety Bill’s adult safety duty (so-called “harmful but legal”), some of it quite critical. Unfortunately, some of this criticism seems based on a misunderstanding of what the Bill says, as well as a downplaying of the severity of the sorts of content involved.
In short, the harmful-to-adults provisions do not require material that is harmful to adults but not contrary to the criminal law to be taken down or censored in any way. The Bill is quite explicit about this. Indeed, as Matthew Lesh, writing in The Spectator, noted, “[w]hile the impetus was for more user content to be removed, technically the platforms could have opted to do nothing”. Instead, the Bill helps empower people to manage their own risk through transparency, consistent enforcement, and user empowerment tools. Clause 13 is a ‘buyer beware’-type clause which requires the largest companies merely to alert customers to the presence of harmful material and to tell them what the company intends to do about it, which could be nothing at all. Although these requirements are quite minimal, two factors in the Bill weaken them further: the freedom of expression duty and the high bar set by the definition of harm.
The material the government intends to classify as harmful in this area is very serious indeed – suicide promotion and methods, eating disorders, dangerous health cures, harassment and abuse – even extremist material short of terrorism. Some of this material borders on the criminal. Such material does not appear on other mass distribution channels – TV, radio, advertising or, in most cases, newspapers. There, media self-regulation or regulation by Ofcom ensures that company systems reduce the harm. Social media would be more lightly regulated than traditional media in this respect, some might say paradoxically.
We unpack below the complex drafting behind this.
Note – clause numbers refer to the July 2022 version of the Online Safety Bill at Commons Report stage.
Structure of adult safety duty provisions
The adult safety duties are found in clauses 12-14. In addition, there are several safeguard provisions for freedom of expression: clause 19 provides a general safeguard for freedom of expression and privacy, with specific and higher obligations with regard to content that is harmful to adults (cl 19(5)-(7)). On top of that, clauses 15 and 16 deal with content of democratic importance and journalistic content and again require freedom of expression to be taken into account by platforms.
As with the other types of content covered by the Online Safety Bill, there are two aspects to the duties:
- risk assessment (cl 12); and
- mitigation (cls 13 and 14).
The mitigation requirements are essentially a transparency obligation and the provision of user empowerment tools. This approach could be said to facilitate user autonomy and choice: (i) as to whether to use services in the first place (subject to market dominance and network effects); and (ii) as to the extent to which the user wants to curate their own content.
As in other areas, the Bill distinguishes between the main category of content harmful to adults (non-designated content) and content designated as ‘priority content’ by the Secretary of State. No content is yet designated as priority content harmful to adults, but a written statement provides an indicative list. Significantly, the mitigation obligations with regard to non-designated content are much lower than those in relation to priority content – to the point of being virtually non-existent.
Harmful Content
The starting point is to understand what is meant by content that is harmful for the purposes of the regime, as these obligations operate only in relation to content that meets a specified threshold, found in the definition of harmful content in clause 54(3). It says:
“Content that is harmful to adults” means –
(a) priority content that is harmful to adults, or
(b) content, not within paragraph (a), of a kind which presents a material risk of significant harm to an appreciable number of adults in the United Kingdom.
With regard to (a) (priority content), the Government set out its indicative list in a Written Ministerial Statement (WMS) ahead of Commons Report stage. This list includes:
- Online abuse and harassment. Mere disagreement with another’s point of view would not reach the threshold of harmful content, and so would not be covered by this.
- Circulation of real or manufactured intimate images without the subject’s consent
- Content promoting self-harm
- Content promoting eating disorders
- Legal suicide content
- Harmful health content that is demonstrably false, such as urging people to drink bleach to cure cancer. It also includes some health and vaccine misinformation and disinformation but is not intended to capture genuine debate.
At the time, the then Secretary of State Nadine Dorries made the case that:
“the types of content on the indicative list meet the threshold for priority harmful content set out in the Bill. This threshold is important to ensure that the online safety framework focuses on content and activity which poses the most significant risk of harm to UK users online. It is important for the framework to distinguish in this way between strongly felt debate on the one hand, and unacceptable acts of abuse, intimidation and violence on the other. British democracy has always been robust and oppositional. Free speech within the law can involve the expression of views that some may find offensive, but a line is crossed when disagreement mutates into abuse or harassment, which refuses to tolerate other opinions and seeks to deprive others from exercising their free speech and freedom of association.”
With regard to 54(3)(b) (non-designated content), this definition creates a triple test – materiality, significance and appreciability – which is far from a low threshold. The definition of “harm” for the purposes of the Part 3 duties is found in cl 190, covering ‘physical or psychological harm’ (emotional harm is not included). ‘Hurt feelings’ would not satisfy this threshold (a point also made in the WMS listing the harms, above).
Mitigation Obligations
The Adults’ Safety Duty (cl 13) contains four main obligations, not all of which apply in any given situation:
- a duty to summarise findings from the most recent risk assessment in relation to all content harmful to adults in the terms of service;
- a duty to notify Ofcom of the existence of non-designated content harmful to adults (this may feed into Ofcom’s research as to whether that sort of content should be considered for designation as a priority, following the procedure set out in the Bill);
- in relation to priority content only, a duty to explain how the platform is responding to the priority content; and
- in relation to priority content only, an obligation to enforce certain specified terms of service consistently (which raises the question of whether this allows consistently poor enforcement).
A government amendment (71) at Report stage in July made it clear that platforms retain the option of choosing to do nothing even in relation to categories of priority content harmful to adults.
The User Empowerment duties (cl 14) impose an obligation to provide “features” to users which would have the effect of reducing the likelihood of the user encountering priority content, or alerting the user to the harmful nature of types of priority content. It is, however, notable that the Bill does not envisage that users can control their own experience with regard to non-designated content, even though that content is, by definition, harmful at quite a high level. Additionally, users are to be given the option to filter out non-verified users (and there is a matching obligation in clause 57, under which the service must offer users the option to verify their identity). These tools give users more control over their online experience and support those users’ own freedom of expression.
Freedom of Expression
When carrying out their duties, all service providers are subject to duties relating to freedom of expression and privacy (clause 19). Category 1 services must additionally carry out an assessment of the impact that measures taken under the safety duties would have on users’ freedom of expression and privacy. In addition, when taking decisions about user-generated content, Category 1 providers must ensure that their systems take into account freedom of expression and the importance of content of democratic importance and journalistic content.