The Draft Online Safety Bill: Carnegie UK Trust initial analysis
- by Professor Lorna Woods, Professor of Internet Law, University of Essex; William Perrin, Trustee, Carnegie UK Trust; and Maeve Walsh, Carnegie Associate
- 15 June 2021
- 18 minute read
The UK government published its draft Online Safety Bill on 12th May 2021.[1] It will now undergo a three-month period of pre-legislative scrutiny, undertaken by a joint Parliamentary committee, before the revised, final Bill is introduced later in the year. Pre-legislative scrutiny is intended to examine “big picture” design and structure issues, with the government then deciding whether to accept, amend or reject the recommendations in the Committee’s report when it revises the Bill for introduction. This analysis sets out some of the issues which we think should be considered. You can view our full response in PDF format here.
Leading issues
The draft Bill has the potential to develop into an effective, evidence-based framework for the regulation of social media companies and search engines to prevent harm arising to people in the UK. This is an achievement, given that the Bill was drafted during a national crisis. On a spectrum of regulation, the regime would sit appropriately between regulation of broadcasting and self-regulation of the press.
The draft Bill is hard for a lay reader to understand. This will hinder scrutiny and increase regulatory burden. The government should structurally simplify the Bill’s three safety duties, three counterbalancing factors and its byzantine commencement process. Specifically, there should be a general safety duty to orientate and give coherence to the regime.
To meet the UK’s international commitments on free speech, there should be a separation of powers between the Executive and a communications regulator. The draft Bill takes too many powers for the Secretary of State. These should be reduced, removing in particular the Secretary of State’s power to direct OFCOM to modify its codes of practice to bring them in line with government policy.
The thresholds for harm underpin the entire regime for children and adults, but no process is described for defining “significant harm”. The government’s intention is that the threshold is low enough to be effective, but this needs more than a side comment in the explanatory notes.
Harms to adults on the largest platforms are not well covered (Clause 11). The government needs to spell out how huge volumes of racism, misogyny, antisemitism etc – that are not criminal but are oppressive and harmful, particularly to prominent figures – will be addressed. No special treatment is given to protect politicians, candidates and journalists involved in the democratic process.
The regulator does not have enough powers to address threats to public safety, public security and national security. The Prime Minister and President Biden recently signed a new Atlantic Charter[2] to “oppose interference through disinformation or other malign influences, including in elections”. Building such capability into the risk assessments in this Bill would be an easy way of meeting that obligation, demonstrating to the USA how this can be done while respecting free speech. The UK has an opportunity to be a world-leader on this agenda.
The regime described in the draft Bill could be employed by other regulators to make markets work better and protect the public. The December policy document[3] said that the power to ‘co-designate’ a regulator to act under the OSB regime would be available, but we cannot see it in the draft Bill. This is strategically important and the government should confirm that this ability is inherited from the Communications Act.
About Carnegie UK Trust
Over the past three years, Carnegie UK Trust has shaped the debate in the UK on reduction of online harm through the development of, and advocacy for, a proposal to introduce a statutory duty of care. We believe this is of critical importance to our mission to improve wellbeing. Our proposal is for social media companies to design and run safer systems – not for government to regulate individual pieces of content. Companies should take reasonable steps to prevent reasonably foreseeable harms that occur in the operation of their services, with enforcement by a regulator.[4] The proposal has been developed by Professor Lorna Woods (Professor of Internet Law, University of Essex), William Perrin (Carnegie UK Trustee) and Maeve Walsh (Carnegie UK Trust Associate), working with the Carnegie UK Trust team. It draws on well-established legal concepts to set out a statutory duty of care backed by an independent regulator, with measuring, reporting and transparency obligations on the companies. A focus on the outcome (harm) makes this approach futureproof and necessarily systemic. We propose that, as in health and safety regulation, companies should run their systems in a proportionate, risk-based manner to reduce reasonably foreseeable harm. Broadcast regulation demonstrates that a skilled regulator can work to assess harm in context, regulate it and balance this with maintaining free speech. Proportionality in regulation allows for innovation and market entry by SMEs.
We are pleased that the UK government has adopted – in part – our approach. But, as we set out below, it either excludes or omits several components that we have proposed or advocated for. In particular, it does not address: harms to democracy or the electoral process;[5] financial fraud and scams, except where these are via user-generated content;[6] or mis- or disinformation that has a societal impact.[7] Hate crime is covered, but it is unclear how well hatred falling short of the criminal threshold will be addressed.[8] There is no obvious way for regulators to work together,[9] and a reference to “co-designation” powers for OFCOM that appeared in the December policy document has not been transposed to the draft Bill.
This blog post sets out, in narrative form, our analysis of the structure and intent of the Bill and then goes through a series of specific questions which – we believe – are integral to the effective functioning of the regulations when they come into force and should be addressed during the process of pre-legislative scrutiny. It does not aim to provide a comprehensive analysis of all aspects of the Bill (e.g., we do not address reporting and transparency or what the freedom of expression duties might mean) or of drafting issues. As with all our work, we offer this analysis to policymakers, legislators and civil society colleagues in the spirit of our long-term commitment to ensure the introduction of proportionate, systemic regulation that protects the greatest number of users from online harms. There will be areas of harm and/or technical or operational issues on which others will have more expertise and we look forward to assessing how their analyses “dock” with our overarching view. As always, we welcome feedback on our work and would invite those with an interest in this area to contact us to discuss further.
Overview of the draft Bill
We welcome the fact that the draft Bill describes a regime taking a systemic approach to the regulation of online harms. This is a significant starting point and one for which Ministers and DCMS officials should be commended. For such an approach to achieve its maximum effect it should be linked into an international multilateral framework to build upon the recent G7 Technology Ministers’ Statement.[10] The draft Bill is limited, however, by the choices the government has made on where it wants that regulation to bite and – notably – by the areas that it has excluded and those that are ill-defined. These are policy and political choices that will come under intense scrutiny in the weeks and months ahead. We set out some of those we feel are most important in the second section of this analysis but start here by looking at the design of the regulatory framework.
Structural complexity
The draft Bill is a framework Bill, providing a structure in primary legislation under which supporting secondary legislation and a plethora of codes and guidance from OFCOM will sit. The draft Bill’s complexity flows from the choice the government has made to break the obligations on relevant operators down into multiple duties, some of which apply only to a limited group of operators. The Bill then sets out, for each of these duties and groups, a description of the process OFCOM and the Secretary of State have to follow to fill in the framework.
The intricate nature of the Bill and the difficulty of reading it from end to end as a narrative will make scrutiny and deliberation more difficult. It makes it hard to ascertain what is meant to happen and leaves room for people to assert meaning that may not be there. Equally, some of the complexity may lead to unintended outcomes, and in any event it increases the regulatory burden. The government could make less complex design choices and provide better explanatory tools during the pre-legislative process – the occasional diagram and timeline would help.
There are three separate thematic duties of care, each with an underpinning risk assessment duty. Further, there are a number of procedural duties. Three counterbalancing considerations apply unevenly across categories of operator. The duties for user-to-user services differ depending on whether the service is “Category 1” (the largest/riskiest) or not. The basic range of duties is repeated in a different form for search engines, leading to much repetition and cross-referencing.
Unlike the deliberately all-encompassing statutory duties of the Health and Safety at Work or Occupiers’ Liability Acts, the government is at pains, first, not to be all-encompassing and, second, to make some duties (much) stronger than others. This is also different from the proposal in the Online Harms White Paper[11] and from the Carnegie proposal,[12] both of which envisaged an overarching duty of care for all services, but a duty which might apply differently depending on the service.
The risk assessment and safety duties for user-to-user services target three areas of concern:
- Child sexual abuse and terrorism offences, and crimes that impact individuals (clauses 5(2), 7(1) and 9);
- Harm to individual children (clauses 5(4), 7(3)-(4) and 10); and
- Harm to individual adults, on Category 1 services only (clauses 5(5), 7(6)-(7) and 11).
While OFCOM has a role in determining which user-to-user services are Category 1, Schedule 4 provides constraints on the criteria that will be used in such determination. The risk assessment and safety duties for search engines exclude adult harms altogether (but contain parallel illegal content duties (clauses 17(2), 19(1) and 21) and children’s duties (clauses 17(3), 19(2) and 22)).
Scope and Thresholds
To be of relevance to the duties, the content must meet certain thresholds. For criminal law, in addition to terrorism and child sexual exploitation and abuse (CSEA) offences, offences are relevant when the intended victim is an individual (cl 41(4)(d)) or when the Secretary of State specifies that the offence falls within scope (cl 41(4)(c) and cl 44), provided the offence does not fall under cl 41(6). Content falls within relevant offences when the service provider has “reasonable grounds” for believing that the content constitutes a relevant offence. Clause 41(5) also envisages that the Secretary of State may use the cl 44 process to specify content to be “priority illegal content” – we assume this is for the Secretary of State and Parliament to set priorities for OFCOM’s regulatory work. CSEA content and terrorism content are not automatically priority illegal content. Special obligations arise in relation to priority illegal content, particularly for user-to-user platforms (cl 9(3)(a)-(c), 21(3)(a)). It appears that the hate crime offences would be covered by clause 41(4)(d), as well as a much broader range of offences, e.g. coercive or controlling behaviour, threats to kill and harassment.
For other content, the threshold is that the content gives rise to psychological or physical harm (cl 45(3) in relation to children and cl 46(3) in relation to adults). Content that is harmful to children is content in respect of which there are reasonable grounds for believing there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on a child of ordinary sensibilities (cl 45(3)). Similar wording is used in relation to the definition of content harmful to adults, though the reference point is an adult of ordinary sensibilities (cl 46(3)). In both cases, the Secretary of State may designate further content; there is also a priority content category. For these two categories, the Secretary of State must consult OFCOM before making the regulations and OFCOM is under an obligation to review the incidence of types of designated harmful content. The focus on individual harms may result in boundaries being drawn through certain categories of content – notably misinformation and hate speech.
Risk assessment
OFCOM will do a broadly-based market risk assessment (cl 61) that underpins much of the regime and then produce guidance (cl 62) for service providers to assist them in carrying out their own risk assessments for each safety duty. This will take a year or more after Royal Assent. (See ANNEX A for our best guess on the timeline for commencement.)
Armed with this guidance from OFCOM, services then have to carry out their own risk assessments. Risk assessments should cover the ‘characteristics’ of the service, including the impact of those characteristics on the risk of harm from certain defined types of content. Clause 61(6) lists the user base, business model, governance and other systems and processes, as well as the service’s functionalities – i.e. the system design, operation of the algorithm and so on. It seems, however, that there is little ability for the regulator to critique the providers’ own risk assessments.
There needs to be a clear connection between the risk assessments, the Online Safety Objectives (cl 30), the carrying out of the duties and enforcement of those by OFCOM to ensure that a ‘backstop’ exists and applies to systems, processes and content. One way of doing this might be an overarching duty of care.
Safety and other duties
The illegal harm duty (cl 9) and the harm to children duty (cl 10) are strong, albeit complex. The (quite different) harms to adults duty (cl 11) is ill-defined and weak and is imposed only on Category 1 providers. This means that the criteria on which this categorisation rests have great significance for the impact of the regime – many of the high-profile concerns may fall under the “harms to adults” duty.
There are three counterbalancing ‘protective’ duties intended to meet concerns from some about the impact of safety regulation. The duty about rights such as free speech and privacy (cl 12) applies to all user-to-user services (but imposes specific obligations on Category 1 providers). A second group of duties applies to Category 1 providers only: a duty in relation to content of democratic importance (cl 13); and a duty to protect content from traditional media that is regulated or self-regulated elsewhere (cl 14).
Search engines are treated differently – with only a duty about freedom of expression and privacy (cl 23); there is no equivalent of clauses 13 and 14 (as there is no equivalent of a Category 1 provider).
OFCOM will also produce Codes of Practice relating to recommended steps for compliance with the safety duties; the Codes must be compatible with the pursuit of the Online Safety Objectives detailed in clause 30. These objectives underpin the codes and apply to any service claiming that it has met the codes of practice by other means. The codes in relation to the safety duties and certain other duties seem to be on a “comply or explain” basis (cl 36); OFCOM is to take the Codes of Practice into account (cl 37) and the Codes are admissible in evidence in court. These provisions do not apply to the guidance as regards risk assessments (cl 62). The Online Safety Objectives are the key to a “safety by design” approach and need to be considered closely. We note, for example, that there is no explicit obligation in cl 30(2)(a) to consider the severity of the harms posed. The Online Safety Objectives should be explicit about systems and processes being proportionate to the size of the threat.
Part 3 of the draft Bill contains other duties: the obligation to provide annual transparency reports, to notify OFCOM and to pay fees. OFCOM may specify what is to be included within the reports, drawing on the categories listed in clause 49(4). While the notification duty applies to all regulated services, the transparency report obligation applies only to specified services; this may limit the oversight of some small but problematic services.
In the Bill, the duties are relatively empty frameworks; most will be populated in part by OFCOM research, risk assessments and OFCOM guidance, Codes and so on (the latter presented to the Secretary of State, who lays them before Parliament as statutory instruments). This means that the powers and resources made available to OFCOM will be a significant factor in whether the regime succeeds. It also means that OFCOM’s independence must be safeguarded.
Powers of the Secretary of State
It is apparent from the description above that the role of the Secretary of State is significant. Although most Acts envisage some filling in of detail and a role for relevant Secretaries of State, the powers of the Secretary of State here go much further than expected (see the table set out in ANNEX B). Part of this is over-reach (for instance, ensuring OFCOM’s guidance is in line with “government policy”); part is a function of using secondary legislation to give Parliament a say in the development of the framework.
Enforcement
OFCOM’s enforcement powers seem at first glance logical, with good information-gathering and related powers (Part 4, Chapter 5). OFCOM, as well as being able to impose fines, is given broad ‘business disruption’ powers, which together seem like an effective enforcement suite. Powers are set out in the draft Bill for criminal sanctions against company executives, but the Secretary of State will only commence that part of the Bill in the future, after a review by OFCOM and only if certain conditions are met.
The regime is focussed on systems and processes, and OFCOM’s enforcement route is to challenge whether the duties in general are being breached. OFCOM can only get specific items of content taken down in an emergency. The regime does not replace existing causes of action that individual users could take against other users, nor the conditional immunity from liability that platforms have against those actions. Moreover, the draft Bill does not specifically envisage class actions and/or representative actions. Instead, the draft Bill contains provisions to enable a super-complaint mechanism (cl 106); this allows designated bodies to bring systemic problems to OFCOM’s attention – though these will be limited by the thresholds in the draft Bill too.
OFCOM’s decisions with regard to inclusion on the register of services, or to issue a use of technology notice, may be appealed to the Upper Tribunal (cls 104-105).
Initial Analysis: Questions and issues
In the next section, we set out a series of detailed questions which need further thought or clarification during the pre-legislative scrutiny period:
- Does OFCOM remain independent?
- Is the regime systemic?
- What is the scope of harms included?
- How is legal but harmful defined and addressed?
- How is the boundary for Category 1 determined?
- What measures are there to protect children?
- How does the draft Bill make up for the repeal of Part 3 of the Digital Economy Act?
- What are the exclusions?
- How is national security addressed?
- How are threats to public health and public safety assessed and mitigated?
- What powers does OFCOM have?
- Is there a duty to cooperate?
- Can other regulators be co-designated to work under the regime?
- What is the redress system?
- How is online advertising addressed?
- What might enable a quick start?
- How does commencement work?
- ANNEX A: An indicative timeline
- ANNEX B: The role of the Secretary of State
[1] https://www.gov.uk/government/publications/draft-online-safety-Bill
[2] Clause 3 ‘Third, we remain united behind the principles of sovereignty, territorial integrity, and the peaceful resolution of disputes. We oppose interference through disinformation or other malign influences, including in elections, and reaffirm our commitment to debt transparency, sustainability and sound governance of debt relief.’ https://www.gov.uk/government/publications/new-atlantic-charter-and-joint-statement-agreed-by-the-pm-and-president-biden/the-new-atlantic-charter-2021
[3] https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response
[4] All our work is available here: https://www.carnegieuktrust.org.uk/project/harm-reduction-in-social-media/.
[5] See our March 2021 blog post on protecting those involved in the democratic process: https://www.carnegieuktrust.org.uk/blog/increased-online-safety-for-people-involved-in-the-democratic-process-in-the-uk/; and our January 2021 blog post on freedom of speech and political mis/-disinformation: https://www.carnegieuktrust.org.uk/blog/freedom-of-expression-speech-rights-modern-regulation/
[6] Carnegie UK Trust co-signed a letter to the Home Secretary and DCMS Secretary of State on this issue in April 2021: https://www.carnegieuktrust.org.uk/news/cukt-joins-online-scams-coalition/
[7] See our thoughts on how a systemic duty of care would tackle the Covid “infodemic”: https://www.carnegieuktrust.org.uk/blog/addressing-the-infodemic-through-a-focus-on-online-system-design/
[8] We have published a draft Code of Practice for Hate Crime and wider legal harms alongside this full response: https://www.carnegieuktrust.org.uk/publications/draft-code-of-practice-in-respect-of-hate-crime-and-wider-legal-harms-covering-paper-june-2021/
[9] Our proposal for “regulatory interlock” is outlined in this blog post from September 2020: https://www.carnegieuktrust.org.uk/blog/online-harms-interlocking-regulation/
[10] https://www.gov.uk/government/publications/g7-digital-and-technology-ministerial-declaration
[11] https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper
[12] Our full reference paper of April 2019 sets this out in detail: https://d1ssu070pg2v9i.cloudfront.net/pex/carnegie_uk_trust/2019/04/08091652/Online-harm-reduction-a-statutory-duty-of-care-and-regulator.pdf