Freedom of Expression, Speech Rights & Modern Regulation

  • by William Perrin, Trustee, Carnegie UK Trust; Professor Lorna Woods, University of Essex; and Maeve Walsh, Carnegie Associate
  • 29 January 2021
  • 12 minute read

A number of social networks and web services banned President Trump from their services, suggesting that there was a risk of further incitement to violence. Twitter was first to act, followed by several others. Although these were American companies banning an American politician, reactions around the world spanned a full range of emotions. Notably, some commentators claimed that this was a violation of freedom of expression. In Europe, Chancellor Merkel appeared not to agree with banning President Trump. In the United Kingdom, two Cabinet Ministers expressed concerns, including (in an article in The Times) DCMS Secretary of State Oliver Dowden, who is responsible for online harms regulation:

“The idea that free speech can be switched off with the click of a button in California is unsettling even for the people with their hands on the mouse. Just this week, Twitter’s chief executive, Jack Dorsey, said that while he felt that it was right for his platform to ban Mr Trump, leaving platforms to take these decisions “fragments” the public conversation and sets a dangerous precedent.”  

Against this background, we examine the legal position from the perspective of the European Court of Human Rights and consider what the implications of that jurisprudence are for government policy. The case law does not give us one clear answer as to when and how speech rights are engaged, as the following seven propositions show.

We might say from this that, if a speaker has access to a number of platforms, then the balance of rights probably comes down in favour of the (private) platform, especially in respect of speech that does not comply with platform rules. Conversely, we would express concern if the rules have been applied in an unequal way – though this may be a criticism about under-enforcement as much as over-enforcement. Finally, the nature of some of the speech might be such as to take it outside speech protections in any event.

The law outlines what is possible but does not address what choices politicians and parliament might make within it. None of this prevents States from regulating access to private platforms (similar to the universal service obligation found in the postal and telecommunications sectors and the ‘must carry’ rules in broadcasting); in some instances they might be obliged to intervene. There are three main questions to look at when considering this:

The UK government’s online harms proposals are notable for steering clear of new rules that might apply to politicians and political discourse. The government had an opportunity to set rules in this area in September 2020 when responding to the report of the Lords Committee on Democracy and Digital Technologies, chaired by Lord Puttnam. But the government did not commit to reforms that might have addressed the Trump issue, referring instead to the generality of the then-draft online harms work.

‘The government’s Online Harms White Paper consultation response … fails to make any mention of a “duty of care” towards democracy itself. Technology is not a force of nature and online platforms are not inherently ungovernable. They can and should be bound by the restraints that we apply to the rest of society.’

Lord Puttnam, House Magazine, 11 January 2021

The UK government’s proposals are intended to prevent reasonably foreseeable physical and psychological harm to individuals; this includes hate speech. The government chose not to tackle political disinformation or harms to society. It is not clear that even action by a British politician similar to President Trump’s would trigger obligations on a platform under the duty of care that the government proposes. The proposed UK overarching framework might engage the problem where harm to individuals is evident – such as threats to injure a particular politician – or where there is hate speech. Given that the current proposals seek to exclude political disinformation and harms to society, attempts to overturn a democratic election would fall within the regime only to the extent that they result in significant physical violence to someone.

Even assuming that this type of harm falls within the regime, there are some tensions in the position taken in the full response and, especially in the light of the events of 6 January, these should be clarified by the government.  In the final response to the consultation on the Online Harms White Paper, the government brings forward three proposals that are relevant to the Trump case: 

Firstly, the government makes the high-level proposal that:

‘Companies will not be able to arbitrarily remove controversial viewpoints and users will be able to seek redress if they feel content has been removed unfairly.’ (Full Government Response to the Online Harms White Paper: CP 354, page 33, para 2.34)

The government does not expand on this potentially significant and powerful phrase. It stands in isolation in the text, so we do not know at this stage how it is intended to work. It seems similar to the non-discrimination point in the Convention free speech jurisprudence. We note that in the USA former Attorney General Barr proposed reform of Section 230 of the Communications Decency Act to prevent ‘arbitrary content moderation decisions’, which would also seem to fit with those non-discrimination concerns. We also note that Mr Zuckerberg said on 27 January:

‘we plan to keep civic and political groups out of recommendations for the long term, and we plan to expand that policy globally… we’re also currently considering steps we could take to reduce the amount of political content in News Feed as well. We’re still working through exactly the best ways to do this’

The second proposal is that Ofcom will be responsible for ensuring that platforms have in place systems to enforce their own terms and conditions as part of overall harm reduction:

‘Regulation will ensure transparent and consistent application of companies’ terms and conditions relating to harmful content. This will both empower adult users to keep themselves safe online, and protect freedom of expression by preventing companies from arbitrarily removing content.’ (Full Government Response to the Online Harms White Paper: CP 354, page 27, para 2.10)

Again, this is less about what will be removed and more about consistency; in this, the proposal seems to recognise the interest of each platform in setting the terms on which users engage in its space. Some platforms may introduce rules in relation to politicians’ speech generally. Some Category One platforms may already have their own codes of practice for some UK elections. This means that, should it have jurisdiction, Ofcom would in fact oversee the processes of, say, Twitter, were it to make a decision similar to that on President Trump with regard to a British politician. Again, this would likely be acceptable, if not desirable, from the perspective of the current Strasbourg jurisprudence.

However, the third proposal relevant to the Trump affair is the somewhat standalone statement:

‘Ofcom should not be involved in political opinions or campaigning shared by domestic actors within the law.’ (Full Government Response to the Online Harms White Paper: CP 354, page 33, para 2.81)

On a first reading, this seems to deny Ofcom the oversight role over community standards envisaged for Category One platforms as far as political speech is concerned; this would allow greater space for a platform to develop an editorial (and not necessarily politically neutral) line on enforcement. One way to reconcile the statements is to read this one as relating to the scope of the duty of care rather than its implementation. On this basis, the proposal seems intended to reinforce the point that politics in general is a societal issue, out of scope of the regime, and that it is not intended to override steps to prevent physical or psychological harm to individuals caused by domestic political actors.

The proposal reinforces that only harm to individuals is in scope, not the disinformation that created an environment in which such harm could occur. However, questions remain. It is unclear whether Ofcom’s oversight of the implementation of a platform’s community rules extends to all of those rules, or just the rules that pertain to content resulting in relevant harms. A narrower interpretation of Ofcom’s role leaves greater space for even a Category One platform to follow its own editorial line in implementing its rules. Does it mean that, if platforms systemically ‘arbitrarily’ remove controversial viewpoints of domestic political actors, then Ofcom cannot act? At face value, it suggests that Ofcom would not supervise the platforms’ election codes, leaving them as unregulated in that regard as they are in the USA. Where a platform is a unique form of communication, this might give rise to issues under Article 10 case law (e.g. Appleby), though as noted the Court has historically given a limited scope to the circumstances in which this might arise (Tele 1). Whether arbitrary enforcement against political speech is acceptable from a Convention perspective, even by a private actor, when that actor is a major platform is, however, far from certain.

Many voices have called for a more comprehensive approach to rules for political speech online, including during elections: not least the Lords Committee on Democracy and Digital Technologies, the Committee on Standards in Public Life in its Inquiry into Intimidation in Public Life, and the Advertising Standards Authority, which said in 2020 that political advertising should be regulated. Ofcom, of course, has experience of regulating broadcasters for impartiality and also in respect of broadcast political advertising rules. Working with Lord McNally, we made the modest suggestion that, during the particularly high-risk period around elections, the duty of care on platforms should address

‘threats that impede or prejudice the electoral process’

This is the basis on which the Crown Prosecution Service considers electoral crime.

The Trump affair illustrates the issues raised by the many groups calling for modern regulation in this area. The latest online harms proposals need clarification in this respect. The government will come under sustained pressure to bring rules up to date.