Which social media services should be regulated for harm reduction?

  • by William Perrin, trustee of Good Things Foundation, Indigo Trust and 360Giving and a former senior civil servant in the UK government, and Professor Lorna Woods, University of Essex.
  • 8 May 2018

This article is one of a series about reducing the harm emanating from social media (see the end of this piece for information about this work and the authors). We set out in our earlier article a proposed system where every company that operates a qualifying social media platform used in the UK is subject to some general rules or conditions, notably a duty of care to their users.  The largest or most potentially harmful of the qualifying services would have to take more steps to satisfy this duty of care.

In this article, we discuss which social media services would be subject to a statutory duty of care towards their users.

Parliament would set out in law the characteristics of social media services that could be covered by the regime. There are always difficult boundary cases, and to mitigate this we propose that the regulator make a list of qualifying services.

Qualifying social media services

We suggest that the regime apply to social media services used in the UK that have the following characteristics:

  1. Have a strong two-way or multiway communications component;
  2. Display and organise user-generated content publicly or to a large member/user audience;
  3. Have a significant number of users or audience members – more than, say, 1,000,000;
  4. Are not subject to a detailed existing regulatory regime, such as the traditional media.

A regulator would produce detailed criteria for qualifying social media services based on the above and consult on them publicly.  The regulator would be required to maintain a market intelligence function to inform consideration of these criteria.  Evidence to inform judgements could come from: individual users, civil society bodies acting on behalf of individuals, whistle-blowers, researchers, journalists, consumer groups, the companies themselves, overseas markets in which the services operate, as well as observation of trends on the platforms.

In order to maintain an up to date list, companies which fall within the definition of a qualifying social media service provider would be required in law to notify the regulator after they have been operating for a given period.  Failure to do so would be an offence.  Notification would be a mitigating factor should the regulator need to administer sanctions.

The regulator would publish a list based on the notifications and on market intelligence, including the views of the public. The regulator’s decision to include a service on the list could, as for any such type of decision, be subject to judicial review, as could the decision not to include a service that the public had petitioned for.  Services could be added to the list with due process at any time, but the regulator should review the entire list every two years.

Broadly speaking, we would anticipate at least the following social media service providers qualifying; we have asterisked cases for discussion below.

  1. Facebook
  2. Twitter
  3. YouTube
  4. Instagram
  5. Twitch*
  6. Snapchat
  7. Musical.ly*
  8. Reddit
  9. Pinterest*
  10. LinkedIn

Managing boundary cases

Providing a future-proof definition of a qualifying social media service is tricky.  We would welcome views on how to tighten it up.  However, we feel that having the regulator make the list, rather than writing it into legislation, allows for some future-proofing.  It also reduces the risk of political interference – it is quite proper for the government to act to reduce harm, but in our view there would be free speech concerns if the government were to say who was on the list. An alternative would be for the regulator to advise the Secretary of State, who would then seek a negative resolution in Parliament, but in our view this brings in a risk to independence and freedom of speech.

Internet forums have some of the characteristics we set out above. However, hardly any forums would currently have enough members to qualify.  The very few forums that do have over one million members have, in our opinion, reached that membership level through responsible moderation and community management. In a risk-based regime (see below) they would be deemed very low risk and barely affected.  We do not intend to capture blog publishing services, but it is difficult to define them out.  We would welcome views on whether the large-scale interaction about a post that used to occur in blog comments in the heyday of blogging is of a similar magnitude to the two-way conversation on social media. We do not think it is, but it is hard to find data.  We would welcome comments on whether this boundary is sufficiently clear and how it could be improved.

Twitch has well-documented abuse problems and has arguably more sophisticated banning regimes for bad behaviour than other social networks. Twitch allows gamers to stream content that the gamers have generated (on games sites) with the intention of interacting with an audience about that content.  Twitch provides a place for that display, hosts multiway discussion about it, and provides a form of organisation that allows a user to find the particular content they wish to engage with. We therefore feel that Twitch falls within scope.  Other gaming services with a strong social media element should also be considered, particularly those with a strong youth user base.

Note that services do not need to include (much) text or voice: photo-sharing services such as Pinterest could fall within the regime too.

Risk-based regulation – not treating all qualifying services the same

This regime is risk based. We are not proposing that a uniform set of rules apply across very different services and user bases. The regulator would prioritise high-risk services and have only minimal engagement with low-risk services.  Differentiation between high- and low-risk services is common in other regulatory regimes, such as for data in the GDPR, and is central to health and safety regulation.  In those regimes, high-risk services are subject to closer oversight and tighter rules, as we intend here.

Harmful behaviours and risk have to be seen in the context of the platform.  The regulator would examine whether a social media service operator has had particular regard to its audience.  For example, a mass membership, general purpose service should manage risk by setting a very low tolerance for harmful behaviour, in the same way that some public spaces take into account that they should be a reasonably safe space for all. Specialist audiences/user bases of social media services may have online behavioural norms that could cause harm on a family-friendly service but are not harmful in the community where they originate.  Examples might include sports-team fan services or sexuality-based communities.  This can be seen particularly well with Reddit: its user base, with diverse interests, self-organises into separate subreddits, each with its own behavioural culture and moderation.

Services targeted at youths are innately higher risk – particularly where such services are designed to be used on a mobile device away from immediate adult supervision.  For example, the teen-focussed lip-synching and video-sharing site musical.ly, owned by Chinese group Bytedance, has, according to Channel 4 News, 2.5 million UK members and convincing reports of harmful behaviours.  The service is a phone app targeted at young people that also allows them to videocast their lives (through its live.ly service) with, as far as we can make out, few meaningful parental controls.  In our opinion, this appears to be a high-risk service.

We welcome views on this article to [email protected].

About this work

This blog is the fifth in a programme of work on a proposed new regulatory framework to reduce the harm occurring on and facilitated by social media services.  The authors, William Perrin and Lorna Woods, have extensive experience in regulation and free speech issues.  William has worked on technology policy since the 1990s, was a driving force behind the creation of OFCOM and worked on regulatory regimes in many economic and social sectors while working in the UK government’s Cabinet Office.  Lorna is Professor of Internet Law at the University of Essex, an EU national expert on regulation in the TMT sector, and was a solicitor in private practice specialising in telecoms, media and technology law.  The blog posts form part of a proposal to Carnegie UK Trust and will culminate in a report later in the spring.