How would a social media harm regulator work?
- by William Perrin, trustee of Good Things Foundation, Indigo Trust and 360 Giving and a former senior civil servant in the UK government, and Professor Lorna Woods, University of Essex.
- 10 May 2018
- 13 minute read
Reducing harm in social media – regulation and enforcement
We have set out in a series of blog posts a proposal for reducing harm from social media services in the UK (see end for details about the authors). The harm reduction system will require new legislation and a regulator. In this post we set out our first thoughts on the tasks to be given to a regulator and how the regulator would go about putting them into action.
How a regulator might work
Parliament should set only a framework within which the regulator has the flexibility to reduce harm and respond appropriately in a fast-moving environment. Our proposal (see earlier posts) is that the regulator is tasked with ensuring that social media service providers have adequate systems in place to reduce harm while preserving freedom of speech in the European tradition. The regulator would not get involved in individual items of speech. The regulator must not be a censor.
Harm reduction cycle
We envisage an ongoing, evidence-based process of harm reduction. The regulator could work with the industry to create a harm reduction cycle that is transparent, proportionate, measurable and risk-based.
A harm reduction cycle begins with measurement of harms. The regulator would draw up a template for measuring harms, covering scope, quantity and impact. The regulator would use as a minimum the harms set out in statute but, where appropriate, include other harms revealed by research, advocacy from civil society, the qualifying social media service providers and so on. The regulator would then consult publicly on this template, specifically including the qualifying social media service providers. Regulators in the UK such as the BBFC, the ASA and OFCOM (and its predecessors) have demonstrated for decades that it is possible to combine quantitative and qualitative analysis of media, independent of political influence, for regulatory purposes.
The qualifying social media services would then run a measurement of harm based on that template, making reasonable adjustments to adapt it to the circumstances of each service. The regulator would have powers in law to require the qualifying companies (see enforcement below) to comply. The companies would be required to publish the survey results in a timely manner. This would establish a first baseline of harm.
The companies would then be required to act to reduce these harms. We expect those actions to fall into two groups: things companies just do, or stop doing, immediately; and actions that would take more time (for instance new code or changes to terms and conditions). Companies should seek views from users as the victims of harms, or from NGOs that speak for them. These comments – or more specifically the qualifying social media service providers' respective responses to them (though it should be emphasised that companies need not adopt every such suggestion) – would form part of any assessment of whether an operator was taking reasonable steps and satisfying its duty of care. Companies would be required to publish, in a format set out by the regulator (a sketch of a possible format follows this list):
- what actions they have taken immediately
- actions they plan to take
- an estimated timescale for measurable effect and
- basic forecasts for the impact on the harms revealed in the baseline survey and any others they have identified.
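To illustrate what such a published plan might look like in practice – purely a hypothetical sketch, since the regulator would set the actual format, and the structure and field names below are our own assumptions – the required elements could be captured as structured data along these lines:

```python
# Hypothetical sketch only: the regulator would define the real reporting
# format; the structure and field names below are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class HarmForecast:
    harm: str               # a harm from the baseline survey, or one newly identified
    baseline_level: float   # level measured in the baseline survey
    forecast_level: float   # the company's forecast once its actions take effect


@dataclass
class HarmReductionPlan:
    provider: str
    immediate_actions: List[str]   # things the company has done, or stopped doing, straight away
    planned_actions: List[str]     # changes taking longer, e.g. new code or terms and conditions
    timescale_months: int          # estimated time before a measurable effect
    forecasts: List[HarmForecast] = field(default_factory=list)
```

A common, machine-readable structure of this kind would make it easier for the regulator, users and civil society to compare plans across providers and against later survey results.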
The regulator would take views on the plan from the public, industry, consumers/users and civil society and make comments on the plan to the company, including comments as to whether the plan was sufficient and/or appropriate. The companies would then continue or begin their harm reduction work.
Harms would be measured again after a sufficient time has passed for harm reduction measures to have taken effect, repeating the initial process. This establishes the first progress baseline.
The progress baseline will reveal four likely outcomes:
- harms have risen;
- harms have stayed the same;
- harms have fallen; or
- new harms have occurred.
If harms surveyed in the baseline have risen or stayed the same, the companies concerned will be required to act and plan again, taking due account of the views of victims, NGOs and the regulator. In these instances, the regulator may take the view that the duty of care is not being satisfied and, ultimately, may take enforcement action (see below). If harms have fallen then companies will reinforce this positive downward trajectory in a new plan. Companies would prepare second harm reduction reports/plans as in the previous round, but including learning from the first wave of actions, successful and unsuccessful. Companies would then implement the plans. The regulator would set an interval before the next wave of evaluation and reporting.
Well-run social media services would quickly settle down to a much lower level of harm and shift to less risky designs. This cycle of harm measurement and reduction would be repeated; as in any risk management process, participants would have to maintain constant vigilance.
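Purely as an illustration of the assessment logic described above – the four outcomes are those listed in the text, while the names and the treatment of newly identified harms are our own simplifying assumptions – each wave of the cycle might be summarised as:

```python
# Illustrative sketch of the assessment step described above, not a prescribed
# algorithm; treating newly identified harms as additions to the measurement
# template is our assumption.
def assess_wave(baseline: float, latest: float, new_harms_found: bool) -> str:
    if new_harms_found:
        return "extend the measurement template and require a plan covering the new harms"
    if latest >= baseline:
        # harms have risen or stayed the same
        return "require a revised plan; the duty of care may not be satisfied, so consider enforcement"
    # harms have fallen
    return "require a second plan reinforcing the downward trajectory"
```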
At this point we need to consider the impact of the e-Commerce Directive. As we have discussed, the e-Commerce Directive gives immunity from liability to neutral intermediaries under certain conditions. Although we are not convinced that all qualifying social media companies would be neutral intermediaries, there is a question as to whether some of the measures that might be taken as part of a harm reduction plan could mean that a qualifying company loses its immunity, which would be undesirable. There are three comments to make here:
- Not all measures that could be taken would have this effect;
- The Commission has suggested that the e-Commerce Directive be interpreted – in the context of taking down hate speech and other similarly harmful content (see its Communication of 28 September 2017) – as not meaning that providers which take proactive steps to prevent such content should be regarded as thereby assuming liability;
- After Brexit, there may be some scope for changing the immunity regime – including the chance to include a ‘good Samaritan defence’ expressly.
This harm reduction cycle is similar to the techniques used by the European Commission as it works with the social media service providers to remove violent extremist content.
Other regulatory techniques
Alongside the harm reduction cycle we would expect the regulator to employ a range of techniques derived from harm reduction practice in other areas of regulation. We draw the following from a wide range of regulatory practice rather than the narrow set of tools currently employed by the tech industry (takedown, filtering and so on). Some of these the regulator would do; others the regulator would require the companies to do. For example:
Each qualifying social media service provider could be required to:
- develop a statement of risks of harm, prominently displayed to all users when the regime is introduced, thereafter to new users, and when launching new services or features;
- provide its child protection and parental control approach, including age verification, for the regulator’s approval;
- display a rating of harm agreed with the regulator on the most prominent screen seen by users;
- work with the regulator and civil society on model standards of care in high risk areas such as suicide, self-harm, anorexia, hate crime etc; and
- provide adequate complaints handling systems with independently assessed customer satisfaction targets, and also produce a twice-yearly report on the breakdown of complaints (subject, satisfaction, numbers, handled by humans, handled by automated methods etc.) to a standard set by the regulator.
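As a purely hypothetical sketch of the kind of complaints breakdown that could be standardised – the categories come from the list above, but the names, fields and consistency check are our assumptions – a single line of such a twice-yearly report might look like:

```python
# Hypothetical sketch: the regulator would set the actual reporting standard;
# the field names and the consistency check are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ComplaintsReportLine:
    subject: str                 # e.g. harassment, self-harm content, impersonation
    complaints_received: int
    handled_by_humans: int
    handled_automatically: int
    satisfaction_score: float    # from the independently assessed satisfaction measure


def totals_consistent(line: ComplaintsReportLine) -> bool:
    """Check that every complaint is accounted for by one handling route or the other (our assumption)."""
    return line.handled_by_humans + line.handled_automatically == line.complaints_received
```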
The regulator would:
- publish model policies on user sanctions for harmful behaviour, sharing research from the companies and independent research;
- set standards for, and monitor, response times to queries (as the European Commission does on extremist content through mystery shopping);
- co-ordinate with the qualifying companies on training and awareness for the companies’ staff on harms;
- contact social media service companies that do not qualify for this regime, to see whether regulated problems move elsewhere and to spread good practice on harm reduction;
- publish a forward-look at non-qualifying social media services brought to the regulator’s attention that might qualify in future;
- support research into online harms – both funding its own research and co-ordinating work of others;
- establish a reference/advisory panel to provide external advice to the regulator – the panel might comprise civil society groups, people who have been victims of harm, free speech groups; and
- maintain an independent appeals panel.
Consumer redress
We note the many complaints from individuals that social media services companies do not deal well with complaints. The most recent high-profile example is Martin Lewis’s case against Facebook. At the very least, qualifying companies should have internal mechanisms for redress that meet standards set by an outside body covering simplicity (as few steps as possible), speed, clarity and transparency. We would establish a body or mechanism to improve the handling of individual complaints, or legislate to make the service providers do so. There are a number of routes which require further consideration: one might be an ombudsman service, commonly used with utility companies although not with great citizen satisfaction; another might be a binding arbitration process; or possibly both. We would welcome views to the address below.
Publishing performance data (specifically in relation to complaints handling) to a regulatory standard would reveal how well the services are working. We wish to ensure that an individual’s right to go to court, which makes the duty of care more effective, is not diluted, but we recognise that going to court is unaffordable for many. None of the above would remove an individual’s right to go to court, or to the police if they felt a crime had been committed.
Sanctions and compliance
Some of the qualifying social media services will be amongst the world’s biggest companies. In our view the companies will want to take part in an effective harm reduction regime and comply with the law. The companies’ duty is to their shareholders – in many ways they require regulation to make serious adjustments to their business for the benefit of wider society. The scale at which these companies operate means that a proportionate sanctions regime is required. We bear in mind the Legal Services Board (2014) paper on Regulatory Sanctions and Appeals processes:
‘if a regulator has insufficient powers and sanctions it is unlikely to incentivise behavioural change in those who are tempted to breach regulators’ requirements.’
Throughout discussion of sanctions there is a tension with freedom of speech. The companies are substantial vectors for free speech, although by no means exclusive ones. The state and its actors must take great care not to be seen to be penalising free speech unless that speech infringes the rights of others not to be harmed or to speak themselves. The sanctions regime should penalise bad processes that lead to harm.
All processes leading to the imposition of sanctions should be transparent and subject to a civil standard of proof. By targeting the largest companies, all of which are equipped to code and recode their platforms at some speed, we do not feel that a defence of ‘the problem is too big’ is adequate. There may be a case for some statutory defences and we would welcome views as to what they might be.
Sanctions would include:
- Administrative fines in line with the parameters established through the Data Bill regime of up to €20 million, or 4% of annual global turnover – whichever is higher (see the sketch after this list).
- Enforcement notices (as used in data protection, and health and safety) – in extreme circumstances, a notice to a company to stop it doing something. Breach of an enforcement notice could lead to substantial fines.
- Enforceable undertakings where the companies agree to do something to reduce harm.
- Adverse publicity orders – the company is required to display a message detailing its offence on the screen most visible to its users. A study on the impact of reputational damage for financial services companies that commit offences in the UK found it to be nine times the impact of the fine.
- Forms of restorative justice – where victims sit down with company directors and tell their stories face to face.
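For a sense of scale, here is a minimal sketch of the ‘whichever is higher’ fine ceiling mentioned above, assuming the Data Bill parameters quoted; the turnover figure is invented purely for illustration:

```python
# Illustrative only: maximum possible fine under a ceiling of 20 million euros
# or 4% of annual global turnover, whichever is higher.
def maximum_fine_eur(annual_global_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_global_turnover_eur)


# For a hypothetical provider with 40bn euros of annual global turnover, the
# ceiling would be 4% of turnover (1.6bn euros) rather than the 20m euro floor.
print(maximum_fine_eur(40_000_000_000))  # 1600000000.0
```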
Sanctions for exceptional harm
The scale at which some of the qualifying social media services operate is such that there is the potential for exceptional harm. Take a hypothetical example: a social media service is exploited to provoke a riot in which people are severely injured or die and widespread economic damage is caused; the regulator had warned about harmful design features in the service, those flaws went uncorrected, and the instigators or spreaders of insurrection exploited those features, deliberately or accidentally. Or sexual harm occurs to hundreds of young people because of the repeated failure of a social media company to provide parental controls or age verification in a teen video service. Are fines enough, or are more severe sanctions required, as seen elsewhere in regulation?
In extreme cases should there be a power to send a social media services company director to prison or to turn off the service? Regulation of health and safety in the UK allows the regulator, in extreme circumstances which often involve a death or repeated, persistent breaches, to seek a custodial sentence for a director. The Digital Economy Act contains a power (Section 23) for the age verification regulator to issue a notice to internet service providers to block a website in the UK. In the USA the new FOSTA-SESTA package apparently provides for criminal penalties (including, we think, arrest) for internet companies that facilitate sex trafficking. This led swiftly to the closure of dating services and to a sex worker forum having its DNS service withdrawn in its entirety.
None of these powers sit well with the protection of free speech on what are generalist platforms – withdrawing the whole service due to harmful behaviour in one corner of it deprives innocent users of their speech on the platform. However, the scale of social media services means that acute, large-scale harm can arise that would be penalised with gaol elsewhere in society. Further debate is needed.
About this blog post
This blog is the sixth in a programme of work on a proposed new regulatory framework to reduce the harm occurring on and facilitated by social media services. The authors William Perrin and Lorna Woods have vast experience in regulation and free speech issues. William has worked on technology policy since the 1990s, was a driving force behind the creation of OFCOM and worked on regulatory regimes in many economic and social sectors while working in the UK government’s Cabinet Office. Lorna is Professor of Internet Law at University of Essex, an EU national expert on regulation in the TMT sector, and was a solicitor in private practice specialising in telecoms, media and technology law. The blog posts form part of a proposal to Carnegie UK Trust and will culminate in a report later in the Spring.