Addressing the “infodemic” through a focus on online system design
- by Professor Lorna Woods, University of Essex, William Perrin, Trustee, Carnegie UK Trust and Maeve Walsh, Carnegie Associate
- 14 October 2020
- 12 minute read
The design of online platforms and services shapes the information environment that users experience. That principle has underpinned the development of our proposal for a statutory duty of care for online harm reduction since its inception in 2018.
Design choices affect the content posted and the way information flows across communications platforms – including, but not limited to, recommender algorithms. A regulatory system built on individual rules tackling specific issues (e.g. the role of bots in spreading misinformation) might quickly become outdated, both as regards the technologies and services available and the problems they present. By contrast, an overarching duty on operators to ensure, so far as possible, that their services are ‘safe by design’ is an ongoing obligation directed at outcomes. It bites at the system level, rather than at the level of individual pieces of content. A systems-based approach is neutral as to the topics of content and, to a large extent, future-proofed: the concept of harm remains consistent regardless of the technology or service that might cause it as the state of the art advances.
As Will Perrin argued in a previous blog post, the scale and prevalence of misinformation and disinformation – and the consequential harms to public health and critical infrastructure that we have seen during the Covid-19 pandemic – could have been reduced had a systemic duty of care, enforced by an independent regulator, been in place. As the UK government puts the finishing touches to its Online Harms proposals, the debate about the balance between reducing harm to users online and protecting free speech is intensifying. This article sets out a means to navigate that tension, protecting users from the multiple harms that arise from the spread of misinformation and disinformation while also protecting the individual’s right to speak. (Further discussion of the duty of care and fundamental freedoms can be found in this paper by Professor Lorna Woods.)
The importance of community standards – and their enforcement
Direct content regulation in the context of social media is problematic for a range of reasons, including the sheer volume of content and the speed with which it is uploaded. Assessing content requires an understanding of context, which makes it very difficult to develop general rules that apply across different countries. Moreover, specifying which content is acceptable and which is not goes to the heart of freedom of expression and raises concerns about the risk of politically motivated suppression of speech.
Regulatory obligations that focus on the underlying system – the extent to which it has been designed with an awareness of the risks posed by design features and to allow user control – mitigate these concerns. Such an approach could allow the development (within the space allowed by the law) of differently calibrated communities. So-called ‘community standards’ are therefore important; however, in our view oversight of their design and enforcement should be part of the regulatory system, not a separate, non-statutory expectation on platforms. (Government Ministers have recently seemed to suggest that much of the “legal but harmful” category of harms could be dealt with through companies “enforcing their own terms and conditions”, outwith the statutory duty of care.) In addition to concerns about the impact of design features on content and information flow, rules focussing on the underlying system should take into account the following aspects:
- Community standards should be clear, with an explanation of what those standards mean (rather than just legalese or brief statements); the language used should be appropriate for the user group (this is particularly relevant for services that might be used by children, bearing in mind the different stages of children’s development).
- Community standards should be upfront in platform design (not buried at the bottom of the page after lots of scrolling, ending in a click-through).
- Community standards should be enforced, with visibility and transparency around that enforcement. Requirements should be in place for the resourcing of user complaints and enforcement of standards (perhaps with a percentage of revenue as a benchmark), together with obligations to demonstrate that those resources are provided and that staff receive relevant training. The Carnegie proposal envisaged that the operation of the complaints and redress mechanisms should be part of an operator’s reporting obligations. This reporting should be at a reasonably granular level so that inequality or even discrimination in the system (in terms of whose complaints are taken seriously, or the types of complaint that are responded to swiftly) can be identified and tackled, as illustrated in the sketch below.
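To make the idea of granular complaints reporting concrete, here is a minimal sketch in Python; the field names and groupings are illustrative assumptions of our own, not part of the proposal. It simply aggregates complaint outcomes by complainant group so that unequal treatment could be surfaced.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Complaint:
    """One user complaint record (field names are illustrative only)."""
    complainant_group: str    # e.g. a self-declared demographic or protected characteristic
    category: str             # e.g. "harassment", "misinformation"
    upheld: bool              # was the complaint acted on?
    hours_to_response: float  # time taken to respond


def disparity_report(complaints: list[Complaint]) -> dict[str, dict[str, float]]:
    """Aggregate uphold rates and average response times per complainant group,
    so that inequality in the handling of complaints can be identified."""
    by_group: dict[str, list[Complaint]] = defaultdict(list)
    for c in complaints:
        by_group[c.complainant_group].append(c)

    report = {}
    for group, items in by_group.items():
        upheld = sum(1 for c in items if c.upheld)
        report[group] = {
            "complaints": len(items),
            "uphold_rate": upheld / len(items),
            "avg_hours_to_response": sum(c.hours_to_response for c in items) / len(items),
        }
    return report
```

A regulator could then compare uphold rates and response times across groups, rather than relying on a single aggregate figure.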
Platform design
The impact of platform design on users’ choices, as well as on the flow of information across platforms, is central to the duty of care proposal. The connection between design and content can be seen at the following stages, though some issues arise at more than one of them (a sketch of a stage-by-stage risk register follows the list):
- user posting – this concerns sign-in features (the necessity/desirability of user or age verification, but also whether private groups or encryption are deployed); augmented reality features (e.g. filters and overlays, such as plastic surgery filters); the ease or difficulty of embedding content from other platforms; and incentives to post clickbait/the impact of metrics
- discovery and navigation – this includes recommender algorithms, content curation/personalisation features as well as push notifications; it also includes ex ante moderation (especially that carried out automatically)
- advertisers – does the platform carry out any KYC (“know your client”) checks on advertisers; what ads does it permit? How are audiences segmented (e.g. what controls are there around permitted groupings/topics – are any segments impermissible or undesirable?)
- recipient user – layout of the page (which information feeds are prioritised); tools for engaging with content (e.g. likes/retweets/upvotes) or for controlling/blocking content – are such controls usable and prominent; the ability to forward content to individuals or large groups
- complaints – how easy to use and accessible is the system; is there a need for appeals; does the complaints system work as a reasonable user might expect it to (taking into account, where services are used by children, their different developmental stages)? To what extent does the platform appreciate the position of a victim of harm, bearing in mind that different groups may have differing experiences?
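As a purely illustrative sketch of how these stages might be tied together, a risk register keyed by each stage could look something like the following; the features, risks and mitigations listed are examples of our own, not a mandated taxonomy.

```python
# A minimal, illustrative risk register keyed by the stages listed above.
# Every entry here is a hypothetical example, not a prescribed requirement.
DESIGN_RISK_REGISTER = {
    "user posting": [
        {"feature": "engagement metrics on posts",
         "risk": "incentivises clickbait and sensational claims",
         "mitigation": "de-emphasise counts; test changes with at-risk groups"},
    ],
    "discovery and navigation": [
        {"feature": "recommender algorithm",
         "risk": "amplifies emotive or extreme content",
         "mitigation": "audit ranking signals; cap amplification of unverified sources"},
    ],
    "advertisers": [
        {"feature": "audience segmentation",
         "risk": "micro-targeting of susceptible groups with misinformation",
         "mitigation": "restrict high-risk interest categories; due diligence on advertisers"},
    ],
    "recipient user": [
        {"feature": "one-tap forwarding to large groups",
         "risk": "rapid spread of misinformation",
         "mitigation": "forwarding limits; friction prompts before sharing"},
    ],
    "complaints": [
        {"feature": "automated triage of complaints",
         "risk": "some groups' complaints systematically deprioritised",
         "mitigation": "granular outcome reporting; human review of disputed decisions"},
    ],
}
```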
The aim of the duty of care approach is not to say that any one design feature is prohibited or mandatory, but rather that the platform should assess those features for risk of harm (as set out in the underpinning statute) and take appropriate steps to mitigate those risks. For example, with regard to encryption, mitigating features might relate to the identification of users, or limits on how many users can be in a group using encryption. Twitter made a wide range of design changes to reduce harm during 2019 and 2020, including a raft of specific measures in advance of the US Presidential election. In terms of advertising, some form of due diligence as to who is using the system and for what ends may be helpful, as well as some form of risk assessment of the categories of audience segmentation. Is there a risk, for example, in segmenting the audience by reference to its members’ interest in conspiracy theories/alternative facts, or in allowing new parents to be targeted with disproven claims about a link between the MMR vaccine and autism? The availability of micro-targeting itself should be assessed for its risks. The process surrounding risk assessment and mitigation should be documented; in general, the onus should be on the platforms to demonstrate compliance.
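To illustrate one such documented assessment, here is a hypothetical sketch in which a platform screens a proposed ad audience segment against interest categories it has itself assessed as high-risk, recording each decision so that compliance could later be demonstrated. The category names, thresholds and logic are assumptions for illustration only, not anything mandated by the proposal.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical interest categories a platform might assess as high-risk for
# misinformation targeting (e.g. anti-vaccination content aimed at new parents).
HIGH_RISK_INTERESTS = {"conspiracy theories", "alternative health", "anti-vaccination"}


@dataclass
class SegmentAssessment:
    """A documented decision about a proposed audience segment."""
    segment_interests: set
    permitted: bool
    rationale: str
    assessed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def assess_segment(interests: set) -> SegmentAssessment:
    """Assess a proposed ad audience segment and record the outcome,
    so the platform can later demonstrate how the decision was reached."""
    flagged = interests & HIGH_RISK_INTERESTS
    if flagged:
        return SegmentAssessment(
            interests, False,
            f"blocked: overlaps high-risk categories {sorted(flagged)}")
    return SegmentAssessment(interests, True, "no high-risk overlap identified")


# Example: targeting new parents alongside anti-vaccination interests is refused.
print(assess_segment({"new parents", "anti-vaccination"}))
```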
In the context of the pandemic, some platforms have sought to emphasise reliable sources of information. While this is a welcome step, it is questionable whether it addresses the underlying issues that lead recommender algorithms and similar features to prioritise increasingly extreme or emotive content in general. It is a piecemeal response, based on individual categories of content rather than a fundamental systems check.
Private messaging
While there may be other policy concerns driving decisions about the scope of the forthcoming online harms regulation with regard to the inclusion or otherwise of private messaging, one particular issue deserves focus: privacy. Social media in particular have blurred the boundary between public expression – communication to, or intended to be viewed by, a large audience – and private communication (historically, one-to-one communication, typically by letter or telephone). Private communications have typically been protected from state intrusion, whether under international human rights instruments or national constitutions. This protection should not be defined by reference to the technology alone (so the fact that a service is not, as a matter of domestic law, a telephony service should not be conclusive as to the scope of privacy). While concerns about state access are less prevalent for posts that are open to view generally, questions remain about closed communication, that is, communication within a particular group.
Yet messaging services and private groups raise real concerns about some of the most serious harms, especially when they are encrypted. Many governments, of course, have communications interception regimes with judicial and/or parliamentary oversight, although the operation of these regimes is frequently contested. It has become clear that messaging services have gone beyond one-to-one communications and small groups; many platforms allow very large groups. The size of these groups suggests that the communication mediated via the service is neither private nor confidential. Other characteristics also indicate the non-private nature of the communication, notably the growth of public groups, the sharing of group links, and browsers and search apps for groups. In our view, services that enable the creation of public and/or large groups – where they can be used for multiway communications – should fall within the regulatory regime. Any risk assessment should take into account the specificities of this form of service. In this context the encrypted nature of a messaging service is a key risk factor; counter-measures could include user identification, limits on group size, limits on how material may be forwarded or searched, limits on bot membership of groups, or limits on the number of groups one account may join.
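A minimal sketch of how such counter-measures might be expressed as configurable limits follows. The thresholds, names and checks are hypothetical and are not drawn from WhatsApp or any other real service.

```python
from dataclasses import dataclass


@dataclass
class MessagingLimits:
    """Illustrative counter-measures for an encrypted messaging service.
    All thresholds are hypothetical assumptions, not real platform policy."""
    max_group_size: int = 256
    max_forwards_per_message: int = 5
    max_bots_per_group: int = 0
    max_groups_per_account: int = 64


def can_forward(forward_count: int, limits: MessagingLimits) -> bool:
    """A forwarding-limit style check: refuse further forwarding once a
    message has already been forwarded the permitted number of times."""
    return forward_count < limits.max_forwards_per_message


def can_join_group(group_size: int, is_bot: bool, bots_in_group: int,
                   groups_for_account: int, limits: MessagingLimits) -> bool:
    """Apply the group-level counter-measures described above."""
    if group_size >= limits.max_group_size:
        return False
    if is_bot and bots_in_group >= limits.max_bots_per_group:
        return False
    if groups_for_account >= limits.max_groups_per_account:
        return False
    return True
```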
A further question arises with regard to messaging services (though it is relevant to other platforms as well): the responsibility for features provided by third-party apps. In particular, third parties have provided apps that allow users to search private groups, to find membership codes for private groups, and the like. Should these apps be the responsibility of the platform provider (this seems unfair, save where lax security may engage other responsibilities), or should the third-party apps themselves be within scope? Conversely, where a platform chooses to incorporate a feature using a package from a third party, the service provider should take responsibility for that feature, defects and all.
Transparency
Currently, it is hard to assess the effectiveness of platform operators’ attempts to ensure safety for their users. Information is often not available and, when it is, it covers matters chosen by the operator in a format chosen by the operator, making assessment and comparison between platforms difficult if not impossible. In the context of the current pandemic a number of social media companies have taken some steps to limit misinformation and disinformation. For example, Pinterest decided last year not to host anti-vax material on its platform; its very clear community guidelines do not allow content that might have “immediate and detrimental” effects on health or public safety, which allowed it easily to extend them to cover searches for Covid-19 and to limit the results to material from authoritative sources. WhatsApp’s “velocity limiter” reduced the number of times a message can be forwarded, which, it claims, has led to a 70% reduction in “highly forwarded” messages on its service. There is, however, no way to investigate such a claim.
More information is therefore an essential part of ensuring platforms take responsibility for their products, though on its own it is insufficient. Our 2019 full report suggested that, in general, each relevant operator should be required to:
- develop a statement of assessed risks of harm, prominently displayed to all users when the regime is introduced and thereafter to new users, and when launching new services or features;
- provide its child protection and parental control approach, including age verification, for the regulator’s approval;
- develop an internal review system for risk assessment of new services, new tools or significant revisions of services prior to their deployment (so that risks are addressed prior to launch, or very risky services are not launched at all) – and document this;
- develop a triage process for emergent problems (the detail of the problem may be unknown, but it is fairly certain that new problems will arise, as the issue of misinformation and disinformation related to Covid-19 illustrates) – and document this;
- provide adequate complaints-handling systems with independently assessed customer satisfaction targets, and also produce a twice-yearly report on the breakdown of complaints (subject, satisfaction, numbers, handled by humans, handled by automated means, etc.) to a standard set by the regulator, including a self-assessment of performance (a sketch of such a breakdown follows below). The Carnegie proposal also included information-gathering powers for the regulator. As in many other regulatory fields, failure to comply should be a violation of the regime in and of itself.
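As an illustration of the kind of breakdown such a report might contain, here is a minimal sketch; the field names are assumptions of our own, and in practice the regulator would set the actual standard.

```python
from collections import Counter


def complaints_breakdown(complaints: list[dict]) -> dict:
    """Produce headline figures for a twice-yearly complaints report:
    totals by subject, how many were handled by humans versus automatically,
    and average reported satisfaction. Field names are illustrative only."""
    by_subject = Counter(c["subject"] for c in complaints)
    handled_by_human = sum(1 for c in complaints if c["handled_by"] == "human")
    satisfaction = [c["satisfaction"] for c in complaints
                    if c.get("satisfaction") is not None]
    return {
        "total": len(complaints),
        "by_subject": dict(by_subject),
        "handled_by_human": handled_by_human,
        "handled_automatically": len(complaints) - handled_by_human,
        "avg_satisfaction": (sum(satisfaction) / len(satisfaction)
                             if satisfaction else None),
    }
```

Publishing figures to a common standard of this kind would allow comparison between platforms, which the current operator-chosen formats do not.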
This is an abridged version of our submission to the Forum on Information and Democracy Working Group on Infodemics.