Addressing the “infodemic” through a focus on online system design

  • by Professor Lorna Woods, University of Essex, William Perrin, Trustee, Carnegie UK Trust and Maeve Walsh, Carnegie Associate
  • 14 October 2020
  • 12 minute read

The design of online platforms and services shapes the information environment that users experience. That principle has underpinned the development of our proposal for a statutory duty of care for online harm reduction since its inception in 2018.

Design choices affect the content posted and the way information flows across communications platforms – including, but not limited to, recommender algorithms. A regulatory system based on individual rules tackling specific issues (e.g. the role of bots in spreading misinformation) might quickly become outdated, both as regards the technologies and services available and the problems faced. By contrast, an overarching duty on operators to ensure, so far as possible, that their services are ‘safe by design’ is an ongoing obligation that looks towards outcomes. It bites at the system level, rather than at the level of individual pieces of content. A systems-based approach is neutral as to the topics of content and, to a large extent, future-proofed: the concept of harm is consistent regardless of the technology or service that might cause it as the state of the art advances.

As Will Perrin argued in a previous blog post, the scale and prevalence of misinformation and disinformation – and the consequential harms to public health and critical infrastructure that we have seen during the Covid-19 pandemic – could have been reduced had a systemic duty of care, enforced by an independent regulator, been in place. As the UK government puts the finishing touches to its Online Harms proposals, the debate about the balance between reducing harm to users online and protecting free speech is intensifying. This article sets out a means to navigate that tension, protecting users from the multiple harms that arise from the spread of misinformation and disinformation while also protecting the individual’s right to speak. (Further discussion of the duty of care and fundamental freedoms can be found in this paper by Professor Lorna Woods.)

The importance of community standards – and their enforcement

Direct content regulation in the context of social media is problematic for a range of reasons, including the volume of data and the speed with which content is uploaded. Assessing content requires an understanding of context, which makes it very difficult to develop general rules that apply across different countries. Moreover, the specification of content as acceptable or unacceptable goes to the heart of freedom of expression and raises concerns about the risk of politically motivated suppression of speech.

Regulatory obligations which focus on the underlying system – the extent to which it has been designed with an awareness of the risks posed by design features and to allow user control – mitigate these concerns. Such an approach could allow the development (within the space allowed by the law) of differently calibrated communities. So-called ‘community standards’ are important; however, in our view oversight of their design and enforcement should be part of the regulatory system, not a separate, non-statutory expectation on platforms. (Government Ministers have recently seemed to suggest that much of the “legal but harmful” category of harms could be dealt with through companies “enforcing their own terms and conditions”, outwith the statutory duty of care.) In addition to concerns about the impact of design features on content and information flow, rules focussing on the underlying system should take into account the following aspects:

Platform design

The impact of platform design on users’ choices, as well as on the flow of information across platforms, is central to the duty of care proposal. The connection between design and content can be seen at multiple stages of a service, and some issues come up at more than one stage.

The aim of the duty of care approach is not to say that any one design feature is prohibited or mandatory, but rather that the platform should assess those features for the risk of harm (as set out in the underpinning statute) and take appropriate steps to mitigate those risks. For example, with regard to encryption, mitigating measures might relate to identification of users, or limits on how many users can be in a group using encryption. Twitter made a wide range of design changes to reduce harm during 2019 and 2020, including a raft of specific measures in advance of the US Presidential election. In terms of advertising, some form of due diligence as to who is using the system and for what ends may be helpful, as well as some form of risk assessment of the categories of audience segmentation. Is there a risk, for example, in segmenting the audience by reference to its members’ interest in conspiracy theories/alternative facts, or in allowing new parents to be targeted with disproven claims about a link between the MMR vaccine and autism? The availability of micro-targeting itself should be assessed for its risks. The process surrounding risk assessment and mitigation should be documented; in general, the onus should be on the platforms to demonstrate compliance.
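To make the “assess, mitigate, document” idea concrete, the following is a minimal sketch of how a platform might record a design-feature risk assessment so that compliance can later be demonstrated. It is an illustration only: the schema, field names and example values are our assumptions, not a format proposed in the statute or in our submission.

```python
# Minimal sketch of a documented design-feature risk assessment.
# All field names and values are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class FeatureRiskAssessment:
    feature: str                 # the design feature under review
    harms_considered: list[str]  # harms identified during the assessment
    risk_level: str              # e.g. "low" / "medium" / "high"
    mitigations: list[str]       # steps taken to reduce the risk
    reviewed_on: date = field(default_factory=date.today)


# Hypothetical entry for audience segmentation in an advertising system.
segmentation_review = FeatureRiskAssessment(
    feature="ad audience segmentation",
    harms_considered=[
        "targeting of disproven vaccine claims at new parents",
        "amplification of conspiracy-theory content",
    ],
    risk_level="high",
    mitigations=[
        "advertiser due diligence checks",
        "removal of health-misinformation interest categories",
        "minimum audience size for micro-targeted campaigns",
    ],
)

print(segmentation_review)
```

A register of records like this, kept up to date and open to the regulator, is one way the onus of demonstrating compliance could rest on the platform rather than on the regulator having to reconstruct decisions after the fact.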

In the context of the pandemic, some platforms have sought to emphasise reliable sources of information. While this is a welcome step, it is questionable whether it addresses the underlying issues that lead recommender algorithms and similar features to prioritise increasingly extreme or emotive content in general. It is a piecemeal response, based on individual categories of content rather than a fundamental systems check.

Private messaging

While there may be other policy concerns driving decisions about the scope of the forthcoming online harms regulation with regard to the inclusion or otherwise of private messaging, one particular issue deserves focus: privacy. Social media in particular have blurred the boundary between public expression – communication to, or intended to be viewed by, a large audience – and private communication (historically, one-to-one communication, typically by letter or telephone). Private communications have typically been protected from state intrusion, whether in international human rights instruments or national constitutions. This protection should not be defined by reference to the technology alone (so the fact that a service is not, as a matter of domestic law, a telephony service should not be conclusive as to the scope of privacy). While concerns about state access are less prevalent regarding posts which are open to view generally, questions remain about communication that is closed, that is, within a particular group.

Yet messaging services and private groups raise real concerns about some of the most serious harms, especially when they are encrypted. Many governments, of course, have communications interception regimes with judicial and/or parliamentary oversight, although the operation of these regimes is frequently contested. It has become clear that messaging services have gone beyond one-to-one communications and small groups; many platforms allow very large groups. The size of these groups suggests that the communication mediated via the service is neither private nor confidential. Other characteristics also indicate the non-private nature of the communication, notably the growing practice of public groups, the sharing of group links, and browsers and search apps for groups. In our view, services that enable the creation of public and/or large groups – where they can be used for multiway communication – should fall within a regulatory regime. Any risk assessment should take into account the specificities of this form of service. In this context the encrypted nature of a messaging service is a key risk factor; countermeasures could include user identification, limits on the number of users in a group, limits on how material may be forwarded or searched, limits on bot membership of groups, or limits on the number of groups which one account may join.
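The countermeasures listed above are design-level controls rather than content rules, so they can be expressed as simple service parameters. The sketch below shows what such limits might look like in practice; every threshold is a hypothetical value chosen for illustration, not a figure from the article or from any real service.

```python
# Illustrative design-level limits for an encrypted messaging service.
# The threshold values are hypothetical and purely for illustration.
MAX_GROUP_MEMBERS = 256         # cap on group size
MAX_GROUPS_PER_ACCOUNT = 50     # cap on groups one account may join
MAX_FORWARDS_PER_MESSAGE = 5    # cap on onward forwarding of a single message
ALLOW_BOT_MEMBERS = False       # whether automated accounts may join groups


def may_join_group(group_size: int, account_group_count: int, is_bot: bool) -> bool:
    """Check a join request against the illustrative limits above."""
    if is_bot and not ALLOW_BOT_MEMBERS:
        return False
    if group_size >= MAX_GROUP_MEMBERS:
        return False
    if account_group_count >= MAX_GROUPS_PER_ACCOUNT:
        return False
    return True


def may_forward(times_already_forwarded: int) -> bool:
    """Check whether a message may be forwarded again."""
    return times_already_forwarded < MAX_FORWARDS_PER_MESSAGE
```

The point is not the particular numbers but that mitigations of this kind operate on the system, and can therefore be risk assessed, documented and adjusted without anyone inspecting the content of encrypted messages.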

A further question arises with regard to messaging services (though it is also relevant to other platforms): responsibility for features provided by third-party apps. In particular, third parties have provided apps that allow users to search private groups, to find membership codes for private groups, and the like. Should these apps be the responsibility of the platform provider? That seems unfair, save where lax security may engage other responsibilities. Should the third-party apps themselves also be within scope? Conversely, where a platform chooses to incorporate a feature using a package from a third party, the service provider should take responsibility for that feature, defects and all.

Transparency

Currently, it is hard to assess the effectiveness of platform operators’ attempts to ensure safety for their users. Information is often not available and, when it is, it covers matters chosen by the operator in a format chosen by the operator, making assessment and comparison between platforms difficult if not impossible. In the context of the current pandemic, a number of social media companies have taken some steps to limit misinformation and disinformation. For example, Pinterest decided last year not to allow anti-vax material on its platform; its very clear community guidelines do not allow content that might have “immediate and detrimental” effects on health or public safety, which allowed it to extend the policy easily to searches for Covid-19 and to limit the results to material from authoritative sources. WhatsApp’s “velocity limiter” reduced the number of times messages can be forwarded, which, the company claims, has led to a 70% reduction in “highly forwarded” messages on its service. There is, however, no way to investigate such a claim.
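For readers unfamiliar with the mechanism, the general idea behind a forwarding limiter can be sketched very simply: messages that have already passed through many chats are restricted to being forwarded to fewer chats at a time. The sketch below is our illustration of that general idea, assuming a hypothetical threshold; it does not describe WhatsApp’s actual implementation, and nothing in it would let an outside observer verify the 70% figure.

```python
# Rough sketch of the general idea behind a forwarding ("velocity") limiter.
# The threshold is a hypothetical value; this is not WhatsApp's implementation.
HIGHLY_FORWARDED_THRESHOLD = 5  # hops after which a message counts as "highly forwarded"


def allowed_forward_targets(forward_hops: int, requested_targets: int) -> int:
    """Return how many chats a message may be forwarded to in a single action."""
    if forward_hops >= HIGHLY_FORWARDED_THRESHOLD:
        # Highly forwarded messages may only go to one chat at a time.
        return min(requested_targets, 1)
    # Otherwise the forwarding action is not restricted further here.
    return requested_targets
```

This is exactly the kind of system-level measure whose effectiveness could be checked if operators were required to report against consistent, externally specified metrics.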

More information is therefore an essential part of ensuring platforms take responsibility for their products, though on its own it is insufficient. Our 2019 full report set out the information that, in general, each relevant operator could be required to provide.

This is an abridged version of our submission to the Forum on Information and Democracy Working Group on Infodemics.