Regulating the future: the Online Safety Bill and the metaverse
- By Professor Lorna Woods, Professor of Internet Law, University of Essex and William Perrin, Trustee, Carnegie UK
- 4 February 2022
- 17 minute read
The metaverse – and the tech companies who create it[1] – will not escape regulation in the UK, based on our analysis of the Online Safety Bill (OSB). The metaverse means different things to different people; it is the subject of enormous hype and conjecture but may be one future of life online. The United Kingdom government wants its Online Safety Bill to make the UK the safest place in the world to be online. We discuss here whether the OSB could work, based on a central version of the metaverse – online worlds, some interlinked, where people use avatars to congregate for work and pleasure. Our broad conclusion is that the OSB regime will roughly work, with some tweaks. Or, to put it another way, technology companies can’t use the metaverse to escape regulation.
Technology companies are working out what the metaverse is and a clear definition or concept has not yet emerged. For discussion we assume some core elements:
- The metaverse is either a single online immersive environment or a series of interconnected environments.
- Environments can be wholly virtual or augmented reality overlays of the physical world.
- These environments are mainly designed and run by companies due to the resources required to build or rent computing power.
- Companies take design, systems and process decisions for what people do on their service.
- Companies have a variety of business models, from advertising to subscription to ‘crypto currencies’; these could include commerce with and between users inside the metaverse, including paid user labour.
- Metaverse access is through a computer/phone screen or through virtual reality hardware.
- At the point of access for most people there is a login or registration process as part of a commercial model, for liability management, etc.
- A person uses the metaverse through a form of avatar – like a game character.
- Within the metaverse people can interact with each other through speech, text, physical interaction of avatars or game-like elements such as props, spells, weapons etc.
- There is technical, systems and process interoperability between environments to varying degrees – people can move in some form from one environment to another but perhaps not universally.
These elements seem commonplace in current discourse – we are not forecasting that this is how the metaverse will manifest but are using this model as the basis for analysis. It is apparent even now that legal questions arise. These might be more complex than those facing us with Web 2.0 if the metaverse infrastructure turns out to be one that makes it possible for lots of different environments to co-exist and interact, by contrast to the relatively closed digital platforms that we see today.
The likely issues are legion and include: intellectual property; ownership of assets; contractual issues (and not just smart contracts), including consumer protection; worker protection; data privacy; issues arising from the use of AI; platform integrity; cyber-security; competition, especially if there is cooperation between companies or vertical integration; applicability of financial regulation; application of the criminal law; and user safety, including image rights. No doubt there are more, but in the context of Carnegie UK’s current work it is user safety that is central, though data privacy and cyber-security intersect with safety issues. To what extent do the currently planned regimes look likely to adapt to metaverse regulation?
Harm
Before considering the detail of the law, we need to consider whether harm in the metaverse would be any different from that on social media. Already, in the context of immersive games, there are questions about the harm suffered when, for example, an avatar is assaulted, and about the intensity of the physiological, emotional and psychological reaction – especially given emerging haptic technologies – by comparison with circumstances where users engage through text and image. One popular VR game required the player, in-game, to put a virtual gun to their head and ‘shoot themselves’ to progress. Groping and harassment have already been reported in the metaverse. The question of whether such acts are less problematic online becomes the more pressing in a virtual reality environment; we know already that the hazards and harms of the Internet are not felt equally between different groups in society. While some game developers included personal safety features (a personal bubble) after a player’s avatar was hijacked, this is not the same as recognising structural inequalities. There is a risk that the “gendered censorship” already found online will likewise permeate the metaverse. Moderation becomes more difficult in a 3-D environment, and we have known for a while that companies are not adequately investing in moderation as it is.
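By way of illustration, the ‘personal bubble’ mentioned above is, at its core, a simple proximity check. The sketch below is a minimal, hypothetical rendering of the idea in Python – the class names, default radius and enforcement policy are all illustrative assumptions, not any platform’s actual implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    user_id: str
    x: float
    y: float
    z: float
    bubble_radius: float = 1.2  # metres; an illustrative default, not a standard

def distance(a: Avatar, b: Avatar) -> float:
    """Euclidean distance between two avatars' positions."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

def violates_bubble(a: Avatar, b: Avatar) -> bool:
    """True if either avatar is inside the other's personal bubble.

    A service might respond by blocking further approach or by
    fading out the intruding avatar - the response is a design choice.
    """
    return distance(a, b) < max(a.bubble_radius, b.bubble_radius)

# Usage (illustrative):
# if violates_bubble(alice, bob):
#     ...  # stop movement, hide the intruding avatar, log the event
```

Even so, a geometric check of this kind addresses only physical proximity; it does nothing about verbal harassment, nor about the structural inequalities discussed above.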
Now, there are questions about whether activity in such an environment would be stored. In games at the moment, unless the player records the activity, there is no record of the audiovisual feed, so proving incidents and tracking perpetrators might become more difficult. Having said that, there are the usual questions about privacy, particularly given the sensitive nature of the data that could be captured – especially if the metaverse, with its interoperable spaces, expects people to have a single online identity. If the intention is to develop a wider virtual reality, it needs to be inclusive, and it also needs to ensure that inappropriate or unwelcome interactions are guarded against, especially as regards children. Would avatars, for example, have to reflect the user’s actual age?
There are issues around advertising. The problems around targeted advertising are (unless prohibited as an advertising technique) likely to remain, but there are other issues, including the maintenance of boundaries between editorial and commercial ‘content’, whatever that means in the metaverse. It is uncertain whether users are more or less susceptible to being misled by virtual adverts – which could allow for exaggerated or surreal adverts. Influencers on social media are already pushing, if not going beyond, the limits set by law. There are also questions about the safety of the current technologies by which users engage with virtual reality. VR equipment is better designed for men than women, and one might also question whether there is a need for child-friendly designs too. Its use can induce motion sickness. The issue of addictive design may become ever more salient if we are talking about an immersive virtual reality (as opposed to augmented reality), especially in a society dealing with the pandemic and facing global warming. Some have noted that virtual reality ‘warps’ a user’s sense of time. There may be other behavioural, cognitive and societal effects, but these may not be well understood; they may not even be foreseeable. This points to the need for investigation, not an approach that presses ahead regardless; while there are some discussions, particularly about data and security, the potential for harm does not yet seem to have been fully considered.
Scope of the draft Online Safety Bill
The first question is whether the regime (assuming the scope of the draft Bill remains the same in the final Act) could cover the metaverse. There are two aspects to scope: the persons covered (regulated services) and the sort of content which must be taken into account (i.e. it must be ‘regulated content’).
The definition in the draft OSB of regulated service is quite broad, though it is made up of several elements. So, a service falls into the regime if (cl 3):
- it is either a user-to-user service or a search service (cl 2);
- it has links with the UK (see cl 3(5) and (6)); and
- it is not exempt (see Schedule 1).
Assuming condition 2 is satisfied (which is the territorial element to the definition – though there might be difficult questions here too), would the definitions in cl 2 be broad enough to catch metaverse services or would a service provider fall outside the regime?
The definition of user-to-user service has a number of elements, all of which must be shown:
- an internet service (defined cl 133);
- content (a defined term found in cl 137) that is generated/uploaded/shared by a user (also a defined term in cl 122);
- the encountering (encounter is a defined term – cl 137) of that content by other users, which is further elaborated in cl 2(2).
A search function within a user-to-user platform falls to be assessed under the user-to-user provisions. A general search engine which picked up the metaverse as part of its search remit would still be caught as a general search engine. If a search engine specialised in metaverse sites, there is a question whether it would be caught. This depends on whether metaverse sites could be described as websites (see cl 134).
- An Internet Service
This means something that is provided across the internet, or across the internet in conjunction with an electronic communications service. The term ‘internet’ is not defined but it seems that the fact that payment or subscription is required for such a service is irrelevant. While the question of what the internet is (or is not) for the purposes of current search and social media platforms seems not of great moment (they obviously use the internet, and the term would cover all sorts of apps, websites and communications-related software), it might be more of a question for the metaverse if it uses a system not linked, for example, to IP protocols. Given that the internet protocols are the basis of addressing and are service-type neutral, it could be argued that services constituting the metaverse are unlikely to move away from this. Instead, the metaverse becomes just the next iteration of the internet, after the internet of things. Another question, however, is whether companies continue to use internet delivery (or the current IP protocols) or – for example – adapt content delivery networks to become non-internet-based delivery mechanisms, thereby falling outside the regime. Whether or not this happens, it flags the point that it is important that every element of a definition is scrutinised to check whether it is necessary and future-proof.
- User Content
Content is incredibly broad – ‘anything communicated by means of an internet service’ (cl 137(1), emphasis added) – but it also depends on the communication being done via an Internet service. Assuming an internet service to be in play, ‘content’ should surely cover experiences in virtual reality. Note, however, that because the definition of ‘user’ excludes the service provider, content provided by the service (unless shared by a user) would not fall within the definition, excluding perhaps dangerous features programmed into the environment. Third-party professional providers of services or features available to end-users would also seem to fall within the definition of user, however. Indeed, everyone except those specifically excluded by the draft bill from being a user (essentially the service provider and those working for or providing a business service to the provider), would seem to be a user.
There could well be boundary questions; for example, games seem to be in scope of the draft OSB but there has been little discussion about what is content generated by the service and what is generated by the user. The framing within which users interact is provided by the service. The personalisation of avatars or other features provided by the service could constitute a set of boundary cases. Presumably tools to aid creation would result in content generated by a user, especially if the tool was provided by a third party, whereas the use of an avatar in its pre-programmed form might not. Of course, the purpose of the metaverse is to allow users to engage with one another; presumably this would constitute content for the purposes of triggering the regime. For instance, an avatar moving in an online space following commands from the user and seen by another user of the same service would be generating content. There appears to be no de minimis threshold; note, however, the exclusions for some services where user-generated content is a small part of the service. Schedule 1 mentions services in which the ways in which users can communicate with each other are limited (e.g. via like buttons or through review functions); this seems unlikely to apply to the metaverse.
One exception that might need a little more attention is that for internal business services and tools, found in paragraph 4 of Schedule 1. Employer intranets or portals would seem to fall within this exception. The Explanatory Notes also mention that productivity and collaboration tools, content management systems, customer relationship management systems and database management software could fall within this category. Would a work-focussed metaverse – the virtual office – benefit from this exception? It seems to depend on who is deemed to be the provider of the service, as the exception applies only when the provider is the same as the business using the tools (para 4(2)(b), Sch 1), so the question of who or what a “provider” is for this purpose is key. The draft Bill states that the “provider” will be the entity that has control over who can use the service, but, where a party simply provides an “access facility” for a user-to-user service, it will not be deemed to be the “provider”. Much may depend on the facts in each instance. In an interoperable environment run by different companies, there may, however, be questions about which is the relevant provider.
- Encountering
The definition here requires only the possibility that content may be encountered, not that the content actually is encountered; there is no minimum for the potential audience beyond the hypothetical one user. “Encountered” is drafted broadly and includes reading, viewing, hearing or otherwise experiencing the content. The fact that the content might be distributed live, or made available for only a limited period of time, is irrelevant; nor does the definition require user engagement with the content. Content can be encountered if it is capable of being shared with another user by virtue of a functionality of the service; this can include private messaging channels. This aspect of the definition is likely to be easily satisfied, even in more private spaces.
In principle, it seems that the metaverse would fall within the OSB. So, are there any particular aspects about it that seem to add to the existing questions around the draft OSB?
Criminal Content
In terms of obligations to prevent and to minimise harm, some of the strongest protections in the Bill relate to harms arising from content that violates the criminal law (Clause 9 of the draft OSB). It is as yet unclear whether virtual rape or sexual assault (which have already occurred in the online gaming environment) would fall within the criminal law. The Law Commission’s recent review of communication offences shows that similar behaviours can fall either side of the line of criminality for reasons unconnected with the harm experienced (see e.g. cyber-flashing and the limitations around image-based sexual abuse). As the Australian eSafety Commissioner noted, virtual and augmented reality may open the way for new forms of image abuse – giving the example of someone faking a sexually explicit three-dimensional image or video of a real person and interacting with it, without that person’s consent.
CSAEM rightly receives particular emphasis within the OSB. Whether or not the listed types of crime cover all forms of potential abuse in virtual reality as a matter of statutory interpretation, it seems that the risks present on the current internet will likely be exacerbated in virtual reality. The use of gaming sites by sexual predators to meet and to groom children has been noted, as have risks to minors in the metaverse. The ability to hide behind an avatar or fictional character is common to virtual reality as well. It has been suggested that, in a hyper-realistic environment, children may be more easily influenced and manipulated, and younger children especially may become confused about what is real and what is not. The risk is greater given the aim of enabling people to feel closer together and the increasing availability of haptic technology, especially assuming that this technology will improve (and there is a question about the extent to which these encounters could be recorded and shared). Mechanisms for ensuring that children are not introduced to unknown adults by platforms may need to be adapted, especially if there is an interoperable set of virtual environments that are open to one another; there may, however, be rights-based concerns around the VR equivalent of a ‘real-name policy’, or the requirement for a single identity. Who has gate security for children in this scenario? Would there be, for example, the equivalent of a private account setting in the metaverse? Of course, cross-platform issues arise in the current iteration of the Internet and have not been adequately addressed; they would seem, however, more pressing in the metaverse.
Note that the current draft Bill envisages obligations on operators to take steps to minimise the prevalence and dissemination of criminal content (cl 9) and to prevent children from encountering content harmful to children (cl 10). Given that the metaverse as currently envisaged seems to emphasise real-time and casual interactions, there are questions here about how preventative mechanisms, however imperfect, designed for a system where there is some element of recorded content (e.g. the IWF’s hash list) can translate to the metaverse. This is something to be considered in the implementation phase of the regime, perhaps through codes of conduct.
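To make concrete why such mechanisms translate poorly, consider how hash-list matching works for recorded content. The Python sketch below is a simplification – production systems use perceptual hashes such as PhotoDNA rather than the exact SHA-256 matching shown, and real block lists (like the IWF’s) are not public; the file name and helper functions are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_hash_list(path: Path) -> set[str]:
    """Load a newline-separated list of known-content hashes.

    'known_hashes.txt' stands in for a curated block list; the
    actual lists used by industry are not publicly available.
    """
    return {line.strip().lower()
            for line in path.read_text().splitlines() if line.strip()}

def is_known_content(upload: Path, hash_list: set[str]) -> bool:
    """Flag an uploaded file if its hash appears on the block list."""
    return sha256_of_file(upload) in hash_list

# Usage (illustrative):
# blocked = load_hash_list(Path("known_hashes.txt"))
# if is_known_content(Path("upload.bin"), blocked):
#     ...  # quarantine and report rather than publish
```

The point is that matching of this kind presupposes a stored artefact to hash; a live, unrecorded interaction between avatars leaves no such artefact, which is why the translation of these mechanisms to the metaverse remains an open question.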
Harmful but Legal
As we and many others have argued in evidence to the Joint Committee on the Draft Online Safety Bill, Clause 11 of the Online Safety Bill provides only weak protection – the emphasis on terms of service implies that the only requirements relate to controls on user content, rather than changes through design choice. (The Joint Committee has since recommended that this Clause be removed.) The focus on content in the draft Bill also raises questions about whether platforms’ contributions to harm are adequately addressed, and this seems to be even more of an issue for the metaverse (given it will cover different sorts of activity: work/school, socialising, shopping, entertainment). Length of time spent in virtual reality might become a problem (and this is also relevant in the context of the harmful-to-children category) but does not seem to fit the notion of content particularly well. The bias affecting the design of current platforms is likely to carry over into virtual reality (though it seems some companies are now at least aware that there is a problem).
Conclusion
While we may wish that tech companies focused first on fixing the problems with their existing products before inventing new ones, there is some cause for optimism: the UK Government intends that its proposed legislation to reduce online harms will enable the regulator, Ofcom, to tackle harm in the metaverse too.
In response to a question in the House of Lords following the recent Sunday Times report on the risks emerging in the metaverse to children, the DCMS Minister Lord Parkinson said:
I read the very disturbing report in the Sunday Times to which the noble Lord referred. That is why the online safety Bill takes the approach of not being specific on certain technologies and making sure that our legislation can be future-proofed so that, as the internet continues to develop and new technologies are invented, the legislative protections for users keep pace with that. The metaverse, to which he referred, is a key example.
All the more reason for a well-designed, well-resourced Online Safety regulatory framework to be introduced and enacted as soon as possible.
[1] While the recent change in nomenclature has associated the former Facebook (now Meta) with the metaverse (and its Oculus VR technology), it is not the only company interested in this development. Gaming companies (e.g. Epic Games, Roblox, Valve) have each developed 3D worlds which users navigate via avatars. Microsoft has also announced it is developing metaverse features – AI-enabled avatars and immersive workplaces via Teams (called Mesh) are expected sometime next year; its announcement at the BUILD conference predated Facebook’s announcement (and note Microsoft has Xbox and HoloLens). Google and Apple are working on relevant technologies too. There remains the possibility that other companies might also enter this market (e.g. Magic Leap, Spatial, Disney, Nvidia); some envisage an environment specifically built on blockchain – and use of cryptocurrencies and trading in NFTs (e.g. Decentraland; The Sandbox).