In a recent BBC News investigation, a reporter posing as a 13-year-old girl in a virtual reality (VR) app was exposed to sexual content, racist insults and a rape threat. The app in question, VRChat, is an interactive platform where users can create "rooms" within which people interact (in the form of avatars). The reporter saw avatars simulating sex, and was propositioned by numerous men.
The results of this investigation have led to warnings from child safety charities, including the National Society for the Prevention of Cruelty to Children (NSPCC), about the dangers children face in the metaverse. The metaverse refers to a network of VR worlds which Meta (formerly Facebook) has positioned as a future version of the internet, eventually allowing us to engage across education, work and social contexts.
The NSPCC appears to put the blame and the responsibility on technology companies, arguing they need to do more to safeguard children's safety in these online spaces. While I agree platforms could be doing more, they can't tackle this problem alone.
Reading about the BBC investigation, I felt a sense of déjà vu. I was surprised that anyone working in online safeguarding would be shocked, as the NSPCC said it was, by the reporter's experiences. Ten years ago, well before we'd heard the word "metaverse", similar stories emerged around earlier platforms.
These avatar-based platforms, where users interact in virtual spaces via a text-based chat function, were actually designed for children. In both cases, adults posing as children for investigative purposes were exposed to sexually explicit interactions.
The demands that companies do more to prevent these incidents have been around for a long time. We are locked in a cycle of new technology, emerging risks and moral panic. Yet nothing changes.
It's a tricky area
We've seen demands for companies to put age verification measures in place to prevent young people accessing inappropriate services. This has included proposals to require verification that a user is aged 13 or above for some services, or proof that a user is over 18 for others.
If age verification were easy, it would have been widely adopted by now. If anyone can think of a way for all 13-year-olds to prove their age online reliably, without data privacy concerns, and in a way that's easy for platforms to implement, there are many tech companies that would like to talk to them.
Similarly, policing the communication that occurs on these platforms won't be achieved through an algorithm. Artificial intelligence is nowhere near clever enough to intercept real-time audio streams and determine, with accuracy, whether someone is being offensive. And while there might be some scope for human moderation, monitoring all real-time online spaces would be impossibly resource-intensive.
The reality is that platforms already provide a lot of tools to tackle harassment and abuse. The trouble is that users are not always aware of them, don't believe they will work, or don't want to use them. VRChat, for example, provides tools for blocking abusive users, and the means to report them, which might ultimately result in the offending user having their account removed.
We cannot all sit back and shout, "my child has been upset by something online, who is going to stop this from happening?". We need to shift our focus from the notion of "evil big tech", which really isn't helpful, to looking at the role other stakeholders could play too.
If parents are going to buy their children VR headsets, they need to have a look at safety features. It鈥檚 often possible to monitor activity by having the young person cast what is on their headset onto the family TV or another screen. Parents could also check out the apps and games young people are interacting with prior to allowing their children to use them.
What young people think
I've spent the last two decades researching online safeguarding, discussing concerns around online harms with young people, and working with a variety of stakeholders on how we might better help young people. I rarely hear young people themselves demand that the government bring big tech companies to heel.
They do, however, regularly call for better education and support from adults in tackling the potential online harms they might face. For example, young people say they want discussion in the classroom with informed teachers who can manage the debates that arise, and to whom they can ask questions without being told "don't ask questions like that".
However, without national coordination, I can sympathise with any teacher not wishing to risk complaint from, for example, outraged parents, as a result of holding a discussion on such sensitive topics.
I note the UK government's Online Safety Bill, the legislation that policymakers claim will prevent online harms, contains just two mentions of the word "education" in 145 pages.
We all have a role to play in supporting young people as they navigate online spaces. Prevention has been the key message, but this approach isn't working. Young people are calling for education, delivered by people who understand the issues. This is not something that can be achieved by the platforms alone.
Professor of IT Ethics and Digital Rights
This article is republished from The Conversation under a Creative Commons license. Read the original article.