After just one hour in the metaverse, one researcher said her avatar was sexually assaulted by another user while others watched.
The researcher was evaluating online behavior in the metaverse for the non-profit organization SumOfUs. She said that within an hour of entering the Horizon Worlds platform, other users had urged her to disable the personal safety setting that gives users a four-foot bubble of personal space; her avatar was then assaulted.
You can replay the researcher’s experience here (content warning: language and sexual content).
The final report from SumOfUs, titled “Metaverse: Another cesspool of toxic content,” described user accounts of harassment, violence and lax moderation on Meta’s Horizon Worlds virtual reality platform.
The report detailed experiences of virtual rape, threats, verbal abuse, racial and homophobic slurs, and gun violence in the metaverse.
One account, published on Medium by psychotherapist and metaverse researcher Nina Jane Patel, described how it took just 60 seconds for her avatar to be harassed, groped and “gang-raped” by a group of male avatars.
“I wish I was more surprised, but based on what we’ve seen on Facebook over the last few years, it wasn’t that surprising,” said SumOfUs campaign director Rewan Al-Haddad. “But it’s really disappointing that it’s reached this level in the metaverse.”
While bad behavior on the Internet is nothing new, social virtual reality differs from ordinary social media because the user is immersed in the virtual world.
The panoramic view provided by the VR headset, the use of sound, and even hand controllers that simulate touch make it a multi-sensory experience. This creates a feeling of “being there” and reduces the separation between the user’s virtual body and their physical body, according to a report from Harvard’s Carr Center.
This makes online experiences of hate, violence and sexual harassment feel more real in virtual reality.
For example, when other avatars touch yours in Horizon Worlds, the hand controllers vibrate. For the SumOfUs researcher, this feature created “a very strange experience of being sexually assaulted, being very confused about what is happening, and then also experiencing this physical reaction,” Al-Haddad said.
Moderating online activity on any platform is difficult. It is even harder when it has to be done in real time, and when moderators must judge not only posted content but also user behavior.
To help with reporting, Meta has implemented a monitoring and recording policy for its VR platforms. Recordings of a user’s most recent interactions are automatically stored on their Oculus headset. If a user wants to report bad behavior, the recording of the interaction is sent to Meta so the company’s “safety specialists” can review it and determine whether it violates its conduct policies.
But the criteria Meta staff use to decide what qualifies as a violation are unclear. BuzzFeed reporters created an experimental world called the “Qniverse,” a room in Horizon Worlds filled with content banned on Meta’s Facebook and Instagram, including conspiracy theories and misinformation. When they reported the world, they were told the content did not violate community guidelines. Only when the BuzzFeed team contacted Meta’s communications department directly was the Qniverse taken down.
Meta wrote in a blog post that it is committed to addressing these issues and has invested in finding ways to moderate content that keep people safe while respecting users’ privacy.
SumOfUs and its partner organizations are demanding greater oversight of the company and its moderation practices, including a human rights impact assessment of Meta’s Horizon Worlds platform, a request the company has rejected.
Al-Haddad said she hopes the research will help bring regulation to the technology industry, and pointed to the European Union’s recently passed Digital Services Act as a “huge step forward” in regulating these companies.