According to a recent Bloomberg Intelligence report, the metaverse is an $800 billion market. People still argue about what the metaverse actually is, but with so much money and curiosity surrounding it, it has everyone talking.
Undoubtedly, AI will play a huge role in the metaverse, especially as we communicate with others. While we’ll be more connected than ever, AI untethered to any government, standard or ethical code can have diabolical implications. As former Google CEO Eric Schmidt asked recently, “Who gets to set the rules?”
Understanding the implications of AI
Because AI algorithms are built by people, they can absorb the thought patterns and biases of their creators, and those biases can then multiply at scale. We’ve seen, for example, how AI can encode gender bias, how it can give larger credit card limits to men than to women, and how certain ethnicities are more likely to face unfair treatment. To create a flourishing and more equitable metaverse, the dark AI patterns that create and perpetuate bias need to be addressed. But who gets to decide? And how can humans avoid bias?
One way to mitigate this “unchecked AI” is to develop ethical standards across all organizations. In our view, dark AI patterns can be invasive. Most AI is developed without ethical oversight, and this must change in the metaverse.
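To make that kind of oversight concrete, one simple audit a team can run is a demographic-parity check: compare a model’s approval rates across groups and flag large gaps. The sketch below is purely illustrative; the data and names are hypothetical, and real audits use established fairness toolkits and more nuanced metrics.

```python
# Minimal, hypothetical sketch of a demographic-parity audit for a
# credit model. The data and names are invented for illustration;
# real audits use established fairness toolkits and richer metrics.

def approval_rate(decisions, groups, label):
    """Share of applicants in the given group whose application was approved."""
    in_group = [d for d, g in zip(decisions, groups) if g == label]
    return sum(in_group) / len(in_group) if in_group else 0.0

# Hypothetical model outputs: 1 = approved, 0 = denied.
decisions = [1, 1, 0, 1, 0, 1, 1, 0]
gender    = ["m", "m", "f", "m", "f", "f", "m", "f"]

rate_m = approval_rate(decisions, gender, "m")
rate_f = approval_rate(decisions, gender, "f")

# Demographic parity asks whether approval rates are roughly equal
# across groups; a large gap is a red flag worth investigating.
print(f"approval rate (men):   {rate_m:.2f}")
print(f"approval rate (women): {rate_f:.2f}")
print(f"parity gap:            {abs(rate_m - rate_f):.2f}")
```

A gap near zero doesn’t prove a model is fair, but a large one, as in the credit-limit example above, is exactly the kind of signal an ethics review should catch before deployment.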
Some people enjoy the idea of smashing into parked cars on an insanely fast motorcycle while running from the cops after robbing a bank. Normally, when a motorcycle smashes into a car, there’s only one loser, but in that world you’re indestructible! This is the experience that made the GTA video games (GTA V in particular) so popular. So many of us built an empire there and lived different lives inside the game. None of us wants to be a bank robber in the real world; we just liked the escapism of being the bad guy for once. And then you lose your login credentials to some hacker kid, and it’s a nightmare, because even though it’s a world of make-believe, it’s one we worked hard at!
What’s this got to do with the metaverse? Just like the games, the metaverse isn’t “real”, but it will take real effort and time to build and nurture a presence in it. It’s not just 1s and 0s being processed by a GPU and shown to you; it’s something we will have an emotional attachment to and will have invested time and real money into, whenever it gets here. And we are going to need ways to protect the assets we own or create! We will need laws!
PRIVACY IN THE METAVERSE
A simple browser cookie reveals so much about us that it makes us uncomfortable. As it is, technology is spooky, with ads suddenly appearing for things we swear we were just talking about with friends or family. The way tech has invaded our lives and eroded our privacy is already unsettling; imagine what happens when our entire lives are lived digitally.
Will it be the end of our privacy when we use a system that allows us to “live” online? Imagine walking through a virtual world: everything you choose to focus on, what you look at, zoom in on, stop in front of, and interact with, all of it becomes data about your behaviours. If you think it’s spooky how well algorithms know you now, prepare to be terrified soon!
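To see why this goes beyond a browser cookie, consider what a single “attention event” in a virtual world might look like. The record below is a purely hypothetical sketch, not any real platform’s telemetry format; every field name here is an assumption.

```python
# Purely hypothetical sketch of VR attention telemetry; this is not
# any real platform's schema, just an illustration of the idea.
import json
import time

def gaze_event(user_id, object_id, dwell_seconds, zoomed_in, interacted):
    """One record describing what a user paid attention to, and for how long."""
    return {
        "user": user_id,
        "object": object_id,            # e.g. a storefront, ad, or avatar
        "timestamp": time.time(),
        "dwell_seconds": dwell_seconds, # how long the gaze lingered
        "zoomed_in": zoomed_in,
        "interacted": interacted,
    }

# Every pause, glance, and zoom becomes a row in someone's database.
event = gaze_event("user-123", "virtual-billboard-42", 4.7, True, False)
print(json.dumps(event, indent=2))
```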
If one or two of the big tech companies essentially own the metaverse, they will also own all of the data about you (not too different from today, thus far), and the difference between what’s real and what’s metaverse might start to blur. This is something we will explore in more detail in the next story, but in terms of privacy, that’s a bad, bad thing.
Our only hope is legal recourse: world governments hopefully siding with us instead of with corporations (unlikely) to mandate strict opt-in policies and personal control over our own data and its use. Let’s hope it doesn’t come too late.
Last week, Meta (the umbrella company formerly known as Facebook) opened up access to its virtual-reality social media platform, Horizon Worlds. Early descriptions of the platform make it seem fun and wholesome, drawing comparisons to Minecraft. In Horizon Worlds, up to 20 avatars can get together at a time to explore, hang out, and build within the virtual space.
But not everything has been warm and fuzzy. According to Meta, on November 26, a beta tester reported something deeply troubling: she had been groped by a stranger on Horizon Worlds. On December 1, Meta revealed that she’d posted her experience in the Horizon Worlds beta testing group on Facebook.
Meta’s internal review of the incident found that the beta tester should have used a tool called “Safe Zone” that’s part of a suite of safety features built into Horizon Worlds. Safe Zone is a protective bubble users can activate when feeling threatened. Within it, no one can touch them, talk to them, or interact in any way until they signal that they would like the Safe Zone lifted.
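Meta hasn’t published how Safe Zone is implemented, but the general idea of a personal boundary can be modeled as a distance check that suppresses interactions. The sketch below is a hypothetical illustration of that idea, not Horizon Worlds’ actual code.

```python
# Hypothetical sketch of a "safe zone"-style personal boundary.
# This illustrates the general idea, not Horizon Worlds' actual code.
import math

class Avatar:
    def __init__(self, name, x, y, safe_zone=False, radius=1.5):
        self.name = name
        self.x, self.y = x, y
        self.safe_zone = safe_zone  # user-activated protective bubble
        self.radius = radius        # bubble radius in metres (assumed)

def can_interact(actor, target):
    """Block any interaction that would reach inside an active bubble."""
    if not target.safe_zone:
        return True
    distance = math.hypot(actor.x - target.x, actor.y - target.y)
    return distance > target.radius

alice = Avatar("alice", 0.0, 0.0, safe_zone=True)
bob = Avatar("bob", 1.0, 0.0)

print(can_interact(bob, alice))  # False: bob is inside alice's bubble
```

One design question the incident raises is whether such a boundary should have to be activated at all, or be on by default; in the sketch, that difference is a one-line change to `safe_zone`.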
Vivek Sharma, the vice president of Horizon, called the groping incident “absolutely unfortunate,” telling The Verge, “That’s good feedback still for us because I want to make [the blocking feature] trivially easy and findable.”
It’s not the first time a user has been groped in VR—nor, unfortunately, will it be the last. But the incident shows that until companies work out how to protect participants, the metaverse can never be a safe place.
“There I was, being virtually groped”
When Aaron Stanton heard about the incident at Meta, he was transported to October 2016. That was when a gamer, Jordan Belamire, penned an open letter on Medium describing being groped in Quivr, a game Stanton co-designed in which players, equipped with bow and arrows, shoot zombies.
In the letter, Belamire described entering a multiplayer mode, where all characters were exactly the same save for their voices. “In between a wave of zombies and demons to shoot down, I was hanging out next to BigBro442, waiting for our next attack. Suddenly, BigBro442’s disembodied helmet faced me dead-on. His floating hand approached my body, and he started to virtually rub my chest. ‘Stop!’ I cried … This goaded him on, and even when I turned away from him, he chased me around, making grabbing and pinching motions near my chest. Emboldened, he even shoved his hand toward my virtual crotch and began rubbing.
“There I was, being virtually groped in a snowy fortress with my brother-in-law and husband watching.”
As the experience of virtual worlds grows richer, virtual crimes such as assault and theft may become as serious as their counterparts in the physical world.
This story is adapted from Reality+: Virtual Worlds and the Problems of Philosophy, by David J. Chalmers.
Note: The following paragraphs describe a virtual sexual assault in a text-based virtual world.
BEFORE THERE WAS the metaverse, there were MUDs, or multi-user domains. In 1993, they were the most popular virtual worlds for social interaction. MUDs were text-based worlds with no graphics. Users navigated through a number of “rooms” with text commands and interacted with others there. One of the most popular MUDs was LambdaMOO, whose layout was based on a California mansion. One evening, a number of users were in the “living room” talking with one another. A user named Mr. Bungle suddenly deployed a “voodoo doll,” a tool that produces text such as John kicks Bill, making users appear to perform actions. Mr. Bungle made one user appear to perform sexual and violent acts toward two others. These users were horrified and felt violated. Over the following days, there was much debate about how to respond within the virtual world, and eventually a “wizard” eliminated Mr. Bungle from the MUD.
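LambdaMOO was written in its own MOO programming language, and Dibbell’s reporting doesn’t include the code, so the sketch below is a hypothetical reconstruction of why the “voodoo doll” was so effective: the world was just broadcast text, so a tool could emit action lines attributed to any user.

```python
# Hypothetical reconstruction of the "voodoo doll" mechanic; LambdaMOO
# was written in the MOO language, and this is not its actual code.

def broadcast(room, line):
    """Send one line of descriptive text to everyone in the room."""
    for user in room:
        print(f"[to {user}] {line}")

def emote(actor, action):
    """Normal emote: the actor describes their own action."""
    return f"{actor} {action}"

def voodoo_doll(victim, action):
    """The exploit: action text attributed to the victim, not the puppeteer."""
    return f"{victim} {action}"

room = ["John", "Bill", "Mr. Bungle"]
broadcast(room, emote("John", "waves hello."))
# Mr. Bungle forces "John" to appear to act against his will:
broadcast(room, voodoo_doll("John", "kicks Bill."))
```

Because other users had no way to tell an ordinary emote from a voodoo-doll line, the victims appeared, to everyone present, to be performing the acts themselves.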
Almost everyone agreed that Mr. Bungle had done something wrong. How should we understand this wrong? Someone who thinks virtual worlds are fictions might say that the experience is akin to reading a short story in which you are assaulted. That would still be a serious violation, but different in kind from a real assault. That’s not how most of the MUD community understood it, however. The technology journalist Julian Dibbell reported a conversation with one of the victims recounting the assault.
If you’ve struggled to envision your virtual self in the metaverse, you’re not alone.
Meta Platforms, formerly Facebook, may be restructuring its company and hiring thousands for newfound augmented- and virtual-reality roles, but there’s not even a universal definition of what a metaverse is yet, let alone an agreed-upon set of ground rules.
Just before the pandemic upended our world, a study came out finding that brands that want to succeed in customer experience will have to adapt their technology models to become more agile, relying more on automation and on smart, immersive tech to reach customers with personalized interactions in real time.
In hindsight, the report's creator, Futurum Research, must have had a crystal ball, because the pandemic has certainly borne out its prediction about agility and smart, immersive tech, albeit much more quickly than anticipated.
McKinsey calls it “the quickening,” noting that if companies are feeling whiplash, it is because digital activity, and e-commerce in particular, has jumped forward 10 years in just 90 days.
Forrester also reinforced Futurum’s pre-pandemic conclusion about smart, immersive tech in its “Future of Martech” (paywall) report, published late last year. According to Forrester, the boundaries between the human, digital, physical, and virtual realms are blurring as CX becomes more immersive. It highlights AI techniques like natural language processing and generation, voice and facial recognition, and image and video analysis, and says these underpin the emerging technologies that will fuel next-generation CX.