Analysis

April 18, 2022

The deepfake dangers lurking in the metaverse

The technology exists for anyone to create a hyperreal avatar of themselves. But how do we stop these deepfake images from being misused?


Maija Palmer

5 min read

When you're in the metaverse, you are generally represented by either a blocky or cartoonish avatar or a disembodied floating torso and a pair of hands. None of which looks remotely like you.

But what happens when things become much more real?

A number of companies are developing ways for you to create hyper-realistic representations of yourself for the metaverse, with your face, your voice and even the way you move. One of these is Metaphysic, a deepfake, or synthetic media, company cofounded by Chris Ume, creator of the Deep Tom Cruise videos that took TikTok by storm last year.

The videos purported to show the Hollywood actor doing things like eating a lollipop and playing golf. The footage was of a different actor, with Tom Cruise’s face so skilfully transposed on top that it’s hard to tell it isn’t real.

Now Metaphysic wants to put that technology in everyone’s hands, so they can use it to make their own hyperreal avatars. The startup recently raised $7.5m from investors including Winklevoss Capital and YouTuber Logan Paul to help fund this.

“Any person can come and create their own hyperreal synthetic avatar,” says Tom Graham, CEO and cofounder of Metaphysic.

But that’s not all: with Metaphysic you can also securely store your avatar (as a non-fungible token), so that you can keep ownership of your own image and — crucially — the biometric data used to create it.

Other companies are doing this as well. Romania-based Humans.ai is also offering a service where people can create an NFT of their own face or voice.

Partly it’s about having fun — who doesn’t want to create a mini-me of themselves? Or see what they’d look like dressed as Lady Gaga?

But there is also a serious side. If we don’t find ways to secure our identities in the metaverse right from the start — as Metaphysic aims to do — the result could be a horrible loss of control of our own images and biometric data.

A quick dive into the deepfake dilemma

No-one knows this better than Henry Ajder, a researcher who has spent years looking into the malicious uses of synthetic media. A joint investigation he carried out with Karen Hao of MIT Technology Review in 2019 found that 96% of all synthetic media online at the time was pornographic, mainly created by bots that could swap people’s faces onto the bodies of people performing lewd acts.

That was at a time when deepfake technology was still in its infancy. Since then, creating deepfakes has become dramatically easier and the results far more realistic.

“It used to take 150 CGI people and $250m to create a really profound set of effects for a movie. Now we can do it for a couple of thousand dollars, a couple of GPUs and a single person,” says Graham.

And deepfake videos keep popping up everywhere. At the start of Russia’s invasion of Ukraine, a clumsy deepfake video of president Volodymyr Zelensky supposedly surrendering showed how this kind of media might be weaponised for political purposes. The video was relatively crude, but experts warn that the next ones might not be so easy to detect.

You can’t ban the technology, says Ajder: “If you ban synthetic media you ban all Instagram filters, you ban the computational photography on your camera and your smartphone, you ban the dinosaurs in Jurassic Park. It's not going away — the future will be synthesised and there's no sugarcoating the challenges ahead.”

Setting a good example

The only thing you could do, Ajder reasoned at the time of his research, was to try to create a big enough industry around legitimate and ethical deepfake technology to set best practice, and maybe, just maybe, help balance out the nefarious uses.

Ajder teamed up with Ume and Graham, who was at the time setting up Metaphysic, to advise on how to take things in an ethical direction.

“I'm very much coming from this perspective of understanding how the technology can be used in malicious ways, but I've also seen an explosion in really interesting creative and commercial uses of the technology, and the need for a more nuanced conversation around synthetic media as a technology,” he told Sifted last year when they were getting started. “In the right hands and used responsibly it could actually be the future of creative expression. We've got to make sure that we're setting a good example.”

Metaphysic has tried to model ways that media companies could use synthetic media responsibly. The company has helped famous actors, for example, lease their images to advertisers to create campaigns, but everything had to be done with consent and within agreed boundaries.

“There are some quite clear use cases that we think of as just explicitly bad — non-consensual image abuse in the pornographic context, deceptive political stuff, cybersecurity issues with fraud,” Ajder said.

Another synthetic media company, D-ID, which has worked with Warner Brothers on various film projects, has also been pushing to establish a code of ethics in the industry.

But now it is becoming everyone’s problem

As deepfake technology emerges from the realm of advertising and film projects and becomes accessible to anyone, companies like Metaphysic feel they need to take things further.

The team don’t just want to make sure that the film and advertising industries use synthetic media ethically; they want everyone to be able to create and secure their own avatar. They're offering their service, called Every Anyone, for free, with users paying only an estimated $20 in NFT minting fees.

[Image: Every Anyone's platform allows for faces to be manipulated in all sorts of ways]

Sure, it won’t stop someone from stealing your face to create revenge porn if they really want to. But the team wants people to understand what can be done with their images, and to look at ways they can control how their face and voice are used.

“We want individual users to feel that they have got more control over who they are, and to not be worried that they're sending all of their data to a dodgy company and what might happen to that in the future,” says Graham.

“It's really fundamentally about consent. We want to level the playing field a little bit and create a paradigm where this is the norm.”