It’s New Year’s Eve 2023. Across Iceland, people are glued to their TVs watching a comedy music video as part of national broadcaster RÚV’s annual variety show, featuring some of the country’s most famous people. But there’s a catch: one of them, comedian Hemmi Gunn, died in 2013.
Gunn made his appearance from beyond the grave thanks to technology from Icelandic startup Overtune. The company, which normally develops a platform for audio creation, was picked as a tech partner to produce the video.
Not all Icelanders were impressed though. Members of Gunn’s family were reportedly upset to see him brought back from the dead and the whole thing prompted a backlash on social media, with Icelanders calling the sketch “evil”, “unethical” and “gross”.
But while the incident might have happened in one of Europe’s smaller tech markets — the country has a population smaller than most mid-sized European cities — it serves as an example of how even small AI companies can have an outsized and potentially damaging impact on society.
The dead comedian
Overtune’s cofounder Sigurdur Arnason says the startup trusted that the broadcaster would get permission from Gunn’s family to use his image. But he doesn’t seem too fazed by having upset members of the family.
“It excited me that the tech can cause this emotional response,” he says. “The last time I remember this kind of emotional response from tech was the internet and Y2K.”
“From our perspective, this is PR heaven. This couldn't have gone any better — I wouldn't change a thing, obviously.”
Sifted reached out to RÚV for comment on whether all of Gunn’s family had been asked for permission, but didn’t get a response before publication.
The incident grabbed attention at the highest level of Icelandic government. Björn Leví Gunnarsson, a member of the Icelandic parliament, is proposing a bill to amend the country’s copyright directive. The changes would force people recreating AI voices and images to seek permission from the people they imitate, and would also require watermarking of AI-generated content.
He has little hope that the bill will pass, though, as his party is not a member of the ruling coalition. “Unfortunately, [tech companies] do what they can get away with, as long as it makes them money,” he says.
Arnason is quick to point out that his startup is in favour of AI being regulated, but he has more mixed views on self-regulation and on whether tech companies should be held responsible for their own ethical decisions.
“It's a really hard question to answer, because the nature of tech companies is to innovate. That is the nature of paradigm shifts,” he argues. “We just put the product on the market.”
The societal impact of tech like Overtune’s is about to be tested to its limit in 2024. In a year when more than four billion people are expected to go to the polls in elections, the stakes are high. AI has already started to play a role: deepfakes of Joe Biden’s voice were used to tell Democrats not to vote in January’s US presidential primary elections.
While Overtune is not primarily an AI company, its tech can be used to generate convincing synthetic audio versions of public figures like Barack Obama, Donald Trump and Kim Kardashian.
Arnason acknowledges that these kinds of use cases for AI might have “massive societal effects on everything”, but he doesn’t want to commit to removing the ability to make deepfakes from Overtune’s tech.
“We have been talking about it, but we haven't gotten any complaints [about recreating Donald Trump’s voice],” he says.
Arnason describes the copyright issues surrounding AI models as “the big ethical problem” with the technology so far.
“It’s an exciting thing that you can copy someone's voice with the permission of an artist, so they can monetise their voice… That’s a really exciting business model,” he tells Sifted, before conceding that Overtune didn’t get sign-off from Kim Kardashian before letting users clone her voice on its platform.
Many in the sector now believe that AI companies need to start self-policing their behaviour, as governments prove slow in coming up with solid rules.
AI safety researcher Remmelt Ellen tells Sifted that AI companies that create tech that allows people to make deepfakes “should be held partly liable for harmful misuses of their products, because they have enabled such misuses.”
In some ways the Overtune story exemplifies the stereotypical “move fast and break things” startup mentality that has made its way from Silicon Valley to Europe, and it has already caused genuine upset in the startup’s homeland of Iceland. But as people go to the polls around the world this year, companies like this are about to have a big impact on the societal fabric from which our democracies are stitched together, and on how we decide which of our leaders we can trust, and which we can’t.