Analysis

May 8, 2023

Here are dozens of ways AI could be used for harm — and some too scary to test

One even involves... Taylor Swift-based terror


Tim Smith

4 min read

LAGG's homepage

Since ChatGPT’s release last year, the Twitterverse has done a great job of crowd-sourcing nefarious uses for generative AI. New chemical weapons, industrial-scale phishing scams — you name it, someone's suggested it.

But we’ve only scratched the surface of how large language models (LLMs) like GPT-4 could be manipulated to harm. So one London-based team has decided to dive deeper into the abyss, brainstorming the ways AI bots could be used by bad actors, and in the process revealing the potential for everything from scientific fraud to AI-generated Taylor Swift songs being used as booby traps.

If you don’t want to sleep at night, read on.

Deception

The team collected its findings on the many ways generative AI could be used for harm on a site with the ironically upbeat title “LLMs are Going Great!” (LAGG).


One of its members, Stef Lewandowski, says that some of the most wide-reaching harmful applications of generative AI could come from “grey-area” use cases that will be hard to legislate against. To demonstrate one of these, LAGG built a chatbot called “Manipulate Me”, which challenges users to guess how it is trying to manipulate them.

“You can adjust the way news is perceived to get a slightly different message across to people,” says Lewandowski. “I think it's in that area that I have more concern, because it's so easy to adjust and to manipulate people into forming a different opinion and that's definitely not in the realms of being illegal, or something we could legislate against.”

He adds that language models are already being used to create fake identities and credentials, which LAGG demonstrated with a fabricated academic profile (see below).

“It's quite easy to make an image, make a description of someone. But then you could take it one further and start generating a fake paper, referencing some other real researchers, so they get fake backlinks to their research,” says Lewandowski. “We're actually seeing people making these fake references for themselves in order to get entry into other countries when they need citations to prove their academic credentials.”

Engaging with the dark side

Beyond the potential for manipulation and fabricated credentials, LAGG explores other, more playful ideas, like an Amazon Alexa that could be programmed to play AI-generated Taylor Swift songs when a user says a secret word, and not to turn off until the word is repeated.

But Lewandowski says the team, which worked on the project at a recently organised generative AI hackathon, also discussed ideas so worrying they decided they were too risky to flesh out, like a self-replicating LLM that could act as a kind of exponentially multiplying computer virus.

“I certainly wouldn't want to be the person to put that out to the world,” he says. “I think it would be naive to assume that we don't see some kind of replicating LLM-based code within the next few years.”

Lewandowski says he created LAGG because developers need to consider the risks baked into the generative AI tools they build, in order to minimise future harm from the technology.

He adds that, of the various projects that came out of the hackathon, most offered an optimistic vision of the future of AI, and that there was little enthusiasm for engaging with the more negative aspects.

“My pitch was, ‘You folks have all presented extremely positive, heartwarming ideas about how AI could go, how about let's look at the dark side?’” Lewandowski says. “I didn't get that many takers who wanted to explore this side.”


Atomico angel investor Sarah Drinkwater — who co-organised the hackathon with Victoria Stoyanova — adds that, while the event was largely optimistic in tone, they also wanted to draw attention to possible risks.

“It was important for us to, in a lighthearted way, consider the societal implications of some LLM use cases, given technology tends to move faster than policy,” she tells Sifted. “Stef and team's product, while funny, makes a serious point about consequences, unintended or not.”


Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn.