If you’re someone who reads the news regularly, by now you’ll be aware of the fact that we’re all gonna die. If we’re not wiped out by the climate crisis, it will be nuclear war and if it’s not that, beware antibiotic-resistant diseases.
But now the media has a new villain for humanity: AI powered by large language models (LLMs).
Of course we wouldn’t believe a word of it without having been primed by a host of books, movies and TV series — The Terminator and its Skynet, the list goes on. These well-worn science fiction plots are built on the premise of a hostile AI “other”.
And, as products of our surroundings, we lap it all up until we’ve had our fill: the truth is, we want to believe the end is nigh. The old adage is that “sex sells” but, when it comes to tech, fear of impending disaster and a heightened sense of our own mortality are the ultimate turn-on.
Just as the US and USSR literally pushed each other into space, being one step from annihilation excites us like nothing else.
The right kind of obstacle
I hate to be the one to gatecrash the fear party, but AI doomerism could become a serious obstacle to the development of important technologies and overall scientific progress.
We can’t forget that AI is a technology. Because it uses text as an interface (and is becoming multimodal, meaning it can also process images and audio), it can seem very, very relatable and effectively pretend to behave as a human — but it has no will and no motivation.
It has none of the pathways that flood our brains with chemicals like serotonin and dopamine — chemicals that create the drive to pursue something global (and stupid) like world annihilation.
This fear-mongering over AI is even fed by those who stand to benefit the most from wider AI use, again hindering healthy and sustainable development of the technology. Take the dozens of respected technologists and academics — including Elon Musk — who signed an open letter earlier this year asking for a blanket pause on AI development.
Among the signatories were staff and execs at Meta, Google, Apple and Amazon. While it’s obviously unfair to paint them all with one cynical brushstroke, it is fair to say that many of them have a vested interest in a) maintaining the status quo for as long as possible and b) being at the forefront of the AI revolution when it does take place.
Their letter also distracts us from the fact that they’ve done hardly anything so far about one of the most plausible major concerns surrounding AI: the potential for mis- and disinformation. They benefit from a public that is scared of AI.
An exciting death
Burying our heads in the sand and hoping all this AI stuff goes away isn’t the way forward — as shown by Italy’s recent reversal of its (also recent) ban on ChatGPT — but that’s not to say I think we should let things continue to develop at unabated warp speed.
Rather than putting a blanket pause on development and failing to learn more about what we’re dealing with here, we need to rapidly upskill independent watchdogs and regulators. These bodies can then take logical and targeted actions such as forbidding LLMs from creating their own code, or ensuring they’re disconnected from social networks.
If you’ve followed the US inquiry into TikTok, this probably won’t fill you with reassurance about state efforts to understand tech, but in the case of AI we’re all feeling our way around in the dark to an extent. Besides, without watchdogs and regulators we’re left with two choices: put our trust in the “theoretically independent” — companies with obvious vested interests — or just let the AI regulate itself.
All of us want the benefits of AI, but we need to find ways to debate the risks that aren’t led by the fantasy narratives from popular culture.
AI is as important an invention as the printing press or the internal combustion engine. It will take a lot of mundane processes out of the equation, but it will create a multitude of new jobs, professions and optimisations that will allow our kids to create and distribute resources more effectively. AI energy consumption will in turn also need to be managed and optimised — perhaps a job for the AI itself. So let’s think about the real risks that might come with those new jobs or this energy-optimisation challenge.
In that sense, author Neal Stephenson was correct in his 1992 novel Snow Crash (and more importantly in The Diamond Age from 1995) — our society has already become like a movie director who runs your life to make it more exciting.
Nothing excites us more than the knowledge that we’re all about to die. The “good” news is that we’re “gonna die in more exciting ways than anyone who came before us”.