Christopher Brennan

Opinion

July 7, 2023

If Sunak wants the UK to lead on AI, he needs to invite startups to his summit

Founders need to be part of the conversation if the UK is to become an AI superpower

British prime minister Rishi Sunak is gathering a Mount Olympus of political and tech leaders to grapple with artificial intelligence at the “first major global summit on AI” in September. With it, he is positioning the United Kingdom between the approaches of the US and the EU, and as a benevolent mediator between regulators and companies such as OpenAI and Google. Just last week, OpenAI announced it was opening its first foreign office in London.

But keeping conversations around AI to that elite group of household names misses one key fact: it’s names we haven’t heard of that are going to be the future of the technology. And many of those will be small startups still climbing their way up the mountain. 

There is indeed a real opportunity for the UK to take global leadership on AI. But if the government wants bold initiatives like this summit to guide AI forward, Sunak must make sure that startups play a crucial role, not a bit part, in the conversation with policymakers and leaders.


Between the US and the EU

Sunak's push comes at a time when the EU and the US have positioned themselves on opposite sides of the AI regulatory showdown.

The EU has made the most progress towards regulation so far. It's discussing an AI Act that would set rules for uses of artificial intelligence, including but not limited to large language models (LLMs). Leaders in the EU have made a political point of challenging the (largely American) Big Tech companies through regulation such as GDPR and the Digital Services Act, and are set to continue that narrative.

On the other side are the Americans who have traditionally let large tech companies regulate themselves — and haven't shown any signs of changing tack on AI. “This is your chance, folks, to tell us how to get this right,” US senator John Kennedy told an IBM executive in a recent congressional hearing.

But missing from both approaches is the fact that there are other players in the equation besides Big Tech and Big Tech-backed scaleups. OpenAI's sudden plea for regulation, for one, may have motivations of its own.

A leaked memo from a Google researcher, published by SemiAnalysis, argues that Google and OpenAI both “have no moat” because of rapid technical advances, driven not by the headline-stealing ChatGPT but by the growth of open-source language models. It cites examples of people running models on hardware as small as phones.

Startups powered by the democratisation of LLMs

So if there's no moat, who are the barbarians sitting out there that could storm Olympus?

That would be the startups. There are thousands of them, from more established scaleups to pre-seed teams of three or four people, armed with LLaMAs, Alpacas, Falcons and all sorts of other animal-named models we don’t even know about yet.

Those startups, empowered by the democratisation of LLMs, are part of what worries OpenAI's Sam Altman, and also what policymakers fear. Artificial intelligence has repeatedly been compared to nuclear technology, explicitly in open letters and implicitly in the G7's unfortunately named "Hiroshima AI Process". The logic goes that if the technology is open to all, then any yahoo with access to cloud services can create something too powerful to be controlled, or something that causes damage online, from misinformation to harassment to cybersecurity issues.

Mitigating these problems requires nuance: distinguishing AI that carries a high risk of causing harm at scale from AI that does not, and creating different tiers of regulation accordingly. That nuance is impossible, however, if the smaller players in the AI space, the startups, are not in the room to discuss how new rules will impact their work.

In fact, leaving startups out of determining the rules on AI, even inadvertently, may be more dangerous still. Keeping AI locked up risks fortifying the top players in their positions, repeating and aggravating some of the harms to competition we've seen from centralised tech giants that have profited from amassing personal data.

The business models of 10 years from now don’t yet exist

The vast majority of the value extracted from language and generative models will come from startups experimenting with them to solve problems in people's lives, in fields as varied as medicine and media, rather than from using them to serve the existing business models of major companies. The business models of 10 years from now have not yet been invented. Giving startups, if not a leading role, then at least a voice in these conversations is essential to creating a more dynamic, fairer society and a better version of the internet, one that could actually live up to its beautiful potential.


Sunak, the “startup-friendly prime minister,” is right that the UK and its strong tech scene are in a good position to take leadership on these issues in a way that brings together large parts of the world. In his announcement of the summit, he touted having met with the CEOs of OpenAI, Anthropic and DeepMind, but that is not enough.

He has rebuffed concerns that the UK is too small to be a host for the lawmakers and leading lights of AI. It is time to extend that logic to the summit’s attendees.

If the UK isn’t too small to be a global centre for AI regulation, then startups aren’t too small to help shape those rules. 

Christopher Brennan

Christopher Brennan is the cofounder of AI startup Overtone.