Analysis

October 12, 2023

‘France is a bit overhyped’ — Nathan Benaich on the state of AI in 2023

The annual report from Air Street Capital charts the rise of generative models and the entry of big tech into AI


Tim Smith

4 min read

2023 is the year that AI went mainstream, stealing the hearts of big tech companies and propelling chipmaker NVIDIA to a trillion-dollar market capitalisation. 

And while many of the headlines have focused on the US, there is still a lot to be optimistic about in Europe, according to Air Street Capital’s State of AI report. The research shows the UK continues to be a unicorn leader and gives props to UK-based DeepMind, Google’s AI arm.

Here are the main takeaways: 

The UK still leads the pack in Europe

While the US and China dwarf Europe when it comes to AI unicorns, the UK boasts more than twice as many billion-dollar companies as its closest continental rival.

The country added three AI unicorns this year, bringing the total to 27.

When asked whether France — which has seen buzzy new companies like Dust and Mistral spin out of big tech AI labs this year — is the sleeping giant of European AI, Air Street founder Nathan Benaich poured a dash of cold water on the hype.

“I would say it [France] is probably a bit overhyped. That crew is awesome, but it’s been around for a while,” he tells Sifted. 

“If I go back and see who were the cool kids doing AI a couple of years ago, compared to who are the cool kids doing it now, it's probably the same people…  French tech marketing is very good — a lot better than the UK for some reason. They have immense public national pride for this stuff.”

The report highlights London-based Synthesia (an Air Street portfolio company) as one GenAI startup gaining real enterprise traction, pointing out that “44% of Fortune 100 companies” use its tech. Benaich also namechecked London-founded ElevenLabs — which has raised from VC heavyweight Andreessen Horowitz — as another success story.

High hopes for DeepMind 

Air Street’s report spotlights breakthrough research from European companies, such as self-driving car company Wayve’s GAIA-1 and LINGO-1 models, which combine text and visual data to improve autonomous vehicle control, and Google DeepMind’s RoboCat model, which can operate 36 robots across 253 tasks.

Benaich argues that companies like these deserve praise for focusing on “hard” problems, rather than the low-hanging-fruit use cases to which many companies are applying text and image generation.

“DeepMind didn't really invest all that much attention into chasing OpenAI and doing GPT-style systems because fundamentally, Demis [Hassabis] and the leadership are building a science organisation,” he says. “They’re more interested in solving science’s grand challenges than making nice pictures and I have a lot of respect for that.”

Former Google DeepMind researchers are increasingly taking their experience from the AI lab and founding their own companies in life sciences, a space that Benaich describes as having “a lot of potential”.

He also expects good things from Google DeepMind’s new multimodal generative model Gemini, which is due to be released before the end of the year.

“They [Google] can compete on this because it doesn't cannibalise their business. They already have a money-printing ads business. So it's quite compelling to be in that position,” he says.

The report also notes that there are “growing efforts” in the AI industry to build more efficient, smaller models — something that Paris-based Mistral is focused on — and Benaich says there will be room for these cheaper alternatives to the likes of GPT-4.

“I do believe in the long tail idea… Some open-ended dialogue problems will require a large model, but not every use case needs a general purpose system,” he tells Sifted.

AI risk is no longer the industry’s “unloved cousin”

The Air Street report also notes how the conversation around AI risk has “shed its status as the unloved cousin” of AI research and is now front and centre of national policy debates.

Benaich says he’s less concerned by existential risks, and more by the threat of misinformation and the impact that products like AI companion apps might have on young people.

“If you look at things like developing a new anthrax, you talk to people in biology and they’ll tell you that you don’t need AGI [artificial general intelligence] for this stuff,” he says. “Being an AI doomer is like part of the fundraising strategy in a way — it’s the classic thing of: ‘I invented something, it’s really powerful. It can wreak havoc but it can also reap a lot of benefits. I’m the one who can save you.’”

And while the report notes growing opacity around the research behind powerful AI models from companies like OpenAI and Google, it also highlights the continuing explosion of open source activity: AI models were downloaded from research platform Hugging Face more than 600m times in August alone, the report points out.

“The show right now is still really run by OpenAI, but now you’ve also got Meta with its Llama models that are really driving open source progress,” says Benaich.

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn.