Why generative AI is getting VCs all excited

Generative AI has made a big splash on social media via tools like DALL-E and ChatGPT, but why the hype among VCs?

By Tim Smith

Generative AI is the new must-have accessory for the VC winter season. And this time — unlike with the metaverse — top investors are assuring us that the hype is justified.

You’ve likely seen the social media viral moments created by OpenAI’s generative AI chatbot ChatGPT, or image creator DALL-E. But beyond writing biblical-style verses about how to remove a peanut butter sandwich from a video player, or creating images of Darth Vader going ice fishing, is the technology going to be able to make a return for investors?

What’s new about generative AI?

Generative AI tech has actually been around and in use for some time. Nathan Benaich, general partner at London-based AI-focused VC firm Air Street Capital, mentions Grammarly as one well-known company that’s already put generative AI to use.

“It’s definitely not the first time that generative models have produced pretty astounding outputs,” he says. “You can see some work from DeepMind on short-term weather predictions, which also uses generative models to roll out the future of what a certain number of hours of weather movements might entail.”

But how is generative AI different to machine learning models that we’ve seen in the past, and why are people getting excited by it?

Antoine Blondeau, managing partner at AI-focused fund Alpha Intelligence Capital, explains that previous applications of AI have been outperforming humans in two areas for some time: “perception” and “optimisation”.

An example of perception is facial recognition algorithms that can pick out every face from a given ethnicity from a crowd of thousands, in seconds — “the ability to extract patterns extremely quickly,” Blondeau says.

Optimisation refers to the ability of an algorithm to weigh a wide range of factors and come up with the best decision. An example of this would be the algorithms that Uber uses to optimise its drivers’ journeys, which are able to crunch a large number of data sources to find the most efficient route.

But, until now, AI has been lagging behind when it comes to emulating one key area of human intelligence: cognition. This, Blondeau says, is all about being able to make sense of things and understand them within their context. This is where generative AI is changing the game.

“It’s not just about understanding the language, it’s about understanding the story,” he explains. “It’s the ability to see the salient point in the sentence that matters.”

This is why things like ChatGPT can write an op-ed about AI that makes sense: it can pick out the most important parts from written information.

How does generative AI work?

Generative AI is an umbrella term for a number of machine learning methods — including Large Language Models (LLMs) and Generative Adversarial Networks (GANs).

Tariq Rauf, founder of London-based AI-powered company Qatalog, says that advances in the tech have come about due to progress in something called transformer architectures (GPT stands for Generative Pre-trained Transformer). If you want to dig into the specifics of how transformers work, Pathmind has a good explainer.

Previously, machine learning algorithms used in mainstream products were mostly something called “discriminative”, meaning they could discriminate between various types of data. 

Rauf says these would be things like:

  • Recommendation systems — ‘here are five movies similar to Titanic’
  • Classification algorithms — ‘here’s an image of a cat’
  • Data clustering — ‘these are five tweets about Barack Obama’

He adds that, in these scenarios, no new data is being created — the outputs from the AI are strictly limited by the training and input data. The significant shift with generative AI is that it can create new data. This is why it’s called generative — the models can generate coherent and novel outputs.
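Rauf’s distinction can be made concrete with a toy sketch (purely illustrative — the `classify`, `train_bigrams` and `generate` functions below are invented for this example, not any real product’s code). The discriminative model can only pick a label from a fixed set; the generative one samples strings its training text never contained:

```python
import random

# Discriminative: assigns an input to one of a fixed set of labels.
def classify(text: str) -> str:
    return "about cats" if "cat" in text.lower() else "not about cats"

# Generative: learns which character tends to follow which, then
# samples new strings that were never in the training text.
def train_bigrams(corpus: str) -> dict:
    model: dict = {}
    for a, b in zip(corpus, corpus[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model: dict, start: str, length: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = start
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # no known successor: stop early
            break
        out += rng.choice(choices)
    return out

model = train_bigrams("the cat sat on the mat")
print(classify("my cat sleeps"))  # about cats
print(generate(model, "t", 12))   # a novel string built from learned pairs
```

Real LLMs do the same thing at vastly greater scale — predicting the next token from billions of learned parameters rather than a handful of character pairs.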

“One of the key developments driving this has come from something called the ‘attention’ mechanism, alongside a slew of other techniques, that has given these models the ability to mimic human-quality output,” he explains.

“We’re in the midst of transitioning into a whole new world of automation possibilities across every sector and industry, where humans are assisted by machines in almost every task,” he adds.
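The “attention” mechanism Rauf mentions has a compact mathematical core. A minimal NumPy sketch of scaled dot-product attention (an illustrative toy, not any production transformer) shows the idea: each token scores every other token, and its output is a weighted blend of the ones that matter most:

```python
import numpy as np

def attention(Q, K, V):
    # Score each query against every key, scaled for numerical stability
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns scores into weights that sum to 1 for each query
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)
    # Each output is a weighted blend of all the value vectors
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens, dim = 4, 8
Q = rng.standard_normal((tokens, dim))  # queries
K = rng.standard_normal((tokens, dim))  # keys
V = rng.standard_normal((tokens, dim))  # values

out, weights = attention(Q, K, V)
print(out.shape)             # (4, 8)
print(weights.sum(axis=-1))  # each row of weights sums to 1
```

This is the “salient point in the sentence” idea in miniature: the weights let the model focus on the parts of the input that matter for each output position.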

Applications

While generative AI applications in the content creation space like DALL-E and ChatGPT have been the ones grabbing people’s attention on social media, this isn’t where Air Street’s Benaich has been directing his attention.

He says he’s more excited about backing companies using generative AI to solve problems like finding new drugs, or speeding up complex business operations that involve large amounts of data.

“Asking questions of large documents is going to be a pretty fruitful area for this technology,” he says. “For example in M&A, contract negotiation or market research, where you’re combing through lots of links and lots of corporate filings and it’s just a pain to parse through these documents and figure out where the facts are and synthesise it all together.”

Nathan Benaich, Air Street Capital

One example of a company already working on this use case is Stockholm-based Sana, which recently raised $34m to scale its generative AI-powered technology that helps businesses make sense of their own institutional knowledge and information.

London-based Qatalog’s generative AI has already been able to help more than 3,500 clients build operational software for businesses at a fraction of the cost and time it would normally take. It does this by scouring large datasets of information to assess what functionalities are most important for any business, and then builds an app based on what it has learnt.

“We had a real estate company that built their real estate operation system and set up 800 properties on their first day. This is a step change that AI enables,” Rauf says.

Enabling creators

Some people might worry that roles like software engineering will soon be put out of work by this kind of technology, but those building such applications aren’t concerned.

Gordon Midwood is CEO and cofounder of London-based Anything World — a startup that uses AI to generate 3D moving animations from static models of animals and vehicles.

One of Anything World’s animations

Anything World is already selling its tech to big game development studios, which use it to speed up prototyping of virtual worlds in games, and Midwood says all this is really doing is removing laborious grunt work that most people would rather not do.

“We see it as lowering the barrier to creativity — within 3D worlds in our instance — but also within writing and visual creation. So we see it as a massive opportunity, allowing more people to create content,” he says. “Obviously people will need to adapt, but we don’t see it as a threat.”

Benaich points to Paris-based PhotoRoom as one example of a European startup that’s using generative AI to open up the creative process to more people, by giving them the ability to make advanced photo edits that would have been costly and time-consuming in the past.

Blondeau also cites Paris-based Qatent — which helps users write patent documents — and London-based Boltzbit — which gives businesses insights into marketing and ecommerce — as other examples of startups that are already putting generative AI to useful work.

The risk of overhyping generative AI

Generative AI might be about to make big changes to the way we work, but Benaich says that investors need to be aware that the technology is still in its early days, and not necessarily as powerful as some might assume.

“The risk I see is that generative AI has become voted as a consensus investment theme in the span of like two weeks or however long,” he says. “The reality is — even though the progress right now seems like it’s exponential in images, video and text — I think there’s just so many nuances that companies need to solve for when they take these capabilities and build products for many people to use in a workflow.”

What does seem clear is that the latest next-best-thing in VC land has a little more substance behind it than a half-baked vision of a world where we sit around wearing VR goggles all day.

Tim Smith is Sifted’s Iberia correspondent. He tweets from @timmpsmith.