April 24, 2024

Nvidia, Tesla, Intel and Apple graduates raise $30m to fix the AI compute bottleneck

The company is building software to make it easier for AI developers to use hardware from different providers

Tim Smith


Paris-based startup FlexAI is coming out of stealth today, announcing a $30m round to develop software that helps AI developers build their systems on chips from a range of different providers.

The round was led by Alpha Intelligence Capital (AIC), Elaia Partners and Heartcore Capital.

If you’re building a startup in the AI hardware space, you could do worse than having names like Nvidia, Apple, Tesla and Intel on your CV. FlexAI founders Brijesh Tripathi and Dali Kilani met while working on chip technology at Nvidia 20 years ago. Tripathi has since worked in the hardware teams at Apple and Tesla, and then as a VP and CTO at Intel.

It was the latter role that helped seed the idea for the new startup, which aims to reduce the AI industry’s reliance on Nvidia hardware for training and running complex models.


What will FlexAI do?

The AI industry appears to be pretty hooked on the Nvidia drug. And, while it’s primarily seen as a hardware company, its CUDA software platform — which AI and ML engineers use to build and train models — is also widely accepted to be the easiest to use and most efficient on the market.

While working at Intel, Tripathi began thinking about how the company’s latest AI accelerator chip Gaudi 3 — along with other hardware being made by companies like Silicon Valley-based AMD — wasn’t being utilised to its full potential.

“The compute demand has grown with the explosion of generative AI large language models and Nvidia is sold out for the next two years. At the same time multiple other options have come up,” he explains, adding that using chips made by companies like AMD and Intel is more complex and requires more effort.

So, to try to unlock some of that hardware capacity made by non-Nvidia providers, Tripathi and Kilani teamed up to build what they call “a simplification layer for AI compute”. The product is essentially a software layer that lets AI developers build systems with the most widely used AI development tools — the Python language and the PyTorch framework — and then optimises them to run on different chips from different companies.

“Customers don't have to worry about the complexities of multiple [hardware] architectures; we take care of mapping it to the right architecture, we take care of the networking, the stability, the reliability,” says Tripathi. 

“We actually review their code with an automated tool that goes through it and says, ‘Hey, by the way, there is this line in here that is very hardware specific, but here's the replacement that will now allow you to get access to all these other computer architectures’.”
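FlexAI hasn’t published how its code-review tool works, but the idea Tripathi describes — an automated scan that flags hardware-specific lines and suggests portable replacements — can be illustrated with a minimal sketch. Everything below is hypothetical: the patterns, the suggestions and the `scan_for_hardware_specific_lines` helper are illustrative inventions, not FlexAI’s actual product.

```python
import re

# Hypothetical examples of hardware-specific PyTorch idioms a scanner
# like the one Tripathi describes might flag, each paired with a hint
# towards a more portable alternative.
PORTABILITY_HINTS = {
    r"\.cuda\(\)": "use `.to(device)` with a device chosen at runtime",
    r"torch\.device\(['\"]cuda['\"]\)": "pick the device from available backends instead of hardcoding CUDA",
    r"torch\.backends\.cudnn": "gate cuDNN-specific settings behind a backend check",
}

def scan_for_hardware_specific_lines(source: str) -> list[tuple[int, str, str]]:
    """Return (line number, offending line, suggestion) for each match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, suggestion in PORTABILITY_HINTS.items():
            if re.search(pattern, line):
                findings.append((lineno, line.strip(), suggestion))
    return findings

example = """model = MyModel()
model.cuda()
x = torch.randn(4, 8)
"""
for lineno, line, hint in scan_for_hardware_specific_lines(example):
    print(f"line {lineno}: {line!r} -> {hint}")
```

Run against the snippet above, the scanner flags the `model.cuda()` call — exactly the kind of “very hardware specific” line Tripathi describes — and points at a device-agnostic replacement.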

Going to market

FlexAI is planning to reach its customers in two main ways: by partnering with big cloud compute providers like Amazon, Google or Microsoft to embed its system into their platforms, and by letting users come directly to the startup to build.

This means that FlexAI is talking with various data centres and compute providers to get access to GPUs that its customers can use. It will also be raising a debt financing round later this year to begin building its data centre capacity.

As well as making it easier for AI developers to access compute — by opening up access to other chip providers — Tripathi says FlexAI can also help them to build models more efficiently, by optimising the usage of those chips.

“Today, if you look at even the best Nvidia numbers they are at best 50% utilised. We want to improve that utilisation and efficiency to a significantly higher number,” he says, predicting that FlexAI’s technology will get AI models running on chips at 70-80% efficiency.


If they can achieve that, it will make AI systems more cost- and energy-efficient to run.

FlexAI has a team of around 40 that Tripathi says is roughly split across hardware and software expertise. It chose Paris as its HQ due to the availability of good developers, as well as the city’s hub of companies building AI technology that could become natural customers.

It isn’t the only one trying to tempt AI companies away from Nvidia. Companies like Finland-based Silo AI have developed software for running models on AMD hardware via European supercomputer Lumi, and all of the big chip manufacturers are trying to improve their software to make it as easy to use as possible.

But FlexAI believes it’s currently the only company building a system that works across multiple hardware platforms, giving developers as much flexibility as possible without having to worry about which chips are available on the market at any given time.

If the AI boom continues, it could be a very valuable market to win.

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn