Analysis

December 12, 2023

Will European regulation crush the continent's hottest AI startups?

French AI startup Mistral just raised a huge Series A round from top US investors. But it could be facing an uphill battle with European regulations.


The Mistral AI founding team

This week kicked off with a bang in European tech, with the announcement that Paris-based generative AI startup Mistral AI had closed a €385m Series A round led by Andreessen Horowitz and Lightspeed Venture Partners. The round is one of the largest in European tech this year. 

But the good news — major US investors throwing their weight behind a European AI company — was overshadowed for many in the sector. The reason? European AI regulations freshly agreed on Friday night could significantly hinder the progress of smaller startups like Mistral, investors and AI founders say, potentially putting Europe even further back in the global AI race. 

Risky business

In Brussels, securing a deal on AI regulation was seen as a long-awaited victory.

“Balancing user safety and innovation for startups, while respecting fundamental rights and European values, was no easy feat. But we managed,” wrote European commissioner Thierry Breton.


The Act — which has yet to be finalised — would not apply to open source models like Mistral’s, unless they are considered high risk or being used for banned purposes, per a draft of the legislation seen by Sifted. But some say that most large-scale AI models will be considered “high risk”.

“[Mistral] will have to anticipate that they fall under that category and if they do, they will have to comply with very heavy obligations, particularly around transparency,” says Marianne Tordeux Bitker, director of public affairs at startup and VC lobby France Digitale. 

The law states that providers of general-purpose AI should have technical documentation about the model available for regulatory authorities should they request it, as well as provide information and documentation to the providers of AI systems who intend to use the model.

AI models that present “systemic risk” will have additional obligations ranging from performing model evaluation to ensuring an adequate level of cybersecurity protection.

Critics argue that unlike deep-pocketed US companies developing similar models, such as OpenAI, Google or Facebook, smaller companies like Mistral will struggle to cope with the legal overhead caused by the new rules.

And other onlookers suggest that any advantages for European open source developers, should they skirt high-risk use cases, may be short lived: American companies deploying closed models (like OpenAI) “have been known to adapt to these regulatory rules quite quickly”, says Andreas Goeldi, an AI-focused partner at b2venture.

Data transparency and red teaming

Another important element of the EU’s AI Act that needs to be ironed out for startups is the obligation to be transparent about training data.

The draft legislation states that the companies building general-purpose models will have to provide “detailed” public summaries of the data they used to train their model.  

“This process will be tough and it will require a lot of back and forth with regulators. It's definitely going to slow down Mistral and other smaller teams,” says Gabriele Venturi, cofounder of open source AI startup PandasAI. “They're trying to challenge the big players out there: Google won't have any issues doing this back and forth but for smaller startups like Mistral this is going to have a huge negative impact.”

Some also worry that making the model’s training data publicly available will cause intellectual property issues as companies like Mistral will find themselves sharing significantly more information with competitors. 

“When you’re sharing this type of information, who is capable of reading and understanding what you’re saying, apart from your competitors?” says France Digitale’s Tordeux Bitker.


“It will require re-thinking the whole company’s strategy.”

Other startup providers building LLMs also have concerns about extra costs around so-called “red teaming” new models — the process of testing against various types of risks.

“I'm a bit worried about the compliance costs for many model providers,” says Vanessa Cann, cofounder of German generative AI startup Nyonic. “For aspects like red teaming we think we’ll have to hire about three people to check the robustness of our models.”

All to play for

While EU negotiators may have reached a high-level political agreement on how to regulate AI, Peter Sarlin, CEO of Finnish AI company Silo AI, says that there is still a lot of uncertainty for startups about what the regulation will mean for business.

“We have a lot of technical level work that is happening behind closed doors in order to clarify details from the political agreement,” he says. “What happens now is crucial.”

And despite the lack of clarity, Mistral’s investors remain bullish about the company’s future.

“[The AI Act] is a risk, and it would be foolish to say otherwise, but we came back for this second round, so we believe that this risk is controlled,” says Antoine Moyroud, partner at Lightspeed Venture Partners. 

Mistral declined to comment.

Daphné Leprince-Ringuet

Daphné Leprince-Ringuet is a reporter for Sifted based in Paris, covering French tech. You can find her on X and LinkedIn

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn

Anne Sraders

Anne Sraders is a senior reporter at Sifted based in Berlin. She covers the venture capital industry and deeptech startups, including robotics, spacetech and defence tech. She also co-writes Sifted's weekly VC newsletter Up Round. Previously, Anne was a senior writer at Fortune in New York City, where she co-wrote the Term Sheet newsletter. Follow her on X and LinkedIn