The home countries of Europe’s most successful generative AI startups are blocking new rules for AI foundation models proposed in the AI Act talks, a move that could derail the wider EU AI Act and severely delay Europe-wide regulation on the sector.
The EU is entering the final stretch of drawing up the details of its flagship legislation on AI, but some European policymakers want to add last-minute provisions that would put extra requirements on the makers of AI foundation models, such as large language models (LLMs).
As a result, the French and German governments, home to AI giants Mistral and Aleph Alpha respectively, are blocking the deal in order to protect their startups, several people close to the talks tell Sifted.
If a deal isn’t reached before December 6, it will likely have to wait until after the election of a new European Parliament and Commission in 2024.
“At this stage of the negotiations, there is an interesting chance the file might not be closed by the end of the year, as initially expected,” says Maxime Ricard, policy manager at Allied for Startups, a lobby group. “I would not bet on it.”
Originally, the act was to impose requirements on AI companies depending on the level of risk posed by their application, rather than the type of technology they are built on.
The attempt to regulate foundation models directly would put new bureaucratic burdens on the technology-makers, worrying many startups and industry groups.
Ricard says that both France and Germany have always been rather wary of regulating foundation models, but “having been a breeding ground for successful startups is likely to have influenced their position.”
The French government’s position against it was driven by lobbying from Mistral AI, which has a former government minister as cofounder, one industry representative says.
Timothée Lacroix, cofounder and CTO of Mistral AI, tells Sifted the startup “likes the risk-based approach because that’s where the risks are understood. Regulating the base technology makes less sense to us.”
Lacroix says that when it comes to foundation models, as few as 1-2% of use cases are high risk, and as little as 0.1% are very high risk. Given those low numbers, he argues, the models themselves should not be regulated directly.
“Blocking is a strong word,” he says of the French position.
“We are in discussions because we are interested. We are not happy with the state of things. We’d like to at least have a chance to discuss [this] before things go into the law.”
France Digitale, France’s leading startup lobby, argues that regulating foundation models could kill Europe’s chances of developing AI champions just when the continent finally has a chance to compete against the US.
“Europe can still play its cards right,” says Marianne Tordeux Bitker, the group’s director of public affairs. “It shouldn’t shoot itself in the foot with regulation that is poorly put together.”
Christoph Stresing, managing director at Startup-Verband, Germany’s startup association, agrees when it comes to Germany.
“We must not stifle our innovation,” he says. “Particularly with regard to foundation models, it is important to maintain the ability to innovate.”
But policymakers don’t agree. Dragoș Tudorache, a Romanian MEP who’s the chair of the European parliament’s special committee on AI, says the EU AI Act is a “business-friendly regulation” even if some AI startups perceive it as putting the brakes on innovation.
Meanwhile, policymakers continue to negotiate the technical aspects of the law, with major discussions pencilled in for December 6, the date when many hoped the AI Act would receive its formal agreement.
“An agreement by the end of the year currently seems rather unrealistic,” says Stresing.
“In our view, there must be considerable improvements to the AI Act in the further negotiations. [...] It is better to do it well than to rush it. Because the AI Act is too important for bad compromises.”
For critics of the act, that could be a blessing in disguise.
Guillaume Liegey, the CEO of AI startup eXplain, says that the text in its current form could significantly slow down Europe’s AI ecosystem. “If the rules are applied strictly, this ecosystem will be dead from the start,” he says.
“What’s happened is beneficial because it shows that even if negotiations are very advanced, we can still do something.”
But Carme Artigas, AI minister in the Spanish government, still believes that an agreement can be reached, telling a conference in Madrid on Friday that she is “optimistic.”