Robin Röhm, the founder of an AI-based startup, has mixed feelings about the EU’s upcoming legislation on artificial intelligence.
“The impact of AI that we will see over the next decade will be enormous. The only way to deal with the real challenges that lie ahead, from economic ones to societal ones to ecological ones, is through regulation,” he says.
But, adds Röhm, whose startup Apheris uses AI to help organisations work with decentralised data sets, while regulation is definitely needed, the way the EU is going about it could hinder the industry’s growth.
“What I worry about is the way it's constructed… We'll put a lot of unnecessary bureaucracy over companies that are innovating quickly.”
The EU’s proposal is still under discussion and will enter into force late this year at the earliest — but it’s already giving people working in and with AI in Europe headaches.
While the law has been designed to rein in big tech companies, it'll still apply to Europe's much smaller startups, and complying could prove a serious burden for them.
Recent research conducted by a group of European AI associations found that 73% of surveyed VCs expect the AI Act will reduce or significantly reduce the competitiveness of European startups in AI. Meanwhile 50% of surveyed AI startups said the law will slow down AI innovation in Europe, and 16% said they’re even considering stopping developing AI or relocating outside the EU.
The end result is that young European AI businesses won’t be able to handle all this additional compliance, and will decide not to develop their solutions in Europe, says Piotr Mieczkowski, managing director of Digital Poland, one of the groups that commissioned the survey.
“Startups will go to the US, they’ll develop in the US, and then they’ll come back to Europe as developed companies, unicorns, that’ll be able to afford lawyers and lobbyists,” he says. “Our European companies won’t blossom, because no one will have enough money to hire enough lawyers.”
The EU AI Act
The EU’s AI Act is meant to be the world’s first regulatory framework for building artificial intelligence products — a set of rules that, the EU hopes, other geographies will follow in the future.
The draft proposal — which is still under heavy negotiation and might change — outlines several categories of AI risk and rules that apply to each one.
AI solutions that bear “unacceptable risk”, such as social scoring by governments, exploitation of children, use of subliminal techniques and — subject to narrow exceptions — live biometric identification systems in public spaces for law enforcement, will be banned.
A second category, “high-risk” AI systems, covers those that analyse creditworthiness, are used in recruitment or rely on biometric identification in non-public spaces; these will be subject to strict monitoring and auditing. Companies building them will have to, for example, produce impact assessments, ensure their systems are explainable and can be overseen by humans, and often engage third-party auditors.
Other AI applications which bear “low”, “limited” or “minimal” risk, such as chatbots or spam filters, will be much less impacted by the proposed regulation; they’ll mostly have to comply with some transparency obligations, such as making users aware that they are interacting with a machine.
The new requirements will apply to all companies producing AI models, including startups, that want to operate in the EU market. Those that don’t comply will risk facing hefty one-off fines of up to €30m or 6% of the company’s total worldwide annual turnover.
For Monika Wyszyńska, cofounder of SmartyMeet, a Polish online meetings app that uses AI for things like creating automatic notes and real-time language translation, these requirements aren’t that scary. Her startup, like many other European companies that build on existing models such as ChatGPT, is likely to be classified as “low risk”.
“We’ll be responsible for informing customers that they’re dealing with a chatbot. This is the only impact on us,” she says.
American tech giants like Google, Microsoft and Amazon are likely to be the most affected, she says. But the AI Act could be quite burdensome for European startups building AI in a “high-risk” category too. “There will be a lot of bureaucracy and many administrative issues,” she says. “New consultancies will emerge to advise on this aspect.”
That will give founders setting up new AI startups even more to think about.
“What we see on the market now is that startups focus on the business. And then when they want to scale up, or when they want to come into contact with investors, then they think about compliance or the legal aspects to present in the best way possible to potential investors,” says Sonia Cissé, partner at the law firm Linklaters. “But this is not going to fly; they will have to do that from the very beginning of their project now.”
She expects that startups that have already been exposed to tough regulation, such as fintechs and healthtechs, will feel the extra admin burden the least.
Some worry that the regulation will be one hurdle too many.
“European innovation will get hurt. Suddenly it’ll turn out you’ll have to comply with so many regulations and, as a startup, you can’t even afford to have a lawyer,” says Mieczkowski from Digital Poland.
“You’ll tell a VC that you’ll need $1m for a start to understand what’s going on — they’d rather pay the same in the US, and everything will be tested on the spot, with no regulations.”
There have been some accusations that the law was rushed through, without taking public consultations into account. But Eva Maydell, one of the key MEPs who helped put together the act, says that she tried to reach out to all AI startups in Europe.
“When we sit and work on legislation, I wonder whether the legislation alone would deliver the results we're hoping it will deliver, particularly when we look at competitors, such as China and the US,” she says. “The big challenge for us here is how do we make the regulation make us more competitive. We need to listen more to startups, to scaleups, to the community that creates the innovation to find incentives and ways to help them develop to help them further innovate, to help them research.”
She says that’s why the AI Act won’t only add new requirements, but will also introduce AI regulatory sandboxes — legal frameworks in which startups can test their solutions — and special channels of communication with the regulators.
“One big benefit of the ambitious approach I’m advocating for regulatory sandboxes is that smaller companies that are not used to dealing with red tape will receive guidance. They’ll be able to deploy their AI products on the European market much faster, while also at the same time having legal certainty and legal clarity,” she says.
Not everyone in the sector is worrying about the risks. Herman Kienhuis, managing partner at Dutch VC Curiosity, which backs founders “in building responsible AI software companies”, says that while there’s always the risk of overregulation, the new law will actually strengthen trust in AI in Europe.
“It will improve adoption in the market. It generates more clarity on the requirements. It’s also important [for there to be] some harmonisation across Europe,” he says.
He adds that as time passes, more tools will become available to help startups comply with the law and best practices will be shared — and founders will get used to it, as they did with GDPR, the EU’s flagship data protection law.
“As Europe, we want to support companies that develop AI in line with European values and European regulations,” he adds. “If people don't want to do that, maybe they should move somewhere else.”