How To

March 7, 2024

Everything you need to know to prepare for the EU’s AI Act

The EU’s flagship AI Act is set to become law in the next few months — here's what founders need to know to make sure they meet the new requirements


EU lawmakers finally reached an agreement on the AI Act at the end of 2023 — a piece of legislation that had been in the works for years to regulate artificial intelligence and prevent misuse of the technology.

Now the text is going through a series of votes before it becomes EU law, and it looks likely to come into force by summer 2024.

That means that any business, big or small, that produces or deploys AI models in the EU will need to start thinking about a new set of rules and obligations to follow. No company is expected to be compliant immediately after the law is voted through — in fact, most businesses can expect a two-year transition period. But it’s still worth planning ahead, especially for startups with small or no legal teams. 


“Don’t bury your head in the sand,” says Marianne Tordeux Bitker, director of public affairs at startup and VC lobby France Digitale. “Implement the processes that will enable you to anticipate.”

“Those that get started and develop systems that are compliant by design will develop a real competitive advantage,” adds Matthieu Lucchesi, a specialist in digital regulation at law firm Gide Loyrette Nouel.

Since reading through the AI Act is likely to raise more questions than it answers, Sifted has put together the key elements founders need to be aware of as they prepare for the new rules.

Who does the law apply to?

Elements of the law apply to any company that develops, provides or deploys AI systems for commercial purposes in the EU. 

This includes companies that build and sell AI systems to other businesses, but also those that use AI in their own processes, whether they build the technology in-house or pay for off-the-shelf tools. 

The bulk of the regulation, however, falls upon companies that create AI systems. Those that only deploy an externally-sourced tool mostly need to ensure that the technology provider complies with the law.

The regulation also affects businesses headquartered outside of the EU that bring their AI systems and models into the bloc.

What is a risk-based approach?

EU legislators adopted a “risk-based approach” to the law, which classifies each AI system according to the level of risk posed by what it is used for, whether that is an AI-powered credit-scoring tool or an anti-spam filter.

One category of use cases, labelled “unacceptable risk”, is banned entirely. These are AI systems that threaten the rights of EU citizens, such as social scoring by governments. An initial list of systems that fall under this category is published in the AI Act.

Other use cases are classified as high-risk, limited-risk or minimal-risk, and each category comes with a different set of rules and obligations.


The first step for founders, therefore, is to map all the AI systems their company is building, selling or deploying — and to determine which risk category they fall into.
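
One way to start that mapping is with a simple inventory; even a spreadsheet will do. The Python sketch below is purely illustrative: the system names, roles and tier assignments are hypothetical, and the real classification depends on the final text of the Act, ideally checked with legal counsel.

```python
# Hypothetical inventory of a startup's AI systems mapped to the AI Act's risk tiers.
# The tier assignments are illustrative; the real classification must be checked
# against the final text of the Act, ideally with legal counsel.

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

ai_inventory = [
    {"system": "CV-screening model used by HR", "role": "deployer", "tier": "high"},
    {"system": "Customer-support chatbot", "role": "provider", "tier": "limited"},
    {"system": "Internal spam filter", "role": "deployer", "tier": "minimal"},
]

def obligations_for(tier: str) -> str:
    """Very rough summary of what each tier implies, as described in this article."""
    return {
        "unacceptable": "banned: cannot be placed on the EU market",
        "high": "risk assessment, technical documentation, CE marking, EU database registration",
        "limited": "transparency: tell users the content is AI-generated",
        "minimal": "no specific obligations",
    }[tier]

for entry in ai_inventory:
    assert entry["tier"] in RISK_TIERS
    print(f"{entry['system']} ({entry['role']}): {obligations_for(entry['tier'])}")
```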

Does your AI system fall under the high-risk category?

An AI system is considered high-risk if it is used for applications in the following sectors:

  • Critical infrastructure;
  • Education and vocational training;
  • Safety components and products;
  • Employment;
  • Essential private and public services;
  • Law enforcement;
  • Migration, asylum and border control management;
  • Administration of justice and democratic processes.

“It should be quite obvious. High-risk AI use cases are things you would naturally consider high-risk,” says Jeannette Gorzala, vice president of the European AI Forum, which represents AI entrepreneurs in Europe. 

Gorzala estimates that between 10% and 20% of all AI applications fall into the high-risk category.

What should you do if you are developing a high-risk AI system?

If you are developing a high-risk AI system, whether to use it in-house or to sell, you will have to comply with several obligations before the technology can be marketed and deployed. They include carrying out risk assessments, developing mitigation systems, drafting technical documentation and using training data that meets certain criteria for quality. 

Once the system has met these requirements, a declaration of conformity must be signed and the system can be submitted to EU authorities for CE marking — a stamp of approval that certifies that a product conforms with EU health and safety standards. After this, the system will be registered in an EU database and can be placed on the market. 

An AI system used in a high-risk sector can still fall outside the high-risk category, depending on the exact use case. For example, if it is only deployed for narrow procedural tasks, the model does not need to obtain CE marking.

What about low-risk and minimal-risk systems?

“I think the greater difficulty won’t be around high-risk systems. It will be to differentiate between low-risk and minimal-risk systems,” says Chadi Hantouche, AI and data partner at consultancy firm Wavestone.

Limited-risk AI systems are those that interact with individuals, such as chatbots, or that generate audio, video or text content. These will be subject to transparency obligations: companies will have to inform users that the content they are seeing was generated by AI. 

All other use cases, such as anti-spam filters, are considered minimal risk and can be built and deployed freely. The EU says the “vast majority” of AI systems currently deployed in the bloc will fall under this category.

“If in doubt, however, it’s worth being overly cautious and adding a note showing that the content was generated by AI,” says Hantouche.
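
In practice, such a note can be as simple as a fixed disclosure appended to anything the system generates. The sketch below is a hypothetical illustration; the Act does not prescribe any particular wording or mechanism.

```python
# Hypothetical example of a transparency note for AI-generated content.
# The wording and placement are not prescribed by the Act; this only shows the idea
# of attaching a clear disclosure to model output before it reaches users.

AI_DISCLOSURE = "Note: this response was generated by an AI system."

def with_disclosure(model_output: str) -> str:
    """Append an AI-generated notice to content shown to users."""
    return f"{model_output}\n\n{AI_DISCLOSURE}"

if __name__ == "__main__":
    reply = "Your order has shipped and should arrive within three days."
    print(with_disclosure(reply))
```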

What’s the deal with foundation models?

Most of the AI Act is intended to regulate AI systems based on what they are used for. But the text also includes provisions to regulate the largest and most powerful AI models, regardless of the use cases they enable.

These models are known as general-purpose AI (GPAI) models and are the type of technology that powers tools like ChatGPT.

The companies building GPAI models, such as French startup Mistral AI or Germany’s Aleph Alpha, have different obligations depending on the size of the model they are building and whether or not it is open source. 

The strictest rules apply to closed-source models that were trained with computing power exceeding 10^25 FLOPs, a measure of how much compute went into training the system. The EU says that OpenAI's GPT-4 and Google DeepMind’s Gemini likely cross that threshold.

These companies have to draw up technical documentation explaining how their model is built, put in place a copyright policy and provide summaries of training data, as well as follow other obligations ranging from cybersecurity controls to risk assessments and incident reporting.

Smaller models, as well as open-source models, are exempt from some of these obligations.
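
For a rough sense of where that 10^25 FLOPs threshold sits, a widely used rule of thumb estimates training compute as roughly 6 × parameters × training tokens. The sketch below applies that approximation to two made-up model configurations; neither the formula nor the example figures come from the Act itself.

```python
# Back-of-the-envelope estimate of training compute against the 10^25 FLOPs threshold.
# Uses the common "6 x parameters x training tokens" rule of thumb for dense
# transformer training; the model configurations below are made up for illustration.

THRESHOLD_FLOPS = 1e25  # compute threshold named in the AI Act for the strictest GPAI rules

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer (rule of thumb, not exact)."""
    return 6 * n_params * n_tokens

examples = {
    "7B parameters, 2T tokens": estimated_training_flops(7e9, 2e12),      # ~8.4e22 FLOPs
    "70B parameters, 15T tokens": estimated_training_flops(70e9, 15e12),  # ~6.3e24 FLOPs
}

for label, flops in examples.items():
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{label}: ~{flops:.1e} FLOPs ({status} the 1e25 threshold)")
```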

The rules that apply to GPAI models are separate from those that concern high-risk use cases. A company building a foundation model does not necessarily have to comply with the regulations surrounding high-risk AI systems. It’s the company that is applying that model to a high-risk use case which must follow those rules.

Should you look into regulatory sandboxes?

Over the next two years, every EU country will establish regulatory sandboxes, which enable businesses to develop, train, test and validate their AI systems under the supervision of a regulatory body.

“These are privileged relationships with regulatory bodies, where they accompany and support the business as it goes through the process of compliance,” says Tordeux Bitker. 

“Given how much CE marking will change the product roadmap for businesses, I would advise any company building a high-risk AI system to go through a sandbox.”

What is the timeline?

The timeline will depend on the final vote on the AI Act, which is expected to take place in the next few months. Once it is voted through, the text will be fully applicable two years later.

There are some exceptions: unacceptable-risk AI systems will have to be taken off the market within six months, and GPAI models must be compliant within a year.

Could you be fined?

Non-compliance can come at a high price. Marketing a banned AI system can draw a fine of up to €35m or 7% of global turnover, whichever is higher. Failing to comply with the obligations that cover high-risk systems risks a fine of up to €15m or 3% of global turnover. And providing inaccurate information can be fined up to €7.5m or 1% of global turnover.

For startups and SMBs, the fine is capped at whichever of the two, the percentage or the fixed amount, is lower.
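
To see how those caps combine, here is a minimal, hypothetical calculation. The percentages and fixed amounts are the ones above; the turnover figures are invented for illustration.

```python
# Hypothetical illustration of how the AI Act's fine caps scale with company size.
# The caps come from the article; the turnover figures are invented.

def max_fine(global_turnover_eur: float, fixed_cap_eur: float, pct_cap: float,
             is_sme: bool) -> float:
    """Startups and SMBs get the lower of the two caps, other companies the higher."""
    pct_amount = global_turnover_eur * pct_cap
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Marketing a banned AI system: up to EUR 35m or 7% of global turnover.
print(max_fine(10e6, 35e6, 0.07, is_sme=True))   # startup with EUR 10m turnover: cap is EUR 700,000
print(max_fine(2e9, 35e6, 0.07, is_sme=False))   # large company with EUR 2bn turnover: cap is EUR 140m
```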

“I don’t think fines will come immediately and out of the blue,” says Hantouche. “With GDPR, for example, the first large fines came after a few years, several warnings and many exchanges.”

If in doubt, who is your best point of contact?

Each EU Member State has been tasked with appointing a national authority responsible for applying and enforcing the Act. These authorities will also support companies in complying with the law.

In addition, the EU will establish a new European AI Office, which will be businesses’ main point of contact for submitting their declaration of conformity if they have a high-risk AI system.

Daphné Leprince-Ringuet

Daphné Leprince-Ringuet is a reporter for Sifted based in Paris and covering French tech. You can find her on X and LinkedIn