February 3, 2025

EU AI Act explained: Which rules come into effect this week?

A fresh set of rules governing the technology has come into force across Europe


Martin Coulter

Companies caught using banned AI tools face fines of up to €35m.

The European Union’s sweeping AI Act is steadily rolling out across the continent, with organisations using the technology facing a host of new obligations.

A fresh set of regulations came into force on Sunday, with some systems banned altogether, and staff at companies using the technology expected to reach a certain level of “AI literacy”. 

Here’s everything you need to know about the rules coming into force this week.

What is the EU’s AI Act? 

The EU formally adopted the AI Act in March 2024, after months of tense negotiations between warring political factions over the scope of the regulations. 

By far the most comprehensive set of laws governing the technology anywhere in the world, the Act introduces new obligations for organisations using AI, emphasising ethics, safety and transparency.

While the Act technically came into force in August 2024, companies have had six months to get up to speed on the first set of rules, which officially kicked in on Sunday, February 2.

As of then, rules around AI literacy and bans on certain AI systems have been legally enforceable. 

What’s banned? 

As of this week, AI systems used for any of the below purposes are banned in the EU: 

  • Biometric categorisation: For example, using AI to try to work out someone’s race, religion or political views based on their physical features or clothing.
  • Subliminal manipulation techniques: AI can’t be used to try to influence someone’s behaviour without their knowledge.
  • Emotion recognition: Such as a voice recognition system being used to register frustration or satisfaction from a customer on a call.
  • Social scoring: For example offering someone a job based on their ethnicity or place of birth.
  • Facial databases: Indiscriminately scraping images of people from the internet in order to create a database of faces is banned.
  • Real-time biometric identification: This means law enforcement agencies can’t use AI tools like facial recognition to identify people in public places, unless they’re trying to track down a missing person or someone accused of committing a crime.
  • Predictive policing: The Act says AI can’t be used to “​predict the likelihood of a… person committing a criminal offence”. The practice has been trialled around the world, including in China and some US states.
  • Exploiting people’s vulnerabilities: Including children, people with disabilities or those who face language barriers.

What is AI literacy? 

Under the AI Act, organisations using the technology will be responsible for ensuring employees have a reasonable understanding of how it works, depending on how important it is to their specific role.

For example, those in HR or marketing departments are generally only expected to have a “basic awareness” or “competent understanding” of things like risks of bias in an AI’s output or data security. 

However some professionals, for example in legal services or healthcare, will be expected to have a “competent understanding” or “advanced proficiency” in these areas. 

"The requirement is broad and generic, which also provides businesses with room for manoeuvre," says Dessislava Savova, partner and head of the continental Europe tech group at law firm Clifford Chance.

"In any event, there is no 'one size fits all' approach and various factors need to be taken into account when devising and implementing an AI literacy and training programme."

Companies in the EU using AI will be expected to track internal training and assessments, with officials empowered to audit these records. 

What happens if a company breaks the rules? 

With regard to AI literacy, the Act doesn’t set out a fixed scale of penalties, but fines can be imposed depending on how severe a breach is deemed to be.

However, companies caught using AI for any of the purposes listed above risk fines of up to €35m or 7% of annual global turnover, whichever is higher.

What happens next? 

The next key date for AI companies in Europe is August 2, 2025, when rules around general-purpose AI systems, such as OpenAI’s ChatGPT, will come into effect.

Lawmakers are still consulting with industry and academic experts behind the scenes on exactly how those rules will be enforced, with practical guidance expected to be published in the spring. 

Martin Coulter is Sifted's news editor, based in London.