Explainer

July 11, 2025

The EU published its new rulebook for GenAI. What do startups need to know?

New guidelines will help determine how the controversial AI Act will be enforced

Martin Coulter


After months of backroom wrangling with industry leaders, the EU just published a final draft of guidelines which will determine how the AI Act is enforced. 

The AI Act is the first comprehensive set of laws governing the fast-moving technology — but has been criticised by some for putting too much pressure on a nascent industry. 

On Thursday, the European Commission released its final draft of the “code of practice”, a rulebook designed to help organisations using AI interpret the law and ensure they comply with it. 


What's in the code of practice?

The General-Purpose AI (GPAI) code of practice is a voluntary checklist drawn up to help companies interpret parts of the AI Act that remained unclear when the law was passed. (GPAI is an EU term referring to AI models that can perform a wide variety of tasks, like ChatGPT.)

For example, the letter of the law says some companies will have to publish a “sufficiently detailed summary” of the data used to train their AI models, but doesn’t explain what “sufficiently detailed” means. Given ongoing debates around the ethics of AI companies using films, books and other content without permission for model training, that ambiguity has caused problems. 

Since the start of the year, the EU has been convening regular meetings between industry leaders, academics and other experts to debate what should be in the code of practice. The final draft ditched some of the most unpopular measures but retained others. 

“While the safety and security chapter has been slightly streamlined, several elements that industry has flagged as being overly prescriptive — such as mandatory external evaluations — remain in place,” says Miranda Cole, EU and UK antitrust lead at US law firm Perkins Coie. “These could create operational and financial burdens, particularly for smaller players.”

As the code is technically voluntary, it remains to be seen if companies — from startups in Europe to tech giants in the US — will actually sign up. 

What do startups need to know?

As of August 2, a new set of rules governing GPAI will be enforceable across the EU. 

That means three weeks from now, GPAI model providers will be expected to publish summaries of their training data, demonstrate their compliance with copyright laws and make sure everything from model evaluations to risk assessments has been documented.  

On the copyright front, signatories of the code must “commit to implement safeguards to prevent their models from generating outputs that reproduce training data”, says Alex Shandro, partner at law firm A&O Shearman. In practice, that means an AI image-generator shouldn’t be able to reproduce an image from its training data. 

As far as data disclosures go, the Commission’s AI Office is expected to publish a template for companies to follow — but there’s no sign of it yet, leaving room for more uncertainty. 

While the rules technically come into effect in a few weeks, the AI Office won’t start dishing out penalties for another year. According to Emma Wright, partner at law firm Crowell & Moring, it's worth notifying the Commission if a model is going on the market before August 2, as pre-existing models get some leeway (an extra 12 months) to comply. 

"However, there is a counter argument that these highly prescriptive measures set the bar too high and these super-sophisticated tech companies will adopt alternative approaches for demonstrating compliance," she tells Sifted. "That reflects reality both commercially and the industrial strategies of where they are headquartered."


The bulk of the AI Act’s remaining rules will come into force in August 2026, including new laws on “high-risk” AI systems used in sensitive areas — such as those screening loan or job applications, or deployed by the police or in hospitals — which will require even more documentation and transparency. 

While that may seem far off, “from an operational readiness standpoint, we are already running out of time,” says Randolph Barr, CISO at California-based Cequence Security. 

AI systems deemed “high-risk” will need to be documented meticulously — everything from risk mitigation strategies to training data and performance evaluations — and all that information must be kept for up to 10 years after the model is last available on the market. 

Will the AI Act be delayed?

In recent weeks, the EU has been under mounting pressure to pause the rollout of the AI Act, with dozens of startup execs and investors recently signing an open letter — published exclusively in Sifted — calling for the law to be halted. 

But the EU firmly rejected these demands, with a Commission spokesperson saying: “Let me be as clear as possible, there is no ‘stop the clock’. There is no grace period. There is no pause.” 

Tim Hickman, partner at law firm White & Case, tells Sifted: “Many industry voices had pushed for a pause — a ‘stop the clock’ moment — to allow time for practical guidance and smoother implementation. The Commission has indicated that there will be no pause, so businesses will need to press ahead with their compliance roadmaps.

“To an extent, the code helps understand the likely enforcement position that the Commission will adopt on these issues.”

Martin Coulter

Martin Coulter is Sifted's news editor, based in London. You can follow him on LinkedIn and X
