October 30, 2023

G7 countries agree voluntary AI code of conduct

Europe, Japan, Canada and the US ask their AI companies to commit to testing, transparency and watermarks, among other measures

G7 governments want AI companies to voluntarily commit to testing their most advanced models for a range of potential risks, boosting their cybersecurity defences and using watermarks for AI-generated content.

Leaders of the Group of Seven (G7) countries (Canada, France, Germany, Italy, Japan, the UK and the US), as well as the EU, on Monday published guiding principles and an 11-point code of conduct to “promote safe, secure, and trustworthy AI worldwide” aimed at companies developing the most advanced AI systems.

The code is the first concrete guidance on what AI companies in G7 countries will be encouraged to do. It urges companies, including startups, to assess and tackle risks emerging from their AI models, and identify patterns of misuse that could emerge once consumers start using their AI products. The G7 governments are trying to persuade AI companies to commit to the code, but a list of signatories has not yet been released.

Its release comes just two days before Britain hosts leaders and AI industry representatives from all G7 countries and beyond for an AI Safety Summit at Bletchley Park.

“The potential benefits of Artificial Intelligence for citizens and the economy are huge,” said European Commission president Ursula von der Leyen. “However, the acceleration in the capacity of AI also brings new challenges.”

Hiroshima AI process

The G7 code is one outcome of the so-called Hiroshima AI process, a forum of G7 ministers chaired by Japan, which kicked off in May with the goal of coming up with safeguards across its member countries, some of which have notably different regulatory approaches to AI. 

Companies agreeing to follow the code should test their AI systems at several points throughout the AI lifecycle, and pay special attention to whether they could be used by criminals to develop chemical, biological, radiological or nuclear weapons; to launch stronger cyber attacks; to interfere with critical national infrastructure; to train other models; or to promote disinformation, discrimination or harmful bias.

Developers of AI models should also keep documents of reported incidents with their models and publish transparency reports detailing the capabilities of all new significant releases of advanced AI systems, the code says.

Another code commitment aims to prevent people being misled, by requiring AI companies to label content generated by their AI models with watermarks or other disclaimers. Signatories would also commit to investing in cyber and physical security to prevent their algorithms and data from being stolen, the document adds.

Both the principles and the code will be updated as necessary, to ensure they keep up with the fast pace of AI development, the G7 leaders said in a joint statement.

Cristina Gallardo

Cristina Gallardo is a senior reporter at Sifted based in Madrid and Barcelona. She covers Europe's tech sovereignty, deeptech and Iberia.