Analysis

November 2, 2023

5 takeaways from the UK AI safety summit

Didn’t make it to Bletchley Park this week? This is what you need to know

Rishi Sunak and Ursula von der Leyen on day two of the UK AI Summit

As the curtain comes down on the UK’s AI safety summit, Sifted takes you through the top five discussions among policymakers and industry leaders at Bletchley Park.

The summit was broadly hailed as a diplomatic success because it managed to bring together senior Chinese and US officials around the same table, and was bolstered by the participation of European Commission president Ursula von der Leyen and even tech entrepreneur Elon Musk.

1. The UK might have been hosting, but the US was calling the shots

The UK may have been the host of the world’s first AI safety summit — but the US, home to most of the world’s AI giants, made it clear that it is the one calling the shots.

US vice president Kamala Harris said the US’s voluntary commitments to responsible AI practices, adopted by a range of large American AI labs, are just an “initial step” toward legislation on AI safety. Any such legislation would add to a raft of US safety policies already announced.

“We intend that these domestic AI policies will serve as a model for global policy, understanding that AI developed in one nation can impact the lives and livelihoods of billions of people around the world,” Harris said in a speech on Wednesday.

That might be a hard pill for the EU to swallow: the aim of the bloc’s delegation was “selling” the EU AI Act, “the first ever legislation on AI”, at the summit, said Commission vice president Věra Jourová, who stressed that the EU law can “go further” than the US executive order on AI published this week.

The EU delegation emphasised that even though some of the most powerful AI models are being developed in the US, they will have to comply with EU law to operate inside the bloc, and advocated for the right of different jurisdictions to regulate as they see fit.

UN secretary general António Guterres wasn’t happy either, insisting the UN should be the forum for global decisions on AI safety. “One country or group of countries cannot dominate,” he said on Thursday.

2. The UK says it’s in no rush to pass any AI laws

The UK insisted at the summit that it’s in no rush to pass new laws on AI — hoping it can boost its AI businesses with its innovation-first approach.

The UK has so far declined to legislate, unlike some of its biggest competitors such as the EU and Canada, and some Britain-based founders worry it is not clear what they must do to ensure their products do not cause harm to society.

The UK’s technology secretary Michelle Donelan said the government will release more details on how it intends to regulate AI “by the end of the year”, but comments from Jonathan Berry, the UK’s minister for AI, suggested that no legislation will be debated in parliament next year.

Donelan said the UK was trying to move towards an “evidence-based empirical approach” and focus on measures that could be implemented faster than legislation.

“We are interested in applying the solutions, once we fully understand the problems,” she said. “Are we ruling out legislation? Absolutely not.”

Leaders at the summit were told to think of safety measures that could be in place within the next five years. 

Mustafa Suleyman, cofounder of Google DeepMind, told reporters he did not think current frontier AI models posed any “significant catastrophic harms”, but said it made sense to plan ahead as the industry trains ever larger models.

3. France fights for open source AI development

The summit exposed a split over open source models, with the US and the UK on one side, and France and some other countries in the EU and the Global South on the other, said Mariano-Florentino (Tino) Cuéllar, president of the Carnegie Endowment for International Peace.

Open source means that a model’s developers let the public develop, modify and iterate on it. Meta’s LLaMA model, for example, is open source.

Open source advocates were very vocal on the need to make sure that AI technology is available to multiple countries, but there was also “some recognition that open source models carry risks that have not fully materialised”, Cuéllar said. 

The French government fought hard for open source inside the room, said Jean-Noël Barrot, France’s junior minister in charge of digital issues. France’s AI darling, Paris-based Mistral, is an open source startup.

“We shouldn’t discard open source upfront,” Barrot said. “What we’ve seen in previous generations of technologies is that open source has been very useful both for transparency and democratic governance of these technologies; it has helped us ensure competitive equity and has prevented in certain sectors the development of monopolies, which are detrimental to innovation.”

“There is a very interesting discussion to be had about open source,” conceded Berry, noting that the UK does not yet have a clear stance. “Our position is listening and learning.”

4. Protectionism has not gone away

The most concrete result to come out of the summit was the announcement of plans for the UK and US to set up national AI safety institutes — to help AI startups prevent societal risks and threats to national security from the most advanced models. 

Britain was the first country to float the idea of a national safety institute. But in the run-up to the summit, US officials grew wary of giving this UK expert body too much access to US-made frontier models, despite existing information-sharing deals between a number of US AI developers and the UK government.

The impasse was resolved by the US’s decision to create its own national safety institute, announced on the first day of the summit.

UK prime minister Rishi Sunak said the institutes will be able to test the models before they are released — calling this a “truly landmark agreement”.

The two institutes will enter into a formal agreement to work closely together on their assessments of threats from new, more powerful models. On Thursday, the UK announced that Singapore will also collaborate with the institutes on safety testing. The idea is to widen the network and connect with other organisations working on AI safety elsewhere in the world.

5. There’s no safety without China

China was a key participant at the summit, given its role in developing AI. 

Several officials from other governments described its involvement in the summit as constructive. It was also a victory for those who argue the Chinese government needs to be part of high-level discussions on AI safety and transparency, given that some of the world’s biggest industry players are Chinese companies.

“They are too big to be ignored,” said Jourová, who visited Beijing in September to hold talks on AI and international data flows. “I think it was important that they were here, also that they heard our determination to work together, and honestly, for the really big global catastrophic risks, we need to have China on board.”

However, trust in China remains very limited. The UK and its closest allies wanted China to take part only in discussions about risks, and excluded the Chinese delegates from sessions on priorities for AI development over the next five years and on opportunities for international collaboration.

In a big diplomatic coup for host nation Britain, China signed the Bletchley Declaration, a joint statement committing to working with the US, the EU and all the other countries attending the event to collectively manage the existential risks from AI. 

But policymakers must now work out what the shared language in the statement means in practice for companies, including startups. South Korea and France will host the next AI safety summits, within six months of each other, with the goal of converting the Bletchley Declaration into “more concrete and tangible” actions, Berry said.

The tough work has just begun.

Cristina Gallardo

Cristina Gallardo is a senior reporter at Sifted based in Madrid and Barcelona. She covers Europe’s tech sovereignty, deeptech and Iberia.