October 30, 2023

UK gears up for major AI summit under the shadow of its own lack of regulation

Britain is eager to stake its claim as a leader in AI by hosting the world’s first global AI summit this week, but does it lead by example? Some think not

Bletchley Park

Amid the surge in AI innovation globally, Britain is eager to stake its claim as a leader in the technology by hosting the world’s first global summit on AI this week. 

But as the Nov 1-2 event draws closer, entrepreneurs and diplomats wonder if the world’s sixth-largest economy has done enough to cement its position as the initiator of international AI policy. On one hand, the country does not have the AI giants of the US; on the other, it has dragged its heels on debating concrete regulation — unlike the EU next door, which is sprinting to enact laws on AI. 

“One thing that’s amusing is the idea that on the one hand, there’s this positioning as being the home of AI regulation, while at the same time actually having done very little about it,” says Eric Topham, CEO and cofounder of UK data security startup OctaiPipe. 


Officials involved in summit preparations are growing concerned at the scale of the challenge: pushing countries with opposing regulatory approaches towards a common position on AI, including some, such as the US and China, that are entrenched in a long-term fight for technological supremacy.

Not walking the walk

The first challenge is that Britain’s own cautious approach to AI is undermining its efforts to persuade others, AI entrepreneurs say. The UK has rejected calls to pause AI development and, at least until there is a better understanding of frontier models, has opted for a hands-off approach that so far excludes new legislation. 

“It’s hard to regulate something if you don’t fully understand it,” UK prime minister Rishi Sunak said on Thursday.

“I found some of the prime minister’s statements, on the one hand scaring people and then saying ‘but I will look after you’, a little bit disingenuous…” says the CEO of one of Britain’s top AI companies.

Some UK-based founders argue Britain will see its influence over businesses curtailed if it refuses to regulate. Washington, they say, will hold the biggest AI companies, most of them US-owned, to account, while the EU will have its own AI Act in place within two years. 

In the run-up to the Bletchley Park gathering, UK officials are pressing participating governments to agree on a common regulatory framework on frontier models by next summer, one UK official says, in time for the release of the next wave of cutting-edge AI systems.

The UK’s bid to position itself as a global AI safety hub without legislating on the issue has also raised eyebrows in the EU.

“The problem with this summit is that they pretend nothing has been done before,” says a diplomat from a European country taking part in the event. “But the EU has already done a lot of work on AI safety so we don’t start from scratch… The European Commission will highlight or repeat what’s been done in the EU just to showcase that we don’t start from nothing.”

Startups feel alienated

AI entrepreneurs also worry that the event’s focus on frontier models and top US players could alienate young companies actually pushing AI innovation forward. 

About 100 people are expected to attend the summit, including political leaders such as European Commission president Ursula von der Leyen and US vice president Kamala Harris, as well as chief execs of OpenAI, DeepMind and Anthropic, and representatives from Adobe, Amazon, Meta, Google and Microsoft.


The CEOs of British AI unicorns Graphcore and Stability AI, as well as those of Faculty, Darktrace and Builder.AI, have also been invited, but most UK startups will only be allowed to take part in the first-day discussions.

“I feel it’s a giant slap in the face, especially when you go back to the [summit’s] mission, which is to show that the UK is a leader in both AI and AI regulation,” says a UK-based AI founder. “But you have barely invited anyone that you want to regulate.”

Founders also want the UK to pay more attention to immediate risks such as bias and manipulation, which will not be a priority at the summit, rather than to speculative conversations about super-powerful frontier models.

“There’s either an element of grandstanding around this idea of frontier AI, or there’s an element of not really understanding what’s important,” OctaiPipe’s Topham says. 

“It’s all well and good to talk about frontier or [artificial general intelligence], but in the real world where real engineering is happening, we’re concerned more with the issues that exist in narrower applications of machine learning and the impacts that they can have on people.”

One AI startup founder who is attending the event added that frontier models shouldn’t be the “be-all and end-all” of AI safety: “It’s like getting a shiny toy in front of everyone, getting them all distracted with the shiny toy.”

Voluntary commitments

Those attending the second day of the summit — when Sunak will chair a discussion on tackling the risks of frontier models — are expected to be pressed to sign up to a new voluntary code of conduct issued on Monday by the G7 group of countries, and to implement the voluntary commitments they made as part of an agreement with the White House earlier this year. These include conducting internal and external testing of their AI systems before release to guard against cybersecurity risks. 

Government leaders are also expected to sign a joint statement setting a framework for international collaboration on risks; expressing concern at the possibility of AI causing “significant, even catastrophic harm”; and warning of bad actors using AI to create bioweapons and launch stronger cyberattacks. The text may also include wording around the need for AI companies to increase transparency around their models, by being subject to “evaluation metrics” and other safety tools. 

Sunak intends to pitch fellow leaders a new AI research network, formed of international experts and modelled on the Intergovernmental Panel on Climate Change, which would publish an annual state of AI science report. Members would be nominated by the countries attending the summit.

The event will yield at least one other tangible outcome: a UK-based AI Safety Institute tasked with evaluating the national security risks posed by new frontier AI models, and with making at least part of its findings available to the world. The institute would build on the work of the government’s Frontier AI Taskforce, an advisory body led by tech entrepreneur Ian Hogarth. 

UK technology secretary Michelle Donelan will also hold a meeting with counterparts from participating countries to agree next steps, including the host country of the second summit. The government envisages these events taking place every six to twelve months to ensure policymakers can keep up with the pace of AI development.

Founders remain sceptical as to whether, even with these results, Britain can become the world’s leader in AI policy. 

“A lot of the recommendations are not mandatory in any way,” says the CEO of one of Britain’s top AI companies. “They’re asking companies to take certain steps but I’m not sure that’s going to build the level of trust that is going to be required in the system. If I were being cynical, I would say this is the UK trying to look important in the world of AI, but the reality is that most of the innovation is happening in the US and China.”

Cristina Gallardo

Cristina Gallardo is a senior reporter at Sifted based in Madrid. She covers tech sovereignty and Iberia. Follow her on X and LinkedIn

Tim Smith

Tim Smith is a senior reporter at Sifted. He covers deeptech and all things taboo, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn