The world needs a better understanding of AI frontier models before pressing pause on the technology, according to UK government AI advisor Matt Clifford, who is helping spearhead a summit on its risks.
The November 1-2 event — the first of its kind in the world — comes amid an acceleration in AI innovation globally that has governments jostling to take a leadership role.
While some UK officials have speculated about the possibility of the creation of a UK-based international body for AI safety post-summit, Clifford says in an interview with Sifted that “it would be absurd for a new institution” to come out of the summit given engagement with participating countries began only weeks ago.
While some business leaders like Elon Musk have called for a moratorium on AI development, Clifford says, “Personally, I think there’s a lot more work that needs to be done on the identification [of risks] before we get to that”.
“The thing that I always think that’s most chilling is we’re so bad at thinking in exponentials as a species. And, you know, we all went through a two-and-a-half-year forced experiment in learning about them. And to me, this is the January 2020 moment in AI,” he says.
“It’s really at this point [that] we’re about to launch another mass experiment where we build massively increased investment in these models, which is also why, by the way, all those positives might be possible. But if we don’t understand the capabilities, and we don’t have plans in place for how we evaluate those capabilities, that I did find quite scary.”
That’s why the summit has been designed to focus narrowly on risks, he says: the creation of bioweapons, more powerful cyberattacks and the nightmare scenario of creators losing control of their own models. The aim is to work towards a globally shared understanding of AI so society can reap its benefits.
What the summit won’t discuss in detail, despite increasing public interest, is the advent of artificial general intelligence (AGI), a hypothetical type of intelligent agent.
“AGI is one part of that loss of control idea, but it’s not the only part, so actually one could, in theory, be a complete AGI sceptic and still feel that extreme or catastrophic risks from the 2024-era models is a very real possibility,” Clifford says. “What we won’t do is open the summit up so broadly that people who want to talk about something completely different dominate the conversation.”
Clifford, who runs accelerator Entrepreneur First and has taken 12 weeks of leave to work on the event, acknowledges that the gathering’s limited scope might feel disappointing for some. “There’s that famous tweet where a guy says: ‘The problem of Twitter is that if you tweet, you know, I really like waffles, someone replies and says, When are you going to get into pancakes?’ Sometimes it feels a bit like that with the summit: we’ve chosen a narrow focus, not because we don’t care about all the other things but because it’s the bit that feels urgent, important and neglected.”
Bridging the gaps
Britain’s decision to host an AI summit this autumn, when the diplomatic agenda is crowded with AI discussions — including at the G7 and the Global Partnership on Artificial Intelligence (GPAI) meeting in New Delhi on December 12-14 — has raised eyebrows in Europe. The EU and the US are also discussing AI at their bilateral Trade and Technology Council meetings.
But Clifford says the UK is “not trying to compete with, dominate, or replace any of those” and instead wants to “broaden the conversation out” to include both developed and developing countries: those with leading companies building AI frontier models and those that will become consumers of the technology.
China is the only country the UK government has confirmed it has invited, but the US, six EU member states (France, Germany, Ireland, Italy, the Netherlands and Spain), the European Commission, Canada, South Africa, Brazil and India are also on the guest list, sources told Sifted.
China’s role at Bletchley Park, once Britain’s top-secret home of World War II codebreakers and the venue for the summit, remains unclear, with several UK officials pointing out that Chinese officials won’t participate in the same capacity as those from other governments.
Clifford acknowledges it is “extremely unlikely” that China and the West will be in “perfect alignment” on how to tackle AI risks, but says it’s in their common interest to mitigate them. “In my conversations with Chinese academics, business leaders and government, it’s very clear that they’re thinking hard about this topic,” he says.
The EU, which is finalising a large piece of AI legislation, is keen to highlight its work in this area, marking a contrast with the UK, which is wary of stifling innovation by regulating too much or too early, according to one EU diplomat. Clifford says the summit will not get into domestic regulation unless it’s raised independently by speakers.
“My sense talking to colleagues in Brussels is that they do see the value of a focused conversation on frontier AI,” Clifford says. “I think the EU AI Act touches on some of these issues but we’re all in this weird position where whatever you write down in month one, technology moves so fast that by month six [it] has moved again.”
A global watchdog?
Clifford anticipates Bletchley Park will pave the way for a “whole series of bilateral and multilateral collaborations” and leaves the door open to a more formal structure in the future, once the risks are better understood and countries build domestic watchdogs that can talk to each other. At the summit, the UK will share the experience of its own Frontier AI Taskforce, which Clifford sees as a “pretty extraordinary example of building state capacity quite rapidly”.
“Until you have more countries having that shared understanding, it feels premature to me to rush to an international institution,” he says, noting: “We want to be really careful that it doesn’t, at any stage, look like the UK is trying to gain some strategic advantage of this.”
Smaller AI companies, meanwhile, are voicing dissatisfaction at not being invited, saying the debate has been hijacked by Big Tech. Critics will hold an AI Fringe, a series of events between October 30 and November 3 designed to widen the discussion beyond frontier models and involve a more diverse range of voices.
Clifford welcomes the parallel event, countering that the leaders of the AI companies developing large models, most of which are headquartered in the US, have “internalised the trade-offs” and so far behaved in a “thoughtful and sincere” way. “It won’t be acceptable to the public anywhere for the companies to rate their own papers, mark their own homework,” he adds.
Startups developing applications with limited uses of AI shouldn’t want to be part of that conversation, which is about ensuring companies developing the most powerful models face “uniquely onerous scrutiny,” Clifford continues. “If you’re building a narrow AI system for mammography, you’re not in that club, don’t worry about it.”