News

November 1, 2023

The UK and US are setting up national AI safety institutes — what that means for startups

These organisations would work together to prevent harm from AI, but not everyone is convinced

The most concrete result to come out of the UK’s highly anticipated AI summit was the announcement of plans for the UK and US to set up national AI safety institutes, intended to help prevent societal risks from the most advanced AI models.

These organisations would help assess AI models and their potential to cause existential threats, but they could also mean that startups have to wait longer to get their hands on the most cutting-edge systems from big tech players.

The fact that the US and the UK have chosen to work so closely on this issue is an important step in building public trust in AI, as startups try to convince consumers that runaway algorithms are not going to go Terminator on us (as some tabloids would have us believe).

What would the institutes do?

The institutes would essentially try to study and mitigate the risks of new, powerful AI systems, though they wouldn’t have any direct rule-making power. They would also exchange information and set up joint research projects on new safety tools, partnering with academia, industry and non-profits.

The UK institute will be built on the government’s Frontier AI Taskforce, a panel of AI experts led by tech entrepreneur Ian Hogarth. The US’s AI Safety Institute will sit within the US Department of Commerce’s National Institute of Standards and Technology (NIST). 

The US’s institute will also develop technical guidance for regulators working on issues such as watermarking AI-generated content and tackling harmful algorithmic discrimination.

The institute’s findings will help NIST set standards for security and testing, and provide testing environments for known and emerging risks of frontier AI.

The US and British institutes will work together closely, policymakers said, potentially exchanging information on the risks from each other’s models. 

“We need to be working in a partnership, sharing some of our expertise, some of our intelligence and that is certainly what the world needs to be doing,” the UK’s technology secretary Michelle Donelan tells Sifted. “We’ve all been doing our individual work in silos. That’s got to stop now. We need to work together as well as independently.”

What do investors and founders think?

AI company leaders attending the summit were optimistic about the UK government’s efforts to establish a combined push on safety, and said that the plan for AI safety institutes is unlikely to impact smaller businesses directly. 

“As I understand it, the safety institutes are geared towards these very frontier models that are costing hundreds of millions, if not billions of pounds to train. Those are not the preserve of startups,” said Marc Warner, CEO of B2B AI implementation startup Faculty.

Tech entrepreneur and X-owner Elon Musk said he was supportive of the idea of establishing an “independent referee” that “can observe what leading AI companies are doing and at least sound the alarm if they have concerns”, but cautioned governments against “charging in with regulations that inhibit the positive side of AI”.

Some in the tech community were less impressed by the summit’s concentration on “catastrophic” risks from highly advanced systems — which was highlighted in the “Bletchley Declaration”, a joint statement signed by attendees of the summit. 

Nathan Benaich, founder of AI investment firm Air Street Capital, tells Sifted that he believes this “narrow focus on extreme risk” does not “reflect the balance of opinion in the AI community” and could concentrate power in the hands of a small number of big companies.

“No one is denying the importance of building robust, safe systems, but the answer lies in an open ecosystem with a diversity of participants,” he says. “Alarmism, however sincerely motivated, risks slamming the door on open source and handing the future over to a small number of large companies.”

Will other countries follow suit?

British officials will tomorrow attempt to persuade a host of other governments to follow suit with their own national institutes, an official involved in the talks said. 

The EU is aiming to create a “permanent research structure” under its AI Act, which will “have to cooperate” with the UK and US institutes to avoid duplicating their work, European Commission vice president Věra Jourová said.

Jean-Noël Barrot, France’s junior minister in charge of digital issues, tells Sifted that the network proposal is “very useful” and that expertise centres launched in France and Canada under the banner of the French-led Global Partnership on Artificial Intelligence (GPAI) “will naturally connect” with the UK and US institutes.

“We need to work together in order to develop a fine-grained understanding of what these vital risks are,” he says, warning this work should not distract policymakers from also addressing more immediate risks such as privacy and copyright breaches and exploring the benefits of open source models. 

But not everyone’s convinced by the network model.

The Dutch minister for digitisation Alexandra van Huffelen tells Sifted that she would prefer countries to pool their research and safety efforts, in a structure more similar to the Intergovernmental Panel on Climate Change.

Cristina Gallardo

Cristina Gallardo is a senior reporter at Sifted based in Madrid and Barcelona. She covers Europe's tech sovereignty, deeptech and Iberia. Follow her on X and LinkedIn

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn