February 6, 2024

AI startups stay cool as UK signals ‘binding’ safety requirements

British government acknowledges AI too powerful to be regulated only through voluntary commitments

The most cutting-edge AI systems developed in the UK may be subject to “binding” safety requirements, the British government says, but startups building with the technology seem reassured that the country’s approach will be good for business.

UK AI companies can already subscribe to voluntary commitments on AI safety but, for the first time, the government has acknowledged that in some circumstances these may not be enough.

Businesses developing “highly capable general-purpose AI systems” in the UK could face “targeted binding requirements”, the government said today, without giving more details. These requirements will be aimed at ensuring that AI companies operating in Britain are “accountable for making these technologies sufficiently safe”.

The new approach was detailed in the government’s response to a public consultation on how the technology should be regulated.

But ministers insisted they do not plan to follow the EU, which last week finalised the world’s first AI Act, regulating a number of use cases for the technology and imposing fines for non-compliance.

The UK mostly wants to stick to its approach of tasking existing regulators in various sectors — from telecoms to healthcare to finance — with overseeing the rollout of AI in their areas and setting the rules that should govern the technology. A new steering committee, set to launch in the spring, will coordinate the activities of the UK regulators overseeing different AI applications.

‘Careful balance’

British AI startups welcomed the government’s outline, saying it appears unlikely to stifle innovation.

“We’re pleased to see the UK focus on regulator capacity, international coordination and lowering research barriers as startups across the nation have expressed these as critical concerns,” says Kir Nuthi, head of tech regulation at the industry association Startup Coalition.

Marc Warner, CEO and cofounder of Faculty, said it was “reassuring to see the government strike a careful balance between promoting innovation and managing risks”, and warned “it would be disastrous to stifle innovation” by overregulating narrow applications of the technology, such as AI helping doctors read mammograms.

Emad Mostaque, CEO of British AI unicorn Stability AI, one of a handful of UK companies that could be subject to binding requirements, did not specifically comment on their potential imposition. He did say that the government’s plan to upskill regulators is “critical to ensuring that policy decisions support the government’s commitment to making the UK a better place to build and use AI.”

The government’s focus on improving access to AI, he adds, will “help to power grassroots innovation and foster a dynamic and competitive environment”, as well as boosting “transparency and safety”.

Upskilling regulators

The government also announced it will spend £10m on training regulators on the risks and opportunities of the technology.

Darren Jones, Labour’s shadow chief secretary to the Treasury, had previously warned that “asking regulators without the expertise to regulate AI is like asking my nan to do it” (nan is a British term for grandmother).

Two of Britain’s biggest regulators, Ofcom and the Competition and Markets Authority, will have to publish their approaches to managing AI risks in their fields by April 30.

The Department for Science, Innovation and Technology said it will spend £90m on the creation of nine research hubs on AI safety across the UK, as part of a partnership with the US, agreed in November, to prevent societal risks from the technology. The hubs will focus on studying how to harness AI in healthcare, maths and chemistry.

Cristina Gallardo

Cristina Gallardo is a senior reporter at Sifted based in Madrid and Barcelona. She covers Europe's tech sovereignty, deeptech and Iberia.