The field of AI regulation looks set to become the next point of divergence between Britain and the European Union: both have recently outlined their approaches, and the contrast is stark.
The EU’s AI Act is prescriptive, risks stifling innovation and could blunt Europe’s edge in AI. Britain, on the other hand, has gone rogue with a government white paper that favours a hands-off approach with no plans to introduce new legislation. That could be bad news for the rest of Europe.
Why? Britain ranks third behind the US and China for spending on AI research and development, according to the UK government, while a third of Europe’s AI companies are based in the country. By pursuing such a contrasting approach to regulation, the UK is making it harder for European companies to compete in a vital market — perhaps deliberately.
Fumbling cross-border collaboration
The UK’s approach, in many ways, is a positive step for startups building AI products. Heavy-handed government intervention rarely works for business, and business must be at the heart of AI regulation as it develops.
Yet it’s still unclear how the UK’s framework will allow its homegrown companies to collaborate with organisations in the EU, and vice versa. While the white paper rightly discusses interoperability and the importance of working across borders, it's also framed as an opportunity for the UK to assert its post-Brexit sovereignty.
The essential part of strong AI regulation is working across borders to create an environment that allows organisations to tackle the world’s most pressing problems. Forget the headlines around ChatGPT and the rest. We will not make progress on issues such as climate change or healthcare — where AI genuinely could enable a breakthrough — if we have a network of barely compatible regulatory regimes around the world.
Hazy ground
AI regulation is coming, and we need to get it right: safety and good governance balanced with enough legroom for companies to experiment and innovate. We don’t want a repeat of the mess around the regulation of social media companies.
The problem is that what we have are strategies and some nice ideas, but nothing you could confidently say will define AI regulation for the next decade or more. What’s lacking is decisiveness, especially in the UK.
That’s bad news for founders, who can’t plan the future of their businesses around vague proposals. While the UK white paper contained some promise, it lacked precision in key areas.
For example, we know that responsibility for regulating AI will be spread across multiple government departments, but we are left in the dark about how they will deal with the challenge of continuously responding to new innovations. We know the government intends to support interoperability across different regulatory regimes, but not how that will be implemented. And while the UK does not “intend” to introduce legislation, that wording suggests legislation isn’t being ruled out completely. These uncertainties make it difficult to make big decisions about a company’s future.
Being crystal clear, decisive and fast is the only way for regulators to deal with an AI industry that is on the cusp of delivering profound change to the way we live our lives.
All about the data
Ultimately, the success or failure of AI comes down to data. Big Tech companies are building the foundational models upon which AI products of the future will be based or buying up companies that have the technical capacity to do it for them. For companies outside that elite circle, the big opportunity is to customise these models using their own data and apply them to specific real-world problems.
Doing that requires confidence about how data can be used, particularly around privacy and trust. This is where regulation comes in, and where European companies are at a disadvantage.
While the UK’s light-touch approach — which includes a “regulatory sandbox” concept for experimenting with different ideas — will encourage innovation and collaboration between organisations, it's a different story in Europe. Companies operating in Europe face a much more prescriptive regime, which is likely to discourage the sort of data collaboration required to drive progress in AI. By going rogue with its AI regulation, the UK is both laying down a challenge and erecting a barrier to the rest of Europe.
One solution is federated learning, a technology that allows AI models to be trained on distributed datasets, leaving the data where it resides. This enables organisations to access complementary datasets to collaboratively train AI models while retaining their competitive edge. It also means the companies that own the data can leverage it to securely build more powerful AI models.
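To make the idea concrete, here is a minimal sketch of federated averaging, the basic algorithm behind most federated learning systems. Everything in it is illustrative: the toy linear model, the simulated clients and all the names are hypothetical rather than taken from any particular framework. The point to notice is that each organisation trains on its own data locally, and only model weights ever travel.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=5):
    # Gradient descent on a linear model, run where the data lives.
    # The raw (X, y) never leaves the organisation that owns it.
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three organisations, each holding a private dataset drawn from the
# same underlying relationship (true_w) plus noise.
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

# Federated averaging: a coordinator sends out the current global
# weights, each client trains locally, and only the updated weights
# come back to be averaged (weighted by dataset size).
global_w = np.zeros(3)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    global_w = np.average(local_ws, axis=0, weights=sizes)

print(np.round(global_w, 2))  # approaches true_w without pooling data
```

In a real deployment the coordinator and clients sit in different organisations, and techniques such as secure aggregation can prevent even individual weight updates from being inspected, but the division of labour is the same: the data stays put, and only the model moves.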
The prize for getting AI regulation right is enormous. It will mean we provide safety and governance in an industry that could easily spiral out of control. Both the EU and the UK still have every chance to get AI regulation right, but to do so we need to get serious about how we regulate. That will require governments to invest in and embrace innovation across borders, in particular around AI safety and governance.