Matt Clifford is one of the OGs of the European tech scene. When he cofounded talent accelerator Entrepreneur First back in 2011, London tech was still just a blip on the map.
A decade on, EF is the only accelerator in Europe worth getting into, founders say. Alumni include insurtech unicorn Tractable and fintech Cleo, and the group now has branches from Berlin to Bangalore.
Clifford has also been increasingly involved in tech policy in the UK. Since last year, he’s chaired the UK’s Advanced Research and Invention Agency (ARIA), a state-backed R&D funder broadly based on the US’s DARPA.
He’s also recently been appointed an adviser to the UK government on artificial intelligence, sitting on a taskforce looking at foundation models, like OpenAI’s GPT-4, and how they can be used to support the UK economy. It’s a key role given the political urgency in the UK to nurture the country’s own AI talent and capabilities in the face of US giants like Microsoft and Google.
Sifted invited Clifford on our podcast, Startup Europe, for an in-depth interview covering his views on AI policy, on DeepMind, the British AI giant bought by Google in 2014 for £400m, and on what Germany is doing better than the UK. Listen to the full episode here, watch it here or read some highlights below:
On whether the UK should have AI sovereign capabilities
“The specific question that the UK government has asked me to look at with them and help them scope an answer to is, ‘how should the UK think about how it plays in this [AI] space? And in particular, are there any capabilities or infrastructure that supports those capabilities that the UK government or UK public sector generally should own rather than just access?'
“I think some of the things that are clearly up for grabs are: should the UK government commission, for example, a large language model aimed at serving the public sector in some way that's built by UK-controlled companies, rather than by American-owned or controlled companies? Should the UK either own or have exclusive access to large amounts of sovereign compute to train those models?
“Nearly all the cutting-edge research has been done in private labs, like DeepMind and OpenAI and Anthropic, not in universities. There's been this idea floating around for a while, pushed by people like Jack Clark from Anthropic, that maybe countries like the UK need a national research cloud so that academia can remain at the cutting edge. If so, what would that require? And how would we build that?”
On whether the UK should build a competitor to OpenAI
“I think it would be a very unusual decision to say we should just build a full-on OpenAI competitor. I think what you could do is say, well, actually research in alignment [in AI, alignment refers to how we can build models that fit our intended goals] is undersupplied by the market. And maybe there are certain public sector use cases for these models, where investment — particularly if it goes to UK-controlled companies — in alignment to get really robust models… might be an interesting area for governments.
I think it would be a very unusual decision to say we should just build a full-on OpenAI competitor
“I think most people, even those that follow the space, probably have no idea just how fierce the competition for the very best talent has become. It is now not at all uncommon to see low- to mid-seven-figure salaries for the best machine learning talent, people who have worked on or built these large models. That’s another reason, by the way, why I don't think the government deciding to set up an OpenAI competitor is a very good idea. You can just imagine the headlines.”
On DeepMind
“The tragedy of DeepMind is that, when [CEO] Demis Hassabis and co sold back in 2014, the capital ecosystem, particularly for AI, was just much less developed, and that was, arguably, the only real way to get access to the kind of capital that turned out to be required to build models of the scale that DeepMind does. I think that is sad. I think had it not sold and been able to access the capital needed, I think it would be an enormous, UK-owned company today.
“But what I would say is the good news is that that's no longer true. If you look at the pace at which something like Stability AI went from being completely unknown in the UK to raising $100m from Coatue and Lightspeed, it's pretty extraordinary.
“That's my thinking from the investor side. If I were to think about it from a government side, I think, probably — although maybe it would have been a little much to suggest this at the time — it is a shame that the UK government didn't intervene in some way for DeepMind to be an independent company, in my personal view. I think that would have been an extraordinary company to have here as an independent company. I think it still is, though I do think you can slightly overstate it. The fact that it's a UK-headquartered company, aggregating talent here in this way, is an enormously positive thing.”
It is a shame that the UK government didn't intervene in some way for DeepMind to be an independent company
On the EU's AI policy
“When you build an AI model, it’s not neutral. If you've spent any time playing with ChatGPT, you know that it's not neutral. So the question rightly is: whose values? I do think that it would be a real shame for the EU to miss out on playing an active role in shaping that rather than just regulating it.
“Ian Hogarth, a VC at Plural, has this great essay called AI Nationalism, which I think has held up really well. One of the things he talks about, which has definitely turned out to be true, is that you can't opt out of AI. AI is going to change everything. So the real question is: are you going to be an AI taker or an AI maker? If you're an AI taker, the chances are that however you regulate, you're going to be basically taking the value set of the people that made it. So choosing not to build, but just to try and mould after the fact, I think is probably the wrong strategy.”
The real question is: are you going to be an AI taker or an AI maker? If you're an AI taker, the chances are that however you regulate, you're going to be basically taking the value set of the people that made it
On the power of procurement and how Germany has owned that in quantum
“One of the levers that governments everywhere underrate is the power of procurement. I talked about why government might be a particularly good first customer for extremely robust, extremely aligned language models. But that's true across a whole range of emerging technologies. And actually, if you look at the history of technology, it's very clear the role that governments have played in being the customer of first resort.
“A good, plausible counterexample is the level of ambition [with which] Germany is pursuing quantum computing right now. I won't name it in case it's not public, but I met with a quantum computing startup recently that had won a grant from Innovate UK measured in the hundreds of thousands (of dollars) and simultaneously an advanced purchase from the German government measured in the tens of millions.
“There was an update on this in the budget in the UK [in March] that looks like we are going to spend more money in this space. But I certainly think that scale of ambition on the procurement side with respect to quantum computing in Germany is pretty impressive.”