Interview

August 30, 2024

“We can control AI” — Brunch with DeepMind’s first policy chief, Verity Harding

The former global head of policy at DeepMind says her worries around AI are less about the technology itself, more about humans disengaging


Tim Smith


Few people understand the delicate interplay between politics and technology as well as Verity Harding.

Having started her career as a special advisor to UK deputy prime minister Nick Clegg, she went on to serve as head of security policy in EMEA for Google, before becoming AI lab DeepMind’s first global head of policy.

Her mission at DeepMind? Making the case for how humanity can get the best out of its most powerful technology, in fields such as health and climate, while avoiding its pitfalls.


“In all the GenAI madness, I haven't seen much about climate tech. Some of what you're seeing with AI at the moment is, to me, cool, but not necessarily inspiring,” Harding says.

Today she’s the founder of Formation Advisory Ltd, an AI consultancy for businesses, and director of the AI and Geopolitics project at Cambridge University’s Bennett Institute for Public Policy — and is promoting her book, AI Needs You.

The book looks back on big technological developments in history, making the case that humans have more control over the direction of breakthrough innovation than we might think.

Speaking freely

She meets me at the doors of London’s Shoreditch House with all of the enthusiasm of someone who’s enjoying life, free of the need to stay strictly “on-message” that comes with working inside government or a big multinational like Google.

“If you're inside a company, you're very restricted. I couldn't talk to you. I couldn't write a book. I couldn't talk to any other businesses and everything I say and do would have to be through that prism,” she says.

We take our seats upstairs, with Harding ordering scrambled eggs, a side of avocado and salmon and a tea, before launching into her great passion: how to get politicians and technologists working better together.

“I've had this really privileged up-close experience at the cutting edge of technology policy, both in government and in leading AI and technology companies, figuring out this stuff right at the beginning,” she says. “Now I feel like the two things I do — the consulting business and the research institute — that’s where I feel like I can have the best impact, sharing what I know.”

Getting into politics

After completing her degree in modern history at the University of Oxford, Harding studied history and politics at Harvard, then got her first job in 2009 working for Nick Clegg — now Meta’s president of global affairs, but then an MP leading the UK’s Liberal Democrats.

“When I got to the end of university, I was like, ‘Oh, what’s the next thing?’ A friend of mine who knows me really well said, ‘Well you are quite opinionated. Have you thought about politics?’” she says. “I wasn’t party political. I just really liked Nick and I thought he was trying to do something different.”

One year on, Clegg had shot to the forefront of British politics, winning over the media in the 2010 election campaign, inspiring the term “Cleggmania” and then becoming deputy prime minister in a coalition government with David Cameron’s Conservative Party.


This landed Harding a position at the centre of the UK government, just two years after leaving university.

“I went in as a special advisor, so suddenly I was a government appointee, rather than just working for the opposition,” she remembers. “The best thing that I did was nothing to do with tech, but working on the equal marriage bill.”

Things got knottier

But while Harding looks back on the legalisation of same sex marriage in the UK with pride, there are other areas of the coalition government’s legacy that are clearly more uncomfortable to talk about. 

“One of the things I had to work on was what was nicknamed ‘the Snoopers’ Charter’,” she says, weighing her words more slowly and carefully.

The Snoopers’ Charter refers to surveillance legislation, first proposed in 2012 as the Draft Communications Data Bill and later revived as the Investigatory Powers Act, that would require internet and phone companies to store customers’ information for 12 months, providing government agencies and the police with unprecedented access to this data.

The law — which eventually came into force in 2016 — has been criticised by civil rights groups like Liberty, who’ve called it the “most intrusive mass surveillance regime of any democratic country”. Harding remembers feeling increasingly queasy as she found out more about what the bill would entail in its early days.

“As I dug in and learned about it, it became clear it was drawn very broadly and needed to be more targeted,” she says. 

The Liberal Democrats and Nick Clegg opposed the bill in 2013, delaying its passage through parliament. But Harding says the experience showed her that there was a big “democratic deficit”, in that politicians and civil servants — as she saw it — lacked the necessary “socio-technical knowledge required to really interrogate a piece of legislation like that”.

By July 2013 she’d left government to take up a new role as Google’s head of security policy for the EMEA region.

“I decided that for what I was trying to do — having an impact and improving things when it came to tech and human rights, tech and civil liberties — I could have more of an impact if I acted as this translator,” Harding says. “I can explain tech to political people, and I can explain politics to tech people. And that second part is just as important.”

Google

As we sit in the swanky surroundings of a members’ club, she reflects on how moving to a deep-pocketed tech company felt like a new world, after her time working in the crumbling halls of Westminster.

“It was such a stark change. When I was in the cabinet office at Number 10, there were literally mice running around. One time I turned to pick up my phone — a proper old school landline phone — and there’s a mouse on your phone,” she remembers.

“Then suddenly, I was in this beautiful office with free, unbelievable food… I think it's really good to do something else before you end up in that kind of luxury. You see some people in tech — maybe that's their first job — they think that's a normal way to work. It’s not.”

But trading in vermin on the phones for vermicelli noodles at lunch wasn’t the only new phenomenon Harding had to deal with in her new job. She joined Google around the time that the terrorist organisation ISIS was using platforms like YouTube to share extremist propaganda, forcing her to make decisions on whether news organisations should be able to include such material in their coverage on the platform.

“ISIS started uploading beheading videos to YouTube, that kind of thing just hadn't happened before,” says Harding. “I really started to understand that it's important to bring in outside perspectives, because I started to talk a lot with radicalisation experts and community leaders to try to understand: ‘What is the role of technology here?’ Because news organisations have a YouTube channel, so they need to be able to show some of it — you can't just ban everything.”

Data and democracy

Debates like these — around big tech companies’ responsibilities to society — seem to present a personal dilemma for somebody who spent seven years of her career singing from the Google hymn sheet.

“My natural home would be with the people that care about this privacy, because I think privacy is a fundamental human right on which the fabric of our society depends,” Harding says, going on to add that some US companies aren’t interested in having open debate about what constitutes ethical collection and use of data.

“There's definitely that element of certain parts of Silicon Valley where it feels like their position is ‘just let us get on with this’.”

When it comes to her old company Google, she’s keen to defend its approach to information security, citing the transparency reports it publishes, which outline how the company protects and processes data, and how it handles government requests.

“I’ve felt comfortable using Google products ever since I worked there, because I’ve seen the importance of security there,” she says.

Harding adds that, while she believes in the right to privacy, there is an inevitable trade-off if people want to enjoy free tech products.

“I've always tried to have a pretty measured view on this kind of stuff, which is that people enjoy the products and services they get, right? I allow Google Maps to have my location because I find that the service I get because of that works for me,” she says. 

“We have to adapt to a modern world where those types of services are really useful.”

Trust in AI

In early 2016, Harding moved to DeepMind, the London-based AI lab that Google had acquired two years earlier for a reported $400m, as its first global head of policy.

“At Google I was working on current, live issues. At DeepMind I was having to explain to people what AI even was, and why they should care about it… I really felt that I was there to help share the knowledge that [cofounder and CEO] Demis Hassabis had about what this technology could do,” she remembers.

“I felt like we needed to get away from this sense that AI was like Terminator, because I think if you scare people, they disengage. Whereas if you say: ‘Actually, citizens, you have some really cool potential uses, but here's the downsides of that and let’s try and work as hard as we can to mitigate those downsides.’ That shouldn't be a controversial thing to do.”

Among these potential downsides, Harding lists AI’s impact on labour, algorithmic bias and the lack of transparency and accountability of intelligent systems.

She says that the best way for people to respond to these kinds of concerns is to engage actively in the political debate around AI, and not just accept that the technology will end up ruling our lives and societies.

“10 years ago, people were saying, ‘Well, in 10 years time, we won't need any truck drivers or radiographers, that will be gone’,” says Harding. “Actually, that's not the case. And did making such strong announcements get us to a better outcome? Or did it make everyone sort of freak out and think that there was nothing we can do about it?”

When it comes to the threat of super-powerful AI, she’s fairly dismissive of concerns about systems that become so smart we can’t control them.

“I don't believe that AI is going to be the one technology that we uniquely can't control,” Harding says, while conceding that an “arms race” of private companies scrabbling to make ever more powerful systems is not a positive thing.

Instead, Harding says she’d like to see more scrutiny of where investment in AI is going, and more emphasis on its potential to help revolutionise healthcare or mitigate climate breakdown.

“We have to ask, ‘What are we doing this for?’ Because if it's just to make money, then it's not going to be a good outcome. It's got to be for something else,” she says. 

We leave our brunch with Harding telling me excitedly about her hiring plans for her new consultancy, as she tries to help private companies get the most out of AI while avoiding its risks.

I’m left with the feeling that her decade spent in tech policy has made her naturally optimistic about how entrepreneurs and politicians can shape the world, or at least that she’s learnt to tell that story with an impressively convincing level of confidence.

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn.