Alex Kendall, cofounder and CEO of Wayve


October 24, 2023

Wayve’s Alex Kendall on autonomous driving, giving robots feedback and being vindicated by Elon Musk

The UK-based autonomous vehicle startup's “AV2.0 approach” sees its vehicles effectively ‘learn’ while on the job, just like a human would

You’d be hard-pushed to find a sci-fi movie of the last three decades in which self-driving cars haven’t played some sort of role. Almost as long as we’ve had cars, we’ve been obsessed with the idea that, one day, we wouldn’t have to actually drive them ourselves. 

But the motivation for unlocking autonomous vehicles isn’t only human laziness; self-driving cars could cut transportation costs, reduce congestion and traffic deaths and free up city space. 

One of the firms working towards our driverless future is UK-based startup Wayve. Founded in 2017, it has raised more than $200m from Microsoft, Virgin, Balderton and Baillie Gifford. Bill Gates has even taken a spin in a Wayve autonomous vehicle in London to pick up some fish and chips.


The company has distinguished itself with its “AV2.0 approach” to self-driving technology — its vehicles effectively ‘learn’ while on the job, just like a human would. The company recently released its AI model Lingo-1, which allows passengers to — like a spouse on a long car journey — question their vehicle about the decisions it makes on the road. 

We sat down with Alex Kendall, cofounder and CEO of Wayve, on the Sifted podcast to expand on that, as well as on the competition (Uber and Tesla, among others) and regulation, working with Microsoft and how to “level up” as an academic founder. 

Find our interview highlights below — or listen to the full conversation here.

Tell us a little bit about your background, and how that led you to Wayve.

I've always been an engineer at heart, building things growing up. I've always been curious about how technology works and the impact it can have on the world. I think there's no greater challenge than building an intelligent machine. 

I've always been interested in things in the physical world, whether that's growing up in New Zealand and adventuring around in the outdoors, or some of the technology I played with growing up. I think the combination of these factors led to my fascination with embodied AI. 

And you studied computer vision, that was your speciality in your PhD. How did that tie in?

If you think about the autonomous vehicles that you can ride in today, the exciting thing is that you can go to a few cities in the world, download an app and a car can turn up and pick you up with no one in the front seat. But these vehicles follow a set of rules and require maps that tell them how to drive and behave in a certain environment. In many senses, they are rigid and inflexible to their environment.

If you think about what the future of robotics should be, or look at some of the science fiction that people have created and dreamed of — what I dream of — it is a system that actually has the flexibility to operate in an environment that you expect to change, as your needs change, to be able to let you converse with it and actually build trust and understand why it is doing what it's doing. The exciting thing is that it is no longer science fiction, it is possible with AI. So that's the vision that we set out to build. 

But when we started five or six years ago, that was really contrarian. We had multiple large technology companies making billion-dollar investments in autonomous driving and telling us it'd be a year away. And over the last six years, we've seen the classical approach to autonomous driving — which relies on sets of rules and maps to tell cars how to behave — get some initial launches together, but in a very constrained way, while also missing deadlines and costing exorbitant amounts of money.

Meanwhile, we have been quietly working away on this technology. And you know, this year, everything is starting to work. We're seeing an inflection point of so many different enabling technologies that makes this future vision of autonomous driving with AI a reality. 

So how do you actually build that? How do you actually train the car to be able to adapt to the environment?

You want to ensure that they learn from data. You can never encode all of the different situations they might encounter. What that looks like for an autonomous driving system is a system that has a lot of the hardware that you see in cars: cameras, radars, computers. 

From the AI perspective you need the most powerful and flexible neural network. We specifically use a number of techniques to train this system, whether it is pre-trained on large-scale data through contrastive learning, or whether it actually learns how to control these vehicles through both imitation and reinforcement learning. And crucially, not just on real data, but also synthetic data that lets us get access to larger-scale, rarer or more dangerous scenarios that we perhaps can't see in the real world. 
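To make the imitation-learning idea concrete, here is a toy sketch: a linear "policy" is fitted to synthetic expert demonstrations by regressing perception features onto steering commands. This is purely illustrative — real driving systems use deep networks and far richer data, and all the feature names and values here are invented for the example.

```python
import numpy as np

# Toy imitation-learning (behavioural cloning) sketch:
# learn a mapping from perception features to a steering command
# by regressing on "expert" (human) demonstrations.

rng = np.random.default_rng(0)

# Synthetic expert data: hypothetical features -> steering angle
true_w = np.array([0.8, -0.5, 0.1])       # unknown "expert" behaviour
X = rng.normal(size=(500, 3))             # e.g. lane offset, heading error, curvature
y = X @ true_w + rng.normal(scale=0.01, size=500)  # expert steering, slightly noisy

# Behavioural cloning reduces to supervised learning on expert actions
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

def policy(features: np.ndarray) -> float:
    """Predict a steering command from perception features."""
    return float(features @ w_hat)
```

The point is simply that the driving behaviour is recovered from data rather than hand-coded rules; scaling this up (deep networks, video input, reinforcement learning, synthetic scenarios) is where the real engineering lies.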


How much does human feedback fit into that process of training the car?

It is important, and one of the things I think that's crucial for robots in the future is that they should adjust to your preferences. So maybe you're in a rush, and you want the car to take more assertive actions to get there quickly. Or maybe you want to go for a quiet Sunday drive through the hills. Whatever style you have, you should be able to give that kind of feedback to a system.

And for us, if we're deploying our technology in a grocery delivery application, or ride hailing, or public transport, there's differences in each of these scenarios. So what our technology is able to do is learn a base foundation model that learns how to drive in general, and then learns different driving cultures, whether it's Britain, or the US, or southeast Asia. Once we have that, we can fine tune it through feedback for specific scenarios. 

So reinforcement learning from human feedback?

That's right, just as your large language model today is trained on next-word prediction. It is trained to produce sentences and paragraphs, and then the way it goes from being rude and dangerous to being safe is through reinforcement learning from human feedback. 

We have a similar process where our system learns to drive from generally observing the world with unsupervised learning. And then we can bias towards specific behaviours through feedback.
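The "bias towards specific behaviours through feedback" idea can be cartooned as follows. A base policy starts indifferent between two driving styles, and repeated pairwise human preferences nudge its probabilities towards the preferred style. The styles, the 90% preference rate and the update rule are all made up for illustration; this is the spirit of preference fine-tuning, not Wayve's actual method.

```python
import numpy as np

# Toy preference-feedback sketch: shift a softmax policy over two
# hypothetical driving styles using simulated human preferences.

rng = np.random.default_rng(1)
styles = ["calm", "assertive"]
logits = np.zeros(2)  # base model: no preference between styles

def probs(logits: np.ndarray) -> np.ndarray:
    e = np.exp(logits - logits.max())
    return e / e.sum()

def human_prefers_calm() -> bool:
    # Simulated rider who prefers the "calm" style 90% of the time
    return rng.random() < 0.9

lr = 0.1
for _ in range(200):
    preferred = 0 if human_prefers_calm() else 1
    p = probs(logits)
    # Gradient of log-likelihood of the preferred style: nudge
    # probability mass towards what the human chose
    grad = -p
    grad[preferred] += 1.0
    logits += lr * grad
```

After training, the policy's probability of driving calmly roughly matches the rider's observed preference rate — the same base behaviour, biased by feedback.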

Tell us about the first time you actually were able to go for a drive, what was that like?

In 2017, I was just finishing my PhD at the University of Cambridge and I'd spent a year in Silicon Valley as part of a Series A startup, Skydio. I wanted to go build this technology, and I didn't really know much about venture capital.

But, thankfully, I was surrounded by many people who had followed this kind of path and could help mentor me. Myself and my cofounder raised a seed round and ended up moving into a residential house in Cambridge, and we built our first autonomous car in the garage. One of the bedrooms was our server room and the other one was our boardroom. 

Of course, it failed during many early tests. But I remember the most memorable moment was the first time we got reinforcement learning to actually learn to drive this car with no hand-coded rules, just a human correcting it when it started to drive off the road. It learned how to lane follow. 

I remember that day we actually put a video on YouTube and it went viral, because in 2018 reinforcement learning had only really been shown to work on video games, with millions of examples in simulation. To have it work on a real robot was an incredible breakthrough for us. 

I remember that when we put the video out we put a soundtrack over it because if you actually listen to the video with the real audio, you'll hear a lot of whooping and screaming from us in the background. But that moment was really special. We came back to the house and watched the video as a team.

A lot is being made of Tesla pivoting to machine learning for self-driving. Does that feel like validation of your approach, or does that feel like a major competitor?

I've got a lot of respect for what they're building and many friends in the autopilot team. I think they have the most extraordinary access to data and are building a fairly formidable supercomputer.

When Elon said they were pivoting to an entirely learning-based approach, I mean, that's fantastic, right? It's what we've been saying for many years. I remember meeting Elon a couple of years ago, and he didn't believe end-to-end learning was going to work. And so for him to come out and say: actually, we're changing our approach to this. I mean, that's vindicating for me. It's absolutely the way to go. I think what we've been able to put together with the level of resource and access we have so far is certainly competitive. And I'm excited to see what they produce in the coming years. 

One concern that many people have about the widespread use of autonomous vehicles in the future is its impact on jobs. What would you say to those people?

I think any new technology that comes in is going to change the nature of work. The jobs we had a couple of thousand years ago are very different to the jobs we have today. But I think it's crucial that we support those who are going through this disruption, whether that's helping transition those people into the new jobs that come with this technology, or other support programmes. Society and government are, and should be, thinking really carefully there. 

But I'd also point out that new technology creates new economic prosperity and growth. For example, autonomy: the government estimates that by 2035, autonomous driving will create 40,000 new, high-paying technology jobs to support the industry — that's a significant opportunity. 

Also, once you have transportation that's more reliable and accessible and lower cost than what we see today, it opens up whole new business models. I mean, if you think about the big shifts we've had in mobility recently, whether it's micromobility or quick commerce or other applications, there are so many more that are now unlocked through safer, more accessible, more sustainable transportation. So I'm really excited about the innovation that becomes possible due to embodied AI. And overall, I think we're on track to get this right, and to me, that would be a net benefit for society.

Listen to the full conversation here.

This podcast is brought to you by Harper James, a national full-service law firm designed to support ambitious businesses. Having supported over 3,500 businesses, Harper James isn’t a run-of-the-mill law firm. It has transformed the traditional law firm model through unique price plans, smart technology and teams of almost exclusively senior lawyers — giving you affordable, commercial and high quality legal advice.

If this sounds too good to be true, then head over to and see for yourself. While you’re there, you’ll find 100s of resources to help your journey from startup to scaleup and beyond.