February 12, 2026

Why the future of AI belongs to models that simulate reality

Meta’s former chief scientist says LLMs are ‘missing something big’


Lara Bryant

In partnership with

Paladin Capital

Since OpenAI released ChatGPT to the public just over three years ago, large language models (LLMs) have dominated discussion of AI, powering everything from chatbots to code assistants. Yet LLMs mostly operate as pattern recognisers: they predict the next word or token based on vast amounts of historical data, rather than maintaining an understanding of how the physical world works or how actions unfold over time.

A growing number of founders, investors and researchers argue that the next wave of AI will need something more: systems that build internal models of their surroundings, reason about cause and effect and use that understanding to plan actions under uncertainty. These systems, known as ‘world models’ (and sometimes ‘physical’ or ‘embodied’ AI), are designed not just to describe reality but to simulate it, updating their beliefs as new sensory data arrives. That makes them particularly suited to tasks where AI needs to act autonomously in complex environments.

Applications of world models 

World models suit industries that benefit from simulating real, shifting environments, such as autonomous vehicles, defence, robotics and gaming.

LLMs learn statistical regularities from static datasets: once trained, they are largely frozen and must be retrained or fine-tuned when the world changes. World models, by contrast, are built around an internal representation of how the world behaves; they continuously compare this internal “generative model” with incoming observations and adjust accordingly.

“If you think about the requirements of autonomous driving, the environment constantly throws up the need for reasoning and sequencing,” Nazo Moosa, Managing Director of Paladin Capital Group, says. “The model needs to be reflexive while operating within clear guardrails.”

A familiar problem in AI development has been the seemingly endless need for more data on which to train models, but that doesn’t hold true for every approach. 

Many of the headline-grabbing world model approaches, such as those emerging from AMI Labs, founded by former Meta chief scientist Yann LeCun, and AI ‘godmother’ Fei-Fei Li’s World Labs, still depend on large-scale datasets of images and simulations to learn the dynamics of 3D environments. These approaches typically capture motion, physics and multi-step interactions better than LLMs, but they remain data-hungry and compute-intensive, echoing some of the scaling challenges in the LLM world.

London-based Stanhope AI takes a different approach. The company builds systems that mimic the human brain, learning by inference, rather than relying on training datasets and pre-trained simulations. CEO Professor Rosalyn Moran, whose background is in computational neuroscience, describes the brain as “quite literally a world model”, constantly updating its understanding of the world based on incomplete information. 

In practice, Stanhope’s model runs in a tight loop: it starts with a hypothesis about what the environment looks like, takes an action (say, nudging a drone left to avoid an obstacle) and then compares what its sensors actually see with what it expected. When the two don’t match, the system either adjusts its behaviour or revises its internal map, gradually reducing uncertainty with every movement. Since the model is updating on the fly, a drone that has never flown a particular route before can still make reasonable decisions about where to go next, without needing a library of pre-recorded flight paths.
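As a rough illustration of that loop, the sketch below implements a toy version in Python: a belief map over a small grid, a Bayesian update that reconciles noisy sensor readings with expectations, and an action rule that steers toward the cells the model currently believes are safest. The grid world, the sensor model and all names are hypothetical, chosen for illustration; this is not Stanhope’s implementation.

```python
import numpy as np

GRID = 10
rng = np.random.default_rng(0)
# Hidden ground truth the agent never sees directly: ~20% of cells are blocked.
true_obstacles = rng.random((GRID, GRID)) < 0.2

def sensor_reading(pos, reliability=0.9):
    """Noisy sensor: reports the true state of a cell with probability `reliability`."""
    if rng.random() < reliability:
        return bool(true_obstacles[pos])
    return not true_obstacles[pos]

class TinyWorldModel:
    """Toy predict-act-compare-update loop: a belief map over obstacle
    locations, revised whenever sensing contradicts expectation."""

    def __init__(self):
        # Start maximally uncertain: 0.5 probability of an obstacle everywhere.
        self.belief = np.full((GRID, GRID), 0.5)

    def update(self, pos, saw_obstacle, reliability=0.9):
        """Bayesian revision of the internal map where prediction and
        observation disagree, shrinking uncertainty at that cell."""
        prior = self.belief[pos]
        if saw_obstacle:
            p_if_blocked, p_if_free = reliability, 1 - reliability
        else:
            p_if_blocked, p_if_free = 1 - reliability, reliability
        evidence = p_if_blocked * prior + p_if_free * (1 - prior)
        self.belief[pos] = p_if_blocked * prior / evidence

    def choose_action(self, candidates):
        """Act on the current hypothesis: move toward the candidate cell
        the model believes is least likely to be blocked."""
        return min(candidates, key=lambda p: self.belief[p])

model = TinyWorldModel()
pos, goal = (0, 0), (GRID - 1, GRID - 1)
for _ in range(200):                        # bounded number of control steps
    if pos == goal:
        break
    # Candidate moves: right or down, staying on the grid.
    candidates = [p for p in ((pos[0] + 1, pos[1]), (pos[0], pos[1] + 1))
                  if max(p) < GRID]
    target = model.choose_action(candidates)  # hypothesis: this cell is clear
    saw_obstacle = sensor_reading(target)     # what the sensors actually report
    model.update(target, saw_obstacle)        # revise the map if they disagree
    if not saw_obstacle:
        pos = target                          # move only when the reading says clear
```

Real systems replace the grid with continuous state estimates and far richer generative models, but the structure of the loop (predict, act, compare, revise) is the one Moran describes.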

“Our world model is nowhere near the complexity of the real human brain,” says Moran. “But it has all the key features that allow it to do reasoning and planning.

“What our model has to be able to do is understand the consequences of its actions and what new data those consequences will reveal about how the world works.” 

From theory to deployment

Dr. Najwa Aaraj, CEO of the Technology Innovation Institute, says a key differentiator for world models is predictive modelling: the use of past or present data to forecast future outcomes.

“This is what sets them apart. Predictive decision-making will become critical as world models move into real-world deployment, from autonomous systems to industrial simulation,” Aaraj says.

“We are now seeing world models move from foundational theory toward systems that can make critical decisions and predict what matters most.”

While many discussions around world models remain firmly rooted in research, some systems are already being deployed commercially. Stanhope is working with European governments to integrate its technology into autonomous platforms, primarily drones, and has signed contracts with major aerospace companies.

The company has also closed a seed round supported by Paladin Capital Group, a sign of growing investor interest in world models that can operate safely in high-stakes environments and a challenge to any suggestion that practical applications are far off.

Creating ‘managed autonomy’

As AI systems move towards acting autonomously in the physical world, questions around safety take on a different character. 

Instead of focusing only on preventing harmful outputs, safety work will increasingly centre on ensuring that AI can be trusted to make decisions in real-life situations.

“World models raise the same core considerations as any advanced AI system,” says Aaraj. “Robust guardrails and countermeasures must be built in from the start.”

Ethical concerns around the technology include AI bias, lack of explainability and possible misuse, from deepfakes to cyberattacks. To mitigate these risks, developers build in guardrails: policies and predetermined boundaries that constrain what a system is allowed to do.

In high-stakes environments such as healthcare, AI systems must be able to explain their decisions, both to meet compliance regulations such as the EU AI Act and to avoid operating as a ‘black box’: a model that does not reveal its internal reasoning to users.

Moran says an advantage of world models is that safety is not bolted on later but built in. “We programme our model to answer: ‘Why are you making these decisions?’ during development,” she says. “That’s fundamentally different from black box AI.”

This approach, sometimes described as “managed autonomy”, means systems can operate independently, but within clearly defined constraints and with human oversight built in from the outset. In Stanhope’s case, that might mean hard-coding absolutes such as never flying a drone within a set distance of people, regardless of the mission objective.
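In code, a constraint like that can live outside the learned model entirely, as a deterministic filter applied to whatever action the planner proposes. A minimal sketch in Python, with hypothetical names and an illustrative 50-metre threshold (not a figure from Stanhope):

```python
import math

MIN_STANDOFF_METRES = 50.0  # illustrative threshold, not a real operational figure

def enforce_standoff(proposed_waypoint, detected_people, fallback_waypoint):
    """Hard-coded guardrail applied after planning: reject any waypoint
    that would bring the drone within the minimum standoff distance of a
    detected person, regardless of the mission objective."""
    for person in detected_people:
        if math.dist(proposed_waypoint, person) < MIN_STANDOFF_METRES:
            return fallback_waypoint  # e.g. hold position or climb to a safe altitude
    return proposed_waypoint
```

Because the check is deterministic and sits downstream of the model, it holds even when the model’s internal beliefs are wrong, which is what distinguishes a hard constraint from learned behaviour.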

Stanhope also makes a conscious effort to work with investors who consider the long-term impact of world model AI. “The investors that we've brought on have been sophisticated in how they think of long-term impact. We're not in a race for data. We're in a different race in bringing on a team who can construct these AIs,” Moran says.

The development of world model AI is ultimately about combining both the digital world and the physical world, says Moosa. “LLMs will continue to play a very important role, but we will start to see a hybrid where LLMs work together with models that know the laws of physics and understand sequencing and causality in a way that LLMs cannot. For investors, the most compelling stories are where this hybrid unlocks concrete use cases in large markets, from autonomous drones and robotic platforms to mission-critical monitoring and security.”
