Paid for and produced by London Business School

January 29, 2026

AI’s innocence lost: How capital will force us to get real about value

If geopolitics is entering an era of competition without rules, AI is doing the same. 2026 looks set to be the year innocence is lost.

AI capability is advancing at breakneck speed. But the capital being committed far outweighs the value being realised. We’re living through the biggest freemium product demo in history — and it can’t last. Sooner or later, we have to find out: who pays, how much and for what?

The levels of investment going into AI are unprecedented, and the chase for talent has led to recruitment packages of eight figures. Much of this has been justified by hazy promises that equity markets and private capital have been delighted to fund. Some of it has the familiar smell of circular finance — deals and dependencies between companies that all benefit as long as everyone believes. In any other sector, we’d call it a bubble. In AI, it’s “the future”.

Optimists say we haven’t seen anything yet. But aspirations aren’t cashflows. Where will the money actually come from?

Right now, the market is acting as if profitability is optional and success is a done deal. Consider Palantir trading on stratospheric multiples, or OpenAI reportedly seeking a valuation of $750bn in its next funding round, something that only makes sense if its inferred $12bn quarterly losses secure future dominance. 

Yet most of the actual money still flows to the picks and shovels: data centres, cloud capacity, chips. That’s why Nvidia makes money, why hyperscalers look indispensable and why a small army of vertical specialists and integrators can thrive selling “AI transformation” to a corporate sector in which nobody wants to be the last one without a pilot.

My research suggests something far less romantic: AI will have an impact, but it will be uneven. There will be winners and losers but no universal profit uplift to pay for the current spending spree.

Some sectors will shelter behind regulation and institutions. Others, especially those where work is more modular, will face direct displacement. Translation and advertising are obvious targets, and business schools aren’t immune. 

Young graduates may be the most immediate losers, as AI eats into the entry-level roles that traditionally trained them. At the same time, some companies with powerful complementary assets (proprietary data, trusted distribution, embedded workflows) will use AI to deliver better personalisation, new services and new business models.

Even then, the aggregate returns still don’t obviously cover what’s being built upstream. Worse, impact isn’t just about industry position. It’s also about organisational reality.

Leaders are discovering the hard way that organisations aren’t collections of individuals. They’re systems: incentives, handoffs, decision rights, bottlenecks. Far from fixing these problems, AI usually amplifies them. AI demands new leadership skills, a rethink of workflows, and a redesign of how work is coordinated. That’s why the most radical reinventions often come from new companies and not incumbents trying to bolt AI onto old structures while soothing employees who fear replacement.

So yes, there will be winners. But even so, will they be able to afford the AI buildout when the bill finally arrives?

Starting in 2026, Big AI will be forced to monetise harder. Capital markets can’t subsidise thought experiments indefinitely, and training, inference and distribution don’t come cheap.

That monetisation will come, first, through something the tech sector knows better than it admits: control of attention. If OpenAI’s valuation is to make sense, it needs more than subscription revenue. It needs leverage over distribution and what users see, trust, click and buy. That points to product placement, default recommendations and an “answer engine” that quietly becomes a direction engine. The temptation will be too profitable to resist.

Google could have owned this space, but it was understandably hesitant to cannibalise its lucrative quasi-monopoly over search. However, it won’t stand idly by while AI hijacks eyeballs that could be gazing at Google ads. And because consumers are lazy, whoever controls AI-driven defaults gains immense power over brands, over commerce, over what “choice” even means.

This is where agentic AI changes the game. It won’t just answer our questions; it will put its answers into action. Most likely, we’ll have agents sponsored by Big Tech, plus user agents of our own that deal with corporate agents on our behalf. Who provides these agents and at what cost? Who do customers trust? What happens when the “best buy” is the one the agent is paid to recommend? Or when sensitive personal queries, perhaps in areas like mental health, are quietly diverted towards a purchase?

Then there’s regulation, no longer a technocratic afterthought but a battleground over who captures value. Whoever sets the standards, defines compliance and allocates liability will shape not just safety, but the division of power and profits across the economy, determining which sectors can adopt AI at scale, on what terms, and who gets squeezed in the process. In today’s geopolitical climate, where everything is recast as national competition, governments will be pushed to act quickly. The risk is that “protecting the national interest” becomes the banner under which incumbents write the rules, locking in their advantage and hardening today’s AI stack into tomorrow’s economic order.

The second monetisation route for tech and AI firms seeking returns on their investments is B2B. If AI becomes essential for innovation and product development, the prize is enormous. But who will win it? Will it be Big Tech firms such as Nvidia, broadening and deepening their ecosystems? How will this change the landscape?

Will pharma giants or fast-moving consumer goods behemoths retool their organisations to use AI for innovation at scale? Or will their structures, incentives and risk systems keep them locked out of the speed and experimentation that AI rewards? Will startups fill the gap using compute-for-hire, external data and specialised models? Or will Big AI move up the stack into science and innovation directly, as some rumours suggest?

Either way, conflict is inevitable. Expect stack wars: enterprise resource planning vendors, cloud providers, model providers, device makers, integrators, consultancies — everyone will fight for the right to sit at the control point where AI is embedded into workflows and decisions. 

Trust will become a strategic asset, especially for sensitive work. And the battle won’t just be over capability; it will be over governance, liability and whose standards become the default.

No doubt, the technology is extraordinary. But the era of innocently marvelling at capabilities is done. It’s time to decide who does what, who decides and who pays.

Michael G. Jacobides

Michael G. Jacobides is the Sir Donald Gordon Professor of Entrepreneurship & Innovation and Professor of Management at London Business School, where he directs the FT/LBS GenAI Masterclasses. He is the Founder and Lead Advisor of Evolution Ltd, a boutique advisory focused on strategy in a digital context.
