Analysis

January 17, 2024

AI has a trust problem — meet the startups trying to fix it

Young companies are building new AI architectures designed to fix the big trust and reliability problems plaguing today’s leading models


Tim Smith


While the investment world went doolally over AI in 2023, there’s an uncomfortable truth for any organisation trying to integrate the technology into its workflows: it has a trust problem.

The latest generative AI models like OpenAI’s GPT-4, which powers ChatGPT, are known to “hallucinate” (industry speak for “make things up”). Even more conventional systems can be unreliable, and are therefore not suitable to deploy in many business use cases.

“If I’m using AI to recommend shoes to display on an online shopping site, to a user based on their prior behaviour, AI is going to do it a million times better than any human could do it,” says Ken Cassar, CEO of London-based Umnai, a startup that’s working on new AI architecture design.


“If the model is 85% accurate, that’s brilliant — a paradigm shift better than humans could do at scale. But if you’re landing aeroplanes, 85% ain’t no good.”

To solve the problem, startups like Umnai are trying to build AI models that are more accurate, reliable and useful, while others are focusing on new systems that are less likely to gain superhuman abilities and pose an existential risk to society.

The problem of the black box 

A central problem with the AI models in use today is known as the “black box”. 

Systems are fed huge amounts of data, which are then put through complex computational processes. When the AI produces an answer or output, it’s impossible to know how it came to that decision.

Cassar says this aspect of AI systems creates a trust issue because it goes against the human instinct to make “rule-based” decisions.

“If I’m a bank, and I’m doing loan approvals and I’ve got 10 data points, I can build a rules-based system to do that,” he says.

“If I suddenly start taking people’s social media activity and all sorts of other factors then I’ve got too much data to do that. What AI then is very good at is going into these massive amounts of data and finding the knowledge. But because it’s a black box, it hits a trust barrier.”

Cassar tells Sifted that his technical cofounder Angelo Dalli has devised a new “neuro-symbolic” AI architecture that allows the user to better see how the system has created its output.

This, he explains, works by combining neural nets (the kind of tech that powers big models like GPT-4) with rule-based logic. It subdivides a big task — for instance, judging someone’s credit-worthiness — into small neural nets organised into “modules”, such as age, education level and income.

When the system provides an output, the user can query how it came to its decision part by part.
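As a rough illustration of the modular idea Cassar describes, the sketch below splits a credit decision into per-factor scoring functions and combines them with explicit rules, so each part of the decision can be queried. The module names, scoring and thresholds are invented for this example and are not Umnai’s actual design.

```python
# Hypothetical sketch of a modular "neuro-symbolic" decision: each input
# factor gets its own small model, and an explicit rule layer combines
# their scores, so every part of the decision can be inspected.

def age_module(age: int) -> float:
    # Stand-in for a small neural net scoring this single factor (0-1).
    return min(age, 65) / 65

def education_module(level: str) -> float:
    return {"none": 0.2, "secondary": 0.5, "degree": 0.8}.get(level, 0.5)

def income_module(income: float) -> float:
    return min(income / 100_000, 1.0)

def credit_decision(applicant: dict) -> dict:
    scores = {
        "age": age_module(applicant["age"]),
        "education": education_module(applicant["education"]),
        "income": income_module(applicant["income"]),
    }
    # Rule-based logic sits on top of the module outputs.
    approved = scores["income"] > 0.3 and sum(scores.values()) / len(scores) > 0.5
    # Per-module scores are returned alongside the decision, so a user
    # can query how each part contributed.
    return {"approved": approved, "module_scores": scores}

print(credit_decision({"age": 34, "education": "degree", "income": 42_000}))
```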

A model that gets better, not worse

Oxford-based Aligned AI is another startup trying to improve trust in AI systems. But unlike Umnai, it’s not building new model architecture from the ground up; instead, it’s building a system that it says can work with pre-existing models to improve the reliability and quality of their output.


The majority of AI systems never actually make it into the marketplace, as they prove too “fragile” when confronted with real-world data that’s different from what they were trained on, says cofounder and CEO Rebecca Gorman.

“They break when they encounter the real world. Seven out of 10 models that are made for commercial use, never actually make it into the marketplace,” she tells Sifted. 

Aligned AI’s technology is designed to improve results generated by pre-existing models by teaching the system to generalise and extrapolate from the rules it’s been taught, says cofounder and chief technology officer Stuart Armstrong.

“The central idea is that AIs don’t generalise the way that humans do. If we want an AI to do something like ‘don’t be racist’, we give them a lot of examples of racist speech, we give them a lot of examples of non-racist speech. And we say: ‘More of this, less of that,’” he explains.

“The problem is that these examples are clear to us in terms of what they mean, but the AIs do not generalise the same concepts from those examples that we do, so you can jailbreak them.”

While companies like OpenAI have teams dedicated to trying to make their systems more aligned with human values, so-called jailbreaking techniques have been successfully used to get ChatGPT to generate things like porn and misinformation.

Aligned AI says its tech allows humans to give live feedback to the system as it makes decisions, which helps the AI extrapolate concepts from its training data and continuously improve its output.
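A minimal sketch of that live-feedback pattern might look like the following: a wrapper keeps a store of human corrections and consults it before trusting the base model. This is a generic illustration of the idea, not Aligned AI’s technology, and every name in it is hypothetical.

```python
# Generic illustration of live human feedback wrapped around a base model.

class FeedbackWrapper:
    def __init__(self, base_model):
        self.base_model = base_model   # any callable: text -> label
        self.corrections = []          # (example, human_label) pairs

    def predict(self, text: str) -> str:
        # Apply a human correction if we have seen something very similar.
        for example, label in self.corrections:
            if example.lower() in text.lower():
                return label
        return self.base_model(text)

    def give_feedback(self, text: str, correct_label: str) -> None:
        # A human flags a wrong decision while the system is running.
        self.corrections.append((text, correct_label))

# Usage: a toy base model that the human corrects on the fly.
wrapper = FeedbackWrapper(lambda text: "acceptable")
wrapper.give_feedback("example of disallowed content", "rejected")
print(wrapper.predict("this contains an example of disallowed content"))  # rejected
```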

Gorman says that this human-assisted, self-improving AI is a stark contrast to the majority of systems out there, which tend to degrade over time because they adjust poorly to new data and need to be retrained.

Are we “super, super fucked”?

Solving problems like these, which make AI models hard to use in the business world, could be a huge business opportunity as more and more companies race to adopt the technology. But some startups are trying to solve what they see as more existential issues.

Connor Leahy, founder of London-based startup Conjecture, told Sifted last year that “we are super, super fucked” if AI models continue to get more powerful without getting more controllable.

His company is now trying to develop a new AI approach which Leahy calls “boundedness”. 

The idea is essentially to provide an alternative to general-purpose systems like GPT-4, which are trained on more information than any human could ever know and can answer a huge range of different requests.

“The default vision of what AI should be like is an autonomous blackbox agent, some blob that you tell to do things and then it runs off and does things for you,” says Leahy. 

“Our view is that this is just a fundamentally bad and unsafe way of how to build intelligent and powerful systems. You don’t want your superintelligent system to be a weird brain in a jar that you don’t understand.”

Conjecture is addressing this issue by breaking down AI systems into separate processes that can be combined by a human user to complete a given task. The idea is that no single part of the system is more powerful than a human brain.

By design, such a system would not be “general purpose”. But Leahy believes that, as well as being a safer vision for AI than more wide-ranging systems, it’s actually a more valuable product for business customers, because it would be easier to control reliably.
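As a rough sketch of that bounded, compositional pattern, the example below wires together two narrow, single-purpose steps under the user’s control, so every intermediate result is visible and auditable. The step functions are placeholders invented for this illustration, not Conjecture’s actual components.

```python
# Illustration of the bounded pattern: narrow steps composed explicitly by
# the user, rather than one autonomous end-to-end agent.

def extract_figures(document: str) -> list[str]:
    # Narrow step 1: pull out lines containing numbers.
    return [line for line in document.splitlines() if any(c.isdigit() for c in line)]

def summarise(lines: list[str]) -> str:
    # Narrow step 2: produce a short, checkable summary.
    return f"{len(lines)} lines with figures found."

# The human (or their script) wires the steps together and can inspect
# each intermediate output before passing it on.
doc = "Revenue 2023: 4.2m\nNotes\nHeadcount: 38"
figures = extract_figures(doc)
print(figures)
print(summarise(figures))
```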

“What corporations want is a thing where they can automate a workflow that currently cannot be automated using traditional software. They want it to be reliable, auditable and, if something goes wrong, they want it to be debuggable,” he says.

“The product that people, in my experience, actually want in business is not a quirky little brain in a jar that has its own personality and hallucinates wild fictional stories.”

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn.