AI adoption is no longer a fringe experiment — it’s fast becoming the norm. A 2025 McKinsey survey of nearly 1,500 business leaders across 101 countries found that 78% are already using AI in at least one business function.
But as startups move beyond pilots and begin weaving AI into their products and growth strategies, the pressure mounts. Transparency, bias mitigation and safety are no longer “nice-to-haves” but essential guardrails. Without them, companies risk eroding trust — and facing expensive rebuilds later.
So the question is: How can we design intelligent systems that are not just powerful, but also responsible and resilient?
Our panel of industry experts was:
- Meri Williams, chief technology officer at business expenses company Pleo
- Lawrence Jones, engineer at incident management platform Incident.io
- Dan Lifshits, cofounder, chief operating officer and chief product officer at property management company Dwelly
- Faisal Khan, governance, risk and compliance subject matter expert at AI trust management platform Vanta
1/ Embed transparency and accountability from day one
Transparency has to be a cornerstone of any operation involving AI.
“When you start to introduce new technology into the world and that technology may pose specific risks... it's only natural for us to ask, ‘Okay, how do we best roll out some of those deployments across the world?’” said Khan.
Lifshits agreed, adding “an ability to flag up some kind of errors and bias” is a must for users.
For Jones, this technology shift is happening at a pace he’s never seen before: every business is trying to figure out how to use AI safely and responsibly while ensuring it performs well.
“Building a prototype AI feature is actually super easy, and it's very hard to distinguish the difference between a prototype AI feature that looks very impressive… and one that is actually doing a good job,” he said.
This highlights the need to closely evaluate AI outputs for quality — though this can be challenging.
“It's extremely highly contextually dependent and mentally taxing to look at a lot of these AI flows and try and pick apart whether or not you've been handed ‘AI slop’ from a product or if it's actually doing the job” — Lawrence Jones, Incident.io
2/ Governance is critical at any stage
Jones stressed the importance of regular check-ins — ideally every three months — to track industry shifts and the rapid evolution of tools.
As AI becomes more pervasive and the volume of data these systems handle grows, connectivity between them will only increase. That makes it essential to establish guardrails early: clearly define data flows, limit interconnectivity and prevent unintended interactions.
Understanding which parts of the system should communicate — and which should not — will be key. “Otherwise, we will end up in the future where all of your systems can talk to each other — and you may find that that's maybe not what you wanted,” said Jones.
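To make that concrete, here’s a minimal sketch of a deny-by-default allow-list for system-to-system data flows; the system names and permitted flows are hypothetical examples, not anything the panel specified:

```python
# Hypothetical allow-list of which internal systems may exchange data.
# Anything not explicitly listed here is blocked.
ALLOWED_FLOWS = {
    ("crm", "support_bot"),
    ("support_bot", "ticketing"),
}

def may_connect(source: str, destination: str) -> bool:
    """Deny by default: only explicitly listed flows are permitted."""
    return (source, destination) in ALLOWED_FLOWS

assert may_connect("crm", "support_bot")
assert not may_connect("crm", "ticketing")  # no transitive access by default
```

The point of the design is that any flow not explicitly listed is blocked, so a new AI integration can’t quietly start talking to everything.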
“It's primarily just about having proper checklists integrated into your product to just understand… to provide some guardrails of how you use the data effectively — and the inputs and outputs of that” — Dan Lifshits, Dwelly
3/ Invest wisely in your AI stack so you don’t have to rebuild
Startups with limited resources must be smart about where they invest. Lifshits suggested using product-integrated checklists during development.
Meanwhile, Jones explained that most businesses are still in research and development stages, so the priority is to make a system work before it works fast.
He suggested starting with larger models; once you’re confident in the feature, you can switch to smaller, optimised models. But with agentic systems, AI can “choose to do more work,” which makes costs unpredictable, he warned.
“If we're ever in a situation where we're building our new feature we're using the larger models by default” — Lawrence Jones, Incident.io
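One way to read that advice in code: default every new feature to the larger model and only downgrade behind an explicit flag once the feature has been evaluated. The model names and environment variable below are hypothetical, purely for illustration:

```python
import os

# Hypothetical model tiers; the panel named no specific models or providers.
LARGE_MODEL = "large-general-model"    # default while iterating on quality
SMALL_MODEL = "small-optimised-model"  # switch once outputs are trusted

def pick_model(feature_is_proven: bool) -> str:
    """Default to the larger model; downgrade only after evaluation."""
    if feature_is_proven and os.getenv("ALLOW_SMALL_MODELS") == "1":
        return SMALL_MODEL
    return LARGE_MODEL

print(pick_model(feature_is_proven=False))  # -> large-general-model
```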
4/ Pricing AI features requires a close understanding of usage and cost
Jones explained that running tests, evaluations and serving customer demand increases AI costs.
“The best approach is to meter everything and expose that data in a developer-friendly dashboard,” which can help developers understand the costs of their interactions, he said.
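As a rough sketch of what metering everything might look like, the snippet below tallies token usage and estimated cost per product feature — the kind of data a developer-facing dashboard could surface. The prices, token counts and feature name are invented for illustration:

```python
from collections import defaultdict
from dataclasses import dataclass

# Illustrative only: per-token prices are made up, not real provider rates.
PRICE_PER_1K_TOKENS = {"input": 0.01, "output": 0.03}

@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0

    @property
    def cost(self) -> float:
        return (self.input_tokens / 1000 * PRICE_PER_1K_TOKENS["input"]
                + self.output_tokens / 1000 * PRICE_PER_1K_TOKENS["output"])

# Aggregate usage per product feature so a dashboard can surface it.
meter: dict[str, Usage] = defaultdict(Usage)

def record(feature: str, input_tokens: int, output_tokens: int) -> None:
    usage = meter[feature]
    usage.input_tokens += input_tokens
    usage.output_tokens += output_tokens

record("incident_summary", input_tokens=1200, output_tokens=300)
print({feature: round(usage.cost, 4) for feature, usage in meter.items()})
```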
Williams added that, in their experience, seemingly minute differences can have a big impact on how costs scale.
“It used to be a great thing if your product was being used every day. Maybe now it's a little more double-edged,” they said.
They speculated that AI may push usage-based pricing models into the mainstream.
“Unless you end up like gyms do where they rely on there being a whole bunch of people paying for a membership that don't ever turn up. I don't think we want that for our customers,” they added.
“There needs to be an accountability aspect to those responsible for maintaining that data and content and then knowing where it goes — kind of an emphasis of data lineage and data provenance, and I think that's a really big concept that's getting emphasised as part of this boom of AI” — Faisal Khan, Vanta
5/ Stay alert to regulations while scaling
Williams warned that businesses must be mindful of data protection and financial regulations when using AI.
“We're very much taking the approach of trying to not silo but… contextualise down exactly what task we're giving the AI, and be really careful that it's limited to just what is okay for it to do — and not overreaching in any way,” they said.
Khan added that businesses must anticipate evolving regulations, such as the EU AI Act, and build governance accordingly.
“We already had a product development process that was very aware of GDPR ... And this has just been an extra layer of complexity” — Meri Williams, Pleo





