Fraud against businesses is rising in both volume and sophistication. According to a 2024 report by digital identity platform Signicat, payment fraud alone costs merchants $38bn globally and could rise to as much as $91bn by 2028, which the report notes is 100 times as much as ransomware.
AI plays a huge role in this uptick, making the mechanisms of fraud more accessible to criminals and harder for victims to identify. Signicat’s report found that total fraud attempts are up 80% over the last three years.
“AI-enabled fraud is becoming more sophisticated, making traditional fraud detection and identity verification methods increasingly ineffective,” Riten Gohil, digital ID, fraud and AML evangelist at Signicat, tells Sifted.
“Fraudsters are exploiting AI to generate highly convincing fake identities, bypassing traditional security measures with alarming ease. Financial services are no longer the main target — AI-driven identity fraud is now infiltrating healthcare, e-commerce, government services, crypto, gaming and beyond.”
The report underlines an urgent need for businesses to modernise their fraud prevention strategies. Here’s how you can do it.
AI is the threat — and the solution
Hundreds of thousands of fraudulent messages are being sent by email or SMS every day, with AI generating highly convincing fake documentation that recipients are more likely to believe.
“AI is used to collate information on those people the fraudsters engage with on a far more compelling and convincing basis,” explains Simon Miller, director of policy, strategy and communications at Cifas, a nonprofit working to reduce and prevent fraud.
“It's much easier for me to pretend I'm your bank if I've used AI to collate and summarise your social media history in a matter of seconds. It also enables better targeting of individuals who might be defrauded for more complicated types of fraud, or might be susceptible to a type of fraud — if they have certain types of savings or travel often, for example.”
Miller adds that AI also enables fraudsters to probe more effectively for weaknesses in the systems of banks, telephone networks, media and tech companies, and these businesses and organisations must be prepared for such attacks.
“All businesses need to invest in improved detection, and that would also be AI-based and AI-driven,” he says. “Whilst AI is in part the threat, it is very much also the solution. Most banks have been using machine learning and large language models for a really long time to pick up anomalous behaviour — that just needs to become supercharged.”
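The anomaly detection Miller describes can be sketched in miniature: flag activity that deviates sharply from a customer's own history. The sketch below uses a simple z-score rule on transaction amounts; the figures, threshold and feature choice are illustrative assumptions, not any bank's real model, and production systems would use far richer features and learned models.

```python
# Minimal sketch of behaviour-based anomaly detection: flag transactions
# whose amount deviates sharply from a customer's spending history.
# The z-score threshold and data here are illustrative assumptions only.
from statistics import mean, stdev

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Return new transaction amounts that look anomalous
    relative to the customer's past spending."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return [a for a in new_amounts if a != mu]
    return [a for a in new_amounts if abs(a - mu) / sigma > z_threshold]

# A customer who normally spends £20-£60 suddenly sends £5,000:
history = [25.0, 40.0, 32.5, 55.0, 48.0, 22.0, 38.0]
print(flag_anomalies(history, [45.0, 5000.0]))  # → [5000.0]
```

A single rule like this is trivially evaded; the point of "supercharging" detection is layering many such signals (device, location, timing, counterparty) and letting models weigh them together.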
Education for action
Miller also says that education is integral to preventing fraud.
“Let's educate ourselves about how we can identify content that is fraudulent by learning little tips and tricks that mean that we are content savvy and aware so that when we are confronted with fraudulent content, we're much better placed to be able to recognise it and protect ourselves,” he says.
James Nurse, managing director and head of consulting at Fintrail, a financial crime consultancy, shares the same view that education is a powerful tactic for businesses.
He recommends two key steps for businesses: threat assessment, where companies map out the threats they see and those that are likely inherent to their business; and mapping controls to those threats to ensure appropriate coverage. He says companies must look at the entire user journey, identifying all the interactions a customer may have.
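Nurse's two steps amount to building a map from threats at each stage of the user journey to the controls meant to cover them, then checking for gaps. A minimal sketch, in which every threat, stage and control name is a hypothetical example rather than anything from the report:

```python
# Illustrative threat assessment: map expected threats at each stage of
# the user journey to controls, then surface uncovered threats.
# All stage, threat and control names are hypothetical examples.
THREATS = {
    "onboarding": ["synthetic identity", "stolen documents"],
    "login": ["account takeover", "credential stuffing"],
    "payment": ["mule account transfer"],
}

CONTROLS = {
    "synthetic identity": ["document verification", "liveness check"],
    "stolen documents": ["document verification"],
    "account takeover": ["MFA", "device fingerprinting"],
    "credential stuffing": ["rate limiting", "MFA"],
    # no control mapped yet for "mule account transfer"
}

def coverage_gaps(threats, controls):
    """Return (stage, threat) pairs with no mapped control."""
    return [(stage, t) for stage, ts in threats.items()
            for t in ts if not controls.get(t)]

print(coverage_gaps(THREATS, CONTROLS))
# → [('payment', 'mule account transfer')]
```

Walking the entire user journey this way makes uncovered interactions, such as the payment stage above, visible before fraudsters find them.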
“Fraudsters work off margins,” Nurse says. “They know a certain percentage of accounts, which they use to move illicit funds, will be identified but scaling the amount of account opening attempts means they can increase their margins. This is where AI is having an impact.”
Gohil highlights the importance of industry collaboration to overcome AI-driven fraud and says that fraudsters share tactics, so businesses must do the same.
“Engaging in fraud prevention networks and sharing insights with peers can strengthen collective defences,” he says. “Data sharing with organisations like Cifas is a critical tactic for the industry in the UK. The better the understanding of threat vectors and common attributes used by criminals, the better the industry will be at preventing such harms.”
AI and human teamwork
Signicat’s report found that account takeover was the most common fraud type for B2B organisations, and that the largest increase in deepfake fraud attempts over the past three years was against fintechs, banks and large businesses. Yet the company believes AI can provide a toolkit for prevention, as long as it’s coupled with proactive human engagement and oversight.
Gohil says AI should be “utilised to enhance [human] work by working in tandem.”
"We can't be scared of AI, a bit like we can't be scared of fraud," says Miller. "It is everywhere, so let's engage with it positively and knowingly."
And businesses worried about fraud can always engage with the experts, such as fraud detection companies.
“AI-driven identity fraud is not a future problem — it is happening now, at scale, across every industry,” says Gohil. “By taking action today, businesses can protect their customers, their reputations and their bottom lines. The fight against AI-driven fraud is a race — those who move first will be the ones who win.”