Sponsored by

Vention

Analysis

February 27, 2024

The role of AI in cybersecurity

Cybercriminals are leveraging AI to launch ultra-sophisticated attacks. But far from making companies vulnerable, this tech could usher in a new era of cyber defensiveness

Sarah Drumm

4 min read

AI advances have captured the interest of startups, CEOs, investors — and cybercriminals.

The cost of cybercrime is expected to reach $10.5trn per year globally by 2025, according to GlobalData, as it becomes easier and cheaper for hackers to launch attacks.

“The AI sector is getting stronger and therefore the exploitation is going to get stronger,” says Glyn Roberts, CTO of Vention, a leading software development company. 

Deepfakes and phishing scams, in particular, are “faster, cheaper to create and more accurate” thanks to generative AI (GenAI), he adds. “It’s worth trying if it only costs you pennies to try and attack something, whereas before it would cost you several thousands of pounds per attempt.”

Advertisement

Earlier this month, a finance worker in Hong Kong transferred $26m to a fraudster who used deepfake technology to mimic multiple team members, including the company’s UK-based chief financial officer, in a video call. “It turns out everyone he saw was fake,” Hong Kong police told reporters

But is GenAI a threat or an opportunity? 

The risks 

According to insurance firm Gallagher, 75% of security professionals say attacks rose in 2023, with 85% attributing this uptick to GenAI. 

AI is helping hackers become more efficient. Companies using GenAI to develop new products and workflows are also exposed to new risks. 

Prompt injection, for example, is an attack where inputs are used to make large language models (LLMs) behave in ways that range from catastrophic — revealing sensitive customer information such as health records or financial data — to embarrassing, such as generating reputation-damaging responses. In January, delivery firm DPD had to take its AI chatbot offline after it started swearing and writing self-referential poetry. 

There are also concerns about companies sharing too much of their information with their LLMs. 

“If the LLM is not self-hosted, the most obvious risk is sending your data to a third-party service,” says Andrei Papou, lead AI engineer at Vention. “Even if it’s self-hosted and not accessible outside of the company, there is a risk of leaking data across departments. There’s no current way to organise role-based access.”

Cybercriminals appear to have the upper hand when it comes to using AI but as more companies develop and adopt cutting-edge cybersecurity applications, the scales will tip.

“When you’re an attacker, you want to harm somebody and you can adopt new technologies more easily,” says Ahmed Achchak, the cofounder of Qevlar AI, a Paris-based cybersecurity company. “But if you’re a large company, you’re more risk averse. It’s more complex for you to adopt new technologies from a defensive point of view and to get up to speed.”

How AI can enhance cybersecurity defenses

The use of AI in cybersecurity is nothing new but the capabilities that GenAI brings are. It’s expected this will result in a new wave of security tools and applications that protect companies against threats that exist today as well as detect and learn about emerging dangers.

Qevlar’s product, for example, uses GenAI to automate the investigation of cybersecurity alerts. The AI can learn about different alerts, decide what actions need to be taken next and draw up reports that explain its decision-making process. Achchak says the tool reduces the time security analysts spend per alert by 30%.

He says that as cybersecurity tools using GenAI become more widely adopted, “you will have an Iron Man future where AI detects patches and is ready to detect new things.”

Advertisement

Other startups using GenAI in the cybersecurity space include Lakera, a Swiss company that makes tools to protect LLM-powered applications, and Snyk, a UK-based startup that analyses AI-generated code and identifies any vulnerabilities. Pistachio, based in Norway, uses AI to create tailored cybersecurity training programmes for staff. 

“AI is going to be a beneficial tool, because it’ll be able to keep up to date with the latest exploits, resulting in stronger and more cost-effective solutions,” says Roberts.

Outspending the hackers

Still, convincing companies to continue increasing their investment remains the perennial challenge in the cybersecurity space. According to International Data Corporation, European security spending grew by 12.2% in 2023, compared to AI spending’s projected 29.6% annual growth rate.

“Cybersecurity is the forgotten child in a lot of companies, unfortunately,” says Roberts. “But even if your data wasn’t a high-target before, now even basic data is more valuable because the cost to try and exfiltrate it is much less than it was previously.”

An effective cybersecurity strategy for 2024 must weigh up the benefits technologies like generative AI will bring — particularly when it comes to beefing up cybersecurity systems — as well as thinking about what might attract a hacker to your company in the first place. 

In other words, how much would it cost to attack your business?