Chinese AI startup DeepSeek sent shockwaves through the public markets after releasing R1, its high-performing, low-cost reasoning model, last month.
A few weeks on, a number of European startups told Sifted they were either integrating or experimenting with DeepSeek’s models, despite some security concerns.
Why is DeepSeek so controversial?
DeepSeek claimed its large language model (LLM) cost less than $6m to train, a fraction of the sums reportedly spent by leading AI companies such as OpenAI and Google.
With a high-performance, low-cost player in town, there are reasons to doubt whether competitors can run viable businesses selling the same product. That doubt is probably why big tech players like Nvidia, Microsoft and Amazon saw their shares slide shortly after R1's release.
But even as it grows in popularity, questions have been raised about the reliability of DeepSeek’s models — and how risky using them could be.
Who's using DeepSeek?
DeepSeek’s models have proven hugely popular with consumers and businesses around the world, and its chatbot has become one of the most downloaded apps worldwide. Companies like Databricks, as well as ex-Intel CEO Pat Gelsinger, have integrated R1 into their workflows.
This week Sifted reported a growing number of European startups had started integrating the company’s models into their products.
DeepSeek’s high performance and low cost have made it a hit with tech companies looking to incorporate its models into their stack.
“If you have built your application using OpenAI, you can easily migrate to the other ones,” Hemanth Mandapati, CEO of German startup Novo AI, told Reuters. “It took us minutes to switch.”
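That ease of switching comes from DeepSeek exposing an OpenAI-compatible API. As a rough illustration (not Novo AI's actual code), the change can amount to swapping the base URL and model name in an existing OpenAI client; the API key and prompt below are placeholders:

```python
# Minimal sketch of the kind of switch Mandapati describes, assuming the
# OpenAI Python SDK and DeepSeek's OpenAI-compatible endpoint.
from openai import OpenAI

# Before: the client pointed at OpenAI, e.g.
# client = OpenAI(api_key="OPENAI_API_KEY")
# response = client.chat.completions.create(model="gpt-4o-mini", ...)

# After: the same client, repointed at DeepSeek's API.
client = OpenAI(
    api_key="DEEPSEEK_API_KEY",            # placeholder key
    base_url="https://api.deepseek.com",   # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",                 # or "deepseek-reasoner" for R1
    messages=[{"role": "user", "content": "Summarise this contract clause."}],
)
print(response.choices[0].message.content)
```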
What are the security risks?
This largely depends on how you’re using DeepSeek’s models: if you use the company’s API, your data will likely be stored on servers in China, with an as-yet-unknown level of access granted to the Chinese government.
"With the DeepSeek API, a company’s data will be leaving their boundary, and depending upon the nature of the data, this may pose a risk,” says Dr. Stuart Millar, principal AI engineer at cybersecurity company Rapid7.
“The data itself may then reside on servers located in China. This could include copies of the questions asked of DeepSeek, and also the responses, which may be a security risk.”
However, because DeepSeek’s models are open-weights, you can run them yourself, either on your own hardware or via a cloud provider’s serverless offering. In that case you essentially get your own copy of the model inside your environment, and your prompts never reach DeepSeek’s servers.
“They should check the terms with the provider,” Millar says. “If that is the case, since no data leaves the boundary, that is a better scenario than directly using the DeepSeek API.”
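In practice, running it yourself can mean downloading one of DeepSeek’s open-weights checkpoints and serving it on infrastructure you control. A minimal sketch, assuming the Hugging Face transformers library and one of the smaller distilled R1 checkpoints (the exact model ID is an assumption, check the hub):

```python
# Rough sketch of the self-hosted alternative Millar describes: running an
# open-weights DeepSeek model inside your own environment so prompts never
# leave it. The model ID below is an assumption.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # assumed checkpoint name
)

# The prompt is processed locally (or on infrastructure you control),
# rather than being sent to DeepSeek's servers.
out = generator("Explain the EU AI Act in one sentence.", max_new_tokens=100)
print(out[0]["generated_text"])
```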
What are the legal risks?
Over the next two years, the EU will be steadily rolling out the AI Act, the world’s first comprehensive set of laws governing the technology.
The Act categorises different types of AI based on how risky they are, covering everything from low-risk applications, like spam filters in email inboxes, to high-risk ones, like predictive policing or China-style social credit systems.
Under the terms of the Act, providers of AI systems — such as OpenAI or Google — face just as many terms and conditions as deployers, the startups building their products on top of the baseline models.
“Where companies build their solutions on top of R1 or another DeepSeek model, they will likely be deployers per the Act,” says Tim Wright, partner and tech lawyer at UK-based law firm Fladgate.
“This means they face a raft of compliance issues, which will be even more stringent if their AI is classified as high-risk.”
LatticeFlow, a Swiss company focused on trustworthy AI, published a study of DeepSeek on Tuesday, which suggested the company’s models may not be compliant with the EU’s AI Act.
The report suggested DeepSeek’s models were more susceptible to hijacking (being manipulated into leaking sensitive information) and showed significantly higher bias than rivals.
Robert Kilian, CEO of CertifAI, an AI testing and certification company in Germany, tells Sifted: “Companies integrating DeepSeek models need to assess whether they are modifying the model for a high-risk application, and hence, can become a high-risk provider themselves."