Pro AI Tools

Know what's happening in AI

AI News

Security Researchers Issue Stark Warning: Do Not Use DeepSeek-R1

[ad_1]

The DeepSeek-R1 model has sent shockwaves through the AI industry. Its rapid rise to prominence has been fueled by organisations like Ola Krutrim, which has integrated the model available into its cloud infrastructure. Considering DeepSeek’s popularity, many other companies are poised to follow suit.

However, several key questions arise: Would it be safe if they decided to integrate DeepSeek-R1 into their organisations? Is that advisable from a security perspective? What are the recommendations?

Cybersecurity firms such as Threatsys, an Indian cybersecurity firm, and AppSOC have identified significant security issues related to the DeepSeek AI model. These insights must be examined more closely to determine whether DeepSeek is suitable for any organisation.

Getting The Basics Wrong With DeepSeek

According to a report from Threatsys, the official hosted platform for DeepSeek R1 was found to have multiple security vulnerabilities, indicating a sign of hasty implementation.

The investigation revealed that the platform is susceptible to cross-site scripting (XSS), which allows attackers to inject malicious code into the web pages viewed by users. Furthermore, unauthorised access to accounts and intercepting sensitive user information, including session logs and cookies, was also possible. 

Deepak Kumar Nath, CEO and founder of Threatsys, said, “Threatsys acted swiftly and responsibly by notifying DeepSeek of these vulnerabilities. The company promptly secured the exposed issues, preventing potential large-scale exploitation. However, this incident highlights a critical lesson for AI developers: security should never be an afterthought.”

To add to his thoughts, Debarshi Das, a senior security engineer at we45, told AIM, “Generally, in tech, when adoption is done at a super fast rate due to FOMO (fear of missing out), security is left out. That’s where the problem begins.”

Glaring Security Risks of the AI Model at its Core

The platform could potentially fix the vulnerabilities and improve security. But what if the model itself is not safe enough?

An AppSOC report mentions alarming failure rates in key security areas. The testing included static analysis, dynamic tests, and red-teaming techniques. 

DeepSeek-R1 bypassed its safety mechanisms, generating harmful content with a failure rate of 91%. The model was also tested against the ability to generate malicious code, where it had a 93% failure rate, meaning it could be weaponised easily to create phishing scripts, malware, and other tools for cyberattacks.

The security researchers also witnessed a 68% failure rate when trying to generate toxic or harmful language, indicating poor safeguards.

Moreover, the tests also found a failure rate of 81% and 86% for hallucinations and prompt injection attacks, respectively.

“These issues collectively led AppSOC researchers to issue a stark warning: DeepSeek-R1 should not be deployed for any enterprise use cases, especially those involving sensitive data or intellectual property,” the researchers noted.

Indian Government’s Push for Sovereign AI and Trust Issues

During an interview with AIM at MLDS 2025, Rohit Thakur, GenAI lead at Synechron, said, “It’s a Chinese company; people are not really that comfortable sharing the data. We’re dealing with the first generation of reasoning models; they will get better as time passes, so we’ll just wait and watch.”

In addition to trust issues with Chinese companies, the Indian government has been pushing to build sovereign LLMs. Startups like Sarvam AI are already in discussion with the government on how to kickstart this effort.

Companies like Tata Communications have also started partnering with AI startups like CoRover.ai to provide infrastructure for AI solutions for governments and enterprises.

With developments like this, DeepSeek may not be a future-proof choice for every use case, even if the security issues are addressed.

To Use or Not to Use?

Meanwhile, Das said, “I guess in a restricted environment, you’re free to use any model, making sure that you handle LLM pitfalls so that rogue or exploited LLMs don’t become a problem.”

Considering this insight, one should keep in mind the security implications of an AI model before integrating it into an organisation.

Organisations should follow best practices when using the AI model. While self-hosting seems a safer alternative, it comes with its share of issues, as highlighted by reports.

If the selection is based on cost and DeepSeek proves to be useful, it may be worth trying while keeping the associated risks in mind. Considering it is open source, it can be adapted to specific needs, though careful consideration should be given before incorporating it into solutions.

[ad_2]

Source link

Pro AI Tools is a seasoned expert in the field of artificial intelligence and technology. With a passion for innovation and a keen understanding of AI's transformative power, they have dedicated their career to exploring and sharing insights into cutting-edge tools and technologies.Drawing from extensive experience in the tech industry, Pro AI Tools is committed to providing valuable resources and comprehensive reviews to help individuals and businesses leverage AI for enhanced productivity and success. Their expertise spans a wide range of AI applications, from machine learning and natural language processing to automation and data analysis.Pro AI Tools believes in the potential of technology to drive positive change and is dedicated to making complex concepts accessible to a broad audience. Through their website, ProAITools.tech, they aim to empower users with the knowledge and tools needed to stay at the forefront of AI advancements.When not immersed in the latest tech developments, Pro AI Tools enjoys exploring new technologies, attending industry conferences, and sharing insights with a community of tech enthusiasts.