DEEPSEEK EXPOSES SENSITIVE DATABASE: CHAT HISTORY, API KEYS, AND AI SECURITY RISKS
Wiz Research discovered that DeepSeek, a Chinese AI startup, exposed over 1 million log entries, including chat history and internal information. This incident highlights significant security risks in emerging AI systems.

1. Summary

Wiz Research recently identified a publicly accessible ClickHouse database from DeepSeek, the Chinese AI startup, which did not require authentication. This database contained over a million log entries, including chat history, secret API keys, backend system details, and other sensitive information. Notably, attackers could perform arbitrary SQL operations to escalate privileges or gain control of the database. After Wiz Research's disclosure, DeepSeek quickly addressed the issue. This article provides a detailed analysis of the discovery process and key lessons on cybersecurity in the age of AI.

2. Context: DeepSeek – A Prominent AI "Unicorn"

DeepSeek has recently garnered attention with its DeepSeek-R1 AI model, which is considered on par with leading systems like OpenAI's GPT but more cost-effective and resource-efficient. However, its focus on rapid technology development may have led DeepSeek to overlook some fundamental security practices.

3. Incident Details: From Discovery to Exploitation

a. Network Scanning & Vulnerability Discovery
The Wiz Research team began by scanning DeepSeek's public domains. After enumerating about 30 subdomains, they noticed two unusual open service ports, 8123 and 9000, on several DeepSeek hosts. These are the default HTTP and native TCP ports of ClickHouse, an open-source real-time analytics database management system. Remarkably, the service required no authentication, allowing anyone to query it directly.
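To illustrate how exposed such a setup is, the sketch below shows a minimal, hypothetical probe of ClickHouse's HTTP interface. The host name is a placeholder; port 8123 and the unauthenticated "default" user behavior are standard ClickHouse defaults, and this is a simplified sketch, not the tooling Wiz Research used.

```python
import urllib.request
import urllib.parse

def clickhouse_query_url(host: str, port: int, sql: str) -> str:
    """Build a URL for ClickHouse's HTTP interface (default port 8123).

    A server with no authentication configured answers such requests
    as the built-in 'default' user, with no credentials required.
    """
    return f"http://{host}:{port}/?{urllib.parse.urlencode({'query': sql})}"

def is_open_clickhouse(host: str, port: int = 8123, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers a trivial query without credentials."""
    url = clickhouse_query_url(host, port, "SELECT 1")
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().strip() == b"1"
    except Exception:
        # Connection refused, timeout, or an authentication error all
        # indicate the endpoint is not openly queryable.
        return False
```

A single successful `SELECT 1` with no credentials is exactly the condition that made the DeepSeek database publicly readable.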
b. Discovery of Sensitive Data
By running the simple SQL command SHOW TABLES;, the research team found that the log_stream table contained:
  • User chat history
  • API keys and backend configuration
  • System activity metadata
  • Server directory structure
Importantly, because arbitrary SQL could be executed, attackers could potentially use ClickHouse's file() table function to read files on the server (e.g., SELECT * FROM file('/etc/passwd')), exposing further sensitive data and opening a path to deeper compromise of the host.
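The enumeration steps described above can be sketched as the literal SQL an auditor would send over the same HTTP interface. The host is hypothetical; ClickHouse returns TabSeparated rows by default over HTTP, which the small parser below splits.

```python
import urllib.request
from urllib.parse import urlencode

# The reconnaissance queries described in the article, in order.
RECON_QUERIES = [
    "SHOW TABLES",                          # reveals log_stream, among others
    "SELECT * FROM log_stream LIMIT 10",    # chat history, keys, metadata
]

def parse_tsv(body: str) -> list:
    """Split ClickHouse's default TabSeparated HTTP output into rows of columns."""
    return [line.split("\t") for line in body.splitlines() if line]

def run_query(host: str, sql: str, timeout: float = 5.0) -> list:
    """Send one query to an exposed ClickHouse HTTP endpoint, unauthenticated."""
    url = f"http://{host}:8123/?{urlencode({'query': sql})}"
    with urllib.request.urlopen(url, timeout=timeout) as resp:  # no credentials sent
        return parse_tsv(resp.read().decode("utf-8", errors="replace"))
```

Two plain GET requests, with no credentials, were enough to go from "open port" to reading user chat logs.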

4. Risks & Security Lessons

a. AI ≠ Security by Default
Rushing to deploy AI without securing the underlying infrastructure is a fatal trap. The DeepSeek case shows that:
  • Simple configuration errors (like open ports without authentication) can lead to massive data leaks.
  • AI training data and user logs need to be encrypted and tightly permissioned.
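For the first point, one concrete hardening step for ClickHouse itself is to require credentials and restrict source networks in users.xml. The fragment below is illustrative only; the hash and network range are example values, not DeepSeek's actual configuration.

```xml
<!-- Illustrative ClickHouse users.xml fragment: require a password for the
     built-in 'default' user and only accept connections from internal hosts.
     Values here are examples, not a real deployment's settings. -->
<clickhouse>
    <users>
        <default>
            <!-- SHA-256 hex digest of the chosen password -->
            <password_sha256_hex>0000000000000000000000000000000000000000000000000000000000000000</password_sha256_hex>
            <networks>
                <ip>10.0.0.0/8</ip>  <!-- internal network only -->
            </networks>
        </default>
    </users>
</clickhouse>
```

Either setting alone would have prevented anonymous internet-wide access to the log_stream table.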
b. Don’t Focus Only on “Future Threats”
Discussions of AI security often center on speculative, futuristic risks such as model manipulation or adversarial attacks. The DeepSeek incident shows that the immediate danger usually lies elsewhere:
  • The most damaging exposures come from ordinary mistakes: open ports, missing authentication, and unprotected internal databases.
  • Basic security hygiene (asset inventory, network scanning, access control) must be in place before worrying about exotic attack scenarios.
c. Be Cautious with Emerging AI Platforms
Many AI startups today attract users with promises of “low cost, high performance.” However, lower costs often come with trade-offs in security.
  • Users should avoid putting sensitive data into unverified systems.
  • Emerging AI platforms often prioritize development speed over security.
  • Choosing a provider should be based on clear standards, especially for systems that handle private data or proprietary code.

5. Conclusion: AI Needs “Cloud-Level” Security Frameworks

The current pace of AI development is outstripping the security capabilities of many startups. To mitigate risks, the industry must:
  1. Treat AI data as a Critical Infrastructure asset
  2. Apply cloud security standards, like those used by AWS/Azure, across the entire AI ecosystem
  3. Integrate DevSecOps throughout the model development process
The DeepSeek incident serves as a wake-up call: In the AI race, security must be the "referee," not just a spectator.
In fact, even leading companies like OpenAI have faced incidents in which chat histories were exposed due to system errors. This shows that no system is completely immune to risk. Organizations and individuals developing AI must stay vigilant: behind models with billions of parameters and impressive demos, security is what keeps systems, and careers, sustainable in the long term.
Author: Nguyễn Anh Bình
Source: Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History
4/22/2025