Another day, another AI company left the front door wide open. Security researchers at Wiz recently found that DeepSeek, a Chinese generative AI platform, had accidentally exposed a massive online database. And when I say massive, I mean over a million records, including user prompts, API keys, and system logs.
The Wiz team immediately tried to contact DeepSeek through every email address and LinkedIn profile they could find. The company never responded directly, but the database was locked down within 30 minutes of the report, so someone there clearly got the message. The big question is: who else may have accessed the data before that happened?
This was entirely avoidable. The database was reportedly reachable from the open internet with no authentication at all, and it gave up far too much.
This is another reminder that even major AI platforms aren’t immune to basic security failures. If companies collecting and processing sensitive data don’t take security seriously, it’s only a matter of time before someone less friendly than a security researcher finds the same loophole.
DeepSeek is growing fast, but this slip-up raises a genuine question: can the platform be trusted with users' data?
Remember, developers: LOCK DOWN YOUR DATABASES!!
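The exposed service in DeepSeek's case was reportedly a ClickHouse instance answering over plain HTTP, and that kind of exposure is straightforward to test for yourself. Here is a minimal sketch, assuming Python with the third-party `requests` library and ClickHouse's default HTTP port (8123); the hostname is a placeholder for your own infrastructure, and you'd run it from *outside* your network so you see what an attacker would see.

```python
import requests  # third-party: pip install requests

# Placeholder host; ClickHouse serves its HTTP interface on port 8123 by default.
HOSTS = ["db.example.com"]
PORT = 8123


def is_publicly_queryable(host: str, port: int = PORT) -> bool:
    """Return True if the ClickHouse HTTP endpoint answers a query without credentials."""
    try:
        resp = requests.get(
            f"http://{host}:{port}/",
            params={"query": "SHOW DATABASES"},
            timeout=5,
        )
    except requests.RequestException:
        # Unreachable from the outside, which is exactly what you want.
        return False
    # An open instance returns 200 plus a list of databases;
    # a locked-down one returns 401/403 or refuses the connection.
    return resp.status_code == 200 and bool(resp.text.strip())


if __name__ == "__main__":
    for host in HOSTS:
        status = "EXPOSED" if is_publicly_queryable(host) else "not publicly queryable"
        print(f"{host}: {status}")
```

If a check like this ever prints EXPOSED, the fix is the boring one: bind the database to an internal interface, require credentials, and put it behind a firewall before worrying about anything else.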