This week, Tea, a viral women-only dating safety app, suffered a massive data breach after 4chan users discovered that its backend database was completely unsecured. No password, no encryption, nothing.
The result? Over 72,000 private images, including selfies and government IDs submitted for user verification, were scraped and spread online within hours. Some have been plotted on searchable maps. Private DMs have been leaked. The app, designed to protect women from dangerous men, just exposed its entire user base.
The exposed data totals 59.3 GB, including:
- Over 13,000 verification selfies and government-issued IDs
- Tens of thousands of images from messages and public posts
- IDs dated as recently as 2024 and 2025, contradicting Tea's claim that only "old data" was involved in the breach
4chan users posted the file first, but even after the original thread was deleted, automated scripts continued to redistribute the data. On a distributed platform like BitTorrent, once data gets out, it's out forever.
From viral app to total meltdown
Tea hit #1 on the App Store and was riding the wave of virality with over 4 million users. Its pitch: a women-only space for "gossip" about men, for safety purposes. Critics saw it as a "man-shaming" platform wrapped in empowerment branding.
One Reddit user summed up the schadenfreude: "Guy creates a female-centric app to call out unwanted men, accidentally gets his female user base doxxed. I love it."

Verification required users to upload government IDs and selfies. Now those documents are in the wild.
404 Media and Decrypt reached out to the company for comment but have not yet received an official response.
The culprit: "vibe coding"
As the original hacker put it: "This is what happens when you entrust your personal information to DEI hires and vibe coders."
"Vibe coding" is when a developer types "build me a dating app" into ChatGPT or another AI chatbot and ships whatever comes out. No security reviews, no understanding of what the code actually does. Just vibes.
Apparently, Tea's Firebase bucket had zero authentication because that's what the AI tools generated by default. "No auth, nothing. It's a public bucket," said the original leaker.
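Tea's actual configuration hasn't been published, but for illustration, a wide-open Firebase Storage ruleset, the kind of default the leakers described, looks roughly like this (a sketch using Firebase's standard security rules syntax):

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Wide open: anyone on the internet can read and write every file,
    // no login required. Verification selfies and IDs stored here are public.
    match /{allPaths=**} {
      allow read, write: if true;
    }
  }
}
```

The minimal fix is gating access on authentication, e.g. `allow read, write: if request.auth != null;`, and a real app would further scope each path to the user who owns it. That's exactly the kind of review step that vibe-coded projects skip.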
Maybe it was vibe coding, maybe just plain bad coding. Either way, over-reliance on generative AI is only growing.
This is not an isolated incident. In 2025, the founder of SaaStr watched an AI agent delete his company's entire production database during a "vibe coding" session. The agent then created fake accounts, generated hallucinated data, and lied about the logs.
Meanwhile, researchers at Georgetown University found that 48% of AI-generated code contains exploitable flaws, while 25% of Y Combinator startups use AI-generated code for core features.
So vibe coding may work for the occasional side project, and tech giants like Google and Microsoft preaching the gospel of AI can claim that chatbots write impressive chunks of their code. But average users and small startups may want to stick with human-written code.
"Vibe coding is great, but the code these models generate is full of security holes and can be easily hacked," computer scientist Santiago Valdarrama warned on social media.
Vibe coding is great, but the code these models generate is full of security holes and can be easily hacked.
This is a live, 90-minute session where @SNYKSEC builds a demo application using Copilot + ChatGPT and then hacks it to find all the weaknesses the AI generated.
– Santiago (@svpino) March 17, 2025
The problem gets worse with "slopsquatting": AI suggests packages that don't exist, hackers register those names and fill them with malicious code, and developers install them without checking.
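The defense is boring but simple: never install a dependency just because a chatbot named it. A minimal sketch of that discipline, with a hypothetical allowlist and a made-up package name standing in for the kind an LLM might hallucinate:

```python
# Guard against "slopsquatting": vet AI-suggested dependency names
# against a list you have actually reviewed before installing anything.
# KNOWN_GOOD and the package names below are illustrative, not real policy.

KNOWN_GOOD = {"requests", "flask", "sqlalchemy", "pydantic"}

def vet_dependencies(suggested):
    """Split suggested package names into approved and suspect lists."""
    approved = [pkg for pkg in suggested if pkg.lower() in KNOWN_GOOD]
    suspect = [pkg for pkg in suggested if pkg.lower() not in KNOWN_GOOD]
    return approved, suspect

# "flask-gpt-auth" is invented here; an unvetted name like this is exactly
# what a slopsquatter hopes you'll pip-install on an AI's say-so.
approved, suspect = vet_dependencies(["requests", "flask-gpt-auth"])
print(approved)  # ['requests']
print(suspect)   # ['flask-gpt-auth']
```

In practice the allowlist lives in a lockfile with pinned versions and hashes, but the principle is the same: a human decides what gets installed, not the model.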
Tea users are scrambling, and some IDs are already displayed on searchable maps. Signing up for credit monitoring may be a good idea for users looking to prevent further damage.