Nearly one billion identity records and millions of personal photos vanished into unsecured databases, thanks to two AI-powered services you’ve probably encountered. IDMerit, the identity verification platform banks use for account setup, exposed records from 26 countries. Meanwhile, a popular Android app for AI video generation leaked user content alongside generated media. Your national ID, selfies, and AI creations were sitting unprotected in cloud storage—like leaving your wallet on a subway seat.
The Identity Verification Disaster
IDMerit’s MongoDB database hemorrhaged 1 billion KYC (Know Your Customer) records, with Americans hit hardest: more than 203 million were affected. Full names, addresses, postal codes, birth dates, national IDs, phone numbers, and email addresses—everything needed for comprehensive identity theft—sat exposed from November 11 until researchers alerted the company the following day. These weren’t just random accounts: this data powers the verification screens you see when opening bank accounts or accessing financial services.
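Breaches like this are usually found by scanning for MongoDB’s default port (27017) left reachable from the internet. The article doesn’t describe the researchers’ exact method, but the basic check is trivial—a minimal sketch, where `port_open` is an illustrative helper name, not anything from the source:

```python
import socket

def port_open(host: str, port: int = 27017, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A database port answering unauthenticated connections from the open
    internet is the first red flag scanners look for; 27017 is MongoDB's
    default. This only tests reachability, not whether auth is enabled.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns `True` for a production database host, the next question is whether the server also accepts commands without credentials—which is exactly the misconfiguration behind most exposed-MongoDB incidents.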
AI Photo App Adds to the Mess
Codeway’s “Video AI Art Generator & Maker”—downloaded over 500,000 times—leaked 2.87 million AI-generated videos, 386,000 audio files, and 1.57 million user images through misconfigured cloud storage. Your experimental face swaps and artistic video filters were publicly accessible until February 3, 2025. The app’s popularity mirrors our growing comfort with uploading personal media to AI services, often without considering where that content lives afterward.
Real Consequences Beyond Headlines
“At this scale, downstream risks include account takeovers, targeted phishing, credit fraud, SIM swaps,” warn the Cybernews researchers who discovered both breaches. While neither company confirmed malicious exploitation, automated crawlers likely harvested the exposed data long before access was locked down. This creates a frustrating reality: companies claim “no harm done” while admitting they can’t track who accessed what.
The AI Trust Reckoning
These exposures spotlight an uncomfortable reality—AI services handle your most sensitive data with startup-level security practices. Every face filter app and identity verification service becomes a potential single point of failure. Check your breach status on Have I Been Pwned, audit which AI apps have access to your photos, and remember that “powered by AI” doesn’t automatically mean “secured by professionals.”
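Checking whether your credentials have surfaced in a breach doesn’t require handing anyone your password. Have I Been Pwned’s Pwned Passwords range API uses k-anonymity: you send only the first five characters of the password’s SHA-1 hash, get back all matching hash suffixes, and compare locally. A minimal sketch (function names are mine, the endpoint is HIBP’s public, keyless range API):

```python
import hashlib
from urllib import request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and 35-char suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches (0 if none).

    Only the 5-character hash prefix ever leaves your machine.
    """
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password"))  # a famously breached password
```

Checking breach status by email address works the same way conceptually, but HIBP’s account endpoint requires an API key; the password range API does not.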