Massive Data Leak Exposes 1 Billion IDs From AI Apps – Here’s What You Need to Know

IDMerit and Codeway exposed 1 billion identity records and millions of user photos through unsecured cloud databases

By C. da Costa

Image: Flickr – Blogtrepreneur

Key Takeaways

  • IDMerit exposed 1 billion identity records from 26 countries through an unsecured MongoDB database
  • AI video app leaked 2.87 million generated videos and 1.57 million user images
  • Breaches enable account takeovers, targeted phishing, credit fraud, and SIM swap attacks

Nearly one billion identity records and millions of personal photos were left sitting in unsecured databases by two AI-powered services you've probably encountered. IDMerit, the identity verification platform banks use for account setup, exposed records from 26 countries. Meanwhile, a popular Android app for AI video generation leaked user uploads alongside its generated media. Your national ID, selfies, and AI creations were sitting unprotected in cloud storage, like leaving your wallet on a subway seat.

The Identity Verification Disaster

IDMerit's MongoDB database hemorrhaged 1 billion KYC (Know Your Customer) records, with Americans hit hardest: more than 203 million records belonged to US residents. Full names, addresses, postal codes, birth dates, national IDs, phone numbers, and email addresses (everything needed for comprehensive identity theft) sat exposed from November 11 until researchers alerted the company the following day. These weren't just random accounts. This data powers the verification screens you see when opening bank accounts or accessing financial services.
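For context, this class of exposure typically comes from a MongoDB instance running with no access control and listening on a public interface, MongoDB's historical out-of-the-box behavior on some installs. A minimal hardening sketch (assuming a standard self-managed `mongod.conf`; IDMerit's actual deployment details are not public) looks like this:

```yaml
# /etc/mongod.conf – illustrative hardening, not IDMerit's actual config
security:
  authorization: enabled   # require users to authenticate before reading data
net:
  bindIp: 127.0.0.1        # listen only on localhost, not every interface
```

With `authorization: enabled`, an anonymous crawler that reaches the port can no longer dump collections; with `bindIp` restricted, it can't reach the port at all from the open internet.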

AI Video App Adds to the Mess

Codeway’s “Video AI Art Generator & Maker”—downloaded over 500,000 times—leaked 2.87 million AI-generated videos, 386,000 audio files, and 1.57 million user images through misconfigured cloud storage. Your experimental face swaps and artistic video filters were publicly accessible until February 3, 2025. The app’s popularity mirrors our growing comfort with uploading personal media to AI services, often without considering where that content lives afterward.
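"Misconfigured cloud storage" usually means an object-storage bucket whose contents are publicly listable. The report doesn't name Codeway's cloud provider, so purely as an illustration, here is how a team on AWS S3 would switch off all public access paths with Terraform (the resource and bucket names are hypothetical):

```hcl
# Hypothetical example: block every route to public exposure
# for an S3 bucket holding user-uploaded media.
resource "aws_s3_bucket_public_access_block" "user_media" {
  bucket                  = aws_s3_bucket.user_media.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

A setting like this makes a later "oops, someone attached a public policy" mistake fail loudly instead of silently exposing 1.57 million user images.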

Real Consequences Beyond Headlines

“At this scale, downstream risks include account takeovers, targeted phishing, credit fraud, SIM swaps,” warn the Cybernews researchers who discovered both breaches. While neither company confirmed malicious exploitation, automated crawlers likely harvested the exposed data long before the databases were locked down. This creates a frustrating reality: companies claim “no harm done” while admitting they can't track who accessed what.

The AI Trust Reckoning

These exposures spotlight an uncomfortable reality—AI services handle your most sensitive data with startup-level security practices. Every face filter app and identity verification service becomes a potential single point of failure. Check your breach status on Have I Been Pwned, audit which AI apps have access to your photos, and remember that “powered by AI” doesn’t automatically mean “secured by professionals.”
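Checking whether a specific email address appears in a breach on Have I Been Pwned requires a paid API key, but the service's free Pwned Passwords endpoint shows how a privacy-preserving breach check works: only the first five characters of your password's SHA-1 hash ever leave your machine (k-anonymity). A minimal sketch, using only the Python standard library:

```python
import hashlib
import urllib.request


def hash_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix that stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches
    via the Pwned Passwords k-anonymity range endpoint."""
    prefix, suffix = hash_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        # Response lines look like "SUFFIX:COUNT"; match ours locally.
        for line in resp.read().decode("utf-8").splitlines():
            tail, _, count = line.partition(":")
            if tail == suffix:
                return int(count)
    return 0
```

Because the server only ever sees a five-character hash prefix, it learns nothing about which password you actually checked, a useful pattern for any breach-lookup tooling.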

