Your photos scattered across social media just became evidence against you. Angela Lipps, a 50-year-old Tennessee grandmother, spent more than five months in custody for bank fraud she never committed, all because Clearview AI's facial recognition system flagged her as a suspect based on surveillance footage from North Dakota, a state she had never even visited.
Clearview AI maintains a database of billions of photos scraped from the internet and social platforms. West Fargo Police ran surveillance video through this system and received a match flagging Lipps as having “similar features” to their fraud suspect. They shared this AI-powered lead with Fargo Police, who built their entire case around it without verifying basic facts like whether Lipps had ever been to North Dakota.
When Algorithms Replace Detective Work
The system failed catastrophically at every level. Fargo Police issued an arrest warrant on July 1, 2025, leading to Lipps’ detention on July 14. For over five months, she remained locked up while her family scrambled to prove her innocence.
Bank records eventually confirmed what a basic investigation should have revealed immediately: Lipps was in Tennessee when the crimes occurred, making her participation impossible.
No Apology, Just Process Changes
Fargo Police Chief Dave Zibolski acknowledged “a few errors” but issued no apology, citing the ongoing investigation. The department now:
- Prohibits using West Fargo’s AI information
- Requires monthly oversight reviews
- Promises improved warrant procedures
Attorney Jay Greenwood criticized the department's overreliance on the match: "The problem is they used it as pretty much the only tool."
Your Digital Footprint, Their Criminal Database
This case exposes the dangerous gap between AI marketing promises and real-world reliability. Clearview's billions of scraped photos mean your vacation selfies, professional headshots, and tagged photos could theoretically flag you as a criminal suspect anywhere in America.
As Professor Ian Adams notes, "It's not just a technology problem, it's a technology and people problem." Human oversight failed alongside a flawed algorithm. Lipps' attorneys are exploring civil rights lawsuits that could establish crucial precedents for liability in AI misidentification cases.