Your phone can create photorealistic humans in seconds. Brooke Schinault discovered law enforcement can detect them just as fast. The Florida woman told St. Petersburg police she’d been sexually assaulted on October 7, providing what she claimed was a photo of her attacker sitting on her couch. The evidence seemed compelling until forensic analysis revealed a digital smoking gun.
According to police reports, investigators determined the image was AI-generated, reportedly using ChatGPT's image capabilities. Worse for Schinault's story, court documents show investigators found the fabricated photo in a deleted folder dated several days before the alleged attack. Reality had timestamps; her fiction didn't.
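A minimal sketch of that kind of timeline check, with the file path and dates invented for illustration (the case records aren't public at this level of detail):

```python
import os
from datetime import datetime, timezone

# Hypothetical reported-incident date; the year and time are
# placeholders, not taken from the actual case file.
REPORTED_INCIDENT = datetime(2025, 10, 7, tzinfo=timezone.utc)

def predates_report(path: str) -> bool:
    """Flag a file whose last-modified time falls before the
    incident it supposedly depicts.

    Caveat: filesystem timestamps are easy to alter and differ
    across operating systems; real examiners corroborate them
    with filesystem journals, cloud-sync logs, and embedded
    metadata before drawing conclusions.
    """
    modified = datetime.fromtimestamp(os.stat(path).st_mtime,
                                      tz=timezone.utc)
    print(f"{path}: last modified {modified:%Y-%m-%d %H:%M} UTC")
    return modified < REPORTED_INCIDENT

if __name__ == "__main__":
    # Hypothetical file recovered from a deleted folder.
    if predates_report("recovered/IMG_0042.jpg"):
        print("Red flag: file predates the reported incident.")
```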
Schinault spent one night in jail and posted $1,000 bond after being charged with filing a false police report. Court records don't reveal her motive, though the incident occurred in October, which is observed as Domestic Violence Awareness Month.
The Evidence Arms Race
Consumer AI tools outpace law enforcement detection capabilities, creating forensic blind spots.
This case isn't just about one false report; it's about the collision between accessible AI and criminal justice. You can generate convincing photos faster than most people can fact-check them. While deepfakes dominated headlines with celebrity face-swaps, the real threat was always mundane: everyday people weaponizing AI for personal gain.
Law enforcement faces an uncomfortable truth: standard digital forensics playbooks assume human-created evidence. AI-generated content calls for different detection methods, from metadata and provenance analysis to statistical artifact detection, plus specialized training and updated protocols most departments haven't developed yet. Think of it like the shift from film photography to digital, except this time the learning curve involves distinguishing reality from artificial creation.
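To make one of those methods concrete, here's a toy metadata triage pass, assuming Pillow is installed. The marker strings and filename are illustrative assumptions, and absence of markers proves nothing, since a screenshot or re-save strips metadata; this is a first-pass heuristic, not a forensic tool:

```python
from PIL import Image  # pip install Pillow

# Illustrative provenance markers; some generators and the C2PA
# standard embed such strings, but this list is an assumption,
# not an authoritative registry.
PROVENANCE_MARKERS = [b"c2pa", b"jumbf", b"openai", b"midjourney"]

EXIF_MAKE, EXIF_MODEL = 271, 272  # standard EXIF tag IDs

def triage(path: str) -> dict:
    """First-pass metadata triage of a suspect image.

    Genuine camera photos usually carry Make/Model EXIF fields;
    AI-generated images usually don't, and some embed provenance
    markers instead. Neither signal is conclusive on its own.
    """
    report = {"camera_exif": {}, "markers_found": []}

    # 1. Check for the camera EXIF fields real photos tend to carry.
    with Image.open(path) as img:
        exif = img.getexif()
        for tag in (EXIF_MAKE, EXIF_MODEL):
            if tag in exif:
                report["camera_exif"][tag] = exif[tag]

    # 2. Raw byte scan for embedded provenance markers.
    with open(path, "rb") as f:
        data = f.read().lower()
    report["markers_found"] = [m.decode() for m in PROVENANCE_MARKERS
                               if m in data]
    return report

if __name__ == "__main__":
    print(triage("suspect_image.jpg"))  # hypothetical filename
```

Serious detection goes further, into statistical artifacts of generative models and cryptographically signed provenance, which is exactly the training and tooling most departments lack.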
The technology gap is widening. Consumer AI tools improve monthly while forensic capabilities advance at bureaucratic speed. Every police department now needs AI detection expertise, but few have the budget or training for it. When any app store offers image generators more capable than the tools crime labs fielded just a few years ago, the investigative playing field has fundamentally shifted.
This won’t be the last case. When creating fake evidence becomes as simple as typing a prompt, the entire foundation of digital proof shifts. Courts must grapple with authenticity questions that didn’t exist five years ago, while investigators play catch-up with technology that’s already in everyone’s pocket. The Florida incident serves as a wake-up call: in the age of accessible AI, “seeing is believing” no longer applies to criminal evidence.