OpenAI’s Sora 2 Can Generate Realistic Videos of People Shoplifting

New AI video tool raises concerns about fabricated criminal evidence and identity theft in legal proceedings

By Al Landes



Key Takeaways

  • Sora 2 generates realistic fake crime videos featuring anyone’s likeness without consent
  • Courts struggle to distinguish authentic security footage from AI-generated criminal evidence
  • Detection technology lags behind deepfake creation, leaving people vulnerable to fabricated accusations

Picture this nightmare scenario: security footage emerges showing you shoplifting from a store you’ve never visited, committing a crime that never happened. The video looks completely real because it is a real video file; the events it depicts simply never occurred. OpenAI’s new Sora 2 makes this dystopian possibility disturbingly achievable.

When AI Gets Too Good at Lying

Sora 2’s enhanced video generation capabilities can simulate complex scenarios with frightening accuracy.

This isn’t your typical “AI generates funny cat videos” story. Sora 2 delivers enhanced realism and control over video content, capable of generating complex scenarios like Olympic gymnastics routines. That level of sophistication means it can just as easily fabricate footage of criminal activity.

Your face, your mannerisms, your distinctive walk—all potentially hijacked for synthetic evidence that could fool investigators, employers, or anyone else who matters in your life.

The Courtroom Deepfake Dilemma

Fake videos of people committing crimes could undermine evidence and due process.

According to research findings, Sora 2 can generate fake videos of people committing crimes, creating unprecedented challenges for legal proceedings. How do courts distinguish between authentic security footage and AI-generated fabrications?

Defense attorneys and prosecutors now face the burden of proving authenticity in ways that didn’t exist five years ago. The technology moves faster than legal frameworks, leaving a dangerous gap where synthetic evidence might be treated as real.

Your Digital Doppelganger Problem

OpenAI introduces safeguards, but broader privacy concerns remain unresolved.

OpenAI has introduced a “cameo” feature allowing users to control their likenesses, but this reactive approach highlights the core problem. Your image exists in countless photos across social media, news articles, and public records—all potential training data for AI systems you never consented to participate in.

The company implements safety measures and moderation policies, yet the fundamental privacy violation occurs during the data collection phase, long before any video gets generated.

Detection technology is losing the arms race with generation, leaving the potential for abuse largely unchecked.

The Detection Deficit

As AI video generation advances, authentication tools struggle to keep up.

This mirrors the ongoing battle between spam filters and spammers, except the stakes involve your reputation and legal standing. While OpenAI claims to prioritize safety measures, the company’s track record suggests moving fast and dealing with consequences later.

You’re left hoping that detection technology evolves quickly enough to protect you from becoming an unwilling star in someone else’s fabricated crime drama. The question isn’t whether this technology will be misused—it’s whether society can adapt its legal and social frameworks fast enough to handle the consequences.

