Lawsuit Reveals “Nudify” Websites Exploiting Google, Apple, and Discord Sign-On Systems

A lawsuit exposes the alarming rise of AI-powered “nudify” websites exploiting Google, Apple, and Discord sign-on systems to create explicit images without consent.

By Al Landes

Key Takeaways

  • A lawsuit exposed “nudify” websites exploiting Google, Apple, and Discord sign-on systems to create non-consensual explicit images.
  • In the first half of 2024, 16 identified “nudify” websites had over 200 million visits, with advertising increasing by 2,400%.
  • Tech companies, governments, and society must work together to hold abusers accountable, support victims, and prevent deepfake abuse.

A recent lawsuit filed by the San Francisco City Attorney’s office has exposed a disturbing trend: “nudify” websites using AI-generated content to create explicit images of people without their consent. Even more alarming? These sites have been exploiting sign-on systems from tech giants like Google, Apple, and Discord, as reported by The Conversation.

The scope of the problem is staggering. In the first half of 2024 alone, 16 identified “nudify” websites racked up over 200 million visits. Advertising for these sites on social media has skyrocketed by 2,400% since the start of the year.

The impact on victims is severe. Deepfake abuse can ruin reputations, destroy careers, and cause devastating mental and physical health effects like social isolation, self-harm, and a loss of trust in others.

As reported by Wired, while Google has announced some measures to combat deepfake abuse, like removing non-consensual explicit deepfakes from search results, more action is needed from tech companies to stop the spread of these harmful sites. Apple and Discord have yet to publicly address the issue.

Current efforts fall short of preventing the creation and spread of “nudify” content, which makes holding perpetrators accountable all the more crucial. In Australia, criminal laws already target the non-consensual sharing of intimate images, and proposed legislation would create a federal offense.

Improving digital literacy through education initiatives can help users spot and challenge deepfakes. Governments also play a key role by introducing laws and regulations to block access to “nudify” sites and criminalize non-consensual image sharing.

Tech companies can build “guardrails” for AI image-generators to prevent the creation of harmful or illegal content, like watermarking synthetic images and using digital hashing to stop future sharing of non-consensual material.
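To make the hashing idea concrete, here is a minimal sketch of how a platform might maintain a blocklist of fingerprints for reported images and screen new uploads against it. This is an illustration, not any company's actual system: the `HashBlocklist` class and `fingerprint` function are hypothetical names, and the SHA-256 digest used here only catches byte-for-byte copies, whereas production systems use perceptual hashes (such as Microsoft's PhotoDNA or Meta's PDQ) that survive resizing and re-encoding.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact file.

    Simplified stand-in: real deployments use perceptual hashing,
    which matches visually similar images, not just identical bytes.
    """
    return hashlib.sha256(data).hexdigest()


class HashBlocklist:
    """Registry of fingerprints for images reported as non-consensual."""

    def __init__(self) -> None:
        self._blocked: set[str] = set()

    def register(self, image_bytes: bytes) -> None:
        # Called when a takedown report is verified.
        self._blocked.add(fingerprint(image_bytes))

    def is_blocked(self, image_bytes: bytes) -> bool:
        # Called at upload time, before the image is published.
        return fingerprint(image_bytes) in self._blocked
```

In this sketch, a verified report adds the image's fingerprint to the registry, so any later upload of the same file can be rejected before it spreads, which is the core of the "digital hashing" approach described above.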

Beyond legal and technical remedies, victims need support services and broader public awareness of the harms. Education promoting respectful relationships and critical thinking skills is vital to prevention.

As the fight against deepfake abuse continues, holding abusers accountable and supporting victims must be the top priorities. Tech companies, governments, and society as a whole all have crucial roles to play.

Image credit: Wikimedia
