A recent lawsuit filed by the San Francisco City Attorney’s office has exposed a disturbing trend: “nudify” websites that use AI to create explicit images of people without their consent. Even more alarming? As The Conversation reports, these sites have been exploiting sign-on systems from tech giants like Google, Apple, and Discord.
The scope of the problem is staggering. In the first half of 2024 alone, 16 identified “nudify” websites racked up over 200 million visits. Advertising for these sites on social media has skyrocketed by 2,400% since the start of the year.
The impact on victims is severe. Deepfake abuse can ruin reputations, destroy careers, and cause devastating mental and physical health effects like social isolation, self-harm, and a loss of trust in others.
As reported by Wired, while Google has announced some measures to combat deepfake abuse, like removing non-consensual explicit deepfakes from search results, more action is needed from tech companies to stop the spread of these harmful sites. Apple and Discord have yet to publicly address the issue.
Current efforts fall short of preventing the creation and spread of “nudify” content, which makes holding perpetrators accountable all the more crucial. In Australia, for example, criminal laws target the non-consensual sharing of intimate images, and proposed legislation would create a federal offense.
Improving digital literacy through education initiatives can help users spot and challenge deepfakes. Governments also play a key role by introducing laws and regulations to block access to “nudify” sites and criminalize non-consensual image sharing.
Tech companies can build “guardrails” for AI image-generators to prevent the creation of harmful or illegal content, like watermarking synthetic images and using digital hashing to stop future sharing of non-consensual material.
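To illustrate the hashing idea mentioned above: platforms can keep a blocklist of hashes of known non-consensual material and reject any upload whose hash matches. The sketch below is a minimal, assumed example using an exact cryptographic hash (SHA-256) from Python’s standard library; the function names and the blocklist are hypothetical, and real systems such as Microsoft’s PhotoDNA use perceptual hashing so that resized or slightly altered copies still match.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_blocked(path: str, blocklist: set[str]) -> bool:
    """Check an upload's hash against a (hypothetical) blocklist of
    hashes of known non-consensual images."""
    return sha256_of_file(path) in blocklist
```

Exact hashing only catches byte-identical copies, which is why deployed systems favor perceptual hashes; the overall flow, however, is the same: hash the upload, look it up, and refuse to host a match.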
The mental and physical toll on victims of deepfake abuse is devastating. Support services, along with greater public awareness of these harms, are essential. Education promoting respectful relationships and critical-thinking skills is also vital to prevention.
As the fight against deepfake abuse continues, holding abusers accountable and supporting victims must be the top priorities. Tech companies, governments, and society as a whole all have crucial roles to play.
Image credit: Wikimedia