Even Cybercriminals Are Getting Fed Up With AI-Generated Slop

Edinburgh researchers find cybercriminals rejecting AI-generated forum posts while embracing AI for actual attacks

By C. da Costa
Image: Technology Marketing Toolkit

Key Takeaways

  • Cybercriminals reject AI-generated forum posts while embracing AI for actual criminal operations
  • Underground forums experience quality crisis as AI content floods hacking communities
  • Research analyzing 97,895 conversations reveals widespread hostility toward AI-generated explanations

While mainstream platforms battle AI-generated content cluttering their feeds, cybercriminals face the same quality crisis in their own digital backrooms. Underground forums that trade stolen data and hacking techniques are witnessing something unexpected: users actively rejecting AI-generated content despite enthusiastically adopting AI for actual criminal operations. This resistance reveals how AI fatigue transcends legal boundaries, affecting even communities that operate outside traditional oversight.

Research Exposes the Criminal Community Divide

Ben Collier, a security researcher at the University of Edinburgh, led a team that analyzed 97,895 conversations across cybercrime forums from late 2022 through 2025. The research uncovered a striking pattern: initial optimism about AI’s criminal potential has morphed into widespread hostility toward AI-generated forum posts.

Users complained specifically about “bullet-pointed explainers” of basic hacking concepts flooding their spaces and degrading the perceived value of community participation. The shift mirrors broader internet frustration with low-quality AI content, but the stakes are higher in communities where reputation directly underpins criminal networks.

“Stop Posting AI Shit”

The pushback isn’t subtle. On Hack Forums, users expressed blunt frustration: “I see a lot of members using AI for making their threads/posts, and it pisses me off since they don’t even take the time to write a simple sentence or two.” Another member was more direct: “Stop posting AI shit.”

When administrators proposed AI-enhanced marketplace features, one user responded: “IT’S A STUPID FUCKING IDEA TO PUT AI INTO YOUR MARKET.” The resistance mirrors mainstream platforms like Hacker News, which officially banned AI-generated comments.

Skills vs. Shortcuts

According to Collier, the resistance stems from AI threatening the forums’ merit-based social structures. “I think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person,” he explained. Cybercrime forums run on reputation currencies earned through demonstrated expertise.

AI-generated posts let low-skilled users appear knowledgeable without proving actual capability. Ian Gray of Flashpoint Intelligence notes that sophisticated threat actors understand AI’s limitations and remain wary of AI-generated projects that might expose vulnerabilities in their own operations.

The Ironic Boundary

The research revealed a crucial distinction: cybercriminals enthusiastically use AI for tactical advantages—crafting phishing emails, generating malware code, automating social engineering attacks. But they draw a hard line against AI replacing human authorship in their social spaces.

One user captured this perfectly: “If I wanted to talk to an AI chatbot, there are many websites for me to do so… I come here for human interaction.” This boundary suggests that even in underground economies, authentic human connection remains irreplaceable—a lesson mainstream platforms are learning as AI saturation triggers universal fatigue.

