Google Confirms That AI-Powered Hacking Has Become An Industrial-Scale Threat

Criminal groups use commercial AI models to automate attacks while security teams deploy AI-powered defenses

By Annemarije de Boer

Image: Wabbi

Key Takeaways

  • Criminal hackers weaponize commercial AI tools like Gemini and Claude for cyberattacks
  • AI-powered malware operates autonomously without human oversight using agentic workflows
  • Security teams deploy AI agents like Big Sleep to hunt vulnerabilities

While politicians tout AI’s potential for massive efficiency boosts, new threat intelligence data reveals a darker reality: criminal hackers have weaponized the same commercial tools to industrialize cybercrime. In recent months, AI-powered attacks evolved from experimental curiosity to full-scale operations targeting everything from zero-day vulnerabilities to supply chain infiltration.

The Speed of Escalation

The transformation from AI curiosity to cyber weapon happened faster than anyone expected.

Google Threat Intelligence Group tracked this transformation with unsettling precision. “There’s a misconception that the AI vulnerability race is imminent,” says John Hultquist, GTIG’s chief analyst. “The reality is that it’s already begun.” His team documented threat actors using AI to boost attack speed, scale, and sophistication across the entire intrusion lifecycle. One criminal group came close to deploying an AI-assisted zero-day exploit against two-factor authentication systems (which require both a password and a secondary verification step) in a mass exploitation campaign, before coordinated disclosure prevented it.

Your Favorite AI Tools, Weaponized

The same LLMs powering workplace productivity are now driving criminal enterprises.

Criminal gangs and state-linked actors from China, North Korea, and Russia now routinely abuse commercial models—including Gemini, Claude, and OpenAI’s systems—to refine attacks. These aren’t basement hackers fumbling with code. They’ve built automated pipelines to create anonymized accounts, abuse free trials, and maintain persistent access to premium AI tiers while evading guardrails.

Beyond Script Kiddies

Modern AI-powered malware requires no human operator once deployed.

PROMPTSPY, an Android backdoor, uses Google’s Gemini API to navigate user interfaces and maintain persistence without human oversight. These “agentic workflows” execute multi-stage attack tasks at machine speed, interpreting system states and making tactical decisions like seasoned operators. Supply chain attacks increasingly target AI environments themselves, creating footholds for ransomware and data theft and transforming AI infrastructure from productivity tool into attack vector.

The Defense Strikes Back

Security teams are fighting AI with AI in an escalating technological arms race.

Google’s own AI agents offer hope. Big Sleep systematically hunts unknown vulnerabilities while CodeMender generates automated patches using Gemini’s reasoning capabilities. Steven Murdoch, a security engineering professor at University College London, argues this evolution benefits defenders too, suggesting most future bug discovery will be “LLM-assisted” rather than purely adversarial.

Meanwhile, Anthropic chose not to release its Mythos model after internal tests revealed it could autonomously find zero-day vulnerabilities in major operating systems and browsers.

The emerging landscape resembles streaming’s disruption of entertainment—same underlying technology, completely transformed economics. Except here, both sides of the cybersecurity equation are racing to deploy AI faster than their opponents can adapt.
