Trapped inside a robotaxi while someone threatens your life: this nightmare became reality for Doug Fulop in January 2026. A man spent six minutes punching the Waymo's windows, trying to lift the vehicle, and screaming death threats at Fulop for "giving money to a robot." The car's safety protocols kept the doors locked and the wheels motionless. No manual override existed. Fulop could only wait and hope.
This incident exposes Waymo’s cruel irony: the technology that makes these vehicles statistically safer than human drivers also creates new vulnerabilities that traditional cars never had.
When Safety Features Become Prison Bars
Waymo's AI treats all nearby humans as potential collision risks. Whether the hazard is a child chasing a ball or a man attacking the cabin, the response is identical: complete stillness until the path is clear. During Fulop's ordeal, bystanders actually cheered his attacker—a detail that sounds like dystopian fiction but appears in the official police report.
Similar attacks have escalated across San Francisco. Vandals spray-painted one robotaxi while three women screamed inside. Another assailant covered the sensors to disable a vehicle with trapped passengers. Each incident reveals the same pattern: Waymo’s life-saving caution becomes passenger imprisonment when humans turn hostile.
The Numbers Don’t Lie—But Neither Do the Stories
Context matters here. Waymo reports 90% fewer serious injury crashes than human drivers, with injury rates of just 0.88 incidents per million miles versus San Francisco’s 7.91 benchmark. These aren’t marketing numbers—they represent genuine lives saved through superior reaction times and 360-degree awareness.
Yet passengers like Amina Green describe feeling like “sitting ducks” during harassment, even while preferring robotaxis to distracted human drivers. Anders Sorman-Nilsson felt secure during a five-minute attack by e-bike riders, trusting the cameras to record everything without escalation risk.
The Expansion Dilemma
As Waymo, now past 15 million trips, expands toward 20 cities in 2026, these incidents aren't disappearing. Someone set a robotaxi ablaze with fireworks during Lunar New Year celebrations. The vehicle was empty, but the symbolism burned bright.
The company calls attacks "rare" events, which is technically accurate but psychologically insufficient. When you're choosing between a human driver who might text while driving and an AI that might trap you during an attack, statistics compete with survival instincts. Both fears are rational; they are simply focused on different threat categories.





























