Tesla’s safety operators—the human backstop meant to prevent autonomous vehicle crashes—have themselves caused accidents while remotely driving stuck Robotaxis in Austin. Newly unredacted federal crash reports reveal that two separate incidents occurred when remote teleoperators took control of vehicles after the automated driving system couldn’t proceed, undermining the narrative that human intervention automatically equals safer outcomes.
Remote Control Gone Wrong
The crashes paint a concerning picture of failing fallback systems. In July 2025, a teleoperator drove a Robotaxi “up the curb and made contact with a metal fence” at 8 mph after the automated system stopped and wouldn’t move forward, according to NHTSA reports.
Six months later, another remote operator crashed into a construction barricade at 9 mph while helping navigate a stuck vehicle. Both incidents happened with safety monitors onboard but no passengers—exactly the controlled conditions where remote assistance should work flawlessly.
Reality Check on Robotaxi Safety
The bigger safety story lurks in the numbers. Tesla’s Austin Robotaxis have logged one crash every 57,000 miles over roughly 800,000 total miles driven. Compare that to Tesla’s own benchmark claiming human drivers average one accident every 229,000 miles, and the autonomous future suddenly looks less inevitable.
You’re statistically safer behind the wheel yourself than riding in Tesla’s current Robotaxi—a reality that complicates CEO Elon Musk’s promises about superhuman safety.
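A quick back-of-the-envelope calculation, using only the mileage and crash-rate figures reported above, makes the gap concrete (the rounding and the variable names are ours, not Tesla's):

```python
# Back-of-the-envelope comparison using the figures reported above.
ROBOTAXI_MILES = 800_000           # approximate total Austin Robotaxi miles
MILES_PER_CRASH_ROBOTAXI = 57_000  # one crash every ~57,000 miles
MILES_PER_CRASH_HUMAN = 229_000    # Tesla's own human-driver benchmark

robotaxi_crashes = ROBOTAXI_MILES / MILES_PER_CRASH_ROBOTAXI
expected_human_crashes = ROBOTAXI_MILES / MILES_PER_CRASH_HUMAN

print(f"Robotaxi crashes over {ROBOTAXI_MILES:,} miles: ~{robotaxi_crashes:.0f}")
print(f"Human-driver crashes expected over same miles: ~{expected_human_crashes:.1f}")
print(f"Relative crash rate: ~{MILES_PER_CRASH_HUMAN / MILES_PER_CRASH_ROBOTAXI:.1f}x")
```

By these numbers, roughly 14 Robotaxi crashes occurred over a stretch of driving in which Tesla's own benchmark predicts about 3.5 for human drivers, a rate roughly four times higher.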
Different Approaches to Remote Help
While Tesla allows remote operators to directly steer and accelerate vehicles at speeds up to 10 mph, competitors like Waymo take a hands-off approach. Their remote staff provides high-level guidance—suggesting alternate routes—that the autonomous system then executes independently.
Network latency and limited camera views make direct remote driving inherently riskier, as Tesla’s fence and barricade encounters demonstrate.
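Simple kinematics illustrates why latency matters for direct remote driving. The round-trip delay values below are illustrative assumptions, not figures from the crash reports; only the 10 mph teleoperation cap comes from the reporting above:

```python
# How far a vehicle travels "blind" before a remote operator's
# correction can take effect, given a round-trip network delay.
# Latency values are illustrative assumptions, not reported figures.
MPH_TO_FPS = 5280 / 3600  # feet per second, per mph

def blind_distance_ft(speed_mph: float, round_trip_latency_s: float) -> float:
    """Feet traveled during the round-trip delay at a given speed."""
    return speed_mph * MPH_TO_FPS * round_trip_latency_s

# At Tesla's 10 mph teleoperation cap:
for latency_ms in (100, 250, 500):
    d = blind_distance_ft(10, latency_ms / 1000)
    print(f"{latency_ms} ms round trip at 10 mph -> ~{d:.1f} ft traveled blind")
```

Even at a capped 10 mph, a half-second round trip means the car covers more than seven feet before the operator's input lands, which is ample room to find a curb or a barricade.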
Regulatory Reckoning Ahead
Tesla’s shift from complete redaction to detailed crash narratives comes as Texas prepares to enforce comprehensive AV regulations later this month. The teleoperator crashes raise thorny liability questions—when a remote human crashes, who’s responsible?
These regulatory uncertainties suggest slower rollouts than Tesla’s ambitious seven-city timeline might indicate.
The teleoperator crashes reveal autonomy’s inconvenient truth: even the safety systems need safety systems. Until Tesla solves why its vehicles get stuck requiring risky remote intervention, the promise of superhuman driving remains frustratingly human in its limitations.