According to NHTSA’s Standing General Order crash reports, Tesla has reported 9 crashes involving its robotaxi fleet in Austin, Texas between July and November 2025:
- November 2025: Right turn collision
- October 2025: Incident at 18 mph
- September 2025: Hit an animal at 27 mph
- September 2025: Collision with cyclist
- September 2025: Rear collision while backing (6 mph)
- September 2025: Hit a fixed object in parking lot
- July 2025: Collision with SUV in construction zone
- July 2025: Hit fixed object, causing minor injury (8 mph)
- July 2025: Right turn collision with SUV
According to a chart in Tesla’s Q4 2025 earnings report showing cumulative robotaxi miles, the fleet has traveled approximately 500,000 miles as of November 2025. That works out to roughly one crash every 55,000 miles.
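For reference, a quick back-of-envelope check of that rate (a minimal sketch; the ~500,000-mile figure is read off the chart, so the result is only approximate):

```python
# Rough crash-rate estimate from the figures above.
# Assumptions: 9 SGO-reported crashes, ~500,000 cumulative fleet miles
# (eyeballed from Tesla's earnings chart), July-November 2025.

crashes = 9
fleet_miles = 500_000  # approximate, read off the chart

miles_per_crash = fleet_miles / crashes
print(f"Roughly one crash every {miles_per_crash:,.0f} miles")
# -> Roughly one crash every 55,556 miles (about 55,000)
```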


To me, this illustrates exactly why “human supervised” autonomous driving makes no sense. Nobody can sit there hour after hour, day after day, doing nothing at all and still be ready to intervene at a moment’s notice. The whole point of autonomous driving is to take away the mental work of driving.
Also, when you’re driving yourself, you have perfect knowledge of your own intentions. If you don’t know the system’s intentions, you’re purely reacting to the information presented to you, which is a small subset of everything the car sees. Is the car swerving because a kid ran out in front while you weren’t looking that way? There’s no way to know in the moment whether a sudden input is life-ending or life-saving. The only reason supervised autonomous driving is considered acceptable by anybody is that the alternative is for car companies to take responsibility in the event of an accident. Politicians and car companies are happy for some flunky behind the wheel to take the heat.