Tesla’s ‘Project Rodeo’ Pushes Self-Driving AI to Its Limits on Public Roads

Tesla’s secretive ‘Project Rodeo’ program has emerged as a critical component of the company’s autonomous driving ambitions, with test drivers pushing self-driving AI software to its absolute limits on public roads. Business Insider spoke with nine current and former Project Rodeo test drivers and three Autopilot engineers across California, Texas, and Florida, revealing a high-stakes testing program that balances innovation with safety concerns.

The program includes a specialized ‘critical intervention’ team whose members are trained to wait as long as possible before taking manual control of vehicles, even when the AI makes dangerous mistakes. According to sources, this approach is designed to collect maximum data for training Tesla’s Full Self-Driving (FSD) and Autopilot systems. “The idea is that you’re a cowboy on a bull and you’re just trying to hang on as long as you can,” explained a former San Francisco test driver.

Test drivers described numerous close calls between November 2023 and April 2024, including nearly hitting pedestrians, running red lights, swerving into other lanes, and ignoring speed limits. One Texas-based critical-intervention driver ventured into bar districts late at night to test how the AI reacted to intoxicated pedestrians. Another driver recalled coming within three feet of a bicyclist at Stanford University, with a trainer praising the near miss as “perfect” data collection.

The testing methodology relies on several specialized teams: one replicates ride-hailing patterns by driving between random points on the map, while the ‘golden manual’ team drives without AI assistance to give the software examples of error-free driving. Critical-intervention drivers let the AI keep operating even after it makes mistakes, stepping in only to prevent a crash.
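To make the division of labor concrete, here is a minimal Python sketch of how drive sessions from the three teams might be tagged for a training pipeline. It is purely illustrative: the DriveMode labels and the training_role mapping are assumptions drawn from the roles described above, not Tesla’s actual tooling.

```python
from enum import Enum

class DriveMode(Enum):
    """Hypothetical labels for the three testing roles described above."""
    RIDE_HAIL = "ride_hail"                          # FSD engaged on random point-to-point trips
    GOLDEN_MANUAL = "golden_manual"                  # human-only driving, the error-free reference
    CRITICAL_INTERVENTION = "critical_intervention"  # AI runs until a crash is imminent

def training_role(mode: DriveMode) -> str:
    """What each session type might contribute to a supervised-learning pipeline."""
    return {
        DriveMode.RIDE_HAIL: "broad coverage of ordinary routes and traffic",
        DriveMode.GOLDEN_MANUAL: "positive examples of correct human driving",
        DriveMode.CRITICAL_INTERVENTION: "failure cases near the model's decision boundary",
    }[mode]

print(training_role(DriveMode.GOLDEN_MANUAL))  # -> positive examples of correct human driving
```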

Safety experts express concern about the fragmented regulatory environment. Mark Rosekind, a former NHTSA administrator, noted that “there are very few rules around autonomous testing and a lot of dependency on self-reporting.” The stakes are enormous for Tesla: CEO Elon Musk stated in 2022 that self-driving is “really the difference between Tesla being worth a lot of money or worth basically zero.” Morgan Stanley analyst Adam Jonas recently wrote that Tesla’s future valuation is “highly dependent on its ability to develop, manufacture, and commercialize autonomous technologies.”

Former test driver John Bernal, terminated in 2022, described breaking traffic laws to collect data, including driving into intersections when the AI failed to recognize red lights. Five drivers reported receiving feedback from supervisors if they disengaged the AI too early, creating pressure to push boundaries despite safety concerns.

Key Quotes

“The idea is that you’re a cowboy on a bull and you’re just trying to hang on as long as you can.”

A former San Francisco test driver described the critical-intervention team’s approach, where drivers are trained to wait as long as possible before taking control from Tesla’s AI, even in dangerous situations. This metaphor captures the high-risk nature of Tesla’s data collection methodology.

“Self-driving is really the difference between Tesla being worth a lot of money or worth basically zero.”

Tesla CEO Elon Musk made this statement in 2022, underscoring the existential importance of autonomous driving technology to the company’s valuation and future. This explains the intense pressure on Project Rodeo to rapidly develop functional self-driving AI.

“There are very few rules around autonomous testing and a lot of dependency on self-reporting. If companies aren’t reporting, it’s hard to know what’s going on.”

Mark Rosekind, former NHTSA administrator and current chief safety innovation officer for Amazon-owned Zoox, highlighted the regulatory gaps that allow companies like Tesla to conduct aggressive AI testing on public roads with minimal oversight or transparency.

“We want the data to know what led the car to that decision. If you keep intervening too early, we don’t really get to the exact moment where we’re like, OK, we understand what happened.”

A former Tesla Autopilot engineer explained the rationale behind allowing the AI to continue operating even when making mistakes. This reveals how machine learning systems require failure data to improve, creating tension between safety and effective training.
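As a rough illustration of that trade-off, the sketch below, entirely hypothetical and not drawn from Tesla’s code, extracts the telemetry window ending at the driver’s takeover. The earlier the driver intervenes, the more that window contains only routine driving, and the less it captures the decision the engineer wants to understand.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One tick of hypothetical drive telemetry (field names are illustrative)."""
    t: float               # seconds since the drive began
    model_action: str      # what the AI chose to do at this tick
    human_override: bool   # True once the safety driver takes control

def failure_window(frames: list[Frame], pre_s: float = 10.0) -> list[Frame]:
    """Return the telemetry leading up to the first human takeover.

    If the driver disengages before the model commits to its mistake,
    this window holds only ordinary driving, and the moment engineers
    need ("what led the car to that decision") is never recorded.
    """
    takeover = next((f.t for f in frames if f.human_override), None)
    if takeover is None:
        return []  # no intervention: nothing to label as a failure case
    return [f for f in frames if takeover - pre_s <= f.t <= takeover]
```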

Our Take

Tesla’s Project Rodeo exposes a fundamental tension in AI development: systems learn best from mistakes, but those mistakes in physical environments can endanger lives. The critical-intervention approach—essentially letting AI fail as close to catastrophe as possible—is scientifically sound for training robust systems, but ethically questionable when conducted on public roads with unconsenting bystanders.

This investigation reveals how economic pressure distorts AI safety practices. With Tesla’s valuation riding on autonomous driving success, the company appears to prioritize data collection speed over conservative safety margins. The regulatory vacuum enables this approach, highlighting how AI governance lags dangerously behind deployment.

The broader lesson extends beyond autonomous vehicles: as AI systems move from digital environments into physical spaces—robotics, drones, industrial automation—society must establish clear frameworks for acceptable risk during AI training. Tesla’s cowboy approach may accelerate development, but it externalizes risk onto an uninformed public, setting a troubling precedent for AI safety culture.

Why This Matters

This investigation reveals the high-risk methodology behind Tesla’s autonomous driving development, raising critical questions about AI safety testing on public roads. As the autonomous vehicle industry races toward commercialization, Tesla’s approach of pushing AI systems to failure points with minimal human intervention represents a controversial data collection strategy that could accelerate development while potentially endangering public safety.

The story highlights the regulatory vacuum surrounding autonomous vehicle testing in the United States, where companies largely self-report incidents and face minimal oversight. With Tesla’s market valuation heavily dependent on achieving full autonomy, the pressure to collect training data quickly creates potential conflicts between safety and business objectives.

This matters beyond Tesla—it illuminates how AI systems learn from edge cases and failures, requiring exposure to dangerous scenarios to improve. The broader implications affect how society balances innovation speed with public safety as AI systems increasingly operate in physical environments. As autonomous vehicles move closer to widespread deployment, the testing practices revealed here will likely influence regulatory frameworks and industry standards for AI safety validation.


Source: https://www.businessinsider.com/tesla-self-driving-software-test-drivers-project-rodeo-experiences-2024-10