Tesla's AI Self-Driving Software Squashed A Mannequin Named Timmy—Should It Remain Legal?

Forbes’ exclusive test of Tesla’s Full Self-Driving software revealed serious safety flaws, including failures to recognize pedestrians, road signs, and speed bumps, necessitating frequent human intervention. Despite Tesla’s marketing claims, the system’s sole reliance on cameras and its lack of a comprehensive sensor suite raise concerns about its reliability and legal standing amid ongoing regulatory scrutiny.

In an exclusive test conducted by Forbes, Tesla’s Full Self-Driving (FSD) software was put under scrutiny, revealing significant shortcomings in its autonomous capabilities. Senior editor Alan Ohnsman, who tested the system, described the experience as stressful and far from fully autonomous. Despite Tesla branding the system as “Full Self-Driving (Supervised),” the vehicle frequently ignored critical road signs, failed to stop for pedestrians, and made erratic lane changes without clear reason. These issues occurred even in ideal driving conditions, raising concerns about the system’s reliability and safety.

The test highlighted several alarming incidents, including the Tesla ignoring a flashing pedestrian crossing sign and failing to stop for people waiting to cross, as well as driving down a dead-end street and accelerating toward a red light on a highway off-ramp. The system also failed to recognize speed bumps and often exceeded posted speed limits, which could lead to traffic violations for drivers relying on the software. Human intervention was necessary multiple times to prevent potential accidents, underscoring that the system is not yet capable of fully autonomous operation.

One fundamental flaw identified is Tesla’s sole reliance on cameras for sensing the environment, without the radar, lidar, or other sensors used by competitors like Waymo. This limited sensor suite means Tesla’s system struggles to detect certain obstacles, such as road debris, which recently caused severe damage to a Tesla vehicle during a cross-country test by influencers. Experts argue that Tesla’s approach prioritizes cost savings over safety, as other companies fuse multiple sensor types to build a more comprehensive and reliable perception of the surroundings.

The legal and regulatory landscape surrounding Tesla’s FSD is complex and evolving. Currently, there are few specific laws mandating the types of sensors or safety features required for autonomous vehicles in the U.S., allowing Tesla to market and sell FSD despite its flaws. However, investigations by agencies like the National Highway Traffic Safety Administration (NHTSA) are ongoing, particularly regarding Tesla’s accident reporting practices. Lawsuits and regulatory challenges, especially in California, are pressing Tesla to reconsider its marketing language and the safety claims associated with FSD.

Looking ahead, the future of Tesla’s FSD technology and its legal status remain uncertain. While Elon Musk aims to achieve ambitious targets tied to his $1 trillion pay package, including millions of active FSD subscriptions and robotaxis on the road, experts remain skeptical about the system’s near-term readiness. Comparisons with competitors like Waymo, which offers more sophisticated and safer autonomous rides, suggest Tesla still has significant hurdles to overcome. Meanwhile, the relationship between Musk and political figures, including President Trump, could influence regulatory decisions affecting Tesla’s autonomous driving ambitions.