As self-driving, or autonomous, cars become more commonplace, consumers have one main question: are they safe? While the creators of the first self-driving cars might say yes, the track record for current self-driving vehicles doesn’t reflect that. Self-driving vehicles from Waymo, Tesla, and Uber have all been involved in serious crashes while operating autonomously, some of them fatal. Self-driving cars might eliminate human errors, but they introduce a brand-new risk: technical errors.
What Does the NHTSA Say?
According to the National Highway Traffic Safety Administration (NHTSA), self-driving cars will progress through six levels of automation over the next several years, integrating into the roadways as the technology matures. The NHTSA touts self-driving cars for their safety, stating that the underlying driver-assistance technologies have already helped save lives and prevent injuries, and it cites the statistic that 94% of serious crashes stem from human error.
The NHTSA promotes self-driving technologies such as:
- Automatic emergency braking systems
- Crash-imminent braking
- Pedestrian automatic emergency braking
- Forward collision warning system
- Lane departure warning
- Lane-keeping assist
- Blind spot detection
- 911 notification
The NHTSA stands behind these progressive technologies and believes that more automation is a good thing for the auto industry. The administration projects that by 2025, vehicles will benefit from features such as adaptive cruise control, traffic jam assist, and self-parking, and that beyond 2025, self-driving cars will become fully automated and come with highway autopilot. The NHTSA says we are currently in the partial automation phase; the future will bring conditional automation, high automation, and eventually full automation, with no driver needed at all.
What Do the Crash Reports Say?
Despite government agencies and vehicle manufacturers backing the safety of self-driving cars, the hard numbers show they’ve already caused injuries and deaths. Perhaps no one expected self-driving vehicles to have a perfect track record right from the start, but consumers have the right to expect reasonably safe cars. When a vehicle is operating on autopilot, technical glitches can have life-and-death consequences. The following are real-life cases in which self-driving cars have caused serious harm in the United States:
- Uber vehicle kills a pedestrian. The first pedestrian death caused by a self-driving vehicle occurred in Tempe, Arizona, when an Uber test vehicle struck and killed a pedestrian while in autonomous mode. Uber later reported that the vehicle’s technology should have detected the pedestrian and stopped or swerved to avoid the collision, yet for some reason it did not. The backup driver behind the wheel was not found at fault, although she was not looking at the road at the time of the collision.
- Tesla’s fatal crashes. Tesla has had two confirmed fatal accidents involving its Autopilot system. In the first, the system failed to detect the white broadside of an 18-wheeler crossing the road, and the Tesla drove into it, killing the driver. In the second, the vehicle struck a concrete lane divider and burst into flames, also killing the driver.
- Waymo crash in Arizona. Self-driving vehicles from Google’s Waymo may have driven more than 5 million miles safely, but it only took one accident to make the world rethink their safety. The Waymo accident was fortunately not fatal, but it did injure the test driver. A Honda ran a red light and struck the Waymo vehicle, which did not move out of the oncoming car’s way.
It’s clear that self-driving cars still have a long way to go before they are completely safe for drivers, passengers, and pedestrians. Perhaps the day will never come when there are no self-driving car accidents at all. Still, hope remains that they will eliminate human error and reduce the number of car accidents overall. Until that day comes, stay on your guard when operating an autonomous vehicle, and keep a car accident lawyer close to protect your rights if you are ever involved in a self-driving crash.