On Monday, the California DMV received approval to allow self-driving cars on the road without a backup human driver behind the wheel. The changes take effect April 2nd and will allow companies such as Waymo, Ford, and Nvidia to take the next step toward making truly autonomous vehicles a reality. The move helps California remain at the forefront of the technology after facing competition from states such as Arizona, where Uber has been testing its autonomous vehicles, though Arizona does not allow the vehicles to operate without a driver. California is, in fact, the first state to create a regulatory framework for putting the technology on the road without a driver.

The rules include several safety measures. Separate permits will apply to testing and to commercial operations such as ride-hailing or delivery services. Vehicles must be remotely monitored at all times by a human, a provision that could make large-scale operations expensive and impractical for the time being.

Another provision ensures that police can deactivate the vehicles and get in touch with the company that owns them. Companies will also have to prove their vehicles are safe through testing before receiving a commercial permit. These steps are a cautious, comprehensive, and wise approach to regulating a rapidly emerging technology.

A successful rollout under a regulatory framework such as this should go far in allaying the public’s widespread doubts about driverless vehicles. A January poll reaffirmed themes from past surveys: most Americans don’t trust self-driving cars, even as companies and states like California move ahead with putting them on the roads. In the poll, from Advocates for Highway and Auto Safety, 31 percent of respondents said they would be “very concerned” about sharing the road with the vehicles, while another 33 percent said they would be “somewhat concerned.”

That adds up to 64 percent of Americans worried about autonomous vehicles on the roads. A survey last year from AAA found that 78 percent of respondents would be afraid to ride in an autonomous vehicle, and other surveys have found that the technology’s rollout has so far only exacerbated these fears.

But how safe are these vehicles, really?

In the more than two million miles Waymo vehicles have logged on US roads, they had been at fault in only a single accident as of last year. Statistically, that gives them an at-fault rate lower than that of any demographic of human drivers: 10 times lower than the safest group (60- to 69-year-olds) and 40 times lower than that of new drivers.

Many of the other accidents in which they have been involved are fender benders with no serious injuries, in which a human driver was at fault.

Researchers believe this may be because autonomous vehicles follow the law more closely than human drivers, braking cautiously in situations where other drivers have been conditioned not to expect such behavior. Hence the relatively high rate of rear-end collisions in which a human driver is at fault.

Autonomous driving systems are never drunk, a factor responsible for a whopping 29 percent of fatal accidents, nor do they become distracted, which is responsible for another 10 percent. Google’s own data indicates that 94 percent of minor traffic accidents are due to human error. In one of the incidents involving Waymo vehicles, the self-driving car had been stopped at a traffic light for 17 seconds when it was hit from behind.

However, studies have shown that it would take hundreds of millions of logged hours of driving to demonstrate with statistical confidence that autonomous vehicles cause fewer fatalities than human drivers, a task that would take decades with the current number of self-driving vehicles on the road. For now, real-world driving trials have been supplemented with testing on private tracks and with computer simulations.
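
To see why the numbers get so large, here is a rough back-of-the-envelope sketch, not drawn from the studies above. It assumes the commonly cited US baseline of roughly 1.09 fatalities per 100 million vehicle miles and treats fatal crashes as a Poisson process:

```python
# Rough illustration (assumed numbers, not from the article): how many
# failure-free miles would an autonomous fleet need before we could say,
# with 95% confidence, that its fatality rate is no worse than human drivers'?

import math

human_rate = 1.09 / 100_000_000  # assumed US baseline: fatalities per vehicle mile
confidence = 0.95

# If the fleet logs n miles with zero fatalities, the probability of seeing
# zero events under the human rate is exp(-human_rate * n). Requiring that
# probability to drop below 5% gives the minimum mileage.
required_miles = math.log(1 / (1 - confidence)) / human_rate
print(f"Failure-free miles needed: {required_miles / 1e6:.0f} million")
# Prints roughly 275 million miles; proving the rate is strictly lower than
# humans', rather than merely not worse, would take billions of miles.
```

Even under the generous assumption that the fleet never has a fatal crash, it would need hundreds of millions of failure-free miles just to match the human baseline, and showing it is strictly safer would take far more.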

So while it has yet to be proven with statistical certainty, there is plenty of reason, including an early track record on real-world roads, to expect self-driving cars to be safer than human drivers. Much of the remaining uncertainty comes down to how human drivers will react to consistently law-abiding autonomous vehicles.

But it’s very possible that much of the doubt does not arise from the statistics at all, but instead from human nature itself. Research has indicated that people are more averse to risks when they perceive they have no control over the outcome. It’s the same reason many people are more afraid to fly on airplanes than to drive cars, despite the statistically much higher chance of a car accident than a plane crash. Furthermore, people may hold machines to even higher standards when it comes to mistakes. In a car accident with conventional vehicles, it’s possible, and often accurate, to attribute the incident to an error by a particular driver. With autonomous vehicle accidents, the technology as a whole will be on trial each time. Even if self-driving cars are many times safer than human drivers, some accidents will still occur, and many experts have warned these high-profile incidents will likely set the industry back once these vehicles start making it onto roads on a large scale.

As the technology is rolled out, it’s important that the public rationally examine the data instead of giving in to a knee-jerk reaction. What’s at stake is much bigger than whether these companies can roll out a profitable new innovation. Researchers believe self-driving cars could cut traffic fatalities by 90 percent by midcentury. That could mean almost 30,000 lives saved each year, comparable to the reduction in fatalities that followed requirements for airbags and seatbelts in the 1970s, and 1.5 million lives saved over half a century in the United States alone.

Surely, that should be enough to motivate us to question our biases, and to rely on real data instead.
