
Weighing Safety Against Autonomous Technologies


Release date: 9/12/2018

Some speculate that, because of car-to-car communication, sensors, machine learning, and unwavering attention to the task of driving, autonomous vehicles will make crashes rare events. Others counter that a system is only as infallible as its very human designers and cannot be trusted with lives until overwhelming certainty is achieved.
 
In 2016, Mark Rosekind, then-administrator of the National Highway Traffic Safety Administration (NHTSA), said, "Ninety-four percent of crashes can be tied back to a human choice or error." If one were to remove a host of judgment impairments, such as drowsy driving or driving under the influence of drugs or alcohol, a significant reduction in fatalities could be achieved. This is the promise of an automated driving system that removes the impulsive human driver from the equation and places trust instead in computers, camera arrays, and speedy triangulation.
 
However, even when a human is not driving, a collision can still occur. On March 18, 2018, pedestrian Elaine Herzberg was struck and killed by an Uber self-driving test vehicle in Tempe, Arizona.
 
As described in an article on the website The Conversation, the conditions were such that the car should have detected Herzberg as she walked her bicycle across the road, but it did not. It is presumed that a human driver would have seen Herzberg in enough time to brake and either avoid the crash or reduce speed enough to prevent the fatality. The event has once again roiled the debate over whether such technologies are the key to the future or a dangerous flirtation.
 
There is also the potential for information once missing from the equation to drive up reported crash numbers. In 2016, the United States logged approximately 7 million traffic collisions. However, a NHTSA survey found that drivers failed to report collisions 29 percent of the time, meaning the actual number of crashes was likely much higher than recorded.
 
The communication components necessary for fully autonomous driving would give a truer account of all the crashes that take place. Paradoxically, then, the roads might actually become safer even as the statistics appear to worsen.
 
The argument persists that such vehicle technology, like any technology, is imperfect and cannot guarantee zero-fatality roads. Regulatory bodies may look at recent tragedies involving Uber, Tesla, and Waymo and determine that the advancements are still not ready for public use, consequently slowing research, development, and implementation.
 
Nonetheless, several experts claim that the evolution toward the autonomous is ultimately for the better. A 2017 RAND Corp. report authored by Nidhi Kalra, Senior Technology Policy Adviser to Senator Kamala Harris, and David G. Groves, Co-Director of the RAND Water and Climate Resilience Center, stated that widespread adoption of autonomous cars that are even 10 percent safer than the average human driver could save as many as 3,000 lives a year.
 
Other studies find that it isn't that the vehicles aren't ready for us, but that we aren't ready for them. According to research from the National League of Cities, only six percent of the largest U.S. cities include language focused on driverless technology in their transportation plans.
 
Until the balance shifts away from traditional methods, the two worlds of human driving and machine driving are bound to collide, and decision makers should expect imperfect baby steps toward a better mobility model.