Although self-driving vehicles aren’t yet fully autonomous or in widespread use, the National Highway Traffic Safety Administration is moving in that direction. The agency recently crafted a formal Federal Automated Vehicles Policy that recognizes both the ways this technology could transform transportation and the many dilemmas it faces. The policy outlines vehicle performance guidance, recommendations for national (rather than state) policies for testing, existing regulatory tools, and potentially new regulatory tools.
One of the issues that has been raised – but not resolved – is how automated vehicles should handle a conflict in public safety, and how liability for resulting car accident injuries might be affected.
The New York Times recently explored this issue by asking: Whom should your car save if an accident is imminent? For example, what if a vehicle is faced with a situation in which it must either run off the road to avoid a head-on collision with another car or careen into a large crowd of people on the sidewalk? Whose risk should the autonomous vehicle’s algorithm minimize? Should the vehicle’s first priority be the protection of its own occupants? Or should it be the pedestrians, who face a more serious risk of injury if struck?
Researchers with the University of California, Irvine and the Massachusetts Institute of Technology recently released a study, published in the journal Science, examining this issue. In “The Social Dilemma of Autonomous Vehicles,” the researchers said that, overall, autonomous vehicles should slash car accidents and car accident fatalities because they eliminate human error – the prime factor in most crashes. However, the researchers note there will be some situations in which the car must “choose” between two undesirable outcomes. How will it decide which option is the most moral and ethical?
In half a dozen surveys, the researchers asked participants whether they approved of autonomous vehicles designed to sacrifice their passengers for the greater good. Participants generally approved of such vehicles in principle, but said they would prefer to buy cars that would protect themselves and their passengers at all costs.
Study authors presented people with a number of hypothetical scenarios and forced them to choose between an autonomous vehicle that was self-protective and one that was utilitarian – designed to impartially minimize overall deaths and serious injuries. Although respondents agreed that, ethically speaking, cars that impartially minimized the number of overall injuries and deaths were the better choice, they said they personally would refuse to buy such a car, citing a strong preference for a vehicle that would be self-protective.
The overall attitude gleaned from surveys about self-driving cars seems to be that Americans don’t want the government compelling vehicle manufacturers to program cars with self-sacrificial algorithms.
Automakers, for the most part, haven’t yet weighed in on where they stand. Mercedes-Benz indicated it would prioritize passenger safety, but later reversed that statement, saying this would not be its policy. The dilemma for manufacturers is this: Do they alienate the public by making cars that act in ways seen as unethical? Or do they drive away buyers with cars that would sooner put their occupants’ lives and well-being in jeopardy?
This isn’t simply a philosophical issue. The reality is that self-driving cars will only gain widespread traction if people feel safe with the solutions that government regulators and automakers reach on these issues.
Call Freeman Injury Law — 1-800-561-7777 for a free appointment to discuss your rights. Now serving Orlando, West Palm Beach, Port St. Lucie and Fort Lauderdale.
Additional Resources:
Whose Life Should Your Car Save? Nov. 3, 2016, by Azim Shariff, Iyad Rahwan and Jean-Francois Bonnefon, The New York Times
More Blog Entries:
Is Driving With a Cold As Dangerous as Driving Drunk? Nov. 9, 2016, Port St. Lucie Car Accident Lawyer Blog