Who Should Your Autonomous Car Save?

Should it save you, or the people outside your car?

A Tesla Model S in Finland. Photo by Taina Sohlman/Shutterstock.com

On May 7, 2016, the driver of a Tesla Model S with its Autopilot feature enabled was killed in a crash with a tractor trailer in Florida. Tesla posted an announcement about the incident today. The National Highway Traffic Safety Administration is opening a preliminary evaluation into the performance of Autopilot, according to Tesla.

Last week in Science Friday’s Good Thing/Bad Thing segment, Ira talked about autonomous cars with Azim Shariff, an assistant professor of psychology and social behavior at UC Irvine. He’s one of the authors of a recent study in the journal Science about some of the moral issues surrounding the topic of autonomous vehicles, such as how people feel about letting their cars decide who lives or dies in the case of an accident. The following is a transcript of their conversation, lightly edited for clarity.

AZIM SHARIFF: Thanks. Thanks for having me, Ira.

IRA FLATOW: As I mentioned, autonomous vehicles certainly have some benefits, right? Let’s talk about the good thing.

AZIM SHARIFF: Well, they have a lot of benefits, and you mentioned some of them. I think the one most people think about is that idea of sipping coffee and reading the newspaper while being in the car, the convenience factor. But there are a bunch of other things that people don’t think about as much. One is fuel efficiency. These cars are going to be much more fuel efficient, because the way they drive is going to be much more efficient.

People who are currently locked out of driving—people with disabilities, people who are blind, the elderly—will be able to be driven easily by these cars. We’re going to be able to redesign our cities, so we won’t have to have all that street parking. As you mentioned, the car can drop you off, zip off to some central parking facility, and then come back and pick you up when you need it.

There is also a tremendous well-being benefit. Right now, for people who commute, commuting is the worst part of their day. The psychological literature is absolutely clear on that.

IRA FLATOW: I know all about the traffic. Let’s talk about the bad thing.

AZIM SHARIFF: Well, there’s one big good thing that I want to mention first, and that’s safety. That’s probably the biggest and the most morally interesting advantage of having these cars. The estimate is that about 90 percent of all accidents are the product of human error, and that error kills about 30,000 to 40,000 people in the United States every year. So that’s a tremendous benefit we’ll see with these cars.

But yeah, there are some bad things. One of the big ones is labor displacement. These AVs, these autonomous vehicles, are going to replace a lot of jobs that currently require human drivers. Uber, for instance, is trying to develop its own autonomous vehicles so it can take out that pesky human side of its business model.

And you can imagine a lot of trucking companies would also be keen to do this, because drivers are an expense and a drag on their efficiency. An autonomous truck can just go all night. It doesn’t have to rest. It doesn’t have to stop. And you don’t have to pay the humans for doing that. So that’s the negative.

IRA FLATOW: Yeah, well, let’s talk about what you studied, the moral, psychological angle here, the more complicated thing.

AZIM SHARIFF: Right. There, it’s a challenge of adoption. If these are going to be beneficial innovations, how do we get people to adopt them, and what’s actually going to dissuade them from adopting them? In addition to replicating human capabilities, these cars are going to have to make some value-laden moral decisions.

I mentioned that these cars are going to reduce accidents, but they’re not going to eliminate all of them. And in certain accidents, these cars are going to be able to see different paths that apportion harm to different people involved. If they take one option, they could harm pedestrians.

But if they take another option, that could actually harm the passengers. From a utilitarian perspective, that might be the more ethical thing to do, but it involves sacrificing the very people who bought and own the car.

IRA FLATOW: Hey, man, I want to buy a car that does that.

AZIM SHARIFF: Well, not one that’s actually trying to kill you in particular situations, no.

IRA FLATOW: That is complicated. You’re absolutely right, Azim. Thank you for joining us. Azim Shariff is an assistant professor of psychology and social behavior at UC Irvine.

Meet the Writer

About Brandon Echter

Brandon Echter was Science Friday’s digital managing editor. He loves space, sloths, and cephalopods, and his aesthetic is “cultivated schlub.”
