Stop me if you've heard this one:
An autonomous car is driving down the road when a child jumps out in front with little warning. The car can either hit the child or swerve into oncoming traffic, injuring the rider inside. There are no other options. Ethically speaking, how can engineers program a car to make the right decision?
The question, experts say, is moot.
"The trolley problem largely becomes irrelevant," said Danny Shapiro, director of automotive at Nvidia, referencing a term commonly used to describe this scenario.
Ideally, Shapiro said, the sensor and artificial intelligence technology should be advanced enough that the car never gets into those situations in the first place.
Yet the cocktail-party-perfect thought experiment has become a hugely popular way to discuss the promise of self-driving cars, despite its macabre framing. MIT researchers even developed an online simulator dubbed the "Moral Machine" to let you decide which pedestrians to mow down in a virtual world.
But what effect does this have on public perception of a technology that's viewed as a solution to tens of thousands of preventable traffic deaths each year?
"It’s a red herring," said Paul Brubaker, CEO of the Alliance for Transportation Innovation and champion for autonomous technology. "It’s an argument that's scaring people unnecessarily. I dismiss it, but I can’t dismiss it because it’s gotten so much traction out there and we need to address it."
In "A Matter of Life and Death," the latest episode in our podcast on the future of cars, we investigate how the conversation around self-driving cars has diverged and talk to an expert about what this means for their introduction to society.