SAN FRANCISCO -- Car companies don't usually get sucked into philosophical debates, but right now they're facing a big, thorny question: how the self-driving cars of the future will decide who will live and who will die.
Last month, an excellent Bloomberg story noted that automakers want help from Stanford University researchers to devise an ethical code for life-and-death scenarios.
Daimler CEO Dieter Zetsche expressed a similar concern in January. Automakers must decide how a self-driving car will react "if an accident is really unavoidable, when the only choice is a collision with a small car or a large truck, driving into a ditch or into a wall, or to risk sideswiping the mother with a stroller or the 80-year-old grandparent," he said at the International CES in Las Vegas. "These open questions are industry issues, and we have to solve them in a joint effort."
But this isn't some far-off concern. Experimental self-driving cars already make ethical decisions. Take Google Inc.'s prototype, for example.
If it sees a cyclist, it nudges over in its lane to give the cyclist more space. The cyclist poses no real threat to the car's occupants, and moving closer to other cars puts the car's occupants at greater risk, but the Google car does it anyway. That's an ethical choice.
"You're making a trade-off between mobility and safety," says Noah Goodall, a University of Virginia researcher. "Driving always involves risk for various parties. And anytime you distribute risk among those parties, there's an ethical decision there."
Google has thought about this for years. In 2012, the company filed a patent on technology that lets its self-driving car move over in its lane for a better line of sight. It's what human drivers do when they scope out a traffic jam. And it's a trade-off between information and safety.
Deep in the patent is a table that explains how Google's software manages the trade-off. It asks three questions.
1. How much information would the maneuver gain?
2. What are the odds something bad will happen?
3. How bad would that be? That is, what's the "risk magnitude"?
Google's software multiplies the probability by the risk magnitude to get an expected risk, then compares that to the value of the information to be gained. If the information gain outweighs the expected risk, the car makes the maneuver.
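The patent doesn't publish source code, but the comparison it describes is simple enough to sketch. Here is a minimal Python illustration of that logic; the function name, signature and the mapping of the three questions onto three parameters are assumptions for demonstration, not Google's implementation.

```python
# A sketch of the trade-off the patent describes, not Google's actual code.
# The three questions above map onto the three parameters: the value of the
# information gained, the probability that something bad happens, and how
# bad it would be (the "risk magnitude").

def should_maneuver(info_gain: float,
                    crash_probability: float,
                    risk_magnitude: float) -> bool:
    """Make the maneuver only if the information gained outweighs
    the expected risk (probability times magnitude)."""
    expected_risk = crash_probability * risk_magnitude
    return info_gain > expected_risk
```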
It's a hard choice. Deer, pedestrians, cyclists and cars carry no inherent numerical value; Google must decide how bad harming each of them would be. And that decision rests on human judgment.
In Google's example, getting sideswiped by the truck blocking the view has a risk magnitude of 5,000. Getting into a head-on crash with another car would be four times worse, a risk magnitude of 20,000. And hitting a pedestrian is deemed 20 times worse, a value of 100,000.
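To see how those magnitudes change the calculus, the sketch above can be run with the patent's illustrative figures. The 1 percent crash probability and the information value of 150 below are invented purely for demonstration.

```python
# Reusing should_maneuver() from the sketch above with the patent's
# illustrative risk magnitudes. The probability and information value
# are made-up numbers for demonstration.
INFO_GAIN = 150.0  # hypothetical value of the improved line of sight

print(should_maneuver(INFO_GAIN, 0.01, 5_000))    # True:  expected risk 50
print(should_maneuver(INFO_GAIN, 0.01, 20_000))   # False: expected risk 200
print(should_maneuver(INFO_GAIN, 0.01, 100_000))  # False: expected risk 1,000
```

With the stakes 20 times higher, the same maneuver that was acceptable next to a truck gets rejected next to a pedestrian.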
Google used those numbers only to show how its algorithm works. But there are many ways that pedestrians, cyclists, animals and cars could be valued. Now is the time for a real conversation about what the industry's values should be, before life-and-death scenarios arise.