SAN FRANCISCO -- Car companies aren’t usually involved in philosophical debates, but right now they’re facing a big, thorny question: how the self-driving cars of the future will decide who will live and who will die.
This week, Bloomberg came out with an excellent story on that point, discussing how automakers have sought help from Stanford University researchers in devising a new ethical code for life-and-death scenarios.
Dieter Zetsche, the CEO of Daimler AG, gave voice to that concern in January. Automakers must decide how a self-driving car will react “if an accident is really unavoidable, when the only choice is a collision with a small car or a large truck, driving into a ditch or into a wall, or to risk sideswiping the mother with a stroller or the 80-year-old grandparent,” Zetsche said in a speech at the International CES technology convention in Las Vegas. “These open questions are industry issues, and we have to solve them in a joint effort.”
From my point of view, the obsession with rare life-and-death scenarios overlooks the fact that experimental self-driving cars are already making ethical decisions today. This isn’t some remote, hypothetical concern.
Take, for instance, Google Inc.’s experimental self-driving car.
If the car sees a cyclist, it nudges over in its lane to give the cyclist more space. The cyclist poses no real threat to the car’s occupants, and moving closer to other cars puts the car’s occupants at greater risk, but the Google car does it anyway.
By accepting greater danger for itself and its passengers, all in order to protect a vulnerable cyclist, Google’s car is making an ethical choice.
“Whenever you get on the road, you’re making a trade-off between mobility and safety,” Noah Goodall, a researcher at the University of Virginia who has written about the ethics of self-driving cars, said in an interview. “Driving always involves risk for various parties. And anytime you distribute risk among those parties, there’s an ethical decision there.”
It’s clear Google has been thinking about this sort of thing for years. Goodall pointed me to a patent that Google filed in 2012 to protect its solution for situations where its self-driving car can’t see the road well.
In this scenario, Google’s self-driving car can move into the next lane to get a better line of sight. It’s the sort of maneuver human drivers do when they want to know the cause or extent of a traffic jam. And it’s a trade-off between information and safety.
Buried deep in the patent is a table that explains how Google’s software manages the trade-off. The software essentially needs to ask three questions.
1. How much information would be gained by making this maneuver?
2. What’s the probability that something bad will happen?
3. How bad would that something be? In other words, what’s the “risk magnitude”?
Google’s software multiplies the probability of each bad outcome by its risk magnitude, then compares the result to the value of the information that would be gained. If the information gain outweighs the expected risk, Google’s self-driving car makes the maneuver.
This isn’t an easy decision. Deer, pedestrians, cyclists and cars have no inherent mathematical value. Google inevitably has to decide how bad it would be for any of them to be harmed, and that decision is based on human judgment.
In the example from Google’s patent, getting side-swiped by the truck that’s blocking the self-driving car’s view has a risk magnitude of 5,000. Getting into a head-on crash with another car would be four times worse -- the risk magnitude is 20,000. And hitting a pedestrian is deemed 20 times worse than the sideswipe, with a risk magnitude of 100,000.
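To make that arithmetic concrete, here is a minimal sketch in Python of the kind of expected-risk comparison the patent describes. The risk magnitudes come from the patent’s example; the probabilities, the information value and all of the names are hypothetical stand-ins for illustration, not Google’s actual code.

```python
# Illustrative sketch of the expected-risk comparison described in
# Google's patent. Risk magnitudes come from the patent's example table;
# everything else is a hypothetical stand-in.

# Risk magnitudes from the patent's example.
RISK_MAGNITUDE = {
    "sideswiped_by_truck": 5_000,
    "head_on_collision": 20_000,
    "hit_pedestrian": 100_000,
}

def expected_risk(outcomes):
    """Sum of probability * risk magnitude over every possible bad outcome."""
    return sum(prob * RISK_MAGNITUDE[name] for name, prob in outcomes)

def should_maneuver(info_gain, outcomes):
    """Make the maneuver only if the information gained outweighs
    the expected risk of making it."""
    return info_gain > expected_risk(outcomes)

# Hypothetical scenario: nudging into the next lane for a better view
# carries a 1% chance of being sideswiped and a 0.1% chance of a
# head-on crash. The information value (100) is an assumed figure.
outcomes = [("sideswiped_by_truck", 0.01), ("head_on_collision", 0.001)]
print(should_maneuver(info_gain=100, outcomes=outcomes))  # True: 100 > 70
```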
Google used these numbers merely to demonstrate how its algorithm works. However, it’s easy to imagine a hierarchy in which pedestrians, cyclists, animals, cars and inanimate objects are each explicitly given a different level of protection.
People tend to talk about life-and-death situations. It lends gravitas.
But every time a self-driving car moves over to give a cyclist room, it is acting on an ethical judgment made by a human programmer. Now is the time to have a real conversation about what those ethics should be, before an actual life-and-death scenario arises.