LAS VEGAS -- Gill Pratt has heard the "what-if" warnings from critics of autonomous cars.
What if the only way to avert a crash that would be fatal only to the driver is for the car to swerve into a crowd, potentially killing many people?
Skeptics say programming such split-second ethical decisions into a driving machine is impossible. But Pratt, head of the recently formed Toyota Research Institute, says it's a waste to even talk about such scenarios.
"I think it's important not to get too hung up on these extraordinarily contrived rare cases where we have the mistaken belief that human beings actually solved the problem well," said Pratt in an interview last week at the Consumer Electronics Show here. "Human beings don't solve the trolley problem well."
The trolley problem is the old ethical conundrum of whether to let a runaway trolley car mow down the five people caught in its path -- that is, to let what would happen anyway happen -- or actively switch it to a track where it would kill only one person. And what if that person was a child? Or your child?
Machines are likely to get these decisions wrong from time to time, Pratt says, just as humans do. But overall, autonomous vehicles would be significantly safer than cars controlled by humans, who are known to often make crucial mistakes when faced with an impending accident.
That doesn't mean the machines are infallible; there still will be accidents, particularly as self-driving cars share the road with manually driven ones.
"As part of this drastic reduction in fatalities and accidents, there are still going to be some cases where the car had no choice, and it's important that we as a society come to understand that," said Pratt, a former Pentagon researcher and university professor.
His point is underlined by Google's recent troubles testing its autonomous cars on public roads. The cars are programmed to strictly follow the law -- speed limits and all -- even if that increases the chance of an accident. As a result, the Google cars have been bumped by drivers who expected them to accelerate more quickly.
As the public begins to accept what Pratt and his team see as the inevitable dominance of self-driving vehicles, he said, expectations of the machines will need to be tempered.
"Unfortunately, our standards for machines are far higher," he said. "We expect perfection from them."
As Pratt describes it, his mission at the TRI -- funded with an initial $1 billion, five-year investment from Toyota Motor Corp. to focus on artificial intelligence and robotics -- will encompass the technical and societal challenges of autonomous vehicles. The automaker also has spent $50 million to set up artificial-intelligence research centers at Stanford University and the Massachusetts Institute of Technology.
At CES last week, Pratt introduced the technical team and advisory board that will help guide his work. The two groups include experts with backgrounds in government, academia and industry and with deep experience in artificial intelligence, robotics and programming.
So if the trolley problem doesn't keep Pratt up at night, what does?
"I do worry a lot about the learning curve in making sure we don't mistakenly think the machine is better than it is and certify it to operate in a set of conditions beyond what it actually can do in a safe way," Pratt said.
The sooner Toyota or any manufacturer starts putting autonomous vehicles on the road, the sooner they start saving lives, he said. But consumers and engineers alike need to understand that autonomous driving will face the same growing pains as any other nascent technology.
"I think it is inevitable that there will be a learning curve in autonomy as well," Pratt said. "It's not a Toyota issue. It's not any other OEM's issue. It's everybody's issue."