"Our vehicles have now logged nearly 700,000 autonomous miles, and with every passing mile, we're growing more optimistic that we're heading toward an achievable goal -- a vehicle that operates fully without human intervention."
-- Chris Urmson, director of Google's self-driving car project, April 28
WASHINGTON -- Engineers often talk about wicked problems and tame ones.
As frustrating as a tame problem might be, it can be solved with the right hardware and software. Wicked problems, on the other hand, defy engineering solutions because they deal with human nature, politics or societal forces.
The companies that believe autonomous vehicles are the next big thing, such as Google, Nissan and Volvo, are mostly focused on tame problems. Volvo said last week it has started testing 100 cars in its hometown of Gothenburg, Sweden, to work out the challenges of city driving. Google, in its first formal update since August 2012, posted a video on its Web site last week that shows how its algorithms can now deal with road closures and crowds of pedestrians.
"Thousands of situations on city streets that would have stumped us two years ago can now be navigated autonomously," wrote Chris Urmson, director of Google's self-driving cars project.
But the wicked problems are starting to pop up as well, from state legislatures to Capitol Hill and Silicon Valley. And there is an ideological divide over one particular question:
Should hands-free cars get a hands-off government?
California, Nevada, Florida and the District of Columbia have passed legislation to license and govern autonomous vehicles. A dozen other states are actively considering such bills, according to Stanford Law School's Center for Internet and Society.
But many lawmakers are preoccupied with the question of who bears responsibility for an autonomous car that crashes. Experts such as John Villasenor, a senior fellow at the Brookings Institution and an engineering professor at the University of California, Los Angeles, worry that this mentality might make it hard to deploy the technology.
"Laws governing motor vehicle operation have evolved on the assumption that we have a human driver making decisions 100 percent of the time. We do need to update those laws," Villasenor said. "The concern I have is that people are afraid of the liability issues even though we already have a good liability framework in place, so states are hesitant to modify their laws to allow autonomous vehicles onto the roads. When that hesitation occurs, you slow the whole industry down."
This is a common source of frustration in technology circles. When the government steps in to impose regulations, it gets pushback from people who see benevolent businesses tied up in red tape. There is a term for the ideology underlying this frustration: cyberlibertarianism.
"Cyberlibertarians argue that regulators often lack the tools or know-how to provide smart enforcement," Eric Schneiderman, the New York attorney general, wrote last month in a New York Times op-ed calling for regulation of businesses such as the peer-to-peer lodging rental service Airbnb. "They are not entirely wrong. But that doesn't mean that regulation is unnecessary."
Companies such as Google, which aims to commercialize its self-driving technology by 2017, and Nissan, which says it will have a fully autonomous vehicle by 2020, will run into the same problem as they rethink the automobile.
And new political battle lines will form around auto safety regulation in Washington.
Everyone, from pro-government Democrats to laissez-faire Republicans, likes innovation. But many liberals do not know how to deal with the suggestion that self-driving cars will promote safety if regulators would just stay out of the way.
Think tanks are stepping in to shape the debate. Villasenor released a paper in April arguing that with a few exceptions, existing U.S. laws are well-equipped to sort out liability claims for self-driving cars that get into crashes.
RAND Corp., a nonpartisan think tank, released an extensive paper in January saying that self-driving cars should be allowed once they are better than human drivers and that aggressive regulation would do more harm than good for now.
In the long run, companies such as Google envision a car that drives with near-total safety, without any interference by an imperfect human driver.
But no algorithm can solve the wicked problems introduced by human society and politics. If the team at Google could crack that wicked code, it would grow richer than all the hardware and software in the world could ever make it.