As Automotive News reported this week in its inaugural Shift magazine, there's a juxtaposition occurring in the auto industry around self-driving vehicles.
Traditional automakers such as General Motors are competing alongside emerging tech rivals such as Google spinoff Waymo — both leaders in this space.
Measured by years of experience and miles driven, Waymo is hands down the leader. GM's strengths are its potential for mass-producing and deploying the vehicles and its advanced testing on San Francisco streets.
However, Waymo appears to be ahead of GM in a key area that isn't always talked about regarding autonomous cars: humanizing the artificial intelligence that drives them.
Based on required California crash reports involving self-driving vehicles, GM's Cruise AV cars don't always interact well with humans when they, well, act like humans.
The crash reports, which GM declined to comment on, detail frustration, even anger, among human drivers sharing the road with the self-driving vehicles. Most incidents led to small dents or scratched sensors, while a small number caused more severe damage.
Although the Cruise AVs were likely not liable for most, if not all, of the accidents detailed in the reports, there's a pattern of the AI system prompting humans to act irrationally or to make maneuvers based on what a human driver would do, not what an AI system would.
Of the 11 incidents involving Cruise AVs so far in 2018, at least seven involved human drivers attempting to pass the car. Others occurred when the vehicle responded to pedestrians, cyclists or scooters, including two instances of humans physically hitting the cars.
Waymo/Google had a similar problem while ramping up testing. In 2016, about half of its 13 reported accidents involved the cars being rear-ended while attempting to make a right turn.
Waymo declined to comment on the individual incidents. However, a year earlier The Wall Street Journal reported that the tech giant was attempting to make its cars drive more like humans. That included cutting corners, edging into intersections and crossing double-yellow lines.
It's a conundrum for automakers: How human do you make AI drivers? Being human means breaking laws, whether speeding to safely get around someone or slightly cutting someone off if there's something such as a sandbag in your lane -- a problem Google ran into in 2016 that led to a collision with a bus.
At this point, we should probably think of AI systems as teenage drivers. As they begin driving, they're going to make mistakes and get overwhelmed at times, but the more they drive, the better they become. They have to learn the unwritten rules of the road, and that may include bending, or possibly breaking, laws here and there.
Waymo's AI appears to have turned in its learner's permit for a license, while GM's is getting there. Waymo's California crashes have decreased significantly since it switched to its most recent vehicle, the Chrysler Pacifica, two years ago. (It also has shifted much of its testing to Arizona, where it plans to launch a self-driving fleet of minivans this year, but it still drove two and a half times as many miles as GM's Cruise AVs in California in 2017.)
It's something every company — from Silicon Valley to Detroit — is going to have to deal with when deploying self-driving vehicles. GM and Waymo are just the first to graduate from driver's training.
— Michael Wayland