Raj Rajkumar, a professor of electrical and computer engineering at Carnegie Mellon University, was on the team that won the 2007 DARPA Urban Challenge, a Pentagon-sponsored competition and one of the first significant projects involving self-driving vehicles in urban environments. He has been intimately involved in the development of the cars ever since.
His work on the Defense Advanced Research Projects Agency challenge forged a partnership with General Motors that continues today. In 2008, Rajkumar started a lab to focus exclusively on autonomous driving, with sponsorship from GM. His latest project is a self-driving Cadillac SRX. Rajkumar, 52, spoke with Special Correspondent Julie Halpert about the role of academic researchers in the development of self-driving cars and the challenges of making self-driving cars mainstream.
Q: What are academics contributing to the autonomous-car effort?
A: It has taken literally decades for this technology to evolve. If you go back to the DARPA 2007 challenge, that really put autonomous driving on the map. Six teams that competed were led by U.S. universities. Today, there continue to be major problems in terms of understanding what's happening around the car from a sensor standpoint. Researchers in academia are playing a significant part in finding technologies that apply. You collect lots of data from the real world and try to learn from the data. Based on the training and learning and data collected, academics can draw conclusions about what's happening in the real world, things like there is a pedestrian out there, there is a bicyclist, there is a car and the like.
Talk about the autonomous version of the Cadillac SRX.
The autonomous car we designed that won the 2007 DARPA challenge looked like a science project, with different things hanging off of it, sensors of all kinds, a laser scanner on the roof of the car. There was a huge trunk completely full of electronics. If you opened the driver's side door, there were a couple of complex displays. We said, this technology is feasible, but let's basically make it look practical. Let's build a car that no longer appears like a science project. [In the SRX], when you open the trunk door, the trunk is completely empty. We ended up hiding the electronics in the cavities. We purposely had a target of building a car that looks normal.
How does this compare to Google's self-driving car?
Google has that laser scanner on the roof. To our surprise, they still have that in their prototype. It's not very pleasing, and it's not going to last very long, driving at 65, 70 mph, and [the laser scanner] costs about $60,000.
So what do you use?
Instead of using one all-seeing laser scanner on the roof, we basically use a bunch of smaller laser scanners around the car, in the body of the car. So, we take the different pieces and fuse them together.
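The idea of fusing several body-mounted scanners can be sketched as transforming each scanner's point cloud into a common vehicle frame and concatenating the results. This is a minimal illustration, not CMU's actual pipeline; the scanner mounting poses and points below are invented.

```python
import numpy as np

def pose_to_matrix(x, y, yaw):
    """Build a 2D homogeneous transform from a scanner's mounting pose
    (offset x, y in meters and yaw in radians, in the vehicle frame)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def fuse_scans(scans):
    """Transform each scanner's points into the common vehicle frame
    and concatenate them into one fused point cloud.

    scans: list of (pose_matrix, points), where points is an (N, 2)
    array in that scanner's local frame.
    """
    fused = []
    for T, pts in scans:
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # (N, 3)
        fused.append((homo @ T.T)[:, :2])                # back to (N, 2)
    return np.vstack(fused)

# Hypothetical setup: a forward-facing front scanner and a
# rear-facing rear scanner, each seeing one point.
front = (pose_to_matrix(2.0, 0.0, 0.0), np.array([[5.0, 0.0]]))
rear = (pose_to_matrix(-2.0, 0.0, np.pi), np.array([[3.0, 0.0]]))
cloud = fuse_scans([front, rear])
# front point lands at (7, 0); rear point at (-5, 0) in the vehicle frame
```

A real system would also need to synchronize scan timestamps and resolve overlapping returns, but the frame transformation above is the core of "taking the different pieces and fusing them together."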
The Cadillac has a button on the console that allows drivers to take over if they feel uneasy about the vehicle driving itself. How do you address this issue of drivers ceding control?
This is kind of a key question in the interaction process for this technology. Because of regulatory reasons as well as liability reasons, we do want the driver to be part of that process. Just like when you engage cruise control, you do pay attention and then cancel cruise control as required, so we do expect that the human will be part of the process for the next several years. We do expect there will be a trial monitoring system that monitors the driver. If the driver is distracted and not paying attention, the vehicle will warn the user and let the user slow down or come to a stop.
Will that be part of every autonomous car?
We definitely expect that to be a requirement for several years, for several reasons. Automated cars cannot drive everywhere that a human can, so there must be a way to hand control back to the human.
But will there come a point that the driver will have enough confidence to totally hand over control to the machine?
Yes. We can see a point in time, a few decades from now. Humans by nature are error-prone and would be harming themselves if they drove.
What else needs to be worked out before autonomous cars can go mainstream?
As humans, we can drive in heavy rain or snow or dense fog. It turns out that the sensors we have on the car won't work well in heavy rain or heavy fog or heavy snow. Second, when the road is covered with snow, humans can still drive if the snow isn't really thick, but [for autonomous cars] if the snow covers the lane markers, you have a very hard time.
So how do you work around that?
We're trying to learn from what we as humans do. If the road is covered, we no longer look for lane markers. We look for a piece of the road. We look for curbs on the pavement. We follow a simple set of visual cues to know where the car is, and we drive in between what we think is the lane, and we drive where somebody else has been before. We do this naturally, but we have to teach our car to do it.
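One simple way to think about combining such weak cues is a confidence-weighted average of lateral-offset estimates for the lane center. This is a toy sketch of the idea described above, not the team's actual method; the cue names, offsets, and weights are all invented.

```python
def estimate_lane_center(cues):
    """Fuse several lateral-offset estimates (meters from the vehicle
    centerline), each paired with a confidence weight, into one
    lane-center estimate via a weighted average."""
    total_weight = sum(w for _, w in cues)
    if total_weight == 0:
        return None  # no usable cues: the system should hand back control
    return sum(offset * w for offset, w in cues) / total_weight

# Hypothetical cues on a snow-covered road: road-edge geometry,
# tire tracks left by a preceding vehicle, and a curb detection.
cues = [
    (0.3, 0.5),  # road-edge cue: lane center ~0.3 m right, weight 0.5
    (0.1, 0.8),  # tire tracks from the car ahead, weight 0.8
    (0.4, 0.2),  # curb-based estimate, weight 0.2
]
center = estimate_lane_center(cues)  # about 0.21 m right of centerline
```

Production systems use far more sophisticated probabilistic fusion, but the principle is the same: when lane markers vanish, several weaker cues together can still localize the lane.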