WASHINGTON -- Tesla Motors, the trailblazing electric-car company, is leading the National Highway Traffic Safety Administration into uncharted territory as well.
The agency's investigation into the fatal crash of a Tesla Model S is its first opportunity to get a real-world reading on how automated-driving technology works in the field, and how drivers interact with it.
It also pushes NHTSA into the unfamiliar role of evaluating the complex software behind Tesla's Autopilot system, as the agency tries to determine whether it's dealing with a malfunction, an engineering flaw or just the limits of technology's power to compensate for human error and inattention.
The Autopilot probe is "qualitatively different" from the evaluations and investigations NHTSA has done in the past, said Allan Kam, who spent 25 years as an NHTSA enforcement attorney and served as counsel to the agency's Office of Defects Investigation.
There's also little or no data from competitors about how automated driving systems such as Autopilot perform in the field, meaning investigators will be unable to compare Tesla data with industry norms, Kam said.
"Here you have a totally new technology, and you don't really have peers," he said. "It does mean that the traditional methodologies that NHTSA has used to evaluate a safety related defect are not applicable here."
NHTSA chief Mark Rosekind and U.S. Transportation Secretary Anthony Foxx have lauded the potential of automated and autonomous vehicle systems to reduce the human errors that lie behind 94 percent of crashes and contribute to traffic deaths that topped 35,000 in the U.S. last year.
The agency has thus far resisted issuing rules for automated driving systems, in part out of concern that regulation could squelch innovation and the potential safety benefits that come with it. Instead, it has been crafting nonbinding guidelines for the safe deployment of autonomous vehicles and systems. Those guidelines are due this month.
Against that backdrop, the probe will force NHTSA to confront the risks associated with the early deployment of automated driving systems and how drivers interact with them, in the absence of federal rules.
NHTSA's inquiry is likely to focus on the ability of Autopilot's cameras and sensors to detect and react to road hazards. Even if the system performed as it was designed to, experts say, the agency could find that it poses an unreasonable safety risk as deployed.
Tesla rolled out its Autopilot system in October through a software update, with key features still in the beta, or late testing, stage. The update activated the cameras, radar, ultrasonic sensors and GPS technology built into later models of the car, allowing it to largely steer itself, moderate its speed and change lanes automatically. But in a blog post at the time, Tesla warned: "The driver is still responsible for, and ultimately in control of, the car."
The extent to which drivers heed that warning is unclear. Missy Cummings, an engineering professor and human-factors expert at Duke University, says humans tend to show "automation bias": a tendency to trust that automated systems can handle all situations once they work even 80 percent of the time.
The Model S crash that killed driver Joshua Brown, a former Navy SEAL and Tesla fan, was "not an isolated incident," Cummings said, citing examples of other reported crashes involving Tesla cars that had Autopilot functions activated.
That highlights the need for a more comprehensive testing and analysis program than what's currently available, she said.
"From a very clear engineering pragmatic standpoint, we know there is a technological flaw," if there is a blind spot, she said. "We know the cameras can't see everything they need to see, particularly at high speeds."