WASHINGTON/DETROIT (Reuters) -- U.S. vehicle safety regulators have said the artificial intelligence system piloting a self-driving Google car could be considered the driver under federal law, a major step toward ultimately winning approval for autonomous vehicles on the roads.
The National Highway Traffic Safety Administration told Google, a unit of Alphabet Inc., of its decision in a previously unreported Feb. 4 letter to the company posted on the agency's website this week.
Google's self-driving car unit on Nov. 12 submitted a proposed design for a self-driving car that has "no need for a human driver," the letter to Google from National Highway Traffic Safety Administration Chief Counsel Paul Hemmersbaugh said.
"NHTSA will interpret 'driver' in the context of Google's described motor vehicle design as referring to the [self-driving system], and not to any of the vehicle occupants," NHTSA's letter said.
"We agree with Google its [self-driving car] will not have a 'driver' in the traditional sense that vehicles have had drivers during the last more than one hundred years."
Major automakers and technology companies such as Google are racing to develop and sell vehicles that can drive themselves at least part of the time.
All participants in the autonomous driving race complain that state and federal safety rules are impeding testing and eventual deployment of such vehicles. California has proposed draft rules requiring steering wheels and a licensed driver in all self-driving cars.
If the car's artificial intelligence is the driver for legal purposes, that would clear the way for Google or automakers to design vehicle systems that communicate directly with the vehicle's artificial pilot.
In its response to Google, the federal agency offered its most comprehensive map yet of the legal obstacles to putting fully autonomous vehicles on the road. It noted that existing regulations requiring some auto safety equipment cannot be waived immediately, including requirements for braking systems activated by foot control.
"The next question is whether and how Google could certify that the [self-driving system] meets a standard developed and designed to apply to a vehicle with a human driver," NHTSA said.
Google is "still evaluating" NHTSA's lengthy response, a company spokesperson said on Tuesday. Google executives have said they would likely partner with established automakers to build self-driving cars.
Google told NHTSA that the real danger is having auto safety features that could tempt humans to try to take control.
Google "expresses concern that providing human occupants of the vehicle with mechanisms to control things like steering, acceleration, braking... could be detrimental to safety because the human occupants could attempt to override the (self-driving system's) decisions," the NHTSA letter stated.
NHTSA's Hemmersbaugh said federal regulations requiring equipment like steering wheels and brake pedals would have to be formally rewritten before Google could offer cars without those features.
For example, current federal rules require alerts on dashboards if tire pressure runs low. NHTSA said a test would need to be created that shows the vehicle computer is informed of the problem. NHTSA raised the question of whether humans in the vehicles should also be made aware.
In January, NHTSA said it may waive some vehicle safety rules to allow more driverless cars to operate on U.S. roads as part of a broader effort to speed up development of self-driving vehicles.
NHTSA said then it would write guidelines for self-driving cars within six months. Transportation Secretary Anthony Foxx said the administration may seek new legal authority to allow deployment of autonomous vehicles "in large numbers" once they are deemed safe.
The process of rewriting federal regulations governing the design, placement and operation of vehicle controls could take months or years. The NHTSA counsel said Google could consider applying for exemptions from certain regulations, providing NHTSA with supporting documentation.