ORLANDO — For self-driving vehicles, safety remains in the eye of the beholder.
Dozens of companies are rolling out hundreds of autonomous cars and trucks on public roads across the U.S., yet there are no clear standards on how safe these vehicles should be during testing or commercial deployment.
Ten years after Google launched its self-driving car project, the question of "How safe is safe enough?" remains as essential and nebulous as ever.
"There's no single-bullet solution or single standard," said Nat Beuse, head of safety at Uber's Advanced Technologies Group and a former NHTSA official who oversaw automated-vehicle development at the nation's top auto safety regulator. "There are many approaches out there, and I think that's a healthy thing."
Beuse was among those gathered here this month at the sixth annual Automated Vehicles Symposium, a conference that brings together industry, government and academic leaders examining self-driving technology.
During the conference, Uber unveiled its Safety Case Framework, which provided fresh details on how the company has rethought its approach to safety in the wake of a March 2018 fatal crash involving one of its self-driving vehicles. Broadly, Uber says its vehicles should be proficient, fail-safe, continuously improving, resilient and trustworthy.
A key principle of the framework: The company's self-driving vehicles should be "acceptably safe" to operate on public roads.
But defining what is acceptable in the realm of automotive safety remains an arduous task, one that cuts across technical, legal, regulatory, cultural and perhaps even medical territory. Company-by-company efforts such as Uber's continue, and experts say industrywide discussions have matured.
Yet codifying safety is nowhere near complete.