SAN FRANCISCO -- In a well-worn Indian fable, a collection of blind men grasp at an elephant, seeking to understand its essence. One man feels its trunk and decides an elephant is like a rope. Another, touching a leg, concludes an elephant is like a tree. A third strokes an ear and believes an elephant is like a leaf.
The auto industry, in its quest to design cars that can cruise down the highway on autopilot, is now grappling with an elephant problem of its own.
Modern cars are equipped with an array of cameras and sensors to watch the road, check blind spots and mind the bumpers, but each has only a limited view of reality, one that's too imprecise to be trusted. So in the run-up to 2016, when car companies such as General Motors plan to introduce hands-off highway driving features, the industry is working to mimic the full range of human perception with "sensor fusion," in which supercomputers stitch together data from many sensors into a more complete, 360-degree view of the world.
Like eyes, ears and fingers, the sensors in cars -- cameras, radar, ultrasound -- have shortcomings, says Walter Sullivan, who leads the Silicon Valley laboratory at German supplier Elektrobit, now a unit of Continental AG. But just as a person can navigate a darkened room by touch, a car equipped with sensor fusion can compensate for those shortcomings.
"Cameras are good at certain things, like detecting lane markers, but they get obscured by the glare of the sun," said Sullivan, whose company makes software that customers such as Mercedes-Benz and Volkswagen use in designing safety features. "So you really need a range of sensors, each of which has strengths and weaknesses, and you need to fuse all of them together to safely operate a vehicle."
Mass-market automakers such as Nissan have already given customers a taste of sensor fusion with in-dash parking monitors that stitch together four camera images to generate a virtual bird's-eye view.
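A rough sketch of how such a surround-view composite can be assembled, assuming each camera has a precalibrated mapping to the ground plane; the identity matrices below are placeholders standing in for real factory calibration data, and the blending is far cruder than a production system's.

```python
# Sketch of a "bird's-eye" composite: each camera's frame is warped onto a
# common ground-plane canvas using a homography from calibration, then the
# four warped images are overlaid. Placeholder calibration and black frames
# are used here purely so the example runs.

import cv2
import numpy as np

CANVAS = (800, 800)  # width, height of the top-down composite in pixels

def birds_eye(frames, homographies):
    """Warp each camera frame to the ground plane and overlay the results."""
    canvas = np.zeros((CANVAS[1], CANVAS[0], 3), dtype=np.uint8)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame, H, CANVAS)
        mask = warped.any(axis=2)    # pixels this warped frame covers
        canvas[mask] = warped[mask]  # simple overwrite; real systems blend seams
    return canvas

# Placeholder inputs: four blank frames and identity homographies stand in for
# the front, rear, left and right cameras and their calibration matrices.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(4)]
homographies = [np.eye(3) for _ in range(4)]
top_down = birds_eye(frames, homographies)
```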
Yet the technology will get a much bigger test next year.
GM sees sensor fusion as a crucial enabler of Super Cruise, which will allow customers to zoom down the highway without touching the wheel or the pedals. It is slated to make its debut just over a year from now in the 2017 Cadillac CT6, a large sedan that GM has positioned as a technological showcase.
Until now, sensor fusion hasn't been necessary, nor has it always been possible. Today's safety features operate with a collection of "little silver boxes" -- individual computing modules scattered throughout a car, says Danny Shapiro, senior director of automotive at chipmaker Nvidia Corp. Adaptive cruise control, for instance, can rely solely on a forward-looking radar and camera.
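For illustration, here is a toy version of the control logic such a standalone module might run, needing nothing beyond the gap to the car ahead and the closing speed reported by a forward-looking sensor. The gains, limits and function name are made-up values for the sketch, not GM's or any supplier's production tuning.

```python
# Minimal sketch of standalone adaptive cruise control: hold a target time gap
# to the lead vehicle using only gap and closing speed. Gains and limits are
# illustrative, not production values.

def acc_speed_command(own_speed_mps, gap_m, closing_speed_mps,
                      target_gap_s=1.8, set_speed_mps=31.0):
    """Return a commanded speed in m/s for the next control cycle."""
    desired_gap_m = own_speed_mps * target_gap_s
    gap_error_m = gap_m - desired_gap_m
    # Proportional correction on the gap error plus the closing speed,
    # capped at the driver's set speed.
    command = own_speed_mps + 0.2 * gap_error_m - 0.5 * closing_speed_mps
    return max(0.0, min(command, set_speed_mps))

# Example: travelling 28 m/s, 40 m behind a car we are closing on at 3 m/s.
print(acc_speed_command(28.0, 40.0, 3.0))  # ~24.4 m/s, easing off to reopen the gap
```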
But more robust autonomous-driving technology requires systems to communicate with one another to judge many more parameters than speed and distance, and that takes an enormous amount of processing power. Google Inc., for the early iterations of its self-driving car, filled the entire trunk with computers to handle all the processing requirements.
General Motors has worked with a team of "critical suppliers" to design a computer architecture powerful and reliable enough to handle sensor fusion for Super Cruise, says Cem Saraydar, the director of GM's electrical control systems research lab. But even this system is not powerful enough to replace a human driver.
"Today's sensing technology offers tremendous capabilities that we didn't have five or 10 years ago, but it's still not at the point where it can create a cognitive capability like we humans have," Saraydar said. "We still need more computing prowess for a vehicle to make sense of its environment in the way a human does."
The market for driver-assistance systems is expected to grow from $300 million in 2015 to $700 million in 2020, according to consultancy IHS Automotive, with sensor fusion among the fastest-growing segments of the market.
Yet for companies such as Freescale and Qualcomm, which have long focused on selling small computing modules that power specific functions within the car, the heavy processing demands of sensor fusion pose a new challenge.
This year at the International CES convention in Las Vegas, Nvidia unveiled Drive PX, an automotive computer slightly larger than a VHS cassette that boasts 2.3 teraflops of computing power -- more than the world's most powerful gaming console, Sony Corp.'s PlayStation 4. Nvidia expects it to ship in cars in 2017.
With multiple streams of sensor data being fed into a single supercomputer, automakers and software specialists will compete to develop new ways to interpret and exploit the data to make driving safer. Elektrobit, for instance, is using front-facing cameras to read speed limits on street signs. But street signs can get dirty and faded, and glare from the sun can make them hard to read. So the company is checking its sense of sight against other "senses," such as map databases, Sullivan said.
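A hedged sketch of that kind of cross-check, with hypothetical function names and thresholds: the camera's reading wins only when it is confident and roughly agrees with the map.

```python
# Sketch of the cross-check Sullivan describes: a camera-based sign reader and
# a map database each propose a speed limit, and the software arbitrates.
# Function names and thresholds are hypothetical, not Elektrobit's actual API.

def resolve_speed_limit(camera_kph, camera_confidence, map_kph,
                        min_confidence=0.6):
    """Prefer the camera when its reading is trustworthy, else fall back to the map."""
    if camera_kph is None or camera_confidence < min_confidence:
        return map_kph  # sign dirty, faded, or washed out by glare
    if map_kph is not None and abs(camera_kph - map_kph) > 30:
        # Readings disagree wildly; a misread sign is more likely than a
        # 30 km/h map error, so trust the map and flag the sign for review.
        return map_kph
    return camera_kph

# Example: glare drops the sign reader's confidence, so the map value wins.
print(resolve_speed_limit(camera_kph=80, camera_confidence=0.3, map_kph=100))
```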
"Sensors don't know what they can do," he said. "The intelligence is in the software."