Nvidia adapts gaming chips to autos
Danny Shapiro, director of automotive, Nvidia Corp.
The redesigned Audi TT, scheduled to arrive in the United States in 2015, benefits from chipmaker Nvidia's quad-core chip, which works with a 3-D graphics program and can execute 8 billion operations per second.
Danny Shapiro, director of Nvidia's automotive division, has worked closely with Audi and other automakers since 2009. He spoke with Automotive News Europe Correspondent Andrea Fiorello about what's next for the Santa Clara, Calif., company.
Q: The new Audi TT has a large thin-film transistor monitor that delivers a sophisticated combination of gauges, navigation and infotainment data with 3-D graphics. How has all this been achieved?
A: The TT's monitor is powered by two Nvidia Tegra K1 mobile processors. This new chip is built on the same graphics architecture as that used in the world's most extreme gaming PCs and the fastest supercomputer in the United States. It also achieves substantial reductions in heat dissipation and weight. The result is a microprocessor that brings console-class gaming graphics and performance to mobile devices.
The TT actually has two autonomous processors working together, running a variety of applications simultaneously and presenting the information on the same screen. In addition to handling infotainment and navigation, they can be connected to cameras, laser scanners and radar to detect pedestrians and blind-spot objects and to read speed limit signs.
Which automakers are you supplying?
We started with high-end Audis, Lamborghinis, BMWs and Rolls-Royces, and at the moment, there are a little more than 4.5 million cars on the road fitted with Nvidia processors. Now, we're moving into more entry-level vehicles, and there will be announcements later this year of links with some mainstream Japanese manufacturers. Based on our contract signings for future developments, we expect to see our processors installed in a further 25 million vehicles over the next few years.
Carmakers say the autonomous car will be ready by 2020. Is this something you are working on with Google, and how is this progressing?
While we're not allowed to detail our cooperation with Google, I can say that the hardware already exists, and now it's all about creating the software. Sensing is easy; the difficult part is the decision-making process, the equivalent of what goes on in our brains when we drive. That requires even more computing power and algorithms that understand the external world. This task is made much easier by vehicle-to-vehicle communication, and we need to invest in developing this.