Q: What makes a good human-machine interface?
A: There are a lot of aspects to take into consideration and that number is increasing. In the past, a good HMI was about avoiding driver distraction. It was also about beauty — the system had to look good. Now, with the arrival of autonomous driving, it must deal with many new aspects because sometimes the driver may be out of the loop. How do we handle things if he comes back into the loop, and how can he be sure he can trust the car? It is a huge area to tackle.
Q: How will the human-machine interface reassure the driver that he can trust the car?
A: Let's take the example of an obstacle in the pathway of the car. The driver needs to know immediately that the car has seen it. This is where we can use augmented reality, especially when it comes to the head-up display, to put a frame around the obstacle so the driver knows the situation is safe.
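Drawing that frame amounts to mapping the obstacle's detected 3D position into 2D head-up display coordinates. The sketch below uses a simple pinhole projection; all function names, the focal length, and the display center are illustrative assumptions, not a real HUD API.

```python
# Hypothetical sketch: project an obstacle's 3D position (car frame,
# meters) onto 2D HUD pixel coordinates with a pinhole model.

def project_to_hud(x_forward, y_left, z_up, focal=800.0,
                   cx=640.0, cy=360.0):
    """Map a 3D point ahead of the car to HUD pixel coordinates.

    x_forward: distance ahead of the driver (must be positive)
    y_left:    lateral offset (positive = left)
    z_up:      height relative to the driver's eye point
    focal, cx, cy: assumed display parameters
    """
    if x_forward <= 0:
        raise ValueError("obstacle must be in front of the driver")
    u = cx - focal * (y_left / x_forward)  # horizontal pixel
    v = cy - focal * (z_up / x_forward)    # vertical pixel
    return u, v

def frame_around_obstacle(x, y, z, half_width=1.0, half_height=0.75):
    """HUD bounding box (left, top, right, bottom) to draw around an
    obstacle of assumed physical size centered at (x, y, z)."""
    left, top = project_to_hud(x, y + half_width, z + half_height)
    right, bottom = project_to_hud(x, y - half_width, z - half_height)
    return left, top, right, bottom
```

Note how a more distant obstacle (larger `x_forward`) yields a smaller frame, which is the visual cue that tells the driver the car has both seen and ranged the object.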
Q: What about a situation in which the driver wants to turn across oncoming traffic?
A: The question here is whether the car can turn safely in front of or behind the oncoming cars. The driver needs to know which decision the car will make, and he needs to know it before the car moves. You can show this on the head-up display, but you can also use a short sound. Which one is chosen will depend on what the driver is doing at the time.
Q: What detection technologies are used in this type of situation?
A: The strategy is to use redundant information to be safe, so it will not be just one technology. It will be a combination of laser sensors, cameras and radar.
In practice, automakers are making different decisions about what they will use. For some it will be radar and cameras; for others, lidar and cameras. There are different strategies, and discussions are ongoing.
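The redundancy idea described above can be sketched as a simple cross-check: an obstacle is only confirmed when independent sensor types agree, so no single faulty sensor triggers (or suppresses) a response on its own. The sensor names and detection format are assumptions for illustration.

```python
# Illustrative sketch of redundant detection: require agreement from
# at least two independent sensor types before confirming an obstacle.

def confirm_obstacle(detections):
    """detections: dict mapping sensor name ('radar', 'camera',
    'lidar') to True/False for whether that sensor currently reports
    the obstacle. Returns True only when two or more sensors agree."""
    votes = sum(1 for seen in detections.values() if seen)
    return votes >= 2

# A single camera false positive is not enough on its own:
confirm_obstacle({'radar': False, 'camera': True, 'lidar': False})  # False
```

Real fusion systems weight sensors by confidence and conditions (radar degrades less in rain, cameras carry more semantic detail), but the voting structure conveys why a combination is safer than any one technology.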
Q: What are the challenges of handing the car back to the driver after automated driving?
A: Based on research projects, the main challenge is identifying the right form of warning. There are two possible situations here. The first is when the car becomes aware that the infrastructure ahead can no longer support automated driving and control must pass back to the driver; normally there is a warning of this a few kilometers in advance. The second is where the handover has to happen faster because of a more critical situation.
The question is how long the driver will need to take over. If he is focused and facing front, he will see a warning in the cluster. But what if he is turned away, looking for something on the back seat? In this case, we would need a loud warning sound. We have to use all the sensors available, including an interior camera, to observe the driver so we always know what he is doing.
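The escalation logic just described can be sketched as a small decision function. The driver states, modality names, and the five-second threshold are assumptions for illustration, not a production warning strategy.

```python
# Hedged sketch: pick a handover warning modality from the driver
# state estimated by the interior camera and the time available.

def choose_warning(driver_state, seconds_to_handover):
    """driver_state: 'eyes_on_road', 'distracted', or 'turned_away'
    (as estimated from the interior camera).
    seconds_to_handover: time until the driver must be in control.
    Returns the warning modality to use."""
    if seconds_to_handover < 5:
        # Critical handover: escalate regardless of driver state.
        return 'loud_sound_plus_visual'
    if driver_state == 'eyes_on_road':
        return 'cluster_visual'      # a message in the cluster suffices
    if driver_state == 'distracted':
        return 'visual_plus_chime'
    # e.g. driver turned toward the back seat: a visual cue alone
    # would be missed entirely.
    return 'loud_sound'
```

The point of the sketch is the coupling: the same handover event produces different warnings depending on what the interior camera says the driver is doing.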
Q: Will the human-machine interface in autonomous vehicles look different?
A: Not immediately, because as long as the driver is still driving some of the time, it needs to show all the normal information.
The initial change will be around the new information being added to reassure the driver the car is safe when being driven autonomously. For example, warnings about obstacles ahead.
A truly big change will come when driving becomes fully automated and you don't have a steering wheel anymore. Then other things, such as entertainment, become more important. But how exactly it will look then is hard to say now.
Q: How will the cloud affect interior electronics?
A: The car will have much more information, and a lot more functions will be hosted in the cloud. This will improve the quality of the information available, and it can be used to help people drive more efficiently. If you know there is a pothole ahead you can avoid it. If you know when the traffic lights will change you can drive accordingly.
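"Driving accordingly" with traffic-light timing is a small calculation: given the distance to the light and the seconds until it turns green, the car can suggest a speed that arrives just as it changes, avoiding a full stop. The function name, inputs, and speed cap are illustrative assumptions.

```python
# Sketch: advisory speed from cloud-provided traffic-light timing.

def advisory_speed_kmh(distance_m, seconds_to_green, max_kmh=50.0):
    """Speed (km/h) that reaches the light just as it turns green,
    capped at an assumed local limit. If the light is already green
    (seconds_to_green <= 0), simply drive at the limit."""
    if seconds_to_green <= 0:
        return max_kmh
    speed = (distance_m / seconds_to_green) * 3.6  # m/s -> km/h
    return min(speed, max_kmh)

# 300 m from a light turning green in 30 s: glide at 36 km/h
# instead of braking to a stop and pulling away again.
advisory_speed_kmh(300, 30)  # 36.0
```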
The cloud will also enable services such as over-the-air updates. This function will allow updates without the need to bring the car into the workshop and it will enable the user to add new functions without wasting time.
Furthermore, we will see a change in the electrical/electronic architecture. Several separate control units will merge into a smaller number of powerful ones with multicore processors.
Q: What is the main challenge of artificial intelligence right now?
A: There is not one simple answer, as there is so much ongoing development. But a key challenge is ensuring that no mislearning takes place that could lead to wrong decisions. In terms of what I will call "comfort" decision-making, such as reassuring the driver about safety, the starting point has been reached and some of the functions are already there.
Safety-critical decisions will take a little longer to develop, since it will take a while to implement the right testing systems. It is difficult to give an exact time frame, as it will evolve step by step.