For now, this may be of only mild concern. The spoofed system is part of a driver-assist feature; humans still oversee, and retain all responsibility for, vehicle operations. But it's not a far leap to a future in which similar algorithms govern the actions of fully self-driving systems, and that portends trouble not merely for Tesla but for an entire industry.
The McAfee research is only the latest display of "brittleness" in artificial intelligence systems, says Missy Cummings, a professor at Duke University and director of the school's Humans and Autonomy Laboratory and Duke Robotics. She says machine learning, a subset of AI that's in this case used to train computer-vision algorithms such as the one in the Tesla, isn't yet all that smart.
"There's no 'learning' going on here," she said. "If we mean that I see a set of relationships and then I can extrapolate that to other relationships, I would call that learning. All machine learning does is recognize and make associations that aren't always correct."
Inferring contextual information that may be beneficial, or even critical, to safe operation is an area where AI struggles. It cannot yet reason about, or adapt to, uncertain circumstances.
"A human driver would see that '3' with the extended middle line and know that something was up," Cummings said. "You'd say 'God, that's a terrible new design. I wonder what DOT was thinking?' Or if you saw an 85-mph speed limit in an urban area, you'd know kids had been screwing around. You'd understand that the speed limit is still 35."
This month, she published her own research on the shortcomings of machine learning. In her paper, "Rethinking the maturity of artificial intelligence in safety-critical settings," she argues that while AI holds promise for assisting humans in operating such systems, that promise should not be mistaken for an ability to replace humans altogether.