Definitions vary widely, but fundamentally, AI is technology that allows machines or computer systems to perform tasks in ways that mimic human intelligence. It's an umbrella term for systems that can adapt based on the data they receive.
"We have to teach these systems first, and then they can do what they've been taught better than a human, but they can't go beyond that scope," said Danny Shapiro, senior director of automotive at chip supplier Nvidia.
Artificial intelligence seems to have morphed into a catchall phrase for all kinds of smart technologies, but Jeffrey Miller, a University of Southern California associate computer science professor, says it boils down to "the ability for computers to start to think."
"Computers are never going to be able to have feelings or think like humans do, but they are able to 'think' because we've programmed them to do that," Miller said.
AI can learn without being self-aware, Shapiro said.
"You educate it by feeding it information, like the experience of a human. And so it can just keep getting smarter and smarter and smarter. Over time, what we can do is not just detect pedestrians, but we can train it to detect distracted pedestrians," Shapiro said. "Somebody walking, staring at their smartphone is a common reality now, and that person behaves differently than somebody standing on the corner, patiently waiting for the light to change, watching traffic."