In the early days of self-driving technology, it was assumed autonomous vehicles could best help blind people by giving them rides to places they could never have driven themselves. But more recently, an alternate possibility has emerged.
The same maps self-driving systems use to understand their precise location in the world could help blind and visually impaired pedestrians walk along sidewalks and guide them through crowded urban environments.
Researchers from New York University and Woven Planet Holdings, a Toyota subsidiary, have compiled a data set of more than 200,000 images that go beyond the traditional forward-facing views most associated with mapping for self-driving operations.
The open-source data set includes side-view images, akin to those collected by Google Street View cameras but at higher precision, that can be used to map storefronts, sidewalks and other outdoor environments.
Improving algorithms to better understand those lateral views is a key part of the ongoing research.