Researchers at the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Maryland, intend to capitalize on recent advances in computer vision – including developments in object recognition, depth sensing, and simultaneous localization and mapping (SLAM) technologies – to augment the capabilities of two commercial retinal prostheses.

The research is funded by a grant from the National Eye Institute at the National Institutes of Health. The APL team is collaborating with prosthesis maker Second Sight and with Roberta Klatzky, a psychologist at Carnegie Mellon University who has studied how human neurophysiology and psychology shape the design of instruments and devices used as navigational aids for the blind.

About 1.3 million Americans are legally blind, most of them affected by late-onset diseases such as glaucoma and age-related macular degeneration. Second Sight’s Argus II Retinal Prosthesis and Orion Cortical Visual Prosthesis systems provide a visual representation of a patient’s surroundings, improving the patient’s ability to orient themselves and avoid obstacles.

“Our goal is to enable current users of visual prosthesis systems to experience a ‘spatial image’ of their environment – a continuously updated mental representation that an individual forms of their surroundings, which enables independent navigation with minimal cognitive burden,” said APL’s Seth Billings, the principal investigator for the project.