Wearable AI system helps blind people navigate

Publicly released:
International
Fig. 1 | Overview of the wearable multimodal visual assistance system. CREDIT: Gu et al.

A wearable AI system has been developed to help blind and partially sighted people navigate their environments. The international team developed an AI algorithm that processes video from a camera in the device to help the user find an obstacle-free route. Information about the environment is relayed through bone conduction headphones and through stretchable artificial skins worn on the wrists, which vibrate to guide the user's direction of movement.

Media release

From: Springer Nature

Biomedical engineering: A wearable AI system to help blind people navigate *VIDEOS*

A wearable system designed to assist navigation for blind and partially sighted people is presented in Nature Machine Intelligence. This system uses artificial intelligence (AI) algorithms to survey the environment and send signals to the wearer as they approach an obstacle or object.

Wearable electronic visual assistance systems offer a promising alternative to medical treatments and implanted prostheses for blind and partially sighted people. These devices convert visual information from the environment into other sensory signals to assist with daily tasks. However, current systems are difficult to use, which has hindered widespread adoption.
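
To illustrate the kind of conversion such devices perform, the toy Python sketch below re-encodes a one-dimensional depth scan as stereo tone parameters (pan and pitch). The function name, value ranges and mapping are illustrative assumptions, not the method of any particular published system.

# A minimal sketch of the sensory-substitution idea described above: visual
# input (here a toy 1-D depth scan across the field of view) is re-encoded
# as audio parameters. All names and mappings are illustrative assumptions.

def depth_scan_to_tones(depths_m, min_d=0.3, max_d=4.0):
    """Turn a left-to-right depth scan into (pan, pitch_hz) pairs.

    Nearer surfaces map to higher pitch; position in the scan maps to
    stereo pan (-1 = far left, +1 = far right).
    """
    n = len(depths_m)
    tones = []
    for i, d in enumerate(depths_m):
        d = max(min_d, min(d, max_d))
        pan = 2.0 * i / (n - 1) - 1.0 if n > 1 else 0.0
        closeness = (max_d - d) / (max_d - min_d)  # 0 = far .. 1 = near
        pitch_hz = 220.0 + 660.0 * closeness       # 220-880 Hz sweep
        tones.append((round(pan, 2), round(pitch_hz, 1)))
    return tones

# A wall close on the left produces a high-pitched tone panned hard left.
print(depth_scan_to_tones([0.5, 1.5, 3.5]))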

Leilei Gu and colleagues present a wearable visual assistance system that provides directions via voice commands. The authors developed an AI algorithm that processes video from a camera in the device to determine an obstacle-free route for the user. Signals about the environment ahead are delivered via bone conduction headphones. The authors also created stretchable artificial skins, worn on the wrists, that vibrate to guide the user's direction of movement and help them avoid lateral obstacles. The device was tested with humanoid robots and with blind and partially sighted participants in both virtual and real-world environments. The participants showed significant improvements in navigation and post-navigation tasks, such as avoiding obstacles while moving through a maze and reaching and grasping an object.
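
As a rough illustration of how the audio and haptic cues described above could be combined, the Python sketch below maps a heading correction to a voice command and nearby lateral obstacles to left- and right-wrist vibration intensities. The Obstacle class, thresholds and mappings are hypothetical placeholders; the authors' system runs a learned model on live camera video rather than hand-written logic like this.

# Illustrative sketch only: toy audio + haptic cue logic. Names
# (Obstacle, audio_cue, haptic_cues) are hypothetical, not the authors' API.
from dataclasses import dataclass

@dataclass
class Obstacle:
    bearing_deg: float   # angle relative to heading; negative = left
    distance_m: float    # estimated distance from the wearer

def audio_cue(heading_correction_deg: float) -> str:
    """Coarse voice command for the bone conduction headphones."""
    if abs(heading_correction_deg) < 10:
        return "go straight"
    return "turn left" if heading_correction_deg < 0 else "turn right"

def haptic_cues(obstacles: list[Obstacle], alert_radius_m: float = 1.0):
    """Map nearby lateral obstacles to wrist vibration intensities (0..1).

    Closer obstacles vibrate harder; left-side obstacles drive the left
    wrist skin, right-side the right, so the wearer steers away from them.
    """
    left = right = 0.0
    for ob in obstacles:
        if ob.distance_m > alert_radius_m:
            continue
        intensity = 1.0 - ob.distance_m / alert_radius_m
        if ob.bearing_deg < 0:
            left = max(left, intensity)
        else:
            right = max(right, intensity)
    return left, right

# Example: an obstacle 0.4 m away on the right triggers a strong
# right-wrist vibration while the voice channel keeps the user on course.
obs = [Obstacle(bearing_deg=25.0, distance_m=0.4)]
print(audio_cue(-2.0), haptic_cues(obs))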

The findings suggest that the integration of visual, audio and haptic senses can enhance the usability and functionality of visual assistance systems. Future research should focus on refining the system further and exploring its potential applications in other areas of assistive technology.

Multimedia

Supplementary video 1b
Supplementary video 4
Supplementary video 5
Supplementary video 6
Supplementary video 8

Attachments

Note: Not all attachments are visible to the general public. Research URLs will go live after the embargo ends.

Research: Springer Nature, web page. The URL will go live after the embargo lifts.
Journal/conference: Nature Machine Intelligence
Research: Paper
Organisation/s: Shanghai Jiao Tong University, China
Funder: This work was funded by STI 2030—Major Projects (grant no. 2022ZD0210000), a National Natural Science Foundation of China grant (no. 62274110) and the Shanghai Rising-Star Program (grant no. 21QA1404000). L.G. and B.Y. are involved in the 2022ZD0210000 project; L.G. is involved in the 62274110 and 21QA1404000 projects.
Media Contact/s
Contact details are only visible to registered journalists.