http://en.people.cn/n3/2025/0416/c90000-20302772.html
https://www.nature.com/articles/d41586-025-01214-9
The system, developed by Shanghai Jiao Tong University in collaboration with researchers from Fudan University, the Hong Kong University of Science and Technology, East China Normal University, and other partners, integrates visual, auditory and tactile feedback, using AI algorithms to scan the surrounding environment. When the wearer approaches obstacles or objects, it sends signals that guide them through movement, object handling, and other visual tasks, thereby enhancing their independence in daily life.
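As a rough, purely illustrative sketch of that multimodal idea (the article does not describe the system's actual signal format or parameters), the Python snippet below maps one detected obstacle's bearing and distance to a combined audio-plus-vibration cue; every name, threshold, and mapping here is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Cue:
        """A combined audio + haptic cue (hypothetical format)."""
        audio_direction: str      # "left", "right", or "center"
        audio_pitch_hz: float     # higher pitch = closer obstacle
        vibrate_side: str         # which side's skin patch to buzz
        vibrate_intensity: float  # 0.0 (off) .. 1.0 (max)

    def obstacle_to_cue(bearing_deg: float, distance_m: float) -> Cue:
        """Turn one detected obstacle into a wearer-facing cue.

        bearing_deg: obstacle direction relative to the wearer's heading
                     (negative = left, positive = right).
        distance_m:  estimated range from a vision/depth model.
        All mappings below are invented for illustration, not taken
        from the published system.
        """
        side = ("left" if bearing_deg < -10
                else "right" if bearing_deg > 10
                else "center")
        # Closer obstacles -> higher pitch and stronger vibration;
        # anything beyond 3 m is treated as silent.
        closeness = max(0.0, min(1.0, 1.0 - distance_m / 3.0))
        return Cue(
            audio_direction=side,
            audio_pitch_hz=300.0 + 700.0 * closeness,
            vibrate_side=side if side != "center" else "both",
            vibrate_intensity=closeness,
        )

    if __name__ == "__main__":
        # An obstacle 1.2 m away, slightly to the wearer's left.
        print(obstacle_to_cue(bearing_deg=-25.0, distance_m=1.2))

In this toy mapping a single percept is rendered redundantly across the auditory and tactile channels, which is the general motivation for combining feedback modalities.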
The signals guide the wearer to move forward, turn left or right, or adjust their path in real time until they reach their destination. In tests involving humanoid robots and visually impaired participants in both virtual and real-world settings, the system showed significant improvements in navigation efficiency. Users successfully maneuvered through mazes and cluttered rooms, and performed object-grasping tasks with greater ease.
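To make the forward/left/right guidance concrete, here is a minimal Python simulation of such a command loop on a toy grid. It is an assumed stand-in, not the researchers' planner: the heading logic, grid model, and function names are invented for illustration.

    # A toy guidance loop on a 2-D grid: at each step, compare the bearing
    # to the goal with the wearer's current heading and issue one of three
    # commands. Purely illustrative; the real planner is not described
    # in the article.
    import math

    HEADINGS = ["N", "E", "S", "W"]  # clockwise order
    STEP = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

    def next_command(pos, heading, goal):
        """Return 'forward', 'left', or 'right' to steer pos toward goal."""
        # Bearing to goal, measured clockwise from north.
        desired = math.degrees(math.atan2(goal[0] - pos[0],
                                          goal[1] - pos[1])) % 360
        current = HEADINGS.index(heading) * 90  # N=0, E=90, S=180, W=270
        diff = (desired - current) % 360
        if diff < 45 or diff > 315:
            return "forward"
        return "right" if diff <= 180 else "left"

    def simulate(start, heading, goal, max_steps=20):
        """Issue commands step by step until the goal cell is reached."""
        pos = start
        for _ in range(max_steps):
            if pos == goal:
                print("arrived")
                return
            cmd = next_command(pos, heading, goal)
            print(pos, heading, "->", cmd)
            if cmd == "forward":
                dx, dy = STEP[heading]
                pos = (pos[0] + dx, pos[1] + dy)
            else:
                i = HEADINGS.index(heading)
                heading = (HEADINGS[(i + 1) % 4] if cmd == "right"
                           else HEADINGS[(i - 1) % 4])

    simulate(start=(0, 0), heading="N", goal=(3, 4))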