People with visual impairments are in great need of better digital aids to find their way around, especially in unfamiliar surroundings.
Now, Bineeth Kuriakose, in his doctoral work, has found a practical and affordable solution using artificial intelligence and sensors in smartphones.
Kuriakose first took a closer look at the navigation assistant systems that have already been developed for people with visual impairments and found that they are often not particularly usable.
Unmanageable solutions
Many of these navigation assistant systems rely on single-board computers such as the credit-card-sized Raspberry Pi, while others require integration with laptops – both of which make for unwieldy setups.
Furthermore, they often require prolonged training periods and lack the portability needed for outdoor usage.
Despite the proliferation of smartphone-based solutions, they frequently require an active internet connection, rendering them unsuitable in areas with limited connectivity, such as garages and basements.
Most contemporary systems also rely on cloud resources, introducing processing delays that hinder real-time navigation – a critical factor when every second counts.
Many of these systems tend to prioritise technology over addressing the practical challenges faced by users during navigation.
Better and cheaper with artificial intelligence and smartphones
The rapid development of artificial intelligence and the ever-increasing computing power of smartphones, on the other hand, make it possible to develop far more usable solutions.
“This technological synergy can be used effectively to develop a navigation assistant for people with visual impairments,” says Kuriakose. The solution can then also be made available to many more people.
Statistics from the World Health Organization (WHO) underscore the magnitude of this issue: 200 million people live with impaired vision and lack access to the assistive technology they need.
WHO is committed to improving access to high-quality, affordable assistive technology for everyone, everywhere. Hence, an affordable and user-friendly navigation system that helps the user get around by exploiting these technological advances will be of great importance.
Easier to navigate on your own
Kuriakose’s primary goal has been to empower the visually impaired to navigate independently, leveraging the latest technological innovations, especially in artificial intelligence and smartphone sensors, to fulfil their unique needs.
“The plan is to give users greater control, so that they can adapt the system to their own navigation preferences and styles.”
– Bineeth Kuriakose
A smartphone-based navigation assistant
Deep learning is a type of artificial intelligence that mimics the way humans acquire knowledge.
Collaborating closely with visually impaired individuals, Kuriakose designed and developed DeepNAVI, a deep learning-based smartphone navigation assistant to help people with visual impairments find their way around, increasing their independence.
In this way, the navigation assistant is trained to recognise the common obstacles and exits that blind and partially sighted people may encounter.
Utilising lightweight on-device deep learning models, DeepNAVI identifies obstacles in real time, harnessing edge intelligence on the smartphone itself.
It leverages smartphone sensors to provide detailed obstacle information, including distance, position, and movement.
The system also learns to recognise different locations, such as a kitchen, an office, a garage, or a street, helping the user identify the layout of the environment.
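The article does not show DeepNAVI's internal code, but one common way to keep such a scene announcement stable – so that a single misclassified video frame does not flip the reported location – is a sliding-window majority vote over recent per-frame predictions. The sketch below is purely illustrative; the `smooth_scene` helper and the window size are assumptions, not part of the published system.

```python
# Illustrative sketch (hypothetical, not DeepNAVI's actual code):
# stabilise per-frame scene labels with a sliding-window majority vote.
from collections import Counter, deque

def smooth_scene(predictions, window=5):
    """Yield the majority scene label over the last `window` frames."""
    recent = deque(maxlen=window)
    for label in predictions:
        recent.append(label)
        # most_common(1) returns [(label, count)] for the top label
        yield Counter(recent).most_common(1)[0][0]

# One stray "office" frame does not change the announced scene:
frames = ["kitchen", "kitchen", "office", "kitchen", "kitchen"]
print(list(smooth_scene(frames)))  # -> ['kitchen', 'kitchen', 'kitchen', 'kitchen', 'kitchen']
```

A deque with `maxlen` automatically discards the oldest frame, so the vote always reflects only the most recent observations.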
Kuriakose explains:
“The system we have developed can tell you what kind of obstacles are in front of you, and how far away they are. Or whether something is moving or not, and whether it is to the right or left of you.”
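As a rough illustration of how position and movement cues like these could be derived from an object detector's output, consider the sketch below. It is a hypothetical example, not DeepNAVI's actual implementation: the bounding-box format, frame width, and movement threshold are all assumptions.

```python
# Illustrative sketch (not DeepNAVI's actual code): deriving the
# position and movement cues described above from detector output.
# Boxes are assumed to be (x_min, y_min, x_max, y_max) in pixels.

FRAME_WIDTH = 640  # assumed camera frame width in pixels

def horizontal_position(box, frame_width=FRAME_WIDTH):
    """Classify an obstacle as left, center, or right of the user."""
    center_x = (box[0] + box[2]) / 2
    if center_x < frame_width / 3:
        return "left"
    if center_x > 2 * frame_width / 3:
        return "right"
    return "center"

def is_moving(prev_box, curr_box, threshold=10):
    """Flag an obstacle as moving if its box center shifts more than
    `threshold` pixels between consecutive frames."""
    prev_cx = (prev_box[0] + prev_box[2]) / 2
    curr_cx = (curr_box[0] + curr_box[2]) / 2
    prev_cy = (prev_box[1] + prev_box[3]) / 2
    curr_cy = (curr_box[1] + curr_box[3]) / 2
    shift = ((curr_cx - prev_cx) ** 2 + (curr_cy - prev_cy) ** 2) ** 0.5
    return shift > threshold

# Example: an obstacle detected on the user's left, barely shifting.
prev, curr = (40, 200, 140, 400), (42, 201, 143, 402)
print(horizontal_position(curr))  # -> left
print(is_moving(prev, curr))      # -> False
```

In a real system these cues would feed a text-to-speech layer, producing announcements like the one Kuriakose describes.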
DeepNAVI can be easily installed as an app on an Android phone, with the smartphone securely stored in a waistcoat pocket, capturing real-time video of the surroundings and relaying feedback to the user through wireless earphones.
User testing with people with visual impairments yielded positive feedback regarding usability, portability, and the effectiveness of the system in providing navigation information.
Through his research, Kuriakose found that a smartphone on its own – without additional sensors, extra devices, or external computer networks – can serve as an assistant that helps people with visual impairments navigate.
“One advantage of the solution is that it does not need to be connected to the internet. In some situations, such as when the user is in a basement or a garage, there is simply no internet access.”
The user’s needs must be in focus
The research also shows that if the user’s needs and preferences are prioritised, a navigation assistant can become more accessible and user-friendly, so that more people can use it with ease and confidence in their daily lives.
“During the evaluation studies, we asked the users how it felt to use DeepNAVI, whether they could trust a smartphone-based navigation assistant, and which they preferred: DeepNAVI alone, the white cane alone, or the combination of DeepNAVI and a white cane.”
The majority replied that, at the time, they preferred the combination of DeepNAVI and the white cane.
They added that they would not yet trust a smartphone assistant alone for navigation in public settings.
However, they did acknowledge that their perception might change in the future and their trust in using DeepNAVI alone could grow with practice and familiarity, given its ability to provide more comprehensive information about the environment.
More control for the users can offer better accessibility
Kuriakose’s system is currently a research prototype. However, the knowledge gained through this research can be useful for further research in this area.
His research received international recognition in the form of two best paper awards.
“We still need to make some refinements to the system, and we will need interface designers to help create a fully functional and user-friendly smartphone solution.”
“Additionally, considering the exciting advancements in artificial intelligence and the development of more advanced deep learning models, we anticipate further enhancing the system’s capabilities.”
“Our future plans also involve providing users with greater control, allowing them to personalise the system based on their unique navigation preferences and styles. This will offer users more flexibility when using DeepNAVI,” Kuriakose explains.
As an added benefit, the wide prevalence of smartphones, regardless of economic circumstances, makes this technology accessible to a significant proportion of the blind and partially sighted population.
By eliminating the need for an additional navigation aid, individuals can rely on their smartphones as their primary “navigation assistant”, offering unprecedented convenience.
References
- Bineeth Kuriakose, Raju Shrestha, Frode Eika Sandnes: DeepNAVI: A deep learning based smartphone navigation assistant for people with visual impairments. Expert Systems with Applications, Volume 212, February 2023, 228720.
- Bineeth Kuriakose, Raju Shrestha, Frode Eika Sandnes: Exploring the User Experience of an AI-based Smartphone Navigation Assistant for People with Visual Impairments. CHItaly '23: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, September 2023, Article No. 17, Pages 1–8.
- Bineeth Kuriakose, Ida Marie Ness, Maja Åskov Tengstedt, Jannicke Merete Svendsen, Terese Bjørseth, Bijay Lal Pradhan, Raju Shrestha: Turn Left Turn Right - Delving type and modality of instructions in navigation assistant systems for people with visual impairments. International Journal of Human-Computer Studies, Volume 179, November 2023, 103098.
- Bineeth Kuriakose, Raju Shrestha, Frode Eika Sandnes: SceneRecog: Deep Learning Scene Recognition Model for Assisting Blind and Visually Impaired Navigate using Smartphones. 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 17–20 October 2021.