
Public defense: Bineeth Kuriakose

Bineeth Kuriakose will defend his thesis “Smartphone-based Navigation Assistant using AI for People with Visual Impairments: Connecting User Needs with Technological Furtherance” for the PhD program in Engineering Science.

This event will also be available via live stream (oslomet.zoom.us).

Trial Lecture:

The trial lecture starts at 10:00. Please do not enter the room after the lecture has begun. 

Title: “Edge computing in Assistive Technologies”

Public defense:

The candidate will defend his thesis at 12:00. Please do not enter the room after the defense has begun.

Title of the thesis: “Smartphone-based Navigation Assistant using AI for People with Visual Impairments: Connecting User Needs with Technological Furtherance”

Ordinary opponents:

Leader of the evaluation committee/Chair of the committee:

Professor Pietro Murano, OsloMet, Faculty of Technology and Art and Design (TKD), Department of Computer Science.

Leader of the public defense:

Academic coordinator and Head of group at the PhD programme in Engineering Science, Siri Fagernes, OsloMet, TKD.

Supervisors:

Abstract

Background and Motivation:

Several navigation assistant systems for indoor and outdoor environments have been developed for people with visual impairments. However, many of these systems do not work effectively in real time and can be arduous for users, for example requiring considerable initial training to become acquainted with the system, or lacking the portability needed for use in public environments. Moreover, many existing systems cannot be considered user-centric because they focus on the technology rather than on solving the practical difficulties users face during navigation. At the same time, we are witnessing developments in artificial intelligence and advancements in smartphones with ever-increasing computational power. This technological synergy can be effectively utilized in developing a navigation assistant for people with visual impairments. In addition, statistics from the World Health Organization (WHO) show that 200 million people with low vision do not have access to assistive products for low vision. WHO has emphasized improving access to high-quality, affordable assistive technology for everyone, everywhere. Hence, a navigation system that gives prominence to the user by adopting technological advancements is both timely and necessary.

Objectives:

The main objective of this research is to facilitate the independent navigation of people with visual impairments by developing a navigation assistant utilizing technological advancements, particularly artificial intelligence (AI), that meets their needs and requirements. Four research questions (RQs) have been framed to achieve the objective.

[RQ1] How should navigation assistants be designed to meet the needs of people with visual impairments?

[RQ2] What opportunities are there to use smartphones and artificial intelligence to help people with visual impairments navigate?

[RQ3] How can visual and non-visual cues from the environment be utilized to help people with visual impairments navigate?

[RQ4] What are the most effective ways of presenting navigation-related information to people with visual impairments?

Methods:

The research design involved three main phases. The first phase was identifying the knowledge gap, which comprised a preliminary study and a requirements analysis. In the second phase, design and development, the system was designed through a collaborative design process involving the users. The design considerations that emerged from this process informed the development of a smartphone-based navigation assistant. The third and final phase was the evaluation, which involved user-testing experiments in which people with visual impairments used the navigation assistant. Both quantitative and qualitative methods were used in all three phases for data collection, evaluation, and analysis.

Contributions:

The research identifies the preferences and requirements of users during navigation. It contributed a smartphone-based navigation assistant named DeepNAVI that leverages deep learning. Moreover, an extensive evaluation of the navigation assistant was conducted to understand the varying perceptions of people with visual impairments toward smartphone navigation assistants. In addition, the research has made contributions across several scientific areas, including AI, assistive technology, and human-mobile interaction, such as the development of a lightweight scene recognition model, user studies on output-modality preferences, and distance estimation methods deployable on smartphones.

Conclusion:

The research found that, without any additional sensors, peripheral devices, or external data networks, a smartphone alone can serve as an assistant that helps people with visual impairments navigate. It also reveals that when users' needs and preferences are prioritized, a navigation assistant becomes more accessible and user-friendly, enabling more users to navigate with ease and confidence. The knowledge contributed to the scientific community through this research could be useful for further work in this domain.