Multimodal Mobile Guide for Blind Users
Needs or problems
In most Western countries, legislation adopting accessibility guidelines has been passed. However, the technical solutions proposed so far mainly focus on supporting users who access Web applications through desktop interfaces. Recent technological advances in mobile devices offer a number of interesting opportunities for disabled people to improve their quality of life and social interactions. There is a need to identify solutions for disabled users when they are on the move and want to carry out important social activities (such as visiting museums). For these reasons we believe that mobile technologies can represent new opportunities for all, especially for users with special needs. In this regard, research should further investigate how mobile devices, such as PDAs or smart phones, can be exploited to build new applications that support various activities of daily life.
In order to make mobile solutions more accessible and usable, innovative multimodal techniques can be adopted. In particular, interaction based on the combined use of voice and gesture can provide useful new solutions for mobile technology. This is particularly important for users with special needs, such as blind or low-vision people. The goal of this project is to investigate how a multimodal mobile solution can support activities performed by a specific group of users: blind and low-vision users. In order to focus on a specific case study, we will consider a museum visit. In such a situation, as users move around, contextual details can make the visit considerably more meaningful.
The goal of this project is to design and implement a multimodal guide for blind users. The system will be developed for a mobile device so as to support location awareness and usable, accessible interaction for this type of user. Novel solutions will be designed to support blind users’ interactions. In particular, we plan to exploit gestures with mobile devices to complement vocal interaction. The aim is to allow users to move freely and ask for information at any time. The type of answer will depend on the user’s location and on preferences inferred from their previous behaviour. The main requests, and particularly detailed ones, will be entered vocally, with the possibility of controlling the output through small gestures in order to move forward/back or to switch between levels of detail. Such gestures will be detected through accelerometers connected to the mobile device and are suitable for blind users, who cannot exploit the visual channel to issue such commands. The specific solutions integrating gestural and vocal interaction will be particularly useful for blind people. In order to better define requirements and advanced solutions, we will focus on a specific application case. The application will be developed for accessing museum information by a blind mobile visitor, but the solution will be structured in such a way that it can be easily adapted to other similar applications (such as support for shopping or for moving around a complex building). The resulting system will be tested in collaboration with Unione Italiana Ciechi (the Italian Association for the Blind), which has agreed to provide a number of blind participants for the usability tests. In addition, a young blind woman with a Ph.D. in computer science, working in the group proposing this research, will be actively involved in this project and will be supported by the fellowship.
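To make the gesture-control scheme concrete, the sketch below shows one possible way of mapping raw accelerometer samples to the forward/back and level-of-detail commands described above. All names, thresholds, and the tilt-to-command mapping are our own illustrative assumptions, not a committed design:

```python
# Illustrative sketch: mapping accelerometer tilt samples to guide commands.
# Thresholds, function names, and the command set are hypothetical assumptions.

TILT_THRESHOLD = 4.0  # tilt acceleration (m/s^2) along an axis before we treat it as a gesture

def classify_gesture(ax, ay, az):
    """Classify one accelerometer sample (m/s^2) into a guide command.

    Tilting the device right/left selects the next/previous description;
    tilting it away from/toward the user changes the level of detail.
    Returns "next", "back", "more_detail", "less_detail", or None.
    """
    if ax > TILT_THRESHOLD:
        return "next"
    if ax < -TILT_THRESHOLD:
        return "back"
    if ay > TILT_THRESHOLD:
        return "more_detail"
    if ay < -TILT_THRESHOLD:
        return "less_detail"
    return None  # device roughly level: no command


class GuideState:
    """Tracks the visitor's current artwork description and detail level."""

    def __init__(self, n_items, n_detail_levels=3):
        self.item = 0          # index of the current description
        self.detail = 0        # 0 = short summary, higher = more detail
        self.n_items = n_items
        self.n_detail_levels = n_detail_levels

    def apply(self, command):
        # Clamp at the ends rather than wrapping, so a repeated gesture is safe.
        if command == "next":
            self.item = min(self.item + 1, self.n_items - 1)
        elif command == "back":
            self.item = max(self.item - 1, 0)
        elif command == "more_detail":
            self.detail = min(self.detail + 1, self.n_detail_levels - 1)
        elif command == "less_detail":
            self.detail = max(self.detail - 1, 0)
```

In a real prototype the classifier would need debouncing and calibration against the device's resting orientation; the point here is only that a small, fixed command vocabulary keeps the gestures easy to learn eyes-free.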
An important goal of this project is to investigate how an accessible and usable application based on mobile technologies can improve social integration for the blind. We therefore intend to show how a blind user can interact with a multimodal guide through voice and gesture. The evaluation process will thus focus on testing a guide prototype designed to be used as a support during a museum visit. To this end, the multimodal guide will be designed to provide dynamic, useful information that enriches the visit. We intend to carry out user testing with a number of visually impaired persons (blind and low-vision users) in order to collect feedback on the usage of the guide prototype. As our multimodal mobile application targets a museum visit, the prototype developed for a mobile device will be tested in a museum context. The Marble Museum (Carrara, Italy) has agreed to provide us with the basic digital content describing its artworks. This content will be used and manipulated by our application to support mobile visitors. A set of tasks will be assigned to the user group in order to evaluate the main features of our application. The tasks will be designed to ensure that users exercise the specific features based on gestural actions. Users will be observed during the evaluation sessions in order to gather information on how the application is used. In addition, at the end of the evaluation process, users will be given a questionnaire in order to collect specific data and impressions.
The results will be presented at international workshops and conferences in the areas of human-computer interaction, accessibility and mobile applications.
In addition, demos will be created and shown at major international events.