Multimodal Mobile Guide for Blind Users

The goal of this project is to design and implement a multimodal mobile guide for blind users. The system will run on a mobile device and will support location awareness together with usable, accessible interaction for these users. Novel interaction techniques will be designed to support blind users; in particular, we plan to combine gestures performed with the mobile device with vocal interaction, so that users can move freely and ask for information at any time. The type of answer will depend on the user's location and on preferences inferred from their previous behaviour.

The main and more detailed requests will be entered vocally, while small gestures will let users control the output, for example to move to the next or previous item or to a different level of detail. Such gestures will be detected through accelerometers connected to the mobile device and are well suited to blind users, who cannot exploit the visual channel to issue such commands. This integration of gestural and vocal interaction will therefore be particularly useful for blind people.

In order to better define requirements and advanced solutions, we will focus on a specific application case: accessing museum information as a blind mobile visitor. The solution will nevertheless be structured so that it can easily be adapted to similar applications, such as support for shopping or for navigating a complex building. The resulting system will be tested in collaboration with Unione Italiana Ciechi (the Italian Association for the Blind), which has agreed to involve a number of blind people in the usability tests. In addition, a young blind woman with a Ph.D. in computer science who works in the group proposing this research will be actively involved in the project and will be supported by the fellowship.
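As a rough illustration of how an answer could depend on the user's location and on preferences inferred from previous behaviour, the following Python sketch selects a description at the detail level the user has most often requested. All names here (UserProfile, DESCRIPTIONS, the three detail levels) are hypothetical placeholders, not the project's actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        """Hypothetical preference model: counts how often each detail level is requested."""
        level_counts: dict = field(default_factory=lambda: {0: 0, 1: 0, 2: 0})

        def record_request(self, level: int) -> None:
            self.level_counts[level] += 1

        def preferred_level(self) -> int:
            # Most frequently requested detail level so far; ties favour the overview (0).
            return max(self.level_counts, key=self.level_counts.get)

    # Descriptions indexed by location identifier, then by detail level (0 = overview).
    DESCRIPTIONS = {
        "room_3_statue": [
            "A marble statue from the second century.",
            "A Roman copy of a Greek original, restored in the eighteenth century.",
            "A detailed account of the statue's excavation and restoration ...",
        ],
    }

    def answer(location_id: str, profile: UserProfile) -> str:
        """Return the description matching the user's location and preferred detail level."""
        levels = DESCRIPTIONS.get(location_id)
        if levels is None:
            return "No information is available for your current position."
        level = min(profile.preferred_level(), len(levels) - 1)
        return levels[level]

    profile = UserProfile()
    profile.record_request(1)  # the user has previously asked for mid-level detail
    print(answer("room_3_statue", profile))  # prints the mid-level description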
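The gesture commands themselves could be recognised with simple thresholding of the accelerometer readings. The sketch below is a minimal illustration under assumed axis conventions, thresholds, and command names; a real implementation would need calibration and filtering to distinguish deliberate tilts from ordinary walking motion.

    from typing import Iterable, Iterator, Optional, Tuple

    GRAVITY = 9.81        # m/s^2; the z axis reads roughly this when the device is at rest
    TILT_THRESHOLD = 4.0  # lateral acceleration (m/s^2) treated as a deliberate tilt

    def classify_sample(x: float, y: float, z: float) -> Optional[str]:
        """Map one accelerometer reading to a navigation command, if any.

        Assumed convention: tilting right (positive x) means "next", left means
        "back", and tilting along the y axis changes the level of detail.
        """
        if x > TILT_THRESHOLD:
            return "next"
        if x < -TILT_THRESHOLD:
            return "back"
        if y > TILT_THRESHOLD:
            return "more_detail"
        if y < -TILT_THRESHOLD:
            return "less_detail"
        return None  # device roughly level: no command

    def detect_gestures(samples: Iterable[Tuple[float, float, float]]) -> Iterator[str]:
        """Emit each command once per tilt, ignoring repeats while the tilt is held."""
        previous = None
        for x, y, z in samples:
            command = classify_sample(x, y, z)
            if command is not None and command != previous:
                yield command
            previous = command

    # Synthetic trace: at rest, tilt right ("next"), back to rest, tilt forward.
    trace = [(0.1, 0.0, GRAVITY), (5.2, 0.3, 8.0), (0.0, 0.1, GRAVITY), (0.2, 6.1, 7.5)]
    print(list(detect_gestures(trace)))  # ['next', 'more_detail']

On an actual device, the (x, y, z) samples would come from the platform's accelerometer API, and the emitted commands would drive the speech output, giving blind users eyes-free control over navigation and level of detail.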