Remote Usability Evaluation

In the area of usability evaluation, the HCI Group of ISTI-C.N.R. in Pisa has focused on identifying methods and developing supporting automatic tools for remote usability evaluation. The main idea is to perform an intelligent analysis of the application logs recorded during a user test, exploiting the information contained in the task model of the application: the actual user behaviour captured in the log files is compared with the planned user behaviour described in the task model. This automatic evaluation provides the evaluator with a set of measures, including measures concerning groups of users, that help identify usability problems arising from a mismatch between how users actually perform tasks and the system task model.
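
To make the comparison idea concrete, the sketch below contrasts a logged action sequence with a planned one. It is a deliberate simplification, assuming a purely sequential plan and an illustrative log format; the actual tools reason over the full ConcurTaskTrees semantics introduced below.

```typescript
// Minimal sketch of the log/task-model comparison idea, assuming the
// simplest case: a purely sequential plan. The data shapes and task
// names are hypothetical, not the tools' actual log format.

interface LogEntry { task: string; timestamp: number; }

function compareWithPlan(plan: string[], log: LogEntry[]) {
  let next = 0;                       // index of the next planned task
  const deviations: LogEntry[] = [];  // actions outside the planned flow
  for (const entry of log) {
    if (next < plan.length && entry.task === plan[next]) {
      next++;                         // user followed the plan
    } else {
      deviations.push(entry);        // potential usability problem
    }
  }
  return {
    completed: next === plan.length, // did the user finish the task?
    deviations,                      // where behaviour diverged
  };
}

// Hypothetical session: the user tried to confirm before choosing a room.
const result = compareWithPlan(
  ["SelectDates", "ChooseRoom", "ConfirmBooking"],
  [
    { task: "SelectDates", timestamp: 0 },
    { task: "ConfirmBooking", timestamp: 4200 },
    { task: "ChooseRoom", timestamp: 6100 },
  ],
);
console.log(result.completed, result.deviations);
```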

In the remote usability method we use the ConcurTaskTrees notation (publicly available at http://giove.isti.cnr.it/tools/CTTE/) to specify task models as a hierarchical structure enriched with a number of flexible temporal relationships among tasks (concurrency, enabling, disabling, suspend-resume, order independence, optionality, …). In order to record user interactions we implemented a logging tool able to store the actions of the user during the session. The information coming from the logging tool has been enriched with multimodal information on user behaviour coming from web cams and eye-trackers. Further support has also been provided for mobile applications, in order to capture the effects on user interactions of possible interferences provoked by the surrounding context.
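
As an illustration of the kind of structure involved, the following sketch models a small CTT-style task hierarchy with a subset of the temporal operators. The interface and the example task names are hypothetical assumptions; they do not reflect the CTTE tool's actual file format or API.

```typescript
// Sketch of a ConcurTaskTrees-style task model, using a simplified
// subset of the CTT temporal operators relating sibling subtasks.
type TemporalOperator =
  | ">>"   // enabling: left task must finish before right starts
  | "|||"  // concurrency: subtasks may interleave freely
  | "[>"   // disabling: right task deactivates the left one
  | "|>"   // suspend-resume
  | "|=|"; // order independence

interface Task {
  name: string;
  optional?: boolean;          // CTT optionality
  operator?: TemporalOperator; // relates this task's children
  children?: Task[];
}

// Hypothetical fragment of a "hotel booking" task model.
const bookRoom: Task = {
  name: "BookRoom",
  operator: ">>",
  children: [
    { name: "SelectDates" },
    { name: "ChooseRoom" },
    {
      name: "ConfirmBooking",
      operator: "|||",
      children: [
        { name: "EnterPaymentData" },
        { name: "EnterGuestData" },
      ],
    },
  ],
};

// Collect the leaf (basic) tasks, i.e. the actions a log can contain.
function leafTasks(t: Task): string[] {
  return t.children?.flatMap(leafTasks) ?? [t.name];
}
console.log(leafTasks(bookRoom));
```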

Over the years, different versions of the method and the related automatic tool have been developed by the HCI group in order to cope with the evolving needs of usability evaluators. They vary in the type of application addressed and the type of results provided:

  • The first version, USINE (USer INterface Evaluator) [1], mainly addressed the use of task models for analysing user logs, without yet considering remote evaluation.
  • The next version, RemUSINE (Remote USer INterface Evaluator) [2], was developed for remotely evaluating desktop Java applications and was tested in industrial sites, providing useful information regarding its possibilities, even in comparison with other methods (e.g., video-based evaluation).
  • WebRemUSINE [3] was aimed at evaluating web applications. In this version of the tool an efficient, interoperable, client-side logging system was developed (a sketch of the idea follows this list). In addition to information regarding task performance, the Web-oriented version provides a wealth of information regarding the analysed Web pages: visited and never-visited pages, extent of scrolling and resizing, page patterns, and download and visit times. This information is accompanied by summary data regarding page content: for each page the visit time is reported together with the number of forms, links, and words, so that the evaluator can compare the visit time with the quantity of information available in the page.
  • One of the most recent versions of the tool, MultimodalWebRemUSINE [4], was aimed at exploiting the possibilities opened up by recent technologies to gather a richer set of information regarding user behaviour: the traditional graphical logs can be analysed together with the logs from webcams and portable eye-trackers.
  • The latest version of the tool, MultiDevice RemUsine [5], is aimed at remotely evaluating mobile applications. Various modules have been developed to gather contextual data about the usage of such applications in different environments, such as noise, light, network availability, and user position (see the context-sensing sketch after this list). In addition, timelines have been included to visualise the evaluation data and better support designers in analysing them.
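
For the client-side logging mentioned in the WebRemUSINE item above, a browser-based logger can be approximated with standard DOM APIs, as in the sketch below. The event set, buffering policy, and the /log-endpoint URL are illustrative assumptions, not the actual WebRemUSINE implementation.

```typescript
// Sketch of client-side interaction logging in the browser, covering
// the kinds of events the tool reports on: page visits, clicks,
// scrolling, and window resizing.
interface UiEvent { type: string; target: string; time: number; }
const buffer: UiEvent[] = [];

function record(type: string, target: string) {
  buffer.push({ type, target, time: Date.now() });
}

window.addEventListener("load",   () => record("visit",  location.pathname));
window.addEventListener("scroll", () => record("scroll", String(window.scrollY)));
window.addEventListener("resize", () => record("resize", `${innerWidth}x${innerHeight}`));
document.addEventListener("click", (e) =>
  record("click", (e.target as Element).tagName));

// Periodically ship the buffered events to a (hypothetical) collector.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon("/log-endpoint", JSON.stringify(buffer.splice(0)));
}, 5000);
```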
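
Similarly, for the contextual data mentioned in the MultiDevice RemUsine item, the sketch below samples context with standard Web APIs (network availability and user position) as stand-ins for the tool's own context-sensing modules, whose interfaces are not described here.

```typescript
// Sketch of sampling contextual data alongside interaction logs,
// so the evaluator can correlate usability problems with
// environmental interferences. Only APIs available in browsers
// are used; the real tool's native modules differ.
interface ContextSample {
  online: boolean;                    // network availability
  position?: GeolocationCoordinates;  // user position, if permitted
  time: number;
}

function sampleContext(): Promise<ContextSample> {
  return new Promise((resolve) => {
    const base = { online: navigator.onLine, time: Date.now() };
    navigator.geolocation.getCurrentPosition(
      (pos) => resolve({ ...base, position: pos.coords }),
      ()    => resolve(base), // position unavailable or denied
    );
  });
}

// Attach a context sample to a logged interaction.
sampleContext().then((ctx) => console.log("context at log time:", ctx));
```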