INTERACT 2005 will feature three keynote speakers:
Sketching and Experience Design
Among others, Hummels, Djajadiningrat and Overbeeke (Knowing, Doing and Feeling: Communication with your Digital Products. Interdisziplinäres Kolleg Kognitions- und Neurowissenschaften, Günne am Möhnesee, March 2-9 2001, 289-308) have expressed the notion that the real product of design is the resultant “context for experience” rather than the object or software that provokes that experience. This closely corresponds to what I refer to as a transition in focus from a materialistic to an experiential view of design. Paraphrasing what I have already said, it is not the physical entity or what is in the box (the “material” product) that is the true outcome of the design process. Rather, it is the behavioural, experiential and emotional responses that come about as a result of its existence and use in the “wild”.
Designing for experience comes with a whole new level of complexity. This is especially true in this emerging world of information appliances, reactive environments and ubiquitous computing, where, along with those of their users, we have to factor in the convoluted behaviours of the products themselves. Doing this effectively requires both a different mind-set and different techniques.
This talk is motivated by a concern that, in general, our current training and work practices are not adequate to meet the demands of this level of design. This is true for those coming from a computer science background, since they do not have sufficient grounding in design, at least in the sense that would be recognized by an architect or industrial designer. Conversely, those from the design arts, while they have the design skills, do not generally have the technical skills to adequately address the design issues relating to the complex embedded behaviours of such devices and systems.
Hence, in this talk, we discuss the design process itself, from the perspective of methods, organization, and composition. Central to our approach is the notion that sketching is a fundamental component of design, and is especially critical at the early ideation phase. Yet, due to the temporal nature of what we are designing, conventional sketching is not – on its own – adequate. Hence, if we are to design experience or interaction, we need to adopt something that is to our process what traditional sketching is to the process of conventional industrial design.
It is the motivation and exploration of such a sketching process that is the foundation of this presentation.
Intelligent Architecture: Embedding Spaces with a Mind for Augmented Interaction
Sensing Places and MIT
Our society’s modalities of communication are rapidly changing: we divide our activities between real and digital worlds, and our daily lives are characterized by our constant access to and processing of a vast quantity and variety of information. These transformations of our lifestyle demand both a new architecture and new interaction modalities that support the new as well as the old ways of communicating and living.
As a consequence of the prevalent role of information in today’s society, architecture is presently at a turning point. Screens are everywhere, from the billboards which dot the contemporary urban cityscape, to the video walls which welcome us in the entry halls of corporate headquarters buildings, to our desktop computer monitor at home, the PDA in our pocket, or the tiny private-eye screens of wearable computers. Wearable computers are starting to transform our technological landscape by reshaping the heavy, bulky desktop computer into a lightweight, portable device that is accessible to people at any time. Computation and sensing are moving from computers and devices into the environment itself. The space around us is instrumented with sensors and displays, and this tends to reflect a widespread need to blend together the information space with our physical space. "Augmented reality" and "mixed reality" are the terms most often used to refer to this type of media-enhanced interactive space. The combination of large public and miniature personal digital displays, together with distributed computing and sensing intelligence, offers unprecedented opportunities to merge the virtual and the real, the information landscape of the Internet with the urban landscape of the city, and to turn animated digital media into public installations and storytellers, also by means of personal wearable technology.
To meet the challenges of the new information- and technology-inspired architecture we need to think of the architectural space not simply as a container but as a living body endowed with sensors, actuators, and a brain (a mind): a space capable of assisting people in the course of their activities within it.
On the basis of my work and research I will argue that intelligent architecture needs to be supported by three forms of intelligence: perceptual intelligence, which captures people's presence and movement in the space in a natural and non-encumbering way; interpretive intelligence, which "understands" people's actions and is capable of making informed guesses about their behavior; and narrative intelligence, which presents us with information, articulated stories, images, and animations, in the right place, at the right time, all tailored to our needs and preferences.
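As an illustration only — the names, events, and heuristics below are hypothetical, not the author's actual platforms — the three forms of intelligence can be pictured as successive stages of a pipeline, each consuming the previous stage's output:

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    """A raw observation from the instrumented space (hypothetical format)."""
    position: tuple   # (x, y) location of a detected person
    timestamp: float

def perceptual(events):
    """Perceptual intelligence: non-encumbering capture of presence and movement.
    Here simply reduced to a trajectory of positions."""
    return [e.position for e in events]

def interpretive(trajectory):
    """Interpretive intelligence: an informed guess about behaviour.
    A deliberately trivial heuristic: staying in one spot signals interest."""
    return "lingering" if len(set(trajectory)) == 1 else "passing through"

def narrative(behaviour):
    """Narrative intelligence: the right content, in the right place, at the right time."""
    return {"lingering": "play detailed exhibit story",
            "passing through": "show ambient teaser"}[behaviour]

# A visitor who stands still in front of an exhibit for five ticks:
events = [SensorEvent((3, 4), t) for t in range(5)]
print(narrative(interpretive(perceptual(events))))  # -> play detailed exhibit story
```

The point of the sketch is the layering, not the heuristics: each stage can be made arbitrarily sophisticated (vision-based tracking, probabilistic behaviour models, media selection) without changing the overall structure.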
This talk will describe and illustrate a series of models, technological platforms and installations the author developed originally at the MIT Media Lab (1994 to 2002) and later commercially for Sensing Places (2003 to 2005). They contribute to defining new trends in architecture that merge virtual and real spaces, and are currently in the process of reshaping the way we live and experience the museum, the home, the theater, and the modern city.
The Future of Web Interfaces
The Web took the world by storm, and as a result developed rapidly in many directions. However, it still exhibits many aspects of its early development, such as its visual and computer-screen orientation. But the Web is still developing rapidly: there are now more browsers on mobile telephones than on desktops, and there is a vast diversity in types of devices, in types and orientations of screens, and in screen sizes (in number of pixels) and resolutions (in dpi).
This diversity cannot be addressed just by keeping a list of all the possible devices, or even a list of the most-used ones, and producing different sites for them: the complexity would be unmanageable, and once sites started turning away browsers and devices they didn't know, the browser makers responded by disguising themselves to such sites as other browsers.
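The disguising arms race is easy to reproduce. Because early sites sniffed for "Mozilla", essentially every modern browser still announces itself as Mozilla, and Chrome additionally claims to be Safari. The snippet below (the detection function is a deliberately simplistic illustration of the practice being argued against, not a recommended technique) shows how a list-based sniffer misfires:

```python
# Real-world user-agent strings (abbreviated): every browser claims to be
# Mozilla, and Chrome also claims to be Safari, precisely to get past sniffers.
USER_AGENTS = {
    "Firefox": "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0",
    "Chrome":  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
               "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
}

def naive_sniff(ua):
    """The kind of list-based detection the text argues against."""
    if "Safari" in ua:
        return "Safari"
    if "Mozilla" in ua:
        return "Netscape/Mozilla"
    return "unknown"

for name, ua in USER_AGENTS.items():
    print(name, "detected as", naive_sniff(ua))
# Chrome is detected as "Safari" and Firefox as "Netscape/Mozilla":
# the list can never be trusted, which is the case for device-independent design.
```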
On top of this diversity there is also the diversity required for accessibility. Although providing access for the visually impaired is an important reason for accessibility, we are all more or less visually impaired at one time or another. When displaying an application on a projector screen at a conference or meeting, the whole audience will typically be visually impaired in comparison to someone sitting behind a computer screen. The existence of separate so-called "Ten-foot Interfaces" (for people controlling their computers by remote control from an armchair ten feet away) demonstrates that the original applications are not designed for accessibility. Furthermore, Google (and all other search engines) is blind, and sees only what a blind user sees of a page; as the webmaster of a large bank has remarked, "we have noticed that improving accessibility increases our Google rating".
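The observation that a search engine sees only what a blind user sees can be made concrete with a toy text extractor (the page fragments and the bank name below are invented for illustration). For an image with no alternative text, both the crawler and the screen reader receive nothing at all:

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Roughly what a crawler or screen reader gets from a page:
    the text content, plus any alt text on images."""
    def __init__(self):
        super().__init__()
        self.seen = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.seen.append(alt)

    def handle_data(self, data):
        if data.strip():
            self.seen.append(data.strip())

inaccessible = '<h1><img src="logo.png"></h1>'
accessible   = '<h1><img src="logo.png" alt="Example Bank"></h1>'

for page in (inaccessible, accessible):
    parser = TextOnly()
    parser.feed(page)
    print(parser.seen)
# [] versus ['Example Bank']: the accessible markup is the only version
# that a blind user -- or a search engine -- can read at all.
```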
The success of the Web has turned the browser into a central application area for the user, and you can spend most of your day working with applications in the browser: reading mail, shopping, searching your own disk drive. The advent of applications such as Google Maps and GMail has focussed minds on delivering applications via the web, not least because it eliminates the problems involved with versioning: everyone always has the most recent version of your application. Since Web-based applications have benefits for both user and provider, we can only expect to see more of them in the future.
The Web Interfaces landscape is in turmoil at the moment. Microsoft has announced a new markup language and vector graphics language for the next version of Windows; probably as a response Adobe has acquired Macromedia and therefore Flash; W3C have standards for applications in the form of XForms, XHTML and SVG and are working on 'compound documents'; and other browser manufacturers are calling for their own version of HTML.
What are we to make of these different approaches? Are they conflicting? Have any addressed authorability, device-independence, usability or accessibility? Is it even possible to make accessible applications? HTML made creating hypertext documents just about as easy as it could be; do any of the new approaches address this need for simplicity, or has power been irretrievably returned to the programmers?
This talk discusses the requirements for Web Applications, and the underpinnings necessary to make Web Applications follow in the same spirit that engendered the Web in the first place.