
Context aware hand-held devices

Esa Tuulari

VTT Electronics

TECHNICAL RESEARCH CENTRE OF FINLAND
ESPOO 2000

ISBN 951–38–5563–5 (soft back ed.)
ISSN 1235–0621 (soft back ed.)
ISBN 951–38–5564–3 (URL: http://www.inf.vtt.fi/pdf/)
ISSN 1455–0849 (URL: http://www.inf.vtt.fi/pdf/)

Copyright © Valtion teknillinen tutkimuskeskus (VTT) 2000

JULKAISIJA – UTGIVARE – PUBLISHER
Technical Research Centre of Finland (VTT), Vuorimiehentie 5, P.O.Box 2000, FIN–02044 VTT, Finland
phone internat. + 358 9 4561, fax + 358 9 456 4374

VTT Electronics, Networking Research, Kaitoväylä 1, P.O.Box 1100, FIN–90571 OULU, Finland
phone internat. + 358 8 551 2111, fax + 358 8 551 2320

Technical editing Maini Manninen

Otamedia Oy, Espoo 2000

Tuulari, Esa. Context aware hand-held devices. Espoo 2000. Technical Research Centre of Finland, VTT Publications 412. 81 p.

Keywords: context awareness, hand-held devices, wearable computers, personal technology

Abstract

The need to build devices that are more context aware has recently emerged. The motivation is to make devices easier to use on the one hand, and to decrease the information overload on the other. Context awareness should be helpful because it provides information to the device without bothering the user.

In this thesis we concentrate on the context awareness of hand-held devices. Hand-held devices have special needs in the user interface, as they are small in size and fairly weak in performance. Moreover, they should also be easy to use while walking or doing something else.

In the theoretical part of this thesis we analyze the possibilities that hand-held devices have for obtaining and exploiting contextual information. We propose use-case modeling and interaction scenarios as tools for revealing the potential uses of context awareness. We also discuss the similarities between context awareness and agents.

In the constructive part of the thesis we implement a general-purpose sensor box which we use for increasing the context awareness of an example device. In the experiment we evaluate the theoretical ideas to gain hands-on experience of the practical problems involved in increasing the context awareness of hand-held devices.

Research into context awareness has been rather limited, especially in applying theories to practice. Research prototypes that use sensor information as a basis for user interface operations have been built without much concern for the underlying principles. In this thesis we propose a theory for context awareness and show how it is used in practice.

Preface

The roots of this work, which will be submitted to the University of Oulu Faculty of Technology as part of the qualifications for the degree of Licentiate in Technology, go back several years. The work I did in the EMMI research project, funded by VTT, concentrated on hand-held devices. At that time I came across the ParcTab and Mark Weiser's writings about Ubiquitous Computing. In a later project I was involved in developing next-generation electronic water taps for Oras Ltd. At that time we did not realize that we were working with context awareness and implicit user interfaces.

VTT has funded the writing of this work, which is gratefully acknowledged.

During the last year I have worked a lot together with Vesa-Matti Mäntylä and Esko Strömmer. I would like to thank both of these gentlemen for their part of the work and also for supporting me in doing my share.

I would also like to thank the supervisors of this thesis, Prof. Juha Röning from the University of Oulu and Prof. Veikko Seppänen from VTT Electronics, for their knowledgeable advice and ever-positive attitude towards me, still very much a novice scientist.

Thanks are also due to Jukka Korhonen from VTT Electronics for commenting on an early manuscript of this thesis.

Last but not least I would like to thank my family: Ulla, Perttu, Tuuna, Jaakko and Sampo. The discussions with now 10-year-old Perttu, often in the sauna, about many and varied topics of technology deserve a special mention.

Oulu, February 2000

Esa Tuulari

Contents

Abstract ...... 3

Preface...... 4

List of symbols and abbreviations...... 7

1. Introduction ...... 9
   1.1 Scope of the research ...... 14
   1.2 Outline of the thesis ...... 15

2. Hand-held devices ...... 17
   2.1 Evolution of hand-held devices ...... 17
   2.2 Benefits of possessing hand-held devices ...... 21
   2.3 Problems of hand-held devices ...... 22

3. Context Awareness ...... 24
   3.1 Self-contained context awareness ...... 24
   3.2 Infrastructure-based context awareness ...... 26
   3.3 Measuring the user ...... 28
       3.3.1 Gesture Recognition ...... 28
       3.3.2 Biometric identification ...... 28
       3.3.3 Affective computing ...... 29
       3.3.4 Direct electric control ...... 29
   3.4 Summary ...... 30

4. Increasing context awareness ...... 31
   4.1 Interaction with hand-held devices ...... 31
   4.2 Sources of contextual information ...... 34
   4.3 Analysis of existing products ...... 35
   4.4 Use cases and context awareness ...... 38
   4.5 The point of view of agents ...... 43

5. Sensor box ...... 47
   5.1 Design and conduct of the experiment ...... 47
   5.2 Requirements for the sensor box ...... 48
   5.3 Design of the sensor box ...... 49
   5.4 Expectations for the experiment ...... 52

6. Aware phone experiment ...... 58
   6.1 Contexts of mobile phones ...... 58
   6.2 Identifying contexts with the sensor box ...... 64
   6.3 Presenting the context in the experiment ...... 68
   6.4 Results of the experiment ...... 69

7. Results and Discussion...... 71

8. Conclusions...... 74

References...... 76

List of symbols and abbreviations

AD Analog-to-Digital Converter

CA Context Awareness

CVI C for Virtual Instrumentation, a National Instruments programming environment

EEG Electroencephalograph

EMG Electromyography

EOG Electro-oculograph

EU European Union

GSM Global System for Mobile Communications

GUI Graphical User Interface

HCI Human-Computer Interaction

HMM Hidden Markov Model

MIT Massachusetts Institute of Technology

NI-IO National Instruments Input Output Card

PCMCIA Personal Computer Memory Card International Association, same as PC Card

PDA Personal Digital Assistant

PWR Operating voltage

TTT Things That Think, a research consortium at MIT

SGML Standard Generalized Markup Language

TEA Technology for Enabling Awareness

UI User Interface

WIMP Windows, Icons, Mouse, Pointing

1. Introduction

The foundations for both context awareness and modern hand-held devices were laid at the end of the 1980s. Researchers at Xerox PARC started to study the evolution of computers and found it to be analogous to the development of electric motors (Weiser 1991). First there were many machines that used the same motor, then there was one motor per machine, and finally many motors in one machine. It is said that in a modern, well-equipped car there are tens of motors. According to the researchers, the same trend will happen with computers.

To a wider audience the idea of "computers everywhere" became familiar as Mark Weiser presented his vision about Ubiquitous Computing in Scientific American (Weiser 1991). In order to experiment with ubicomputing (a shorthand version of ubiquitous computing) Weiser, together with his colleagues, built the ParcTab (Want et al. 1996). It was a hand-size computer terminal which made it possible to study some of the key technologies of ubicomputing, namely technology for PDA devices, communication, and applications for the future office. Other important issues in ubicomputing are privacy, power consumption, location awareness and interaction (Demers 1994, Weiser 1993).

Bell and Gray (1998) look back over 50 years of computing to 1947, and forecast the next fifty years till 2047. They predict that there will be more and more computers around us. One of the key problems they see in the penetration of ever-smaller computers in all sorts of devices is their ability to connect to the physical world.

Two research directions have concentrated on the effects of mobility on computing devices. Mobile computing, the use of portable computers capable of wireless communication, involves communication, mobility and portability (Forman & Zahorjan 1994). Nomadic computing concentrates more on the effects of information access and quality-of-service issues (Kleinrock 1995, Bagrodia et al. 1995). Thus it can be seen as more communication-oriented, whereas mobile computing is a more general term that covers all aspects of mobile terminals. In nomadic computing, two questions about the mobile terminal must be answerable, namely "what is it?" and "where is it?". Answers to these questions make it possible to deliver relevant information effectively to the terminal (Schnase et al. 1995).

One extreme example of portable devices is wearable computers. The problems that arise as computers are made wearable offer interesting and challenging research opportunities (Bass et al. 1997). They are too heavy, they are too difficult to use, and they are still almost useless. The heavy weight, at present, is caused mainly by batteries. However, batteries are becoming more efficient and the electronics is becoming less power-hungry. Measuring performance (MIPS) against power consumption (Watts) is a good indication of this trend (Weiser & Brown 1998). One novel possibility to overcome the problems with batteries is to use parasitic power taken from the user (Starner 1996). User interfaces of wearable devices are often taken directly from PC workstations, which makes them very difficult to use while on the move. Speech recognition and hand-written text recognition have been used to alleviate these problems. However, there is plenty of room for entirely new user interface paradigms that could make the computer easy to use while also doing something else. The debate about applications for wearable computers has not been settled yet. Some believe that each type of wearable computer is useful only in a limited application area, while others are waiting for the killer application that would make general purpose wearable computers compelling for customers (Rhodes 1997).

Present wearable computers are scaled-down versions of office workstations. Another trend affecting the development of hand-held devices is that smaller, portable and highly mobile products are increasing in performance and gaining more and more features. There are wristwatches that include, for example, a compass and an altimeter (http://www.suunto.fi/wristop). Some models even have memory for birthdays and phone numbers, which are wirelessly loaded from a PC screen with a bar code interface (http://www.timex.com/html/data_link.html). Top models of mobile phones are more and more like laptop PCs with a phone than vice versa. They have good communication capabilities, but they still have problems with their user interfaces. Moreover, they do not yet have many applications to make them top-selling products.

Researchers at the Olivetti & Oracle Research Center have continued to study the future office that was also addressed by ParcTab. Their hand-held device, called the Active Badge, is a derivative of an ordinary badge. One of their goals is to develop the Smart Office, where users carrying Active Badges would have access to the information of the office as well as to all the equipment (Want & Hopper 1992).

Mark Weiser himself has lately turned his interest to the development of more efficient user interfaces under the term Calm Technology (Figure 1.1) (Weiser 1998a, Weiser & Brown 1998), after admitting that the ubiquitous computing experiment did not succeed in creating invisible user interfaces. "Our focus [in ubicomputing] was on invisibility, at disappearing the 'computer' to let the pure human interaction come forward. I must admit to you, largely we failed. ... we did not succeed at creating the invisibility we craved. We did not because we did not appreciate the enormity of the challenge, primarily the challenge of a proper model of the human being for whom we were designing.", confessed Weiser in a recent keynote speech (Weiser 1998b).

[Figure 1.1: concept map relating mobile computing concepts. Ubiquitous Computing (1991), with tabs, pads and boards, wireless communication and "computers everywhere", links to Virtual Reality techniques, Wearable Computers, PDA devices, Mobile Computing, piconetworks and PANs, Things That Think, Nomadic Computing, user interface work, and Calm Technology.]

Figure 1.1. Relation of mobile computing concepts.

Negroponte has a similar vision, as he compares the ideal interface to a butler (Negroponte 1997). The butler knows all the things that are happening in the house. He also knows who else is present and the habits of the people living in the house. The interface to the butler is very casual, as he knows almost beforehand what services are expected. To achieve this level of casualness, devices should be much more aware of their surroundings than they are today.

At MIT, researchers have taken another view of ubicomputing by concentrating on embedding computational intelligence into everyday objects such as toys, clothing, furniture and even balloons (Resnick 1996, Hawley et al. 1997). Resnick says that "to some, ubiquitous computing means that computation disappears". They have taken a broader view of ubiquitous computing, including also personal digital assistants and hand-held devices (Resnick et al. 1996), which obviously do not disappear. Resnick also emphasizes that they want to affect the way that people think and to develop things to think with.

One of the unifying elements in all research directions that have addressed ubicomputing is how to make the devices more aware of their surroundings. In Smart Office, devices have to know where individual workers are and what they are doing. In TTT devices should know with whom they are affiliated and who else is nearby. In modern HCI research the goal is to develop interfaces that implicitly know what the user wants to do, which is only possible if the device has some sort of awareness of the user’s current actions and desires.

There are many computer science communities that work with these problems (Table 1.1). The points of view vary from one community to another and there are important questions that have not been resolved yet.

Table 1.1. Definition of ubiquitous computing and related concepts.

Ubiquitous computing: Computers everywhere. Making many computers available throughout the physical environment, while making them effectively invisible to the user. Ubiquitous computing is held by some to be the Third Wave of computing. The First Wave was many people per computer, the Second Wave was one person per computer. The Third Wave will be many computers per person. Three key technical issues are: power consumption, user interface, and wireless connectivity. The idea of ubiquitous computing as invisible computation was first articulated by Mark Weiser in 1988 at the Computer Science Lab at Xerox PARC. (Weiser 1991, 1998a)

Mobile computing: Use of portable computers capable of wireless networking. Essential properties: communication, mobility and portability (Forman & Zahorjan 1994).

Nomadic computing: Nomadicity refers to the system support needed to provide a rich set of computing and communication capabilities and services to the nomad as he or she moves from place to place in a transparent, integrated and convenient form. Nomadic computing differs from conventional operation in the huge variability in connectivity to the rest of one's computing environment (Bagrodia et al. 1995).

Wearable computers: The next generation of portable computers. Worn on the body, they provide constant access to computing and communication resources. Wearable computers are typically always on and always accessible, composed of a belt or backpack PC, a head-mounted display, wireless communication hardware and an input device such as a touchpad or chording keyboard (Bass et al. 1997, Rhodes 1997).

Affective computing: Computing that relates to, arises from, or deliberately influences emotions. Also, giving the computer the ability to recognize and express emotions, developing its ability to respond intelligently to human emotions (Picard 1997).

Context awareness: Knowledge of the environment, location, situation, user, time and current task. Context awareness can be exploited in selecting application or information, adjusting communication and adapting the user interface according to the current context (Schilit et al. 1994, Schmidt et al. 1998).

Calm technology: The goal of creating technology that honors the model of human beings. Uses the iceberg model, where the part above the surface is the center (conscious) and the part below the surface is the periphery (unconscious). Technology should mainly stay at the periphery, out of the way, making the environment calmer (Weiser & Brown 1998, Weiser 1998b).

1.1 Scope of the research

Our work is based on the basic assumptions of ubicomputing discussed above and concentrates on the effects this new technology has on hand-held devices and their users. We think that the main difference between the new type of devices and previous ones is in the way they are used and the services they offer to the user. The question is no longer about increasing the size of the display or adding more colors to it. The devices we are studying are designed based on knowledge of human skills and needs. The ultimate goal is that the computer and the human user complement each other. The ideas of the Wearable Remembrance Agent (Rhodes 1997) and the Human Memory Prosthesis (Lamming et al. 1994) offer a hint of what is to be expected. Our work, as presented in this thesis, concentrates more on HCI issues, where the developments lead to calm technology and user interfaces that are casual to deal with (Negroponte 1996). Seen from a wider perspective, both of these goals share a common objective: to make computers adapt to human users and not vice versa.

The specific research questions that we address are as follows:
- What is the role of context awareness in personal technology devices?
- Why is context awareness of personal technology important?
- What are the contexts that are associated with hand-held devices?
- How can hand-held devices, with limited resources, identify these contexts?

1.2 Outline of the thesis

In Chapter 2 we analyze the area of hand-held devices and highlight some of the problems associated with them. In Chapter 3 we take a closer look at the results obtained by adding awareness to hand-held devices. Special attention is given to the potential of awareness to eliminate some of the problems stated in Chapter 2.

Chapter 4 proposes a new interaction model and tools for analyzing the properties of hand-held, or more generally speaking, personal technology devices. In Chapter 5, we present the sensor box that we have built to serve as a general-purpose prototype for carrying out hands-on experiments with context awareness. Experiments done with a mobile phone are described in Chapter 6. Discussion of results, subjects for further study, and conclusions are presented in the last two chapters, 7 and 8.

The contribution of this thesis is divided into three parts (Figure 1.2): 1) a proposition of means for increasing the context awareness of hand-held devices, 2) the implementation of a general-purpose sensor box, and 3) an evaluation of the ideas with an experiment.

[Figure 1.2: structure of the thesis. Introduction: background and introduction to the area of the thesis. Theoretical part: analysis of the current situation in hand-held devices and context awareness; extending the definition of context awareness and proposing tools for finding ways to increase it. Constructive part: requirements for the sensor box prototype and its implementation; evaluation with the aware phone experiment. Discussion and conclusions: results and subjects for further study.]

Figure 1.2. Content of the thesis work.

2. Hand-held devices

2.1 Evolution of hand-held devices

The development of hand-held devices has proceeded side-by-side with research prototypes and commercial products. In retrospect it is difficult to say which one has been in the lead. Commercial pressures more than academic interest have been the driving force for more efficient batteries and easier-to-use user interfaces, for example. Topics that have been addressed by researchers include quality of service, adaptive user interfaces and distributed computing, to name a few (Imielinski & Korth 1996).

The most influential research prototypes in the personal technology area have been ParcTab (Figure 2.1, http://www.ubiq.com/parctab/pics.html) and Active Badge (Figure 2.2, http://www.uk.research.att.com/thebadge.html). They have some common features, such as location awareness, although their underlying motivation is quite different.

Figure 2.1. ParcTab.

The goals of the ParcTab project were to design a mobile device, the ParcTab, that enables personal communication; to design an architecture that supports mobile computing; to construct context-sensitive applications that exploit this architecture; and to test the system in the office community (Want et al. 1995).

The ParcTab does not include any sensors. Location awareness is obtained by identifying the transmitter with which the Tab is communicating. There is one transmitter per room, which makes it easy to determine the location of the Tab to the accuracy of one room (Schilit et al. 1994). Continuous communication between the Tab and the server also makes it possible for the server to know the location of all the Tabs.
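This room-resolution scheme is essentially a lookup from transmitter identity to place. The following minimal C sketch illustrates the idea; the IDs, room names and function names are invented for illustration and are not taken from the ParcTab implementation.

    #include <stddef.h>
    #include <stdio.h>

    /* Room-resolution location: with one transmitter per room, the ID
     * of the transmitter the device currently hears identifies the
     * room. All IDs and names below are hypothetical. */
    typedef struct {
        int         transmitter_id;
        const char *room;
    } RoomEntry;

    static const RoomEntry rooms[] = {
        { 0x11, "Office 101"   },
        { 0x12, "Office 102"   },
        { 0x20, "Meeting room" },
    };

    static const char *locate(int heard_id)
    {
        for (size_t i = 0; i < sizeof rooms / sizeof rooms[0]; i++)
            if (rooms[i].transmitter_id == heard_id)
                return rooms[i].room;
        return "unknown";
    }

    int main(void)
    {
        printf("Current room: %s\n", locate(0x20));  /* -> Meeting room */
        return 0;
    }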

The user interface consists of a rather small display and just three buttons. The design of the device is symmetrical, which is exploited in such a way that the orientation of the display can be rotated 180 degrees with a menu selection. This makes the Tab suitable for both right- and left-handed users (Want et al. 1995).

Figure 2.2. Four types of Active Badges.

The Active Badge has two features that are related to context awareness. Firstly, it is location aware, either at room resolution or, with a research prototype, at more precise resolution. Secondly, power-down operation is activated when the built-in light sensor detects that the badge is in darkness (perhaps in a drawer). Location awareness was utilized from the very beginning, as the initial application of the Active Badge was to assist the telephone receptionist in locating people in the office (Want et al. 1992).
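The light-sensor feature amounts to a threshold with a persistence requirement. The C sketch below illustrates the idea under assumed numbers; the threshold, timing and function names are ours, not details of the actual Active Badge firmware.

    #include <stdio.h>

    #define DARK_THRESHOLD 10   /* raw sensor units regarded as "dark" */
    #define DARK_SECONDS   60   /* darkness must persist this long     */

    static int dark_time = 0;

    static void enter_power_down(void)  /* stand-in for the real sleep call */
    {
        printf("entering power-down mode\n");
    }

    /* Called once per second with the latest light-sensor reading. */
    static void tick(int light_level)
    {
        if (light_level < DARK_THRESHOLD) {
            if (++dark_time == DARK_SECONDS)
                enter_power_down();
        } else {
            dark_time = 0;      /* any light resets the count */
        }
    }

    int main(void)
    {
        for (int s = 0; s < 120; s++)
            tick(3);            /* simulate two minutes in a drawer */
        return 0;
    }

The persistence requirement keeps a brief shadow, such as a hand passing over the badge, from powering the device down.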

Terminology

There are several terms that are commonly used when discussing mobile devices, namely: mobile terminal, hand-held device and personal technology. Mobile terminal refers to a device that has a wireless connection to a server machine or to a network. A mobile phone is a good example. Hand-held device means a device that is carried and operated in the hand. It can have wireless communication capabilities, but it is also operational without them. PDA devices fall into this category. Personal technology refers to all modern electronic equipment that is carried around by people. This is the largest category of products, including for example Walkmans and heart rate monitors.

The use of the terms is usually quite liberal. The definitions of the terms are also somewhat overlapping. In Table 2.1 we position some of the existing personal technology devices. A more detailed analysis of these devices is presented in Chapter 4.

Table 2.1. Defining some of the most well known products. xxx = first grouping, xx = second grouping, x = third grouping, empty = does not belong to group.

                        Mobile terminal   Hand-held device   Personal technology
    Wristwatch                                  xxx                  xx
    Heart-rate monitor                          xx                   xxx
    Mobile phone               xxx              xx                   x
    GPS-navigator              x                xx                   xxx
    Walkman                                     xx                   xxx
    PDA                        x                xx                   xxx

There is a continuous debate on how these devices will evolve in the future. Some have predicted that some or all of these devices will be integrated into wristwatches. The integration of wristwatches and mobile phones would become a wristfone (Pescovitz 1998). More revolutionary thinkers predict that future products will be integrated into our clothes (Gershenfeld 1996). At the beginning of 1999, Professor Kevin Warwick implanted a microchip into his arm, being the first human being to do so. Still there are researchers who believe that instead of one integrated device there will be a vast variety of devices, each specific to some limited task (Weiser 1991).

Some integration has already taken place, as the heart-rate monitor usually includes a watch and some of the more expensive mobile phones include a GPS-navigator (http://www.benefon.com). The latest news even tells us about wristwatches that include a GPS-navigator (http://www.casio.com/gps/index.htm).

We will not discuss the future of these devices from the miniaturization or integration point of view any further, as it is not in the scope of this thesis. We are more interested in the services that these devices offer to the user. We are also interested in the problems that are associated with the usage of hand-held devices.

2.2 Benefits of possessing hand-held devices

Fogg has studied computer functions and found three different types of persuasive affordances that they offer (Fogg 1998). He divides the essence of the functions offering these affordances into increasing capabilities, providing experience and creating relationships.

There are two main reasons for increasing capabilities. The first is to overcome some, usually physical, disabilities. The second is to augment the capabilities of a normal healthy person (Figure 2.3).

[Figure 2.3: chart of capability levels (reduced, normal, augmented) for three cases: a person with disabilities, a normal person, and a person with personal technology.]

Figure 2.3. Normalizing versus augmenting human capabilities.

In both cases we can divide the use of the device into two types, regular and occasional. The use of binoculars is clearly occasional and augmentative, whereas the use of a pacemaker is regular and normalizing.

One interesting phenomenon is that the level of capability we consider normal is steadily increasing. In many cultures, people carrying no devices with them are nowadays more abnormal than those carrying one or two personal technology devices. The effect of this trend is quite considerable. For example, the use of a mobile phone increases one's communication capabilities drastically compared to the situation where mobile phones do not exist.

Some of the negative effects of this development are discussed by Araya (1995). He warns us about a new kind of otherness that arises as people are not mentally in the same place as they are physically. On a larger scale, this critique is related to the well-known critique of technology dependence presented by Orwell.

2.3 Problems of hand-held devices

One of the key problems with personal technology devices is their user interface. There is usually no room for a normal WIMP interface, and if there is, the small size of the keyboard and display makes the use of the device quite cumbersome. In the HCI research community this problem has been studied very actively during the last few years. The belief that speech recognition will solve the problem is common. However, some researchers think that talking to the device is so unnatural that it cannot be the solution and that new paradigms are needed (Dam 1997).

One possibility that is emerging is to make the device more aware of its current environment and operational state and restrict the interface accordingly. This decreases the amount of interaction needed with the user, reducing the need for complicated interaction gadgets and eventually making them unnecessary. Devices should not only be easy to use, as was the goal some years ago; there could be devices that need no use at all.

One example of this sort of device is an electronic water tap that notices hands placed below it and opens the valves to give water. After the hands are taken away, it closes the valves automatically. There is no need for the user to operate the tap; he or she only needs to wash his or her hands. An electronic water tap is, of course, not a hand-held device, but it is adequate to illustrate the development that is already ongoing in modern user interfaces.
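The tap's whole user interface reduces to one sensor reading driving one actuator. A minimal C sketch follows, assuming a proximity sensor that returns larger values when hands are near; the threshold and driver calls are invented for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    #define HANDS_NEAR 50        /* proximity units meaning "hands present" */

    static bool valve_open = false;

    static void set_valve(bool open)  /* stand-in for the actuator driver */
    {
        if (open != valve_open) {
            valve_open = open;
            printf(open ? "valve opened\n" : "valve closed\n");
        }
    }

    /* Called periodically with the latest proximity reading: the
     * implicit interface is nothing more than this rule. */
    static void tap_control(int proximity)
    {
        set_valve(proximity > HANDS_NEAR);
    }

    int main(void)
    {
        tap_control(80);   /* hands under the tap -> valve opened */
        tap_control(5);    /* hands removed       -> valve closed */
        return 0;
    }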

The user interface of a water tap is rather simple. In the experimental part of this work we will show how these ideas accommodate more complicated situations.

We think that there is a paradigm shift going on in the way computers and users interact. There are several reasons for this:
1) The office computer model with efficient GUI and WIMP interfaces is becoming outdated because "computers are many sizes and shapes" (Weiser 1993).
2) The increased performance of modern processors makes it possible to add user interface software to even the smallest computerized devices.
3) New wireless communication technologies, combined with the increased use of the Internet, make continuous information access almost mandatory.
4) Rapid development in basic sensor technology, driven mainly by the automotive industry, makes small and cost-effective sensors available for new application areas.
5) Increased understanding of both human and computer interaction suggests that letting humans be humans and computers be computers is the best way to proceed (Norman 1998).

3. Context Awareness

In the previous chapter we discussed some of the most advanced modern hand-held devices. We noticed that context awareness has been used almost solely as location awareness to enhance the user interface. This minimal use of context awareness is due to the fact that the devices presented were not designed to be context aware. The premise for exploiting context awareness is to first obtain information from the context. In this chapter we take a close look at research which includes increasing context awareness as one of its basic research problems. This takes the usefulness of context awareness beyond location awareness and HCI.

Having said that, we have to admit that location is often the most valuable form of context awareness, especially as far as mobile devices are concerned. HCI, on the other hand, was identified as one of the most problematic areas of mobile devices in the previous chapter. These facts suggest that using location awareness to improve HCI is a good starting point for exploiting context awareness in highly portable hand-held devices.

We divide context awareness into two parts, depending on the method used to achieve it. The first part consists of context awareness that is achieved by the device itself without any outside support, called self-contained context awareness. The second part includes those context awareness methods that need some support from a larger system or infrastructure and are called infrastructure-based context awareness.

3.1 Self-contained context awareness

In this section we present works that deal with context aware hand-held devices that recognize their context without any external support.

Technology for Enabling Awareness, TEA

TEA is a multi-national research project partly funded by the EU (Schmidt et al. 1998). One of the participants in TEA is TecO. Researchers at TecO emphasize that "there is more to Context than Location". They have exploited environment-sensing technologies for automated context recognition. They also propose to use combined sensors for the recognition of higher-level contexts. They define ultra-mobile devices as computing devices that are operational and operated while on the move. According to their reasoning, the most notable use of context awareness is the adaptation of user interfaces to given conditions in specific situations. They also note that context awareness is hardly applied in mobile user interfaces yet. To structure the concept of context they propose two categories, human factors and the physical environment, with three subcategories each: for the former, information on the user, social environment and task; for the latter, location, infrastructure and conditions (Figure 3.1). Context history is orthogonal to these categories and provides additional information for calculating the new context.

Figure 3.1. TEA structure for contexts.
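The TEA categorization maps naturally onto a simple data layout. The C sketch below is our own illustration of that structure, with invented field names and sizes; TEA itself does not prescribe any particular representation.

    /* Two top-level categories with three subcategories each,
     * following the TEA structure; the history is kept orthogonal. */
    typedef struct {
        /* human factors */
        char user[32];           /* information on the user        */
        char social_env[32];     /* social environment             */
        char task[32];           /* current task                   */
        /* physical environment */
        char location[32];
        char infrastructure[32];
        char conditions[32];     /* e.g. light, noise, temperature */
    } Context;

    /* Context history: earlier contexts provide additional
     * information for calculating the new context. */
    typedef struct {
        Context past[16];
        int     count;
    } ContextHistory;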

The selection of application in a hand-held device could be based on context awareness. For example, it would be useful to see the shopping list while in a grocery store. In the constructive part of the work TecO researchers have developed a prototype of a PDA device that senses the orientation of the device and selects the display mode between portrait and landscape automatically.
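A tilt-based display-mode switch of this kind needs only a comparison of gravity components. The following C sketch shows one way to do it with a two-axis acceleration reading; the axis convention and the logic are our simplifications, not details of the TecO prototype.

    typedef enum { PORTRAIT, LANDSCAPE } DisplayMode;

    /* Pick the display mode from a two-axis accelerometer reading:
     * whichever axis carries more of gravity points "down". */
    DisplayMode select_display_mode(int accel_x, int accel_y)
    {
        int ax = accel_x < 0 ? -accel_x : accel_x;
        int ay = accel_y < 0 ? -accel_y : accel_y;
        return (ay >= ax) ? PORTRAIT : LANDSCAPE;
    }

A real implementation would add some hysteresis around the 45-degree diagonal so that the display does not flicker between modes when the device is held at an angle.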

25 Wearable computers

The most advanced wearable computers are much more than scaled-down office PCs. For example, Steve Mann has developed wearable computers that include a wide variety of sensors for measuring both the environment and the person wearing the computer (Mann 1996). This makes the wearable computer aware of its environment.

The sensing in Mann's wearable computer consists of sensors for measuring the state of the user, i.e. heart rate and temperature, as well as cameras for "seeing the same as Steve sees" (Gershenfeld 1999).

One unique property of Steve Mann's wearable computer is that it is more an information provider than a consumer. Unlike the location-based reality-augmenting notes that will be described in the next section, Mann's wearable computer acts as a mobile information source, offering information about the context of the device and of the user (usually Steve) to the rest of the world through the Internet. The location of the information is not fixed to any absolute place but moves along with Steve. We can assume that this type of information selection is in some sense more relevant than selecting information only by location. For example, for relatives and friends, context information related to Steve is more interesting than information related to some specific fixed location.

3.2 Infrastructure-based context awareness

In this section we present works that deal with augmenting the environment in order to improve the context awareness of the hand-held device. There are several possibilities for doing the augmentation. For example, adding RF or IR tags to the building could provide location information for the hand-held device. Connecting the hand-held device wirelessly to the office intranet, for example with a WLAN, could provide timely information about meetings, menus, etc.

Augment-able reality

Jun Rekimoto, from the Sony Computer Science Laboratory, has developed an environment that supports information registration to real-world objects (Rekimoto et al. 1998). He points out that current augmented reality systems that do not dynamically attach information to objects are essentially context-sensitive browsers. Based on his experience with the prototype system, he feels that the key design issue in augment-able reality is how the system can gracefully notify the user of situated information. The current practice of overlaying information on a see-through heads-up display is too obtrusive. As a more handy approach, Rekimoto suggests small LEDs on eyeglasses. After having seen the notice of situated data, the user can browse the data via a palmtop or wrist-top display.

Stick-e notes

Peter Brown from the University of Kent states that the present trends in hand-held computing devices are making context-aware applications very interesting (Brown et al. 1997). His opinion is that the creation of context-aware applications has to be made easier. Specifically, the aim is to make creating context-aware applications as easy as making web documents. The technology proposed by Brown is based on stick-e notes, which are electronic equivalents of Post-it notes. A stick-e note consists of two parts, a context and a body. Whenever the context is matched, the body is triggered. The context is described by location, objects that need to be with the user, time and orientation. The notation for writing stick-e notes is SGML, which should make them easy to create even for non-programmers.
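The context-plus-body pairing can be sketched in a few lines of C. The fields follow the description above (location, required objects); the matching rule and the names are our own simplification, and the real stick-e notation is SGML rather than C.

    #include <stdio.h>
    #include <string.h>

    /* A stick-e note: a context part and a body; the body is
     * triggered whenever the current context matches the note's. */
    typedef struct {
        const char *location;     /* where the note is "stuck"         */
        const char *with_object;  /* object that must be with the user */
        const char *body;         /* shown when the context matches    */
    } StickENote;

    static void check_note(const StickENote *n,
                           const char *here, const char *carried)
    {
        if (strcmp(n->location, here) == 0 &&
            strcmp(n->with_object, carried) == 0)
            printf("Triggered: %s\n", n->body);
    }

    int main(void)
    {
        StickENote note = { "grocery store", "shopping list",
                            "Remember the milk!" };
        check_note(&note, "grocery store", "shopping list");
        return 0;
    }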

Smart Rooms

Alex Pentland, together with his group at the Media Laboratory at the Massachusetts Institute of Technology, has developed computer systems for recognizing faces, expressions and gestures (Pentland 1996). Pentland claims that computers must be able to see and hear what we do before they can prove truly helpful. This new technology has enabled them to build environments that are not deaf and blind like current computers. Areas that they call Smart Rooms are equipped with computers that can assess what people in the room are saying or doing. Visitors in the room can use actions, voices and expressions to control computer programs, browse multimedia information or venture into realms of virtual reality. In Smart Rooms the user doesn't need to carry any external devices; all the computers are in the room and all computing is done by the infrastructure.

27 3.3 Measuring the user

Contexts can be divided into two groups: the environment of the user and the user him/herself. Context awareness usually relates more to the environment, although some wearable computers have the ability to measure certain attributes of the user as well. In this section we describe projects that concentrate on measuring the user.

3.3.1 Gesture Recognition

One special field of context awareness is gesture recognition. It has been studied both as an HCI problem (Nielsen 1993) and as a pattern recognition problem (Lee & Kim 1998). Depending on the technology that is used, it is either self-supportive or augmented. The applications vary from conducting music to recognizing American Sign Language. Most of the works are video-based and use a video stream as the information source (Starner et al. 1998). The second most used method is to incorporate acceleration sensors in the device in order to recognize the gestures that the user makes (Sawada & Hashimoto 1997).

The reason to include gesture recognition as a sub-field of context awareness is not straightforward. The motivation is, however, rather clear as we look at the situation from the device's (or application's) point of view. It should somehow react to changes in its location, for example. In the same manner, it should react as the hand is moved from one position to another. As seen from the hand-held device, there is no difference whether the device is moved together with the user or in relation to the user. It detects a change in its environment and should react to it. What the desired reaction is depends, of course, on the detected change.

3.3.2 Biometric identification

Identifying the user by measuring some physiological parameters is known as biometric identification (Strassberg 1998). There are several parameters that are suitable for this purpose. A method that uses fingerprint identification is the most suitable for portable devices. Siemens has already demonstrated a smart card that uses this technology as a password. Iris scanning is claimed to be the most reliable method, because no two iris scans are identical. The drawback is that a high-quality camera is needed.

Hand-held devices are often personal, and user identification is therefore also a security feature. There are, however, situations, as in an office, where there could be several users for the same device. In this type of use it is important that the device adapts to the user. Different users might prefer different application programs or different kinds of user interfaces even in the same situations. Identification of the user could make this adaptation automatic.

3.3.3 Affective computing

Measuring and recognizing the mood of the user is a key step towards giving computers the ability to interact more naturally and intelligently with people (Vyzas & Picard 1998). Rosalind Picard, together with her colleagues at MIT, has studied computing that is related to human emotions as "affective computing" (Picard 1997). The first part of affective computing is to understand the various alternatives that people use in expressing their emotions. Some forms are apparent to other people, like gestures and voice intonation, while others are less apparent, like heart rate and blood pressure. The second part of affective computing is to develop methods for measuring and recognizing human emotions. Finally, the third part consists of synthesizing emotions in computers.

The signal processing and pattern recognition methods used in affective computing resemble the methods that are used with our sensor box.

3.3.4 Direct electric control

Controlling computers directly with the body's electric signals is an option that is most familiar from science-fiction literature and movies. However, the possibility has also interested some scientists. Research into the use of EOG, EMG and EEG to control computers has been conducted at least in connection with disabled users (Lusted & Knapp 1996). If these methods prove to be useful, they may provide an effortless way to communicate with computers.

29 3.4 Summary

The design and implementation of research prototypes has been the most commonly used method in context awareness research. Both self-supportive prototypes and prototypes that possess infrastructure-based context awareness have been demonstrated. The division of contexts into several parts, at least into the environment and the user, is widely accepted.

However, the general underlying principles of context awareness have not been sufficiently addressed. In the next chapter we present a framework that we have used in analyzing the principles of context awareness and methods for obtaining and exploiting contextual information in hand-held devices.

4. Increasing context awareness

Overall, attempts to improve the context awareness of personal technology devices have been rather modest so far. One reason for this is probably that there is not enough knowledge about where to find contextual information and how to exploit this information in the operation of the device. In this chapter we propose some solutions to these problems. Firstly, we analyze the interaction models of personal technology devices and examine the associated information flows. Secondly, we present how use-case diagrams and scenarios can be used to find ways to utilize context information.

4.1 Interaction with hand-held devices

Rekimoto and Nagano, who have analyzed computer usage, divide HCI into four different styles (Figure 4.1) (Rekimoto & Nagano 1995). In a traditional GUI interface the user interacts with the computer and with the real world, but these do not interact with each other (Fig. 4.1a). In Virtual Reality the user cannot interact directly with the real world, because s/he is surrounded by the computer world created by the Virtual Reality system (Fig. 4.1b). In the Ubiquitous Computing style the user interacts directly with the real world and with several computers that are located in the real world (Fig. 4.1c). Finally, in Augmented Interaction part of the user's interactions with the real world is captured and augmented by a computer that forms a wall between the user and the real world (Fig. 4.1d).

Although these styles cover a large part of HCI, none of the styles is sufficient for describing the user interaction that is common with personal technology devices.

Figure 4.1. Comparison of HCI styles involved in human-computer interaction and human-real world interaction.

One way to understand personal technology (devices) is to treat it as a connection between the user carrying it and the rest of the world. In this respect it is quite close to the Augmented Interaction style shown in Figure 4.1d. However, this style is too simple for describing the interaction of a user carrying hand-held devices with the real world. For example, the fact that most of the interaction between the user and the real world bypasses the computer is not well explained by the Augmented Interaction style.

In Figure 4.2 we have illustrated a more comprehensive interaction model. This model is sufficient for describing all interactions with a personal technology device. In the model we purposefully divide the real world (R in Figure 4.1) into two parts: artificial reality (AR) and natural reality (NR). We also restrict the computer (C in Figure 4.1) to include only personal technology devices (PT).

Dividing reality into natural and artificial parts is important from the context awareness point of view. As seen in Chapter 3, context aware systems can be divided into two categories according to their interaction with the infrastructure. Clearly, this type of infrastructure belongs to artificial reality.

[Figure 4.2: diagram of the interactions among the human user, the personal technology (PT) device, artificial reality (AR) and natural reality (NR).]

Figure 4.2. Interactions as the human user is carrying a personal technology device (PT). AR = Artificial Reality, NR = Natural Reality.

With this model we want to emphasize that interaction with hand-held devices combines parts of many different interaction styles. The continuous intimate connection with the device, for example in the form of a vibration alarm, is similar to the haptic interfaces commonly used in virtual reality. The use of the menus of the device is equivalent to using any GUI of an office PC. The user of the hand-held device is surrounded by other computers, which is well in accordance with Ubiquitous Computing. Finally, the hand-held device is situated between the user and the rest of the world, augmenting the human-world interaction.

It is important to also present the interaction between natural and artificial realities, because it affects the interaction of the user as well. The user can obtain some information much more easily from artificial reality than from natural reality. One example of this type of information flow is the thermometer. The thermometer gets the temperature from natural reality by measuring. The human user then obtains the outside temperature by looking at the thermometer attached, for example, outside the kitchen window.

4.2 Sources of contextual information

Figure 4.2 shows a general interaction model that includes all interactions that can happen in the user-centered world. There are four different possibilities for a hand-held device to obtain information from its surroundings (Figure 4.3):

1) measuring the user and the environment (of the device) with sensors,
2) detecting the user controlling the user interface,
3) communication with other artificial objects, and
4) self-sensitivity for measuring e.g. the orientation of the device.

[Figure 4.3: diagram of the four information routes into the PT device's context awareness block: sensors, user control through the user interface, communication with artificial reality (AR), and self-sensitivity, with natural reality (NR) and the human user (HU) surrounding the device.]

Figure 4.3. Possibilities for the hand-held device to obtain information.

Schmidt et al. (1999) distinguish two different sensor types, physical sensors and logical sensors. The former measure the physical parameters of the environment and the latter gather information from the host (e.g. current time, GSM cell, etc.). We will use the term self-sensitivity, because not all information from the host is logical in nature (e.g. the charge of the batteries).
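Both source types can sit behind one polling interface, which is how we read the distinction. Below is a C sketch with stubbed readings; the names and the uniform read() signature are assumptions made for this illustration, not a design from Schmidt et al.

    #include <stddef.h>
    #include <stdio.h>

    typedef enum { PHYSICAL_SENSOR, SELF_SENSITIVITY } SourceKind;

    typedef struct {
        const char *name;
        SourceKind  kind;
        int       (*read)(void);  /* uniform way to poll any source */
    } InfoSource;

    static int read_temperature(void) { return 21; }  /* stub: environment */
    static int read_battery(void)     { return 87; }  /* stub: from host   */

    static const InfoSource sources[] = {
        { "temperature", PHYSICAL_SENSOR,  read_temperature },
        { "battery",     SELF_SENSITIVITY, read_battery     },
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof sources / sizeof sources[0]; i++)
            printf("%s = %d\n", sources[i].name, sources[i].read());
        return 0;
    }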

It is important to note that in all interaction some amount of information is transferred from one interaction participant to the other. In Figure 4.3 we have used the term context awareness to denote the part of the device that gathers and processes all the information that is obtained.

Traditional devices obtain information mostly from user controls and from self-sensitivity. Self-supported context awareness adds sensing, and augmented context awareness adds communication with information providers.

From the user's point of view the most important interaction takes place between the user and the hand-held device. The goal of calm technology (Weiser & Brown 1998) is to make this interaction as relaxed and implicit as possible. However, the information that is needed to do so does not come only from that interaction. User-device interaction and the associated information demand are analyzed in more detail in Sections 4.4 and 4.5.

There are several possibilities for achieving context awareness. In the simplest form, only a small piece of software (or electronics) is needed to obtain the required information. It is more demanding to include special hardware in the form of sensors, together with appropriate software for obtaining information. The most demanding form is to build an infrastructure for context awareness. Communication capabilities then have to be built into the device in order for it to be able to communicate with the infrastructure.

4.3 Analysis of existing products

Most of the existing PT devices get information only from direct control by the user. In the following we look at some of the most widely used PT devices and analyze their interaction models.

Wristwatch

Time as information belongs to natural reality. Human beings can also work out the time without a watch, but the accuracy is not sufficient for many purposes. It is also difficult to know the time during the night, for example. There are watches that get regular time marks from radio, but usually the time is set manually by the user. If the accuracy of the watch is good enough, this doesn't cause too much work for the user. However, while traveling through different time zones, it would be very nice if the watch could adjust the time accordingly. Unfortunately, most watches do not have enough information to do this autonomously. The idea of a watch as an agent is discussed in Russell & Norvig (1995).

The wristwatch obtains information from direct control by the user. Some advanced models adjust their time according to official time marks received by RF communication. Normal wristwatches do not acquire information by self-sensitivity; for example, the charge of the battery is not checked. The watch simply stops when the battery becomes empty.

Heart-rate monitor

A heart-rate monitor is a good example of a device that makes it easier to obtain accurate information about oneself. It is, of course, possible to measure one's heart rate without a heart-rate monitor, but during exercise it is difficult or even impossible.

The heart-rate monitor obtains information in a similar way to a wristwatch, i.e. from direct user control. The heart rate that the device measures is only passed on to the user. In most commercial products it is not used by the device from the context awareness point of view.

LA radio and GSM phone

The area in which the number of PT devices has increased most rapidly during recent years is wireless telecommunication. In our interaction model we divide these devices into two categories. The first category includes devices that connect one PT device directly to another. Devices that connect two PT devices with the support of the AR belong to the second category. The LA radio belongs to the first category and the GSM phone to the second. The reason for this distinction is that the GSM phone needs support from the communication infrastructure, e.g. base stations, in order to make a connection.

Based on Figure 4.3 it is reasonable to claim that it could be easier for the GSM phone to obtain information from the environment, because it already communicates with the infrastructure. Obtaining, for example, a local weather report from the nearest base station should be rather simple.

GPS navigator

GPS (Global Positioning System) is a rather new technology and GPS devices exist in many forms and sizes. The hand-held models clearly belong to PT. The operation of GPS devices, from the user's point of view, is very similar to that of the wristwatch, in that it gives the value of one important parameter of NR accurately and precisely to the user. It differs from the wristwatch, however, as it needs the support of AR (satellites) in its operation.

GPS is often proposed as a technique for detecting location in context aware devices. However, the GPS device itself does not seem to use the location information in increasing its own location awareness.

Wristop computer

A more recent example of a hand-held electronic device is the Wristop computer, a wristwatch with extended capabilities and environmental sensors built into it (www.suunto.fi/wristop). Besides being an ordinary watch, it is also a compass and an altimeter and offers, for example, continuous information about the current temperature to the user. At least so far, all of the features included in the Wristop computer transfer information from the NR to the user. It is tempting to forecast that at some point in the future Wristop computers or their derivatives could also connect AR to the human user.

Summary of analysis

As can be seen from the examples of PT devices presented above, there are more and more sensors involved. This means that some PT devices already have means for obtaining information from their surroundings. However, most of the devices do not use this information for increasing their context awareness. The information is used only in the primary function of the device. For example, the heart-rate monitor measures heart rate because it is a heart-rate monitor, not because it wants to know the state of the user.

Table 4.1 summarizes the main characteristics of the discussed PT devices. In the discussion we dealt with normal, average products. Some high-end models include features beyond this discussion. These are grouped separately as advanced products in the table.

Table 4.1. Information sources for some example products. X = normal product, A = advanced product.

                         Measure   User control   Communication   Self-sensitivity
    Wristwatch                          X               A                 A
    Heart-rate monitor      X           X               X
    LA radio                X           X               X
    GSM phone               X           X               X
    GPS navigator           X           X               X
    Wristop computer        X           X                                 X

4.4 Use cases and context awareness

One of the problems of hand-held devices, as discussed in Chapter 2, is that they are difficult to use while doing something else. Another problem is the increasing information overload, partly caused by the devices demanding constant attention and active interaction, which is annoying and can cause stress for the user.

There seem to be two ways to decrease the information flow to and from the user. Firstly, the PT device could autonomously use as much as possible of the information that would otherwise end up with the user. Secondly, the amount of information that the PT device demands from the user in the form of direct control can be decreased if the device can obtain this information without bothering the user.

There are different levels at which the device can use the information autonomously. The most primitive way is to use it directly in controlling the operation of the device (Figure 4.4). This can happen in two different ways. Firstly, it is possible that some user interactions become obsolete as the awareness of the device increases. Secondly, it is possible to use context awareness technology in modifying the way the user interacts with the device. This is important especially in hand-held devices, where manual control of the device is often difficult.

[Diagram: AR, NR and HU connected to the context awareness part of the PT device, which serves the device's communication, application and user interface parts.]

Figure 4.4. Exploiting context awareness.

The exploitation of context information can take several forms. Some user interface actions become unnecessary, because the device knows what it should do. Other actions are modified to happen automatically as the user carries the device. The way the device is controlled may also change. Furthermore, the device can decrease the information flowing to the user by using part of the information itself.

As a simple example of exploiting context awareness, we analyze the use of an electronic calculator. A simple use-case diagram of the calculator is presented in Figure 4.5.

[Use-case diagram: the actor User connected to a single use case, Calculate, inside the Electronic calculator system boundary.]

Figure 4.5. Use case for an electronic calculator.

The scenario of calculating the sum "3+2" with the electronic calculator is presented as an interaction diagram in Figure 4.6. The diagram reveals that there are two user interface actions that could be done automatically by the calculator, namely 'Turn On' and 'Turn Off'.

[Interaction diagram: the user and the calculator's keys, arithmetic unit and display exchange the messages Turn On, '3', '+', '2', '=' and Turn Off; Power On and Power Off are passed on internally and the result '5' is shown on the display.]

Figure 4.6. Scenario of calculating the sum "3+2" with an electronic calculator.

The calculator can set itself into a power-down mode when it recognizes that the user has not used it for, for example, seven minutes. There is no need to bother the user to turn off the calculator. Only a timer measuring the time between key-presses is needed to obtain enough information for the operation. Nowadays there are also solar cell powered calculators that do not need any explicit 'Power Off', because they are always on.
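As a minimal sketch, the auto power-off reduces to an idle timer that is reset on every key-press. The functions now_seconds(), power_down() and handle_key() below are hypothetical hardware-abstraction calls, not part of any real calculator firmware:

    #define IDLE_LIMIT_S (7 * 60)    /* seven minutes without key-presses */

    extern long now_seconds(void);
    extern void power_down(void);
    extern void handle_key(int key);

    static long last_keypress;       /* assumed to be reset at power-on */

    void on_keypress(int key)
    {
        last_keypress = now_seconds();
        handle_key(key);             /* the normal calculator operation */
    }

    void idle_tick(void)             /* called periodically by a timer */
    {
        if (now_seconds() - last_keypress >= IDLE_LIMIT_S)
            power_down();            /* no need to bother the user */
    }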

There are no means for the calculator to know what the user is going to calculate. So, there is no possibility to decrease the interaction required from the user any further. However, it is possible to change the modality of the required interaction. Changing button presses, for example, to speech commands, would make the calculator much easier to use while walking or doing something else with one’s hands.

41 In general it is important to find out what user actions are needed to use the device. Use cases and use scenarios offer a powerful tool for analyzing this. However, it is important to include all interactions in the analysis. Restricting the analysis only to the core task or use of the menus might leave some important interactions unrevealed.

Almost as simple as turning the calculator on and off is to adjust the volume of a car radio according to the outside noise level. A microphone for measuring the noise level is needed, i.e. the device has to have sensors for obtaining extra information from the environment.
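The control loop this implies is equally short. The following sketch assumes hypothetical read_noise_level() and set_volume() functions and invented constants, and simply raises the volume linearly with the measured cabin noise:

    #define BASE_VOLUME 20.0   /* volume in a quiet car, arbitrary units */
    #define NOISE_GAIN   0.5   /* extra volume per dB of measured noise */
    #define MAX_VOLUME  80.0

    extern double read_noise_level(void);   /* microphone level, e.g. in dB */
    extern void   set_volume(double v);

    void adjust_volume(void)
    {
        double volume = BASE_VOLUME + NOISE_GAIN * read_noise_level();
        if (volume > MAX_VOLUME)
            volume = MAX_VOLUME;    /* cap the automatic adjustment */
        set_volume(volume);
    }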

There are two essential issues in these examples. Firstly, the device has some means to measure its environment. The calculator does this by observing the user controlling the user interface (Fig. 4.3), while the car radio uses a microphone for measuring the environment. Secondly, the device emulates the user when using this information.

Based on this observation we can divide the essence of context awareness into three distinct elements: a) a part of the user tasks has to be transferred to the device's responsibility, b) the device has to have similar or better sensitivity than the user, and c) the device (or the designer of the device) has to know the probable behavior of the user. In addition, there have to be enough resources, for example computing power, to carry out the emulation, and there has to be a general pattern in the user's behavior in the situation.

This definition is interesting, because it considers the information gathering as being only a part of context awareness. At least as important a part of context awareness is to exploit it, for example, by increasing the intelligence of the device by taking responsibility for certain tasks that have previously belonged to the user. In Figure 4.7 we have opened the context awareness box shown in Figures 4.3 and 4.4 according to this observation. We have also illustrated the hierarchical nature of context and the fact that contexts form a context history as time elapses.

[Diagram: inside the context awareness box, "Obtain Context Information" feeds the current (hierarchical) context, which is used by "Exploit Context Information"; past contexts accumulate into a context history.]

Figure 4.7. Context awareness formed from obtaining and exploiting contextual information.
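One possible data layout for this box, written as a sketch rather than as the prototype's actual implementation, keeps each context as a node with a parent pointer (the hierarchy) and pushes superseded contexts into a ring buffer (the history):

    #define HISTORY_LEN 64

    struct context {
        const char     *name;      /* e.g. "In a meeting" */
        struct context *parent;    /* hierarchy: "In a meeting" -> "In" */
        long            since;     /* time the context became active */
    };

    struct context_history {
        struct context *entry[HISTORY_LEN];  /* past contexts, oldest overwritten */
        int             head;
    };

    void record_context(struct context_history *h, struct context *c)
    {
        h->entry[h->head] = c;               /* contexts form a history */
        h->head = (h->head + 1) % HISTORY_LEN;
    }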

4.5 The point of view of agents

In use-case modeling users of the system are presented as actors. However, the notation allows non-human actors as well. It is tempting to consider the context awareness part of the device as an actor. In this section we will analyze this option more closely.

Henry Lieberman has stated that "An agent is any program that can be considered by the user to be acting as an assistant or helper, rather than as a tool" (Lieberman 1997). According to Lieberman's classification an agent should operate some part of an interface in an autonomous manner, in order to be called an autonomous interface agent. If one of these conditions is not met, the agent is either an interface agent or an autonomous agent.

The idea of an agent fits quite well with our model of gathering and exploiting context awareness. We may think that two different kinds of software actually exist in the product. One is the actual application program and the other is responsible for context awareness. The latter can be thought of as an autonomous interface agent. The requirement for the interface is fulfilled as the program interconnects the user to the physical world. Autonomous behavior is a natural outcome of the program: it works in the background without any need for user guidance.

Without context awareness the user interface is passively waiting for the user's commands. With awareness this changes as the agent takes an active role in the interface, although remaining in the background. Examples of this type of functionality have already been implemented in the ActiveBadge and TEA prototypes.

In our example of the electronic calculator we can divide the use of the device into two distinct actors, namely the actual user and the agent (Figure 4.8). The agent is the part of the embedded program that knows what the user would probably do in a certain situation. The agent also has means for perceiving information from the environment. In the example this information is obtained from the user interface as time intervals between key-presses.

[Use-case diagram: the actor Agent connected to the use case Turn Off, and the actor User connected to the use case Calculate.]

Figure 4.8. Agent taking responsibility.

Pentland calls this type of intelligence perceptual intelligence (Pentland 1999) and emphasizes that most tasks do not need complex reasoning. He states that only perceptual abilities and simple learned responses are required. However, he does not clearly explain where and how responses for perceptual intelligence can be found. Our approach of using use-case diagrams and interaction diagrams offers one alternative for revealing those user actions that are appropriate to transfer to the device's responsibility. Using a separate actor for illustrating the autonomous behavior of the device emphasizes the shared or transferred responsibility.

The importance of the work of Picard (1997) in developing Affective Computing becomes evident as we notice that the behavior of the user can vary in similar situations at different times, according to the mood of the user. If the device is aware of this mood it is much more likely that it selects the right behavior on that very occasion.

In practice, if everything goes well, the user might learn to trust the agent. This may eventually lead to a situation where the user never cares to act by him/herself.

Systems where the agent learns the user's behavioral pattern and gradually takes more and more responsibility have been studied by Maes (Maes 1997).

When talking about agents, we have to be careful, because there are different kinds of agents depending on which area of computing is involved. In software engineering, agents are like objects, although they are more powerful and more autonomous. In telecommunications, mobile agents move around networks and can operate on various computing platforms. Mapping context awareness to an actor and further to an agent is closer to the definition given by Russell & Norvig (1995). Russell states that even a clock is an agent, although it has no perceptions. With perceptions it could change the time according to time zones, for example.

From the context awareness point of view it is obvious that perceptions are justified as a requirement for an agent. Otherwise it cannot have enough up-to-date information for autonomous behavior.

In using the agent metaphor to clarify the role of context awareness in personal technology devices, we have to be cautious not to use the term carelessly. Shoham (1997) uses a light switch as an example of an object that could be called an agent. He stresses that there is, however, no need to do so, because the mechanistic model of the light switch is sufficient for describing its behavior. Shoham's point is that the term "agent" should be reserved for those applications whose behavior cannot be described in other terms. We believe that context awareness, as part of personal technology devices, is this kind of an application.

5. Sensor box

5.1 Design and conduct of the experiment

The constructive part of this thesis consists of two parts. Firstly, we design and implement a sensor box that provides basic percepts for acquiring contextual information. Secondly, we use this information as a basis for transferring some user tasks to the device's responsibility.

On one hand the constructive part forms the evaluation of the ideas presented in Chapter 4. On the other hand, we want to provoke new ideas, insights and hopefully innovations in context awareness by studying the practical side of the subject: "Grab one corner of the problem and go! Start doing it!" (Glaser 1995). We think that this approach is especially suitable here, because human beings are strongly involved and human behavior is difficult to analyze or simulate analytically.

We have not built any infrastructure for context awareness, but instead focus on self-contained awareness. Not being in a position to build entire products, we decided to build an add-on device that could be installed in any product in order to increase its awareness. This device is called the sensor box.

The basic design rationale for the add-on device was to build an experimental system that could be used to gain practical knowledge of as many context awareness aspects as possible.

As a summary, the requirements for the constructive part of the work were as follows:

• The possibility to measure all parts of the device's environment:
  - natural reality (NR),
  - artificial reality (AR), and
  - the human user (HU)

• based on self-contained context awareness

• increasing context awareness without making the use of the device obscure

• exploiting context awareness in
  - reducing the amount of explicit interaction in the user interface,
  - modifying interaction to be more suitable for mobile use, and
  - decreasing information overload

• provoking inventions by using the prototype.

5.2 Requirements for the sensor box

Having decided to build a stand-alone device, it was obvious that it had to include a wide range of sensors. It needed sensors both for measuring the environment and the user. Although we could not build any infrastructure to support the device, we were optimistic about being able to get information on both artificial and natural reality from the sensors.

A basic problem with sensored devices is what to measure and where to put the sensors. Our primary goal in this respect was to concentrate all sensors in the hand-held device. This would make it easy for the user to take along and carry the sensors, compared with some wearable computer systems where putting the system on makes its use very cumbersome. Another reason is that if the sensors are embedded in the device, it is possible to measure both the user and the environment with the same sensors. Actually, the sensor location is perfect for measuring the user as he or she operates the hand-held device.

In some prototypes sensors are located in the device itself, but their usage is limited only to the user interface. We wanted to achieve broad context awareness without sacrificing ease of use. We also wanted context awareness to be as unobtrusive as possible. Having these limitations in mind, we decided to build all the sensors into one box, the sensor box, that could be attached to any hand-held device.

There are several factors that have to be considered when sensors are used as input devices. In a conventional user interface the user controls the computer explicitly. It is usually quite trivial for the computer to notice the user's actions, as he or she moves the cursor or presses the buttons of the mouse, for example.

Using sensor information as part of the user interface is much more complicated. One problem arises from the fact that the information flow from a sensor is continuous and varying. Sophisticated signal processing methods have to be used to detect the user's implicit actions from signal noise and other disturbances.

Another problem arises as we want to increase the awareness of the device. The device should be aware of the context that is a combination of the device itself, the user and the environment in which they both are. To determine this, information from many sensors has to be used concurrently.

Our approach to solving these problems was:

1) to restrict the number of different contexts and user interface actions that the device can recognize,
2) to use pattern recognition methods that can be taught to recognize signal patterns that are associated with selected user actions, and
3) to model the world from the sensors' point of view in order to understand how contextual information is converted to sensor information.
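As an illustration of point 2, a taught recognizer can be as simple as a nearest-centroid classifier over a small feature vector. The feature count, context count and the classifier choice below are illustrative assumptions, not the methods actually used later in the thesis:

    #include <float.h>

    #define N_FEATURES 4    /* e.g. mean, variance, peak frequency, ... */
    #define N_CONTEXTS 5    /* the restricted set of recognizable contexts */

    /* centroids[c] holds the average feature vector of training samples
       recorded in context c; filling it in constitutes the "teaching" */
    double centroids[N_CONTEXTS][N_FEATURES];

    int classify(const double f[])
    {
        int c, i, best = 0;
        double best_d = DBL_MAX;

        for (c = 0; c < N_CONTEXTS; c++) {
            double d = 0.0;
            for (i = 0; i < N_FEATURES; i++) {
                double diff = f[i] - centroids[c][i];
                d += diff * diff;              /* squared Euclidean distance */
            }
            if (d < best_d) { best_d = d; best = c; }
        }
        return best;                           /* index of the nearest context */
    }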

One motivation for studying context awareness of the environment and the user together is that there is dependence between user actions and contexts. Some user actions are likely to happen only in certain contexts. This extra knowledge can be valuable in situations where it is difficult to differentiate one context from another.

5.3 Design of the sensor box

In the first version of our sensor box implementation we decided to keep the system as simple as possible and to use only those sensors that were readily available. This led to the selection of the sensors listed in Table 5.1.

Table 5.1. List of the sensors.

Sensor                     Type          Manufacturer
Acceleration sensors (2)   ADXL202JQC    Analog Devices
Light sensor               IPL10530D     IPL
Temperature sensor         TMP36F        Analog Devices
Moisture sensor            HIH-3605-B    Honeywell
Conductance sensor         Self-made     Self-made

Two acceleration sensors are needed to get acceleration in three dimensions. The sensors are positioned so that one sensor gives acceleration in x and y directions and the other sensor in the z-direction (Figure 5.1).
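In code, forming one three-dimensional sample then amounts to reading three analog channels, one per axis. read_channel() and the channel numbers below are hypothetical stand-ins for the actual measurement card interface:

    enum { CH_ACC_XY_X, CH_ACC_XY_Y, CH_ACC_Z };

    extern double read_channel(int channel);   /* hypothetical ADC access */

    struct accel3d { double x, y, z; };

    struct accel3d read_acceleration(void)
    {
        struct accel3d a;
        a.x = read_channel(CH_ACC_XY_X);   /* first sensor, x axis */
        a.y = read_channel(CH_ACC_XY_Y);   /* first sensor, y axis */
        a.z = read_channel(CH_ACC_Z);      /* second sensor, mounted at 90 degrees */
        return a;
    }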

[Diagram: the light sensor is on top of the box; the moisture, conductance and temperature sensors are on its sides; one acceleration sensor measures the (x, y) plane and the other the z direction.]

Figure 5.1. Location of the sensors in the sensor box.

A temperature sensor is positioned on the side of the box in order to measure the temperature of the hand when the device is in hand. A light sensor is situated on the top of the box, which should also make it possible to measure the lightness when the device is in hand. A moisture sensor is positioned on the side of the box for the same reason as the temperature sensor. A conductance sensor is placed along the side of the box. The idea was to measure skin conductance in the same manner as Picard (1997) had done.

The electronics of the sensor box is presented in Figure 5.2. The operating voltage for the electronics is taken from the measurement board (Fig. 5.3) and regulated to 5 V with a linear regulator. The circuit for measuring conductance is at the top of the diagram. The voltage difference between the second electrode and ground is measured and amplified with an operational amplifier. Signal noise is reduced by filtering all signal outputs with 100 nF capacitors.

Figure 5.2. Sensor box electronics.
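The 100 nF filter capacitors form a first-order low-pass together with the source resistance of each signal. The resistance values are not given here, but assuming, purely for illustration, a series resistance of about 10 kΩ, the cutoff frequency would be

f_c = 1 / (2 * pi * R * C) = 1 / (2 * pi * 10 kΩ * 100 nF) ≈ 160 Hz,

which would pass the slow sensor signals while attenuating high-frequency noise.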

The sensor box was connected to a laptop PC where AD conversion was carried out with a DaqCard-1200 type measurement board manufactured by National Instruments (Figure 5.3). The measurement software running on the PC was written in C for LabWindows/CVI, which is a programming environment specifically designed for that purpose by National Instruments.
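The structure of such a program is essentially a fixed-rate scan loop. The sketch below does not use the real LabWindows/CVI data acquisition API; adc_read() and wait_ms() are hypothetical stand-ins, and the channel count and sampling rate are assumptions:

    #define N_CHANNELS 7     /* 3 x acceleration, light, temperature,
                                moisture, conductance */
    #define SAMPLE_HZ  100

    extern double adc_read(int channel);   /* one AD conversion, in volts */
    extern void   wait_ms(int ms);

    int main(void)
    {
        double sample[N_CHANNELS];
        int ch;

        for (;;) {
            for (ch = 0; ch < N_CHANNELS; ch++)
                sample[ch] = adc_read(ch);     /* one scan of all sensors */
            /* ... signal processing and context recognition ... */
            wait_ms(1000 / SAMPLE_HZ);
        }
    }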

[Diagram: the sensor box delivers analog signals (temperature, 3D acceleration, lightness, moisture) to a National Instruments DAQCard-1200, a PC Card type measurement card with 8 SE / 4 DI 12-bit analogue inputs plus analog and digital outputs, which also supplies the operating voltage; the laptop PC runs measurement programs written in the LabVIEW and LabWindows/CVI languages.]

Figure 5.3. Sensor box system diagram.

5.4 Expectations for the experiment

Before experimenting with the sensor box we had some presumptions about how we could exploit the sensors, described in Table 5.2.

Table 5.2. Use of different sensors.

Sensor         Use
Acceleration   Gestures; Movement (Train / Bicycle / Bus); Position
Temperature    Location of the user (In / Out); Location of the device (in hand / in pocket / on table)
Humidity       Location of the user (Office / Lunch-room / Bathroom)
Light          Location of the device (in pocket / on table); Time of day; Time of year
Conductance    Location of the device (being held / not being held); Clothing of hand

From Table 5.2 we can see that it is possible, at least in principle, to use the sensor box for a variety of purposes. Location awareness, which is the basic feature of any context aware mobile device, is achieved from several sensors, although with a very coarse resolution. The location of the device in relation to the user seems to be well covered. Gestural events, which are an important part of user contexts, are easily measured. Conductance measured as the box is in hand is equivalent to the skin conductance measurement that Picard uses as a source of affective information. This means that the sensor box could also be used for determining the mood of the user by analyzing the conductance level. The conductance increases as the user's hands get wet, and this is partially dependent on the mood of the user. However, in our experiment we used conductance only for measuring whether the box is in hand or not.

If we insert the sensors illustrated in Figure 5.1 into the interaction diagram presented in Figure 4.2 we get Figure 5.4.

[Diagram: the temperature, moisture, conductance, acceleration and lightness sensors of the PT device, each connected to the interaction components (AR, NR, Human User) that influence it.]

Figure 5.4. PT device enhanced with the sensors used in the sensor box.

From Figure 5.4 we can clearly see that the sensors measure their environment in such a way that their values are affected by all four interaction components: AR, NR, the Human User and the PT device itself. For example, for temperature this means that

T_S = f(T_AR, T_NR, T_HU, T_PT),                                    (1)

where

T_S  = temperature measured by the sensor,
T_AR = temperature of the artificial reality,
T_NR = temperature of the natural reality,
T_HU = temperature of the human user,
T_PT = temperature of the personal technology device.

Similar formulas are valid for all other sensors as well.

The interaction diagram (Fig. 4.2) is a general, high-level illustration of all interactions that concern the PT device. In our experiment we only use sensors as a means for obtaining context information. Therefore, the meaning of the arrows in Figure 4.2 is reduced to mean only information that is detected by sensors. In some other case the arrows might have some other meaning. For example, if there are location tags involved, then communication with these tags has to be considered.

If we compare Formula (1) to Table 5.2, we notice that there are some rather strong assumptions being made. For example, in Table 5.2 we claim that from conductance information we can conclude if the sensor box is in hand or not, or even the clothing of the hand. We clearly assume that the conductance signal is dominated by interaction with the Human User, i.e. we omit the other components that affect conductance. This can be expressed in mathematical form

C_S = f(C_HU),                                                      (2)

where C is conductance.

This assumption is justified based on the location of the conductance sensor in the sensor box and on general knowledge of the conductance of skin compared to the conductance of other materials that the sensor box could be in contact with. Both of these justifications are based on the fact that we are aware of the application of the sensor box, i.e. the usage pattern of hand-held devices.

Expressing Table 5.2 in mathematical form we get the following formulae

T_S = f(T_AR, T_NR, T_HU)                                           (3)

The temperature of the device does not affect the measured temperature.

M_S = f(M_AR, M_NR, M_HU)                                           (4)

The moisture signal is not caused by the moisture of the device itself.

The conductance sensor measures only the conductance of the user (Formula 2). If the conductance is zero or very small we can conclude that the device is not in hand.

A_S = f(A_AR, A_HU)                                                 (5)

The acceleration of the device is caused either by the user's actions or by the artificial environment. Natural accelerations caused by e.g. earthquakes are not considered.

L_S = f(L_AR, L_NR)                                                 (6)

The light emitted by the user and by the device itself is negligible compared to the light obtained from the artificial and natural environments.

Figure 5.5 presents these formulae in graphical form. Even without measurement-theoretical analysis it is evident that sensors which measure several signal sources at the same time run a great risk of not measuring anything reliably. According to Figure 5.5, conductance is the most reliable measurement in this sense because it has only one signal source. Temperature and moisture both have three signal sources, making their usefulness questionable.

[Diagram: each sensor with its remaining signal sources: temperature and moisture have three (AR, NR, Human User), acceleration has two (AR, Human User), lightness has two (AR, NR), and conductance has one (Human User).]

Figure 5.5. Information sources of sensors in the sensor box.

6. Aware phone experiment

It is possible to design general sensoring for context aware hand-held devices, as was described in the previous chapter. However, to experiment with context awareness, a specific target device is needed, otherwise it would not be possible to draw and analyze the use case scenarios and to transfer user responsibility to the imagined embedded agent. To put it another way, without an explicit device we cannot know how it is used.

The sensor box that we have designed and implemented is a suitable add-on module for a wide range of hand-held devices. In this respect it can be seen as a general-purpose solution.

The context recognition experiments were made supposing that the sensor box was attached to a mobile phone. A mobile phone was selected as the example device for two reasons. Firstly, it is one of the most common hand-held devices that people regularly carry with them. Secondly, the daily use of a mobile phone is varied, including many fruitful candidates for context recognition. Another possibility would have been a wristwatch, but it was rejected because the sensor box was too big to emulate it conveniently and because the usage pattern of a watch is not as diverse as that of a mobile phone.

6.1 Contexts of mobile phones

The context recognition problem is associated with mobile phones from two different directions. Firstly, what are the contexts that are important for mobile phones? Secondly, what contexts can be detected, in theory, with the sensor box attached to a phone?

The contexts that are relevant to the use of a mobile phone can be divided into three groups:

1) where the user is,
2) where the device is in relation to the user, and
3) what the user is doing (with the device).

Group 1 resembles location awareness, as described in Chapter 3. Gesture recognition would belong to Group 3, but is only a part of it. Group 2 is most clearly specific to hand-held devices and is largely absent or meaningless for other types of devices.

This grouping and the contexts that are listed below are largely suitable also for other types of hand-held devices, for example PDA devices:

• Location of the user
  - Indoors
  - Outdoors
  - In a meeting
  - At the desk
  - In the lunch-room
  - In a car
  - On a bicycle

• Location of the device in relation to the user
  - In hand
  - In a pocket
  - On the table
  - Different orientations of the device

• What the user is doing
  - Walking
  - Talking
  - Sitting
  - Running
  - Waving hand
  - Answering the ringing phone
  - Hanging up

The organization of contexts is not flat but hierarchical. Some contexts are on a higher level and might include several lower level contexts. The context "In a meeting", for example, includes the contexts "In" and "Sitting". In practice, it is not as straightforward as this, because meetings, for example, can also be held "Out" and it is possible to "Walk" during a meeting. This is not, however, in contradiction to the fact that there is a hierarchy of contexts. It only means that deciding how one recognizes a context that supports the existence of some other context is a difficult problem. A certain degree of uncertainty will probably always exist.

The use of a mobile phone is more complex than the use of a traditional wired phone (Figure 6.1). With the traditional phone there is no need to explicitly open the line by pressing buttons. This difference is noteworthy because the mobile phone should be the one that is especially well suited to mobile use, where pressing buttons is often difficult. Another point is that it could be very convenient if the phone recognized that it is taken in hand. In some situations it would be nice if the phone stopped ringing, for example, as soon as it was picked up. There should be no need to tie this operation to the pressing of the answer button.

[Interaction diagram: a phone call arrives at the GSM interface and the user interface starts ringing; the user picks up the phone and presses the 'answer' button, which stops the ringing and opens the line; the user lifts the phone to the ear, then talks and listens while speech passes over the line; finally the user removes the phone from the ear and presses the 'hang up' button, which closes the line.]

Figure 6.1. Interaction diagram of a normal phone call. Handling the case of the mobile phone is included as part of the user interface.

The sensor box attached to a mobile phone makes the above-mentioned features possible, at least in principle. According to the elements of context awareness discussed in Section 4.4, the device has to have enough percepts for recognizing contexts and their change, and the user behavior in this situation has to be known.

61 Recognizing context

Contexts that need to be recognized:
- phone in hand, not in hand,
- phone lifted to the ear, from the ear,
- ringing phone.

Figure 6.2 shows the sensors that are needed to detect the relevant actions, as well as the information sources that are involved. Although there is no need to measure the artificial environment in order to detect the user's actions, we have to take it into consideration because it affects the measured signal, as discussed in Section 5.4. The effect of AR on the acceleration signal is either noise or valuable context information. For example, if answering the phone occurs while the user is walking, it would be nice to detect both contexts, i.e. walking and answering the phone. However, in practice it is difficult to detect even a single context when there are other simultaneous contexts that affect the same sensors.

Detecting the ringing of the phone would be easiest by letting the phone send a message to the sensor box as a phone call arrives.

[Diagram: the conductance and acceleration sensors of the PT device, with the Human User as the signal source of interest and AR as an additional source affecting acceleration.]

Figure 6.2. Obtaining context information for detecting a phone call.

User behavior

The user behavior is rather easy to predict by analyzing the use-case scenario presented in Figure 6.1.

User behavior:
- if the phone rings and it is picked up, it should stop ringing,
- if the phone rings and it is lifted to the ear, it should open the line.
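Expressed as code, these two rules form a very small agent step. The predicates and actions below are hypothetical stand-ins for the phone interface, since in the experiment the sensor box could only signal the contexts, not control the phone:

    enum ctx { CTX_IN_HAND, CTX_AT_EAR };

    extern int  phone_is_ringing(void);
    extern int  context_active(enum ctx c);      /* from the sensor box */
    extern void stop_ringing(void);
    extern void open_line(void);

    void agent_step(void)
    {
        if (phone_is_ringing()) {
            if (context_active(CTX_IN_HAND))
                stop_ringing();                  /* picked up: stop ringing */
            if (context_active(CTX_AT_EAR))
                open_line();                     /* at the ear: answer */
        }
    }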

Modeling context awareness in these situations with the embedded agent proposed in Chapter 4, we get the following use-case diagram (Figure 6.3).

[Use-case diagram: inside the Mobile Phone system boundary, the actor Agent is connected to the use cases "Open the line" and "Close the line", and the actor User to "Talk with the caller".]

Figure 6.3. Embedded agent taking responsibility for controlling the phone.

By analyzing the user behavior we can find several other operations that could be transferred to the device's responsibility, for example increasing the volume of the phone in noisy environments or adjusting the LCD backlight brightness according to the lighting conditions. Users would probably very soon get accustomed to these 'automatic' features and expect all mobile phones to act similarly.
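A sketch of one such transferred task, with invented threshold voltages and levels, could map the ambient light reading directly to a backlight level:

    extern void set_backlight(int level);    /* hypothetical phone interface */

    void adjust_backlight(double light_v)    /* light sensor output in volts */
    {
        if (light_v > 2.0)
            set_backlight(0);                /* bright: no backlight needed */
        else if (light_v > 0.5)
            set_backlight(1);                /* dim room: low backlight */
        else
            set_backlight(2);                /* dark: full backlight */
    }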

6.2 Identifying contexts with the sensor box

Contexts can be divided into three different classes according to their dynamics. The first class consists of stable contexts, where the signals from the sensors remain stable or fluctuate only slightly. The level of the signal changes when the context changes. The conductance signal measured, for example, as the phone is picked up belongs to this class (Figure 6.4). In the initial phase the phone is on the table and the conductance is around zero. Then the phone is taken in hand and the conductance increases to 0.8 V. After a while the phone is returned to the table and the conductance decreases to the same level it was at before it was taken in hand. This episode is then repeated, but this time the phone is probably being held more firmly, because the conductance rises to 2 V.


Figure 6.4. Signal from the conductance sensor as the box is taken in hand and put back on the table, on two occasions.

The second class consists of contexts where the signals vary periodically. Acceleration measured when walking and waving belongs to this class. Figure 6.5 shows an example recording made while the user is walking with the sensor box in his pocket. From the figure we can conclude that the frequency of walking remains the same during the recording. The figure also clearly illustrates how differently right and left leg steps affect the sensors, which are located on one side of the user.


Figure 6.5. Acceleration signals as the person carrying the box is walking. The box is in the pocket.
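For a periodic context like this, even a crude estimator can recover the walking frequency from one acceleration channel. The sketch below counts upward crossings of the mean level over a window; the window length and the choice of channel are illustrative assumptions:

    double walking_frequency(const double *a, int n, double sample_hz)
    {
        double mean = 0.0;
        int i, crossings = 0;

        if (n <= 1)
            return 0.0;

        for (i = 0; i < n; i++)
            mean += a[i];
        mean /= n;

        for (i = 1; i < n; i++)            /* count upward mean crossings */
            if (a[i - 1] < mean && a[i] >= mean)
                crossings++;

        return crossings * sample_hz / n;  /* estimated step cycles per second */
    }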

The third class consists of contexts that happen only at a certain time. These could also be called events. Answering the phone is an example (Figure 6.6).


Figure 6.6. Acceleration signals as the box is picked up and lifted to the ear. The motion imitates the 'Answering the phone' context.

The dynamics of the context is very important when context recognition is considered. Class 1 contexts are easy to recognize; in some cases a simple signal limit will do the job. However, setting the limit is problematic, because the level can change from one situation to another. The level can also be user-dependent. Class 2 contexts are more difficult to recognize, but the periodic nature of the phenomenon gives a second (and third, and so on) chance if recognition of the first period does not succeed; only the delay of the recognition increases. The most difficult contexts belong to Class 3. There is no second chance, and the context and the possibility to exploit it are lost if the recognition does not succeed.
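For Class 1 a fixed threshold with hysteresis is often enough. The sketch below tracks the "in hand" context from the conductance signal; the two voltage levels are read off Figure 6.4 and would in practice be user- and situation-dependent, as noted above:

    #define ON_LEVEL  0.5   /* V: clearly above the on-table level */
    #define OFF_LEVEL 0.2   /* V: lower release level prevents flicker */

    int update_in_hand(int in_hand, double conductance_v)
    {
        if (!in_hand && conductance_v > ON_LEVEL)
            return 1;                 /* context "in hand" entered */
        if (in_hand && conductance_v < OFF_LEVEL)
            return 0;                 /* context "in hand" left */
        return in_hand;               /* no change */
    }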

The reason that we do not call Class 3 contexts events is that we reserve the term event for transitions from one context to another. For example, taking the phone from a pocket is an event where the context of the phone changes.

Most of the contexts listed in Section 6.1 are continuous, which eases the recognition. Some of the contexts are stable in the frequency domain. For example, the light signal forms a peak at 50 or 60 Hz in the frequency spectrum if the phone is in artificial light. The peak frequency depends on the mains frequency of the lighting.
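Because only the 50 Hz and 60 Hz bins matter here, a full FFT is unnecessary; the Goertzel algorithm gives the signal power at a single frequency. The sketch below is a standard Goertzel implementation with an invented detection limit and assumed sampling parameters:

    #include <math.h>

    #define FLICKER_LIMIT 1.0   /* detection threshold, arbitrary units */

    double goertzel_power(const double *x, int n, double freq, double sample_hz)
    {
        double coeff = 2.0 * cos(2.0 * M_PI * freq / sample_hz);
        double s0, s1 = 0.0, s2 = 0.0;
        int i;

        for (i = 0; i < n; i++) {
            s0 = x[i] + coeff * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        return s1 * s1 + s2 * s2 - coeff * s1 * s2;   /* power at freq */
    }

    int in_artificial_light(const double *x, int n, double sample_hz)
    {
        return goertzel_power(x, n, 50.0, sample_hz) > FLICKER_LIMIT ||
               goertzel_power(x, n, 60.0, sample_hz) > FLICKER_LIMIT;
    }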

We have also studied more advanced pattern recognition methods that are needed to recognize more dynamic contexts, such as walking, running, answering the phone and hanging up. The methods that we have used so far include neural networks and Hidden Markov Models (HMM). We have, however, studied these methods only in off-line situations. The results are very promising and will be published in a forthcoming paper.

One requirement for the sensor box was to make it unobtrusive. Having all the sensors in the sensor box is very convenient and does not affect the normal use of the phone too much, and we can identify a broad range of contexts with a very small amount of sensoring. Unfortunately, there are some drawbacks associated with this sensor location policy. The sensors are not as near the signal source as they should be according to measurement theory. Distinguishing various body movements, like walking along a street or walking up stairs, could be easier if the acceleration sensors were placed, for example, on the ankle of the subject. Also, the fact that the light sensor can be in a pocket when it should be measuring ambient light makes things harder than they would be with different sensor positioning.

6.3 Presenting the context in the experiment

In the aware phone experiment the awareness part of the program was run on a laptop PC. The same PC was also used for measurement and signal processing purposes. This awareness program was not physically connected to the mobile phone. Therefore, we were not able to control the phone in reality. Our demonstration was restricted to showing that the context, or its change, is recognized, based on the information obtained with the sensor box.

The user interface of the PC software is divided into three planes (Figure 6.7). Each plane is implemented as a separate window in the user interface. The first plane shows the unprocessed signals, the second plane presents the frequency spectrum of the signals and the third plane the recognized contexts. Contexts are presented with simple LEDs. The LED is lit as the context is recognized and turned off as the context disappears.

Figure 6.7. The user interface of the demonstration system.

The idea behind this structure is to present different information in different use cases of the program. In the development phase it is useful to see both raw data and processed signals. In a demonstration it is, however, more useful to look only at the contexts in order to get a feeling for automatic context recognition.

The contexts that are implemented in our prototype are 50 Hz, 60 Hz, bright light, several phone positions, in hand, and the stability of the phone. The set of contexts is not rigid, as the prototype is evolving all the time; new LEDs are created and others deleted constantly.

6.4 Results of the experiment

Using a separate (stand-alone) sensor box attached to a mobile phone for demonstrating context awareness was very fruitful. It was useful to have a real product (a mobile phone), the contexts of which we could study. There were real contexts that we could try to recognize. Thinking of the usage pattern was easy, as we had an existing product that we could play with. The two-way approach of considering the possible contexts and then trying to look at how these contexts might be seen from the sensors' point of view was also valuable.

The usefulness of the various sensors was well in line with the speculations presented in Section 5.4. Temperature and moisture sensors were almost useless while information obtained from the conductance sensor was reliable and easy to exploit.

The problems we encountered were associated with the fact that the sensor box and the target device, the mobile phone, were not operationally connected to each other. Some of those who saw our demonstration could not understand why they should be excited about an LED named "in hand" turning on and off. This result might seem discouraging at first, but is reasonable when we remember that in our experiment we concentrated more on obtaining than on exploiting context information.

Future work should include building a more convincing prototype, where the awareness that is achieved by the sensor box is truly exploited by the target device. This should not be too difficult, because there are several features in the mobile phone that can be used to demonstrate increased awareness.

In subsequent versions of the sensor box we have also carefully reconsidered the sensoring. Having all the sensors in the same box evidently increases the unobtrusiveness of the device. However, the requirements set for the sensor box in Section 5.1 include both unobtrusiveness and the possibility to measure the device's environment. Our current approach was clearly biased toward ease of use. Some level of compromise could provide the best possible result.

7. Results and Discussion

In the analytical part of the thesis we analyzed the interaction of hand-held devices. We defined the elements of context awareness and proposed use-case modeling as a tool for finding ways to exploit context information. We also discussed the relation of context awareness and agents.

In the constructive part of the work we implemented a prototype device that is aware of its surroundings by processing sensor information. Part of the awareness is related to the user him/herself, like walking. Some of the recognized contexts are related to the natural and artificial environment, like In and Out. We also briefly discussed the problems associated with obtaining contextual information with sensors.

The results of this thesis can be summarized as follows:

• analyzing the interaction of hand-held devices,

• using interaction as a base for obtaining context information,

• defining the elements of context awareness,

• using use-case modeling as a tool for finding ways to exploit context information,

• analyzing the relation of context awareness and agents, and

• evaluating theoretical findings with prototype experiences.

Based on the experience with the prototype we believe that it is impossible, or at least very difficult, to exploit context awareness fully without the support of an infrastructure. It is of course possible to use video and audio signals in order to gain more context information. However, a lot of computing power is needed to carry out signal processing and pattern recognition for these signals. Recognizing, for example, which room the user is in is straightforward if there are tags with different IDs located in every room. Doing the same recognition from audio or video signals measured by the hand-held device is far more difficult and probably unreliable.

In general, the signal provided by a sensor attached to a hand-held device is a mixture of several signal sources. No amount of signal processing is enough to separate two or more signals with the same characteristics. This places great demands on locating the sensors efficiently. A novel sensor location policy could connect signals to signal sources more accurately. In some cases redundant, but differently located, sensors could be used.

Another point supporting the need for an infrastructure is that the hand-held device cannot do very much alone. It needs information from other devices and from the environment. This is especially important if the goal is to increase the context awareness of the device without bothering the user. As well as getting information from the surroundings, the device also needs to give information to the environment. For example, the location of the user is valuable information to other people and devices.

In the future we would like to continue our research more in the agent direction. The idea of an agent living in the mobile device is refreshing and gives birth to new ideas to try out. Adding wireless communication to the device and to the agent would make it possible to transfer information between the agent and the environment it is in. Think of the possibility to attach information to a front door, for example. As soon as the device, and the agent embedded in it, passes this location (location awareness), it also retrieves the information that is waiting for it at the door. A message from a family member to visit the grocery store would reach the person when it was needed.

Another interesting subject for further study is to question the long-lived interactive user interface paradigm. Interactivity has long been characteristic of, and even a goal for, good human-computer interfaces. The latest research results show, however, that this leads to inefficiencies, as the user and the computer are working alternately (Lieberman 1997). This type of interaction is most inappropriate in hand-held devices, where the goal should be to work while the user is on the move. Adding more computing resources to the user interface might make the interface more intuitive and thus require less attention from the user (Kuivakari et al. 1999).

In this thesis we have addressed the improvement of the capabilities of existing products by exploiting context awareness. In discussing the future it is also appropriate to give a blue-eyed vision that illustrates entirely new products that might be created as better sensors are developed, computing power increases and pattern recognition methods are improved.

Blue-eyed vision of the future:

We think that it will be possible to record all contexts as symbols in an automatic diary fashion. That will make it possible for the user to remember 'everything' and search past events by symbols, symbol combinations and symbol sequences. Seeing the past situation from the diary will activate the user's episodic memory bringing to mind even more information than had been stored in the diary.

We believe that sensors will make it possible to implement implicit user interfaces in the same manner as has already been done, for example, with electronic water taps. We think that calm technology and casual use are realistic goals for future projects.

We think that in the future there will be products that combine the best parts of human capabilities and computer performance. So far research and product development have concentrated on maximizing the performance of the device. In the future the goal will be more about maximizing the performance and capabilities of the human-computer combination*.

* This trend has been forecast also by Donald A. Norman as he has stated that "The differences between humans and machines make for excellent complements, even while they also make for remarkably poor interaction" (Norman 1998).

8. Conclusions

The research community as well as commercial manufacturers have developed a wide range of mobile hand-held devices during recent years. Many people carry these devices more or less all day. Although they are usually useful in the task they are designed for, there is a growing interest in increasing the capabilities of these devices. On one hand, there have been attempts to make the devices adapt to their current situation (in, out, time-of-day), for example by loading appropriate application programs or information into them. On the other hand, work has been done to make the user interface of the devices simpler.

There are some common aspects in these two efforts. Firstly, both use some sort of sensoring in order to obtain information from the environment of the device. Secondly, this extra information is processed with the computing power that is left over from processing the primary application program.

We have analyzed both hand-held devices and context awareness. Our analysis led to a more detailed definition of context awareness, including a clear partition between obtaining context information and exploiting context information.

In this thesis we have proposed user scenarios and use-case modeling as a method for revealing potential exploitation possibilities. We have used sensors as a means for obtaining context information and discussed problems associated with this approach.

We have also presented experimental results of increasing the awareness of an example product, as well as the design and implementation of a general-purpose sensor box that made the experiment possible.

The results of our work imply that the range of ways of achieving and utilizing context awareness is quite large. It is rather straightforward to use simple methods in context recognition and to use the derived information directly in the user interface. The step to using context awareness beyond this state is quite large, though. Advanced pattern recognition methods, sophisticated real-time programs and smart infrastructures are needed before hand-held devices can serve as memory prostheses, automatic diaries and electronic assistants.

However, this does not mean that it is pointless to add simple context awareness capabilities to existing products; in fact quite the contrary. For example, using information from light and acceleration sensors can improve the user interface of the device significantly. Decreasing the amount of required user intervention is especially important in hand-held devices, which should be usable also while doing something other than controlling the device. Making it possible to use the device casually is not unimportant either, when we think about the time spent using these devices daily.

The proposal for a new profession, interaction design (Winograd 1998), is well in line with these thoughts. There is a growing need to consider the user more comprehensively in this ever-more computerized world. Considering the user interface only as a software problem is clearly an old-fashioned standpoint.

References

Araya, A.A. 1995. Questioning ubiquitous computing. In: Proceedings of the 1995 ACM Computer Science Conference, Nashville, TN, USA. New York: ACM. Pp. 230–237.

Bagrodia, R., Chu, W.W., Kleinrock, L. & Popek, G. 1995. Vision, issues, and architecture for nomadic computing. IEEE Personal Communications, ISSN 1070–9916, Vol. 2, No. 6, December. Piscataway, NJ: IEEE. Pp. 14–27.

Bass, L., Mann, S., Siewiorek, D. & Thompson, C. 1997. Issues in Wearable Computing. A CHI 97 Workshop. SIGCHI Bulletin, October, Vol. 29, No. 4, pp. 34–39.

Bell, G. & Gray, J.N. 1998. The Revolution Yet to Happen. In: Denning, P.J. & Metcalfe, R.M. Beyond Calculation, The next fifty years of computing. New York: Springer-Verlag, pp. 5–32. ISBN 0–387–98588–3.

Brown, P.J., Bovey, J.D. & Chen, X. 1997. Context-Aware Applications: From the Laboratory to the Marketplace, IEEE Personal Communications, October, pp. 58–64.

van Dam, A. 1997. Post-WIMP User Interfaces. Communications of the ACM, February, Vol. 40, No. 2, pp. 63–67.

Demers, A.J. 1994. Research issues in ubiquitous computing. Proceedings of the 13th Annual ACM Symposium on Principles of Distributed Computing. New York: ACM. Pp. 2–8.

Fogg, B.J. 1998. Persuasive Computers: Perspectives and Research Directions. In: Proceedings of CHI 98 18–23 April, Los Angeles, USA, pp. 225–232.

Forman, G.H. & Zahorjan, J. 1994. The Challenges of Mobile Computing. Computer, April, pp. 38–47.

Gershenfeld, N. 1996. Digital Dressing, or Software to wear. The New York Times Magazine, March 24, pp. 14–15.

Gershenfeld, N. 1999. When Things Start to Think. Henry Holt and Company, Inc. New York. ISBN 0–8050–5874–5.

Glaser, M. 1995. Measuring Intuition. Research and Technology Management. March - April, pp. 43–46.

Hawley, M., Poor, R.D. & Tuteja, M. 1997. Things That Think. Personal Technologies, No 1, pp. 13–20.

Imielinski, T. & Korth, H.F. 1996. Introduction To Mobile Computing. In: Imielinski, T. & Korth, H.F. (eds.) Mobile Computing, pp. 1–43.

Kleinrock, L. 1995. Nomadic computing - an opportunity. Computer Communication Review, ISSN 0146–4833, Vol. 25, No. 1, January. New York: ACM. Pp. 36–40.

Kuivakari, S., Huhtamo, E., Kangas, S. & Olsson, E. 1999. Keholliset käyttöliittymät. Digitaalisen median raportti 6/99. Tekes, Helsinki 1999.

Lamming, M., Brown, P., Carter, K., Eldridge, M., Flynn, M., Louie, G., Robinson, P. & Sellen, A. 1994. The Design of a Human Memory Prosthesis. The Computer Journal, Vol. 37, No. 3, pp. 153–163.

Lee, H.K. & Kim, J.H. 1998. Gesture spotting from continuous hand motion. Pattern Recognition Letters, Vol. 19, pp. 513–520.

Lieberman, H. 1997. Autonomous Interface Agents. In: Proceedings of CHI 97, 22–27 March, Atlanta GA USA. Pp. 67–74.

Lusted, H.S. & Knapp, B.R. 1996. Controlling Computers With Neural Signals. Scientific American, October. Scientific American Inc., New York, NY. USA, pp. 82–87.

Maes, P. 1997. Agents that Reduce Work and Information Overload. In: Bradshaw, J.M. (Ed.) Software Agents. AAAI Press, Menlo Park, California, pp. 145–164. ISBN 0–262–52234–9.

Mann, S. 1996. 'Smart Clothing': Wearable Multimedia Computing and 'Personal Imaging' to Restore the Technological Balance Between People and Their Environments. ACM Multimedia 96, Boston MA USA.

Negroponte, N. 1996. Being Digital. Vintage Books, ISBN 0679762906.

Negroponte, N. 1997. Agents: From Direct Manipulation to Delegation. In: Bradshaw, J.M. (Ed.) Software Agents. AAAI Press, Menlo Park, California, pp. 57– 66. ISBN 0–262–52234–9.

Nielsen, J. 1993. Noncommand User Interfaces. Communications of the ACM. April, Vol. 36, No. 4, pp. 83–99.

Norman, D.A. 1998. Why It's Good That Computers Don't Work Like The Brain. In: Denning, P.J. & Metcalfe, R.M. (eds.) Beyond Calculation, The next fifty years of computing. New York: Springer-Verlag. Pp. 105–116. ISBN 0–387–98588–3.

Pentland, A. 1996. Smart Rooms. Scientific American, April, Vol. 274, No. 4, pp. 68–76.

Pentland, A. 1999. Perceptual Intelligence. In: Gellersen, H.W. (Ed.) Handheld and Ubiquitous Computing, Proceedings of the First International Symposium, HUC'99 Karlsruhe, Germany, September 1999. Lecture Notes in Computer Science 1707. Pp. 74–88.

Pescovitz, D. 1998. Small will be beautiful. New Scientist, 27, June, pp. 40–42.

Picard, R. 1997. Affective Computing. The MIT Press, Cambridge, Massachusetts, 1997. ISBN 978–0–262–16170–1.

Rekimoto, J. & Nagano, K. 1995. The World through the Computer: Computer Augmented Interaction with Real World Environments. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST'95) Nov 14–17 1995, ACM Press, November, pp. 29–36.

Rekimoto, J., Ayatsuka, Y. & Hayashi, K. 1998. Augment-able Reality: Situated Communication through Physical and Digital Spaces. IEEE 2nd International Symposium on Wearable Computers (ISWC'98), pp. 68–75.

Resnick, M. 1996. Things to think with. IBM Systems Journal, Vol 35, Nos 3&4, pp. 441–442.

Resnick, M., Martin, F., Sargent, R. & Silverman, B. 1996. Programmable Bricks: Toys to think with. IBM Systems Journal, Vol 35, Nos 3&4, pp. 443–452.

Rhodes, B.J. 1997. The wearable remembrance agent. A system for augmented memory, Personal Technologies, No 1, pp. 218–224.

Russell, S.J. & Norvig, P. 1995. Artificial Intelligence, A Modern Approach. (Prentice Hall Series in Artificial Intelligence), Prentice Hall, ISBN 0131038052.

Sawada, H. & Hashimoto, S. 1997. Gesture Recognition Using an Acceleration Sensor and Its Application to Musical Performance Control. Electronics and Communications in Japan, Part 3, Vol. 80, No. 5, pp. 9–17.

Schilit, B.N., Adams, N. & Want, R. 1994. Context-Aware Computing Applications. Proceedings of the IEEE Workshop on Mobile Computing Systems and Applications, December 8–9, Santa Cruz, pp. 85–90.

Schmidt, A., Beigl, M. & Gellersen, H.W. 1998. There is more to Context than Location. In: Proceedings of the International Workshop on Interactive Applications of Mobile Computing (IMC98), Rostock, Germany, November.

Schmidt, A., Aidoo, K.A., Takaluoma, A., Tuomela, U., Laerhoven, K.V. & Velde, W.V. 1999. Advanced Interaction in Context. In: Gellersen, H.W. (Ed.) Handheld and Ubiquitous Computing, Proceedings of the First International Symposium, HUC'99 Karlsruhe, Germany, September, Lecture Notes in Computer Science 1707, 1999. Pp. 89–101.

Schnase, J.L., Cunnius, E.L. & Dowton, S.B. 1995. The StudySpace Project: Collaborative Hypermedia in Nomadic Computing Environments. Communications of the ACM, August, Vol. 38, No. 8, pp. 72–73.

Shoham, Y. 1997. An Overview of Agent-Oriented Programming. In: Bradshaw, J.M. (ed.) Software Agents. AAAI Press / MIT Press, pp. 271–290, ISBN 0–262–52234–9.

Starner, T. 1996. Human-powered wearable computing. IBM Systems Journal, Vol 35, Nos 3&4, pp. 618–629.

Starner, T., Weaver, J. & Pentland, A. 1998. Real-time American Sign Language Recognition Using Desk and Wearable Computer Based Video. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 12, December, pp. 1371–1375.

Strassberg, D. 1998. Biometrics: You are your password. EDN, May 7, pp. 41–48.

Want, R. & Hopper, A. 1992. Active Badges and Personal Interactive Computing Objects. IEEE Transactions on Consumer Electronics, Vol. 38, No. 1, February, pp. 10–20.

Want, R., Hopper, A., Falcao, V. & Gibbons, J. 1992. The Active Badge Location System. ACM Transactions on Information Systems, Vol. 10, No. 1, January, pp. 91–102.

Want, R., Schilit, B.N., Adams, N.I., Gold, R., Petersen, K., Goldberg, D., Ellis, J.R. & Weiser, M. 1995. An Overview of the ParcTab Ubiquitous Computing Experiment, IEEE Personal Communications, December, pp. 28–43.

Want, R., Schilit, B.N., Adams, N.I., Gold, R., Petersen, K., Goldberg, D., Ellis, J.R. & Weiser, M. 1996. The ParcTab Ubiquitous Computing Experiment. In: Imielinski, T. & Korth, H.F. (eds.) Mobile Computing, pp. 45–101.

Weiser, M. 1991. The computer for the 21st century. Scientific American, vol. 265, no. 3, September, pp. 94–104.

Weiser, M. 1993. Some Computer Science Issues In Ubiquitous Computing, CACM July Vol. 36, No. 7. pp. 75–84.

Weiser, M. 1998a. The Future of Ubiquitous Computing on Campus. Communications of the ACM, January, Vol. 41. No. 1, pp. 41–42.

Weiser, M. 1998b. The Invisible Interface: Increasing the Power of the Environment through Calm Technology. Opening Keynote Speech. In: Streitz, N., Konomi, S. & Burkhardt, H.-J. (Eds.), Cooperative Buildings - Integrating Information, Organization, and Architecture. Proceedings of CoBuild'98. Darmstadt, February 1998. Lecture Notes in Computer Science 1370. Springer: Heidelberg (267 pages). Available from "http://www.darmstadt.gmd.de/CoBuild98/abstract/0weiser.html".

Weiser, M. & Brown, J.S. 1998. The Coming Age of Calm Technology. In: Denning, P.J. & Metcalfe, R.M. Beyond Calculation, The next fifty years of computing. New York: Springer-Verlag. Pp. 75–85. ISBN 0–387–98588–3.

Winograd, T. 1998. The Design of Interaction. In: Denning, P.J. & Metcalfe, R.M. Beyond Calculation, The next fifty years of computing. New York: Springer-Verlag. Pp. 149–161. ISBN 0–387–98588–3.

Vyzas, E. & Picard, R.W. 1998. Affective Pattern Classification. AAAI Fall Symposium Series: Emotional and Intelligent: The Tangled Knot of Cognition, October 23–25, Orlando, Florida.
