Framework for Building Situation-Aware Ubiquitous Services in Smart Home Environments

Submitted by Diplom-Informatiker Andreas Rieger

Dissertation approved by Faculty IV – Electrical Engineering and Computer Science of the Technische Universität Berlin for the award of the academic degree of Doktor der Ingenieurwissenschaften (Dr.-Ing.)

Doctoral committee: Chairman: Prof. Dr. habil. Odej Kao; Reviewer: Prof. Dr. habil. Sahin Albayrak; Reviewer: Prof. Arkady Zaslavsky

Date of the thesis defense: 31.05.2011

Berlin 2011

D 83

Zusammenfassung

The increasing spread of computers into different areas of life enables new forms of use, but also poses challenges for researchers in various disciplines of computer science. With the help of networked devices, intelligent environments can be created that support their users in a variety of ways. The continuing miniaturization of sensors and actuators, which are integrated into devices of everyday life, enables a gradual paradigm shift towards “ubiquitous computing”.

Our own living environment in particular is increasingly coming into focus. Flats and entire houses are becoming so-called smart homes. The goal of these smart homes is to support their inhabitants with more comfort, safety, and energy efficiency. A wide range of technologies supports this approach, but so far a unified framework that addresses the resulting challenges has been missing. The key challenges are the integration of diverse devices that use different communication media and protocols, the acquisition of information from the environment and its processed provisioning, and the creation of intuitively usable services.

This dissertation is concerned with the research, development, and evaluation of context-aware, ubiquitous services for smart homes. To this end, an in-depth analysis of the market, the technologies, and the application areas of ubiquitous services for smart homes was carried out. Based on this analysis, three main components were identified whose provision supports the realization of such services. These components reduce the resulting complexity for software developers and allow them to focus on service-specific aspects.

The device management component provides a transparent, functional model of the underlying devices of a smart home environment. It integrates different technologies and protocols and exposes them to the services above through an abstraction layer. A context and situation model represents environmental information in a standardized form. The situation model, which was designed analogously to the human process of recognizing and assessing situations, provides the services with processed situation information that can be used to adapt services and user interfaces. The third component is a framework for providing intuitive, multimodal user interfaces for smart homes. Based on an abstract, model-based description, the user interfaces are adapted to the situation of the user and the current environment and then delivered.

The implementation was carried out, among others, within the Service Centric Home project and integrated into a smart home testbed. Various smart home services were implemented and integrated into the testbed, and these services and their development benefited from the different components of the architecture.

Abstract

The increasing spread of computing technology in different areas of life brings with it new possibilities but also challenges for scientists in various disciplines of computer science. With the help of connected devices we can create smart environments that assist the user in many ways. The miniaturization of sensors and actuators that are integrated into everyday devices enables a gradual paradigm shift in the direction of “ubiquitous computing”.

In particular, our own living environment increasingly becomes the focus of our attention. Flats and houses become so-called smart homes. The goal of these smart homes is to support the inhabitants with more comfort, safety, and energy efficiency. Multiple technologies support this approach; however, there is a lack of a unified framework that addresses the resulting challenges. The key challenges are the integration of different devices using different communication technologies and protocols, the acquisition of information from the environment and its aggregated provisioning, and the generation of intuitively usable services.

This dissertation is concerned with the research, development, and evaluation of situation-aware ubiquitous services for smart homes. We have performed an in-depth analysis of the market, the technologies, and the application areas for ubiquitous services for smart homes. Based on this analysis we have identified three main components that support the implementation of ubiquitous services in smart homes. These components reduce the resulting complexity for software developers and allow them to focus on service-specific aspects.

The device management component offers a transparent, functional model of the underlying devices in a smart home environment. It integrates various technologies and protocols and makes them available to services through an abstraction layer. A context and situation model provides information about the environment in a standardized form. The introduction of a situation model, which was developed analogously to the human process of detecting and assessing situations, provides processed situation information to services, which can be used for the adaptation of services and user interfaces. The third component provides a framework for the provision of intuitive, multimodal user interfaces for smart homes. Based on an abstract, model-based description, user interfaces are adapted to the situation of the user and the current environment and delivered.

The implementation was carried out, among others, within the Service Centric Home project and integrated into a smart home testbed. Several smart home applications have been implemented and integrated into the testbed. This way, the services and their development could benefit from the various components of the architecture.

Acknowledgments

The completion of my dissertation has been a long journey. It is true that “Life is what happens to you while you’re busy making other plans”. I stopped counting the many times I have been asked whether I had finished my PhD project yet. Many things happened and changed while I was involved with this project. Even though my dissertation has always been a priority, due to planned, motivating, and exciting, but also unexpected, disturbing, and difficult challenges in personal and work life, it could not always be the number one priority. Without the continuous support of the select few that I’m about to mention, I may not have gotten to where I am today.

I’d like to give special thanks to my advisor Prof. Sahin Albayrak for the great support and motivation and for giving me the opportunity to conduct my research in the exciting environment of the DAI-Labor of the Technische Universität Berlin. He gave me the opportunity to work with and lead the competence center Next Generation Services (NGS), providing the basis for my research. I would also like to thank Prof. Arkady Zaslavsky for his motivating support and feedback as a member of the committee. His feedback helped greatly to improve my thesis.

This work has been conducted as part of the research of the NGS group. All current and former team members, in particular Marco Blumendorf, Richard Cissee, Grzegorz Lehmann, Dirk Roscher, Frank Trollmann, Sebastian Feuerstack, Jens Wohltorf, and everyone else not mentioned by name, made direct and indirect contributions to this work, for which I’m greatly thankful.

And last but not least I’d like to thank my parents for their unflagging belief, their constant support, and for always being there for me. This thesis would certainly not have existed without them.

Contents

1. Introduction
1.1. Ubiquitous Computing
1.2. Motivation and Problem Description
1.3. Terminology and definitions
1.4. Contribution
1.5. Structure of the Thesis

2. Smart Homes
2.1. Roles
2.2. Market analysis
2.3. Application areas
2.4. Technologies
2.5. Challenges

3. Literature Review
3.1. Ubiquitous Computing / Pervasive Computing
3.2. Smart Home projects
3.3. Context- and Situation Awareness
3.4. Summary

4. CoSHA - Conceptual Smart Home Architecture
4.1. Requirements analysis
4.2. Conceptual architecture
4.3. Device Management
4.4. Context and Situation-Awareness
4.5. User Interaction

5. CoSHA Implementation
5.1. Device management
5.2. Context model
5.3. Multi-Access Service Platform
5.4. Summary

6. CoSHA Evaluation
6.1. Smart Environment Testbed
6.2. Service Centric Home
6.3. Smart Home Services
6.4. Summary

7. Conclusion and Future Work
7.1. Conclusion
7.2. Future Work

A. Outcomes of the dissertation work
A.1. Publications
A.2. List of talks

List of Figures
List of Tables
References


1. Introduction

“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” - Weiser, 1991

1.1. Ubiquitous Computing

Almost two decades ago, Mark Weiser described in his vision of ubiquitous computing computer systems that will disappear into the environment of our daily lives, and a computing infrastructure that is available everywhere, every time, and to everybody. Weiser believed that computing technology would evolve into everyday life and become as natural as today's non-computing technologies, like writing on a piece of paper with a pencil. These changes will lead to new environments saturated with computing and communication capabilities, yet with devices and appliances integrated into the environment such that they disappear from the user's attention. At the time of his writing, technology did not yet allow the realization of his vision in an appropriate form. However, his work was a starting signal for many researchers to explore a new paradigm of computing systems. Nowadays, ubiquitous computing is still an ongoing research area with many unsolved research questions. Many prototype applications have been built, but few have evolved into commercial systems. Due to the continuously advancing availability of computing and network infrastructure, the potential audience of computing, communication, or other services of an informational nature is growing steadily. Smaller processors embedded into everyday objects, with increasing processing and communication abilities, can form the foundation for the evolution of ubiquitous computing systems. These technological changes will have a strong influence on the way we interact with computing systems. Explicit usage of applications and services will fade into the background, while services running in the background support the user without her clearly perceiving their use. Given the advances in the fields of distributed computing, sensor networks, and middleware, the area of ubiquitous embedded computing is now being envisioned as the way of the future. The systems and technologies that will arise in support of ubiquitous embedded computing will undoubtedly raise a variety of issues, including human-computer interaction, dependability, autonomy, resource constraints, etc. In a ubiquitous computing world, service provisioning systems will be able to proactively address user needs, negotiate for services, act on the user's behalf, and deliver services anywhere and anytime across a multitude of networks and devices. As a result of this development, humans benefit from the use of technology without having to consider, in a specific situation, whether and how to use it; the use of technology becomes more or less unconscious.

These days the vision of ubiquitous computing is increasingly becoming reality, as Moore's law, the long-term trend in the history of computing hardware in which the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years, continues to hold. The capabilities of many digital electronic devices are strongly linked to Moore's law, leading to smaller sensors and inexpensive devices with more processing capabilities. Nowadays mobile phones not only act as mobile communication devices, but also as universal personal assistants. They are capable of storing and accessing different types of information, as well as extending their usage through internal or external sensors. GPS, compass, and accelerometer are just three of the most common today. They not only allow services like turn-by-turn navigation on the phone, but also the seamless integration of mobile phones into a bigger service and communication infrastructure, where the mobile phone acts only as a part of a larger system. Such an interconnection of sensors and devices shows how technology allows us to realize ubiquitous computing scenarios.

With all these technological advances, ubiquitous computing is already a part of our daily life. We use mobile communication nearly anywhere and anytime. Small sensors in our devices, e.g. GPS in mobile phones or cameras, work automatically, mostly without us taking notice of them. We experience the use of that technology later, when we notice that the online photo service knows the location where a picture was taken. However, not all challenges of ubiquitous and invisible computing mentioned by Schilit et al. (1993) have been met yet. Heterogeneous devices and networks, interoperability among disparate entities, and mobility and security are still challenges for researchers (Kumar and Das, 2006).

1.2. Motivation and Problem Description

Technology plays a more and more important role in our lives. We use technology for an increasing number of tasks in our daily activities. On the other hand, the devices that we use tend to get more and more complicated, the consequence being that we do not make use of all the possibilities the devices offer. Almost every mobile phone, for example, can be used for more than making calls and sending texts. There are of course users that make use of the camera, the integrated calendar, the possibility to receive emails or install new applications, but many users are overwhelmed and do not make use of these possibilities, even if they might be useful for them.

Our homes combine appliances and technology of different branches and vendors, each of which may have been acquired at different times (Edwards and Grinter, 2001). Basic approaches for new, smarter technology are mostly isolated applications. Appliances in our homes gain more and more capabilities too; each individual appliance is equipped with computing power. Yet much potential remains unused, as the devices cannot communicate with each other. If at all, only devices from the same branch or the same manufacturer can communicate and exchange information. A manufacturer of white goods interconnects its devices, making it possible to read off the fridge whether the dishwasher has finished. Yet it is not possible for the user to receive this information on her mobile phone or as a subtle note on the TV screen while watching a thrilling blockbuster. Hence it is an important research problem how we can easily interconnect devices and applications from different heterogeneous sources. It is equally essential to integrate new technology next to existing, older systems without making the old ones unusable (Pensas and Vanhala, 2010).

Technological advances help us realize smart environments. A smart environment is defined as “one that is able to acquire and apply knowledge about the environment and its inhabitants in order to improve their experience in that environment” (Cook and Das, 2007). Yet, our homes will not turn into smart homes until the potential of the technology is actually used. Smart environments and smart homes remain an open research area (Schmidt, 2010), as the impact of technology in these areas is less dominant than in mobile computing or personal computing devices. A main goal of smart homes is to improve the quality of living at home. Everyday activities should become more intuitive, more enjoyable, more convenient, safer, easier, faster, and better in many other ways (Saizmaa and Kim, 2008). Real-time and real-life issues like continuous availability, extensibility, resource efficiency, safety, security, or privacy (Nehmer et al., 2006; Becker, 2008) are major challenges for such systems.

One of the primary technological challenges for ubiquitous computing noted by Weiser is the issue of context management, that is, the need for methods of managing and coordinating the flow of information between the nodes and services available in a ubiquitous environment. The ubiquitous interface envisioned by ubiquitous computing requires that devices continuously support the user's current actions; context-awareness forms the basis of this capability. Context-awareness uses context information to provide relevant information and/or services to the user, where relevancy depends on the user's task (Dey, 2000). Context information consists of information about the environment, the user, and their current task(s). Offering this information in a structured form, e.g. as a context model, to the services running in the environment allows these services to provide context-aware functionality. Ubiquitous services can then seamlessly integrate and cooperate in support of user requirements, desires, and objectives. Context-aware services integrate context information to make applications and services more user-friendly, flexible, and adaptable (Jang et al., 2001).

1.3. Terminology and definitions

In the following we provide some definitions of the terms that are often used within our research.

1.3.1. Context-Awareness

Context-awareness describes applications and services that are aware of the environment they are running in. They make use of this information to behave differently under varying environmental conditions. This usually involves three steps: collecting, representing, and reasoning. Information about the environment is usually collected via sensors, but also via explicit user input. The collected information is preprocessed and stored in a common format. Based on that representation, reasoning algorithms interpret the information. All this together forms the basis for the adaptation of applications and services in a smart home environment.
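The three steps can be illustrated with a minimal sketch. The class names (ContextFact, ContextStore) and the temperature rule are illustrative assumptions for this example, not the interfaces of the framework described later in this thesis.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class ContextFact:
    """A single piece of context in a common representation (step 2: representing)."""
    entity: str      # e.g. "livingroom"
    attribute: str   # e.g. "temperature"
    value: object
    timestamp: datetime
    source: str      # sensor id or "user_input"

class ContextStore:
    """Collects facts from sensors or explicit user input (step 1: collecting)."""
    def __init__(self) -> None:
        self.facts: List[ContextFact] = []

    def collect(self, fact: ContextFact) -> None:
        self.facts.append(fact)

    def latest(self, entity: str, attribute: str) -> ContextFact:
        matches = [f for f in self.facts if f.entity == entity and f.attribute == attribute]
        return max(matches, key=lambda f: f.timestamp)

def room_is_cold(store: ContextStore, room: str) -> bool:
    """A trivial reasoning step (step 3) that interprets the stored facts."""
    return store.latest(room, "temperature").value < 18.0

store = ContextStore()
store.collect(ContextFact("livingroom", "temperature", 16.5, datetime.now(), "sensor-42"))
print(room_is_cold(store, "livingroom"))  # True -> a service could adapt, e.g. raise the heating
```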


1.3.2. Situation-Awareness

Situation-awareness is built on top of context-awareness. While the latter performs only a limited interpretation of context information, the former tries to infer real-life situations from context information. To this end, an understanding of how information, events, actions, and the intentions of the user will impact goals and objectives in the future is used to evaluate situations from context information.
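As a rough illustration of the difference to plain context-awareness, the following sketch infers a higher-level situation from several independent context values; the facts, threshold values, and the rule are hypothetical and only stand in for the more elaborate evaluation discussed later.

```python
# Hypothetical context snapshot; in a real system these values would come
# from the context model described above.
context = {
    "livingroom.tv": "on",
    "livingroom.light_level": 0.1,   # normalized 0..1
    "livingroom.occupants": 2,
    "time_of_day": "evening",
}

def infer_situation(ctx: dict) -> str:
    """Map low-level context onto a real-life situation (simplified, rule-based)."""
    if ctx["livingroom.tv"] == "on" and ctx["livingroom.light_level"] < 0.2 \
            and ctx["livingroom.occupants"] > 0:
        return "watching_a_movie"
    if ctx["livingroom.occupants"] == 0:
        return "room_unoccupied"
    return "unknown"

print(infer_situation(context))  # "watching_a_movie" -> services may e.g. mute notifications
```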

1.3.3. Smart Home

The term smart home describes homes equipped with “intelligent” technologies that generate added value for the inhabitants. Usually smart homes are privately used homes (e.g. a house or apartment) in which home automation devices (such as heating, lighting, and ventilation), home appliances (such as refrigerator, washing machine, and dryer), consumer electronics, and communications equipment are interconnected and oriented towards the needs and demands of the user. The interconnection allows the provisioning of new services and assistance functionalities that go beyond the individual value of the home's appliances.

1.3.4. Adaptation

Adaptation is the logical use of the possibilities that context-awareness offers. Adaptation can take place in several ways. First, services themselves can be adapted, meaning that a service behaves differently depending on its usage situation. Second, the user interface can be adapted to the current context. The functionality of the service remains untouched, but the adaptation of the user interface allows improved interaction; this may include graphical enhancements as well as a change of modality or interaction rules, enabling a more intuitive, more natural, or more situation-appropriate interaction.
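Both kinds of adaptation can be sketched as follows; the doorbell notification service, its parameters, and the handedness rule are invented for illustration only.

```python
def notify_doorbell(situation: str) -> dict:
    """Service adaptation: the behavior itself depends on the usage situation."""
    if situation == "watching_a_movie":
        # do not interrupt the movie with a loud chime
        return {"action": "overlay_on_tv", "volume": 0.0}
    return {"action": "ring_chime", "volume": 0.8}

def render_notification(action: dict, handedness: str) -> str:
    """UI adaptation: the service result is unchanged, only its presentation adapts."""
    side = "left" if handedness == "left" else "right"
    return f"show '{action['action']}' dialog with buttons aligned to the {side}"

action = notify_doorbell("watching_a_movie")
print(render_notification(action, handedness="left"))
```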


1.4. Contribution

We started our work by analyzing research in ubiquitous computing and context-awareness. The vision of ubiquitous computing is still the motivating force for many research projects, yet many issues remain unresolved. We have chosen the home environment as the focus of our research. It provides ideal conditions for research focusing on how to best support the user with technology that disappears from the user's attention. The analysis of technologies that are commonly used for realizing smart home solutions showed a general heterogeneity among devices and communication protocols. This led our research to the first problem: how the integration of heterogeneous devices should be handled.

Based on our findings and on related work in that area we have proposed a conceptual architecture for building situation-aware services for smart home environments. The architecture addresses three main research topics that we have identified for developing services for smart homes:

• Integration of heterogeneous devices

• Situation-aware adaptation of services

• Intuitive and multimodal user interfaces

We have developed an abstract device model that allows the integration of heterogeneous devices with different communication technologies, protocols, and APIs. The abstract mapping of underlying functionality to functional properties allows service developers to focus on features rather than on implementation issues of the specific technologies used. It becomes possible to handle devices without knowing their device-specific implementations when creating services that make use of device functionality.
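A minimal sketch of such a functional abstraction is shown below, assuming a generic “switchable” property; the class names and the KNX/X10 adapter stubs are placeholders and do not reflect the actual CoSHA interfaces.

```python
from abc import ABC, abstractmethod
from typing import List

class Switchable(ABC):
    """Functional property exposed to services, independent of the protocol."""
    @abstractmethod
    def switch(self, on: bool) -> None: ...

class KnxLightAdapter(Switchable):
    def __init__(self, group_address: str) -> None:
        self.group_address = group_address
    def switch(self, on: bool) -> None:
        # a real adapter would send a KNX telegram to the group address here
        print(f"KNX telegram to {self.group_address}: {'ON' if on else 'OFF'}")

class X10ApplianceAdapter(Switchable):
    def __init__(self, house_code: str, unit_code: int) -> None:
        self.address = f"{house_code}{unit_code}"
    def switch(self, on: bool) -> None:
        # a real adapter would send an X10 power line command here
        print(f"X10 command to {self.address}: {'ON' if on else 'OFF'}")

def good_night_service(devices: List[Switchable]) -> None:
    """A service works against the functional property only, not the protocol."""
    for device in devices:
        device.switch(False)

good_night_service([KnxLightAdapter("1/2/3"), X10ApplianceAdapter("A", 3)])
```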

Our context model offers a basis for automated processing of the environmental state. The usage of Hidden Markov Models in our approach extends current context-aware approaches and allows us to find situations that match the current human mental state during a sequence of interactions with the environment (Rieger and Albayrak, 2010).
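The idea of matching a sequence of observed interactions against hidden situations can be sketched with a small Viterbi decoder over a toy Hidden Markov Model; the states, observations, and probabilities below are made-up values and not the model published in (Rieger and Albayrak, 2010).

```python
# Hidden situations and observable interaction events (toy example).
states = ["cooking", "watching_tv"]
observations = ["fridge_opened", "stove_on", "tv_remote_pressed"]

start_p = {"cooking": 0.5, "watching_tv": 0.5}
trans_p = {
    "cooking":     {"cooking": 0.8, "watching_tv": 0.2},
    "watching_tv": {"cooking": 0.3, "watching_tv": 0.7},
}
emit_p = {
    "cooking":     {"fridge_opened": 0.4, "stove_on": 0.5, "tv_remote_pressed": 0.1},
    "watching_tv": {"fridge_opened": 0.2, "stove_on": 0.1, "tv_remote_pressed": 0.7},
}

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely sequence of hidden situations for the observations."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p) for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

print(viterbi(observations, states, start_p, trans_p, emit_p))
# -> ['cooking', 'cooking', 'watching_tv']
```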

We have developed a user interaction framework that generates multimodal user interfaces. Using abstract models for the description of user interfaces allows the adaptation of user interfaces based on the current situation of the user or the characteristics of the device (Rieger et al., 2005).
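A much simplified sketch of rendering one abstract interaction element to different modalities is given below; the element and the two renderers are hypothetical and only hint at the model-based framework described in chapter 5.

```python
from dataclasses import dataclass

@dataclass
class AbstractCommand:
    """Modality-independent description of an interaction element."""
    identifier: str
    label: str

def render_graphical(cmd: AbstractCommand, screen_width: int) -> str:
    # adapt to device characteristics: small screens get an icon-only button
    return f"<icon-button id='{cmd.identifier}'/>" if screen_width < 480 \
        else f"<button id='{cmd.identifier}'>{cmd.label}</button>"

def render_voice(cmd: AbstractCommand) -> str:
    # the same abstract element delivered through a speech modality
    return f"Say '{cmd.label}' to {cmd.identifier.replace('_', ' ')}."

cmd = AbstractCommand("turn_off_lights", "Lights off")
print(render_graphical(cmd, screen_width=320))
print(render_voice(cmd))
```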

Based on the conceptual architecture we developed several ubiquitous services, each focusing on specific parts of the architecture. These services were deployed in a testbed for smart home services. We have created this testbed in order to allow the evaluation of ubiquitous services for smart environments in a real smart home environment. Various home automation technologies and interaction devices have been integrated to support different application areas.

1.5. Structure of the Thesis

This thesis is structured as follows: In the following chapter we describe smart homes in general, the market of smart homes, their application areas and technologies. Based on this general overview we identify challenges as well as existing barriers for realizing smart home solutions.


Figure 1.1.: Structure of the thesis

Thereafter a literature review shows other smart home research projects. In addition we take a look at research activities in ubiquitous computing and context- and situation-awareness, which form the basis for many smart home services.

Based on a requirements analysis we describe “CoSHA”, our conceptual architecture for smart homes, in the chapter thereafter. We identify three main requirements that support the development of smart home services: easy integration of devices, a context and situation model as the basis for adaptation, and a user interface framework that supports multimodal interaction.

Afterwards, in chapter 5, we describe the implementation of our main components. A device model allows transparent integration of and access to heterogeneous devices and their data by services. The context model stores sensed information and allows the derivation of usage situations; the usage of the situation model to derive intentions of the user has been published in (Rieger and Albayrak, 2010). A framework for delivering multimodal user interfaces acts as the central component for providing interaction between the user and her environment.

The second-to-last chapter describes the evaluation of our approach with several prototype applications in a real-life testbed for smart homes. The described BerlinTainment application has been published in (Wohltorf et al., 2005; Rieger et al., 2005).

In the final chapter we summarize our work and present an outlook on open challenges that might be a starting point for future research.

2. Smart Homes

The area of smart homes has attracted significant attention as a specific area within the research of ubiquitous computing. Smart homes have been defined as

“domestic environments in which we are surrounded by interconnected technologies that are, more or less, responsive to our presence and actions” (Edwards and Grinter, 2001).

A smart home can be an apartment or house equipped with sensors, user interfaces, actuators, networks, and data and decision fusion modules to offer helpful services to its occupants. Such services can often adapt to the preferences of the user through a period of observing or interacting with the user.

Until now, most homes offer only little technological support for the user. Often the term smart home is used to refer to homes that use simple automation tasks based on temperature measurements or time constraints. Simple installations that turn off the lights after the last person has left a room, or automatic blinds that close at night and open again in the morning, are two examples. These blinds work standalone, taking into account only the current time and a simple trigger as the factor of operation. They close even if the occupants are outside in the garden and closing might lock them out. In the blind control application, the operation can be made smart if it takes into account additional information from door sensors and activity monitoring, and remembers the setting for the same or similar conditions in the future.
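A hedged sketch of such a smarter blind control is shown below; the sensor inputs, their names, and the rule itself are illustrative assumptions.

```python
from typing import Optional

def decide_blind_position(hour: int, occupants_outside: bool, garden_door_open: bool,
                          learned_preference: Optional[str]) -> str:
    """Combine time, door sensor, and activity information instead of time alone."""
    if learned_preference is not None:
        return learned_preference          # reuse a setting remembered for similar conditions
    if occupants_outside or garden_door_open:
        return "open"                      # never lock anyone out in the garden
    if hour >= 22 or hour < 7:
        return "closed"                    # default night-time behavior
    return "open"

# Evening, but the garden door is still open: the blinds stay open.
print(decide_blind_position(hour=23, occupants_outside=True,
                            garden_door_open=True, learned_preference=None))
```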


One important point that turns regular homes into smart homes is the interconnection of various devices and appliances through a communication network. This allows home appliances, consumer electronic devices, and communication devices to turn into smart devices that follow the needs and demands of the inhabitants. In order to leverage the capabilities of these devices, new services have to be offered and deployed into smart homes. These services will be able to improve the quality of life through manifold assistance in daily activities. Devices, systems and technologies will be used to create more comfort, flexibility, energy efficiency, and security.

Future solutions will sense the presence of a person in a room and conclude the situation and context of the user to support her better. A smart home could automatically set the appropriate lighting, switch on the heating to a warm and comfortable temperature, and tune in to the favorite music station or television channel, taking into account the situation and goals of the user. Another example is an automated lawn sprinkler that gains efficiency not only through the use of moisture sensors in the ground, but also by using the weather forecast from the Internet.

2.1. Roles

A smart home environment is characterized by three main roles. The first and most obvious one is the user; in a smart home environment this is typically the inhabitant and her family members, and in some scenarios guests cannot be ignored either. The second, no less important role is devices. Devices allow the realization of smart homes and are the interface to the inhabitant. The third main role is services. Services provide the intelligent functionality for the inhabitants and make use of the devices in the environment.


2.1.1. User

Smart home solutions are tailor-made for the inhabitant of a home. Devices, appliances, and services are based on her preferences. Access to the services is granted to each family member. Based on individual properties, different parts of the system become available once the system knows the current user.

If a smart home system becomes an integral part of a home and covers every task in that home, one must not forget another special kind of user: the guest. Certain well-known tasks from today's homes need to be accessible for guests as well; turning on the lights, for example, should not require any explanation.

Interaction with the services should be adapted to the preferences of the user. Left-handed users might want to experience a different layout of interaction on a touchscreen than right-handed users.

2.1.2. Devices

In today's homes a multitude of devices already exists, as shown in Figure 2.1. Until now, these devices usually work standalone in their specific application area, not producing any added value through interconnection with other devices.

Devices of a smart home can be divided into different categories:

White goods: Devices like fridges, driers, or kitchen equipment. Studies (Franz et al., 2006) show their potential for optimizing and reducing energy consumption. From the users' perspective, the usage time of these devices is flexible under certain restrictions, allowing services to take a certain amount of control over the operation of these devices.

Until now, these devices are usually not connected to other devices. They are starting to become networked, and assistive services now allow new possibilities for operating them. There have been some commercially available products from Siemens (the “serve@home” series) or Miele (the “miele@home” series). Until now, these devices were more technology demonstrators than real market players. Only few devices in the higher price segment have been available. No standard for communication with these devices has emerged yet, leading to proprietary protocols that allow operation only between devices of the same manufacturer.

With emerging smart homes, connectivity with and among these devices becomes increasingly interesting again. Not only can they help reduce the electricity bill and increase overall electrical efficiency, they can also be integrated into comfort- or security-related scenarios.

Figure 2.1.: Home automation devices


Consumer Electronics: This category comprises devices that are already present in many households. TV screens, DVD players, phones, or hi-fi systems are the most important examples of this category.

Some of these devices already offer connectivity. The DLNA standard allows media sharing and streaming between these devices, independent of the manufacturer. Even though these solutions sometimes seem smart and convincing, they have one big drawback: their services are available only within the category of consumer electronics. This limits the usage of these services to one branch of devices.

In smart homes it should also be possible to create services across all branches. The TV could then not only be used to control entertainment functions, but also act as an information or control device for white goods.

Other devices: Apart from the aforementioned devices, other technical equipment is present in our homes. Mostly these are legacy systems like lights, light switches, heating regulators, or blinds that have been present in our homes for a long time. Smart home services will increase the usability of these devices significantly, as their operation can be controlled, automated, and optimized.

In smart homes an additional category of devices will be added to the existing infrastructure:

Home automation devices: These devices will work in the background to enable some of the operational possibilities of smart homes. Switching actors, binary and other input devices, heating actors, and dimming actors belong to this category. They offer functions for switching lights, adjusting the heating, or collecting input from a variety of sensors. In a regular home, these devices (or their functions) might be partly present, but the main difference compared to legacy solutions is that they offer connectivity as a standard.

To sum up, we can say that our current home infrastructure is characterized by a wide heterogeneity of devices from different manufacturers with widely varying functionalities. Thus, the development of device-independent services that combine functionalities of more than one device is a challenging task. In addition, the layout of a home environment and the available devices differ from home to home.

2.1.3. Services

Services are the enablers of intelligent behavior in smart home environments. They establish a connection between the user and her environment. One major difference between current services for homes and smart home services is their ubiquitous availability and their spanning across multiple devices. While traditional services are usually bound to a single device, the interconnection of devices in a smart home allows offering new services that combine devices of different manufacturers and branches. This combination will deliver new services that increase the benefit for the user in different application areas, ranging from intelligent energy management to control and safety or services that support the daily life and routines of seniors.

2.2. Market analysis

Current home environments are increasingly being equipped with technical devices. In 2008, 94% of all German households had a TV screen and 70% also a DVD player. Mobile phones were present in 86%, computers and laptops in 75% of all households.¹ It is not unexpected that the equipping of households with technical devices will increase even more in the future, as people tend to adapt to new technologies and devices very fast.

¹ Source: German Federal Statistical Office (Statistisches Bundesamt), “Einkommens- und Verbrauchsstichprobe 2008”


The increasing computing power of devices and the short life cycles of electronic devices lead to devices that deliver more features than the user actually uses. Either the user is not aware of the possibilities that the device offers, or the operation is too complex for her. Thus many capabilities of devices go unused. Another important aspect for users when adopting new technology is their wish to stay in control. The requirements analysis within the SerCHo project (see section 6.2) has shown that control can appear on different levels:

• Control over usage: Customers want to decide autonomously if they use a particular device, application or service at all.

• Control over operation: Customers want to control consciously in which way they use a particular device, application or service.

• Control over time: Customers want to decide actively how much time they are spending on the use of a particular device, application or service.

• Control over combination: Customers want to decide autonomously which devices, applications or services are part of their smart home solutions.

• Control over evolution: Customers want to decide autonomously if and in which way they will upgrade their smart home solutions.

Smart home solutions will of course take some control away from the user and transfer it to the system or service. In order not to give the user the feeling of losing control, smart home systems should always provide sufficient information to the user about the current action. If a user understands why the system has set a device to a specific operation, and if the user has the possibility to override this decision, he will tend to accept the loss of control better. In addition, the benefit for the user should always be highlighted, so that the user can weigh the benefit against the loss of control.

End users normally choose the device they want to use depending on the intended purpose and the actual application. Smart home solutions will offer systems that are composed of a combination of different devices. Hence, for end users, it is not convergence on the level of devices that matters, but convergence on the level of controlling devices and accessing content.

Current consumer electronics for homes already exemplify some of the requirements that a smart home solution has to fulfill in order for customers to accept it. Consumer electronics offer their services with no significant boot-up time (“instant on”), have an attractive design that fits into a well-designed living room, try to be user-friendly, and offer plug-and-play characteristics. Smart home solutions will have to build on these basic requirements.

For customers, the main benefit of smart home solutions consists in the ability to use personal information and services independent of time, location, or device, according to their individual preferences. Customers do not (want to) care about the technology that is necessary to realize these benefits. Overall, they want to achieve an improvement of their life by means of smart home solutions, which allow customers to do new things or to do things in a quicker, better, safer, cheaper, or easier way than before.

Besides the benefits of smart home solutions, customers also perceive several threats. Firstly, they fear an ongoing and cumulative loss of sovereignty and control. Moreover, they anticipate an increasing dependency on technological systems and a potential intrusion into privacy. Finally, customers see the risk of unplanned and unsolicited side effects, e.g. health hazards, environmental hazards, technical malfunctions, additional costs, or extra time, as a critical aspect of smart homes.

Overall, customers are at present open-minded towards smart home scenarios and they show an incipient interest in respective applications and services. Despite the perceived threats, most customers generally do not assume an adverse attitude. But they demand a comprehensible and stepwise implementation of smart home solutions in order to explicitly address their misgivings and to avoid a mental overload caused by complex systems.

The adoption of smart home solutions is influenced by several factors. The demand analysis within the SerCHo project (see section 6.2) has identified six relevant spheres of influence:

• Price: level and transparency of end-user prices with respect to the first-time purchase and the day-to-day usage.

• Value added: concreteness, availability and relevance of the primary features of a smart home solution.

• Security: data protection, privacy protection and fraud prevention.

• Durability: compatibility, standardization, modularity, extensibility, quality and persistence of the complete system and its components.

• Simplicity: ease of installation, ease of use and ease of experience.

• Outer appearance: visual integration into the home environment with regard to aesthetics and design.

Besides these user-driven restraints there are some market-driven influences that still hinder the adoption of smart home solutions. In particular, the comparatively low broadband penetration in Germany and the insufficiently mature and standardized technologies belong to these factors.

Customers who are interested in buying or using smart home solutions can be divided into the three groups “demanding for simplification”, “demanding for variety” and “demanding for options”. Each type is characterized by specific motivations, which represent different success factors for suppliers. Smart home solutions addressing the type “demanding for simplification” should be comprehensible and straightforward. Moreover, these customers expect a tangible simplification of their everyday life, and the respective smart home solutions should be based upon familiar and established components. Customers of the type “demanding for variety” particularly appreciate high-grade and innovative smart home solutions. At the same time, image and brand awareness are highly valued by these customers. The value proposition should primarily focus on quality and interoperability. Modularity and the ability for integration are the key success factors of smart home solutions for the type “demanding for options”. Moreover, suppliers should emphasize the innovative character of smart home solutions and the diversity of usage.

If customers perceive a relevant and real, not only fictitious or optional, advantage of smart home services, they clearly show a willingness to pay. The majority of customers expect a structural distinction between the application level and the equipment level within pricing models. For services and/or content they usually prefer a rental model, where the payment may consist of a monthly fixed amount, a use-dependent amount, or a combination of both elements. In contrast, with regard to hardware equipment, a purchase model with a one-time payment is clearly preferred. For a single customer, the actual willingness to pay strongly depends on his individual perception of benefits, the relevant application field, and his fundamental expenditure behavior.

With respect to the choice of a supplier of smart home solutions, consumers tend to differentiate between the initial purchase and the extension of solutions. In the case of an initial purchase, consumers often demand solutions from one source. This desire does not necessarily imply that customers expect fully integrated companies that provide all components of smart home solutions themselves. They rather want to have a single relationship with one supplier in the sense of “one-stop shopping” and “one face to the customer”. This supplier may even consist of a consortium of different companies that features a single customer interface. Additionally, such a combination must create the impression that the consortium as a whole is working closely together without friction losses. In the case of the extension of smart home solutions, customers seek the adequate supplier in a more open way. Mostly, they are not committed to suppliers they have already used in the past. Depending on their specific needs they also take other providers into consideration. Therefore, modularity and openness are important features of smart home solutions.

Figure 2.2.: Use cases for smart homes (translated from Strese et al. (2010))

A variety of use cases can be described for smart home services, as seen in figure 2.2. These use cases are motivated by customer needs like entertainment, housekeeping, or home security. In order to realize these use cases, sensors and actors are needed. Sensors and actors play a central role in smart home environments; they are the mediators between the user and her environment. Sensors provide the services with information about the environment and the user, while actors deliver feedback to the user or change environmental conditions. If sensing and acting technology gets combined with existing devices, a new category of devices is created. These “smart” devices combine well-known devices with additional, intelligent, and autonomous functionality. The core of a smart home environment is constituted by home networking technologies.

2.2.1. Summary

Until now, the development of smart home solutions has not “taken off” quite as rapidly as expected. The main driving forces in the current development and deployment of home automation have yet to be identified; saving energy, increasing comfort, and simplifying the running of the home are candidates. In addition, the need for health and social services for specific target groups could also support turning our homes into smart homes.

In the following subchapter we’ll describe the relevant application areas in detail.

2.3. Application areas

Smart homes are especially characterized by applications that offer their services to the occupants. Services for smart homes make use of the technology and adapt to the environment and to the user's situation. Services tend to evolve from individual services for specific problems to more complete solutions that address a combination of different application areas, in order to support the occupants in a more complete way. Building up smart home solutions requires composing services and usage scenarios that fit well together in terms of their basic character. This combination of devices and services can result in value-added services that encourage customers to invest in smart home solutions.


Figure 2.3.: Preferences of end users for smart homes (Szuppa, 2007)

Customer surveys, as shown in figure 2.3, show that no single killer application for all consumers exists, but one can identify main application areas for smart home solutions. In the following we will highlight the five most common application areas.

2.3.1. Comfort

Services that increase the comfort of the occupants are the main driver on the customer side. These services help to reduce the burden of annoying tasks or make occupants feel more comfortable in their homes. Services in this category generally increase the service level of the home.

Comfort-related services adjust lighting and temperature to make living in the house a more pleasant experience. In the morning, the inhabitant could awake to a home already at its ideal temperature, as the heating or air conditioning has already adjusted itself at a preset time. On the way to the bathroom, movement sensors control the lighting and ensure instant availability of hot water by turning on the hot water pump. Using a touch screen on the way to the kitchen starts a breakfast scene: lighting in the kitchen is turned on, the TV is switched to the favorite morning news station, and the coffee machine serves a fresh latte. An intelligent fridge updates the shopping list, suggests a meal plan for the week based on user preferences and diet, and delivers health information for each dish. Before leaving the house for work, the occupant is notified about open windows or doors. In the afternoon, automated blinds prevent the sun from heating up the living room, ensuring a comfortable temperature when the occupants return from work. In the evening, the lighting is set automatically for a family meal, dinner party, or a television evening, with simultaneous operation of suitable devices.

2.3.2. Communication and Entertainment

Closely related to comfort is the area of communication and entertainment services. Entertainment services in a smart home control the consumption of audio and media. Home theater systems and audio and visual equipment can be used to recreate a cinema experience at home. The system automatically adjusts lighting levels, closes blinds, and powers up audiovisual equipment to suit the entertainment experience. As the inhabitant sits down to watch a DVD or television program, the TV screen can be automatically raised, or a projector screen lowered. Audiovisual content can be accessed and played on any output device within the smart home. Multi-room audio systems automatically pipe music from the hi-fi stereo system to any room while the inhabitant moves through the rooms during his daily activities. Automated recording of favorite shows, or program recommendations based on user preferences learned from previously watched programs, will enhance our entertainment experience even further.


2.3.3. Safety and Security

Another prominent area for services in smart homes is safety. Everybody wants to feel safe at home and protect her possessions. Therefore, any technology that supports these wishes will be gladly accepted by occupants. Alarm systems are a familiar product to many of us. Traditionally, an alarm system protects a home while the inhabitants are away; once armed, it uses glass breakage detectors or motion sensors to raise an alarm. Alarm systems in smart homes can provide even more features: they can monitor regular behavior and create an alarm or a warning when something unusual occurs. They can also turn off the lighting and the stove, or close the blinds when a thunderstorm approaches. Remote access to the system allows the user to check from a distance if everything is all right, offering a feeling of safety.
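The idea of monitoring regular behavior and warning on deviations can be sketched as a simple statistical check; the motion-event history and the threshold are invented for illustration, and real systems would use far richer models.

```python
from statistics import mean, stdev

# Hypothetical history: number of motion events per night in the hallway.
nightly_motion_events = [2, 3, 1, 2, 4, 2, 3, 2]

def is_unusual(tonight: int, history: list, threshold: float = 3.0) -> bool:
    """Flag a value that deviates from the learned routine by more than
    `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return abs(tonight - mu) > threshold * sigma

print(is_unusual(2, nightly_motion_events))    # False: within the normal routine
print(is_unusual(25, nightly_motion_events))   # True: raise a warning or alarm
```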

2.3.4. Energy Management

Energy efficiency is becoming a more and more important topic in our society. Conserving energy can start within our own homes. Think of the following scenario: In order to save on heating and thus to protect our environment, the temperature is lowered in the absence of the occupier, or when windows are opened. Shortly before the arrival of the inhabitant, the temperature is set to a comfortable level again, e.g. triggered via a cell phone or by automatic synchronization with the user's calendar. In general, heating, ventilation, and blinds coordinate with each other and create an energy-efficient, pleasant climate.

For occupants it is not only possible to save energy but also costs. Under pressure from government regulations, energy suppliers are starting to roll out electricity smart meters. Smart meters provide accurate real-time information on energy use in the home to the consumer and back to the energy supplier. Based on smart meters it is possible to offer tariffs that charge rates based on the time of day at which usage occurs, so-called Time of Use (TOU) tariffs. Usually this means a lower rate during the night, when there is less demand for electricity, and higher rates during the day, when there is more demand. In order to benefit from TOU tariffs, it is necessary to shift the usage of appliances to lower-priced periods.
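A minimal sketch of shifting an appliance run into the cheapest TOU window is shown below; the tariff figures and the dishwasher parameters are invented and do not correspond to any real supplier's tariff.

```python
# Hypothetical TOU tariff in EUR per kWh for each hour of the day.
tariff = [0.18] * 6 + [0.30] * 16 + [0.18] * 2   # cheap 00:00-06:00 and 22:00-24:00

def cheapest_start_hour(runtime_hours: int, energy_kwh_per_hour: float) -> tuple:
    """Find the start hour that minimizes the cost of one appliance run."""
    best_hour, best_cost = None, float("inf")
    for start in range(24 - runtime_hours + 1):
        cost = sum(tariff[start + h] * energy_kwh_per_hour for h in range(runtime_hours))
        if cost < best_cost:
            best_hour, best_cost = start, cost
    return best_hour, round(best_cost, 2)

# A dishwasher running 2 hours at 1.2 kWh/h is scheduled into the cheap night window.
print(cheapest_start_hour(runtime_hours=2, energy_kwh_per_hour=1.2))  # (0, 0.43)
```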

Overly complex tariff structures would overwhelm end consumers, making it impossible for them to get the most out of the new possibilities. Thus, smart homes will not only offer alternative means of displaying energy consumption than we are used to today, i.e. through mobiles, the internet, or digital TV, but also allow intelligent management of appliances based on forecasted energy prices and user preferences, resulting in a cost- and user-optimized usage of appliances.

For energy suppliers, tariff-based management of energy demand can also be advantageous. Tariff-based incentives for shifting the usage of appliances to other times can be used to flatten the load curve. A flattened load curve results in a better overall efficiency of energy generation. Proposals for integrating electric vehicles into home energy management go even further: the energy stored in their batteries could be used to reduce the energy consumption of a smart home during peak times, while the batteries are recharged off-peak.

2.3.5. Ambient Assisted Living

Another application area that does not come to customers' minds at first is Ambient Assisted Living (AAL). AAL is about concepts, products, and services that combine new technologies with the social environment, with the aim of increasing the quality of life for people in all stages of life. AAL is motivated by the unstoppable demographic change. In 2035 Germany will have one of the oldest populations in the world. More than half of the population will be 50 years or older; every third person will be older than 60 years. It is a challenge for society, economy, and politics to develop and implement affordable solutions that increase the quality of life while keeping the costs for social systems in check.

Figure 2.4.: AAL innovation model

Smart homes can help elderly people improve their everyday life, increase their security, and keep up their social contacts. Smart homes can enable them to perform activities by themselves that they were not able to do before and which are important for their daily life. Improving the quality of life of elderly people for as long as possible will also allow them to live longer in their familiar surroundings and will help the healthcare system reduce the costs for nursing care. As many elderly people suffer from health problems, maintaining the health and functional capability of the elderly and promoting a better and healthier lifestyle in view of possible health risks is an additional requirement for AAL.

On the other hand, older people are often not very familiar with technical solutions; they might feel overwhelmed by the technology. Therefore it is particularly important for smart homes for the elderly that services and solutions are not too complicated.


2.3.6. Summary

We have shown that smart homes can improve the quality of living in different application areas. Considerable advantages need to be present to give the user motivation to invest in the new technologies needed for realizing smart homes. These technologies already exist; thus in the following subchapter we'll take a look at the technologies available for realizing smart home solutions.

2.4. Technologies

Different kinds of technologies form the basis for realizing smart home solutions. They can be divided into three main groups: sensors, actors, and communication technologies. Sensors collect information from the environment, actors control home appliances, input and output devices allow for the control and operation of services installed in the home, and a communications network connects all the devices and lets them exchange information. Often the boundaries between these components are fluid: some home automation devices are sensors and actors simultaneously, allowing a load to be switched on and off while also sensing its current energy consumption.

Sensors and other measuring devices collect and distribute a variety of information about the environment they are installed in. They perceive current status information as well as changes of measured values. A few examples are information about temperature, light, movement, smoke, noise, resource consumption, and so on. Sensor information delivers the basis for smart home services to make better, traceable, and smarter decisions.

Actors play the role of the extended arm of the inhabitant. They can be found in many home appliances. They turn on lights, open and close blinds, turn off the oven, adjust the radiator or air conditioning system, open and close doors and windows, and so on. Actors replace the direct user interaction with devices and appliances in smart homes; they are the enablers of home automation.

A special characteristic of smart homes is the interplay of different technologies. A communication network forms the basis for connecting devices and sensors within a smart home. Such communication networks exist in manifold forms.

2.4.1. Home Automation systems

Several competing home automation systems are available to support the realization of a smart home. These technologies have been created directly for the intended use in smart home environments. They offer sensing and acting devices, as well as a communication network interconnecting these devices. In the following we will present the most widely used home automation systems.

KNX

KNX is a field bus system for home and building automation. KNX uses a standardized OSI-based network communications protocol2. The use of this standardized protocol makes KNX independent of any particular hardware platform. KNX is a further development of the European standards EIB, BatiBus and EHS. While keeping interoperability with EIB, KNX added the configuration mechanisms and transmission media of BatiBus and EHS to the standard. The KNX Association with more than 110 members certifies products that implement the KNX standard. Certification guarantees interoperability between products. The KNX standard supports different transmission media for operation:

• Twisted pair at 9.6 kbit/s with a maximum distance of 1,000 meters

2http://www.knx.org/knx-standard/standardisation/, accessed on December 10, 2010


• Powerline communication (PLC) with data rates of 1.2 kbit/s and a maximum distance of 600 meters

• Radio frequency (RF) at 868 MHz

• Ethernet cable with 10 Mbit/s

The most commonly used communication technology today, supported by almost every manufacturer of KNX appliances, is the twisted pair system, even though it has the disadvantage of a separate cable that has to be installed. PLC and RF have less support from the manufacturers, even if they are easier to install. KNX over Ethernet cable has the smallest dissemination among the used technologies.

Figure 2.5.: KNX devices for home automation

KNX has broad support from manufacturers in Europe. Figure 2.5 gives an overview of available devices, sensors and actors that support the KNX standard. KNX offers devices and solutions for almost every possible use case for smart homes. However, constantly high prices for the components and the difficulty of retrofitting have prevented a wider spread of the technology until now.


X10 and INSTEON

X10 3 is one of the oldest communication standards that enable communication among electronic devices in a home environment. In most application areas X10 uses the household electrical wiring for communication between devices, even though a radio frequency X10 protocol has also been defined. The usage of the existing wires in a house allows realizing automation and control of devices without the need for rewiring. Adapters plugged into regular power outlets can send signals to or receive signals from other devices using the same technology. Basic functions are turning devices on and off and reporting of status information. More advanced devices like cameras or sensors are also available.

The standard X10 power line and RF protocols also lack support for encryption and can only address 256 devices. As with all power line technologies, signal filtering is needed so that close neighbors using X10 do not interfere with each other. Interfering RF wireless signals may similarly be received; it is easy for anyone nearby with an X10 RF remote to wittingly or unwittingly cause mayhem if an RF-to-power-line device is being used on the premises. Despite some drawbacks of this technology, it has become quite popular in the United States. One reason for that might be the inexpensive availability of new components.
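The 256-device limit follows directly from the X10 addressing scheme, which combines a house code (A-P) with a unit code (1-16). The sketch below merely illustrates this; the class name and representation are assumptions, not part of the X10 specification.

```java
// Minimal sketch of the X10 addressing scheme: a house code (A-P) combined with
// a unit code (1-16) yields 16 x 16 = 256 distinct device addresses.
public class X10Address {
    private final char houseCode; // 'A' .. 'P'
    private final int unitCode;   // 1 .. 16

    public X10Address(char houseCode, int unitCode) {
        if (houseCode < 'A' || houseCode > 'P' || unitCode < 1 || unitCode > 16) {
            throw new IllegalArgumentException("X10 allows house codes A-P and unit codes 1-16");
        }
        this.houseCode = houseCode;
        this.unitCode = unitCode;
    }

    @Override
    public String toString() {
        return houseCode + Integer.toString(unitCode); // e.g. "A1" .. "P16"
    }
}
```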

Two main reasons prevent a broader distribution of the X10 technology in Germany. Firstly, technical regulations for wireless communication limit the transmission power to 5 mW per device; with such low power it is nearly impossible to guarantee failure-free operation. Secondly, the usage of a three-phase network requires the installation of a phase coupler, which makes the usage of X10 in private homes uneconomic.

INSTEON 4 has been developed as a new standard to address the inherent limitations of X10 while keeping compatibility with X10. Backward compatibility with

3http://en.wikipedia.org/wiki/X10_(industry_standard), accessed on October 15, 2010 4http://www.insteon.net/pdf/insteondetails.pdf, accessed on November 23, 2010


X10 allows home owners to migrate to the new INSTEON technology without having to throw away all their existing X10 devices.

LonWorks

LonWorks was created to address the needs of control applications. Echelon Corporation is the driving force behind this technology. The LonWorks networking technology is approved as an international ISO/IEC standard 5. It is currently mostly used in business environments, less for home automation purposes. Machine control, street lighting, air conditioning systems, or intelligent electricity metering are typical cases of application. For example, the Italian energy supplier ENEL uses LonWorks as the basis for its advanced metering project with more than 20 million nodes in operation.

Using a peer-to-peer protocol, LonWorks can use twisted pair, power line, fiber optics, and RF as transmission media. The twisted pair layer can transfer up to 78 kbit/s of data, while the power line achieves either 5.4 or 3.6 kbit/s, depending on the used frequency.

BACnet

BACnet is a data communication protocol for building automation and control networks that has been standardized as ISO 16484-5 6. It has been developed by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). Interoperability between devices from different manufacturers is ensured by BACnet if all the project partners involved agree on certain BACnet Interoperability Building Blocks (BIBB) defined by the standard. A BIBB defines the services and procedures that need to be supported on server and client side to achieve a specific requirement of the

5http://www.lonmark.org/news_events/press/2008/1208_iso_standard, accessed on August 8, 2010 6http://www.bacnetinternational.org/displaycommon.cfm?an=1&subarticlenbr=23, accessed on March 7, 2011

system. For each device a PICS (Protocol Implementation Conformance Statement) lists all supported BIBBs, object types, character sets and communication options.

digitalSTROM

A relatively new technology in home automation is digitalSTROM7, developed at ETH Zürich. It offers a simple method of connecting electrical appliances in a home. The system works over the existing mains wiring, making it well suited for upgrading existing homes to smart homes. A small server installed in the fuse box acts as the central controlling unit.

Ant-sized chips (see figure 2.6), mounted directly into the devices or in an adapter plug, let devices communicate with one another. Each device equipped with a digitalSTROM chip can be addressed via its unique address. In contrast to other power line based technologies, digitalSTROM offers an interception-proof power line communication system: it only communicates within a single circuit of a private dwelling. Outside of this circuit, including parallel running wires, it is physically invisible. Another unique feature is the combination of two methods of measuring the power consumption of electrical devices: a per-zone measurement of the power consumption of the electric circuits, using the calibrated value of the house meter as a reference, and a refined per-device measurement down to the individual device using a digitalSTROM dSID chip.

7http://www.digitalstrom.org, accessed on April 7, 2011


Figure 2.6.: digitalSTROM chips - © aizo ag

The system can be controlled with novel digitalSTROM switches, which are equipped with an LED. The LED indicates the application area that is currently being controlled with the switch. Complex configurations can be set up with a web-based configuration utility. The server installed in the fuse box also offers an integrated web server, allowing external services to access and control digitalSTROM devices.

Several target application areas, including comfort, security, entertainment, and energy, provide broad support for developing new applications for a smart home.

EnOcean

EnOcean offers self-powered devices that use energy harvesting mechanisms for home automation. EnOcean technology makes use of energy created from slight changes in motion, pressure, light, temperature or vibration, allowing battery-less and wire-free control sensors to be built. Thereby sensors can be easily installed and will not need any maintenance. Using wireless transmission at 868 or 315 MHz, energy efficiency during radio reception and transmission is also of key importance.


Figure 2.7.: EnOcean power generator transmitter module - © www.enocean.com

The EnOcean Alliance drives the standardization of communication profiles to ensure communication between sensors and gateways of different manufacturers. The EnOcean Equipment Profile (EEP) is a unique identifier that describes the functionality of an EnOcean device irrespective of its vendor. Different interfaces to other technologies like LonWorks, EIB/KNX and TCP/IP allow integration into and extension of already established home automation solutions.

2.4.2. Communication Technology

Typically, different networking technologies coexist in a smart home, as no standard technology has emerged yet. The communication network lets the different devices within a smart home exchange data.

Communication networks can be divided into two main categories: wired and wireless technologies. While the former has advantages regarding security concerns, it lacks ease of installation. The installation of the latter is in most cases easier, yet it has other drawbacks. Besides concerns of inhabitants regarding their well-being because of the wireless installation, the power supply of wireless sensors is often a problem that can easily complicate an installation.

Wireless Technologies

Wireless sensor networks have become increasingly popular as connectivity networks for smart environments (Pensas and Vanhala, 2010). Wireless sensor networks for smart homes allow a flexible and comfortable integration of sensors and devices into the home environment. Installation is easily done, as only a power supply needs to be provided. With batteries getting more powerful, in many cases one can even abandon the direct power supply and integrate a warning system that alerts the user just before the device becomes unusable. Recently, new approaches like the EnOcean technology (section 2.4.1) allow the realization of battery-less, wireless control systems based solely on the use of energy created from slight changes in motion or pressure.

Bluetooth Bluetooth is a standardized and royalty-free radio technology for wireless voice and data communication between mostly small mobile devices, such as a phone with a wireless headset or a laptop with a printer. Up to 256 participants can be part of a network, of which only 8 can be active simultaneously, over short distances of up to 100 meters.

Since version 2.0, data rates of around 2.1 Mbit/s can be achieved, and thus new application areas of the Bluetooth technology are possible, like the encrypted transmission of audiovisual information. The available bandwidth in the 2.4 GHz band is shared between all participating devices. Interference may be caused, however, e.g. by garage door openers, microwave ovens and cordless phones which use the same frequency band.

Currently Bluetooth is very common for the interconnection of mobile phones and the connection of mobile phones to other devices. In home automation Bluetooth has not gained a reasonable market share yet.


DECT DECT (Digital Enhanced Cordless Telecommunications) is a European standard for digital wireless phones. DECT has a coverage of about 30 to 50 m within buildings, using 120 encrypted channels in the frequency band between 1880 and 1900 MHz. In addition, DECT-based wireless data networks with data devices can be operated. The DECT Application Profiles contain additional specifications defining how the DECT air interface should be used in specific applications to achieve maximum interoperability between DECT equipment from different manufacturers. The Packet Radio Service (DPRS) and the Multimedia Access Profile (DMAP), for example, permit data transmission with higher data rates of up to 2 Mbit/s.

Until now, the majority of DECT product shipments have been for residential applications with single base station and single handset configurations for wireless phones.

GSM/UMTS GSM (Global System for Mobile Communication) is currently the de facto standard for mobile phones in Europe. The radio network is based on cells, whose size depends on the number of participants. GSM is used for voice telephony and the Short Message Service (SMS) at data rates of 9.6 kbit/s. Smart phones, notebooks and PDAs transfer data over the wireless network with the more recent GPRS (General Packet Radio Service) with a maximum data rate of up to 160 kbit/s, or with EDGE (Enhanced Data Rates for GSM Evolution) with a realistic data rate of 110 kbit/s for full mobility and 220 kbit/s in stationary operation.

UMTS (Universal Mobile Telecommunications System) is the follow-up standard for mobile communications. While GSM is called second generation (2G), UMTS is referred to as third generation (3G). UMTS was mainly developed to increase the available data rates and enables data rates up to 2 Mbit/s. Based on the UMTS standard, HSDPA (High Speed Downlink Packet Access) enables even higher data rates of up to 7.2 Mbit/s.

While the general availability of GSM/UMTS speaks for this technology, the related costs hinder a broader usage for smart home devices. One exception are smart meters, where many energy providers use GSM for the transmission of usage and billing information.

Konnex-RF Konnex-RF (KNX-RF) is a wireless extension to the twisted pair based Konnex home automation bus. For installation conditions where neither twisted pair nor power line communication can be used, KNX-RF allows data to be transferred wirelessly, while still being able to cooperate with wired KNX devices. Like the complete KNX standard, the wireless part KNX-RF is vendor independent. Using the 868 MHz band, Konnex-RF shows a better transmission behavior within buildings compared to the 2.4 GHz band used by Wi-Fi and many other protocols.

RFID RFID (radio frequency identification) is suitable for the wireless identification of goods (such as serial numbers or product names) over distances from a few millimeters up to several meters. Currently RFID is increasingly used for inventory management, but it is also interesting for refrigerators, microwave ovens, etc., where RFID labels on foods allow the appliance to automatically recognize the date of expiry and provide possible preparation tips.

Wireless-USB Wireless USB is a high-speed wireless networking technology for various devices, such as keyboards, mice, cameras, printers and so on. It provides a complement to the traditional USB interface. Ultra Wideband (UWB), used as the radio-technical base, works with transfer rates of 480 Mbit/s at distances of 3 meters.

WLAN WLAN (Wireless Local Area Network) refers to a wireless network that connects network devices within a radius of several meters to several kilometers, at speeds of 11 Mbit/s with 802.11b and 54 Mbit/s with 802.11g, for example to provide wireless internet access. While the IEEE 802.11n working group has finished the draft for the standardization of high-speed WLANs with 300 Mbit/s, a discussion about gigabit wireless networks has already started.

WIMAX WiMAX (Worldwide Interoperability for Microwave Access) is a wireless technology for wideband, bidirectional high-speed transmissions in the access network with about 75 Mbit/s at a range of up to 50 kilometers. WiMAX cannot fulfill both goals at the same time: it can either operate at higher bitrates or over longer distances, but not both, since operating at the maximum range of 50 km increases the bit error rate and thus results in a much lower bitrate. WiMAX refers to interoperable implementations of the IEEE 802.16 wireless network standard. WiMAX is seen as a next-generation wireless technology designed to enable pervasive, high-speed mobile Internet access for the widest array of devices, including notebook PCs, handsets, smartphones, and consumer electronics such as gaming devices, cameras, camcorders, and music players. WiMAX is a long-range system, covering many kilometers, which uses licensed or unlicensed spectrum to deliver a point-to-point connection to the Internet.

As in most wireless systems, the available bandwidth is shared between the users in a given radio sector, so performance can deteriorate when many users are active in a single sector. In practice, most users will have services in the range of 2-3 Mbit/s, and additional radio cards will be added to the base station as required to increase the number of users that can be served.

ZigBee ZigBee is an industry standard based on IEEE 802.15.4 for wireless sensor and control networks with a low data rate of 20 kbit/s or 250 kbit/s over short distances of about 75 meters. It is intended for the use of maintenance-free wireless switches and wireless sensors with limited energy supply (e.g. battery) in poorly accessible areas, where the replacement of batteries is only possible with great effort. To achieve this, ZigBee offers a comparatively low data rate. The main focus is the lowest possible power consumption, so that battery-powered devices can be operated for several months to several years without replacement.

The ZigBee and IEEE 802.15.4 standards offer the developer three different types of ZigBee devices. With these devices, a ZigBee Personal Area Network (PAN) is built. There are three roles that a ZigBee device can take:

• ZigBee End Device: Simple devices such as light switches implement only a part of the ZigBee protocol and are therefore also called Reduced Function Devices (RFD). They log on to a router, thus forming a network in star topology.

• ZigBee Router: Full Function Devices (FFD) can also act as a router, log on to an existing router and create a tree network topology.

• ZigBee Coordinator: Exactly one router in a PAN also assumes the role of the coordinator. It sets the basic parameters of the PAN and manages the network.

Since the ZigBee standard uses a single coordinator, a failure of the coordinator when using indirect addressing will jeopardize the entire network, since all device and routing information is kept in volatile memory. A ZigBee network is hierarchically structured and has a single point of failure (SPoF). However, routers can be configured so that they take over the task of the coordinator in case of failure.

An advantage of the ZigBee technology is its wireless multihop ability. Long distances can be bridged, as devices can forward information of adjacent devices.

Z-Wave Z-Wave was developed by the Danish company Zensys and the Z-Wave Alliance for home automation, with a special focus on the requirements of home automation technologies. It operates at 868 MHz, allowing data transmission rates between 9,600 bit/s and 40 kbit/s.


Z-Wave uses a meshed network topology, that is, each network node is connected to one or more other network nodes. This has the advantage that a message can be transmitted between two network nodes even if they cannot communicate directly with each other because they are too far apart. In this case, the message is transmitted over one or more intermediate nodes. Due to its similar features, Z-Wave is in direct competition with ZigBee.
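To illustrate the idea of multihop delivery in such a meshed topology, the following sketch forwards a message along intermediate neighbours using a plain breadth-first search over a neighbour table. It only illustrates the principle; it is not the routing algorithm actually specified by Z-Wave or ZigBee.

```java
import java.util.*;

// Illustrative sketch of multihop delivery in a meshed topology: if two nodes are
// out of direct radio range, the message travels over intermediate neighbours.
public class MeshRoute {

    /** neighbours maps each node id to the node ids it can reach directly. */
    public static List<Integer> findRoute(Map<Integer, Set<Integer>> neighbours,
                                          int source, int destination) {
        Map<Integer, Integer> predecessor = new HashMap<>();
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(source);
        predecessor.put(source, source);

        while (!queue.isEmpty()) {
            int current = queue.poll();
            if (current == destination) {
                // Reconstruct the hop sequence from destination back to source.
                LinkedList<Integer> route = new LinkedList<>();
                for (int n = destination; n != source; n = predecessor.get(n)) {
                    route.addFirst(n);
                }
                route.addFirst(source);
                return route;
            }
            for (int next : neighbours.getOrDefault(current, Collections.emptySet())) {
                if (!predecessor.containsKey(next)) {
                    predecessor.put(next, current);
                    queue.add(next);
                }
            }
        }
        return Collections.emptyList(); // no path: the nodes are partitioned
    }
}
```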

2.4.3. Location

Location information is important in almost every smart home scenario. Explicit input methods are one easy way to provide context-aware systems with location information: users inform the system about changes of their location, e.g. by using a fingerprint sensor before entering a room or by entering their current position in a form field before starting to use a service. Such explicit user interaction for acquiring context information contradicts the vision of ubiquitous computing. Also, the provided information may be neither very accurate nor dependable without demanding too much effort from the user. To realize autonomous smart home systems, only automatic location sensing techniques can be used; these will be considered in the following. Depending on the needed precision of the information, it is more or less difficult to obtain location information.

Technologies

Common to all location sensing technologies presented here is the use of signal measurements to determine the position of the client. In the following we present the main technologies that can be used to determine the location of an entity.


Nearest sensor The least precise but simplest method is the nearest-sensor method. It determines the access point or cellular base station with which a client device is associated and assumes that this sensor is the closest one to the device. It then considers how far the signal radiates. The 360-degree radiation "cell" surrounding the sensor is as precise as this method alone gets, even presuming that the client does indeed associate with the nearest sensor. If a base station has approximately a 100-by-100-meter coverage area, for example, the nearest-sensor method tracks the client to within a 10,000-square-meter area. Note, though, that a client might associate with a sensor a bit farther away if the nearest one is overloaded or its signal strength is otherwise not as strong.
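A minimal sketch of the nearest-sensor idea follows: the client is simply assumed to lie in the coverage cell of the station it hears loudest. The class name and the RSSI-based representation are assumptions made for illustration.

```java
import java.util.Map;

// Sketch of the nearest-sensor method: the client is assumed to be somewhere in
// the coverage cell of the base station it hears loudest. The precision is the
// whole cell area (e.g. a 100 m x 100 m cell corresponds to 10,000 square meters).
public class NearestSensorLocator {

    /** rssiByStation maps a base station id to the received signal strength in dBm. */
    public static String estimateCell(Map<String, Double> rssiByStation) {
        String best = null;
        double bestRssi = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Double> e : rssiByStation.entrySet()) {
            if (e.getValue() > bestRssi) { // higher (less negative) dBm = stronger signal
                bestRssi = e.getValue();
                best = e.getKey();
            }
        }
        return best; // the client is assumed to be within this station's cell
    }
}
```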

Triangulation The usage of triangulation methods goes back to ancient times. Triangulation measures the angles to three or more nearby known points; the intersection of the lines defined by those angles is calculated as the client location. With triangulation, accuracy is reduced if the signal is reflected off the walls in a room or if the signal has taken multiple paths before reaching the device. Triangulation does not take into account the effects that a building and/or other objects can have on a signal. Trilateration is similar, but uses the distances to known points rather than the angles.
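As an illustration of the related trilateration approach, the sketch below computes a 2D position from the distances to three known reference points by subtracting the circle equations pairwise and solving the resulting linear system. It ignores the noise, reflection and multipath effects discussed above; all names are illustrative.

```java
// Sketch of 2D trilateration from distances to three known reference points.
// Subtracting the circle equations pairwise yields a linear system that is
// solved with Cramer's rule.
public class Trilateration2D {

    /** x[i], y[i] are the known positions, d[i] the measured distances (3 each). */
    public static double[] locate(double[] x, double[] y, double[] d) {
        double a = 2 * (x[1] - x[0]);
        double b = 2 * (y[1] - y[0]);
        double c = d[0] * d[0] - d[1] * d[1] - x[0] * x[0] + x[1] * x[1]
                 - y[0] * y[0] + y[1] * y[1];
        double e = 2 * (x[2] - x[1]);
        double f = 2 * (y[2] - y[1]);
        double g = d[1] * d[1] - d[2] * d[2] - x[1] * x[1] + x[2] * x[2]
                 - y[1] * y[1] + y[2] * y[2];

        double det = a * f - b * e; // zero if the reference points are collinear
        return new double[] { (c * f - b * g) / det, (a * g - c * e) / det };
    }
}
```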

Wi-Fi Positioning Normally, indoor location sensing systems require the installation of additional hardware infrastructure. As more and more wireless access points are being installed in today's cities, these access points have started to be used for location sensing. A requirement for using this technology is a map of well-known positions of access points. Companies have started to drive through major cities with specially equipped sensor vehicles that scan existing Wi-Fi access points and their positions. Using triangulation or the nearest-sensor technique, a Wi-Fi client can determine its location

very accurately. Apple's first generation iPhone was one of the first broadly available devices that made use of Wi-Fi positioning (due to the lack of an internal GPS sensor). Surprising to many, the device showed great accuracy within major cities but, as expected, had no luck in getting a position in rural areas.

Microsoft Research has developed RADAR, a building-wide tracking system based on the IEEE 802.11 Wi-Fi technology (Bahl and Padmanabhan, 2000). RADAR measures, at the base station, the signal strength and signal-to-noise ratio of signals that wireless devices send, and then uses this data to compute the 2D position within a building. Microsoft has developed two RADAR implementations: one using scene analysis and the other using lateration.

The Cricket Location Support System uses ultrasound emitters to create the infrastructure and embeds receivers in the object being located (Priyantha et al., 2000). Cricket uses the radio frequency signal not only for synchronization of the time measurement, but also to delineate the time region during which the receiver should consider the sounds it receives.

2.4.4. Input and output devices

Input and output devices are the interface to the inhabitant. They give the inhabitant control over the environment and let him override or adjust actions that have been taken or proposed by smart home services.

Input devices

In many smart home scenarios, touch displays integrated into walls serve as input devices for the inhabitants. While this might seem convenient at first, there are some drawbacks that come with this input method. The user is bound to a specific position in his home in order to interact with the system; also, adaptation can only be done for the graphical

part of the user interface. A first extension to this input method is adding voice input at the same place. This makes the user a bit more flexible, e.g. when he has no hands free.

To give the user even more flexibility, other input methods need to be considered. In the living room, the inhabitant is used to interacting with his television screen via a remote control. Thus, the remote control should also be considered as an input device.

In addition, other input methods like gestures that are captured by cameras in the smart home environment, or implicit inputs from sensors around the environment, should be taken into account. Such sensors could, for example, sense the current position of the user or the status of a device. Based on this input information, services and user interfaces can adapt themselves in order to support the user in the best possible way.

Output devices

The purpose of output devices in smart home environments has two aspects. The first is, of course, to support the direct interaction with the user. These devices present information to the user, for example via graphical interfaces on touch screens, the TV screen, or mobile devices. Additional output channels like voice can be added. The second aspect of output devices is to control the environment. These devices change the flow of the air conditioning system or turn appliances on and off.

2.5. Challenges

The aforementioned application areas, scenarios and technologies raise different challenges for developers that intend to realize smart home services.


2.5.1. Hardware and appliances

Hardware and appliances for smart homes are strongly diversified. An overview of relevant technologies has been given in section 2.4. Smart homes bring together branches and technologies that have been separate for a long time; white goods and entertainment hardware, for example, had nothing in common until now. Here lies one of the challenges for realizing smart homes: how can hardware from different branches and manufacturers, using different communication standards and technologies, be integrated into a smart home, and how can interoperability between these devices be achieved?
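One common way to approach this challenge is to hide the concrete technology behind a uniform device abstraction. The sketch below only illustrates the idea; all names are assumptions and it is not the concrete API of the device management component presented later in this thesis.

```java
// Minimal sketch of a uniform device abstraction that hides the underlying home
// automation technology (KNX, Z-Wave, digitalSTROM, ...) from services.
interface SmartHomeDevice {
    String getId();                // technology-independent device identifier
    String getTechnology();        // e.g. "KNX", "Z-Wave", "EnOcean"
    void invoke(String operation); // e.g. "switchOn", "dimTo50"
}

// A technology-specific adapter maps the abstract calls onto the native protocol.
class KnxLightAdapter implements SmartHomeDevice {
    private final String groupAddress; // KNX group address, e.g. "1/2/3"

    KnxLightAdapter(String groupAddress) {
        this.groupAddress = groupAddress;
    }

    public String getId()         { return "knx:" + groupAddress; }
    public String getTechnology() { return "KNX"; }

    public void invoke(String operation) {
        // A real adapter would send the corresponding KNX telegram here;
        // printing keeps this sketch self-contained and runnable.
        System.out.println("KNX telegram to " + groupAddress + ": " + operation);
    }
}
```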

A smart home is also a dynamic environment regarding its technical equipment. With new technologies making it into our homes, consumers buy new categories of devices that have not been present before. In addition, the average lifetime of home appliances is limited. As a result, devices get replaced or added to a smart home from time to time. In addition to the challenge of interconnecting devices, a smart home should also offer the possibility to add new devices with previously unfamiliar technologies. In order to avoid having to call a technician every time the user changes one of his devices, this process should be as simple as plug-and-play technology on today's computers. Besides the hardware integration, new devices should also automatically enhance the software capabilities of the smart home in order to allow occupants to get the most out of their new possessions.

Many smart home scenarios are based on information about the current position of the user. Particular attention shall therefore be given to location-aware technology. Many systems, technologies, and basic approaches exist for determining the user's position. Besides special requirements regarding security and privacy, these technologies should also be as non-intrusive as possible. Neither should the user be burdened with carrying extra tags, nor should the lack of accurate location information result in malfunctioning of the whole system.

2.5.2. Human Computer Interaction

The possibilities of smart home technologies raise new issues for human-computer interaction. Switches can have more than the simple function of turning lights on and off. They can have different operating levels for advanced functions that can be activated through a sequence of interaction steps. While these might be usable for the installer of the system, other members of the family or guests will not be able to use these functions. Intuitive and seamless interaction across different contexts of use can be achieved by providing ubiquitous user interfaces that adapt to the environment, its users and platforms.

Services in smart environments target the assistance of the user during everyday life. Thus, the less interaction between humans and the computing systems is needed, the better. The goal should be to create a home environment that acts in the right way and according to the intention of the occupant. Only when smart homes meet this challenge will it be possible to let computing technology become as invisible as envisioned in the vision of ubiquitous computing.

3. Literature Review

Different research fields contribute to realizing smart home environments. Research in ubiquitous computing delivers the fundamentals for all smart environments. In addition, specialized research about smart homes has evolved in recent years. As we see context and situation awareness as a mandatory basis, we also describe some of the most important research projects in this field.

3.1. Ubiquitous Computing / Pervasive Computing

The terms ubiquitous computing and pervasive computing commonly refer to the same computing paradigm, which follows the realization of Mark Weiser's vision of small, inexpensive, interconnected devices distributed throughout our everyday lives. Ubiquitous computing research delivers important general concepts for the realization of smart home environments.

Services in pervasive computing have been defined by (Kumar and Das, 2006) as “what you want, when you want it, how you want it, and where you want it” kinds of services to users, applications, and devices. They elaborate that these services could help meet the challenges encountered in myriad applications in almost all areas of human activity and thereby improve human quality of life. Cooperation and collaboration between various devices and software entities have been identified as a necessity for realizing pervasive computing services, and thus as one of the major challenges.


We take this up in our approach by providing a component that allows seamless and transparent integration of devices based on different technologies.

3.1.1. Mark Weiser's “The Computer for the 21st Century”

Mark Weiser’s article “The Computer for the 21st Century” (Weiser, 1991) published in the Scientific American laid the foundations of ubiquitous computing as a research area. Even though technology did not allow realizing his vision by the time of his writing, he envisioned a new paradigm that’ll change the use of computing technol- ogy radically and that has not been realized until today. His ideas would change the way we interact with computers radically. Computing would become a standard in our lives, without needing extra attention like computers do until now.

Weiser clearly distinguished ubiquitous computing from the trend of mobile computing that was emerging at that time:

“Ubiquitous computing in this context does not just mean computers that can be carried to the beach, jungle or airport.” (Weiser, 1991)

While the focus of mobile computing lies on the mobility of devices, ubiquitous computing goes way beyond this trend. Mobile computing is seen as only one technological precondition, delivering some fundamentals and basic concepts that allow the realization of ubiquitous computing. A core statement of his vision was that computing technology will vanish from explicit usage in the foreground into implicit usage in the background:

“we are trying to conceive a new way of thinking about computers in the world, one that takes into account the natural human environment and allows the computers themselves to vanish into the background”

By that time computers were still explicit objects that required explicit usage. It was unthinkable that computers could work in the background to support the user non-intrusively. But that is exactly what this vision predicted. This description fits well into the idea of smart homes: even though these homes will be saturated with computing technologies, the technology will not be visible at first; only its effects will be noticeable. In Weiser's vision even the awareness of these technologies was questioned.

“Such a disappearance is a fundamental consequence not of technology, but of human psychology. Whenever people learn something sufficiently well, they cease to be aware of it.”

Looking at today’s smart home we can observe thinking about technology plays still a major role. We are fare away from realizing the vision of ubiquitous computing, even if we have many of the needed technologies available right out of our hands. Only in small closed areas ubiquitous computing is already state of the art. Until we are surrounded by a complete ubiquitous environment there are still more challenges to be resolved.

3.1.2. Donald A. Norman “The Invisible Computer”

While Donald Norman acknowledges in “The Invisible Computer” (Norman, 1999) that the personal computer allows for "flexibility and power," he also makes its limitations perfectly clear. Currently, computer users have to navigate through help pages, frequently asked questions, and wizards to perform a task such as searching the web or creating a spreadsheet. "Of all the technologies, perhaps the most disruptive for individuals is the personal computer," he writes. "It should be quiet, invisible, unobtrusive." His vision is that of the "information appliance": digital tools created to answer our specific needs, interconnected to allow communication between devices.

Normal picks up Weiser’s vision and expands it more. "Design the tool to fit so well that the tool becomes a part of the task." He proposes using the PC as the infrastructure for devices hidden in walls, in car dashboards, and held in the palm of the hand. His

51 3. Literature Review message is reasonably situated in the concept that the tools should bend to fit us and our goals (and not vice versa): we sit down to write, not to word process; to balance bank accounts, not to fill in cells on a spreadsheet.

He envisions “a user-centered, human-centered humane technology where today's personal computer has disappeared into invisibility.” This means that explicit usage, as we have been used to for decades, will disappear just like the awareness of the technology. Instead of adapting our tasks to fit into restrictions given by programs or services, we should be surrounded by technology that really assists us.

3.1.3. Cook “How smart are our environments? An updated look at the state of the art”

A quite comprehensive overview of current research in smart environments is given by Cook and Das (2007). A smart environment is defined there “as one that is able to acquire and apply knowledge about the environment and its inhabitants in order to improve their experience in that environment” (Cook and Das, 2007). According to figure 3.1, the authors see smart environments characterized by an intelligent agent, which captures the condition of the house and its inhabitants and reacts to this information.

Figure 3.1.: Smart environments as an intelligent agent, based on (Cook and Das, 2007)


The authors go through the challenges of creating smart environments, starting from the low-level physical layer, where sensors play the major role, up to the top layer of decision making. For each layer, different research projects with their own solutions for the particular layer are presented.

The view of smart environments as an intelligent agent can be seen as a general basis when realizing services for smart environments. Services and the environment will continuously demand action and reaction.

3.2. Smart Home projects

Many research projects have focused especially on various aspects of the realization of smart homes. In the following we describe some of the most common smart home projects and their approaches.

3.2.1. Georgia Tech Aware Home

The Georgia Institute of Technology has built a home for research issues centered on computing in the home (Kidd et al., 1999). The setting allows conducting research in an authentic yet experimental setting.

A particular feature of this project is the construction of two independent living spaces. This setting allows occupants to live on one floor while prototypes or demonstrations are deployed on the other floor. As in many smart home projects, the research in the Aware Home covers many different areas. Context awareness, ubiquitous sensing and individual interaction with the home might be the main interests of the researchers that contribute to this project.

Apart from technological issues, the Aware Home project also pursues a human-centered research agenda. One specific application is the support for the elderly. The question is how a person can be assisted in order to remain in familiar surroundings for a longer period of their lifetime. To achieve this goal, the project designed a system that provides monitoring capabilities for elderly people who do not need continuous assistance.

3.2.2. MavHome

The MavHome project (Cook et al., 2003) of the University of Texas at Arlington uses an agent-based middleware to organize the capabilities of a smart home. The architecture allows developing and integrating components that will enable the intelligent environments of the future. Sensors perceive observations, the system reasons about those observations, and actions can be taken to automate features of the environment. The MavHome project uses a concept of abstract and concrete layers for its architecture. The physical layer contains the devices, transducers, and network hardware of the smart environment. The communication layer is responsible for exchanging information among agents; it provides an infrastructure for exchanging information, requests, and queries. The information layer collects and stores all useful information about the environment. The decision layer on top of the architecture selects actions for the agents to perform, using information of the underlying layers. The architecture was implemented using a CORBA interface between software components.

To reduce the complexity of controlling a complete smart home, the problem is decomposed into reconfigurable subareas. Using an intelligent, pattern-based prediction component, the MavHome architecture allows agents to use previously observed activities for decisions on controlling devices throughout the smart home.

The MavHome project recognized at an early stage that the combination of different technologies is necessary to create smart environments. MavHome combines artificial intelligence, machine learning, mobile networks, sensors, and other technologies to create a smart home that acts as an intelligent butler for the user.


3.2.3. Gator Tech Smart House

The Gator Tech Smart House developed by the University of Florida (Helal et al., 2005) aims to offer a scalable, cost-effective way to develop and deploy extensible smart technologies.

The presented generic reference architecture for pervasive computing environments offers several components that are nowadays part of many architectures for smart homes – including our own. The physical layer contains the various devices and appliances the inhabitants use. Above it, the sensor layer communicates with a wide variety of devices, appliances, sensors, and actuators and represents them to the rest of the middleware in a uniform way. The service layer maintains leases of activated services. The knowledge layer allows reasoning about services using an ontology of the various services offered and of the appliances and devices. The context management layer defines or restricts service activation for various applications based on environmental changes represented by context items. The topmost layer is the application layer, offering an application manager to activate and deactivate services.

3.2.4. iDorm

The Essex intelligent dormitory (iDorm) is a student dormitory and intelligent inhabited environment test-bed developed at the University of Essex (Hagras et al., 2004). It uses agent technology to learn user behavior and adapt to user needs in order to create an ambient intelligence environment. The system recognizes the people that live in it and programs itself to meet their needs by learning from their behavior.

The testbed combines several different networks and provides a common protocol to act as a gateway between the different sensors and effectors (Pounds-Cornish and Holmes, 2002). The gateway server offers a standard interface to all network technologies by exchanging XML-formatted queries with all the principal computing components. This gateway server allows the iDorm system to operate over any standard network.

The rules available to the iDorm embedded agents are divided into fixed and dynamic sets, where the dynamic behaviors are learned from the person's behavior and the fixed behaviors are preprogrammed. Predefined rules are primarily used for properties that cannot be learned, e.g. the temperature at which a water pipe freezes. In addition, specific safety, emergency, and economy behaviors are embedded into static rules.

Learning of rules consists of a nonintrusive monitoring phase and a lifelong-learning phase. Learning is based on negative reinforcement, because users will usually request a change to the environment when they are dissatisfied with it. The learning mechanism replaces a subset of rules that were consistently unsatisfactory to the user in order to improve the system performance over time.

3.2.5. Adaptive House

The idea of building smart homes has been around for years, yet many mistakes have been made in the past, as highlighted by (Mozer, 2005). Explicit interaction with a smart home, even if it is made very simple, in combination with the forgetfulness of the inhabitants is counterproductive. Even if user interfaces are made as simple as possible, the enormous variety of options a smart home offers can lead to frustration. In many early smart home installations a big part of the possibilities stays unused.

Learning from these mistakes, researchers at the University of Colorado have proposed and implemented a smart home environment premised on the notion that there should be no user interface beyond the sort of controls one ordinarily finds in a home. Since it is not possible to take into account all aspects of living, the implementation focused on home comfort systems like temperature regulation and lighting.


Using reinforcement learning algorithms, the system was able to predict, for example, the return of the inhabitant for the current day based on previous days. If the system noticed a manual adjustment of automatically controlled values, a discomfort cost was incurred. Additionally, energy costs were incurred based on the intensity setting of a bank of lights.
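The following sketch illustrates such a combined cost in a strongly simplified form: a penalty for manual overrides plus a cost proportional to the chosen light intensity. The constants and the structure are assumptions made for illustration, not Mozer's actual formulation.

```java
// Illustrative cost model in the spirit of the Adaptive House: the controller is
// penalised when the inhabitant manually overrides a decision (discomfort) and
// for the energy the chosen light intensity consumes.
public class LightingCost {
    private static final double DISCOMFORT_PENALTY = 1.0; // per manual override (assumed)
    private static final double ENERGY_PRICE = 0.01;      // per unit of intensity (assumed)

    public static double cost(double lightIntensity, boolean manuallyOverridden) {
        double energyCost = ENERGY_PRICE * lightIntensity;
        double discomfortCost = manuallyOverridden ? DISCOMFORT_PENALTY : 0.0;
        return energyCost + discomfortCost; // a learner would try to minimise this over time
    }
}
```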

The implemented system was evaluated informally. Even though the system appeared surprisingly intelligent at times, it also frustrated the user from time to time. A common mistake of the system was the incorrect prediction of the passage of the user into another room, resulting in lights that would turn on unnecessarily. Prediction of human behavior is limited, and so is the implementation of completely automated homes that try to automate every single aspect of daily life.

3.2.6. PlaceLab

The Massachusetts Institute of Technology has developed an apartment-scale research facility which allows testing and evaluating new technologies and design concepts in the context of everyday living (Intille et al., 2005, 2006). PlaceLab combines the capabilities of a highly instrumented research environment with a residential building. It therefore allows researchers to systematically test and evaluate strategies and technologies for the home in a home setting.

Using hundreds of sensing components allows the deployment of new and innovative applications that help people easily control their environment, save resources, remain mentally and physically active, and stay healthy.

These goals of the PlaceLab project match closely with the main application areas for smart homes as stated in section 2.3. Its technological focus is on developing context-aware and ubiquitous interaction technologies.


3.2.7. CASAS Smart Home Project

The CASAS smart home, a research project at Washington State University, focuses on the creation of an intelligent home environment that perceives its environment through the use of sensors and can act upon the environment through the use of actuators (Cook and Das, 2004). Intelligent environmental agents pursue certain overall goals, such as minimizing the cost of maintaining the home and maximizing the comfort of its inhabitants. In order to meet these goals, the home reasons about the gathered information and adapts its behavior accordingly.

Based on sensor information the research focuses on topics like activity recognition, identification of trends and anomalies, monitoring of exercise habits, or estimating energy usage.

3.2.8. Summary

Many projects targeting the smart home have been conducted in the past, each of them targeting specific aspects. The main area of interest of these projects lies in the support of the inhabitant through the creation of somewhat intelligent environments. The services offered should simplify living in such an environment.

The Gator Tech Smart House project recognized the need for a reference architecture for smart homes that fulfills many of the requirements that arise when realizing such services. The general structure of these architectures has not changed, yet many details of these architectures can be improved in order to support better service development.

3.3. Context- and Situation Awareness

Context and situation awareness play a major role for many smart home services. They are the link between services and the environment. As a research field this has attracted researchers for a long time. The first context-aware applications were simple location-based services that used the position of the user as the only context variable. Today, complex context frameworks exist that include any available information about the environment and provide intelligent reasoning mechanisms on the underlying context model.

3.3.1. Context Toolkit

At Georgia Tech, work has been conducted to create a middleware supporting the creation of context-aware applications (Salber et al., 1999). The Context Toolkit is similar to a GUI toolkit and is used for developing context-aware applications, simplifying their design, implementation, and evolution. This work emphasizes the strict separation of context sensing and storage from the application-specific reaction to contextual information. The architecture, using an object-oriented approach, contains three types of objects: widgets, servers, and interpreters.

Figure 3.2.: Architecture of Context Toolkit

A context widget is a software component that provides applications with access to context information from their operating environment. Low-level sensing implementations are hidden from the high-level application implementation. This means that widgets hide the complexity of the sensors used from the applications and can abstract context information to suit the expected needs of applications. They are intended as reusable and customizable building blocks of context sensing. A widget is defined by its attributes and callbacks. Attributes are pieces of context that it makes available to other components via polling or subscribing. Callbacks represent the types of events that the widget can use to notify subscribing components. Other components can query the widget's attributes and callbacks. The widget also allows other components to retrieve historical context information.
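The following conceptual sketch mirrors this attribute/callback idea in code. It is not the actual Context Toolkit API; all names are illustrative assumptions.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CopyOnWriteArrayList;

// Conceptual sketch of a context widget with attributes (pollable state) and
// callbacks (event notifications) that hide the sensor behind a uniform interface.
abstract class ContextWidgetSketch {

    interface Callback {
        void onEvent(String event, Map<String, Object> attributes);
    }

    private final List<Callback> subscribers = new CopyOnWriteArrayList<>();

    /** Polling: other components query the widget's current attributes. */
    public abstract Map<String, Object> poll();

    /** Subscribing: components register for events such as "presenceChanged". */
    public void subscribe(Callback callback) {
        subscribers.add(callback);
    }

    /** Called by the concrete sensor implementation whenever new data arrives. */
    protected void fire(String event, Map<String, Object> attributes) {
        for (Callback c : subscribers) {
            c.onEvent(event, attributes);
        }
    }
}
```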

Context servers are used to collect the entire context about a particular entity, such as a person. The context server is responsible for subscribing to every widget of interest and acts as a proxy to the application. It can be seen as a compound widget. Just like widgets, it has attributes and callbacks, it can be subscribed to and polled, and its history can be retrieved.

Context interpreters are responsible for implementing the interpretation of context information. They can transform context between different representation formats or combine different pieces of context information to create new context information. Each of these objects is autonomous in execution. The objects can be instantiated all on a single computing device or on multiple computing devices. For communication between the different objects, HTTP and XML are used; they can be replaced by other mechanisms if desired.

Application developers are burdened with discovering the building blocks and are exposed to differences in context derivation (whether contexts are sensed, interpreted or aggregated). Furthermore, since the context components are treated as separate entities, it is not clear how the functionality of dynamic data composition from multiple components may be integrated into the toolkit metaphor. Also, the Context Toolkit does not provide a common context model, thus not enabling knowledge sharing or reasoning about context.

3.3.2. Context Broker Architecture (CoBrA)

CoBrA (Context Broker Architecture) is an infrastructure that supports the interaction among agents, services and devices that explore context information in active spaces (Chen, 2004). CoBrA provides a framework to simplify context-aware application development. Its main component is an intelligent agent called Context Broker that collects context information from different heterogeneous sources like the semantic web, sensors, agents, and other devices and makes it available to other applications. The context broker is also responsible for providing a common context model that can be shared by all devices, services, and agents in the space, for mediating the information exchange between context providers and resource-restrained context consumers, for protecting the privacy of the user by enforcing user-defined policies about controlling and sharing of contextual information, and for reasoning about data that is not directly available from sensors. For communication between the broker and the information sources the JADE API has been used. JADE can rely on different communication protocols like Java RMI, HTTP or IIOP. The payload is represented in FIPA ACL (Agent Communication Language).

The central broker is composed of different components in order to fulfill the tasks described above:

• CoBrA Ontology (COBRA-ONT): The ontology is required to describe contextual information. It defines classes and relationships between entities for the specific domain CoBrA is aimed at: smart meeting rooms. Thus COBRA-ONT defines people, agents, places, and presentation events for supporting an intelligent meeting room system on a university campus.


• Context acquisition module: It is responsible for collecting context information from all available resources. This is usually performed by agents that implement specific communication mechanisms to obtain the desired information from the original source (sensors, devices, databases, and so on).

• Context knowledge base: It acts as a central storage for all context information provided by the different sources. The context knowledge base uses an RDF triple storage facility and is implemented on top of an SQL database. Single devices do not need to take care of maintaining the context information and can use the context knowledge base for that.

• Context reasoning engine: It is a logic inference engine that is used for reasoning with ontologies and detecting inconsistencies in context information. The engine consists of two parts: the first part uses ontology-based reasoning, the second part uses domain-specific heuristics in the form of rules.

• Policy management module: It checks the user's policies before sharing personal contextual information. The privacy language in CoBrA is based on the Rei policy language (Kagal, 2004), an ontology for modelling “rights, prohibitions, obligations, and dispensations (deferred obligations)”.

CoBrA brokers can form federations in order to share information. CoBrA lacks any pervasive discovery mechanisms, so elements in the environment need to be known in advance or need to use the mechanisms provided by the JADE API for agent discovery. Thus, in a smart environment based on CoBrA, all computational entities must be aware of the context broker in advance. Agents that provide and consume context information communicate directly with the context broker to carry out their tasks. This can be disadvantageous because the context broker becomes a bottleneck. Besides, there is no API available to help programmers implement applications.


Figure 3.3.: CoBrA Architecture

The main innovation of CoBrA was the usage of ontologies and semantic web technologies, such as RDF and OWL, for modeling ubiquitous computing applications.

3.3.3. Service-Oriented Context-Aware Middleware (SOCAM)

The Service-Oriented Context-Aware Middleware (SOCAM) aims to enable the building and rapid prototyping of context-aware services in pervasive computing environments (Gu et al., 2005). The architecture provides efficient support for acquiring, discovering, interpreting and accessing various contexts. In addition, a context model based on the Web Ontology Language (OWL) is proposed.


Figure 3.4.: SOCAM context model

The architecture of SOCAM is similar to that of CoBrA: a central context interpreter collects context information and handles requests from applications.

Figure 3.5.: SOCAM architecture

The SOCAM architecture consists of five main components:


1. Context Providers: They are used to abstract useful contexts from heterogeneous sources, which can be external or internal. While external context providers collect information from sources outside of the context-aware system (e.g. weather information), internal context providers collect information from sources within the environment of the context-aware system, like location sensors. Context providers convert the collected information to OWL representations so that contexts can be shared and reused by other service components (see the sketch after this list).

2. Context Interpreter: It provides logic reasoning services including inferring indirect contexts from direct contexts, querying context knowledge, maintaining the consistency of context knowledge and resolving conflicts.

3. Context Database: It stores context ontologies and instances according to the sub-domain, i.e., a smart home.

4. Context-aware Services: They make use of different levels of context and adapt the way they behave according to the current context.

5. Service Locating Service: The Service Locating Service provides a mechanism where Context Providers and the Context Interpreter can advertise their presence; it also enables users or applications to locate these services.
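As a rough illustration of publishing context as OWL/RDF, the sketch below builds a single hypothetical statement with the Apache Jena API (current package names, not the library version available at the time of SOCAM). The namespace, resource and property names are made up for illustration and are not SOCAM's actual ontology.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

// Sketch of how a context provider could publish sensed context as an RDF
// statement so that other components can share and reason over it.
public class ContextProviderSketch {
    private static final String NS = "http://example.org/smarthome#"; // hypothetical namespace

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        Resource inhabitant = model.createResource(NS + "Alice");
        Resource livingRoom = model.createResource(NS + "LivingRoom");
        Property locatedIn  = model.createProperty(NS, "locatedIn");

        // "Alice is located in the living room" as a machine-readable triple
        inhabitant.addProperty(locatedIn, livingRoom);

        // Serialise the model so other service components can consume it
        model.write(System.out, "TURTLE");
    }
}
```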

A novel approach of the SOCAM architecture is the consideration of dependencies between context items.

Independent service components enable this architecture to operate in distributed and heterogeneous networks.

3.3.4. ContextExplorers

The ContextExplorers project provides a solution for the management and control difficulties associated with systems that are intended for smart environments. Instead of using a centralized, server-oriented, resource-consuming mechanism that would not scale well, would not be fault tolerant, and would be inappropriate for complex ubiquitous systems, ContextExplorers uses a decentralized system to improve the system's fault tolerance and scalability as well as the ability to intelligently gather information in a dynamically changing network environment.

Figure 3.6.: Situation spaces

The introduction of “Context Spaces” (Zaslavsky, 2008; Padovitz et al., 2008), which describe context and situations as geometrical structures in a multidimensional space (see figure 3.6), provides a novel context model and reasoning approach. Using a theoretical model built on intuitions from state-space analysis and multi-dimensional geometry for capturing context details, the implementation has been applied to mobile agents, creating context-aware mobile agents that process context information in complex, open, and heterogeneous environments. The Context Spaces approach offers reasoning, verification and prediction of context in context-aware, ubiquitous environments. Thus Context Spaces can be useful for context-aware ubiquitous computing systems that have to deal with uncertainty.


3.4. Summary

In the literature review we have taken a closer look at existing research in three areas.

Research in ubiquitous computing can be seen as the starting signal for research that focuses on the realization of smart environments. The works of Weiser and Norman raise many challenges that are still open research questions. Open issues in these projects are specific technological issues, such as the heterogeneity of devices in smart environments, that need to be addressed. We have addressed this gap with the device management component of our architecture.

Smart homes have attracted many researchers, as we have shown with the description of various projects. Many of these projects were driven by a single use case. In our approach, we develop the architecture independently of specific use cases in order to support various use cases and address general barriers to creating smart home services.

For the context- and situation-aware layer in our approach, research in context- and situation awareness delivers the basic concepts. These have been understood and utilized by many researchers today. The utilization of context information can still be improved, as much of the information that is clearly identifiable for users remains unused by services. In our approach, we transfer concepts of the perception of situations into the context- and situation layer in order to be able to understand the intention of the user.


4. CoSHA - Conceptual Smart Home Architecture

Based on the knowledge gained in the literature review and the general description of smart homes, we have developed a conceptual architecture for smart homes: CoSHA. This architecture addresses three of the most important problems when creating smart home solutions. Before describing these solutions, we start with a requirements analysis that provides an in-depth, user-oriented view for the realization of the system.

4.1. Requirements analysis

The following requirements analysis determines the needs and conditions that should be met by applications and services for smart homes. Use cases and scenarios are used to document these requirements.

4.1.1. Use cases

Use cases describe requirements in a customer-centric way. Commonly they illustrate a potential series of interactions between a software entity and an external entity (e.g. the customer), which lead the entity towards something useful. Functional product features are described via a customer’s interaction with the product and thus state how the feature satisfies the customer’s needs. But use cases do not describe how the feature is implemented. In the literature, use cases often contain very detailed descriptions without following a fixed structure. The objective of a use case is to create a list of requirements for the actors within a smart environment.

The Digital Living Network Alliance (DLNA) defines the term use case as follows (DLNA, 2004):

Use cases are descriptions that define a user need, the context of use and behaviors that meet that need. They may hint at the value derived from meeting the need, but do not provide much detail. Use cases describe fundamental human needs that are not likely to change in their broad scope over time. They act as umbrella categories that each has one or more usage scenarios associated with them.

According to the above definition, use cases are categories of usage describing basic human needs that don’t change markedly over time. Since they are an expression of human needs, they also change less rapidly than the underlying technologies that enable them. Usually they contain all possible paths through the use case (main path, alternative paths, exception paths). These paths are also known as scenarios and are separated from the use cases by the DLNA. Thus, use cases are simple descriptions and as such it is easy to create them as a group containing a set of scenarios. The association of detailed scenarios with use cases allows tracking how technology solutions change over time and assessing improvements in their ability to meet and then exceed users’ needs and expectations. The relation between use cases and their associated scenarios is further expressed and explicitly underlined by the term “usage scenario” introduced by the DLNA. A usage scenario provides details about the user (attitudes, knowledge, habits, experience), the use context (social factors, environment, information requirements) and the solution that supports or enables the particular use.

To overcome the lack of a published use case theory, (Cockburn, 1997) introduces a theory based on a small model of communication, distinguishing “goals” as the key element of use cases. This way, use cases become an easy-to-use, scalable, recursive model that provides benefits beyond requirements analysis. Using goals as the distinct element that makes use cases distinguishable also helps with project management, project tracking, staffing, and business process re-engineering.

In the following we use the use case templates provided by Alistair Cockburn (Cockburn, 1998) in order to structure our use cases.

The following use case from the application area of energy efficiency offers the user the advantage of saving money without forgoing too much comfort. Realizing this use case demands integration and communication between different technological branches.

Use case name: Energy efficient appliance usage
Goal: To save money through postponing usage of the dryer
Summary: The user fills the dryer with wet laundry and starts the program. As the energy tariff is cheaper at a later time, the system suggests postponing the activity, which the user accepts. Later the program is started automatically.
Preconditions: User starts the dryer program. Variable energy tariffs are available.
Triggers: Cheaper energy tariff.
Postconditions: Energy efficient drying. Money was saved.

Table 4.1.: Use Case: Energy efficient appliance usage

The next use case does not require the integration of different technologies. The challenges are the creation of intelligent services that provide expected and helpful results, as well as the implementation of intuitive interaction technologies that allow an easy and comfortable use of the service.


Use case name: Leisure time planning
Goal: To support the user in planning activities for her leisure time
Summary: The user wants to eat out and see a movie afterwards. She gets a recommendation for a restaurant near a movie cinema that is playing a movie with her favorite actor.
Preconditions: None.
Triggers: User initiated.
Postconditions: User receives an activity plan with activity, time, place, and routing information.

Table 4.2.: Use case: Leisure time planning

Our last use case shows a clear added value for living safely in a smart home. Here the integration of various sensors and, again, the creation of a service that analyzes the sensed environmental information are the main challenges.

Use case name: Fall detection
Goal: To react to abnormal behavior immediately, as it is potentially dangerous
Summary: The user performs an activity in his home. While performing the activity, the user suddenly falls down and cannot get up any more.
Preconditions: User is home alone.
Triggers: Fall detection through sensors and algorithm.
Postconditions: The system tries to contact the user via voice. If the user does not answer, the system notifies a contact person outside of the home about the accident.

Table 4.3.: Use case: fall detection

4.1.2. Usage scenarios

The DLNA defines the term (usage) scenario as follows (DLNA, 2004):

A usage scenario provides details about the user (attitudes, knowledge, habits, experience), the use context (social factors, environment, information requirements), and the solution that supports or enables the particular use. The solution gives an insight into the technologies that may meet the need at various times.

Kulak and Guiney present the different definitions of the term scenario they are aware of, which are partly very contrary (Kulak and Guiney, 2004). In a short discussion of the different variants the authors extract the following definition, which corresponds to the one of the DLNA:

To summarize, scenarios are instances of use cases (complete with production data) that effectively test one path through a use case.

The notation of a scenario is composed of various elements: the devices used for input and output, triggers that initiate the execution of the scenario, and execution steps by the system and the involved users. In the following we have listed scenarios from the corresponding application areas of smart homes (see 2.3).

Closing of blinds – Trigger: sunset and sunrise time. Devices: blinds. Operation: open and close the blinds in the morning resp. in the evening.
Absence assistant – Trigger: manual or through activity monitoring. Devices: lights, door lock, window sensors, heating. Operation: turn off all lights, lock the front door, inform about open windows, reduce the interior temperature.
Night assistant – Trigger: manual or through sensors. Devices: lights, heating. Operation: turn off all lights except night lights in the bedroom, reduce the interior temperature.

Table 4.4.: Comfort Scenarios


Randomize interior lights – Trigger: time of day and activity. Devices: lights. Operation: give the appearance of an occupied home.
Break-in detection – Trigger: burglar. Devices: glass breakage detector, motion detector, lights, notification component. Operation: sound the alarm horn, turn on all interior lights, notify the user and/or the police station.
Alarm function – Trigger: sensor for gas, water or smoke. Devices: different sensing units, notification component. Operation: notify the homeowner by cell phone of the leak.

Table 4.5.: Safety and Security scenarios

Standby power – Trigger: absence information. Devices: switching units. Operation: turn off all standby devices in case of absence of the inhabitants.
Energy consumption analysis – Trigger: usage sensors for power, water, gas. Devices: interaction device. Operation: analysis of past energy consumption.
Shift of energy consumption – Trigger: variable energy tariffs, operation of a device. Devices: various devices. Operation: based on energy tariffs, a user-triggered device operation is shifted into lower-priced times.

Table 4.6.: Energy management scenarios


Fall detection – Trigger: supervised person falls. Devices: body and environmental sensors. Operation: communicate the event to an external person.
Pill reminder – Trigger: time of pill taking. Devices: communication devices. Operation: use communication devices as a reminder.
Vital data alarm – Trigger: certain vital parameters are out of the defined range. Devices: vital data body sensors. Operation: communicate the event to an external person.

Table 4.7.: AAL scenarios

4.2. Conceptual architecture

Based on the requirements analysis and the overview of smart home technologies, it is clear that a smart home architecture should aim to allow interoperation among heterogeneous sensors, actuators and other services. Based on information acquired by these sensors and other context acquisition methods, such an architecture needs to manage environmental information. An additional objective of such an architecture is to support ubiquitous user interaction.

Thus, our conceptual architecture for smart home services targets developer support for creating smart home services in three dimensions: devices, system/services and user interaction. Each of these offers specific support. In combination these components provide complete developer support – from the bottom, with issues of communication technologies and devices, through environmental information in an accessible and interpreted form, to the mediation to the inhabitant via a user interaction component.

The main components of the architecture, as depicted in figure 4.1, are:

• Devices – Detection of devices; routing of remote control requests to the addressed device; hosting of device drivers; providing uniform access to device functions.


• System/Services – Context- and situation model; implementation of services for different application areas.

• User Interaction – Hosting and execution for multi-access, multi-modal user interfaces.

Figure 4.1.: CoSHA smart home architecture

For service developers, each of these three layers offers functions to ease service development. In the following we will describe each of the layers in detail.

4.3. Device Management

The coexistence of different technologies in smart homes has been acknowledged by researchers, e.g. (Perumal et al., 2008; Pensas and Vanhala, 2010). Different technologies (as we have shown in section 2.4) provide the option to interconnect heterogeneous devices and services in a smart home; however, not all of them can equally accommodate the others under the same hood. Many standards for home automation exist in parallel and new standards keep entering the market. The challenge for device management lies in supporting all different kinds of technologies while keeping the integration as simple as possible and not overwhelming service developers with the properties of different technologies.

To support the integration of devices into a home infrastructure, an abstract device description should allow direct operation and automation of consumer electronics devices or home automation components. Smart home services should automatically benefit from any new device connected to a smart home through its device description, as the device description enables a seamless integration of device capabilities into service user interfaces to control the components. Users no longer have to deal with device-specific operation concepts, as the usage of new equipment follows the already well-known usage patterns of their existing services. The semantic device description abstracts device-specific functionality for services on a functional level.

4.3.1. Physical properties

The technologies of devices might be different, but the properties that each device provides can be categorized. Two main types of properties for smart homes exist: input and output properties. Both categories can be classified into subgroups.


Physical properties – Measured values: temperature, humidity, pressure, brightness. Devices (exemplary): temperature sensor, light sensor.
Motion properties – Measured values: position, velocity, angular velocity, acceleration, orientation. Devices: motion sensors, accelerometer, gyroscopic sensor.
Contact properties – Measured values: strain, force, torque, slip, vibration. Devices: pressure sensor, accelerometer.
Presence – Measured values: tactile/contact, proximity, distance/range, motion. Devices: infrared sensor, ultrasonic sensors, triangulation-based sensor.
Biochemical – Measured values: smoke, gas, water. Devices: smoke detector, gas detector, water sensor.
Identification – Measured values: personal features, personal ID. Devices: RFID, fingerprint sensors.

Table 4.8.: Input properties for smart homes

The second type of relevant hardware are output devices. Output devices either influence the environment directly with their actions, or act as interaction devices with the user.

Information – Devices: TV, touchscreen, speakers, mobile phone.
Environmental settings – Devices: lamp, heating actuator, blinds.

Table 4.9.: Output properties for smart homes

Our approach for device integration is based on the idea of letting developers of services for smart homes access the devices on a functional, property-based level. Instead of dealing with different technologies and APIs, the developer should simply operate on a functional level.

4.3.2. Mapping model

Common ground between technologies is the information they provide. For the same type of information, different technological options exist for gathering the information. A light sensor could use KNX/EIB technology as well as ZigBee or others, but all sensors will provide information about the brightness at their installation place. The information might be delivered in different data types, but it can be mapped to a common format. On the other hand, actuation devices can also use different types of technologies. Controlling the lights could be done with KNX/EIB or a proprietary control system. Yet, independent of the technology, the common target is to change the lighting conditions.

A mapping model for a smart home should provide abstractions in two ways. First, all input parameters should be mapped to a common model. Usually, for that purpose a context model is provided that stores and handles all environmental information. Different context provider entities have the task to update the context model. For one kind of context information, a context provider might exist that uses different types of hardware sensors providing the desired information. The context provider entity is responsible for evaluating the context information for reliability and storing the evaluated value in the context model. With the use of more sensors, the context provider can also derive high-level information from low-level context data. Location information in x/y coordinates, for example, could already be mapped to a specific room within a home.

The second mapping is the output mapping. From the service developer’s point of view, he should not have to care about the underlying technologies. He wants to act on a semantic level of actions that should be executed. Turning off the lights in one room should be handled the exact same way as turning off the lights in another room, even if both rooms don’t share the same technology for controlling the lights. The abstraction model therefore needs to provide an abstraction on a semantic level and be responsible for translating the actions into the corresponding commands for the specific technology.
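As an illustration of this output mapping, the following sketch (in Java, with purely hypothetical names such as ActuatorAdapter and OutputMapping) shows how a service could issue the semantic action "lights off" while technology-specific adapters translate it into KNX or ZigBee commands. It is a minimal sketch of the idea, not the actual implementation of our device management component.

import java.util.HashMap;
import java.util.Map;

// A technology adapter translates a semantic action into a technology-specific command.
interface ActuatorAdapter {
    void execute(String room, String action);
}

class KnxLightAdapter implements ActuatorAdapter {
    @Override
    public void execute(String room, String action) {
        // A real adapter would write a telegram to the KNX bus here.
        System.out.printf("KNX: group address for %s -> %s%n", room, action);
    }
}

class ZigBeeLightAdapter implements ActuatorAdapter {
    @Override
    public void execute(String room, String action) {
        // A real adapter would send a ZigBee cluster command here.
        System.out.printf("ZigBee: node in %s -> %s%n", room, action);
    }
}

public class OutputMapping {
    private final Map<String, ActuatorAdapter> adapterByRoom = new HashMap<>();

    public void register(String room, ActuatorAdapter adapter) {
        adapterByRoom.put(room, adapter);
    }

    // The service only states *what* should happen; the mapping decides *how*.
    public void lightsOff(String room) {
        adapterByRoom.get(room).execute(room, "LIGHTS_OFF");
    }

    public static void main(String[] args) {
        OutputMapping mapping = new OutputMapping();
        mapping.register("kitchen", new KnxLightAdapter());
        mapping.register("bedroom", new ZigBeeLightAdapter());
        mapping.lightsOff("kitchen");  // same call, different technologies
        mapping.lightsOff("bedroom");
    }
}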


Figure 4.2.: UPnP mapping for lights

In figure 4.2 we show the exemplary mapping of lights using Universal Plug and Play (UPnP) profiles. UPnP was developed as a technology intended for smart homes, office environments and other types of local area networks. Technically, UPnP provides a distributed computing framework based on web technologies. The Digital Living Network Alliance (DLNA), whose goal is to develop the standards needed for interoperable networked products for smart homes, uses UPnP for its specifications. In our approach a UPnP profile offers the functionalities to services. These functions are mapped to UPnP services, according to the specification. These services then again communicate with our device management component, which maps the services to the specific underlying technology.

4.4. Context and Situation-Awareness

Context- and situation-awareness forms the basis for intelligent actions in smart environments. Based on a world model built out of sensed and derived information, a context model allows components to reason about environmental changes and services to adapt to changing conditions.

The simplest form of context-awareness is the personalization of application behavior. Personalization is already very common in today’s applications and is a widely researched area. The diversity and dynamics of applications lead to the need for personalization. Personalization tries to create a user profile either by explicit user input or (more advanced) by monitoring user interaction with the system and building the user profile from the observations made. The user profile is then mostly used to simplify the usage of the application, e.g. by removing unnecessary menus (see Microsoft’s personalized menus) or by preselecting options. More advanced personalized applications use the user profile to generate personalized recommendations based on user preferences (Wohltorf et al., 2005).

The second form of context-aware applications is passive context-awareness. Contextual information is collected from the user and the environment and is made available to the user through the user interface. The user is responsible for taking appropriate actions following the context information. No adaptation of application behavior or changes in the user interface are made.

The third and last level of separation of context-aware applications is active context-awareness. Active context-awareness describes applications that, on the basis of sensor data and other context information, change their content autonomously, whereas passive context-aware applications solely present the updated context to the user and let the user specify how the application should change, if at all. A simple example of an active context-aware application is the mobile phone that changes its time automatically when the phone enters a new time zone, or, more advanced, changes the ringing tone profile to silent when the user is in a meeting. In the corresponding passive context-aware application, the mobile phone prompts the user with information about the change in context and lets the user choose whether an action should be taken or not.
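The distinction can be made concrete with a small, purely illustrative Java sketch: the same context change, a new time zone, is only surfaced to the user in the passive case, but applied autonomously in the active case.

import java.time.ZoneId;
import java.time.ZonedDateTime;

public class ContextAwarenessModes {
    // Passive: the context change is presented, the user decides.
    static void passive(ZoneId newZone) {
        System.out.println("Detected new time zone " + newZone + ". Adjust clock? [yes/no]");
    }

    // Active: the application adapts autonomously without asking.
    static ZonedDateTime active(ZonedDateTime clock, ZoneId newZone) {
        return clock.withZoneSameInstant(newZone);
    }

    public static void main(String[] args) {
        ZonedDateTime clock = ZonedDateTime.now(ZoneId.of("Europe/Berlin"));
        passive(ZoneId.of("Asia/Tokyo"));
        System.out.println("active adaptation: " + active(clock, ZoneId.of("Asia/Tokyo")));
    }
}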

Device settings and operation move from the user to the environment. A smart environment thus has to make decisions about automating its behavior to meet the goals of the users. It has to monitor the actions of the user and draw conclusions about the task a user is trying to fulfill. Based on a variety of alternative possibilities, the system needs to make the right choice, and react quickly if it detects that the anticipated user goal doesn’t match the user’s actions. In that case, it should also learn from this to avoid the mistake in the future. A context model allows the environment to implement this feature.

4.4.1. Context Definition

Context is (among others) the key enabler for context-aware applications. Many researchers have defined the term context within the last decade. Even though one immediately has a notion of what context means, it is hard to find a common definition, as different research groups in the area of ubiquitous computing have slightly different definitions.

Another, very application-centric definition of context is the one by Chen and Kotz, emphasizing states:

“Context is the set of environmental states and settings that either determines an application’s behavior or in which an application event occurs and is interesting to the user.” (Chen and Kotz, 2000)

They introduced the application, and therefore implicitly the state of the application, into the definition of context. This could lead to an infinite loop when developing context-aware applications, where changes in context lead to changes in the application and therefore again lead to changes in context. Thus, when using this definition, the developer has to be careful about recursion, otherwise the application might not behave as expected.

Dey describes context in a similar way, also including the application and using the term situation in his definition:

“Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves.” (Dey, 2000)

Following either definition of context, it is hard to reduce it to a finite set of attributes. It is therefore possible to see almost every piece of information available at the time of interaction of an application with the user as context information. It will always be a challenge for application developers to integrate context information into applications and services. Thus, early work in context-aware applications reduced context mainly to location information (e.g. (Schilit et al., 1993)), making it easier for developers to use this information. Most of the newer context-aware applications still use location information as one attribute, but extend it with other attributes like person or time to allow developing even more specific applications.

The definition of context includes implicit (e.g. via sensors) and explicit input. A person’s identity, for example, could be determined either by scanning the iris or by letting the user enter his name and password. From the application perspective, both ways fill the context attribute identity with hopefully the same information. As the latter way of creating context information is already common in today’s applications, and as it is a not very user-friendly way of gathering context information, especially when dealing with lots of attributes, this work concentrates on the extraction of context information from implicit input.

To overcome the broad expanse of Dey’s definition, Winograd defined context in a more specific way:

“Context is an operational term: something is context because of the way it is used in interpretation, not due to its inherent properties.” (Winograd, 2001)

With this definition he manages to select only those values that have or might have an operational impact on the context-aware application. As an example, he brings up the voltage of the power supply, which is only referred to as contextual information if there is some interaction or interpretation by the user and/or the application that depends on it. Otherwise it is only part of the environment, and thus it is not necessary to model it in a context model.

Another definition consolidates context-awareness in a more open way: “A system is context-aware if it can extract, interpret and use context information and adapt its functionality to the current context of use” (Schmidt, 2002). This definition outlines a main challenge for context-aware systems: the gathering of context information. The challenge for context-aware systems lies not only in the adaptation to the current context but also in the complexity of capturing, representing and processing contextual data. To capture context information, generally additional sensors and/or programs are required. In addition to being able to obtain the context information, applications must include some "intelligence" to process the information and to deduce its meaning. This is probably the most challenging issue, since context is often indirect or deducible only by combining different pieces of context information. Deriving information from single pieces of context information allows systems to make assumptions about the user’s current situation. To transfer the context information to applications, and for different applications to be able to use the same context information, a common representation format for such information should exist.

Even though all context definitions share some similarities, it is hard to find a common ground. These definitions are not given as a mathematical formula, and they rely strongly on the authors’ research focus.

4.4.2. Situation-Awareness

Situation awareness involves being aware of what is happening around a user in his environment and understanding how information, events, and his actions will impact his goals and objectives. Situation awareness is a highly relevant topic in areas where the information overflow is quite high or where decisions lead to imminent consequences in the future. Topics where situation awareness has received broader research attention include city traffic management, military operations, or piloting an airplane. (Endsley and Garland, 2000) defined situation awareness as

“The perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future”.

A theoretical framework for situation awareness by (Endsley, 1995) divides situation awareness into three levels, each representing a different level of understanding.

• Perception of elements in the current situation (Level 1 SA): The first step in achieving SA is to perceive the status, attributes, and dynamics of relevant elements in the environment. Thus, Level 1 SA, the most basic level of SA, involves the processes of monitoring, cue detection, and simple recognition, which lead to an awareness of multiple situational elements (objects, events, people, systems, environmental factors) and their current states (locations, conditions, modes, actions).

• Comprehension of current situation (Level 2 SA): The next step in SA formation involves a synthesis of disjointed Level 1 SA elements through the processes of pattern recognition, interpretation, and evaluation. Level 2 SA requires integrating this information to understand how it will impact upon the individual’s goals and objectives. This includes developing a comprehensive picture of the world, or of that portion of the world of concern to the individual.

• Projection of the future status (Level 3 SA): The third and highest level of SA involves the ability to project the future actions of the elements in the environment. Level 3 SA is achieved through knowledge of the status and dynamics of the elements and comprehension of the situation (Levels 1 and 2 SA), and then extrapolating this information forward in time to determine how it will affect future states of the operational environment.

As shown in figure 4.3, situation awareness can be seen as part of an ongoing cycle. Based on the state of the environment, situation awareness takes place and leads to a decision, which is followed by the performance of actions, which in the end affects the current state of the environment.

Figure 4.3.: Situation Awareness (taken from Endsley(2000))

In a context-aware computing environment, situation awareness can be achieved in a quite similar way. The state of the environment is also the basis for situation awareness. The context model, as built in the previous chapter, provides a basis for automated processing of the environmental state. It is a value-free reflection of the environment, thus embodying level 1 SA. We have described acquiring and building the context model in depth in the previous section. The context model is about the perception of the environment and the current state of the application and the user. The context model does not do any interpretation of the information. For interpretation, as in level 2 SA, an additional model is needed. We propose the use of a situation model combining different contextual information into new, interpreted information, structured into the following levels (a short code sketch follows the list):

Level_0 Raw Contextual Information. Low-level data, directly acquired from sensors or other context providers without any further processing

Level_1 Context Information. Interpreted raw contextual information

Level_2 Context Transition.

Level_3 Situation.
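The following minimal Java sketch illustrates how these four levels could be represented as data types; the names and example values are illustrative only and do not reflect the concrete data structures of our implementation.

public class ContextLevels {
    // Level 0: raw value straight from a sensor, no interpretation.
    record RawReading(String sensorId, double value) {}

    // Level 1: interpreted context information with a semantic label and unit.
    record ContextInformation(String entity, String attribute, double value, String unit) {}

    // Level 2: a transition between two context information states.
    record ContextTransition(ContextInformation before, ContextInformation after) {}

    // Level 3: a situation aggregating several pieces of context information.
    record Situation(String name, java.util.List<ContextInformation> evidence) {}

    public static void main(String[] args) {
        RawReading raw = new RawReading("temp-livingroom-01", 294.15);
        ContextInformation ctx =
                new ContextInformation("living room", "temperature", raw.value() - 273.15, "°C");
        ContextTransition warming =
                new ContextTransition(ctx, new ContextInformation("living room", "temperature", 23.0, "°C"));
        Situation situation = new Situation("room warming up", java.util.List.of(ctx));
        System.out.println(ctx + " -> " + warming + " -> " + situation);
    }
}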

Situations from the service point of view

In context-aware systems, service developers are increasingly challenged by the issues of information overload and the correlation of information from heterogeneous sources. Research in context awareness progressively evolves from establishing the viability of using context toward dealing with major challenges originating from the underlying characteristics of ubiquitous systems.

On the one hand, developers integrate ever more information from sensors and other information sources into the context model, allowing realized systems to achieve a new level of context awareness. Different heterogeneous context information, like time, location and the past history of information access, can also influence the information needs of services. Context-aware systems should help users to combine data from heterogeneous sources according to their expressed needs (as represented by system queries), as well as less specific criteria. Thus, context-aware application development tends to get more complex, as the adaptation trigger consists of more and more loosely related contextual information.

A situation abstraction, defined as a structured snippet of a context model, helps to effectively divide the virtual world of a context model into manageable pieces which a system might recognize and adapt to. The situation abstraction is built on top of the context metamodel to aggregate context into situations and to compose complicated situations out of simple context information via logical and temporal composition.

While situations can in a sense be captured simply as a collection of context elements, there is an advantage in introducing them as a conceptual entity which can be modeled declaratively. Firstly, this allows developing a semantics that directly captures the meaning of situations within the system architecture. Secondly, it allows domain experts to more readily specify the characteristics of the situations particular to the domain. Thirdly, it allows for a general-purpose infrastructure which supports the recognition of situations and the appropriate adjustment of agent attitudes.
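A minimal sketch of such a declarative composition is given below (in Java, with hypothetical names): simple situations are modeled as predicates over a context snapshot and combined logically into richer ones. Temporal composition is omitted for brevity.

import java.util.Map;
import java.util.function.Predicate;

public class SituationComposition {
    // A context snapshot is reduced here to a flat attribute map.
    interface Situation extends Predicate<Map<String, Object>> {
        default Situation and(Situation other) { return ctx -> test(ctx) && other.test(ctx); }
        default Situation or(Situation other) { return ctx -> test(ctx) || other.test(ctx); }
    }

    public static void main(String[] args) {
        Situation userAtHome = ctx -> "home".equals(ctx.get("user.location"));
        Situation eveningTime = ctx -> (int) ctx.get("time.hour") >= 19;
        Situation tvOn = ctx -> Boolean.TRUE.equals(ctx.get("tv.power"));

        // Composite situation built from simple context information.
        Situation relaxingEvening = userAtHome.and(eveningTime).and(tvOn);

        Map<String, Object> snapshot =
                Map.of("user.location", "home", "time.hour", 21, "tv.power", true);
        System.out.println("relaxing evening: " + relaxingEvening.test(snapshot));
    }
}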

Situations from the users point of view

The user, as an individual, also has an influence on the perception of a situation. Providing the same system to different individuals will lead to a different situation awareness across individuals. The user of an application has a perception of the service he is using and of the environment he is in. The general perception of a service is called the mental model. In (Rouse and Morris, 1986) mental models have been succinctly defined as “mechanisms whereby humans are able to generate description of system purposes and form, explanations of system functioning and observed system states, and predictions of future states”. Mental models store long-term general knowledge about a system or service, defining how a user thinks a service might work or might behave when he starts using it. The mental model is created by perceiving information, but also by interpreting, filtering and integrating that information according to the mental model. Mental models might even be generated from knowledge about quite similar systems. When a user starts using a service, the mental model is “instantiated” with the current situation of the service and the environment, leading to a situation model of the service. In (Endsley, 1995) a situation model has been described as “a schema depicting the current state of the mental model of the system”. A situation model can be seen as a mental representation of a described or experienced situation in a real or imaginary world.

Building mental models and situation models is a process that a user executes subconsciously when he uses an application. But for service developers it is of particular interest how a user perceives an application or service. The mental model or situation model a user has about an application might differ from the system model. The system model is the internal model of an application, representing all states of an application and describing all tasks and possibilities an application offers. Service developers try to build their application with the goal that the mental model a user has and the system model they have built won’t differ too much. Big differences between those two models could lead to unsatisfied users, as their expectations about an application might not be fulfilled.

4.4.3. Context Acquisition

Depending on the field of application, various ways have been used to obtain context. Overall, three ways of context acquisition can be identified.

• Sensed context: this type of information is acquired by means of physical or software sensors, such as temperature, pressure, lighting and noise level. This context, acquired without any further interpretation, is also called low-level context.

• Derived context: this kind of contextual information can be computed on the fly. The most illustrative examples are time and date.

• Explicitly provided context: for example, users’ preferences when they are explicitly communicated by the user to the requesting application.


Context acquisition from sensors is, however, not an easy process. This is due to the following main reasons: information may be acquired from different sensors and may require an additional step (interpretation) in order to be useful to an application. Furthermore, contextual information may be dynamic in nature and these changes need to be captured by the sensing infrastructure. Additionally, contextual information originates from heterogeneous and distributed sources and its acquisition is generally bound to a specific sensing technology. This is poor practice from a software engineering point of view since it prevents reuse of application code when the sensors change. To overcome these problems, an abstraction for accessing contextual information should be provided.

Figure 4.4.: Context aggregation process

Figure 4.4 gives an overview of the process of context aggregation from sensor information. Raw data has to be collected via sensors or other mechanisms and transferred into an interpretation model. From there, context information can be queried.
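The following sketch (Java, illustrative names only) shows the principle of this aggregation: context providers hide heterogeneous sensors, normalize their raw readings into a common entry format and feed a queryable context model.

import java.util.ArrayList;
import java.util.List;

public class ContextAggregation {
    record ContextEntry(String entity, String attribute, Object value) {}

    interface ContextProvider {
        List<ContextEntry> poll(); // acquire and interpret raw sensor data
    }

    static class ContextModel {
        private final List<ContextEntry> entries = new ArrayList<>();
        void update(List<ContextEntry> fresh) { entries.addAll(fresh); }
        List<ContextEntry> query(String attribute) {
            return entries.stream().filter(e -> e.attribute().equals(attribute)).toList();
        }
    }

    public static void main(String[] args) {
        // A provider hiding a proprietary temperature sensor behind the common format.
        ContextProvider temperatureProvider =
                () -> List.of(new ContextEntry("kitchen", "temperature", 21.5));
        ContextModel model = new ContextModel();
        model.update(temperatureProvider.poll());
        System.out.println(model.query("temperature"));
    }
}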

4.5. User Interaction

Focusing development on human-computer interaction has long been neglected. Former research projects like the Adaptive House (see section 3.2.5) actually tried to build a smart home without any explicit user interaction. Even if it seems desirable at first to build a smart home without any user interaction, this attempt can be counterproductive, as human behavior is not 100% foreseeable. We believe that for smart environments it is getting more and more important to focus on user interaction with the environment, to allow the user a continuous, yet non-intrusive interaction with devices and services.

Combining different modalities and interaction devices in a smart home allows advanced means of control and interaction with complex and distributed systems like smart environments. Suitable interaction technologies and modalities that best support the current situation can be selected from a set of available devices. Combining multiple devices also supports a user who is moving around or occupied with other tasks.

Interaction in a smart home should be as intuitive as writing with a pencil on a piece of paper, so that interaction with services doesn’t distract the user. One should keep in mind the mistakes of the past, where human-computer interaction has not been seen as crucial. The difficulties when scheduling a recording with a video cassette recorder can be a warning signal for all developers of services for smart homes. Today’s challenge is to create human-centered interaction instead of technology-driven interaction paradigms.

Earlier approaches to user interfaces for smart environments had a strong focus on technologies and single devices. Mobile scenarios were developed as standalone services instead of being integrated into other usage conditions. This conflicts with the wish of users to use different services under various circumstances without having to think about the interaction.

We propose to follow the need for continuous user interaction in different environments and with different devices and modalities also in the implementation. A framework for user interfaces should support these requirements without imposing much effort on the developer.


Our architecture for user interfaces uses abstract models that are device-, modality- and situation-independent and adapt to these at runtime. Adaptation in our approach can be either adaptation of user interfaces (e.g. adapting the UI to the distance of the user from the screen by changing font sizes or selecting elements for presentation depending on their importance for the current situation) or selection of modalities and services (depending on the usage situation and their suitability for interaction).

5. CoSHA Implementation

We have implemented the concepts of the previous chapter in our CoSHA smart home architecture as reusable components. Each of the components delivers certain functionality needed for creating services for smart home environments. In the following subchapters we will present details of the implementation.

5.1. Device management

The heterogeneity of devices also leads to a heterogeneity of communication technologies. Wireless and wired communication technologies (see section 2.4.2) usually exist in parallel. An architecture for smart homes needs to support different communication technologies while keeping the individual implementation transparent to the user. In the following we show how we have realized the integration of three technologies in our smart home environment. To demonstrate the transferability of this approach we have chosen three technologies that each use a different communication network as a basis.


Figure 5.1.: Example of connected devices

The example in figure 5.1 illustrates the physical connection of three different technologies with different characteristics.

KNX devices can be connected to a central PC-based controlling unit using a KNX USB interface. This interface provides the physical connection between devices communicating via the KNX communication protocol and standard PC hardware. In order to send and receive KNX packets, a driver component is needed that offers an API to the KNX world. In our example we use the EIBd1 driver. EIBd provides access to the EIB bus over TCP/IP and/or Unix domain sockets using a simple protocol.

DigitalSTROM technology offers an integrated webserver as one of its components, making the connection easier, as this component offers a direct TCP/IP-based communication API. No other components are required.

The connection of wireless ZigBee devices can be done using a regular ZigBee USB dongle and corresponding drivers. The multi-hop abilities of ZigBee don’t require every single ZigBee device to set up a direct connection to the USB dongle.

1http://www.auto.tuwien.ac.at/~mkoegler/index.php/eibd

The abstract service layer uses the implementation of our mapping model presented in section 4.3.2 to provide uniform access to the integrated technologies using UPnP profiles. This layer is responsible for handling service requests and translating them into device- and technology-specific actions.

5.2. Context model

We have seen that context information is a mandatory requirement for the realization of ubiquitous services. In the following, our context model is described in detail. Thereafter we present our extension of the context model to support the recognition of the user’s intention.

5.2.1. Context information

Our implemented context model provides an abstract model of the environment. The context model can be accessed by other components in order to deliver adaptive services and user interfaces based on the current state of the environment.

We have noticed that the context information available to the service developer differs in the different phases of the development process. To address the different creation phases, we propose a layered architecture of context models and sensors, which clearly structures context information and allows orchestrating it at runtime. The hierarchy proposed in our approach represents the different stages of context information processing (a short code sketch follows the list):

• L1 – is comprised of all hardware sensors and actors available in the environment that sense the status of the environment and allow user interaction. This level lacks concrete contextual information of an environment at a specific time.

• L2 – holds context models describing the smart home environment in which a service will be executed. The L2 context models are configured during the service deployment. In contrast to L1, this layer is configured for the concrete environment the service will be deployed to.

• L3 – the top layer contains the context information relevant for a service, identified by its developer at service design time. In combination with the dynamic context data during runtime, this layer allows the context- and situation-aware adaptation of services.
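A rough sketch of the three layers could look as follows (Java, purely illustrative names): L1 delivers raw hardware data, an L2 model gives it environment-wide semantics at deployment time, and an L3 model defined at design time selects what a particular service needs.

import java.util.List;
import java.util.Map;

public class LayeredContextModel {
    interface L1Sensor { Map<String, Object> read(); }                  // hardware layer

    static class L2EnvironmentModel {                                   // configured per home
        private final List<L1Sensor> sensors;
        L2EnvironmentModel(List<L1Sensor> sensors) { this.sensors = sensors; }
        Object attribute(String key) {
            return sensors.stream().map(L1Sensor::read)
                    .filter(m -> m.containsKey(key)).map(m -> m.get(key))
                    .findFirst().orElse(null);
        }
    }

    interface L3ServiceModel { boolean relevantSituationActive(L2EnvironmentModel env); }

    public static void main(String[] args) {
        L1Sensor presence = () -> Map.of("user.inKitchen", true);       // proprietary sensor data
        L2EnvironmentModel home = new L2EnvironmentModel(List.of(presence));
        // Defined by the service developer at design time, bound to the L2 model at runtime.
        L3ServiceModel cookingAssistant =
                env -> Boolean.TRUE.equals(env.attribute("user.inKitchen"));
        System.out.println("adaptation triggered: " + cookingAssistant.relevantSituationActive(home));
    }
}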

Figure 5.2 presents the dimensions of the architecture, while also illustrating a classification of context information for services in smart environments. The horizontal axis represents the service lifecycle, starting with service design, going through service configuration and ending with service execution. The vertical dimension shows the scope of the modeled context information.

The lifecycle of a service begins with its design and development. A service developer implements the service and identifies the context information relevant for it. She defines relevant situations, to which the service should react at runtime, and context information, which the service should use for adaptation. The modeled information is service-dependent and thus has an application scope. Furthermore, at design time, the developer creates static models, which will only be filled with dynamic data upon the service’s execution.

Before a service can be executed, it must be deployed in the local environment. Every smart home has its own context, defined by the available hardware, its users and the physical environment. A service executed in a smart home environment must be provided with means to analyze the local context and extract the relevant context information from it. In other words, before the service is executed, context models of the local environment must be provided in some form. Our layered architecture classifies such models as L2 models, with the scope of the local environment, created before the execution of services.

Figure 5.2.: Context-model

The context information aggregated on the L2 layer is the same for all applications running in a given smart environment. Examples of such information include data about the inhabitants, a floor plan, current positions of users or devices, etc. In contrast to L2, the L3 models describe application-specific context situations and parameters (e.g. a list of all currently available appliances of type T, belonging to the current service user, at most 2 meters away from her current location). Provided with access to the L2 models, services are empowered to extract the context information relevant for their L3 models, evaluate it with regard to the local environment and trigger appropriate adaptations.

The utilization of the situation models by adaptive services can be twofold. A continuous evaluation of the models may trigger adaptations of the service, or an intelligent service may evaluate the situation models and utilize the results on its own initiative. For example: a service may continuously extract the distance of the user to the nearest display and trigger the adaptation of the user interface (e.g. making hardly readable text parts bigger), or an assisting energy management service can extract data about home appliances upon a user’s request. Both scenarios are feasible in our approach.
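The first of the two usage patterns can be sketched as follows (Java, with hypothetical interfaces and thresholds): a continuously evaluated situation, here the distance of the user to the nearest display, triggers an adaptation of the user interface.

public class DistanceAdaptation {
    interface ContextSource { double distanceToNearestDisplayMeters(); }
    interface UserInterface { void setBaseFontSize(int points); }

    static void evaluate(ContextSource context, UserInterface ui) {
        double distance = context.distanceToNearestDisplayMeters();
        // Thresholds are illustrative; a real service would derive them from its L3 model.
        if (distance > 3.0) {
            ui.setBaseFontSize(28);   // user far away: enlarge hardly readable text
        } else {
            ui.setBaseFontSize(14);   // user close by: default size
        }
    }

    public static void main(String[] args) {
        ContextSource context = () -> 4.2; // pretend the localization system reports 4.2 m
        UserInterface ui = points -> System.out.println("font size -> " + points + "pt");
        evaluate(context, ui);
    }
}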

From the perspective of L3, the whole of the L2 information varies between different environments, making it dynamic from the perspective of the designed application. The static L2 information is unknown at the creation time of L3’s static information. Since a service can be executed in different environments, even the static part of the local context is variable for the service. This phenomenon is depicted by an arrow between the static context data and the dynamic situation data in figure 5.2, and makes the interconnection of both layers a complex problem. The information flow between L3 and L2 can neither be established by the service developer nor prior to the service execution. The underlying service infrastructure must thus be capable of dynamically interconnecting both layers at runtime.

As the context continuously changes at runtime, so must the models which represent it. In order to provide services with up-to-date information, the data flowing from L2 models to L3 models includes both the static context data (e.g. number of rooms, age of users, devices present in the environment) as well as the dynamic information (e.g. current mood of a user, her location coordinates or heartbeat). The former is configured upon the deployment of a service in the environment (e.g. by means of manual configuration), whereas the latter can only be determined at runtime by context sensors in the environment (e.g. localization systems, temperature sensors, e-health devices). Therefore the hardware layer L1 consists of the sensors and actors available in the environment. The sensors extract context data from the environment and make it available to the environment-wide context models on the L2 layer. The L2 models give the proprietary data delivered by the L1 hardware semantics.

Last but not least, the L1 hardware layer is the interface between the user and the services. On the one side, the services learn about the user and the environment by reasoning about the data delivered by the hardware. On the other side, the user explicitly communicates with the services using the interaction hardware available in the environment.

The proposed layered architecture enables a classification of the context information utilized by adaptive services, depending on the service lifecycle. It clarifies the scope of context information, its sources and its flow during context processing.

5.2.2. Situations and user intention

In earlier work we have developed the MeMo workbench (Jameson et al., 2007), which simulates user behavior on the basis of a user interaction model, using an ideal interaction path from which variances, i.e. "use errors", are generated by rules; these "errors" are each provided with a probability, so that a variety of alternative interaction patterns can be generated. We have used our experiences from this work to develop the following approach for the identification of user intention.

In our approach, an intention is a succession of situations. Situations can’t be foreseen by the designer, as human actions offer infinite possibilities of occurring situations. However, situations have effects on context items in a smart environment. We can make use of this property to extrapolate situations out of observed context items.


Figure 5.3.: 3-tier context and situation model

On the bottom level of our approach, we achieve awareness of the environment by using a context model. It aggregates low-level data from different sensors and other information providers (e.g. user profile databases) and provides a consistent way of accessing this information. In our approach the context model distinguishes between two types of context information: global context information, stored in a global context model that can be used by different applications, and context situations that define application-specific context information relevant for the application.

Figure 5.4 shows the relation of the two types of information. Global context information is provided by context providers that process the information from available sensors and store it in an application-spanning global context model. Application-specific context situations relevant for adaptation purposes are specified through context situation providers. They extract the required information from the global context model to calculate the specific situation. Both parts and their interrelation are described in detail in the following.


Figure 5.4.: Global context model

Based on the executable models approach, we developed a context model that is filled with context information at runtime to provide the basis for adaptations. The underlying context metamodel comprises information about the user and the available interaction devices, as well as information about the environment, including position information of users and devices in terms of rooms and coordinates (see also the generic context model by (Park et al., 2007)).

The context model offers a basis for automated processing of the environmental state. It is about the perception of the environment and gives a value-free reflection of it. The context model does not do any interpretation of the information, yet it helps to make services aware of the environment. Based on the context model, it is now possible to find situations that match a human mental state. We want to observe our context model and find out the current state of our system. Unfortunately, we don’t have sensors that return situation information. We can only monitor context variables that are influenced by our (hidden) situations. We can use Hidden Markov Models (HMM), as shown in figure 5.5, to derive situations from observed context information. In contrast to Markov models, where the state is directly visible to the observer, in an HMM only variables influenced by the states are visible. In our case, the output of the states (situations) is the different observed context information.
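The following sketch shows the principle with a tiny, hand-crafted HMM and standard Viterbi decoding (Java); the situations, observations and probabilities are made up for illustration and are not taken from our deployed model.

public class SituationHmm {
    static final String[] SITUATIONS = {"Sleeping", "Cooking"};
    // transition[i][j]: probability of moving from situation i to situation j
    static final double[][] TRANSITION = {{0.8, 0.2}, {0.3, 0.7}};
    // emission[i][k]: probability that situation i produces observation k
    // observations: 0 = "bedroom motion", 1 = "kitchen motion", 2 = "stove on"
    static final double[][] EMISSION = {{0.7, 0.2, 0.1}, {0.1, 0.5, 0.4}};
    static final double[] INITIAL = {0.5, 0.5};

    // Standard Viterbi decoding: most likely situation sequence for the observations.
    static int[] viterbi(int[] observations) {
        int n = observations.length, s = SITUATIONS.length;
        double[][] prob = new double[n][s];
        int[][] back = new int[n][s];
        for (int i = 0; i < s; i++) prob[0][i] = INITIAL[i] * EMISSION[i][observations[0]];
        for (int t = 1; t < n; t++) {
            for (int j = 0; j < s; j++) {
                for (int i = 0; i < s; i++) {
                    double p = prob[t - 1][i] * TRANSITION[i][j] * EMISSION[j][observations[t]];
                    if (p > prob[t][j]) { prob[t][j] = p; back[t][j] = i; }
                }
            }
        }
        int[] path = new int[n];
        for (int i = 1; i < s; i++) if (prob[n - 1][i] > prob[n - 1][path[n - 1]]) path[n - 1] = i;
        for (int t = n - 1; t > 0; t--) path[t - 1] = back[t][path[t]];
        return path;
    }

    public static void main(String[] args) {
        int[] observed = {0, 1, 2}; // bedroom motion, kitchen motion, stove on
        for (int state : viterbi(observed)) System.out.print(SITUATIONS[state] + " ");
    }
}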


Figure 5.5.: Hidden Markov models for determining situations

In the last layer of our approach, we need to map observed situations to the user’s intention in order to include the user’s mental model. This cannot be done as straightforwardly as the previous steps, as the creation of situations in a user’s mental model is an infinite process. Generally, a user’s interaction with a system follows a user task goal, as interaction has the purpose of fulfilling a general goal of the user. To reach his goal, a user will perform several steps. We define intention as the performance of actions to reach a user goal in a given environment.

In our model, the state transition probabilities between situations of the HMM show the likelihood of a transition between different states. If, based on historical data, a sequence of tokens generated by the HMM gives matching information about the sequence of states, our system can assume the intention of the user. If, on the other hand, an observation of the user follows a state without a transition or with a very low likelihood, this can be interpreted as an intentional change.

5.3. Multi-Access Service Platform

We have addressed the requirements of user interaction in smart home environments with the Multi-Access Service Platform (MASP). The MASP allows the creation of multimodal user interfaces for smart home environments. It provides a model-based user interface management system designed to deliver adaptive and device-independent user interfaces for smart home environments. The MASP supports the extensive infrastructure of networked devices that allows the creation of adaptive, ubiquitous, and situation-aware services. Different input and output channels allow the creation of a unique, multimodal and intuitive user experience. The MASP manages the adaptation of user interfaces to the current context of use as well as the selection of different in- and output channels. The MASP runtime system allows the development of multi-modal user interfaces based on a model-driven approach. To support consistent interaction, it generates common user interfaces in HTML and VoiceXML as well as more complex multi-modal user interfaces (for example a combination of HTML and VoiceXML) from an abstract model. Additionally it provides a notification service that can be used to notify the user about important information, enabling the system to trigger service interaction if required (push).

Figure 5.6.: MASP layer model

Meeting the requirements stated in section 4.5, the MASP supports an easy and efficient method of user interface generation. An abstract service description is rendered to a user interface for a wide range of devices and modalities. The MASP uses CC/PP-based device capability detection, a framework specified by the W3C for the detection of device capabilities at runtime. Using CC/PP enables us to optimize the automatic user interface generation by taking screen size, supported media formats and user preferences into consideration. Figure 5.6 explains the basic features of the process of adapting user interfaces via the MASP. Each service’s content has to be described in an abstract model using a language called Abstract Interaction Description Language (AIDL). While the MASP itself implements the controller layer of the model-view-controller design pattern, the AIDL language realizes the view layer. Concrete graphical, speech-based, or multimodal user interfaces are derived from this single model. It is necessary to mark the service’s in- and output information most relevant to the user. This process can be done for every modality or device class to distinguish the amount and order of information the MASP presents to the user.

To utilize the MASP, the developer essentially creates a model of the user interface, representing the various aspects of the final user interface that are relevant for different devices and modalities. This model is connected to a system of back-end services and deployed in the MASP runtime environment, which generates the required user interface source code (e.g. HTML) from the model.

In our approach this model currently consists of a task tree model, annotated with additional information about the tasks. The task model is defined according to the Concurrent Task Tree (CTT) notation proposed by Fabio Paternò in (Paternò, 1999). It helps to identify the main tasks of an interactive application, starting at a very abstract design level and moving to a more concrete one step by step. The elements in the task model therefore represent actions the user can undertake (interaction tasks) and actions the system undertakes (system tasks) at different levels of detail, as well as temporal relationships between these actions. Figure 5.7 shows an example of a task tree, defining the interaction tasks (person in front of the PC) and the system task (grey PC) as well as the temporal relationships between the tasks. More complex tasks are grouped as abstract tasks (clouds) on higher levels.


Figure 5.7.: Concurrent Task Tree

To be able to derive the interactive user interface from the task definition of the system, each leaf task is annotated with additional information. In the case of a system task this information is the definition of the service to call to accomplish the task. In the case of an interaction task it incorporates an abstract description of a user interface part (currently in the form of a JSP page for the representation in a web browser and an additional voice annotation for the multimodal representation).

At runtime this task tree is interpreted by the MASP and the set of currently active tasks is determined. Active system tasks result in service calls, and the representations of the active interaction tasks are assembled into the provided user interface. When a task is completed, the system moves to the next state by determining the new set of active tasks, and the user interface is updated.
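A simplified sketch of this interpretation step is shown below: a small task-tree structure with interaction, system, and abstract tasks, and a traversal that collects the currently enabled leaf tasks. The class names, the two temporal operators, and the example tree are hypothetical and only illustrate the principle; they do not reproduce the MASP task metamodel or the full CTT operator set.

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified task-tree sketch: collects the leaf tasks enabled in the current state. */
public class TaskTreeSketch {

    enum Type { ABSTRACT, INTERACTION, SYSTEM }
    enum Operator { ENABLING, CONCURRENT }   // ">>" and "|||" in CTT notation

    static class Task {
        final String name;
        final Type type;
        final Operator childOperator;        // how the children relate temporally
        final List<Task> children = new ArrayList<>();
        boolean completed;

        Task(String name, Type type, Operator childOperator) {
            this.name = name;
            this.type = type;
            this.childOperator = childOperator;
        }

        Task add(Task child) { children.add(child); return this; }
    }

    /** Collect enabled leaf tasks: for ENABLING, only the first uncompleted child is active. */
    static void collectEnabled(Task task, List<Task> enabled) {
        if (task.completed) return;
        if (task.children.isEmpty()) { enabled.add(task); return; }
        for (Task child : task.children) {
            if (child.completed) continue;
            collectEnabled(child, enabled);
            if (task.childOperator == Operator.ENABLING) return; // later siblings not yet enabled
        }
    }

    public static void main(String[] args) {
        // Hypothetical fragment: select a recipe, then show ingredients while reading them aloud.
        Task root = new Task("PrepareMeal", Type.ABSTRACT, Operator.ENABLING)
                .add(new Task("SelectRecipe", Type.INTERACTION, Operator.ENABLING))
                .add(new Task("PresentIngredients", Type.ABSTRACT, Operator.CONCURRENT)
                        .add(new Task("ShowIngredientList", Type.SYSTEM, Operator.ENABLING))
                        .add(new Task("ReadIngredientsAloud", Type.SYSTEM, Operator.ENABLING)));

        List<Task> enabled = new ArrayList<>();
        collectEnabled(root, enabled);
        enabled.forEach(t -> System.out.println("enabled: " + t.name)); // -> SelectRecipe
    }
}
```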

The goal of this approach is to dynamically adapt the user interface to the properties of the various devices available to access the user interface and to always provide a consistent user interface supporting multimodal interaction as well as the specific device features. The basic building blocks of the runtime system needed for the interpretation of the task tree are depicted in figure 5.8.

The implementation of the MASP has been carried out based on the Foundation for Intelligent Physical Agents (FIPA)-compliant Multi-Agent System (MAS) architecture “Java Intelligent Agent Componentware” (JIAC). JIAC integrates fundamental aspects of autonomous agents regarding pro-activeness, intelligence, communication capabilities, and mobility by providing a scalable component-based architecture. Additionally, JIAC offers components realizing management and security functionality, and provides a methodology for Agent-Oriented Software Engineering (AOSE). The main features of applications realized with the JIAC MAS architecture are:

• Modularity: MAS-based applications are mainly configured by selecting and defining the participating agents. Therefore, different modules made up of groups of agents may be changed easily.

• Scalability: Scalability is mainly achieved by duplicating the agents responsible for critical tasks, thus distributing the load between multiple identical agents and removing bottlenecks.

• Adaptability: MAS-based applications may be reconfigured at runtime, i.e. agents may be added or removed to adapt the functionality provided. The newly offered services may be used immediately.

• Distributedness: Mobile agents have the ability to migrate between platforms that may be located on different servers.

Using JIAC allows the MASP to be distributed according to different requirements. In a smart home environment, the available processing power is usually low, so that the supporting systems do not waste too much energy. For this purpose, a micro edition of the MASP uses an internal web server to answer service requests and is small enough to run in embedded environments such as home gateways.
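The idea of answering service requests from an embedded web server can be sketched with the JDK’s built-in com.sun.net.httpserver API, as shown below. This is not the MASP micro edition itself; the port, path, and response are placeholder assumptions that merely illustrate how small such an endpoint can be.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

/** Sketch of an embedded HTTP endpoint answering service requests on a home gateway. */
public class MicroGatewayServer {

    public static void main(String[] args) throws IOException {
        // Port and path are illustrative; a real gateway would be configured externally.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/service", MicroGatewayServer::handle);
        server.setExecutor(null); // default executor keeps the footprint small
        server.start();
        System.out.println("listening on http://localhost:8080/service");
    }

    private static void handle(HttpExchange exchange) throws IOException {
        // A real implementation would render a user interface from the abstract model here.
        byte[] body = "<html><body>Hello from the home gateway</body></html>"
                .getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().add("Content-Type", "text/html; charset=utf-8");
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(body);
        }
    }
}
```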


Figure 5.8.: MASP architecture

The architecture of the platform is divided into three main roles, as shown in figure 5.8:

• The Multi-Access Agent (MAA) receives and answers service requests from the user’s device. The MAA is the main access point for all user connections to the system.

• The Content Processor is responsible for processing all multimedia content. Under the precondition that all multimedia content contains semantic description attributes, the Content Processor can deliver the optimal presentation according to the user’s device.

• The Mediative Agent (MA) is a broker connecting the MAA and the services offered by the application server.

The MASP uses the Java Authentication and Authorization Service (JAAS) to enable services to authenticate users and to enforce access control mechanisms. Currently, an operating system login module as well as a device-independent login module is provided. The latter is directly connected to the user management directory of an LDAP server to authenticate user information. Since JAAS implements the pluggable authentication module (PAM) framework, other authentication modules can easily be added. As shown in figure 5.9, each PAM module can directly connect to an LDAP server to authenticate user information. A module that connects to the Sun Java System Access Manager in order to provide single sign-on features is currently being developed.
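Since JAAS is a standard Java API, the authentication step can be sketched directly, as below. The configuration entry name “MaspLogin” and the demo credentials are assumptions; which login module actually runs (operating system, LDAP, or a future single sign-on module) is decided by the external JAAS configuration, exactly as described above.

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

/** Sketch of a JAAS login; the configuration entry "MaspLogin" is an assumed name. */
public class JaasLoginSketch {

    public static void main(String[] args) throws LoginException {
        // "MaspLogin" must be defined in the JAAS configuration and point to a
        // login module, e.g. an LDAP-backed module as described above.
        LoginContext context = new LoginContext("MaspLogin", callbacks -> {
            for (Callback callback : callbacks) {
                if (callback instanceof NameCallback) {
                    ((NameCallback) callback).setName("alice");          // demo user
                } else if (callback instanceof PasswordCallback) {
                    ((PasswordCallback) callback).setPassword("secret".toCharArray());
                } else {
                    throw new UnsupportedCallbackException(callback);
                }
            }
        });

        context.login();                        // runs the configured PAM-style modules
        Subject subject = context.getSubject(); // principals are attached to the subject
        System.out.println("authenticated principals: " + subject.getPrincipals());
        context.logout();
    }
}
```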

Figure 5.9.: MASP authentication

After the login process, both the MASP and the Application Server may ask the user management for the user’s authorization to use the service. Authorization is supported by attaching a service control list (SCL) to each individual service offered by a service provider. An SCL contains rules for allowing or denying access to a service. These rules can be defined freely for any information available at the time a service is requested. Currently, predefined rules for JAAS subjects created during the login and rules for X.509 certificates that are exchanged in SSL connections are available. An SCL may be created or modified by using a security policy editor.
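Because the SCL format itself is not reproduced in this section, the following sketch uses a hypothetical in-memory representation: an ordered list of allow/deny rules evaluated against the JAAS subject created during login, with the first matching rule deciding and a default deny. All class names and the example rule are assumptions for illustration.

```java
import java.security.Principal;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;
import javax.security.auth.Subject;

/** Hypothetical service control list (SCL): ordered allow/deny rules over the JAAS subject. */
public class ServiceControlListSketch {

    static final class Rule {
        final boolean allow;
        final Predicate<Subject> condition;

        Rule(boolean allow, Predicate<Subject> condition) {
            this.allow = allow;
            this.condition = condition;
        }
    }

    /** The first matching rule decides; access is denied if no rule matches. */
    static boolean isAccessAllowed(List<Rule> scl, Subject subject) {
        for (Rule rule : scl) {
            if (rule.condition.test(subject)) {
                return rule.allow;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Illustrative rule set: members of the group "residents" may use the service.
        List<Rule> scl = List.of(
                new Rule(true, s -> s.getPrincipals().stream()
                        .map(Principal::getName).anyMatch("residents"::equals)),
                new Rule(false, s -> true)); // explicit default deny

        Subject subject = new Subject(true,
                Set.of((Principal) () -> "residents"), Set.of(), Set.of());
        System.out.println("access allowed: " + isAccessAllowed(scl, subject));
    }
}
```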

5.4. Summary

Based on the requirements for situation-aware ubiquitous services in smart home environments, we have implemented three main components that fulfill these requirements and simplify the service development process.


The device management component takes care of the integration of various existing technologies in smart homes and of their uniform access for service developers on an abstract level.

Our context model integrates contextual information from different sources and provides the basis for the definition of services. In addition, we have implemented algorithms that extract situations from context over time and support the recognition of user intention.

The MASP reduces the development effort for creating ubiquitous user interfaces for smart environments using a model-based approach. The utilization of our multi-agent technology makes the MASP small enough to run on less powerful home gateways.


6. CoSHA Evaluation

Based on the implemented components that have been described in the previous chapter, several prototype implementations of smart home services have been realized. The Smart Environment Testbed serves as a deployment environment for all services. It allows testing and evaluating the prototypes in a real smart home environment.

The Service Centric Home (SerCHo) project, funded by the Federal Ministry of Economics and Technology, provides the main architecture for the implementation of our components. It includes device integration, a tool suite for easy service creation, the user interaction framework MASP, and the context and situation model.

The smart home energy assistant is one of the implemented SerCHo services. It uses the device integration to gather information about the energy consumption of different home devices, it allows controlling the functions of home devices for energy saving, and it uses the MASP to present a user interface for easy and intuitive human-computer interaction with the system.

6.1. Smart Environment Testbed

The testbed realized in the SerCHo project provides a fully networked home infrastructure for the field of smart home environments. The automation of processes in the form of smart home services can be developed under real conditions, and scalability, functionality, and usability tests can be conducted. The smart environment testbed consists of four rooms with a sustainable technical infrastructure realizing a modern home environment. Installed technologies include a localization system with an accuracy of up to 30 cm, a variety of interaction devices allowing the provisioning of multimodal user interfaces in all rooms, and different home automation technologies (e.g. EIB, DigitalSTROM).

Figure 6.1.: Concept of smart environment testbed

The testbed consists of an apartment with four rooms: a kitchen, a living room, a working room, and a recreation room. The smart home testbed integrates a multiplicity of heterogeneous technologies and creates a holistic solution for a modern living concept. All rooms within the apartment are equipped with a location sensing technology, allowing people or things that are equipped with a tag to be located with an accuracy of about 20–30 centimeters. All appliances are individually switchable, as are all power outlets. In addition, each power outlet can monitor its status and current energy consumption. Besides all the sensing technology, a rich set of input and output devices allowing natural, multimodal interaction has also been installed. A touch screen has been integrated in the wall cupboard of the kitchen, for example; a microphone and loudspeakers in the kitchen add voice input and output to the set of available interaction channels. In the living room, the television screen is used as the main interaction device, using remote-control-based interaction with user interfaces. The recreation room acts as a showcase for integrating different health devices into our scenarios. A scale and an exercise bike are seamlessly integrated into the infrastructure, allowing health services to be built.

Figure 6.2.: Realization of smart environment testbed

Figure 6.2 on the left shows the kitchen unit with the integrated devices. We have been using Siemens home appliances based on the “serve@home” technology. Each device has a communication interface that allows the devices to communicate with each other via power line. The connection to other home automation components is realized via a Siemens web service gateway that offers the functions of the devices via a standardized web service API. The touch display that was integrated into the wall cupboard can also be seen. On both pictures, the sensors of the location system can be seen in the top corners of the room.

6.1.1. Communication infrastructure

Communication between services and the underlying system layer is completely based on IP, taking into account that, especially with IPv6 being widely available by now, IP allows comprehensive usage as a communication and connection technology in all areas of life. IP also allows the direct integration of home networks and devices into current networking and internet-based infrastructures. On the communication layer, IP can be seen as the least common denominator for networking technologies that upper layers use to facilitate communication. Though IP was chosen as the default communication standard on the top level, we had to integrate many other technologies on the physical layer which do not provide an IP communication interface directly. With the help of our device management solution, these technologies can also be accessed via UPnP over IP (see section 5.1).
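As an illustration of how devices become visible once they are mapped to UPnP over IP, the following sketch performs a plain SSDP search, the discovery step of UPnP. It is not part of the CoSHA device management implementation; the multicast address and port are the ones defined by the UPnP standard, everything else is a minimal stand-alone example.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

/** Minimal SSDP search: lists UPnP devices answering on the local network. */
public class UpnpDiscoverySketch {

    public static void main(String[] args) throws Exception {
        String search = "M-SEARCH * HTTP/1.1\r\n"
                + "HOST: 239.255.255.250:1900\r\n"
                + "MAN: \"ssdp:discover\"\r\n"
                + "MX: 2\r\n"
                + "ST: ssdp:all\r\n\r\n";
        byte[] payload = search.getBytes(StandardCharsets.US_ASCII);

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(3000); // stop listening after three seconds of silence
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("239.255.255.250"), 1900));

            byte[] buffer = new byte[2048];
            while (true) {
                DatagramPacket response = new DatagramPacket(buffer, buffer.length);
                try {
                    socket.receive(response);
                } catch (SocketTimeoutException timeout) {
                    break; // no more answers
                }
                String header = new String(response.getData(), 0,
                        response.getLength(), StandardCharsets.US_ASCII);
                // Each answering device reports the URL of its device description.
                System.out.println("device at " + response.getAddress().getHostAddress());
                header.lines().filter(l -> l.toUpperCase().startsWith("LOCATION:"))
                        .forEach(System.out::println);
            }
        }
    }
}
```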

On the physical layer, a broad variety of communication media is supported. The selection of relevant technologies for the four domains (low-range/wide-range, wired/wireless) is introduced in the following:

• Low-Range, Low-Bandwidth Wireless (PAN)

◦ Bluetooth: Bluetooth has a broad customer base and is a standard for low-range wireless connectivity. The support of a variety of different communication profiles makes Bluetooth a very flexible tool, though profiles with a smaller user base often lack software or hardware support. Another drawback of Bluetooth is its power consumption – though it is lower than 802.11 WLAN, it is still too high for many mobile application scenarios, especially for small sensor devices that fit the vision of networking technology hidden from the attention of the user.

◦ ZigBee: ZigBee is aimed at home control and communication scenarios and has a lower power consumption than Bluetooth. The development stage of ZigBee has not progressed as far as Bluetooth. Though there are standards and hardware components like ZigBee chipsets and extension cards for mobile devices, ZigBee has a much smaller footprint on current markets. Yet ZigBee allowed us to develop and integrate new devices, like energy measuring adaptor plugs. These plugs can be used in retrofit scenarios, where the lack of other communication infrastructure hinders the installation of other, competing technologies.


• Mid-Range, High-Bandwidth Wireless (LAN)

◦ IEEE 802.11 standards group: In the domain of wireless home networking for immobile devices, the 802.11 group of standards is the clear market leader. Another very interesting option is UWB (ultra-wideband) technology, as it allows the transmission of multiple HD AV signals over a wireless link with a bandwidth that would overload even a next-generation 802.11 link. Though UWB has not been chosen explicitly as a transmission technology, it is still used by one of the selected localization technologies.

• Wired technologies

◦ Ethernet: Ethernet, currently available for the home market with 100 Mbps and 1000 Mbps bandwidth, serves as the wired general-purpose backbone of the testbed. For the network topology, a full star topology was chosen for reasons of flexibility.

◦ Power line: Though power line has a lower bandwidth and lower reliability due to it being a shared medium, it still serves a valid purpose in a realistic home scenario, as refitting a whole house or apartment with CAT cabling for Ethernet use, or twisted pair for technologies like KNX, is often not feasible. In our smart home environment, we have chosen DigitalStrom as the power line technology, as its new approach promises a more robust operation. In combination with wireless technologies, power line can serve as a substitute for wired connectivity via Ethernet or twisted pair.

Our smart home environment was realized with heterogeneous technologies on purpose. We consider it unrealistic to assume that a home environment will be based on a single technology in the near future.


Another important architectural feature of our testbed is the separation of the home network from the public network and infrastructure by a home gateway, which connects the private home network to the internet. The gateway provides access to the network services provided by a service provider. From the home services perspective, the gateway allows the connection to remote services and content while keeping sensitive information within the user’s own four walls.

6.1.2. Middleware technologies

Three middleware technologies have been selected for the smart environment testbed, namely Universal Plug and Play (UPnP), the Java Intelligent Agent Componentware (JIAC), and the Open Service Gateway initiative (OSGi). While UPnP is more lightweight, featuring simpler functions like service discovery and usage, OSGi and JIAC support a full middleware framework approach.

Figure 6.3.: CoSHA Middleware


UPnP provides a generic connectivity backbone for device integration and can run on a number of transport protocols, which makes UPnP a de-facto standard for device interconnection in home environments. AV-specific functions can be enabled by using the UPnP-AV profile as used by the DLNA. Deploying UPnP as the basic technology for device interconnection also allows making other technologies like KNX or OSGi services available through communication gateways, mapping, for example, the KNX communication to the IP network. In contrast to UPnP, OSGi adds a management infrastructure to the pure service access layer, allowing the installation and management of services. Similar features are also provided by the JIAC framework, which adds BSI-certified security mechanisms as well as distribution and extended communication and development mechanisms. JIAC also provides agent-based service development features and supports the realization of artificial intelligence, which makes it especially interesting for learning and reasoning strategies.
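To make the OSGi part of this middleware mix tangible, the sketch below registers a hypothetical lamp service with the OSGi service registry from a bundle activator. The SwitchableLamp interface and the service property are illustrative assumptions; they are not part of the CoSHA device model, but they show how other bundles (for instance a UPnP gateway bundle) could discover and manage such a service.

```java
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

/** Sketch of an OSGi bundle publishing a hypothetical lamp service into the registry. */
public class LampBundleActivator implements BundleActivator {

    /** Illustrative service interface; the real CoSHA device model is not shown here. */
    public interface SwitchableLamp {
        void setOn(boolean on);
    }

    private ServiceRegistration<SwitchableLamp> registration;

    @Override
    public void start(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("device.location", "kitchen"); // hypothetical service property

        // Other bundles (or a UPnP gateway bundle) can now look this service up.
        registration = context.registerService(SwitchableLamp.class,
                on -> System.out.println("lamp " + (on ? "on" : "off")), props);
    }

    @Override
    public void stop(BundleContext context) {
        registration.unregister(); // the framework would also do this automatically
    }
}
```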

6.1.3. Entertainment and communication technologies

The UPnP standard enables the delivery of media content of any format to various devices available in the home environment. Additionally, control of the client devices is supported by UPnP. However, UPnP focuses on the control of devices and does not provide easy access for human users. Our middleware for user interfaces, the MASP, therefore connects to UPnP devices and provides a consistent user interaction among all devices.

In the living room we have chosen Windows XP Media Center Edition (MCE) as one of the main control interfaces for entertainment services. MCE serves as a basis for invoking our services and also allows the integration of other UPnP-based content. As MCE supports the remote control as an interaction device by default, our services can be controlled by a remote control when rendered by the MCE.


Additionally we have deployed other UPnP-based media rendering devices like the Streamium and Roku Soundbridge. These devices can be seamlessly selected as output devices for audio data.

To support communication services we have integrated Voice over IP (VoIP) based communication mechanisms in the underlying infrastructure. As with all infrastructure services, these can also be accessed by MASP user interfaces in a uniform way. Using the SIP/RTP protocol pair for VoIP signaling and data transport allows a seamless integration of virtual telephone services on the one hand and physical phone devices on the other hand.

6.1.4. Localization system

The integrated localization system allows the determination of the position of users and devices in the smart home. Services and interaction schemes can be optimized considering the user’s current position, movement, and nearby devices. To support localization, we decided to use a hybrid approach utilizing optical and radio technologies. The selected UWB localization system is able to provide a very accurate measurement, but has the drawback that the user has to wear a small tag. In addition, it is not possible to determine the orientation of the user. To further improve the gathered localization data, a second localization system is used, which is based on networked cameras and an algorithm that is able to track individual positions based on image recognition and estimate their line of sight. Combining these two systems allows a more precise detection of persons and devices, including the orientation of the user’s head. In combination with various available interaction technologies like voice recognition, gesture recognition, and multiple screens, the localization system allows the development of adaptive interaction technologies.
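The following sketch illustrates one plausible way to combine the two sources: the position with the better accuracy estimate is kept, and the orientation is taken from the camera system. The data structures and the simple selection rule are assumptions for illustration; the actual fusion algorithm of the testbed is not shown here.

```java
/** Hypothetical fusion of UWB tag positions with camera-based orientation estimates. */
public class LocalizationFusionSketch {

    static final class Position {
        final double x, y;          // metres in room coordinates
        final double accuracy;      // radius of the error estimate in metres
        Position(double x, double y, double accuracy) { this.x = x; this.y = y; this.accuracy = accuracy; }
    }

    static final class Pose {
        final Position position;
        final double headingDegrees; // line of sight estimated by image recognition
        Pose(Position position, double headingDegrees) { this.position = position; this.headingDegrees = headingDegrees; }
    }

    /** Prefer the more accurate position source; take the orientation from the cameras. */
    static Pose fuse(Position uwb, Position camera, double cameraHeadingDegrees) {
        Position best = (uwb.accuracy <= camera.accuracy) ? uwb : camera;
        return new Pose(best, cameraHeadingDegrees);
    }

    public static void main(String[] args) {
        Position uwb = new Position(2.40, 1.10, 0.25); // tag-based, ~25 cm accuracy
        Position cam = new Position(2.55, 1.05, 0.40); // image-based, coarser
        Pose pose = fuse(uwb, cam, 135.0);
        System.out.printf("user at (%.2f, %.2f) facing %.0f degrees%n",
                pose.position.x, pose.position.y, pose.headingDegrees);
    }
}
```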


6.2. Service Centric Home

The Service Centric Home (SerCHo) provides an infrastructure that is able to implement new, intelligent smart home services using components of the CoSHA architecture. These services mainly incorporate home devices and appliances. The main goals of the project were

• to enable a platform independent service development for smart homes

• to provide a software environment with hardware integration and uniform user interfaces

To achieve these goals, the SerCHo project was concerned with the development of an integrated solution consisting of a “Home Service Platform” and a “Service Provider Platform”, providing procedural models as well as service creation and deployment tools for the support of service development and service management. With the help of the SerCHo framework, future services can be designed to be ubiquitously usable and to support users more effectively and efficiently.

Figure 6.4.: SerCHo platforms


The SerCHo approach features a modular and scalable design, integrating and harmonizing various pervasive computing technologies. These include advanced locating technologies as well as advanced user interaction devices and controllable home electronic devices. Service developers and service providers can offer their modular services in an effective and efficient way. All services are ubiquitously usable and provide multimodal user interfaces, featuring even simultaneous multimodality in some cases. The service usage can be seamlessly transferred from one device to another. Thus the services developed with SerCHo fulfill the stringent usability requirements which apply to home applications.

6.2.1. Home Service Platform

The Home Service Platform is the main infrastructure tool to run new, intelligent home applications. Any service that incorporates the use of other services and home appliances is implemented to run on the Home Service Platform.

Advantages of having a Home Service Platform, rather than a centralized server in the cloud, are:

• Integration of local devices

• Reduced communication effort

• Better privacy of sensitive information

The home service platform uses our three components of the CoSHA architecture and has been deployed on a small embedded home gateway.

6.2.2. Service Provider Platform

The Service Provider Platform hosts services and applications that necessarily need to run outside the home domain, e.g. for consistency and security reasons, or that do not need to access resources located in the home.

The Service Provider Platform acts as an integration point for network-located service applications. Service applications need to have access to some of the operator’s resources, such as accounting systems and service management. Access to these systems is often only feasible using legacy interfaces, possibly by bridging these interfaces to other technologies. Thus, we do not prescribe specific interface definitions here.

Interfaces from one service application to another should be clearly defined. In our implementation two types of interface technologies are used: EJB and JIAC. There are adapters from one technology to the other (and vice-versa).

Interfaces outside of our system should be HTTP-based, using SOAP for invoking remote operations. The control of HTTP flows is easily manageable, and SOAP appears to be the most popular standard for remote invocations over HTTP.
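As a sketch of such an HTTP/SOAP interface, the following example uses the standard JAX-WS client API (javax.xml.ws). The WSDL URL, namespace, service name, and the AccountingService port interface are purely illustrative assumptions; the actual SerCHo service contracts are not reproduced here.

```java
import java.net.URL;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

/** Sketch of calling a remote service application via SOAP over HTTP with JAX-WS. */
public class SoapClientSketch {

    /** Hypothetical port interface; the real service contracts are not shown. */
    @WebService
    public interface AccountingService {
        @WebMethod
        String chargeService(String userId, String serviceId);
    }

    public static void main(String[] args) throws Exception {
        // WSDL location, namespace, and service name are illustrative assumptions.
        URL wsdl = new URL("http://provider.example.org/accounting?wsdl");
        QName serviceName = new QName("http://provider.example.org/", "AccountingService");

        Service service = Service.create(wsdl, serviceName);
        AccountingService port = service.getPort(AccountingService.class);
        System.out.println(port.chargeService("alice", "shea"));
    }
}
```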

6.3. Smart Home Services

Within the SerCHo project we have developed several services that have been deployed and evaluated in our smart environment testbed. In the following we describe three of these services. Each service targets a different application area and makes more or less use of our CoSHA architecture.

6.3.1. Smart Home Energy Assistant

The Smart Home Energy Assistant (SHEA) was created as a service that allows users to get transparency about their energy consumption in their home. The SHEA provides energy consumption measuring, analysis, and optimization tools.


Figure 6.5.: Screenshot of the SHEA

Figure 6.5 shows the main user interface of the SHEA. On the left side, a diagram allows analysis of past energy consumption. This diagram can show the overall household energy consumption over time as well as room- or device-based figures. The middle part of the user interface shows all devices in a specific area, their current state, and their energy consumption. The right side of the user interface allows changing the operation mode of the devices and also allows switching to another room.

An advanced version of the SHEA has also been realized that adds an additional feature: the integration of heterogeneous energy sources. In the realized scenario the energy provided by an electric or plug-in hybrid car can be used to level peaks in home energy consumption. Whenever there is a short-term high demand for energy, some devices within the smart home can be switched automatically to be supplied by the battery of the car. Later, in off-peak times, the battery of the car can be recharged again. In combination with flexible energy tariffs this feature can help the energy supplier to reduce peaks, and therefore increase efficiency, while simultaneously reducing the energy bill of the household.
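The decision logic behind this scenario can be pictured with a simple rule, sketched below. The thresholds, the battery reserve, and the method names are hypothetical values chosen for illustration; a real deployment would derive them from tariffs, device profiles, and the car’s state of charge.

```java
/** Hypothetical peak-levelling rule for the advanced SHEA scenario. */
public class PeakLevellingSketch {

    enum Source { GRID, CAR_BATTERY }

    // Illustrative thresholds; real values would come from tariffs and device profiles.
    static final double PEAK_THRESHOLD_WATTS = 3500;
    static final double OFF_PEAK_THRESHOLD_WATTS = 1500;
    static final double MIN_BATTERY_LEVEL = 0.3;   // keep a reserve for driving

    static Source selectSource(double householdLoadWatts, double batteryLevel) {
        if (householdLoadWatts > PEAK_THRESHOLD_WATTS && batteryLevel > MIN_BATTERY_LEVEL) {
            return Source.CAR_BATTERY;             // shave the peak from the car battery
        }
        return Source.GRID;
    }

    static boolean shouldRechargeCar(double householdLoadWatts, double batteryLevel) {
        // Recharge in off-peak times, ideally combined with a cheap tariff window.
        return householdLoadWatts < OFF_PEAK_THRESHOLD_WATTS && batteryLevel < 1.0;
    }

    public static void main(String[] args) {
        System.out.println(selectSource(4200, 0.8));      // CAR_BATTERY
        System.out.println(shouldRechargeCar(900, 0.45)); // true
    }
}
```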


Because of the SHEA’s ability to monitor not only the entire power usage but also the individual usage of specific appliances, recurring situations can be derived from the underlying context model, enabling the detection of uncommon activities within the ambient assisted living environment.

6.3.2. 4-Star Cooking Assistant

The focus of the 4-Star Cooking Assistant (4SCA) lies on offering multimodal access to the realized service and on controlling different kitchen appliances to support the user during the cooking process. Using the MASP, the 4SCA also demonstrates the capability of user interface adaptation for different devices as well as the integration of devices through the CoSHA device management.

Figure 6.6.: 4SCA user interface

The 4SCA provides cooking information (such as recipes) as well as guidance during the cooking process to the user. Furthermore, it automatically recognizes available devices in the kitchen and adapts appropriately. Multimodal interaction is supported, as the user is usually occupied and thus cannot use his hands for interaction. He interacts with the system via speech. On the other hand, providing the whole recipe only as speech output is not suitable, as the process requires complicated steps and information displayed in parallel. The recipe and the steps to accomplish it are therefore displayed on a screen in the kitchen and can be controlled via speech commands. It is also possible to let the system read out step details and other information.

The 4SCA application is composed of several context-dependent sub-services: a personalized recipe finder, a shopping assistant, and a cooking guide. The 4SCA makes strong use of the available context information by, for example, personalizing the recipe search process (in accordance with the user’s preferences and health status) and controlling the available cooking devices. The 4SCA assists the elderly by helping them to plan their meals according to their health status and makes interaction with modern kitchen appliances easier by providing multimodal interaction and automated control of appliances.

Figure 6.7.: 4SCA interaction flow

The interaction flow of the 4SCA, as depicted in figure 6.7, starts with a Start Screen where the user can choose between three options. He can continue with a recipe suggested by his health assistant (SHA), start a manual recipe search, or select a recipe that he has seen on his TV (via a semantic IPTV service). After he has chosen the meal to prepare, he is presented with all the ingredients needed for cooking that meal. In a multimodal interaction process he can check off those items on the list that he already has at home. After he has finalized the shopping list, he can optionally transfer it to his mobile device or start directly with the cooking process. Each cooking step is presented to him on a screen with the steps to carry out and, for each step, the options to view further details and to let the service automatically control the devices.

User interface adaptation

We take a look at the recipe finder application, which gives the user the possibility to input several criteria and start a database search for recipes (Figure 6.6). The criteria include the desired dish type (e.g. main meal or dessert), the amount of calories, and the national origin of the recipe. Each criterion contains several possibilities and the recipe search supports multiple-choice selection.

Figure 6.8 on the left shows the graphical user interface of the Recipe Finder when sufficient display size is available. Additionally, figure 6.8 visualizes the part of the task tree corresponding to the user interface elements presented to the user. To ease the understanding of our example we decided to use the Concurrent Task Tree (CTT) notation, a well-known task modeling notation. In figure 6.8 on the left, active tasks are marked green and the task currently performed by the user (recognized because the user focuses the associated GUI elements with the mouse) is marked blue. Assuming that the user switches to a different interaction device with a reduced screen size, it might become impossible to render all Recipe Finder elements at once because of the lack of space. In such a case the user interface must adapt. A possible adaptation includes rendering only one recipe criteria container at once and letting the user select the criteria she wishes to input. This can be done with a modification of the task tree where previously concurrent tasks are put in a sequence. Figure 6.8 in the middle shows the Recipe Finder after such an adaptation. Now the user may only set one type of criteria at once and navigate between the criteria types using the Next>> button. Ideally, the adaptation algorithm takes the current state of the application into account and does not hide the currently focused task (DishTypeSelection) from the user, so his interaction is not interrupted. Without the runtime data it might happen that one of the other criteria selections is rendered first after the adaptation (e.g. the calories selection). Such a behavior is very likely to confuse or at least irritate the user.

Figure 6.8.: Recipe Finder Adaptation

As a proof of concept we have used the adaptation metamodel to implement the recipe finder application. For this purpose we have connected the adaptation metamodel with our runtime architecture, the Multi-Access Service Platform (MASP). The MASP utilizes executable metamodels for the creation and delivery of multimodal user interfaces, as has been described in section 5.3. Thanks to the meta-metamodel, our architecture enables the designer to describe the user interfaces with any model conforming to an executable metamodel. For the development of our example we use a set of models compatible with the Cameleon Reference Framework, consisting of Task and Domain models, Abstract (AUI) and Concrete User Interface (CUI) models, a Service model, and a context model. On top of these models a Mapping model is used to connect and synchronize them at runtime. For all models we have defined executable metamodels, which specify how a conforming model is to be executed and which model elements hold the runtime state information. With the introduction of construction elements to our meta-metamodel we have extended the metamodels by adding suitable construction elements, which specify how the definition elements of each model can be altered. We have defined the Recipe Finder example application as a set of models as denoted in the above description. First we have specified an executable task model containing the sub-tree depicted in figure 6.8.

Figure 6.9.: 4SCA adapted user interface

Then the AUI and CUI models have been defined, containing abstract user interface components like commands associated with concrete user interface elements like buttons or labels. All have been mapped to each other and to the corresponding tasks via mappings held in a mapping model. For the sake of brevity we skip the description of the other models, since they are not directly involved in the example adaptations. The example adaptation described at the beginning requires modifications spanning multiple models at different levels of abstraction. First of all, the task model must be transformed into the form depicted in figure 6.8. Afterwards, each of the CriteriaSelection children tasks needs to be mapped to the AUI Command element associated with the Next>> button. We have therefore defined an adaptation CreateTaskSequence, triggered by the situation DisplayTooSmall and consisting of several steps that alter different models of the Recipe Finder application. Information from three models (Task, AUI, and Mapping) is needed for the adaptation; thus three absolute queries to the respective models have been defined in the form of XPath expressions:

• .//TaskModel:children[TaskModel:name=’CriteriaSelection’] pointing to the CriteriaSelection task in the Task model

• .//AUI:interactors[AUI:name=’NextCriteriaCommand’] pointing to the Command component in the AUI model

• ./query pointing to the root element of the Mapping model

Figure 6.10.: Recipe Finder Task Model

Because our task models contain the state information about the tasks, we were able to take the currently focused CriteriaSelection child task into account and assure that it is still visible to the user after the adaptation. We thus need to guarantee that the currently focused task will be the first task in the new task sequence. This has been done in two adaptation steps. The first removes the currently focused task from the CriteriaSelection sub-tree; thus its target is the absolute query pointing to the CriteriaSelection task. To remove the focused child task (extracted with the relative query /TaskModel:children[TaskModel:state=’Focused’]) the remove construction element is used. In the second step of the adaptation the removed task (stored in a removedFocusedTask variable) is added to the CriteriaSelection task as the first child task using the addAsFirst construction element. In the next steps the CriteriaSelection task must be marked as iterative and the temporal operators of its children tasks need to be set to Enabled. Finally, in the last step a mapping between the CriteriaSelection children tasks and the AUI Command element associated with the Next>> button must be established. This way, interactions with the button are propagated into the executable task model and a new set of available tasks is calculated. The new mapping is created using the addMapping construction element of the Mapping model, which requires specifying the type of the mapping (in this case Task2CommandMapping) as well as its source and target elements. Figure 6.10 shows the complete model of the adaptation in XML format. We have implemented the example adaptation 2 as a ReplaceButtonsWithLabels adaptation modifying the mapping model. This adaptation is triggered by the situation OnlyVoiceInputAvailable and contains adaptation steps which replace the mappings between the AUI components NextCriteriaCommand and StartSearchCommand and the CUI buttons with mappings to the desired text labels. Please note that the utilization of XPath queries allows referencing elements that do not exist during the entire lifecycle of the application. This is the case with the Next>> button, which may even never be created if the necessary context situation does not occur. In order to avoid the problem of an inconsistent user interface (elaborated in the description of example adaptation 2) we specified that the adaptation ReplaceButtonsWithLabels affects CreateTaskSequence. This way the former will always be executed after the latter if the trigger situation still occurs.
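To illustrate how such queries can be resolved at runtime, the sketch below evaluates two of them with the standard javax.xml.xpath API against a tiny stand-in for a serialized task model. The XML fragment, the use of attributes instead of namespaced child elements, and the element names are simplifications made for this example; they do not reproduce the actual MASP metamodel serialization.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

/** Sketch: resolving an adaptation query against a task model serialized as XML. */
public class ModelQuerySketch {

    public static void main(String[] args) throws Exception {
        // Tiny stand-in for a serialized task model; the real models use the
        // TaskModel/AUI namespaces, which are omitted here for brevity.
        String xml = "<TaskModel>"
                + "  <children name='RecipeFinder'>"
                + "    <children name='CriteriaSelection'>"
                + "      <children name='DishTypeSelection' state='Focused'/>"
                + "      <children name='CaloriesSelection'/>"
                + "    </children>"
                + "  </children>"
                + "</TaskModel>";
        Document model = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        XPath xpath = XPathFactory.newInstance().newXPath();

        // Absolute query: locate the CriteriaSelection task (cf. the queries listed above).
        Element criteria = (Element) xpath.evaluate(
                ".//children[@name='CriteriaSelection']", model, XPathConstants.NODE);

        // Relative query: the currently focused child, which must stay visible after adaptation.
        Element focused = (Element) xpath.evaluate(
                "./children[@state='Focused']", criteria, XPathConstants.NODE);

        System.out.println("focused task: " + focused.getAttribute("name"));
    }
}
```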

6.3.3. BerlinTainment

BerlinTainment was one of the first projects realized with the MASP platform. Offering services for convenient leisure time planning, the focus lies on the adaptation of user interfaces for mobile devices, session management, and personalization. BerlinTainment utilizes features of the MASP to create a device-independent and multimodal mobile service. The BerlinTainment project (Wohltorf et al., 2005) focused on the realization of a scalable Serviceware Framework based on multi-agent technology. The framework is utilized for the development of context-aware services, i.e. services providing personalized, location-based information. As a showcase for the functionality of the Serviceware Framework, different services in the entertainment domain have been developed and integrated into the BerlinTainment demonstrator. Using BerlinTainment, a tourist visiting Berlin is assisted by a set of services including a restaurant, theater, concert, and movie finder, a calendar, and a routing service. An additional service, the Intelligent Day Planner, integrates the different services, allowing the user to schedule his activities for a given day and to receive personalized and location-based recommendations for each activity, such as restaurant, cinema, or theater visits. Based on these recommendations, the user is given the possibility to make reservations for the various activities and to plan his route between the different locations. A frequent usage scenario highlighting the MASP-related features of BerlinTainment is using the “Intelligent Day Planner” to generate suggestions on how to spend the leisure time. Most users prefer to start BerlinTainment on their stationary personal computer, because using a browser-based interface is more comfortable for editing user preferences. After BerlinTainment has made a suitable recommendation, the user may freeze the service and leave for the location of the recommended event.


While traveling to the location, the user may want to obtain precise directions. This may be achieved by dialing the phone number of the BerlinTainment system with an in-car or mobile phone and selecting the location-based routing services by voice control. Reaching the location, the BerlinTainment user may finally continue the frozen service session in order to look at the details of the recommendation, reschedule suggestions, or review the visited location by using a WML-enabled smart phone.

Figure 6.11.: BerlinTainment WML screen

Utilizing the MASP, developers were able to focus on creating the different services comprising the BerlinTainment demonstrator rather than creating user interfaces for each different device. It turned out that the first mock-ups of user interfaces on different devices created by the MASP were usable and only had to be reworked slightly with respect to layout and graphical information. The implemented system benefits from using the Multi-Access Service Platform, which enables it to extend its services so that they can be used in a ubiquitous and seamless way. Examples of supported devices and modalities include mobile phones via WML-based user interfaces and telephones via voice-based interfaces. In each case, the MASP adds value to BerlinTainment for mobile users by letting them switch between modalities as they like. Figure 6.11 shows two WML screenshots of the BerlinTainment demonstrator. The left screenshot shows the portal service, the HTML interface of which is shown in figure 6.12. Note that the images of the HTML version have been reduced to textual links due to the display size restrictions of the device. The right screenshot shows a dialog in which the image size has been automatically adapted to the constraints of the device.

The following example explains the basic features of the process of adapting user interfaces with the MASP: Figure 6.12 shows the runtime optimization of a service portal user interface. The scenario “Select Service” has been defined in AIDL, consisting of a primitive data type describing the current state of the interaction (“I can offer you the following services:”) and a group (“available services”). This group is constrained to be rendered in a circular layout.

Figure 6.12.: BerlinTainment UI adaptation


If the user connects to the portal service, the MASP detects the device capabilities with the help of CC/PP. In our example, the screen width is 800 pixels, whereas the AIDL-based scenario asks for an optimal size of 600 pixels (indicated by the “Media Group” parameters in the figure). The preferred attribute indicates to the MASP whether the rendered user interface should be as near as possible to the optimum value, or as far away as possible from a minimal value.

Based on the screen and group constraints, the parameters for single images (“Multimedia data”) are adjusted: Due to the circular layout of the group, the width of the single images is preferred to be close to 100 pixels. The maximal width is limited by the maximal width of the superior element, i.e. the group.

Apart from constraints that are derived from the user’s device capabilities, there is another option for constraint definition from the service’s point of view. In our example, a minimum size constraint is given by the “Service Representation” parameters, requiring each single service image to be at least 50 pixels in width. Specifying constraints on the service level makes sense if a service is targeting specific audiences, such as people with disabilities who prefer larger text and images.
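The interplay of these constraints can be summarized as a simple clamping rule, sketched below with the numbers from the example (800 px screen, 600 px optimum group width, roughly 100 px preferred image width, 50 px service minimum). The resolution rule is a plausible reading of the description above, not the actual MASP layout algorithm.

```java
/** Sketch of resolving the image-width constraints from the portal example. */
public class ImageSizeConstraintSketch {

    /** Clamp a preferred width into the [minimum, maximum] interval. */
    static int resolveWidth(int preferred, int minimum, int maximum) {
        return Math.max(minimum, Math.min(preferred, maximum));
    }

    public static void main(String[] args) {
        int screenWidth = 800;     // reported by the device via CC/PP
        int groupOptimum = 600;    // optimal width requested for the "available services" group
        int preferredImage = 100;  // preferred image width in the circular layout
        int serviceMinimum = 50;   // minimum width from the "Service Representation" parameters

        // The group may not grow beyond the screen; each image may not exceed the group width.
        int groupWidth = Math.min(groupOptimum, screenWidth);
        int imageWidth = resolveWidth(preferredImage, serviceMinimum, groupWidth);

        System.out.println("group width: " + groupWidth + " px, image width: " + imageWidth + " px");
    }
}
```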

6.4. Summary

We have evaluated our approach with different prototype applications for smart homes. Each application targets a different application area. We have shown that our approach is feasible for creating smart home services for these different application areas.

The device management component has been used for the 4-Star Cooking Assistant and the Smart Home Energy Assistant. A variety of different devices has been integrated and accessed by the services on an abstract level. Especially when devices with different technologies address the same logical function (e.g. light management via DigitalStrom and EIB), our approach reduces the effort for the developer, as he does not need to care about the different technologies.

The context and situation model has been used in the middleware for all implemented services. Lately, we have integrated the context model as a basic feature of the MASP, as it has been shown that services for smart homes always need to access information about the environment.

The Multi-Access Service Platform has been used as the user interface platform for all services in our smart home environment. In BerlinTainment, the MASP shows its capability of rendering user interfaces for different devices. Based on device profiles, user interfaces are automatically adapted. In the 4SCA, the MASP enables the user to interact via different interaction methods (in this case touch screen and voice input) simultaneously.

7. Conclusion and Future Work

Finally, we want to summarize our work, highlight the most important contributions and publications, and provide an outlook on possible future research that can build on this work.

7.1. Conclusion

In this work, basic concepts for creating situation-aware ubiquitous services for smart environments have been explored. These services are characterized by the fact that they integrate seamlessly into the environment of the user. To be able to provide these services, the environment usually shows certain characteristics. It is saturated with a variety of technical devices that offer two main functions: sensing and affecting the environment on the one hand, and offering an interaction modality for the user on the other hand.

We have chosen smart homes, a sub-category of smart environments, as the application area of our research.

Aiming at the realization of smart home environments, a lack of developer support dealing with the complexity of smart environments has been observed within the current state of the art. We have identified three major gaps within the scope of this thesis:

1. missing interoperability between heterogeneous devices with various communication technologies and protocols in smart environments


2. a missing context and situation model that provides uniform access for ubiquitous services

3. missing support for user interfaces that adapt to the environment and the usage situation

The first gap has been addressed by a device model that abstracts from the underlying implementation properties. Using UPnP as a middleware layer allows developers to focus on the logical functions of devices instead of implementation-specific issues.

To solve the second problem, we have developed a situation model (Rieger and Albayrak, 2010) that is based on a 3-tier context model and integrates the human perception of a situation using Hidden Markov Models. Based on the context model, it is possible to find situations that match a human intention. Observing the context model over time therefore allows us to detect intention changes in human behavior.

The third gap is addressed by the Multi-Access Service Platform (Rieger et al., 2005), which uses a model-based approach to support the creation of ubiquitous user interfaces for smart homes. Using the context model of the environment, user interfaces automatically adapt to device properties and user preferences.

The approach has been evaluated with several prototype services that have been deployed into our smart environment testbed. Each of the applications makes use of one or more components of our approach and shows a significant improvement in the development of ubiquitous services for smart homes.

7.2. Future Work

Our approach with its three components can serve as a foundation for future research. As we have chosen a broad approach for our work, future work should deal with more specific aspects.


The integration of devices is currently mostly a manual task, which needs manual configuration for each technology. As this is a one-time effort, we have not considered automatic configuration of new devices. Future research could enhance our approach by using semantic device information that is provided by more and more devices nowadays using standards like UPnP. Proprietary device communication standards could also be integrated in a way that devices of that class are automatically added to the configuration. In the end, it should be possible for the end user to integrate new devices on the fly.

The recognition of human intention based on situational information, as in our approach, is only at the beginning. We think this promising approach could be expanded further to allow ubiquitous services to assist the user better. Human intention is the basis for the decisions of the user. Better understanding and evaluating these decisions can significantly enhance the user experience, as the adaptation, suggestions, and warnings of services improve.

Our testbed provides an excellent basis for future research in many directions. It can be enhanced physically by integrating new technologies and devices, like smart meters, personal health devices, or ambient displays. Thereby it can serve as a basis for new services and applications in existing and new application areas that have not been touched by this thesis. Additionally, new interaction and reaction methods like gesture interaction or the provisioning of ambient information to the user should be explored, as these would enhance the unobtrusive user experience in such an environment even further.


A. Outcomes of the dissertation work

The author received his degree of “Diplom-Informatiker (Dipl.-Inform.)” from the Technische Universität Berlin (TU Berlin), where he studied computer engineering (Technische Informatik) at the department of Elektrotechnik & Informatik from October 1997 to September 2003. Since October 2003, he has been a research assistant and a doctoral student at the DAI-Labor, a research laboratory of the Technische Universität Berlin.

A.1. Publications

Within the scope of this dissertation, the author has published the following articles at conferences and in a journal.

• (Rieger and Albayrak, 2010): Andreas Rieger, Sahin Albayrak, “Integrating Human Intention into a Situation Model for Smart Environments”, ICPS ’10: Proceedings of the 7th ACM International Conference on Pervasive Services, ISBN: 978-1-4503-0249-4

• (Lehmann et al., 2010): Grzegorz Lehmann, Andreas Rieger, Marco Blumendorf, Sahin Albayrak, “A 3-Layer Architecture for Smart Environment Models”, IEEE PerCom Workshop on Smart Environments (SmartE 2010), ISBN: 978-1-4244-6605-4


• (Albayrak et al., 2009): Sahin Albayrak, Marco Blumendorf, Sebastian Feuerstack, Tobias Küster, Andreas Rieger, Veit Schwartze, Carsten Wirth, Paul Zernicke, “Ein Framework für Ambient Assisted Living Services”, Ambient Assisted Living 2009, 2. Deutscher AAL Kongress, Berlin

• (Jameson et al., 2007): Anthony Jameson, Angela Mahr, Michael Kruppa, Andreas Rieger, Robert Schleicher, “Looking for Unexpected Consequences of Interface Design Decisions: The MeMo Workbench”, 6th International workshop on TAsk MOdels and DIAgrams (TAMODIA 2007)

• (Cissée et al., 2006): Richard Cissée, Jens Wohltorf, Andreas Rieger, Sahin Albayrak, “An Agent-Based Framework for Personalized Information Services,” EATIS 2006

• (Wohltorf et al., 2005): Jens Wohltorf, Richard Cissée, Andreas Rieger, “BerlinTainment: An Agent-Based Context-Aware Entertainment Planning System”, IEEE Communications Magazine, Vol. 43, No. 6, June 2005 (Entertainment Everywhere: System and Networking Issues in Emerging Network-Centric Entertainment Systems: Part II), New York, USA, pp. 102-109, ISSN: 0163-6804

• (Rieger et al., 2005): Andreas Rieger, Richard Cissée, Sebastian Feuerstack, Jens Wohltorf, Sahin Albayrak, “An Agent-Based Architecture for Ubiquitous Multimodal User Interfaces”, The 2005 International Conference on Active Media Technology, Best Paper Award, Takamatsu, Kagawa, Japan, ISSN/ISBN: 0-7803-9036-9

• (Wohltorf et al., 2004b): Jens Wohltorf, Richard Cissée, Andreas Rieger, Heiko Scheunemann, “BerlinTainment - An Agent-Based Serviceware Framework for Context-Aware Services”, Proceedings of the 1st International Symposium on Wireless Communication Systems - ISWCS 2004, IEEE Catalog Number: 04EX844C, ISBN: 0-7803-8473-3


• (Wohltorf et al., 2004a): Jens Wohltorf, Richard Cissée, Andreas Rieger, Heiko Scheunemann, “An Agent-Based Serviceware Framework for Ubiquitous Context-Aware Services”, Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2004), demonstration

• Jens Wohltorf, Nicolas Varone, Andreas Rieger, Burak Simsek, Min Hoe Park, “3G beyond 3G Services - Key factors for success”, working paper, 2003

A.2. List of talks

The following presentations and talks about his work were given by the author:

• “Nutzergerechte Telematikdienste am Beispiel des Projektes Connected Living”, Workshop “Gesund und selbstbestimmt im Alter - Senioren zwischen Gesundheitsförderung und Gesundheitsmarkt?”, March 30, 2011, Berlin

• “Integrating Human Intention into a Situation Model for Smart Environments”, International Conference on Pervasive Services, July 13, 2010, Berlin

• “User Interfaces for Smart Environments”, DSSE Seminar, Monash University, April 9, 2010, Melbourne, Australia

• “Smart Energy Solution – Energiemanagement Middleware”, Smart Metering Congress, December 6, 2007, Berlin

• “An Agent-Based Architecture for Ubiquitous Multimodal User Interfaces”, International Symposium on Wireless Communication Systems, May 19, 2005, Takamatsu, Kagawa, Japan


List of Figures

1.1. Structure of the thesis ...... 11

2.1. Home automation devices ...... 16

2.2. Use cases for smart homes (translated from Strese et al.(2010)) . . . . . 23

2.3. Preferences of end users for smart homes (Szuppa, 2007)...... 25

2.4. AAL innovation model ...... 29

2.5. KNX devices for home automation ...... 32

2.6. digitalSTROM chips - © aizo ag ...... 36

2.7. EnOcean power generator transmitter module - © www.enocean.com . . 37

3.1. Smart environments as an intelligent agent, based on (Cook and Das, 2007)...... 52

3.2. Architecture of Context Toolkit ...... 59

3.3. CoBrA Architecture ...... 63

3.4. SOCAM context model ...... 64

3.5. SOCAM architecture ...... 64

3.6. Situation spaces ...... 66

4.1. CoSHA smart home architecture ...... 76

4.2. UPnP mapping for lights ...... 80

4.3. Situation Awareness (taken from Endsley(2000)) ...... 86


4.4. Context aggregation process ...... 90

5.1. Example of connected devices ...... 94
5.2. Context-model ...... 97
5.3. 3-tier context and situation model ...... 100
5.4. Global context model ...... 101
5.5. Hidden Markov models for determining situations ...... 102
5.6. MASP layer model ...... 103
5.7. Concurrent Task Tree ...... 105
5.8. MASP architecture ...... 107
5.9. MASP authentication ...... 108

6.1. Concept of smart environment testbed ...... 112
6.2. Realization of smart environment testbed ...... 113
6.3. CoSHA Middleware ...... 116
6.4. SerCHo platforms ...... 119
6.5. Screenshot of the SHEA ...... 122
6.6. 4SCA user interface ...... 123
6.7. 4SCA interaction flow ...... 124
6.8. Recipe Finder Adaptation ...... 126
6.9. 4SCA adapted user interface ...... 127
6.10. Recipe Finder Task Model ...... 128
6.11. BerlinTainment WML screen ...... 131
6.12. BerlinTainment UI adaptation ...... 132



List of Tables

4.1. Use Case: Energy efficient appliance usage ...... 71
4.2. Use case: Leisuretime planning ...... 72
4.3. Use case: fall detection ...... 72
4.4. Comfort Scenarios ...... 73
4.5. Safety and Security scenarios ...... 74
4.6. Energy management scenarios ...... 74
4.7. AAL scenarios ...... 75
4.8. Input properties for smart homes ...... 78
4.9. Output properties for smart homes ...... 78


References

[Albayrak et al. 2009] ALBAYRAK, S. ; BLUMENDORF, M. ; FEUERSTACK, S. ; KÜSTER, T. ; RIEGER, A. ; SCHWARTZE, V. ; WIRTH, C. ; ZERNICKE, P.: Ein Framework für Ambient Assisted Living Services. In: Ambient Assisted Living 2009, 2. Deutscher AAL Kongress, 2009

[Bahl and Padmanabhan 2000] BAHL, P. ; PADMANABHAN, V. N.: RADAR: An In-Building RF-Based User Location and Tracking System. In: IEEE Infocom 2000. Los Alamitos, California : IEEE CS Press, 2000, 775-784

[Becker 2008] BECKER, M.: Software Architecture Trends and Promising Technology for Ambient Assisted Living Systems. In: KARSHMER, A. I. (Ed.) ; NEHMER, J. (Ed.) ; RAFFLER, H. (Ed.) ; TRÖSTER, G. (Ed.): Assisted Living Systems - Models, Architectures and Engineering Approaches. Dagstuhl, Germany : Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl, Germany, 2008 (Dagstuhl Seminar Proceedings 07462). – ISSN 1862–4405

[Chen and Kotz 2000] CHEN, G. ; KOTZ, D.: A Survey of Context-Aware Mobile Computing Research / Dept. of Computer Science, Dartmouth College. 2000 (TR2000-381)

[Chen 2004] CHEN, H. L.: An Intelligent Broker Architecture for Pervasive Context-Aware Systems, University of Maryland, Thesis, 2004

[Cissée et al. 2006] CISSÉE, R. ; WOHLTORF, J. ; RIEGER, A. ; ALBAYRAK, S.: An Agent-Based Framework for Personalized Information Services. In: Proceedings of the Euro American Conference on Telematics and Information Systems. Santa Marta, Colombia, 2006

[Cockburn 1998] COCKBURN, A.: Basic use case template / Human and Technology. 1998 (TR.96.03a)

[Cockburn 1997] COCKBURN, A. A.: Structuring use cases with goals. In: Journal of Object-Oriented Programming (1997)

[Cook and Das 2004] COOK, D. ; DAS, S.: Smart Environments: Technology, Protocols and Applications. 2004

[Cook and Das 2007] COOK, D. J. ; DAS, S. K.: How smart are our environments? An updated look at the state of the art. In: Pervasive Mob. Comput. 3 (2007), Nr. 2, S. 53–73. – ISSN 1574–1192

[Cook et al. 2003] COOK, D. J. ; YOUNGBLOOD, M. ; HEIERMAN, E. O. ; GOPALRATNAM, K. ; RAO, S. ; LITVIN, A. ; KHAWAJA, F.: MavHome: An Agent-Based Smart Home. In: Pervasive Computing and Communications, IEEE International Conference on 0 (2003), S. 521. ISBN 0–7695–1893–1

[Dey 2000] DEY, A. K.: Providing Architectural Support for Building Context-Aware Applications, Georgia Institute of Technology, Thesis, 2000

[DLNA 2004] DLNA: Use Case Scenarios - White Paper. Version: 2004, accessed on March 11, 2011. http://www.dlna.org/industry/why_dlna/DLNA_Use_Cases.pdf

[Edwards and Grinter 2001] EDWARDS, W. K. ; GRINTER, R. E.: At Home with Ubiquitous Computing: Seven Challenges. In: Ubicomp 2001: Ubiquitous Computing Bd. 2201, Springer Berlin / Heidelberg, 2001 (Lecture Notes in Computer Science). – ISBN 978–3–540–42614–1, 256–272


[Endsley 1995] ENDSLEY, M.: Toward a theory of situation awareness in dynamic systems. In: Human Factors 37(1) (1995), S. 32–64

[Endsley 2000] ENDSLEY, M. R.: Situation Models: An Avenue to the Modeling of Mental Models. In: Proc. of 14th Triennial Congress of the International Ergonomics Association and the 44th Annual Meeting of the Human Factors and Ergonomics Society, 2000

[Endsley and Garland 2000] ENDSLEY, M. R. (Ed.) ; GARLAND, D. J. (Ed.): Situation Awareness Analysis and Measurement. Lawrence Erlbaum Associates, 2000

[Franz et al. 2006] FRANZ, O. ; WISSNER, M. ; BÜLLINGEN, F. ; GRIES, C.-I.: Potenziale der Informations- und Kommunikations-Technologien zur Optimierung der Energieversorgung und des Energieverbrauchs (eEnergy) / wik-Consult - FhG Verbund Energie. 2006

[Gu et al. 2005] GU, T. ; PUNG, H. K. ; ZHANG, D. Q.: A service-oriented middleware for building context-aware services. London, UK : Academic Press Ltd., 2005. – ISSN 1084–8045, S. 1–18

[Hagras et al. 2004] HAGRAS, H. ; CALLAGHAN, V. ; COLLEY, M. ; CLARKE, G. ; POUNDS-CORNISH, A. ; DUMAN, H.: Creating an Ambient-Intelligence Environment Using Embedded Agents. In: IEEE Intelligent Systems 19 (2004), Nov.-Dec., Nr. 6, S. 12–20

[Helal et al. 2005] HELAL, S. ; MANN, W. ; EL-ZABADANI, H. ; KING, J. ; KADDOURA, Y. ; JANSEN, E.: The Gator Tech Smart House: A Programmable Pervasive Space. In: IEEE Computer 38 (2005), S. 50–60. – ISSN 0018–9162

[Intille et al. 2005] INTILLE, S. S. ; LARSON, K. ; BEAUDIN, J. S. ; NAWYN, J. ; TAPIA, E. M. ; KAUSHIK, P.: A living laboratory for the design and evaluation of ubiquitous computing technologies. In: Extended Abstracts of the 2005 Conference on Human Factors in Computing Systems, ACM Press, 2005, S. 1941–1944


[Intille et al. 2006] INTILLE, S. S. ; LARSON, K. ; TAPIA, E. M. ; BEAUDIN, J. ; KAUSHIK, P. ; NAWYN, J. ; ROCKINSON, R.: Using a Live-In Laboratory for Ubiquitous Computing Research. In: Pervasive, 2006, S. 349–365

[Jameson et al. 2007] JAMESON, A. ; MAHR, A. ; KRUPPA, M. ; RIEGER, A. ; SCHLEICHER, R.: Looking for Unexpected Consequences of Interface Design Decisions: The MeMo Workbench. In: 6th International Workshop on TAsk MOdels and DIAgrams (TAMODIA 2007), 2007

[Jang et al. 2001] JANG, S.-I. ; KIM, J.-H. ; RAMAKRISHNA, R. S.: Framework for Building Mobile Context-Aware Applications. In: Proceedings of the First International Conference on The Human Society and the Internet - Internet Related Socio-Economic Issues. London, UK : Springer-Verlag, 2001. – ISBN 3–540–42313–3, 139–150

[Kagal 2004] KAGAL, L.: A Policy-Based Approach to Governing Autonomous Behavior in Distributed Environments, University of Maryland Baltimore County, Thesis, 2004

[Kidd et al. 1999] KIDD, C. D. ; ORR, R. ; ABOWD, G. D. ; ATKESON, C. G. ; ESSA, I. ; MACINTYRE, B. ; MYNATT, E. ; STARNER, T. E. ; NEWSTETTER, W.: The aware home: A living laboratory for ubiquitous computing research, 1999, S. 191–198

[Kulak and Guiney 2004] KULAK, D. ; GUINEY, E.: Use Cases: Requirements in Context. Addison-Wesley Professional, 2004

[Kumar and Das 2006] Chapter Pervasive Computing: Enabling Technologies and Challenges. In: KUMAR, M. ; DAS, S.: Handbook of Nature-Inspired and Innovative Computing. Springer, 2006, S. 613–631

[Lehmann et al. 2010] LEHMANN, G. ; RIEGER, A. ; BLUMENDORF, M. ; ALBAYRAK, S.: A 3-Layer Architecture for Smart Environment Models. In: IEEE PerCom Workshop on Smart Environments Proceedings. IEEE Computer Society Publications Office, 2010. – ISBN 978–1–4244–6605–4


[Mozer 2005] Chapter Lessons from an Adaptive Home. In: MOZER, M. C.: Smart Environments: Technologies, Protocols, and Applications. John Wiley & Sons, Inc., Hoboken, NJ, USA, 2005

[Nehmer et al. 2006] NEHMER, J. ; BECKER, M. ; KARSHMER, A. ; LAMM, R.: Living Assistance Systems: An Ambient Intelligence Approach. In: ICSE ’06: Proceedings of the 28th International Conference on Software Engineering. New York, NY, USA : ACM Press, 2006. – ISBN 1–59593–375–1, S. 43–50

[Norman 1999] NORMAN, D. A.: The Invisible Computer. MIT Press, 1999

[Padovitz et al. 2008] PADOVITZ, A. ; LOKE, S. ; ZASLAVSKY, A.: Multiple-Agent Perspectives in Reasoning About Situations for Context-Aware Pervasive Computing Systems. In: Systems, Man and Cybernetics, Part A, IEEE Transactions on 38 (2008), July, Nr. 4, S. 729–742. – ISSN 1083–4427

[Park et al. 2007] Chapter Identifying a Generic Model of Context for Context-Aware Multi-services. In: PARK, T. H. ; KWON, O.: LNCS Bd. 4611/2007: Ubiquitous Intelligence and Computing. Springer Berlin / Heidelberg, 2007, 919–928

[Paternò 1999] PATERNÒ, F.: Model-Based Design and Evaluation of Interactive Applica- tions. Springer-Verlag, 1999 (Applied Computing). – 208 S. – ISBN 1–85233–155–0

[Pensas and Vanhala 2010] PENSAS, H. ; VANHALA, J.: WSN Middleware for Existing Smart Homes. In: Sensor Technologies and Applications, International Conference on 0 (2010), S. 74–79. ISBN 978–0–7695–4096–2

[Perumal et al. 2008] PERUMAL, T. ; RAMLI, A. R. ; LEONG, C. Y. ; MANSOR, S. ; SAMSUDIN, K.: Interoperability among Heterogeneous Systems in Smart Home Environment. In: Signal-Image Technologies and Internet-Based System, International IEEE Conference on 0 (2008), S. 177–186. ISBN 978–0–7695–3493–0

[Pounds-Cornish and Holmes 2002] POUNDS-CORNISH, A. ; HOLMES, A.: The iDorm - A Practical Deployment of Grid Technology. In: Cluster Computing and the Grid, IEEE International Symposium on 0 (2002), S. 470. ISBN 0–7695–1582–7

[Priyantha et al. 2000] PRIYANTHA, N. B. ; CHAKRABORTY, A. ; BALAKRISHNAN, H.: The Cricket Location-Support System. In: 6th ACM MOBICOM. Boston, MA, August 2000

[Rieger and Albayrak 2010] RIEGER, A. ; ALBAYRAK, S.: Integrating Human Intention into a Situation Model for Smart Environments. In: ICPS ’10: Proceedings of the 7th ACM International Conference on Pervasive Services, ACM Press, New York (NY), USA, 2010. – ISBN 978–1–4503–0249–4

[Rieger et al. 2005] RIEGER, A. ; CISSÉE, R. ; FEUERSTACK, S. ; WOHLTORF, J. ; ALBAYRAK, S.: An Agent-Based Architecture for Ubiquitous Multimodal User Interfaces. In: International Conference on Active Media Technology. Takamatsu, Kagawa, Japan, 2005

[Rouse and Morris 1986] ROUSE, W. B. ; MORRIS, N. M.: On looking into the black box: Prospects and limits in the search for mental models. In: Psychological Bulletin 100(3) (1986), S. 349–363

[Saizmaa and Kim 2008] SAIZMAA, T. ; KIM, H.-C.: Smart Home Design: Home or House? In: Convergence Information Technology, International Conference on 1 (2008), S. 143–148. ISBN 978–0–7695–3407–7

[Salber et al. 1999] SALBER, D. ; DEY, A. K. ; ABOWD, G. D.: The Context Toolkit: Aiding the Development of Context-Enabled Applications. In: CHI 99 Conference on Human Factors in Computing Systems. Pittsburgh, PA : ACM Press, May 1999, 434-441

[Schilit et al. 1993] SCHILIT, B. N. ; ADAMS, N. ; GOLD, R. ; TSO, M. ; WANT, R.: The ParcTab Mobile Computing System. In: Proceedings Fourth Workshop on Workstation Operating Systems (WWOS-IV) IEEE, 1993, S. 34–39


[Schmidt 2002] SCHMIDT, A.: Ubiquitous Computing - Computing in Context, Lancaster University, Thesis, November 2002

[Schmidt 2010] SCHMIDT, A.: Ubiquitous Computing: Are We There Yet? In: Computer 43 (2010), S. 95–97. – ISSN 0018–9162

[Strese et al. 2010] STRESE, H. ; SEIDEL, U. ; KNAPE, T. ; BOTTHOF, A.: Smart Home in Deutschland. 2010

[Szuppa 2007] SZUPPA, S.: Marktforschung für komplexe Systeme aus Sach- und Dienstleistungen im Privatkundenbereich: Entwicklung und Überprüfung eines Vorgehenskonzeptes am Beispiel des ’Intelligenten Hauses’, Brandenburgische Technische Universität Cottbus, Thesis, 2007

[Weiser 1991] WEISER, M.: The computer for the 21st century. In: Scientific American 265 (1991), September, Nr. 3, 66–75. ISBN 1–55860–246–1

[Winograd 2001] WINOGRAD, T.: Architectures for Context. In: Human-Computer Interaction 16 (2001), S. 401–419

[Wohltorf et al. 2005] WOHLTORF, J. ; CISSÉE, R. ; RIEGER, A.: BerlinTainment: an agent-based context-aware entertainment planning system. In: IEEE Communications Magazine 43 (2005), June, Nr. 6, S. 102–109

[Wohltorf et al. 2004a] WOHLTORF, J. ; CISSÉE, R. ; RIEGER, A. ; SCHEUNEMANN, H.: An Agent-Based Serviceware Framework for Ubiquitous Context-Aware Services. In: Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2004), Demonstration session, New York, USA (2004)

[Wohltorf et al. 2004b] WOHLTORF, J. ; CISSÉE, R. ; RIEGER, A. ; SCHEUNEMANN, H.: BerlinTainment - An Agent-Based Serviceware Framework for Context-Aware Services. In: Proceedings of the 1st International Symposium on Wireless Communication Systems (ISWCS ’04), 2004. – ISBN 0–7803–8473–3


[Zaslavsky 2008] ZASLAVSKY, A.: Smartness of Pervasive Computing Systems through Context-Awareness. In: Proceedings of the 8th International Conference, NEW2AN, and 1st Russian Conference on Smart Spaces, ruSMART, on Next Generation Teletraffic and Wired/Wireless Advanced Networking. Berlin, Heidelberg : Springer-Verlag, 2008 (NEW2AN ’08 / ruSMART ’08). – ISBN 978–3–540–85499–9, 261–262
