Smart eyeglasses, e-textiles, and the future of wearable computing
Invited paper
Oliver Amft
ACTLab, Chair of Sensor Technology, University of Passau, Germany
[email protected]

Keywords: wearable accessories, skin-attachables, ubiquitous computing, health monitoring, textile electronics, chewing monitoring, light exposure monitoring, wearable ethics

Where a decade ago mostly visions and bulky carry-on devices existed, today several wearable computing products can be found. For example, activity trackers are already selling in convenience stores. This development means neither that the core innovations of the wearable computing vision have been realised, nor that there will be further successful wearable devices beyond those activity trackers. Product announcements and explorations, such as Google Glass, have identified key challenges that call for further research investment. The lessons to learn from those recent developments are discussed here, leading to an approach based on multi-function materials and wearable devices. Two projects are described that implement a multi-function approach. In the SimpleSkin project, a generic fabric is developed to realise different sensor functions, controlled via software apps in a Garment OS. The same fabric material is used in smart eyeglasses to realise temple-integrated electrodes. Whereas SimpleSkin aims at skin-attached wearables, the smart eyeglasses developed here closely resemble regular glasses and thus could become publicly accepted wearable accessories. Moving towards wearable technology that is truly embedded into everyday life opens a series of new health support applications that are sketched here, based on the concept of smart eyeglasses.

Introduction

Wearable computing changes the interface between humans, data, and computers. Classic computers are operated by humans who enter information in the form of text and numbers, which the computer then analyses, processes, and stores. Not so in the vision of wearable computing as proposed by Steve Mann [15]: wearers keep their hands free and remain involved in a real-life task, while the computer continuously processes data from sensors. Operating sensors and computers in free living provides access to data that were inaccessible before. The sensor data can be used to extract context information, including wearer actions, position, and physiological and ambient states. Interrelating context information provides new knowledge. Nevertheless, wearable computing devices must attain acceptance and usability, which has proven to be a hard problem for various products launched or announced during the last years. In particular for fitness trackers, which represent the biggest wearable computing market so far, wearers abandoned devices quickly, i.e. within six to 18 months [14]. Apart from lacking functionality and robustness, which are typical issues of early products, long-term acceptance of wearable computing devices involves designing for, and integrating into, everyday life. Wearable devices need to integrate into usual everyday objects, either as wearable accessories, e.g. smart eyeglasses, or as skin-attachables, e.g. smart clothes. The integration can take diverse forms, as discussed further below. Clearly, the success of smartphones as mobile, on-body devices, where brands and models share a principal system architecture and form factor, is unlikely to be repeated by any particular wearable device.

A key approach could thus be to derive a common base structure that serves for different device realisations. Below, two different projects are discussed, one on smart textiles (skin-attachables) and one on smart eyeglasses (wearable accessories). Both follow the idea of using software 'apps' to derive application-specific solutions from a common physical baseline.

SimpleSkin: e-textiles with programmable sensing functions

Fabric patches form the basis of any garment. However, smart garment prototypes have often used highly specialised fabric materials, built to realise one particular sensor function. SimpleSkin aims to develop a general-purpose fabric that can implement different sensors, depending on software that configures the electronics and processes the retrieved data [6]. The approach could allow textile manufacturers to concentrate on a few smart fabrics produced in large volume, and garment confectioners to integrate a smart garment for one or several applications. Examples of sensor types that could be implemented using the SimpleSkin fabric include resistive sensors to detect strained or compressed garment regions, capacitive electrodes, bioimpedance electrodes, and others. While some function configurations are made during garment confectioning already, e.g. when an electrode placement suits one particular sensor type only, other configurations can be made at system runtime. As a result, multi-function garments can be constructed.
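To make the idea of software-defined fabric sensing more concrete, the following Python sketch shows how a Garment OS-style runtime might let apps switch generic fabric patches between sensor functions at runtime. It is an illustrative assumption only: the names GarmentOS, FabricPatch, SensorMode, configure, register_app, and tick are hypothetical and are not taken from the SimpleSkin project or its publications.

```python
# Hypothetical sketch of app-controlled sensor configuration on a generic fabric.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict, List


class SensorMode(Enum):
    RESISTIVE = auto()      # strain/pressure via resistance change
    CAPACITIVE = auto()     # touch/proximity via capacitance
    BIOIMPEDANCE = auto()   # tissue impedance between electrode pairs


@dataclass
class FabricPatch:
    """One region of the generic fabric; the electronics decide how it is read out."""
    region: str
    mode: SensorMode = SensorMode.RESISTIVE

    def read(self) -> float:
        # Placeholder: a real driver would configure the analog front end
        # for the selected mode and return a calibrated sample.
        return 0.0


class GarmentOS:
    """Minimal runtime that lets 'apps' reconfigure patches and consume samples."""

    def __init__(self, patches: List[FabricPatch]):
        self.patches: Dict[str, FabricPatch] = {p.region: p for p in patches}
        self.apps: List[Callable[[str, float], None]] = []

    def configure(self, region: str, mode: SensorMode) -> None:
        self.patches[region].mode = mode      # runtime re-configuration

    def register_app(self, handler: Callable[[str, float], None]) -> None:
        self.apps.append(handler)

    def tick(self) -> None:
        for region, patch in self.patches.items():
            sample = patch.read()
            for app in self.apps:
                app(region, sample)


# Example: a breathing app uses the chest patch as a strain sensor, while a
# posture app switches an upper-back patch to capacitive sensing.
gos = GarmentOS([FabricPatch("chest"), FabricPatch("upper_back")])
gos.configure("chest", SensorMode.RESISTIVE)
gos.configure("upper_back", SensorMode.CAPACITIVE)
gos.register_app(lambda region, value: print(region, value))
gos.tick()
```

The point of the sketch is that the fabric and its read-out electronics stay generic; only the software configuration and the apps consuming the samples differ between garments and applications.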
Smart eyeglasses: regular look, several sensing functions

Most of our senses, vital signs, and actions involve the head in some form, making it the most important body location for simultaneous sensing and interaction. Numerous wearable computing studies have shown that head-worn sensors and adequate signal processing can reveal behaviour, vital parameters, such as heart beat and breathing rate, and even mental state. Behaviour and vital data are key components of many assistance applications in daily life, from memory augmentation to advising chronic patients, and require continuous measurement and context estimation. Head-attached sensors and devices are, however, often considered uncomfortable, irritating, or stigmatising. Eyeglasses are ideally positioned to fill this sensing and assistance gap [3]. For smart eyeglasses to integrate into everyday life as a wearable accessory, they need to maintain the typical appearance of eyeglasses, even with sensors, processing, and potentially interaction functions. In addition to the integration challenge, smart eyeglasses must be multi-functional devices, e.g. running apps similar to smart watches and smartphones today.
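As an illustration of how such head-worn signal processing might look, the following Python sketch applies a generic short-term-energy burst detector to a synthetic temple-electrode (EMG-like) signal, where bursts could serve as a crude cue for activities such as chewing. The sampling rate, window length, and threshold factor are assumed example values; this is not the processing pipeline of the cited studies or of the smart eyeglasses described here.

```python
# Generic burst detector for a head-worn, EMG-like signal (illustrative only).
import numpy as np


def detect_bursts(signal: np.ndarray, fs: float,
                  win_s: float = 0.25, k: float = 3.0) -> np.ndarray:
    """Mark samples whose short-term energy exceeds k times the median energy."""
    win = max(1, int(win_s * fs))
    energy = np.convolve(signal ** 2, np.ones(win) / win, mode="same")
    threshold = k * np.median(energy)
    return energy > threshold


if __name__ == "__main__":
    fs = 200.0                                   # assumed sampling rate [Hz]
    t = np.arange(0, 10, 1 / fs)
    noise = 0.05 * np.random.randn(t.size)
    # Synthetic chewing-like bursts at ~1.5 Hz between seconds 3 and 6.
    burst = np.where((t > 3) & (t < 6), np.sin(2 * np.pi * 1.5 * t), 0.0)
    x = burst + noise
    active = detect_bursts(x, fs)
    print(f"fraction of samples flagged as active: {active.mean():.2f}")
```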
Lessons to learn from head-mounted displays

What started in 2012 as a highly innovative product development at Google quickly turned into the largest publicity disaster for wearable computing so far: Google Glass was enthusiastically celebrated by many researchers in the field as well as by early adopters. Their enthusiasm decayed in the search for the ultimate application, owing to the early stage of the technology. In the eye of the public, however, it was the steadily growing privacy concern of being recorded by Glass wearers that drove disapproval. The capability of recording the wearer's view on video as well as audio proved to be a no-go: Glass wearers were assaulted in various instances, and Glass was banned from restaurants, cinemas, and other venues. Privacy and fraud concerns, e.g. the option of filming movies in cinemas, showed with unexpected clarity that Glass was an incomplete development. Ethics related to wearable computing is further elaborated in the section on privacy below.

Google Glass was nevertheless a great innovation regarding technology integration in wearable computing: it provided a multitude of sensors, including motion, light, and bone vibration, as well as display optics and interaction features. The downside of this massive feature integration, however, was an unnatural appearance for eyeglasses in size, form factor, and battery runtime. Fig. 1 illustrates the device. A basic lesson of product development appears here: too many features do not necessarily make a product more attractive. From another viewpoint, the failure may be considered a confirmation of Mark Weiser's vision of invisible technology, which Glass clearly was not.

Ancestors of Glass can be seen in the large variety of head-mounted display (HMD) and head-up display (HUD) technology for virtual reality (VR) and augmented reality (AR) applications. VR and AR glasses have existed for more than a decade already. Examples of such HMDs are Oculus Rift, which renders a VR field of view according to head movement, as well as SixthSense and Vuzix, both being AR devices. Like Google Glass, the VR and AR HMDs are described as smart eyewear; however, they are focused on very particular applications.

[Fig. 1: Analysis of Google Glass. (A) Schematic illustration of the Google Glass HMD alignment onto eyeglasses, with labelled parts: bridge, frame front, nose pads, temple fronts, HUD optics, camera, touchpad, electronics (processing, power supply, sensors), temple ear bends, temple ends. (B) Google Glass worn.]

Glass displayed information on a small see-through screen and could be controlled via a touch interface. Designing smart eyewear for everyday use is an open challenge. Google Glass pioneered the exploration of everyday life applications and found viable ones. For example, Glass was used in a study with breastfeeding mothers to guide them with information and instructions, or to call a consultant, while nursing the newborn. Receiving information and advice while keeping the hands involved in a task appears to be a central application template for smart eyewear so far. Many lighthouse projects have shown that smart eyewear can provide clear value to wearers. The open challenge can thus be described as follows: wearable computing is born into a world of expected multi-functionality, spurred by the concept of apps on smartphones. Multi-functionality is a natural expectation: items that we consider wearing every day must be useful in various everyday situations. Moreover, everyday use contradicts