
Astral: Prototyping Mobile and Smart Object Interactive Behaviours Using Familiar Applications

David Ledo1, Jo Vermeulen2, Sheelagh Carpendale1, Saul Greenberg1, Lora Oehlberg1, Sebastian Boring3
1 Department of Computer Science, University of Calgary, Canada · {first.last}@ucalgary.ca
2 Department of Computer Science, Aarhus University, Denmark · [email protected]
3 Department of Computer Science, University of Copenhagen, Denmark · [email protected]

ABSTRACT
Astral is a prototyping tool for authoring mobile and smart object interactive behaviours. It mirrors selected display contents of desktop applications onto mobile devices (smartphones and smartwatches), and streams/remaps mobile sensor data to desktop input events (mouse or keyboard) to manipulate selected desktop contents. This allows designers to use familiar desktop applications (e.g. PowerPoint, AfterEffects) to prototype rich interactive behaviours. Astral combines and integrates display mirroring, sensor streaming and input remapping, where designers can exploit familiar desktop applications to prototype, explore and fine-tune dynamic interactive behaviours. With Astral, designers can visually author rules to test real-time behaviours while interactions take place, as well as after the interaction has occurred. We demonstrate Astral's applicability, workflow and expressiveness within the interaction design process through both new examples and replication of prior approaches that illustrate how various familiar desktop applications are leveraged and repurposed.

[Figure 1. Astral allows designers to prototype interactive behaviours by (1) mirroring contents of a desktop region to a mobile device, (2) streaming mobile sensor data to the desktop, and (3) remapping the sensor data into desktop input (e.g. mouse and keyboard events) on a designer-chosen desktop application.]

Author Keywords
Prototyping; design tool; interactive behaviour; smart objects; mobile interfaces.

CCS Concepts
• Human-centered computing → Interface design prototyping; User interface toolkits

INTRODUCTION
Smart interactive devices such as mobile devices, wearables and smart objects vary widely in input and output, physical form, and development platforms. When prototyping interactive behaviours for these devices, designers are faced with two options. The first option is to build prototypes directly on the target device or platform. This involves programming and, in some cases, circuit building or soldering (when prototyping in physical computing). As a result, implementation details consume the majority of designers' time and resources (e.g., code setup, learning the platform, programming basic visuals), instead of important early-stage design decisions such as exploring alternative solutions or defining elements of animation and interaction [13]. The alternative option is to explore a device's interaction via desktop-based prototyping tools (e.g. InVision). However, such prototyping tools often specialize in specific tasks (e.g. wireframing, standard UI widgets, creating animations as videos) without room for integrating different workflows [39].
Furthermore, while some tools support deploying prototypes on mobile devices, they often do not support live sensor input other than touch events on simple trigger-action states. This limits designers' ability to envision how the interaction might play out on the target device [25]. Exploring and fine-tuning responsive behaviours is crucial for interactions that (1) go beyond standard UI patterns on mobile platforms; (2) involve the target device's physical form or possible physical actions (e.g. shaking the device); and (3) contain nuanced continuous animations that play out as the inputs are happening (e.g. slide to unlock). In these situations, designers should be able to explore the dynamic interplay between inputs and outputs as interaction happens. We aim to empower designers to envision these responsive behaviours in their prototypes, a process that is currently not facilitated by existing tools in research and industry.

ASTRAL
We introduce Astral (Figure 1 & video figure), a prototyping tool that uses mirroring, streaming and input remapping via simple rules to enable designers to author, test, and fine-tune interactive behaviour on mobile devices using familiar applications. We achieve this through the following three steps:

1. Mirroring Desktop Visual Content to a Mobile Display. Astral consists of a desktop client and a mobile client application. As shown in Figure 1.1, the designer selects a region (here a web browser running Flappy Bird) to mirror it to a connected mobile (phone or watch) display. The mobile display is then updated live as the desktop display refreshes.

2. Streaming Mobile Sensor Data to Provide Input. As shown in Figure 1.2, a designer can select data from multiple types of sensors, such as touch, acceleration, ambient light or the microphone. In this case, the designer enables the touch sensor to be streamed back to the Astral desktop client. Every time the designer now taps the mobile display, the touch location is visualized on the Astral desktop client.

3. Interactivity via Input Remapping. As shown in Figure 1.3, the designer can provide interactivity to the mirrored display by remapping the mobile sensor data to desktop keyboard or mouse inputs. Here, by remapping a mobile touch-down event to a desktop spacebar keypress, the bird in the mirrored web browser can flap its wings. The designer can visually select the entire screen of the phone to sense touch events. Designers now have a greater range of expressiveness: they can prototype and fine-tune interactive behaviours, such as continuous and discrete actions, apply interactive animation easing functions, or emulate states.
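To make steps 2 and 3 concrete, the sketch below shows the kind of rule this pipeline implies: an incoming mobile sensor event is matched and remapped to a synthetic desktop input event. This is a minimal illustrative sketch, not Astral's actual code (the paper does not list its implementation); the Rule class and the message format are assumptions, and the synthetic keypress uses the pyautogui library purely for illustration.

    # Minimal sketch (not Astral's implementation): remap a mobile
    # touch-down event to a desktop spacebar keypress, as in Figure 1.3.
    # The Rule class and message format are assumed for illustration;
    # pyautogui is a real third-party library for injecting desktop input.
    import pyautogui

    class Rule:
        """Matches a streamed mobile sensor event and fires a desktop input action."""
        def __init__(self, sensor, event, action):
            self.sensor, self.event, self.action = sensor, event, action

        def apply(self, msg):
            # msg: one sample streamed from the mobile client, e.g.
            # {"sensor": "touch", "event": "down", "x": 120, "y": 300}
            if msg.get("sensor") == self.sensor and msg.get("event") == self.event:
                self.action()

    # Touch down anywhere on the mirrored phone screen -> spacebar,
    # so the bird in the mirrored Flappy Bird browser flaps its wings.
    flap = Rule("touch", "down", lambda: pyautogui.press("space"))

    def on_sensor_message(msg):
        # Called for every sample the mobile client streams to the desktop.
        flap.apply(msg)

    on_sensor_message({"sensor": "touch", "event": "down", "x": 120, "y": 300})

Because the synthetic keypress is delivered to whichever desktop application is being mirrored, the same rule mechanism would work unchanged whether the mirrored application is a browser, PowerPoint or AfterEffects.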
While we primarily focus on mobile interactions, Astral's building blocks and workflow also translate to prototyping smart object behaviours. Instead of challenging designers to solder, create custom circuitry and program electronics [5], our mobile focus by extension allows designers to place mobile devices into a fabricated shell and repurpose their inputs and outputs (e.g. as shown in Soul–Body Prototyping [32]).

Astral offers two contributions:

1. Enabling Designers to Leverage and Repurpose Familiar Applications to Prototype Interactive Behaviours. Astral combines and integrates display mirroring, sensor streaming and input remapping, where designers can exploit desktop applications (e.g. PowerPoint, AfterEffects, Chrome) [39, 63, 64] in a novel manner for prototyping interactive behaviours. Because any desktop application can be mirrored via Astral, any sensor data can be remapped to the mouse and keyboard events understood by that desktop application. Astral facilitates a tight design-test-refine cycle on target devices without needing to write code, through the encapsulation of sensor data and mappings into rules and through immediate interactive preview. Astral is designed to support various interaction design practices (getting the right design [16]), including: video-based prototyping, prototyping multiple alternatives, iterative prototyping, and smart object prototyping (by placing mobile devices into physical forms [32, 56, 57, 68]).

2. Allowing Designers to Visually Author Dynamic Interactions Based on Continuous Sensor Data. The common state model approach (e.g. d.tools [23], Adobe XD, InVision) does not lend itself well to dynamic visuals and fine-tuned temporal aspects of user interface behaviour. By working with and visualizing continuous streams of sensor data, and mapping (and easing) these to continuous outputs, our work enables authoring "user-in-the-loop behaviours, where continuous user input drives the behaviour" [21, p. 31] without coding. This goes beyond prior prototyping tools (e.g. Exemplar [22]) that focus on supporting designers in authoring interactive behaviours based on discrete events extracted from continuous sensor data [21, p. 99]. Moreover, Astral's continuous behaviours focus on how the input becomes animated as output while the action takes place.
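To illustrate the continuous mappings in the second contribution, the sketch below shows the general technique of easing a live sensor stream into a continuous output: each sample is normalized, clamped, and shaped by an easing function while the action unfolds, rather than waiting for a discrete trigger. The sensor range and the quadratic ease are illustrative assumptions, not values taken from Astral.

    # Minimal sketch (not Astral's implementation) of a continuous,
    # "user-in-the-loop" mapping: each accelerometer sample drives an
    # output property through an easing function while the action happens.
    # The tilt range and the quadratic ease are assumed for illustration.

    def ease_in_quad(t: float) -> float:
        """Quadratic easing: slow start, fast finish (t in [0, 1])."""
        return t * t

    def map_tilt_to_progress(tilt_deg: float,
                             lo: float = -45.0, hi: float = 45.0) -> float:
        """Map a tilt angle (degrees) to an eased 0..1 output value."""
        t = (tilt_deg - lo) / (hi - lo)   # normalize to 0..1
        t = max(0.0, min(1.0, t))         # clamp out-of-range samples
        return ease_in_quad(t)            # shape the response curve

    # Each streamed sample updates the output immediately, so the designer
    # can feel the behaviour (e.g. a slide-to-unlock knob) as they tilt.
    for sample in (-50.0, -20.0, 0.0, 20.0, 50.0):  # stand-in sensor stream
        print(f"tilt={sample:+.0f}  ->  progress={map_tilt_to_progress(sample):.2f}")

Swapping the easing function changes the "feel" of the same input-to-output mapping while the input is still in motion, which is the kind of fine-tuning of continuous behaviour described above.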
RELATED WORK AND GOALS
The goal of our prototyping tool is to author nuanced interactive behaviour on mobile devices. In this paper, we consider 'interactive behaviour' as the 'feel' of a prototype [49] that cannot be easily conveyed through a physical sketch and tends to require programming. 'Feel' emerges from not only the outputs once interactions happen, but also as interactions happen – a dynamic interplay discussed in past work on progressive feedback [62] and animation applications [18, 19].

We situate our research