
“Hey Model!” – Natural User Interactions and Agency in Accessible Interactive 3D Models

Samuel Reinders, Matthew Butler, Kim Marriott
Monash University, Melbourne, Australia
[email protected], [email protected], [email protected]

ABSTRACT
While developments in 3D printing have opened up opportunities for improved access to graphical information for people who are blind or have low vision (BLV), they can provide only limited detailed and contextual information. Interactive 3D printed models (I3Ms) that provide audio labels and/or a conversational agent interface potentially overcome this limitation. We conducted a Wizard-of-Oz exploratory study to uncover the multi-modal interaction techniques that BLV people would like to use when exploring I3Ms, and investigated their attitudes towards different levels of model agency. These findings informed the creation of an I3M prototype of the solar system. A second user study with this model revealed a hierarchy of interaction, with BLV users preferring tactile exploration, followed by touch gestures to trigger audio labels, and then natural language to fill in knowledge gaps and confirm understanding.

Author Keywords
3D printing; Accessibility; Multi-Modal Interaction; Agency

CCS Concepts
• Human-centered computing → Accessibility;

INTRODUCTION
In the last decade there has been widespread interest in the use of 3D printed models to provide blind and low vision (BLV) people access to educational materials [50], maps and floor plans [20, 21], and cultural sites [39]. While these models can contain braille labels, this is problematic because of the difficulty of 3D printing braille on a model, the need to introduce braille keys and legends if the labels are too long, and the fact that the majority of BLV people are not fluent braille readers. For this reason, many researchers have investigated interactive 3D printed models (I3Ms) with audio labels [18, 38, 16, 45, 21]. However, to date almost all research has focused on technologies for interaction, not on ascertaining the needs and desires of the BLV end-user and their preferred interaction strategies. The only research we are aware of that directly addresses this question is that of Shi et al. [44]. They conducted a Wizard-of-Oz (WoZ) study with 12 BLV participants to elicit preferred user interactions for a pre-defined set of low-level tasks using three simple 3D models.

Here we describe two user studies that complement and extend this work. We investigate: (1) a wider range of interaction modalities, including the use of embodied conversational agents; (2) the desired level of model agency – should the model only respond to explicit user interaction, or should it proactively help the user; and (3) interaction with more complex models containing removable parts. Such models are common in STEM, e.g. anatomical models in which the organs are removable. We were interested in the interactions and level of model intervention desired by participants, particularly when reassembling the model.

Our first study, Study 1, was an open-ended WoZ study of I3Ms with eight BLV participants. It extends that of [44] in three significant ways. First, it uses a wider variety of models, some of which contain multiple components. Second, we asked the participants to use the model in any way they wished, allowing us to elicit a broader range of input and output modalities and interactions. Third, each participant was presented with a low- and a high-agency model, allowing us to explore the impact of agency.

Study 2 was a follow-up study with six BLV participants which involved exploring a prototype I3M of the solar system, the design of which was informed by Study 1. It supported tactile exploration, tap-controlled audio labels, and a conversational interface supporting natural language questions. This study allowed us to confirm the findings of Study 1, whilst addressing a limitation of that study, where participant behaviour may have been biased by a human providing audio feedback on behalf of the model. While synthesised audio output was actually controlled by the experimenter in Study 2, participants were unaware of this and believed the model was behaving autonomously.
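To make this interaction design concrete, the following is a minimal sketch of how an I3M of this kind might dispatch tap gestures to stored audio labels while routing spoken questions to a conversational agent. All names here (TouchEvent, speak, answer_question, the label text) are our assumptions for illustration, not the prototype's implementation; as noted above, the prototype's audio responses were in fact operated by the experimenter.

```python
# Minimal sketch of an I3M interaction loop (hypothetical names throughout):
# deliberate taps trigger passive audio labels; natural language questions
# are routed to a conversational agent stub instead.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    component: str  # e.g. "mars" on the solar system model
    gesture: str    # e.g. "single_tap"

# Passive audio labels keyed by model component.
LABELS = {
    "sun": "The Sun, the star at the centre of the solar system.",
    "mars": "Mars, the fourth planet from the Sun.",
}

def speak(text: str) -> None:
    """Placeholder for synthesised speech output."""
    print(f"[TTS] {text}")

def answer_question(utterance: str) -> str:
    """Stub conversational agent; a real system would need NLU."""
    return f"Sorry, I cannot answer '{utterance}' yet."

def on_touch(event: TouchEvent) -> None:
    # A deliberate tap gesture triggers the passive audio label.
    if event.gesture == "single_tap" and event.component in LABELS:
        speak(LABELS[event.component])

def on_speech(utterance: str) -> None:
    # Questions bypass the fixed labels and go to the agent.
    speak(answer_question(utterance))

on_touch(TouchEvent("mars", "single_tap"))
on_speech("How far is Mars from the Sun?")
```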
Contributions: Our findings contribute to the understanding of the design space for I3Ms. In particular, we found that:

• Interaction modalities: Participants wished to use a mix of tactile exploration, touch-triggered passive audio labels, and natural language questions to obtain information from the model. They mainly wanted audio output, but vibration was also suggested.

• Independence and model agency: Participants wished to be as independent as possible and establish their own interpretations. They wanted to initiate interactions with the model and generally preferred lower model agency; however, they did want the model to intervene if they did something wrong, such as placing a component in the wrong place (a sketch of this distinction follows the list).

• Conversational agent: Participants preferred more intelligent models that support natural language questions and, when appropriate, can provide guidance to the user.

• Interaction strategy: We found a hierarchy of interaction modality. Most participants preferred to glean information and answer questions using tactile exploration, then use touch-triggered audio labels for specific details, and finally use natural language questions to obtain information not in the label or to confirm their understanding.

• Prior experience: Interaction choices were driven by participants’ prior tactile and technological experiences.

• Multi-component models: Participants found models with multiple parts engaging and would remove parts to more readily compare them.
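The low- versus high-agency distinction can be illustrated with a short sketch. It assumes a hypothetical placement sensor on a multi-component model; the function names and behaviour are ours, not the study apparatus.

```python
# Illustrative sketch of the agency levels compared in Study 1, assuming a
# hypothetical sensor that reports where a removable part was placed.
from enum import Enum
from typing import Optional

class Agency(Enum):
    LOW = "low"    # model responds only to explicit user interaction
    HIGH = "high"  # model proactively intervenes and offers help

def on_part_placed(part: str, slot: str, correct_slot: str,
                   agency: Agency) -> Optional[str]:
    """Return a spoken intervention, or None to stay silent."""
    if slot == correct_slot:
        return None  # correct placement: no interruption needed
    if agency is Agency.HIGH:
        # The high-agency model volunteers a correction; participants
        # wanted this kind of intervention for errors like misplaced parts.
        return f"The {part} belongs in the {correct_slot} position."
    return None  # low agency: wait for the user to ask

# Example: misplacing Mars into the Venus orbit slot.
print(on_part_placed("Mars", "venus_orbit", "mars_orbit", Agency.HIGH))
```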
RELATED WORK
Accessible Graphics & 3D Models: The prevalent methods of producing accessible tactile graphics include using braille embossers, printing onto micro-capsule swell paper, or using thermoform moulds [40]. Their main limitation is that they cannot appropriately convey height or depth [21], restricting the types of graphics that can be produced to those that are largely flat and two-dimensional in nature.

As a consequence, handmade models are sometimes used in STEM education and other disciplines that rely on concepts and information that are more three-dimensional in nature. While handmade models are uncommon due to the difficulty and cost of producing them [21], commodity 3D printing has seen the cost and effort required to produce 3D models fall in line with tactile graphics. 3D printing has been used to create accessible models in many contexts: resources to support special education [12]; tangible aids illustrating graphic design theory for BLV students [29]; graphs to teach mathematical concepts [10, 22]; programming course curricula [24]; and 3D printed children’s books [26, 46].

To add interactivity, one approach used a smartphone camera to detect the user’s touch on 3D printed maps during tactile exploration, triggering auditory labels. This method only allowed the use of one hand to tactually explore the 3D prints, as it required the user to hold and point their smartphone camera at the print. Shi et al. [43] created Tickers, small percussion instruments that, when added to 3D prints, can be strummed and detected by a smartphone to trigger auditory descriptions. Testing, however, found that the strummers distorted the model’s appearance and interfered with tactile exploration. Further work by Shi et al. [45] investigated how computer vision can be used to allow BLV people to freely explore 3D prints and extract auditory labels, but this required affixing 3D markers to support tracking.

Very little research has investigated how BLV users would like to interact with I3Ms, with most studies offering only basic touch interaction [47, 18, 21]. A notable exception is Shi et al. [44], who conducted a WoZ study to examine input preferences and identified distinct techniques across three modalities: gestures, speech, and buttons. These findings were of considerable value; however, the study considered only three simple models, none of which featured detachable components, and focused on a pre-defined set of six generic low-level tasks involving information retrieval and audio note recording. Our research extends this by considering more complex, multi-component models, conversational agents, and the impact of model agency.

The role that auditory output can serve in I3Ms is also under-explored, with the majority of I3M research considering only passive auditory labels such as descriptions [18, 38, 16, 45, 21] and soundscape cues that add contextual detail [1, 11, 42]. Holloway et al. [21] gave preliminary guidelines for how auditory labels should be used in I3Ms, identifying that: (a) trigger points should not distort the model’s appearance; (b) triggering should be a deliberate action; and (c) different gestures should be used to provide different levels of information (see the sketch below). Co-designing I3Ms with teachers of BLV students, participants in Shi et al. [42] suggested that in addition to providing passive auditory labelling, I3Ms should allow users to ask the model questions about what it represents. However, this was not explored further in that study; doing so is a major contribution of our work.
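Guideline (c) can be pictured as a mapping from distinct deliberate gestures to levels of label detail. The sketch below is our illustration only; the gesture names and the two-level split are assumptions, not a design published in [21].

```python
# Sketch of Holloway et al.'s guideline (c): different gestures yield
# different levels of label detail. Gesture names and the two-level
# split are assumed here for illustration.
from typing import Optional

LABELS = {
    "jupiter": {
        "name": "Jupiter.",
        "description": "Jupiter, a gas giant and the largest planet "
                       "in the solar system.",
    },
}

GESTURE_TO_LEVEL = {
    "single_tap": "name",         # quick identification
    "double_tap": "description",  # longer, more detailed label
}

def label_for(component: str, gesture: str) -> Optional[str]:
    level = GESTURE_TO_LEVEL.get(gesture)
    if level is None:
        # Unrecognised contact is ignored: triggering must be a
        # deliberate action (guideline b), so brushing the model
        # while exploring it tactually produces no audio.
        return None
    return LABELS.get(component, {}).get(level)

print(label_for("jupiter", "double_tap"))
```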
Conversational and Intelligent Agents: Allowing the user