Tactile Line Drawings for Improved Shape Understanding in Blind and Visually Impaired Users

ATHINA PANOTOPOULOU, Dartmouth College and Boston University, USA
XIAOTING ZHANG and TAMMY QIU, Boston University, USA
XING-DONG YANG, Dartmouth College, USA
EMILY WHITING, Boston University, USA

Fig. 1. We present a novel approach for generating tactile line drawings to improve shape understanding in blind individuals. (a) Physical 3D object (3D printed). (b) The input to our pipeline is a pre-partitioned object, colors indicate segmented parts. (c) A local camera is assigned to each part, with a master camera to combine the resulting multi-projection image. (d) Resulting stylized line drawing. Cross-sections are used for texturing the interior of each part to communicate surface geometry. (e) We evaluated the technique in a user study with 20 blind participants. Tactile illustrations were fabricated using microcapsule paper.

Members of the blind and visually impaired community rely heavily on tactile illustrations – raised line graphics on paper that are felt by hand – to understand geometric ideas in school textbooks, depict a story in children's books, or conceptualize exhibits in museums. However, these illustrations often fail to achieve their goals, in large part due to the lack of understanding in how 3D shapes can be represented in 2D projections. This paper describes a new technique to design tactile illustrations considering the needs of blind individuals. Successful illustration design of 3D objects presupposes identification and combination of important features in topology and geometry. We propose a twofold approach to improve shape understanding. First, we introduce a part-based multi-projection rendering strategy to display geometric information of 3D shapes, making use of canonical viewpoints and removing reliance on traditional perspective projections. Second, curvature information is extracted from cross sections and embedded as textures in our illustrations.

CCS Concepts: • Human-centered computing → Accessibility systems and tools; • Computing methodologies → Shape analysis; Perception; Non-photorealistic rendering.

Additional Key Words and Phrases: fabrication, design, accessibility, tactile shape perception, tactile images, non-photorealistic rendering

ACM Reference Format:
Athina Panotopoulou, Xiaoting Zhang, Tammy Qiu, Xing-Dong Yang, and Emily Whiting. 2020. Tactile Line Drawings for Improved Shape Understanding in Blind and Visually Impaired Users. ACM Trans. Graph. 39, 4, Article 89 (July 2020), 13 pages. https://doi.org/10.1145/3386569.3392388

Authors' addresses: Athina Panotopoulou, [email protected], Dartmouth College and Boston University, USA; Xiaoting Zhang, [email protected]; Tammy Qiu, [email protected]; Emily Whiting, [email protected], Boston University, USA; Xing-Dong Yang, [email protected], Dartmouth College, USA.

1 INTRODUCTION

Understanding the 3D geometry of everyday objects via 2D media is essential to live, learn, and work. Designers communicate the shape of a product (e.g., a chair) via sketches and renderings. Consumers understand the design by browsing images of the product in a catalogue. Students learn physics, such as why a plane is shaped in a certain way for aerodynamics, from figures in a textbook. Visual perception of complex 3D geometry from a 2D projection is taken for granted by people with healthy vision. However, for people with near or total blindness, understanding 3D geometry of daily objects from current media is extremely challenging.

In this paper we introduce a novel approach to generating tactile line drawings that aid 3D shape understanding in users with near or total blindness, enabling the blind community to perceive complex 3D objects from 2D media. Members of the blind community rely heavily on tactile illustrations, defined as raised graphics (e.g. on paper) that are felt by hand. Common uses of tactile illustrations are to make visual information accessible in textbooks, maps [Brock et al. 2015], scientific diagrams [Brown and Hurst 2012], children's literature [Claudet et al. 2008; Stangl et al. 2014; Stangl 2019], or museum exhibits.


The tactile illustration in Figure 2 shows two representative samples, both used to give access to artifacts displayed in the Museum of Fine Arts Boston.

Fig. 2. Tactile illustrations (a,c) displayed alongside exhibits in the Museum of Fine Arts Boston (b,d) for accessibility purposes (©Museum of Fine Arts, Boston, www.mfa.org). Black areas in illustrations indicate raised regions on the paper surface that can be perceived by touch.

To design these types of tactile illustrations, designers rely on guidelines provided by a number of associations, such as the Braille Authority of North America [2010]. Hundreds of pages of rules make manual design time consuming. For example, rules related to size, placement, variety, and form of different elements (e.g. lines, textures, points, labels, titles, and captions) permit easier exploration. Rules related to simplification, separation, elimination, and distortion of the illustration avoid clutter. Rules related to semantic segmentation, keying of the design, and selection of viewpoints permit easier understanding.

Unfortunately, studies have shown that shape understanding is flawed in traditional tactile diagrams, which can be seen from low object identification scores [Klatzky et al. 1993]. We aim to improve shape understanding by re-evaluating design considerations and accounting for the unique needs of blind individuals. For example, while current tactile illustrations are created from a standard single viewpoint, a blind individual who has never had access to visual information might find it unnatural to explore a visual image that includes perspective distortions. This is supported by drawings created by blind individuals [Kennedy 1993], in which illustrations of 3D objects appear to use a "fold-out" method to arrange faces onto a flat surface [Kurze 1997].

In this paper we first present two formative studies that motivated our approach to tactile illustration. We investigate salient shape properties in tactile perception, and assess the strengths and weaknesses of existing tactile illustration styles. Next, we propose a novel method to render tactile graphics with a focus on improving shape understanding of 3D objects. We design a pipeline introducing a multi-projection rendering approach combined with texture infills that communicate surface geometry. Compared to a single-viewpoint illustration, our multi-projection method carefully places a local camera for each semantic part of the input object, so that each part may be rendered from its optimal viewpoint. A compositing stage aligns image layers created from each camera to maintain proper connectivity of the input object. Our texturing technique then uses closely spaced lines generated from cross sections to give cues about surface geometry of the object. In summary, we make the following contributions:

• We present results of a formative study involving sculpting 3D objects, aiming to discover salient information for blind users in haptic shape understanding (Sec. 3.1).
• We present results of a second formative study on the success of existing 2D tactile illustration guidelines for shape understanding (Sec. 3.2).
• We introduce a new design methodology for tactile illustrations based on the outcome of our formative studies, applying techniques in multi-projection rendering and geometry-aware textures (Sec. 4).
• We design and implement a user study evaluating the ability for blind users to understand 3D shape with our new illustration style (Sec. 5).

2 RELATED WORK

Our approach to tactile graphics is motivated by cognitive principles in tactile perception and design principles of stylized graphic illustration.

Tactile Perception. Mental representations in shape understanding are an active area of research in psychology and neurobiology, particularly concerning similarities in how humans perceive shapes visually and through touch [Farley Norman et al. 2004; James et al. 2002; Lakatos and Marks 1999]. A prominent theory on shape understanding is "recognition by components" [Biederman 1987; Blake and Sekuler 2006], suggesting that our visual system processes shape information by representing an object as a spatial arrangement of 3D parts called geons (generalized cones). Our visual system extracts geons and their arrangement from an image to recognize the object. Visual translation theory hypothesizes that haptic perception is translated to the same visual representations [Cooke et al. 2007; Erdogan et al. 2014; Handjaras et al. 2016; Lederman and Klatzky 2009], while other work suggests vision and haptics are functionally overlapping but do not necessarily have equivalent representations of 3D shapes [Farley Norman et al. 2004; James et al. 2002; Lakatos and Marks 1999]. Our proposed work on illustration style abides by visual shape recognition theory, where pictorial representations should clearly show subcomponents and their arrangement in space [Thompson et al. 2006].

Tactile illustrations. Tactile illustrations are typically made by expert designers in a time consuming process, involving identification and application of principles from existing guidelines (e.g. Braille Authority of North America [2010]). Rules relate to use of points, lines, textures, and labels, as well as clutter avoidance, e.g., by elimination, simplification, or separation of graphical elements. However, previous studies have still shown poor 3D shape perception from traditional 2D tactile diagrams. Similar to our formative study (Sec. 3.2), Klatzky et al. [1993] cite success rates at the level of 30% for identifying everyday objects, with some improvement observed for textured images compared to raised line graphics [Theurel et al. 2013]. Thompson and Chronicle [2006] introduced the TexyForm system and discuss the need for re-evaluation of design methods.

Automatic tactile illustration generation has been studied for 2D input images, with improved edge extraction [Hernandez and Barner 2000], and filters for removing details too small to be perceived by touch [Way and Barner 1997].


In contrast, our proposed semi-automatic method operates on 3D geometric input and introduces a new stylization procedure. We were inspired by two lines of work that explored non-standard illustration techniques: a) First, the procedure of "unfolding" or "flattening" 3D shapes to obtain 2D depictions [Kurze 1997], based on drawings made by blind individuals. We expand on this idea with a multi-projection strategy for general 3D objects to create flattened illustrations. Our approach considers viewpoint selection, known to play a key role in 3D shape recognition [Farley Norman et al. 2004]. b) Second, we expand on the use of line curvature and orientation, which have been shown to be salient features [Hsiao 2008; Yau et al. 2015], used successfully in 2D tactile Braille and Moon alphabets [Moore 2011]. Thompson et al. [2006] studied the use of curvature lines in shape perception; however, examples were constructed case by case and did not provide a system for application on novel objects. We develop a systematic approach to the design of tactile illustrations requiring minimal manual intervention.

More broadly, research on tactile graphics covers a range of applications such as maps and scientific diagrams [Brock et al. 2015; Brown and Hurst 2012], computer user interfaces [Li et al. 2019], children's books [Cottin et al. 2008; Kim et al. 2015, 2014; Kim and Yeh 2015; Stangl et al. 2014; Stangl 2019; Walsh 2017], and artistic pieces [Ré 1981]. See Levesque [2005] for a survey.

Stylized Rendering. Extensive work has been done in computer graphics on non-photorealistic line drawings that improve how shape and topology are conveyed [Grabli et al. 2004], e.g., using suggestive contours [DeCarlo et al. 2003], emulating pen strokes on parametric surfaces [Winkenbach and Salesin 1996], and silhouettes and hatching [Hertzmann and Zorin 2000; Kalogerakis et al. 2012; Praun et al. 2001; Zander et al. 2004]. Studies show the effectiveness of these techniques in depicting shape [Cole et al. 2009]. However, these methods were designed for visual perception, and properties such as shading due to illumination are not relevant to tactile perception. Further, the cutaneous system for touch is limited in spatial resolution compared to the visual system [Blake and Sekuler 2006; Lederman and Klatzky 2009]. High frequency details are lost when perceiving haptically raised lines. Way and Barner [1997] identified size restrictions for the low resolution sense of touch which are enforced in most guidelines [Hasty 1999; McLennan et al. 1998; Braille Authority of North America 2010].

Geometric modifications can provide stylization [Liu and Jacobson 2019], or help visualize internal structure with approaches such as cutaways [Li et al. 2007], splitting objects [Islam et al. 2004], or using deformations [McGuffin et al. 2003]. Geometric modifications also help display the global shape of forms, by abstracting the geometry [Mehra et al. 2009], or improving visibility in exploded diagrams [Li et al. 2008; Tatzgern et al. 2010] for, e.g., complex mechanical assemblies. Non-geometric modifications include material changes, such as the use of transparency and ghosting [Diepstraten et al. 2003].

When representing 3D shapes in 2D media, the choice of camera projection varies significantly depending on use case. For example, orthographic and oblique projections are used widely in CAD and design [Ching and P. 2019; Krikke 2000], while linear convergent perspective projection is found more commonly, e.g., in videogames and film. Linear divergent perspective is used in artistic drawing [Howard and Allison 2011; JMS 2010], and non-linear perspective projection (e.g. [Sudarsanam et al. 2005]) characterizes wide-angle photography and some artistic stylizations, e.g., Escher [Brosz et al. 2007]. Past work in multi-projection rendering [Agrawala et al. 2000; Yu and McMillan 2004] shows how images from different camera viewpoints can be combined into a single image. We draw from the multi-projection approach of Agrawala et al. [2000] and extend their work to using local cameras for segmented parts within the same object.

3 FORMATIVE STUDIES

In this section we present two formative studies that guided our new design approach for tactile illustrations. The first investigates what information is considered salient in tactile shape understanding. The second study assesses how successfully existing illustration guidelines convey 3D shape information.

3.1 User Study 1: 3D Object Replication

We performed an object replication task to develop a deeper understanding of shape characteristics blind people consider salient. The experiment involved a sculpting task where the aim was to identify shape characteristics that participants focus on.

Fig. 3. Sculptures from User Study 1. (Top row) Reference objects. (Middle row) Participant 1 sculptures. (Bottom row) Participant 2 sculptures.

3.1.1 Apparatus and Participants. A variety of man-made and natural objects with different levels of shape complexity were selected. Two congenitally blind participants (1 female) participated in the study, where they were given an hour to sculpt five shapes, including a pyramid, dolphin, cup, chair, and human in a neutral standing pose (Fig. 3, top row), using 4 oz of polymer clay for each shape.

3.1.2 Task and Procedure. Participants were instructed to sculpt replicas of the shapes. They were also asked to identify each object and describe its shape and any difficulties encountered while sculpting.


3.1.3 Observations. The results of the study (see Fig. 3) suggest a number of interesting findings. First, the participants segmented the shapes into distinct geometric parts (F1). For example, they first created the four chair legs and brace segments, then proceeded to connect the pieces together. Participants were also able to correctly follow the topology of the reference shapes (F2). An example of this is sculpting the correct number of pieces to construct the legs in the chair model. They were also able to sense simple shapes of the objects or the cross-section of a part (e.g., square base for the pyramid, rounded cross-section for the body of the dolphin, or cylindrical cross-section for the cup). Participants were also able to notice differences when cross-sections change within the same part (F3). For example, they noticed the change in cross-section from the front to the back of the dolphin body. Participants also paid attention to the curvature properties of the surface geometry (F4). For example, they flattened the facets of the pyramid and created distinct edges, or sculpted a rounded shape for the dolphin's body.

Participants encountered a number of challenges during the study. For example, participants were unable to accurately replicate positions of parts (F5). Sometimes they even misidentified relative positions (e.g., failing to make the dolphin's pectoral flippers symmetric). The participants also experienced difficulties in accurately capturing proportions (F6), e.g., the height-to-width ratio of the pyramid was less important than replicating a sharp point at the apex.

Our findings suggest that to communicate 3D shape information effectively to blind users, the following important characteristics should be conveyed through the illustration:

• Distinct parts within an object (F1)
• Topology of the shape (i.e. preserve part connectivity) (F2)
• Cross section shape and propagation (F3)
• Curvature of surface geometry (F4)

Our findings also suggest that the following characteristics have low importance in shape understanding:

• Accurate placement of the connection points (F5)
• Accurate proportions (F6)

Note that F5-F6 were independently observed, not inferred from F1-F4. The preliminary findings in our object replication study align well with existing theory on shape understanding, in particular the "recognition by components" theory [Biederman 1987; Blake and Sekuler 2006] of how any view of an object can be represented as a spatial arrangement of 3D parts (called geons, or generalized cones).

3.2 User Study 2: Assessment of Existing Illustration Guidelines

3.2.1 Apparatus and Participants. Seven congenitally blind participants (3 female, aged between 30 and 72) participated in our study. We chose five reference objects familiar to our participants: teapot, lamp, eyeglasses, chair, and rocking horse. We created tactile illustrations for each object in three different styles (Fig. 4):

(1) Line drawing with object in non-canonical view (also known as perspective or 3/4 view).
(2) Line drawing with object in canonical view (e.g. front or side).
(3) Line drawing with canonical view and texture-infills to indicate different semantic regions in the illustration.

In case (2) we used two viewpoints (i.e. 2 corresponding illustrations) if important information was contained in both, following recommendations by the Braille Authority of North America (BANA) guidelines [2010]. In case (3) the separation into semantic regions and selection of textures were following BANA guidelines. All illustrations were made by a trained graphic designer. The physical tactile diagrams were created by laser engraving commercially available capsule paper.

Fig. 4. 3D printed reference objects for User Study 2 (row 1). Corresponding illustrations: non-canonical views (row 2), canonical views (row 3), and textured canonical views (row 4).

3.2.2 Task and Procedure. During the study, participants were asked to use the three tactile illustrations to recognize the tested objects, identify parts of the objects (e.g., locate the spout of the teapot), explain the perceived shape of the parts, explain the connection between the parts (e.g., where the spout connects to the main vessel), and count the number of the parts. We showed the objects in the same sequence to all users: (1) non-canonical, (2) canonical, (3) textured canonical.

3.2.3 Observations. Several important findings were observed from our study regarding the key characteristics related to tactile line drawings in communicating 3D shape information, including the number of viewpoints, viewpoint selection, occlusion, and shape understanding.

Multiple viewpoints. According to the Braille Authority of North America (BANA) guidelines for tactile graphics [2010], multiple views should be provided if they contain information important for the intended task (guidelines: 10.1.4; 7.1.1.5; 3.6.2). However, we found people could not successfully identify correspondences between the different illustrations (F7). Instead, participants often chose to explore their preferred viewpoint to perform a task, even if the essential information was contained in the other view. It was considered cumbersome to explore more than one at a time.


Viewpoint Selection. Selection of viewpoints has been shown to influence object recognition accuracy [Heller et al. 2009]. We also found that participants' ability to identify an object from an illustration had higher success rates for canonical viewpoints (21%) vs. non-canonical viewpoints (6%) (F8). However, participants were generally not satisfied with any of the canonical viewpoints provided. For example, in the eyeglasses illustrations participants commented that the front view depicted the correct size for the lenses but the wrong size for the frame arms, and the opposite was true for the side view. A participant suggested combining viewpoints: showing the lenses from the front and the frame arms from the side. That is, create a combined multi-projection image in which each part will be shown in its optimal view (F9).

Occlusions. One of the most common complaints from participants was the absence of parts from the diagrams, which was a consequence of occlusion (F10). For example, in the chair illustrations, front and side views were provided (see Fig. 4) and a participant indicated that the chair seemed to have 2 legs instead of 4. This is consistent with an existing study by Moringen et al. [2017] that showed occlusions affect tactile shape recognition.

Shape understanding. Illustrations rendered in canonical views were generally better for understanding surface geometry compared to non-canonical views (F8). When asked about parts being more rectangular vs. cylindrical, users were 71% correct for canonical illustrations vs. 48% for non-canonical views. However, in some cases the non-canonical views improved shape understanding. For example, the non-canonical view of the teapot was more successful for recognizing the spherical shape of the main container, whereas in the canonical view it was perceived as cylindrical or cubical. Possible reasons are the curved edge along the bottom of the teapot and the ellipsoidal shape of the lid (see Fig. 4). This is in agreement with a study by Thompson et al. [2006] which showed that curved lines as textures in tactile illustrations give indication about the geometry of the represented shape. Further, a study by Yau et al. [2015] showed that line orientations, as well as changes in the level of curvature and curvature direction of lines, are identifiable by touch (F11).

3.2.4 Foundations for the new design. Based on User Study 2, we propose a number of additional design guidelines to facilitate 3D shape understanding using tactile illustrations:

• Contain all information in a single illustration (F7)
• Depict canonical viewpoints (F8)
• Combine multiple viewpoints within a single illustration so that each part of the object can be depicted optimally (F9)
• Choose a viewpoint of the object to reduce occlusions (F10)
• Use curved lines to communicate surface geometry (F11)

4 TACTILE ILLUSTRATION METHOD

4.1 Overview

Given the design foundations derived from our formative studies, we present a new approach to creating tactile illustrations of 3D objects. Our method combines images from different viewpoints, applying techniques in multi-projection rendering. We also introduce a new approach to generating tactile infill textures that communicate the surface geometry of the object.

Our illustration approach draws inspiration from the artistic multi-projection rendering stylization introduced by Agrawala et al. [2000]. Similar to their work, we create a set of local cameras which render images from different viewpoints. We then employ a master camera to combine the local renderings into a single layered image. A key difference to Agrawala et al. [2000] is that our local cameras are assigned to segmented parts within the same object. This means our method must constrain relative positions of part-level renderings in order to maintain correct connectivity in the final illustration. Further, our approach automates camera positioning by defining a notion of optimal viewpoint for each part.

The input to our pipeline is a partitioned 3D object. Users can either manually partition the object or use an automated technique (e.g. [Zhou et al. 2015]). We use the PartNet dataset [Mo et al. 2019], consisting of pre-partitioned 3D objects in a wide range of common object categories, and provided in up-right orientation. Our pipeline outputs a stylized multi-projection rendering, which can then be fabricated to produce a tactile illustration. The main steps of our algorithm are as follows:

(1) Determine local camera poses. A part camera is assigned to generate a rendering for each decomposed part of the object, with position and orientation chosen according to visibility objectives (Sec. 4.3). A master camera is employed to combine the multiple renderings into a unified illustration (Sec. 4.2).
(2) Generate texture infills for each part rendering. We integrate surface geometry information using cross-sections extracted along the object skeleton (Sec. 4.4).
(3) Composite the final illustration from local part renderings. The combination of image layers accounts for relative up-vector alignment (Sec. 4.5), part connectivity within the 3D object, and occlusion handling (Sec. 4.6).

4.2 Master camera placement

In our multi-projection approach a master camera is used to organize the rendered images from each local part camera. As discussed in Sec. 3.2.4 (design foundation (F10)), viewpoints that result in non-coincidence of parts on the image (i.e., not more than one part appearing on each pixel) are preferred. The view from the master camera helps to define the local part camera frames by prioritizing visibility in the final composition.

To determine the master view direction $\hat{d}_m$ we follow the approach of Secord et al. [2011], where several attributes are defined for identifying preferred visual viewpoints. We notice that viewpoints that maximize the silhouette length result in decreased part coincidence. Silhouette length is defined as the overall length of the object's silhouette in the camera's image plane. E.g., in Fig. 6(D), overlap between the handle and main container is small when viewed from a direction that maximizes silhouette length.

Next, we align the up-vector of the camera $\hat{u}_m$ with the up-vector of the object. This results in a top-to-bottom layout in the final illustration (i.e., the base of the object is positioned at the bottom of the illustration), which is a natural choice and follows the Braille Authority of North America (BANA) guidelines and standards for tactile graphics [2010] (6.11). Note the object up-vector is not considered in Secord et al. [2011] for viewpoint selection. We fix the up-vector and constrain the camera location to a ring centered in the object (see Fig. 5(b)).
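For concreteness, a minimal Python sketch of this silhouette-length criterion follows. It assumes the trimesh library for adjacency queries, an orthographic camera, and an object up-vector of +z; the function names and the 72-sample candidate ring are illustrative choices, not the paper's implementation.

import numpy as np
import trimesh

def silhouette_length(mesh, d):
    # Total projected length of silhouette edges for orthographic view direction d:
    # an edge is on the silhouette when its two adjacent faces face opposite sides.
    side = mesh.face_normals @ d
    adj = mesh.face_adjacency                       # (E, 2) adjacent face pairs
    sil = np.sign(side[adj[:, 0]]) != np.sign(side[adj[:, 1]])
    edges = mesh.face_adjacency_edges[sil]          # shared edge (vertex ids) per flipped pair
    vec = mesh.vertices[edges[:, 1]] - mesh.vertices[edges[:, 0]]
    vec = vec - np.outer(vec @ d, d)                # project into the image plane
    return np.linalg.norm(vec, axis=1).sum()

def master_view_direction(mesh, n_samples=72):
    # Up-vector fixed to the object's +z; candidate directions lie on a horizontal ring.
    thetas = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    ring = np.stack([np.cos(thetas), np.sin(thetas), np.zeros(n_samples)], axis=1)
    return max(ring, key=lambda d: silhouette_length(mesh, d))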


Fig. 5. Overview of our pipeline for the design of tactile illustrations. (a) Input model, colors represent different parts of the faucet. (b) Master camera placement (inset box shows camera view). (e) Local cameras are assigned to each part (inset boxes show camera views). (f) The image layers are composited; positioning of each layer is guided by the master camera and the part connectivities (d). (g) Infill textures are generated based on cross sections of each part (black represents raised areas). (h) The fabricated illustration created with microcapsule paper.

Fig. 6. Different metrics considered for finding the master camera placement. The viewpoints maximize: (A) Silhouette Curvature Extrema, (B) Silhouette Curvature, (C) Projected Area, (D) Silhouette Length, (E) Surface Visibility, (F) Maximum Depth, (G) Depth Distribution, (H) Viewpoint Entropy. The maximum silhouette length (D) decreases part occlusion.

4.3 Part Camera Placement

Next, we determine a local frame for each part camera. As discussed in Sections 3.2 and 3.2.4 (design foundation (F8)), canonical views tend to improve shape understanding for blind users. To identify the canonical viewpoint for each part we first extract a coordinate frame $\{\hat{e}_1, \hat{e}_2, \hat{e}_3\}$ using PCA, similar to Vranic et al. [2001].

We next extract a one-dimensional skeleton for each part using the mean curvature skeleton by Tagliasacchi et al. [2012] (see white lines in Fig. 5(c)). We reject the PCA axis that most closely aligns with the skeleton, which will be important later when creating infill textures. Between the remaining axes we choose the direction corresponding to the largest projected area. For simplicity this is done heuristically by aligning a bounding box to the frame (see Fig. 5(c)), and selecting the axis corresponding to the largest face on the bounding box (i.e. face with normal $\hat{e}_k$). Between the $\pm\hat{e}_k$ directions, we choose the one that gives a viewing direction in the same half space as the master camera, where the local camera viewing direction is $\hat{d}_i = \mp\hat{e}_k$. The part camera location is given by $p_i = c_i - r\hat{d}_i$, where $c_i \in \mathbb{R}^3$ is the centroid of part $i$ and $r$ is the distance to the camera ($r > 0$). The part $i$ is centered in the local camera viewport.
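The per-part frame selection above reduces to a few lines of numpy. The following sketch reflects our reading of the section (PCA via SVD, and the bounding-box-face heuristic standing in for "largest projected area"); all names are illustrative.

import numpy as np

def part_camera(vertices, skel_dir, d_master, r):
    # Return (position p_i, view direction d_i) for the camera of one part.
    c = vertices.mean(axis=0)                       # part centroid c_i
    _, _, axes = np.linalg.svd(vertices - c, full_matrices=False)  # PCA axes e1..e3 (rows)
    # Reject the PCA axis most aligned with the part's skeleton.
    keep = np.argsort(np.abs(axes @ skel_dir))[:2]
    # Heuristic: choose the remaining axis whose orthogonal bounding-box face is largest.
    ext = (vertices - c) @ axes.T
    lengths = ext.max(axis=0) - ext.min(axis=0)     # box edge lengths per axis
    areas = [np.prod(np.delete(lengths, k)) for k in keep]
    e = axes[keep[int(np.argmax(areas))]]
    # Between +e and -e, view from the same half space as the master camera.
    d = e if e @ d_master > 0 else -e
    return c - r * d, d                             # p_i = c_i - r * d_i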

4.4 Cross Section Textures

The next step is to create the infill textures. Compared to the BANA standard [2010], which uses uniform textures to denote semantic regions, we choose to incorporate properties of surface geometry motivated by design foundations (F3, F4, F11). Many methods have been explored for creating interior lines and textures in non-photorealistic rendering, such as hatching and cross-hatching [Kalogerakis et al. 2012] or apparent ridges [Judd et al. 2007]. However these approaches are based on illumination effects of shading or view-dependent properties which are not relevant to blind users for tactile perception of 3D forms. Similar to Deussen et al. [1999] we choose to create texture infills using cross section boundaries to illustrate the geometry of each part.

As noted in Sec. 4.3, for each part we extract a 1D skeleton using the method of Tagliasacchi et al. [2012]. We initialize the cross sections to be uniformly sampled along the skeleton, with normals, $\hat{n}_{cs_j}$, aligned with the local skeleton direction. Next, to allow for visibility of cross-sections on the image and remove perspective distortion, each cross section is flattened by rotating to lie parallel to its corresponding part camera view plane (see Fig. 5(g) and Fig. 7). The skeleton sample point remains fixed, with the rotation axis given by $\hat{d}_i \times \hat{n}_{cs_j}$, where $\hat{d}_i$ is the view plane normal for local part camera $i$. We then render a line for each cross section silhouette from the corresponding part camera. The layering of cross section silhouettes in the image follows the sampling ordering along the skeleton (see pseudocode in Alg. 1).

The spacing of cross section lines considers limits on resolution for tactile perception. The cutaneous system has poor spatial resolution compared to vision [Blake and Sekuler 2006; Lederman and Klatzky 2009; Way and Barner 1997], such that high frequency details are lost when haptically perceiving raised lines. We follow recommendations from the BANA guidelines [2010] of 2-3mm gaps between elements to be distinguishable by hand.

Fig. 7. The chair displaying the bounding box and skeleton for each part (a). Cross sections sampled uniformly along the skeleton (b). The cross sections of each part as seen from their corresponding local cameras (c), and after the flattening operation (d).

ALGORITHM 1: FlattenedCrossSections(SK_i, Shape_i, d̂_i)
Input: SK_i = [sk_1 ... sk_n], a sequence of regularly sampled points on the skeleton; the geometry of the part, Shape_i; view plane normal d̂_i for the camera of part i.
Output: Cross sections CS_i = [cs_1 ... cs_n], parallel to the view plane and layered. Each cs_j is defined by a skeleton sample point b_j, a normal n̂_j, and Boundary_j.

for sk_j in SK_i do                      // Extract cross sections along skeleton
    b_j ← sk_j
    n̂_j ← local tangent direction at sk_j
    Boundary_j ← Shape_i ∩ CuttingPlane(b_j, n̂_j)
    cs_j ← CrossSection(b_j, n̂_j, Boundary_j)
end
for cs_j in CS_i do                      // Align cross sections with view plane
    if d̂_i ≠ n̂_j then
        RotAxis ← d̂_i × n̂_j
        PivotPt ← b_j
        RotAngle ← −Angle(d̂_i, n̂_j)
        cs_j ← RotateCrossSec(cs_j, RotAxis, PivotPt, RotAngle)
    end
    b_j ← b_j − SmallStep · j · d̂_i     // layer successive sections along the view direction
end
return CS_i
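A Python transcription of Algorithm 1 might look as follows, assuming trimesh for the plane-mesh intersection and rotation utilities; the array layout and the size of the layering step are our choices, not the authors'.

import numpy as np
import trimesh

def flattened_cross_sections(part_mesh, skel_pts, skel_dirs, d, small_step=1e-3):
    sections = []
    for j, (b, n) in enumerate(zip(skel_pts, skel_dirs)):
        # Boundary of the cut of the part by the plane through b with normal n.
        segs = trimesh.intersections.mesh_plane(part_mesh, plane_normal=n, plane_origin=b)
        pts = segs.reshape(-1, 3)
        axis = np.cross(d, n)
        if np.linalg.norm(axis) > 1e-9:             # d != n: rotate into the view plane
            angle = -np.arccos(np.clip(np.dot(d, n), -1.0, 1.0))
            T = trimesh.transformations.rotation_matrix(angle, axis, point=b)
            pts = trimesh.transformations.transform_points(pts, T)
        # Layering: push each successive section slightly back along the view direction.
        pts = pts - small_step * j * d
        sections.append(pts.reshape(-1, 2, 3))
    return sections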

4.5 Part Camera Up-Vector Alignment

We next align the orientation of the local part cameras. In Section 4.3 we determined the view direction for each part camera, $\hat{d}_i$, prioritizing the direction of max projected area for the part. However, the up-vector orientation of the camera, $\hat{u}_i$, was left ambiguous. Our approach takes into account the master camera viewpoint and connectivity of parts within the 3D object. As an initialization we use the vector field provided by Lopes et al. [2013] – the vertical axis is the master camera up-vector, $\hat{u}_m$, the normal vector corresponds to the local camera view direction, $\hat{d}_i$, and the tangent given by the vector field provides an initial guess for $\hat{u}_i$.

The next step is to identify connection points between adjacent parts. For each part $P_i$, we collect the set of adjacent parts $A_i$. We assign a representative point $a_{ij} \in \mathbb{R}^3$ to each connecting part $P_j$, for $j \in A_i$, computed as the centroid of the contact surface between $P_i$ and $P_j$. The part centroids $c_i$ and connection points $a_{ij}$ are shown for the faucet in Figure 5(d).

We make a slight adjustment for the reference "center" point of the part, $\bar{c}_i$. If there is only one connection point, the center of the part $\bar{c}_i$ coincides with the mesh centroid. Otherwise the average of all connection points is used. We then compute a rotation for each part camera's up-vector, $\hat{u}_i$, such that the relative horizontal positioning between $\bar{c}_i$ and $a_{ij}$ is the same when viewed in the master and part camera image planes (see Fig. 5(f) and Fig. 8). To achieve this we use the following steps:

Recall that we are using orthographic cameras, where the object size in the image plane does not diminish with depth. We define the vector $v = a_{ij} - \bar{c}_i$ in world space. The horizontal distance in the master camera image is given by $h = v \cdot \hat{x}_m$, where $\hat{x}_m$ is the x-axis of the master camera viewport. We then determine the rotation angle $\theta_{ij}$ of the camera frame around the view direction $\hat{d}_i$. The angle is chosen such that $v \cdot \hat{x}'_i = h$, where $\hat{x}'_i$ is the x-axis of the viewport for part camera $i$ after rotation (see Fig. 8).

We repeat the same process for all connection points of the part and use the average rotation, $\bar{\theta}_i = \mathrm{avg}_{j \in A_i}(\theta_{ij})$, as the final alignment of the up-vector, $\hat{u}'_i = R(-\bar{\theta}_i)\hat{u}_i$. The rotation axis is the viewing direction $\hat{d}_i$.

Fig. 8. Alignment of the part camera up-vector. (a) Input object as seen from the master camera. (b) Center and connections for the tabletop from the part camera view (top) and master camera view (bottom). Angle computed so that horizontal distances of connections, $h$, match the master camera view. (c) Final upright orientation of the local camera, $\hat{u}'_i$.
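The per-connection angle $\theta_{ij}$ has a closed form: writing $v$ in the part camera's in-plane basis gives $v \cdot \hat{x}'(\theta) = p\cos\theta + q\sin\theta = r\cos(\theta - \varphi)$, which is set equal to $h$. A sketch of this solve follows, under the assumed convention $\hat{x} = \hat{u} \times \hat{d}$ (the paper does not state its camera-axis convention).

import numpy as np

def connection_angle(v, d, u, x_m):
    # Roll about the view direction d so the horizontal offset of the connection
    # vector v matches the master camera image: v . x'(theta) = h.
    h = np.dot(v, x_m)                              # horizontal distance in master image
    x = np.cross(u, d); x /= np.linalg.norm(x)      # part camera x-axis (assumed convention)
    y = np.cross(d, x)                              # completes the in-plane basis
    p, q = np.dot(v, x), np.dot(v, y)
    r, phi = np.hypot(p, q), np.arctan2(q, p)
    delta = np.arccos(np.clip(h / r, -1.0, 1.0))    # solves r*cos(theta - phi) = h
    wrap = lambda t: np.arctan2(np.sin(t), np.cos(t))
    return min((wrap(phi + delta), wrap(phi - delta)), key=abs)  # pick the smaller roll

Averaging these angles over all connections of a part yields $\bar{\theta}_i$ for the update $\hat{u}'_i = R(-\bar{\theta}_i)\hat{u}_i$.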


4.6 Compositing Part Renderings

The final stage is to combine the images from each local part camera to create the composited illustration. We compute a depth value for each part, calculated as the distance from the master camera location, $p_m$, to the furthest vertex of the part's bounding box (in the master camera coordinate frame). The order controls the layering with which images appear on the multi-projection illustration. The default is to start from the farthest and move to the closest part. The user may choose to change the ordering manually since this does not handle many occlusion cases, e.g., where parts close to the camera completely occlude other parts.

Initially all image layers are stacked such that their local frames align (i.e., image centers coincide). We traverse the graph of all parts starting with the farthest part, where nodes are the parts $P_i$, and edges are their part connectivity $A_{ij}$. When we traverse the edge $ij$, i.e., a connection from part $P_i$ to its adjacent part $P_j$, we update the position of the image layer from part camera $j$. The image layer is translated to enforce the part renderings to align at their connection point (Fig. 9). The translation vector $t_j \in \mathbb{R}^2$ is given by

$t_j = \mathrm{proj}_i(a_{ij}) - \mathrm{proj}_j(a_{ij})$

where $\mathrm{proj}_j(a_{ij})$ is the projection of connection point $a_{ij}$ in the image plane of part camera $j$. After applying the transformations, the connection points from different camera renderings align on the image plane. Note we assume no cycles exist; this procedure does not handle cases where the connectivity of parts forms a multiply-connected graph rather than a tree.

As a final post-processing step, we create an outline around each of the part textures of 1mm thickness, and enforce a 3mm wide gap between each part (see implementation details in Sec. 6). This helps to differentiate between parts, following BANA guidelines [2010]. Renderings for the tactile illustrations of a selection of objects can be seen in Fig. 5(g) and Fig. 10.

Fig. 9. Aligning part camera image layers. (a) The input model as seen from the master camera. (b) If the centroids in the master camera view are used for aligning part renderings, the resulting image is disconnected. (c) Instead, the connection points are used for aligning image layers.
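The traversal that accumulates the layer translations can be sketched as a breadth-first walk over the part graph; the adjacency dictionary, connection-point map, and projection callback below are hypothetical stand-ins for the pipeline's actual data structures.

import numpy as np
from collections import deque

def layer_offsets(root, adjacency, conn_pts, proj):
    # adjacency[i]: parts adjacent to part i; conn_pts[(i, j)]: 3D point a_ij;
    # proj(i, a): 2D projection of point a in part camera i's image plane.
    offsets = {root: np.zeros(2)}                   # start from the farthest part
    queue = deque([root])
    while queue:
        i = queue.popleft()
        for j in adjacency[i]:
            if j in offsets:
                continue                            # tree assumption: no cycles
            a = conn_pts[(i, j)] if (i, j) in conn_pts else conn_pts[(j, i)]
            # t_j = proj_i(a_ij) - proj_j(a_ij), accumulated along the tree
            offsets[j] = offsets[i] + proj(i, a) - proj(j, a)
            queue.append(j)
    return offsets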

5 USER STUDY 3: EVALUATION

We conducted a user study to evaluate the effectiveness of our novel tactile diagrams. The study included an object recognition task, in which participants with near or total blindness were invited to a controlled lab environment to identify a set of daily objects through the proposed tactile illustrations.

5.0.1 Apparatus and Participants. Our study consisted of 20 blind participants: 12 female, aged between 26 and 72, and 14 congenitally blind. Seven objects were used in the study; we chose common household objects that would be familiar to the participants (in the order presented): monitor, square table, headphones, chair, curved monitor, round table, and faucet (see Fig. 10). For each object, the study materials included:

(1) Three variations of the 3D reference object (Fig. 12). All three versions were physically fabricated using 3D printing.
(2) One tactile illustration generated with our technique (Fig. 10). Fabricated with microcapsule paper (Fig. 11).
(3) One tactile illustration designed following BANA guidelines as a baseline. Fabricated with microcapsule paper.

Including variations of the 3D objects allowed us to measure how well our illustration technique helps participants precisely perceive the 3D geometry information and recognize subtle differences between the reference objects and their variations. To create variations of the 3D reference objects, each model was modified by either a replacement operation, subtraction, or addition of parts. The variations of the objects are as follows (see Fig. 12):

• Monitor 1: changes in base design
• Table 1: leg design (rectangular vs. cylindrical cross section, and cylindrical with connectors)
• Headphones: size of ear cushion and outer shape of earpieces
• Chair: curved vs. flat back, and presence of cushion
• Monitor 2: curved vs. flat screen and CRT style
• Table 2: number of legs and straight vs. tilted shape
• Faucet: number and shape of lever arms on handles
• Jug (training example): cross section shape of main container

Example materials for the chair are shown in Figure 13. Materials for all objects in the study are included in Supplemental Materials.

5.0.2 Task and Procedure. The tactile illustrations were presented to our participants, whose task was to find the corresponding 3D reference object. A baseline was included to allow us to measure the performance of our technique over the state-of-the-art. The baseline was produced by following the Braille Authority of North America (BANA) guidelines [2010]. See sample materials for the chair object in Fig. 13. Participants were trained in how to interpret the tactile illustrations using the jug example. The procedure was as follows:

(1) Participants were given materials for one object at a time, consisting of three variations of the object (3D printed), and one illustration of "type A".
(2) The participant was asked to choose which one of the three object variations most closely matched the illustration.
(3) Steps (1) and (2) were repeated for all seven objects.
(4) Steps (1-3) were then repeated using illustrations of "type B".

For half of the participants "type A" was the baseline (BANA guidelines) and "type B" was our illustrations. For the other half of participants the illustration type was reversed. We chose to use only three variations of the reference object to avoid cognitive overload; this was determined after running a small pilot study.


Fig. 10. Resulting designs for tactile illustrations using our technique, with 3D input models shown.

Our quantitative measures included matching rate, which measures the number of times participants were able to correctly match a tactile illustration with the corresponding 3D object. Qualitative observations were also collected to assess how easy it was for participants to understand the 3D shape information shown in the tactile illustrations and identify the subtle differences between them.

Fig. 11. Photo of tactile illustration of the monitor, including close-up view of the base. Fabricated with microcapsule paper.

5.0.3 Observations. We analyzed the data using a generalized linear mixed model for a comparison between the baseline BANA illustrations and our technique, considering each response an individual sample. The model includes two fixed effects, design and object, and a random effect, participant, and is built on a family of binomial distributions. The variable object has seven levels (monitor 1, table 1, headphones, chair, monitor 2, table 2, and faucet) and the design has two (BANA, our technique). To compare success rates within each object we do post-hoc pairwise comparisons on estimated marginal means and apply Bonferroni corrections [Lenth 2019].

The mean success rate in matching the correct 3D reference object to the tactile illustration was 58% for our technique versus 29% for the BANA baseline. We found that there exists a significant difference in shape understanding among the two types of illustrations ($\chi^2(1, N = 140) = 22.77$, $p < 0.005$). Figure 14 shows the mean success rate over all objects, as well as per object success rates. Confidence intervals are computed by standard error. There is also a significant effect of object ($\chi^2(6, N = 140) = 31.55$, $p < 0.005$). Post-hoc analysis shows that higher success rates were observed for our technique for the majority of objects, with monitor 1, chair, and faucet remarkably better, with an improvement of ≥ 45% in mean success rate (all $p < 0.005$). Table 1, monitor 2, and table 2 also show large improvements (≥ 15% in mean success rate), but without statistical significance. An exception to this trend is the headphones, where participants responded more accurately using the BANA baseline ($p < 0.05$). Figure 15 compares results between congenitally (14) and late (6) blind participants. Both groups show better success rates with our technique.
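The paper does not name its statistics software; for concreteness, one way to fit the described binomial mixed model (fixed effects design and object, random effect participant) is sketched below with statsmodels in Python – lme4::glmer in R would be an equally common choice. The file and column names are hypothetical.

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("responses.csv")                   # hypothetical: one row per trial
# correct in {0,1}; design in {BANA, ours}; object has seven levels
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ C(design) + C(object)",              # fixed effects
    {"participant": "0 + C(participant)"},          # random intercept per participant
    df)
result = model.fit_vb()                             # variational Bayes fit
print(result.summary())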
Although our results are promising, there were still misconceptions caused by our technique. One observation was that overlapping parts were sometimes overlooked. For example, in our illustration for table 1 (square top, see Fig. 10) one of the legs is entirely contained by the tabletop. Several participants were unable to locate this leg, likely since the silhouette was not easily discoverable.

The headphones were the one object with poorer mean success rate than the baseline. We found that participants were not using the infill textures to understand the shape of the earpieces; rather they were judging shape just by the outline, which may have indicated a spherical instead of cylindrical shape. Another consideration was the lack of symmetry in our illustration – although identical, the two earpieces were shown from different views, which produced different silhouettes and seemed to confuse participants.

In order to understand user perception of our new illustration style, we conducted a usability survey based on the System Usability Scale (SUS) [Brooke 1996]. With a maximum score of 100, we found similar scores on average of 65 for our technique and 62 for the BANA baseline. This result also indicates that participants tended to be open to new ideas and solutions for the improvement of tactile illustrations.

At the end of the study we collected additional unstructured feedback from participants, summarized here:

Infill textures. Participants had positive feedback on the infill textures for both our technique and the BANA baseline approach. Participants found the BANA textures useful for differentiating between different parts (2 users). In contrast, our textures were useful for understanding shape details (2 users). Participants generally liked the textures in our illustration technique, with two users saying it is a consistent way to represent shapes, and four users saying the shape could be felt clearer/easier.


Fig. 12. For each 3D object included in User Study 3: Evaluation, we created three versions with minor geometric variations. Beige regions are identical among the three versions; green regions have been modified. The models were 3D printed so that participants could compare (by touch) the 3D objects depicted in the 2D tactile illustrations. The tactile illustrations shown in Fig. 10 correspond to the middle version in each case.

Other positive remarks were that the designs were "more detailed," "interesting," and that the textures were "more tactile." Specific use cases were suggested, including art objects in a museum (e.g., statues), and for depicting architecture (e.g., Cape vs. Victorian roof styles or different pillar styles). Three users raised concerns, mostly related to the clutter that the lines created, which made the illustration feel "complicated" and "everywhere the same," so that they could not easily identify "where one piece starts and where it stops."

Multiple viewpoints. Participants generally appreciated that our technique created a single unified illustration: "there was only one shape to look at." Two participants commented that our technique represents the shape more effectively, is more simplified, and conveys the object more efficiently. One participant commented: "If I had seen the diagrams [our technique] and not the objects, but I was told that the diagram was a chair, then I could have interpreted it. But once we moved to different angled views [BANA baseline] that became less and less something I would be confident about." However, three users found difficulties in interpreting our illustrations, saying they looked more like an "artist drawing" and "more 3-dimensional," and not knowing the view was problematic.

Regarding the BANA baseline illustrations, four participants commented that it was difficult to explore multiple views (e.g., top and side) which they later needed to match and compare. E.g.: "in some of the views you did not see all the object, you did not understand what you were looking at, you had to use both of them and kind of put them together or use what information you could get from the one that made the most sense." This response highlights the benefits of a single illustration.

Fabrication issues. One participant noted that he would like not only thickness differences in lines, but also height differences, while another commented that he would prefer sharper lines.

6 FABRICATION AND IMPLEMENTATION DETAILS

Implementation Details. The skeleton extraction uses the CGAL implementation of the mean curvature skeleton [Tagliasacchi et al. 2012]. To create the renderings for User Study 2, we extracted lines using Freestyle [Grabli et al. 2008], making modifications manually when needed in Adobe Illustrator. We created lines of 1mm thickness with 3mm wide gaps between different object parts, and then modified according to BANA guidelines [2010]. To create the BANA baseline renderings for User Study 3, we follow the same approach, using five pre-made textures tested with a blind individual for their identifiability. When creating illustrations with our technique for User Study 3, we created an outline around each part texture using Adobe Illustrator. We set the texture line thickness at 0.5mm, and the outlines and gaps are the same as User Study 2. As some manual steps were used in creating the outlines, this post-processing stage prevented large-scale evaluation of our pipeline.

3D Reference Objects. The 3D objects in User Study 2 were selected from online repositories: archibaseplanet.com, archive3d.net, and www.thingiverse.com. For User Study 3, the 3D reference objects were selected from the PartNet dataset [Mo et al. 2019]. PartNet objects are pre-partitioned and provided in up-right orientation. Variations of the objects were created using Blender 2.79.

Fabrication Details. The objects in User Study 2 were fabricated with a Form 3 SLA 3D printer. The objects in User Study 3 were fabricated with a UPrint FDM 3D printer and a Form 3 SLA 3D printer; the same printer was used for all variations of an object for consistency. To fabricate the tactile illustrations we used microcapsule paper. The capsules expand under heat application, which was applied with laser engraving. We used an Epilog laser cutter with settings: 60 Watts, 100 speed, and power ranging between 12 to 15. Alternatively: 40 Watts and speed ranging between 16 to 20. Each image takes 20 to 30 minutes to complete.
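As a practical note, the stated physical constraints translate directly into raster parameters when preparing an image for engraving; the helper below is our own illustration (the 300 DPI default is an assumption, not a value from the paper).

def mm_to_px(mm, dpi=300):
    # Microcapsule paper is engraved from a raster image; convert physical sizes.
    return int(round(mm / 25.4 * dpi))

TEXTURE_LINE_PX = mm_to_px(0.5)   # cross-section texture line thickness (0.5 mm)
OUTLINE_PX      = mm_to_px(1.0)   # outline around each part texture (1 mm)
PART_GAP_PX     = mm_to_px(3.0)   # enforced gap between parts (3 mm)
ELEMENT_GAP_PX  = mm_to_px(2.0)   # lower end of BANA's 2-3 mm element spacing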


Fig. 14. User Study 3 results: mean success rate for matching the tactile illustration to the correct 3D object, shown as the average over all objects and per object.

Fig. 15. User Study 3 results: mean success rate for matching the tactile illustration to the correct 3D object, grouped by age of vision loss (self-identified).

Fig. 13. Example materials for User Study 3: Evaluation. Participants are given three variations of the object (row 1), produced by 3D printing (row 2). They are given one tactile illustration style – either our new multi-projection style (row 3, left) or the BANA baseline style (row 3, middle-right). Illustrations are produced using microcapsule paper (row 4). Participants are asked to identify which of the three variations matches the illustration. (See Supplemental Materials for all materials used in Study 3.)

7 DISCUSSION

7.1 Limitations & Future Work

We have presented a new methodology for generating tactile illustrations. We introduced a preliminary algorithm that integrates design considerations on tactile graphics motivated by a set of formative studies. However, many opportunities exist for improving the technique and experimenting with alternative stylizations. For example, further investigation could be done into selecting optimal viewpoints for parts, and layering images appropriately to handle occlusions. We also hope to scale up to more complex objects while maintaining a usable level of detail. Future work should investigate how shapes that are not approximated well by 1D skeletons can be represented (e.g. objects with concavities such as bowls). Future work can also develop treatment for objects with cycles. Currently, each part is positioned to coincide with its parent connection, and a part that closes a cycle is not taken into account.

Furthermore, symmetry has been shown to be a key characteristic of tactile shape perception [Bauer et al. 2015], as our user study also indicates, and should be integrated into the design process in future work. Similarly, our method could incorporate special handling of repetitions within an object to communicate identical parts. Additionally, our method could be developed to incorporate other tactile saliency properties, such as the functional tactile saliency metrics developed by Lau et al. [2016], which were based on grasping.

Our illustrations aim to convey shape characteristics that make designs distinct from one another. Future studies could also test the effects of our illustration style on object recognition tasks (e.g. recognizing 'chair' as the object category, rather than differentiating between variations of an object). Further, an ablation study was beyond the scope of our study resources but would be valuable as future work, for example, to study the effect of multi-projection rendering with standard textures.

Our technique relies on a semantically segmented object. Skeleton-based segmentation could be applied, e.g., Livesu et al. [2017], Zhou et al. [2015], Tagliasacchi et al. [2016]. The segmentation could also be obtained with machine-learning approaches, which have recently shown improved performance over traditional approaches [Kalogerakis et al. 2010; Xu et al. 2015].

ACM Trans. Graph., Vol. 39, No. 4, Article 89. Publication date: July 2020. 89:12 • Panotopoulou et al. shown improved performance over traditional approaches [Kaloger- and Prof. Margrit Betke (BU CS Dept.), who highlighted the impor- akis et al. 2010; Xu et al. 2015]. tance of tactile diagrams in books. Athina also thanks Elizabeth Finally, other stylizations could be explored, such as exploded Tremmel (Dartmouth College Multilingual Graduate Student and diagrams [Li et al. 2008] which might allow for better visibility of Postdoctoral Writing Consultant) for writing assistance. occluding parts, abstraction using characteristic curves [Mehra et al. sources ©Museum of Fine Arts, Boston: Fig. 2(a,b): 2009] or geometric primitives to handle more complex geometry, Pat Lyon at the Forge, John Neagle, 1826-27, Accession Number: or abstractions such as skeletons to emphasize topological relation- 1975.806, Henry H. and Zoe Oliver Sherman Fund; Fig. 2(c,d): Cofee ships. We also see opportunities for integration with interactive pot, Paul Revere, Jr., 1791, Accession Number: 95.1359, Bequest of tactile graphics, e.g., combined with interactive audio feedback, dy- Buckminster Brown, M.D. This project is partially supported by the namic markers [Suzuki et al. 2017], or refreshable tactile graphics National Science Foundation under Grant No. #1813319, and the displays where low resolution pin-based stimuli will present new Alfred P. Sloan Foundation: Sloan Research Fellowship. challenges [Brauner 2016]. REFERENCES 7.2 Conclusion 2013. Tangent vectors to a 3-D surface normal: A geometric tool to fnd orthogonal vectors based on the Householder transformation. Computer-Aided Design 45, 3 We have successfully built a general pipeline for the design of tactile (2013), 683 – 694. illustrations to improve 3D shape understanding for blind individu- Maneesh Agrawala, Denis Zorin, and Tamara Munzner. 2000. Artistic Multiprojection Rendering. In Eurographics Workshop. Springer-Verlag, London, UK, UK, 125–136. als. We have also designed a user study to evaluate the efectiveness Corinna Bauer, Lindsay Yazzolino, Gabriella Hirsch, Zaira Cattaneo, Tomaso Vecchi, of the illustrations, which we implemented on a selection of common and Lotf Merabet. 2015. Neural correlates associated with superior tactile symmetry household objects. We are the frst to create a systematic approach perception in the early blind. Cortex 63 (02 2015), 104–117. Irving Biederman. 1987. Recognition-by-components: a theory of human image under- that can generate tactile illustrations for objects with complex 3D standing. Psychological review 94, 2 (1987), 115–147. geometry. We demonstrate promising results, showing signifcant Randolph Blake and Robert Sekuler. 2006. Perception. McGraw-Hill, Boston, MA, USA. Diane Brauner. 2016. Refreshable Tactile Graphics Display: Making On-Screen Im- improvement compared to baseline guidelines for tactile illustration ages Accessible! https://www.perkinselearning.org/technology/posts/refreshable- design. tactile-graphics-display-making-screen-images-accessible Accessed: 2019-08-17. Providing a tool that creates tactile illustrations is integral as Anke M. Brock, Philippe Truillet, Bernard Oriola, Delphine Picard, and Christophe Joufrais. 2015. Interactivity Improves Usability of Geographic Maps for Visually a resource for the blind community to allow them to gain access Impaired People. Human–Computer Interaction 30, 2 (2015), 156–194. to visual information. 
7.2 Conclusion

We have successfully built a general pipeline for the design of tactile illustrations to improve 3D shape understanding for blind individuals. We have also designed a user study to evaluate the effectiveness of the illustrations, which we implemented on a selection of common household objects. We are the first to create a systematic approach that can generate tactile illustrations for objects with complex 3D geometry. We demonstrate promising results, showing significant improvement compared to baseline guidelines for tactile illustration design.

Providing a tool that creates tactile illustrations is integral as a resource for the blind community, allowing them to gain access to visual information. For example, our illustrations could be used as a tool to depict products, from furniture in a store catalogue, to art pieces in a museum, to object repositories for 3D printing. An exciting area of future work would be to use our illustration algorithm as a language to build a design tool for the blind. As one of our study participants mentioned: "One of the biggest difficulties blind people have is getting an idea out of my head and into your head."

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their feedback; Amber Pearcy (BU graduate student, MFA Boston Access Consultant, National Braille Press Proofreader) for helpful feedback throughout the project; BU Engineering Product Innovation Center (EPIC) and crew (Kara Mogensen and Caroline) for 3D-printing and laser-cutting resources; VIBUG (Visually Impaired and Blind User Group); Colleen Kim for designing the object modifications for User Study 3; and Forrester Cole for helpful discussions on line drawing and perception. Perkins School for the Blind: Research Librarian Jennifer Arnott and Museum tour-guide Kevin Hartigan provided access to resources, and Executive Director of Perkins Library Kim Charlson and Braille Production Specialist Tanja Milojevic provided help in realizing the study. Museum of Fine Arts Boston: Manager of Accessibility Hannah Goodwin and Accessibility Coordinator Ronit Minchom provided insight, resources, and knowledge. We also thank Prof. Sam Ling (BU Visual Neuroscience Lab), who pointed us to the Recognition-by-components theory; Prof. Luis Carvalho (BU Dept. of Mathematics and Statistics) and Prof. James O'Malley (Biomedical Data Science, The Dartmouth Institute) for advice on statistical testing; and Prof. Margrit Betke (BU CS Dept.), who highlighted the importance of tactile diagrams in books. Athina also thanks Elizabeth Tremmel (Dartmouth College Multilingual Graduate Student and Postdoctoral Writing Consultant) for writing assistance.

Image sources: ©Museum of Fine Arts, Boston. Fig. 2(a,b): Pat Lyon at the Forge, John Neagle, 1826-27, Accession Number: 1975.806, Henry H. and Zoe Oliver Sherman Fund; Fig. 2(c,d): Coffee pot, Paul Revere, Jr., 1791, Accession Number: 95.1359, Bequest of Buckminster Brown, M.D. This project is partially supported by the National Science Foundation under Grant No. 1813319 and an Alfred P. Sloan Foundation Sloan Research Fellowship.

REFERENCES

2013. Tangent vectors to a 3-D surface normal: A geometric tool to find orthogonal vectors based on the Householder transformation. Computer-Aided Design 45, 3 (2013), 683–694.
Maneesh Agrawala, Denis Zorin, and Tamara Munzner. 2000. Artistic Multiprojection Rendering. In Eurographics Workshop on Rendering. Springer-Verlag, London, UK, 125–136.
Corinna Bauer, Lindsay Yazzolino, Gabriella Hirsch, Zaira Cattaneo, Tomaso Vecchi, and Lotfi Merabet. 2015. Neural correlates associated with superior tactile symmetry perception in the early blind. Cortex 63 (2015), 104–117.
Irving Biederman. 1987. Recognition-by-components: a theory of human image understanding. Psychological Review 94, 2 (1987), 115–147.
Randolph Blake and Robert Sekuler. 2006. Perception. McGraw-Hill, Boston, MA, USA.
Diane Brauner. 2016. Refreshable Tactile Graphics Display: Making On-Screen Images Accessible! https://www.perkinselearning.org/technology/posts/refreshable-tactile-graphics-display-making-screen-images-accessible Accessed: 2019-08-17.
Anke M. Brock, Philippe Truillet, Bernard Oriola, Delphine Picard, and Christophe Jouffrais. 2015. Interactivity Improves Usability of Geographic Maps for Visually Impaired People. Human–Computer Interaction 30, 2 (2015), 156–194.
John Brooke. 1996. SUS: A quick and dirty usability scale. In Usability Evaluation in Industry. 189–194.
John Brosz, Faramarz F. Samavati, M. Sheelagh T. Carpendale, and Mario Costa Sousa. 2007. Single Camera Flexible Projection. In NPAR '07. ACM, New York, NY, USA, 33–42.
Craig Brown and Amy Hurst. 2012. VizTouch: Automatically Generated Tactile Visualizations of Coordinate Spaces. In TEI '12. ACM, New York, NY, USA, 131–138.
Francis D. K. Ching and Steven P. Juroszek. 2019. Design Drawing. Wiley, Indianapolis, IN, USA.
P. Claudet, G. Chougui, and P. Richard. 2000–2008. The Typhlo and Tactus Guide for Children's Books with Tactile Illustrations.
Forrester Cole, K. Sanik, D. DeCarlo, Adam Finkelstein, T. Funkhouser, S. Rusinkiewicz, and M. Singh. 2009. How Well Do Line Drawings Depict Shape? ACM Trans. Graph. 28, 3, Article 28 (2009), 9 pages.
Theresa Cooke, Frank Jäkel, Christian Wallraven, and Heinrich H. Bülthoff. 2007. Multimodal similarity and categorization of novel, three-dimensional objects. Neuropsychologia 45, 3 (2007), 484–495.
Menena Cottin, Rosana Faría, and Elisa Amado. 2008. The Black Book of Colors. Groundwood Books, Toronto, ON, Canada.
Doug DeCarlo, Adam Finkelstein, Szymon Rusinkiewicz, and Anthony Santella. 2003. Suggestive Contours for Conveying Shape. ACM Trans. Graph. 22, 3 (July 2003), 848–855.
Oliver Deussen, Jörg Hamel, Andreas Raab, Stefan Schlechtweg, and Thomas Strothotte. 1999. An Illustration Technique Using Hardware-based Intersections and Skeletons. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 175–182.
J. Diepstraten, D. Weiskopf, and Thomas Ertl. 2003. Transparency in Interactive Technical Illustrations. Computer Graphics Forum 21 (2003), 317–325.
Goker Erdogan, Ilker Yildirim, and Robert A. Jacobs. 2014. Transfer of object shape knowledge across visual and haptic modalities. In Proceedings of the 36th Annual Conference of the Cognitive Science Society.
J. Farley Norman, Hideko F. Norman, Anna Marie Clayton, Joann Lianekhammy, and Gina Zielke. 2004. The visual and haptic perception of natural object shape. Perception & Psychophysics 66 (2004), 342–351.
Stéphane Grabli, Emmanuel Turquin, Frédo Durand, and François Sillion. 2004. Programmable Style for NPR Line Drawing. In Eurographics Symposium on Rendering. ACM Press.
Stéphane Grabli, Emmanuel Turquin, Frédo Durand, and François Sillion. 2008. Freestyle. http://freestyle.sourceforge.net/
Giacomo Handjaras, Emiliano Ricciardi, Andrea Leo, Alessandro Lenci, Luca Cecchetti, Mirco Cosottini, Giovanna Marotta, and Pietro Pietrini. 2016. How concepts are encoded in the human brain: A modality independent, category-based cortical organization of semantic knowledge. NeuroImage 135 (2016).


Lucia Hasty. 1999. Design Principles for Tactile Graphics. http://www.tactilegraphics.org/readability.html
Morton Heller, Tara M. Riddle, Erin K. Fulkerson, Lindsay Wemple, Anne D. McClure Walk, Stephanie M. Guthrie, Christine Kranz, and Patricia Klaus. 2009. The influence of viewpoint and object detail in blind people when matching pictures to complex objects. Perception 38, 8 (2009), 1234–1250.
Sergio E. Hernandez and Kenneth E. Barner. 2000. Tactile Imaging Using Watershed-based Image Segmentation. In Assets '00. ACM, New York, NY, USA, 26–33.
Aaron Hertzmann and Denis Zorin. 2000. Illustrating Smooth Surfaces. In SIGGRAPH '00. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 517–526.
Ian Howard and Robert Allison. 2011. Drawing with divergent perspective, ancient and modern. Perception 40 (2011), 1017–1033.
Steven Hsiao. 2008. Central mechanisms of tactile shape perception. Current Opinion in Neurobiology 18, 4 (2008), 418–424.
Shoukat Islam, Swapnil Dipankar, Deborah Silver, and Min Chen. 2004. Spatial and Temporal Splitting of Scalar Fields in Volume Graphics. In Symposium on Volume Visualization and Graphics (VV '04). IEEE Computer Society, Washington, DC, USA, 87–94.
Thomas W. James, G. Keith Humphrey, Joseph S. Gati, Philip Servos, Ravi S. Menon, and Melvyn A. Goodale. 2002. Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia 40, 10 (2002), 1706–1714.
JMS. 2010. True Reverse Perspective. https://vimeo.com/12518619
Tilke Judd, Frédo Durand, and Edward H. Adelson. 2007. Apparent ridges for line drawing. ACM Trans. Graph. 26, 3 (2007), 19.
Evangelos Kalogerakis, Aaron Hertzmann, and Karan Singh. 2010. Learning 3D Mesh Segmentation and Labeling. ACM Trans. Graph. 29, 4, Article 102 (July 2010), 12 pages.
Evangelos Kalogerakis, Derek Nowrouzezahrai, Simon Breslav, and Aaron Hertzmann. 2012. Learning Hatching for Pen-and-ink Illustration of Surfaces. ACM Trans. Graph. 31, 1, Article 1 (Feb. 2012), 17 pages.
J. M. Kennedy. 1993. Drawing and the Blind: Pictures to Touch. Yale University Press, New Haven, CT, USA.
Jeeeun Kim, Hyunjoo Oh, and Tom Yeh. 2015. A Study to Empower Children to Design Movable Tactile Pictures for Children with Visual Impairments. In TEI '15. ACM, New York, NY, USA, 703–708.
Jeeeun Kim, Abigale Stangl, and Tom Yeh. 2014. Using LEGO to Model 3D Tactile Picture Books by Sighted Children for Blind Children. In SUI '14. ACM, New York, NY, USA, 146.
Jeeeun Kim and Tom Yeh. 2015. Toward 3D-Printed Movable Tactile Pictures for Children with Visual Impairments. In CHI '15. ACM, New York, NY, USA, 2815–2824.
Roberta L. Klatzky, Jack M. Loomis, Susan J. Lederman, Hiromi Wake, and Naofumi Fujita. 1993. Haptic identification of objects and their depictions. Perception & Psychophysics 54 (1993), 170–178.
J. Krikke. 2000. Axonometry: a matter of perspective. IEEE Computer Graphics and Applications 20, 4 (July 2000), 7–11.
Martin Kurze. 1997. Rendering Drawings for Interactive Haptic Perception. In CHI '97. ACM, New York, NY, USA, 423–430.
Stephen Lakatos and Lawrence E. Marks. 1999. Haptic form perception: Relative salience of local and global features. Perception & Psychophysics 61, 5 (1999), 895–908.
Manfred Lau, Kapil Dev, Weiqi Shi, Julie Dorsey, and Holly Rushmeier. 2016. Tactile Mesh Saliency. ACM Trans. Graph. 35, 4, Article 52 (July 2016), 11 pages.
S. J. Lederman and Roberta Klatzky. 2009. Haptic Perception: A Tutorial. Attention, Perception & Psychophysics 71 (2009), 1439–1459.
R. Lenth. 2019. emmeans: Estimated marginal means, aka least-squares means. R package version 1.4.3.01.
Vincent Levesque. 2005. Blindness, Technology and Haptics. Technical Report. École de technologie supérieure.
Jingyi Li, Son Kim, Joshua A. Miele, Maneesh Agrawala, and Sean Follmer. 2019. Editing Spatial Layouts Through Tactile Templates for People with Visual Impairments. In CHI '19. ACM, New York, NY, USA, Article 206, 11 pages.
Wilmot Li, Maneesh Agrawala, Brian Curless, and David Salesin. 2008. Automated Generation of Interactive 3D Exploded View Diagrams. ACM Trans. Graph. 27, 3, Article 101 (Aug. 2008), 7 pages.
Wilmot Li, Lincoln Ritter, Maneesh Agrawala, Brian Curless, and David Salesin. 2007. Interactive Cutaway Illustrations of Complex 3D Models. ACM Trans. Graph. 26, 3, Article 31 (July 2007).
Hsueh-Ti Derek Liu and Alec Jacobson. 2019. Cubic Stylization. ACM Transactions on Graphics (2019).
Marco Livesu, Marco Attene, Giuseppe Patane, and Michela Spagnuolo. 2017. Explicit cylindrical maps for general tubular shapes. Computer-Aided Design (2017).
Michael J. McGuffin, Liviu Tancau, and Ravin Balakrishnan. 2003. Using Deformations for Browsing Volumetric Data. In VIS '03. IEEE Computer Society, Washington, DC, USA, 53.
N. McLennan, F. Hayden, T. Poppe, and W. Pierce. 1998. The Good Tactile Graphic. Louisville, KY, USA.
Ravish Mehra, Qingnan Zhou, Jeremy Long, Alla Sheffer, Amy Gooch, and Niloy J. Mitra. 2009. Abstraction of Man-Made Shapes. ACM Transactions on Graphics 28, 5 (2009), Article 137, 1–10.
Kaichun Mo, Shilin Zhu, Angel X. Chang, Li Yi, Subarna Tripathi, Leonidas J. Guibas, and Hao Su. 2019. PartNet: A Large-Scale Benchmark for Fine-Grained and Hierarchical Part-Level 3D Object Understanding. In CVPR.
David Moore. 2011. Disabled Students in Education: Technology, Transition, and Inclusivity. IGI Global, Hershey, PA, USA, 49–50.
A. Moringen, K. Krieger, R. Haschke, and H. Ritter. 2017. Haptic search for complex 3D shapes subject to geometric transformations or partial occlusion. In 2017 IEEE World Haptics Conference (WHC). 299–304.
Braille Authority of North America. 2010. Guidelines and Standards for Tactile Graphics. Braille Authority of North America, Pittsburgh, PA, USA.
Emil Praun, Hugues Hoppe, Matthew Webb, and Adam Finkelstein. 2001. Real-time Hatching. In SIGGRAPH '01. ACM, New York, NY, USA, 581.
Paul Ré. 1981. On My Drawings and Paintings: An Extension of the System of Their Classification. Leonardo 14, 2 (1981), 106–113.
Adrian Secord, Jingwan Lu, Adam Finkelstein, Manish Singh, and Andrew Nealen. 2011. Perceptual Models of Viewpoint Preference. ACM Trans. Graph. 30, 5, Article 109 (Oct. 2011), 12 pages.
Abigale Stangl, Jeeeun Kim, and Tom Yeh. 2014. 3D Printed Tactile Picture Books for Children with Visual Impairments: A Design Probe. In IDC '14. ACM, New York, NY, USA, 321–324.
Abigale J. Stangl. 2019. Tactile Media Consumption and Production for and by People Who Are Blind and Visually Impaired: A Design Research Investigation. Ph.D. Dissertation. University of Colorado at Boulder.
Nisha Sudarsanam, Cindy Grimm, and Karan Singh. 2005. Interactive Manipulation of Projections with a Curved Perspective. In Eurographics.
Ryo Suzuki, Abigale Stangl, Mark D. Gross, and Tom Yeh. 2017. FluxMarker: Enhancing Tactile Graphics with Dynamic Tactile Markers. In ASSETS '17. ACM, 190–199.
Andrea Tagliasacchi, Ibraheem Alhashim, Matt Olson, and Hao Zhang. 2012. Mean Curvature Skeletons. Computer Graphics Forum 31, 5 (2012), 1735–1744.
Andrea Tagliasacchi, Thomas Delame, Michela Spagnuolo, Nina Amenta, and Alexandru Telea. 2016. 3D Skeletons: A State-of-the-Art Report. Computer Graphics Forum 35 (2016).
Markus Tatzgern, Denis Kalkofen, and Dieter Schmalstieg. 2010. Compact Explosion Diagrams. In NPAR '10. ACM, New York, NY, USA, 17–26.
Anne Theurel, Arnaud Witt, Philippe Claudet, Yvette Hatwell, and Edouard Gentaz. 2013. Tactile Picture Recognition by Early Blind Children: The Effect of Illustration Technique. Journal of Experimental Psychology: Applied 19 (2013), 233–240.
Leanne Thompson and Edward Chronicle. 2006. Beyond visual conventions: Rethinking the design of tactile diagrams. British Journal of Visual Impairment 24 (2006), 76–82.
Leanne J. Thompson, Edward P. Chronicle, and Alan F. Collins. 2006. Enhancing 2-D Tactile Picture Design from Knowledge of 3-D Haptic Object Recognition. European Psychologist 11 (2006), 110–118.
D. V. Vranic, D. Saupe, and J. Richter. 2001. Tools for 3D-object retrieval: Karhunen-Loeve transform and spherical harmonics. In IEEE Workshop on Multimedia Signal Processing. 293–298.
Benjamin Walsh. 2017. Texture, Buttons, Sound and Code: Modal Preference and the Formation of Expert Identities. In FabLearn '17. ACM, New York, NY, USA, Article 19, 4 pages.
T. P. Way and K. E. Barner. 1997. Automatic visual to tactile translation. IEEE Transactions on Rehabilitation Engineering 5, 1 (March 1997), 95–105.
Georges Winkenbach and David H. Salesin. 1996. Rendering Parametric Surfaces in Pen and Ink. In SIGGRAPH '96. ACM, New York, NY, USA, 469–476.
Kai Xu, Vladimir G. Kim, Qixing Huang, and Evangelos Kalogerakis. 2015. Data-Driven Shape Analysis and Processing. arXiv:cs.GR/1502.06686
Jeffrey Yau, Sung Soo Kim, Pramodsingh Thakur, and Sliman Bensmaia. 2015. Feeling form: The neural basis of haptic shape perception. Journal of Neurophysiology 115 (2015).
J. Yu and L. McMillan. 2004. A Framework for Multiperspective Rendering. In EGSR '04. Eurographics Association, Aire-la-Ville, Switzerland, 61–68.
Johannes Zander, Tobias Isenberg, Stefan Schlechtweg, and Thomas Strothotte. 2004. High Quality Hatching. Computer Graphics Forum 23, 3 (2004), 421–430.
Yang Zhou, Kangxue Yin, Hui Huang, Hao Zhang, Minglun Gong, and Daniel Cohen-Or. 2015. Generalized Cylinder Decomposition. ACM Trans. Graph. 34, 6, Article 171 (Oct. 2015), 14 pages.
