SIGACCESS NEWSLETTER A regular publication of ACM SIGACCESS: Special Interest Group on Accessible Computing

Report on ASSETS 2014
Sri Kurniawan and John Richards

Overall, the conference was a great success. ASSETS 2014 had the highest number of registered participants, with 165 attendees. As always, it was a welcome sight to see many people with disabilities participate as attendees and speakers.

Welcome to the January issue of the SIGACCESS newsletter. This issue highlights the ACM ASSETS 2014 Conference. The first article, written by the General and Program chairs, Sri Kurniawan and John Richards, respectively, provides an overview of the conference.

The Future of Accessible Computing Research: ASSETS 2014 Doctoral Consortium
The following 9 articles describe the research work of the students who attended the ASSETS 2014 Doctoral Consortium, led by Jinjuan Feng and Claude Chapdelaine.

Hugo Nicolau
Newsletter editor

Inside this Issue
p.3 Report on ASSETS 2014 by S. Kurniawan and J. Richards
p.6 Blind Drawing: Investigation into screen location tracking for computer aided interactive drawing by S. Fernando
p.10 Draw and Drag: Accessible Touchscreen Geometry for Students who are Blind by W. Grussenmeyer
p.14 mHealth Technologies for Self-Management of Diabetes in the Older Population by S. Alexander
p.19 Web Searching by Individuals with Cognitive Disabilities by R. Nour
p.26 Using Social Microvolunteering to Answer Visual Questions from Blind Users by E. Brady
p.30 Developing a Chairable Computing Platform to Support Power Wheelchair Users by P. Carrington
p.33 Scalable Methods to Collect and Visualize Sidewalk Accessibility Data for People with Mobility Impairments by K. Hara
p.38 Performance Evaluation for Touch-Based Interaction of Older Adults by A. Sultana
p.42 The Use of Concurrent Speech to Enhance Blind People's Scanning for Relevant Information by J. Guerreiro


About the Newsletter
SIGACCESS is a special interest group of ACM on Accessible Computing. The SIGACCESS Newsletter is a regular online publication of SIGACCESS that includes content of interest to the community. To join SIGACCESS, please visit our website: www.sigaccess.org

SIGACCESS Officers

Andrew Sears, Chair
Clayton Lewis, Vice-Chair
Shari Trewin, Secretary/Treasurer
Hugo Nicolau, Newsletter Editor
Joshua Hailpern, Information Director

Want to see your article featured in the next issue? Contributions and inquiries can be emailed to [email protected] We encourage a wide variety of contributions, such as letters to the editor, technical papers, short reports, reviews of papers or products, abstracts, book reviews, conference reports and/or announcements, interesting web page URLs, local activity reports, and so forth. Actually, we solicit almost anything of interest to our readers and community. You may publish your work here before submitting it elsewhere. We are a very informal forum for sharing ideas with others who have common interests. Anyone interested in editing a special issue on an appropriate topic should contact the editor. As a contributing author, you retain copyright to your article and ACM will refer requests for republication directly to you. By submitting your article, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights: to publish in print on condition of acceptance by the editor; to digitize and post your article in the electronic version of this publication; to include the article in the ACM Digital Library and in any Digital Library related services; to allow users to make a personal copy of the article for non-commercial, educational or research purposes.


REPORT ON ASSETS 2014 Sri Kurniawan General Chair, University of California Santa Cruz [email protected]

John Richards Program Chair, IBM Watson Group [email protected]

The 16th ACM SIGACCESS International Conference on Computers and Accessibility (ASSETS 2014) was held in Rochester, New York, USA, one of America's first boomtowns that rose to prominence as the site of many flour mills located on the Genesee River. The conference was held at the Hyatt Rochester, Monday – Wednesday, October 20th - 22nd, preceded by a Doctoral Consortium on Sunday October 19th, led by Jinjuan Heidi Feng from Towson University and Claude Chapdelaine from CRIM.

The conference began with a welcome by us thanking the organizing and program committee members and the reviewers. The opening session continued with the introduction of the 2014 ACM SIGACCESS Award for Outstanding Contribution to Computing and Accessibility by the SIGACCESS Chair Andrew Sears, followed by the presentation of the 2014 Award Recipient by Julio Abascal, the Chair of the Selection Committee. This award went to Vicki Hanson, a Distinguished Professor at RIT within the HCI and Accessibility research groups and the Chair of Inclusive Technologies at the University of Dundee, for "sustained and wide-ranging contributions including industry, academia, and policy organizations." Julio Abascal also introduced Vicki Hanson as the keynote speaker for ASSETS 2014.

Hanson's keynote speech was entitled Computing for Humans. She discussed how the goal of supporting humans is never more evident than in the field of accessibility where we seek to create devices and software to serve people with extreme needs. In creating novel accessibility tools, research has advanced the state of the art in many areas from design of environmental spaces, to physical interfaces, to multiple software advances in computing.

This was followed by our first paper session examining independent navigation by blind users and technology to reduce social isolation. (Altogether, 28 full technical papers were selected for presentation at the conference from a total of 106 submissions.) Our second paper session addressed accessibility-promoting practices and tools and was followed by our first poster and demo session. (28 posters were selected for inclusion in the conference from among 56 submissions and 18 demos were selected from among 25 submissions.) We then moved to the campus of the Rochester Institute of Technology for demos and a special poster session featuring the work of participants in both the Student Research Competition and the Doctoral Consortium.

Day 2 began with a paper session on information access, examining efficient ways for blind users to navigate text and understand graphics. A second paper session on objects and interfaces looked at the use of 3D interfaces, the use of 3D printing in special education environments, and included the first of two TACCESS presentations. Following this, we heard presentations by this year's Student Research Competition finalists (finalists chosen on the basis of their poster presentations the previous evening). We then conducted our SIGACCESS business meeting, heard about work in support of accessible interaction (including our second TACCESS presentation), and had our second poster and demo session.


Our final day began with a look at applications and games for use by several target populations followed by a second paper session on issues of communication and website use. Our final paper session explored mobility issues and was followed by the closing session featuring announcements of the winners of the student research competition, the text entry challenge, the captioning challenge, the best paper (Tablet-Based Activity Schedule for Children with Autism in Mainstream Environment, by Charles Fage, Léonard Pommereau, Charles Consel, Émilie Balland, and Hélène Sauzéon), and the best student paper (Tactile Graphics with a Voice: Using QR Codes to Access Text in Tactile Graphics, by Catherine Baker, Lauren Milne, Jeffrey Scofield, Cynthia Bennett, and Richard Ladner).

This year, there were three new conference features. The first was the Text Entry Challenge, chaired by Torsten Felzer. The goals of the text entry challenge were to present an overview of current solutions to entering text through alternative means (other than keyboard) and to foster the development of practically usable systems. Ten systems were demoed during the regular poster and demo sessions, and the audience was invited to rank the systems following a set of criteria that included usability, time, and errors in inputting text. The best three systems based on audience vote were: DigitCHAT: Enabling AAC Input at Conversational Speed (by Karl Wiegand and Rupal Patel), Text-Entry By Raising the Eyebrow with HaMCoS (by Torsten Felzer and Stephan Rinderknecht) and Phoneme-based Predictive Text Entry Interface (by Ha Trinh, Annalu Waller, Keith Vertanen, Per Ola Kristensson, and Vicki L. Hanson).

The second new feature was the podium presentation of published TACCESS papers. TACCESS presentations allowed conference participants to hear directly from an author of work recently published in ACM's Transactions on Accessible Computing. Two papers were presented: Automatic Task Assistance for People with Cognitive Disabilities in Brushing Teeth - a User Study with the TEBRA System (by Christian Petersen, Thomas Hermann, Sven Wachsmuth, and Jesse Hoey) and Distinguishing Users By Pointing Performance in Laboratory and Real-World Tasks (by Amy Hurst, Scott E. Hudson, Jennifer Mankoff, and Shari Trewin).

The third new feature was the use of Beam telepresence robots from Suitable Technologies to allow two remote participants to attend the talks, mingle with other attendees, and see the demos and posters from a close distance.

ASSETS 2014 also continued the captioning challenge (chaired again by Raja Kushalnagar) and experience reports (chaired by David Flatla). One experience report was featured in the conference: Designing a Text Entry Multimodal Keypad for Blind Users of Touchscreen Mobile Phones (by Maria Claudia Buzzi, Marina Buzzi, Barbara Leporini, Loredana Martusciello, and Amaury Trujillo).

The conference reception was held at the National Technical Institute for the Deaf's (NTID) new Rosica Hall with support from the Golisano College of Computing and Information Sciences of Rochester Institute of Technology (RIT). The second night's reception was held in the caterpillar atrium of the Strong Museum of Play with support from the University of Rochester. The President of Visit Rochester (the official tourism organization of Rochester), Don Jeffries, gave a short welcome speech. Then, two museum curators led attendees on museum tours. Overall, the conference was a great success.
ASSETS 2014 had the highest number of registered participants to date, with 165 attendees. As always, it was a welcome sight to see many people with disabilities participate as attendees and speakers. Sign language interpretation and real-time captioning were available throughout the conference. The vast majority of the talks were given in an accessible manner.

About the Authors: Sri Kurniawan is an associate professor in the Computational Media and Computer Engineering departments at the University of California Santa Cruz. She is also a core faculty member of the Center for Games and Playable Media at UCSC, an affiliate faculty member in the Psychology department and the Digital Arts and New Media program at UCSC, and a member of the Data Democracy Initiative of CITRIS (Center for Information Technology Research in the Interest of Society).

John Richards is a research manager in IBM's Watson Group and holds an appointment as Honorary Professor in the School of Computing at the University of Dundee, Scotland. John is a Fellow of the Association for Computing Machinery, a member of the IBM Academy of Technology, and a Fellow of the British Computer Society.


BLIND DRAWING: INVESTIGATION INTO SCREEN LOCATION TRACKING FOR COMPUTER AIDED INTERACTIVE DRAWING Sandra Fernando Department of Computing Goldsmiths, University of London New Cross, London SE14 6NW [email protected]

Abstract
One of the main problems faced by blind learners is the lack of drawing technologies that support image and diagram drawing without the help of a sighted support worker. Even though some technologies have been trialled in the past, blind learners have not been keen on tactile drawing due to the difficulty of the drawing task, the length of time taken to complete a simple task and the inefficiency of the drawing experience. This paper presents a blind drawing technology that introduces a drawing technique with screen navigation, sectioning and sizing. This will provide blind learners with an interactive drawing environment, which will improve their understanding of a concept or subject matter. It will contribute to the improvement of the overall quality of the learning and drawing experience by mapping their mental images into objects and spatial information. This technique promotes an interactive and easy drawing environment for building objects, associations and layout information through zooming, navigation and grouping. It will lead to future possibilities such as 3D world modelling, printing and multisensory integration of input and output methods.

Introduction
The aim of this project is to investigate techniques, tools and technologies which assist children and adults with Microphthalmia (small eyes) and Anophthalmia (no eyes) in creative processes. Blindness is the inability to see: a person who is blind cannot see anything at all and is in total darkness. Partially sighted people have limited vision. According to the World Health Organisation (2013), about 39 million people worldwide are blind and 246 million have low vision.

Motivation and Problems
I was inspired to start this project when I taught a blind computing student who demonstrated considerable ability in the comprehension and expression of knowledge and ideas, but found the use of diagrams and images challenging. This experience inspired me to seek to develop a technique that will help blind and partially sighted learners to develop and build diagrammatic "images" to help them learn and communicate visual ideas. Research shows that most blind learners either seek the help of a support worker to draw pictures or diagrams, or avoid drawing altogether because they find it difficult to believe that they could create pictures or diagrams without guidance from a sighted person, and so do not even make an attempt. Hence, expressing imaginative thinking through computers is limited, and the need for a self-reliant blind drawing technique and technology is highly valued among blind communities. The question is not whether it is useful for a blind person to draw at all, given that they cannot see. Instead, the focus questions should be: what sort of object layout can a blind person easily conceive and convey? What sort of navigation system builds a sense of object layout easily? What sort of space arrangement makes navigation easy?


Several studies have introduced grids for easy navigation and for finding accurate locations on maps in an unknown environment. Game technologies such as IF (interactive fiction) games are purely based on text and a command language for user input, voice feedback for interaction and a screen compass for object searches. IF gaming technology has been popular among blind game players for decades. Hence, this study investigates compass- and grid-based location-tracking approaches, together with the interactive fiction style of communication, to develop the user's concept of drawing. This method of drawing does not need the help of a support worker, does not take an enormous amount of time to complete a task and is not expensive to use. The method is intuitive, interactive and easy to learn, and has functions for error detection and correction.

Background
Vision is made up of multidimensional channels combining the senses to produce a composite which we call "visualization". While blind people may not have access to the specific visual channels, they may have alternative access via their other senses that enables them to fill in the gaps. The difficulty is in explaining how that process converts into what sighted people know as object relationships. There are several experimental tools for blind drawing and for editing semantic diagrams. Kamel and Landay introduced IC2D, a tool that divides the screen into nine navigable smaller workspaces [3]. There are palettes for users to select shapes, types and colours. In order to make more precise selections, each of the nine cells is then divided into a further nine cells, which in turn divide into nine more cells. Annotations are given for meaningful semantics so that the user can build an accurate mental image. The system "Kevin" [1] enables users to read, edit and create diagrams using an N2 chart. This system does not keep track of screen layout information but retains the mandatory information needed for a simple DFD diagram. The system PLUMB [4] uses linked-list and heap algorithms to store data in a data structure and to access it in a sequential manner. When the user selects "insert node", a new node is inserted into the existing linked list, which is acknowledged to the user. Some systems explore the different levels of UML diagrams and their hierarchy, floor plan and spatial information, with a special emphasis on semantically annotating the data, which results in interoperability of systems across intranets and the internet. A good drawing enables the user to remember and access the floor plan, retrieve the objects in the floor plan, join them and make sense of them.

Peripheral localization becomes more difficult for older adults and blind people. If a message is conveyed by voice, it has to be short and easy to understand due to the possibility of exceeding working-memory capacity. However, if the information is too simple there is less benefit from multiple modalities. Thus processing load, disorientation and redundancy effects have to be removed for good engagement or learning, so distributing the information load across multiple channels is advantageous. Questions such as "what technique does a computer tool need to build up an unknown environment?", "does going through a virtual environment generate a cognitive map for a blind person?" and "does a computer-generated picture map give a clear reflection of a blind user's mental image?" are not fully explored by mainstream software providers.


Proposed Solution

Blind Drawing Technology and System
The screen is divided into nine memorable locations such as north, south, east, west, north-west, north-east, south-east and south-west (see Figure 1: Division of the computer screen and line drawings). A compass on the interface will take the user to the intended location. The screen changes its granularity through a zoom-in and zoom-out technique. When zoom-in is executed, the current location is subdivided into another nine smaller locations. Inputs invoke the primitive objects, locations and operations. Audio feedback is given for confirmation and verification. A set of keywords is used to form the language for producing art. Once established, we will explore extending the facility to support 3D drawing and printing.

Figure 1. Division of the computer screen and line drawings
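To make the navigation scheme concrete, here is a minimal sketch of how a recursively subdivided, compass-named grid with zoom-in could be addressed. It is purely illustrative (in Python, whereas the author plans to prototype with a tool such as Java); the class, method and location names are my assumptions, not part of the proposed system.

```python
# Illustrative sketch only: a 3x3 grid of compass-named cells, where zoom-in
# recurses into the selected cell, and a text description can be spoken back
# to the user as audio confirmation.
COMPASS = [
    ["north-west", "north",  "north-east"],
    ["west",       "centre", "east"],
    ["south-west", "south",  "south-east"],
]

class GridCell:
    """A rectangular screen region addressable by a path of compass words."""

    def __init__(self, x, y, width, height, path=()):
        self.x, self.y, self.width, self.height = x, y, width, height
        self.path = path  # e.g. ("north-east", "centre") after one zoom-in

    def zoom_in(self, name):
        """Return the named sub-cell (one of nine) of this cell."""
        for row in range(3):
            for col in range(3):
                if COMPASS[row][col] == name:
                    w, h = self.width / 3, self.height / 3
                    return GridCell(self.x + col * w, self.y + row * h,
                                    w, h, self.path + (name,))
        raise ValueError(f"unknown location: {name}")

    def describe(self):
        """Text that a screen reader could speak for confirmation."""
        where = " > ".join(self.path) or "whole screen"
        return (f"{where}: origin ({self.x:.0f}, {self.y:.0f}), "
                f"size {self.width:.0f} x {self.height:.0f}")

# Example: navigate to the north-east cell, then zoom into its centre.
screen = GridCell(0, 0, 900, 900)
cell = screen.zoom_in("north-east").zoom_in("centre")
print(cell.describe())  # north-east > centre: origin (700, 100), size 100 x 100
```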

Proposed Research Methodology
My sample population will be selected from the MACS charity (the UK's national charity for children born without eyes or with underdeveloped eyes) and from visually impaired learners at LeSoCo further education college, UK. However, this experiment could be extended to the RNIB, UK, and beyond the UK. Data and fact finding has already started, using questionnaires, interviews and direct observations to gather evidence on blind users' preferred methods of computer navigation, communication, location tracking and drawing. An iterative development methodology will be used to gather feedback and remodel the technique. During the testing phase, gender, age and vision will also be taken into consideration. I will be using quantitative and qualitative methods of data gathering and analysing outcomes throughout the project. The technique will be evaluated using complexity, ease of use, ease of learning, reliability, efficiency and fitness-for-purpose criteria.

Current Stage of Research
To date, I have researched the tools, techniques and range of technologies available for blind drawing, and I have conducted interviews, observations and teaching during data gathering. I am currently devising a middle-tier "language" for the proposed blind drawing technique, which will be experimented with using a programming tool such as Java.

Acknowledgments
This research is funded by MACS charity, UK.

References
1. Blenkhorn, P. and Evans, D.G. 1998. Using speech and touch to enable blind people to access schematic diagrams. Journal of Network and Computer Applications, Vol. 21, 1 (January 1998), 17-29. http://dx.doi.org/10.1006/jnca.1998.0060
2. Hersh, M.A. and Johnson, M.A. (with Keating, D. et al.). 2008. Assistive Technology for Visually Impaired and Blind People. Springer, Berlin; London. http://dx.doi.org/10.1007/978-1-84628-867-8
3. Kamel, H.M. and Landay, J.A. 2002. Sketching images eyes-free: a grid-based dynamic drawing tool for the blind. In Proceedings of the Fifth International ACM Conference on Assistive Technologies (Assets '02), Edinburgh, Scotland (July 2002), 33-40. http://doi.acm.org/10.1145/638249.638258
4. Calder, M., Cohen, R.F., Lanzoni, J., Landry, N. and Skaff, J. 2007. Teaching data structures to students who are blind. In Proceedings of the 12th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (ITiCSE '07), Dundee, Scotland, UK (June 2007), 23-27. http://doi.acm.org/10.1145/1268784.1268811

About the Author: Sandra Fernando has ten years’ experience in the software industry and as a teacher of IT and is now beginning a PhD investigating the development of a computer aided drawing tool for blind learners. Sandra was inspired by MACS (Micro and Anophthalmic Children's Society) member, Toby Ott, who she came across while teaching at Lewisham and Southwark College.


DRAW AND DRAG: ACCESSIBLE TOUCHSCREEN GEOMETRY FOR STUDENTS WHO ARE BLIND William Grussenmeyer University of Nevada, Reno [email protected]

Abstract
The drawing of geometric shapes, such as rectangles or triangles of varying sizes, in geometry classes is inaccessible to blind students. Most solutions to this inaccessibility are inaccurate, low tech, time consuming, or require assistance. Draw and Drag is a research project to test the feasibility of using touchscreens to allow students who are blind to do their geometry homework easily and quickly without any assistance.

Introduction
Currently, the most common techniques for students who are blind to draw diagrams have been limited to raised line drawing kits, pins and rubber bands, verbal descriptions with sighted assistance, or printed braille diagrams [1]. But these methods have many limitations and difficulties. The raised line drawing kits have difficulty producing exact and accurate geometric shapes. The printed braille diagrams cannot be changed or modified and are expensive to produce. It is very difficult for teachers and others to describe graphs or geometric shapes accurately. Pins and rubber bands cannot be saved over time and are very error prone. In all cases, it has been found that all these methods are time consuming [2].

Related Work
A recent research project by Choi and Walker allowed people who are blind to create auditory graphs by taking photographs of printed graphs or raised line graphs and processing the image [6]. After the image is uploaded, people who are blind can then listen to the graph in an auditory fashion, supplemented with descriptions, on a computer. Draw and Drag, on the other hand, is testing geometric shapes rather than graphs, and uses a touchscreen rather than a desktop or laptop computer. In another attempt to let people who are blind draw their own graphs, McGookin and Brewster [1] used a force feedback method to allow blindfolded participants to draw bar graphs. This required the use of a non-standard and uncommon force feedback device, and the creation of geometric shapes was not evaluated. McGookin et al. investigated the use of tangible user interfaces (TUIs) for the construction of graphs by people who are blind [7]. Their system allowed users to construct line and bar graphs. The users place different physical objects within a physical grid, and the system then automatically draws lines between the physical objects. Moving another physical object over the grid sonifies the data series. The participants in the user study were able to construct line graphs with a high level of accuracy, but the system required participants to count along the x axis and y axis with their hands in order to find the exact data point. Further, the system required a setup of physical objects with cameras, which was bulky and unrealistic. Touchscreens, on the other hand, are commonly used in educational settings.


Solution and Methodology
Proposed Solution
Draw and Drag is a research project to test the viability of students who are blind using a touchscreen to draw geometric shapes without any sighted assistance. This solution allows for accurate drawings, easy editing, and convenient file formats for printing and emailing. In addition, it has already been shown in previous research that geometric shapes can be traced, identified, and located on touchscreens by people who are blind [3, 4, 5]. Draw and Drag takes things a step further by providing different touchscreen interfaces for students who are blind to draw the shapes and not just identify them.

Methodology
The methodology will be experiments in which the participants will be people who use screen readers on a regular basis. The touchscreen used will be an iPad running iOS 7, with VoiceOver as the screen reader. Custom software has been, and will continue to be, written in Objective-C. The procedure will be to give participants tasks to draw different shapes on the screen. The participants will be given some training in how the interface works beforehand. Afterwards, qualitative feedback will be collected on user control, satisfaction, demographics, and general impressions. The experiments will be designed with the independent variable as two competing interfaces: a direct manipulation interface using a finger on the touchscreen, and a text entry interface using an on-screen keyboard. Several trials will be run in which the tasks will be timed and accuracy recorded to see which of the two types of interfaces performs better.

Status Report
Work Done So Far
A touchscreen prototype was built to draw three simple shapes: rectangles, triangles, and circles. The system provides two competing interfaces. The first interface is a direct manipulation interface where the student uses his or her finger on the touchscreen directly. The second interface is textual, and the student types the dimensions and locations of objects with lengths and coordinates. In the direct manipulation interface, when touching the screen, there is text-to-speech (TTS) feedback, which tells the student the x and y coordinates their finger is currently at. Once the student has found a suitable location, all they have to do is double tap anywhere on the screen to select the last touched point as their starting point. Then, the student proceeds to specify the geometric properties of whichever shape they happen to be drawing. For example, for a circle, the user would drag their finger out from the center while hearing the radius increasing. The precision problem comes into play when the user's finger is too big to select a small point on the x-y plane. For example, if the user wants to select 1.2, selecting such a specific small point is very difficult. There are two solutions to this problem introduced in this prototype. One solution is the zoom feature: with buttons at the bottom of the screen the user can either zoom in or zoom out. The numbers on the plane take up more screen space when zoomed in but cover a smaller range, so it is easier to select a specific point on the screen. The other feature to deal with the precision problem is a swiping gesture. The gesture requires the user to swipe with two fingers up, down, left or right along the screen. When swiping up, the last point selected on the screen by the finger is then moved up 0.1 along the y axis.


When swiping down, the point is moved 0.1 down the y axis, and the same applies along the x axis when swiping left or right. This allows the user to touch very close to their desired point with their finger and then swipe a small amount to reach the exact point. In the case of the circle's radius, a swipe up increases the radius by 0.1 and a swipe down decreases it by 0.1. The textual entry works in a straightforward manner. For the circle, the user enters, via a keyboard, the x and y coordinates of the center of the circle and then the radius. For the rectangle, the user only enters the length and height; the rectangle is then drawn at the same location with the given dimensions. For the triangle, the user specifies the lengths of the three sides. The triangle is then drawn, and its angles are automatically calculated by the iPad based on the given lengths of the sides.
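As an illustration of the two mechanisms just described, the sketch below shows how a triangle's angles can be derived from the three typed side lengths via the law of cosines, and how a two-finger swipe could nudge the last selected point by 0.1. It is written in Python rather than the prototype's Objective-C, and the function names are my own assumptions, not those of the actual system.

```python
# Hedged sketch, not the prototype's code: angle calculation from side
# lengths (law of cosines) and a 0.1 "nudge" for the precision problem.
import math

def triangle_angles(a, b, c):
    """Angles (degrees) opposite sides a, b, c, via the law of cosines."""
    if a + b <= c or b + c <= a or a + c <= b:
        raise ValueError("side lengths do not form a triangle")
    alpha = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    beta  = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    gamma = 180.0 - alpha - beta
    return alpha, beta, gamma

def nudge(point, direction, step=0.1):
    """Move the last selected point by one swipe step along an axis."""
    x, y = point
    return {"up":    (x, y + step),
            "down":  (x, y - step),
            "left":  (x - step, y),
            "right": (x + step, y)}[direction]

print(triangle_angles(3, 4, 5))   # approx (36.87, 53.13, 90.0)
print(nudge((1.1, 2.0), "up"))    # (1.1, 2.1)
```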

Work To Be Done
There are several future directions to explore for the drawing of geometric shapes. One of the most interesting is the creation of three-dimensional shapes. For example, a cube can be specified with only one dimension, although specifying its location along the three axes would be difficult. Three-dimensional shapes, though, are very important in geometry classes. Another important direction of the work is to find a technique that allows students to draw more complex shapes. One common type of complex shape used in geometry classes is a triangle drawn within a circle. A third related direction for future work is the identification of three-dimensional shapes on the touchscreen. While this research project specifically focuses on the drawing of shapes, no projects have looked at whether people who are blind can identify three-dimensional shapes on a touchscreen using haptics or data sonification.

Contributions to the Field
This research project will bring an original and important contribution to the field. Drawing diagrams unassisted, and in particular geometric shapes, has been a long-standing problem with little research done so far. The work that has been done with tangible user interfaces and force feedback devices has been unrealistic, as the devices are not commonplace commercial devices used in schools, as touchscreens are. The hope is to provide a strong improvement in the math and science education of students who are blind.

Acknowledgments
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-907992. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

References
1. D. K. McGookin and S. A. Brewster. Graph Builder: Constructing non-visual visualizations. In People and Computers: Engage. Springer, 2007, pp. 263-278.
2. S. H. Choi and B. N. Walker. Digitizer auditory graph: Making graphs accessible to the visually impaired. In CHI '10 Extended Abstracts on Human Factors in Computing Systems (New York, NY, USA, 2010), CHI EA '10, ACM, pp. 3445-3450.


3. N. A. Giudice, H. P. Palani, E. Brenner, and K. M. Kramer. 2012. Learning non-visual graphical information using a touch-based vibro-audio interface. In Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '12). ACM, New York, NY, USA, 103-110.
4. N. Kaklanis, C. Votis, and D. Tzovaras. 2013. A mobile interactive maps application for a visually impaired audience. In Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility (W4A '13). ACM, New York, NY, USA, Article 23, 2 pages.
5. A. Awada, Y. B. Issa, J. Tekli, and R. Chbeir. 2013. Evaluation of touch screen vibration accessibility for blind users. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '13). ACM, New York, NY, USA, Article 48, 2 pages.
6. S. H. Choi and B. N. Walker. Digitizer auditory graph: Making graphs accessible to the visually impaired. In CHI '10 Extended Abstracts on Human Factors in Computing Systems (New York, NY, USA, 2010), CHI EA '10, ACM, pp. 3445-3450.
7. D. McGookin, E. Robertson, and S. Brewster. Clutching at straws: Using tangible interaction to provide non-visual access to graphs. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2010), CHI '10, ACM, pp. 1715-1724.

About the Author: William Grussenmeyer is a second-year PhD student at the University of Nevada, Reno. His advisor is Eelke Folmer. His research interest is in touchscreen accessibility for people who are blind. He received an NSF Graduate Research Fellowship in 2013.


MHEALTH TECHNOLOGIES FOR THE SELF-MANAGEMENT OF DIABETES IN THE OLDER POPULATION Sam Alexander School of Systems Engineering, University of Reading Reading, United Kingdom [email protected]

Abstract
Diabetes Mellitus affects 347 million people worldwide and is predicted to become the seventh leading cause of death by 2030. With an ageing population set to strain healthcare resources globally, providers are shifting focus towards technology for a more efficient solution to managing people's health. As a result, mobile health (mHealth) technologies have emerged to allow people to effectively self-manage chronic diseases, such as diabetes. However, many mobile technologies are not appropriate for older adults. The aim of this research is to develop a diabetes mHealth app through user-centred design with older adults.

Introduction/Motivation
Diabetes is a highly prevalent and rapidly growing chronic disease globally. 347 million people worldwide have been diagnosed [2] (4.8% of the population) and the World Health Organisation (WHO) predicts that diabetes will become the seventh leading cause of death by 2030 [6]. Of the people affected by diabetes in the United States, 25.9% are aged 65+ [1]. A landmark study conducted by Diabetes UK clearly demonstrated that self-monitoring of blood glucose, together with a range of interventions, improved the long-term glycemic control of patients [4]. A number of mobile health systems exist to help with diabetes self-monitoring, allowing users to record and, in a few cases, track their blood-glucose measurements. However, few are designed adequately for older adults and none seamlessly combine and analyse blood-glucose data with exercise and dietary data to form understandable and usable trends and real-time predictions of blood-glucose levels. In a health tracking study, Davidson et al. found that 94% of their older adult participants were interested in tracking their health using a mobile device [3]. Whilst one element of this study showed there was a clear interest, it also identified that application designers routinely failed to make a proper assessment of the 'needs' of the participants. This is of particular concern for older adults, who require careful design considerations to accommodate accessibility and usability issues. Davidson et al. have also illustrated how participatory design can be used to identify what is important to older adults in the design of mobile healthcare apps. Therefore, my proposed research seeks to capitalise on this methodology to identify and resolve prevalent issues with current apps, e.g. small text/graphs, hard-to-hear audio prompts, information that is too complex/specialised, limited use by (partially) blind older adults, and difficulty in understanding how to connect devices/synchronise data (see Figure 1).


Figure 1: Prevalent issues with current diabetes mHealth applications

Proposed solution
The proposed research will employ user-centred and participatory design methods in the design of a diabetes mHealth application for older adults. The work will comprise 5 phases:
Phase 1: Conduct a survey with a group of older adults diagnosed with diabetes to collect information on how they currently manage their condition and to identify technologies and features they are interested in.
Phase 2: Identify and evaluate existing diabetes mHealth apps on both the Apple App Store and the Google Play Store. Particular emphasis will be placed on the strengths and weaknesses of each with respect to older adult users.
Phase 3: Participatory design of a new diabetes mHealth app with a group of older adults. This will build upon the work of Davidson et al. [3], but with the aim of designing a new diabetes mHealth app.
Phase 4: Prototype and develop a diabetes mHealth smartphone app with specific design considerations for older adults, based upon the findings of the previous stages of research. Validate each iteration of the prototype with a focus group of older adults to ensure needs are being met.
Phase 5: Conduct an in-the-wild trial of the new diabetes mHealth app with a group of older adults, gathering information on frequency and type of usage over an extended period, as well as impact on behavior change. Comparisons can then be made with other mHealth apps to validate the projected improvement in adoption based on the new design.

Stage of research
I have conducted a review of literature on 'mobile healthcare (mHealth) applications' and 'design for older adults', with a particular focus on diabetes-related research. The review identified the ability to understand blood-glucose readings, and what they specifically mean for an individual's condition, as paramount to success in maintaining glycemic control. My findings suggest that very few diabetics are able to identify trends in their measurements, and the main issue they face is the inability to understand how their condition is affected in situations outside what they would consider to be a normal day.


In order to gain a better understanding of the glycemic control needs and mobile device preferences of older adults, I have constructed a survey to gather such information. The survey is to be administered to a group of older adults aged 65+ who are diagnosed with diabetes. The survey consists of two sections: understanding how the patients currently manage their diabetes, and identifying what technologies they might be interested in using. The main questions I want to answer are: to what extent do people feel they are able to self-manage their diabetes successfully; do they use any existing apps to help manage their condition, and what do they feel are the associated strengths and weaknesses of those apps; and what information would they be interested in knowing about their condition. The survey will be conducted over a month-long period, aiming at a target of 50 responses with a wide spread of technology usage amongst respondents. To ensure maximum participation, the survey will be made available both online and as a paper/phone interview alternative. I will also look to support the survey by conducting semi-structured interviews, following a similar methodology to Hope et al. [5]. The ASSETS 2014 conference came at a critical time, one year into my doctoral research, when I needed to clearly define the topic of research for my thesis. At the time of the conference, I was in a good position to present my findings, receive feedback on the work to date and get some guidance on the next stages of the research, which I have since acted upon.

Envisioned contributions
The initial stages of this research will provide a solid understanding of the issues older adults face, both in managing their diabetes using traditional methods, such as a logbook and blood-glucose meter, and when using other currently available mHealth apps. Later stages of the research will gain an understanding of what older adults would like to be able to do with their mobile devices to help manage their diabetes, whilst establishing a methodology for involving older adults in the design of mHealth apps. As a result of an extensive literature search, it has become apparent that only three other studies have used participatory design when building mobile health apps. Of these three, none are specific to apps for the self-management of diabetes and only one actually involves the older adult population. Beyond the development of the app itself, 'in-the-wild' end-user trials will allow us to gain an understanding of how such a technology can affect behavior change and impact on health. The technological value of this research also lies in novel methods for automating the wireless synchronisation of measurements from a commercial blood-glucose meter, integrating blood-glucose measurements with exercise, dietary and mood data, and making proactive blood-glucose level predictions based upon the result of this integration (see Figure 2). Future work in this area will look to establish novel ways of tracking a user's context and capturing a user's mood using a mobile device.


Figure 2: Proposed application architecture
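To make the integration idea concrete, here is a minimal, hypothetical sketch of a combined record for glucose, exercise, dietary and mood data, together with a naive trend estimate. The field names and the trend calculation are my own illustrative assumptions and are not the architecture shown in Figure 2, nor a validated prediction model.

```python
# Hypothetical data model: one record per synced reading, plus a naive
# "average change per reading" trend as a placeholder for prediction.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class HealthEntry:
    timestamp: datetime
    glucose_mmol_l: float            # reading synced from the glucose meter
    carbs_g: Optional[float] = None  # dietary intake around the reading
    exercise_min: Optional[int] = None
    mood: Optional[str] = None       # e.g. "good", "tired", "stressed"

def glucose_trend(entries: List[HealthEntry]) -> float:
    """Average change in glucose per reading (illustrative only)."""
    readings = sorted(entries, key=lambda e: e.timestamp)
    if len(readings) < 2:
        return 0.0
    deltas = [b.glucose_mmol_l - a.glucose_mmol_l
              for a, b in zip(readings, readings[1:])]
    return sum(deltas) / len(deltas)

log = [
    HealthEntry(datetime(2014, 6, 24, 8), 6.1, carbs_g=45, mood="good"),
    HealthEntry(datetime(2014, 6, 24, 13), 7.4, exercise_min=20),
    HealthEntry(datetime(2014, 6, 24, 19), 6.8, carbs_g=60, mood="tired"),
]
print(f"average change per reading: {glucose_trend(log):+.2f} mmol/L")
```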

What I have gained
The ASSETS doctoral consortium provided the ideal platform for me to present on how mobile healthcare system design for older adults should be approached. The consortium helped validate my research methods, as well as understand the placement of my research topic within the field of accessibility and usability. The conference itself has given me a great deal of insight within this field and facilitated the making of research contacts amongst both researchers and students. I feel that the conference has facilitated me becoming an active member of the accessible computing community, allowing me to make use of its expert knowledge and mentorship whilst also offering my own expertise in the field of user interface and interaction design for mobile devices. I look forward to contributing to the future direction of accessible computing.

Acknowledgments
• A big thank you to Faustina Hwang, my PhD supervisor.
• Thank you to the rest of the team: Carlos Sainz Martinez and Nic Hollinworth.

References
1. Centers for Disease Control and Prevention. National diabetes statistics report. http://www.cdc.gov/diabetes/pubs/statsreport14/national-diabetes-report-web.pdf (last accessed 2014-06-24), 2014.
2. G. Danaei, M. Finucane, Y. Lu, G. Singh, M. Cowan, and C. Paciorek. National, regional, and global trends in fasting plasma glucose and diabetes prevalence since 1980: systematic analysis of health examination surveys and epidemiological studies with 370 country-years and 2.7 million participants. Lancet, 378(9785):31-40, 2011.


3. J. Davidson and C. Jensen. What health topics older adults want to track: a participatory design study. Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, 2013.
4. Diabetes UK. Self monitoring of blood glucose (SMBG) for adults with type 1 diabetes. http://www.diabetes.org.uk/upload/Position%20statements/SMBGType1positionstatement.pdf (last accessed 2014-06-24), 2012.
5. A. Hope, T. Schwaba, and A. M. Piper. Understanding digital and material social communications for older adults. Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems - CHI '14, pages 3903-3912, 2014.
6. World Health Organization. Global status report on noncommunicable diseases 2010, 2011.

About the Author: Sam Alexander is a second-year PhD student in the School of Systems Engineering at the University of Reading, supervised by Dr Faustina Hwang. Sam has a background in mobile app development, user interface and interaction design. His research focuses on mobile healthcare technology to aid the self-management of chronic diseases.


WEB SEARCHING BY INDIVIDUALS WITH COGNITIVE DISABILITIES Redhwan Nour Department of Computer Science University of Colorado Boulder, CO 80309, USA [email protected]

Abstract
The ability to search for information on the web can provide tremendous support to people with cognitive disabilities, but there are few research studies with this focus. An exploratory study was conducted to explore how individuals with cognitive disabilities use three searching methods (typing, voice searching with manual microphone control, and hands-free voice searching). The results support a flexible design approach, as the preferences of the participants for the conditions varied. Some preferred typing, since they experienced some voice recognition issues with the microphone, while others chose hands-free voice searching to overcome spelling difficulties. Future work will aim to relate search behaviour of users with cognitive disabilities to their functional capabilities, and to evaluate Google's Search Education lessons to improve searching skills for people with cognitive disabilities.

Introduction
According to a recent World Health Organization estimate (2011), there are around 630 million individuals worldwide, and 28 million persons living in the United States, with cognitive disabilities [2]. There are many different conditions associated with cognitive disabilities. These include, but are not limited to, Down syndrome, autism, and mild learning disability. People with cognitive disabilities may have a variety of limitations; difficulties can be observed in planning, memory, performing action sequences, understanding complex concepts, solving problems and other mental processes. Many of these impairments coexist with other challenges such as motor, visual and other disabilities. However, it should be noted that if an individual has a cognitive disability, that does not indicate that all of his or her cognitive functions are impaired.

Generally, there is only limited research that focuses on people with cognitive disabilities using the web [1], and more work is needed. For web searching, I was able to find four studies that examined searching the web by people with cognitive disabilities. Davies, Stock, and Wehmeyer [3] found that individuals with cognitive disabilities made few errors when they used a specialized web browser named "Web Trek" that incorporated a picture-based search approach along with audio prompts. The importance of pictures was also highlighted in another study by Johnson and Hegarty [5]. In that study, individuals with mild to moderate learning disability, who were provided with needed support, were able to search the web for things they were interested in. The search inquiries included music artists, actors, musical instruments and a place of birth. The majority of participants in the study preferred websites that had many graphics. Harrysson, Svensk, and Johansson [4] reported that people with mild to moderate developmental disabilities had common difficulties in entering the correct spelling of words in the Google search box. Another problem was selecting from the large amount of text and links retrieved in search results. Finally, Kumin, Lazar, Feng, Wentz, and Ekedebe [6] reported that users with Down syndrome took advantage of the "auto-suggest" Google search feature built into Safari on the iPad to find a site instead of typing the URL.

Research Problem
The ability to search the web and find relevant sources is an essential skill for everyone. The aim of my research program is to investigate the obstacles that individuals with cognitive disabilities encounter while searching the web, considering their functional capabilities, and to learn if it is possible to improve their web searching skills by applying Google Search Education lessons. But there is an issue to be addressed in attacking these questions. Because searching has generally involved spelling, typing, and reading, there has been uncertainty about how well searching works for users with cognitive disabilities. Recent advances in technology have created new ways for people to perform web searches. All the studies just reviewed focused only on one searching method, which was typing. However, new features available for the Google search engine allow searching to be done with speech input rather than typing. Further, images can be included in many search results, potentially making it easier for people with cognitive disabilities to use search to find information, by reducing reliance on reading text. Google Knowledge Graph, a feature that provides a summary of information in response to some searches, may also be helpful. As a first step in this research, therefore, I conducted a pilot study to assess the contributions of these Google search engine features and what impact they have for users with cognitive disabilities.

The Pilot Study
Method
The features explored in the study were the following. First, voice searching with manual microphone control: the standard Google search box includes a microphone icon, and by clicking that icon the user can activate voice recognition, allowing them to speak their search terms. Second, hands-free voice searching: the new Google Chrome searching feature allows users to trigger voice recognition by saying "OK Google" rather than clicking the microphone icon. Third, images in search results: the Search Preview extension for Google Chrome adds thumbnail images to search results. The final feature is the Google Knowledge Graph. This relatively new feature (introduced in 2012) displays a summary of information in response to the user's search terms, as a supplement to the search results, for some searches. When either form of voice search is used, the Google Knowledge Graph summary is spoken. Figure 1 shows all the included features.

Participants
The participants, who were recruited from a school for students with special needs, were 5 females and 1 male, ranging in age from 20 to 23. The medical diagnoses for the participants were Autism (two participants), Attention Deficit Hyperactivity Disorder (ADHD), Traumatic Brain Injury (TBI), Down syndrome, and Turner syndrome. A Mini Mental State Examination (MMSE) was used to measure the cognitive ability of the participants. The results of the MMSE show that all the participants can be classed as "higher functioning". However, the participants reported challenges with reading, listening, math, paying attention, and memory. All six participants were frequent computer users, using "mainly desktop" computers either at school or at home. All of them use the internet daily, and use Google to search for things like games, facts, apps and other things. Table 1 shows summary information about the participants.


Figure 1: Included search features.

Table 1: Participants' information.
P1: Female, 20, 12 years of education (middle/high school); challenges: paying attention; diagnosis: ADHD; MMSE score: 29/30.
P2: Female, 22, 12 years of education (middle/high school); challenges: math and memory; diagnosis: Turner Syndrome; MMSE score: 28/30.
P3: Male, 23, 17 years of education (graduate school); challenges: listening, memory and paying attention; diagnosis: Autism; MMSE score: 28/30.
P4: Female, 22, 15 years of education (college/university); challenges: reading, math, memory, paying attention; diagnosis: TBI; MMSE score: 30/30.
P5: Female, 20, 14 years of education (college/university); challenges: paying attention; diagnosis: Down Syndrome; MMSE score: 29/30.
P6: Female, 21, 12 years of education (middle/high school); challenges: listening and paying attention; diagnosis: Autism; MMSE score: 29/30.

Procedure
The study was conducted at a special needs school in Denver, Colorado. The researcher met with the students twice, on different days that fit their schedule. The first meeting was to complete consent forms, gather demographic information and conduct the cognitive measurement test; this meeting took around 15-20 minutes for each participant. The second meeting was to go through pre-test questions, perform the actual tasks and ask post-test questions to find out participants' preferences and experience; it took around 25-35 minutes for each participant. Participants received a $30 Target gift card for their participation in the study. The participants performed the tasks using Google Chrome as the browser on an HP Windows touch tablet. All sessions were recorded using "Morae", the usability software system.

Search Tasks
Six search tasks were defined, two in each of three categories. First, the 'specified source' tasks were finding information about "cats" on Wikipedia, and locating a rating for the movie "Home Alone 2" on the Internet Movie Database (IMDb) website. Second, the 'free source, easy' tasks were finding the year George Washington was born, and finding the population of Omaha, Nebraska. Third, the 'free source, difficult' tasks were identifying two things that Google Glass does, and finding two ingredients in a recipe for onion soup. The information required for the 'specified source' and 'free source, easy' tasks could be found in the Google Knowledge Graph summary, while that required for the 'free source, difficult' tasks could not.

Experimental Design
There were three input conditions (voice searching with manual microphone control, hands-free voice searching, and searching by typing on the tablet's on-screen keyboard) and two output conditions (with and without images in search results). Each participant performed six tasks, one in each combination of the three input conditions and two output conditions. A balanced Latin square was used to balance the occurrence of particular tasks with the input and output conditions, and the order of tasks and conditions, as shown in Table 2. After performing all six tasks, the participants were asked to use whichever search condition they preferred to look for any information they were interested in. This final task was used to assess participants' preferences without asking for them explicitly. The researcher demonstrated to each participant how to use each method of searching when the method first appeared in the task order, and showed what search results looked like the first time images appeared in the task order and the first time they did not. The question for each task was read once, and a paper with the question was placed in front of the participant in case they forgot it. Rules were created for the researcher specifying when to intervene and when to move to the next task. These covered three scenarios. First, the researcher would intervene if the participants asked for help, for example if they did not understand the question. Second, if the participant said that he or she had found the answer to the question, or said they could not find it, it was time to move to the next task. Finally, if the participant was taking some time to find the answer, the researcher would not intervene immediately, but would give between 2 and 4 additional minutes. The intent of this policy was to give adequate time to complete the tasks without extending the testing period unduly.


Table 2: Task order (condition and output for each task, per participant).
P/T | Task 2: George Washington | Task 3: Google Glass | Task 4: Omaha Population | Task 5: Movie Info | Task 6: Soup Recipe | Task 1: Wikipedia
1 | Typing (image) | Typing (no image) | OK Google (no image) | Microphone (image) | OK Google (image) | Microphone (no image)
2 | Typing (no image) | Microphone (image) | Typing (image) | Microphone (no image) | OK Google (no image) | OK Google (image)
3 | Microphone (image) | Microphone (no image) | Typing (no image) | OK Google (image) | Typing (image) | OK Google (no image)
4 | Microphone (no image) | OK Google (image) | Microphone (image) | OK Google (no image) | Typing (no image) | Typing (image)
5 | OK Google (image) | OK Google (no image) | Microphone (no image) | Typing (image) | Microphone (image) | Typing (no image)
6 | OK Google (no image) | Typing (image) | OK Google (image) | Typing (no image) | Microphone (no image) | Microphone (image)
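For readers unfamiliar with the counterbalancing used in Table 2, the sketch below generates a balanced Latin square for the six input/output combinations using the standard construction (1, 2, n, 3, n-1, ...). It is an illustration under that assumption; it does not necessarily reproduce the exact assignment used in the study.

```python
# Illustrative balanced Latin square generator (standard construction).
def balanced_latin_square(conditions):
    """Each condition appears once per row and once per column; for an even
    number of conditions, each condition also immediately follows every
    other condition equally often across rows."""
    n = len(conditions)
    rows = []
    for r in range(n):
        row, low, high = [], r, r + 1
        for c in range(n):
            if c % 2 == 0:
                row.append(conditions[low % n])
                low -= 1
            else:
                row.append(conditions[high % n])
                high += 1
        rows.append(row)
    return rows

conditions = ["Typing/img", "Typing/no-img", "Mic/img",
              "Mic/no-img", "OK Google/img", "OK Google/no-img"]
for row in balanced_latin_square(conditions):
    print(row)
```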

Results When performing the final task with the method of their choice, three participants preferred typing over voice searching, since some of them had encountered voice recognition issues. The other three participants favored hands-free voice searching, pointing to the importance of overcoming spelling difficulties. None of the participants liked voice searching by clicking on the microphone; one of them said that it required too many steps. Three participants found having images in the search results helpful, and the others found no difference, as they read the links. The easiest task was finding information on Wikipedia, because all the participants were familiar with the site. The most difficult task was searching for Google Glass, as it was new to all of them; moreover, the Knowledge Graph did not provide any information here, so most of the participants spent more time on this task. The Knowledge Graph was very useful to all the participants when it was available. All tasks were completed successfully except task 6 for participant 1 and task 3 for participant 2.

Discussion Typing: None of the participants had an issue with typing. All of them use computers either at home or at school. However, one participant mentioned that typing would be easier on an external keyboard than on the on-screen keyboard, since it was difficult to type fast without making mistakes. The reasons some participants gave for preferring typing included familiarity, voice recognition issues with voice searching, and concerns about whether voice searching would work effectively in noisy or crowded places. Hands-free voice searching: Those who preferred hands-free voice searching pointed to the importance of overcoming spelling problems and considered it a faster way to get answers. P4 also highlighted the value of listening to the answer to a query. Microphone voice searching: Using the microphone was not favored by any of the participants. It was observed that some of them forgot to hit the microphone and said “OK Google”; in fact, P6 used “OK Google” instead of the microphone to complete task 6. Moreover, P2 mentioned that both clicking the microphone control and typing require too many steps. Images in search results: Three participants mentioned the value of this feature, particularly when searching for things they did not know. The other three preferred reading links when they search, so having an image made no difference. The Knowledge Graph: All the participants were able to answer the questions correctly when there was some information in the Knowledge Graph. In fact, in these cases the participants were not looking at the search results and relied solely on the Knowledge Graph. When there was no information in the Knowledge Graph, some participants either failed to complete the task or took more time to find the answer. For example, P5 searched for the Omaha population by state rather than saying “population of Omaha”. In that case, there was no Knowledge Graph entry and she had to move from one site to another until she found the answer. This was also observed in task 3, as none of the participants were familiar with Google Glass; they spent time figuring it out, and P2 failed to complete the task. Another example was the last task, “onion soup”: P1 failed to answer the question correctly because she did not recognize that Google provided information about soup without the word “onion” when she spoke. As for the voice response in voice searching, most of the participants paid little attention to it, possibly because they found the speech too rapid. Moreover, as indicated earlier, almost all of them relied on the information provided by the Knowledge Graph and would find the answers there. For instance, for task 4, “Omaha population”, the result appeared in the Knowledge Graph as “421,570 (2012)”, but the voice response said “The population of Omaha was four hundred twenty one thousands six hundred and twenty twelve.” The participants depended on the information presented in the Knowledge Graph and ignored the voice response, except P4, who noticed the difference. She said “without the picture (meaning the Knowledge Graph) I would have incorporated 2012 into the data number instead of a year”.
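P4’s observation suggests a simple presentation fix: when an answer is spoken, keep the year clearly separate from the figure. The snippet below is only an illustrative sketch of that idea (the function name and phrasing are invented for this example, not part of the study or of Google’s voice search):

    import re

    def speak_population(knowledge_graph_value, place="Omaha"):
        """Rephrase a Knowledge-Graph-style figure such as "421,570 (2012)" so the
        year cannot be heard as part of the number."""
        match = re.match(r"([\d,]+)\s*\((\d{4})\)", knowledge_graph_value)
        if not match:
            return knowledge_graph_value
        number, year = match.groups()
        return "As of %s, the population of %s was %s." % (year, place, number)

    print(speak_population("421,570 (2012)"))
    # -> As of 2012, the population of Omaha was 421,570.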

Conclusion The results of this pilot study underline the importance of alternative ways of communicating and of a flexible design approach. Because of the variation in abilities and conditions among individuals with cognitive disabilities, there is no one design that fits all. When searching the web, some people preferred voice searching to overcome spelling hurdles, while others favored the traditional method of typing. Images in search results can give some individuals a better searching experience.

Current Stage and Future Directions The current stage of the research focuses on taking the preliminary results from this exploratory study to the next step by conducting web searching studies at special needs schools. The goal will be to investigate how to improve web searching skills to support students with cognitive disabilities at school. The results of this pilot study showed variation among the participants in their abilities and preferences. Because people have different capabilities, the first step in future work will focus on a functional model instead of medical conditions. The second step will evaluate the effectiveness of Google’s Search Education lessons as they apply to individuals with cognitive disabilities.


Envisioned Contribution I aim to propose guidelines that will enhance web searching skills for people with cognitive disabilities based on their functional capabilities. These guidelines will rest on evidence-based research that contributes to the limited body of research on web use and cognitive disabilities.

Acknowledgments Special thanks to the Rehabilitation Engineering Research Center for Advancing Cognitive Technologies for supporting the preparation of the pilot study.

References
1. Borg, J., Lantz, A., & Gulliksen, J. (2014). Accessibility to electronic communication for people with cognitive disabilities: a systematic search and review of empirical evidence. Universal Access in the Information Society, 1-16.
2. Braddock, D., Hoehl, J., Tanis, S., Ablowitz, E., & Haffer, L. (2013). The Rights of People With Cognitive Disabilities to Technology and Information Access. Inclusion, 1(2), 95-102.
3. Davies, D. K., Stock, S. E., & Wehmeyer, M. L. (2001). Enhancing independent internet access for individuals with mental retardation through use of a specialized web browser: A pilot study. Education and Training in Mental Retardation and Developmental Disabilities, 36(1), 107-113.
4. Harrysson, B., Svensk, A., & Johansson, G. I. (2004). How people with developmental disabilities navigate the Internet. British Journal of Special Education, 31(3), 138-142.
5. Johnson, R., & Hegarty, J. R. (2003). Websites as educational motivators for adults with learning disability. British Journal of Educational Technology, 34(4), 479-486.
6. Kumin, L., Lazar, J., Feng, J., Wentz, B., & Ekedebe, N. (2012). A usability evaluation of workplace-related tasks on a multi-touch tablet computer by adults with Down syndrome. Journal of Usability Studies, 7(4), 118-142.

About the Author: Redhwan Nour is a third year PhD student at the University of Colorado at Boulder, Department of Computer Science. He earned his M.S. in Computer Science in 2012. His research interests include Human-Centred Computing, web accessibility, web searching, technology and cognitive disabilities, inclusive design, and assistive technology. His thesis research focuses on improving web searching skills for people with cognitive disabilities. Website: rnour.com


USING SOCIAL MICROVOLUNTEERING TO ANSWER VISUAL QUESTIONS FROM BLIND USERS Erin Brady University of Rochester Rochester, NY [email protected]

Abstract Advances in technology have proven uniquely useful for blind people, but some visual questions they encounter still require human assistance to answer. We developed VizWiz, a smartphone application that connects blind people who have visual questions to sighted workers who can provide answers. In order to make this app more sustainable and scalable, and to keep costs from being passed directly to the users, we propose social microvolunteering, which allows people to donate not only their own time as answerers, but also access to a larger pool of answerers through their friends on social networking sites.

Introduction In everyday life, blind people encounter visual information that they are unable to access easily. This information could be straightforward (the number on the sign at a bus stop), more complex to parse (the cheapest coffee on a menu), or subjective (whether two items of clothing coordinate well). Traditionally, accessing this information has required the help of a sighted companion or the use of assistive technologies, which may be task-specific or prohibitively expensive. While crowdsourcing (using remote people online to complete tasks) can assist blind people in accessing the visual information around them, workers are paid to answer questions. If these costs are passed on directly to the user, the service may become financially prohibitive over time or discourage them from using the crowd to answer all the questions they encounter. In this paper, we present social microvolunteering, a proposed method to create a free, sustainable source of nearly real-time answers to visual questions from blind users. Social microvolunteering allows people to donate not only their own labor, but also access to their network of friends on social networking sites. We discuss this method of volunteering, and some of the questions we have about its impact on participants’ use of social networking sites.

Related Work Mobile devices and smartphones have increased feelings of autonomy and independence among people with disabilities [1], as they can access information [2] or call others for help while out of the house [7]. However, asking for help over the phone has limitations, as the person providing information must be available to answer a call and cannot access visual information in the caller’s environment directly without photos or video. Other tools have been developed to replace or supplement the use of a sighted companion. OCR is available to read text, currency readers can distinguish similarly-sized bills, and GPS-based tools aid with navigation. However, most of these applications are single-purpose and can have limited functionality [3]. In order to overcome these limitations, we developed VizWiz [3,4], a smartphone application that connects blind users with sighted workers to get answers to visual questions.

Social Microvolunteering VizWiz [3] is a mobile phone application that allows users to take photographs of things around them, record a question, and send them to sighted workers or family and friends to get answers. Users are able to get answers to a variety of questions that would be hard or impossible for current automated tools. The project has seen significant use from people in the blind community, with around 70,000 questions asked since its release. This has raised questions about sustainability - while the project is currently funded by research grants, at some point a sustainable model for paying workers must be developed. While costs could be passed on directly to the user, they may become prohibitive over time, and might influence the frequency with which users want to ask questions. Instead, the ideal solution would be to find free, always-available labor online that could be used to respond to VizWiz questions. While people have volunteered to answer VizWiz questions, the pool of people available is limited, which might impact response speeds. Instead, we have proposed the idea of social microvolunteering, where volunteers are able to donate not only their own labor, but access to their friends via social networking sites. Tasks, like VizWiz questions, are posted onto the volunteer’s Facebook feed (see Figure 1). While the original volunteer is able to answer the question by commenting if they are online, the question can also be seen and answered by their Facebook friends. In this way, each volunteer’s potential impact is amplified by the number of Facebook friends they have, potentially decreasing the latency until a first answer is received and increasing the likelihood of getting a correct answer.

Figure 1. A post from Visual Answers, the social microvolunteering application we built to combine the benefits of friendsourcing and crowdsourcing.
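As a rough sketch of the plumbing such a system needs, the Python fragment below posts a question to a volunteer’s feed and later collects the comments as candidate answers. It assumes a Facebook Graph API version contemporary with the study, a user access token with publish permission, and hypothetical helper names; it illustrates the mechanism and is not the authors’ implementation.

    import requests

    GRAPH = "https://graph.facebook.com/v2.2"   # assumed API version

    def post_question(access_token, photo_url, question_text):
        """Post a VizWiz-style visual question to the volunteer's own feed."""
        resp = requests.post(GRAPH + "/me/feed", data={
            "message": "Visual Answers: %s (reply in a comment to help a blind user)" % question_text,
            "link": photo_url,
            "access_token": access_token,
        })
        resp.raise_for_status()
        return resp.json()["id"]                 # id of the new post

    def collect_answers(access_token, post_id):
        """Fetch comments on the post; each comment is a candidate answer to relay back."""
        resp = requests.get(GRAPH + "/" + post_id + "/comments",
                            params={"access_token": access_token})
        resp.raise_for_status()
        return [(c["from"]["name"], c["message"], c["created_time"])
                for c in resp.json().get("data", [])]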


Visual Answers In order to explore the concept of social microvolunteering, we combined a survey of Facebook users with a pilot study of a Facebook application called Visual Answers [5]. Visual Answers posts VizWiz questions to volunteers’ walls, as seen in Figure 1. For this pilot, we used a hand-selected set of archival VizWiz questions which had already been answered, so that we could verify the quality and speed of answers from the volunteers and their Facebook friends; however, a live deployment could directly record any Facebook comments as they are posted, and forward them back to the VizWiz user who had asked the question.

Survey with Facebook Users For the survey, we recruited 350 Facebook users through ads targeted towards people interested in volunteering, charity, or visual impairments. Respondents were heavy users of Facebook, with nearly all having used the platform for one or more years and logging in once a day or more. However, only 24% used applications which posted on their behalf, like Visual Answers. The majority of respondents (55%) were positive about the Visual Answers application, viewing it as a way to help people with disabilities (88%) and raise awareness of disability issues among their friends. Those who didn’t like the application cited a variety of reasons, including privacy concerns around the application (40%) and a fear of annoying their friends through the application’s posts.

Pilot Study of Visual Answers After completing the survey, respondents had the chance to install Visual Answers for a 12-day pilot test. 91 respondents installed the application; 14 participants left the study before the 12 days had elapsed, leaving 77 participants who completed the pilot. When presenting the installation option to the respondents, we alternated between offering paid compensation for installing (a $20 Amazon gift card) and no compensation. During the 12-day pilot, 1130 questions were posted. 479 questions received comments (42%), and a total of 756 comments were made. Our primary concerns for this pilot study were (i) response speeds, (ii) answer quality, and (iii) how volunteers responded to using the application.

Response Speed Comments on individual posts could be slow - first comments arrived in an average of 50 minutes (median 18 minutes). However, this decreased dramatically when questions were posted to multiple volunteers’ walls. If a question was posted to 10 volunteers, the first comment arrived in less than 5 minutes.
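The benefit of posting to several walls is essentially the statistics of taking a minimum over independent response times. The short simulation below illustrates this, assuming (purely for illustration) that a single wall’s first-comment latency is lognormal with the reported median of 18 minutes and mean of 50 minutes, and ignoring the fact that not every post received a comment:

    import numpy as np

    rng = np.random.default_rng(0)
    median, mean = 18.0, 50.0                        # minutes, from the pilot data
    mu = np.log(median)
    sigma = np.sqrt(2 * np.log(mean / median))       # matches the lognormal mean to 50 min

    def expected_first_answer(walls, trials=100_000):
        """Average time until the earliest comment when posting to `walls` walls."""
        samples = rng.lognormal(mu, sigma, size=(trials, walls))
        return samples.min(axis=1).mean()

    for k in (1, 5, 10):
        print("%2d walls -> first answer after ~%.1f minutes" % (k, expected_first_answer(k)))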

Answer Quality Most of the comments on the Facebook posts made by Visual Answers were good-faith answers, meaning correct answers if the question could be answered from the photograph, or a comment indicating that the question couldn’t be answered using the photograph. 82% of the comments were good-faith, and 91% of the first comments posted on each question were good-faith.

Volunteer Feedback Volunteers enjoyed the experience of using Visual Answers: 90% said they thought that Facebook was a good place for social microvolunteering, and 83% said that they would want to use Facebook for social microvolunteering in the future.


Conclusion and Contributions to Accessibility Through our research on VizWiz and Visual Answers, we have learned about the visual information needs of blind people, and examined ways to route their questions so that they receive high-quality answers without incurring financial or social costs. As the developers of VizWiz, we have two extremely valuable tools for research: (i) access to a large corpus of questions from blind users, telling us about areas where access to visual information could be improved, and (ii) the ability to change how questions are routed to answerers, to experiment with different worker motivations and compensations in an on-going effort to improve the quality and sustainability of VizWiz.

Acknowledgments The projects presented above were done in collaboration with Meredith Ringel Morris of Microsoft Research and Jeffrey P. Bigham of Carnegie Mellon University. This work has been supported by National Science Foundation Awards #IIS-1149709 and #IIS-1049080.

References 1. Abascal, J. Mobile communication for older people: new opportunities for autonomous life. Workshop on Universal Accessibility of Ubiquitous Computing: Providing for the Elderly, (2001), 1-10.

2. Azenkot, S., Prasain, S., Borning, A., Fortuna, E., Ladner, R.E., and Wobbrock, J.O. Enhancing independence and safety for blind and deaf-blind public transit riders. Proceedings of the 29th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), (2011), 3247.

3. Bigham, J.P., Jayant, C., Ji, H., et al. VizWiz: Nearly Real-time Answers to Visual Questions. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), (2010).

4. Brady, E., Morris, M.R., Zhong, Y., White, S., and Bigham, J.P. Visual Challenges in the Everyday Lives of Blind People. Proceedings of the 31st ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), (2013).

5. Brady, E., Morris, M.R., and Bigham, J.P. Gauging Receptiveness to Social Microvolunteering. Proceedings of the 33rd ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), (2015), (to appear).

6. Brady, E., Zhong, Y., Morris, M.R., and Bigham, J.P. Investigating the appropriateness of social network question asking as a resource for blind users. Proceedings of the 2013 ACM Conference on Computer Supported Cooperative Work (CSCW), (2013), 1225.

7. Kane, S.K., Jayant, C., Wobbrock, J.O., and Ladner, R.E. Freedom to Roam: A Study of Mobile Device Adoption and Accessibility for People with Visual and Motor Disabilities. Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), (2009), 0-7.

About the Author: Erin Brady is a PhD candidate in Computer Science at the University of Rochester, advised by Jeffrey P. Bigham. Her dissertation work has focused on analysing the help-seeking behaviors that occur when blind people encounter visual information that is inaccessible.


DEVELOPING A CHAIRABLE COMPUTING PLATFORM TO SUPPORT POWER WHEELCHAIR USERS Patrick Carrington UMBC [email protected]

Introduction Power wheelchair users often use and carry multiple mobile computing devices. Many wheelchair users experience motor impairments that affect their hands, arms, neck, and head. Additionally, a power wheelchair user’s ability to interact with computing technology may be physically restricted by the wheelchair’s frame, which can obstruct movement or limit reach. We believe these accessibility issues can be overcome by designing new systems to take advantage of the physical abilities of the user while leveraging the wheelchair’s frame to create assistive systems. We also hope to make progress toward limiting social barriers that hinder technology adoption. This could potentially improve the overall benefit of emerging technologies for people with varying abilities. With this research I aim to develop an interactive wheelchair-based computing platform designed to support the unique and dynamic needs of power wheelchair users while also creating new interaction possibilities.

Related Work Designing Wheelchair-Based Technology There are several examples of intelligent wheelchairs that leverage alternative controls or operation methods to support mobility and navigation for people with severe mobility impairments. These solutions range from intelligent and autonomous navigation through an environment [8], to control methods using gaze [10], body motions [6], and voice commands [7]. Solutions like the PerMMA system [11] use robotics to augment the physical abilities of the wheelchair user, empowering them to complete tasks with a robotic arm. Our approach differs in that we focus on utilizing the physical features of the wheelchair to augment its interactive capabilities and enable access to computer I/O, rather than augmenting its physical capabilities.

Social Barriers to Technology Use People with disabilities choose not to use, or choose to discard, technologies for many reasons. Even when a technology is usable and accessible, users might avoid it if they feel it will make them stand out or feel abnormal. Elliott et al. [5] use the term stigma to describe the social interactions created when people are thought not to meet expectations of “normal.” Bispo and Branco [1] note that assistive technologies such as wheelchairs often act as symbols of stigma. Shinohara and Wobbrock [9] explored perceptions and misperceptions of assistive technology and disability with visually impaired users, and found that individuals had fewer concerns about stigma when using mass-market computing devices. They propose that issues of perception and social acceptance might be mitigated using mass-market devices that support assistive functions. While an assistive technology user’s perception of stigma appears to vary across contexts, it is clear that people with disabilities consider the form factor of a device, and its noticeability, when considering using that device. We considered these factors when conducting our own interviews about technology use and during design sessions. We feel it is important to consider the social and ethical implications of using assistive technologies as these factors contribute to eventual technology adoption and use [4]. We aim to leverage the wheelchair’s familiar form factor to add accessibility features without significantly impacting the perceived stigma.

Previous Research We have explored the device and interaction preferences of power wheelchair users through multiple design sessions and interviews with power wheelchair users and clinicians [3]. We first conducted formative interviews and design sessions with power wheelchair users, developed design prototypes during a series of focus-group-style design sessions with clinicians at a spinal injury rehabilitation clinic, and evaluated these design options during interviews with power wheelchair users. We then explored the placement and form factor possibilities for inputs and outputs on a power wheelchair, and finally identified preferences for designing wheelchair-based interactive systems. From this exploration we defined the concept of Chairable devices: mobile and wearable technologies designed to fit a power wheelchair form factor. We continued in this area of research by developing a chairable input device we call the Gest-Rest [2]. The device was designed to support the input types and form factors identified as desirable during our previous studies. We evaluated this prototype with wheelchair users in order to determine its efficacy as an assistive technology solution.
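To make the idea of a chairable input surface concrete, here is a toy Python sketch that turns a sequence of pressure-pad frames into a simple press-or-swipe decision. Everything in it (array shapes, thresholds, gesture set) is hypothetical and it is not the Gest-Rest implementation:

    import numpy as np

    def classify_gesture(frames, press_threshold=0.6, travel_threshold=2.0):
        """frames: array of shape (time, rows, cols) with normalized pressure values.
        Returns "press", "swipe", or None, based on how far the pressure centroid moves."""
        active = frames.max(axis=(1, 2)) > press_threshold
        if not active.any():
            return None
        centroids = []
        for frame in frames[active]:
            ys, xs = np.indices(frame.shape)
            total = frame.sum()
            centroids.append(((ys * frame).sum() / total, (xs * frame).sum() / total))
        centroids = np.array(centroids)
        travel = np.linalg.norm(centroids[-1] - centroids[0])
        return "swipe" if travel > travel_threshold else "press"   # travel measured in pad cells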

Current and Future Work Our current work continues to engage people who use power wheelchairs to ensure a firm understanding of user needs as well as various user-generated solutions. Future work could make the following contributions: a set of design guidelines for wheelchair-based technologies, a wheelchair-based computing platform that applies these guidelines to address existing accessibility challenges, and insights into improving technology adoption and the social acceptability of assistive technologies. The continued development of prototype solutions for power wheelchair users will provide opportunities to explore useful applications for new input and output technologies. Through user-centered design and evaluation of prototypes we will capture the details of interaction with new inputs and outputs. We will also gain insight into context-related issues when interacting with technology in different settings. These insights will contribute to the design of adaptive and context-aware systems by providing concrete interaction examples and contexts of use. We will continue to involve users to explore social issues and the impact of using these chairable technologies. We will conduct a study exploring the benefits provided by web interfaces and support groups targeted specifically at power wheelchair users, with regard to assistive technology adoption and use. We will use the findings from this study to continue to explore the impact of caregiver support versus support from other people with similar experiences. Findings from these studies can contribute to the overall social outcomes and perceptions of independence derived from assistive technology use.


Finally, we will develop a wheelchair-based I/O platform that combines the prototype solutions we have developed and evaluated. We will deploy this platform with power wheelchair users in order to validate the benefits of our design process and identify opportunities to expand the range of applications that can be supported. In my program of study, I have successfully completed my coursework, passed the comprehensive exams, and I am currently preparing to propose my dissertation. By attending the ASSETS doctoral consortium, I hope to engage with researchers who also work with wearable, assistive, and augmentative technology to help define my dissertation. Advice, guidance, references, and collaborators from this group of established and talented researchers would be invaluable for ensuring the research contributions are unique, relevant, and valid.

Acknowledgments This work was partially funded by Microsoft and Nokia.

References
1. Bispo, R., & Branco, R. Designing out stigma: the role of objects in the construction of disabled people's identity. Proc. International Design & Emotion Conference (2008).
2. Carrington, P., Hurst, A., Kane, S.K. (2014) The Gest-Rest: A Pressure-Sensitive Chairable Input Pad for Power Wheelchair Armrests. Proc. ASSETS 2014. ACM.
3. Carrington, P., Hurst, A., Kane, S.K. (2014) Wearables and Chairables: Inclusive Design of Mobile Input and Output. Proc. CHI 2014. ACM.
4. Cook, A. M., & Polgar, J. M. (2007). Cook and Hussey's Assistive Technologies: Principles and Practice, 3rd ed. St. Louis, MO. Mosby Elsevier.
5. Elliott, G. C., Ziegler, H. L., Altman, B. M., & Scott, D. R. (1982). Understanding stigma: Dimensions of deviance and coping. Deviant Behavior, 3(3), 275-300.
6. Gulrez, T., Tognetti, A., Fishbach, A., Acosta, S., Scharver, C., De Rossi, D., & Mussa-Ivaldi, F. A. (2011). Controlling wheelchairs by body motions: A learning framework for the adaptive remapping of space. arXiv preprint arXiv:1107.5387.
7. Hockey, B. A., & Miller, D. P. (2007, October). A demonstration of a conversationally guided smart wheelchair. Proc. ASSETS 2007 (pp. 243-244). ACM.
8. Lin, C. T., Euler, C., Wang, P. J., & Mekhtarian, A. (2012). Indoor and outdoor mobility for an intelligent autonomous wheelchair. Proc. ICCHP 2012 (pp. 172-179). Springer Berlin Heidelberg.
9. Shinohara, K., & Wobbrock, J. O. In the shadow of misperception: assistive technology use and social interactions. Proc. CHI 2011, ACM Press (2011), pp. 705-714.
10. Wästlund, E., Sponseller, K., & Pettersson, O. (2010, March). What you see is where you go: testing a gaze-driven power wheelchair for individuals with severe multiple disabilities. In Proc. ETRA 2010 (pp. 133-136). ACM.
11. Xu, J., Grindle, G. G., Salatin, B., Vazquez, J. J., Wang, H., Ding, D., & Cooper, R. A. (2010, October). Enhanced bimanual manipulation assistance with the personal mobility and manipulation appliance (PerMMA). In Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on (pp. 5042-5047). IEEE.


About the Author: Patrick Carrington is a Human-Centered Computing (HCC) Ph.D. student in the Information Systems department at UMBC (University of Maryland, Baltimore County). He is a student in the Prototyping and Design Lab, “The PAD”, where he is co-advised by Dr. Amy Hurst (UMBC) and Dr. Shaun K. Kane (University of Colorado at Boulder). His research includes human-computer interaction (HCI), accessibility, natural user interfaces (NUI), ubiquitous computing, and wearable technology. He is interested in developing systems that provide new and interesting ways to support users with a range of abilities and provide access to information technology. He is also interested in applying principles of ubiquitous and wearable computing to everyday (and not-so-everyday) computing challenges.


SCALABLE METHODS TO COLLECT AND VISUALIZE SIDEWALK ACCESSIBILITY DATA FOR PEOPLE WITH MOBILITY IMPAIRMENTS: AN OVERVIEW Kotaro Hara HCIL | Makeability Lab University of Maryland, College Park [email protected]

Figure 1: Our proposed data collection methods will provide unprecedented levels of street-level accessibility information. I will use the information to develop map-based accessibility tools such as RouteAssist, a mobile-phone based tool that personalizes route suggestions based on a user’s reported mobility-levels.

ABSTRACT Poorly maintained sidewalks pose considerable accessibility challenges for mobility impaired persons; however, there are currently few, if any, mechanisms to determine accessible areas of a city a priori. In this paper, I introduce four threads of research that I will conduct for my Ph.D. thesis, aimed at creating new methods and tools to provide unprecedented levels of information on the accessibility of streets and sidewalks.

Introduction According to the most recent US Census (2010), roughly 30.6 million adults have physical disabilities that affect their ambulatory activities. Of these, nearly half report using an assistive aid such as a wheelchair (3.6 million), cane, crutches, or walker (11.6 million) [9]. Despite comprehensive civil rights legislation for Americans with disabilities, many city streets and sidewalks in the US remain inaccessible. The problem is not just that sidewalk accessibility fundamentally affects where and how people travel in cities, but also that there are few, if any, mechanisms to determine accessible areas a priori. The goal of my thesis is to create new methods and tools to provide unprecedented levels of data on the accessibility of streets and sidewalks (Figure 1). Using a combination of crowdsourcing, computer vision (CV), and online map imagery such as Google Street View (GSV), I will design, develop, and evaluate methods and tools to collect and visualize street-level accessibility information (Figure 2).


Background and Related Work There are three primary components related to my thesis: (i) a taxonomy of street-level accessibility features and existing accessibility audit methods; (ii) crowdsourcing image labeling; and (iii) CV techniques. Sidewalk Accessibility Factors and Audit Methods: As I develop novel methods and tools to collect sidewalk accessibility information, it is crucial to understand how the design of streets and sidewalks can limit the access of people with mobility impairments. Common accessibility problems include: lack of sidewalks, missing curb ramps, poor walking surfaces, blocked pathways, difficult street crossings, and narrow sidewalks [8]. Street and sidewalk walkability audits are often time-consuming and expensive, so more efficient auditing methods have been explored, including the use of GSV to audit characteristics of neighborhoods [6]. Reported benefits of these remote methods include time savings and the ability to monitor cities from a central location.

Crowdsourcing and Image Labeling Tasks: Our image labeling task is analogous to those often performed in training CV systems (e.g., [7]). Our image labeling work differs not only in focus (accessibility) but also in its unique integration of crowdsourcing and CV to increase scalability and control for quality. In addition, our labels do not just identify features; they also describe their quality. Computer Vision: Work on analyzing outdoor scenes via CV algorithms is also relevant. For example, [1] use edge-based cues to detect rectangular objects in man-made environments. I will apply these methods towards our aims, specifically automatically detecting accessibility features in GSV, triaging scenes, and adapting crowd workflows.
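The general shape of such an integration can be sketched in a few lines of Python. The detector, crowd interface, and thresholds below are hypothetical stand-ins, used only to illustrate the triage idea of accepting confident detections automatically, verifying uncertain ones cheaply, and falling back to full crowd labeling when the detector finds nothing:

    def triage_scene(gsv_image, detector, crowd, accept=0.85, reject=0.15):
        """detector(image) -> list of (bounding_box, confidence) candidates
        crowd.verify(image, box) -> majority-vote True/False from a few workers
        crowd.label(image)       -> boxes drawn from scratch by workers"""
        candidates = detector(gsv_image)
        if not candidates:
            return crowd.label(gsv_image)            # nothing detected: full labeling pass
        labels = []
        for box, confidence in candidates:
            if confidence >= accept:
                labels.append(box)                   # confident detection: accept automatically
            elif confidence > reject and crowd.verify(gsv_image, box):
                labels.append(box)                   # uncertain detection: cheap verification task
        return labels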

Research Goals and Methods My thesis comprises four threads of research: (i) a formative study to better understand the accessibility problems; (ii) the development and evaluation of methods for accessibility data collection; (iii) the integration of CV algorithms; and (iv) the design and evaluation of accessibility-aware map-based tools that demonstrate the utility of our data. i. Formative Inquiry To inform our designs, I will conduct three formative inquiries. I will interview: (a) sidewalk design experts, to understand practices around accessible design; (b) mobility impaired persons, to gain deeper insight into how street-level accessibility affects their lives; and (c) caregivers, to understand their perspective on managing travel and what technologies they use for determining accessible routes. ii. Crowdsourcing Accessibility Data Collection The primary goal of my thesis is to design scalable methods and interactive tools to collect street-level accessibility information from GSV. Ultimately, I will create a volunteer-based crowdsourcing system where any online user can contribute to collecting accessibility information.


Figure 2. I design, develop, and evaluate tools to locate, identify, and assess sidewalk accessibility features in GSV. A GSV pane is the primary interaction area for exploring and labeling; the user selects an accessibility problem type and outlines the obstacle in the GSV image.

Thus far, I have conducted four studies centered on exploring new data collection methods: First, I investigated the feasibility of inspecting sidewalk accessibility in GSV [3]; Second, I piloted a range of labeling interfaces and examined their effect on labeling time and accuracy [2]; Third, I performed preliminary experiments using a manually curated database of 229 GSV images demonstrating that minimally trained crowd workers could correctly determine the presence of accessibility problems with 81% accuracy [3]; Finally, follow-up work complemented the studies above by addressing some of their limitations (e.g., improving the labeling interface (Figure 2) and analyzing the effect of image age) [5]. iii. Increasing Scalability with Computer Vision Crowdsourcing accessibility data collection is labor intensive, and is not readily scalable to large areas. At the same time, automatic methods are unlikely to identify accessibility problems with a sufficiently high level of accuracy to create a useful system. I will therefore explore methods of combining the two. I plan to use CV to produce improved labeling in several distinct ways. Thus far, I have developed a preliminary accessibility feature detector using state-of-the-art CV techniques, and proposed a CV/ML-based method of automatically controlling a workflow to allocate workers and increase time-efficiency [4,5]. iv. Proof of Concept Accessibility Applications There will be multiple ways to incorporate the collected accessibility information. As a proof of concept, I will design and evaluate RouteAccess (Figure 1). For this application, the key research question is how to incorporate new data related to mobility access. Employing a user-centered design process, I will design the tools to allow users to easily discover both areas of high accessibility in cities and areas that are particularly inaccessible.
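As a hint of how the collected data could feed such a routing tool, the toy function below scores a candidate route against a user’s mobility profile. The data structures and weights are invented for illustration and do not describe the actual application:

    def personalized_route_cost(route_segments, reported_problems, mobility_profile):
        """route_segments: list of (segment_id, length_m)
        reported_problems: dict mapping segment_id -> list of problem types
            produced by the crowdsourcing/CV pipeline (e.g. "missing curb ramp")
        mobility_profile: dict mapping problem type -> extra cost in meters for this user"""
        cost = 0.0
        for segment_id, length_m in route_segments:
            cost += length_m
            for problem in reported_problems.get(segment_id, []):
                cost += mobility_profile.get(problem, 0.0)
        return cost

    # Example: a manual wheelchair user who weights missing curb ramps very heavily.
    profile = {"missing curb ramp": 500.0, "surface problem": 150.0}
    route = [("a", 120.0), ("b", 80.0)]
    problems = {"b": ["missing curb ramp"]}
    print(personalized_route_cost(route, problems, profile))   # 120 + 80 + 500 = 700.0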

Expected Contributions to ASSETS community This research will contribute to the accessibility community by: (i) extending knowledge of how people with mobility impairments interact with technology to navigate through cities; (ii) demonstrating that GSV is a viable source for learning about the accessibility of the physical world; (iii) introducing scalable methods that combine crowdsourcing and CV to collect accessibility information; and (iv) presenting and evaluating tools that, based on the collected data, empower people with mobility impairments as well as city governments.

Acknowledgements I thank my advisor, Prof. Jon Froehlich for his guidance. This work is supported by an NSF grant (IIS-1302338), a Google Faculty Research Award, and an IBM PhD Fellowship.

REFERENCES
1. Han, F. and Zhu, S.C. Bottom-Up/Top-Down Image Parsing with Attribute Grammar. IEEE Trans. Pattern Anal. Mach. Intell. 31, 1 (2009), 59–73.
2. Hara, K., Le, V., and Froehlich, J. A Feasibility Study of Crowdsourcing and Google Street View to Determine Sidewalk Accessibility. Proc. of ASSETS 2012.
3. Hara, K., Le, V., and Froehlich, J. Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems. Proc. of CHI 2013.
4. Hara, K., Sun, J., Chazan, J., Jacobs, D., and Froehlich, J. An Initial Study of Automatic Curb Ramp Detection with Crowdsourced Verification using Google Street View Images. Proc. of HCOMP 2013, WiP.
5. Hara, K., Sun, J., Jacobs, D.W., and Froehlich, J.E. Tohme: Detecting Curb Ramps in Google Street View Using Crowdsourcing, Computer Vision, and Machine Learning. Proc. of UIST 2014.
6. Rundle, A.G., Bader, M.D.M., Richards, C.A., Neckerman, K.M., and Teitler, J.O. Using Google Street View to audit neighborhood environments. Ame. J. of Prev. Med. 40, 1 (2011), 94–100.
7. Russell, B., Torralba, A., Murphy, K.P., and Freeman, W.T. LabelMe: a database and web-based tool for image annotation. IJCV 77, 1-3 (2007), 157–173.
8. Sandt, L., Schneider, R., Nabors, D., Thomas, L., Mitchell, C., and Eldridge, R. A Resident’s Guide for Creating a Safe and Walkable Communities. 2008.
9. U.S. Census Bureau. Americans with Disabilities: 2010 Household Economic Studies. 2012.

About the Author: Kotaro Hara is a fifth year Ph.D. student in the Department of Computer Science at the University of Maryland, College Park, advised by Dr. Jon E. Froehlich. His research focuses on designing, building, and evaluating systems powered by both people and machines to improve the accessibility of the physical world. He received a B.E. in Information Engineering from Osaka University in 2010 and was named an IBM Ph.D. Fellow in 2014.


PERFORMANCE EVALUATION FOR TOUCH-BASED INTERACTION OF OLDER ADULTS Afroza Sultana School of Information Studies, McGill University Montreal, QC, Canada [email protected]

Abstract Understanding target acquisition tasks is particularly important for older adults aged 65 years and older, as some may find touch-based interfaces challenging due to age-related declines in motor skills. This work aims to develop novel performance evaluation measures that can assess the touch-based target selection difficulties of older adults. In addition, this study intends to explore trajectories from real-life, unconstrained target selection tasks collected over a longer period of time in order to develop such performance evaluation measures.

Introduction Technology can assist older adults in carrying out their daily activities, independently and socially, both inside and outside their homes [23, 24]. However, age-related declines in cognitive and motor function can hinder the successful use of modern technology [4]. Many older adults experience age-related motor declines that can lead to frequent selection errors while aiming at targets (e.g., buttons or hyperlinks) on touch-screen devices such as smartphones and tablets [12]. These errors may trigger the loading of unwanted and costly-to-load programs, or the loss of inputs (e.g., on a web form), which can cause frustration [1]. Understanding older adult users’ target selection behaviour, particularly their selection strategies and error recovery strategies, can be helpful for designing accessible touch-based interfaces for them. However, little is known about the target selection performance of older adults on touch-based user interfaces. The existing target selection performance analysis metrics [13] were developed mainly by studying younger adults. In addition, major performance evaluation models were developed from interaction with indirect input devices (such as the mouse), and have later been applied directly to evaluate touch-based direct input devices [13, 21]. The difference between target selection with a mouse and with a touch-based interface is that mouse interaction takes place on a two-dimensional screen, whereas touch-based interaction happens in a three-dimensional space, with the distance between the finger and the screen as the additional dimension. It is not clear whether performance evaluation measures developed from two-dimensional interaction can fully capture the complexities of three-dimensional touch-based interaction. Therefore, further investigation is required to assess and extend these metrics to evaluate the performance of older adults on touch-based target selection tasks. The literature on performance evaluation of selection tasks shows that the majority of such studies (both with older adults specifically, and with users more generally) have relied on controlled laboratory experiments [6, 8, 16, 17, 18, 19, 21]. While such laboratory studies have high internal validity, making them useful for careful comparisons of selection mechanisms, they lack external validity in that they do not capture the context in which selections take place. In particular, they do not capture the cost of recovering from errors. As some older adults find it difficult to recover from errors, error recovery could have a substantial influence on performance. Moreover, laboratory studies often do not explore all types of selection tasks, and some techniques are only feasible in certain contexts, which can make field deployments difficult [11]. Furthermore, the performance of the same user may vary significantly across different task sessions; hence data collected in a single lab session may not reflect the variations of real-world user interactions [9]. On the other hand, studies conducted “in the wild” or from real user interactions must grapple with the challenges of inferring the user’s intent (i.e., whether the user aimed for the correct target but missed, or mistakenly aimed for the wrong target) and teasing out the influence of external variables [2, 5]. Furthermore, as unconstrained user interaction data is collected over months, the multidimensional contents of such enormous datasets pose a challenge to manual inspection of users’ performance measures. This gap in the literature motivated us to explore target selection tasks in an unconstrained environment, so that the context of the target selection task (including, for example, older adults’ negotiation of the trade-off between speed and accuracy, target verification strategies, and responses to environmental interruptions) can be studied in detail.

Proposed Methodology As mentioned above, we intend to develop performance evaluation measures for the touch-based target acquisition behaviour of older adult users. We will extend the two-dimensional sub-movement trajectory model of target selection tasks [14] into three-dimensional space. We will apply motion capture technology to collect three-dimensional target selection trajectories for further analysis. The study will also explore unconstrained real-life user interaction data to understand the target selection behaviour of older adult users. The challenge associated with such data collection is to segment each task from the stream of continuous in-situ target selection data. Prevailing human psychomotor models for target acquisition suggest that target selection tasks typically initiate with a ballistic movement, followed by secondary smaller sub-movements [15, 20]. A study on task segmentation with in-situ mouse data has shown that a peak velocity can identify the beginning of a target selection task, while tapping on a target can indicate its termination [3]. We will investigate similar velocity-related information for task segmentation. Another challenge with in-situ data is to distinguish unintended erroneous movements from deliberate target selection movements. One study developed classifiers to separate intentional and unintentional target selection trajectories [7]. This work gave us confidence to apply supervised machine-learning algorithms to touch-based selection tasks to filter out unintentional movements from intentional ones. In a preliminary feasibility study, we applied four supervised machine-learning algorithms - decision trees, naïve Bayesian networks, neural networks, and rule induction algorithms - to pen-based selection task data from a previous controlled lab study [22]. We applied measures from Fitts’ model [13] and sub-movement models [10, 14] to classify correct and incorrect target selections by younger (19-29 years) and older (65-86 years) adults. The results showed that all classifiers achieved accuracy above 90% and a true positive rate of about 65% in identifying correct target selection trajectories. We will extend this preliminary study to in-situ touch-based target selection tasks with participants from both younger and older adult age groups.
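A minimal sketch of this kind of pipeline - trajectory features (movement time, peak velocity, a rough sub-movement count from velocity peaks, and Fitts' index of difficulty log2(D/W + 1)) fed to a decision tree - might look as follows. The feature set, placeholder data, and parameters are illustrative only and do not reproduce the study's implementation:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def selection_features(t, x, y, target_width, target_distance):
        """Features for one selection trajectory (t in seconds, x/y in mm)."""
        v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)      # tangential velocity profile
        peaks = int(np.sum((np.diff(v)[:-1] > 0) & (np.diff(v)[1:] < 0)))
        fitts_id = np.log2(target_distance / target_width + 1)
        return [t[-1] - t[0], float(v.max()), peaks, fitts_id]

    # Placeholder feature matrix and labels standing in for rows produced by
    # selection_features on logged trajectories (1 = correct selection, 0 = error).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))
    y = rng.integers(0, 2, size=200)

    clf = DecisionTreeClassifier(max_depth=4).fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))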


Conclusion Older adult users need modern technologies for an independent lifestyle. However, age-related motor impairments create difficulties in using modern touch-based user interfaces. Although performance evaluation measures for target selection tasks have been extensively studied, little is known about their capabilities during unconstrained, free-form tasks on touch-screens, especially for older adult users. This work aims to fill this gap by exploring the target selection behaviour of older adult users on touch-based interfaces. The outcome of this study will be useful for understanding the general traits of older adults’ target selection behaviour. Furthermore, the findings will contribute to the design of accessible touch-based user interfaces for older adults and for users with reduced motor function.

References
1. Birdi, K., and Zapf, D. 1997. Age differences in reactions to errors in computer-based work. Behav. & Info. Technology, 16, 6, 309-319.
2. Brown, B., Reeves, S., and Sherwood, S. 2011. Into the wild: challenges and opportunities for field trial methods. In Proc. of CHI’11. 1657-1666.
3. Chapuis, O., Blanch, R., and Beaudouin-Lafon, M. 2007. Fitts’ law in the wild: A field study of aimed movements. Laboratoire de Recherche en Informatique, LRI Technical Report Number 1480, December, 2007.
4. Dennis, N., and Cabeza, R. 2011. Neuroimaging of Healthy Cognitive Aging. In The handbook of aging and cognition. Psychology Press, New York, NY. 1-54.
5. Evans, A., and Wobbrock, J. 2012. Taming wild behavior: the input observer for text entry and mouse pointing measures from everyday computer use. In Proc. of CHI’12. 1947-1956.
6. Findlater, L., Froehlich, J., Fattal, K., Wobbrock, J., and Dastyar, T. 2013. Age-related differences in performance with touchscreens compared to traditional mouse input. In Proc. of CHI’13. 343-346.
7. Gajos, K., Reinecke, K., and Herrmann, C. 2012. Accurate measurements of pointing performance from in situ observations. In Proc. of CHI’12. 3157-3166.
8. Hourcade, J., and Berkel, T. 2008. Simple pen interaction performance of young and older adults using handheld computers. Interacting with Computers, 20, 1. 166-183.
9. Hurst, A., Mankoff, J., and Hudson, S. 2008. Understanding pointing problems in real world computing environments. In Proc. of ACM SIGACCESS ASSETS, 2008. 43-50.
10. Hwang, F., Keates, S., Langdon, P., and Clarkson, J. 2004. Mouse movements of motion-impaired users: a submovement analysis. In Proc. of ACM SIGACCESS Accessibility and Computing, 2004, 102-109.
11. Jansen, A., Findlater, L., and Wobbrock, J. 2011. From the lab to the world: Lessons from extending a pointing technique for real-world use. In Proc. of CHI'11. 1867-1872.
12. Keates, S., & Trewin, S. 2005. Effect of age and Parkinson's disease on cursor positioning using a mouse. In Proc. of 7th ACM SIGACCESS ASSETS 2005, 68-75.
13. MacKenzie, I. S. (1992). Fitts' law as a research and design tool in human-computer interaction. Human-computer interaction, 7, 1, 91-139.
14. MacKenzie, I., Kauppinen, T., and Silfverberg, M. 2001. Accuracy measures for evaluating computer pointing devices. In Proc. of CHI’01. 9-16.
15. Meyer, D. E., Abrams, R. A., Kornblum, S., Wright, C. E., and Keith Smith, J. E. 1988. Optimality in human motor performance: ideal control of rapid aimed movements. Psychological review, 95, 3, 340-370.
16. Moffatt, K., and McGrenere, J. 2010. Steadied-bubbles: combining techniques to address pen-based pointing errors for younger and older adults. In Proc. of CHI’10. 1125-1134.
17. Motti, L., Vigouroux, N., and Gorce, P. 2014. Design of a Social Game for Older Users Using Touchscreen Devices and Observations from an Exploratory Study. Universal Access in Human-Computer Interaction. Aging and Assistive Environments. Springer. 69-78.
18. Ren, X., and Moriya, S. 2000. Improving selection performance on pen-based systems: a study of pen-based interaction for selection tasks. ACM Trans. on Comp.-Hum. Interaction (TOCHI), 7, 3, 384-416.
19. Schneider, N., Wilkes, J., Grandt, M., and Schlick, C. 2008. Investigation of input devices for the age-differentiated design of human-computer interaction. In Proc. of the Hum. Factors and Ergonomics Society Ann. Meeting. 144-148.
20. Schmidt, R. A., Zelaznik, H. N., and Frank, J. S. 1978. Sources of inaccuracy in rapid movement. Information processing in motor control and learning, 183-203.
21. Soukoreff, R., and MacKenzie, I. 2004. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts’ law research in HCI. International Journal of Human-Computer Studies, 61, 6, 751-789.
22. Sultana, A., and Moffatt, K. 2013. Automatic error detection from pointing device input data. In Proc. of ASIST’13. 1-3.
23. Uhlenberg, P. 2004. Historical forces shaping grandparent–grandchild relationships: Demography and beyond. Annual review of gerontology and geriatrics, 24, 1, 77-97.
24. Wey, S. 2004. Use of Assistive Technology. In M. Marshall (Ed.), Perspectives on rehabilitation and dementia, 202-208.

About the Author: Afroza Sultana is a second year Ph.D. student in the School of Information Studies at McGill University in Montreal, Canada, under the supervision of Dr. Karyn Moffatt. Her research interest is in developing performance evaluation metrics to understand the touch-based target selection behaviour of older adult users. Afroza’s overall research goal is to design accessible user interfaces that assist older adult users with a prolonged independent lifestyle.


THE USE OF CONCURRENT SPEECH TO ENHANCE BLIND PEOPLE’S SCANNING FOR RELEVANT INFORMATION João Guerreiro INESC-ID / Instituto Superior Técnico, Universidade de Lisboa [email protected]

Abstract Screen readers present information sequentially, which contrasts with the visual presentation on screen. This affects blind users’ efficiency in browsing digital information, particularly when performing fast reading tasks. In this thesis, we will study the use of concurrent speech sources to help blind people find their information of interest faster. Our first results suggest that blind users are able to identify and understand a relevant speech signal when listening to 2 or 3 concurrent talkers. Our next steps include comparing with current alternatives, assessing the learning effect with practice, and finding ways to interact with those sources.

Introduction Great efforts have been made to enhance access to digital information by visually impaired users. Moreover, such users develop their own tactics for tackling problematic situations [14] and develop browsing strategies to scan information more efficiently (e.g., navigating through headings or increasing the speech rate) [4]. However, “the biggest problem in non-visual browsing remains the speed of information processing” [4], which puts blind users at a significant disadvantage compared to sighted users’ browsing performance [3]. For instance, webpages such as news sites and social networking sites present several summarized information items, through which the user needs to navigate in order to assess which are relevant and deserve further attention [10]. The differences between sighted and blind users are influenced by the fast reading techniques that sighted users employ, such as looking through titles and other visually prominent content or catching separate phrases (so-called diagonal reading) [1]. These techniques enable sighted users to sift through a website quickly to get an overview of its content (skimming) or to find specific information (scanning). In contrast, screen readers present information sequentially to blind users, which contrasts with the visual presentation on screen, which portrays much more information at a time. We believe that using multiple simultaneous speech sources is a pathway to enhance blind people’s ability to quickly identify their information of interest. In a recent study [11] with 23 participants, we aimed to understand blind people’s ability to search for relevant news snippets while listening to 2, 3 or 4 concurrent speech channels. The results suggested that identifying one relevant source is a straightforward task while listening to 2 simultaneous talkers, and most participants were still able to identify it with 3 talkers. Moreover, both 2 and 3 simultaneous sources may be used to understand the relevant source’s content, depending on speech intelligibility demands (how much needs to be perceived) and user characteristics. In our next steps, we will compare our approach with currently used techniques, in particular the use of fast speech rates. Finally, we will explore different ways to interact with these simultaneous sources.


Background The Cocktail Party Effect refers to the human ability to focus attention on a single person among several conversations and background noise [8]. Moreover, one may detect interesting content in the background (e.g., one’s own name or a favourite subject) and shift attention to another person. Several studies in ear physiology and neuroscience address the human auditory system’s ability to segregate speech [5] and support the use of simultaneous speech sources. Yet the configuration of the speech sources is crucial to maximize the number of concurrent talkers. Attributes such as the spatial location [6] and the voice characteristics [9] of each source play an important role in segregating a mixture of sounds into different streams. To sum up a large body of previous work, most studies focus on selective attention tasks, where participants have to report the speech of a specific talker, neglecting the others. These studies (and even those on divided attention - e.g., [2]) usually require the exact identification of short phrases. In contrast, we believe that longer texts will provide the gist of each source without requiring listeners to attend to the entire content, in order to assess its relevance (relevance scanning) or to find specific information (scanning). Furthermore, most of these studies recruited sighted users, but there is evidence that blind people’s sensory compensation enhances their ability to segregate, understand and report speech [12]. This is clearer for congenitally or early-blind people due to the neuroplasticity that occurs until 18-21 years of age [7]. Although there is valuable research that builds on the insights of these studies (e.g., [13]), it ends up neglecting the parameterization of the speech sources and focuses on contexts rather different from scanning.

Goals and Contributions
In this research, we plan to investigate opportunities to make fuller use of the audio channel to provide more information to blind users, allowing them to find the information they are interested in faster. We believe concurrent speech will enable faster insight into the available information and prompt decisions about which items deserve further attention. Our hypothesis is: delivering multiple simultaneous parameterized sound sources to blind people enables faster, but still effective, scanning to find relevant information.

Concurrent Speech in Scanning Scenarios
The use of simultaneous speech sources may not fit every scenario. We believe this approach is more appropriate for scanning than for skimming tasks, since the latter requires an overall understanding of the texts. Scanning tasks may involve finding specific information that a person is already searching for, or finding information of interest without previous clues. In our recent study [11], 23 participants had to listen to 2, 3 or 4 concurrent speech sources (news snippets) and identify and understand a specific one of them. A hint priming the relevant snippet was given before each trial. The spatial locations of the speech sources were inspired by several experiments that use equally spaced positions in the frontal horizontal plane. Although other spatial configurations have been proposed and provided better results overall, they end up sacrificing specific locations [6]. In our spatial setting, the sound sources were separated by 180º, 90º and 60º for 2, 3 and 4 talkers, respectively (Figure 1). Although most experiments on simultaneous speech segregation focus on pitch variations, the best results are achieved when varying the two main characteristics that distinguish male and female voices: the pitch (glottal pulse rate) and the formant frequencies (vocal tract length) [9].
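As an illustration only (the article itself contains no code), the equally spaced layout described above can be expressed as azimuths across the frontal horizontal plane, from hard left (-90º) to hard right (+90º). The function name and degree convention below are our own assumptions, not part of the authors' system; it is a minimal sketch that reproduces the 180º, 90º and 60º separations for 2, 3 and 4 talkers.

```python
def frontal_azimuths(n_talkers: int) -> list[float]:
    """Equally spaced azimuths (in degrees) across the frontal
    horizontal plane, spanning -90 (hard left) to +90 (hard right)."""
    if n_talkers < 2:
        return [0.0]
    step = 180.0 / (n_talkers - 1)
    return [-90.0 + i * step for i in range(n_talkers)]

# 2 talkers -> [-90.0, 90.0]               (180 degrees apart)
# 3 talkers -> [-90.0, 0.0, 90.0]          (90 degrees apart)
# 4 talkers -> [-90.0, -30.0, 30.0, 90.0]  (60 degrees apart)
```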


Figure 1. The sound source spatial positioning, in the user's frontal horizontal plane, for two, three and four talkers.

We wanted to validate whether these voice differences are also valuable for longer speech signals. Thus, we selected three configurations: all speech sources with the same voice; a larger separation in pitch and formant frequencies; and a smaller separation in pitch and formant frequencies. Our results showed that identifying the relevant source is a straightforward task when listening to 2 talkers and, for most participants, also with 3. Moreover, both 2 and 3 simultaneous sources may be used to understand the relevant source's content, but this depends on how much information needs to be understood and on user characteristics (working memory and attention). Unlike in related work, differences in voice characteristics did not have a significant effect on either speech identification or intelligibility. However, participants preferred the use of concurrent talkers with different voices.
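The three voice configurations could be represented as per-talker offsets of the two parameters named above. This is a minimal sketch under our own assumptions: the data structure, field names and numeric offsets are illustrative placeholders, not the values used in the study [11].

```python
from dataclasses import dataclass

@dataclass
class VoiceConfig:
    pitch_shift_semitones: float  # change in glottal pulse rate (pitch)
    vtl_scale: float              # vocal tract length scaling (formants)

# Hypothetical per-talker settings for the three conditions compared:
SAME_VOICE = [VoiceConfig(0.0, 1.00) for _ in range(4)]
LARGE_SEPARATION = [VoiceConfig(s, v) for s, v in
                    [(0.0, 1.00), (8.0, 0.85), (-8.0, 1.15), (4.0, 0.90)]]
SMALL_SEPARATION = [VoiceConfig(s, v) for s, v in
                    [(0.0, 1.00), (2.0, 0.97), (-2.0, 1.03), (1.0, 0.99)]]
```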

Sighted, Early and Late Blind
Our previous study did not reveal statistically significant differences between early- and late-blind participants. We are currently running the same study with sighted people, which may point towards the use of simultaneous sound sources for this population in eyes-free interaction. This comparison targets the frequent need to consume information even when it is not possible to look at a screen or at paper.

Comparison with Faster Speech Rates
We will compare the use of concurrent speech sources with the use of faster speech rates, trying to understand both their effectiveness and their effect on cognitive load and fatigue. Moreover, we will include hybrid solutions that combine the two. Finally, we will investigate the effect of learning and experience on the use of concurrent speech in scanning scenarios.

Interacting with Concurrent Speech
Our first study guided us towards the use of multiple sound sources. In line with these results, interaction with one sequential information channel may be replaced with interaction with 2 or 3 concurrent channels in fast-reading scenarios. Current solutions rely mostly on head-tracking or pointing to select the current focus. We will investigate the best ways to control concurrent speech, which may include selecting or pausing a particular source, reducing or increasing the number of sources, or even controlling their volumes.
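The control operations listed above could be grouped behind a small mixer abstraction. The sketch below is purely illustrative: the class and method names are our own assumptions, not an existing library or the authors' prototype, and it only manipulates per-source state (focus, pause, volume) rather than doing any audio rendering.

```python
class ConcurrentSpeechMixer:
    """Minimal sketch of controls for concurrent speech sources:
    focusing one source, pausing/resuming sources, and adjusting volumes."""

    def __init__(self, source_ids):
        # source_ids: identifiers of the concurrent speech channels
        self.volumes = {sid: 1.0 for sid in source_ids}
        self.paused = set()

    def focus(self, sid, ducking=0.3):
        """Bring one source to the foreground by attenuating the others."""
        for other in self.volumes:
            self.volumes[other] = 1.0 if other == sid else ducking

    def pause(self, sid):
        self.paused.add(sid)

    def resume(self, sid):
        self.paused.discard(sid)

    def set_volume(self, sid, level):
        """Clamp the requested level to the [0, 1] range."""
        self.volumes[sid] = max(0.0, min(1.0, level))
```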

Conclusions
At this stage of my research, it was very important to obtain feedback from accessibility experts and peers, and the doctoral consortium helped me validate and improve our future research methodology. Our first results suggested that blind users are able to identify and understand a relevant speech signal when listening to 2 or 3 concurrent voices. In future work, we will continue to explore the perception of concurrent speech and will try to find ways to interact with simultaneous sources.

Acknowledgments
We thank the Fundação Raquel e Martin Sain, Carlos Bastardo and all the participants in our studies. This work was supported by national funds through FCT - Fundação para a Ciência e Tecnologia, under individual grant SFRH/BD/66550/2009.

References
1. Ahmed, F. et al. Accessible skimming: faster screen reading of web pages. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST) (2012).
2. Best, V. et al. The effect of auditory spatial layout in a divided attention task. In Proceedings of ICAD (2005).
3. Bigham, J. et al. WebinSitu: a comparative analysis of blind and sighted browsing behavior. In Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) (2007).
4. Borodin, Y. et al. More than meets the eye: a survey of screen-reader browsing strategies. In Proceedings of W4A (2010).
5. Bregman, A. Auditory Scene Analysis: The Perceptual Organization of Sound. (1990).
6. Brungart, D. S., and Simpson, B. D. Optimizing the spatial configuration of a seven-talker speech display. ACM Transactions on Applied Perception 2, 4 (2005).
7. Burton, H. Visual cortex activity in early and late blind people. The Journal of Neuroscience 23, 10 (2003).
8. Cherry, E. Some experiments on the recognition of speech, with one and with two ears. The Journal of the Acoustical Society of America (1953).
9. Darwin, C. J. et al. Effects of fundamental frequency and vocal-tract length changes on attention to one of two simultaneous talkers. Journal of the Acoustical Society of America 114, 5 (2003).
10. Guerreiro, J., and Gonçalves, D. Blind people interacting with mobile social applications. Mobile Accessibility Workshop at CHI (2013).
11. Guerreiro, J., and Gonçalves, D. Text-to-Speeches: evaluating the perception of concurrent speech by blind people. In Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) (2014).
12. Hugdahl, K. et al. Blind individuals show enhanced perceptual and attentional sensitivity for identification of speech sounds. Cognitive Brain Research 19, 1 (2004).
13. Sodnik, J. et al. Enhanced synthesized text reader for visually impaired users. In Proceedings of ACHI (2010).
14. Vigo, M., and Harper, S. Coping tactics employed by visually disabled users on the web. International Journal of Human-Computer Studies 71, 11 (2013), 1013-1025.


About the Author: João Guerreiro is a PhD student at Instituto Superior Técnico (IST), University of Lisbon (UL), and a research assistant in the Visualization and Intelligent Multimodal Interfaces (VIMMI) group at INESC-ID. His main research interests include human-computer interaction and accessibility. In particular, he is currently exploring the use of concurrent speech to enhance blind people's efficiency when scanning for relevant digital content.
