
Using Non-Speech Sound to Overcome Information Overload

Stephen A. Brewster
Glasgow Interactive Systems Group
Department of Computing Science
The University of Glasgow
Glasgow, G12 8QQ, UK
Tel: +44 (0)141 330 4966
Fax: +44 (0)141 330 4913
[email protected]
http://www.dcs.gla.ac.uk/~stephen/

With ever increasing amounts of visual information to take in when interacting with computers, users can become overloaded. One reason is that computers communicate solely by graphical output. This paper suggests that using non-speech sound output to enhance the graphical display of information can overcome overload. The question is how to integrate the display of sound and graphics to capitalise on the advantages each offers. The approach described here is to integrate sound into the basic components of the human-computer interface. Two experiments are described in which non-speech sounds were added to buttons and scrollbars. Results showed that sound improved usability by increasing performance and reducing the time taken to recover from errors. Subjective workload measures also showed a significant reduction. Results from this work show that the integrated display of graphical and auditory information can overcome information overload.

Keywords: Sonically-enhanced widgets, auditory interfaces, sonification, buttons, scrollbars.

INTRODUCTION
With ever increasing amounts of visual information to take in when interacting with computers, users can become overloaded. What causes this problem? In our everyday lives we are able to deal with an enormous amount of complex information of many different types without difficulty. One reason for the problem is that computers communicate solely by graphical output, putting a heavy burden on our visual sense, which may become overloaded. In the real world we have five senses, and the combination of these avoids any one sense becoming overloaded. The next step forward in display design is to allow the use of these other senses when interacting with a computer. Such multimodal interfaces would allow greater and more natural communication between the computer and the user. They would also allow the user to employ appropriate sensory modalities to solve a problem, rather than just using one modality (usually vision) to solve all problems.

This paper suggests the use of non-speech sound output to enhance the graphical display of information at the human-computer interface. There is a growing body of research indicating that the addition of non-speech sounds to human-computer interfaces can improve performance and increase usability [4, 6, 15]. Our visual and auditory senses work well together: the visual sense gives us detailed data about a small area of focus, whereas the auditory sense provides data from all around. Users can be informed of important events even if they are not looking at the right position on the display (or not looking at the display at all). This is particularly important for large-screen, high-resolution, multiple-monitor interfaces. The question is how to integrate the display of sound and graphics to capitalise on the advantages each offers.

The motivation for this research is that users' eyes cannot do everything. As mentioned, the visual sense has a small area of high acuity. In highly complex graphical displays users must concentrate on one part of the display to perceive the graphical feedback, so feedback from another part may be missed. This becomes very important in situations where users must notice and deal with large amounts of dynamic data. For example, imagine you are working on your computer writing a report and are monitoring several on-going tasks such as a compilation, a print job and files downloading over the Internet. The word-processing task will take up all of your visual attention because you must concentrate on what you are writing. To check whether your printout is done, the compilation has finished or the files have downloaded, you must move your visual attention away from the report and look at these other tasks. This causes the interface to intrude into the task you are trying to perform. It is suggested here that some information should be presented in sound. This would allow you to continue looking at the report but hear information on the other tasks that would otherwise not be seen (or would not be seen unless you moved your visual attention away from the area of interest, so interrupting the task you are trying to perform). Sound and graphics will be used together to exploit the advantages of each. In the above example, you could be looking at the report you are typing but hear progress information on the other tasks in sound. To find out how the file download was progressing, you could just listen to the download sound without moving your visual attention from the writing task.

Current interfaces depend heavily on graphical output. One reason for this is that when current interaction techniques (such as buttons, scrollbars, etc.) were developed, visual output was the only communication medium available. However, technology has progressed, and now almost all computer manufacturers include sophisticated sound hardware in their systems. This hardware goes largely unused in daily interactions with machines (the sounds are really only used to any extent in computer games). This research will take advantage of this available hardware and make it a central part of users' everyday interactions to improve usability.

Even though sound has benefits to offer, it is not clear how best to use it in combination with graphical output. The use of sound in computer displays is still in its infancy, and there is little research to show the best ways of combining these different media. This means sounds are sometimes added in ad hoc and ineffective ways by individual designers [1, 21]. The approach described here is to integrate sound in a structured way into the basic components of the interface, to improve the display of information from the bottom up.

This paper describes two experiments where non-speech sounds were added to buttons and scrollbars to correct usability errors that are due to the system requiring the user to look at more than one place at a time.

SOUNDS USED
The non-speech sounds used for this investigation were based around structured audio messages called earcons [5, 6, 24]. Earcons are abstract, synthetic tones that can be used in structured combinations to create sound messages to represent parts of an interface. Detailed investigations of earcons by Brewster, Wright & Edwards [9] showed that they are an effective means of communicating information in sound. The sounds were designed using the earcon construction guidelines proposed by Brewster et al. [11].

Earcons are constructed from motives: short rhythmic sequences that can be combined in different ways. The simplest method of combination is concatenation, which produces compound earcons. By using more complex manipulations of the parameters of sound, hierarchical earcons can be created [5], which allow the representation of hierarchical structures.
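As a concrete illustration of this construction, the sketch below (in Python, not from the original paper) models a motive as a short sequence of pitched, timed notes and shows the two combination methods just described: concatenation for compound earcons, and manipulation of sound parameters (register and timbre here) for hierarchical earcons. All names, motives and note values are invented for the example.

```python
from dataclasses import dataclass, replace
from typing import List, Tuple

# A note is (MIDI pitch, duration in beats); a motive is a short
# rhythmic sequence of notes with a fixed timbre (MIDI program number).
@dataclass(frozen=True)
class Motive:
    notes: Tuple[Tuple[int, float], ...]
    timbre: int = 0  # General MIDI program number

# Two example motives (invented for illustration).
OPEN_MOTIVE = Motive(notes=((60, 0.25), (64, 0.25), (67, 0.5)))
FILE_MOTIVE = Motive(notes=((62, 0.5), (62, 0.25)))

def compound(*motives: Motive) -> List[Motive]:
    """Compound earcon: simple concatenation of motives."""
    return list(motives)

def child(parent: Motive, semitones: int, timbre: int) -> Motive:
    """Hierarchical earcon node: inherit the parent's rhythm but
    change sound parameters (register and timbre here)."""
    shifted = tuple((pitch + semitones, dur) for pitch, dur in parent.notes)
    return replace(parent, notes=shifted, timbre=timbre)

# A compound "open file" earcon; a child node one level down a hierarchy.
open_file = compound(OPEN_MOTIVE, FILE_MOTIVE)
child_node = child(OPEN_MOTIVE, semitones=-12, timbre=19)  # organ, octave down
```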
All the sounds used in the experiments were played on a Roland D110 multi-timbral sound synthesiser. The sounds were controlled by an Apple Macintosh via MIDI through a Yamaha DMP 11 digital mixer and presented to participants by loudspeakers. A web demo of all of the earcons described in the paper will be provided.
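The playback chain above is period studio hardware, but the same kind of control, note events sent over MIDI, can be reproduced on a modern system. The following is a minimal sketch assuming the third-party Python library mido and a synthesiser on the system's default MIDI output port; it is not the paper's original Macintosh/D110 setup.

```python
import time
import mido  # third-party MIDI library; an assumption, not the paper's setup

def play_motive(notes, timbre=0, channel=0, bpm=120):
    """Send a motive's notes to the default MIDI output as note events."""
    beat = 60.0 / bpm
    with mido.open_output() as port:  # default system MIDI port
        port.send(mido.Message('program_change', program=timbre, channel=channel))
        for pitch, duration in notes:
            port.send(mido.Message('note_on', note=pitch, velocity=80, channel=channel))
            time.sleep(duration * beat)
            port.send(mido.Message('note_off', note=pitch, velocity=0, channel=channel))

# Play the example opening motive from the previous sketch.
play_motive([(60, 0.25), (64, 0.25), (67, 0.5)])
```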
TESTING FRAMEWORK
In order to test the sonically-enhanced widgets, an experimental testing framework was created. This allowed the widgets to be tested in a simple and consistent manner; the same types of measures and designs were used for each.

A two-condition, within-subjects design was used to test both of the widgets. In one condition the standard graphical widget was tested; in the other, the sonically-enhanced widget. The order of presentation was counterbalanced to evenly distribute learning effects from Condition 1 to Condition 2. Table 1 shows the format of the experiment (progressing from left to right). After the test of each condition, participants were presented with workload charts which they had to fill in (this is described in detail below). Instructions were read from prepared scripts.

Participants        Condition 1                                   Condition 2
Six participants    Sonically-enhanced widget: train & test  ->   Visual widget: train & test  ->
                    workload test                                 workload test
Six participants    Visual widget: train & test  ->               Sonically-enhanced widget: train & test  ->
                    workload test                                 workload test

Table 1: Format of the experiments.

Measures
In order to get a full range of quantitative and qualitative results, time, error rates and subjective workload measures (see below) were used as part of the framework. Time and error-rate reductions would show quantitative improvements, and workload differences would show qualitative differences. This gives a balanced view of the usability of a system [3].

Hart and Wickens ([17], p. 258) define workload "...as the effort invested by the human operator into task performance; workload arises from the interaction between a particular task and the performer". The NASA Human Performance Research Group [20] analysed workload into six different factors: mental demand, physical demand, time pressure, effort expended, performance level achieved and frustration experienced. The NASA-Task Load Index (TLX) [16] is a set of six rating scales and was used for estimating these subjective workload factors in the experiments described here.

The basic six factors were used as described, but a seventh factor was added: annoyance. Annoyance is often cited as a reason for not using sound for display, as it is argued that the continued presentation of sound would annoy the user. By adding it as a specific factor in the usability assessment, it was possible to find out whether participants felt that sonic feedback was an annoyance. Participants were also asked to indicate an overall preference: which of the two interfaces they felt made the task easiest.
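As an illustration of how such a modified workload instrument might be scored, the sketch below takes ratings on the six TLX factors plus the added annoyance factor and averages them. Unweighted averaging of raw ratings is an assumption made here for simplicity; the full TLX procedure [16] weights the six factors by pairwise comparisons.

```python
# Factors: the six NASA-TLX scales plus the added annoyance factor.
FACTORS = ("mental", "physical", "time_pressure", "effort",
           "performance", "frustration", "annoyance")

def overall_workload(ratings: dict) -> float:
    """Unweighted ('raw') workload score: the mean of the factor ratings.
    Each rating is assumed to lie on a 0-20 scale, oriented so that a
    higher value means more workload (performance rated good-to-poor).
    Raw averaging is an assumption; full TLX weights the factors by
    pairwise comparisons [16]."""
    missing = set(FACTORS) - ratings.keys()
    if missing:
        raise ValueError(f"missing ratings: {sorted(missing)}")
    return sum(ratings[f] for f in FACTORS) / len(FACTORS)

# Example: one (hypothetical) participant's ratings for one condition.
print(overall_workload({
    "mental": 9, "physical": 3, "time_pressure": 8, "effort": 10,
    "performance": 6, "frustration": 4, "annoyance": 5,
}))
```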