
Brewster, S.A., Wright, P.C. and Edwards, A.D.N. (1994) The design and evaluation of an auditory-enhanced scrollbar. In: Adelson, B., Dumais, S.T. and Olson, J.D. (Eds.), Conference on Human Factors in Computing Systems, 24-28 April 1994, Boston, Massachusetts, USA, pp. 173-179. http://eprints.gla.ac.uk/3255/

The Design and Evaluation of an Auditory-Enhanced ScrollBar

Stephen A. Brewster, Peter C. Wright and Alistair D. N. Edwards
Department of Computer Science, University of York, Heslington, York, YO1 5DD, UK.
Tel.: 0904 432765
[email protected]

ABSTRACT
A structured method is described for the analysis of interactions to identify situations where hidden information may exist and where non-speech sound might be used to overcome the associated problems. Interactions are considered in terms of events, status and modes to find any hidden information. This is then categorised in terms of the feedback needed to present it. An auditory-enhanced scrollbar, based on the method described, was then experimentally tested. Timing and error rates were used along with subjective measures of workload. Results from the experiment show a significant reduction in time to complete one task, a decrease in the mental effort required and an overall preference for the auditory-enhanced scrollbar.

KEYWORDS
Auditory interfaces, multi-modal interfaces, earcons, sonification, auditory-enhanced widgets

INTRODUCTION
It has been shown that non-speech sounds can be designed to communicate complex information at the human-computer interface [3, 4, 8]. Unless these sounds supply information that users need to know they will serve no real purpose; they will become an annoying intrusion that users will want to turn off. Therefore an important question that must be answered is: Where should non-speech sound be used to best effect in the graphical human-computer interface?

The combination of graphical and auditory information at the interface is a natural step. In everyday life both senses combine to give complementary information about the world; they are interdependent. The visual system gives us detailed data about a small area of focus whereas the auditory system provides general data from all around, alerting us to things we cannot see. The combination of these two senses gives much of the information we need about our environment. These advantages can be brought to the human-computer interface. Whilst directing our visual attention to one specific task, such as editing a document, we can still monitor the state of other tasks on our machine. At the present time almost all information presented goes through the visual channel. This means information can be missed because of visual overload or because the user was not looking in the right place at the right time. An interface that integrates information output to both senses can capitalise on the interdependence between them and present information in the most efficient way possible.

The work reported here is part of a research project looking at the best ways to integrate auditory and graphical information at the interface. The sounds used in this paper are based around structured audio messages called Earcons, first put forward by Blattner, Sumikawa & Greenberg [3] and experimentally tested by Brewster, Wright & Edwards [4]. Earcons are abstract, synthetic tones that can be used in structured combinations to create sound messages to represent parts of an interface.
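To make "structured combinations" concrete, the following sketch synthesises two related earcons from one short motive, varying only the pitch contour. It is a minimal illustration of the general technique, not the implementation used in this work; the sample rate, the pitches and the scrollbar mapping suggested in the comments are assumptions of the example.

# Illustrative sketch only, not the authors' implementation: two
# related "earcons" built from one rhythmic motive, differing only
# in pitch contour.
import math
import struct
import wave

RATE = 8000  # samples per second

def tone(freq_hz, dur_s, amp=0.3):
    # One sine-wave note as a list of float samples.
    n = int(RATE * dur_s)
    return [amp * math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

def earcon(pitches_hz, note_dur_s=0.15, gap_s=0.05):
    # A motive: a fixed rhythm played over a pitch sequence. Varying
    # pitch, rhythm or timbre yields a family of related earcons,
    # which is what gives them their structure.
    samples = []
    for p in pitches_hz:
        samples += tone(p, note_dur_s) + [0.0] * int(RATE * gap_s)
    return samples

# Hypothetical mapping for illustration: a rising motive for
# "scroll down one page", a falling motive for "scroll up one page".
down = earcon([523.0, 659.0, 784.0])  # C5 E5 G5, rising
up = earcon([784.0, 659.0, 523.0])    # G5 E5 C5, falling

with wave.open("earcon_demo.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(RATE)
    for s in down + up:
        f.writeframes(struct.pack("<h", int(s * 32767)))

Because the two sounds share a motive, a listener who has learned one can recognise the other as belonging to the same family; this structured relatedness is what distinguishes earcons from arbitrary beeps.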
One question that might be asked is: Why use sound to present the extra information? A graphical method could be used instead. The drawback with this is that it puts an even greater load on the visual channel. Furthermore, sound has certain advantages. For example, it can be heard from 360° all around, it does not disrupt the user's visual attention and it can alert the user to changes very effectively. It is for these reasons that we suggest that sound be used to enhance the user interface.

How should the information be apportioned to each of the senses? Sounds can do more than simply indicate errors or supply redundant feedback for what is already available on the graphical display. They should be used to present information that is not currently displayed (give more information) or present existing information in a more effective way so that users can deal with it more efficiently. Alty [1] has begun considering this problem in process control systems.

Sound is often used in an ad hoc way by individual designers. In many cases no attempt is made to evaluate the impact of sound. Gaver [8] used a more principled approach in the SonicFinder, which uses sounds in ways suggested by the natural environment. For example, selecting an item gave a tapping sound and dragging gave a scraping sound. However, in an interface there are many situations where there are no natural equivalents in the everyday world. The SonicFinder used sound redundantly with graphics. This proved to be very effective but we suggest that sound can do more.

A method is needed to find situations in the interface where sound might be useful, but currently there is no such method. It should provide for a clear, consistent and effective use of non-speech audio across the interface. Designers will then have a technique for identifying where sound would be useful and for using it in a structured way, rather than it being an ad hoc decision.

This paper describes a structured, informal method of analysing the interface to find hidden information that can cause user errors. It then describes an experiment to test an auditory-enhanced scrollbar designed, using the analysis technique, to overcome the problems brought to light. The aim of the experiment was to justify the model and see if auditory-enhanced widgets were effective.

Workload Assessment
Wright & Monk [14] have argued that quantitative measures of usability such as error rates and performance times do not provide a complete picture of the usability problems of an interface. They argue that users may perform tasks well and quickly and yet find the system frustrating to use, requiring more effort to complete a task than they would expect. This dissociation between behavioural measures and subjective experience has also been addressed in studies of workload. Hart and Wickens [11], for example, define workload "...as the effort invested by the human operator into task performance; workload arises from the interaction between a particular task and the performer". Their basic assumption is that cognitive resources are required for a task and there is a finite amount of these. As a task becomes more difficult, the same level of performance can be achieved, but only by the investment of more resources. We used a workload test as part of our usability evaluation of the auditory-enhanced scrollbar discussed in this paper. It provides a more rounded and sensitive view of usability than time and error analysis alone.
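The workload test itself is described with the experiment later in the paper. As a rough illustration of how such subjective ratings are typically reduced to a single score, the sketch below averages ratings on six NASA-TLX-style subscales (Hart being a co-author of the TLX). The subscale names, the 0-20 range and the unweighted mean are assumptions of this example, not details taken from the paper.

# Hedged sketch: one common way of reducing subjective workload
# ratings to an overall score. Scale and subscale names are
# assumptions of this example.
from statistics import mean

SUBSCALES = ["mental demand", "physical demand", "time pressure",
             "effort expended", "performance level", "frustration"]

def overall_workload(ratings):
    # Unweighted mean of six subjective ratings, each on a 0-20 scale.
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError("missing ratings: %s" % missing)
    return mean(ratings[s] for s in SUBSCALES)

# Hypothetical ratings from one participant for one scrollbar condition.
visual_only = {"mental demand": 15, "physical demand": 6,
               "time pressure": 12, "effort expended": 14,
               "performance level": 9, "frustration": 11}
print("overall workload: %.1f / 20" % overall_workload(visual_only))

Comparing such scores across the visual-only and auditory-enhanced conditions is what allows a dissociation between performance and experienced effort to be detected.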
WHERE TO USE SOUND AT THE INTERFACE
There are potentially many ways that sound could be used in an interface. This paper suggests using it to present information that is hidden from the user. This is a similar approach to that taken by Blattner, Papp & Glinert [2] when adding sound to maps. Here information is hidden because of visual clutter. Information may be hidden for a number of reasons:

Information is not available: It may not be available on the display due to hardware limitations such as lack of CPU power or screen size.

Modes: A mode is a state within a system in which a certain interpretation is placed on information (for example, in one mode characters typed into an editor may appear on the screen, in another they may be interpreted as commands). It is often the case that the details of this interpretation are hidden from the user and mode errors result [13].

Dix et al. [5] suggest two principles that, if satisfied, could reduce the number of errors due to hidden information: observability and predictability. The user should be able to predict what the effects of their commands will be by observing the current status of the system. If information about the system state is hidden, and not observable, then errors will result. The above points show that hidden information is a problem, and we suggest using sound to overcome it. A method is needed to find out where such hidden information exists.

Event, Status and Mode Analysis
Analysing interactions with event and status information was first put forward by Dix [6]. This has been extended to include explicit information about modes, one of the most important areas of hidden information in the interface. A description of each of the types of information follows (an illustrative sketch of these distinctions in code is given after this section).

Event: A discrete point in time. It marks an important happening in the system. An event can be initiated by the user (a mouse click) or the system (mail arriving). There can be input events (mouse clicks) or output events (a beep indicating an error).

Status: This is information about the state of the system that is perceivable by the user. If there is information about the state that is not displayed as status information then it will be hidden from the user. Status refers to something that always has a value, such as an indicator. It is relatively static and continues over time. The rendering is how the information about the state of the system is displayed. Thus status information can be rendered in different ways, for example graphically, through sound or a combination of both.

Mode: This maps the events onto operations on the system state. Different modes can make the same events have different effects. If the mode is not displayed then the user may not know the mapping.

Problems occur when:
• Events are not saliently signalled to the user, for example
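To show how the event, status and mode distinctions might be captured concretely, here is a small hedged sketch in code: the same input event is mapped to different operations by different modes, and status that has no rendering at all is hidden information. All class names, events and bindings are invented for illustration; the paper presents the analysis informally, not as code.

# Hedged sketch of the event/status/mode analysis as a data model.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, Dict, List

class Event(Enum):
    # Discrete happenings, initiated by the user or the system.
    MOUSE_CLICK = auto()
    KEY_PRESS = auto()
    MAIL_ARRIVED = auto()

@dataclass
class Status:
    # State that persists over time and can be rendered in several
    # ways; state omitted from every rendering is hidden from the user.
    name: str
    value: object
    renderings: List[str] = field(default_factory=list)  # e.g. "graphic", "sound"

    def is_hidden(self):
        return not self.renderings

@dataclass
class Mode:
    # A mode maps events onto operations on the system state; the same
    # event can therefore have different effects in different modes.
    name: str
    bindings: Dict[Event, Callable[[], str]]

    def handle(self, event):
        op = self.bindings.get(event)
        return op() if op else "%s ignored in mode %r" % (event.name, self.name)

# The same key press means different things in different modes, which
# is exactly the situation in which mode errors arise if the current
# mode is not displayed.
insert_mode = Mode("insert", {Event.KEY_PRESS: lambda: "character appears on screen"})
command_mode = Mode("command", {Event.KEY_PRESS: lambda: "interpreted as a command"})
print(insert_mode.handle(Event.KEY_PRESS))
print(command_mode.handle(Event.KEY_PRESS))

# A status value with no rendering at all is hidden information.
position = Status("scrollbar position", 0.42, renderings=[])
print(position.name, "hidden?", position.is_hidden())

Adding "sound" to a status object's renderings, or binding an earcon to an event that currently has no salient signal, is the kind of design move the analysis is intended to identify.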