Using Non-Speech Sound to Overcome Information Overload


Stephen A. Brewster
Glasgow Interactive Systems Group, Department of Computing Science
The University of Glasgow, Glasgow, G12 8QQ, UK
Tel: +44 (0)141 330 4966, Fax: +44 (0)141 330 4913
[email protected]
http://www.dcs.gla.ac.uk/~stephen/

With ever increasing amounts of visual information to take in when interacting with computers, users can become overloaded. One reason is that computers communicate solely by graphical output. This paper suggests that the use of non-speech sound output to enhance the graphical display of information can overcome overload. The question is how to integrate the display of sound and graphics to capitalise on the advantages each offers. The approach described here is to integrate sound into the basic components of the human-computer interface. Two experiments are described where non-speech sounds were added to buttons and scrollbars. Results showed that sound improved usability by increasing performance and reducing the time to recover from errors. Subjective workload measures also showed a significant reduction. Results from this work show that the integrated display of graphical and auditory information can overcome information overload.

Keywords: Sonically-enhanced widgets, auditory interfaces, sonification, buttons, scrollbars.

INTRODUCTION
With ever increasing amounts of visual information to take in when interacting with computers, users can become overloaded. What causes this problem? In our everyday lives we are able to deal with an enormous amount of complex information of many different types without difficulty. One reason for the problem is that computers communicate solely by graphical output, putting a heavy burden on our visual sense, which may become overloaded. In the real world we have five senses, and the combination of these avoids any one sense becoming overloaded. The next step forward in display design is to allow the use of these other senses when interacting with a computer. Such multimodal interfaces would allow greater and more natural communication between the computer and the user. They also allow the user to employ appropriate sensory modalities to solve a problem, rather than just using one modality (usually vision) to solve all problems.

This paper suggests the use of non-speech sound output to enhance the graphical display of information at the human-computer interface. There is a growing body of research which indicates that the addition of non-speech sounds to human-computer interfaces can improve performance and increase usability [4, 6, 15]. Our visual and auditory senses work well together: the visual sense gives us detailed data about a small area of focus, whereas the auditory sense provides data from all around. Users can be informed of important events even if they are not looking at the right position on the display (or not looking at the display at all). This is particularly important for large-screen, high-resolution, multiple-monitor interfaces. The question is how to integrate the display of sound and graphics to capitalise on the advantages each offers.

The motivation for this research is that users' eyes cannot do everything. As mentioned, the visual sense has a small area of high acuity. In highly complex graphical displays users must concentrate on one part of the display to perceive the graphical feedback, so that feedback from another part may be missed. This becomes very important in situations where users must notice and deal with large amounts of dynamic data. For example, imagine you are working on your computer writing a report and are monitoring several on-going tasks such as a compilation, a print job and downloading files over the Internet. The word-processing task will take up all of your visual attention because you must concentrate on what you are writing. In order to check whether your printout is done, the compilation has finished or the files have downloaded, you must move your visual attention away from the report and look at these other tasks. This causes the interface to intrude into the task you are trying to perform. It is suggested here that some information should be presented in sound. This would allow you to continue looking at the report but hear information on the other tasks that would otherwise not be seen (or would not be seen unless you moved your visual attention away from the area of interest, so interrupting the task you are trying to perform). Sound and graphics will be used together to exploit the advantages of each. In the above example, you could be looking at the report you are typing but hear progress information on the other tasks in sound. To find out how the file download was progressing you could just listen to the download sound without moving your visual attention from the writing task.

Current interfaces depend heavily on graphical output. One reason for this is that when current interaction techniques (such as buttons, scrollbars, etc.) were developed, visual output was the only communication medium available. However, technology has progressed and now almost all computer manufacturers include sophisticated sound hardware in their systems. This hardware is unused in daily interactions with machines (the sounds are really only used to any extent in computer games). This research will take advantage of this available hardware and make it a central part of users' everyday interactions to improve usability.

Even though sound has benefits to offer, it is not clear how best to use it in combination with graphical output. The use of sound in computer displays is still in its infancy; there is little research to show the best ways of combining these different media. This means sounds are sometimes added in ad hoc and ineffective ways by individual designers [1, 21]. The approach described here is to integrate sound in a structured way into the basic components of the interface to improve the display of information from the bottom up. This paper describes two experiments where non-speech sounds were added to buttons and scrollbars to correct usability errors that are due to the system requiring the user to look at more than one place at a time.

SOUNDS USED
The non-speech sounds used for this investigation were based around structured audio messages called earcons [5, 6, 24]. Earcons are abstract, synthetic tones that can be used in structured combinations to create sound messages that represent parts of an interface. Detailed investigations of earcons by Brewster, Wright & Edwards [9] showed that they are an effective means of communicating information in sound. The sounds were designed using the earcon construction guidelines proposed by Brewster et al. [11].

Earcons are constructed from motives. These are short rhythmic sequences that can be combined in different ways. The simplest method of combination is concatenation, which produces compound earcons. By using more complex manipulations of the parameters of sound, hierarchical earcons can be created [5], which allow the representation of hierarchical structures.

All the sounds used in the experiments were played on a Roland D110 multi-timbral sound synthesiser. The sounds were controlled by an Apple Macintosh via MIDI through a Yamaha DMP 11 digital mixer and presented to participants by loudspeakers. A web demo of all of the earcons described in the paper will be provided.
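To make the motive-based construction above concrete, here is a minimal sketch in Python. The note representation, the motive contents, and the "action + object" pairing are illustrative assumptions for this sketch only; the earcons actually used in the experiments were designed to the guidelines in [11] and played over MIDI hardware.

```python
# Illustrative sketch of compound-earcon construction. The motives, pitches
# and durations below are hypothetical examples, not the earcons used in
# the paper's experiments.

from dataclasses import dataclass

@dataclass
class Note:
    pitch: int    # MIDI note number
    beats: float  # duration in beats

# A motive is a short rhythmic sequence of notes. Two hypothetical motives:
# one identifying an interface object, one identifying an action.
FILE_MOTIVE = [Note(60, 0.25), Note(64, 0.25), Note(67, 0.5)]
OPEN_MOTIVE = [Note(72, 0.125), Note(72, 0.125), Note(76, 0.25)]

def compound_earcon(*motives):
    """The simplest combination method: concatenate motives in sequence."""
    earcon = []
    for motive in motives:
        earcon.extend(motive)
    return earcon

# An "open file" message: action motive followed by object motive.
open_file = compound_earcon(OPEN_MOTIVE, FILE_MOTIVE)
print(len(open_file))      # 6 notes
print(open_file[0].pitch)  # 72
```

Hierarchical earcons would go further than concatenation, varying parameters such as timbre, register and rhythm at each level of a hierarchy; that manipulation is not shown here.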
TESTING FRAMEWORK
In order to test the sonically-enhanced widgets, an experimental testing framework was created. This allowed the widgets to be tested in a simple and consistent manner; the same types of measures and designs were used for each.

A two-condition, within-subjects design was used to test both of the widgets. In one condition the standard graphical widget was tested; in the other, the sonically-enhanced widget. The order of presentation was counterbalanced to evenly distribute learning effects from Condition 1 to Condition 2. Table 1 shows the format of the experiment (progressing from left to right). After the test of each condition participants were presented with workload charts which they had to fill in (this is described in detail below). Instructions were read from prepared scripts.

Participants       Condition 1                          Condition 2
Six participants   Visual widget:                       Sonically-enhanced widget:
                   train & test, then workload charts   train & test, then workload charts
Six participants   Sonically-enhanced widget:           Visual widget:
                   train & test, then workload charts   train & test, then workload charts

Table 1: Format of the experiments.

Measures
In order to get a full range of quantitative and qualitative results, time, error rates and subjective workload measures (see below) were used as part of the framework. Time and error rate reductions would show quantitative improvements, and workload differences would show qualitative differences. This gives a balanced view of the usability of a system [3].

Hart and Wickens ([17], p. 258) define workload "...as the effort invested by the human operator into task performance; workload arises from the interaction between a particular task and the performer". The NASA Human Performance Research Group [20] analysed workload into six different factors: mental demand, physical demand, time pressure, effort expended, performance level achieved and frustration experienced. The NASA-Task Load Index (TLX) [16] is a set of six rating scales and was used for estimating these subjective workload factors in the experiments described here.

The basic six factors were used as described, but a seventh factor was added: annoyance. This is often cited as a reason for not using sound for display, as it is argued that continued presentation of sound would be an annoyance for the user. By adding this as a specific factor in the usability assessment, it would be possible to find out if participants felt that sonic feedback was an annoyance. Participants were also asked to indicate overall preference: which of the two interfaces they felt made the task easiest.
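The counterbalanced within-subjects assignment described in the testing framework can be sketched as follows. The participant identifiers and condition labels are placeholders for this sketch; the paper only states that presentation order was counterbalanced across the twelve participants.

```python
# Sketch of a counterbalanced, two-condition within-subjects design:
# half the participants take the visual widget first, half the
# sonically-enhanced widget first, so learning effects are distributed
# evenly across conditions. Participant IDs are placeholders.

CONDITIONS = ("visual widget", "sonically-enhanced widget")

def assign_orders(participant_ids):
    """Alternate the condition order across participants."""
    orders = {}
    for i, pid in enumerate(participant_ids):
        first = CONDITIONS[i % 2]
        second = CONDITIONS[(i + 1) % 2]
        orders[pid] = (first, second)
    return orders

orders = assign_orders(range(12))  # twelve participants, six per order
first_visual = sum(1 for o in orders.values() if o[0] == "visual widget")
print(first_visual)  # 6
```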
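The modified workload assessment above — the six NASA-TLX factors plus the added annoyance factor — can be sketched in code. The unweighted averaging and the 0–20 scale range are assumptions of this sketch; the paper does not specify here how the scale ratings were combined, and the standard TLX procedure also supports weighted scoring.

```python
# Sketch of scoring the modified NASA-TLX described above: the six standard
# factors plus the Annoyance factor added in this work. Averaging the raw
# ratings equally is an assumption, not the paper's stated procedure.

FACTORS = [
    "mental demand", "physical demand", "time pressure",
    "effort expended", "performance level", "frustration",
    "annoyance",  # seventh factor added in this work
]

def overall_workload(ratings: dict) -> float:
    """Average the seven scale ratings (each assumed to be 0-20)."""
    if set(ratings) != set(FACTORS):
        raise ValueError("ratings must cover exactly the seven factors")
    return sum(ratings.values()) / len(FACTORS)

# Hypothetical ratings for one participant in one condition.
visual = dict(zip(FACTORS, [14, 6, 12, 13, 9, 11, 5]))
print(overall_workload(visual))  # 10.0
```

Keeping annoyance as a separate named factor, rather than folding it into frustration, is what lets the assessment answer the specific question of whether participants found the sonic feedback annoying.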