
HUMAN FACTORS, 1997, 39(2), 230-253

Humans and Automation: Use, Misuse, Disuse, Abuse

RAJA PARASURAMAN,1 Catholic University of America, Washington, D.C., and VICTOR RILEY, Honeywell Technology Center, Minneapolis, Minnesota

This paper addresses theoretical, empirical, and analytical studies pertaining to human use, misuse, disuse, and abuse of automation technology. Use refers to the voluntary activation or disengagement of automation by human operators. Trust, mental workload, and risk can influence automation use, but interactions between factors and large individual differences make prediction of automation use difficult. Misuse refers to overreliance on automation, which can result in failures of monitoring or decision making. Factors affecting the monitoring of automation include workload, automation reliability and consistency, and the saliency of automation state indicators. Disuse, or the neglect or underutilization of automation, is commonly caused by alarms that activate falsely. This often occurs because the base rate of the condition to be detected is not considered in setting the trade-off between false alarms and omissions. Automation abuse, or the automation of functions by designers and implementation by managers without due regard for the consequences for human performance, tends to define the operator's roles as by-products of the automation. Automation abuse can also promote misuse and disuse of automation by human operators. Understanding the factors associated with each of these aspects of human use of automation can lead to improved system design, effective training methods, and judicious policies and procedures involving automation use.

1 Requests for reprints should be sent to Raja Parasuraman, Cognitive Science Laboratory, Catholic University of America, Washington, DC 20064.

© 1997, Human Factors and Ergonomics Society. All rights reserved.

INTRODUCTION

The revolution ushered in by the digital computer in the latter half of this century has fundamentally changed many characteristics of work, leisure, travel, and other human activities. Even more radical changes are anticipated in the next century as computers increase in power, speed, and "intelligence." These factors sustain much of the drive toward automation in the workplace and elsewhere, as more capable computer hardware and software become available at low cost.

Technical issues, such as how automation functions are implemented and the characteristics of the associated sensors, controls, and software, dominate most writing on automation technology. This is not surprising, given the sophistication and ingenuity of design of many such systems (e.g., automatic landing of an aircraft). The economic benefits that automation can provide, or is perceived to offer, also tend to focus public attention on the technical capabilities of automation, which have been amply documented in such diverse domains as aviation (Spitzer, 1987), automobiles (IVHS America, 1992), manufacturing (Bessant, Levy, Ley, Smith, & Tranfield, 1992), medicine (Thompson, 1994),
(Sheridan, 1992), and shipping (Grabowski & Wallace, 1993).

Humans work with and are considered essential to all of these systems. However, in comparison with technical capabilities, human capabilities (human performance and cognition in automated systems) are much less frequently written about or discussed in public forums. This stems not from a relative lack of knowledge (Bainbridge, 1983; Billings, 1991; Chambers & Nagel, 1985; Hopkin, 1995; Mouloua & Parasuraman, 1994; Parasuraman & Mouloua, 1996; Rasmussen, 1986; Riley, 1995; Sheridan, 1992; Wickens, 1994; Wiener & Curry, 1980; Woods, 1996) but, rather, from a much greater collective emphasis on the technological than on the human aspects of automation.

In this paper we examine human performance aspects of the technological revolution known as automation. We analyze the factors influencing human use of automation in domains such as aviation, manufacturing, ground transportation, and medicine, though our treatment does not focus on any one of these systems. Consideration of these factors is important not only to systems currently under development, such as automation tools for air traffic management (Erzberger, 1992), but also to far-reaching system concepts that may be implemented in the future, such as "free flight" (Planzer & Hoffman, 1995; Radio and Technical Committee on Aeronautics, 1995).

A prevalent assumption about automation is that it resides in tyrannical machines that replace humans, a view made popular by Chaplin in his movie Modern Times. However, it has become evident that automation does not supplant human activity; rather, it changes the nature of the work that humans do, often in ways unintended and unanticipated by the designers of automation. In modern times, humans are consumers of automation. We discuss the human usage patterns of automation in this paper.

First, however, some restrictions of scope should be noted. We focus on the influence of automation on individual task performance. We do not consider in any detail the impact of automation on team (Bowers, Oser, Salas, & Cannon-Bowers, 1996) or job performance (Smith & Carayon, 1995) or on organizational behavior (Gerwin & Leung, 1986; Sun & Riis, 1994). We also do not examine the wider sociological, sociopsychological, or sociopolitical aspects of automation and human behavior (Sheridan, 1980; Zuboff, 1988), though such issues are becoming increasingly important to consider in automation design (Hancock, 1996; Nickerson, 1995).

What Is Automation?

We define automation as the execution by a machine agent (usually a computer) of a function that was previously carried out by a human. What is considered automation will therefore change with time. When the reallocation of a function from human to machine is complete and permanent, then the function will tend to be seen simply as a machine operation, not as automation. Examples of this include starter motors for cars and automatic elevators. By the same token, such devices as automatic teller machines, cruise controls in cars, and the flight management system (FMS) in aircraft qualify as automation because they perform functions that are also performed manually by humans. Today's automation could well be tomorrow's machine.

Automation of physical functions has freed humans from many time-consuming and labor-intensive activities; however, full automation of cognitive functions such as decision making, planning, and creative thinking remains rare. Could machine displacement of human thinking become more commonplace in the future? In principle, this might be possible. For example, devices such as optical disks are increasingly replacing books as repositories of large amounts of information. Evolutionary neurobiologists have speculated that external means of storing information and knowledge (as opposed to internal storage in the human brain) not only have played an important role in the evolution of human consciousness but will also do so in its future development (Donald, 1991). Hence permanent allocation of higher cognitive functions to machines
need not be conceptually problematic. Moreover, such a transfer will probably not replace but, rather, will modify human thinking. In practice, however, despite more than three decades of research on artificial intelligence, neural networks, and the development, in Schank's (1984) terms, of the "cognitive computer," enduring transfer of thinking skills to machines has proven very difficult.

Automation can be usefully characterized by a continuum of levels rather than as an all-or-none concept (McDaniel, 1988; Riley, 1989; Sheridan, 1980). Under full manual control, a particular function is controlled by the human, with no machine control. At the other extreme of full automation, the machine controls all aspects of the function, including its monitoring, and only its products (not its internal operations) are visible to the human operator.

Different levels of automation can be identified between these extremes. For example, Sheridan (1980) identified 10 such levels of automation; in his seventh level, the automation carries out a function and informs the operator to that effect, but the operator cannot control the output. Riley (1989) defined automation levels as the combination of particular values along two dimensions: "intelligence" and autonomy. Automation with high autonomy can carry out functions only with initiating input from the operator. At the highest levels, the functions cannot be overridden by the human operator (e.g., the flight envelope protection function of the Airbus 320 aircraft).

Human Roles in Automated Systems

Until recently, the primary criteria for applying automation were technological feasibility and cost. To the extent that automation could perform a function more efficiently, reliably, or accurately than the human operator, or merely replace the operator at a lower cost, automation has been applied at the highest level possible. Technical capability and low cost are valid reasons for automation if there is no detrimental impact on human (and hence system) performance in the resulting system. As we discuss later, however, this is not always the case. In the ultimate extension of this practice, automation would completely replace operators in systems. Automation has occasionally had this effect (e.g., in some sectors of the manufacturing industry), but more generally automation has not completely displaced human workers. Although in lay terms it is easiest to think of an automated system as not including a human, most such systems, including "unmanned" systems such as spacecraft, involve human operators in a supervisory or monitoring role.

One of the considerations preventing the total removal of human operators from such systems has been the common perception that humans are more flexible, adaptable, and creative than automation and thus are better able to respond to changing or unforeseen conditions. In a sense, then, one might consider the levels of automation and operator involvement that are permitted in a system design as reflecting the relative levels of trust in the designer on one hand and the operator on the other. Given that no designer of automation can foresee all possibilities in a complex environment, one approach is to rely on the human operator to exercise his or her experience and judgment in using automation. Usually (but not always) the operator is given override authority and some discretion regarding the use of automation.

This approach, however, tends to define the human operator's roles and responsibilities in terms of the automation (Riley, 1995). Designers tend to automate everything that leads to an economic benefit and leave the operator to manage the resulting system. Several important human factors issues emerge from this approach, including consequences of inadequate feedback about the automation's actions and intentions (Norman, 1990), awareness and management of automation modes (Sarter & Woods, 1994), underreliance on automation (Sorkin, 1988), and overreliance on automation (Parasuraman, Molloy, & Singh, 1993; Riley, 1994b). An extensive list of human factors concerns associated with
cockpit automation was recently compiled by Funk, Lyall, and Riley (1995).

Incidents and Accidents

Unfortunately, the ability to address human performance issues systematically in design and training has lagged behind the application of automation, and issues have come to light as a result of accidents and incidents. The need for better feedback about the automation's state was revealed in a number of "controlled flight into terrain" aircraft accidents, in which the crew selected the wrong guidance mode, and indications presented to the crew appeared similar to when the system was tracking the glide slope perfectly (Corwin, Funk, Levitan, & Bloomfield, 1993). The difficulty of managing complex flight guidance modes and maintaining awareness of which mode the aircraft was in was demonstrated by accidents attributed to pilot confusion regarding the current mode (Sarter & Woods, 1994). For example, an Airbus A320 crashed in Strasbourg, France, when the crew apparently confused the vertical speed and flight path angle modes (Ministère de l'Equipement, des Transports et du Tourisme, 1993).

Underreliance on automation was demonstrated in railroad accidents in which crews chose to neglect speed constraints and their associated alerts. Even after one such accident near Baltimore in 1987, inspectors found that the train operators were continuing to tape over the buzzers that warned them of speed violations (Sorkin, 1988). Finally, overreliance on automation was a contributing cause in an accident near Columbus, Ohio, in 1994. A pilot who demonstrated low confidence in his own manual control skills and tended to rely heavily on the automatic pilot during nighttime, low-visibility approaches failed to monitor the aircraft's airspeed during final approach in a nighttime snowstorm and crashed short of the runway (National Transportation Safety Board [NTSB], 1994).

Most such accidents result from multiple causes, and it can be difficult to untangle the various contributing factors. Whenever automation is involved in an accident, the issue of how the operator used that automation is of interest, but it may be difficult to say that the operator used the automation too much, too little, or otherwise inappropriately. Often the best one can do is to conclude that, the operator having used the automation in a certain way, certain consequences followed. The lessons learned from these consequences then join the growing body of lessons related to automation design and use. In most cases the operator is not clearly wrong in using or not using the automation. Having determined that the operator must be trusted to apply experience and judgment in unforeseen circumstances, he or she is granted the authority to decide when and how to use it (though management may limit this authority to a greater or lesser extent).

This brings up the question of how operators make decisions to use automation. How do they decide whether or not to use automation? Do they make these decisions rationally or based on nonrational factors? Are automation usage decisions appropriate given the relative performances of operator and automation? When and why do people misuse automation?

Overview

In this paper we examine the factors influencing the use, misuse, disuse, and abuse of automation. Two points should be emphasized regarding our terminology. First, we include in our discussion of human use of automation not only human operators of systems but also designers, supervisors, managers, and regulators. This necessarily means that any human error associated with use of automation can include the human operator, the designer, or even management error; examples of each are provided throughout this paper.

Second, in using terms such as misuse, disuse, and abuse, no pejorative intent is implied toward any of these groups. We define misuse as overreliance on automation (e.g., using it when it should not be used, failing to monitor it effectively), disuse as underutilization of automation (e.g., ignoring or turning off automated alarms or safety systems), and abuse as inappropriate application of automation by designers or managers
(e.g., automation that fails to consider the consequences for human performance in the resulting system).

USE OF AUTOMATION

The catastrophic accidents in Strasbourg, Baltimore, and elsewhere are a powerful reminder that the decision to use (or not to use) automation can be one of the most important decisions a human operator can make, particularly in time-critical situations. What factors influence this decision? Several authors (Lee & Moray, 1992; Muir, 1988) have suggested that automation reliability and the operator's trust in automation are major factors. Riley (1989) examined several other factors that might also influence automation use decisions, including how much workload the operator was experiencing and how much risk was involved in the situation. He proposed that automation usage was a complex, interactive function of these and other factors. Others (McClumpha & James, 1994; Singh, Molloy, & Parasuraman, 1993a, 1993b) have suggested that operator attitudes toward automation might influence automation usage. We discuss the impact of each of these factors.

Attitudes Toward Automation

It is easy to think of examples in which automation usage and attitudes toward automation are correlated. Often these attitudes are shaped by the reliability or accuracy of the automation. For example, automatic braking systems are particularly helpful when driving on wet or icy roads, and drivers who use these systems have favorable attitudes toward them. Smoke detectors are prone to false alarms, however, and are disliked by many people, some of whom might disable them. In either case, automation use (or lack of use) reflects perceived reliability. In other instances, attitudes may not be so closely linked to automation reliability. For example, many elderly people tend not to use automatic teller machines because of a generally negative attitude toward computer technology and a more positive attitude toward social interaction with other humans (bank tellers). There are also people who prefer not to use automatic brakes, and some people like smoke alarms.

Attitudes toward automation vary widely among individuals (Helmreich, 1984; McClumpha & James, 1994; Singh, Deaton, & Parasuraman, 1993). Understanding these attitudes, positive and negative, general and specific, constitutes a first step toward understanding human use of automation.

Wiener (1985, 1989) queried pilots of automated aircraft about their attitudes toward different cockpit systems. A notable finding was that only a minority of the pilots agreed with the statement, "automation reduces workload." In fact, a substantial minority of the pilots thought that automation had increased their workload. Later studies revealed that a major source of the increased workload was the requirement to reprogram automated systems such as the FMS when conditions changed (e.g., having to land at a different runway than originally planned). Thus many pilots felt that automation increased workload precisely at the time when it was needed most, that is, during the high-workload phase of descent and final approach. Subsequent, more formal questionnaire studies have also revealed substantial individual differences in pilot attitudes toward cockpit automation (McClumpha & James, 1994; Singh et al., 1993a).

Beliefs and attitudes are not necessarily linked to behaviors that are consistent with those attitudes. To what extent are individual attitudes toward automation consistent with usage patterns of automation? The issue remains to be explored fully. In the case of a positive view of automation, attitudes and usage may be correlated. Examples include the horizontal situation indicator, which pilots use for navigation and find extremely helpful, and automatic hand-offs between airspace sectors, which air traffic controllers find useful in reducing their workload.

More generally, attitudes may not necessarily be reflected in behavior. Two recent studies found no relationship between attitudes toward automation and actual reliance on automation during multiple-task performance (Riley, 1994a, 1996; Singh et al., 1993b). Furthermore, there
may be differences between general attitudes toward automation (i.e., all automation) and domain-specific attitudes (e.g., cockpit automation). For all these reasons it may be difficult to predict automation usage patterns on the basis of questionnaire data alone. Performance data on actual human operator usage of automation are needed. We now turn to such evidence.

Mental Workload

One of the fundamental reasons for introducing automation into complex systems is to lessen the chance of human error by reducing the operator's high mental workload. However, this does not always occur (Edwards, 1977; Wiener, 1988). Nevertheless, one might argue that an operator is more likely to choose to use automation when his or her workload is high than when it is low or moderate. Surprisingly, there is little evidence in favor of this assertion. Riley (1994a) had college students carry out a simple step-tracking task and a character classification task that could be automated. He found that manipulating the difficulty of the tracking task had no impact on the students' choice to use automation in the classification task. The overall level of automation usage in this group was quite low, less than about 50%. A possible reason could be that these young adults typically prefer manual over automated control, as reflected in their interest in computer video games that require high levels of manual skill. However, in a replication study carried out with pilots, who turned on the automation much more frequently, no relationship between task difficulty and automation usage was found.

The difficulty manipulation used by Riley (1994a) may have been insufficient to raise workload significantly, given that task performance was only slightly affected. Moreover, the participants had to perform only two simple, discrete-trial, artificial tasks. Perhaps task difficulty manipulations affect automation usage only in a multitask environment with dynamic tasks resembling those found in real work settings. Harris, Hancock, and Arthur (1993) used three flight tasks (tracking, system monitoring, and fuel management) and gave participants the option of automating the tracking task. Following advance notice of an increase in task difficulty, there was a trend toward use of automation as a workload management strategy. However, individual variability in automation usage patterns obscured any significant relationship between task load increase and automation usage.

The evidence concerning the influence of task load on automation usage is thus unclear. Nevertheless, human operators often cite excessive workload as a factor in their choice of automation. Riley, Lyall, and Wiener (1993) reported that workload was cited as one of the two most important factors (the other was the urgency of the situation) in pilots' choice of such automation as the autopilot, FMS, and flight director during simulated flight. However, these data also showed substantial individual differences. Pilots were asked how often, in actual line performance, various factors influenced their automation use decisions. For most factors examined, many pilots indicated that a particular factor rarely influenced their decision, whereas an almost equal number said that the same factor influenced their decisions quite often; very few gave an answer in the middle.

Studies of human use of automation typically find large individual differences. Riley (1994a) found that the patterns of automation use differed markedly between those who cited fatigue as an influence and those who cited other factors. Moreover, there were substantial differences between students and pilots, even though the task domain was artificial and had no relation to aviation. Within both pilot and student groups were strong differences among individuals in automation use. These results suggest that different people employ different strategies when making automation use decisions and are influenced by different considerations.

Subjective perceptions and objective measurement of performance are often dissociated (Yeh & Wickens, 1988). Furthermore, the nature of workload in real work settings can be fundamentally different from workload in most laboratory tasks. For example, pilots are often faced with
deciding whether to fly a clearance (from air traffic control) through the FMS, through simple heading or altitude changes on the glareshield panel, or through manual control. Casner (1994) found that pilots appear to consider the predictability of the flight path and the demands of other tasks in reaching their decision. When the flight path is highly predictable, overall workload may be reduced by accepting the higher short-term workload associated with programming the FMS, whereas when the future flight path is more uncertain, such a workload investment may not be warranted. To fully explore the implications of workload on automation use, the workload attributes of particular systems of interest should be better represented; in the flight deck, this would include a trade-off between short-term and long-term workload investments.

Cognitive Overhead

In addition to the workload associated with the operator's other tasks, a related form of workload, that associated with the decision to use the automation itself, may also influence automation use. Automation usage decisions may be relatively straightforward if the advantages of using the automation are clear cut. When the benefit offered by automation is not readily apparent, however, or if the benefit becomes clear only after much thought and evaluation, then the cognitive "overhead" involved may persuade the operator not to use the automation (Kirlik, 1993).

Overhead can be a significant factor even for routine actions for which the advantages of automation are clear, for example, entering text from a sheet of paper into a word processor. One choice, a labor-intensive one, is to enter the text manually with a keyboard. Alternatively, a scanner and an optical character recognition (OCR) program can be used to enter the text into the word processor. Even though modern OCR programs are quite accurate for clearly typed text, most people would probably choose not to use this form of automation because the time involved in setting up the automation and correcting errors may be perceived as not worth the effort. (Only if several sheets of paper had to be converted would the OCR option be considered.)

Cognitive overhead may be important with high-level automation that provides the human operator with a solution to a complex problem. Because these aids are generally used in uncertain, probabilistic environments, the automated solution may or may not be better than a manual one. As a result, the human operator may expend considerable cognitive resources in generating a manual solution to the problem, comparing it with the automated solution, and then picking one of the solutions. If the operator perceives that the advantage offered by the automation is not sufficient to overcome the cognitive overhead involved, then he or she may simply choose not to use the automation and do the task manually.

Kirlik (1993) provided empirical evidence of this phenomenon in a dual-task study in which an autopilot was available to participants for a primary flight task. He found that none of the participants used the automation as intended, that is, as a task-shedding device to allow attention to be focused on the secondary task when it was present, but not otherwise. Kirlik (1993) hypothesized that factors such as an individual's manual control skills, the time needed to engage the autopilot, and the cost of delaying the secondary task while engaging the autopilot may have influenced automation use patterns. Using a Markov modeling analysis to identify the optimal strategies of automation use given each of these factors, he found that conditions exist for which the optimal choice is not to use the automation.

Trust

Trust often determines automation usage. Operators may not use a reliable automated system if they believe it to be untrustworthy. Conversely, they may continue to rely on automation even when it malfunctions. Muir (1988) argued that individuals' trust for machines can be affected by the same factors that influence trust between individuals; for example, people trust others if they are reliable and honest, but they lose trust when they are let down or betrayed, and the subsequent redevelopment of trust takes time. She found that
use of an automated aid to control a simulated soft drink manufacturing plant was correlated with a simple subjective measure of trust in that aid.

Using a similar process control simulation, Lee and Moray (1992) also found a correlation between automation reliance and subjective trust, although their participants also tended to be biased toward manual control and showed "inertia" in their allocation policy. In a subsequent study, Lee and Moray (1994) found that participants chose manual control if their confidence in their own ability to control the plant exceeded their trust of the automation, and that they otherwise chose automation.

A factor in the development of trust is automation reliability. Several studies have shown that operators' use of automation reflects automation reliability, though occasional failures of automation do not seem to be a deterrent to future use of the automation. Riley (1994a) found that college students and pilots did not delay turning on automation after recovery from a failure; in fact, many participants continued to rely on the automation during the failure. Parasuraman et al. (1993) found that even after the simulated catastrophic failure of an automated engine-monitoring system, participants continued to rely on the automation for some time, though to a lesser extent than when the automation was more reliable.

These findings are surprising in view of earlier studies suggesting that operator trust in automation is slow to recover following a failure of the automation (Lee & Moray, 1992). Several possible mitigating factors could account for the discrepancy.

First, if automation reliability is relatively high, then operators may come to rely on the automation, so that occasional failures do not substantially reduce trust in the automation unless the failures are sustained. A second factor may be the ease with which automation behaviors and state indicators can be detected (Molloy & Parasuraman, 1994). As discussed earlier, the overhead involved in enabling or disengaging automation may be another factor. Finally, the overall complexity of the task may be relevant; complex task domains may prompt different participants to adopt different task performance strategies, and operator use of automation may be influenced by these task performance strategies as well as by factors directly related to the automation.

Confidence, Risk, and Other Factors

Several other factors are probably also important in influencing the choice to use or not to use automation. Some of these factors may have a direct influence, whereas others may interact with the factors already discussed. For example, the influence of cognitive overhead may be particularly evident if the operator's workload is already high. Under such circumstances, operators may be reluctant to use automation even if it is reliable, accurate, and generally trustworthy. Lee and Moray (1992) and Riley (1994a) also identified self-confidence in one's manual skills as an important factor in automation usage. If trust in automation is greater than self-confidence, automation would be engaged, but not otherwise. Riley (1994a) suggested that this interaction could be moderated by other factors, such as the risk associated with the decision to use or not to use automation. He outlined a model of automation usage based on a number of factors (see Figure 1). The factors for which he found support include automation reliability, trust in the automation, self-confidence in one's own capabilities, task complexity, risk, learning about automation states, and fatigue. However, he did not find that self-confidence was necessarily justified; participants in his studies were not able to accurately assess their own performance and use the automation accordingly, again showing the dissociation between subjective estimates and performance mentioned earlier. Furthermore, large individual differences were found in almost all aspects of automation use decisions. This can make systematic prediction of automation usage by individuals difficult, much as the prediction of human error is problematic even when the factors that give rise to errors are understood (Reason, 1990).

[Figure 1 shows a network of interacting factors, including operator accuracy, machine accuracy, system accuracy, workload, perceived workload, task complexity, skill, trust in automation, perceived risk, risk, state learning, and fatigue.]

Figure 1. Interactions between factors influencing automation use. Solid arrows represent relationships supported by experimental data; dotted arrows are hypothesized relationships or relationships that depend on the system in question. Reproduced from Parasuraman & Mouloua (1996) with permission from Lawrence Erlbaum Associates.
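
The decision rule described above, in which automation is engaged only when trust in the automation exceeds self-confidence in one's manual skill, with risk and other factors moderating the comparison, can be made concrete with a small illustrative sketch. The following Python fragment is not the model shown in Figure 1; the 0-to-1 rating scales, the risk weighting, and all names are assumptions introduced here purely for illustration.

    # Illustrative sketch only: a toy form of the trust vs. self-confidence
    # comparison discussed in the text (Lee & Moray, 1994; Riley, 1994a).
    # The 0-1 scales, the risk weighting, and all names are assumptions,
    # not the authors' model.
    def engages_automation(trust, self_confidence, perceived_risk=0.0,
                           risk_weight=0.2):
        """Return True if this hypothetical operator engages the automation."""
        # trust, self_confidence, perceived_risk: subjective ratings in [0, 1].
        # Higher perceived risk is assumed to raise the level of trust needed
        # before the operator relies on the automation.
        return trust > self_confidence + risk_weight * perceived_risk
    # Example: automation is trusted slightly more than the operator's own
    # skill, but the situation is judged risky, so manual control is retained.
    print(engages_automation(trust=0.65, self_confidence=0.60, perceived_risk=0.9))

Even such a simple rule makes the individual-differences problem visible: two operators with identical trust in the automation but different self-confidence, or different sensitivity to risk, will make opposite allocation choices.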

Practical Implications

These results suggest that automation use decisions are based on a complex interaction of many factors and are subject to strongly divergent individual considerations. Although many of the factors have been examined and the most important identified, predicting the automation use of an individual based on knowledge of these factors remains a difficult prospect. Given this conclusion, in a human-centered automation philosophy, the decision to use or not to use automation is left to the operator (within limits set by management). Having granted the operator this discretion, designers and operators should recognize the essential unpredictability of how people will use automation in specific circumstances, if for no other reason than the presence of these individual differences.

If automation is to be used appropriately, potential biases and influences on this decision should be recognized by training personnel, developers, and managers. Individual operators should be made aware of the biases they may bring to the use of automation. For example, if an individual is more likely to rely on automation when tired, he or she should be made aware that fatigue may lead to overreliance and be taught to recognize the implications of that potential bias. Finally, policies and procedures may profitably highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that might produce suboptimal strategies.

MISUSE OF AUTOMATION

Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. However, can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation
without recognizing its limitations or fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents, for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear (NTSB, 1973). Numerous other incidents and accidents since this crash testify to the potential for human operators' overreliance on automation.

Overreliance on Automation

The Air Transport Association (ATA, 1989) and Federal Aviation Administration (FAA, 1990) have expressed concern about the potential reluctance of pilots to take over from automated systems. In a dual-task study, Riley (1994b) found that although almost all students turned the automation off when it failed, almost half the pilots did not, even though performance on the task was substantially degraded by the failed automation and the participants were competing for awards based on performance. Even though the task had no relation to aviation, pilots showed significantly more reliance on automation than did students in all conditions. However, the fact that almost half the pilots used the automation when it failed, whereas the rest turned it off, is further evidence of marked individual differences in automation use decisions.

Overreliance on automation represents an aspect of misuse that can result from several forms of human error, including decision biases and failures of monitoring. It is not only untrained operators who show these tendencies. Will (1991) found that skilled subject matter experts had misplaced faith in the accuracy of diagnostic expert systems (see also Weick, 1988). The Aviation Safety Reporting System (ASRS) also contains many reports from pilots that mention monitoring failures linked to excessive trust in or overreliance on automated systems such as the autopilot or FMS (Mosier, Skitka, & Korte, 1994; Singh et al., 1993a, 1993b).

Decision Biases

Human decision makers exhibit a variety of biases in reaching decisions under uncertainty (e.g., underestimating the influence of the base rate or being overconfident in their decisions). Many of these biases stem from the use of decision heuristics (Tversky & Kahneman, 1984) that people use routinely as a strategy to reduce the cognitive effort involved in solving a problem (Wickens, 1992). For example, Tversky and Kahneman (1984) showed that even expert decision makers use the heuristic of representativeness in making decisions. This can lead to errors when a particular event or symptom is highly representative of a particular condition but highly unlikely for other reasons (e.g., a low base rate). Although heuristics are a useful alternative to analytical or normative methods (e.g., utility theory or Bayesian statistics) and generally lead to successful decision making, they can result in biases that lead to substandard decision performance.
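
To make the base-rate point concrete, consider a worked example with invented numbers (a sketch, not data from any of the studies cited here). Bayes' rule gives the probability of a condition given a highly representative cue:

    # Worked example with invented numbers: a cue that is highly
    # "representative" of a condition still provides weak evidence when the
    # condition's base rate is low.
    def posterior(base_rate, hit_rate, false_alarm_rate):
        """P(condition | cue) by Bayes' rule."""
        p_cue = hit_rate * base_rate + false_alarm_rate * (1.0 - base_rate)
        return (hit_rate * base_rate) / p_cue
    # A cue present in 90% of true faults and only 5% of normal cases sounds
    # compelling, but with a 1-in-1,000 base rate the fault remains unlikely:
    print(posterior(base_rate=0.001, hit_rate=0.90, false_alarm_rate=0.05))
    # about 0.018, i.e., roughly a 2% chance of a true fault given the cue

A decision maker who relies on representativeness and neglects the base rate will grossly overestimate this probability, which is precisely the error that normative Bayesian analysis guards against.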


Automated systems that provide decision support may reinforce the human tendency to use heuristics and the susceptibility to automation bias (Mosier & Skitka, 1996). Although reliance on automation as a heuristic may be an effective strategy in many cases, overreliance can lead to errors, as is the case with any decision heuristic. Automation bias may result in omission errors, in which the operator fails to notice a problem or take an action because the automated aid fails to inform the operator. Such errors include monitoring failures, which are discussed in more detail later. Commission errors occur when operators follow an automated directive that is inappropriate.

Mosier and Skitka (1996) also pointed out that reliance on the decisions of automation can make humans less attentive to contradictory sources of evidence. In a part-task simulation study, Mosier, Heers, Skitka, and Burdick (1996) reported that pilots tended to use automated cues as a heuristic replacement for information seeking. They found that pilots tended not to use disconfirming evidence available from cockpit displays when there was a conflict between expected and actual automation performance. Automation bias error rates were also found to be similar for student and professional pilot samples, indicating that expertise does not guarantee immunity from this bias (Mosier, Skitka, Burdick, & Heers, 1996).

Human Monitoring Errors

Automation bias represents a case of inappropriate decision making linked to overreliance on automation. In addition, operators may not sufficiently monitor the inputs to automated systems in order to reach effective decisions should the automation malfunction or fail. It is often pointed out that humans do not make good monitors. In fact, human monitoring can be very efficient. For example, in a high-fidelity simulation study of air traffic control, Hilburn, Jorna, and Parasuraman (1995) found that compared with unaided performance, experienced controllers using an automated descent advisor were quicker to respond to secondary malfunctions (pilots not replying to data-linked clearances). Monitoring in other environments, such as intensive care units and process control, is also generally efficient (Parasuraman, Mouloua, Molloy, & Hilburn, 1996). When the number of opportunities for failure is considered (virtually every minute for these continuous, 24-h systems), then the relatively low frequency of monitoring errors is striking. As Reason (1990) pointed out, the opportunity ratio for skill-based and rule-based errors is relatively low. The absolute number of errors may be high, however, and the application of increased levels of automation in these systems creates more opportunities for failures of monitoring as the number of automated subsystems, alarms, decision aids, and so on increase.

McClellan (1994) discussed pilot monitoring for various types of autopilot failures in aircraft. FAA certification of an autopilot requires detection of "hard-over," uncommanded banks (up to a maximum of 60°) within 3 s during test flight. Although such autopilot failures are relatively easy to detect because they are so salient, "slow-over" rolls, in which the autopilot rolls the aircraft gradually and smoothly, are much less salient and can go undetected until the aircraft wings are nearly vertical (e.g., the 1985 China Airlines incident; NTSB, 1986). McClellan pointed out that autopilot failures, though infrequent, do occur and that incidents and accidents can be avoided not only by appropriate certification but also by training:

All autopilot certification theory and testing is based on the human pilot identifying an autopilot failure and promptly disabling the autopilot.... It may not always be easy to quickly identify an actual autopilot failure, because a malfunction could manifest itself in various ways. Instead of taking time to troubleshoot an autopilot failure, [pilots] must treat every unexpected maneuver when the autopilot is engaged as a failure and immediately disable the autopilot and trim system. (p. 80)

Although poor monitoring can have multiple determinants, operator overreliance on automation may be an important contributor. Mosier et al. (1994) found that 77% of ASRS incidents in which overreliance on automation was suspected involved a probable failure in monitoring. Similar incidents have occurred elsewhere. For example, a satellite-based navigational system failed "silently" in a cruise ship that ran aground off Nantucket Island. The crew did not monitor other sources of position information that would have indicated that they had drifted off course (National Transportation Safety Board, 1997b).

Manual task load. Parasuraman and colleagues (1993, 1994) have examined the factors influencing monitoring of automation and found that the overall task load imposed on the operator, which determined the operator's attention strategies, is an important factor. In their studies, participants simultaneously performed tracking and fuel management tasks manually and had to monitor an automated engine status task. Participants were required to detect occasional automation failures by identifying engine malfunctions not detected by the automation. In the constant reliability condition, automation reliability was invariant over time, whereas in the variable reliability condition, automation reliability varied from low to high every 10 min.
Participants detected more than 70% of malfunctions on the engine status task when they performed the task manually while simultaneously carrying out tracking and fuel management. However, when the engine status task was under automation control, detection of malfunctions was markedly reduced in the constant reliability condition.

In a separate experiment, the same conditions were administered, but participants performed only the monitoring task without the tracking and fuel management tasks. Individuals were now nearly perfect (>95%) in detecting failures in the automated control of the engine status task, which was the only task. These results point to the potential cost of long-term automation on system performance and show that operators can be poor at monitoring automation when they have to perform other manual tasks simultaneously. Other studies have shown that poor automation monitoring is exhibited by pilots as well as nonpilots (Parasuraman et al., 1994) and also when only a single automation failure occurs during the simulation (Molloy & Parasuraman, 1996).

Automation reliability and consistency. The monitoring performance of participants in these studies supports the view that reliable automation engenders trust (Lee & Moray, 1992). This leads to a reliance on automation that is associated with only occasional monitoring of its efficiency, suggesting that a critical factor in the development of this phenomenon might be the constant, unchanging reliability of the automation.

Conversely, automation with inconsistent reliability should not induce trust and should therefore be monitored more closely. This prediction was supported by the Parasuraman et al. (1993) finding that monitoring performance was significantly higher in the variable reliability condition than in the constant reliability condition (see Figure 2). The absolute level of automation reliability may also affect monitoring performance. May, Molloy, and Parasuraman (1993) found that the detection rate of automation failures varied inversely with automation reliability.

[Figure 2 plots detection rate of automation failures (percentage) across successive 10-minute blocks for the variable-reliability and constant-reliability conditions.]

Figure 2. Effects of consistency of automation reliability (constant or variable) on monitoring performance under automation. Based on data from Parasuraman et al. (1993).

Machine Monitoring

Can the problem of poor human monitoring of automation itself be mitigated by automation? Some monitoring tasks can be automated, such as automated checklists for preflight procedures (e.g., Palmer & Degani, 1991), though this may merely create another system that the operator must monitor. Pattern recognition methods, including those based on neural networks, can also be used for machine detection of abnormal conditions (e.g., Gonzalez & Howington, 1977). Machine monitoring may be an effective design strategy in some instances, especially for lower-level functions, and is used extensively in many systems, particularly process control.

However, automated monitoring may not provide a general solution to the monitoring problem, for at least two reasons. First, automated monitors can increase the number of alarms, which is already high in many settings. Second, to protect against failure of automated monitors, designers may be tempted to put in another system that monitors the automated monitor, a process that could lead to infinite regress. These high-level monitors can fail also, sometimes silently. Automated warning systems can also lead
to reliance on warning signals as the primary indicator of system malfunctions rather than as secondary checks (Wiener & Curry, 1980).

Making Automation Behaviors and State Indicators Salient

The monitoring studies described earlier indicate that automation failures are difficult to detect if the operator's attention is engaged elsewhere. Neither centrally locating the automated task (Singh, Molloy, & Parasuraman, 1997) nor superimposing it on the primary manual task (Duley, Westerman, Molloy, & Parasuraman, in press) mitigates this effect. These results suggest that attentional rather than purely visual factors (e.g., nonfoveal vision) underlie poor monitoring. Therefore, making automation state indicators more salient may enhance monitoring.

One possibility is to use display integration to reduce the attentional demands associated with detecting a malfunction in an automated task. Integration of elements within a display is one method for reducing attentional demands of fault detection, particularly if the integrated components combine to form an emergent feature such as an object or part of an object (Bennett & Flach, 1992; Woods, Wise, & Hanes, 1981). If the emergent feature is used to index a malfunction, detection of the malfunction could occur preattentively and in parallel with other tasks.

Molloy and Parasuraman (1994; see also Molloy, Deaton, & Parasuraman, 1995) examined this possibility with a version of an engine status display that is currently implemented in many cockpits: a CRT-based depiction of engine instruments and caution and warning messages. The display consisted of four circular gauges showing different engine parameters. The integrated form of this display was based on one developed by Abbott (1990), the Engine Monitoring and Crew Alerting System (EMACS), in which the four engine parameters were shown as columns on a deviation bar graph. Parameter values above or below normal were displayed as deviations from a horizontal line (the emergent feature) representing normal operation.

Pilots and nonpilots were tested with these engine status displays using the same paradigm developed by Parasuraman et al. (1993). In the manual condition, participants were responsible for identifying and correcting engine malfunctions. Performance (detection rate) under manual conditions was initially equated for the baseline and EMACS tasks. In the automated condition, a system routine detected malfunctions without the need for operator intervention; however, from time to time the automation routine failed to detect malfunctions, which the participants were then required to manage. Although participants detected only about a third of the automation failures with the nonintegrated baseline display, they detected twice as many failures with the integrated EMACS display.

Adaptive task allocation may provide another means of making automation behaviors more salient by refreshing the operator's memory of the automated task (Lewandowsky & Nikolic, 1995; Parasuraman, Bahri, Deaton, Morrison, & Barnes, 1992). The traditional approach to automation is based on a policy of allocation of function in which either the human or the machine has full control of a task (Fitts, 1951). An alternative philosophy, variously termed adaptive task allocation or adaptive automation, sees function allocation between humans and machines as flexible (Hancock & Chignell, 1989; Rouse, 1988; Scerbo, 1996). For example, the operator can actively control a process during moderate workload, allocate this function to an automated subsystem during peak workload if necessary, and retake manual control when workload diminishes. This suggests that one method of improving monitoring of automation might be to insert brief periods of manual task performance after a long period of automation and then to return the task to automation (Byrne & Parasuraman, 1996; Parasuraman, 1993).
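
As a minimal sketch of the adaptive allocation policy just described, the Python fragment below switches a monitored task between the operator and the automation on the basis of current workload and inserts a brief manual block after a long unbroken period of automation. The thresholds, timing values, and names are assumptions chosen for illustration, not a published algorithm.

    # Minimal illustrative sketch of adaptive task allocation as described in
    # the text; thresholds, timing, and names are assumptions, not a
    # published algorithm.
    def allocate(workload, minutes_on_automation, high=0.8, low=0.4,
                 max_automation_min=40):
        """Return 'automation' or 'manual' for the monitored task."""
        # After a long unbroken period on automation, briefly return the task
        # to the operator to refresh manual involvement.
        if minutes_on_automation >= max_automation_min:
            return "manual"
        if workload >= high:   # shed the task during peak workload
            return "automation"
        if workload <= low:    # retake control when workload diminishes
            return "manual"
        # In between, keep whatever allocation is currently in effect.
        return "automation" if minutes_on_automation > 0 else "manual"
    print(allocate(workload=0.9, minutes_on_automation=15))   # automation
    print(allocate(workload=0.3, minutes_on_automation=5))    # manual

Whether such a policy helps in practice depends on its implementation; as noted below, adaptive systems can suffer from the same problems as traditional automation if designed from a purely technology-driven approach.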


reallocated to the operator had a beneficial impact on subsequent operator monitoring under automation. Similar results were obtained in a subsequent study in which experienced pilots served as participants (Parasuraman et al., 1994). These results encourage further research into adaptive systems (see Scerbo, 1996, for a comprehensive review). However, such systems may suffer from the same problems as traditional automation if implemented from a purely technology-driven approach, without considering user needs and capabilities (Billings & Woods, 1994).

Display techniques that afford "direct" perception of system states (Vicente & Rasmussen, 1990) may also improve the saliency of automation states and provide better feedback about the automation to the operator, which may in turn reduce overreliance. Billings (1991) has pointed out the importance of keeping the human operator informed about automated systems. This is clearly desirable, and making automation state indicators salient would achieve this objective. However, literal adherence to this principle (e.g., providing feedback about all automated systems, from low-level detectors to high-level decision aids) could lead to an information explosion and increase workload for the operator. We suggest that saliency and feedback would be particularly beneficial for automation that is designed to have relatively high levels of autonomy.

System Authority and Autonomy

Excessive trust can be a problem in systems with high-authority automation. The operator who believes that the automation is 100% reliable will be unlikely to monitor inputs to the automation or to second-guess its outputs. In Langer's (1989) terms, the human makes a "premature cognitive commitment," which affects his or her subsequent attitude toward the automation. The autonomy of the automation could be such that the operator has little opportunity to practice the skills involved in performing the automated task manually. If this is the case, then the loss in the operator's own skills relative to the performance of the automation will tend to lead to an even greater reliance on the automation (see Lee & Moray, 1992), creating a vicious circle (Mosier et al., 1994; Satchell, 1993).

Sarter and Woods (1995) proposed that the combination of high authority and autonomy of automation creates multiple "agents," who must work together for effective system performance. Although the electronic copilot (Chambers & Nagel, 1985) is still in the conceptual stage for the cockpit, current cockpit automation possesses many qualities consistent with autonomous, agentlike behavior. Unfortunately, because the properties of the automation can create "strong but silent" partners to the human operator, mutual understanding between machine and human agents can be compromised (Sarter, 1996). This is exemplified by the occurrence of FMS mode errors in advanced cockpits, in which pilots have been found to carry out an action appropriate for one mode of the FMS when, in fact, the FMS was in another mode (Sarter & Woods, 1994).

Overreliance on automated solutions may also reduce situation awareness (Endsley, 1996; Sarter & Woods, 1991; Wickens, 1994). For example, advanced decision aids have been proposed that will provide air traffic controllers with resolution advisories on potential conflicts. Controllers may come to accept the proposed solutions as a matter of routine. This could lead to a reduced understanding of the traffic picture compared with when the solution is generated manually (Hopkin, 1995). Whitfield, Ball, and Ord (1980) reported such a loss of the "mental picture" in controllers, who tended to use automated conflict resolutions under conditions of high workload and time pressure.

Practical Implications

Taken together, these results demonstrate that overreliance on automation can and does happen, supporting the concerns expressed by the ATA (1989) and FAA (1990) in their human factors plans. System designers should be aware of the potential for operators to use automation when they probably should not, to be susceptible to decision biases caused by overreliance on automation, to fail to monitor the automation as closely as they should, and to invest more trust in the automation than it may merit. Scenarios that may lead to overreliance on automation should be anticipated and methods developed to counter it.


Some strategies that may help in this regard include ensuring that state indications are salient enough to draw operator attention, requiring some level of active operator involvement in the process (Billings, 1991), and ensuring that the other demands on operator attention do not encourage the operator to ignore the automated processes. Other methods that might be employed to promote better monitoring include the use of display integration and adaptive task allocation.

DISUSE OF AUTOMATION

Few technologies gain instant acceptance when introduced into the workplace. Human operators may at first dislike and even mistrust a new automated system. As experience is gained with the new system, automation that is reliable and accurate will tend to earn the trust of operators. This has not always been the case with new technology. Early designs of some automated alerting systems, such as the Ground Proximity Warning System (GPWS), were not trusted by pilots because of their propensity for false alarms. When corporate policy or federal regulation mandates the use of automation that is not trusted, operators may resort to "creative disablement" of the device (Satchell, 1993).

Unfortunately, mistrust of alerting systems is widespread in many work settings because of the false alarm problem. These systems are set with a decision threshold or criterion that minimizes the chance of a missed warning while keeping the false alarm rate below some low value. Two important factors that influence the device false alarm rate and, hence, the operator's trust in an automated alerting system are the values of the decision criterion and the base rate of the hazardous condition.

The initial consideration for setting the decision threshold of an automated warning system is the cost of a missed signal versus that of a false alarm. Missed signals (e.g., total engine failure) have a phenomenally high cost, yet their frequency is undoubtedly very low. However, if a system is designed to minimize misses at all costs, then frequent device false alarms may result. A low false alarm rate is necessary for acceptance of warning systems by human operators. Accordingly, setting a strict decision criterion to obtain a low false alarm rate would appear to be good design practice.

However, a very stringent criterion may not provide sufficient advance warning. In an analysis of automobile collision warning systems, Farber and Paley (1993) suggested that too low a false alarm rate may also be undesirable because rear-end collisions occur very infrequently (perhaps once or twice in the lifetime of a driver). If the system never emits a false alarm, then the first time the warning sounds would be just before a crash. Under these conditions, the driver might not respond alertly to such an improbable event. Farber and Paley (1993) speculated that an ideal system would be one that signals a collision-possible condition, even though the driver would probably avoid a crash. Although technically a false alarm, this type of information might be construed as a warning aid in allowing improved response to an alarm in a collision-likely situation. Thus all false alarms need not necessarily be harmful. This idea is similar to the concept of a "likelihood-alarm," in which more than the usual two alarm states are used to indicate several possible levels of the dangerous condition, ranging from very unlikely to very certain (Sorkin, Kantowitz, & Kantowitz, 1988).
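The likelihood-alarm idea can be sketched in a few lines of code. The sketch below is ours; the level names and probability cut points are invented for illustration and are not taken from Sorkin, Kantowitz, and Kantowitz (1988).

ALERT_LEVELS = [
    (0.00, "no alert"),
    (0.05, "advisory: condition possible"),
    (0.25, "caution: condition likely"),
    (0.60, "warning: condition very likely"),
]

def likelihood_alert(estimated_probability):
    """Map an estimated probability of the dangerous condition to a graded alert state."""
    label = ALERT_LEVELS[0][1]
    for cutoff, name in ALERT_LEVELS:
        if estimated_probability >= cutoff:
            label = name
    return label

print(likelihood_alert(0.10))  # prints "advisory: condition possible"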
Setting the decision criterion for a low false alarm rate is insufficient by itself for ensuring high alarm reliability. Despite the best intentions of designers, the availability of the most advanced sensor technology, and the development of sensitive detection algorithms, one fact may conspire to limit the effectiveness of alarms: the low a priori probability or base rate of most hazardous events. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm (the probability that, given an alarm, a hazardous condition exists) can be low even for sensitive warning systems.
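This relationship is simply Bayes' rule and is easy to verify numerically. The short sketch below is ours, not part of the original analysis; the hit rate, false alarm rate, and base rate are illustrative values in the spirit of the example developed in the next section.

def posterior_true_alarm(base_rate, hit_rate, false_alarm_rate):
    """P(hazardous condition | alarm), by Bayes' rule, for a binary alerting system."""
    true_alarms = base_rate * hit_rate
    all_alarms = true_alarms + (1.0 - base_rate) * false_alarm_rate
    return true_alarms / all_alarms

# Illustrative values: even a detector with a .999 hit rate and a .06 false alarm
# rate, monitoring an event with a base rate of .001, yields a posterior of only
# about .016, that is, roughly one true hazard per sixty alarms.
print(posterior_true_alarm(0.001, 0.999, 0.06))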


Parasuraman, Hancock, and Olofinboba (1997) carried out a Bayesian analysis to examine the dependence of the posterior probability on the base rate for a system with a given detection sensitivity (d'). Figure 3 shows a family of curves representing the different posterior probabilities of a true alarm when the decision criterion (β) is varied for a warning system with fixed sensitivity (in this example, d' = 4.7). For example, β can be set so that this warning system misses only 1 of every 1000 hazardous events (hit rate = .999) while having a false alarm rate of .0594. Despite the high hit rate and relatively low false alarm rate, the posterior odds of a true alarm with such a system could be very low. For example, when the a priori probability (base rate) of a hazardous condition is low (say, .001), only 1 in 59 alarms that the system emits represents a true hazardous condition (posterior probability = .0168). It is not surprising, then, that many human operators tend to ignore and turn off alarms; they have cried wolf once too often (Sorkin, 1988). As Figure 3 indicates, reliably high posterior alarm probabilities are guaranteed for only certain combinations of the decision criterion and the base rate.

Even if operators do attend to an alarm, they may be slow to respond when the posterior probability is low. Getty, Swets, Pickett, and Gounthier (1995) tested participants' response to a visual alert while they performed a tracking task. Participants became progressively slower to respond as the posterior probability of a true alarm was reduced from .75 to .25. A case of a prison escape in which the escapee deliberately set off a motion detector alarm, knowing that the guards would be slow to respond, provides a real-life example of this phenomenon (Casey, 1993).

[Figure 3: line graph; y-axis, posterior probability P(S|R) from 0 to 1.0; x-axis, a priori probability (base rate) from 0 to 0.10; one curve per value of the decision criterion β.]

Figure 3. Posterior probability P(S|R) of a hazardous condition S given an alarm response R for an automated warning system with fixed sensitivity d' = 4.7, plotted as a function of a priori probability (base rate) of S. From Parasuraman, Hancock, and Olofinboba (1997). Reprinted with permission of Taylor & Francis.
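Readers who wish to regenerate numbers of the kind quoted in the text and plotted in Figure 3 can do so under the conventional equal-variance Gaussian signal detection model. That model is our assumption here (Parasuraman, Hancock, and Olofinboba, 1997, should be consulted for the exact formulation), so the sketch reproduces the published values only approximately.

from scipy.stats import norm  # equal-variance Gaussian model: noise N(0,1), signal N(d',1)

D_PRIME = 4.7       # fixed sensitivity, as in Figure 3
HIT_RATE = 0.999    # criterion chosen so that only 1 in 1,000 hazardous events is missed

criterion = norm.isf(HIT_RATE, loc=D_PRIME)  # decision criterion giving that hit rate
false_alarm_rate = norm.sf(criterion)        # roughly .05-.06; the text quotes .0594

def posterior(base_rate):
    """P(S|R): posterior probability of a true alarm at a given base rate."""
    hits = base_rate * HIT_RATE
    return hits / (hits + (1.0 - base_rate) * false_alarm_rate)

# Sweeping the base rate traces one curve of the Figure 3 family; at a base rate
# of .001 the posterior is only about .02 (the text quotes .0168), and it is
# still only about .5 at a base rate of .05.
for p in (0.0001, 0.001, 0.01, 0.05, 0.10):
    print(f"base rate {p:.4f}  ->  P(S|R) = {posterior(p):.3f}")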


These results indicate that designers of automated alerting systems must take into account not only the decision threshold at which these systems are set (Kuchar & Hansman, 1995; Swets, 1992) but also the a priori probabilities of the condition to be detected (Parasuraman, Hancock, et al., 1997). Only then will operators tend to trust and use the system. In many warning systems, designers accept a somewhat high false alarm rate if it ensures a high hit rate. The reason for this is that the costs of a miss can be extremely high, whereas the costs of a false alarm are thought to be much lower.

For example, if a collision avoidance system's decision criterion is set high enough to avoid a large number of false alarms, it may also miss a real event and allow two airplanes to collide. Having set the criterion low enough to ensure that very few real events are missed, the designers must accept a higher expected level of false alarms. Normally this is thought to incur a very low cost (e.g., merely the cost of executing an unnecessary avoidance maneuver), but the consequences of frequent false alarms and consequential loss of trust in the system are often not included in this trade-off.
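The trade-off described in the two preceding paragraphs can be made concrete with a toy expected-cost calculation. Every number below is invented for illustration; the point is only that a false alarm cost that omits eroded trust makes frequent false alarms look nearly free.

def expected_costs(base_rate, hit_rate, false_alarm_rate, miss_cost, false_alarm_cost):
    """Return the expected miss cost and expected false-alarm cost per encounter."""
    miss_term = base_rate * (1.0 - hit_rate) * miss_cost
    false_alarm_term = (1.0 - base_rate) * false_alarm_rate * false_alarm_cost
    return miss_term, false_alarm_term

# Invented numbers: a lenient criterion (hit rate .999, false alarm rate .06)
# guarding a rare event (base rate .001) whose miss cost is very large.
BASE_RATE, HIT_RATE, FA_RATE, MISS_COST = 0.001, 0.999, 0.06, 1_000_000.0

for fa_cost, label in ((1.0, "maneuver only"), (200.0, "maneuver plus eroded trust")):
    miss_term, fa_term = expected_costs(BASE_RATE, HIT_RATE, FA_RATE, MISS_COST, fa_cost)
    print(f"{label}: expected miss cost {miss_term:.2f}, expected false-alarm cost {fa_term:.2f}")
# Costed as an unnecessary maneuver alone, false alarms look nearly free
# (0.06 vs. 1.00 per encounter); with a notional trust-erosion cost attached,
# they dominate the expected cost (about 12 vs. 1.00).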

Practical Implications

The costs of operator disuse because of mistrust of automation can be substantial. Operator disabling or ignoring of alerting systems has played a role in several accidents. In the Conrail accident near Baltimore in 1987, investigators found that the alerting buzzer in the train cab had been taped over. Subsequent investigation showed similar disabling of alerting systems in other cabs, even after prior notification of an inspection was given.

Interestingly, following another recent train accident involving Amtrak and MARC trains near Washington, D.C., there were calls for fitting trains with automated braking systems. This solution seeks to compensate for the possibility that train operators ignore alarms by overriding the operator and bringing a train that is in violation of a speed limit to a halt. This is one of the few instances in which a conscious design decision is made to allow automation to override the human operator, and it reflects, as was suggested earlier, an explicit expression of the relative levels of trust between the human operator and automation. In most cases, this trade-off is decided in favor of the operator; in this case, however, it has been made in favor of automation specifically because the operator has been judged untrustworthy.

Designers of alerting systems must take into account both the decision threshold and the base rate of the hazardous condition in order for operators to trust and use these systems. The costs of unreliable automation may be hidden. If high false alarm rates cause operators not to use a system and disuse results in an accident, system designers or managers determine that the operator was at fault and implement automation to compensate for the operator's failures. The operator may not trust the automation and could attempt to defeat it; the designer or manager does not trust the operator and puts automation in a position of greater authority. This cycle of operator distrust of automation and designer or manager distrust of the operator may lead to abuse of automation.

ABUSE OF AUTOMATION

Automation abuse is the automation of functions by designers and implementation by managers without due regard for the consequences for human (and hence system) performance and the operator's authority over the system. The design and application of automation, whether in aviation or in other domains, has typically been technology centered. Automation is applied where it provides an economic benefit by performing a task more accurately or more reliably than the human operator or by replacing the operator at a lower cost. As mentioned previously, technical and economic factors are valid reasons for automation, but only if human performance in the resulting system is not adversely affected.


When automation is applied for reasons of safety, it is often because a particular incident or accident identifies cases in which human error was seen to be a major contributing factor. Designers attempt to remove the source of error by automating functions carried out by the human operator. The design questions revolve around the hardware and software capabilities required to achieve machine control of the function. Less attention is paid to how the human operator will use the automation in the new system or how operator tasks will change. As Riley (1995) pointed out, when considering how automation is used rather than designed, one moves away from the normal sphere of influence (and interest) of system designers.

Nevertheless, understanding how automation is used may help developers produce better automation. After all, designers do make presumptions about operator use of automation. The implicit assumption is that automation will reduce operator errors. For example, automated "solutions" have been proposed for many errors that automobile drivers make: automated navigation systems for route-finding errors, collision avoidance systems for braking too late behind a stopped vehicle, and alertness indicators for drowsy drivers. The critical human factors questions regarding how drivers would use such automated systems have not been examined (Hancock & Parasuraman, 1992; Hancock, Parasuraman, & Byrne, 1996).

Several things are wrong with this approach. First, one cannot remove human error from the system simply by removing the human operator. Indeed, one might think of automation as a means of substituting the designer for the operator. To the extent that a system is made less vulnerable to operator error through the introduction of automation, it is made more vulnerable to designer error. As an example of this, consider the functions that depend on the weight-on-wheels sensors on some modern aircraft. The pilot is not able to deploy devices to stop the airplane on the runway after landing unless the system senses that the gear are on the ground; this prevents the pilot from inadvertently deploying the spoilers to defeat lift or operate the thrust reversers while still in the air. These protections are put in place because of a lack of trust in the pilot to not do something unreasonable and potentially catastrophic. If the weight-on-wheels sensor fails, however, the pilot is prevented from deploying these devices precisely when they are needed. This represents an error of the designer, and it has resulted in at least one serious incident (Poset, 1992) and accident (Main Commission Aircraft Accident Investigation, 1994).
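In effect, the weight-on-wheels interlock writes the designer's assumption into code. The fragment below is a deliberately simplified, hypothetical sketch, not actual avionics logic, and the names are ours.

def deployment_allowed(weight_on_wheels: bool) -> bool:
    """Designer's rule: ground spoilers and thrust reversers only when the gear sense ground."""
    return weight_on_wheels

# A failed sensor after touchdown now blocks deployment exactly when it is
# needed, and there is no operator override:
print(deployment_allowed(False))  # False: devices remain locked out

# One equally hypothetical way to soften the single-sensor dependence is to
# accept corroborating cues or an explicit, recorded pilot override:
def deployment_allowed_with_override(weight_on_wheels: bool,
                                     wheel_speed_high: bool,
                                     pilot_override: bool) -> bool:
    return weight_on_wheels or wheel_speed_high or pilot_override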
Second, certain management practices or corporate policies may prevent human operators from using automation effectively, particularly under emergency conditions. The weight-on-wheels sensor case represents an example of the human operator not being able to use automation because of prior decisions made by the designer of automation. Alternatively, even though automation may be designed to be engaged flexibly, management may not authorize its use in certain conditions. This appears to have been the case in a recent accident involving a local transit train in Gaithersburg, Maryland. The train collided with a standing train in a heavy snowstorm when the automatic speed control system failed to slow the train sufficiently when approaching the station because of snow on the tracks. It was determined that the management decision to refuse the train operator's request to run the train manually because of poor weather was a major factor in the accident (National Transportation Safety Board, 1997a). Thus automation can also act as a surrogate for the manager, just as it can for the system designer.

Third, the technology-centered approach may place the operator in a role for which humans are not well suited. Indiscriminate application of automation, without regard to the resulting roles and responsibilities of the operator, has led to many of the current complaints about automation: for example, that it raises workload when workload is already high and that it is difficult to monitor and manage. In many cases, it has reduced operators to system monitors, a condition that can lead to overreliance, as demonstrated earlier.


Billings (1991) recognized the danger of defining the operator's role as a consequence of the application of automation. His human-centered approach calls for the operator to be given an active role in system operation, regardless of whether automation might be able to perform the function in question better than the operator. This recommendation reflects the idea that the overall system may benefit more by having an operator who is aware of the environmental conditions the system is responding to and the status of the process being performed, by virtue of active involvement in the process, than by having an operator who may not be capable of recognizing problems and intervening effectively, even if it means that system performance may not be as good as it might be under entirely automatic control. Underlying this recognition is the understanding that only human operators can be granted fiduciary responsibility for system safety, so the human operator should be at the heart of the system, with full authority over all its functions.

Fourth, when automation is granted a high level of authority over system functions, the operator requires a proportionately high level of feedback so that he or she can effectively monitor the states, behaviors, and intentions of the automation and intervene if necessary. The more removed the operator is from the process, the more this feedback must compensate for this lack of involvement; it must overcome the operator's complacency and demand attention, and it must overcome the operator's potential lack of awareness once that attention is gained. The importance of feedback has been overlooked in some highly automated systems (Norman, 1990). When feedback has to compensate for the lack of direct operator involvement in the system, it takes on an additional degree of importance.

In general, abuse of automation can lead to problems with costs that can reduce or even nullify the economic or other benefits that automation can provide. Moreover, automation abuse can lead to misuse and disuse of automation by operators. If this results in managers' implementing additional high-level automation, further disuse or misuse by operators may follow, and so on, in a vicious circle.

CONCLUSIONS: DESIGNING FOR AUTOMATION USAGE

Our survey of the factors associated with the use, misuse, disuse, and abuse of automation points to several practical implications for designing for more effective automation usage. Throughout this paper we have suggested many strategies for designing, training for, and managing automation based on these considerations. These strategies can be summarized as follows.

Automation Use

1. Better operator knowledge of how the automation works results in more appropriate use of automation. Knowledge of the automation design philosophy may also encourage more appropriate use.
2. Although the influences of many factors affecting automation use are known, large individual differences make systematic prediction of automation use by specific operators difficult. For this reason, policies and procedures should highlight the importance of taking specific considerations into account when deciding whether or not to use automation, rather than leaving that decision vulnerable to biases and other factors that may result in suboptimal strategies.
3. Operators should be taught to make rational automation use decisions.
4. Automation should not be difficult or time consuming to turn on or off. Requiring a high level of cognitive overhead in managing automation defeats its potential workload benefits, makes its use less attractive to the operator, and makes it a more likely source of operator error.

Automation Misuse

1. System designers, regulators, and operators should recognize that overreliance happens and should understand its antecedent conditions and consequences. Factors that may lead to overreliance should be countered. For example, workload should not be such that the operator fails to monitor automation effectively. Individual operators who demonstrate a bias toward overreliance because of specific factors should be taught to recognize these biases and compensate for them. Overreliance on automation may also signal a low level of self-confidence in the operator's own manual control skills, suggesting that further training or evaluation of the operator's suitability for the job is needed.


2. Operators use automation cues as heuristics for making decisions. Although the use of heuristics is usually effective, occasionally it may lead to error because of decision biases. Training is required to recognize and counter decision biases that may lead to overreliance on automation.
3. Although it is often pointed out that human monitoring is subject to errors, in many instances operational monitoring can be efficient. Human monitoring tends to be poor in work environments that do not conform to well-established ergonomics design principles, in high-workload situations, and in systems in which the automation is highly autonomous and there is little opportunity for manual experience with the automated tasks.
4. Feedback about the automation's states, actions, and intentions must be provided, and it must be salient enough to draw operator attention when he or she is complacent and informative enough to enable the operator to intervene effectively.

Automation Disuse

1. The impact of automation failures, such as false alarm rates, on subsequent operator reliance on the automation should be considered as part of the process of setting automation performance requirements; otherwise, operators may grow to mistrust the automation and stop using it. When the operator makes an error, system designers and managers may grow to mistrust the operator and look to automation as the ultimate authority in the system.
2. Designers of automated alerting systems must take into account not only the decision threshold at which these systems are set but also the base rate of the hazardous condition to be detected.
3. Designers of automated alerting systems should consider using alarms that indicate when a dangerous situation is possible ("likelihood" alarms), rather than encouraging the operator to rely on the alarm as the final authority on the existence of a dangerous condition.

Automation Abuse

1. The operator's role should be defined based on the operator's responsibilities and capabilities, rather than as a by-product of how the automation is implemented.
2. The decision to apply automation to a function should take into account the need for active operator involvement in the process, even if such involvement reduces system performance from what might be achieved with a fully automated solution; keeping the operator involved provides substantial safety benefits by keeping the operator informed and able to intervene.
3. Automation simply replaces the operator with the designer. To the extent that a system is made less vulnerable to operator error through the application of automation, it is made more vulnerable to designer error. The potential for and costs of designer error must be considered when making this trade-off.
4. Automation can also act as a surrogate for the manager. If the designer applied a specific automation philosophy to the design of the system, that philosophy should be provided to the manager so that operational practices are not imposed that are incompatible with the design. In addition, just as system designers must be made aware of automation-related issues, so must those who dictate how it will be used.

Finally, two themes merit special emphasis. First, many of the problems of automation misuse, disuse, and abuse arise from differing expectations among the designers, managers, and operators of automated systems. Our purpose is not to assign blame to designers, managers, or operators but to point out that the complexities of the operational environment and individual human operators may cause automation to be used in ways different from how designers and managers intend. Discovering the root causes of these differences is a necessary step toward informing the expectations of designers and managers so that operators are provided with automation that better meets their needs and are given the authority and decision-making tools required to use the automation to its best effect.


Second, individual differences in automation use are ubiquitous. Human use of automation is complex, subject to a wide range of influences, and capable of exhibiting a wide range of patterns and characteristics. That very complexity makes the study of automation a large undertaking, but the growing importance of automation in systems makes such study increasingly imperative. Better understanding of why automation is used, misused, disused, and abused will help future designers, managers, and operators of systems avoid many of the errors that have plagued those of the past and present. Application of this knowledge can lead to improved systems, the development of effective training curricula, and the formulation of judicious policies and procedures involving automation use.

ACKNOWLEDGMENTS

We thank Kathy Abbott, Peter Hancock, Kathy Mosier, Bill Howell, and two anonymous reviewers for helpful comments on a previous draft of this paper. This paper also benefited from discussions with past and present members of the Human Factors of Automation group at the Cognitive Science Laboratory, including Evan Byrne, John Deaton, Jackie Duley, Scott Galster, Brian Hilburn, Anthony Masalonis, Robert Molloy, Mustapha Mouloua, and Indramani Singh. This research was supported by research grant NAG-1-1296 from the National Aeronautics and Space Administration, Langley Research Center, Langley, Virginia, to the Catholic University of America (R.P.) and by contract DTFA01-91-C-0039 from the Federal Aviation Administration to Honeywell Technology Center (V.R.). The views presented in this paper are those of the authors and not necessarily representative of these organizations.

REFERENCES

Abbott, T. S. (1990). A simulation evaluation of the engine monitoring and control display (NASA Tech. Memorandum 1960). Hampton, VA: NASA Langley Research Center.
Air Transport Association. (1989). National plan to enhance aviation safety through human factors improvements. Washington, DC: Author.
Bainbridge, L. (1983). Ironies of automation. Automatica, 19, 775-779.
Bennett, K., & Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34, 513-533.
Bessant, J., Levy, P., Ley, C., Smith, S., & Tranfield, D. (1992). Organization design for factory 2000. International Journal of Human Factors in Manufacturing, 2, 95-125.
Billings, C. E. (1991). Human-centered aircraft automation: A concept and guidelines (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA Ames Research Center.
Billings, C. E., & Woods, D. D. (1994). Concerns about adaptive automation in aviation systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 264-269). Hillsdale, NJ: Erlbaum.
Bowers, C. A., Oser, R. J., Salas, E., & Cannon-Bowers, J. A. (1996). Team performance in automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 243-263). Hillsdale, NJ: Erlbaum.
Byrne, E. A., & Parasuraman, R. (1996). Psychophysiology and adaptive automation. Biological Psychology, 42, 249-268.
Casey, S. (1993). Set phasers on stun. Santa Barbara, CA: Aegean.
Casner, S. (1994). Understanding the determinants of problem-solving behavior in a complex environment. Human Factors, 36, 580-596.
Chambers, N., & Nagel, D. C. (1985). Pilots of the future: Human or computer? Communications of the Association for Computing Machinery, 28, 1187-1199.
Corwin, W. H., Funk, H., Levitan, L., & Bloomfield, J. (1993). Flight crew information requirements (Contractor Rep. DTFA-91-C-00040). Washington, DC: Federal Aviation Administration.
Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.
Duley, J. A., Westerman, S., Molloy, R., & Parasuraman, R. (in press). Effects of display superimposition on monitoring of automated tasks. In Proceedings of the 9th International Symposium on Aviation Psychology. Columbus: Ohio State University.
Edwards, E. (1977). Automation in civil transport aircraft. Applied Ergonomics, 4, 194-198.
Endsley, M. (1996). Automation and situation awareness. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 163-181). Hillsdale, NJ: Erlbaum.
Erzberger, H. (1992). CTAS: Computer intelligence for air traffic control in the terminal area (NASA Tech. Memorandum 103959). Moffett Field, CA: NASA Ames Research Center.
Farber, E., & Paley, M. (1993, April). Using freeway traffic data to estimate the effectiveness of rear-end collision countermeasures. Paper presented at the Third Annual IVHS America Meeting, Washington, DC.
Federal Aviation Administration. (1990). The national plan for aviation human factors. Washington, DC: Author.
Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control system. Washington, DC: National Research Council.
Funk, K., Lyall, B., & Riley, V. (1995). Perceived human factors problems of flightdeck automation (Phase I Final Rep.). Corvallis: Oregon State University.
Gerwin, D., & Leung, T. K. (1986). The organizational impacts of flexible manufacturing systems. In T. Lupton (Ed.), Human factors: Man, machines and new technology (pp. 157-170). New York: Springer-Verlag.
Getty, D. J., Swets, J. A., Pickett, R. M., & Gounthier, D. (1995). System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning on human response time. Journal of Experimental Psychology: Applied, 1, 19-33.
Gonzalez, R. C., & Howington, L. C. (1977). Machine recognition of abnormal behavior in nuclear reactors. IEEE Transactions on Systems, Man and Cybernetics, SMC-7, 717-728.
Grabowski, M., & Wallace, W. A. (1993). An expert system for maritime pilots: Its design and assessment using gaming. Management Science, 39, 1506-1520.


Hancock, P. A. (1996). Teleology for technology. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 461-497). Hillsdale, NJ: Erlbaum.
Hancock, P. A., & Chignell, M. H. (1989). Intelligent interfaces. Amsterdam: Elsevier.
Hancock, P. A., & Parasuraman, R. (1992). Human factors and safety in the design of intelligent vehicle-highway systems. Journal of Safety Research, 23, 181-198.
Hancock, P. A., Parasuraman, R., & Byrne, E. A. (1996). Driver-centered issues in advanced automation for motor vehicles. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 337-364). Hillsdale, NJ: Erlbaum.
Harris, W., Hancock, P. A., & Arthur, E. (1993). The effect of task load projection on automation use, performance, and workload. In Proceedings of the 7th International Symposium on Aviation Psychology (pp. 890A-890F). Columbus: Ohio State University.
Helmreich, R. L. (1984). Cockpit management attitudes. Human Factors, 26, 583-589.
Hilburn, B., Jorna, P. G. A. M., & Parasuraman, R. (1995). The effect of advanced ATC automation on mental workload and monitoring performance: An empirical investigation in Dutch airspace. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 1382-1387). Columbus: Ohio State University.
Hopkin, V. D. (1995). Human factors in air-traffic control. London: Taylor & Francis.
IVHS America. (1992). Strategic plan for IVHS in the United States. Washington, DC: Author.
Kirlik, A. (1993). Modeling strategic behavior in human-automation interaction: Why an "aid" can (and should) go unused. Human Factors, 35, 221-242.
Kuchar, J. K., & Hansman, R. J. (1995). A probabilistic methodology for the evaluation of alerting system performance. In Proceedings of the IFAC/IFIP/IFORS/IEA Symposium. Cambridge, MA.
Langer, E. (1989). Mindfulness. Reading, MA: Addison-Wesley.
Lee, J. D., & Moray, N. (1992). Trust, control strategies, and allocation of function in human-machine systems. Ergonomics, 35, 1243-1270.
Lee, J. D., & Moray, N. (1994). Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, 40, 153-184.
Lewandowsky, S., & Nikolic, D. (1995, April). A connectionist approach to modeling the effects of automation. Paper presented at the 8th International Symposium on Aviation Psychology, Columbus, OH.
Main Commission Aircraft Accident Investigation-WARSAW. (1994). Report on the accident to Airbus A320-211 aircraft in Warsaw on 14 September 1993. Warsaw: Author.
May, P., Molloy, R., & Parasuraman, R. (1993, October). Effects of automation reliability and failure rate on monitoring performance in a multitask environment. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
McClellan, J. M. (1994, June). Can you trust your autopilot? Flying, 76-83.
McClumpha, A., & James, M. (1994, June). Understanding automated aircraft. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 314-319). Hillsdale, NJ: Erlbaum.
McDaniel, J. W. (1988). Rules for fighter cockpit automation. In Proceedings of the IEEE National Aerospace and Electronics Conference (pp. 831-838). New York: IEEE.
Ministère de l'Equipement, des Transports et du Tourisme. (1993). Rapport de la Commission d'Enquête sur l'Accident survenu le 20 Janvier 1992 près du Mont Sainte-Odile à l'Airbus A320 immatriculé F-GGED exploité par la Compagnie Air Inter. Paris: Author.
Molloy, R., Deaton, J. E., & Parasuraman, R. (1995). Monitoring performance with the EMACS display in an automated environment. In Proceedings of the 8th International Symposium on Aviation Psychology (pp. 68-73). Columbus, OH.
Molloy, R., & Parasuraman, R. (1994). Automation-induced monitoring inefficiency: The role of display integration and redundant color coding. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 224-228). Hillsdale, NJ: Erlbaum.
Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.
Mosier, K., Heers, S., Skitka, L., & Burdick, M. (1996, March). Patterns in the use of cockpit automation. Paper presented at the 2nd Automation Technology and Human Performance Conference, Cocoa Beach, FL.
Mosier, K., & Skitka, L. J. (1996). Human decision makers and automated decision aids: Made for each other? In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 201-220). Hillsdale, NJ: Erlbaum.
Mosier, K., Skitka, L., Burdick, M., & Heers, S. (1996). Automation bias, accountability, and verification behaviors. In Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting (pp. 204-208). Santa Monica, CA: Human Factors and Ergonomics Society.
Mosier, K., Skitka, L. J., & Korte, K. J. (1994). Cognitive and social psychological issues in flight crew/automation interaction. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 191-197). Hillsdale, NJ: Erlbaum.
Mouloua, M., & Parasuraman, R. (1994). Human performance in automated systems: Recent research and trends. Hillsdale, NJ: Erlbaum.
Muir, B. M. (1988). Trust between humans and machines, and the design of decision aids. In E. Hollnagel, G. Mancini, & D. D. Woods (Eds.), Cognitive engineering in complex dynamic worlds (pp. 71-83). London: Academic.
National Transportation Safety Board. (1973). Eastern Airlines L-1011, Miami, Florida, 20 December 1972 (Rep. NTSB-AAR-73-14). Washington, DC: Author.
National Transportation Safety Board. (1986). China Airlines B-747-SP, 300 NM northwest of San Francisco, 19 February 1985 (Rep. NTSB-AAR-86-03). Washington, DC: Author.
National Transportation Safety Board. (1994). Aircraft accident report: Stall and loss of control on final approach (Rep. NTSB-AAR-94/07). Washington, DC: Author.
National Transportation Safety Board. (1997a). Collision of Washington Metropolitan Transit Authority Train T111 with standing train at Shady Grove Station, near Gaithersburg, MD, January 6, 1996 (Rep. NTSB-ATL-96-MR008). Washington, DC: Author.
National Transportation Safety Board. (1997b). Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, Massachusetts, June 10, 1995 (Rep. NTSB/MAR-97-01). Washington, DC: Author.
Nickerson, R. S. (1995). Human interaction with computers and robots. International Journal of Human Factors in Manufacturing, 5, 5-27.
Norman, D. (1990). The problem with automation: Inappropriate feedback and interaction, not over-automation. Proceedings of the Royal Society of London, B237, 585-593.
Palmer, E., & Degani, A. (1991). Electronic checklists: Evaluation of two levels of automation. In Proceedings of the 6th International Symposium on Aviation Psychology (pp. 178-183). Columbus: Ohio State University.
Parasuraman, R. (1993). Effects of adaptive function allocation on human performance. In D. J. Garland & J. A. Wise (Eds.), Human factors and advanced aviation technologies (pp. 147-157). Daytona Beach, FL: Embry-Riddle Aeronautical University Press.


Parasuraman, R., Bahri, T., Deaton, J., Morrison, J., & Barnes, M. (1992). Theory and design of adaptive automation in aviation systems (Progress Rep. NAWCADWAR-92033-60). Warminster, PA: Naval Air Warfare Center, Aircraft Division.
Parasuraman, R., Hancock, P. A., & Olofinboba, O. (1997). Alarm effectiveness in driver-centered collision-warning systems. Ergonomics, 40, 390-399.
Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced "complacency." International Journal of Aviation Psychology, 3, 1-23.
Parasuraman, R., & Mouloua, M. (Eds.). (1996). Automation and human performance: Theory and applications. Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1994). Monitoring automation failures in human-machine systems. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 45-49). Hillsdale, NJ: Erlbaum.
Parasuraman, R., Mouloua, M., & Molloy, R. (1996). Effects of adaptive task allocation on monitoring of automated systems. Human Factors, 38, 665-679.
Parasuraman, R., Mouloua, M., Molloy, R., & Hilburn, B. (1996). Monitoring automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 91-115). Hillsdale, NJ: Erlbaum.
Phillips, D. (1995, August 11). System failure cited in ship grounding. Washington Post, p. A7.
Planzer, N., & Hoffman, M. A. (1995). Advancing free flight through human factors (Workshop Rep.). Washington, DC: FAA.
Poset, R. K. (1992). No brakes, no reversers! Air Line Pilot, 61(4), 28-29.
Radio Technical Commission for Aeronautics. (1995). Free flight implementation (RTCA Task Force 3 Rep.). Washington, DC: Author.
Rasmussen, J. (1986). Information processing and human-machine interaction. Amsterdam: North-Holland.
Reason, J. T. (1990). Human error. Cambridge, England: Cambridge University Press.
Riley, V. (1989). A general model of mixed-initiative human-machine systems. In Proceedings of the Human Factors Society 33rd Annual Meeting (pp. 124-128). Santa Monica, CA: Human Factors and Ergonomics Society.
Riley, V. (1994a). Human use of automation. Unpublished doctoral dissertation, University of Minnesota.
Riley, V. (1994b). A theory of operator reliance on automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Recent research and trends (pp. 8-14). Hillsdale, NJ: Erlbaum.
Riley, V. (1995). What avionics engineers should know about pilots and automation. In Proceedings of the 14th AIAA/IEEE Digital Avionics Systems Conference (pp. 252-257). Cambridge, MA: AIAA.
Riley, V. (1996). Operator reliance on automation: Theory and data. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 19-35). Hillsdale, NJ: Erlbaum.
Riley, V., Lyall, B., & Wiener, E. (1993). Analytic methods for flight-deck automation design and evaluation, phase two report: Pilot use of automation (Tech. Rep.). Minneapolis, MN: Honeywell Technology Center.
Rouse, W. B. (1988). Adaptive aiding for human/computer control. Human Factors, 30, 431-438.
Sarter, N. (1996). Cockpit automation: From quantity to quality, from individual pilots to multiple agents. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 267-280). Hillsdale, NJ: Erlbaum.
Sarter, N., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.
Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of pilots' model and awareness of the flight management system. International Journal of Aviation Psychology, 4, 1-28.
Sarter, N., & Woods, D. D. (1995). "Strong, silent, and out-of-the-loop": Properties of advanced (cockpit) automation and their impact on human-automation interaction (Tech. Rep. CSEL 95-TR-01). Columbus: Ohio State University, Cognitive Systems Engineering Laboratory.
Satchell, P. (1993). Cockpit monitoring and alerting systems. Aldershot, England: Ashgate.
Scerbo, M. S. (1996). Theoretical perspectives on adaptive automation. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 37-63). Hillsdale, NJ: Erlbaum.
Schank, R. (1984). The cognitive computer. Reading, MA: Addison-Wesley.
Sheridan, T. (1980). Computer control and human alienation. Technology Review, 10, 61-73.
Sheridan, T. (1992). Telerobotics, automation, and supervisory control. Cambridge: MIT Press.
Singh, I. L., Deaton, J. E., & Parasuraman, R. (1993, October). Development of a scale to measure pilot attitudes to cockpit automation. Paper presented at the Annual Meeting of the Human Factors Society, Seattle, WA.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced "complacency": Development of the complacency-potential rating scale. International Journal of Aviation Psychology, 3, 111-121.
Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. Journal of General Psychology, 120, 357-373.
Singh, I. L., Molloy, R., & Parasuraman, R. (1997). Automation-related monitoring inefficiency: The role of display location. International Journal of Human Computer Studies, 46, 17-30.
Smith, M. J., & Carayon, P. (1995). New technology, automation, and work organization. International Journal of Human Factors in Manufacturing, 5, 95-116.
Sorkin, R. D. (1988). Why are people turning off our alarms? Journal of the Acoustical Society of America, 84, 1107-1108.
Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30, 445-459.
Spitzer, C. R. (1987). Digital avionics systems. Englewood Cliffs, NJ: Prentice-Hall.
Sun, H., & Riis, J. O. (1994). Organizational, technical, strategic, and managerial issues along the implementation process of advanced manufacturing technologies: A general framework and implementation. International Journal of Human Factors in Manufacturing, 4, 23-36.
Swets, J. A. (1992). The science of choosing the right decision threshold in high-stakes diagnostics. American Psychologist, 47, 522-532.
Thompson, J. M. (1994). Medical decision making and automation. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends (pp. 68-72). Hillsdale, NJ: Erlbaum.
Tversky, A., & Kahneman, D. (1984). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Vicente, K. J., & Rasmussen, J. (1990). The ecology of human-machine systems II: Mediating "direct perception" in complex work domains. Ecological Psychology, 2, 207-249.
Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25, 305-317.


Whitfield, D., Ball, R. G., & Ord, G. (1980). Some human factors aspects of computer-aiding concepts for air traffic controllers. Human Factors, 22, 569-580.
Wickens, C. D. (1992). Engineering psychology and human performance (2nd ed.). New York: HarperCollins.
Wickens, C. D. (1994). Designing for situation awareness and trust in automation. In Proceedings of the IFAC Conference on Integrated Systems Engineering (pp. 174-179). Baden-Baden, Germany: International Federation on Automatic Control.
Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors, 27, 75-90.
Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433-461). San Diego: Academic.
Wiener, E. L. (1989). Human factors of advanced technology ("glass cockpit") transport aircraft (Rep. 177528). Moffett Field, CA: NASA Ames Research Center.
Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23, 995-1011.
Will, R. P. (1991). True and false dependence on technology: Evaluation with an expert system. Computers in Human Behavior, 7, 171-183.
Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation and human performance: Theory and applications (pp. 3-17). Hillsdale, NJ: Erlbaum.
Woods, D. D., Wise, J., & Hanes, L. (1981). An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society 25th Annual Meeting (pp. 110-114). Santa Monica, CA: Human Factors and Ergonomics Society.
Yeh, Y.-Y., & Wickens, C. D. (1988). The dissociation of subjective measures of mental workload and performance. Human Factors, 30, 111-120.
Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.

Raja Parasuraman received a Ph.D. in psychology from the University of Aston, Birmingham, England, in 1976. He is a professor of psychology and director of the Cognitive Science Laboratory at the Catholic University of America, Washington, D.C.

Victor Riley received a Ph.D. in experimental psychology from the University of Minnesota in 1994. He is a senior research scientist at Honeywell Technology Center, Minneapolis, Minnesota.

Date received: June 17, 1996
Date accepted: November 11, 1996
