
Understanding Fault Tolerance and Reliability

Guest Editors’ Introduction

Future systems will be more complex and so more susceptible to failure. Despite many proposals in the past three decades, fault tolerance remains out of the reach of the average computer user. The industry needs techniques that add reliability without adding significant cost.

Arun K. Somani, University of Washington
Nitin H. Vaidya, Texas A&M University

Most people who use computers regularly have encountered a failure, either in the form of a software crash, disk failure, power loss, or bus error. In some instances these failures are no more than annoyances; in others they result in significant losses. The latter result will probably become more common than the former, as society’s dependence on automated systems increases.

The ideal system would be perfectly reliable and never fail. This, of course, is impossible to achieve in practice: System builders have finite resources to devote to reliability, and consumers will only pay so much for this feature. Over the years, the industry has used various techniques to best approximate the ideal scenario. The discipline of fault-tolerant and reliable computing deals with numerous issues pertaining to different aspects of system development, use, and maintenance.

The expression “disaster waiting to happen” is often used to describe causes of failure that are seemingly well known but have not been adequately accounted for in the system design. In these cases we need to learn from experience how to avoid failure. Not all failures are avoidable, but in many cases the system or system operator could have taken corrective action that would have prevented or mitigated the failure. The main reason we don’t prevent failures is our inability to learn from our mistakes. It often takes more than one occurrence of the same failure before corrective action is taken.

EXPRESSING FAULT TOLERANCE

The two most common ways the industry expresses a system’s ability to tolerate failure are reliability and availability.

Reliability is the likelihood that a system will remain operational (potentially despite failures) for the duration of a mission. For instance, the requirement might be stated as a 0.999999 reliability for a 10-hour mission; in other words, the probability of failure during the mission may be at most 10⁻⁶. Very high reliability is most important in critical applications such as the space shuttle or industrial control, in which failure could mean loss of life.

Availability expresses the fraction of time a system is operational. A 0.999999 availability means the system is not operational at most one hour in a million hours. It is important to note that a system with high availability may in fact fail. However, its recovery time and failure frequency must be small enough to achieve the desired availability. High availability is important in many applications, including airline reservations and telephone switching, in which every minute of downtime translates into significant revenue loss.
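To make these figures concrete, here is a small sketch (ours, not from the article) that converts an availability target into permitted downtime and, under an assumed constant failure rate (exponential failure model), relates that rate to mission reliability:

```python
import math

def downtime_allowed(availability: float, period_hours: float) -> float:
    """Hours of downtime permitted over a period at a given availability."""
    return (1.0 - availability) * period_hours

def mission_reliability(failure_rate_per_hour: float, mission_hours: float) -> float:
    """Reliability under an assumed constant failure rate: R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate_per_hour * mission_hours)

# 0.999999 availability: at most ~1 hour of downtime per million hours.
print(downtime_allowed(0.999999, 1_000_000))    # -> ~1.0 (hour)

# A failure rate of 1e-7 per hour keeps the probability of failure
# during a 10-hour mission near the 1e-6 bound discussed in the text.
print(1.0 - mission_reliability(1e-7, 10))      # -> ~1e-6
```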
UNDERSTANDING FAULT TOLERANCE

Systems fail for many reasons. The system might have been specified erroneously, leading to an incorrect design. Or the system might contain a fault that manifests only under certain conditions that weren’t tested. The environment may cause a system to fail. Finally, aging components may cease to work properly. It’s relatively easy to visualize and understand random failures caused by aging hardware. It’s much more difficult to grasp how failures might be caused by incorrect specifications, design flaws, substandard implementation, poor testing, and operator errors. And many times failures are caused by a combination of these conditions.

In addition, a system can fail at several levels. The system or subsystem itself may fail, its human operator may fail and introduce errors, or the two factors may interact to cause a failure (when there is a mismatch between system operation and operator understanding). Operator error is the most common cause of failure. But many errors attributed to operators are actually caused by designs that require an operator to choose an appropriate recovery action without much guidance and without any automated help. The operator who chooses correctly is a hero; the one who chooses incorrectly becomes a scapegoat. The bigger question here is whether these catastrophic situations could have been avoided if the system had been designed in an appropriate, safe manner.

Importance of design

Piecemeal design approaches are not desirable. A good fault-tolerant system design requires a careful study of design, failures, causes of failures, and system response to failures. Such a study should be carried out in detail before the design begins and must remain part of the design process.

Planning to avoid failures, known as fault avoidance, is the most important aspect of fault tolerance. Proper design of fault-tolerant systems begins with the requirements specification. A designer must analyze the environment and determine the failures that must be tolerated to achieve the desired level of reliability. Some failures are more probable than others. Some failures are transient and others are permanent. Some occur in hardware, others in software. To optimize fault tolerance, it is important (yet difficult) to estimate actual failure rates for each possible failure.

The next obvious step is to design the system to tolerate faults that occur while the system is in use. The basic principle of fault-tolerant design is redundancy, and there are three basic techniques to achieve it: spatial (redundant hardware), informational (redundant data structures), and temporal (redundant computation).

Redundancy costs both money and time. The designers of fault-tolerant systems, therefore, must optimize the design by trading off the amount of redundancy used and the desired level of fault tolerance. Temporal redundancy usually requires recomputation and typically results in a slower recovery from failure compared to spatial redundancy. On the other hand, spatial redundancy increases hardware costs, weight, and power requirements.

Many fault tolerance techniques can be implemented using only special hardware or software, and some techniques require a combination of these. Which approach is used depends on the system requirements: Hardware techniques tend to provide better performance at an increased hardware cost; software techniques increase software design costs. Software techniques, however, are more flexible because software can be changed after the system has been built.

Here we describe the six most widely used hardware and software techniques.

Modular redundancy and N-version programming

Modular redundancy uses multiple, identical replicas of hardware modules and a voter mechanism. The voter compares the outputs from the replicas and determines the correct output using, for example, majority vote. Modular redundancy is a general technique: it can tolerate most hardware faults in a minority of the hardware modules.

N-version programming can tolerate both hardware and software faults. The basic idea is to write multiple versions of a software module. A voter receives outputs from these versions and determines the correct output. The different versions are written by different teams, with the hope that these versions will not contain the same bugs. N-version programming can therefore tolerate some software bugs that affect a minority of the replicas. However, it does not tolerate correlated faults, which may be catastrophic. In a correlated fault, the reason for failure may be common to two modules. For example, two modules may share a single power supply or a single clock. Failure of either may make both modules fail.
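As an illustration of the voting idea, here is a small sketch (ours, not the authors’; the three toy versions are hypothetical stand-ins for independently developed implementations):

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value produced by a majority of replicas,
    or raise if no majority exists (the voter cannot decide)."""
    value, count = Counter(outputs).most_common(1)[0]
    if count * 2 <= len(outputs):
        raise RuntimeError("no majority among replicas")
    return value

# Three hypothetical, independently written versions of integer sqrt-floor.
def version_a(x): return int(x ** 0.5)
def version_b(x): return next(i for i in range(x + 2) if (i + 1) ** 2 > x)
def version_c(x): return int(x ** 0.5) + 1   # a buggy minority version

print(majority_vote([v(25) for v in (version_a, version_b, version_c)]))  # -> 5
```

This voter assumes replicas produce exactly comparable outputs; real voters must often cope with outputs that match only approximately.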
Error-control coding

Replication is effective but expensive. For certain applications, such as RAMs and buses, error-correcting codes can be used, and they require much less redundancy than replication. Hamming and Reed-Solomon codes are among those commonly used.
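To show how little redundancy a code needs, here is a minimal Hamming(7,4) sketch (ours; the bit layout follows one common convention): three parity bits protect four data bits and let the receiver correct any single-bit error.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits; parity bits sit at
    1-indexed positions 1, 2, and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers data at positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers data at positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers data at positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s4  # 1-indexed position of the error
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1         # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                         # inject a single-bit fault
assert hamming74_decode(word) == [1, 0, 1, 1]
```

Adding one more parity bit over the whole word extends this to single-error-correct, double-error-detect (SECDED), the form commonly used in memory systems.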
Checkpoints and rollbacks

A checkpoint is a copy of an application’s state saved in some storage that is immune to the failures under consideration. A rollback restarts the execution from a previously saved checkpoint. When a failure occurs, the application’s state is rolled back to the previous checkpoint and restarted from there. This technique can be used to recover from transient as well as permanent hardware failures and certain types of software failures. Both uniprocessor and distributed applications can use rollback recovery.
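A minimal rollback-recovery sketch follows (ours; the flaky step is a hypothetical stand-in for a transient fault, and an in-memory deep copy stands in for failure-immune storage):

```python
import copy

def run_with_rollback(state, steps, max_retries=3):
    """Execute steps in order; on a failure, restore the last checkpoint
    and retry the failed step."""
    checkpoint = copy.deepcopy(state)                 # initial checkpoint
    for step in steps:
        for _ in range(max_retries):
            try:
                step(state)
                checkpoint = copy.deepcopy(state)     # commit new checkpoint
                break
            except RuntimeError:                      # transient fault
                state.clear()
                state.update(copy.deepcopy(checkpoint))  # roll back
        else:
            raise RuntimeError("retries exhausted: treat as permanent")
    return state

class FlakyIncrement:
    """Hypothetical step: corrupts state and fails once, then succeeds."""
    def __init__(self):
        self.tries = 0
    def __call__(self, state):
        self.tries += 1
        if self.tries == 1:
            state["x"] += 999                         # corrupt the state...
            raise RuntimeError("transient fault")     # ...then fail
        state["x"] += 1

print(run_with_rollback({"x": 0}, [FlakyIncrement() for _ in range(5)]))
# -> {'x': 5}; corruption never survives because rollback discards it
```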
Recovery blocks

The recovery block technique uses multiple alternates to perform the same function. One module is the primary module; the others are secondary modules. When the primary module completes execution, its outcome is checked by an acceptance test. If the output is not acceptable, a secondary module executes, and so on, until either an acceptable output is obtained or the alternates are exhausted.
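A recovery-block skeleton might look like the following sketch (ours; the sorting alternates and acceptance test are hypothetical examples):

```python
def recovery_block(alternates, acceptance_test, data):
    """Try the primary, then each secondary, until one passes the test."""
    for alternate in alternates:
        result = alternate(list(data))        # each alternate gets fresh input
        if acceptance_test(data, result):
            return result
    raise RuntimeError("all alternates failed the acceptance test")

def primary_sort(xs):                         # fast primary (buggy here)
    xs.sort()
    return xs[:-1]                            # bug: drops the last element

def secondary_sort(xs):                       # simple, trusted secondary
    return sorted(xs)

def is_sorted_permutation(original, result):  # acceptance test
    return sorted(original) == result

print(recovery_block([primary_sort, secondary_sort],
                     is_sorted_permutation, [3, 1, 2]))   # -> [1, 2, 3]
```

The strength of the acceptance test determines what this scheme can catch: a weak test lets faulty primary output through, while an overly strict one wastes the cheap primary.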
Beyond Fault Tolerance

Timothy C.K. Chou, Oracle Corporation

While the cost of hardware and software drops every year, the cost of downtime only increases. Whether measured by lost productivity, lost revenue, or poor customer satisfaction, downtime is becoming a significant part of the cost of computer-based services. Although many believe the state of the art in fault-tolerant technology is adequate, the following data shows that the industry has a long way to go.

[Figure: downtime by source, on a logarithmic scale from 0.5 to 50,000. Server: hardware 716, system software 898, application software 873. Network: hardware 170, system software 284. Network application software and client values are not recoverable from the source.]

State of the art

Availability is usually expressed in terms of the percentage of time a system is available to users. While this is a valid metric, a much more useful metric is outage minutes.
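A quick conversion (our sketch, not from the sidebar) illustrates the point: availability percentages that look high still translate into many outage minutes per year.

```python
def annual_outage_minutes(availability_percent: float) -> float:
    """Minutes of downtime per year implied by an availability percentage."""
    return (1.0 - availability_percent / 100.0) * 365 * 24 * 60

for pct in (99.0, 99.9, 99.999):
    print(f"{pct}% available -> {annual_outage_minutes(pct):.0f} outage min/yr")
# 99%     -> ~5256 min/yr
# 99.9%   -> ~526 min/yr
# 99.999% -> ~5 min/yr
```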