Handling Cascading Failures: The Case for Topology-Aware Fault-Tolerance ∗

Soila Pertet and Priya Narasimhan
Electrical & Computer Engineering Department, Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213-3890
[email protected], [email protected]

∗ This work is partially supported by the NSF CAREER Award CCR-0238381, Bosch Research, the DARPA PCES contract F33615-03-C-4110, and the General Motors Collaborative Laboratory at CMU.

Abstract

Large distributed systems contain multiple components that can interact in sometimes unforeseen and complicated ways; this emergent "vulnerability of complexity" increases the likelihood of cascading failures that might result in widespread disruption. Our research explores whether we can exploit knowledge of the system's topology, the application's interconnections and the application's normal fault-free behavior to build proactive fault-tolerance techniques that could curb the spread of cascading failures and enable faster system-wide recovery. We seek to characterize what the topology knowledge would entail, to quantify the benefits of our approach and to understand the associated tradeoffs.

1. Introduction

Cascading failures occur when local disturbances ripple through interconnected components in a distributed system, causing widespread disruption. For example, in 1990, a bug in the failure-recovery code of the AT&T switches [10] led to cascading crashes in 114 switching nodes, 9 hours of downtime and at least $60 million in lost revenue. A more recent example is the electric power blackout [15] of August 2003, which spread through the Midwest and Northeast U.S. and part of Canada, affecting 50 million people and costing $4-10 billion. Other examples include the escalation of a divide-by-zero exception into a Navy ship's network failure and subsequent grounding [4], and cascading failures in the Internet [8, 9]. Software upgrades, although they are not considered to be faults, can also manifest themselves as cascading failures: an upgrade in one part of the system can trigger dependent upgrades (with accompanying performance degradation and outages [6]) in other parts of the system.

Clearly, cascading failures involve significant downtime with financial consequences. Given the kinds of large, interconnected distributed applications that we increasingly deploy, system developers should consider the impact of cascading failures, and mitigation strategies to address them. The classical approach to dealing with cascading failures is fault-avoidance, where systems are partitioned so that the behavior and performance of components in one sub-system are unaffected by components in another sub-system. Transactional systems avoid cascading aborts through atomic actions, for instance, by ensuring that no reads occur before a write commits. Unfortunately, fault-avoidance does not always work: systems are often put together from Commercial Off-The-Shelf (COTS) components or developed hastily to meet time-to-market pressure, leading to complex systems with dependencies and emergent interactions that are not well understood, and with residual faults that manifest themselves during operation. Transactional models can also be too expensive to implement in some distributed systems because of the latency of ensuring atomicity.

Our research focuses on ways of building distributed systems capable of detecting and recovering from cascading failures in a timely and cost-effective manner. This research is novel because most dependable systems tend to deal with independent, isolated failures; furthermore, they do not exploit knowledge of system topology or dependencies for more effective, directed recovery. We seek to understand whether topology-aware fault-tolerance can be effective, i.e., can we exploit knowledge of system topology to contain cascading failures and to reduce fault-recovery latencies in distributed systems? Here, we define topology as the static dependencies between an application's components, along with the dynamic dependencies that arise when the application's components are instantiated onto physical resources. We also seek to discover whether certain topologies are more resilient to cascading failures than others; this could potentially provide system developers with "rules-of-thumb" for structuring systems to tolerate such failures.
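As a concrete illustration of this notion of topology, the minimal Python sketch below captures a static invocation graph together with a dynamic mapping of components onto hosts. It is only an illustration under our own assumptions; the class and method names are hypothetical and do not describe any existing system in this paper:

```python
# Minimal sketch (hypothetical names): "topology" as a static invocation graph
# plus the dynamic placement of components onto physical hosts.
from collections import defaultdict

class Topology:
    def __init__(self):
        self.static_deps = defaultdict(set)  # component -> components it invokes
        self.placement = {}                  # component -> physical host

    def add_dependency(self, caller, callee):
        """Static dependency from the invocation (static-call) graph."""
        self.static_deps[caller].add(callee)

    def place(self, component, host):
        """Dynamic dependency: the component is instantiated on this host."""
        self.placement[component] = host

    def dependents_of(self, component):
        """Components that directly invoke the given component."""
        return {c for c, deps in self.static_deps.items() if component in deps}

    def co_located_with(self, component):
        """Components sharing a host with the given component."""
        host = self.placement.get(component)
        return {c for c, h in self.placement.items()
                if host is not None and h == host and c != component}

# Example: a three-tier application mapped onto two hosts.
topo = Topology()
topo.add_dependency("web", "app")
topo.add_dependency("app", "db")
topo.place("web", "host1")
topo.place("app", "host1")
topo.place("db", "host2")
print(topo.dependents_of("db"))      # {'app'}
print(topo.co_located_with("app"))   # {'web'}
```

Both kinds of dependency matter for containment: a failure can cascade along invocation edges (app fails because db fails) or along placement edges (web suffers because it shares host1 with a misbehaving app).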
The paper is organized as follows: Section 2 introduces our topology-aware fault-tolerance approach. Section 3 discusses our proposed framework. Section 4 presents a preliminary case study that motivates the use of topology-aware fault-tolerance approaches. Section 5 critiques our current system and discusses our future directions. Section 6 discusses related work and Section 7 concludes.

2. Our Approach in a Nutshell

Topology-aware fault-tolerance aims to exploit knowledge of the system's topology and the application's normal fault-free behavior to build proactive fault-tolerance techniques that curb the spread of cascading failures and enable faster system-wide recovery. For instance, if we provided each component in the system with some knowledge of other components in its dependency path (beyond its immediate neighbors), could this knowledge enhance the component's resilience to cascading failures? We outline below some of the key research questions that we hope to address in this research. While our preliminary results address only a subset of these questions, we hope, through our participation in the HotDep Workshop, to receive feedback on these results and on the potential of our approach to influence dependable-systems development.

2.1. Key Research Questions

Classification of cascading failures: We need to understand the classes of cascading failures that can occur in distributed systems. We need to explore whether there are similarities between the classes of failures and, if so, whether we can develop a generic approach to curb the spread of failures and ensure faster system-wide recovery. Table 1 illustrates our initial taxonomy of cascading failures.

Table 1. A taxonomy of cascading failures

Failure Class | Description | Examples
Malicious faults | Deliberate attacks that exploit system vulnerabilities to propagate from one system to the next | Viruses and worms
Recovery escalation | Failures during the execution of recovery routines lead to subsequent failures in neighboring nodes | AT&T switch failure [10], cascading aborts in transactional systems
Unhandled exceptions | Unanticipated failure conditions result in cascading exceptions | Unhandled system exceptions
Cumulative, progressive error propagation | The severity of the failure increases in magnitude as the failure propagates from node to node | Cascading timeouts, value-fault propagation (Ariane 5)
Chained reconfiguration | Transient errors lead to a slew of reconfigurations | Routing instability in BGP, live upgrades in distributed systems

Dependency characterization: Dependencies in distributed systems arise from the invocation (or static-call) graph and the dynamic mapping of application processes to resources. We need to determine the "strength" of these dependencies, i.e., the likelihood that a component is affected if its "parent" fails. Another question is how much information about the dependency graph a component would need in order to become more resilient to cascading failures, e.g., should the component be aware only of other components that are within 2 hops of it, or does it need to know the entire dependency graph? How does the underlying system topology influence the spread of cascading failures, and are some topologies more resilient to cascading failures than others? (A small sketch of such a k-hop dependency view appears at the end of this subsection.)

Failure-detection: What information, if any, would we need to disseminate through the system to detect and diagnose cascading failures? How accurate should this information be? Also, what is the consequence of disseminating inaccurate information through the system?

Fault-recovery: How effective are existing fault-recovery strategies in curbing cascading failures? How can we coordinate recovery across multiple distributed components in a way that preserves system consistency? Can we develop bounds for topology-aware fault-recovery latencies?

Evaluation: How do we inject cascading faults into large distributed systems for the purpose of evaluation? What metrics should we use to evaluate the effectiveness of our approach? For instance, would response times and throughput be sufficient, or would the metric depend on the class of cascading failure that we aim to tolerate?
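To make the dependency-characterization question concrete, the sketch below (Python, hypothetical names; an illustration of the idea rather than an implementation from this work) computes the bounded, k-hop view of the dependency graph that a component might maintain instead of the entire graph, and a naive estimate of dependency "strength" assuming, purely for illustration, that failures propagate independently along each edge with a known probability:

```python
# Sketch (hypothetical): the k-hop dependency view a component might keep,
# and a naive "strength" estimate assuming independent per-edge propagation.
from collections import deque

def k_hop_view(deps, start, k):
    """Components reachable within k hops of `start`.
    `deps` maps a component to the set of components it depends on."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue
        for neighbor in deps.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return seen - {start}

def propagation_strength(path, edge_prob):
    """Probability that a failure traverses the whole dependency `path`,
    if each edge propagates independently with probability edge_prob[(a, b)]."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= edge_prob.get((a, b), 0.0)
    return p

# Example: web depends on app, which depends on db.
deps = {"web": {"app"}, "app": {"db"}, "db": set()}
print(k_hop_view(deps, "web", 2))   # {'app', 'db'}
print(propagation_strength(["web", "app", "db"],
                           {("web", "app"): 0.9, ("app", "db"): 0.5}))  # 0.45
```

Under this simplistic independence assumption, more distant components contribute less to a component's risk, which is one intuition for restricting a component's view to a few hops rather than the entire graph.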
3. Proposed Framework

We envision a system where each node is equipped with a middleware layer that monitors the "health" of the node based on local and external data (see Figure 1). Local data supplies system-level and application-level metrics specific to that node, e.g., workload, resource usage and error logs. External data supplies metrics from other nodes on the dependency path. When the middleware layer suspects that a problem has occurred at a node, it first tries to pinpoint the cause of the failure using local data. However, for cascading ...

[Figure 1. Node architecture: each node maintains internal data (state, workload, dependency graph) and external data in the form of failure signatures and recovery signatures from nodes on its dependency path (e.g., "Node 6 crashed", "failed over to Node 7", "Node 5 slowed down", "Node 4 reduced throughput"); the node performs local fault-detection and recovery, and leverages the external data for more targeted fault detection and recovery.]
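As one possible concretization of this middleware layer, the sketch below (Python; the class, thresholds and diagnosis policy are our own hypothetical assumptions, not the implementation described in this paper) shows the intended control flow: try to explain a suspected problem from local data first, and fall back on failure/recovery signatures received from nodes on the dependency path for a more targeted, topology-aware diagnosis:

```python
# Hypothetical sketch of a node-local "health" monitor: local metrics first,
# then external failure/recovery signatures from the dependency path.
from dataclasses import dataclass, field

@dataclass
class Signature:
    node: str       # node on the dependency path reporting the event
    failure: str    # e.g., "crashed", "slowed down", "reduced throughput"
    recovery: str   # e.g., "failed over to Node 7", "did nothing"

@dataclass
class HealthMonitor:
    node: str
    cpu_limit: float = 0.95                       # illustrative local thresholds
    error_limit: int = 10
    external: list = field(default_factory=list)  # signatures from other nodes

    def receive_signature(self, sig: Signature):
        """External data disseminated by nodes on the dependency path."""
        self.external.append(sig)

    def diagnose(self, cpu_usage: float, error_count: int) -> str:
        """Pinpoint the cause locally if possible; otherwise use external data."""
        if cpu_usage > self.cpu_limit:
            return f"{self.node}: local overload suspected"
        if error_count > self.error_limit:
            return f"{self.node}: local fault suspected (error log)"
        if self.external:
            sig = self.external[-1]
            return (f"{self.node}: possible cascading failure; upstream "
                    f"{sig.node} {sig.failure}, recovery was '{sig.recovery}'")
        return f"{self.node}: cause unknown from local data"

# Example: Node 3 cannot explain its degradation locally, but an upstream
# signature points to a cascading cause.
mon = HealthMonitor("Node 3")
mon.receive_signature(Signature("Node 6", "crashed", "failed over to Node 7"))
print(mon.diagnose(cpu_usage=0.40, error_count=2))
```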
