A Failure Detector for HPC Platforms


A failure detector for HPC platforms

George Bosilca1, Aurelien Bouteiller1, Amina Guermouche2, Thomas Herault1, Yves Robert1,3, Pierre Sens4 and Jack Dongarra1,5,6

Article, The International Journal of High Performance Computing Applications, 1–20. © The Author(s) 2017. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav. DOI: 10.1177/1094342017711505. journals.sagepub.com/home/hpc

1 ICL, University of Tennessee Knoxville, Knoxville, TN, USA
2 Telecom SudParis, Évry, France
3 LIP, École Normale Supérieure de Lyon, Lyon, France
4 LIP6, Université Paris 6, Paris, France
5 Oak Ridge National Lab, Oak Ridge, TN, USA
6 Manchester University, Manchester, UK

Corresponding author: Yves Robert, ICL, University of Tennessee Knoxville, Knoxville, TN, USA; LIP, École Normale Supérieure de Lyon, Lyon, France. Email: [email protected]

Abstract

Building an infrastructure for exascale applications requires, in addition to many other key components, a stable and efficient failure detector. This article describes the design and evaluation of a robust failure detector that can maintain and distribute the correct list of alive resources within proven and scalable bounds. The detection and distribution of the fault information follow different overlay topologies that together guarantee minimal disturbance to the applications. A virtual observation ring minimizes the overhead by allowing each node to be observed by a single other node, providing an unobtrusive behavior. The propagation stage uses a nonuniform variant of a reliable broadcast over a circulant graph overlay network and guarantees logarithmic fault propagation. Extensive simulations, together with experiments on the Titan supercomputer at Oak Ridge National Laboratory, show that the algorithm performs extremely well and exhibits all the desired properties of an exascale-ready algorithm.

Keywords: MPI, failure detection, fault tolerance

1. Introduction

Failure detection (FD) is a prerequisite to failure mitigation and a key component of any infrastructure that requires resilience. This article is devoted to the design and evaluation of a reliable algorithm that will maintain and distribute the updated list of alive resources with a guaranteed maximum delay. We consider a typical high-performance computing (HPC) platform in steady-state operation mode. Because in such environments the transmission time can be considered as bounded (although that bound is unknown), it becomes possible to provide a perfect failure detector according to the classical definition of Chandra and Toueg (1996). A failure detector is a distributed service able to return the state of any node, alive or dead (subject to a crash). A failure detector is perfect if any node death is eventually suspected by all surviving nodes and if no surviving node ever suspects a node that is still alive.
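For reference, these two requirements are usually called strong completeness and strong accuracy. A compact formalization in the style of Chandra and Toueg is sketched below; the notation suspected_q(t), the set of nodes that node q suspects at time t, is introduced here for illustration and is not taken from the article.

```latex
% Perfect failure detector: the two properties informally stated above.
% suspected_q(t) = set of nodes that node q suspects at time t
% (notation introduced for this sketch, not taken from the article).
\begin{description}
  \item[Strong completeness.] Every node that crashes is eventually suspected
    by every surviving node:
    $\forall p \mbox{ crashed},\ \forall q \mbox{ correct}:\ \exists t_0,\
      \forall t \ge t_0:\ p \in \mathit{suspected}_q(t)$.
  \item[Strong accuracy.] No surviving node ever suspects a node that is
    still alive:
    $\forall t,\ \forall p, q \mbox{ alive at time } t:\
      p \notin \mathit{suspected}_q(t)$.
\end{description}
```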
Critical fault-tolerant algorithms for HPC and implementations of communication middleware for unreliable systems rely on the strong properties of perfect failure detectors (see e.g. Bland et al., 2013a, 2013b, 2015; Egwutuoha et al., 2013; Herault et al., 2015; Katti et al., 2015). Their cost in terms of computation and communication overhead, as well as their properties in terms of latency to detect and notify failures and of reliability, have thus a significant impact on the overall performance of a fault-tolerant HPC solution. A major factor to assess the efficacy of an FD algorithm is the trade-off that it achieves between scalability and the speed of information propagation in the system.

Although we focus primarily on the most widely used programming paradigm, the message passing interface (MPI), the techniques and algorithms proposed have a larger scope and are applicable in any resilient distributed programming environment. We consider the platform as being initially composed of N nodes, but with a high probability, some of these resources will become unavailable throughout the execution. When exposed to the death of a node, traditional applications would abort. However, the applications that we consider are augmented with fault-tolerant extensions that allow them to continue across failures (e.g. Bland et al., 2013), either using a generic or an application-specific fault-tolerant model. The design of this model is outside the scope of this article, but without loss of generality, we can safely assume that any fault-tolerant recovery model requires a robust fault detection mechanism. Our goal is to design such a robust protocol that can detect all failures and enable the efficient repair of the execution platform.

By repairing the platform, we mean that all surviving nodes will eventually be notified of all failures and will therefore be able to compute the list of surviving nodes. The state of the platform where all dead nodes are known to all processes is called a stable configuration (note that nodes may not be aware that they are in a stable configuration).

By robust, we mean that regardless of the length of the execution, if a set of up to f failures disrupts the platform and precipitates it into an unstable configuration, the protocol will bring the platform back into a stable configuration within T(f) time units (we will define T(f) later in the article). Note that the goal is not to tolerate up to f failures overall. On the contrary, the protocol will tolerate an arbitrary number of failures throughout an unbounded-length execution, provided that no more than f successive overlapping failures strike within the T(f) time window. Hence, f induces a constraint on the frequency of failures, but not on the total number of failures.

By efficiently, we aim at a low-overhead protocol that limits the number of messages exchanged to detect the faults and repair the platform. Although we assume a fully connected platform (any node may communicate with any other), we use a realistic one-port communication model (Bhat et al., 2003) where a node can send and/or receive at most one message at any time step. Independent communications, involving distinct sender/receiver pairs, can take place in parallel; however, two messages sent by the same node will be serialized. Note that the one-port model is only an assumption used to model the performance and provide an upper bound for the overheads. In real situations where platforms support multiport communications, our algorithm is capable of taking advantage of such capabilities.

All these goals seem contradictory, but they only call for a carefully designed trade-off. As shown in the studies by Ferreira et al. (2008), Hoefler et al. (2010), and Kharbas et al. (2012), system noise created by the messages and computations of the fault detection mechanism can impose significant overheads in HPC applications. Here, system noise is broadly defined as the impact of operating system and architectural overheads onto application performance. Hence, the efficiency of the approach must be carefully assessed. The overhead should be kept minimal in the absence of failures, while FD and propagation should execute quickly, which usually implies a robust broadcast operation that introduces many messages.

The major contributions of this work are as follows:

• It provides a proven algorithm for FD based on a robust protocol that tolerates an arbitrary number of failures, provided that no more than f consecutive failures strike within a time window of duration T(f) (see the sketch after this list).
• The protocol has minimal overhead in failure-free operation, with a unique observer per node.
• The protocol achieves FD and propagation in logarithmic time for up to fmax = ⌊log n⌋ − 1 failures, where n is the number of alive nodes. More precisely, the bound T(fmax) is deterministic and logarithmic in n, even in the worst case.
• All performance guarantees are expressed within a realistic one-port communication model.
• It provides a detailed theoretical and practical comparison with randomized protocols.
• Extensive simulations and experiments with user-level failure mitigation (ULFM; Bland et al., 2013) show very good performance of the algorithm.
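To make the failure-frequency assumption concrete, here is a small Python sketch (ours, not from the article). It computes the fmax bound quoted above, assuming a base-2 logarithm, and checks whether a trace of failure times respects the "at most f failures within any window of duration T(f)" condition; the values of f and T_f used in the example are arbitrary placeholders, since T(f) is only defined later in the article.

```python
# Illustration only (not from the article): the f_max bound quoted above,
# assuming a base-2 logarithm, plus a check that a trace of failure times
# respects the assumption "no more than f failures within any window of
# duration T(f)". The f and T_f values below are arbitrary placeholders.
import math

def f_max(n_alive):
    """Largest number of consecutive failures covered by the logarithmic bound."""
    return int(math.floor(math.log2(n_alive))) - 1

def respects_failure_window(failure_times, f, T_f):
    """True if no window of duration T_f contains more than f failures."""
    times = sorted(failure_times)
    for start in times:
        # Count the failures striking within [start, start + T_f).
        if sum(1 for t in times if start <= t < start + T_f) > f:
            return False
    return True

if __name__ == "__main__":
    print(f_max(100_000))                                         # 15
    print(respects_failure_window([0.0, 1.0, 2.5, 9.0], 2, 3.0))  # False: three failures in [0, 3)
```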
The rest of the article is organized as follows: We start with an informal description of the algorithm in Section 2. We detail the model, the proof of correctness, and the time-performance analysis in Section 3. Then, we assess the efficiency of the algorithm in a practical setting, first by reporting on a comprehensive set of simulations in Section 4, and then by discussing experimental results on the Titan Oak Ridge National Laboratory (ORNL) supercomputer in Section 5. Section 6 provides an overview of related work. Finally, we outline conclusions and directions for future work in Section 7.

2. Algorithm

This section provides an informal description of the algorithm (for a list of main notations, see Table 1). We refer to Section 3 for a detailed presentation of the model, a proof of correctness, and a time-performance analysis. We maintain two main invariants in the algorithm:

1. Each alive node maintains its own list of known dead resources.
2. Alive nodes are arranged along a ring, and each node observes its predecessor in the ring. In other words, the successor/observer receives heartbeats from its predecessor/emitter (see below).

When a node dies, its observer broadcasts the information and reconnects the ring: From now onward, the observer will observe the last known predecessor (accounting for locally known failures) of its former emitter.

Table 1. List of notations.
Platform parameters:
  N  Initial number of nodes
  t  Upper bound on the time to transfer a message
Protocol parameters:
  h  Period for heartbeats

Let A be the complement of Di, the list of dead nodes known at node i, in {0, 1, ..., N − 1}, and let n = |A|. The elements of A are labeled from 0 to n − 1, where the source i of the broadcast is labeled 0. The broadcast is tagged with a unique identifier and involves only nodes of the labeled list A (this list is computable at each participant as Di is part of the message).
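The following Python sketch (ours, not the article's protocol or its ULFM implementation) illustrates the two invariants and the relabeling of alive nodes used by the broadcast: each node keeps its local list of known dead nodes, observes its closest alive predecessor on the virtual ring, reconnects the ring when that emitter is suspected, and relabels the alive nodes so that the broadcast initiator gets label 0. The power-of-two target pattern shown here is just one common way to obtain a logarithmic-depth broadcast; it stands in for the article's nonuniform reliable broadcast over a circulant graph.

```python
# Minimal sketch of the two invariants above, written for illustration only.
# The timeout rule, message transport, and the choice of broadcast targets are
# simplifications; the article's actual protocol (and its ULFM implementation)
# differs in the details proven in Section 3.
import math

class Node:
    def __init__(self, rank, N):
        self.rank = rank
        self.N = N
        self.dead = set()          # invariant 1: locally known dead nodes (D_i)

    def alive_nodes(self):
        """A = complement of D_i in {0, ..., N-1}."""
        return [r for r in range(self.N) if r not in self.dead]

    def observed_predecessor(self):
        """Invariant 2: observe the closest alive predecessor on the virtual ring."""
        r = (self.rank - 1) % self.N
        while r in self.dead:
            r = (r - 1) % self.N
        return r

    def on_missed_heartbeats(self, emitter):
        """Called when the observed emitter is suspected dead: record the death,
        reconnect the ring, and compute the targets used to propagate the news."""
        self.dead.add(emitter)
        new_emitter = self.observed_predecessor()   # reconnects the ring
        return new_emitter, self.broadcast_targets()

    def broadcast_targets(self):
        """Relabel the alive nodes so that this node has label 0, then send to
        labels at offsets 2^k (a hypercube-like pattern over the relabeled list,
        used here only as an example of a logarithmic-depth broadcast)."""
        A = self.alive_nodes()
        n = len(A)
        my_label = A.index(self.rank)
        offsets = [2 ** k for k in range(int(math.log2(n)) + 1) if 2 ** k < n] if n > 1 else []
        return [A[(my_label + off) % n] for off in offsets]

if __name__ == "__main__":
    node = Node(rank=5, N=8)
    node.dead.add(4)                       # node 4 already known dead
    print(node.observed_predecessor())     # 3: skips the dead predecessor
    print(node.on_missed_heartbeats(3))    # suspects 3, reconnects to 2, lists targets
```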
Recommended publications
  • Petaflops for the People
    PETAFLOPS SPOTLIGHT: NERSC. Thousands of researchers have used facilities of the Advanced Scientific Computing Research (ASCR) program and its Department of Energy (DOE) computing predecessors over the past four decades. Their studies of hurricanes, earthquakes, green-energy technologies and many other basic and applied science problems have, in turn, benefited millions of people. They owe it mainly to the capacity provided by the National Energy Research Scientific Computing Center (NERSC), the Oak Ridge Leadership Computing Facility (OLCF) and the Argonne Leadership Computing Facility (ALCF).
    These ASCR installations have helped train the advanced scientific workforce of the future. Postdoctoral scientists, graduate students and early-career researchers have worked there, learning to configure the world's most sophisticated supercomputers for their own various and wide-ranging projects. Cutting-edge supercomputing, once the purview of a small group of experts, has trickled down to the benefit of thousands of investigators in the broader scientific community.
    EXTREME-WEATHER NUMBER-CRUNCHING: Certain problems lend themselves to solution by computers. Take hurricanes, for instance: They're too big, too dangerous and perhaps too expensive to understand fully without a supercomputer. Using decades of global climate data in a grid comprised of 25-kilometer squares, researchers in Berkeley Lab's Computational Research Division captured the formation of hurricanes and typhoons and the extreme waves that they generate. Those same models, when run at resolutions of about 100 kilometers, missed the tropical cyclones and resulting waves, up to 30 meters high. Their findings, published in Geophysical Research Letters, demonstrated the importance of running climate models at higher resolution.
  • Failure Detectors for Wireless Sensor-Actuator Systems
    Cleveland State University, EngagedScholarship@CSU, Electrical Engineering & Computer Science Faculty Publications, 7-2009. Authors: Hamza A. Zia (Cleveland State University), Nigamanth Sridhar (Cleveland State University, [email protected]) and Shivakumar Sastry (University of Akron).
    Publisher's statement: this is the author's version of a work that was accepted for publication in Ad Hoc Networks. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document, and changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Ad Hoc Networks, 7, 5, (07-01-2009); 10.1016/j.adhoc.2008.09.003.
    Original citation: Zia, H. A., Sridhar, N., & Sastry, S. (2009). Failure detectors for wireless sensor-actuator systems. Ad Hoc Networks, 7(5), 1001-1013. doi:10.1016/j.adhoc.2008.09.003
  • Unlocking the Full Potential of the Cray XK7 Accelerator
    Unlocking the Full Potential of the Cray XK7 Accelerator. Mark D. Klein∗ and John E. Stone†. ∗National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA. †Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA.
    Abstract: The Cray XK7 includes NVIDIA GPUs for acceleration of computing workloads, but the standard XK7 system software inhibits the GPUs from accelerating OpenGL and related graphics-specific functions. We have changed the operating mode of the XK7 GPU firmware, developed a custom X11 stack, and worked with Cray to acquire an alternate driver package from NVIDIA in order to allow users to render and post-process their data directly on Blue Waters. Users are able to use NVIDIA's hardware OpenGL implementation which has many features not available in software rasterizers. By eliminating the transfer of data to external visualization clusters, time-to-solution for users has been improved tremendously. In one case, XK7 OpenGL rendering has cut turnaround time
    ... [2], [3], [4], and for high fidelity movie renderings with the HVR volume rendering software. In one example, the total turnaround time for an HVR movie rendering of a trillion-cell inertial confinement fusion simulation [5] was reduced from an estimate of over a month for data transfer to and rendering on a conventional visualization cluster down to just one day when rendered locally using 128 XK7 nodes on Blue Waters. The fully-graphics-enabled GPU state is currently considered an unsupported mode of operation by Cray, and to our knowledge Blue Waters is presently the only Cray system currently running in this mode.
  • Introduction to GPU/Parallel Computing
    Introduction to GPU/Parallel Computing. Ioannis E. Venetis, University of Patras (www.prace-ri.eu).
    Introduction to High Performance Systems. Wait, what? Aren't we here to talk about GPUs? And how to program them with CUDA? Yes, but we need to understand their place and their purpose in modern High Performance Systems. This will make it clear when it is beneficial to use them.
    Top 500 (June 2017):
    1. Sunway TaihuLight (Sunway MPP, Sunway SW26010 260C 1.45GHz), National Supercomputing Center in Wuxi, China, NRCPC: 10,649,600 CPU cores, no accelerator cores, Rmax 93,014.6 TFlop/s, Rpeak 125,435.9 TFlop/s, 15,371 kW.
    2. Tianhe-2 (MilkyWay-2) (TH-IVB-FEP Cluster, Intel Xeon E5-2692 12C 2.200GHz, TH Express-2, Intel Xeon Phi 31S1P), National Super Computer Center in Guangzhou, China, NUDT: 3,120,000 CPU cores, 2,736,000 accelerator cores, Rmax 33,862.7 TFlop/s, Rpeak 54,902.4 TFlop/s, 17,808 kW.
    3. Piz Daint (Cray XC50, Xeon E5-2690v3 12C 2.6GHz, Aries interconnect, NVIDIA Tesla P100), Swiss National Supercomputing Centre (CSCS), Cray Inc.: 361,760 CPU cores, 297,920 accelerator cores, Rmax 19,590.0 TFlop/s, Rpeak 25,326.3 TFlop/s, 2,272 kW.
    4. Titan (Cray XK7, Opteron 6274 16C 2.200GHz, Cray Gemini interconnect, NVIDIA K20x), DOE/SC/Oak Ridge National Laboratory, United States, Cray Inc.: 560,640 CPU cores, 261,632 accelerator cores, Rmax 17,590.0 TFlop/s, Rpeak 27,112.5 TFlop/s, 8,209 kW.
    5. Sequoia (BlueGene/Q, Power BQC 16C 1.60 GHz, Custom), DOE/NNSA/LLNL, United States, IBM: 1,572,864 CPU cores, no accelerator cores, Rmax 17,173.2 TFlop/s, Rpeak 20,132.7 TFlop/s, 7,890 kW.
  • Hardware Complexity and Software Challenges
    Trends in HPC: hardware complexity and software challenges. Mike Giles, Oxford University Mathematical Institute, Oxford e-Research Centre. ACCU talk, Oxford, June 30, 2015.
    Question: How many cores in my Dell M3800 laptop? 1–10? 10–100? 100–1000? Answer: 4 cores in the Intel Core i7 CPU + 384 cores in the NVIDIA K1100M GPU. Peak power consumption: 45W for the CPU + 45W for the GPU.
    Top500 supercomputers: really impressive, 300× more capability in 10 years.
    My personal experience: 1982: CDC Cyber 205 (NASA Langley); 1985–89: Alliant FX/8 (MIT); 1989–92: Stellar (MIT); 1990: Thinking Machines CM5 (UTRC); 1987–92: Cray X-MP/Y-MP (NASA Ames, Rolls-Royce); 1993–97: IBM SP2 (Oxford); 1998–2002: SGI Origin (Oxford); 2002–today: various x86 clusters (Oxford); 2007–today: various GPU systems/clusters (Oxford); 2011–15: GPU cluster (Emerald @ RAL); 2008–today: Cray XE6/XC30 (HECToR/ARCHER @ EPCC); 2013–today: Cray XK7 with GPUs (Titan @ ORNL).
    Top500 supercomputers: power requirements are raising the question of whether this rate of improvement is sustainable.
  • The Dynamic Enterprise Bus∗
    Fourth International Conference on Autonomic and Autonomous Systems. The Dynamic Enterprise Bus. Dag Johansen and Håvard Johansen, Dept. of Computer Science, University of Tromsø, Norway.
    Abstract: In this paper we present a prototype enterprise information run-time heavily inspired by the fundamental principles of autonomic computing. Through self-configuration, self-optimization, and self-healing techniques, this run-time targets next-generation extreme scale information-access systems. We demonstrate these concepts in a processing cluster that allocates resources dynamically upon demand.
    1 Introduction. Enterprise search systems are ripe for change. In their infancy, these systems were primarily used for information retrieval purposes, with main data sources being internal corporate archives.
    This is where autonomic computing [18] comes into play. Our goal is that information access software should keep manual control and interventions outside the computational loop as much as possible. Closely related is that such autonomic solutions should consolidate and utilize resources better, with the net effect that large-scale information access systems can be built in smaller scale. In this vein, self-configuration, self-optimization, and self-healing become important. We are building the next generation run-time for future generation information access systems. Autonomic behavior is fundamental in order to dynamically adapt and reconfigure to accommodate changing situations. Hence, the run-time needs efficient mechanisms for monitoring and controlling applications and their resources, moving applications transparently while in execution, scheduling resources in accordance to end-to-end service-level agreements.
  • Low-Overhead Accrual Failure Detector
    Sensors 2012, 12, 5815-5823; doi:10.3390/s120505815. Open Access, ISSN 1424-8220, www.mdpi.com/journal/sensors. Article: Low-Overhead Accrual Failure Detector. Xiao Ren, Jian Dong*, Hongwei Liu, Yang Li and Xiaozong Yang. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China. *Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +86-451-8640-3317; Fax: +86-451-8641-3309. Received: 6 March 2012; in revised form: 20 April 2012 / Accepted: 25 April 2012 / Published: 4 May 2012.
    Abstract: Failure detectors are one of the fundamental components for building a distributed system with high availability. In order to maintain the efficiency and scalability of failure detection in a complicated large-scale distributed system, accrual failure detectors that can adapt to multiple applications have been studied extensively. In this paper, a new accrual failure detector, LA-FD, with low system overhead has been proposed specifically for current mobile network equipment on the Internet whose processing power, memory space and power supply are all constrained. It does not rely on the probability distribution of message transmission time, or on the maintenance of a history message window. By simple calculation, LA-FD provides adaptive failure detection service with high accuracy to multiple upper applications. The related experiments and results have also been presented.
    Keywords: failure detection; accrual failure detector; adaptive
    1. Introduction. Failure detector is one of the fundamental components for building a distributed system with high availability [1]. By providing the processes' failure information to the system, it supports the solution of many basic issues (such as consensus and atomic broadcasting, etc.) in an asynchronous system.
  • A Literature Review of Failure Detection Within the Context of Solving the Problem of Distributed Consensus
    A literature review of failure detection within the context of solving the problem of distributed consensus, by Michael Duy-Nam Phan-Ba, B.S., The University of Washington, 2010. An essay submitted in partial fulfillment of the requirements for the degree of Master of Science in The Faculty of Graduate and Postdoctoral Studies (Computer Science), The University of British Columbia (Vancouver), April 2015. © Michael Duy-Nam Phan-Ba 2015.
    Abstract: As modern data centers grow in both size and complexity, the probability that components fail becomes significant enough to affect user-facing services [3]. These failures have the apparent consequence of invoking the impossibility result for distributed consensus in the presence of even one failure [15]. One way to solve the impossibility result is to use failure detectors [8]. In this essay, we present the theoretical models that allow us to solve consensus. Then, we discuss practical refinements to the models for the purposes of implementing failure detectors in practice. Finally, we conclude by surveying common design patterns for building distributed failure detectors.
    Contents (excerpt): 1 Introduction; 2 The theory behind failure detection and consensus (2.1 The impossibility result for consensus; 2.2 A model of failure detection; 2.3 The weakest failure detector); 3 Binary failure detection with partial synchrony (3.1 Pull failure detection; 3.2 Push failure detection; 3.3 Push-pull failure detection).
  • Fault-Tolerant Distributed Algorithms for Consensus and Termination Detection (UMI microfilm reproduction)
    INFORMATION TO USERS: This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer. The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleed-through, substandard margins, and improper alignment can adversely affect reproduction. In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion. Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book. Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order. University Microfilms International, A Bell & Howell Information Company, 300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA. 313 761-4700, 800 521-0600.
    Order Number 9605332. Fault-tolerant distributed algorithms for consensus and termination detection. Wu, Li-Fen, Ph.D. The Ohio State University, 1994. UMI, 300 N.
  • Self-Healing Distributed Systems
    Self-healing distributed systems. Dissertation for the degree of Doctor of Natural Sciences (Dr. rer. nat.) submitted to the Department of Computer Science of the University of Augsburg, presented by Benjamin Satzger in 2008. Examiner: Prof. Dr. rer. nat. Theo Ungerer. Co-examiner: Prof. Dr. rer. nat. Bernhard Bauer. Date of oral examination: 2008-12-18.
    Abstract: The growing complexity of distributed systems demands for new ways of control. This work addresses self-healing in distributed environments. The term self-healing represents a quite new area of research and is used in a fairly broad way, but can be seen as dynamic fault tolerance. This work proposes generic concepts and algorithms to build self-healing systems. The detection of node failures in distributed environments is a non-trivial problem. Failure detectors are an important component of many fault tolerant distributed systems. In this work a new failure detection algorithm is proposed with noteworthy features like a high flexibility and good performance. Furthermore an approach is presented to save the message overhead of failure detectors. New grouping algorithms are introduced in this work to enable a scalable self-monitoring property. This allows an autonomous installation of monitoring relations in complex large scale distributed systems. A failure recovery engine based on automated planning, which manages a distributed system according to user-defined objectives, is proposed. It is able to generate and execute plans to autonomously recover a system from unwanted states. Finally, ideas for a generic self-healing architecture for highly complex distributed systems are presented. The design is based on psychological and sociological concepts.
  • A Self-Tuning Failure Detection Scheme for Cloud Computing Service
    2012 IEEE 26th International Parallel and Distributed Processing Symposium. A Self-tuning Failure Detection Scheme for Cloud Computing Service. Naixue Xiong (Dept. of Computer Science, Georgia State Univ., USA), Athanasios V. Vasilakos (Dept. of Comp. and Tele. Engi., Univ. of Western Macedonia, Greece), Jie Wu (Dept. of Comp. and Info. Scie., Temple Univ., USA), Y. Richard Yang (Dept. of Computer Science, Yale Univ., New Haven, USA), Andy Rindos (IBM Corp., Dept. W4DA/Bldg 503, Research Triangle Park, Durham, NC, USA), Yuezhi Zhou (Dept. of Comp. Scie. & Tech., Tsinghua Univ., China), Wen-Zhan Song and Yi Pan (Dept. of Computer Science, Georgia State Univ., USA). E-mails: {nxiong, wsong, pan}@gsu.edu; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; {wsong, pan}@cs.gsu.edu.
    Abstract: Cloud computing is an increasingly important solution for providing services deployed in dynamically scalable cloud networks. Services in the cloud computing networks may be virtualized with specific servers which host abstracted details. Some of the servers are active and available, while others are busy or heavy loaded, and the remaining are offline for various reasons. Users would expect the right and available servers to complete their application requirements.
    ... maintain high levels of Quality of Service (QoS) in accessing information remotely in network environments, and should ensure users' application security and dependability. In the cloud computing networks, some of the servers may be active and available, while others are busy or heavy loaded, and the remaining may be offline or even crashed.