A Comparative Analysis of Network Dependability, Fault-Tolerance, Reliability, Security, and Survivability

Total Pages: 16

File Type: pdf, Size: 1020 KB

IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 11, NO. 2, SECOND QUARTER 2009

A Comparative Analysis of Network Dependability, Fault-tolerance, Reliability, Security, and Survivability

M. Al-Kuwaiti, Member, IEEE, N. Kyriakopoulos, Senior Member, IEEE, and S. Hussein, Member, IEEE

Manuscript received 30 September 2006; revised 23 April 2008. The authors are with the Department of Electrical and Computer Engineering, The George Washington University, Washington, DC 20052 USA (emails: [email protected], [email protected], and [email protected]). Digital Object Identifier 10.1109/SURV.2009.090208.

Abstract—A number of qualitative and quantitative terms are used to describe the performance of what has come to be known as information systems, networks or infrastructures. However, some of these terms either have overlapping meanings or contain ambiguities in their definitions, presenting problems to those who attempt a rigorous evaluation of the performance of such systems. The phenomenon arises because the wide range of disciplines covered by the term information technology have developed their own distinct terminologies. This paper presents a systematic approach for determining common and complementary characteristics of five widely-used concepts: dependability, fault-tolerance, reliability, security, and survivability. The approach consists of comparing definitions, attributes, and evaluation measures for each of the five concepts and developing corresponding relations. Removing redundancies and clarifying ambiguities will help the mapping of broad user-specified requirements into objective performance parameters for analyzing and designing information infrastructures.

Index Terms—Dependability, fault-tolerance, reliability, security, survivability.

I. INTRODUCTION

THE DISRUPTIONS of the operation of various major infrastructures have highlighted the need to develop mechanisms for minimizing the effects of disruptions and improving the performance of each infrastructure. The problems start with the composition of infrastructures. They comprise systems that have been developed via different disciplines. The hardware component of the information infrastructure includes devices from all fields of electrical engineering, while the software component includes developments from all disciplines in computer science, to mention just two examples.

The integration of the products of diverse fields, including the human component, into complex systems has created major difficulties in the development of efficient mechanisms for analyzing and improving the performance of infrastructures. One of the problems can be traced to the variety of terminologies for describing performance across different fields. A designer or user is faced with terms that may be complementary, synonymous or somewhere in between. Thus, there is a need to develop a common understanding of the meaning of the most widely used terms without reference to a specific discipline.

In the evolution of engineering design approaches, the initial concern was whether a particular system would operate within a set of specifications; little consideration was given to how long the system would operate within those specifications. Military and subsequently space applications placed great emphasis on reliability, namely, the need for a given system to meet design requirements for specified periods of time. Similar needs arose as simple computer programs evolved into complex software, leading to the requirement for fault-tolerant design. It did not take long to realize that although these two concepts originated in separate fields, namely, engineering and computer science, respectively, their end objectives were similar, that is, to ensure the performance of a system, hardware in the first case, software in the second, for some specified time intervals. This realization led to the development of dependability as an all-encompassing concept subsuming reliability and fault-tolerance [1]-[6]. As computers combined with communications formed global information networks, information security and network survivability have also been introduced as significant design objectives. In addition, many other concepts such as trustworthiness [2], high assurance or high confidence [3], ultra-reliability [5] and robustness [2] are also being used to characterize the performance of complex systems. In [3] the observation is made, without any detailed analysis, that dependability, high confidence, survivability, and trustworthiness are all "essentially equivalent in their goals and address similar threats".

These terms have entered into use via different disciplines and at different times. For example, robustness has a long history of use in statistics and process control, survivability was introduced as a performance characteristic for military communications networks, and security has long been associated with law enforcement and military operations. The last two examples illustrate the specification of performance measures with discrete physical entities, while the first one relates performance to measurements. As information networks have become more complex, involving hardware, software and humans, and have assumed a prominent role, it became inevitable that performance requirements for the services provided by the entire system needed to be established.

The concepts for measuring performance evolved along with technology. For example, the reliability of electronic devices led to the reliability of circuits and, eventually, to the reliability of an entire launch operation or, in another application, an entire nuclear power plant. Other concepts have been "borrowed" from one technology and used in another. Survivability started as a performance characteristic of physical communications networks and has been adopted to characterize the performance of information infrastructures [7]-[12].

Another concept that has evolved with technology is quality of service (QoS), which is defined in ITU-T Rec. X.902 as "a set of quality requirements on the collective behavior of one or more objects" [13]. At the dawn of digital communications, channel performance was expressed in terms of bit error rate, to be followed by quality of service in describing the performance of packet-switched networks [14][15][16]. With the advent of the Internet and data streaming, the quality of service concept is also used to characterize the performance of an application [17]-[21].

The different origins and development paths of these concepts, in combination with lexicological similarities, raise the question whether there is some degree of overlap among the various terms and to what extent, if any, there are redundancies in using these concepts simultaneously.

This question assumes added significance because of the effort to develop dependability as an integrating concept that encompasses such attributes as availability, reliability, safety, integrity, and maintainability [3]. Although, theoretically, this is a highly desirable goal, some practical problems arise, primarily due to the multiplicity of perspectives and definitions. For example, there is more than one definition of dependability [2][3][4][22][23]. Concepts that have been in use for a long time and in different disciplines are well entrenched in their respective fields and are viewed as end objectives rather than attributes of some other concept. One such widely used concept is reliability. Applications range from […]

[…] included in the definition of quality of service could be the same attributes as those of dependability. Furthermore, in contrast to the evolution of engineering design from the device to the system, namely, a "bottom up" approach, dependability has a "top down" perspective. The result is an elaborate multi-layer tree structure encompassing both quantitative and qualitative attributes. As one goes beyond a few levels in the tree structure, the multiplicity of possible paths from the root of the tree and the mixture of quantitative and qualitative attributes present major challenges in translating the top-level dependability concepts into implementable design specifications for a complex system.

The work described in this paper aims toward the development of a framework for identifying a set of performance indicators for complex systems such as an information infrastructure. The objective is not to propose yet another concept but, rather, to identify from the existing concepts the proper subset of their intersection and to develop a common and consistent understanding of their meaning. Dependability, fault-tolerance, reliability, security and survivability are used as representative examples for describing the proposed analytical framework.

The rapid development and wide application of Information Technology (IT) in every aspect of human life have made these performance indicators important challenges to system users and designers. Various critical infrastructures aim towards possessing such concepts, either by embedding them in the first development design stages or as add-on features. Examples of these systems include defense systems, flight systems, communication systems, financial systems, energy systems, and transportation systems as presented […]
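Of the five concepts the excerpt compares, reliability is the one with the most settled quantitative definition. For orientation, here is a standard textbook formulation (generic conventions, not notation taken from this paper): reliability is the probability of surviving a specified mission time, with availability derived from mean times to failure and repair.

```latex
% Generic textbook definitions (assumed conventions, not from the paper):
% T is the random time to failure of the system.
R(t) = \Pr(T > t)                    % reliability: survive past time t
R(t) = e^{-\lambda t}                % under a constant failure rate \lambda
\mathrm{MTTF} = \int_0^\infty R(t)\,dt = \frac{1}{\lambda}
A = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}   % steady-state availability
```

For example, a component with a constant failure rate of 10^-4 failures per hour has MTTF = 10,000 hours, and its probability of running one year (8760 h) without failure is e^(-0.876) ≈ 0.42.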
Recommended publications
  • The Google File System (GFS)
    The Google File System (GFS), as described by Ghemawat, Gobioff, and Leung in 2003, provided the architecture for scalable, fault-tolerant data management within the context of Google. These architectural choices, however, resulted in sub-optimal performance in regard to data trustworthiness (security) and simplicity. Additionally, the application-specific nature of GFS limits the general scalability of the system outside of the specific design considerations within the context of Google. SYSTEM DESIGN: The authors enumerate their specific considerations as: (1) commodity components with a high expectation of failure, (2) a system optimized to handle relatively large files, particularly multi-GB files, (3) most writes to the system are concurrent append operations, rather than internal modifications of extant files, and (4) a high rate of sustained bandwidth (Sections 1 and 2.1). Using these considerations, this paper analyzes the success of GFS's design in achieving a fault-tolerant, scalable system, while also considering the faults of the system with regard to data trustworthiness and simplicity. Data trustworthiness: In designing the GFS system, the authors made the conscious decision not to prioritize data trustworthiness, by allowing applications to access 'stale', or not up-to-date, data. In particular, although the system does inform applications of the chunkserver version number, the designers of GFS encouraged user applications to cache chunkserver information, allowing stale data accesses (Sections 2.7.2 and 4.5). Although possibly troubling…
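To make the stale-read trade-off in this excerpt concrete, here is a toy sketch. All names are invented and this is not the actual GFS client API; it only models a client that caches chunk locations with a version number, so reads served through the cache can lag behind a mutation:

```python
import time

class Replica:
    """In-memory stand-in for a chunkserver holding one chunk replica."""
    def __init__(self, data, version):
        self.data, self.version = data, version

    def read(self, offset, length):
        return self.data[offset:offset + length], self.version

class Master:
    """Stand-in for the master's chunk-location service."""
    def __init__(self):
        self.table = {}  # chunk_id -> (replica, current_version)

    def locate(self, chunk_id):
        return self.table[chunk_id]

class ChunkClient:
    """Caches (replica, version) per chunk, as GFS encouraged clients to do.

    A cached entry may point at a replica that has since missed a mutation,
    so reads can return stale data until the version mismatch is noticed --
    the trade-off accepted in exchange for fewer master round-trips.
    """
    def __init__(self, master, cache_ttl_s=60.0):
        self.master, self.cache_ttl_s = master, cache_ttl_s
        self._cache = {}  # chunk_id -> (replica, version, fetched_at)

    def read(self, chunk_id, offset, length):
        entry = self._cache.get(chunk_id)
        if entry is None or time.time() - entry[2] >= self.cache_ttl_s:
            replica, version = self.master.locate(chunk_id)
            entry = (replica, version, time.time())
            self._cache[chunk_id] = entry
        replica, version, _ = entry
        data, seen_version = replica.read(offset, length)
        if seen_version < version:          # stale replica detected by version
            self._cache.pop(chunk_id, None)  # invalidate; caller may retry
        return data

master = Master()
master.table["chunk-0"] = (Replica(b"hello world", version=2), 2)
client = ChunkClient(master)
print(client.read("chunk-0", 0, 5))  # b'hello'
```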
  • Introspection-Based Verification and Validation
    Introspection-Based Verification and Validation. Hans P. Zima, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109. E-mail: [email protected]. Abstract: This paper describes an introspection-based approach to fault tolerance that provides support for runtime monitoring and analysis of program execution. Whereas introspection is mainly motivated by the need to deal with transient faults in embedded systems operating in hazardous environments, we show in this paper that it can also be seen as an extension of traditional V&V methods for dealing with program design faults. Introspection—a technology supporting runtime monitoring and analysis—is motivated primarily by dealing with faults caused by hardware malfunction or environmental influences such as radiation and thermal effects [4]. This paper argues that introspection can also be used to handle certain classes of faults caused by program design errors, complementing the classical approaches for dealing with design errors summarized under the term Verification and Validation (V&V). Consider a program, P, over a given input domain, and assume that the intended behavior of P is defined by a formal specification. Verification of P implies a static proof—performed before execution of the program—that the result of applying P to any legal input conforms to the specification. Thus, verification is a methodology that seeks to avoid faults. Model checking [5, 3] refers to a special verification technology that uses exploration of the full state space based on a simplified program model. Verification techniques have been highly successful when judiciously applied under the right conditions in well-defined, limited contexts.
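As a rough sketch of the runtime-monitoring half of this idea (a generic illustration, not Zima's actual framework), one can wrap program steps in a monitor that checks declared invariants during execution, catching violations that slip past static verification:

```python
class IntrospectionMonitor:
    """Minimal runtime monitor: checks declared invariants at each step.

    Static verification proves properties before execution; a monitor like
    this complements it by catching violations (from residual design faults
    or transient upsets) while the program is actually running.
    """
    def __init__(self):
        self.invariants = []   # (description, predicate-on-state) pairs
        self.violations = []

    def add_invariant(self, description, predicate):
        self.invariants.append((description, predicate))

    def observe(self, state):
        """Call at every monitored step with the current program state."""
        for description, predicate in self.invariants:
            if not predicate(state):
                self.violations.append((description, dict(state)))

# Example: a loop whose accumulator is specified to stay non-negative.
monitor = IntrospectionMonitor()
monitor.add_invariant("total stays non-negative", lambda s: s["total"] >= 0)

state = {"total": 0}
for delta in [5, 3, -10, 2]:            # the -10 step breaks the invariant
    state["total"] += delta
    monitor.observe(state)

print(monitor.violations)
# -> [('total stays non-negative', {'total': -2})]
```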
  • Fault-Tolerant Components on AWS
    Fault-Tolerant Components on AWS. November 2019. This paper has been archived; for the latest technical information, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers. Notices: Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved. Contents: Introduction; Failures Shouldn't Be THAT Interesting; Amazon Elastic Compute Cloud; Elastic Block Store; Auto Scaling…
  • Use of Bioinformatics Resources and Tools by Users of Bioinformatics Centers in India Meera Yadav University of Delhi, [email protected]
    University of Nebraska - Lincoln, DigitalCommons@University of Nebraska - Lincoln, Library Philosophy and Practice (e-journal), Libraries at University of Nebraska-Lincoln, 2015. Use of Bioinformatics Resources and Tools by Users of Bioinformatics Centers in India. Meera Yadav, University of Delhi, [email protected]. Manlunching Tawmbing, Saha Institute of Nuclear Physics, [email protected]. Follow this and additional works at: http://digitalcommons.unl.edu/libphilprac. Part of the Library and Information Science Commons. Yadav, Meera and Tawmbing, Manlunching, "Use of Bioinformatics Resources and Tools by Users of Bioinformatics Centers in India" (2015). Library Philosophy and Practice (e-journal). 1254. http://digitalcommons.unl.edu/libphilprac/1254. Abstract: Information plays a vital role in Bioinformatics to achieve the existing Bioinformatics information technologies. Librarians have to identify the information needs, uses and problems faced to meet the needs and requirements of the Bioinformatics users. The paper analyzes the responses of 315 Bioinformatics users of 15 Bioinformatics centers in India, with respect to the use of different Bioinformatics databases and tools by scholars and scientists, areas of major research in Bioinformatics, major research projects, thrust areas of research, and use of different resources by the users. The study identifies the various Bioinformatics services and resources used by the Bioinformatics researchers. Keywords: Information services, Users, Information needs, Bioinformatics resources. 1. Introduction: 'Needs' refer to lack of self-sufficiency and also represent gaps in the present knowledge of the users.
  • Establishing the Basis for a CIS (Computer Information Systems) Undergraduate Program: on Seeking the Body of Knowledge
    Information Systems Education Journal (ISEDJ), 13 (5), ISSN: 1545-679X, September 2015. Establishing the Basis for a CIS (Computer Information Systems) Undergraduate Program: On Seeking the Body of Knowledge. Herbert E. Longenecker, Jr., [email protected], University of South Alabama, Mobile, AL 36688, USA. Jeffry Babb, [email protected], West Texas A&M University, Canyon, TX 79016, USA. Leslie J. Waguespack, [email protected], Bentley University, Waltham, Massachusetts 02452, USA. Thomas N. Janicki, [email protected], University of North Carolina Wilmington, Wilmington, NC 28403, USA. David Feinstein, [email protected], University of South Alabama, Mobile, AL 36688, USA. Abstract: The evolution of computing education spans a spectrum from computer science (CS), grounded in the theory of computing, to information systems (IS), grounded in the organizational application of data processing. This paper reports on a project focusing on a particular slice of that spectrum commonly labeled as computer information systems (CIS) and reflected in undergraduate academic programs designed to prepare graduates for professions as software developers building systems in government, commercial and not-for-profit enterprises. These programs, with varying titles, number in the hundreds. This project is an effort to determine if a common knowledge footprint characterizes CIS. If so, an eventual goal would be to describe the proportions of those essential knowledge components and propose guidelines specifically for effective undergraduate CIS curricula. Professional computing societies (ACM, IEEE, AITP (formerly DPMA), etc.) over the past fifty years have sponsored curriculum guidelines for various slices of education that in aggregate offer a compendium of knowledge areas in computing.
  • Network Reliability and Fault Tolerance
    Network Reliability and Fault Tolerance. Muriel Médard, [email protected], Laboratory for Information and Decision Systems, Room 35-212, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139. Steven S. Lumetta, [email protected], Coordinated Science Laboratory, University of Illinois Urbana-Champaign, 1308 W. Main Street, Urbana, IL 61801. 1 Introduction. The majority of communications applications, from cellular telephone conversations to credit card transactions, assume the availability of a reliable network. At this level, data are expected to traverse the network and to arrive intact at their destination. The physical systems that compose a network, on the other hand, are subjected to a wide range of problems, ranging from signal distortion to component failures. Similarly, the software that supports the high-level semantic interface often contains unknown bugs and other latent reliability problems. Redundancy underlies all approaches to fault tolerance. Definitive definitions for all concepts and terms related to reliability, and, more broadly, dependability, can be found in [AAC+92]. Designing any system to tolerate faults first requires the selection of a fault model: a set of possible failure scenarios along with an understanding of the frequency, duration, and impact of each scenario. A simple fault model merely lists the set of faults to be considered; inclusion in the set is decided based on a combination of expected frequency, impact on the system, and feasibility or cost of providing protection. Most reliable network designs address the failure of any single component, and some designs tolerate multiple failures. In contrast, few attempt to handle the adversarial conditions that might occur in a terrorist attack, and cataclysmic events are almost never addressed at any scale larger than a city.
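The redundancy principle stated in this excerpt can be quantified with the standard series/parallel reliability algebra. These are textbook formulas rather than anything specific to the chapter, and the helper names are invented: a chain of components fails if any link fails, while replicated components fail only if all replicas fail.

```python
from math import prod

def series_reliability(component_reliabilities):
    """A series system works only if every component works: R = prod(R_i)."""
    return prod(component_reliabilities)

def parallel_reliability(component_reliabilities):
    """A parallel (redundant) system fails only if every replica fails:
    R = 1 - prod(1 - R_i)."""
    return 1 - prod(1 - r for r in component_reliabilities)

# Two links in series, each 99% reliable, drop to ~98% overall ...
print(series_reliability([0.99, 0.99]))      # 0.9801
# ... while duplicating a 99%-reliable link yields ~99.99%.
print(parallel_reliability([0.99, 0.99]))    # 0.9999
```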
  • Fault Tolerance
    Distributed Systems: Fault Tolerance. Paul Krzyzanowski. Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License. Faults: • Deviation from expected behavior • Due to a variety of factors: hardware failure, software bugs, operator errors, network errors/outages. A fault in a system is some deviation from the expected behavior of the system -- a malfunction. Faults may be due to a variety of factors, including hardware, software, operator (user), and network errors. Fault categories: • transient faults • intermittent faults • permanent faults. Any fault may be fail-silent (fail-stop) or Byzantine; the system may be synchronous or asynchronous (e.g., IP packet versus serial port transmission). Transient faults occur once and then disappear; for example, a network message transmission times out but works fine when attempted a second time. Intermittent faults are the most annoying of component faults: the fault occurs, then vanishes, then reoccurs, and so on; an example of this kind of fault is a loose connection. Permanent faults are persistent: they continue to exist until the faulty component is repaired or replaced; examples are disk head crashes, software bugs, and burnt-out hardware. Any of these faults may be either a fail-silent failure (also known as fail-stop) or a Byzantine failure. A fail-silent fault is one where the faulty unit stops functioning and produces no ill output (it produces no output or produces output to indicate failure). A Byzantine fault is one where the faulty unit continues to run but produces incorrect results.
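A standard consequence of this classification (a generic sketch, not code from the lecture notes) is that transient faults can be masked by simple retry, while a permanent fault still surfaces once the retries are exhausted:

```python
import random
import time

def retry(operation, attempts=3, delay_s=0.1):
    """Mask transient faults by retrying; a permanent fault still surfaces
    as an exception after all attempts are exhausted."""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except IOError as error:   # treat IOError as possibly transient here
            last_error = error
            time.sleep(delay_s)
    raise last_error               # persistent failure: report, don't mask

def flaky_send():
    """Simulates a transient fault: fails ~50% of the time, then succeeds."""
    if random.random() < 0.5:
        raise IOError("message transmission timed out")
    return "ack"

print(retry(flaky_send))           # usually prints 'ack'
```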
  • Information Systems Foundations Theory, Representation and Reality
    Information Systems Foundations: Theory, Representation and Reality. Dennis N. Hart and Shirley D. Gregor (Editors). Workshop Chair: Shirley D. Gregor, ANU. Program Chairs: Dennis N. Hart, ANU; Shirley D. Gregor, ANU. Program Committee: Bob Colomb, University of Queensland; Walter Fernandez, ANU; Steven Fraser, ANU; Sigi Goode, ANU; Peter Green, University of Queensland; Robert Johnston, University of Melbourne; Sumit Lodhia, ANU; Mike Metcalfe, University of South Australia; Graham Pervan, Curtin University of Technology; Michael Rosemann, Queensland University of Technology; Graeme Shanks, University of Melbourne; Tim Turner, Australian Defence Force Academy; Leoni Warne, Defence Science and Technology Organisation; David Wilson, University of Technology, Sydney. Published by ANU E Press, The Australian National University, Canberra ACT 0200, Australia. Email: [email protected]. This title is also available online at: http://epress.anu.edu.au/info_systems02_citation.html. National Library of Australia Cataloguing-in-Publication entry: Information systems foundations : theory, representation and reality. Bibliography. ISBN 9781921313134 (pbk.), ISBN 9781921313141 (online). 1. Management information systems–Congresses. 2. Information resources management–Congresses. 658.4038. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying or otherwise, without the prior permission of the publisher. Cover design by Brendon McKinley with logo by Michael Gregor. Authors' photographs on back cover: ANU Photography. Printed by University Printing Services, ANU. This edition © 2007 ANU E Press. Table of Contents: Preface; The Papers; Theory: Designing for Mutability in Information Systems Artifacts, Shirley Gregor and Juhani Iivari; The Effect of the Application Domain in IS Problem Solving: A Theoretical Analysis, Iris Vessey; Towards a Unified Theory of Fit: Task, Technology and Individual, Michael J.…
  • Information System Owner
    DOE CYBERSECURITY: CORE COMPETENCY TRAINING REQUIREMENTS. Key Cybersecurity Role: Information System Owner. Role Definition: The Information System Owner (also referred to as System Owner) is the individual responsible for the overall procurement, development, integration, modification, operation, maintenance, and retirement of an information system. The System Owner is a key contributor in developing system design specifications to ensure that security and user operational needs are documented, tested, and implemented. Competency Area: Data Security. Functional Requirement: Design. Competency Definition: Refers to the application of the principles, policies, and procedures necessary to ensure the confidentiality, integrity, availability, and privacy of data in all forms of media (i.e., electronic and hardcopy) throughout the data life cycle. Behavioral Outcome: The System Owner will understand the policies and procedures required to protect all categories of information and will have a working knowledge of the data access controls required to ensure the confidentiality, integrity, and availability of information. He/she will apply this knowledge during all phases of the System Development Life Cycle (SDLC). Training concepts to be addressed at a minimum: identify and document the appropriate level of protection for data, including use of encryption; specify data and information classification, sensitivity, and need-to-know requirements by information type on a system in terms of its confidentiality, integrity, and availability; utilize DOE M 205.1-5 to determine the information impacts for unclassified information and DOE M 205.1-4 to determine the Consequence of Loss for classified information; create authentication and authorization systems for users to gain access to data based on assigned privileges and permissions.
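As a toy illustration of the last training concept above, authorization based on assigned privileges, here is a hedged sketch with invented labels and rules; it is not an implementation of DOE M 205.1-4 or 205.1-5 requirements:

```python
# Hypothetical classification lattice and access check -- illustrative only.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

def may_access(user, document):
    """Grant access only with sufficient clearance AND need-to-know."""
    cleared = LEVELS[user["clearance"]] >= LEVELS[document["classification"]]
    need_to_know = document["compartment"] in user["compartments"]
    return cleared and need_to_know

alice = {"clearance": "secret", "compartments": {"reactor-ops"}}
doc = {"classification": "confidential", "compartment": "reactor-ops"}
print(may_access(alice, doc))  # True: clearance dominates, compartment matches
```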
  • Detecting and Tolerating Byzantine Faults in Database Systems Benjamin Mead Vandiver
    Computer Science and Artificial Intelligence Laboratory Technical Report MIT-CSAIL-TR-2008-040, June 30, 2008. Detecting and Tolerating Byzantine Faults in Database Systems. Benjamin Mead Vandiver. Massachusetts Institute of Technology, Cambridge, MA 02139 USA — www.csail.mit.edu. Submitted to the Department of Electrical Engineering and Computer Science on May 23, 2008, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science at the Massachusetts Institute of Technology. © Massachusetts Institute of Technology 2008. All rights reserved. Certified by: Barbara Liskov, Ford Professor of Engineering, Thesis Supervisor. Accepted by: Terry P. Orlando, Chairman, Department Committee on Graduate Students. Abstract: This thesis describes the design, implementation, and evaluation of a replication scheme to handle Byzantine faults in transaction processing database systems. The scheme compares answers from queries and updates on multiple replicas, which are off-the-shelf database systems, to provide a single database that is Byzantine fault tolerant. The scheme works when the replicas are homogeneous, but it also allows heterogeneous replication, in which replicas come from different vendors. Heterogeneous replicas reduce the impact of bugs and security compromises because they are implemented independently and are thus less likely to suffer correlated failures.
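The heart of the scheme, as the abstract describes it, is comparing answers across replicas. The sketch below shows that idea under simplifying assumptions (stateless read-only queries, hashable results); it is not Vandiver's actual commit protocol:

```python
from collections import Counter

def voted_query(replicas, query, quorum):
    """Send the same query to every replica and accept the answer that at
    least `quorum` replicas agree on. With 2f+1 replicas and quorum = f+1,
    up to f Byzantine replicas cannot force acceptance of a wrong answer."""
    tally = Counter(replica(query) for replica in replicas)
    answer, votes = tally.most_common(1)[0]
    if votes < quorum:
        raise RuntimeError("no quorum: replicas disagree beyond tolerance")
    return answer

# Three independent replicas; one returns a corrupted (Byzantine) result.
def honest(query):
    return ("alice", 42)

def byzantine(query):
    return ("alice", 41)

print(voted_query([honest, honest, byzantine], "SELECT ...", quorum=2))
# -> ('alice', 42)
```

Heterogeneous replication strengthens exactly this vote: independently implemented replicas are unlikely to return the same wrong answer.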
  • Fault Tolerance Techniques for Scalable Computing∗
    Fault Tolerance Techniques for Scalable Computing*. Pavan Balaji, Darius Buntinas, and Dries Kimpe. Mathematics and Computer Science Division, Argonne National Laboratory. balaji, buntinas, [email protected]. Abstract: The largest systems in the world today already scale to hundreds of thousands of cores. With plans under way for exascale systems to emerge within the next decade, we will soon have systems comprising more than a million processing elements. As researchers work toward architecting these enormous systems, it is becoming increasingly clear that, at such scales, resilience to hardware faults is going to be a prominent issue that needs to be addressed. This chapter discusses techniques being used for fault tolerance on such systems, including checkpoint-restart techniques (system-level and application-level; complete, partial, and hybrid checkpoints), application-based fault-tolerance techniques, and hardware features for resilience. 1 Introduction and Trends in Large-Scale Computing Systems. The largest systems in the world already use close to a million cores. With upcoming systems expected to use tens to hundreds of millions of cores, and exascale systems going up to a billion cores, the number of hardware components these systems would comprise would be staggering. Unfortunately, the reliability of each hardware component is not improving at the same rate as the number of components in the system is growing. Consequently, faults are increasingly becoming common. For the largest supercomputers that will be available over the next decade, faults will become a norm rather than an exception. (*This work was supported by the Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy, under Contract DE-AC02-06CH11357.)
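Application-level checkpointing, one of the techniques this chapter surveys, can be sketched in a few lines. The file name and state layout here are invented for illustration; the pattern is simply to serialize state periodically and resume from the last checkpoint after a failure:

```python
import json
import os

CHECKPOINT = "checkpoint.json"   # hypothetical checkpoint path

def load_checkpoint():
    """Resume from the last saved state, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "total": 0.0}

def save_checkpoint(state):
    """Write to a temp file, then rename: the rename is atomic on POSIX,
    so a crash mid-write never leaves a torn checkpoint behind."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_checkpoint()                 # after a crash, resumes mid-run
for step in range(state["step"], 1_000):
    state["total"] += step * 0.5          # stand-in for real computation
    state["step"] = step + 1
    if state["step"] % 100 == 0:          # checkpoint every 100 steps
        save_checkpoint(state)
save_checkpoint(state)
print(state["total"])
```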
  • Fault-Tolerant Operating System for Many-Core Processors
    Fault-Tolerant Operating System for Many-core Processors. Research Thesis, in partial fulfillment of the requirements for the degree of Master of Science in Computer Science. Amit Fuchs. Submitted to the Senate of the Technion — Israel Institute of Technology. Tevet 5778, Haifa, December 2017. Technion - Computer Science Department - M.Sc. Thesis MSC-2018-28 - 2018. The research thesis was carried out under the supervision of Prof. Avi Mendelson, in the Faculty of Computer Science. Contents: List of Figures; List of Listings; Abstract; Glossary; Abbreviations; 1 Introduction (1.1 Achievements; 1.2 Related Work: 1.2.1 Distributed Operating Systems; 1.2.2 Parallelization Techniques: Hardware Parallelism, Native Multi-threading Libraries, Message Passing, Transactional Memory); 2 Architecture Model (2.1 Clustered Architecture; 2.2 Distributed Memory: 2.2.1 Local Memories; 2.2.2 Globally Shared Memory: Scalable Consistency Model, No Hardware-based Coherency, Distance Dependent Latency (NUMA), Address Translation…)