Multiple Instruction Issue in the NonStop Cyclone System

Robert W. Horst, Richard L. Harris, Robert L. Jardine
Tandem Technical Report 90.6, June 1990
Part Number: 48007

Multiple Instruction Issue in the NonStop Cyclone Processor¹

Robert W. Horst, Richard L. Harris, Robert L. Jardine
Tandem Computers Incorporated
19333 Vallco Parkway
Cupertino, CA 95014

Abstract

This paper describes the architecture for issuing multiple instructions per clock in the NonStop Cyclone Processor. Pairs of instructions are fetched and decoded by a dual two-stage prefetch pipeline and passed to a dual six-stage pipeline for execution. Dynamic branch prediction is used to reduce branch penalties. A unique microcode routine for each pair is stored in the large duplexed control store. The microcode controls parallel data paths optimized for executing the most frequent instruction pairs. Other features of the architecture include cache support for unaligned double-precision accesses, a virtually-addressed main memory, and a novel precise exception mechanism.

¹ A previous version of this paper was published in the conference proceedings of The 17th Annual International Symposium on Computer Architecture, May 28-31, 1990, Seattle, Washington.

[Figure 1. Cyclone System Architecture: 4 to 16 CPUs, each with its own memory and I/O processors, connected by the dual Dynabus X and Dynabus Y interprocessor busses (20 MB/s parallel) and by Dynabus+ (100 Mbit/s serial fibers) linking four-processor sections (Section 0 through Section 3); disk and tape controllers are dual-ported across I/O channels.]

1. Introduction

The NonStop Cyclone system is a fault-tolerant mainframe targeted at transaction processing, query processing and batch. Each system consists of four to sixteen processors that are connected by dual high-speed busses (Figure 1). Sections of four processors may be geographically distributed and interconnected by fiber optic cables. Each processor has its own memory and drives two to four I/O channels. Fault detection is performed primarily by the hardware, and fault recovery is performed by the message-based operating system. The system can tolerate a single fault in a processor, peripheral controller, power supply, or cooling system. Failed components can be serviced on-line without disrupting processing.

Five generations of Tandem computers (NonStop II, TXP, VLX, CLX and Cyclone) are object-code compatible and have been kept current through microcode updates downloaded to writeable control store. The Tandem instruction set has approximately 300 fixed-length (16-bit) instructions, ranging from simple RISC-like instructions to very complex instructions, such as block moves and inter-processor sends, which may take hundreds of clocks to complete. Most operations are zero-address with operands on the top of an eight-word register stack. The basic memory reference instructions are load and store instructions with address displacements relative to a stack pointer or segment base register.

The Cyclone processor is over three times faster than its predecessor. Approximately half of the performance improvement is due to higher clock rates, and the other half is due to the new microarchitecture. Much of the architectural improvement stems from the ability to issue up to two instructions per clock cycle. Other improvements are due to parallel data paths and new designs for the caches and main memory.

This paper describes the architectural aspects of the NonStop Cyclone processor. In particular, it concentrates on the features that have been included to support multiple-instruction issue.
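To make the zero-address, register-stack execution model described above more concrete, the following C sketch models an eight-word register stack, a load-immediate, a base-relative load, and a zero-address add. It is a conceptual illustration only; the names (Cpu, ldi, load, iadd, the base register) and all behavioral details are assumptions for illustration, not the actual Tandem instruction definitions.

```c
#include <stdint.h>
#include <stdio.h>

/* Conceptual model of a stack-oriented 16-bit machine; illustrative only. */
typedef struct {
    uint16_t stack[8];     /* eight-word register stack                       */
    int      depth;        /* number of valid entries (0..8)                  */
    uint16_t mem[65536];   /* word-addressed data memory                      */
    uint16_t base;         /* a base register (stack pointer or segment base) */
} Cpu;

/* Load Immediate: push a 16-bit literal onto the register stack. */
static void ldi(Cpu *c, uint16_t literal) {
    c->stack[c->depth++] = literal;
}

/* Load: push mem[base + displacement]; addressing is base-relative. */
static void load(Cpu *c, uint16_t disp) {
    c->stack[c->depth++] = c->mem[(uint16_t)(c->base + disp)];
}

/* Zero-address add: pop the top two stack words, push their sum. */
static void iadd(Cpu *c) {
    uint16_t b = c->stack[--c->depth];
    uint16_t a = c->stack[--c->depth];
    c->stack[c->depth++] = (uint16_t)(a + b);
}

int main(void) {
    static Cpu c;                /* static so the 64K-word memory is zeroed */
    c.base = 0x0100;
    c.mem[0x0105] = 40;
    ldi(&c, 2);                  /* push literal 2          */
    load(&c, 5);                 /* push mem[base + 5] = 40 */
    iadd(&c);                    /* 2 + 40                  */
    printf("top of stack = %u\n", c.stack[c.depth - 1]);   /* prints 42 */
    return 0;
}
```

A pair such as LDI&LDI in this model is simply two consecutive calls to ldi; the point of the Cyclone design, described next, is to recognize such adjacent instructions and execute them together.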
2. Overview

In recent years, advances in technology and computer architecture have allowed the design of processors in which simple instructions can be executed in a single clock cycle. Once that point is reached, further architectural performance improvements must be made by executing more than one instruction per clock. Some previous scientific machines were capable of issuing multiple instructions per clock, but this was done through simultaneous execution of integer and floating point operations. When the instruction set can be partitioned into independent operations that share few resources, then it is possible to design independent function units and to assign each instruction to one of these units. Several instructions can be issued to the function units simultaneously [1].

Issuing multiple integer instructions per clock is more difficult because most integer instructions require use of the same resources. Typically, nearly all instructions access the same register file, and there are many inter-instruction data dependencies. There is no simple partitioning that would easily allow execution of multiple instructions per clock.

Very Long Instruction Word (VLIW) machines use sophisticated compiler technology to generate wide object code to control parallel data paths [2]. Typically, each VLIW implementation has its own unique object code format. While VLIW is useful in some situations, our environment demands object-code compatibility between generations of machines. It was essential to find a way to detect the parallelism at run-time rather than at compile-time.

The term "superscalar" was recently coined to describe machines that issue multiple instructions per clock, yet produce the same results as machines that execute instructions sequentially [3]. At about the same time the NonStop Cyclone system was announced, superscalar microprocessors were announced by Intel and IBM. The primary difference between the Cyclone processor and other superscalar designs is in the selection of which sets of instructions are to be issued simultaneously. Other machines have divided the instruction set into categories, such as branches, memory reference, and execution operators. In those machines, at most one instruction from each category can be issued simultaneously.

During the design of the Cyclone processor, we recognized that there may be many cases where several sequential instructions from the same category (or even the same instruction) should be issued simultaneously. For instance, in our stack-based machine, it is common to sequentially load two literal constants onto the register stack with Load Immediate (LDI) instructions. This pair of instructions, LDI&LDI, could easily be executed in a single clock with appropriate data path flexibility and enough register file ports. However, there was no obvious way to partition the machine into independent function units to which instructions could be assigned. Some pairs could benefit from separate ALUs, while others could benefit from separate partitions for memory reference and ALU. A few operations even suggest a bit-partitioning; one frequent pair has separate instructions to load a full-word literal into a register from left and right half-word literals.

Rather than partitioning the processor into independent function units, we chose to use firmware control and to program the microcode routines for each unique pair individually. In this way, there are no artificial restrictions on which instructions can be paired. In addition, by using microcode control, we do not restrict pairable operators to ones that can execute in a single clock cycle. For instance, instructions that use indirect addressing make two sequential accesses to the data cache and require three clocks to complete. However, it is still beneficial to pair indirect operators with other instructions. It takes three clocks to perform an indirect load, yet takes no more clocks when the indirect load is paired with a branch, immediate, or add instruction.

Once we decided to control pair execution with unique microcode routines, we could decide on a case-by-case basis whether to include the hardware support to be able to execute a pair in a single clock cycle. A hardware performance monitor was built, and instruction-pair frequencies were gathered for transaction processing applications. We then examined the frequencies to determine which hardware would gain the most performance for the least cost.

Figure 2 shows the pairing matrix for some representative instructions. Of the pairs shown, all except those in the last row execute in a single clock. The indirect loads require three clocks. In the current microcode, the full table of 2014 pairs has 38 "first" instructions (out of a possible 64) and 53 "second" instructions (out of a possible 127). In future microcode releases, more pairs may be added for improved performance.

[Figure 2. Pairing matrix for representative instructions: "first" instructions (BCC, LDI, LOAD, STOR, DADD, RRM) across the top, "second" instructions down the side, with an x marking each supported pair.]
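The numbers above fit a simple cross-product structure: 38 pairable "first" instructions times 53 pairable "second" instructions is exactly 2014, which suggests one microcode entry point per such combination. The C sketch below illustrates that style of dispatch under assumed names (first_rank, second_rank, pair_entry, the example opcode and entry-point values); it is an illustration of the idea, not the Cyclone decode hardware.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative pair-dispatch sketch; all names, sizes, and values are assumptions. */
enum { FIRST_OPCODES = 64, SECOND_OPCODES = 127, FIRST_RANKS = 38, SECOND_RANKS = 53 };
#define NO_PAIR (-1)

static int8_t   first_rank[FIRST_OPCODES];              /* opcode -> compact "first" index or NO_PAIR  */
static int8_t   second_rank[SECOND_OPCODES];            /* opcode -> compact "second" index or NO_PAIR */
static uint16_t pair_entry[FIRST_RANKS][SECOND_RANKS];  /* 38 x 53 = 2014 pair microcode entry points  */
static uint16_t single_entry[FIRST_OPCODES];            /* fallback entry point per single instruction */

/* Pick the microcode entry point for the next one or two instructions. */
static uint16_t dispatch(uint8_t op1, uint8_t op2, int *issued)
{
    int r1 = first_rank[op1], r2 = second_rank[op2];
    if (r1 != NO_PAIR && r2 != NO_PAIR) {
        *issued = 2;                    /* issue the pair in one decode step */
        return pair_entry[r1][r2];      /* unique routine for this pair      */
    }
    *issued = 1;                        /* fall back to single-instruction issue */
    return single_entry[op1];
}

int main(void)
{
    memset(first_rank, NO_PAIR, sizeof first_rank);
    memset(second_rank, NO_PAIR, sizeof second_rank);

    /* Hypothetical example: opcode 0x12 standing in for a pairable LDI. */
    first_rank[0x12]   = 0;
    second_rank[0x12]  = 0;
    pair_entry[0][0]   = 0x0400;        /* made-up entry point for LDI&LDI  */
    single_entry[0x12] = 0x0040;        /* made-up entry point for lone LDI */

    int n;
    uint16_t entry = dispatch(0x12, 0x12, &n);
    printf("issued %d instruction(s), microcode entry 0x%04X\n", n, (unsigned)entry);
    return 0;
}
```

Because unsupported combinations simply fall back to the single-instruction entry point, adding new pairs in later microcode releases, as the text anticipates, amounts to filling in more table entries.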
The most important data path additions for the support of pairing were the inclusion of a nine-port register file and two ALUs that could be controlled independently or linked together for double-precision arithmetic. The flexibility of the data paths also turned out to be of great benefit in the execution of long instructions, such as those that move or scan blocks of data, and those that send or receive messages.

In some cases, we chose not to include data path support for pairing. For instance, support for the pairing of memory reference instructions would have required more than twice the area and cost of a simpler cache. The frequency of successive memory references did not warrant such a cost. Instead, we determined that a greater payoff would result from supporting fast access to unaligned cache data for double-words.

The following sections describe in more detail the support for multiple instruction issue in key parts of the processor: the instruction fetch unit, the control store, the data paths, and the memory.

3. Instruction Fetch Unit

The Cyclone Instruction Fetch Unit (IFU) has four main functions: 1) to fetch instructions from memory, 2) to decode these instructions to determine whether they are candidates for paired execution, 3) to provide the beginning address for microcode execution of the instruction or pair, and 4) to assist in the execution of branching instructions and exception handling.
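The abstract notes that dynamic branch prediction is used to reduce branch penalties, and function 4 above makes the IFU responsible for assisting branch execution. Since this excerpt ends before the IFU's predictor is described, the sketch below shows only a generic dynamic branch predictor, a table of two-bit saturating counters indexed by low branch-address bits; the table size, indexing, and names are assumptions and should not be read as the Cyclone's actual mechanism.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Generic two-bit-counter branch history table; illustrative only. */
#define BHT_ENTRIES 1024u                 /* assumed table size (power of two)        */
static uint8_t bht[BHT_ENTRIES];          /* 0-1 predict not-taken, 2-3 predict taken */

static unsigned bht_index(uint32_t branch_addr) {
    return branch_addr & (BHT_ENTRIES - 1);   /* index by low address bits */
}

/* Consulted at fetch/decode time, before the branch is resolved. */
static bool predict_taken(uint32_t branch_addr) {
    return bht[bht_index(branch_addr)] >= 2;
}

/* Trained with the resolved outcome; a wrong prediction implies a pipeline flush. */
static void train(uint32_t branch_addr, bool taken) {
    uint8_t *ctr = &bht[bht_index(branch_addr)];
    if (taken)  { if (*ctr < 3) (*ctr)++; }   /* saturate at "strongly taken"     */
    else        { if (*ctr > 0) (*ctr)--; }   /* saturate at "strongly not-taken" */
}

int main(void) {
    uint32_t loop_branch = 0x1234;
    for (int i = 0; i < 4; i++)
        train(loop_branch, true);             /* branch keeps being taken */
    printf("predict taken? %s\n", predict_taken(loop_branch) ? "yes" : "no");
    return 0;
}
```

In a paired-issue front end of the kind described here, a correct prediction lets the prefetch pipeline keep supplying candidate instruction pairs without waiting for the branch to resolve.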