An Introduction to Single System Image (SSI) Cluster Technique

An Introduction to Single System Image (SSI) Cluster Technique

Volume III, Issue IV, April 2014, IJLTEMAS, ISSN 2278-2540

Tarun Kumawat [CSE], JECRC UDML College of Engineering, Kukas, Jaipur, Rajasthan, India
Sandeep Tomar [CSE], Arya College of Engineering & I.T., Kukas, Jaipur, Rajasthan, India
Mohit Gupta [CSE], Arya College of Engineering & I.T., Kukas, Jaipur, Rajasthan, India
[email protected], [email protected], [email protected]

Abstract - Cluster computing is not a new area of computing. It is, however, evident that there is a growing interest in its usage in all areas where applications have traditionally used parallel or distributed computing platforms. A Single System Image (SSI) is the property of a system that hides the heterogeneous and distributed nature of the available resources and presents them to users and applications as a single unified computing resource. SSI can be enabled in numerous ways; these range from those provided by extended hardware through to various software mechanisms. SSI means that users have a globalised view of the resources available to them irrespective of the node to which they are physically associated.

Keywords: Cluster, SSI, SCO UnixWare, GLUnix, MOSIX

I. INTRODUCTION

Single System Image (SSI) is the property of a system that hides the heterogeneous and distributed nature of the available resources and presents them to users and applications as a single unified computing resource. SSI can be enabled in numerous ways; these range from those provided by extended hardware through to various software mechanisms. SSI means that users have a globalised view of the resources available to them irrespective of the node to which they are physically associated. Furthermore, SSI can ensure that a system continues to operate after some failure (high availability) as well as ensuring that the system is evenly loaded and providing communal multiprocessing (resource management and scheduling).

SSI design goals for cluster-based systems are mainly focused on complete transparency of resource management, scalable performance, and system availability in supporting user applications [1][2][3][5][7]. An SSI can be defined as the illusion [1][2], created by hardware or software, that presents a collection of resources as one, more powerful, unified resource.

II. SERVICES AND BENEFITS

The key services of a single-system image cluster include the following [1][3][4]:

Single entry point: A user can connect to the cluster as a virtual host (for example, telnet beowulf.myinstitute.edu), although the cluster may have multiple physical host nodes to serve the login session. The system transparently distributes the user's connection requests to different physical hosts to balance the load.

Single user interface: The user should be able to use the cluster through a single GUI. The interface must have the same look and feel as the one available for workstations (e.g., Solaris OpenWin or the Windows NT GUI).

Single process space: All user processes, no matter on which nodes they reside, have a unique cluster-wide process id. A process on any node can create child processes on the same or a different node (through a UNIX fork), and it should be able to communicate with any other process (through signals and pipes) on a remote node. Clusters should support globalised process management and allow processes to be managed and controlled as if they were running on local machines.

Single memory space: Users have the illusion of a big, centralised main memory, which in reality may be a set of distributed local memories. The software DSM approach has already been used to achieve a single memory space on clusters. Another approach is to let the compiler distribute an application's data structures across multiple nodes. It is still a challenging task to develop a single memory scheme that is efficient, platform independent, and able to support sequential binary codes.

Single I/O space (SIOS): This allows any node to perform I/O operations on locally or remotely located peripheral or disk devices. In the SIOS design, disks associated with cluster nodes, network-attached RAIDs, and peripheral devices form a single address space.

Single file hierarchy: On entering the system, the user sees a single, huge file-system image: a single hierarchy of files and directories under the same root directory that transparently integrates local and global disks and other file devices. Examples of a single file hierarchy include NFS, AFS, xFS, and Solaris MC Proxy.

Single virtual networking: Any node can access any network connection throughout the cluster domain, even if the network is not physically connected to all nodes in the cluster. Multiple networks support a single virtual network operation.

Single job-management system: Under a global job scheduler, a user job can be submitted from any node and can request any number of host nodes to execute it. Jobs can be scheduled to run in batch, interactive, or parallel modes. Examples of job-management systems for clusters include GLUnix, LSF, and CODINE.

Single control point and management: The entire cluster and each individual node can be configured, monitored, tested, and controlled from a single window using single GUI tools, much like an NT workstation managed by the Task Manager tool.

Checkpointing and process migration: Checkpointing is a software mechanism that periodically saves the process state and intermediate computing results in memory or on disk; this allows roll-back recovery after a failure. Process migration is needed for dynamic load balancing among the cluster nodes and for supporting checkpointing (a minimal application-level sketch of checkpointing follows this list).
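To make the checkpointing idea above concrete, the following C sketch shows a minimal, application-level form of it: the program periodically writes its state to disk and, on restart, rolls back to the last saved state. This is only an illustration under stated assumptions; the file name "app.ckpt", the state layout, and the checkpoint interval are invented for the example, and a real SSI checkpointer captures the full process image transparently rather than relying on the application.

/*
 * Minimal application-level checkpointing sketch (illustrative only).
 */
#include <stdio.h>

struct state {
    long   iteration;       /* where to resume the loop        */
    double partial_result;  /* intermediate computing result   */
};

/* Write the state to a temporary file, then atomically replace the
 * previous checkpoint so a crash never leaves a half-written file. */
static int save_checkpoint(const struct state *s)
{
    FILE *f = fopen("app.ckpt.tmp", "wb");
    if (!f)
        return -1;
    if (fwrite(s, sizeof *s, 1, f) != 1) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return rename("app.ckpt.tmp", "app.ckpt");
}

/* Roll back to the last checkpoint if one exists (cold start otherwise). */
static void load_checkpoint(struct state *s)
{
    struct state tmp;
    FILE *f = fopen("app.ckpt", "rb");
    if (!f)
        return;
    if (fread(&tmp, sizeof tmp, 1, f) == 1)
        *s = tmp;
    fclose(f);
}

int main(void)
{
    struct state s = { 0, 0.0 };
    load_checkpoint(&s);

    for (; s.iteration < 1000000; s.iteration++) {
        /* Checkpoint before doing the work of this iteration. */
        if (s.iteration % 100000 == 0 && save_checkpoint(&s) != 0)
            perror("checkpoint");
        s.partial_result += 1.0 / (s.iteration + 1);
    }
    printf("result = %f\n", s.partial_result);
    return 0;
}

Process migration can reuse the same idea: the saved state is simply restored on a different node rather than on the node that failed.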
The most important benefits of SSI include the following [1]:

It provides a simple, straightforward view of all system resources and activities from any node in the cluster.
It frees the end-user from having to know where in the cluster an application will run.
It allows the use of resources in a transparent way irrespective of their physical location.
It lets the user work with familiar interfaces and commands and allows the administrator to manage the entire cluster as a single entity.
It offers the same command syntax as other systems and thus reduces the risk of operator errors, with the result that end-users see improved performance, reliability, and higher availability of the system.
It allows system management and control to be centralised or decentralised, avoiding the need for skilled administrators for routine system administration.
It greatly simplifies system management and thus reduces the cost of ownership.
It provides location-independent message communication.
It helps system programmers reduce the time, effort, and knowledge required to perform tasks, and allows current staff to handle larger or more complex systems.
It promotes the development of standard tools and utilities.

III. SSI LAYERS/LEVELS

The two important characteristics of SSI [1][2] are:
1. Every SSI has a boundary.
2. SSI support can exist at different levels within a system, one level able to be built on another.

SSI can be implemented at one or more of the following levels: Hardware, Operating System (so-called underware [5]), Middleware (runtime subsystems), and Application.

Figure 1 shows the functional relationships among the key middleware packages. These middleware packages are used as interfaces between user applications and cluster hardware and OS platforms. They support each other at the management, programming, and implementation levels. A good SSI is usually obtained by co-operation between all these levels, as a lower level can simplify the implementation of a higher one.

Figure 1. The relationship between middleware modules [3].
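One concrete problem that an SSI layer at the operating-system or middleware level has to solve is cluster-wide naming, for example the single process space described in Section II. The C sketch below shows one possible encoding that packs a node number and a node-local pid into a single cluster-wide process id. It is purely illustrative: the bit widths and limits are assumptions made for this example and are not taken from MOSIX, GLUnix, or any other system mentioned in this paper.

/*
 * Illustrative cluster-wide process id encoding (assumed layout).
 */
#include <stdio.h>
#include <stdint.h>

#define NODE_BITS 10u                      /* assume at most 1024 nodes     */
#define PID_BITS  22u                      /* assume local pids fit 22 bits */
#define PID_MASK  ((1u << PID_BITS) - 1u)

typedef uint32_t cluster_pid_t;

static cluster_pid_t make_cluster_pid(uint32_t node_id, uint32_t local_pid)
{
    /* Caller must guarantee node_id fits in NODE_BITS. */
    return (node_id << PID_BITS) | (local_pid & PID_MASK);
}

static uint32_t home_node(cluster_pid_t cpid) { return cpid >> PID_BITS; }
static uint32_t local_pid(cluster_pid_t cpid) { return cpid & PID_MASK; }

int main(void)
{
    cluster_pid_t cpid = make_cluster_pid(3, 4711);   /* pid 4711 on node 3 */

    printf("cluster pid %u -> home node %u, local pid %u\n",
           (unsigned)cpid, (unsigned)home_node(cpid), (unsigned)local_pid(cpid));
    return 0;
}

Packing the home node into the id means any node can route a signal or pipe operation to the right place by inspecting the id alone; a real system must additionally keep the id stable when the process later migrates.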
A. Hardware Level

Systems such as Digital/Compaq Memory Channel [8] and hardware Distributed Shared Memory (DSM) [8] offer SSI at the hardware level and allow the user to view a cluster as a shared-memory system. Digital's Memory Channel is designed to provide a reliable, powerful, and efficient clustering interconnect. It provides a portion of global virtual shared memory by mapping portions of remote physical memory as local virtual memory (called reflective memory).

Memory Channel consists of two components: a PCI adapter and a hub. Adapters can also be connected directly to another adapter without using a hub. The host interfaces exchange heartbeat signals and implement flow-control timeouts to detect node failures or blocked data transfers. The link layer provides error detection through a 32-bit CRC generated and checked in hardware. Memory Channel uses a point-to-point, full-duplex, switched 8x8 crossbar implementation.

To enable communication over the Memory Channel network, applications map pages as read-only or write-only into their virtual address space (a minimal single-node analogy of this mapping model is sketched at the end of this section). Each host interface contains two page control tables (PCTs), one for write mappings and one for read mappings. For read-only pages, the page is pinned down in local physical memory. Several page attributes can be specified: receive enable, interrupt on receive, remote read, etc. If a page is mapped as write-only, a page table entry is created for an appropriate page in the interface's 128 MB of PCI address space. Page attributes can be used to store a local copy of each packet, to request an acknowledgement message from the receiver side for each packet, and to define packets as broadcast or point-to-point. Broadcasts are forwarded to each node attached to the network.

1) SCO UnixWare

UnixWare NonStop Clusters is SCO's high-availability software. It significantly broadens hardware support, making it easier and less expensive to deploy the most advanced clustering software for Intel systems. It is an extension to the UnixWare operating system in which all applications run better and more reliably inside a Single System Image (SSI) environment that removes the management burden. It features standard IP as the interconnect, removing the need for any proprietary interconnect.
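As a rough, single-node analogy for the page-mapping programming model described in the Hardware Level subsection, the C sketch below uses POSIX shared memory to map one region twice, once writable and once read-only, and then communicates through ordinary loads and stores. It is not the Memory Channel API: the real hardware reflects writes into remote physical memory through the PCI adapter. The region name "/ssi_demo" and the 4 KB size are arbitrary assumptions.

/*
 * Single-node analogy of reflective-memory style communication.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;

    int fd = shm_open("/ssi_demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }

    /* "Transmit" view: stores here stand in for writes that the real
     * hardware would reflect into remote nodes' memory.              */
    char *tx = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* "Receive" view: a read-only mapping of the same region.        */
    char *rx = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);

    if (tx == MAP_FAILED || rx == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);

    strcpy(tx, "written through the transmit mapping");
    printf("receive mapping sees: %s\n", rx);

    munmap(tx, len);
    munmap(rx, len);
    shm_unlink("/ssi_demo");
    return 0;
}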