Cluster Computing White Paper
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Ada Departmental Supercomputer Shared Memory GPU Cluster
The Ada Departmental Supercomputer is designed to provide near top 500 class supercomputing capabilities at your office or lab.

Ada is a hybrid supercomputer consisting of a large memory head node and 2 to 5 compute nodes, each with eight AMD Radeon Instinct MI50 GPUs. With 5 compute nodes Ada contains 448 AMD EPYC processor cores, 40 MI50 GPUs and 2 or 4 TB of globally shared memory. The compute nodes are connected to the head node with 200 Gb/s Mellanox InfiniBand. The Ada departmental supercomputer can be configured to deliver 1060 TFLOPS of FP16, 532 TFLOPS of FP32 and 264 TFLOPS of FP64 GPU floating point performance, capable of operating on large computational models.

Ada is a true symmetric multi-processing (SMP) computer with a large shared memory and a single operating system user interface based on CentOS 8 Linux. It provides a 1 TB globally shared fast file system and a large disk storage array.

System Specifications
Processors: Head node: 2 AMD EPYC 7702 processors (64 cores, 2.0/3.3 GHz). Compute nodes: 1 AMD EPYC 7702P processor (64 cores, 2.2/3.2 GHz) and 8 AMD Radeon Instinct MI50 GPUs each.
Global Memory: 2 TB or 4 TB 3200 MHz DDR4
Compute Node Memory: 128 GB 3200 MHz DDR4 (each)
Storage: 1 TB on-board M.2 OS SSD; 12x 3.5" SATA/SAS hot-swap SSD/HDD bays (head node); additional 8x 2.5" SSD hot-swap bays on each compute node
Interconnect: ConnectX-6 VPI 200 Gb/s InfiniBand dual-port PCIe Gen 4 host bus adapters (no InfiniBand switch is needed)
I/O: 2x 1 Gb/s LAN ports
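The FP16/FP32/FP64 figures quoted above are simply the per-GPU peak rates of the Radeon Instinct MI50 multiplied by the 40 GPUs of a fully populated five-node system. A minimal sketch of that arithmetic, assuming the commonly cited MI50 peaks of 26.5/13.3/6.6 TFLOPS (an assumption, not stated in the excerpt):

```c
#include <stdio.h>

/* Assumed peak per-GPU rates for the AMD Radeon Instinct MI50:
 * 26.5 / 13.3 / 6.6 TFLOPS for FP16 / FP32 / FP64. */
static const double MI50_TFLOPS[3] = { 26.5, 13.3, 6.6 };
static const char  *PRECISION[3]   = { "FP16", "FP32", "FP64" };

int main(void)
{
    const int gpus_per_node = 8;
    const int compute_nodes = 5;                              /* fully populated Ada */
    const int total_gpus    = gpus_per_node * compute_nodes;  /* 40 GPUs */

    for (int i = 0; i < 3; i++)
        printf("%s aggregate peak: %.0f TFLOPS\n",
               PRECISION[i], MI50_TFLOPS[i] * total_gpus);
    return 0;
}
```

Running it reproduces the 1060, 532 and 264 TFLOPS aggregates quoted in the spec sheet.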
Sprite File System There Are Three Important Aspects of the Sprite File System: the Scale of the System, Location-Transparency, and Distributed State
Naming, State Management, and User-Level Extensions in the Sprite Distributed File System
Copyright 1990 Brent Ballinger Welch

CHAPTER 1 Introduction

This dissertation concerns network computing environments. Advances in network and microprocessor technology have caused a shift from stand-alone timesharing systems to networks of powerful personal computers. Operating systems designed for stand-alone timesharing hosts do not adapt easily to a distributed environment. Resources like disk storage, printers, and tape drives are not concentrated at a single point. Instead, they are scattered around the network under the control of different hosts. New operating system mechanisms are needed to handle this sort of distribution so that users and application programs need not worry about the distributed nature of the underlying system.

This dissertation explores the approach of centering a distributed computing environment around a shared network file system. The file system is chosen as a starting point because it is a heavily used service in stand-alone systems, and the read/write paradigm of the file system is a familiar one that can be applied to many system resources. The file system described in this dissertation provides a distributed name space for system resources, and it provides remote access facilities so all resources are available throughout the network. Resources accessible via the file system include disk storage, other types of peripheral devices, and user-implemented service applications. The resulting system is one where resources are named and accessed via the shared file system, and the underlying distribution of the system among a collection of hosts is not important to users.
Clustering with Openmosix
Maurizio Davini (Department of Physics and INFN Pisa)
Presented by Enrico Mazzoni (INFN Pisa)

Introduction
• What is openMosix?
  – Single-System Image
  – Preemptive Process Migration
  – The openMosix File System (MFS)
• Application Fields
• openMosix vs Beowulf
• The people behind openMosix
• The openMosix GNU project
• Fork of openMosix code

The openMosix Project Milestones
• Born early 80s on PDP-11/70: one full PDP and one disk-less PDP, hence the process migration idea.
• First implementation on BSD/pdp as M.Sc. thesis.
• VAX 11/780 implementation (different word size, different memory architecture)
• Motorola / VME bus implementation as Ph.D. thesis in 1993, under contract from IDF (Israeli Defence Forces)
• 1994 BSDi version
• GNU and Linux since 1997
• Contributed dozens of patches to the standard Linux kernel
• Split Mosix / openMosix November 2001
• Mosix standard in Linux 2.5?

What is openMosix?
• Linux kernel extension (2.4.20) for clustering
• Single System Image - like an SMP, for:
  – No need to modify applications
  – Adaptive resource management to dynamic load characteristics (CPU intensive, RAM intensive, I/O etc.)
  – Linear scalability (unlike SMP)

A two-tier technology
1. Information gathering and dissemination
  – Support scalable configurations by probabilistic dissemination algorithms
  – Same overhead for 16 nodes or 2056 nodes
2. Pre-emptive process migration that can migrate any process, anywhere, anytime - transparently
  – Supervised by adaptive ...
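The claim of "same overhead for 16 nodes or 2056 nodes" rests on gossip-style dissemination: each node periodically sends its load figure to one randomly chosen peer instead of broadcasting, so the per-round cost is independent of cluster size. The sketch below is only a user-space illustration of that idea, not the openMosix kernel code; the node count, window size, and load metric are invented for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NODES  16   /* cluster size (illustrative)       */
#define WINDOW  4   /* load-vector entries kept per node */

struct entry { int node; double load; long stamp; };

static struct entry window_[NODES][WINDOW];  /* each node's partial view */
static double load_[NODES];                  /* each node's own load     */

/* One gossip round: every node sends its own load to a single randomly
 * chosen peer, which overwrites its stalest window slot.  Cost per round
 * is O(WINDOW), independent of cluster size. */
static void gossip_round(long now)
{
    for (int n = 0; n < NODES; n++) {
        int peer = rand() % NODES;
        if (peer == n) continue;
        struct entry e = { n, load_[n], now };
        int oldest = 0;
        for (int i = 1; i < WINDOW; i++)
            if (window_[peer][i].stamp < window_[peer][oldest].stamp)
                oldest = i;
        window_[peer][oldest] = e;
    }
}

int main(void)
{
    srand((unsigned)time(NULL));
    for (int n = 0; n < NODES; n++)
        load_[n] = (double)(rand() % 100);   /* synthetic CPU load */

    for (long round = 1; round <= 10; round++)
        gossip_round(round);

    /* print node 0's partial, probabilistically gathered view */
    for (int i = 0; i < WINDOW; i++)
        printf("node 0 knows: node %d load %.0f (round %ld)\n",
               window_[0][i].node, window_[0][i].load, window_[0][i].stamp);
    return 0;
}
```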
Distributed Virtual Machines: a System Architecture for Network Computing
Emin Gün Sirer, Robert Grimm, Arthur J. Gregory, Nathan Anderson, Brian N. Bershad
{egs,rgrimm,artjg,nra,bershad}@cs.washington.edu
http://kimera.cs.washington.edu
Dept. of Computer Science & Engineering, University of Washington, Seattle, WA 98195-2350

Abstract
Modern virtual machines, such as Java and Inferno, are emerging as network computing platforms. While these virtual machines provide higher-level abstractions and more sophisticated services than their predecessors from twenty years ago, their architecture has essentially remained unchanged. State of the art virtual machines are still monolithic, that is, they are comprised of closely-coupled service components, which are thus replicated over all computers in an organization. This crude replication of services forms one of the weakest points in today's networked systems, as it creates widely acknowledged and well-publicized problems of security, manageability and performance. We have designed and implemented a new system architecture for network computing based on distributed virtual machines. In our system, virtual machine services that perform rule checking and code transformation are factored out of clients and are located in enterprise-wide network servers. The services operate by intercepting application code and modifying it on the fly to provide additional service functionality. This architecture reduces client resource demands and the size of the trusted computing base, establishes physical isolation between virtual machine services and creates a single point of administration. We demonstrate that such a distributed virtual machine architecture can provide substantially better integrity and manageability than a monolithic architecture, scales well with increasing numbers of clients, and does not entail high overhead.
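The mechanism the abstract relies on is code transformation at a network service: mobile code is intercepted on its way to the client and rewritten to carry extra checks. The following toy sketch shows only that rewriting step, over a made-up one-byte instruction set; the opcodes and the inserted policy check are illustrative assumptions, not Java bytecode or the paper's actual rules.

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy one-byte instruction set (illustrative, not real bytecode). */
enum { OP_NOP = 0, OP_LOAD, OP_STORE, OP_NET_SEND, OP_CHECK_POLICY };

/* Rewrite pass run by the transformation service: copy the program,
 * inserting an OP_CHECK_POLICY before every OP_NET_SEND so the rule is
 * enforced at run time on the client. */
static unsigned char *rewrite(const unsigned char *in, size_t n, size_t *out_n)
{
    unsigned char *out = malloc(2 * n);   /* worst case: every op doubled */
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        if (in[i] == OP_NET_SEND)
            out[k++] = OP_CHECK_POLICY;
        out[k++] = in[i];
    }
    *out_n = k;
    return out;
}

int main(void)
{
    unsigned char prog[] = { OP_LOAD, OP_NET_SEND, OP_STORE, OP_NET_SEND };
    size_t n;
    unsigned char *instrumented = rewrite(prog, sizeof prog, &n);
    for (size_t i = 0; i < n; i++)
        printf("%zu: opcode %d\n", i, instrumented[i]);
    free(instrumented);
    return 0;
}
```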
Workstation Operating Systems Mac OS 9
15-410 “Now that we've covered the 1970's...”
Plan 9
Nov. 25, 2019
Dave Eckhardt

Overview
"The land that time forgot"
What style of computing?
  The death of timesharing
  The "Unix workstation problem"
Design principles
Name spaces
File servers
The TCP file system...
Runtime environment

The Land That Time Forgot
The "multi-core revolution" already happened once
  1982: VAX-11/782 (dual-core)
  1984: Sequent Balance 8000 (12 x NS32032)
  1985: Encore MultiMax (20 x NS32032)
  1990: Omron Luna88k workstation (4 x Motorola 88100)
  1991: KSR1 (1088 x KSR1)
  1991: "MCS" paper on multi-processor locking algorithms
  1995: BeBox workstation (2 x PowerPC 603)
Wow! Why was 1995-2004 ruled by single-core machines?
What operating systems did those multi-core machines run?

The Land That Time Forgot
Why was 1995-2004 ruled by single-core machines?
  In 1995 Intel + Microsoft made it feasible to buy a fast processor that fit on one chip, a fast I/O bus, multiple megabytes of RAM, and an OS with memory protection.
  Everybody could afford a "workstation", so everybody bought one.
  Massive economies of scale existed in the single-processor "Wintel" universe.
Distributed Operating Systems
ANDREW S. TANENBAUM and ROBBERT VAN RENESSE
Department of Mathematics and Computer Science, Vrije Universiteit, Amsterdam, The Netherlands

Distributed operating systems have many aspects in common with centralized ones, but they also differ in certain ways. This paper is intended as an introduction to distributed operating systems, and especially to current university research about them. After a discussion of what constitutes a distributed operating system and how it is distinguished from a computer network, various key design issues are discussed. Then several examples of current research projects are examined in some detail, namely, the Cambridge Distributed Computing System, Amoeba, V, and Eden.

Categories and Subject Descriptors: C.2.4 [Computer-Communications Networks]: Distributed Systems - network operating system; D.4.3 [Operating Systems]: File Systems Management - distributed file systems; D.4.5 [Operating Systems]: Reliability - fault tolerance; D.4.6 [Operating Systems]: Security and Protection - access controls; D.4.7 [Operating Systems]: Organization and Design - distributed systems

General Terms: Algorithms, Design, Experimentation, Reliability, Security
Additional Key Words and Phrases: File server

INTRODUCTION
Everyone agrees that distributed systems are going to be very important in the future. Unfortunately, not everyone agrees on what they mean by the term "distributed system." In this paper we present a viewpoint widely held within academia about what is and is not a distributed system, we ...

... more convenient to use than the bare machine. Examples of well-known centralized (i.e., not distributed) operating systems are CP/M, MS-DOS, and UNIX. A distributed operating system is one that looks to its users like an ordinary centralized operating system but runs on multiple, independent central processing units (CPUs).
Containers: a Sound Basis for a True Single System Image
Renaud Lottiaux, Christine Morin

To cite this version: Renaud Lottiaux, Christine Morin. Containers: A Sound Basis For a True Single System Image. [Research Report] RR-4085, INRIA. 2000. inria-00072548. https://hal.inria.fr/inria-00072548 (submitted 24 May 2006)

INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
Containers: A Sound Basis For a True Single System Image
Renaud Lottiaux, Christine Morin
Rapport de recherche n° 4085, November 2000, 19 pages. Thème 1: Réseaux et systèmes, Projet PARIS. ISSN 0249-6399, ISRN INRIA/RR--4085--FR+ENG

Abstract: Clusters of SMPs are attractive for executing shared memory parallel applications but reconciling high performance and ease of programming remains an open issue. A possible approach is to provide an efficient Single System Image (SSI) operating system giving the illusion of an SMP machine. In this paper, we introduce the concept of container as a mechanism to unify global resource management at the lowest operating system level.
A Single System Image Java Operating System for Sensor Networks
Emin Gun Sirer, Rimon Barr, John C. Bicket, Daniel S. Dantas
Computer Science Department, Cornell University, Ithaca, NY 14853
{egs, barr, bicket, ddantas}@cs.cornell.edu

Abstract
In this paper we describe the design and implementation of a distributed operating system for sensor networks. The goal of our system is to extend total system lifetime through power-aware adaptation for sensor networking applications. Our system achieves this goal by providing a single system image of a unified Java virtual machine to applications over an ad hoc collection of heterogeneous sensors. It automatically and transparently partitions applications into components and dynamically finds a placement of these components on nodes within the sensor network to reduce energy consumption and increase system longevity. This paper describes the design and implementation of our system and examines the question of where and when to migrate components in a sensor network. We evaluate two practical, power-aware, general-purpose algorithms for object placement, as well as an adaptive scheme for deciding the time granularity of object migration. We demonstrate that our algorithms can increase sensor network longevity by a factor of four to five by effectively distributing energy consumption and avoiding hotspots.

1. Introduction
Sensor networks simultaneously promise a radically new class of applications and pose significant challenges for application development. ...

... able to components at each node, in particular the available power and bandwidth may change over time and necessitate the relocation of application components. Further, event sources that are being ...
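The placement problem described in the abstract can be illustrated with a simple greedy heuristic: assign each application component to the node that minimizes an estimated energy cost while steering clear of nearly drained batteries. This is only a sketch of the general idea, not the paper's two algorithms; the cost model and all numbers below are assumptions.

```c
#include <stdio.h>
#include <float.h>

#define NODES      4
#define COMPONENTS 3

/* Assumed cost model: energy = traffic(component) * distance(component's
 * data source, node) / remaining battery fraction of the node. */
static const double traffic[COMPONENTS]     = { 5.0, 1.0, 3.0 };  /* kB/s */
static const double dist[COMPONENTS][NODES] = {                   /* hops */
    { 1, 2, 3, 2 },
    { 3, 1, 1, 2 },
    { 2, 2, 1, 3 },
};
static double battery[NODES] = { 0.9, 0.5, 0.7, 0.2 };  /* fraction left */

int main(void)
{
    for (int c = 0; c < COMPONENTS; c++) {
        int best = -1;
        double best_cost = DBL_MAX;
        for (int n = 0; n < NODES; n++) {
            if (battery[n] < 0.25) continue;           /* avoid hotspots */
            double cost = traffic[c] * dist[c][n] / battery[n];
            if (cost < best_cost) { best_cost = cost; best = n; }
        }
        printf("component %d -> node %d (cost %.2f)\n", c, best, best_cost);
        battery[best] -= 0.05;     /* crude accounting of the added drain */
    }
    return 0;
}
```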
"Load Balancing Experiments in Openmosix", International Conference on Computers and Their Applications, Seattle WA, March 2006
Machine Learning Approach to Tuning Distributed Operating System Load Balancing Algorithms
Dr. J. Michael Meehan and Alan Ritter
Computer Science Department, Western Washington University, Bellingham, Washington, 98225 USA
[email protected], [email protected]

Abstract
This work concerns the use of machine learning techniques (genetic algorithms) to optimize load balancing policies in the openMosix distributed operating system. Parameters/alternative algorithms in the openMosix kernel were dynamically altered/selected based on the results of a genetic algorithm fitness function. In this fashion optimal parameter settings and algorithm choices were sought for the loading scenarios used as the test cases.

1 Introduction
The objective of this work is to discover ways to improve the openMosix load balancing algorithm. The approach we have taken is to create entries in the /proc files which can be used to set parameter values and select alternative algorithms used in the ...

... 2.6 Linux kernel openMOSIX patch was in the process of being developed.

2.1 Load Balancing in openMosix
The standard load balancing policy for openMosix uses a probabilistic, decentralized approach to disseminate load balancing information to other nodes in the cluster. [4] This allows load information to be distributed efficiently while providing excellent scalability. The major components of the openMosix load balancing scheme are the information dissemination and migration kernel daemons. The information dissemination daemon runs on each node and is responsible for sending and receiving load messages to/from other nodes. The migration daemon receives migration requests from other nodes and, if willing, carries out the migration of the process. Thus, the system is sender initiated to offload excess load.
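The tuning loop described here can be sketched as a plain genetic algorithm whose genome is a vector of load-balancer parameters. In the real experiments the fitness of a candidate would be measured by writing its values into the kernel's /proc tuning entries and timing a test workload; since those entries are specific to the modified kernel, the sketch below substitutes a synthetic fitness function, and the parameter names and ranges are assumptions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define POP         8
#define GENES       3   /* e.g. report interval, migration threshold, decay */
#define GENERATIONS 20

/* Stand-in fitness: the real version would write the candidate values to
 * /proc entries, run the workload, and time it.  Here we just prefer
 * values near an assumed optimum so the loop runs on its own. */
static double fitness(const double *g)
{
    const double target[GENES] = { 2.0, 120.0, 0.8 };   /* assumed optimum */
    double err = 0;
    for (int i = 0; i < GENES; i++)
        err += (g[i] - target[i]) * (g[i] - target[i]);
    return -err;                                        /* higher is better */
}

static double rnd(double lo, double hi)
{
    return lo + (hi - lo) * rand() / (double)RAND_MAX;
}

int main(void)
{
    double pop[POP][GENES];
    srand((unsigned)time(NULL));

    for (int p = 0; p < POP; p++) {            /* random initial population */
        pop[p][0] = rnd(1, 10);
        pop[p][1] = rnd(50, 500);
        pop[p][2] = rnd(0, 1);
    }

    for (int gen = 0; gen < GENERATIONS; gen++) {
        double next[POP][GENES];
        for (int p = 0; p < POP; p++) {
            /* tournament selection of two parents */
            int a = rand() % POP, b = rand() % POP;
            int mum = fitness(pop[a]) > fitness(pop[b]) ? a : b;
            a = rand() % POP; b = rand() % POP;
            int dad = fitness(pop[a]) > fitness(pop[b]) ? a : b;
            for (int i = 0; i < GENES; i++) {  /* crossover plus small mutation */
                next[p][i] = (rand() & 1) ? pop[mum][i] : pop[dad][i];
                next[p][i] += rnd(-0.05, 0.05) * next[p][i];
            }
        }
        memcpy(pop, next, sizeof pop);
    }

    int best = 0;
    for (int p = 1; p < POP; p++)
        if (fitness(pop[p]) > fitness(pop[best])) best = p;
    printf("best parameters: %.2f %.2f %.2f (fitness %.3f)\n",
           pop[best][0], pop[best][1], pop[best][2], fitness(pop[best]));
    return 0;
}
```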
HPC with Openmosix
Ninan Sajeeth Philip
St. Thomas College Kozhencheri
IMSc, Jan 2005

Acknowledgements
• This document uses slides and image clippings available on the web and in books on HPC. Credit is due to their original designers!

Overview
• Mosix to openMosix
• Why openMosix?
• Design Concepts
• Advantages
• Limitations

The Scenario
• We have MPI and it's pretty cool, so why do we need another solution?
• Well, MPI is a specification for cluster communication and is not a solution.
• Two types of bottlenecks exist in HPC: hardware and software (OS) level.

Hardware limitations for HPC (figure slide)

The Scenario
• We are approaching the speed and size limits of the electronics
• Major share of possible optimization remains with the software part - the OS level

Hardware limitations for HPC (figure slide)

How Clusters Work?
Conventional supercomputers achieve their speed using extremely optimized hardware that operates at very high speed. Then, how do the clusters out-perform them? Simple, they cheat. While the supercomputer is optimized in hardware, the cluster is so in software. The cluster breaks down a problem in a special way so that it can distribute all the little pieces to its constituents. That way the overall problem gets solved very efficiently. - A Brief Introduction To Commodity Clustering, Ryan Kaulakis

What is MOSIX?
• MOSIX is a software solution to minimise OS level bottlenecks - originally designed to improve performance of MPI and PVM on cluster systems
• http://www.mosix.org
• Not open source; free for personal and academic use

MOSIX, more technically speaking:
• MOSIX is a Single System Image (SSI) cluster that allows automated load balancing across nodes through preemptive process migrations.
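Because openMosix migrates ordinary Linux processes transparently, a program only needs to create enough independent processes for the cluster to balance; no message-passing code is required. A minimal sketch of that usage pattern follows (the CPU-bound worker is a placeholder, and the migration itself would be performed by the openMosix kernel, not by this program).

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Placeholder CPU-bound job; on an openMosix cluster each such process
 * is a candidate for transparent migration to a less loaded node. */
static double burn(long iters)
{
    double x = 0.0001;
    for (long i = 0; i < iters; i++)
        x = x * 1.0000001 + 0.000001;
    return x;
}

int main(int argc, char **argv)
{
    int workers = (argc > 1) ? atoi(argv[1]) : 8;

    for (int w = 0; w < workers; w++) {
        if (fork() == 0) {                  /* child: one independent job */
            printf("worker %d -> %f\n", w, burn(200000000L));
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                  /* parent: reap all children */
        ;
    return 0;
}
```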
Introduction Course Outline Why Does This Fail? Lectures Tutorials
Introduction
COMP3231/9201/3891/9283 (Extended) Operating Systems
Kevin Elphinstone

Course Outline
• Prerequisites
  – COMP2011 Data Organisation
    • Stacks, queues, hash tables, lists, trees, heaps, ...
  – COMP2021 Digital Systems Structure
    • Assembly programming
    • Mapping of high-level procedural language to assembly language
  – or the postgraduate equivalent
  – You are expected to be competent programmers!!!!
• We will be using the C programming language
  – The dominant language for OS implementation.
  – Need to understand pointers, pointer arithmetic, explicit memory allocation.

Why does this fail? (A corrected version is shown at the end of this item.)

void func(int *x, int *y)
{
    *x = 1;
    *y = 2;
}

void main()
{
    int *a, *b;
    func(a,b);
}

Lectures
• Common for all courses (3231/3891/9201/9283)
• Wednesday, 2-4pm
• Thursday, 5-6pm
  – All lectures are here (EE LG03)
  – The lecture notes will be available on the course web site
    • Available prior to lectures, when possible.
  – The lecture notes and textbook are NOT a substitute for attending lectures.

Tutorials
• Start in week 2
• A tutorial participation mark will contribute to your final assessment.
  – Participation means participation, NOT attendance.
  – Comp9201/3891/9283 students excluded
• You will only get participation marks in your enrolled tutorial.

Assignments
• Assignments form a substantial component of your assessment.
• They are challenging!!!!
  – Because operating systems are challenging
• We will be using OS/161,
  – an educational operating system
  – developed by the Systems Group at Harvard
  – It contains roughly 20,000 lines of code and comments

Assignments
• Assignments are in pairs
  – Info on how to pair up available soon
• We usually offer advanced versions of the assignments
  – Available bonus marks are small compared to amount of ...

Assignments
• Don't underestimate the time needed to do the assignments.
  – ProfQuotes: [About the midterm] "We can't keep you working on it all night, it's not OS." Ragde, CS341
• If you start a couple days before they are due, you will be late.
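The answer to the "Why does this fail?" slide: a and b are uninitialized pointers, so func writes through garbage addresses, which is undefined behaviour and typically crashes with a segmentation fault (main should also return int). A corrected version passes the addresses of real variables:

```c
#include <stdio.h>

void func(int *x, int *y)
{
    *x = 1;
    *y = 2;
}

int main(void)
{
    int a, b;             /* real storage, not dangling pointers */
    func(&a, &b);         /* pass their addresses */
    printf("%d %d\n", a, b);
    return 0;
}
```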
The Utility Coprocessor: Massively Parallel Computation from the Coffee Shop
John R. Douceur, Jeremy Elson, Jon Howell, and Jacob R. Lorch
Microsoft Research

Abstract
UCop, the "utility coprocessor," is middleware that makes it cheap and easy to achieve dramatic speedups of parallelizable, CPU-bound desktop applications using utility computing clusters in the cloud. To make UCop performant, we introduced techniques to overcome the low available bandwidth and high latency typical of the networks that separate users' desktops from a utility computing service. To make UCop economical and easy to use, we devised a scheme that hides the heterogeneity of client configurations, allowing a single cluster to serve virtually everyone: in our Linux-based prototype, the only requirement is that users and the cluster are using the same major kernel version. This paper presents the design, implementation, and ...

... slow jobs that take several minutes without UCop become interactive (15-20 seconds) with it. Thanks to the recent emergence of utility-computing services like Amazon EC2 [8] and FlexiScale [45], which rent computers by the hour on a moment's notice, anyone with a credit card and $10 can use UCop to speed up his own parallel applications.

One way to describe UCop is that it effectively converts application software into a scalable cloud service targeted at exactly one user. This goal entails five requirements. Configuration transparency means the service matches the user's application, library, and configuration state. Non-invasive installation means UCop works with a user's existing file system and application configuration.