Why Parallel? Parallel Processors, Types of Parallelism, Historical Perspective


Why Parallel

the greed for speed is a permanent malady

2 basic options:
❏ Build a faster uniprocessor
  • advantages
    • leverage off the sweet-spot technology
    • programs don't need to change
    • compilers may need to change to take advantage of intra-CPU parallelism
  • disadvantages
    • improved CPU performance is very costly - we already see diminishing returns
    • very large memories are slow

Parallel Processors

❏ The high end requires this approach
  • DOE's ASCI program for example
❏ Advantages
  • huge, partially unexplored set of options
❏ Disadvantages
  • software - optimized balance and change are required
  • overheads - a whole new set of organizational disasters are now possible
❏ Parallel processors
  • today implemented as an ensemble of microprocessors
  • SAN-style interconnect
  • large variation in how memory is treated

(University of Utah, CS6810, School of Computing)

Types of Parallelism

Note: many overlaps
  • lookahead & pipelining
  • vectorization
  • concurrency & simultaneity
  • data and control parallelism
  • partitioning & specialization
  • interleaving & overlapping of physical subsystems
  • multiplicity & replication
  • time & space sharing
  • multitasking & multiprogramming
  • multi-threading
  • distributed computing - for speed or availability

Historical Perspective

Table 1: Generation / Technology and Architecture / Software and Applications / Representative Systems

First (1945 - 1954)
  • Technology and architecture: vacuum tubes and relay memories; simple PC and ACC
  • Software and applications: machine language; single user; programmed I/O
  • Representative systems: ENIAC, Princeton IAS, IBM 701
Second (1955 - 1964)
  • Technology and architecture: discrete transistors; core memory; floating point arithmetic; I/O processors
  • Software and applications: Fortran & Cobol; subroutine libraries; batch processing OS
  • Representative systems: IBM 7090, CDC 1604, Univac LARC, Burroughs B5500
Third (1965 - 1974)
  • Technology and architecture: SSI and MSI ICs; microprogramming; pipelining, cache, and lookahead
  • Software and applications: more HLLs; multiprogramming and timesharing OS; protection and file system capability
  • Representative systems: IBM 360/370, CDC 6600, TI ASC, PDP-8
Fourth (1975 - 1990)
  • Technology and architecture: LSI/VLSI processors; semiconductor memory; vector supercomputers; multicomputers
  • Software and applications: multiprocessor OS; parallel languages; multiuser applications
  • Representative systems: VAX 9000, Cray X-MP, FPS T2000, IBM 3090
Fifth (1991 - present)
  • Technology and architecture: ULSI/VHSIC processors, memory, and switches; high-density packages and scalable architectures
  • Software and applications: MPP; grand challenge applications; distributed and heterogeneous processing; I/O becomes real
  • Representative systems: IBM SP, SGI Origin, Intel ASCI Red

What Changes When You Get More Than 1?

everything is the easy answer! 2 areas deserve special attention
❏ Communication
  • 2 aspects are always of concern: latency & bandwidth
  • before - I/O meant disk/etc. = slow latency & OK bandwidth
  • now - interprocessor communication = fast latency and high bandwidth - becomes as important as the CPU
❏ Resource allocation
  • smart programmer - programmed
  • smart compiler - static
  • smart OS - dynamic
  • hybrid - some of all of the above is the likely balance point

Inter-PE Communication: software perspective

❏ Implicit via memory
  • distinction of local vs. remote
  • implies some shared memory
  • sharing model and access model must be consistent
❏ Explicit via send and receive
  • need to know the destination and what to send
  • blocking vs. non-blocking option
  • usually seen as message passing

Inter-PE Communication: hardware perspective

❏ Senders and receivers
  • memory to memory
  • CPU to CPU
    • scalability issues
  • CPU activated/notified, but the transaction is memory to memory
  • which memory - registers, caches, main memory
❏ Efficiency requires
  • consistent SW & HW models
  • policies should not conflict

Communication Performance

critical for MP performance
❏ 3 key factors
  • bandwidth
    • does the interconnect fabric support the needs of the whole collection
  • latency
    • = sender overhead + time of flight + transmission time + receiver overhead
    • transmission time = interconnect overhead
  • latency hiding
    • capability of the processor nodes
    • lots of idle processors is not a good idea
detailed study of interconnects is the last chapter topic, since we need to understand I/O first

Flynn's Taxonomy - 1972

too simple, but it's the only one that moderately works
4 categories = (Single, Multiple) × (Instruction Stream, Data Stream)
❏ SISD - conventional uniprocessor system
  • still lots of intra-CPU parallelism options
❏ SIMD - vector and array style computers
  • started with ILLIAC
  • first accepted multiple-PE style systems
  • now has fallen behind the MIMD option
❏ MISD - ~ systolic or stream machines
  • example: iWarp and MPEG encoder
❏ MIMD - intrinsic parallel computers
  • lots of options - today's winner - our focus

MIMD Options

❏ Heterogeneous vs. homogeneous PEs
❏ Communication model
  • explicit: message passing
  • implicit: shared memory
  • oddball: some shared, some non-shared memory partitions
❏ Interconnection topology
  • which PE gets to talk directly to which PE
  • blocking vs. non-blocking
  • packet vs. circuit switched
  • wormhole vs. store and forward
  • combining vs. not
  • synchronous vs. asynchronous

The Easy and Cheap Obvious Option

❏ Microprocessors are cheap
❏ Memory chips are cheap
❏ Hook them up somehow to get n PEs
❏ Multiply each PE's performance by n and get an impressive number

What's wrong with this picture?
  • most uPs have been architected to be the only one in the system
  • most memories have only one port
  • interconnect is not just "somehow"
  • anybody who computes system performance with a single multiply is a moron

Ideal Performance - the Holy Grail

❏ Requires a perfect match between HW & SW
❏ Tough given static HW and dynamic SW
  • hard means cast in concrete
  • soft means the programmer can write anything
❏ Hence performance depends on:
  • the hardware: ISA, memory, cycle time, etc.
  • the software: OS, task switch, compiler, application code
❏ Simple performance model (aka uniprocessor)
  • CPU-time (T) = instruction-count (Ic) × CPI × cycle-time (τ)
❏ But CPI can vary by more than 10x

CPI Stretch Factors

❏ Conventional uniprocessor factors
  • TLB miss penalty, page fault penalty, cache miss penalty
  • pipeline stall penalty, OS fraction penalty
❏ Additional multiprocessor factors
  • shared memory
    • non-local access penalty
    • consistency maintenance penalty
  • message passing
    • send penalty, even for non-blocking
    • receive or notification penalty - task switch penalty (probably 2x)
    • body copy penalty
    • protection check penalty
    • etc. - the OS fraction typically goes up

The Idle Factor Paradox

❏ After the stretch factor, the performance equation becomes
  • T = Ic × CPI_stretch × τ
❏ For an ideally scalable n-PE system, T/n will be the CPU time required
❏ But idle time will create its own penalty
❏ Hence
  • T(n) = (Ic × CPI_stretch × τ) / Σ(i=1..n) (1 − %idle_i)
❏ What if %idle goes up faster than n?

Shared Memory UMA

Uniform Memory Access
❏ Sequent Symmetry S-81
  • symmetric ==> all PEs have the same access to I/O, memory, executive (OS) capability, etc.
  • asymmetric ==> capability at the PEs differs

[Figure: processors P0..Pn, each with a cache ($), joined by an interconnect (bus, crossbar, multistage, ...) to I/O modules I/O0..I/Oj and shared memories SM0..SMk]

Modern NUMA View

❏ All uPs set up for SMP
  • SMP ::= symmetric multiprocessor
  • communication is usually the front-side bus
  • example
    • Pentium III and 4 Xeons set up to support 2-way SMP
    • just tie the FSB wires
  • as clock speeds have gone up for n-way SMPs
    • FSB capacitance has reduced the value of n
❏ Chip-based SMPs
  • IBM's Power 4
    • 2 Power 3 cores on the same die
    • set up to support 4 cores

NUMA Shared Memory, opus 1: one level

Non-Uniform Memory Access
❏ e.g. BBN Butterfly + others

[Figure: nodes, each a local memory/processor pair LM0/P0 .. LMn/Pn, on a global interconnect. A transfer can be initiated by, and answered by, either LMx or Px; all options have been seen in practice. Today the nodes can be SMPs or CMPs, e.g. SUN, Compaq, IBM]

the easy and cheap option - just add interconnect

NUMA Shared Memory, opus 2: two level

❏ e.g. Univ. of Ill. Cedar + CMU CM* & C.mmp

[Figure: clusters of processor/cluster-shared-memory (P/CSM) pairs, each cluster joined by a cluster interconnect (CIN), with global shared memories (GSM) on a global interconnect]

COMA Shared Memory

Cache Only Memory Access
❏ e.g. KSR-1

[Figure: processors (P), each with a cache (C) and a directory (D), joined by an interconnect]

Lots of Other DSM Variants

❏ Cache consistency
  • DEC Firefly - up to 16 snooping caches in a workstation
❏ Directory-based consistency
  • like the COMA model but with a deeper memory hierarchy
  • e.g. Stanford DASH machine, MIT Alewife, Alliant FX-8
❏ Delayed consistency
  • many models for the delayed updates
  • a software protocol more than a hardware model
  • e.g. MUNIN - John Carter (good old U of U)
  • other models - Alan Karp and the IBM crew

NORMA Message Passing MIMD Machines

No remote memory access = message passing
❏ Many players:
  • Schlumberger FAIM-1
  • HPL Mayfly
  • CalTech Cosmic Cube and Mosaic
  • NCUBE

[Figure: memory/processor (M/P) nodes on a message-passing interconnect]
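The latency breakdown from the Communication Performance slide (sender overhead + time of flight + transmission time + receiver overhead) can be checked with a short sketch. All numbers below are hypothetical, chosen only to show that short messages are dominated by the fixed overheads while long messages are dominated by transmission time:

```python
def message_latency(sender_ovh, flight, msg_bytes, bandwidth, receiver_ovh):
    """One-way latency in microseconds:
    sender overhead + time of flight + transmission time + receiver overhead."""
    transmission = msg_bytes / bandwidth  # bandwidth in bytes per microsecond
    return sender_ovh + flight + transmission + receiver_ovh

# Hypothetical fabric: 5 us software overhead per side, 1 us time of flight,
# 1000 bytes/us (roughly 1 GB/s) of link bandwidth.
print(message_latency(5.0, 1.0, 64, 1000.0, 5.0))         # short message: 11.064 us
print(message_latency(5.0, 1.0, 1_000_000, 1000.0, 5.0))  # long message: 1011.0 us
```

The fixed 11 us of overhead barely matters for a megabyte transfer, which is why the slides treat bandwidth and latency as separate concerns.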
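The simple uniprocessor model T = Ic × CPI × τ, together with the CPI stretch factors, can be illustrated with a toy calculation. The penalty values here are invented for illustration only, not measurements:

```python
def cpu_time(ic, cpi, tau):
    """Uniprocessor model from the slides: T = Ic * CPI * tau."""
    return ic * cpi * tau

ic, tau = 1e9, 1e-9   # 10^9 instructions at a 1 ns cycle time
base_cpi = 1.0
# Hypothetical stretch: cache-miss, pipeline-stall, and OS-fraction
# penalties each inflate the effective CPI.
stretch_cpi = base_cpi + 0.6 + 0.3 + 0.1
print(cpu_time(ic, base_cpi, tau))     # 1.0 s, ideal
print(cpu_time(ic, stretch_cpi, tau))  # 2.0 s, after stretch
```

A 2x stretch is modest; the slides note that CPI can vary by more than 10x, which is why a single multiply is not a performance model.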
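One reading of the idle-factor equation is T(n) = (Ic × CPI_stretch × τ) / Σ(i=1..n) (1 − %idle_i), which reduces to the ideal T/n when no PE is ever idle. That reading, and the utilization curve below, are assumptions made for illustration, but they show directly what happens when %idle goes up as fast as n:

```python
def parallel_time(serial_time, idle):
    """Assumed idle-factor model: T(n) = serial_time / sum_i (1 - idle[i]),
    where serial_time stands for Ic * CPI_stretch * tau."""
    return serial_time / sum(1.0 - f for f in idle)

serial = 100.0  # seconds of stretched serial work (hypothetical)

# No idle time: n PEs deliver the ideal T/n.
print(parallel_time(serial, [0.0] * 10))   # 10.0

# Assumed pathology: each PE's idle fraction grows as 1 - 1/n,
# i.e. %idle rises as fast as n. The speedup vanishes entirely:
for n in (2, 4, 8, 16):
    print(n, parallel_time(serial, [1.0 - 1.0 / n] * n))  # 100.0 every time
```

When idle grows faster than 1 − 1/n, adding PEs makes the job slower, which is the paradox the slide is pointing at.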
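The explicit send/receive model, and the blocking vs. non-blocking option it mentions, can be mimicked in a few lines with a thread-safe queue standing in for the interconnect. This is a toy sketch of the idea, not a real message-passing API:

```python
import queue
import threading

channel = queue.Queue()   # stands in for the interconnect between two PEs

def pe0():
    # Send: the sender names the destination and what to send.
    channel.put(("PE1", "payload"))

t = threading.Thread(target=pe0)
t.start()

# Blocking receive: the receiver waits until a message arrives.
dest, body = channel.get()
t.join()
print(dest, body)

# Non-blocking receive: returns immediately whether or not data is ready.
try:
    channel.get_nowait()
except queue.Empty:
    print("no message pending")
```

The blocking variant couples sender and receiver timing; the non-blocking variant is what forces the notification and task-switch penalties listed under the CPI stretch factors.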