Dataflow: Passing the Token

Arvind
Computer Science and Artificial Intelligence Lab, M.I.T.
ISCA 2005, Madison, WI, June 6, 2005

Inspiration: Jack Dennis

General-purpose parallel machines based on a dataflow graph model of computation. Inspired all the major players in dataflow during the seventies and eighties, including Kim Gostelow and me @ UC Irvine.

Dataflow Machines

• Static - mostly for signal processing
  – NEC - NEDIP and IPP
  – Hughes, Hitachi, AT&T, Loral, TI, Sanyo
  – M.I.T. Engineering model
• Dynamic
  – Manchester ('81) (John Gurd)
  – M.I.T. - TTDA, Monsoon ('88) (Greg Papadopoulos)
  – M.I.T./Motorola - Monsoon ('91) (8 PEs, 8 IS) (Andy Boughton, Chris Joerg, Jack Costanza)
  – ETL - SIGMA-1 ('88) (128 PEs, 128 IS): the largest dataflow machine, shown at Supercomputing 91 (K. Hiraki, T. Shimada)
  – ETL - EM4 ('90) (80 PEs): single-chip dataflow micro, shown at Supercomputing 96; EM-X ('96) (80 PEs) (S. Sakai, Y. Kodama)
  – Sandia - EPS88, EPS-2
  – IBM - Empire
  – ...
• Related machines: Burton Smith's Denelcor HEP, Horizon, Tera

Software Influences

• Parallel Compilers
  – Intermediate representations: DFG, CDFG (SSA, φ, ...)
  – Software pipelining (Keshav Pingali, G. Gao, Bob Rau, ...)
• Functional Languages and their compilers
• Active Messages (David Culler)
• Compiling for FPGAs, ... (Wim Bohm, Seth Goldstein, ...)
• Synchronous dataflow - Lustre, Signal (Ed Lee @ Berkeley)

This talk is mostly about MIT work

• Dataflow graphs
  – A clean model of parallel computation
• Static Dataflow Machines
  – Not general-purpose enough
• Dynamic Dataflow Machines
  – As easy to build as a simple pipelined processor
• The software view
  – The memory model: I-structures
• Monsoon and its performance
• Musings

Dataflow Graphs

{x = a + b; y = b * 7 in (x - y) * (x + y)}

The expression compiles into a graph of five operators:

  1: +   x = a + b
  2: *7  y = b * 7
  3: -   x - y
  4: +   x + y
  5: *   (x - y) * (x + y)

• Values in dataflow graphs are represented as tokens:
  token = <ip, p, v> - instruction ptr, port, data
  (e.g., a token bound for the left input of instruction 3 has ip = 3, p = L)
• An operator executes when all its input tokens are present; copies of the result token are distributed to the destination operators
• There is no separate control flow

Dataflow Operators

• A small set of dataflow operators can be used to define a general programming language
  – Fork: copies its input token to both outputs
  – Primitive Ops (e.g., +): fire when a token is present on each input port
  – Switch: steers the data input to its T or F output, as selected by a boolean control token
  – Merge: forwards a token from its T or F data input, as selected by a boolean control token

Well Behaved Schemas

A schema is well behaved if it is one-in-one-out & self cleaning: given one token on each input (Before), it eventually delivers exactly one token on each output (After) and leaves no tokens inside. Conditional, Loop, and Bounded Loop schemas built from Switch, Merge, a predicate p, and well-behaved subgraphs f and g are themselves well behaved. This discipline is needed for resource management.

Outline

• Dataflow graphs √
  – A clean model of parallel computation
• Static Dataflow Machines ←
  – Not general-purpose enough
• Dynamic Dataflow Machines
  – As easy to build as a simple pipelined processor
• The software view
  – The memory model: I-structures
• Monsoon and its performance
• Musings

Static Dataflow Machine: Instruction Templates

The example graph compiles into instruction templates:

  1: +  destinations 3L, 4L
  2: *  destinations 3R, 4R
  3: -  destination 5L
  4: +  destination 5R
  5: *  destination out

Each template holds an Opcode, Destination 1, Destination 2, Operand 1, Operand 2, and Presence bits. Each arc in the graph has an operand slot in the program.
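To make the template machinery concrete, here is a minimal sketch of a static-dataflow interpreter in Python. It is an illustration under stated assumptions, not Jack Dennis's hardware: the Template class, the ready list standing in for the token queue, and the treatment of the literal 7 as a sticky constant operand are all inventions of this sketch.

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

class Template:
    """One instruction template: opcode, destination slots, and two operand
    slots with presence bits; an optional literal acts as a sticky operand."""
    def __init__(self, opcode, dests, const=None):
        self.opcode, self.dests, self.const = opcode, dests, const
        self.reset()

    def reset(self):
        self.operands = [None, self.const]               # slot 0 = L port, slot 1 = R port
        self.present = [False, self.const is not None]   # presence bits

def send(program, ready, addr, port, value):
    t = program[addr]
    slot = 0 if port == 'L' else 1
    t.operands[slot], t.present[slot] = value, True
    if all(t.present):                                   # firing rule: all inputs present
        ready.append(addr)

def run(program, initial_tokens):
    ready, out = [], []
    for addr, port, v in initial_tokens:
        send(program, ready, addr, port, v)
    while ready:
        t = program[ready.pop()]
        v = OPS[t.opcode](*t.operands)
        t.reset()                                        # self-cleaning: free the slots
        for addr, port in t.dests:                       # distribute result copies
            if addr == 'out':
                out.append(v)
            else:
                send(program, ready, addr, port, v)
    return out

# The example graph {x = a + b; y = b * 7 in (x - y) * (x + y)}:
program = {1: Template('+', [(3, 'L'), (4, 'L')]),
           2: Template('*', [(3, 'R'), (4, 'R')], const=7),
           3: Template('-', [(5, 'L')]),
           4: Template('+', [(5, 'R')]),
           5: Template('*', [('out', None)])}
a, b = 5, 3
print(run(program, [(1, 'L', a), (1, 'R', b), (2, 'L', b)]))   # -> [-377]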
Static Dataflow Machine (Jack Dennis, 1973)

A processor holds the instruction templates (each row: Op, dest1, dest2, p1, src1, p2, src2). A Receive unit deposits incoming tokens into operand slots, enabled instructions are dispatched to a bank of functional units (FU ... FU), and a Send unit emits the result tokens <s1, p1, v1>, <s2, p2, v2>.
• Many such processors can be connected together
• Programs can be statically divided among the processors

Static Dataflow: Problems/Limitations

• Mismatch between the model and the implementation
  – The model requires unbounded FIFO token queues per arc but the architecture provides storage for one token per arc
  – The architecture does not ensure FIFO order in the reuse of an operand slot
  – The merge operator has a unique firing rule
• The static model does not support
  – Function calls
  – Data Structures
  There is no easy solution in the static framework; dynamic dataflow provided a framework for solutions

Outline

• Dataflow graphs √
  – A clean model of parallel computation
• Static Dataflow Machines √
  – Not general-purpose enough
• Dynamic Dataflow Machines ←
  – As easy to build as a simple pipelined processor
• The software view
  – The memory model: I-structures
• Monsoon and its performance
• Musings

Dynamic Dataflow Architectures

• Allocate instruction templates - i.e., a frame - dynamically to support each loop iteration and procedure call
  – termination detection needed to deallocate frames
• The code can be shared if we separate the code and the operand storage

  a token: <fp, ip, port, data> - fp = frame pointer, ip = instruction pointer

A Frame in Dynamic Dataflow

Program (shared):
  1: +  destinations 3L, 4L
  2: *  destinations 3R, 4R
  3: -  destination 5L
  4: +  destination 5R
  5: *  destination out

Frame (per activation): one operand slot per instruction; a token <fp, ip, p, v> names its slot by frame pointer and instruction pointer. Need to provide storage for only one operand per operator (see the matching sketch after the pipeline slides below).

Monsoon Processor (Greg Papadopoulos)

Pipeline: Instruction Fetch (ip indexes the Code memory; instruction: op r, dest1, dest2) → Operand Fetch (address fp + r in the Frame memory) → ALU → Form Token → Network, with a Token Queue feeding tokens back into the pipeline.

Temporary Registers & Threads (Robert Iannucci)

Instruction: op r, S1, S2. There are n sets of registers (n = pipeline depth).
• Registers evaporate when an instruction thread is broken
• Registers are also used for exceptions & interrupts

Actual Monsoon Pipeline: Eight Stages

• Instruction Fetch - 32-bit instructions from Instruction Memory
• Effective Address
• Presence Bit Operation - 3 presence bits per location
• Frame Operation - 72-bit Frame Memory, 2R, 2W
• ALU - two 72-bit operands (144 bits), alongside the Registers
• Form Token - produces 144-bit tokens for the Network
User and System token Queues buffer tokens between the network and the pipeline.

Instructions directly control the pipeline

The opcode specifies an operation for each pipeline stage:

  opcode  r  dest1 [dest2]
  EA | WM | RegOp | ALU | FormToken

Easy to implement; no hazard detection.
• EA - effective address
  – FP + r: frame relative
  – r: absolute
  – IP + r: code relative (not supported)
• WM - waiting-matching
  – Unary; Normal; Sticky; Exchange; Imperative
  – PBs × port → PBs × Frame op × ALU inhibit
• Register ops - ALU: VL × VR → V'L × V'R, CC
• Form token: VL × VR × Tag1 × Tag2 × CC → Token1 × Token2
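The essence of this design is the waiting-matching step: the first operand token for a two-input instruction waits in its frame slot (presence bit set); its partner later extracts it, clears the slot, and fires. Below is a minimal sketch of that explicit-token-store rule, assuming only the Normal matching mode and using a Python dict as the frame store; the Instr class, the token tuples, and the 'out' sentinel destination are assumptions of the sketch, not Monsoon's actual encoding.

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

class Instr:
    """Shared code: opcode, frame offset r of the operand slot, destinations."""
    def __init__(self, opcode, r, dests):
        self.opcode, self.r, self.dests = opcode, r, dests

def step(code, frames, token):
    fp, ip, port, v = token
    ins = code[ip]
    slot = (fp, ins.r)                        # effective address: frame pointer + r
    if slot not in frames:                    # presence bit empty: store and wait
        frames[slot] = (port, v)
        return []
    _, w = frames.pop(slot)                   # presence bit full: extract, clear, fire
    left, right = (v, w) if port == 'L' else (w, v)
    result = OPS[ins.opcode](left, right)
    return [(fp, dip, dport, result) for dip, dport in ins.dests]

def run(code, tokens):
    frames, results = {}, []
    while tokens:                             # the token queue
        t = tokens.pop()
        if t[1] == 'out':                     # result token leaving the graph
            results.append((t[0], t[3]))
        else:
            tokens.extend(step(code, frames, t))
    return results

code = {1: Instr('+', 1, [(3, 'L'), (4, 'L')]),    # x = a + b
        2: Instr('*', 2, [(3, 'R'), (4, 'R')]),    # y = b * 7
        3: Instr('-', 3, [(5, 'L')]),
        4: Instr('+', 4, [(5, 'R')]),
        5: Instr('*', 5, [('out', None)])}

# Two activations of the same code, distinguished only by their frame pointers:
tokens = []
for fp, a, b in [(100, 5, 3), (200, 2, 4)]:
    tokens += [(fp, 1, 'L', a), (fp, 1, 'R', b), (fp, 2, 'L', b), (fp, 2, 'R', 7)]
print(run(code, tokens))   # e.g. [(200, -748), (100, -377)]; order depends on scheduling

Because operand storage lives in per-activation frames rather than in the code, the two activations interleave freely without confusing their tokens, which is exactly what the static model could not do.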
Procedure Linkage Operators

To apply f to arguments a1 ... an:
• get frame allocates a frame for the callee; extract tag yields the callee's tag, which a fork distributes
• change Tag 1 ... change Tag n retarget the argument tokens a1 ... an to entry points 1: ... n: of the graph for f; change Tag 0 passes the return point
• Changing a tag moves a token in frame 0 to become a token in frame 1, and back again for the result
• Like standard call/return, but caller & callee can be active simultaneously

Outline

• Dataflow graphs √
  – A clean model of parallel computation
• Static Dataflow Machines √
  – Not general-purpose enough
• Dynamic Dataflow Machines √
  – As easy to build as a simple pipelined processor
• The software view ←
  – The memory model: I-structures
• Monsoon and its performance
• Musings

Parallel Language Model

A tree of Activation Frames (f:, g:, h:) plus a Global Heap of Shared Objects. Threads are active, asynchronous, and parallel at all levels, in loops as well as procedure calls.

Id World

Implicit parallelism: Id compiles to Dataflow Graphs + I-Structures + ..., which ran on TTDA, Monsoon, *T, and *T-Voyager.

Id World people

• Rishiyur Nikhil • Keshav Pingali • Vinod Kathail • David Culler • Ken Traub • Steve Heller • Richard Soley • Dinart Mores • Jamey Hicks • Alex Caro • Andy Shaw • Boon S. Ang • Shail Aditya • R. Paul Johnson • Paul Barth • Jan Maessen • Christine Flood • Jonathan Young • Derek Chiou • Arun Iyengar • Zena Ariola • Mike Beckerle • K. Ekanadham (IBM) • Wim Bohm (Colorado) • Joe Stoy (Oxford) • ...

Data Structures in Dataflow

• Data structures reside in a structure store ⇒ tokens carry pointers
• I-structures: Write-once, Read multiple times
  – allocate, write, read, ..., read, deallocate
  – accessed with I-fetch (read the value at address a) and I-store (store value v at address a)
⇒ No problem if a reader arrives before the writer at the memory location

I-Structure Storage: Split-phase operations & Presence bits

• An I-fetch <s, fp, a> names the I-structure location a to be read; the read is split-phase: the request <a, Read, (t, fp)> goes to the memory and the processor continues, so a slow response never blocks the pipeline
• Presence bits mark each location as present (holding a value v), empty, or deferred; a deferred location holds a forwarding address - the tag t, fp.ip of the instruction waiting for the value
• Need to deal with multiple deferred reads (a code sketch of this protocol appears at the end of this section)
• Other operations: fetch/store, take/put, clear

Outline

• Dataflow graphs √
  – A clean model of parallel computation
• Static Dataflow Machines √
  – Not general-purpose enough
• Dynamic Dataflow Machines √
  – As easy to build as a simple pipelined processor
• The software view √
  – The memory model: I-structures
• Monsoon and its performance ←
• Musings

The Monsoon Project

MIT-Motorola collaboration, 1988-91 (Motorola Cambridge Research Center + MIT).
Research Prototypes:
• Monsoon processor - 64-bit, 10M tokens/sec
• I-structure memory - 4M 64-bit words
• 16-node Fat Tree network - 100 MB/sec
• Unix Box host
Systems built: sixteen 2-node systems (MIT, LANL, Motorola, Colorado, Oregon, McGill, USC, ...) and two 16-node systems (MIT, LANL), running the Id World software. (Tony Dahbura)

Id Applications on Monsoon @ MIT

• Numerical
  – Hydrodynamics - SIMPLE
  – Global Circulation Model - GCM
  – Photon-Neutron Transport code - GAMTEB
  – N-body problem
• Symbolic
  – Combinatorics - free tree matching, Paraffins
  – Id-in-Id compiler
• System
  – I/O Library
  – Heap Storage Allocator on Monsoon
• Fun and Games
  – Breakout
  – Life
  – Spreadsheet

Id Run Time System (RTS) on Monsoon

• Frame Manager: Allocates frame memory on processors for procedure and loop activations (Derek Chiou)
• Heap Manager: Allocates storage in I-Structure memory or in Processor memory for heap objects.
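To summarize the I-structure behavior described above, here is a minimal sketch of one I-structure memory with presence states and deferred reads. The IStructure class, the send callback standing in for the network, and the tuples standing in for <fp, ip> continuation tags are assumptions of this sketch, not the Monsoon implementation.

class IStructure:
    """One I-structure array; each location is empty, deferred, or present."""
    def __init__(self, n):
        self.state = ['empty'] * n   # per-location presence bits
        self.cell = [None] * n       # a value, or a list of deferred continuations

    def i_fetch(self, a, continuation, send):
        # Split phase: the request returns at once; the reply token comes later.
        if self.state[a] == 'present':
            send(continuation, self.cell[a])
        elif self.state[a] == 'empty':               # reader beat the writer: defer
            self.state[a], self.cell[a] = 'deferred', [continuation]
        else:                                        # queue another deferred reader
            self.cell[a].append(continuation)

    def i_store(self, a, v, send):
        assert self.state[a] != 'present', 'I-structure locations are write-once'
        waiting = self.cell[a] if self.state[a] == 'deferred' else []
        self.state[a], self.cell[a] = 'present', v
        for k in waiting:                            # forward v to every waiting reader
            send(k, v)

def send(tag, v):                                    # stand-in for the network
    print('token to', tag, 'carrying', v)

m = IStructure(8)
m.i_fetch(2, ('fp1', 'ip7'), send)   # arrives before the writer: deferred
m.i_fetch(2, ('fp2', 'ip9'), send)   # a second deferred reader
m.i_store(2, 42, send)               # the write forwards 42 to both readers

Because a deferred location records its waiting readers, a read that arrives before the write simply parks there; no polling and no blocking are needed, which is what makes split-phase access safe under arbitrary interleaving.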