Interviews: Peter J. Denning. Editor: David Walden. IEEE Annals of the History of Computing, October–December 2012.

A leading scientist in computing since his graduation from Massachusetts Institute of Technology in 1968, Peter J. Denning is best known for his pioneering work in virtual memory, especially for inventing the working-set model for program behavior, which eliminated thrashing in operating systems and became the reference standard for all memory management policies. He is also known for his work on the principles of operating systems, operational analysis of queueing network systems, the design and implementation of the Computer Science Network (CSNET), the ACM digital library, and codifying the principles of computing. A primary goal of Denning's career has always been promoting the science in computer science through education, research, and the general health of the field.[1] (Photo courtesy of Louis Fabian Bachrach.)

Background of Peter J. Denning

Born: 6 January 1942, Queens, New York.
Education: Fairfield College Prep, 1960; Manhattan College, BEE, 1964; Massachusetts Institute of Technology, MS, 1965; MIT, PhD, 1968.
Professional Experience: Princeton University, 1968–1972; Purdue University, 1972–1983; NASA-Ames RIACS, 1983–1991; George Mason University, 1991–2002; Naval Postgraduate School, 2002 to present.
Honors and Awards: Southern Connecticut Science Fair Grand Award, 1959; Princeton Engineering Teaching, 1971; IEEE Fellow, 1982; AAAS Fellow, 1984; ACM Distinguished Service, 1989; ACM Fellow, 1994; ACM Karlstrom Outstanding Educator, 1996; ACM SIGCSE Outstanding Educator, 1999; Centennial Engineer, Manhattan College, 1999; George Mason University, School of Engineering, best teacher, 2002; George Mason University, best teacher, 2002; SIGOPS Hall of Fame, 2005; NSF CISE Education Fellow, 2007; Postel Award for CSNET, 2009; ACM SIGCSE Lifetime Achievement, 2009.

David Walden: Please tell me a bit about your early life.

Peter J. Denning: I had interests in math, science, and nature from a young age. At school I was too small to be any good at athletics, which were socially popular, so I devoted myself completely to academics, which were not.

By age 12 I developed an interest in magician performances, especially those that depended on mathematical tricks. By age 13 I had discovered a deep fascination with electricity and electronics, which seemed to have a magic all their own.

My parents sent me to Fairfield Prep in 1956 to get me into an intellectual community and out of the athletics-infatuated public school culture. Under the wing of a gifted science teacher, I entered three science fairs with computers made of pinball parts and vacuum tubes: one to compute sums, one to solve linear equations, and the last to solve cubic equations. The second computer won the science fair. The third computer worked perfectly but fared poorly at the fair because I paid no attention to marketing and presentation, a valuable life lesson.

From Fairfield Prep, I went to Manhattan College to study electrical engineering in 1960. Although short on computing, its curriculum gave me a solid grounding in practical engineering: the building and testing of things people could use.

I came out on top of my class at Manhattan in 1964 and got a National Science Foundation fellowship good at any graduate school. I applied to MIT in fulfillment of my father's advice (he had wanted me to attend MIT rather than Manhattan).

Walden: Say a bit about MIT.

Denning: MIT had a completely different philosophy from Manhattan about EE principles and organization. To prepare for the PhD exams at the end of the first year, I took all the MIT EE core courses in addition to my required master's courses. That intense preparation was barely enough. With the help of my master's thesis advisor, Jack Dennis, who took me under his wing, I passed the PhD qualifiers on the second try. He and I have had a long and productive friendship for almost 50 years.

My master's thesis was about scheduling requests for a rotating disk or drum memory so as to minimize mean access time, a critical issue for an experimental time-sharing system Jack Dennis had been developing. During that year, I worked closely with Allan Scherr, who taught me about systems programming, language design, compiling, data collection in an OS kernel, discrete simulation, and queueing theory. Through the thesis, Jack and I showed that shortest latency time disk scheduling was optimal for time-sharing systems.

On passing my PhD qualifiers in the spring of 1966, I decided to tackle a much tougher resource allocation problem, which was looming in the design of Multics. The problem was how to build a stable computing system from multiprocess computations, which could have large variations in their processor and memory demands. I had to learn how to measure the demands of multiprocess computations, configure a system with appropriate capacity for the demand, and manage the allocation of CPU and memory dynamically. Jerry Saltzer told me of thrashing, a major instability they were encountering with multiprogrammed virtual memory systems, and challenged me to find a solution. That solution turned out to be much harder than either of us imagined. My quest produced the theory of locality, the working set model for program behavior, and a method of system balance for optimal control.[2]

During my PhD years, I also helped Jack Dennis teach a course on computational models.

Walden: I was a student of yours in that course; it was a great course.

Denning: I loved teaching that material and developed a deep understanding of computation and the essential role of machines in doing it. Our class notes caught the attention of a Prentice-Hall editor, and we signed a contract for a book, Machines, Languages, and Computation, in 1967. Unfortunately, writing a book was more work than I ever imagined; we did not finish until 1978.

In January 1968 Jack told me I had plenty of material for my PhD thesis. I went on a crash program of writing and working with my committee. I graduated with my MIT PhD in May 1968.

As graduation approached, I pondered where to go next. I had offers from MIT and three other universities. I chose Princeton because it was more attractive to my family.

Walden: At Princeton you continued and expanded the scope of your research in the areas of operating systems, as well as teaching and beginning other research.

Denning: Yes, my four years at Princeton were productive. I developed and taught new courses in the principles of operating systems and computer architecture. I took on two PhD students and worked with several others. I collaborated closely on several projects with computer scientists Ed Coffman, Jeff Ullman, and Al Aho and with electrical engineers Stuart Schwartz and Bruce Eisenstein. Those projects extended the working set theory and validated it with experiments. They also codified operating systems principles.

Prior to Princeton, I helped Jack Dennis organize the first ACM Symposium on Operating System Principles (SOSP), held in Gatlinburg, Tennessee, in 1967. At Princeton, Ed Coffman and I organized a follow-on, SOSP-2, in 1969. There was a huge interest in gaining a fundamental understanding of operating systems, which were the most complex computing systems then known. The SOSP has continued every two years since that time.

In 1969 and 1970, I chaired a task force for the NSF Cosine (Computer Science in Engineering) project, which was developing prototypes of new core courses for computer science programs. I invited Jack Dennis, Nico Habermann, Butler Lampson, and Dennis Tsichritzis to the team on OS principles. Our recommendations, released in 1971, were adopted nationally as many universities created their first systems-oriented core courses.

After the task force, Ed Coffman and I decided to write a book with the bold title Operating System Theory.[3] Published in 1973, it contained the best material we could find on the fundamental principles of operating systems.

Walden: You were then recruited to Purdue University?

Denning: By my fourth year at Princeton, promotion was not looking good because of a cap on tenured faculty: no more than two promotions in the engineering school in the next five years and at most one in our EE department, where CS was a minority. Early in 1972 I encountered Sam Conte, the CS chair at Purdue, on an elevator at a conference. He said, "I hear you are looking around. I can make you an offer as tenured associate professor and pay you 50 percent more salary." Now that was a great elevator pitch!

I interviewed at Purdue in the dead of winter. The faculty members were warm and welcoming. I accepted an offer from Sam a few weeks later.

My Purdue years were also productive. With the help of several graduate students, I continued the working-set project. We showed that the working set model was very general; it could simulate any paging algorithm with memory contents that obeyed an inclusion property with increasing value of the control parameter.

[...] my science teacher gave me my first teaching experience, for which I wrote a series of lectures about basic electricity for the science club. I also wrote articles and even drew cartoons for the school's magazine. In college, I won a couple of essay awards. At MIT I wrote extensive course notes, which as I mentioned [...]