Notes on the History of Fork-And-Join
Linus Nyman and Mikael Laakso, Hanken School of Economics
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Freebsd-And-Git.Pdf
FreeBSD and Git. Ed Maste, FreeBSD Vendor Summit 2018.

Purpose
● History and context: ensure we're starting from the same reference
● Identify next steps for more effective use / integration with Git / GitHub
● Understand what needs to be resolved for any future decision on Git as the primary repository

Version control history
● CVS: 1993-2012
● Subversion: src/ May 31, 2008 (r179447); doc/www May 19, 2012 (r38821); ports July 14, 2012 (r300894)
● Perforce: 2001-2018
● Hg mirror
● Git mirror: 2011-

[Diagram slides: "Subversion Repositories" (svnsync from the svn repo), "Subversion & Git Repositories today" (svnsync and svn2git feeding git, pushed to GitHub), and "Repositories Today" (the FreeBSD repo on svn and GitHub, with downstream forks).]

● "Git is not a Version Control System", phk@ missive, reproduced at https://blog.feld.me/posts/2018/01/git-is-not-revision-control/
● Subversion vs. Git: Myths and Facts, https://svnvsgit.com/
● "Git has a number of advantages in the popularity race, none of which are really to do with the technology", https://chapmanworld.com/2018/08/25/why-im-now-using-both-git-and-subversion-for-one-project/
● 10 things I hate about Git, https://stevebennett.me/2012/02/24/10-things-i-hate-about-git

Git popularity
● "Nobody uses Subversion anymore": false, a myth. Despite all the marketing buzz related to Git, such notable open source projects as FreeBSD and LLVM continue to use Subversion as the main version control system. About 47% of other open source projects use Subversion too (while only 38% are on Git). (2016, https://svnvsgit.com/)
● Git popularity (2018)
● Git UI/UX: yes, it's a mess.
-
Computers Are Networked
Chapter 11: Computers are networked

In the very early days, according to a remark often attributed to Howard Aiken, some of the people in business believed that "only a very small number of computers would be needed to serve the needs of the whole world, perhaps a dozen, with eight or ten for the United States." Although it is somewhat misquoted, the fact is that there used to be very few computers, and they were quite expensive. As a result, they were not very accessible to the general public.

It certainly has changed... This situation started to change in the early 1980s, when IBM introduced its personal computer (IBM PC) for use in the home, office and schools. With an introductory price of $1,995, followed by many clones, computers suddenly became affordable. The number of personal computers in use more than doubled from 2 million in 1981 to 5.5 million in 1982. Ten years later, 65 million PCs were being used. The computers were also getting smaller, from desktop to laptop, notebook, to tablets today, while retaining their processing power, and the price keeps dropping. We are even talking about a $100 notebook for kids. Thus, many of us can now afford to have a computer.

A big thing: all such computers, big or small, were connected together into networks to share hardware such as a printer, software such as Alice, and, more importantly, information. Just look at any cluster on campus: twenty-plus computers share one or two printers, they share some of the applications, such as the Microsoft Office suite, as well as the data. Indeed, when anyone tries to look at your grade, they are looking at the same document.
-
Getting Started With... Berkeley Software for UNIX† on the VAX‡ (The Third Berkeley Software Distribution)
Getting started with... Berkeley Software for UNIX† on the VAX‡ (The third Berkeley Software Distribution)

A package of software for UNIX developed at the Computer Science Division of the University of California at Berkeley is installed on our system. This package includes a new version of the operating system kernel which supports a virtual memory, demand-paged environment. While the kernel change should be transparent to most programs, there are some things you should know if you plan to run large programs to optimize their performance in a virtual memory environment. There are also a number of other new programs which are now installed on our machine; the more important of these are described below.

Documentation
The new software is described in two new volumes of documentation. The first is a new version of volume 1 of the UNIX programmers manual which has integrated manual pages for the distributed software and incorporates changes to the system made while introducing the virtual memory facility. The second volume of documentation is numbered volume 2c, of which this paper is a section. This volume contains papers about programs which are in the distribution package.

Where are the programs?
Most new programs from Berkeley reside in the directory /usr/ucb. A major exception is the C shell, csh, which lives in /bin. We will later describe how you can arrange for the programs in the distribution to be part of your normal working environment.

Making use of the Virtual Memory
With a virtual memory system, it is no longer necessary for running programs to be fully resident in memory.
-
Oral History of Fernando Corbató
Oral History of Fernando Corbató
Interviewed by: Steven Webber
Recorded: February 1, 2006, West Newton, Massachusetts
CHM Reference number: X3438.2006
© 2006 Computer History Museum

Steven Webber: Today the Computer History Museum Oral History Project is going to interview Fernando J. Corbató, known as Corby. Today is February 1 in 2006. We like to start at the very beginning on these interviews. Can you tell us something about your birth, your early days, where you were born, your parents, your family?

Fernando Corbató: Okay. That's going back a long ways of course. I was born in Oakland. My parents were graduate students at Berkeley and then about age 5 we moved down to West Los Angeles, Westwood, where I grew up [and spent] most of my early years. My father was a professor of Spanish literature at UCLA. I went to public schools there in West Los Angeles, [namely,] grammar school, junior high and [the high school called] University High. So I had a straightforward public school education. I guess the most significant thing to get to is that World War II began when I was in high school and that caused several things to happen. I'm meandering here. There's a little bit of a long story. Because of the wartime pressures on manpower, the high school went into early and late classes and I cleverly saw that I could get a chance to accelerate my progress. I ended up taking both early and late classes and graduating in two years instead of three and [thereby] got a chance to go to UCLA in 1943.
-
Topic A: Dataflow Model of Computation
Department of Electrical and Computer Engineering, Computer Architecture and Parallel Systems Laboratory (CAPSL)
Topic A: Dataflow Model of Computation
CPEG 852, Spring 2014: Advanced Topics in Computing Systems
Guang R. Gao, ACM Fellow and IEEE Fellow, Endowed Distinguished Professor, Electrical & Computer Engineering, University of Delaware
[email protected]

Outline
• Parallel Program Execution Models
• Dataflow Models of Computation
• Dataflow Graphs and Properties
• Three Dataflow Models: Static, Recursive Program Graph, Dynamic
• Dataflow Architectures

Terminology Clarification
• Parallel Model of Computation: Parallel Models for Algorithm Designers; Parallel Models for System Designers
• Parallel Programming Models
• Parallel Execution Models
• Parallel Architecture Models

What is a Program Execution Model?
[Diagram: user code (application code, software packages, program libraries, compilers, utility applications) sits on top of the PXM (API), which in turn sits on top of the runtime code, operating system, and hardware system.]

Features a User Program Depends On
• Features expressed within a programming language: procedures, call/return; access to parameters and variables; use of data structures (static and dynamic)
• But that's not all!!
• Features expressed outside a (typical) programming language: file creation, naming and access; object directories; communication (networks and peripherals); concurrency (coordination, scheduling)

Developments in the 1960s, 1970s
• 1960s highlights: Burroughs B5000 project started; Rice University Computer; Vienna Definition Method; Common Base Language (Dennis); Contour Model (Johnston)
• Other events: Project MAC funded at MIT; IBM announces System 360; tasking introduced in Algol 68 and PL/I
• 1970s: Burroughs builds Robert Barton's DDM1; book on the B6700, ...
-
A Postmortem for a Time Sharing System
A POSTMORTEM FOR A TIME SHARING SYSTEM
Howard Ewing Sturgis
CSL 74-1, January 1974

Abstract
This thesis describes a time sharing system constructed by a project at the University of California, Berkeley Campus, Computer Center. The project was of modest size, consuming about 30 man years. The resulting system was used by a number of programmers. The system was designed for a commercially available computer, the Control Data 6400 with extended core store.
The system design was based on several fundamental ideas, including: specification of the entire system as an abstract machine, a capability based protection system, mapped address space, and layered implementation. The abstract machine defined by the first implementation layer provided 8 types of abstractly defined objects and about 100 actions to manipulate them. Subsequent layers provided a few very complicated additional types.
Many of the fundamental ideas served us well, particularly the concept that the system defines an abstract machine, and capability based protection. However, the attempt to provide a mapped address space using unsuitable hardware was a disaster. This thesis includes software and hardware proposals to increase the efficiency of representing an abstract machine and providing capability based protection. Also included is a description of a crash recovery consistency problem for files which reside in several levels of storage, together with a solution that we used.

XEROX PALO ALTO RESEARCH CENTER, 3180 Porter Drive / Palo Alto / California 94304

ACKNOWLEDGEMENTS∗
First, I thank Professor James Morris, my dissertation committee chairman, for many hours of discussions and painstaking reading of many drafts. Second, I thank the other members of my dissertation committee, Professor R.
-
CS 350 Operating Systems Spring 2021
CS 350 Operating Systems Spring 2021 4. Process II Discussion 1: Question? • Why do UNIX OSes use the combination of “fork() + exec()” to create a new process (from a program)? • Can we directly call exec() to create a new process? Or, do you have better ideas/designs for creating a new process? • Hints: To answer this question: think about the possible advantages of this solution. Fork Parent process Child process Exec New program image in execution 2 Discussion 1: Background • Motived by the UNIX shell • $ echo “Hello world!” • $ Hello world! • shell figures out where in the file system the executable “echo” resides, calls fork() to create a new child process, calls exec() to run the command, and then waits for the command to complete by calling wait(). • You can see that you can use fork(), exec() and wait() to implement a shell program – our project! • But what is the benefit of separating fork() and exec() in creating a new process? 3 Discussion 1: Case study • The following program implements the command: • $ wc p3.c > p4.output # wc counts the word number in p3.c • # the result is redirected to p4.output Close default standard output (to terminal) open file p4.output 4 Discussion 1: Case study • $ wc p3.c # wc outputs the result to the default standard output, specified by STDOUT_FILENO • By default, STDOUT_FILENO points to the terminal output. • However, we close this default standard output → close(STDOUT_FILENO ); • Then, when we open the file “p4.output” • A file descriptor will be assigned to this file • UNIX systems start looking for free file descriptors from the beginning of the file descriptor table (i.e., at zero). -
SDS 940 Theory of Operation, Technical Manual SDS 98 01 26A
SDS 940 THEORY OF OPERATION
Technical Manual SDS 98 01 26A, March 1967
SCIENTIFIC DATA SYSTEMS / 1649 Seventeenth Street / Santa Monica, California / UP 1-0960
© 1967 Scientific Data Systems, Inc. Printed in U.S.A.

TABLE OF CONTENTS (Section, Page)
I GENERAL DESCRIPTION, 1-1
  1.1 General, 1-1
  1.2 Documentation, 1-1
  1.3 Physical Description, 1-2
  1.4 Features, 1-2
  1.5 Input/Output Capability, 1-2
    1.5.1 Parallel Input/Output System, 1-6
      1.5.1.1 Word Parallel System, 1-6
      1.5.1.2 Single-Bit Control and Sense System, 1-8
    1.5.2 Time-Multiplexed Communication Channels, 1-8
    1.5.3 Direct Memory Access System, 1-9
      1.5.3.1 Direct Access Communication Channels, 1-9
      1.5.3.2 Data Multiplexing System, 1-10
    1.5.4 Priority Interrupt System, 1-10
      1.5.4.1 External Interrupt, 1-11
      1.5.4.2 Input/Output Channel, 1-12
      1.5.4.3 Real-Time Clock, 1-12
  1.6 Input/Output Devices, 1-12
    1.6.1 Buffered Input/Output Devices, 1-12
    1.6.2 Unbuffered Input/Output Devices, 1-14
II OPERATION AND PROGRAMMING, 2-1
  2.1 General, 2-1
  2.2 Changing Operation Modes, 2-2
  2.3 Modes of Operation, 2-2
    2.3.1 Normal Mode, 2-2
      2.3.1.1 Interrupt Routine Return Instruction, 2-3
      2.3.1.2 Overflow Instructions, 2-3
      2.3.1.3 Mode Change Instruction, 2-3
      2.3.1.4 Data Multiplex Channel Interlace Word, ...
-
The DragonFlyBSD Operating System
The DragonFlyBSD Operating System
Jeffrey M. Hsu, Member, FreeBSD and DragonFlyBSD

Abstract— The DragonFlyBSD operating system is a fork of the highly successful FreeBSD operating system. Its goals are to maintain the high quality and performance of the FreeBSD 4 branch, while exploiting new concepts to further improve performance and stability. In this paper, we discuss the motivation for a new BSD operating system, new concepts being explored in the BSD context, the software infrastructure put in place to explore these concepts, and their application to the network subsystem in particular.

Index Terms— Message passing, Multiprocessing, Network operating systems, Protocols, System software.

I. INTRODUCTION
The DragonFlyBSD operating system is a fork of the highly successful FreeBSD operating system. Its goals are to maintain the high quality and performance of the FreeBSD 4 branch, while exploring new concepts to further improve ...

... directories with slightly over 8 million lines of code, 2 million of which are in the kernel. The project has a number of resources available to the public, including an on-line CVS repository with mirror sites, accessible through the web as well as the cvsup service, mailing list forums, and a bug submission system.

III. MOTIVATION
A. Technical Goals
The DragonFlyBSD operating system has several long-range technical goals that it hopes to accomplish within the next few years. The first goal is to add lightweight threads to the BSD kernel. These threads are lightweight in the sense that, while user processes have an associated thread and a process context, kernel processes are pure threads with no process context. The threading model makes several guarantees with respect to scheduling to ensure high performance and simplify reasoning about concurrency.
-
An Interview with Warren Prince
An Interview with WARREN PRINCE
OH 413
Conducted by Jeffrey R. Yost on 4 April 2012
Computer Services Project, Sunnyvale, California
Charles Babbage Institute, Center for the History of Information Technology, University of Minnesota, Minneapolis
Copyright, Charles Babbage Institute

Warren Prince Interview, 4 April 2012, Oral History 413

Abstract: Warren Prince was an executive at Tymshare and served as the leader of TYMNET. He served on the management team at McDonnell Douglas after it acquired TYMNET. The interview concentrates on TYMNET's history and on explaining and providing context for some of the documents he provided for the interview.

Prince: So will you produce the chapter in your book out of audio? I've got a bunch of papers that might be of interest or might not.

Yost: Terrific, I would.

Prince: Because it kind of shows; first, on the TYMNET side it shows the growth of it and really, the initial problems of it, which graphically shows it instead of saying it. And then the McDonnell Douglas part. Anyway, we can get into that when the time comes.

Yost: Sounds good. My name is Jeffrey Yost, from the Charles Babbage Institute at the University of Minnesota, and I'm here today on April 4th, 2012, with Warren Prince of Tymshare and TYMNET. I'd like to begin just with some brief biographical stuff. Could you tell me where you were born, where you grew up?

Prince: Born in Walnut Creek, California but grew up mostly in Arizona; Glendale, Arizona. And went through; started at the University of Arizona, and then joined the Army.
-
The Great Telecom Meltdown
The Great Telecom Meltdown
For a listing of recent titles in the Artech House Telecommunications Library, turn to the back of this book.
The Great Telecom Meltdown, Fred R. Goldstein, artechhouse.com

Library of Congress Cataloging-in-Publication Data: A catalog record for this book is available from the U.S. Library of Congress.
British Library Cataloguing in Publication Data: Goldstein, Fred R. The great telecom meltdown.—(Artech House Telecommunications Library) 1. Telecommunication—History 2. Telecommunication—Technological innovations—History 3. Telecommunication—Finance—History I. Title 384'.09 ISBN 1-58053-939-4
Cover design by Leslie Genser. © 2005 ARTECH HOUSE, INC., 685 Canton Street, Norwood, MA 02062
All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.
International Standard Book Number: 1-58053-939-4
10 9 8 7 6 5 4 3 2 1

Contents (excerpt)
Hybrid Fiber-Coax (HFC) Gave Cable Providers an Advantage on "Triple Play", 122
RBOCs Took the Threat Seriously, 123
Hybrid Fiber-Coax Is Developed, 123
Cable Modems
-
NASA Contractor Report 178229: Semiannual Report
NASA Contractor Report 178229
ICASE SEMIANNUAL REPORT, April 1, 1986 through September 30, 1986
Contract No. NAS1-18107, January 1987
INSTITUTE FOR COMPUTER APPLICATIONS IN SCIENCE AND ENGINEERING
NASA Langley Research Center, Hampton, Virginia 23665
Operated by the Universities Space Research Association
National Aeronautics and Space Administration, Langley Research Center, Hampton, Virginia 23665

CONTENTS (Page)
Introduction, iii
Research in Progress, 1
Reports and Abstracts, 32
ICASE Colloquia, 49
ICASE Summer Activities, 52
Other Activities, 58
ICASE Staff, 60

INTRODUCTION
The Institute for Computer Applications in Science and Engineering (ICASE) is operated at the Langley Research Center (LaRC) of NASA by the Universities Space Research Association (USRA) under a contract with the Center. USRA is a nonprofit consortium of major U.S. colleges and universities. The Institute conducts unclassified basic research in applied mathematics, numerical analysis, and computer science in order to extend and improve problem-solving capabilities in science and engineering, particularly in aeronautics and space. ICASE has a small permanent staff. Research ...