Computer Science


Computer Science: The Hardware, Software and Heart of It

Editors: Edward K. Blum (Department of Mathematics, University of Southern California, Los Angeles, CA, USA; [email protected]) and Alfred V. Aho (Department of Computer Science, Columbia University, New York, NY, USA; [email protected])

ISBN 978-1-4614-1167-3; e-ISBN 978-1-4614-1168-0; DOI 10.1007/978-1-4614-1168-0
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2011941144
© Springer Science+Business Media, LLC 2011

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper. Springer is part of Springer Science+Business Media (www.springer.com).

Contents

Part I
1. Introduction and Prologue (Edward K. Blum)
2. Computation: Brief History Prior to the 1900s (Edward K. Blum)
3. The Heart of Computer Science (Edward K. Blum)
4. The Software Side of Computer Science – Computer Programming (Edward K. Blum and Walter Savitch)

Part II
5. The Hardware Side (Edward K. Blum)
6. Operating Systems (OS) (Edward K. Blum)
7. Computer Networks (Fan Chung Graham and Edward K. Blum)
8. High Performance Computing and Communication (HPCC) (James M. Pepin)
9. Programming for Distributed Computing: From Physical to Logical Networks (Christian Scheideler and Kalman Graffi)
10. Databases (Michael Benedikt and Pierre Senellart)
11. Computer Security and Public Key Cryptography (Wayne Raskind and Edward K. Blum)
12. Complexity Theory (Alfred V. Aho)
13. Multivariate Complexity Theory (Michael R. Fellows, Serge Gaspers, and Frances Rosamond)
14. Quantum Computing (Todd A. Brun)
15. Numerical Thinking in Algorithm Design and Analysis (Shang-Hua Teng)
16. Fuzzy Logic in Computer Science (Radim Belohlavek, Rudolf Kruse, and Christian Moewes)
17. Statistics of the Field (Frances Rosamond)
Epilogue

Contributors

Alfred V. Aho, Columbia University, New York, NY, USA
Radim Belohlavek, Palacky University, Olomouc, Czech Republic
Michael Benedikt, University of Oxford, Oxford, UK
Pierre Senellart, Télécom ParisTech, Paris, France
Edward K. Blum, Department of Mathematics, University of Southern California, Los Angeles, CA, USA
Todd A. Brun, University of Southern California, Los Angeles, CA, USA
Fan Chung Graham, University of California at San Diego, La Jolla, CA, USA
Michael R. Fellows, School of Engineering and Information Technology, Charles Darwin University, Casuarina, Australia
Serge Gaspers, Institute of Information Systems, Vienna University of Technology, Vienna, Austria
Rudolf Kruse, Otto-von-Guericke University, Magdeburg, Germany
James M. Pepin, Clemson University, South Carolina, USA
Wayne Raskind, Arizona State University, Tempe, AZ, USA
Frances Rosamond, School of Engineering and Information Technology, Charles Darwin University, Casuarina, Australia
Shang-Hua Teng, University of Southern California, Los Angeles, CA, USA
Christian Scheideler, Department of Computer Science, University of Paderborn, Paderborn, Germany
Kalman Graffi, Department of Computer Science, University of Paderborn, Paderborn, Germany
Walter Savitch, University of California, San Diego, La Jolla, CA, USA
Christian Moewes, Otto-von-Guericke University, Magdeburg, Germany

Part I

Chapter 1: Introduction and Prologue
Edward K. Blum (Department of Mathematics, University of Southern California, Los Angeles, CA, USA; e-mail: [email protected])

It is said that the past is prologue. This saying is borne out by most of the history of science, but it applies only loosely to computer science, since this "modern" science does not have that much of a past compared to the traditional sciences like physics and chemistry. In the latter, in any era the events of the past do presage the ongoing and future development of these sciences. The development of computer science proceeds at such a rapid and frenetic pace that the boundary between past events and the current state of the subject is somewhat blurred. In this book, with its advisedly provocative title referring to the heart of computer science, we take the position that its prologue is indeed rather brief and somewhat diffuse. To recount how the history of computer science interacts with its current state we adopt a simplifying assumption about its early history, the reasonableness of which we shall defend, to the effect that the early and defining history of computer science (the prologue) can be encapsulated in the work of one man, the British mathematician Alan Turing. Except for some very early and rather naïve contributions by the outstanding mathematicians Pascal and Leibniz and some later more substantial ones by the polymath Babbage and the logician Gödel, the subject which we now call computer science sprang largely from the brilliant and original researches of Turing, as we shall attempt to illustrate.

Turing was undoubtedly a genius in both the mathematics of computing, as embodied in his studies of what we now call software, and in the engineering side of computing, which we usually refer to as hardware. As this book will argue, these seemingly disparate sub-disciplines of computer science, software and hardware, are intrinsically related, although in many school curricula there is an explicit marked separation and the single subject is organized for pedagogical and research purposes into two subjects: computer science and computer engineering.

To stress this apparent dichotomy in the structure of computer science is not only unnecessary but misleading. It was not the way Turing viewed his research projects and it is counter to our view of the subject, which recognizes the intrinsic interrelationships of many aspects of software and hardware. We regard computer science as a Union (with a capital U) of its two major constituents, hardware and software. Admittedly, this Union is diverse and rapidly evolving and defies precise characterization. Nevertheless, in this book we endeavor to present a report on the state of this Union as of the year 2011. It is not feasible to cover all facets of this state in a relatively short book. So we shall cover a reasonably large number of its key elements which in our opinion are not only having the greatest impact on the further development of computer science itself but also a measurable impact on the surrounding world with which computer science is now so actively engaged. We alert the reader to our style of presentation whereby we report on the state of the Union on two coordinated levels of exposition:

1. An informal intuitive level which minimizes the mathematical and technical details and aims to be readable and comprehensible, for the most part, by any intelligent layperson reader and in its entirety by a layperson reader with say a college education but one that need not include much science;
2. A concomitant technical/mathematical level that aims to provide a deeper quantitative understanding for the scientific/engineering-trained reader.

Although these two expositions could be quite disparate, in this book they are carefully coordinated so that they do not constitute two books but are united as one. We are well aware of the difficulties and hazards in such a two-level exposition: trying to produce both a "popular" book and a "technical" book between the same covers. To smooth the reader's progress we have used various literary devices, such as separating the intuitive concrete ideas from the more abstract ones by packaging the latter into appendices. The appendices are included in the main text at points close to the intuitive discussions (see the Table of Contents) and we attempt to use the same linguistic constructs for related intuitive and abstract ideas. The choice of familiar colloquial non-quantitative language, ordinary words and phrases, for unfamiliar quantitative concepts is quite common in traditional science. For example, physicists speak and think freely of electromagnetic waves, interference of such waves,
Recommended publications
  • A Universal Modular ACTOR Formalism for Artificial Intelligence
    A Universal Modular ACTOR Formalism for Artificial Intelligence. Carl Hewitt, Peter Bishop, Richard Steiger.

    Abstract: This paper proposes a modular ACTOR architecture and definitional method for artificial intelligence that is conceptually based on a single kind of object: actors [or, if you will, virtual processors, activation frames, or streams]. The formalism makes no presuppositions about the representation of primitive data structures and control structures. Such structures can be programmed, micro-coded, or hard wired in a uniform modular fashion. In fact it is impossible to determine whether a given object is "really" represented as a list, a vector, a hash table, a function, or a process. The architecture will efficiently run the coming generation of PLANNER-like artificial intelligence languages including those requiring a high degree of parallelism. The efficiency is gained without loss of programming generality because it only makes certain actors more efficient; it does not change their behavioral characteristics. The architecture is general with respect to control structure and does not have or need goto, interrupt, or semaphore primitives. The formalism achieves the goals that the disallowed constructs are intended to achieve by other more structured methods.

    PLANNER Progress. "Programs should not only work, but they should appear to work as well." (PDP-1X Dogma) The PLANNER project is continuing research in natural and effective means for embedding knowledge in procedures. In the course of this work we have succeeded in unifying the formalism around one fundamental concept: the ACTOR. Intuitively, an ACTOR is an active agent which plays a role on cue according to a script. We use the ACTOR metaphor to emphasize the inseparability of control and data flow in our model.
  • Computational Learning Theory: New Models and Algorithms
    Computational Learning Theory: New Models and Algorithms, by Robert Hal Sloan. S.M. EECS, Massachusetts Institute of Technology (1986); B.S. Mathematics, Yale University (1983). Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology, June 1989. © Robert Hal Sloan, 1989. All rights reserved. The author hereby grants to MIT permission to reproduce and to distribute copies of this thesis document in whole or in part. Signature of Author: Department of Electrical Engineering and Computer Science, May 23, 1989. Certified by: Ronald L. Rivest, Professor of Computer Science, Thesis Supervisor. Accepted by: Arthur C. Smith, Chairman, Departmental Committee on Graduate Students.

    Abstract: In the past several years, there has been a surge of interest in computational learning theory, the formal (as opposed to empirical) study of learning algorithms. One major cause for this interest was the model of probably approximately correct learning, or pac learning, introduced by Valiant in 1984. This thesis begins by presenting a new learning algorithm for a particular problem within that model: learning submodules of the free Z-module Z^k. We prove that this algorithm achieves probable approximate correctness, and indeed, that it is within a log log factor of optimal in a related, but more stringent model of learning, on-line mistake bounded learning. We then proceed to examine the influence of noisy data on pac learning algorithms in general. Previously it has been shown that it is possible to tolerate large amounts of random classification noise, but only a very small amount of a very malicious sort of noise.
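    For reference, the pac (probably approximately correct) criterion mentioned in the abstract has a compact standard statement; the following is the usual Valiant-style formulation in generic notation, not the thesis's own definitions.

    ```latex
    % Standard PAC-learning criterion (Valiant, 1984), stated generically.
    % An algorithm A PAC-learns a concept class C over domain X if, for every
    % target c in C, every distribution D on X, and every epsilon, delta in (0,1),
    % given m = poly(1/epsilon, 1/delta, n) i.i.d. examples labeled by c,
    % it outputs a hypothesis h such that
    \[
      \Pr\big[\operatorname{err}_D(h) \le \varepsilon\big] \;\ge\; 1 - \delta,
      \qquad\text{where}\qquad
      \operatorname{err}_D(h) \;=\; \Pr_{x \sim D}\big[h(x) \ne c(x)\big].
    \]
    ```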
  • Oracle vs. NoSQL vs. NewSQL: Comparing Database Technology
    Oracle vs. NoSQL vs. NewSQL: Comparing Database Technology. John Ryan, Senior Solution Architect, Snowflake Computing.

    Table of Contents: The World has Changed; What's Changed?; What's the Problem?; Performance vs. Availability and Durability; Consistency vs. Availability; Flexibility vs. Scalability; ACID vs. Eventual Consistency; The OLTP Database Reimagined; Achieving the Impossible!; NewSQL Database Technology; VoltDB; MemSQL; Which Applications Need NewSQL Technology?; Conclusion; About the Author.

    The World has Changed. The world has changed massively in the past 20 years. Back in the year 2000, a few million users connected to the web using a 56k modem attached to a PC, and Amazon only sold books. Now billions of people are using their smartphone or tablet 24x7 to buy just about everything, and they're interacting with Facebook, Twitter and Instagram. The pace has been unstoppable. Expectations have also changed. If a web page doesn't refresh within seconds we're quickly frustrated, and go elsewhere. If a web site is down, we fear it's the end of civilisation as we know it. If a major site is down, it makes global headlines. "Instant gratification takes too long!" (Ladawn Clare-Panton)

    Aside: If you're not a seasoned Database Architect, you may want to start with my previous articles on Scalability and Database Architecture.

    What's Changed? The above leads to a few observations:
    • Scalability: with potentially explosive traffic growth, IT systems need to quickly grow to meet exponential numbers of transactions.
    • High Availability: IT systems must run 24x7, and be resilient to failure.
  • The Declarative Imperative: Experiences and Conjectures in Distributed Logic
    The Declarative Imperative: Experiences and Conjectures in Distributed Logic. Joseph M. Hellerstein, University of California, Berkeley, [email protected]

    Abstract: The rise of multicore processors and cloud computing is putting enormous pressure on the software community to find solutions to the difficulty of parallel and distributed programming. At the same time, there is more—and more varied—interest in data-centric programming languages than at any time in computing history, in part because these languages parallelize naturally. This juxtaposition raises the possibility that the theory of declarative database query languages can provide a foundation for the next generation of parallel and distributed programming languages. In this paper I reflect on my group's experience over seven years using Datalog extensions to build networking protocols and distributed systems. Based on that experience, I present a number of theoretical conjectures that may both interest the database community, and clarify important practical issues in

    The juxtaposition of these trends presents stark alternatives. Will the forecasts of doom and gloom materialize in a storm that drowns out progress in computing? Or is this the long-delayed catharsis that will wash away today's thicket of imperative languages, preparing the ground for a more fertile declarative future? And what role might the database community play in shaping this future, having sowed the seeds of Datalog over the last quarter century? Before addressing these issues directly, a few more words about both crisis and opportunity are in order.

    1.1 Urgency: Parallelism. "I would be panicked if I were in industry." — John Hennessy, President, Stanford University [35]. The need for parallelism is visible at micro and macro scales.
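    The paper's own Datalog extensions are not shown in this excerpt. As a rough illustration of the declarative, data-centric style it refers to, here is a minimal Python sketch of naive bottom-up (fixpoint) evaluation of a single recursive reachability rule, path(X,Z) :- link(X,Y), path(Y,Z), with path(X,Y) :- link(X,Y) as the base case; relation names and facts are invented for the example.

    ```python
    # Naive bottom-up evaluation of a recursive Datalog-style rule set:
    #   path(X, Y) :- link(X, Y).
    #   path(X, Z) :- link(X, Y), path(Y, Z).
    # Relation names and facts are illustrative, not taken from the paper.

    def transitive_closure(link):
        path = set(link)                              # base rule seeds the fixpoint
        while True:
            derived = {(x, z)
                       for (x, y) in link
                       for (y2, z) in path
                       if y == y2}                    # join link and path on the shared variable
            new_facts = derived - path
            if not new_facts:                         # fixpoint: nothing new can be derived
                return path
            path |= new_facts

    if __name__ == "__main__":
        links = {("a", "b"), ("b", "c"), ("c", "d")}
        print(sorted(transitive_closure(links)))
        # [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
    ```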
  • Security Analysis of Cryptographically Controlled Access to XML Documents
    Security Analysis of Cryptographically Controlled Access to XML Documents. Martín Abadi (Computer Science Department, University of California at Santa Cruz, [email protected]) and Bogdan Warinschi (Computer Science Department, Stanford University, [email protected]).

    Abstract: Some promising recent schemes for XML access control employ encryption for implementing security policies on published data, avoiding data duplication. In this paper we study one such scheme, due to Miklau and Suciu. That scheme was introduced with some intuitive explanations and goals, but without precise definitions and guarantees for the use of cryptography (specifically, symmetric encryption and secret sharing). We bridge this gap in the present work. We analyze the scheme in the context of the rigorous models of modern cryptography. We obtain formal results in simple, symbolic terms close to the vocabulary of Miklau and Suciu. We also obtain more detailed computational results that establish security against probabilistic polynomial-time

    ...ments [4, 5, 7, 8, 14, 19, 23]. This line of research has led to efficient and elegant publication techniques that avoid data duplication by relying on cryptography. For instance, using those techniques, medical records may be published as XML documents, with parts encrypted in such a way that only the appropriate users (physicians, nurses, researchers, administrators, and patients) can see their contents. The work of Miklau and Suciu [19] is a crisp, compelling example of this line of research. They develop a policy query language for specifying fine-grained access policies on XML documents and a logical model based on the concept of "protection". They also show how to translate consistent policies into protections, and how to implement protections by XML encryption [10].
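    The scheme itself is not reproduced in this excerpt. Purely to illustrate the underlying idea of cryptographically controlled access (different parts of one published document encrypted under different keys, so only holders of the right key can read them), here is a toy Python sketch using the pyca/cryptography Fernet API; it is not Miklau and Suciu's protection construction nor W3C XML Encryption, and all element and role names are invented.

    ```python
    # Toy illustration of selective encryption for access control on an XML
    # document: each subtree is encrypted under a role-specific key, so only
    # holders of that key can read it. NOT the Miklau-Suciu scheme or W3C XML
    # Encryption; element names, roles, and data are made up.
    import xml.etree.ElementTree as ET
    from cryptography.fernet import Fernet

    physician_key = Fernet.generate_key()
    admin_key = Fernet.generate_key()

    record = ET.Element("record", id="42")
    diagnosis = ET.SubElement(record, "diagnosis")    # readable by physicians only
    diagnosis.text = Fernet(physician_key).encrypt(b"hypertension").decode()
    billing = ET.SubElement(record, "billing")        # readable by administrators only
    billing.text = Fernet(admin_key).encrypt(b"$120").decode()

    published = ET.tostring(record)                   # a single document, published once

    # A holder of physician_key recovers the diagnosis from the published copy;
    # the same key raises InvalidToken on the billing subtree.
    doc = ET.fromstring(published)
    print(Fernet(physician_key).decrypt(doc.find("diagnosis").text.encode()))
    ```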
  • Fiendish Designs
    Fiendish Designs: A Software Engineering Odyssey. © Tim Denvir 2011.

    Preface: These are notes, incomplete but extensive, for a book which I hope will give a personal view of the first forty years or so of Software Engineering. Whether the book will ever see the light of day, I am not sure. These notes have come, I realise, to be a memoir of my working life in SE. I want to capture not only the evolution of the technical discipline which is software engineering, but also the climate of social practice in the industry, which has changed hugely over time. To what extent, if at all, others will find this interesting, I have very little idea. I mention other, real people by name here and there. If anyone prefers me not to refer to them, or wishes to offer corrections on any item, they can email me (see Contact on Home Page).

    Introduction: Everybody today encounters computers. There are computers inside petrol pumps, in cash tills, behind the dashboard instruments in modern cars, and in libraries, doctors' surgeries and beside the dentist's chair. A large proportion of people have personal computers in their homes and may use them at work, without having to be specialists in computing. Most people have at least some idea that computers contain software, lists of instructions which drive the computer and enable it to perform different tasks. The term "software engineering" wasn't coined until 1968, at a NATO-funded conference, but the activity that it stands for had been carried out for at least ten years before that.
  • The Best Nurturers in Computer Science Research
    The Best Nurturers in Computer Science Research. Bharath Kumar M., Y. N. Srikant. IISc-CSA-TR-2004-10, http://archive.csa.iisc.ernet.in/TR/2004/10/. Computer Science and Automation, Indian Institute of Science, India, October 2004.

    Abstract: The paper presents a heuristic for mining nurturers in temporally organized collaboration networks: people who facilitate the growth and success of the young ones. Specifically, this heuristic is applied to the computer science bibliographic data to find the best nurturers in computer science research. The measure of success is parameterized, and the paper demonstrates experiments and results with publication count and citations as success metrics. Rather than just the nurturer's success, the heuristic captures the influence he has had in the independent success of the relatively young in the network. These results can hence be a useful resource to graduate students and post-doctoral candidates. The heuristic is extended to accurately yield ranked nurturers inside a particular time period. Interestingly, there is a recognizable deviation between the rankings of the most successful researchers and the best nurturers, which although is obvious from a social perspective has not been statistically demonstrated. Keywords: Social Network Analysis, Bibliometrics, Temporal Data Mining.

    1 Introduction. Consider a student Arjun, who has finished his under-graduate degree in Computer Science, and is seeking a PhD degree followed by a successful career in Computer Science research. How does he choose his research advisor? He has the following options with him: 1. Look up the rankings of various universities [1], and apply to any "reasonably good" professor in any of the top universities.
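    The heuristic itself is parameterized by a success metric and is not spelled out in this excerpt. The sketch below is only a hypothetical stand-in for the idea being described: over a toy coauthorship table, it credits each senior author with the publications their junior coauthors produced after the joint work, i.e. with the juniors' subsequent independent success. All names and numbers are invented.

    ```python
    # Hypothetical "nurturer score" over a toy coauthorship table. The paper's
    # actual heuristic (parameterized by publication count or citations as the
    # success metric) is not reproduced here; everything below is invented.
    from collections import defaultdict

    # (senior_author, junior_author, year_of_joint_paper)
    collaborations = [
        ("prof_a", "student_x", 2000),
        ("prof_a", "student_y", 2001),
        ("prof_b", "student_z", 2000),
    ]

    # publications per author per year -- the chosen success metric
    pubs = {
        "student_x": {2002: 3, 2003: 4},
        "student_y": {2002: 1},
        "student_z": {2001: 1, 2002: 1},
    }

    def nurturer_scores(collabs, pubs):
        """Credit each senior author with coauthors' output after the joint work,
        i.e. their subsequent independent success."""
        score = defaultdict(int)
        for senior, junior, year in collabs:
            score[senior] += sum(n for y, n in pubs.get(junior, {}).items() if y > year)
        return dict(score)

    print(nurturer_scores(collaborations, pubs))      # {'prof_a': 8, 'prof_b': 2}
    ```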
  • Iterative Algorithms for Global Flow Analysis
    Iterative Algorithms for Global Flow Analysis, by Robert Endre Tarjan. STAN-CS-76-547, March 1976. Computer Science Department, School of Humanities and Sciences, Stanford University, Stanford, California 94305. February 1976. Research partially supported by National Science Foundation grant MM 75-22870.

    Abstract: This paper studies iterative methods for the global flow analysis of computer programs. We define a hierarchy of global flow problem classes, each solvable by an appropriate generalization of the "node listing" method of Kennedy. We show that each of these generalized methods is optimum, among all iterative algorithms, for solving problems within its class. We give lower bounds on the time required by iterative algorithms for each of the problem classes. Keywords: computational complexity, flow graph reducibility, global flow analysis, graph theory, iterative algorithm, lower time bound, node listing.

    1. Introduction. A problem extensively studied in recent years [2,3,5,7,8,9,12,13,14,15,27,28,29,30] is that of globally analyzing computer programs; that is, collecting information which is distributed throughout a computer program, generally for the purpose of optimizing the program. Roughly speaking, global flow analysis requires the determination, for each program block f, of a property known to hold on entry to the block, independent of the path taken to reach the block. A widely used approach to global flow analysis is to model the set of possible properties by a semi-lattice (we desire the "maximum" property for each block), to model the control structure of the program by a directed graph with one vertex for each program block, and to specify, for each branch from block to block, the function by which that branch transforms the set of properties.
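    The report's node-listing method is not shown in this excerpt, but the kind of computation it bounds is the classical iterative data-flow analysis sketched below: a generic worklist fixpoint with gen/kill transfer functions, instantiated here as reaching definitions on a small invented flow graph.

    ```python
    # Generic iterative (worklist) forward data-flow analysis with gen/kill sets,
    # instantiated as reaching definitions on a small invented flow graph. This
    # illustrates the iterative computation analyzed in the report; it is not the
    # node-listing method itself.

    def reaching_definitions(blocks, edges, gen, kill):
        preds = {b: [] for b in blocks}
        for u, v in edges:
            preds[v].append(u)

        IN = {b: set() for b in blocks}
        OUT = {b: set(gen[b]) for b in blocks}

        worklist = list(blocks)
        while worklist:
            b = worklist.pop()
            IN[b] = set().union(*(OUT[p] for p in preds[b])) if preds[b] else set()
            new_out = gen[b] | (IN[b] - kill[b])      # transfer function of block b
            if new_out != OUT[b]:                     # changed, so successors must be revisited
                OUT[b] = new_out
                worklist.extend(v for u, v in edges if u == b)
        return IN, OUT

    blocks = ["B1", "B2", "B3"]
    edges = [("B1", "B2"), ("B2", "B3"), ("B3", "B2")]   # B2 and B3 form a loop
    gen = {"B1": {"d1"}, "B2": {"d2"}, "B3": {"d3"}}
    kill = {"B1": set(), "B2": {"d1"}, "B3": set()}

    IN, OUT = reaching_definitions(blocks, edges, gen, kill)
    print(IN["B2"] == {"d1", "d2", "d3"})             # True: all three definitions reach B2
    ```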
  • SQL vs NoSQL
    SQL vs NoSQL. By: Mohammed-Ali Khan. Email: [email protected]. Date: October 21, 2012. York University.

    Agenda:
    • History of DBMS – why relational is most popular?
    • Types of DBMS – brief overview
    • Main characteristics of RDBMS
    • Main characteristics of NoSQL DBMS
    • SQL vs NoSQL
    • DB overview – Cassandra
    • Criticism of NoSQL
    • Some Predictions / Conclusions

    History of DBMS – why relational is most popular?
    • 19th century: US Government needed reports from large datasets to conduct nation-wide census.
    • 1890: Herman Hollerith creates the first automatic processing equipment which allowed American census to be conducted.
    • 1911: IBM founded.
    • 1960s: Efforts to standardize the database technology – US DoD: Conference on Data Systems Language (Codasyl).
    • 1968: IBM introduced IMS DBMS (a hierarchical DBMS).
    • IBM employee Edgar Codd not satisfied, quoted as saying: "Taking the old line view, that the burden of finding information should be placed on users..."
    • Edgar Codd publishes a paper "A Relational Model of Data for Large Shared Data Banks". Key points: independence of data from hardware/storage implementation; high level non-procedural language for data access; pointers/keys (primary/secondary).
    • 1970s: Two database projects launched based on relational model – Ingres (Department of Defence Initiative) and System R (IBM). Ingres had its own query language called QUEL whereas System R used SQL. Larry Ellison, who worked at IBM, read publications of the System R group and eventually founded ORACLE.
    • 1980: IBM introduced SQL for the mainframe market. In a nutshell, American Government's requirements had a strong role in development of the relational DBMS.
  • Data Base Research at Berkeley
    Data Base Research at Berkeley. Michael Stonebraker, Department of Electrical Engineering and Computer Science, University of California, Berkeley.

    1. Introduction. Data base research at Berkeley can be decomposed into four categories. First there is the POSTGRES project led by Michael Stonebraker, which is building a next-generation DBMS with novel object management, knowledge management and time travel capabilities. Second, Larry Rowe leads the PICASSO project which is constructing an advanced application development tool kit for use with POSTGRES and other DBMSs. Third, there is the XPRS project (eXtended Postgres on Raid and Sprite) which is led by four Berkeley faculty, Randy Katz, John Ousterhout, Dave Patterson and Michael Stonebraker. XPRS is exploring high performance I/O systems and their efficient utilization by operating systems and data managers. The last project is one aimed at improving the reliability of DBMS software by dealing more effectively with software errors that is also led by Michael Stonebraker. In this short paper we summarize work on POSTGRES, PICASSO, and software reliability and report on the XPRS work that is relevant to data management.

    ...the performance of the prototype, finishing the implementation of promised function, and on integrating tertiary memory into the POSTGRES environment in a more flexible way. General POSTGRES information can be obtained from the following references.
    [MOSH90] Mosher, C. (ed.), "The POSTGRES Reference Manual, Version 2," University of California, Electronics Research Laboratory, Technical Report UCB/ERL M90-60, Berkeley, Ca., August 1990.
    [STON86] Stonebraker, M. and Rowe, L., "The Design of POSTGRES," Proc. 1986 ACM-SIGMOD Conference on Management of Data, Washington, D.C., June 1986.
    [STON90] Stonebraker, M. et al., "The Implementation of POSTGRES," IEEE Transactions on Knowledge and Data Engineering,
  • Actor Model of Computation
    Actor Model of Computation. Carl Hewitt, http://carlhewitt.info. Published in ArXiv: http://arxiv.org/abs/1008.1459. This paper is dedicated to Alonzo Church and Dana Scott.

    The Actor model is a mathematical theory that treats "Actors" as the universal primitives of concurrent digital computation. The model has been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems. Unlike previous models of computation, the Actor model was inspired by physical laws. It was also influenced by the programming languages Lisp, Simula 67 and Smalltalk-72, as well as ideas for Petri Nets, capability-based systems and packet switching. The advent of massive concurrency through client-cloud computing and many-core computer architectures has galvanized interest in the Actor model.

    An Actor is a computational entity that, in response to a message it receives, can concurrently: send a finite number of messages to other Actors; create a finite number of new Actors; designate the behavior to be used for the next message it receives. There is no assumed order to the above actions and they could be carried out concurrently. In addition two messages sent concurrently can arrive in either order. Decoupling the sender from communications sent was a fundamental advance of the Actor model enabling asynchronous communication and control structures as patterns of passing messages. November 7, 2010.

    Contents: Introduction; Fundamental concepts; Illustrations; Modularity thru Direct communication and asynchrony; Indeterminacy and Quasi-commutativity; Locality and Security...
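    The three capabilities listed above map directly onto a mailbox-plus-thread implementation. The following minimal Python sketch is illustrative only (it is not from the paper): each actor handles one message at a time from its mailbox, and its behavior may send messages, create new actors, and designate the behavior used for the next message.

    ```python
    # Minimal actor sketch (illustrative, not from the paper). Each actor owns a
    # mailbox and a thread; in response to a message, its behavior may send
    # messages, create new Actors, and designate the behavior for the next message.
    import queue
    import threading
    import time

    class Actor:
        def __init__(self, behavior):
            self._mailbox = queue.Queue()
            self._behavior = behavior                 # the current "script"
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):                          # asynchronous: never blocks the sender
            self._mailbox.put(msg)

        def become(self, new_behavior):               # designate behavior for the next message
            self._behavior = new_behavior

        def _run(self):
            while True:                               # one message at a time, in mailbox order
                self._behavior(self, self._mailbox.get())

    def counter(n):
        """Behavior factory: an actor that remembers how many messages it has seen
        by becoming a new behavior rather than mutating shared state."""
        def behavior(self, msg):
            print(f"got {msg!r}, count={n + 1}")
            self.become(counter(n + 1))
        return behavior

    if __name__ == "__main__":
        a = Actor(counter(0))
        a.send("hello")
        a.send("world")
        time.sleep(0.1)                               # let the daemon thread drain the mailbox
    ```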
  • The Discoveries of Continuations
    The Discoveries of Continuations. John C. Reynolds ([email protected]), School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213-3890. LISP AND SYMBOLIC COMPUTATION: An International Journal, 6, 233-248, 1993. © 1993 Kluwer Academic Publishers, Manufactured in The Netherlands. Keywords: Semantics, Continuation, Continuation-Passing Style.

    Abstract: We give a brief account of the discoveries of continuations and related concepts by A. van Wijngaarden, A. W. Mazurkiewicz, F. L. Morris, C. P. Wadsworth, J. H. Morris, M. J. Fischer, and S. K. Abdali. In the early history of continuations, basic concepts were independently discovered an extraordinary number of times. This was due less to poor communication among computer scientists than to the rich variety of settings in which continuations were found useful: They underlie a method of program transformation (into continuation-passing style), a style of definitional interpreter (defining one language by an interpreter written in another language), and a style of denotational semantics (in the sense of Scott and Strachey). In each of these settings, by representing "the meaning of the rest of the program" as a function or procedure, continuations provide an elegant description of a variety of language constructs, including call by value and goto statements.

    1. The Background. In the early 1960s, the appearance of Algol 60 [32, 33] inspired a ferment of research on the implementation and formal definition of programming languages. Several aspects of this research were critical precursors of the discovery of continuations. The ability in Algol 60 to jump out of blocks, or even procedure bodies, forced implementors to realize that the representation of a label must include a reference to an environment.
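    As a concrete rendering of "the meaning of the rest of the program" as a function, here is a small Python sketch (not from the article) of the same computation in direct style and in continuation-passing style; the extra argument k is the continuation, applied to the result instead of returning it.

    ```python
    # Direct style vs. continuation-passing style (CPS); illustrative only.

    def factorial(n):
        return 1 if n == 0 else n * factorial(n - 1)

    def factorial_cps(n, k):
        """k is the continuation: a function standing for the rest of the program,
        applied to the result instead of returning it."""
        if n == 0:
            return k(1)
        return factorial_cps(n - 1, lambda r: k(n * r))

    print(factorial(5))                       # 120
    print(factorial_cps(5, lambda r: r))      # 120, using the identity continuation
    factorial_cps(5, print)                   # the continuation decides what happens next: print 120
    ```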