IEEE John von Neumann Medal Recipients


2021: JEFFREY ADGATE DEAN (nonmember), Google Senior Fellow, Mountain View, California, USA. "For contributions to the science and engineering of large-scale distributed computer systems and artificial intelligence systems."
2020: MICHAEL I. JORDAN, Professor, University of California, Berkeley, California, USA. "For contributions to machine learning and data science."
2019: EVA TARDOS, Jacob Gould Schurman Professor of Computer Science, Cornell University, Ithaca, New York, USA. "For contributions to the field of algorithms, including foundational new methods in optimization, approximation algorithms, and algorithmic game theory."
2018: PATRICK COUSOT, Professor, New York University, New York, New York, USA. "For introducing abstract interpretation, a powerful framework for automatically calculating program properties with broad application to verification and optimization."
2017: VLADIMIR VAPNIK, Professor, Columbia University and Facebook AI Research, New York, New York, USA. "For the development of statistical learning theory, the theoretical foundations for machine learning, and support vector machines."
2016: CHRISTOS HARILAOS PAPADIMITRIOU, Professor, University of California, Berkeley, Berkeley, California, USA. "For providing a deeper understanding of computational complexity and its implications for approximation algorithms, artificial intelligence, economics, database theory, and biology."
2015: JAMES A. GOSLING, Chief Software Architect, Liquid Robotics, Redwood, California, USA. "For the Java programming language, Java Virtual Machine, and other contributions to programming languages and environments."
2014: CLEVE MOLER, Chief Mathematician, MathWorks, Santa Fe, New Mexico, USA. "For fundamental and widely used contributions to numerical linear algebra and scientific and engineering software that transformed computational science."
2013: JACK B. DENNIS, Professor Emeritus, MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA, USA. "For fundamental abstractions to implement protection in operating systems and for the dataflow programming paradigm."
2012: EDWARD McCLUSKEY, Professor Emeritus, Departments of Electrical Engineering and Computer Science, Stanford University, Stanford, CA, USA. "For fundamental contributions that shaped the design and testing of digital systems."
2011: C.A.R. (TONY) HOARE, Principal Researcher, Microsoft Research Ltd., Cambridge, UK. "For seminal contributions to the scientific foundation of software design."
2010: JOHN HOPCROFT, IBM Professor of Engineering and Applied Mathematics, Cornell University, Ithaca, NY, USA, and JEFFREY ULLMAN, Professor Emeritus of Computer Science, Stanford University, Stanford, CA, USA. "For laying the foundations for the fields of automata and language theory and many seminal contributions to theoretical computer science."
2009: SUSAN L. GRAHAM, Pehong Chen Distinguished Professor of Computer Science, University of California at Berkeley, Berkeley, CA, USA. "For contributions to programming language design and implementation and for exemplary service to the discipline of computer science."
2008: LESLIE LAMPORT, Researcher, Microsoft Corporation, Silicon Valley Research Center, Mountain View, CA, USA. "For establishment of the foundations of distributed and concurrent computing."
2007: CHARLES THACKER, Distinguished Engineer, Microsoft Corporation, Redmond, WA, USA. "For a central role in the creation of the personal computer and the development of networked computer systems."
2006: EDWIN CATMULL, President, Pixar Animation Studios, Emeryville, CA, USA. "For fundamental contributions to computer graphics, and a pioneering role in the use of computer animation in motion pictures."
2005: MICHAEL STONEBRAKER, Adjunct Professor, Laboratory for Computer Science, Massachusetts Institute of Technology, Bedford, NH, USA. "For contributions to the design, implementation, and commercialization of relational and object-relational database systems."
2004: BARBARA H. LISKOV, Ford Professor of Engineering and Associate Head for Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA. "For fundamental contributions to programming languages, programming methodology, and distributed systems."
2003: ALFRED V. AHO, Professor, Columbia University, New York, NY, USA. "For contributions to the foundations of computer science and to the fields of algorithms and software tools."
2002: OLE-JOHAN DAHL and KRISTEN NYGAARD, University of Oslo, Oslo, Norway. "For the introduction of the concepts underlying object-oriented programming through the design and implementation of SIMULA 67."
2001: BUTLER W. LAMPSON, Distinguished Engineer at Microsoft and Adjunct Professor at MIT. "For technical leadership in the creation of timesharing, distributed computing, and networking security and program languages."
2000: JOHN L. HENNESSY, Stanford University, Stanford, CA, USA, and DAVID A. PATTERSON, University of California at Berkeley, Berkeley, CA, USA. "For creating a revolution in computer architecture through their exploration, popularization, and commercialization of architectural innovations."
1999: DOUGLAS C. ENGELBART, Bootstrap Institute, Fremont, CA, USA. "For creating the foundations of real time, interactive, personal computing including CRT displays, windows, the mouse, hypermedia linking and conferencing, and on-line journals."
1998: IVAN EDWARD SUTHERLAND, Sun Microsystems Laboratories, Palo Alto, CA, USA. "For pioneering contributions to computer graphics and microelectronic design, and leadership in the support of computer science and engineering research."
1997: MAURICE V. WILKES, Olivetti Research Ltd., Cambridge, England. "For a lifelong career of seminal contributions to computing, including the first full-scale operational stored program computer and to the foundations of programming."
1996: CARVER A. MEAD, California Institute of Technology, Pasadena, CA, USA. "For leadership and innovative contributions to VLSI and creative microelectronic structures."
1995: DONALD E. KNUTH, Stanford University, Stanford, CA, USA. "For fundamental contributions to the theory and practice of computer science and to the art of computer programming."
1994: JOHN COCKE, IBM T.J. Watson Research Center, Yorktown Heights, NY, USA. "For contributions to the computer industry including the invention, development, and implementation of Reduced Instruction Set Computer (RISC) architecture and program optimization technology."
1993: FREDERICK P. BROOKS, JR., University of North Carolina, Chapel Hill, NC, USA. "For significant developments in computer architecture, insightful observations on software engineering, and for computer science education and professional service."
1992: C. GORDON BELL, Stardent Computer, Sunnyvale, CA, USA. "For innovative contributions to computer architecture and design."
Recommended publications
  • Motion and Time Study: The Goals of Motion Study
    Motion and Time Study. The Goals of Motion Study: • Improvement • Planning / Scheduling (Cost) • Safety Know How Long to Complete Task for • Scheduling (Sequencing) • Efficiency (Best Way) • Safety (Easiest Way) How Does a Job Incumbent Spend a Day • Value Added vs. Non-Value Added The General Strategy of IE to Reduce and Control Cost • Are people productive ALL of the time? • Which parts of the job are really necessary? • Can the job be done EASIER, SAFER and FASTER? • Is there a sense of employee involvement? Some Techniques of Industrial Engineering • Measure – Time and Motion Study – Work Sampling • Control – Work Standards (Best Practices) – Accounting – Labor Reporting • Improve – Small group activities Time Study • Observation – Stop Watch – Computer / Interactive • Engineering Labor Standards (Bad Idea) • Job Order / Labor reporting data History • Frederick Taylor (1900s) studied motions of iron workers – attempted to "mechanize" motions to maximize efficiency – including proper rest, ergonomics, etc. • Frank and Lillian Gilbreth used motion pictures to study worker motions – developed 17 motions called "therbligs" that describe all possible work: • GET G • PUT P • GET WEIGHT GW • PUT WEIGHT PW • REGRASP R • APPLY PRESSURE A • EYE ACTION E • FOOT ACTION F • STEP S • BEND & ARISE B • CRANK C Time Study (Stopwatch Measurement) 1. List work elements 2. Discuss with worker 3. Measure with stopwatch (running vs. reset) 4. Repeat for n observations 5. Compute mean and std dev of work station time 6. Be aware of allowances/foreign elements, etc. Work Sampling • Determine what is done over a typical day • Random Reporting • Periodic Reporting Learning Curve • For repetitive work, the worker gains skill and knowledge of the product/process over time • Thus we expect output to increase, and the time to complete the task to decrease, as more units are produced. Traditional Learning Curve vs. Actual Curve (change in design, process, etc.) Learning Curve • Usually define learning as a percentage reduction in the time it takes to make a unit.
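    A minimal sketch of the arithmetic described in this excerpt: averaging stopwatch observations into a standard time, and the log-linear learning curve in which each doubling of cumulative output multiplies unit time by a fixed learning rate. The 15% allowance and 80% learning rate are illustrative assumptions, not values from the slides.

    ```python
    import math
    import statistics

    def standard_time(observations, allowance=0.15):
        """Stopwatch time study: average the observed cycle times, then
        inflate by an allowance factor (the 15% here is an assumed value)."""
        mean = statistics.mean(observations)
        return {"mean": mean,
                "stdev": statistics.stdev(observations),
                "standard_time": mean * (1 + allowance)}

    def learning_curve_time(first_unit_time, unit, rate=0.80):
        """Log-linear learning curve: every doubling of cumulative output
        multiplies the per-unit time by the learning rate (assumed 80%)."""
        b = math.log(rate) / math.log(2)
        return first_unit_time * unit ** b

    print(standard_time([2.1, 2.4, 2.2, 2.3, 2.5]))  # observed cycle times in minutes
    print(learning_curve_time(10.0, unit=8))         # time for the 8th unit, ~5.1 min
    ```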
  • Computational Learning Theory: New Models and Algorithms
    Computational Learning Theory: New Models and Algorithms by Robert Hal Sloan. S.M. EECS, Massachusetts Institute of Technology (1986); B.S. Mathematics, Yale University (1983). Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY, June 1989. © Robert Hal Sloan, 1989. All rights reserved. The author hereby grants to MIT permission to reproduce and to distribute copies of this thesis document in whole or in part. Signature of Author: Department of Electrical Engineering and Computer Science, May 23, 1989. Certified by Ronald L. Rivest, Professor of Computer Science, Thesis Supervisor. Accepted by Arthur C. Smith, Chairman, Departmental Committee on Graduate Students. Abstract: In the past several years, there has been a surge of interest in computational learning theory: the formal (as opposed to empirical) study of learning algorithms. One major cause for this interest was the model of probably approximately correct learning, or pac learning, introduced by Valiant in 1984. This thesis begins by presenting a new learning algorithm for a particular problem within that model: learning submodules of the free Z-module Z^k. We prove that this algorithm achieves probable approximate correctness, and indeed, that it is within a log log factor of optimal in a related, but more stringent model of learning, on-line mistake bounded learning. We then proceed to examine the influence of noisy data on pac learning algorithms in general. Previously it has been shown that it is possible to tolerate large amounts of random classification noise, but only a very small amount of a very malicious sort of noise.
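    For readers unfamiliar with the pac model mentioned in the abstract, the standard textbook sample-complexity bound for a consistent learner over a finite hypothesis class H conveys the meaning of "probably approximately correct"; this is the generic bound, not a result from the thesis itself:

    ```latex
    % With probability at least 1 - \delta, a hypothesis consistent with the
    % sample has true error at most \epsilon once the number of examples satisfies
    m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
    ```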
  • On-the-Fly Garbage Collection: An Exercise in Cooperation
    On-the-Fly Garbage Collection: An Exercise in Cooperation. Edsger W. Dijkstra, Burroughs Corporation; Leslie Lamport, SRI International; A.J. Martin, C.S. Operating Systems Editor: R.S. Gaines.
  • Oracle vs. NoSQL vs. NewSQL: Comparing Database Technology
    Oracle vs. NoSQL vs. NewSQL: Comparing Database Technology. John Ryan, Senior Solution Architect, Snowflake Computing. Table of Contents: The World has Changed; What's Changed?; What's the Problem?; Performance vs. Availability and Durability; Consistency vs. Availability; Flexibility vs. Scalability; ACID vs. Eventual Consistency; The OLTP Database Reimagined; Achieving the Impossible!; NewSQL Database Technology; VoltDB; MemSQL; Which Applications Need NewSQL Technology?; Conclusion; About the Author. The World has Changed: The world has changed massively in the past 20 years. Back in the year 2000, a few million users connected to the web using a 56k modem attached to a PC, and Amazon only sold books. Now billions of people are using their smartphone or tablet 24x7 to buy just about everything, and they're interacting with Facebook, Twitter and Instagram. The pace has been unstoppable. Expectations have also changed. If a web page doesn't refresh within seconds we're quickly frustrated, and go elsewhere. If a web site is down, we fear it's the end of civilisation as we know it. If a major site is down, it makes global headlines. "Instant gratification takes too long!" — Ladawn Clare-Panton. Aside: If you're not a seasoned Database Architect, you may want to start with my previous articles on Scalability and Database Architecture. What's Changed? The above leads to a few observations: • Scalability — With potentially explosive traffic growth, IT systems need to quickly grow to meet exponential numbers of transactions. • High Availability — IT systems must run 24x7, and be resilient to failure.
  • The Declarative Imperative: Experiences and Conjectures in Distributed Logic
    The Declarative Imperative: Experiences and Conjectures in Distributed Logic. Joseph M. Hellerstein, University of California, Berkeley. [email protected] ABSTRACT: The rise of multicore processors and cloud computing is putting enormous pressure on the software community to find solutions to the difficulty of parallel and distributed programming. At the same time, there is more—and more varied—interest in data-centric programming languages than at any time in computing history, in part because these languages parallelize naturally. This juxtaposition raises the possibility that the theory of declarative database query languages can provide a foundation for the next generation of parallel and distributed programming languages. In this paper I reflect on my group's experience over seven years using Datalog extensions to build networking protocols and distributed systems. Based on that experience, I present a number of theoretical conjectures that may both interest the database community, and clarify important practical issues in the field. The juxtaposition of these trends presents stark alternatives. Will the forecasts of doom and gloom materialize in a storm that drowns out progress in computing? Or is this the long-delayed catharsis that will wash away today's thicket of imperative languages, preparing the ground for a more fertile declarative future? And what role might the database community play in shaping this future, having sowed the seeds of Datalog over the last quarter century? Before addressing these issues directly, a few more words about both crisis and opportunity are in order. 1.1 Urgency: Parallelism. I would be panicked if I were in industry. — John Hennessy, President, Stanford University [35]. The need for parallelism is visible at micro and macro scales.
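    A tiny illustration of the declarative, recursive style the abstract refers to: the classic Datalog transitive-closure program path(X,Y) :- link(X,Y). path(X,Z) :- link(X,Y), path(Y,Z). evaluated bottom-up to a fixpoint. This is only a sketch of the general idea of recursive queries, not the Datalog extensions used in the author's systems.

    ```python
    def transitive_closure(links):
        """Naive bottom-up evaluation of the Datalog program:
           path(X,Y) :- link(X,Y).
           path(X,Z) :- link(X,Y), path(Y,Z).
        Iterate until no new facts are derived (a fixpoint)."""
        path = set(links)
        while True:
            new = {(x, z) for (x, y1) in links for (y2, z) in path if y1 == y2}
            if new <= path:
                return path
            path |= new

    print(sorted(transitive_closure({("a", "b"), ("b", "c"), ("c", "d")})))
    ```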
  • Security Analysis of Cryptographically Controlled Access to XML Documents
    Security Analysis of Cryptographically Controlled Access to XML Documents. Martín Abadi, Computer Science Department, University of California at Santa Cruz, [email protected]; Bogdan Warinschi, Computer Science Department, Stanford University, [email protected]. ABSTRACT: Some promising recent schemes for XML access control employ encryption for implementing security policies on published data, avoiding data duplication. In this paper we study one such scheme, due to Miklau and Suciu. That scheme was introduced with some intuitive explanations and goals, but without precise definitions and guarantees for the use of cryptography (specifically, symmetric encryption and secret sharing). We bridge this gap in the present work. We analyze the scheme in the context of the rigorous models of modern cryptography. We obtain formal results in simple, symbolic terms close to the vocabulary of Miklau and Suciu. We also obtain more detailed computational results that establish security against probabilistic polynomial-time attackers. This line of research [4, 5, 7, 8, 14, 19, 23] has led to efficient and elegant publication techniques that avoid data duplication by relying on cryptography. For instance, using those techniques, medical records may be published as XML documents, with parts encrypted in such a way that only the appropriate users (physicians, nurses, researchers, administrators, and patients) can see their contents. The work of Miklau and Suciu [19] is a crisp, compelling example of this line of research. They develop a policy query language for specifying fine-grained access policies on XML documents and a logical model based on the concept of "protection". They also show how to translate consistent policies into protections, and how to implement protections by XML encryption [10].
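    A toy illustration of the general idea in this excerpt: publish one document, encrypt each sub-part under a key held only by its intended audience, so no data duplication is needed. The sketch below uses the Python cryptography package's Fernet symmetric encryption with hypothetical role names; it is not the Miklau–Suciu protection scheme or XML Encryption itself.

    ```python
    # pip install cryptography
    from cryptography.fernet import Fernet

    # One symmetric key per audience (hypothetical roles).
    keys = {"physician": Fernet.generate_key(), "researcher": Fernet.generate_key()}

    def publish(record):
        """Encrypt each field under its audience's key; the resulting document
        can be handed to everyone, but each part is readable only by key holders."""
        return {role: Fernet(keys[role]).encrypt(text.encode())
                for role, text in record.items()}

    published = publish({"physician": "diagnosis: ...", "researcher": "cohort stats: ..."})

    # Only a holder of the physician key can recover the physician part.
    print(Fernet(keys["physician"]).decrypt(published["physician"]).decode())
    ```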
  • Cogsworth: Byzantine View Synchronization (arXiv:1909.05204v3 [cs.DC], 6 Feb 2020)
    Cogsworth: Byzantine View Synchronization. Oded Naor, Technion and Calibra; Mathieu Baudet, Calibra; Dahlia Malkhi, Calibra; Alexander Spiegelman, VMware Research. Most methods for Byzantine fault tolerance (BFT) in the partial synchrony setting divide the local state of the nodes into views, and the transition from one view to the next dictates a leader change. In order to provide liveness, all honest nodes need to stay in the same view for a sufficiently long time. This requires view synchronization, a requisite of BFT that we extract and formally define here. Existing approaches for Byzantine view synchronization incur quadratic communication (in n, the number of parties). A cascade of O(n) view changes may thus result in O(n3) communication complexity. This paper presents a new Byzantine view synchronization algorithm named Cogsworth, that has optimistically linear communication complexity and constant latency. Faced with benign failures, Cogsworth has expected linear communication and constant latency. The result here serves as an important step towards reaching solutions that have overall quadratic communication, the known lower bound on Byzantine fault tolerant consensus. Cogsworth is particularly useful for a family of BFT protocols that already exhibit linear communication under various circumstances, but suffer quadratic overhead due to view synchronization. 1. INTRODUCTION. Logical synchronization is a requisite for progress to be made in asynchronous state machine replication (SMR). Previous Byzantine fault tolerant (BFT) synchronization mechanisms incur quadratic message complexities, frequently dominating over the linear cost of the consensus cores of BFT solutions. In this work, we define the view synchronization problem and provide the first solution in the Byzantine setting, whose latency is bounded and communication cost is linear, under a broad set of scenarios.
  • Mathematical Writing by Donald E. Knuth, Tracy Larrabee, and Paul M. Roberts
    Mathematical Writing by Donald E. Knuth, Tracy Larrabee, and Paul M. Roberts. This report is based on a course of the same name given at Stanford University during autumn quarter, 1987. Here's the catalog description: CS 209. Mathematical Writing: Issues of technical writing and the effective presentation of mathematics and computer science. Preparation of theses, papers, books, and "literate" computer programs. A term paper on a topic of your choice; this paper may be used for credit in another course. The first three lectures were a "minicourse" that summarized the basics. About two hundred people attended those three sessions, which were devoted primarily to a discussion of the points in §1 of this report. An exercise (§2) and a suggested solution (§3) were also part of the minicourse. The remaining 28 lectures covered these and other issues in depth. We saw many examples of "before" and "after" from manuscripts in progress. We learned how to avoid excessive subscripts and superscripts. We discussed the documentation of algorithms, computer programs, and user manuals. We considered the process of refereeing and editing. We studied how to make effective diagrams and tables, and how to find appropriate quotations to spice up a text. Some of the material duplicated some of what would be discussed in writing classes offered by the English department, but the vast majority of the lectures were devoted to issues that are specific to mathematics and/or computer science. Guest lectures by Herb Wilf (University of Pennsylvania), Jeff Ullman (Stanford), Leslie Lamport (Digital Equipment Corporation), Nils Nilsson (Stanford), Mary-Claire van Leunen (Digital Equipment Corporation), Rosalie Stemer (San Francisco Chronicle), and Paul Halmos (University of Santa Clara), were a special highlight as each of these outstanding authors presented their own perspectives on the problems of mathematical communication.
  • The Principle of Wave–Particle Duality: An Overview
    1 The Principle of Wave–Particle Duality: An Overview. 1.1 Introduction. In the year 1900, physics entered a period of deep crisis as a number of peculiar phenomena, for which no classical explanation was possible, began to appear one after the other, starting with the famous problem of blackbody radiation. By 1923, when the "dust had settled," it became apparent that these peculiarities had a common explanation. They revealed a novel fundamental principle of nature that was completely at odds with the framework of classical physics: the celebrated principle of wave–particle duality, which can be phrased as follows. The principle of wave–particle duality: All physical entities have a dual character; they are waves and particles at the same time. Everything we used to regard as being exclusively a wave has, at the same time, a corpuscular character, while everything we thought of as strictly a particle behaves also as a wave. The relations between these two classically irreconcilable points of view—particle versus wave—are E = hf, p = h/λ (1.1) or, equivalently, f = E/h, λ = h/p (1.2). In expressions (1.1) we start off with what we traditionally considered to be solely a wave—an electromagnetic (EM) wave, for example—and we associate its wave characteristics f and λ (frequency and wavelength) with the corpuscular characteristics E and p (energy and momentum) of the corresponding particle. Conversely, in expressions (1.2), we begin with what we once regarded as purely a particle—say, an electron—and we associate its corpuscular characteristics E and p with the wave characteristics f and λ of the corresponding wave.
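    A quick numerical illustration of relations (1.1), using standard constants rather than figures from the book excerpt: for green light of frequency f ≈ 5.5 × 10^14 Hz,

    ```latex
    E = hf \approx (6.63\times 10^{-34}\,\mathrm{J\,s})(5.5\times 10^{14}\,\mathrm{Hz})
      \approx 3.6\times 10^{-19}\,\mathrm{J} \approx 2.3\,\mathrm{eV},
    \qquad
    p = \frac{h}{\lambda} = \frac{E}{c} \approx 1.2\times 10^{-27}\,\mathrm{kg\,m/s}.
    ```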
  • The Best Nurturers in Computer Science Research
    The Best Nurturers in Computer Science Research. Bharath Kumar M., Y. N. Srikant. IISc-CSA-TR-2004-10, http://archive.csa.iisc.ernet.in/TR/2004/10/, Computer Science and Automation, Indian Institute of Science, India, October 2004. Abstract: The paper presents a heuristic for mining nurturers in temporally organized collaboration networks: people who facilitate the growth and success of the young ones. Specifically, this heuristic is applied to the computer science bibliographic data to find the best nurturers in computer science research. The measure of success is parameterized, and the paper demonstrates experiments and results with publication count and citations as success metrics. Rather than just the nurturer's success, the heuristic captures the influence he has had in the independent success of the relatively young in the network. These results can hence be a useful resource to graduate students and post-doctoral candidates. The heuristic is extended to accurately yield ranked nurturers inside a particular time period. Interestingly, there is a recognizable deviation between the rankings of the most successful researchers and the best nurturers, which although is obvious from a social perspective has not been statistically demonstrated. Keywords: Social Network Analysis, Bibliometrics, Temporal Data Mining. 1 Introduction: Consider a student Arjun, who has finished his under-graduate degree in Computer Science, and is seeking a PhD degree followed by a successful career in Computer Science research. How does he choose his research advisor? He has the following options with him: 1. Look up the rankings of various universities [1], and apply to any "reasonably good" professor in any of the top universities.
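    The paper's heuristic is parameterized over a success metric; the following is only a guessed, simplified stand-in to make the idea concrete: credit a senior co-author with the later, independent success of junior collaborators. The scoring rule, names, and numbers here are assumptions for illustration, not the heuristic or data from the report.

    ```python
    from collections import defaultdict

    # (junior, senior) co-authorships and later independent success (made-up citation counts).
    coauthorships = [("arjun", "prof_a"), ("bina", "prof_a"), ("bina", "prof_b")]
    success = {"arjun": 120, "bina": 45}

    def nurturer_scores(coauthorships, success):
        """Toy nurturer score: sum each junior collaborator's later success,
        split evenly among that junior's mentors."""
        mentors = defaultdict(set)
        for junior, senior in coauthorships:
            mentors[junior].add(senior)
        scores = defaultdict(float)
        for junior, seniors in mentors.items():
            for senior in seniors:
                scores[senior] += success.get(junior, 0) / len(seniors)
        return dict(scores)

    print(nurturer_scores(coauthorships, success))  # {'prof_a': 142.5, 'prof_b': 22.5}
    ```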
  • Iterative Algorithms for Global Flow Analysis by Robert Endre Tarjan
    Iterative Algorithms for Global Flow Analysis. Robert Endre Tarjan, Computer Science Department, School of Humanities and Sciences, Stanford University, Stanford, California 94305. STAN-CS-76-547, March 1976. Abstract: This paper studies iterative methods for the global flow analysis of computer programs. We define a hierarchy of global flow problem classes, each solvable by an appropriate generalization of the "node listing" method of Kennedy. We show that each of these generalized methods is optimum, among all iterative algorithms, for solving problems within its class. We give lower bounds on the time required by iterative algorithms for each of the problem classes. Keywords: computational complexity, flow graph reducibility, global flow analysis, graph theory, iterative algorithm, lower time bound, node listing. Research partially supported by National Science Foundation grant MM 75-22870. 1. Introduction. A problem extensively studied in recent years [2,3,5,7,8,9,12,13,14,15,27,28,29,30] is that of globally analyzing computer programs; that is, collecting information which is distributed throughout a computer program, generally for the purpose of optimizing the program. Roughly speaking, global flow analysis requires the determination, for each program block, of a property known to hold on entry to the block, independent of the path taken to reach the block. A widely used approach to global flow analysis is to model the set of possible properties by a semi-lattice (we desire the "maximum" property for each block), to model the control structure of the program by a directed graph with one vertex for each program block, and to specify, for each branch from block to block, the function by which that branch transforms the set of properties.
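    To make the setting concrete, here is a hedged sketch of the standard iterative (worklist) solution of one forward flow problem, reaching definitions, where block properties form a lattice of sets combined by union. It illustrates the class of iterative algorithms the paper analyzes, not Kennedy's node-listing method or Tarjan's generalizations; the three-block program and definitions d1..d3 are hypothetical.

    ```python
    def reaching_definitions(succ, gen, kill):
        """Iterative worklist solution of a forward flow problem:
           IN[b]  = union of OUT[p] over predecessors p of b
           OUT[b] = gen[b] | (IN[b] - kill[b])
        Iterate until no OUT set changes (a fixpoint)."""
        blocks = list(succ)
        preds = {b: [p for p in blocks if b in succ[p]] for b in blocks}
        out = {b: set() for b in blocks}
        work = set(blocks)
        while work:
            b = work.pop()
            in_b = set().union(*(out[p] for p in preds[b])) if preds[b] else set()
            new_out = gen[b] | (in_b - kill[b])
            if new_out != out[b]:
                out[b] = new_out
                work |= set(succ[b])   # re-examine blocks that consume OUT[b]
        return out

    # Tiny example: B1 -> B2 -> B3, with a loop on B2.
    succ = {"B1": ["B2"], "B2": ["B3", "B2"], "B3": []}
    gen  = {"B1": {"d1"}, "B2": {"d2"}, "B3": {"d3"}}
    kill = {"B1": set(), "B2": {"d1"}, "B3": set()}
    print(reaching_definitions(succ, gen, kill))
    ```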
  • Verification and Specification of Concurrent Programs
    Verification and Specification of Concurrent Programs. Leslie Lamport, Digital Equipment Corporation Systems Research Center, 16 November 1993. To appear in the proceedings of a REX Workshop held in The Netherlands in June, 1993. Abstract: I explore the history of, and lessons learned from, eighteen years of assertional methods for specifying and verifying concurrent programs. I then propose a Utopian future in which mathematics prevails. Keywords: Assertional methods, fairness, formal methods, mathematics, Owicki-Gries method, temporal logic, TLA. Table of Contents: 1 A Brief and Rather Biased History of State-Based Methods for Verifying Concurrent Systems; 1.1 From Floyd to Owicki and Gries, and Beyond; 1.2 Temporal Logic; 1.3 Unity; 2 An Even Briefer and More Biased History of State-Based Specification Methods for Concurrent Systems; 2.1 Axiomatic Specifications; 2.2 Operational Specifications; 2.3 Finite-State Methods; 3 What We Have Learned; 3.1 Not Sequential vs. Concurrent, but Functional vs. Reactive; 3.2 Invariance Under Stuttering; 3.3 The Definitions of Safety and Liveness; 3.4 Fairness is Machine Closure; 3.5 Hiding is Existential Quantification; 3.6 Specification Methods that Don't Work; 3.7 Specification Methods that Work for the Wrong Reason; 4 Other Methods; 5 A Brief Advertisement for My Approach to State-Based Verification and Specification of Concurrent Systems; 5.1 The Power of Formal Mathematics; 5.2 Specifying Programs with Mathematical Formulas; 5.3 TLA.
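    A small illustration of two of the lessons listed in the table of contents (safety vs. liveness, and invariance under stuttering), for a hypothetical counter x with limit N; the notation is generic temporal logic, not the TLA definitions given in the paper:

    ```latex
    % Safety: nothing bad ever happens (the counter never exceeds its limit).
    \Box\,(x \le N)
    % Liveness: something good eventually happens (the counter eventually reaches it).
    \Diamond\,(x = N)
    % Stuttering invariance: inserting steps that leave x unchanged does not
    % affect whether a behavior satisfies either formula.
    ```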