The 2014 Edsger W. Dijkstra Prize in Distributed Computing
Recommended publications
ACM SIGACT News Distributed Computing Column 28
ACM SIGACT News Distributed Computing Column 28
Idit Keidar, Dept. of Electrical Engineering, Technion, Haifa 32000, Israel. [email protected]
Sergio Rajsbaum, who edited this column for seven years and established it as a relevant and popular venue, is stepping down. This issue is my first step in the big shoes he vacated. I would like to take this opportunity to thank Sergio for providing us with seven years’ worth of interesting columns. In producing these columns, Sergio has enjoyed the support of the community at-large and obtained material from many authors, who greatly contributed to the column’s success. I hope to enjoy a similar level of support; I warmly welcome your feedback and suggestions for material to include in this column! The main two conferences in the area of principles of distributed computing, PODC and DISC, took place this summer. This issue is centered around these conferences, and more broadly, distributed computing research as reflected therein, ranging from reviews of this year’s instantiations, through influential papers in past instantiations, to examining PODC’s place within the realm of computer science. I begin with a short review of PODC’07, and highlight some “hot” trends that have taken root in PODC, as reflected in this year’s program. Some of the forthcoming columns will be dedicated to these up-and-coming research topics. This is followed by a review of this year’s DISC, by Edward (Eddie) Bortnikov. For some perspective on long-running trends in the field, I next include the announcement of this year’s Edsger W. Dijkstra Prize.
Edsger Dijkstra: The Man Who Carried Computer Science on His Shoulders
INFERENCE / Vol. 5, No. 3
Edsger Dijkstra: The Man Who Carried Computer Science on His Shoulders
Krzysztof Apt
As it turned out, the train I had taken from Nijmegen to Eindhoven arrived late. To make matters worse, I was then unable to find the right office in the university building. When I eventually arrived for my appointment, I was more than half an hour behind schedule. The professor completely ignored my profuse apologies and proceeded to take a full hour for the meeting. It was the first time I met Edsger Wybe Dijkstra. At the time of our meeting in 1975, Dijkstra was 45 years old. The most prestigious award in computer science, the ACM Turing Award, had been conferred on him three years earlier. Almost twenty years his junior, I knew very little about the field—I had only learned what a flowchart was a couple of weeks earlier.
Edsger Dijkstra was born in Rotterdam in 1930. He described his father, at one time the president of the Dutch Chemical Society, as “an excellent chemist,” and his mother as “a brilliant mathematician who had no job.”1 In 1948, Dijkstra achieved remarkable results when he completed secondary school at the famous Erasmiaans Gymnasium in Rotterdam. His school diploma shows that he earned the highest possible grade in no less than six out of thirteen subjects. He then enrolled at the University of Leiden to study physics. In September 1951, Dijkstra’s father suggested he attend a three-week course on programming in Cambridge. It turned out to be an idea with far-reaching consequences.
On-the-Fly Garbage Collection: An Exercise in Cooperation
Operating Systems (R.S. Gaines, Editor)
On-the-Fly Garbage Collection: An Exercise in Cooperation
Edsger W. Dijkstra, Burroughs Corporation; Leslie Lamport, SRI International; A.J. Martin, C.S. Scholten, and E.F.M. Steffens.
Cogsworth: Byzantine View Synchronization
Cogsworth: Byzantine View Synchronization
Oded Naor, Technion and Calibra; Mathieu Baudet, Calibra; Dahlia Malkhi, Calibra; Alexander Spiegelman, VMware Research
Most methods for Byzantine fault tolerance (BFT) in the partial synchrony setting divide the local state of the nodes into views, and the transition from one view to the next dictates a leader change. In order to provide liveness, all honest nodes need to stay in the same view for a sufficiently long time. This requires view synchronization, a requisite of BFT that we extract and formally define here. Existing approaches for Byzantine view synchronization incur quadratic communication (in n, the number of parties). A cascade of O(n) view changes may thus result in O(n³) communication complexity. This paper presents a new Byzantine view synchronization algorithm named Cogsworth, that has optimistically linear communication complexity and constant latency. Faced with benign failures, Cogsworth has expected linear communication and constant latency. The result here serves as an important step towards reaching solutions that have overall quadratic communication, the known lower bound on Byzantine fault tolerant consensus. Cogsworth is particularly useful for a family of BFT protocols that already exhibit linear communication under various circumstances, but suffer quadratic overhead due to view synchronization.
1. INTRODUCTION
Logical synchronization is a requisite for progress to be made in asynchronous state machine replication (SMR). Previous Byzantine fault tolerant (BFT) synchronization mechanisms incur quadratic message complexities, frequently dominating over the linear cost of the consensus cores of BFT solutions. In this work, we define the view synchronization problem and provide the first solution in the Byzantine setting, whose latency is bounded and communication cost is linear, under a broad set of scenarios.
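To make the complexity figures in the abstract concrete, here is a back-of-the-envelope sketch of the naive all-to-all pattern that the paper sets out to improve; this is purely illustrative, not Cogsworth's algorithm, and the function name is hypothetical.

```python
def naive_view_change_messages(n: int, cascaded_views: int) -> int:
    """Message count if every node broadcasts a view-change message to all
    other nodes for each new view (the quadratic pattern the paper avoids)."""
    per_view = n * (n - 1)              # O(n^2) messages per view change
    return per_view * cascaded_views    # O(n^3) for a cascade of O(n) views

n = 100
print(naive_view_change_messages(n, cascaded_views=n))  # 990000, roughly n^3
```

Cogsworth's contribution is to bring the per-view cost down to (optimistically) linear, so that view synchronization no longer dominates the linear consensus core.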
Self-Stabilization Through the Lens of Game Theory
Self-Stabilization Through the Lens of Game Theory
Krzysztof R. Apt (CWI, Amsterdam, The Netherlands; MIMUW, University of Warsaw, Warsaw, Poland) and Ehsan Shoja (Sharif University of Technology, Tehran, Iran)
Abstract. In 1974 E.W. Dijkstra introduced the seminal concept of self-stabilization that turned out to be one of the main approaches to fault-tolerant computing. We show here how his three solutions can be formalized and reasoned about using the concepts of game theory. We also determine the precise number of steps needed to reach self-stabilization in his first solution.
1 Introduction
In 1974 Edsger W. Dijkstra introduced in a two-page article [10] the notion of self-stabilization. The paper was completely ignored until 1983, when Leslie Lamport stressed its importance in his invited talk at the ACM Symposium on Principles of Distributed Computing (PODC), published a year later as [21]. Things have changed since then. According to Google Scholar, Dijkstra's paper has by now been cited more than 2300 times. It became one of the main approaches to fault-tolerant computing. An early survey was published in 1993 as [26], while the research on the subject until 2000 was summarized in the book [13]. In 2002 Dijkstra's paper won the PODC influential paper award (renamed in 2003 to the Dijkstra Prize). The literature on the subject initiated by it continues to grow. There are annual Self-Stabilizing Systems Workshops, the 18th edition of which took place in 2016. The idea proposed by Dijkstra is very simple. Consider a distributed system viewed as a network of machines.
Distributed Algorithms for Wireless Networks
A THEORETICAL VIEW OF DISTRIBUTED SYSTEMS
Nancy Lynch, MIT EECS, CSAIL. National Science Foundation, April 1, 2021.
Theory for Distributed Systems
• We have worked on theory for distributed systems, trying to understand (mathematically) their capabilities and limitations.
• This has included:
• Defining abstract, mathematical models for problems solved by distributed systems, and for the algorithms used to solve them.
• Developing new algorithms.
• Producing rigorous proofs, of correctness, performance, fault-tolerance.
• Proving impossibility results and lower bounds, expressing inherent limitations of distributed systems for solving problems.
• Developing foundations for modeling, analyzing distributed systems.
• Kinds of systems:
• Distributed data-management systems.
• Wired, wireless communication systems.
• Biological systems: Insect colonies, developmental biology, brains.
This talk: 1. Algorithms for Traditional Distributed Systems. 2. Impossibility Results. 3. Foundations. 4. Algorithms for New Distributed Systems.
1. Algorithms for Traditional Distributed Systems
• Mutual exclusion in shared-memory systems, resource allocation: Fischer, Burns, … late 70s and early 80s.
• Dolev, Lynch, Pinter, Stark, Weihl. Reaching approximate agreement in the presence of faults. JACM, 1986.
• Lundelius, Lynch. A new fault-tolerant algorithm for clock synchronization. Information and Computation, 1988.
• Dwork, Lynch, Stockmeyer. Consensus in the presence of partial synchrony. JACM, 1988. Dijkstra Prize, 2007.
1A. Dwork, Lynch, Stockmeyer [DLS]: “This paper introduces a number of practically motivated partial synchrony models that lie between the completely synchronous and the completely asynchronous models and in which consensus is solvable. It gave practitioners the right tool for building fault-tolerant systems and contributed to the understanding that safety can be maintained at all times, despite the impossibility of consensus, and progress is facilitated during periods of stability.”
2020 SIGACT Report. SIGACT EC – Eric Allender, Shuchi Chawla, Nicole Immorlica, Samir Khuller (chair), Bobby Kleinberg. September 14th, 2020
2020 SIGACT REPORT. SIGACT EC – Eric Allender, Shuchi Chawla, Nicole Immorlica, Samir Khuller (chair), Bobby Kleinberg. September 14th, 2020.
SIGACT Mission Statement: The primary mission of ACM SIGACT (Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory) is to foster and promote the discovery and dissemination of high quality research in the domain of theoretical computer science. The field of theoretical computer science is the rigorous study of all computational phenomena - natural, artificial or man-made. This includes the diverse areas of algorithms, data structures, complexity theory, distributed computation, parallel computation, VLSI, machine learning, computational biology, computational geometry, information theory, cryptography, quantum computation, computational number theory and algebra, program semantics and verification, automata theory, and the study of randomness. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.
1. Awards
▪ 2020 Gödel Prize: This was awarded to Robin A. Moser and Gábor Tardos for their paper “A constructive proof of the general Lovász Local Lemma”, Journal of the ACM, Vol 57 (2), 2010. The Lovász Local Lemma (LLL) is a fundamental tool of the probabilistic method. It enables one to show the existence of certain objects even though they occur with exponentially small probability. The original proof was not algorithmic, and subsequent algorithmic versions had significant losses in parameters. This paper provides a simple, powerful algorithmic paradigm that converts almost all known applications of the LLL into randomized algorithms matching the bounds of the existence proof. The paper further gives a derandomized algorithm, a parallel algorithm, and an extension to the “lopsided” LLL.
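The cited algorithmic paradigm is, at its core, a resampling loop. Below is a minimal sketch of that idea applied to k-SAT; the encoding, function name, and the choice to resample the first violated clause are mine, and this simplification glosses over the conditions under which the paper's analysis guarantees fast termination.

```python
import random

def resample_ksat(clauses, num_vars, seed=0):
    """Moser-Tardos-style resampling: sample every variable at random, then,
    while some clause (a 'bad event') is violated, resample only its variables."""
    rng = random.Random(seed)
    # assignment[v] is the truth value of variable v; variables are 1-indexed
    assignment = [rng.random() < 0.5 for _ in range(num_vars + 1)]

    def violated(clause):
        # a clause such as (1, -2, 3) is violated when every literal is false
        return all(assignment[abs(lit)] != (lit > 0) for lit in clause)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assignment[1:]          # satisfying assignment found
        for lit in bad[0]:                 # resample the bad event's variables
            assignment[abs(lit)] = rng.random() < 0.5

# toy usage: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(resample_ksat([(1, 2), (-1, 3), (-2, -3)], num_vars=3))
```

The surprise of the Moser-Tardos result is that, under the LLL conditions, this simple loop terminates quickly in expectation while matching the bounds of the existence proof.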
Mathematical Writing by Donald E. Knuth, Tracy Larrabee, and Paul M. Roberts
Mathematical Writing by Donald E. Knuth, Tracy Larrabee, and Paul M. Roberts. This report is based on a course of the same name given at Stanford University during autumn quarter, 1987. Here's the catalog description: CS 209. Mathematical Writing – Issues of technical writing and the effective presentation of mathematics and computer science. Preparation of theses, papers, books, and “literate” computer programs. A term paper on a topic of your choice; this paper may be used for credit in another course. The first three lectures were a “minicourse” that summarized the basics. About two hundred people attended those three sessions, which were devoted primarily to a discussion of the points in §1 of this report. An exercise (§2) and a suggested solution (§3) were also part of the minicourse. The remaining 28 lectures covered these and other issues in depth. We saw many examples of “before” and “after” from manuscripts in progress. We learned how to avoid excessive subscripts and superscripts. We discussed the documentation of algorithms, computer programs, and user manuals. We considered the process of refereeing and editing. We studied how to make effective diagrams and tables, and how to find appropriate quotations to spice up a text. Some of the material duplicated some of what would be discussed in writing classes offered by the English department, but the vast majority of the lectures were devoted to issues that are specific to mathematics and/or computer science. Guest lectures by Herb Wilf (University of Pennsylvania), Jeff Ullman (Stanford), Leslie Lamport (Digital Equipment Corporation), Nils Nilsson (Stanford), Mary-Claire van Leunen (Digital Equipment Corporation), Rosalie Stemer (San Francisco Chronicle), and Paul Halmos (University of Santa Clara) were a special highlight, as each of these outstanding authors presented their own perspectives on the problems of mathematical communication.
LNCS 4308, Pp
On Distributed Verification
Amos Korman and Shay Kutten, Faculty of IE&M, Technion, Haifa 32000, Israel
Abstract. This paper describes the invited talk given at the 8th International Conference on Distributed Computing and Networking (ICDCN 2006), at the Indian Institute of Technology Guwahati, India. This talk was intended to give a partial survey and to motivate further studies of distributed verification. To serve the purpose of motivating, we allow ourselves to speculate freely on the potential impact of such research. In the context of sequential computing, it is common to assume that the task of verifying a property of an object may be much easier than computing it (consider, for example, solving an NP-Complete problem versus verifying a witness). Extrapolating from the impact the separation of these two notions (computing and verifying) had in the context of sequential computing, the separation may prove to have a profound impact on the field of distributed computing as well. In addition, in the context of distributed computing, the motivation for the separation seems even stronger than in the centralized sequential case. In this paper we explain some motivations for specific definitions, survey some very related notions and their motivations in the literature, survey some examples of problems and solutions, and mention some additional general results such as general algorithmic methods and general lower bounds. Since this paper is mostly intended to “give a taste” rather than be a comprehensive survey, we apologize to authors of additional related papers that we did not mention or detail.
1 Introduction
This paper addresses the problem of locally verifying global properties.
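To give a flavor of the computing-versus-verifying separation in a distributed setting, here is a hedged sketch of the folklore labeling idea for rooted spanning trees (not a construction taken from this paper; the label encoding and names are illustrative): each node stores its claimed parent and its claimed hop distance to the root, and every node checks only its own label against its neighbors' labels.

```python
def locally_verify_spanning_tree(adj, root, parent, dist):
    """Each node runs a purely local check of its label (parent, dist)
    against its neighbors' labels; the tree is accepted iff all checks pass."""
    def check(v):
        if v == root:
            return parent[v] is None and dist[v] == 0
        return (parent[v] in adj[v]                  # claimed parent is a neighbor
                and dist[v] == dist[parent[v]] + 1)  # distance decreases toward root
    return all(check(v) for v in adj)

# toy usage on the path 0 - 1 - 2, rooted at 0
adj = {0: {1}, 1: {0, 2}, 2: {1}}
print(locally_verify_spanning_tree(adj, 0, {0: None, 1: 0, 2: 1}, {0: 0, 1: 1, 2: 2}))  # True
print(locally_verify_spanning_tree(adj, 0, {0: None, 1: 2, 2: 1}, {0: 0, 1: 2, 2: 1}))  # False
```

The distance field is what rules out bogus cycles: along any cycle of parent pointers the claimed distances would have to decrease forever, so some node's local check must fail.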
Verification and Specification of Concurrent Programs
Verification and Specification of Concurrent Programs
Leslie Lamport, Digital Equipment Corporation Systems Research Center
16 November 1993. To appear in the proceedings of a REX Workshop held in The Netherlands in June, 1993.
Abstract. I explore the history of, and lessons learned from, eighteen years of assertional methods for specifying and verifying concurrent programs. I then propose a Utopian future in which mathematics prevails.
Keywords. Assertional methods, fairness, formal methods, mathematics, Owicki-Gries method, temporal logic, TLA.
Table of Contents
1 A Brief and Rather Biased History of State-Based Methods for Verifying Concurrent Systems
1.1 From Floyd to Owicki and Gries, and Beyond
1.2 Temporal Logic
1.3 Unity
2 An Even Briefer and More Biased History of State-Based Specification Methods for Concurrent Systems
2.1 Axiomatic Specifications
2.2 Operational Specifications
2.3 Finite-State Methods
3 What We Have Learned
3.1 Not Sequential vs. Concurrent, but Functional vs. Reactive
3.2 Invariance Under Stuttering
3.3 The Definitions of Safety and Liveness
3.4 Fairness is Machine Closure
3.5 Hiding is Existential Quantification
3.6 Specification Methods that Don’t Work
3.7 Specification Methods that Work for the Wrong Reason
4 Other Methods
5 A Brief Advertisement for My Approach to State-Based Verification and Specification of Concurrent Systems
5.1 The Power of Formal Mathematics
5.2 Specifying Programs with Mathematical Formulas
5.3 TLA
Stabilization
Chapter 13
Stabilization
A large branch of research in distributed computing deals with fault-tolerance. Being able to tolerate a considerable fraction of failing or even maliciously behaving (“Byzantine”) nodes while trying to reach consensus (on e.g. the output of a function) among the nodes that work properly is crucial for building reliable systems. However, consensus protocols require that a majority of the nodes remains non-faulty all the time. Can we design a distributed system that survives transient (short-lived) failures, even if all nodes are temporarily failing? In other words, can we build a distributed system that repairs itself?
13.1 Self-Stabilization
Definition 13.1 (Self-Stabilization). A distributed system is self-stabilizing if, starting from an arbitrary state, it is guaranteed to converge to a legitimate state. If the system is in a legitimate state, it is guaranteed to remain there, provided that no further faults happen. A state is legitimate if the state satisfies the specifications of the distributed system.
Remarks:
• What kind of transient failures can we tolerate? An adversary can crash nodes, or make nodes behave Byzantine. Indeed, temporarily an adversary can do harm in even worse ways, e.g. by corrupting the volatile memory of a node (without the node noticing – not unlike the movie Memento), or by corrupting messages on the fly (without anybody noticing). However, as all failures are transient, eventually all nodes must work correctly again, that is, crashed nodes get resurrected, Byzantine nodes stop being malicious, messages are being delivered reliably, and the memory of the nodes is secure.
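As a concrete instance of Definition 13.1, below is a compact simulation sketch of Dijkstra's classic K-state token ring, the protocol this line of work usually starts from; the harness, names, and the randomized central scheduler are my own, and I assume k at least the number of machines, as in Dijkstra's original formulation.

```python
import random

def dijkstra_k_state_ring(n, k, max_steps=100_000, seed=0):
    """Dijkstra's first self-stabilizing protocol: n machines in a ring,
    each holding a value in {0, ..., k-1}. Machine 0 is privileged when its
    value equals its predecessor's and then increments modulo k; machine i > 0
    is privileged when S[i] != S[i-1] and then copies S[i-1]. A state is
    legitimate when exactly one machine is privileged (one 'token')."""
    rng = random.Random(seed)
    S = [rng.randrange(k) for _ in range(n)]           # arbitrary corrupted state
    for step in range(max_steps):
        privileged = ([0] if S[0] == S[-1] else []) + \
                     [i for i in range(1, n) if S[i] != S[i - 1]]
        if len(privileged) == 1:
            return step                                # legitimate state reached
        i = rng.choice(privileged)                     # scheduler picks one machine
        S[i] = (S[0] + 1) % k if i == 0 else S[i - 1]
    raise RuntimeError("did not stabilize within the step budget")

print(dijkstra_k_state_ring(n=5, k=6))  # steps until a single privilege remains
```

Whatever the corrupted initial values, machine 0 eventually introduces a value that appears nowhere else in the ring, after which the extra privileges die out and a single token circulates; this is exactly the convergence-then-closure behavior the definition asks for.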
The Specification Language TLA+
Leslie Lamport: The Specification Language TLA+
This is an addendum to a chapter by Stephan Merz in the book Logics of Specification Languages by Dines Bjørner and Martin C. Henson (Springer, 2008). It appeared in that book as part of a “reviews” chapter. Stephan Merz describes the TLA logic in great detail and provides about as good a description of TLA+ and how it can be used as is possible in a single chapter. Here, I give a historical account of how I developed TLA and TLA+ that explains some of the design choices, and I briefly discuss how TLA+ is used in practice.
Whence TLA
The logic TLA adds three things to the very simple temporal logic introduced into computer science by Pnueli [4]:
• Invariance under stuttering.
• Temporal existential quantification.
• Taking as atomic formulas not just state predicates but also action formulas.
Here is what prompted these additions. When Pnueli first introduced temporal logic to computer science in the 1970s, it was clear to me that it provided the right logic for expressing the simple liveness properties of concurrent algorithms that were being considered at the time and for formalizing their proofs. In the early 1980s, interest turned from ad hoc properties of systems to complete specifications. The idea of specifying a system as a conjunction of the temporal logic properties it should satisfy seemed quite attractive [5]. However, it soon became obvious that this approach does not work in practice. It is impossible to understand what a conjunction of individual properties actually specifies.
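For readers who have not seen the notation, the three additions correspond to standard TLA constructs; the definitions below are a sketch drawn from general TLA usage, not quoted from this addendum. Temporal existential quantification, the second addition, is written with a doubled ∃ and hides a specification's internal variables.

```latex
% Standard TLA notation (well-known definitions, not quoted from the addendum):
% action formulas are closed under stuttering, which yields specifications
% that are invariant under stuttering. Requires amsmath.
\begin{align*}
  [A]_v &\;\triangleq\; A \lor (v' = v)
      && \text{a step satisfying $A$, or a stuttering step leaving $v$ unchanged} \\
  \langle A \rangle_v &\;\triangleq\; A \land (v' \neq v)
      && \text{a step satisfying $A$ that actually changes $v$} \\
  Spec &\;\triangleq\; Init \land \Box[Next]_v \land L
      && \text{canonical form: initial predicate, next-state action, liveness}
\end{align*}
```

Writing a specification in this single canonical form, rather than as an arbitrary conjunction of temporal properties, is what addresses the difficulty described in the last sentence above.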