Supplemental Material for CORE Ranking Application for the International Symposium on Distributed Computing (DISC)


1 General (Part F.2 of General CORE Form)

DISC is a conference in Theoretical Distributed Computing. Theoretical Distributed Computing is a research area in its own right, intersecting with both general CS theory and distributed systems but a subarea of neither: many research topics covered by Theoretical Distributed Computing conferences would be covered neither by STOC/FOCS/SODA and other general CS theory conferences nor by networking/distributed-systems conferences, yet they are highly relevant theoretical areas with direct application to current distributed systems (e.g., the theoretical underpinnings of blockchain, consensus, and low-level concurrency). To further see that theoretical distributed computing is not simply a subset of CS theory, note that the acceptance rates of DISC are in fact lower than those of the top general CS theory conferences: SODA has an average acceptance rate of 30%, FOCS of 27.6%, and STOC of 26.3%, according to lamsade.dauphine.fr/sikora/ratio/confs.php; all three conferences are rated A* by CORE.

DISC and PODC (an A*-ranked conference at CORE) are the top two conferences in theoretical distributed computing. The theoretical distributed computing community opted to have two smaller, single-session flagship conferences spread throughout the year, supported from the US and from Europe through ACM and EATCS respectively, rather than a single multiple-session flagship conference.
DISC and PODC have very similar numbers of submissions and acceptance rates (see Section 4), and the same set of outstandingly qualified "top people" in theoretical distributed computing publish in both conferences regularly. In addition to the information in Sections B, C and D of the CORE application, the attachment produced with the WPP tool shows the similarity of the publication patterns in DISC and PODC for a broader set of 36 top scholars who publish in theoretical distributed computing, ranging from younger to more established researchers. Moreover, the top award for Theoretical Distributed Computing, the Dijkstra Prize, and the Distributed Computing Dissertation Award, which recognizes the top PhD thesis in theoretical distributed computing in a given year, are both jointly awarded by DISC and PODC and are presented at the two conferences in alternate years.

DISC brings together a large number of colocated workshops that span many areas connected to theoretical distributed computing. In 2019, as many as 8 workshops were colocated with DISC, on diverse topics including biological computation, hardware design, formal methods, and programming languages, all studied from the perspective of distributed systems. In 2020, even though the conference was held online, there were still 4 colocated workshops, all very popular, with 100+ registered participants; a highlight was the CELLS workshop, which also attracted a large number of participants from outside the traditional computer science community. Hence DISC (as well as PODC) serves as a hub connecting many different research communities with ties to theoretical distributed computing.

Lastly, we would like to note that DISC is currently listed at CORE only under Field of Research 4606 – Distributed Computing and Systems Software, which spans all of the more applied areas in distributed computing, networking and systems.
As mentioned above DISC is a theoretical CS conference (in distributed computing) and hence we would like to ask you that it be also listed under Field of Research 4613 – Theory of Computation, reflecting the correct subarea that DISC represents, Theoretical Distributed Computing. In the following, you will find additional supporting supporting information to DISC’s application that is not directly included in the general CORE application, such as a graph with DISC and PODC’s acceptance rates for the last several years, a list of researchers who regularly publish at DISC, with their affiliations, and a report by the DISC 2018 PC chair that document and illustrates the rigorous review process that papers submitted to DISC go through. 1 2 Qualifications of Regular DISC Participants Regular participants and contributors to PODC include distinguished researchers at all career levels with high h-index. The citation numbers are from Google Scholar, unless otherwise noted. Generally, citation numbers in theory are smaller than those in systems, as the community is smaller. Here is a sampling1: Name Affiliation Selected Honors h-index Number of DISC Count of most cited papers DISC paper Marcos Aguilera VMWare 42 12 184 James Aspnes Yale Dijkstra Prize 39 11 199 Hagit Attiya Techion ACM Fellow, 44 19 103 Dijkstra Prize Tushar Deepak Uber Dijkstra Prize 34 1 232 Chandra Dave Dice Oracle 42 1 1196 Danny Dolev Hebrew University ACM Fellow, 66 6 75 Dijkstra Award Shlomi Dolev Ben-Gurion 49 13 143 University Faith Ellen University of ACM Fellow 13 Toronto Pierre Fraigniaud CNRS 50 17 143 Seth Gilbert National U. 
of 32 18 142 Singapore Rachid Guerraoui EPFL ACM Fellow 75 33 205 Vassos Hadzilacos University of Dijkstra Prize 26 7 102 Toronto Bernhard Haeupler CMU Sloan Research 27 10 100 Fellow Joe Halpern Cornell ACM Fellow, IEEE 89 3 Fellow, NAE2, Godel¨ Prize, ACM AAAI Allen Newell Award Maurice Herlihy Brown University ACM Fellow, NAE, 70 21 622 Dijkstra Prize twice, Godel¨ Prize Idit Keidar Technion 42 16 113 Valerie King University of ACM Fellow 31 5 Victoria Fabian Kuhn University of 41 22 143 Freiburg Leslie Lamport Microsoft Research Turing Award, 80 7 116 NAE, NAS3, ACM Fellow, IEEE John von Neumann Medal, IEEE Emanuel R. Piore Award, Dijkstra Prize (three times) Nancy Lynch MIT ACM Fellow, NAE, 75 25 240 NAS, Knuth Prize, Dijkstra Prize Dahlia Malkhi Diem Association ACM Fellow 55 17 128 Yoram Moses Technion Dijkstra Prize, 33 11 Godel¨ Prize Gopal Pandurangan University of 35 10 114 Houston 1We will leave a field blank when the respective information was not available. 2US National Academy of Engineering 3US National Academy of Science 2 Name Affiliation Selected Honors h-index Number of DISC Count of most cited papers DISC paper Andrzej Pelc University of 50 9 Quebec in Outaouais David Peleg Weizmann Institute ACM Fellow, 75 15 of Science Dijkstra Prize Michel Raynal INRIA Academia Europea 61 22 225 Christian Scheideler U. 
of Paderborn 40 8 74 Stefan Schmid University of 42 5 74 Vienna Michael Scott University of ACM Fellow, IEEE 59 11 301 Rochester Fellow, Dijkstra Prize Nir Shavit MIT ACM Fellow, 13 Dijkstra Prize, Godel¨ Prize Gadi Taubenfeld Interdisciplinary 22 13 97 Center Sam Toueg University of Dijkstra Prize 45 10 184 Toronto Nitin Vaidya Georgetown IEEE Fellow 72 3 University Roger Wattenhofer ETH Zurich¨ 86 10 150 Jennifer Welch Texas A& M 34 10 152 Moti Yung Google ACM Fellow, IEEE 108 4 216 Fellow, EATCS Fellow, IACR Fellow 3 Quality of Reviewing of DISC Submissions We present detailed sample materials from the review process for the 2018 DISC conference, whose PC chair was Ulrich Schmid, Technical University of Vienna, Austria. The makeup of the DISC steering committee includes the PC chairs from the previous three years, so there is good continuity of processes and expectations. 3 Details of the DISC’18 Reviewing Process Since DISC 2018 was expected to get a similar number of submissions as DISC 2017, a large PC consisting of 39 distinguished members of the community was formed in an attempt to sufficiently cover all the 17 topics specifically addressed in the call for papers. In addition, stimulated by concerns with the reviewing process used at DISC and PODC in the past1, a number of quality-enhancing measures were foreseen for DISC 2018. 
Besides enforcing the requirement for self-contained submissions (15 pages in LIPIcs format, excluding references) by disallowing appendices but encouraging full versions on publicly accessible archives like arXiv or HAL, which facilitates a fair comparison of submissions given the tight reviewing time constraints, the following measures were implemented:

(i) To facilitate effective paper bidding, EasyChair's ability to match the selected topics of the submissions with the selected topics of expertise of the PC members was used to generate an initial bidding proposal for every PC member, which could be modified during the actual bidding phase. The result of the bidding phase allowed EasyChair to find an optimal paper assignment (3 reviewers per submission) in a single assignment run, in negligible time.

(ii) In order not to rule out the most competent reviewers for a submission through an overly restrictive conflict-of-interest policy, a distinction was made between prohibitive CoIs (such as supervisor or personal relations, to be declared during bidding as usual), which forbid any access to the reviewing process, and milder forms of CoI (such as occasional co-authorship, to be declared in the "comments to the PC" section of the reviews).

(iii) A reviewing process with two intermediate reviews before the final review was enforced. The first intermediate review asked only for the reviewers' actual expertise for the assigned papers [1 week after paper assignment]; the second asked for an estimate of the overall merit figure and, optionally, major strengths and weaknesses [3 weeks after paper assignment]. The intermediate reviews were used to assign additional PC members/reviewers to submissions that either did not have at least 2 reviewers with expertise 3 ("knowledgeable") or 4 ("expert"), or suffered from controversial merit estimates (a difference of 3 or more between estimates from knowledgeable reviewers). In the end, 50 submissions ended up with 4 reviewers, and 3 submissions with 5 reviewers.

(iv) The full reviews were due 6 weeks after paper assignment, which allowed 3 weeks of discussion before the PC meeting.
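The escalation rule in measure (iii) can be made precise with a small sketch. This is purely illustrative and not the actual DISC'18 tooling; it assumes each intermediate review carries a numeric expertise score (3 = "knowledgeable", 4 = "expert") and an estimated overall-merit figure.

```python
def needs_additional_reviewer(reviews):
    """Decide whether a submission should receive an extra PC member/reviewer.

    reviews: list of (expertise, merit_estimate) pairs for one submission.
    Escalate when fewer than two reviews come from reviewers with expertise
    3 ("knowledgeable") or 4 ("expert"), or when the merit estimates of those
    knowledgeable reviewers differ by 3 or more (controversial scores).
    """
    knowledgeable_merits = [merit for expertise, merit in reviews if expertise >= 3]
    if len(knowledgeable_merits) < 2:
        return True  # not enough competent coverage
    return max(knowledgeable_merits) - min(knowledgeable_merits) >= 3

# Two knowledgeable reviewers agreeing closely: no escalation.
print(needs_additional_reviewer([(4, 5), (3, 4), (2, 1)]))  # False
# Knowledgeable reviewers disagreeing by 4 points: escalate.
print(needs_additional_reviewer([(4, 6), (3, 2), (3, 5)]))  # True
```

Applied over all submissions, a rule of this shape yields exactly the kind of outcome reported above: most submissions keep their initial 3 reviewers, while a minority are escalated to 4 or 5.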