Resource-Competitive Algorithms


Michael A. Bender, Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
Varsha Dani, Department of Computer Science, University of New Mexico, Albuquerque, NM, USA
Jeremy T. Fineman, Department of Computer Science, Georgetown University, Washington, DC, USA
Seth Gilbert, Department of Computer Science, National University of Singapore, Singapore
Mahnush Movahedi, Department of Computer Science, University of New Mexico, Albuquerque, NM, USA
Seth Pettie, Electrical Eng. and Computer Science Dept., University of Michigan, Ann Arbor, MI, USA
Jared Saia, Department of Computer Science, University of New Mexico, Albuquerque, NM, USA
Maxwell Young, Computer Science and Engineering Dept., Mississippi State University, Starkville, MS, USA

This research is supported by NSF grants CNS-1318294 and CCF-1420911.

Abstract

The point of adversarial analysis is to model the worst-case performance of an algorithm. Unfortunately, this analysis may not always reflect performance in practice because the adversarial assumption can be overly pessimistic. In such cases, several techniques have been developed to provide a more refined understanding of how an algorithm performs, e.g., competitive analysis, parameterized analysis, and the theory of approximation algorithms.

Here, we describe an analogous technique called resource competitiveness, tailored for distributed systems. Often there is an operational cost for adversarial behavior arising from bandwidth usage, computational power, energy limitations, etc. Modeling this cost provides some notion of how much disruption the adversary can inflict on the system. In parameterizing by this cost, we can design an algorithm with the following guarantee: if the adversary pays T, then the additional cost of the algorithm is some function of T. Resource competitiveness yields results pertaining to secure, fault-tolerant, and efficient distributed computation. We summarize these results and highlight future challenges where we expect this algorithmic tool to provide new insights.

1 Introduction

In his influential exegesis on The Art of War, the warlord Cao Cao wrote that "he who wishes to fight must first count the cost", noting that preparing for conflict requires a careful accounting of available resources [71]. In this article, we argue that this maxim should guide our approach to distributed computation, which is often analyzed as a struggle between an algorithm and an adversary.

Resource competitiveness is a recent algorithmic technique that accounts for the cost incurred by an adversary for disrupting the system. Here, the notion of cost corresponds to any resource such as bandwidth, computational power, or an onboard energy supply. In parameterizing by this cost, we can design an algorithm with the following guarantee: if the adversary pays T, then the additional cost of the algorithm is some function of T.

In much of the literature on robust distributed computing, the adversary has a known (or upper-bounded) budget that can be used to disrupt a task being executed by correct nodes. As a common example, this budget may be expressed by the number of malicious (bad) nodes controlled by the adversary.
In this context, a number of seminal results have originated from the theory community (for examples, see [57, 46, 29]) and from the community of practitioners (for examples, see [45, 16, 60, 1, 65]). Such results fix the maximum amount of resources at the adversary's disposal, expressed as an upper bound on the number of bad nodes, t, that can be tolerated, and then focus on optimizing metrics such as latency or communication overhead. There are many settings where this treatment makes sense: perhaps the adversary is unconcerned with its own costs, or the distributed computation is provably impossible beyond this maximum budget t. However, there are limitations inherent to this approach:

• Both the good nodes and the adversary may be resource constrained, and ignoring this aspect places algorithm designers at an unnecessary disadvantage. Instead, we should incorporate the resource constraints of the adversary into the design of our algorithms. Such a situation may arise when the adversary incurs a cost to obtain physical resources for launching attacks; for example, there are monetary costs for renting a botnet [30]. In such cases, a notion of relative cost is compelling.

• The budget may be unknown. Moreover, the adversary may never actually utilize any of this budget. Consequently, an algorithm that proactively pays a large overhead to tolerate disruption, disruption which may never occur, is inefficient.

By ignoring these aspects, traditional approaches to fault tolerance consider a one-sided picture. Instead, a more complete approach follows from measuring the performance of an algorithm relative to the amount of disruption in the system.

1.1 A Formal Definition of Resource Competitiveness

We now formally define what it means for a distributed algorithm A to be resource competitive. This was originally proposed in [37], but we refine the definition here. Assume a system with a set G of n good nodes that obey the actions specified by algorithm A. There exists an adversary who embodies a source of disruption in the system. For example, the adversary may represent (1) any number of malicious nodes that collude and deviate arbitrarily from A, or (2) the effects of more benign failures due to software or hardware faults.

Let Cost(α, v) denote the resource expenditure (or cost) to a good node v for executing the actions prescribed by A in an execution α. A resource might be bandwidth, CPU cycles, energy, actual money, or another useful domain-specific measure. Let T(α) be the adversary's total cost; this is typically unknown to the good nodes. It is possible for Cost(α, v) and T(α) to correspond to different resources. For example, good nodes may be concerned with bandwidth while the adversary is concerned with CPU cycles. We now define what it means for A to be resource competitive:

Definition 1. Algorithm A is (ρ, τ)-resource competitive if max_{v ∈ G} Cost(α, v) ≤ ρ(T(α)) + τ for any execution α.

Definition 1 states that A is resource competitive if the maximum cost incurred by any good node is at most some function of the adversary's total cost, ρ(T(α)), plus some additive term τ, where both ρ and τ are functions mapping to non-negative real values. The function ρ is called the robustness function, and it is a function of T and possibly other parameters such as n. Throughout, we will simply refer to T instead of T(α) since α is implicit.
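To make Definition 1 concrete, the following Python sketch checks the (ρ, τ)-resource-competitive condition for a single logged execution. It is illustrative only and not from the article: the per-node cost log, the robustness function ρ, and the value of τ used in the example are all hypothetical.

```python
import math

def is_resource_competitive(node_costs, T, rho, tau):
    """Return True iff max_{v in G} Cost(alpha, v) <= rho(T(alpha)) + tau.

    node_costs -- dict mapping each good node v to Cost(alpha, v) in execution alpha
    T          -- the adversary's total cost T(alpha) in the same execution
    rho        -- the robustness function, applied to T
    tau        -- the efficiency term, already evaluated at n and other parameters
    """
    return max(node_costs.values()) <= rho(T) + tau

# Hypothetical execution: three good nodes, the adversary spends T = 25, and the
# claimed guarantee is rho(T) = sqrt(T) with tau = 10.
costs = {"v1": 12.0, "v2": 9.5, "v3": 14.0}
print(is_resource_competitive(costs, T=25.0, rho=math.sqrt, tau=10.0))  # True
```

In practice such a check would be applied to every execution (or every prefix of an infinite execution), whereas a proof of resource competitiveness must of course argue over all executions at once.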
Why do we need τ? Note that when T = 0 (there is no disruption), the good nodes clearly cannot incur less than zero cost; τ represents the unavoidable cost of attaining a goal when there is no disruption. It is useful to make this separation explicit via the notation τ, which we call the efficiency function; τ can be a function of parameters such as n, but it is not a function of T.

In designing A, we want ρ to be a slow-growing function. The efficiency function τ may not be slow-growing, but it should be small relative to the best-performing algorithm that operates in the absence of disruption. For example, if a distributed computation requires n messages to be exchanged, then τ = O(n) would imply only a constant-factor increase in overhead which, while not a slow-growing function, is asymptotically optimal.

In the example to follow in Section 2, we will observe the following adaptive behavior of A. When T is zero or "small", the efficiency function dominates the cost of A; this is the unavoidable (or upfront) cost of running the algorithm. As T increases, the robustness function becomes the dominating component. Therefore, beyond the upfront cost, the cost of A grows as a function of the amount of disruption.

Functions other than the maximum cost over all nodes may be appropriate depending on the context. For instance, we may consider the average or median cost. Furthermore, if a resource-competitive algorithm is randomized, then we can speak of cost in terms of a high-probability guarantee, or in expectation, with minor changes to the definition. Definition 1 accounts for finite executions. In the case of an infinite execution, an algorithm is resource competitive if it is resource competitive for every prefix of the execution. Finally, we note that resource-competitive results are typically reported using big-O notation of the form O(ρ(T) + τ).

2 Proof of Concept: Jamming-Resistant Communication

We now provide a concrete "proof of concept" by summarizing a result from [43] dealing with wireless sensor networks. The wireless medium is vulnerable to interference caused by malfunctioning, or even malicious, devices; such interference is called jamming. A number of elegant results have addressed communication under jamming attacks (see [76] for examples). However, these prior works constrain the jamming strategy of the adversary in some fashion; for example, the jamming may be random [58], or bounded within a window of time [5, 61, 62, 63, 21], or limited to a subset of the total spectrum [24, 25, 35, 28, 50].

Less Naive (for any round)
  Send Phase: For each of ℓ slots do
    • Alice sends m with probability d/√ℓ
    • Bob listens with probability d/√ℓ and terminates if he receives m
  Nack Phase: For 1 slot do
    • Bob sends a nack message
    • Alice listens with probability 1 and terminates if the slot is clear

Robust Communication (for round i ≥ 2)
  Send Phase: For each of 2^{ci} slots do
    • Alice sends m with probability 2/2^i
    • Bob listens with probability 2/2^{(c−1)i} and terminates if he receives m
  Nack Phase: For each of 2^i slots do
    • Bob sends a nack message
    • Alice listens with probability 4/2^i and terminates if she hears a clear slot

Figure 1: Pseudocode for Less Naive and Robust Communication.
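The round structure in Figure 1 can also be exercised in simulation. The Python sketch below renders one round of Robust Communication under a simple slotted-channel model; the channel semantics (a listener receives m only when Alice sends and the slot is not jammed; a slot is "clear" when nobody sends and it is not jammed), the random jamming strategy, and the cost accounting are assumptions made for illustration and are not taken from [43]. Only the slot counts and probabilities follow the reconstructed pseudocode above.

```python
import random

def run_round(i, c, jam_prob, rng):
    """Simulate round i >= 2 of Robust Communication (illustrative model).

    Returns (bob_done, alice_done, alice_cost, bob_cost), where cost counts the
    slots in which a party sends or listens -- an energy-style measure chosen
    here as an assumption, not the paper's exact accounting.
    """
    alice_cost = bob_cost = 0
    bob_done = False

    # Send phase: 2^(c*i) slots.
    for _ in range(2 ** (c * i)):
        jammed = rng.random() < jam_prob
        alice_sends = rng.random() < 2 / 2 ** i
        bob_listens = (not bob_done) and rng.random() < 2 / 2 ** ((c - 1) * i)
        alice_cost += alice_sends
        bob_cost += bob_listens
        if bob_listens and alice_sends and not jammed:
            bob_done = True          # Bob received m and stops listening

    # Nack phase: 2^i slots.  Bob nacks while he still lacks m; Alice
    # terminates once she listens on a clear (silent, unjammed) slot.
    alice_done = False
    for _ in range(2 ** i):
        jammed = rng.random() < jam_prob
        bob_nacks = not bob_done
        alice_listens = (not alice_done) and rng.random() < 4 / 2 ** i
        bob_cost += bob_nacks
        alice_cost += alice_listens
        if alice_listens and not bob_nacks and not jammed:
            alice_done = True

    return bob_done, alice_done, alice_cost, bob_cost

rng = random.Random(0)
print(run_round(i=3, c=2, jam_prob=0.1, rng=rng))
```

The geometric growth of the send and nack phases across rounds is what drives the resource-competitive guarantee: a light jamming attack is absorbed by the cheap early rounds, while a heavier attack forces later, longer rounds whose cost to the good nodes stays bounded by a function of the adversary's own expenditure.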