The K-Server Problem
PDF, 16 pages, 1020 KB
Recommended publications
Skip B-Trees
Skip B-Trees. Ittai Abraham (The Institute of Computer Science, The Hebrew University of Jerusalem, [email protected]), James Aspnes (Department of Computer Science, Yale University, [email protected]), and Jian Yuan (Google, [email protected]). Abstract. We describe a new data structure, the Skip B-tree, which combines the advantages of skip graphs with features of traditional B-trees. A Skip B-tree provides efficient search, insertion, and deletion operations. The data structure is highly fault tolerant, even to adversarial failures, and allows for particularly simple repair mechanisms. Related resource keys are kept in blocks near each other, enabling efficient range queries. Using this data structure, we describe a new distributed peer-to-peer network, the Distributed Skip B-tree. Given m data items stored in a system with n nodes, the network can perform a range search for r consecutive keys at a cost of only O(log_b m + r/b), where b = Θ(m/n). In addition, our distributed Skip B-tree search network has provable polylogarithmic costs for all its other basic operations, such as insert, delete, and node join. To the best of our knowledge, all previous distributed search networks either provide a range search operation whose cost is worse than ours or may require a linear cost for some basic operation such as insert, delete, or node join. (Supported in part by NSF grants CCR-0098078, CNS-0305258, and CNS-0435201.) 1 Introduction. Peer-to-peer systems provide a decentralized way to share resources among machines. An ideal peer-to-peer network should have such properties as decentralization, scalability, fault tolerance, self-stabilization, load balancing, dynamic addition and deletion of nodes, efficient query search, and exploitation of spatial as well as temporal locality in searches.
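To make the stated range-search bound concrete, here is a rough back-of-the-envelope cost model in Python. It is an illustrative sketch based only on the bound quoted above, not an implementation of the Skip B-tree itself; the function name and the sample parameters are ours.

```python
import math

def skip_btree_range_cost(m, n, r):
    """Rough cost model for a range search in the distributed Skip B-tree:
    about log_b(m) block accesses to locate the first key, plus about r/b
    further blocks to stream the remaining r consecutive keys,
    where b = Theta(m/n) is the block size."""
    b = max(2, m // n)              # block size b = Theta(m/n)
    locate = math.log(m, b)         # descend to the block holding the first key
    scan = r / b                    # consecutive keys occupy ~r/b adjacent blocks
    return locate + scan

# Example: a million keys on a thousand nodes, range of 5000 keys.
# b = 1000, so the search touches roughly log_1000(10^6) + 5 = 7 blocks.
print(skip_btree_range_cost(10**6, 10**3, 5000))
```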
Deterministic and Efficient Minimal Perfect Hashing Schemes
Deterministic and efficient minimal perfect hashing schemes. Leandro M. Zatesko and Jair Donadelli. Abstract: This paper presents deterministic versions of the hashing schemes of Botelho, Kohayakawa and Ziviani (2005) and of Botelho, Pagh and Ziviani (2007), and also proves a statement left as an open problem in the former work, related to the correctness proof and to the complexity analysis of their scheme. Our deterministic variants have been implemented and executed over datasets with up to 25,000,000 keys and have shown performance equivalent to that of the original randomized algorithms. Resumo (Portuguese abstract, translated): In this work we present deterministic versions of the hashing schemes of Botelho, Kohayakawa and Ziviani (2005) and of Botelho, Pagh and Ziviani (2007). We also answer a problem left open in the first of those works, related to the proof of correctness and to the complexity analysis of the scheme they proposed. The deterministic versions developed were implemented and tested on datasets with up to 25,000,000 keys, and the observed results were equivalent to those of the original randomized algorithms. 1 Introduction. A minimal perfect hashing scheme, as defined in [1, 2, 3], is an algorithm that, given a set S with n keys from a universe U, constructs a hash function h: U → {0,...,n−1} that maps S to {0,...,n−1} without collisions. We are interested only in hashing schemes whose outputs are hash functions with O(1) lookup time. For our purposes, every key x is assumed to be a string of at most L symbols taken from a finite alphabet Σ, for a fixed constant L.
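The defining property quoted in the introduction (h maps the n keys of S onto {0,...,n−1} with no collisions) can be checked mechanically. The toy Python snippet below only illustrates that definition; it is not the construction of Botelho et al. or of this paper, and the hand-made table is a made-up example.

```python
def is_minimal_perfect_hash(h, keys):
    """Check the defining property: h maps the n keys of S onto
    {0, ..., n-1} with no collisions (i.e., bijectively)."""
    images = {h(k) for k in keys}
    return len(images) == len(keys) and images == set(range(len(keys)))

# Toy example with a hand-made function for S = {"ant", "bee", "cat"};
# real schemes construct such an h automatically for millions of keys.
table = {"ant": 2, "bee": 0, "cat": 1}
print(is_minimal_perfect_hash(lambda k: table[k], ["ant", "bee", "cat"]))  # True
```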
Curriculum Vitae
Curriculum Vitae. Prof. Michal Feldman, School of Computer Science, Tel-Aviv University. Personal Details • Born: Israel, 1976 • Gender: Female • email: [email protected] Education • Ph.D.: Information Management and Systems, University of California at Berkeley, May 2005. Thesis title: Incentives for cooperation in peer-to-peer systems. • B.Sc.: Bar-Ilan University, Computer Science, Summa Cum Laude, June 1999. Academic Positions • 2013 - present: Associate Professor, School of Computer Science, Tel-Aviv University, Israel. • 2011 - 2013: Associate Professor, School of Business Administration and Center for the Study of Rationality, Hebrew University of Jerusalem, Israel. • 2007 - 2011: Senior Lecturer, School of Business Administration and Center for the Study of Rationality, Hebrew University of Jerusalem, Israel. Additional Positions • 2011 - 2013: Visiting Researcher (weekly visitor), Microsoft Research New England, Cambridge, MA, USA. • 2011 - 2013: Visiting Professor, Harvard School of Engineering and Applied Sciences, Center for Research on Computation and Society, School of Computer Science, Cambridge, MA, USA (Marie Curie IOF program). • 2008 - 2011: Senior Researcher (part-time), Microsoft Research in Herzliya, Israel. • 2007 - 2013: Member, Center for the Study of Rationality, Hebrew University. • 2005 - 2007: Post-Doctoral Fellow (Lady Davis fellowship), Hebrew University, School of Computer Science and Engineering. • 2004: Ph.D. Intern, HP Labs, Palo Alto, California, USA. Grants (Funding ID) • European Research Council (ERC) Starting Grant: "Algorithmic Mechanism Design - Beyond Truthfulness": 1.4 million Euro, 2013-2017. • FP7 Marie Curie International Outgoing Fellowship (IOF): "Innovations in Algorithmic Game Theory" (IAGT): 313,473 Euro, 2011-2014. • Israel Science Foundation (ISF) grant: "Equilibria Under Coalitional Covenants in Non-Cooperative Games - Existence, Quality and Computation": 688,000 NIS (172,000 NIS/year), 2009-2013.
Information Theory Methods in Communication Complexity
INFORMATION THEORY METHODS IN COMMUNICATION COMPLEXITY. By Nikolaos Leonardos. A dissertation submitted to the Graduate School–New Brunswick, Rutgers, The State University of New Jersey, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Graduate Program in Computer Science, written under the direction of Michael Saks and approved by [...]. New Brunswick, New Jersey, January 2012. Abstract of the dissertation: Information theory methods in communication complexity, by Nikolaos Leonardos. Dissertation Director: Michael Saks. This dissertation is concerned with the application of notions and methods from the field of information theory to the field of communication complexity. It consists of two main parts. In the first part of the dissertation, we prove lower bounds on the randomized two-party communication complexity of functions that arise from read-once boolean formulae. A read-once boolean formula is a formula in propositional logic with the property that every variable appears exactly once. Such a formula can be represented by a tree, where the leaves correspond to variables, and the internal nodes are labeled by binary connectives. Under certain assumptions, this representation is unique. Thus, one can define the depth of a formula as the depth of the tree that represents it. The complexity of the evaluation of general read-once formulae has attracted interest mainly in the decision tree model. In the communication complexity model many interesting results deal with specific read-once formulae, such as disjointness and tribes. In this dissertation we use information theory methods to prove lower bounds that hold for any read-once formula.
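To make the tree representation described above concrete, here is a small Python sketch of read-once formulae with AND/OR connectives, their evaluation, and their depth. It is an illustrative toy restricted to two connectives, unrelated to the communication-complexity lower bounds themselves.

```python
from dataclasses import dataclass
from typing import Union

# A read-once formula as a tree: every variable appears in exactly one leaf,
# and internal nodes carry a binary connective (here only AND/OR).

@dataclass
class Leaf:
    var: str

@dataclass
class Node:
    op: str          # "and" or "or"
    left: "Formula"
    right: "Formula"

Formula = Union[Leaf, Node]

def evaluate(f: Formula, assignment: dict) -> bool:
    if isinstance(f, Leaf):
        return assignment[f.var]
    l, r = evaluate(f.left, assignment), evaluate(f.right, assignment)
    return (l and r) if f.op == "and" else (l or r)

def depth(f: Formula) -> int:
    return 0 if isinstance(f, Leaf) else 1 + max(depth(f.left), depth(f.right))

# (x1 AND x2) OR x3 -- each variable occurs exactly once, so it is read-once.
phi = Node("or", Node("and", Leaf("x1"), Leaf("x2")), Leaf("x3"))
print(evaluate(phi, {"x1": True, "x2": False, "x3": True}), depth(phi))  # True 2
```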
Incentives, Computation, and Networks: Limitations and Possibilities of Algorithmic Mechanism Design
Incentives, Computation, and Networks: Limitations and Possibilities of Algorithmic Mechanism Design, by Yaron Singer. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Christos Papadimitriou (Chair), Professor Shachar Kariv, Professor Scott Shenker. Spring 2012. Copyright 2012 by Yaron Singer. Abstract: In the past decade, a theory of manipulation-robust algorithms has been emerging to address the challenges that frequently occur in strategic environments such as the internet. The theory, known as algorithmic mechanism design, builds on the foundations of classical mechanism design from microeconomics and is based on the idea of incentive compatible protocols. Such protocols achieve system-wide objectives through careful design that ensures it is in every agent's best interest to comply with the protocol. As it turns out, however, implementing incentive compatible protocols as advocated in classical mechanism design theory often necessitates solving intractable problems. To address this, algorithmic mechanism design focuses on designing [...]
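As a textbook illustration of an incentive compatible protocol of the kind the abstract refers to (not an example drawn from the dissertation), the sketch below implements a single-item second-price auction, in which reporting one's true value is a dominant strategy.

```python
def second_price_auction(bids):
    """Single-item Vickrey (second-price) auction: the highest bidder wins and
    pays the second-highest bid. Truthful bidding is a dominant strategy,
    which is the incentive-compatibility property the abstract alludes to.
    `bids` maps bidder name -> reported value."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, price

print(second_price_auction({"alice": 10, "bob": 7, "carol": 3}))  # ('alice', 7)
```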
On SZK and PP
Electronic Colloquium on Computational Complexity, Revision 2 of Report No. 140 (2016). On SZK and PP. Adam Bouland (CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA), Lijie Chen (IIIS, Tsinghua University, Beijing, China), Dhiraj Holden (CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA), Justin Thaler (Georgetown University, Washington, DC, USA), and Prashant Nalini Vasudevan (CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA). Abstract: In both query and communication complexity, we give separations between the class NISZK, containing those problems with non-interactive statistical zero knowledge proof systems, and the class UPP, containing those problems with randomized algorithms with unbounded error. These results significantly improve on earlier query separations of Vereschagin [Ver95] and Aaronson [Aar12] and earlier communication complexity separations of Klauck [Kla11] and Razborov and Sherstov [RS10]. In addition, our results imply an oracle relative to which NISZK ⊄ PP. This answers an open question of Watrous from 2002 [Aar]. The technical core of our result is a stronger hardness amplification theorem for approximate degree, which roughly says that composing the gapped-majority function with any function of high approximate degree yields a function with high threshold degree. Using our techniques, we also give oracles relative to which the following two separations hold: perfect zero knowledge (PZK) is not contained in its complement (coPZK), and SZK (indeed, even NISZK) is not contained in PZK (indeed, even HVPZK). Along the way, we show that HVPZK is contained in PP in a relativizing manner. We prove a number of implications of these results, which may be of independent interest outside of structural complexity. Specifically, our oracle separation implies that certain parameters of the Polarization Lemma of Sahai and Vadhan [SV03] cannot be much improved in a black-box manner.
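For background on the hardness-amplification statement above, these are the standard definitions of approximate degree and threshold degree (general context, not quoted from the paper):

```latex
% Background definitions (standard, not specific to this paper).
% For a Boolean function $f\colon\{0,1\}^n \to \{0,1\}$:
\[
  \widetilde{\deg}_{1/3}(f) \;=\; \min\bigl\{\deg p \;:\;
     |p(x)-f(x)| \le \tfrac{1}{3}\ \text{for all } x\in\{0,1\}^n\bigr\}
  \qquad\text{(approximate degree)},
\]
\[
  \deg_{\pm}(f) \;=\; \min\bigl\{\deg p \;:\;
     \operatorname{sgn}\bigl(p(x)\bigr) = (-1)^{f(x)}\ \text{for all } x\bigr\}
  \qquad\text{(threshold degree)}.
\]
% The amplification theorem described above turns high approximate degree of f
% into high threshold degree of the gapped-majority function composed with f.
```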
Grad Cohort 2015
2015 Grad Cohort, www.cra-w.org, @CRAWomen. San Francisco, April 10-11, 2015, Hyatt Regency San Francisco. Dear Grad Cohort Participant, We welcome you to the 2015 CRA-Women Graduate Cohort Workshop! The next two days are filled with sessions where 25 senior computing researchers and professionals will be sharing their strategies and experiences to help increase your graduate school and career success. There are also plenty of opportunities to meet and network with these successful senior women as well as graduate students from other universities. We hope that you will take the utmost advantage of this unique experience by actively participating in discussions, developing peer networks, and building mentoring relationships. The 2015 CRA-Women Graduate Cohort Workshop is made possible through generous contributions by Microsoft Research, Computing Research Association, National Science Foundation, Google, a private foundation, Alfred P. Sloan Foundation, Two Sigma, Intel Corporation, ACM SIGARCH, ACM SIGMICRO, ACM SIGGRAPH, ACM SIGOPS, ACM SIGPLAN, ACM SIGSOFT, Yahoo! Labs, Facebook, IBM, and in some cases department funds from participating universities/institutions. Please join us in thanking them for their kind support. We hope that you take home many nuggets and connections from this workshop to help you in your journey to make an impact in the world through computing. Be ready to be inspired, learn, and meet many interesting technical women. Sincerely, Lori Clarke, Sandhya Dwarkadas, and Ayanna Howard, Co-Chairs.
Satisfiability and Evolution
2014 IEEE Annual Symposium on Foundations of Computer Science. Satisfiability and Evolution. Adi Livnat (Department of Biological Sciences, Virginia Tech, [email protected]), Christos Papadimitriou (Computer Science Division, University of California at Berkeley, [email protected]), Aviad Rubinstein (Computer Science Division, University of California at Berkeley, [email protected]), Gregory Valiant (Computer Science Department, Stanford University, [email protected]), and Andrew Wan (Simons Institute, University of California at Berkeley, [email protected]). Abstract: We show that, if truth assignments on n variables reproduce through recombination so that satisfaction of a particular Boolean function confers a small evolutionary advantage, then a polynomially large population over polynomially many generations (polynomial in n and the inverse of the initial satisfaction probability) will end up almost surely consisting exclusively of satisfying truth assignments. We argue that this theorem sheds light on the problem of the evolution of complex adaptations. Keywords: evolution; algorithms; Boolean functions. I. Introduction. The TCS community has a long history of applying its perspectives and tools to better understand the processes [...] genes are able to efficiently arise in polynomial populations with recombination. In the standard view of evolution, a variant of a particular gene is more likely to spread across a population if it makes its own contribution to the overall fitness, independent of the contributions of variants of other genes. How can complex, multi-gene traits spread in a population? This may seem to be especially problematic for multi-gene traits whose contribution to fitness does not decompose into small additive components associated with each gene variant: traits with the property that even if one gene variant is different from the one that is needed for the right combination, there is no benefit, or even a net negative benefit.
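The setting of the abstract can be made concrete with a toy simulation: truth assignments reproduce by uniform crossover, and satisfying a Boolean function confers a small selective advantage. The Python sketch below is only an illustration of that setting; the parameters and the crossover rule are our simplifications, not the paper's model or its proof.

```python
import random

def evolve_sat(formula, n, pop_size=200, generations=200, advantage=0.1):
    """Toy illustration: truth assignments reproduce by recombination
    (uniform crossover of two parents), and satisfying `formula` confers
    a small selective advantage. Returns the final fraction of satisfying
    assignments in the population."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [1.0 + advantage if formula(x) else 1.0 for x in pop]
        parents = random.choices(pop, weights=weights, k=2 * pop_size)
        pop = [[random.choice(bits) for bits in zip(a, b)]
               for a, b in zip(parents[::2], parents[1::2])]
    return sum(formula(x) for x in pop) / pop_size

# Example formula: majority of the n bits is 1.
maj = lambda x: sum(x) > len(x) / 2
print(evolve_sat(maj, n=11))  # fraction of satisfying assignments, often close to 1
```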
A Decade of Lattice Cryptography
Full text available at: http://dx.doi.org/10.1561/0400000074. A Decade of Lattice Cryptography. Chris Peikert, Computer Science and Engineering, University of Michigan, United States. Boston – Delft. Foundations and Trends® in Theoretical Computer Science. Published, sold and distributed by: now Publishers Inc., PO Box 1024, Hanover, MA 02339, United States. Tel. +1-781-985-4510, www.nowpublishers.com, [email protected]. Outside North America: now Publishers Inc., PO Box 179, 2600 AD Delft, The Netherlands. Tel. +31-6-51115274. The preferred citation for this publication is: C. Peikert. A Decade of Lattice Cryptography. Foundations and Trends® in Theoretical Computer Science, vol. 10, no. 4, pp. 283–424, 2014. This Foundations and Trends® issue was typeset in LaTeX using a class file designed by Neal Parikh. Printed on acid-free paper. ISBN: 978-1-68083-113-9. © 2016 C. Peikert. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, mechanical, photocopying, recording or otherwise, without prior written permission of the publishers. Photocopying. In the USA: This journal is registered at the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923. Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by now Publishers Inc for users registered with the Copyright Clearance Center (CCC). The 'services' for users can be found on the internet at: www.copyright.com. For those organizations that have been granted a photocopy license, a separate system of payment has been arranged.
The CNN Problem and Other K-Server Variants
The CNN Problem and Other k-Server Variants. Elias Koutsoupias (Department of Informatics, University of Athens, Athens, Greece, and Computer Science Department, University of California, Los Angeles, USA) and David Scot Taylor (Department of Computer Science, San José State University, San José, California, USA). Abstract: We study several interesting variants of the k-server problem. In the CNN problem, one server services requests in the Euclidean plane. The difference from the k-server problem is that the server does not have to move to a request; it only has to move to a point that lies on the same horizontal or vertical line as the request. This, for example, models the problem faced by a crew of a certain news network trying to shoot scenes on the streets of Manhattan from a distance: for any event at an intersection, the crew only has to be on a matching street or avenue. The CNN problem contains as special cases two important problems: the bridge problem, also known as the cow-path problem, and the weighted 2-server problem, in which the 2 servers may have different speeds. We show that any deterministic online algorithm has competitive ratio at least 6 + √17. We also show that some successful algorithms for the k-server problem fail to be competitive. In particular, no memoryless randomized algorithm can be competitive. We also consider another variant of the k-server problem, in which servers can move simultaneously, and we wish to minimize the time spent waiting for service. This is equivalent to the regular k-server problem under the L∞ norm for movement costs.
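The movement rule described above (reach any point on the request's row or column) can be made concrete with a naive greedy step. The Python sketch below only illustrates the cost of a single move; it is a simple heuristic for exposition, not a competitive algorithm, and the abstract's results concern far more careful strategies.

```python
def greedy_cnn_step(server, request):
    """One naive greedy step for the CNN problem: to serve a request at
    (rx, ry), the server only needs to reach the vertical line x = rx or the
    horizontal line y = ry, so greedy moves to whichever is closer.
    Returns the new server position and the movement cost."""
    (x, y), (rx, ry) = server, request
    if abs(x - rx) <= abs(y - ry):
        return (rx, y), abs(x - rx)   # align with the request's column
    return (x, ry), abs(y - ry)       # align with the request's row

pos, total = (0, 0), 0
for req in [(3, 10), (3, 2), (-4, 1)]:
    pos, cost = greedy_cnn_step(pos, req)
    total += cost
print(pos, total)  # (3, 1) 4
```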
The Price of Privacy for Low-Rank Factorization
The Price of Privacy for Low-rank Factorization. Jalaj Upadhyay, Johns Hopkins University, Baltimore, MD 21201, USA, [email protected]. Abstract: In this paper, we study what price one has to pay to release a differentially private low-rank factorization of a matrix. We consider various settings that are close to the real-world applications of low-rank factorization: (i) the manner in which matrices are updated (row by row or in an arbitrary manner), (ii) whether matrices are distributed or not, and (iii) how the output is produced (once at the end of all updates, also known as one-shot algorithms, or continually). Even though these settings are well studied without privacy, surprisingly, there are no private algorithms for these settings (except when a matrix is updated row by row). We present the first set of differentially private algorithms for all these settings. When the private matrix is updated in an arbitrary manner, our algorithms promise differential privacy with respect to two stronger privacy guarantees than previously studied, use space and time comparable to the non-private algorithms, and achieve optimal accuracy. To complement our positive results, we also prove that the space required by our algorithms is optimal up to logarithmic factors. When data matrices are distributed over multiple servers, we give a non-interactive differentially private algorithm with communication cost independent of the dimension. In short, we give algorithms that incur optimal cost across all parameters of interest. We also perform experiments to verify that all our algorithms perform well in practice and outperform the previously best known algorithms over a large range of parameters.
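For reference, the non-private primitive at stake is ordinary low-rank factorization. The sketch below computes a rank-k factorization by truncated SVD; this is standard linear algebra, not the paper's private algorithm, and the noise calibration needed for differential privacy is deliberately omitted.

```python
import numpy as np

def low_rank_factorization(A, k):
    """Rank-k factorization A ~ L @ R via truncated SVD. This is the standard
    non-private primitive; the paper's contribution is computing such
    factorizations under differential privacy in streaming/distributed
    settings, which this sketch does not attempt."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    L = U[:, :k] * s[:k]          # m x k
    R = Vt[:k, :]                 # k x n
    return L, R

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40)) @ rng.standard_normal((40, 80))
L, R = low_rank_factorization(A, k=10)
print(np.linalg.norm(A - L @ R) / np.linalg.norm(A))  # relative error of the rank-10 fit
```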
A Revisionist History of Algorithmic Game Theory
A Revisionist History of Algorithmic Game Theory. Moshe Y. Vardi, Rice University. Theoretical Computer Science: Vols. A and B. van Leeuwen, 1990: Handbook of Theoretical Computer Science: • Volume A: algorithms and complexity; • Volume B: formal models and semantics ("logic"). E.W. Dijkstra, EWD Note 611: "On the fact that the Atlantic Ocean has two sides". • North-American TCS (FOCS & STOC): Volume A. • European TCS (ICALP): Volumes A & B. A Key Theme in FOCS/STOC: Algorithmic Game Theory – algorithm design for strategic environments. Birth of AGT: The "Official" Version. NEW YORK, May 16, 2012 – ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) together with the European Association for Theoretical Computer Science (EATCS) will recognize three groups of researchers for their contributions to understanding how selfish behavior by users and service providers impacts the behavior of the Internet and other complex computational systems. The papers were presented by Elias Koutsoupias and Christos Papadimitriou, Tim Roughgarden and Eva Tardos, and Noam Nisan and Amir Ronen. They will receive the 2012 Gödel Prize, sponsored jointly by SIGACT and EATCS for outstanding papers in theoretical computer science, at the International Colloquium on Automata, Languages and Programming (ICALP), July 9–13, in Warwick, UK. Three seminal papers: • Koutsoupias & Papadimitriou, STACS 1999: Worst-Case Equilibria – introduced the "price of anarchy" concept, a measure of the extent to which competition approximates cooperation, quantifying how much utility is lost due to selfish behaviors on the Internet, which operates without a system designer or monitor striving to achieve the "social optimum." • Roughgarden & Tardos, FOCS 2000: How Bad is Selfish Routing? – studied the power and depth of the "price of anarchy" concept as it applies to routing traffic in large-scale communications networks to optimize the performance of a congested network.
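As a standard illustration of the "price of anarchy" mentioned above, Pigou's two-link routing example (a textbook example from the selfish-routing literature, not taken from these slides) gives the worst-case ratio 4/3 for linear latencies; a small Python check:

```python
def pigou_price_of_anarchy():
    """Pigou's example: one unit of traffic, two parallel links with latency
    functions l1(x) = 1 and l2(x) = x. At equilibrium all traffic uses link 2
    (latency 1, never worse than link 1), so equilibrium cost = 1. The optimum
    splits traffic evenly: cost = 0.5*1 + 0.5*0.5 = 0.75. Ratio = 4/3."""
    eq_cost = 1.0
    opt_cost = min((1 - f) * 1 + f * f for f in (i / 1000 for i in range(1001)))
    return eq_cost / opt_cost

print(round(pigou_price_of_anarchy(), 3))  # 1.333
```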