40th ACM Symposium on Theory of Computing (STOC 2008)
16 pages, PDF, 1020 KB
Recommended publications
Reproducibility and Pseudo-Determinism in Log-Space
Reproducibility and Pseudo-determinism in Log-Space. Ofer Grossman (S.B., Massachusetts Institute of Technology, 2017). Submitted to the Department of Electrical Engineering and Computer Science on May 15, 2020, in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering and Computer Science, Massachusetts Institute of Technology, May 2020. Thesis supervisor: Shafi Goldwasser, RSA Professor of Electrical Engineering and Computer Science.
Abstract: A curious property of randomized log-space search algorithms is that their outputs are often longer than their workspace. This leads to the question: how can we reproduce the results of a randomized log-space computation without storing the output or randomness verbatim? Running the algorithm again with new […]
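The notion behind this abstract, a randomized algorithm whose output is nevertheless reproducible, can be illustrated with a toy example. The sketch below is not from the thesis; it only contrasts a plain randomized search (whose answer depends on the coin flips) with a pseudo-deterministic one (which uses coins internally but returns a canonical answer with high probability), using a hypothetical factor-finding problem.

```python
import random

# Toy illustration of pseudo-determinism (not from the thesis): a search
# problem may have many valid answers. A plain randomized search returns a
# different one depending on its coins, while a pseudo-deterministic
# algorithm uses randomness internally but outputs one canonical answer
# with high probability. The "search problem" here is: find a nontrivial
# factor of n.

def randomized_search(n, seed=None):
    """Return some nontrivial factor of n; the answer depends on the randomness."""
    rng = random.Random(seed)
    while True:
        d = rng.randrange(2, n)
        if n % d == 0:
            return d          # any valid factor is an acceptable output

def pseudo_deterministic_search(n, seed=None):
    """Use randomness to find factors, but output a canonical one (the smallest)."""
    rng = random.Random(seed)
    found = set()
    for _ in range(10 * n):   # enough random trials to hit every factor w.h.p.
        d = rng.randrange(2, n)
        if n % d == 0:
            found.add(d)
    return min(found)         # canonical choice: independent of the coins w.h.p.

if __name__ == "__main__":
    n = 91  # = 7 * 13, so both 7 and 13 are valid answers
    print([randomized_search(n, seed=s) for s in range(5)])           # varies: 7 or 13
    print([pseudo_deterministic_search(n, seed=s) for s in range(5)]) # 7, 7, 7, 7, 7
```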
Fault-Tolerant Distributed Computing in Full-Information Networks
Fault-Tolerant Distributed Computing in Full-Information Networks. Shafi Goldwasser (CSAIL, MIT), Elan Pavlov (MIT), and Vinod Vaikuntanathan (CSAIL, MIT). December 15, 2006.
Abstract: In this paper, we use random-selection protocols in the full-information model to solve classical problems in distributed computing. Our main results are the following:
• An O(log n)-round randomized Byzantine Agreement (BA) protocol in a synchronous full-information network tolerating t < n/(3+ε) faulty players (for any constant ε > 0). As such, our protocol is asymptotically optimal in terms of fault-tolerance.
• An O(1)-round randomized BA protocol in a synchronous full-information network tolerating t = O(n/(log n)^1.58) faulty players.
• A compiler that converts any randomized protocol Π_in designed to tolerate t fail-stop faults, where the source of randomness of Π_in is an SV-source, into a protocol Π_out that tolerates min(t, n/3) Byzantine faults. If the round-complexity of Π_in is r, that of Π_out is O(r log n).
Central to our results is the development of a new tool, "audited protocols". Informally, "auditing" is a transformation that converts any protocol that assumes built-in broadcast channels into one that achieves a slightly weaker guarantee, without assuming broadcast channels. We regard this as a tool of independent interest, which could potentially find applications in the design of simple and modular randomized distributed algorithms. (Supported by NSF grants CNS-0430450 and CCF-0514167.)
1 Introduction. The problem of how n players, some of whom may be faulty, can make a common random selection in a set has received much attention.
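To make the two fault-tolerance bounds above concrete, the short calculation below (an illustration, not from the paper) evaluates them for a hypothetical network of n = 10,000 players with ε = 0.01 in the first bound; the hidden constant in the second bound is unknown here and is taken as 1 purely for the demo.

```python
import math

# Illustration only: plug a hypothetical network size into the two
# fault-tolerance bounds quoted in the abstract above.
n = 10_000
eps = 0.01

# Bound 1: t < n / (3 + eps)  -- the O(log n)-round protocol
t1 = n / (3 + eps)
# Bound 2: t = O(n / (log n)^1.58)  -- the O(1)-round protocol
# (the hidden constant is not stated here, so take it as 1 for illustration)
t2 = n / (math.log(n) ** 1.58)

print(f"n = {n}: bound 1 allows t < {t1:.0f} faulty players")
print(f"n = {n}: bound 2 scales like {t2:.0f} faulty players (up to a constant)")
```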
UC San Diego Electronic Theses and Dissertations
UC San Diego Electronic Theses and Dissertations. Title: Scalable Traffic Management for Data Centers and Logging Devices. Author: Vinh The Lam. Publication date: 2013. Permalink: https://escholarship.org/uc/item/2hp6b5sm. Peer reviewed thesis/dissertation (eScholarship.org, California Digital Library, University of California).
A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science, University of California, San Diego, 2013. Committee in charge: Professor George Varghese (Chair), Professor Tara Javidi, Professor Bill Lin, Professor Amin Vahdat, Professor Geoffrey Voelker.
Concurrent Non-Malleable Zero Knowledge Proofs
Concurrent Non-Malleable Zero Knowledge Proofs. Huijia Lin, Rafael Pass, Wei-Lung Dustin Tseng, and Muthuramakrishnan Venkitasubramaniam. Cornell University, {huijia,rafael,wdtseng,vmuthu}@cs.cornell.edu.
Abstract: Concurrent non-malleable zero-knowledge (NMZK) considers the concurrent execution of zero-knowledge protocols in a setting where the attacker can simultaneously corrupt multiple provers and verifiers. Barak, Prabhakaran and Sahai (FOCS'06) recently provided the first construction of a concurrent NMZK protocol without any set-up assumptions. Their protocol, however, is only computationally sound (a.k.a., a concurrent NMZK argument). In this work we present the first construction of a concurrent NMZK proof without any set-up assumptions. Our protocol requires poly(n) rounds assuming one-way functions, or Õ(log n) rounds assuming collision-resistant hash functions. As an additional contribution, we improve the round complexity of concurrent NMZK arguments based on one-way functions (from poly(n) to Õ(log n)), and achieve a near-linear (instead of cubic) security reduction. Taken together, our results close the gap between concurrent ZK protocols and concurrent NMZK protocols (in terms of feasibility, round complexity, hardness assumptions, and tightness of the security reduction).
1 Introduction. Zero-knowledge (ZK) interactive proofs [GMR89] are fundamental constructs that allow the Prover to convince the Verifier of the validity of a mathematical statement x ∈ L, while providing zero additional knowledge to the Verifier. Concurrent ZK, first introduced and achieved by Dwork, Naor and Sahai [DNS04], considers the execution of zero-knowledge protocols in an asynchronous and concurrent setting.
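For background, the zero-knowledge property referred to above is usually formalized via a simulator. The statement below is standard textbook material offered as context; it is not the paper's concurrent NMZK definition.

```latex
% Standard (computational) zero-knowledge, stated as background;
% this is textbook material, not the paper's concurrent NMZK definition.
An interactive proof $(P,V)$ for a language $L$ is \emph{zero-knowledge} if for every
probabilistic polynomial-time verifier $V^*$ there exists a probabilistic
polynomial-time simulator $S$ such that for every $x \in L$,
\[
  \mathrm{view}_{V^*}\big(P(x) \leftrightarrow V^*(x)\big)
  \;\approx_c\;
  S(x),
\]
i.e., the verifier's view of the real interaction is computationally
indistinguishable from the simulator's output, so the interaction yields
nothing beyond the truth of the statement $x \in L$.
```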
FOCS 2005 Program
FOCS 2005 Program. The 46th Annual IEEE Symposium on Foundations of Computer Science, October 22-25, 2005, Omni William Penn Hotel, Pittsburgh, PA. Sponsored by the IEEE Computer Society Technical Committee on Mathematical Foundations of Computing, in cooperation with ACM SIGACT. FOCS '05 gratefully acknowledges financial support from Microsoft Research, Yahoo! Research, and the CMU Aladdin center.
SATURDAY October 22, 2005. Tutorials held at CMU University Center; reception at Omni William Penn Hotel, Monongahela Room, 17th floor.
Tutorial 1: 1:30pm - 3:30pm (McConomy Auditorium). Chair: Irit Dinur. Subhash Khot, On the Unique Games Conjecture. Break 3:30pm - 4:00pm.
SUNDAY October 23, 2005. Talks in Grand Ballroom, 17th floor.
Session 1: 8:50am - 10:10am. Chair: Éva Tardos.
8:50 Agnostically Learning Halfspaces. Adam Kalai, Adam Klivans, Yishay Mansour and Rocco Servedio.
9:10 Noise stability of functions with low influences: invariance and optimality. Elchanan Mossel, Ryan O'Donnell and Krzysztof Oleszkiewicz.
9:30 Every decision tree has an influential variable. Ryan O'Donnell, Michael Saks, Oded Schramm and Rocco Servedio.
9:50 Lower Bounds for the Noisy Broadcast Problem. Navin Goyal, Guy Kindler and Michael Saks.
Break 10:10am - 10:30am.
Session 2: 10:30am - 12:10pm. Chair: Satish Rao.
10:30 The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into ℓ1 [Best paper award]. Subhash Khot and Nisheeth Vishnoi.
10:50 The Closest Substring problem with small distances. Daniel Marx.
11:10 Fitting tree metrics: Hierarchical clustering and Phylogeny. Nir Ailon and Moses Charikar.
11:30 Metric Embeddings with Relaxed Guarantees. Ittai Abraham, Yair Bartal, T-H. […]
Magic Adversaries Versus Individual Reduction: Science Wins Either Way
Magic Adversaries Versus Individual Reduction: Science Wins Either Way. Yi Deng, SKLOIS, Institute of Information Engineering, CAS, Beijing, P.R. China, and State Key Laboratory of Cryptology, P.O. Box 5159, Beijing, 100878, China. [email protected].
Abstract: We prove that, assuming there exists an injective one-way function f, at least one of the following statements is true:
– (Infinitely-often) Non-uniform public-key encryption and key agreement exist;
– The Feige-Shamir protocol instantiated with f is distributional concurrent zero knowledge for a large class of distributions over any OR NP-relations with small distinguishability gap.
The questions of whether we can achieve these goals are known to be subject to black-box limitations. Our win-win result also establishes an unexpected connection between the complexity of public-key encryption and the round-complexity of concurrent zero knowledge. As the main technical contribution, we introduce a dissection procedure for concurrent adversaries, which enables us to transform a magic concurrent adversary that breaks the distributional concurrent zero knowledge of the Feige-Shamir protocol into non-black-box constructions of (infinitely-often) public-key encryption and key agreement. This dissection of complex algorithms gives insight into the fundamental gap between the known universal security reductions/simulations, in which a single reduction algorithm or simulator works for all adversaries, and the natural security definitions (that are sufficient for almost all cryptographic primitives/protocols), which switch the order of quantifiers and only require that for every adversary there exists an individual reduction or simulator.
1 Introduction. The seminal work of Impagliazzo and Rudich [IR89] provides a methodology for studying the limitations of black-box reductions.
Communications of the ACM (cacm.acm.org), 06/2009, Vol. 52, No. 06
Communications of the ACM, June 2009, Vol. 52, No. 06 (cacm.acm.org). Association for Computing Machinery. In this issue: One Laptop Per Child: Vision vs. Reality; Hard-Disk Drives: The Good, The Bad, and the Ugly; How CS Serves The Developing World; Network Front-End Processors; The Claremont Report On Database Research; Autonomous Helicopters.
A Decade of Lattice Cryptography
A Decade of Lattice Cryptography. Chris Peikert, Computer Science and Engineering, University of Michigan, United States. Foundations and Trends in Theoretical Computer Science, vol. 10, no. 4, pp. 283-424, 2014. Published by now Publishers Inc., Boston/Delft. ISBN: 978-1-68083-113-9. Full text available at: http://dx.doi.org/10.1561/0400000074.
Typical Stability
Typical Stability. Raef Bassily and Yoav Freund.
Abstract: In this paper, we introduce a notion of algorithmic stability called typical stability. When our goal is to release real-valued queries (statistics) computed over a dataset, this notion does not require the queries to be of bounded sensitivity – a condition that is generally assumed under differential privacy [DMNS06, Dwo06] when used as a notion of algorithmic stability [DFH+15b, DFH+15c, BNS+16] – nor does it require the samples in the dataset to be independent – a condition that is usually assumed when generalization-error guarantees are sought. Instead, typical stability requires the output of the query, when computed on a dataset drawn from the underlying distribution, to be concentrated around its expected value with respect to that distribution. Typical stability can also be motivated as an alternative definition for database privacy. Like differential privacy, this notion enjoys several important properties including robustness to post-processing and adaptive composition. However, privacy is guaranteed only for a given family of distributions over the dataset. We also discuss the implications of typical stability on the generalization error (i.e., the difference between the value of the query computed on the dataset and the expected value of the query with respect to the true data distribution). We show that typical stability can control generalization error in adaptive data analysis even when the samples in the dataset are not necessarily independent and when the queries to be computed are not necessarily of bounded sensitivity, as long as the results of the queries over the dataset (i.e., the computed statistics) follow a distribution with a "light" tail.
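The central requirement, that a query's value on a sampled dataset concentrates around its expectation over the data distribution, can be illustrated empirically. The sketch below is an illustration only: the distribution, query, tolerance, and trial count are hypothetical choices for the demo, and this is not the paper's formal definition.

```python
import random

# Illustration only: empirically check whether a query q (here, the sample
# mean) is concentrated around its expectation over the data distribution.
# The distribution (Uniform(0,1)), tolerance eta, and trial count are
# assumptions made for the demo, not parameters from the paper.

def sample_dataset(n, rng):
    """Draw n i.i.d. samples from a fixed distribution (uniform on [0, 1])."""
    return [rng.random() for _ in range(n)]

def query(dataset):
    """The statistic being released: the sample mean."""
    return sum(dataset) / len(dataset)

def concentration_probability(n, eta, trials=5000, seed=0):
    """Estimate Pr[ |q(X) - E[q]| <= eta ] for a dataset X of n samples."""
    rng = random.Random(seed)
    expected_value = 0.5          # expectation of the mean of Uniform(0,1) samples
    hits = sum(
        abs(query(sample_dataset(n, rng)) - expected_value) <= eta
        for _ in range(trials)
    )
    return hits / trials

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, concentration_probability(n, eta=0.05))
    # Larger datasets concentrate more tightly, so the estimated probability
    # rises toward 1: the statistic has a "light" tail around its expectation.
```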
An Efficient Boosting Algorithm for Combining Preferences
An Efficient Boosting Algorithm for Combining Preferences. Raj Dharmarajan Iyer Jr. Submitted to the Department of Electrical Engineering and Computer Science on August 24, 1999, in partial fulfillment of the requirements for the degree of Master of Science, Massachusetts Institute of Technology, September 1999. Thesis supervisor: David R. Karger, Associate Professor. Accepted by Arthur C. Smith, Chairman, Departmental Committee on Graduate Students.
Abstract: The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations.
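The boosting idea mentioned in the abstract, maintaining weights over preference pairs and combining weak rankers into a single scoring function, can be sketched as follows. This is a simplified illustration of a RankBoost-style loop under assumed conventions (a pair (a, b) means b should rank above a; weak rankers are drawn from a hypothetical pool of 0/1 feature-threshold functions), not the thesis's exact algorithm or analysis.

```python
import math

# Simplified RankBoost-style sketch (illustration only): items are feature
# vectors, preference pairs (a, b) mean "b should be ranked above a", and
# weak rankers are 0/1 threshold functions on a single feature.

def weak_rankers(items):
    """Candidate weak rankers: h(x) = 1 if x[f] > theta else 0."""
    n_features = len(next(iter(items.values())))
    for f in range(n_features):
        for theta in sorted({x[f] for x in items.values()}):
            yield (f, theta)

def h_value(x, ranker):
    f, theta = ranker
    return 1.0 if x[f] > theta else 0.0

def rankboost(items, pairs, rounds=10):
    """items: id -> feature vector; pairs: list of (a, b) with b preferred over a."""
    D = {p: 1.0 / len(pairs) for p in pairs}          # weights over preference pairs
    ensemble = []                                     # list of (alpha, ranker)
    for _ in range(rounds):
        # Pick the weak ranker with the largest weighted ranking correlation r.
        best_r, best = 0.0, None
        for ranker in weak_rankers(items):
            r = sum(D[(a, b)] * (h_value(items[b], ranker) - h_value(items[a], ranker))
                    for (a, b) in pairs)
            if abs(r) > abs(best_r):
                best_r, best = r, ranker
        if best is None or best_r == 0.0 or abs(best_r) >= 1.0:
            break
        alpha = 0.5 * math.log((1.0 + best_r) / (1.0 - best_r))
        ensemble.append((alpha, best))
        # Reweight: pairs the chosen ranker already orders correctly get lighter.
        for (a, b) in pairs:
            D[(a, b)] *= math.exp(alpha * (h_value(items[a], best) - h_value(items[b], best)))
        Z = sum(D.values())
        for p in D:
            D[p] /= Z
    def score(x):
        return sum(alpha * h_value(x, ranker) for alpha, ranker in ensemble)
    return score

if __name__ == "__main__":
    items = {"a": (0.1, 0.9), "b": (0.4, 0.2), "c": (0.8, 0.5)}
    pairs = [("a", "b"), ("b", "c"), ("a", "c")]       # c should rank highest, then b, then a
    score = rankboost(items, pairs, rounds=5)
    print(sorted(items, key=lambda i: score(items[i]), reverse=True))  # expect ['c', 'b', 'a']
```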
Communication Complexity (For Algorithm Designers)
Communication Complexity (for Algorithm Designers). Tim Roughgarden, Stanford University, USA ([email protected]). Foundations and Trends in Theoretical Computer Science, vol. 11, nos. 3-4, pp. 217-404, 2015. Published by now Publishers Inc., Boston/Delft. ISBN: 978-1-68083-115-3. Full text available at: http://dx.doi.org/10.1561/0400000076.
Multi-Input Functional Encryption with Unbounded-Message Security
Multi-Input Functional Encryption with Unbounded-Message Security. Vipul Goyal (Microsoft Research, India), Aayush Jain (UCLA, USA; work done while at Microsoft Research, India), and Adam O'Neill (Georgetown University, USA).
Abstract: Multi-input functional encryption (MIFE) was introduced by Goldwasser et al. (EUROCRYPT 2014) as a compelling extension of functional encryption. In MIFE, a receiver is able to compute a joint function of multiple, independently encrypted plaintexts. Goldwasser et al. (EUROCRYPT 2014) show various applications of MIFE to running SQL queries over encrypted databases, computing over encrypted data streams, etc. The previous constructions of MIFE due to Goldwasser et al. (EUROCRYPT 2014) based on indistinguishability obfuscation had a major shortcoming: they could only support encrypting an a priori bounded number of messages. Once that bound is exceeded, security is no longer guaranteed to hold. In addition, they could only support selective security, meaning that the challenge messages and the set of "corrupted" encryption keys had to be declared by the adversary up-front. In this work, we show how to remove these restrictions by relying instead on sub-exponentially secure indistinguishability obfuscation. This is done by carefully adapting an alternative MIFE scheme of Goldwasser et al. that previously overcame these shortcomings (except for selective security w.r.t. the set of "corrupted" encryption keys) by relying instead on differing-inputs obfuscation, which is now seen as an implausible assumption. Our techniques are rather generic, and we hope they are useful in converting other constructions using differing-inputs obfuscation to ones using sub-exponentially secure indistinguishability obfuscation instead.
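The functionality described above, n independently encrypted inputs, a key for a function f, and decryption that is meant to reveal only f(x_1, ..., x_n), follows the usual MIFE syntax of Setup, Enc, KeyGen, and Dec. The toy sketch below only illustrates that interface and its intended correctness: the "encryption" is a trivial placeholder with no security whatsoever, and it is not the paper's construction.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Toy MIFE interface sketch: Setup issues per-slot encryption keys, KeyGen
# issues a key for a function f, and Dec(sk_f, ct_1, ..., ct_n) is meant to
# reveal only f(x_1, ..., x_n). The "encryption" here is a placeholder with
# no security at all; it only demonstrates the syntax and correctness.

@dataclass
class FunctionKey:
    f: Callable[..., int]

def setup(n_slots: int) -> Tuple[str, List[str]]:
    master_key = "msk"                        # placeholder master secret key
    encryption_keys = [f"ek{i}" for i in range(n_slots)]
    return master_key, encryption_keys

def encrypt(ek: str, x: int) -> Tuple[str, int]:
    return (ek, x)                            # placeholder: no actual hiding

def keygen(master_key: str, f: Callable[..., int]) -> FunctionKey:
    return FunctionKey(f)

def decrypt(sk: FunctionKey, ciphertexts: List[Tuple[str, int]]) -> int:
    xs = [x for (_ek, x) in ciphertexts]
    return sk.f(*xs)                          # correctness: output f(x_1, ..., x_n)

if __name__ == "__main__":
    msk, eks = setup(n_slots=2)
    ct1 = encrypt(eks[0], 37)                 # slot-1 plaintext, encrypted independently
    ct2 = encrypt(eks[1], 5)                  # slot-2 plaintext, encrypted independently
    sk_sum = keygen(msk, lambda a, b: a + b)  # key for the joint function f(a, b) = a + b
    print(decrypt(sk_sum, [ct1, ct2]))        # 42; ideally nothing beyond f(x1, x2) leaks
```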