Computational Complexity
Recommended publications
-
On the Randomness Complexity of Interactive Proofs and Statistical Zero-Knowledge Proofs
Benny Applebaum and Eyal Golombek. Abstract: We study the randomness complexity of interactive proofs and zero-knowledge proofs. In particular, we ask whether it is possible to reduce the randomness complexity, R, of the verifier to be comparable with the number of bits, C_V, that the verifier sends during the interaction. We show that such randomness sparsification is possible in several settings. Specifically, unconditional sparsification can be obtained in the non-uniform setting (where the verifier is modelled as a circuit), and in the uniform setting where the parties have access to a (reusable) common random string (CRS). We further show that constant-round uniform protocols can be sparsified without a CRS under a plausible worst-case complexity-theoretic assumption that was used previously in the context of derandomization. All the above sparsification results preserve statistical zero-knowledge provided that this property holds against a cheating verifier. We further show that randomness sparsification can be applied to honest-verifier statistical zero-knowledge (HVSZK) proofs at the expense of increasing the communication from the prover by R − F bits, or, in the case of honest-verifier perfect zero-knowledge (HVPZK), by slowing down the simulation by a factor of 2^(R−F). Here F is a new measure of accessible bit complexity of an HVZK proof system that ranges from 0 to R, where the maximal grade of R is achieved when zero-knowledge holds against a "semi-malicious" verifier that maliciously selects its random tape and then plays honestly. -
LINEAR ALGEBRA METHODS IN COMBINATORICS, László Babai and Péter Frankl
László Babai and Péter Frankl. Version 2.1, March 2020 (a slight update of Version 2, 1992). © László Babai and Péter Frankl, 1988, 1992, 2020. Preface: Due perhaps to a recognition of the wide applicability of their elementary concepts and techniques, both combinatorics and linear algebra have gained increased representation in college mathematics curricula in recent decades. The combinatorial nature of the determinant expansion (and the related difficulty in teaching it) may hint at the plausibility of some link between the two areas. A more profound connection, the use of determinants in combinatorial enumeration, goes back at least to the work of Kirchhoff in the middle of the 19th century on counting spanning trees in an electrical network. It is much less known, however, that quite apart from the theory of determinants, the elements of the theory of linear spaces have found striking applications to the theory of families of finite sets. With a mere knowledge of the concept of linear independence, unexpected connections can be made between algebra and combinatorics, thus greatly enhancing the impact of each subject on the student's perception of beauty and sense of coherence in mathematics. If these adjectives seem inflated, the reader is kindly invited to open the first chapter of the book, read the first page to the point where the first result is stated ("No more than 32 clubs can be formed in Oddtown"), and try to prove it before reading on. (The effect would, of course, be magnified if the title of this volume did not give away where to look for clues.) What we have said so far may suggest that the best place to present this material is a mathematics enhancement program for motivated high school students.
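For readers who want to attempt the Oddtown puzzle before opening the book, here is the standard linear-algebra formalization as a brief sketch (our rendering of the folklore argument, not a quotation from the text):

    % Oddtown bound: clubs $C_1,\dots,C_m \subseteq \{1,\dots,n\}$ with each $|C_i|$ odd
    % and each $|C_i \cap C_j|$ even for $i \ne j$; let $v_i \in \mathbb{F}_2^n$ be the
    % characteristic vector of $C_i$. Working modulo 2,
    \[
      \langle v_i, v_i \rangle = |C_i| \equiv 1, \qquad
      \langle v_i, v_j \rangle = |C_i \cap C_j| \equiv 0 \quad (i \ne j).
    \]
    % If $\sum_i \lambda_i v_i = 0$, taking the inner product with $v_j$ gives
    % $\lambda_j = 0$, so $v_1,\dots,v_m$ are linearly independent over
    % $\mathbb{F}_2$ and $m \le n$. With $n = 32$ residents, at most 32 clubs.
-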
Foundations of Cryptography – A Primer, Oded Goldreich
Oded Goldreich, Department of Computer Science, Weizmann Institute of Science, Rehovot, Israel ([email protected]). Foundations and Trends in Theoretical Computer Science, now Publishers Inc., Boston/Delft, 2005. ISBN 1-933019-02-6. Contents: 1 Introduction and Preliminaries (1.1 Introduction; 1.2 Preliminaries); I Basic Tools; 2 Computational Difficulty and One-way Functions (2.1 One-way functions; 2.2 Hard-core predicates); 3 Pseudorandomness (3.1 Computational indistinguishability; 3.2 Pseudorandom generators …) -
Glossary of Complexity Classes
Appendix A: Glossary of Complexity Classes. Summary: This glossary includes self-contained definitions of most complexity classes mentioned in the book. Needless to say, the glossary offers a very minimal discussion of these classes, and the reader is referred to the main text for further discussion. The items are organized by topics rather than by alphabetic order. Specifically, the glossary is partitioned into two parts, dealing separately with complexity classes that are defined in terms of algorithms and their resources (i.e., time and space complexity of Turing machines) and complexity classes defined in terms of non-uniform circuits (referring to their size and depth). The algorithmic classes include the time-complexity based classes P, NP, coNP, BPP, RP, coRP, PH, E, EXP, and NEXP, and the space-complexity classes L, NL, RL, and PSPACE. The non-uniform classes include the circuit classes P/poly as well as NC^k and AC^k. Definitions and basic results regarding many other complexity classes are available at the constantly evolving Complexity Zoo. A.1 Preliminaries: Complexity classes are sets of computational problems, where each class contains problems that can be solved with specific computational resources. To define a complexity class one specifies a model of computation, a complexity measure (like time or space), which is always measured as a function of the input length, and a bound on the complexity of problems in the class. We follow the tradition of focusing on decision problems, but refer to these problems using the terminology of promise problems.
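As a quick orientation to the classes this glossary entry names, the following inclusions are standard textbook facts (our addition, not part of the glossary itself):

    \[
      \mathrm{L} \subseteq \mathrm{NL} \subseteq \mathrm{P} \subseteq \mathrm{NP}
        \subseteq \mathrm{PH} \subseteq \mathrm{PSPACE} \subseteq \mathrm{EXP} \subseteq \mathrm{NEXP},
    \]
    \[
      \mathrm{P} \subseteq \mathrm{BPP} \subseteq \mathrm{P/poly}, \qquad
      \mathrm{RP} \subseteq \mathrm{NP}, \qquad
      \mathrm{coRP} \subseteq \mathrm{coNP}, \qquad
      \mathrm{NC}^k \subseteq \mathrm{AC}^k \subseteq \mathrm{NC}^{k+1}.
    \]
-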
A Note on Perfect Correctness by Derandomization
Nir Bitansky and Vinod Vaikuntanathan, MIT. Abstract: We show a general compiler that transforms a large class of erroneous cryptographic schemes (such as public-key encryption, indistinguishability obfuscation, and secure multiparty computation schemes) into perfectly correct ones. The transformation works for schemes that are correct on all inputs with probability noticeably larger than half, and are secure under parallel repetition. We assume the existence of one-way functions and of functions with deterministic (uniform) time complexity 2^O(n) and non-deterministic circuit complexity 2^Ω(n). Our transformation complements previous results that showed how public-key encryption and indistinguishability obfuscation that err on a noticeable fraction of inputs can be turned into ones that, for all inputs, are often correct. The technique relies on the idea of "reverse randomization" [Naor, Crypto 1989] and on Nisan-Wigderson style derandomization, previously used in cryptography to remove interaction from witness-indistinguishable proofs and commitment schemes [Barak, Ong and Vadhan, Crypto 2003]. 1 Introduction: Randomized algorithms are often faster and simpler than their state-of-the-art deterministic counterparts, yet, by their very nature, they are error-prone. This gap has motivated a rich study of derandomization, where a central avenue has been the design of pseudo-random generators [BM84, Yao82a, NW94] that could offer one universal solution for the problem. This has led to surprising results, intertwining cryptography and complexity theory, and culminating in a derandomization of BPP under worst-case complexity assumptions, namely, the existence of functions in E = Dtime(2^O(n)) with worst-case circuit complexity 2^Ω(n) [NW94, IW97].
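To make the "correct with probability noticeably larger than half" hypothesis concrete, here is a toy sketch (ours, not the paper's compiler) of why such a scheme can be amplified by parallel repetition with a majority vote; all names are invented for illustration, and the paper's actual contribution, removing the residual error entirely via derandomization, is not captured here:

    import random

    def noisy_decrypt(bit, p_correct=0.6):
        # Toy stand-in for a decryption that is correct only with probability 0.6.
        return bit if random.random() < p_correct else 1 - bit

    def majority_decrypt(bit, k=101):
        # Parallel repetition plus majority vote; by a Chernoff bound the
        # error probability decays exponentially in k (but never reaches 0).
        ones = sum(noisy_decrypt(bit) for _ in range(k))
        return 1 if ones > k // 2 else 0

    trials = 1000
    errors = sum(majority_decrypt(b) != b
                 for b in (random.randint(0, 1) for _ in range(trials)))
    print(f"empirical error rate after amplification: {errors / trials:.4f}")
-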
Counting t-Cliques: Worst-Case to Average-Case Reductions and Direct Interactive Proof Systems
2018 IEEE 59th Annual Symposium on Foundations of Computer Science. Oded Goldreich and Guy N. Rothblum, Department of Computer Science, Weizmann Institute of Science, Rehovot, Israel. Abstract: We study two aspects of the complexity of counting the number of t-cliques in a graph: 1) Worst-case to average-case reductions: Our main result reduces counting t-cliques in any n-vertex graph to counting t-cliques in typical n-vertex graphs that are drawn from a simple distribution of min-entropy Ω(n^2). For any constant t, the reduction runs in O(n^2)-time, and yields a correct answer (w.h.p.) even when the "average-case solver" only succeeds with probability 1/poly(log n). 2) Direct interactive proof systems: We present a direct and simple interactive proof system for counting t-cliques in n-vertex graphs. The proof system uses t − 2 rounds, the verifier runs in O(t^2 · n^2)-time, and the prover can be implemented in O(t^O(1) · n^2)-time when given oracle … 3) Counting t-cliques has an appealing combinatorial structure (which is indeed capitalized upon in our work). In Sections I-A and I-B we discuss each study separately, whereas Section I-C reveals the common themes that lead us to present these two studies in one paper. We direct the reader to the full version of this work [18] for full details. A. Worst-case to average-case reductions. 1) Background: While most research in the theory of computation refers to worst-case complexity, the importance of average-case complexity is widely recognized (cf., e.g., [13, Chap. …
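For concreteness, a brute-force counter for the special case t = 3 (triangles) is sketched below; this naive O(n^3) loop merely illustrates the problem being counted and has no connection to the paper's reductions or proof systems:

    from itertools import combinations

    def count_triangles(n, edges):
        # Count 3-cliques in an undirected graph on vertices 0..n-1.
        adj = [[False] * n for _ in range(n)]
        for u, v in edges:
            adj[u][v] = adj[v][u] = True
        return sum(1 for a, b, c in combinations(range(n), 3)
                   if adj[a][b] and adj[b][c] and adj[a][c])

    # A 4-cycle with one chord contains exactly two triangles.
    print(count_triangles(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # -> 2
-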
A Realistic Approach to Spatial and Temporal Complexities of Computational Algorithms
Mathematics and Computers in Business, Manufacturing and Tourism. Ahmed Tarek, Department of Science, Cecil College, University System of Maryland (USM), One Seahawk Drive, North East, MD 21901-1900, United States of America ([email protected]); Ahmed Farhan, Owen J. Roberts High School, 981 Ridge Road, Pottstown, PA 19465, United States of America ([email protected]). Abstract: Computational complexity is a widely researched topic in theoretical computer science. The vast majority of the related results are purely of theoretical interest, and only a few treat the model of complexity as applicable to real-life computation. Even when individuals have performed research in complexity, the results can be hard to realize in practice. A number of these papers lack simultaneous consideration of temporal and spatial complexities, which together depict the overall computational scenario. Treating one without the other makes the analysis insufficient, and the model presented may not depict the true computational scenario, since both are of paramount importance in computer implementation of computational algorithms. This paper overcomes some of these limitations prevailing in the computer science literature through a clearly depicted model of computation and a meticulous analysis of spatial and temporal complexities. The paper also examines the computational complexities of a wide variety of coding constructs arising frequently in the design and analysis of practical algorithms. Key-Words: Design and analysis of algorithms, Model of computation, Real-life computation, Spatial complexity, Temporal complexity. 1 Introduction: … computations require pre- and/or post-processing, with the remaining operations carried out fast enough.
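A small illustration of the paper's theme, that temporal and spatial complexities must be weighed together, is the following pair of routines (our example, not the authors'): both compute the same function in linear time but with different space behavior:

    from functools import lru_cache

    def fib_memo(n):
        # O(n) time, O(n) space: the memo table trades memory for speed.
        @lru_cache(maxsize=None)
        def f(k):
            return k if k < 2 else f(k - 1) + f(k - 2)
        return f(n)

    def fib_iter(n):
        # O(n) time, O(1) space: only two running values are kept.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    assert fib_memo(30) == fib_iter(30) == 832040
-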
Quantum Computing: A Gentle Introduction / Eleanor Rieffel and Wolfgang Polak
Eleanor Rieffel and Wolfgang Polak. The MIT Press, Cambridge, Massachusetts / London, England, © 2011 Massachusetts Institute of Technology. Scientific and Engineering Computation series. ISBN 978-0-262-01506-6. Contents: Preface; 1 Introduction; I Quantum Building Blocks; 2 Single-Qubit Quantum Systems (2.1 The Quantum Mechanics of Photon Polarization: 2.1.1 A Simple Experiment, 2.1.2 A Quantum Explanation; 2.2 Single Quantum Bits; 2.3 Single-Qubit Measurement; 2.4 A Quantum Key Distribution Protocol; 2.5 The State Space of a Single-Qubit System: 2.5.1 Relative Phases versus Global Phases, 2.5.2 Geometric Views of the State Space of a Single Qubit, 2.5.3 Comments on General Quantum State Spaces …) -
Computational Complexity
Salil P. Vadhan, School of Engineering & Applied Sciences, Harvard University. Citation: Vadhan, Salil P. 2011. Computational complexity. In Encyclopedia of Cryptography and Security, second edition, ed. Henk C.A. van Tilborg and Sushil Jajodia. New York: Springer. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:33907951. Synonyms: Complexity theory. Related concepts and keywords: Exponential time; O-notation; One-way function; Polynomial time; Security (computational, unconditional); Sub-exponential time. Definition: Computational complexity theory is the study of the minimal resources needed to solve computational problems. In particular, it aims to distinguish between those problems that possess efficient algorithms (the "easy" problems) and those that are inherently intractable (the "hard" problems). Thus computational complexity provides a foundation for most of modern cryptography, where the aim is to design cryptosystems that are "easy to use" but "hard to break". (See security (computational, unconditional).) Theory. Running time: The most basic resource studied in computational complexity is running time, the number of basic "steps" taken by an algorithm. (Other resources, such as space, i.e., memory usage, are also studied, but will not be discussed here.) To make this precise one needs to fix a model of computation (such as the Turing machine), but here it suffices to think informally of running time as the number of "bit operations" performed when the input is given as a string of 0's and 1's.
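As a concrete instance of counting bit operations (a worked example of ours, not part of the encyclopedia entry), grade-school arithmetic on n-bit integers gives:

    \[
      T_{\text{add}}(n) = O(n), \qquad
      T_{\text{mult}}(n) = O(n^2), \qquad
      T_{\text{trial division}}(n) = 2^{n/2} \cdot n^{O(1)},
    \]
    % Addition and multiplication are polynomial time, while factoring an n-bit
    % integer by trial division is exponential: the "easy"/"hard" split above.
-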
Complexity Theory and Cryptography: Understanding the Foundations of Cryptography Is Understanding the Foundations of Hardness
Manuel Sabin, Research Statement. I view research as a social endeavor. This takes two forms: 1) research is a fun social activity, and 2) research is done within a larger sociotechnical environment and should be for society's betterment. While I try to fit all of my research into both of these categories, I have also taken two major research directions to fulfill each intention. 1) Complexity Theory asks profound philosophical questions and has compelling narratives about the nature of computation and about which problems' solutions may be provably beyond our reach during our planet's lifetime. Researching, teaching, and mentoring on this field's stories is one of my favorite social activities. 2) Algorithmic Fairness asks how algorithms further cement and codify systems of oppression in the world; its impacts are enormously far-reaching, both in current global problems and in the nature of future ones. This field is in its infancy, and deciding its future and impact with intention is simultaneously exciting and frightening but, above all, of absolute necessity. Complexity Theory and Cryptography: Understanding the foundations of Cryptography is understanding the foundations of Hardness. Complexity Theory studies the nature of computational hardness, i.e., lower bounds on the time required to solve computational problems, not only to understand what is difficult, but to understand the utility hardness offers. By hiding secrets within "structured hardness," Complexity Theory, only as recently as the 1970s, transformed the ancient ad-hoc field of Cryptography into a science with rigorous theoretical foundations. But which computational hardness can we feel comfortable basing Cryptography on? My research studies a question foundational to Complexity Theory and Cryptography: Does Cryptography's existence follow from P ≠ NP? P is the class of problems computable in polynomial time and is qualitatively considered the set of "easy" problems, while NP is the class of problems whose solutions, when obtained, can be checked as correct in polynomial time. -
Descriptive Complexity: A Logician's Approach to Computation
N. Immerman. A basic issue in computer science is the complexity of problems. If one is doing a calculation once on a medium-sized input, the simplest algorithm may be the best method to use, even if it is not the fastest. However, when one has a subproblem that will have to be solved millions of times, optimization is important. A fundamental issue in theoretical computer science is the computational complexity of problems. How much time and how much memory space is needed to solve a particular problem? Here are a few examples of such problems: 1. Reachability: Given a directed graph and two specified points s, t, determine if there is a path from s to t. A simple, linear-time algorithm marks s and then continues to mark every vertex at the head of an edge whose tail is marked. 2. Min-triangulation: … there are exponentially many possibilities, but in this case no known algorithm is faster than exponential time. Computational complexity measures how much time and/or memory space is needed as a function of the input size. Let TIME[t(n)] be the set of problems that can be solved by algorithms that perform at most O(t(n)) steps for inputs of size n. The complexity class polynomial time (P) is the set of problems that are solvable in time at most some polynomial in n: P = ∪_{k=1}^∞ TIME[n^k]. Even though the class TIME[t(n)] is sensitive to the exact machine model used in the computations, the class P is quite robust. The problems Reachability and Min-triangulation are …
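The marking algorithm for Reachability described above translates directly into code; the sketch below (our rendering, not Immerman's) runs in time linear in the number of vertices and edges:

    from collections import deque

    def reachable(n, edges, s, t):
        # Directed reachability on vertices 0..n-1 by the marking algorithm:
        # mark s, then repeatedly mark the head of any edge whose tail is marked.
        heads = [[] for _ in range(n)]
        for u, v in edges:
            heads[u].append(v)
        marked = [False] * n
        marked[s] = True
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in heads[u]:
                if not marked[v]:
                    marked[v] = True
                    queue.append(v)
        return marked[t]

    print(reachable(4, [(0, 1), (1, 2), (3, 2)], 0, 2))  # True: 0 -> 1 -> 2
    print(reachable(4, [(0, 1), (1, 2), (3, 2)], 2, 0))  # False
-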
The Complexity of Estimating Min-Entropy
Electronic Colloquium on Computational Complexity, Revision 1 of Report No. 70 (2012). Thomas Watson, April 14, 2014. Abstract: Goldreich, Sahai, and Vadhan (CRYPTO 1999) proved that the promise problem for estimating the Shannon entropy of a distribution sampled by a given circuit is NISZK-complete. We consider the analogous problem for estimating the min-entropy and prove that it is SBP-complete, where SBP is the class of promise problems that correspond to approximate counting of NP witnesses. The result holds even when the sampling circuits are restricted to be 3-local. For logarithmic-space samplers, we observe that this problem is NP-complete by a result of Lyngsø and Pedersen on hidden Markov models (JCSS 2002). 1 Introduction: Deterministic randomness extraction is the problem of taking a sample from an imperfect physical source of randomness (modeled as a probability distribution on bit strings) and applying an efficient deterministic algorithm to transform it into a uniformly random string, which can be used by a randomized algorithm (see [Sha11, Vad12] for surveys of this topic). For such extraction to be possible, the source of randomness must satisfy two properties: (i) it must contain a sufficient "amount of randomness", and (ii) it must be "structured", meaning that it has a simple description. Regarding property (i), the most useful measure of the "amount of randomness" is the min-entropy, which is the logarithm of the reciprocal of the probability of the most likely outcome (in other words, if a distribution has high min-entropy then every outcome has small probability).
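The two entropy measures can be contrasted numerically; the snippet below (a toy illustration of ours, not from the paper) exhibits a distribution with about six bits of Shannon entropy but only one bit of min-entropy, which is why min-entropy is the measure relevant to extraction:

    import math

    def shannon_entropy(p):
        # H(X) = -sum_x p(x) * log2 p(x)
        return -sum(x * math.log2(x) for x in p if x > 0)

    def min_entropy(p):
        # H_min(X) = log2(1 / max_x p(x)): log of the reciprocal of the
        # probability of the most likely outcome.
        return -math.log2(max(p))

    p = [0.5] + [0.5 / 1024] * 1024   # one heavy outcome plus 1024 light ones
    print(f"Shannon entropy: {shannon_entropy(p):.2f} bits")  # 6.00 bits
    print(f"min-entropy:     {min_entropy(p):.2f} bits")      # 1.00 bit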