Noisy Information and Computational Complexity


L. Plaskota
Institute of Applied Mathematics and Mechanics
University of Warsaw

Contents

1 Overview

2 Worst case setting
   2.1 Introduction
   2.2 Information, algorithm, approximation
   2.3 Radius and diameter of information
   2.4 Affine algorithms for linear functionals
       2.4.1 Existence of optimal affine algorithms
       2.4.2 The case of Hilbert noise
   2.5 Optimality of spline algorithms
       2.5.1 Splines and smoothing splines
       2.5.2 α-smoothing splines
   2.6 Special splines
       2.6.1 The Hilbert case with optimal α
       2.6.2 Least squares and regularization
       2.6.3 Polynomial splines
       2.6.4 Splines in r.k.h.s.
   2.7 Varying information
       2.7.1 Nonadaptive and adaptive information
       2.7.2 When does adaption not help?
   2.8 Optimal information
       2.8.1 Linear problems in Hilbert spaces
       2.8.2 Approximation and integration of Lipschitz functions
   2.9 Complexity
       2.9.1 Computations over the space G
       2.9.2 Cost and complexity, general bounds
   2.10 Complexity of special problems
       2.10.1 Linear problems in Hilbert spaces
       2.10.2 Approximation and integration of Lipschitz functions
       2.10.3 Multivariate approximation in a Banach space

3 Average case setting
   3.1 Introduction
   3.2 Information and its radius
   3.3 Gaussian measures on Banach spaces
       3.3.1 Basic properties
       3.3.2 Gaussian measures as abstract Wiener spaces
   3.4 Linear problems with Gaussian measures
       3.4.1 Induced and conditional distributions
       3.4.2 Optimal algorithms
   3.5 The case of linear functionals
   3.6 Optimal algorithms as smoothing splines
       3.6.1 A general case
       3.6.2 Special cases
       3.6.3 A correspondence theorem
   3.7 Varying information
       3.7.1 Nonadaptive and adaptive information
       3.7.2 Adaption versus nonadaption, I
   3.8 Optimal information
       3.8.1 Linear problems with Gaussian measures
       3.8.2 Approximation and integration on the Wiener space
   3.9 Complexity
       3.9.1 Adaption versus nonadaption, II
       3.9.2 General bounds
   3.10 Complexity of special problems
       3.10.1 Linear problems with Gaussian measures
       3.10.2 Approximation and integration on the Wiener space

4 First mixed setting
   4.1 Introduction
   4.2 Affine algorithms for linear functionals
       4.2.1 The one-dimensional problem
       4.2.2 Almost optimality of affine algorithms
       4.2.3 A correspondence theorem
   4.3 Approximation of operators
       4.3.1 Ellipsoidal problems in R^n
       4.3.2 The Hilbert case

5 Second mixed setting
   5.1 Introduction
   5.2 Linear algorithms for linear functionals
       5.2.1 The one-dimensional problem
       5.2.2 Almost optimality of linear algorithms
       5.2.3 A correspondence theorem
   5.3 Approximation of operators

6 Asymptotic setting
   6.1 Introduction
   6.2 Asymptotic and worst case settings
       6.2.1 Information, algorithm and error
       6.2.2 Optimal algorithms
       6.2.3 Optimal information
   6.3 Asymptotic and average case settings
       6.3.1 Optimal algorithms
       6.3.2 Convergence rate
       6.3.3 Optimal information

Chapter 1

Overview

In the process of doing scientific computations we always rely on some information. A typical situation in practice is that this information is contaminated by errors.
We say that it is noisy. Sources of noise include:

• previous computations,
• inexact measurements,
• transmission errors,
• arithmetic limitations,
• adversary's lies.

Problems with noisy information have always attracted considerable attention from researchers in many different scientific fields: statisticians, engineers, control theorists, economists, applied mathematicians. There is also a vast literature, especially in statistics, where noisy information is analyzed from different perspectives. In this monograph, noisy information is studied in the context of the computational complexity of solving mathematically posed problems.

Computational complexity focuses on the intrinsic difficulty of problems as measured by the minimal amount of time, memory, or elementary operations necessary to solve them. Information-based complexity (IBC) is a branch of computational complexity that deals with problems for which the available information is:

• partial,
• noisy,
• priced.

Information being partial means that the problem is not uniquely determined by the given information. Information is noisy since it may be contaminated by some errors. Finally, information is priced since we must pay for getting it. These assumptions distinguish IBC from combinatorial complexity, where information is complete, exact, and free.

Since information is partial and noisy, only approximate solutions are possible. One of the main goals of IBC is finding the complexity of the problem, i.e., the intrinsic cost of computing an approximation with given accuracy. Approximations are obtained by algorithms that use some information. Those that solve the problem with minimal cost are of special importance and are called optimal.

Partial, noisy and priced information is typical of many problems arising in different scientific fields. These include, for instance, signal processing, control theory, computer vision, and numerical analysis. As a rule, a digital computer is used to perform scientific computations. A computer can only make use of a finite set of numbers. Usually, these numbers cannot be entered exactly into the computer memory. Hence, problems described by infinitely many parameters can be "solved" using only partial and noisy information.

The theory of optimal algorithms for solving problems with partial information has a long history. It can be traced back to the late forties, when Kiefer, Sard and Nikolskij wrote pioneering papers. A systematic and uniform approach to such problems was first presented by J.F. Traub and H. Woźniakowski in the monograph A General Theory of Optimal Algorithms, Academic Press, 1980. This was an important stage in the development of the theory of IBC.

That monograph was then followed by Information, Uncertainty, Complexity, Addison-Wesley, 1983, and Information-Based Complexity, Academic Press, 1988, both authored by J.F. Traub, G.W. Wasilkowski, and H. Woźniakowski. Computational complexity of approximately solved problems is also studied in the books Deterministic and Stochastic Error Bounds in Numerical Analysis by E. Novak, Springer Verlag, 1988, and The Computational Complexity of Differential and Integral Equations by A.G. Werschulz, Oxford University Press, 1991.

Relatively few IBC papers study noisy information. One reason is the technical difficulty of the analysis of noisy information. A second reason is that even if we are primarily interested in noisy information, the results on exact information establish a benchmark. All negative results for exact information also apply to the noisy case. On the other hand, it is not clear whether positive results for exact information have counterparts for noisy information.

In the mathematical literature, the word "noise" is used mainly by statisticians and means a random error occurring in experimental observations. We also want to study deterministic error. Therefore, by noise we mean random or deterministic error. Moreover, in our model the source of the information is not important. We may say that information is "observed" or that it is "computed".

We also stress that the case of exact information is not excluded, neither in the model nor in most results. Exact information is obtained as a special case by setting the noise level to zero. This permits us to study the dependence of the results on the noise level, and to compare the noisy and exact information cases.
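To fix ideas, the noisy information model can be sketched as follows; this is the standard IBC formalization, and the symbols N, L_i, δ and σ are assumed here for illustration rather than quoted from the text. Information about a problem element f consists of finitely many functional values, perturbed by noise:

\[
  y \;=\; N(f) + x, \qquad N(f) \;=\; \bigl( L_1(f), \ldots, L_n(f) \bigr),
\]

where the L_i are functionals (for instance, function evaluations at given points). In the worst case setting the noise x is deterministic and bounded, say \|x\| \le \delta, while in the average case setting x is random, e.g., Gaussian with variance \sigma^2. Setting \delta = 0 (or \sigma^2 = 0) recovers exact information y = N(f) as the special case described above.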
In general, optimal algorithms and problem complexity depend on the setting. The setting is specified by the way the error and cost of an algorithm are defined. If the error and cost are defined by their worst performance, we have the worst case setting. The average case setting is obtained when the average performance of algorithms is considered. In this monograph, we study the worst and average case settings, as well as mixed settings and the asymptotic setting (a formal sketch of the two basic error criteria is given at the end of this chapter). Other settings, such as the probabilistic and randomized settings, will be the topic of future research.

Despite the differences, the settings have certain features in common. For instance, algorithms that are based on smoothing splines are optimal independent of the setting. This is a very desirable property, since it shows that such algorithms are universal and robust.

Most of the research presented in this monograph has been done over the last 5–6 years by different people, including the author. Some of the results have not been previously reported. The references to the original results are given in Notes and Remarks at the end of each section. Clearly, the author does not pretend to cover the whole subject of noisy information in one monograph. Only those topics are presented that are typical of IBC, or that are needed for the complexity analysis. Many problems are still open. Some of these are indicated in the text.

The monograph consists of six chapters. We start with the worst case setting in Chapter 2. Chapter 3 is devoted to the average case setting. Each of these two settings is studied following the same scheme. We first look for the best algorithms that use fixed information. Then we allow the information to vary and seek optimal information.
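To make the distinction between the settings concrete, here is a hedged sketch of the two basic error criteria; the notation is standard in IBC but assumed here rather than copied from the monograph. Let S : F → G be the solution operator, let y denote noisy information about f as above, and let φ be an algorithm mapping data y to an approximation φ(y) in G. Then

\[
  e^{\mathrm{wor}}(\varphi) \;=\; \sup_{f \in F} \; \sup_{\|y - N(f)\| \le \delta}
      \bigl\| S(f) - \varphi(y) \bigr\|,
  \qquad
  e^{\mathrm{avg}}(\varphi) \;=\; \int_F \mathbb{E}_{\,y \mid f}
      \bigl\| S(f) - \varphi(y) \bigr\| \, \mu(df),
\]

where μ is a prior (e.g., Gaussian) measure on F and the expectation is taken over the random noise; conventions vary (for instance, squared errors under a final square root). The ε-complexity of a problem is then the minimal cost of any information–algorithm pair whose error does not exceed ε.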