Introduction to Complexity Theory
GS019 - Lecture 1 on Complexity Theory
Jarod Alper (jalper)

Big O Notation Review

Linear function: r(n) = O(n)
Polynomial function: r(n) = n^O(1)
Exponential function: r(n) = 2^(n^O(1))
Logarithmic function: r(n) = O(log n)
Poly-log function: r(n) = log^O(1) n

Definition 1 (TIME) Let t : ℕ → ℕ. Define the time complexity class TIME(t(n)) to be
TIME(t(n)) = { L | ∃ DTM M which decides L in time O(t(n)) }.

Definition 2 (NTIME) Let t : ℕ → ℕ. Define the time complexity class NTIME(t(n)) to be
NTIME(t(n)) = { L | ∃ NDTM M which decides L in time O(t(n)) }.

Example 1 Consider the language A = { 0^k 1^k | k ≥ 0 }.
Is A ∈ TIME(n^2)? Is A ∈ TIME(n log n)? Is A ∈ TIME(n)? Could we potentially place A in a smaller complexity class if we consider other computational models?

Theorem 1 If t(n) ≥ n, then every t(n) time multitape Turing machine has an equivalent O(t^2(n)) time single-tape Turing machine.
Proof: see Theorem 7.8 in Sipser (pg. 232).

Theorem 2 If t(n) ≥ n, then every t(n) time RAM machine has an equivalent O(t^3(n)) time multi-tape Turing machine.
Proof: optional exercise.

Conclusion: Linear time is model specific; polynomial time is model independent.

Definition 3 (The Class P)
P = ∪_k TIME(n^k)

Definition 4 (The Class NP)
NP = ∪_k NTIME(n^k)

Equivalent Definition of NP
NP = { L | L has a polynomial time verifier }.
A polynomial time verifier for a language A is an algorithm V where A = { w | V accepts <w, c> for some string c }.

Example 2 Let RELPRIME = { <x, y> | gcd(x, y) = 1 }. Is RELPRIME ∈ NP? Is RELPRIME ∈ P?

Example 3 PATH = { <G, s, t> | G has a directed path from s to t }. Is PATH ∈ P?

Construct a deterministic polynomial time Turing machine, M, that decides PATH.
M = "On input <G, s, t>, where G is a directed graph with nodes s and t:
1. Place a mark on node s.
2. Repeat until no new nodes are marked:
3.   Search for all edges (a, b) where a is marked and b is unmarked, and mark each such b.
4. If t is marked, accept; otherwise, reject."

To show an algorithm runs in polynomial time, one must show that each step is executed only a polynomial number of times and that each step executes in polynomial time. A sketch of this marking procedure appears below.
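As a concrete illustration of the marking procedure above, here is a minimal sketch in Python (not from the lecture; the adjacency-list representation and the name decides_path are assumptions made for illustration):

```python
def decides_path(graph, s, t):
    """Deterministic polynomial-time decision procedure for PATH.

    graph: dict mapping each node to the list of nodes it points to.
    Marks s, then repeatedly marks every node reachable by one edge
    from a marked node, until no new node gets marked.
    """
    marked = {s}
    changed = True
    while changed:                      # at most |V| useful iterations
        changed = False
        for a in list(marked):          # scan edges (a, b) with a marked
            for b in graph.get(a, []):
                if b not in marked:     # ... and b unmarked: mark b
                    marked.add(b)
                    changed = True
    return t in marked                  # accept iff t got marked

# Example: a path 1 -> 2 -> 3 exists, but nothing reaches 4.
g = {1: [2], 2: [3], 3: [], 4: [1]}
assert decides_path(g, 1, 3) is True
assert decides_path(g, 1, 4) is False
```

Each pass over the edges either marks a new node or terminates the loop, so there are at most |V| + 1 passes, each costing O(|V| + |E|) work: polynomial, as required.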
Example 4 SAT = { <φ> | φ is a satisfiable boolean formula }. Is SAT ∈ NP?

Construct a nondeterministic polynomial time Turing machine, M, as follows:
M = "On input <φ>:
1. Nondeterministically select an assignment x of the variables.
2. Test whether x satisfies φ.
3. If the test passes, accept; otherwise, reject."

Example 5 VALUE = { <φ, x> | φ(x) is true, where φ is a boolean formula and x an assignment of its variables }. Is VALUE ∈ P?

Example 6 RATIONALROOT = { <p> | ∃ q ∈ ℚ such that p(q) = 0, where p(x) = a_0 + a_1 x + ... + a_k x^k is a polynomial of degree k with a_i ∈ ℕ for all i = 1 ... k }. Is RATIONALROOT ∈ P?

Construct a nondeterministic polynomial time Turing machine, M, for RATIONALROOT:
M = "On input <p>, where p(x) = a_0 + a_1 x + ... + a_k x^k with a_i ∈ ℕ:
1. Nondeterministically select q ∈ ℚ.
2. Evaluate p(q).
3. If p(q) = 0, accept; otherwise, reject."

Does this run in polynomial time? Clearly, evaluating p(q) takes time polynomial in the length of q, since multiplication can be computed in polynomial time. How big is q, though? By the Rational Root Theorem from algebra, any rational root x of p(x) = 0 is of the form x = s/t where s | a_0 and t | a_k. Since |a_0| = log(a_0) < n and |a_k| = log(a_k) < n, we have |q| = O(n). Thus, steps 1 and 2 can be computed in a polynomial number of steps.

Example 7 Let SUBSETSUM = { <S, t> | ∃ Y = {y_1, y_2, ..., y_m} ⊆ S = {s_1, s_2, ..., s_n} such that Σ_i y_i = t }.

Construct a nondeterministic polynomial time Turing machine, M, as follows:
M = "On input <S, t>:
1. Nondeterministically select a subset c of the numbers in S.
2. Test whether c is a collection of numbers that sum to t.
3. If the test passes, accept; otherwise, reject."

Theorem 3 P ⊆ NP.
Proof: An O(t(n)) time DTM has an equivalent O(t(n)) time NDTM. □

Definition 5 (The Class EXPTIME)
EXPTIME = ∪_k TIME(2^(n^k))

Definition 6 (The Class NEXPTIME)
NEXPTIME = ∪_k NTIME(2^(n^k))

Relations: P ⊆ NP ⊆ EXPTIME ⊆ NEXPTIME

Definition 7 (Space Complexity Classes)
SPACE(t(n)) = { L | ∃ DTM M which decides L in space O(t(n)) }.
NSPACE(t(n)) = { L | ∃ NDTM M which decides L in space O(t(n)) }.
PSPACE = ∪_k SPACE(n^k)
NPSPACE = ∪_k NSPACE(n^k)

Definition 8 (Logarithmic Space Classes)
L = SPACE(log n)
NL = NSPACE(log n)

How can our definition of a Turing machine use only logarithmic space when the input tape alone occupies linear space? We introduce a new Turing machine with two tapes: a read-only input tape and a read/write work tape; only the work tape counts toward the space bound.

Example 8 PATH ∈ NL.

Relations between time and space:

Theorem 4 TIME(f(n)) ⊆ SPACE(f(n)).
Proof: A deterministic Turing machine that decides membership in O(f(n)) steps can use at most O(f(n)) space, since a single TM step can write to only one memory cell. □

Theorem 5 If f(n) ≥ log(n), then SPACE(f(n)) ⊆ TIME(k^f(n)) for some constant k (equivalently, SPACE(f(n)) ⊆ TIME(2^O(f(n)))).
Proof: Define a configuration as a snapshot of a Turing machine, including the positions of the heads, the state of the control unit, and the contents of all cells on the work tape. If our Turing machine is restricted to O(f(n)) space, the work tape head can be in only O(f(n)) locations and the input tape head can be in only O(n) locations. The control unit can be in any of c states, where c is a constant equal to the number of states in the finite state machine. If k characters can be written to a cell, then there are O(k^f(n)) possibilities for the contents of the work tape. Thus, there are O(c · n · f(n) · k^f(n)) = 2^O(f(n)) configurations, where f(n) ≥ log(n) lets us absorb the factor of n. If the Turing machine is to halt, no configuration may repeat, and thus it must run in time 2^O(f(n)). □

Corollary 1 L ⊆ P.

Corollary 2 P ⊆ PSPACE.

Corollary 3 PSPACE ⊆ EXPTIME.

Complements of complexity classes

Definition 9 (coC) The complement of a complexity class of decision problems C, denoted coC, is the set of decision problems that are complements of decision problems in C.

Example 9 coSAT = { <φ> | φ is NOT a satisfiable boolean formula }. Is coSAT ∈ NP? Is coSAT ∈ EXPTIME? Is coSAT ∈ PSPACE?

Theorem 6 If C is a deterministic time or space complexity class, then C = coC.
Proof: Deterministic Turing machines are closed under complementation. □

Is NP = coNP?

Theorem 7 NP ∩ coNP ≠ ∅.
Proof: Let PRIMALITY = { <n> | n is prime }. We show PRIMALITY ∈ NP ∩ coNP. (Not quite done.)

Lemma 1 An integer p > 2 is prime iff there is an integer 1 < r < p such that r^(p-1) ≡ 1 (mod p) and, for every prime q such that q | p - 1, r^((p-1)/q) ≢ 1 (mod p).
Proof: not quite done.
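To connect Lemma 1 back to the claim that PRIMALITY ∈ NP, here is a sketch of a polynomial-time check of its conditions. The certificate format (a witness r together with the distinct prime factors of p - 1) and the function name are assumptions made for illustration, not part of the lecture; in a full Pratt-style certificate each prime factor of p - 1 would carry its own recursive certificate.

```python
def verify_prime_certificate(p, r, prime_factors_of_p_minus_1):
    """Check the Lemma 1 conditions for the claim 'p is prime'.

    p: integer > 2 whose primality is claimed.
    r: the witness, an integer with 1 < r < p.
    prime_factors_of_p_minus_1: the distinct prime divisors of p - 1,
        supplied as part of the certificate.
    Returns True iff r^(p-1) ≡ 1 (mod p) and r^((p-1)/q) ≢ 1 (mod p)
    for every prime divisor q of p - 1.
    """
    if not (1 < r < p):
        return False
    if pow(r, p - 1, p) != 1:                # r^(p-1) ≡ 1 (mod p)?
        return False
    for q in prime_factors_of_p_minus_1:
        if pow(r, (p - 1) // q, p) == 1:     # must be ≢ 1 (mod p)
            return False
    return True

# 7 is prime: p - 1 = 6 = 2 * 3, and r = 3 is a primitive root mod 7.
assert verify_prime_certificate(7, 3, [2, 3]) is True
# r = 2 fails, since 2^3 = 8 ≡ 1 (mod 7).
assert verify_prime_certificate(7, 2, [2, 3]) is False
```

Each modular exponentiation runs in time polynomial in the bit length of p, so the whole check is polynomial in the input size, which is what the NP membership argument needs.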