Computational Complexity of Inference

Sargur Srihari, [email protected]
(Probabilistic AI lecture notes)

Topics in Complexity of Inference
1. Inferring a conditional probability from a BN
2. NP-complete decision problems
3. The inference decision problem is NP-complete
4. Exact inference is #P-complete
5. Approximate inference is NP-hard
6. Types of inference algorithms: exact and approximate

Common BN Inference Problem
• PGMs represent joint probability distributions over a set of variables χ
• They are used to answer queries of interest
• The most common query type is the conditional probability query
  – E: evidence variables, whose known value is e
  – Y: query variables, whose distribution we wish to know
  – The conditional probability query is P(Y | E=e)

Computing the conditional probability
• Assume: variables χ, evidence E=e, query Y
• By the product rule, the conditional probability query is
      P(Y | E=e) = P(Y, e) / P(e)
• Evaluation of the numerator P(Y, e)
  – Let W = χ − Y − E, i.e., the variables that are neither query nor evidence. Then
      P(y, e) = Σ_w P(y, e, w)        (1)
    Each term in the summation over w is simply an entry in the joint distribution
• Evaluation of the denominator P(e)
      P(e) = Σ_y P(y, e)
  – Rather than marginalizing over P(y, e, w) again, this reuses the computation in (1)
• This process corresponds to taking the vector of marginal probabilities P(y1, e), ..., P(yk, e) and renormalizing the entries by P(e) so that they sum to 1

Example Query with the Cancer BN
• χ = {A, Ge, E, Sm, C, Gd, Se, L}
  – The evidence variable is Se (serum calcium, i.e., calcium in the blood)
  – The query variable is C (cancer)
• We can evaluate P(C | Se) as
      P(C | Se) = P(C, Se) / P(Se)
  – With W = χ − C − Se = {A, Ge, E, Sm, Gd, L}:
      P(C, Se) = Σ_{A,Ge,E,Sm,L,Gd} P(A, Ge, E, Sm, Se, C, L, Gd)
               = Σ_{A,Ge,E,Sm,L,Gd} P(A) P(Ge) P(Gd) P(E | A) P(Sm | A, Ge) P(Se | C) P(C | E, Sm) P(L | C, Gd)
      P(Se) = Σ_C P(C, Se)
  – We take the marginal probabilities P(c1, Se), P(c2, Se) and renormalize the entries by P(Se) so that they sum to 1 (see the sketch below)
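Purely as an illustration, here is a minimal Python sketch of this enumeration. It uses only a four-variable fragment (E, Sm, C, Se) of the cancer network, and every CPT number in it is invented for the example rather than taken from the lecture; the structure follows the factorization P(E) P(Sm) P(C|E,Sm) P(Se|C).

```python
import itertools

# Brute-force answer to P(C | Se=1) on a small, hypothetical fragment of the
# cancer network: E (exposure), Sm (smoking), C (cancer), Se (serum calcium).
# All variables are binary; every CPT number below is made up for illustration.

p_E   = {1: 0.3, 0: 0.7}                       # P(E)
p_Sm  = {1: 0.2, 0: 0.8}                       # P(Sm)
p_C1  = {(0, 0): 0.01, (0, 1): 0.05,
         (1, 0): 0.03, (1, 1): 0.10}           # P(C=1 | E, Sm)
p_Se1 = {1: 0.8, 0: 0.1}                       # P(Se=1 | C)

def joint(e, sm, c, se):
    """One entry of the joint: P(E) P(Sm) P(C|E,Sm) P(Se|C)."""
    pc = p_C1[(e, sm)] if c == 1 else 1 - p_C1[(e, sm)]
    ps = p_Se1[c] if se == 1 else 1 - p_Se1[c]
    return p_E[e] * p_Sm[sm] * pc * ps

def query_C_given_Se(se=1):
    # Numerator P(C=c, Se=se): sum out W = {E, Sm} for each value c of C.
    p_c_se = {c: sum(joint(e, sm, c, se)
                     for e, sm in itertools.product((0, 1), repeat=2))
              for c in (0, 1)}
    # Denominator P(Se=se), then renormalize so the entries sum to 1.
    p_se = sum(p_c_se.values())
    return {c: p / p_se for c, p in p_c_se.items()}

print(query_C_given_Se(se=1))
```

Note that the numerator sums over all 2^|W| joint assignments of the summed-out variables; this exponential blow-up is exactly what the complexity analysis below is about.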
Analysis of Complexity
• The approach of summing out the variables of the joint distribution, P(y,e) = Σ_w P(y,e,w), is unsatisfactory
  – It returns us to the exponential blow-up that the PGM was precisely designed to avoid
• We now show that the problem of inference in PGMs is NP-hard
  – It therefore requires exponential time in the worst case, unless P=NP
  – Even worse, approximate inference is also NP-hard
• The discussion for BNs applies to MNs as well

What is a decision problem?
• The subset-sum decision problem
  – Given a set of integers, is there a subset that sums to zero?
  – No polynomial-time algorithm is known that decides this
  – For an instance such as ω = {−2, −3, 15, 14, 7, −10}, the guess {−2, −3, −10, 15} can be verified in polynomial time (see the sketch below)
• Definition of a decision problem Π:
  – LΠ defines a precise set of instances; here LΠ consists of the integer sets that have a subset summing to zero
  – Decision problem Π: is instance ω in LΠ? That is, is ω a set of integers with a subset that sums to zero?

P and NP decision problems
• A decision problem Π is in
  – P if there is an algorithm that decides it in polynomial time
  – NP if a guessed solution can be verified in polynomial time
    • The guess is produced non-deterministically, hence the name non-deterministic polynomial time
• The subset-sum decision problem is in NP
  – Whether a given subset sums to zero can be verified in polynomial time
  – But it is not known to be in P: no polynomial-time algorithm is known that determines whether any subset sums to zero

3-SAT (satisfiability) decision problem
• A 3-SAT formula over binary variables q1, ..., qn has the form C1 ∧ C2 ∧ ... ∧ Cm
  – Each Ci is a clause of the form l_{i,1} ∨ l_{i,2} ∨ l_{i,3}, where the literals l_{i,j} (i = 1,...,m; j = 1,2,3) are either some qk or its negation ¬qk
  – E.g., with n=3, m=2 and clauses (q1 ∨ ¬q2 ∨ ¬q3) ∧ (¬q1 ∨ q2 ∨ ¬q3) there is a satisfying assignment (one that makes the formula true): q1 = q2 = q3 = true
  – Other formulas have no satisfying assignment; for example (q1 ∨ q2) ∧ (q1 ∨ ¬q2) ∧ (¬q1 ∨ q2) ∧ (¬q1 ∨ ¬q2) is unsatisfiable
  – Any particular assignment can be verified in polynomial time
• Decision problem Π: given a 3-SAT formula of size n, is there a satisfying assignment?
  – Answering this naively requires checking up to 2^n assignments of the n binary variables
  – LΠ is the set of 3-SAT formulas that have a satisfying assignment

What is P = NP?
• The input is a formula of size n
  – A particular assignment γ, e.g., q1 = q2 = q3 = true, can be verified in polynomial time
  – Suppose we generate a guess γ and verify whether it satisfies the formula
  – Since a guess can be verified in polynomial time, the decision problem Π is in NP
• Deterministic computations are a special case of non-deterministic ones, so P ⊆ NP
  – Whether the converse holds is the biggest open problem in complexity theory: if you can verify a solution in polynomial time, can you also decide the problem in polynomial time? (E.g., is there a prime greater than n?)

P = NP intuition
• Creating great art: the complexity is like NP
• Appreciating art: the complexity is like P
• Is P = NP? Most mathematicians don't think so

NP-hard and NP-complete
• The hardest problems in NP are called NP-complete
  – If a polynomial-time algorithm existed for any one of them, every problem in NP could be solved in polynomial time
• A problem Π is NP-hard if every problem Π' in NP can be reduced to Π in polynomial time; NP-hard problems need not themselves have polynomial-time verification
• 3-SAT is NP-complete

BN for 3-SAT
• Propositional variables q1, ..., qn
  – Return true if C1 ∧ C2 ∧ ... ∧ Cm, where each Ci is a disjunction of three literals over the qk, has a satisfying assignment
  – E.g., return true for the 3-SAT formula (q1 ∨ ¬q2 ∨ ¬q3) ∧ (¬q1 ∨ q2 ∨ ¬q3), since q1 = q2 = q3 = true is a satisfying assignment, and return false for a formula that has no satisfying assignment
• BN to infer this (see the sketch below):
  – Root nodes Q1, Q2, ..., Qn with P(qk = 1) = 0.5
  – Clause nodes C1, ..., Cm, each a deterministic OR of its three literals
  – Nodes A1, ..., A_{m−2} form a chain of deterministic ANDs
  – X is the output: it has value 1 iff all of the Ci are 1
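As an illustration of this construction (not code from the lecture), the sketch below computes P(X=1) for such a network by enumerating assignments; the encoding of clauses as tuples of signed variable indices is an assumption made for the example.

```python
from itertools import product

# Minimal sketch of the 3-SAT -> BN reduction under the slide's assumptions:
# each root Qk is an independent fair coin, each clause node is a deterministic
# OR of its literals, and X is a deterministic AND of all clause nodes.
# Hypothetical clause encoding: (1, -2, -3) means (q1 OR NOT q2 OR NOT q3).

def prob_X_equals_1(clauses, n):
    """Return P(X=1) by enumerating all 2^n assignments (exponential!)."""
    satisfying = 0
    for bits in product((False, True), repeat=n):
        clause_values = [any(bits[abs(l) - 1] if l > 0 else not bits[abs(l) - 1]
                             for l in clause)
                         for clause in clauses]
        if all(clause_values):          # X = AND of all clause nodes
            satisfying += 1
    return satisfying / 2 ** n          # each assignment has probability 2^-n

phi = [(1, -2, -3), (-1, 2, -3)]        # the example formula from the slides
print(prob_X_equals_1(phi, n=3) > 0)    # True  => phi is satisfiable
```

Since every assignment has probability 2^-n, P(X=1) equals (number of satisfying assignments)/2^n: deciding whether it is positive answers 3-SAT, and computing it exactly counts the satisfying assignments, which is the counting problem taken up next.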
#P-complete Problem
• The corresponding counting problem: count the number of satisfying assignments
  – E.g., for propositional variables q1, ..., qn, count how many assignments satisfy C1 ∧ C2 ∧ ... ∧ Cm
• This counting problem is #P-complete

Analysis of Exact Inference
• Worst case: a CPD is a table of size |Val({Xi} ∪ Pa_Xi)|
• Most analyses of complexity are stated as decision problems
  – We therefore consider the decision problem first, then the numerical one
• The natural decision version of the conditional probability task is given next

Conditional Probability Decision Task
• Bayesian Network Probability – Decision Problem (BN-Pr-DP):
  – Given a BN B over χ, a variable X ∈ χ, and a value x ∈ Val(X), decide whether P_B(X=x) > 0
• This decision problem can be shown to be NP-complete, as seen next

Proof that BN-Pr-DP is NP-complete
• Membership in NP:
  – Guess an assignment ξ to the network variables; check whether ξ assigns X=x and whether P(ξ) > 0
  – Some such guess succeeds iff P(X=x) > 0
  – Each check is done in linear time
• NP-hardness:
  – An answer for instances of BN-Pr-DP can be used to answer an NP-hard problem
  – We show a reduction from the 3-SAT problem

Reduction of 3-SAT to BN inference
• Given a 3-SAT formula φ, create a BN Bφ with a variable X such that φ is satisfiable iff P_{Bφ}(X=1) > 0
  – The network is the one sketched above: roots Q1, ..., Qn with P(qk = 1) = 0.5, clause nodes Ci that are deterministic ORs, and a chain of deterministic ANDs A1, ..., A_{m−2} ending in the output X
• If BN inference could be solved in polynomial time, we could also solve 3-SAT in polynomial time

Original Inference Problem
• The original problem is numerical rather than a decision problem
• Define BN-Pr:
  – Given a BN B over χ, a variable X ∈ χ, and a value x ∈ Val(X), compute P_B(X=x)
  – The task is to compute the total probability of the instantiations consistent with X=x, i.e., a weighted count of instantiations with the weights being their probabilities
• This problem is #P-complete

Analysis of Approximate Inference
• Metrics for the quality of an approximation:
• Absolute error
  – An estimate ρ has absolute error ε for P(y|e) if |P(y|e) − ρ| ≤ ε
  – A weak definition of error: if a rare disease has probability 0.0001, an error of 0.0001 is unacceptable, whereas if the probability is 0.3 an error of 0.0001 is fine
• Relative error
  – An estimate ρ has relative error ε for P(y|e) if ρ/(1+ε) ≤ P(y|e) ≤ ρ(1+ε)
  – ε = 4 means P(y|e) is at least 20% of ρ and at most 500% of ρ; for low-probability values this is far more meaningful than absolute error

Approximate Inference is NP-hard
• The following problem is NP-hard:
  – Given a BN B over χ, a variable X ∈ χ, and a value x ∈ Val(X), find a number ρ that has relative error ε for P_B(X=x)
• Proof:
  – It is NP-hard to decide whether P_B(X=x) > 0
  – Suppose an algorithm returns an estimate ρ of P_B(X=x) with relative error ε for some ε > 0
  – Then ρ > 0 if and only if P_B(X=x) > 0
  – Hence achieving any relative error ε is NP-hard

Inference Algorithms
• The worst case is exponential
• Two types of inference algorithms:
  – Exact
    • Variable elimination (see the sketch below)
    • Clique trees
  – Approximate
    • Optimization-based
      – Propagation with approximate messages
      – Variational methods (analytical approximations)
    • Particle-based (sampling)
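To make the exact algorithms concrete, here is a minimal sketch of variable elimination on a toy chain A → B → C with binary variables; the factor representation and all CPT numbers are invented for illustration, not taken from the lecture.

```python
from itertools import product

# Factors are (variables, table) pairs, where table maps assignment tuples
# of 0/1 values (in the order of the variable list) to real numbers.

def multiply(f1, f2):
    (v1, t1), (v2, t2) = f1, f2
    vs = v1 + [v for v in v2 if v not in v1]
    table = {}
    for assign in product((0, 1), repeat=len(vs)):
        a = dict(zip(vs, assign))
        table[assign] = (t1[tuple(a[v] for v in v1)] *
                         t2[tuple(a[v] for v in v2)])
    return (vs, table)

def sum_out(f, var):
    vs, t = f
    keep = [v for v in vs if v != var]
    idx = vs.index(var)
    table = {}
    for assign, val in t.items():
        key = tuple(x for i, x in enumerate(assign) if i != idx)
        table[key] = table.get(key, 0.0) + val
    return (keep, table)

# CPT factors P(A), P(B|A), P(C|B); all entries are made-up numbers.
fA  = (['A'],      {(0,): 0.6, (1,): 0.4})
fBA = (['B', 'A'], {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.3, (1, 1): 0.7})
fCB = (['C', 'B'], {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.25, (1, 1): 0.75})

# Eliminate A, then B; the resulting factor is the marginal P(C).
tau1 = sum_out(multiply(fA, fBA), 'A')       # factor over B
pC   = sum_out(multiply(tau1, fCB), 'B')     # factor over C
print(pC)    # {(0,): ..., (1,): ...}, entries sum to 1
```

On this chain the intermediate factors never involve more than two variables, whereas brute-force enumeration would build the full joint table; in general the cost of variable elimination depends on the sizes of the intermediate factors, and in the worst case it is still exponential, consistent with the hardness results above.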