Probabilistic Machines

Computational complexity, lecture 10

Class RP

Class RP (randomized polynomial time): a language L is in RP iff there is a polynomial T(n) and a machine M with a source of random bits, working in at most T(n) steps, such that:
● w∈L ⇒ Pr_s[(w,s)∈L_M] ≥ 0.5
● w∉L ⇒ ∄s. (w,s)∈L_M

As s we can take sequences of length T(n), or infinite sequences; it does not matter.

Intuition: a word is in L if at least half of the possible witnesses confirm this (and there are no witnesses for words not in L). In other words: if a word is not in L, we certainly reject; if it is in L, then choosing transitions randomly, we accept with probability at least 0.5.

Amplification

Fact (amplification): Instead of 0.5, in the definition of RP we can take any constant strictly between 0 and 1.

At the end of the previous lecture, as a side effect, we observed the following stronger version of amplification:

Fact: Suppose that a language L is recognized by a machine M with a source of random bits, working in polynomial time, such that for some polynomial p(n):
● w∈L ⇒ Pr_s[(w,s)∈L_M] ≥ 1/p(n) (the error probability may be almost 1)
● w∉L ⇒ ∄s. (w,s)∈L_M
Then L∈RP.

Proof:
● We execute the machine p(n) times and accept iff some run accepts. This is enough, since lim_{n→∞} (1 − 1/p(n))^{p(n)} = 1/e.
● For large n the probability that all runs fail is therefore < 0.5.
● There remain finitely many "small" n, and those we can handle separately.

We can go even further:

Fact: Let L∈RP. Then, for every polynomial q(n), there is a machine M with a source of random bits, working in polynomial time, such that:
● w∈L ⇒ Pr_s[(w,s)∈L_M] ≥ 1 − 1/2^{q(n)} (the error probability is exponentially small)
● w∉L ⇒ ∄s. (w,s)∈L_M

Proof: We take a machine that makes a mistake with probability < 1/2 and run it q(n) times, accepting iff some run accepts.

Examples of randomized algorithms

Perfect matching in a bipartite graph:
input: a bipartite graph G=(V1,V2,E), where |V1|=|V2|=n
question: is there a perfect matching in G?

Several deterministic algorithms are known for detecting whether a perfect matching exists; here we present a randomized one.

Solution:
● Consider the n×n matrix X whose entry X_ij is a variable x_ij if (i,j)∈E, and 0 otherwise.
● Recall that the determinant of X is
  \det(X) = \sum_{\sigma \in S_n} (-1)^{\operatorname{sign}(\sigma)} \prod_{i=1}^{n} X_{i,\sigma(i)}
● Every permutation in S_n is a potential perfect matching.
● A perfect matching exists iff the determinant is nonzero as a polynomial.
● The determinant itself, as a polynomial, is large (it may have exponentially many monomials).
● But for specific numbers substituted for the variables, we can compute it quickly (as fast as matrix multiplication).
● Randomized algorithm: substitute random values for the variables and check whether the determinant is nonzero (a sketch follows after this list).
● Advantage: the algorithm parallelizes (computing a determinant is in NC).
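Here is a minimal Python sketch of this test, assuming we substitute uniformly random elements of a large prime field and evaluate the determinant by Gaussian elimination; the justification for the substitution step is the Schwartz–Zippel lemma, by which a nonzero polynomial of degree n vanishes at a random point with probability at most n/p. The function names and the choice of field are our own illustration, not part of the lecture.

```python
import random

def det_mod_p(mat, p):
    """Determinant of a square integer matrix modulo a prime p,
    computed by Gaussian elimination in O(n^3) arithmetic operations."""
    n = len(mat)
    mat = [row[:] for row in mat]           # work on a copy
    det = 1
    for col in range(n):
        # find a row with a nonzero pivot in this column
        pivot = next((r for r in range(col, n) if mat[r][col] % p != 0), None)
        if pivot is None:
            return 0                        # no pivot: determinant is 0
        if pivot != col:
            mat[col], mat[pivot] = mat[pivot], mat[col]
            det = -det                      # a row swap flips the sign
        det = det * mat[col][col] % p
        inv = pow(mat[col][col], p - 2, p)  # modular inverse, since p is prime
        for r in range(col + 1, n):
            f = mat[r][col] * inv % p
            for c in range(col, n):
                mat[r][c] = (mat[r][c] - f * mat[col][c]) % p
    return det % p

def has_perfect_matching_rp(n, edges, p=(1 << 61) - 1):
    """RP-style one-sided test: a True answer is always correct; if a
    perfect matching exists, True is returned with probability >= 1 - n/p
    (Schwartz-Zippel). If there is no matching, det(X) is identically 0,
    so we always answer False."""
    x = [[random.randrange(1, p) if (i, j) in edges else 0
          for j in range(n)] for i in range(n)]
    return det_mod_p(x, p) != 0

# Example: edges of a 3x3 bipartite graph containing the identity matching.
print(has_perfect_matching_rp(3, {(0, 0), (1, 1), (2, 2), (0, 1)}))
```

With p = 2^61 − 1 the failure probability n/p is negligible for any reasonable n, and, as in the amplification facts above, a few independent repetitions drive it down further.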
Class PP

Class PP (probabilistic polynomial): like RP, but:
● w∈L ⇒ Pr_s[(w,s)∈L_M] ≥ 0.5
● w∉L ⇒ Pr_s[(w,s)∈L_M] < 0.5

Intuition: acceptance by a vote of the witnesses.

Advantages of this class:
● errors are allowed on both sides ⇒ the class is closed under complement
● it is a "syntactic" class ⇒ it has a complete problem, MAJSAT: is a given boolean formula satisfied by at least half of all possible valuations?

Disadvantages of this class: it is too large.
● In practice, M may give probabilities arbitrarily close to 0.5, so running it (even many times) does not tell us much about the result.
● NP ⊆ PP (tutorials).

Class BPP

Class BPP (bounded-error probabilistic polynomial): errors are allowed on both sides, but only small errors:
● w∈L ⇒ Pr_s[(w,s)∈L_M] ≥ 3/4
● w∉L ⇒ Pr_s[(w,s)∈L_M] ≤ 1/4

In other words: the probability of an error is ≤ 1/4 on both sides.

Remarks:
● easy fact: RP ⊆ BPP ⊆ PP
● we are not aware of any problem that is in BPP but not known to be in RP or in coRP
● BPP is a good candidate for the class of problems that can be solved quickly "in practice"
● open problem: how is BPP related to NP?
● tutorials: if NP ⊆ BPP, then NP ⊆ RP (i.e., NP = RP)
● conjecture (open problem): BPP = P ("randomization doesn't add anything")

Amplification for BPP: instead of the error 1/4, one can take any number p ∈ (0, 1/2).

Proof: Let the original error probability be p < 1/2. We run the algorithm 2m+1 times and take the majority decision. The majority is wrong only if at most m runs answer correctly, so the error probability decreases to
  \sum_{i=0}^{m} \binom{2m+1}{i} (1-p)^{i} p^{2m+1-i} \le \sum_{i=0}^{m} \binom{2m+1}{i} (1-p)^{m} p^{m+1} = 2^{2m} (1-p)^{m} p^{m+1} \le (4p(1-p))^{m}
(the i-th summand is the probability of precisely i correct answers). Since p < 1/2, we have 4p(1−p) < 1, so the bound tends to 0 as m grows. A small empirical check of this majority vote is sketched below.

Remark: As for RP, we can prove a stronger version: we can start from an algorithm with error probability 1/2 − 1/p(n) (barely better than a coin flip) and obtain an algorithm with error probability 1/2^{q(n)}, for arbitrary polynomials p(n), q(n). It is enough to take as m an appropriate polynomial.
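To make the majority-vote argument concrete, here is a small self-contained simulation. The noisy_answer stand-in for a BPP machine and all parameters are our own illustration, not part of the lecture.

```python
import random

def noisy_answer(truth, error=0.25):
    """Stand-in for one run of a BPP machine: returns the correct answer
    with probability 1 - error and the wrong one with probability error."""
    return truth if random.random() >= error else not truth

def majority_vote(truth, m, error=0.25):
    """Run the noisy algorithm 2m+1 times and return the majority answer."""
    runs = 2 * m + 1
    yes = sum(noisy_answer(truth, error) for _ in range(runs))
    return yes > runs // 2

# Compare the observed error of the majority vote with the (4p(1-p))^m bound.
m, error, trials = 10, 0.25, 10_000
wrong = sum(not majority_vote(True, m, error) for _ in range(trials))
print(f"observed error: {wrong / trials:.4f}")
print(f"proved bound:   {(4 * error * (1 - error)) ** m:.4f}")
```

The observed error stays well under the bound, since (4p(1−p))^m is obtained by rather generous estimates.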
Class ZPP

Two types of randomized algorithms:
● Monte Carlo: the algorithm is always fast, and the answer is usually correct – RP, BPP, PP
● Las Vegas: the answer is always correct, and the algorithm is usually fast – ZPP

Class ZPP (zero-error probabilistic polynomial time): the problems that can be solved in expected polynomial time.

How do we compute the expected running time?
● The machine has access to an infinite tape with random bits.
● Every bit is chosen independently (0 or 1, each with probability 0.5).
● In other words, a computation that halts after reading k random bits has probability 2^{−k}.
● The probability of looping forever is required to be 0.

Tutorials: ZPP = RP ∩ coRP (i.e., Las Vegas algorithms can be turned into Monte Carlo algorithms, and vice versa); a sketch of the truncation behind one direction appears at the end of these notes.

Non-uniform derandomization

Theorem (Adleman 1978): BPP ⊆ P/poly.

Remark: this theorem says that for every language in BPP there is a sequence of circuits of polynomial size which recognizes it. If this sequence were uniform, the language would be in P (it would be possible to derandomize every language in BPP). This is an open problem, though, so the sequence of circuits obtained in our proof must be "strange", i.e., difficult to compute.

Proof:
● Suppose that M recognizes L with error probability p ≤ 1/4.
● On an input of length n we repeat the computation 2·(3n)+1 times and take the majority answer; the error decreases to ≤ (4p(1−p))^{3n} = (3/4)^{3n}. The running time is still polynomial, and the number of random bits used is polynomial.
● The probability that a random sequence of bits gives an incorrect answer for one fixed input of length n is ≤ (3/4)^{3n}; hence, by the union bound over all 2^n inputs, the probability that it gives an incorrect answer for at least one input of length n is ≤ 2^n·(3/4)^{3n} = (27/32)^n < 1.
● Thus there exists a sequence of bits which gives the correct answer for every input of length n; we take this sequence of bits as the advice.

Derandomization

In general we only know that BPP ⊆ PSPACE, but some specific algorithms can be derandomized, and there are techniques for this.
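Returning to the ZPP section above: the following toy sketch shows the standard truncation that turns a Las Vegas algorithm into a Monte Carlo one. The search problem, the function names, and the choice of cutoff are our own illustration, not the tutorial's construction.

```python
import random

def las_vegas_find_one(bits):
    """Toy Las Vegas search: probe uniformly random positions until a 1 is
    found. The answer is always correct; only the running time is random.
    With k ones among n positions the expected number of probes is n/k,
    and (as ZPP requires, assuming the input contains at least one 1)
    the probability of looping forever is 0."""
    n = len(bits)
    while True:
        i = random.randrange(n)
        if bits[i] == 1:
            return i

def monte_carlo_find_one(bits, expected_probes):
    """Truncate the Las Vegas search after twice its expected running time.
    By Markov's inequality the truncated run still succeeds with probability
    >= 1/2, and any returned index is certainly correct: a one-sided,
    RP-style error, as in the inclusion ZPP ⊆ RP."""
    for _ in range(2 * expected_probes):
        i = random.randrange(len(bits))
        if bits[i] == 1:
            return i
    return None  # "don't know": happens with probability <= 1/2

bits = [0] * 999 + [1]                   # a single 1 among 1000 positions
print(las_vegas_find_one(bits))          # always 999; the time is random
print(monte_carlo_find_one(bits, 1000))  # usually 999, occasionally None
```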
