Randomized Complexity Classes

RP (Randomized Polynomial time)
- One-sided error

coRP (complement of RP)
- One-sided error

BPP (Bounded-error Probabilistic Polynomial time)
- Two-sided error

ZPP (Zero-error Probabilistic Polynomial time)
- Zero error
Randomized Polynomial time

Definition: RP
The complexity class RP is the class of all languages L for which there exists a polynomial PTM M such that
x ∈ L =⇒ Prob[M(x) = 1] ≥ 1/2
x ∉ L =⇒ Prob[M(x) = 0] = 1

Definition: coRP
The complexity class coRP is the class of all languages L for which there exists a polynomial PTM M such that
x ∈ L =⇒ Prob[M(x) = 1] = 1
x ∉ L =⇒ Prob[M(x) = 0] ≥ 1/2
Randomized Polynomial time

Let PTM M decide a language L ∈ RP:
x ∈ L =⇒ Prob[M(x) = 1] ≥ 1/2
x ∉ L =⇒ Prob[M(x) = 0] = 1
If M(x) = 1, then x ∈ L. If M(x) = 0, then x ∈ L or x ∉ L; we can't really tell.

Let PTM M decide a language L ∈ coRP:
x ∈ L =⇒ Prob[M(x) = 1] = 1
x ∉ L =⇒ Prob[M(x) = 0] ≥ 1/2
If M(x) = 0, then x ∉ L. If M(x) = 1, then x ∈ L or x ∉ L; we can't really tell.
RP is in NP

Theorem: RP ⊆ NP

Proof. Let L be a language in RP, and let M be a polynomial probabilistic Turing Machine that decides L. If x ∈ L, then there exists a sequence of coin tosses y such that M accepts x with y as the random string. So we can consider y as a certificate instead of a random string; y can be verified in polynomial time by the same machine. If x ∉ L, then Prob[M(x) = 1] = 0, so there is no certificate and M rejects x. Hence L ∈ NP.
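The coin-tosses-as-certificate idea can be illustrated with a toy one-sided test for compositeness. The trial-divisor test below is invented purely for illustration (its success probability is far below 1/2, so it is not a real RP algorithm); the point is only that an accepting run's random choice doubles as an NP certificate that a deterministic verifier can replay.

```python
import random

def randomized_accept(x, rng):
    """Toy one-sided test (hypothetical): accept composite x if a random
    trial divisor happens to divide it; it never accepts a prime."""
    d = rng.randrange(2, x)
    return x % d == 0

def verify(x, certificate):
    """Deterministic NP-style verifier: replay the run with the recorded
    random choice (here, the chosen divisor) and check that it accepts."""
    return 2 <= certificate < x and x % certificate == 0

# An accepting coin outcome for 91 is the divisor 7, since 91 = 7 * 13.
print(verify(91, 7))   # True
print(verify(97, 7))   # False: 97 is prime, so no certificate exists
```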
Boosting for RP

The constant 1/2 in the definition of RP can be replaced by any constant k, 0 < k < 1.
Since the error is one-sided, we can repeat the algorithm t times independently:
- We accept the string x if any of the t runs accepts, and reject x otherwise.
- Clearly, if x ∉ L, all t runs will return a 0.
- If x ∈ L and any of the t runs returns a 1, we return the correct answer.
- If x ∈ L and all t runs return 0, we make a mistake:
Pr[algorithm makes a mistake t times] ≤ 1/2^t
Thus, we can make the error exponentially small with a polynomial number of repetitions.
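The repetition scheme above is short to state in code. A minimal sketch, where `has_one_bit` is a made-up one-sided test (its per-run success probability is only 1/4 here, but as noted above any constant in (0, 1) amplifies the same way):

```python
import random

def amplify_rp(one_sided_test, x, t):
    """Run a one-sided-error test t times; accept iff any run accepts.

    If x is not in L, the test never errs, so we always reject correctly.
    If x is in L, each run may miss, but the miss probability is a
    constant < 1, so all t runs miss with exponentially small probability.
    """
    return any(one_sided_test(x) for _ in range(t))

# Toy one-sided test (hypothetical): decides "x contains a 1 bit" by
# sampling one random position; it can miss a 1 but never invents one.
def has_one_bit(bits):
    i = random.randrange(len(bits))
    return bits[i] == 1

random.seed(0)
print(amplify_rp(has_one_bit, [0, 0, 1, 0], t=200))  # almost surely True
print(amplify_rp(has_one_bit, [0, 0, 0, 0], t=200))  # always False
```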
Examples of RP and coRP

Example of RP
Primes, the set of all prime numbers, is in RP. This was shown by Adleman and Huang (1992), who gave a primality testing algorithm that always rejects composite numbers and accepts primes with probability at least 1/2.

Example of coRP
Primes is in coRP as well. Miller and Rabin gave a primality test that always accepts prime numbers but rejects composites with probability at least 1/2.
Bounded-error Probabilistic Polynomial time

Definition: BPP
The complexity class BPP is the class of all languages L for which there exists a polynomial PTM M such that
x ∈ L =⇒ Prob[M(x) = 1] ≥ 2/3
x ∉ L =⇒ Prob[M(x) = 1] < 1/3
M answers correctly with probability at least 2/3 on any input x, regardless of whether x ∈ L or x ∉ L.

Example of BPP
Primes, the set of all prime numbers, is in BPP.

BPP is closed under complement: BPP = coBPP.
Boosting for BPP

The constant 2/3 is arbitrary and can be improved as follows:
- Repeat the algorithm t times; say it returns X_i on the i-th run.
- Take the majority answer: if M returns 1 at least t/2 times, return 1, otherwise return 0.

The Chernoff Bound
Suppose X_1, ..., X_t are t independent random variables with values in {0, 1} and, for every i, Pr[X_i = 1] = p. Then for any ε > 0,
Pr[(1/t) Σ_{i=1}^t X_i − p > ε] < e^{−ε² t / (2p(1−p))}
Pr[(1/t) Σ_{i=1}^t X_i − p < −ε] < e^{−ε² t / (2p(1−p))}
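The bound can be checked empirically. A small simulation sketch, with p, t, and ε chosen arbitrarily for illustration:

```python
import math
import random

def empirical_tail(p, t, eps, trials=2000):
    """Estimate Pr[(1/t) * sum(X_i) - p > eps] for i.i.d. Bernoulli(p)
    variables X_1, ..., X_t by direct simulation."""
    hits = 0
    for _ in range(trials):
        mean = sum(random.random() < p for _ in range(t)) / t
        if mean - p > eps:
            hits += 1
    return hits / trials

random.seed(0)
p, t, eps = 0.5, 200, 0.1
# Chernoff bound from the slide: e^{-eps^2 t / (2 p (1-p))}
bound = math.exp(-eps ** 2 * t / (2 * p * (1 - p)))
print(empirical_tail(p, t, eps), "<=", bound)
```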
Boosting for BPP

x ∈ L: M outputs correctly if Σ_{i=1}^t X_i ≥ t/2, so the algorithm makes a mistake when Σ_{i=1}^t X_i < t/2.

Pr[algorithm outputs the wrong answer on x]
= Pr[Σ_{i=1}^t X_i < t/2]
= Pr[(1/t) Σ_{i=1}^t X_i < 1/2]
= Pr[(1/t) Σ_{i=1}^t X_i − 2/3 < −1/6]
≤ e^{−t / (36 · 2 · (2/3) · (1/3))}
= e^{−ct}, where c is some constant
≤ 1/2^n for t = (n ln 2)/c

So by running the algorithm O(n) times, we make the error probability exponentially small.
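The majority-vote scheme can be sketched as follows. The `noisy_test` stand-in, correct with probability 2/3 on either kind of input, is invented for illustration:

```python
import random

def amplify_bpp(two_sided_test, x, t):
    """Majority vote over t independent runs of a two-sided-error test.

    If each run is correct with probability p > 1/2, the Chernoff bound
    makes the majority wrong with probability exponentially small in t.
    """
    votes = sum(two_sided_test(x) for _ in range(t))
    return votes > t / 2

# Hypothetical BPP-style test: returns the true answer with probability 2/3.
def noisy_test(truth):
    return truth if random.random() < 2 / 3 else not truth

random.seed(1)
print(amplify_bpp(noisy_test, True, t=501))   # almost surely True
print(amplify_bpp(noisy_test, False, t=501))  # almost surely False
```

Using an odd t avoids ties; with t = 501 the majority errs with probability roughly e^(-501/18), far below the per-run error of 1/3.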
RP ⊆ BPP

RP:
x ∈ L ⇒ Prob[M(x) = 1] ≥ 1/2
x ∉ L ⇒ Prob[M(x) = 1] = 0

BPP:
x ∈ L ⇒ Prob[M(x) = 1] ≥ 2/3
x ∉ L ⇒ Prob[M(x) = 1] < 1/3

We can boost the RP probability by running the machine twice:
x ∈ L ⇒ Prob[M(x) = 1] ≥ 3/4
x ∉ L ⇒ Prob[M(x) = 1] = 0
Clearly this satisfies the definition of BPP.
Randomized Computation
Nabil Mustafa
Computational Complexity
Zero-error Probabilistic Polynomial time

Definition: ZPP
The complexity class ZPP is the class of all languages L for which there exists a polynomial PTM M such that
x ∈ L =⇒ M(x) = 1 or M(x) = 'Don't know'
x ∉ L =⇒ M(x) = 0 or M(x) = 'Don't know'
∀x, Prob[M(x) = 'Don't know'] ≤ 1/2

Whenever M answers with a 0 or a 1, it answers correctly. If M is not sure, it outputs 'Don't know'. On any input x, it outputs 'Don't know' with probability at most 1/2.
A Theorem on ZPP

Theorem: ZPP = RP ∩ coRP

Claim: ZPP ⊆ RP ∩ coRP
Show that ZPP ⊆ RP and ZPP ⊆ coRP. We prove ZPP ⊆ coRP; the other proof is similar.
Let L ∈ ZPP. Then there exists a PTM M that, for all x, either correctly decides whether x ∈ L or outputs 'Don't know'. Let M′ be the Turing Machine that, on input x, returns 1 if M(x) = 'Don't know' and otherwise returns M(x).
- x ∈ L: M(x) = 1 or M(x) = 'Don't know', so M′(x) = 1.
- x ∉ L: either M(x) = 0, which is correct, or M(x) = 'Don't know', in which case M′(x) answers incorrectly; this happens with probability ≤ 1/2.
So L ∈ coRP, and ZPP ⊆ coRP.
I x ∈/ L: M(x) = 0, which is correct
19 / 85 where M0(x) returns incorrectly with probability ≤ 1/2. So L ∈ coRP, and ZPP ⊆ coRP
A Theorem on ZPP Theorem ZPP = RP ∩ coRP
Claim: ZPP ⊆ RP ∩ coRP Show that ZPP ⊆ RP and ZPP ⊆ coRP We prove ZPP ⊆ coRP , the other proof is similar Let L ∈ ZPP . Then ∃ PTM M that, for all x, either correctly decides x ∈ L or outputs ‘Don’t know’. Let M0 be the Turing Machine that on input x, returns 1 if M(x) =‘Don’t know’ and otherwise returns M(x). 0 I x ∈ L: M(x) = 1 or M(x) =‘Don’t know’, so M (x) = 1.
I x ∈/ L: M(x) = 0, which is correct, or M(x) =‘Don’t know’
20 / 85 So L ∈ coRP, and ZPP ⊆ coRP
A Theorem on ZPP Theorem ZPP = RP ∩ coRP
Claim: ZPP ⊆ RP ∩ coRP Show that ZPP ⊆ RP and ZPP ⊆ coRP We prove ZPP ⊆ coRP , the other proof is similar Let L ∈ ZPP . Then ∃ PTM M that, for all x, either correctly decides x ∈ L or outputs ‘Don’t know’. Let M0 be the Turing Machine that on input x, returns 1 if M(x) =‘Don’t know’ and otherwise returns M(x). 0 I x ∈ L: M(x) = 1 or M(x) =‘Don’t know’, so M (x) = 1.
I x ∈/ L: M(x) = 0, which is correct, or M(x) =‘Don’t know’ where M0(x) returns incorrectly with probability ≤ 1/2.
21 / 85 A Theorem on ZPP Theorem ZPP = RP ∩ coRP
Claim: ZPP ⊆ RP ∩ coRP Show that ZPP ⊆ RP and ZPP ⊆ coRP We prove ZPP ⊆ coRP , the other proof is similar Let L ∈ ZPP . Then ∃ PTM M that, for all x, either correctly decides x ∈ L or outputs ‘Don’t know’. Let M0 be the Turing Machine that on input x, returns 1 if M(x) =‘Don’t know’ and otherwise returns M(x). 0 I x ∈ L: M(x) = 1 or M(x) =‘Don’t know’, so M (x) = 1.
I x ∈/ L: M(x) = 0, which is correct, or M(x) =‘Don’t know’ where M0(x) returns incorrectly with probability ≤ 1/2. So L ∈ coRP, and ZPP ⊆ coRP
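The mapping from a zero-error machine to a one-sided-error machine can be sketched as a toy simulation. Everything below is illustrative and not from the slides: `zpp_machine` stands in for a ZPP-style decider for the example language L = {strings of even length}, answering correctly or saying 'Don't know' (modeled as `None`) with probability 1/2.

```python
import random

# Toy ZPP-style decider for L = {strings of even length} (an assumed
# example language): with probability 1/2 it answers correctly,
# otherwise it outputs 'Don't know' (None). Illustrative only.
def zpp_machine(x: str):
    if random.random() < 0.5:
        return None                      # 'Don't know'
    return 1 if len(x) % 2 == 0 else 0   # correct answer

def m_prime(x: str) -> int:
    """The coRP-style machine M': map 'Don't know' to 1."""
    ans = zpp_machine(x)
    return 1 if ans is None else ans

# For x in L, M' always outputs 1; for x not in L, it outputs 1
# (incorrectly) only when zpp_machine said 'Don't know', i.e. with
# probability <= 1/2 -- exactly the coRP error pattern.
print(all(m_prime("ab") == 1 for _ in range(100)))  # "ab" in L: never rejected
```

Mapping 'Don't know' to 0 instead of 1 gives the symmetric RP-style machine, which is the other half of the inclusion.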
A Theorem on ZPP

Claim: RP ∩ coRP ⊆ ZPP

Let L ∈ RP ∩ coRP. Then there exist two PTMs such that:

I M1(x) = 1 for every x ∈ L; incorrect for x ∉ L with prob. ≤ 1/2 (the coRP machine).
I M2(x) = 0 for every x ∉ L; incorrect for x ∈ L with prob. ≤ 1/2 (the RP machine).

Construct a PTM M′ that works as follows on input x:

I Run M1(x); if M1(x) = 0, return x ∉ L.
I Run M2(x); if M2(x) = 1, return x ∈ L.
I Else return 'Don't know'.

Claim: If M′(x) returns a 0 or a 1, it is correct.

Since M1 accepts every x ∈ L, M1(x) = 0 guarantees x ∉ L; symmetrically, since M2 rejects every x ∉ L, M2(x) = 1 guarantees x ∈ L.

Claim: M′(x) returns 'Don't know' with prob. ≤ 1/2.

I Assume x ∈ L. Then M1(x) = 1, so we fail to return a 1 iff M2(x) = 0.
I By definition, M2(x) = 0 for x ∈ L with prob. ≤ 1/2.
I The case x ∉ L is similar.
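The combined machine M′ can likewise be sketched as a toy simulation. The one-sided machines `m1` and `m2` below are illustrative stand-ins for the example language L = {strings of even length}, not part of the slides: `m1` accepts every member of L (coRP-style) and `m2` rejects every non-member (RP-style), each erring on the other side with probability 1/2.

```python
import random

def in_language(x: str) -> bool:
    """Assumed example language L: strings of even length."""
    return len(x) % 2 == 0

def m1(x: str) -> int:
    # coRP-style: always 1 on members; wrongly 1 on non-members w.p. 1/2
    return 1 if in_language(x) or random.random() < 0.5 else 0

def m2(x: str) -> int:
    # RP-style: always 0 on non-members; wrongly 0 on members w.p. 1/2
    return 1 if in_language(x) and random.random() < 0.5 else 0

def m_prime(x: str):
    """The ZPP-style machine M' from the proof."""
    if m1(x) == 0:
        return 0     # M1 accepts all of L, so 0 is certainly correct
    if m2(x) == 1:
        return 1     # M2 rejects all non-members, so 1 is certainly correct
    return None      # 'Don't know', happens with probability <= 1/2

# Definite answers are always correct: a member is never rejected and a
# non-member is never accepted, matching the zero-error requirement.
print(all(m_prime("ab") in (1, None) for _ in range(100)))
print(all(m_prime("abc") in (0, None) for _ in range(100)))
```

Rerunning M′ until it gives a definite answer yields expected polynomial running time, which is the usual equivalent characterization of ZPP.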