Great Ideas in Theoretical Computer Science, Summer 2013
Lecture 7: Expander Graphs in Computer Science

Lecturer: Kurt Mehlhorn & He Sun

Over the past decades expanders have become a basic tool in various areas of computer science, including algorithm design, complexity theory, and coding theory. Informally, expanders are regular graphs with low degree and high connectivity. There are several equivalent ways to define expander graphs.

1. Combinatorially, expanders are highly connected graphs: to disconnect a large part of the graph, one has to sever many edges.

2. Geometrically, every vertex set has a relatively large boundary.

3. Probabilistically, expanders are graphs whose behavior is “like” random graphs.

4. Algebraically, expanders are graphs whose first positive eigenvalue of the Laplace operator is bounded away from zero.

1 Spectra of Graphs

Definition 1. Let G = (V,E) be an undirected graph with vertex set [n] ≜ {1, . . . , n}. The adjacency matrix of G is an n by n matrix A given by

    Ai,j = 1 if i and j are adjacent, and Ai,j = 0 otherwise.

If G is a multi-graph, then Ai,j is the number of edges between vertex i and vertex j. By definition, the adjacency matrix of graphs has the following properties: (1) The sum of elements in every row/column equals the degree of the corresponding vertex. (2) If G is undirected, then A is symmetric.

Example. The adjacency matrix of a triangle is

    ⎛ 0 1 1 ⎞
    ⎜ 1 0 1 ⎟
    ⎝ 1 1 0 ⎠

Given a matrix A, a vector x ≠ 0 is defined to be an eigenvector of A if and only if there is a λ ∈ C such that Ax = λx. In this case, λ is called an eigenvalue of A.

Definition 2 (graph spectrum). Let A be the adjacency matrix of an undirected graph G with n vertices. Then A has n real eigenvalues, denoted by λ1 ≥ · · · ≥ λn. These eigenvalues, together with their multiplicities, compose the spectrum of G.

Here are some basic facts about the graph spectrum.

Lemma 3. Let G be any undirected simple graph with n vertices. Then

1. Σ_{i=1}^n λi = 0.

2. Σ_{i=1}^n λi² = Σ_{i=1}^n deg(i).

3. If λ1 = ··· = λn, then E[G] = ∅.

4. degavg ≤ λ1 ≤ degmax.

5. √degmax ≤ λ1 ≤ degmax.

Proof. We only prove the first three items.

(1) Since G does not have self-loops, all the diagonal elements of A are zero. By the definition of trace, we have Σ_{i=1}^n λi = tr(A) = Σ_{i=1}^n Ai,i = 0.

(2) By the properties of the matrix trace, we have Σ_{i=1}^n λi² = tr(A²) = Σ_{i=1}^n (A²)i,i. Since (A²)i,i is the degree of vertex i, tr(A²) equals the sum of all vertices’ degrees in G.

(3) Combining Σ_{i=1}^n λi = 0 with λ1 = ··· = λn, we have λi = 0 for every i. By item (2), we have deg(i) = 0 for every vertex i. Therefore E[G] = ∅.
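Items (1) and (2) are exactly the trace identities used in the proof, so they can be checked without computing any eigenvalues; a small sketch (the path graph below is an arbitrary choice):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# The path graph 0-1-2-3, chosen arbitrarily for illustration.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
deg = [sum(row) for row in A]

# Item (1): the sum of the eigenvalues equals tr(A) = 0 (no self-loops).
assert trace(A) == 0
# Item (2): the sum of squared eigenvalues equals tr(A^2) = sum of degrees.
assert trace(matmul(A, A)) == sum(deg)
```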

For a graph G with adjacency matrix A and integer k ≥ 1, (A^k)u,v is the number of walks of length k from u to v.
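This walk-counting fact is easy to confirm by brute force on a small graph; a sketch (the 4-cycle is an arbitrary choice):

```python
from itertools import product

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of the 4-cycle 0-1-2-3-0.
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
n = len(A)

def count_walks(u, v, k):
    # Enumerate all vertex sequences of length k+1 from u to v and keep
    # those whose consecutive pairs are all edges.
    return sum(
        all(A[p][q] for p, q in zip((u,) + mid + (v,), mid + (v,)))
        for mid in product(range(n), repeat=k - 1)
    )

Ak = A
for k in range(2, 5):
    Ak = matmul(Ak, A)  # Ak = A^k
    assert all(Ak[u][v] == count_walks(u, v, k)
               for u in range(n) for v in range(n))
```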

Let ‖x‖p ≜ (Σ_{i=1}^n |xi|^p)^{1/p}. Then for any 1 ≤ p ≤ q < ∞, it holds that

    ‖x‖q ≤ ‖x‖p ≤ n^{1/p−1/q} · ‖x‖q.

Lemma 4. For any graph G with m edges, the number of cycles of length k in G is bounded by O(m^{k/2}).

Proof. Let A be the adjacency matrix of G with eigenvalues λ1, . . . , λn. The number of Ck (cycles of length k) in G is bounded by tr(A^k)/(2k) = Σ_{i=1}^n λi^k/(2k). For any k ≥ 3, it holds that

    (Σ_{i=1}^n λi^k)^{1/k} ≤ (Σ_{i=1}^n |λi|^k)^{1/k} ≤ (Σ_{i=1}^n |λi|²)^{1/2} = (2m)^{1/2}.

Hence tr(A^k) ≤ (2m)^{k/2} and the number of Ck is at most O(m^{k/2}).

Example. Some examples for different spectra of graphs:

• For the complete graph Kn, the eigenvalues are n − 1 with multiplicity 1 and −1 with multiplicity n − 1.

• For the complete bipartite graph Km,n, the eigenvalues are +√mn, −√mn, and 0 with multiplicity m + n − 2.

• For the cycle Cn, the spectrum is 2 cos(2πj/n) (j = 0, 1, . . . , n − 1).

Two assumptions that we make throughout the lecture are as follows:

1. We only consider undirected graphs. Note that if G is not undirected, then A is not symmetric any more and the eigenvalues of A could be complex numbers.
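The closed-form spectra listed above can be verified numerically. As a sketch, the following checks the cycle case by confirming that the complex vector vj = (ω^{jk})k with ω = e^{2πi/n} satisfies A vj = 2 cos(2πj/n) vj:

```python
import cmath
import math

n = 6  # any n >= 3 works; 6 is an arbitrary choice
# Adjacency matrix of the cycle C_n.
A = [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)]
     for i in range(n)]

for j in range(n):
    lam = 2 * math.cos(2 * math.pi * j / n)
    v = [cmath.exp(2 * math.pi * 1j * j * k / n) for k in range(n)]
    Av = [sum(A[a][b] * v[b] for b in range(n)) for a in range(n)]
    # A v_j = 2 cos(2*pi*j/n) v_j, up to floating-point error.
    assert all(abs(Av[a] - lam * v[a]) < 1e-9 for a in range(n))
```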

A matrix A = (ai,j)n×n is called a Hermitian matrix if ai,j is the complex conjugate of aj,i for every pair i, j. Hermitian matrices always have real eigenvalues.

2. Unless mentioned otherwise, we consider regular graphs.

Lemma 5. Consider any undirected graph G with adjacency matrix A.

1. If G is d-regular, then λ1 = d and |λi| ≤ d for i = 2, . . . , n.

2. G is connected iff λ2 < d, i.e., the eigenvalue d has multiplicity 1. Moreover, the number of connected components of G equals the multiplicity of eigenvalue d.

3. If G is connected, then G is bipartite iff λn = −d.

For studying regular graphs, it is convenient to work with the normalized adjacency matrix M of graph G. For any d-regular graph with adjacency matrix A, define

    M ≜ (1/d) · A.

We use λ1 ≥ · · · ≥ λn to denote the eigenvalues of matrix M of graph G. For regular graphs, λ1 = 1 and we mainly consider the second largest eigenvalue in absolute value. The formal definition is as follows.

Definition 6 (spectral expansion). The spectral expansion of graph G is defined by λ ≜ max{|λ2|, |λn|}, i.e.,

    λ = max_{‖x‖=1, x⊥u} ‖Mx‖,

where u = (1/√n, . . . , 1/√n) ∈ Rn.
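For the complete graph Kn this maximum can be computed by hand: any x ⊥ u satisfies Mx = −x/(n−1), so λ = 1/(n−1). A quick numerical check of the definition (the test vector below is an arbitrary unit vector orthogonal to u):

```python
import math

n = 5
d = n - 1
# Normalized adjacency matrix M = A/d of the complete graph K_n.
M = [[0.0 if i == j else 1 / d for j in range(n)] for i in range(n)]

# An arbitrary unit vector orthogonal to u = (1/sqrt(n), ..., 1/sqrt(n)).
x = [1 / math.sqrt(2), -1 / math.sqrt(2)] + [0.0] * (n - 2)
Mx = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

norm = math.sqrt(sum(t * t for t in Mx))
assert abs(norm - 1 / (n - 1)) < 1e-9  # spectral expansion of K_n is 1/(n-1)
```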

Courant-Fischer Formula. Let B be an n by n symmetric matrix with eigenvalues λ1 ≤ · · · ≤ λn and corresponding eigenvectors v1, . . . , vn. Then

    λ1 = min_{‖x‖=1} xᵀBx = min_{x≠0} (xᵀBx)/(xᵀx),

    λ2 = min_{‖x‖=1, x⊥v1} xᵀBx = min_{x≠0, x⊥v1} (xᵀBx)/(xᵀx),

    λn = max_{‖x‖=1} xᵀBx = max_{x≠0} (xᵀBx)/(xᵀx).

It is well known that λ relates to various graph properties. In particular, we will see that there is a close connection between λ and the expansion of the graph.

Lemma 7.

    λ ≥ √((n − d)/(d(n − 1))).

Proof. Follows from tr(M²) = n/d = Σ_{i=1}^n λi² ≤ 1 + (n − 1)λ².

Theorem 8 ([2]). Any infinite family of d-regular graphs {Gn}n∈N has spectral expansion (as n → ∞) at least 2√(d − 1)/d − o(1).

Definition 9 (Ramanujan graphs). A family of d-regular graphs with spectral expansion at most 2√(d − 1)/d is called a family of Ramanujan graphs.

Although Friedman [6] showed that random d-regular graphs are close to being Ramanujan in the sense that λ satisfies

    λ ≤ 2√(d − 1)/d + 2 log(d)/d + o(1),

constructing families of Ramanujan graphs with arbitrary degrees is one of the biggest open problems in this area. So far, we only know constructions of Ramanujan graphs for certain degrees, and these constructions are based on deep algebraic knowledge. See [8] for example. Another quite important problem is to find a combinatorial construction of Ramanujan graphs.

We list some more interesting facts on eigenvalues of graphs:

1. If graphs G and H are isomorphic, then there is a permutation matrix P such that P · A(G) · Pᵀ = A(H), and hence the matrices A(G) and A(H) are similar.

2. There are nonisomorphic graphs with the same spectrum. See Figure 1.

Figure 1: An example of two graphs which are not isomorphic but have the same spectrum. Their common graph spectrum is 2, 0, 0, 0, −2.

2 Combinatorial Expansion of Graphs

For any d-regular graph G = (V,E), let Γ(v) be the set of neighbors of v, i.e.,

    Γ(v) ≜ {u | (u, v) ∈ E}.

For any subset S ⊆ V , let Γ(S) ≜ ∪v∈S Γ(v) and Γ′(S) ≜ Γ(S) ∪ S. Moreover, for any set S ⊆ V we define ∂S ≜ E(S, S̄), the set of edges between S and its complement.

Definition 10 (vertex expansion). A graph G with n vertices is said to have vertex expansion (K, A) if

    min_{S : |S|≤K} |Γ(S)|/|S| ≥ A.

If K = n/2, then for simplicity we call G an A-expander.

Definition 11 (edge expansion). The edge expansion of a graph G = (V,E) is defined by

    h(G) ≜ min_{S : |S|≤|V|/2} |∂S|/|S|.

To explain edge expansion, let us see two examples. (1) If G is not connected, we choose one connected component as S, so that |E(S, S̄)| = 0. Therefore h(G) = 0. (2) If G is the complete graph Kn, then |E(S, S̄)| = |S| · (n − |S|) and h(G) = ⌈n/2⌉.
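Both examples can be confirmed by exhaustively checking all small sets S; a brute-force sketch (feasible only for tiny graphs):

```python
from itertools import combinations

def edge_expansion(n, edges):
    # h(G) = min over nonempty S with |S| <= n/2 of |E(S, S-bar)| / |S|.
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            s = set(S)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, cut / k)
    return best

# (2) The complete graph K_4: h(K_4) = ceil(4/2) = 2.
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
assert edge_expansion(4, k4) == 2
# (1) A disconnected graph (two disjoint edges): h(G) = 0.
assert edge_expansion(4, [(0, 1), (2, 3)]) == 0
```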

Definition 12 (expanders). Let d ∈ N. A sequence of d-regular graphs {Gi}i∈N of size increasing with i is a family of expanders if there is a constant ε > 0 such that h(Gi) ≥ ε for all i.

Usually, when speaking of an expander Gi, we actually mean a family of graphs {Gi}i∈N, where each graph in {Gi}i∈N is d-regular and its expansion is lower bounded by a constant ε > 0.

Observation 13. Any expander graph is a connected graph.

3 Spectral Expansion vs. Combinatorial Expansion

The next result shows that small spectral expansion implies large vertex expansion.

Theorem 14 (spectral expansion ⇒ vertex expansion). If G has spectral expansion λ, then for all 0 < α < 1, G has vertex expansion (αn, 1/((1 − α)λ² + α)).

Before showing the proof, we introduce some notation. For any probability distribution π, the support of π is defined by support(π) = {x : πx > 0}.

Definition 15. Given a probability distribution π, the collision probability of π is defined to be the probability that two independent samples from π are equal, i.e., CP(π) = Σx πx².

Lemma 16. Let u = (1/n, . . . , 1/n) be the uniform distribution. Then for every probability distribution π ∈ [0, 1]^n, we have

1. CP(π) = ‖π‖² = ‖π − u‖² + 1/n.

2. CP(π) ≥ 1/|support(π)|.

Proof. (1) We write π as π = u + (π − u), where u ⊥ (π − u). By the Pythagorean theorem,

    CP(π) = ‖π‖² = ‖π − u‖² + ‖u‖² = ‖π − u‖² + 1/n.

(2) By Cauchy-Schwarz inequality, we get

    1 = (Σ_{x∈support(π)} πx)² ≤ |support(π)| · Σx πx²,

and hence

    CP(π) = Σx πx² ≥ 1/|support(π)|.
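Both items of Lemma 16 are straightforward to check numerically; a sketch with an arbitrary distribution of support size 2:

```python
n = 4
u = [1 / n] * n
pi = [0.5, 0.5, 0.0, 0.0]  # arbitrary distribution with support of size 2

cp = sum(p * p for p in pi)  # collision probability CP(pi)

# Item (1): CP(pi) = ||pi - u||^2 + 1/n.
dist2 = sum((p - q) ** 2 for p, q in zip(pi, u))
assert abs(cp - (dist2 + 1 / n)) < 1e-12

# Item (2): CP(pi) >= 1/|support(pi)| (equality for a uniform-on-support pi).
support = sum(1 for p in pi if p > 0)
assert cp >= 1 / support - 1e-12
```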

Let us turn to the proof of Theorem 14.

Proof. Let |S| ≤ αn. Choose the probability distribution π that is uniform on S and 0 on S̄, i.e.,

    π = (1/|S|, 1/|S|, . . . , 1/|S|, 0, . . . , 0).

Note that M is a real symmetric matrix, so M has n orthonormal eigenvectors v1, . . . , vn. Hence we can decompose π as π = Σ_{i=1}^n πi, where πi is a constant multiple of vi. Then CP(π) = 1/|S|, and by Lemma 16 (2) it holds that

    CP(Mπ) ≥ 1/|support(Mπ)| = 1/|Γ(S)|.

On the other hand, by item (1) of Lemma 16 and the fact that π1 = u and Mu = u, we have

    CP(Mπ) − 1/n = ‖Mπ − u‖²
                 = ‖Mu + Mπ2 + ··· + Mπn − u‖²
                 = ‖λ2π2 + ··· + λnπn‖²
                 ≤ λ²‖π − u‖² = λ²(CP(π) − 1/n) = λ²(1/|S| − 1/n).

Hence

    1/|Γ(S)| − 1/n ≤ CP(Mπ) − 1/n ≤ λ²(1/|S| − 1/n),

and

    |Γ(S)| ≥ 1/(λ²(1/|S| − 1/n) + 1/n) = |S|/(λ²(1 − |S|/n) + |S|/n) ≥ |S|/(λ²(1 − α) + α) = |S|/((1 − α)λ² + α).

Theorem 17 (vertex expansion ⇒ spectral expansion). Let G be a d-regular graph. For every δ > 0 and d > 0, there exists γ > 0 such that if G is a d-regular (1 + δ)-expander according to Definition 10, then G has spectral expansion at most 1 − γ. Specifically, we can take γ = Ω(δ²/d).

When talking about expanders, we often mean a family of d-regular graphs satisfying one of the following two equivalent properties:

• Every graph in the family has spectral expansion at most λ for some constant λ < 1.

• Every graph in the family is a (1 + δ)-expander for some constant δ > 0.

4 Existence and Constructibility of Expander Graphs

Expander graphs have two seemingly contradictory properties: low degree and high connectivity. Two general problems are existence and constructibility of expander graphs. Among these two problems, existential proofs of expanders are easier, as one can resort to probabilistic techniques. Further, the existence of expanders can often be used as a black box to show the existence of other interesting combinatorial objects. On the other hand, many applications of expanders need explicit constructions. We will mention some explicit constructions in this lecture, but they do not always match the bounds from the existential proofs. Let Gd,N be the set of bipartite graphs with bipartite sets L, R of size N and left degree d. The following theorem shows the existence of expanders.

Theorem 18. For any d, there exists an α(d) > 0, such that for all N

Pr [ G is an (αN, d − 2)-expander ] ≥ 1/2,

where G is chosen uniformly from Gd,N.

Proof. Define

    pk ≜ Pr[∃S ⊆ L : |S| = k, |Γ(S)| < (d − 2)|S|].

By the union bound, G fails to be an (αN, d − 2)-expander with probability at most Σ_{k≤αN} pk. Assume that there is a set S of size k with |Γ(S)| < (d − 2)|S|. Then there are at least 2k repeats among all the neighbors of vertices in S. We calculate the probability

    Pr[at least 2k repeats among all the neighbors of vertices in S] ≤ (dk choose 2k) · (dk/N)^{2k}.

Therefore

    pk ≤ (N choose k) · (dk choose 2k) · (dk/N)^{2k}
       ≤ (Ne/k)^k · (dke/(2k))^{2k} · (dk/N)^{2k}
       = (cd⁴k/N)^k,

where c = e³/4. By setting α = 1/(4cd⁴), for every k ≤ αN we get pk ≤ 4^{−k}, and

    Pr[G is not an (αN, d − 2)-expander] ≤ Σ_{k=1}^{αN} pk ≤ Σ_{k=1}^{αN} 4^{−k} ≤ 1/2.

Let us now turn to the constructibility of expanders.

Definition 19. Let {Gi}i∈N be a family of expander graphs where Gi is a d-regular graph on ni vertices and the integers {ni} are increasing, but not too fast (e.g., ni+1 ≤ ni² will do).

1. The family is called Mildly Explicit if there is an algorithm that generates the j-th graph in the family {Gi}i∈N in time polynomial in j.

2. The family is called Very Explicit if there is an algorithm that, on input of an integer i, a vertex v ∈ V(Gi) and k ∈ {1, . . . , d}, computes the k-th neighbor of the vertex v in the graph Gi. The algorithm’s running time should be polynomial in its input length.

Example. A family of 3-regular graphs on p vertices, for prime numbers p. Let G = (Zp, E). Each vertex x ∈ Zp is connected to x + 1, x − 1 and x⁻¹. (The inverse of 0 is defined to be 0.)
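Given the prime p, the neighbors of any vertex are directly computable; a sketch (using Fermat's little theorem for the modular inverse):

```python
p = 7  # any prime

def neighbors(x):
    # x is connected to x+1, x-1 and its inverse mod p (inverse of 0 is 0).
    inv = 0 if x == 0 else pow(x, p - 2, p)  # Fermat: x^(p-2) = x^(-1) mod p
    return [(x + 1) % p, (x - 1) % p, inv]

# Every vertex has exactly 3 neighbors (counted with multiplicity).
assert all(len(neighbors(x)) == 3 for x in range(p))
# The relation is symmetric, so the construction defines an undirected graph.
for x in range(p):
    for y in neighbors(x):
        assert x in neighbors(y)
```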

The family of expanders above is just mildly explicit, since at present we are unable to generate large prime numbers deterministically.

Example (Margulis, 1973). Fix a positive integer M and let [M] = {1, 2, · · · , M}. Define the bipartite graph G = (V, E) as follows. Let V = [M]² ∪ [M]², where vertices in the first partite set are denoted (x, y)1 and vertices in the second partite set are denoted (x, y)2. From each vertex (x, y)1, put in edges to

(x, y)2, (x, x + y)2, (x, x + y + 1)2, (x + y, y)2, (x + y + 1, y)2,

where all arithmetic is done modulo M. Then G is an expander.

Example (Jimbo and Maruoka, 1987). Let G = (L ∪ R, E) be the graph described above. Then for all X ⊂ L, |Γ(X)| ≥ |X|(1 + d0(1 − |X|/n)), where d0 = (2 − √3)/4 is the optimal constant.

5 Expander Mixing Lemma

Consider two experiments on a d-regular graph G: (1) pick a random vertex u ∈ V and then pick one of its neighbors v uniformly at random; (2) pick two vertices u, v ∈ V randomly and independently. For sets S, T ⊆ V, what is the probability of the event (u, v) ∈ S × T in these two experiments? For the first experiment, the probability is |E(S, T)|/(nd). The probability for the second experiment is µ(S) · µ(T), where µ(S) ≜ |S|/n is the density of the set S. As for the random bits used, it is easy to see that the first experiment uses log n + log d random bits and the second one uses 2 log n random bits. However, we will show that for graphs with good expansion these two probabilities are quite close to each other.

Lemma 20 (Expander Mixing Lemma). [3] Let G = (V,E) be a d-regular n-vertex graph with spectral expansion λ. Then for any subset S,T ⊆ V , we have

    | |E(S, T)| − d|S| · |T|/n | ≤ λd√(|S| · |T|).

Let us look at the two terms on the left-hand side: |E(S, T)| is the number of edges between the two sets, and d|S| · |T|/n is the expected number of edges between S and T in a random graph with edge density d/n. So small λ implies that G behaves “more” like a random graph.

Proof. Let 1S, 1T be the characteristic vectors of S and T. Expand these vectors in the orthonormal basis of eigenvectors v1, · · · , vn, i.e., 1S = Σi αivi and 1T = Σi βivi. Then

    |E(S, T)| = 1Sᵀ A 1T = (Σi αivi)ᵀ A (Σi βivi) = Σi λiαiβi,

where the λi are the eigenvalues of A. Since α1 = ⟨1S, v1⟩ = |S|/√n, β1 = |T|/√n and λ1 = d, we get

    |E(S, T)| = d · |S| · |T|/n + Σ_{i=2}^n λiαiβi.

Thus

    | |E(S, T)| − d|S||T|/n | ≤ | Σ_{i=2}^n λiαiβi | ≤ λd · Σ_{i=2}^n |αiβi|.

By the Cauchy-Schwarz inequality, we have

    | |E(S, T)| − d|S||T|/n | ≤ λd · ‖1S‖ · ‖1T‖ = λd√(|S| · |T|).
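For the complete graph Kn (where λ = 1/(n−1)) the lemma can be verified exhaustively over disjoint S and T; a brute-force sketch:

```python
import math
from itertools import combinations

n, d = 6, 5        # complete graph K_6
lam = 1 / (n - 1)  # spectral expansion of K_n

for k1 in range(1, 4):
    for S in combinations(range(n), k1):
        rest = [v for v in range(n) if v not in S]
        for k2 in range(1, 4):
            for T in combinations(rest, k2):
                e = k1 * k2  # in K_n every pair between disjoint S, T is an edge
                lhs = abs(e - d * k1 * k2 / n)
                rhs = lam * d * math.sqrt(k1 * k2)
                assert lhs <= rhs + 1e-9
```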

Some remarks about the expander mixing lemma are in order.

Lemma 21 (Converse of the Expander Mixing Lemma). [5] Let G be a d-regular graph and suppose that

    | |E(S, T)| − d|S||T|/n | ≤ θd√(|S||T|)

holds for every two disjoint sets S, T and for some positive θ. Then λ = O(θ(1 + log(d/θ))).

In the following, we use a triple (n, d, λ) to represent an n-vertex d-regular graph with spectral expansion λ.

Corollary 22. The size of any independent set in an (n, d, λ)-graph is at most λn.

Proof. Let S be an independent set and take T = S. Then |E(S, S)| = 0, and by the Expander Mixing Lemma we get d|S|²/n ≤ λd|S|, i.e., |S| ≤ λn.

Corollary 23. For any (n, d, λ)-graph G, the chromatic number χ(G) ≥ 1/λ.

Proof. Let c : V → {1, . . . , k} be a coloring of G. Then for every 1 ≤ i ≤ k, c⁻¹(i) is an independent set. Since the size of every independent set is at most λn, the number of colors k is at least n/(λn) = 1/λ.

6 Cheeger’s Inequality

The Expander Mixing Lemma states that on graphs with good expansion the edges are well distributed, and that the spectral expansion is closely related to the behavior of any cut in the graph. Cheeger’s inequality provides another result of that flavour.

Theorem 24 (Cheeger’s Inequality). Let G be a d-regular graph and let the eigenvalues of A(G) be λ1 ≥ . . . ≥ λn. Then

    (d − λ2)/2 ≤ h(G) ≤ √(2d(d − λ2)).
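Both sides of Cheeger's inequality can be checked on the cycle Cn, where λ2 = 2 cos(2π/n) is known in closed form and h(G) can be found by brute force:

```python
import math
from itertools import combinations

n, d = 6, 2
edges = [(i, (i + 1) % n) for i in range(n)]
lam2 = 2 * math.cos(2 * math.pi / n)  # second-largest eigenvalue of C_n

# Brute-force edge expansion h(C_n).
h = min(
    sum(1 for u, v in edges if (u in S) != (v in S)) / len(S)
    for k in range(1, n // 2 + 1)
    for S in map(set, combinations(range(n), k))
)

assert (d - lam2) / 2 <= h + 1e-9                  # Cheeger lower bound
assert h <= math.sqrt(2 * d * (d - lam2)) + 1e-9   # Cheeger upper bound
```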

7 Random Walks on Graphs

We now focus on random walks on graphs. We assume G = (V, E) is an undirected, unweighted and connected d-regular graph. Recall that M = (1/d) · A is the normalized adjacency matrix, which will be the transition matrix of the random walk on G.

Lemma 25. Let M be any symmetric transition matrix. Then for any probability vector x:

    ‖M^t x − π‖2 ≤ λ^t,

where π = (1/n, . . . , 1/n) is the uniform distribution and λ = max{|λ2|, |λn|}. In particular, for t = O(log n/ log(1/λ)) = O(log n/(1 − λ)), it holds that

    ‖M^t x − π‖∞ ≤ 1/(2n),

i.e., for all u, v ∈ V , (M^t)u,v ∈ [1/(2n), 3/(2n)].

Proof. Since M is a real, symmetric matrix, M has n orthonormal eigenvectors v1, . . . , vn. Let x be any starting probability distribution on the vertices of G. Then we can decompose x uniquely as x = Σ_{i=1}^n xi, where xi = αivi. Since x is a probability vector and all vi with i ≥ 2 are orthogonal to π = (1/n, . . . , 1/n), it follows that x1 = π. Then,

    ‖Mx − π‖² = ‖M(Σ_{i=1}^n αivi) − π‖²
              = ‖π + Σ_{i=2}^n αiλivi − π‖² = ‖Σ_{i=2}^n αiλivi‖²
              = Σ_{i=2}^n ‖αiλivi‖²
              ≤ λ² Σ_{i=2}^n ‖αivi‖²
              = λ² ‖Σ_{i=2}^n αivi‖²
              = λ² ‖x − π‖².

Taking square roots yields

    ‖Mx − π‖ ≤ λ‖x − π‖,

so that for any t ≥ 1,

    ‖M^t x − π‖ = ‖M(M^{t−1}x) − π‖ ≤ λ‖M^{t−1}x − π‖ ≤ · · · ≤ λ^t‖x − π‖.

Finally,

    ‖M^t x − π‖ ≤ λ^t · ‖x − π‖ ≤ λ^t · ‖x‖ ≤ λ^t · ‖x‖1 = λ^t,

where we have used that

    ‖x − π‖² + ‖π‖² = ‖x‖²,

since x − π and π are orthogonal, which immediately implies ‖x − π‖ ≤ ‖x‖.

Hence ‖M^t x − π‖∞ ≤ ‖M^t x − π‖2 ≤ 1/(2n) when t = O(log n/ log(1/λ)) = O(log n/(1 − λ)). For the last step, note that log(1/λ) ≥ 1 − λ for all λ ∈ (0, 1], so log n/ log(1/λ) ≤ log n/(1 − λ).
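The single-step contraction in the proof is easy to observe numerically on Kn, where λ = 1/(n−1); a sketch starting from a point distribution:

```python
import math

n = 4
d = n - 1
M = [[0.0 if i == j else 1 / d for j in range(n)] for i in range(n)]
lam = 1 / d  # spectral expansion of K_n

pi = [1 / n] * n
x = [1.0, 0.0, 0.0, 0.0]  # start concentrated on one vertex

def step(x):
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

prev = dist(x, pi)
for t in range(10):
    x = step(x)
    cur = dist(x, pi)
    assert cur <= lam * prev + 1e-12  # each step contracts by a factor lambda
    prev = cur

# After t steps: ||M^t x - pi||_2 <= lambda^t (since ||x - pi|| <= ||x||_1 = 1).
assert dist(x, pi) <= lam ** 10 + 1e-12
```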

8 Probability Amplification by Random Walks on Expanders

Suppose we run a randomized sampling algorithm for which there is an unknown (bad) set B ⊆ V on which the algorithm fails to solve the problem. For instance, B could describe all possible choices for the (random) bits on which the algorithm gives a wrong answer. Then after t repetitions of the algorithm we find the correct answer if and only if at least one sample lies outside B. While the obvious way to amplify the success probability is to generate t independent samples, there is a more clever and somewhat surprising solution. One performs a t-step random walk, with a random starting vertex, on an expander graph with vertex set V. Despite the strong dependence between consecutive vertices of the walk, it turns out that the probability that the random walk hits at least one vertex outside B is very close to the probability we have when all t vertices are sampled independently from V.


Figure 2: Illustration of Independent Samples and the corresponding random walk. B represents the set of “bad” inputs that we seek to avoid.

Theorem 26 (Ajtai-Komlós-Szemerédi (1987), Alon-Feige-Wigderson-Zuckerman (1995), [1, 4]). Let G be a d-regular graph with n vertices and spectral expansion λ. Let B ⊆ V with |B| = βn. Then,

    Pr[B] ≜ Pr[∀i ≤ t : Xi ∈ B] ≤ (β + λ)^t.

Proof. Define a new matrix P̃ as follows:

    P̃u,v = 1 if u = v ∈ B, and P̃u,v = 0 otherwise.

Lemma 27. Let π = (1/n, . . . , 1/n) be the uniform vector. Then,

    Pr[B] = ‖(P̃M)^t P̃π‖1.

Proof. Note that P̃π is the vector which is 1/n at components corresponding to vertices in B and 0 otherwise. Moreover, (M^t)u,v is the probability for a random walk on G starting from u to be located at v at step t. Since P̃ is a projection matrix, ((P̃M)^t)u,v is the same probability as before, but now the random walk is additionally required to use only vertices in B. We now argue (slightly) more formally:

    [(P̃M)^t P̃π]u = Σ_{w∈V} [(P̃M)^t]u,w · [P̃π]w = Σ_{w∈V} [(P̃M)^t]u,w · 1_{w∈B} · (1/n) = (1/n) Σ_{w∈B} [(P̃M)^t]u,w.

Hence the claim of the lemma follows by summing over u.

We continue with another lemma.

Lemma 28. For any probability vector v ∈ Rn, it holds that

    ‖P̃MP̃v‖2 ≤ (β + λ) · ‖v‖2.

Proof. We first note that we can assume P̃v = v. Otherwise, we replace v by P̃v, which does not change the left-hand side and can only make the right-hand side smaller. For the same reason, we can also assume that v is non-negative, and by scaling we may also assume that Σi vi = 1. Hence, we can express v as follows:

    P̃v = v = π + z,

where z = v − π is orthogonal to π. With this, we obtain

    P̃MP̃v = P̃Mv = P̃Mπ + P̃Mz = P̃π + P̃Mz,

and thus

    ‖P̃MP̃v‖2 ≤ ‖P̃π‖2 + ‖P̃Mz‖2.    (1)

We now bound the two summands on the right-hand side separately. By the Cauchy-Schwarz inequality we have

    1 = Σi vi = Σi 1_{i∈B} · vi ≤ √(βn) · ‖v‖2.

Since ‖P̃π‖2 = √(β/n), combining with the inequality above (which gives ‖v‖2 ≥ 1/√(βn)) yields

    ‖P̃π‖2 ≤ β · ‖v‖2.

For the other summand, recall that z is orthogonal to π. Hence, z = Σ_{i=2}^n civi, where vi is the i-th eigenvector of M. Moreover, since P̃ is a projection,

    ‖P̃Mz‖2 ≤ ‖Mz‖2 = ‖Σ_{i=2}^n ciλivi‖2 ≤ λ · ‖Σ_{i=2}^n civi‖2 = λ · ‖z‖2 ≤ λ · ‖v‖2,

where the last inequality holds since v = π + z with z orthogonal to π, so

    ‖z‖2² + ‖π‖2² = ‖v‖2².

Plugging the two inequalities into (1) yields the lemma.

By combining the two lemmas, we obtain the theorem as follows:

    ‖(P̃M)^t P̃π‖1 ≤ √n · ‖(P̃M)^t P̃π‖2 = √n · ‖(P̃MP̃)^t π‖2 ≤ √n · (β + λ)^t · ‖π‖2 = (β + λ)^t.
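The quantity ‖(P̃M)^t P̃π‖1 can be computed exactly for a small expander; a sketch on Kn (with λ = 1/(n−1) and an arbitrarily chosen bad set B) checking the (β+λ)^t bound:

```python
n, d = 8, 7
M = [[0.0 if i == j else 1 / d for j in range(n)] for i in range(n)]
B = {0, 1}             # the "bad" set, chosen arbitrarily
beta = len(B) / n      # beta = 1/4
lam = 1 / d            # spectral expansion of K_n

def apply_M(x):
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

def project(x):
    # The projection P: zero out all coordinates outside B.
    return [x[i] if i in B else 0.0 for i in range(n)]

t = 5
x = project([1 / n] * n)     # P pi
for _ in range(t):
    x = project(apply_M(x))  # one application of P M

prob_stay_in_B = sum(x)      # ||(P M)^t P pi||_1
assert prob_stay_in_B <= (beta + lam) ** t + 1e-12
```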

There exist various extensions of Theorem 26; for instance, where only a subset of the time steps in {1, . . . , t} is considered, or where the set to be avoided changes over time. We refer to [7] and the references therein for further details. Let us now apply Theorem 26 to a probabilistic algorithm A for a language L ∈ RP (the class of problems with one-sided error).

Complexity Class RP. The complexity class RP consists of all languages L for which there exists a polynomial-time randomized algorithm A such that

    x ∈ L ⟹ Pr[A(x) = 1] ≥ 3/4,
    x ∉ L ⟹ Pr[A(x) = 1] = 0.

To decide whether a given input x is in L, the algorithm A samples a random string r ∈ {0, 1}^k of length k and computes in polynomial time a boolean function A(x, r). If x ∉ L, then A(x, r) = 0 for all r. If x ∈ L, then the probability (over r) that A(x, r) = 0 is at most β = 1/4. Now take a graph G = (V, E) with V = {0, 1}^k which has spectral expansion λ, and suppose that λ is sufficiently small compared with the error probability β of the given algorithm. Then the new algorithm Ã based on random walks is defined as follows:

(1) Pick a vertex X0 ∈ V uniformly at random.

(2) Perform a random walk (X0, X1, . . . , Xt) of length t.

(3) Return ⋁_{i=0}^t A(x, Xi).

    Algorithm            Error Probability    Random Bits
    Rand. Algorithm      1/4                  k
    t Repetitions        (1/4)^t              t · k
    t-step Random Walk   (1/4)^t              k + O(t) · log(d)

Figure 3: Comparison of the methods for probability amplification. If k is a sufficiently large constant, then for any value of t, the t-step Random Walk algorithm requires fewer random bits to achieve the same error probability as the t repetitions.

By definition of RP, we only need to consider the case where x ∈ L. By Theorem 26,

    Pr[Ã fails] ≤ Pr[∀i : Xi ∈ B] ≤ (β + λ)^t,

which is at most 2^{−Ω(t)} assuming that λ is a constant smaller than 3/4, so that β + λ < 1. Adjusting the constants, we conclude that the error probability is at most 4^{−t} if we use k + O(t) · log(d) random bits.

Complexity Class BPP. The complexity class BPP consists of all languages L for which there exists a polynomial-time randomized algorithm A such that

    x ∈ L ⟹ Pr[A(x) = 1] ≥ 3/4,
    x ∉ L ⟹ Pr[A(x) = 0] ≥ 3/4.

For this class, we can achieve a similar probability amplification by performing a random walk of length t and returning the majority vote. For more details, see [7].

9 Application: Superconcentrators

References

[1] Miklós Ajtai, János Komlós, and Endre Szemerédi. Deterministic simulation in logspace. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing, pages 132–140, 1987.

[2] N. Alon. Eigenvalues and expanders. Combinatorica, 6(2):83–96, 1986.

[3] N. Alon and F. R. K. Chung. Explicit construction of linear sized tolerant networks. Discrete Mathematics, 72(1-3):15–19, 1988.

[4] Noga Alon, Uriel Feige, Avi Wigderson, and David Zuckerman. Derandomized graph products. Computational Complexity, 5(1):60–75, 1995.

[5] Y. Bilu and N. Linial. Lifts, discrepancy and nearly optimal spectral gap. Combina- torica, 26(5):495–519, 2006.

[6] Joel Friedman. On the second eigenvalue and random walks in random d-regular graphs. Combinatorica, 11:331–362, 1991.

[7] Shlomo Hoory, Nathan Linial, and Avi Wigderson. Expander graphs and their applications. Bulletin (New Series) of the American Mathematical Society, 43(4):439–561, 2006.

[8] A. Lubotzky, R. Phillips, and P. Sarnak. Ramanujan graphs. Combinatorica, 8:261– 277, 1988.
