58th Annual IEEE Symposium on Foundations of Computer Science

Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space

Jack Murtagh∗, Omer Reingold†, Aaron Sidford‡, and Salil Vadhan∗
∗School of Engineering & Applied Sciences, Harvard University, Cambridge, MA USA. Email: [email protected], [email protected]
†Computer Science Department, Stanford University, Stanford, CA USA. Email: [email protected]
‡Management Science & Engineering, Stanford University, Stanford, CA USA. Email: [email protected]

Abstract—We give a deterministic Õ(log n)-space algorithm for approximately solving linear systems given by Laplacians of undirected graphs, and consequently also approximating hitting times, commute times, and escape probabilities for undirected graphs. Previously, such systems were known to be solvable by randomized algorithms using O(log n) space (Doron, Le Gall, and Ta-Shma, 2017) and hence by deterministic algorithms using O(log^{3/2} n) space (Saks and Zhou, FOCS 1995 and JCSS 1999).

Our algorithm combines ideas from time-efficient Laplacian solvers (Spielman and Teng, STOC '04; Peng and Spielman, STOC '14) with ideas used to show that UNDIRECTED S-T CONNECTIVITY is in deterministic logspace (Reingold, STOC '05 and JACM '08; Rozenman and Vadhan, RANDOM '05).

Keywords—space complexity; derandomization; expander graphs; spectral sparsification; random walks; linear systems

I. INTRODUCTION

A. The RL vs. L Problem

One of the central problems in computational complexity is to understand the power of randomness for efficient computation. There is strong evidence that randomness does not provide a substantial savings for standard algorithmic problems, where the input is static and the algorithm has time to access it in its entirety. Indeed, under widely believed complexity assumptions (e.g. that SAT has circuit complexity 2^{Ω(n)}), it is known that every randomized algorithm can be made deterministic with only a polynomial slowdown (BPP = P, RP = P) and a constant-factor increase in space (BPL = L, RL = L) [1], [2].

A major challenge is to prove such derandomization results unconditionally, without relying on unproven complexity assumptions. In the time-bounded case, it is known that proving that RP = P requires progress on circuit lower bounds [3], [4]. But for the space-bounded case, there are no such barriers, and we can hope for an unconditional proof that RL = L. Indeed, Nisan [5] gave an unconditional construction of a pseudorandom generator for space-bounded computation with seed length O(log² n), which was used by Saks and Zhou [6] to prove that RL ⊆ SPACE(log^{3/2} n) =: L^{3/2}. Unfortunately, despite much effort over the past two decades, this remains the best known upper bound on the deterministic space complexity of RL.

For many years, the most prominent example of a problem in RL not known to be in L was UNDIRECTED S-T CONNECTIVITY, which can be solved in randomized logarithmic space by performing a polynomial-length random walk from the start vertex s and accepting if the destination vertex t is ever visited [7]. In 2005, Reingold [8] gave a deterministic logspace algorithm for this problem. Since UNDIRECTED S-T CONNECTIVITY is complete for SL (symmetric logspace) [9], this also implies deterministic logspace algorithms for several other natural problems, such as BIPARTITENESS [10], [11].

Reingold's algorithm provided hope for renewed progress on the general RL vs. L problem. Indeed, it was shown in [12] that solving S-T connectivity on directed graphs where the random walk is promised to have polynomial mixing time (and where t has noticeable stationary probability) is complete for RL (generalized to promise problems). While [12] was also able to generalize Reingold's methods to Eulerian directed graphs, the efforts to handle all of RL with this approach stalled. Thus, researchers turned back to constructing pseudorandom generators for more restricted models of space-bounded computation, such as various types of constant-width branching programs [13], [14], [15], [16], [17], [18], [19], [20], [21].

B. Our Work

In this paper, we restart the effort to obtain deterministic logspace algorithms for increasingly rich graph-theoretic problems beyond those known to be in SL. In particular, we provide a nearly logarithmic-space algorithm for approximately solving UNDIRECTED LAPLACIAN SYSTEMS, which also implies nearly logarithmic-space algorithms for approximating hitting times, commute times, and escape probabilities for random walks on undirected graphs.

Our algorithms are obtained by combining the techniques from recent nearly linear-time Laplacian solvers [22], [23] with methods related to Reingold's algorithm [24]. The body of work on time-efficient Laplacian solvers has recently been extended to Eulerian directed graphs [25], which in turn was used to obtain algorithms to approximate stationary probabilities for arbitrary directed graphs with polynomial mixing time through recent reductions [26].

This raises the tantalizing possibility of extending our nearly logarithmic-space Laplacian solvers in a similar way to prove that RL ⊆ L̃ =: SPACE(Õ(log n)), since approximating stationary probabilities on digraphs with polynomial mixing time suffices to solve all of RL [12], [27], and Reingold's algorithm has been extended to Eulerian digraphs [12], [24].

C. Laplacian system solvers

Given an undirected multigraph G with adjacency matrix A and diagonal degree matrix D, the Laplacian of G is the matrix L = D − A. Solving systems of equations Lx = b in the Laplacians of graphs (and in general symmetric diagonally dominant systems) arises in many applications throughout computer science. In a seminal paper, Spielman and Teng gave a nearly linear time algorithm for solving such linear systems [22], which was improved in a series of follow-up works [28], [29], [30], [23], [31] including recent extensions to directed graphs [26], [25]. These methods have sparked a large body of work using Laplacian solvers as an algorithmic primitive for problems such as max flow [32], [33], randomly sampling spanning trees [34], sparsest cut [35], graph sparsification [36], as well as problems in computer vision [37].

Recent works have started to study the space complexity of solving Laplacian systems. Ta-Shma [38] gave a quantum logspace algorithm for approximately solving general linear systems that are suitably well-conditioned. Doron, Le Gall, and Ta-Shma [39] showed that there is a randomized logspace algorithm for approximately solving Laplacian systems on digraphs with polynomial mixing time.

D. Main Result

We give a nearly logarithmic-space algorithm for approximately solving undirected Laplacian systems:

Theorem I.1. There is a deterministic algorithm that, given an undirected multigraph G = (V, E) specified as a list of edges, a vector b ∈ Q^|V| in the image of G's Laplacian L, and an approximation parameter ε > 0, finds a vector x̃ ∈ Q^|V| such that

‖x̃ − x*‖ ≤ ε · ‖x*‖

for some x* such that Lx* = b, in space

O(log n · log log(n/ε)),

where n is the bitlength of the input (G, b, ε). In particular, for ε = 1/poly(n), the algorithm uses space Õ(log n). Note that since the algorithm applies to multigraphs (represented as edge lists), we can handle polynomially large integer edge weights. Known reductions [26] imply the same space bounds for estimating hitting times, commute times, and escape probabilities for random walks on undirected graphs. (See Section VIII.)

E. Techniques

The starting point for our algorithm is the undirected Laplacian solver of Peng and Spielman [23], which is a randomized algorithm that uses polylogarithmic parallel time and a nearly linear amount of work. (Interestingly, concurrently and independently of [8], Trifonov [40] gave an Õ(log n)-space algorithm for UNDIRECTED S-T CONNECTIVITY by importing techniques from parallel algorithms, like we do.) It implicitly¹ computes an approximate pseudoinverse L⁺ of a graph Laplacian (formally defined in Section II-B), which is equivalent to approximately solving Laplacian systems. Here we will sketch their algorithm and how we obtain a space-efficient analogue of it.

By using the deterministic logspace algorithm for UNDIRECTED S-T CONNECTIVITY [8], we may assume without loss of generality that our input graph G is connected (else we find the connected components and work on each separately). By adding self-loops (which does not change the Laplacian), we may also assume that G is regular and nonbipartite. For notational convenience, here and through most of the paper we will work with the normalized Laplacian, which in the case that G is regular² equals L = L(G) = I − M, where M = M(G) is the n × n transition matrix for the random walk on G, and now we are using n for the number of vertices in G. Since the uniform distribution is stationary for the random walk on a regular graph, the all-ones vector 1 is in the kernel of L. Because G is connected, there is no other stationary distribution and L is of rank n − 1. Thus computing L⁺ amounts to inverting L on the space orthogonal to 1.

The Peng–Spielman algorithm is based on the following identity:

(I − M)⁺ = (1/2) · [I − J + (I + M)(I − M²)⁺(I + M)],

where J is the matrix with all entries 1/n. That is,

L(G)⁺ = (1/2) · [I − J + (I + M(G))L(G²)⁺(I + M(G))],    (1)

where G² is the multigraph whose edges correspond to walks of length 2 in G.

¹Nearly linear-time algorithms for Laplacian system solvers do not have enough time to write down a full approximate pseudoinverse, which may be dense; instead they compute the result of applying the approximate matrix to a given vector.
²In an irregular graph, the normalized Laplacian is defined to be I − D^{−1/2}AD^{−1/2}, where D is the diagonal matrix of vertex degrees. When G is d-regular, we have D = dI, so D^{−1/2}AD^{−1/2} = A/d = M.
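As a quick sanity check of identity (1), the following sketch (our illustration, not from the paper) verifies it numerically on a small connected regular multigraph, computing both sides with dense pseudoinverses; the graph and parameters are arbitrary choices.

```python
import numpy as np

# Sanity check of identity (1) for a connected regular multigraph:
# (I - M)^+ = 1/2 * [ I - J + (I + M)(I - M^2)^+(I + M) ].
rng = np.random.default_rng(0)
n = 8
shift = np.eye(n)[(np.arange(n) + 1) % n]   # directed cycle as a permutation matrix
P = np.eye(n)[rng.permutation(n)]           # one extra random permutation
A = shift + shift.T + P + P.T               # symmetric, 4-regular, connected multigraph
M = A / 4.0                                  # random-walk transition matrix
I, J = np.eye(n), np.ones((n, n)) / n

lhs = np.linalg.pinv(I - M)
rhs = 0.5 * (I - J + (I + M) @ np.linalg.pinv(I - M @ M) @ (I + M))
print(np.allclose(lhs, rhs))                 # expected: True
```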

This gives rise to a natural algorithm for computing L(G)⁺, by recursively computing L(G²)⁺ and applying Equation (1). After squaring the graph k = O(log n) times, we are considering all walks of length 2^k, which is beyond the O(n²) mixing time of regular graphs, and hence M(G^{2^k}) is approximately J, and we can easily approximate the pseudoinverse as L(G^{2^k})⁺ ≈ (I − J)⁺ = I − J.

However, each time we square the graph, the degree also squares, which is too costly in terms of computational complexity. Thus, to obtain nearly linear time, Peng and Spielman [23] sparsify G² at each step of the recursion. Specifically, they carefully combine elements of randomized sparsification procedures from [41], [42], [43] that retain Õ(n)/ε² edges from G² and provide a spectral ε-approximation G̃ to G², in the sense that for every vector v we have vᵀL(G̃)v = (1 ± ε)vᵀL(G²)v. The recursion of Equation (1) behaves nicely with respect to spectral approximation, so that if we replace G² with such an ε-approximation G̃ at each step, we obtain an O(kε)-approximation over the k = O(log n) levels of recursion provided ε ≤ 1/O(k). Thus, we can take ε = 1/O(log n) and obtain a good spectral approximation overall, using graphs with Õ(n) edges throughout and thereby maintaining a nearly linear amount of work.

There are two issues with trying to obtain a deterministic Õ(log n)-space algorithm from this approach. First, we cannot perform random sampling to sparsify. Second, even if we derandomize the sampling step in deterministic logarithmic space, a direct recursive implementation would cost O(log n) space for each of the k = O(log n) levels of recursion, for a space bound of O(log² n).

For the first issue, we show that the sparsification can be done deterministically using the derandomized square of Rozenman and Vadhan [24], which was developed in order to give a simpler proof that UNDIRECTED S-T CONNECTIVITY is in L. If G is a d-regular graph, a derandomized square of G is obtained by using an expander H on d vertices to select a subset of the walks of length 2 in G. If H has degree c ≪ d, then the corresponding derandomized square of G will have degree d · c ≪ d². In [24], it was shown that the derandomized square improves expansion (as measured by spectral gap) nearly as well as actual squaring, to within a factor of 1 − μ, where μ is the second largest eigenvalue of H in absolute value. Here we show that derandomized squaring actually provides a spectral O(μ)-approximation to G². Consequently, we can use derandomized squaring with expanders of second eigenvalue 1/O(log n) and hence degree c = polylog(n) in the Peng–Spielman algorithm.

Now, to obtain a space-efficient implementation, we consider what happens when we repeatedly apply Equation (1), but with true squaring replaced by derandomized squaring. We obtain a sequence of graphs G₀, G₁, ..., G_k, where G₀ = G and Gᵢ is the derandomized square of G_{i−1}, a graph of degree d · cⁱ. We recursively compute spectral approximations of the pseudoinverses L(Gᵢ)⁺ as follows:

L(G_{i−1})⁺ ≈ (1/2) · [I − J + (I + M(G_{i−1}))L(Gᵢ)⁺(I + M(G_{i−1}))].    (2)

Opening up all k = O(log n) levels of this recursion, there is an explicit quadratic matrix polynomial p, where each of the (noncommuting) variables appears at most twice in each term, such that

L(G₀)⁺ ≈ p(I + M(G₀), I + M(G₁), ..., I + M(G_k)).

Our task is to evaluate this polynomial in space Õ(log n). First, we use the fact, following [8], [24], that a careful implementation of k-fold derandomized squaring using explicit expanders of degree c can be evaluated in space O(log n + k · log c) = Õ(log n). (A naive evaluation would take space O(k · log n) = O(log² n).) This means that we can construct each of the matrices I + M(Gᵢ) in space Õ(log n).³ Next, we observe that each term in the multilinear polynomial multiplies at most 2k + 1 of these matrices together, and hence can be computed recursively in space O(log k) · O(log n) = Õ(log n). Since iterated addition is also in logspace, we can then sum to evaluate the entire matrix polynomial in space Õ(log n).

To obtain an arbitrarily good final approximation error ε > 0, we could use expanders of degree c = poly((log n)/ε) for our derandomized squaring, but this would yield a space complexity of at least k · log c ≥ log(1/ε) · log n. To obtain the doubly-logarithmic dependence on 1/ε claimed in Theorem I.1, we follow the same approach as [23], computing a constant-factor spectral approximation as above, and then using "Richardson iterations" at the end to improve the approximation factor. Interestingly, this doubly-logarithmic dependence on ε is even better than what is achieved by the randomized algorithm of [39], which uses space O(log(n/ε)).

³We follow the usual model of space-bounded computation, where algorithms can have output (to write-only memory) that is larger than their space bound. We review the standard composition lemma for such algorithms in Section VI-A.
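To make the recursion concrete, the sketch below (our illustration, not the paper's algorithm) unrolls Equation (1) for k levels with true squaring and replaces the innermost pseudoinverse by I − J; the space-efficient algorithm instead substitutes derandomized squares for the true squares.

```python
import numpy as np

# Unroll identity (1): approximate L(G)^+ by recursing on L(G^2)^+ and
# substituting I - J once M^{2^k} is (approximately) the all-1/n matrix J.
def approx_pinv(M, k):
    n = len(M)
    I, J = np.eye(n), np.ones((n, n)) / n
    if k == 0:
        return I - J                      # innermost level: (I - M^{2^k})^+ ~ I - J
    inner = approx_pinv(M @ M, k - 1)     # approximates (I - M^2)^+
    return 0.5 * (I - J + (I + M) @ inner @ (I + M))

# random walk on an odd cycle with a self loop at every vertex
# (connected, aperiodic, 3-regular)
n = 15
A = np.eye(n)
for v in range(n):
    A[v, (v + 1) % n] += 1
    A[v, (v - 1) % n] += 1
M = A / 3.0
L = np.eye(n) - M
print(np.linalg.norm(approx_pinv(M, 25) - np.linalg.pinv(L)))  # small error
```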

II. PRELIMINARIES

A. Graph Laplacians

Let G = (V, E) be an undirected multigraph on n vertices. The adjacency matrix A is the n × n matrix such that entry A_{ij} contains the number of edges from vertex i to vertex j in G. The degree matrix D is the n × n diagonal matrix such that D_{ii} is the degree of vertex i in G. The Laplacian of G is defined to be

L = D − A.

L is a symmetric, positive semidefinite matrix with non-positive off-diagonal entries and diagonal entries L_{ii} = Σ_{j≠i} |L_{ij}| equaling the sum of the absolute values of the off-diagonal entries in that row. Each row of a Laplacian sums to 0, so the all-ones vector is in the kernel of every Laplacian.

We will also work with the normalized Laplacian LD^{−1} = I − M, where M = AD^{−1} is the transition matrix of the random walk on the graph G.

Definition II.1. Let G be a regular undirected graph with transition matrix M. Then we define

λ(G) = max_{v⊥1, v≠0} ‖Mv‖/‖v‖ = 2nd largest eigenvalue of M in absolute value.

The spectral gap of G is γ(G) = 1 − λ(G).

The spectral gap of a multigraph is a good measure of its expansion properties. It is shown in [24] that if G is a d-regular multigraph on n vertices with a self loop on every vertex then λ(G) ≤ 1 − 1/(2d²n²).

Throughout the paper we let J denote the n × n matrix with 1/n in every entry. So I − J is the normalized Laplacian of the complete graph with a self loop on every vertex. We use 1 to denote the all-ones vector.

B. Moore–Penrose pseudoinverse

Since a normalized Laplacian L = I − M is not invertible, we will consider the Moore–Penrose pseudoinverse of L.

Definition II.2. Let L be a real-valued matrix. The Moore–Penrose pseudoinverse L⁺ of L is the unique matrix satisfying the following:
1) LL⁺L = L
2) L⁺LL⁺ = L⁺
3) L⁺L = (L⁺L)ᵀ
4) LL⁺ = (LL⁺)ᵀ.

Fact II.3. For a real-valued matrix L, L⁺ has the following properties:
1) ker(L⁺) = ker(L)
2) im(L⁺) = im(L)
3) If c is a nonzero scalar then (cL)⁺ = c⁻¹L⁺
4) L⁺ is real-valued
5) If L is symmetric with eigenvalues λ₁, ..., λ_n, then L⁺ has the same eigenvectors as L and eigenvalues λ₁⁺, ..., λ_n⁺, where λᵢ⁺ = 1/λᵢ if λᵢ ≠ 0 and λᵢ⁺ = 0 if λᵢ = 0.

Next we show that solving a linear system in L can be reduced to computing the pseudoinverse of L and applying it to a vector.

Proposition II.4. If Lx = b has a solution, then the vector L⁺b is a solution.

Proof: Let x* be a solution. Multiplying both sides of the equation Lx* = b by LL⁺ gives

LL⁺Lx* = LL⁺b  ⟹  Lx* = LL⁺b  ⟹  b = LL⁺b.

The final equality shows that L⁺b is a solution to the system.

C. Spectral approximation

We want to approximate a solution to Lx = b. Our algorithm works by computing an approximation L̃⁺ to the matrix L⁺ and then outputting L̃⁺b as an approximate solution to Lx = b. We use the notion of spectral approximation of matrices first introduced in [41].

Definition II.5. Let X, Y be symmetric n × n real matrices. We say that X ≈_ε Y if

e^{−ε} · X ⪯ Y ⪯ e^{ε} · X,

where for any two matrices A, B we write A ⪯ B if B − A is positive semidefinite.

Below are some additional useful facts about spectral approximation that we use throughout the paper.

Proposition II.6 ([23]). If X, Y, W, Z are positive semidefinite n × n matrices then
1) If X ≈_ε Y then Y ≈_ε X
2) If X ≈_ε Y and c ≥ 0 then cX ≈_ε cY
3) If X ≈_ε Y then X + W ≈_ε Y + W
4) If X ≈_ε Y and W ≈_ε Z then X + W ≈_ε Y + Z
5) If X ≈_{ε₁} Y and Y ≈_{ε₂} Z then X ≈_{ε₁+ε₂} Z
6) If X ≈_ε Y and M is any matrix then MᵀXM ≈_ε MᵀYM
7) If X ≈_ε Y and X and Y have the same kernel, then X⁺ ≈_ε Y⁺
8) If X ≈_ε Y then I ⊗ X ≈_ε I ⊗ Y, where I is the identity matrix (of any dimension) and ⊗ denotes the tensor product.

The proof of Proposition II.6 can be found in the full version of this paper.
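Definition II.5 is easy to test numerically for small matrices: X ≈_ε Y exactly when both Y − e^{−ε}X and e^{ε}X − Y are positive semidefinite. The helper below is our illustration (not from the paper).

```python
import numpy as np

def is_psd(Z, tol=1e-9):
    # symmetric part guards against tiny asymmetries from floating point
    return np.min(np.linalg.eigvalsh((Z + Z.T) / 2)) >= -tol

def spectral_approx(X, Y, eps):
    # Definition II.5: e^{-eps} X <= Y <= e^{eps} X in the PSD order
    return is_psd(Y - np.exp(-eps) * X) and is_psd(np.exp(eps) * X - Y)

X = np.diag([1.0, 2.0, 3.0])
Y = np.diag([1.05, 1.9, 3.1])
print(spectral_approx(X, Y, 0.1))    # True
print(spectral_approx(X, Y, 0.01))   # False
```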

III. MAIN THEOREM AND APPROACH

Our Laplacian solver works by approximating L⁺. The main technical result is:

Theorem III.1. Given an undirected, connected multigraph G with Laplacian L = D − A and ε > 0, there is a deterministic algorithm that computes L̃⁺ such that L̃⁺ ≈_ε L⁺ and that uses space O(log n · log log(n/ε)), where n is the bitlength of the input.

Approximating solutions to Laplacian systems follows as a corollary and is discussed in Section VIII. To prove Theorem III.1 we first reduce the problem to the task of approximating the pseudoinverse of the normalized Laplacian of a regular, aperiodic multigraph with degree a power of 2. Let L = D − A be the Laplacian of an undirected multigraph G. We can make G f-regular, with f a power of 2, and aperiodic by adding an appropriate number of self loops to every vertex. Let E be the diagonal matrix of self loops added to each vertex in G. Then L = D + E − A − E is the Laplacian of the regular, aperiodic multigraph (self loops do not change the unnormalized Laplacian) and L(D + E)⁻¹ = L/f is the normalized Laplacian. Recalling that (L/f)⁺ = f · L⁺ completes the reduction.

Our algorithm for computing the pseudoinverse of the normalized Laplacian of a regular, aperiodic multigraph is based on the Laplacian solver of Peng and Spielman [23]. It works by using the following identity:

Proposition III.2. If L = I − M is the normalized Laplacian of an undirected, connected, regular, aperiodic multigraph G on n vertices then

L⁺ = (1/2) · [I − J + (I + M)(I − M²)⁺(I + M)].

This identity is adapted from the one used in [23]. A proof of its correctness can be found in the full version of this paper.

Recall that squaring the transition matrix M of a regular multigraph G yields the transition matrix of G², which is defined to be the graph on the same vertex set as G whose edges correspond to walks of length 2 in G. So the identity from Proposition III.2 reduces the problem of computing the pseudoinverse of the normalized Laplacian of G to computing the pseudoinverse of the normalized Laplacian of G² (plus some additional matrix products). Recursively applying the identity k times results in a matrix polynomial involving the term (I − M^{2^k})⁺. Since G is connected and aperiodic, as k grows, the term (I − M^{2^k})⁺ approaches (I − J)⁺ = I − J. Undirected multigraphs have polynomial mixing time, so setting k to O(log n) and then replacing (I − M^{2^k})⁺ in the final term of the expansion with (I − J) should give a good approximation to L⁺ without explicitly computing any pseudoinverses.

Using the identity directly to approximate L⁺ in Õ(log n) space is infeasible because raising a matrix to the 2^k-th power by repeated squaring takes space Θ(k · log n), which is Θ(log² n) for k = Θ(log n).

To save space we use the derandomized square introduced in [24] in place of true matrix squaring. The derandomized square improves the connectivity of a graph almost as much as true squaring does, but it does not square the degree and thereby avoids the space blow-up when iterated.

IV. APPROXIMATION OF THE PSEUDOINVERSE

In this section we show how to approximate the pseudoinverse of a Laplacian using a method from Peng and Spielman [23]. The Peng–Spielman solver was originally written to approximately solve symmetric diagonally dominant systems, which in turn can be used to approximate solutions to Laplacian systems. We adapt their algorithm to compute the pseudoinverse of a graph Laplacian directly. The approximation proceeds in two steps. The first achieves a constant spectral approximation to the pseudoinverse of a Laplacian matrix. Then the constant approximation is boosted to an ε-approximation through rounds of Richardson iteration.

Theorem IV.1 (Adapted from [23]). Let ε₀, ..., ε_k ≥ 0 and ε = Σ_{i=0}^{k} εᵢ. Let M₀, ..., M_k be symmetric matrices such that Lᵢ = I − Mᵢ is positive semidefinite for all i ∈ {0, ..., k}, Lᵢ ≈_{εᵢ} I − M_{i−1}² for all 1 ≤ i ≤ k, and L_k ≈_{ε_k} I − J. Then

L₀⁺ ≈_ε (1/2)(I − J) + Σ_{i=0}^{k−1} (1/2^{i+2}) Wᵢ + (1/2^{k+1}) W_k,

where for all i ∈ [k]

Wᵢ = (I + M₀) · ... · (I + Mᵢ)(I − J)(I + Mᵢ) · ... · (I + M₀).

The proof of this theorem is adapted from [23] and can be found in the full version of this paper.

Our algorithm works by using Theorem IV.1 to compute a constant approximation to L⁺ and then boosting the approximation to an arbitrary ε-approximation. Our main tool for this is the following lemma, which shows how an approximate pseudoinverse can be improved to a high quality approximate pseudoinverse. This is essentially a symmetric version of a well known technique in linear system solving known as preconditioned Richardson iteration. It is a slight modification of Lemma 31 from [44].

Lemma IV.2. Let L̃⁺ ≈_α L⁺ for α < 1/2. Then

L̃_k = e^{−α} · L̃⁺ · Σ_{i=0}^{k} (I − e^{−α} · L · L̃⁺)^i

is a matrix satisfying L̃_k ≈_{O((2α)^k)} L⁺.

The proof of Lemma IV.2 can be found in the full version of this paper.
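The following sketch (our illustration, not from the paper) runs the preconditioned Richardson iteration behind Lemma IV.2 on a tiny example, starting from a deliberately crude symmetric guess with the correct kernel; the paper instead obtains its starting point from Theorem IV.1, and the scaling e^{−α} is kept only for form (we take α = 0 here).

```python
import numpy as np

def richardson_boost(L, P0, k, alpha=0.0):
    # returns e^{-alpha} * sum_{i=0}^{k} (I - e^{-alpha} P0 L)^i * P0
    P = np.exp(-alpha) * P0
    R = np.eye(len(L)) - P @ L
    acc, term = np.zeros_like(L), np.eye(len(L))
    for _ in range(k + 1):
        acc += term            # acc = I + R + ... + R^i
        term = term @ R
    return acc @ P

# normalized Laplacian of the 5-cycle with a self loop at each vertex
n = 5
A = np.eye(n)
for v in range(n):
    A[v, (v + 1) % n] += 1
    A[v, (v - 1) % n] += 1
L = np.eye(n) - A / 3.0
P0 = 0.5 * (np.eye(n) - np.ones((n, n)) / n)   # crude guess with the right kernel
for k in [0, 2, 8, 32]:
    print(k, np.linalg.norm(richardson_boost(L, P0, k) - np.linalg.pinv(L)))
```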

V. DERANDOMIZED SQUARING

To approximate the pseudoinverse of a graph Laplacian we will use Theorem IV.1 to get a constant approximation and then use Lemma IV.2 to boost the approximation. In order to achieve a space efficient and deterministic implementation of Theorem IV.1, we will replace every instance of matrix squaring with the derandomized square of Rozenman and Vadhan [24]. Before defining the derandomized square we define two-way labelings and rotation maps.

Definition V.1 ([45]). A two-way labeling of a d-regular undirected multigraph G is a labeling of the edges in G such that
1) Every edge (u, v) has two labels in [d], one as an edge incident to u and one as an edge incident to v.
2) For every vertex v the labels of the edges incident to v are distinct.

In [24], two-way labelings are referred to as undirected two-way labelings. In a two-way labeling, each vertex has its own labeling from 1 to d for the d edges incident to it. Since every edge is incident to two vertices, each edge receives two labels, which may or may not be the same. It is convenient to specify a multigraph with a two-way labeling by a rotation map:

Definition V.2 ([45]). Let G be a d-regular multigraph on n vertices with a two-way labeling. The rotation map Rot_G : [n] × [d] → [n] × [d] is defined as follows: Rot_G(v, i) = (w, j) if the ith edge incident to vertex v leads to vertex w and this edge is the jth edge incident to w.

Note that Rot_G is its own inverse, and that any function Rot : [n] × [d] → [n] × [d] that is its own inverse defines a d-regular multigraph on n vertices with a two-way labeling.

Recall that the edges in G² correspond to all of the walks of length 2 in G. This is equivalent to placing a d-clique with self loops on every vertex on the neighbor set of every vertex in G. The derandomized square picks out a subset of the walks of length 2 by placing a small degree expander on the neighbor set of every vertex rather than a clique.

Definition V.3 ([24]). Let G be a d-regular multigraph on n vertices with a two-way labeling, and let H be a c-regular multigraph on d vertices with a two-way labeling. The derandomized square G ⓢ H is a (d · c)-regular graph on n vertices with rotation map Rot_{G ⓢ H} defined as follows: For v₀ ∈ [n], i₀ ∈ [d], and j₀ ∈ [c], we compute Rot_{G ⓢ H}(v₀, (i₀, j₀)) as
1) Let (v₁, i₁) = Rot_G(v₀, i₀)
2) Let (i₂, j₁) = Rot_H(i₁, j₀)
3) Let (v₂, i₃) = Rot_G(v₁, i₂)
4) Output (v₂, (i₃, j₁))

It can be verified that Rot_{G ⓢ H} is its own inverse and hence this indeed defines a (d · c)-regular multigraph on n vertices. The main idea behind the derandomized square is that it improves the connectivity of the graph (as measured by the second eigenvalue) without squaring the degree.
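Definition V.3 translates directly into code. The sketch below (our illustration, not from the paper) builds Rot_{G ⓢ H} from the two rotation maps and checks that it is an involution; with H taken to be the complete graph with self loops on d vertices, the construction reduces to the true square of G.

```python
# Rotation map of the derandomized square G(s)H, given rotation maps of G and H.
def derandomized_square_rot(rot_G, rot_H):
    def rot(v0, i0, j0):
        v1, i1 = rot_G(v0, i0)     # step 1: follow edge i0 out of v0
        i2, j1 = rot_H(i1, j0)     # step 2: H chooses the label of the second step
        v2, i3 = rot_G(v1, i2)     # step 3: follow edge i2 out of v1
        return v2, (i3, j1)        # step 4
    return rot

# toy instance: G = 2-regular n-cycle, H = complete graph with self loops on 2 vertices
n = 7
def rot_cycle(v, i):               # label 0: step forward, label 1: step back
    return ((v + 1) % n, 1) if i == 0 else ((v - 1) % n, 0)
def rot_H(a, b):                   # involution defining a 2-regular graph on 2 vertices
    return (b, a)

rot_sq = derandomized_square_rot(rot_cycle, rot_H)
for v in range(n):                 # verify that Rot_{G(s)H} is its own inverse
    for i in range(2):
        for j in range(2):
            w, (i2, j2) = rot_sq(v, i, j)
            assert rot_sq(w, i2, j2) == (v, (i, j))
print("rotation map is an involution")
```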

Theorem V.4 ([24]). Let G be a d-regular undirected multigraph with a two-way labeling and λ(G) = λ, and let H be a c-regular graph with a two-way labeling and λ(H) = μ. Then G ⓢ H is a d · c-regular undirected multigraph and

λ(G ⓢ H) ≤ 1 − (1 − λ²) · (1 − μ) ≤ λ² + μ.

The output of the derandomized square is an undirected multigraph with a two-way labeling. So the operation can be repeated on the output. In our algorithm for computing an approximate pseudoinverse, we use the Peng–Spielman identity but replace every instance of a squared transition matrix with the derandomized square. To prove that this approach indeed yields an approximation to the pseudoinverse, we want to apply Theorem IV.1. For this we need to prove two properties of the derandomized square: that the derandomized square is a good spectral approximation of the true square, and that iterating the derandomized square yields an approximation of the complete graph. The latter property follows from the corollary and lemma below.

Corollary V.5. Let G₀ be a d-regular undirected multigraph on n vertices with a two-way labeling and λ(G₀) ≤ 1 − 1/(2d²n²), and let H₁, ..., H_k be c-regular graphs with two-way labelings where Hᵢ has d · c^{i−1} vertices and λ(Hᵢ) ≤ μ. For all i ∈ [k] let Gᵢ = G_{i−1} ⓢ Hᵢ. If μ ≤ 1/100 and k = ⌈6 log(d²n²)⌉ then λ(G_k) ≤ 1/3.

Proof: Let μ ≤ 1/100 and for all i ∈ [k], let γᵢ = 1 − λ(Gᵢ). If γ_{i−1} < 2/3 then

γᵢ ≥ (1 − λ(G_{i−1})²) · (1 − μ) = (2γ_{i−1} − γ_{i−1}²) · (1 − μ) ≥ (99/100) · (4/3) · γ_{i−1} ≥ (5/4) γ_{i−1}.

It follows that until γᵢ is driven above 2/3 we have γ_k ≥ (5/4)^k · γ₀. Since γ₀ ≥ 1/(2d²n²), setting k = ⌈6 log(d²n²)⌉ will result in λ(G_k) ≤ 1/3.

Lemma V.6. If L = I − M is the normalized Laplacian of a d-regular, undirected, 1/2-lazy multigraph G, and ε = ln(1/(1 − λ)), then L ≈_ε I − J if and only if λ(G) ≤ λ.

A proof of Lemma V.6 can be found in the full version of this paper.

We prove in Theorem V.7 that if H is an expander then the derandomized square of a multigraph G with H is a good spectral approximation to G² and thus can be used for the approximation in Theorem IV.1. The idea is that the square of a graph can be interpreted as putting a d-clique K on the neighbors of each vertex v in G, and the derandomized square can be interpreted as putting a copy of H on each vertex. By Lemma V.6, L(H) spectrally approximates L(K), and we can use this to show that the Laplacian L(G ⓢ H) spectrally approximates L(G²).

Theorem V.7. Let G be a d-regular, undirected, aperiodic multigraph on n vertices and H be a regular multigraph on d vertices with λ(H) ≤ α. Then

L(G²) ≈_ε L(G ⓢ H)

for ε = ln(1/(1 − α)).

Proof: Following the proof from [24] that the derandomized square improves connectivity, we can write the transition matrix for the random walk on G ⓢ H as M = P Ã B̃ Ã Q, where each matrix corresponds to a step in the definition of the derandomized square:

• Q is an n·d × n matrix that maps any n-vector v to v ⊗ u_d (where u_d denotes the d-coordinate uniform distribution). This corresponds to "lifting" a probability distribution over [n] to one over [n] × [d] where the mass on each coordinate is divided uniformly over d coordinates in the distribution over [n] × [d]. That is, Q_{(u,i),v} = 1/d if u = v and 0 otherwise, where the rows of Q are ordered (1,1), (1,2), ..., (1,d), (2,1), ..., (2,d), ..., (n,d).
• Ã is the n·d × n·d permutation matrix corresponding to the rotation map for G. That is, Ã_{(u,i),(v,j)} = 1 if Rot_G(u,i) = (v,j) and 0 otherwise.
• B̃ is I_n ⊗ B, where I_n is the n × n identity matrix and B is the transition matrix for H.
• P is the n × n·d matrix that maps any (n·d)-vector to an n-vector by summing all the entries corresponding to the same vertex in G. This corresponds to projecting distributions on [n] × [d] back down to a distribution over [n]. That is, P_{v,(u,i)} = 1 if u = v and 0 otherwise, where the columns of P are ordered (1,1), (1,2), ..., (1,d), (2,1), ..., (2,d), ..., (n,d). Note that P = d · Qᵀ.

Let c be the degree of H and let K be the complete graph on d vertices with self loops on every vertex. Lemma V.6 says that L_H ≈_ε L_K, where L_H = I − B and L_K = I − J are the normalized Laplacians of H and K, respectively. It follows that I_n ⊗ L_H = I_{n·d} − B̃ ε-approximates I_n ⊗ L_K = I_{n·d} − I_n ⊗ J by Proposition II.6 Part 8. I_n ⊗ L_H is the Laplacian for n disjoint copies of H and I_n ⊗ L_K is the Laplacian for n disjoint copies of K. Applying the matrices P Ã and Ã Q on the left and right places these copies on the neighborhoods of each vertex of G.

Note that Ã is symmetric because Rot_G is an involution. In other words, for all u, v ∈ [n] and a, b ∈ [d], Rot_G(u, a) = (v, b) ⟹ Rot_G(v, b) = (u, a). This also implies that Ã² = I_{n·d}. Also note that PQ = I_n and QP = I_n ⊗ J (where here J denotes the d × d matrix with every entry 1/d). Applying these observations along with Proposition II.6 Part 6 we get

I_n − M = P Ã I_{n·d} Ã Q − P Ã B̃ Ã Q
        = P Ã (I_n ⊗ L_H) Ã Q
        = d · Qᵀ Ã (I_n ⊗ L_H) Ã Q
        ≈_ε d · Qᵀ Ã (I_n ⊗ L_K) Ã Q
        = P Ã (I_n ⊗ L_K) Ã Q
        = I_n − P Ã (I_n ⊗ J) Ã Q
        = I_n − P Ã Q P Ã Q
        = L(G²),

where the final line follows from observing that P Ã Q equals the transition matrix of G. Thus the above shows that L(G²) ≈_ε L(G ⓢ H) as desired.

Note that if we want to compute a constant spectral approximation to L⁺ then we need each εᵢ = 1/O(log n) so that the sum of k = O(log n) of them is a constant. This implies that we need α = 1/O(log n) in Theorem V.7, which requires expanders of polylogarithmic degree. This will affect the space complexity of the algorithm as discussed in the next section.

VI. SPACE COMPLEXITY

In this section we argue that given a Laplacian L, our algorithm produces an ε-approximation to L⁺ using O(log n · log log(n/ε)) space. First we prove that when L is the normalized Laplacian of a regular graph, we can produce a constant-approximation to L⁺ using O(log n · log log n) space and then boost the approximation quality to arbitrary accuracy using an additional O(log n · log log(1/ε)) space, thereby proving Theorem III.1.

A. Model of space bounded computation

We use a standard model of space bounded computation described here. The machine has a read-only input tape, a constant number of read/write work tapes, and a write-only output tape. We say the machine runs in space s if throughout the computation, it only uses s tape cells on the work tapes. The machine may write outputs to the output tape that are larger than s (in fact as large as 2^{O(s)}) but the output tape is write-only. For the randomized analogue, we imagine the machine endowed with a coin-flipping box. At any point in the computation the machine may ask for a random bit, but in order to remember any bit it receives, the bit must be recorded on a work tape.

We use the following proposition about the composition of space-bounded algorithms.

Proposition VI.1. Let f₁ and f₂ be functions that can be computed in space s₁(n), s₂(n) ≥ log n, respectively, and f₁ has output of length ℓ₁(n) on inputs of size n. Then f₂ ∘ f₁ can be computed in space

O(s₂(ℓ₁(n)) + s₁(n)).
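As a concrete check of the lifting machinery in this proof, the sketch below (our illustration, not from the paper) builds Q, P, and Ã for a small rotation-map graph and verifies that P Ã Q is the transition matrix of G, that P = d · Qᵀ, and that Ã is symmetric.

```python
import numpy as np

n, d = 7, 2
def rot_cycle(v, i):                      # 2-regular n-cycle: label 0 forward, 1 back
    return ((v + 1) % n, 1) if i == 0 else ((v - 1) % n, 0)

idx = lambda v, i: v * d + i              # index pairs (v, i) in the stated order
Q = np.zeros((n * d, n))                  # lift: split each vertex's mass over d labels
P = np.zeros((n, n * d))                  # project: sum the mass per vertex
Atil = np.zeros((n * d, n * d))           # permutation matrix of Rot_G
for v in range(n):
    for i in range(d):
        Q[idx(v, i), v] = 1.0 / d
        P[v, idx(v, i)] = 1.0
        w, j = rot_cycle(v, i)
        Atil[idx(v, i), idx(w, j)] = 1.0

M_G = np.zeros((n, n))                    # transition matrix of the cycle
for v in range(n):
    M_G[v, (v + 1) % n] += 0.5
    M_G[v, (v - 1) % n] += 0.5

print(np.allclose(P @ Atil @ Q, M_G))     # True: P A~ Q is the walk on G
print(np.allclose(P, d * Q.T))            # True: P = d Q^T
print(np.allclose(Atil, Atil.T))          # True: A~ symmetric (Rot_G is an involution)
```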

B. Constant approximation of the pseudoinverse

In this section we show how to compute a constant approximation of the pseudoinverse of the normalized Laplacian of a regular multigraph in space O(log n · log log n), where n is the bitlength of the input.

Proposition VI.2. There is an algorithm that, given an undirected, aperiodic, d-regular multigraph G with normalized Laplacian L, and d a power of 2, computes a matrix L̃⁺ such that L̃⁺ ≈_α L⁺ using space O(log n · log log n), where α < 1/2 and n is the bitlength of the input.

To prove Proposition VI.2, we first prove a few lemmas. We will use the fact that large derandomized powers can be computed in small space, which we prove following [8], [24], [46]. First we note that neighbors in the sequence of expanders we use for the iterated derandomized square can be explicitly computed space-efficiently.

Lemma VI.3. For every t ∈ ℕ and μ > 0, there is a graph H_{t,μ} with a two-way labeling such that:
• H has 2^t vertices and is c-regular for c being a power of 2 bounded by poly(t, 1/μ).
• λ(H) ≤ μ
• Rot_H can be evaluated in linear space in its input length, i.e. space O(t + log c).

Proof sketch: The expanders can be obtained by letting H_{t,λ} be the Cayley graph on vertex set F₂^t with neighbors specified by a λ-biased multiset S ⊆ F₂^t [47], [48], taking Rot_H(v, i) = (v + sᵢ, i), where sᵢ is the ith element of S.

Now we argue that high derandomized powers can be computed space-efficiently.

Lemma VI.4. Let G₀ be a d-regular, undirected multigraph on n vertices with a two-way labeling and H₁, ..., H_k be c-regular undirected graphs with two-way labelings where for each i ∈ [k], Hᵢ has d · c^{i−1} vertices. For each i ∈ [k] let

Gᵢ = G_{i−1} ⓢ Hᵢ.

Then given v₀ ∈ [n], i₀ ∈ [d · c^{i−1}], j₀ ∈ [c], Rot_{Gᵢ}(v₀, (i₀, j₀)) can be computed in space O(log(n · d) + k · log c) with oracle queries to Rot_{H₁}, ..., Rot_{H_k}.

Proof: When c is subpolynomial we are reasoning about sublogarithmic space complexity, which can depend on the model. We will use a model where the read-only input tape contains the initial input graph G₀, the first read-write work tape contains the input triple (v₀, (i₀, j₀)), where v₀ is a vertex of Gᵢ, i₀ ∈ [d · c^{i−1}] is an edge label in Gᵢ, and j₀ ∈ [c] is an edge label in Hᵢ, and the other work tapes are blank. When the computation ceases, we want (v₂, (i₃, j₁)) = Rot_{Gᵢ}(v₀, (i₀, j₀)) on the first work tape and the other work tapes to be blank.

Let Space(Gᵢ) be the amount of space required to compute the rotation map of graph Gᵢ. We will show that for all i ∈ [k], Space(Gᵢ) = Space(G_{i−1}) + O(log c). Note that Space(G₀) = O(log(nd)).

Fix i ∈ [k]. We begin with v₀ ∈ [n], i₀ ∈ [d · c^{i−1}] and j₀ ∈ [c] on the first work tape and we want to compute Rot_{Gᵢ}(v₀, (i₀, j₀)). We recursively compute Rot_{G_{i−1}}(v₀, i₀) so that the work tape now contains (v₁, i₁, j₀). Then we compute Rot_{Hᵢ}(i₁, j₀) so that the tape now contains (v₁, i₂, j₁). Finally, we compute Rot_{G_{i−1}}(v₁, i₂) so that the tape now contains (v₂, i₃, j₁).

This requires 2 evaluations of the rotation map of G_{i−1} and one evaluation of the rotation map of Hᵢ. Note that we can reuse the same space for each of these evaluations because they happen in succession. The space needed on top of Space(G_{i−1}) is the space to store edge label j₀, which uses O(log c) space. So Space(Gᵢ) = Space(G_{i−1}) + O(log c). Since i can be as large as k and Space(G₀) = O(log(n · d)), we get that for all i ∈ [k], Space(Gᵢ) = O(log(n · d) + k · log c).

Corollary VI.5. Let M₀, ..., M_k be the transition matrices of G₀, ..., G_k as defined in Lemma VI.4. For all ℓ ∈ [k], given coordinates i, j ∈ [n], entry i, j of M_ℓ can be computed in space O(log(n · d) + k · log c).

Proof: Lemma VI.4 shows that we can compute neighbors in the graph G_ℓ in space O(log(n · d) + k · log c). Given coordinates i, j the algorithm initiates a tally t at 0 and computes Rot_{G_ℓ}(i, q) for each q from 1 to d · c^{ℓ−1}, the degree of G_ℓ. If the vertex outputted by Rot_{G_ℓ} is j, then t is incremented by 1. After the loop finishes, t contains the number of edges from i to j and the algorithm outputs t/(d · c^{ℓ−1}), which is entry i, j of M_ℓ. This used space O(log(n · d) + k · log c) to compute the rotation map of G_ℓ plus space O(log(d · c^{ℓ−1})) to store q and t. So the total space usage is O(log(n · d) + k · log c) + O(log(d · c^{ℓ−1})) = O(log(n · d) + k · log c).

It follows from Corollary VI.5 that when k = O(log n), for all i ∈ [k] entries in I + Mᵢ can be computed in space O(log n · log c). When we apply it, we will use expanders (from Lemma VI.3) of degree c = O(polylog(n)), so our algorithm computes entries of I + Mᵢ in space O(log n · log log n).

Now we prove that we can multiply matrices in small space.

Lemma VI.6. Given matrices M⁽¹⁾, ..., M⁽ᵏ⁾ and indices i, j, the entry (M⁽¹⁾ · M⁽²⁾ · ... · M⁽ᵏ⁾)_{ij} can be computed using O(log n · log k) space, where n is the dimension of the input matrices.

Proof: First we show how to multiply two n × n matrices in O(log n) space. Given as input matrices M⁽¹⁾, M⁽²⁾ and indices i, j we wish to compute

(M⁽¹⁾ · M⁽²⁾)_{ij} = Σ_{ℓ=1}^{n} M⁽¹⁾_{iℓ} M⁽²⁾_{ℓj}.
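The proof's recursion is easy to mirror in code. The sketch below (our illustration, not from the paper) computes a single entry of a k-fold matrix product by splitting the product in half and recursing, recomputing entries on demand rather than storing intermediate matrices, just as the O(log n · log k)-space algorithm does.

```python
def product_entry(mats, i, j):
    # entry (i, j) of mats[0] * mats[1] * ... * mats[-1], computed recursively
    if len(mats) == 1:
        return mats[0][i][j]
    mid = len(mats) // 2
    n = len(mats[0])
    return sum(product_entry(mats[:mid], i, l) * product_entry(mats[mid:], l, j)
               for l in range(n))

# sanity check against direct multiplication
import numpy as np
rng = np.random.default_rng(1)
mats = [rng.integers(0, 3, size=(4, 4)) for _ in range(5)]
direct = mats[0] @ mats[1] @ mats[2] @ mats[3] @ mats[4]
print(direct[2, 3] == product_entry([m.tolist() for m in mats], 2, 3))  # True
```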

We use the fact that n n-bit numbers can be multiplied and added in O(log n) space (in fact in TC⁰ [49], [50]). So for a counter ℓ from 1 to n, the algorithm multiplies M⁽¹⁾_{iℓ} · M⁽²⁾_{ℓj} and adds the result. The counter can be stored with log n bits and the arithmetic can be carried out in logspace.

To multiply k matrices we recursively split the product into two blocks and multiply each block separately. The depth of the recursion is log k and each level requires O(log n) space, for a total of O(log n · log k) space.

Now we can prove Proposition VI.2.

Proof of Proposition VI.2: Let M₀ be the transition matrix of G₀ = G such that L = I − M₀. Clearly a two-way labeling of G can be computed in O(log n) space by just fixing a canonical way for each vertex to label its incident edges. Set k = ⌈6 log(d²n²)⌉ and μ = 1/30k. Let c = polylog(n) be a power of 2 and let tᵢ = log(d · c^{i−1}) for all i ∈ [k]. Note that each tᵢ is an integer because d and c are powers of 2. Let H₁, ..., H_k be cᵢ-regular graphs on 2^{tᵢ} = d · c^{i−1} vertices, respectively, with λ(Hᵢ) ≤ μ as given by Lemma VI.3. So for all i ∈ [k], cᵢ = poly(tᵢ, 1/μ) = polylog(n). Without loss of generality, we can take c = polylog(n) ≥ cᵢ for all i ∈ [k] and make each expander c-regular by adding self loops.

For all i ∈ [k] let Gᵢ = G_{i−1} ⓢ Hᵢ and let Mᵢ be the transition matrix of Gᵢ. By Theorem V.7 we have I − Mᵢ ≈_ε I − M_{i−1}² for all i ∈ [k], where ε = ln(1/(1 − μ)). By Corollary V.5, λ(G_k) ≤ 1/3 and so by Lemma V.6 we have I − M_k ≈_{ln(3/2)} I − J.

From Theorem IV.1, we get that

L⁺ ≈_δ (1/2)(I − J) + Σ_{i=0}^{k−1} (1/2^{i+2}) Wᵢ + (1/2^{k+1}) W_k    (3)

where for all i ∈ [k]

Wᵢ = (I + M₀) · ... · (I + Mᵢ)(I − J)(I + Mᵢ) · ... · (I + M₀)

for δ = k · ln(1/(1 − μ)) + ln(3/2) < .5 (using the fact that e^{−2μ} ≤ 1 − μ). So we need to compute the approximation above in O(log n · log log n) space. There are k + 2 = O(log n) terms in the sum. Aside from the first term (which is easy to compute) and adjusting the coefficient on the last term, the ith term in the expansion looks like

(1/2^{i+2}) (I + M₀) · ... · (I + Mᵢ)(I − J)(I + Mᵢ) · ... · (I + M₀),

which is the product of at most O(log n) matrices. Corollary VI.5 says that for all i ∈ [k] we can compute the entries of Mᵢ in space O(log(n · d) + k log c) = O(log n · log log n). Lemma VI.6 says that we can compute the product of O(log n) matrices in O(log n · log log n) space. By the composition of space bounded algorithms, each term in the sum can be computed in space O(log n · log log n) + O(log n · log log n) = O(log n · log log n). Then since iterated addition can be carried out in O(log n) space [49], [50], the terms of the sum can be added using an additional O(log n) space. Again by the composition of space bounded algorithms, the total space usage is O(log n · log log n) for computing a constant approximation to L⁺.

C. A more precise pseudoinverse

Now that we have computed a constant approximation to L⁺, we show how to improve the quality of our approximation through iterative methods.

Proposition VI.7. Let L be the normalized Laplacian of an undirected, regular, aperiodic multigraph. There is an algorithm such that for every constant α < 1/2, given L̃₁ such that L̃₁ ≈_α L⁺, computes L̃₂ such that L̃₂ ≈_ε L⁺ using space O(log n · log log n + log n · log log(1/ε)), where n is the length of the input.

Proof: By Lemma IV.2, given L̃₁ ≈_α L⁺, we can boost the approximation from α to ε by setting k = O(log(1/ε)) and computing

L̃₂ = e^{−α} · L̃₁ · Σ_{i=0}^{k} (I − e^{−α} · L · L̃₁)^i.

By Proposition VI.2, entries in L̃₁ can be computed in space O(log n · log log n) and hence entries in I − e^{−α} · L · L̃₁ can be computed in space O(log n · log log n), because matrix addition and multiplication can be done in O(log n) space. Viewing (I − e^{−α} · L · L̃₁) as a single matrix, each term in the sum above is the product of at most i + 1 matrices. Lemma VI.6 tells us that we can compute such a product in space O(log n · log i). Since i can be as large as k = O(log(1/ε)), this gives space O(log n · log log(1/ε)) for computing each term. Then since iterated addition can be computed in logarithmic space [49], [50], the k + 1 terms in the sum can be added using an additional O(log N) space, where N is the bit length of the computed terms. By composition of space bounded algorithms, computing L̃₂ uses a total of O(log n · log log n + log n · log log(1/ε)) space.

VII. PROOF OF MAIN RESULT

Now we are ready to prove our main technical result, Theorem III.1. Theorem I.1 follows as a corollary and is discussed in Section VIII.

Proof of Theorem III.1: Let L = D − A be the input Laplacian of the graph G. Let Δ be the maximum vertex degree in G. We construct a graph G′ as follows: at every vertex v, we add 2^{⌈log 2Δ⌉} − d(v) self loops, where d(v) denotes the degree of v. Note that 2^{⌈log 2Δ⌉} − d(v) ≥ Δ, so this ensures G′ is aperiodic and 2^{⌈log 2Δ⌉}-regular. Let E be the diagonal matrix of self loops added in this stage.
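The first step of this proof, padding every vertex with self loops up to a power-of-two degree, is simple to illustrate; the snippet below (our illustration, not from the paper) checks that the unnormalized Laplacian is unchanged while the graph becomes regular with degree 2^{⌈log 2Δ⌉}.

```python
import numpy as np

def regularize(A):
    deg = A.sum(axis=1)
    Delta = int(deg.max())
    f = 1 << (2 * Delta - 1).bit_length()   # smallest power of 2 that is >= 2*Delta
    E = np.diag(f - deg)                     # self loops added at each vertex
    return A + E, E, f

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A2, E, f = regularize(A)
D = np.diag(A.sum(axis=1))
D2 = np.diag(A2.sum(axis=1))
print(np.allclose(D - A, D2 - A2))           # True: Laplacian D - A is unchanged
print(np.unique(A2.sum(axis=1)), f)          # all degrees equal the power of two f
```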

Then the degree matrix of G′ is D + E = 2^{⌈log 2Δ⌉} · I and the adjacency matrix of G′ is A + E. So the unnormalized Laplacian of G′ is D + E − A − E = D − A, which is the same as the unnormalized Laplacian of G. Let L′ = (D − A)(D + E)^{−1} = I − M be the normalized Laplacian of G′, where M is the transition matrix of G′.

By Proposition VI.2, we can compute a matrix L̃₁ such that L̃₁ ≈_α (L′)⁺ for α < 1/2 in space O(log n · log log n). Then we can compute L̃₂ ≈_ε (L′)⁺ by applying Proposition VI.7 using an additional O(log n · log log n + log n · log log(1/ε)) space. By composition of space bounded algorithms this yields a O(log n · log log n + log n · log log(1/ε)) = O(log n · log log(n/ε)) space algorithm for computing an ε-approximation to (L′)⁺. Recalling that L′ = L/2^{⌈log 2Δ⌉} implies that L̃₂/2^{⌈log 2Δ⌉} ≈_ε L⁺ as desired.

VIII. COROLLARIES

In this section we prove some applications of Theorem III.1.

Definition VIII.1. Let L be the normalized Laplacian of an undirected multigraph and b ∈ im(L). Then x̃ is an ε-approximate solution to the system Lx = b if there is an actual solution x* such that

‖x* − x̃‖_L ≤ ε ‖x*‖_L,

where for all v ∈ Rⁿ, ‖v‖_L ≡ √(vᵀLv).

Here we prove that our algorithm for computing an ε-approximation to the pseudoinverse of a Laplacian can be used to solve Laplacian systems (Theorem I.1). The following lemma is useful for translating between the L-norm and the ℓ₂ norm.

Lemma VIII.2. Let x ∈ Rⁿ such that x ⊥ 1 and let L be the normalized Laplacian of an undirected multigraph with smallest nonzero eigenvalue γ₂ and largest eigenvalue γ_n. Then

γ₂ ‖x‖₂² ≤ ‖x‖_L² ≤ γ_n ‖x‖₂².

Proof: By definition ‖x‖_L² = xᵀLx, and by the variational characterization of the eigenvalues it follows that

γ₂ · ‖x‖₂² = γ₂ · xᵀx ≤ ‖x‖_L² ≤ γ_n · xᵀx = γ_n · ‖x‖₂².

Since for undirected multigraphs with maximum degree d we have γ₂ ≥ 1/(2dn²) and γ_n ≤ 2d, it follows from the above lemma that for all x ⊥ 1, ‖x‖₂ and ‖x‖_L are within a multiplicative factor of 2dn of one another. We also use the following fact.

Lemma VIII.3. If L̃⁺ ≈_ε L⁺, ε ≤ ln(2), and b ∈ im(L), then x̃ = L̃⁺b is a √(2ε)-approximate solution to Lx = b.

A proof of Lemma VIII.3 can be found in the full version of this paper. Now using Theorem III.1 and the lemmas above we can prove Theorem I.1.

Proof of Theorem I.1: Let L = D − A be the input Laplacian of the graph G and let b ⊥ 1 be the given vector. If G is disconnected then we can use Reingold's algorithm to find the connected components and work on each separately. So assume without loss of generality that G is connected. We may also assume without loss of generality that G is d-regular. Let ε′ = (ε/(4d²n²))²/2, where n is the bit length of the input. Theorem III.1 says that we can compute L̃⁺ ≈_{ε′} L⁺ in space O(log n · log log(n/ε′)) = O(log n · log log(n/ε)).

By Lemma VIII.3, x̃ = L̃⁺b is a √(2ε′)-approximate solution to Lx = b. In other words, letting x* = L⁺b we have

‖x̃ − x*‖_L ≤ √(2ε′) · ‖x*‖_L.

Translating to the ℓ₂ norm we get

(1/2dn) · ‖x̃ − x*‖₂ ≤ ‖x̃ − x*‖_L ≤ √(2ε′) · ‖x*‖_L ≤ 2dn · √(2ε′) · ‖x*‖₂,

which implies

‖x̃ − x*‖₂ ≤ (2dn)² · √(2ε′) · ‖x*‖₂ = ε · ‖x*‖₂

as desired.

Our algorithm also implies a Õ(log n) space algorithm for computing many interesting quantities associated with undirected graphs. Below we survey a few of these corollaries including hitting times, commute times, and escape probabilities.

Definition VIII.4. In a multigraph G = (V, E) the hitting time H_{uv} from u to v ∈ V is the expected number of steps a random walk starting at u will take before hitting v.

Definition VIII.5. In a multigraph G = (V, E) the commute time C_{uv} = H_{uv} + H_{vu} between vertices u, v ∈ V is the expected number of steps a random walk starting at u will take to reach v and then return to u.

Corollary VIII.6. Given an undirected multigraph G = (V, E), vertices u, v ∈ V, and ε > 0, there are deterministic O(log n · log log(n/ε)) space algorithms for computing numbers H̃_{uv} and C̃_{uv} such that

|H̃_{uv} − H_{uv}| ≤ ε

and

|C̃_{uv} − C_{uv}| ≤ ε,

where n is the bitlength of the input.

Note that since the values we are approximating are at least 1 (if not 0), Corollary VIII.6 also implies that we can achieve relative error. Our algorithm can also be used to compute escape probabilities.
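The connection between these quantities and the pseudoinverse can be seen on a small example. The sketch below (our illustration, not from the paper) uses the standard effective-resistance identity C_{uv} = 2|E| · (e_u − e_v)ᵀ L⁺ (e_u − e_v) for the unnormalized Laplacian of an unweighted graph, and checks it against hitting times computed directly from the absorbing walk.

```python
import numpy as np

def commute_time_via_pinv(A, u, v):
    d = A.sum(axis=1)
    Lp = np.linalg.pinv(np.diag(d) - A)          # pseudoinverse of the Laplacian
    e = np.zeros(len(d)); e[u], e[v] = 1.0, -1.0
    m = A.sum() / 2.0                            # number of edges
    return 2.0 * m * e @ Lp @ e

def hitting_time(A, u, v):
    # expected steps from u until the walk first reaches v (absorbing at v)
    n = len(A)
    M = A / A.sum(axis=1, keepdims=True)
    keep = [i for i in range(n) if i != v]
    Q = M[np.ix_(keep, keep)]
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return h[keep.index(u)]

A = np.zeros((4, 4))                             # path 0-1-2-3 plus the edge 1-3
for a, b in [(0, 1), (1, 2), (2, 3), (1, 3)]:
    A[a, b] = A[b, a] = 1
u, v = 0, 2
print(commute_time_via_pinv(A, u, v))
print(hitting_time(A, u, v) + hitting_time(A, v, u))   # the two agree
```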

Definition VIII.7. In a graph G = (V, E), for two vertices u and v, the escape probability p_w(u, v) denotes the probability that a random walk starting at vertex w reaches u before first reaching v.

Corollary VIII.8. Given an undirected multigraph G = (V, E), vertices u, v ∈ V, and ε > 0, there is a deterministic O(log n · log log(n/ε)) space algorithm for computing a vector p̃ such that

‖p̃ − p‖ ≤ ε,

where p is the vector of escape probabilities pᵢ(u, v) in G.

Proofs of Corollaries VIII.6 and VIII.8 can be found in the full version of this paper.

IX. ACKNOWLEDGEMENTS

J.M. supported by NSF grant CCF-1420938; S.V. supported by NSF grant CCF-1420938 and a Simons Investigator Award.

REFERENCES

[1] R. Impagliazzo and A. Wigderson, "P = BPP if E requires exponential circuits: Derandomizing the XOR lemma," in Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, El Paso, Texas, 4–6 May 1997, pp. 220–229.
[2] A. R. Klivans and D. van Melkebeek, "Graph nonisomorphism has subexponential size proofs unless the polynomial-time hierarchy collapses," SIAM Journal on Computing, vol. 31, no. 5, pp. 1501–1526 (electronic), 2002. [Online]. Available: http://dx.doi.org/10.1137/S0097539700389652
[3] R. Impagliazzo, V. Kabanets, and A. Wigderson, "In search of an easy witness: exponential time vs. probabilistic polynomial time," Journal of Computer and System Sciences, vol. 65, no. 4, pp. 672–694, 2002.
[4] V. Kabanets and R. Impagliazzo, "Derandomizing polynomial identity tests means proving circuit lower bounds," Computational Complexity, vol. 13, no. 1-2, pp. 1–46, 2004.
[5] N. Nisan, "Pseudorandom generators for space-bounded computation," Combinatorica, vol. 12, no. 4, pp. 449–461, 1992.
[6] M. Saks and S. Zhou, "BP_HSPACE(S) ⊆ DSPACE(S^{3/2})," Journal of Computer and System Sciences, vol. 58, no. 2, pp. 376–403, 1999.
[7] R. Aleliunas, R. M. Karp, R. J. Lipton, L. Lovász, and C. Rackoff, "Random walks, universal traversal sequences, and the complexity of maze problems," in 20th Annual Symposium on Foundations of Computer Science (San Juan, Puerto Rico, 1979). New York: IEEE, 1979, pp. 218–223.
[8] O. Reingold, "Undirected connectivity in log-space," Journal of the ACM, vol. 55, no. 4, pp. Art. 17, 24, 2008.
[9] H. Lewis and C. Papadimitriou, "Symmetric space-bounded computation," Automata, Languages and Programming, pp. 374–384, 1980.
[10] N. D. Jones, Y. E. Lien, and W. T. Laaser, "New problems complete for nondeterministic log space," Mathematical Systems Theory, vol. 10, no. 1, pp. 1–17, 1976.
[11] C. Alvarez and R. Greenlaw, "A compendium of problems complete for symmetric logarithmic space," Computational Complexity, vol. 9, no. 2, pp. 123–145, 2000.
[12] O. Reingold, L. Trevisan, and S. Vadhan, "Pseudorandom walks in regular digraphs and the RL vs. L problem," in Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC '06), 21–23 May 2006, pp. 457–466, preliminary version as ECCC TR05-22, February 2005.
[13] M. Braverman, A. Rao, R. Raz, and A. Yehudayoff, "Pseudorandom generators for regular branching programs," in FOCS. IEEE Computer Society, 2010, pp. 40–47.
[14] J. Brody and E. Verbin, "The coin problem and pseudorandomness for branching programs," in FOCS. IEEE Computer Society, 2010, pp. 30–39.
[15] M. Koucký, P. Nimbhorkar, and P. Pudlák, "Pseudorandom generators for group products: extended abstract," in STOC, L. Fortnow and S. P. Vadhan, Eds. ACM, 2011, pp. 263–272.
[16] J. Šíma and S. Žák, "Almost k-wise independent sets establish hitting sets for width-3 1-branching programs," in CSR, ser. Lecture Notes in Computer Science, A. S. Kulikov and N. K. Vereshchagin, Eds., vol. 6651. Springer, 2011, pp. 120–133.
[17] P. Gopalan, R. Meka, O. Reingold, L. Trevisan, and S. Vadhan, "Better pseudorandom generators via milder pseudorandom restrictions," in Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS '12). IEEE, 20–23 October 2012, to appear.
[18] R. Impagliazzo, R. Meka, and D. Zuckerman, "Pseudorandomness from shrinkage," Electronic Colloquium on Computational Complexity (ECCC), Tech. Rep. TR12-057, 2012.
[19] T. Steinke, "Pseudorandomness for permutation branching programs without the group theory," Electronic Colloquium on Computational Complexity (ECCC), Tech. Rep. TR12-083, July 2012. [Online]. Available: http://eccc.hpi-web.de/report/2012/083/
[20] O. Reingold, T. Steinke, and S. Vadhan, "Pseudorandomness for regular branching programs via Fourier analysis," in Proceedings of the 17th International Workshop on Randomization and Computation (RANDOM '13), ser. Lecture Notes in Computer Science, S. Raskhodnikova and J. Rolim, Eds., vol. 8096. Springer-Verlag, 21–23 August 2013, pp. 655–670, full version posted as ECCC TR13-086 and arXiv:1306.3004 [cs.CC].
[21] T. Steinke, S. Vadhan, and A. Wan, "Pseudorandomness and Fourier growth bounds for width 3 branching programs," in Proceedings of the 18th International Workshop on Randomization and Computation (RANDOM '14), ser. Leibniz International Proceedings in Informatics (LIPIcs), C. Moore and J. Rolim, Eds., 4–6 September 2014, pp. 885–899, full version posted as ECCC TR14-076 and arXiv:1405.7028 [cs.CC], and accepted to Theory of Computing Special Issue on APPROX/RANDOM '14.
[22] D. A. Spielman and S.-H. Teng, "Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems," in Proceedings of the thirty-sixth annual ACM symposium on Theory of computing. ACM, 2004, pp. 81–90.

[23] R. Peng and D. A. Spielman, "An efficient parallel solver for SDD linear systems," STOC, 2014.
[24] E. Rozenman and S. Vadhan, "Derandomized squaring of graphs," in Proceedings of the 8th International Workshop on Randomization and Computation (RANDOM '05), ser. Lecture Notes in Computer Science, no. 3624. Berkeley, CA: Springer, August 2005, pp. 436–447.
[25] M. B. Cohen, J. Kelner, J. Peebles, R. Peng, A. Rao, A. Sidford, and A. Vladu, "Almost-linear-time algorithms for Markov chains and new spectral primitives for directed graphs," arXiv preprint arXiv:1611.00755, 2016.
[26] M. B. Cohen, J. Kelner, J. Peebles, R. Peng, A. Sidford, and A. Vladu, "Faster algorithms for computing the stationary distribution, simulating random walks, and more," in Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on. IEEE, 2016, pp. 583–592.
[27] K.-M. Chung, O. Reingold, and S. Vadhan, "S-T connectivity on digraphs with a known stationary distribution," ACM Transactions on Algorithms, vol. 7, no. 3, pp. Art. 30, 21, 2011, preliminary versions in CCC '07 and on ECCC (TR07-030). [Online]. Available: http://dx.doi.org/10.1145/1978782.1978785
[28] I. Koutis, G. L. Miller, and R. Peng, "Approaching optimality for solving SDD systems," March 2010.
[29] I. Koutis, G. Miller, and R. Peng, "A nearly-m log n time solver for SDD linear systems," in Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on. IEEE, 2011, pp. 590–598.
[30] J. A. Kelner, L. Orecchia, A. Sidford, and Z. A. Zhu, "A simple, combinatorial algorithm for solving SDD systems in nearly-linear time," in Proceedings of the forty-fifth annual ACM symposium on Theory of computing. ACM, 2013, pp. 911–920.
[31] Y. T. Lee and A. Sidford, "Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems," in Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on. IEEE, 2013, pp. 147–156.
[32] P. Christiano, J. A. Kelner, A. Madry, D. A. Spielman, and S.-H. Teng, "Electrical flows, Laplacian systems, and faster approximation of maximum flow in undirected graphs," in Proceedings of the forty-third annual ACM symposium on Theory of computing. ACM, 2011, pp. 273–282.
[33] J. A. Kelner, Y. T. Lee, L. Orecchia, and A. Sidford, "An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations," in Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2014, pp. 217–226.
[34] J. A. Kelner and A. Madry, "Faster generation of random spanning trees," in Foundations of Computer Science, 2009. FOCS '09. 50th Annual IEEE Symposium on. IEEE, 2009, pp. 13–21.
[35] J. Sherman, "Breaking the multicommodity flow barrier for O(√log n)-approximations to sparsest cut," in Foundations of Computer Science, 2009. FOCS '09. 50th Annual IEEE Symposium on. IEEE, 2009, pp. 363–372.
[36] D. A. Spielman and N. Srivastava, "Graph sparsification by effective resistances," SIAM Journal on Computing, vol. 40, no. 6, pp. 1913–1926, 2011.
[37] I. Koutis, G. L. Miller, and D. Tolliver, "Combinatorial preconditioners and multilevel solvers for problems in computer vision and image processing," in International Symposium on Visual Computing. Springer, 2009, pp. 1067–1078.
[38] A. Ta-Shma, "Inverting well conditioned matrices in quantum logspace," in Proceedings of the forty-fifth annual ACM symposium on Theory of computing. ACM, 2013, pp. 881–890.
[39] D. Doron, F. Le Gall, and A. Ta-Shma, "Probabilistic logarithmic-space algorithms for Laplacian solvers," ECCC, https://eccc.weizmann.ac.il/report/2017/036/, 2017.
[40] V. Trifonov, "An O(log n log log n) space algorithm for undirected st-connectivity," SIAM Journal on Computing, vol. 38, no. 2, pp. 449–483, 2005.
[41] D. A. Spielman and S.-H. Teng, "Spectral sparsification of graphs," SIAM Journal on Computing, vol. 40, no. 4, pp. 981–1025, 2011.
[42] L. Orecchia and N. K. Vishnoi, "Towards an SDP-based approach to spectral methods: A nearly-linear-time algorithm for graph partitioning and decomposition," in Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 2011, pp. 532–545.
[43] D. A. Spielman and S.-H. Teng, "A local clustering algorithm for massive graphs and its application to nearly linear time graph partitioning," SIAM Journal on Computing, vol. 42, no. 1, pp. 1–26, 2013.
[44] Y. T. Lee and A. Sidford, "Efficient inverse maintenance and faster algorithms for linear programming," CoRR, vol. abs/1503.01752, 2015. [Online]. Available: http://arxiv.org/abs/1503.01752
[45] O. Reingold, S. Vadhan, and A. Wigderson, "Entropy waves, the zig-zag graph product, and new constant-degree expanders," Annals of Mathematics, vol. 155, no. 1, January 2001.
[46] S. P. Vadhan, "Pseudorandomness," Foundations and Trends in Theoretical Computer Science, vol. 7, no. 1–3, pp. 1–336, 2012.
[47] J. Naor and M. Naor, "Small-bias probability spaces: Efficient constructions and applications," SIAM Journal on Computing, vol. 22, no. 4, pp. 838–856, Aug. 1993.
[48] N. Alon, O. Goldreich, J. Håstad, and R. Peralta, "Simple constructions of almost k-wise independent random variables," Random Structures & Algorithms, vol. 3, no. 3, pp. 289–304, 1992, see also addendum in issue 4(1), 1993, pp. 119–120. [Online]. Available: http://dx.doi.org/10.1002/rsa.3240030308
[49] P. W. Beame, S. A. Cook, and H. J. Hoover, "Log depth circuits for division and related problems," SIAM Journal on Computing, vol. 15, no. 4, pp. 994–1003, 1986.
[50] W. Hesse, E. Allender, and D. A. M. Barrington, "Uniform constant-depth threshold circuits for division and iterated multiplication," Journal of Computer and System Sciences, vol. 65, no. 4, pp. 695–716, 2002.
[51] 51st Annual IEEE Symposium on Foundations of Computer Science, FOCS 2010, October 23–26, 2010, Las Vegas, Nevada, USA. IEEE Computer Society, 2010.
