Graph Connectivities, Network Coding, and Expander Graphs

Ho Yee Cheung, Lap Chi Lau, Kai Man Leung
The Chinese University of Hong Kong
(Research supported by HK RGC grant 413609 and 413411.)

Abstract— We present a new algebraic formulation to compute edge connectivities in a directed graph, using the ideas developed in network coding. This reduces the problem of computing edge connectivities to solving systems of linear equations, thus allowing us to use tools in linear algebra to design new algorithms. Using the algebraic formulation we obtain faster algorithms for computing single source edge connectivities and all pairs edge connectivities; in some settings the amortized time to compute the edge connectivity for one pair is sublinear. Through this connection, we have also found an interesting use of expanders and superconcentrators to design fast algorithms for some graph connectivity problems.

1. INTRODUCTION

Graph connectivity is a basic concept that measures the reliability and efficiency of a graph. The edge connectivity from vertex s to vertex t is defined as the size of a minimum s-t cut, or equivalently the maximum number of edge disjoint paths from s to t. Computing edge connectivities is a classical and well-studied problem in combinatorial optimization. Most known algorithms for this problem are based on network flow techniques (see e.g. [33]).

The fastest algorithm to compute the s-t edge connectivity in a simple directed graph is an O(min{m^{1/2}, n^{2/3}}·m) time algorithm by Even and Tarjan [11], where m is the number of edges and n is the number of vertices. To compute the edge connectivities for many pairs, however, it is not known how to do better than computing the edge connectivity for each pair separately, even when the pairs share the source or the sink. For instance, it is not known how to compute all pairs edge connectivities faster than computing the s-t edge connectivity for Ω(n^2) pairs. This is in contrast to the problem in undirected graphs, where all pairs edge connectivities can be computed in Õ(mn) time by constructing a Gomory-Hu tree [6].
Network coding is an innovative method to transmit information in a communication network. The fundamental result is a max-information-flow min-cut theorem for multicasting [1]: if the edge connectivity from the source vertex s to each sink vertex t_i is at least k, then one can transmit k units of information to all sink vertices simultaneously, by performing encoding and decoding at the vertices. An elegant algebraic framework has been developed to construct efficient network coding schemes for multicasting [28], [25].

We use the techniques developed in network coding to obtain a new algebraic formulation for computing edge connectivities. This reduces the problem of computing edge connectivities to solving systems of linear equations, and opens up new directions to design algorithms for the problem. One advantage is that the edge connectivities from a source vertex to all other vertices can be computed simultaneously (as in the max-information-flow min-cut theorem). This leads to faster algorithms for computing single source edge connectivities and all pairs edge connectivities, and in some settings the amortized time to compute the edge connectivity for one pair is sublinear. In the process we have also found an interesting use of expanders and superconcentrators to design fast algorithms for graph connectivity problems.

1.1. Our Results

Our new algebraic formulation for computing edge connectivities is inspired by the random linear coding algorithm [17] for constructing network codes. Let G = (V,E) be a directed graph. Let s be the source vertex with outdegree d, with outgoing edges e_1, ..., e_d. For each edge e ∈ E we associate a vector f_e of dimension d, where each entry in f_e is an element from a large enough finite field F. The vectors are required to satisfy the following properties: (1) the vectors on e_1, ..., e_d are linearly independent, and (2) the vector on an edge e = (v,w) is a random linear combination of the incoming vectors of v. Once we obtain the vectors, we can compute the edge connectivities from the source vertex as follows.

Theorem 1.1 (Informal statement) With high probability there is a unique solution to the vectors, and the edge connectivity from s to t is equal to the rank of the incoming vectors of t for any t ∈ V − s.
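To make Theorem 1.1 concrete, the following is a minimal self-contained sketch (our own illustration, not the paper's implementation): it picks random local coefficients over an assumed prime field GF(p), solves the matrix equation F = H_s(I − K)^{-1} derived in Section 2, and reads off each connectivity as the rank of the incoming vectors. The example graph, the prime p, and all function names are ours.

```python
import numpy as np

p = 1_000_003  # an assumed "large enough" prime; Theorem 2.1 only needs |F| = Omega(m^{c+2})

def inv_mod(M, p):
    """Inverse of a square matrix over GF(p) by Gauss-Jordan elimination."""
    n = M.shape[0]
    A = np.hstack([M % p, np.eye(n, dtype=np.int64)])
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r, c] % p)   # assumes M is invertible (holds w.h.p.)
        A[[c, piv]] = A[[piv, c]]
        A[c] = (A[c] * pow(int(A[c, c]), p - 2, p)) % p
        for r in range(n):
            if r != c and A[r, c] % p:
                A[r] = (A[r] - A[r, c] * A[c]) % p
    return A[:, n:]

def rank_mod(M, p):
    """Rank of a matrix over GF(p) by Gaussian elimination."""
    A = np.array(M, dtype=np.int64) % p
    rank = 0
    for c in range(A.shape[1]):
        piv = next((r for r in range(rank, A.shape[0]) if A[r, c] % p), None)
        if piv is None:
            continue
        A[[rank, piv]] = A[[piv, rank]]
        A[rank] = (A[rank] * pow(int(A[rank, c]), p - 2, p)) % p
        for r in range(A.shape[0]):
            if r != rank and A[r, c] % p:
                A[r] = (A[r] - A[r, c] * A[rank]) % p
        rank += 1
    return rank

def single_source_connectivities(edges, s, seed=0):
    """Edge connectivity from s to every other vertex, following Theorem 1.1:
    random local coefficients, F = H_s (I - K)^{-1} over GF(p), then the rank
    of the incoming vectors of each vertex t."""
    rng = np.random.default_rng(seed)
    m = len(edges)
    out_s = [i for i, (u, _) in enumerate(edges) if u == s]
    # K[i, j] = k_{e_i, e_j} is a random coefficient whenever head(e_i) = tail(e_j)
    K = np.zeros((m, m), dtype=np.int64)
    for i, (_, head) in enumerate(edges):
        for j, (tail, _) in enumerate(edges):
            if head == tail:
                K[i, j] = rng.integers(1, p)
    H = np.zeros((len(out_s), m), dtype=np.int64)
    for row, j in enumerate(out_s):
        H[row, j] = 1                                   # unit vectors on the edges out of s
    F = (H @ inv_mod((np.eye(m, dtype=np.int64) - K) % p, p)) % p
    verts = {v for e in edges for v in e} - {s}
    return {t: rank_mod(F[:, [i for i, (_, w) in enumerate(edges) if w == t]], p)
            for t in verts}

# tiny example: two edge disjoint s-t paths plus a cross edge, so lambda(s, t) = 2
example = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
print(single_source_connectivities(example, "s"))   # expect {'a': 1, 'b': 2, 't': 2}
```

This sketch inverts an m × m matrix and is only meant to illustrate the statement; the algorithms in Sections 3 and 4 show how to carry out the encoding and decoding steps much faster.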

[Figure 1.1 (a small directed graph with global encoding vectors f_1, ..., f_8 over GF(11)) appears here.]
Figure 1.1: In this example three independent vectors f_1, f_2 and f_3 are sent from the source s. Other vectors are a linear combination of the incoming vectors, according to the random coefficients on the dotted lines, e.g. f_4 = 7·f_1 + 4·f_6 and f_7 = 2·f_4 + 10·f_2 + 5·f_5, etc. All operations are done in the field of size 11. To compute the edge connectivity from s to v_2, for instance, we compute the rank of (f_2, f_4, f_5), which is 3 in this example.

See Figure 1.1 for an example and Theorem 2.1 for the formal statement. This formulation was previously known for directed acyclic graphs only [21], [34]. For general directed graphs, network coding schemes require convolution codes [10], [27], and it is not known how to use these to compute edge connectivities. Our contribution is to show that this simpler formulation can be used to compute edge connectivities.

The algebraic formulation reduces the problem of computing edge connectivities to solving systems of linear equations. We call the step of computing the vectors the encoding step, and the step of computing the ranks the decoding step. The formulation has the advantage that after the encoding step has been done, the edge connectivities from the source vertex to all other vertices can be computed readily. In the following we first present algorithmic results on single source edge connectivities and then results on all pairs edge connectivities.

In directed acyclic graphs, the encoding step can be implemented directly without solving linear equations. By a simple transformation using superconcentrators [35], [18], we show how to implement the encoding step optimally. This implies a faster algorithm for computing single source edge connectivities in directed acyclic graphs.

Theorem 1.2 Given a simple directed acyclic graph and a source vertex s with outdegree d, the encoding step can be done in O(dm) time, and the edge connectivities from s to all vertices can be computed in O(d^{ω−1}m) time, where ω ≈ 2.38 is the matrix multiplication exponent.

The algorithm runs in linear time when d is a constant, and by a simple reduction it can be used to return all vertices with edge connectivity at most d from the source s in linear time. We note that the encoding algorithm can also be used to improve the random linear coding algorithm [17] for network coding.

In some graphs the system of linear equations can be solved more efficiently. For instance, we can use a recent result of Alon and Yuster [3] to perform the encoding step faster in directed planar graphs with constant maximum degree. The best known algorithm to compute single source edge connectivities in directed planar graphs requires O(n^2) time, by using an O(n) time algorithm to compute s-t edge connectivity [8].

Theorem 1.3 Given a simple directed planar graph with constant maximum degree and a source vertex s, the edge connectivities from s to all vertices can be computed in O(n^{ω/2}) time.

For general directed graphs, we show that all pairs edge connectivities can be computed in one matrix inverse time, instead of solving linear equations for each source vertex separately. The algorithm is faster when m = O(n^{1.93}); for example, when m = O(n) it takes O(n^ω) time while the best known algorithm takes O(n^{3.5}) time.

Theorem 1.4 Given a simple directed graph, the edge connectivities between all pairs of vertices can be computed in O(m^ω) time, where m is the number of edges in the graph.

We show that the matrix inverse can be computed more efficiently for graphs that are "well-separable" (which will be defined in Section 4.2), including planar graphs, bounded genus graphs, fixed minor free graphs, etc. The resulting algorithm is faster than that for general graphs when the maximum degree is O(√n).

Theorem 1.5 Given a simple well-separable directed graph with maximum degree d, the edge connectivities between all pairs of vertices can be computed in O(d^{ω−2}n^{ω/2+1}) time.

The idea of transforming the graph using superconcentrators can also be used in other graph connectivity problems. In Section 5 we show how to use expanders and superconcentrators to speed up the algorithms for finding edge splitting-off operations preserving edge connectivities in undirected and directed graphs.

1.2. Related Work

The standard way to solve the s-t edge connectivity problem in directed graphs is by network flow techniques. For simple directed graphs, the best known algorithm is an O(min{n^{2/3}, m^{1/2}}·m) time algorithm [11] by the blocking flow method. As mentioned previously, in directed graphs it is not known how to compute all pairs edge connectivities faster than computing the s-t edge connectivity for Ω(n^2) pairs separately. This is in contrast to the problem in undirected graphs, where all pairs edge connectivities can be computed in Õ(mn) time by constructing a Gomory-Hu tree [6], much faster than computing edge connectivities for Ω(n^2) pairs. There are improvements in special cases of the s-t edge connectivity problem in directed graphs.
For bipartite matching, the best known algorithms are an O(m√n) time algorithm [19] by the blocking flow method and an O(n^ω) time algorithm [32], [16] by algebraic techniques. It is known that the bipartite matching problem is equivalent to the s-t vertex connectivity problem in directed graphs, and so the above results hold for the latter problem as well [9]. For simple undirected graphs, there is an O(n^{3/2}√m) time algorithm [15] and an Õ(n^{2.2}) time algorithm [23]. In directed planar graphs, there is an optimal O(n) time algorithm for computing s-t edge connectivity [8].

For network coding, the max-information-flow min-cut theorem for multicasting was first proved by an information theoretical argument [1]. Later it was shown that linear network coding is enough to achieve the max-flow min-cut theorem for multicasting [28], and an algebraic framework was developed [25]. Then a polynomial time deterministic algorithm was obtained to construct optimal linear coding schemes for multicasting in directed acyclic graphs [22], and in general directed graphs using convolution codes [10], [27]. Subsequently a simple polynomial time randomized algorithm was obtained for constructing optimal linear coding schemes for multicasting [17], and our algorithm is based on this approach.

1.3. Techniques

The starting observation is that the random linear coding algorithm [17] for network coding does not require knowledge of the graph topology, and it can be used to compute the edge connectivities from the source to the sinks. Actually this observation was already made for directed acyclic graphs in earlier work [21], [34]. For general directed graphs, however, network coding schemes are more complicated (even for random linear coding) as convolution codes are required [10], [27], and it is not known how to use these to compute edge connectivities. Our contribution is to show that a simpler formulation can be used to compute edge connectivities, which also allows us to design more efficient algorithms. The proof is based on the ideas developed in the random linear coding algorithm.

We show a simple transformation using expanders and superconcentrators [35], [18] to design fast algorithms for some graph connectivity problems. In directed graphs, the idea is to replace a vertex of indegree d and outdegree d by a superconcentrator with d inputs and d outputs. This reduces the maximum degree of the graph significantly, while preserving the edge connectivities and only increasing the number of vertices moderately. For random linear coding, using the direct algorithm in the resulting graph gives an optimal algorithm in the original graph. For edge splitting-off, using the straightforward algorithm in the resulting graph gives a considerable improvement over the same algorithm in the original graph. In undirected graphs, we can use constant degree expander graphs for the same purpose.

For all pairs edge connectivities, we observe that if we change the source vertex the system of linear equations is similar, and so we can compute the n single source edge connectivities in one matrix inverse time. For graphs with good separators, we show how to compute the inverse faster by a divide and conquer algorithm, using Schur's formula and the Sherman-Morrison-Woodbury formula, which may be of independent interest. We note that the algorithmic results on all pairs edge connectivities can also be derived from the matrix formulation given by Ingleton and Piff [20], [9] for vertex connectivities.

1.4. Organization

We present the algebraic formulation in Section 2 and prove a formal statement of Theorem 1.1. In Section 3 we present algorithms for single source edge connectivities and prove Theorem 1.2 and Theorem 1.3. In Section 4 we present algorithms for all pairs edge connectivities and prove Theorem 1.4 and Theorem 1.5. Finally we show the use of expanders and superconcentrators in the edge splitting-off problem in Section 5.

2. ALGEBRAIC FORMULATION

Throughout the paper we consider uncapacitated directed graphs where each edge has the same capacity. We begin with some notation and definitions for graphs and matrices. In a directed graph G = (V,E), an edge e = (u,v) is directed from u to v, and we say that u is the tail of e and v is the head of e. For any vertex v ∈ V, we define δ^in(v) = {uv | uv ∈ E} as the set of incoming edges of v and d^in(v) = |δ^in(v)|; similarly we define δ^out(v) = {vw | vw ∈ E} as the set of outgoing edges of v and d^out(v) = |δ^out(v)|. For a subset S ⊆ V, we define δ^in(S) = {uv | u ∉ S, v ∈ S, uv ∈ E} and d^in(S) = |δ^in(S)|. Given a matrix M, the submatrix containing rows S and columns T is denoted by M_{S,T}. A submatrix containing all rows (or columns) is denoted by M_{*,T} (or M_{S,*}), and an entry of M is denoted by M_{i,j}.

We now define the algebraic formulation for computing graph connectivities formally. Given a directed graph G = (V,E) and a specified source vertex s, we are interested in computing the edge connectivities from s to all other vertices. Let m = |E| and E = {e_1, e_2, ..., e_m}. Let d = d^out(s) and δ^out(s) = {e_1, ..., e_d}. Let F be a finite field. For each edge e ∈ E, we associate a global encoding vector f_e ∈ F^d of dimension d where each entry is in F. We say a pair of edges e and e' are adjacent if the head of e is the same as the tail of e', i.e. e = (u,v) and e' = (v,w) for some v ∈ V. For each pair of adjacent edges e and e', we associate a local encoding coefficient k_{e,e'} ∈ F. Given the local encoding coefficients for all pairs of adjacent edges in G, we say that the global encoding vectors are a network coding solution if the following two sets of equations are satisfied:

1) For each edge e_i ∈ δ^out(s), we have f_{e_i} = Σ_{e' ∈ δ^in(s)} k_{e',e_i} · f_{e'} + 1_i, where 1_i is the i-th vector in the standard basis.

2) For each edge e = (v,w) with v ≠ s, we have f_e = Σ_{e' ∈ δ^in(v)} k_{e',e} · f_{e'}.
M ≤ λ 2) For each edge e =(v, w) with v = s,wehavefe = Proof: First we prove that rank( t) s,t with high e∈δin(v) ke,e · fe . probability. The plan is to show that the global encoding t The main theorem in this section is the following formal vector on each incoming edge of is a linear combination s t statement of Theorem 1.1. of the global encoding vectors in a minimum - cut with high probability. Consider a minimum s-t cut δin(T ) where in F |F| mc+2 T ⊂ V with s/∈ T and t ∈ T and d (T )=λs,t.Let Theorem 2.1 Let be a finite field with =Ω( ) for    c E = {e1,...,em } be the set of edges in E with their heads some integer . If we choose each local encoding coefficient in   F in T .Letλ = λs,t and assume δ (T )={e1,...,eλ}. independently and uniformly at random from , then with    c Let F be the d × m matrix (fe ,...,fe ).LetK be probability at least 1 − O(1/m ): 1 m the m × m submatrix of K restricted to the edges in E. 1) There is a unique network coding solution for the     Let H be the d × m matrix (fe ,...,fe , 0,...,0). Then, global encoding vectors. 1 λ in by the network coding requirements, the matrices satisfy 2) For t ∈ V − s,letδ (t)={a1,...,al},theedge the equation F  = F K + H. By the same argument as connectivity from s to t is equal to the rank of the in Lemma 2.2, the matrix (I − K) is nonsingular with matrix (fa ,fa ,...,fa ). 1 2 l probability at least 1 − O(1/mc+1). So the above matrix    −1 We will prove Theorem 2.1 in the remaining of this equation can be rewritten as F = H (I − K ) .This  section. First we rewrite the requirements of a network implies that every global encoding vector in F is a linear  coding solution in a matrix form. Let F be the d × m combination of the global encoding vectors in H , which are δin T matrix (fe ,...,fe ).LetK be the m × m matrix where the global encoding vectors in the cut ( ). Therefore the 1 m in rank(Mt) ≤ d (T )=λs,t. Ki,j = kei,ej when ei and ej are adjacent edges, and 0 oth-    M ≥ λ erwise. Let Hs be the d × m matrix (e1,...,ed, 0, 0,...,0) Next we prove that rank( t) s,t with high proba- where 0 denotes the all zero vector of dimension d. Then the bility. The plan is to show that there is a λ × λ submatrix M  M M  network coding equations are equal to the matrix equation t of t such that det( t ) is a non-zero polynomial of F = FK + Hs. the local encoding coefficients with small total degree. First s t M  To prove the first part of Theorem 2.1, we will prove in we use the edge disjoint paths from to to define t. the following lemma that (I − K) is non-singular with high Let λ = λs,t and P1,...,Pλ beasetofλ edge disjoint probability. Then the above equation can be rewritten as paths from s to t.Setke,e =1for every pair of adjacent −1 e,e∈ P i F = Hs(I − K) , which implies that the global encoding edges i for every , and set all other local encoding vectors are uniquely determined. coefficients to be zero. Then each path sends a distinct unit vector of the standard basis to t, and thus Mt contains a λ×λ λ×λ M  Lemma 2.2 Given the conditions in Theorem 2.1, the ma- identity matrix as a submatrix. Call this submatrix t. M  trix (I − K) is non-singular with probability at least Now we show that the det( t) is a non-zero polynomial 1 − O(1/mc+1). of the local encoding coefficients with small total degree. Recall that F = H(I − K)−1. By considering the adjoint Proof: Since the diagonal entries of K are zero, the matrix of I −K, each entry of (I −K)−1 is a polynomial of diagonal entries of I −K are all one. 
Proof: First we prove that rank(M_t) ≤ λ_{s,t} with high probability. The plan is to show that the global encoding vector on each incoming edge of t is a linear combination of the global encoding vectors in a minimum s-t cut with high probability. Consider a minimum s-t cut δ^in(T), where T ⊂ V with s ∉ T, t ∈ T and d^in(T) = λ_{s,t}. Let E' = {e'_1, ..., e'_{m'}} be the set of edges in E with their heads in T. Let λ = λ_{s,t} and assume δ^in(T) = {e'_1, ..., e'_λ}. Let F' be the d × m' matrix (f_{e'_1}, ..., f_{e'_{m'}}). Let K' be the m' × m' submatrix of K restricted to the edges in E'. Let H' be the d × m' matrix (f_{e'_1}, ..., f_{e'_λ}, 0, ..., 0). Then, by the network coding requirements, the matrices satisfy the equation F' = F'K' + H'. By the same argument as in Lemma 2.2, the matrix (I − K') is nonsingular with probability at least 1 − O(1/m^{c+1}). So the above matrix equation can be rewritten as F' = H'(I − K')^{−1}. This implies that every global encoding vector in F' is a linear combination of the global encoding vectors in H', which are the global encoding vectors in the cut δ^in(T). Therefore rank(M_t) ≤ d^in(T) = λ_{s,t}.

Next we prove that rank(M_t) ≥ λ_{s,t} with high probability. The plan is to show that there is a λ × λ submatrix M'_t of M_t such that det(M'_t) is a non-zero polynomial of the local encoding coefficients with small total degree. First we use the edge disjoint paths from s to t to define M'_t. Let λ = λ_{s,t} and let P_1, ..., P_λ be a set of λ edge disjoint paths from s to t. Set k_{e,e'} = 1 for every pair of adjacent edges e, e' ∈ P_i for every i, and set all other local encoding coefficients to zero. Then each path sends a distinct unit vector of the standard basis to t, and thus M_t contains a λ × λ identity matrix as a submatrix. Call this λ × λ submatrix M'_t.

Now we show that det(M'_t) is a non-zero polynomial of the local encoding coefficients with small total degree. Recall that F = H_s(I − K)^{−1}. By considering the adjoint matrix of I − K, each entry of (I − K)^{−1} is a polynomial of the local encoding coefficients with total degree at most m divided by det(I − K), and thus the same is true for each entry of F. Hence det(M'_t) is a degree λm polynomial of the local encoding coefficients divided by (det(I − K))^λ. By using the edge disjoint paths P_1, ..., P_λ, we have shown that there is a choice of the local encoding coefficients so that rank(M'_t) = λ. Thus det(M'_t) is a non-zero polynomial of the local encoding coefficients. Conditioned on the event that det(I − K) is nonzero, the probability that det(M'_t) = 0 is at most O(λ/m^{c+2}) ≤ O(1/m^{c+1}) by the Schwartz-Zippel Lemma. By bounding the probability that det(I − K) = 0 using Lemma 2.2, we obtain that the probability that det(M'_t) = 0 is at most O(1/m^{c+1}). This implies that M'_t is a full rank submatrix with high probability, and thus rank(M_t) ≥ rank(M'_t) = λ. We conclude that rank(M_t) = λ_{s,t} with probability at least 1 − O(1/m^{c+1}) by the union bound.

Thus the probability that rank(M_t) ≠ λ_{s,t} for some t ∈ V − s is at most n · O(1/m^{c+1}) ≤ O(1/m^c) by the union bound, and this proves the second part of Theorem 2.1. Therefore, we only need to set c to be a constant to guarantee a high probability result, and so each field operation can be done in O(log m) time.

3. SINGLE SOURCE EDGE CONNECTIVITIES

The algebraic formulation can be used to compute the edge connectivities from one source vertex to all other vertices. In general the encoding step requires solving systems of linear equations with m variables and m equations. In this section we show how to obtain faster algorithms for directed acyclic graphs and directed planar graphs.

3.1. Directed Acyclic Graphs

In directed acyclic graphs, one can compute the global encoding vectors directly by following the topological ordering of the graph.
We can first preprocess the graph by a breadth first search to remove all vertices that are not connected from the source vertex. Then the source vertex s is the only vertex with indegree zero, and we set the global encoding vectors of its outgoing edges to be the vectors in the standard basis, as required by the first condition of the network coding solution. We assign random local encoding coefficients to each pair of adjacent edges. Then we process the remaining vertices following the topological ordering of the graph, so that when each vertex v is processed, all the vectors of its incoming edges are already computed. For each outgoing edge of v, we compute its vector by taking a linear combination of the vectors of the incoming edges of v, according to the local encoding coefficients. It is easy to see that the global encoding vectors of all edges will be computed, and that the resulting vectors satisfy the second requirement of the network coding solution.

We now consider efficient implementations of this algorithm. Let d be the outdegree of the source vertex s, so each global encoding vector is of dimension d. For a vertex v with indegree d^in(v), it requires d · d^in(v) arithmetic operations to compute the vector of one outgoing edge of v, and thus it requires d · d^in(v) · d^out(v) operations to compute the vectors of all outgoing edges of v. Therefore the total encoding time of this straightforward implementation is Σ_{v∈V−s} d · d^in(v) · d^out(v), and in the worst case it requires Θ(dnm) steps. An improvement is to use a fast rectangular matrix multiplication algorithm to compute all the outgoing vectors of a vertex, but we will show how to do even better.
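The following is a minimal sketch of the direct encoding step just described (our own helper names and an assumed prime p, not the paper's code): vertices are processed in topological order and each outgoing vector is a random GF(p)-combination of the already-computed incoming vectors; by Theorem 2.1 the connectivities are then the ranks of the incoming vectors.

```python
import numpy as np
from collections import defaultdict, deque

def dag_encoding(edges, s, p=1_000_003, seed=0):
    """Encoding step on a DAG (sketch): returns {edge index: global encoding vector}.
    Assumes every vertex is reachable from s (the paper preprocesses with a BFS)."""
    rng = np.random.default_rng(seed)
    out_s = [i for i, (u, _) in enumerate(edges) if u == s]
    d = len(out_s)
    indeg, out_edges, in_edges = defaultdict(int), defaultdict(list), defaultdict(list)
    verts = {v for e in edges for v in e}
    for i, (u, v) in enumerate(edges):
        indeg[v] += 1
        out_edges[u].append(i)
        in_edges[v].append(i)
    # topological order by Kahn's algorithm
    order, queue = [], deque([v for v in verts if indeg[v] == 0])
    while queue:
        u = queue.popleft()
        order.append(u)
        for i in out_edges[u]:
            indeg[edges[i][1]] -= 1
            if indeg[edges[i][1]] == 0:
                queue.append(edges[i][1])
    f = {}
    for col, i in enumerate(out_s):               # unit vectors on the outgoing edges of s
        f[i] = np.zeros(d, dtype=np.int64)
        f[i][col] = 1
    for v in order:
        if v == s:
            continue
        for j in out_edges[v]:                    # each outgoing vector of v is a random
            acc = np.zeros(d, dtype=np.int64)     # combination of v's incoming vectors
            for i in in_edges[v]:
                acc = (acc + rng.integers(1, p) * f[i]) % p
            f[j] = acc
    return f
```

As analyzed above, this straightforward propagation costs Θ(dnm) in the worst case; the superconcentrator transformation below brings the encoding time down to O(dm).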
Observe that the encoding operation is much faster if the indegree of a vertex is a constant. So the idea is to transform the graph into a bounded degree graph, and it turns out that superconcentrators give us an optimal transformation for this purpose. In the following we give a short introduction to expanders and superconcentrators, and then come back to the algorithm for directed acyclic graphs.

3.1.1. Expander Graphs and Superconcentrators: An expander graph is a sparse graph that exhibits strong connectivity properties. Given S ⊆ V, we define N(S) as the set of vertices in V − S with a neighbor in S.

Definition 3.1 A graph G = (V,E) is called an (n,d,c)-expander if it has n vertices, the maximum degree is d, and for all S ⊂ V with |S| ≤ |V|/2 we have |N(S)| ≥ c|S|; c is called the expansion of G.

There are several explicit constructions of expander graphs with d and c constants (see [18]). Superconcentrators were first defined by Valiant [35] for studying the complexity of linear transformations.

Definition 3.2 ([18]) Let G = (V,E) be a directed graph and let I and O be two disjoint subsets of V. We say that G is a superconcentrator if for every k and every S ⊆ I and T ⊆ O with |S| = |T| = k, there exist k vertex disjoint paths from S to T.

Valiant proved that there exist superconcentrators with O(n) edges; see [18] for a simple recursive construction using bipartite expander graphs. There exist explicit constructions [12] of superconcentrators with the following properties: (1) there are O(n) vertices and O(n) edges, (2) the maximum indegree and the maximum outdegree are constants, (3) the graphs are directed acyclic. The construction in [12] can be implemented in O(n) time for a superconcentrator with n inputs and n outputs.

3.1.2. Proof of Theorem 1.2: To improve the encoding time, we first transform the graph G = (V,E) by replacing each vertex in V − s by a superconcentrator. For each vertex v, let d_v = max{d^in(v), d^out(v)}; we replace v by a superconcentrator Γ_v with |I| = |O| = d_v, and "rewire" each incoming edge of v to a distinct vertex in I and each outgoing edge of v from a distinct vertex in O. The total transformation time is Σ_{v∈V} O(d_v) = Σ_{v∈V} O(d^in(v) + d^out(v)) = O(m), where n = |V| and m = |E|.

Call the graph after the transformation G'. A key point is that the edge connectivity from the source s to a vertex t in G is equal to the edge connectivity from the source s to Γ_t in G', by thinking of Γ_t as a node. To see this, any set of edge disjoint paths from s to t in G corresponds to a set of edge disjoint paths from s to Γ_t in G', because paths sharing a vertex v in G can be routed using the disjoint paths guaranteed by the superconcentrator Γ_v in G'.

By the properties described in Section 3.1.1, G' is a directed acyclic graph with constant maximum indegree. Thus the global encoding vector of an edge in G' can be computed in O(d) time, where d is the outdegree of the source vertex s. Since Γ_v has O(d_v) edges, the global encoding vectors of all the edges of Γ_v can be computed in O(d·d_v) time. Therefore, all the global encoding vectors can be computed in Σ_{v∈V} O(d·d_v) = Σ_{v∈V} O(d·(d^in(v)+d^out(v))) = O(dm) time. This proves the first part of Theorem 1.2.

The encoding time is optimal since writing down all the global encoding vectors already takes O(dm) time. It is surprising that the vectors of all the outgoing edges of Γ_v in G' can be computed in O(d·d_v) time, the same time complexity as computing the vector of just one outgoing edge of v in G. This can be used to improve the random linear coding algorithm [17] for network coding.

To compute the edge connectivity from s to Γ_t in G', by Theorem 2.1 we can compute the rank of the incoming vectors of Γ_t in G'. For a rectangular matrix of size a × b, its rank can be computed in O(ab^{ω−1}) time. So the total decoding time is Σ_v O(d^in(v)·d^{ω−1}) = O(m·d^{ω−1}). This proves the second part of Theorem 1.2.

Our algorithm is faster than running the Even-Tarjan algorithm Ω(n) times for all n and m. When d is small we can compute single source edge connectivities in linear time. Let λ_{s,t} be the edge connectivity from s to t. When the outdegree of s is unbounded, this can be used to obtain an algorithm to compute min{λ_{s,t}, d} for all t in linear time for constant d, by adding a new source s' with d outgoing edges to s and computing the edge connectivities from s'.

3.2. Directed Planar Graphs

One way to compute the global encoding vectors is to solve the system of linear equations F(I − K) = H_s. For some graphs this system of linear equations can be solved more efficiently. The results in this section apply to more general classes of graphs than the class of directed planar graphs.
The result in this section can be applied then√ an (O( n), 2/3)-separation in G corresponds to an to more general classes of graphs than the class of directed (O( n),α)-separation in the line graph for α a constant planar graphs. smaller than one, by taking the edges with one endpoint in We say an undirected graph G =(V,E) has a (f(n),α)- the separator in G as a separator of the line graph. Since a separation, if V can be partitioned into three parts, X, Y, Z separator can be found in linear time in planar graphs [30], such that |X ∪ Z|≤α|V |, |Y ∪ Z|≤α|V |, |Z|≤f(n), by applying Theorem 3.3 with γ =1, β =1/2 and δ and no edges have endpoints in both X and Y . A class of a constant (since the maximum degree is a constant), we ω/2 graphs is hereditary if it is closed under taking subgraphs obtain an O(n ) time randomized algorithm for solving T (e.g. planar graphs). For a hereditary class of graphs, if one system of linear equation (I − K) x = h. The total ω/2 there is always a (f(n),α)-separation for any n vertex encoding time is also O(n ) as d is a constant. To compute graph in this class, then one can recursively separate the the edge connectivity from s to a vertex t, by Theorem 2.1 separated parts X and Y until the separated pieces are small we can just compute the rank of a d × l matrix where l ≤ d, enough. This yields a (f(n),α)-weak separator tree (see and this can be done in constant time. Therefore the total [3] for a formal definition). It is known that planar graphs, decoding time is O(n). This proves Theorem 1.3. bounded√ genus graphs and fixed minor free graphs√ have a The same result holds for bounded√ genus graphs with ( n, 2/3)-separation [30], [14], [2], and thus a ( n, 2/3) constant maximum degree, since a (O( n), 2/3) separator weak separator tree. can be found in linear time [14]. For fixed minor free Given Ax = b where A is an n×n matrix, the underlying graphs with constant maximum degree, we can use Alon graph of A is an undirected graph with n vertices where and Yuster [3] result to obtain a O(n3ω/(3+ω))=O(n1.33) there is an edge ij if Ai,j =0 or Aj,i =0 . The nested time algorithm. Very recently Kawarabayashi and Reed [24] dissection method of Lipton, Rose and Tarjan [29] shows gave an O(n1+) time algorithm for finding a separator in that if A is a symmetric positive definite matrix and the fixed minor-free graphs for any >0, and this can be used underlying graph has an (O(nβ ), 2/3)-weak separator tree to improve the running time for bounded degree fixed minor then Ax = b can be solved in O(nωβ) time. In our free graphs to O(nω/2). −1 4. ALL PAIRS EDGE CONNECTIVITIES of low rank, so that we can compute (I − K) faster by a In this section we show how to compute all pairs edge divide and conquer algorithm. connectivities in general directed graphs, and then show how The rest of this section is organized as follows. In Sec- to improve the running time for graphs with good separators. tion 4.2.1, we will show that if a matrix is “well-separable” (to be defined in the next section), then its inverse can be 4.1. General Directed Graphs computed more efficiently. TheninSection4.2.2,weshow I − K To solve F = FK + Hs, one can compute the inverse that the matrix for graphs with good separators is −1 −1 ω of (I − K) and get F = Hs(I − K) . It takes O(m ) well-separable, and conclude that the edge connectivities in time to compute (I − K)−1 since the matrix (I − K) is of such graphs can be computed efficiently. 
4. ALL PAIRS EDGE CONNECTIVITIES

In this section we show how to compute all pairs edge connectivities in general directed graphs, and then show how to improve the running time for graphs with good separators.

4.1. General Directed Graphs

To solve F = FK + H_s, one can compute the inverse of (I − K) and get F = H_s(I − K)^{−1}. It takes O(m^ω) time to compute (I − K)^{−1} since the matrix (I − K) is of size m × m, but we observe that all pairs edge connectivities can be computed readily once we have (I − K)^{−1}. In our setup, H_s is a d^out(s) × m matrix with a d^out(s) × d^out(s) identity matrix in the columns corresponding to δ^out(s). So F is just equal to ((I − K)^{−1})_{δ^out(s),*} up to permuting rows. Therefore (I − K)^{−1} contains all the global encoding vectors for all source vertices, and thus the total encoding time for all pairs is O(m^ω).

To compute the edge connectivity from s to t, by Theorem 2.1 we compute the rank of F_{*,δ^in(t)}, and this is just equal to the rank of ((I − K)^{−1})_{δ^out(s),δ^in(t)}. As shown in Section 3.1, given a vertex s, computing the ranks of ((I − K)^{−1})_{δ^out(s),δ^in(t)} for all t can be done in O(m·(d^out(s))^{ω−1}) time. So the total decoding time is Σ_{s∈V} O(m·(d^out(s))^{ω−1}). Since d^out(s) ≤ n in a simple graph, Σ_{s∈V} O((d^out(s))^{ω−1}) is at most O((m/n)·n^{ω−1}). This implies that the total decoding time is at most O(m^2 n^{ω−2}), which is O(m^ω) since m ≥ n. This proves Theorem 1.4.

Our algorithm is faster than running the Even-Tarjan algorithm for Ω(n^2) pairs as long as m = O(n^{1.93}). For m = O(n) our algorithm runs in O(n^ω) time, while running the Even-Tarjan algorithm for Ω(n^2) pairs takes O(n^{3.5}) time. We note that Theorem 1.4 and Theorem 1.5 can also be derived from the matrix formulation of Ingleton and Piff [20], [9] for vertex connectivities; by transforming to line graphs, the resulting matrix is similar to (I − K).
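A short sketch of this all-pairs decoding step follows (our own illustration, reusing the GF(p) helpers inv_mod and rank_mod from the sketch in Section 1.1): one inverse of (I − K) serves every source, and each pair only needs the rank of a small submatrix.

```python
import numpy as np

def all_pairs_edge_connectivities(edges, p=1_000_003, seed=0):
    """Sketch of Theorem 1.4: invert (I - K) over GF(p) once, then
    lambda(s, t) = rank of the submatrix with rows delta_out(s), columns delta_in(t).
    Pairs where s has no outgoing or t has no incoming edges have connectivity 0
    and are skipped."""
    rng = np.random.default_rng(seed)
    m = len(edges)
    K = np.zeros((m, m), dtype=np.int64)
    for i, (_, head) in enumerate(edges):
        for j, (tail, _) in enumerate(edges):
            if head == tail:
                K[i, j] = rng.integers(1, p)
    N = inv_mod((np.eye(m, dtype=np.int64) - K) % p, p)   # encoding vectors for all sources
    verts = {v for e in edges for v in e}
    out_rows = {v: [i for i, (u, _) in enumerate(edges) if u == v] for v in verts}
    in_cols = {v: [i for i, (_, w) in enumerate(edges) if w == v] for v in verts}
    return {(s, t): rank_mod(N[np.ix_(out_rows[s], in_cols[t])], p)
            for s in verts for t in verts
            if s != t and out_rows[s] and in_cols[t]}
```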
4.2. Directed Graphs with Good Separators

We show a faster method to compute (I − K)^{−1} when its underlying graph has a weak separator tree (see Section 3.2). Let

  M = [ A  B ]
      [ C  D ],

where A and D are square matrices. If A is nonsingular, S = D − CA^{−1}B is called the Schur complement of A. Our observation is that for graphs with good separators, we can find a partition of (I − K) such that B and C are of low rank, so that we can compute (I − K)^{−1} faster by a divide and conquer algorithm.

The rest of this section is organized as follows. In Section 4.2.1, we show that if a matrix is "well-separable" (to be defined below), then its inverse can be computed more efficiently. Then in Section 4.2.2, we show that the matrix I − K for graphs with good separators is well-separable, and conclude that the edge connectivities in such graphs can be computed efficiently.

4.2.1. Inverse of Well-Separable Matrices: Let M be an n × n matrix as defined above. We say M is (r,α)-well-separable (α is a constant smaller than one) if M is invertible, both A and D are (r,α)-well-separable square matrices with dimension no more than αn, and both B and C are of rank no more than r. In this section we show that the inverse of a well-separable matrix can be computed in O(r^{ω−2}n^2) time.

To compute M^{−1}, we first compute the inverses of A and D recursively. We then compute M^{−1} using Schur's formula. A key step is to use the Sherman-Morrison-Woodbury formula to compute S^{−1} efficiently, instead of first computing D − CA^{−1}B and then (D − CA^{−1}B)^{−1}. The Sherman-Morrison-Woodbury formula tells us how to compute (D + UV^T)^{−1} efficiently if D^{−1} is known and U, V are low rank matrices.

Lemma 4.1 (Schur's formula [37]) Let M and A, B, C, D be matrices as defined above. Then det(M) = det(A) × det(S). If A and S are nonsingular, then

  M^{−1} = [ A^{−1} + A^{−1}BS^{−1}CA^{−1}   −A^{−1}BS^{−1} ]
           [ −S^{−1}CA^{−1}                  S^{−1}        ].

Lemma 4.2 (Sherman-Morrison-Woodbury [36]) Let M be an n × n matrix, U be an n × k matrix, and V be an n × k matrix. Suppose that M is nonsingular. Then
• M + UV^T is nonsingular if and only if I + V^T M^{−1}U is nonsingular.
• If M + UV^T is nonsingular, then (M + UV^T)^{−1} = M^{−1} − M^{−1}U(I + V^T M^{−1}U)^{−1}V^T M^{−1}.

Firstly, since rank(B), rank(C) ≤ r, we can write B = B_u B_v^T and C = C_u C_v^T where B_u, B_v, C_u and C_v are O(n) × r matrices. This can be done in O(r^{ω−2}n^2) time.

Claim 4.3 If we have A^{−1} and D^{−1}, then we can compute S^{−1} = (D − CA^{−1}B)^{−1} in O(r^{ω−2}n^2) time.

Proof: By Schur's formula, S = D − CA^{−1}B is non-singular if M^{−1} and A^{−1} exist. The proof consists of two steps. We first write CA^{−1}B as a product of two low rank matrices. Then we use the Sherman-Morrison-Woodbury formula to compute S^{−1} efficiently. In the following, rectangular matrix multiplications are done by dividing each matrix into submatrices of size r × r; multiplications between these submatrices can be done in O(r^ω) time.

First we consider CA^{−1}B, which is equal to (CA^{−1}B_u)B_v^T. The matrix CA^{−1}B_u is of size O(n) × r and can be computed using O(n^2/r^2) submatrix multiplications of size r × r. By putting U = −CA^{−1}B_u and V = B_v, we have CA^{−1}B = −UV^T, so S = D + UV^T where U and V are of size O(n) × r.

Now we can use the Sherman-Morrison-Woodbury formula to compute (D + UV^T)^{−1} efficiently since D^{−1} is known and U, V are low rank matrices. By the formula,

  S^{−1} = D^{−1} − D^{−1}U(I + V^T D^{−1}U)^{−1}V^T D^{−1}.

Similar to the above, V^T D^{−1}U can be computed using O(n^2/r^2) submatrix multiplications. Since S is non-singular, (I + V^T D^{−1}U)^{−1} exists by Lemma 4.2, and it can be computed in one submatrix multiplication time as I + V^T D^{−1}U is of size r × r. Finally, since (D^{−1}U)^T and V^T D^{−1} are r × O(n) matrices, we can compute (D^{−1}U)(I + V^T D^{−1}U)^{−1}(V^T D^{−1}) using O(n^2/r^2) submatrix multiplications. Hence the overall time complexity is O(r^ω · n^2/r^2) = O(r^{ω−2}n^2).
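The following is a minimal sketch of Claim 4.3 over the reals (the paper works over a finite field and multiplies r × r blocks with fast matrix multiplication; plain numpy is used here, and all names and the toy instance are ours). It forms S^{−1} from A^{−1}, D^{−1} and the low-rank factors of B and C without ever building D − CA^{−1}B.

```python
import numpy as np

def schur_inverse_via_smw(A_inv, D_inv, Bu, Bv, Cu, Cv):
    """Return S^{-1} = (D - C A^{-1} B)^{-1}, given A^{-1}, D^{-1} and the
    low-rank factors B = Bu @ Bv.T, C = Cu @ Cv.T."""
    U = -(Cu @ (Cv.T @ (A_inv @ Bu)))               # U = -C A^{-1} Bu, an O(n) x r matrix
    V = Bv                                          # so  S = D + U V^T
    small = np.eye(U.shape[1]) + V.T @ D_inv @ U    # the r x r core of the Woodbury formula
    return D_inv - D_inv @ U @ np.linalg.solve(small, V.T @ D_inv)

# toy usage: a random well-conditioned instance with n = 6, r = 2 (illustration only)
rng = np.random.default_rng(1)
n, r = 6, 2
A = rng.standard_normal((n, n)) + 5 * np.eye(n)
D = rng.standard_normal((n, n)) + 5 * np.eye(n)
Bu, Bv, Cu, Cv = (rng.standard_normal((n, r)) for _ in range(4))
S = D - (Cu @ Cv.T) @ np.linalg.inv(A) @ (Bu @ Bv.T)
assert np.allclose(schur_inverse_via_smw(np.linalg.inv(A), np.linalg.inv(D), Bu, Bv, Cu, Cv),
                   np.linalg.inv(S))
```

Every product in the function involves an O(n) × r factor, which is what gives the O(r^{ω−2}n^2) bound when the blocks are multiplied with fast matrix multiplication as in the proof.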
Claim 4.4 If we have A^{−1}, D^{−1} and S^{−1}, then we can compute M^{−1} in O(r^{ω−2}n^2) time.

Proof: By Schur's formula, to compute M^{−1} we need to compute A^{−1}BS^{−1}CA^{−1}, A^{−1}BS^{−1} and S^{−1}CA^{−1}. These multiplications all involve some O(n) × r matrix B_u, B_v, C_u or C_v as follows:
• A^{−1}BS^{−1}CA^{−1} = (A^{−1}B_u)(B_v^T S^{−1}CA^{−1})
• A^{−1}BS^{−1} = (A^{−1}B_u)(B_v^T S^{−1})
• S^{−1}CA^{−1} = (S^{−1}C_u)(C_v^T A^{−1})
All steps during the computation of these products involve an O(n) × r matrix. Hence they all take only O(n^2/r^2) submatrix multiplications to compute. Therefore the total time complexity is O(r^ω · n^2/r^2) = O(r^{ω−2}n^2).

By Claim 4.3 and Claim 4.4, we can compute M^{−1} in O(r^{ω−2}n^2) time if we are given A^{−1}, D^{−1} and also low rank decompositions of B and C. We can then use a recursive algorithm to compute M^{−1}.

Theorem 4.5 If an n × n matrix M is (r,α)-well-separable, and the separation can be found in O(n^γ) time, then M^{−1} can be computed in O(n^γ + r^{ω−2}n^2) time.

Proof: First we analyse the time to compute M^{−1}. We use an O(n^γ) time algorithm to find an (r,α)-well-separable partition of M. By the property of the partition, both B and C are matrices of rank at most r. Then we can write B = B_u B_v^T and C = C_u C_v^T in O(r^{ω−2}n^2) time, where B_u, B_v, C_u and C_v are all O(n) × r matrices. We then compute A^{−1} and D^{−1} recursively, as A and D by definition are also (r,α)-well-separable. Using these inverses we can apply Claim 4.3 to compute S^{−1}, and then apply Claim 4.4 to compute M^{−1} using A^{−1}, D^{−1} and S^{−1}. Thus, given A^{−1} and D^{−1}, we can compute M^{−1} in O(r^{ω−2}n^2) time. Let f(n) be the time to compute M^{−1} of size n × n. Then f(n) = f(αn) + f((1 − α)n) + O(n^γ) + O(r^{ω−2}n^2), and it can be shown by induction that f(n) = O(n^γ + r^{ω−2}n^2).

4.2.2. Proof of Theorem 1.5: In this section we show that all pairs edge connectivities in planar graphs, bounded genus graphs, and fixed minor free graphs can be computed in O(d^{ω−2}n^{ω/2+1}) time.

We will first see that the underlying matrix I − K for these graphs is well-separable. Thus we can apply Theorem 4.5, together with the fact that these graphs have O(n) edges, to obtain an O(d^{ω−2}n^{ω/2+1}) time algorithm to compute (I − K)^{−1}. Finally we show that the time to compute the required ranks of all submatrices of (I − K)^{−1} is O(d^{ω−2}n^{ω/2+1}).

A fixed minor free graph G (and its subgraphs) has O(n) edges and an (O(√n), 2/3)-separation (recall the definition from Section 3.2). We claim that the matrix I − K of G is (O(d√n),α)-well-separable for some constant α < 1. To show that I − K is well-separable, we can use a separator to divide the graph into three parts X, Y, Z such that |X ∪ Z| ≤ 2n/3, |Y ∪ Z| ≤ 2n/3 and |Z| = O(√n), and there are no edges between X and Y. Let E_1 be the set of edges with at least one endpoint in X and let E_2 be E − E_1. We partition I − K as follows:

         E_1  E_2
   E_1 [  A    B  ]
   E_2 [  C    D  ]

Since |Y| = Ω(n) and each vertex in Y is of degree at least one, we have |E_1| = Ω(n) and |E_2| = Ω(n). Also we have |E_1| = O(n) and |E_2| = O(n) since m = O(n), and thus |E_1|, |E_2| ≤ α|E| for some constant α < 1. Let F_1 be the subset of E_1 with one endpoint in Z, and define F_2 similarly. Then any edge in E_1 − F_1 does not share an endpoint with any edge in E_2 − F_2. Since |Z| = O(√n) and each vertex is of degree at most d, there are at most O(d√n) edges with one endpoint in Z, and thus |F_1| = O(d√n) and |F_2| = O(d√n). Therefore, the submatrices B and C have O(d√n) non-zero rows and columns, and thus are of rank at most O(d√n). Also I − K must be invertible, because I − K has ones on its diagonal and thus det(I − K) is a non-zero polynomial. The matrices A and D correspond to the matrices I − K for subgraphs of G, and by the inductive hypothesis A and D are (O(d√n),α)-well-separable. Hence we can conclude that I − K is (O(d√n),α)-well-separable.

Now it remains to analyze the probability that all of I − K, A and D remain invertible after substituting random values for the k_{i,j}. Note that det(I − K) is a degree m non-zero polynomial, because I − K has ones on its diagonal. Thus, by the Schwartz-Zippel Lemma, I − K is invertible with probability at least 1 − m/|F|. Since A and D correspond to matrices I − K for subgraphs of G, we can apply the same argument to A and D recursively, and all the required matrices are invertible with probability at least 1 − O(m log m/|F|) by the union bound.

As a result, I − K is (O(d√n),α)-well-separable. We can now apply Theorem 4.5 with γ = 1.5 [2], so (I − K)^{−1} can be computed in O(n^{1.5} + (d√n)^{ω−2}n^2) = O(d^{ω−2}n^{ω/2+1}) time. For the decoding time, by a similar argument as in Section 4.1, the total decoding time is Σ_{s∈V} O(m·(d^out(s))^{ω−1}). Since m = O(n) and d^out(s) ≤ d, this is at most O(n · (n/d) · d^{ω−1}) = O(n^2 d^{ω−2}), which is dominated by the encoding time. Thus we obtain the following corollary.

Corollary 4.6 All pairs edge connectivities can be computed in O(d^{ω−2}n^{ω/2+1}) time for any directed fixed minor free graph with maximum degree d.

This is faster than Theorem 1.4 when d = O(√n).
5. EDGE SPLITTING-OFF

In this section we show that expanders and superconcentrators can be applied to design fast algorithms for another well-studied graph connectivity problem.

Splitting-off a pair of edges (ux, xv) means deleting these two edges and adding a new edge uv if u ≠ v. Note that this definition works for both undirected and directed graphs. The content of edge splitting-off theorems is to prove the existence of one pair of edges (ux, xv) so that its splitting-off preserves the edge connectivities for all pairs of vertices. These results are a powerful tool for proving theorems and developing algorithms for many graph connectivity problems, including connectivity augmentation problems, network design problems, tree packing problems and graph orientation problems (see the references in [26]).

Faster algorithms for the splitting-off operation can be used to obtain faster algorithms for many graph connectivity problems. There are several existing algorithms for this task (e.g. [13], [5], [7]) in undirected and directed graphs, but most algorithms only preserve global edge connectivity (i.e. the value of the global min-cut). Our results in this section apply to the general setting where all pairs edge connectivities are preserved.

In undirected graphs, Mader proved that there is a "good" pair of edges in almost all situations.

Theorem 5.1 (Mader [31]) Let G = (V,E) be an undirected graph and x ∈ V. If there is no cut edge incident to x and d(x) ≠ 3, then there exists an edge pair (yx, xz) so that its splitting-off preserves the edge connectivity for every pair of vertices a, b ∈ V − x.

There is a similar theorem for Eulerian directed graphs, where for every vertex the indegree is equal to the outdegree.

Theorem 5.2 (Bang-Jensen, Frank and Jackson [4]) Let G = (V,E) be an Eulerian directed graph and x ∈ V. Then there exists an edge pair (yx, xz) so that its splitting-off preserves the edge connectivity for every ordered pair of vertices a, b ∈ V − x.

In undirected graphs, when d(x) is even, Mader's theorem can be repeatedly applied until x is of degree zero. In directed graphs, Theorem 5.2 can also be repeatedly applied until x is of degree zero. We call this a complete splitting-off at x. Our goal is to design a fast algorithm to completely split-off x. A straightforward algorithm is to try every pair of edges on x, and check whether the edge connectivities decrease for some pairs after its splitting-off. In the worst case this requires O((d(x))^2) attempts to completely split-off x. We show how to use expanders and superconcentrators to completely split-off x in O(d(x)) attempts in undirected and directed graphs respectively.

In undirected graphs, it is proved in [26] that O(d(x)) attempts are enough to completely split-off x, using a structural theorem of mincuts. We show how to get the same result immediately using expanders. We replace x by a constant degree expander graph H_x with O(d(x)) vertices, and "rewire" each edge of x to a distinct vertex in H_x. The graph H_x is required to satisfy the following property: for every subset S ⊆ V(H_x) with |S| ≤ |V(H_x)|/2, it holds that d(S) ≥ |S|, where d(S) is the number of edges with exactly one endpoint in S. By Menger's theorem, it follows that for every S, T ⊆ V(H_x) with |S| = |T|, there are |S| edge disjoint paths between S and T in H_x. Hence the edge connectivity between every pair of vertices a, b ∈ V − x in the resulting graph is the same as in the original graph. We can add extra edges to make sure that every vertex in H_x is of even degree. Then every vertex in H_x can be completely split-off by Mader's theorem. Since every vertex in H_x is of constant degree, we can completely split-off one vertex in H_x in O(1) attempts by the straightforward algorithm, and thus we can completely split-off all vertices in H_x in O(d(x)) attempts since H_x has only O(d(x)) vertices. After we completely split-off all the vertices in H_x, this is the same as completely splitting-off x in the original graph, and the edge connectivity between every pair is preserved.
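The rewiring step of this reduction is mechanical, and a small sketch is given below (our own illustration for undirected graphs; the expander H_x itself, e.g. the G_p^8 construction described next, is taken as given, and its vertex names are assumed disjoint from V(G)).

```python
def rewire_through_expander(edges, x, Hx_vertices, Hx_edges):
    """Delete vertex x, attach each former edge of x to a distinct vertex (port)
    of the expander H_x, and add the internal edges of H_x.  `edges` is a list of
    undirected edges (u, v); H_x is assumed to have at least d(x) vertices and to
    satisfy d(S) >= |S| for every S with |S| <= |V(H_x)|/2."""
    incident = [e for e in edges if x in e]
    assert len(Hx_vertices) >= len(incident)
    new_edges = [e for e in edges if x not in e]
    for port, (u, v) in zip(Hx_vertices, incident):
        other = v if u == x else u
        new_edges.append((other, port))          # rewire an edge of x to a port of H_x
    new_edges.extend(Hx_edges)                   # internal expander edges
    return new_edges
```

Completely splitting-off every vertex of H_x in the rewired graph then has the same effect as completely splitting-off x in the original graph, as argued above.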
For the algorithm to be efficient, we need to show that a "good" expander graph H_x can be constructed efficiently, and here we give an example of one such construction. For a d-regular graph G = (V,E), the edge expansion of a set S with |S| ≤ |V|/2 is defined as h(S) = d(S)/(d·|S|). The requirement that d(S) ≥ |S| for every set S with |S| ≤ |V|/2 is equivalent to the requirement that h(S) ≥ 1/d for every set S with |S| ≤ |V|/2. Let h(G) = min_{S: |S| ≤ |V|/2} h(S). By Cheeger's inequality, h(G) ≥ (1 − λ_2)/2, where λ_2 is the second largest eigenvalue of the normalized adjacency matrix A(G)/d. So it suffices to construct a graph with λ_2 ≤ 1 − 2/d. Consider the graph G_p = (V_p, E_p) in which V_p = {0, 1, ..., p−1}. For a ∈ V_p − {0}, the vertex a is connected to a+1 mod p, to a−1 mod p, and to its multiplicative inverse a^{−1} mod p. The vertex 0 is connected to 1, to p−1, and has a self-loop. This graph is 3-regular, and it is known that λ_2 < 0.9999 (see [18], Section 11.1.2). By taking the 8-th power of G_p, denoted by G_p^8, one can verify that its second largest normalized eigenvalue λ_2^8 is at most 1 − 2/d for its degree d = 3^8. So G_p^8 satisfies the property that d(S) ≥ |S| for every S with |S| ≤ |V|/2, and is of constant degree. Since G_p can be constructed in O(p log p) time, G_p^8 can also be constructed in O(p log p) time. Therefore, for a vertex x with degree d(x), we can set H_x to be G_p^8 for p = O(d(x)).

In directed graphs, we can completely split-off a vertex in O(d^in(x)) attempts using superconcentrators. The argument is very similar to the undirected case and is omitted.

In general, expander graphs and superconcentrators can be used to reduce the maximum degree significantly while preserving the edge connectivities and only increasing the number of vertices moderately. This may be used to reduce the running time of an algorithm that has a super-linear dependency on the maximum degree. We believe that these reductions will find further applications for other graph connectivity problems.

ACKNOWLEDGEMENT

We thank Andrej Bogdanov for his help on expanders and superconcentrators, and Nick Harvey for useful comments on the paper.

REFERENCES

[1] R. Ahlswede, N. Cai, S.R. Li, and R.W. Yeung. Network Information Flow. IEEE Transactions on Information Theory, 46(4):1204–1216, 2000.
[2] N. Alon, P. Seymour, R. Thomas. A Separator Theorem for Nonplanar Graphs. Journal of the AMS, 3(4):801–808, 1990.
[3] N. Alon, R. Yuster. Solving Linear Systems Through Nested Dissection. FOCS 2010, 225–234.
[4] J. Bang-Jensen, A. Frank, and B. Jackson. Preserving and increasing local edge-connectivity in mixed graphs. SIAM Journal on Discrete Mathematics, 8(2):155–178, 1995.
[5] A.A. Benczúr and D.R. Karger. Augmenting undirected edge connectivity in Õ(n^2) time. J. Algorithms, 37(1):2–36, 2000.
[6] A. Bhalgat, R. Hariharan, T. Kavitha, and D. Panigrahi. An Õ(mn) Gomory-Hu tree construction algorithm for unweighted graphs. STOC 2007, 605–614.
[7] A. Bhalgat, R. Hariharan, T. Kavitha, and D. Panigrahi. Fast edge splitting and Edmonds' arborescence construction for unweighted graphs. SODA 2008, 455–464.
[8] U. Brandes, D. Wagner. A linear time algorithm for the arc disjoint Menger problem in planar directed graphs. ESA 1997, 64–77.
[9] J. Cheriyan. Randomized Õ(M(|V|)) Algorithms for Problems in Matching Theory. SIAM J. Computing, 26(6):1635–1655, 1997.
[10] E. Erez, M. Feder. Convolutional network codes. ISIT 2004, 146.
[11] S. Even, R.E. Tarjan. Network Flow and Testing Graph Connectivity. SIAM J. Computing, 4(4):507–518, 1975.
[12] O. Gabber, Z. Galil. Explicit Constructions of Linear-Sized Superconcentrators. Journal of Computer and System Sciences, 22:407–420, 1981.
[13] H.N. Gabow. Efficient splitting off algorithms for graphs. STOC 1994, 696–705.
[14] J.R. Gilbert, J.P. Hutchinson, and R.E. Tarjan. A Separator Theorem for Graphs of Bounded Genus. J. Algorithms, 5:391–407, 1984.
[15] A.V. Goldberg, S. Rao. Flows in undirected unit capacity networks. FOCS 1997, 32–34.
[16] N. Harvey. Algebraic Algorithms for Matching and Matroid Problems. SIAM J. Computing, 39:679–702, 2009.
[17] T. Ho, M. Médard, R. Koetter, D.R. Karger, M. Effros, J. Shi, B. Leong. A Random Linear Network Coding Approach to Multicast. IEEE Transactions on Information Theory, 52:4413–4430, 2006.
[18] S. Hoory, N. Linial, A. Wigderson. Expander Graphs and their Applications. Bulletin of the AMS, 43:439–561, 2006.
[19] J. Hopcroft, R.M. Karp. An n^{5/2} algorithm for maximum matchings in bipartite graphs. SIAM J. Computing, 2:225–231, 1973.
[20] A.W. Ingleton, M.J. Piff. Gammoids and transversal matroids. J. Combin. Theory Ser. B, 15:51–68, 1973.
[21] S. Jaggi. Design and analysis of network codes. PhD thesis, California Institute of Technology, 2006.
[22] S. Jaggi, P. Sanders, P.A. Chou, M. Effros, S. Egner, K. Jain, L.M.G.M. Tolhuizen. Polynomial time algorithms for multicast network code construction. IEEE Transactions on Information Theory, 51:1973–1982, 2005.
[23] D.R. Karger, M.S. Levine. Finding maximum flows in undirected graphs seems easier than bipartite matching. STOC 1998, 69–78.
[24] K. Kawarabayashi, B. Reed. A separator theorem in minor-closed classes. FOCS 2010.
[25] R. Koetter, M. Médard. An algebraic approach to network coding. IEEE/ACM Transactions on Networking, 11(5):782–795, 2003.
[26] L.C. Lau, C.K. Yung. Efficient edge splitting-off algorithms maintaining all-pairs edge-connectivities. IPCO 2010, 96–109.
[27] S.-Y.R. Li, R.W. Yeung. On Convolutional Network Coding. ISIT 2006, 1743–1747.
[28] S.-Y.R. Li, R.W. Yeung, and N. Cai. Linear Network Coding. IEEE Transactions on Information Theory, 49(2):371–381, 2003.
[29] R.J. Lipton, D.J. Rose, and R.E. Tarjan. Generalized Nested Dissection. SIAM J. Numerical Analysis, 16(2):346–358, 1979.
[30] R.J. Lipton, R.E. Tarjan. A separator theorem for planar graphs. SIAM J. Applied Mathematics, 36:177–189, 1979.
[31] W. Mader. A reduction method for edge-connectivity in graphs. Annals of Discrete Mathematics, 3:145–164, 1978.
[32] M. Mucha, P. Sankowski. Maximum matchings via Gaussian elimination. FOCS 2004, 248–255.
[33] A. Schrijver. Combinatorial Optimization. Springer, 2003.
[34] A.L. Toledo, X. Wang. Efficient multipath in sensor networks using diffusion and network coding. 40th Annual Conference on Information Sciences and Systems, 2006.
[35] L. Valiant. On Non-linear Lower Bounds in Computational Complexity. STOC 1975.
[36] M.A. Woodbury. Inverting modified matrices. Memorandum Report 42, Statistical Research Group, Princeton University, 1950.
[37] F. Zhang. The Schur Complement and Its Applications. Springer, 2005.