
NUMERICAL ALGEBRA, CONTROL AND OPTIMIZATION, Volume 4, Number 3, September 2014, pp. 227–239. doi:10.3934/naco.2014.4.227

SPARSE INVERSE MATRICES FOR SCHILDERS’ FACTORIZATION APPLIED TO RESISTOR NETWORK MODELING

Sangye Lungten†, Wil H. A. Schilders and Joseph M. L. Maubach
Center for Analysis, Scientific Computing and Applications
Department of Mathematics and Computer Science
Eindhoven University of Technology
5600 MB, Eindhoven, The Netherlands

(Communicated by Xiaoqing Jin)

Abstract. Schilders’ factorization can be used as a basis for preconditioning indefinite linear systems which arise in many problems, such as least-squares, saddle-point and electronic circuit simulation problems. Here we consider its application to resistor network modeling. In that case the sparsity of the blocks in Schilders’ factorization depends on the sparsity of the inverse of a permuted incidence matrix. We introduce three different possible permutations and determine which permutation leads to the sparsest inverse of the incidence matrix. The permutation techniques are based on types of sub-digraphs of the network of an incidence matrix.

1. Introduction and motivation. Indefinite linear systems of the form
\[
\underbrace{\begin{bmatrix} A & B^T \\ B & 0 \end{bmatrix}}_{\mathcal{A}}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} f \\ g \end{bmatrix},
\tag{1}
\]
with A of size n × n and B of size m × n, arise in electronic circuit simulations and many other applications, where A is symmetric and positive (semi)definite, and B^T is of maximal column rank m (≤ n). Preconditioning techniques to solve (1) have become very important, especially for problems which arise from the Stokes equation, resulting in saddle point problems [6]. Schilders’ factorization [7] can be used as a basis for such preconditioners. A possibly permuted
\[
\tilde{\mathcal{A}} =
\begin{bmatrix}
\tilde{A} & \tilde{B}^T \\
\tilde{B} & 0
\end{bmatrix}
\]

2010 Mathematics Subject Classification. Primary: 58F15, 58F17; Secondary: 53C35. Key words and phrases. Schilders’ factorization, lower trapezoidal, digraph, incidence matrix, nilpotent. † The first author’s PhD research is supported by the Erasmus Mundus IDEAS project and the Department of Mathematics and Computer Science, Eindhoven University of Technology.

is split into a block 3 × 3 structure and factorized:
\[
\begin{bmatrix}
\tilde{A}_{11} & \tilde{A}_{12} & \tilde{B}_1^T \\
\tilde{A}_{21} & \tilde{A}_{22} & \tilde{B}_2^T \\
\tilde{B}_1 & \tilde{B}_2 & 0
\end{bmatrix}
=
\begin{bmatrix}
\tilde{B}_1^T & 0 & L_1 \\
\tilde{B}_2^T & L_2 & M \\
0 & 0 & I
\end{bmatrix}
\begin{bmatrix}
D_1 & 0 & I \\
0 & D_2 & 0 \\
I & 0 & 0
\end{bmatrix}
\begin{bmatrix}
\tilde{B}_1 & \tilde{B}_2 & 0 \\
0 & L_2^T & 0 \\
L_1^T & M^T & I
\end{bmatrix},
\tag{2}
\]
where the blocks L_1, L_2 and M depend on B̃_1^{-T}. The permutation has to be such that B^T is permuted into a lower trapezoidal form

\[
\tilde{B}^T =
\begin{bmatrix}
\tilde{B}_1^T \\
\tilde{B}_2^T
\end{bmatrix},
\tag{3}
\]
where the top part B̃_1^T is a lower triangular matrix of size m × m and B̃_2^T is an (n − m) × m matrix. Generally, A is sparse and so is Ã. But (2) can have blocks which are more dense because of B̃_1^{-T}. In other words, the sparsity of the blocks in (2) depends on the sparsity of B̃_1^{-T}. In order to illustrate the involvement of B̃_1^{-T} while computing the blocks in (2), we consider (4) and (5). The details for deriving these formulas can be found in [7].

\[
L_1 = \tilde{B}_1^T \,\mathrm{lower}\!\left(\tilde{B}_1^{-T} \tilde{A}_{11} \tilde{B}_1^{-1}\right);
\tag{4}
\]
\[
M = \left(\tilde{A}_{21} - \tilde{B}_2^T L_1^T\right) \tilde{B}_1^{-1} - \tilde{B}_2^T D_1.
\tag{5}
\]
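To make the role of B̃_1^{-T} in (4) and (5) concrete, the following minimal numpy sketch computes D_1, L_1 and M for small dense blocks and checks them against the (1,1) and (2,1) blocks of the factorization (2). Reading the operator "lower" in (4) as the strictly lower triangular part and taking D_1 as the diagonal of B̃_1^{-T} Ã_11 B̃_1^{-1} is our interpretation of [7]; the function name, variable names and random test data are ours.

import numpy as np

def schilders_blocks(A11, A21, B1T, B2T):
    """Sketch of the block formulas (4)-(5): returns D1, L1 and M.

    Assumes 'lower' in (4) means the strictly lower triangular part and that
    D1 is the diagonal of B1^{-T} A11 B1^{-1} (our reading of [7])."""
    B1T_inv = np.linalg.inv(B1T)            # B1^{-T}, formed densely here for illustration only
    X = B1T_inv @ A11 @ B1T_inv.T           # B1^{-T} A11 B1^{-1}
    D1 = np.diag(np.diag(X))                # diagonal part -> D1
    L1 = B1T @ np.tril(X, -1)               # formula (4)
    M = (A21 - B2T @ L1.T) @ B1T_inv.T - B2T @ D1   # formula (5)
    return D1, L1, M

# small randomly generated consistency check
rng = np.random.default_rng(0)
m, n = 4, 7
B1T = np.tril(rng.integers(-1, 2, (m, m))) + 2.0 * np.eye(m)   # nonsingular lower triangular
B2T = rng.integers(-1, 2, (n - m, m)).astype(float)
A = rng.standard_normal((n, n)); A = A @ A.T                   # symmetric positive definite
A11, A21 = A[:m, :m], A[m:, :m]
D1, L1, M = schilders_blocks(A11, A21, B1T, B2T)
# the (1,1) and (2,1) blocks of the product in (2) must reproduce A11 and A21
assert np.allclose(B1T @ D1 @ B1T.T + L1 @ B1T.T + B1T @ L1.T, A11)
assert np.allclose(B2T @ D1 @ B1T.T + M @ B1T.T + B2T @ L1.T, A21)

In practice one would exploit the triangular structure of B̃_1^T instead of forming its inverse explicitly; the sketch only serves to verify the formulas.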

Although Ã_11 and B̃_1^T in (4) are sparse, the product B̃_1^{-T} Ã_11 B̃_1^{-1} can become dense when B̃_1^{-T} is dense. Eventually this will lead to even more dense blocks L_1 and M. This calls for a permutation such that B̃_1^{-T} is sparse. Therefore, we develop an algorithm which permutes an incidence matrix B^T into a lower trapezoidal form (3) such that it results in a sparse B̃_1^{-T}.

In this paper we consider Schilders’ factorization applied to resistor network modeling, in which A is a diagonal matrix with entries of resistance values, and B^T is an incidence matrix. The modeling can be done by applying Kirchhoff’s current law and Ohm’s law for resistors [5]. In order to explain the definition of an incidence matrix, and to understand some of its properties, a brief summary on graph theory and incidence matrices is given in Section 2. In Section 3, aspects of the sparsity of B̃_1^{-T} are discussed. Then we introduce three different possible permutations and determine which permutation leads to the sparsest inverse of the incidence matrix. Section 4 gives the permutation algorithm which leads to a sparse inverse B̃_1^{-T}. Numerical experiments performed on industrial resistor networks are presented in Section 5.

2. A brief summary on graph theory and incidence matrices. A graph G consists of a finite set V = {η_0, η_1, ..., η_m} called the vertex set and a finite set E = {ξ_1, ξ_2, ..., ξ_n} called the edge set. An η ∈ V is called a vertex (or node) of G and a ξ ∈ E an edge of G. An edge ξ is represented by an unordered pair of nodes {η_i, η_j}, which are said to be adjacent to each other, and are called the end points of ξ.

A directed graph (or digraph) G consists of a node set V and an edge set E, where each edge ξ is an ordered pair (η_i, η_j) known as an arc (or directed edge). We write η_i → η_j to represent that the arc ξ connects the two nodes in the direction from η_i to η_j. Here, we call η_i the initial node and η_j the terminal node of ξ. If the direction between the two nodes is unknown, we shall write η_i ∼ η_j to avoid ambiguity.

A path in a graph G is an alternating sequence of distinct nodes and edges. For example, the path between η_0 and η_r is given by η_0, ξ_1, η_1, ξ_2, ..., ξ_r, η_r. If there is a path between every pair of nodes, then G is said to be connected. A digraph is called weakly connected if its underlying graph is connected. A digraph is strongly connected if for every pair of distinct nodes η_i and η_j, there is a directed path from η_i to η_j. A Hamiltonian path in a digraph is a path in a single direction that visits each node exactly once. A tournament is a digraph in which every pair of nodes is connected by a single directed edge [1]. A digraph is called loopless if it contains no arc of the form (η_i, η_i).

Theorem 2.1. [3] Every tournament on a finite number m of nodes contains a Hamiltonian path.

Proof. See page 149, Section 5.3 in [3].

We exploit this theorem later for the generation of test examples.

Definition 2.2. Consider a loopless digraph G of m + 1 nodes and n arcs. The node-arc incidence matrix of G is an n × (m + 1) matrix B̂^T = [b_ij], where
\[
b_{ij} =
\begin{cases}
\;\;\,1, & \text{if } \eta_j \text{ is the initial node of arc } \xi_i, \\
-1, & \text{if } \eta_j \text{ is the terminal node of arc } \xi_i, \\
\;\;\,0, & \text{if arc } \xi_i \text{ does not contain node } \eta_j.
\end{cases}
\]

To obtain a reduced incidence matrix B^T of size n × m from B̂^T, we call one node the reference or ground node, and delete its related column. This causes all rows of B^T related to the arcs connected to the reference node to have only one nonzero entry. This is necessary for the construction of a permutation to a lower trapezoidal matrix. Moreover, systems of the form (1) resulting from resistor network modeling are made consistent by grounding a node and removing the corresponding column [5]. Theorem 2.3 gives an important property of an incidence matrix which relates to its underlying digraph.

Theorem 2.3. [2] Suppose G is a digraph with m + 1 nodes. Then the column rank of the incidence matrix B^T is m if and only if G is weakly connected.

Proof. See page 12, Section 2.1 in [2].
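As an illustration of Definition 2.2 and of the reduction step, the short sketch below (ours, not the authors' code; the node and arc numbering are assumptions) builds the node-arc incidence matrix B̂^T from a list of arcs η_i → η_j and deletes the column of a chosen ground node.

import numpy as np

def node_arc_incidence(num_nodes, arcs):
    """Node-arc incidence matrix B_hat^T of Definition 2.2.

    arcs is a list of pairs (i, j) meaning eta_i -> eta_j; nodes are numbered 0..num_nodes-1."""
    BhatT = np.zeros((len(arcs), num_nodes), dtype=int)
    for row, (i, j) in enumerate(arcs):
        BhatT[row, i] = 1      # initial node of the arc
        BhatT[row, j] = -1     # terminal node of the arc
    return BhatT

def reduce_incidence(BhatT, ground=0):
    """Delete the column of the ground (reference) node to obtain B^T."""
    return np.delete(BhatT, ground, axis=1)

# small example: 4 nodes with ground node eta_0 and three arcs pointing into eta_0
arcs = [(1, 0), (2, 0), (3, 0)]
BT = reduce_incidence(node_arc_incidence(4, arcs), ground=0)
print(BT)                            # each such row keeps a single nonzero entry, as noted above
print(np.linalg.matrix_rank(BT))     # 3 = m, since the digraph is weakly connected (Theorem 2.3)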

3. Sparsity of the inverse incidence matrix. Before exploring different types of permutations, we first give an overview of the interdependence between the sparsity of the inverse and the nilpotency degree of a lower triangular matrix which contains maximally two nonzero entries in each row. For this we consider a unit lower bidiagonal matrix H and compute its inverse using the expansion of (I + N_H)^{-1}, where I is the identity matrix and N_H is the strictly lower triangular part of H (the nilpotent part of H). The inverse of unit lower bidiagonal matrices can also be computed by using Gaussian elimination [8]. However, the former gives an insight into how the interdependence between the sparsity of the inverse and the nilpotency degree can be conceived. A unit lower bidiagonal matrix H of size n × n is of the form
\[
H =
\begin{bmatrix}
1 & & & \\
\beta_1 & 1 & & \\
& \ddots & \ddots & \\
& & \beta_{n-1} & 1
\end{bmatrix},
\qquad \text{where } \beta_i \neq 0,\ 1 \le i \le n - 1.
\]

By defining the nilpotent part
\[
N_H =
\begin{bmatrix}
0 & & & \\
\beta_1 & 0 & & \\
& \ddots & \ddots & \\
& & \beta_{n-1} & 0
\end{bmatrix},
\]
it is trivial to observe that
\[
H^{-1} = I - N_H + N_H^2 - \cdots + (-1)^{n-1} N_H^{n-1}
=
\begin{bmatrix}
1 & & & & \\
\alpha_1^{(1)} & 1 & & & \\
\alpha_1^{(2)} & \alpha_2^{(1)} & 1 & & \\
\vdots & \vdots & \ddots & \ddots & \\
\alpha_1^{(n-1)} & \alpha_2^{(n-2)} & \cdots & \alpha_{n-1}^{(1)} & 1
\end{bmatrix},
\]
where each r-th subdiagonal entry, 1 ≤ r ≤ n − 1, is defined by
\[
\alpha_j^{(r)} = (-1)^r \prod_{i=j}^{r+j-1} \beta_i \neq 0, \qquad 1 \le j \le n - r.
\tag{6}
\]
That is, every subdiagonal entry of H^{-1} is a product of some β_i's and hence nonzero. So H^{-1} is full and N_H has nilpotency degree k = n. On the other hand, if a lower triangular matrix with maximally two nonzero entries in each row is of the form
\[
G =
\begin{bmatrix}
1 & & & \\
\beta_1 & 1 & & \\
\vdots & & \ddots & \\
\beta_{n-1} & 0 & \cdots & 1
\end{bmatrix},
\]
then again it is easy to obtain
\[
G^{-1} = I - N_G =
\begin{bmatrix}
1 & & & \\
-\beta_1 & 1 & & \\
\vdots & & \ddots & \\
-\beta_{n-1} & 0 & \cdots & 1
\end{bmatrix},
\]
which is as sparse as G, and the nilpotent part N_G has degree k = 2.

Based on these two extreme cases, we consider an example of an incidence matrix B^T which is constructed from a digraph G as shown in Figure 1. It is permuted to a lower trapezoidal form B̃^T (3) in two different ways, giving two different B̃_1^T's of size 4 × 4. The sparsity of B̃_1^{-T} depends on the type of the underlying digraph. If the permutation is based on a Hamiltonian path (Figure 1 (a)), then B̃_1^{-T} is a full lower triangular matrix. If it is based on a star digraph (Figure 1 (b)), then one can obtain the sparsest B̃_1^{-T}. The former has nilpotency degree k = 4, while the latter has k = 2. Therefore we are also induced to look for a permutation that leads to a small nilpotency degree. It is always possible to find star sub-digraphs in large networks.
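The two extreme cases H and G can be checked numerically. The sketch below (with the hypothetical choice β_i = 1, used only for illustration) computes both inverses through the Neumann expansion of the nilpotent part and reports the nilpotency degree k.

import numpy as np

def inverse_via_nilpotent(L):
    """Inverse of a unit lower triangular L from (I + N)^{-1} = I - N + N^2 - ...,
    where N = L - I is nilpotent; also returns the nilpotency degree k."""
    n = L.shape[0]
    N = L - np.eye(n)
    inv, term, k = np.eye(n), np.eye(n), 0
    while not np.allclose(term, 0):
        k += 1
        term = term @ (-N)
        inv += term
    return inv, k                    # k is the smallest power with N^k = 0

n = 6
beta = np.ones(n - 1)                # beta_i = 1, purely for illustration
H = np.eye(n) + np.diag(beta, -1)    # unit lower bidiagonal
G = np.eye(n)
G[1:, 0] = beta                      # all off-diagonal nonzeros sit in the first column
for name, L in [("H", H), ("G", G)]:
    inv, k = inverse_via_nilpotent(L)
    print(name, "nonzeros in inverse:", np.count_nonzero(inv), " nilpotency degree k =", k)
# H^{-1} is a full lower triangular matrix (k = n), while G^{-1} is as sparse as G (k = 2).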

 1 0 0 0   1 0 0 0   −1 1 0 0  −T  1 1 0 0  B˜T =   ⇒ B˜ =   1  0 −1 1 0  1  1 1 1 0  0 0 −1 1 1 1 1 1

Hamiltonian path (a) Full inverse

 1 0 0 0   1 0 0 0   −1 1 0 0  −T  1 1 0 0  B˜T =   ⇒ B˜ =   1  −1 0 1 0  1  1 0 1 0  −1 0 0 1 1 0 0 1

star digraph (b) Sparse inverse

˜T ˜−T Figure 1. Different forms of B1 and B1 from a digraph G whose ground node is shaded.

In order to further illustrate the sparsity of B̃_1^{-T} by looking at examples of larger sizes, we generated incidence matrices of full column rank. The easiest way to generate such an incidence matrix is to consider a digraph containing a Hamiltonian path, as indicated by Theorems 2.1 and 2.3 (see also [4]). Construct a tournament T from m + 1 randomly generated nodes η_0, η_1, ..., η_m, where η_0 is the ground node, and then randomly choose a sub-digraph G ⊆ T of n arcs such that it contains the Hamiltonian path. The required incidence matrix B^T can be constructed from G using Algorithm 1. Note that Algorithm 1 can be used only for generating a random test incidence matrix. It ensures that the constructed incidence matrix B^T has full column rank. It should also be noted that n ≤ (m + 1)m/2, since a tournament on m + 1 nodes contains only (m + 1)m/2 arcs. However, if G is already known to fulfil the requirements of Theorem 2.3, then it is not necessary for it to contain a Hamiltonian path. For such a G, we can directly construct B^T by using steps 10–14 of Algorithm 1.

Using the above technique, a random network with an incidence matrix B^T of size 100 × 60 is generated. Then three different permutations are applied to B^T to reduce it to a lower trapezoidal form B̃^T, as shown in Figure 2. Permutation I is done by choosing a Hamiltonian path in G. It is evident from Figure 2 that Permutation I does not promise good results of sparse blocks in Schilders’ factorization, since it gives full inverse incidence matrices. Permutation II is done randomly, without taking any considerations other than the lower trapezoidal form. This permutation may sometimes give sparse inverse incidence matrices. However, it is quite difficult to predict the sparsity in advance, since the permutation does not emphasize any special connectivity of the underlying digraph. Permutation III is done by prioritizing the star sub-digraphs of G that are connected to each other starting from the ground node. It leads to the sparsest B̃_1^{-T} and the smallest nilpotency degree compared to the other two. Thus we propose Permutation III in the next section.

[Figure 2 spy plots omitted. All three permuted matrices B̃^T have nz = 196, the same as B^T. Permutation I: B̃_1^{-T} with nz = 1830 and nilpotency degree k = 60. Permutation II: nz = 406, k = 14. Permutation III (improved version): nz = 197, k = 5.]

Figure 2. Three different permutations applied to an incidence matrix.

4. Permutation which leads to a small nilpotency degree and sparse B̃_1^{-T}. Let G be the digraph of a node-arc incidence matrix B̂^T of size n × (m + 1). Let the node set of G be V = {η_j}_{j=0}^{m}, where η_0 is the reference node, and the arc set be E = {ξ_i}_{i=1}^{n}. After deleting the column of B̂^T corresponding to η_0, we obtain a reduced incidence matrix B^T of size n × m. Define

\[
B^T = [b_{ij}], \qquad \text{where } b_{ij} \in \{-1, 0, 1\},\ 1 \le i \le n,\ 1 \le j \le m.
\]

Without loss of generality, assume that B^T has maximal column rank m and m ≤ n. Then by Theorem 2.3, G is at least weakly connected. Note that each row of B^T contains maximally two nonzero entries from {−1, 1}. This leads us to represent the column numbers of B^T by the elements of V and its row numbers by the elements of E. Let R_{[i_s]} be an n × n row permutation matrix such that the rows s and i_s are permuted. Similarly, denote an m × m column permutation matrix by C_{[j_s]}, indicating that the columns s and j_s are permuted.

Suppose η_{j_1} ∼ η_{j_0} by the arc ξ_{i_1}, where η_{j_0} = η_0, η_{j_1} ∈ V, and ξ_{i_1} ∈ E. Permuting the 1st row with the i_1-th row and the 1st column with the j_1-th column of B^T would give

\[
R_{[i_1]} B^T C_{[j_1]} =
\begin{bmatrix}
b_{i_1 j_1} & 0 \\
u & B_{(1)}^T
\end{bmatrix}
\qquad (\text{first row: } \xi_{i_1},\ \text{first column: } \eta_{j_1}),
\]

where B_{(1)}^T is an (n − 1) × (m − 1) reduced incidence matrix, u is an (n − 1) × 1 column vector, and b_{i_1 j_1} ∈ {−1, 1}. Similarly, we continue the process until we find the last remaining node η_{j_r} ∼ η_{j_0} connected by the arc ξ_{i_r}, where 2 ≤ r ≤ m and 1 ≤ i_r ≤ n, so that permuting the r-th row with the i_r-th row and the r-th column with the j_r-th column would give B^T with the following structure.

\[
R_{[i_1,\ldots,i_r]}\, B^T C_{[j_1,\ldots,j_r]} =
\begin{bmatrix}
b_{i_1 j_1} & & & \\
\vdots & \ddots & & \\
0 & \cdots & b_{i_r j_r} & \\
v_1 & \cdots & v_r & B_{(r)}^T
\end{bmatrix}
\qquad (\text{rows } \xi_{i_1},\ldots,\xi_{i_r};\ \text{columns } \eta_{j_1},\ldots,\eta_{j_r}),
\]

where B_{(r)}^T is now of size (n − r) × (m − r), and v_1, ..., v_r are column vectors of size (n − r) × 1. A diagonal entry b_{i_s j_s} ∈ {−1, 1}, 1 ≤ s ≤ r, is the only nonzero entry in the s-th row. This completes permuting the columns and rows related to the nodes and arcs of the star sub-digraph whose central node is η_{j_0}.

Now, suppose each of η_{j_{r+1}}, ..., η_{j_p} ∼ η_{j_1} by the arcs ξ_{i_{r+1}}, ..., ξ_{i_p}, respectively, where r + 1 ≤ p ≤ m. If there exists no such connection for η_{j_1}, then there will be at least one such connection for one of η_{j_2}, ..., η_{j_r}; otherwise G is disconnected. As soon as such connections are found, permute each of the (r + 1)-th, ..., p-th rows with the i_{r+1}-th, ..., i_p-th rows, respectively, and each of the (r + 1)-th, ..., p-th columns with the j_{r+1}-th, ..., j_p-th columns. This is the permutation with respect to another star sub-digraph whose central node is one of the nodes η_{j_1}, ..., η_{j_r}, connected to the previous star sub-digraph. Repeating the same process for p + 1, p + 2, ..., m, we obtain B^T in a lower trapezoidal form:

\[
R_{[i_1,\ldots,i_r,\ldots,i_p,\ldots,i_m]}\, B^T C_{[j_1,\ldots,j_r,\ldots,j_p,\ldots,j_m]} =
\begin{bmatrix}
b_{i_1 j_1} & & & & & & & & \\
\vdots & \ddots & & & & & & & \\
0 & \cdots & b_{i_r j_r} & & & & & & \\
b_{i_{r+1} j_1} & \cdots & 0 & b_{i_{r+1} j_{r+1}} & & & & & \\
\vdots & & \vdots & \vdots & \ddots & & & & \\
b_{i_p j_1} & \cdots & 0 & 0 & \cdots & b_{i_p j_p} & & & \\
0 & \cdots & b_{i_{p+1} j_r} & 0 & \cdots & 0 & b_{i_{p+1} j_{p+1}} & & \\
\vdots & & \vdots & \vdots & & \vdots & & \ddots & \\
0 & \cdots & b_{i_m j_r} & 0 & \cdots & 0 & 0 & \cdots & b_{i_m j_m} \\
w_1 & \cdots & w_r & w_{r+1} & \cdots & w_p & w_{p+1} & \cdots & w_m
\end{bmatrix},
\]
where the columns are labelled by η_{j_1}, ..., η_{j_m}, the first m rows by ξ_{i_1}, ..., ξ_{i_m}, and the last block row by ξ_{i_N}.

The top m × m lower triangular part is B̃_1^T, and the remaining (n − m) × m rectangular matrix is B̃_2^T = [w_1, ..., w_r, ..., w_p, ..., w_m], where w_s, 1 ≤ s ≤ m, is an (n − m) × 1 column vector. Here, ξ_{i_N} = (ξ_{i_{m+1}}, ..., ξ_{i_n}), which gives the row numbers of B̃_2^T.

In fact, we have shown that if B^T is an n × m incidence matrix with full column rank, containing maximally two nonzero entries in each row, then there exist an n × n row permutation matrix R and an m × m column permutation matrix C such that
\[
R B^T C = \tilde{B}^T =
\begin{bmatrix}
\tilde{B}_1^T \\
\tilde{B}_2^T
\end{bmatrix},
\]
where B̃_1^T is an m × m lower triangular matrix and B̃_2^T is an (n − m) × m rectangular matrix.

Consider the example of the digraph shown in Figure 3, which has 9 nodes (including the ground node η_0) and 12 arcs.

[Drawing omitted: nodes η_0, ..., η_8 and arcs ξ_1, ..., ξ_12; the arc directions can be read off from the incidence matrix B̂^T below.]

Figure 3. A digraph.

Using Definition 2.2, we construct its node-arc incidence matrix B̂^T of size 12 × 9 as follows:

\[
\hat{B}^T = \;
\begin{array}{c|ccccccccc}
 & \eta_0 & \eta_1 & \eta_2 & \eta_3 & \eta_4 & \eta_5 & \eta_6 & \eta_7 & \eta_8 \\ \hline
\xi_1 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\xi_2 & 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
\xi_3 & 0 & 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 \\
\xi_4 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 \\
\xi_5 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 \\
\xi_6 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 & 0 \\
\xi_7 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 \\
\xi_8 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 \\
\xi_9 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
\xi_{10} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1 \\
\xi_{11} & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
\xi_{12} & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{array}
\]

After deleting the column related to the ground node η_0, a reduced incidence matrix B^T of size 12 × 8 is obtained as follows:

\[
B^T = \;
\begin{array}{c|cccccccc}
 & \eta_1 & \eta_2 & \eta_3 & \eta_4 & \eta_5 & \eta_6 & \eta_7 & \eta_8 \\ \hline
\xi_1 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\xi_2 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
\xi_3 & 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 \\
\xi_4 & 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 \\
\xi_5 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 \\
\xi_6 & 0 & 0 & 1 & 0 & 0 & -1 & 0 & 0 \\
\xi_7 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 \\
\xi_8 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 \\
\xi_9 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
\xi_{10} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1 \\
\xi_{11} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
\xi_{12} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{array}
\]

Applying the new permutation technique, we obtain the permuted incidence matrix B̃^T as follows:

\[
\tilde{B}^T = \;
\begin{array}{c|cccccccc}
 & \eta_5 & \eta_7 & \eta_8 & \eta_4 & \eta_6 & \eta_2 & \eta_1 & \eta_3 \\ \hline
\xi_9 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\xi_{11} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\xi_{12} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
\xi_4 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
\xi_7 & -1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
\xi_3 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
\xi_2 & 0 & 0 & 0 & -1 & 0 & 0 & 1 & 0 \\
\xi_6 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 \\
\xi_1 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 \\
\xi_{10} & 0 & 0 & -1 & 0 & 1 & 0 & 0 & 0 \\
\xi_5 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1 \\
\xi_8 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0
\end{array}
\]
where
\[
\tilde{B}_1^T =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 1
\end{bmatrix},
\qquad
\tilde{B}_1^{-T} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 1
\end{bmatrix}.
\]
The row and column permutation matrices, respectively, are given by
\[
R = [e_9, e_{11}, e_{12}, e_4, e_7, e_3, e_2, e_6, e_1, e_{10}, e_5, e_8]^T
\quad \text{and} \quad
C = [e_5, e_7, e_8, e_4, e_6, e_2, e_1, e_3],
\]
where e_i denotes the i-th unit vector, of size 12 × 1 for R and 8 × 1 for C. It is easy to see that the nilpotent part of B̃_1^T has degree k = 3.
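The worked example can be reproduced with the short sketch below, which assembles B^T from the arc list of Figure 3 (our transcription of the matrix B̂^T above), applies the stated row and column permutations, and confirms that B̃_1^T is lower triangular with nilpotency degree k = 3.

import numpy as np

# arcs (initial, terminal) read off from B_hat^T above; nodes eta_0, ..., eta_8
arcs = [(1, 2), (1, 4), (2, 5), (4, 5), (2, 3), (3, 6),
        (6, 5), (4, 7), (5, 0), (6, 8), (7, 0), (8, 0)]
BhatT = np.zeros((12, 9), dtype=int)
for row, (i, j) in enumerate(arcs):
    BhatT[row, i], BhatT[row, j] = 1, -1
BT = np.delete(BhatT, 0, axis=1)                 # remove the ground column -> 12 x 8

# permutations from the text, converted to 0-based indices:
# rows [e_9, e_11, e_12, e_4, e_7, e_3, e_2, e_6, e_1, e_10, e_5, e_8], columns [e_5, e_7, e_8, e_4, e_6, e_2, e_1, e_3]
row_perm = np.array([9, 11, 12, 4, 7, 3, 2, 6, 1, 10, 5, 8]) - 1
col_perm = np.array([5, 7, 8, 4, 6, 2, 1, 3]) - 1
BtildeT = BT[row_perm][:, col_perm]

B1T = BtildeT[:8, :8]                            # top 8 x 8 block
assert np.array_equal(B1T, np.tril(B1T))         # lower triangular
N = B1T - np.diag(np.diag(B1T))                  # nilpotent part
k = next(p for p in range(1, 9) if not np.any(np.linalg.matrix_power(N, p)))
print("nilpotency degree k =", k)                # prints 3, in agreement with the text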

Algorithm 1 Generates a random full column rank incidence matrix.
Require: Number of nodes m and number of arcs n such that m ≤ n.
Ensure: Node-arc incidence matrix B̂^T.
 1: generate {η_1, η_2, ..., η_m}
 2: for j = 1 to m − 1 do
 3:   for i = 1 to m − j do
 4:     T = T ∪ {(η_i, η_{i+j})}
 5:   end for
 6: end for
 7: for i = 1 to n do
 8:   G = {ξ_i : ξ_i ∈ T} ⊃ weakly connected path
 9: end for
10: for i = 1 to n do
11:   B̂^T(i, G(i, 1)) = 1
12:   B̂^T(i, G(i, 2)) = −1
13: end for
14: delete the ground node column of B̂^T to obtain a reduced incidence matrix B^T
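A possible Python transcription of Algorithm 1 is sketched below. How the n arcs are sampled from the tournament, beyond always keeping the Hamiltonian path that guarantees weak connectivity (step 8), is not fully specified in the pseudocode, so the sampling strategy and the choice of node 0 as the ground node are assumptions.

import numpy as np

def random_incidence(m, n, rng=None):
    """Sketch of Algorithm 1 (our reading): random incidence matrix of full column rank.

    Nodes are numbered 0, ..., m-1 and node 0 plays the role of the ground node."""
    rng = rng or np.random.default_rng()
    assert m - 1 <= n <= m * (m - 1) // 2, "n must fit inside the tournament"
    # steps 2-6: tournament arcs (eta_i, eta_{i+j}) with i < i+j
    tournament = [(i, i + j) for j in range(1, m) for i in range(m - j)]
    # steps 7-9: keep the Hamiltonian path 0 -> 1 -> ... -> m-1 and fill up to n arcs at random
    path = [(i, i + 1) for i in range(m - 1)]
    rest = [a for a in tournament if a not in path]
    extra = rng.choice(len(rest), size=n - len(path), replace=False)
    G = path + [rest[e] for e in extra]
    # steps 10-13: fill the node-arc incidence matrix
    BhatT = np.zeros((n, m), dtype=int)
    for row, (i, j) in enumerate(G):
        BhatT[row, i], BhatT[row, j] = 1, -1
    # step 14: delete the ground node column
    return np.delete(BhatT, 0, axis=1)

BT = random_incidence(m=8, n=15)
print(BT.shape, np.linalg.matrix_rank(BT))       # (15, 7): full column rank, by Theorem 2.3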

Algorithm 2 Permutes an incidence matrix to a lower trapezoidal form which leads to a sparse inverse.
Require: Incidence matrix B^T.
Ensure: Lower trapezoidal incidence matrix B̃^T, row and column permutation matrices P_R and P_C.
 1: P_R = I_{n×n}, P_C = I_{(m−1)×(m−1)}, k = 1
 2: for i = 1 to n do
 3:   find a column index r such that B^T(i, r) is the only nonzero entry in the i-th row
 4:   permute rows i and k of B^T and P_R
 5:   permute columns r and k of B^T and P_C
 6:   k = k + 1 (only after each permutation)
 7: end for
 8: s = k (as updated above)
 9: j = 1
10: while s ≤ m do
11:   find a row index r ≥ s and a column index c ≥ s such that B^T(r, j) and B^T(r, c) are the only nonzero entries in the r-th row
12:   permute rows r and s of B^T and P_R
13:   permute columns c and k of B^T and P_C
14:   s = s + 1 (only after each permutation)
15:   j = j + 1
16: end while
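The sketch below is one possible Python reading of Algorithm 2. It realizes the permutation as a breadth-first growth of star sub-digraphs outward from the ground node, as described in Section 4, rather than reproducing the literal index bookkeeping of steps 10–16, and it returns index permutations instead of the explicit matrices P_R and P_C; these simplifications are ours. A weakly connected digraph, i.e. full column rank (Theorem 2.3), is assumed.

import numpy as np

def lower_trapezoidal_permute(BT):
    """Permute an incidence matrix to lower trapezoidal form (sketch of Algorithm 2):
    place the nodes by growing star sub-digraphs outward from the ground node and
    pair each newly placed column with the arc (row) that attaches it."""
    n, m = BT.shape
    col_order, row_order = [], []
    placed = np.zeros(m, dtype=bool)
    # arcs incident to the ground node are the rows with a single nonzero entry
    for r in range(n):
        cols = np.flatnonzero(BT[r])
        if cols.size == 1 and not placed[cols[0]]:
            placed[cols[0]] = True
            row_order.append(r)
            col_order.append(cols[0])
    # grow star sub-digraphs: scan the placed columns in order and attach new nodes
    q = 0
    while q < len(col_order):
        j = col_order[q]
        for r in range(n):
            cols = np.flatnonzero(BT[r])
            if cols.size == 2 and j in cols:
                other = cols[1] if cols[0] == j else cols[0]
                if not placed[other]:
                    placed[other] = True
                    row_order.append(r)
                    col_order.append(other)
        q += 1
    # the remaining rows (arcs between two already placed nodes) form B2^T
    rest = [r for r in range(n) if r not in row_order]
    rperm = np.array(row_order + rest)
    cperm = np.array(col_order)
    return BT[rperm][:, cperm], rperm, cperm

Applied to the reduced incidence matrix of the Figure 3 example, this returns a lower trapezoidal matrix whose top 8 × 8 block is lower triangular; the ordering of the nodes within each star may differ from the one displayed above, since ties are broken here by row index.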

5. Numerical results. We conducted numerical experiments on industrial resistor networks consisting of different components (two of them are illustrated in Figure 4). Component R1 is banded and has star sub-digraphs with central nodes connected to only a few arcs, as shown in Figure 4 (a). Due to this banded structure, the two diagonals of B̃_1^T are quite close to each other, creating more fill-in in the inverse B̃_1^{-T}. Component R2 in Figure 4 (b) has star sub-digraphs with many arcs connected to each central node, which results in a large distance between the two diagonals of B̃_1^T, giving a sparse B̃_1^{-T}. Thus the sparsity of B̃_1^{-T} depends on the number of arcs connected to each central node of the underlying star sub-digraphs, as explained in Section 3. The sparsity of the blocks from other components is shown in Table 1.

6. Concluding remarks. The sparsity of the blocks in Schilders’ factorization depends on the sparsity of the inverse incidence matrix B̃_1^{-T}. A permutation algorithm which achieves a potentially optimal nilpotency degree of B̃_1^T is therefore important. We have examined three different permutations and proposed the one which leads to the sparsest inverse. The permutation technique is based on the star sub-digraphs connected to each other starting from the ground node. The sparsity of the inverse B̃_1^{-T} depends on the number of arcs connected to each central node of the underlying star sub-digraphs. We have developed an algorithm which efficiently takes care of all the above conditions.

Resistor network   Size (m)   Permutation base   B̃_1^T (% nz)   B̃_1^{-T} (% nz)   Nilpotency (k)
R3                 1058       SSD                0.180           0.434              11
                              HP                 0.180           50.000             1058
R4                 2579       SSD                0.077           1.218              99
                              HP                 0.079           49.994             2579
R5                 3284       SSD                0.061           0.280              18
                              HP                 0.061           50.000             3284
R6                 4656       SSD                0.043           0.486              131
                              HP                 0.043           50.000             4656
R7                 7631       SSD                0.026           0.073              18
                              HP                 0.026           49.993             7631
R8                 11782      SSD                0.017           1.002              254
                              HP                 0.017           49.996             11782
R9                 12820      SSD                0.016           0.143              50
                              HP                 0.016           49.996             12820
R10                36392      SSD                0.005           0.065              69
                              HP                 0.005           49.999             36392

Table 1. Sparsity of blocks from other resistor components. The permutation is based on (i) star sub-digraphs (SSD) and (ii) a Hamiltonian path (HP). The size of B̃_1^T is m × m and ‘% nz’ stands for the percentage of nonzero entries contained.

[Figure 4 spy plots omitted. Panel (a), component R1: B^T of size 2176 × 1156 (nz = 4351), B̃^T (nz = 4351), B̃_1^T (nz = 2311, k = 32), B̃_1^{-T} of size 1156 × 1156 (nz = 19402); a banded component having star sub-digraphs with each central node connected to only a few arcs. Panel (b), component R2: B^T of size 2356 × 1435 (nz = 4593), B̃^T (nz = 4593), B̃_1^T (nz = 2751, k = 15), B̃_1^{-T} of size 1435 × 1435 (nz = 7321); a component having star sub-digraphs with many arcs connected to each central node.]

Figure 4. Two different industrial resistor network components. The nilpotent part of B̃_1^T has degree equal to the value of k in each case.

REFERENCES

[1] R. Balakrishnan and K. Ranganathan, A Textbook of Graph Theory, 2nd edition, Springer-Verlag, New York, 2012.
[2] R. B. Bapat, Graphs and Matrices, Hindustan Book Agency, New Delhi; Springer-Verlag, London, Dordrecht, Heidelberg, New York, 2010.
[3] G. Chartrand and L. Lesniak, Graphs and Digraphs, 3rd edition, Chapman and Hall/CRC Press, Boca Raton, London, 1996.
[4] Z. Lijang, A matrix solution to Hamiltonian path of any graph, in International Conference on Intelligent Computing and Cognitive Informatics, IEEE, 2010.
[5] J. Rommes and W. H. A. Schilders, Efficient methods for large resistor networks, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 29 (2010), 28–39.
[6] Y. Saad, Preconditioning techniques for nonsymmetric and indefinite linear systems, Journal of Computational and Applied Mathematics, 24 (1988), 89–105.
[7] W. H. A. Schilders, Solution of indefinite linear systems using an LQ decomposition for the linear constraints, Linear Algebra and its Applications, 431 (2009), 381–395.
[8] R. Vandebril, M. Van Barel and N. Mastronardi, Matrix Computations and Semiseparable Matrices, The Johns Hopkins University Press, Baltimore, Maryland.

Received December 2013; 1st revision May 2014; final revision August 2014.

E-mail address: [email protected]
E-mail address: [email protected]
E-mail address: [email protected]