The Interplay of Ranks of Submatrices

Gilbert Strang and Tri Nguyen
Massachusetts Institute of Technology
Singapore-MIT Alliance
Abstract— A banded invertible matrix A has a remarkable inverse. All "upper" and "lower" submatrices of A^{-1} have low rank (depending on the bandwidth in A). The exact rank condition is known, and it allows fast multiplication by full matrices that arise in the boundary element method. We look for the "right" proof of this property of A^{-1}. Ultimately it reduces to a fact that deserves to be better known: Complementary submatrices of A and A^{-1} have the same nullity. The last figure in the paper (when A is tridiagonal) shows two submatrices with the same nullity. Then C has rank 1: on and above the diagonal of A^{-1}, all rows are proportional.

Index Terms— Band matrix, low rank submatrix, fast multiplication.
I. INTRODUCTION

An n by n tridiagonal matrix A is specified by 3n - 2 parameters—its entries on the three central diagonals. In some way A^{-1} must also be specified by 3n - 2 parameters (if A is invertible). It will be especially nice if those parameters can be the "tridiagonal part" of A^{-1}. Fortunately they can. A^{-1} can be built outwards, diagonal by diagonal (and without knowing A), because the inverse of a tridiagonal matrix possesses these two properties:

Upper: On and above the main diagonal, every 2 by 2 minor of A^{-1} is zero.

Lower: On and below the main diagonal, every 2 by 2 minor of A^{-1} is zero.

These allow us to fill in the upper triangular part of A^{-1} and then the lower triangular part. Expressed in another way, those properties are statements of low rank. Of course the whole matrix A^{-1} has full rank! But it has many submatrices of rank 1:

Upper: On and above the main diagonal, every submatrix of A^{-1} has rank 1:

    (A^{-1})_{ij} = b_i c_j for i <= j:   2n - 1 parameters

Lower: On and below the main diagonal, every submatrix of A^{-1} has rank 1:

    (A^{-1})_{ij} = d_i e_j for i >= j:   2n - 1 parameters

Equality b_i c_i = d_i e_i along the main diagonal reduces 4n - 2 to 3n - 2, the desired number of parameters.

The friendly but impatient reader will quickly ask for generalizations and proofs. A natural generalization allows A to have a wider band. Suppose a_{ij} = 0 for |i - j| > w, so that w = 1 means tridiagonal. The corresponding conditions on A^{-1} involve zero subdeterminants of order w + 1. The key is to know which submatrices of A^{-1} are involved when A is banded:

Upper: Above the wth subdiagonal, every submatrix C of A^{-1} has rank(C) <= w.

Lower: Below the wth superdiagonal, every submatrix C of A^{-1} has rank(C) <= w.

Equivalently, all upper and lower minors of order w + 1 are zero. Again A^{-1} can be completed starting from its "banded part." The count of parameters in A and A^{-1} still agrees.

A next small step would allow the lower triangular part of A to be full. Only the upper condition will apply to A^{-1}, coming from the upper condition a_{ij} = 0 for j - i > w on A. We will pursue this one-sided formulation from now on. Upper conditions on A (above the wth superdiagonal) will be equivalent to upper conditions on A^{-1} (above the wth subdiagonal).

Here is a significant extension. Requiring zero entries is a statement about 1 by 1 minors of A. We could ask instead for all (r + 1) by (r + 1) minors of A to vanish (above the wth diagonal). Equivalently, all "upper submatrices" B would have rank(B) <= r. What property does this imply for A^{-1}? The answer is neat and already known. But this relation between submatrices of A and A^{-1} is certainly not well known. The goal of this paper is to try for a new proof:

Theorem (for invertible A)

    All submatrices B above the wth superdiagonal of A have rank(B) <= r

    if and only if

    all submatrices C above the wth subdiagonal of A^{-1} have rank(C) <= w + r.

Our tridiagonal case (or Hessenberg case, which is the one-sided version) had w = 1 and r = 0. Even more special is w = 0 and r = 0. Then A is lower triangular if and only if A^{-1} is lower triangular (extremely well known!). The case w = 0 and r = 1 is not so familiar—if A is "rank 1 above the main diagonal" then so is A^{-1}. This also comes from the Woodbury-Morrison formula [20, p. 82], [16, p. 19] for the effect on A^{-1} of a rank one change in A.

We comment in advance about our proof (and a second proof!). For pairs of submatrices B and C of A and A^{-1}, the goal is to show that

    rank(B) <= r if and only if rank(C) <= w + r.   (1)

Our proof will use an inequality for ranks of products. Wayne Barrett observed that our lemma is a special case of the Frobenius Rank Inequality

    rank(EB) + rank(BF) <= rank(B) + rank(EBF).   (2)

Then we noticed that (2) follows quickly from our special case. Conceivably this provides a new proof of (2).
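These rank statements are easy to observe numerically. Here is a minimal NumPy check (our addition, not from the paper; the diagonally dominant test matrix and the roundoff tolerance are arbitrary choices):

```python
# Illustration (our addition): the inverse of a tridiagonal matrix
# has rank-1 submatrices on and above the main diagonal.
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = (np.diag(rng.uniform(2.5, 3.5, n))
     + np.diag(rng.uniform(0.5, 1.0, n - 1), 1)
     + np.diag(rng.uniform(0.5, 1.0, n - 1), -1))  # diagonally dominant
Ainv = np.linalg.inv(A)

# Maximal "upper" submatrices: rows 1..i with columns i..n (1-based),
# whose lower left entry sits on the main diagonal.
for i in range(1, n + 1):
    upper = Ainv[:i, i - 1:]
    assert np.linalg.matrix_rank(upper, tol=1e-10) == 1
print("every upper submatrix of A^{-1} has rank 1")
```

Any submatrix on and above the diagonal lies inside one of these maximal blocks, so the loop covers the whole "Upper" condition.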
Barrett also pointed out that (1) follows immediately from the beautiful observation by Fiedler and Markham [10] that (in this notation)

    nullity(B) = nullity(C).   (3)

This approach surely gives the best proof of the Theorem. It has the great merit that (3) is an equality (of nullities) instead of an inequality (of ranks). After returning to this discussion in Section III, the Theorem is applied to fast multiplication by these "semiseparable" matrices.
Notice that there are an equal number of free parameters in A and A^{-1}. Figure 1 shows how the entry in position (1, w + 2r + 2) of both A and A^{-1} is generically the first to be determined from the (equivalent) conditions in the theorem. (This entry is in the upper corner of a square singular submatrix for both matrices. There will be degenerate cases when that entry is not determined, in the same way that specifying three entries of a singular 2 by 2 matrix does not always determine the fourth.) The entries on all earlier diagonals, before that position, can be the free parameters for A and also for A^{-1}.

[Fig. 1. For both A and A^{-1}, the first entry to be determined from the earlier diagonals is in position (1, w + 2r + 2). The earlier diagonals are free.]

II. PROOF OF THE THEOREM

Lemma. Suppose the matrices E and F are p by n and n by q. Then

    rank(E) + rank(F) <= n + rank(EF).   (4)

Proof. If EF = 0 (the zero matrix), the column space of F is contained in the nullspace of E. The dimensions of those spaces give rank(F) <= n - rank(E). This is (4) in the case rank(EF) = 0.

To reduce every other case to that one, suppose EF has rank s. Then EF can be written as the product E_0 F_0 of a p by s matrix and an s by q matrix. Now two block matrices multiply to give zero:

    [ E  E_0 ] [ F ; -F_0 ] = EF - E_0 F_0 = zero matrix.

This returns us to the first case, with n + s columns and rows instead of n:

    rank [ E  E_0 ] + rank [ F ; -F_0 ] <= n + s,   which is (4).

Note. This lemma is a key to our proof of the main Theorem, but (on such a basic subject!) it could not possibly be new. It is the special case B = I of the Frobenius rank inequality [16, p. 13] for three matrices:

    rank(EB) + rank(BF) <= rank(B) + rank(EBF).   (5)

We noticed that our weaker result (4) leads quickly to the Frobenius inequality (5). Suppose B is any p by q matrix with rank(B) = s. Then B factors into B_1 B_2 = (p by s)(s by q). Applying (4) to the matrices E B_1 and B_2 F gives

    rank(E B_1) + rank(B_2 F) <= s + rank(E B_1 B_2 F).   (6)

Always rank(EB) = rank(E B_1 B_2) <= rank(E B_1). Similarly rank(BF) = rank(B_1 B_2 F) <= rank(B_2 F). So (6) implies (5).
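The inequality rank(E) + rank(F) <= n + rank(EF), for E of size p by n and F of size n by q, can be spot-checked on random low-rank factors. The sketch below is ours; the sizes and trial count are arbitrary:

```python
# Spot-check (our addition) of Sylvester's rank inequality:
# rank(E) + rank(F) <= n + rank(EF) for E (p x n) and F (n x q).
import numpy as np

def rank(M):
    return np.linalg.matrix_rank(M, tol=1e-10)

rng = np.random.default_rng(1)
p, n, q = 5, 7, 6
for trial in range(100):
    r1, r2 = rng.integers(1, 5, size=2)
    E = rng.standard_normal((p, r1)) @ rng.standard_normal((r1, n))  # rank <= r1
    F = rng.standard_normal((n, r2)) @ rng.standard_normal((r2, q))  # rank <= r2
    assert rank(E) + rank(F) <= n + rank(E @ F)
print("rank(E) + rank(F) <= n + rank(EF) held in 100 random trials")
```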
Now we restate the Theorem and prove it using the rank inequality (4).

Every submatrix B above the wth superdiagonal of A has rank(B) <= r if and only if every submatrix C above the wth subdiagonal of A^{-1} has rank(C) <= w + r.

Our proof comes directly from A A^{-1} = I. Look at the first i - w rows of A (with w < i < n). They always multiply the last n - i + w columns of A^{-1} to give a zero submatrix of I:

    [ E  B ] [ C ; F ] = EC + BF = 0.

Here E holds columns 1 to i of those rows of A and B holds columns i + 1 to n, while C holds rows 1 to i of those columns of A^{-1} and F holds rows i + 1 to n. The lower left entry of B is in row i - w and column i + 1, so the submatrix B is above the wth superdiagonal of A. We are given that rank(B) <= r and therefore rank(EC) = rank(BF) <= r. Since E has i columns, the Lemma gives

    rank(E) + rank(C) <= i + rank(EC) <= i + r.   (7)

If E has full rank i - w, it follows that rank(C) <= w + r, as we wished to prove.

Note that the lower left entry of C is in row i and column i - w + 1, so C is immediately above the wth subdiagonal of A^{-1}. As i varies, we capture all the submatrices of A^{-1} above that wth subdiagonal.

In case E fails to have full rank i - w, perturb it a little. The new A is still invertible (and B is unchanged), so the proof applies: the new submatrix C in the new A^{-1} has rank less than w + r + 1. That rank cannot jump as the perturbation goes to zero, so the actual C also has rank less than w + r + 1.
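The block identity and the resulting rank bound can be verified numerically. In this sketch (ours, not the paper's), A is a random matrix of bandwidth w, so r = 0, and E, B, C, F label the four blocks formed from the first i - w rows of A and the last n - i + w columns of A^{-1}:

```python
# Check (our addition) of the identity [E B][C; F] = EC + BF = 0 and of
# the predicted rank bound rank(C) <= w + r (here r = 0, A banded).
import numpy as np

rng = np.random.default_rng(2)
n, w = 10, 2
A = np.diag(rng.uniform(3.0, 4.0, n))
for d in range(1, w + 1):                       # fill the band |i - j| <= w
    A += np.diag(rng.uniform(0.1, 0.5, n - d), d)
    A += np.diag(rng.uniform(0.1, 0.5, n - d), -d)
Ainv = np.linalg.inv(A)

i = 6                                           # any i with w < i < n
E, B = A[:i - w, :i], A[:i - w, i:]             # first i-w rows of A
C, F = Ainv[:i, i - w:], Ainv[i:, i - w:]       # last n-i+w cols of A^{-1}
assert np.allclose(E @ C + B @ F, 0)            # a zero block of A A^{-1} = I

# B sits above the wth superdiagonal of the banded A, so rank(B) = 0 and
# the Theorem predicts rank(C) <= w + 0.
assert np.linalg.matrix_rank(C, tol=1e-10) <= w
```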
The proof of the converse is similar, but with a little twist. (We need a fifth matrix N.) For the same identity EC + BF = 0, we must prove:

    If rank(C) <= w + r then rank(B) <= r.

Since C has n - i + w columns, and its rank is at most w + r, its nullspace has dimension at least n - i - r. Put n - i - r linearly independent nullvectors of C into the columns of a matrix N, so that CN = 0. Then EC + BF = 0 implies B(FN) = 0 and we can apply the Lemma:

    rank(B) + rank(FN) <= (number of columns of B) + rank(BFN) = (n - i) + 0.   (8)

If FN has full column rank n - i - r, our conclusion follows:

    rank(B) <= (n - i) - (n - i - r) = r,   as desired.

Notice that FN has n - i rows, which is at least n - i - r. If it happens that rank(FN) < n - i - r, perturb F a little to achieve full rank. We don't change C or N, so our proof applies and the new submatrix B (in the new A) has rank less than r + 1. As the perturbation goes to zero, the new A approaches the actual A. So the actual submatrix B has rank less than r + 1 (since the rank can't suddenly increase).

This completes the proof, when i takes all values from w + 1 to n - 1. That largest i captures the submatrix in the last column of A, above the wth superdiagonal. We have proved that all (r + 1) by (r + 1) submatrices of A above that diagonal are singular.

Note on the proof: To see why the matrix N is needed, write out EC + BF = 0 in the smallest case w = 1 and r = 0 (so B is 1 by 1 and C is 2 by 2):

    [ e_1  e_2 ] [ c_11  c_12 ; c_21  c_22 ] + b [ f_1  f_2 ] = [ 0  0 ].

We must prove rank(B) <= 0, in other words b = 0. When these matrices multiply a 2 by 1 nullvector of the low rank matrix C, the first term disappears to leave b (FN) = 0. This proves b = 0, after possible perturbation of F to make FN nonzero.

III. REFERENCES AND ALTERNATIVE PROOFS

Normally we would comment on the existing literature before adding to it! That is the proper order, especially for a theorem to which many earlier authors have contributed. But the "best" proof was pointed out to us by Wayne Barrett and it becomes particularly easy to describe in terms of the matrices B and C above. The proof will follow directly from a simple and beautiful result of Fiedler and Markham.

The first reference we know is to Edgar Asplund [1] in 1959. He discovered the banded case: bandwidth w in A and rank w submatrices in A^{-1}. Rereading his proof, we see that he anticipated many of the ideas that came later. Combining his theorem with the Woodbury-Morrison formula for a rank one perturbation would complete the proof. Remarkably, it was a paper by his father (in the same journal) that led Asplund to the problem.
The crucial tridiagonal case appears (independently and with earlier references) in the famous 1960 text of Gantmacher and Krein [11, p. 95]. There is a close connection to second-order differential equations, noted in our final section below. Karlin's 1968 book Total Positivity [18] refers to the inverse as a "Green's matrix". Subsequently Barrett [2] proved from explicit formulas that tridiagonality of A is equivalent to rank 1 submatrices in A^{-1}. The formulas allowed him to clarify all cases of zero entries in A^{-1}.

The natural extension to block tridiagonal matrices was given in the same year by Ikebe [17]. Then Cao and Stewart [6] allowed w > 1 and blocks of varying sizes. The crucial step to r > 0 was taken by Barrett and Feinsilver [3]. Their proof of the Theorem is based on the formula for the minors of A^{-1}. Meurant [19] has provided an extremely helpful survey of the literature, and a stable algorithm for computing A^{-1} in the tridiagonal case.
It is interesting to recognize these two approaches: determinant formulas or rank and nullity formulas. The former come ultimately from Jacobi and Sylvester. They led in [3] to the conditions for unique completion of A^{-1} starting from its central band (for r = 0 and any w). We now give the nullity formula (too little known!) which proves the Theorem in a single step.

The Nullity Theorem was given in 1984 by Gustafson [13] in the language of modules over a ring and in 1986 by Fiedler and Markham [10] in matrix language. A neat and simple proof will be in Section 0.7.5 of the forthcoming new edition of [16]. Here is an equivalent statement:

Nullity Theorem.  Complementary submatrices of a matrix A and its inverse A^{-1} have the same nullity.

Two submatrices are "complementary" when the row numbers not used in one are the column numbers used in the other. If the first submatrix is p by q, the other submatrix is (n - q) by (n - p). Suppose E is the upper left p by q corner of A, so that F is the lower right (n - q) by (n - p) corner of the inverse:

    A = [ E  *          A^{-1} = [ *  *
          *  * ],                  *  F ],     nullity(E) = nullity(F).   (9)

Note that all blocks can be rectangular. One partitioning is the "transpose" of the other, to allow block multiplication.
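The Nullity Theorem is easy to test numerically. A minimal check (our addition; the index sets are arbitrary, and nullity means number of columns minus rank):

```python
# Numerical illustration (our addition) of the Nullity Theorem:
# complementary submatrices of A and A^{-1} have the same nullity.
import numpy as np

def nullity(M):
    return M.shape[1] - np.linalg.matrix_rank(M, tol=1e-10)

rng = np.random.default_rng(3)
n = 9
A = rng.standard_normal((n, n)) + n * np.eye(n)     # safely invertible
Ainv = np.linalg.inv(A)

rows, cols = [0, 1, 2], [4, 5, 6, 7]                # a 3 x 4 submatrix of A
crows = [j for j in range(n) if j not in cols]      # complementary indices:
ccols = [j for j in range(n) if j not in rows]      # unused rows <-> used columns
sub = A[np.ix_(rows, cols)]                         # 3 x 4
csub = Ainv[np.ix_(crows, ccols)]                   # 5 x 6
assert nullity(sub) == nullity(csub)
```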
A permutation will put both submatrices into the upper right corner (appropriately for our proof):

    A = [ *  B          A^{-1} = [ *  C
          *  * ],                  *  * ],     nullity(B) = nullity(C).   (10)

The submatrices in the two lower left corners (as well as B and C) are again complementary, after the entire block matrices are transposed. So the Nullity Theorem applies also to the transposed submatrices (but we don't use it):

    nullity(lower left of A) = nullity(lower left of A^{-1}).   (11)

Fiedler and Markham begin their proof with direct multiplication of the block matrix and its inverse to produce the zero block. The nullspace matrix that we called N enters in the same way. After two examples, we show how this neater formulation leads instantly to the desired submatrix ranks in A^{-1}.

In the first special case, C (or B) is a 1 by 1 matrix. If its single entry is zero the nullity is 1.
By the standard cofactor formula for the entries of the inverse matrix, the determinant of the complementary submatrix (of order n - 1) is also zero. The nullity is therefore at least 1—but why not greater? Because a greater nullity would imply that the whole n by n matrix is not invertible.

A particular form of block matrix arises frequently in applications, with a square zero block on the diagonal. It is interesting to see the Nullity Theorem in action once more:

    M = [ A  B          M^{-1} = [ P  Q
          C  0 ],                  R  S ].   (12)

These block matrices are invertible exactly when the q columns of B and the q rows of C are independent (requiring 2q <= n). The nullity of the q by q zero block is q. The complementary (n - q) by (n - q) submatrix P of M^{-1} must have the same nullity. In this case the q columns of B are a basis for the nullspace of that submatrix: PB = zero matrix.

Returning to the upper right submatrices B and C in our proof of the main Theorem, we have nullity(B) = nullity(C). The matrix B has n - i columns, and C has n - i + w columns.
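This zero-block example can be checked directly. In the sketch below (ours, not the paper's), M = [[A, B], [C, 0]] with a q by q zero block, and P names the upper left block of M^{-1}:

```python
# Check (our addition): for M = [[A, B], [C, 0]] with a q x q zero block,
# the complementary upper left block P of M^{-1} has nullity q, and
# M^{-1} M = I forces P B = 0, so the q independent columns of B are a
# basis for the nullspace of P.
import numpy as np

rng = np.random.default_rng(5)
m, q = 6, 2                                    # block sizes: n = m + q = 8
A = rng.standard_normal((m, m)) + m * np.eye(m)
B = rng.standard_normal((m, q))
C = rng.standard_normal((q, m))
M = np.block([[A, B], [C, np.zeros((q, q))]])
P = np.linalg.inv(M)[:m, :m]                   # complementary to the zero block

assert np.allclose(P @ B, 0)                   # col(B) lies in nullspace of P
assert np.linalg.matrix_rank(P, tol=1e-10) == m - q
```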
Therefore

    (n - i) - rank(B) = (n - i + w) - rank(C).   (13)

Then (exactly!) rank(C) = w + rank(B). This means that rank(B) <= r if and only if rank(C) <= w + r, and the Theorem is proved.

Notice that both proofs extend immediately to block matrices A and A^{-1}, because they deal one at a time with complementary pairs B and C. Suppose that A has block entries A_{ij} (of any compatible sizes), with zero blocks A_{ij} = 0 for i - j > w. The inverse of this block band matrix has low rank submatrices C above the wth block subdiagonal. The proof is the same, except that the choices of B respect the block form of A: each row of B comes at the end of a block row in A. This block case is of utmost importance in applications.

IV. FAST MATRIX MULTIPLICATION AND APPLICATIONS

For a tridiagonal matrix A, the upper blocks C in A^{-1} (and also the lower blocks) have rank(C) <= 1. How quickly can we multiply an n by n matrix of this "semiseparable" form by a vector x?
We take this tridiagonal case (w = 1 and r = 0) as our model, and approach the complexity of multiplication in the most direct way.

If C = uv^T is a p by q matrix of rank one, the product Cx = u(v^T x) requires p + q individual multiplications (rather than pq). We want to partition A^{-1} into blocks that do not cross the diagonal. The natural choice is to begin with square blocks of size n/2, in the upper right and lower left corners of A^{-1}. This leaves two blocks of size n/2 on the diagonal of A^{-1}, to be partitioned (recursively) in the same way. The multiplication count c(n) obeys a rule much like the FFT:

    c(n) = 2 c(n/2) + 2n.   (14)

This recursion is satisfied by c(n) = 2n log_2 n.

The true applications of the Theorem in this paper are not to tridiagonal matrices but to integral equations—often with space dimension greater than one. Now a model problem is the approximate computation of a single or double integral:

    u(x) = \int K(x, y) f(y) dy    or    u(x_1, x_2) = \int\int K(x_1, x_2, y_1, y_2) f(y_1, y_2) dy_1 dy_2.

The kernel K may be the Green's function of an underlying differential equation. We expect to see blocks, rather than scalar entries, when approximating double integrals. The parameters w and r have "continuous" analogs for an integral operator:

Decay rate: Fast decay of K(x, y) away from the diagonal x = y corresponds to low bandwidth.

Smoothness: A slowly varying kernel corresponds to low rank submatrices.

In both cases the word "approximate" should be included. We have matrix analysis rather than matrix algebra.

To summarize the applications to fast solution of integral equations, our best plan is to point to several active groups (with apologies to others). The first group has emphasized the connections to earlier "panel methods" and the delicate partitioning that can sometimes reduce the operation count to O(n log n)—for matrix inversion as well as multiplication. We hope that these names and references will help the reader:

1) Hackbusch [14], [15]
2) Chandrasekaran and Gu [7]
3) Tyrtyshnikov [12], [21]
4) Eidelman and Gohberg [8], [9]
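The recursive multiplication behind the count c(n) = 2c(n/2) + 2n can be sketched in a few lines. We represent the semiseparable matrix by four generating vectors u, v, x, y (our notation, not the paper's), so that the off-diagonal blocks are exactly the rank-one pieces used in the recursion:

```python
# Runnable sketch (our addition) of recursive semiseparable multiplication.
# Generators: S[i,j] = u[i]*v[j] for i <= j and x[i]*y[j] for i >= j,
# with u[i]*v[i] = x[i]*y[i] on the diagonal.
import numpy as np

def semisep_matvec(u, v, x, y, z):
    """S @ z via rank-1 off-diagonal blocks; cost c(n) = 2c(n/2) + 2n."""
    n = len(z)
    if n == 1:
        return u * v * z                          # 1 x 1 diagonal block
    m = n // 2
    top = semisep_matvec(u[:m], v[:m], x[:m], y[:m], z[:m]) \
        + u[:m] * (v[m:] @ z[m:])                 # rank-1 upper right block
    bot = semisep_matvec(u[m:], v[m:], x[m:], y[m:], z[m:]) \
        + x[m:] * (y[:m] @ z[:m])                 # rank-1 lower left block
    return np.concatenate([top, bot])

rng = np.random.default_rng(4)
n = 16
u, v, y = rng.uniform(0.5, 1.5, (3, n))
x = u * v / y                                     # enforce the diagonal match
S = np.where(np.arange(n)[:, None] <= np.arange(n),
             np.outer(u, v), np.outer(x, y))      # dense semiseparable S
z = rng.standard_normal(n)
assert np.allclose(semisep_matvec(u, v, x, y, z), S @ z)
```

Each level of the recursion does two rank-one products, which is the 2n term in (14); the diagonal blocks repeat the pattern at half the size.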
V. TRIDIAGONAL MATRICES AND DIFFERENTIAL EQUATIONS

The tridiagonal case is the simplest and most important. If we look at the first two columns of A^{-1}, below the first row, then the "lower" statement at the start of our paper means: Those columns are proportional. We want to discuss this conclusion directly, and also to recognize the analogous statement for second-order differential equations and their Green's functions.

In the matrix case, the tridiagonal A multiplies the first column of A^{-1} to give zeros below the first entry. Key point: When the second column of A^{-1} is proportional to the first, multiplication by A gives those zeros again (below the second entry). These columns of A^{-1} contain a solution of the "second-order difference equation Au = 0." It is the solution that satisfies the boundary condition at the right endpoint. We see multiples of this homogeneous solution in all columns of A^{-1}, below the diagonal. They meet multiples of the other solution on the main diagonal. That other solution of Au = 0 satisfies the boundary condition at the left endpoint.

Since A can be any invertible tridiagonal matrix, Au = 0 may not look like a second-order difference equation. We could choose three numbers a_j, b_j, c_j to multiply u_{j-1}, u_j, u_{j+1} to match row j of A. It may be useful to compare with the standard approach to second-order differential equations, where the analog of column j in A^{-1} is the solution of

    a(x) u'' + b(x) u' + c(x) u = delta(x - y),

with boundary conditions at x = 0 and x = 1. The solution u(x) = G(x, y) is the Green's function and it corresponds to (A^{-1})_{ij}.

A good text like [5, pp. 15-18] notes the equivalence between computing G and variation of parameters. The latter begins with two independent solutions u_1(x) and u_2(x) of a u'' + b u' + c u = 0, and finds a particular solution of a u'' + b u' + c u = f of the form

    u(x) = c_1(x) u_1(x) + c_2(x) u_2(x).   (15)

The underdetermined c_1 and c_2 are constrained by

    c_1'(x) u_1(x) + c_2'(x) u_2(x) = 0.   (16)

Substituting (15) into a u'' + b u' + c u = f then gives

    c_1'(x) u_1'(x) + c_2'(x) u_2'(x) = f(x)/a(x).   (17)

Solving (16) and (17) for c_1' and c_2' involves the nonzero Wronskian of u_1 and u_2. Integration gives c_1(x) and c_2(x), and equation (15) gives u(x).
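A worked instance may help (our example, not from the paper). Take a(x) = -1 and b = c = 0 on [0, 1], with u(0) = u(1) = 0:

```latex
% Example (ours): -u'' = \delta(x - y) with u(0) = u(1) = 0.
% Independent solutions fitting one boundary condition each:
%   u_1(x) = x  (so u_1(0) = 0),   u_2(x) = 1 - x  (so u_2(1) = 0).
% The Green's function is a multiple of u_1 for x <= y and of u_2 for x >= y:
G(x,y) =
\begin{cases}
x\,(1 - y), & x \le y,\\
y\,(1 - x), & x \ge y.
\end{cases}
% Discrete analog: the n x n second difference matrix
% K = \mathrm{tridiag}(-1, 2, -1) has inverse entries
(K^{-1})_{ij} = \frac{i\,(n + 1 - j)}{n + 1} \qquad (i \le j),
% which display exactly the rank-1 pattern of the Theorem.
```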
A few remarks will connect this continuous problem with the discrete equation Au = delta_j (column j of the identity). Suppose u_2(x) is chosen to satisfy the boundary condition at the right endpoint x = 1, as well as a u_2'' + b u_2' + c u_2 = 0. Similarly u_1 satisfies the equation and the boundary condition at x = 0. With those special choices, the Green's function G(x, y) will be a multiple of u_2(x) for x >= y and a multiple of u_1(x) for x <= y. Those multiples c_1(y) and c_2(y) correspond to the "proportionality constants" that connect the columns of A^{-1}, above and below its main diagonal. And those constants are given by the first and last rows of A^{-1}, which solve the adjoint problem based on the transpose of A.

Perhaps we can summarize the discrete case in this way. The matrix A^{-1} is determined by its first and last columns and rows. It has rank 1 above and below the diagonal. Those parameters are reduced to the correct number 3n - 2 by equality along the diagonal.

To close the circle, we specialize the matrices B and C in this paper to the tridiagonal case. For a given i, B is an (i - 1) by (n - i) submatrix of zeros and its nullity is n - i. Then C is an i by (n - i + 1) matrix with the same nullity. Therefore its rank is 1! As i varies, all the upper submatrices C of A^{-1} have rank 1.

ACKNOWLEDGMENT

The second author visited MIT in July 2003 as a student in the Singapore-MIT Alliance. Both authors express their thanks to SMA for this opportunity to work together on linear algebra, and to Wayne Barrett for such helpful advice.

REFERENCES

[1] E. Asplund, Inverses of matrices {a_{ij}} which satisfy a_{ij} = 0 for j > i + p, Math. Scand., 7 (1959) 57-60.
[2] W. W. Barrett, C. R. Johnson, D. D. Olesky and P. van den Driessche, Inherited matrix entries: principal submatrices of the inverse, SIAM J. Alg. Disc. Meth., 8 (1987) 313-322.
[3] W. W. Barrett and P. J. Feinsilver, Inverses of banded matrices, Linear Alg. Appl., 41 (1981) 111-130.
[4] W. W. Barrett, A theorem on inverses of tridiagonal matrices, Linear Alg. Appl., 27 (1979) 211-217.
[5] C. Bender and S. Orszag, Advanced Mathematical Methods for Scientists and Engineers, McGraw-Hill, 1978; Springer, 1999.
[6] W. L. Cao and W. J. Stewart, A note on inverses of Hessenberg-like matrices, Linear Alg. Appl., 76 (1986) 233-240.
[7] S. Chandrasekaran and M. Gu, Fast and stable algorithms for banded plus semiseparable systems of linear equations, SIAM J. Matrix Anal. Appl., 25 (2003) 373-384.
[8] Y. Eidelman and I. Gohberg, On a new class of structured matrices, Integral Eq. and Oper. Theory, 34 (1999) 293-324.
[9] Y. Eidelman and I. Gohberg, Inversion formulas and linear complexity algorithm for diagonal plus semiseparable matrices, Computers and Math. with Appl., 33 (1997) 69-79.
[10] M. Fiedler and T. L. Markham, Completing a matrix when certain entries of its inverse are specified, Linear Alg. Appl., 74 (1986) 225-237.
[11] F. R. Gantmacher and M. G. Krein, Oszillationsmatrizen, Oszillationskerne und kleine Schwingungen mechanischer Systeme, Akademie-Verlag, 1960; Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems, Amer. Math. Society, 2002.
[12] S. Goreinov, E. Tyrtyshnikov and A. Yeremin, Matrix-free iterative solution strategies for large dense linear systems, Numerical Linear Alg. and Appl., 4 (1997) 273-294.
[13] W. H. Gustafson, A note on matrix inversion, Linear Alg. Appl., 57 (1984) 71-73.
[14] W. Hackbusch, A sparse H-matrix arithmetic, Computing, 62 (1999) 89-108.
[15] W. Hackbusch and B. N. Khoromskij, Towards H-matrix approximation of linear complexity, Problems and Methods in Mathematical Physics, J. Elschner, I. Gohberg and B. Silbermann, eds., Birkhäuser, Basel, 2001.
[16] R. Horn and C. R. Johnson, Matrix Analysis, Cambridge Univ. Press, 1985.
[17] Y. Ikebe, On inverses of Hessenberg matrices, Linear Alg. Appl., 24 (1979) 93-97.
[18] S. Karlin, Total Positivity, Stanford Univ. Press, 1968.
[19] G. Meurant, A review of the inverse of symmetric tridiagonal and block tridiagonal matrices, SIAM J. Matrix Anal. Appl., 13 (1992) 707-728.
[20] G. Strang, Introduction to Linear Algebra, 3rd Edition, Wellesley-Cambridge Press, 2003.
[21] E. E. Tyrtyshnikov, Mosaic-skeleton approximations, Calcolo, 33 (1996) 47-57.
Gilbert Strang  Gilbert Strang was an undergraduate at MIT and a Rhodes Scholar at Balliol College, Oxford. His doctorate was from UCLA and since then he has taught at MIT. He has been a Sloan Fellow and a Fairchild Scholar and is a Fellow of the American Academy of Arts and Sciences. He is a Professor of Mathematics at MIT and an Honorary Fellow of Balliol College. Professor Strang has published a monograph with George Fix, An Analysis of the Finite Element Method, and six textbooks: Introduction to Linear Algebra (1993, 1998, 2003), Linear Algebra and Its Applications (1976, 1980, 1988), Introduction to Applied Mathematics (1986), Calculus (1991), Wavelets and Filter Banks, with Truong Nguyen (1996), and Linear Algebra, Geodesy, and GPS, with Kai Borre (1997). Gilbert Strang served as President of SIAM during 1999 and 2000. He is Chair of the US National Committee on Mathematics for 2003-2004. His home page is http://math.mit.edu/~gs and his courses are on ocw.mit.edu.

Tri Nguyen  Tri Dung Nguyen was selected for the Mathematics Seeded School in Vietnam as early as fifth grade. He earned a Silver Medal in the Asian Pacific Mathematics Olympiad in 1997. For his bachelor's degree in 2002 at RMIT, Australia, he specialized in Software Systems Engineering. There he won the Golden Key Lifetime Membership of the International Honor Society. He was a student member of IEEE and of the Telecommunication Society of Australia. His home page is http://www2.ece.rmit.edu.au/~s9912677/Personal/ and he can be contacted via email at [email protected]. Tri Nguyen is pursuing his Master's degree with the Singapore-MIT Alliance in 2003-2004, in High Performance Computation for Engineered Systems.