arXiv:1402.5890v1 [math.NA] 21 Feb 2014

A Test Matrix for an Inverse Eigenvalue Problem

G. M. L. Gladwell 1, T. H. Jones 2, and N. B. Willms 2

February 25, 2014

1 Distinguished Professor Emeritus, Department of Civil and Environmental Engineering, University of Waterloo, Ontario, N2L 3G1
2 Department of Mathematics, Bishop's University, Sherbrooke, Quebec, J1M 2H2

Correspondence should be addressed to N. B. Willms; bwillms@ubishops.ca

Copyright (c) 2014 G. M. L. Gladwell et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We present a real, symmetric, tri-diagonal matrix of order n whose eigenvalues are {2k}_{k=0}^{n−1} and which also satisfies the additional condition that its leading principal submatrix has a uniformly interlaced spectrum, {2l+1}_{l=0}^{n−2}. The matrix entries are explicit functions of the size n, so the matrix can be used as a test matrix for eigenproblems, both forward and inverse. An explicit solution of a spring-mass inverse problem incorporating the test matrix is provided.

1 Introduction

We are motivated by the following inverse eigenvalue problem, first studied by Hochstadt in 1967 [11]. Given two strictly interlaced sequences of real values, (λ_i)_1^n and (λ_i^o)_1^{n−1}, such that

λ_1 < λ_1^o < λ_2 < λ_2^o < ··· < λ_{n−1} < λ_{n−1}^o < λ_n,        (1.1)

find the n × n real, symmetric, tridiagonal matrix B whose eigenvalues are (λ_i)_1^n, while (λ_i^o)_1^{n−1} are the eigenvalues of the leading principal submatrix of B, B^o, obtained from B by deleting the last row and column. The condition (1.1) on the data-set is both necessary and sufficient for the existence of a unique Jacobian matrix solution to the problem (see [6] §4.3 or [17] for a history of the problem, and Section 3 of this paper for additional background theory).

A number of different constructive procedures to produce the exact solution of this inverse problem have been developed [1, 3, 7, 8, 10, 12], but none provide an explicit characterization of the entries of the solution B in terms of the data-set (1.1). Computer implementation of these procedures introduces floating point error and associated numerical stability issues. Loss of significant figures due to accumulation of round-off error makes some of the known solution procedures undesirable. Determining the extent of round-off in the numerical solution, B̂, computed from a given data-set requires a priori knowledge of the exact solution B. In the absence of this knowledge, an additional numerical computation of the forward problem to find the spectra λ(B̂) and λ(B̂^o) allows comparison to the original data. Test matrices, with known entries and known spectra, are therefore helpful in comparing the efficacy of the various solution algorithms with regard to stability. It is particularly helpful when such test matrices can be produced at arbitrary size. However, some extant test matrices given as a function of matrix size n suffer the following trait: when ordered by size, the minimum spacing between consecutive eigenvalues is a decreasing function of n. This trait is potentially undesirable since the reciprocal of this minimum separation between eigenvalues can be thought of as a condition number on the sensitivity of the eigenvectors (invariant subspaces) to perturbation (see [9], Theorem 8.1.12). Some of the algorithms for the inverse problem seem to suffer from this form of ill-conditioning.

From a motivation to avoid confounding the numerical stability issue with potential increased ill-conditioning of the data-set as a function of n, the authors developed a test matrix which has equally spaced, and uniformly interlaced, simple eigenvalues. In Section 2 we provide the explicit entries of such a matrix, A(n). We claim that its eigenvalues are equally spaced as

λ(A(n)) = {0, 2, 4, ..., 2n − 2},

while its leading principal submatrix A^o(n) has eigenvalues uniformly interlaced with those of A(n), namely

λ(A^o(n)) = {1, 3, 5, ..., 2n − 3}.

A short proof verifies the claims.
In Section 3 we present some background theory concerning Jacobian matrices, and in Section 4 we apply our test matrix to a model of a physical spring-mass system, an application which leads naturally to Jacobian matrices.
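To make the inverse problem concrete, the following sketch (ours, not part of the paper) implements one standard constructive route: the squared last components of the eigenvectors are computed from the two spectra, and a Lanczos run on diag(λ) then rebuilds a tridiagonal matrix, in the spirit of the procedures cited above (e.g. [3, 6]). It assumes numpy, the function name is our own, and the numerical safeguards a careful implementation would need are omitted. With the data chosen below it reproduces, up to the signs of the off-diagonal entries, the test matrix constructed in Section 2.

```python
import numpy as np

def reconstruct_jacobian(lam, mu):
    """Sketch: recover a tridiagonal matrix with spectrum lam whose leading
    principal submatrix has spectrum mu (lam, mu strictly interlaced as in (1.1))."""
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)
    n = lam.size
    # Squared last components of the normalized eigenvectors of the target matrix.
    d2 = np.array([np.prod(lam[i] - mu) / np.prod(np.delete(lam[i] - lam, i))
                   for i in range(n)])
    d = np.sqrt(d2)
    # Lanczos on diag(lam) with starting vector d gives a Jacobi matrix whose
    # trailing principal submatrix has spectrum mu; reversing the index order
    # turns that into the leading principal submatrix, as the problem requires.
    Q = np.zeros((n, n))
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    Q[:, 0] = d
    for k in range(n):
        w = lam * Q[:, k]
        if k > 0:
            w -= beta[k - 1] * Q[:, k - 1]
        alpha[k] = Q[:, k] @ w
        w -= alpha[k] * Q[:, k]
        if k < n - 1:
            beta[k] = np.linalg.norm(w)
            Q[:, k + 1] = w / beta[k]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T[::-1, ::-1]

lam = np.arange(0.0, 12.0, 2.0)   # 0, 2, ..., 10  (n = 6)
mu = np.arange(1.0, 10.0, 2.0)    # 1, 3, ..., 9
B = reconstruct_jacobian(lam, mu)
print(np.allclose(np.linalg.eigvalsh(B), lam))           # True
print(np.allclose(np.linalg.eigvalsh(B[:-1, :-1]), mu))  # True
```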

2 Main Result

Let A(n) be a real, symmetric, tridiagonal, n × n matrix with entries

a_{ii} = n − 1,                               i = 1, 2, ..., n,
a_{i,i+1} = (1/2) √( i(2n − i − 1) ),         i = 1, 2, ..., n − 2,
a_{n−1,n} = √( n(n − 1)/2 ),

and let A^o(n) be the principal submatrix of A(n), that is, the (n − 1) × (n − 1) matrix obtained from A(n) by deleting the last row and column.
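A quick numerical check of these entries (ours, not the paper's; it assumes numpy, and the function name is illustrative) is given below. The signs chosen for the off-diagonal entries do not affect the spectra (see Section 3), so only their magnitudes matter here.

```python
import numpy as np

def test_matrix(n):
    """A(n): real, symmetric, tridiagonal, built from the explicit entries above."""
    A = np.diag(np.full(n, n - 1.0))
    for i in range(1, n - 1):                        # i = 1, ..., n-2 in 1-based indexing
        A[i - 1, i] = A[i, i - 1] = 0.5 * np.sqrt(i * (2 * n - i - 1))
    A[n - 2, n - 1] = A[n - 1, n - 2] = np.sqrt(n * (n - 1) / 2.0)
    return A

n = 8
A = test_matrix(n)
print(np.allclose(np.linalg.eigvalsh(A), np.arange(0, 2 * n - 1, 2)))             # 0, 2, ..., 2n-2
print(np.allclose(np.linalg.eigvalsh(A[:-1, :-1]), np.arange(1, 2 * n - 2, 2)))   # 1, 3, ..., 2n-3
```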

Theorem 2.1. A(n) has eigenvalues {0, 2, ..., 2n − 2} and A^o(n) has eigenvalues {1, 3, ..., 2n − 3}.

Proof: By induction. When n = 2,

A(2) = [ 1  1 ]
       [ 1  1 ]

has eigenvalues 0, 2, and A^o(2) has eigenvalue 1. Assume the result holds for n, so A(n) has eigenvalues {0, 2, ..., 2n − 2}. Let B = A^o(n+1) − nI and A = A(n) − (n−1)I. Then B and A are similar via BR = RA, where R is upper triangular, with entries

r_{ij} = √( k (j−1)! (2n−j−1)! / ( (i−1)! (2n−i+1)! ) )   if i, j have the same parity and j ≥ i,
r_{ij} = 0   otherwise,

where k = 2 if j ≠ n, and k = 1 if j = n. Therefore A^o(n+1) has eigenvalues {1, 3, ..., 2n − 1}.

Now we show that A(n+1) has eigenvalues {2n} ∪ {eigenvalues of A(n)}. Let C = A(n+1) − 2nI. Factorize C = −LL^T, where L is lower bidiagonal. We find:

l_{ii} = √( (2n − i + 1)/2 ),   l_{i+1,i} = −√( i/2 ),   i = 1, 2, ..., n − 1,
l_{nn} = √( (n + 1)/2 ),   l_{n+1,n} = −√n,   l_{n+1,n+1} = 0.

Therefore C has eigenvalue 0 and thus A(n+1) has eigenvalue 2n. Define D = 2nI − L^T L, so

D = [ D^o   O  ]
    [ O     2n ]

with

d_{ii} = (2n − 1)/2,   d_{i+1,i} = (1/2) √( i(2n − i) ),   i = 1, 2, ..., n − 1,   and   d_{nn} = (n − 1)/2.

Now D^o has the same eigenvalues as A(n) since they are similar matrices via S D^o = A(n) S, where S is upper triangular with entries

s_{ii} = √(2n − i),   s_{i,i+1} = −√i,   i = 1, 2, ..., n − 1,
s_{nn} = √(2n),   s_{ij} = 0 otherwise.

Therefore A(n+1) has eigenvalues {2n} ∪ {eigenvalues of A(n)}. □
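The similarity and factorization steps of the proof are easy to confirm numerically for a particular n. The sketch below (ours, not the paper's; it assumes numpy, and takes the off-diagonals of A(n) with the signs used in the factors above) checks BR = RA, C = −LL^T, and S D^o = A(n) S.

```python
import numpy as np
from math import factorial, sqrt

def test_matrix(n):
    A = np.diag(np.full(n, n - 1.0))
    for i in range(1, n - 1):
        A[i - 1, i] = A[i, i - 1] = 0.5 * sqrt(i * (2 * n - i - 1))
    A[n - 2, n - 1] = A[n - 1, n - 2] = sqrt(n * (n - 1) / 2.0)
    return A

n = 6

# Similarity BR = RA with B = A^o(n+1) - nI and A = A(n) - (n-1)I.
B = test_matrix(n + 1)[:n, :n] - n * np.eye(n)
A = test_matrix(n) - (n - 1) * np.eye(n)
R = np.zeros((n, n))
for i in range(1, n + 1):
    for j in range(i, n + 1):
        if (i + j) % 2 == 0:                       # same parity and j >= i
            k = 1 if j == n else 2
            R[i - 1, j - 1] = sqrt(k * factorial(j - 1) * factorial(2 * n - j - 1)
                                   / (factorial(i - 1) * factorial(2 * n - i + 1)))
print(np.allclose(B @ R, R @ A))                   # True

# Factorization C = A(n+1) - 2nI = -L L^T with L lower bidiagonal.
L = np.zeros((n + 1, n + 1))
for i in range(1, n):
    L[i - 1, i - 1] = sqrt((2 * n - i + 1) / 2.0)
    L[i, i - 1] = -sqrt(i / 2.0)
L[n - 1, n - 1] = sqrt((n + 1) / 2.0)
L[n, n - 1] = -sqrt(n)
C = test_matrix(n + 1) - 2 * n * np.eye(n + 1)
print(np.allclose(C, -L @ L.T))                    # True

# D = 2nI - L^T L decouples as diag(D^o, 2n), and S D^o = A(n) S.
D = 2 * n * np.eye(n + 1) - L.T @ L
Do = D[:n, :n]
S = np.zeros((n, n))
for i in range(1, n):
    S[i - 1, i - 1] = sqrt(2 * n - i)
    S[i - 1, i] = -sqrt(i)
S[n - 1, n - 1] = sqrt(2 * n)
print(np.allclose(S @ Do, test_matrix(n) @ S))     # True
```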

3 Discussion

A real, symmetric n × n matrix B is called a Jacobian matrix when its off-diagonal elements are non-zero ([6], p. 46). We write

B =
[   a_1    −b_1      0       ···        0          0      ]
[  −b_1     a_2     −b_2     ···        0          0      ]
[    0     −b_2      a_3    −b_3        ⋱          ⋮      ]
[    ⋮       ⋱        ⋱       ⋱         ⋱          0      ]
[    0      ···    −b_{n−2}  a_{n−1}         −b_{n−1}     ]
[    0      ···       0     −b_{n−1}           a_n        ]        (3.1)

The similarity transformation B̂ = S^{−1} B S, where S = S^{−1} is the sign matrix S = diag(1, −1, 1, −1, ..., (−1)^{n−1}), produces a Jacobian matrix B̂ with entries the same as B except for the sign of the off-diagonal elements, which are all reversed. If instead we use the self-inverse

sign matrix S(m) = diag(1, 1, ..., 1, −1, −1, ..., −1), whose first m entries equal 1 and whose remaining n − m entries equal −1, to transform B, then B̂ is a Jacobian matrix identical to B except for a switched sign on the mth off-diagonal element. In regard to the spectrum of the matrix, there is therefore no loss of generality in accepting the convention that a Jacobian matrix be expressed with negative off-diagonal elements, that is, b_i > 0 for all i = 1, ..., n − 1 in (3.1).

While Cauchy's interlace theorem [13] guarantees that the eigenvalues of any square, real, symmetric (or even Hermitian) matrix will interlace those of its leading (or trailing) principal submatrix, the interlacing need not be strict in general [5]. However, specializing to the case of Jacobian matrices restricts the interlacing to strict inequalities. That is, Jacobian matrices possess distinct eigenvalues, and the eigenvalues of the leading (or trailing) principal submatrix are also distinct, and strictly interlace those of the original matrix (see [6], Theorems 3.1.3 and 3.1.4; see also [9], exercise P8.4.1, p. 475: when a tridiagonal matrix has algebraically multiple eigenvalues, the matrix fails to be Jacobian). The inverse problem is also well-posed: there is a unique (up to the signs of the off-diagonal elements) Jacobian matrix B having given spectra specified as per (1.1) (see [6], Theorem 4.2.1, noting that the interlaced spectrum of n − 1 eigenvalues (λ^o)_1^{n−1} can be used to calculate the last components of each of the n ortho-normalized eigenvectors of B via equation 4.3.31). Therefore, the matrix A(n) in Theorem 2.1 is the unique Jacobian matrix with eigenvalues equally spaced by two, starting with smallest eigenvalue zero, whose leading principal submatrix has eigenvalues also equally spaced by two, starting with smallest eigenvalue one. As a consequence of the theorem, we now have

Corollary 3.1. The eigenvalues of the real, symmetric n × n tridiagonal matrix

W_n =
[       a           −c√((n−1)/2)          0                  ···                 0          ]
[ −c√((n−1)/2)           a           −c√((2n−3)/2)            0                  0          ]
[       0           −c√((2n−3)/2)         a             −c√((3n−6)/2)            ⋮          ]
[       ⋮                ⋱                ⋱                   ⋱                  0          ]
[       0               ···        −c√((n−2)(n+1)/4)          a           −c√(n(n−1)/2)     ]
[       0               ···               0             −c√(n(n−1)/2)            a          ]        (3.2)

form the arithmetic sequence

λ(W_n) = { a_o + 2c(i−1) }_{i=1}^{n},        (3.3)

while the eigenvalues of its leading principal submatrix, W_n^o, form the uniformly interlaced sequence

λ(W_n^o) = { a_o + c + 2c(i−1) }_{i=1}^{n−1},   where   a = a_o + c(n−1).        (3.4)
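A minimal numerical check of the corollary (ours; it assumes numpy, and the function name is illustrative) builds W_n from a chosen smallest eigenvalue a_o and gap c and compares the computed spectra with (3.3) and (3.4):

```python
import numpy as np

def W(n, a0, c):
    """W_n of Corollary 3.1: constant diagonal a = a0 + c(n-1), negative off-diagonals."""
    a = a0 + c * (n - 1)
    M = np.diag(np.full(n, float(a)))
    for i in range(1, n - 1):                                   # i = 1, ..., n-2
        M[i - 1, i] = M[i, i - 1] = -0.5 * c * np.sqrt(i * (2 * n - i - 1))
    M[n - 2, n - 1] = M[n - 1, n - 2] = -c * np.sqrt(n * (n - 1) / 2.0)
    return M

n, a0, c = 7, -3.0, 0.5
Wn = W(n, a0, c)
print(np.allclose(np.linalg.eigvalsh(Wn), a0 + 2 * c * np.arange(n)))                    # (3.3)
print(np.allclose(np.linalg.eigvalsh(Wn[:-1, :-1]), a0 + c + 2 * c * np.arange(n - 1)))  # (3.4)
```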

The form and properties of W_n were first hypothesised by the third author while programming Fortran algorithms to reconstruct band matrices from spectral data [17]. Initial attempts to prove the spectral properties of W_n by him and his graduate supervisor (the first author) failed. Later, the first author produced the short induction argument of Theorem 2.1, in July, 1996. Alas, the fax on which the argument was communicated to the third author was lost in a cross-border academic move, and so the matter languished until recently. In the summer of 2013, the second and third authors, looking for a tractable problem for a summer undergraduate research project, assigned the problem of this paper: "hypothesize, and then verify, if possible, the explicit entries

of an n × n symmetric, tridiagonal matrix with eigenvalues (3.3), such that the eigenvalues of its principal submatrix are (3.4)." Meanwhile a building renovation in the summer of 2013 required the third author to clean out his office, including boxes of old papers and notes. During this process the misplaced fax from the first author was found. However, to our delight, the student, Mr. Alexander De Serre-Rothney, was also able to independently complete both parts of the problem. His proof is now found in [15]. Though longer than the one presented here, his proof utilizes the spectral properties of another tridiagonal (non-symmetric) matrix, the so-called Kac matrix, K_n, of size (n+1) × (n+1) [2, 4, 14, 16]:

K_n =
[ 0    n                                  ]
[ 1    0     n−1                          ]
[      2      0      n−2                  ]
[             ⋱       ⋱        ⋱          ]
[                    n−1       0      1   ]
[                              n      0   ]        (3.5)

which has eigenvalues λ(K_n) = { 2k − n }_{k=0}^{n}.

4 A Spring-Mass Model Problem


Figure 4.1: Spring-mass system: (a) right hand end free, (b) right hand end fixed

Let

C =
[ k_1 + k_2     −k_2                                          ]
[   −k_2      k_2 + k_3    −k_3                               ]
[                ⋱            ⋱             ⋱                 ]
[                         −k_{n−1}    k_{n−1} + k_n    −k_n   ]
[                                         −k_n          k_n   ]

and   M = diag( m_1, m_2, ..., m_n ).

Then the natural frequencies of the systems in Figure 4.1 are solutions of (C − λM)x = 0 and (C^o − λ^o M^o)x^o = 0, where C^o is obtained from C by deleting the last row and column. The solutions can be ordered 0 < λ_1 < λ_1^o < λ_2 < ··· < λ_{n−1} < λ_{n−1}^o < λ_n. We can also rewrite the systems as (B − λI)u = 0 and (B^o − λ^o I)u^o = 0, where B = M^{−1/2} C M^{−1/2} and u = M^{1/2} x. Note the natural frequencies of the systems are the eigenvalues of B and B^o.
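This reduction is easy to exercise numerically. The sketch below (ours, not the paper's; it assumes numpy, uses arbitrary positive stiffnesses and masses, and the function name is illustrative) forms C and M, computes B = M^{−1/2} C M^{−1/2} and its leading principal submatrix, and confirms the strict interlacing 0 < λ_1 < λ_1^o < ··· < λ_{n−1}^o < λ_n.

```python
import numpy as np

def chain_matrices(k, m):
    """Stiffness matrix C and mass matrix M for the fixed-free chain of Figure 4.1(a)."""
    n = len(m)
    C = np.zeros((n, n))
    for i in range(n):
        C[i, i] = k[i] + (k[i + 1] if i < n - 1 else 0.0)   # last diagonal entry is k_n alone
        if i < n - 1:
            C[i, i + 1] = C[i + 1, i] = -k[i + 1]
    return C, np.diag(m)

rng = np.random.default_rng(1)
n = 6
k = rng.uniform(1.0, 3.0, n)             # k_1, ..., k_n (arbitrary positive values)
m = rng.uniform(1.0, 3.0, n)             # m_1, ..., m_n

C, M = chain_matrices(k, m)
Mh = np.diag(1.0 / np.sqrt(m))
B = Mh @ C @ Mh                          # B = M^{-1/2} C M^{-1/2}
lam = np.linalg.eigvalsh(B)              # spectrum of system (a)
lam_o = np.linalg.eigvalsh(B[:-1, :-1])  # spectrum of system (b)

print(lam[0] > 0 and all(lam[i] < lam_o[i] < lam[i + 1] for i in range(n - 1)))  # True
```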

We will consider the system with natural frequencies {1, 3, ..., 2n − 1} for system (a), and {2, 4, ..., 2n − 2} for system (b). We can use the matrix A(n) to help solve this system for the spring stiffnesses and the masses. Let B(n) be the symmetric tridiagonal matrix:

B_{ii} = a_i = n,                                 i = 1, ..., n,
B_{i,i+1} = −b_i = −(1/2) √( i(2n − i − 1) ),     i = 1, ..., n − 2,
B_{n−1,n} = −b_{n−1} = −√( n(n − 1)/2 ).

Note that B(n) = A(n) + I (up to the signs of the off-diagonal entries, which by Section 3 leave the spectra unchanged) and has eigenvalues {2k + 1}_{k=0}^{n−1}, while B^o(n) has eigenvalues {2k}_{k=1}^{n−1}. Let u = ⟨ m_1^{1/2}, ..., m_n^{1/2} ⟩^T with m_i > 0 for all i. Let m = Σ_{i=1}^n m_i = u^T u. We wish to solve

B(n) u = ⟨ m_1^{−1/2} k_1, 0, ..., 0 ⟩^T        (4.1)

for (m_i)_{i=1}^n and k_1. The bottom, nth, equation is

m_{n−1}^{1/2} = n m_n^{1/2} / b_{n−1} = √2 ( n/(n−1) )^{1/2} α,

where we choose m_n^{1/2} = α. We will thus be able to express m_i^{1/2} in terms of the scaling parameter α. The (n−1)th equation is

m_{n−2}^{1/2} = ( n m_{n−1}^{1/2} − α b_{n−1} ) / b_{n−2}
            = α ( n√2 (n/(n−1))^{1/2} − (n(n−1)/2)^{1/2} ) / ( (1/2)((n−2)(n+1))^{1/2} )
            = α√2 ( n(n+1) / ((n−1)(n−2)) )^{1/2}.

The ith equation, for i ≠ 1, n−1, n, is

−b_{i−1} m_{i−1}^{1/2} + n m_i^{1/2} − b_i m_{i+1}^{1/2} = 0.

Then

m_{i−1}^{1/2} = ( 2n m_i^{1/2} − ( i(2n − i − 1) )^{1/2} m_{i+1}^{1/2} ) / ( (i−1)(2n − i) )^{1/2}.        (4.2)

Now suppose

m_{n−i}^{1/2} = α√2 ( n(n+1)···(n+i−1) / ( (n−1)(n−2)···(n−i) ) )^{1/2}        (4.3)

for i = 1, 2, ..., j. Then cases i = 1, 2 are already verified, and the strong inductive assumption

applied in (4.2) with i − 1 = n − (j+1) implies i = n − j. So

m_{n−j−1}^{1/2} = ( 2n α√2 ( n(n+1)···(n+j−1) / ((n−1)(n−2)···(n−j)) )^{1/2} − ( (n−j)(n+j−1) )^{1/2} m_{n−j+1}^{1/2} ) / ( (n−j−1)(n+j) )^{1/2}
              = α√2 ( n(n+1)···(n+j−1) / ((n−1)(n−2)···(n−j)) )^{1/2} ( 2n − (n−j) ) / ( (n−j−1)(n+j) )^{1/2}
              = α√2 ( n(n+1)···(n+j−1)(n+j) / ((n−1)(n−2)···(n−j)(n−j−1)) )^{1/2},

which verifies, by strong induction, the closed form for m_{n−i}^{1/2} given by (4.3). Finally, the first equation of (4.1) is

n m_1^{1/2} − b_1 m_2^{1/2} = m_1^{−1/2} k_1,

and so k_1 = n m_1 − b_1 ( m_1 m_2 )^{1/2}. We note that the values m_{n−i} can be written as

m_{n−i} = 2α² (n+i−1)! (n−i−1)! / ( (n−1)! )²        (4.4)

for i = 1, ..., n − 1, and m_n = α² (n+0−1)! (n−0−1)! / ( (n−1)! )² = α². Since C = M^{1/2} B(n) M^{1/2}, then

k_{i+1} = −C_{i,i+1} = −m_{n−(n−i)}^{1/2} B_{i,i+1} m_{n−(n−i−1)}^{1/2} = α² i! (2n−i−1)! / ( (n−1)! )²        (4.5)

and k_1 = α² (2n−1)! / ( (n−1)! )².

5 Conclusion

A family of n × n symmetric tri-diagonal matrices, W_n, whose eigenvalues are simple and uniformly spaced, and whose leading principal submatrix has uniformly interlaced, simple eigenvalues, has been presented (3.2). Members of the family are characterized by a specified smallest eigenvalue, a_o, and gap size, c, between eigenvalues. The matrices are termed Jacobian, since the off-diagonal entries are all non-zero. The matrix entries are explicit functions of the size n, a_o and c, so the matrices can be used as test matrices for eigenproblems, both forward and inverse. The matrix W_n for specified smallest eigenvalue a_o and gap c is unique up to the signs of the off-diagonal elements.

In Section 4, the form of W_n was used as an explicit solution of a spring-mass vibration model (Figure 4.1), and the inverse problem to determine the lumped masses and spring stiffnesses was solved explicitly. Both the lumped masses, m_{n−i} given by equation (4.4), and spring stiffnesses,

k_{n−i} from equation (4.5), show super-exponential growth. Consequently m_n/m_1 and k_n/k_1 become vanishingly small as n → ∞. As a result, the spring-mass system of Figure 4.1 cannot be used as a discretized model for a physical beam in longitudinal vibration, as the model becomes unrealistic in the limit as n → ∞.

References

[1] F. W. Biegler-Konig. Construction of band matrices from spectral data. Linear Algebra and Its Applications, 40:79–87, 1981.

[2] Paul A. Clement. A class of triple-diagonal matrices for test purposes. SIAM Review, 1:50–52, 1959.

[3] C. de Boor and G. H. Golub. The numerically stable reconstruction of a Jacobi matrix from spectral data. Linear Algebra and Its Applications, 21:245–260, 1978.

[4] Alan Edelman and Eric Kostlan. The road from Kac's matrix to Kac's random polynomials. In Proceedings of the 1994 SIAM Applied Linear Algebra Conference, pages 503–507, Philadelphia, Pennsylvania, 1994.

[5] Steve Fisk. A very short proof of Cauchy’s interlace theorem for eigenvalues of Hermitian matrices. The American Mathematical Monthly, 112(2):118, 2005.

[6] G. M. L. Gladwell. Inverse Problems in Vibration. Martinus Nijhoff Publishers, 1986.

[7] G. M. L. Gladwell and N. B. Willms. A discrete Gelfand-Levitan method for band-matrix inverse eigenvalue problems. Inverse Problems, 5(2):165–179, 1989.

[8] G. H. Golub and D. L. Boley. Inverse eigenvalue problems for band matrices. In G. A. Watson, editor, , pages 23–31. Springer Verlag, 1977.

[9] Gene H. Golub and Charles F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 4th edition, 2013.

[10] O. H. Hald. Inverse problems for Jacobi matrices. Linear Algebra and Its Applications, 14(4):63– 85, 1976.

[11] H. Hochstadt. On some inverse problems in matrix theory. Arch. Math, 18:201–207, 1967.

[12] H. Hochstadt. On the construction of a Jacobi matrix from spectral data. Linear Algebra and Its Applications, 8:435–446, 1974.

[13] S. G. Hwang. Cauchy’s interlace theorem for eigenvalues of Hermitian matrices. The American Mathematical Monthly, 111(4):157–159, 2004.

[14] T. Muir. A Treatise on the Theory of Determinants. Longmans Greens, 1933. Revised and enlarged by W. H. Metzler, Dover, New York, 1960.

[15] Alexander De Serre Rothney. Eigenvalues of a special tridiagonal matrix. http://www.ubishops.ca/fileadmin/bishops_documents/natural_sciences/mathematics/files/2 2013.

[16] O. Taussky and J. Todd. Another look at a matrix of Mark Kac. Linear Algebra and Its Applications, 150:341–360, 1991.

[17] N. Brad Willms. Some matrix inverse eigenvalue problems. Master’s thesis, University of Waterloo, Ontario, Canada, 1988.
