Eigenvalues of a Special Tridiagonal Matrix

Alexander De Serre Rothney∗

October 10, 2013

Abstract. In this paper we consider a special tridiagonal test matrix. We prove that its eigenvalues are the even integers 2, 4, ..., 2n and show its relationship with the famous Kac–Sylvester tridiagonal matrix.

1 Introduction

We begin with a quick overview of the theory of symmetric tridiagonal matrices. In particular, we describe the symmetrization process for a tridiagonal matrix, as well as the orthogonal polynomials that arise from the characteristic polynomials of such matrices.

Definition 1.1. A tridiagonal matrix, T_n, is of the form:

\[
T_n = \begin{bmatrix}
a_1 & b_1 & 0 & \cdots & 0 \\
c_1 & a_2 & \ddots & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & \ddots & a_{n-1} & b_{n-1} \\
0 & \cdots & 0 & c_{n-1} & a_n
\end{bmatrix}, \tag{1.1}
\]

where entries below the subdiagonal and above the superdiagonal are zero. If b_i ≠ 0 for i = 1, ..., n−1 and c_i ≠ 0 for i = 1, ..., n−1, then T_n is called a Jacobi matrix. In this paper we will use a more compact notation and only describe the subdiagonal, diagonal, and superdiagonal (where appropriate). For example, T_n can be rewritten as:

\[
T_n = \begin{bmatrix}
 & b_1 & \cdots & b_{n-1} & \\
a_1 & a_2 & \cdots & a_{n-1} & a_n \\
 & c_1 & \cdots & c_{n-1} &
\end{bmatrix}. \tag{1.2}
\]

∗Bishop’s University, Sherbrooke, Quebec, Canada

Note that the study of symmetric tridiagonal matrices is sufficient for our purpose, as any Jacobi matrix with b_i c_i > 0 for all i can be symmetrized through a similarity transformation:

\[
A_n = D_n^{-1} T_n D_n = \begin{bmatrix}
 & \sqrt{b_1 c_1} & \cdots & \sqrt{b_{n-1} c_{n-1}} & \\
a_1 & a_2 & \cdots & a_{n-1} & a_n \\
 & \sqrt{b_1 c_1} & \cdots & \sqrt{b_{n-1} c_{n-1}} &
\end{bmatrix}, \tag{1.3}
\]

where D_n = diag(γ_1, ..., γ_n), with

\[
\gamma_i = \sqrt{\frac{b_i b_{i+1} \cdots b_{n-1}}{c_i c_{i+1} \cdots c_{n-1}}}, \quad i = 1, \ldots, n-1, \qquad \gamma_n = 1.
\]

We refer the reader to [1] for a proof and a more detailed exposition. The added symmetry allows for an easier analysis of the spectrum of A_n. In particular, a cofactor expansion along the last row of A_n − λI_n yields the recurrence relations for the characteristic polynomials P_i(λ) = det(A_i − λI_i):
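As a concrete illustration, the following sketch (with hypothetical Jacobi-matrix entries, and with the normalization γ_n = 1) symmetrizes a small Jacobi matrix via the diagonal similarity above and checks that the spectrum is preserved:

```python
import numpy as np

# A hypothetical 4x4 Jacobi matrix with b_i * c_i > 0 (entries are made up).
a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 1.0, 4.0]   # superdiagonal
c = [0.5, 2.0, 1.0]   # subdiagonal
T = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)

# gamma_i chosen so that D^{-1} T D carries sqrt(b_i c_i) on both off-diagonals.
n = len(a)
gamma = [np.sqrt(np.prod(b[i:]) / np.prod(c[i:])) for i in range(n - 1)] + [1.0]
D = np.diag(gamma)
A = np.linalg.inv(D) @ T @ D

assert np.allclose(A, A.T)  # A is symmetric
assert np.allclose(np.diag(A, 1), np.sqrt(np.array(b) * np.array(c)))
# the similarity transformation preserves the spectrum
print(np.allclose(np.sort(np.linalg.eigvals(T).real), np.linalg.eigvalsh(A)))
```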

\[
\begin{aligned}
P_0(\lambda) &= 1, &&\text{(1.4)}\\
P_1(\lambda) &= a_1 - \lambda, &&\text{(1.5)}\\
P_i(\lambda) &= (a_i - \lambda)P_{i-1}(\lambda) - b_{i-1}c_{i-1}P_{i-2}(\lambda), \quad i = 2, \ldots, n. &&\text{(1.6)}
\end{aligned}
\]
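The recurrence can be checked against a direct determinant computation; the matrix entries below are hypothetical:

```python
import numpy as np

# A hypothetical symmetric tridiagonal A_4 (so c_i = b_i here).
a = np.array([2.0, -1.0, 3.0, 0.5])
b = np.array([1.0, 2.0, 0.5])
A = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

def P(i, lam):
    """P_i(lam) = det(A_i - lam*I) via the three-term recurrence (1.4)-(1.6)."""
    if i == 0:
        return 1.0
    if i == 1:
        return a[0] - lam
    return (a[i - 1] - lam) * P(i - 1, lam) - b[i - 2] ** 2 * P(i - 2, lam)

lam = 0.7
print(np.isclose(P(4, lam), np.linalg.det(A - lam * np.eye(4))))  # -> True
```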

Here {P_i} is an orthogonal family of polynomials with respect to the inner product:

\[
\langle P_n, P_m \rangle := \int_{-\infty}^{+\infty} P_n(x) P_m(x)\, w(x)\, dx, \tag{1.7}
\]

where w(x) is the measure or weight function w(x) = e^{-x^2}. Orthogonality yields the following useful properties (see [9]):

The zeros of P_i are real, 1 ≤ i ≤ n, (1.8)

The zeros of P_i and P_{i+1} interlace, 1 ≤ i ≤ n − 1. (1.9)

In other words, the eigenvalues of A_n are real, and the eigenvalues of A_{i−1} interlace those of A_i for 2 ≤ i ≤ n. An interesting problem in matrix theory is the inverse eigenvalue problem (IEP). Before formally stating the problem for tridiagonal matrices, let us introduce some notation.

Definition 1.2. Given an n×n tridiagonal matrix T_n, the (n−1)×(n−1) principal submatrix, T̂_n, is the matrix formed by removing the last row and column of T_n.

IEP for Tridiagonal Matrices. Given the ordered lists Λ = (λ_i)_{i=1}^{n} and Θ = (θ_i)_{i=1}^{n−1} such that Θ interlaces Λ, i.e., λ_i ≤ θ_i ≤ λ_{i+1} for i = 1, ..., n−1, find the n×n symmetric tridiagonal matrix T_n such that Λ and Θ are the spectra of T_n and T̂_n, respectively.
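A minimal numerical sketch of the interlacing of Λ and Θ, for a randomly generated Jacobi matrix:

```python
import numpy as np

# Random symmetric Jacobi matrix: nonzero off-diagonal entries.
rng = np.random.default_rng(0)
n = 6
a = rng.standard_normal(n)
b = rng.uniform(0.5, 2.0, size=n - 1)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

lam = np.linalg.eigvalsh(T)              # spectrum of T_n (the list Lambda)
theta = np.linalg.eigvalsh(T[:-1, :-1])  # spectrum of T_hat (the list Theta)

# Theta strictly interlaces Lambda: lam_i < theta_i < lam_{i+1}
print(all(lam[i] < theta[i] < lam[i + 1] for i in range(n - 1)))  # -> True
```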

Note that the existence and uniqueness (up to signs) of T_n from spectral data is only guaranteed when Λ and Θ strictly interlace; we refer the reader to [7] for more details. Also, the IEP for tridiagonal matrices is fully solved in the sense that, given the lists Λ and Θ, one can reconstruct T_n algorithmically (see [8], [5, page 473]). In this paper, we are interested in the tridiagonal test matrix W_n with spectrum Λ = {2, 4, ..., 2n} and whose principal submatrix Ŵ_n has spectrum Θ = {3, 5, ..., 2n − 1}. By test matrix we mean a matrix with known eigenvalues and given structure. Such matrices make it possible to test the stability of numerical eigenvalue algorithms. The motivation behind W_n is provided in Section 2. A famous tridiagonal test matrix is the Kac–Sylvester matrix, proposed by Clement [2] as a test matrix.

Definition 1.3. The (n + 1) × (n + 1) Kac–Sylvester matrix, K_n, is:

\[
K_n = \begin{bmatrix}
 & n & n-1 & \cdots & 2 & 1 \\
0 & 0 & \cdots & \cdots & 0 & 0 \\
 & 1 & 2 & \cdots & n-1 & n
\end{bmatrix}. \tag{1.10}
\]

It has the particularly nice eigenvalues σ(K_n) = {2k − n}_{k=0}^{n}. There are several proofs that K_n has the above spectrum (see [3], [6], [10]). The relevance of this matrix will become apparent when we prove our main result.
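A quick numerical check of this spectrum (the helper name kac is ours):

```python
import numpy as np

def kac(n):
    """(n+1) x (n+1) Kac-Sylvester matrix K_n in the notation of Definition 1.3."""
    return np.diag(np.arange(n, 0, -1), 1) + np.diag(np.arange(1, n + 1), -1)

evals = np.sort(np.linalg.eigvals(kac(7)).real)
print(np.round(evals))  # -> [-7. -5. -3. -1.  1.  3.  5.  7.]
```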

2 Motivation

The problem of finding the tridiagonal matrix T_n with σ(T_n) = Λ = {2, 4, ..., 2n} and σ(T̂_n) = Θ = {3, 5, ..., 2n − 1} was posed by one of my research supervisors, Dr. N. B. Willms. It arises from the study of spring-mass systems in free motion, where the eigenvalues correspond to the natural frequencies of the systems. It turns out that many spring-mass systems give rise to tridiagonal matrices (see [4]), where the entries of the corresponding tridiagonal matrix are functions of the spring constants and masses of the systems. More specifically, given n masses {m_i}_{i=1}^{n} in suspension from a ceiling with respective spring constants {k_i}_{i=0}^{n−1}, where the hanging end of the system is free (called a fixed-free system), we wish to model this system in terms of matrices. The solutions of |λM − EKE^{-1}| = 0 are precisely the natural frequencies of the system (see [4, page 45]), where M, K, and E are given by:

\[
M = \operatorname{diag}(m_1, m_2, \ldots, m_{n-1}, m_n), \tag{2.1}
\]

\[
K = \begin{bmatrix}
 & -k_1 & \cdots & -k_{n-1} & \\
k_0 + k_1 & k_1 + k_2 & \cdots & k_{n-2} + k_{n-1} & k_{n-1} \\
 & -k_1 & \cdots & -k_{n-1} &
\end{bmatrix}, \tag{2.2}
\]

1 −1 0 ...... 0  1 1 ...... 1 1  .. ..   .. ..  0 1 −1 . . 0  0 1 1 . . 1      ......   ......  0 0 . . . .  −1 0 0 . . . . E =   and E =   , (2.3) ......  ......  . . . . −1 0  . . . . 1 .     . . .  .  . .. .. 0 1 −1 ...... 0 1 1 0 ...... 0 0 1 0 ...... 0 0 1

Note that E is upper bidiagonal and that E^{-1} is upper triangular with all ones. As such, our problem is to verify the form of the symmetric tridiagonal matrix B = L^{-1}EKE^{-1}L^{-T} (with L = diag(√m_1, √m_2, ..., √m_n)) with eigenvalues Λ and such that σ(B̂) = Θ. As mentioned earlier, there exists an algorithm for the IEP for tridiagonal matrices. Our problem differs from the former in that we wish to find T_n with entries as explicit functions of n.

3 Main Result

We begin with a definition of the matrix of interest which we shall show to be the solution of the IEP.

Definition 3.1. Let W_n(k) be the n × n symmetric tridiagonal matrix with the following entries, as per Definition 1.1:

\[
W_n(k): \quad
\begin{cases}
a_i = k, & i = 1, \ldots, n, \\[4pt]
b_i = \sqrt{\dfrac{i(2n-1-i)}{4}}, & i = 1, \ldots, n-2, \\[4pt]
b_{n-1} = \sqrt{\dfrac{n(n-1)}{2}}.
\end{cases}
\]

For example,

\[
W_6(7) = \begin{bmatrix}
7 & \sqrt{5/2} & 0 & 0 & 0 & 0 \\
\sqrt{5/2} & 7 & \sqrt{9/2} & 0 & 0 & 0 \\
0 & \sqrt{9/2} & 7 & \sqrt{12/2} & 0 & 0 \\
0 & 0 & \sqrt{12/2} & 7 & \sqrt{14/2} & 0 \\
0 & 0 & 0 & \sqrt{14/2} & 7 & \sqrt{30/2} \\
0 & 0 & 0 & 0 & \sqrt{30/2} & 7
\end{bmatrix}.
\]
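Definition 3.1 and the main result below are easy to probe numerically; the sketch below builds W_n(k) from the entries above (the function name W is ours) and confirms the spectrum of W_6(7):

```python
import numpy as np

def W(n, k):
    """Symmetric tridiagonal W_n(k) of Definition 3.1."""
    b = [np.sqrt(i * (2 * n - 1 - i) / 4) for i in range(1, n - 1)]
    b.append(np.sqrt(n * (n - 1) / 2))  # the special last entry b_{n-1}
    return k * np.eye(n) + np.diag(b, 1) + np.diag(b, -1)

print(np.round(np.linalg.eigvalsh(W(6, 7))))  # -> [ 2.  4.  6.  8. 10. 12.]
```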

We are now ready to introduce and prove our main result:

Theorem 3.2. The spectra of W_n(n + 1) and Ŵ_n(n + 1) are Λ = {2, 4, ..., 2n} and Θ = {3, 5, ..., 2n − 1}, respectively.

Proof. Part A - Eigenvalues of W_n(n + 1)

By the Schur decomposition theorem, there exists a unitary Q ∈ M_{n×n}(C) such that Q^{-1}W_n(n + 1)Q = U is upper triangular, with the eigenvalues of W_n(n + 1) on the diagonal of U. Hence, subtracting (n + 1)I_n from W_n(n + 1), where I_n is the n × n identity matrix, shifts the eigenvalues by −(n + 1). Note that W_n(n + 1) − (n + 1)I_n = W_n(0), the n × n symmetric tridiagonal matrix with zero diagonal. Now, choose H = diag(γ_1, ..., γ_n) such that,

 1 2 . . . n − 2 2(n − 1)  ˜ −1 Wn(0) := 2HWn(0)H = 0 0 ...... 0 0 . (3.1) 2n − 2 2n − 3 . . . n + 1 n

That is, given that the (i, i+1) entry of W_n(0) is b_i (as per Definition 1.1), i = 1, ..., n−2, the corresponding superdiagonal entry of HW_n(0)H^{-1} is (γ_i/γ_{i+1}) b_i. We find γ_i, γ_{i+1} such that (γ_i/γ_{i+1}) b_i = i/2 for i = 1, ..., n−2. Note that b_{n−1} introduces an extra factor of 2, so that (γ_{n−1}/γ_n) b_{n−1} is twice as large as the pattern suggests; in other words, (γ_{n−1}/γ_n) b_{n−1} = n − 1. On the other hand, for the subdiagonal, since (γ_i/γ_{i+1}) b_i = i/2, we have γ_{i+1}/γ_i = (2/i) b_i and thus (γ_{i+1}/γ_i) b_i = (2/i) b_i^2. Simplifying,

\[
\frac{2}{i}\, b_i^2 = \frac{2}{i}\left(\frac{2ni - i - i^2}{4}\right) = \frac{2n-1-i}{2}, \quad i = 1, \ldots, n-2.
\]

Finally, knowing that (γ_{n−1}/γ_n) b_{n−1} = n − 1, we can determine the corresponding subdiagonal entry. Specifically, γ_n/γ_{n−1} = (1/(n−1)) b_{n−1}, so that,

\[
\frac{\gamma_n}{\gamma_{n-1}}\, b_{n-1} = \frac{1}{n-1}\, b_{n-1}^2 = \frac{1}{n-1}\left(\frac{n(n-1)}{2}\right) = \frac{n}{2}.
\]

Multiplying by 2 clears the factors of ½, and our matrix is precisely as claimed in (3.1). In other words, the magnitudes of the eigenvalues of W̃_n(0) are twice those of W_n(0). Now, consider J(W̃_n(0))^T J, where J is the exchange matrix defined with the Kronecker delta by (J)_{ij} = δ_{i,n−j+1}. The matrix J(W̃_n(0))^T J is of the form:

\[
X := J(\widetilde{W}_n(0))^T J = \begin{bmatrix}
 & 2(n-1) & n-2 & \cdots & 2 & 1 \\
0 & 0 & \cdots & \cdots & 0 & 0 \\
 & n & n+1 & \cdots & 2n-3 & 2n-2
\end{bmatrix}. \tag{3.2}
\]
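The similarity claims above can be verified numerically by building the integer matrix (3.1) directly and comparing spectra (the variable names are ours):

```python
import numpy as np

n = 7
# W_n(0): zero diagonal with the b_i of Definition 3.1
b = [np.sqrt(i * (2 * n - 1 - i) / 4) for i in range(1, n - 1)]
b.append(np.sqrt(n * (n - 1) / 2))
W0 = np.diag(b, 1) + np.diag(b, -1)

# Integer matrix (3.1): superdiagonal 1, ..., n-2, 2(n-1); subdiagonal 2n-2, ..., n+1, n
sup = list(range(1, n - 1)) + [2 * (n - 1)]
sub = list(range(2 * n - 2, n, -1)) + [n]
Wt = np.diag(sup, 1) + np.diag(sub, -1)

# similarity up to the factor of 2: the spectra agree
print(np.allclose(np.sort(np.linalg.eigvals(Wt).real), 2 * np.linalg.eigvalsh(W0)))  # -> True
```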

In the spirit of [6]'s proof that K_n (Definition 1.3) has the spectrum σ(K_n) = {2k − n}_{k=0}^{n}, let us introduce the n × n matrix P_1 such that:

\[
P_1 = \begin{bmatrix}
1 & 0 & \cdots & \cdots & 0 \\
-1 & 1 & 0 & \cdots & 0 \\
0 & -1 & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & 1 & 0 \\
0 & \cdots & 0 & -1 & 1
\end{bmatrix}
\quad \text{where} \quad
P_1^{-1} = \begin{bmatrix}
1 & 0 & \cdots & \cdots & 0 \\
1 & 1 & 0 & \cdots & 0 \\
1 & 1 & \ddots & \ddots & \vdots \\
\vdots & \vdots & \ddots & 1 & 0 \\
1 & 1 & \cdots & 1 & 1
\end{bmatrix}, \tag{3.3}
\]

so that P_1 is lower bidiagonal and P_1^{-1} is lower triangular with all ones. We claim that,

\[
\mathring{W}_n(0) := P_1 X P_1^{-1} = \begin{bmatrix}
2(n-1) & v \\
0 & L_{n-1} - I_{n-1}
\end{bmatrix}, \tag{3.4}
\]

where v = [2(n−1) 0 ... 0] (with n − 2 zeros) and,

\[
L_{n-1} = \begin{bmatrix}
 & n-2 & n-3 & \cdots & 2 & 1 \\
-(n-1) & 0 & \cdots & \cdots & 0 & 0 \\
 & n & n+1 & \cdots & 2n-4 & 2n-3
\end{bmatrix}. \tag{3.5}
\]

We prove the above claim. The entries of X are (as row vectors):

First row: [0 2(n−1) 0 ... 0],

i-th row: [0 ... 0 (n+i−2) 0 (n−i) 0 ... 0], with n+i−2 in the (i−1)th entry and n−i in the (i+1)th entry, i = 2, ..., n−1,

Last row: [0 ... 0 (2n−2) 0].

The i-th row of P_1X is the (i−1)th row of X subtracted from the i-th row of X; the first row is left unchanged. Therefore, the rows of P_1X are:

First row: [0 2(n−1) 0 ... 0],

i-th row: [0 ... 0 −(n+i−3) (n+i−2) −(n−i+1) (n−i) 0 ... 0], occupying entries (i−2) through (i+1), i = 2, ..., n−1,

Last row: [0 ... 0 −(2n−3) (2n−2) −1].

Now, postmultiplying by P_1^{-1} adds columns together, in the sense that (P_1XP_1^{-1})_{i,j}, j = 1, ..., n (for each row i), is obtained by adding the last n − j + 1 entries of row i of P_1X. In this manner, the first row of P_1XP_1^{-1} is:

[2(n−1) 2(n−1) 0 ... 0].

The second row of P_1X is:

[n −2(n−1) (n−2) 0 ... 0],

so that the second row of P_1XP_1^{-1} is:

[n − 2(n−1) + (n−2)  −2(n−1) + (n−2)  (n−2) 0 ... 0],

or, more simply, [0 −n (n−2) 0 ... 0]. Now, the entries of the i-th row, i = 3, ..., n−1, of P_1XP_1^{-1} are:

−(n+i−3) + (n+i−2) − (n−i+1) + (n−i) = 0, (i−2)th entry,
(n+i−2) − (n−i+1) + (n−i) = n+i−3, (i−1)th entry,
−(n−i+1) + (n−i) = −1, i-th entry,
(n−i) = n−i, (i+1)th entry.

More succinctly, the i-th row of P_1XP_1^{-1} is:

[0 ... 0 (n+i−3) −1 (n−i) 0 ... 0],

with n+i−3 in the (i−1)th entry, −1 in the i-th entry, and n−i in the (i+1)th entry. Finally, the last row, i = n, is [0 ... 0 (2n−3) −1]. From this we get that W̊_n(0) = P_1XP_1^{-1} looks like:

\[
\mathring{W}_n(0) = \begin{bmatrix}
2(n-1) & 2(n-1) & 0 & \cdots & \cdots & 0 \\
0 & -n & n-2 & 0 & \cdots & 0 \\
\vdots & n & -1 & n-3 & \ddots & \vdots \\
\vdots & & n+1 & \ddots & \ddots & 0 \\
\vdots & & & \ddots & -1 & 1 \\
0 & 0 & \cdots & 0 & 2n-3 & -1
\end{bmatrix}, \tag{3.6}
\]

which is precisely as claimed in (3.4). Now, since W̊_n(0) is block upper triangular, we have that σ(W̊_n(0)) = {2(n−1)} ∪ σ(L_{n−1} − I_{n−1}). Let L^+_{n−1} be the matrix obtained by taking the absolute value of all the entries of L_{n−1}, that is,

\[
L^{+}_{n-1} = \begin{bmatrix}
 & n-2 & n-3 & \cdots & 2 & 1 \\
n-1 & 0 & \cdots & \cdots & 0 & 0 \\
 & n & n+1 & \cdots & 2n-4 & 2n-3
\end{bmatrix}. \tag{3.7}
\]

Note that σ(L^+_{n−1}) = −σ(L_{n−1}), that is, their eigenvalues have opposite signs. To see why, consider,

\[
D_{n-1} = \operatorname{diag}(1, -1, \ldots, (-1)^{n}). \tag{3.8}
\]

Clearly D_{n−1}^{-1} = D_{n−1}. Then,

\[
D_{n-1} L_{n-1} D_{n-1}^{-1} = \begin{bmatrix}
 & -(n-2) & -(n-3) & \cdots & -2 & -1 \\
-(n-1) & 0 & \cdots & \cdots & 0 & 0 \\
 & -n & -(n+1) & \cdots & -(2n-4) & -(2n-3)
\end{bmatrix}. \tag{3.9}
\]

So σ(L_{n−1}) = σ(D_{n−1}L_{n−1}D_{n−1}^{-1}). Now, −(D_{n−1}L_{n−1}D_{n−1}^{-1}) = L^+_{n−1}, and multiplying D_{n−1}L_{n−1}D_{n−1}^{-1} by −1 changes the signs of all of its eigenvalues. And so we have that σ(L^+_{n−1}) = −σ(L_{n−1}). Additionally, L^+_{n−1} shares the same characteristic polynomial as the following matrices:

\[
(L^{+}_{n-1})^{T} = \begin{bmatrix}
 & n & n+1 & \cdots & 2n-4 & 2n-3 \\
n-1 & 0 & \cdots & \cdots & 0 & 0 \\
 & n-2 & n-3 & \cdots & 2 & 1
\end{bmatrix}, \tag{3.10}
\]

\[
J(L^{+}_{n-1})^{T} J = \begin{bmatrix}
 & 1 & 2 & \cdots & n-3 & n-2 \\
0 & 0 & \cdots & \cdots & 0 & n-1 \\
 & 2n-3 & 2n-4 & \cdots & n+1 & n
\end{bmatrix}, \tag{3.11}
\]

since transposition and the similarity induced by J both preserve the characteristic polynomial. Hence, consider the (2n−2)×(2n−2) block diagonal matrix:

 +  T J(Ln−1) J 0 B =  +  (3.12) T 0 (Ln−1)

 1 2 . . . n − 2 0 n . . . 2n − 4 2n − 3  =  0 ...... 0 n − 1 n − 1 0 ...... 0  . 2n − 3 2n − 4 . . . n 0 n − 2 ... 2 1 Furthermore, consider the (2n − 2) × (2n − 2) perfect shuffle matrix:   en−1 0 on−1 Sn−1 =  0 Sn−2 0  , (3.13) on−1 0 en−1

where o_{n−1} = (1 − (−1)^{n−1})/2 and e_{n−1} = (1 + (−1)^{n−1})/2. For example,

0 0 0 0 0 1 0 1 0 0 0 0   0 0 0 1 0 0 S3 =   . (3.14) 0 0 1 0 0 0   0 0 0 0 1 0 1 0 0 0 0 0

Note that S_{n−1} has orthonormal columns, so S_{n−1}^T S_{n−1} = I_{2n−2}. However, S_{n−1} is also symmetric, so S_{n−1}^2 = I_{2n−2}, from which we have that S_{n−1}^{-1} = S_{n−1}. Now,
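The recursive shuffle construction and the identity S_{n−1}^2 = I_{2n−2} can be checked as follows (the helper name shuffle, indexed so that S_m is 2m × 2m, is ours):

```python
import numpy as np

def shuffle(m):
    """Recursive definition (3.13): S_m is 2m x 2m (so S_{n-1} is (2n-2) x (2n-2))."""
    if m == 0:
        return np.zeros((0, 0))
    e = (1 + (-1) ** m) // 2
    o = (1 - (-1) ** m) // 2
    S = np.zeros((2 * m, 2 * m))
    S[0, 0], S[0, -1] = e, o
    S[-1, 0], S[-1, -1] = o, e
    S[1:-1, 1:-1] = shuffle(m - 1)
    return S

S3 = shuffle(3)
assert np.allclose(S3, S3.T)            # symmetric permutation matrix
print(np.allclose(S3 @ S3, np.eye(6)))  # -> True (so S_3 is its own inverse)
```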

 0 1 ... 0 0   2n − 3 0        en−1 0 on−1  .  en−1 0 on−1  0 0 .  Sn−1BSn−1 =  0 Sn−2 0   B   0 Sn−2 0  ,  .  o 0 e  . 0  o 0 e n−1 n−1   n−1 n−1  0 2n − 3  0 ... 0 1 0 (3.15)

where B′ is obtained by deleting the first and last rows and columns of B. Premultiplying by S_{n−1} swaps the first and last rows of B, and the subsequent postmultiplication by S_{n−1} swaps the first and last columns. In this fashion we obtain,

\[
S_{n-1}B = \begin{bmatrix}
0 & [\,e_{n-1}\; 0 \cdots 0\; o_{n-1}\,] & 0 \\
(2n-3)\,S_{n-2}e_1 & S_{n-2}B' & (2n-3)\,S_{n-2}e_{2n-4} \\
0 & [\,o_{n-1}\; 0 \cdots 0\; e_{n-1}\,] & 0
\end{bmatrix}. \tag{3.16}
\]

So,

\[
S_{n-1} B S_{n-1} = \begin{bmatrix}
0 & Z S_{n-2} & 0 \\
(2n-3)\,S_{n-2}Z^{T} & S_{n-2}B'S_{n-2} & (2n-3)\,S_{n-2}Q^{T} \\
0 & Q S_{n-2} & 0
\end{bmatrix}, \tag{3.17}
\]

where Z = [e_{n−1} 0 ... 0 o_{n−1}], Q = [o_{n−1} 0 ... 0 e_{n−1}], and the e_i are the standard basis vectors. Consider the topmost middle block of S_{n−1}BS_{n−1}, in other words, [e_{n−1} 0 ... 0 o_{n−1}]S_{n−2}. If n − 1 is even, then Z picks off the first row of S_{n−2}; since n − 2 is then odd, the first row of S_{n−2} is [0 ... 0 1]. On the other hand, if n − 1 is odd, then Z picks off the last row of S_{n−2}; since n − 2 is then even, the last row of S_{n−2} is [0 ... 0 1]. A similar argument applies to the remaining three outer-middle blocks, and given that the two innermost elements of B, n − 1 and n − 1, are already in their proper place, the following form for S_{n−1}BS_{n−1} follows by induction,

 0 ...... 0 1 0   0 ... 0 2 0 2n − 3    ......   . . . . 2n − 4 0  G = S BS =   . (3.18) 2n−2 n−1 n−1  ......   0 2n − 4 . . . .     .  2n − 3 .. 2 0 ... 0  0 1 0 ...... 0

G_{2n−2} is anti-tridiagonal with zero anti-diagonal; it also bears a close resemblance to the Kac–Sylvester matrix previously mentioned in Definition 1.3: specifically, G_{2n−2} = K_{2n−3}^T J. Since G_{2n−2} so closely resembles the Kac–Sylvester matrix, consider the following (2n − 2) × (2n − 2) matrix,

 1 0 ...... 0  1 0 ...... 0  0 1 0 ...... 0  0 1 0 ...... 0      ......   ......  −1 0 . . . .  1 0 . . . . P (n) =   where P −1(n) =   , (3.19) 2  .. ..  2  ......   0 −1 . . 0 0  0 . . . 0 0      . .   .   ...... 0 1 0  ...... 0 1 0 0 ... 0 −1 0 1 on ... 0 1 0 1

−1 and P2(n) only has one off-diagonal band of negative ones. Note that P2 (n) is banded lower- 1−(−1)n−1 triangular with diagonals of alternating zeros or ones and that on = 2 . For example, 1 0 0 0 0 0 0 1 0 0 0 0   −1 1 0 1 0 0 0 P (4) =   . (3.20) 2 0 1 0 1 0 0   1 0 1 0 1 0 0 1 0 1 0 1

Now, rows i = 2, ..., 2n−3 of G_{2n−2} are given by:

[0 ... 0 i 0 (2n−1−i) 0 ... 0],

with i in the (2n−2−i)th entry and 2n−1−i in the (2n−i)th entry.

Premultiplying by P_2^{-1}(n) adds rows i − 2, i − 4, ..., to row i, for i = 3, ..., 2n−2. In other words, for the j-th column of G_{2n−2}, whose nonzero entries are i := 2n−2−j in the i-th position and (2n−3) − i in the (i+2)th position, the corresponding column of P_2^{-1}(n)G_{2n−2} is:

[0 ... 0 i 0 (2n−3) 0 (2n−3) ... (2n−3) or 0]^T,

since the successive partial sums down the column give i, then i + ((2n−3) − i) = 2n−3, repeated in every second entry to the bottom of the column; the final entry is 2n−3 when j is even and 0 when j is odd. Thus P_2^{-1}(n)G_{2n−2} has a checkerboard pattern with entries 2n−3 below the anti-diagonal. For example,

0 0 0 0 1 0 0 0 0 2 0 5   −1 0 0 3 0 5 0 [P (4)]G6 =   . 2 0 4 0 5 0 5   5 0 5 0 5 0 0 5 0 5 0 5

Now, the i-th row of P_2^{-1}(n)G_{2n−2} is:

[0 ... 0 i 0 (2n−3) 0 (2n−3) 0 ...],

with i in the (2n−2−i)th entry and 2n−3 and 0 alternating thereafter, the row ending with 2n−3 exactly when i is even. Now, postmultiplying by P_2(n) subtracts the (i+2)th column from the i-th one, leaving the last two columns unchanged. Since P_2^{-1}(n)G_{2n−2} is checkerboard below the anti-diagonal, every 2n−3 entry not in the last two columns is cancelled. More specifically, the i-th row of P_2^{-1}(n)G_{2n−2}P_2(n) is now:

[0 ... 0 −i 0 (i − (2n−3)) 0 0 ... 0 x y],

with −i in the (2n−4−i)th entry and i − (2n−3) in the (2n−2−i)th entry, for i = 2, ..., 2n−5, and where y = (2n−3)(1 − (−1)^{i+1})/2 and x = (2n−3) − y. Equivalently, the i-th row is:

[0 ... 0 −i 0 −((2n−5) − i + 2) 0 ... 0 x y].

Note that the first row is: [0 ... 0 −1 0 1 0].

Since the last two rows of P_2^{-1}(n)G_{2n−2} are

[(2n−3) 0 (2n−3) 0 ... (2n−3) 0] and [0 (2n−3) 0 (2n−3) ... 0 (2n−3)],

and given that postmultiplication by P_2(n) subtracts the (i+2)th column from the i-th one, leaving the last two unchanged, we get that P_2^{-1}(n)G_{2n−2}P_2(n) has as its last two rows:

[0 0 ... 0 (2n−3) 0] and [0 0 ... 0 0 (2n−3)].

And hence we get that,

\[
P_2^{-1}(n)\, G_{2n-2}\, P_2(n) = \begin{bmatrix}
-G_{2n-4} & V_{(2n-4)\times 2} \\
0_{2\times(2n-4)} & F_{2\times 2}
\end{bmatrix}, \tag{3.21}
\]

where

\[
F = \begin{bmatrix} 2n-3 & 0 \\ 0 & 2n-3 \end{bmatrix},
\]

since G_{2n−2} is always of even size. We then have that the eigenvalues of (3.21) are {2n−3, 2n−3} ∪ σ(−G_{2n−4}). Iterating, we get that,

\[
\sigma(G_{2n-2}) = \{(2n-3), (2n-3), -(2n-5), -(2n-5), \ldots, (-1)^{n}, (-1)^{n}\}. \tag{3.22}
\]

Therefore B, constructed from J(L^+_{n−1})^T J and (L^+_{n−1})^T, has each eigenvalue with even multiplicity. Also, σ(G_{2n−2}) = σ(B) by a similarity transformation; thus,

\[
\sigma(J(L^{+}_{n-1})^{T} J) = \sigma(L^{+}_{n-1}) = \{(2n-3), -(2n-5), (2n-7), \ldots, (-1)^{n}\}. \tag{3.23}
\]
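Both (3.22) and (3.23) are easy to confirm numerically, using the identification G_{2n−2} = K_{2n−3}^T J (the helper name G is ours):

```python
import numpy as np

def G(n):
    """G_{2n-2} = K_{2n-3}^T J: transpose of the Kac-Sylvester matrix, columns reversed."""
    m = 2 * n - 3   # Kac parameter, so K_m is (2n-2) x (2n-2)
    K = np.diag(np.arange(m, 0, -1), 1) + np.diag(np.arange(1, m + 1), -1)
    return K.T[:, ::-1].astype(float)

# n = 3: (3.22) predicts {3, 3, -1, -1}
print(np.sort(np.round(np.linalg.eigvals(G(3)).real)))  # -> [-1. -1.  3.  3.]
```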

Since σ(L^+_{n−1}) = −σ(L_{n−1}),

\[
\sigma(L_{n-1}) = \{-(2n-3), (2n-5), -(2n-7), \ldots, (-1)^{n-1}\}, \tag{3.24}
\]

so,

\[
\sigma(L_{n-1} - I_{n-1}) = \{-(2n-2), (2n-6), -(2n-6), \ldots, r\}, \tag{3.25}
\]

with r = (−1)^{n−1} − 1. Therefore, we have,

\[
\sigma(\mathring{W}_n(0)) = \{2(n-1)\} \cup \{-(2n-2), (2n-6), -(2n-6), \ldots, r\}. \tag{3.26}
\]

Since W̊_n(0) and W̃_n(0) are similar (see (3.2) and (3.4)), we get,

\[
\sigma\!\left(\tfrac{1}{2}\widetilde{W}_n(0)\right) = \{(n-1)\} \cup \{-(n-1), (n-3), -(n-3), \ldots, \tfrac{r}{2}\}. \tag{3.27}
\]

It is worth tidying up (3.27). Doing so, we obtain,

\[
\sigma\!\left(\tfrac{1}{2}\widetilde{W}_n(0)\right) = \sigma(W_n(0)) = \{2k - (n+1)\}_{k=1}^{n}, \tag{3.28}
\]

whereby,

\[
\sigma(W_n(n+1)) = \{2k\}_{k=1}^{n} = \{2, 4, \ldots, 2n\}. \tag{3.29}
\]

Part B - Eigenvalues of Ŵ_n(n + 1)

We wish to show that σ(Ŵ_n(n + 1)) = {3, 5, ..., 2n − 1}. Note that the entries of Ŵ_n(n + 1) lack the special term b_{n−1}, and so we can consider the principal submatrix of (3.1). In other words,

\[
M := 2\hat{H}\hat{W}_n(0)\hat{H}^{-1} = \begin{bmatrix}
 & 1 & 2 & \cdots & n-3 & n-2 \\
0 & 0 & \cdots & \cdots & 0 & 0 \\
 & 2n-2 & 2n-3 & \cdots & n+2 & n+1
\end{bmatrix}. \tag{3.30}
\]

Note that M is of size (n−1)×(n−1). Now, recall the matrix P_1 of (3.3), now of size (n−1)×(n−1), and consider the product P_1^{-1}MP_1. Premultiplying by P_1^{-1} adds rows k, k = 1, ..., i−1, to the i-th row, leaving the first row unchanged. We have that the i-th row, i = 2, ..., n−2, of M is:

[0 ... 0 (2n−i) 0 i 0 ... 0],

with 2n−i in the (i−1)th entry and i in the (i+1)th entry. The last row is: [0 ... 0 (n+1) 0]. It is easier to consider the i-th column, i = 2, ..., n−2, whose entries are:

[0 ... 0 (i−1) 0 (2n−i−1) 0 ... 0]^T,

with i−1 in the (i−1)th entry and 2n−i−1 in the (i+1)th entry, so that the sum of the entries of the column is (i−1) + (2n−i−1) = 2n−2. Since premultiplying by P_1^{-1} adds rows k, k = 1, ..., i−1, to the i-th row, anything below the main diagonal becomes 2n−2, while anything on the diagonal has the entry directly above it added to it. The i-th row of P_1^{-1}M is now:

[(2n−2) ... (2n−2) ((2n−i)+(i−2)) (i−1) i 0 ... 0],

or,

[(2n−2) ... (2n−2) (2n−2) (i−1) i 0 ... 0], (3.31)

with 2n−2 in the first i−1 entries, i−1 in the i-th entry, and i in the (i+1)th entry.

11 th th Now, postmultiplying by P1 subtracts the (i + 1) column from the i one. In this manner, (3.31) becomes:

[0 ... 0 (2n − 2) − (i − 1) (i − 1) − i i 0 ... 0]. |{z} | {z } | {z } th (i−1)th entry ith entry (i+1) entry

More simply put: [0 ... 0 (2n − i − 1) − 1 i 0 ... 0], −1 and where i = 2, . . . , n − 2. Note that the last row of P1 M is: [2n − 2 ... 2n − 2 n − 2], | {z } n−2 terms

−1 so that the last row of P1 MP1 is: [0 ... 0 (2n − 2 − (n − 2)) (n − 2)], | {z } (n−2)th entry or, [0 ... 0 n (n − 2)]. |{z} (n−2)th entry −1 We finally get that P1 MP1 is:  1 2 . . . n − 3 n − 2  1 + P −1MP = −1 −1 ...... −1 n − 2 = J(L )T J − I . 1 1 2   n−1 n−1 2n − 3 2n − 4 . . . n + 1 n (3.32)

But we know from (3.24) that,

\[
\sigma(L_{n-1}) = \{-(2n-3), (2n-5), \ldots, (-1)^{n-1}\}. \tag{3.33}
\]

So, by (3.9) and (3.10), we have:

\[
\sigma(J(L^{+}_{n-1})^{T}J) = \{(2n-3), -(2n-5), \ldots, (-1)^{n}\}. \tag{3.34}
\]

And therefore,

\[
\sigma(J(L^{+}_{n-1})^{T}J - I_{n-1}) = \{(2n-4), -(2n-4), \ldots, 0 \text{ or } -2\}, \tag{3.35}
\]

with −2 occurring when n is odd. From this we have,

\[
\sigma(\tfrac{1}{2}M) = \{(n-2), -(n-2), (n-4), -(n-4), \ldots, 0 \text{ or } -1\}. \tag{3.36}
\]

More simply, since Ŵ_n(0) = ½Ĥ^{-1}MĤ,

\[
\sigma(\hat{W}_n(0)) = \{2k - n\}_{k=1}^{n-1}. \tag{3.37}
\]

Finally, we indeed have that,

\[
\sigma(\hat{W}_n(n+1)) = \{2k + 1\}_{k=1}^{n-1} = \{3, 5, \ldots, 2n-1\}. \tag{3.38}
\]
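Part B can likewise be confirmed numerically on the principal submatrix of W_n(n + 1) (the function name W is ours):

```python
import numpy as np

def W(n, k):
    """Symmetric tridiagonal W_n(k) of Definition 3.1."""
    b = [np.sqrt(i * (2 * n - 1 - i) / 4) for i in range(1, n - 1)]
    b.append(np.sqrt(n * (n - 1) / 2))
    return k * np.eye(n) + np.diag(b, 1) + np.diag(b, -1)

n = 8
Wh = W(n, n + 1)[:-1, :-1]   # principal submatrix of W_n(n+1)
print(np.round(np.linalg.eigvalsh(Wh)))  # -> [ 3.  5.  7.  9. 11. 13. 15.]
```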

As a final result, it is worth noting that W_n(n − 1) has a particularly nice Cholesky decomposition.

Theorem 3.3. W_n(n − 1) = LL^T, where

\[
\begin{cases}
L_{i,i} = \sqrt{\dfrac{2}{i}}\, b_i, & i = 1, \ldots, n-2, \\[6pt]
L_{i+1,i} = \sqrt{\dfrac{i}{2}}, & i = 1, \ldots, n-2, \\[6pt]
L_{n-1,n-1} = \sqrt{\dfrac{1}{n-1}}\, b_{n-1}, \\[6pt]
L_{n,n-1} = \sqrt{n-1}, \\[2pt]
L_{n,n} = 0.
\end{cases}
\]

Proof. We must show that,

\[
LL^{T} = \begin{bmatrix}
 & l_1\alpha_1 & l_2\alpha_2 & \cdots & l_{n-2}\alpha_{n-2} & l_{n-1}\alpha_{n-1} \\
l_1^2 & \alpha_1^2 + l_2^2 & \cdots & \cdots & \alpha_{n-2}^2 + l_{n-1}^2 & \alpha_{n-1}^2 + l_n^2 \\
 & l_1\alpha_1 & l_2\alpha_2 & \cdots & l_{n-2}\alpha_{n-2} & l_{n-1}\alpha_{n-1}
\end{bmatrix} = W_n(n-1), \tag{3.39}
\]

where,

 0 0 ... 0 0  L = l1 l2 ...... ln−1 ln  . (3.40) α1 α2 . . . αn−2 αn−1

In other words, we must verify the following equalities,

\[
\begin{cases}
l_1^2 = n-1, \\[4pt]
l_i\alpha_i = \sqrt{\dfrac{2ni - i - i^2}{4}}, & i = 1, \ldots, n-2, \\[6pt]
\alpha_i^2 + l_{i+1}^2 = n-1, & i = 1, \ldots, n-1, \\[6pt]
l_{n-1}\alpha_{n-1} = \sqrt{\dfrac{n(n-1)}{2}}.
\end{cases} \tag{3.41}
\]

Now, for the first case,

\[
l_1^2 = 2\, b_1^2 = 2 \cdot \frac{2n-2}{4} = n-1.
\]

For the second case,

\[
l_i \alpha_i = \sqrt{\frac{2}{i}} \cdot b_i \cdot \sqrt{\frac{i}{2}} = b_i = \sqrt{\frac{2ni - i - i^2}{4}}.
\]

And for the third case,

\[
\alpha_i^2 + l_{i+1}^2 = \frac{i}{2} + \frac{2}{i+1}\, b_{i+1}^2
= \frac{i(i+1) + 4\, b_{i+1}^2}{2(i+1)}
= \frac{i(i+1) - (i+1)(i - 2n + 2)}{2(i+1)}
= n-1.
\]

Finally,

\[
\alpha_{n-1}\, l_{n-1} = \sqrt{\frac{1}{n-1}} \cdot b_{n-1} \cdot \sqrt{n-1} = b_{n-1} = \sqrt{\frac{n(n-1)}{2}}.
\]

From this we have that LL^T = W_n(n − 1).
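The decomposition of Theorem 3.3 can be checked numerically (helper names are ours; note that L_{n,n} = 0, so W_n(n−1) is singular and only positive semidefinite):

```python
import numpy as np

def W(n, k):
    """Symmetric tridiagonal W_n(k) of Definition 3.1."""
    b = [np.sqrt(i * (2 * n - 1 - i) / 4) for i in range(1, n - 1)]
    b.append(np.sqrt(n * (n - 1) / 2))
    return k * np.eye(n) + np.diag(b, 1) + np.diag(b, -1)

def cholesky_factor(n):
    """Lower bidiagonal L of Theorem 3.3 with W_n(n-1) = L L^T."""
    b = [np.sqrt(i * (2 * n - 1 - i) / 4) for i in range(1, n - 1)]
    b.append(np.sqrt(n * (n - 1) / 2))
    diag = [np.sqrt(2 / i) * b[i - 1] for i in range(1, n - 1)]
    diag += [np.sqrt(1 / (n - 1)) * b[n - 2], 0.0]   # L_{n-1,n-1} and L_{n,n} = 0
    sub = [np.sqrt(i / 2) for i in range(1, n - 1)] + [np.sqrt(n - 1)]
    return np.diag(diag) + np.diag(sub, -1)

n = 6
L = cholesky_factor(n)
print(np.allclose(L @ L.T, W(n, n - 1)))  # -> True
```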

4 Acknowledgements

I would like to thank the Institut des Sciences Mathématiques (ISM) for giving me the opportunity and the funds to pursue my undergraduate summer research project. I would also like to thank my research supervisors, Dr. N. B. Willms and Dr. T. Jones, for their continued support and encouragement. Additionally, I would like to thank Dr. F. Huard for his efforts in contacting me about this opportunity and for his valuable input.

References

[1] Q. Al-Hassan. An inverse eigenvalue problem for general tridiagonal matrices. International Journal of Contemporary Mathematical Sciences, 4:625–634, 2009.

[2] Paul A. Clement. A class of triple-diagonal matrices for test purposes. SIAM Review, 1:50–52, 1959.

[3] Alan Edelman and Eric Kostlan. The road from Kac's matrix to Kac's random polynomials. In Proceedings of the 1994 SIAM Applied Linear Algebra Conference, pages 503–507, Philadelphia, 1994.

[4] G.M.L. Gladwell. Inverse Problems in Vibration. Nijhoff, Dordrecht, 1986.

[5] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, fourth edition, 2013.

[6] T. Muir. A Treatise on the Theory of Determinants. Longmans, Green and Co., 1933. Revised and enlarged by W. H. Metzler, Dover, New York, 1960.

[7] H. Pickmann, R.L. Soto, J. Egana, and M. Salas. An inverse eigenvalue problem for symmetric tridiagonal matrices. Computers and Mathematics with Applications, 54:699–708, 2007.

[8] G. Schmeisser. A real symmetric tridiagonal matrix with a given characteristic polynomial. Linear Algebra and its Applications, 193:11–18, 1993.

[9] G. Szegő. Orthogonal Polynomials. American Mathematical Society Colloquium Publications, XXIII, 1939.

[10] O. Taussky and J. Todd. Another look at a matrix of Mark Kac. Linear Algebra and its Applications, 150:341–360, 1991.
