Linear Algebra and its Applications 316 (2000) 171–182
www.elsevier.com/locate/laa

A fast eigenvalue algorithm for Hankel matrices

Franklin T. Luk (Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180, USA; corresponding author, e-mail: [email protected]) and Sanzheng Qiao (Department of Computing and Software, McMaster University, Hamilton, Ont., Canada L8S 4K1)

Received 12 March 1999; accepted 18 February 2000
Dedicated to Robert J. Plemmons on the occasion of his 60th birthday
Submitted by J. Nagy

Abstract: We present an algorithm that can find all the eigenvalues of an n × n complex Hankel matrix in O(n^2 log n) operations. Our scheme consists of an O(n^2 log n) Lanczos-type tridiagonalization procedure and an O(n) QR-type diagonalization method. © 2000 Elsevier Science Inc. All rights reserved.

Keywords: Hankel matrix; Toeplitz matrix; Circulant matrix; Fast Fourier transform; Lanczos tridiagonalization; Eigenvalue decomposition; Complex-symmetric matrix; Complex-orthogonal transformations

This work was partly supported by the Natural Sciences and Engineering Research Council of Canada under grant OGP0046301.

0024-3795/00/$ – see front matter © 2000 Elsevier Science Inc. All rights reserved. PII: S0024-3795(00)00084-7

1. Introduction

The eigenvalue decomposition of a structured matrix has important applications in signal processing. Commonly occurring structures include an n × n Hankel matrix

H = \begin{pmatrix}
h_1 & h_2 & \cdots & h_{n-1} & h_n \\
h_2 & h_3 & \cdots & h_n & h_{n+1} \\
\vdots & \vdots & & \vdots & \vdots \\
h_{n-1} & h_n & \cdots & h_{2n-3} & h_{2n-2} \\
h_n & h_{n+1} & \cdots & h_{2n-2} & h_{2n-1}
\end{pmatrix}   (1)

or an n × n Toeplitz matrix

T = \begin{pmatrix}
t_n & t_{n-1} & \cdots & t_2 & t_1 \\
t_{n+1} & t_n & \cdots & t_3 & t_2 \\
\vdots & \vdots & & \vdots & \vdots \\
t_{2n-2} & t_{2n-3} & \cdots & t_n & t_{n-1} \\
t_{2n-1} & t_{2n-2} & \cdots & t_{n+1} & t_n
\end{pmatrix}.   (2)

There is an extensive literature on inverting such matrices or solving such linear systems. However, efficient eigenvalue algorithms for structured matrices are still under development.
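To make the indexing in Eqs. (1) and (2) concrete, the following sketch (in Python with 0-based indices; the function names are ours, not from the paper) builds H(h) and T(t) from a (2n − 1)-element vector:

```python
# Build the n-by-n Hankel matrix H(h) of Eq. (1): H[j][k] = h[j + k]
# (0-based), so every antidiagonal is constant.
def hankel(h):
    m = len(h)
    assert m % 2 == 1, "h must have odd length 2n - 1"
    n = (m + 1) // 2
    return [[h[j + k] for k in range(n)] for j in range(n)]

# Build the n-by-n Toeplitz matrix T(t) of Eq. (2): T[j][k] = t[n - 1 + j - k]
# (0-based), so every diagonal is constant and the (1, n) entry is t_1,
# matching the layout in the text.
def toeplitz(t):
    m = len(t)
    assert m % 2 == 1, "t must have odd length 2n - 1"
    n = (m + 1) // 2
    return [[t[n - 1 + j - k] for k in range(n)] for j in range(n)]
```

With this indexing, reversing the columns of H(h) yields T(h), which is exactly the permutation identity (9) used later in Section 4.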
Cybenko and Van Loan [3] proposed an algorithm for computing the minimum eigenvalue of a symmetric positive definite Toeplitz matrix. Their algorithm is based on the Levinson–Durbin algorithm and Newton's method; it requires up to O(n^2) floating-point operations per Newton iteration and, heuristically, O(log n) iterations. Building on their work [3], Trench [8] presented an algorithm for Hermitian Toeplitz matrices; his algorithm requires O(n^2) operations per eigenvalue–eigenvector pair.

In this paper, we study the eigenvalue problem for a Hankel matrix. Taking advantage of two properties, namely that a complex Hankel matrix is symmetric and that a permuted Hankel matrix can be embedded in a circulant matrix, we develop an O(n^2 log n) algorithm that can find all the eigenvalues of an n × n Hankel matrix. We should point out that our new method is a theoretical contribution; considerable work is required to develop practical software. An error analysis of this algorithm can be found in [6].

Our paper is organized as follows. How to exploit complex symmetry is presented in Section 2, and how to construct complex-orthogonal transformations in Section 3. An O(n log n) scheme for multiplying a Hankel matrix by a vector is described in Section 4, and an O(n^2 log n) Lanczos tridiagonalization process in Section 5. A QR procedure to diagonalize a complex-symmetric tridiagonal matrix is given in Section 6, followed by the overall computational procedure and two numerical examples in Section 7.

2. Complex symmetry

Our idea is to take advantage of the symmetry of a Hankel matrix. In general, an eigenvalue decomposition of H (assuming that it is nondefective) is given by

H = X D X^{-1},   (3)

where D is diagonal and X is nonsingular. Note that the following Hankel matrix is defective:

H = \begin{pmatrix} 2 & i \\ i & 0 \end{pmatrix}.

We will pick the matrix X to be complex-orthogonal; that is,

X X^T = I.   (4)

So, H = X D X^T.
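To see why the 2 × 2 example above is defective, one can check directly (a small sketch of ours, not from the paper) that its characteristic polynomial has a double root, while H itself is not a scalar matrix, so it cannot have two independent eigenvectors:

```python
# The 2x2 complex Hankel matrix H = [[2, i], [i, 0]] is symmetric (c = b),
# so its characteristic polynomial is lam^2 - tr*lam + det.
a, b, d = 2, 1j, 0
tr = a + d                  # trace of H
det = a * d - b * b         # determinant of H: 0 - (i*i) = 1
disc = tr * tr - 4 * det    # discriminant of the characteristic polynomial
lam = tr / 2                # with disc == 0, both eigenvalues equal tr/2
```

Here disc = 0, so the eigenvalue 1 is repeated; yet H − I has rank one, leaving only a single eigenvector, so no decomposition (3) exists for this H.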
We apply a special Lanczos tridiagonalization to the Hankel matrix (assuming that the Lanczos process does not prematurely terminate):

H = Q J Q^T,   (5)

where Q is complex-orthogonal and J is complex-symmetric tridiagonal. Then we diagonalize J:

J = W D W^T,

where W is complex-orthogonal and D is diagonal. Thus, we get (3) with X = QW. The dominant cost of the Lanczos method is matrix–vector multiplication, which in general takes O(n^2) operations. We propose a fast O(n log n) Hankel matrix–vector multiplication algorithm; thus we can tridiagonalize a Hankel matrix in O(n^2 log n) operations. The resulting tridiagonal matrix is complex-symmetric. In order to maintain its symmetric and tridiagonal structure, we use complex-orthogonal transformations in the QR iteration.

3. Complex-orthogonal transformations

A basic operation in solving eigenvalue problems is the introduction of zeros into 2 × 1 vectors using 2 × 2 transformations. From the definition in (4), we derive the general form of a 2 × 2 complex-orthogonal matrix as

G = \begin{pmatrix} c & s \\ -s & c \end{pmatrix}  or  G = \begin{pmatrix} c & s \\ s & -c \end{pmatrix},

where c^2 + s^2 = 1. Here, we choose the nonsymmetric version

G = \begin{pmatrix} c & s \\ -s & c \end{pmatrix}.   (6)

In the real case, the transformation G of (6) reduces to a Givens rotation. Given a complex two-element vector

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},   (7)

where x_1^2 + x_2^2 ≠ 0, the following algorithm computes the nonsymmetric transformation G of (6) so that

G x = \begin{pmatrix} \sqrt{x_1^2 + x_2^2} \\ 0 \end{pmatrix}.   (8)

For more details, see [5,10].

Algorithm 1 (Complex-orthogonal transformation). Given a complex vector x of (7), this algorithm computes the parameters c and s for the complex-orthogonal transformation G of (6), so that (8) holds.

if |x_1| > |x_2|
    t = x_2/x_1;  c = 1/sqrt(1 + t^2);  s = t · c;
else
    τ = x_1/x_2;  s = 1/sqrt(1 + τ^2);  c = τ · s;
end if

This algorithm will be used in the complex-orthogonal diagonalization in Section 6.
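Algorithm 1 transcribes directly into code. The sketch below (function name ours) follows the branch structure above; note that for complex inputs the square root is a complex square root, so the leading entry of Gx is some square root of x_1^2 + x_2^2, and c^2 + s^2 = 1 holds exactly in exact arithmetic:

```python
import cmath

# Algorithm 1: given complex x1, x2 with x1^2 + x2^2 != 0, compute c, s of
# the complex-orthogonal matrix G = [[c, s], [-s, c]] of Eq. (6) so that
# the second component of G x vanishes, as in Eq. (8).
def complex_orthogonal(x1, x2):
    if abs(x1) > abs(x2):
        t = x2 / x1
        c = 1 / cmath.sqrt(1 + t * t)
        s = t * c
    else:
        tau = x1 / x2
        s = 1 / cmath.sqrt(1 + tau * tau)
        c = tau * s
    return c, s
```

Dividing by the larger-magnitude component first, as in Algorithm 1, keeps |t| (or |τ|) at most roughly 1 and avoids needless overflow, the same safeguard used for real Givens rotations.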
4. Fast Hankel matrix–vector product

In this section, we describe an O(n log n) algorithm for multiplying an n × n Hankel matrix by an n-element vector. We begin with some additional notation. Let

h ≡ (h_1, h_2, h_3, ..., h_{2n-1})^T  and  t ≡ (t_1, t_2, t_3, ..., t_{2n-1})^T

denote the two (2n − 1)-element vectors specifying the n × n Hankel matrix H(h) and the n × n Toeplitz matrix T(t) of Eqs. (1) and (2), respectively.

First, we permute the Hankel matrix into a Toeplitz matrix. Let P represent the n × n permutation matrix that reverses all columns of H in postmultiplication:

P = \begin{pmatrix}
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 1 & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 1 & \cdots & 0 & 0 \\
1 & 0 & \cdots & 0 & 0
\end{pmatrix};

that is,

H(h) P = T(h).   (9)

The simplicity of Eq. (9) explains why we start with H_{11} (respectively T_{1n}) when we define the vector h (respectively t).

Next, we embed the Toeplitz matrix T(h) in a larger circulant matrix. Consider a (2n − 1) × (2n − 1) circulant matrix

C = \begin{pmatrix}
c_1 & c_{2n-1} & c_{2n-2} & \cdots & c_3 & c_2 \\
c_2 & c_1 & c_{2n-1} & \cdots & c_4 & c_3 \\
c_3 & c_2 & c_1 & \cdots & c_5 & c_4 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
c_{2n-2} & c_{2n-3} & c_{2n-4} & \cdots & c_1 & c_{2n-1} \\
c_{2n-1} & c_{2n-2} & c_{2n-3} & \cdots & c_2 & c_1
\end{pmatrix} ≡ C(c),

where

c ≡ (c_1, c_2, c_3, ..., c_{2n-1})^T.

Note that c represents the first column of C. Consider a special choice of this vector:

ĉ = (h_n, h_{n+1}, h_{n+2}, ..., h_{2n-1}, h_1, h_2, ..., h_{n-1})^T.   (10)

Then

C(ĉ) = \begin{pmatrix}
h_n & h_{n-1} & h_{n-2} & \cdots & h_1 & h_{2n-1} & h_{2n-2} & \cdots & h_{n+1} \\
h_{n+1} & h_n & h_{n-1} & \cdots & h_2 & h_1 & h_{2n-1} & \cdots & h_{n+2} \\
h_{n+2} & h_{n+1} & h_n & \cdots & h_3 & h_2 & h_1 & \cdots & h_{n+3} \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
h_{2n-1} & h_{2n-2} & h_{2n-3} & \cdots & h_n & h_{n-1} & h_{n-2} & \cdots & h_1 \\
h_1 & h_{2n-1} & h_{2n-2} & \cdots & h_{n+1} & h_n & h_{n-1} & \cdots & h_2 \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
h_{n-1} & h_{n-2} & h_{n-3} & \cdots & h_{2n-1} & h_{2n-2} & h_{2n-3} & \cdots & h_n
\end{pmatrix},

where the leading n × n principal submatrix is T(h). This technique of embedding a Toeplitz matrix in a larger circulant matrix to achieve fast computation is widely used in preconditioning methods [1,7].

Given an n-element vector

w = (w_1, w_2, w_3, ..., w_n)^T,   (11)

we want to compute the matrix–vector product

p = H w.   (12)

We see that p = H(h)w = T(h)(Pw).
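The embedding above can be verified mechanically: with 0-based indexing, a circulant with first column c satisfies C[j][k] = c[(j − k) mod (2n − 1)], and the claim is that for ĉ of Eq. (10) the top-left n × n block reproduces T(h). A small check of ours (function names hypothetical):

```python
# Check that the leading n-by-n principal submatrix of the circulant C(c_hat)
# built from Eq. (10) equals the Toeplitz matrix T(h) of Eq. (2).
def circulant(c):
    m = len(c)
    return [[c[(j - k) % m] for k in range(m)] for j in range(m)]

def embed_check(h):
    n = (len(h) + 1) // 2
    c_hat = h[n - 1:] + h[:n - 1]          # Eq. (10): (h_n..h_{2n-1}, h_1..h_{n-1})
    C = circulant(c_hat)
    # T(h) with 0-based indexing: T[j][k] = h[n - 1 + j - k], as in Eq. (2)
    T = [[h[n - 1 + j - k] for k in range(n)] for j in range(n)]
    return all(C[j][k] == T[j][k] for j in range(n) for k in range(n))
```

The check passes for any odd-length h, which is exactly why a Toeplitz product can be extracted from a circulant product of modestly larger size.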
Let ŵ denote the special (2n − 1)-element vector

ŵ = (w_n, w_{n-1}, ..., w_1, 0, ..., 0)^T,   (13)

which is obtained from the n-vector Pw by appending n − 1 zeros. Then p is given by the first n elements of the product y, defined by

y ≡ C(ĉ) ŵ.

This circulant matrix–vector multiplication can be computed efficiently via the Fast Fourier Transform (FFT) [9]; namely,

C(ĉ) ŵ = ifft(fft(ĉ) .∗ fft(ŵ)),

where fft(v) denotes a one-dimensional FFT of a vector v, ifft(v) a one-dimensional inverse FFT of v, and ".∗" a componentwise multiplication of two vectors.

Algorithm 2 (Fast Hankel matrix–vector product). Given a vector w in (11) and a Hankel matrix H in (1), this algorithm computes the product vector p of (12) by using the FFT.
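The steps of Algorithm 2 can be sketched as follows. This is our own illustration, not the paper's code: to stay self-contained it uses a naive O(m^2) DFT in place of a library FFT, so it demonstrates correctness of the embedding, while a genuine FFT routine would deliver the promised O(n log n) cost.

```python
import cmath

# Naive discrete Fourier transform; a library FFT would replace both of these
# in any serious implementation.
def dft(v, sign=-1):
    m = len(v)
    return [sum(v[k] * cmath.exp(sign * 2j * cmath.pi * j * k / m)
                for k in range(m)) for j in range(m)]

def idft(v):
    m = len(v)
    return [x / m for x in dft(v, sign=+1)]

# Algorithm 2 sketch: p = H(h) w via the circulant embedding of Section 4.
def fast_hankel_times(h, w):
    n = len(w)
    c_hat = h[n - 1:] + h[:n - 1]      # Eq. (10): first column of C(c_hat)
    w_hat = w[::-1] + [0] * (n - 1)    # Eq. (13): Pw padded with n - 1 zeros
    # Circular convolution theorem: C(c_hat) w_hat = idft(dft(c_hat) .* dft(w_hat))
    y = idft([a * b for a, b in zip(dft(c_hat), dft(w_hat))])
    return y[:n]                       # p = first n elements of y
```

For example, with h = (1, 2, 3, 4, 5) and w = (1, 2, 3), the Hankel matrix of Eq. (1) has rows (1, 2, 3), (2, 3, 4), (3, 4, 5), and the routine returns (14, 20, 26) up to roundoff.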
