18th European Signal Processing Conference (EUSIPCO-2010), Aalborg, Denmark, August 23-27, 2010

AN ALGORITHM FOR POLYNOMIAL MATRIX SVD BASED ON GENERALISED KOGBETLIANTZ TRANSFORMATIONS

John G. McWhirter
School of Engineering, Cardiff University
Queen's Buildings, CF24 3AA, Cardiff, Wales
phone: +44 (0)2920870627, fax: +44 (0)2920874716, email: [email protected]

ABSTRACT

An algorithm is presented for computing the singular value decomposition (SVD) of a polynomial matrix. It takes the form of a sequential best rotation (SBR) algorithm and constitutes a generalisation of the Kogbetliantz technique for computing the SVD of conventional scalar matrices. It avoids "squaring" the matrix to be factorised, uses only unitary and paraunitary operations, and therefore exhibits a high degree of numerical stability.

1. INTRODUCTION

Polynomial matrices have been used for many years in the area of control [1]. They play an important role in the realisation of multi-variable transfer functions associated with multiple-input multiple-output (MIMO) systems. Over the last few years they have become more widely used in the context of digital signal processing (DSP) and communications [2]. Typical areas of application include broadband adaptive sensor array processing [3, 4], MIMO communication channels [5–7], and digital filter banks for subband coding [8] or data compression [9].

Just as orthogonal or unitary matrix decomposition techniques such as the QR decomposition (QRD), eigenvalue decomposition (EVD), and singular value decomposition (SVD) [10] are important for narrowband adaptive sensor arrays [11], corresponding paraunitary polynomial matrix decompositions are proving beneficial for broadband adaptive arrays [12–14] and also for filterbank design [15, 16]. In a previous paper [17], we described a generalisation of the EVD for conventional Hermitian matrices to para-Hermitian polynomial matrices. This technique will be referred to as PEVD, while the underlying algorithm is known as the 2nd order sequential best rotation (SBR2) algorithm.

In order to minimise the number of iterative steps required, and so prevent unnecessary growth in the order of the polynomial matrix being diagonalised, the philosophy adopted for the algorithm was one of sequential best rotation (SBR). In the context of conventional Hermitian matrices, this corresponds to the classical Jacobi algorithm [18]. The SBR2 algorithm is, in effect, a generalisation of the classical Jacobi algorithm to the third (time) dimension associated with polynomial matrices. A similar approach was subsequently adopted for polynomial matrix QR decomposition (PQRD) [19].

For similar reasons, the PSVD algorithm outlined here is also based on the SBR principle. It can be viewed as the generalisation to polynomial matrices of an SBR algorithm for computing the SVD of conventional matrices. Unlike the case of Hermitian matrix EVD, no such algorithm seems to exist already in the literature. This is not surprising since, for conventional matrices, basing an algorithm entirely on the SBR philosophy would not be computationally efficient. Given a matrix $X \in \mathbb{C}^{m \times n}$, where we assume that $m > n$, it is more efficient to begin the SVD by performing a QR decomposition of the matrix, since this only requires a fixed number of carefully structured steps. In this case, a unitary matrix $Q \in \mathbb{C}^{m \times m}$ is computed such that

$$QX = \begin{bmatrix} R \\ 0 \end{bmatrix} \qquad (1)$$

where $R \in \mathbb{C}^{n \times n}$ is an upper triangular matrix with real diagonal elements. Computation of the SVD can then proceed by treating $R$ as a general square matrix and performing an iterative sequence of Kogbetliantz transformations, which is guaranteed to converge and reduce it to diagonal form. The Kogbetliantz transformation may be viewed as a generalisation of the classical Jacobi algorithm to non-symmetric (square) matrices and also belongs to the class of SBR algorithms [18]. It results in a transformation of the form

$$Q_1 R V = \Sigma \qquad (2)$$

where $Q_1 \in \mathbb{C}^{n \times n}$ and $V \in \mathbb{C}^{n \times n}$ are unitary matrices and $\Sigma \in \mathbb{R}^{n \times n}$ is diagonal. In combination, we have

$$UXV = \begin{bmatrix} \Sigma \\ 0 \end{bmatrix} \qquad (3)$$

where $U \in \mathbb{C}^{m \times m}$ is a unitary matrix given by

$$U = \begin{bmatrix} Q_1 & 0 \\ 0 & I \end{bmatrix} Q \qquad (4)$$

and so this constitutes the SVD of $X$ [10].
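To make the two-stage structure of equations (1)-(4) concrete, the following minimal NumPy sketch shows how the factors $Q$, $Q_1$ and $V$ combine into the overall SVD of $X$. It illustrates only the factorisation, not the algorithm developed in this paper: `np.linalg.svd(R)` stands in for the iterative Kogbetliantz sweep, NumPy's QR does not enforce a real diagonal on $R$, and all variable names are assumptions of this sketch.

```python
import numpy as np

# Illustrative sketch of equations (1)-(4); np.linalg.svd(R) replaces the
# iterative Kogbetliantz diagonalisation described in the paper.
rng = np.random.default_rng(0)
m, n = 6, 4
X = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Equation (1): QX = [R; 0], Q unitary (m x m), R upper triangular (n x n).
# (NumPy's QR does not guarantee real diagonal elements of R, unlike the paper.)
Qnp, Rfull = np.linalg.qr(X, mode="complete")   # X = Qnp @ Rfull
Q = Qnp.conj().T                                # so that Q @ X = Rfull
R = Rfull[:n, :]

# Equation (2): Q1 @ R @ V = Sigma, with Q1 and V unitary, Sigma diagonal.
Ur, s, Vh = np.linalg.svd(R)
Q1, V, Sigma = Ur.conj().T, Vh.conj().T, np.diag(s)

# Equation (4): U = blkdiag(Q1, I) @ Q, and equation (3): U @ X @ V = [Sigma; 0].
U = np.block([[Q1, np.zeros((n, m - n))],
              [np.zeros((m - n, n)), np.eye(m - n)]]) @ Q
assert np.allclose(U @ X @ V, np.vstack([Sigma, np.zeros((m - n, n))]))
```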
The first stage in developing a PSVD algorithm based on the SBR philosophy is to generate an SBR algorithm for the case of conventional matrices. In effect, it is necessary to merge the QR decomposition stage of the SVD algorithm described above into the iterative Kogbetliantz process. A novel algorithm of this type is developed in the next section for complex (scalar) matrices. Note that the complex case is more involved than its real counterpart because of the need to ensure that the diagonal elements remain real throughout the process. This is vital for the Jacobi transformation step, which relies on Hermitian symmetry of the 2 × 2 sub-matrix to which it is applied.

2. MODIFIED KOGBETLIANTZ ALGORITHM FOR COMPLEX NON-SQUARE MATRICES

Assuming that $m > n$, we seek to compute the SVD of a matrix $X \in \mathbb{C}^{m \times n}$ as defined by equation (3). Note that for square matrices it is not necessary to perform the initial QR decomposition as indicated in equation (1), so the basic Kogbetliantz method is sufficient. The corresponding PSVD algorithm can easily be deduced from the one developed below and will not be treated separately in this paper.

2.1 Initial phase adjustment

The algorithm begins by transforming the matrix to one with real elements on the principal diagonal. This can be achieved by multiplying the $j$th column by $\exp(-i\alpha_j)$, where $\alpha_j$ is the phase of $x_{jj}$, i.e. $x_{jj} = |x_{jj}| \exp(i\alpha_j)$. The same procedure is applied to every column, and so the entire process may be written in the form

$$X \leftarrow XT \qquad (5)$$

where $T \in \mathbb{C}^{n \times n}$ is a diagonal matrix of phase rotations and is therefore unitary. Note that $\|X\|$ is not affected by this initial phase adjustment (where $\|\cdot\|$ denotes the Frobenius norm of a matrix or of a polynomial matrix as defined in [17]). The algorithm ensures that each subsequent transformation maintains the real property of the diagonal elements of the matrix. This occurs naturally with the Jacobi transformation, which preserves Hermitian symmetry, but requires more effort in other cases.

The new SBR algorithm begins by locating the dominant off-diagonal element of $X$ (i.e. the one with greatest magnitude), denoted by $x_{jk}$. The next step depends on whether or not $j > n$.
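As a hedged illustration of equation (5) and the subsequent search for the dominant off-diagonal element, the NumPy sketch below makes the principal diagonal real and then locates $x_{jk}$. The helper names `phase_adjust` and `dominant_off_diagonal` are inventions of this sketch, not the paper's, and indices are 0-based, so the paper's test $j > n$ becomes `j >= n` here.

```python
import numpy as np

def phase_adjust(X):
    """Equation (5): return X @ T, where T = diag(exp(-i*alpha_j)) is the
    unitary matrix of phase rotations making the principal diagonal real."""
    alpha = np.angle(np.diag(X))          # alpha_j, the phase of x_jj
    T = np.diag(np.exp(-1j * alpha))
    return X @ T, T

def dominant_off_diagonal(X):
    """Return indices (j, k) of the off-diagonal element of largest magnitude."""
    A = np.abs(X)                         # new real array; safe to modify
    d = min(A.shape)
    A[np.arange(d), np.arange(d)] = 0.0   # exclude the principal diagonal
    return np.unravel_index(np.argmax(A), A.shape)

rng = np.random.default_rng(1)
X0 = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
X, T = phase_adjust(X0)
assert np.allclose(np.diag(X).imag, 0.0)                      # real diagonal
assert np.isclose(np.linalg.norm(X), np.linalg.norm(X0))      # ||X|| unchanged
j, k = dominant_off_diagonal(X)                               # next: is j >= n?
```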
2.2 Givens rotation

If $j > n$ (i.e. if the dominant off-diagonal element lies outside the upper $n \times n$ sub-matrix), compute and apply a complex Givens rotation with rotation parameters defined by

$$\begin{bmatrix} c & s e^{i\phi} \\ -s e^{-i\phi} & c \end{bmatrix} \begin{bmatrix} x_{kk} \\ x_{jk} \end{bmatrix} = \begin{bmatrix} x'_{kk} \\ 0 \end{bmatrix} \qquad (6)$$

where $c = \cos\theta$, $s = \sin\theta$, and $x_{kk}, x'_{kk} \in \mathbb{R}$. This requires

$$-s e^{-i\phi} x_{kk} + c x_{jk} = 0 \qquad (7)$$

while

$$x'_{kk} = c x_{kk} + s e^{i\phi} x_{jk} \qquad (8)$$

Denoting $x_{jk} = |x_{jk}| e^{i\omega}$, it is clear that we must have $\phi = -\omega$ to ensure that $x'_{kk}$ is real. Equation (7) then becomes

$$-s e^{i\omega} x_{kk} + c |x_{jk}| e^{i\omega} = 0 \qquad (9)$$

so that the rotation angle is determined by $\tan\theta = |x_{jk}|/x_{kk}$. The rotation is then applied to the full matrix as

$$X' = G(\theta,\phi) X \qquad (11)$$

where $G(\theta,\phi) \in \mathbb{C}^{m \times m}$ is equal to the identity matrix except for the $(k,k)$, $(k,j)$, $(j,k)$ and $(j,j)$ elements, which serve to embed the $2 \times 2$ rotation matrix in equation (6). This completes the elementary transformation for the case $j > n$. In preparation for the next iteration, the input matrix is now updated by setting $X \leftarrow X'$. Note that $\|X'\| = \|X\|$ and that, for each iteration, $\|\mathrm{diag}(X)\|^2$ (the on-diagonal energy) increases while $\|\mathrm{offdiag}(X)\|^2$ (the off-diagonal energy) decreases by the same amount. The operator $G(\theta,\phi)$ in equation (11) contributes to the overall transformation $U$ in equation (3).

2.3 Complex Kogbetliantz transformation

If $j \le n$ (i.e. if the dominant off-diagonal element of $X$ lies within the upper $n \times n$ sub-matrix), perform the following complex Kogbetliantz transformation designed to eliminate both $x_{jk}$ and $x_{kj}$ [18]. This transformation may be broken down into three simple steps: a Givens rotation and a symmetrisation, followed by a standard Jacobi transformation. The Givens rotation step, which would not be required in the real case, is needed here to ensure that the symmetrisation step leaves the diagonal elements entirely real. First, swap the indices $j$ and $k$, if necessary, so that $j > k$. It is advisable also to swap rows $j$ and $k$, and columns $j$ and $k$, so that the Givens rotation which follows is never used to eliminate a zero element and thus become numerically ill-defined.

2.3.1 Givens rotation

Compute and apply a Givens rotation of the form

$$\begin{bmatrix} c & s e^{i\phi} \\ -s e^{-i\phi} & c \end{bmatrix} \begin{bmatrix} x_{kk} & x_{kj} \\ x_{jk} & x_{jj} \end{bmatrix} = \begin{bmatrix} x'_{kk} & x'_{kj} \\ 0 & x'_{jj} \end{bmatrix} \qquad (12)$$

where $x_{jj}$, $x_{kk}$ and $x'_{kk} \in \mathbb{R}$. This requires the same choice of rotation parameters as the Givens rotation applied if $j > n$. However, we also have

$$x'_{jj} = -s e^{-i\phi} x_{kj} + c x_{jj} \qquad (13)$$

This quantity is not necessarily real, so a further phase adjustment is required. Denote $x'_{jj} = |x'_{jj}| e^{i\beta}$ and multiply the $j$th row of the matrix by $e^{-i\beta}$. This ensures that the matrix still has real elements on the diagonal. Denote the embedded Givens rotation, together with any phase adjustment and interchange of rows and columns, by

$$X' = F(\theta,\phi,\beta) X \qquad (14)$$

where $F(\theta,\phi,\beta) \in \mathbb{C}^{m \times m}$. Once again, since this combined transformation is unitary, it can be seen that $\|X'\| = \|X\|$ and that $\|\mathrm{diag}(X)\|^2$ increases while $\|\mathrm{offdiag}(X)\|^2$ decreases by the same amount.
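The elementary transformation of Section 2.2 can be sketched as follows: given the dominant element $x_{jk}$ with $j > n$, the parameters $\phi = -\omega$ and $\tan\theta = |x_{jk}|/x_{kk}$ are computed and the 2 × 2 rotation of equation (6) is embedded into an $m \times m$ matrix $G(\theta,\phi)$ applied from the left, as in equation (11). This is a minimal NumPy sketch under stated assumptions (0-based indices, `complex_givens` is an illustrative name); the additional row phase adjustment by $e^{-i\beta}$ used in Section 2.3.1 is not included.

```python
import numpy as np

def complex_givens(X, j, k):
    """Embed the 2x2 rotation of equation (6) into an m x m identity and
    apply it from the left, eliminating X[j, k] while keeping X[k, k] real.
    Assumes X[k, k] is already real and non-negative (after equation (5))."""
    m = X.shape[0]
    omega = np.angle(X[j, k])                     # x_jk = |x_jk| exp(i*omega)
    phi = -omega                                  # keeps x'_kk real (eq. (8))
    theta = np.arctan2(np.abs(X[j, k]), X[k, k].real)  # tan(theta) = |x_jk| / x_kk
    c, s = np.cos(theta), np.sin(theta)
    G = np.eye(m, dtype=complex)
    G[k, k] = c
    G[k, j] = s * np.exp(1j * phi)
    G[j, k] = -s * np.exp(-1j * phi)
    G[j, j] = c
    return G, G @ X

rng = np.random.default_rng(2)
m, n = 6, 4
X = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
d = np.arange(n)
X[d, d] = np.abs(X[d, d])                         # real diagonal, as after eq. (5)
j, k = 5, 1                                       # a target below the n x n block
G, Xp = complex_givens(X, j, k)
assert np.isclose(Xp[j, k], 0.0)                              # x_jk eliminated
assert np.isclose(Xp[k, k].imag, 0.0)                         # diagonal stays real
assert np.isclose(np.linalg.norm(Xp), np.linalg.norm(X))      # unitary: ||X'|| = ||X||
```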
