Fast QRS Decomposition of Matrix and Its Applications to Numerical Optimization

JIAN-KANG ZHANG* AND KON MAX WONG

Abstract. In this paper, we extend our recently developed equal diagonal QRS decomposition of a matrix to a general case in which the diagonal entries of the R-factor are not necessarily equal but satisfy a majorization relationship with the singular values of the matrix. A low-complexity quadratic recursive algorithm to characterize all the eligible S-factors is derived. In particular, a pair of fast closed-form QRS decompositions for both a matrix and its inverse is obtained. Finally, we show how to apply this generalized QRS decomposition to efficiently solving two classes of numerical optimization problems that usually occur in the joint design of transceiver pairs with decision feedback detection in communication systems via semi-definite and geometrical programming.

Key words. QR decomposition, majorization, QRS decomposition, Q-R-S-factors, semi-definite and geometrical programming

1. Introduction. The QR decomposition and the singular value decomposition (SVD) are commonly used tools in various signal processing applications. The QR decomposition of a matrix A is a factorization [1, 2] A = QR, where Q is a unitary matrix and R is an upper triangular matrix, while the SVD of an M × N matrix A is a factorization [1, 2]

    A = U \begin{pmatrix} \Lambda^{1/2} & 0_{r \times (N-r)} \\ 0_{(M-r) \times r} & 0_{(M-r) \times (N-r)} \end{pmatrix} V^H,    (1.1)

where r is the rank of A, U and V are unitary, and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_r)$ with $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_r > 0$. To preserve the virtues of both decompositions, an intermediary between the QR decomposition and the SVD, the two-sided orthogonal decomposition URV, was developed in [2, 3, 4]; i.e., $A = U R V^H$, where U and V are unitary matrices and

    R = \begin{pmatrix} R_{r \times r} & 0_{r \times (N-r)} \\ 0_{(M-r) \times r} & 0_{(M-r) \times (N-r)} \end{pmatrix}    (1.2)

with $R_{r \times r}$ an r × r upper triangular matrix.
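The two factorizations can be illustrated concretely; the following NumPy sketch (the random test matrix is our own, not from the paper) computes both and checks the basic identities, including the fact that the product of the singular values equals the product of the moduli of the R-factor diagonal entries.

```python
import numpy as np

# Illustrative sketch: QR decomposition and SVD of a random complex matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

Q, R = np.linalg.qr(A)        # A = Q R, Q unitary, R upper triangular
U, s, Vh = np.linalg.svd(A)   # A = U diag(s) V^H, s in decreasing order

assert np.allclose(Q @ R, A)
assert np.allclose(U @ np.diag(s) @ Vh, A)

# |det A| is both the product of the singular values and the product
# of the moduli of the diagonal entries of R.
assert np.isclose(np.prod(s), np.prod(np.abs(np.diag(R))))
```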
Recently, we have developed the equal diagonal QRS decomposition of a matrix [5, 6, 7, 8]; i.e., any matrix A can be decomposed as AS = QR, where Q and S are unitary matrices and R has the same structure as that given in (1.2), but the diagonal entries of $R_{r \times r}$ are all equal to $(\prod_{i=1}^{r} \lambda_i)^{1/2r}$. There are two main differences between the URV decomposition and the QRS decomposition. (a) The methodology is totally different. In the URV decomposition, for a given data matrix A, triangularization toward the left corner is performed by orthonormal row and column transformations; such unitary matrices U and V always exist [2, 3, 4]. However, in our equal diagonal QRS decomposition, the existence of the Q- and S-factors is problematic in the mathematics literature. (b) The task of designing the S-factor in the QRS decomposition is totally different from that of designing the V-factor in the URV decomposition. The goal of designing the V-factor is rank revealing and updating for subspace tracking [2, 3, 4], while the principal purpose of designing the S-factor is to optimize the signal-to-noise (or signal-to-noise-plus-interference) ratio of the worst-case R-factor value subchannel, or the block error probability, for a decision feedback device in communication signal processing [5, 6, 7, 8]. Essentially, our equal diagonal QRS decomposition is based on Schur's decomposition [1], by which the determinant of a positive definite matrix is equal to the product of the determinant of an arbitrarily given principal submatrix and the determinant of its Schur complement. Therefore, it actually describes a process for successively and evenly distributing the total information over each one-dimensional subspace [8].

*Contact Author: Department of Electrical and Computer Engineering, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4K1, Canada. Phone: +1 905 525 9140, Ext. 27599. Fax: +1 905 521 2922. ([email protected])
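The quantity that the equal diagonal QRS decomposition distributes evenly is easy to see numerically: for a square invertible A, the product of the |R_ii| in the QR decomposition of AS is invariant under any unitary S, since |det(AS)| = |det A|. The sketch below (our own check, not the paper's recursive construction of S) verifies this invariance and computes the equal-diagonal target value, the geometric mean of the singular values.

```python
import numpy as np

# Sketch, assuming a square invertible A. A random orthogonal S is used
# only to demonstrate the invariant; it is NOT the equal-diagonal S-factor.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
S, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal S

_, R = np.linalg.qr(A @ S)
sv = np.linalg.svd(A, compute_uv=False)

# The product of the R-diagonal moduli equals the product of the
# singular values, for ANY unitary S ...
assert np.isclose(np.prod(np.abs(np.diag(R))), np.prod(sv))

# ... so the common diagonal value the equal diagonal QRS decomposition
# achieves is the geometric mean of the singular values.
target = np.prod(sv) ** (1.0 / 3.0)
```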
Very interestingly, such a decomposition simultaneously enjoys many important optimality properties in communication signal processing and, therefore, has played a vital role in the joint design of transceivers with decision feedback detection in many block-based communication systems [5, 6, 7, 9]. However, in some environments where different R-factor value subchannels (users) require different qualities of service (QoS) [10] or have different degrees of spatial freedom, such as random MIMO channels [11, 12], an unequal diagonal QRS decomposition is demanded. Therefore, in this paper we extend the equal diagonal QRS decomposition [5, 6, 7] to a general case where the diagonal entries of the R-factor are not necessarily equal but satisfy a majorization relationship [13, 14, 10]. A low-complexity generalized quadratic recursive algorithm to characterize all the eligible S-factors is developed. In particular, a pair of fast specific closed-form QRS decompositions is derived for both an invertible matrix and its inverse. Finally, we show how to apply this general QRS decomposition to efficiently solving two classes of numerical optimization problems that usually occur in the joint design of transceiver pairs with decision feedback detection in communication systems via semi-definite and geometrical programming [15].

It should be mentioned that Mirsky [13] and Guess [10] gave constructive proofs for finding a specific positive definite matrix with a prescribed pair of eigenvalues and Cholesky values. Although their methods can be exploited to find a unitary S-factor via the eigenvalue decomposition of the obtained positive definite matrix, they are neither direct nor efficient, since many applications require computing the S-factor directly. In addition, their methods [13, 10] cannot characterize all the eligible positive definite matrices.

Notation: Matrices are denoted by uppercase boldface characters (e.g., A), while column vectors are denoted by lowercase boldface characters (e.g., b).
The (i, j)-th entry of A is denoted by $A_{i,j}$. The i-th entry of b is denoted by $b_i$. The columns of an M × N matrix A are denoted by $a_1, a_2, \cdots, a_N$. The notation $A_k$ denotes the matrix consisting of the first k columns of A, i.e., $A_k = [a_1, a_2, \cdots, a_k]$. By convention, $A_0 = 1$. The matrix remaining after deleting columns $a_{k_1}, a_{k_2}, \cdots, a_{k_i}$ from A is denoted by $A_{k_1, k_2, \cdots, k_i}$. The j-th diagonal entry of a square matrix A is denoted by $[A]_j$. The notation $A^\perp$ denotes the orthonormal complement of a matrix A. The transpose of A is denoted by $A^T$. The Hermitian transpose of A (i.e., the conjugate transpose of A) is denoted by $A^H$.

2. QRS decomposition of matrix. Our task in this section is to extend the equal diagonal QRS decomposition in [5, 6, 7, 8] and the quadratic recursive algorithm to a general case and then derive a pair of specific closed-form QRS decompositions for both an invertible matrix and its inverse.

2.1. QRS decomposition. First, we give a quadratic recursive algorithm to determine whether a positive sequence is a valid candidate for the diagonal entries of the R-factor in the QR decomposition of a given matrix. For convenience of discussion, we assume hereafter that the singular value decomposition of an M × N matrix A is given by (1.1). In order to judge whether a positive sequence constitutes the diagonal entries of the R-factor of a given matrix, we recall the notion of majorization and some key results [13, 16, 10] that we will require in our derivation.

Definition 2.1. For any $a \in \mathbb{R}^K$, let $a_{[1]} \ge a_{[2]} \ge \cdots \ge a_{[K]}$ denote the components of a in decreasing order (also termed the order statistics of a).

Majorization makes precise the vague notion that the components of a vector a are "less spread out" or "more nearly equal" than the components of a vector b.

Definition 2.2. Let $a, b \in \mathbb{R}^K$.
Then, vector b majorizes a if

    \sum_{i=1}^{k} a_{[i]} \le \sum_{i=1}^{k} b_{[i]}

for all k = 1, 2, ···, K, with equality when k = K; this is denoted by $a \prec b$.

We also need the following definition.

Definition 2.3. Let $\{a_k\}_{k=1}^{K}$ and $\{b_k\}_{k=1}^{K}$ be two positive real-valued sequences. We say that $\{a_k\}_{k=1}^{K}$ majorizes $\{b_k\}_{k=1}^{K}$ in the product sense if

    \prod_{i=1}^{k} a_{[i]} \ge \prod_{i=1}^{k} b_{[i]}    (2.1)

for all k = 1, 2, ···, K, with equality when k = K; this is denoted by $a \stackrel{\Pi}{\succ} b$.

The following lemma [17, 16, 13, 10] gives a very simple necessary and sufficient condition for checking whether a pair of positive sequences constitutes a valid pair of singular values and R-factor values of some matrix.

Lemma 2.4. Let $\{\lambda_i\}_{i=1}^{r}$ and $\{d_i\}_{i=1}^{r}$ be the eigenvalue sequence and the Cholesky value sequence of a positive definite matrix A, respectively. Then, $\{\lambda_i\}_{i=1}^{r}$ majorizes $\{d_i\}_{i=1}^{r}$ in the product sense. Conversely, if $\{\lambda_i\}_{i=1}^{r}$ majorizes $\{d_i\}_{i=1}^{r}$ in the product sense, then, for an arbitrarily given permutation $d_{j_1}, d_{j_2}, \cdots, d_{j_r}$ of $\{d_i\}_{i=1}^{r}$, there exists a matrix A such that $\{\lambda_i\}_{i=1}^{r}$ and $\{d_{j_i}\}_{i=1}^{r}$ are the eigenvalues and the Cholesky values of A, respectively. In [13], Mirsky gave a constructive proof to find some positive definite matrix with the given valid eigenvalues and Cholesky values.
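The product-sense condition of Definition 2.3 is cheap to test: sort both sequences in decreasing order, compare cumulative products prefix by prefix, and require equality of the full products. The sketch below (the function name and the numerical check are ours) implements this test and verifies the forward direction of Lemma 2.4 on a random matrix, using the moduli of the R-factor diagonal as the Cholesky values of $A^H A$.

```python
import numpy as np

# Sketch of Definition 2.3: does a majorize b in the product sense,
# i.e. prod_{i<=k} a_[i] >= prod_{i<=k} b_[i] for all k, with equality
# at k = K?  (Function name is ours, not the paper's.)
def majorizes_in_product_sense(a, b, tol=1e-9):
    a = np.sort(np.asarray(a, dtype=float))[::-1]  # decreasing order stats
    b = np.sort(np.asarray(b, dtype=float))[::-1]
    pa, pb = np.cumprod(a), np.cumprod(b)
    return bool(np.all(pa >= pb - tol) and np.isclose(pa[-1], pb[-1]))

# Forward direction of Lemma 2.4, phrased via the QR decomposition:
# the singular values of A majorize the R-factor diagonal moduli
# (the Cholesky values of A^H A) in the product sense.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
_, R = np.linalg.qr(A)
sv = np.linalg.svd(A, compute_uv=False)
assert majorizes_in_product_sense(sv, np.abs(np.diag(R)))
```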
