An Algorithm for Polynomial Matrix SVD Based on Generalised Kogbetliantz Transformations

18th European Signal Processing Conference (EUSIPCO-2010), Aalborg, Denmark, August 23-27, 2010

AN ALGORITHM FOR POLYNOMIAL MATRIX SVD BASED ON GENERALISED KOGBETLIANTZ TRANSFORMATIONS

John G. McWhirter
School of Engineering, Cardiff University
Queen's Buildings, CF24 3AA, Cardiff, Wales
phone: + (44) (0)2920870627, fax: + (44) (0)2920874716, email: [email protected]

ABSTRACT

An algorithm is presented for computing the singular value decomposition (SVD) of a polynomial matrix. It takes the form of a sequential best rotation (SBR) algorithm and constitutes a generalisation of the Kogbetliantz technique for computing the SVD of conventional scalar matrices. It avoids "squaring" the matrix to be factorised, uses only unitary and paraunitary operations, and therefore exhibits a high degree of numerical stability.

1. INTRODUCTION

Polynomial matrices have been used for many years in the area of control [1]. They play an important role in the realisation of multi-variable transfer functions associated with multiple-input multiple-output (MIMO) systems. Over the last few years they have become more widely used in the context of digital signal processing (DSP) and communications [2]. Typical areas of application include broadband adaptive sensor array processing [3, 4], MIMO communication channels [5-7], and digital filter banks for subband coding [8] or data compression [9].

Just as orthogonal or unitary matrix decomposition techniques such as the QR decomposition (QRD), eigenvalue decomposition (EVD), and singular value decomposition (SVD) [10] are important for narrowband adaptive sensor arrays [11], corresponding paraunitary polynomial matrix decompositions are proving beneficial for broadband adaptive arrays [12-14] and also for filterbank design [15, 16]. In a previous paper [17], we described a generalisation of the EVD for conventional Hermitian matrices to para-Hermitian polynomial matrices. This technique will be referred to as PEVD, while the underlying algorithm is known as the 2nd order sequential best rotation (SBR2) algorithm.

In order to minimise the number of iterative steps required, and so prevent unnecessary growth in the order of the polynomial matrix being diagonalised, the philosophy adopted for the algorithm was one of sequential best rotation (SBR). In the context of conventional Hermitian matrices, this corresponds to the classical Jacobi algorithm [18]. The SBR2 algorithm is, in effect, a generalisation of the classical Jacobi algorithm to the third (time) dimension associated with polynomial matrices. A similar approach was subsequently adopted for polynomial matrix QR decomposition (PQRD) [19].

For similar reasons, the polynomial matrix SVD (PSVD) algorithm outlined here is also based on the SBR principle. It can be viewed as the generalisation to polynomial matrices of an SBR algorithm for computing the SVD of conventional matrices. Unlike the case of Hermitian matrix EVD, no such algorithm seems to exist already in the literature. This is not surprising since, for conventional matrices, basing an algorithm entirely on the SBR philosophy would not be computationally efficient. Given a matrix $X \in \mathbb{C}^{m \times n}$, where we assume that $m > n$, it is more efficient to begin the SVD by performing a QR decomposition of the matrix, since this only requires a fixed number of carefully structured steps. In this case, a unitary matrix $Q \in \mathbb{C}^{m \times m}$ is computed such that

$$QX = \begin{bmatrix} R \\ 0 \end{bmatrix} \qquad (1)$$

where $R \in \mathbb{C}^{n \times n}$ is an upper triangular matrix with real diagonal elements. Computation of the SVD can then proceed by treating $R$ as a general square matrix and performing an iterative sequence of Kogbetliantz transformations, which is guaranteed to converge and reduce it to diagonal form. The Kogbetliantz transformation may be viewed as a generalisation of the classical Jacobi algorithm to non-symmetric (square) matrices and also belongs to the class of SBR algorithms [18]. It results in a transformation of the form

$$Q_1 R V = \Sigma \qquad (2)$$

where $Q_1 \in \mathbb{C}^{n \times n}$ and $V \in \mathbb{C}^{n \times n}$ are unitary matrices and $\Sigma \in \mathbb{R}^{n \times n}$ is diagonal. In combination, we have

$$U X V = \begin{bmatrix} \Sigma \\ 0 \end{bmatrix} \qquad (3)$$

where $U \in \mathbb{C}^{m \times m}$ is a unitary matrix given by

$$U = \begin{bmatrix} Q_1 & 0 \\ 0 & I \end{bmatrix} Q \qquad (4)$$

and so this constitutes the SVD of $X$ [10].

The first stage in developing a PSVD algorithm based on the SBR philosophy is to generate an SBR algorithm for the case of conventional matrices. In effect, it is necessary to merge the QR decomposition stage of the SVD algorithm described above into the iterative Kogbetliantz process. A novel algorithm of this type is developed in the next section for complex (scalar) matrices. Note that the complex case is more involved than its real counterpart because of the need to ensure that the diagonal elements remain real throughout the process. This is vital for the Jacobi transformation step, which relies on Hermitian symmetry of the 2 × 2 sub-matrix to which it is applied.
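As a concrete illustration of equations (1)-(4), the following NumPy sketch (not from the paper; the test matrix, variable names and the use of numpy.linalg.svd in place of the iterative Kogbetliantz sweep are illustrative assumptions) assembles the SVD of a tall complex matrix from a QR decomposition followed by a diagonalisation of the square factor R, and checks that U X V = [Σ; 0] as in equation (3):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
X = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Equation (1): QX = [R; 0] with R upper triangular.
# numpy factorises X = Q_full R_full, so Q is the conjugate transpose of Q_full.
Q_full, R_full = np.linalg.qr(X, mode="complete")
Q = Q_full.conj().T
R = R_full[:n, :n]

# Make the diagonal of R real and non-negative (a row-wise analogue of the
# phase adjustment described in Section 2.1), absorbing the phases into Q.
phase = np.exp(-1j * np.angle(np.diag(R)))
R = phase[:, None] * R
Q[:n, :] = phase[:, None] * Q[:n, :]

# Equation (2): Q1 R V = Sigma. NumPy's SVD gives R = Ur diag(s) Vh, so
# Q1 = Ur^H and V = Vh^H.  (The paper obtains this step iteratively via
# Kogbetliantz transformations.)
Ur, s, Vh = np.linalg.svd(R)
Q1, V = Ur.conj().T, Vh.conj().T

# Equation (4): U = [[Q1, 0], [0, I]] Q, and equation (3): U X V = [Sigma; 0].
U = np.block([[Q1, np.zeros((n, m - n))],
              [np.zeros((m - n, n)), np.eye(m - n)]]) @ Q
D = U @ X @ V
assert np.allclose(D[:n, :], np.diag(s))
assert np.allclose(D[n:, :], 0)
```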
2. MODIFIED KOGBETLIANTZ ALGORITHM FOR COMPLEX NON-SQUARE MATRICES

Assuming that $m > n$, we seek to compute the SVD of a matrix $X \in \mathbb{C}^{m \times n}$ as defined by equation (3). Note that for square matrices it is not necessary to perform the initial QR decomposition as indicated in equation (1), so the basic Kogbetliantz method is sufficient. The corresponding PSVD algorithm can easily be deduced from the one developed below and will not be treated separately in this paper.

2.1 Initial phase adjustment

The algorithm begins by transforming the matrix to one with real elements on the principal diagonal. This can be achieved by multiplying the $j$th column by $\exp(-i\alpha_j)$, where $\alpha_j$ is the phase of $x_{jj}$, i.e. $x_{jj} = |x_{jj}|\exp(i\alpha_j)$. The same procedure is applied to every column, and so the entire process may be written in the form

$$X \leftarrow XT \qquad (5)$$

where $T \in \mathbb{C}^{n \times n}$ is a diagonal matrix of phase rotations and is therefore unitary. Note that $\|X\|$ is not affected by this initial phase adjustment (where $\|\cdot\|$ denotes the Frobenius norm of a matrix or of a polynomial matrix as defined in [17]). The algorithm ensures that each subsequent transformation maintains the real property of the diagonal elements of the matrix. This occurs naturally with the Jacobi transformation, which preserves Hermitian symmetry, but requires more effort in other cases.

The new SBR algorithm begins by locating the dominant off-diagonal element of $X$ (i.e. the one with greatest magnitude), denoted by $x_{jk}$. The next step depends on whether or not $j > n$.

2.2 Givens rotation

If $j > n$ (i.e. if the dominant off-diagonal element lies outside the upper $n \times n$ sub-matrix), compute and apply a complex Givens rotation with rotation parameters defined by

$$\begin{bmatrix} c & s e^{i\phi} \\ -s e^{-i\phi} & c \end{bmatrix} \begin{bmatrix} x_{kk} \\ x_{jk} \end{bmatrix} = \begin{bmatrix} x'_{kk} \\ 0 \end{bmatrix} \qquad (6)$$

where $c = \cos\theta$, $s = \sin\theta$, and $x_{kk}, x'_{kk} \in \mathbb{R}$. This requires

$$-s e^{-i\phi} x_{kk} + c\, x_{jk} = 0 \qquad (7)$$

while

$$x'_{kk} = c\, x_{kk} + s e^{i\phi} x_{jk} \qquad (8)$$

Denoting $x_{jk} = |x_{jk}| e^{i\omega}$, it is clear that we must have $\phi = -\omega$ to ensure that $x'_{kk}$ is real. Equation (7) then becomes

$$-s e^{i\omega} x_{kk} + c\, |x_{jk}| e^{i\omega} = 0 \qquad (9)$$

so that the rotation angle is given by

$$\tan\theta = |x_{jk}| / x_{kk} \qquad (10)$$

The rotation is applied to the full matrix as

$$X' = G(\theta, \phi) X \qquad (11)$$

where $G(\theta, \phi) \in \mathbb{C}^{m \times m}$ is an identity matrix except for the $(k,k)$, $(k,j)$, $(j,k)$ and $(j,j)$ elements, which serve to embed the 2 × 2 rotation matrix in equation (6). This completes the elementary transformation for the case $j > n$. In preparation for the next iteration the input matrix is now updated by setting $X \leftarrow X'$. Note that $\|X'\| = \|X\|$ and that for each iteration, $\|\mathrm{diag}(X)\|^2$ (the on-diagonal energy) increases while $\|\mathrm{offdiag}(X)\|^2$ (the off-diagonal energy) decreases by the same amount. The operator $G(\theta, \phi)$ in equation (11) contributes to the overall transformation $U$ in equation (3).
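A minimal sketch of the elementary transformation in equations (6)-(11), again in NumPy and with hypothetical helper and variable names, might look as follows. It zeroes a chosen element x_jk lying below the upper n × n block while keeping the diagonal real and the Frobenius norm unchanged:

```python
import numpy as np

def givens_outside_block(X, j, k):
    """Equations (6)-(11): zero X[j, k] (row j outside the upper n x n block,
    0-based indices) against the real diagonal element X[k, k]."""
    omega = np.angle(X[j, k])                 # x_jk = |x_jk| e^{i omega}
    phi = -omega                              # phase choice that keeps x'_kk real
    theta = np.arctan2(np.abs(X[j, k]), X[k, k].real)  # tan(theta) = |x_jk| / x_kk
    c, s = np.cos(theta), np.sin(theta)

    G = np.eye(X.shape[0], dtype=complex)     # identity except the four embedded entries
    G[k, k], G[k, j] = c, s * np.exp(1j * phi)
    G[j, k], G[j, j] = -s * np.exp(-1j * phi), c
    return G @ X, G                           # X' = G(theta, phi) X, equation (11)

# Illustrative use: eliminate an element assumed dominant at (j, k) = (4, 1).
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
X[np.arange(3), np.arange(3)] = np.abs(X[np.arange(3), np.arange(3)])  # real diagonal
Xp, G = givens_outside_block(X, j=4, k=1)
print(abs(Xp[4, 1]))                          # ~0: target element eliminated
print(np.linalg.norm(Xp) - np.linalg.norm(X)) # ~0: Frobenius norm preserved
```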
2.3 Complex Kogbetliantz transformation

If $j \le n$ (i.e. if the dominant off-diagonal element of $X$ lies within the upper $n \times n$ sub-matrix), perform the following complex Kogbetliantz transformation designed to eliminate both $x_{jk}$ and $x_{kj}$ [18]. This transformation may be broken down into three simple steps: a Givens rotation and a symmetrisation followed by a standard Jacobi transformation. The Givens rotation step, which would not be required in the real case, is needed here to ensure that the symmetrisation step leaves the diagonal elements entirely real. First, swap the indices $j$ and $k$, if necessary, so that $j > k$. It is advisable also to swap rows $j$ and $k$, and columns $j$ and $k$, so that the Givens rotation which follows is never used to eliminate a zero element and thus become numerically ill-defined.

2.3.1 Givens rotation

Compute and apply a Givens rotation of the form

$$\begin{bmatrix} c & s e^{i\phi} \\ -s e^{-i\phi} & c \end{bmatrix} \begin{bmatrix} x_{kk} & x_{kj} \\ x_{jk} & x_{jj} \end{bmatrix} = \begin{bmatrix} x'_{kk} & x'_{kj} \\ 0 & x'_{jj} \end{bmatrix} \qquad (12)$$

where $x_{jj}$, $x_{kk}$ and $x'_{kk} \in \mathbb{R}$. This requires the same choice of rotation parameters as the Givens rotation applied if $j > n$. However, we also have

$$x'_{jj} = -s e^{-i\phi} x_{kj} + c\, x_{jj} \qquad (13)$$

This quantity is not necessarily real, so a further phase adjustment is required. Denote $x'_{jj} = |x'_{jj}| e^{i\beta}$ and multiply the $j$th row of the matrix by $e^{-i\beta}$. This ensures that the matrix still has real elements on the diagonal. Denote the embedded Givens rotation, together with any phase adjustment and interchange of rows and columns, by

$$X' = F(\theta, \phi, \beta) X \qquad (14)$$

where $F(\theta, \phi, \beta) \in \mathbb{C}^{m \times m}$. Once again, since this combined transformation is unitary, it can be seen that $\|X'\| = \|X\|$ and that $\|\mathrm{diag}(X)\|^2$ increases while $\|\mathrm{offdiag}(X)\|^2$ decreases by the same amount.
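The corresponding Givens step for a dominant element inside the upper n × n block (equations (12)-(14)) can be sketched in the same way; the helper below is again a hypothetical illustration and omits the optional row/column interchange mentioned above. After the rotation, the jth row is multiplied by e^{-iβ} so that both affected diagonal elements remain real, leaving the sub-matrix ready for the symmetrisation and Jacobi steps described in the paper:

```python
import numpy as np

def kogbetliantz_givens_step(X, j, k):
    """Equations (12)-(14): Givens rotation on rows k and j (both inside the
    upper n x n block, j > k, 0-based), followed by a phase adjustment of
    row j so that the diagonal stays real.  Row/column swaps are omitted."""
    assert j > k, "swap the indices first so that j > k"
    omega = np.angle(X[j, k])
    phi = -omega
    theta = np.arctan2(np.abs(X[j, k]), X[k, k].real)
    c, s = np.cos(theta), np.sin(theta)

    F = np.eye(X.shape[0], dtype=complex)
    F[k, k], F[k, j] = c, s * np.exp(1j * phi)
    F[j, k], F[j, j] = -s * np.exp(-1j * phi), c
    Xp = F @ X                                 # zeroes X[j, k], equation (12)

    beta = np.angle(Xp[j, j])                  # x'_jj = |x'_jj| e^{i beta}, equation (13)
    Xp[j, :] *= np.exp(-1j * beta)             # multiply the jth row by e^{-i beta}
    F[j, :] *= np.exp(-1j * beta)              # fold the adjustment into F(theta, phi, beta)
    return Xp, F                               # X' = F(theta, phi, beta) X, equation (14)

# Illustrative use with both indices inside the upper 3 x 3 block.
rng = np.random.default_rng(2)
X = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
X[np.arange(3), np.arange(3)] = np.abs(X[np.arange(3), np.arange(3)])
Xp, F = kogbetliantz_givens_step(X, j=2, k=0)
print(abs(Xp[2, 0]))                                 # ~0: target element eliminated
print(np.max(np.abs(np.imag(np.diag(Xp[:3, :3])))))  # ~0: diagonal remains real
```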