On the Singular Value Decomposition of (Skew-)Involutory and (Skew-)Coninvolutory Matrices

Total Pages: 16

File Type: pdf, Size: 1020 KB

Spec. Matrices 2020; 8:1–13 · Research Article · Open Access

Heike Faßbender* and Martin Halwaß

On the singular value decomposition of (skew-)involutory and (skew-)coninvolutory matrices

https://doi.org/10.1515/spma-2020-0001
Received July 30, 2019; accepted November 6, 2019

*Corresponding Author: Heike Faßbender, Institut Computational Mathematics / AG Numerik, TU Braunschweig, Universitätsplatz 2, 38106 Braunschweig, Germany, Email: [email protected]. Martin Halwaß: Kampstraße 7, 17121 Loitz, Germany.
© 2020 Heike Faßbender and Martin Halwaß, published by De Gruyter. This work is licensed under the Creative Commons Attribution alone 4.0 License.

Abstract: The singular values σ > 1 of an n × n involutory matrix A appear in pairs (σ, 1/σ). Their left and right singular vectors are closely connected. The case of singular values σ = 1 is discussed in detail. These singular values may appear in pairs (1, 1) with closely connected left and right singular vectors or by themselves. The link between the left and right singular vectors is used to reformulate the singular value decomposition (SVD) of an involutory matrix as an eigendecomposition. This displays an interesting relation between the singular values of an involutory matrix and its eigenvalues. Similar observations hold for the SVD, the singular values and the coneigenvalues of (skew-)coninvolutory matrices.

Keywords: singular value decomposition, (skew-)involutory matrix, (skew-)coninvolutory, consimilarity

MSC: 15A23, 65F99

1 Introduction

Inspired by the work [7] on the singular values of involutory matrices, some more insight into the singular value decomposition (SVD) of involutory matrices is derived. For any matrix A ∈ C^{n×n} there exists a singular value decomposition (SVD), that is, a decomposition of the form

    A = UΣV^H,   (1.1)

where U, V ∈ C^{n×n} are unitary matrices and Σ ∈ R^{n×n} is a diagonal matrix with non-negative real numbers on the diagonal. The diagonal entries σ_i of Σ are the singular values of A. Usually, they are ordered such that σ_1 ≥ σ_2 ≥ ··· ≥ σ_n ≥ 0. The number of nonzero singular values of A is the same as the rank of A. Thus, a nonsingular matrix A ∈ C^{n×n} has n positive singular values. The columns u_j, j = 1, ..., n, of U and the columns v_j, j = 1, ..., n, of V are the left singular vectors and right singular vectors of A, respectively. From (1.1) we have Av_j = σ_j u_j, j = 1, ..., n. Any triplet (u, v, σ) with Av = σu is called a singular triplet of A. In case A ∈ R^{n×n}, U and V can be chosen to be real and orthogonal.

While the singular values are unique, in general the singular vectors are not. The nonuniqueness of the singular vectors mainly depends on the multiplicities of the singular values. For simplicity, assume that A ∈ C^{n×n} is nonsingular. Let s_1 > s_2 > ··· > s_k > 0 denote the distinct singular values of A with respective multiplicities θ_1, θ_2, ..., θ_k ≥ 1, θ_1 + θ_2 + ··· + θ_k = n. Let A = UΣV^H be a given singular value decomposition with Σ = diag(s_1 I_{θ_1}, s_2 I_{θ_2}, ..., s_k I_{θ_k}). Here I_j denotes as usual the j × j identity matrix. Then

    Û = U [W_1 ⊕ W_2 ⊕ ··· ⊕ W_k]  and  V̂ = V [W_1 ⊕ W_2 ⊕ ··· ⊕ W_k]   (1.2)

with unitary matrices W_j ∈ C^{θ_j × θ_j} yield another SVD of A, A = ÛΣV̂^H. This describes all possible SVDs of A, see, e.g., [4, Theorem 3.1.1'] or [8, Theorem 4.28]. In case θ_j = 1, the corresponding left and right singular vector are unique up to multiplication with some e^{ıα_j}, α_j ∈ R, where ı = √−1.
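As a quick illustration of the nonuniqueness described by (1.2), the following sketch (an editorial addition, not part of the paper; it assumes Python with NumPy and an arbitrarily chosen random seed and singular values) builds a matrix with a repeated singular value and checks that rotating the corresponding singular vectors by a block-diagonal unitary W yields another valid SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # The QR factorization of a random complex matrix yields a unitary factor.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return Q

U, V = random_unitary(3), random_unitary(3)
Sigma = np.diag([3.0, 3.0, 1.0])      # s_1 = 3 with multiplicity 2, s_2 = 1 with multiplicity 1
A = U @ Sigma @ V.conj().T

# A block-diagonal unitary W = W_1 (+) W_2 conforming with the multiplicities, as in (1.2)
W = np.zeros((3, 3), dtype=complex)
W[:2, :2] = random_unitary(2)         # W_1 mixes the two singular vectors belonging to s_1 = 3
W[2, 2] = np.exp(0.7j)                # W_2 is a unimodular scalar for the simple singular value

U_hat, V_hat = U @ W, V @ W
print(np.allclose(U_hat @ Sigma @ V_hat.conj().T, A))   # True: another SVD of A
```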
For more information on the SVD see, e.g., [2, 4, 8].

A matrix A ∈ C^{n×n} with A² = I_n, or equivalently A = A^{−1}, is called an involutory matrix. Thus, for any involutory matrix A, A and its inverse A^{−1} have the same SVD and hence the same (positive) singular values. Let A = UΣV^H be the usual SVD of A with the diagonal of Σ ordered by magnitude, σ_1 ≥ σ_2 ≥ ··· ≥ σ_n. Noting that an SVD of A^{−1} is given by A^{−1} = VΣ^{−1}U^H and that the diagonal elements of Σ^{−1} are ordered by magnitude, 1/σ_n ≥ 1/σ_{n−1} ≥ ··· ≥ 1/σ_1, we conclude that σ_1 = 1/σ_n, σ_2 = 1/σ_{n−1}, ..., σ_n = 1/σ_1. That is, the singular values of an involutory matrix A are either 1 or appear in pairs (σ, 1/σ), where σ > 1. This has already been observed in [7] (see Theorem 1 for a quote of the findings). Here we will describe the SVD A = UΣV^H for involutory matrices in detail. In particular, we will note a close relation between the left and right singular vectors of pairs (σ, 1/σ) of singular values: if (u, v, σ) is a singular triplet, then so is (v, u, 1/σ). We will observe that some of the singular values σ = 1 may also appear in such pairs. That is, if (u, v, 1) is a singular triplet, then so is (v, u, 1). Other singular values σ = 1 may appear as a single singular triplet, that is, (u, ±u, 1). These observations allow us to express U as U = VT for a real orthogonal matrix T. With this the SVD of A reads A = VTΣV^H. Taking a closer look at the real matrix TΣ we will see that TΣ is an involutory matrix just as A. All relevant information concerning the singular values and the eigenvalues of A can be read off of TΣ. Some of these findings also follow from [6, Theorem 7.2], see Section 2.

As any skew-involutory matrix B ∈ C^{n×n}, B² = −I_n, can be expressed as B = ıA with an involutory matrix A ∈ C^{n×n}, the results for involutory matrices can be transferred easily to skew-involutory ones.

We will also consider the SVD of coninvolutory matrices, that is, of matrices A ∈ C^{n×n} which satisfy AĀ = I_n, see, e.g., [5]. Like involutory matrices, coninvolutory matrices are nonsingular and have n positive singular values. Moreover, the singular values appear in pairs (σ, 1/σ) or are 1, see [4]. Similar to the case of involutory matrices, we can give a relation between the matrices U and V in the SVD UΣV^H of A in the form U = VT. Then TΣ is a complex coninvolutory matrix consimilar¹ to A and consimilar to the identity. Similar observations have been given in [5]. Some of our findings also follow from [6, Theorem 7.1], see Section 4.

¹ Two matrices A, B ∈ C^{n×n} are said to be consimilar if there exists a nonsingular matrix S ∈ C^{n×n} such that A = S B S̄^{−1}, [5].

Skew-coninvolutory matrices A ∈ C^{2n×2n}, that is, matrices with AĀ = −I_{2n}, have been studied in [1]. Here we will briefly state how, with our approach, the findings on the SVD of skew-coninvolutory matrices given in [1] can easily be rediscovered.

Our goal is to round off the picture drawn in the literature about the singular value decomposition of the four classes of matrices considered here. In particular, we would like to make visible the relation between the singular vectors belonging to reciprocal pairs of singular values in the form of the matrix T. The SVD of involutory matrices is treated at length in Section 2. In Section 3 we make immediate use of the results in Section 2 in order to discuss the SVD of skew-involutory matrices. Section 4 deals with coninvolutory matrices. Finally, the SVD of skew-coninvolutory matrices is discussed in Section 5.

2 Involutory matrices

Let A ∈ C^{n×n} be involutory, that is, A² = I_n holds. Thus, A = A^{−1}. Symmetric permutation matrices and Hermitian unitary matrices are simple examples of involutory matrices.
Nontrivial examples of involutory matrices can be found in [3, pp. 165, 166, 170]. The spectrum of any involutory matrix can have at most two elements, λ(A) ⊂ {1, −1}, as from Ax = λx for x ∈ C^n \ {0} it follows that A²x = λAx, which gives x = λ²x. The SVD of involutory matrices has already been considered in [7]. In particular, the following theorem is given.

Theorem 1. Let A ∈ C^{n×n} be an involutory matrix. Then B_1 = (1/2)(I_n − A) and B_2 = (1/2)(I_n + A) are idempotent, that is, B_i² = B_i, i = 1, 2. Assume that A has r ≤ n/2 eigenvalues −1 and n − r eigenvalues +1 (or r ≤ n/2 eigenvalues +1 and n − r eigenvalues −1). Then B_1 (B_2) is of rank r and can be decomposed as B_1 = QW^H (B_2 = QW^H), where Q, W ∈ C^{n×r} have r linearly independent columns. Moreover, W^H W − I_r is at least positive semidefinite. There are n − 2r singular values of A equal to 1, r singular values which are equal to the eigenvalues of the matrix (W^H W)^{1/2} + (W^H W − I_r)^{1/2}, and r singular values which are the reciprocals of these.

Thus, in [7] it was noted that the singular values of an involutory matrix A may be 1 or may appear in pairs (σ, 1/σ). Moreover, the minimal number of singular values 1 is given by n − 2r, where r ≤ n/2 denotes the number of eigenvalues −1 of A or the number of eigenvalues +1 of A, whichever is smallest.
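The reciprocal pairing of the singular values, the singular-triplet relation, and the projectors B_1, B_2 of Theorem 1 are easy to confirm numerically. The following sketch is an editorial illustration rather than part of the paper; it assumes Python with NumPy and constructs an involutory matrix as A = S D S^{-1} with a ±1 diagonal D (the sizes n and r are chosen here for the example).

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 2                                   # r eigenvalues -1, n - r eigenvalues +1

# An involutory matrix: A = S D S^{-1} with D = diag(+-1), so that A @ A = I_n.
S = rng.standard_normal((n, n))
D = np.diag([-1.0] * r + [1.0] * (n - r))
A = S @ D @ np.linalg.inv(S)

U, sigma, Vh = np.linalg.svd(A)               # sigma is sorted decreasingly
V = Vh.conj().T

# Singular values pair up reciprocally: sigma_j * sigma_{n+1-j} = 1.
print(np.allclose(sigma * sigma[::-1], 1.0))

# If (u_j, v_j, sigma_j) is a singular triplet, so is (v_j, u_j, 1/sigma_j):
# A = A^{-1} = V Sigma^{-1} U^H implies A U = V Sigma^{-1}.
print(np.allclose(A @ U, V @ np.diag(1.0 / sigma)))

# Theorem 1: B1 = (I - A)/2 and B2 = (I + A)/2 are idempotent, with rank(B1) = r.
B1, B2 = (np.eye(n) - A) / 2, (np.eye(n) + A) / 2
print(np.allclose(B1 @ B1, B1), np.allclose(B2 @ B2, B2), np.linalg.matrix_rank(B1) == r)

# At least n - 2r singular values are equal to 1.
print(np.count_nonzero(np.isclose(sigma, 1.0)) >= n - 2 * r)
```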
Recommended publications
  • 3·Singular Value Decomposition
Embree – draft – 1 April 2017

3 · Singular Value Decomposition

Thus far we have focused on matrix factorizations that reveal the eigenvalues of a square matrix A ∈ C^{n×n}, such as the Schur factorization and the Jordan canonical form. Eigenvalue-based decompositions are ideal for analyzing the behavior of dynamical systems like x′(t) = Ax(t) or x_{k+1} = Ax_k. When it comes to solving linear systems of equations or tackling more general problems in data science, eigenvalue-based factorizations are often not so illuminating. In this chapter we develop another decomposition that provides deep insight into the rank structure of a matrix, showing the way to solving all variety of linear equations and exposing optimal low-rank approximations.

3.1 Singular Value Decomposition

The singular value decomposition (SVD) is a remarkable factorization that writes a general rectangular matrix A ∈ C^{m×n} in the form

    A = (unitary matrix) × (diagonal matrix) × (unitary matrix)*.

From the unitary matrices we can extract bases for the four fundamental subspaces R(A), N(A), R(A*), and N(A*), and the diagonal matrix will reveal much about the rank structure of A. We will build up the SVD in a four-step process. For simplicity suppose that A ∈ C^{m×n} with m ≥ n. (If m < n, apply the arguments below to A* ∈ C^{n×m}.) Note that A*A ∈ C^{n×n} is always Hermitian positive semidefinite. (Clearly (A*A)* = A*(A*)* = A*A, so A*A is Hermitian. For any x ∈ C^n, note that x*A*Ax = (Ax)*(Ax) = ‖Ax‖² ≥ 0, so A*A is positive semidefinite.)
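To make that closing observation concrete, here is a small numerical check (an editorial addition assuming Python with NumPy, not part of Embree's text): A*A is Hermitian, its eigenvalues are nonnegative, and they are the squares of the singular values of A.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

G = A.conj().T @ A                                  # the Gram matrix A*A
print(np.allclose(G, G.conj().T))                   # Hermitian
print(np.all(np.linalg.eigvalsh(G) >= -1e-12))      # eigenvalues are (numerically) nonnegative

# The eigenvalues of A*A are the squares of the singular values of A.
print(np.allclose(np.sort(np.linalg.eigvalsh(G))[::-1],
                  np.linalg.svd(A, compute_uv=False) ** 2))
```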
  • Matrices That Are Similar to Their Inverses
THE MATHEMATICAL GAZETTE

Matrices that are similar to their inverses

GRIGORE CĂLUGĂREANU

1. Introduction. In a group G, an element which is conjugate with its inverse is called real, i.e. the element and its inverse belong to the same conjugacy class. An element is called an involution if it is of order 2. With these notions it is easy to formulate the following questions. 1) Which are the (finite) groups all of whose elements are real? 2) Which are the (finite) groups such that the identity and involutions are the only real elements? 3) Which are the (finite) groups in which the real elements form a subgroup closed under multiplication? According to specialists, these (general) questions cannot be solved in any reasonable way. For example, there are numerous families of groups all of whose elements are real, like the symmetric groups S_n. There are many solvable groups whose elements are all real, and one can prove that any finite solvable group occurs as a subgroup of a solvable group whose elements are all real. As for question 2, note that in any Abelian group (conjugations are all the identity function), the only real elements are the identity and the involutions, and they form a subgroup. There are non-abelian examples as well, like a Suzuki 2-group. Question 3 is similar to questions 1 and 2. Therefore the abstract study of reality questions in finite groups is unlikely to have a good outcome. This may explain why in the existing bibliography there are only specific studies (see [1, 2, 3, 4]).
  • Centro-Invertible Matrices
Centro-invertible Matrices
Roy S Wikramaratna, RPS Energy, [email protected]
Reading University (Conference in honour of Nancy Nichols' 70th birthday), 2-3 July 2012

References
• R.S. Wikramaratna, The centro-invertible matrix: a new type of matrix arising in pseudo-random number generation, Linear Algebra and its Applications, 434 (2011), pp. 144-151 [doi:10.1016/j.laa.2010.08.011].
• R.S. Wikramaratna, Theoretical and empirical convergence results for additive congruential random number generators, J. Comput. Appl. Math., 233 (2010), pp. 2302-2311 [doi:10.1016/j.cam.2009.10.015].

Career Background
• Worked at Institute of Hydrology, 1977-1984: groundwater modelling research and consultancy; p/t MSc at Reading 1980-82 (Numerical Solution of PDEs).
• Worked at Winfrith, Dorset since 1984: UKAEA (1984-1995), AEA Technology (1995-2002), ECL Technology (2002-2005) and RPS Energy (2005 onwards); oil reservoir engineering, porous medium flow simulation and simulator development; consultancy to Oil Industry and to Government.
• Personal research interests in development and application of numerical methods to solve engineering problems, and in mathematical and numerical analysis.

Some definitions
• I is the k by k identity matrix.
• J is the k by k matrix with ones on the anti-diagonal and zeroes elsewhere, J = (j_pq) with j_pq = 1 if p + q = k + 1; for example, for k = 4,
  J = [0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0].
• Pre-multiplication by J turns a matrix 'upside down', reversing the order of terms in each column; post-multiplication by J reverses the order of terms in each row.
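A short sketch of these properties of J (an editorial illustration assuming Python with NumPy; the test matrix M is chosen here and is not part of the slides):

```python
import numpy as np

k = 4
J = np.fliplr(np.eye(k))                     # ones on the anti-diagonal, zeros elsewhere
M = np.arange(1, k * k + 1).reshape(k, k)    # an arbitrary test matrix

print(np.array_equal(J @ M, M[::-1, :]))     # pre-multiplication by J reverses the rows
print(np.array_equal(M @ J, M[:, ::-1]))     # post-multiplication by J reverses the columns
print(np.array_equal(J @ J, np.eye(k)))      # J is involutory: J^2 = I
```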
  • The Weyr Characteristic
The Weyr Characteristic
Helene Shapiro, Swarthmore College, [email protected]
Mathematics & Statistics Faculty Works, Swarthmore College, 12-1-1999

Recommended Citation: Helene Shapiro (1999). "The Weyr Characteristic". The American Mathematical Monthly, Vol. 106, No. 10 (Dec., 1999), pp. 919-929. Published by the Mathematical Association of America. DOI: 10.2307/2589746. Stable URL: http://www.jstor.org/stable/2589746. Repository copy: https://works.swarthmore.edu/fac-math-stat/28
  • On the Construction of Lightweight Circulant Involutory MDS Matrices
On the Construction of Lightweight Circulant Involutory MDS Matrices
Yongqiang Li (a, b), Mingsheng Wang (a)
a. State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
b. Science and Technology on Communication Security Laboratory, Chengdu, China
[email protected], [email protected]

Abstract. In the present paper, we investigate the problem of constructing MDS matrices with as few bit XOR operations as possible. The key contribution of the present paper is constructing MDS matrices with entries in the set of m × m non-singular matrices over F_2 directly, and the linear transformations we used to construct MDS matrices are not assumed pairwise commutative. With this method, it is shown that circulant involutory MDS matrices, which have been proved not to exist over the finite field F_{2^m}, can be constructed by using non-commutative entries. Some constructions of 4 × 4 and 5 × 5 circulant involutory MDS matrices are given when m = 4, 8. To the best of our knowledge, it is the first time that circulant involutory MDS matrices have been constructed. Furthermore, some lower bounds on the XORs required to evaluate one row of circulant and Hadamard MDS matrices of order 4 are given when m = 4, 8. Some constructions achieving the bound are also given, which have fewer XORs than previous constructions.

Keywords: MDS matrix, circulant involutory matrix, Hadamard matrix, lightweight

1 Introduction. Linear diffusion layer is an important component of symmetric cryptography which provides internal dependency for symmetric cryptography algorithms. The performance of a diffusion layer is measured by branch number.
  • Equivalent Conditions for Singular and Nonsingular Matrices
Equivalent Conditions for Singular and Nonsingular Matrices. Let A be an n × n matrix. Any pair of statements in the same column are equivalent.

A is singular (A^{-1} does not exist).  |  A is nonsingular (A^{-1} exists).
Rank(A) < n.  |  Rank(A) = n.
|A| = 0.  |  |A| ≠ 0.
A is not row equivalent to I_n.  |  A is row equivalent to I_n.
AX = O has a nontrivial solution for X.  |  AX = O has only the trivial solution for X.
AX = B does not have a unique solution (no solutions or infinitely many solutions).  |  AX = B has a unique solution for X (namely, X = A^{-1}B).
The rows of A do not form a basis for R^n.  |  The rows of A form a basis for R^n.
The columns of A do not form a basis for R^n.  |  The columns of A form a basis for R^n.
The linear operator L: R^n → R^n given by L(X) = AX is not an isomorphism.  |  The linear operator L: R^n → R^n given by L(X) = AX is an isomorphism.

Diagonalization Method. To diagonalize (if possible) an n × n matrix A:
Step 1: Calculate p_A(x) = |xI_n − A|.
Step 2: Find all real roots of p_A(x) (that is, all real solutions to p_A(x) = 0). These are the eigenvalues λ_1, λ_2, λ_3, ..., λ_k for A.
Step 3: For each eigenvalue λ_m in turn: row reduce the augmented matrix [λ_m I_n − A | 0]. Use the result to obtain a set of particular solutions of the homogeneous system (λ_m I_n − A)X = 0 by setting each independent variable in turn equal to 1 and all other independent variables equal to 0.
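In code, the same diagonalization can be sketched as follows (an editorial illustration assuming Python with NumPy; it delegates the steps to NumPy's eigensolver instead of the hand row-reduction above, and the 2 × 2 matrix is only a test case chosen here):

```python
import numpy as np

def diagonalize(A, tol=1e-10):
    """Return (P, D) with A = P D P^{-1} if A is diagonalizable, otherwise None."""
    eigvals, eigvecs = np.linalg.eig(A)      # eigenvalues and eigenvectors of A
    P, D = eigvecs, np.diag(eigvals)
    # A is diagonalizable iff it has n linearly independent eigenvectors.
    if np.linalg.matrix_rank(P, tol=tol) < A.shape[0]:
        return None
    return P, D

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                   # eigenvalues 5 and 2
P, D = diagonalize(A)
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True: A = P D P^{-1}
```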
  • Zero-Sum Triangles for Involutory, Idempotent, Nilpotent and Unipotent Matrices
Zero-Sum Triangles for Involutory, Idempotent, Nilpotent and Unipotent Matrices
Pengwei Hao (1), Chao Zhang (2), Huahan Hao (3)

Abstract: In some matrix formations, factorizations and transformations, we need special matrices with some properties and we wish that such matrices should be easily and simply generated and of integers. In this paper, we propose a zero-sum rule for the recurrence relations to construct integer triangles as triangular matrices with involutory, idempotent, nilpotent and unipotent properties, especially nilpotent and unipotent matrices of index 2. With the zero-sum rule we also give the conditions for the special matrices and the generic methods for the generation of those special matrices. The generated integer triangles are mostly newly discovered, and more combinatorial identities can be found with them.

Keywords: zero-sum rule, triangles of numbers, generation of special matrices, involutory, idempotent, nilpotent and unipotent matrices

1. Introduction. Zero-sum was coined in the early 1940s in the fields of game theory and economic theory, and the term "zero-sum game" is used to describe a situation where the sum of gains made by one person or group is lost in equal amounts by another person or group, so that the net outcome is neutral, or the sum of losses and gains is zero [1]. It was later also frequently used in psychology and political science. In this work, we apply the zero-sum rule to three adjacent cells to construct integer triangles. Pascal's triangle is named after the French mathematician Blaise Pascal, but its history can be traced back almost 2,000 years: it is referred to as the Staircase of Mount Meru in India, the Khayyam triangle in Persia (Iran), Yang Hui's triangle in China, Apianus's Triangle in Germany, and Tartaglia's triangle in Italy [2].
  • Matrix Algebra and Control
Appendix A: Matrix Algebra and Control

Boldface lower case letters, e.g., a or b, denote vectors; boldface capital letters, e.g., A, M, denote matrices. A vector is a column matrix. Containing m elements (entries) it is referred to as an m-vector. The number of rows and columns of a matrix A is n and m, respectively. Then, A is an (n, m)-matrix or n × m-matrix (dimension n × m). The matrix A is called positive or non-negative if A > 0 or A ≥ 0, respectively, i.e., if the elements are real, positive and non-negative, respectively.

A.1 Matrix Multiplication. Two matrices A and B may only be multiplied, C = AB, if they are conformable. A has size n × m, B has size m × r, and C has size n × r. Two matrices are conformable for multiplication if the number m of columns of the first matrix A equals the number m of rows of the second matrix B. Kronecker matrix products do not require conformable multiplicands. The elements (entries) of the matrices are related as follows: c_ij = Σ_{ν=1}^{m} a_iν b_νj for all i = 1 ... n, j = 1 ... r. The jth column vector C_{·j} of the matrix C, as denoted in Eq. (A.13), can be calculated from the columns A_{·ν} and the entries B_{νj}; the jth row C_{j·} from the rows B_{ν·} and the entries A_{jν}:

    C_{·j} = Σ_{ν=1}^{m} A_{·ν} B_{νj},    C_{j·} = ((C^T)_{·j})^T = Σ_{ν=1}^{m} A_{jν} B_{ν·}.   (A.1)

A matrix product AB may be zero although neither the multiplicand A nor the multiplicator B is zero.
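One concrete instance of that last remark, together with a conformability check (an editorial sketch assuming Python with NumPy; the particular matrices are chosen here and need not match the book's example):

```python
import numpy as np

# Both factors are nonzero, yet their product is the zero matrix.
A = np.array([[1.0,  1.0],
              [-1.0, -1.0]])
B = np.array([[1.0,  1.0],
              [-1.0, -1.0]])
print(A @ B)                      # [[0. 0.] [0. 0.]]

# Conformability: an (n x m) matrix times an (m x r) matrix gives an (n x r) matrix.
C = np.ones((2, 3)) @ np.ones((3, 4))
print(C.shape)                    # (2, 4)
```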
  • Bidirected Graph (from Wikipedia, the Free Encyclopedia)
Bidirected graph. From Wikipedia, the free encyclopedia.

Contents
1 Bidirected graph: 1.1 Other meanings; 1.2 See also; 1.3 References
2 Bipartite double cover: 2.1 Construction; 2.2 Examples; 2.3 Matrix interpretation; 2.4 Properties; 2.5 Other double covers; 2.6 See also; 2.7 Notes; 2.8 References; 2.9 External links
3 Complex question: 3.1 Implication by question; 3.2 Complex question fallacy (3.2.1 Similar questions and fallacies); 3.3 Notes
4 Directed graph: 4.1 Basic terminology; 4.2 Indegree and outdegree; 4.3 Degree sequence; 4.4 Digraph connectivity; 4.5 Classes of digraphs
  • About Circulant Involutory MDS Matrices
About Circulant Involutory MDS Matrices
Victor Cauchois, Pierre Loidreau (DGA MI and Université de Rennes 1)
International Workshop on Coding and Cryptography, Sep 2017, Saint-Pétersbourg, Russia. HAL Id: hal-01673463, https://hal.archives-ouvertes.fr/hal-01673463, submitted on 29 Dec 2017.

Abstract. We give a new algebraic proof of the non-existence of circulant involutory MDS matrices with coefficients in fields of even characteristic. For odd characteristics we give parameters for the potential existence. If we relax circulancy to θ-circulancy, then there is no restriction to the existence of θ-circulant involutory MDS matrices even for fields of even characteristic. Finally, we relax further the involutory definition and propose a new direct construction of almost involutory θ-circulant MDS matrices. We show that they can be interesting in hardware implementations.

1 Introduction. MDS matrices offer the maximal diffusion of symbols for cryptographic applications. To reduce implementation costs, research focuses on circulant and recursive matrices, which need only a linear number of multipliers to compute the matrix-vector product.
  • Matrices and Determinants
MATRICES AND DETERMINANTS

MATRICES
1. Matrix: an arrangement of the elements in rows and columns.
2. Order of the matrix: (number of rows) × (number of columns).
3. Types of matrix:
   1. Zero matrix: a matrix in which each element is zero is called a zero matrix or null matrix.
   2. Row matrix: a matrix having only one row.
   3. Column matrix: a matrix having only one column.
   4. Square matrix: a matrix in which the number of rows is equal to the number of columns. Principal diagonal: the diagonal from top left to bottom right.
   5. Diagonal matrix: a matrix in which all the elements except the principal diagonal elements are zero.
   6. Scalar matrix: a diagonal matrix in which all the principal diagonal elements are equal.
   7. Unit matrix or identity matrix: a diagonal matrix in which each principal diagonal entry is one.
   8. Idempotent matrix: a matrix A is said to be idempotent if A² = A.
   9. Nilpotent matrix: a square matrix A is called nilpotent if there exists a positive integer n such that Aⁿ = 0. If m is the least positive integer such that Aᵐ = 0, then m is called the index of the nilpotent matrix.
   10. Involutory matrix: a matrix A is said to be involutory if A² = I.
   11. Upper triangular matrix: a matrix in which all the elements below the principal diagonal are zero.
   12. Lower triangular matrix: a matrix in which all the elements above the principal diagonal are zero.
4. Trace of a matrix: the trace of a matrix A, denoted tr(A), is defined as the sum of the principal diagonal elements.
5. Two matrices are said to be equal if they have the same order and the corresponding elements are also equal.
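These definitions translate directly into small checks; the sketch below is an editorial illustration assuming Python with NumPy (the test matrices are chosen here, not taken from the notes):

```python
import numpy as np

def is_idempotent(A):
    return np.allclose(A @ A, A)                       # A^2 = A

def is_involutory(A):
    return np.allclose(A @ A, np.eye(A.shape[0]))      # A^2 = I

def nilpotent_index(A):
    """Smallest m with A^m = 0, or None if the square matrix A is not nilpotent."""
    n = A.shape[0]
    P = A.copy()
    for m in range(1, n + 1):                          # a nilpotent n x n matrix has index <= n
        if np.allclose(P, 0):
            return m
        P = P @ A
    return None

A = np.array([[0.0, 1.0], [1.0, 0.0]])                 # a symmetric permutation matrix
N = np.array([[0.0, 1.0], [0.0, 0.0]])                 # strictly upper triangular
print(is_involutory(A), is_idempotent(A @ A))          # True True
print(nilpotent_index(N), np.trace(A))                 # 2 0.0
```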
  • Factorizations Into Normal Matrices in Indefinite Inner Product Spaces
Factorizations into Normal Matrices in Indefinite Inner Product Spaces
Xuefang Sui (1), Paolo Gondolo (2)
Department of Physics and Astronomy, University of Utah, 115 South 1400 East #201, Salt Lake City, UT 84112-0830
arXiv:1610.09742v1 [math.RA] 31 Oct 2016. Preprint submitted to Linear Algebra and its Applications, May 11, 2018.

Abstract. We show that any nonsingular (real or complex) square matrix can be factorized into a product of at most three normal matrices, one of which is unitary, another selfadjoint with eigenvalues in the open right half-plane, and the third one is normal involutory with a neutral negative eigenspace (we call the latter matrices normal neutral involutory). Here the words normal, unitary, selfadjoint and neutral are understood with respect to an indefinite inner product.

Keywords: Indefinite inner product, Polar decomposition, Sign function, Normal matrix, Neutral involution
2010 MSC: 15A23, 15A63, 47B50, 15A21, 15B99

1. Introduction. We consider two kinds of indefinite inner products: a complex Hermitian inner product and a real symmetric inner product. Let K denote either the field of complex numbers K = C or the field of real numbers K = R. Let the nonsingular matrix H ∈ K^{n×n} be either a complex Hermitian matrix H ∈ C^{n×n} defining a complex Hermitian inner product [x, y]_H = x̄^T H y for x, y ∈ C^n, or a real symmetric matrix H ∈ R^{n×n} defining a real symmetric inner product [x, y]_H = x^T H y for x, y ∈ R^n. Here x̄^T indicates the complex conjugate transpose of x and x^T indicates the transpose of x. In a Euclidean space, H ∈ K^{n×n} is the identity matrix.

1 Email address: [email protected]
2 Email address: [email protected]