Kronecker Commutation Matrices and Particle Physics

Christian Rakotonirina*

Civil Engineering Department, Institut Supérieur de Technologie d'Antananarivo, IST-T
Laboratoire de la Dynamique de l'Atmosphère, du Climat et des Océans, DYACO, University of Antananarivo, Madagascar

* [email protected]

December 5, 2016

Abstract

In this paper, formulas giving the Kronecker commutation matrices (KCMs) in terms of some matrices of particle physics, and formulas giving electric charge operators (ECOs) for fundamental fermions in terms of KCMs, are reviewed. A physical meaning is given to the eigenvalues and eigenvectors of a KCM.

Introduction

The Kronecker or tensor commutation matrices (KCMs) are matrices which commute the Kronecker or tensor product of matrices, so we can think of using them wherever the Kronecker product is used. The Kronecker product appears in many branches of physics and mathematics: in quantum information theory, optics, matrix equations and the algebraic Bethe ansatz.

One can remark that a wave function of two identical fermions is an eigenfunction of a KCM associated to the eigenvalue -1, whereas a wave function of two identical bosons is an eigenfunction of a KCM associated to the eigenvalue +1. So it is natural to ask what meaning can be given, in particle physics, to these eigenvalues, their multiplicities and the associated eigenvectors. The KCMs already have relations with some matrices of particle physics, and an electric charge operator (ECO) for fundamental fermions can be constructed by using a KCM.

After the first section, which presents the Kronecker product and the mathematical definition of a KCM, the second section reviews the expression of the KCMs in terms of some matrices of particle physics; the third section then reviews the ECOs built with the KCMs. Finally, the fourth section gives the eigenvalues of the KCMs with their multiplicities and the associated eigenvectors, with some examples of their meaning in particle physics. We take as system of units the natural units $\hbar = c = 1$ and as unit of charge the charge of an electron, $e$.

1 Kronecker Commutation Matrices

The Kronecker product of a matrix $A = (A^i_j) \in \mathbb{C}^{m \times n}$ by another matrix $B$,

$$A \otimes B = \begin{pmatrix}
A^1_1 B & \dots & A^1_j B & \dots & A^1_n B \\
\vdots  &       & \vdots  &       & \vdots  \\
A^i_1 B & \dots & A^i_j B & \dots & A^i_n B \\
\vdots  &       & \vdots  &       & \vdots  \\
A^m_1 B & \dots & A^m_j B & \dots & A^m_n B
\end{pmatrix},$$

is not commutative: in general $A \otimes B \neq B \otimes A$.

The Kronecker commutation matrix (KCM) $K$ is the matrix satisfying

$$K\,(a \otimes b) = b \otimes a$$

where $a$ and $b$ are column matrices. The KCM $K_{2\otimes 2}$ commutes two two-dimensional column matrices,

$$K_{2\otimes 2}\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} \otimes \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} \otimes \begin{pmatrix} a_1 \\ a_2 \end{pmatrix},$$

the KCM $K_{3\otimes 3}$ commutes two three-dimensional column matrices,

$$K_{3\otimes 3}\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} \otimes \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} \otimes \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix},$$

and so on.
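As a quick numerical illustration (ours, not part of the paper), the following numpy sketch builds $K_{n\otimes n}$ from the standard elementary-matrix construction $K_{n\otimes n} = \sum_{i,j} E_{ij} \otimes E_{ji}$ and checks the defining property $K(a \otimes b) = b \otimes a$; the helper name `kcm` is an assumption of this sketch.

```python
import numpy as np

def kcm(n):
    """Kronecker commutation matrix K_{n(x)n}, built as sum_{i,j} E_ij (x) E_ji
    (a standard construction, not taken from the paper)."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            E_ij = np.zeros((n, n))
            E_ij[i, j] = 1.0
            K += np.kron(E_ij, E_ij.T)   # E_ij (x) E_ji
    return K

rng = np.random.default_rng(0)
for n in (2, 3, 4):
    K = kcm(n)
    a = rng.normal(size=(n, 1))
    b = rng.normal(size=(n, 1))
    # Defining property of the KCM: K (a (x) b) = b (x) a
    assert np.allclose(K @ np.kron(a, b), np.kron(b, a))

print(kcm(2).astype(int))  # reproduces the 4x4 matrix K_{2(x)2} displayed below
```

For $n = 2$ and $n = 3$ this reproduces the explicit permutation matrices given next.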
Explicitly, the two KCMs above are

$$K_{2\otimes 2} = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}, \qquad
K_{3\otimes 3} = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$

2 KCMs and Matrices of Particle Physics

The KCMs $K_{2\otimes 2}$ and $K_{3\otimes 3}$ can be expressed, respectively, in terms of the Pauli matrices (see, for example, [1]) and the Gell-Mann matrices [2] in the following ways:

$$K_{2\otimes 2} = \frac{1}{2}\, I_2 \otimes I_2 + \frac{1}{2}\sum_{i=1}^{3} \sigma_i \otimes \sigma_i$$

where $I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is the $2 \times 2$ unit matrix and

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$

are the Pauli matrices, and

$$K_{3\otimes 3} = \frac{1}{3}\, I_3 \otimes I_3 + \frac{1}{2}\sum_{i=1}^{8} \lambda_i \otimes \lambda_i$$

where

$$\lambda_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
\lambda_2 = \begin{pmatrix} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
\lambda_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
\lambda_4 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix},$$

$$\lambda_5 = \begin{pmatrix} 0 & 0 & -i \\ 0 & 0 & 0 \\ i & 0 & 0 \end{pmatrix}, \quad
\lambda_6 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad
\lambda_7 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{pmatrix}, \quad
\lambda_8 = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}$$

are the Gell-Mann matrices, or $3 \times 3$ Gell-Mann matrices. A variant of this last formula, with another definition of the Gell-Mann matrices [3], has recently found application in optics. We say "variant" because that paper expresses the KCM $K_{3\otimes 3}$ in terms of other Gell-Mann matrices, following its own definition of the Gell-Mann matrices. As a generalization [4],

$$K_{n\otimes n} = \frac{1}{n}\, I_n \otimes I_n + \frac{1}{2}\sum_{i=1}^{n^2-1} \Lambda_i \otimes \Lambda_i$$

where the $\Lambda_i$'s are the $n \times n$ Gell-Mann matrices.

3 KCMs and Charges of Fundamental Fermions

In this section we construct ECOs for fundamental fermions using KCMs [5]. An ECO constructed here can have the charges of leptons and of quarks together as eigenvalues. Let us first recall that the fundamental fermions carry the quantum numbers $J_3$, the third component of isospin, and $Y$, the hypercharge. The electric charge $Q$ of a fermion is given by the Gell-Mann-Nishijima formula

$$Q = J_3 + \frac{Y}{2} \qquad (1)$$

For the fermions of the standard model (SM) these quantum numbers are given in the following table [6], where the superscripts r, b, g label the quark colours:

                                                                                      Q      J_3     Y
Neutral leptons   nu_eL, nu_muL, nu_tauL                                              0      1/2    -1
Charged leptons   e_L, mu_L, tau_L                                                   -1     -1/2    -1
                  e_R, mu_R, tau_R                                                   -1      0      -2
Quarks u, c, t    u_L^r, u_L^b, u_L^g, c_L^r, c_L^b, c_L^g, t_L^r, t_L^b, t_L^g       2/3    1/2     1/3
                  u_R^r, u_R^b, u_R^g, c_R^r, c_R^b, c_R^g, t_R^r, t_R^b, t_R^g       2/3    0       4/3
Quarks d, s, b    d_L^r, d_L^b, d_L^g, s_L^r, s_L^b, s_L^g, b_L^r, b_L^b, b_L^g      -1/3   -1/2     1/3
                  d_R^r, d_R^b, d_R^g, s_R^r, s_R^b, s_R^g, b_R^r, b_R^b, b_R^g      -1/3    0      -2/3

A matrix form of the Gell-Mann-Nishijima relation for eight leptons and quarks of the SM of the same generation, for example $e_L$, $\nu_{eL}$, $u_L^r$, $u_L^b$, $u_L^g$, $d_L^r$, $d_L^b$ and $d_L^g$, has been proposed in [7]:

$$Q = \underbrace{\frac{1}{2}\,\sigma_0 \otimes \sigma_0 \otimes \sigma_3}_{J_3} + \underbrace{\frac{1}{6}\sum_{i=1}^{3} \sigma_i \otimes \sigma_i \otimes \sigma_0}_{Y/2} \qquad (2)$$

where $\sigma_0 = I_2$. The same ECO can be obtained from the relation between the following ECO of leptons, $Q_L = \begin{pmatrix} 0 & 0 \\ 0 & -1 \end{pmatrix}$, and ECO of quarks, $Q_Q = \begin{pmatrix} 2/3 & 0 \\ 0 & -1/3 \end{pmatrix}$:

$$Q_Q - Q_L = \frac{2}{3}\, I_2.$$

This leads to the same ECO (2), but now expressed in terms of the KCM $K_{2\otimes 2}$,

$$Q = \sigma_0 \otimes \sigma_0 \otimes Q_L + \frac{1}{3}\left(K_{2\otimes 2} + \sigma_0 \otimes \sigma_0\right) \otimes \sigma_0$$

or

$$Q = \sigma_0 \otimes \sigma_0 \otimes Q_Q + \frac{1}{3}\left(K_{2\otimes 2} - \sigma_0 \otimes \sigma_0\right) \otimes \sigma_0.$$

We can remark from these formulas that the eigenvalue -1 of the KCM $K_{2\otimes 2}$ is associated to the charges of leptons, whereas the eigenvalue +1 is associated to the charges of quarks: on the eigenvalue -1 (antisymmetric) subspace the operator reduces to $Q_L$, while on the eigenvalue +1 subspace it reduces to $Q_Q$, the threefold degeneracy of +1 matching the three quark colours.
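The following self-contained numpy check (an illustrative sketch we add; the names s0, paulis, K22, QL are ours) verifies the Pauli decomposition of $K_{2\otimes 2}$, the equivalence of Eq. (2) with the $K_{2\otimes 2}$ form above, and that the eigenvalues of $Q$ are exactly the lepton charges 0 and -1 (once each) and the quark charges 2/3 and -1/3 (three times each, one per colour).

```python
import numpy as np

# Pauli matrices; sigma_0 is the 2x2 identity
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s1, s2, s3]

K22 = np.array([[1, 0, 0, 0],
                [0, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 1]], dtype=complex)

# K_{2(x)2} = (1/2) I(x)I + (1/2) sum_i sigma_i (x) sigma_i
assert np.allclose(K22, 0.5 * np.kron(s0, s0) + 0.5 * sum(np.kron(si, si) for si in paulis))

# ECO of Eq. (2): Q = (1/2) s0(x)s0(x)s3 + (1/6) sum_i si(x)si(x)s0
Q = 0.5 * np.kron(np.kron(s0, s0), s3) \
    + (1 / 6) * sum(np.kron(np.kron(si, si), s0) for si in paulis)

# Equivalent form in terms of K_{2(x)2} and the lepton ECO Q_L = diag(0, -1)
QL = np.diag([0.0, -1.0]).astype(complex)
Q_alt = np.kron(np.kron(s0, s0), QL) + (1 / 3) * np.kron(K22 + np.kron(s0, s0), s0)
assert np.allclose(Q, Q_alt)

# Eigenvalues: lepton charges 0 and -1 once each, quark charges 2/3 and -1/3 three times each
eig = np.linalg.eigvalsh(Q)                      # ascending order
expected = np.sort([-1, 0, 2/3, 2/3, 2/3, -1/3, -1/3, -1/3])
assert np.allclose(eig, expected)
print(np.round(eig, 3))
```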
For including more than eight fundamental fermions, let us construct an ECO in terms of the KCM $K_{3\otimes 3}$. For doing so, let us take the following ECOs, respectively of leptons and of quarks:

$$Q_L = \begin{pmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix},$$

whose diagonal is formed by the electric charge of a neutrino and the electric charges of two charged leptons, and

$$Q_Q = \begin{pmatrix} 2/3 & 0 & 0 \\ 0 & -1/3 & 0 \\ 0 & 0 & -1/3 \end{pmatrix},$$

whose diagonal is formed by the electric charge of an up-type quark (u, c (charm) or t (top)) and the electric charges of two down-type quarks (d, s (strange) or b (bottom)). They satisfy

$$Q_Q - Q_L = \frac{2}{3}\, I_3. \qquad (3)$$

Then we have an ECO for some fermions of the SM,

$$Q = I_3 \otimes I_3 \otimes Q_L + \frac{1}{3}\left(K_{3\otimes 3} + I_3 \otimes I_3\right) \otimes I_3$$

or

$$Q = I_3 \otimes I_3 \otimes Q_Q + \frac{1}{3}\left(K_{3\otimes 3} - I_3 \otimes I_3\right) \otimes I_3.$$

4 Eigenvalues of KCMs in Particle Physics

The following relations give the eigenvalues of the KCMs with their multiplicities:

$$K_{2\otimes 2} \equiv \mathrm{diag}(-1, \underbrace{+1, +1, +1}_{3\ \mathrm{times}})$$

$$K_{3\otimes 3} \equiv \mathrm{diag}(\underbrace{-1, -1, -1}_{3\ \mathrm{times}}, \underbrace{+1, \dots, +1}_{6\ \mathrm{times}})$$

$$K_{4\otimes 4} \equiv \mathrm{diag}(\underbrace{-1, \dots, -1}_{6\ \mathrm{times}}, \underbrace{+1, \dots, +1}_{10\ \mathrm{times}})$$

and so on.
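A quick numerical confirmation of these multiplicities (an illustrative sketch we add, not part of the paper): the eigenvalue -1 of $K_{n\otimes n}$ appears $n(n-1)/2$ times (antisymmetric subspace) and +1 appears $n(n+1)/2$ times (symmetric subspace), i.e. 1 and 3 for $n = 2$, 3 and 6 for $n = 3$, 6 and 10 for $n = 4$, in agreement with the relations above.

```python
import numpy as np
from collections import Counter

def kcm(n):
    """Kronecker commutation matrix K_{n(x)n}: K maps e_j (x) e_i to e_i (x) e_j."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[i * n + j, j * n + i] = 1.0
    return K

for n in (2, 3, 4):
    eig = np.round(np.linalg.eigvalsh(kcm(n))).astype(int)
    counts = Counter(eig.tolist())
    # -1 on the antisymmetric subspace, +1 on the symmetric subspace
    assert counts[-1] == n * (n - 1) // 2
    assert counts[+1] == n * (n + 1) // 2
    print(f"K_{n}(x){n}: eigenvalue -1 appears {counts[-1]} times, +1 appears {counts[+1]} times")
```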