The Matrix Cookbook [ http://matrixcookbook.com ]

Total Pages: 16

File Type: pdf, Size: 1020 KB

The Matrix Cookbook [ http://matrixcookbook.com ]
Kaare Brandt Petersen, Michael Syskind Pedersen
Version: November 15, 2012

1 Introduction

What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.

Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large amount of sources. These sources include similar but shorter notes found on the internet and appendices in books - see the references for a full list.

Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at [email protected].

It's ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing and the version will be apparent from the date in the header.

Suggestions: Your suggestion for additional content or elaboration of some topics is most welcome at [email protected].

Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix.

Acknowledgements: We would like to thank the following for contributions and suggestions: Bill Baxter, Brian Templeton, Christian Rishøj, Christian Schröppel, Dan Boley, Douglas L. Theobald, Esben Hoegh-Rasmussen, Evripidis Karseras, Georg Martius, Glynne Casteel, Jan Larsen, Jun Bin Gao, Jürgen Struckmeier, Kamil Dedecius, Karim T. Abou-Moustafa, Korbinian Strimmer, Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut, Markus Froeb, Michael Hubatka, Miguel Barão, Ole Winther, Pavel Sakov, Stephan Hattinger, Troels Pedersen, Vasile Sima, Vincent Rabaud, Zhaoshui He. We would also like to thank The Oticon Foundation for funding our PhD studies.

Contents

1 Basics
  1.1 Trace
  1.2 Determinant
  1.3 The Special Case 2x2

2 Derivatives
  2.1 Derivatives of a Determinant
  2.2 Derivatives of an Inverse
  2.3 Derivatives of Eigenvalues
  2.4 Derivatives of Matrices, Vectors and Scalar Forms
  2.5 Derivatives of Traces
  2.6 Derivatives of vector norms
  2.7 Derivatives of matrix norms
  2.8 Derivatives of Structured Matrices

3 Inverses
  3.1 Basic
  3.2 Exact Relations
  3.3 Implication on Inverses
  3.4 Approximations
  3.5 Generalized Inverse
  3.6 Pseudo Inverse

4 Complex Matrices
  4.1 Complex Derivatives
  4.2 Higher order and non-linear derivatives
  4.3 Inverse of complex sum

5 Solutions and Decompositions
  5.1 Solutions to linear equations
  5.2 Eigenvalues and Eigenvectors
  5.3 Singular Value Decomposition
  5.4 Triangular Decomposition
  5.5 LU decomposition
  5.6 LDM decomposition
  5.7 LDL decompositions

6 Statistics and Probability
  6.1 Definition of Moments
  6.2 Expectation of Linear Combinations
  6.3 Weighted Scalar Variable

7 Multivariate Distributions
  7.1 Cauchy
  7.2 Dirichlet
  7.3 Normal
  7.4 Normal-Inverse Gamma
  7.5 Gaussian
  7.6 Multinomial
  7.7 Student's t
  7.8 Wishart
  7.9 Wishart, Inverse

8 Gaussians
  8.1 Basics
  8.2 Moments
  8.3 Miscellaneous
  8.4 Mixture of Gaussians

9 Special Matrices
  9.1 Block matrices
  9.2 Discrete Fourier Transform Matrix, The
  9.3 Hermitian Matrices and skew-Hermitian
  9.4 Idempotent Matrices
  9.5 Orthogonal matrices
  9.6 Positive Definite and Semi-definite Matrices
  9.7 Singleentry Matrix, The
  9.8 Symmetric, Skew-symmetric/Antisymmetric
  9.9 Toeplitz Matrices
  9.10 Transition matrices
  9.11 Units, Permutation and Shift
  9.12 Vandermonde Matrices

10 Functions and Operators
  10.1 Functions and Series
  10.2 Kronecker and Vec Operator
  10.3 Vector Norms
  10.4 Matrix Norms
  10.5 Rank
  10.6 Integral Involving Dirac Delta Functions
  10.7 Miscellaneous

A One-dimensional Results
  A.1 Gaussian
  A.2 One Dimensional Mixture of Gaussians

B Proofs and Details
  B.1 Misc Proofs

Notation and Nomenclature

A            Matrix
A_ij         Matrix indexed for some purpose
A_i          Matrix indexed for some purpose
A^ij         Matrix indexed for some purpose
A^n          Matrix indexed for some purpose or the n.th power of a square matrix
A^{-1}       The inverse matrix of the matrix A
A^+          The pseudo inverse matrix of the matrix A (see Sec. 3.6)
A^{1/2}      The square root of a matrix (if unique), not elementwise
(A)_ij       The (i,j).th entry of the matrix A
A_ij         The (i,j).th entry of the matrix A
[A]_ij       The ij-submatrix, i.e. A with i.th row and j.th column deleted
a            Vector (column-vector)
a_i          Vector indexed for some purpose
a_i          The i.th element of the vector a
a            Scalar
Re z         Real part of a scalar
Re z         Real part of a vector
Re Z         Real part of a matrix
Im z         Imaginary part of a scalar
Im z         Imaginary part of a vector
Im Z         Imaginary part of a matrix
det(A)       Determinant of A
Tr(A)        Trace of the matrix A
diag(A)      Diagonal matrix of the matrix A, i.e. (diag(A))_ij = δ_ij A_ij
eig(A)       Eigenvalues of the matrix A
vec(A)       The vector-version of the matrix A (see Sec. 10.2.2)
sup          Supremum of a set
||A||        Matrix norm (subscript if any denotes what norm)
A^T          Transposed matrix
A^{-T}       The inverse of the transposed and vice versa, A^{-T} = (A^{-1})^T = (A^T)^{-1}
A^*          Complex conjugated matrix
A^H          Transposed and complex conjugated matrix (Hermitian)
A ∘ B        Hadamard (elementwise) product
A ⊗ B        Kronecker product
0            The null matrix. Zero in all entries.
I            The identity matrix
J^ij         The single-entry matrix, 1 at (i,j) and zero elsewhere
Σ            A positive definite matrix
Λ            A diagonal matrix

1 Basics

(AB)^{-1} = B^{-1} A^{-1}                                                (1)
(ABC...)^{-1} = ... C^{-1} B^{-1} A^{-1}                                 (2)
(A^T)^{-1} = (A^{-1})^T                                                  (3)
(A + B)^T = A^T + B^T                                                    (4)
(AB)^T = B^T A^T                                                         (5)
(ABC...)^T = ... C^T B^T A^T                                             (6)
(A^H)^{-1} = (A^{-1})^H                                                  (7)
(A + B)^H = A^H + B^H                                                    (8)
(AB)^H = B^H A^H                                                         (9)
(ABC...)^H = ... C^H B^H A^H                                             (10)

1.1 Trace

Tr(A) = Σ_i A_ii                                                         (11)
Tr(A) = Σ_i λ_i,   λ_i = eig(A)                                          (12)
Tr(A) = Tr(A^T)                                                          (13)
Tr(AB) = Tr(BA)                                                          (14)
Tr(A + B) = Tr(A) + Tr(B)                                                (15)
Tr(ABC) = Tr(BCA) = Tr(CAB)                                              (16)
a^T a = Tr(a a^T)                                                        (17)
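As a quick sanity check of the trace identities above, here is a small NumPy sketch (added for this page, not part of the original cookbook; the matrix size n = 4 and the random seed are arbitrary choices) that verifies equations (11)-(17) numerically on random matrices.

    # Numerical check of trace identities (11)-(17) on random matrices.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    C = rng.standard_normal((n, n))
    a = rng.standard_normal((n, 1))

    # (11) Tr(A) = sum of diagonal entries, (12) Tr(A) = sum of eigenvalues
    assert np.isclose(np.trace(A), A.diagonal().sum())
    assert np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real)

    # (13)-(16) invariance under transpose, linearity, cyclic permutations
    assert np.isclose(np.trace(A), np.trace(A.T))
    assert np.isclose(np.trace(A @ B), np.trace(B @ A))
    assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
    assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))

    # (17) a^T a = Tr(a a^T)
    assert np.isclose((a.T @ a).item(), np.trace(a @ a.T))
    print("trace identities verified numerically")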
1.2 Determinant

Let A be an n × n matrix.

det(A) = Π_i λ_i,   λ_i = eig(A)                                         (18)
det(cA) = c^n det(A),   if A ∈ R^{n×n}                                   (19)
det(A^T) = det(A)                                                        (20)
det(AB) = det(A) det(B)                                                  (21)
det(A^{-1}) = 1 / det(A)                                                 (22)
det(A^n) = det(A)^n                                                      (23)
det(I + u v^T) = 1 + u^T v                                               (24)

For n = 2:

det(I + A) = 1 + det(A) + Tr(A)                                          (25)

For n = 3:

det(I + A) = 1 + det(A) + Tr(A) + (1/2) Tr(A)^2 - (1/2) Tr(A^2)          (26)

For n = 4:

det(I + A) = 1 + det(A) + Tr(A) + (1/2) Tr(A)^2 - (1/2) Tr(A^2)
             + (1/6) Tr(A)^3 - (1/2) Tr(A) Tr(A^2) + (1/3) Tr(A^3)       (27)

For small ε, the following approximation holds:

det(I + εA) ≅ 1 + det(A) + ε Tr(A) + (1/2) ε^2 Tr(A)^2 - (1/2) ε^2 Tr(A^2)   (28)

1.3 The Special Case 2x2

Consider the matrix A:

    A = [ A_11  A_12 ]
        [ A_21  A_22 ]

Determinant and trace:

det(A) = A_11 A_22 - A_12 A_21                                           (29)
Tr(A) = A_11 + A_22                                                      (30)

Eigenvalues:

λ^2 - λ Tr(A) + det(A) = 0

λ_1 = ( Tr(A) + sqrt( Tr(A)^2 - 4 det(A) ) ) / 2
λ_2 = ( Tr(A) - sqrt( Tr(A)^2 - 4 det(A) ) ) / 2

λ_1 + λ_2 = Tr(A)
λ_1 λ_2 = det(A)

Eigenvectors:

v_1 ∝ [ A_12 ; λ_1 - A_11 ]
v_2 ∝ [ A_12 ; λ_2 - A_11 ]

Inverse:

A^{-1} = (1/det(A)) [  A_22  -A_12 ]                                     (31)
                    [ -A_21   A_11 ]

2 Derivatives

This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.8 for differentiation of structured matrices. The basic assumptions can be written in a formula as

∂X_kl / ∂X_ij = δ_ik δ_lj                                                (32)

that is, for e.g. vector forms,

[∂x/∂y]_i = ∂x_i/∂y,   [∂x/∂y]_i = ∂x/∂y_i,   [∂x/∂y]_ij = ∂x_i/∂y_j

The following rules are general and very useful when deriving the differential of an expression ([19]):

∂A = 0                             (A is a constant)                     (33)
∂(αX) = α ∂X                                                             (34)
∂(X + Y) = ∂X + ∂Y                                                       (35)
∂(Tr(X)) = Tr(∂X)                                                        (36)
∂(XY) = (∂X)Y + X(∂Y)                                                    (37)
∂(X ∘ Y) = (∂X) ∘ Y + X ∘ (∂Y)                                           (38)
∂(X ⊗ Y) = (∂X) ⊗ Y + X ⊗ (∂Y)                                           (39)
∂(X^{-1}) = -X^{-1} (∂X) X^{-1}                                          (40)
∂(det(X)) = Tr(adj(X) ∂X)                                                (41)
∂(det(X)) = det(X) Tr(X^{-1} ∂X)                                         (42)
∂(ln(det(X))) = Tr(X^{-1} ∂X)                                            (43)
∂X^T = (∂X)^T                                                            (44)
∂X^H = (∂X)^H                                                            (45)

2.1 Derivatives of a Determinant

2.1.1 General form

∂det(Y)/∂x = det(Y) Tr[ Y^{-1} ∂Y/∂x ]                                   (46)

Σ_k ( ∂det(X)/∂X_ik ) X_jk = δ_ij det(X)                                 (47)

∂^2 det(Y)/∂x^2 = det(Y) [ Tr[ Y^{-1} ∂(∂Y/∂x)/∂x ]
                           + Tr[ Y^{-1} ∂Y/∂x ] Tr[ Y^{-1} ∂Y/∂x ]
                           - Tr[ (Y^{-1} ∂Y/∂x)(Y^{-1} ∂Y/∂x) ] ]        (48)

2.1.2 Linear forms

∂det(X)/∂X = det(X) (X^{-1})^T                                           (49)

Σ_k ( ∂det(X)/∂X_ik ) X_jk = δ_ij det(X)                                 (50)

∂det(AXB)/∂X = det(AXB) (X^{-1})^T = det(AXB) (X^T)^{-1}                 (51)

2.1.3 Square forms

If X is square and invertible, then

∂det(X^T A X)/∂X = 2 det(X^T A X) X^{-T}                                 (52)

If X is not square but A is symmetric, then

∂det(X^T A X)/∂X = 2 det(X^T A X) A X (X^T A X)^{-1}                     (53)

If X is not square and A is not symmetric, then

∂det(X^T A X)/∂X = det(X^T A X) ( A X (X^T A X)^{-1} + A^T X (X^T A^T X)^{-1} )   (54)

2.1.4 Other nonlinear forms

Some special cases are (see [9, 7]):

∂ ln |det(X^T X)| / ∂X = 2 (X^+)^T                                       (55)

∂ ln |det(X^T X)| / ∂X^+ = -2 X^T                                        (56)

∂ ln |det(X)| / ∂X = (X^{-1})^T
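The determinant derivatives above are easy to check numerically. The sketch below is an illustrative addition, not from the cookbook; the step size h and the shift n·I used to keep X well conditioned are arbitrary choices. It compares equation (49) against central finite differences.

    # Finite-difference check of (49): d det(X)/dX = det(X) (X^{-1})^T.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    X = rng.standard_normal((n, n)) + n * np.eye(n)   # well conditioned, invertible
    h = 1e-6

    analytic = np.linalg.det(X) * np.linalg.inv(X).T   # eq. (49)

    numeric = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            E = np.zeros_like(X)
            E[i, j] = h
            numeric[i, j] = (np.linalg.det(X + E) - np.linalg.det(X - E)) / (2 * h)

    print(np.max(np.abs(numeric - analytic)))   # small finite-difference error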
Recommended publications
  • Eigenvalues of Euclidean Distance Matrices and Rs-Majorization on R2
    46th Annual Iranian Mathematics Conference, 25-28 August 2015, Yazd University. Eigenvalues of Euclidean Distance Matrices and rs-majorization on R^2. Asma Ilkhanizadeh Manesh, Department of Pure Mathematics, Vali-e-Asr University of Rafsanjan; Alemeh Sheikh Hoseini, Department of Pure Mathematics, Shahid Bahonar University of Kerman. Abstract: Let D1 and D2 be two Euclidean distance matrices (EDMs) with corresponding positive semidefinite matrices B1 and B2 respectively. Suppose that λ(A) = ((λ(A))_i)_{i=1}^n is the vector of eigenvalues of a matrix A such that (λ(A))_1 ≥ ... ≥ (λ(A))_n. In this paper, the relation between the eigenvalues of EDMs and those of the corresponding positive semidefinite matrices with respect to ≺_rs on R^2 will be investigated. Keywords: Euclidean distance matrices, rs-majorization. Mathematics Subject Classification [2010]: 34B15, 76A10. Introduction: An n × n nonnegative and symmetric matrix D = (d_ij^2) with zero diagonal elements is called a predistance matrix. A predistance matrix D is called Euclidean or a Euclidean distance matrix (EDM) if there exist a positive integer r and a set of n points {p1, ..., pn} such that p1, ..., pn ∈ R^r and d_ij^2 = ||pi − pj||^2 (i, j = 1, ..., n), where ||.|| denotes the usual Euclidean norm. The smallest value of r that satisfies the above condition is called the embedding dimension. As is well known, a predistance matrix D is Euclidean if and only if the matrix B = -(1/2) P D P, with P = I_n - (1/n) e e^t, where I_n is the n × n identity matrix and e is the vector of all ones, is a positive semidefinite matrix.
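    To illustrate the stated criterion, the following sketch (an addition to this page, with arbitrary choices of n, r and random points) builds a squared-distance matrix D from points in R^r and checks that B = -(1/2) P D P is positive semidefinite.

        # EDM criterion check: D built from points => B = -1/2 * P D P is PSD.
        import numpy as np

        rng = np.random.default_rng(2)
        n, r = 6, 3
        points = rng.standard_normal((n, r))

        # squared-distance (predistance) matrix
        diff = points[:, None, :] - points[None, :, :]
        D = (diff ** 2).sum(axis=-1)

        e = np.ones((n, 1))
        P = np.eye(n) - (e @ e.T) / n
        B = -0.5 * P @ D @ P

        eigvals = np.linalg.eigvalsh(B)    # B is symmetric
        print(eigvals.min() >= -1e-10)     # True: D built from points in R^r is an EDM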
  • Clustering by Left-Stochastic Matrix Factorization
    Clustering by Left-Stochastic Matrix Factorization. Raman Arora [email protected], Maya R. Gupta [email protected], Amol Kapila [email protected], Maryam Fazel [email protected]. University of Washington, Seattle, WA 98103, USA. Abstract: We propose clustering samples given their pairwise similarities by factorizing the similarity matrix into the product of a cluster probability matrix and its transpose. We propose a rotation-based algorithm to compute this left-stochastic decomposition (LSD). Theoretical results link the LSD clustering method to a soft kernel k-means clustering, give conditions for when the factorization and clustering are unique, and provide error bounds. Experimental results on simulated and real similarity datasets show that the proposed method reliably provides accurate clusterings. 1.1. Related Work in Matrix Factorization: Some clustering objective functions can be written as matrix factorization objectives. Let n feature vectors be gathered into a feature-vector matrix X ∈ R^{d×n}. Consider the model X ≈ FG^T, where F ∈ R^{d×k} can be interpreted as a matrix with k cluster prototypes as its columns, and G ∈ R^{n×k} is all zeros except for one (appropriately scaled) positive entry per row that indicates the nearest cluster prototype. The k-means clustering objective follows this model with squared error, and can be expressed as (Ding et al., 2005): arg min_{F, G} ||X − FG^T||_F^2 subject to G^T G = I, G ≥ 0 (1), where ||·||_F is the Frobenius norm, and the inequality G ≥ 0 is component-wise. This follows because the combined constraints G ≥ 0 and G^T G = I force each row of G to have only one positive element.
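    The factorization view of k-means in the excerpt can be made concrete with a small sketch; the data, the fixed labels, and the use of unscaled 0/1 indicators in G (rather than the scaled columns with G^T G = I from the paper) are simplifying assumptions made here for illustration.

        # k-means objective written in factored form ||X - F G^T||_F^2.
        import numpy as np

        rng = np.random.default_rng(3)
        d, n, k = 2, 10, 3
        X = rng.standard_normal((d, n))
        labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])

        # F: cluster prototypes (here, the means of the assigned points)
        F = np.stack([X[:, labels == c].mean(axis=1) for c in range(k)], axis=1)  # d x k

        # G: one positive entry per row, marking each point's cluster
        G = np.zeros((n, k))
        G[np.arange(n), labels] = 1.0

        objective = np.linalg.norm(X - F @ G.T, "fro") ** 2
        print(objective)   # equals the usual k-means sum of squared distances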
  • Matrix-Valued Moment Problems
    Matrix-valued moment problems. A Thesis Submitted to the Faculty of Drexel University by David P. Kimsey in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics, June 2011. © Copyright 2011 David P. Kimsey. All Rights Reserved. Acknowledgments: I wish to thank my advisor Professor Hugo J. Woerdeman for being a source of mathematical and professional inspiration. Woerdeman's encouragement and support have been extremely beneficial during my undergraduate and graduate career. Next, I would like to thank Professor Robert P. Boyer for providing a very important mentoring role which guided me toward a career in mathematics. I owe many thanks to L. Richard Duffy for providing encouragement and discussions on fundamental topics. Professor Dmitry Kaliuzhnyi-Verbovetskyi participated in many helpful discussions which helped shape my understanding of several mathematical topics of interest to me. In addition, Professor Anatolii Grinshpan participated in many useful discussions and gave lots of encouragement. I wish to thank my Ph.D. defense committee members: Professors Boyer, Lawrence A. Fialkow, Grinshpan, Pawel Hitczenko, and Woerdeman for pointing out any gaps in my understanding as well as typographical mistakes in my thesis. Finally, I wish to acknowledge funding for my research assistantship which was provided by the National Science Foundation, via the grant DMS-0901628. Table of Contents: Abstract; 1. Introduction ...
  • Graph Equivalence Classes for Spectral Projector-Based Graph Fourier Transforms
    Graph Equivalence Classes for Spectral Projector-Based Graph Fourier Transforms. Joya A. Deri, Member, IEEE, and José M. F. Moura, Fellow, IEEE. Abstract: We define and discuss the utility of two equivalence graph classes over which a spectral projector-based graph Fourier transform is equivalent: isomorphic equivalence classes and Jordan equivalence classes. Isomorphic equivalence classes show that the transform is equivalent up to a permutation on the node labels. Jordan equivalence classes permit identical transforms over graphs of nonidentical topologies and allow a basis-invariant characterization of total variation orderings of the spectral components. Methods to exploit these classes to reduce computation time of the transform as well as limitations are discussed. Index Terms: Jordan decomposition, generalized eigenspaces, directed graphs, graph equivalence classes, graph isomorphism, signal processing on graphs, networks. I. Introduction: Consider a graph G = G(A) with adjacency matrix A ∈ C^{N×N} with k ≤ N distinct eigenvalues and Jordan decomposition A = V J V^{-1}. The associated Jordan subspaces of A are J_ij, i = 1, ..., k, j = 1, ..., g_i, where g_i is the geometric multiplicity of eigenvalue λ_i, or the dimension of the kernel of A − λ_i I. The signal space S can be uniquely decomposed by the Jordan subspaces (see [13], [14] and Section II). For a graph signal s ∈ S, the graph Fourier transform (GFT) of [12] is defined as F : S → ⊕_{i=1}^{k} ⊕_{j=1}^{g_i} J_ij, s ↦ (ŝ_11, ..., ŝ_{1g_1}, ..., ŝ_{k1}, ..., ŝ_{kg_k}) (1), where ŝ_ij is the (oblique) projection of s onto the Jordan subspace J_ij parallel to S \ J_ij.
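    The following sketch illustrates the projector-based GFT idea in the simplest special case only: a diagonalizable adjacency matrix, where every Jordan subspace is a one-dimensional eigenspace. It is not the paper's full Jordan-decomposition construction; the graph size and signal are arbitrary choices made here.

        # GFT as expansion of a signal in the eigenvector basis (diagonalizable case).
        import numpy as np

        rng = np.random.default_rng(4)
        N = 5
        A = rng.standard_normal((N, N))        # directed graph; generic => diagonalizable

        eigvals, V = np.linalg.eig(A)
        s = rng.standard_normal(N)             # a graph signal

        # GFT coefficients: expansion of s in the eigenvector basis
        s_hat = np.linalg.solve(V, s)

        # Each term s_hat[i] * V[:, i] is the (oblique) projection of s onto the
        # i-th eigenvector subspace; summing them recovers s.
        reconstruction = (V * s_hat).sum(axis=1)
        print(np.allclose(reconstruction, s))  # True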
  • Tubal Matrices (arXiv:2105.00793v3 [math.NA], 14 Jun 2021)
    Tubal Matrices. Liqun Qi* and Ziyan Luo†. June 15, 2021. Abstract: It was shown recently that the f-diagonal tensor in the T-SVD factorization must satisfy some special properties. Such f-diagonal tensors are called s-diagonal tensors. In this paper, we show that such a discussion can be extended to any real invertible linear transformation. We show that two Eckart-Young like theorems hold for a third order real tensor, under any doubly real-preserving unitary transformation. The normalized Discrete Fourier Transformation (DFT) matrix, an arbitrary orthogonal matrix, the product of the normalized DFT matrix and an arbitrary orthogonal matrix are examples of doubly real-preserving unitary transformations. We use tubal matrices as a tool for our study. We feel that the tubal matrix language makes this approach more natural. Key words: Tubal matrix, tensor, T-SVD factorization, tubal rank, B-rank, Eckart-Young like theorems. AMS subject classifications: 15A69, 15A18. 1 Introduction: Tensor decompositions have wide applications in engineering and data science [11]. The most popular tensor decompositions include CP decomposition and Tucker decomposition as well as tensor train decomposition [11, 3, 17]. The tensor-tensor product (t-product) approach, developed by Kilmer, Martin, Braman and others [10, 1, 9, 8], is somewhat different. They defined the T-product operation such that a third order tensor can be regarded as a linear operator applied on ... *Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China ([email protected]). †Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China.
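    As an illustration of the t-product underlying the T-SVD, here is a minimal sketch of the usual DFT-based computation (FFT along the tube dimension, facewise matrix products, inverse FFT); the tensor shapes and data are arbitrary, and the function name t_product is our own, not from the paper.

        # t-product of third-order tensors via FFT along the third (tube) dimension.
        import numpy as np

        def t_product(A, B):
            # A: (n1, n2, n3), B: (n2, n4, n3) -> C: (n1, n4, n3)
            n3 = A.shape[2]
            Af = np.fft.fft(A, axis=2)
            Bf = np.fft.fft(B, axis=2)
            Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
            for k in range(n3):
                Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]   # facewise matrix product
            return np.real(np.fft.ifft(Cf, axis=2))

        rng = np.random.default_rng(5)
        A = rng.standard_normal((3, 4, 5))
        B = rng.standard_normal((4, 2, 5))
        C = t_product(A, B)
        print(C.shape)   # (3, 2, 5)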
  • Information-Theoretic Approaches to Portfolio Selection
    Information-theoretic approaches to portfolio selection. Nathan Lassance. Doctoral Thesis 2 | 2020. Université catholique de Louvain, Louvain Institute of Data Analysis and Modeling in Economics and Statistics (LIDAM). Université catholique de Louvain, Louvain School of Management, LIDAM & Louvain Finance. Doctoral Thesis: Information-theoretic approaches to portfolio selection, Nathan Lassance. Thesis submitted in partial fulfillment of the requirements for the degree of Docteur en sciences économiques et de gestion. Dissertation committee: Prof. Frédéric Vrins (UCLouvain, BE), Advisor; Prof. Kris Boudt (Ghent University, BE); Prof. Victor DeMiguel (London Business School, UK); Prof. Guofu Zhou (Washington University, USA); Prof. Marco Saerens (UCLouvain, BE), President. Academic year 2019-2020. "Find a job you enjoy doing, and you will never have to work a day in your life." Mark Twain. Contents: Abstract; Acknowledgments; Research accomplishments; List of Figures; List of Tables; List of Notation; Introduction; 1 Research background (1.1 Mean-variance approaches: 1.1.1 Definitions, 1.1.2 Estimation risk, 1.1.3 Robust mean-variance portfolios; 1.2 Higher-moment approaches: 1.2.1 Efficient portfolios, 1.2.2 Downside-risk criteria, 1.2.3 Indirect approaches; 1.3 Risk-parity approaches: 1.3.1 Asset-risk parity, 1.3.2 Factor-risk parity, 1.3.3 Criticisms; 1.4 Information-theoretic approaches; 1.5 Thesis contributions); 2 Minimum Rényi entropy portfolios (2.1 Introduction; 2.2 The notion of entropy: 2.2.1 Shannon entropy, 2.2.2 Rényi entropy ...)
  • QUADRATIC FORMS and DEFINITE MATRICES
    QUADRATIC FORMS AND DEFINITE MATRICES. 1. DEFINITION AND CLASSIFICATION OF QUADRATIC FORMS. 1.1. Definition of a quadratic form. Let A denote an n x n symmetric matrix with real entries and let x denote an n x 1 column vector. Then Q = x'Ax is said to be a quadratic form. Note that

        Q = x'Ax = (x_1, ..., x_n) [ a_11 ... a_1n ; ... ; a_n1 ... a_nn ] (x_1, ..., x_n)'
                 = (x_1, x_2, ..., x_n) ( Σ_i a_1i x_i, ..., Σ_i a_ni x_i )'
                 = a_11 x_1^2 + a_12 x_1 x_2 + ... + a_1n x_1 x_n
                   + a_21 x_2 x_1 + a_22 x_2^2 + ... + a_2n x_2 x_n
                   + ... + ... + ...
                   + a_n1 x_n x_1 + a_n2 x_n x_2 + ... + a_nn x_n^2
                 = Σ_{i ≤ j} a_ij x_i x_j                                 (1)

    For example, consider the matrix A = [ 1 2 ; 2 1 ] and the vector x. Q is given by

        Q = x'Ax = [x_1 x_2] [ 1 2 ; 2 1 ] [x_1 ; x_2]
                 = [x_1 + 2x_2, 2x_1 + x_2] [x_1 ; x_2]
                 = x_1^2 + 2 x_1 x_2 + 2 x_1 x_2 + x_2^2
                 = x_1^2 + 4 x_1 x_2 + x_2^2

    1.2. Classification of the quadratic form Q = x'Ax. A quadratic form is said to be:
    a: negative definite: Q < 0 when x ≠ 0
    b: negative semidefinite: Q ≤ 0 for all x and Q = 0 for some x ≠ 0
    c: positive definite: Q > 0 when x ≠ 0
    d: positive semidefinite: Q ≥ 0 for all x and Q = 0 for some x ≠ 0
    e: indefinite: Q > 0 for some x and Q < 0 for some other x
    Date: September 14, 2004.

    Consider as an example the 3x3 diagonal matrix D below and a general 3-element vector x.

        D = [ 1 0 0 ; 0 2 0 ; 0 0 4 ]

    The general quadratic form is given by

        Q = x'Dx = [x_1 x_2 x_3] [ 1 0 0 ; 0 2 0 ; 0 0 4 ] [x_1 ; x_2 ; x_3]
                 = [x_1, 2x_2, 4x_3] [x_1 ; x_2 ; x_3]
                 = x_1^2 + 2 x_2^2 + 4 x_3^2

    Note that for any real vector x ≠ 0, Q will be positive, because the square of any number is positive, the coefficients of the squared terms are positive and the sum of positive numbers is always positive.
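    A small sketch of the classification above, using the eigenvalue test on a symmetric matrix; the helper classify_quadratic_form and the tolerance tol are our additions for illustration, not from the excerpt.

        # Classify the quadratic form Q = x'Ax by the signs of the eigenvalues of A.
        import numpy as np

        def classify_quadratic_form(A, tol=1e-12):
            eigvals = np.linalg.eigvalsh(A)          # A assumed symmetric
            if np.all(eigvals > tol):
                return "positive definite"
            if np.all(eigvals < -tol):
                return "negative definite"
            if np.all(eigvals >= -tol):
                return "positive semidefinite"
            if np.all(eigvals <= tol):
                return "negative semidefinite"
            return "indefinite"

        print(classify_quadratic_form(np.diag([1.0, 2.0, 4.0])))   # positive definite
        print(classify_quadratic_form(np.array([[1.0, 2.0],
                                                [2.0, 1.0]])))     # indefinite (eigenvalues 3, -1)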
  • Problems in Abstract Algebra
    STUDENT MATHEMATICAL LIBRARY Volume 82. Problems in Abstract Algebra. A. R. Wadsworth. 10.1090/stml/082. American Mathematical Society, Providence, Rhode Island. Editorial Board: Satyan L. Devadoss, John Stillwell (Chair), Erica Flapan, Serge Tabachnikov. 2010 Mathematics Subject Classification. Primary 00A07, 12-01, 13-01, 15-01, 20-01. For additional information and updates on this book, visit www.ams.org/bookpages/stml-82. Library of Congress Cataloging-in-Publication Data. Names: Wadsworth, Adrian R., 1947–. Title: Problems in abstract algebra / A. R. Wadsworth. Description: Providence, Rhode Island: American Mathematical Society, [2017] | Series: Student mathematical library; volume 82 | Includes bibliographical references and index. Identifiers: LCCN 2016057500 | ISBN 9781470435837 (alk. paper). Subjects: LCSH: Algebra, Abstract – Textbooks. | AMS: General – General and miscellaneous specific topics – Problem books. msc | Field theory and polynomials – Instructional exposition (textbooks, tutorial papers, etc.). msc | Commutative algebra – Instructional exposition (textbooks, tutorial papers, etc.). msc | Linear and multilinear algebra; matrix theory – Instructional exposition (textbooks, tutorial papers, etc.). msc | Group theory and generalizations – Instructional exposition (textbooks, tutorial papers, etc.). msc. Classification: LCC QA162 .W33 2017 | DDC 512/.02–dc23. LC record available at https://lccn.loc.gov/2016057500. Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy select pages for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given. Republication, systematic copying, or multiple reproduction of any material in this publication is permitted only under license from the American Mathematical Society.
  • A Matrix Handbook for Statisticians
    A MATRIX HANDBOOK FOR STATISTICIANS. George A. F. Seber, Department of Statistics, University of Auckland, Auckland, New Zealand. WILEY-INTERSCIENCE, A John Wiley & Sons, Inc., Publication. THE WILEY BICENTENNIAL - KNOWLEDGE FOR GENERATIONS. Each generation has its unique needs and aspirations. When Charles Wiley first opened his small printing shop in lower Manhattan in 1807, it was a generation of boundless potential searching for an identity. And we were there, helping to define a new American literary tradition. Over half a century later, in the midst of the Second Industrial Revolution, it was a generation focused on building the future. Once again, we were there, supplying the critical scientific, technical, and engineering knowledge that helped frame the world. Throughout the 20th Century, and into the new millennium, nations began to reach out beyond their own borders and a new international community was born. Wiley was there, expanding its operations around the world to enable a global exchange of ideas, opinions, and know-how. For 200 years, Wiley has been an integral part of each generation's journey, enabling the flow of information and understanding necessary to meet their needs and fulfill their aspirations. Today, bold new technologies are changing the way we live and learn. Wiley will be there, providing you the must-have knowledge you need to imagine new worlds, new possibilities, and new opportunities. Generations come and go, but you can always count on Wiley to provide you the knowledge you need, when and where you need it! WILLIAM J.
  • Polynomial Sequences Generated by Linear Recurrences
    Innocent Ndikubwayo. Polynomial Sequences Generated by Linear Recurrences: Location and Reality of Zeros. ISBN 978-91-7911-462-6. Department of Mathematics, Doctoral Thesis in Mathematics at Stockholm University, Sweden, 2021. Academic dissertation for the Degree of Doctor of Philosophy in Mathematics at Stockholm University, to be publicly defended on Friday 14 May 2021 at 15.00 in sal 14 (Gradängsalen), hus 5, Kräftriket, Roslagsvägen 101 and online via Zoom; the public link is available at the department website. Abstract: In this thesis, we study the problem of location of the zeros of individual polynomials in sequences of polynomials generated by linear recurrence relations. In paper I, we establish the necessary and sufficient conditions that guarantee hyperbolicity of all the polynomials generated by a three-term recurrence of length 2, whose coefficients are arbitrary real polynomials. These zeros are dense on the real intervals of an explicitly defined real semialgebraic curve. Paper II extends Paper I to three-term recurrences of length greater than 2. We prove that there always exist non-hyperbolic polynomial(s) in the generated sequence. We further show that with at most finitely many known exceptions, all the zeros of all the polynomials generated by the recurrence lie and are dense on an explicitly defined real semialgebraic curve which consists of real intervals and non-real segments. The boundary points of this curve form a subset of the zero locus of the discriminant of the characteristic polynomial of the recurrence.
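    For a concrete feel for such sequences, the sketch below generates polynomials from a three-term recurrence with polynomial coefficients and prints their zeros; the particular coefficients Q1(x) = x and Q2(x) = -1 are an arbitrary example chosen here, not one from the thesis.

        # Polynomials generated by P_n = Q1 * P_{n-1} + Q2 * P_{n-2}, and their zeros.
        import numpy as np
        from numpy.polynomial import Polynomial as P

        Q1 = P([0.0, 1.0])      # Q1(x) = x
        Q2 = P([-1.0])          # Q2(x) = -1 (gives a Chebyshev-like family)

        polys = [P([1.0]), P([0.0, 1.0])]    # P_0 = 1, P_1 = x
        for n in range(2, 8):
            polys.append(Q1 * polys[-1] + Q2 * polys[-2])

        for n, p in enumerate(polys):
            print(n, np.round(p.roots(), 3))   # zeros of each generated polynomial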
  • Fourier Transform, Convolution Theorem, and Linear Dynamical Systems April 28, 2016
    Mathematical Tools for Neuroscience (NEU 314), Princeton University, Spring 2016. Jonathan Pillow. Lecture 23: Fourier Transform, Convolution Theorem, and Linear Dynamical Systems. April 28, 2016. Discrete Fourier Transform (DFT): We will focus on the discrete Fourier transform, which applies to discretely sampled signals (i.e., vectors). Linear algebra provides a simple way to think about the Fourier transform: it is simply a change of basis, specifically a mapping from the time domain to a representation in terms of a weighted combination of sinusoids of different frequencies. The discrete Fourier transform is therefore equivalent to multiplying by an orthogonal (or "unitary", which is the same concept when the entries are complex-valued) matrix. For a vector of length N, the matrix that performs the DFT (i.e., that maps it to a basis of sinusoids) is an N × N matrix. The k'th row of this matrix is given by exp(−2πikt/N), for k ∈ [0, ..., N − 1] (where we assume indexing starts at 0 instead of 1), and t is a row vector t = 0:N-1. Recall that exp(iθ) = cos(θ) + i sin(θ), so this gives us a compact way to represent the signal with a linear superposition of sines and cosines. The first row of the DFT matrix is all ones (since exp(0) = 1), and so the first element of the DFT corresponds to the sum of the elements of the signal. It is often known as the "DC component". The next row is a complex sinusoid that completes one cycle over the length of the signal, and each subsequent row has a frequency that is an integer multiple of this "fundamental" frequency.
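    The change-of-basis view described above can be reproduced directly: build the N × N matrix whose rows are exp(−2πikt/N) and compare with NumPy's FFT. N = 8 and the test signal are arbitrary choices made here for illustration.

        # The DFT as multiplication by an N x N complex sinusoid matrix.
        import numpy as np

        N = 8
        t = np.arange(N)                                   # t = 0:N-1
        k = t.reshape(-1, 1)
        F = np.exp(-2j * np.pi * k * t / N)                # DFT matrix

        x = np.random.default_rng(6).standard_normal(N)    # a sampled signal
        print(np.allclose(F @ x, np.fft.fft(x)))           # True: same transform

        # F / sqrt(N) is unitary: its conjugate transpose is its inverse
        U = F / np.sqrt(N)
        print(np.allclose(U.conj().T @ U, np.eye(N)))      # True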
  • 8.3 Positive Definite Matrices
    Exercise 8.2.25: Show that every 2 × 2 orthogonal matrix has the form [ cos θ  −sin θ ; sin θ  cos θ ] or [ cos θ  sin θ ; sin θ  −cos θ ] for some angle θ. [Hint: If a^2 + b^2 = 1, then a = cos θ and b = sin θ for some angle θ.] Exercise 8.2.26: Use Theorem 8.2.5 to show that every symmetric matrix is orthogonally diagonalizable. 8.3 Positive Definite Matrices: All the eigenvalues of any symmetric matrix are real; this section is about the case in which the eigenvalues are positive. These matrices, which arise whenever optimization (maximum and minimum) problems are encountered, have countless applications throughout science and engineering. They also arise in statistics (for example, in factor analysis used in the social sciences) and in geometry (see Section 8.9). We will encounter them again in Chapter 10 when describing all inner products in R^n. Definition 8.5 (Positive Definite Matrices): A square matrix is called positive definite if it is symmetric and all its eigenvalues λ are positive, that is λ > 0. Because these matrices are symmetric, the principal axes theorem plays a central role in the theory. Theorem 8.3.1: If A is positive definite, then it is invertible and det A > 0. Proof. If A is n × n and the eigenvalues are λ_1, λ_2, ..., λ_n, then det A = λ_1 λ_2 ··· λ_n > 0 by the principal axes theorem (or the corollary to Theorem 8.2.5). If x is a column in R^n and A is any real n × n matrix, we view the 1 × 1 matrix x^T A x as a real number.
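    A short numerical illustration of Definition 8.5 and Theorem 8.3.1 (the example matrix is our own arbitrary choice, not from the text): a symmetric positive definite matrix has positive eigenvalues, positive determinant, an inverse, and a Cholesky factorization.

        # Positive definite example: eigenvalues > 0, det > 0, invertible, Cholesky exists.
        import numpy as np

        B = np.array([[2.0, -1.0, 0.0],
                      [0.0,  1.0, 3.0],
                      [1.0,  0.0, 1.0]])
        A = B.T @ B + np.eye(3)          # B^T B + I is symmetric positive definite

        eigvals = np.linalg.eigvalsh(A)
        print(eigvals)                   # all strictly positive
        print(np.linalg.det(A) > 0)      # True
        print(np.allclose(A @ np.linalg.inv(A), np.eye(3)))   # invertible

        # Cholesky factorization succeeds exactly for positive definite matrices
        L = np.linalg.cholesky(A)
        print(np.allclose(L @ L.T, A))   # True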