Chapter 0 Preliminaries

Objective of the course. Introduce basic matrix results and techniques that are useful in theory and applications. It is important to know many results, but there is no way I can tell you all of them. I would like to introduce techniques so that you can obtain the results you want. This is the standard teaching philosophy of "It is better to teach students how to fish instead of just giving them fishes."

Assumption. You are familiar with
• Linear equations, solution sets, elementary row operations.
• Matrices, column space, row space, null space, ranks.
• Determinant, eigenvalues, eigenvectors, diagonal form.
• Vector spaces, basis, change of bases.
• Linear transformations, range space, kernel.
• Inner product, Gram-Schmidt process for real matrices.
• Basic operations of complex numbers.
• Block matrices.

Notation
• $M_n(\mathbb{F})$ and $M_{m,n}(\mathbb{F})$ are the sets of $n \times n$ and $m \times n$ matrices over $\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$ (or a more general field).
• $\mathbb{F}^n$ is the set of column vectors of length $n$ with entries in $\mathbb{F}$.
• Sometimes we write $M_n, M_{m,n}$ if $\mathbb{F} = \mathbb{C}$.
• If $A = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} \in M_{m+n}(\mathbb{F})$ with $A_1 \in M_m(\mathbb{F})$ and $A_2 \in M_n(\mathbb{F})$, we write $A = A_1 \oplus A_2$.
• $x^t, A^t$ denote the transpose of a vector $x$ and a matrix $A$.
• For a complex matrix $A$, $\bar{A}$ denotes the matrix obtained from $A$ by replacing each entry by its complex conjugate. Furthermore, $A^* = (\bar{A})^t$.

More results and examples on block matrix multiplications.

1 Similarity, Jordan form, and applications

1.1 Similarity

Definition 1.1.1 Two matrices $A, B \in M_n$ are similar if there is an invertible $S \in M_n$ such that $S^{-1}AS = B$; equivalently, $A = T^{-1}BT$ with $T = S^{-1}$. A matrix $A$ is diagonalizable if $A$ is similar to a diagonal matrix.

Remark 1.1.2 If $A$ and $B$ are similar, then $\det(A) = \det(B)$ and $\det(zI - A) = \det(zI - B)$. Furthermore, if $S^{-1}AS = B$, then $Bx = \mu x$ if and only if $A(Sx) = \mu(Sx)$.

Theorem 1.1.3 Let $A \in M_n(\mathbb{F})$. Then $A$ is diagonalizable over $\mathbb{F}$ if and only if it has $n$ linearly independent eigenvectors in $\mathbb{F}^n$.

Proof. An invertible matrix $S \in M_n$ satisfies $S^{-1}AS = D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ if and only if $AS = SD$, i.e., the columns $x_1, \dots, x_n$ of $S$ satisfy $Ax_j = \lambda_j x_j$ for $j = 1, \dots, n$. Since $S$ is invertible exactly when its columns are linearly independent, such an $S$ exists if and only if $A$ has $n$ linearly independent eigenvectors in $\mathbb{F}^n$.

Remark 1.1.4 Suppose $A = SDS^{-1}$ with $D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$. Let $S$ have columns $x_1, \dots, x_n$ and $S^{-1}$ have rows $y_1^t, \dots, y_n^t$. Then
$$Ax_j = \lambda_j x_j \quad\text{and}\quad y_j^t A = \lambda_j y_j^t, \qquad j = 1, \dots, n,$$
and
$$A = \sum_{j=1}^n \lambda_j x_j y_j^t.$$
Thus, $x_1, \dots, x_n$ are the (right) eigenvectors of $A$, and $y_1^t, \dots, y_n^t$ are the left eigenvectors of $A$.

Remark 1.1.5 For a real matrix $A \in M_n(\mathbb{R})$, we may not be able to find real eigenvalues, and we may not be able to find $n$ linearly independent eigenvectors even if $\det(zI - A) = 0$ has only real roots.

Example 1.1.6 Consider $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. Then $A$ has no real eigenvalues, and $B$ does not have two linearly independent eigenvectors.

Remark 1.1.7 By the fundamental theorem of algebra, every complex polynomial can be written as a product of linear factors. Consequently, for every $A \in M_n(\mathbb{C})$, the characteristic polynomial factors into linear terms:
$$\det(zI - A) = (z - \lambda_1) \cdots (z - \lambda_n).$$
So, we need only find $n$ linearly independent eigenvectors in order to show that $A$ is diagonalizable.
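The statements in Theorem 1.1.3 and Remark 1.1.4 are easy to check numerically. The following is a minimal sketch in Python with NumPy (not part of the original notes); the matrix $A$ here is an arbitrary diagonalizable example chosen for illustration.

```python
import numpy as np

# An arbitrary diagonalizable matrix chosen for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix S whose columns are eigenvectors.
lam, S = np.linalg.eig(A)          # eigenvalues 5 and 2 for this A
D = np.diag(lam)
S_inv = np.linalg.inv(S)

# Diagonalization: S^{-1} A S = D  (Theorem 1.1.3).
print(np.allclose(S_inv @ A @ S, D))   # True

# Rows of S^{-1} are the left eigenvectors y_1^t, ..., y_n^t, and
# A = sum_j lambda_j x_j y_j^t  (Remark 1.1.4).
A_rebuilt = sum(lam[j] * np.outer(S[:, j], S_inv[j, :]) for j in range(len(lam)))
print(np.allclose(A, A_rebuilt))       # True
```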
1.2 Triangular form and Jordan form

Question What simple form can we get by similarity if $A \in M_n$ is not diagonalizable?

Theorem 1.2.1 Let $A \in M_n$ be such that $\det(zI - A) = (z - \lambda_1) \cdots (z - \lambda_n)$. For any permutation $(j_1, \dots, j_n)$ of $(1, \dots, n)$, $A$ is similar to a matrix in (upper) triangular form with diagonal entries $\lambda_{j_1}, \dots, \lambda_{j_n}$.

Proof. By induction on $n$. The result clearly holds if $n = 1$. Suppose $Ax_1 = \lambda_{j_1} x_1$. Let $S_1$ be invertible with first column equal to $x_1$. Then $AS_1 = S_1 \begin{pmatrix} \lambda_{j_1} & * \\ 0 & A_1 \end{pmatrix}$, that is, $S_1^{-1}AS_1 = \begin{pmatrix} \lambda_{j_1} & * \\ 0 & A_1 \end{pmatrix}$. Note that the eigenvalues of $A_1$ are $\lambda_{j_2}, \dots, \lambda_{j_n}$. By the induction assumption, there is an invertible $S_2$ such that $S_2^{-1} A_1 S_2$ is in upper triangular form with diagonal entries $\lambda_{j_2}, \dots, \lambda_{j_n}$. Let $S = S_1([1] \oplus S_2)$. Then $S^{-1}AS$ has the required form.

Note that we can apply the argument to $A^t$ to get an invertible $T$ such that $T^{-1}A^tT$ is in upper triangular form. Taking transposes, $S^{-1}AS$ is in lower triangular form for $S = (T^t)^{-1}$.

Lemma 1.2.2 Suppose $A \in M_m$ and $B \in M_n$ have no common eigenvalues. Then for any $C \in M_{m,n}$ there is $X \in M_{m,n}$ such that $AX - XB = C$.

Proof. Suppose $S^{-1}AS = \tilde{A}$ is in upper triangular form and $T^{-1}BT = \tilde{B}$ is in lower triangular form. Multiplying the equation $AX - XB = C$ on the left by $S^{-1}$ and on the right by $T$, we get $\tilde{A}Y - Y\tilde{B} = \tilde{C}$ with $Y = S^{-1}XT$ and $\tilde{C} = S^{-1}CT$. Once we solve $\tilde{A}Y - Y\tilde{B} = \tilde{C}$, we get the solution $X = SYT^{-1}$ of the original problem.

Now, for each column $\operatorname{col}_j(\tilde{C})$, $j = 1, \dots, n$, we have $\tilde{A}\operatorname{col}_j(Y) - \sum_{i=1}^n \tilde{B}_{ij}\operatorname{col}_i(Y) = \operatorname{col}_j(\tilde{C})$. Stacking the columns of $Y$, we get a system of linear equations with coefficient matrix
$$ I_n \otimes \tilde{A} - \tilde{B}^t \otimes I_m = \begin{pmatrix} \tilde{A} & & \\ & \ddots & \\ & & \tilde{A} \end{pmatrix} - \begin{pmatrix} \tilde{B}_{11} I_m & \cdots & \tilde{B}_{n1} I_m \\ & \ddots & \vdots \\ & & \tilde{B}_{nn} I_m \end{pmatrix}. $$
Because $\tilde{B}$ is in lower triangular form, and the upper triangular matrix $\tilde{A} = S^{-1}AS$ and the lower triangular matrix $\tilde{B} = T^{-1}BT$ have no common eigenvalues, the coefficient matrix is in upper triangular form with nonzero diagonal entries. So, the equation $\tilde{A}Y - Y\tilde{B} = \tilde{C}$ always has a unique solution.

Theorem 1.2.3 Suppose $A = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix} \in M_n$ is such that $A_{11} \in M_k$ and $A_{22} \in M_{n-k}$ have no common eigenvalue. Then $A$ is similar to $A_{11} \oplus A_{22}$.

Proof. By the previous lemma, there is $X$ such that $A_{11}X + A_{12} = XA_{22}$. Let $S = \begin{pmatrix} I_k & X \\ 0 & I_{n-k} \end{pmatrix}$, so that $AS = S(A_{11} \oplus A_{22})$. The result follows.

Corollary 1.2.4 Suppose $A \in M_n$ has distinct eigenvalues $\lambda_1, \dots, \lambda_k$. Then $A$ is similar to $A_{11} \oplus \cdots \oplus A_{kk}$ such that $A_{jj}$ has (only one distinct) eigenvalue $\lambda_j$ for $j = 1, \dots, k$.

Definition 1.2.5 Let $J_k(\lambda) \in M_k$ be the matrix with all diagonal entries equal to $\lambda$ and all superdiagonal entries equal to 1. Then
$$ J_k(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \ddots & \ddots & \\ & & \lambda & 1 \\ & & & \lambda \end{pmatrix} \in M_k $$
is called an (upper triangular) Jordan block of $\lambda$ of size $k$.

Theorem 1.2.6 Every $A \in M_n$ is similar to a direct sum of Jordan blocks.

Proof. By Corollary 1.2.4, we may assume that $A = A_{11} \oplus \cdots \oplus A_{kk}$. If we can find invertible matrices $S_1, \dots, S_k$ such that $S_i^{-1} A_{ii} S_i$ is in Jordan form, then $S^{-1}AS$ is in Jordan form for $S = S_1 \oplus \cdots \oplus S_k$. Focus on $T = A_{ii} - \lambda_i I_{n_i}$, where $n_i$ is the size of $A_{ii}$; since $A_{ii}$ has the single eigenvalue $\lambda_i$, $T$ is nilpotent. If $S_i^{-1}TS_i$ is in Jordan form, then so is $S_i^{-1}A_{ii}S_i = S_i^{-1}TS_i + \lambda_i I_{n_i}$. Now, we use a proof of Mark Wildon: https://www.math.vt.edu/people/renardym/class_home/Jordan.pdf

Note: The vectors $u_1, Tu_1, T^2u_1, \dots$ constructed in the proof are called a Jordan chain.

Example 1.2.7 Let $T = \begin{pmatrix} 0 & 0 & 1 & 2 \\ 0 & 0 & 3 & 4 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$. Then $Te_1 = 0$, $Te_2 = 0$, $Te_3 = e_1 + 3e_2$, $Te_4 = 2e_1 + 4e_2$. So, $T(V) = \operatorname{span}\{e_1, e_2\}$. Now, $Te_1 = Te_2 = 0$, so that $e_1, e_2$ form a Jordan basis for $T(V)$. Solving for $u_1, u_2$ such that $T(u_1) = e_1$ and $T(u_2) = e_2$, we let $u_1 = -2e_3 + 3e_4/2$ and $u_2 = e_3 - e_4/2$. Thus, $TS = S(J_2(0) \oplus J_2(0))$ with
$$ S = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -2 & 0 & 1 \\ 0 & 3/2 & 0 & -1/2 \end{pmatrix}. $$

Example 1.2.8 Let $T = \begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{pmatrix}$. Then $Te_1 = 0$, $Te_2 = e_1$, $Te_3 = 2e_1 + 3e_2$. So, $T(V) = \operatorname{span}\{e_1, e_2\}$, and $e_2, Te_2 = e_1$ form a Jordan basis for $T(V)$. Solving for $u_1$ such that $T(u_1) = e_2$, we have $u_1 = (-2e_2 + e_3)/3$. Thus, $TS = SJ_3(0)$ with
$$ S = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -2/3 \\ 0 & 0 & 1/3 \end{pmatrix}. $$
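A Jordan chain as in Examples 1.2.7 and 1.2.8 can also be generated numerically. The sketch below (Python with NumPy, not part of the original notes) uses the matrix $T$ of Example 1.2.8 but starts the chain from $u = e_3$ rather than from the $u_1$ chosen in the example; any $u$ with $T^2u \neq 0$ works, and the resulting $S$ still satisfies $TS = SJ_3(0)$.

```python
import numpy as np

# The nilpotent matrix from Example 1.2.8.
T = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

# Start a Jordan chain from any u with T^2 u != 0; here u = e_3.
u = np.array([0.0, 0.0, 1.0])
S = np.column_stack([T @ T @ u, T @ u, u])   # columns T^2 u, T u, u

# J_3(0): the 3-by-3 nilpotent upper triangular Jordan block.
J3 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0]])

# The chain vectors are linearly independent, so S is invertible, and T S = S J_3(0).
print(np.linalg.matrix_rank(S) == 3)   # True
print(np.allclose(T @ S, S @ J3))      # True
```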
Remark 1.2.9 The Jordan form of $A$ can be determined by the rank/nullity of $(A - \lambda_j I)^k$ for $k = 1, 2, \dots$; so the Jordan blocks are uniquely determined. Let $\ell_i$ be the dimension of $\ker((A - \lambda I)^i)$, and set $\ell_0 = 0$. Then there are $\ell_1$ Jordan blocks of $\lambda$, and there are $\ell_i - \ell_{i-1}$ Jordan blocks of $\lambda$ of size at least $i$.

Example 1.2.10 Suppose $A \in M_9$ has distinct eigenvalues $\lambda_1, \lambda_2, \lambda_3$ such that $A - \lambda_1 I$ has rank 8, $A - \lambda_2 I$ has rank 7, $(A - \lambda_2 I)^2$ and $(A - \lambda_2 I)^3$ have rank 5, $A - \lambda_3 I$ has rank 6, and $(A - \lambda_3 I)^2$ and $(A - \lambda_3 I)^3$ have rank 5. Then the Jordan form of $A$ is
$$ J_1(\lambda_1) \oplus J_2(\lambda_2) \oplus J_2(\lambda_2) \oplus J_1(\lambda_3) \oplus J_1(\lambda_3) \oplus J_2(\lambda_3). $$

1.3 Implications of the Jordan form

Theorem 1.3.1 Two matrices are similar if and only if they have the same Jordan form.

Proof. If $A$ and $B$ have the same Jordan form $J$, then $S^{-1}AS = J = T^{-1}BT$ for some invertible $S, T$, so that $R^{-1}AR = B$ with $R = ST^{-1}$. Conversely, if $A$ and $B$ are similar, then $\operatorname{rank}(A - \lambda I)^k = \operatorname{rank}(B - \lambda I)^k$ for every $\lambda$ and every $k$, so by Remark 1.2.9 they have the same Jordan form.
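The rank computations in Remark 1.2.9 and Example 1.2.10, together with Theorem 1.3.1, suggest a simple numerical check of Jordan structure. Below is a small sketch in Python with NumPy (not part of the original notes): we build a matrix with a known Jordan form, hide it behind a random similarity, and recover the block sizes from the nullities of $(A - \lambda I)^k$. The tolerance passed to matrix_rank is an ad hoc choice for this example.

```python
import numpy as np

def jordan_block(lmbda, k):
    """k-by-k upper triangular Jordan block J_k(lambda)."""
    return lmbda * np.eye(k) + np.diag(np.ones(k - 1), 1)

# Known Jordan form J = J_1(2) + J_2(2) + J_3(5) (direct sum), n = 6.
J = np.zeros((6, 6))
J[0:1, 0:1] = jordan_block(2.0, 1)
J[1:3, 1:3] = jordan_block(2.0, 2)
J[3:6, 3:6] = jordan_block(5.0, 3)

# Hide the structure by a similarity; by Theorem 1.3.1, A and J have the same Jordan form.
rng = np.random.default_rng(0)
P = rng.standard_normal((6, 6))
A = P @ J @ np.linalg.inv(P)

def blocks_of_size_at_least(A, lmbda, max_k):
    """l_k - l_{k-1}, where l_k = dim ker (A - lambda I)^k and l_0 = 0 (Remark 1.2.9)."""
    n = A.shape[0]
    ell = [0]
    for k in range(1, max_k + 1):
        M = np.linalg.matrix_power(A - lmbda * np.eye(n), k)
        ell.append(n - np.linalg.matrix_rank(M, tol=1e-8))
    return [ell[k] - ell[k - 1] for k in range(1, max_k + 1)]

print(blocks_of_size_at_least(A, 2.0, 4))   # [2, 1, 0, 0]: one J_1(2) and one J_2(2)
print(blocks_of_size_at_least(A, 5.0, 4))   # [1, 1, 1, 0]: one J_3(5)
```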