Simultaneous Triangularization of Certain Sets of Matrices
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
-
The Inverse Along a Lower Triangular Matrix
The inverse along a lower triangular matrix. Xavier Mary (Université Paris-Ouest Nanterre – La Défense, Laboratoire Modal'X, 200 avenue de la république, 92000 Nanterre, France; email: [email protected]) and Pedro Patrício (Departamento de Matemática e Aplicações, Universidade do Minho, 4710-057 Braga, Portugal; email: [email protected]).

Abstract: In this paper, we investigate the recently defined notion of inverse along an element in the context of matrices over a ring. Precisely, we study the inverse of a matrix along a lower triangular matrix, under some conditions.

Keywords: generalized inverse, inverse along an element, Dedekind-finite ring, Green's relations, rings. AMS classification: 15A09, 16E50.

1 Introduction
In this paper, R is a ring with identity. We say a is (von Neumann) regular in R if a ∈ aRa. A particular solution to axa = a is denoted by a⁻, and the set of all such solutions is denoted by a{1}. Given a⁻, a⁼ ∈ a{1}, then x = a⁼aa⁻ satisfies axa = a and xax = x simultaneously. Such a solution is called a reflexive inverse, and is denoted by a⁺. The set of all reflexive inverses of a is denoted by a{1, 2}. Finally, a is group invertible if there is a# ∈ a{1, 2} that commutes with a, and a is Drazin invertible if a^k is group invertible for some non-negative integer k. This is equivalent to the existence of a^D ∈ R such that a^{k+1}a^D = a^k, a^D a a^D = a^D, and a a^D = a^D a.
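To make the inner/reflexive-inverse axioms concrete in the familiar ring of real matrices, here is a small NumPy check (our own illustration, not from the paper, which works over general rings): the Moore-Penrose pseudoinverse of a singular matrix is one reflexive inverse.

```python
import numpy as np

# In M_2(R), the Moore-Penrose pseudoinverse x = pinv(a) of a singular
# matrix a satisfies both axa = a and xax = x.
a = np.array([[1., 2.],
              [2., 4.]])          # rank 1, hence not invertible
x = np.linalg.pinv(a)

assert np.allclose(a @ x @ a, a)  # x is an inner inverse: x in a{1}
assert np.allclose(x @ a @ x, x)  # x is reflexive: x in a{1, 2}
```
-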
MATH 2370, Practice Problems
MATH 2370, Practice Problems. Kiumars Kaveh.

Problem: Prove that an n × n complex matrix A is diagonalizable if and only if there is a basis consisting of eigenvectors of A.

Problem: Let A : V → W be a one-to-one linear map between two finite-dimensional vector spaces V and W. Show that the dual map A′ : W′ → V′ is surjective.

Problem: Determine if the curve {(x, y) ∈ R² | x² + y² + xy = 10} is an ellipse or hyperbola or union of two lines.

Problem: Show that if a nilpotent matrix is diagonalizable then it is the zero matrix.

Problem: Let P be a permutation matrix. Show that P is diagonalizable. Show that if λ is an eigenvalue of P then for some integer m > 0 we have λ^m = 1 (i.e. λ is an m-th root of unity). Hint: Note that P^m = I for some integer m > 0.

Problem: Show that if λ is an eigenvalue of an orthogonal matrix A then |λ| = 1.

Problem: Take a vector v ∈ Rⁿ and let H be the hyperplane orthogonal to v. Let R : Rⁿ → Rⁿ be the reflection with respect to the hyperplane H. Prove that R is a diagonalizable linear map.

Problem: Prove that if λ1, λ2 are distinct eigenvalues of a complex matrix A then the intersection of the generalized eigenspaces Eλ1 and Eλ2 is zero (this is part of the Spectral Theorem).

Problem: Let H = (hij) be a 2 × 2 Hermitian matrix. Use the Minimax Principle to show that if λ1 ≤ λ2 are the eigenvalues of H then λ1 ≤ h11 ≤ λ2.
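As a numerical sanity check (not a proof) of the permutation-matrix problem, take the 3 × 3 cyclic permutation matrix — an example of our own choosing, not from the problem set — and verify that P³ = I and that every eigenvalue is a cube root of unity:

```python
import numpy as np

# P permutes the standard basis vectors cyclically, so P^3 = I.
P = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
assert np.allclose(np.linalg.matrix_power(P, 3), np.eye(3))

lam = np.linalg.eigvals(P)
print(np.abs(lam))   # [1. 1. 1.]  -- each |lambda| = 1
print(lam ** 3)      # all approximately 1: each lambda is a cube root of unity
```
-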
LU Decompositions
LU Decompositions

We seek a factorization of a square matrix A into the product of two matrices which yields an efficient method for solving the system Ax = b, where A is the coefficient matrix, x is our variable vector, and b is a constant vector. The factorization of A into the product of two matrices is closely related to Gaussian elimination.

Definition
1. A square matrix A = (aij) is said to be lower triangular if aij = 0 for all i < j.
2. A square matrix is said to be unit lower triangular if it is lower triangular and each diagonal entry aii = 1.
3. A square matrix is said to be upper triangular if aij = 0 for all i > j.

We note that the identity matrix is the only square matrix that is both unit lower triangular and upper triangular.

Example
For the three elementary matrices E1, E2, E3 of Gaussian elimination (see solv_lin_equ2.pdf), direct computation yields E3E2E1A = U with U upper triangular, and hence A = (E3E2E1)⁻¹U = LU, where L is unit lower triangular and U is upper triangular. Observe the key fact that the unit lower triangular matrix L "contains" the essential data of the three elementary matrices E1, E2, and E3.

Definition
We say that the matrix A has an LU decomposition if A = LU where L is unit lower triangular and U is upper triangular. We also call the LU decomposition an LU factorization.

Example
Exhibiting a factorization A = LU directly shows that A has an LU decomposition. Note also that an LU decomposition need not be unique: some matrices admit more than one LU factorization.
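A minimal sketch of how the factorization is used to solve Ax = b, here via SciPy (the matrices are our own example; note SciPy computes the pivoted variant A = PLU, so PᵀA = LU in the notation above):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[2., 3., 1.],
              [4., 7., 5.],
              [6., 18., 22.]])
b = np.array([1., 2., 3.])

# scipy.linalg.lu returns A = P @ L @ U with L unit lower triangular
# and U upper triangular.
P, L, U = lu(A)
assert np.allclose(A, P @ L @ U)

# Solve A x = b by two cheap triangular solves: L y = P^T b, then U x = y.
y = solve_triangular(L, P.T @ b, lower=True, unit_diagonal=True)
x = solve_triangular(U, y)
assert np.allclose(A @ x, b)
```

Once L and U are known, each additional right-hand side b costs only two triangular solves, which is the efficiency the text refers to.
-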
Handout 9 More Matrix Properties; the Transpose
Handout 9: More matrix properties; the transpose

Square matrix properties
These properties only apply to a square matrix, i.e. n × n.
• The leading diagonal is the diagonal line consisting of the entries a11, a22, a33, …, ann.
• A diagonal matrix has zeros everywhere except the leading diagonal.
• The identity matrix I has zeros off the leading diagonal, and 1 for each entry on the diagonal. It is a special case of a diagonal matrix, and AI = IA = A for any n × n matrix A.
• An upper triangular matrix has all its non-zero entries on or above the leading diagonal.
• A lower triangular matrix has all its non-zero entries on or below the leading diagonal.
• A symmetric matrix has the same entries below and above the diagonal: aij = aji for any values of i and j between 1 and n.
• An antisymmetric or skew-symmetric matrix has the opposite entries below and above the diagonal: aij = −aji for any values of i and j between 1 and n. This automatically means the diagonal entries must all be zero.

Transpose
To transpose a matrix, we reflect it across the line given by the leading diagonal a11, a22, etc. In general the result is a different shape to the original matrix: if

A = [a11 a12 a13; a21 a22 a23], then Aᵀ = [a11 a21; a12 a22; a13 a23], and in general [Aᵀ]ij = Aji.

• If A is m × n then Aᵀ is n × m.
• The transpose of a symmetric matrix is itself: Aᵀ = A (recalling that only square matrices can be symmetric).
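The definitions above are easy to check numerically; a quick NumPy demonstration (example matrices of our own choosing):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])              # 2 x 3
assert A.T.shape == (3, 2)                # transpose of m x n is n x m
assert np.array_equal(A.T.T, A)           # reflecting twice recovers A
assert np.array_equal(A.T[2, 0], A[0, 2]) # [A^T]_ij = A_ji

S = np.array([[1., 7.], [7., 3.]])
K = np.array([[0., 2.], [-2., 0.]])
assert np.array_equal(S, S.T)             # symmetric: a_ij = a_ji
assert np.array_equal(K, -K.T)            # skew-symmetric: a_ij = -a_ji,
                                          # forcing a zero diagonal
```
-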
Matrix Lie Groups
Maths Seminar 2007: Matrix Lie Groups. Claudiu C. Remsing, Dept of Mathematics (Pure and Applied), Rhodes University, Grahamstown 6140. 26 September 2007.

Talk outline:
1. What is a matrix Lie group?
2. Matrices revisited.
3. Examples of matrix Lie groups.
4. Matrix Lie algebras.
5. A glimpse at elementary Lie theory.
6. Life beyond elementary Lie theory.

1. What is a matrix Lie group?
• Matrix Lie groups are groups of invertible matrices that have desirable geometric features. So matrix Lie groups are simultaneously algebraic and geometric objects.
• Matrix Lie groups naturally arise in geometry (classical, algebraic, differential), complex analysis, differential equations, Fourier analysis, algebra (group theory, ring theory), number theory, and combinatorics.
• Matrix Lie groups are encountered in many applications in physics (geometric mechanics, quantum control), engineering (motion control, robotics), computational chemistry (molecular motion), and computer science (computer animation, computer vision, quantum computation).
• "It turns out that matrix [Lie] groups pop up in virtually any investigation of objects with symmetries, such as molecules in chemistry, particles in physics, and projective spaces in geometry." (K. Tapp, 2005)

• EXAMPLE 1: The Euclidean group E(2).
E(2) = {F : R² → R² | F is an isometry}.
The vector space R² is equipped with the standard Euclidean structure (the "dot product") x • y = x1y1 + x2y2 (x, y ∈ R²), hence with the Euclidean distance d(x, y) = √((y − x) • (y − x)) (x, y ∈ R²).
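As a small numerical illustration (our own, not from the talk): a map F(x) = Rx + t with R a rotation preserves the Euclidean distance defined above, so it belongs to E(2).

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a rotation: R^T R = I
t = np.array([3.0, -1.0])                        # a translation

def F(x):
    """An isometry of R^2: rotate by theta, then translate by t."""
    return R @ x + t

rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
# d(F(x), F(y)) = d(x, y), the defining property of membership in E(2).
assert np.isclose(np.linalg.norm(F(x) - F(y)), np.linalg.norm(x - y))
```
-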
Triangular Factorization
Chapter 1: Triangular Factorization

This chapter deals with the factorization of arbitrary matrices into products of triangular matrices. Since the solution of a linear n × n system can be easily obtained once the matrix is factored into the product of triangular matrices, we will concentrate on the factorization of square matrices. Specifically, we will show that an arbitrary n × n matrix A has the factorization PA = LU where P is an n × n permutation matrix, L is an n × n unit lower triangular matrix, and U is an n × n upper triangular matrix. In connection with this factorization we will discuss pivoting, i.e., row interchange, strategies. We will also explore circumstances for which A may be factored in the forms A = LU or A = LLᵀ. Our results for a square system will be given for a matrix with real elements but can easily be generalized for complex matrices. The corresponding results for a general m × n matrix will be accumulated in Section 1.4. In the general case an arbitrary m × n matrix A has the factorization PA = LU where P is an m × m permutation matrix, L is an m × m unit lower triangular matrix, and U is an m × n matrix having row echelon structure.

1.1 Permutation matrices and Gauss transformations
We begin by defining permutation matrices and examining the effect of premultiplying or postmultiplying a given matrix by such matrices. We then define Gauss transformations and show how they can be used to introduce zeros into a vector.

Definition 1.1 An m × m permutation matrix is a matrix whose columns consist of a rearrangement of the m unit vectors e⁽ʲ⁾, j = 1, …, m, in Rᵐ, i.e., a rearrangement of the columns (or rows) of the m × m identity matrix.
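A compact sketch of the PA = LU factorization with partial pivoting, built from the Gauss transformations described above (the routine and test matrix are our own illustration; it assumes A is square and nonsingular):

```python
import numpy as np

def plu(A):
    """PA = LU with partial pivoting.
    Returns permutation P, unit lower triangular L, upper triangular U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    U = A
    for k in range(n - 1):
        # Pivot: bring the largest-magnitude entry of column k up to row k.
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], :] = U[[p, k], :]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
        # Gauss transformation: introduce zeros below the pivot U[k, k].
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
P, L, U = plu(A)
assert np.allclose(P @ A, L @ U)
```

Each step multiplies by a Gauss transformation on the left; the multipliers L[i, k] are exactly the data those transformations carry, stored below the unit diagonal of L.
-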
8.3 Positive Definite Matrices
Exercise 8.2.25 Show that every 2 × 2 orthogonal matrix has the form
[cos θ  −sin θ; sin θ  cos θ]  or  [cos θ  sin θ; sin θ  −cos θ]
for some angle θ. [Hint: If a² + b² = 1, then a = cos θ and b = sin θ for some angle θ.]

Exercise 8.2.26 Use Theorem 8.2.5 to show that every symmetric matrix is orthogonally diagonalizable.

8.3 Positive Definite Matrices

All the eigenvalues of any symmetric matrix are real; this section is about the case in which the eigenvalues are positive. These matrices, which arise whenever optimization (maximum and minimum) problems are encountered, have countless applications throughout science and engineering. They also arise in statistics (for example, in factor analysis used in the social sciences) and in geometry (see Section 8.9). We will encounter them again in Chapter 10 when describing all inner products in Rⁿ.

Definition 8.5 (Positive Definite Matrices). A square matrix is called positive definite if it is symmetric and all its eigenvalues λ are positive, that is λ > 0.

Because these matrices are symmetric, the principal axes theorem plays a central role in the theory.

Theorem 8.3.1 If A is positive definite, then it is invertible and det A > 0.

Proof. If A is n × n and the eigenvalues are λ1, λ2, …, λn, then det A = λ1λ2···λn > 0 by the principal axes theorem (or the corollary to Theorem 8.2.5).

If x is a column in Rⁿ and A is any real n × n matrix, we view the 1 × 1 matrix xᵀAx as a real number.
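The definition and Theorem 8.3.1 are easy to test numerically; a brief NumPy sketch (the tridiagonal matrix is our own example):

```python
import numpy as np

A = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])     # symmetric tridiagonal

# Positive definite <=> symmetric with all eigenvalues > 0.
eig = np.linalg.eigvalsh(A)       # eigenvalues of a symmetric matrix
assert np.allclose(A, A.T) and (eig > 0).all()

# det A = product of the (positive) eigenvalues, so det A > 0.
assert np.isclose(np.linalg.det(A), eig.prod()) and np.linalg.det(A) > 0

# Cholesky succeeds exactly for positive definite matrices (it raises
# LinAlgError otherwise), giving a cheap practical test.
Lc = np.linalg.cholesky(A)
assert np.allclose(Lc @ Lc.T, A)
```
-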
LIE GROUPS AND ALGEBRAS NOTES
LIE GROUPS AND ALGEBRAS NOTES. Stanislav Atanasov.

Contents:
1. Definitions
 1.1. Root systems, Weyl groups and Weyl chambers
 1.2. Cartan matrices and Dynkin diagrams
 1.3. Weights
 1.4. Lie group and Lie algebra correspondence
2. Basic results about Lie algebras
 2.1. General
 2.2. Root system
 2.3. Classification of semisimple Lie algebras
3. Highest weight modules
 3.1. Universal enveloping algebra
 3.2. Weights and maximal vectors
4. Compact Lie groups
 4.1. Peter-Weyl theorem
 4.2. Maximal tori
 4.3. Symmetric spaces
 4.4. Compact Lie algebras
 4.5. Weyl's theorem
5. Semisimple Lie groups
 5.1. Semisimple Lie algebras
 5.2. Parabolic subalgebras
 5.3. Semisimple Lie groups
6. Reductive Lie groups
 6.1. Reductive Lie algebras
 6.2. Definition of reductive Lie group
 6.3. Decompositions
 6.4. The structure of M = Z_K(a0)
 6.5. Parabolic subgroups
7. Functional analysis on Lie groups
 7.1. Decomposition of the Haar measure
 7.2. Reductive groups and parabolic subgroups
 7.3. Weyl integration formula
8. Linear algebraic groups and their representation theory
 8.1. Linear algebraic groups
 8.2. Reductive and semisimple groups
 8.3. Parabolic and Borel subgroups
 8.4. Decompositions

Date: October 2018. These notes compile results from multiple sources, mostly [1, 2]. All mistakes are mine.

1. Definitions
Let g be a Lie algebra over an algebraically closed field F of characteristic 0.
-
A Theorem of Kaplansky Revisited (arXiv:1508.00183v2 [math.RA], 6 Aug 2015)
A THEOREM OF KAPLANSKY REVISITED. Heydar Radjavi and Bamdad R. Yahaghi.

Abstract. We present a new and simple proof of a theorem due to Kaplansky which unifies theorems of Kolchin and Levitzki on triangularizability of semigroups of matrices. We also give two different extensions of the theorem. As a consequence, we prove the counterpart of Kolchin's Theorem for finite groups of unipotent matrices over division rings. We also show that the counterpart of Kolchin's Theorem over division rings of characteristic zero implies that of Kaplansky's Theorem over such division rings.

1. Introduction
The purpose of this short note is three-fold. We give a simple proof of Kaplansky's Theorem on the (simultaneous) triangularizability of semigroups whose members all have singleton spectra. Our proof, although not independent of Kaplansky's, avoids the deeper group theoretical aspects present in it and in other existing proofs we are aware of. Also, this proof can be adjusted to give an affirmative answer to Kolchin's Problem for finite groups of unipotent matrices over division rings. In particular, it follows from this proof that the counterpart of Kolchin's Theorem over division rings of characteristic zero implies that of Kaplansky's Theorem over such division rings. We also present extensions of Kaplansky's Theorem in two different directions.

Let us fix some notation. Let ∆ be a division ring and Mn(∆) the algebra of all n × n matrices over ∆. The division ring ∆ could in particular be a field. By a semigroup S ⊆ Mn(∆), we mean a set of matrices closed under multiplication.
-
Unipotent Flows and Applications
Clay Mathematics Proceedings, Volume 10, 2010. Unipotent Flows and Applications. Alex Eskin.

1. General introduction
1.1. Values of indefinite quadratic forms at integral points. The Oppenheim Conjecture. Let

Q(x1, …, xn) = Σ_{1≤i≤j≤n} aij xi xj

be a quadratic form in n variables. We always assume that Q is indefinite, so that there exists p with 1 ≤ p < n such that after a linear change of variables, Q can be expressed as:

Qp(y1, …, yn) = Σ_{i=1}^{p} yi² − Σ_{i=p+1}^{n} yi².

We should think of the coefficients aij of Q as real numbers (not necessarily rational or integer). One can still ask what will happen if one substitutes integers for the xi. It is easy to see that if Q is a multiple of a form with rational coefficients, then the set of values Q(Zⁿ) is a discrete subset of R. Much deeper is the following conjecture:

Conjecture 1.1 (Oppenheim, 1929). Suppose Q is not proportional to a rational form and n ≥ 5. Then Q(Zⁿ) is dense in the real line.

This conjecture was extended by Davenport to n ≥ 3.

Theorem 1.2 (Margulis, 1986). The Oppenheim Conjecture is true as long as n ≥ 3.

Thus, if n ≥ 3 and Q is not proportional to a rational form, then Q(Zⁿ) is dense in R. This theorem is a triumph of ergodic theory. Before Margulis, the Oppenheim Conjecture was attacked by analytic number theory methods. (In particular it was known for n ≥ 21, and for diagonal forms with n ≥ 5.)
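Margulis's theorem can be glimpsed numerically (an illustration of density, in no way a proof; the form below is our own choice of an irrational indefinite form with n = 3):

```python
import itertools
import math

# Q(x, y, z) = x^2 + y^2 - sqrt(2) z^2 is indefinite, not proportional to a
# rational form, and has n = 3 variables, so Q(Z^3) is dense in R.
def Q(x, y, z):
    return x * x + y * y - math.sqrt(2) * z * z

# Values at integer points in a box: watch how finely they fill (-1, 1).
vals = sorted(Q(x, y, z)
              for x, y, z in itertools.product(range(-30, 31), repeat=3))
window = [v for v in vals if abs(v) < 1.0]
gaps = [b - a for a, b in zip(window, window[1:])]
print(len(window), "values in (-1, 1); largest gap:", max(gaps))
```

Enlarging the box shrinks the largest gap, consistent with (though of course not proving) density of Q(Z³) in R.
-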
Marginalia About Roots
Stefan Dawydiak, February 19, 2021. Marginalia about roots.

These notes are an attempt to maintain an overview collection of facts about and relationships between some situations in which root systems and root data appear. They also serve to track some common identifications and choices. The references include some helpful lecture notes with more examples. The author of these notes learned this material from courses taught by Zinovy Reichstein, Joel Kamnitzer, James Arthur, and Florian Herzig, as well as many student talks, and lecture notes by Ivan Loseu. These notes are simply collected marginalia for those references. Any errors introduced, especially of viewpoint, are the author's own. The author of these notes would be grateful for their communication to [email protected].

Contents:
1 Root systems
 1.1 Root space decomposition
 1.2 Roots, coroots, and reflections
  1.2.1 Abstract root systems
  1.2.2 Coroots, fundamental weights and Cartan matrices
  1.2.3 Roots vs weights
  1.2.4 Roots at the group level
 1.3 The Weyl group
  1.3.1 Weyl Chambers
  1.3.2 The Weyl group as a subquotient for compact Lie groups
  1.3.3 The Weyl group as a subquotient for noncompact Lie groups
2 Root data
 2.1 Root data
 2.2 The Langlands dual group
 2.3 The flag variety
  2.3.1 Bruhat decomposition revisited
  2.3.2 Schubert cells
3 Adelic groups
 3.1 Weyl sets
References

1 Root systems
The following examples are taken mostly from [8] where they are stated without most of the calculations.
-
Chapter 6 Jordan Canonical Form
Chapter 6: Jordan Canonical Form

6.1 Primary Decomposition Theorem
For a general matrix, we shall now try to obtain the analogues of the ideas we have in the case of diagonalizable matrices.

Diagonalizable case:
• Minimal polynomial: mA(λ) = ∏_{j=1}^{k} (λ − λj)^{rj}, where rj = 1 for all j.
• Eigenspaces: Wj = null space of (A − λjI), dim(Wj) = gj, and gj = aj for all j.
• Decomposition: Fⁿ = W1 ⊕ W2 ⊕ ··· ⊕ Wk; every x ∈ Fⁿ has unique xj ∈ Wj, 1 ≤ j ≤ k, such that x = x1 + x2 + ··· + xk.

General case:
• Minimal polynomial: mA(λ) = ∏_{j=1}^{k} (λ − λj)^{rj}, where 1 ≤ rj ≤ aj for all j.
• Eigenspaces: Wj = null space of (A − λjI), dim(Wj) = gj, and gj ≤ aj for all j.
• Generalized eigenspaces: Vj = null space of (A − λjI)^{rj}, dim(Vj) = aj.
• W = W1 ⊕ W2 ⊕ ··· ⊕ Wk is a subspace of Fⁿ.
• Primary Decomposition Theorem: Fⁿ = V1 ⊕ V2 ⊕ ··· ⊕ Vk; every x ∈ Fⁿ has unique vj ∈ Vj, 1 ≤ j ≤ k, such that x = v1 + v2 + ··· + vk.

6.2 Analogues of Lagrange Polynomials
Diagonalizable case: define fj(λ) = ∏_{i≠j} (λ − λi). This is a set of coprime polynomials, and hence there exist polynomials gj(λ), 1 ≤ j ≤ k, such that Σ_{j=1}^{k} fj(λ)gj(λ) = 1. The polynomials ℓj(λ) = fj(λ)gj(λ), 1 ≤ j ≤ k, are the Lagrange interpolation polynomials: ℓ1(λ) + ℓ2(λ) + ··· + ℓk(λ) = 1 and λ1ℓ1(λ) + λ2ℓ2(λ) + ··· + λkℓk(λ) = λ. For i ≠ j the product ℓi(λ)ℓj(λ) has a factor mA(λ).

General case: define Fj(λ) = ∏_{i≠j} (λ − λi)^{ri}. This is a set of coprime polynomials, and hence there exist polynomials Gj(λ), 1 ≤ j ≤ k, such that Σ_{j=1}^{k} Fj(λ)Gj(λ) = 1. The polynomials Lj(λ) = Fj(λ)Gj(λ), 1 ≤ j ≤ k, are the counterparts of the Lagrange interpolation polynomials: L1(λ) + L2(λ) + ··· + Lk(λ) = 1 and λ1L1(λ) + λ2L2(λ) + ··· + λkLk(λ) = d(λ) (a polynomial, but it need not be = λ). For i ≠ j the product Li(λ)Lj(λ) has a factor mA(λ).

6.3 Matrix Decomposition
In the diagonalizable case, using the Lagrange polynomials we obtained a decomposition of the matrix A.
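A computational sketch of the primary decomposition (our own example, using SymPy): for the matrix below, λ = 2 has algebraic multiplicity 2 but geometric multiplicity 1, so the generalized eigenspace V1 = null((A − 2I)²) is strictly larger than the eigenspace W1 = null(A − 2I), and the Jordan form carries a 2 × 2 block.

```python
from sympy import Matrix, eye

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

# Eigenspace W1 = null(A - 2I): dimension g1 = 1.
W1 = (A - 2 * eye(3)).nullspace()
# Generalized eigenspace V1 = null((A - 2I)^2): dimension a1 = 2.
V1 = ((A - 2 * eye(3)) ** 2).nullspace()
print(len(W1), len(V1))   # 1 2

# Jordan canonical form: A = P J P^{-1}, one 2x2 block for lambda = 2.
P, J = A.jordan_form()
assert A == P * J * P.inv()
print(J)
```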