Effective Procedure for Computing the Jordan Normal Form of a Nilpotent Matrix

Given an $n \times n$ nilpotent matrix $A$, define the linear transformation $f : \mathbb{R}^n \to \mathbb{R}^n$ by $f(x) = Ax$. Then $A$ represents $f$ with respect to the standard basis. Define the subspaces

$$V_0 = \mathbb{R}^n, \quad V_1 = \operatorname{cs}(A), \quad V_2 = \operatorname{cs}(A^2), \quad \dots, \quad V_n = \operatorname{cs}(A^n).$$

Note that $f$ maps $V_k$ into $V_k$ for each $k$. This is clear when $k = 0$. For $k \geq 1$, let $x \in V_k$ be given. Then $x = A^k y$ for some $y \in \mathbb{R}^n$, hence

$$f(x) = Ax = A^{k+1} y = A^k (Ay) \in \operatorname{cs}(A^k) = V_k.$$

For each $k$ define $f_k : V_k \to V_k$ by $f_k(x) = Ax$. Then each $f_k$ is a nilpotent transformation. We will find a Jordan normal basis for each $V_k$, working backwards from the largest value of $k$ such that $A^k \neq 0$.

Base case: find the largest value of $k$ such that $A^k \neq 0$, so that $A^{k+1} = 0$. We have $V_k = \operatorname{cs}(A^k) \neq \{0\}$, and $f_k(x) = f_k(A^k y) = A^{k+1} y = 0$ for all $x \in V_k$. Hence $f_k$ is represented by the zero matrix with respect to any basis for $V_k$, and the size of this matrix is $\dim V_k$. Since the zero matrix is Jordan normal of type $J_0(1, 1, \dots)$, we have found a Jordan normal basis for $V_k$.

Induction hypothesis: we have found a Jordan normal basis for $V_j$, where $j \geq 1$. Assume it has the form

$$\{A^{n_1 - 1} v_1, \dots, A^0 v_1, \; A^{n_2 - 1} v_2, \dots, A^0 v_2, \; \dots, \; A^{n_k - 1} v_k, \dots, A^0 v_k\}$$

where

$$A^{n_1} v_1 = A^{n_2} v_2 = \cdots = A^{n_k} v_k = 0.$$

Now we find a Jordan normal basis for $V_{j-1}$. Find $u_i$ so that $A u_i = v_i$ for each $i \leq k$; such $u_i$ exist because $V_j = \operatorname{cs}(A^j) = A \cdot \operatorname{cs}(A^{j-1}) = f(V_{j-1})$. Expand the vectors $A^{n_1 - 1} v_1, \dots, A^{n_k - 1} v_k$, which belong to the kernel of $f_{j-1}$, to a basis

$$A^{n_1 - 1} v_1, \dots, A^{n_k - 1} v_k, \; v_{k+1}, \dots, v_p$$

for $\ker(f_{j-1})$. Then by a previous theorem,

$$\{A^{n_1} u_1, \dots, A^0 u_1, \; \dots, \; A^{n_k} u_k, \dots, A^0 u_k, \; A^0 v_{k+1}, \dots, A^0 v_p\}$$

is a Jordan normal basis for $V_{j-1}$.

Example:

$$A = \begin{bmatrix} 0 & 1 & 7 & 19 & 54 & 61 \\ 0 & 0 & 0 & -7 & -29 & -41 \\ 0 & 0 & 0 & 1 & 3 & -1 \\ 0 & 0 & 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad A^2 = \begin{bmatrix} 0 & 0 & 0 & 0 & 11 & 66 \\ 0 & 0 & 0 & 0 & -7 & -42 \\ 0 & 0 & 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad A^3 = 0.$$

Since the two nonzero columns of $A^2$ are proportional, a Jordan normal basis for $V_2$ is

$$\{(11, -7, 1, 0, 0, 0)^T\}.$$

This vector belongs to the kernel of $f_1$. We must find other vectors to add to it to create a basis for the kernel of $f_1$. To do this efficiently, we first find a matrix representation of $f_1$.
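Before carrying on, the computations so far are easy to check by machine. The following is a minimal sketch, not part of the original notes, using Python with sympy for exact arithmetic; the matrix entries are copied from the example and the variable names are ours. It reproduces the largest $k$ with $A^k \neq 0$ and the dimensions of the subspaces $V_k = \operatorname{cs}(A^k)$.

```python
# Sketch: verify the powers of A and the dimensions of V_k = cs(A^k).
from sympy import Matrix

A = Matrix([
    [0, 1, 7, 19,  54,  61],
    [0, 0, 0, -7, -29, -41],
    [0, 0, 0,  1,   3,  -1],
    [0, 0, 0,  0,   1,   6],
    [0, 0, 0,  0,   0,   0],
    [0, 0, 0,  0,   0,   0],
])
n = A.rows

# Largest k with A^k != 0; here A^2 != 0 but A^3 = 0, so k = 2.
k = max(j for j in range(n) if (A**j) != Matrix.zeros(n, n))
print("largest k with A^k != 0:", k)

# Dimensions of V_0 = R^n, V_1 = cs(A), V_2 = cs(A^2): expect 6, 3, 1.
for j in range(k + 1):
    print(f"dim V_{j} =", len((A**j).columnspace()))

# The single basis vector of V_2, up to scaling: (11, -7, 1, 0, 0, 0)^T.
print((A**2).columnspace()[0].T)
```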
A basis for $V_1$ is

$$\{(1, 0, 0, 0, 0, 0)^T, \; (19, -7, 1, 0, 0, 0)^T, \; (54, -29, 3, 1, 0, 0)^T\}.$$

The matrix representation of $f_1$ with respect to this basis is

$$\begin{bmatrix} 0 & 0 & -8 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.$$

The kernel of this matrix is

$$\operatorname{span}\{(1, 0, 0)^T, \; (0, 1, 0)^T\}.$$

This implies that a basis for the kernel of $f_1$ is

$$\{(1, 0, 0, 0, 0, 0)^T, \; (19, -7, 1, 0, 0, 0)^T\}.$$

We must find a vector $v$ such that

$$\{(11, -7, 1, 0, 0, 0)^T, \; v\}$$

has the same span as

$$\{(1, 0, 0, 0, 0, 0)^T, \; (19, -7, 1, 0, 0, 0)^T\}.$$

We can choose

$$v = (1, 0, 0, 0, 0, 0)^T,$$

since $(11, -7, 1, 0, 0, 0)^T = (19, -7, 1, 0, 0, 0)^T - 8\,(1, 0, 0, 0, 0, 0)^T$. We must also find $u$ so that

$$Au = (11, -7, 1, 0, 0, 0)^T.$$

We choose

$$u = (54, -29, 3, 1, 0, 0)^T.$$

The Jordan normal basis we have found for $V_1$ is now

$$\{Au, u, v\} = \{(11, -7, 1, 0, 0, 0)^T, \; (54, -29, 3, 1, 0, 0)^T, \; (1, 0, 0, 0, 0, 0)^T\}.$$

The vectors $Au$ and $v$ belong to the kernel of $f_0$. We will fill these out to a basis for the kernel of $f_0$. We already have a matrix representation of $f_0$: it is the matrix $A$ with respect to the standard basis. The kernel of $A$ has basis

$$\{(1, 0, 0, 0, 0, 0)^T, \; (0, 98, 0, -19, 6, -1)^T, \; (0, 0, 14, -19, 6, -1)^T\}.$$

These vectors have the same span as the vectors

$$\{(11, -7, 1, 0, 0, 0)^T, \; (1, 0, 0, 0, 0, 0)^T, \; (0, 0, 14, -19, 6, -1)^T\}.$$

So set

$$w = (0, 0, 14, -19, 6, -1)^T.$$

We must also find vectors $p$ and $q$ so that $Ap = u$ and $Aq = v$. We can use $p = e_5$ and $q = e_2$. So a Jordan normal basis for $V_0$ is

$$\{A^2 p, Ap, p, Aq, q, w\} = \{(11, -7, 1, 0, 0, 0)^T, \; (54, -29, 3, 1, 0, 0)^T, \; (0, 0, 0, 0, 1, 0)^T, \; (1, 0, 0, 0, 0, 0)^T, \; (0, 1, 0, 0, 0, 0)^T, \; (0, 0, 14, -19, 6, -1)^T\}.$$

One can check that the matrix representation of $f$ with respect to this basis is

$$\begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} = J_0(3, 2, 1).$$

Therefore, with

$$P = \begin{bmatrix} 11 & 54 & 0 & 1 & 0 & 0 \\ -7 & -29 & 0 & 0 & 1 & 0 \\ 1 & 3 & 0 & 0 & 0 & 14 \\ 0 & 1 & 0 & 0 & 0 & -19 \\ 0 & 0 & 1 & 0 & 0 & 6 \\ 0 & 0 & 0 & 0 & 0 & -1 \end{bmatrix},$$

we have $P^{-1} A P = J_0(3, 2, 1)$. As a corollary, we get $P^{-1} (\lambda I + A) P = J_\lambda(3, 2, 1)$:

$$P^{-1} \begin{bmatrix} \lambda & 1 & 7 & 19 & 54 & 61 \\ 0 & \lambda & 0 & -7 & -29 & -41 \\ 0 & 0 & \lambda & 1 & 3 & -1 \\ 0 & 0 & 0 & \lambda & 1 & 6 \\ 0 & 0 & 0 & 0 & \lambda & 0 \\ 0 & 0 & 0 & 0 & 0 & \lambda \end{bmatrix} P = J_\lambda(3, 2, 1).$$
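The final change of basis can also be verified mechanically. Here is a minimal sketch, again assuming Python with sympy, with $A$ and $P$ copied from above; it confirms both $P^{-1} A P = J_0(3, 2, 1)$ and the corollary for $\lambda I + A$.

```python
# Sketch: assemble P from the Jordan normal basis for V_0, in the order
# (A^2 p, A p, p, A q, q, w), and confirm P^{-1} A P = J_0(3, 2, 1).
from sympy import Matrix, eye, symbols, simplify

A = Matrix([
    [0, 1, 7, 19,  54,  61],
    [0, 0, 0, -7, -29, -41],
    [0, 0, 0,  1,   3,  -1],
    [0, 0, 0,  0,   1,   6],
    [0, 0, 0,  0,   0,   0],
    [0, 0, 0,  0,   0,   0],
])

P = Matrix([
    [11,  54, 0, 1, 0,   0],
    [-7, -29, 0, 0, 1,   0],
    [ 1,   3, 0, 0, 0,  14],
    [ 0,   1, 0, 0, 0, -19],
    [ 0,   0, 1, 0, 0,   6],
    [ 0,   0, 0, 0, 0,  -1],
])

J = P.inv() * A * P
print(J)  # J_0(3, 2, 1): nilpotent Jordan blocks of sizes 3, 2, 1

# Corollary: conjugating lambda*I + A by the same P only shifts the
# diagonal, giving J_lambda(3, 2, 1).
lam = symbols("lambda")
print(simplify(P.inv() * (lam * eye(6) + A) * P))
```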