7 Canonical Forms


7.1 Jordan Forms, part I

As previously, we remain concerned with the question of diagonalizability, more properly: given $T \in \mathcal{L}(V)$, find a basis $\beta$ such that the matrix $[T]_\beta$ is as close to diagonal as possible. In this chapter we see what is possible when $T$ is not diagonalizable. We start with an example.

Example 7.1. $A = \begin{pmatrix} -8 & 4 \\ -25 & 12 \end{pmatrix} \in M_2(\mathbb{R})$ has characteristic equation
$$p(t) = (-8 - t)(12 - t) + 4 \cdot 25 = t^2 - 4t + 4 = (t - 2)^2$$
and thus only one eigenvalue $\lambda = 2$. We quickly observe that the eigenspace has dimension 1:
$$E_2 = \mathcal{N}\begin{pmatrix} -10 & 4 \\ -25 & 10 \end{pmatrix} = \operatorname{Span}\left\{ \begin{pmatrix} 2 \\ 5 \end{pmatrix} \right\}$$
We cannot diagonalize $A$ since there are insufficient independent eigenvectors. However, if we consider a basis $\beta = \{v_1, v_2\}$ where $v_1 = \begin{pmatrix} 2 \\ 5 \end{pmatrix}$ is an eigenvector, we see that $[L_A]_\beta$ is upper-triangular, which is better than nothing. How simple can we make this matrix? Let $v_2 = \begin{pmatrix} x \\ y \end{pmatrix}$; then
$$Av_2 = \begin{pmatrix} -8x + 4y \\ -25x + 12y \end{pmatrix} = 2\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} -10x + 4y \\ -25x + 10y \end{pmatrix} = 2v_2 + (-5x + 2y)v_1 \implies [L_A]_\beta = \begin{pmatrix} 2 & -5x + 2y \\ 0 & 2 \end{pmatrix}$$
Since $v_2$ isn't parallel to $v_1$ (so $-5x + 2y \neq 0$), the only thing we cannot have is a diagonal matrix. The next best thing is to have the upper right corner be 1; for instance,
$$\beta = \{v_1, v_2\} = \left\{ \begin{pmatrix} 2 \\ 5 \end{pmatrix}, \begin{pmatrix} 1 \\ 3 \end{pmatrix} \right\} \implies [L_A]_\beta = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$$

The matrix $[L_A]_\beta$ in this example is of a special type:

Definition 7.2. A Jordan block is a square matrix of the form
$$J = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$$
where $\lambda \in \mathbb{F}$ and all non-indicated entries are zero. Every $1 \times 1$ matrix is also a Jordan block. A Jordan canonical form is a block-diagonal matrix $\operatorname{diag}(J_1, \ldots, J_m)$ where each $J_k$ is a Jordan block. A Jordan canonical basis for $T \in \mathcal{L}(V)$ is a basis $\beta$ of $V$ such that $[T]_\beta$ is a Jordan canonical form.

If a map is diagonalizable, then any eigenbasis is Jordan canonical and the corresponding Jordan canonical form is diagonal. Our interest is in the situation when a map isn't diagonalizable.

Example 7.3. It can easily be checked that if
$$A = \begin{pmatrix} -1 & 2 & 3 \\ -4 & 5 & 4 \\ -2 & 1 & 4 \end{pmatrix} \quad\text{and}\quad \beta = \{v_1, v_2, v_3\} = \left\{ \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\}$$
then
$$Av_1 = 3v_1, \quad Av_2 = v_1 + 3v_2, \quad Av_3 = 2v_3 \implies [L_A]_\beta = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$
Thus $\beta$ is a Jordan canonical basis for $A$ (that is, for $L_A$).

Generalized Eigenvectors

We have two goals: showing that a linear map has a Jordan canonical basis, and computing such a basis. For instance, in Example 7.3, if we were merely given $A$, how could we find $\beta$? We brute-forced this in Example 7.1, but such is not a reasonable approach in general. Eigenvectors get you some of the distance but not all the way:

• $v_1$ is an eigenvector in Example 7.1, but $v_2$ is not;
• $v_1, v_3$ are eigenvectors in Example 7.3, but $v_2$ is not.

The practical question is therefore how we fill out a Jordan canonical basis once we have all the eigenvectors. We now define the necessary objects.

Definition 7.4. Let $T \in \mathcal{L}(V)$ have an eigenvalue $\lambda$: its generalized eigenspace is
$$K_\lambda := \{ x \in V : (T - \lambda I)^k(x) = 0 \text{ for some } k \in \mathbb{N} \} = \bigcup_{k \in \mathbb{N}} \mathcal{N}\big((T - \lambda I)^k\big)$$
A non-zero element of $K_\lambda$ is called a generalized eigenvector. If $A \in M_n(\mathbb{F})$ then its generalized eigenspaces are those of the linear map $L_A \in \mathcal{L}(\mathbb{F}^n)$.

It is easy to check that our earlier Jordan canonical bases consist of generalized eigenvectors:

• In Example 7.1, we have one eigenvalue $\lambda = 2$. Since $(A - 2I)^2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$ is the zero matrix, it is immediate that we have a single generalized eigenspace: $K_2 = \mathbb{R}^2$. Every non-zero vector is therefore a generalized eigenvector.

• In Example 7.3, we easily check that
$$(A - 3I)v_1 = 0, \quad (A - 3I)^2 v_2 = 0, \quad (A - 2I)v_3 = 0$$
so that the basis $\beta$ consists of generalized eigenvectors. Indeed it can be seen that $K_3 = \operatorname{Span}\{v_1, v_2\}$ and $K_2 = E_2 = \operatorname{Span}\{v_3\}$, though to verify this thoroughly using only the definition is a little awkward.
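These checks are easy to mechanize. Below is a minimal sketch, assuming the Python library sympy is available; exact rational arithmetic avoids the rounding problems that make floating-point Jordan computations unreliable. The last two lines use sympy's built-in `jordan_form`, which returns a matrix $P$ whose columns form a Jordan canonical basis, possibly with the blocks ordered differently than in the text.

```python
# A sketch assuming sympy; the claims of Examples 7.1 and 7.3 are
# verified in exact arithmetic.
from sympy import Matrix, eye, zeros

# Example 7.1: (A - 2I)^2 = 0, so K_2 is all of R^2
A = Matrix([[-8, 4], [-25, 12]])
assert (A - 2*eye(2))**2 == zeros(2, 2)

# Example 7.3: beta = {v1, v2, v3} consists of generalized eigenvectors
B = Matrix([[-1, 2, 3], [-4, 5, 4], [-2, 1, 4]])
v1, v2, v3 = Matrix([1, 2, 0]), Matrix([1, 1, 1]), Matrix([1, 0, 1])
assert (B - 3*eye(3)) * v1 == zeros(3, 1)      # v1 lies in E_3
assert (B - 3*eye(3))**2 * v2 == zeros(3, 1)   # v2 lies in K_3 but not E_3
assert (B - 2*eye(3)) * v3 == zeros(3, 1)      # v3 lies in E_2 = K_2

P, J = B.jordan_form()   # B = P*J*P**-1; columns of P give a Jordan basis
print(J)                 # Jordan canonical form, blocks for eigenvalues 2 and 3
```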
In order to compute generalized eigenspaces directly, it is useful to invoke the main result of this section; we'll delay the proof until later.

Theorem 7.5. Suppose $T$ is a linear operator on a finite-dimensional vector space $V$ over $\mathbb{F}$. Suppose that the characteristic polynomial of $T$ splits over $\mathbb{F}$:
$$p(t) = (\lambda_1 - t)^{m_1} \cdots (\lambda_k - t)^{m_k} \qquad (*)$$
where the $\lambda_j$ are the distinct eigenvalues of $T$ with multiplicities $m_j$. Then:

1. For each eigenvalue, $K_{\lambda_j} = \mathcal{N}\big((T - \lambda_j I)^{m_j}\big)$;
2. $V = K_{\lambda_1} \oplus \cdots \oplus K_{\lambda_k}$, so that $\dim K_{\lambda_j} = m_j$ for each eigenvalue.

Compare, especially part 2, with the statement on diagonalizability from the start of the course. Observe how Example 7.3 works in this language: $p(t) = \det(A - tI) = (3 - t)^2 (2 - t)^1$, and
$$\mathcal{N}\big((A - 3I)^2\big) = \mathcal{N}\begin{pmatrix} 2 & -1 & -1 \\ 0 & 0 & 0 \\ 2 & -1 & -1 \end{pmatrix} = \operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \right\} = K_3 \implies \dim K_3 = 2$$
$$\mathcal{N}(A - 2I) = \operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\} = E_2 = K_2 \implies \dim K_2 = 1$$
$$\mathbb{R}^3 = K_3 \oplus K_2$$

Example 7.6. We find the generalized eigenspaces of the matrix $A = \begin{pmatrix} 5 & 2 & -1 \\ 0 & 0 & 0 \\ 9 & 6 & -1 \end{pmatrix}$. The characteristic polynomial is
$$p(t) = \det(A - tI) = -t \begin{vmatrix} 5 - t & -1 \\ 9 & -1 - t \end{vmatrix} = -t(t^2 - 4t + 4) = (0 - t)^1 (2 - t)^2$$
whence there are two eigenvalues.

• $\lambda = 0$ has multiplicity 1, and so $K_0 = \mathcal{N}\big((A - 0I)^1\big) = \mathcal{N}(A) = \operatorname{Span}\left\{ \begin{pmatrix} 1 \\ -1 \\ 3 \end{pmatrix} \right\}$. This is in fact $E_0$.

• $\lambda = 2$ has multiplicity 2, and so
$$K_2 = \mathcal{N}\big((A - 2I)^2\big) = \mathcal{N}\left( \begin{pmatrix} 3 & 2 & -1 \\ 0 & -2 & 0 \\ 9 & 6 & -3 \end{pmatrix}^2 \right) = \mathcal{N}\begin{pmatrix} 0 & -4 & 0 \\ 0 & 4 & 0 \\ 0 & -12 & 0 \end{pmatrix} = \operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \right\}$$

Observe that the corresponding eigenspace is only one-dimensional, $E_2 = \operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 0 \\ 3 \end{pmatrix} \right\}$, and so the matrix is not diagonalizable. Observe also that $\mathbb{R}^3 = K_0 \oplus K_2$, in accordance with the Theorem.
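This computation is easily reproduced by machine. A sketch, again assuming sympy: by part 1 of the Theorem, each generalized eigenspace is the nullspace of the single power $(A - \lambda I)^m$, so no union over $k$ is required.

```python
# A sketch assuming sympy: generalized eigenspaces of Example 7.6 via
# Theorem 7.5 part 1, K_lambda = N((A - lambda*I)^m).
from sympy import Matrix, eye

A = Matrix([[5, 2, -1], [0, 0, 0], [9, 6, -1]])
print(A.eigenvals())                    # {0: 1, 2: 2}, eigenvalue: multiplicity

K0 = A.nullspace()                      # lambda = 0, m = 1: K_0 = N(A) = E_0
K2 = ((A - 2*eye(3))**2).nullspace()    # lambda = 2, m = 2: K_2 = N((A - 2I)^2)
E2 = (A - 2*eye(3)).nullspace()         # the ordinary eigenspace, for comparison

# dim K_0 = 1 and dim K_2 = 2 match the multiplicities, so R^3 = K_0 (+) K_2,
# while dim E_2 = 1 < dim K_2 shows that A is not diagonalizable
print(len(K0), len(K2), len(E2))        # 1 2 1
```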
Properties of Generalized Eigenspaces and the Proof of Theorem 7.5

Quite a lot of work is required in order to justify our main result. You might want to skip the proofs at first reading.

Lemma 7.7. Let $\lambda$ be an eigenvalue of an operator $T$ on a vector space $V$. Then:

1. $E_\lambda$ is a subspace of $K_\lambda$, which is itself a subspace of $V$;
2. $K_\lambda$ is $T$-invariant;
3. If $\mu \neq \lambda$ and $K_\lambda$ is finite-dimensional, then $T - \mu I$ restricted to $K_\lambda$ is an isomorphism of $K_\lambda$. In particular, $K_\lambda$ contains no non-zero vector of the eigenspace $E_\mu$.

Proof. Part 1 is an easy exercise.

2. Let $x \in K_\lambda$; then $\exists k \in \mathbb{N}$ such that $(T - \lambda I)^k(x) = 0$. But then
$$(T - \lambda I)^k T(x) = (T - \lambda I)^k \big( T(x) - \lambda x + \lambda x \big) = (T - \lambda I)^{k+1}(x) + \lambda (T - \lambda I)^k(x) = 0 \implies T(x) \in K_\lambda$$

3. Since $K_\lambda$ is $T$-invariant, it is also $(T - \mu I)$-invariant. It is therefore enough to show that $T - \mu I$ is injective on $K_\lambda$. Suppose not; then $\exists x \in K_\lambda$ non-zero such that $(T - \mu I)(x) = 0$. Let $k \in \mathbb{N}$ be minimal such that $(T - \lambda I)^k(x) = 0$, and let $y = (T - \lambda I)^{k-1}(x)$. Clearly $y \in E_\lambda$. But then
$$(T - \mu I)(y) = \big( T - \lambda I + (\lambda - \mu) I \big)(T - \lambda I)^{k-1}(x) = (T - \lambda I)^{k-1} \big( T - \lambda I + (\lambda - \mu) I \big)(x) = (T - \lambda I)^{k-1}(T - \mu I)(x) = 0$$
Thus $y \in E_\mu \cap E_\lambda \implies y = 0$, contradicting the minimality of $k$.

Now we come to the main proof: it's a bit of a monster, but things become much easier once we're through with it.

Proof of Theorem 7.5. Fix an eigenvalue $\lambda$ with multiplicity $m$. Since $K_\lambda$ is $T$-invariant, the characteristic polynomial $p_\lambda(t)$ of the restriction $T|_{K_\lambda}$ divides that of $T$. Since $K_\lambda$ contains no eigenvectors except those in $E_\lambda$, we see that
$$p_\lambda(t) = (\lambda - t)^{\dim K_\lambda} \implies \dim K_\lambda \leq m$$
By the Cayley–Hamilton Theorem, $T|_{K_\lambda}$ satisfies its characteristic polynomial, whence
$$(\lambda I - T)^{\dim K_\lambda}(x) = 0 \quad \forall x \in K_\lambda \implies K_\lambda \subseteq \mathcal{N}\big((T - \lambda I)^m\big)$$
That $\mathcal{N}\big((T - \lambda I)^m\big) \subseteq K_\lambda$ is by definition of $K_\lambda$. We've therefore established part 1.

For part 2, we prove by induction on the number of distinct eigenvalues of $T$. Let $\dim V = n$ and suppose the characteristic polynomial of $T$ splits as in $(*)$.

(Base case) If $T$ has one eigenvalue, then $p(t) = (\lambda - t)^n$. By Cayley–Hamilton, $(T - \lambda I)^n(x) = 0$ for all $x \in V$, and so $V = K_\lambda$.

(Induction step) Fix $k \in \mathbb{N}$ and suppose the result is true for any map with $k - 1$ distinct eigenvalues. Suppose $T \in \mathcal{L}(V)$ has $k$ distinct eigenvalues. Let $\lambda_k$ have multiplicity $m_k$ and define
$$W = \mathcal{R}\big((T - \lambda_k I)^{m_k}\big)$$
The subspace $W$ has the following properties (a concrete instance is sketched after the list):

• $W$ is $T$-invariant:
$$w \in W \implies \exists x \text{ such that } w = (T - \lambda_k I)^{m_k}(x) \implies T(w) = (T - \lambda_k I)^{m_k + 1}(x) + \lambda_k w \in W$$
whence the characteristic polynomial of the restriction $T|_W$ divides that of $T$.

• By Lemma 7.7, we have isomorphisms $(T - \lambda_k I)\big|_{K_{\lambda_j}} \in \mathcal{L}(K_{\lambda_j})$ for each $j \neq k$.
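To make the induction-step construction concrete, here is a sketch (again assuming sympy) computing $W = \mathcal{R}\big((A - 2I)^2\big)$ for the matrix of Example 7.6, with $\lambda_k = 2$ and $m_k = 2$. Consistently with the strategy of the proof, $W$ turns out to be exactly the other generalized eigenspace $K_0$.

```python
# A sketch assuming sympy: the induction-step subspace W = R((A - lambda_k*I)^{m_k})
# for the matrix of Example 7.6, with lambda_k = 2 and m_k = 2.
from sympy import Matrix, eye, zeros

A = Matrix([[5, 2, -1], [0, 0, 0], [9, 6, -1]])
W = ((A - 2*eye(3))**2).columnspace()   # basis for the range W

w = W[0]
print(w.T)                              # proportional to (1, -1, 3)^T
assert len(W) == 1 and A * w == zeros(3, 1)
# Hence W = N(A) = K_0: the range of (A - 2I)^2 is precisely the generalized
# eigenspace for the other eigenvalue, and in particular W is A-invariant.
```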