A Useful Basis for Defective Matrices: Generalized Eigenvectors and the Jordan Form


S. G. Johnson, MIT 18.06 Spring 2009
April 22, 2009

1 Introduction

So far in the eigenproblem portion of 18.06, our strategy has been simple: find the eigenvalues $\lambda_i$ and the corresponding eigenvectors $x_i$ of a square matrix $A$, expand any vector of interest $u$ in the basis of these eigenvectors ($u = c_1 x_1 + \cdots + c_n x_n$), and then any operation with $A$ can be turned into the corresponding operation with $\lambda_i$ acting on each eigenvector. So, $A^k$ becomes $\lambda_i^k$, $e^{At}$ becomes $e^{\lambda_i t}$, and so on. But this relied on one key assumption: we require the $n \times n$ matrix $A$ to have a basis of $n$ independent eigenvectors. We call such a matrix $A$ diagonalizable.

Many important cases are always diagonalizable: matrices with $n$ distinct eigenvalues $\lambda_i$, real symmetric or orthogonal matrices, and complex Hermitian or unitary matrices. But there are rare cases where $A$ does not have a complete basis of $n$ eigenvectors: such matrices are called defective. For example, consider the matrix

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$$

This matrix has a characteristic polynomial $\lambda^2 - 2\lambda + 1$, with a repeated root (a single eigenvalue) $\lambda_1 = 1$. (Equivalently, since $A$ is upper triangular, we can read the determinant of $A - \lambda I$, and hence the eigenvalues, off the diagonal.) However, it only has a single independent eigenvector, because

$$A - I = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$

is obviously rank 1, and has a one-dimensional nullspace spanned by $x_1 = (1, 0)$.

Defective matrices arise rarely in practice, and usually only when one arranges for them intentionally, so we have not worried about them up to now. But it is important to have some idea of what happens when you have a defective matrix, partially for computational purposes, but also to understand conceptually what is possible. For example, what will be the result of

$$A^k \begin{pmatrix} 1 \\ 2 \end{pmatrix} \quad \text{or} \quad e^{At} \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$

for the defective matrix $A$ above, since $(1, 2)$ is not in the span of the (single) eigenvector of $A$? For diagonalizable matrices, this would grow as $\lambda^k$ or $e^{\lambda t}$, respectively, but what about defective matrices?

The textbook (Intro. to Linear Algebra, 4th ed. by Strang) covers the defective case only briefly, in section 6.6, with something called the Jordan form of the matrix, a generalization of diagonalization. In that short section, however, the Jordan form falls down out of the sky, and you don't really learn where it comes from or what its consequences are. In this section, we will take a different tack. For a diagonalizable matrix, the fundamental vectors are the eigenvectors, which are useful in their own right and give the diagonalization of the matrix as a side effect. For a defective matrix, to get a complete basis we need to supplement the eigenvectors with something called generalized eigenvectors. Generalized eigenvectors are useful in their own right, just like eigenvectors, and also give the Jordan form as a side effect. Here, however, we'll focus on defining, obtaining, and using the generalized eigenvectors, and not so much on the Jordan form.
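To make the $2 \times 2$ example above concrete, here is a short NumPy check (an illustrative sketch added here, not part of the original notes) confirming that $A$ has only the repeated eigenvalue $1$ and that $A - I$ has rank 1, so the eigenspace is one-dimensional and $A$ is defective:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Both eigenvalues are 1 (a repeated root of the characteristic polynomial).
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                                # [1. 1.]

# A - I has rank 1, so its nullspace (the eigenspace for lambda = 1)
# is only one-dimensional.
print(np.linalg.matrix_rank(A - np.eye(2)))   # 1

# The two columns returned by eig are numerically parallel: there is
# really only one independent eigenvector, spanned by (1, 0).
print(eigvecs)
```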
2 Defining generalized eigenvectors

In the example above, we had a $2 \times 2$ matrix $A$ but only a single eigenvector $x_1 = (1, 0)$. We need another vector to get a basis for $\mathbb{R}^2$. Of course, we could pick another vector at random, as long as it is independent of $x_1$, but we'd like it to have something to do with $A$, in order to help us with computations just like eigenvectors. The key thing is to look at $A - I$ above, and to notice that one of the columns is equal to $x_1$: the vector $x_1$ is in $C(A - I)$, so we can find a new generalized eigenvector $x_1^{(2)}$ satisfying

$$(A - I)x_1^{(2)} = x_1, \qquad x_1^{(2)} \perp x_1.$$

Notice that, since $x_1 \in N(A - I)$, we can add any multiple of $x_1$ to $x_1^{(2)}$ and still have a solution, so we can use Gram-Schmidt to get a unique solution $x_1^{(2)}$ perpendicular to $x_1$. This particular $2 \times 2$ equation is easy enough for us to solve by inspection, obtaining $x_1^{(2)} = (0, 1)$. Now we have a nice orthonormal basis for $\mathbb{R}^2$, and our basis has some simple relationship to $A$!

Before we talk about how to use these generalized eigenvectors, let's give a more general definition. Suppose that $\lambda_i$ is an eigenvalue of $A$ corresponding to a repeated root of $\det(A - \lambda_i I)$, but with only a single (ordinary) eigenvector $x_i$, satisfying, as usual:

$$(A - \lambda_i I)x_i = 0.$$

If $\lambda_i$ is a double root, we will need a second vector to complete our basis. Remarkably, it turns out to always be the case for a double root $\lambda_i$ that $x_i$ is in $C(A - \lambda_i I)$, just as in the example above. (This fact is proved in any number of advanced textbooks on linear algebra; I like Linear Algebra by P. D. Lax.) Hence, we can always find a unique second solution $x_i^{(2)}$ satisfying:

$$(A - \lambda_i I)x_i^{(2)} = x_i, \qquad x_i^{(2)} \perp x_i.$$

Again, we can choose $x_i^{(2)}$ to be perpendicular to $x_i^{(1)}$ via Gram-Schmidt—this is not strictly necessary, but gives a convenient orthogonal basis. (That is, the complete solution is always of the form $x_p + c x_i$, a particular solution $x_p$ plus any multiple of the nullspace basis $x_i$. If we choose $c = -x_i^T x_p / x_i^T x_i$ we get the unique orthogonal solution $x_i^{(2)}$.) We call $x_i^{(2)}$ a generalized eigenvector of $A$.

2.1 More than double roots

If we wanted to be more notationally consistent, we could use $x_i^{(1)}$ instead of $x_i$. If $\lambda_i$ is a triple root, we would find a third vector $x_i^{(3)}$ perpendicular to $x_i$ by requiring $(A - \lambda_i I)x_i^{(3)} = x_i^{(2)}$, and so on. In general, if $\lambda_i$ is an $m$-times repeated root, then we will always be able to find an orthogonal sequence of generalized eigenvectors $x_i^{(j)}$ for $j = 2, \ldots, m$ satisfying $(A - \lambda_i I)x_i^{(j)} = x_i^{(j-1)}$ and $(A - \lambda_i I)x_i^{(1)} = 0$. Even more generally, you might have cases with e.g. a triple root and two ordinary eigenvectors, where you need only one generalized eigenvector, or an $m$-times repeated root with $\ell > 1$ eigenvectors and $m - \ell$ generalized eigenvectors. However, cases with more than a double root are extremely rare in practice. Defective matrices are rare enough to begin with, so here we'll stick with the most common defective matrix, one with a double root $\lambda_i$: hence, one ordinary eigenvector $x_i$ and one generalized eigenvector $x_i^{(2)}$.
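The recipe above (solve $(A - \lambda_i I)x_i^{(2)} = x_i$ for a particular solution, then orthogonalize against $x_i$) is easy to carry out numerically. The sketch below is my addition, not from the original notes: the helper `generalized_eigenvector` is a hypothetical name, and it assumes NumPy's least-squares solver to produce a particular solution of the singular system (valid when $x_i$ really is in $C(A - \lambda_i I)$, as for a double root). It reproduces $x_1^{(2)} = (0, 1)$ for the example matrix:

```python
import numpy as np

def generalized_eigenvector(A, lam, x):
    """Given a double eigenvalue lam of A with (single) eigenvector x,
    solve (A - lam*I) x2 = x and orthogonalize x2 against x.
    Sketch of the procedure described in the notes, not library code."""
    n = A.shape[0]
    B = A - lam * np.eye(n)
    # Least squares gives a particular solution x_p of the singular system,
    # assuming x lies in the column space of B.
    x_p, *_ = np.linalg.lstsq(B, x, rcond=None)
    # Gram-Schmidt step: choose c = -(x^T x_p)/(x^T x) so that x2 = x_p + c*x
    # is the unique solution orthogonal to x.
    x2 = x_p - (x @ x_p) / (x @ x) * x
    return x2

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x1 = np.array([1.0, 0.0])           # ordinary eigenvector for lambda = 1
x1_2 = generalized_eigenvector(A, 1.0, x1)
print(x1_2)                          # approximately [0., 1.]
print((A - np.eye(2)) @ x1_2)        # approximately x1 = [1., 0.]
```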
3 Using generalized eigenvectors

Using an eigenvector $x_i$ is easy: multiplying by $A$ is just like multiplying by $\lambda_i$. But how do we use a generalized eigenvector $x_i^{(2)}$? The key is in the definition above. It immediately tells us that

$$A x_i^{(2)} = \lambda_i x_i^{(2)} + x_i.$$

It will turn out that this has a simple consequence for more complicated expressions like $A^k$ or $e^{At}$, but that's probably not obvious yet. Let's try multiplying by $A^2$:

$$A^2 x_i^{(2)} = A(A x_i^{(2)}) = A(\lambda_i x_i^{(2)} + x_i) = \lambda_i(\lambda_i x_i^{(2)} + x_i) + \lambda_i x_i = \lambda_i^2 x_i^{(2)} + 2\lambda_i x_i,$$

and then try $A^3$:

$$A^3 x_i^{(2)} = A(A^2 x_i^{(2)}) = A(\lambda_i^2 x_i^{(2)} + 2\lambda_i x_i) = \lambda_i^2(\lambda_i x_i^{(2)} + x_i) + 2\lambda_i^2 x_i = \lambda_i^3 x_i^{(2)} + 3\lambda_i^2 x_i.$$

From this, it's not hard to see the general pattern (which can be formally proved by induction):

$$A^k x_i^{(2)} = \lambda_i^k x_i^{(2)} + k\lambda_i^{k-1} x_i.$$

Notice that the coefficient in the second term is exactly $\frac{d}{d\lambda_i}\left(\lambda_i^k\right)$. This is the clue we need to get the general formula to apply any function $f(A)$ of the matrix $A$ to the eigenvector and the generalized eigenvector:

$$f(A)\, x_i = f(\lambda_i)\, x_i,$$
$$f(A)\, x_i^{(2)} = f(\lambda_i)\, x_i^{(2)} + f'(\lambda_i)\, x_i.$$

Multiplying by a function of the matrix multiplies $x_i^{(2)}$ by the same function of the eigenvalue, just as for an eigenvector, but also adds a term multiplying $x_i$ by the derivative $f'(\lambda_i)$. So, for example:

$$e^{At} x_i^{(2)} = e^{\lambda_i t} x_i^{(2)} + t e^{\lambda_i t} x_i.$$

We can show this explicitly by considering what happens when we apply our formula for $A^k$ in the Taylor series for $e^{At}$:

$$e^{At} x_i^{(2)} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!}\, x_i^{(2)} = \sum_{k=0}^{\infty} \frac{t^k}{k!}\left(\lambda_i^k x_i^{(2)} + k\lambda_i^{k-1} x_i\right) = \sum_{k=0}^{\infty} \frac{(\lambda_i t)^k}{k!}\, x_i^{(2)} + t\sum_{k=1}^{\infty} \frac{(\lambda_i t)^{k-1}}{(k-1)!}\, x_i = e^{\lambda_i t} x_i^{(2)} + t e^{\lambda_i t} x_i.$$

3.2 Example

Let's try this for our example $2 \times 2$ matrix $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ from above, which has an eigenvector $x_1 = (1, 0)$ and a generalized eigenvector $x_1^{(2)} = (0, 1)$ for an eigenvalue $\lambda_1 = 1$. Suppose we want to compute $A^k u_0$ and $e^{At} u_0$ for $u_0 = (1, 2)$. As usual, our first step is to write $u_0$ in the basis of the eigenvectors... except that now we also include the generalized eigenvectors to get a complete basis:

$$u_0 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} = x_1 + 2 x_1^{(2)}.$$

Now, computing $A^k u_0$ is easy, from our formula above:

$$A^k u_0 = A^k x_1 + 2 A^k x_1^{(2)} = \lambda_1^k x_1 + 2\lambda_1^k x_1^{(2)} + 2k\lambda_1^{k-1} x_1 = 1^k \begin{pmatrix} 1 \\ 2 \end{pmatrix} + 2k \cdot 1^{k-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 + 2k \\ 2 \end{pmatrix}.$$

For example, this is the solution to the recurrence $u_{k+1} = A u_k$.
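As a numerical sanity check (again an added sketch, not from the original notes; it assumes NumPy and SciPy's `expm`), we can verify both the closed form $A^k u_0 = (1 + 2k,\, 2)$ and the rule $e^{At} x_1^{(2)} = e^{\lambda_1 t} x_1^{(2)} + t e^{\lambda_1 t} x_1$ by direct computation:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x1 = np.array([1.0, 0.0])     # eigenvector for lambda = 1
x1_2 = np.array([0.0, 1.0])   # generalized eigenvector
u0 = np.array([1.0, 2.0])     # u0 = x1 + 2*x1_2

# A^k u0 should equal (1 + 2k, 2) for every k.
for k in range(6):
    direct = np.linalg.matrix_power(A, k) @ u0
    formula = np.array([1.0 + 2 * k, 2.0])
    assert np.allclose(direct, formula)

# e^{At} x1_2 should equal e^t * x1_2 + t * e^t * x1.
t = 0.7
direct = expm(A * t) @ x1_2
formula = np.exp(t) * x1_2 + t * np.exp(t) * x1
print(direct, formula)        # the two agree to machine precision
```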