Linear Algebra Notes

Nikhil Srivastava

February 9, 2015

Scalars are lowercase, matrices are uppercase, and vectors are lowercase bold. All vectors are column vectors (i.e., a vector in $\mathbb{R}^n$ is an $n \times 1$ matrix), unless transposed.

1  Column Picture of Matrix Vector Multiplication

Suppose $A$ is an $m \times n$ matrix with rows $\mathbf{r}_1^T, \ldots, \mathbf{r}_m^T$ and columns $\mathbf{c}_1, \ldots, \mathbf{c}_n$. In high school, we are taught to think of the matrix-vector product $A\mathbf{x}$ as taking dot products of $\mathbf{x}$ with the rows of $A$:

$$A\mathbf{x} = \begin{bmatrix} \mathbf{r}_1^T \\ \mathbf{r}_2^T \\ \vdots \\ \mathbf{r}_m^T \end{bmatrix} \mathbf{x} = \begin{bmatrix} \mathbf{r}_1^T\mathbf{x} \\ \mathbf{r}_2^T\mathbf{x} \\ \vdots \\ \mathbf{r}_m^T\mathbf{x} \end{bmatrix} = \begin{bmatrix} (\mathbf{r}_1 \cdot \mathbf{x}) \\ (\mathbf{r}_2 \cdot \mathbf{x}) \\ \vdots \\ (\mathbf{r}_m \cdot \mathbf{x}) \end{bmatrix}.$$

This makes sense if we regard matrices as essentially a convenient notation for representing linear equations, since each linear equation naturally gives a dot product. A different perspective is to view $A\mathbf{x}$ as taking a linear combination of the columns $\mathbf{c}_1, \ldots, \mathbf{c}_n$ of $A$, with coefficients equal to the entries of $\mathbf{x}$:

$$A\mathbf{x} = [\mathbf{c}_1 \,|\, \mathbf{c}_2 \,|\, \cdots \,|\, \mathbf{c}_n] \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = x_1\mathbf{c}_1 + x_2\mathbf{c}_2 + \cdots + x_n\mathbf{c}_n.$$

This view makes it transparent that matrices can be used to represent linear transformations with respect to a pair of bases. Suppose $T : \mathbb{R}^n \to \mathbb{R}^m$ is a linear transformation, and let $\mathbf{e}_1, \ldots, \mathbf{e}_n$ be the standard basis of $\mathbb{R}^n$. Then we can write any $\mathbf{x} \in \mathbb{R}^n$ as $\mathbf{x} = x_1\mathbf{e}_1 + \cdots + x_n\mathbf{e}_n$ for some coefficients $x_i$; indeed, this is what we mean when we identify $\mathbf{x}$ with its standard coordinate vector:

$$[\mathbf{x}] = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}.$$

Since $T$ is linear, it is completely determined by its values on the basis:

$$T(\mathbf{x}) = T(x_1\mathbf{e}_1 + \cdots + x_n\mathbf{e}_n) = x_1 T(\mathbf{e}_1) + \cdots + x_n T(\mathbf{e}_n).$$

Since these vectors are equal, their coordinate vectors in the standard basis of the "output" vector space $\mathbb{R}^m$ must also be equal:

$$[T(\mathbf{x})] = [T(x_1\mathbf{e}_1 + \cdots + x_n\mathbf{e}_n)] = x_1[T(\mathbf{e}_1)] + \cdots + x_n[T(\mathbf{e}_n)].$$

But since matrix-vector multiplication is the same thing as taking linear combinations, we may write this as

$$[T(\mathbf{x})] = \big[\,[T(\mathbf{e}_1)] \,|\, [T(\mathbf{e}_2)] \,|\, \cdots \,|\, [T(\mathbf{e}_n)]\,\big] \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},$$

where $[\mathbf{v}]$ denotes the standard coordinate vector of $\mathbf{v}$. Note that anything that appears inside a matrix must be some sort of coordinate vector: it does not make sense to put an abstract vector inside a matrix whose entries are numbers. However, as before, we will identify $\mathbf{v}$ with its standard coordinate vector $[\mathbf{v}]$ and drop the brackets when we are working in the standard basis. With this identification, we can write:

$$T(\mathbf{x}) = \big[\,T(\mathbf{e}_1) \,|\, T(\mathbf{e}_2) \,|\, \cdots \,|\, T(\mathbf{e}_n)\,\big] \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}.$$

The matrix above is called the standard matrix of $T$, and is denoted by $[T]$. It is a complete description of $T$ with respect to the standard basis. One of the remarkable things about linear transformations is that they have such compact descriptions; this is not at all true of arbitrary functions from $\mathbb{R}^n$ to $\mathbb{R}^m$.

Remark. The column view of matrix-vector multiplication also explains why matrix-matrix multiplication is defined the way it is (which usually seems mysterious the first time you see it). It is the unique definition for which

$$[S \circ T] = [S][T],$$

where $(S \circ T)(\mathbf{v}) = S(T(\mathbf{v}))$ denotes the composition of two linear transformations $S$ and $T$. More concretely, for any matrix $A$ and column vectors $\mathbf{c}_1, \ldots, \mathbf{c}_n$, it is the unique definition for which

$$A\,[\mathbf{c}_1 \,|\, \mathbf{c}_2 \,|\, \cdots \,|\, \mathbf{c}_n] = [A\mathbf{c}_1 \,|\, A\mathbf{c}_2 \,|\, \cdots \,|\, A\mathbf{c}_n].$$
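Both pictures, and the construction of the standard matrix from the vectors $T(\mathbf{e}_1), \ldots, T(\mathbf{e}_n)$, are easy to check numerically. Here is a small Python/numpy sketch; the particular matrix $A$ is an arbitrary example, not one from the notes:

```python
import numpy as np

# An arbitrary 3x2 example matrix and input vector.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([10.0, -1.0])

# Row picture: entry i of Ax is the dot product of row i of A with x.
row_view = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Column picture: Ax is a linear combination of the columns of A,
# with coefficients given by the entries of x.
col_view = sum(x[j] * A[:, j] for j in range(A.shape[1]))

assert np.allclose(row_view, A @ x)
assert np.allclose(col_view, A @ x)

# The standard matrix of a linear map is [T(e1) | ... | T(en)].
def T(v):
    return A @ v          # an example linear map from R^2 to R^3

E = np.eye(2)             # columns are the standard basis e1, e2
std_matrix = np.column_stack([T(E[:, j]) for j in range(2)])
assert np.allclose(std_matrix, A)
```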
2  Change of Basis

We know we can write any $\mathbf{x} \in \mathbb{R}^n$ as a linear combination of the standard basis vectors, for some coefficients $x_1, \ldots, x_n$:

$$\mathbf{x} = x_1\mathbf{e}_1 + \cdots + x_n\mathbf{e}_n.$$

In some situations, it is useful to write $\mathbf{x}$ in another basis $\mathbf{b}_1, \ldots, \mathbf{b}_n$ (we will see many such situations in the course). By definition, since the $\mathbf{b}_i$'s are a basis, there must be unique coefficients $x_1', \ldots, x_n'$ such that

$$\mathbf{x} = x_1'\mathbf{b}_1 + x_2'\mathbf{b}_2 + \cdots + x_n'\mathbf{b}_n.$$

Passing to this representation of $\mathbf{x}$ is what I called "implicit" change of basis in class. By implicit, I meant that we chose to decompose $\mathbf{x}$ in terms of its coefficients in the $\mathbf{b}_i$, but we didn't give an explicit method for computing what those coefficients are. To find the coefficients, it helps to write down the latter linear combination in matrix notation:

$$[\mathbf{x}] = x_1'[\mathbf{b}_1] + \cdots + x_n'[\mathbf{b}_n] = [\mathbf{b}_1 \,|\, \cdots \,|\, \mathbf{b}_n] \begin{bmatrix} x_1' \\ \vdots \\ x_n' \end{bmatrix} = B[\mathbf{x}]_B,$$

where $B$ is the matrix with columns $\mathbf{b}_1, \ldots, \mathbf{b}_n$ and $[\mathbf{x}]_B$ denotes the coordinate vector of $\mathbf{x}$ in the $B$ basis. This yields the fundamental relationship

$$[\mathbf{x}] = B[\mathbf{x}]_B,$$

which, since $B$ is invertible, tells us how to find $[\mathbf{x}]_B$ from $[\mathbf{x}]$:

$$[\mathbf{x}]_B = B^{-1}[\mathbf{x}].$$

Thus, changing basis is equivalent to solving linear equations. Note that this also shows that the columns of $B^{-1}$ are $[\mathbf{e}_1]_B, \ldots, [\mathbf{e}_n]_B$, the standard basis vectors in the $B$ basis.

Once we know how to change basis for vectors, it is easy to do it for linear transformations and matrices. For simplicity, we will only consider linear operators (linear transformations from a vector space to itself), which will allow us to keep track of just one basis rather than two. Suppose $T : \mathbb{R}^n \to \mathbb{R}^n$ is a linear operator and $[T]$ is its standard matrix, i.e.,

$$[T][\mathbf{x}] = [T(\mathbf{x})].$$

By definition, the matrix $[T]_B$ of $T$ in the $B$ basis must satisfy:

$$[T]_B[\mathbf{x}]_B = [T(\mathbf{x})]_B.$$

Plugging in the relationship between $[\mathbf{x}]_B$ and $[\mathbf{x}]$ which we derived above, we get

$$[T]_B B^{-1}[\mathbf{x}] = B^{-1}[T(\mathbf{x})],$$

which, since $B$ is invertible, is equivalent to

$$B[T]_B B^{-1}[\mathbf{x}] = [T(\mathbf{x})].$$

Thus, we must have

$$B[T]_B B^{-1} = [T],$$

or equivalently

$$[T]_B = B^{-1}[T]B,$$

which are the explicit formulas for change of basis of matrices/linear transformations.
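These formulas are easy to verify numerically. Here is a short numpy sketch; the basis matrix $B$, operator matrix $[T]$, and vector are arbitrary examples chosen for illustration:

```python
import numpy as np

# An arbitrary invertible basis matrix B (columns b1, b2) and an
# arbitrary operator given by its standard matrix [T].
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
T_std = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

x = np.array([3.0, 5.0])        # [x], coordinates in the standard basis

# [x]_B = B^{-1} [x]: solve the linear system B [x]_B = [x] rather
# than forming the inverse explicitly.
x_B = np.linalg.solve(B, x)
assert np.allclose(B @ x_B, x)  # the fundamental relationship [x] = B [x]_B

# [T]_B = B^{-1} [T] B, the matrix of T in the B basis.
T_B = np.linalg.solve(B, T_std @ B)

# Consistency check: computing [T(x)]_B either way gives the same answer.
assert np.allclose(T_B @ x_B, np.linalg.solve(B, T_std @ x))
```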
3  Diagonalization

The standard basis $\mathbf{e}_1, \ldots, \mathbf{e}_n$ of $\mathbb{R}^n$ is completely arbitrary, and as such it is just a convention invented by humans so that they can start writing vectors in some basis. But linear transformations that occur in nature and elsewhere often come equipped with a much better basis, given by their eigenvectors.

Let $T : V \to V$ be a linear operator (i.e., a linear transformation from a vector space to itself). If $\mathbf{v} \neq 0$ is a nonzero vector and $T(\mathbf{v}) = \lambda\mathbf{v}$ for some scalar $\lambda$ (which may be complex), then $\mathbf{v}$ is called an eigenvector of $T$ and $\lambda$ is called the corresponding eigenvalue. There is an analogous definition for square matrices $A$, in which we ask that $A\mathbf{v} = \lambda\mathbf{v}$. Note that $T(\mathbf{v}) = \lambda\mathbf{v}$ if and only if $[T][\mathbf{v}] = \lambda[\mathbf{v}]$, so in particular the operator $T$ and the matrix $[T]$ have the same eigenvalues. This fact holds in every basis (see HW3 question 6), so eigenvalues are intrinsic to operators and do not depend on the choice of basis used to write the matrix.

The eigenvalues of a matrix may be computed by solving the characteristic equation $\det(\lambda I - A) = 0$. Since this is a polynomial equation of degree $n$ for an $n \times n$ matrix, the fundamental theorem of algebra tells us that it must have $n$ roots, counted with multiplicity, whence every $n \times n$ matrix must have $n$ eigenvalues. Once the eigenvalues are known, the corresponding eigenvectors can be obtained by solving the systems of linear equations $(\lambda I - A)\mathbf{v} = 0$. See your Math 54 text for more information on how to compute these things.

In any case, something wonderful happens when we have an operator/matrix with a basis of linearly independent eigenvectors, sometimes called an eigenbasis. For instance, let $T : \mathbb{R}^n \to \mathbb{R}^n$ be such an operator. Then we have

$$T(\mathbf{b}_1) = \lambda_1\mathbf{b}_1, \quad T(\mathbf{b}_2) = \lambda_2\mathbf{b}_2, \quad \ldots, \quad T(\mathbf{b}_n) = \lambda_n\mathbf{b}_n,$$

for linearly independent $\mathbf{b}_1, \ldots, \mathbf{b}_n$. Thus, we may write an arbitrary $\mathbf{x} \in \mathbb{R}^n$ in this basis:

$$\mathbf{x} = x_1'\mathbf{b}_1 + x_2'\mathbf{b}_2 + \cdots + x_n'\mathbf{b}_n,$$

and then

$$\begin{aligned}
T(\mathbf{x}) &= T(x_1'\mathbf{b}_1 + x_2'\mathbf{b}_2 + \cdots + x_n'\mathbf{b}_n) \\
&= x_1'T(\mathbf{b}_1) + x_2'T(\mathbf{b}_2) + \cdots + x_n'T(\mathbf{b}_n) \qquad \text{by linearity of } T \\
&= x_1'\lambda_1\mathbf{b}_1 + x_2'\lambda_2\mathbf{b}_2 + \cdots + x_n'\lambda_n\mathbf{b}_n.
\end{aligned}$$

So, applying the transformation $T$ is tantamount to multiplying each coefficient $x_i'$ by $\lambda_i$. In particular, $T$ acts on each coordinate completely independently by scalar multiplication, and there are no interactions between the coordinates. This is about as simple as a linear transformation can be.

If we write down the matrix of $T$ in the basis $B$ consisting of its eigenvectors, we find that $[T]_B$ is a diagonal matrix with the eigenvalues $\lambda_1, \ldots, \lambda_n$ on the diagonal. Appealing to the change of basis formula we derived in the previous section, this means that

$$[T] = B[T]_B B^{-1}.$$

In general, this can be done for any square matrix $A$, since every $A$ is equal to $[T]$ for the linear transformation $T(\mathbf{x}) = A\mathbf{x}$. Using the letter $D$ to denote the diagonal matrix of eigenvalues, this gives

$$A = BDB^{-1}.$$

Factorizing a matrix in this way is called diagonalization, and a matrix which can be diagonalized (i.e., one with a basis of eigenvectors) is called diagonalizable. Not all matrices are diagonalizable, but the vast majority of them are.

4  Coupled Oscillator Example

Here is an example of diagonalization in action. Suppose we have two unit masses connected by springs with spring constant $k$ as in Figure 12.1 of the book.
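As a rough numerical sketch of how diagonalization applies to such a system, assume the common configuration in which the two masses sit between two walls, joined to each other and to the walls by three springs of constant $k$; then Newton's second law gives $\mathbf{x}'' = -K\mathbf{x}$ for the stiffness matrix $K$ below. (This particular $K$ is an assumption, since Figure 12.1 is not reproduced here.) In the eigenbasis of $K$, the system decouples into two independent scalar oscillators:

```python
import numpy as np

k = 1.0
# Assumed stiffness matrix for wall-spring-mass-spring-mass-spring-wall
# with unit masses; the setup in Figure 12.1 may differ.
K = np.array([[2 * k, -k],
              [-k, 2 * k]])

# Diagonalize K = B D B^{-1}, where the columns of B are eigenvectors.
eigvals, B = np.linalg.eig(K)
D = np.diag(eigvals)
assert np.allclose(K, B @ D @ np.linalg.inv(B))

# In the eigenbasis, x'' = -K x becomes y_i'' = -lambda_i * y_i for each
# coordinate, i.e., two independent oscillators with angular frequencies:
print(np.sqrt(eigvals))   # sqrt(k) and sqrt(3k)
```

For this $K$, the normal modes are $\mathbf{b}_1 \propto (1, 1)^T$ (the masses moving together) with frequency $\sqrt{k}$, and $\mathbf{b}_2 \propto (1, -1)^T$ (the masses moving oppositely) with frequency $\sqrt{3k}$.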
