Generalized Eigenvectors, Minimal Polynomials and the Theorem of Cayley-Hamilton

GENERALIZED EIGENVECTORS, MINIMAL POLYNOMIALS AND THE THEOREM OF CAYLEY-HAMILTON

FRANZ LUEF

Abstract. Our exposition is inspired by S. Axler's approach to linear algebra and follows largely his exposition in "Down with Determinants"; see also the book "Linear Algebra Done Right" by S. Axler [1]. These are the lecture notes for the course of Prof. H. G. Feichtinger "Lineare Algebra 2" from 15.11.2006.

Before we introduce generalized eigenvectors of a linear transformation we recall some basic facts about eigenvalues and eigenvectors of a linear transformation. Let $V$ be an $n$-dimensional complex vector space. Recall that a complex number $\lambda$ is called an eigenvalue of a linear operator $T$ on $V$ if $T - \lambda I$ is not injective, i.e. $\ker(T - \lambda I) \neq \{0\}$. The main result about eigenvalues is that every linear operator on a finite-dimensional complex vector space has an eigenvalue! Furthermore we call a vector $v \in V$ an eigenvector of $T$ if $Tv = \lambda v$ for some eigenvalue $\lambda$. The central result on eigenvectors is that non-zero eigenvectors corresponding to distinct eigenvalues of a linear transformation on $V$ are linearly independent. Consequently the number of distinct eigenvalues of $T$ cannot exceed the dimension of $V$.

Unfortunately the eigenvectors of $T$ need not span $V$. For example the linear transformation on $\mathbb{C}^4$ whose matrix is
$$T = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
has only the eigenvalue $0$, and its eigenvectors form a one-dimensional subspace of $\mathbb{C}^4$. Observe that $T, T^2, T^3 \neq 0$ but $T^4 = 0$. More generally a linear operator $T$ such that $T, T^2, \ldots, T^{p-1} \neq 0$ and $T^p = 0$ is called nilpotent of index $p$.

Now let $T$ be any linear operator on $V$. The space of all linear operators on $V$ is finite-dimensional (actually of dimension $n^2$), so there exists a smallest positive integer $k$ such that $I, T, T^2, \ldots, T^k$ are linearly dependent. In other words there exist unique complex numbers $a_0, a_1, \ldots, a_{k-1}$ such that
$$a_0 I + a_1 T + \cdots + a_{k-1} T^{k-1} + T^k = 0.$$
The polynomial $m(x) = a_0 + a_1 x + \cdots + a_{k-1} x^{k-1} + x^k$ is called the minimal polynomial of $T$. It is the monic polynomial of smallest degree such that $m(T) = 0$. A polynomial $q$ such that $q(T) = 0$ is a so-called annihilating polynomial. The Fundamental Theorem of Algebra yields that
$$m(x) = (x - \lambda_1)^{\alpha_1} (x - \lambda_2)^{\alpha_2} \cdots (x - \lambda_m)^{\alpha_m},$$
where $\alpha_j$ is the multiplicity of the eigenvalue $\lambda_j$ of $T$. Since
$$m(T) = (T - \lambda_1 I)^{\alpha_1} (T - \lambda_2 I)^{\alpha_2} \cdots (T - \lambda_m I)^{\alpha_m} = 0,$$
the operator $(T - \lambda_j I)^{\alpha_j}$ is not injective for some $j$, i.e. $\ker(T - \lambda_j I)^{\alpha_j} \neq \{0\}$.
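As a quick numerical illustration of these definitions, here is a minimal sketch, assuming NumPy is available (the matrix and all variable names are ours, not from the notes). It checks the nilpotency index of the matrix $T$ above and recovers the minimal polynomial $m(x) = x^4$ by searching for the smallest $k$ such that $I, T, \ldots, T^k$ are linearly dependent, exactly as in the definition.

```python
import numpy as np

# The 4x4 nilpotent matrix from the example above: ones on the superdiagonal.
T = np.diag(np.ones(3), k=1)

# T, T^2, T^3 are non-zero but T^4 = 0, so T is nilpotent of index 4.
for p in range(1, 5):
    print(f"T^{p} == 0:", np.allclose(np.linalg.matrix_power(T, p), 0))

# Minimal polynomial by the definition in the text: flatten I, T, T^2, ...
# into vectors of length n^2 and look for the smallest k at which they
# become linearly dependent, then solve for a_0, ..., a_{k-1}.
n = T.shape[0]
powers = [np.linalg.matrix_power(T, i).ravel() for i in range(n * n + 1)]
for k in range(1, len(powers)):
    A = np.column_stack(powers[: k + 1])   # columns vec(I), ..., vec(T^k)
    if np.linalg.matrix_rank(A) <= k:      # dependent: degree k reached
        # Solve a_0 vec(I) + ... + a_{k-1} vec(T^{k-1}) = -vec(T^k).
        a, *_ = np.linalg.lstsq(A[:, :k], -powers[k], rcond=None)
        print("degree of m:", k, "coefficients a_0..a_{k-1}:", np.round(a, 12))
        break
# For this T the degree is 4 with all a_i = 0, i.e. m(x) = x^4.
```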
What is the structure of the subspace $\ker(T - \lambda_j I)^{\alpha_j}$? First of all we call a vector $v \in V$ a generalized eigenvector of $T$ if $(T - \lambda I)^k v = 0$ for some eigenvalue $\lambda$ of $T$ and some positive integer $k$. The space of all generalized eigenvectors corresponding to an eigenvalue $\lambda$ is described by the following lemma.

Lemma 0.1. The set of generalized eigenvectors of $T$ on an $n$-dimensional complex vector space corresponding to an eigenvalue $\lambda$ equals $\ker(T - \lambda I)^n$.

Proof. Obviously, every element of $\ker(T - \lambda I)^n$ is a generalized eigenvector of $T$ corresponding to $\lambda$. Let us show the other inclusion. If $v \neq 0$ is a generalized eigenvector of $T$ corresponding to $\lambda$, then we need to prove that $(T - \lambda I)^n v = 0$. By assumption there is a smallest positive integer $k$ such that $(T - \lambda I)^k v = 0$. We are done if we show that $k \leq n$. To this end we prove that $v, (T - \lambda I)v, \ldots, (T - \lambda I)^{k-1} v$ are linearly independent, since then we have $k$ linearly independent elements in an $n$-dimensional vector space, which implies $k \leq n$. Let $a_0, a_1, \ldots, a_{k-1}$ be complex numbers such that
$$a_0 v + a_1 (T - \lambda I)v + \cdots + a_{k-1} (T - \lambda I)^{k-1} v = 0.$$
Apply $(T - \lambda I)^{k-1}$ to both sides of the equation above, getting $a_0 (T - \lambda I)^{k-1} v = 0$, which yields $a_0 = 0$. Now apply $(T - \lambda I)^{k-2}$ to both sides of the equation, getting $a_1 (T - \lambda I)^{k-1} v = 0$, which implies $a_1 = 0$. Continuing in this fashion, we see that $a_j = 0$ for each $j$, as desired.

Following the basic pattern of the proof that non-zero eigenvectors corresponding to distinct eigenvalues of $T$ are linearly independent, we obtain:

Proposition 0.2. Non-zero generalized eigenvectors corresponding to distinct eigenvalues of $T$ are linearly independent.

Proof. Suppose that $v_1, \ldots, v_m$ are non-zero generalized eigenvectors of $T$ corresponding to distinct eigenvalues $\lambda_1, \ldots, \lambda_m$, and that there are complex numbers $a_1, \ldots, a_m$ such that
$$a_1 v_1 + a_2 v_2 + \cdots + a_m v_m = 0.$$
We have to show that $a_1 = a_2 = \cdots = a_m = 0$. Let $k$ be the smallest positive integer such that $(T - \lambda_1 I)^k v_1 = 0$ and apply the linear operator
$$(T - \lambda_1 I)^{k-1} (T - \lambda_2 I)^n \cdots (T - \lambda_m I)^n$$
to both sides of the previous equation, getting
$$a_1 (T - \lambda_1 I)^{k-1} (T - \lambda_2 I)^n \cdots (T - \lambda_m I)^n v_1 = 0.$$
We rewrite $(T - \lambda_2 I)^n \cdots (T - \lambda_m I)^n$ as
$$((T - \lambda_1 I) + (\lambda_1 - \lambda_2) I)^n \cdots ((T - \lambda_1 I) + (\lambda_1 - \lambda_m) I)^n.$$
An application of the binomial theorem gives a sum of terms which, when combined with $(T - \lambda_1 I)^{k-1}$ on the left and applied to $v_1$, give $0$, except for the term
$$a_1 (\lambda_1 - \lambda_2)^n \cdots (\lambda_1 - \lambda_m)^n (T - \lambda_1 I)^{k-1} v_1.$$
Since $(T - \lambda_1 I)^{k-1} v_1 \neq 0$ and the scalars $\lambda_1 - \lambda_j$ are non-zero, it follows that $a_1 = 0$. Continuing in a similar fashion, we get $a_j = 0$ for each $j$, as desired.

The central fact about generalized eigenvectors is that they span $V$.

Theorem 0.3. Let $V$ be an $n$-dimensional complex vector space and let $\lambda$ be an eigenvalue of $T$. Then
$$V = \ker(T - \lambda I)^n \oplus \operatorname{im}(T - \lambda I)^n.$$

Proof. The proof is an induction on $n$, the dimension of $V$. The result holds for $n = 1$. Suppose that $n > 1$ and that the result holds for all vector spaces of dimension less than $n$. Let $\lambda$ be any eigenvalue of $T$. We want to show that
$$V = \ker(T - \lambda I)^n \oplus \operatorname{im}(T - \lambda I)^n =: V_1 \oplus V_2.$$
Let $v \in V_1 \cap V_2$. Then $(T - \lambda I)^n v = 0$ and there exists a $u \in V$ such that $(T - \lambda I)^n u = v$. Applying $(T - \lambda I)^n$ to both sides of the last equation, we get $(T - \lambda I)^{2n} u = 0$. Consequently $(T - \lambda I)^n u = 0$, i.e. $v = 0$. Thus $V_1 \cap V_2 = \{0\}$. Since $V_1$ and $V_2$ are the kernel and the image of a linear operator on $V$, we have
$$\dim V = \dim V_1 + \dim V_2,$$
and together with $V_1 \cap V_2 = \{0\}$ this gives $V = V_1 \oplus V_2$. Note that $V_1 \neq \{0\}$, because $\lambda$ is an eigenvalue of $T$; thus $\dim V_2 < n$. Furthermore $T$ maps $V_2$ into $V_2$, since $T$ commutes with $(T - \lambda I)^n$. By our induction hypothesis, $V_2$ is spanned by the generalized eigenvectors of $T|_{V_2}$, each of which is also a generalized eigenvector of $T$. Everything in $V_1$ is a generalized eigenvector of $T$, which gives the desired result.

Corollary 0.4. If $0$ is the only eigenvalue of a linear operator $T$ on $V$, then $T$ is nilpotent.

Proof. By assumption $0$ is the only eigenvalue of $T$. Then every vector $v$ in $V$ is a generalized eigenvector of $T$ corresponding to the eigenvalue $\lambda = 0$. Consequently $T^p = 0$ for some $p$.
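As a small sanity check of Theorem 0.3, the following sketch, assuming NumPy and SciPy are available, verifies numerically that $\ker(T - \lambda I)^n$ and $\operatorname{im}(T - \lambda I)^n$ have complementary dimensions and together span $V$. The $4 \times 4$ matrix is a made-up example (eigenvalue $2$ with a $2 \times 2$ Jordan block and a $1 \times 1$ block, plus a simple eigenvalue $5$), not one from the notes.

```python
import numpy as np
from scipy.linalg import null_space

# Made-up example: eigenvalue 2 (algebraic multiplicity 3) and eigenvalue 5.
T = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 5.0]])
lam, n = 2.0, T.shape[0]

P = np.linalg.matrix_power(T - lam * np.eye(n), n)  # (T - lambda I)^n
K = null_space(P)                                   # orthonormal basis of ker
U, s, _ = np.linalg.svd(P)
Im = U[:, s > 1e-10]                                # orthonormal basis of im

# Direct sum: the dimensions add up to n and the combined basis spans V.
B = np.hstack([K, Im])
print("dim ker + dim im == n:", K.shape[1] + Im.shape[1] == n)
print("combined basis has full rank:", np.linalg.matrix_rank(B) == n)
```

Here $\ker(T - 2I)^4$ is three-dimensional (all generalized eigenvectors for $\lambda = 2$) and $\operatorname{im}(T - 2I)^4$ is one-dimensional, so both checks print True.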
As a consequence we get the following structure theorem for linear transformations.

Theorem 0.5. Let $\lambda_1, \ldots, \lambda_m$ be the distinct eigenvalues of $T$, with $E_1, \ldots, E_m$ denoting the corresponding sets of generalized eigenvectors. Then
(1) $V = E_1 \oplus E_2 \oplus \cdots \oplus E_m$;
(2) $T$ maps each $E_j$ into itself;
(3) each $(T - \lambda_j I)|_{E_j}$ is nilpotent;
(4) each $T|_{E_j}$ has only one eigenvalue, namely $\lambda_j$.

Proof. (1) follows from the linear independence of generalized eigenvectors corresponding to distinct eigenvalues and the fact that the generalized eigenvectors corresponding to $\lambda_j$ span $E_j$.
(2) Suppose $v \in E_j$. Then $(T - \lambda_j I)^k v = 0$ for some positive integer $k$. Furthermore we have
$$(T - \lambda_j I)^k T v = T (T - \lambda_j I)^k v = T(0) = 0,$$
i.e. $Tv \in E_j$.
(3) is a reformulation of the definition of a generalized eigenvector.
(4) Let $\lambda$ be an eigenvalue of $T|_{E_j}$, with corresponding non-zero eigenvector $v \in E_j$. Then $(T - \lambda_j I)v = (\lambda - \lambda_j)v$, and hence
$$(T - \lambda_j I)^k v = (\lambda - \lambda_j)^k v$$
for each positive integer $k$. Since $v$ is a generalized eigenvector of $T$ corresponding to $\lambda_j$, the left-hand side is $0$ for some $k$, i.e. $\lambda = \lambda_j$.

The next theorem connects the minimal polynomial of $T$ to the decomposition of $V$ as a direct sum of generalized eigenspaces.

Theorem 0.6. Let $\lambda_1, \ldots, \lambda_m$ be the distinct eigenvalues of $T$, let $E_j$ denote the set of the generalized eigenvectors corresponding to $\lambda_j$, and let $\alpha_j$ be the smallest positive integer such that $(T - \lambda_j I)^{\alpha_j} v = 0$ for every $v \in E_j$. Let
$$m(x) = (x - \lambda_1)^{\alpha_1} (x - \lambda_2)^{\alpha_2} \cdots (x - \lambda_m)^{\alpha_m}.$$
Then
(1) $m$ has degree at most $\dim(V)$;
(2) if $p$ is another annihilating polynomial of $T$, then $p$ is a polynomial multiple of $m$;
(3) $m$ is the minimal polynomial of $T$.

Proof. Each $\alpha_j$ is at most the dimension of $E_j$, and $V = E_1 \oplus \cdots \oplus E_m$ gives that the $\alpha_j$'s add up to at most $n$; this proves (1).
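The Cayley-Hamilton theorem of the title states that the characteristic polynomial $p$ of $T$ satisfies $p(T) = 0$, i.e. $p$ is an annihilating polynomial; by Theorem 0.6(2) the minimal polynomial $m$ therefore divides $p$. As a hedged sketch, assuming NumPy and using a made-up sample matrix, the following checks $p(T) = 0$ numerically by evaluating $p$ at $T$ with Horner's rule.

```python
import numpy as np

# Made-up sample matrix: eigenvalue 2 (Jordan block of size 2) and eigenvalue 5.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = T.shape[0]

p = np.poly(T)             # coefficients of det(xI - T), highest power first
P = np.zeros((n, n))
for c in p:                # Horner: P <- P T + c I
    P = P @ T + c * np.eye(n)
print("p(T) == 0:", np.allclose(P, 0))   # Cayley-Hamilton: prints True

# For this T the minimal polynomial is m(x) = (x - 2)^2 (x - 5), which has
# degree 3 = dim V, so here m coincides with the characteristic polynomial;
# in general m divides p by Theorem 0.6(2).
```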