Generalized Eigenvectors for Systems with Repeated Eigenvalues


LECTURE NOTES ON GENERALIZED EIGENVECTORS FOR SYSTEMS WITH REPEATED EIGENVALUES

Date: 14.03.2011.

We consider a matrix $A \in \mathbb{C}^{n \times n}$. The characteristic polynomial $P(\lambda) = |\lambda I - A|$ admits in general $p$ complex roots $\lambda_1, \lambda_2, \ldots, \lambda_p$ with $p \leq n$. Each root $\lambda_i$ has a multiplicity that we denote $k_i$, and $P(\lambda)$ can be decomposed as

$$P(\lambda) = \prod_{i=1}^{p} (\lambda - \lambda_i)^{k_i}.$$

The sum of the multiplicities of all eigenvalues is equal to the degree of the polynomial, that is,

$$\sum_{i=1}^{p} k_i = n.$$

Let $E_i$ be the subspace of eigenvectors associated to the eigenvalue $\lambda_i$, that is,

$$E_i = \{ u \in \mathbb{C}^n \text{ such that } Au = \lambda_i u \}.$$

Theorem 1 (from linear algebra). The dimension of $E_i$ is at most the multiplicity of $\lambda_i$, that is, $\dim(E_i) \leq k_i$.

If $\dim(E_i) = k_i$ for all $i \in \{1, \ldots, p\}$, then we can find a basis of $k_i$ independent eigenvectors for each $\lambda_i$, which we denote by $u^i_1, u^i_2, \ldots, u^i_{k_i}$. Since $\sum_{i=1}^{p} k_i = n$, we finally get $n$ linearly independent eigenvectors (eigenvectors with distinct eigenvalues are automatically independent). Therefore the matrix $A$ is diagonalizable and we can solve the system $\frac{dY}{dt} = AY$ by using the basis of eigenvectors. The general solution is given by

$$Y(t) = \sum_{i=1}^{p} e^{\lambda_i t} \left( a_{1,i}\, u^i_1 + a_{2,i}\, u^i_2 + \cdots + a_{k_i,i}\, u^i_{k_i} \right) \tag{1}$$

for any constant coefficients $a_{j,i}$.

Example: We consider the matrix

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 1 \\ 0 & -2 & 0 \end{pmatrix}.$$

The characteristic polynomial is $P(\lambda) = (\lambda - 2)(\lambda - 1)^2$, so we have two eigenvalues, $\lambda_1 = 2$ (with multiplicity 1) and $\lambda_2 = 1$ (with multiplicity 2).

We compute the eigenvectors for $\lambda_1 = 2$. We have to solve

$$\begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & -2 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.$$

It yields two independent relations, namely $x = 0$ and $y + z = 0$ (hence $\dim(E_1) = n - 2 = 1$). Thus $u_1 = (0, 1, -1)$ is an eigenvector for $\lambda_1 = 2$.

We compute the eigenvectors for $\lambda_2 = 1$. We have to solve

$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & -2 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.$$

It yields one independent relation, namely $2y + z = 0$. Hence the dimension of $E_2$ is equal to $n - 1 = 2$. The two independent vectors

$$u^2_1 = (1, 0, 0), \qquad u^2_2 = (0, 1, -2)$$

form a basis for $E_2$. Finally, the general solution of $\frac{dY}{dt} = AY$ is given by

$$Y(t) = a_1 e^{2t} \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} + a_2 e^{t} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + a_3 e^{t} \begin{pmatrix} 0 \\ 1 \\ -2 \end{pmatrix}.$$

End of example.
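As a quick numerical sanity check of this example (an addition for verification, not part of the original notes), the NumPy/SciPy sketch below confirms the hand-computed eigenpairs and compares the general solution (1) against the exact flow $e^{At} Y(0)$; the coefficient values and the time $t$ are arbitrary test inputs.

```python
import numpy as np
from scipy.linalg import expm

# The 3x3 matrix from the example above.
A = np.array([[1.0,  0.0, 0.0],
              [0.0,  3.0, 1.0],
              [0.0, -2.0, 0.0]])

# Hand-computed eigenpairs: A u = lambda * u.
u1 = np.array([0.0, 1.0, -1.0])   # eigenvalue 2
u2 = np.array([1.0, 0.0,  0.0])   # eigenvalue 1
u3 = np.array([0.0, 1.0, -2.0])   # eigenvalue 1
assert np.allclose(A @ u1, 2 * u1)
assert np.allclose(A @ u2, u2)
assert np.allclose(A @ u3, u3)

# The general solution (1) must agree with the matrix exponential applied
# to Y(0). The coefficients a1, a2, a3 and time t are arbitrary test values.
a1, a2, a3, t = 1.0, 2.0, -0.5, 0.7
Y0 = a1 * u1 + a2 * u2 + a3 * u3
Yt = a1 * np.exp(2 * t) * u1 + np.exp(t) * (a2 * u2 + a3 * u3)
assert np.allclose(expm(A * t) @ Y0, Yt)
print("eigenpairs and general solution (1) check out")
```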
If for some $i$ we have $\dim(E_i) < k_i$, then we cannot find $k_i$ independent eigenvectors in $E_i$; we say that the eigenvalue $\lambda_i$ is incomplete. In this case we are not able to find $n$ linearly independent eigenvectors, so we cannot get an expression like (1) for the solution of the ODE. We have to use generalized eigenvectors.

Example: We consider

$$A = \begin{pmatrix} -2 & 1 \\ 0 & -2 \end{pmatrix}.$$

The characteristic polynomial is $P(\lambda) = (\lambda + 2)^2$ and there is one eigenvalue $\lambda_1 = -2$ with multiplicity 2. We compute the eigenvectors. We have to solve

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 0.$$

It yields one independent relation, namely $y = 0$, and therefore the dimension of $E_1$ is 1 and $A$ is not diagonalizable. An eigenvector is given by $u_1 = (1, 0)$. We know that $Y_1(t) = e^{\lambda_1 t} u_1$ is a solution. Let us look for solutions of the type

$$Y(t) = t e^{\lambda t} u + e^{\lambda t} v$$

for two unknown vectors $u$ and $v$ different from zero. Such a $Y$ is a solution if and only if

$$e^{\lambda t} u + \lambda t e^{\lambda t} u + \lambda e^{\lambda t} v = t e^{\lambda t} A u + e^{\lambda t} A v$$

for all $t$. It implies that we must have

$$Au = \lambda u, \tag{2}$$
$$Av = u + \lambda v. \tag{3}$$

The first equality implies (because we want $u \neq 0$) that $u$ is an eigenvector and $\lambda$ is an eigenvalue. We take $\lambda = \lambda_1$ and $u = u_1$. Let us compute $v$. We have to solve

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$

which yields $y = 1$. We take (for example) $v = (0, 1)$, and we have that

$$Y_2(t) = t e^{\lambda_1 t} u + e^{\lambda_1 t} v = t e^{-2t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + e^{-2t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

is a solution. Since $Y_1(0)$ and $Y_2(0)$ are independent, a general solution is given by

$$Y(t) = a_1 e^{-2t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + a_2 \left( t e^{-2t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + e^{-2t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) = \begin{pmatrix} a_1 e^{-2t} + a_2 t e^{-2t} \\ a_2 e^{-2t} \end{pmatrix}$$

for any constants $a_1, a_2 \in \mathbb{R}$. End of example.

In the previous example, note that $v$ satisfies

$$(A - \lambda I)^2 v = 0 \quad \text{and} \quad (A - \lambda I) v \neq 0. \tag{4}$$

A vector which satisfies (4) is a generalized eigenvector. Let us now give the general definition of such vectors.

Definition 2. For a given eigenvalue $\lambda$, the vector $u$ is a generalized eigenvector of rank $r$ if

$$(A - \lambda I)^r u = 0 \quad \text{and} \quad (A - \lambda I)^{r-1} u \neq 0.$$

Remark: An eigenvector is a generalized eigenvector of rank 1. Indeed, we have $(A - \lambda I) u = 0$ and $u \neq 0$.

Given a generalized eigenvector $u$ of rank $r$, let us define the vectors $v_1, \ldots, v_r$ as follows:

$$v_r = (A - \lambda I)^0 u = u, \quad v_{r-1} = (A - \lambda I)^1 u, \quad \ldots, \quad v_1 = (A - \lambda I)^{r-1} u. \tag{5}$$

Note that $v_1$ is an eigenvector, as $v_1 \neq 0$ and $(A - \lambda I) v_1 = (A - \lambda I)^r u = 0$. The vectors $v_1, \ldots, v_r$ form a chain of generalized eigenvectors of length $r$.

Definition 3. Given an eigenvalue $\lambda$, we say that $v_1, v_2, \ldots, v_r$ form a chain of generalized eigenvectors of length $r$ if $v_1 \neq 0$ and

$$v_{r-1} = (A - \lambda I) v_r, \quad v_{r-2} = (A - \lambda I) v_{r-1}, \quad \ldots, \quad v_1 = (A - \lambda I) v_2, \quad 0 = (A - \lambda I) v_1. \tag{6}$$

Remark: Given a chain $\{v_i\}_{i=1}^{r}$, the first element $v_1$ is always an eigenvector. By using the definition (6), we get that

$$(A - \lambda I)^{i-1} v_i = v_1, \tag{7}$$

and therefore the element $v_i$ is a generalized eigenvector of rank $i$.

Theorem 4. The vectors in a chain of generalized eigenvectors are linearly independent.

Proof. We consider the linear combination

$$\sum_{i=1}^{r} a_i v_i = 0. \tag{8}$$

By using the definition (6), we get that $v_i = (A - \lambda I)^{r-i} v_r$, so that (8) is equivalent to

$$\sum_{i=1}^{r} a_i (A - \lambda I)^{r-i} v_r = 0. \tag{9}$$

We want to prove that all the $a_i$ are equal to zero. We are going to use the fact that

$$(A - \lambda I)^m v_r = 0 \quad \text{for all } m \geq r. \tag{10}$$

Indeed,

$$(A - \lambda I)^m v_r = (A - \lambda I)^{m-r} (A - \lambda I)^r v_r = (A - \lambda I)^{m-r} (A - \lambda I) v_1 = 0.$$

Now we apply $(A - \lambda I)^{r-1}$ to (9) and get

$$\sum_{i=1}^{r} a_i (A - \lambda I)^{2r-i-1} v_r = 0. \tag{11}$$

Since $(A - \lambda I)^{2r-i-1} v_r = 0$ for $i \leq r - 1$ (in that case $2r - i - 1 \geq r$), equation (11) simplifies and we get

$$a_r (A - \lambda I)^{r-1} v_r = a_r v_1 = 0.$$

Hence $a_r = 0$ because $v_1 \neq 0$. Now that we know $a_r = 0$, (9) rewrites as

$$\sum_{i=1}^{r-1} a_i (A - \lambda I)^{r-i} v_r = 0. \tag{12}$$

We apply $(A - \lambda I)^{r-2}$ to (12) and obtain

$$\sum_{i=1}^{r-1} a_i (A - \lambda I)^{2r-i-2} v_r = a_{r-1} (A - \lambda I)^{r-1} v_r = a_{r-1} v_1 = 0,$$

because $(A - \lambda I)^{2r-i-2} v_r = 0$ for $i \leq r - 2$. Therefore $a_{r-1} = 0$. We proceed recursively with the same argument and prove that all the $a_i$ are equal to zero, so that the vectors $v_i$ are linearly independent. ∎
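To make the chain construction concrete, here is a short NumPy sketch (an illustrative addition; the Jordan-block matrix is an assumed example, not taken from the notes). It builds a chain of length 3 from a rank-3 generalized eigenvector via (5) and checks the relations (6) together with the independence claimed by Theorem 4.

```python
import numpy as np

# Assumed illustrative 3x3 Jordan block with eigenvalue lam = 2; it has a
# single eigenvector, so a chain of length r = 3 is needed.
lam = 2.0
A = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])
N = A - lam * np.eye(3)                     # N = A - lambda*I

# u is a generalized eigenvector of rank 3: N^3 u = 0 but N^2 u != 0.
u = np.array([0.0, 0.0, 1.0])
assert np.allclose(np.linalg.matrix_power(N, 3) @ u, 0.0)
assert not np.allclose(np.linalg.matrix_power(N, 2) @ u, 0.0)

# Build the chain as in (5): v_r = u, v_{r-1} = N u, ..., v_1 = N^{r-1} u.
v3 = u
v2 = N @ v3
v1 = N @ v2

# v1 is a genuine eigenvector, and the chain relations (6) hold by construction.
assert np.allclose(N @ v1, 0.0) and not np.allclose(v1, 0.0)

# Theorem 4: the chain vectors are linearly independent (full column rank).
assert np.linalg.matrix_rank(np.column_stack([v1, v2, v3])) == 3
print("chain v1, v2, v3 built; linearly independent, as Theorem 4 predicts")
```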
A chain of generalized eigenvectors allows us to construct solutions of the system of ODEs. Indeed, we have:

Theorem 5. Given a chain of generalized eigenvectors of length $r$, we define

$$X_1(t) = v_1 e^{\lambda t},$$
$$X_2(t) = (t v_1 + v_2) e^{\lambda t},$$
$$X_3(t) = \left( \frac{t^2}{2} v_1 + t v_2 + v_3 \right) e^{\lambda t},$$
$$\vdots$$
$$X_r(t) = \left( \frac{t^{r-1}}{(r-1)!} v_1 + \cdots + \frac{t^2}{2} v_{r-2} + t v_{r-1} + v_r \right) e^{\lambda t}.$$

The functions $\{X_i(t)\}_{i=1}^{r}$ form $r$ linearly independent solutions of $\frac{dX}{dt} = AX$.

Proof. We have

$$X_j(t) = e^{\lambda t} \left( \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!}\, v_i \right).$$

We use the convention that $v_0 = 0$ and, with this convention, we can check from (6) that $A v_i = v_{i-1} + \lambda v_i$ for $i \in \{1, \ldots, r\}$. We have, on one hand,

$$\dot{X}_j(t) = e^{\lambda t} \sum_{i=1}^{j-1} \frac{t^{j-i-1}}{(j-i-1)!}\, v_i + e^{\lambda t} \lambda \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!}\, v_i,$$

and, on the other hand,

$$A X_j(t) = e^{\lambda t} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!}\, A v_i = e^{\lambda t} \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!}\, (v_{i-1} + \lambda v_i) = e^{\lambda t} \sum_{i=1}^{j-1} \frac{t^{j-i-1}}{(j-i-1)!}\, v_i + e^{\lambda t} \lambda \sum_{i=1}^{j} \frac{t^{j-i}}{(j-i)!}\, v_i,$$

where the last step splits the sum, reindexes the $v_{i-1}$ part, and uses $v_0 = 0$. The two expressions agree, hence each $X_j$ is a solution. To prove that the $X_i(t)$ are independent, it is enough to prove that the initial values $X_i(0) = v_i$ are independent, and this follows from Theorem 4. ∎

Conclusion: A chain of generalized eigenvectors of length $r$ gives us $r$ independent solutions.
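As a final sanity check (again an addition, reusing the same assumed Jordan block and chain as the previous sketch), the SymPy snippet below constructs $X_1, X_2, X_3$ as in Theorem 5 and verifies symbolically that each one solves $dX/dt = AX$.

```python
import sympy as sp

t = sp.symbols('t')
lam = sp.Integer(2)

# Same assumed Jordan block and chain as in the previous sketch.
A = sp.Matrix([[lam, 1, 0],
               [0, lam, 1],
               [0, 0, lam]])
chain = [sp.Matrix([1, 0, 0]),   # v1 (the eigenvector)
         sp.Matrix([0, 1, 0]),   # v2
         sp.Matrix([0, 0, 1])]   # v3

def X(j):
    """X_j(t) = e^{lam t} * sum_{i=1}^{j} t^{j-i}/(j-i)! * v_i, as in Theorem 5."""
    s = sum(((t ** (j - i) / sp.factorial(j - i)) * chain[i - 1]
             for i in range(1, j + 1)), sp.zeros(3, 1))
    return sp.exp(lam * t) * s

# Each X_j must satisfy dX/dt = A X identically in t.
for j in (1, 2, 3):
    Xj = X(j)
    assert sp.simplify(sp.diff(Xj, t) - A * Xj) == sp.zeros(3, 1)
print("X_1, X_2, X_3 all solve dX/dt = AX")
```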
It turns out that there exist enough chains of generalized eigenvectors to obtain a complete set of independent solutions. This is the content of the following theorem.