Math 113: Linear Algebra Eigenvectors and Eigenvalues

Total Pages: 16

File Type: PDF, Size: 1020 KB

Ilya Sherman
October 31, 2008

1 Finding Eigenvalues and Eigenvectors

Our goal is, given T ∈ L(V, V), to find bases that are "well-adapted" to T. In particular, it would be nice to have e_i so that T e_i = λ_i e_i.

Definition 1 (Eigenvalue, eigenvector). An eigenvector is a nonzero vector v ∈ V so that T(v) = λv for some λ ∈ F. An eigenvalue is any scalar λ ∈ F that occurs in this way.

Last time, we showed that if F = C, there is always an eigenvector. In fact, λ_0 is an eigenvalue if and only if λ_0 Id − A is not an isomorphism, i.e. det(λ_0 Id − A) = 0. Recall that this follows from the fact that det(λ Id − A) is a polynomial in λ of degree n, so any complex root of det(λ Id − A) is an eigenvalue.

Recall that if F = R, there doesn't have to be an eigenvector (consider, for example, a rotation in R^2). Recall also that repeated roots of the polynomial can result in fewer than n distinct eigenvalues, and hence fewer than n linearly independent eigenvectors.

Note that this proof also shows that if F = R and n is odd, then any A ∈ L(V, V) has an eigenvector (because any polynomial of odd degree has a real root). Axler gives a different proof of this property:

Proposition 1. For F = C, any A ∈ L(V, V) has an eigenvector.

Proof. Take v ≠ 0. Consider v, Av, A^2 v, ..., A^n v. These are n + 1 elements in V. Since the dimension of V is n, there exist a_i ∈ F so that

    a_0 v + a_1 Av + ... + a_n A^n v = 0,

with not all a_i = 0.

If n = 1, then this boils down to a_1 Av + a_0 v = 0, i.e. Av = −(a_0/a_1) v, so v is an eigenvector. In general, we'll reduce to the linear case by factoring the sum into linear factors. Let p be the polynomial with complex coefficients

    p(z) = a_0 + a_1 z + ... + a_n z^n.

It factors as

    p(z) = C ∏_j (z − z_j),

where the z_j are the roots. Then we can "substitute" A for z, as

    p(A) = C ∏_j (A − z_j Id).

This is because all the algebraic rules used to verify the original factoring (e.g. the distributive property) remain valid when substituting A for z. For more detail, check out "polynomials applied to operators" in Axler, Chapter 5. Warning: note that matrix multiplication is not in general commutative; but fortunately, the only matrices we're multiplying here are A and Id, which commute with one another.

Since p(A)v = 0, we have ∏_j (A − z_j Id) v = 0, i.e. ∏_j (A − z_j Id) is not injective, i.e. there exists some j so that A − z_j Id is not injective, and hence z_j is an eigenvalue. ∎

Exercise: work out the proof numerically to find an eigenvalue of a 2 × 2 matrix.

Note that the first proof we gave of this property gave us all of the eigenvalues: they are exactly the roots of the characteristic polynomial, det(λ Id − A). In contrast, this proof only tells us one eigenvalue.

Proposition 2. Suppose v_1, ..., v_k are eigenvectors of A with distinct eigenvalues λ_1, ..., λ_k, i.e. λ_i ≠ λ_j for i ≠ j. Then the v_i are linearly independent. In particular, if the characteristic polynomial of A has n distinct roots, then there exists a basis v_1, ..., v_n of eigenvectors for V.

Proof. First of all, note that the second part follows immediately from the first: if the characteristic polynomial has n distinct roots λ_1, ..., λ_n, then all of the λ_i are eigenvalues. Let v_1, ..., v_n be the corresponding eigenvectors. By the first assertion, they are linearly independent, so they are a basis.

Now, suppose we have some linear relation

    α_1 v_1 + α_2 v_2 + ... + α_k v_k = 0,

where not all α_i are 0. Apply A:

    α_1 λ_1 v_1 + α_2 λ_2 v_2 + ... + α_k λ_k v_k = 0.

Subtracting λ_1 times the first equation from the second equation gives

    α_2 (λ_2 − λ_1) v_2 + ... + α_k (λ_k − λ_1) v_k = 0.

But we could choose the original linear relation ∑_{i=1}^k α_i v_i = 0 to have the smallest possible number of nonzero α_i (among all possible linear relations). The reasoning above shows that one can produce a shorter linear relation (fewer nonzero α_i's), though we need to make sure that rather than just eliminating α_1, we choose some α_j ≠ 0 to eliminate. This is a contradiction.

Another way to show this is to repeatedly apply A to

    α_1 v_1 + α_2 v_2 + ... + α_k v_k = 0,

where not all α_i are 0. Applying A repeatedly:

    α_1 v_1 + α_2 v_2 + ... + α_k v_k = 0
    α_1 λ_1 v_1 + α_2 λ_2 v_2 + ... + α_k λ_k v_k = 0
    α_1 λ_1^2 v_1 + α_2 λ_2^2 v_2 + ... + α_k λ_k^2 v_k = 0
    ...
    α_1 λ_1^m v_1 + α_2 λ_2^m v_2 + ... + α_k λ_k^m v_k = 0   (applying A m times)
    ...

In an earlier exercise (while studying polynomial interpolation), we showed that the vectors (1, λ_i, λ_i^2, ..., λ_i^{k−1}) for 1 ≤ i ≤ k are linearly independent in F^k. Hence, these vectors are a basis for F^k. This implies that (1, 1, ..., 1), (λ_1, ..., λ_k), (λ_1^2, ..., λ_k^2), ..., (λ_1^{k−1}, ..., λ_k^{k−1}) are also linearly independent (because in matrices, row rank is equal to column rank). Hence, there exist β_i so that

    β_1 (1, 1, ..., 1) + β_2 (λ_1, ..., λ_k) + ... + β_k (λ_1^{k−1}, ..., λ_k^{k−1}) = (1, 0, 0, ..., 0).

Taking this linear combination of the equations above, we see that α_1 v_1 = 0, hence α_1 = 0; similarly each α_i v_i = 0, hence α_i = 0. ∎

The thing to learn from this proof is that considering matrix powers A, A^2, ... is a useful way of studying eigenvalues and eigenvectors.

Note that the condition that the characteristic polynomial has n distinct roots is usually satisfied, in the sense that if we choose a matrix at random, most of the time this is true.
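The exercise can be carried out concretely for a 2 × 2 matrix, following Axler's proof: v, Av, A^2 v are three vectors in R^2, so they satisfy a nontrivial linear relation, and the roots of the resulting quadratic p are eigenvalues. A minimal sketch under stated assumptions — the matrix A and starting vector v are made-up test data, not from the notes:

```python
import cmath

def det2(u, w):
    # Determinant of the 2x2 matrix with columns u and w.
    return u[0] * w[1] - u[1] * w[0]

def matvec(A, x):
    # Multiply a 2x2 matrix by a vector in R^2.
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

# Made-up test data: A has eigenvalues 2 and 3; v is not itself an eigenvector.
A = [[2, 1], [0, 3]]
v = [1, 2]

Av = matvec(A, v)      # A v
AAv = matvec(A, Av)    # A^2 v

# v, Av, A^2 v are three vectors in R^2, hence linearly dependent; one
# explicit relation (a Cramer-style identity) is
#   det2(Av, AAv) * v - det2(v, AAv) * Av + det2(v, Av) * AAv = 0,
# i.e. p(A) v = 0 for p(z) = c2 z^2 + c1 z + c0 with:
c0, c1, c2 = det2(Av, AAv), -det2(v, AAv), det2(v, Av)

# As in the proof, the roots of p include an eigenvalue of A
# (here found by the quadratic formula).
disc = cmath.sqrt(c1 * c1 - 4 * c2 * c0)
roots = [(-c1 + disc) / (2 * c2), (-c1 - disc) / (2 * c2)]
```

For this generic choice of v the quadratic is (a scalar multiple of) the full characteristic polynomial, so both roots are eigenvalues; for a v that happens to be an eigenvector the relation degenerates, which is why a generic starting vector is used.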
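For comparison, the first proof yields all eigenvalues at once as roots of det(λ Id − A); for a 2 × 2 matrix this is just the quadratic formula applied to λ^2 − (a + d)λ + (ad − bc). A short sketch (the helper name is ours, not from the notes), using the rotation example to show how the eigenvalues leave R but exist over C:

```python
import cmath

def char_poly_eigenvalues(a, b, c, d):
    # The characteristic polynomial of A = [[a, b], [c, d]] is
    #   det(lambda * Id - A) = lambda^2 - (a + d) * lambda + (a*d - b*c),
    # so both eigenvalues come from the quadratic formula.
    tr, dt = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * dt)
    return (tr + disc) / 2, (tr - disc) / 2

# Rotation by 90 degrees in R^2, as in the notes: no real eigenvalue,
# but over C the eigenvalues are +i and -i.
lam1, lam2 = char_poly_eigenvalues(0, -1, 1, 0)
```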
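The interpolation fact used in the second proof of Proposition 2 is that the Vandermonde matrix with rows (1, λ_i, λ_i^2, ..., λ_i^{k−1}) is invertible when the λ_i are distinct; its determinant equals ∏_{i<j} (λ_j − λ_i), which is nonzero exactly when the λ_i are distinct. A quick numerical check of that identity (the example values are chosen arbitrarily):

```python
def det(M):
    # Determinant by cofactor expansion along the first row
    # (fine for the small matrices used here).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# Arbitrary distinct example values.
lams = [1, 2, 4]
k = len(lams)

# Vandermonde matrix: row i is (1, lam_i, lam_i^2, ..., lam_i^(k-1)).
V = [[lam ** j for j in range(k)] for lam in lams]

# Product formula: det V = prod over i < j of (lam_j - lam_i).
prod = 1
for i in range(k):
    for j in range(i + 1, k):
        prod *= lams[j] - lams[i]
```

Here det(V) and the product both come out to 6, and the product form makes it clear that any repeated λ_i would force the determinant to 0, i.e. the rows to be dependent.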