A Second Course in Ordinary Differential Equations


Lecture Notes in Mathematics
Arkansas Tech University, Department of Mathematics
Marcel B. Finan
© All Rights Reserved, 2015 Edition

To my children Amin and Nadia

Preface

The present text, while mathematically rigorous, is readily accessible to students of either mathematics or engineering. In addition to all the theoretical considerations, there are numerous worked examples drawn from engineering and physics. It is important to have a good understanding of the underlying theory, rather than just a cursory knowledge of its application. This text provides that understanding.

Marcel B. Finan
Russellville, Arkansas
May 2015

Contents

Preface

1. nth Order Linear Ordinary Differential Equations
   1.1 Calculus of Matrix-Valued Functions of a Real Variable
   1.2 Uniform Convergence and Weierstrass M-Test
   1.3 Existence and Uniqueness Theorem
   1.4 The General Solution of nth Order Linear Homogeneous Equations
   1.5 Existence of Many Fundamental Sets
   1.6 nth Order Homogeneous Linear Equations with Constant Coefficients
   1.7 Review of Power Series
   1.8 Series Solutions of Linear Homogeneous Differential Equations
   1.9 Non-Homogeneous nth Order Linear Differential Equations

2. First Order Linear Systems
   2.1 Existence and Uniqueness of Solution to Initial Value First Order Linear Systems
   2.2 Homogeneous First Order Linear Systems
   2.3 First Order Linear Systems: Fundamental Sets and Linear Independence
   2.4 Homogeneous Systems with Constant Coefficients: Distinct Eigenvalues
   2.5 Homogeneous Systems with Constant Coefficients: Complex Eigenvalues
   2.6 Homogeneous Systems with Constant Coefficients: Repeated Eigenvalues

Answer Key
Index

Chapter 1
nth Order Linear Ordinary Differential Equations

This chapter is devoted to extending the results for solving second order ordinary differential equations to higher order ODEs.

1.1 Calculus of Matrix-Valued Functions of a Real Variable

In establishing the existence result for second and higher order linear differential equations, one transforms the equation into a linear system and tries to solve that system. This procedure requires the use of concepts such as the derivative of a matrix whose entries are functions of $t$, the integral of a matrix, and the exponential matrix function. Thus, techniques from matrix theory play an important role in dealing with systems of differential equations. The present section introduces the necessary background in the calculus of matrix functions.

Matrix-Valued Functions of a Real Variable

A matrix $A$ of dimension $m \times n$ is a rectangular array of the form
$$
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
$$
where the $a_{ij}$'s are the entries of the matrix, $m$ is the number of rows, and $n$ is the number of columns. When $m = n$ we call the matrix a square matrix. The zero matrix $0$ is the matrix whose entries are all 0. The $n \times n$ identity matrix $I_n$ is a square matrix whose main diagonal consists of 1's and whose off-diagonal entries are all 0.
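These objects are easy to experiment with on a computer. The following minimal sketch assumes Python with the NumPy library (the notes themselves prescribe no software) and builds the matrix of Example 1.1.1 below together with a zero matrix and an identity matrix; note that NumPy indexes entries from 0 rather than from 1.

```python
# A minimal sketch, assuming NumPy: a square matrix, a zero matrix,
# and the identity matrix I_3.
import numpy as np

A = np.array([[-5,  0,  1],
              [10, -2,  0],
              [-5,  2, -7]])   # the 3 x 3 matrix of Example 1.1.1 below

Z  = np.zeros((2, 3))          # the 2 x 3 zero matrix: every entry is 0
I3 = np.eye(3)                 # I_3: 1's on the main diagonal, 0's elsewhere

# NumPy indexes from 0, so the entry a_ij of the notes is A[i-1, j-1]:
print(A[1, 1], A[2, 1], A[1, 2])   # -2 2 0, i.e. a_22, a_32, a_23
```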
A matrix $A$ can be represented with the compact notation $A = [a_{ij}]$. The entry $a_{ij}$ is located in the $i$th row and $j$th column.

Example 1.1.1
Consider the matrix
$$
A = \begin{bmatrix} -5 & 0 & 1 \\ 10 & -2 & 0 \\ -5 & 2 & -7 \end{bmatrix}.
$$
Find $a_{22}$, $a_{32}$, and $a_{23}$.

Solution.
The entry $a_{22}$ is in the second row and second column, so $a_{22} = -2$. Similarly, $a_{32} = 2$ and $a_{23} = 0$.

An $m \times n$ array whose entries are functions of a real variable defined on a common interval $I$ is called a matrix function. Thus, the matrix
$$
A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & a_{13}(t) \\ a_{21}(t) & a_{22}(t) & a_{23}(t) \\ a_{31}(t) & a_{32}(t) & a_{33}(t) \end{bmatrix}
$$
is a $3 \times 3$ matrix function, whereas the matrix
$$
x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix}
$$
is a $3 \times 1$ matrix function, also known as a vector-valued function. We will denote an $m \times n$ matrix function by $A(t) = [a_{ij}(t)]$, where $a_{ij}(t)$ is the entry in the $i$th row and $j$th column.

Arithmetic of Matrix Functions

All the familiar rules of matrix arithmetic hold for matrix functions as well.

(i) Equality: Two $m \times n$ matrices $A(t) = [a_{ij}(t)]$ and $B(t) = [b_{ij}(t)]$ are said to be equal if and only if $a_{ij}(t) = b_{ij}(t)$ for all $1 \le i \le m$, $1 \le j \le n$, and all $t \in I$. That is, two matrices are equal if and only if all corresponding entries are equal. Notice that the matrices must be of the same dimension.

Example 1.1.2
Solve the following matrix equation:
$$
\begin{bmatrix} t^2 - 3t + 2 & 1 \\ 2 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & t^2 - 4t + 2 \end{bmatrix}.
$$

Solution.
Equating corresponding entries we get $t^2 - 3t + 2 = 0$ and $t^2 - 4t + 2 = -1$. Solving these equations, we find $t = 1$ and $t = 2$ for the first equation and $t = 1$ and $t = 3$ for the second. Hence, $t = 1$ is the common value.

(ii) Addition: If $A(t) = [a_{ij}(t)]$ and $B(t) = [b_{ij}(t)]$ are $m \times n$ matrices, then the sum is a new $m \times n$ matrix obtained by adding corresponding entries:
$$
(A + B)(t) = A(t) + B(t) = [a_{ij}(t) + b_{ij}(t)].
$$
Matrices of different dimensions cannot be added.

(iii) Subtraction: Let $A(t) = [a_{ij}(t)]$ and $B(t) = [b_{ij}(t)]$ be two $m \times n$ matrices. Then the difference $(A - B)(t)$ is the new matrix obtained by subtracting corresponding entries; that is,
$$
(A - B)(t) = A(t) - B(t) = [a_{ij}(t) - b_{ij}(t)].
$$

Example 1.1.3
Consider the following matrices:
$$
A(t) = \begin{bmatrix} t-1 & t^2 \\ 2 & 2t+1 \end{bmatrix}, \qquad B(t) = \begin{bmatrix} t & -1 \\ 0 & t+2 \end{bmatrix}.
$$
Find $A(t) + B(t)$ and $A(t) - B(t)$.

Solution.
We have
$$
A(t) + B(t) = \begin{bmatrix} 2t-1 & t^2-1 \\ 2 & 3t+3 \end{bmatrix}
\quad\text{and}\quad
A(t) - B(t) = \begin{bmatrix} -1 & t^2+1 \\ 2 & t-1 \end{bmatrix}.
$$

(iv) Matrix multiplication by a function: If $\alpha(t)$ is a function and $A(t) = [a_{ij}(t)]$ is an $m \times n$ matrix, then $(\alpha A)(t)$ is the $m \times n$ matrix obtained by multiplying each entry of $A(t)$ by $\alpha(t)$; that is, $(\alpha A)(t) = [\alpha(t)a_{ij}(t)]$.

Example 1.1.4
Find $t^2 A(t)$, where $A(t)$ is the matrix defined in Example 1.1.3.

Solution.
We have
$$
t^2 A(t) = \begin{bmatrix} t^3 - t^2 & t^4 \\ 2t^2 & 2t^3 + t^2 \end{bmatrix}.
$$
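Because the entries are functions of $t$, a symbolic package is the natural tool for checking this entrywise arithmetic. The sketch below assumes SymPy (our choice; the text names no software) and reproduces Examples 1.1.2 through 1.1.4.

```python
# A sketch, assuming SymPy, of the entrywise arithmetic above.
from sympy import Matrix, solve, symbols

t = symbols('t')

# Example 1.1.2: equality holds entry by entry, so solve each entry
# equation separately and keep the common root.
roots1 = solve(t**2 - 3*t + 2, t)        # [1, 2]
roots2 = solve(t**2 - 4*t + 2 + 1, t)    # [1, 3]
print(set(roots1) & set(roots2))         # {1}

# Examples 1.1.3 and 1.1.4: addition, subtraction, and multiplication
# of a matrix function by the function t^2.
A = Matrix([[t - 1, t**2], [2, 2*t + 1]])
B = Matrix([[t, -1], [0, t + 2]])
print(A + B)                # Matrix([[2*t - 1, t**2 - 1], [2, 3*t + 3]])
print(A - B)                # Matrix([[-1, t**2 + 1], [2, t - 1]])
print((t**2 * A).expand())  # Matrix([[t**3 - t**2, t**4], [2*t**2, 2*t**3 + t**2]])
```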
(v) Matrix multiplication: If $A(t)$ is an $m \times n$ matrix and $B(t)$ is an $n \times p$ matrix, then $(AB)(t)$ is the $m \times p$ matrix $(AB)(t) = [c_{ij}(t)]$, where
$$
c_{ij}(t) = \sum_{k=1}^{n} a_{ik}(t) b_{kj}(t).
$$
That is, the entry $c_{ij}$ is obtained by multiplying the $i$th row of $A(t)$ componentwise by the $j$th column of $B(t)$ and adding the products. It is important to realize that the order of the multiplicands is significant; in other words, $(AB)(t)$ is not necessarily equal to $(BA)(t)$. In mathematical terminology, matrix multiplication is not commutative.

Example 1.1.5
Let
$$
A = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}, \qquad B = \begin{bmatrix} 2 & -1 \\ -3 & 4 \end{bmatrix}.
$$
Show that $AB \ne BA$. Hence, matrix multiplication is not commutative.

Solution.
Using the definition of matrix multiplication we find
$$
AB = \begin{bmatrix} -4 & 7 \\ 0 & 5 \end{bmatrix}, \qquad BA = \begin{bmatrix} -1 & 2 \\ 9 & 2 \end{bmatrix}.
$$
Hence, $AB \ne BA$.

(vi) Inverse: An $n \times n$ matrix $A(t)$ is said to be invertible or non-singular if and only if there is an $n \times n$ matrix $B(t)$ such that $(AB)(t) = (BA)(t) = I_n$. We denote the inverse of $A(t)$ by $A^{-1}(t)$.

Example 1.1.6
Find the inverse of the matrix
$$
A = \begin{bmatrix} t & 1 \\ 1 & t \end{bmatrix},
$$
given that $t \ne \pm 1$.

Solution.
Suppose that
$$
A^{-1} = \begin{bmatrix} b_{11}(t) & b_{12}(t) \\ b_{21}(t) & b_{22}(t) \end{bmatrix}.
$$
Then
$$
\begin{bmatrix} b_{11}(t) & b_{12}(t) \\ b_{21}(t) & b_{22}(t) \end{bmatrix}
\begin{bmatrix} t & 1 \\ 1 & t \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
$$
This implies that
$$
\begin{bmatrix} tb_{11}(t) + b_{12}(t) & b_{11}(t) + tb_{12}(t) \\ tb_{21}(t) + b_{22}(t) & b_{21}(t) + tb_{22}(t) \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
$$
Hence,
$$
\begin{aligned}
tb_{11}(t) + b_{12}(t) &= 1 \\
b_{11}(t) + tb_{12}(t) &= 0 \\
tb_{21}(t) + b_{22}(t) &= 0 \\
b_{21}(t) + tb_{22}(t) &= 1.
\end{aligned}
$$
Applying the method of elimination to the first two equations, we find $b_{11}(t) = \frac{t}{t^2-1}$ and $b_{12}(t) = \frac{-1}{t^2-1}$. Applying the method of elimination to the last two equations, we find $b_{21}(t) = \frac{-1}{t^2-1}$ and $b_{22}(t) = \frac{t}{t^2-1}$. Hence,
$$
A^{-1} = \frac{1}{t^2-1} \begin{bmatrix} t & -1 \\ -1 & t \end{bmatrix}.
$$

Norm of a Vector Function

The norm of a vector function will be needed in the coming sections. In one dimension the norm is the absolute value. In higher dimensions, we define the norm of a vector function $x(t)$ with components $x_1(t), x_2(t), \cdots, x_n(t)$ by
$$
\|x(t)\| = |x_1(t)| + |x_2(t)| + \cdots + |x_n(t)|.
$$
From this definition one notices the following properties:
(i) If $\|x(t)\| = 0$, then $|x_1(t)| + |x_2(t)| + \cdots + |x_n(t)| = 0$, and this implies that $x_i(t) = 0$ for each $i$; that is, $x(t) = 0$.
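The remaining operations of this section can be checked the same way. The sketch below, again assuming SymPy, verifies Examples 1.1.5 and 1.1.6 and evaluates the norm just defined at a sample point; the component functions used for the norm are illustrative choices, not taken from the notes.

```python
# A sketch, assuming SymPy, checking Examples 1.1.5 and 1.1.6 and the
# norm of a vector function.
from sympy import Abs, Matrix, simplify, symbols

t = symbols('t')

# Example 1.1.5: matrix multiplication is not commutative.
A = Matrix([[1, 2], [3, 2]])
B = Matrix([[2, -1], [-3, 4]])
print(A * B)           # Matrix([[-4, 7], [0, 5]])
print(B * A)           # Matrix([[-1, 2], [9, 2]])
print(A * B == B * A)  # False

# Example 1.1.6: the inverse of [[t, 1], [1, t]], valid for t != ±1.
M = Matrix([[t, 1], [1, t]])
print(simplify(M.inv()))  # Matrix([[t/(t**2 - 1), -1/(t**2 - 1)], ...])

# The norm ||x(t)|| = |x_1(t)| + ... + |x_n(t)| for sample components.
x = Matrix([t - 1, 2*t, t**2])
norm = sum(Abs(xi) for xi in x)
print(norm.subs(t, 2))    # |1| + |4| + |4| = 9
```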