Chapter 6 Eigenvalues and Eigenvectors


Po-Ning Chen, Professor
Department of Electrical and Computer Engineering
National Chiao Tung University
Hsin Chu, Taiwan 30010, R.O.C.

6.1 Introduction to eigenvalues

Motivations

• The static system problem Ax = b has now been solved, e.g., by the Gauss-Jordan method or Cramer's rule.
• However, a dynamic system problem such as Ax = λx cannot be solved by the static-system methods.
• To solve the dynamic system problem, we need to find the static feature of A that is "unchanged" by the mapping A. In other words, Ax maps x to itself, up to some stretching (λ > 1), shrinking (0 < λ < 1), or reversal (λ < 0).
• These invariant characteristics of A are the eigenvalues and eigenvectors. Ax maps a vector x into the column space C(A). We are looking for a v ∈ C(A) such that Av aligns with v. The collection of all such vectors is the set of eigenvectors.

6.1 Eigenvalues and eigenvectors

Conception (Eigenvalues and eigenvectors): An eigenvalue-eigenvector pair (λi, vi) of a square matrix A satisfies

\[ A v_i = \lambda_i v_i \qquad \text{(or equivalently } (A - \lambda_i I) v_i = 0\text{)} \]

where vi ≠ 0 (but λi can be zero), and where v1, v2, ... are linearly independent.

How to derive eigenvalues and eigenvectors?

• For a 3 × 3 matrix A, we can obtain the 3 eigenvalues λ1, λ2, λ3 by solving

\[ \det(A - \lambda I) = -\lambda^3 + c_2 \lambda^2 + c_1 \lambda + c_0 = 0. \]

• If λ1, λ2, λ3 are all unequal, then we continue to derive:

\[ A v_1 = \lambda_1 v_1 \;\Leftrightarrow\; (A - \lambda_1 I) v_1 = 0 \;\Leftrightarrow\; v_1 = \text{the only basis vector of the nullspace of } (A - \lambda_1 I), \]
\[ A v_2 = \lambda_2 v_2 \;\Leftrightarrow\; (A - \lambda_2 I) v_2 = 0 \;\Leftrightarrow\; v_2 = \text{the only basis vector of the nullspace of } (A - \lambda_2 I), \]
\[ A v_3 = \lambda_3 v_3 \;\Leftrightarrow\; (A - \lambda_3 I) v_3 = 0 \;\Leftrightarrow\; v_3 = \text{the only basis vector of the nullspace of } (A - \lambda_3 I), \]

and the resulting v1, v2 and v3 are linearly independent of each other.
• If A is symmetric, then v1, v2 and v3 are orthogonal to each other.
• If, for example, λ1 = λ2 ≠ λ3, then we derive:

\[ (A - \lambda_1 I) v_1 = 0 \;\Leftrightarrow\; \{v_1, v_2\} = \text{a basis of the nullspace of } (A - \lambda_1 I), \]
\[ (A - \lambda_3 I) v_3 = 0 \;\Leftrightarrow\; v_3 = \text{the only basis vector of the nullspace of } (A - \lambda_3 I), \]

and the resulting v1, v2 and v3 are linearly independent of each other when the nullspace of (A − λ1I) is two-dimensional.
• If A is symmetric, then v1, v2 and v3 are orthogonal to each other.
  – Yet it is possible that the nullspace of (A − λ1I) is one-dimensional, in which case we can only have v1 = v2.
  – In that case, we say the repeated eigenvalue λ1 has only one eigenvector; in other words, A has only two eigenvectors.
  – If A is symmetric, then v1 and v3 are orthogonal to each other.
• If λ1 = λ2 = λ3, then we derive:

\[ (A - \lambda_1 I) v_1 = 0 \;\Leftrightarrow\; \{v_1, v_2, v_3\} = \text{a basis of the nullspace of } (A - \lambda_1 I). \]

• When the nullspace of (A − λ1I) is three-dimensional, v1, v2 and v3 are linearly independent of each other.
• When the nullspace of (A − λ1I) is two-dimensional, we say A has only two eigenvectors.
• When the nullspace of (A − λ1I) is one-dimensional, we say A has only one eigenvector. In that case, the "matrix-form eigensystem" becomes

\[ A \begin{bmatrix} v_1 & v_1 & v_1 \end{bmatrix} = \begin{bmatrix} v_1 & v_1 & v_1 \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_1 \end{bmatrix}. \]

• If A is symmetric, then distinct eigenvectors are orthogonal to each other.

Invariance of eigenvectors and eigenvalues

• Property 1: The eigenvectors stay the same for every power of A, and the eigenvalues are raised to the same power, i.e., A^n vi = λi^n vi. For example,

\[ A v_i = \lambda_i v_i \;\Longrightarrow\; A^2 v_i = A(\lambda_i v_i) = \lambda_i A v_i = \lambda_i^2 v_i. \]

• Property 2: If the nullspace of an n × n matrix A contains non-zero vectors, then 0 is an eigenvalue of A. Accordingly, there exists a non-zero vi satisfying Avi = 0 · vi = 0.
• Property 3: Assume, with no loss of generality, |λ1| > |λ2| > ··· > |λk|. For any vector x = a1v1 + a2v2 + ··· + akvk that is a linear combination of all the eigenvectors, repeatedly applying the normalized mapping P = (1/λ1)A (which satisfies Pv1 = (1/λ1)Av1 = (1/λ1)λ1v1 = v1) converges to the component along the eigenvector with the largest absolute eigenvalue:

\[ \lim_{m\to\infty} P^m x = \lim_{m\to\infty} \frac{1}{\lambda_1^m} A^m x = \lim_{m\to\infty} \frac{1}{\lambda_1^m}\bigl( a_1 A^m v_1 + a_2 A^m v_2 + \cdots + a_k A^m v_k \bigr) \]
\[ = \lim_{m\to\infty} \frac{1}{\lambda_1^m}\bigl( a_1 \lambda_1^m v_1 + a_2 \lambda_2^m v_2 + \cdots + a_k \lambda_k^m v_k \bigr) = \underbrace{a_1 v_1}_{\text{steady state}}, \]

where all the other terms are transient states that vanish in the limit.
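Property 3 is the idea behind the power method. The short NumPy sketch below is an illustration added here (the matrix A, the starting vector x, and the iteration count are arbitrary choices, not from the original notes): it applies the normalized mapping P = (1/λ1)A repeatedly and checks that the iterate settles at the steady-state component a1v1.

```python
import numpy as np

# Illustrative sketch of Property 3 (assumed example values, not from the notes).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # eigenvalues 3 and 1; dominant eigenvector (1,1)/sqrt(2)
lam1 = 3.0                            # largest-magnitude eigenvalue, assumed known here
P = A / lam1                          # the normalized mapping of Property 3

x = np.array([1.0, 0.0])              # x = a1*v1 + a2*v2 with a1 = 1/sqrt(2)
y = x.copy()
for _ in range(40):                   # compute P^m x for m = 40
    y = P @ y

v1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
a1 = v1 @ x                           # component of x along the dominant eigenvector
assert np.allclose(y, a1 * v1)        # the iterate has converged to a1*v1
print(y)                              # ~ [0.5, 0.5]
```

In practice λ1 is not known in advance, so the power method normalizes by the length of the current iterate (e.g., y = A @ y followed by y /= np.linalg.norm(y)) instead of dividing by λ1; the limiting direction is the same.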
6.1 How to determine the eigenvalues?

We wish to find a non-zero vector v satisfying Av = λv; then

\[ (A - \lambda I) v = 0 \text{ with } v \neq 0 \;\Leftrightarrow\; \det(A - \lambda I) = 0. \]

So by solving det(A − λI) = 0, we obtain all the eigenvalues of A.

Example. Find the eigenvalues of $A = \begin{bmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{bmatrix}$.

Solution.

\[ \det(A - \lambda I) = \det \begin{bmatrix} 0.5 - \lambda & 0.5 \\ 0.5 & 0.5 - \lambda \end{bmatrix} = (0.5 - \lambda)^2 - 0.5^2 = \lambda^2 - \lambda = \lambda(\lambda - 1) = 0 \;\Rightarrow\; \lambda = 0 \text{ or } 1. \]

Proposition: A projection matrix (defined in Section 4.2) has eigenvalues equal to either 1 or 0.

Proof:
• A projection matrix always satisfies P² = P. So P²v = Pv = λv.
• By the definition of eigenvalues and eigenvectors, we also have P²v = λ²v.
• Hence, λv = λ²v for a non-zero vector v, which immediately implies λ = λ².
• Accordingly, λ is either 1 or 0. □

Proposition: A permutation matrix has eigenvalues satisfying λ^k = 1 for some integer k.

Proof:
• A permutation matrix always satisfies P^{k+1} = P for some integer k.

Example. $P = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$. Then $P v = \begin{bmatrix} v_3 \\ v_1 \\ v_2 \end{bmatrix}$ and $P^3 v = v$. Hence k = 3.

• Accordingly, λ^{k+1}v = λv, which gives λ^k = 1 since an eigenvalue of P cannot be zero. □

Proposition: The matrix

\[ \alpha_m A^m + \alpha_{m-1} A^{m-1} + \cdots + \alpha_1 A + \alpha_0 I \]

has the same eigenvectors as A, but its eigenvalues become

\[ \alpha_m \lambda^m + \alpha_{m-1} \lambda^{m-1} + \cdots + \alpha_1 \lambda + \alpha_0, \]

where λ is an eigenvalue of A.

Proof:
• Let vi and λi be an eigenvector and eigenvalue of A. Then

\[ \bigl( \alpha_m A^m + \alpha_{m-1} A^{m-1} + \cdots + \alpha_0 I \bigr) v_i = \bigl( \alpha_m \lambda_i^m + \alpha_{m-1} \lambda_i^{m-1} + \cdots + \alpha_0 \bigr) v_i. \]

• Hence, vi and αmλi^m + αm−1λi^{m−1} + ··· + α0 are an eigenvector and eigenvalue of the polynomial matrix. □

Theorem (Cayley-Hamilton): For a square matrix A, define

\[ f(\lambda) = \det(A - \lambda I) = (-1)^n \lambda^n + \alpha_{n-1} \lambda^{n-1} + \cdots + \alpha_0. \]

(Suppose A has linearly independent eigenvectors.) Then

\[ f(A) = (-1)^n A^n + \alpha_{n-1} A^{n-1} + \cdots + \alpha_0 I = \text{the all-zero } n \times n \text{ matrix}. \]

Proof: The eigenvalues of f(A) are all zeros, and the eigenvectors of f(A) remain the same as those of A. By the definition of the eigensystem, we have

\[ f(A) \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \begin{bmatrix} f(\lambda_1) & 0 & \cdots & 0 \\ 0 & f(\lambda_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & f(\lambda_n) \end{bmatrix}, \]

and every f(λi) = 0. □

Corollary (Cayley-Hamilton): (Suppose A has linearly independent eigenvectors.)

\[ (\lambda_1 I - A)(\lambda_2 I - A) \cdots (\lambda_n I - A) = \text{the all-zero matrix}. \]

Proof: f(λ) can be re-written as f(λ) = (λ1 − λ)(λ2 − λ)···(λn − λ). □

(Problem 11, Section 6.1) Here is a strange fact about 2 by 2 matrices with eigenvalues λ1 ≠ λ2: the columns of A − λ1I are multiples of the eigenvector x2. Any idea why this should be?

Hint: $(\lambda_1 I - A)(\lambda_2 I - A) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ implies

\[ (\lambda_1 I - A) w_1 = 0 \quad\text{and}\quad (\lambda_1 I - A) w_2 = 0, \qquad\text{where } (\lambda_2 I - A) = \begin{bmatrix} w_1 & w_2 \end{bmatrix}. \]

Hence, the columns of (λ2I − A) give the eigenvectors of λ1 if they are non-zero vectors, and the columns of (λ1I − A) give the eigenvectors of λ2 if they are non-zero vectors. So the (non-zero) columns of A − λ1I are (multiples of) the eigenvector x2.
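Both the polynomial proposition and the Cayley-Hamilton theorem are easy to verify numerically. The sketch below is an added illustration (the test polynomial 2λ² + 3λ + 1 is an arbitrary choice, not from the notes); it reuses the matrix from the 2 × 2 example above and NumPy's np.poly, which returns the coefficients of the monic characteristic polynomial det(λI − A). That polynomial differs from f(λ) = det(A − λI) only by the factor (−1)^n, so it also vanishes at A.

```python
import numpy as np

A = np.array([[0.5, 0.5],
              [0.5, 0.5]])                 # the projection matrix from the example above
n = A.shape[0]

# Cayley-Hamilton: the characteristic polynomial, evaluated at A, is the zero matrix.
coeffs = np.poly(A)                        # ~ [1., -1., 0.], i.e. lambda^2 - lambda
f_of_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
assert np.allclose(f_of_A, np.zeros((n, n)))

# Polynomial proposition: 2A^2 + 3A + I keeps the eigenvectors of A,
# and its eigenvalues are 2*lambda^2 + 3*lambda + 1.
eigvals, eigvecs = np.linalg.eig(A)
poly_A = 2 * np.linalg.matrix_power(A, 2) + 3 * A + np.eye(n)
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)                          # (lam, v) is an eigenpair of A
    assert np.allclose(poly_A @ v, (2 * lam**2 + 3 * lam + 1) * v)
```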
6.1 Why Gauss-Jordan cannot solve Ax = λx

• Forward elimination may change the eigenvalues and eigenvectors, as the following example shows.

Example. Check the eigenvalues and eigenvectors of $A = \begin{bmatrix} 1 & -2 \\ 2 & 5 \end{bmatrix}$.

Solution.
– The eigenvalues of A satisfy det(A − λI) = (λ − 3)² = 0.
– $A = LU = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 1 & -2 \\ 0 & 9 \end{bmatrix}$. The eigenvalues of U apparently satisfy det(U − λI) = (1 − λ)(9 − λ) = 0.
– Suppose u1 and u2 are the eigenvectors of U corresponding to the eigenvalues 1 and 9, respectively. Then they cannot be eigenvectors of A, since if they were,

\[ 3 u_1 = A u_1 = L U u_1 = L u_1 = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix} u_1 \;\Longrightarrow\; u_1 = 0, \]
\[ 3 u_2 = A u_2 = L U u_2 = 9 L u_2 = 9 \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix} u_2 \;\Longrightarrow\; u_2 = 0. \]

Hence, the eigenvalues have nothing to do with the pivots (except for a triangular A).

6.1 How to determine the eigenvalues? (Revisited)

Solve det(A − λI) = 0.

• det(A − λI) is a polynomial of degree n:

\[ f(\lambda) = \det(A - \lambda I) = \det \begin{bmatrix} a_{1,1} - \lambda & a_{1,2} & a_{1,3} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} - \lambda & a_{2,3} & \cdots & a_{2,n} \\ a_{3,1} & a_{3,2} & a_{3,3} - \lambda & \cdots & a_{3,n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & a_{n,3} & \cdots & a_{n,n} - \lambda \end{bmatrix} \]
\[ = (a_{1,1} - \lambda)(a_{2,2} - \lambda) \cdots (a_{n,n} - \lambda) + \text{terms of order at most } (n-2) \quad\text{(by the Leibniz formula)} \]
\[ = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda). \]

Observations

• The coefficient of λ^n is (−1)^n, provided n ≥ 1.
• The coefficient of λ^{n−1} is $(-1)^{n-1} \sum_{i=1}^n \lambda_i = (-1)^{n-1} \sum_{i=1}^n a_{i,i}$; hence $\sum_{i=1}^n \lambda_i = \operatorname{trace}(A)$, provided n ≥ 2.
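A quick numerical check of the last two points, added here as an illustration (the factors L and U are copied from the example above): the pivots of U are not the eigenvalues of A, while the trace of A does equal the sum of its eigenvalues.

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [2.0,  5.0]])

# Elimination factors from the example above: A = L U, with pivots 1 and 9 on U's diagonal.
L = np.array([[1.0, 0.0],
              [2.0, 1.0]])
U = np.array([[1.0, -2.0],
              [0.0,  9.0]])
assert np.allclose(L @ U, A)

print(np.linalg.eigvals(A))    # ~ [3., 3.] -- the repeated eigenvalue of A (tiny round-off possible)
print(np.linalg.eigvals(U))    # [1., 9.]   -- the pivots; clearly not the eigenvalues of A

# Observation from the characteristic polynomial: trace(A) equals the sum of the eigenvalues.
assert np.isclose(np.trace(A), np.linalg.eigvals(A).sum())
```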