Applied Matrix Theory


Lecture Notes for Math 464/514
Presented by Dr. Monika Nitsche
Typeset and edited by Eric M. Benner
Students Press, December 3, 2013
Copyright © 2013

Contents

1  Introduction to Linear Algebra ........ 1
   1.1  Lecture 1: August 19, 2013 ........ 1
        About the class, 1. Linear Systems, 1. Example: Application to boundary value problem, 2. Analysis of error, 3. Solution of the discretized equation, 4.

2  Matrix Inversion ........ 5
   2.1  Lecture 2: August 21, 2013 ........ 5
        Gaussian Elimination, 5. Inner-product based implementation, 7. Office hours and other class notes, 8. Example: Gauss Elimination, 8.
   2.2  Lecture 3: August 23, 2013 ........ 8
        Example: Gauss Elimination, cont., 8. Operation Cost of Forward Elimination, 9. Cost of the Order of an Algorithm, 10. Validation of Lower/Upper Triangular Form, 11. Theoretical derivation of Lower/Upper Form, 11.
   2.3  HW 1: Due August 30, 2013 ........ 12

3  Factorization ........ 15
   3.1  Lecture 4: August 26, 2013 ........ 15
        Elementary Matrices, 15. Solution of Matrix using the Lower/Upper factorization, 18. Sparse and Banded Matrices, 18. Motivation for Gauss Elimination with Pivoting, 19.
   3.2  Lecture 5: August 28, 2013 ........ 19
        Motivation for Gauss Elimination with Pivoting, cont., 19. Discussion of well-posedness, 20. Gaussian elimination with pivoting, 21.
   3.3  Lecture 6: August 30, 2013 ........ 22
        Discussion of HW problem 2, 22. PLU factorization, 22.
   3.4  Lecture 7: September 4, 2013 ........ 24
        PLU Factorization, 24. Triangular Matrices, 25. Multiplication of lower triangular matrices, 25. Inverse of a lower triangular matrix, 25. Uniqueness of LU factorization, 26. Existence of the LU factorization, 26.
   3.5  Lecture 8: September 6, 2013 ........ 27
        About Homeworks, 27. Discussion of ill-conditioned systems, 27. Inversion of lower triangular matrices, 28. Example of LU decomposition of a lower triangular matrix, 28. Banded matrix example, 29.
   3.6  Lecture 9: September 9, 2013 ........ 29
        Existence of the LU factorization (cont.), 29. Rectangular matrices, 31.
   3.7  HW 2: Due September 13, 2013 ........ 32

4  Rectangular Matrices ........ 35
   4.1  Lecture 10: September 11, 2013 ........ 35
        Rectangular matrices (cont.), 35. Example of RREF of a Rectangular Matrix, 37.
   4.2  Lecture 11: September 13, 2013 ........ 38
        Solving Ax = b, 38. Example, 38. Linear functions, 39. Example: Transpose operator, 40. Example: trace operator, 40. Matrix multiplication, 41. Proof of transposition property, 42.
   4.3  Lecture 12: September 16, 2013 ........ 42
        Inverses, 42. Low rank perturbations of I, 43. The Sherman–Morrison Formula, 44. Finite difference example with periodic boundary conditions, 44. Examples of perturbation, 45. Small perturbations of I, 45.
   4.4  Lecture 13: September 18, 2013 ........ 46
        Small perturbations of I (cont.), 46. Matrix Norms, 47. Condition Number, 48.
   4.5  HW 3: Due September 27, 2013 ........ 49

5  Vector Spaces ........ 55
   5.1  Lecture 14: September 20, 2013 ........ 55
        Topics in Vector Spaces, 55. Field, 55. Vector Space, 56. Examples of function spaces, 57.
   5.2  Lecture 15: September 23, 2013 ........ 58
        The four subspaces of A_{m×n}, 58.
   5.3  Lecture 16: September 25, 2013 ........ 61
        The Four Subspaces of A, 62. Linear Independence, 63.
   5.4  Lecture 17: September 27, 2013 ........ 64
        Linear functions (rev), 64. Review for exam, 64. Previous lecture continued, 65.
   5.5  Lecture 18: October 2, 2013 ........ 66
        Exams and Points, 66. Continuation of last lecture, 66.

6  Least Squares ........ 69
   6.1  Lecture 19: October 4, 2013 ........ 69
        Least Squares, 69.
   6.2  Lecture 20: October 7, 2013 ........ 70
        Properties of Transpose Multiplication, 71. The Normal Equations, 71. Exam 1, 73.
   6.3  Lecture 21: October 9, 2013 ........ 74
        Exam Review, 74. Least squares and minimization, 74.
   6.4  HW 4: Due October 21, 2013 ........ 76

7  Linear Transformations ........ 81
   7.1  Lecture 22: October 14, 2013 ........ 81
        Linear Transformations, 83. Examples of Linear Functions, 83. Matrix representation of linear transformations, 83.
   7.2  Lecture 23: October 16, 2013 ........ 84
        Basis of a linear transformation, 84. Action of linear transform, 87. Change of Basis, 88.
   7.3  Lecture 24: October 21, 2013 ........ 89
        Change of Basis (cont.), 89.
   7.4  Lecture 25: October 23, 2013 ........ 91
        Properties of Special Bases, 91. Invariant Subspaces, 93.
   7.5  HW 5: Due November 4, 2013 ........ 94

8  Norms ........ 99
   8.1  Lecture 26: October 25, 2013 ........ 99
        Definition of norms, 99. Vector Norms, 99. The two norm, 99. Matrix Norms, 101. Induced Norms, 102.
   8.2  Lecture 27: October 28, 2013 ........ 102
        Matrix norms (review), 102. Frobenius Norm, 102. Induced Matrix Norms, 104.
   8.3  Lecture 28: October 30, 2013 ........ 106
        The 2-norm, 106.

9  Orthogonalization with Projection and Rotation ........ 109
   9.1  Lecture 28 (cont.) ........ 109
        Inner Product Spaces, 109.
   9.2  Lecture 29: November 1, 2013 ........ 110
        Inner Product Spaces, 110. Fourier Expansion, 111. Orthogonalization Process (Gram–Schmidt), 111.
   9.3  Lecture 30: November 4, 2013 ........ 112
        Gram–Schmidt Orthogonalization, 112.
   9.4  Lecture 31: November 6, 2013 ........ 115
        Unitary (orthogonal) matrices, 116. Rotation, 117. Reflection, 118.
   9.5  HW 6: Due November 11, 2013 ........ 118
   9.6  Lecture 32: November 8, 2013 ........ 120
        Elementary orthogonal projectors, 120. Elementary reflection, 121. Complementary Subspaces of V, 121. Projectors, 121.
   9.7  Lecture 33: November 11, 2013 ........ 122
        Projectors, 122. Representation of a projector, 123.
   9.8  Lecture 34: November 13, 2013 ........ 124
        Projectors, 124. Decompositions of R^n, 125. Range–Nullspace decomposition of A_{n×n}, 126.
   9.9  HW 7: Due November 22, 2013 ........ 126
   9.10 Lecture 35: November 15, 2013 ........ 128
        Range–Nullspace decomposition of A_{n×n}, 128. Corresponding factorization of A, 129.

10 Singular Value Decomposition ........ 131
   10.1 Lecture 35 (cont.) ........ 131
        Singular Value Decomposition, 131.
   10.2 Lecture 36: November 18, 2013 ........ 132
        Singular Value Decomposition, 132. Existence of the Singular Value Decomposition, 133.
   10.3 Lecture 37: November 20, 2013 ........ 136
        Review and correction from last time, 136. Singular Value Decomposition, 136. Geometric interpretation, 138.
   10.4 Lecture 38: November 22, 2013 ........ 139
        Review for Exam 2, 139. Norms, 139. More major topics, 140.
   10.5 HW 8: Due December 10, 2013 ........ 142
   10.6 Lecture 39: November 27, 2013 ........ 144
        Singular Value Decomposition, 144. SVD in Matlab, 145.

11 Additional Topics ........ 149
   11.1 Lecture 39 (cont.) ........ 149
        The Determinant, 149.
   11.2 Lecture 40: December 2, 2013 ........ 150
        Further details for class, 150. Diagonalizable Matrices, 150. Eigenvalues and eigenvectors, 150.

Index ........ 155
Other Contents ........ 157

UNIT 1  Introduction to Linear Algebra

1.1 Lecture 1: August 19, 2013

About the class

The textbook for the class will be Matrix Analysis and Applied Linear Algebra by Meyer. Another highly recommended text is Laub's Matrix Analysis for Scientists and Engineers.

Linear Systems

A linear system may be of the general form

    Ax = b.                                                          (1.1.1)

This may be represented in several equivalent ways. As a system of scalar equations,

    2x_1 +   x_2 - 3x_3 =  18,                                       (1.1.2a)
   -4x_1         + 5x_3 = -28,                                       (1.1.2b)
    6x_1 + 13x_2        =  37.                                       (1.1.2c)

This also may be put in matrix form,

    \begin{pmatrix} 2 & 1 & -3 \\ -4 & 0 & 5 \\ 6 & 13 & 0 \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
    \begin{pmatrix} 18 \\ -28 \\ 37 \end{pmatrix}.                   (1.1.3)

Finally, the third common form is vector form:

    \begin{pmatrix} 2 \\ -4 \\ 6 \end{pmatrix} x_1 +
    \begin{pmatrix} 1 \\ 0 \\ 13 \end{pmatrix} x_2 +
    \begin{pmatrix} -3 \\ 5 \\ 0 \end{pmatrix} x_3 =
    \begin{pmatrix} 18 \\ -28 \\ 37 \end{pmatrix}.                   (1.1.4)

[Figure 1.1. Finite difference approximation of a 1D boundary value problem: the solution y(t) sampled at grid points t_0, t_1, t_2, ..., t_n.]

Example: Application to boundary value problem

We will use finite difference approximations on a rectangular grid to solve the system

    -y''(t) = f(t),   for t \in [0, 1],                              (1.1.5)

with the boundary conditions

    y(0) = 0,                                                        (1.1.6a)
    y(1) = 0.                                                        (1.1.6b)

This is a 1D version of the general Laplace equation represented by

    -\Delta u = f,                                                   (1.1.7)

or, in more engineering/science form,

    -\nabla^2 u = f.                                                 (1.1.8)

The Laplace operator in Cartesian coordinates is

    \nabla^2 u = \nabla \cdot (\nabla u)                             (1.1.9a)
               = u_{xx} + u_{yy} + u_{zz}.                           (1.1.9b)

Finite Difference Approximation

Let t_j = j\Delta t, with j = 0, ..., N, and let y_j \approx y(t_j) denote the approximate values of the solution. Now we need to approximate the derivatives with discrete values of the variables. The forward difference approximation is

    y'(t_j) = \frac{y_{j+1} - y_j}{t_{j+1} - t_j},                   (1.1.10)

or

    y'(t_j) = \frac{y_{j+1} - y_j}{\Delta t}.                        (1.1.11)

The backward difference approximation is

    y'(t_j) = \frac{y_j - y_{j-1}}{\Delta t}.                        (1.1.12)

The centered difference approximation is

    y'(t_j) = \frac{y_{j+1} - y_{j-1}}{2\Delta t}.                   (1.1.13)

Each of these is a useful approximation to the first derivative; the error of each approximation is analyzed next.
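The contents list "Solution of the discretized equation" as the topic that follows: the discretized boundary value problem becomes a linear system Ax = b. The MATLAB sketch below is not taken from the notes; it is a minimal illustration that assumes the standard second-order centered approximation y''(t_j) ≈ (y_{j+1} - 2y_j + y_{j-1})/Δt² and an assumed test right-hand side f(t) = π² sin(πt), whose exact solution is y(t) = sin(πt).

% Minimal sketch (not from the notes): solve -y'' = f on [0,1], y(0) = y(1) = 0,
% using the centered second difference (y_{j+1} - 2*y_j + y_{j-1}) / dt^2.
N  = 50;                        % number of subintervals, so dt = 1/N
dt = 1/N;
t  = (0:N)'*dt;                 % grid points t_0, ..., t_N
f  = @(s) pi^2*sin(pi*s);       % assumed test right-hand side; exact solution y = sin(pi*t)

% Unknowns are the interior values y_1, ..., y_{N-1}.  The coefficient matrix is
% tridiagonal: 2/dt^2 on the diagonal, -1/dt^2 on the first sub- and superdiagonals.
e = ones(N-1, 1);
A = spdiags([-e, 2*e, -e], -1:1, N-1, N-1) / dt^2;
b = f(t(2:N));

y = [0; A\b; 0];                % solve the linear system and reattach the boundary values

max_err = max(abs(y - sin(pi*t)))   % decreases like O(dt^2) as N grows

Only the matrix changes if the grid or the boundary conditions change; the solve itself is a single backslash call, which is where the Gaussian elimination and LU factorization developed in Units 2 and 3 enter.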