CS321 Numerical Analysis

Lecture 5: Systems of Linear Equations
Professor Jun Zhang
Department of Computer Science, University of Kentucky, Lexington, KY 40506-0046

System of Linear Equations

A system of n linear equations in n unknowns has the form

    a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
    a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
    \vdots
    a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n

where the a_{ij} are coefficients, the x_j are unknowns, and the b_i are the right-hand sides. Written in compact form,

    \sum_{j=1}^{n} a_{ij} x_j = b_i,   i = 1, \dots, n.

The system can also be written in matrix form, Ax = b, where

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},
    x = [x_1, x_2, \dots, x_n]^T,   b = [b_1, b_2, \dots, b_n]^T.

An Upper Triangular System

An upper triangular system

    a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + \cdots + a_{1n} x_n = b_1
                 a_{22} x_2 + a_{23} x_3 + \cdots + a_{2n} x_n = b_2
                              a_{33} x_3 + \cdots + a_{3n} x_n = b_3
                                                        \vdots
                     a_{n-1,n-1} x_{n-1} + a_{n-1,n} x_n = b_{n-1}
                                               a_{nn} x_n = b_n

is much easier to solve. We compute

    x_n = b_n / a_{nn}

from the last equation, substitute its value into the other equations, and repeat the process:

    x_i = \frac{1}{a_{ii}} \Big( b_i - \sum_{j=i+1}^{n} a_{ij} x_j \Big),   i = n-1, n-2, \dots, 1.

Karl Friedrich Gauss (April 30, 1777 – February 23, 1855)
German mathematician and scientist

Gaussian Elimination

Linear systems are solved by Gaussian elimination, which repeatedly multiplies a row by a number and adds it to another row to eliminate a variable. For step k this amounts to, for each row i = k+1, \dots, n,

    a_{ij} \leftarrow a_{ij} - \frac{a_{ik}}{a_{kk}} a_{kj},   k \le j \le n,
    b_i \leftarrow b_i - \frac{a_{ik}}{a_{kk}} b_k.

After this step the variable x_k is eliminated from the (k+1)-th and later equations. Gaussian elimination reduces the matrix to upper triangular form, i.e., a_{ij} = 0 for all i > j. The solution of the upper triangular system is then easily obtained by a back substitution procedure.

(Illustration of Gaussian Elimination)

Back Substitution

The upper triangular system obtained by elimination is solved by back substitution. We compute

    x_n = b_n / a_{nn}

from the last equation, substitute its value into the other equations, and repeat:

    x_i = \frac{1}{a_{ii}} \Big( b_i - \sum_{j=i+1}^{n} a_{ij} x_j \Big),   i = n-1, n-2, \dots, 1.

Condition Number and Error

A quantity used to measure the quality of a matrix is the condition number, defined as

    \kappa(A) = \|A\| \, \|A^{-1}\|.

The condition number measures how errors in the data (in A or b) are amplified in the solution. If A has a large condition number, a small error in b may yield a large error in the solution x = A^{-1} b. Such a matrix is called ill-conditioned.

The error e is defined as the difference between the computed solution and the exact solution,

    e = x - \tilde{x}.

Since the exact solution is generally unknown, we measure the residual

    r = b - A \tilde{x}

as an indicator of the size of the error.

(Is this BMW ill-conditioned?)

Small Pivot

Consider

    \epsilon x_1 + x_2 = 1
             x_1 + x_2 = 2

for some small \epsilon. After one step of Gaussian elimination,

    \epsilon x_1 + x_2 = 1
    (1 - 1/\epsilon) x_2 = 2 - 1/\epsilon.

We have

    x_2 = \frac{2 - 1/\epsilon}{1 - 1/\epsilon},   x_1 = \frac{1 - x_2}{\epsilon}.

For very small \epsilon the computed result will be x_2 = 1 and x_1 = 0. The correct results are

    x_1 = \frac{1}{1 - \epsilon} \approx 1,   x_2 = \frac{1 - 2\epsilon}{1 - \epsilon} \approx 1.
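To make the elimination and back-substitution formulas concrete, here is a minimal Python sketch (an illustration added to these notes, not code from the lecture) of naive Gaussian elimination without pivoting. Run on the 2×2 small-pivot system with \epsilon = 10^{-20} in double precision, it reproduces the inaccurate answer x_1 = 0, x_2 = 1 discussed above.

```python
import numpy as np

def gaussian_eliminate(A, b):
    """Reduce Ax = b to an upper triangular system Ux = c (naive, no pivoting)."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    for k in range(n - 1):                 # eliminate x_k from equations k+1, ..., n
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier a_ik / a_kk
            A[i, k:] -= m * A[k, k:]       # a_ij <- a_ij - (a_ik / a_kk) a_kj
            b[i] -= m * b[k]               # b_i  <- b_i  - (a_ik / a_kk) b_k
    return A, b

def back_substitute(U, c):
    """Solve the upper triangular system Ux = c."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Small-pivot example: for eps = 1e-20 the computed solution is [0, 1]
# instead of the true solution, which is approximately [1, 1].
eps = 1e-20
A = [[eps, 1.0],
     [1.0, 1.0]]
b = [1.0, 2.0]
U, c = gaussian_eliminate(A, b)
print(back_substitute(U, c))
```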
Scaled Partial Pivoting

We need to choose as the pivot an element that is large relative to the other elements of its row. Let L = (l_1, l_2, \dots, l_n) be an index array of integers. We first compute an array of scale factors S = (s_1, s_2, \dots, s_n), where

    s_i = \max_{1 \le j \le n} |a_{ij}|,   1 \le i \le n.

The first pivot row i is chosen so that the ratio |a_{i1}| / s_i is the greatest. Suppose this index is l_1; then appropriate multiples of equation l_1 are subtracted from the other equations to eliminate x_1 from them. Suppose initially L = (l_1, l_2, \dots, l_n) = (1, 2, \dots, n). If our first choice is l_j, we interchange l_j and l_1 in the index array rather than actually interchanging the first and the l_j-th rows, to avoid moving data around in memory. The remaining subsystem uses the same scale factors.

Example

Straightforward Gaussian elimination does not work well (it is not robust) on

    \epsilon x_1 + x_2 = 1
             x_1 + x_2 = 2.

The scale factors are S = (1, 1). In the first step the ratios |a_{i1}| / s_i are (\epsilon, 1), so the second row is the pivot row. After eliminating x_1 from the first equation, we have

    (1 - \epsilon) x_2 = 1 - 2\epsilon
         x_1 + x_2 = 2.

It follows that

    x_2 = \frac{1 - 2\epsilon}{1 - \epsilon},   x_1 = 2 - x_2.

We obtain correct results by using the scaled partial pivoting strategy.

(Gaussian Elimination with Scaled Partial Pivoting)

Long Operation Count

We count the number of multiplications and divisions, ignoring additions and subtractions, in scaled Gaussian elimination. In the first step, finding a pivot costs n divisions. An additional n operations are needed to apply the multiplier from the pivot row in each of the n - 1 eliminations, costing n(n - 1) operations, so the total cost of this step is n^2 operations. The computation is then repeated on the remaining n - 1 equations. The total cost of Gaussian elimination with scaled partial pivoting is therefore

    n^2 + (n-1)^2 + \cdots + 4^2 + 3^2 + 2^2 = \frac{n(n+1)(2n+1)}{6} - 1 \approx \frac{n^3}{3}.

Back substitution costs n(n - 1)/2 operations.

Tridiagonal and Banded Systems

A banded system has a coefficient matrix such that a_{ij} = 0 if |i - j| \ge w. For a tridiagonal system, w = 2:

    \begin{bmatrix}
    d_1 & c_1 &        &         &         \\
    a_1 & d_2 & c_2    &         &         \\
        & a_2 & d_3    & c_3     &         \\
        &     & \ddots & \ddots  & \ddots  \\
        &     & a_{n-2} & d_{n-1} & c_{n-1} \\
        &     &         & a_{n-1} & d_n
    \end{bmatrix}
    \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix}
    =
    \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix}.

The general elimination procedure is

    d_i \leftarrow d_i - \frac{a_{i-1}}{d_{i-1}} c_{i-1},
    b_i \leftarrow b_i - \frac{a_{i-1}}{d_{i-1}} b_{i-1}.

The array c_i is not modified and no additional nonzeros are created, so the matrix can be stored in three vector arrays.

Tridiagonal Systems

The back substitution is straightforward:

    x_n = b_n / d_n,
    x_i = \frac{b_i - c_i x_{i+1}}{d_i},   i = n-1, \dots, 1.

No pivoting is performed; otherwise the procedure would be quite different because of fill-in (the array c would be modified).

Diagonal dominance: a matrix A = (a_{ij})_{n \times n} is diagonally dominant if

    |a_{ii}| > \sum_{j=1, j \ne i}^{n} |a_{ij}|,   1 \le i \le n.

For a diagonally dominant tridiagonal system no pivoting is needed, i.e., no division by zero will happen. We want to show that Gaussian elimination preserves diagonal dominance; for a tridiagonal matrix, dominance means |d_i| > |a_{i-1}| + |c_i|.

The new coefficient matrix has zero elements in place of the a_i's. The new diagonal elements are determined recursively as

    \hat{d}_1 = d_1,
    \hat{d}_i = d_i - \frac{a_{i-1}}{\hat{d}_{i-1}} c_{i-1},   2 \le i \le n.

We assume that |d_i| > |a_{i-1}| + |c_i| and want to show that |\hat{d}_i| > |c_i|. We use induction to prove the inequality. It is obviously true for i = 1, since \hat{d}_1 = d_1. If we assume that |\hat{d}_{i-1}| > |c_{i-1}|, we prove it for index i:

    |\hat{d}_i| = \Big| d_i - \frac{a_{i-1}}{\hat{d}_{i-1}} c_{i-1} \Big|
                \ge |d_i| - |a_{i-1}| \frac{|c_{i-1}|}{|\hat{d}_{i-1}|}
                > |d_i| - |a_{i-1}|
                > |c_i|.

It follows that the new diagonal entries will not be zero, so the Gaussian elimination procedure can be carried out without any problem.
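The tridiagonal elimination and back-substitution formulas above map directly onto three-array storage. The following Python sketch is an illustration under those assumptions (arrays a, d, c hold the sub-, main, and super-diagonal, matching the notation above; no pivoting, so the matrix is assumed diagonally dominant); it is not code from the lecture.

```python
import numpy as np

def solve_tridiagonal(a, d, c, b):
    """Solve a tridiagonal system given the sub-diagonal a (length n-1),
    main diagonal d (length n), super-diagonal c (length n-1), and
    right-hand side b (length n). No pivoting is performed."""
    d = np.asarray(d, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(d)
    # Forward elimination: d_i <- d_i - (a_{i-1}/d_{i-1}) c_{i-1}, same update for b_i.
    for i in range(1, n):
        m = a[i - 1] / d[i - 1]
        d[i] -= m * c[i - 1]
        b[i] -= m * b[i - 1]
    # Back substitution: x_n = b_n / d_n, then x_i = (b_i - c_i x_{i+1}) / d_i.
    x = np.zeros(n)
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

# Example: a diagonally dominant tridiagonal system whose exact solution is [1, 1, 1, 1].
a = np.array([-1.0, -1.0, -1.0])       # sub-diagonal
d = np.array([ 4.0,  4.0,  4.0, 4.0])  # main diagonal
c = np.array([-1.0, -1.0, -1.0])       # super-diagonal
b = np.array([ 3.0,  2.0,  2.0, 3.0])
print(solve_tridiagonal(a, d, c, b))
```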
Example of a Pentadiagonal Matrix

A pentadiagonal matrix is nearly tridiagonal: the only nonzero entries lie on the main diagonal and on the first two diagonals above and below it.

LU Factorization

As shown before, an n × n system of linear equations can be written in matrix form as Ax = b, where the coefficient matrix A has the form

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},

x is the unknown vector, and b is the known right-hand side vector. We also assume that A has full rank and that most entries of A are nonzero.

There are two special forms of matrices. One is the (unit) lower triangular matrix

    L = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ l_{21} & 1 & \cdots & 0 \\ \vdots & & \ddots & \\ l_{n1} & l_{n2} & \cdots & 1 \end{bmatrix},

and the other is the upper triangular matrix

    U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ & & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix}.

We want to find a pair of matrices L and U such that

    A = LU.

Example

Take the system of linear equations

    \begin{bmatrix} 6 & -2 & 2 & 4 \\ 12 & -8 & 6 & 10 \\ 3 & -13 & 9 & 3 \\ -6 & 4 & 1 & -18 \end{bmatrix}
    \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
    =
    \begin{bmatrix} 16 \\ 26 \\ -19 \\ -34 \end{bmatrix}.

The Gaussian elimination process finally yields the upper triangular system

    \begin{bmatrix} 6 & -2 & 2 & 4 \\ 0 & -4 & 2 & 2 \\ 0 & 0 & 2 & -5 \\ 0 & 0 & 0 & -3 \end{bmatrix}
    \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
    =
    \begin{bmatrix} 16 \\ -6 \\ -9 \\ -3 \end{bmatrix}.

This could be achieved by multiplying the original system by a matrix M, such that MAx = Mb. We want the matrix M to be special, so that MA is upper triangular:

    MA = U = \begin{bmatrix} 6 & -2 & 2 & 4 \\ 0 & -4 & 2 & 2 \\ 0 & 0 & 2 & -5 \\ 0 & 0 & 0 & -3 \end{bmatrix}.

The question is: can we find such a matrix M? Look at the first step of the Gaussian elimination, which produces

    \begin{bmatrix} 6 & -2 & 2 & 4 \\ 0 & -4 & 2 & 2 \\ 0 & -12 & 8 & 1 \\ 0 & 2 & 3 & -14 \end{bmatrix}
    \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
    =
    \begin{bmatrix} 16 \\ -6 \\ -27 \\ -18 \end{bmatrix}.

This step can be achieved by multiplying the original system with a lower triangular matrix M_1:

    M_1 A x = M_1 b.

Here the lower triangular matrix M_1 is

    M_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ -\tfrac{1}{2} & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}.

This matrix is nonsingular, because it is lower triangular with a main diagonal containing all 1's.
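As a closing illustration (not part of the lecture slides), the following Python sketch performs a Doolittle-style LU factorization without pivoting: U is the upper triangular matrix produced by Gaussian elimination, and L stores the elimination multipliers. Applied to the 4×4 matrix from the example above, the first column of L contains 2, 1/2, and -1, exactly the negatives of the subdiagonal entries of M_1, and the product LU recovers A.

```python
import numpy as np

def lu_factor(A):
    """Doolittle LU factorization without pivoting: returns a unit lower
    triangular L and an upper triangular U with A = L @ U."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # elimination multiplier a_ik / a_kk
            U[i, k:] -= L[i, k] * U[k, k:]  # zero out entry (i, k)
    return L, U

# The 4x4 matrix from the example above.
A = np.array([[ 6.0,  -2.0, 2.0,   4.0],
              [12.0,  -8.0, 6.0,  10.0],
              [ 3.0, -13.0, 9.0,   3.0],
              [-6.0,   4.0, 1.0, -18.0]])
L, U = lu_factor(A)
print(np.allclose(L @ U, A))   # True: the factors reproduce A
print(U)                       # the upper triangular matrix from the elimination
```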