Introduction to Linear Algebra
Teo Banica

Real matrices, The determinant, Complex matrices, Calculus and applications, Infinite matrices, Special matrices

08/20

Foreword. These are slides written in the Fall of 2020, on linear algebra. Presentations are available at my Youtube channel.

1. Real matrices and their properties
2. The determinant of real matrices
3. Complex matrices and diagonalization
4. Linear algebra and calculus questions
5. Infinite matrices and spectral theory
6. Special matrices and matrix tricks

Real matrices and their properties

Teo Banica, "Introduction to linear algebra", 1/6, 08/20

Rotations 1/3

Problem: what is the formula of the rotation of angle $t$?

Rotations 2/3

The points in the plane $\mathbb{R}^2$ can be represented as vectors $\begin{pmatrix} x \\ y \end{pmatrix}$. The $2\times 2$ matrices "act" on such vectors, as follows:
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax+by \\ cx+dy \end{pmatrix}$$
Many simple transformations (symmetries, projections, ...) can be written in this form. What about the rotation of angle $t$?

Rotations 3/3

A quick picture shows that we must have:
$$\begin{pmatrix} * & * \\ * & * \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \cos t \\ \sin t \end{pmatrix}$$
Also, by paying attention to positives and negatives:
$$\begin{pmatrix} * & * \\ * & * \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -\sin t \\ \cos t \end{pmatrix}$$
Thus, the matrix of our rotation can only be:
$$R_t = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}$$
By "linear algebra", this is indeed the correct answer.

Linear maps 1/4

Theorem. The maps $f:\mathbb{R}^2\to\mathbb{R}^2$ which are linear, in the sense that they map lines through $0$ to lines through $0$, are:
$$f\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax+by \\ cx+dy \end{pmatrix}$$
Remark. If we make the multiplication convention
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax+by \\ cx+dy \end{pmatrix}$$
the theorem says $f(v)=Av$, with $A$ being a $2\times 2$ matrix.

Linear maps 2/4

Examples. The identity and null maps are given by:
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
The projections on the horizontal and vertical axes are given by:
$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ 0 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ y \end{pmatrix}$$
The symmetry with respect to the $x=y$ diagonal is given by:
$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} y \\ x \end{pmatrix}$$
We have as well the rotation of angle $t\in\mathbb{R}$, studied before.

Linear maps 3/4

Theorem. The maps $f:\mathbb{R}^N\to\mathbb{R}^N$
which are linear, in the sense that they map lines through $0$ to lines through $0$, are:
$$f\begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} a_{11}x_1+\ldots+a_{1N}x_N \\ \vdots \\ a_{N1}x_1+\ldots+a_{NN}x_N \end{pmatrix}$$
Remark. With the matrix multiplication convention
$$\begin{pmatrix} a_{11} & \ldots & a_{1N} \\ \vdots & & \vdots \\ a_{N1} & \ldots & a_{NN} \end{pmatrix}\begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} a_{11}x_1+\ldots+a_{1N}x_N \\ \vdots \\ a_{N1}x_1+\ldots+a_{NN}x_N \end{pmatrix}$$
the theorem says $f(v)=Av$, with $A$ being an $N\times N$ matrix.

Linear maps 4/4

Example. Consider the all-1 matrix. This acts as follows:
$$\begin{pmatrix} 1 & \ldots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \ldots & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} x_1+\ldots+x_N \\ \vdots \\ x_1+\ldots+x_N \end{pmatrix}$$
But this formula can be written as follows:
$$\frac{1}{N}\begin{pmatrix} 1 & \ldots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \ldots & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} = \frac{x_1+\ldots+x_N}{N}\begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix}$$
And this latter map is the projection on the all-1 vector.

Theory 1/4

Definition. We can multiply $M\times N$ matrices with $N\times K$ matrices,
$$\begin{pmatrix} a_{11} & \ldots & a_{1N} \\ \vdots & & \vdots \\ a_{M1} & \ldots & a_{MN} \end{pmatrix}\begin{pmatrix} b_{11} & \ldots & b_{1K} \\ \vdots & & \vdots \\ b_{N1} & \ldots & b_{NK} \end{pmatrix}$$
the product being an $M\times K$ matrix, given by the formula
$$\begin{pmatrix} a_{11}b_{11}+\ldots+a_{1N}b_{N1} & \ldots & a_{11}b_{1K}+\ldots+a_{1N}b_{NK} \\ \vdots & & \vdots \\ a_{M1}b_{11}+\ldots+a_{MN}b_{N1} & \ldots & a_{M1}b_{1K}+\ldots+a_{MN}b_{NK} \end{pmatrix}$$
obtained via the rule "multiply rows by columns".

Theory 2/4

Better definition. The matrix multiplication is given by
$$(AB)_{ij} = \sum_k A_{ik}B_{kj}$$
with $A_{ij}$ being the entry on the $i$-th row and $j$-th column.

Theorem. The linear maps $f:\mathbb{R}^M\to\mathbb{R}^N$ are those of the form $f_A(v)=Av$, with $A$ being an $N\times M$ matrix.

Remark. Size check: $(N\times 1)=(N\times M)(M\times 1)$, ok.

Theory 3/4

Theorem. With the above convention $f_A(v)=Av$, we have
$$f_Af_B = f_{AB}$$
"the product of matrices corresponds to the composition of maps".

Theorem. A linear map $f:\mathbb{R}^N\to\mathbb{R}^N$ is invertible precisely when the matrix $A\in M_N(\mathbb{R})$ which produces it is invertible, and in this case we have:
$$(f_A)^{-1} = f_{A^{-1}}$$

Theory 4/4

Theorem. The inverses of the $2\times 2$ matrices are given by:
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
Proof. When $ad=bc$ the columns are proportional, so the matrix cannot be invertible.
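As an aside, the "multiply rows by columns" rule and the $2\times 2$ inverse formula just stated can be checked numerically. A minimal sketch in plain Python (the helper names `matmul` and `inverse2x2` are mine, not from the slides):

```python
# Matrix multiplication via the rule (AB)_ij = sum_k A_ik B_kj,
# used to check the 2x2 inverse formula (d, -b; -c, a) / (ad - bc).

def matmul(A, B):
    """Multiply an MxN matrix by an NxK matrix, rows times columns."""
    M, N, K = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(K)]
            for i in range(M)]

def inverse2x2(A):
    """Inverse of a 2x2 matrix, via the formula in the theorem."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad = bc, matrix is not invertible")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [3, 4]]
print(matmul(A, inverse2x2(A)))  # [[1.0, 0.0], [0.0, 1.0]], the identity
```

The product $A\cdot A^{-1}$ indeed comes out as the identity matrix, as the theorem predicts.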
When $ad-bc\neq 0$, let us solve:
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad-bc}\begin{pmatrix} * & * \\ * & * \end{pmatrix}$$
We must solve the following equations:
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} * & * \\ * & * \end{pmatrix} = \begin{pmatrix} ad-bc & 0 \\ 0 & ad-bc \end{pmatrix}$$
But this leads to the formula in the statement.

Eigenvectors 1/4

Definition. Let $A\in M_N(\mathbb{R})$ be a square matrix, and assume that $A$ multiplies by $\lambda\in\mathbb{R}$ in the direction of a vector $v\in\mathbb{R}^N$:
$$Av = \lambda v$$
In this case, we say that:
(1) $v\in\mathbb{R}^N$ is an eigenvector of $A$.
(2) $\lambda\in\mathbb{R}$ is the corresponding eigenvalue.

Eigenvectors 2/4

Examples. The identity has all vectors as eigenvectors, with $\lambda=1$:
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}$$
The same goes for the null matrix, with $\lambda=0$ this time:
$$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
For the projection on the horizontal axis, $P\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ 0 \end{pmatrix}$, we have:
$$Pv = \lambda v \iff v = \begin{pmatrix} 0 \\ y \end{pmatrix},\ \lambda = 0 \quad\text{or}\quad v = \begin{pmatrix} x \\ 0 \end{pmatrix},\ \lambda = 1$$
A similar result holds for the projection on the vertical axis.

Eigenvectors 3/4

More examples. For the symmetry $S\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} y \\ x \end{pmatrix}$, we have:
$$Sv = \lambda v \iff v = \begin{pmatrix} x \\ x \end{pmatrix},\ \lambda = 1 \quad\text{or}\quad v = \begin{pmatrix} x \\ -x \end{pmatrix},\ \lambda = -1$$
For the transformation $T\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} y \\ 0 \end{pmatrix}$ we have:
$$Tv = \lambda v \iff v = \begin{pmatrix} x \\ 0 \end{pmatrix},\ \lambda = 0$$
For the rotation of angle $t\neq 0,\pi$, we must have $v=0$, $\lambda=0$: such a rotation has no real eigenvectors. As for $t=\pi$, here $R_\pi = -1_2$, so all vectors are eigenvectors, with $\lambda=-1$.

Eigenvectors 4/4

Definition. We say that a matrix $A\in M_N(\mathbb{R})$ is diagonalizable if it has $N$ eigenvectors $v_1,\ldots,v_N$ which form a basis of $\mathbb{R}^N$.

Remark. When $A$ is diagonalizable, in that basis we can write:
$$A = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_N \end{pmatrix}$$
This means that we have $A = PDP^{-1}$, with $D$ diagonal.

Problems. Which matrices are diagonalizable? And, how to diagonalize them?

The determinant of real matrices

Teo Banica, "Introduction to linear algebra", 2/6, 08/20

Definition 1/3

Definition. Associated to any vectors $v_1,\ldots,v_N\in\mathbb{R}^N$ is the volume
$$\det{}^+(v_1 \ldots v_N) = \mathrm{vol}\,\langle v_1,\ldots,v_N\rangle$$
of the parallelepiped made by these vectors.

Remark. This notion is useful, for instance because $v_1,\ldots,v_N$ are linearly dependent precisely when $\det^+(v_1\ldots v_N)=0$.

Definition 2/3

Theorem. In 2 dimensions we have the formula
$$\det{}^+\begin{pmatrix} a & b \\ c & d \end{pmatrix} = |ad-bc|$$
valid for any two vectors $\begin{pmatrix} a \\ c \end{pmatrix}, \begin{pmatrix} b \\ d \end{pmatrix}\in\mathbb{R}^2$.

Proof.
We must show that the area of the parallelogram formed by the vectors $\begin{pmatrix} a \\ c \end{pmatrix}, \begin{pmatrix} b \\ d \end{pmatrix}$ equals the quantity $|ad-bc|$. But this latter quantity is a difference of areas of two rectangles, and the verification can be done in "puzzle" style.

Comment. This is nice, but with $ad-bc$ itself as "answer", which is linear in $a,b,c,d$, it would be even nicer.

Definition 3/3

Convention. A system of vectors $v_1,\ldots,v_N\in\mathbb{R}^N$ is called:
(1) Oriented ($+$), if one can pass from the standard basis to it.
(2) Unoriented ($-$), otherwise.

Definition. Associated to $v_1,\ldots,v_N\in\mathbb{R}^N$ is the signed volume
$$\det(v_1 \ldots v_N) = \mathrm{vol}^\pm\langle v_1,\ldots,v_N\rangle$$
of the parallelepiped made by these vectors.

Remark. We have $\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad-bc$, which is nice.

Properties 1/4

Notation. Given a matrix $A\in M_N(\mathbb{R})$, we write $\det A$, or just $|A|$, for the determinant of the system of column vectors of $A$.

Notation. Given a linear map, written as $f(v)=Av$, we call the number $\det A$ the "inflation coefficient" of $f$.

Remark. The inflation coefficient of $f$ is the signed volume of the image under $f$ of the unit cube in $\mathbb{R}^N$.

Properties 2/4

Theorem. The determinant $\det A$ of the matrices $A\in M_N(\mathbb{R})$ has the following properties:
(1) It is a linear function of the columns of $A$.
(2) We have $\det(AB) = \det A\cdot\det B$.
(3) We have $\det(AB) = \det(BA)$.

Proof. (1) By doing some geometry, we obtain indeed:
$$\det(u+v,\{w_i\}) = \det(u,\{w_i\}) + \det(v,\{w_i\})$$
$$\det(\lambda u,\{w_i\}) = \lambda\det(u,\{w_i\})$$
(2) This follows from $f_{AB} = f_Af_B$, by looking at "inflation".
(3) Follows from (2), both quantities being $\det A\cdot\det B$.

Properties 3/4

Theorem. Assuming that a matrix $A\in M_N(\mathbb{R})$ is diagonalizable, with eigenvalues $\lambda_1,\ldots,\lambda_N$, we have:
$$\det A = \lambda_1\ldots\lambda_N$$
Proof. This is clear from the "inflation" viewpoint, because in the basis formed by the eigenvectors $v_1,\ldots,v_N$, we have:
$$f_A(v_i) = \lambda_iv_i$$
Alternatively, $A = PDP^{-1}$ with $D = \mathrm{diag}(\lambda_1,\ldots,\lambda_N)$, so by (3) above,
$$\det A = \det(PDP^{-1}) = \det(DP^{-1}\cdot P) = \det D$$
and by linearity in each column, $\det D = \lambda_1\ldots\lambda_N\cdot\det(1_N) = \lambda_1\ldots\lambda_N$.
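Both facts, the multiplicativity $\det(AB)=\det A\cdot\det B$ and the eigenvalue formula $\det A=\lambda_1\ldots\lambda_N$, can be illustrated numerically. A small sketch in Python, assuming NumPy is available:

```python
# Numerical illustration of det(AB) = det(A) det(B), and of
# det(A) = product of the eigenvalues, on random 4x4 matrices.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Multiplicativity of the determinant
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))

# Determinant as the product of the eigenvalues (the eigenvalues of a real
# matrix may be complex, but they come in conjugate pairs, so the imaginary
# parts cancel in the product)
eigenvalues = np.linalg.eigvals(A)
print(np.isclose(np.prod(eigenvalues).real, np.linalg.det(A)))
```

Both checks print `True`. Note that a random real matrix need not be diagonalizable over $\mathbb{R}$; the eigenvalue formula still holds, with complex eigenvalues, as discussed later in the complex matrices chapter.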
Properties 4/4

Theorem. We have the following formula, for any $\lambda\in\mathbb{R}$:
$$\det(u, v, \{w_i\}_i) = \det(u - \lambda v, v, \{w_i\}_i)$$
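This column-operation invariance can likewise be checked numerically; a quick sketch in Python, again assuming NumPy:

```python
# Check that subtracting lambda times one column from another column
# leaves the determinant unchanged, as in the theorem above.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
lam = 2.5

B = A.copy()
B[:, 0] -= lam * B[:, 1]   # replace column u by u - lambda v

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # the two agree
```

This is exactly the operation underlying Gaussian elimination, which is why row and column reduction can be used to compute determinants.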