
Intermediate Linear Algebra
Version 2.1
Christopher Griffin
© 2016-2020
Licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
With Contributions By: Elena Kosygina

Contents

List of Figures
About This Document

Chapter 1. Vector Space Essentials
  1. Goals of the Chapter
  2. Fields and Vector Spaces
  3. Matrices, Row and Column Vectors
  4. Linear Combinations, Span, Linear Independence
  5. Basis
  6. Dimension

Chapter 2. More on Matrices and Change of Basis
  1. Goals of the Chapter
  2. More Matrix Operations: A Review
  3. Special Matrices
  4. Matrix Inverse
  5. Linear Equations
  6. Elementary Row Operations
  7. Computing a Matrix Inverse with the Gauss-Jordan Procedure
  8. When the Gauss-Jordan Procedure does not yield a Solution to Ax = b
  9. Change of Basis as a System of Linear Equations
  10. Building New Vector Spaces

Chapter 3. Linear Transformations
  1. Goals of the Chapter
  2. Linear Transformations
  3. Properties of Linear Transforms
  4. Image and Kernel
  5. Matrix of a Linear Map
  6. Applications of Linear Transforms
  7. An Application of Linear Algebra to Control Theory

Chapter 4. Determinants, Eigenvalues and Eigenvectors
  1. Goals of the Chapter
  2. Permutations
  3. Determinant
  4. Properties of the Determinant
  5. Eigenvalues and Eigenvectors
  6. Diagonalization and Jordan's Decomposition Theorem

Chapter 5. Orthogonality
  1. Goals of the Chapter
  2. Some Essential Properties of Complex Numbers
  3. Inner Products
  4. Orthogonality and the Gram-Schmidt Procedure
  5. QR Decomposition
  6. Orthogonal Projection and Orthogonal Complements
  7. Orthogonal Complement
  8. Spectral Theorem for Real Symmetric Matrices
  9. Some Results on A^T A

Chapter 6. Principal Components Analysis and Singular Value Decomposition
  1. Goals of the Chapter
  2. Some Elementary Statistics with Matrices
  3. Projection and Dimensional Reduction
  4. An Extended Example
  5. Singular Value Decomposition

Chapter 7. Linear Algebra for Graphs and Markov Chains
  1. Goals of the Chapter
  2. Graphs, Multi-Graphs, Simple Graphs
  3. Directed Graphs
  4. Matrix Representations of Graphs
  5. Properties of the Eigenvalues of the Adjacency Matrix
  6. Eigenvector Centrality
  7. Markov Chains and Random Walks
  8. Page Rank
  9. The Graph Laplacian

Chapter 8. Linear Algebra and Systems of Differential Equations
  1. Goals of the Chapter
  2. Systems of Differential Equations
  3. A Solution to the Linear Homogeneous Constant Coefficient Differential Equation
  4. Three Examples
  5. Non-Diagonalizable Matrices

Bibliography

List of Figures

1.1 The subspace R^2 is shown within the space R^3.
2.1 (a) Intersection along a line of 3 planes of interest. (b) Illustration that the planes do not intersect in any common line.
2.2 The vectors for the change of basis example are shown. Note that v is expressed in terms of the standard basis in the problem statement.
2.3 The intersection of two sub-spaces in R^3 produces a new sub-space of R^3.
2.4 The sum of two sub-spaces of R^2 that share only 0 in common recreates R^2.
3.1 The image and kernel of f_A are illustrated in R^2.
3.2 Geometric transformations are shown in the figure above.
3.3 A mass moving on a spring is governed by Hooke's law, translated into the language of Newtonian physics as mẍ + kx = 0.
3.4 A mass moving on a spring given a push on a frictionless surface will oscillate indefinitely, following a sinusoid.
5.1 The orthogonal projection of the vector u onto the vector v.
5.2 The common plane shared by two vectors in R^3 is illustrated along with the triangle they create.
5.3 The orthogonal projection of the vector u onto the vector v.
5.4 A vector v generates the linear subspace W = span(v). Its orthogonal complement W^⊥ is shown when v ∈ R^3.
6.1 An extremely simple data set that lies along the line y − 4 = x − 3, in the direction of ⟨1, 1⟩ and containing the point (3, 4).
6.2 The one-dimensional nature of the data is clearly illustrated in this plot of the transformed data z.
6.3 A scatter plot of data drawn from a multivariable Gaussian distribution. The distribution density function contour plot is superimposed.
6.4 Computing Z = W^T Y^T creates a new uncorrelated data set that is centered at 0.
6.5 The data is shown projected onto a linear subspace (line). This is the best projection from 2 dimensions to 1 dimension under a certain measure of best.
6.6 A gray scale version of the image found at http://hanna-barbera.wikia.com/wiki/Scooby-Doo_(character)?file=Scoobydoo.jpg. Copyright Hanna-Barbera; used under the fair use clause of the Copyright Act.
6.7 The singular values of the image matrix corresponding to the image in Figure 6.6. Notice the steep decay of the singular values.
6.8 Reconstructed images from 15 and 50 singular values capture a substantial amount of detail for substantially smaller transmission sizes.
7.1 It is easier for explanation to represent a graph by a diagram in which vertices are represented by points (or squares, circles, triangles etc.) and edges are represented by lines connecting vertices.
7.2 A self-loop is an edge in a graph G that contains exactly one vertex. That is, an edge that is a one-element subset of the vertex set. Self-loops are illustrated by loops at the vertex in question.
7.3 (a) A directed graph. (b) A directed graph with a self-loop. In a directed graph, edges are directed; that is, they are ordered pairs of elements drawn from the vertex set. The ordering of the pair gives the direction of the edge.
7.4 A walk (a) and a cycle (b) are illustrated.
7.5 A connected graph (a) and a disconnected graph (b).
7.6 The adjacency matrix of a graph with n vertices is an n × n matrix with a 1 at element (i, j) if and only if there is an edge connecting vertex i to vertex j; otherwise element (i, j) is a zero.
7.7 A graph with 4 vertices and 5 edges. Intuitively, vertices 1 and 4 should have the same eigenvector centrality score as vertices 2 and 3.
7.8 A Markov chain is a directed graph to which we assign edge probabilities so that the sum of the probabilities of the out-edges at any vertex is always 1.
7.9 An induced Markov chain is constructed from a graph by replacing every edge with a pair of directed edges (going in opposite directions) and assigning to each edge leaving a vertex a probability equal to the reciprocal of that vertex's out-degree.
7.10 A set of triangle graphs.
7.11 A simple social network.
7.12 A graph partition using positive and negative entries of the Fiedler vector.
8.1 The solution to the differential equation can be thought of as a vector of fixed unit length rotating about the origin.
8.2 A plot of representative solutions for x(t) and y(t) for the simple homogeneous linear system in Expression 8.25.
8.3 Representative solution curves for Expression 8.39 showing sinusoidal exponential growth of the system.
8.4 Representative solution curves for Expression 8.42 showing exponential decay of the system.

About This Document

This is a set of lecture notes. They are given away freely to anyone who wants to use them. You know what they say about free things, so you might want to get yourself a book. I like Serge Lang's Linear Algebra, which is part of the Springer Undergraduate Texts in Mathematics series. If you don't like Lang's book, I also like Gilbert Strang's Linear Algebra and Its Applications. To be fair, I've only used the third edition of that book. The newer edition seems more like a tome, while the third edition was smaller and to the point.

The lecture notes were intended for SM361: Intermediate Linear Algebra, which is a breadth elective in the Mathematics Department at the United States Naval Academy. Since I use these notes while I teach, there may be typographical errors that I noticed in class but did not fix in the notes. If you see a typo, send me an e-mail and I'll add an acknowledgement. There may be many typos; that's why you should have a real textbook. (Because real textbooks never have typos, right?)

The material in these notes is largely based on Lang's excellent undergraduate linear algebra textbook. However, the applications are drawn from multiple sources outside of Lang. There are a few results that are stated but not proved in these notes:

• The formula det(AB) = det(A) det(B),
• The Jordan Normal Form Theorem, and
• The Perron-Frobenius theorem.

Individuals interested in using these notes as the middle part of a three-part Linear Algebra sequence should seriously consider proving these results in an advanced linear algebra course to complete the theoretical treatment begun here.
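As a small concrete illustration of the first of these unproved results (a numerical check for one pair of 2 × 2 matrices chosen arbitrarily here for the purpose, not a proof):

\[
A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 & 1 \\ 5 & 2 \end{pmatrix}, \qquad
AB = \begin{pmatrix} 10 & 5 \\ 20 & 11 \end{pmatrix},
\]
\[
\det(A) = 1 \cdot 4 - 2 \cdot 3 = -2, \qquad
\det(B) = 0 \cdot 2 - 1 \cdot 5 = -5, \qquad
\det(AB) = 10 \cdot 11 - 5 \cdot 20 = 10 = \det(A)\det(B).
\]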