
Linear Algebra
Jim Hefferon

[Cover art: the box formed by the column vectors (1,3) and (2,1), with determinant
|1 2; 3 1|; that box scaled by x, with determinant |x·1 2; x·3 1|; and the box
formed by (6,8) and (2,1), with determinant |6 2; 8 1|.]

Notation
  R                          real numbers
  N                          natural numbers: {0, 1, 2, ...}
  C                          complex numbers
  {... | ...}                set of ... such that ...
  <...>                      sequence; like a set but order matters
  V, W, U                    vector spaces
  ~v, ~w                     vectors
  ~0, ~0_V                   zero vector, zero vector of V
  B, D                       bases
  E_n = <~e_1, ..., ~e_n>    standard basis for R^n
  ~β, ~δ                     basis vectors
  Rep_B(~v)                  matrix representing the vector
  P_n                        set of n-th degree polynomials
  M_{n×m}                    set of n×m matrices
  [S]                        span of the set S
  M ⊕ N                      direct sum of subspaces
  V ≅ W                      isomorphic spaces
  h, g                       homomorphisms, linear maps
  H, G                       matrices
  t, s                       transformations; maps from a space to itself
  T, S                       square matrices
  Rep_{B,D}(h)               matrix representing the map h
  h_{i,j}                    matrix entry from row i, column j
  |T|                        determinant of the matrix T
  R(h), N(h)                 rangespace and nullspace of the map h
  R_∞(h), N_∞(h)             generalized rangespace and nullspace

Lower case Greek alphabet
  name      character    name      character    name      character
  alpha     α            iota      ι            rho       ρ
  beta      β            kappa     κ            sigma     σ
  gamma     γ            lambda    λ            tau       τ
  delta     δ            mu        μ            upsilon   υ
  epsilon   ε            nu        ν            phi       φ
  zeta      ζ            xi        ξ            chi       χ
  eta       η            omicron   o            psi       ψ
  theta     θ            pi        π            omega     ω

Cover. This is Cramer's Rule for the system x + 2y = 6, 3x + y = 8. The size
of the first box is the determinant shown (the absolute value of the size is
the area). The size of the second box is x times that, and equals the size of
the final box. Hence, x is the final determinant divided by the first
determinant.

Contents

Chapter One: Linear Systems  1
  I   Solving Linear Systems  1
      1 Gauss' Method  2
      2 Describing the Solution Set  11
      3 General = Particular + Homogeneous  20
  II  Linear Geometry of n-Space  32
      1 Vectors in Space  32
      2 Length and Angle Measures*  38
  III Reduced Echelon Form  46
      1 Gauss-Jordan Reduction  46
      2 Row Equivalence  52
  Topic: Computer Algebra Systems  62
  Topic: Input-Output Analysis  64
  Topic: Accuracy of Computations  68
  Topic: Analyzing Networks  72

Chapter Two: Vector Spaces  79
  I   Definition of Vector Space  80
      1 Definition and Examples  80
      2 Subspaces and Spanning Sets  91
  II  Linear Independence  102
      1 Definition and Examples  102
  III Basis and Dimension  113
      1 Basis  113
      2 Dimension  119
      3 Vector Spaces and Linear Systems  124
      4 Combining Subspaces*  131
  Topic: Fields  141
  Topic: Crystals  143
  Topic: Voting Paradoxes  147
  Topic: Dimensional Analysis  152

Chapter Three: Maps Between Spaces  159
  I   Isomorphisms  159
      1 Definition and Examples  159
      2 Dimension Characterizes Isomorphism  168
  II  Homomorphisms  176
      1 Definition  176
      2 Rangespace and Nullspace  183
  III Computing Linear Maps  195
      1 Representing Linear Maps with Matrices  195
      2 Any Matrix Represents a Linear Map*  205
  IV  Matrix Operations  212
      1 Sums and Scalar Products  212
      2 Matrix Multiplication  214
      3 Mechanics of Matrix Multiplication  222
      4 Inverses  231
  V   Change of Basis  238
      1 Changing Representations of Vectors  238
      2 Changing Map Representations  242
  VI  Projection  250
      1 Orthogonal Projection Into a Line*  250
      2 Gram-Schmidt Orthogonalization*  254
      3 Projection Into a Subspace*  260
  Topic: Line of Best Fit  269
  Topic: Geometry of Linear Maps  274
  Topic: Markov Chains  281
  Topic: Orthonormal Matrices  287

Chapter Four: Determinants  293
  I   Definition  294
      1 Exploration*  294
      2 Properties of Determinants  299
      3 The Permutation Expansion  303
      4 Determinants Exist*  312
  II  Geometry of Determinants  319
      1 Determinants as Size Functions  319
  III Other Formulas  326
      1 Laplace's Expansion*  326
  Topic: Cramer's Rule  331
  Topic: Speed of Calculating Determinants  334
  Topic: Projective Geometry  337

Chapter Five: Similarity  349
  I   Complex Vector Spaces  349
      1 Factoring and Complex Numbers; A Review*  350
      2 Complex Representations  351
  II  Similarity  353
      1 Definition and Examples  353
      2 Diagonalizability  355
      3 Eigenvalues and Eigenvectors  359
  III Nilpotence  367
      1 Self-Composition*  367
      2 Strings*  370
  IV  Jordan Form  381
      1 Polynomials of Maps and Matrices*  381
      2 Jordan Canonical Form*  388
  Topic: Method of Powers  401
  Topic: Stable Populations  405
  Topic: Linear Recurrences  407

Appendix  A-1
  Propositions  A-1
  Quantifiers  A-3
  Techniques of Proof  A-5
  Sets, Functions, and Relations  A-7

*Note: starred subsections are optional.

Chapter Four
Determinants

In the first chapter of this book we considered linear systems and we picked out the special case of systems with the same number of equations as unknowns, those of the form T~x = ~b where T is a square matrix. We noted a distinction between two classes of T's. While such systems may have a unique solution or no solutions or infinitely many solutions, if a particular T is associated with a unique solution in any system, such as the homogeneous system ~b = ~0, then T is associated with a unique solution for every ~b. We call such a matrix of coefficients 'nonsingular'. The other kind of T, where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, we call 'singular'.

Through the second and third chapters the value of this distinction has been a theme. For instance, we now know that nonsingularity of an n×n matrix T is equivalent to each of these:
  - a system T~x = ~b has a solution, and that solution is unique;
  - Gauss-Jordan reduction of T yields an identity matrix;
  - the rows of T form a linearly independent set;
  - the columns of T form a basis for R^n;
  - any map that T represents is an isomorphism;
  - an inverse matrix T^{-1} exists.

So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices, in this chapter we will usually simply say 'matrix' in place of 'square matrix'.) More precisely, we will develop infinitely many formulas, one for 1×1 matrices, one for 2×2 matrices, etc.
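Before developing those formulas, the equivalences above can be spot-checked on the very system from the cover. The following pure-Python sketch (the helper name `det2` is ours, not the book's notation) computes the 2×2 determinant, confirms nonsingularity, and recovers the unique solution by Cramer's Rule exactly as the cover describes.

```python
def det2(m):
    """ad - bc for a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

T = [[1, 2],
     [3, 1]]          # coefficient matrix of x + 2y = 6, 3x + y = 8
b = [6, 8]

d = det2(T)           # 1*1 - 2*3 = -5, nonzero, so T is nonsingular
assert d != 0

# Cramer's Rule: replace the first column of T by b to solve for x,
# the second column to solve for y.
x = det2([[b[0], T[0][1]], [b[1], T[1][1]]]) / d   # det [[6,2],[8,1]] / -5
y = det2([[T[0][0], b[0]], [T[1][0], b[1]]]) / d   # det [[1,6],[3,8]] / -5
print(x, y)           # 2.0 2.0
```

Since det(T) = -5 is nonzero, every right-hand side ~b gives a unique solution, matching the first equivalence in the list above.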
Of course, these formulas are related; that is, we will develop a family of formulas, a scheme that describes the formula for each size.

I Definition

For 1×1 matrices, determining nonsingularity is trivial.

    ( a )  is nonsingular iff a ≠ 0

The 2×2 formula came out in the course of developing the inverse.

    ( a  b )
    ( c  d )  is nonsingular iff ad - bc ≠ 0

The 3×3 formula can be produced similarly (see Exercise 9).

    ( a  b  c )
    ( d  e  f )  is nonsingular iff aei + bfg + cdh - hfa - idb - gec ≠ 0
    ( g  h  i )

With these cases in mind, we posit a family of formulas, a, ad - bc, etc. For each n the formula gives rise to a determinant function det_{n×n} : M_{n×n} → R such that an n×n matrix T is nonsingular if and only if det_{n×n}(T) ≠ 0. (We usually omit the subscript because if T is n×n then 'det(T)' could only mean 'det_{n×n}(T)'.)

I.1 Exploration

This subsection is optional. It briefly describes how an investigator might come to a good general definition, which is given in the next subsection.

The three cases above don't show an evident pattern to use for the general n×n formula. We may spot that the 1×1 term a has one letter, that the 2×2 terms ad and bc have two letters, and that the 3×3 terms aei, etc., have three letters. We may also observe that in those terms there is a letter from each row and column of the matrix, e.g., the letters in the cdh term

    ( .  .  c )
    ( d  .  . )
    ( .  h  . )

come one from each row and one from each column. But these observations perhaps seem more puzzling than enlightening. For instance, we might wonder why some of the terms are added while others are subtracted. A good problem solving strategy is to see what properties a solution must have and then search for
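The three explicit formulas above can be transcribed into code term by term. This sketch (again with a helper name of our own, `det3`) checks the 3×3 formula on a diagonal matrix, where only the aei term survives, and on a matrix with a dependent row, where the formula correctly reports singularity.

```python
def det3(m):
    """The 3x3 formula from the text, written out term by term."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

# Diagonal case: every term except aei contains an off-diagonal zero.
assert det3([[2, 0, 0],
             [0, 3, 0],
             [0, 0, 4]]) == 24

# Singular case: the third row is the sum of the first two, so the rows
# are linearly dependent, and the formula detects it with a zero.
assert det3([[1, 2, 3],
             [4, 5, 6],
             [5, 7, 9]]) == 0
```

Note how each product, such as c*d*h, draws one entry from each row and one from each column of the matrix, matching the observation above.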