
John Nachbar
Washington University
March 25, 2018

A Quick Introduction to Linear Algebra, Topology, and Multivariate Calculus¹

1 Vector Spaces and Linear Algebra.

1.1 Overview.

Definition 1. A set $V \subseteq \mathbb{R}^N$ is a vector space iff the following two properties hold.

1. For any $x, \hat{x} \in V$, $x + \hat{x} \in V$.
2. For any $x \in V$, for any $\gamma \in \mathbb{R}$, $\gamma x \in V$.

Because $\gamma$ can equal 0, we always have that the origin $(0, \ldots, 0)$ is in $V$. I will write the origin in $\mathbb{R}^N$ as simply "0," but to avoid confusion with the real number 0, some authors write the origin in $\mathbb{R}^N$ as, say, $\theta$.

A vector space is a line, plane, or higher dimensional analog thereof, through the origin. Thus, for example, for $x \in \mathbb{R}$, the graph of the line $f(x) = ax$ is a vector space in $\mathbb{R}^2$.

On the other hand, the graph of $\hat{f}(x) = ax + b$, with $b \neq 0$, is not a vector space because the graph does not go through the origin in $\mathbb{R}^2$. The graph of $\hat{f}(x)$ is instead an example of a linear manifold. A linear manifold is the result of taking a vector space and shifting it in a parallel fashion away from the origin.

$\mathbb{R}^N$ itself is a vector space. Thus it is also common to see a vector space $V \subseteq \mathbb{R}^N$ called a vector subspace of $\mathbb{R}^N$.

1.2 Spanning, Linear Independence, and Bases

In the example above, the vector space $V$ given by the graph of $f(x) = ax$ can also be represented in the form $V = \{(x, y) \in \mathbb{R}^2 : \text{there is a } \gamma \in \mathbb{R} \text{ such that } (x, y) = \gamma(1, a)\}$. The vector $(1, a) \in V$ is said to span $V$. More generally, we have the following. As a matter of notation, $s^t$ denotes the vector $s^t = (s^t_1, \ldots, s^t_N) \in \mathbb{R}^N$.

Definition 2. Given a vector space $V \subseteq \mathbb{R}^N$ and a set of $T$ vectors $S = \{s^1, \ldots, s^T\}$, all in $V$, $S$ spans $V$ iff for any $x \in V$ there exist $\gamma_1, \ldots, \gamma_T$, all in $\mathbb{R}$, such that
$$x = \gamma_1 s^1 + \cdots + \gamma_T s^T.$$

¹ cbna. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.

In the example, note that although $\{(1, a)\}$ spans $V$, so does $\{(1, a), (2, 2a)\}$.
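The one-vector spanning example above can be checked numerically. The sketch below (the helper name `in_span_of_single` is my own, not from the notes) tests whether a point $(x, y)$ lies in the span of a single vector $(1, a)$, i.e. whether $(x, y) = \gamma(1, a)$ for some real $\gamma$:

```python
# Minimal sketch: is x in the span of the single vector v? That is,
# does there exist a scalar gamma with x = gamma * v?

def in_span_of_single(v, x, tol=1e-12):
    """Return (True, gamma) if x = gamma * v for some scalar gamma."""
    # Solve for gamma using any nonzero coordinate of v,
    # then verify every coordinate agrees.
    for i, vi in enumerate(v):
        if abs(vi) > tol:
            gamma = x[i] / vi
            ok = all(abs(gamma * vj - xj) <= tol for vj, xj in zip(v, x))
            return ok, gamma
    # v is the zero vector: its span is just {0}.
    return all(abs(xi) <= tol for xi in x), 0.0

a = 3.0
print(in_span_of_single((1.0, a), (2.0, 6.0)))  # (2, 6) = 2 * (1, 3), so True
print(in_span_of_single((1.0, a), (2.0, 7.0)))  # not on the line y = 3x
```

The same check, run with the vector $(2, 2a)$ instead of $(1, a)$, illustrates why adding $(2, 2a)$ to the spanning set is redundant: it spans exactly the same line.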
In particular, it is always possible to take $\gamma_2 = 0$, which puts us back in the previous case. Including $(2, 2a)$ in the spanning set when we already have $(1, a)$ is redundant. We are interested in spanning by sets of vectors that are minimal in the sense of not having any such redundancies.

Given a set $S = \{s^1, \ldots, s^T\}$ of $T$ vectors, there is a redundancy if it is possible to write one of the vectors, say $s^1$, as a linear combination of the other vectors,
$$s^1 = \gamma_2 s^2 + \cdots + \gamma_T s^T.$$
This can be rewritten as
$$-s^1 + \gamma_2 s^2 + \cdots + \gamma_T s^T = 0.$$
In the example above, I could write either $(1, a) = (1/2)(2, 2a)$ or $(2, 2a) = 2(1, a)$. This motivates the following definition.

Definition 3. A set $S = \{s^1, \ldots, s^T\}$ of $T$ vectors in $\mathbb{R}^N$ is linearly dependent if there exist $T$ numbers $\gamma_1, \ldots, \gamma_T$, at least one not equal to zero, such that
$$\gamma_1 s^1 + \cdots + \gamma_T s^T = 0.$$
If $S$ is not linearly dependent then it is linearly independent. Equivalently, $S$ is linearly independent iff whenever $\gamma_1 s^1 + \cdots + \gamma_T s^T = 0$, $\gamma_1 = \cdots = \gamma_T = 0$.

In particular, if $\gamma_1 s^1 + \cdots + \gamma_T s^T = 0$ and, say, $\gamma_1 \neq 0$, then $s^1$ is redundant in the sense that $s^1 = -(1/\gamma_1)(\gamma_2 s^2 + \cdots + \gamma_T s^T)$.

If $S$ contains the origin as one of its vectors then it is automatically linearly dependent. In particular, if $S$ contains only the origin then it is linearly dependent. Specializing even further, in the case $N = 1$, the "matrix" $[0]$ is linearly dependent; on the other hand, the "matrix" $[1]$ is linearly independent.

Even if $S$ is linearly dependent, some subset of $S$ may be linearly independent. And when there is one linearly independent subset of $S$, there is often more than one. For example, if $S = \{(1, a), (2, 2a)\}$ then $S$ is linearly dependent (take $\gamma_1 = -2$, $\gamma_2 = 1$), but both $\hat{S} = \{(1, a)\}$ and $\tilde{S} = \{(2, 2a)\}$ are linearly independent.

Definition 4. $S$ is a basis for $V$ iff $S$ is linearly independent and spans $V$. Except for the trivial case $V = \{0\}$, there will be many (uncountably infinitely many, in fact) bases.
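For two vectors in $\mathbb{R}^2$, linear dependence can be tested directly: $\{s^1, s^2\}$ is linearly dependent exactly when the determinant of the $2 \times 2$ matrix with those vectors as columns is zero. A small sketch (the helper name is my own):

```python
# Two vectors in R^2 are linearly dependent iff the determinant of the
# 2x2 matrix with those vectors as columns is zero.

def dependent_2d(s1, s2, tol=1e-12):
    """True iff {s1, s2} is linearly dependent in R^2."""
    det = s1[0] * s2[1] - s1[1] * s2[0]
    return abs(det) <= tol

a = 5.0
print(dependent_2d((1.0, a), (2.0, 2 * a)))  # True: (2, 2a) = 2 * (1, a)
print(dependent_2d((1.0, 0.0), (0.0, 1.0)))  # False: the standard basis of R^2
```

This matches the example in the text: with $\gamma_1 = -2$, $\gamma_2 = 1$, we get $-2(1, a) + (2, 2a) = 0$.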
In the example above, $\{(1, a)\}$ is a basis but so is $\{(2, 2a)\}$ and so is $\{(-1, -a)\}$. One can show that if there is a basis for $V$ with $T$ vectors then every basis for $V$ has exactly $T$ vectors. This allows us to define the dimension of $V$.

Definition 5. If $V$ is a vector space then the dimension of $V$ is $T$ iff there is a basis for $V$ with $T$ vectors.

In particular, the dimension of $\mathbb{R}^N$ is $N$, because the $N$ unit vectors $e^1 = (1, 0, \ldots, 0)$, $e^2 = (0, 1, 0, \ldots, 0)$, and so on are a basis for $\mathbb{R}^N$. This basis is called the standard basis.

Finally, note that if $V$ is a vector space and $S$ is a basis for $V$ then any $x \in V$ is uniquely represented in the form
$$x = \gamma_1 s^1 + \cdots + \gamma_T s^T.$$
To see this, suppose that we also have
$$x = \hat{\gamma}_1 s^1 + \cdots + \hat{\gamma}_T s^T.$$
Then setting these equal and rearranging,
$$0 = (\gamma_1 - \hat{\gamma}_1) s^1 + \cdots + (\gamma_T - \hat{\gamma}_T) s^T.$$
Since $S$ is linearly independent, all of the $\gamma_t - \hat{\gamma}_t$ must equal 0, which implies that for all $t$, $\hat{\gamma}_t = \gamma_t$.

1.3 Linear Functions and Matrices.

Definition 6. A function $f : \mathbb{R}^N \to \mathbb{R}^M$ is linear iff the following hold.

1. For any $x, \hat{x} \in \mathbb{R}^N$, $f(x + \hat{x}) = f(x) + f(\hat{x})$.
2. For any $x \in \mathbb{R}^N$, $\gamma \in \mathbb{R}$, $f(\gamma x) = \gamma f(x)$.

When $M > 1$, linear functions are often called linear maps. Map and function mean the same thing here.

Setting $\gamma = 0$ implies that if $f$ is linear then $f(0) = 0$. Thus, in the examples above, $f(x) = ax$ is linear but $\hat{f}(x) = ax + b$ is not when $b \neq 0$ ($\hat{f}$ is affine).

A fundamental fact is that a function $f$ is linear iff it can be represented in matrix form: there is an $M \times N$ matrix $A$ (note the dimensions) such that for any $x \in \mathbb{R}^N$,
$$f(x) = Ax.$$
To see that this is true, note that for any $x \in \mathbb{R}^N$, $x = x_1 e^1 + \cdots + x_N e^N$, where $e^n$ is the unit vector with a 1 in coordinate $n$ and 0s everywhere else. Then since $f$ is linear,
$$f(x) = x_1 f(e^1) + \cdots + x_N f(e^N).$$
Let $a^n = f(e^n)$. Let $A$ be the $M \times N$ matrix in which column $n$ is $a^n$. Then
$$f(x) = Ax.$$
In our simple example in which $f(x) = ax$, $f(1) = a$ and so $A = [a]$.
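The construction above is completely mechanical: apply $f$ to each unit vector and stack the results as columns. The sketch below carries it out for a small example map of my own choosing (the particular $f$ is an illustration, not from the notes):

```python
# Recover the matrix A of a linear map f : R^N -> R^M by applying f to the
# unit vectors e^1, ..., e^N; column n of A is f(e^n).

def f(x):
    # An example linear map R^2 -> R^2: f(x1, x2) = (x1 + 2*x2, 3*x1).
    return [x[0] + 2 * x[1], 3 * x[0]]

N = 2
unit = lambda n: [1.0 if i == n else 0.0 for i in range(N)]
columns = [f(unit(n)) for n in range(N)]  # column n is a^n = f(e^n)
# Assemble A row by row from the columns (A is M x N).
A = [[columns[n][m] for n in range(N)] for m in range(len(columns[0]))]

def matvec(A, x):
    """Matrix-vector product Ax."""
    return [sum(A[m][n] * x[n] for n in range(len(x))) for m in range(len(A))]

x = [4.0, 5.0]
print(f(x))          # [14.0, 12.0]
print(matvec(A, x))  # [14.0, 12.0] -- the same: Ax = f(x)
```

Running the two print statements on any $x$ gives identical output, which is exactly the claim $f(x) = Ax$.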
1.4 The Fundamental Spaces of a Linear Function

Given an $M \times N$ matrix $A$, let $a^n$ denote column $n$ of $A$. Then for the linear function $f(x) = Ax$,
$$f(x) = x_1 a^1 + \cdots + x_N a^N.$$
In words, this says that $f(x)$ is in the vector space in $\mathbb{R}^M$ that is spanned by the columns of $A$. This space is called the column space.

Similarly, one can consider the vector space spanned in $\mathbb{R}^N$ by the rows of $A$, considered as vectors (equivalently, consider the columns of $A'$, the transpose of $A$).

Finally, let $K(A)$ be the set of points $x$ such that
$$Ax = 0.$$
$K(A)$ is the kernel of $A$. (It is also called the null space of $A$.) It is easy to verify that $K(A)$ is a vector space in $\mathbb{R}^N$.

1.5 The Fundamental Dimensionality Theorem.

One can prove the following theorem.

Theorem 1 (The Dimension Theorem). Let $A$ be an $M \times N$ matrix. The dimension of the column space of $A$ plus the dimension of $K(A)$ equals $N$.

One consequence of the Dimension Theorem is that, with some additional work, one can show that, for any given matrix, the maximum number of independent columns (i.e., the number of columns in the largest independent subset of columns) equals the maximum number of independent rows (i.e., the maximum number of independent columns of $A'$). This number is called the rank of $A$. In particular, this says that the column space of $A$ and the row space of $A$ have the same dimension. And this says that the dimension of $K(A)$ is $N$ minus the rank of $A$.

The rank of $A$ cannot exceed $\min\{M, N\}$ but could be strictly less. For example, if $N = M = 2$ and
$$A = \begin{bmatrix} 1 & 2 \\ a & 2a \end{bmatrix},$$
then the rank is 1.

A matrix $A$ has full rank iff its rank is $\min\{M, N\}$. Otherwise, $A$ is singular. As discussed below, the linear function $f(x) = Ax$ is one-to-one, and hence invertible, iff $A$ has full rank.

1.6 Vector Subspaces Revisited.

Consider any $M \times N$ matrix $A$. Suppose that $M < N$ and that $A$ has full rank, which is $M$. Then by the Dimension Theorem, the vector space $K(A)$ has dimension $N - M$.
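The Dimension Theorem can be checked on the rank-1 example above. A rough sketch (Gaussian elimination with partial pivoting; one standard way to compute rank, not the method the notes prescribe): the rank is the number of pivots, and the kernel then has dimension $N$ minus the rank.

```python
# Compute the rank of a matrix by Gaussian elimination; by the Dimension
# Theorem, dim K(A) = N - rank(A).

def rank(A, tol=1e-9):
    A = [row[:] for row in A]  # work on a copy
    M, N = len(A), len(A[0])
    r, col = 0, 0
    while r < M and col < N:
        # Partial pivoting: pick the largest entry in this column.
        pivot = max(range(r, M), key=lambda i: abs(A[i][col]))
        if abs(A[pivot][col]) <= tol:
            col += 1          # no pivot in this column; move on
            continue
        A[r], A[pivot] = A[pivot], A[r]
        # Eliminate entries below the pivot.
        for i in range(r + 1, M):
            factor = A[i][col] / A[r][col]
            for j in range(col, N):
                A[i][j] -= factor * A[r][j]
        r += 1
        col += 1
    return r  # number of pivots = rank

a = 3.0
A = [[1.0, 2.0], [a, 2 * a]]   # the rank-1 example from the text
N = 2
print(rank(A))      # 1
print(N - rank(A))  # 1 = dim K(A)
```

Indeed $K(A)$ here is the line spanned by $(2, -1)$ (since $1 \cdot 2 + 2 \cdot (-1) = 0$ and $a \cdot 2 + 2a \cdot (-1) = 0$), a one-dimensional subspace, consistent with the theorem.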