Abstract Vector Spaces, Linear Transformations, and Their Coordinate Representations
Contents

1 Vector Spaces
  1.1 Definitions
    1.1.1 Basics
    1.1.2 Linear Combinations, Spans, Bases, and Dimensions
    1.1.3 Sums and Products of Vector Spaces and Subspaces
  1.2 Basic Vector Space Theory
2 Linear Transformations
  2.1 Definitions
    2.1.1 Linear Transformations, or Vector Space Homomorphisms
  2.2 Basic Properties of Linear Transformations
  2.3 Monomorphisms and Isomorphisms
  2.4 Rank-Nullity Theorem
3 Matrix Representations of Linear Transformations
  3.1 Definitions
  3.2 Matrix Representations of Linear Transformations
4 Change of Coordinates
  4.1 Definitions
  4.2 Change of Coordinate Maps and Matrices

1 Vector Spaces

1.1 Definitions

1.1.1 Basics

A vector space (linear space) $V$ over a field $F$ is a set $V$ on which the operations of addition, $+ : V \times V \to V$, and left $F$-action or scalar multiplication, $\cdot : F \times V \to V$, satisfy: for all $x, y, z \in V$ and $a, b \in F$, with $1$ the multiplicative identity of $F$,

VS1  $x + y = y + x$
VS2  $(x + y) + z = x + (y + z)$
VS3  $\exists\, 0 \in V$ such that $x + 0 = x$
VS4  $\forall x \in V\ \exists\, y \in V$ such that $x + y = 0$
VS5  $1x = x$
VS6  $(ab)x = a(bx)$
VS7  $a(x + y) = ax + ay$
VS8  $(a + b)x = ax + bx$

Axioms VS1–VS4 say precisely that $(V, +)$ is an abelian group.

If $V$ is a vector space over $F$, then a subset $W \subseteq V$ is called a subspace of $V$ if $W$ is a vector space over the same field $F$ under the restricted operations $+|_{W \times W}$ and $\cdot|_{F \times W}$.

1.1.2 Linear Combinations, Spans, Bases, and Dimensions

If $\emptyset \neq S \subseteq V$, $v_1, \dots, v_n \in S$, and $a_1, \dots, a_n \in F$, then a linear combination of $v_1, \dots, v_n$ is the finite sum
\[ a_1 v_1 + \cdots + a_n v_n \tag{1.1} \]
which is a vector in $V$. The $a_i \in F$ are called the coefficients of the linear combination. If $a_1 = \cdots = a_n = 0$, then the linear combination is said to be trivial. In particular, considering the special case of $0 \in V$, the zero vector, we note that $0$ may always be represented as a linear combination of any vectors $u_1, \dots, u_n \in V$,
\[ 0u_1 + \cdots + 0u_n = 0 \]
where the coefficients on the left are $0 \in F$ and the right-hand side is $0 \in V$. This representation is called the trivial representation of $0$ by $u_1, \dots, u_n$. If, on the other hand, there are vectors $u_1, \dots, u_n \in V$ and scalars $a_1, \dots, a_n \in F$ with at least one $a_i \neq 0$ such that
\[ a_1 u_1 + \cdots + a_n u_n = 0 \]
then that linear combination is called a nontrivial representation of $0$.

Using linear combinations we can generate subspaces, as follows. If $S \neq \emptyset$ is a subset of $V$, then the span of $S$ is given by
\[ \operatorname{span}(S) := \{ v \in V \mid v \text{ is a linear combination of vectors in } S \} \tag{1.2} \]
The span of $\emptyset$ is by definition
\[ \operatorname{span}(\emptyset) := \{ 0 \} \tag{1.3} \]
If $\operatorname{span}(S) = V$, then $S$ is said to generate or span $V$, and to be a generating set or spanning set for $V$.

A nonempty subset $S$ of a vector space $V$ is said to be linearly independent if, taking any finite number of distinct vectors $u_1, \dots, u_n \in S$, we have for all $a_1, \dots, a_n \in F$ that
\[ a_1 u_1 + a_2 u_2 + \cdots + a_n u_n = 0 \implies a_1 = \cdots = a_n = 0 \]
That is, $S$ is linearly independent if the only representation of $0 \in V$ by vectors in $S$ is the trivial one.
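For concreteness, here is a small worked example (ours, not part of the original notes). In $V = \mathbb{R}^2$ over $F = \mathbb{R}$, the set $S = \{(1, 1), (1, -1)\}$ is linearly independent: if
\[ a_1(1, 1) + a_2(1, -1) = (a_1 + a_2,\ a_1 - a_2) = (0, 0) \]
then $a_1 + a_2 = 0$ and $a_1 - a_2 = 0$, which force $a_1 = a_2 = 0$. By contrast, $\{(1, 1), (2, 2)\}$ is linearly dependent, since $2(1, 1) + (-1)(2, 2) = 0$ is a nontrivial representation of $0$.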
In this case, the vectors $u_1, \dots, u_n$ themselves are also said to be linearly independent. Otherwise, if there is at least one nontrivial representation of $0$ by vectors in $S$, then $S$ is said to be linearly dependent.

Given $\emptyset \neq S \subseteq V$, a nonzero vector $v \in \operatorname{span}(S)$ is said to be an essentially unique linear combination of the vectors in $S$ if, up to order of terms, there is one and only one way to express $v$ as a linear combination of vectors in $S$. That is, if there are $a_1, \dots, a_n, b_1, \dots, b_m \in F \setminus \{0\}$, distinct $u_1, \dots, u_n \in S$, and distinct $v_1, \dots, v_m \in S$, then, re-indexing the $b_i$ if necessary,
\[ \left.\begin{aligned} v &= a_1 u_1 + \cdots + a_n u_n \\ v &= b_1 v_1 + \cdots + b_m v_m \end{aligned}\right\} \implies m = n \quad\text{and}\quad \begin{cases} a_i = b_i \\ u_i = v_i \end{cases} \quad \forall i = 1, \dots, n \]

A subset $\beta \subseteq V$ is called a (Hamel) basis if it is linearly independent and $\operatorname{span}(\beta) = V$. We also say that the vectors of $\beta$ form a basis for $V$. Equivalently, as explained in Theorem 1.13 below, $\beta$ is a basis if every nonzero vector $v \in V$ is an essentially unique linear combination of vectors in $\beta$. In the context of inner product spaces of infinite dimension, there is a difference between a vector space basis (the Hamel basis of $V$) and an orthonormal basis for $V$ (the Hilbert basis for $V$): though the two always exist, they are not equal unless $\dim(V) < \infty$.

The dimension of a vector space $V$ is the cardinality of any basis for $V$, and is denoted $\dim(V)$. $V$ is finite-dimensional if it is the zero vector space $\{0\}$ or if it has a basis of finite cardinality. Otherwise, if its basis has infinite cardinality, it is called infinite-dimensional. In the former case, $\dim(V) = |\beta| = n < \infty$ for some $n \in \mathbb{N}$, and $V$ is said to be $n$-dimensional; in the latter case, $\dim(V) = |\beta| = \kappa$, where $\kappa$ is a cardinal number, and $V$ is said to be $\kappa$-dimensional.

If $V$ is finite-dimensional, say of dimension $n$, then an ordered basis for $V$ is a finite sequence or $n$-tuple $(v_1, \dots, v_n)$ of linearly independent vectors $v_1, \dots, v_n \in V$ such that $\{v_1, \dots, v_n\}$ is a basis for $V$. If $V$ is infinite-dimensional but with a countable basis, then an ordered basis is a sequence $(v_n)_{n \in \mathbb{N}}$ such that the set $\{v_n \mid n \in \mathbb{N}\}$ is a basis for $V$.

The term standard ordered basis is applied only to the ordered bases for $F^n$, $(F^{\mathbb{N}})_0$, $(F^B)_0$, and $F[x]$ and its subspaces $F_n[x]$. For $F^n$, the standard ordered basis is given by
\[ (e_1, e_2, \dots, e_n) \]
where $e_1 = (1, 0, \dots, 0)$, $e_2 = (0, 1, 0, \dots, 0)$, ..., $e_n = (0, \dots, 0, 1)$. Since $F$ is any field, $1$ represents the multiplicative identity and $0$ the additive identity. For $F = \mathbb{R}$, the standard ordered basis is just as above with the usual $0$ and $1$. If $F = \mathbb{C}$, then $1 = (1, 0)$ and $0 = (0, 0)$, so the $i$th standard basis vector for $\mathbb{C}^n$ looks like this:
\[ e_i = \big( (0, 0), \dots, \underbrace{(1, 0)}_{i\text{th term}}, \dots, (0, 0) \big) \]
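As a quick illustration (ours, not from the original notes): in $\mathbb{R}^3$ with the standard ordered basis $(e_1, e_2, e_3)$, the vector $v = (4, -2, 7)$ has the essentially unique representation
\[ v = 4e_1 - 2e_2 + 7e_3 \]
and the coefficient tuple $(4, -2, 7)$ is exactly the coordinate representation of $v$ relative to this ordered basis; the ordering of the basis is what makes such a coordinate tuple well-defined.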
For $(F^{\mathbb{N}})_0$ the standard ordered basis is given by
\[ (e_n)_{n \in \mathbb{N}}, \quad \text{where } e_n(k) = \delta_{nk} \]
and generally for $(F^B)_0$ the standard ordered basis is given by
\[ (e_b)_{b \in B}, \quad \text{where } e_b(x) = \begin{cases} 1, & \text{if } x = b \\ 0, & \text{if } x \neq b \end{cases} \]
When the term standard ordered basis is applied to the vector space $F_n[x]$, the subspace of $F[x]$ of polynomials of degree less than or equal to $n$, it refers to the basis
\[ (1, x, x^2, \dots, x^n) \]
For $F[x]$, the standard ordered basis is given by
\[ (1, x, x^2, \dots) \]

1.1.3 Sums and Products of Vector Spaces and Subspaces

If $S_1, \dots, S_k \in P(V)$ are subsets or subspaces of a vector space $V$, then their (internal) sum is given by
\[ S_1 + \cdots + S_k = \sum_{i=1}^{k} S_i := \{ v_1 + \cdots + v_k \mid v_i \in S_i \text{ for } 1 \leq i \leq k \} \tag{1.4} \]
More generally, if $\{ S_i \mid i \in I \} \subseteq P(V)$ is any collection of subsets or subspaces of $V$, then their sum is defined to be
\[ \sum_{i \in I} S_i := \Big\{ v_1 + \cdots + v_n \;\Big|\; v_j \in \bigcup_{i \in I} S_i \text{ and } n \in \mathbb{N} \Big\} \tag{1.5} \]
If $S$ is a subspace of $V$ and $v \in V$, then the sum $\{v\} + S = \{ v + s \mid s \in S \}$ is called a coset of $S$ and is usually denoted
\[ v + S \tag{1.6} \]
We are of course interested in the (internal) sum of subspaces of a vector space. In this case, we may have the special condition
\[ V = \sum_{i=1}^{k} S_i \quad\text{and}\quad S_j \cap \sum_{i \neq j} S_i = \{0\}, \quad \forall j = 1, \dots, k \tag{1.7} \]
When this happens, $V$ is said to be the (internal) direct sum of $S_1, \dots, S_k$, and this is symbolically denoted
\[ V = S_1 \oplus S_2 \oplus \cdots \oplus S_k = \bigoplus_{i=1}^{k} S_i = \{ s_1 + \cdots + s_k \mid s_i \in S_i \} \tag{1.8} \]
In the infinite-dimensional case we proceed similarly. If $\mathcal{F} = \{ S_i \mid i \in I \}$ is a family of subsets of $V$ such that
\[ V = \sum_{i \in I} S_i \quad\text{and}\quad S_j \cap \sum_{i \neq j} S_i = \{0\}, \quad \forall j \in I \tag{1.9} \]
then $V$ is a direct sum of the $S_i \in \mathcal{F}$, denoted
\[ V = \bigoplus \mathcal{F} = \bigoplus_{i \in I} S_i = \Big\{ s_{i_1} + \cdots + s_{i_n} \;\Big|\; i_1, \dots, i_n \in I \text{ distinct},\ s_{i_j} \in S_{i_j},\ n \in \mathbb{N} \Big\} \tag{1.10} \]
We can also define the (external) sum of distinct vector spaces which do not lie inside a larger vector space: if $V_1, \dots, V_n$ are vector spaces over the same field $F$, then their external direct sum is the cartesian product $V_1 \times \cdots \times V_n$, with addition and scalar multiplication defined componentwise. And we denote the sum, confusingly, by the same notation:
\[ V_1 \oplus V_2 \oplus \cdots \oplus V_n := \{ (v_1, v_2, \dots, v_n) \mid v_i \in V_i,\ i = 1, \dots, n \} \tag{1.11} \]
In the infinite-dimensional case, we have two types of external direct sum: one where there is no restriction on the sequences, the other where we allow only sequences with finite support.
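A brief example of condition (1.7) (ours, not from the original notes): in $V = \mathbb{R}^2$, take $S_1 = \{ (t, 0) \mid t \in \mathbb{R} \}$ and $S_2 = \{ (0, t) \mid t \in \mathbb{R} \}$. Then $S_1 + S_2 = \mathbb{R}^2$ and $S_1 \cap S_2 = \{0\}$, so $\mathbb{R}^2 = S_1 \oplus S_2$, and every vector decomposes uniquely, e.g. $(3, 5) = (3, 0) + (0, 5)$. By contrast, adding $S_3 = \{ (t, t) \mid t \in \mathbb{R} \}$ still gives $S_1 + S_2 + S_3 = \mathbb{R}^2$, but the sum is no longer direct, since $S_3 \cap (S_1 + S_2) = S_3 \neq \{0\}$.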