
Vector Spaces

1 Definition of vector space

Definition 1.1. Let K be a field and let V be a nonempty set with two operations:

Vector addition: For any u, v ∈ V there is a sum u + v in V .

Scalar multiplication: For u ∈ V and a scalar c ∈ K there is a product cu in V .

The set V is called a vector space over K if the following properties are satisfied for all u, v, w ∈ V and c, d ∈ K:

(A1) u + v = v + u;

(A2) (u + v) + w = u + (v + w);

(A3) There is a vector 0, called the zero vector, such that u + 0 = u;

(A4) For any vector u there is a vector −u such that u + (−u) = 0;

(M1) c(u + v) = cu + cv;

(M2) (c + d)u = cu + du;

(M3) c(du) = (cd)u;

(M4) 1u = u.

Example 1.1. Let K be a field. Then the set

K^n := { (a1, a2, . . . , an) | ai ∈ K, 1 ≤ i ≤ n }

is a vector space under the following addition and scalar multiplication.

Vector addition:

(a1, . . . , an) + (b1, . . . , bn) = (a1 + b1, . . . , an + bn).

Scalar multiplication:

c(a1, . . . , an) = (ca1, . . . , can).

The zero vector is 0 = (0, . . . , 0) and the negative of a vector is

−(a1, . . . , an) = (−a1, . . . , −an).

Example 1.2. Let P(t) be the set of all real polynomials. Then P(t) is a vector space under the ordinary addition and scalar multiplication of polynomials.

Example 1.3. Let Pn(t) be the set of all real polynomials of degree at most n. Then Pn(t) is a vector space under the ordinary addition and scalar multiplication of polynomials.

Example 1.4. Let M_{m,n} be the set of all m × n matrices over a field K. Then M_{m,n} is a vector space under the ordinary addition and scalar multiplication of matrices.

Example 1.5. Let X be a nonempty set, and let K be a field. Let F(X) denote the set of all functions from X to K. Then F(X) is a vector space under the following addition and scalar multiplication. For f, g ∈ F(X) and c ∈ K, the functions f + g and cf are defined as follows:

Vector addition:

(f + g)(x) = f(x) + g(x), ∀x ∈ X.

Scalar multiplication:

(cf)(x) = cf(x), ∀x ∈ X.

The zero vector in F(X) is the zero function 0, defined as 0(x) = 0, ∀x ∈ X. For f ∈ F(X), the function −f is defined as (−f)(x) = −f(x), ∀x ∈ X.
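A quick check of, say, axioms (A1) and (M2) for F(X), using the corresponding field properties of K pointwise: for all x ∈ X,

(f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x),

((c + d)f)(x) = (c + d)f(x) = cf(x) + df(x) = (cf + df)(x),

so f + g = g + f and (c + d)f = cf + df. The remaining axioms are verified in the same pointwise way.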

2 Linear Combinations, spanning sets

Let V be a vector space over a field K. We say that v in V is a linear combination of vectors v1, v2, . . . , vk in V if there exist scalars c1, c2, . . . , ck ∈ K such that

v = c1v1 + c2v2 + · · · + ckvk.

The set of all linear combinations of vectors v1, v2, . . . , vk is called the linear span of v1, v2, . . . , vk, and is denoted by span{v1, v2, . . . , vk}.

If span{v1, v2, . . . , vk} = V , we say that v1, v2, . . . , vk span V , or v1, v2, . . . , vk form a spanning set of V .

Example 2.1. The vectors

u1 = [1, 2]^T , u2 = [2, 3]^T , u3 = [3, 4]^T

form a spanning set of R^2. The vectors

e1 = [1, 0, 0]^T , e2 = [0, 1, 0]^T , e3 = [0, 0, 1]^T

form a spanning set of R^3. However, the vectors

u1 = [1, 2, 3]^T , u2 = [2, 3, 4]^T , u3 = [3, 4, 5]^T , u4 = [4, 5, 6]^T

do not form a spanning set of R^3.
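To see that u1 and u2 alone already span R^2: given any [a, b]^T ∈ R^2, we seek scalars c1, c2 with c1u1 + c2u2 = [a, b]^T , i.e., c1 + 2c2 = a and 2c1 + 3c2 = b. Solving gives c1 = −3a + 2b and c2 = 2a − b, so every vector of R^2 is a linear combination of u1, u2 (and hence of u1, u2, u3).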

Example 2.2. The matrices

E11 = [ 1 0 ]   E12 = [ 0 1 ]   E21 = [ 0 0 ]   E22 = [ 0 0 ]
      [ 0 0 ],        [ 0 0 ],        [ 1 0 ],        [ 0 1 ]

form a spanning set of M_{2,2}.

Proposition 2.1. Let v1, v2, . . . , vn be vectors of a vector space V . If vk is a linear combination of the other vectors, then

span{v1, . . . , vk, . . . , vn} = span{v1, . . . , v̂k, . . . , vn},

where v̂k means that vk is omitted.

Proof. It is clear that span{v1, . . . , v̂k, . . . , vn} is contained in span{v1, . . . , vk, . . . , vn}. Let vk be written as

vk = c1v1 + · · · + ck−1vk−1 + ck+1vk+1 + · · · + cnvn.

Now for any vector v in the span of v1, v2, . . . , vn, we have

v = a1v1 + · · · + akvk + · · · + anvn.

Thus

v = (a1 + akc1)v1 + · · · + (ak−1 + akck−1)vk−1 + (ak+1 + akck+1)vk+1 + · · · + (an + akcn)vn.

This means that v is contained in the span of v1, . . . , vk−1, vk+1, . . . , vn.
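Proposition 2.1 explains the redundancy in Example 2.1: there u3 = 2u2 − u1 (indeed 2[2, 3]^T − [1, 2]^T = [3, 4]^T ), so span{u1, u2, u3} = span{u1, u2} = R^2.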

3 Subspaces

Definition 3.1. Let V be a vector space over a field K. A subset W ⊆ V is called a subspace if W itself is a vector space over K whose vector addition and scalar multiplication are the same as those in V .

Theorem 3.2. Let W be a nonempty subset of a vector space V over a field K. Then W is a subspace of V if the following conditions are satisfied:

(a) If u, v ∈ W , then u + v ∈ W .

(b) If u ∈ W and c ∈ K, then cu ∈ W .

Example 3.1. If v1, v2, . . . , vk are vectors of a vector space V , then span{v1, v2, . . . , vk} is a subspace of V .

Example 3.2. Let A be an m × n matrix. Then the solution set of the linear system

Ax = 0

is a subspace of R^n.
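To verify Example 3.2 with Theorem 3.2, let W be the solution set of Ax = 0. If u, v ∈ W and c ∈ R, then A(u + v) = Au + Av = 0 + 0 = 0 and A(cu) = c(Au) = c0 = 0, so u + v ∈ W and cu ∈ W ; moreover 0 ∈ W , so W is nonempty.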

Example 3.3. Pn(t) is a subspace of P(t).

Example 3.4. The plane

R^2 × 0 = { (a, b, 0) | a, b ∈ R }

and the line

R × 0 × 0 = { (a, 0, 0) | a ∈ R }

are subspaces of R^3.

Proposition 3.3. If U and W are subspaces of a vector space V , then the intersection

U ∩ W := { v ∈ V | v ∈ U, v ∈ W }

is a subspace of V . The sum

U + W := { u + w | u ∈ U, w ∈ W }

is also a subspace of V .
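A sketch of the proof of Proposition 3.3 via Theorem 3.2: if v, v′ ∈ U ∩ W and c ∈ K, then v + v′ and cv lie in U (since U is a subspace) and likewise in W , hence in U ∩ W . Similarly, (u + w) + (u′ + w′) = (u + u′) + (w + w′) and c(u + w) = cu + cw show that U + W is closed under addition and scalar multiplication.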

Let A be an m × n real matrix, and write

A = [a1, a2, . . . , an].

The column space of A is span{a1, a2, . . . , an}, which is a subspace of R^m. The linear span of the row vectors of A is called the row space of A, which is a subspace of R^n.

4 Linear dependence and independence

Definition 4.1. Vectors v1, v2, . . . , vp in a vector space V are called linearly dependent if there are scalars c1, c2, . . . , cp, not all zero, such that

c1v1 + c2v2 + · · · + cpvp = 0.

Otherwise, the vectors v1, v2, . . . , vp are called linearly independent, i.e., if

c1v1 + c2v2 + · · · + cpvp = 0,

then c1 = c2 = · · · = cp = 0.

Example 4.1. (a) The vectors

[1, 1, 1]^T , [0, 1, 1]^T , [1, 0, 1]^T

in R^3 are linearly independent.

(b) The vectors

[1, 1, 1]^T , [1, 0, 1]^T , [5, 2, 5]^T

in R^3 are linearly dependent.

(c) The vectors

[1, 1, 1]^T , [0, 1, 1]^T , [1, 0, 1]^T , [3, 5, 6]^T

in R^3 are linearly dependent.
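In (b) the dependence can be exhibited explicitly:

2[1, 1, 1]^T + 3[1, 0, 1]^T − [5, 2, 5]^T = 0,

a nontrivial linear relation. In (c), dependence also follows from Theorem 4.3 below: four vectors in R^3 are always linearly dependent.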

Example 4.2. The basic solutions of any homogeneous linear system are linearly independent.

Theorem 4.2. Any set of vectors containing the zero vector 0 is linearly dependent.

Theorem 4.3. Vectors v1, v2, . . . , vp in R^n are linearly dependent if p > n.

Proof. Let A = [v1, . . . , vp]. Then A is an n × p matrix, and the system Ax = 0 has n equations in p unknowns. Recall that for the matrix A the number of pivot positions plus the number of free variables is equal to p, and the number of pivot positions is at most n. Thus, if p > n, there must be some free variables. Hence Ax = 0 has nontrivial solutions. This means that the column vectors of A are linearly dependent.

Theorem 4.4. Vectors v1, v2, . . . , vk (k ≥ 2) in a vector space V are linearly dependent if and only if one of the vectors is a linear combination of the other vectors.

Theorem 4.5. The column vectors of a matrix A are linearly independent if and only if the system

Ax = 0

has only the trivial solution.

5 Basis, coordinates, and dimension

The vectors v1 = [1, 1, 1]^T and v2 = [0, 1, 1]^T are linearly independent, but not every vector of R^3 can be written as a linear combination of v1 and v2. Every vector of R^3 can be written as a linear combination of the following vectors:

[1, 1, 1]^T , [0, 1, 1]^T , [1, 0, 1]^T , [3, 5, 6]^T .

However, the linear combinations are not unique, as the vectors are linearly dependent.

Definition 5.1. A set B = {v1, v2, . . . , vp} of vectors in V is called a basis of V if

(a) B is linearly independent, and

(b) B spans V .

In other words, every vector of V can be written uniquely as a linear combination of basis vectors.

Let B = {v1, v2, . . . , vn} be a basis of a vector space V over a field K. Then every vector v in V has a unique expression

v = c1v1 + c2v2 + · · · + cnvn.

The scalars c1, c2, . . . , cn are called the coordinates of v relative to the basis B, or B-coordinates of v; and the vector

[v]B = [c1, c2, . . . , cn]^T

in K^n is called the coordinate vector of v relative to B, or the B-coordinate vector of v. We may write

v = c1v1 + c2v2 + · · · + cnvn = [v1, v2, . . . , vn][v]B.

Note that if u, v ∈ V and

u = a1v1 + a2v2 + · · · + anvn,
v = b1v1 + b2v2 + · · · + bnvn,

then

u + v = (a1 + b1)v1 + (a2 + b2)v2 + · · · + (an + bn)vn,
cu = ca1v1 + ca2v2 + · · · + canvn.

Thus

[u]B = [a1, a2, . . . , an]^T ,
[v]B = [b1, b2, . . . , bn]^T ,
[u + v]B = [a1 + b1, a2 + b2, . . . , an + bn]^T ,
[cu]B = [ca1, ca2, . . . , can]^T .

Moreover, if u ≠ v, then [u]B ≠ [v]B. For any [c1, c2, . . . , cn]^T ∈ K^n, let w = c1v1 + c2v2 + · · · + cnvn; then [w]B = [c1, c2, . . . , cn]^T . The correspondence v ↦ [v]B is called an isomorphism from V to K^n. We say that V is isomorphic to K^n.

Example 5.1. Any three linearly independent vectors of R^3 form a basis for R^3. For instance, the set

B = { [1, 1, 1]^T , [1, 2, 1]^T , [1, 1, 3]^T }

is a basis of R^3. The vector [6, 9, 8]^T has the B-coordinate vector [2, 3, 1]^T . However, the coordinate vector of the vector [2, 3, 1]^T in R^3 is just [2, 3, 1]^T itself under the standard basis

E = { [1, 0, 0]^T , [0, 1, 0]^T , [0, 0, 1]^T }.
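To verify the B-coordinate vector in Example 5.1:

2[1, 1, 1]^T + 3[1, 2, 1]^T + 1[1, 1, 3]^T = [2 + 3 + 1, 2 + 6 + 1, 2 + 3 + 3]^T = [6, 9, 8]^T ,

so [6, 9, 8]^T does have B-coordinate vector [2, 3, 1]^T .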

Theorem 5.2. Let B = {v1, v2, . . . , vn} be a basis of a vector space V . Then any family w1, w2, . . . , wp of vectors in V is linearly dependent if p > n.

Proof. Let {w1, w2, . . . , wp} be a set of vectors with p > n. Since any set of more than n vectors of R^n is linearly dependent, the vectors [w1]B, [w2]B, . . . , [wp]B of R^n must be linearly dependent. Then there exist constants c1, c2, . . . , cp, not all zero, such that

c1[w1]B + c2[w2]B + · · · + cp[wp]B = 0.

Thus

[c1w1 + c2w2 + · · · + cpwp]B = 0.

Since the zero vector 0, and only the zero vector, in V has its B-coordinate vector equal to the zero vector 0 in R^n, we see that

c1w1 + c2w2 + · · · + cpwp = 0

with c1, c2, . . . , cp not all zero. This means that the vectors w1, w2, . . . , wp are linearly dependent.

Theorem 5.3. Let B1 = {u1, u2, . . . , um} and B2 = {v1, v2, . . . , vn} be bases of a vector space V . Then m = n.

Proof. Suppose m < n. By Theorem 5.2, {v1, v2, . . . , vn} is linearly dependent, contrary to the properties of a basis. Thus m ≥ n. A similar argument shows that m ≤ n. Therefore m = n.

The number of vectors of a basis B for a vector space V is called the dimension of V , denoted dim V . If #(B) = n is finite, then V is said to be of finite dimension n, or n-dimensional, written dim V = n. If #(B) is infinite, then V is said to be of infinite dimension, or infinite-dimensional.

Theorem 5.4 (Extension Basis Theorem). Let V be an n-dimensional vector space. Then any linearly independent vectors v1, v2, . . . , vk can be extended to a basis B = {v1, v2, . . . , vk, vk+1, . . . , vn} of V .

Proof. Let S = {v1, v2, . . . , vk} be a set of linearly independent vectors of V . If span S = V , then S is a basis of V by definition. Otherwise, there exists a vector vk+1 in V such that vk+1 is not contained in span S. Then {v1, v2, . . . , vk, vk+1} is a linearly independent set of V . Now set S = {v1, v2, . . . , vk+1}. If span S = V , then S is a basis of V . Otherwise, continue to add a vector of V − span S to S in this way until span S = V . Since V has finite dimension, the extension ends in a finite number of steps.

Example 5.2. Find a basis for the solution space of the linear system

x1 + x2 + x4 + x6 = 0,
x3 + x4 + x5 + x6 = 0.
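A solution sketch for Example 5.2: taking x2, x4, x5, x6 as free variables, the system gives x1 = −x2 − x4 − x6 and x3 = −x4 − x5 − x6. Setting one free variable equal to 1 and the others to 0 yields the basic solutions

[−1, 1, 0, 0, 0, 0]^T , [−1, 0, −1, 1, 0, 0]^T , [0, 0, −1, 0, 1, 0]^T , [−1, 0, −1, 0, 0, 1]^T ,

which form a basis of the solution space; the solution space thus has dimension 4.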

Theorem 5.5 (Basis Theorem). Let v1, v2, . . . , vn be vectors of an n-dimensional vector space V .

(a) If v1, v2, . . . , vn are linearly independent, then B = {v1, v2, . . . , vn} is a basis of V .

(b) If v1, v2, . . . , vn span V , then B = {v1, v2, . . . , vn} is a basis of V .

Proof. Let S := {v1, v2, . . . , vn}.

(a) By the Extension Basis Theorem, S can be extended to a basis of V . Since S has n vectors and all bases have the same number of vectors, no vectors were added to S. Hence S itself is a basis.

(b) We only need to show that S is linearly independent. Suppose S is not linearly independent. Then, by repeatedly removing (using Proposition 2.1) a vector that is a linear combination of the others, S contains a linearly independent proper subset S′ such that span(S′) = V . So S′ is a basis of V . Thus #(S′) = n by Theorem 5.3, contradicting #(S′) < n.

Example 5.3. Consider the matrix

A = [  1   4   0   2   4 ]
    [ −1  −4   1  −3  −2 ]
    [  2   8   1   3  10 ]
    [  1   4   1   1   6 ]   = [a1, a2, a3, a4, a5].

Its row canonical form is the matrix

B = [ 1  4  0   2  4 ]
    [ 0  0  1  −1  2 ]
    [ 0  0  0   0  0 ]
    [ 0  0  0   0  0 ]   = [b1, b2, b3, b4, b5].

Since Ax = 0 is equivalent to Bx = 0, i.e.,

x1a1 + x2a2 + x3a3 + x4a4 + x5a5 = 0

if and only if

x1b1 + x2b2 + x3b3 + x4b4 + x5b5 = 0,

the linear relations among the vectors a1, a2, a3, a4, a5 are the same as the linear relations among the vectors b1, b2, b3, b4, b5. For instance,

b2 = 4b1 ←→ a2 = 4a1,
b4 = 2b1 − b3 ←→ a4 = 2a1 − a3,
b5 = 4b1 + 2b3 ←→ a5 = 4a1 + 2a3.

This shows that row operations do not change the linear relations among the column vectors of a matrix.

Let A and B be matrices such that A ∼ B, i.e., A is row equivalent to B. Then

Nul A = Nul B, Row A = Row B.

In other words, row operations change neither the null space nor the row space of a matrix. However, row operations do change column spaces, so we may have

Col A ≠ Col B.

Theorem 5.6 (Column Basis Theorem). The column vectors of a matrix A corresponding to its pivot positions form a basis of Col A.

Proof. Let B = [b1, b2, . . . , bn] denote the reduced row echelon form of A = [a1, a2, . . . , an]. Let b_{i1}, b_{i2}, . . . , b_{ik} be the column vectors of B containing the pivot positions. It is clear that b_{i1}, b_{i2}, . . . , b_{ik} are linearly independent and every column vector of B is a linear combination of the vectors b_{i1}, b_{i2}, . . . , b_{ik}.

Let a_{i1}, a_{i2}, . . . , a_{ik} be the corresponding column vectors of A. It suffices to prove that a linear relation for b1, b2, . . . , bn is also a linear relation for a1, a2, . . . , an, and vice versa. Notice that a linear relation among the vectors b1, b2, . . . , bn is just a solution of the system Bx = 0, and the systems Ax = 0 and Bx = 0 have the same solution set. Thus a_{i1}, a_{i2}, . . . , a_{ik} are linearly independent and every column vector of A is a linear combination of a_{i1}, a_{i2}, . . . , a_{ik}. So a_{i1}, a_{i2}, . . . , a_{ik} form a basis of Col A.
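Applying Theorem 5.6 to Example 5.3: the pivot positions of A occur in columns 1 and 3, so

a1 = [1, −1, 2, 1]^T and a3 = [0, 1, 1, 1]^T

form a basis of Col A, and dim Col A = 2.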

6 Direct sum

Theorem 6.1 (Dimension Theorem). Let U and W be subspaces of a vector space V . Then

dim U + dim W = dim(U ∩ W ) + dim(U + W ).

Proof. Apply the Extension Basis Theorem: extend a basis of U ∩ W to a basis of U and also to a basis of W ; the union of the two extended bases spans U + W and can be checked to be linearly independent, which gives the dimension formula.

Definition 6.2. Let U and W be subspaces of a vector space V . The vector space U + W is called a direct sum of U and W , written U ⊕ W , provided that if u + w = u′ + w′ for u, u′ ∈ U and w, w′ ∈ W , then u = u′ and w = w′.

Example 6.1. Let V be the solution space of

x1 + x2 + x4 + x6 = 0,
x3 + x4 + x5 + x6 = 0.

Let W be the solution space of

x1 + x2 + x4 + x5 + 3x6 = 0,
x3 + x4 + 2x5 + 3x6 = 0,
x1 + x2 + x3 + 2x4 + 2x5 + 4x6 = 0.

The intersection V ∩ W is the solution space of the system

x1 + x2 + x4 + x6 = 0,
x3 + x4 + x5 + x6 = 0,
x1 + x2 + x4 + x5 + 3x6 = 0,
x3 + x4 + 2x5 + 3x6 = 0,
x1 + x2 + x3 + 2x4 + 2x5 + 4x6 = 0.
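A dimension count for Example 6.1 (the row reductions are routine to verify): the first system has rank 2, so dim V = 6 − 2 = 4; the second has rank 3, so dim W = 6 − 3 = 3; the combined system also has rank 3, so dim(V ∩ W ) = 3. By Theorem 6.1, dim(V + W ) = 4 + 3 − 3 = 4. In fact, here V ∩ W = W , that is, W ⊆ V , and hence V + W = V .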

Theorem 6.3. Let U and W be subspaces of a vector space V . Then the following statements are equivalent:

(a) U + W is a direct sum.

(b) U ∩ W = {0}.

(c) dim(U + W ) = dim U + dim W .

Example 6.2. Let V be the solution space of

x1 + x2 + x3 = 0,
x2 + x3 = 0.

Let W be the solution space of

x1 + x2 + x3 = 0,
x2 − x3 = 0.

Then V ∩ W = {0}, and V + W = V ⊕ W is a direct sum.

A vector space V is called a direct sum of subspaces W1, W2, . . . , Wk, written

V = W1 ⊕ W2 ⊕ · · · ⊕ Wk,

if every vector v ∈ V can be uniquely written as

v = w1 + w2 + ··· + wk, where w1 ∈ W1, w2 ∈ W2, ..., wk ∈ Wk.

Theorem 6.4. Let W1, W2, . . . , Wk be subspaces of a vector space V . Then the following statements are equivalent:

(a) W1 + W2 + · · · + Wk is a direct sum.

(b) For each 1 ≤ i ≤ k,

Wi ∩ (W1 + · · · + Wi−1 + Wi+1 + · · · + Wk) = {0}.

(c) dim(W1 + W2 + · · · + Wk) = dim W1 + dim W2 + · · · + dim Wk.
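A standard caution about condition (b): for k ≥ 3 it is not enough that the subspaces intersect pairwise in {0}. For instance, in R^2 the lines W1 = span{[1, 0]^T }, W2 = span{[0, 1]^T }, W3 = span{[1, 1]^T } satisfy Wi ∩ Wj = {0} for i ≠ j, yet dim(W1 + W2 + W3) = 2 < 1 + 1 + 1 = 3, so by (c) the sum is not direct.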
