
Linear combinations of vectors

Brian Krummel
January 31, 2020

We will continue discussing linear combinations of vectors and the concepts of span and linear independence. Last Wednesday we introduced the concept of the span of a set of vectors:

Definition 1. Let V be a vector space and let S = {X1, X2, ..., Xk} be a finite set of vectors in V. The span of S is the set of all linear combinations

\[
Y = c_1 X_1 + c_2 X_2 + \cdots + c_k X_k
\]

of X1, X2, ..., Xk, where c1, c2, ..., ck is any possible choice of scalars. We denote the span of S by Span S.

We ended class last Wednesday on a cliffhanger with the following example:

Example 1. What is the span of two nonzero vectors X, Y in R2? Answer. If X, Y are parallel, then their span is the line passing through 0 and X, which also passes through Y. For instance, the span of X, 2X is:

[Figure: the vectors X and 2X in the (x1, x2)-plane lie on a single line through the origin; their span is that line.]

If instead X, Y are not parallel, then their span is all of R2:

[Figure: two non-parallel vectors X and Y in the (x1, x2)-plane; every point of R2 can be reached as a linear combination of X and Y.]

Example 2. What is the span of two vectors X, Y in R3 which are nonzero and not parallel? Answer. The plane containing 0, X, and Y:

[Figure: two non-parallel vectors X and Y in R3 spanning a plane through the origin.]
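As a brief computational aside (my addition, not from the lecture): one way to test whether a given vector lies in the span of others is to solve a least-squares problem and check that the residual vanishes. A minimal sketch in Python/NumPy, with illustrative vectors in R3:

```python
import numpy as np

def in_span(vectors, target, tol=1e-10):
    """Return True if `target` is (numerically) a linear combination
    of the given vectors."""
    A = np.column_stack(vectors)
    # Least-squares solve of A c = target; the residual is zero
    # exactly when target lies in the span of the columns of A.
    c, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.allclose(A @ c, target, atol=tol)

X = np.array([1.0, 0.0, 1.0])  # illustrative vectors in R^3
Y = np.array([0.0, 2.0, 0.0])

print(in_span([X, Y], np.array([3.0, 4.0, 3.0])))  # True: equals 3X + 2Y
print(in_span([X, Y], np.array([0.0, 0.0, 1.0])))  # False: off the plane
```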

Next let’s look at the concept of linear independence. The standard definition of linear independence (which differs slightly from the textbook) is as follows:

Definition 2. Let V be a vector space and S = {X1,X2,...,Xk} be a finite set of vectors in V. We say that S is linearly independent if

\[
c_1 X_1 + c_2 X_2 + \cdots + c_k X_k = 0 \tag{1}
\]

for scalars c1, c2, ..., ck only if c1 = c2 = ··· = ck = 0. We say that S is linearly dependent if there exist scalars c1, c2, ..., ck, not all zero, such that (1) holds true.

Example 3. Is the set of 2 × 2 matrices
\[
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 3 \\ 0 & 5 \end{pmatrix}, \quad
\begin{pmatrix} 2 & 4 \\ 0 & 0 \end{pmatrix}
\]
linearly independent? Answer. Yes. Suppose that
\[
c_1 \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
+ c_2 \begin{pmatrix} 1 & 3 \\ 0 & 5 \end{pmatrix}
+ c_3 \begin{pmatrix} 2 & 4 \\ 0 & 0 \end{pmatrix}
= \begin{pmatrix} c_1 + c_2 + 2c_3 & 2c_1 + 3c_2 + 4c_3 \\ 3c_1 & 4c_1 + 5c_2 \end{pmatrix}
= \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
\]

The (2,1)-entry tells us that 3c1 = 0 ⇒ c1 = 0. The (2,2)-entry then tells us that 4c1 + 5c2 = 0 ⇒ c2 = 0. Finally, the (1,1)-entry gives c1 + c2 + 2c3 = 0, so c3 = 0.
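As a sanity check on Example 3 (again my addition, not part of the lecture): flattening each matrix to a vector turns the question into a rank computation, which NumPy can do directly.

```python
import numpy as np

# The three 2x2 matrices from Example 3, flattened to vectors in R^4.
A1 = np.array([[1, 2], [3, 4]])
A2 = np.array([[1, 3], [0, 5]])
A3 = np.array([[2, 4], [0, 0]])

M = np.column_stack([A.flatten() for A in (A1, A2, A3)])
# Three vectors are linearly independent exactly when the 4x3
# matrix having them as columns has rank 3.
print(np.linalg.matrix_rank(M))       # 3
print(np.linalg.matrix_rank(M) == 3)  # True: the set is independent
```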

To better understand what this definition means, and to relate this definition back to the textbook, we have the following theorem:

Theorem 1. Let V be a vector space and S = {X1, X2, ..., Xk} be a finite set of vectors in V. S is linearly dependent if and only if one of the vectors Xi in S is a linear combination of the other vectors in S (in which case we say Xi is redundant or linearly dependent on the other vectors of S).

Reason of Theorem 1. Suppose that S is linearly dependent. Then there exist scalars c1, c2, ..., ck not all zero such that
\[
c_1 X_1 + c_2 X_2 + \cdots + c_k X_k = 0.
\]

Up to relabeling indices we may assume that ck ≠ 0. Then by subtracting ck Xk from both sides and dividing by −ck,
\[
X_k = -\frac{c_1}{c_k} X_1 - \frac{c_2}{c_k} X_2 - \cdots - \frac{c_{k-1}}{c_k} X_{k-1},
\]
so that Xk is a linear combination of X1, X2, ..., Xk−1.

Conversely, suppose that one of the vectors in S is a linear combination of the other vectors in S. Up to relabeling indices we may assume that Xk is a linear combination of X1, X2, ..., Xk−1. Hence
\[
X_k = a_1 X_1 + a_2 X_2 + \cdots + a_{k-1} X_{k-1}
\]
for some scalars a1, a2, ..., ak−1. By subtracting Xk from both sides,
\[
0 = a_1 X_1 + a_2 X_2 + \cdots + a_{k-1} X_{k-1} - X_k,
\]
which, since the weight on Xk is −1 ≠ 0, means that X1, X2, ..., Xk are linearly dependent.

Example 4. Is the set of column vectors in R3
\[
\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \quad
\begin{pmatrix} 3 \\ 4 \\ 3 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}
\]
linearly independent? Answer. No, since the 2nd vector is a linear combination of the 1st and 3rd vectors:
\[
\begin{pmatrix} 3 \\ 4 \\ 3 \end{pmatrix}
= 3 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
+ 2 \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}.
\]
Notice that we can regard the 2nd vector as redundant, since any linear combination of the three vectors can be rewritten as a linear combination of the 1st and 3rd vectors:
\[
c_1 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
+ c_2 \begin{pmatrix} 3 \\ 4 \\ 3 \end{pmatrix}
+ c_3 \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}
= (c_1 + 3c_2) \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
+ (c_3 + 2c_2) \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}.
\]
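For sets of column vectors, the same conclusion can be reached numerically; here is a short sketch (my addition) that detects the dependence and recovers the relation via the singular value decomposition:

```python
import numpy as np

# Columns are the three vectors from Example 4.
V = np.column_stack([
    [1.0, 0.0, 1.0],
    [3.0, 4.0, 3.0],
    [0.0, 2.0, 0.0],
])

# Rank 2 < 3 vectors, so the set is linearly dependent.
print(np.linalg.matrix_rank(V))  # 2

# The last right-singular vector spans the null space of V; it is
# proportional to (3, -1, 2), encoding 3*v1 - 1*v2 + 2*v3 = 0,
# i.e. the relation v2 = 3*v1 + 2*v3 found above.
_, _, Vt = np.linalg.svd(V)
null = Vt[-1]
print(np.allclose(V @ null, 0))  # True
```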

Example 5. Two nonzero vectors X, Y in R2 are linearly dependent if and only if X and Y are parallel:

[Figure: left panel (linearly dependent), X and 2X lie on one line through the origin; right panel (linearly independent), X and Y point in different directions.]

Now let’s verify that linear independence implies uniqueness of linear combinations:

Theorem 2. Let V be a vector space and S = {X1, X2, ..., Xk} be a finite set of vectors in V. For each Y ∈ Span S there exists a unique choice of weights c1, c2, ..., ck such that
\[
c_1 X_1 + c_2 X_2 + \cdots + c_k X_k = Y
\]
holds true if and only if S is linearly independent.

Reason. First it is worth noting that when Y = 0, we can write the zero vector as

0 X1 + 0 X2 + ··· + 0 Xk = 0.

By the definition of linear independence, S = {X1,X2,...,Xk} is linearly independent if and only if this is the only way to write the zero vector as a linear combination of {X1,X2,...,Xk}.

Suppose that we can express Y as a linear combination of X1,X2,...,Xk with two choices of weights:

Y = c1X1 + c2X2 + ··· + ckXk and

Y = d1X1 + d2X2 + ··· + dkXk.

By subtracting:

\[
0 = Y - Y = (c_1 - d_1) X_1 + (c_2 - d_2) X_2 + \cdots + (c_k - d_k) X_k.
\]

Hence if S = {X1,X2,...,Xk} is linearly independent, then ci − di = 0 for each i, that is ci = di for each i. If on the other hand S = {X1,X2,...,Xk} is linearly dependent, we can find weights not all zero such that a1X1 + a2X2 + ··· + akXk = 0.

Given any representation Y = d1X1 + d2X2 + ··· + dkXk (which exists since Y ∈ Span S), setting ci = di + ai, or equivalently ci − di = ai, we obtain a second, different way to express Y as a linear combination of X1, X2, ..., Xk.
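To make the dependent case concrete, here is an illustration (my addition, using the vectors from Example 4): two different choices of weights that produce the same Y, differing exactly by a dependence relation as in the proof above.

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])  # the vectors from Example 4
v2 = np.array([3.0, 4.0, 3.0])
v3 = np.array([0.0, 2.0, 0.0])

# One choice of weights for Y...
Y1 = 1*v1 + 1*v2 + 1*v3
# ...and a second choice, shifted by the dependence relation
# 3*v1 - 1*v2 + 2*v3 = 0.
Y2 = 4*v1 + 0*v2 + 3*v3
print(np.allclose(Y1, Y2))  # True: the weights for Y are not unique
```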

The following theorem from the textbook gives a simple condition for “eyeballing” when a finite set of matrices (or column vectors) is linearly independent.

Theorem 3. Let S = {A1, A2, ..., Ap} be a set of m × n matrices. Suppose that for each k there is a pair of indices (i, j) such that the (i, j)-entry of Ak is nonzero but the (i, j)-entry of Aℓ is zero for all ℓ ≠ k. Then S is linearly independent.

Example 6. Theorem 3 says that the set

\[
S = \left\{
\begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 4 \end{pmatrix}, \quad
\begin{pmatrix} 0 & 1 & 5 \\ 3 & 0 & 7 \end{pmatrix}, \quad
\begin{pmatrix} 0 & 4 & 6 \\ 0 & 7 & 9 \end{pmatrix}
\right\}
\]
is linearly independent. Notice that the first matrix has a nonzero (1,1)-entry, whereas the second and third matrices have (1,1)-entry equal to zero. Similarly, the second matrix has a nonzero (2,1)-entry whereas the other matrices have (2,1)-entry zero, and the third matrix has a nonzero (2,2)-entry whereas the other matrices have (2,2)-entry zero. To see that S is linearly independent, suppose that

\[
c_1 \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 4 \end{pmatrix}
+ c_2 \begin{pmatrix} 0 & 1 & 5 \\ 3 & 0 & 7 \end{pmatrix}
+ c_3 \begin{pmatrix} 0 & 4 & 6 \\ 0 & 7 & 9 \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\]
for some scalars c1, c2, c3. There are six entries which we could look at. If, for instance, we look at the (1,3)-entry, we obtain 3c1 + 5c2 + 6c3 = 0, which alone does not tell us much. However, if we look at the (1,1)-entry, we get c1 = 0. Similarly, the (2,1)-entry gives 3c2 = 0 ⇒ c2 = 0, and the (2,2)-entry gives 7c3 = 0 ⇒ c3 = 0.
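The hypothesis of Theorem 3 can also be checked mechanically; the following sketch (my addition, not from the textbook) searches each matrix of Example 6 for an entry that is nonzero where all the other matrices vanish:

```python
import numpy as np

mats = [
    np.array([[1, 2, 3], [0, 0, 4]]),  # the three matrices of Example 6
    np.array([[0, 1, 5], [3, 0, 7]]),
    np.array([[0, 4, 6], [0, 7, 9]]),
]

def has_private_entry(k, mats):
    """True if mats[k] is nonzero at some position (i, j) where every
    other matrix in the list is zero -- the hypothesis of Theorem 3."""
    others = [m for i, m in enumerate(mats) if i != k]
    for idx in np.ndindex(mats[k].shape):
        if mats[k][idx] != 0 and all(m[idx] == 0 for m in others):
            return True
    return False

# Each matrix has its own "private" nonzero entry ((1,1), (2,1), and
# (2,2) in the lecture's 1-based indexing), so Theorem 3 applies.
print(all(has_private_entry(k, mats) for k in range(len(mats))))  # True
```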
