
Linear Independence

Brian Krummel September 7, 2019

Recall from Section 1.3 that given two vectors v1, v2 in R^2 which are not parallel, Span{v1, v2} is R^2, and we could show this graphically using a grid representing all the ways to form vectors as linear combinations of v1, v2. Similarly, given v1, v2 in R^3 not parallel, Span{v1, v2} is a plane, and we can again show this graphically using a grid. Today we introduce the concept of linear independence, which generalizes the notion of two vectors not being parallel.

Definition 1. A set of vectors {v1, v2, ..., vp} in R^n is linearly independent if the only solution to the vector equation

x1 v1 + x2 v2 + ... + xp vp = 0    (*)

is the trivial solution x1 = x2 = ... = xp = 0.

A set of vectors {v1, v2, ..., vp} in R^n is linearly dependent if the vector equation (*) has a nontrivial solution (x1, x2, ..., xp) = (c1, c2, ..., cp); in other words, if there exist weights c1, c2, ..., cp, not all zero, such that

c1 v1 + c2 v2 + ... + cp vp = 0.    (**)

We call the equation (**) a linear dependence relation amongst v1, v2, ..., vp.

Example 1. The standard coordinate vectors

v_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

in R^3 are linearly independent. The vector equation x1 v1 + x2 v2 + x3 v3 = 0 means that

\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.

Of course, this can only happen if x1 = x2 = x3 = 0, so the standard coordinate vectors are linearly independent.

Remark 1. {v1, v2, ..., vp} in R^n are linearly independent if and only if there exists only the trivial solution to the homogeneous vector equation

x1 v1 + x2 v2 + ... + xp vp = 0    (1)

or equivalently there exists at most one solution to the vector equation

x1 v1 + x2 v2 + ... + xp vp = b    (2)

for each b in R^n.

Theorem 1. The columns of a matrix A are linearly independent if and only if the matrix equation Ax = 0 has only the trivial solution.

Reason. Let A = [ a1 a2 ··· an ] with columns ai. Then the matrix equation Ax = 0 is equivalent to the vector equation

x1 a1 + x2 a2 + ... + xn an = 0.

Thus Ax = 0 having only the trivial solution means the exact same thing as {a1, a2,..., an} being linearly independent. Another way to state Theorem 1 is that the columns of an m × n matrix A are linearly independent if and only if A has n pivot positions, one pivot position in each column.
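As a computational aside, this pivot criterion is easy to check with software. Here is a minimal sketch assuming SymPy is available (these notes do not prescribe any particular software); the helper name columns_independent is ours:

```python
# A minimal sketch, assuming SymPy is available.  Theorem 1 says the
# columns of A are linearly independent iff Ax = 0 has only the trivial
# solution, i.e. iff A has a pivot position in every column.
import sympy as sp

def columns_independent(A: sp.Matrix) -> bool:
    """Return True if A has a pivot in every column."""
    _, pivot_cols = A.rref()          # rref() also reports pivot columns
    return len(pivot_cols) == A.cols

# The identity matrix of Example 1: a pivot in every column.
print(columns_independent(sp.eye(3)))  # True
```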

Example 2. Are the vectors

 3   7   −1  v1 =  1  , v2 =  2  , v3 =  0  −2 −1 −3 in R3 linearly independent?

Answer. The vector equation x1 v1 + x2 v2 + x3 v3 = 0 is a homogeneous equation with the coefficient matrix

\begin{bmatrix} 3 & 7 & -1 \\ 1 & 2 & 0 \\ -2 & -1 & -3 \end{bmatrix}.

Note that I could have also written down the augmented matrix, but then the right-hand column would always be zero. Row reducing this matrix:

 3 7 −1   1 2 0   1 2 0  R1 ↔ R2 R2−3·R1 7→ R2  1 2 0  −−−−−→  3 7 −1  −−−−−−−−→  0 1 −1  −2 −1 −3 −2 −1 −3 R3+2·R1 7→ R2 0 3 −3  1 2 0   1 0 2  R3−3·R2 7→ R3 R1−2·R2 7→ R1 −−−−−−−−→  0 1 −1  −−−−−−−−→  0 1 −1  . 0 0 0 0 0 0

Since x3 is a free variable, x1 v1 + x2 v2 + x3 v3 = 0 must have a nontrivial solution and thus {v1, v2, v3} is linearly dependent. In particular, the corresponding linear system is

x1 + 2x3 = 0

x2 − x3 = 0

and thus the parametric vector form of the solution is

x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -2x_3 \\ x_3 \\ x_3 \end{bmatrix} = x_3 \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}.

Setting x3 = 1, we get the solution

x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}

which corresponds to the linear dependence relation

 3   7   −1   0  −2  1  +  2  +  0  =  0  . −2 −1 −3 0

Example 3. Describe all of the possible echelon forms of a 3×3 matrix A with linearly independent columns.

Answer. Since the columns of A are linearly independent, the homogeneous equation Ax = 0 has only the trivial solution and thus A must have 3 pivot positions and no free variables. Since the first column of A must have a pivot position, the first pivot position must be the (1, 1)-entry, so that the first column of A is

A = \begin{bmatrix} \blacksquare & ? & ? \\ 0 & ? & ? \\ 0 & ? & ? \end{bmatrix},

where, as in Section 1.2, we let ■ denote nonzero leading entries, ∗ denote entries with any value, and ? denote entries to be determined. Since the second column of A must also have a pivot position, the second pivot position must be the (2, 2)-entry, so that the second column of A is

A = \begin{bmatrix} \blacksquare & \ast & ? \\ 0 & \blacksquare & ? \\ 0 & 0 & ? \end{bmatrix}.

Finally, since the third column of A must also have a pivot position, the third pivot position must be the (3, 3)-entry, so that the full echelon form of A including the third column is given by

A = \begin{bmatrix} \blacksquare & \ast & \ast \\ 0 & \blacksquare & \ast \\ 0 & 0 & \blacksquare \end{bmatrix}.

Now so far during the lecture we have been determining whether a set of vectors {v1, v2, ..., vp} is linearly independent primarily by writing down a matrix A whose columns are the vectors v1, v2, ..., vp and then applying the row reduction algorithm to A to determine whether A has a pivot position in every column. Below we state several theorems about linear independence that provide a more conceptual understanding of when a set of vectors is linearly independent, and which also allow us to determine whether a set of vectors is linearly independent by inspection.

Theorem 2. A set {v1, v2,..., vp} containing the zero vector is always linearly dependent.

Reason. Suppose v1 = 0. Then the set has the linear dependence relation

1 · 0 + 0 v2 + 0 v3 + ··· + 0 vp = 0.

Theorem 3. {v1, v2, ..., vp} is linearly dependent if and only if at least one of the vectors is a linear combination of the other vectors.

Reason. To keep things simple, let us assume that p = 3 so that we have three vectors v1, v2, v3. Suppose one of the vectors is a linear combination of the others, say v3 is a linear combination of v1, v2:

v3 = c1 v1 + c2 v2

for some scalars c1, c2. Then by subtracting v3 from both sides, v1, v2, v3 satisfy the linear dependence relation

c1 v1 + c2 v2 − v3 = 0.

On the other hand, {v1, v2, v3} being linearly dependent means they satisfy the linear relation

c1 v1 + c2 v2 + c3 v3 = 0

with weights c1, c2, c3 not all zero, say c3 ≠ 0. By subtracting c1 v1 + c2 v2 from both sides and dividing by c3, we get

v_3 = -\frac{c_1}{c_3} v_1 - \frac{c_2}{c_3} v_2

so that v3 is a linear combination of v1, v2.

Notice that in the special case of two vectors {v1, v2}, the previous theorem asserts that {v1, v2} is linearly dependent if and only if one of the vectors is a scalar multiple of the other, e.g. v2 = c v1 for some scalar c. That is, {v1, v2} is linearly independent if and only if v1, v2 are not parallel, just like we claimed at the beginning of lecture. Thus linear independence is indeed a generalization of the notion of two vectors not being parallel.

Example 4. In Example 2, we had

 3   7   −1   0  −2  1  +  2  +  0  =  0  . −2 −1 −3 0

By moving the first term to the right-hand side and dividing by 2 we obtain

 3   7   −1  1 1 1 = 2 + 0   2   2   −2 −1 −3 giving us that one vector is a linear combination of the other two vectors.

Theorem 4. If a set of vectors {v1, v2, ..., vp} in R^n has more vectors than entries, i.e. p > n, then the set is linearly dependent.

Example 5 (Possible exam question). Are the vectors

\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad \begin{bmatrix} 3 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} 2 \\ -6 \end{bmatrix}

in R^2 linearly dependent?

Answer. Yes, because # vectors = 3 > 2 = # entries.

On an exam we could stop here. But let us work a bit more to see what is going on. The vector equation x1 v1 + x2 v2 + x3 v3 = 0 has the coefficient matrix

\begin{bmatrix} 1 & 3 & 2 \\ 2 & 1 & -6 \end{bmatrix}.

Row reducing this matrix:

\begin{bmatrix} 1 & 3 & 2 \\ 2 & 1 & -6 \end{bmatrix}
\xrightarrow{R_2 - 2R_1 \mapsto R_2}
\begin{bmatrix} 1 & 3 & 2 \\ 0 & -5 & -10 \end{bmatrix}
\xrightarrow{(-1/5) R_2 \mapsto R_2}
\begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 2 \end{bmatrix}
\xrightarrow{R_1 - 3R_2 \mapsto R_1}
\begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & 2 \end{bmatrix}.

Since there are more variables than equations, there cannot be a pivot position in every column. Thus there must be at least one free variable and there is a nontrivial solution. In this case, x3 is the free variable. The corresponding linear system is

x1 − 4x3 = 0

x2 + 2x3 = 0

and thus the parametric vector form of the solution is

x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 4x_3 \\ -2x_3 \\ x_3 \end{bmatrix} = x_3 \begin{bmatrix} 4 \\ -2 \\ 1 \end{bmatrix}.

Setting x3 = 1, we get the solution

x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 4 \\ -2 \\ 1 \end{bmatrix}

which corresponds to the linear dependence relation

4 \begin{bmatrix} 1 \\ 2 \end{bmatrix} - 2 \begin{bmatrix} 3 \\ 1 \end{bmatrix} + \begin{bmatrix} 2 \\ -6 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
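As before, a quick numerical check of this relation, assuming NumPy:

```python
# Numerical check of the dependence relation 4*v1 - 2*v2 + v3 = 0.
import numpy as np

v1 = np.array([1, 2])
v2 = np.array([3, 1])
v3 = np.array([2, -6])

print(4*v1 - 2*v2 + v3)  # [0 0]
```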

Example 6 (Possible exam question). Is

\left\{ \begin{bmatrix} 1 \\ 3 \\ 7 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 10 \\ 5 \\ 10 \end{bmatrix} \right\}

in R^3 linearly independent? Why or why not?

Answer. No, since the third vector is equal to 5 times the second vector.

Example 7 (Possible exam question). Is

\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 5 \\ 3 \end{bmatrix}

in R^2 linearly dependent? Why or why not?

Answer. Yes, they are linearly dependent since the second vector is the zero vector. Also because # vectors = 3 > 2 = # entries.

Example 8 (Possible exam question). Is

\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ 3 \\ 0 \end{bmatrix}, \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} \right\}

in R^3 linearly independent? Why or why not?

Answer. Yes, since the vector equation x1v1 + x2v2 + x3v3 = 0 has the coefficient matrix

 1 2 4   0 3 5  0 0 6 which is already in echelon form with a pivot position in every column.

Example 9 (Possible exam question). Is

\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ 8 \\ 0 \end{bmatrix} \right\}

in R^3 linearly independent? Why or why not?

Answer. No. There are several good reasons. One reason is that if we ignore the third row, which is all zeros, then we have three vectors lying in the x1x2-plane, and they must be linearly dependent. We will see later in Section 4 that this reasoning is correct, and certainly this type of reasoning is valid on the final exam. As another reason, the corresponding matrix

 1 0 3   0 2 8  0 0 0 which is already in echelon form with two pivot positions and a free variable x3. Yes another reason is that we can easily write the third vector as a linear combination of the first two vectors:

 1   0   3  3  0  + 4  2  =  8  . 0 0 0

Example 10 (Possible homework or exam question). Find all values of h such that the vectors

\left\{ \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ -2 \\ 2 \end{bmatrix}, \begin{bmatrix} 4 \\ h \\ 4 \end{bmatrix} \right\}

are linearly dependent.

Answer. Let's place the vectors in a matrix A:

 1 3 4  A =  −1 −2 h  . 0 2 4

Recall that the columns of A are linearly dependent if and only if A has 2 or fewer pivot positions. Finding an echelon form of A:

 1 3 4   1 3 4   1 3 4  R2+R1 7→ R2 R3−2 R2 7→ R3 A =  −1 −2 h  −−−−−−−→  0 1 h + 4  −−−−−−−−→  0 1 h + 4  . 0 2 4 0 2 4 0 0 −2h − 4

Clearly A has a pivot position in the first and second columns. Thus the third column must not have a pivot position, which will occur precisely when −2h − 4 = 0, i.e. h = −2. Therefore, the set of vectors is linearly dependent if and only if h = −2.

Alternatively, one could notice that since the first two vectors are not parallel, for the set of three vectors to be linearly dependent we must write the third vector as a linear combination of the first two vectors:

\begin{bmatrix} 4 \\ h \\ 4 \end{bmatrix} = c_1 \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} + c_2 \begin{bmatrix} 3 \\ -2 \\ 2 \end{bmatrix}

for some scalars c1, c2. By looking at the third entry, we need 4 = 0 + 2c2, so c2 = 2. Then by looking at the first entry, we need 4 = c1 + 2 · 3, so c1 = −2:

 4   1   3   h  = −2  −1  + 2  −2  . 4 0 2

Hence h = (−2) · (−1) + 2 · (−2) = −2.
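For comparison, here is a sketch of Example 10 done symbolically with SymPy (assumed available). It uses the determinant criterion, which is not covered in these notes but is equivalent for a square matrix: the columns are linearly dependent exactly when det A = 0.

```python
# Sketch: find all h making the columns of A linearly dependent by
# requiring det A = 0 (equivalent to "fewer than 3 pivot positions"
# for a square matrix -- a criterion beyond these notes).
import sympy as sp

h = sp.symbols('h')
A = sp.Matrix([[1, 3, 4],
               [-1, -2, h],
               [0, 2, 4]])
print(sp.solve(sp.Eq(A.det(), 0), h))  # [-2]
```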
