Linear Independence

Learning Goals: identify and test for linear independence of vectors.

The complementary concept to spanning is independence. Spanning means that linear combinations “fill up” a subspace. Independence means that the vectors all point in genuinely different directions.

Definition: A linear combination a1v1 + ⋯ + anvn is called trivial if all the a’s are zero. Otherwise it is nontrivial.

Definition: A set of vectors is called linearly independent if the only linear combination of them that adds to 0 is the trivial combination. If there is a nontrivial combination of the vectors that adds to 0, then the vectors are called linearly dependent.

Dependence means that there is some redundancy in the vectors. Independence means that the vectors really point in different directions.

Example: In R², the vectors

⎡1⎤   ⎡2⎤       ⎡−3⎤
⎣2⎦ , ⎣3⎦ , and ⎣−2⎦

are dependent, for

5⎡1⎤ − 4⎡2⎤ − ⎡−3⎤ = ⎡0⎤
 ⎣2⎦    ⎣3⎦   ⎣−2⎦   ⎣0⎦ .

Example: any set of vectors that includes 0 is automatically linearly dependent. For we can use the nontrivial linear combination 1⋅0 + 0v1 + ⋯ + 0vn.
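As a quick numerical check (my own addition; the notes themselves use no code, and numpy is assumed here), a few lines of Python confirm the combination above:

    import numpy as np

    v1 = np.array([1, 2])
    v2 = np.array([2, 3])
    v3 = np.array([-3, -2])

    # The nontrivial combination 5*v1 - 4*v2 - v3 should give the zero vector.
    print(5 * v1 - 4 * v2 - v3)  # prints [0 0]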

Example: the pivot columns of a matrix are linearly independent, for we have already shown that no nontrivial combination of them adds to zero.

To see if a bunch of vectors in Rᵐ are dependent or independent, we look for a nontrivial linear combination of them that adds to zero. This is equivalent to placing them as the columns of a matrix A and then trying to solve Ax = 0: the vectors are independent exactly when the only solution is the trivial one, x = 0.
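Here is one way this test could be carried out in Python (a sketch of my own, assuming numpy; it uses the rank of A as a stand-in for solving Ax = 0, since the only solution is x = 0 exactly when the rank equals the number of columns):

    import numpy as np

    def are_independent(vectors):
        """Return True if the given vectors (1-D arrays in R^m) are independent.

        Place the vectors as the columns of a matrix A; they are independent
        exactly when Ax = 0 has only the trivial solution, i.e. when the rank
        of A equals the number of columns. (matrix_rank uses a floating-point
        tolerance; for exact arithmetic one might use sympy instead.)
        """
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A) == A.shape[1]

    # The dependent vectors from the first example, and an independent pair.
    print(are_independent([np.array([1, 2]), np.array([2, 3]), np.array([-3, -2])]))  # False
    print(are_independent([np.array([1, 0]), np.array([0, 1])]))                      # True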

Corollary: any n vectors in Rᵐ are automatically dependent if n > m.

Proof: If we put the n vectors as the columns of a matrix, we get an m × n matrix with n > m, so there are more columns than rows. The system Ax = 0 then has free variables, and hence there are nontrivial null vectors, which provide nontrivial linear combinations of the columns that give zero.
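To see the corollary in action, here is a small sketch (my illustration, assuming sympy for exact arithmetic): four vectors in R³ placed as the columns of a 3 × 4 matrix must leave nontrivial null vectors, and each one exhibits a dependence among the columns.

    from sympy import Matrix

    # Four vectors in R^3 as columns: more columns than rows, so Ax = 0
    # must have nontrivial solutions.
    A = Matrix([[1, 0, 2, 1],
                [0, 1, 1, 3],
                [1, 1, 3, 4]])

    # Each basis vector of the null space gives a nontrivial combination
    # of the columns that adds to zero.
    for x in A.nullspace():
        print(x.T)        # a nontrivial null vector
        print((A * x).T)  # the zero vector, confirming dependence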

I’ve stated that linear independence means the vectors aren’t “redundant.” What this means more precisely is:

Theorem: if a nonempty set of vectors is linearly dependent, then one of them can be written as a (perhaps trivial) combination of the others.

Proof: Let a1v1 + a2v2 + ⋯ + anvn = 0 be a nontrivial linear combination that adds to 0. Without loss of generality (relabeling the vectors if necessary), assume a1 ≠ 0. Then v1 = (−a2/a1)v2 + ⋯ + (−an/a1)vn.

Note that this gives another test for linear independence. We can put the vectors as the rows of a matrix and do elimination. If we get a row of zeroes, then the vectors were linearly dependent, since we combined the rows above the zero row to get the row that became zero. (A sketch of this test in code appears at the end of the section.)

Note also how spanning and independence are really opposite concepts. Spanning aims to fill up a subspace, using as many vectors as you might need. Independence seeks to eliminate redundant vectors, reducing the number of vectors you have. It is precisely at the intersection of these two, where a set is both spanning and independent, that things get really interesting.
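Finally, here is the row-elimination test sketched in Python (again my own illustration, assuming sympy): place the dependent vectors from the first example as rows; elimination produces a zero row, and the null space of the transpose recovers the coefficients of the dependence.

    from sympy import Matrix

    # The dependent vectors from the earlier example, placed as rows.
    A = Matrix([[1, 2],
                [2, 3],
                [-3, -2]])

    R, pivots = A.rref()
    print(R)  # the last row reduces to zero, so the vectors are dependent

    # Solving a1*v1 + a2*v2 + a3*v3 = 0 directly: the null space of A^T
    # recovers the coefficients, a multiple of (5, -4, -1) from before.
    print(A.T.nullspace()[0].T)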