
Linear independence
Brian Krummel
September 7, 2019

Recall from Section 1.3 that given two vectors $v_1, v_2$ in $\mathbb{R}^2$ which are not parallel, $\operatorname{Span}\{v_1, v_2\}$ is $\mathbb{R}^2$, and we could show this graphically using a grid representing all the ways to form vectors as linear combinations of $v_1, v_2$. Similarly, given $v_1, v_2$ in $\mathbb{R}^3$ not parallel, $\operatorname{Span}\{v_1, v_2\}$ is a plane, and we can again show this graphically using a grid. Today we introduce the concept of linear independence, which generalizes the notion of two vectors not being parallel.

Definition 1. A set of vectors $\{v_1, v_2, \dots, v_p\}$ in $\mathbb{R}^n$ is linearly independent if the only solution to the vector equation
$$x_1 v_1 + x_2 v_2 + \dots + x_p v_p = 0 \tag{$\star$}$$
is the trivial solution $x_1 = x_2 = \dots = x_p = 0$.

A set of vectors $\{v_1, v_2, \dots, v_p\}$ in $\mathbb{R}^n$ is linearly dependent if the vector equation ($\star$) has a nontrivial solution $(x_1, x_2, \dots, x_p) = (c_1, c_2, \dots, c_p)$; in other words, if there exist scalar weights $c_1, c_2, \dots, c_p$, not all zero, such that
$$c_1 v_1 + c_2 v_2 + \dots + c_p v_p = 0. \tag{$\star\star$}$$
We call the equation ($\star\star$) a linear dependence relation amongst $v_1, v_2, \dots, v_p$.

Example 1. The standard coordinate vectors
$$v_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$
in $\mathbb{R}^3$ are linearly independent. The vector equation $x_1 v_1 + x_2 v_2 + x_3 v_3 = 0$ means that
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$
Of course, this can only happen if $x_1 = x_2 = x_3 = 0$, so the standard coordinate vectors are linearly independent.

Remark 1. $\{v_1, v_2, \dots, v_p\}$ in $\mathbb{R}^n$ are linearly independent if and only if there exists only the trivial solution to the homogeneous vector equation
$$x_1 v_1 + x_2 v_2 + \dots + x_p v_p = 0 \tag{1}$$
or equivalently there exists at most one solution to the vector equation
$$x_1 v_1 + x_2 v_2 + \dots + x_p v_p = b \tag{2}$$
for each $b$ in $\mathbb{R}^n$.

Theorem 1. The columns of a matrix $A$ are linearly independent if and only if the matrix equation $Ax = 0$ has only the trivial solution.

Reason. Let $A = [\, a_1 \ a_2 \ \cdots \ a_n \,]$ with columns $a_i$. Then the matrix equation $Ax = 0$ has the equivalent vector equation
$$x_1 a_1 + x_2 a_2 + \dots + x_n a_n = 0.$$
Thus $Ax = 0$ having only the trivial solution means the exact same thing as $\{a_1, a_2, \dots, a_n\}$ being linearly independent.

Another way to state Theorem 1 is that the columns of an $m \times n$ matrix $A$ are linearly independent if and only if $A$ has $n$ pivot positions, one pivot position in each column.

Example 2. Are the vectors
$$v_1 = \begin{bmatrix} 3 \\ 1 \\ -2 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 7 \\ 2 \\ -1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} -1 \\ 0 \\ -3 \end{bmatrix}$$
in $\mathbb{R}^3$ linearly independent?

Answer. The vector equation $x_1 v_1 + x_2 v_2 + x_3 v_3 = 0$ is a homogeneous equation with the coefficient matrix
$$\begin{bmatrix} 3 & 7 & -1 \\ 1 & 2 & 0 \\ -2 & -1 & -3 \end{bmatrix}.$$
Note that I could have also written down the augmented matrix, but then the right-hand column would always be zero. Row reducing this coefficient matrix:
$$\begin{bmatrix} 3 & 7 & -1 \\ 1 & 2 & 0 \\ -2 & -1 & -3 \end{bmatrix} \xrightarrow{R_1 \leftrightarrow R_2} \begin{bmatrix} 1 & 2 & 0 \\ 3 & 7 & -1 \\ -2 & -1 & -3 \end{bmatrix} \xrightarrow[R_3 + 2 R_1 \mapsto R_3]{R_2 - 3 R_1 \mapsto R_2} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & -1 \\ 0 & 3 & -3 \end{bmatrix}$$
$$\xrightarrow{R_3 - 3 R_2 \mapsto R_3} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix} \xrightarrow{R_1 - 2 R_2 \mapsto R_1} \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}.$$
Since $x_3$ is a free variable, $x_1 v_1 + x_2 v_2 + x_3 v_3 = 0$ must have a nontrivial solution and thus $\{v_1, v_2, v_3\}$ is linearly dependent. In particular, the corresponding linear system is
$$\begin{aligned} x_1 + 2x_3 &= 0 \\ x_2 - x_3 &= 0 \end{aligned}$$
and thus the parametric vector form of the solution is
$$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -2x_3 \\ x_3 \\ x_3 \end{bmatrix} = x_3 \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}.$$
Setting $x_3 = 1$, we get the solution
$$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}$$
which corresponds to the linear dependence relation
$$-2 \begin{bmatrix} 3 \\ 1 \\ -2 \end{bmatrix} + \begin{bmatrix} 7 \\ 2 \\ -1 \end{bmatrix} + \begin{bmatrix} -1 \\ 0 \\ -3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$
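The row reduction in Example 2 can also be checked by machine. Below is a minimal Python sketch (my addition, not part of the original notes) using SymPy; the matrix `A` has the vectors $v_1, v_2, v_3$ of Example 2 as its columns, and `rref()` and `nullspace()` reproduce the pivot count and the dependence relation found above.

```python
from sympy import Matrix

# Columns are v1, v2, v3 from Example 2.
A = Matrix([[ 3,  7, -1],
            [ 1,  2,  0],
            [-2, -1, -3]])

# rref() returns the reduced echelon form together with the pivot columns.
R, pivots = A.rref()
print(R)       # Matrix([[1, 0, 2], [0, 1, -1], [0, 0, 0]])
print(pivots)  # (0, 1): only two pivot columns, so x3 is free

# A basis for the solution set of Ax = 0; any nonzero vector here gives
# a nontrivial solution, i.e. a linear dependence relation.
print(A.nullspace())  # [Matrix([[-2], [1], [1]])]
```

Since there are only two pivot columns rather than three, Theorem 1 confirms by machine that the columns are linearly dependent.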
Example 3. Describe all of the possible echelon forms of a $3 \times 3$ matrix $A$ with linearly independent columns.

Answer. Since the columns of $A$ are linearly independent, the homogeneous equation $Ax = 0$ has only the trivial solution and thus $A$ must have 3 pivot positions and no free variables. Since the first column of $A$ must have a pivot position, the first pivot position of $A$ must be the $(1,1)$-entry, so that the first column of $A$ is
$$A = \begin{bmatrix} \blacksquare & ? & ? \\ 0 & ? & ? \\ 0 & ? & ? \end{bmatrix},$$
where like in Section 1.2 we let $\blacksquare$ denote nonzero leading entries, $*$ denote entries with any value, and $?$ denote entries yet to be determined. Since the second column of $A$ must also have a pivot position, the second pivot position of $A$ must be the $(2,2)$-entry, so that the second column of $A$ is
$$A = \begin{bmatrix} \blacksquare & * & ? \\ 0 & \blacksquare & ? \\ 0 & 0 & ? \end{bmatrix}.$$
Finally, since the third column of $A$ must also have a pivot position, the third pivot position of $A$ must be the $(3,3)$-entry, so that the full echelon form of $A$ including the third column is given by
$$A = \begin{bmatrix} \blacksquare & * & * \\ 0 & \blacksquare & * \\ 0 & 0 & \blacksquare \end{bmatrix}.$$

So far during the lecture we have been determining whether a set of vectors $\{v_1, v_2, \dots, v_p\}$ is linearly independent primarily by writing down a matrix $A$ whose columns are the vectors $v_1, v_2, \dots, v_p$ and then applying the row reduction algorithm to $A$ to determine whether $A$ has a pivot position in every column. Below we state several theorems about linear independence that provide a more conceptual understanding of when a set of vectors is linearly independent, and which also allow us to determine whether a set of vectors is linearly independent by inspection.

Theorem 2. A set $\{v_1, v_2, \dots, v_p\}$ containing the zero vector is always linearly dependent.

Reason. Suppose $v_1 = 0$. Then the set has the linear dependence relation
$$1 \cdot 0 + 0 \, v_2 + 0 \, v_3 + \dots + 0 \, v_p = 0.$$

Theorem 3. $\{v_1, v_2, \dots, v_p\}$ is linearly dependent if and only if at least one of the vectors is a linear combination of the other vectors.

Reason. To keep things simple, let us assume that $p = 3$ so that we have three vectors $v_1, v_2, v_3$. Suppose one of the vectors is a linear combination of the others, say $v_3$ is a linear combination of $v_1, v_2$:
$$v_3 = c_1 v_1 + c_2 v_2$$
for some scalars $c_1, c_2$. Then by subtracting $v_3$ from both sides, $v_1, v_2, v_3$ satisfy the linear dependence relation
$$c_1 v_1 + c_2 v_2 - v_3 = 0.$$
On the other hand, $\{v_1, v_2, v_3\}$ being linearly dependent means they satisfy the linear relation $c_1 v_1 + c_2 v_2 + c_3 v_3 = 0$ with weights $c_1, c_2, c_3$ not all zero, say $c_3 \neq 0$. By subtracting and dividing by $c_3$, we get
$$v_3 = -\frac{c_1}{c_3} \, v_1 - \frac{c_2}{c_3} \, v_2$$
so that $v_3$ is a linear combination of $v_1, v_2$.

Notice that in the special case of two vectors $\{v_1, v_2\}$, the previous theorem asserts that $\{v_1, v_2\}$ is linearly dependent if and only if one of the vectors is a scalar multiple of the other, e.g. $v_2 = c v_1$ for some scalar $c$. That is, $\{v_1, v_2\}$ is linearly independent if and only if $v_1, v_2$ are not parallel, just like we claimed at the beginning of lecture. Thus linear independence is indeed a generalization of the notion of two vectors not being parallel.

Example 4. In Example 2, we had
$$-2 \begin{bmatrix} 3 \\ 1 \\ -2 \end{bmatrix} + \begin{bmatrix} 7 \\ 2 \\ -1 \end{bmatrix} + \begin{bmatrix} -1 \\ 0 \\ -3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$
By moving the first term to the right-hand side and dividing by 2 we obtain
$$\begin{bmatrix} 3 \\ 1 \\ -2 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 7 \\ 2 \\ -1 \end{bmatrix} + \frac{1}{2} \begin{bmatrix} -1 \\ 0 \\ -3 \end{bmatrix},$$
giving us that one vector is a linear combination of the other two vectors.

Theorem 4. If a set of vectors $\{v_1, v_2, \dots, v_p\}$ in $\mathbb{R}^n$ has more vectors than entries, i.e. $p > n$, then the set is linearly dependent.
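Theorem 3 also suggests a computational recipe: to write one vector as a combination of the others, solve the corresponding linear system for the weights. Here is a short Python sketch (again my addition, not from the notes) using SymPy's `linsolve` to recover the weights $\frac{1}{2}, \frac{1}{2}$ from Example 4 by solving $c_1 v_2 + c_2 v_3 = v_1$.

```python
from sympy import Matrix, linsolve, symbols

c1, c2 = symbols('c1 c2')
v1 = Matrix([3, 1, -2])
v2 = Matrix([7, 2, -1])
v3 = Matrix([-1, 0, -3])

# Solve c1*v2 + c2*v3 = v1 via the augmented matrix [v2 v3 | v1].
aug = v2.row_join(v3).row_join(v1)
print(linsolve(aug, [c1, c2]))  # {(1/2, 1/2)}
```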
Example 5 (Possible exam question). Are the vectors
$$\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad \begin{bmatrix} 3 \\ 1 \end{bmatrix}, \quad \begin{bmatrix} 2 \\ -6 \end{bmatrix}$$
in $\mathbb{R}^2$ linearly dependent?

Answer. Yes, because $\#\,\text{vectors} = 3 > 2 = \#\,\text{entries}$. On an exam we could stop here. But let us work a bit more to see what is going on. The vector equation $x_1 v_1 + x_2 v_2 + x_3 v_3 = 0$ has the coefficient matrix
$$\begin{bmatrix} 1 & 3 & 2 \\ 2 & 1 & -6 \end{bmatrix}.$$
Row reducing this matrix:
$$\begin{bmatrix} 1 & 3 & 2 \\ 2 & 1 & -6 \end{bmatrix} \xrightarrow{R_2 - 2 R_1 \mapsto R_2} \begin{bmatrix} 1 & 3 & 2 \\ 0 & -5 & -10 \end{bmatrix} \xrightarrow{(-1/5) \cdot R_2 \mapsto R_2} \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 2 \end{bmatrix} \xrightarrow{R_1 - 3 R_2 \mapsto R_1} \begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & 2 \end{bmatrix}.$$
Since there are more variables than equations, there cannot be a pivot position in every column. Thus there must be at least one free variable and there is a nontrivial solution. In this case, $x_3$ is the free variable. The corresponding linear system is
$$\begin{aligned} x_1 - 4x_3 &= 0 \\ x_2 + 2x_3 &= 0 \end{aligned}$$
and thus the parametric vector form of the solution is
$$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 4x_3 \\ -2x_3 \\ x_3 \end{bmatrix} = x_3 \begin{bmatrix} 4 \\ -2 \\ 1 \end{bmatrix}.$$
Setting $x_3 = 1$, we get the solution
$$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 4 \\ -2 \\ 1 \end{bmatrix}$$
which corresponds to the linear dependence relation
$$4 \begin{bmatrix} 1 \\ 2 \end{bmatrix} - 2 \begin{bmatrix} 3 \\ 1 \end{bmatrix} + \begin{bmatrix} 2 \\ -6 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
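The pivot test of Theorem 1 is easy to package as a reusable check. The helper below is an illustrative sketch (my addition; the function name `is_linearly_independent` is made up for this note), applied to the three vectors of Example 5, where Theorem 4 already guarantees dependence.

```python
from sympy import Matrix

def is_linearly_independent(vectors):
    """Theorem 1 as a pivot test: the vectors are independent exactly
    when the matrix with them as columns has a pivot in every column."""
    A = Matrix.hstack(*vectors)
    _, pivots = A.rref()
    return len(pivots) == A.cols

vs = [Matrix([1, 2]), Matrix([3, 1]), Matrix([2, -6])]
print(is_linearly_independent(vs))     # False: 3 vectors in R^2 (Theorem 4)
print(Matrix.hstack(*vs).nullspace())  # [Matrix([[4], [-2], [1]])]
```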
Example 6 (Possible exam question).