Row Equivalence of Matrices
Row Equivalence of matrices March 1, 2007 0.1 Row equivalence Let F be a field and let m and n be positive integers. Two m by n matrices are said to be row equivalent if there is an invertible matrix S such that B = SA. (Check that this is indeed an equivalence relation.) The textbook shows that any two row equivalent matrices have the same null space. In fact the converse is also true, so that we have the following theorem: Theorem: If A and B are two m by n matrices, then the following conditions are equivalent: 1. There exists an invertible matrix S such that B = SA 2. The matrices A and B have the same nullspace. This theorem can be translated into a statement about linear transfor- mations by considering the linear transformations LA, LB, and LS associated with the matrices above. It seems more revealing to treat this more abstract looking version. Theorem Let V and W be finite dimensional vector spaces, and let T and T 0 be linear transformations from V to W . Then the following are equivalent: 1. There exists an isomorphism S: T → T 0 such that T 0 = S ◦ T . 2. The transformations T and T 0 have the same kernel. Proof: The proof that (1) implies (2) is relatively easy (and is in the book) and we omit it. Let us prove that (2) implies (1). Let K := Ker(T ) = Ker(T 0), a linear subspace of V . By the rank-nulity theorem, the dimension k of K is n−r, where r is the rank of T and hence also the rank of T 0. Choose 1 a basis for K, and prolong it to a basis for V . Now choose an ordering (v1, v2, ··· vn) for this basis so that the last k vectors, that is, the vectors (vr+1, . vn), are the chosen basis for K. For 1 ≤ i ≤ r, let wi := T (vi) and 0 0 let wi := T (vi). 0 0 Lemma: Each of the sequences (w1, . wr) is (w1, . wr) is linearly in- dependent. This fact was proved during the proof of the rank nullity theorem, but let us repeat the proof. We just write it for the sequence (w1, . , wr). Suppose that a1w1 + ··· arwr = 0. 
Then T(a_1 v_1 + ··· + a_r v_r) = a_1 T(v_1) + ··· + a_r T(v_r) = a_1 w_1 + ··· + a_r w_r = 0. This means that a_1 v_1 + ··· + a_r v_r ∈ Ker(T) = K. Since (v_{r+1}, ..., v_n) spans K, we can write a_1 v_1 + ··· + a_r v_r = b_{r+1} v_{r+1} + ··· + b_n v_n for a suitable choice of b_j's. Moving everything to one side gives a linear relation among (v_1, ..., v_n); since this sequence is linearly independent, all the a_i's and b_j's are zero. This completes the proof of the lemma.

Now extend (w_1, ..., w_r) to a basis (w_1, ..., w_m) for W, and extend (w'_1, ..., w'_r) to a basis (w'_1, ..., w'_m) for W. By Theorem 2.6 of the text, there is a unique linear transformation S such that S(w_i) = w'_i for all i. In fact, S is invertible because (w'_1, ..., w'_m) is also a basis for W. Thus to prove the theorem it will be enough to show that T' = S ∘ T. Since both T' and S ∘ T are linear and (v_1, ..., v_n) is a basis for V, it will suffice to prove that T'(v_i) = (S ∘ T)(v_i) for each i. If 1 ≤ i ≤ r, we find T'(v_i) = w'_i = S(w_i) = S(T(v_i)) = (S ∘ T)(v_i), as required. If i > r, then v_i ∈ K = Ker(T) = Ker(T'), and (S ∘ T)(v_i) = S(T(v_i)) = S(0) = 0 = T'(v_i). This completes the proof. □
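The matrix form of the theorem can be checked on a concrete example. The sketch below is illustrative and not part of the text: the particular matrices A and S, and the helper functions rref and matmul, are chosen here for demonstration. It uses exact rational arithmetic to compute reduced row echelon forms, and verifies that A and B = SA (with S invertible) have the same RREF, and hence the same null space.

```python
# Illustrative check (not from the text): row-equivalent matrices share a RREF,
# and therefore the same null space. Exact arithmetic via fractions.Fraction.
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M (a list of rows)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    lead = 0
    for r in range(rows):
        if lead >= cols:
            break
        i = r
        while M[i][lead] == 0:       # find a pivot in the current column
            i += 1
            if i == rows:
                i = r
                lead += 1
                if lead == cols:
                    return M
        M[i], M[r] = M[r], M[i]      # swap the pivot row into place
        piv = M[r][lead]
        M[r] = [x / piv for x in M[r]]   # scale pivot row so pivot = 1
        for j in range(rows):            # clear the rest of the column
            if j != r:
                f = M[j][lead]
                M[j] = [a - f * b for a, b in zip(M[j], M[r])]
        lead += 1
    return M

def matmul(S, A):
    """Multiply matrices given as lists of rows."""
    return [[sum(S[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(S))]

# Example data: A is 3x4 with rank 2; S is invertible (det(S) = 1).
A = [[1, 2, 0, 1],
     [0, 1, 1, 0],
     [1, 3, 1, 1]]
S = [[1, 1, 0],
     [0, 1, 0],
     [2, 0, 1]]
B = matmul(S, A)

# Direction (1) => (2) of the theorem: B = SA implies equal RREFs,
# and equal RREFs give equal null spaces.
assert rref(A) == rref(B)
```

Since the RREF of a matrix is unique within its row equivalence class, equality of RREFs is exactly the computational content of condition (1), and the null space can be read off from the RREF's free columns.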