Linear Algebra (VI)
Yijia Chen

1. Review

The rank of $\{0\}$.

Lemma 1.1. (i) $\emptyset$ is maximally linearly independent in $\{0\}$. (ii) $\operatorname{rank}(\{0\}) = 0$.

Lemma 1.2. Let $A, B \subseteq V$ be two finite sets of vectors in $V$, possibly empty. If $A$ is linearly independent and can be represented by $B$, then $|A| \le |B|$.

Proposition 1.3. Let $S \subseteq V$ be a finite set of vectors. Then $S$ must contain a maximally linearly independent subset.

Basis and Dimension. We fix a vector space $V$.

Theorem 1.4. Let $S \subseteq V$ and let $A, B \subseteq S$ both be maximally linearly independent in $S$. Then $|A| = |B|$.

Definition 1.5. Let $e_1, \dots, e_n \in V$. Assume that
– $e_1, \dots, e_n$ are linearly independent,
– and every $v \in V$ can be represented by $e_1, \dots, e_n$; equivalently, $\{e_1, \dots, e_n\}$ is maximally linearly independent in $V$.
Then $\{e_1, \dots, e_n\}$ is a basis of $V$. Note that $n = 0$ is allowed, and in that case it is easy to see that $V = \{0\}$.

By Theorem 1.4:

Lemma 1.6. If $\{e_1, \dots, e_n\}$ and $\{e'_1, \dots, e'_m\}$ are both bases of $V$ with pairwise distinct $e_i$'s and with pairwise distinct $e'_i$'s, then $n = m$.

Definition 1.7. Let $\{e_1, \dots, e_n\}$ be a basis of $V$ with pairwise distinct $e_i$'s. Then the dimension of $V$, denoted by $\dim(V)$, is $n$. Equivalently, if $\operatorname{rank}(V)$ is defined, then $\dim(V) := \operatorname{rank}(V)$.

Corollary 1.8. Assume $\dim(V) = n$ and $u_1, \dots, u_m \in V$ with $m > n$. Then $u_1, \dots, u_m$ are linearly dependent.

Proof: By assumption let $\{e_1, \dots, e_n\}$ be a basis of $V$. If $u_1, \dots, u_m$ were linearly independent, then, since they can be represented by $\{e_1, \dots, e_n\}$, Lemma 1.2 would give $m \le n$, contradicting $m > n$. □

Theorem 1.9. Assume $\dim(V) = n$ and let $u_1, \dots, u_n \in V$.¹
(1) If $u_1, \dots, u_n$ are linearly independent, then $\{u_1, \dots, u_n\}$ is a basis.
(2) If every $v \in V$ can be represented by $u_1, \dots, u_n$, then $\{u_1, \dots, u_n\}$ is a basis.

Proof: (1) We need to show that every $v \in V$ can be represented by $u_1, \dots, u_n$. By Corollary 1.8, the vectors $v, u_1, \dots, u_n$ are linearly dependent. Thus there exist $a, a_1, \dots, a_n \in \mathbb{R}$ such that
$$a v + \sum_{i \in [n]} a_i u_i = 0,$$
and at least one of $a, a_1, \dots, a_n$ is not $0$. Since $u_1, \dots, u_n$ are linearly independent, we must have $a \ne 0$. Therefore,
$$v = \sum_{i \in [n]} \left(-\frac{a_i}{a}\right) \cdot u_i.$$

(2) Since $\{u_1, \dots, u_n\}$ is a finite set of vectors, it must contain a maximally linearly independent subset by Proposition 1.3. Without loss of generality, let it be $\{u_1, \dots, u_r\}$ with pairwise distinct $u_1, \dots, u_r$. Then it is easy to see that $\{u_1, \dots, u_r\}$ is a basis of $V$. Thus $r = \dim(V) = n$. □

¹ We do not assume beforehand that $u_1, \dots, u_n$ are pairwise distinct, although under the conditions in the theorem they have to be, i.e., $u_i \ne u_j$ for every $1 \le i < j \le n$.

2. Steinitz Exchange Lemma

Theorem 2.1. Assume that $\dim(V) = n$ and that $v_1, \dots, v_m \in V$ with $1 \le m \le n$ are linearly independent. Furthermore, let $\{e_1, \dots, e_n\}$ be a basis of $V$. Then for some $1 \le i_1 < i_2 < \cdots < i_{n-m} \le n$,
$$v_1, \dots, v_m, e_{i_1}, \dots, e_{i_{n-m}}$$
is a basis of $V$.

Proof: We prove the claim by induction on $m$, starting with $m = 1$. Since $\{e_1, \dots, e_n\}$ is a basis, $v_1$ can be represented by $e_1, \dots, e_n$. Thus there exist $a_1, \dots, a_n \in \mathbb{R}$ such that
$$v_1 = \sum_{i \in [n]} a_i e_i.$$
As $v_1 \ne 0$ (otherwise $v_1$ would be linearly dependent), there is an $i \in [n]$ with $a_i \ne 0$. It follows that
$$e_i = \frac{1}{a_i} \cdot v_1 + \sum_{j \in [i-1]} \left(-\frac{a_j}{a_i}\right) \cdot e_j + \sum_{i < j \le n} \left(-\frac{a_j}{a_i}\right) \cdot e_j.$$
In other words, $e_i$ can be represented by $\{v_1, e_1, \dots, e_{i-1}, e_{i+1}, \dots, e_n\}$. Thus $\{e_1, \dots, e_n\}$ can be represented by $\{v_1, e_1, \dots, e_{i-1}, e_{i+1}, \dots, e_n\}$, and hence so can every vector in $V$, since $\{e_1, \dots, e_n\}$ is a basis. By Theorem 1.9 (2), $\{v_1, e_1, \dots, e_{i-1}, e_{i+1}, \dots, e_n\}$ is a basis.

Now assume that $m > 1$ and that $v_1, \dots, v_m \in V$ are linearly independent. Of course, $v_1, \dots, v_{m-1}$ are linearly independent too. By the induction hypothesis for $m - 1$, there exist $1 \le i_1 < i_2 < \cdots < i_{n-m+1} \le n$ such that
$$v_1, \dots, v_{m-1}, e_{i_1}, \dots, e_{i_{n-m+1}}$$
is a basis of $V$. In particular, $v_m$ can be represented by this basis, i.e., there exist $a_1, \dots, a_{m-1}, c_1, \dots, c_{n-m+1} \in \mathbb{R}$ such that
$$v_m = \sum_{i \in [m-1]} a_i v_i + \sum_{j \in [n-m+1]} c_j e_{i_j}.$$
Observe that $\sum_{j \in [n-m+1]} c_j e_{i_j} \ne 0$, since otherwise $v_1, \dots, v_m$ would be linearly dependent. Thus for some $j \in [n-m+1]$ we have $c_j \ne 0$, and thereby
$$e_{i_j} = \sum_{i \in [m-1]} \left(-\frac{a_i}{c_j}\right) \cdot v_i + \frac{1}{c_j} \cdot v_m + \sum_{\ell \in [n-m+1] \setminus \{j\}} \left(-\frac{c_\ell}{c_j}\right) \cdot e_{i_\ell}.$$
Then it is easy to see that
$$v_1, \dots, v_m, e_{i_1}, \dots, e_{i_{j-1}}, e_{i_{j+1}}, \dots, e_{i_{n-m+1}}$$
is a basis of $V$. □

Remark 2.2. (i) The above proof is in fact essentially the same as the proof of Lemma 1.2. (ii) Again, we can drop the requirement $1 \le m$; in particular, the case $m = 0$ holds trivially.
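The proof of Theorem 2.1 can be read as a procedure: scan the basis vectors and keep each $e_i$ that is not already represented by the vectors collected so far. The following is a minimal numerical sketch of that reading, assuming numpy and identifying vectors with their coordinate rows in $\mathbb{R}^n$; the helper name `extend_to_basis` is ours, and the rank test stands in for the linear-independence checks in the proof.

```python
import numpy as np

def extend_to_basis(vs: np.ndarray, es: np.ndarray) -> np.ndarray:
    """Extend the linearly independent rows of `vs` to a basis of R^n,
    exchanging in suitable rows of the basis `es` (Theorem 2.1)."""
    n = es.shape[1]
    rows = [v for v in vs]
    for e in es:
        if len(rows) == n:
            break
        candidate = np.vstack(rows + [e])
        # Keep e exactly when it is independent of what we have so far,
        # i.e., when adding it raises the rank by one.
        if np.linalg.matrix_rank(candidate) == len(rows) + 1:
            rows.append(e)
    return np.vstack(rows)

# v1, v2 are linearly independent in R^3; es is the standard basis.
vs = np.array([[1.0, 1.0, 0.0],
               [0.0, 1.0, 1.0]])
es = np.eye(3)

basis = extend_to_basis(vs, es)
print(basis)
# By Theorem 1.9 (1), full rank certifies that the n rows form a basis.
assert np.linalg.matrix_rank(basis) == 3
```

Here the scan keeps $e_1$ and then stops, so the output basis is $v_1, v_2, e_1$, matching the shape of the conclusion of Theorem 2.1.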
3. Back to the Textbook

Matrices and matrix operations. Recall that an $m \times n$ matrix has the form
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} = (a_{ij})_{m \times n},$$
where each $a_{ij} \in \mathbb{R}$. The transpose matrix of $A$, denoted by $A^T$, is the $n \times m$ matrix
$$\begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}.$$

Definition 3.1 (Matrix Addition). Let $A = (a_{ij})_{m \times n}$ and $B = (b_{ij})_{m \times n}$ be two $m \times n$ matrices. Then
$$A + B := (a_{ij} + b_{ij})_{m \times n} = \begin{bmatrix} a_{11} + b_{11} & \cdots & a_{1n} + b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & \cdots & a_{mn} + b_{mn} \end{bmatrix}.$$

Definition 3.2. The $m \times n$ zero matrix is
$$0_{m \times n} = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{bmatrix}.$$
When $m, n$ are clear from the context, we write $0$ instead of $0_{m \times n}$.

Lemma 3.3. (i) $A + B = B + A$. (ii) $(A + B) + C = A + (B + C)$. (iii) $(A + B)^T = A^T + B^T$.

Definition 3.4 (Scalar Multiplication). Let $A = (a_{ij})_{m \times n}$ be an $m \times n$ matrix and $k \in \mathbb{R}$. Then
$$k \cdot A := (k \cdot a_{ij})_{m \times n} = \begin{bmatrix} k \cdot a_{11} & \cdots & k \cdot a_{1n} \\ \vdots & \ddots & \vdots \\ k \cdot a_{m1} & \cdots & k \cdot a_{mn} \end{bmatrix}.$$

Lemma 3.5. Let $A$ and $B$ be two $m \times n$ matrices and $k, \ell \in \mathbb{R}$. (i) $k \cdot (\ell \cdot A) = (k \cdot \ell) \cdot A$. (ii) $(k + \ell) \cdot A = k \cdot A + \ell \cdot A$. (iii) $k \cdot (A + B) = k \cdot A + k \cdot B$. (iv) $(k \cdot A)^T = k \cdot A^T$.

Definition 3.6. For every $A = (a_{ij})_{m \times n}$ we define $-A := -1 \cdot A = (-a_{ij})_{m \times n}$.

Lemma 3.7. $A + (-A) = 0$.

3.1. Matrix multiplication.

Definition 3.8. Let $m, n, r \ge 1$, let $A$ be an $m \times r$ matrix, and let $B$ be an $r \times n$ matrix. Then $C := AB = (c_{ij})_{m \times n}$ is an $m \times n$ matrix where each
$$c_{ij} := \sum_{\ell \in [r]} a_{i\ell} b_{\ell j}.$$

Although matrix multiplication seems arbitrary at first sight, we have seen that it can be understood in the context of substituting the variables in one system of linear equations by another system of linear equations. The three matrices $A$, $B$, and $C$ correspond to the coefficients of the three systems.

Remark 3.9. In contrast to most multiplications we have encountered before, matrix multiplication is not commutative.
– $AB$ and $BA$ can have different sizes, or one of them may not even be defined. For instance, if $A$ is a $1 \times 3$ matrix and $B$ is a $3 \times 1$ matrix, then $AB$ is a $1 \times 1$ matrix, while $BA$ is $3 \times 3$.
– Even if they have the same size, $AB$ and $BA$ can be different matrices. Let
$$A := \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \quad\text{and}\quad B := \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}. \quad\text{Then}\quad AB = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \quad\text{and}\quad BA = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$$
This is hardly a surprise when we view $AB$ as substituting the variables $x_i$ by the $y_j$ in one system of linear equations, and $BA$ the other way around.

Definition 3.10. For $n \ge 1$ we define
$$I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$
as an $n \times n$ diagonal matrix. $I_n$ is called the ($n \times n$) identity matrix.

Lemma 3.11. (i) $(AB)C = A(BC)$. (ii) $C(A + B) = CA + CB$ and $(A + B)C = AC + BC$. (iii) $k \cdot (AB) = (k \cdot A)B = A(k \cdot B)$. (iv) Assume that $A$ is an $m \times n$ matrix. Then $I_m A = A I_n = A$.
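The size and commutativity behavior of Remark 3.9 is easy to check numerically. Below is a minimal sketch, assuming numpy; the particular matrices are chosen here only for illustration, and `matmul` is our own spelled-out version of Definition 3.8, checked against numpy's built-in product.

```python
import numpy as np

# A is 1x3 and B is 3x1: AB is 1x1 while BA is 3x3 (Remark 3.9).
A = np.array([[1.0, 2.0, 3.0]])        # shape (1, 3)
B = np.array([[4.0], [5.0], [6.0]])    # shape (3, 1)
print((A @ B).shape)   # (1, 1)
print((B @ A).shape)   # (3, 3)

# Even square matrices of the same size need not commute.
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])
D = np.array([[0.0, 1.0],
              [0.0, 1.0]])
print(C @ D)           # [[0. 1.] [0. 0.]]
print(D @ C)           # [[0. 0.] [0. 0.]]

def matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Definition 3.8 entry by entry: c_ij = sum over l of a_il * b_lj."""
    m, r = A.shape
    r2, n = B.shape
    assert r == r2, "inner dimensions must agree"
    C = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            C[i, j] = sum(A[i, l] * B[l, j] for l in range(r))
    return C

assert np.allclose(matmul(C, D), C @ D)
```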
4. Block Matrices

For mostly computational reasons, we sometimes need to partition a matrix into several submatrices, or more precisely, block matrices. The following is an example:
$$A = \begin{bmatrix} 1 & 2 & -1 & 0 \\ 2 & 5 & 0 & -2 \\ 3 & 1 & -1 & 3 \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},$$
where
$$A_{11} = \begin{bmatrix} 1 & 2 & -1 \\ 2 & 5 & 0 \end{bmatrix}, \quad A_{12} = \begin{bmatrix} 0 \\ -2 \end{bmatrix}, \quad A_{21} = \begin{bmatrix} 3 & 1 & -1 \end{bmatrix}, \quad A_{22} = \begin{bmatrix} 3 \end{bmatrix}.$$

In general, for some $p, q \in \mathbb{N}$ with $1 \le p \le m$ and $1 \le q \le n$, we break the $m$ rows of $A$ into $p$ blocks of $m_1, \dots, m_p$ rows, and similarly break the $n$ columns of $A$ into $q$ blocks of $n_1, \dots, n_q$ columns:
$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} & A_{p2} & \cdots & A_{pq} \end{bmatrix},$$
where each block $A_{ij}$ is an $m_i \times n_j$ matrix.
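A minimal numpy sketch of the partition above: the slicing indices mirror the $2 + 1$ row split and $3 + 1$ column split of the example, and `np.block` reassembles the blocks into the original matrix.

```python
import numpy as np

A = np.array([[1, 2, -1,  0],
              [2, 5,  0, -2],
              [3, 1, -1,  3]])

# Partition the rows as 2 + 1 and the columns as 3 + 1,
# matching the example in the text.
A11 = A[:2, :3]   # 2x3 block
A12 = A[:2, 3:]   # 2x1 block
A21 = A[2:, :3]   # 1x3 block
A22 = A[2:, 3:]   # 1x1 block

# Reassembling the blocks recovers the original matrix.
assert np.array_equal(np.block([[A11, A12], [A21, A22]]), A)
```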