
Row Space and Column Space

• If A is an m×n matrix, then the subspace of Rn spanned by the row vectors of A is called the row space of A, and the subspace of Rm spanned by the column vectors is called the column space of A.
• The solution space of the homogeneous system of equations Ax = 0, which is a subspace of Rn, is called the null space of A.
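As a quick computational companion to these definitions (not part of the original slides), the sketch below uses SymPy on a small sample matrix of my own choosing; rowspace, columnspace, and nullspace are standard SymPy Matrix methods.

```python
# A small illustration of row space, column space, and null space
# (sample matrix chosen for this sketch; not from the slides).
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])          # a 2x3 matrix: row space lives in R^3, column space in R^2

print(A.rowspace())              # basis for the row space (nonzero rows of an echelon form)
print(A.columnspace())           # basis for the column space (actual columns of A)
print(A.nullspace())             # basis for the null space (solutions of Ax = 0)
```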

Remarks

• In this section we will be concerned with two questions:
– What relationships exist between the solutions of a linear system Ax = b and the row space, column space, and null space of A?
– What relationships exist among the row space, column space, and null space of a matrix?

Remarks

• It follows from Formula (10) of Section 1.3 that if x is an n×1 column vector, then the product Ax can be expressed as a linear combination of the column vectors of A with coefficients from x.

• We conclude that Ax = b is consistent if and only if b is expressible as a linear combination of the column vectors of A or, equivalently, if and only if b is in the column space of A.

Theorem 4.7.1

• A system of linear equations Ax = b is consistent if and only if b is in the column space of A.

Example

• Let Ax = b be the linear system
$$\begin{bmatrix} -1 & 3 & 2 \\ 1 & 2 & -3 \\ 2 & 1 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ -9 \\ -3 \end{bmatrix}$$
Show that b is in the column space of A, and express b as a linear combination of the column vectors of A.
• Solution:
– Solving the system by Gaussian elimination yields

x1 = 2, x2 = -1, x3 = 3
– Since the system is consistent, b is in the column space of A.
– Moreover, it follows that
$$2\begin{bmatrix} -1 \\ 1 \\ 2 \end{bmatrix} - \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} + 3\begin{bmatrix} 2 \\ -3 \\ -2 \end{bmatrix} = \begin{bmatrix} 1 \\ -9 \\ -3 \end{bmatrix}$$
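A minimal SymPy check of this example (added here for illustration; the slide only shows the hand computation):

```python
# Verify that b lies in the column space of A by solving Ax = b.
from sympy import Matrix, linsolve, symbols

A = Matrix([[-1, 3,  2],
            [ 1, 2, -3],
            [ 2, 1, -2]])
b = Matrix([1, -9, -3])

x1, x2, x3 = symbols('x1 x2 x3')
print(linsolve((A, b), x1, x2, x3))                # {(2, -1, 3)} -> the system is consistent

# b as the corresponding linear combination of the column vectors of A
print(2*A.col(0) - 1*A.col(1) + 3*A.col(2) == b)   # True
```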

Theorem 4.7.2

• General and Particular Solutions

– If x0 denotes any single solution of a consistent linear system Ax = b, and if v1, v2, …, vk form a basis for the null space of A (that is, the solution space of the homogeneous system Ax = 0), then every solution of Ax = b can be expressed in the form

x = x0 + c1v1 + c2v2 + ∙ ∙ ∙ + ckvk
Conversely, for all choices of scalars c1, c2, …, ck, the vector x in this formula is a solution of Ax = b.

Proof of Theorem 4.7.2

• Assume that x0 is any fixed solution of Ax = b and that x is an arbitrary solution. Then Ax0 = b and Ax = b.
• Subtracting these equations yields

Ax - Ax0 = 0 or A(x - x0) = 0, which shows that x - x0 is a solution of the homogeneous system Ax = 0.

• Since v1, v2, …, vk is a basis for the solution space of this system, we can express x‐x0 as a linear combination of these vectors, say x‐x0 = c1v1+c2v2+…+ckvk. Thus, x=x0+c1v1+c2v2+…+ckvk.

Proof of Theorem 4.7.2

• Conversely, for all choices of the scalars c1, c2, …, ck, we have

Ax = A(x0 + c1v1 + c2v2 + … + ckvk) = Ax0 + c1(Av1) + c2(Av2) + … + ck(Avk)
• But x0 is a solution of the nonhomogeneous system, and v1, v2, …, vk are solutions of the homogeneous system, so the last equation implies that
Ax = b + 0 + 0 + … + 0 = b,
which shows that x is a solution of Ax = b.

Remarks

• The vector x0 is called a particular solution of Ax = b.

• The expression x0 + c1v1 + ∙ ∙ ∙ + ckvk is called the general solution of Ax = b, and the expression c1v1 + ∙ ∙ ∙ + ckvk is called the general solution of Ax = 0.
• The general solution of Ax = b is the sum of any particular solution of Ax = b and the general solution of Ax = 0.

Example

• The solution to the nonhomogeneous system

x1 + 3x2 - 2x3 + 2x5 = 0
2x1 + 6x2 - 5x3 - 2x4 + 4x5 - 3x6 = -1
5x3 + 10x4 + 15x6 = 5
2x1 + 6x2 + 8x4 + 4x5 + 18x6 = 6

is

x1 = -3r - 4s - 2t, x2 = r, x3 = -2s, x4 = s, x5 = t, x6 = 1/3

• The result can be written in vector form as

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix} =
\begin{bmatrix} -3r - 4s - 2t \\ r \\ -2s \\ s \\ t \\ 1/3 \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1/3 \end{bmatrix}
+ r\begin{bmatrix} -3 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
+ s\begin{bmatrix} -4 \\ 0 \\ -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}
+ t\begin{bmatrix} -2 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}$$

which is the general solution. Here the constant vector on the right is denoted x0 and the sum of the remaining terms is denoted x.

• The vector x0 is a particular solution of the nonhomogeneous system, and the linear combination x is the general solution of the homogeneous system.
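Assuming the system has been transcribed correctly above, the following SymPy sketch (added for illustration) confirms that x0 is a particular solution and that the homogeneous solutions come from the null space of the coefficient matrix:

```python
# General solution of Ax = b as (particular solution) + (general solution of Ax = 0).
from sympy import Matrix, Rational

A = Matrix([[1, 3, -2,  0, 2,  0],
            [2, 6, -5, -2, 4, -3],
            [0, 0,  5, 10, 0, 15],
            [2, 6,  0,  8, 4, 18]])
b = Matrix([0, -1, 5, 6])

x0 = Matrix([0, 0, 0, 0, 0, Rational(1, 3)])       # particular solution from the slide
print(A * x0 == b)                                 # True

# A basis of the null space of A; every solution is x0 plus a combination of these.
for v in A.nullspace():
    print(v.T, A * v == Matrix([0, 0, 0, 0]))      # each check prints True
```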

Elementary Row Operations

• Performing an elementary row operation on an augmented matrix does not change the solution set of the corresponding linear system.
• It follows that applying an elementary row operation to a matrix A does not change the solution set of the corresponding linear system Ax = 0; stated another way, it does not change the null space of A.

Example

• Find a basis for the null space of
$$A = \begin{bmatrix} 2 & 2 & -1 & 0 & 1 \\ -1 & -1 & 2 & -3 & 1 \\ 1 & 1 & -2 & 0 & -1 \\ 0 & 0 & 1 & 1 & 1 \end{bmatrix}$$
• Solution:
– The null space of A is the solution space of the homogeneous system
2x1 + 2x2 - x3 + x5 = 0
-x1 - x2 + 2x3 - 3x4 + x5 = 0
x1 + x2 - 2x3 - x5 = 0
x3 + x4 + x5 = 0
– In Example 10 of Section 5.4 we showed that the vectors
$$v_1 = \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad v_2 = \begin{bmatrix} -1 \\ 0 \\ -1 \\ 0 \\ 1 \end{bmatrix}$$
form a basis for the null space.
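The same basis can be obtained mechanically with SymPy's nullspace method (an illustrative sketch, not part of the original slides):

```python
# Compute a basis for the null space of the matrix A from this example.
from sympy import Matrix

A = Matrix([[ 2,  2, -1,  0,  1],
            [-1, -1,  2, -3,  1],
            [ 1,  1, -2,  0, -1],
            [ 0,  0,  1,  1,  1]])

for v in A.nullspace():
    print(v.T)      # prints [-1, 1, 0, 0, 0] and [-1, 0, -1, 0, 1]
```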

Theorems

• Theorem 4.7.3 – Elementary row operations do not change the null space of a matrix.

• Theorem 4.7.4 – Elementary row operations do not change the row space of a matrix.

Proof of Theorem 4.7.4

• Suppose that the row vectors of a matrix A are r1, r2, …, rm, and let B be obtained from A by performing an elementary row operation. (We say that A and B are row equivalent.)
• We shall show that every vector in the row space of B is also in that of A, and that every vector in the row space of A is in that of B.
• If the row operation is a row interchange, then B and A have the same row vectors and consequently have the same row space.

Proof of Theorem 4.7.4

• If the row operation is multiplication of a row by a nonzero constant or the addition of a multiple of one row to another, then the row vectors r1', r2', …, rm' of B are linear combinations of r1, r2, …, rm; thus they lie in the row space of A.
• Since a row space is closed under addition and scalar multiplication, all linear combinations of r1', r2', …, rm' will also lie in the row space of A. Therefore, each vector in the row space of B is in the row space of A.

Proof of Theorem 4.7.4

• Since B is obtained from A by performing a row operation, A can be obtained from B by performing the inverse operation (Sec. 1.5). • Thus the argument above shows that the row space of A is contained in the row space of B.

Remarks

• Do elementary row operations change the column space? – Yes!
• Suppose the second column of a matrix A is a scalar multiple of the first; then the column space of A consists of all scalar multiples of the first column vector.

• Add -2 times the first row to the second to obtain a matrix B. Again, the second column is a scalar multiple of the first, so the column space of B consists of all scalar multiples of the first column vector. This is not the same as the column space of A (see the illustration below).
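The matrices on the original slide are not reproduced here; as one concrete illustration consistent with the description above (my own choice of matrix), consider

$$A = \begin{bmatrix} 1 & 3 \\ 2 & 6 \end{bmatrix} \quad\xrightarrow{\,R_2 \to R_2 - 2R_1\,}\quad B = \begin{bmatrix} 1 & 3 \\ 0 & 0 \end{bmatrix}$$

The column space of A is the set of all scalar multiples of (1, 2), while the column space of B is the set of all scalar multiples of (1, 0). These are different subspaces of R2, even though A and B have the same row space.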

Theorem 4.7.5

• If a matrix R is in row-echelon form, then the row vectors with the leading 1's (i.e., the nonzero row vectors) form a basis for the row space of R, and the column vectors with the leading 1's of the row vectors form a basis for the column space of R.

Bases for Row and Column Spaces

• The matrix
$$R = \begin{bmatrix} 1 & -2 & 5 & 0 & 3 \\ 0 & 1 & 3 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
is in row-echelon form. From Theorem 4.7.5 the vectors

r1 = [1 -2 5 0 3]

r2 = [0 1 3 0 0]

r3 = [0 0 0 1 0]
form a basis for the row space of R, and the vectors
$$c_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad c_2 = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad c_4 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}$$
form a basis for the column space of R.

Example

• Find bases for the row and column spaces of
$$A = \begin{bmatrix} 1 & -3 & 4 & -2 & 5 & 4 \\ 2 & -6 & 9 & -1 & 8 & 2 \\ 2 & -6 & 9 & -1 & 9 & 7 \\ -1 & 3 & -4 & 2 & -5 & -4 \end{bmatrix}$$
• Solution:
– Since elementary row operations do not change the row space of a matrix, we can find a basis for the row space of A by finding a basis for the row space of any row-echelon form of A.
– Reducing A to row-echelon form we obtain

$$R = \begin{bmatrix} 1 & -3 & 4 & -2 & 5 & 4 \\ 0 & 0 & 1 & 3 & -2 & -6 \\ 0 & 0 & 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

Example

• The basis vectors for the row space of R, and hence of A, are

r1 = [1 ‐3 4 ‐2 5 4]

r2 = [0 0 1 3 ‐2 ‐6]

r3 = [0 0 0 0 1 5]
• Keeping in mind that A and R may have different column spaces, we cannot find a basis for the column space of A directly from the column vectors of R.

Theorem 4.7.6

• If A and B are row equivalent matrices, then:
– A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent.
– A given set of column vectors of A forms a basis for the column space of A if and only if the corresponding column vectors of B form a basis for the column space of B.

Example

• We can find a basis for the column space of R; then the corresponding column vectors of A will form a basis for the column space of A.
• Basis for the column space of R (the columns of R containing the leading 1's):
$$c_1' = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad c_3' = \begin{bmatrix} 4 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad c_5' = \begin{bmatrix} 5 \\ -2 \\ 1 \\ 0 \end{bmatrix}$$

• Basis for the column space of A (the corresponding columns of A):
$$c_1 = \begin{bmatrix} 1 \\ 2 \\ 2 \\ -1 \end{bmatrix}, \quad c_3 = \begin{bmatrix} 4 \\ 9 \\ 9 \\ -4 \end{bmatrix}, \quad c_5 = \begin{bmatrix} 5 \\ 8 \\ 9 \\ -5 \end{bmatrix}$$
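A short SymPy sketch of this procedure (illustrative, not part of the original slides): the pivot columns of the reduced row-echelon form identify which columns of A to keep.

```python
# Bases for the row and column spaces of A (the 4x6 matrix from this example).
from sympy import Matrix

A = Matrix([[ 1, -3,  4, -2,  5,  4],
            [ 2, -6,  9, -1,  8,  2],
            [ 2, -6,  9, -1,  9,  7],
            [-1,  3, -4,  2, -5, -4]])

R, pivots = A.rref()
print(pivots)                         # (0, 2, 4): leading 1's in columns 1, 3, 5 (0-indexed here)
print([A.col(j).T for j in pivots])   # columns 1, 3, 5 of A: a basis for the column space of A
print(A.rowspace())                   # nonzero rows of an echelon form: a basis for the row space
```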

Example

• Find a basis for the space spanned by the row vectors

v1 = (1, -2, 0, 0, 3), v2 = (2, -5, -3, -2, 6), v3 = (0, 5, 15, 10, 0), v4 = (2, 6, 18, 8, 6).
• Except for a variation in notation, the space spanned by these vectors is the row space of the matrix

$$\begin{bmatrix} 1 & -2 & 0 & 0 & 3 \\ 2 & -5 & -3 & -2 & 6 \\ 0 & 5 & 15 & 10 & 0 \\ 2 & 6 & 18 & 8 & 6 \end{bmatrix}$$
– Reducing this matrix to row-echelon form gives
$$\begin{bmatrix} 1 & -2 & 0 & 0 & 3 \\ 0 & 1 & 3 & 2 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
– The nonzero row vectors in this matrix are

w1 = (1, -2, 0, 0, 3), w2 = (0, 1, 3, 2, 0), w3 = (0, 0, 1, 1, 0)
– These vectors form a basis for the row space and consequently form a basis for the subspace of R5 spanned by v1, v2, v3, and v4.

Remarks

• Keeping in mind that A and R may have different column spaces, we cannot find a basis for the column space of A directly from the column vectors of R.
• However, if we can find a set of column vectors of R that forms a basis for the column space of R, then the corresponding column vectors of A will form a basis for the column space of A.
• The basis vectors obtained for the column space of A consisted of column vectors of A, but the basis vectors obtained for the row space of A were not all row vectors of A.
• The transpose of the matrix can be used to solve this problem.

Example

• Find a basis for the row space of
$$A = \begin{bmatrix} 1 & -2 & 0 & 0 & 3 \\ 2 & -5 & -3 & -2 & 6 \\ 0 & 5 & 15 & 10 & 0 \\ 2 & 6 & 18 & 8 & 6 \end{bmatrix}$$
consisting entirely of row vectors from A.
• Solution:
– Transposing A and reducing AT to row-echelon form gives
$$A^T = \begin{bmatrix} 1 & 2 & 0 & 2 \\ -2 & -5 & 5 & 6 \\ 0 & -3 & 15 & 18 \\ 0 & -2 & 10 & 8 \\ 3 & 6 & 0 & 6 \end{bmatrix} \;\longrightarrow\; \begin{bmatrix} 1 & 2 & 0 & 2 \\ 0 & 1 & -5 & -10 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
– The basis vectors for the column space of AT are
$$c_1 = \begin{bmatrix} 1 \\ -2 \\ 0 \\ 0 \\ 3 \end{bmatrix}, \quad c_2 = \begin{bmatrix} 2 \\ -5 \\ -3 \\ -2 \\ 6 \end{bmatrix}, \quad c_4 = \begin{bmatrix} 2 \\ 6 \\ 18 \\ 8 \\ 6 \end{bmatrix}$$
– Thus, the basis vectors for the row space of A, consisting entirely of row vectors from A, are
r1 = [1 -2 0 0 3], r2 = [2 -5 -3 -2 6], r4 = [2 6 18 8 6]
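In SymPy this transpose trick is a one-liner, since columnspace returns actual columns of the matrix it is applied to (an illustrative sketch):

```python
# A basis for the row space of A consisting of row vectors of A itself,
# obtained from the column space of A^T (the approach of this example).
from sympy import Matrix

A = Matrix([[1, -2,  0,  0, 3],
            [2, -5, -3, -2, 6],
            [0,  5, 15, 10, 0],
            [2,  6, 18,  8, 6]])

# columnspace() returns pivot columns of the given matrix, so applying it to A.T
# returns (transposed) rows of A.
basis = [c.T for c in A.T.columnspace()]
for r in basis:
    print(r)        # rows 1, 2, and 4 of A
```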

Example

• (a) Find a subset of the vectors v1 = (1, -2, 0, 3), v2 = (2, -5, -3, 6), v3 = (0, 1, 3, 0), v4 = (2, -1, 4, -7), v5 = (5, -8, 1, 2) that forms a basis for the space spanned by these vectors.
• (b) Express each vector not in the basis as a linear combination of the basis vectors.

• Solution (a):
– Form the matrix whose columns are v1, …, v5 and reduce it to reduced row-echelon form:
$$\begin{bmatrix} 1 & 2 & 0 & 2 & 5 \\ -2 & -5 & 1 & -1 & -8 \\ 0 & -3 & 3 & 4 & 1 \\ 3 & 6 & 0 & -7 & 2 \end{bmatrix} \;\longrightarrow\; \begin{bmatrix} 1 & 0 & 2 & 0 & 1 \\ 0 & 1 & -1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
The columns of the left matrix correspond to v1, v2, v3, v4, v5; the corresponding columns of the right matrix are denoted w1, w2, w3, w4, w5.

– Thus, {v1, v2, v4} is a basis for the column space of the matrix.

Example

• Solution (b):

– We can express w3 as a linear combination of w1 and w2, and express w5 as a linear combination of w1, w2, and w4 (Why?). By inspection, these linear combinations are

w3 = 2w1 - w2, w5 = w1 + w2 + w4
– We call these the dependency equations. The corresponding relationships in the original vectors are

v3 = 2v1 – v2 v5 = v1 + v2 + v4
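A sketch of how the dependency equations can be read off computationally (illustrative; the vectors are those of this example):

```python
# Pivot columns identify a basis {v1, v2, v4}; the nonpivot columns of the
# reduced row-echelon form give the dependency equations.
from sympy import Matrix

vs = [Matrix([1, -2, 0, 3]), Matrix([2, -5, -3, 6]), Matrix([0, 1, 3, 0]),
      Matrix([2, -1, 4, -7]), Matrix([5, -8, 1, 2])]
M = Matrix.hstack(*vs)

W, pivots = M.rref()
print(pivots)          # (0, 1, 3) -> {v1, v2, v4} is a basis
print(W.col(2).T)      # [2, -1, 0, 0]: v3 = 2*v1 - 1*v2
print(W.col(4).T)      # [1, 1, 1, 0]: v5 = v1 + v2 + v4
print(2*vs[0] - vs[1] == vs[2], vs[0] + vs[1] + vs[3] == vs[4])   # True True
```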

Theorem 4.8.1

– If A is any matrix, then the row space and column space of A have the same dimension.
• Proof: Let R be any row-echelon form of A. It follows from Theorems 4.7.4 and 4.7.6(b) that
dim(row space of A) = dim(row space of R) and dim(column space of A) = dim(column space of R)
• The dimension of the row space of R is the number of nonzero rows = number of leading 1's = dimension of the column space of R.

Rank and Nullity

• The common dimension of the row and column space of a matrix A is called the rank of A and is denoted by rank(A); the dimension of the null space of A is called the nullity of A and is denoted by nullity(A).

Example (Rank and Nullity)

• Find the rank and nullity of the matrix
$$A = \begin{bmatrix} -1 & 2 & 0 & 4 & 5 & -3 \\ 3 & -7 & 2 & 0 & 1 & 4 \\ 2 & -5 & 2 & 4 & 6 & 1 \\ 4 & -9 & 2 & -4 & -4 & 7 \end{bmatrix}$$
• Solution:
– The reduced row-echelon form of A is
$$\begin{bmatrix} 1 & 0 & -4 & -28 & -37 & 13 \\ 0 & 1 & -2 & -12 & -16 & 5 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
– Since there are two nonzero rows (two leading 1's), the row space and column space are both two-dimensional, so rank(A) = 2.

Example

– To find the nullity of A, we must find the dimension of the solution space of the linear system Ax = 0.
– The corresponding system of equations will be

x1 - 4x3 - 28x4 - 37x5 + 13x6 = 0
x2 - 2x3 - 12x4 - 16x5 + 5x6 = 0
– It follows that the general solution of the system is

x1 = 4r + 28s + 37t –13u, x2 = 2r + 12s + 16t –5u, x3 = r, x4 = s, x5 = t, x6 = u

or
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix} =
r\begin{bmatrix} 4 \\ 2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
+ s\begin{bmatrix} 28 \\ 12 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}
+ t\begin{bmatrix} 37 \\ 16 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}
+ u\begin{bmatrix} -13 \\ -5 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$
Thus, nullity(A) = 4.
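A quick computational cross-check of this example (illustrative sketch using SymPy):

```python
# Rank and nullity of the 4x6 matrix A from this example.
from sympy import Matrix

A = Matrix([[-1,  2, 0,  4,  5, -3],
            [ 3, -7, 2,  0,  1,  4],
            [ 2, -5, 2,  4,  6,  1],
            [ 4, -9, 2, -4, -4,  7]])

print(A.rank())              # 2
print(len(A.nullspace()))    # 4
assert A.rank() + len(A.nullspace()) == A.cols   # rank + nullity = n (Theorem 4.8.2)
```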

Example

• What is the maximum possible rank of an m×n matrix A that is not square?
• Solution: The row space of A is at most n-dimensional and the column space is at most m-dimensional. Since the rank of A is the common dimension of its row and column space, it follows that the rank is at most the smaller of m and n.

Theorem 4.8.2

• Dimension Theorem for Matrices
– If A is a matrix with n columns, then rank(A) + nullity(A) = n.
• Proof: Since A has n columns, Ax = 0 has n unknowns. These fall into two categories: the leading variables and the free variables.

• The number of leading 1's in the reduced row-echelon form of A is the rank of A, so the number of leading variables equals rank(A).

Theorem 4.8.2

• The number of free variables is equal to the nullity of A. This is so because the nullity of A is the dimension of the solution space of Ax=0, which is the same as the number of parameters in the general solution, which is the same as the number of free variables. Thus rank(A) + nullity(A) = n

Example

$$A = \begin{bmatrix} -1 & 2 & 0 & 4 & 5 & -3 \\ 3 & -7 & 2 & 0 & 1 & 4 \\ 2 & -5 & 2 & 4 & 6 & 1 \\ 4 & -9 & 2 & -4 & -4 & 7 \end{bmatrix}$$
• This matrix has 6 columns, so rank(A) + nullity(A) = 6.
• In the previous example we found rank(A) = 2 and nullity(A) = 4.

Theorem 4.8.3

• If A is an m×n matrix, then: – rank(A) = Number of leading variables in the solution of Ax = 0. – nullity(A) = Number of parameters in the general solution of Ax = 0.

Example

• Find the number of parameters in the general solution of Ax = 0 if A is a 5×7 matrix of rank 3. • Solution: – nullity (A) = n – rank(A) = 7 – 3 = 4 – Thus, there are four parameters.

Theorem 4.8.4

• If A is an n×n matrix, and if TA : Rn → Rn is multiplication by A, then the following are equivalent:
– A is invertible.
– Ax = 0 has only the trivial solution.
– The reduced row-echelon form of A is In.
– A is expressible as a product of elementary matrices.
– Ax = b is consistent for every n×1 matrix b.
– Ax = b has exactly one solution for every n×1 matrix b.
– det(A) ≠ 0.
– The column vectors of A are linearly independent.
– The row vectors of A are linearly independent.
– The column vectors of A span Rn.
– The row vectors of A span Rn.
– The column vectors of A form a basis for Rn.
– The row vectors of A form a basis for Rn.
– A has rank n.
– A has nullity 0.
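For a sample invertible matrix (chosen here for illustration, not from the slides), several of these equivalent statements can be checked directly:

```python
# For one sample invertible matrix, several of the equivalent conditions hold together.
from sympy import Matrix, eye

A = Matrix([[2, 1, 0],
            [1, 3, 1],
            [0, 1, 2]])
n = A.rows

print(A.det() != 0)                    # True: det(A) != 0
print(A.rank() == n)                   # True: A has rank n
print(len(A.nullspace()) == 0)         # True: A has nullity 0 (Ax = 0 only trivially)
print(A.rref()[0] == eye(n))           # True: reduced row-echelon form of A is I_n
```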

Overdetermined and Underdetermined Systems

• A linear system with more equations than unknowns is called an overdetermined linear system. With fewer equations than unknowns, it is called an underdetermined system.
• Theorem 4.8.5
– If Ax = b is a consistent linear system of m equations in n unknowns, and if A has rank r, then the general solution of the system contains n - r parameters.

• If A is a matrix with 7 columns and rank 4, and if Ax = b is a consistent linear system, then the general solution of the system contains 7 - 4 = 3 parameters.

Theorem 4.8.6

• Let A be an m×n matrix.
• (a) (Overdetermined Case) If m > n, then the linear system Ax = b is inconsistent for at least one vector b in Rm.
• (b) (Underdetermined Case) If m < n, then for each vector b in Rm the linear system Ax = b is either inconsistent or has infinitely many solutions.

Proof of Theorem 4.8.6 (a)

• Assume that m > n, in which case the column vectors of A cannot span Rm (fewer vectors than the dimension of Rm). Thus, there is at least one vector b in Rm that is not in the column space of A, and for that b the system Ax = b is inconsistent by Theorem 4.7.1.

Proof of Theorem 4.8.6 (b)

• Assume that m < n. If Ax = b is consistent for a given b in Rm, then by Theorem 4.8.5 the general solution contains n - r parameters, where r = rank(A). Since rank(A) ≤ m < n, we have n - r ≥ n - m > 0.
• This means that the general solution has at least one parameter and hence there are infinitely many solutions.

Example

• (a) What can you say about the solutions of an overdetermined system Ax = b of 7 equations in 5 unknowns in which A has rank r = 4?
• (b) What can you say about the solutions of an underdetermined system Ax = b of 5 equations in 7 unknowns in which A has rank r = 4?
• Solution:
– (a) The system is consistent for some vector b in R7 (but not for every b), and for any such b the number of parameters in the general solution is n - r = 5 - 4 = 1.
– (b) The system may be consistent or inconsistent, but if it is consistent for a vector b in R5, then the general solution has n - r = 7 - 4 = 3 parameters.

Example

• The linear system

x1 - 2x2 = b1
x1 - x2 = b2
x1 + x2 = b3
x1 + 2x2 = b4
x1 + 3x2 = b5

is overdetermined, so it cannot be consistent for all possible values of b1, b2, b3, b4, and b5. Exact conditions under which the system is consistent can be obtained by solving the linear system by Gauss-Jordan elimination, which reduces the augmented matrix to

$$\begin{bmatrix} 1 & 0 & 2b_2 - b_1 \\ 0 & 1 & b_2 - b_1 \\ 0 & 0 & b_3 - 3b_2 + 2b_1 \\ 0 & 0 & b_4 - 4b_2 + 3b_1 \\ 0 & 0 & b_5 - 5b_2 + 4b_1 \end{bmatrix}$$

• Thus, the system is consistent if and only if b1, b2, b3, b4, and b5 satisfy the conditions

2b1 - 3b2 + b3 = 0
3b1 - 4b2 + b4 = 0
4b1 - 5b2 + b5 = 0

or, on solving this homogeneous linear system,

b1=5r‐4s, b2=4r‐3s, b3=2r‐s, b4=r, b5=s where r and s are arbitrary.
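A small SymPy check of this conclusion (illustrative sketch): with b parametrized as above, a solution exists for every choice of r and s.

```python
# The overdetermined system is consistent exactly when b has the stated form;
# here we check the parametric b found on the slide.
from sympy import Matrix, linsolve, symbols

r, s, x1, x2 = symbols('r s x1 x2')
A = Matrix([[1, -2],
            [1, -1],
            [1,  1],
            [1,  2],
            [1,  3]])
b = Matrix([5*r - 4*s, 4*r - 3*s, 2*r - s, r, s])

print(linsolve((A, b), x1, x2))   # a one-element set -> the system is consistent for all r, s
```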

Fundamental Spaces of a Matrix

• Six important vector spaces are associated with a matrix A:
– Row space of A, row space of AT
– Column space of A, column space of AT
– Null space of A, null space of AT
• Transposing a matrix converts row vectors into column vectors:
– Row space of AT = column space of A
– Column space of AT = row space of A
• These are called the fundamental spaces of a matrix A.

Theorem 4.8.7

• If A is any matrix, then rank(A) = rank(AT).
– rank(A) = dim(row space of A) = dim(column space of AT) = rank(AT)
• If A is an m×n matrix, then rank(A) + nullity(A) = n and rank(AT) + nullity(AT) = m.
• The dimensions of the fundamental spaces, where A is m×n with rank r:
– Row space of A: r
– Column space of A: r
– Null space of A: n - r
– Null space of AT: m - r

Recap

• Theorem 3.4.3: If A is an m×n matrix, then the solution set of the homogeneous linear system Ax = 0 consists of all vectors in Rn that are orthogonal to every row vector of A.
• The null space of A consists of those vectors that are orthogonal to each of the row vectors of A.

Orthogonal Complement

• Definition
– Let W be a subspace of Rn. The set of all vectors in Rn that are orthogonal to every vector in W is called the orthogonal complement of W, and is denoted by W⊥.
– If V is a plane through the origin of R3 with the Euclidean inner product, then the set of all vectors that are orthogonal to every vector in V forms the line L through the origin that is perpendicular to V.

Theorem 4.8.8

• If W is a subspace of a finite-dimensional Rn, then:
– W⊥ (read "W perp") is a subspace of Rn.
– The only vector common to W and W⊥ is 0; that is, W ∩ W⊥ = {0}.
– The orthogonal complement of W⊥ is W; that is, (W⊥)⊥ = W.

Example

• Orthogonal complements
[Figure: orthogonal complements, showing a subspace W with its orthogonal complement W⊥ in the xy-plane, and a plane V with the perpendicular line L.]

Theorem 4.8.9

• If A is an m×n matrix, then: – The null space of A and the row space of A are orthogonal complements in Rn. – The null space of AT and the column space of A are orthogonal complements in Rm.
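A small numerical illustration of the first statement (sample matrix chosen for this sketch):

```python
# Every null space basis vector is orthogonal to every row of A (Theorem 4.8.9).
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6]])

for v in A.nullspace():
    for i in range(A.rows):
        print(A.row(i).dot(v))    # every dot product is 0
```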

Theorem 4.8.10

• If A is an n×n matrix, and if TA : Rn → Rn is multiplication by A, then the following are equivalent:
– A is invertible.
– Ax = 0 has only the trivial solution.
– The reduced row-echelon form of A is In.
– A is expressible as a product of elementary matrices.
– Ax = b is consistent for every n×1 matrix b.
– Ax = b has exactly one solution for every n×1 matrix b.
– det(A) ≠ 0.
– The column vectors of A are linearly independent.
– The row vectors of A are linearly independent.
– The column vectors of A span Rn.
– The row vectors of A span Rn.
– The column vectors of A form a basis for Rn.
– The row vectors of A form a basis for Rn.
– A has rank n.
– A has nullity 0.
– The orthogonal complement of the null space of A is Rn.
– The orthogonal complement of the row space of A is {0}.

Applications of Rank

• Digital data are commonly stored in matrix form.
• Rank plays a role because it measures the "redundancy" in a matrix.
• If A is an m×n matrix of rank k, then n - k of the column vectors and m - k of the row vectors can be expressed in terms of k linearly independent column or row vectors.
• The essential idea in many data compression schemes is to approximate the original data set by a data set with smaller rank that conveys nearly the same information.
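The slides only state the idea; one common way to realize a lower-rank approximation is a truncated singular value decomposition. The sketch below (synthetic data, not from the slides) keeps only the k largest singular values:

```python
# Sketch of rank-based compression: keep only the k largest singular values of a data matrix.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 40))             # illustrative random "data" matrix

U, S, Vt = np.linalg.svd(data, full_matrices=False)

k = 10                                            # target rank of the approximation
approx = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]    # best rank-k approximation (Eckart-Young)

print(np.linalg.matrix_rank(approx))              # 10
print(np.linalg.norm(data - approx) / np.linalg.norm(data))  # relative error of the compression
```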
