Row Space and Column Space
• If A is an m×n matrix, then the subspace of R^n spanned by the row vectors of A is called the row space of A, and the subspace of R^m spanned by the column vectors is called the column space of A.
• The solution space of the homogeneous system Ax = 0, which is a subspace of R^n, is called the null space of A.

15 Remarks
• In this section we will be concerned with two questions:
– What relationships exist between the solutions of a linear system Ax = b and the row space, column space, and null space of A?
– What relationships exist among the row space, column space, and null space of a matrix?

16 Remarks
• It follows from Formula (10) of Section 1.3 that if c1, c2, …, cn are the column vectors of A, then Ax = x1c1 + x2c2 + ··· + xncn.
• We conclude that Ax = b is consistent if and only if b is expressible as a linear combination of the column vectors of A or, equivalently, if and only if b is in the column space of A.

17 Theorem 4.7.1
• A system of linear equations Ax = b is consistent if and only if b is in the column space of A.

18 Example
• Let Ax = b be the linear system

    [ -1  3  2 ] [ x1 ]   [  1 ]
    [  1  2 -3 ] [ x2 ] = [ -9 ]
    [  2  1 -2 ] [ x3 ]   [ -3 ]

  Show that b is in the column space of A, and express b as a linear combination of the column vectors of A.
• Solution:
– Solving the system by Gaussian elimination yields x1 = 2, x2 = -1, x3 = 3.
– Since the system is consistent, b is in the column space of A.
– Moreover, it follows that

      [ -1 ]     [ 3 ]     [  2 ]   [  1 ]
    2 [  1 ] - 1 [ 2 ] + 3 [ -3 ] = [ -9 ]
      [  2 ]     [ 1 ]     [ -2 ]   [ -3 ]

19 Theorem 4.7.2
• General and Particular Solutions
– If x0 denotes any single solution of a consistent linear system Ax = b, and if v1, v2, …, vk form a basis for the null space of A (that is, the solution space of the homogeneous system Ax = 0), then every solution of Ax = b can be expressed in the form

    x = x0 + c1v1 + c2v2 + ··· + ckvk

  Conversely, for all choices of scalars c1, c2, …, ck, the vector x in this formula is a solution of Ax = b.
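The example on slide 18 can be double-checked numerically. Below is a minimal sketch in pure Python (the variable names are mine); the coefficients 2, -1, 3 are the Gaussian-elimination solution quoted in the example:

```python
# Column vectors of A and the right-hand side b from the example on slide 18.
c1, c2, c3 = [-1, 1, 2], [3, 2, 1], [2, -3, -2]
b = [1, -9, -3]

# The solution found by Gaussian elimination: x1 = 2, x2 = -1, x3 = 3.
x1, x2, x3 = 2, -1, 3

# Form x1*c1 + x2*c2 + x3*c3 componentwise and compare with b.
combo = [x1 * u + x2 * v + x3 * w for u, v, w in zip(c1, c2, c3)]
assert combo == b  # b = 2*c1 - 1*c2 + 3*c3, so b is in the column space of A
```

Since the combination reproduces b exactly, Theorem 4.7.1 confirms the system is consistent.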
20 Proof of 4.7.2
• Assume that x0 is any fixed solution of Ax = b and that x is an arbitrary solution. Then Ax0 = b and Ax = b.
• Subtracting these equations yields Ax - Ax0 = 0, or A(x - x0) = 0,
• which shows that x - x0 is a solution of the homogeneous system Ax = 0.
• Since v1, v2, …, vk is a basis for the solution space of this system, we can express x - x0 as a linear combination of these vectors, say x - x0 = c1v1 + c2v2 + ··· + ckvk. Thus x = x0 + c1v1 + c2v2 + ··· + ckvk.

21 Proof of 4.7.2
• Conversely, for all choices of the scalars c1, c2, …, ck, we have

    Ax = A(x0 + c1v1 + c2v2 + ··· + ckvk)
    Ax = Ax0 + c1(Av1) + c2(Av2) + ··· + ck(Avk)

• But x0 is a solution of the nonhomogeneous system, and v1, v2, …, vk are solutions of the homogeneous system, so the last equation implies that

    Ax = b + 0 + 0 + ··· + 0 = b,

• which shows that x is a solution of Ax = b.

22 Remarks
• The vector x0 is called a particular solution of Ax = b.
• The expression x0 + c1v1 + ··· + ckvk is called the general solution of Ax = b, and the expression c1v1 + ··· + ckvk is called the general solution of Ax = 0.
• The general solution of Ax = b is the sum of any particular solution of Ax = b and the general solution of Ax = 0.

23 Example
• The solution to the nonhomogeneous system

    x1 + 3x2 - 2x3 + 2x5 = 0
    2x1 + 6x2 - 5x3 - 2x4 + 4x5 - 3x6 = -1
    5x3 + 10x4 + 15x6 = 5
    2x1 + 6x2 + 8x4 + 4x5 + 18x6 = 6

  can be written in vector form as

    [ x1 ]   [ -3r - 4s - 2t ]   [  0  ]     [ -3 ]     [ -4 ]     [ -2 ]
    [ x2 ]   [       r       ]   [  0  ]     [  1 ]     [  0 ]     [  0 ]
    [ x3 ] = [     -2s       ] = [  0  ] + r [  0 ] + s [ -2 ] + t [  0 ]
    [ x4 ]   [       s       ]   [  0  ]     [  0 ]     [  1 ]     [  0 ]
    [ x5 ]   [       t       ]   [  0  ]     [  0 ]     [  0 ]     [  1 ]
    [ x6 ]   [      1/3      ]   [ 1/3 ]     [  0 ]     [  0 ]     [  0 ]

                                   x0          v1         v2         v3

  which is the general solution.
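Theorem 4.7.2 says that every choice of the parameters r, s, t in the example above yields a solution of the nonhomogeneous system. A quick pure-Python check against the original four equations (exact arithmetic via Fraction; the helper names are mine):

```python
from fractions import Fraction

def solution(r, s, t):
    # general solution x = x0 + r*v1 + s*v2 + t*v3 from the example above
    return [-3*r - 4*s - 2*t, r, -2*s, s, t, Fraction(1, 3)]

def satisfies_system(x):
    x1, x2, x3, x4, x5, x6 = x
    return (x1 + 3*x2 - 2*x3 + 2*x5 == 0
            and 2*x1 + 6*x2 - 5*x3 - 2*x4 + 4*x5 - 3*x6 == -1
            and 5*x3 + 10*x4 + 15*x6 == 5
            and 2*x1 + 6*x2 + 8*x4 + 4*x5 + 18*x6 == 6)

# every sampled choice of the parameters r, s, t gives a solution
assert all(satisfies_system(solution(r, s, t))
           for r in range(-2, 3) for s in range(-2, 3) for t in range(-2, 3))
```

The parameters cancel out of every equation, which is exactly the statement that v1, v2, v3 span the null space while x0 supplies the particular solution.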
• That is, x1 = -3r - 4s - 2t, x2 = r, x3 = -2s, x4 = s, x5 = t, x6 = 1/3.
• The vector x0 is a particular solution of the nonhomogeneous system, and the linear combination rv1 + sv2 + tv3 is the general solution of the homogeneous system.

24 Elementary Row Operations
• Performing an elementary row operation on an augmented matrix does not change the solution set of the corresponding linear system.
• It follows that applying an elementary row operation to a matrix A does not change the solution set of the corresponding linear system Ax = 0; stated another way, it does not change the null space of A. (Recall: the solution space of the homogeneous system Ax = 0, a subspace of R^n, is the null space of A.)

25 Example
• Find a basis for the null space of

    A = [  2  2 -1  0  1 ]
        [ -1 -1  2 -3  1 ]
        [  1  1 -2  0 -1 ]
        [  0  0  1  1  1 ]

• Solution:
– The null space of A is the solution space of the homogeneous system

    2x1 + 2x2 - x3 + x5 = 0
    -x1 - x2 + 2x3 - 3x4 + x5 = 0
    x1 + x2 - 2x3 - x5 = 0
    x3 + x4 + x5 = 0

– In Example 10 of Section 5.4 we showed that the vectors

    v1 = [ -1, 1, 0, 0, 0 ]^T  and  v2 = [ -1, 0, -1, 0, 1 ]^T

  form a basis for the null space.

26 Theorems
• Theorem 4.7.3
– Elementary row operations do not change the null space of a matrix.
• Theorem 4.7.4
– Elementary row operations do not change the row space of a matrix.

27 Proof of 4.7.4
• Suppose that the row vectors of a matrix A are r1, r2, …, rm, and let B be obtained from A by performing an elementary row operation. (We say that A and B are row equivalent.)
• We shall show that every vector in the row space of B is also in the row space of A, and that every vector in the row space of A is in the row space of B.
• If the row operation is a row interchange, then B and A have the same row vectors and consequently have the same row space.
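The claimed basis for the null space in the example on slide 25 can be verified directly by checking that Av1 = 0 and Av2 = 0. A small sketch in pure Python (the helper name matvec is mine):

```python
A = [[ 2,  2, -1,  0,  1],
     [-1, -1,  2, -3,  1],
     [ 1,  1, -2,  0, -1],
     [ 0,  0,  1,  1,  1]]

v1 = [-1, 1, 0, 0, 0]
v2 = [-1, 0, -1, 0, 1]

def matvec(M, v):
    # matrix-vector product, row by row
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# both basis vectors are sent to the zero vector, so they lie in the null space
assert matvec(A, v1) == [0, 0, 0, 0]
assert matvec(A, v2) == [0, 0, 0, 0]
```

This shows v1 and v2 belong to the null space; that they also span it (and are independent) is the content of the cited Example 10.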
28 Proof of 4.7.4
• If the row operation is multiplication of a row by a nonzero scalar or the addition of a multiple of one row to another, then the row vectors r1', r2', …, rm' of B are linear combinations of r1, r2, …, rm; thus they lie in the row space of A.
• Since a vector space is closed under addition and scalar multiplication, all linear combinations of r1', r2', …, rm' will also lie in the row space of A. Therefore, each vector in the row space of B is in the row space of A.

29 Proof of 4.7.4
• Since B is obtained from A by performing a row operation, A can be obtained from B by performing the inverse operation (Sec. 1.5).
• Thus the argument above shows that the row space of A is contained in the row space of B.

30 Remarks
• Do elementary row operations change the column space?
– Yes!
• Consider a matrix A whose second column is a scalar multiple of the first; the column space of A then consists of all scalar multiples of the first column vector. Add -2 times the first row to the second to obtain a matrix B.
• Again, the second column of B is a scalar multiple of the first, so the column space of B consists of all scalar multiples of the first column vector of B. This is not the same as the column space of A.

31 Theorem 4.7.5
• If a matrix R is in row echelon form, then the row vectors with the leading 1's (i.e., the nonzero row vectors) form a basis for the row space of R, and the column vectors that contain the leading 1's of those row vectors form a basis for the column space of R.

32 Bases for Row and Column Spaces
• The matrix

    R = [ 1 -2  5  0  3 ]
        [ 0  1  3  0  0 ]
        [ 0  0  0  1  0 ]
        [ 0  0  0  0  0 ]

  is in row-echelon form. From Theorem 4.7.5 the vectors

    r1 = [1 -2 5 0 3]
    r2 = [0 1 3 0 0]
    r3 = [0 0 0 1 0]

  form a basis for the row space of R, and the vectors

    c1 = [1, 0, 0, 0]^T,  c2 = [-2, 1, 0, 0]^T,  c4 = [0, 0, 1, 0]^T

  form a basis for the column space of R.
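The matrices on slide 30 were lost in extraction. A pair consistent with the slide's description (second column a multiple of the first; B obtained by adding -2 times row 1 to row 2) is sketched below; the specific entries are my assumption, not necessarily the slide's:

```python
from fractions import Fraction

# A whose second column is 3 times the first (assumed example matrix).
A = [[1, 3],
     [2, 6]]
# B: add -2 times the first row of A to the second.
B = [[1, 3],
     [0, 0]]

def is_multiple(u, v):
    # True if u is a scalar multiple of v (v nonzero), over the rationals.
    k = next((Fraction(a, b) for a, b in zip(u, v) if b != 0), None)
    return k is not None and all(Fraction(a) == k * b for a, b in zip(u, v))

col1_A = [row[0] for row in A]   # (1, 2) spans the column space of A
col1_B = [row[0] for row in B]   # (1, 0) spans the column space of B

# (1, 2) lies in the column space of A but not in that of B,
# so the row operation changed the column space.
assert is_multiple([1, 2], col1_A)
assert not is_multiple([1, 2], col1_B)
```

This is why slide 34 warns that a basis for the column space of A cannot be read directly from the columns of its row-echelon form R.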
33 Example
• Find bases for the row and column spaces of

    A = [  1 -3  4 -2  5  4 ]
        [  2 -6  9 -1  8  2 ]
        [  2 -6  9 -1  9  7 ]
        [ -1  3 -4  2 -5 -4 ]

• Solution:
– Since elementary row operations do not change the row space of a matrix, we can find a basis for the row space of A by finding a basis for the row space of any row-echelon form of A.
– Reducing A to row-echelon form we obtain

    R = [ 1 -3  4 -2  5  4 ]
        [ 0  0  1  3 -2 -6 ]
        [ 0  0  0  0  1  5 ]
        [ 0  0  0  0  0  0 ]

34 Example
• The basis vectors for the row space of R, and hence of A, are

    r1 = [1 -3 4 -2 5 4]
    r2 = [0 0 1 3 -2 -6]
    r3 = [0 0 0 0 1 5]

• Keeping in mind that A and R may have different column spaces, we cannot find a basis for the column space of A directly from the column vectors of R.

35 Theorem 4.7.6
• If A and B are row equivalent matrices, then:
– A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent.
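The reduction from A to R in the example on slides 33–34 can be replayed with a small row-echelon routine (pure Python, Fraction for exact arithmetic; the function is my sketch). A row-echelon form is not unique in general, but this particular algorithm happens to reproduce the R shown:

```python
from fractions import Fraction

def row_echelon(M):
    """Reduce M to a row-echelon form with leading 1's (Gaussian elimination)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        # find a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]      # normalize the pivot to 1
        for i in range(r + 1, nrows):           # eliminate entries below the pivot
            f = M[i][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

A = [[ 1, -3,  4, -2,  5,  4],
     [ 2, -6,  9, -1,  8,  2],
     [ 2, -6,  9, -1,  9,  7],
     [-1,  3, -4,  2, -5, -4]]

R = row_echelon(A)
expected = [[1, -3, 4, -2, 5, 4],
            [0, 0, 1, 3, -2, -6],
            [0, 0, 0, 0, 1, 5],
            [0, 0, 0, 0, 0, 0]]
assert R == expected  # the nonzero rows of R are the row-space basis r1, r2, r3
```

By Theorem 4.7.4 the nonzero rows of R form a basis for the row space of A; the leading 1's sit in columns 1, 3, and 5, which (by Theorem 4.7.6) identifies the corresponding columns of A itself as a basis for its column space.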