Math 3191 Spring 2000 Cheat Sheet Final

Jan Mandel, May 10, 2000

Elementary row operations: 1. add a multiple of a row to another row (does not change the determinant); 2. interchange two rows (changes the sign of the determinant); 3. multiply a row by a nonzero scalar (multiplies the determinant by the same scalar). Elementary matrix E of an elementary row operation: doing the operation on A is the same as replacing A by EA.

Row echelon form: has zeros under a "staircase". Reduced row echelon form: the first nonzero entry in each row is 1, and it is the only nonzero entry in its column (called a pivot column).

Au = b is equivalent to b = u1 a1 + ... + un an, where A = [a1, ..., an]. Au = b has a solution if and only if (iff) b is in the span of the columns of A. Au = b has a solution for every b iff the columns of A span R^m, iff A has a pivot position in every row. The solution of Ax = b is unique iff Ax = 0 has only the zero solution, iff there are no free variables (in the echelon form of A). Vectors a1, ..., am are linearly dependent if some linear combination of them with at least one nonzero coefficient equals zero. The solution of Ax = b is unique iff the columns of A are linearly independent.

Linear maps: T : x ↦ Ax. A is the standard matrix for T: A = [T(e1), ..., T(en)], where I = [e1, ..., en].

Matrix-matrix product: C = AB means c_ik = Σ_j a_ij b_jk. Note B[a1, ..., an] = [Ba1, ..., Ban].

Matrix inverse: B = A^{-1} means AB = BA = I; A and B must be square. Either one of AB = I or BA = I is sufficient. To compute A^{-1}, transform [A, I] to reduced echelon form [I, A^{-1}].

The following are equivalent for square A: A^{-1} exists; T : x ↦ Ax is invertible; T is onto; T is one-to-one; Ax = b has a solution for every b; Ax = 0 has only the zero solution; det A ≠ 0; A has a pivot in every column (in the algorithm of reduction to echelon form).

A = LU, where U is the echelon form and L has ones on the diagonal; the entries of L under the diagonal are minus the multipliers used in the reduction to echelon form (use only steps adding a multiple of a row to another row).
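The LU factorization above can be sketched in a few lines of pure Python. This is a minimal illustration of recording the multipliers, assuming nonzero pivots so no row interchanges are needed; the 3×3 matrix is a made-up example:

```python
def lu(A):
    """Factor A = L U by elimination, recording multipliers in L (no pivoting)."""
    n = len(A)
    U = [row[:] for row in A]                                   # U starts as a copy of A
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]       # the step adds (-m) * row j to row i,
            L[i][j] = m                 # so L stores minus that multiplier
            for k in range(n):
                U[i][k] -= m * U[j][k]
    return L, U

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
L, U = lu(A)
# multiply back: prod should equal A
prod = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
```

Here U comes out as the echelon form of A and L is unit lower triangular, so multiplying L by U recovers A exactly.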
Jacobi and Gauss-Seidel iterative methods: compute x_i^{k+1} from equation i. Jacobi: use the values of x^k. Gauss-Seidel: use the newest values available. Convergence is guaranteed when A is strictly diagonally dominant: the sum of the absolute values of the off-diagonal terms in column i is less than |a_ii|, for all i. If C is a matrix with ||C|| < 1 (for instance, the Jacobi iteration matrix of a strictly diagonally dominant system, in a suitable norm), then (I − C)^{-1} = I + C + C^2 + C^3 + ....

A basis of a subspace V of R^n is any set {b1, ..., bp} that is linearly independent and spans V. Each u ∈ V can be written uniquely as u = c1 b1 + ... + cp bp with scalar coefficients c_i; dim V = p.

Nul A is the set of all x such that Ax = 0. To find a basis of Nul A, solve Ax = 0 by transforming A to reduced echelon form. Col A is the set of all Ax. The pivot columns of A form a basis of Col A. Rank theorem: dim Nul A + dim Col A = number of columns of A.

Expansion of the determinant by row i: det A = Σ_{j=1}^{n} (−1)^{i+j} a_ij det A_ij. Expansion by a column is similar. The determinant of a triangular matrix equals the product of the diagonal terms.

H is a subspace of a vector space V if H ⊂ V, u + v ∈ H for any u, v ∈ H, and cu ∈ H for any u ∈ H and scalar c. A basis of a vector space V is a subset of V that spans V and is linearly independent. The coordinates of v ∈ V relative to a basis {b1, ..., bp} are (x1, ..., xp) ∈ R^p such that v = x1 b1 + ... + xp bp. The mapping of vectors in V to their coordinates in R^p preserves linear dependence and independence.

The change of coordinates matrix is constructed so that [x]_C = P_{C←B} [x]_B, and it is given by P_{C←B} = [[b1]_C, ..., [bp]_C]. (This matrix is also the matrix of the identity operator from the basis B to the basis C.) In R^n, if B and C denote also the matrices with the basis vectors as columns, then P_{C←B} = C^{-1} B, and it can be found by reducing [C B] to the form [I P_{C←B}].

λ and u are an eigenvalue and eigenvector of A if Au = λu and u ≠ 0. The eigenvalues of a triangular matrix are its diagonal entries. Eigenvalues satisfy the characteristic equation det(A − λI) = 0. If A = PBP^{-1}, then the eigenvalues of A and B are the same. A is diagonalizable if there exists a basis consisting of its eigenvectors; then A = PDP^{-1}, where D is a diagonal matrix with the eigenvalues on the diagonal and the columns of P are the eigenvectors.

A basis B = {b1, ..., bn} is orthogonal if b_i · b_j = 0 for i ≠ j. The coefficients of x relative to an orthogonal basis B are x_i = (x · b_i)/(b_i · b_i).

A least squares solution of a rectangular system Ax = b is defined by ||Ax − b||^2 → min, and it can be found by solving the normal equations A^T Ax = A^T b.
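The Jacobi and Gauss-Seidel sweeps above can be sketched in pure Python; the strictly diagonally dominant 2×2 system here is a made-up example, so both iterations converge (to the exact solution x = [1, 2]):

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: each new x_i uses only the old iterate x^k."""
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: each new x_i uses the newest values available."""
    n = len(A)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# strictly diagonally dominant example; the exact solution is x = [1, 2]
A = [[4.0, 1.0], [2.0, 5.0]]
b = [6.0, 12.0]
xj = xg = [0.0, 0.0]
for _ in range(60):
    xj = jacobi_step(A, b, xj)
    xg = gauss_seidel_step(A, b, xg)
```

After 60 sweeps both iterates agree with [1, 2] to high accuracy, with Gauss-Seidel closing in faster since it uses updated values immediately.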
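For a 2×2 matrix the characteristic equation reads det(A − λI) = λ^2 − (tr A)λ + det A = 0 and can be solved directly. A small sketch with a made-up example matrix (assuming real eigenvalues); the eigenvectors below were found by solving (A − λI)u = 0 by hand:

```python
import math

A = [[4.0, 1.0], [2.0, 3.0]]                 # made-up example matrix
tr = A[0][0] + A[1][1]                       # trace
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant
disc = math.sqrt(tr * tr - 4.0 * det)        # assumes a nonnegative discriminant
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0   # eigenvalues 5 and 2

# hand-computed eigenvectors for lam1 = 5 and lam2 = 2
u1 = [1.0, 1.0]
u2 = [1.0, -2.0]
# check the defining property A u = lam u for u1
Au1 = [A[0][0] * u1[0] + A[0][1] * u1[1],
       A[1][0] * u1[0] + A[1][1] * u1[1]]
```

With P = [u1, u2] as columns and D = diag(5, 2), this is exactly the diagonalization A = PDP^{-1} described above.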
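A sketch of the normal equations: fitting a line c0 + c1·t to three made-up data points by forming A^T A x = A^T b and solving the resulting 2×2 system by Cramer's rule:

```python
# least squares fit of a line c0 + c1*t to points (t, y); data is made up
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
A = [[1.0, t] for t, y in pts]          # rectangular system A x = b
b = [y for t, y in pts]

# normal equations: (A^T A) x = A^T b
AtA = [[sum(A[i][r] * A[i][c] for i in range(3)) for c in range(2)]
       for r in range(2)]
Atb = [sum(A[i][r] * b[i] for i in range(3)) for r in range(2)]

# solve the 2x2 system by Cramer's rule
d = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c0 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / d
c1 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / d
```

The fitted coefficients are c0 = 1/6 and c1 = 1/2; no line passes through all three points, and this choice minimizes ||Ax − b||^2.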
