
NUMERICAL LINEAR ALGEBRA

Robert D. Skeel

(c) 2006 Robert D. Skeel

February 10, 2006

Chapter 1

DIRECT METHODS—PART I

1.1 Linear Systems of Equations
1.2 Data Error
1.3 Gaussian Elimination
1.4 LU Factorization
1.5 Symmetric Positive Definite Matrices

1.1 Linear Systems of Equations

1.1.1 Network analysis

[Figure: a resistance network with two nodes at potentials E1 and E2, joined by branches of 4 ohm, 5 ohm, and 3 ohm carrying currents I1, I2, I3; a 2 V source is shown on one branch.]

A resistance network has unknown currents in its branches and unknown potentials at its nodes. A computer representation might look like

    branch  from  to   R    V
      1       2    1   4    2
      2       1    2   5   -4
      3       1    2   3    0

Ohm's Law, E_to = E_from + V − R I, gives the equations

    branch 1:   E1 = E2 + 2 − 4 I1
    branch 2:   E2 = E1 − 4 − 5 I2
    branch 3:   E2 = E1 − 3 I3

Kirchhoff's Current Law (a conservation law) gives the equations

    node 1:    I1 − I2 − I3 = 0
    node 2:   −I1 + I2 + I3 = 0

These equations are redundant; hence set E2 = 0. Usually these equations are reduced to a smaller system by one of two techniques:

1. loop analysis—complicated to program
2. nodal analysis—eliminate currents and solve for nodal potentials, i.e., use the 3 branch equations to substitute for the currents in the 1st node equation:
\[
\tfrac{1}{4}(2 - E_1) - \tfrac{1}{5}(-4 + E_1) - \tfrac{1}{3}(0 + E_1) = 0.
\]

Here are some other applications which give rise to linear systems of equations:

• AC networks: capacitance, inductance, complex numbers;
• hydraulic networks: pressure, rate of flow (flux);
• framed structures: displacements, forces, stiffness, Newton's 1st Law, Hooke's Law;
• surveying networks.

The last 3 are nonlinear.

1.1.2 Matrices

To quote Jennings, "they provide a concise and simple method of describing lengthy and otherwise complicated computations." A matrix A ∈ R^{m×n} is expressed
\[
A = [a_{ij}] =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots &        &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}.
\]
For example, the directed graph

[Figure: a directed graph with 5 nodes and 8 branches]

can be represented by the node–branch incidence matrix

                        branches
             1   2   3   4   5   6   7   8
         1  -1  -1   0   0   0   0   0   1
         2   1   0  -1   0  -1   0   0   0
  nodes  3   0   0   1  -1   0  -1   0   0
         4   0   1   0   1   0   0  -1   0
         5   0   0   0   0   1   1   1  -1

Special types of matrices are a column vector
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}, \qquad x \in \mathbb{R}^m,
\]
a diagonal matrix
\[
D = \begin{pmatrix}
d_1 & 0   & \cdots & 0 \\
0   & d_2 & \cdots & 0 \\
\vdots &  & \ddots & \vdots \\
0   & 0   & \cdots & d_n
\end{pmatrix}
= \mathrm{diag}(d_1, d_2, \ldots, d_n),
\]
and an identity matrix
\[
I = \mathrm{diag}(1, 1, \ldots, 1), \qquad
e_k = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
\leftarrow k\text{th entry}.
\]

Three operations are defined for matrices:

(i) αA
(ii) A + B
(iii) C := AB, where A is m×n, B is n×p, C is m×p, and
\[
c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.
\]

Note that
\[
\alpha \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
= \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} [\alpha].
\]
Generally AB ≠ BA.

We define the transpose by A^T =: C where c_{ij} = a_{ji}. This permits a compact definition of a column vector by means of x = (x1, x2, ..., xn)^T. Note that (AB)^T = B^T A^T.

A lower or left triangular matrix, typically denoted by the symbol L, has the form
\[
\begin{pmatrix}
\times &        &        &        \\
\times & \times &        &        \\
\times & \times & \times &        \\
\times & \times & \times & \times
\end{pmatrix}.
\]
An upper or right triangular matrix, typically denoted by U or R, has the form
\[
\begin{pmatrix}
\times & \times & \times & \times \\
       & \times & \times & \times \\
       &        & \times & \times \\
       &        &        & \times
\end{pmatrix}.
\]
A unit triangular matrix has ones on the diagonal.

The determinant det(A), whose definition is complicated, has the properties

    det(αA) = α^n det(A),
    det(A^T) = det(A),
    det(AB) = det(A) det(B),
    det(A) ≠ 0  ⟺  A^{-1} exists.

Also (AB)^{-1} = B^{-1} A^{-1}.

1.1.3 Partitioned Matrices

This notion makes it possible to express many ideas without introducing the clutter of subscripts. An example of partitioning a matrix into blocks is
\[
M = \begin{pmatrix} 0 & A^T \\ A & R \end{pmatrix}
  = \begin{pmatrix}
     0 & 0 & -1 & -1 \\
     0 & 0 &  1 &  1 \\
    -1 & 1 &  5 &  0 \\
    -1 & 1 &  0 &  7
    \end{pmatrix}
\quad\text{where}\quad
A = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad
R = \begin{pmatrix} 5 & 0 \\ 0 & 7 \end{pmatrix}.
\]
In matrix operations, blocks can be treated as scalars except that multiplication is noncommutative. The product
\[
\begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1q} \\
A_{21} & A_{22} & \cdots & A_{2q} \\
\vdots &        &        & \vdots \\
A_{p1} & A_{p2} & \cdots & A_{pq}
\end{pmatrix}
\begin{pmatrix}
B_{11} & B_{12} & \cdots & B_{1r} \\
B_{21} & B_{22} & \cdots & B_{2r} \\
\vdots &        &        & \vdots \\
B_{q1} & B_{q2} & \cdots & B_{qr}
\end{pmatrix},
\]
where block row i of A has l_i rows, block column j of A and block row j of B have m_j columns and rows respectively, and block column k of B has n_k columns, can be conveniently expressed because the partitioning is conformable. The (1,1)-block of the product is A_{11}B_{11} + A_{12}B_{21} + ··· + A_{1q}B_{q1}.
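As a concrete illustration, the block formula can be checked numerically. The following is a minimal NumPy sketch (the variable names are ours; the zero block, A, and R are those of the example above) comparing the blockwise (1,1)-block of M·M with the corresponding block of the full product.

```python
# Illustrative sketch: conformable block multiplication with the example blocks.
import numpy as np

A = np.array([[-1.0, 1.0],
              [-1.0, 1.0]])
R = np.array([[5.0, 0.0],
              [0.0, 7.0]])
Z = np.zeros((2, 2))

# Assemble M = [[0, A^T], [A, R]] from its blocks.
M = np.block([[Z, A.T],
              [A, R]])

# (1,1)-block of M*M computed blockwise: Z@Z + A^T@A ...
block_11 = Z @ Z + A.T @ A
# ... versus the same block extracted from the full product.
full_11 = (M @ M)[:2, :2]

print(np.allclose(block_11, full_11))   # True
```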
Here are some examples:
\[
Ax =: \begin{pmatrix} c_1 & c_2 & \cdots & c_n \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
= x_1 c_1 + x_2 c_2 + \cdots + x_n c_n,
\]
\[
Ax =: \begin{pmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_m^T \end{pmatrix} x
= \begin{pmatrix} r_1^T x \\ r_2^T x \\ \vdots \\ r_m^T x \end{pmatrix}
\]
(the two preceding examples are computational alternatives),
\[
AB =: A \begin{pmatrix} b_1 & b_2 & \cdots & b_p \end{pmatrix}
= \begin{pmatrix} Ab_1 & Ab_2 & \cdots & Ab_p \end{pmatrix},
\]
\[
AB =: \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
\begin{pmatrix} r_1^T \\ r_2^T \\ r_3^T \end{pmatrix}
= \begin{pmatrix}
a_{11} r_1^T + a_{12} r_2^T + a_{13} r_3^T \\
a_{21} r_1^T + a_{22} r_2^T + a_{23} r_3^T \\
a_{31} r_1^T + a_{32} r_2^T + a_{33} r_3^T
\end{pmatrix}.
\]
The last example states that the rows of AB are linear combinations of rows of B. Therefore, matrix premultiplication ⇔ row operations.

1.1.4 Linear Spaces

A multiset¹ of vectors x1, x2, ..., xk is linearly independent if

    α1 x1 + α2 x2 + ··· + αk xk = 0   ⇒   α1 = α2 = ··· = αk = 0;

otherwise it is linearly dependent, which implies that one of them is a linear combination of the others.

R^n is a vector space. A subspace S is a subset which is also a vector space, i.e.,

    x ∈ S ⇒ αx ∈ S,      x, y ∈ S ⇒ x + y ∈ S.

(What are the possible subspaces for n = 3?)

Recall that span{x1, x2, ..., xk} := ··· . The dimension of S := the maximum number of linearly independent vectors. A linearly independent multiset having that many elements is a basis, e.g., R^n has a basis e1, e2, ..., en. If y1, y2, ..., yk is a basis for S, then for any x ∈ S there exist unique α1, α2, ..., αk such that x = α1 y1 + α2 y2 + ··· + αk yk, or
\[
x = \underbrace{[\,y_1, y_2, \ldots, y_k\,]}_{\text{basis}}
\underbrace{\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_k \end{pmatrix}}_{\text{coordinates}},
\]
which we note is a conformable partitioning.

Consider a matrix A ∈ R^{m×n} expressed as
\[
A = [\,c_1, c_2, \ldots, c_n\,]
\quad\text{and}\quad
A = \begin{pmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_m^T \end{pmatrix}.
\]
The range

    R(A) = span{c1, c2, ..., cn} = {Ax : x ∈ R^n}.

The null space

    N(A) = {x ∈ R^n : Ax = 0}.

Also rank(A) = dim[R(A)]. It can be shown that rank(A^T) = rank(A).

The problem Ax = b can be written

    c1 x1 + c2 x2 + ··· + cn xn = b.

It has a solution if b ∈ R(A). There is always a solution if R(A) = R^m, i.e., rank(A) = m, which implies n ≥ m. The solution is unique if c1, c2, ..., cn is a basis, which implies n = m and x = A^{-1} b.

The inner product for x, y ∈ R^n is x^T y. The outer product for x ∈ R^m, y ∈ R^n is x y^T ∈ R^{m×n}, e.g.,

    I = e1 e1^T + e2 e2^T + ··· + en en^T.

¹ A multiset, or bag, of k elements is a k-tuple in which the ordering of elements does not matter.
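As a small numerical illustration of the last two statements, here is a NumPy sketch (the 2×3 matrix A below is an arbitrary example of ours) checking the outer-product identity for I and the equality rank(A^T) = rank(A).

```python
# Illustrative sketch: I = e1 e1^T + ... + en en^T, and rank(A) = rank(A^T).
import numpy as np

n = 4
# Sum of outer products of the standard basis vectors e_k.
I_built = sum(np.outer(np.eye(n)[:, k], np.eye(n)[:, k]) for k in range(n))
print(np.array_equal(I_built, np.eye(n)))                     # True

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])                               # arbitrary 2x3 example
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # 2 2
```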
Review questions

1. Give a complete definition of the product AB where A is an m×n matrix with elements aij and B is an n×p matrix with elements bij.
2. If AB is a square matrix, when is it true that det(AB) = det(A) det(B)?
3. Give an expression for the product AB where B is partitioned into columns.
4. Define what it means for a multiset of vectors x1, x2, ..., xk to be linearly independent.
5. Define a subspace of a linear space.
6. Define span{x1, x2, ..., xk}.
7. Define the dimension of a subspace.
8. Define a basis for a subspace.
9. Define the range of a matrix.
10. Define the null space of a matrix.
11. Define the column rank of a matrix.
12. If A is an m×n matrix, what can we say about its row rank and its column rank?

Exercises

1. In the resistance network of section 1.1.1 do the currents I1, I2, I3 depend on the arbitrary choice for E2?

2. Let A denote the incidence matrix given in the example, let i = [I1, I2, ..., I8]^T, v = [V1, V2, ..., V8]^T, R = diag(R1, R2, ..., R8), and e = [E1, E2, E3, E4, E5], where Ij, Vj, Rj are the current, voltage source, and resistance along branch j and Ei is the potential at node i. Use these vectors and matrices to express Ohm's Law and Kirchhoff's Current Law as vector equations. Then eliminate i to get a single vector equation for e.

3. In an electrical resistance network such as given in the example, we can represent a simple loop by a column vector b of dimension 8 with elements −1, 0, and 1. What would be the meaning of the values −1, 0, and 1? Let A^T be the node–branch incidence matrix. What can we say about A^T b? Explain.

4. Show that if x, y ∈ R^n, then (x y^T)^k = (x^T y)^{k−1} x y^T.

5. Prove the Woodbury formula
\[
(A + UV^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}
\]
where A ∈ R^{n×n}, U, V ∈ R^{n×k}, and both A and I + V^T A^{-1} U are nonsingular.

6. Show that (A^T)^{-1} = (A^{-1})^T.

7. Show that (AB)^T = B^T A^T.

8. Use Exercise 1.10 to show that (ABC)^T = C^T B^T A^T. Avoid subscripts.

9. (Stewart, p. 66) Prove that the equation Ax = b has a solution if and only if for any y, y^T A = 0 implies y^T b = 0.

10. (advanced) (Stewart, p. 67) Show that if A ∈ R^{m×n} and rank(A) = r, then A = UV^T where U and V are of full rank r. Do not use the approach suggested by Stewart; rather, partition A and U by columns and V by rows and columns.
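Before proving identities such as the Woodbury formula, it can be helpful to test them numerically. The following NumPy sketch checks Exercise 5 on randomly generated data (this is only a sanity check, not a proof; the diagonal shift added to A is just to keep it nonsingular).

```python
# Numerical sanity check of the Woodbury formula with random test matrices.
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted so A is nonsingular
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

lhs = np.linalg.inv(A + U @ V.T)
Ainv = np.linalg.inv(A)
rhs = Ainv - Ainv @ U @ np.linalg.inv(np.eye(k) + V.T @ Ainv @ U) @ V.T @ Ainv

print(np.allclose(lhs, rhs))   # True (up to rounding error)
```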