Appendix: Basics and Useful Relations from Linear Algebra
A.1 Inner Product

$$\langle x, y \rangle_A = x^{\mathsf T} A y \qquad (A.1)$$

The index, written in an upright font, indicates the matrix used in the bilinear form. In the case of homogeneous vectors the same form $\langle x, y \rangle_A = x^{\mathsf T} A y$ is used, with the entities set in the font reserved for homogeneous quantities. We omit the index when it is clear from the context.

A.2 Determinant

A.2.1 Definition of the Determinant

The determinant of an $N \times N$ matrix is a scalar function $D = \det(A) : \mathbb{R}^{N \times N} \to \mathbb{R}$ with the following properties:

1. The determinant is linear in the columns (or rows) of the matrix. That is, if the $n$th column is $a_n = \alpha x + \beta y$ for any vectors $x, y \in \mathbb{R}^N$ and some constants $\alpha, \beta$, then
$$|(a_1, \ldots, \alpha x + \beta y, \ldots, a_N)| = \alpha\,|(a_1, \ldots, x, \ldots, a_N)| + \beta\,|(a_1, \ldots, y, \ldots, a_N)| \qquad (A.2)$$
2. When exchanging two rows or two columns, the sign of the determinant changes.
3. If $N = 1$, then $\det([1]) = 1$.

We also write
$$\det A = |A| . \qquad (A.3)$$

For $N = 2$, we have
$$\det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = a_{11} a_{22} - a_{12} a_{21} . \qquad (A.4)$$

A.2.2 Laplacian Development of a Determinant

The following theorem allows us to write the determinant of a matrix $A$ as a sum of products of sub-determinants of the matrix. Let $r = \{r_1, \ldots, r_K\}$ with $r_1 < \ldots < r_K$ be a set of row indices $r_k \in \{1, \ldots, N\}$, and $c = \{c_1, \ldots, c_K\}$ with $c_1 < \ldots < c_K$ be a set of column indices $c_k \in \{1, \ldots, N\}$. The $K \times K$ submatrix keeping only the rows $r$ and columns $c$ is written as
$$S(A, r, c) . \qquad (A.5)$$
The complementary $(N-K) \times (N-K)$ submatrix, obtained by removing these rows and columns, is
$$S'(A, r, c) . \qquad (A.6)$$

Then we have

Theorem A.2.9: Laplacian development theorem. Given the $N \times N$ matrix $A$ and two lists $r = \{r_k\}$ and $c = \{c_k\}$ of $K$ row and column indices with $1 \le r_1 < \ldots < r_K \le N$ and $1 \le c_1 < \ldots < c_K \le N$, the determinant can be expressed as
$$|A| = (-1)^{|c|} \sum_r (-1)^{|r|}\, |S(A, r, c)|\; |S'(A, r, c)| , \qquad (A.7)$$
where $|r| = r_1 + \ldots + r_K$ and $|c| = c_1 + \ldots + c_K$, and the summation is taken over all possible combinations of $r$ with $1 \le r_1 < \ldots < r_K \le N$.

Clearly, if the properties of the determinant hold for the submatrices $S(A, r, c)$ and $S'(A, r, c)$, they also hold for the determinant of the matrix $A$, which allows the theorem to be proven by induction, as it holds for $N = 2$. The determinant of a square submatrix is also called a minor. Thus the Laplacian development theorem expresses the determinant of the matrix as a sum of products of minors.

Two cases are of special interest. An important example is the development of a $4 \times 4$ matrix by the first two columns. Thus we fix $c = (1, 2)$ and obtain
$$\det A = (-1)^{1+2} \sum_r (-1)^{r_1 + r_2}\, |S(A, r, c)|\; |S'(A, r, c)| \qquad (A.8)$$
$$\begin{aligned}
&= + |S(A,(1,2),(1,2))|\,|S'(A,(1,2),(1,2))| - |S(A,(1,3),(1,2))|\,|S'(A,(1,3),(1,2))| \\
&\quad + |S(A,(1,4),(1,2))|\,|S'(A,(1,4),(1,2))| + |S(A,(2,3),(1,2))|\,|S'(A,(2,3),(1,2))| \\
&\quad - |S(A,(2,4),(1,2))|\,|S'(A,(2,4),(1,2))| + |S(A,(3,4),(1,2))|\,|S'(A,(3,4),(1,2))|
\end{aligned} \qquad (A.9)$$
$$\begin{aligned}
&= + \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \begin{vmatrix} a_{33} & a_{34} \\ a_{43} & a_{44} \end{vmatrix}
   - \begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} \begin{vmatrix} a_{23} & a_{24} \\ a_{43} & a_{44} \end{vmatrix} \\
&\quad + \begin{vmatrix} a_{11} & a_{12} \\ a_{41} & a_{42} \end{vmatrix} \begin{vmatrix} a_{23} & a_{24} \\ a_{33} & a_{34} \end{vmatrix}
   + \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} \begin{vmatrix} a_{13} & a_{14} \\ a_{43} & a_{44} \end{vmatrix} \\
&\quad - \begin{vmatrix} a_{21} & a_{22} \\ a_{41} & a_{42} \end{vmatrix} \begin{vmatrix} a_{13} & a_{14} \\ a_{33} & a_{34} \end{vmatrix}
   + \begin{vmatrix} a_{31} & a_{32} \\ a_{41} & a_{42} \end{vmatrix} \begin{vmatrix} a_{13} & a_{14} \\ a_{23} & a_{24} \end{vmatrix} .
\end{aligned} \qquad (A.10)$$

As the minors referring to a set $c$ of columns of a square matrix can be interpreted as the Plücker coordinates of the join of the points $X_c$ in $\mathbb{P}^{N-1}$ given in these columns, the determinant of a matrix is the sum of the products of the Plücker coordinates of the columns $c$ and of the columns not in $c$, taking the correct signs into account.
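The development (A.8)-(A.10) is straightforward to verify numerically. The following is a minimal sketch, not part of the book, assuming NumPy is available; it expands the determinant of a random $4 \times 4$ matrix by its first two columns and compares the result with a direct evaluation:

```python
# Numerical check of the Laplacian development (A.8)-(A.10):
# expand det(A) of a 4x4 matrix by its first two columns.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(42)
A = rng.normal(size=(4, 4))

c = (0, 1)                       # first two columns, 0-based
c_comp = (2, 3)                  # complementary columns
det = 0.0
for r in combinations(range(4), 2):                   # all row pairs r1 < r2
    r_comp = tuple(i for i in range(4) if i not in r)
    minor = np.linalg.det(A[np.ix_(r, c)])            # |S(A, r, c)|
    compl = np.linalg.det(A[np.ix_(r_comp, c_comp)])  # |S'(A, r, c)|
    # |r| and |c| in (A.7) use 1-based indices, hence the "+ 2"
    sign = (-1) ** (sum(r) + 2 + sum(c) + 2)
    det += sign * minor * compl

assert np.isclose(det, np.linalg.det(A))
```

The loop enumerates the six row pairs of (A.9); the sign follows the rule $(-1)^{|r|+|c|}$ of (A.7).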
The second application of (A.7) is the following lemma.

Lemma A.2.1: Development of a determinant by row. The determinant of an $N \times N$ matrix can be expressed as
$$|A| = \sum_{n=1}^{N} (-1)^{1+n}\, a_{1n}\, |S'(A, \{1\}, \{n\})| . \qquad (A.11)$$
This results from (A.7) by choosing $K = 1$, fixing $r = \{1\}$, and summing over the column sets $c = \{n\}$.

For example, take the determinant of a $3 \times 3$ matrix:
$$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}
= a \begin{vmatrix} e & f \\ h & i \end{vmatrix}
- b \begin{vmatrix} d & f \\ g & i \end{vmatrix}
+ c \begin{vmatrix} d & e \\ g & h \end{vmatrix} . \qquad (A.12)$$

A.2.3 Determinant of a Block Matrix

The determinant of a block matrix is given by
$$\begin{vmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{vmatrix}
= |A_{11}|\,|A_{22} - A_{21} A_{11}^{-1} A_{12}|
= |A_{22}|\,|A_{11} - A_{12} A_{22}^{-1} A_{21}| . \qquad (A.13)$$

A.3 Inverse, Adjugate, and Cofactor Matrix

The inverse $A^{-1}$ of a regular square matrix $A$ fulfils $A^{-1} A = A A^{-1} = I$. We have the Woodbury identity, for matrices $A$, $B$, $C$ of compatible dimensions,
$$(A + C B C^{\mathsf T})^{-1} = A^{-1} - A^{-1} C (C^{\mathsf T} A^{-1} C + B^{-1})^{-1} C^{\mathsf T} A^{-1} \qquad (A.14)$$
(see Petersen and Pedersen, 2012). We also have
$$A^{-1} + B^{-1} = A^{-1}(A + B)B^{-1} \qquad (A.15)$$
(see Petersen and Pedersen, 2012, (144)).

The inverse of a symmetric $2 \times 2$ block matrix is given by
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1}
= \begin{bmatrix}
A_{11}^{-1} + A_{11}^{-1} A_{12} C_2^{-1} A_{21} A_{11}^{-1} & -A_{11}^{-1} A_{12} C_2^{-1} \\
-C_2^{-1} A_{21} A_{11}^{-1} & C_2^{-1}
\end{bmatrix} \qquad (A.16)$$
$$= \begin{bmatrix}
C_1^{-1} & -C_1^{-1} A_{12} A_{22}^{-1} \\
-A_{22}^{-1} A_{21} C_1^{-1} & A_{22}^{-1} + A_{22}^{-1} A_{21} C_1^{-1} A_{12} A_{22}^{-1}
\end{bmatrix} , \qquad (A.17)$$
with
$$C_1 = A_{11} - A_{12} A_{22}^{-1} A_{21} , \qquad C_2 = A_{22} - A_{21} A_{11}^{-1} A_{12} , \qquad (A.18)$$
assuming at least one of the two submatrices $A_{ii}$ to be regular.

The cofactor matrix $A^{O}$ of a square, not necessarily regular, matrix is the matrix of the signed determinants of its submatrices,
$$A^{O} = [(-1)^{i+j} |A^{(ij)}|] , \qquad (A.19)$$
where $A^{(ij)}$ is the matrix $A$ with row $i$ and column $j$ deleted. For a $2 \times 2$ matrix we have
$$A^{O} = \begin{bmatrix} a_{22} & -a_{21} \\ -a_{12} & a_{11} \end{bmatrix} . \qquad (A.20)$$
For a general $3 \times 3$ matrix $A = [a_1, a_2, a_3]$ with column vectors $a_i$, it can be shown that
$$A^{O} = [a_2 \times a_3,\; a_3 \times a_1,\; a_1 \times a_2] . \qquad (A.21)$$

The adjugate matrix $A^{*}$ of a square matrix, which is not necessarily regular, is the transpose of the cofactor matrix,
$$A^{*} = (A^{O})^{\mathsf T} = [(-1)^{i+j} |A^{(ji)}|] . \qquad (A.22)$$
It is closely related to the inverse by
$$A^{*} = |A|\, A^{-1} , \qquad (A.23)$$
and thus is proportional to the inverse if $A$ is regular. The determinant therefore can be written as
$$|A| = \frac{1}{N}\,\mathrm{tr}(A^{*} A) = \frac{1}{N}\,\mathrm{tr}((A^{O})^{\mathsf T} A) , \qquad (A.24)$$
where $\mathrm{tr}\,A$ is the trace of the matrix $A$. Finally, we observe for regular $N \times N$ matrices,
$$(A^{*})^{*} = |A|^{N-2} A \qquad \text{and} \qquad (A^{O})^{O} = |A|^{N-2} A , \qquad (A.25)$$
due to $(A^{*})^{*} = (|A|\, A^{-1})^{*} = |A|^{N-1}\,|A|^{-1} A = |A|^{N-2} A$.

A.4 Skew-Symmetric Matrices

Skew-symmetric matrices play a central role when representing rotations. An $N \times N$ skew-symmetric matrix $S$ has the properties
$$S = -S^{\mathsf T} , \qquad (A.26)$$
$$\mathrm{tr}\, S = 0 . \qquad (A.27)$$

A.4.1 2 × 2 Skew Matrix

For a scalar $x$, we obtain the $2 \times 2$ skew-symmetric matrix
$$S_x = S(x) = \begin{bmatrix} 0 & -x \\ x & 0 \end{bmatrix} \qquad (A.28)$$
with the following properties:

• It is regular with determinant
$$\det(S(x)) = x^2 \qquad (A.29)$$
and eigenvalues
$$\lambda_1 = \mathrm{i}x , \quad \lambda_2 = -\mathrm{i}x , \quad \text{with } \mathrm{i} = \sqrt{-1} . \qquad (A.30)$$
• Its square, its cube, and its fourth power are
$$S^2(x) = -x^2 I_2 , \quad S^3(x) = -x^2 S(x) , \quad S^4(x) = x^4 I_2 . \qquad (A.31)$$
• If $x = 1$, then $S(1)$ rotates a 2-vector
$$\begin{bmatrix} -b \\ a \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = R_{90^\circ} \begin{bmatrix} a \\ b \end{bmatrix} \qquad (A.32)$$
by 90° anti-clockwise.
• We have the rotation matrix
$$R(x) = \exp(S_x) = \cos(x)\, I_2 + \sin(x)\, S(1) = \begin{bmatrix} \cos x & -\sin x \\ \sin x & \cos x \end{bmatrix} \qquad (A.33)$$
using the matrix exponential, see Sect. A.13, p. 781, which can be proven by using the definition of the matrix exponential and collecting the odd and even terms.
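The powers (A.31) and the exponential form (A.33) can be checked numerically. This is a minimal sketch, not part of the book, assuming NumPy and SciPy are available; the helper `skew2` is introduced here for illustration only:

```python
# Check the powers (A.31) and the matrix exponential (A.33) of a 2x2 skew matrix.
import numpy as np
from scipy.linalg import expm

def skew2(x):
    """2x2 skew-symmetric matrix S(x) as in (A.28)."""
    return np.array([[0.0, -x],
                     [x,  0.0]])

x = 0.7
S = skew2(x)

assert np.allclose(np.linalg.matrix_power(S, 2), -x**2 * np.eye(2))  # S^2 = -x^2 I
assert np.allclose(np.linalg.matrix_power(S, 3), -x**2 * S)          # S^3 = -x^2 S
assert np.allclose(np.linalg.matrix_power(S, 4),  x**4 * np.eye(2))  # S^4 =  x^4 I

# R(x) = exp(S_x) = cos(x) I + sin(x) S(1), cf. (A.33)
R = expm(S)
assert np.allclose(R, np.cos(x) * np.eye(2) + np.sin(x) * skew2(1.0))
```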
A.4.2 3 × 3 Skew Matrix

For a 3-vector $x = [x, y, z]^{\mathsf T}$, the $3 \times 3$ skew-symmetric matrix is defined as
$$S_x = S(x) = \begin{bmatrix} 0 & -z & y \\ z & 0 & -x \\ -y & x & 0 \end{bmatrix} . \qquad (A.34)$$
The matrix $S(x)$ has the following properties:

• The product with a 3-vector is identical to the anti-symmetric cross product of two vectors:
$$S(x)\, y = x \times y = -y \times x = -S(y)\, x . \qquad (A.35)$$
Therefore, $S(x)$ is often denoted by $[x]_\times$, leading to the intuitive relation $x \times y = [x]_\times y$. We do not follow this notation since the vector product does not immediately generalize to higher dimensions.
• Its right null space is $x$, as $x \times x = 0$.
• If $x \ne 0$, the matrix has rank 2. Its eigenvalues are
$$\lambda_1 = \mathrm{i}|x| , \quad \lambda_2 = -\mathrm{i}|x| , \quad \lambda_3 = 0 . \qquad (A.36)$$
• The matrix $S(x)$ and its square $S^2(x)$ are related to the dyad
$$D_x = x x^{\mathsf T} \quad \text{with} \quad \mathrm{tr}\, D_x = |x|^2 \qquad (A.37)$$
by
$$S_x D_x = 0 \qquad (A.38)$$
and
$$S_x^2 = x x^{\mathsf T} - |x|^2 I_3 \quad \text{with} \quad \mathrm{tr}(S_x^2) = -2|x|^2 . \qquad (A.39)$$
• The third and the fourth powers are
$$S_x^3 = -|x|^2 S_x \quad \text{and} \quad S(x)^4 = |x|^4 \left( I_3 - \frac{x x^{\mathsf T}}{|x|^2} \right) . \qquad (A.40)$$
• Therefore we have the relation, for any $3 \times 3$ skew matrix,
$$S_x S_x^{\mathsf T} S_x = \frac{1}{2}\, \mathrm{tr}(S_x S_x^{\mathsf T})\, S_x . \qquad (A.41)$$
• The following relations hold for unit vectors $r$ with $|r| = 1$:
$$D_r^2 = D_r \qquad (A.42)$$
$$S_r^2 = -(I_3 - D_r) \qquad (A.43)$$
$$S_r^3 = -S_r \qquad (A.44)$$
$$S_r^4 = I_3 - D_r . \qquad (A.45)$$

The following relations between a skew-symmetric matrix and a regular matrix are useful.
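Before turning to those relations, the properties listed above are easy to verify numerically. This is a minimal sketch, not part of the book, assuming NumPy is available; the helper `skew3` and the test vectors are illustrative only:

```python
# Numerical check of the 3x3 skew-matrix properties (A.34)-(A.41).
import numpy as np

def skew3(v):
    """3x3 skew-symmetric matrix S(v) as in (A.34)."""
    x, y, z = v
    return np.array([[0.0,  -z,   y],
                     [z,   0.0,  -x],
                     [-y,   x,  0.0]])

x = np.array([1.0, -2.0, 0.5])
y = np.array([0.3, 0.7, -1.1])
S = skew3(x)

assert np.allclose(S @ y, np.cross(x, y))   # (A.35): S(x) y = x × y
assert np.allclose(S @ x, 0.0)              # right null space is x
assert np.linalg.matrix_rank(S) == 2        # rank 2 for x != 0

n2 = x @ x                                  # |x|^2
assert np.allclose(S @ S, np.outer(x, x) - n2 * np.eye(3))    # (A.39)
assert np.allclose(S @ S @ S, -n2 * S)                        # (A.40), third power
assert np.allclose(S @ S.T @ S, 0.5 * np.trace(S @ S.T) * S)  # (A.41)
```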