Appendix B

Simple matrices

Mathematicians also attempted to develop an algebra of vectors, but there was no natural definition of the product of two vectors that held in arbitrary dimensions. The first vector algebra that involved a noncommutative vector product (that is, v × w need not equal w × v) was proposed by Hermann Grassmann in his book Ausdehnungslehre (1844). Grassmann's text also introduced the product of a column matrix and a row matrix, which resulted in what is now called a simple or a rank-one matrix. In the late 19th century the American mathematical physicist Willard Gibbs published his famous treatise on vector analysis. In that treatise Gibbs represented general matrices, which he called dyadics, as sums of simple matrices, which Gibbs called dyads. Later the physicist P. A. M. Dirac introduced the term "bra-ket" for what we now call the scalar product of a "bra" (row) vector times a "ket" (column) vector, and the term "ket-bra" for the product of a ket times a bra, resulting in what we now call a simple matrix, as above. Our convention of identifying column matrices and vectors was introduced by physicists in the 20th century.

Suddhendu Biswas [52, p.2]

B.1 Rank-1 matrix (dyad)

Any matrix formed from the unsigned outer product of two vectors,

    \Psi = u v^T \in \mathbb{R}^{M\times N}    (1783)

where u ∈ R^M and v ∈ R^N, is rank-1 and called a dyad. Conversely, any rank-1 matrix must have the form Ψ. [233, prob.1.4.1] Product −uv^T is a negative dyad. For matrix products AB^T, in general, we have

    \mathcal{R}(AB^T) \subseteq \mathcal{R}(A)\,,\qquad \mathcal{N}(AB^T) \supseteq \mathcal{N}(B^T)    (1784)

with equality when B = A^{B.1} [374, §3.3, 3.6] or, respectively, when B is invertible and N(A) = 0. Yet for all nonzero dyads we have

    \mathcal{R}(uv^T) = \mathcal{R}(u)\,,\qquad \mathcal{N}(uv^T) = \mathcal{N}(v^T) \equiv v^{\perp}    (1785)

where dim v⊥ = N − 1.

B.1 Proof. R(AA^T) ⊆ R(A) is obvious. For the reverse inclusion,

    \mathcal{R}(AA^T) = \{AA^T y \mid y \in \mathbb{R}^m\} \supseteq \{AA^T y \mid A^T y \in \mathcal{R}(A^T)\} = \mathcal{R}(A)  by (146)  ∎

[Figure 183: The four fundamental subspaces [376, §3.6] for any dyad Ψ = uv^T ∈ R^{M×N}: R(v) ⊥ N(Ψ) and N(u^T) ⊥ R(Ψ). Ψ(x) ≜ uv^T x is a linear mapping from R^N to R^M; the map from R(v) to R(u) is bijective. [374, §3.1]]

It is obvious that a dyad can be 0 only when u or v is 0;

    \Psi = uv^T = 0 \;\Leftrightarrow\; u = 0 \ \text{or}\ v = 0    (1786)

The matrix 2-norm for Ψ is equivalent to the Frobenius norm;

    \|\Psi\|_2 = \sigma_1 = \|uv^T\|_F = \|uv^T\|_2 = \|u\|\,\|v\|    (1787)

When u and v are normalized, the pseudoinverse is the transposed dyad. Otherwise,

    \Psi^{\dagger} = (uv^T)^{\dagger} = \frac{v u^T}{\|u\|^2\,\|v\|^2}    (1788)

When dyad uv^T ∈ R^{N×N} is square, uv^T has at least N−1 0-eigenvalues and corresponding eigenvectors spanning v⊥. The remaining eigenvector u spans the range of uv^T with corresponding eigenvalue

    \lambda = v^T u = \operatorname{tr}(uv^T) \in \mathbb{R}    (1789)

The determinant is the product of the eigenvalues; so, it is always true that

    \det \Psi = \det(uv^T) = 0    (1790)

When λ = 1, the square dyad is a nonorthogonal projector projecting on its range (Ψ² = Ψ, §E.6); a projector dyad. It is quite possible that u ∈ v⊥, making the remaining eigenvalue instead 0;^{B.2} λ = 0 together with the first N−1 0-eigenvalues; id est, it is possible uv^T were nonzero while all its eigenvalues are 0. The matrix

    \begin{bmatrix} 1 \\ -1 \end{bmatrix} \begin{bmatrix} 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix}    (1791)

for example, has two 0-eigenvalues. In other words, eigenvector u may simultaneously be a member of the nullspace and range of the dyad.
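The dyad identities above are easy to check numerically. The following sketch (not part of the original text; NumPy, the random seed, and the particular vectors u, v are arbitrary assumptions) verifies the rank and determinant claims from (1783) and (1790), the norm identity (1787), the pseudoinverse formula (1788), the eigenvalue λ = v^T u of (1789), and reproduces the nilpotent example (1791).

```python
# Numerical sanity check of the dyad properties (1783)-(1791); illustrative only.
import numpy as np

rng = np.random.default_rng(0)
M = N = 4                         # square, so eigenvalues of the dyad are defined
u = rng.standard_normal(M)
v = rng.standard_normal(N)
Psi = np.outer(u, v)              # dyad  Psi = u v^T

# rank-1 and zero determinant, per (1783) and (1790)
assert np.linalg.matrix_rank(Psi) == 1
assert abs(np.linalg.det(Psi)) < 1e-12

# 2-norm equals Frobenius norm equals ||u|| ||v||, per (1787)
assert np.isclose(np.linalg.norm(Psi, 2), np.linalg.norm(Psi, 'fro'))
assert np.isclose(np.linalg.norm(Psi, 2), np.linalg.norm(u) * np.linalg.norm(v))

# pseudoinverse is the scaled transposed dyad, per (1788)
Psi_pinv = np.outer(v, u) / (np.linalg.norm(u)**2 * np.linalg.norm(v)**2)
assert np.allclose(np.linalg.pinv(Psi), Psi_pinv)

# the single (possibly) nonzero eigenvalue is  lambda = v^T u = tr(u v^T), per (1789)
eigs = np.linalg.eigvals(Psi)
assert np.isclose(np.sort(np.abs(eigs))[-1], abs(v @ u))
assert np.isclose(np.trace(Psi), v @ u)

# the example (1791): a nonzero dyad whose eigenvalues are all zero
Z = np.outer([1, -1], [1, 1])
assert np.allclose(np.linalg.eigvals(Z), 0, atol=1e-6) and Z.any()
```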
The explanation is, simply, because u and v share the same dimension, dim u = M = dim v = N.

B.2 A dyad is not always diagonalizable (§A.5) because its eigenvectors are not necessarily independent.

Proof. Figure 183 depicts the four fundamental subspaces for the dyad. Linear operator Ψ : R^N → R^M provides a map between vector spaces that remain distinct when M ≠ N;

    u \in \mathcal{R}(uv^T)
    u \in \mathcal{N}(uv^T) \;\Leftrightarrow\; v^T u = 0    (1792)
    \mathcal{R}(uv^T) \cap \mathcal{N}(uv^T) = \emptyset    ∎

B.1.0.1 rank-1 modification

For A ∈ R^{M×N}, x ∈ R^N, y ∈ R^M, and y^T A x ≠ 0, [238, §2.1]^{B.3}

    \operatorname{rank}\!\left(A - \frac{A x y^T A}{y^T A x}\right) = \operatorname{rank}(A) - 1    (1794)

Given nonsingular matrix A ∈ R^{N×N} such that 1 + v^T A^{-1} u ≠ 0, [178, §4.11.2] [253, App.6] [463, §2.3 prob.16] (Sherman-Morrison-Woodbury)

    (A \pm uv^T)^{-1} = A^{-1} \mp \frac{A^{-1} u v^T A^{-1}}{1 \pm v^T A^{-1} u}    (1795)

B.3 This rank-1 modification formula has a Schur progenitor, in the symmetric case:

    \begin{array}{cl} \text{minimize}_{c} & c \\ \text{subject to} & \begin{bmatrix} A & Ax \\ y^T A & c \end{bmatrix} \succeq 0 \end{array}    (1793)

has analytical solution by (1679b): c ≥ y^T A A^{\dagger} A x = y^T A x. Difference A − A x y^T A / (y^T A x) comes from (1679c). Rank modification is provable via Theorem A.4.0.1.3.

B.1.0.2 dyad symmetry

In the specific circumstance that v = u, then uu^T ∈ R^{N×N} is symmetric, rank-1, and positive semidefinite having exactly N−1 0-eigenvalues. In fact, (Theorem A.3.1.0.7)

    uv^T \succeq 0 \;\Leftrightarrow\; v = u    (1796)

and the remaining eigenvalue is almost always positive;

    \lambda = u^T u = \operatorname{tr}(uu^T) > 0 \quad \text{unless}\ u = 0    (1797)

When λ = 1, the dyad becomes an orthogonal projector. Matrix

    \begin{bmatrix} \Psi & u \\ u^T & 1 \end{bmatrix}    (1798)

for example, is rank-1 positive semidefinite if and only if Ψ = uu^T.

B.1.1 Dyad independence

Now we consider a sum of dyads like (1783) as encountered in diagonalization and singular value decomposition:

    \mathcal{R}\!\left(\sum_{i=1}^{k} s_i w_i^T\right) = \sum_{i=1}^{k} \mathcal{R}\!\left(s_i w_i^T\right) = \sum_{i=1}^{k} \mathcal{R}(s_i) \quad \Leftarrow \quad w_i\ \forall i\ \text{are l.i.}    (1799)

range of summation is the vector sum of ranges.^{B.4} (Theorem B.1.1.1.1) Under the assumption the dyads are linearly independent (l.i.), the vector sums are unique (p.631): for {w_i} l.i. and {s_i} l.i.

    \mathcal{R}\!\left(\sum_{i=1}^{k} s_i w_i^T\right) = \mathcal{R}(s_1 w_1^T) \oplus \cdots \oplus \mathcal{R}(s_k w_k^T) = \mathcal{R}(s_1) \oplus \cdots \oplus \mathcal{R}(s_k)    (1800)

B.1.1.0.1 Definition. Linearly independent dyads. [243, p.29 thm.11] [382, p.2]
The set of k dyads

    \{\, s_i w_i^T \mid i = 1 \ldots k \,\}    (1801)

where s_i ∈ C^M and w_i ∈ C^N, is said to be linearly independent iff

    \operatorname{rank}\!\left( SW^T \triangleq \sum_{i=1}^{k} s_i w_i^T \right) = k    (1802)

where S ≜ [s_1 ⋯ s_k] ∈ C^{M×k} and W ≜ [w_1 ⋯ w_k] ∈ C^{N×k}.  △

Dyad independence does not preclude existence of a nullspace N(SW^T), as defined, nor does it imply SW^T were full-rank. In absence of an assumption of independence, generally, rank SW^T ≤ k. Conversely, any rank-k matrix can be written in the form SW^T by singular value decomposition. (§A.6)

B.1.1.0.2 Theorem. Linearly independent (l.i.) dyads.
Vectors {s_i ∈ C^M, i = 1…k} are l.i. and vectors {w_i ∈ C^N, i = 1…k} are l.i. if and only if dyads {s_i w_i^T ∈ C^{M×N}, i = 1…k} are l.i.  ⋄

Proof. Linear independence of k dyads is identical to definition (1802).
(⇒) Suppose {s_i} and {w_i} are each linearly independent sets. Invoking Sylvester's rank inequality, [233, §0.4] [463, §2.4]

    \operatorname{rank} S + \operatorname{rank} W - k \;\le\; \operatorname{rank}(SW^T) \;\le\; \min\{\operatorname{rank} S,\ \operatorname{rank} W\} \;(\le k)    (1803)

Then k ≤ rank(SW^T) ≤ k, which implies the dyads are independent.
(⇐) Conversely, suppose rank(SW^T) = k. Then

    k \;\le\; \min\{\operatorname{rank} S,\ \operatorname{rank} W\} \;\le\; k    (1804)

implying the vector sets are each independent.
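As a numerical illustration of the rank-1 modification (1794) and the Sherman-Morrison-Woodbury identity (1795), the following sketch (not from the original text; the random matrices and vectors are arbitrary choices satisfying the stated hypotheses) checks both formulas with NumPy.

```python
# Check of (1794) and (1795); illustrative only.
import numpy as np

rng = np.random.default_rng(1)
M, N = 5, 4

# rank-1 modification (1794):  rank(A - A x y^T A / (y^T A x)) = rank(A) - 1
A = rng.standard_normal((M, N))
x = rng.standard_normal(N)
y = rng.standard_normal(M)
assert abs(y @ A @ x) > 1e-8          # hypothesis  y^T A x != 0
B = A - np.outer(A @ x, y @ A) / (y @ A @ x)
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A) - 1

# Sherman-Morrison (1795), "+" sign:
#   (A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1} / (1 + v^T A^{-1} u)
A = rng.standard_normal((N, N))
u = rng.standard_normal(N)
v = rng.standard_normal(N)
Ainv = np.linalg.inv(A)
assert abs(1 + v @ Ainv @ u) > 1e-8   # hypothesis  1 + v^T A^{-1} u != 0
lhs = np.linalg.inv(A + np.outer(u, v))
rhs = Ainv - (Ainv @ np.outer(u, v) @ Ainv) / (1 + v @ Ainv @ u)
assert np.allclose(lhs, rhs)
```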
∎

B.1.1.1 Biorthogonality condition, Range and Nullspace of Sum

Dyads characterized by the biorthogonality condition W^T S = I are independent; id est, for S ∈ C^{M×k} and W ∈ C^{N×k}, if W^T S = I then rank(SW^T) = k by the linearly independent dyads theorem because (confer §E.1.1)

    W^T S = I \;\Rightarrow\; \operatorname{rank} S = \operatorname{rank} W = k \le M = N    (1805)

To see that, we need only show: N(S) = 0 ⇔ ∃ B such that BS = I.^{B.5}
(⇐) Assume BS = I. Then N(BS) = 0 = {x | BSx = 0} ⊇ N(S) by (1784).
(⇒) If N(S) = 0 then S must be full-rank thin-or-square.

    \therefore\ \exists\, A, B, C\ \text{such that}\ \begin{bmatrix} S & A \end{bmatrix} \begin{bmatrix} B \\ C \end{bmatrix} = I \quad (\text{id est,}\ [\,S\ \ A\,]\ \text{is invertible}) \;\Rightarrow\; BS = I

B.4 Moving the range R inside the summation is admitted by linear independence of {w_i}.
B.5 The left inverse B is not unique, in general.

[Figure 184: The four fundamental subspaces [376, §3.6] for doublet Π = uv^T + vu^T ∈ S^N: R(Π) = R([u v]) and N(Π) = v⊥ ∩ u⊥. Π(x) = (uv^T + vu^T)x is a linear bijective mapping from R([u v]) to R([u v]).]
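The biorthogonality condition (1805) can also be illustrated numerically. In the sketch below (not from the original text), W is chosen as S(S^T S)^{-1}, one convenient matrix satisfying W^T S = I for a random S with linearly independent columns; the rank condition (1802) and the projector property of SW^T then follow.

```python
# Illustration of the biorthogonality condition (1805); illustrative only.
import numpy as np

rng = np.random.default_rng(2)
N, k = 6, 3
S = rng.standard_normal((N, k))          # columns l.i. (almost surely)
W = S @ np.linalg.inv(S.T @ S)           # then W^T S = I_k

assert np.allclose(W.T @ S, np.eye(k))   # biorthogonality  W^T S = I
SWt = S @ W.T                            # sum of k dyads  s_i w_i^T
assert np.linalg.matrix_rank(SWt) == k   # the dyads are independent, per (1802)

# with this particular W, SW^T is the orthogonal projector onto R(S):
# idempotent, so SW^T SW^T = S (W^T S) W^T = SW^T
assert np.allclose(SWt @ SWt, SWt)
```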