Useful Facts, Identities, Inequalities

Linear Algebra

Notation

• A — a matrix, indicated by capitalization and bold font.
• [A]_ij — the element from the i-th row and j-th column of A; sometimes denoted A_ij for brevity.
• M_{m,n} — the set of complex-valued matrices with m rows and n columns, e.g., A ∈ M_{m,n}.
• M_n — the set of complex-valued square matrices, e.g., A ∈ M_n.
• I — the identity matrix, e.g., I = diag([1, 1, …, 1]).
• e — the vector of ones.
• σ_i(A) — the i-th singular value of A; we typically assume sorted order, σ_1(A) ≥ σ_2(A) ≥ ….
• λ_i(A) — the i-th eigenvalue of A; we may also write λ(A) = {λ_i}_i for the set of eigenvalues.
• ρ(A) — the spectral radius of A ∈ M_n, ρ(A) = max_{1≤i≤n} |λ_i|.
• Ā — the complex conjugate of A; operates entrywise, e.g., [Ā]_ij = conj([A]_ij).
• A^⊤ — the matrix transpose of A.
• A^† — the conjugate transpose of A, e.g., A^† = (Ā)^⊤.
• A^{-1} — the matrix inverse of A, e.g., A^{-1}A = I.
• A^+ — the Moore–Penrose pseudoinverse of A; sometimes written simply as A^{-1} due to sloppy notation.

Definitions

Definition 1 (Conjugate Transpose). Let A ∈ M_n; then A^† = (Ā)^⊤.
Definition 2 (Symmetric Matrix). A matrix A such that A = A^⊤.
Definition 3 (Hermitian Matrix). A matrix A such that A = A^† = (Ā)^⊤.
Definition 4 (Normal Matrix). A is normal ⟺ A^†A = AA^†. That is, a normal matrix is one that commutes with its conjugate transpose.
Definition 5 (Unitary Matrix). A is unitary ⟺ I = A^†A = AA^†.
Definition 6 (Orthogonal Matrix). A square real matrix whose columns and rows are orthonormal vectors, that is, AA^⊤ = A^⊤A = I.
Definition 7 (Idempotent Matrix). A matrix such that AA = A.
Definition 8 (Nilpotent Matrix). A matrix such that A² = AA = 0.
Definition 9 (Involutory Matrix). A matrix A such that A² = I.
Definition 10 (Stochastic Matrix). Consider a matrix P with no negative entries. P is a row stochastic matrix if each row sums to one; it is column stochastic if each column sums to one. By convention, we usually mean row stochastic when using "stochastic" without qualification. A matrix is doubly stochastic if it is both row and column stochastic.
Definition 11 (Matrix Congruence). Two square matrices A and B over a field are called congruent if there exists an invertible matrix P over the same field such that P^⊤AP = B.
Definition 12 (Matrix Similarity). Two square matrices A and B are called similar if there exists an invertible matrix P such that B = P^{-1}AP. A transformation A ↦ P^{-1}AP is called a similarity transformation or conjugation of the matrix A.
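Several of these definitions are easy to sanity-check numerically. The following sketch (NumPy; the matrices, seeds, and variable names are illustrative examples, not from the text) verifies that a Hermitian matrix equals its conjugate transpose and is therefore normal, and that row-normalizing a nonnegative matrix yields a row stochastic matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Definition 3: B + B^† is Hermitian, i.e. equal to its conjugate transpose.
H = B + B.conj().T
hermitian_ok = np.allclose(H, H.conj().T)

# Every Hermitian matrix is normal (Definition 4): H^† H = H H^†.
normal_ok = np.allclose(H.conj().T @ H, H @ H.conj().T)

# Definition 10: normalizing the rows of a nonnegative matrix gives a
# row stochastic matrix (each row sums to one).
P = np.abs(rng.standard_normal((4, 4)))
P /= P.sum(axis=1, keepdims=True)
row_stochastic_ok = np.allclose(P.sum(axis=1), 1.0)
```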
Special Matrices and Vectors

The vector of all ones, 1 = (1, 1, …, 1)^⊤. Multiplying a column vector v against 1^⊤ copies v across the columns:

  v1^⊤ = v ⊗ 1^⊤ = [ v_1 v_1 ⋯ v_1
                      v_2 v_2 ⋯ v_2
                       ⋮   ⋮  ⋱  ⋮
                      v_n v_n ⋯ v_n ]   (1)

Standard Basis

The standard (Euclidean) basis for R^n is denoted {e_i}_{i=1}^n, with e_i ∈ R^n, and

  [e_i]_j = 1 if i = j, 0 if i ≠ j   (2)

Determinants

Lemma 1 (Sylvester's Identity). Let A ∈ M_{m,n} and B ∈ M_{n,m}. Then:

  det(I_m + AB) = det(I_n + BA)   (3)

Determinant Facts

For A ∈ M_n with eigenvalues λ(A) = {λ_1, …, λ_n}:

• tr(A) = Σ_{i=1}^n λ_i
• det(A) = Π_{i=1}^n λ_i
• det(exp{A}) = exp{tr(A)}

Lemma 2 (Block Matrix Determinants). Let M ∈ M_{n+m} be a square matrix partitioned as

  M = [ A B
        C D ]   (4)

where A ∈ M_n, D ∈ M_m, with B and C being n × m and m × n respectively. If B = 0_{n,m} or C = 0_{m,n}, then

  det(M) = det(A) det(D)   (5)

Furthermore, if A is invertible, then

  det(M) = det(A) det(D − CA^{-1}B)   (6)

A similar identity holds for D invertible.

Lemma 3 (Block Triangular Matrix Determinants). Consider a matrix M ∈ M_{nm} of the form

  M = [ S_11 S_12 ⋯ S_1n
        S_21 S_22 ⋯ S_2n
         ⋮    ⋮   ⋱  ⋮
        S_n1 S_n2 ⋯ S_nn ]   (7)

where each S_ij is an m × m matrix. If M is in block triangular form (that is, S_ij = 0 for j > i), then

  det(M) = Π_{i=1}^n det(S_ii) = det(S_11) det(S_22) ⋯ det(S_nn)   (8)
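Lemmas 1 and 2 can be checked numerically on random matrices. The NumPy sketch below (sizes and seeds are arbitrary choices of mine) verifies Sylvester's identity and the Schur-complement determinant formula:

```python
import numpy as np

rng = np.random.default_rng(1)

# Lemma 1 (Sylvester): det(I_m + AB) = det(I_n + BA) for A (m x n), B (n x m).
m, n = 3, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))
lhs = np.linalg.det(np.eye(m) + A @ B)
rhs = np.linalg.det(np.eye(n) + B @ A)
sylvester_ok = np.isclose(lhs, rhs)

# Lemma 2: det(M) = det(A) det(D - C A^{-1} B) when A is invertible.
A2 = rng.standard_normal((3, 3))
B2 = rng.standard_normal((3, 2))
C2 = rng.standard_normal((2, 3))
D2 = rng.standard_normal((2, 2))
M = np.block([[A2, B2], [C2, D2]])
schur = D2 - C2 @ np.linalg.solve(A2, B2)   # Schur complement of A2 in M
block_ok = np.isclose(np.linalg.det(M),
                      np.linalg.det(A2) * np.linalg.det(schur))
```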
Definiteness

Definition 13 (Definiteness). A square matrix A is
• positive definite if x^†Ax > 0 ∀x ≠ 0.
• positive semi-definite if x^†Ax ≥ 0 ∀x.
Similar definitions apply for negative (semi-)definiteness and indefiniteness.

Some sources argue that only Hermitian matrices should count as positive definite, while others note that (real) matrices can be positive definite according to Definition 13, although this is ultimately due to the symmetric part of the matrix.

Remark 1 (Non-symmetric positive definite matrices). If A ∈ M_n(R) is a (not necessarily symmetric) real matrix, then x^⊤Ax > 0 for all non-zero real x if the symmetric part of A, defined as A_+ = (A + A^⊤)/2, is positive definite. In fact, x^⊤Ax = x^⊤A_+x ∀x because

  x^⊤Ax = (x^⊤Ax)^⊤ = x^⊤A^⊤x  ⟹  x^⊤Ax = ½ x^⊤(A + A^⊤)x   (9)

Facts about definite matrices

Assume A is positive definite. Then:
• A is always full rank.
• A is invertible and A^{-1} is also positive definite.
• A is positive definite ⟺ there exists an invertible matrix B such that A = BB^⊤.
• A_ii > 0.
• det(A) ≤ Π_i A_ii.
• rank(BAB^⊤) = rank(B).
• If X ∈ M_{n,r}, with n ≤ r and rank(X) = n, then XX^⊤ is positive definite.
• A is positive definite ⟺ eig((A + A^†)/2) > 0 (and ≥ 0 for A positive semi-definite).
• If B is symmetric, then A − tB is positive definite for sufficiently small t.

Projections

Definition 14 (Projection). Let V be a vector space, and U ⊆ V be a subspace. The projection of v ∈ V onto the subspace U with respect to some norm ‖·‖_q is:

  Πv = argmin_{u∈U} ‖v − u‖_q   (10)

For the (weighted) Euclidean norm, Π can itself be expressed as a matrix. With X a matrix whose columns span U and D the symmetric positive definite weight matrix,

  Π_D = X(X^⊤DX)^{-1}X^⊤D   (11)

although sometimes we just write Π instead of Π_D.

Projection Facts

• Projections are idempotent, i.e., Π² = Π.
• A square matrix Π ∈ M_n is called an orthogonal projection matrix if Π² = Π = Π^†.
• A non-orthogonal projection is called an oblique projection, usually expressed as Π = A(B^⊤A)^{-1}B^⊤.
• The triangle inequality applies: ‖x‖ = ‖(I − Π)x + Πx‖ ≤ ‖(I − Π)x‖ + ‖Πx‖.
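Both the quadratic-form identity of Remark 1 and the weighted projection matrix of equation (11) lend themselves to quick numeric checks. In this NumPy sketch, X and D are made-up stand-ins for a subspace basis and a positive definite weight matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
x = rng.standard_normal(5)

# Remark 1: x^T A x = x^T A_+ x, where A_+ = (A + A^T)/2 is the symmetric part.
A_plus = (A + A.T) / 2
quad_ok = np.isclose(x @ A @ x, x @ A_plus @ x)

# Equation (11): Pi_D = X (X^T D X)^{-1} X^T D is idempotent (Pi^2 = Pi).
X = rng.standard_normal((5, 2))          # columns span the subspace U
D = np.diag(rng.uniform(0.5, 2.0, 5))    # symmetric positive definite weights
Pi = X @ np.linalg.solve(X.T @ D @ X, X.T @ D)
idempotent_ok = np.allclose(Pi @ Pi, Pi)
```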
Singular Values

Let A ∈ C^{m×n} and r = min{m, n}, with σ(A) = {σ_k(A)}_{k=1}^r, and let U, V be subspaces with U ⊆ C^n, V ⊆ C^m. Then

  σ_k(A) = max_{dim(U)=k} min_{x∈U, ‖x‖=1} ‖Ax‖_2 = min_{dim(U)=n−k+1} max_{x∈U, ‖x‖=1} ‖Ax‖_2
         = max_{dim(U)=k} min_{x∈U} ‖Ax‖_2 / ‖x‖_2 = max_{dim(U)=dim(V)=k} min_{x∈U, y∈V} y^†Ax / (‖x‖_2 ‖y‖_2)   (12)

• σ_1(A) = ‖A‖_2.
• σ_i(A) = σ_i(A^⊤) = σ_i(A^†) = σ_i(Ā).
• σ_i²(A) = λ_i(AA^†) = λ_i(A^†A).
• For U and V (square) unitary matrices of appropriate dimension, σ_i(A) = σ_i(UAV).
  ∘ This also connects to why ‖A‖_2 = σ_1: ‖·‖_2 is unitarily invariant and ‖diag(d)‖_2 = max_i |d_i|, so ‖A‖_2 = ‖UΣV^†‖_2 = ‖Σ‖_2 = σ_1(A).

Matrix Inverse

Identities
• (AB)^{-1} = B^{-1}A^{-1}
• (A + CBC^⊤)^{-1} = A^{-1} − A^{-1}C(B^{-1} + C^⊤A^{-1}C)^{-1}C^⊤A^{-1}

Moore–Penrose Pseudoinverse

The pseudoinverse A^+ is the unique matrix satisfying:
• AA^+A = A
• A^+AA^+ = A^+
• (AA^+)^† = AA^+
• (A^+A)^† = A^+A

Symmetric Matrices

• Any matrix congruent to a symmetric matrix is itself symmetric, i.e., if S is symmetric, then so is ASA^⊤ for any matrix A.
• Any symmetric real matrix can be diagonalized by an orthogonal matrix.

Miscellaneous facts

• ‖Av‖_2 = (v^⊤A^⊤Av)^{1/2} ≤ ‖A^⊤A‖_2^{1/2} (v^⊤v)^{1/2}.
• |det(A)| = σ_1σ_2⋯σ_n, where σ_i is the i-th singular value of A.
• All eigenvalues of a matrix are smaller in magnitude than the largest singular value, i.e., σ_1(A) ≥ |λ_i(A)|. In particular, |λ_1(A)| = ρ(A) ≤ σ_1(A) = ‖A‖_2.
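The singular value facts above, and the second inverse identity (a Woodbury-style formula), can be verified numerically. The sketch below uses random matrices; the well-conditioned construction of the square factors in the second part is an assumption of mine so that all the inverses exist:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))

# sigma_i^2(A) = lambda_i(A A^†), with both sides sorted in decreasing order.
s = np.linalg.svd(A, compute_uv=False)            # descending by convention
eig = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]  # eigvalsh returns ascending
sv_ok = np.allclose(s**2, eig)

# sigma_1(A) = ||A||_2 (spectral norm).
norm_ok = np.isclose(s[0], np.linalg.norm(A, 2))

# (A + C B C^T)^{-1} = A^{-1} - A^{-1} C (B^{-1} + C^T A^{-1} C)^{-1} C^T A^{-1}
n, k = 5, 2
A2 = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-conditioned
B2 = np.eye(k) + 0.1 * rng.standard_normal((k, k))
C2 = rng.standard_normal((n, k))
lhs = np.linalg.inv(A2 + C2 @ B2 @ C2.T)
Ai = np.linalg.inv(A2)
rhs = Ai - Ai @ C2 @ np.linalg.inv(np.linalg.inv(B2) + C2.T @ Ai @ C2) @ C2.T @ Ai
woodbury_ok = np.allclose(lhs, rhs)
```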
Matrix Norms

Properties
• |||αA||| = |α| |||A||| (absolutely homogeneous)
• |||A + B||| ≤ |||A||| + |||B||| (sub-additive)
• |||A||| ≥ 0 (positive valued)
• |||A||| = 0 ⟹ A = 0 (definiteness)
• |||AB||| ≤ |||A||| |||B||| (sub-multiplicative)

Some vector norms, when generalized to matrices, are also matrix norms.

Induced Norms

Definition 17 (Induced Norm). A matrix norm |||·||| is said to be induced by the vector norm ‖·‖ if

  |||M||| = max_{‖x‖=1} ‖Mx‖   (19)

For an induced norm we have:
• |||I||| = 1
• ‖Ax‖ ≤ |||A||| ‖x‖ for all matrices A and vectors x
• |||AB||| ≤ |||A||| |||B||| for all A, B

For a weighted norm with W a symmetric positive definite matrix, we have

  ‖x‖_W := (x^⊤Wx)^{1/2} = ‖W^{1/2}x‖_2   (20)

The corresponding induced matrix norm is |||A|||_W = ‖W^{1/2}AW^{-1/2}‖_2.

Decompositions

Symmetric Decomposition
A square matrix A can always be written as the sum of a symmetric matrix A_+ and an antisymmetric matrix A_−, such that A = A_+ + A_−.

QR Decomposition
A decomposition of a matrix A into the product of an orthogonal matrix Q and an upper triangular matrix R such that A = QR.

Singular Value Decomposition
A decomposition of A ∈ M_{m,n} as A = UΣV^†, where U and V are unitary and Σ is diagonal with the singular values σ_i(A) on its diagonal.

Rewriting Matrices

We can rewrite some common expressions using the standard basis e_i and the single-entry matrix J^{ij} = e_i e_j^⊤.
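As a check on the induced-norm bound ‖Ax‖ ≤ |||A||| ‖x‖ and on the weighted-norm identity (20), the following NumPy sketch builds a symmetric positive definite W from a made-up eigendecomposition so that W^{1/2} is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# The spectral norm ||A||_2 is induced by ||.||_2, so ||Ax||_2 <= ||A||_2 ||x||_2.
induced_ok = np.linalg.norm(A @ x) <= np.linalg.norm(A, 2) * np.linalg.norm(x)

# Build W = Q diag(d) Q^T with d > 0: symmetric positive definite,
# with square root W^{1/2} = Q diag(sqrt(d)) Q^T.
Q = np.linalg.qr(rng.standard_normal((4, 4)))[0]   # random orthogonal matrix
d = rng.uniform(0.5, 2.0, 4)
W = Q @ np.diag(d) @ Q.T
W_half = Q @ np.diag(np.sqrt(d)) @ Q.T

# Equation (20): ||x||_W = (x^T W x)^{1/2} = ||W^{1/2} x||_2.
weighted_ok = np.isclose(np.sqrt(x @ W @ x), np.linalg.norm(W_half @ x))
```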
