Useful Facts, Identities, Inequalities

Linear Algebra

Notation

• A — a matrix, indicated by capitalization and bold font.
• [A]_ij — the element from the i-th row and j-th column of A; sometimes denoted A_ij for brevity.
• M_{m,n} — the set of complex-valued matrices with m rows and n columns; e.g., A ∈ M_{m,n}.
• M_n — the set of complex-valued square matrices; e.g., A ∈ M_n.
• I — the identity matrix, I = diag([1, 1, …, 1]).
• e — the vector of ones.
• σ_i(A) — the i-th singular value of A; typically we assume sorted order, σ_1(A) ≥ σ_2(A) ≥ ….
• λ_i(A) — the i-th eigenvalue of A; we may also write λ(A) = {λ_i}_i for the set of eigenvalues.
• ρ(A) — the spectral radius of A, ρ(A) = max_{1≤i≤n} |λ_i|.
• Ā — the complex conjugate of A; operates entrywise, i.e. [Ā]_ij is the complex conjugate of A_ij.
• A^T — the matrix transpose of A.
• A^† — the conjugate transpose of A, A^† = (Ā)^T.
• A^{-1} — the matrix inverse of A, so that A^{-1}A = I.
• A^+ — the Moore–Penrose pseudoinverse of A; sometimes written simply as A^{-1} due to sloppy notation.

Definitions

Definition 1 (Complex Conjugate). Let A ∈ M_n. Then Ā is the entrywise complex conjugate of A, and the conjugate transpose is A^† = (Ā)^T.

Definition 2 (Symmetric Matrix). A matrix A such that A = A^T.

Definition 3 (Hermitian Matrix). A matrix A such that A = A^† = (Ā)^T.

Definition 4 (Normal Matrix). A is normal ⟺ A^†A = AA^†. That is, a normal matrix is one that commutes with its conjugate transpose.

Definition 5 (Unitary Matrix). A is unitary ⟺ I = A^†A = AA^†.

Definition 6 (Orthogonal Matrix). A square matrix A whose columns and rows are orthogonal unit vectors, that is, AA^T = A^TA = I.

Definition 7 (Idempotent Matrix). A matrix such that AA = A.

Definition 8 (Nilpotent Matrix). A matrix such that A² = AA = 0.

Definition 9 (Unipotent Matrix). A matrix A such that A² = I.

Definition 10 (Stochastic Matrix). Consider a matrix P with no negative entries. P is row stochastic if each row sums to one; it is column stochastic if each column sums to one. By convention, "stochastic" without qualification usually means row stochastic. A matrix is doubly stochastic if it is both row and column stochastic.

Definition 11 (Matrix Congruence). Two square matrices A and B over a field are called congruent if there exists an invertible matrix P over the same field such that P^TAP = B.

Definition 12 (Matrix Similarity). Two square matrices A and B are called similar if there exists an invertible matrix P such that B = P^{-1}AP. The transformation A ↦ P^{-1}AP is called a similarity transformation or conjugation of the matrix A.

Special Matrices and Vectors

The vector of all ones, 1 = (1, 1, …, 1)^T. For a vector v,

    v1^T = v ⊗ 1^T = [ v_1  v_1  ⋯  v_1 ]
                     [ v_2  v_2  ⋯  v_2 ]
                     [  ⋮    ⋮   ⋱   ⋮  ]
                     [ v_n  v_n  ⋯  v_n ]                      (1)

Standard Basis

The standard (Euclidean) basis for R^n is denoted {e_i}_{i=1}^n, with e_i ∈ R^n and

    [e_i]_j = 1 if i = j, and 0 if i ≠ j.                      (2)

Determinants

Lemma 1 (Sylvester's Determinant Identity). Let A ∈ M_{m,n} and B ∈ M_{n,m}. Then

    det(I_m + AB) = det(I_n + BA)                              (3)

Determinant Facts

For A ∈ M_n with eigenvalues λ(A) = {λ_1, …, λ_n}:
• tr(A) = Σ_{i=1}^n λ_i
• det(A) = Π_{i=1}^n λ_i
• Also, det(exp{A}) = exp{tr(A)}.

Lemma 2 (Block Matrix Determinants). Let M ∈ M_{n+m} be a square matrix partitioned as

    M = [ A  B ]
        [ C  D ]                                               (4)

where A ∈ M_n, D ∈ M_m, and B and C are n × m and m × n respectively. If B = 0_{n,m} or C = 0_{m,n}, then

    det(M) = det(A) det(D)                                     (5)

Furthermore, if A is invertible, then

    det(M) = det(A) det(D − CA^{-1}B)                          (6)

A similar identity holds when D is invertible.

Lemma 3 (Block Triangular Matrix Determinants). Consider a matrix M ∈ M_{mn} of the form

    M = [ S_11  S_12  ⋯  S_1n ]
        [ S_21  S_22  ⋯  S_2n ]
        [  ⋮     ⋮    ⋱   ⋮   ]
        [ S_n1  S_n2  ⋯  S_nn ]                                (7)

where each S_ij is an m × m matrix. If M is in block triangular form (that is, S_ij = 0 for j > i), then

    det(M) = Π_{i=1}^n det(S_ii) = det(S_11) det(S_22) ⋯ det(S_nn)      (8)
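These determinant identities are easy to sanity-check numerically. The sketch below (plain NumPy; the dimensions, seed, and variable names are illustrative choices, not part of the original) verifies Sylvester's identity (3), the zero-block case (5), and the Schur-complement formula (6) on random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

# Sylvester's identity (3): det(I_m + AB) = det(I_n + BA)
assert np.isclose(np.linalg.det(np.eye(m) + A @ B),
                  np.linalg.det(np.eye(n) + B @ A))

# Block determinants (5) and (6) for M = [[A2, Bb], [Cb, D2]]
A2 = rng.standard_normal((n, n))
D2 = rng.standard_normal((m, m))
Bb = rng.standard_normal((n, m))
Cb = rng.standard_normal((m, n))

# Zero lower-left block: det(M0) = det(A2) det(D2)
M0 = np.block([[A2, Bb], [np.zeros((m, n)), D2]])
assert np.isclose(np.linalg.det(M0), np.linalg.det(A2) * np.linalg.det(D2))

# Invertible A2: det(M) = det(A2) det(D2 - Cb A2^{-1} Bb)
M = np.block([[A2, Bb], [Cb, D2]])
schur = D2 - Cb @ np.linalg.inv(A2) @ Bb
assert np.isclose(np.linalg.det(M), np.linalg.det(A2) * np.linalg.det(schur))
```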
Definiteness

Definition 13 (Definiteness). A square matrix A is
• positive definite if x^†Ax > 0 for all x ≠ 0;
• positive semi-definite if x^†Ax ≥ 0 for all x.
Similar definitions apply for negative (semi-)definiteness and indefiniteness.

Some sources argue that only Hermitian matrices should count as positive definite, while others note that (real) matrices can be positive definite according to Definition 13, although this is ultimately due to the symmetric part of the matrix.

Remark 1 (Non-symmetric positive definite matrices). If A ∈ M_n(R) is a (not necessarily symmetric) real matrix, then x^TAx > 0 for all non-zero real x if the symmetric part of A, defined as A_+ = (A + A^T)/2, is positive definite. In fact, x^TAx = x^TA_+x for all x, because

    x^TAx = (x^TAx)^T = x^TA^Tx = (1/2) x^T(A + A^T)x          (9)

Facts about definite matrices

Assume A is positive definite. Then:
• A is always full rank.
• A is invertible, and A^{-1} is also positive definite.
• A is positive definite ⟺ there exists an invertible matrix B such that A = BB^T.
• A_ii > 0.
• det(A) ≤ Π_i A_ii.
• rank(BAB^T) = rank(B).
• If X ∈ M_{n,r}, with n ≤ r and rank(X) = n, then XX^T is positive definite.
• A is positive definite ⟺ eig((A + A^†)/2) > 0 (and ≥ 0 for A positive semi-definite).
• If B is symmetric, then A − tB is positive definite for sufficiently small t.

Matrix Inverse

Identities:
• (AB)^{-1} = B^{-1}A^{-1}
• (A + CBC^T)^{-1} = A^{-1} − A^{-1}C(B^{-1} + C^TA^{-1}C)^{-1}C^TA^{-1}

Symmetric Matrices

• Any matrix congruent to a symmetric matrix is itself symmetric, i.e., if S is symmetric, then so is ASA^T for any matrix A.
• Any symmetric real matrix can be diagonalized by an orthogonal matrix.

Projections

Definition 14 (Projection). Let V be a vector space and U ⊆ V a subspace. The projection of v ∈ V onto the subspace U with respect to a norm ‖·‖_q is

    Πv = argmin_{u∈U} ‖v − u‖_q                                (10)

For the (weighted) Euclidean norm, Π can itself be expressed as a matrix: if the columns of X form a basis for U and D is the symmetric positive definite weight matrix, then

    Π_D = X(X^TDX)^{-1}X^TD                                    (11)

although sometimes we just write Π instead of Π_D.

Projection Facts

• Projections are idempotent, i.e., Π² = Π.
• A square matrix Π ∈ M_n is called an orthogonal projection matrix if Π² = Π = Π^†.
• A non-orthogonal projection is called an oblique projection, usually expressed as Π = A(B^TA)^{-1}B^T.
• The triangle inequality applies: ‖x‖ = ‖(I − Π)x + Πx‖ ≤ ‖(I − Π)x‖ + ‖Πx‖.
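The projection matrix (11) and the idempotence fact are straightforward to check numerically. Below is a minimal sketch, assuming X stacks a basis of the subspace U as its columns and D is a symmetric positive definite weight matrix; both are randomly generated here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
X = rng.standard_normal((n, k))        # columns form a basis of the subspace U
L = rng.standard_normal((n, n))
D = L @ L.T + n * np.eye(n)            # symmetric positive definite weight matrix

# Weighted projection matrix (11): Pi_D = X (X^T D X)^{-1} X^T D
Pi = X @ np.linalg.inv(X.T @ D @ X) @ X.T @ D

assert np.allclose(Pi @ Pi, Pi)        # projections are idempotent
assert np.allclose(Pi @ X, X)          # vectors already in U are left unchanged

# With D = I this reduces to the ordinary orthogonal projection onto col(X),
# which is symmetric as well as idempotent.
Pi_I = X @ np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(Pi_I, Pi_I.T)
```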
Singular Values

Let A ∈ C^{m×n} and r = min{m, n}, with σ(A) = {σ_k(A)}_{k=1}^r, and let U ⊆ C^n and V ⊆ C^m be subspaces. Then

    σ_k(A) = max_{dim(U)=k} min_{x∈U, ‖x‖=1} ‖Ax‖_2
           = min_{dim(U)=n−k+1} max_{x∈U, ‖x‖=1} ‖Ax‖_2
           = max_{dim(U)=k} min_{x∈U} ‖Ax‖_2 / ‖x‖_2
           = max_{dim(U)=dim(V)=k} min_{x∈U, y∈V} (y^†Ax) / (‖x‖_2 ‖y‖_2)      (12)

Singular value facts:
• σ_1(A) = |||A|||_2.
• σ_i(A) = σ_i(A^T) = σ_i(A^†) = σ_i(Ā).
• σ_i²(A) = λ_i(AA^†) = λ_i(A^†A).
• For U and V (square) unitary matrices of appropriate dimension, σ_i(A) = σ_i(UAV). This also connects to why ‖A‖_2 = σ_1: ‖·‖_2 is unitarily invariant and ‖diag(d)‖_2 = max_i |d_i|, so ‖A‖_2 = ‖UΣV^†‖_2 = ‖Σ‖_2 = σ_1(A).

Miscellaneous facts

• ‖Av‖_2 = (v^TA^TAv)^{1/2} ≤ ‖A^TA‖_2^{1/2} (v^Tv)^{1/2}.
• |det(A)| = σ_1σ_2⋯σ_n, where σ_i is the i-th singular value of A.
• Every eigenvalue of a matrix is at most the largest singular value in magnitude, i.e., σ_1(A) ≥ |λ_i(A)|. In particular, |λ_1(A)| = ρ(A) ≤ σ_1(A) = ‖A‖_2.

Decompositions

Symmetric Decomposition. A square matrix A can always be written as the sum of a symmetric matrix A_+ and an antisymmetric matrix A_−, such that A = A_+ + A_−, with A_+ = (A + A^T)/2 and A_− = (A − A^T)/2.

QR Decomposition. A decomposition of a matrix A into the product of an orthogonal matrix Q and an upper triangular matrix R, such that A = QR.

Singular Value Decomposition. A = UΣV^†, where U and V are unitary and Σ is diagonal with the singular values of A on its diagonal.

Rewriting Matrices

We can rewrite some common expressions using the standard basis e_i and the single-entry matrix J_ij = e_i e_j^T.

Matrix Norms

Properties:
• |||αA||| = |α| |||A||| (absolutely homogeneous)
• |||A + B||| ≤ |||A||| + |||B||| (sub-additive)
• |||A||| ≥ 0 (positive valued)
• |||A||| = 0 ⟹ A = 0 (definiteness)
• |||AB||| ≤ |||A||| |||B||| (sub-multiplicative)

Some vector norms, when generalized to matrices, are also matrix norms.

Induced norms

Definition 17 (Induced Norm). A matrix norm |||·||| is said to be induced by the vector norm ‖·‖ if

    |||M||| = max_{‖x‖=1} ‖Mx‖                                  (19)

For an induced norm we have:
• |||I||| = 1
• ‖Ax‖ ≤ |||A||| ‖x‖ for all matrices A and vectors x
• |||AB||| ≤ |||A||| |||B||| for all A, B

For a weighted norm with W a symmetric positive definite matrix, we have

    ‖x‖_W = (x^TWx)^{1/2} = ‖W^{1/2}x‖_2                        (20)

The corresponding induced matrix norm is |||A|||_W = max_{‖x‖_W=1} ‖Ax‖_W = ‖W^{1/2}AW^{-1/2}‖_2.
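The norm facts above can also be checked with a few lines of NumPy. In the sketch below, W, A, and x are drawn at random purely for illustration; it verifies the weighted-norm identity (20), that the induced 2-norm equals σ_1, and the eigenvalue bound |λ_i(A)| ≤ σ_1(A).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
G = rng.standard_normal((n, n))
W = G @ G.T + n * np.eye(n)            # symmetric positive definite weight matrix
x = rng.standard_normal(n)
A = rng.standard_normal((n, n))

# Weighted norm (20): ||x||_W = sqrt(x^T W x) = ||W^{1/2} x||_2,
# with W^{1/2} computed from the eigendecomposition of W.
evals, V = np.linalg.eigh(W)
W_half = V @ np.diag(np.sqrt(evals)) @ V.T
assert np.isclose(np.sqrt(x @ W @ x), np.linalg.norm(W_half @ x))

# Induced 2-norm equals the largest singular value: |||A|||_2 = sigma_1(A),
# and every eigenvalue is bounded by sigma_1(A) in magnitude.
sigma = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(A, 2), sigma[0])
assert np.all(np.abs(np.linalg.eigvals(A)) <= sigma[0] + 1e-12)
```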