Notes on Lie Groups

Total Pages: 16

File Type: PDF, Size: 1020 KB

Notes on Lie Groups. Paul Nelson, February 19, 2019.

Contents

1 Disclaimers
2 Summary of classes and homework assignments
  2.1 9/20: The definition of a Lie group
  2.2 9/22: The connected component
  2.3 9/27: One-parameter subgroups and the exponential map
  2.4 9/29: The Lie algebra of a Lie group
  2.5 10/4 (half-lecture) and 10/6: Representations of SL(2)
  2.6 10/11 (half-lecture) and 10/13: The unitary trick
  2.7 10/18 (half-lecture): The adjoint representation
  2.8 10/20 (first half): Character theory for SL(2) (algebraic)
  2.9 10/20 (second half): Maurer-Cartan equations; lifting morphisms of Lie algebras
  2.10 10/25 (half-lecture): universal covering group
  2.11 10/27: Fundamental groups of Lie groups
  2.12 11/1: The Baker–Campbell–Hausdorff(–Dynkin) formula
  2.13 11/3: Closed subgroups are Lie; virtual subgroups vs. Lie subalgebras
  2.14 11/8: Simplicity of Lie groups and Lie algebras
  2.15 11/10: Simplicity of the special linear Lie algebra
  2.16 11/22, 11/24: How to classify classical simple complex Lie algebras
  2.17 11/29, 11/31: Why simple Lie algebras give rise to root systems
  2.18 12/6: Complex reductive vs. compact real
  2.19 12/8: Compact Lie groups: center, fundamental group
  2.20 12/13: Maximal tori in compact Lie groups
3 Selected homework solutions
4 Some notation
  4.1 Local maps
    4.1.1 Motivation
    4.1.2 Definition
    4.1.3 Equivalence
    4.1.4 Images
    4.1.5 Composition
    4.1.6 Inverses
5 Some review of calculus
  5.1 One-variable derivatives
  5.2 Multi-variable total and partial derivatives
  5.3 The chain rule
  5.4 Inverse function theorem
  5.5 Implicit function theorem
6 Some review of differential geometry
  6.1 Charts
  6.2 Manifolds
  6.3 Smooth maps
  6.4 Coordinate systems
  6.5 Tangent spaces
  6.6 Derivatives
  6.7 Jacobians
  6.8 Inverse function theorem
  6.9 Local linearization of smooth maps
    6.9.1 Linear maps in terms of coordinates
    6.9.2 The constant rank theorem
    6.9.3 The case of maximal rank
  6.10 Submanifolds
  6.11 A criterion for being a submanifold
  6.12 Computing tangent spaces of submanifolds
  6.13 Summary of how to work with submanifolds
7 Some review of differential equations
8 Some review of group theory
  8.1 Basic definition
  8.2 Permutation groups
  8.3 Topological groups
9 Some review of functional analysis
  9.1 Definitions and elementary properties of operators on Hilbert spaces
  9.2 Compact self-adjoint operators on nonzero Hilbert spaces have eigenvectors
  9.3 Spectral theorem for compact self-adjoint operators on a Hilbert space
  9.4 Basics on matrix coefficients
  9.5 Finite functions on a compact group are dense
  9.6 Schur orthogonality relations
  9.7 Peter–Weyl theorem
10 Some facts concerning invariant measures
  10.1 Definition of Haar measures
  10.2 Existence theorem
  10.3 Unimodularity of compact groups
  10.4 Direct construction for Lie groups
  10.5 Some exercises
  10.6 Construction of a Haar measure on a compact group via averaging
11 Definition and basic properties of Lie groups
  11.1 Lie groups: definition
  11.2 Basic examples
  11.3 Lie subgroups: definition
  11.4 A handy criterion for being a Lie subgroup
  11.5 Lie subgroups are closed
  11.6 Translation of tangent spaces by group elements
12 The connected component
  12.1 Generalities
  12.2 Some examples
  12.3 Connectedness of SO(n)
13 Basics on the exponential map
  13.1 Review of the matrix exponential
  13.2 One-parameter subgroups
  13.3 Definition and basic properties of the exponential map
  13.4 Application to detecting invariance by a connected Lie group
  13.5 Connected Lie subgroups are determined by their Lie algebras
  13.6 The exponential map commutes with morphisms
  13.7 Morphisms out of a connected Lie group are determined by their differentials
14 Putting the "algebra" in "Lie algebra"
  14.1 The commutator of small group elements
  14.2 The Lie bracket as an infinitesimal commutator
  14.3 Lie algebras
  14.4 The Lie bracket commutes with differentials of morphisms
15 How to pretend that every Lie group is a matrix group and survive
16 Something about representations, mostly SL2
  16.1 Some preliminaries
  16.2 Definition
  16.3 Matrix multiplication
  16.4 Invariant subspaces and irreducibility
  16.5 Polynomial representations of SL2(C)
  16.6 Classifying finite-dimensional irreducible representations of SL2(C)
  16.7 Complete reducibility
  16.8 Linear reductivity of compact groups
  16.9 Constructing new representations from old ones
    16.9.1 Direct sum
    16.9.2 Tensor product
    16.9.3 Symmetric power representations
  16.10 Characters
17 The unitary trick
18 Ad and ad
  18.1 Basic definitions
  18.2 Relationship between Ad and ad
  18.3 Interpretation of the Jacobi identity
  18.4 Ad, ad are intertwined by the exponential map
  18.5 Some low-rank exceptional isomorphisms induced by the adjoint representation
19 Maurer–Cartan equations and applications
  19.1 The equations
  19.2 Lifting morphisms of Lie algebras
20 The universal covering group
21 Quotients of Lie groups
22 Homotopy exact sequence
23 Cartan decomposition
24 BCHD
25 Some more ways to produce and detect Lie groups
  25.1 Summary
  25.2 Closed subgroups of real Lie groups
    25.2.1 Statement of the key result
    25.2.2 A toy example
    25.2.3 Proof of the key result
  25.3 Stabilizers
    25.3.1 The basic result
    25.3.2 Application to kernels
    25.3.3 Application to preimages
    25.3.4 Application to intersections
    25.3.5 Application to stabilizers of vectors or subspaces in representations
  25.4 Commutator subgroups
26 Immersed Lie subgroups
27 Simple Lie groups
28 Simplicity of the special linear group
  28.1 Some linear algebra
  28.2 Recap on sl2(C)
  28.3 Reformulation in terms of representations of abelian Lie algebras
  28.4 Roots of an abelian subalgebra
  28.5 The roots of the special linear Lie algebra
  28.6 The simplicity of sln(C)
  28.7 The proof given in lecture
29 Classification of the classical simple complex Lie algebras
  29.1 Recap
  29.2 Classical simple complex Lie algebras
  29.3 How to classify them (without worrying about why it works)
  29.4 Dynkin diagrams of classical simple Lie algebras
  29.5 Classical algebras come with faithful representations and are cut out by anti-involutions
  29.6 Diagonalization in classical Lie algebras
  29.7 Semisimplicity of elements of classical Lie algebras
  29.8 Conjugacy of Cartan subalgebras
  29.9 Interpretation of the Cartan matrix in terms of inner products
  29.10 Independence with respect to the choice of simple system
30 Why simple Lie algebras give rise to root systems
  30.1 Overview
  30.2 The basic theorem on Cartan subalgebras
  30.3 Abstract root systems
  30.4 Illustration of the root system axioms
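As a small taste of the exponential map that is central to these notes (section 13 of the contents), the following sketch, in Python with names of my own choosing, exponentiates a generator of so(2) by a truncated power series and recovers the expected rotation matrix in SO(2).

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via the truncated power series sum_n A^n / n!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n  # A^n / n! built incrementally
        result = result + term
    return result

# Generator of so(2): an infinitesimal rotation by angle theta.
theta = 0.7
X = np.array([[0.0, -theta],
              [theta, 0.0]])

R = expm_series(X)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R, expected))  # True: exp maps so(2) onto rotation matrices
```

The same truncated-series trick works for any square matrix, though for large norms a library routine with scaling and squaring is the more robust choice.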
Recommended publications
  • Quantum Information
    Quantum Information. J. A. Jones, Michaelmas Term 2010.
    Contents:
    1 Dirac Notation (1.1 Hilbert Space; 1.2 Dirac notation; 1.3 Operators; 1.4 Vectors and matrices; 1.5 Eigenvalues and eigenvectors; 1.6 Hermitian operators; 1.7 Commutators; 1.8 Unitary operators; 1.9 Operator exponentials; 1.10 Physical systems; 1.11 Time-dependent Hamiltonians; 1.12 Global phases)
    2 Quantum bits and quantum gates (2.1 The Bloch sphere; 2.2 Density matrices; 2.3 Propagators and Pauli matrices; 2.4 Quantum logic gates; 2.5 Gate notation; 2.6 Quantum networks; 2.7 Initialization and measurement; 2.8 Experimental methods)
    3 An atom in a laser field (3.1 Time-dependent systems; 3.2 Sudden jumps; 3.3 Oscillating fields; 3.4 Time-dependent perturbation theory; 3.5 Rabi flopping and Fermi's Golden Rule; 3.6 Raman transitions; 3.7 Rabi flopping as a quantum gate; 3.8 Ramsey fringes; 3.9 Measurement and initialisation)
    4 Spins in magnetic fields (4.1 The nuclear spin Hamiltonian; 4.2 The rotating frame; 4.3 On-resonance excitation; 4.4 Excitation phases; 4.5 Off-resonance excitation; 4.6 Practicalities; 4.7 The vector model; 4.8 Spin echoes; 4.9 Measurement and initialisation)
    5 Photon techniques (5.1 Spatial encoding ...)
  • MATH 237 Differential Equations and Computer Methods
    Queen's University, Mathematics and Engineering and Mathematics and Statistics. MATH 237 Differential Equations and Computer Methods: Supplemental Course Notes. Serdar Yüksel, November 19, 2010. This document is a collection of supplemental lecture notes used for Math 237: Differential Equations and Computer Methods.
    Contents:
    1 Introduction to Differential Equations (1.1 Introduction; 1.2 Classification of Differential Equations: 1.2.1 Ordinary Differential Equations, 1.2.2 Partial Differential Equations, 1.2.3 Homogeneous Differential Equations, 1.2.4 N-th order Differential Equations, 1.2.5 Linear Differential Equations; 1.3 Solutions of Differential Equations; 1.4 Direction Fields; 1.5 Fundamental Questions on First-Order Differential Equations)
    2 First-Order Ordinary Differential Equations (2.1 Exact Differential Equations; 2.2 Method of Integrating Factors; 2.3 Separable Differential Equations; 2.4 Differential Equations with Homogeneous Coefficients; 2.5 First-Order Linear Differential Equations; 2.6 Applications)
    3 Higher-Order Ordinary Linear Differential Equations (3.1 Higher-Order Differential Equations: 3.1.1 Linear Independence, 3.1.2 Wronskian of a Set of Solutions, 3.1.3 Non-Homogeneous Problem ...)
  • The Exponential of a Matrix
    5-28-2012. The Exponential of a Matrix. The solution to the exponential growth equation $\frac{dx}{dt} = kx$ is given by $x = c_0 e^{kt}$. It is natural to ask whether you can solve a constant coefficient linear system $\vec{x}\,' = A\vec{x}$ in a similar way. If a solution to the system is to have the same form as the growth equation solution, it should look like $\vec{x} = e^{At}\vec{x}_0$. The first thing I need to do is to make sense of the matrix exponential $e^{At}$. The Taylor series for $e^z$ is $e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!}$. It converges absolutely for all $z$. If $A$ is an $n \times n$ matrix with real entries, define $e^{At} = \sum_{n=0}^{\infty} \frac{t^n A^n}{n!}$. The powers $A^n$ make sense, since $A$ is a square matrix. It is possible to show that this series converges for all $t$ and every matrix $A$. Differentiating the series term by term, $\frac{d}{dt} e^{At} = \sum_{n=1}^{\infty} \frac{n t^{n-1} A^n}{n!} = \sum_{n=1}^{\infty} \frac{t^{n-1} A^n}{(n-1)!} = A \sum_{m=0}^{\infty} \frac{t^m A^m}{m!} = A e^{At}$. This shows that $e^{At}$ solves the differential equation $\vec{x}\,' = A\vec{x}$. The initial condition vector $\vec{x}(0) = \vec{x}_0$ yields the particular solution $\vec{x} = e^{At}\vec{x}_0$. This works because $e^{0 \cdot A} = I$ (by setting $t = 0$ in the power series). Another familiar property of ordinary exponentials holds for the matrix exponential: if $A$ and $B$ commute (that is, $AB = BA$), then $e^A e^B = e^{A+B}$.
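The closing claim of the excerpt above, that $e^A e^B = e^{A+B}$ when $AB = BA$, is easy to probe numerically. The sketch below is my own illustration (not code from the excerpt), using a truncated series for the exponential; it also shows that the identity generally fails for a non-commuting pair.

```python
import numpy as np

def expm_series(A, terms=60):
    """exp(A) via the truncated series sum_n A^n / n!."""
    result, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        result = result + term
    return result

# Commuting pair: polynomials in the same matrix always commute.
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = 2.0 * A
print(np.allclose(expm_series(A) @ expm_series(B), expm_series(A + B)))  # True

# Non-commuting pair: the identity generally fails.
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm_series(C) @ expm_series(D), expm_series(C + D)))  # False
```

The failure in the non-commuting case is exactly what the Baker-Campbell-Hausdorff formula from the Lie groups notes quantifies.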
  • LECTURE 6: FIBER BUNDLES in This Section We Will Introduce The
    LECTURE 6: FIBER BUNDLES. In this section we will introduce the interesting class of fibrations given by fiber bundles. Fiber bundles play an important role in many geometric contexts. For example, the Grassmannian varieties and certain fiber bundles associated to Stiefel varieties are central in the classification of vector bundles over (nice) spaces. The fact that fiber bundles are examples of Serre fibrations follows from the theorem stating that being a Serre fibration is a local property. 1. Fiber bundles and principal bundles. Definition 6.1. A fiber bundle with fiber $F$ is a map $p \colon E \to X$ with the following property: every point $x \in X$ has a neighborhood $U \subseteq X$ for which there is a homeomorphism $\phi_U \colon U \times F \cong p^{-1}(U)$ making the triangle commute, i.e. $p \circ \phi_U = \pi_1$, where $\pi_1 \colon U \times F \to U$ is the projection on the first factor. Remark 6.2. The projection $X \times F \to X$ is an example of a fiber bundle: it is called the trivial bundle over $X$ with fiber $F$. By definition, a fiber bundle is a map which is "locally" homeomorphic to a trivial bundle. The homeomorphism $\phi_U$ in the definition is a local trivialization of the bundle, or a trivialization over $U$. Let us begin with an interesting subclass. A fiber bundle whose fiber $F$ is a discrete space is (by definition) a covering projection (with fiber $F$). For example, the exponential map $\mathbb{R} \to S^1$ is a covering projection with fiber $\mathbb{Z}$. Suppose $X$ is a space which is path-connected and locally simply connected (in fact, the weaker condition of being semi-locally simply connected would be enough for the following construction).
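The covering projection $\mathbb{R} \to S^1$ from the excerpt can be made concrete. Below is a small check of my own (names are illustrative) that integer translates of a point all land in the same fiber, matching the discrete fiber $\mathbb{Z}$.

```python
import numpy as np

def p(t):
    """Covering projection R -> S^1, sending t to (cos 2*pi*t, sin 2*pi*t)."""
    return np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])

t0 = 0.3
# Every integer translate of t0 lies in the fiber p^{-1}(p(t0)).
same_fiber = all(np.allclose(p(t0 + n), p(t0)) for n in range(-3, 4))
print(same_fiber)  # True: the fiber over p(t0) is t0 + Z, a discrete set
```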
  • MTH 304: General Topology Semester 2, 2017-2018
    MTH 304: General Topology. Semester 2, 2017-2018. Dr. Prahlad Vaidyanathan.
    Contents:
    I. Continuous Functions (1. First Definitions; 2. Open Sets; 3. Continuity by Open Sets)
    II. Topological Spaces (1. Definition and Examples; 2. Metric Spaces; 3. Basis for a topology; 4. The Product Topology on $X \times Y$; 5. The Product Topology on $\prod X_\alpha$; 6. Closed Sets; 7. Continuous Functions; 8. The Quotient Topology)
    III. Properties of Topological Spaces (1. The Hausdorff property; 2. Connectedness; 3. Path Connectedness; 4. Local Connectedness; 5. Compactness; 6. Compact Subsets of $\mathbb{R}^n$; 7. Continuous Functions on Compact Sets; 8. Compactness in Metric Spaces; 9. Local Compactness)
    IV. Separation Axioms (1. Regular Spaces; 2. Normal Spaces; 3. Tietze's extension Theorem; 4. Urysohn Metrization Theorem; 5. Imbedding of Manifolds ...)
  • The Exponential Function for Matrices
    The exponential function for matrices. Matrix exponentials provide a concise way of describing the solutions to systems of homogeneous linear differential equations that parallels the use of ordinary exponentials to solve simple differential equations of the form $y' = \lambda y$. For square matrices the exponential function can be defined by the same sort of infinite series used in calculus courses, but some work is needed in order to justify the construction of such an infinite sum. Therefore we begin with some material needed to prove that certain infinite sums of matrices can be defined in a mathematically sound manner and have reasonable properties. Limits and infinite series of matrices. Limits of vector valued sequences in $\mathbb{R}^n$ can be defined and manipulated much like limits of scalar valued sequences, the key adjustment being that distances between real numbers that are expressed in the form $|s - t|$ are replaced by distances between vectors expressed in the form $|\mathbf{x} - \mathbf{y}|$. Similarly, one can talk about convergence of a vector valued infinite series $\sum_{n=0}^{\infty} \mathbf{v}_n$ in terms of the convergence of the sequence of partial sums $\mathbf{s}_n = \sum_{k=0}^{n} \mathbf{v}_k$. As in the case of ordinary infinite series, the best form of convergence is absolute convergence, which corresponds to the convergence of the real valued infinite series $\sum_{n=0}^{\infty} |\mathbf{v}_n|$ with nonnegative terms. A fundamental theorem states that a vector valued infinite series converges if the auxiliary series $\sum_{n=0}^{\infty} |\mathbf{v}_n|$ does, and there is a generalization of the standard M-test: if $|\mathbf{v}_n| \le M_n$ for all $n$ where $\sum_n M_n$ converges, then $\sum_n \mathbf{v}_n$ also converges.
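The M-test statement at the end of the excerpt can be checked numerically. The sketch below, my own illustration, compares the norm of a truncated tail of $\sum_n A^n/n!$ against the dominating scalar tail $\sum_n \|A\|^n/n!$ for a matrix with $\|A\| = 1$.

```python
import math
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew matrix with spectral norm 1

def partial_sum(A, N):
    """Partial sum of the exponential series up to and including A^N / N!."""
    S, term = np.eye(2), np.eye(2)
    for n in range(1, N + 1):
        term = term @ A / n
        S = S + term
    return S

limit = partial_sum(A, 60)  # effectively exp(A) at machine precision
N = 10
# Scalar majorant tail: M_n = ||A||^n / n! = 1 / n!.
tail_bound = sum(1.0 / math.factorial(n) for n in range(N + 1, 61))
gap = np.linalg.norm(partial_sum(A, N) - limit, 2)
print(gap <= tail_bound)  # True: the convergent scalar series dominates the matrix tail
```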
  • Approximating the Exponential from a Lie Algebra to a Lie Group
    MATHEMATICS OF COMPUTATION, Volume 69, Number 232, Pages 1457–1480. S 0025-5718(00)01223-0. Article electronically published on March 15, 2000. APPROXIMATING THE EXPONENTIAL FROM A LIE ALGEBRA TO A LIE GROUP. ELENA CELLEDONI AND ARIEH ISERLES. Abstract. Consider a differential equation $y' = A(t, y)y$, $y(0) = y_0$ with $y_0 \in G$ and $A \colon \mathbb{R}^+ \times G \to \mathfrak{g}$, where $\mathfrak{g}$ is a Lie algebra of the matricial Lie group $G$. Every $B \in \mathfrak{g}$ can be mapped to $G$ by the matrix exponential map $\exp(tB)$ with $t \in \mathbb{R}$. Most numerical methods for solving ordinary differential equations (ODEs) on Lie groups are based on the idea of representing the approximation $y_n$ of the exact solution $y(t_n)$, $t_n \in \mathbb{R}^+$, by means of exact exponentials of suitable elements of the Lie algebra, applied to the initial value $y_0$. This ensures that $y_n \in G$. When the exponential is difficult to compute exactly, as is the case when the dimension is large, an approximation of $\exp(tB)$ plays an important role in the numerical solution of ODEs on Lie groups. In some cases rational or polynomial approximants are unsuitable and we consider alternative techniques, whereby $\exp(tB)$ is approximated by a product of simpler exponentials. In this paper we present some ideas based on the use of the Strang splitting for the approximation of matrix exponentials. Several cases of $\mathfrak{g}$ and $G$ are considered, in tandem with general theory. Order conditions are discussed, and a number of numerical experiments conclude the paper. 1. Introduction. Consider the differential equation (1.1) $y' = A(t, y)y$, $y(0) \in G$, with $A \colon \mathbb{R}^+ \times G \to \mathfrak{g}$, where $G$ is a matricial Lie group and $\mathfrak{g}$ is the underlying Lie algebra.
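To give a flavor of the product-of-exponentials idea in the abstract above, here is a minimal sketch of my own construction (not the authors' code) of the Strang splitting $\exp(tB) \approx \exp(tB_1/2)\exp(tB_2)\exp(tB_1/2)$ for $B = B_1 + B_2$ in $\mathfrak{g} = \mathfrak{so}(3)$, using Rodrigues' formula for the exact $3 \times 3$ skew-symmetric exponential. The split approximation stays exactly in the group, and its local error shrinks roughly cubically with the step.

```python
import numpy as np

def expm_skew3(K):
    """Rodrigues' formula: exact exponential of a 3x3 skew-symmetric matrix."""
    theta = np.sqrt(K[0, 1]**2 + K[0, 2]**2 + K[1, 2]**2)
    if theta < 1e-12:
        return np.eye(3)
    return np.eye(3) + (np.sin(theta) / theta) * K \
        + ((1 - np.cos(theta)) / theta**2) * (K @ K)

def skew(w):
    a, b, c = w
    return np.array([[0.0, -c, b],
                     [c, 0.0, -a],
                     [-b, a, 0.0]])

B1, B2 = skew([0.4, -0.2, 0.1]), skew([-0.1, 0.3, 0.5])

def strang(t):
    """Strang splitting approximation of exp(t*(B1 + B2))."""
    return expm_skew3(t / 2 * B1) @ expm_skew3(t * B2) @ expm_skew3(t / 2 * B1)

t = 0.5
Y = strang(t)
print(np.allclose(Y.T @ Y, np.eye(3)))  # True: the approximation stays orthogonal
err = np.linalg.norm(Y - expm_skew3(t * (B1 + B2)))
err_half = np.linalg.norm(strang(t / 2) - expm_skew3(t / 2 * (B1 + B2)))
print(err_half < err / 4)  # True: local error decays like O(t^3)
```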
  • Singles out a Specific Basis
    Quantum Information and Quantum Noise. Gabriel T. Landi, University of São Paulo, July 3, 2018.
    Contents:
    1 Review of quantum mechanics (1.1 Hilbert spaces and states; 1.2 Qubits and Bloch's sphere; 1.3 Outer product and completeness; 1.4 Operators; 1.5 Eigenvalues and eigenvectors; 1.6 Unitary matrices; 1.7 Projective measurements and expectation values; 1.8 Pauli matrices; 1.9 General two-level systems; 1.10 Functions of operators; 1.11 The Trace; 1.12 Schrödinger's equation; 1.13 The Schrödinger Lagrangian)
    2 Density matrices and composite systems (2.1 The density matrix; 2.2 Bloch's sphere and coherence; 2.3 Composite systems and the almighty kron; 2.4 Entanglement; 2.5 Mixed states and entanglement; 2.6 The partial trace; 2.7 Reduced density matrices; 2.8 Singular value and Schmidt decompositions; 2.9 Entropy and mutual information; 2.10 Generalized measurements and POVMs)
    3 Continuous variables (3.1 Creation and annihilation operators; 3.2 Some important ...)
  • Basics of the Differential Geometry of Surfaces
    Chapter 20. Basics of the Differential Geometry of Surfaces. 20.1 Introduction. The purpose of this chapter is to introduce the reader to some elementary concepts of the differential geometry of surfaces. Our goal is rather modest: we simply want to introduce the concepts needed to understand the notion of Gaussian curvature, mean curvature, principal curvatures, and geodesic lines. Almost all of the material presented in this chapter is based on lectures given by Eugenio Calabi in an upper undergraduate differential geometry course offered in the fall of 1994. Most of the topics covered in this course have been included, except a presentation of the global Gauss–Bonnet–Hopf theorem, some material on special coordinate systems, and Hilbert's theorem on surfaces of constant negative curvature. What is a surface? A precise answer cannot really be given without introducing the concept of a manifold. An informal answer is to say that a surface is a set of points in $\mathbb{R}^3$ such that for every point $p$ on the surface there is a small (perhaps very small) neighborhood $U$ of $p$ that is continuously deformable into a little flat open disk. Thus, a surface should really have some topology. Also, locally, unless the point $p$ is "singular," the surface looks like a plane. Properties of surfaces can be classified into local properties and global properties. In the older literature, the study of local properties was called geometry in the small, and the study of global properties was called geometry in the large. Local properties are the properties that hold in a small neighborhood of a point on a surface. Curvature is a local property. Local properties can be studied more conveniently by assuming that the surface is parametrized locally.
  • Matrix Exponential Updates for On-Line Learning and Bregman
    Matrix Exponential Updates for On-line Learning and Bregman Projection. Koji Tsuda, Gunnar Rätsch and Manfred K. Warmuth. Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany; AIST CBRC, 2-43 Aomi, Koto-ku, Tokyo, 135-0064, Japan; Fraunhofer FIRST, Kekulestr. 7, 12489 Berlin, Germany; University of Calif. at Santa Cruz. {koji.tsuda,gunnar.raetsch}@tuebingen.mpg.de, [email protected] Abstract. We address the problem of learning a positive definite matrix from examples. The central issue is to design parameter updates that preserve positive definiteness. We introduce an update based on matrix exponentials which can be used as an on-line algorithm or for the purpose of finding a positive definite matrix that satisfies linear constraints. We derive this update using the von Neumann divergence and then use this divergence as a measure of progress for proving relative loss bounds. In experiments, we apply our algorithms to learn a kernel matrix from distance measurements. 1 Introduction. Most learning algorithms have been developed to learn a vector of parameters from data. However, an increasing number of papers are now dealing with more structured parameters. More specifically, when learning a similarity or a distance function among objects, the parameters are defined as a positive definite matrix that serves as a kernel (e.g. [12, 9, 11]). Learning is typically formulated as a parameter updating procedure to optimize a loss function. The gradient descent update [5] is one of the most commonly used algorithms, but it is not appropriate when the parameters form a positive definite matrix, because the updated parameter is not necessarily positive definite.
  • Matrix Exponentiated Gradient Updates for On-Line Learning and Bregman Projection
    Journal of Machine Learning Research 6 (2005) 995–1018. Submitted 11/04; Revised 4/05; Published 6/05. Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection. Koji Tsuda ([email protected]), Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany, and Computational Biology Research Center, National Institute of Advanced Industrial Science and Technology (AIST), 2-42 Aomi, Koto-ku, Tokyo 135-0064, Japan. Gunnar Rätsch ([email protected]), Friedrich Miescher Laboratory of the Max Planck Society, Spemannstrasse 35, 72076 Tübingen, Germany. Manfred K. Warmuth ([email protected]), Computer Science Department, University of California, Santa Cruz, CA 95064, USA. Editor: Yoram Singer. Abstract. We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated with the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric positive definite matrix subject to linear constraints. The updates generalize the exponentiated gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the derivation and the analyses of the original EG update and AdaBoost generalize to the non-diagonal case. We apply the resulting matrix exponentiated gradient (MEG) update and DefiniteBoost to the problem of learning a kernel matrix from distance measurements.
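A minimal sketch of the kind of trace-one, positive definite update the abstract describes, using symmetric eigendecompositions for the matrix log and exp. This is my own reconstruction for illustration, with hypothetical names, not the authors' code or exact algorithm.

```python
import numpy as np

def sym_logm(W):
    """Matrix log of a symmetric positive definite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(W)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def sym_expm(S):
    """Matrix exp of a symmetric matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.exp(vals)) @ vecs.T

def meg_step(W, grad, eta=0.1):
    """One exponentiated-gradient-style step, renormalized to trace one."""
    M = sym_expm(sym_logm(W) - eta * (grad + grad.T) / 2)  # symmetrized gradient
    return M / np.trace(M)

rng = np.random.default_rng(0)
W = np.eye(3) / 3.0              # trace-one positive definite starting point
G = rng.standard_normal((3, 3))  # an arbitrary loss gradient
W_new = meg_step(W, G)
print(np.isclose(np.trace(W_new), 1.0))       # True: trace one is preserved
print(np.all(np.linalg.eigvalsh(W_new) > 0))  # True: positive definiteness is preserved
```

The point of the log/exp sandwich is exactly the one made in the abstract: an additive update in log-space always exponentiates back to a positive definite matrix, which a plain gradient step on $W$ would not guarantee.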
  • Journal of Computational and Applied Mathematics Exponentials of Skew
    Journal of Computational and Applied Mathematics 233 (2010) 2867–2875. Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/cam. Exponentials of skew-symmetric matrices and logarithms of orthogonal matrices. João R. Cardoso (Institute of Engineering of Coimbra, Rua Pedro Nunes, 3030-199 Coimbra, Portugal; Institute of Systems and Robotics, University of Coimbra, Pólo II, 3030-290 Coimbra, Portugal), F. Silva Leite (Department of Mathematics, University of Coimbra, 3001-454 Coimbra, Portugal; Institute of Systems and Robotics, University of Coimbra). Article history: Received 11 July 2008. MSC: 65Fxx. Keywords: Skew-symmetric matrix; Orthogonal matrix; Matrix exponential; Matrix logarithm; Scaling and squaring; Inverse scaling and squaring; Padé approximants. Abstract. Two widely used methods for computing matrix exponentials and matrix logarithms are, respectively, the scaling and squaring and the inverse scaling and squaring. Both methods become effective when combined with Padé approximation. This paper deals with the computation of exponentials of skew-symmetric matrices and logarithms of orthogonal matrices. Our main goal is to improve these two methods by exploiting the special structure of skew-symmetric and orthogonal matrices. Geometric features of the matrix exponential and logarithm and extensions to the special Euclidean group of rigid motions are also addressed. © 2009 Elsevier B.V. All rights reserved. 1. Introduction. Given a square matrix $X \in \mathbb{R}^{n \times n}$, the exponential of $X$ is given by the absolutely convergent power series $e^X = \sum_{k=0}^{\infty} \frac{X^k}{k!}$. Reciprocally, given a nonsingular matrix $A \in \mathbb{R}^{n \times n}$, any solution of the matrix equation $e^X = A$ is called a logarithm of $A$.
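Scaling and squaring, the first of the two methods named in the abstract above, can be sketched in a toy form. The version below is my own illustration using a short truncated Taylor series where the paper uses Padé approximants (an assumed simplification, not the authors' method); applied to a skew-symmetric matrix it produces an orthogonal result, as the paper's structure-exploiting methods are designed to guarantee.

```python
import numpy as np

def expm_scaling_squaring(A, terms=12):
    """exp(A) by scaling A down, applying a short Taylor series, then squaring."""
    # Choose s so that ||A / 2^s|| is comfortably below 1.
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1e-16)))) + 1)
    B = A / 2**s
    E, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ B / n
        E = E + term
    for _ in range(s):  # undo the scaling: exp(A) = exp(A / 2^s)^(2^s)
        E = E @ E
    return E

# A skew-symmetric matrix should exponentiate to an orthogonal matrix.
A = np.array([[0.0, 2.0, -1.0],
              [-2.0, 0.0, 3.0],
              [1.0, -3.0, 0.0]])
Q = expm_scaling_squaring(A)
print(np.allclose(Q.T @ Q, np.eye(3)))    # True: Q is orthogonal
print(np.isclose(np.linalg.det(Q), 1.0))  # True: Q lies in SO(3)
```

Scaling keeps the series accurate (it converges fastest near the identity), and the repeated squaring is cheap; the paper's contribution is doing this while preserving the skew-symmetric/orthogonal structure exactly.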