Manifold Optimization Over the Set of Doubly Stochastic Matrices

Total pages: 16

File type: PDF, size: 1020 KB

Manifold Optimization Over the Set of Doubly Stochastic Matrices: A Second-Order Geometry

Ahmed Douik, Student Member, IEEE, and Babak Hassibi, Member, IEEE
Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125 USA (e-mail: {ahmed.douik,hassibi}@caltech.edu)
arXiv:1802.02628v1 [math.OC], 7 Feb 2018

Abstract—Convex optimization is a well-established research area with applications in almost all fields. Over the decades, multiple approaches have been proposed to solve convex programs. The development of interior-point methods allowed solving a more general set of convex programs known as semi-definite programs and second-order cone programs. However, it has been established that these methods are excessively slow in high dimensions, i.e., they suffer from the curse of dimensionality. On the other hand, optimization algorithms on manifolds have shown great ability in finding solutions to nonconvex problems in reasonable time. This paper is concerned with solving a subset of convex optimization problems using a different approach. The main idea behind Riemannian optimization is to view the constrained optimization problem as an unconstrained one over a restricted search space. The paper introduces three manifolds for solving convex programs under particular box constraints. The manifolds, called the doubly stochastic, symmetric, and definite multinomial manifolds, generalize the simplex, also known as the multinomial manifold. The proposed manifolds and algorithms are well adapted to solving convex programs in which the variable of interest is a multidimensional probability distribution function. Theoretical analysis and simulation results testify to the efficiency of the proposed method over state-of-the-art methods. In particular, they reveal that the proposed framework outperforms conventional generic and specialized solvers, especially in high dimensions.

Index Terms—Riemannian manifolds, symmetric doubly stochastic matrices, positive matrices, convex optimization.

I. INTRODUCTION

Numerical optimization is the foundation of various engineering and computational sciences. Consider a mapping f from a subset D of R^n to R. The goal of an optimization algorithm is to find an extreme point x* ∈ D such that f(x*) ≤ f(y) for all points y ∈ N_{x*}, a neighborhood of x*. Unconstrained Euclidean optimization refers to the setup in which the domain of the objective function is the whole space, i.e., D = R^n. (Throughout, the traditional optimization schemes are identified with the word "Euclidean," in contrast with the Riemannian algorithms discussed in the rest of the paper.) Constrained Euclidean optimization, on the other hand, denotes optimization problems in which the search set is restricted, i.e., D ⊊ R^n.

Convex optimization is a special case of constrained optimization in which both the objective function and the search set are convex. Historically initiated with the study of least-squares and linear programming problems, convex optimization plays a crucial role in optimization thanks to the desirable convergence properties it exhibits. The development of interior-point methods allowed solving a more general set of convex programs known as semi-definite programs and second-order cone programs. A summary of convex optimization methods and their performance analysis can be found in the seminal book [1].

Another important property of convex optimization is that the interior of the search space can be identified with a manifold embedded in a higher-dimensional Euclidean space. Using advanced tools to solve the constrained optimization, e.g., [2], requires working in the high-dimensional ambient space, which can be excessively slow. Riemannian optimization takes advantage of the fact that the manifold has lower dimension and exploits its underlying geometric structure. The optimization problem is reformulated from a constrained Euclidean optimization into an unconstrained optimization over a restricted search space, a.k.a. a Riemannian manifold. Thanks to this low-dimension feature, optimization over Riemannian manifolds is expected to perform more efficiently [3]. Therefore, a large body of literature is dedicated to adapting traditional Euclidean optimization methods and their convergence properties to Riemannian manifolds.

This paper introduces a framework for solving optimization problems in which the optimization variable is a doubly stochastic matrix. Such a framework is particularly interesting for clustering applications. In such problems, e.g., [4]–[7], one wishes to recover the structure of a graph given a similarity matrix. The recovery is performed by minimizing a predefined cost function under the constraint that the optimization variable is a doubly stochastic matrix. This work provides a unified framework for carrying out such optimization.
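As a concrete illustration of the constraint set at the heart of the paper, the sketch below is not taken from the paper; it is a minimal Python rendering of the classical Sinkhorn-Knopp iteration, which alternately normalizes the rows and columns of a strictly positive similarity matrix and converges to a doubly stochastic matrix, i.e., a nonnegative matrix whose row sums and column sums all equal one. The function names, tolerance, and example matrix are illustrative assumptions.

```python
import numpy as np

def sinkhorn_knopp(similarity, max_iter=1000, tol=1e-9):
    """Scale a strictly positive matrix into a doubly stochastic one.

    Classical Sinkhorn-Knopp iteration: alternately normalize rows and
    columns; for strictly positive matrices the iterates converge to a
    matrix whose row sums and column sums all equal 1.
    """
    X = np.asarray(similarity, dtype=float)
    assert (X > 0).all(), "entries must be strictly positive"
    for _ in range(max_iter):
        X = X / X.sum(axis=1, keepdims=True)       # normalize rows
        X = X / X.sum(axis=0, keepdims=True)       # normalize columns
        if np.abs(X.sum(axis=1) - 1).max() < tol:  # rows still normalized => converged
            break
    return X

# Toy usage: a random positive 4x4 similarity matrix.
rng = np.random.default_rng(0)
D = sinkhorn_knopp(rng.uniform(0.1, 1.0, size=(4, 4)))
print(D.sum(axis=0), D.sum(axis=1))                # both approximately [1, 1, 1, 1]
```

Sinkhorn scaling only produces a feasible point; the manifold framework of the paper instead treats the set of strictly positive doubly stochastic matrices as a smooth search space over which a cost function is minimized directly.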
A. State of the Art

Optimization algorithms on Riemannian manifolds appeared in the optimization literature as early as the 1970s with the work of Luenberger [8], wherein the standard Newton optimization method was adapted to problems on manifolds. A decade later, Gabay [9] introduced the steepest descent and quasi-Newton algorithms on embedded submanifolds of R^n. That work investigates the global and local convergence properties of both the steepest descent and Newton's method. The analysis of the steepest descent and Newton algorithms is extended to Riemannian manifolds in [10], [11]. By using exact line search, the authors establish the convergence of their proposed algorithms. The assumption is relaxed in [12], wherein the author provides convergence rates and guarantees for the steepest descent and Newton's method under Armijo step-size control.

The above-mentioned works replace the line search of Euclidean algorithms with a search along a geodesic, which generalizes the idea of a straight line. While the method is natural and intuitive, it might not be practical. Indeed, finding the expression of the geodesic requires computing the exponential map, which may be as complicated as solving the original optimization problem [13]. To overcome this problem, the authors in [14] suggest approximating the exponential map up to a given order, called a retraction, and show quadratic convergence of Newton's method under such a setup. This work initiated more sophisticated optimization algorithms such as the trust-region methods [3], [15]–[18]. Analyses of the convergence of first and second order methods on Riemannian manifolds, e.g., gradient and conjugate gradient descent, Newton's method, and trust-region methods, under general retractions are summarized in [13].

Thanks to the theoretical convergence guarantees mentioned above, optimization algorithms on Riemannian manifolds are gradually gaining momentum in the optimization field [3]. Several successful algorithms have been proposed to solve non-convex problems, e.g., low-rank matrix completion [19]–[21], online learning [22], clustering [23], [24], and tensor decomposition [25]. It is worth mentioning that these works modify the optimization algorithm by using a general connection instead of the genuine parallel vector transport to move from one tangent space to another while computing the (approximate) Hessian. Such an approach preserves the global convergence of the quasi-Newton scheme but no longer ensures its superlinear convergence behavior [26].
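To make the geodesic-versus-retraction distinction concrete, the following sketch (not from the paper; the unit sphere is chosen only because both maps have simple closed forms) compares the exact exponential map on the sphere, which follows a great circle, with the inexpensive projection-based retraction that agrees with it to first order.

```python
import numpy as np

def exp_map_sphere(x, xi):
    """Exact exponential map on the unit sphere: move along the geodesic
    (great circle) from x in the tangent direction xi, where <x, xi> = 0."""
    n = np.linalg.norm(xi)
    if n < 1e-16:
        return x
    return np.cos(n) * x + np.sin(n) * (xi / n)

def retraction_sphere(x, xi):
    """First-order retraction: take a straight step in the ambient space,
    then renormalize back onto the sphere. Cheaper than the exponential map."""
    y = x + xi
    return y / np.linalg.norm(y)

x = np.array([1.0, 0.0, 0.0])
xi = np.array([0.0, 0.1, 0.0])            # small tangent vector at x
print(exp_map_sphere(x, xi))              # ~[0.9950, 0.0998, 0.0]
print(retraction_sphere(x, xi))           # ~[0.9950, 0.0995, 0.0], same to first order
```

For more complicated manifolds, such as those studied in this paper, the exponential map may have no convenient closed form, which is precisely why retraction-based methods with the convergence guarantees summarized in [13] are attractive.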
Despite the advantages cited above, the use of optimization algorithms on manifolds remains relatively limited. This is mainly due to the lack of a systematic mechanism for turning a constrained optimization problem into an optimization over a manifold when the search space forms a manifold, e.g., in convex optimization. Such a reformulation, usually requiring some level of understanding of differential geometry and Riemannian manifolds, is prohibitively complex for regular use. This paper addresses ...

The paper investigates the first and second order Riemannian geometry of the proposed manifolds endowed with the Fisher information metric, which guarantees that the manifolds have a differentiable structure. For each manifold, the tangent space, Riemannian gradient, Hessian, and retraction are derived. With the aforementioned expressions, the manuscript formulates first and second order optimization algorithms and characterizes their complexity. Simulation results are provided to further illustrate the efficiency of the proposed method against state-of-the-art algorithms.

The rest of the manuscript is organized as follows: Section II introduces optimization algorithms on manifolds and lists the problems of interest in this paper. In Section III, the doubly stochastic manifold is introduced and its first and second order geometry derived. Section IV carries out a similar study for a particular case of doubly stochastic matrices known as the symmetric manifold. The study is extended to the definite symmetric manifold in Section V. Section VI suggests first and second order algorithms and analyzes their complexity. Finally, before concluding in Section VIII, the simulation results are presented and discussed in Section VII.

II. OPTIMIZATION ON RIEMANNIAN MANIFOLDS

This section introduces numerical optimization methods on smooth matrix manifolds. The first part introduces Riemannian manifold notations and operations. The second part extends the first and second order Euclidean optimization ...
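As a rough orientation for the framework outlined above, the sketch below shows a generic retraction-based Riemannian gradient descent on the multinomial manifold (strictly positive vectors summing to one), the simplex that the doubly stochastic manifolds of this paper generalize. The Fisher-metric gradient formula and the exponential-based retraction used here are standard illustrative choices, and the fixed step size, function names, and toy objective are assumptions; this is not the paper's algorithm.

```python
import numpy as np

def riemannian_gradient_descent(f_egrad, x0, step=0.1, iters=200):
    """Generic retraction-based gradient descent on the multinomial manifold
    (strictly positive vectors that sum to 1) with the Fisher information
    metric. Illustrative sketch only: fixed step size, no line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        egrad = f_egrad(x)                        # Euclidean gradient
        rgrad = x * (egrad - np.dot(x, egrad))    # Riemannian gradient under the Fisher metric
        xi = -step * rgrad                        # descent direction in the tangent space
        y = x * np.exp(xi / x)                    # retraction: keep the iterate strictly positive ...
        x = y / y.sum()                           # ... and renormalize back onto the simplex
    return x

# Toy usage: minimize ||x - t||^2 over the simplex for a target t on the simplex.
t = np.array([0.5, 0.3, 0.2])
x_opt = riemannian_gradient_descent(lambda x: 2.0 * (x - t), np.full(3, 1.0 / 3.0))
print(x_opt)   # approximately [0.5, 0.3, 0.2]
```

The same pattern, converting the Euclidean gradient into a Riemannian one under the chosen metric and then retracting the step back onto the manifold, is what the first and second order algorithms described in the rest of the manuscript build on, with matrix-valued variables in place of the probability vector used here.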
Recommended publications
  • Quantum Information: lecture notes by J. A. Jones (Michaelmas Term 2010) covering Dirac notation, quantum bits and quantum gates, an atom in a laser field, spins in magnetic fields, and photon techniques.
  • Convex Relaxations for Permutation Problems (arXiv:1306.4805 [math.OC]): F. Fogel, R. Jenatton, F. Bach, and A. d'Aspremont formulate seriation as a 2-SUM minimization over permutations and derive convex relaxations, with experiments on archeological data, Markov chains, and DNA assembly from shotgun gene sequencing.
  • MATH 237 Differential Equations and Computer Methods: supplemental course notes by Serdar Yüksel (Queen's University, 2010) on first-order and higher-order ordinary differential equations and associated computer methods.
  • The Exponential of a Matrix: expository notes defining e^{At} through its power series and showing that x(t) = e^{At} x_0 solves the constant-coefficient linear system x' = Ax.
  • Doubly Stochastic Matrices Whose Powers Eventually Stop: S.-G. Hwang and S.-S. Pyo, Linear Algebra and its Applications 330 (2001) 25–30; characterizes the doubly stochastic matrices A for which A^p = A^{p+1} for some positive integer p.
  • The Exponential Function for Matrices: expository notes on defining the matrix exponential by an infinite series, including the results on limits and absolute convergence of matrix series needed to justify the construction.
  • Approximating the Exponential from a Lie Algebra to a Lie Group: E. Celledoni and A. Iserles, Mathematics of Computation 69 (2000) 1457–1480; approximates exp(tB) by products of simpler exponentials (Strang splitting) for the numerical solution of ODEs on Lie groups.
  • Quantum Information and Quantum Noise: lecture notes by Gabriel T. Landi (University of São Paulo, 2018) reviewing quantum mechanics, density matrices and composite systems, and continuous variables.
  • Matrix Exponential Updates for On-line Learning and Bregman Projection: K. Tsuda, G. Rätsch, and M. K. Warmuth; introduces matrix-exponential parameter updates that preserve positive definiteness, analyzed via the von Neumann divergence and applied to learning a kernel matrix from distance measurements.
  • Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection: K. Tsuda, G. Rätsch, and M. K. Warmuth, Journal of Machine Learning Research 6 (2005) 995–1018; generalizes the exponentiated gradient update and AdaBoost to symmetric positive definite matrices of trace one.
  • Notes on Birkhoff–von Neumann Decomposition of Doubly Stochastic Matrices: F. Dufossé and B. Uçar, Linear Algebra and its Applications 497 (2016) 108–115; shows that computing a Birkhoff–von Neumann decomposition with the minimum number of permutation matrices is strongly NP-complete.
  • Exponentials of Skew-Symmetric Matrices and Logarithms of Orthogonal Matrices: J. R. Cardoso and F. Silva Leite, Journal of Computational and Applied Mathematics 233 (2010) 2867–2875; improves the scaling-and-squaring and inverse scaling-and-squaring methods by exploiting the structure of skew-symmetric and orthogonal matrices.