The Exponential of a Matrix

5-28-2012

The solution to the exponential growth equation

$$\frac{dx}{dt} = kx$$

is given by $x = c_0 e^{kt}$.

It is natural to ask whether you can solve a constant coefficient linear system

$$\vec{x}\,' = A\vec{x}$$

in a similar way. If a solution to the system is to have the same form as the growth equation solution, it should look like

$$\vec{x} = e^{At}\vec{x}_0.$$

The first thing I need to do is to make sense of the matrix exponential $e^{At}$.

The Taylor series for $e^z$ is

$$e^z = \sum_{n=0}^\infty \frac{z^n}{n!}.$$

It converges absolutely for all $z$.

If $A$ is an $n \times n$ matrix with real entries, define

$$e^{At} = \sum_{n=0}^\infty \frac{t^n A^n}{n!}.$$

The powers $A^n$ make sense, since $A$ is a square matrix. It is possible to show that this series converges for all $t$ and every matrix $A$.

Differentiating the series term-by-term,

$$\frac{d}{dt} e^{At} = \sum_{n=0}^\infty \frac{n t^{n-1} A^n}{n!} = \sum_{n=1}^\infty \frac{t^{n-1} A^n}{(n-1)!} = A \sum_{n=1}^\infty \frac{t^{n-1} A^{n-1}}{(n-1)!} = A \sum_{m=0}^\infty \frac{t^m A^m}{m!} = A e^{At}.$$

This shows that $e^{At}$ solves the differential equation $\vec{x}\,' = A\vec{x}$. The initial condition vector $\vec{x}(0) = \vec{x}_0$ yields the particular solution

$$\vec{x} = e^{At}\vec{x}_0.$$

This works, because $e^{0 \cdot A} = I$ (by setting $t = 0$ in the power series).

Another familiar property of ordinary exponentials holds for the matrix exponential: If $A$ and $B$ commute (that is, $AB = BA$), then

$$e^A e^B = e^{A+B}.$$

You can prove this by multiplying the power series for the exponentials on the left. ($e^A$ is just $e^{At}$ with $t = 1$.)

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}.$$

Compute the successive powers of $A$:

$$A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}, \quad A^2 = \begin{pmatrix} 4 & 0 \\ 0 & 9 \end{pmatrix}, \quad \ldots, \quad A^n = \begin{pmatrix} 2^n & 0 \\ 0 & 3^n \end{pmatrix}.$$

Therefore,

$$e^{At} = \sum_{n=0}^\infty \frac{t^n}{n!} \begin{pmatrix} 2^n & 0 \\ 0 & 3^n \end{pmatrix} = \begin{pmatrix} \displaystyle\sum_{n=0}^\infty \frac{(2t)^n}{n!} & 0 \\ 0 & \displaystyle\sum_{n=0}^\infty \frac{(3t)^n}{n!} \end{pmatrix} = \begin{pmatrix} e^{2t} & 0 \\ 0 & e^{3t} \end{pmatrix}.$$

You can compute the exponential of an arbitrary diagonal matrix in the same way:

$$A = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}, \quad e^{At} = \begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix}.$$

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}.$$

Compute the successive powers of $A$:

$$A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \quad A^2 = \begin{pmatrix} 1 & 4 \\ 0 & 1 \end{pmatrix}, \quad A^3 = \begin{pmatrix} 1 & 6 \\ 0 & 1 \end{pmatrix}, \quad \ldots, \quad A^n = \begin{pmatrix} 1 & 2n \\ 0 & 1 \end{pmatrix}.$$
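The series definition can be checked numerically for the diagonal example above. This is a minimal sketch, assuming NumPy is available; `expm_series` is an illustrative helper that truncates the power series, not how library routines actually compute matrix exponentials:

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Approximate e^(At) by summing the first `terms` terms of t^n A^n / n!."""
    n = A.shape[0]
    result = np.zeros((n, n))
    term = np.eye(n)                     # the n = 0 term: t^0 A^0 / 0! = I
    for k in range(terms):
        result = result + term
        term = term @ A * t / (k + 1)    # next term: multiply by At / (k + 1)
    return result

A = np.array([[2.0, 0.0], [0.0, 3.0]])
t = 0.5
approx = expm_series(A, t)
exact = np.diag([np.exp(2 * t), np.exp(3 * t)])   # diag(e^{2t}, e^{3t}) from the example
assert np.allclose(approx, exact)
```

Truncating at 30 terms is more than enough here because the factorial in the denominator makes the series converge very quickly for moderate $t$.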
Hence,

$$e^{At} = \sum_{n=0}^\infty \frac{t^n}{n!} \begin{pmatrix} 1 & 2n \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \displaystyle\sum_{n=0}^\infty \frac{t^n}{n!} & \displaystyle\sum_{n=0}^\infty \frac{2n t^n}{n!} \\ 0 & \displaystyle\sum_{n=0}^\infty \frac{t^n}{n!} \end{pmatrix} = \begin{pmatrix} e^t & 2te^t \\ 0 & e^t \end{pmatrix}.$$

Here's where the last equality came from:

$$\sum_{n=0}^\infty \frac{t^n}{n!} = e^t,$$

$$\sum_{n=0}^\infty \frac{2n t^n}{n!} = 2t \sum_{n=1}^\infty \frac{t^{n-1}}{(n-1)!} = 2t \sum_{m=0}^\infty \frac{t^m}{m!} = 2te^t.$$

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 3 & -10 \\ 1 & -4 \end{pmatrix}.$$

If you compute powers of $A$ as in the last two examples, there is no evident pattern. Therefore, it would be difficult to compute the exponential using the power series.

Instead, set up the system whose coefficient matrix is $A$:

$$x' = 3x - 10y, \quad y' = x - 4y.$$

The solution is

$$x = c_1 e^t + c_2 e^{-2t}, \quad y = \frac{1}{5} c_1 e^t + \frac{1}{2} c_2 e^{-2t}.$$

Next, note that if $B$ is a $2 \times 2$ matrix,

$$B \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \text{first column of } B \quad \text{and} \quad B \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \text{second column of } B.$$

In particular, this is true for $e^{At}$. Now $\vec{x} = e^{At}\vec{x}_0$ is the solution satisfying $\vec{x}(0) = \vec{x}_0$, but

$$\vec{x} = \begin{pmatrix} c_1 e^t + c_2 e^{-2t} \\ \frac{1}{5} c_1 e^t + \frac{1}{2} c_2 e^{-2t} \end{pmatrix}.$$

Set $\vec{x}(0) = (1, 0)$ to get the first column of $e^{At}$:

$$\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} c_1 + c_2 \\ \frac{1}{5} c_1 + \frac{1}{2} c_2 \end{pmatrix}.$$

Hence, $c_1 = \dfrac{5}{3}$, $c_2 = -\dfrac{2}{3}$. So

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \frac{5}{3} e^t - \frac{2}{3} e^{-2t} \\ \frac{1}{3} e^t - \frac{1}{3} e^{-2t} \end{pmatrix}.$$

Set $\vec{x}(0) = (0, 1)$ to get the second column of $e^{At}$:

$$\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} c_1 + c_2 \\ \frac{1}{5} c_1 + \frac{1}{2} c_2 \end{pmatrix}.$$

Therefore, $c_1 = -\dfrac{10}{3}$, $c_2 = \dfrac{10}{3}$. Hence,

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -\frac{10}{3} e^t + \frac{10}{3} e^{-2t} \\ -\frac{2}{3} e^t + \frac{5}{3} e^{-2t} \end{pmatrix}.$$

Therefore,

$$e^{At} = \begin{pmatrix} \frac{5}{3} e^t - \frac{2}{3} e^{-2t} & -\frac{10}{3} e^t + \frac{10}{3} e^{-2t} \\ \frac{1}{3} e^t - \frac{1}{3} e^{-2t} & -\frac{2}{3} e^t + \frac{5}{3} e^{-2t} \end{pmatrix}.$$

I found $e^{At}$, but I had to solve a system of differential equations in order to do it. In some cases, it's possible to use linear algebra to compute the exponential of a matrix.

An $n \times n$ matrix $A$ is diagonalizable if it has $n$ independent eigenvectors. (This is true, for example, if $A$ has $n$ distinct eigenvalues.)

Suppose $A$ is diagonalizable with independent eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$ and corresponding eigenvalues $\lambda_1, \ldots, \lambda_n$. Let $S$ be the matrix whose columns are the eigenvectors:

$$S = \begin{pmatrix} \uparrow & \uparrow & & \uparrow \\ \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \\ \downarrow & \downarrow & & \downarrow \end{pmatrix}.$$

Then

$$S^{-1} A S = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} = D.$$

As I observed above,

$$e^{Dt} = \begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix}.$$
On the other hand, since $(S^{-1} A S)^n = S^{-1} A^n S$,

$$e^{Dt} = \sum_{n=0}^\infty \frac{t^n (S^{-1} A S)^n}{n!} = S^{-1} \left( \sum_{n=0}^\infty \frac{t^n A^n}{n!} \right) S = S^{-1} e^{At} S.$$

Hence,

$$e^{At} = S \begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix} S^{-1}.$$

I can use this approach to compute $e^{At}$ in case $A$ is diagonalizable.

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 3 & 5 \\ 1 & -1 \end{pmatrix}.$$

The eigenvalues are $\lambda = 4$, $\lambda = -2$. Since there are two different eigenvalues and $A$ is a $2 \times 2$ matrix, $A$ is diagonalizable. The corresponding eigenvectors are $(5, 1)$ and $(-1, 1)$. Thus,

$$S = \begin{pmatrix} 5 & -1 \\ 1 & 1 \end{pmatrix}, \quad S^{-1} = \frac{1}{6} \begin{pmatrix} 1 & 1 \\ -1 & 5 \end{pmatrix}.$$

Hence,

$$e^{At} = \begin{pmatrix} 5 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} e^{4t} & 0 \\ 0 & e^{-2t} \end{pmatrix} \cdot \frac{1}{6} \begin{pmatrix} 1 & 1 \\ -1 & 5 \end{pmatrix} = \frac{1}{6} \begin{pmatrix} 5e^{4t} + e^{-2t} & 5e^{4t} - 5e^{-2t} \\ e^{4t} - e^{-2t} & e^{4t} + 5e^{-2t} \end{pmatrix}.$$

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 5 & -6 & -6 \\ -1 & 4 & 2 \\ 3 & -6 & -4 \end{pmatrix}.$$

The eigenvalues are $\lambda = 1$ and $\lambda = 2$ (double). The corresponding eigenvectors are $(3, -1, 3)$ for $\lambda = 1$, and $(2, 1, 0)$ and $(2, 0, 1)$ for $\lambda = 2$. Since I have 3 independent eigenvectors, the matrix is diagonalizable. I have

$$S = \begin{pmatrix} 3 & 2 & 2 \\ -1 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix}, \quad S^{-1} = \begin{pmatrix} -1 & 2 & 2 \\ -1 & 3 & 2 \\ 3 & -6 & -5 \end{pmatrix}.$$

From this, it follows that

$$e^{At} = \begin{pmatrix} -3e^t + 4e^{2t} & 6e^t - 6e^{2t} & 6e^t - 6e^{2t} \\ e^t - e^{2t} & -2e^t + 3e^{2t} & -2e^t + 2e^{2t} \\ -3e^t + 3e^{2t} & 6e^t - 6e^{2t} & 6e^t - 5e^{2t} \end{pmatrix}.$$

Here's a quick check on the computation: If you set $t = 0$ in the right side, you get

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

This checks, since $e^{A \cdot 0} = I$. Note that this check isn't foolproof: just because you get $I$ by setting $t = 0$ doesn't mean your answer is right. However, if you don't get $I$, your answer is surely wrong!

How do you compute $e^{At}$ if $A$ is not diagonalizable? I'll describe an iterative algorithm for computing $e^{At}$ that only requires that one know the eigenvalues of $A$. There are various algorithms for computing the matrix exponential; this one, which is due to Williamson [1], seems to me to be the easiest for hand computation. (Note that finding the eigenvalues of a matrix is, in general, a difficult problem: Any method for finding $e^{At}$ will have to deal with it.)

Let $A$ be an $n \times n$ matrix.
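The diagonalization formula $e^{At} = S e^{Dt} S^{-1}$ can be verified numerically for the $3 \times 3$ example above. This is a short sketch, assuming NumPy is available; the value $t = 0.7$ is an arbitrary test point:

```python
import numpy as np

t = 0.7
A = np.array([[5.0, -6.0, -6.0], [-1.0, 4.0, 2.0], [3.0, -6.0, -4.0]])
S = np.array([[3.0, 2.0, 2.0], [-1.0, 1.0, 0.0], [3.0, 0.0, 1.0]])  # eigenvectors as columns
eigenvalues = np.array([1.0, 2.0, 2.0])

# e^{At} via the diagonalization formula: S * diag(e^{lambda_i t}) * S^{-1}
eAt = S @ np.diag(np.exp(eigenvalues * t)) @ np.linalg.inv(S)

# The hand-computed answer from the example
e1, e2 = np.exp(t), np.exp(2 * t)
expected = np.array([
    [-3*e1 + 4*e2,  6*e1 - 6*e2,  6*e1 - 6*e2],
    [   e1 -   e2, -2*e1 + 3*e2, -2*e1 + 2*e2],
    [-3*e1 + 3*e2,  6*e1 - 6*e2,  6*e1 - 5*e2],
])
assert np.allclose(eAt, expected)
```

The same three lines work for any diagonalizable matrix once the eigenvectors and eigenvalues are known; in practice one would get them from `np.linalg.eig` rather than by hand.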
Let $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ be a list of the eigenvalues, with multiple eigenvalues repeated according to their multiplicity.

Let

$$a_1 = e^{\lambda_1 t},$$

$$a_k = e^{\lambda_k t} \star a_{k-1}(t) = \int_0^t e^{\lambda_k (t-u)} a_{k-1}(u)\, du, \quad k = 2, \ldots, n,$$

$$B_1 = I, \quad B_k = (A - \lambda_{k-1} I) \cdot B_{k-1}, \quad k = 2, \ldots, n.$$

Then

$$e^{At} = a_1 B_1 + a_2 B_2 + \cdots + a_n B_n.$$

To prove this, I'll show that the expression on the right satisfies the differential equation $\vec{x}\,' = A\vec{x}$. To do this, I'll need two facts about the characteristic polynomial $p(x)$.

1. $(x - \lambda_1)(x - \lambda_2) \cdots (x - \lambda_n) = \pm p(x)$.

2. (Cayley-Hamilton Theorem) $p(A) = 0$.

Observe that if $p(x)$ is the characteristic polynomial, then using the first fact and the definition of the $B$'s,

$$p(x) = \pm (x - \lambda_1)(x - \lambda_2) \cdots (x - \lambda_n)$$
$$p(A) = \pm (A - \lambda_1 I)(A - \lambda_2 I) \cdots (A - \lambda_n I)$$
$$= \pm I (A - \lambda_1 I)(A - \lambda_2 I) \cdots (A - \lambda_n I)$$
$$= \pm B_1 (A - \lambda_1 I)(A - \lambda_2 I) \cdots (A - \lambda_n I)$$
$$= \pm B_2 (A - \lambda_2 I) \cdots (A - \lambda_n I)$$
$$\vdots$$
$$= \pm B_n (A - \lambda_n I).$$

By the Cayley-Hamilton Theorem,

$$\pm B_n (A - \lambda_n I) = 0. \quad (*)$$

I will use this fact in the proof below.

Example. I'll illustrate the Cayley-Hamilton theorem with the matrix

$$A = \begin{pmatrix} 2 & 3 \\ 2 & 1 \end{pmatrix}.$$

The characteristic polynomial is $(2 - \lambda)(1 - \lambda) - 6 = \lambda^2 - 3\lambda - 4$. The Cayley-Hamilton theorem asserts that if you plug $A$ into $\lambda^2 - 3\lambda - 4$, you'll get the zero matrix.

First,

$$A^2 = \begin{pmatrix} 2 & 3 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} 2 & 3 \\ 2 & 1 \end{pmatrix} = \begin{pmatrix} 10 & 9 \\ 6 & 7 \end{pmatrix}.$$

Therefore,

$$A^2 - 3A - 4I = \begin{pmatrix} 10 & 9 \\ 6 & 7 \end{pmatrix} - \begin{pmatrix} 6 & 9 \\ 6 & 3 \end{pmatrix} - \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$
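Williamson's formula can be checked against the earlier example $A = \begin{pmatrix} 3 & -10 \\ 1 & -4 \end{pmatrix}$, whose exponential was computed by solving a system. The eigenvalues are $\lambda_1 = 1$, $\lambda_2 = -2$, and for this ordering the convolution integral evaluates in closed form: $a_2 = \int_0^t e^{-2(t-u)} e^u\, du = \frac{1}{3}(e^t - e^{-2t})$. A numerical sketch, assuming NumPy is available and with $t = 0.4$ as an arbitrary test point:

```python
import numpy as np

A = np.array([[3.0, -10.0], [1.0, -4.0]])
t = 0.4
lam1, lam2 = 1.0, -2.0   # eigenvalues of A, repeated per multiplicity (none here)

# Williamson's recursion: a1 = e^{lam1 t}; a2 is the convolution integral,
# evaluated in closed form for these eigenvalues as (e^t - e^{-2t}) / 3.
a1 = np.exp(lam1 * t)
a2 = (np.exp(t) - np.exp(-2 * t)) / 3.0

B1 = np.eye(2)
B2 = (A - lam1 * np.eye(2)) @ B1   # B2 = (A - lam1 I) B1

eAt = a1 * B1 + a2 * B2

# The hand-computed answer from the earlier example
expected = np.array([
    [ 5/3 * np.exp(t) -  2/3 * np.exp(-2 * t), -10/3 * np.exp(t) + 10/3 * np.exp(-2 * t)],
    [ 1/3 * np.exp(t) -  1/3 * np.exp(-2 * t),  -2/3 * np.exp(t) +  5/3 * np.exp(-2 * t)],
])
assert np.allclose(eAt, expected)
```

So $e^{At} = e^t I + \frac{1}{3}(e^t - e^{-2t})(A - I)$ reproduces exactly the matrix found by solving the differential equation system, with far less work.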