PHYSICS 116A Homework 7 Solutions


1. Boas, Ch. 3, §6, Qu. 6.

The Pauli spin matrices in quantum mechanics are
$$
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad
C = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
$$
Show that $A^2 = B^2 = C^2 = I$ (the unit matrix). Also show that any two of these matrices anticommute, that is, $AB = -BA$, etc. Show that the commutator of $A$ and $B$ is $2iC$, and similarly for the other pairs in cyclic order.

$$
A^2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix} 0\cdot 0 + 1\cdot 1 & 0\cdot 1 + 1\cdot 0 \\ 1\cdot 0 + 0\cdot 1 & 1\cdot 1 + 0\cdot 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
$$
$$
B^2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}
= \begin{pmatrix} 0\cdot 0 + (-i)\cdot i & 0\cdot(-i) + (-i)\cdot 0 \\ i\cdot 0 + 0\cdot i & i\cdot(-i) + 0\cdot 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
$$
$$
C^2 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
$$
$$
AB = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}
= \begin{pmatrix} 0\cdot 0 + 1\cdot i & 0\cdot(-i) + 1\cdot 0 \\ 1\cdot 0 + 0\cdot i & 1\cdot(-i) + 0\cdot 0 \end{pmatrix}
= \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix},
$$
$$
BA = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix} -i & 0 \\ 0 & i \end{pmatrix} = -AB,
\qquad
AB - BA = \begin{pmatrix} 2i & 0 \\ 0 & -2i \end{pmatrix} = 2iC,
$$
$$
BC = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
= \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix},
\qquad
CB = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}
= \begin{pmatrix} 0 & -i \\ -i & 0 \end{pmatrix} = -BC,
$$
$$
BC - CB = \begin{pmatrix} 0 & 2i \\ 2i & 0 \end{pmatrix} = 2iA,
$$
$$
CA = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},
\qquad
AC = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
= \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = -CA,
$$
$$
CA - AC = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix} = 2iB.
$$

2. Boas, Ch. 3, §6, Qu. 7.

Find the matrix product
$$
\begin{pmatrix} 2 & 3 \end{pmatrix}
\begin{pmatrix} -1 & 4 \\ 2 & -1 \end{pmatrix}
\begin{pmatrix} -1 \\ 2 \end{pmatrix}.
$$
By evaluating this in two ways, verify the associative law for multiplication, $A(BC) = (AB)C$, which justifies our writing it as $ABC$ (i.e., without brackets).

Writing
$$
A = \begin{pmatrix} 2 & 3 \end{pmatrix}, \qquad
B = \begin{pmatrix} -1 & 4 \\ 2 & -1 \end{pmatrix}, \qquad
C = \begin{pmatrix} -1 \\ 2 \end{pmatrix},
$$
we have
$$
AB = \begin{pmatrix} 2 & 3 \end{pmatrix}\begin{pmatrix} -1 & 4 \\ 2 & -1 \end{pmatrix} = \begin{pmatrix} 4 & 5 \end{pmatrix},
\qquad\text{so}\qquad
(AB)C = \begin{pmatrix} 4 & 5 \end{pmatrix}\begin{pmatrix} -1 \\ 2 \end{pmatrix} = 6.
$$
Similarly,
$$
BC = \begin{pmatrix} -1 & 4 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} -1 \\ 2 \end{pmatrix} = \begin{pmatrix} 9 \\ -4 \end{pmatrix},
\qquad\text{so}\qquad
A(BC) = \begin{pmatrix} 2 & 3 \end{pmatrix}\begin{pmatrix} 9 \\ -4 \end{pmatrix} = 6 = (AB)C.
$$

3. Boas, problem 3.6–16.

Find the inverse of
$$
A = \begin{pmatrix} 2 & 0 & 1 \\ -1 & 1 & 2 \\ 3 & 1 & 0 \end{pmatrix}
$$
by employing Eq. (6.13) on p. 119 of Boas and also by using Gauss-Jordan elimination.

Eq. (6.13) on p. 119 of Boas is
$$
A^{-1} = \frac{C^T}{\det A}, \tag{1}
$$
where $C^T$ is the transpose of the matrix whose elements are the cofactors of $A$. First we compute the determinant of $A$ using the expansion in cofactors based on the elements of the third column:
$$
\det A = \begin{vmatrix} 2 & 0 & 1 \\ -1 & 1 & 2 \\ 3 & 1 & 0 \end{vmatrix}
= (1)\begin{vmatrix} -1 & 1 \\ 3 & 1 \end{vmatrix} - (2)\begin{vmatrix} 2 & 0 \\ 3 & 1 \end{vmatrix}
= (1)(-4) - (2)(2) = -8 \,.
$$
Next, we evaluate the matrix of cofactors and take the transpose:
$$
C^T = \begin{pmatrix}
\begin{vmatrix} 1 & 2 \\ 1 & 0 \end{vmatrix} & -\begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix} & \begin{vmatrix} 0 & 1 \\ 1 & 2 \end{vmatrix} \\[1ex]
-\begin{vmatrix} -1 & 2 \\ 3 & 0 \end{vmatrix} & \begin{vmatrix} 2 & 1 \\ 3 & 0 \end{vmatrix} & -\begin{vmatrix} 2 & 1 \\ -1 & 2 \end{vmatrix} \\[1ex]
\begin{vmatrix} -1 & 1 \\ 3 & 1 \end{vmatrix} & -\begin{vmatrix} 2 & 0 \\ 3 & 1 \end{vmatrix} & \begin{vmatrix} 2 & 0 \\ -1 & 1 \end{vmatrix}
\end{pmatrix}
= \begin{pmatrix} -2 & 1 & -1 \\ 6 & -3 & -5 \\ -4 & -2 & 2 \end{pmatrix}.
$$
The inverse of $A$ is given by Eq. (1), so it follows that
$$
A^{-1} = \frac{1}{8}\begin{pmatrix} 2 & -1 & 1 \\ -6 & 3 & 5 \\ 4 & 2 & -2 \end{pmatrix}. \tag{2}
$$
You should check this calculation by verifying that $AA^{-1} = I$, where $I$ is the $3\times 3$ identity matrix.

Next we solve the problem by Gauss-Jordan elimination, in which we perform the same row operations on the matrix $A$ and on the identity matrix $I$. This proceeds as follows:
$$
\left(\begin{array}{ccc|ccc}
2 & 0 & 1 & 1 & 0 & 0 \\
-1 & 1 & 2 & 0 & 1 & 0 \\
3 & 1 & 0 & 0 & 0 & 1
\end{array}\right)
\xrightarrow{R_3 \to R_3 - \frac32 R_1}
\left(\begin{array}{ccc|ccc}
2 & 0 & 1 & 1 & 0 & 0 \\
-1 & 1 & 2 & 0 & 1 & 0 \\
0 & 1 & -\frac32 & -\frac32 & 0 & 1
\end{array}\right)
$$
$$
\xrightarrow{R_2 \to R_2 + \frac12 R_1}
\left(\begin{array}{ccc|ccc}
2 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & \frac52 & \frac12 & 1 & 0 \\
0 & 1 & -\frac32 & -\frac32 & 0 & 1
\end{array}\right)
\xrightarrow{R_3 \to R_3 - R_2}
\left(\begin{array}{ccc|ccc}
2 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & \frac52 & \frac12 & 1 & 0 \\
0 & 0 & -4 & -2 & -1 & 1
\end{array}\right)
$$
$$
\xrightarrow{R_3 \to -\frac14 R_3}
\left(\begin{array}{ccc|ccc}
2 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & \frac52 & \frac12 & 1 & 0 \\
0 & 0 & 1 & \frac12 & \frac14 & -\frac14
\end{array}\right)
\xrightarrow{R_2 \to R_2 - \frac52 R_3}
\left(\begin{array}{ccc|ccc}
2 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & 0 & -\frac34 & \frac38 & \frac58 \\
0 & 0 & 1 & \frac12 & \frac14 & -\frac14
\end{array}\right)
$$
$$
\xrightarrow{R_1 \to R_1 - R_3}
\left(\begin{array}{ccc|ccc}
2 & 0 & 0 & \frac12 & -\frac14 & \frac14 \\
0 & 1 & 0 & -\frac34 & \frac38 & \frac58 \\
0 & 0 & 1 & \frac12 & \frac14 & -\frac14
\end{array}\right)
\xrightarrow{R_1 \to \frac12 R_1}
\left(\begin{array}{ccc|ccc}
1 & 0 & 0 & \frac14 & -\frac18 & \frac18 \\
0 & 1 & 0 & -\frac34 & \frac38 & \frac58 \\
0 & 0 & 1 & \frac12 & \frac14 & -\frac14
\end{array}\right). \tag{3}
$$
The left-hand matrix, which started as $A$, has been transformed into the identity matrix, so the right-hand matrix, which started as the identity matrix, has been transformed into $A^{-1}$. Thus
$$
A^{-1} = \frac{1}{8}\begin{pmatrix} 2 & -1 & 1 \\ -6 & 3 & 5 \\ 4 & 2 & -2 \end{pmatrix}, \tag{4}
$$
in agreement with Eq. (2).
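As a quick numerical cross-check of problems 1–3 (not part of the assigned solutions), the identities above can be verified in a few lines of Python. This is a minimal sketch assuming NumPy is available; the variable names are our own.

```python
import numpy as np

# Problem 1: Pauli matrices
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[0, -1j], [1j, 0]])
C = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

for M in (A, B, C):                         # A^2 = B^2 = C^2 = I
    assert np.allclose(M @ M, I2)
assert np.allclose(A @ B, -(B @ A))         # anticommutation
assert np.allclose(A @ B - B @ A, 2j * C)   # [A,B] = 2iC, and cyclically
assert np.allclose(B @ C - C @ B, 2j * A)
assert np.allclose(C @ A - A @ C, 2j * B)

# Problem 2: associativity of the triple product (both orders give 6)
a = np.array([[2, 3]])
b = np.array([[-1, 4], [2, -1]])
c = np.array([[-1], [2]])
assert np.allclose((a @ b) @ c, a @ (b @ c))

# Problem 3: the inverse matches Eq. (2), and A A^{-1} = I
A3 = np.array([[2, 0, 1], [-1, 1, 2], [3, 1, 0]], dtype=float)
A3_inv = np.linalg.inv(A3)
assert np.allclose(A3_inv, np.array([[2, -1, 1], [-6, 3, 5], [4, 2, -2]]) / 8)
assert np.allclose(A3 @ A3_inv, np.eye(3))
print("problems 1-3: all checks pass")
```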
The Gauss-Jordan method can be efficiently implemented on a computer and, except for small matrices, requires many fewer operations than using Eq. (1). By hand, the Gauss-Jordan method involves a lot of copying from line to line, which is not needed in a computer program.

4. Boas, problem 3.6–21.

Solve the following set of equations by the method of finding the inverse of the coefficient matrix:
$$
x + 2z = 8, \qquad 2x - y = -5, \qquad x + y + z = 4.
$$
The coefficient matrix is
$$
M = \begin{pmatrix} 1 & 0 & 2 \\ 2 & -1 & 0 \\ 1 & 1 & 1 \end{pmatrix}.
$$
One can compute the inverse $M^{-1}$ by using
$$
M^{-1} = \frac{1}{\det M}\,C^T. \tag{5}
$$
First, we compute $\det M$ via the expansion in cofactors. Using the first row,
$$
\det M = \begin{vmatrix} 1 & 0 & 2 \\ 2 & -1 & 0 \\ 1 & 1 & 1 \end{vmatrix}
= (1)(-1) + 2\left[(2)(1) - (-1)(1)\right] = 5 \,.
$$
Next, we compute the transpose of the matrix of cofactors,
$$
C^T = \begin{pmatrix}
\begin{vmatrix} -1 & 0 \\ 1 & 1 \end{vmatrix} & -\begin{vmatrix} 0 & 2 \\ 1 & 1 \end{vmatrix} & \begin{vmatrix} 0 & 2 \\ -1 & 0 \end{vmatrix} \\[1ex]
-\begin{vmatrix} 2 & 0 \\ 1 & 1 \end{vmatrix} & \begin{vmatrix} 1 & 2 \\ 1 & 1 \end{vmatrix} & -\begin{vmatrix} 1 & 2 \\ 2 & 0 \end{vmatrix} \\[1ex]
\begin{vmatrix} 2 & -1 \\ 1 & 1 \end{vmatrix} & -\begin{vmatrix} 1 & 0 \\ 1 & 1 \end{vmatrix} & \begin{vmatrix} 1 & 0 \\ 2 & -1 \end{vmatrix}
\end{pmatrix}
= \begin{pmatrix} -1 & 2 & 2 \\ -2 & -1 & 4 \\ 3 & -1 & -1 \end{pmatrix}.
$$
Thus, Eq. (5) yields
$$
M^{-1} = \frac{1}{5}\begin{pmatrix} -1 & 2 & 2 \\ -2 & -1 & 4 \\ 3 & -1 & -1 \end{pmatrix}.
$$
Finally, the solution to the set of equations above is
$$
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
= \frac{1}{5}\begin{pmatrix} -1 & 2 & 2 \\ -2 & -1 & 4 \\ 3 & -1 & -1 \end{pmatrix}
\begin{pmatrix} 8 \\ -5 \\ 4 \end{pmatrix}
= \begin{pmatrix} -2 \\ 1 \\ 5 \end{pmatrix}.
$$
That is, the solution is $x = -2$, $y = 1$ and $z = 5$.

5. Boas, problem 3.6–32.

For the Pauli spin matrix $B = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$, find $e^{i\theta B}$ and show that your result is a rotation matrix. Repeat the calculation for $e^{-i\theta B}$.

First, we observe that
$$
B^2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I,
$$
where $I$ is the $2\times 2$ identity matrix. Thus
$$
B^{2n} = I \qquad\text{and}\qquad B^{2n+1} = B, \qquad\text{for any non-negative integer } n.
$$
The matrix exponential is defined via its power series,
$$
e^{i\theta B} = \sum_{n=0}^{\infty} \frac{(i\theta B)^n}{n!}
= I + i\theta B - \frac{\theta^2 B^2}{2!} - i\,\frac{\theta^3 B^3}{3!} + \frac{\theta^4 B^4}{4!} + i\,\frac{\theta^5 B^5}{5!} - \cdots
$$
$$
= I\left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots\right)
+ iB\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right)
= I\sum_{n=0}^{\infty}\frac{(-1)^n\,\theta^{2n}}{(2n)!} + iB\sum_{n=0}^{\infty}\frac{(-1)^n\,\theta^{2n+1}}{(2n+1)!}
$$
$$
= I\cos\theta + iB\sin\theta
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\cos\theta + \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\sin\theta
= \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}, \tag{6}
$$
which is a rotation matrix [cf. Eq. (6.14) on p. 120 of Boas]. The computation of $e^{-i\theta B}$ follows precisely the same steps, with $\theta$ replaced everywhere by $-\theta$. Hence,
$$
e^{-i\theta B} = I\cos\theta - iB\sin\theta
= \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.
$$

6. Boas, problem 3.7–27.

The following matrix,
$$
R \equiv \frac{1}{\sqrt 2}\begin{pmatrix} -1 & -1 \\ 1 & -1 \end{pmatrix},
$$
represents an active transformation of vectors in the $x$–$y$ plane (axes fixed, vectors rotated or reflected). Show that the above matrix is orthogonal, find its determinant, and find the rotation angle, or find the line of reflection.

The matrix $R$ is orthogonal, since
$$
RR^T = \frac12\begin{pmatrix} -1 & -1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I \,.
$$
In addition, the determinant of $R$ is
$$
\det R = \begin{vmatrix} -\frac{1}{\sqrt 2} & -\frac{1}{\sqrt 2} \\[0.5ex] \frac{1}{\sqrt 2} & -\frac{1}{\sqrt 2} \end{vmatrix}
= \frac12 + \frac12 = 1 \,,
$$
which means that $R$ is a proper rotation matrix. Comparing this with the general form of the matrix representation of an active rotation in two dimensions,
$$
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
$$
we see that $\sin\theta = -\cos\theta = 1/\sqrt 2$, which implies that $\theta = 3\pi/4 = 135^\circ$.

7. Boas, problem 3.7–34.

For the matrix
$$
L = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix},
$$
find its determinant to see whether it produces a rotation or a reflection. If a rotation, find the axis and angle of rotation. If a reflection, find the reflecting plane and the rotation (if any) about the normal to this plane.

First we compute the determinant of $L$ by using the expansion in cofactors based on the elements of the third row.
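Similarly, the results of problems 4–6 (and the determinant needed in problem 7) can be cross-checked numerically. The sketch below assumes NumPy and SciPy are available (scipy.linalg.expm computes the matrix exponential); it is a verification aid, not part of the assigned solutions.

```python
import numpy as np
from scipy.linalg import expm

# Problem 4: solve the system via the inverse of the coefficient matrix
M = np.array([[1, 0, 2], [2, -1, 0], [1, 1, 1]], dtype=float)
M_inv = np.linalg.inv(M)
assert np.allclose(M_inv, np.array([[-1, 2, 2], [-2, -1, 4], [3, -1, -1]]) / 5)
assert np.allclose(M_inv @ [8, -5, 4], [-2, 1, 5])   # x = -2, y = 1, z = 5

# Problem 5: e^{i*theta*B} equals the rotation matrix of Eq. (6)
B = np.array([[0, -1j], [1j, 0]])
theta = 0.7                                          # arbitrary test angle
rotation = np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])
assert np.allclose(expm(1j * theta * B), rotation)

# Problem 6: R is orthogonal, det R = +1, rotation angle 3*pi/4
R = np.array([[-1, -1], [1, -1]]) / np.sqrt(2)
assert np.allclose(R @ R.T, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)
assert np.isclose(np.arctan2(R[1, 0], R[0, 0]), 3 * np.pi / 4)

# Problem 7: the sign of det L decides rotation (+1) vs. reflection (-1)
L = np.array([[1, 0, 0], [0, 0, -1], [0, -1, 0]], dtype=float)
print("det L =", np.linalg.det(L))
```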