Computing Exponentials of Skew-Symmetric Matrices and Logarithms of Orthogonal Matrices

International Journal of Robotics and Automation, Vol. 17, No. 4, 2002

J. Gallier* and D. Xu*

* Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA; e-mail: [email protected], [email protected] (paper no. 206-2550)

Abstract

The authors show that there is a generalization of Rodrigues' formula for computing the exponential map exp: so(n) → SO(n) from skew-symmetric matrices to orthogonal matrices when n ≥ 4, and give a method for computing some determination of the (multivalued) function log: SO(n) → so(n). The key idea is the decomposition of a skew-symmetric n × n matrix B in terms of (unique) skew-symmetric matrices B₁, ..., Bₚ obtained from the diagonalization of B and satisfying some simple algebraic identities. A subproblem arising in computing log R, where R ∈ SO(n), is the problem of finding a skew-symmetric matrix B, given the matrix B², and knowing that B² has eigenvalues −1 and 0. The authors also consider the exponential map exp: se(n) → SE(n), where se(n) is the Lie algebra of the Lie group SE(n) of (affine) rigid motions. The authors show that there is a Rodrigues-like formula for computing this exponential map, and give a method for computing some determination of the (multivalued) function log: SE(n) → se(n). This yields a direct proof of the surjectivity of exp: se(n) → SE(n).

Key Words

Rotations, skew-symmetric matrices, exponentials, logarithms, rigid motions, interpolation

1. Introduction

Given a real skew-symmetric n × n matrix B, it is well known that R = e^B is a rotation matrix, where

e^B = Iₙ + Σ_{k≥1} Bᵏ/k!

is the exponential of B (for instance, see Chevalley [1], Marsden and Ratiu [2], or Warner [3]). Conversely, given any rotation matrix R ∈ SO(n), there is some skew-symmetric matrix B such that R = e^B. These two facts can be expressed by saying that the map exp: so(n) → SO(n) from the Lie algebra so(n) of skew-symmetric n × n matrices to the Lie group SO(n) is surjective (see Bröcker and tom Dieck [4]).

The surjectivity of exp is an important property. Indeed, it implies the existence of a function log: SO(n) → so(n) (only locally a function; log is really a multivalued function), and this has interesting applications. For example, exp and log can be used for motion interpolation, as illustrated in Kim, M.-J., Kim, M.-S., and Shin [5, 6], and Park and Ravani [7, 8]. Motion interpolation and rational motions have also been investigated by Jüttler [9, 10], Jüttler and Wagner [11, 12], Horsch and Jüttler [13], and Röschel [14]. In its simplest form, the problem is as follows: given two rotation matrices R₁, R₂ ∈ SO(n), find a "natural" interpolating rotation R(t), where 0 ≤ t ≤ 1. Of course, it would be necessary to clarify what we mean by "natural," but note that we have the following solution:

R(t) = exp((1 − t) log R₁ + t log R₂)

In theory, the problem is solved. However, it is still necessary to compute exp(B) and log R effectively.

When n = 2, a skew-symmetric matrix B can be written as B = θJ, where

J = [ 0  −1 ]
    [ 1   0 ]

and it is easily shown that

e^B = e^(θJ) = cos θ I₂ + sin θ J

Given R ∈ SO(2), we can find cos θ because tr(R) = 2 cos θ (where tr(R) denotes the trace of R). Thus, the problem is completely solved.
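For concreteness, the n = 2 case fits in a few lines of code. The following NumPy sketch (our illustration, not part of the original paper) implements e^(θJ) and one determination of log with θ ∈ (−π, π]; beyond tr(R) = 2 cos θ, it uses the entry R[1, 0] = sin θ to fix the sign of θ.

```python
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def exp_so2(theta):
    """e^(theta J) = cos(theta) I2 + sin(theta) J."""
    return np.cos(theta) * np.eye(2) + np.sin(theta) * J

def log_so2(R):
    """One determination of log R for R in SO(2)."""
    c = np.clip(np.trace(R) / 2.0, -1.0, 1.0)  # tr(R) = 2 cos(theta)
    theta = np.arccos(c)                       # theta in [0, pi]
    if R[1, 0] < 0.0:                          # R[1, 0] = sin(theta) fixes the sign
        theta = -theta
    return theta * J

# Round trip: log(exp(theta J)) = theta J for theta in (-pi, pi]
R = exp_so2(0.7)
assert np.allclose(log_so2(R), 0.7 * J)
```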
When n = 3, a real skew-symmetric matrix B is of the form

B = [  0  −c   b ]
    [  c   0  −a ]
    [ −b   a   0 ]

and letting θ = √(a² + b² + c²), we have the well-known formula due to Rodrigues:

e^B = I₃ + (sin θ/θ) B + ((1 − cos θ)/θ²) B²

with e^B = I₃ when B = 0 (for instance, see Marsden and Ratiu [2], McCarthy [15], or Murray, Li, and Sastry [16]).

It turns out that it is more convenient to normalize B, that is, to write B = θB₁ (where B₁ = B/θ, assuming that θ ≠ 0), in which case the formula becomes

e^(θB₁) = I₃ + sin θ B₁ + (1 − cos θ) B₁²

Also, given R ∈ SO(3), we can find cos θ because tr(R) = 1 + 2 cos θ, and we can find B₁ by observing that

(1/2)(R − Rᵀ) = sin θ B₁

Actually, the above formula cannot be used when θ = 0 or θ = π, as sin θ = 0 in these cases. When θ = 0, we have R = I₃ and B₁ = 0, and when θ = π, we need to find B₁ such that

B₁² = (1/2)(R − I₃)

As B₁ is a skew-symmetric 3 × 3 matrix, this amounts to solving some simple equations with three unknowns. Again, the problem is completely solved.
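The whole 3D procedure also fits in a short routine. The sketch below (again ours, assuming NumPy) implements Rodrigues' formula and the corresponding log; in the θ = π branch it solves B₁² = (R − I₃)/2 by writing B₁ = hat(u) for a unit vector u, so that B₁² = uuᵀ − I₃ and hence uuᵀ = (R + I₃)/2, which is one concrete way of solving the three-unknown equations mentioned above.

```python
import numpy as np

def hat(v):
    """Skew-symmetric 3x3 matrix built from v = (a, b, c)."""
    a, b, c = v
    return np.array([[0.0,  -c,   b],
                     [c,   0.0,  -a],
                     [-b,    a, 0.0]])

def exp_so3(B):
    """Rodrigues: e^B = I3 + (sin t/t) B + ((1 - cos t)/t^2) B^2."""
    theta = np.sqrt(B[2, 1]**2 + B[0, 2]**2 + B[1, 0]**2)
    if theta < 1e-12:                  # e^B = I3 when B = 0
        return np.eye(3)
    return (np.eye(3)
            + (np.sin(theta) / theta) * B
            + ((1.0 - np.cos(theta)) / theta**2) * (B @ B))

def log_so3(R):
    """One determination of log R for R in SO(3), via tr(R) = 1 + 2 cos(theta)."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:                  # theta = 0: R = I3, so B1 = 0
        return np.zeros((3, 3))
    if np.pi - theta < 1e-6:           # theta = pi: sin(theta) = 0, solve B1^2 = (R - I3)/2
        M = 0.5 * (R + np.eye(3))      # = u u^T for the unit rotation axis u
        i = int(np.argmax(np.diag(M)))
        u = M[:, i] / np.sqrt(M[i, i]) # unit axis, determined up to sign (harmless at pi)
        return np.pi * hat(u)
    return (theta / (2.0 * np.sin(theta))) * (R - R.T)  # (R - R^T)/2 = sin(theta) B1

B = hat([0.3, -0.4, 0.5])
assert np.allclose(log_so3(exp_so3(B)), B)
```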
What about the cases where n ≥ 4? The reason why Rodrigues' formula can be derived is that

B³ = −θ²B

or, equivalently, B₁³ = −B₁. Unfortunately, for n ≥ 4, given any non-null skew-symmetric n × n matrix B, it is generally false that B³ = −θ²B, and the reasoning used in the 3D case does not apply.

In this article, we show that there is a generalization of Rodrigues' formula for computing the exponential map exp: so(n) → SO(n) when n ≥ 4, and we give a method for computing some determination of the (multivalued) function log: SO(n) → so(n). The key to the solution is that, given a skew-symmetric n × n matrix B, there are p unique skew-symmetric matrices B₁, ..., Bₚ such that B can be expressed as

B = θ₁B₁ + ··· + θₚBₚ

where {iθ₁, −iθ₁, ..., iθₚ, −iθₚ} is the set of distinct eigenvalues of B, with θᵢ > 0, and where

BᵢBⱼ = BⱼBᵢ = 0ₙ  (i ≠ j)
Bᵢ³ = −Bᵢ

This reduces the problem to the case of 3 × 3 matrices. We also consider the exponential map exp: se(n) → SE(n), where se(n) is the Lie algebra of the Lie group SE(n) of (affine) rigid motions. We show that there is a Rodrigues-like formula for computing this exponential map, and we give a method for computing some determination of the (multivalued) function log: SE(n) → se(n).

The general problem of computing the exponential of a matrix is discussed in Moler and Van Loan [17]. However, more general types of matrices are considered. The problem of computing the logarithm and the exponential of a matrix is also investigated in [18, 19].

The article is organized as follows. In Section 2 we give a Rodrigues-like formula for computing exp: so(n) → SO(n). In Section 3 we show how to compute log: SO(4) → so(4) in the special case of SO(4), which is simpler. In Section 4 we show how to compute some determination of the (multivalued) function log: SO(n) → so(n) in general (n ≥ 4). In Section 5 we give a Rodrigues-like formula for computing exp: se(n) → SE(n). In Section 6 we show how to compute some determination of the (multivalued) function log: SE(n) → se(n). Our method yields a simple proof of the surjectivity of exp: se(n) → SE(n). In Section 7 we solve the problem of finding a skew-symmetric matrix B, given the matrix B², and knowing that B² has eigenvalues −1 and 0. Section 8 draws conclusions.

2. A Rodrigues-Like Formula for exp: so(n) → SO(n)

In this section, we give a Rodrigues-like formula showing how to compute the exponential e^B of a skew-symmetric n × n matrix B, where n ≥ 4. We also show the uniqueness of the matrices B₁, ..., Bₚ used in the decomposition of B mentioned in the introductory section. The following fairly well-known lemma plays a key role in obtaining the matrices B₁, ..., Bₚ (see Horn and Johnson [20], Corollary 2.5.14, or Bourbaki [21]).

Lemma 2.1. Given any skew-symmetric n × n matrix B (n ≥ 2), there is some orthogonal matrix P and some block diagonal matrix E such that

B = P E Pᵀ

with E of the form

E = diag(E₁, ..., Eₘ, 0_{n−2m})

where each block Eᵢ is a real two-dimensional matrix of the form

Eᵢ = [ 0   −θᵢ ] = θᵢ [ 0  −1 ]
     [ θᵢ    0 ]      [ 1   0 ]

with θᵢ > 0.

Observe that the eigenvalues of B are ±iθⱼ, or 0, reconfirming the well-known fact that the eigenvalues of a skew-symmetric matrix are purely imaginary, or null. We now prove the existence and uniqueness of the Bⱼ's as well as the generalized Rodrigues' formula.

Theorem 2.2. Given any non-null skew-symmetric n × n matrix B, where n ≥ 3, if

{iθ₁, −iθ₁, ..., iθₚ, −iθₚ}

is the set of distinct eigenvalues of B, where θⱼ > 0 and each iθⱼ (and −iθⱼ) has multiplicity kⱼ ≥ 1, there are p unique skew-symmetric matrices B₁, ..., Bₚ such that:

B = θ₁B₁ + ··· + θₚBₚ          (1)

BᵢBⱼ = BⱼBᵢ = 0ₙ  (i ≠ j)      (2)

Bᵢ³ = −Bᵢ                      (3)

for all i, j with 1 ≤ i, j ≤ p, and 2p ≤ n. Furthermore,

e^B = e^(θ₁B₁ + ··· + θₚBₚ) = Iₙ + Σ_{i=1}^p (sin θᵢ Bᵢ + (1 − cos θᵢ) Bᵢ²)

and {θ₁, ..., θₚ} is the set of the distinct positive square roots of the 2m positive eigenvalues of the symmetric matrix −(1/4)(B − Bᵀ)², where m = k₁ + ··· + kₚ.

Indeed, Bᵢ³ = −Bᵢ implies that

Bᵢ^(4k+j) = Bᵢ^j  and  Bᵢ^(4k+2+j) = −Bᵢ^j  for j = 1, 2 and all k ≥ 0

and thus we get

e^(θᵢBᵢ) = Iₙ + Σ_{k≥1} θᵢᵏBᵢᵏ/k!
         = Iₙ + (θᵢ − θᵢ³/3! + θᵢ⁵/5! − ···) Bᵢ + (θᵢ²/2! − θᵢ⁴/4! + θᵢ⁶/6! − ···) Bᵢ²
         = Iₙ + sin θᵢ Bᵢ + (1 − cos θᵢ) Bᵢ²
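Theorem 2.2 translates into a direct algorithm for exp in any dimension: the θᵢ are the distinct positive square roots of the positive eigenvalues of the symmetric matrix −B² (which equals −(1/4)(B − Bᵀ)² for skew-symmetric B), and one way to recover each Bᵢ, consistent with identities (1)-(3), is Bᵢ = PᵢB/θᵢ, where Pᵢ is the orthogonal projector onto the θᵢ²-eigenspace of −B². The following NumPy sketch is our illustration of this procedure, not code from the paper; the tolerance used to cluster nearly equal eigenvalues is the numerically delicate choice. It checks the result against the defining power series.

```python
import numpy as np

def exp_skew(B, tol=1e-8):
    """Generalized Rodrigues formula (Theorem 2.2):
    e^B = In + sum_i [ sin(theta_i) Bi + (1 - cos(theta_i)) Bi^2 ]."""
    n = B.shape[0]
    S = -B @ B                     # symmetric PSD; eigenvalues are the theta_j^2 and 0
    w, V = np.linalg.eigh(S)       # ascending eigenvalues, orthonormal eigenvectors
    R = np.eye(n)
    i = 0
    while i < n:
        if w[i] <= tol:            # (near-)zero eigenvalue: contributes no block
            i += 1
            continue
        j = i                      # group eigenvalues equal to w[i] within tolerance
        while j + 1 < n and abs(w[j + 1] - w[i]) <= tol * max(1.0, w[i]):
            j += 1
        theta = np.sqrt(np.mean(w[i:j + 1]))
        P = V[:, i:j + 1] @ V[:, i:j + 1].T   # projector onto the theta^2-eigenspace
        Bi = (P @ B) / theta                  # Bi = Pi B / theta_i, skew-symmetric
        R += np.sin(theta) * Bi + (1.0 - np.cos(theta)) * (Bi @ Bi)
        i = j + 1
    return R

# Check against the power series e^B = In + sum_{k>=1} B^k / k!
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = A - A.T
E, term = np.eye(5), np.eye(5)
for k in range(1, 60):
    term = term @ B / k
    E = E + term
assert np.allclose(exp_skew(B), E)
```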