Rotation Matrix


Suppose that α ∈ R. We let R_α : R² → R² be the function defined as follows. Any vector in the plane can be written in polar coordinates as r(cos(θ), sin(θ)), where r > 0 and θ ∈ R. For any such vector, we define

R_α(r(cos(θ), sin(θ))) = r(cos(θ + α), sin(θ + α))

Notice that the function R_α doesn't change the norms of vectors (the number r), it just affects their direction, which is measured by the unit circle coordinate. We call the function R_α rotation of the plane by angle α.

If α > 0, then R_α rotates the plane counterclockwise. If α < 0, then R_α is a clockwise rotation.

Examples.

• R_α(1, 0) is the point (1, 0) rotated by an angle of α, which is the point (cos(α), sin(α)).

• R_α(0, 1) is the point (0, 1) rotated by an angle of α. Alternatively, we could rotate the point (1, 0) by an angle of π/2 to get to the point (0, 1), and then we could rotate another α to get to the point R_α(0, 1). Written as an equation, the previous sentence is

R_α(0, 1) = R_α(R_{π/2}(1, 0))

Using the previous example, we have that

R_α(0, 1) = R_{α+π/2}(1, 0) = (cos(α + π/2), sin(α + π/2))

Using some of the trigonometric identities we learned in the chapter Sine and Cosine, we know that

sin(α + π/2) = cos(α)          [Lemma (8)]

and

cos(α + π/2) = sin(α + π)      [Lemma (9)]
             = -sin(α)         [Lemma (10)]

Therefore,

R_α(0, 1) = (cos(α + π/2), sin(α + π/2)) = (-sin(α), cos(α))

• If α > 0, then R_α is a counterclockwise rotation of angle α. The function R_{-α} is a clockwise rotation by α. If first we rotate the plane counterclockwise by angle α, and then we rotate the plane clockwise by angle α, it will be as if we had done nothing at all to the plane. In other words, R_α and R_{-α} are inverse functions. That is,

R_α^{-1} = R_{-α}

Theorem (13). R_α ∘ R_β = R_{α+β}

Proof: If first we rotate the plane by an angle of β, and then we rotate the plane by an angle of α, what we would have done is rotated the plane by an angle of α + β. That's what this theorem says.

* * * * * * * * * * * * *

We know what the rotation function R_α : R² → R² does to vectors written in polar coordinates. The formula is

R_α(r(cos(θ), sin(θ))) = r(cos(θ + α), sin(θ + α))

What's less clear is what the formula for R_α should be for vectors written in Cartesian coordinates. For example, what's R_α(3, 7)? We'll answer this question below, in Theorem 16. Before that though, we need a couple more lemmas.

Lemma (14). If c is a scalar and (a, b) is a vector, then

R_α(c(a, b)) = cR_α(a, b)

Proof: We can write (a, b) as a vector in polar coordinates. Let's say that (a, b) = r(cos(θ), sin(θ)). Then

R_α(c(a, b)) = R_α(cr(cos(θ), sin(θ))) = cr(cos(θ + α), sin(θ + α)) = cR_α(a, b)

Lemma (15). If (a, b) and (c, d) are two vectors, then

R_α((a, b) + (c, d)) = R_α(a, b) + R_α(c, d)

Proof: In the first picture are the vectors (a, b), (c, d), and their sum (a, b) + (c, d). The second picture shows these three vectors rotated by an angle of α: the vectors R_α(a, b), R_α(c, d), and R_α((a, b) + (c, d)). Notice in this second picture that R_α((a, b) + (c, d)) is the result of adding the vectors R_α(a, b) and R_α(c, d), which is what the lemma claims to be true.
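Everything so far can be checked numerically straight from the polar-coordinate definition. The short Python sketch below is not part of the text; the helper name rotate and the sample numbers are our own choices. It converts a vector to polar coordinates, adds α to its angle, and then confirms the two examples, Theorem (13), and Lemmas (14) and (15) on a few inputs.

```python
import math

def rotate(alpha, v):
    """R_alpha from the definition: write v in polar coordinates r(cos t, sin t),
    then return r(cos(t + alpha), sin(t + alpha))."""
    x, y = v
    r = math.hypot(x, y)   # the norm r, which the rotation leaves unchanged
    t = math.atan2(y, x)   # the unit circle coordinate
    return (r * math.cos(t + alpha), r * math.sin(t + alpha))

alpha, beta = 0.7, -1.3

# Examples: R_alpha(1, 0) = (cos(alpha), sin(alpha)) and
#           R_alpha(0, 1) = (-sin(alpha), cos(alpha))
print(rotate(alpha, (1, 0)), (math.cos(alpha), math.sin(alpha)))
print(rotate(alpha, (0, 1)), (-math.sin(alpha), math.cos(alpha)))

# Theorem (13): rotating by beta and then by alpha is rotating by alpha + beta
print(rotate(alpha, rotate(beta, (3, 7))), rotate(alpha + beta, (3, 7)))

# Lemma (14): scalars pull out of R_alpha
print(rotate(alpha, (3 * 2.0, 3 * -5.0)))
print(tuple(3 * w for w in rotate(alpha, (2.0, -5.0))))

# Lemma (15): R_alpha of a sum is the sum of the rotated vectors
print(rotate(alpha, (2.0 + 1.5, -5.0 + 4.0)))
print(tuple(p + q for p, q in zip(rotate(alpha, (2.0, -5.0)), rotate(alpha, (1.5, 4.0)))))
```

Each pair of printed values agrees up to floating-point rounding, which is all a numerical check like this can show.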
Now we are ready to describe the rotation function R_α using Cartesian coordinates. The function can be written as a matrix function, and we know how matrix functions affect vectors written in Cartesian coordinates.

Theorem (16). R_α : R² → R² is the same function as the matrix function

[ cos(α)  -sin(α) ]
[ sin(α)   cos(α) ]

For short,

R_α = [ cos(α)  -sin(α) ]
      [ sin(α)   cos(α) ]

Proof: To show that R_α and the matrix above are the same function, we'll input the vector (a, b) into each function and check that we get the same output.

First let's check the matrix function. Writing out vectors interchangeably as row vectors and column vectors, we have that

[ cos(α)  -sin(α) ] [ a ]   [ a cos(α) - b sin(α) ]
[ sin(α)   cos(α) ] [ b ] = [ a sin(α) + b cos(α) ]

                          = (a cos(α) - b sin(α), a sin(α) + b cos(α))

Now let's check that we'd get the same output if we input the vector (a, b) into the function R_α:

R_α(a, b) = R_α((a, 0) + (0, b))
          = R_α(a, 0) + R_α(0, b)                        [Lemma (15)]
          = R_α(a(1, 0)) + R_α(b(0, 1))
          = aR_α(1, 0) + bR_α(0, 1)                      [Lemma (14)]
          = a(cos(α), sin(α)) + b(-sin(α), cos(α))       [Examples]
          = (a cos(α), a sin(α)) + (-b sin(α), b cos(α))
          = (a cos(α) - b sin(α), a sin(α) + b cos(α))

Notice that the matrix and the function R_α gave us the same output, so they are the same function.

* * * * * * * * * * * * *

Angle sum formulas

Theorem (13) tells us that R_{α+β} = R_α ∘ R_β. Theorem (16) allows us to write this equation with matrices:

[ cos(α+β)  -sin(α+β) ]   [ cos(α)  -sin(α) ] [ cos(β)  -sin(β) ]
[ sin(α+β)   cos(α+β) ] = [ sin(α)   cos(α) ] [ sin(β)   cos(β) ]

                          [ cos(α)cos(β) - sin(α)sin(β)    -cos(α)sin(β) - sin(α)cos(β) ]
                        = [ sin(α)cos(β) + cos(α)sin(β)    -sin(α)sin(β) + cos(α)cos(β) ]

The top left entry of the matrix we started with has to equal the top left entry of the matrix that we ended with, since the two matrices are equal. Similarly, the bottom left entries are equal. That gives us two equations, two trigonometric identities that are known as the angle sum formulas:

cos(α + β) = cos(α)cos(β) - sin(α)sin(β)
sin(α + β) = sin(α)cos(β) + cos(α)sin(β)

Double angle formulas

If we write the angle sum formulas with α = β, then we'd have two more trigonometric identities, called the double angle formulas:

cos(2α) = cos(α)² - sin(α)²
sin(2α) = 2 sin(α)cos(α)

More identities encoded in matrix multiplication

The angle sum and double angle formulas are encoded in matrix multiplication, as we saw above. In fact, all but one of the trigonometric identities that we've seen so far are encoded in matrix multiplication. Lemma (8) can be seen in the matrix equation R_{θ+π/2} = R_θ R_{π/2}; Lemma (9) in the matrix equation R_{θ+π/2} = R_{θ+π} R_{-π/2}; Lemma (10) in the matrix equation R_{θ+π} = R_θ R_π; Lemmas (11) and (12) in the equation R_{-θ} = R_θ^{-1}; and the period identities for sine and cosine in the equation R_{θ+2π} = R_θ R_{2π}.

Rotating a conic

Let's rotate the hyperbola xy = 1/2 clockwise by an angle of π/4. That means that we will apply the rotation matrix R_{-π/4} to the hyperbola.

To find the new equation for our rotated hyperbola, we'll precompose the equation of the original hyperbola, the equation xy = 1/2, with the inverse of R_{-π/4}. Notice that

R_{π/4} = [ cos(π/4)  -sin(π/4) ]   [ 1/√2  -1/√2 ]
          [ sin(π/4)   cos(π/4) ] = [ 1/√2   1/√2 ]

is the function that replaces x with x/√2 - y/√2 and replaces y with x/√2 + y/√2. Therefore, the equation for the rotated hyperbola is

(x/√2 - y/√2)(x/√2 + y/√2) = 1/2

Using the distributive law, the above equation is the same as

x²/2 - y²/2 = 1/2

Multiplying both sides by 2 leaves us with the equivalent equation

x² - y² = 1

This is the equation for the rotated hyperbola.
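The rotated-hyperbola computation can also be sanity-checked numerically. The sketch below is ours, not the text's (the helper names R and apply and the sample points are assumptions): it builds the matrix from Theorem (16), applies R_{-π/4} to a few points of xy = 1/2, and confirms that each rotated point satisfies x² - y² = 1.

```python
import math

def R(alpha):
    """The matrix of Theorem (16), stored as a pair of rows."""
    return ((math.cos(alpha), -math.sin(alpha)),
            (math.sin(alpha),  math.cos(alpha)))

def apply(M, v):
    """Multiply the 2x2 matrix M by the column vector v = (x, y)."""
    (m11, m12), (m21, m22) = M
    x, y = v
    return (m11 * x + m12 * y, m21 * x + m22 * y)

# Rotate sample points of the hyperbola xy = 1/2 clockwise by pi/4, that is,
# apply R_{-pi/4}, and check that every rotated point satisfies x^2 - y^2 = 1.
for x in (0.25, 0.5, 1.0, 2.0, -3.0):
    y = 0.5 / x                      # (x, y) lies on xy = 1/2
    u, v = apply(R(-math.pi / 4), (x, y))
    print(round(u * u - v * v, 12))  # 1.0 for every sample point
```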
Exercises

For most of these exercises you'll need to use the rotation matrix

R_α = [ cos(α)  -sin(α) ]
      [ sin(α)   cos(α) ]

For #1-4, use the values of sin(α) and cos(α) that you know to write the given matrices using exact numbers that do not involve trigonometric functions such as sin or cos. For example, as we saw when rotating the hyperbola at the end of the chapter,

R_{π/4} = [ 1/√2  -1/√2 ]
          [ 1/√2   1/√2 ]

1.) R        2.) R        3.) R        4.) R

For #5-8, use the matrices from above to write the given vectors in Cartesian coordinates.

5.) R(3, 5)        6.) R(4, -8)        7.) R(-2, -7)        8.) R(5, -9)

9.) Use a trigonometric identity from a previous chapter to find the determinant of the matrix R_α.

10.) Use an angle sum formula to find cos(  ). (Hint:   =   + (-  ).)

11.) Use an angle sum formula to find sin(  ).

12.) Suppose that (cos(θ), sin(θ)) = (  ,  ). Use the double angle formulas to find (cos(2θ), sin(2θ)).

13.) Find an equation for the hyperbola xy = -   rotated clockwise by an angle of π/4.

14.) The set of solutions of y - x² = 0 is a parabola. Find an equation for the parabola rotated counterclockwise by an angle of   . Write your answer in the form x² + Bxy + Cy² + Dx + Ey = 0.

For #15-17, find the solutions of the given equations.

15.) 1+x1E1

16.) ln(x) + 2 ln(x) = 7

17.) (x + 1)² = 16
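If you want to check matrix-style answers numerically, the small sketch below may help; it is only a sketch, and the helper names R and matmul and the sample angles are our own. It evaluates the determinant of R_α, the quantity exercise 9 asks about, and multiplies rotation matrices to reproduce the angle sum and double angle formulas from this chapter.

```python
import math

def R(alpha):
    """The rotation matrix [[cos a, -sin a], [sin a, cos a]] as a pair of rows."""
    return ((math.cos(alpha), -math.sin(alpha)),
            (math.sin(alpha),  math.cos(alpha)))

def matmul(M, N):
    """Product of two 2x2 matrices stored as pairs of rows."""
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

alpha, beta = 0.8, 2.1

# The determinant of R_alpha, computed numerically (compare with exercise 9).
(a, b), (c, d) = R(alpha)
print(a * d - b * c)

# Theorem (13) as matrices: R(alpha) R(beta) = R(alpha + beta).  Comparing the
# entries of the two printed matrices recovers the angle sum formulas.
print(matmul(R(alpha), R(beta)))
print(R(alpha + beta))

# With beta = alpha the same comparison gives the double angle formulas.
print(matmul(R(alpha), R(alpha)))
print(R(2 * alpha))
```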