CHANGE OF BASIS PICTURES

LANCE D. DRAGER

1. Introduction

In these notes, I'll try to draw some pictures to demonstrate change of basis. The notes are best viewed in color.

2. Change of Basis

2.1. Change of Basis in $\mathbb{R}^2$. To start with, let's work in $\mathbb{R}^2$ so we can draw pictures. To draw a picture, let's start with the Euclidean plane and choose a coordinate system, so the picture looks like Figure 1.

[Figure 1. A Coordinate System on the Plane]

If we have a point $(v_1, v_2)$, we can draw the arrow from the origin to $(v_1, v_2)$; this arrow represents a geometric vector $v$ (remember that any other arrow with the same magnitude and direction also represents $v$). The pair of numbers $(v_1, v_2)$ are called the components, or coordinates, of the vector $v$ with respect to our coordinate system. For matrix calculations, it's best to think of vectors in $\mathbb{R}^2$ as column vectors. If we let
\[
v = \begin{bmatrix} 2 \\ 3 \end{bmatrix},
\]
the picture looks like Figure 2.

[Figure 2. Coordinates of the Tip of $v$]

For $\mathbb{R}^2$ we have the standard basis $\mathcal{E} = \begin{bmatrix} e_1 & e_2 \end{bmatrix}$, where
\[
e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.
\]
We can add the vectors $e_1$ and $e_2$ to our picture to get Figure 3. In terms of matrices, we have
\[
v = \begin{bmatrix} 2 \\ 3 \end{bmatrix} = 2\begin{bmatrix} 1 \\ 0 \end{bmatrix} + 3\begin{bmatrix} 0 \\ 1 \end{bmatrix} = 2e_1 + 3e_2.
\]
In terms of matrix algebra, this seems trivial, but it's not so trivial if we look at the vector addition $v = 2e_1 + 3e_2$ in the geometric picture, Figure 4. We can summarize the equation $v = 2e_1 + 3e_2$ as the matrix multiplication
\[
v = \begin{bmatrix} e_1 & e_2 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \end{bmatrix} = \mathcal{E}\,[v]_{\mathcal{E}},
\]
where we call
\[
[v]_{\mathcal{E}} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}
\]
the coordinate vector of $v$ with respect to the basis $\mathcal{E}$. For many purposes, it's more convenient to use a basis different from the standard basis.
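The identity $v = \mathcal{E}\,[v]_{\mathcal{E}}$ can be checked numerically. The following is a minimal sketch in plain Python (the helper name `mat_vec` is mine, not from the notes): multiplying the matrix whose columns are $e_1, e_2$ by the coordinate vector $[2, 3]$ recovers $v$.

```python
def mat_vec(cols, coeffs):
    """Linear combination of column vectors: sum_i coeffs[i] * cols[i]."""
    return [sum(c * col[i] for c, col in zip(coeffs, cols))
            for i in range(len(cols[0]))]

e1, e2 = [1, 0], [0, 1]
v_E = [2, 3]                    # coordinates of v in the standard basis
v = mat_vec([e1, e2], v_E)      # v = 2*e1 + 3*e2
print(v)                        # [2, 3]
```

For the standard basis this is trivially $v$ itself, which is exactly the point of the paragraph above.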
In our example, let's introduce the basis $\mathcal{U}$, where
\[
u_1 = \begin{bmatrix} -1 \\ 2 \end{bmatrix}, \qquad u_2 = \begin{bmatrix} 3 \\ -1 \end{bmatrix}.
\]
If we add these to the picture, we get Figure 5.

[Figure 3. The Standard Basis Vectors $e_1$ and $e_2$]

[Figure 4. $v = 2e_1 + 3e_2$]

Now, if we look at the world from the standpoint of the basis $\mathcal{U}$, we are introducing new coordinate axes, where the directions are specified by the vectors $u_1$ and $u_2$, and the scale on each axis is determined by taking multiples of the vector that determines the axis. If we draw the picture in Figure 6, we see that the tip of $v$ has new coordinates. It looks like the $u_1$-coordinate of $v$ is somewhat more than $2$ and the $u_2$-coordinate is near $1/2$. Another way to look at it is that $v = c_1 u_1 + c_2 u_2$, where $c_1$ is somewhat more than $2$ and $c_2$ is near $1/2$; see Figure 7.

[Figure 5. New Basis Vectors $u_1$ and $u_2$]

[Figure 6. The World from the Viewpoint of $\mathcal{U}$]

Of course,
\[
v = c_1 u_1 + c_2 u_2 = \begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \mathcal{U}\,[v]_{\mathcal{U}},
\]
so
\[
[v]_{\mathcal{U}} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}.
\]
The problem is, how do we calculate $c_1$ and $c_2$, or, to put it another way, how do we calculate $[v]_{\mathcal{U}}$?

[Figure 7. Expressing $v$ in the Basis $\mathcal{U}$]

We need to go back to Figure 5 and express $u_1$ and $u_2$ in terms of the basis $e_1$ and $e_2$. Of course we have
\[
u_1 = \begin{bmatrix} -1 \\ 2 \end{bmatrix} = (-1)e_1 + 2e_2, \qquad
u_2 = \begin{bmatrix} 3 \\ -1 \end{bmatrix} = 3e_1 + (-1)e_2.
\]
We can write this in matrix form as
\[
\begin{bmatrix} u_1 & u_2 \end{bmatrix} = \begin{bmatrix} e_1 & e_2 \end{bmatrix} \begin{bmatrix} -1 & 3 \\ 2 & -1 \end{bmatrix}.
\]
We introduced notation for the change of basis matrices; in the present setup we have
\[
\mathcal{U} = \mathcal{E}\, S_{\mathcal{E}\mathcal{U}}.
\]
Thus, in the present setup,
\[
S_{\mathcal{E}\mathcal{U}} = \begin{bmatrix} -1 & 3 \\ 2 & -1 \end{bmatrix}.
\]
To find $[v]_{\mathcal{U}}$, the coordinate vector of $v$ with respect to $\mathcal{U}$, we can use the change of coordinates equation
\[
[v]_{\mathcal{U}} = S_{\mathcal{U}\mathcal{E}}\, [v]_{\mathcal{E}}.
\]
Thus, we need to know $S_{\mathcal{U}\mathcal{E}}$, not $S_{\mathcal{E}\mathcal{U}}$.
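The recipe for $S_{\mathcal{E}\mathcal{U}}$ is simply "put the standard coordinates of $u_1$ and $u_2$ in as columns." A short sketch (the variable names are mine) of that construction:

```python
# The columns of S_EU are the standard coordinates of u1 and u2,
# so that [u1 u2] = [e1 e2] S_EU.
u1, u2 = [-1, 2], [3, -1]
S_EU = [[u1[0], u2[0]],
        [u1[1], u2[1]]]          # columns are u1, u2

# Reading the columns back out recovers the basis vectors.
col1 = [row[0] for row in S_EU]  # should be u1
col2 = [row[1] for row in S_EU]  # should be u2
print(S_EU)                      # [[-1, 3], [2, -1]]
```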
But we know
\[
S_{\mathcal{U}\mathcal{E}} = (S_{\mathcal{E}\mathcal{U}})^{-1},
\]
so
\[
S_{\mathcal{U}\mathcal{E}} = \begin{bmatrix} 1/5 & 3/5 \\ 2/5 & 1/5 \end{bmatrix}.
\]
Since we know $[v]_{\mathcal{E}}$ from above, we get
\[
[v]_{\mathcal{U}} = S_{\mathcal{U}\mathcal{E}}\, [v]_{\mathcal{E}} = \begin{bmatrix} 1/5 & 3/5 \\ 2/5 & 1/5 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 11/5 \\ 7/5 \end{bmatrix}.
\]
Recall that this means
\[
v = \mathcal{U}\,[v]_{\mathcal{U}} = \begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} 11/5 \\ 7/5 \end{bmatrix} = \frac{11}{5}u_1 + \frac{7}{5}u_2.
\]
Since $11/5 = 2.2$ and $7/5 = 1.4$, this is in accord with our estimates from Figure 7.

2.2. Change of Basis in a Space of Functions. Here is another example of a change of basis problem. Let's introduce the notation $C(\mathbb{R})$ for the vector space of all continuous functions on the real line. You may not have studied how to solve this kind of differential equation yet, but I can tell you that the general solution of the differential equation
\[
(2.1) \qquad \frac{d^2 y}{dx^2} - y = 0
\]
is
\[
(2.2) \qquad y(x) = c_1 e^x + c_2 e^{-x}.
\]
In other words, the space $S$ of all solutions of (2.1) is the subspace of $C(\mathbb{R})$ spanned by $e^x$ and $e^{-x}$. (I'm using the usual calculus abuse of notation here.) It's easy to check that the functions $e^x$ and $e^{-x}$ are independent: since there are two of them, and they are not the zero function, if they were dependent, we could write one as a multiple of the other. If you look at the graphs, you can see this is impossible. Thus, we have one basis of $S$, given by
\[
\mathcal{U} = \begin{bmatrix} e^x & e^{-x} \end{bmatrix}.
\]
It's often easier to use a different basis for $S$, namely the hyperbolic cosine $\cosh(x)$ and the hyperbolic sine $\sinh(x)$. You can find out more about these in any calculus book. All we need here are the definitions
\[
\cosh(x) = \frac{e^x + e^{-x}}{2}, \qquad \sinh(x) = \frac{e^x - e^{-x}}{2}.
\]
Since $\cosh(x)$ and $\sinh(x)$ are linear combinations of $e^x$ and $e^{-x}$, they are in $S$. We can write the definitions in matrix form as
\[
(2.3) \qquad \begin{bmatrix} \cosh(x) & \sinh(x) \end{bmatrix} = \begin{bmatrix} e^x & e^{-x} \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{bmatrix}.
\]
Since the matrix
\[
A = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{bmatrix}
\]
is invertible, we see that $\cosh(x)$ and $\sinh(x)$ also form a basis of $S$. If we set
\[
\mathcal{V} = \begin{bmatrix} \cosh(x) & \sinh(x) \end{bmatrix},
\]
then $\mathcal{V}$ is an ordered basis of $S$.
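The $2 \times 2$ inverse and the product $S_{\mathcal{U}\mathcal{E}}\,[v]_{\mathcal{E}}$ can be carried out exactly with rational arithmetic. A sketch (helper names `inv2` and `mat_vec` are mine) using the standard formula $\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$:

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula, with exact fractions."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def mat_vec(M, v):
    """2x2 matrix times a length-2 vector."""
    return [sum(Fraction(M[i][j]) * v[j] for j in range(2)) for i in range(2)]

S_EU = [[-1, 3], [2, -1]]
S_UE = inv2(S_EU)              # [[1/5, 3/5], [2/5, 1/5]]
v_U = mat_vec(S_UE, [2, 3])    # [11/5, 7/5]
print(v_U)

# Sanity check: (11/5) u1 + (7/5) u2 should reproduce v = [2, 3].
v_back = mat_vec(S_EU, v_U)
print(v_back)                  # [2, 3]
```

This matches the hand computation: $[v]_{\mathcal{U}} = (11/5,\, 7/5)$, i.e. about $(2.2,\, 1.4)$, as estimated from the picture.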
The change of basis matrix $S_{\mathcal{U}\mathcal{V}}$ is defined by
\[
\mathcal{V} = \mathcal{U}\, S_{\mathcal{U}\mathcal{V}};
\]
comparing this with (2.3), we see
\[
S_{\mathcal{U}\mathcal{V}} = A = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{bmatrix}.
\]
The defining equation for $S_{\mathcal{V}\mathcal{U}}$ is
\[
(2.4) \qquad \mathcal{U} = \mathcal{V}\, S_{\mathcal{V}\mathcal{U}},
\]
but we know that
\[
S_{\mathcal{V}\mathcal{U}} = (S_{\mathcal{U}\mathcal{V}})^{-1} = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.
\]
If we plug this into (2.4) and write things out in more detail, we get
\[
\begin{bmatrix} e^x & e^{-x} \end{bmatrix} = \begin{bmatrix} \cosh(x) & \sinh(x) \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.
\]
In other words,
\[
e^x = \cosh(x) + \sinh(x), \qquad e^{-x} = \cosh(x) - \sinh(x).
\]
In differential equations, we often want to solve an initial value problem. So consider the initial value problem
\[
(2.5) \qquad \frac{d^2 y}{dx^2} - y = 0
\]
\[
(2.6) \qquad y(0) = \alpha
\]
\[
(2.7) \qquad y'(0) = \beta.
\]
We might try to find the solution in terms of the basis $\mathcal{U} = \begin{bmatrix} e^x & e^{-x} \end{bmatrix}$. We know any solution of the differential equation can be written as
\[
(2.8) \qquad y(x) = c_1 e^x + c_2 e^{-x}.
\]
In other words,
\[
y(x) = \begin{bmatrix} e^x & e^{-x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix},
\]
so
\[
[y(x)]_{\mathcal{U}} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}.
\]
If we substitute $x = 0$ into (2.8) and use the first initial condition (2.6), we get
\[
\alpha = y(0) = c_1 + c_2.
\]
If we differentiate (2.8), we get
\[
y'(x) = c_1 e^x - c_2 e^{-x}.
\]
Setting $x = 0$ in this equation and using the second initial condition (2.7) gives
\[
\beta = y'(0) = c_1 - c_2.
\]
Thus, to find $c_1$ and $c_2$ we have to solve the system of equations
\[
(2.9) \qquad c_1 + c_2 = \alpha, \qquad c_1 - c_2 = \beta.
\]
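The identities $e^x = \cosh(x) + \sinh(x)$ and $e^{-x} = \cosh(x) - \sinh(x)$, which encode $\mathcal{U} = \mathcal{V}\, S_{\mathcal{V}\mathcal{U}}$ with $S_{\mathcal{V}\mathcal{U}} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$, are easy to sanity-check numerically. A quick sketch (the sample points are arbitrary):

```python
import math

# Check e^x = cosh(x) + sinh(x) and e^{-x} = cosh(x) - sinh(x)
# at a few sample points.
for x in [-2.0, 0.0, 0.5, 3.0]:
    assert math.isclose(math.exp(x),  math.cosh(x) + math.sinh(x))
    assert math.isclose(math.exp(-x), math.cosh(x) - math.sinh(x))
print("identities hold at all sample points")
```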
That's not hard, of course, but life is easier if we use the basis $\mathcal{V} = \begin{bmatrix} \cosh(x) & \sinh(x) \end{bmatrix}$, because of the following easy-to-check identities:
\[
\cosh(0) = 1, \qquad \sinh(0) = 0,
\]
\[
\frac{d}{dx}\cosh(x) = \sinh(x), \qquad \frac{d}{dx}\sinh(x) = \cosh(x).
\]
So, let's write our solution as a linear combination of the basis functions $\mathcal{V}$:
\[
(2.10) \qquad y(x) = a_1 \cosh(x) + a_2 \sinh(x).
\]
In other words,
\[
y(x) = \begin{bmatrix} \cosh(x) & \sinh(x) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \end{bmatrix},
\]
so we have
\[
[y(x)]_{\mathcal{V}} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}.
\]
If we plug $x = 0$ into (2.10) and use the first initial condition (2.6), we get
\[
\alpha = y(0) = a_1 \cosh(0) + a_2 \sinh(0) = a_1(1) + a_2(0) = a_1.
\]
If we differentiate (2.10), we get
\[
y'(x) = a_1 \sinh(x) + a_2 \cosh(x).
\]
If we plug in $x = 0$ and use the second initial condition (2.7), we get
\[
\beta = y'(0) = a_1 \sinh(0) + a_2 \cosh(0) = a_2.
\]
Thus, our solution is
\[
y(x) = \alpha \cosh(x) + \beta \sinh(x),
\]
or, to put it another way,
\[
[y(x)]_{\mathcal{V}} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}.
\]
Now, if we want to find $c_1$ and $c_2$ in Equation (2.8), that's the same thing as finding $[y(x)]_{\mathcal{U}}$.
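The two routes to the same solution can be compared directly. Solving the system (2.9) gives $c_1 = (\alpha + \beta)/2$ and $c_2 = (\alpha - \beta)/2$ in the basis $\mathcal{U}$, while in the basis $\mathcal{V}$ the coordinates are simply $(\alpha, \beta)$. A sketch (the particular values of $\alpha$, $\beta$ and the sample points are arbitrary) checking that both expressions define the same function:

```python
import math

alpha, beta = 3.0, -1.0

# Coordinates in U = (e^x, e^{-x}), from solving c1 + c2 = alpha, c1 - c2 = beta.
c1 = (alpha + beta) / 2
c2 = (alpha - beta) / 2

# Coordinates in V = (cosh, sinh) are read off directly: (alpha, beta).
for x in [-1.0, 0.0, 0.7, 2.0]:
    y_U = c1 * math.exp(x) + c2 * math.exp(-x)
    y_V = alpha * math.cosh(x) + beta * math.sinh(x)
    assert math.isclose(y_U, y_V)

print(c1, c2)   # 1.0 2.0
```

This is the payoff of a well-chosen basis: in $\mathcal{V}$ the initial conditions hand us the coordinates with no system to solve.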