Tensors in Generalized Coordinate Systems: Components and Direct Notation


Math 1550 lecture notes, Prof. Anna Vainchtein

1 Vectors in generalized coordinates

Consider a generalized coordinate system $(x^1, x^2, x^3)$ with the local basis $\{e_1, e_2, e_3\}$. The basis is not necessarily orthogonal, let alone orthonormal. It comes along with its reciprocal basis $\{e^1, e^2, e^3\}$. Recall that we can write a vector $a$ in terms of either basis, using contravariant components $a^i$ and covariant components $a_i$, respectively:
$$a = a^1 e_1 + a^2 e_2 + a^3 e_3 = a^i e_i, \qquad a = a_1 e^1 + a_2 e^2 + a_3 e^3 = a_i e^i. \quad (1)$$
Recall also that we can find the covariant and contravariant components of the vector by taking dot products with the basis vectors and reciprocal basis vectors, respectively:
$$a_i = a \cdot e_i, \qquad a^i = a \cdot e^i. \quad (2)$$
Consider now another coordinate system $(\bar{x}^1, \bar{x}^2, \bar{x}^3)$, with the basis vectors
$$\bar{e}_i = \alpha_i^p e_p, \qquad \alpha_i^p = \bar{e}_i \cdot e^p. \quad (3)$$
Recall that the reciprocal basis vectors then transform via the inverse transformation (see the first set of notes):
$$\bar{e}^i = (\alpha^{-1})^i_p e^p, \qquad (\alpha^{-1})^i_p = e_p \cdot \bar{e}^i. \quad (4)$$
Recall also that covariant and contravariant components of a vector (first order tensor) transform according to
$$\bar{a}_i = \alpha_i^p a_p, \qquad \bar{a}^i = (\alpha^{-1})^i_p a^p. \quad (5)$$
Notice that the transformation law for the covariant components involves the direct transformation matrix $\alpha$, while the one for contravariant components has the inverse transformation matrix $\alpha^{-1}$. As we have seen before, this is because $\bar{a}_i = a \cdot \bar{e}_i$, where $\bar{e}_i$ is related to $e_i$ via the direct transformation, while in $\bar{a}^i = a \cdot \bar{e}^i$ the reciprocal vector $\bar{e}^i$ transforms according to (4).
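The component bookkeeping in (1)-(5) can be checked numerically. The sketch below uses made-up values (the basis, the vector $a$, and the matrix $\alpha$ are not taken from the notes): it builds a reciprocal basis, extracts both kinds of components by dot products, and verifies the two transformation laws.

```python
import numpy as np

# Hypothetical non-orthogonal basis: rows of E are e_1, e_2, e_3.
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
# Reciprocal basis: rows of R are e^1, e^2, e^3, fixed by e^i . e_j = delta^i_j.
R = np.linalg.inv(E.T)

a = np.array([2.0, -1.0, 3.0])     # a sample vector (Cartesian components)
a_cov = E @ a                      # (2): a_i = a . e_i
a_con = R @ a                      # (2): a^i = a . e^i
assert np.allclose(a_con @ E, a)   # (1): a = a^i e_i
assert np.allclose(a_cov @ R, a)   # (1): a = a_i e^i

# Change of basis (3): e-bar_i = alpha_i^p e_p, for a made-up invertible alpha.
alpha = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])
E_bar = alpha @ E
# (5): covariant components transform with alpha directly ...
assert np.allclose(E_bar @ a, alpha @ a_cov)
# ... while contravariant ones use the inverse; note that the index object
# (alpha^-1)^i_p, stored with i as the row index, is inv(alpha).T, because
# (alpha^-1)^i_p alpha_j^p = delta^i_j.
R_bar = np.linalg.inv(E_bar.T)
assert np.allclose(R_bar @ a, np.linalg.inv(alpha).T @ a_con)
```

The inverse-transpose detail in the last step is easy to miss when translating index formulas to matrix code: summation in (5) runs over the second index of both arrays.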
In particular, the new and old coordinates of the same point with position vector
$$r = x^1 e_1 + x^2 e_2 + x^3 e_3 = \bar{x}^1 \bar{e}_1 + \bar{x}^2 \bar{e}_2 + \bar{x}^3 \bar{e}_3$$
are related by
$$\bar{x}^i = (\alpha^{-1})^i_p x^p, \qquad x^p = \alpha_i^p \bar{x}^i,$$
and thus we can also represent the direct and inverse transformation matrices in terms of partial derivatives of the old and new coordinates with respect to one another:
$$\alpha_i^p = \frac{\partial x^p}{\partial \bar{x}^i}, \qquad (\alpha^{-1})^i_p = \frac{\partial \bar{x}^i}{\partial x^p}. \quad (6)$$
Using this, we can rewrite (5) as
$$\bar{a}_i = \frac{\partial x^p}{\partial \bar{x}^i}\, a_p, \qquad \bar{a}^i = \frac{\partial \bar{x}^i}{\partial x^p}\, a^p. \quad (7)$$
Finally, recall that covariant and contravariant components are not independent. They are related by
$$a^i = g^{ik} a_k, \qquad a_i = g_{ik} a^k,$$
where we recall that $g_{ik} = e_i \cdot e_k$ and $g^{ik} = e^i \cdot e^k$ are covariant and contravariant components of the metric tensor. We called this raising or lowering the index via the metric tensor.

Remark. In the case of two rectangular coordinate systems with orthonormal bases that we considered earlier, we have $e^i = e_i$, $\bar{e}^i = \bar{e}_i$, $a_i = a^i$, and $\alpha$ becomes an orthogonal matrix, $\alpha^{-1} = \alpha^T$. Thus, in this case we have $\alpha_i^j = Q_{ij}$ and $(\alpha^{-1})^j_i = Q_{ji}$, where $Q_{ij} = \bar{e}_i \cdot e_j$ are components of an orthogonal matrix.

Example 1. Consider the vector
$$a = x_1 x_2\, i_1 + x_2 x_3\, i_2 + x_1 x_3\, i_3,$$
where $\{i_1, i_2, i_3\}$ is the standard basis in the Cartesian coordinate system $(x_1, x_2, x_3)$.

a) Find the covariant components of $a$ in parabolic cylindrical coordinates $(v, w, z)$ defined by
$$x_1 = \frac{v^2 - w^2}{2}, \qquad x_2 = vw, \qquad x_3 = z.$$

b) Express its contravariant components in terms of covariant ones.

Solution. a) The new basis is
$$\bar{e}_1 = \frac{\partial x_1}{\partial v}\, i_1 + \frac{\partial x_2}{\partial v}\, i_2 + \frac{\partial x_3}{\partial v}\, i_3 = v\, i_1 + w\, i_2,$$
$$\bar{e}_2 = \frac{\partial x_1}{\partial w}\, i_1 + \frac{\partial x_2}{\partial w}\, i_2 + \frac{\partial x_3}{\partial w}\, i_3 = -w\, i_1 + v\, i_2,$$
$$\bar{e}_3 = \frac{\partial x_1}{\partial z}\, i_1 + \frac{\partial x_2}{\partial z}\, i_2 + \frac{\partial x_3}{\partial z}\, i_3 = i_3.$$
The transformation matrix is
$$[\alpha_i^j] = [\bar{e}_i \cdot i_j] = \begin{pmatrix} v & w & 0 \\ -w & v & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
The covariant components of $a$ in the new basis are $\bar{a}_i = \alpha_i^j a_j$, or
$$\begin{pmatrix} \bar{a}_1 \\ \bar{a}_2 \\ \bar{a}_3 \end{pmatrix} = \begin{pmatrix} v & w & 0 \\ -w & v & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \tfrac{1}{2}(v^2 - w^2)vw \\ vwz \\ \tfrac{1}{2}(v^2 - w^2)z \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2}(v^2 - w^2)v^2 w + v w^2 z \\ -\tfrac{1}{2}(v^2 - w^2)v w^2 + v^2 w z \\ \tfrac{1}{2}(v^2 - w^2)z \end{pmatrix}.$$

b) Note that the new basis is orthogonal, with $h_1 = |\bar{e}_1| = \sqrt{v^2 + w^2} = |\bar{e}_2| = h_2$ and $h_3 = |\bar{e}_3| = 1$. Thus the contravariant components of the metric tensor are $\bar{g}^{ij} = 0$ for $i \neq j$ and $\bar{g}^{11} = \bar{g}^{22} = \dfrac{1}{v^2 + w^2}$ and $\bar{g}^{33} = 1$. Therefore,
$$\bar{a}^1 = \bar{g}^{11}\bar{a}_1 = \frac{\bar{a}_1}{v^2 + w^2}, \qquad \bar{a}^2 = \bar{g}^{22}\bar{a}_2 = \frac{\bar{a}_2}{v^2 + w^2}, \qquad \bar{a}^3 = \bar{g}^{33}\bar{a}_3 = \bar{a}_3.$$

Example 2. Let $f(x^1, x^2, x^3)$ be a scalar field. If we change to new coordinates $\bar{x}^i = \bar{x}^i(x^1, x^2, x^3)$, we have, via the chain rule,
$$\frac{\partial f}{\partial \bar{x}^i} = \frac{\partial f}{\partial x^k}\frac{\partial x^k}{\partial \bar{x}^i},$$
which by the first equation in (6) yields
$$\frac{\partial f}{\partial \bar{x}^i} = \alpha_i^k \frac{\partial f}{\partial x^k}.$$
Thus, $v_i = \dfrac{\partial f}{\partial x^i}$ transform as covariant components of the vector (gradient of $f$)
$$v = \frac{\partial f}{\partial x^i}\, e^i = \nabla f,$$
where $e^i$ are the reciprocal basis vectors associated with coordinates $(x^1, x^2, x^3)$. Of course, the vector $v$ also has contravariant components
$$v^i = g^{ik} v_k = g^{ik}\frac{\partial f}{\partial x^k}, \qquad v = g^{ik}\frac{\partial f}{\partial x^k}\, e_i.$$

2 General higher order tensors

By analogy with the above transformation laws for covariant and contravariant components of a vector, we can now introduce a more general definition of a second order tensor, no longer considering only rectangular coordinate systems:

Definition. A second order tensor in $\mathbb{R}^d$ is a quantity uniquely specified by $d^2$ numbers (its components). These components can be covariant ($A_{ij}$), contravariant ($A^{ij}$) or mixed ($A_i^{\,\cdot j}$, $A^i_{\,\cdot j}$), and they transform under the change of basis (3) according to
$$\bar{A}_{ij} = \alpha_i^p \alpha_j^q A_{pq} \quad (8)$$
$$\bar{A}^{ij} = (\alpha^{-1})^i_p (\alpha^{-1})^j_q A^{pq} \quad (9)$$
$$\bar{A}_i^{\,\cdot j} = \alpha_i^p (\alpha^{-1})^j_q A_p^{\,\cdot q} \quad (10)$$
$$\bar{A}^i_{\,\cdot j} = (\alpha^{-1})^i_p \alpha_j^q A^p_{\,\cdot q} \quad (11)$$
The little dot helps us denote the actual position of the index; e.g., in $A_i^{\,\cdot j}$ the index $j$ comes second. Since $A_i^{\,\cdot j} \neq A^{\,j}_{\,\cdot i}$ in general, writing $A_i^j$ is misleading, since we don't know which of the two is actually meant (the Kronecker delta $\delta_i^j = \delta_j^i$ is an exception).
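A quick numerical sanity check of the second-order laws (8)-(11), with a made-up invertible $\alpha$ and metric (none of these values come from the notes). As with vectors, the index object $(\alpha^{-1})^i_p$, stored with $i$ as the row index, is the inverse transpose of the array storing $\alpha_i^p$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # made-up invertible alpha_i^p
# (alpha^-1)^i_p indexed [i, p]; it satisfies (alpha^-1)^i_p alpha_j^p = delta^i_j,
# which in matrix terms is inv(alpha).T.
alpha_inv = np.linalg.inv(alpha).T

A_cov = rng.standard_normal((3, 3))                    # covariant components A_pq
# (8): A-bar_ij = alpha_i^p alpha_j^q A_pq
A_cov_bar = np.einsum('ip,jq,pq->ij', alpha, alpha, A_cov)

g = rng.standard_normal((3, 3))
g = g @ g.T + 3.0 * np.eye(3)                          # a made-up (positive-definite) metric g_pq
A_con = np.linalg.inv(g) @ A_cov @ np.linalg.inv(g)    # A^pq = g^pm g^qn A_mn
# (9): A-bar^ij = (alpha^-1)^i_p (alpha^-1)^j_q A^pq
A_con_bar = np.einsum('ip,jq,pq->ij', alpha_inv, alpha_inv, A_con)

# Consistency: raising indices with the transformed metric g-bar gives the same A-bar^ij.
g_bar = np.einsum('ip,jq,pq->ij', alpha, alpha, g)
assert np.allclose(np.linalg.inv(g_bar) @ A_cov_bar @ np.linalg.inv(g_bar), A_con_bar)

# (10) applied to the mixed Kronecker delta: delta_i^j is the same in every basis.
delta_bar = np.einsum('ip,jq,pq->ij', alpha, alpha_inv, np.eye(3))
assert np.allclose(delta_bar, np.eye(3))
```

The last assertion is the numerical face of the remark above: the mixed Kronecker delta is the one case where the dot position does not matter.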
Note that in the transformation laws the covariant indices always require the direct transformation matrix $\alpha$, while the contravariant indices come along with the inverse transformation matrix $\alpha^{-1}$, just as for the vector transformation laws (5). The indices in the transformation matrices are arranged so that the indices that are not summed over are in the same position as in the new component, while the summation is over the opposite indices. The only exception is Cartesian coordinates with orthonormal bases, where reciprocal vectors coincide with the regular ones, and the covariant, contravariant and mixed components coincide for each pair of indices.

Recalling that the transformation matrix can be represented as (6), we can also write (8)-(11) as
$$\bar{A}_{ij} = \frac{\partial x^p}{\partial \bar{x}^i}\frac{\partial x^q}{\partial \bar{x}^j}\, A_{pq}, \qquad \bar{A}^{ij} = \frac{\partial \bar{x}^i}{\partial x^p}\frac{\partial \bar{x}^j}{\partial x^q}\, A^{pq}, \qquad \bar{A}_i^{\,\cdot j} = \frac{\partial x^p}{\partial \bar{x}^i}\frac{\partial \bar{x}^j}{\partial x^q}\, A_p^{\,\cdot q}, \qquad \bar{A}^i_{\,\cdot j} = \frac{\partial \bar{x}^i}{\partial x^p}\frac{\partial x^q}{\partial \bar{x}^j}\, A^p_{\,\cdot q}. \quad (12)$$
This is how second order tensors are defined in some books.

Similar to vectors, the covariant, contravariant and mixed components of a second order tensor are related to one another via the metric tensor, which raises or lowers the corresponding indices. We have
$$A_{ij} = g_{im} g_{jn} A^{mn} = g_{jn} A_i^{\,\cdot n} = g_{im} A^m_{\,\cdot j},$$
$$A^{ij} = g^{im} g^{jn} A_{mn} = g^{im} A_m^{\,\cdot j} = g^{jn} A^i_{\,\cdot n},$$
$$A_i^{\,\cdot j} = g^{jn} A_{in} = g_{im} A^{mj},$$
$$A^i_{\,\cdot j} = g^{im} A_{mj} = g_{jn} A^{in}. \quad (13)$$

We can now easily generalize this to write transformation laws for higher order tensors. For example,
$$\bar{A}_{ijk} = \alpha_i^p \alpha_j^q \alpha_k^r A_{pqr},$$
$$\bar{A}^{ijk} = (\alpha^{-1})^i_p (\alpha^{-1})^j_q (\alpha^{-1})^k_r A^{pqr},$$
$$\bar{A}^{ij}_{\,\cdot\cdot k} = (\alpha^{-1})^i_p (\alpha^{-1})^j_q \alpha_k^r A^{pq}_{\,\cdot\cdot r},$$
$$\bar{A}_{i\cdot k}^{\,\cdot j\,\cdot} = \alpha_i^p (\alpha^{-1})^j_q \alpha_k^r A_{p\cdot r}^{\,\cdot q\,\cdot},$$
and so on. Once again, we define a third order tensor as an object whose various components transform according to these rules, and this is what we need to check to verify these are components of a tensor. The components of a third order tensor are again related by the metric tensor:
$$A_{ijk} = g_{im} A^m_{\,\cdot jk} = g_{im} g_{jn} A^{mn}_{\,\cdot\cdot k} = g_{im} g_{jn} g_{kl} A^{mnl}, \quad \text{etc.}$$

Exercise.
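The higher-order laws are mechanical to apply with einsum. A sketch under made-up data: a third-order covariant tensor built as an outer product of three vectors transforms, by the law above, exactly as the product of the transformed vectors, and raising indices commutes with the change of basis.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # made-up invertible alpha_i^p
alpha_inv = np.linalg.inv(alpha).T                     # (alpha^-1)^i_p, indexed [i, p]

# Covariant components of three made-up vectors.
u, v, w = rng.standard_normal((3, 3))
T_cov = np.einsum('p,q,r->pqr', u, v, w)               # T_pqr = u_p v_q w_r

# Third-order law: T-bar_ijk = alpha_i^p alpha_j^q alpha_k^r T_pqr
T_cov_bar = np.einsum('ip,jq,kr,pqr->ijk', alpha, alpha, alpha, T_cov)

# It agrees with transforming each vector by (5) and re-forming the product.
u_bar, v_bar, w_bar = alpha @ u, alpha @ v, alpha @ w
assert np.allclose(T_cov_bar, np.einsum('i,j,k->ijk', u_bar, v_bar, w_bar))

# Mixed law: T-bar^ij_..k = (alpha^-1)^i_p (alpha^-1)^j_q alpha_k^r T^pq_..r.
g = rng.standard_normal((3, 3))
g = g @ g.T + 3.0 * np.eye(3)                          # a made-up metric g_pq
g_inv = np.linalg.inv(g)
T_mixed = np.einsum('pm,qn,mnr->pqr', g_inv, g_inv, T_cov)   # raise the first two indices
T_mixed_bar = np.einsum('ip,jq,kr,pqr->ijk', alpha_inv, alpha_inv, alpha, T_mixed)

# Raising the first two indices of T-bar with the transformed metric gives the same result.
g_bar = np.einsum('ip,jq,pq->ij', alpha, alpha, g)
g_bar_inv = np.linalg.inv(g_bar)
assert np.allclose(np.einsum('im,jn,mnk->ijk', g_bar_inv, g_bar_inv, T_cov_bar), T_mixed_bar)
```

The same pattern extends to any order: one factor of $\alpha$ per covariant index, one factor of $\alpha^{-1}$ per contravariant index.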
Write the transformation laws for the components $A_{ijkl}$, $A^{ijkl}$, $A_{ij}^{\,\cdot\cdot kl}$ and $A_{i\,\cdot\cdot l}^{\,\cdot jk\,\cdot}$ of a fourth order tensor and the relations between these components.

Example 1. Contravariant components of a tensor $A$ in the basis $e_1 = (0, 1, 1)$, $e_2 = (1, 0, 1)$, $e_3 = (1, 1, 1)$ are
$$[A^{ij}] = \begin{pmatrix} -1 & 2 & 0 \\ 2 & 0 & 3 \\ 0 & 3 & -2 \end{pmatrix}.$$
Find its mixed components $A^i_{\,\cdot j}$ and $A_i^{\,\cdot j}$ and covariant components $A_{ij}$.

Solution. We have $A^i_{\,\cdot j} = A^{im} g_{mj}$. The metric tensor is given by
$$[g_{mj}] = [e_m \cdot e_j] = \begin{pmatrix} 2 & 1 & 2 \\ 1 & 2 & 2 \\ 2 & 2 & 3 \end{pmatrix}.$$
Therefore,
$$[A^i_{\,\cdot j}] = [A^{im}][g_{mj}] = \begin{pmatrix} -1 & 2 & 0 \\ 2 & 0 & 3 \\ 0 & 3 & -2 \end{pmatrix} \begin{pmatrix} 2 & 1 & 2 \\ 1 & 2 & 2 \\ 2 & 2 & 3 \end{pmatrix} = \begin{pmatrix} 0 & 3 & 2 \\ 10 & 8 & 13 \\ -1 & 2 & 0 \end{pmatrix}.$$
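The matrix products in this example can be verified mechanically; the sketch below takes the basis and components from the example itself and also computes the other mixed set and the covariant components via (13).

```python
import numpy as np

# Basis from the example (rows are e_1, e_2, e_3) and contravariant components A^{ij}.
E = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
A_con = np.array([[-1.0, 2.0, 0.0],
                  [ 2.0, 0.0, 3.0],
                  [ 0.0, 3.0, -2.0]])

g = E @ E.T                                  # g_mj = e_m . e_j
assert np.allclose(g, [[2, 1, 2], [1, 2, 2], [2, 2, 3]])

A_ud = A_con @ g                             # A^i_.j = A^{im} g_mj
assert np.allclose(A_ud, [[0, 3, 2], [10, 8, 13], [-1, 2, 0]])

A_du = g @ A_con                             # A_i^.j = g_im A^{mj}
A_cov = g @ A_con @ g                        # A_ij = g_im A^{mn} g_nj
# Lowering the remaining index of either mixed set gives the same A_ij.
assert np.allclose(A_du @ g, A_cov)
assert np.allclose(g @ A_ud, A_cov)
```

Note that $[A_i^{\,\cdot j}]$ is the metric multiplied from the left, while $[A^i_{\,\cdot j}]$ has it on the right; the two mixed arrays differ even though $A^{ij}$ and $g_{ij}$ are both symmetric here.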