Orthogonal Transformations

October 12, 2014

1 Defining property

The squared length of a vector is given by taking the dot product of the vector with itself,
$$v^2 = \mathbf{v}\cdot\mathbf{v} = g_{ij}\, v^i v^j$$
An orthogonal transformation is a linear transformation of a vector space that preserves the lengths of vectors. This defining property may therefore be written as a linear transformation,
$$\mathbf{v}' = O\mathbf{v}$$
such that
$$\mathbf{v}'\cdot\mathbf{v}' = \mathbf{v}\cdot\mathbf{v}$$
Write this definition in terms of components using index notation. The transformation $O$ must have mixed indices so that the sum is over one index of each type and the free indices match,
$$v'^i = O^i{}_j\, v^j$$
Then we have
$$g_{ij}\, v'^i v'^j = g_{ij}\, v^i v^j$$
$$g_{ij}\left(O^i{}_k v^k\right)\left(O^j{}_m v^m\right) = g_{ij}\, v^i v^j$$
$$\left(g_{ij}\, O^i{}_k O^j{}_m\right) v^k v^m = g_{km}\, v^k v^m$$
or, bringing both terms to the same side,
$$\left(g_{ij}\, O^i{}_k O^j{}_m - g_{km}\right) v^k v^m = 0$$
Because $v^k$ is arbitrary, we can easily make six independent choices so that the resulting six matrices $v^k v^m$ span the space of all symmetric matrices. The only way for all six of these to vanish when contracted on $g_{ij}\, O^i{}_k O^j{}_m - g_{km}$ is if
$$g_{ij}\, O^i{}_k O^j{}_m - g_{km} = 0$$
This is the defining property of an orthogonal transformation. In Cartesian coordinates, $g_{ij} = \delta_{ij}$, and this condition may be written as
$$O^t O = 1$$
where $O^t$ is the transpose of $O$. This is equivalent to $O^t = O^{-1}$.

Notice that nothing in the preceding argument depends on the dimension of $\mathbf{v}$ being three. We conclude that orthogonal transformations in any dimension $n$ must satisfy $O^t O = 1$, where $O$ is an $n \times n$ matrix.

Consider the determinant of the defining condition,
$$\det 1 = \det\left(O^t O\right)$$
$$1 = \det O^t\, \det O$$
and since the determinant of the transpose of a matrix equals the determinant of the original matrix, we have $\det O = \pm 1$. Any orthogonal transformation with $\det O = -1$ is just the parity operator,
$$P = \begin{pmatrix} -1 & & \\ & -1 & \\ & & -1 \end{pmatrix}$$
times an orthogonal transformation with $\det O = +1$, so if we treat parity independently we have the special orthogonal group, $SO(n)$, of unit-determinant orthogonal transformations.

1.1 More detail

If the argument above is not already clear, it is easy to see what is happening in two dimensions. We have
$$v^k v^m = \begin{pmatrix} v^1 v^1 & v^1 v^2 \\ v^2 v^1 & v^2 v^2 \end{pmatrix}$$
which is a symmetric matrix. But $\left(g_{ij}\, O^i{}_k O^j{}_m - g_{km}\right) v^k v^m = 0$ must hold for all choices of $v^i$. If we make the choice $v^i = (1, 0)$ then
$$v^k v^m = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$
and $\left(g_{ij}\, O^i{}_k O^j{}_m - g_{km}\right) v^k v^m = 0$ implies $g_{ij}\, O^i{}_1 O^j{}_1 = g_{11}$. Now choose $v^i = (0, 1)$, so that
$$v^k v^m = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$
and we see that we must also have $g_{ij}\, O^i{}_2 O^j{}_2 = g_{22}$. Finally, let $v^i = (1, 1)$, so that
$$v^k v^m = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$$
This gives
$$\left(g_{ij}\, O^i{}_1 O^j{}_1 - g_{11}\right) + \left(g_{ij}\, O^i{}_1 O^j{}_2 - g_{12}\right) + \left(g_{ij}\, O^i{}_2 O^j{}_1 - g_{21}\right) + \left(g_{ij}\, O^i{}_2 O^j{}_2 - g_{22}\right) = 0$$
but by the previous two relations the first and fourth terms already vanish, and we have
$$\left(g_{ij}\, O^i{}_1 O^j{}_2 - g_{12}\right) + \left(g_{ij}\, O^i{}_2 O^j{}_1 - g_{21}\right) = 0$$
Because $g_{ij} = g_{ji}$, these two terms are the same, and we conclude
$$g_{ij}\, O^i{}_k O^j{}_m = g_{km}$$
for all $k, m$. In three dimensions, six different choices of $v^i$ are required.
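As a quick numerical illustration of the defining condition (a minimal numpy sketch added here, not part of the original notes; the rotation angle and example matrices are arbitrary choices), we can check $O^t O = 1$ and $\det O = \pm 1$ in Cartesian coordinates, and see how parity supplies the $\det O = -1$ case.

```python
import numpy as np

# Sketch: verify O^t O = 1 and det O = ±1 for a sample orthogonal transformation
# (Cartesian coordinates, g_ij = delta_ij). The angle is an arbitrary choice.
theta = 0.7
O = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about the z-axis

assert np.allclose(O.T @ O, np.eye(3))   # the defining condition O^t O = 1
print(np.linalg.det(O))                  # +1: an element of SO(3)

P = -np.eye(3)                           # the parity operator
print(np.linalg.det(P @ O))              # -1: orthogonal, but not special orthogonal
```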
2 Infinitesimal generators

We work in Cartesian coordinates, where we can use matrix notation. Then, for the real, 3-dimensional representation of rotations, we require
$$O^t = O^{-1}$$
Notice that the identity satisfies this condition, so we may consider linear transformations near the identity which also satisfy the condition. Let
$$O = 1 + \varepsilon$$
where the components $[\varepsilon]_{ij} = \varepsilon_{ij}$ are all small, $|\varepsilon_{ij}| \ll 1$ for all $i, j$. Keeping only terms to first order in $\varepsilon_{ij}$, we have
$$O^t = 1 + \varepsilon^t, \qquad O^{-1} = 1 - \varepsilon$$
where we see that this is the right form for $O^{-1}$ by computing
$$O O^{-1} = (1 + \varepsilon)(1 - \varepsilon) = 1 - \varepsilon^2 \approx 1$$
correct to first order in $\varepsilon$.

Now we impose our condition,
$$O^t = O^{-1}$$
$$1 + \varepsilon^t = 1 - \varepsilon$$
$$\varepsilon^t = -\varepsilon$$
so that the matrix $\varepsilon$ must be antisymmetric. Next, we write the most general antisymmetric $3 \times 3$ matrix as a linear combination of a convenient basis,
$$\varepsilon = w^i J_i = w^1 \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix} + w^2 \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} + w^3 \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & w^3 & -w^2 \\ -w^3 & 0 & w^1 \\ w^2 & -w^1 & 0 \end{pmatrix}$$
where $w^i \ll 1$. Notice that the components of the three matrices $J_i$ are neatly summarized by
$$[J_i]_{jk} = \varepsilon_{ijk}$$
where $\varepsilon_{ijk}$ is the totally antisymmetric Levi-Civita tensor. For example,
$$[J_1]_{ij} = \varepsilon_{1ij} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}$$
The matrices $J_i$ are called the generators of the transformations. The most general antisymmetric matrix is then a linear combination of the three generators.

Knowing the generators is enough to recover an arbitrary rotation. Starting with $O = 1 + \varepsilon$, we may apply $O$ repeatedly, taking the limit
$$O(\theta) = \lim_{n\to\infty} O^n = \lim_{n\to\infty} (1 + \varepsilon)^n = \lim_{n\to\infty} \left(1 + w^i J_i\right)^n$$
Let $w$ be the length of the infinitesimal vector $w^i$, so that $w^i = w\, n^i$, where $n^i$ is a unit vector. Then the limit is taken in such a way that
$$\lim_{n\to\infty} n w = \theta$$
where $\theta$ is finite. Using the binomial expansion, $(a + b)^n = \sum_{k=0}^{n} \frac{n!}{k!(n-k)!}\, a^{n-k} b^k$, we have
$$\lim_{n\to\infty} O^n = \lim_{n\to\infty} \left(1 + w^i J_i\right)^n = \lim_{n\to\infty} \sum_{k=0}^{n} \frac{n!}{k!(n-k)!}\, (1)^{n-k} \left(w^i J_i\right)^k$$
$$= \lim_{n\to\infty} \sum_{k=0}^{n} \frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}\, \frac{1}{n^k}\, (nw)^k \left(n^i J_i\right)^k$$
$$= \lim_{n\to\infty} \sum_{k=0}^{n} \frac{\frac{n}{n}\,\frac{n-1}{n}\,\frac{n-2}{n}\cdots\frac{n-k+1}{n}}{k!} \left(\theta\, n^i J_i\right)^k$$
$$= \sum_{k=0}^{\infty} \frac{1}{k!} \left(\theta\, n^i J_i\right)^k \equiv \exp\left(\theta\, n^i J_i\right)$$
We define the exponential of a matrix by the power series for the exponential, applied using powers of the matrix. This is the matrix for a rotation through an angle $\theta$ around an axis in the direction of $\mathbf{n}$.

To find the detailed form of a general rotation, we now need to find powers of $n^i J_i$. This turns out to be straightforward:
$$\left[n^i J_i\right]^j{}_k = n^i\, \varepsilon_i{}^j{}_k$$
$$\left[\left(n^i J_i\right)^2\right]^m{}_n = \left(n^i\, \varepsilon_i{}^m{}_k\right)\left(n^j\, \varepsilon_j{}^k{}_n\right) = n^i n^j\, \varepsilon_i{}^m{}_k\, \varepsilon_j{}^k{}_n = -n^i n^j\, \varepsilon_i{}^m{}_k\, \varepsilon_{jn}{}^k$$
$$= -n^i n^j \left(\delta_{ij}\, \delta^m{}_n - \delta_{in}\, \delta^m{}_j\right) = -\delta^m{}_n + n^m n_n = -\left(\delta^m{}_n - n^m n_n\right)$$
$$\left[\left(n^i J_i\right)^3\right]^m{}_n = -\left(\delta^m{}_k - n^m n_k\right) n^i\, \varepsilon_i{}^k{}_n = -\delta^m{}_k\, n^i\, \varepsilon_i{}^k{}_n + n^m n_k\, n^i\, \varepsilon_i{}^k{}_n = -\left[n^i J_i\right]^m{}_n$$
where the second term drops out because $n_k n^i\, \varepsilon_i{}^k{}_n \propto (\mathbf{n}\times\mathbf{n})_n = 0$. The powers come back to $n^i J_i$ with only a sign change, so we can divide the series into even and odd powers. For all $k > 0$,
$$\left[\left(n^i J_i\right)^{2k}\right]^m{}_n = (-1)^k \left(\delta^m{}_n - n^m n_n\right)$$
$$\left[\left(n^i J_i\right)^{2k+1}\right]^m{}_n = (-1)^k \left[n^i J_i\right]^m{}_n$$
For $k = 0$ we have the identity, $\left[\left(n^i J_i\right)^0\right]^m{}_n = \delta^m{}_n$.

We can now compute the exponential explicitly:
$$\left[O(\theta, \hat{\mathbf{n}})\right]^m{}_n = \left[\exp\left(\theta\, n^i J_i\right)\right]^m{}_n = \left[\sum_{k=0}^{\infty} \frac{1}{k!} \left(\theta\, n^i J_i\right)^k\right]^m{}_n$$
$$= \left[\sum_{l=0}^{\infty} \frac{1}{(2l)!} \left(\theta\, n^i J_i\right)^{2l}\right]^m{}_n + \left[\sum_{l=0}^{\infty} \frac{1}{(2l+1)!} \left(\theta\, n^i J_i\right)^{2l+1}\right]^m{}_n$$
$$= \delta^m{}_n + \sum_{l=1}^{\infty} \frac{(-1)^l}{(2l)!}\, \theta^{2l} \left(\delta^m{}_n - n^m n_n\right) + \sum_{l=0}^{\infty} \frac{(-1)^l}{(2l+1)!}\, \theta^{2l+1} \left[n^i J_i\right]^m{}_n$$
$$= \delta^m{}_n + (\cos\theta - 1)\left(\delta^m{}_n - n^m n_n\right) + \sin\theta\, n^i\, \varepsilon_i{}^m{}_n$$
where we get $(\cos\theta - 1)$ because the $l = 0$ term is missing from the even sum.

To see what this means, let $O$ act on an arbitrary vector $\mathbf{v}$, and write the result in ordinary vector notation,
$$\left[O(\theta, \hat{\mathbf{n}})\right]^m{}_n v^n = \delta^m{}_n v^n + (\cos\theta - 1)\left(\delta^m{}_n - n^m n_n\right) v^n + \sin\theta\, n^i\, \varepsilon_i{}^m{}_n v^n$$
$$= v^m + (\cos\theta - 1)\left(v^m - (\mathbf{n}\cdot\mathbf{v})\, n^m\right) - \left[\mathbf{n}\times\mathbf{v}\right]^m \sin\theta$$
Going fully to vector notation,
$$O(\theta, \hat{\mathbf{n}})\, \mathbf{v} = \mathbf{v} + (\cos\theta - 1)\left(\mathbf{v} - (\mathbf{n}\cdot\mathbf{v})\, \mathbf{n}\right) - (\mathbf{n}\times\mathbf{v}) \sin\theta$$
Finally, define the components of $\mathbf{v}$ parallel and perpendicular to the unit vector $\mathbf{n}$:
$$\mathbf{v}_\parallel = (\mathbf{v}\cdot\mathbf{n})\, \mathbf{n}, \qquad \mathbf{v}_\perp = \mathbf{v} - (\mathbf{v}\cdot\mathbf{n})\, \mathbf{n}$$
Therefore,
$$O(\theta, \hat{\mathbf{n}})\, \mathbf{v} = \mathbf{v} - \left(\mathbf{v} - (\mathbf{v}\cdot\mathbf{n})\, \mathbf{n}\right) + \left(\mathbf{v} - (\mathbf{v}\cdot\mathbf{n})\, \mathbf{n}\right)\cos\theta - (\mathbf{n}\times\mathbf{v})\sin\theta = \mathbf{v}_\parallel + \mathbf{v}_\perp \cos\theta - (\mathbf{n}\times\mathbf{v})\sin\theta$$
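As a sanity check on this closed form (a minimal numpy/scipy sketch added here, not part of the original notes; the axis, angle, and test vector are arbitrary choices), we can build the generators directly from $[J_i]_{jk} = \varepsilon_{ijk}$, exponentiate $\theta\, n^i J_i$ numerically, and compare with $\mathbf{v}_\parallel + \mathbf{v}_\perp \cos\theta - (\mathbf{n}\times\mathbf{v})\sin\theta$.

```python
import numpy as np
from scipy.linalg import expm  # dense matrix exponential

# Sketch: build [J_i]_{jk} = eps_{ijk}, exponentiate theta n^i J_i, and compare
# with the closed form  O(theta, n) v = v_par + v_perp cos(theta) - (n x v) sin(theta).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0   # Levi-Civita tensor
J = eps                                      # J[i] is the generator matrix [J_i]_{jk}

theta = 1.1                                  # arbitrary angle
n = np.array([1.0, 2.0, 2.0]) / 3.0          # arbitrary unit axis
v = np.array([0.3, -1.2, 0.5])               # arbitrary vector to rotate

O = expm(theta * np.einsum('i,ijk->jk', n, J))   # exp(theta n^i J_i)

v_par = np.dot(v, n) * n
v_perp = v - v_par
closed_form = v_par + v_perp * np.cos(theta) - np.cross(n, v) * np.sin(theta)

assert np.allclose(O @ v, closed_form)   # the series and the closed form agree
assert np.allclose(O.T @ O, np.eye(3))   # and O is orthogonal, as it must be
```

Because the exponential of an antisymmetric generator is automatically orthogonal, the second assertion holds for any $\theta$ and unit $\mathbf{n}$, not just the values chosen here.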
This closed form expresses the rotated vector in terms of the three mutually perpendicular vectors $\mathbf{v}_\parallel$, $\mathbf{v}_\perp$, and $\mathbf{n}\times\mathbf{v}$.
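The compounding limit used above can also be seen numerically. The sketch below (again an illustration added here, not part of the original notes, with an arbitrarily chosen axis and angle) compares $\left(1 + \tfrac{\theta}{N}\, n^i J_i\right)^N$ with the closed-form rotation matrix $\delta^m{}_n + (\cos\theta - 1)\left(\delta^m{}_n - n^m n_n\right) + \sin\theta\, n^i \varepsilon_i{}^m{}_n$ for increasing $N$.

```python
import numpy as np

# Sketch: the finite rotation as a limit of many small ones,
#   O(theta) = lim_{N -> inf} (1 + (theta/N) n^i J_i)^N.
theta = 1.1
n = np.array([1.0, 2.0, 2.0]) / 3.0           # arbitrary unit axis

# n^i J_i written out using [J_i]_{jk} = eps_{ijk}
nJ = np.array([[ 0.0,   n[2], -n[1]],
               [-n[2],  0.0,   n[0]],
               [ n[1], -n[0],  0.0]])

# Closed form derived above: delta + (cos(theta) - 1)(delta - n n^t) + sin(theta) (n.J)
exact = (np.eye(3)
         + (np.cos(theta) - 1.0) * (np.eye(3) - np.outer(n, n))
         + np.sin(theta) * nJ)

for N in (10, 100, 10_000):
    approx = np.linalg.matrix_power(np.eye(3) + (theta / N) * nJ, N)
    print(N, np.max(np.abs(approx - exact)))  # the error shrinks roughly like 1/N
```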