Reciprocal Frame Vectors


Peeter Joot

March 29, 2008

1 Approach without Geometric Algebra.

Without employing geometric algebra, one can use the projection operation, expressed as a dot product, to calculate a vector orthogonal to a set of other vectors in the direction of a reference vector. Such a calculation also yields $\mathbb{R}^N$ results in terms of determinants, and as a side effect produces equations for parallelogram area, parallelepiped volume, and higher dimensional analogues (without having to employ change of basis diagonalization arguments that do not work well for higher dimensional subspaces).

1.1 Orthogonal to one vector

The simplest case is the vector perpendicular to another. In anything but $\mathbb{R}^2$ there is a whole set of such vectors, so to express this as a unique result a reference vector is required. Calculation of the coordinate form for this case follows directly from the dot product. Borrowing the GA term, we subtract the projection to calculate the rejection:

$$
\begin{aligned}
\mathrm{Rej}_{\hat{u}}(v) &= v - (v \cdot \hat{u})\,\hat{u} \\
&= \frac{1}{u^2}\left(v\,u^2 - (v \cdot u)\,u\right) \\
&= \frac{1}{u^2}\sum \left(v_i e_i\, u_j u_j - v_j u_j\, u_i e_i\right) \\
&= \frac{1}{u^2}\sum u_j e_i \begin{vmatrix} v_i & v_j \\ u_i & u_j \end{vmatrix} \\
&= \frac{1}{u^2}\sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix} \left(u_i e_j - u_j e_i\right)
\end{aligned}
$$

Thus we can write the rejection of $v$ from $\hat{u}$ as:

$$
\mathrm{Rej}_{\hat{u}}(v) = \frac{1}{u^2}\sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix} \begin{vmatrix} u_i & u_j \\ e_i & e_j \end{vmatrix} \tag{1}
$$

Or, introducing some shorthand,

$$
D^{uv}_{ij} = \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix},
\qquad
D^{ue}_{ij} = \begin{vmatrix} u_i & u_j \\ e_i & e_j \end{vmatrix},
$$

equation 1 can be expressed in a form that will be slightly more convenient for larger sets of vectors:

$$
\mathrm{Rej}_{\hat{u}}(v) = \frac{1}{u^2}\sum_{i<j} D^{uv}_{ij}\, D^{ue}_{ij} \tag{2}
$$

Note that although the GA axiom $u^2 = u \cdot u$ has been used in equations 1 and 2 and in the derivation, it was not necessary to prove them. It can, for now, be thought of as a notational convenience to avoid having to write $u \cdot u$ or $\lVert u \rVert^2$.

This result can be used to express the $\mathbb{R}^N$ area of a parallelogram, since we just have to multiply the length of $\mathrm{Rej}_{\hat{u}}(v)$,

$$
\lVert \mathrm{Rej}_{\hat{u}}(v) \rVert^2 = \mathrm{Rej}_{\hat{u}}(v) \cdot v = \frac{1}{u^2}\sum_{i<j} \left(D^{uv}_{ij}\right)^2,
$$

by the length of the base $\lVert u \rVert$. [FIXME: insert figure.] Thus the area (squared) is:

$$
A^2_{u,v} = \sum_{i<j} \left(D^{uv}_{ij}\right)^2 \tag{3}
$$

For the special case of vectors in $\mathbb{R}^2$ this is

$$
A_{u,v} = \left|D^{uv}_{12}\right| = \mathrm{abs}\begin{vmatrix} u_1 & u_2 \\ v_1 & v_2 \end{vmatrix} \tag{4}
$$

1.2 Vector orthogonal to two vectors in the direction of a third.

The same procedure can be followed for three vectors, but the algebra gets messier. Given three vectors $u$, $v$, and $w$ we can calculate the component $w'$ of $w$ perpendicular to $u$ and $v$. That is:

$$
v' = v - (v \cdot \hat{u})\,\hat{u}
\implies
w' = w - (w \cdot \hat{u})\,\hat{u} - (w \cdot \hat{v}')\,\hat{v}'
$$

After expanding this out, a number of the terms cancel and one is left with

$$
w'' = w'\left(u^2 v^2 - (u \cdot v)^2\right)
= u\left(-(u \cdot w)\,v^2 + (u \cdot v)(v \cdot w)\right)
+ v\left(-u^2 (v \cdot w) + (u \cdot v)(u \cdot w)\right)
+ w\left(u^2 v^2 - (u \cdot v)^2\right)
$$

This in turn can be expanded in terms of coordinates and the results collected, yielding

$$
\begin{aligned}
w'' &= \sum e_i u_j v_k \left( u_i \begin{vmatrix} v_j & v_k \\ w_j & w_k \end{vmatrix} - v_i \begin{vmatrix} u_j & u_k \\ w_j & w_k \end{vmatrix} + w_i \begin{vmatrix} u_j & u_k \\ v_j & v_k \end{vmatrix} \right) \\
&= \sum e_i u_j v_k \begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ w_i & w_j & w_k \end{vmatrix} \\
&= \sum_{i,\; j<k} e_i \begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ w_i & w_j & w_k \end{vmatrix} \begin{vmatrix} u_j & u_k \\ v_j & v_k \end{vmatrix} \\
&= \left( \sum_{i<j<k} + \sum_{j<i<k} + \sum_{j<k<i} \right) e_i \begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ w_i & w_j & w_k \end{vmatrix} \begin{vmatrix} u_j & u_k \\ v_j & v_k \end{vmatrix}.
\end{aligned}
$$

Expanding the denominator in terms of coordinates,

$$
u^2 v^2 - (u \cdot v)^2 = \sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2,
$$

and using a change of summation indexes, our final result for the vector perpendicular to two others in the direction of a third is:

$$
\mathrm{Rej}_{\hat{u},\hat{v}}(w) =
\frac{\displaystyle\sum_{i<j<k}
\begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ w_i & w_j & w_k \end{vmatrix}
\begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ e_i & e_j & e_k \end{vmatrix}}
{\displaystyle\sum_{i<j} \begin{vmatrix} u_i & u_j \\ v_i & v_j \end{vmatrix}^2} \tag{5}
$$

As a small aside, it is notable to observe that
$\mathrm{span}\left\{\begin{vmatrix} u_i & u_j \\ e_i & e_j \end{vmatrix}\right\}$
is the null space for the vector $u$, and that
$\mathrm{span}\left\{\begin{vmatrix} u_i & u_j & u_k \\ v_i & v_j & v_k \\ e_i & e_j & e_k \end{vmatrix}\right\}$
is the null space for the two vectors $u$ and $v$ respectively.
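The determinant sums above lend themselves to a direct numerical check. Below is a minimal sketch (my own addition, not part of the original notes) verifying equations (2) and (3) for random vectors in $\mathbb{R}^5$ against the plain projection subtraction; the use of numpy and all variable names are assumptions.

```python
# Numerical check of equations (2) and (3): rejection of v from u, and parallelogram
# area, built from the 2x2 determinants D^{uv}_{ij}.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5
u, v = rng.normal(size=n), rng.normal(size=n)

rej = np.zeros(n)
area_sq = 0.0
for i, j in itertools.combinations(range(n), 2):
    D_uv = u[i] * v[j] - u[j] * v[i]           # D^{uv}_{ij}
    D_ue = np.zeros(n)
    D_ue[i], D_ue[j] = -u[j], u[i]             # D^{ue}_{ij} = u_i e_j - u_j e_i, as a coordinate vector
    rej += D_uv * D_ue
    area_sq += D_uv ** 2                       # accumulates the sum in equation (3)
rej /= u @ u                                   # equation (2)

rej_direct = v - (v @ u) / (u @ u) * u         # v minus its projection onto u

assert np.allclose(rej, rej_direct)
# Equation (3): the squared parallelogram area also equals |u|^2 |v|^2 - (u.v)^2.
assert np.isclose(area_sq, (u @ u) * (v @ v) - (u @ v) ** 2)
print("equations (2) and (3) check out for random vectors in R^5")
```

The same looping pattern over coordinate index pairs extends to the triple-index sums of equation (5) and, later, to the general case of equation (10).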
Since the rejection is normal to the set of vectors, it must necessarily include these cross-product-like determinant terms.

As in equation 2, use of a $D^{uvw}_{ijk}$ notation allows for a more compact result:

$$
\mathrm{Rej}_{\hat{u}\hat{v}}(w) = \left(\sum_{i<j} \left(D^{uv}_{ij}\right)^2\right)^{-1} \sum_{i<j<k} D^{uvw}_{ijk}\, D^{uve}_{ijk} \tag{6}
$$

And, as before, this yields the volume of the parallelepiped by multiplying the perpendicular height,

$$
\lVert \mathrm{Rej}_{\hat{u}\hat{v}}(w) \rVert^2 = \mathrm{Rej}_{\hat{u}\hat{v}}(w) \cdot w = \left(\sum_{i<j} \left(D^{uv}_{ij}\right)^2\right)^{-1} \sum_{i<j<k} \left(D^{uvw}_{ijk}\right)^2,
$$

by the base area. Thus the squared volume of the parallelepiped spanned by the three vectors is:

$$
V^2_{u,v,w} = \sum_{i<j<k} \left(D^{uvw}_{ijk}\right)^2. \tag{7}
$$

The simplest case is $\mathbb{R}^3$, where we have only one summand:

$$
V_{u,v,w} = \left|D^{uvw}_{123}\right| = \mathrm{abs}\begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix}. \tag{8}
$$

1.3 Generalization. Inductive Hypothesis.

There are two things to prove.

1. The hypervolume of the parallelepiped spanned by vectors $u_1, u_2, \ldots, u_k$:

$$
V^2_{u_1, u_2, \cdots, u_k} = \sum_{i_1 < i_2 < \cdots < i_k} \left(D^{u_1 u_2 \cdots u_k}_{i_1 i_2 \cdots i_k}\right)^2 \tag{9}
$$

2. The orthogonal rejection of a set of vectors in the direction of another:

$$
\mathrm{Rej}_{\hat{u}_1 \cdots \hat{u}_{k-1}}(u_k) =
\frac{\displaystyle\sum_{i_1 < \cdots < i_k} D^{u_1 \cdots u_k}_{i_1 \cdots i_k}\, D^{u_1 \cdots u_{k-1} e}_{i_1 \cdots i_k}}
{\displaystyle\sum_{i_1 < \cdots < i_{k-1}} \left(D^{u_1 \cdots u_{k-1}}_{i_1 \cdots i_{k-1}}\right)^2} \tag{10}
$$

I cannot recall if I ever did the inductive proof for this. The initial cases are done (it is proved above for both the two and three vector cases). For the limiting case where $k = n$ it can be observed that the result is normal to all the others, so the only thing left to prove there is whether the scaling provided by the hypervolume equation 9 is correct.

1.4 Scaling required for reciprocal frame vector.

Presuming an inductive proof of the general result of 10 is possible, this rejection has the property

$$
\mathrm{Rej}_{\hat{u}_1 \cdots \hat{u}_{k-1}}(u_k) \cdot u_i \propto \delta_{ki}.
$$

With the scaling factor picked so that this equals $\delta_{ki}$, the resulting "reciprocal frame vector" is

$$
u^k =
\frac{\displaystyle\sum_{i_1 < \cdots < i_k} D^{u_1 \cdots u_k}_{i_1 \cdots i_k}\, D^{u_1 \cdots u_{k-1} e}_{i_1 \cdots i_k}}
{\displaystyle\sum_{i_1 < \cdots < i_k} \left(D^{u_1 \cdots u_k}_{i_1 \cdots i_k}\right)^2} \tag{11}
$$

The superscript notation is borrowed from Doran/Lasenby, and denotes not a vector raised to a power, but this special vector satisfying the following orthogonality and scaling criteria:

$$
u^k \cdot u_i = \delta_{ki}. \tag{12}
$$

Note that for $k = n$, equation 11 reduces to

$$
u^n = \frac{D^{u_1 \cdots u_{n-1} e}_{1 \cdots n}}{D^{u_1 \cdots u_n}_{1 \cdots n}}. \tag{13}
$$

This, or some other scaled version of it, is likely as close as we can come to generalizing the cross product as an operation that takes vectors to vectors.

1.5 Example. $\mathbb{R}^3$ case. Perpendicular to two vectors.

Observe that for $\mathbb{R}^3$, writing $u = u_1$, $v = u_2$, $w = u_3$, and $w' = u^3$, this is:

$$
w' = \frac{\begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ e_1 & e_2 & e_3 \end{vmatrix}}{\begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix}} = \frac{u \times v}{(u \times v) \cdot w} \tag{14}
$$

This is the cross product scaled by the inverse of the (signed) volume of the parallelepiped spanned by the three vectors.
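The $\mathbb{R}^3$ case is easy to verify numerically. The following is a minimal sketch (my own addition, not part of the original notes): it builds $u^3$ from equation (14), the cyclic analogues $u^1$ and $u^2$ (an assumption justified by the symmetry of the scalar triple product), and checks the criteria of equation (12); the final comparison against the matrix inverse is the standard linear-algebra statement of the same reciprocity, not something taken from the text.

```python
# R^3 reciprocal frame example: u^3 = (u x v) / ((u x v) . w), equation (14),
# plus the cyclic analogues for u^1 and u^2.
import numpy as np

u = np.array([1.0, 2.0, 0.5])
v = np.array([0.0, 1.0, 3.0])
w = np.array([2.0, -1.0, 1.0])

volume = np.cross(u, v) @ w                   # signed parallelepiped volume
u3 = np.cross(u, v) / volume                  # equation (14)
u1 = np.cross(v, w) / volume                  # cyclic analogue (assumed by symmetry)
u2 = np.cross(w, u) / volume

# Orthogonality and scaling criteria of equation (12): u^k . u_i = delta_{ki}.
U = np.vstack([u, v, w])
R = np.vstack([u1, u2, u3])
assert np.allclose(R @ U.T, np.eye(3))

# Equivalently, the reciprocal frame vectors are the columns of the inverse matrix.
assert np.allclose(R, np.linalg.inv(U).T)
print("reciprocal frame of {u, v, w} verified")
```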
2 Derivation with GA.

Projection and rejection with respect to a set of vectors can be expressed directly. For vectors $u_i$ write $B = u_1 \wedge u_2 \wedge \cdots \wedge u_k$. Then for any vector $x$ we have:

$$
\begin{aligned}
x &= x B \frac{1}{B} \\
&= (x \cdot B + x \wedge B)\,\frac{1}{B}
\end{aligned}
$$

All of the grade three and grade five terms are removed by the grade one selection, leaving just

$$
x = (x \cdot B) \cdot \frac{1}{B} + (x \wedge B) \cdot \frac{1}{B}. \tag{15}
$$

This last term is the rejective component:

$$
\mathrm{Rej}_{B}(x) = (x \wedge B) \cdot \frac{1}{B} = \frac{(x \wedge B) \cdot B^{\dagger}}{B B^{\dagger}}. \tag{16}
$$

Here we see in the denominator the sum of squared determinants from the denominator of equation 10:

$$
B B^{\dagger} = \sum_{i_1 < \cdots < i_k} \left(D^{u_1 \cdots u_k}_{i_1 \cdots i_k}\right)^2.
$$

In the numerator we have the dot product of two wedge products, each expressible as a sum of determinants:

$$
B^{\dagger} = (-1)^{k(k-1)/2} \sum_{i_1 < \cdots < i_k} D^{u_1 \cdots u_k}_{i_1 \cdots i_k}\, e_{i_1} e_{i_2} \cdots e_{i_k}
$$

and

$$
x \wedge B = \sum_{i_1 < \cdots < i_{k+1}} D^{x u_1 \cdots u_k}_{i_1 \cdots i_{k+1}}\, e_{i_1} e_{i_2} \cdots e_{i_{k+1}}.
$$

Dotting these selects all of the grade one components of the product. Performing that calculation in general would likely provide an explicit confirmation of the inductive hypothesis of equation 10. It can be observed directly for the $k + 1 = n$ case, where the product produces a Laplace expansion sum:

$$
(x \wedge B) \cdot B^{\dagger} = D^{x u_1 \cdots u_{n-1}}_{1 2 \cdots n} \left( e_1 D^{u_1 \cdots u_{n-1}}_{2 3 4 \cdots n} - e_2 D^{u_1 \cdots u_{n-1}}_{1 3 4 \cdots n} + e_3 D^{u_1 \cdots u_{n-1}}_{1 2 4 \cdots n} - \cdots \right)
$$

$$
(x \wedge B) \cdot \frac{1}{B} = \frac{D^{x u_1 \cdots u_{n-1}}_{1 2 \cdots n}\, D^{e u_1 \cdots u_{n-1}}_{1 2 \cdots n}}{\displaystyle\sum_{i_1 < \cdots < i_k} \left(D^{u_1 \cdots u_k}_{i_1 \cdots i_k}\right)^2} \tag{17}
$$

Thus equation 10 for the $k = n - 1$ case is proved without induction.
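As a closing numerical cross-check (my own addition, not part of the original notes), the sketch below evaluates the determinant form of equation (10) for a random set of vectors in $\mathbb{R}^6$ and compares it with the rejection obtained by subtracting an ordinary least-squares projection, which is what equation (16) computes in matrix terms. The use of numpy, the cofactor expansion used for the $D^{u_1 \cdots u_{k-1} e}$ terms, and all names are assumptions.

```python
# Numerical check of equation (10): rejection of u_k from u_1 .. u_{k-1} via sums of
# k x k and (k-1) x (k-1) determinants, compared against a least-squares projection.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 4
U = rng.normal(size=(k, n))              # rows u_1 .. u_k; u_k is the vector rejected
A, x = U[:k - 1], U[k - 1]               # the k-1 frame vectors, and u_k itself

# Numerator of (10): sum over i_1 < ... < i_k of D^{u_1...u_k}_I times the vector
# D^{u_1...u_{k-1} e}_I, the latter expanded in cofactors along its row of e's.
numer = np.zeros(n)
for I in itertools.combinations(range(n), k):
    cols = list(I)
    D_full = np.linalg.det(U[:, cols])
    for pos, j in enumerate(cols):
        minor = np.delete(A[:, cols], pos, axis=1)
        numer[j] += D_full * (-1) ** (k - 1 + pos) * np.linalg.det(minor)

# Denominator of (10): the squared hypervolume of equation (9) for u_1 .. u_{k-1}.
denom = sum(np.linalg.det(A[:, list(I)]) ** 2
            for I in itertools.combinations(range(n), k - 1))

rej_determinant = numer / denom

# The same rejection by ordinary linear algebra: subtract the projection of u_k
# onto span{u_1, ..., u_{k-1}} obtained from a least-squares solve.
coeffs, *_ = np.linalg.lstsq(A.T, x, rcond=None)
rej_lstsq = x - A.T @ coeffs

assert np.allclose(rej_determinant, rej_lstsq)
print("determinant form of equation (10) matches the least-squares rejection")
```

Dividing the same numerator by $\sum \left(D^{u_1 \cdots u_k}_{i_1 \cdots i_k}\right)^2$ instead reproduces the reciprocal frame vector $u^k$ of equation (11).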