Lecture 2: Differentiation and Linear Algebra


Joan Lasenby

[with thanks to: Chris Doran & Anthony Lasenby (book), Hugo Hadfield & Eivind Eide (code), Leo Dorst (book) ... and of course, David Hestenes]

Signal Processing Group, Engineering Department, Cambridge, UK and Trinity College, Cambridge
[email protected], www-sigproc.eng.cam.ac.uk/~sjl

July 2018

Contents: GA Course I, Session 2

- The vector derivative and examples of its use.
- Multivector differentiation: examples.
- Linear algebra.
- Geometric algebras with non-Euclidean metrics.

The contents follow the notation and ordering of Geometric Algebra for Physicists [C.J.L. Doran and A.N. Lasenby] and the corresponding course the book was based on.

The Vector Derivative

A vector x can be represented in terms of coordinates in two ways:

$$x = x^k e_k \quad\text{or}\quad x = x_k e^k$$

(summation implied), depending on whether we expand in terms of a given frame $\{e_k\}$ or its reciprocal $\{e^k\}$. The coefficients in these two frames are therefore given by

$$x^k = e^k\cdot x \quad\text{and}\quad x_k = e_k\cdot x$$

Now define the following derivative operator, which we call the vector derivative:

$$\nabla = \sum_k e^k\,\frac{\partial}{\partial x^k} \equiv e^k\,\frac{\partial}{\partial x^k}$$

...this is clearly a vector!

The Vector Derivative, cont...

This is a definition so far, but we will now see how this form arises. Suppose we have a function acting on vectors, F(x). Using standard definitions of rates of change, we can define the directional derivative of F, evaluated at x, in the direction of a vector a, as

$$\lim_{\epsilon\to 0}\frac{F(x+\epsilon a) - F(x)}{\epsilon}$$

Now suppose we want the directional derivative in the direction of one of our frame vectors, say $e_1$. This is given by

$$\lim_{\epsilon\to 0}\frac{F((x^1+\epsilon)e_1 + x^2e_2 + x^3e_3) - F(x^1e_1 + x^2e_2 + x^3e_3)}{\epsilon}$$

which we recognise as

$$\frac{\partial F(x)}{\partial x^1}$$

ie the derivative with respect to the first coordinate, keeping the second and third coordinates constant.

So, if we wish to define a gradient operator $\nabla$ such that $(a\cdot\nabla)F(x)$ gives the directional derivative of F in the a direction, we clearly need

$$e_i\cdot\nabla = \frac{\partial}{\partial x^i}\quad\text{for } i = 1, 2, 3$$

...which, since $e_i\cdot e^j\,\frac{\partial}{\partial x^j} = \frac{\partial}{\partial x^i}$, gives us the previous form of the vector derivative:

$$\nabla = \sum_k e^k\,\frac{\partial}{\partial x^k}$$

It follows now that if we dot $\nabla$ with a, we get the directional derivative in the a direction:

$$a\cdot\nabla\,F(x) = \lim_{\epsilon\to 0}\frac{F(x+\epsilon a) - F(x)}{\epsilon}$$

The definition of $\nabla$ is in fact independent of the choice of frame.

Operating on Scalar and Vector Fields

Operating on:

- A scalar field $\phi(x)$: it gives $\nabla\phi$, which is the gradient.
- A vector field $J(x)$: it gives $\nabla J$. This is a geometric product; the scalar part gives the divergence and the bivector part gives the curl:

$$\nabla J = \nabla\cdot J + \nabla\wedge J$$

Very important in electromagnetism.
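As a quick numerical check of these definitions, here is a minimal sketch in plain NumPy (an illustration only: the scalar field F, the point x and the direction a are arbitrary choices, and the frame is orthonormal so that $e^k = e_k$). It compares the limit definition of $a\cdot\nabla F$ with the frame expansion $\sum_k a^k\,\partial F/\partial x^k$.

```python
import numpy as np

def F(x):
    """An arbitrary scalar field F(x) for the check."""
    return np.sin(x[0]) * x[1] + x[2] ** 2

def grad_F(x):
    """nabla F = sum_k e^k dF/dx^k, written out analytically for this F."""
    return np.array([np.cos(x[0]) * x[1], np.sin(x[0]), 2 * x[2]])

x = np.array([0.3, -1.2, 0.7])
a = np.array([1.0, 2.0, 0.5])

# Directional derivative from the limit definition, with a small epsilon
eps = 1e-6
numerical = (F(x + eps * a) - F(x)) / eps

# a . nabla F(x), using the analytic gradient
analytic = a @ grad_F(x)

print(numerical, analytic)   # agree to about 1e-5
```

The same finite-difference check works for the multivector derivative defined below, with the scalar coefficients of a multivector basis playing the role of the coordinates $x^k$.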
The Multivector Derivative

Recall our definition of the directional derivative in the a direction:

$$a\cdot\nabla\,F(x) = \lim_{\epsilon\to 0}\frac{F(x+\epsilon a) - F(x)}{\epsilon}$$

We now want to generalise this idea to enable us to find the derivative of F(X) in the A 'direction' – where X is a general mixed-grade multivector (so F(X) is a general multivector-valued function of X). Let us use $*$ to denote taking the scalar part, ie $P * Q \equiv \langle PQ\rangle$. Then, provided A has the same grades as X, it makes sense to define

$$A * \partial_X F(X) = \lim_{t\to 0}\frac{F(X+tA) - F(X)}{t}$$

The Multivector Derivative cont...

Let $\{e_J\}$ be a basis for X – ie if X is a bivector, then $\{e_J\}$ will be the basis bivectors. With the definition on the previous slide, $e_J * \partial_X$ is therefore the partial derivative in the $e_J$ direction, giving

$$\partial_X \equiv \sum_J e^J\, e_J * \partial_X$$

[since $e_J * \partial_X \equiv e_J * \{e^I(e_I * \partial_X)\}$].

Key to using these definitions of multivector differentiation are several important results:

The Multivector Derivative cont...

If $P_X(B)$ is the projection of B onto the grades of X (ie $P_X(B) \equiv e^J\langle e_J B\rangle$), then our first important result is

$$\partial_X\langle XB\rangle = P_X(B)$$

We can see this by going back to our definitions:

$$e_J * \partial_X\langle XB\rangle = \lim_{t\to 0}\frac{\langle(X+te_J)B\rangle - \langle XB\rangle}{t} = \lim_{t\to 0}\frac{\langle XB\rangle + t\langle e_J B\rangle - \langle XB\rangle}{t} = \lim_{t\to 0}\frac{t\langle e_J B\rangle}{t} = \langle e_J B\rangle$$

Therefore giving us

$$\partial_X\langle XB\rangle = e^J(e_J * \partial_X)\langle XB\rangle = e^J\langle e_J B\rangle \equiv P_X(B)$$

Other Key Results

Some other useful results are listed here (proofs are similar to that on the previous slide and are left as exercises):

$$\partial_X\langle XB\rangle = P_X(B)$$
$$\partial_X\langle \tilde{X}B\rangle = P_X(\tilde{B})$$
$$\partial_{\tilde{X}}\langle XB\rangle = P_{\tilde{X}}(\tilde{B}) = \widetilde{P_X(B)}$$
$$\partial_\psi\langle M\psi^{-1}\rangle = -\psi^{-1}\,P_\psi(M)\,\psi^{-1}$$

X, B, M, $\psi$ all general multivectors.

A Simple Example

Suppose we wish to fit a set of points $\{X_i\}$ to a plane $\Phi$ – where the $X_i$ and $\Phi$ are conformal representations (vector and 4-vector respectively). One possible way forward is to find the plane that minimises the sum of the squared perpendicular distances of the points from the plane.

[Figure: points $X_1, X_2, X_3$ and their reflections $X_1', X_2', X_3'$ in the plane $\Phi$.]

Plane fitting example, cont....

Recall that $\Phi X\Phi$ is the reflection of X in $\Phi$, so that $-X\cdot(\Phi X\Phi)$ is (up to a constant factor) the squared perpendicular distance of the point from the plane. Thus we could take as our cost function

$$S = -\sum_i X_i\cdot(\Phi X_i\Phi)$$

Now use the result $\partial_X\langle XB\rangle = P_X(B)$ to differentiate this expression wrt $\Phi$:

$$\partial_\Phi S = -\sum_i \partial_\Phi\langle X_i\Phi X_i\Phi\rangle = -\sum_i\left(\dot{\partial}_\Phi\langle X_i\dot{\Phi}X_i\Phi\rangle + \dot{\partial}_\Phi\langle X_i\Phi X_i\dot{\Phi}\rangle\right) = -2\sum_i P_\Phi(X_i\Phi X_i) = -2\sum_i X_i\Phi X_i$$

$\Longrightarrow$ solve (via linear algebra techniques) $\sum_i X_i\Phi X_i = 0$.

Differentiation cont....

Of course we can extend these ideas to other geometric fitting problems, and also to those without closed-form solutions, using gradient information to find solutions.

Another example is differentiating wrt rotors or bivectors. Suppose we wished to create a Kalman-filter-like system which tracked bivectors (not simply their components in some basis) – this might involve evaluating expressions such as

$$\partial_{B_n}\sum_{i=1}^{L}\left\langle v_n^i\,R_n\,u_{n-1}^i\,\tilde{R}_n\right\rangle$$

where $R_n = e^{-B_n}$, and the $u$'s and $v$'s are vectors.

Differentiation cont....

Using just the standard results given, and a page of algebra later (but one only needs to do it once!), we find that

$$\partial_{B_n}\langle v_n R_n u_{n-1}\tilde{R}_n\rangle = -G(B_n) + \frac{2\left\langle B_n G(B_n)\tilde{R}_n B_n R_n\right\rangle}{|B_n|^2} + \frac{\sin(|B_n|)}{|B_n|}\left[\frac{B_n G(B_n)\tilde{R}_n B_n}{|B_n|^2} + G(B_n)\tilde{R}_n\right]$$

where $G(B_n) = \tfrac{1}{2}\left[u_{n-1}\wedge(\tilde{R}_n v_n R_n)\right]R_n$.

Linear Algebra

A linear function f, mapping vectors to vectors, satisfies

$$f(\lambda a + \mu b) = \lambda f(a) + \mu f(b)$$

We can now extend f to act on any order blade by (outermorphism)

$$f(a_1\wedge a_2\wedge\ldots\wedge a_n) = f(a_1)\wedge f(a_2)\wedge\ldots\wedge f(a_n)$$

Note that the resulting blade has the same grade as the original blade. Thus, an important property is that these extended linear functions are grade preserving, ie

$$f(A_r) = \langle f(A_r)\rangle_r$$

Linear Algebra cont....

Matrices are also linear functions which map vectors to vectors. If F is the matrix corresponding to the linear function f, we obtain the elements of F via

$$F_{ij} = e_i\cdot f(e_j)$$

where $\{e_i\}$ is the basis in which the vectors the matrix acts on are written.

As with matrix multiplication, where we obtain a third matrix (linear function) by combining two other matrices (linear functions), ie $H = FG$, we can also write

$$h(a) = f[g(a)] = fg(a)$$

The product of linear functions is associative.
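These constructions are easy to experiment with numerically. Below is a minimal sketch using the clifford Python package (an assumed choice of tool, in the spirit of the code credited on the title slide; the matrix Fmat and the test vectors are arbitrary examples). It applies a matrix as a linear function f on vectors, checks that the outermorphism $f(a\wedge b) = f(a)\wedge f(b)$ is grade preserving, and recovers the matrix elements $F_{ij} = e_i\cdot f(e_j)$.

```python
import clifford as cf
import numpy as np

layout, blades = cf.Cl(3)                  # Euclidean G(3)
e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']
basis = [e1, e2, e3]

Fmat = np.array([[2.0, 1.0, 0.0],
                 [0.0, 3.0, 1.0],
                 [1.0, 0.0, 1.0]])         # an arbitrary example matrix

def f(a):
    """The linear function f: read off coordinates a_k = e_k . a, apply Fmat."""
    coords = np.array([(ek | a).value[0] for ek in basis])
    return sum(c * ek for c, ek in zip(Fmat @ coords, basis))

# Outermorphism on a 2-blade: f(a ^ b) := f(a) ^ f(b) is again grade 2
a, b = e1 + 2 * e2, e2 - e3
B = f(a) ^ f(b)
print(np.allclose(B.value, B(2).value))    # True: the grade is preserved

# Recover the matrix elements F_ij = e_i . f(e_j)
F_rec = np.array([[(ei | f(ej)).value[0] for ej in basis] for ei in basis])
print(np.allclose(F_rec, Fmat))            # True
```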
Linear Algebra

We now need to verify that $h(A) = f[g(A)] = fg(A)$ for any multivector A. First take a blade $a_1\wedge a_2\wedge\ldots\wedge a_r$ and note that

$$h(a_1\wedge a_2\wedge\ldots\wedge a_r) = fg(a_1)\wedge fg(a_2)\wedge\ldots\wedge fg(a_r) = f[g(a_1)\wedge g(a_2)\wedge\ldots\wedge g(a_r)] = f[g(a_1\wedge a_2\wedge\ldots\wedge a_r)]$$

from which we get the first result.

The Determinant

Consider the action of a linear function f on an orthogonal basis in 3d: the unit cube $I = e_1\wedge e_2\wedge e_3$ is transformed to a parallelepiped V,

$$V = f(e_1)\wedge f(e_2)\wedge f(e_3) = f(I)$$

The Determinant cont....

So, since f(I) is also a pseudoscalar, we see that if V is the magnitude of this pseudoscalar, then

$$f(I) = V\,I$$

Let us define the determinant of the linear function f as the volume scale factor V, so that

$$f(I) = \det(f)\,I$$

This enables us to find the form of the determinant explicitly (in terms of partial derivatives between coordinate frames) very easily in any dimension.

A Key Result

As before, let $h = fg$; then

$$h(I) = \det(h)\,I = f(g(I)) = f(\det(g)\,I) = \det(g)\,f(I) = \det(g)\det(f)\,I$$

So we have proved that

$$\det(fg) = \det(f)\det(g)$$

A very easy proof!

The Adjoint/Transpose of a Linear Function

For a matrix F and its transpose $F^{\mathsf T}$ we have (for any vectors a, b)

$$a^{\mathsf T}F\,b = b^{\mathsf T}F^{\mathsf T}a \quad\text{(a scalar)}$$

In GA we can write this in terms of linear functions as

$$a\cdot f(b) = \bar{f}(a)\cdot b$$

This reverse linear function, $\bar{f}$, is called the adjoint.

The Adjoint cont....

It is not hard to show that the adjoint extends to blades in the expected way:

$$\bar{f}(a_1\wedge a_2\wedge\ldots\wedge a_n) = \bar{f}(a_1)\wedge\bar{f}(a_2)\wedge\ldots\wedge\bar{f}(a_n)$$

See exercises to show that

$$a\cdot f(b\wedge c) = f[\bar{f}(a)\cdot(b\wedge c)]$$

This can now be generalised to

$$A_r\cdot\bar{f}(B_s) = \bar{f}[f(A_r)\cdot B_s]\quad r\leq s$$
$$f(A_r)\cdot B_s = f[A_r\cdot\bar{f}(B_s)]\quad r\geq s$$

The Inverse

$$A_r\cdot\bar{f}(B_s) = \bar{f}[f(A_r)\cdot B_s]\quad r\leq s$$

Now put $B_s = I$ in this formula:

$$A_r\cdot\bar{f}(I) = A_r\cdot\det(f)\,I = \det(f)\,(A_r I) = \bar{f}[f(A_r)\cdot I] = \bar{f}[f(A_r)\,I]$$

We can now write this as

$$A_r = \bar{f}[f(A_r)\,I]\,I^{-1}\,[\det(f)]^{-1}$$
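Continuing the sketch above (again assuming the clifford package; Fmat is an arbitrary invertible matrix standing in for some linear function f), the determinant can be read off from $f(I) = \det(f)\,I$ and compared against NumPy, and the inverse formula can be checked on a vector, for which only the adjoint extended to bivectors is needed.

```python
import clifford as cf
import numpy as np

layout, blades = cf.Cl(3)                  # Euclidean G(3)
e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']
basis = [e1, e2, e3]
I = e1 ^ e2 ^ e3                           # unit pseudoscalar; ~I = I^{-1} in G(3)

Fmat = np.array([[2.0, 1.0, 0.0],
                 [0.0, 3.0, 1.0],
                 [1.0, 0.0, 1.0]])         # an arbitrary invertible matrix

def apply_matrix(M, a):
    """A matrix acting as a linear function on grade-1 multivectors."""
    coords = np.array([(ek | a).value[0] for ek in basis])
    return sum(c * ek for c, ek in zip(M @ coords, basis))

f    = lambda a: apply_matrix(Fmat, a)     # f on vectors
fbar = lambda a: apply_matrix(Fmat.T, a)   # adjoint on vectors: a.f(b) = fbar(a).b

# det(f) read off from f(I) = f(e1) ^ f(e2) ^ f(e3) = det(f) I
det_f = ((f(e1) ^ f(e2) ^ f(e3)) * ~I).value[0]
print(det_f, np.linalg.det(Fmat))          # the two determinants agree

def fbar_biv(B):
    """The adjoint extended to bivectors: fbar(ei ^ ej) = fbar(ei) ^ fbar(ej)."""
    out = 0 * e1
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        eij = basis[i] ^ basis[j]
        coeff = -(B * eij).value[0]        # basis bivectors square to -1
        out = out + coeff * (fbar(basis[i]) ^ fbar(basis[j]))
    return out

# Inverse formula on a vector:  a = fbar[f(a) I] I^{-1} [det(f)]^{-1}
a = 1.0 * e1 + 2.0 * e2 - 1.0 * e3
a_rec = fbar_biv(f(a) * I) * ~I / det_f
print(a_rec)                               # recovers 1.0*e1 + 2.0*e2 - 1.0*e3
```

Note that nothing here inverts a matrix: the inverse is assembled entirely from the adjoint, the wedge product and the pseudoscalar.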