Geometry and Tensor Calculus

Appendix A
Geometry and Tensor Calculus

A.1 Literature

There is plenty of introductory literature on differential geometry and tensor calculus. It depends very much on scientific background which of the following references provides the most suitable starting point. The list is merely a suggestion and is not meant to give a complete overview.

• Y. Choquet-Bruhat, C. DeWitt-Morette, and M. Dillard-Bleick, Analysis, Manifolds, and Physics. Part I: Basics [36]: overview of many fundamental notions of analysis, differential calculus, the theory of differentiable manifolds, the theory of distributions, and Morse theory. An excellent summary of standard mathematical techniques of generic applicability. A good reminder for those who have seen some of it already; probably less suited for the uninitiated.

• M. Spivak, Calculus on Manifolds [264]: a condensed introduction to basic geometry. A "black piste" that starts off gently at secondary-school level. A good introduction to Spivak's more comprehensive work.

• M. Spivak, Differential Geometry [265]: a standard reference for mathematicians. Arguably a bit of an overkill; however, the first volume contains virtually all tensor calculus one is likely to run into in the image literature for the next decade or so.

• C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation [211]: an excellent course in geometry from a physicist's point of view. Intuitive and practical, even if you have not got the slightest interest in gravitation (in which case you may want to skip the second half of the book).

• J. J. Koenderink, Solid Shape [158]: an informal explanation of geometric concepts, helpful in developing a pictorial attitude. The emphasis is on understanding geometry. For this reason a useful companion to any tutorial on the subject.

• A. P. Lightman, W. H. Press, R. H. Price, and S. A. Teukolsky, Problem Book in Relativity and Gravitation [185]: ±500 problems on relativity and gravitation to practice with; a good way to acquire computational skill in geometry.

• M. Friedman, Foundations of Space-Time Theories: Relativistic Physics and Philosophy of Science [92]: a philosophical view on differential geometry and its relation to classical, special relativistic, and general relativistic spacetime theories. Technique is de-emphasized in favour of understanding the role of geometry in physics.

• Other classics: R. Abraham, J. E. Marsden, and T. Ratiu, Manifolds, Tensor Analysis, and Applications [1]; V. I. Arnold, Mathematical Methods of Classical Mechanics [6]; M. P. do Carmo, Riemannian Geometry [31] and Differential Geometry of Curves and Surfaces [30]; R. L. Bishop and S. I. Goldberg, Tensor Analysis on Manifolds [18]; H. Flanders, Differential Forms [72].

The next section contains a highly condensed summary of geometric concepts introduced in this book. The reader is referred to the text and to the aforementioned literature for detailed explanations.

A.2 Geometric Concepts

A.2.1 Preliminaries

Kronecker symbol or Kronecker tensor:
\[
\delta^{\alpha}_{\beta} =
\begin{cases}
1 & \text{if } \alpha = \beta, \\
0 & \text{if } \alpha \neq \beta.
\end{cases}
\tag{A.1}
\]

Generalised Kronecker or permutation symbols:
\[
\delta^{\mu_1 \ldots \mu_k}_{\nu_1 \ldots \nu_k} =
\det
\begin{pmatrix}
\delta^{\mu_1}_{\nu_1} & \cdots & \delta^{\mu_1}_{\nu_k} \\
\vdots & & \vdots \\
\delta^{\mu_k}_{\nu_1} & \cdots & \delta^{\mu_k}_{\nu_k}
\end{pmatrix}
=
\begin{cases}
+1 & \text{if } (\mu_1,\ldots,\mu_k) \text{ is an even permutation of } (\nu_1,\ldots,\nu_k), \\
-1 & \text{if } (\mu_1,\ldots,\mu_k) \text{ is an odd permutation of } (\nu_1,\ldots,\nu_k), \\
\phantom{+}0 & \text{otherwise.}
\end{cases}
\tag{A.2}
\]

Completely antisymmetric symbol in $n$ dimensions:
\[
\epsilon_{\mu_1 \ldots \mu_n} =
\begin{cases}
+1 & \text{if } (\mu_1,\ldots,\mu_n) \text{ is an even permutation of } (1,\ldots,n), \\
-1 & \text{if } (\mu_1,\ldots,\mu_n) \text{ is an odd permutation of } (1,\ldots,n), \\
\phantom{+}0 & \text{otherwise.}
\end{cases}
\tag{A.3}
\]
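These symbols lend themselves to a direct numerical check. The following NumPy sketch is my own illustration, not part of the book; the names perm_sign, levi_civita and gen_kronecker are assumptions of the sketch. It builds the arrays for (A.1)-(A.3) and verifies the standard contraction identity $\epsilon_{abc}\,\epsilon_{ade} = \delta^{de}_{bc}$ in three (Euclidean) dimensions, with np.einsum playing the role of the repeated-index summation.

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation p of (0, ..., len(p)-1), via inversion counting."""
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def levi_civita(n):
    """Completely antisymmetric symbol epsilon_{mu_1 ... mu_n}, cf. (A.3)."""
    eps = np.zeros((n,) * n)
    for p in itertools.permutations(range(n)):
        eps[p] = perm_sign(p)
    return eps

def gen_kronecker(k, n):
    """Generalised Kronecker symbol delta^{mu_1...mu_k}_{nu_1...nu_k}, cf. (A.2),
    as the determinant of the k-by-k matrix of ordinary Kronecker deltas (A.1)."""
    delta = np.zeros((n,) * (2 * k))
    for mu in itertools.product(range(n), repeat=k):
        for nu in itertools.product(range(n), repeat=k):
            m = [[1.0 if mu[i] == nu[j] else 0.0 for j in range(k)] for i in range(k)]
            delta[mu + nu] = np.linalg.det(np.array(m))
    return delta

n = 3
eps = levi_civita(n)
dk = gen_kronecker(2, n)

# Contraction over the repeated index a (Einstein summation via np.einsum):
# epsilon_{abc} epsilon_{ade} = delta^{de}_{bc} (all indices Euclidean here).
lhs = np.einsum('abc,ade->bcde', eps, eps)
rhs = np.transpose(dk, (2, 3, 0, 1))  # reorder delta[d, e, b, c] -> [b, c, d, e]
assert np.allclose(lhs, rhs)
```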
The tensor product of any two $\mathbb{R}$-valued operators: if
\[
X : \mathrm{Dom}\,X \rightarrow \mathbb{R} : x \mapsto X(x), \tag{A.4}
\]
\[
Y : \mathrm{Dom}\,Y \rightarrow \mathbb{R} : y \mapsto Y(y), \tag{A.5}
\]
then
\[
X \otimes Y : \mathrm{Dom}\,X \times \mathrm{Dom}\,Y \rightarrow \mathbb{R} : (x;y) \mapsto X \otimes Y(x;y) \tag{A.6}
\]
is the operator defined by
\[
X \otimes Y(x;y) \overset{\text{def}}{=} X(x)\cdot Y(y). \tag{A.7}
\]

Alternation or antisymmetrisation of an operator: if
\[
X : V \otimes \ldots \otimes V \rightarrow \mathbb{R} : (\vec{v}_1,\ldots,\vec{v}_k) \mapsto X(\vec{v}_1,\ldots,\vec{v}_k), \tag{A.8}
\]
then
\[
\mathrm{Alt}\,X : V \otimes \ldots \otimes V \rightarrow \mathbb{R} : (\vec{v}_1,\ldots,\vec{v}_k) \mapsto
\mathrm{Alt}\,X(\vec{v}_1,\ldots,\vec{v}_k)
\overset{\text{def}}{=} \frac{1}{k!} \sum_{\pi} \mathrm{sgn}\,\pi \; X(\vec{v}_{\pi[1]},\ldots,\vec{v}_{\pi[k]}), \tag{A.9}
\]
in which the sum extends over all permutations $\pi$ of $(1,\ldots,k)$, and in which $\mathrm{sgn}\,\pi$ denotes the sign of $\pi$.

Symmetrisation of an operator: if
\[
X : V \otimes \ldots \otimes V \rightarrow \mathbb{R} : (\vec{v}_1,\ldots,\vec{v}_k) \mapsto X(\vec{v}_1,\ldots,\vec{v}_k), \tag{A.10}
\]
then
\[
\mathrm{Sym}\,X(\vec{v}_1,\ldots,\vec{v}_k) \overset{\text{def}}{=} \frac{1}{k!} \sum_{\pi} X(\vec{v}_{\pi[1]},\ldots,\vec{v}_{\pi[k]}), \tag{A.11}
\]
in which the sum extends over all permutations $\pi$ of $(1,\ldots,k)$.

Index symmetrisation and antisymmetrisation:
\[
X_{(\mu_1 \ldots \mu_k)} = \frac{1}{k!} \sum_{\pi} X_{\mu_{\pi(1)} \ldots \mu_{\pi(k)}}, \tag{A.12}
\]
\[
X_{[\mu_1 \ldots \mu_k]} = \frac{1}{k!} \sum_{\pi} \mathrm{sgn}\,\pi \; X_{\mu_{\pi(1)} \ldots \mu_{\pi(k)}}. \tag{A.13}
\]

Einstein summation convention in $n$ dimensions:
\[
X_a Y^a \overset{\text{def}}{=} \sum_{a=1}^{n} X_a Y^a. \tag{A.14}
\]

Einstein summation convention for antisymmetric sums: if $X_{\mu_1 \ldots \mu_k}$ and $Y^{\mu_1 \ldots \mu_k}$ are antisymmetric, then
\[
X_{|\mu_1 \ldots \mu_k|}\, Y^{\mu_1 \ldots \mu_k} \overset{\text{def}}{=} \sum_{\mu_1 < \ldots < \mu_k} X_{\mu_1 \ldots \mu_k}\, Y^{\mu_1 \ldots \mu_k}. \tag{A.15}
\]

A.2.2 Vectors

A vector lives in a vector (or linear) space:
\[
\vec{v} \in V. \tag{A.16}
\]

Pictorial representation: a vector is an arrow with its tail attached to some base point (Figure A.1).

Vectors are rates: if $f$ is an $\mathbb{R}$-valued function, then
\[
\vec{v}[f] = \vec{v}(df) = df(\vec{v}) = v^a \partial_a f, \tag{A.17}
\]
i.e. the rate of change of $f$ along $\vec{v}$ (for the definition of $df$, see Sections A.2.3 and A.2.7). This also shows you how to define polynomials and power series of a vector, e.g.
\[
\exp \vec{v}\,[f] = \sum_{n=0}^{\infty} \frac{1}{n!}\, v^{a_1} \partial_{a_1} \cdots v^{a_n} \partial_{a_n} f. \tag{A.18}
\]
This is just Taylor's expansion at the (implicit) base point to which $\vec{v}$ is assumed to be attached (see "tangent space").

Leibniz's product rule:
\[
\vec{v}[fg] = g\,\vec{v}[f] + f\,\vec{v}[g]. \tag{A.19}
\]

Decomposition relative to basis vectors:
\[
\vec{v} = v^a \vec{e}_a. \tag{A.20}
\]

Standard coordinate basis:
\[
\vec{e}_a = \partial_a. \tag{A.21}
\]

Operational definition: a vector is a linear, $\mathbb{R}$-valued function of covectors (or 1-forms), producing the contraction of the vector and the covector:
\[
\vec{v} : V^* \rightarrow \mathbb{R} : \hat{\omega} \mapsto \vec{v}(\hat{\omega}). \tag{A.22}
\]

Often, one attaches a vector space $V$ to each base point $p$ of a manifold $M$ (Figure A.1):
\[
V \simeq TM_p. \tag{A.23}
\]
$TM_p$ is the tangent space at $p$, or the "fibre over $p$" of the manifold's tangent bundle (Figure A.1):
\[
TM = \bigcup_{p \in M} TM_p. \tag{A.24}
\]

Figure A.1: Vector, tangent space, tangent bundle.

The "vector $\vec{v}$ at base point $p$":
\[
\vec{v}_p \in TM_p. \tag{A.25}
\]
One can find out to which base point a vector is attached by means of the projection map:
\[
\pi : TM \rightarrow M : \vec{v}_p \mapsto p. \tag{A.26}
\]
The tangent space $TM_p$ projects, by definition, to $p$:
\[
\pi^{-1} : M \rightarrow TM : p \mapsto TM_p. \tag{A.27}
\]
A vector field is a smooth section of the tangent bundle:
\[
s : M \rightarrow TM : p \mapsto s(p) = \vec{v}_p. \tag{A.28}
\]
Any vector field projects to the identity map of the base manifold:
\[
\pi \circ s = 1_M. \tag{A.29}
\]

A.2.3 Covectors

A covector (or 1-form) lives in a covector space:
\[
\hat{\omega} \in V^*. \tag{A.30}
\]

Pictorial representation: a covector is an oriented stack of planes "attached" to some base point (a "planar wave disregarding phase"; see Figure A.2). Covectors are gradients.

Decomposition relative to basis covectors:
\[
\hat{\omega} = \omega_\alpha \hat{e}^\alpha. \tag{A.31}
\]

Standard coordinate basis:
\[
\hat{e}^\alpha = dx^\alpha. \tag{A.32}
\]

Figure A.2: Covector.

Operational definition: a covector is a linear, $\mathbb{R}$-valued function of vectors, producing the contraction of the covector and the vector:
\[
\hat{\omega} : V \rightarrow \mathbb{R} : \vec{v} \mapsto \hat{\omega}(\vec{v}). \tag{A.33}
\]
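To make the operational definitions concrete, here is a small numerical sketch of my own (not from the book; the helper names partial_deriv and apply_vector are assumptions): a vector acts on functions as the directional derivative of (A.17), obeys Leibniz's rule (A.19), and decomposes over the coordinate basis (A.20)-(A.21); the differential $df$, with components $\partial_a f$, is the covector whose contraction with $\vec{v}$ returns $\vec{v}[f]$, cf. (A.31)-(A.33).

```python
import numpy as np

H = 1e-6  # step size for the finite-difference stand-in for the partial derivative

def partial_deriv(f, a, x):
    """Numerical coordinate derivative (d_a f)(x): the basis vector e_a = d_a acting on f."""
    e = np.zeros_like(x)
    e[a] = H
    return (f(x + e) - f(x - e)) / (2.0 * H)

def apply_vector(v, f, x):
    """v[f](x) = v^a (d_a f)(x), cf. (A.17) with the decomposition (A.20)-(A.21)."""
    return sum(v[a] * partial_deriv(f, a, x) for a in range(len(x)))

f = lambda x: x[0] ** 2 * x[1]        # an R-valued test function on R^2
g = lambda x: np.sin(x[1])

p = np.array([1.0, 2.0])              # base point the vector is attached to
v = np.array([0.5, -1.0])             # components v^a in the coordinate basis

# (A.17): rate of change of f along v; exact value 0.5*(2*1*2) + (-1)*(1**2) = 1.0
print(apply_vector(v, f, p))

# (A.19): Leibniz's product rule v[fg] = g v[f] + f v[g]
fg = lambda x: f(x) * g(x)
assert abs(apply_vector(v, fg, p)
           - (g(p) * apply_vector(v, f, p) + f(p) * apply_vector(v, g, p))) < 1e-4

# (A.31)-(A.33): the differential df is the covector with components (d_a f);
# its contraction with v reproduces v[f].
df_at_p = np.array([partial_deriv(f, a, p) for a in range(2)])
assert abs(df_at_p @ v - apply_vector(v, f, p)) < 1e-9
```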
Often, one attaches a covector space $V^*$ to each base point $p$ of a manifold $M$:
\[
V^* \simeq T^*M_p. \tag{A.34}
\]
$T^*M_p$ is the cotangent space at $p$, or the fibre over $p$ of the manifold's cotangent bundle:
\[
T^*M = \bigcup_{p \in M} T^*M_p. \tag{A.35}
\]
The "covector $\hat{\omega}$ at base point $p$":
\[
\hat{\omega}_p \in T^*M_p. \tag{A.36}
\]
One can find out to which base point a covector is attached by means of the projection map:
\[
\pi' : T^*M \rightarrow M : \hat{\omega}_p \mapsto p. \tag{A.37}
\]
The cotangent space $T^*M_p$ projects, by definition, to $p$:
\[
\pi'^{-1} : M \rightarrow T^*M : p \mapsto T^*M_p. \tag{A.38}
\]
A covector field is a smooth section of the cotangent bundle:
\[
s' : M \rightarrow T^*M : p \mapsto s'(p) = \hat{\omega}_p. \tag{A.39}
\]
Any covector field projects to the identity map of the base manifold:
\[
\pi' \circ s' = 1_M. \tag{A.40}
\]

A.2.4 Dual Bases

Dual bases (no metric required!):
\[
\hat{e}^\alpha(\vec{e}_\beta) = \vec{e}_\beta(\hat{e}^\alpha) = \delta^\alpha_\beta. \tag{A.41}
\]

Figure A.3: The gauge figure.

A.2.5 Riemannian Metric

Riemannian metric tensor:
\[
G : V \times V \rightarrow \mathbb{R} : (\vec{v},\vec{w}) \mapsto G(\vec{v},\vec{w}). \tag{A.42}
\]
A Riemannian metric is a symmetric, bilinear, positive definite, $\mathbb{R}$-valued mapping of two vectors:
\[
\begin{array}{ll}
\text{linearity:} & G(\lambda \vec{u} + \mu \vec{v}, \vec{w}) = \lambda\, G(\vec{u},\vec{w}) + \mu\, G(\vec{v},\vec{w}), \\
\text{symmetry:} & G(\vec{u},\vec{v}) = G(\vec{v},\vec{u}), \\
\text{positivity:} & G(\vec{u},\vec{u}) > 0 \quad \forall\, \vec{u} \neq \vec{0}.
\end{array}
\tag{A.43}
\]
The scalar product is another name for the metric:
\[
\vec{v} \cdot \vec{w} \overset{\text{def}}{=} G(\vec{v},\vec{w}). \tag{A.44}
\]
Pictorial representation: a Riemannian metric is a quadric "centred" at some base point, the gauge figure: Figure A.3.

The sharp operator converts vectors into covectors:
\[
\sharp : V \rightarrow V^* : \vec{v} \mapsto \sharp\vec{v} \overset{\text{def}}{=} G(\vec{v},\,\cdot\,) \overset{\text{def}}{=} v_a \hat{e}^a. \tag{A.45}
\]
The $\sharp$-operator is invertible, $\sharp^{-1} \equiv \flat$ (the flat operator):
\[
\flat : V^* \rightarrow V : \hat{v} \mapsto \flat\hat{v} \overset{\text{def}}{=} H(\hat{v},\,\cdot\,) \overset{\text{def}}{=} v^a \vec{e}_a, \tag{A.46}
\]
with
\[
H(\sharp\vec{v}, \sharp\vec{w}) \overset{\text{def}}{=} G(\vec{v},\vec{w}). \tag{A.47}
\]
In particular, $\sharp$ converts a basis vector into a corresponding covector:
\[
\sharp\vec{e}_\alpha = g_{\alpha\beta}\, \hat{e}^\beta. \tag{A.48}
\]
Similarly, $\flat$ converts a basis covector into a corresponding vector:
\[
\flat\hat{e}^\alpha = g^{\alpha\beta}\, \vec{e}_\beta. \tag{A.49}
\]
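As a concrete illustration of the metric and the sharp/flat operators in the convention used here (sharp lowers indices, flat raises them), the following NumPy sketch is my own; the particular metric components are an arbitrary symmetric positive definite choice, and the names sharp and flat are assumptions of the sketch.

```python
import numpy as np

# an arbitrary symmetric, positive definite choice of metric components g_{alpha beta}
g = np.array([[2.0, 0.5],
              [0.5, 1.0]])
g_inv = np.linalg.inv(g)   # components g^{alpha beta}; the metric H on covectors, cf. (A.47)

v = np.array([1.0, -2.0])  # vector components v^alpha
w = np.array([0.5,  3.0])  # vector components w^alpha

def sharp(vec):
    """Index lowering: (sharp v)_alpha = g_{alpha beta} v^beta, cf. (A.45), (A.48)."""
    return g @ vec

def flat(cov):
    """Index raising: (flat omega)^alpha = g^{alpha beta} omega_beta, cf. (A.46), (A.49)."""
    return g_inv @ cov

# flat inverts sharp: flat(sharp(v)) = v
assert np.allclose(flat(sharp(v)), v)

# the scalar product (A.44): v . w = G(v, w) = v^alpha g_{alpha beta} w^beta = (sharp v)(w)
assert np.isclose(v @ g @ w, sharp(v) @ w)

# H(sharp v, sharp w) = G(v, w), cf. (A.47)
assert np.isclose(sharp(v) @ g_inv @ sharp(w), v @ g @ w)
```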