
Differentiation Draft V1.1 © MIT 2014. From Math, Numerics, & Programming (for Mechanical Engineers) ... in a Nutshell by AT Patera and M Yano. All rights reserved.

1 Preamble

In engineering analysis we must often evaluate the derivatives of functions in order to predict or optimize the performance of a system. In some cases the derivative directly serves as the quantity of interest; for example, the derivative of the density is related to the frequency of internal gravity waves in the ocean. In other cases, the derivative, or gradient, serves to guide a design process towards better performance. Unfortunately, in most cases we are not provided with a closed-form representation of the function, and instead we must base our prediction on limited (experimental) observations or (computational) evaluations. In this nutshell we answer the following question: if we can probe a function at some few input values near an input value of interest, how can we estimate the (say, first and second) derivatives of the function at the input value of interest?

In this nutshell we consider one approach to this approximation problem: the venerable "finite difference" method. We introduce the notion of finite difference approximation, and we present several important numerical differentiation schemes: approximation of the first derivative of a function by forward, backward, and centered difference formulas; approximation of the second derivative of a function by a centered difference formula. We derive error bounds for these finite difference approximations based on Taylor series expansions, and we discuss the issues of resolution and smoothness. Finally, we discuss the general concept of finite-precision arithmetic, and demonstrate the effect of truncation and round-off errors on convergence (and indeed, divergence).

Prerequisites: univariate differential and integral calculus; basic principles of numerical methods: discretization, convergence, convergence order, resolution, operation counts and FLOPs, asymptotic and big-O estimates; interpolation: formulation, error analysis, and computational considerations.

© The Authors. License: Creative Commons Attribution-Noncommercial-Share Alike 3.0 (CC BY-NC-SA 3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and MIT OpenCourseWare source are credited; the use is non-commercial; and the CC BY-NC-SA license is retained. See also http://ocw.mit.edu/terms/.

2 Motivation: Example

Let us consider a concrete example of numerical differentiation. Oscillations in a statically stable ocean environment are characterized by the Brunt-Väisälä frequency,
\[
  N = \sqrt{\frac{g}{\rho}\frac{\partial \rho}{\partial x}},
\]
where $\rho$ is the density of the ocean water, $g$ is the gravitational acceleration, and $x$ is the depth measured from the free surface. (Depth shall always refer to vertical distance below the free surface.) We wish to approximate the frequency at a particular depth of interest, say $d = 200$ m. To accomplish our task, we are given Table 1, which provides the density of ocean water at seven different depths. (The density of ocean water depends on the salinity and temperature, which in turn depend on depth.) How can we exploit the data of Table 1 to approximate the derivative $\partial \rho/\partial x$ and subsequently the Brunt-Väisälä frequency at the depth of interest, $d = 200$ m? Can we say anything about the accuracy of this estimate? What is the cost associated with the computation of this frequency estimate?
The material provided in this nutshell will help you answer these questions.

  depth (m)          0         100       200       300       400       500       600
  density (kg/m³)    1024.985  1025.375  1025.815  1026.271  1026.707  1027.086  1027.375

Table 1: Variation in the density of ocean water with depth.¹

¹ This table is derived from the representative ocean density profile provided in Windows to the Universe.

3 Forward Difference

We first consider arguably the simplest form of numerical differentiation: the forward difference formula. Here, we wish to approximate the derivative $f'(\bar{x}_0)$ based on function values $f(\bar{x}_0)$ and $f(\bar{x}_1)$ for $\bar{x}_0 < \bar{x}_1$. To approximate the derivative, we first recall the definition of the derivative:
\[
  f'(\bar{x}_0) \equiv \lim_{\delta \to 0} \frac{f(\bar{x}_0 + \delta) - f(\bar{x}_0)}{\delta}.
\]
We now approximate this derivative by replacing the limit $\delta \to 0$ — a differential — with a finite difference, $h \equiv \bar{x}_1 - \bar{x}_0 > 0$; the resulting approximation is the forward difference formula,
\[
  f'_h(\bar{x}_0) = \frac{f(\bar{x}_0 + h) - f(\bar{x}_0)}{h} = \frac{f(\bar{x}_1) - f(\bar{x}_0)}{h}. \tag{1}
\]
A geometric interpretation of the forward difference formula is provided in Figure 1. In general, a finite difference formula approximates the derivative of a function at a point of interest $\bar{x}_0$ based on the function value at $\bar{x}_0$ and selected additional points in the neighborhood of $\bar{x}_0$. The set of points which serve to evaluate the derivative is denoted the finite difference stencil or simply the stencil; the stencil for the forward difference formula is $\{\bar{x}_0, \bar{x}_1\}$.

As a simple example, we apply the forward difference formula to the ocean density example. In particular, we wish to estimate the Brunt-Väisälä frequency at the depth of interest $d = 200$ m.

[Figure 1: Illustration of the forward difference formula.]

To approximate the derivative $\partial \rho/\partial x$ at the depth of interest $d \equiv 200$ m we first choose the stencil $\{\bar{x}_0 \equiv 200~\mathrm{m},\ \bar{x}_1 \equiv 300~\mathrm{m}\}$; the stencil must be a subset of the points available in Table 1. We then enlist the forward difference formula (1) to obtain
\[
  f'_h(d = 200) = \frac{f(\bar{x}_1 \equiv 300) - f(\bar{x}_0 \equiv 200)}{300 - 200} = \frac{1026.271 - 1025.815}{100} = 0.456 \times 10^{-2}~\mathrm{kg/m^4}.
\]
The approximation to the Brunt-Väisälä frequency at $d = 200$ m is then given by
\[
  N = \sqrt{\frac{g}{\rho}\frac{\partial \rho}{\partial x}} \approx \sqrt{\frac{g}{\rho(d \equiv 200)}\, f'_h(d \equiv 200)} = \sqrt{\frac{9.81}{1025.815} \cdot 0.456 \times 10^{-2}} = 6.60 \times 10^{-3}~\mathrm{s^{-1}}. \tag{2}
\]
Note we evaluate the density in the denominator at the depth of interest, here $d = 200$ m.

CYAWTP 1. Invoke the forward difference formula and the (depth, density) data of Table 1 to estimate the Brunt-Väisälä frequency at depths of interest $d = 300$ m and $d = 500$ m, respectively.

We now proceed with the error analysis: we wish to understand how well the forward difference estimate $f'_h(\bar{x}_0)$ approximates the exact derivative $f'(\bar{x}_0)$. Towards this end, we first recall the Taylor series expansion with explicit remainder,
\[
  f(\bar{x}_1) = f(\bar{x}_0) + f'(\bar{x}_0)\, h + \tfrac{1}{2} f''(\xi)\, h^2;
\]
recall that the Mean-Value Theorem ensures the existence of a $\xi \in [\bar{x}_0, \bar{x}_1]$ for which equality holds (presuming $f$ is twice differentiable). We now compare the forward difference approximation and the exact derivative at $\bar{x}_0$,
\[
  |f'(\bar{x}_0) - f'_h(\bar{x}_0)|
  = \left| f'(\bar{x}_0) - \frac{f(\bar{x}_1) - f(\bar{x}_0)}{h} \right|
  = \left| f'(\bar{x}_0) - \frac{1}{h}\left( f'(\bar{x}_0)\, h + \tfrac{1}{2} f''(\xi)\, h^2 \right) \right|
  = \left| \tfrac{1}{2} f''(\xi)\, h \right|
  \le \tfrac{1}{2} \max_{x \in [\bar{x}_0, \bar{x}_1]} |f''(x)|\, h. \tag{3}
\]
The accuracy of the forward difference formula depends on two factors: the discretization resolution parameter $h$ — the shorter the segment, the better the estimate; and the second derivative of the function $f$ — the slower the rate of change of the derivative with $x$, the better the estimate.
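The bound (3) is easy to check numerically. The short Python sketch below is our own illustration, not part of the nutshell (the variable names are ours): it applies the forward difference formula (1) to $f(x) = \exp(x)$ at $\bar{x}_0 = 0$, where the exact derivative is $f'(0) = 1$, for a sequence of halved spacings $h$, and reports the error at each step.

```python
import math

# f(x) = exp(x), whose exact derivative at x0 = 0 is f'(0) = exp(0) = 1.
f = math.exp
df_exact = 1.0
x0 = 0.0

h = 0.1
prev_err = None
for _ in range(6):
    fd = (f(x0 + h) - f(x0)) / h        # forward difference formula (1)
    err = abs(df_exact - fd)
    ratio = prev_err / err if prev_err is not None else float("nan")
    print(f"h = {h:9.6f}   error = {err:.3e}   error ratio = {ratio:.2f}")
    prev_err = err
    h = h / 2.0
# Halving h roughly halves the error, consistent with the O(h) bound (3).
```

The printed error ratios tend towards 2: each halving of $h$ halves the error, as the bound (3) predicts.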
It is not surprising that $f'_h(\bar{x}_0)$ converges to $f'(\bar{x}_0)$, since by construction $f'(\bar{x}_0)$ is the limit of $f'_h(\bar{x}_0)$ as $h$ tends to zero. However, we also observe that the forward difference formula is first-order accurate: the order of the scheme is $p = 1$.

Numerical Experiment 2. Invoke the differentiation GUI for forward difference approximation of the derivative $f'(x = 0)$ for the function $f(x) = \exp(x)$: visualize the geometric interpretation of the finite difference approximation; confirm empirically the convergence rate of the forward difference formula.

CYAWTP 3. Sketch the logarithmic convergence curve, $\log(|f'(\bar{x}_0) - f'_h(\bar{x}_0)|)$ as a function of $\log(1/h)$, and the associated logarithmic convergence asymptote, for forward difference approximation of the function $f = \sin(10\pi x)$ at $\bar{x}_0 = 0$. Estimate very roughly the value of the discretization parameter $h$ at which the asymptotic regime is first apparent.

Numerical Experiment 4. Invoke the differentiation GUI to confirm your sketch and estimates of CYAWTP 3.

We now briefly comment on the cost associated with the forward difference formula. Given the pairs $(\bar{x}_0, f(\bar{x}_0))$ and $(\bar{x}_1, f(\bar{x}_1))$, the evaluation of the forward difference formula requires two subtractions ($\bar{x}_1 - \bar{x}_0$ and $f(\bar{x}_1) - f(\bar{x}_0)$) and one division ($(f(\bar{x}_1) - f(\bar{x}_0))/(\bar{x}_1 - \bar{x}_0)$); hence the evaluation of the derivative at $\bar{x}_0$ requires, in total, 3 FLOPs.

4 Generalization: Other Finite Difference Formulas

We now introduce a more general procedure to derive finite difference formulas. The key ingredient, just as in our development of quadrature rules, is interpolation. We first choose a stencil of input values; we next construct an interpolant of the function; we then evaluate the derivative of the interpolant at the point of interest. For the interpolant we adopt here the "point-based formulation" of interpolation: "what" shall be a low-order polynomial; "where" shall be the points of our stencil. (As always, the "what" and "where" must yield a unique interpolant.) In the context of differentiation, rather than denote our interpolation points $\bar{x}_1, \bar{x}_2, \ldots$, we instead shift the indices such that $\bar{x}_0$ shall always be the input point of interest at which we wish to evaluate the derivative.
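As a concrete illustration of this interpolate-then-differentiate procedure, the short symbolic sketch below (our own addition, which assumes the SymPy library; symbol names such as xbar0 are ours) constructs the unique linear interpolant on the two-point stencil $\{\bar{x}_0, \bar{x}_1 = \bar{x}_0 + h\}$ and differentiates it at $\bar{x}_0$, recovering exactly the forward difference formula (1).

```python
import sympy as sp

# Symbols: xbar0 is the point of interest, h > 0 the spacing, x the free variable.
x = sp.Symbol("x")
h = sp.Symbol("h", positive=True)
xbar0 = sp.Symbol("xbar0")
f = sp.Function("f")

# Stencil {xbar0, xbar1 = xbar0 + h}: the unique linear (point-based) interpolant
# through the two stencil points, written in Newton/Lagrange form.
xbar1 = xbar0 + h
If = f(xbar0) + (f(xbar1) - f(xbar0)) / (xbar1 - xbar0) * (x - xbar0)

# Differentiate the interpolant and evaluate at the point of interest xbar0.
dIf_at_xbar0 = sp.simplify(sp.diff(If, x).subs(x, xbar0))
print(dIf_at_xbar0)   # (f(xbar0 + h) - f(xbar0))/h, up to term ordering: formula (1)
```

Carrying out the same three steps with a quadratic interpolant on a three-point stencil yields, in the same fashion, the centered difference formulas mentioned in the Preamble.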