Improvements of Generalized Finite Difference Method and Comparison with Other Meshless Method

Total Pages: 16

File Type: pdf, Size: 1020 KB

Improvements of generalized finite difference method and comparison with other meshless method

L. Gavete a,*, M.L. Gavete b, J.J. Benito c

a Escuela Tecnica Superior de Ingenieros de Minas, Universidad Politecnica, c/Rios Rosas 21, 28003 Madrid, Spain
b Facultad de Farmacia, Universidad Complutense, Avda Complutense s/n, 28040 Madrid, Spain
c Escuela Tecnica Superior de Ingenieros Industriales, U.N.E.D., Apdo. Correos 60149, 28080 Madrid, Spain

Applied Mathematical Modelling 27 (2003) 831–847, www.elsevier.com/locate/apm
Received 3 December 2001; received in revised form 29 January 2003; accepted 19 February 2003

Abstract

One of the most universal and effective methods, in wide use today, for approximately solving equations of mathematical physics is the finite difference (FD) method. An evolution of the FD method has been the development of the generalized finite difference (GFD) method, which can be applied over general or irregular clouds of points. The main drawback of the GFD method is the possibility of obtaining ill-conditioned stars of nodes. In this paper a procedure is given that can easily assure the quality of numerical results by obtaining the residual at each point. The possibility of employing the GFD method over adaptive clouds of points, progressively increasing the number of nodes, is explored, and a condition is given that must be fulfilled to employ the GFD method more efficiently. The GFD method is also compared with another meshless method, the so-called element free Galerkin (EFG) method. The EFG method with linear approximation and penalty functions to treat the essential boundary condition is used in this paper. Both methods are compared for solving the Laplace equation. © 2003 Elsevier Inc. All rights reserved.
Keywords: Meshless; Generalized finite difference method; Element free Galerkin method; Singularities

* Corresponding author. Tel.: +34-913-366-466; fax: +34-913-363-230. E-mail addresses: [email protected] (L. Gavete), [email protected] (J.J. Benito). doi:10.1016/S0307-904X(03)00091-X

1. Introduction

The objective of meshless methods is to eliminate, at least in part, the structure of elements used in the finite element method (FEM) by constructing the approximation entirely in terms of nodes. Although meshless methods originated about twenty years ago, the research effort devoted to them until recently has been very small. One of the starting points is the smooth particle hydrodynamics method [1], used for modelling astrophysical phenomena without boundaries, such as exploding stars and dust clouds. Another path in the evolution of meshless methods has been the development of the generalized finite difference (GFD) method, also called the meshless finite difference (FD) method. The GFD method belongs to the so-named meshless methods (MM). Among the early contributors to the former were Perrone and Kao [2]. The bases of the GFD method were published in the early seventies. Jensen [3] was the first to introduce a fully arbitrary mesh: he considered Taylor series expansions interpolated on six-node stars in order to derive FD formulae approximating derivatives of up to second order. While he applied that approach to the solution of boundary value problems given in the local formulation, Nay and Utku [4] extended it to the analysis of problems posed in the variational (energy) form.
However, these very early GFD formulations were later substantially improved and extended by many other authors. The most robust of these methods was developed by Liszka and Orkisz [5,6], using moving least squares (MLS) interpolation [7], and the most advanced version was given by Orkisz [8]. The explicit FD formulae used in the GFD method, as well as the influence of the main parameters involved, were studied by Benito et al. [9].

Other MM have been proposed. The diffuse element method, developed by Nayroles et al. [10], was a new way of solving partial differential equations. Belytschko et al. [11] developed an alternative implementation using MLS approximation; they called their approach the element free Galerkin (EFG) method. The use of a constrained variational principle with a penalty function to alleviate the treatment of Dirichlet boundary conditions in the EFG method has been proposed [12,13]. Liu et al. [14] have used a different kind of gridless multiple-scale method based on reproducing kernels and wavelet analysis. Oñate et al. [15] focused on the application to fluid flow problems with a standard point collocation technique. Duarte and Oden [16] on the one hand, and Babuška and Melenk [17] on the other, have shown how these so-called methods without mesh can be based on the partition of unity. All these methods can be considered as MM.

This paper is organized as follows. Firstly, in Section 2 the GFD method is briefly described. Secondly, in Section 3 several examples in the presence of singularities are given and the performance of the GFD method is analyzed using fixed or variable radius of influence for the weighting functions; also in Section 3 the possibility of employing the GFD method over adaptive clouds of points is explored. Thirdly, the GFD method is compared to the EFG method in Section 4. Finally, in Section 5, some conclusions are drawn.

2. Generalized finite difference method

For any sufficiently differentiable function f(x, y) in a given domain, the Taylor series expansion around a point P(x_0, y_0) may be expressed in the form

$$f = f_0 + h\frac{\partial f_0}{\partial x} + k\frac{\partial f_0}{\partial y} + \frac{h^2}{2}\frac{\partial^2 f_0}{\partial x^2} + \frac{k^2}{2}\frac{\partial^2 f_0}{\partial y^2} + hk\,\frac{\partial^2 f_0}{\partial x\,\partial y} + O(\rho^3) \quad (1)$$

where f = f(x, y), f_0 = f(x_0, y_0), h = x − x_0, k = y − y_0 and \rho = \sqrt{h^2 + k^2}.

Eq. (1) and all following formulae will be limited to second-order approximations and two-dimensional problems; in any case, the extension to other problems is obvious. We consider the norm B

$$B = \sum_{i=1}^{N}\left[\left(f_0 - f_i + h_i\frac{\partial f_0}{\partial x} + k_i\frac{\partial f_0}{\partial y} + \frac{h_i^2}{2}\frac{\partial^2 f_0}{\partial x^2} + \frac{k_i^2}{2}\frac{\partial^2 f_0}{\partial y^2} + h_i k_i\frac{\partial^2 f_0}{\partial x\,\partial y}\right) w_i\right]^2 \quad (2)$$

where f_i = f(x_i, y_i), f_0 = f(x_0, y_0), h_i = x_i − x_0, k_i = y_i − y_0, and w_i is a weighting function with compact support. The solution may be obtained by minimizing the norm B, writing

$$\frac{\partial B}{\partial\{Df\}} = 0 \quad (3)$$

$$\{Df\}^{T} = \left\{\frac{\partial f_0}{\partial x},\ \frac{\partial f_0}{\partial y},\ \frac{\partial^2 f_0}{\partial x^2},\ \frac{\partial^2 f_0}{\partial y^2},\ \frac{\partial^2 f_0}{\partial x\,\partial y}\right\} \quad (4)$$

and we come to a set of five equations with five unknowns for each node. For example, the first equation is

$$f_0\sum_{i=1}^{N} w_i^2 h_i - \sum_{i=1}^{N} f_i w_i^2 h_i + \frac{\partial f_0}{\partial x}\sum_{i=1}^{N} w_i^2 h_i^2 + \frac{\partial f_0}{\partial y}\sum_{i=1}^{N} w_i^2 h_i k_i + \frac{\partial^2 f_0}{\partial x^2}\sum_{i=1}^{N} w_i^2 \frac{h_i^3}{2} + \frac{\partial^2 f_0}{\partial y^2}\sum_{i=1}^{N} w_i^2 \frac{k_i^2 h_i}{2} + \frac{\partial^2 f_0}{\partial x\,\partial y}\sum_{i=1}^{N} w_i^2 h_i^2 k_i = 0 \quad (5)$$

This Eq. (5) and all following equations give us the following system of equations (with \sum \equiv \sum_{i=1}^{N}):

$$\begin{pmatrix}
\sum w_i^2 h_i^2 & \sum w_i^2 h_i k_i & \sum w_i^2 \frac{h_i^3}{2} & \sum w_i^2 \frac{k_i^2 h_i}{2} & \sum w_i^2 h_i^2 k_i \\
\sum w_i^2 h_i k_i & \sum w_i^2 k_i^2 & \sum w_i^2 \frac{h_i^2 k_i}{2} & \sum w_i^2 \frac{k_i^3}{2} & \sum w_i^2 h_i k_i^2 \\
\sum w_i^2 \frac{h_i^3}{2} & \sum w_i^2 \frac{k_i h_i^2}{2} & \sum w_i^2 \frac{h_i^4}{4} & \sum w_i^2 \frac{h_i^2 k_i^2}{4} & \sum w_i^2 \frac{h_i^3 k_i}{2} \\
\sum w_i^2 \frac{h_i k_i^2}{2} & \sum w_i^2 \frac{k_i^3}{2} & \sum w_i^2 \frac{h_i^2 k_i^2}{4} & \sum w_i^2 \frac{k_i^4}{4} & \sum w_i^2 \frac{h_i k_i^3}{2} \\
\sum w_i^2 h_i^2 k_i & \sum w_i^2 h_i k_i^2 & \sum w_i^2 \frac{h_i^3 k_i}{2} & \sum w_i^2 \frac{h_i k_i^3}{2} & \sum w_i^2 h_i^2 k_i^2
\end{pmatrix}
\begin{Bmatrix}
\frac{\partial f_0}{\partial x} \\ \frac{\partial f_0}{\partial y} \\ \frac{\partial^2 f_0}{\partial x^2} \\ \frac{\partial^2 f_0}{\partial y^2} \\ \frac{\partial^2 f_0}{\partial x\,\partial y}
\end{Bmatrix}
=
\begin{Bmatrix}
-f_0\sum w_i^2 h_i + \sum f_i w_i^2 h_i \\
-f_0\sum w_i^2 k_i + \sum f_i w_i^2 k_i \\
-f_0\sum w_i^2 \frac{h_i^2}{2} + \sum f_i w_i^2 \frac{h_i^2}{2} \\
-f_0\sum w_i^2 \frac{k_i^2}{2} + \sum f_i w_i^2 \frac{k_i^2}{2} \\
-f_0\sum w_i^2 h_i k_i + \sum f_i w_i^2 h_i k_i
\end{Bmatrix} \quad (6)$$

This system of linear equations (6) in condensed notation is given by

$$A_P\, Df_P = b_P \quad (7)$$

where A_P is a 5 × 5 matrix and the vector Df_P is 5 × 1.

If we are interested in solving Poisson's equation, we can calculate \partial^2 f_0/\partial x^2 and \partial^2 f_0/\partial y^2 at each node according to (6) and then

$$\frac{\partial^2 f_0}{\partial x^2} + \frac{\partial^2 f_0}{\partial y^2} - g(x_0, y_0) = 0 \quad (8)$$

giving us a linear system of equations for the considered domain.
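The least-squares star solve of Eqs. (2)–(7) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the star geometry, the potential-type weight w_i = 1/d_i^3, and the test function are all assumptions made here for the demo. Because the expansion (1) is exact through second order, the recovered derivatives of a quadratic should match the analytical values.

```python
import numpy as np

def gfd_derivatives(x0, y0, xs, ys, fs, f0, w):
    # Build and solve the 5x5 system (6)-(7); unknowns are
    # {df/dx, df/dy, d2f/dx2, d2f/dy2, d2f/dxdy} at (x0, y0).
    h = xs - x0
    k = ys - y0
    # Columns of the basis, matching the Taylor terms inside (2).
    P = np.column_stack([h, k, h**2 / 2, k**2 / 2, h * k])
    W2 = w**2
    A = P.T @ (W2[:, None] * P)      # A_P in (7)
    b = P.T @ (W2 * (fs - f0))       # b_P in (7)
    return np.linalg.solve(A, b)

# A star of 8 irregular nodes around (0, 0) and a quadratic test function.
rng = np.random.default_rng(0)
xs = rng.uniform(-0.1, 0.1, 8)
ys = rng.uniform(-0.1, 0.1, 8)
f = lambda x, y: 1 + 2*x - y + 3*x**2 + 0.5*y**2 + 4*x*y
dist = np.sqrt(xs**2 + ys**2)
w = 1.0 / dist**3                    # one common weighting choice (assumed)
Df = gfd_derivatives(0.0, 0.0, xs, ys, f(xs, ys), f(0.0, 0.0), w)
print(Df)  # ≈ [2, -1, 6, 1, 4]: exact derivatives of the quadratic
```

Exactness for quadratics holds for any positive weights, since the five Taylor terms span the residual; the weights only matter once truncation error enters.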
Recommended publications
  • Linear Approximation of the Derivative
Section 3.9 - Linear Approximation and the Derivative

Idea: When we zoom in on a smooth function and its tangent line, they are very close together. So for a function that is hard to evaluate, we can use the tangent line to approximate values of the function.

Example: Use a tangent line to approximate ∛8.1 without a calculator.

Let f(x) = ∛x = x^(1/3). We know f(8) = ∛8 = 2. We'll use the tangent line at x = 8.

f′(x) = (1/3)x^(−2/3), so f′(8) = (1/3)(8)^(−2/3) = (1/3)·(1/4) = 1/12.

Our tangent line passes through (8, 2) and has slope 1/12:

y = (1/12)(x − 8) + 2.

To approximate f(8.1) we'll find the value of the tangent line at x = 8.1:

y = (1/12)(8.1 − 8) + 2 = (1/12)(0.1) + 2 = 1/120 + 2 = 241/120.

So ∛8.1 ≈ 241/120.

How accurate was this approximation? We'll use a calculator now: ∛8.1 ≈ 2.00829885. Taking the difference, 2.00829885 − 241/120 ≈ −3.4483 × 10⁻⁵, a pretty good approximation!

General Formulas for Linear Approximation

The tangent line to f(x) at x = a passes through the point (a, f(a)) and has slope f′(a), so its equation is

y = f′(a)(x − a) + f(a).

The tangent line is the best linear approximation to f(x) near x = a, so

f(x) ≈ f(a) + f′(a)(x − a);

this is called the local linear approximation of f(x) near x = a.
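The ∛8.1 example above is easy to check numerically; a minimal sketch:

```python
def linear_approx(f_a, df_a, a, x):
    """Tangent-line (local linear) approximation f(x) ≈ f(a) + f'(a)(x - a)."""
    return f_a + df_a * (x - a)

# f(x) = x**(1/3): f(8) = 2, f'(8) = 1/12.
approx = linear_approx(2.0, 1.0 / 12.0, 8.0, 8.1)
exact = 8.1 ** (1.0 / 3.0)
print(approx)          # 241/120 = 2.008333...
print(exact - approx)  # ≈ -3.45e-5, matching the excerpt
```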
  • Generalized Finite-Difference Schemes
Generalized Finite-Difference Schemes

By Blair Swartz* and Burton Wendroff**

Abstract. Finite-difference schemes for initial boundary-value problems for partial differential equations lead to systems of equations which must be solved at each time step. Other methods also lead to systems of equations. We call a method a generalized finite-difference scheme if the matrix of coefficients of the system is sparse. Galerkin's method, using a local basis, provides unconditionally stable, implicit generalized finite-difference schemes for a large class of linear and nonlinear problems. The equations can be generated by computer program. The schemes will, in general, be not more efficient than standard finite-difference schemes when such standard stable schemes exist. We exhibit a generalized finite-difference scheme for Burgers' equation and solve it with a step function for initial data.

1. Well-Posed Problems and Semibounded Operators. We consider a system of partial differential equations of the following form:

(1.1) ∂u/∂t = Au + f,

where u = (u₁(x, t), …, u_m(x, t)), f = (f₁(x, t), …, f_m(x, t)), and A is a matrix of partial differential operators in x = (x₁, …, x_n),

A = A(x, t, D) = Σ_i a_i D^i,  D^i = (∂/∂x₁)^{i₁} ⋯ (∂/∂x_n)^{i_n},  a_i(x, t) = a_{i₁…i_n}(x, t) = matrix.

Equation (1.1) is assumed to hold in some n-dimensional region Ω with boundary ∂Ω. An initial condition is imposed in Ω,

(1.2) u(x, 0) = u₀(x),  x ∈ Ω.

The boundary conditions are that there is a collection ℬ of operators B(x, D) such that

(1.3) B(x, D)u = 0,  x ∈ ∂Ω,  B ∈ ℬ.

We assume the operator A satisfies the following condition: there exists a scalar product ( , ) such that for all sufficiently smooth functions φ(x) which satisfy (1.3),

(1.4) 2 Re (Aφ, φ) ≤ C(φ, φ),  0 < t ≤ T,

where C is a constant independent of φ. An operator A satisfying (1.4) is called semibounded.
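For context on the Burgers problem the abstract mentions, here is a deliberately simple explicit upwind finite-difference baseline for the inviscid Burgers equation with step initial data. This is not the implicit Galerkin scheme of the paper; grid sizes, time step, and the CFL choice are assumptions made for the sketch.

```python
import numpy as np

# Explicit upwind scheme for u_t + u u_x = 0 with a step for initial data.
nx, nt = 200, 80
dx, dt = 1.0 / nx, 0.004          # dt chosen so max|u| * dt/dx = 0.8 <= 1 (CFL)
x = np.linspace(0.0, 1.0, nx)
u = np.where(x < 0.5, 1.0, 0.0)   # step initial data

for _ in range(nt):
    # u >= 0 everywhere here, so upwinding uses backward differences.
    u[1:] = u[1:] - dt / dx * u[1:] * (u[1:] - u[:-1])

# The step steepens into a shock moving at speed 1/2 (Rankine-Hugoniot);
# the monotone scheme keeps the solution within [0, 1].
print(u.min(), u.max())
```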
  • Calculus I - Lecture 15 Linear Approximation & Differentials
Calculus I - Lecture 15: Linear Approximation & Differentials

Lecture Notes: http://www.math.ksu.edu/˜gerald/math220d/
Course Syllabus: http://www.math.ksu.edu/math220/spring-2014/indexs14.html
Gerald Hoehn (based on notes by T. Cochran), March 11, 2014

Equation of Tangent Line. Recall the equation of the tangent line of a curve y = f(x) at the point x = a. The general equation of the tangent line is

y = L_a(x) := f(a) + f′(a)(x − a).

That is the point-slope form of a line through the point (a, f(a)) with slope f′(a).

Linear Approximation. It follows from the geometric picture as well as the equation

lim_{x→a} (f(x) − f(a))/(x − a) = f′(a),

which means that (f(x) − f(a))/(x − a) ≈ f′(a), or

f(x) ≈ f(a) + f′(a)(x − a) = L_a(x)

for x close to a. Thus L_a(x) is a good approximation of f(x) for x near a.

If we write x = a + ∆x and let ∆x be sufficiently small, this becomes f(a + ∆x) − f(a) ≈ f′(a)∆x. Writing also ∆y = ∆f := f(a + ∆x) − f(a), this becomes

∆y = ∆f ≈ f′(a)∆x.

In words: for small ∆x, the change ∆y in y if one goes from x to x + ∆x is approximately equal to f′(a)∆x.

Visualization of Linear Approximation

Example: a) Find the linear approximation of f(x) = √x at x = 16. b) Use it to approximate √15.9.

Solution: a) We have to compute the equation of the tangent line at x = 16.
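The excerpt breaks off before finishing part b); the remaining arithmetic (f′(16) = 1/8, so L(x) = 4 + (x − 16)/8) can be sketched as:

```python
import math

a = 16.0
f_a = math.sqrt(a)                 # f(16) = 4
df_a = 1.0 / (2.0 * math.sqrt(a))  # f'(x) = 1/(2 sqrt(x)), so f'(16) = 1/8

def L(x):
    # Tangent line at a: L(x) = f(a) + f'(a)(x - a)
    return f_a + df_a * (x - a)

approx = L(15.9)
print(approx)           # 3.9875
print(math.sqrt(15.9))  # ≈ 3.98748, so the tangent line is accurate to ~2e-5
```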
  • Chapter 3, Lecture 1: Newton's Method 1 Approximate
Math 484: Nonlinear Programming — Chapter 3, Lecture 1: Newton's Method
Mikhail Lavrov, April 15, 2019, University of Illinois at Urbana-Champaign

1 Approximate methods and assumptions

The final topic covered in this class is iterative methods for optimization. These are meant to help us find approximate solutions to problems in cases where finding exact solutions would be too hard. There are two things we've taken for granted before which might actually be too hard to do exactly:

1. Evaluating derivatives of a function f (e.g., ∇f or Hf) at a given point.
2. Solving an equation or system of equations.

Before, we've assumed (1) and (2) are both easy. Now we're going to figure out what to do when (2) and possibly (1) is hard. For example:

• For a polynomial function of high degree, derivatives are straightforward to compute, but it's impossible to solve equations exactly (even in one variable).
• For a function we have no formula for (such as the value function MP(z) from Chapter 5, for instance) we don't have a good way of computing derivatives, and they might not even exist.

2 The classical Newton's method

Eventually we'll get to optimization problems. But we'll begin with Newton's method in its basic form: an algorithm for approximately finding zeroes of a function f : ℝ → ℝ. This is an iterative algorithm: starting with an initial guess x₀, it makes a better guess x₁, then uses it to make an even better guess x₂, and so on. We hope that eventually these approach a solution.
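The iteration just described can be sketched in a few lines (the stopping rule and the x² − 2 test function are choices made here, not taken from the notes):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:   # successive guesses have stopped moving
            break
    return x

# Approximate the zero of f(x) = x**2 - 2 starting from x0 = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # ≈ 1.4142135623730951 = sqrt(2)
```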
  • 2.8 Linear Approximation and Differentials
2.8 Linear Approximation and Differentials

Newton's method used tangent lines to "point toward" a root of a function. In this section we examine and use another geometric characteristic of tangent lines: If f is differentiable at a, c is close to a, and y = L(x) is the line tangent to f(x) at x = a, then L(c) is close to f(c). We can use this idea to approximate the values of some commonly used functions and to predict the "error" or uncertainty in a computation if we know the "error" or uncertainty in our original data. At the end of this section, we will define a related concept called the differential of a function.

Linear Approximation

Because this section uses tangent lines extensively, it is worthwhile to recall how we find the equation of the line tangent to f(x) where x = a: the tangent line goes through the point (a, f(a)) and has slope f′(a), so, using the point-slope form y − y₀ = m(x − x₀) for linear equations, we have y − f(a) = f′(a)·(x − a), hence y = f(a) + f′(a)·(x − a).

If f is differentiable at x = a, then an equation of the line L tangent to f at x = a is:

L(x) = f(a) + f′(a)·(x − a)

Example 1. Find a formula for L(x), the linear function tangent to the graph of f(x) = √x at the point (9, 3). Evaluate L(9.1) and L(8.88) to approximate √9.1 and √8.88.
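Example 1 works out numerically as follows; with f′(9) = 1/(2√9) = 1/6, the tangent line is L(x) = 3 + (x − 9)/6:

```python
import math

# Tangent line to f(x) = sqrt(x) at (9, 3): slope f'(9) = 1/6.
def L(x):
    return 3.0 + (x - 9.0) / 6.0

print(L(9.1))   # 3.01666... ≈ sqrt(9.1)
print(L(8.88))  # 2.98       ≈ sqrt(8.88)
print(math.sqrt(9.1), math.sqrt(8.88))  # the values being approximated
```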
  • Propagation of Error Or Uncertainty
Propagation of Error or Uncertainty
Marcel Oliver, December 4, 2015

1 Introduction

1.1 Motivation. All measurements are subject to error or uncertainty. Common causes are noise or external disturbances, imperfections in the experimental setup and the measuring devices, coarseness or discreteness of instrument scales, unknown parameters, and model errors due to simplifying assumptions in the mathematical description of an experiment. An essential aspect of scientific work, therefore, is quantifying and tracking uncertainties from setup, measurements, all the way to derived quantities and resulting conclusions. In the following, we can only address some of the most common and simple methods of error analysis. We shall do this mainly from a calculus perspective, with some comments on statistical aspects later on.

1.2 Absolute and relative error. When measuring a quantity with true value x_true, the measured value x may differ by a small amount ∆x. We speak of ∆x as the absolute error or absolute uncertainty of x. Often, the magnitude of error or uncertainty is most naturally expressed as a fraction of the true or the measured value x. Since the true value x_true is typically not known, we shall define the relative error or relative uncertainty as ∆x/x. In scientific writing, you will frequently encounter measurements reported in the form x = 3.3 ± 0.05. We read this as x = 3.3 and ∆x = 0.05.

1.3 Interpretation. Depending on context, ∆x can have any of the following three interpretations, which are different but in simple settings lead to the same conclusions.

1. An exact specification of a deviation in one particular trial (instance of performing an experiment), so that x_true = x + ∆x. Of course, typically there is no way to know what this true value actually is, but for the analysis of error propagation, this interpretation is useful.
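The calculus-style propagation rule the excerpt leads toward is ∆y ≈ |f′(x)|·∆x for a derived quantity y = f(x); a minimal sketch using the excerpt's x = 3.3 ± 0.05 and an assumed derived quantity y = x²:

```python
# First-order propagation of uncertainty: if y = f(x), then
# Delta_y ≈ |f'(x)| * Delta_x.
x, dx = 3.3, 0.05          # measured value and absolute uncertainty
y = x ** 2                 # derived quantity y = x^2 (chosen for the demo)
dy = abs(2 * x) * dx       # |dy/dx| * Delta_x
print(y, dy)               # 10.89 ± 0.33

# Relative uncertainties: for y = x^2, dy/y = 2 * (dx/x).
print(dy / y, 2 * dx / x)  # both ≈ 0.0303
```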
  • Lecture # 10 - Derivatives of Functions of One Variable (Cont.)
Lecture # 10 - Derivatives of Functions of One Variable (cont.)

Concave and Convex Functions

We saw before that f′(x) ≤ 0 ⟺ f(x) is a decreasing function. We can use the same definition for the second derivative: f″(x) ≤ 0 ⟺ f′(x) is a decreasing function.

Definition: a function f(·) whose first derivative is decreasing (and f″(x) ≤ 0) is called a concave function. In turn, f″(x) ≥ 0 ⟺ f′(x) is an increasing function.

Definition: a function f(·) whose first derivative is increasing (and f″(x) ≥ 0) is called a convex function.

Notes:
— If f″(x) < 0, then it is a strictly concave function.
— If f″(x) > 0, then it is a strictly convex function.

Importance:
— Concave functions have a maximum.
— Convex functions have a minimum.

Economic Application: Elasticities

Economists are often interested in analyzing how demand reacts when price changes. However, looking at ∆Qᵈ as ∆P = 1 may not be good enough:
— The change in quantity demanded for coffee when the price changes by 1 euro may be huge.
— The change in quantity demanded for cars when the price changes by 1 euro may be insignificant.

Economists look at relative changes, i.e., at the percentage change: what is the ∆% in Qᵈ as ∆% in P. We can write this as the ratio

∆%Qᵈ / ∆%P.

Now the rate of change can be written as ∆%x = ∆x/x. Then:

∆%Qᵈ / ∆%P = (∆Qᵈ/Qᵈ) / (∆P/P) = (∆Qᵈ/∆P) · (P/Qᵈ).

Further, we are interested in the changes in demand when ∆P is very small ⟹ take the limit:

lim_{∆P→0} (∆Qᵈ/∆P) · (P/Qᵈ) = (dQᵈ/dP) · (P/Qᵈ).

So we have finally arrived at the definition of the price elasticity of demand:

ε = (dQᵈ/dP) · (P/Qᵈ).

Example 1. Consider Qᵈ = 100 − 2P.
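Example 1 can be finished numerically. For Qᵈ = 100 − 2P we have dQᵈ/dP = −2, so ε = −2P/(100 − 2P); the evaluation points below are choices made for the demo:

```python
# Price elasticity of demand for Q_d = 100 - 2P:
# epsilon = (dQ/dP) * P / Q = -2P / (100 - 2P).
def elasticity(P):
    Q = 100.0 - 2.0 * P
    dQ_dP = -2.0
    return dQ_dP * P / Q

print(elasticity(10.0))  # -0.25: inelastic at P = 10
print(elasticity(25.0))  # -1.0:  unit elastic at P = 25 (midpoint of demand)
```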
  • The Pendulum
The Pendulum
Andrew Mark Allen - 05370299
February 21, 2012

Abstract. The motion of a simple pendulum was modelled using the linear approximation of the equations of motion, valid for small angles, and using numerical methods to solve the nonlinear equations of motion, namely the trapezoid rule and the fourth-order Runge-Kutta method. The trajectories, suitably graphed with gnuplot, were compared, and finally the Runge-Kutta method was used to simulate a damped driven pendulum at various amplitudes of the driving force, with the phase portrait plotted again in gnuplot to show dynamical features such as period doubling and chaotic motion.

Introduction and Theory

The Simple Pendulum. From Newton's Second Law, it follows that the equation of motion for a simple pendulum in a plane is

d²s/dt² = L d²θ/dt² = −g sin(θ)

where L is the length of the string and g sin θ is the downward acceleration due to gravity. This is a nonlinear equation, but it can be simplified using the small-angle approximation sin(θ) ≈ θ, giving the second-order differential equation

d²θ/dt² = −(g/L) θ

which can be solved to give the familiar equation for the harmonic oscillator

θ = A sin(βt + φ),  β = √(g/L)

where A is the amplitude and φ is the constant phase offset.

Solving Second-Order Differential Equations Numerically. A second-order differential equation can be solved numerically, both in the linear and nonlinear cases, by transforming it into two first-order equations, as shown here for the pendulum equation:

dθ/dt = ω
dω/dt = −β² sin(θ)

Damped Driven Oscillator. The model of the simple pendulum can be expanded to take into account damping and driving forces.
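The first-order system above integrates cleanly with RK4; this sketch (β, amplitude, and step size are assumptions for the demo, not the report's values) compares the nonlinear solution against the small-angle formula θ(t) = A cos(βt) for θ(0) = A, ω(0) = 0:

```python
import math

beta = 1.0
A = 0.05  # small initial angle (radians), so sin(theta) ≈ theta holds well

def deriv(theta, omega):
    # The pendulum system: dtheta/dt = omega, domega/dt = -beta^2 sin(theta).
    return omega, -beta**2 * math.sin(theta)

def rk4_step(theta, omega, dt):
    k1 = deriv(theta, omega)
    k2 = deriv(theta + 0.5*dt*k1[0], omega + 0.5*dt*k1[1])
    k3 = deriv(theta + 0.5*dt*k2[0], omega + 0.5*dt*k2[1])
    k4 = deriv(theta + dt*k3[0], omega + dt*k3[1])
    theta += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    omega += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return theta, omega

theta, omega = A, 0.0
dt, t = 0.01, 0.0
for _ in range(1000):          # integrate to t = 10
    theta, omega = rk4_step(theta, omega, dt)
    t += dt

linear = A * math.cos(beta * t)  # small-angle (harmonic oscillator) solution
print(theta, linear)             # nearly identical at this small amplitude
```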
  • Chapter 3. Linearization and Gradient Equation Fx(X, Y) = Fxx(X, Y)
Oliver Knill, Harvard Summer School, 2010

Chapter 3. Linearization and Gradient

Section 3.1: Partial derivatives and partial differential equations

If f(x, y) is a function of two variables, then ∂/∂x f(x, y) is defined as the derivative of the function g(x) = f(x, y), where y is considered a constant. It is called the partial derivative of f with respect to x. The partial derivative with respect to y is defined similarly. One also uses the shorthand notation f_x(x, y) = ∂/∂x f(x, y).

An equation for an unknown function f(x, y) which involves partial derivatives with respect to at least two variables is called a partial differential equation. If only the derivative with respect to one variable appears, it is called an ordinary differential equation. Examples of partial differential equations are the wave equation f_xx(x, y) = f_yy(x, y) and the heat equation f_x(x, y) = f_xx(x, y). Other examples are the Laplace equation f_xx + f_yy = 0 and the advection equation f_t = f_x.

Paul Dirac once said: "A great deal of my work is just playing with equations and seeing what they give. I don't suppose that applies so much to other physicists; I think it's a peculiarity of myself that I like to play about with equations, just looking for beautiful mathematical relations which maybe don't have any physical meaning at all. Sometimes they do." Dirac discovered a PDE describing the electron which is consistent both with quantum theory and special relativity. This won him the Nobel Prize in 1933. Dirac's equation could have two solutions, one for an electron with positive energy, and one for its antiparticle.
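The wave equation example f_xx = f_yy can be checked numerically with central finite differences; the test function sin(x + y), which satisfies it, is an assumption chosen for this sketch:

```python
import math

def f(x, y):
    return math.sin(x + y)  # f_xx = f_yy = -sin(x + y), so f solves the wave equation

def f_xx(x, y, h=1e-4):
    # Central second difference in x, holding y constant.
    return (f(x + h, y) - 2*f(x, y) + f(x - h, y)) / h**2

def f_yy(x, y, h=1e-4):
    # Central second difference in y, holding x constant.
    return (f(x, y + h) - 2*f(x, y) + f(x, y - h)) / h**2

x, y = 0.3, 0.7
print(f_xx(x, y), f_yy(x, y))  # both ≈ -sin(1)
```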
  • Uncertainty Guide
ICSBEP Guide to the Expression of Uncertainties

Editor: V. F. Dean, Subcontractor to Idaho National Laboratory
Independent Reviewer: Larry G. Blackwood (retired), Idaho National Laboratory

Guide to the Expression of Uncertainties for the Evaluation of Critical Experiments

ACKNOWLEDGMENT. We are most grateful to Fritz H. Fröhner, Kernforschungszentrum Karlsruhe, Institut für Neutronenphysik und Reaktortechnik, for his preliminary review of this document and for his helpful comments and answers to questions.

Revision: 4; Date: November 30, 2007

This guide was prepared by an ICSBEP subgroup led by G. Poullot, D. Doutriaux, and J. Anno of IRSN, France. The subgroup membership includes the following individuals:

J. Anno, IRSN (retired), France
J. B. Briggs, INL, USA
R. Bartholomay, Westinghouse Safety Management Solutions, USA
V. F. Dean (editor), Subcontractor to INL, USA
D. Doutriaux, IRSN (retired), France
K. Elam, ORNL, USA
C. Hopper, ORNL, USA
R. Jeraj, J. Stefan Institute, Slovenia
Y. Kravchenko, RRC Kurchatov Institute, Russian Federation
V. Lyutov, VNIITF, Russian Federation
R. D. McKnight, ANL, USA
Y. Miyoshi, JAERI, Japan
R. D. Mosteller, LANL, USA
A. Nouri, OECD Nuclear Energy Agency, France
G. Poullot, IRSN (retired), France
R. W. Schaefer, ANL (retired), USA
N. Smith, AEAT, United Kingdom
Z. Szatmary, Technical University of Budapest, Hungary
F. Trumble, Westinghouse Safety Management Solutions, USA
A. Tsiboulia, IPPE, Russian Federation
O. Zurron, ENUSA, Spain

The contribution of French participants had strong technical support from Benoit Queffelec, Director of the Société Industrielle d'Etudes et Réalisations, Enghien (France). He is a consultant on statistics applied to industrial and fabrication processes.
  • 2.8 Linear Approximations and Differentials
Arkansas Tech University, MATH 2914: Calculus I, Dr. Marcel B. Finan

2.8 Linear Approximations and Differentials

In this section we approximate graphs by tangent lines, which we refer to as tangent line approximations. We also discuss the use of linear approximation in the estimate of errors that occur to a function when an error to the independent variable is given. This estimate is referred to as the differential.

Tangent Line Approximations

Consider a function f(x) that is differentiable at a point a. For points close to a, we approximate the values of f(x) near a via the tangent line at a, whose equation is given by

L(x) = f′(a)(x − a) + f(a).

We call L(x) the local linearization of f(x) at x = a. The approximation

f(x) ≈ f′(a)(x − a) + f(a)

is called the linear approximation or tangent line approximation of f(x) at x = a. It follows that the graph of f(x) near a can be thought of as a straight line.

Example 2.8.1. Find the tangent line approximation of f(x) = sin x near a = 0.

Solution. According to the formula above, we have f(x) ≈ f(0) + f′(0)x. But f(0) = 0 and f′(0) = 1 since f′(x) = cos x. Hence, for x close to 0, we have sin x ≈ x.

Estimation Error

As with any estimation method, an error is being committed. We define the absolute error of the linear approximation to be the difference E(x) = Exact − Approximate, given by

E(x) = f(x) − [f′(a)(x − a) + f(a)].

Figure 2.8.1 shows the tangent line approximation and its absolute error.
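For Example 2.8.1 the absolute error is E(x) = sin x − x, which a Taylor expansion suggests behaves like −x³/6 near 0; a quick numerical check:

```python
import math

# Absolute error of the linear approximation sin x ≈ x near a = 0:
# E(x) = f(x) - [f'(0) x + f(0)] = sin x - x.
def E(x):
    return math.sin(x) - x

for x in (0.1, 0.01):
    print(x, E(x))  # error shrinks roughly like -x**3 / 6
```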
  • 32-Linearization.Pdf
Math S21a: Multivariable calculus, Oliver Knill, Summer 2011

Lecture 10: Linearization

In single variable calculus, you have seen the following definition: The linear approximation of f(x) at a point a is the linear function

L(x) = f(a) + f′(a)(x − a).

1) What is the linear approximation of the function f(x, y) = sin(πxy²) at the point (1, 1)? We have ∇f(x, y) = (f_x(x, y), f_y(x, y)) = (πy² cos(πxy²), 2πxy cos(πxy²)), which at the point (1, 1) equals ∇f(1, 1) = (π cos(π), 2π cos(π)) = (−π, −2π).

2) Linearization can be used to estimate functions near a point. In the previous example, with f(1, 1) = sin(π) = 0,

f(1 + 0.01, 1 + 0.01) ≈ L(1 + 0.01, 1 + 0.01) = −π(0.01) − 2π(0.01) = −3π(0.01) ≈ −0.0942.

3) Here is an example in three dimensions: find the linear approximation to f(x, y, z) = xy + yz + zx at the point (1, 1, 1). Since f(1, 1, 1) = 3 and ∇f(x, y, z) = (y + z, x + z, y + x), so that ∇f(1, 1, 1) = (2, 2, 2), we have L(x, y, z) = f(1, 1, 1) + (2, 2, 2) · (x − 1, y − 1, z − 1) = 3 + 2(x − 1) + 2(y − 1) + 2(z − 1) = 2x + 2y + 2z − 3.

4) Estimate f(0.01, 24.8, 1.02) for f(x, y, z) = eˣ√y z. Solution: take (x₀, y₀, z₀) = (0, 25, 1), where f(x₀, y₀, z₀) = 5. The gradient is ∇f(x, y, z) = (eˣ√y z, eˣz/(2√y), eˣ√y).
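The excerpt's last estimate can be finished numerically: at (0, 25, 1) the gradient evaluates to (5, 1/10, 5), giving the linearization used below.

```python
import math

# f(x, y, z) = e**x * sqrt(y) * z, linearized at (0, 25, 1):
# f = 5 there, and grad f = (5, 1/10, 5).
def f(x, y, z):
    return math.exp(x) * math.sqrt(y) * z

def L(x, y, z):
    return 5.0 + 5.0 * (x - 0.0) + 0.1 * (y - 25.0) + 5.0 * (z - 1.0)

approx = L(0.01, 24.8, 1.02)
print(approx)               # 5 + 0.05 - 0.02 + 0.10 = 5.13
print(f(0.01, 24.8, 1.02))  # ≈ 5.1306, so the estimate is good to ~1e-3
```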