Approximation of a Function by a Polynomial Function


1st step: Assuming that f(x) is differentiable at x = a, from the picture we see:

    f(x) = f(a) + \Delta f \approx f(a) + f'(a)\,\Delta x

Linear approximation of f(x) around x = a:

    f(x) \approx f(a) + f'(a)(x - a)

The remainder R = f(x) - f(a) - f'(a)(x - a) determines the magnitude of the error.

Let's try to get a better approximation of f(x). Assume that f(x) has derivatives of all orders at x = a, and assume there is a power series expansion of f(x) around a:

    f(x) = c_0 + c_1(x - a) + c_2(x - a)^2 + c_3(x - a)^3 + \cdots = \sum_{n=0}^{\infty} c_n (x - a)^n   for all x around a

Differentiating term by term and evaluating at x = a:

    n = 0:  f(x) = c_0 + c_1(x - a) + c_2(x - a)^2 + c_3(x - a)^3 + \cdots              →  f(a) = c_0
    n = 1:  f'(x) = c_1 + 2c_2(x - a) + 3c_3(x - a)^2 + \cdots                          →  f'(a) = c_1
    n = 2:  f''(x) = 2c_2 + 3·2\,c_3(x - a) + 4·3\,c_4(x - a)^2 + \cdots                →  f''(a) = 2c_2
    n = 3:  f'''(x) = 3·2\,c_3 + 4·3·2\,c_4(x - a) + 5·4·3\,c_5(x - a)^2 + \cdots       →  f'''(a) = 3·2\,c_3
    ⋮
    n:      f^{(n)}(x) = n(n-1)\cdots 3·2·1\,c_n + (n+1)n(n-1)\cdots 2\,c_{n+1}(x - a) + \cdots  →  f^{(n)}(a) = n!\,c_n

Taylor's theorem. If f(x) has derivatives of all orders in an open interval I containing a, then for each x in I,

    f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n
         = f(a) + f'(a)(x-a) + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + \frac{f^{(n+1)}(a)}{(n+1)!}(x-a)^{n+1} + \cdots
         = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k + \sum_{k=n+1}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k
         = T_n + R_n

(convention, for the sake of the algebra: 0! = 1)

    T_n = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k       … Taylor polynomial of nth order

    R_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!}(x-a)^{n+1}         … remainder R_n

Taylor's theorem says that there exists some value z between a and x for which the tail

    \sum_{k=n+1}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k   can be replaced by   R_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!}(x-a)^{n+1}

Approximating a function by a polynomial function

So we are good to go. We can find the value of f(x) around a by calculating T_n and then adding R_n. Instead of adding an infinite number of terms (how in the world would we?), we have a finite number of terms. Beautiful. One little problem: we know z exists, BUT we don't know how to find it.
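The recipe above — build T_n from the derivatives at a, then bound the leftover R_n — can be tried numerically. A minimal sketch (not from the notes; e^x about a = 0 is my choice of test function, where every derivative at 0 equals 1):

```python
import math

def taylor_poly(derivs_at_a, a, x):
    """Evaluate T_n(x) = sum_k f^(k)(a)/k! * (x - a)^k given [f(a), f'(a), ...]."""
    return sum(d / math.factorial(k) * (x - a)**k
               for k, d in enumerate(derivs_at_a))

# e^x about a = 0: every derivative at 0 equals 1.
x, n = 1.0, 6
tn = taylor_poly([1.0] * (n + 1), 0.0, x)

# Lagrange remainder: R_n = e^z x^(n+1)/(n+1)! for some 0 < z < x.
# We cannot find z, but e^z < e^x gives a computable bound on the error.
bound = math.exp(x) * x**(n + 1) / math.factorial(n + 1)
assert abs(math.exp(x) - tn) < bound
```

Exactly as in the text: z stays unknown, but bounding f^{(n+1)}(z) still certifies the accuracy of T_n.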
That's why we use approximation: by approximating f(x) with T_n we neglect R_n. We can do that only if R_n is very small compared to T_n. So if we find the maximum possible value of R_n, and that value is small compared to T_n, we have a good approximation:

    f(x) \approx \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k

The error is the remainder R_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!}(x-a)^{n+1}, where z is the value between a and x that yields the maximum R_n.

A Maclaurin series is a Taylor series centered at a = 0 (expansion of a function about 0). Note also that Taylor's theorem with n = 0 gives

    f(x) = f(a) + f'(z)(x-a),   i.e.   f'(z) = \frac{f(x) - f(a)}{x - a}   → the MEAN value theorem.

Problems:
• For what values of x can we expect a Taylor series to represent f(x)? → radius of convergence
• How accurately do Taylor polynomials approximate f(x)? → magnitude of the error R_n

Example 1: Find the Taylor series for sin x centered at a = 0 (Maclaurin series).

    f(x) = \sin x         f(0) = 0
    f'(x) = \cos x        f'(0) = 1
    f''(x) = -\sin x      f''(0) = 0
    f'''(x) = -\cos x     f'''(0) = -1
    f^{(4)}(x) = \sin x   f^{(4)}(0) = 0
    … this pattern will repeat.

    f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n = 0 + 1·x + 0·x^2 + \frac{-1}{3!}x^3 + 0·x^4 + \cdots
         = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}

Signs alternate and the denominators get very big; factorials grow very fast. Ratio test:

    \lim_{n\to\infty} \left| \frac{x^{2n+3}}{(2n+3)!} \cdot \frac{(2n+1)!}{x^{2n+1}} \right| = |x^2| \lim_{n\to\infty} \frac{1}{(2n+3)(2n+2)} = |x^2| \cdot 0 = 0

This converges for any value of x. The radius of convergence is infinity.

Example 2: Find the fifth-order Taylor approximation of f(x) = ln x centered at a = 1.

    f(x) = \ln x             f(1) = 0
    f'(x) = 1/x              f'(1) = 1
    f''(x) = -1/x^2          f''(1) = -1
    f'''(x) = 2!/x^3         f'''(1) = 2!
    f^{(4)}(x) = -3!/x^4     f^{(4)}(1) = -3!
    f^{(5)}(x) = 4!/x^5      f^{(5)}(1) = 4!

    f^{(n)}(x) = \frac{(-1)^{n+1}(n-1)!}{x^n}   →   f^{(n)}(1) = (-1)^{n+1}(n-1)!

    c_n = \frac{f^{(n)}(1)}{n!} = \frac{(-1)^{n+1}}{n}

    \ln x = \sum_{n=0}^{\infty} \frac{f^{(n)}(1)}{n!}(x-1)^n = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}(x-1)^n
          = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} + \cdots   for |x - 1| < 1, i.e. 0 < x ≤ 2
    P_5(x) = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} + \frac{(x-1)^5}{5}

Example 3: Find the Maclaurin series for f(x) = ln(x + 1).

    f(x) = \ln(1 + x)              f(0) = 0
    f'(x) = 1/(1 + x)              f'(0) = 1
    f''(x) = -1/(1 + x)^2          f''(0) = -1
    f'''(x) = 2!/(1 + x)^3         f'''(0) = 2!
    f^{(4)}(x) = -3!/(1 + x)^4     f^{(4)}(0) = -3!

    f^{(n)}(x) = \frac{(-1)^{n+1}(n-1)!}{(1+x)^n}   →   f^{(n)}(0) = (-1)^{n+1}(n-1)!

    c_n = \frac{f^{(n)}(0)}{n!} = \frac{(-1)^{n+1}}{n}

    \ln(1 + x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} x^n}{n}
               = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \cdots   for |x| < 1, i.e. -1 < x ≤ 1

The exact value of z is only a dream. If we knew it, we would know the exact expansion of the given function and it wouldn't be an approximation any more. Our goal is to find bounds for f^{(n+1)}(z) so we can see how large the remainder could be.

Example: Use Taylor's theorem to approximate sin(0.1) by the third-order polynomial P_3(0.1) and determine the accuracy of the approximation.

    \sin x = x - \frac{x^3}{3!} + R_3(x) = x - \frac{x^3}{3!} + \frac{f^{(4)}(z)}{4!}x^4,   0 < z < 0.1,   f^{(4)}(z) = \sin z

    \sin(0.1) ≈ 0.1 - \frac{(0.1)^3}{3!} = 0.0998333

    0 < R_3(0.1) = \frac{\sin z}{4!}(0.1)^4 < \frac{0.0001}{4!} ≈ 0.000004

    0.0998333 < \sin(0.1) < 0.0998333 + 0.000004 = 0.0998373

Example: Use Taylor's theorem to approximate ln(1.2) so that the error is less than 0.001.

    R_n(x) = \frac{f^{(n+1)}(z)}{(n+1)!}(x-a)^{n+1} < 0.001

1. Taylor's theorem with a = 0 gives the Maclaurin series of ln(1 + x), with x = 0.2. In example 3 you saw that the (n+1)st derivative of f(x) = ln(x + 1) is

    f^{(n+1)}(x) = \frac{(-1)^{n+2}\,n!}{(1+x)^{n+1}}
    →   |R_n(0.2)| = \left| \frac{(-1)^{n+2}\,n!}{(1+z)^{n+1}(n+1)!} \right| (0.2)^{n+1} = \frac{(0.2)^{n+1}}{(n+1)(1+z)^{n+1}}

0 < z < 0.2, so the maximum error is obtained for z = 0:

    \frac{(0.2)^{n+1}}{n+1} < 0.001

Either trial and error or WolframAlpha gives n ≈ 2.5, so n = 3 (for n = 2 the bound is ≈ 0.00267; for n = 3 it is 0.0004):

    \ln(1 + x) ≈ x - \frac{x^2}{2} + \frac{x^3}{3} = \cdots   with the desired accuracy.

2. Taylor's theorem with a = 1 gives the Taylor approximation of f(x) = ln x, with x = 1.2. In example 2 you saw that the (n+1)st derivative of f(x) = ln x is

    f^{(n+1)}(x) = \frac{(-1)^{n+2}\,n!}{x^{n+1}}
    →   |R_n(1.2)| = \left| \frac{(-1)^{n+2}\,n!}{z^{n+1}(n+1)!} \right| (1.2 - 1)^{n+1} = \frac{(0.2)^{n+1}}{(n+1)\,z^{n+1}}

1 < z < 1.2, so the maximum error is obtained for z = 1:

    \frac{(0.2)^{n+1}}{n+1} < 0.001   … here we go again: n = 3.

Convergence of Taylor Series. If f(x) has derivatives of all orders in an open interval I containing a, then for each x in I,

    f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n

holds if and only if

    \lim_{n\to\infty} R_n(x) = \lim_{n\to\infty} \frac{f^{(n+1)}(z)}{(n+1)!}(x-a)^{n+1} = 0   for all x ∈ I.

Example: Show that the Maclaurin series for f(x) = sin x converges to sin x for all x. You need to show that

    \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots + \frac{(-1)^n x^{2n+1}}{(2n+1)!} + \cdots

is true for all x. For R_n we need f^{(n+1)}(z), and as f^{(n+1)}(x) is either ±sin x or ±cos x, we know that the maximum value of |f^{(n+1)}(z)| is 1 for every real number z. →

    |R_n(x)| = \left| \frac{f^{(n+1)}(z)}{(n+1)!} x^{n+1} \right| ≤ \frac{|x|^{n+1}}{(n+1)!} → 0

(From the Taylor expansion of e^x, x^n/n! ≤ e^x for all x ≥ 0; and since the series \sum x^n/n! converges, its terms x^n/n! must go to 0.)
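The trial-and-error step in the ln(1.2) example above can be scripted: search for the smallest n whose worst-case remainder bound drops below 0.001, then confirm the polynomial really hits that accuracy. A minimal sketch (Python is my choice, not part of the notes):

```python
import math

# Smallest n with (0.2)^(n+1)/(n+1) < 0.001 — the z = 0 worst case from the example.
n = 1
while 0.2**(n + 1) / (n + 1) >= 0.001:
    n += 1
assert n == 3   # 0.2^4 / 4 = 0.0004 < 0.001

# Maclaurin polynomial of ln(1 + x) of order n, evaluated at x = 0.2.
approx = sum((-1)**(k + 1) * 0.2**k / k for k in range(1, n + 1))
assert abs(approx - math.log(1.2)) < 0.001
```

The actual error (about 0.00035) is indeed below the a-priori bound 0.0004, illustrating that bounding f^{(n+1)}(z) gives a guarantee, not just a hope.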