Mathematical Statistics Approximation Methods in Statistics


Course Notes for Math 162: Mathematical Statistics
Approximation Methods in Statistics
Adam Merberg and Steven J. Miller
August 18, 2006

Abstract

We introduce some of the approximation methods commonly used in mathematical statistics. We first consider Taylor series expansion. We then look at Stirling's Formula, which provides an approximation for n!. As an application we show how to apply these techniques to estimate the solution to the Birthday Problem. In the appendices we review the Intermediate and Mean Value Theorems, factorials, the Gamma function and proofs by induction.

Contents

1 Introduction
2 Taylor Series
3 Stirling's Formula
4 The Birthday Problem
A The Intermediate and Mean Value Theorems (IVT, MVT)
B Factorials and the Gamma Function
C Proofs by Induction

1 Introduction

Often when studying statistics, we find that we must work with expressions which are too unwieldy to consider exactly. For example, we often need to look at factorials, which are often contributed by binomial coefficients. Sometimes we have the good fortune of being able to cancel one factorial with another. Often, however, we will have no way to simplify exactly an expression involving a factorial. In order to draw meaningful conclusions, we must therefore rely on approximations, which will typically be valid for large sample sizes. Stirling's Formula provides an approximation for n!. Before introducing Stirling's Formula, we will look at Taylor series, an approximation method used in justifying Stirling's Formula and elsewhere.

2 Taylor Series

A Taylor series is a power series that allows us to approximate a function that has certain properties. The theoretical basis for Taylor series is given by the following theorem. The theorem and its proof are as given in [Rud]; by $f^{(i)}(t)$ we mean the $i$th derivative of $f(t)$. A key ingredient in our proof is the Mean Value Theorem, which is proved in Appendix A.
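The gap Stirling's Formula fills can be seen numerically. Below is a short Python sketch (ours, not part of the original notes; the helper name `stirling` is an assumption) comparing $n!$ with $\sqrt{2\pi n}\,(n/e)^n$:

```python
import math

def stirling(n):
    # Stirling's approximation: n! is roughly sqrt(2*pi*n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20):
    exact = math.factorial(n)
    approx = stirling(n)
    rel_err = abs(approx - exact) / exact
    print(f"n={n}: n!={exact}, Stirling={approx:.6g}, relative error={rel_err:.4f}")
```

The relative error shrinks like $1/(12n)$, which is why the approximation is so useful for the large factorials that arise in binomial coefficients.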
Theorem 2.1 (Taylor's Theorem). Let $f$ be a real-valued function on $[a, b]$ and let $n$ be a positive integer. Suppose that the following conditions hold:

1. $f^{(n-1)}(t)$ is continuous on $[a, b]$;
2. $f^{(n)}(t)$ exists for every $t \in (a, b)$.

Let $\alpha$ and $\beta$ be distinct points of $[a, b]$, and define
$$P_{n-1}(t) = \sum_{i=0}^{n-1} \frac{f^{(i)}(\alpha)}{i!} (t - \alpha)^i. \quad (2.1)$$
Then for some $x$ between $\alpha$ and $\beta$,
$$f(\beta) = P_{n-1}(\beta) + \frac{f^{(n)}(x)}{n!} (\beta - \alpha)^n. \quad (2.2)$$

Given appropriate bounds on the higher derivatives, we can approximate a function as a polynomial, which is often much simpler than considering the function exactly. The function $P_{n-1}(t)$ is called the Taylor series of order (or degree) $n - 1$. It is the best approximation to $f(t)$ among polynomials of degree $n - 1$, and the error between our function $f(t)$ and our Taylor series approximation $P_{n-1}(t)$ is bounded by the $n$th derivative. Not surprisingly, the larger $n$ is, the better the approximation, because we are using more and more information.

If $n - 1 = 0$ then we are approximating $f(\beta)$ with $f(\alpha)$. This means that, if all we know at time $\alpha$ is the value $f(\alpha)$, our best guess is that the function is always this value (for we do not know if it is increasing or decreasing). If now we know $f'(\alpha)$, we know how rapidly the function $f$ is changing at $\alpha$ (if we think of $f$ as representing distance, then $f'(\alpha)$ is the speed at $\alpha$). Our best guess is now that the speed is constant, always equal to $f'(\alpha)$. If this were true, then the value of $f$ at $\beta$ would be $f(\alpha) + f'(\alpha)(\beta - \alpha)$ (where we start plus the distance traveled, which under the assumption of constant speed is just speed times elapsed time). This is the first order Taylor series approximation. The next piece of information, $f''(\alpha)$, tells us how fast the speed is changing at time $\alpha$. This allows us to weaken our assumption of constant speed, and obtain a better approximation.

Proof. For $a < t < b$, we define
$$g(t) = f(t) - P_{n-1}(t) - M(t - \alpha)^n, \quad (2.3)$$
where $M$ is chosen so that $g(\beta) = 0$.
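To see the theorem in action, the following Python sketch (ours; the function and variable names are illustrative) takes $f(x) = e^x$, $\alpha = 0$, $\beta = 1$, and checks that the error of $P_{n-1}(\beta)$ never exceeds the bound $\max_{x \in [0,1]} |f^{(n)}(x)| \,(\beta - \alpha)^n / n! = e/n!$:

```python
import math

def taylor_exp(beta, n):
    # P_{n-1}(beta): the Taylor polynomial of degree n-1 for e^x about alpha = 0
    return sum(beta ** i / math.factorial(i) for i in range(n))

alpha, beta = 0.0, 1.0
for n in (2, 5, 10):
    approx = taylor_exp(beta, n)
    error = abs(math.e - approx)
    # Bound from Theorem 2.1: f^(n)(x) = e^x, which is at most e on [0, 1]
    bound = math.e / math.factorial(n) * (beta - alpha) ** n
    print(f"n={n}: approx={approx:.8f}, error={error:.2e}, bound={bound:.2e}")
    assert error <= bound
```

As the theorem promises, each extra derivative of information shrinks both the actual error and the guaranteed bound by roughly another factor of $n$.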
It is easily seen from (2.1) that the values of $P_{n-1}^{(i)}(\alpha)$ coincide with the values of $f^{(i)}(\alpha)$ for any choice of $i$ with $0 \le i \le n - 1$. Then for any such $i$, we have
$$g^{(i)}(\alpha) = f^{(i)}(\alpha) - P_{n-1}^{(i)}(\alpha) - M \frac{n!}{(n-i)!} (\alpha - \alpha)^{n-i} = 0. \quad (2.4)$$
From the Mean Value Theorem, since $g(\beta) = g(\alpha) = 0$, we must have $g'(x_1) = 0$ for some $x_1$ between $\alpha$ and $\beta$. We can now apply the Mean Value Theorem to $g'(t)$. We have $g'(x_1) = g'(\alpha) = 0$, so $g''(x_2) = 0$ for some $x_2$ between $x_1$ and $\alpha$. We can continue this process for a total of $n$ steps to see that $g^{(n)}(x_n) = 0$ for some $x_n$ between $\alpha$ and $x_{n-1}$. We let $x = x_n$ and note that $x$ is also between $\alpha$ and $\beta$. By differentiating the equation (2.3) $n$ times, we have
$$g^{(n)}(t) = f^{(n)}(t) - P_{n-1}^{(n)}(t) - M n! (t - \alpha)^0. \quad (2.5)$$
Since $P_{n-1}(t)$ is a polynomial of degree $n - 1$, this becomes
$$g^{(n)}(t) = f^{(n)}(t) - M n!. \quad (2.6)$$
Since $g^{(n)}(x) = 0$,
$$f^{(n)}(x) = M n!. \quad (2.7)$$
We then have
$$P_{n-1}(\beta) + \frac{f^{(n)}(x)}{n!} (\beta - \alpha)^n = P_{n-1}(\beta) + \frac{M n!}{n!} (\beta - \alpha)^n = f(\beta) - g(\beta) = f(\beta). \quad (2.8)$$
The $x$ we have chosen therefore satisfies the statement of the theorem.

Taylor's Theorem allows us to approximate $f(\beta)$ as $P_{n-1}(\beta)$ and allows us to bound the error by finding the maximum value of $\frac{f^{(n)}(x)}{n!} (\beta - \alpha)^n$ for $x$ on the interval $[a, b]$. If we have a good upper bound on the derivatives of $f$, we can get a good polynomial approximation for $f(\beta)$.

Example 2.2. Let us consider
$$f(x) = \log(1 - x). \quad (2.9)$$
We will compute the Taylor series about $x = 0$. By direct computation, we have $f(0) = 0$. We now compute the first derivative:
$$f'(x) = -\frac{1}{1 - x}. \quad (2.10)$$
Hence $f'(0) = -1$. We will now show that
$$f^{(n)}(x) = -\frac{(n-1)!}{(1 - x)^n} \quad (2.11)$$
for $n \ge 1$. We have just shown the case $n = 1$, and we will proceed by induction (see Appendix C). Assuming the statement for $n - 1$, we compute $f^{(n)}(x)$ as the derivative of $f^{(n-1)}(x)$. We have
$$f^{(n)}(x) = \frac{d}{dx} \left( -\frac{(n-2)!}{(1 - x)^{n-1}} \right) = -\frac{-(n-1) \cdot (n-2)!}{(1 - x)^n} \cdot (-1) = -\frac{(n-1)!}{(1 - x)^n}. \quad (2.12)$$
The statement (2.11) now follows by induction for $n \ge 1$.
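As a sanity check on the closed form (2.11), the sketch below (ours, not part of the notes; the sample point and tolerances are arbitrary choices) compares it against central finite differences for the first three derivatives of $\log(1 - x)$:

```python
import math

def f(x):
    return math.log(1 - x)

def nth_derivative_formula(n, x):
    # Claimed closed form (2.11): f^(n)(x) = -(n-1)! / (1-x)^n
    return -math.factorial(n - 1) / (1 - x) ** n

x, h = 0.3, 1e-3
# Central finite-difference stencils for the first three derivatives
d1 = (f(x + h) - f(x - h)) / (2 * h)
d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
d3 = (f(x + 2 * h) - 2 * f(x + h) + 2 * f(x - h) - f(x - 2 * h)) / (2 * h ** 3)

for n, numeric in ((1, d1), (2, d2), (3, d3)):
    exact = nth_derivative_formula(n, x)
    assert abs(numeric - exact) / abs(exact) < 1e-3
```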
From this, we see that $f^{(n)}(0) = -(n-1)!$. The Taylor series for $f(x)$ about 0 is therefore given by
$$f(x) = -\left( \frac{0!}{1!} x + \frac{1!}{2!} x^2 + \frac{2!}{3!} x^3 + \cdots \right) = -\left( x + \frac{x^2}{2} + \frac{x^3}{3} + \cdots \right) = -\sum_{n=1}^{\infty} \frac{x^n}{n}. \quad (2.13)$$

Remark 2.3. Note the expansion for $\log(1 - x)$ above converges for $0 \le x < 1$. Further, one can see that this expansion is a negative number for $x > 0$. This makes sense, as for $x \in (0, 1)$, $1 - x \in (0, 1)$. When we calculate $\log(1 - x)$ we are taking the logarithm of a positive number less than 1; this has to be negative (the logarithm of 1 is zero, the logarithms of numbers greater than 1 are positive, and those of numbers less than 1 are negative). It is very important to be able to quickly spot-check a formula to look for simple omissions. For complicated formulas, try to concentrate on the main parts or features, and see if they make sense.

Remark 2.4. One can also derive the Taylor series expansion by integration. From the geometric series formula, we know
$$-\frac{1}{1 - y} = -\sum_{n=0}^{\infty} y^n \quad (2.14)$$
for $0 \le y < 1$. If we integrate both sides from 0 to $x < 1$, provided we can interchange the integration and summation, we have
$$\int_0^x -\frac{1}{1 - y} \, dy = -\sum_{n=0}^{\infty} \int_0^x y^n \, dy,$$
so that
$$\ln(1 - x) = -\sum_{n=0}^{\infty} \frac{x^{n+1}}{n + 1} = -\sum_{n=1}^{\infty} \frac{x^n}{n}; \quad (2.15)$$
the interchange of integration and summation can be justified by Fubini's Theorem.

Example 2.5. One of the most important Taylor series is that of $f(x) = e^x$ about $x = 0$. As $f^{(n)}(x) = e^x$ for all $n$, we have
$$e^x = \sum_{n=0}^{\infty} \frac{f^{(n)}(0) x^n}{n!} = \sum_{n=0}^{\infty} \frac{x^n}{n!}. \quad (2.16)$$

Exercise 2.6. Calculate the Taylor series of $\cos\theta$ and $\sin\theta$ around $\theta = 0$. Consider what we get by formally plugging in $x = i\theta$ in the Taylor series expansion of $e^x$: $\sum_{n=0}^{\infty} (i\theta)^n / n!$. Often the Taylor series expansion is used as the definition of $e^x$; because of the rapid growth of $n!$ (see §3), this infinite sum converges for all complex values of $x$. Thus we may define $e^x$ by $\sum_{n=0}^{\infty} x^n / n!$ for all $x$, real or complex.
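Both series can be checked against the library functions; here is a minimal Python sketch (ours, not from the notes; the truncation lengths are arbitrary choices):

```python
import math

def log1m_series(x, terms):
    # Partial sum of (2.13): log(1 - x) = -sum_{n>=1} x^n / n, valid for 0 <= x < 1
    return -sum(x ** n / n for n in range(1, terms + 1))

def exp_series(x, terms):
    # Partial sum of (2.16): e^x = sum_{n>=0} x^n / n!
    return sum(x ** n / math.factorial(n) for n in range(terms))

print(log1m_series(0.5, 60), math.log(1 - 0.5))  # both are approximately -0.6931
print(exp_series(2.0, 30), math.exp(2.0))        # both are approximately 7.3891
```

Note that the partial sums of (2.13) converge slowly as $x$ approaches 1, mirroring the divergence of the harmonic series at $x = 1$, while the factorials in (2.16) make that series converge quickly for any fixed $x$.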
Show that
$$e^{i\theta} = \cos\theta + i \sin\theta. \quad (2.17)$$
One can use this to derive all trigonometric formulas. For example, $\sin^2\theta + \cos^2\theta = 1$ follows from $e^{i\theta} \cdot e^{-i\theta} = 1$.
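Python's `cmath` module evaluates the exponential for complex arguments, so Euler's formula and the identity $\sin^2\theta + \cos^2\theta = 1$ can be verified numerically (a sketch of ours, at an arbitrary sample angle):

```python
import cmath
import math

theta = 0.7
# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert abs(lhs - rhs) < 1e-12

# sin^2 + cos^2 = 1 follows from e^{i*theta} * e^{-i*theta} = 1
product = cmath.exp(1j * theta) * cmath.exp(-1j * theta)
assert abs(product - 1) < 1e-12
```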