Commonly Used Taylor Series


Each series below is listed together with the values of $x$ for which the expansion is valid/true.

$$\frac{1}{1-x} = 1 + x + x^2 + x^3 + x^4 + \cdots = \sum_{n=0}^{\infty} x^n, \qquad x \in (-1, 1).$$
Note: this is the geometric series; just think of $x$ as $r$.

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots = \sum_{n=0}^{\infty} \frac{x^n}{n!}, \qquad x \in \mathbb{R}.$$
So, for example, $e^1 = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots$ and $e^{17x} = \sum_{n=0}^{\infty} \frac{(17x)^n}{n!} = \sum_{n=0}^{\infty} \frac{17^n x^n}{n!}$.

$$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} - \cdots = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!}, \qquad x \in \mathbb{R}.$$
Note: $y = \cos x$ is an even function (i.e., $\cos(-x) = +\cos(x)$) and the Taylor series of $y = \cos x$ has only even powers.

$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \cdots = \sum_{n=1}^{\infty} (-1)^{n-1} \frac{x^{2n-1}}{(2n-1)!} = \sum_{n=0}^{\infty} (-1)^{n} \frac{x^{2n+1}}{(2n+1)!}, \qquad x \in \mathbb{R}.$$
Note: $y = \sin x$ is an odd function (i.e., $\sin(-x) = -\sin(x)$) and the Taylor series of $y = \sin x$ has only odd powers.

$$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \cdots = \sum_{n=1}^{\infty} (-1)^{n-1} \frac{x^{n}}{n} = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{x^{n}}{n}, \qquad x \in (-1, 1].$$
Question: is $y = \ln(1+x)$ even, odd, or neither?

$$\tan^{-1} x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \frac{x^9}{9} - \cdots = \sum_{n=1}^{\infty} (-1)^{n-1} \frac{x^{2n-1}}{2n-1} = \sum_{n=0}^{\infty} (-1)^{n} \frac{x^{2n+1}}{2n+1}, \qquad x \in [-1, 1].$$
Question: is $y = \arctan(x)$ even, odd, or neither?

Math 142: Taylor/Maclaurin Polynomials and Series (Prof. Girardi)

Fix an interval $I$ in the real line (e.g., $I$ might be $(-17, 19)$) and let $x_0$ be a point in $I$, i.e., $x_0 \in I$. Next consider a function $f \colon I \to \mathbb{R}$ whose domain is $I$ and whose derivatives $f^{(n)} \colon I \to \mathbb{R}$ exist on the interval $I$ for $n = 1, 2, 3, \dots, N$.

Definition 1. The $N$th-order Taylor polynomial for $y = f(x)$ at $x_0$ is

$$p_N(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(N)}(x_0)}{N!}(x - x_0)^N, \qquad \text{(open form)}$$

which can also be written as (recall that $0! = 1$)

$$p_N(x) = \frac{f^{(0)}(x_0)}{0!} + \frac{f^{(1)}(x_0)}{1!}(x - x_0) + \frac{f^{(2)}(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(N)}(x_0)}{N!}(x - x_0)^N,$$

a finite sum, i.e., the sum stops. Formula (open form) is in open form. It can also be written in closed form, by using sigma notation, as

$$p_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n. \qquad \text{(closed form)}$$

So $y = p_N(x)$ is a polynomial of degree at most $N$ and it has the form

$$p_N(x) = \sum_{n=0}^{N} c_n (x - x_0)^n \qquad \text{where the constants} \qquad c_n = \frac{f^{(n)}(x_0)}{n!}$$

are specially chosen so that the derivatives match up at $x_0$, i.e., the constants $c_n$ are chosen so that

$$p_N(x_0) = f(x_0), \quad p_N^{(1)}(x_0) = f^{(1)}(x_0), \quad p_N^{(2)}(x_0) = f^{(2)}(x_0), \quad \dots, \quad p_N^{(N)}(x_0) = f^{(N)}(x_0).$$

The constant $c_n$ is the $n$th Taylor coefficient of $y = f(x)$ about $x_0$. The $N$th-order Maclaurin polynomial for $y = f(x)$ is just the $N$th-order Taylor polynomial for $y = f(x)$ at $x_0 = 0$, and so it is

$$p_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(0)}{n!} x^n.$$

Definition 2. The Taylor series for $y = f(x)$ at $x_0$ is the power series [1]

$$P_\infty(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n + \cdots \qquad \text{(open form)}$$

which can also be written as

$$P_\infty(x) = \frac{f^{(0)}(x_0)}{0!} + \frac{f^{(1)}(x_0)}{1!}(x - x_0) + \frac{f^{(2)}(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n + \cdots,$$

a sum that keeps on going and going. The Taylor series can also be written in closed form, by using sigma notation, as

$$P_\infty(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n. \qquad \text{(closed form)}$$

The Maclaurin series for $y = f(x)$ is just the Taylor series for $y = f(x)$ at $x_0 = 0$.

[1] Here we are assuming that the derivatives $y = f^{(n)}(x)$ exist for each $x$ in the interval $I$ and for each $n \in \mathbb{N} \equiv \{1, 2, 3, 4, 5, \dots\}$.
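The closed form in Definition 1 is easy to evaluate numerically. Below is a minimal sketch (Python, standard library only; the helper name `maclaurin_poly` is ours for illustration and is not part of the handout) that evaluates the Maclaurin polynomial $p_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(0)}{n!} x^n$ for $f(x) = e^x$, where $f^{(n)}(0) = 1$ for every $n$, and compares $p_N(1)$ with $e$.

```python
import math

# Illustrative helper, not from the handout:
# p_N(x) = sum_{n=0}^{N} f^{(n)}(0)/n! * x^n, given the derivative
# values f^{(0)}(0), ..., f^{(N)}(0) as a list.
def maclaurin_poly(derivs_at_zero, x):
    return sum(d / math.factorial(n) * x**n for n, d in enumerate(derivs_at_zero))

x = 1.0
for N in (2, 4, 8, 12):
    # For f(x) = e^x, every derivative at 0 equals 1, so c_n = 1/n!.
    p_N = maclaurin_poly([1.0] * (N + 1), x)
    print(f"N = {N:2d}:  p_N({x}) = {p_N:.12f}   error = {abs(math.exp(x) - p_N):.2e}")
```

The error shrinks rapidly as $N$ grows; making that observation precise is exactly the job of the remainder term $R_N(x)$ discussed below.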
Big Questions 3. For what values of $x$ does the power (a.k.a. Taylor) series

$$P_\infty(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n \qquad (1)$$

converge? (Usually the Root or Ratio Test helps us out with this question.) And if the power/Taylor series in formula (1) does indeed converge at a point $x$, does the series converge to what we would want it to converge to, i.e., does

$$f(x) = P_\infty(x)\,? \qquad (2)$$

Question (2) is going to take some thought.

Definition 4. The $N$th-order Remainder term for $y = f(x)$ at $x_0$ is

$$R_N(x) \overset{\text{def}}{=} f(x) - P_N(x),$$

where $y = P_N(x)$ is the $N$th-order Taylor polynomial for $y = f(x)$ at $x_0$. So

$$f(x) = P_N(x) + R_N(x), \qquad (3)$$

that is, $f(x) \approx P_N(x)$ within an error of $R_N(x)$.

We often think of all this as

$$f(x) \approx \sum_{n=0}^{N} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n, \qquad \text{a finite sum; the sum stops at } N.$$

We would LIKE TO HAVE THAT

$$f(x) \overset{??}{=} \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n, \qquad \text{the sum keeps on going and going.}$$

In other notation: $f(x) \approx P_N(x)$, and the question is whether $f(x) \overset{??}{=} P_\infty(x)$, where $y = P_\infty(x)$ is the Taylor series of $y = f(x)$ at $x_0$.

Well, let's think about what is needed for $f(x) \overset{??}{=} P_\infty(x)$, i.e., for $f$ to equal its Taylor series.

Notice 5. Taking the $\lim_{N \to \infty}$ of both sides in equation (3), we see that

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n \qquad \text{(the sum keeps on going and going)}$$

if and only if $\lim_{N \to \infty} R_N(x) = 0$.

Recall 6. $\lim_{N \to \infty} R_N(x) = 0$ if and only if $\lim_{N \to \infty} |R_N(x)| = 0$.

So 7. If

$$\lim_{N \to \infty} |R_N(x)| = 0, \qquad (4)$$

then

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n.$$

So we basically want to show that (4) holds true. How to do this? Well, this is where Mr. Taylor comes to the rescue! [2]

Taylor's Remainder Theorem

Version 1: for a fixed point $x \in I$ and a fixed $N \in \mathbb{N}$. There exists $c$ between $x$ and $x_0$ so that

$$R_N(x) \overset{\text{def}}{=} f(x) - P_N(x) \overset{\text{theorem}}{=} \frac{f^{(N+1)}(c)}{(N+1)!}(x - x_0)^{N+1}. \qquad (5)$$

So either $x \le c \le x_0$ or $x_0 \le c \le x$. So we do not know exactly what $c$ is, but at least we know that $c$ is between $x$ and $x_0$, and so $c \in I$.
Remark: This is a Big Theorem by Taylor. See the book for the proof. The proof uses the Mean Value Theorem.
Note that formula (5) implies that

$$|R_N(x)| = \frac{\left|f^{(N+1)}(c)\right|}{(N+1)!}\,|x - x_0|^{N+1}. \qquad (6)$$

Version 2: for the whole interval $I$ and a fixed $N \in \mathbb{N}$. [3] Assume we can find $M$ so that the maximum of $\left|f^{(N+1)}(x)\right|$ on the interval $I$ is at most $M$, i.e.,

$$\max_{c \in I} \left| f^{(N+1)}(c) \right| \le M.$$

Then

$$|R_N(x)| \le \frac{M}{(N+1)!}\,|x - x_0|^{N+1} \qquad (7)$$

for each $x \in I$.
Remark: This follows from formula (6).

Version 3: for the whole interval $I$ and all $N \in \mathbb{N}$. [4] Now assume that we can find a sequence $\{M_N\}_{N=1}^{\infty}$ so that

$$\max_{c \in I} \left| f^{(N+1)}(c) \right| \le M_N$$

for each $N \in \mathbb{N}$, and also so that

$$\lim_{N \to \infty} \frac{M_N}{(N+1)!}\,|x - x_0|^{N+1} = 0$$

for each $x \in I$. Then, by formula (7) and the Squeeze Theorem,

$$\lim_{N \to \infty} |R_N(x)| = 0$$

for each $x \in I$. Thus, by So 7,

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n$$

for each $x \in I$.

[2] According to Mr. Taylor, his Remainder Theorem (see above) was motivated by coffeehouse conversations about works of Newton on planetary motion and works of Halley (of Halley's comet) on roots of polynomials.
[3] Here we assume that the $(N+1)$st derivative of $y = f(x)$, i.e. $y = f^{(N+1)}(x)$, exists for each $x \in I$.
[4] Here we assume that $y = f^{(N)}(x)$ exists for each $x \in I$ and each $N \in \mathbb{N}$.
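To see Version 2 in action, consider $f(x) = \sin x$ about $x_0 = 0$: every derivative of sine is bounded by $1$, so we may take $M = 1$ for every $N$ (and hence $M_N = 1$ in Version 3). The sketch below (Python, standard library only; the helper name `sin_taylor_poly` is ours for illustration, not part of the handout) compares the actual remainder $|R_N(x)| = |\sin x - P_N(x)|$ with the bound $\frac{M}{(N+1)!}|x - x_0|^{N+1}$ from formula (7).

```python
import math

# Illustrative helper, not from the handout:
# Nth-order Maclaurin polynomial of sin. The derivatives of sin at 0
# cycle through 0, 1, 0, -1, so only odd powers survive (see the table above).
def sin_taylor_poly(x, N):
    cycle = [0.0, 1.0, 0.0, -1.0]
    return sum(cycle[n % 4] / math.factorial(n) * x**n for n in range(N + 1))

x, x0, M = 2.0, 0.0, 1.0          # |sin^{(N+1)}(c)| <= 1 for every c and every N
for N in (1, 3, 5, 7, 11):
    remainder = abs(math.sin(x) - sin_taylor_poly(x, N))          # |R_N(x)|
    bound = M / math.factorial(N + 1) * abs(x - x0) ** (N + 1)    # M/(N+1)! |x - x0|^{N+1}
    print(f"N = {N:2d}:  |R_N(x)| = {remainder:.3e}  <=  bound = {bound:.3e}")
```

Both columns tend to $0$ as $N$ grows, which is the Squeeze Theorem argument of Version 3 made concrete; this is why $\sin x$ (like $e^x$ and $\cos x$) equals its Maclaurin series for every real $x$.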