Math 231E, Lecture 27. Alternating Series


1 Definition

Definition 1.1. An alternating sequence/series is a sequence/series whose terms alternate in sign. Examples of alternating series include

$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \frac{1}{7} - \cdots, \qquad 1 - \frac{1}{2} + \frac{1}{4} - \frac{1}{8} + \frac{1}{16} - \cdots$$

We continue the convention that $a_n$ is the $n$th term of the sequence, and we will always write $b_n = |a_n|$. Then if $a_n$ is an alternating sequence, either

$$a_n = (-1)^n b_n \quad \text{or} \quad a_n = (-1)^{n+1} b_n.$$

2 Alternating Series Test

We have a nice theorem to test for convergence of an alternating series:

Theorem 2.1 (Alternating Series Test). Let $\sum_n a_n$ be an alternating series, and let $b_n = |a_n|$. If

1. $b_{n+1} \le b_n$ for all $n$, and
2. $\lim_{n \to \infty} b_n = 0$,

then $\sum_n a_n$ converges.

Note! Remember how we said you cannot deduce that a series converges just because its terms go to zero? If the series is alternating and its terms are decreasing in absolute value, then you can!

Proof. Let us say that $a_1 \ge 0$ (the proof is similar if the first term is negative). We have

$$s_2 = b_1 - b_2, \qquad s_4 = b_1 - b_2 + b_3 - b_4 = s_2 + (b_3 - b_4) \ge s_2.$$

More generally,

$$s_{2n} = s_{2n-2} + b_{2n-1} - b_{2n} \ge s_{2n-2},$$

so $s_{2n}$ is a nondecreasing sequence. Also notice that

$$s_{2n} = b_1 - (b_2 - b_3) - (b_4 - b_5) - \cdots - (b_{2n-2} - b_{2n-1}) - b_{2n},$$

and every term in a pair of parentheses is nonnegative, so $s_{2n} \le b_1$. Therefore the sequence $s_{2n}$ is nondecreasing and bounded above, and by the Monotone Convergence Theorem it has a limit. Let us write

$$\lim_{n \to \infty} s_{2n} = S.$$

We then compute

$$\lim_{n \to \infty} s_{2n+1} = \lim_{n \to \infty} (s_{2n} + b_{2n+1}) = \lim_{n \to \infty} s_{2n} + \lim_{n \to \infty} b_{2n+1} = S + 0 = S.$$

Since both the even and the odd partial sums converge to $S$, the whole sequence $s_n$ converges to $S$, and we are done.

Remark 2.2. There is one way in which we can weaken the assumptions of the AST slightly. If it turns out that there exists an $N$ such that $b_{n+1} \le b_n$ for all $n > N$, then this is still good enough to obtain convergence.

Example 2.3. Let us consider the Alternating Harmonic Series

$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}.$$

Here $b_n = 1/n$, which is clearly decreasing, and $b_n \to 0$. So by the AST, this series converges. In fact, it converges to $\ln(2)$. (If you are interested to see how to compute this, see here.)

Example 2.4. Consider

$$\sum_{n=1}^{\infty} (-1)^n \frac{6n+7}{8n+9}.$$

This is an alternating series, but notice that

$$\lim_{n \to \infty} b_n = \frac{6}{8} = \frac{3}{4} \neq 0.$$

Since $b_n$ does not tend to zero, neither does $a_n$, and so the series diverges (by the Divergence Test).

Example 2.5. What about

$$\sum_{n=1}^{\infty} (-1)^n \frac{n^2}{2n^3 + 3}?$$

Here the limit of the terms is zero. But are the terms decreasing? Let us consider the function

$$f(x) = \frac{x^2}{2x^3 + 3},$$

which satisfies $f(n) = b_n$ and is defined and differentiable for $x > 0$. We compute

$$f'(x) = \frac{2x(2x^3 + 3) - x^2(6x^2)}{(2x^3 + 3)^2} = \frac{2x(3 - x^3)}{(2x^3 + 3)^2}.$$

Notice that we do not have $f'(x) < 0$ for all $x$, so $f$ is not decreasing on all of $(0, \infty)$. However, if $x > 0$ and $3 - x^3 < 0$, then $f'(x) < 0$; that is, $f'(x) < 0$ for $x > \sqrt[3]{3}$. Therefore the sequence $b_n$ is decreasing for $n \ge 2$. We also have $b_n \to 0$, so by Remark 2.2 this series converges.
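Example 2.3 is easy to check numerically. The sketch below is not part of the original notes; it is a minimal Python illustration (the helper name partial_sums is our own) that computes partial sums of the alternating harmonic series, watches the error against $\ln(2)$ shrink, and verifies that the limit is always trapped between consecutive partial sums, just as in the proof of Theorem 2.1.

```python
import math

def partial_sums(term, N):
    """Return [s_1, ..., s_N] for the series whose n-th term is term(n)."""
    sums, s = [], 0.0
    for n in range(1, N + 1):
        s += term(n)
        sums.append(s)
    return sums

# Alternating harmonic series: a_n = (-1)^(n+1)/n, so b_n = 1/n is decreasing and -> 0.
sums = partial_sums(lambda n: (-1) ** (n + 1) / n, 10_000)
target = math.log(2)  # the value the series converges to

for n in (1, 2, 10, 100, 10_000):
    print(f"s_{n} = {sums[n - 1]:.6f},  error = {abs(sums[n - 1] - target):.2e}")

# As in the proof of the AST, the sum is trapped between consecutive partial sums.
print(all(min(sums[k], sums[k + 1]) <= target <= max(sums[k], sums[k + 1])
          for k in range(len(sums) - 1)))  # True
```

The convergence is quite slow: the error after $n$ terms is roughly $1/(2n)$, which is consistent with the remainder bound $|R_n| \le b_{n+1} = 1/(n+1)$ of Theorem 4.1 below.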
3 Something to be careful about!

As mentioned above, we need the sequence $b_n$ to be decreasing, or at least eventually decreasing. Is that really necessary? As it turns out, it is. Consider the (admittedly weird) example where

$$b_{2n} = \frac{1}{n}, \qquad b_{2n+1} = \frac{1}{2^n},$$

and let $a_n = (-1)^n b_n$. Then this is an alternating sequence, and definitely

$$\lim_{n \to \infty} b_n = 0,$$

but just as clearly $b_n$ is not a decreasing sequence. The first few terms of $b_n$ are

$$1,\ 1,\ \frac{1}{2},\ \frac{1}{2},\ \frac{1}{4},\ \frac{1}{3},\ \frac{1}{8},\ \frac{1}{4},\ \frac{1}{16},\ \frac{1}{5},\ \ldots$$

and we can see from this pattern that it is not decreasing (for instance, $b_6 = 1/3 > 1/4 = b_5$).

So the sum we are considering is

$$-1 + 1 - \frac{1}{2} + \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{8} + \frac{1}{4} - \frac{1}{16} + \frac{1}{5} - \cdots$$

Let us compute some of the partial sums of this series:

$$s_2 = 0, \qquad s_4 = 0, \qquad s_6 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}, \qquad s_8 = s_6 + \frac{1}{4} - \frac{1}{8} = \frac{5}{24}.$$

More generally, let's just look at the even partial sums: we obtain

$$s_{2n} = s_{2(n-1)} + \frac{1}{n} - \frac{1}{2^{n-1}}.$$

It's not too hard to check that for $n \ge 5$,

$$\frac{1}{n} - \frac{1}{2^{n-1}} > \frac{1}{2n},$$

and therefore

$$s_{2n} > s_{2(n-1)} + \frac{1}{2n}.$$

Applying this repeatedly starting at $n = 5$, we obtain

$$s_{2n} > s_8 + \sum_{k=5}^{n} \frac{1}{2k}.$$

Now take the limit $n \to \infty$. The right-hand side of this inequality goes to $\infty$, since the harmonic series diverges, and so $s_{2n} \to \infty$. Therefore the sequence of partial sums does not converge, and the series diverges.

tl;dr: The decreasing condition is actually quite important.

4 Estimating alternating sums

Theorem 4.1. Consider a convergent alternating series

$$\sum_{n=1}^{\infty} a_n$$

satisfying the hypotheses of the Alternating Series Test, and call the sum $S$. Define the remainder $R_n = S - s_n$ as in the last lecture, so

$$R_n = \sum_{k=1}^{\infty} a_k - \sum_{k=1}^{n} a_k = \sum_{k=n+1}^{\infty} a_k.$$

Then

$$|R_n| \le b_{n+1} = |a_{n+1}|.$$

Example 4.2. Let us consider the alternating series

$$S = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}.$$

From the Alternating Series Test, this converges: $1/n!$ is both decreasing and converges to zero. Let's imagine that we want to compute this sum to within two decimal places, i.e. to have a remainder less than $1/100$. By Theorem 4.1 we just need the first omitted term to be less than $1/100$, and since $5! = 120 > 100$, every term from $1/5!$ onward is smaller than $1/100$, so we can stop there:

$$s_5 = 1 - 1 + \frac{1}{2} - \frac{1}{6} + \frac{1}{24} - \frac{1}{120} = \frac{44}{120} = 0.366666\ldots \qquad (1)$$

So how close is this? In fact,

$$S = \frac{1}{e}.$$

(Why is this?) We can then compute $S = 0.367879\ldots$, so we got the first two decimal places correct! Indeed, $|S - s_5| \approx 0.0012$, which is below the bound $b_6 = 1/6! \approx 0.0014$ from Theorem 4.1.
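As a sanity check on Theorem 4.1, here is a short Python sketch (again ours, not part of the notes) applied to Example 4.2: it prints each partial sum $s_N$ of $\sum_{n \ge 0} (-1)^n/n!$ together with the true error $|S - s_N|$ and the bound $b_{N+1} = 1/(N+1)!$.

```python
import math

# Remainder bound from Theorem 4.1, checked on Example 4.2:
#   S = sum_{n>=0} (-1)^n / n! = 1/e,  and  |S - s_N| <= b_{N+1} = 1/(N+1)!.
S = 1 / math.e
s = 0.0
for N in range(8):
    s += (-1) ** N / math.factorial(N)   # s is now the partial sum s_N
    error = abs(S - s)
    bound = 1 / math.factorial(N + 1)    # first omitted term
    print(f"N = {N}: s_N = {s:.8f}, |S - s_N| = {error:.2e} <= {bound:.2e}: {error <= bound}")
```

In particular, at $N = 5$ the error is about $1.2 \times 10^{-3}$, comfortably below the bound $1/6! \approx 1.4 \times 10^{-3}$, matching the computation in (1).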