Term by Term Integration and Differentiation
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Ch. 15 Power Series, Taylor Series
Seoul National University, Department of Naval Architecture and Ocean Engineering, 서유택, December 2017.
※ This lecture material is partially edited from materials prepared by Professors 이규열, 장범선, and 노명일.

15.1 Sequences, Series, Convergence Tests

Sequences: obtained by assigning to each positive integer n a number z_n, called a term; the sequence is written z_1, z_2, ... or briefly {z_n}. A real sequence is a sequence whose terms are real.

Convergence. A convergent sequence is a sequence that has a limit c, written lim_{n→∞} z_n = c, or simply z_n → c. For a convergent complex sequence this means: for every ε > 0 we can find N such that |z_n − c| < ε for all n > N, i.e., all terms z_n with n > N lie in the open disk of radius ε and center c. A divergent sequence is a sequence that does not converge.

Ex. 1 (Convergent and Divergent Sequences). The sequence {i^n/n} = i, −1/2, −i/3, 1/4, ... is convergent with limit 0. The sequence {i^n} = i, −1, −i, 1, ... is divergent. The sequence {z_n} with z_n = (1 + i)^n is divergent.

Theorem 1 (Sequences of the Real and the Imaginary Parts). A sequence z_1, z_2, z_3, ... of complex numbers z_n = x_n + i y_n converges to c = a + ib if and only if the sequence of real parts x_1, x_2, ... converges to a and the sequence of imaginary parts y_1, y_2, ... converges to b.
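A quick numerical look at Example 1 (this check is my own illustration, not part of the slides): the modulus of i^n/n shrinks toward 0, while i^n keeps cycling through i, −1, −i, 1 and never settles.

```python
# Convergent vs. divergent complex sequences from Example 1 (illustrative sketch).
for n in (1, 2, 3, 4, 100, 1001):
    z_conv = 1j**n / n   # z_n = i^n / n  -> limit 0
    z_div = 1j**n        # z_n = i^n      -> cycles, no limit
    print(n, abs(z_conv), z_div)
```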
Formal Power Series - Wikipedia, the Free Encyclopedia
http://en.wikipedia.org/wiki/Formal_power_series
From Wikipedia, the free encyclopedia

In mathematics, formal power series are a generalization of polynomials as formal objects, where the number of terms is allowed to be infinite; this implies giving up the possibility to substitute arbitrary values for indeterminates. This perspective contrasts with that of power series, whose variables designate numerical values, and which series therefore only have a definite value if convergence can be established. Formal power series are often used merely to represent the whole collection of their coefficients. In combinatorics, they provide representations of numerical sequences and of multisets, and for instance allow giving concise expressions for recursively defined sequences regardless of whether the recursion can be explicitly solved; this is known as the method of generating functions.

Contents
1 Introduction
2 The ring of formal power series
  2.1 Definition of the formal power series ring
    2.1.1 Ring structure
    2.1.2 Topological structure
    2.1.3 Alternative topologies
  2.2 Universal property
3 Operations on formal power series
  3.1 Multiplying series
  3.2 Power series raised to powers
  3.3 Inverting series
  3.4 Dividing series
  3.5 Extracting coefficients
  3.6 Composition of series
    3.6.1 Example
  3.7 Composition inverse
  3.8 Formal differentiation of series
4 Properties
  4.1 Algebraic properties of the formal power series ring
  4.2 Topological properties of the formal power series
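The operations listed under section 3 of the article (multiplying series, formal differentiation) reduce to simple manipulations of coefficient lists. Below is a minimal sketch, assuming truncated series stored as Python lists where coeffs[k] is the coefficient of x^k; the function names are my own, not the article's.

```python
def multiply(a, b):
    """Cauchy product: the coefficient of x^m is sum_{k<=m} a[k] * b[m-k]."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

def differentiate(a):
    """Formal term-by-term derivative: sum a_n x^n  ->  sum n a_n x^(n-1)."""
    return [n * a[n] for n in range(1, len(a))]

# Example: 1/(1-x) = 1 + x + x^2 + ..., truncated to degree 5.
geom = [1] * 6
print(multiply(geom, geom))   # [1, 2, 3, 4, 5, 6], i.e. 1/(1-x)^2
print(differentiate(geom))    # [1, 2, 3, 4, 5]
```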
Approximating Integrals Using Taylor Polynomials
Seunghee Ye, Ma 8: Week 7, Nov 10

Week 7 Summary. This week, we will learn how we can approximate integrals using Taylor series and numerical methods.

Topics
1 Approximating Integrals using Taylor Polynomials
  1.1 Definitions
  1.2 Examples
  1.3 Approximating Integrals
2 Numerical Integration

1 Approximating Integrals using Taylor Polynomials

1.1 Definitions

When we first defined the derivative, recall that it was supposed to be the "instantaneous rate of change" of a function f(x) at a given point c. In other words, f' gives us a linear approximation of f(x) near c: for small values of ε ∈ R, we have

  f(c + ε) ≈ f(c) + ε f'(c)

But if f(x) has higher-order derivatives, why stop with a linear approximation? Taylor series take this idea of linear approximation and extend it to higher-order derivatives, giving us a better approximation of f(x) near c.

Definition (Taylor Polynomial and Taylor Series). Let f(x) be a C^n function, i.e., f is n-times continuously differentiable. Then the n-th order Taylor polynomial of f(x) about c is:

  T_n(f)(x) = Σ_{k=0}^{n} [f^{(k)}(c) / k!] (x − c)^k

The n-th order remainder of f(x) is:

  R_n(f)(x) = f(x) − T_n(f)(x)

If f(x) is C^∞, then the Taylor series of f(x) about c is:

  T_∞(f)(x) = Σ_{k=0}^{∞} [f^{(k)}(c) / k!] (x − c)^k

Note that the first-order Taylor polynomial of f(x) is precisely the linear approximation we wrote down in the beginning.
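A small sketch of the definition above, using sympy to build T_n(f) about c and compare it with f near c (my own illustration; sympy is not used in the notes).

```python
import sympy as sp

x = sp.symbols('x')

def taylor_poly(f, c, n):
    """T_n(f)(x) = sum_{k=0}^{n} f^(k)(c)/k! * (x - c)^k."""
    return sum(sp.diff(f, x, k).subs(x, c) / sp.factorial(k) * (x - c)**k
               for k in range(n + 1))

f = sp.sin(x)
T3 = taylor_poly(f, 0, 3)                              # x - x**3/6
print(sp.expand(T3))
print(float(f.subs(x, 0.2)), float(T3.subs(x, 0.2)))   # nearly equal close to c = 0
```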
7. Properties of Uniformly Convergent Sequences
Here a relation between continuity, differentiability, and Riemann integrability of the sum of a functional series or the limit of a functional sequence and uniform convergence is studied.

7.1. Uniform convergence and continuity.

Theorem 7.1 (Continuity of the sum of a series). The sum of the series Σ u_n(x) of terms continuous on D ⊂ R^N is continuous if the series converges uniformly on D.

Let S_n(x) = u_1(x) + u_2(x) + ··· + u_n(x) be the sequence of partial sums. It converges to some function S(x) because every uniformly convergent series converges pointwise. Continuity of S at a point x means (by definition)

  lim_{y→x} S(y) = S(x)

Fix a number ε > 0. Then one can find a number δ such that

  |S(x) − S(y)| < ε whenever 0 < |x − y| < δ

In other words, the values S(y) can get arbitrarily close to S(x) and stay arbitrarily close to it for all points y ≠ x that are sufficiently close to x. Let us show that this condition follows from the hypotheses. Owing to the uniform convergence of the series, given ε > 0, one can find an integer m such that

  |S(x) − S_n(x)| ≤ sup_D |S − S_n| ≤ ε/3,  for all x ∈ D and all n ≥ m

Note that m is independent of x. By continuity of S_n (as a finite sum of continuous functions), for n ≥ m, one can also find a number δ > 0 such that

  |S_n(x) − S_n(y)| < ε/3 whenever 0 < |x − y| < δ

So, given ε > 0, the integer m is found.
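A numerical illustration of the sup estimate used above (my own example, not the text's): for S(x) = Σ cos(kx)/k², the quantity sup |S − S_n| can be made as small as we like by taking n large, which is exactly the uniform-convergence bound the proof exploits.

```python
import numpy as np

xs = np.linspace(0.0, 2 * np.pi, 2001)

def partial_sum(n):
    return sum(np.cos(k * xs) / k**2 for k in range(1, n + 1))

S = partial_sum(2000)                              # high-order proxy for the sum S(x)
for n in (5, 20, 100):
    print(n, np.max(np.abs(S - partial_sum(n))))   # sup norm of S - S_n shrinks
```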
A Quotient Rule Integration by Parts Formula. Jennifer Switkes ([email protected]), California State Polytechnic University, Pomona, CA 91768
In a recent calculus course, I introduced the technique of Integration by Parts as an integration rule corresponding to the Product Rule for differentiation. I showed my students the standard derivation of the Integration by Parts formula as presented in [1]:

By the Product Rule, if f(x) and g(x) are differentiable functions, then

  d/dx [f(x) g(x)] = f(x) g'(x) + g(x) f'(x).

Integrating on both sides of this equation,

  ∫ [f(x) g'(x) + g(x) f'(x)] dx = f(x) g(x),

which may be rearranged to obtain

  ∫ f(x) g'(x) dx = f(x) g(x) − ∫ g(x) f'(x) dx.

Letting U = f(x) and V = g(x) and observing that dU = f'(x) dx and dV = g'(x) dx, we obtain the familiar Integration by Parts formula

  ∫ U dV = U V − ∫ V dU.    (1)

My student Victor asked if we could do a similar thing with the Quotient Rule. While the other students thought this was a crazy idea, I was intrigued. Below, I derive a Quotient Rule Integration by Parts formula, apply the resulting integration formula to an example, and discuss reasons why this formula does not appear in calculus texts.

By the Quotient Rule, if f(x) and g(x) are differentiable functions, then

  d/dx [f(x)/g(x)] = [g(x) f'(x) − f(x) g'(x)] / [g(x)]².

Integrating both sides of this equation, we get

  f(x)/g(x) = ∫ [g(x) f'(x) − f(x) g'(x)] / [g(x)]² dx.
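As a sanity check on the identity the derivation starts from (my own example, not from the article), integrating [g f' − f g']/g² symbolically does recover f/g for a concrete choice of f and g:

```python
import sympy as sp

x = sp.symbols('x')
f = x
g = x**2 + 1

integrand = (g * sp.diff(f, x) - f * sp.diff(g, x)) / g**2   # (1 - x^2)/(x^2 + 1)^2
recovered = sp.integrate(integrand, x)                       # x/(x^2 + 1)
print(sp.simplify(recovered - f / g))                        # 0, up to a constant
```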
Operations on Power Series Related to Taylor Series
In this problem, we perform elementary operations on Taylor series – term-by-term differentiation and integration – to obtain new examples of power series for which we know their sum. Suppose that a function f has a power series representation of the form:

  f(x) = a_0 + a_1(x − c) + a_2(x − c)^2 + ··· = Σ_{n=0}^{∞} a_n (x − c)^n

convergent on the interval (c − R, c + R) for some R. The results we use in this example are:

• (Differentiation) Given f as above, f'(x) has a power series expansion obtained by differentiating each term in the expansion of f(x):

  f'(x) = a_1 + 2 a_2 (x − c) + 3 a_3 (x − c)^2 + ··· = Σ_{n=1}^{∞} n a_n (x − c)^{n−1}

• (Integration) Given f as above, ∫ f(x) dx has a power series expansion obtained by integrating each term in the expansion of f(x):

  ∫ f(x) dx = C + a_0 (x − c) + (a_1/2)(x − c)^2 + (a_2/3)(x − c)^3 + ··· = C + Σ_{n=0}^{∞} [a_n/(n + 1)] (x − c)^{n+1}

for some constant C depending on the choice of antiderivative of f.

Questions:

1. Find a power series representation for the function f(x) = arctan(5x). (Note: arctan x is the inverse function to tan x.)

2. Use power series to approximate ∫_0^1 sin(x^2) dx. (Note: sin(x^2) is a function whose antiderivative is not an elementary function.)

Solution: For question (1), we know that arctan x has a simple derivative: 1/(1 + x^2), which then has a power series representation similar to that of 1/(1 − x), where we substitute −x^2 for x.
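For question (2), term-by-term integration gives a quick numerical answer. A short sketch (my own code, not the official solution): substituting x² into the sine series and integrating each term over [0, 1] gives Σ (−1)^k / [(4k + 3)(2k + 1)!], which converges very quickly.

```python
from math import factorial

def integral_sin_x2(terms: int) -> float:
    """Partial sums of sum_k (-1)^k / ((4k+3)(2k+1)!) approximating the integral of sin(x^2) on [0, 1]."""
    return sum((-1)**k / ((4 * k + 3) * factorial(2 * k + 1)) for k in range(terms))

for terms in (1, 2, 3, 5, 8):
    print(terms, integral_sin_x2(terms))   # approaches ~0.3102683
```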
Derivative of Power Series and Complex Exponential
LECTURE 4: DERIVATIVE OF POWER SERIES AND COMPLEX EXPONENTIAL

The reason for dealing with power series is that they provide examples of analytic functions.

Theorem 1. If Σ_{n=0}^{∞} a_n z^n has radius of convergence R > 0, then the function F(z) = Σ_{n=0}^{∞} a_n z^n is differentiable on S = {z ∈ C : |z| < R}, and the derivative is f(z) = Σ_{n=0}^{∞} n a_n z^{n−1}.

Proof. We will show that |[F(z + h) − F(z)]/h − f(z)| → 0 as h → 0 (in C), whenever |z| < R. Using the binomial theorem (z + h)^n = Σ_{k=0}^{n} C(n, k) h^k z^{n−k}, we get

  [F(z + h) − F(z)]/h − f(z)
    = Σ_{n=0}^{∞} (a_n/h) [(z + h)^n − z^n − h n z^{n−1}]
    = Σ_{n=0}^{∞} (a_n/h) (Σ_{k=2}^{n} C(n, k) h^k z^{n−k})
    = Σ_{n=0}^{∞} a_n h (Σ_{k=2}^{n} C(n, k) h^{k−2} z^{n−k})
    = Σ_{n=0}^{∞} a_n h (Σ_{j=0}^{n−2} C(n, j+2) h^j z^{n−2−j})   (by putting j = k − 2).

By using the easily verifiable fact that C(n, j+2) ≤ n(n − 1) C(n−2, j), we obtain

  |[F(z + h) − F(z)]/h − f(z)| ≤ |h| Σ_{n=0}^{∞} n(n − 1) |a_n| (Σ_{j=0}^{n−2} C(n−2, j) |h|^j |z|^{n−2−j})
    = |h| Σ_{n=0}^{∞} n(n − 1) |a_n| (|z| + |h|)^{n−2}.

We already know that the series Σ_{n=0}^{∞} n(n − 1) |a_n| |z|^{n−2} converges for |z| < R. Now, for |z| < R and h → 0 we have |z| + |h| < R eventually.
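A numerical spot check of Theorem 1 (my own illustration, not part of the lecture): truncate the exponential series F(z) = Σ z^n/n!, differentiate it termwise, and compare with the difference quotient at a point inside the disk of convergence; the discrepancy shrinks with |h|.

```python
from math import factorial

N = 40
a = [1 / factorial(n) for n in range(N)]                 # coefficients of exp(z)

def F(z):
    return sum(a[n] * z**n for n in range(N))

def f(z):
    return sum(n * a[n] * z**(n - 1) for n in range(1, N))   # termwise derivative

z = 0.3 + 0.2j
for h in (1e-2, 1e-4, 1e-6):
    print(h, abs((F(z + h) - F(z)) / h - f(z)))          # error -> 0 as h -> 0
```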
Uniform Convergence and Differentiation Theorem 6.3.1
Math 341 Lecture #29, §6.3: Uniform Convergence and Differentiation

We have seen that a pointwise converging sequence of continuous functions need not have a continuous limit function; we needed uniform convergence to get continuity of the limit function. What can we say about the differentiability of the limit function of a pointwise converging sequence of differentiable functions?

The sequence of differentiable functions h_n(x) = x^{1+1/(2n−1)}, x ∈ [−1, 1], converges pointwise to the nondifferentiable h(x) = |x|; we will need to assume more about the pointwise converging sequence of differentiable functions to ensure that the limit function is differentiable.

Theorem 6.3.1 (Differentiable Limit Theorem). Let f_n → f pointwise on the closed interval [a, b], and assume that each f_n is differentiable. If (f_n') converges uniformly on [a, b] to a function g, then f is differentiable and f' = g.

Proof. Let ε > 0 and fix c ∈ [a, b]. Our goal is to show that f'(c) exists and equals g(c). To this end, we will show the existence of δ > 0 such that for all 0 < |x − c| < δ, with x ∈ [a, b], we have

  |(f(x) − f(c))/(x − c) − g(c)| < ε

which implies that f'(c) = lim_{x→c} (f(x) − f(c))/(x − c) exists and is equal to g(c). The way forward is to replace (f(x) − f(c))/(x − c) − g(c) with expressions we can hopefully control:

  |(f(x) − f(c))/(x − c) − g(c)|
    = |(f(x) − f(c))/(x − c) − (f_n(x) − f_n(c))/(x − c) + (f_n(x) − f_n(c))/(x − c) − f_n'(c) + f_n'(c) − g(c)|
    ≤ |(f(x) − f(c))/(x − c) − (f_n(x) − f_n(c))/(x − c)| + |(f_n(x) − f_n(c))/(x − c) − f_n'(c)| + |f_n'(c) − g(c)|.

The second and third expressions we can control respectively by the differentiability of f_n and the uniform convergence of f_n' to g.
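The sequence h_n from the opening paragraph in numbers (my own check, not part of the lecture): for real x the odd-root convention gives x^{1+1/(2n−1)} = |x|^{2n/(2n−1)}, and the sup distance to |x| on [−1, 1] goes to 0 even though the limit fails to be differentiable at 0.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 401)

def h(n, x):
    # x^(1 + 1/(2n-1)) for real x, written via |x| to avoid complex powers
    return np.abs(x) ** (1 + 1 / (2 * n - 1))

for n in (1, 5, 50):
    print(n, np.max(np.abs(h(n, xs) - np.abs(xs))))   # sup distance to |x| shrinks
```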
G: Uniform Convergence of Fourier Series
From previous work on the prototypical problem (and other problems)

  u_t = D u_xx,        0 < x < l, t > 0
  u(0, t) = 0 = u(l, t),   t > 0                      (1)
  u(x, 0) = f(x),      0 < x < l

we developed a (formal) series solution

  u(x, t) = Σ_{n=1}^{∞} u_n(x, t) = Σ_{n=1}^{∞} b_n e^{−n²π²Dt/l²} sin(nπx/l),   (2)

with b_n = (2/l) ∫_0^l f(y) sin(nπy/l) dy. These are the Fourier sine coefficients for the initial data function f(x) on [0, l]. We have no real way to check that the series representation (2) is a solution to (1) because we do not know we can interchange differentiation and infinite summation. We have only assumed that up to now. In actuality, (2) makes sense as a solution to (1) if the series is uniformly convergent on [0, l] (and its derivatives also converge uniformly). So we first discuss conditions for an infinite series to be differentiated (and integrated) term-by-term. This can be done if the infinite series and its derivatives converge uniformly. We list some results here that will establish this, but you should consult Appendix B on calculus facts, and review definitions of convergence of a series of numbers, absolute convergence of such a series, and uniform convergence of sequences and series of functions. Proofs of the following results can be found in any reasonable real analysis or advanced calculus textbook.

0.1 Differentiation and integration of infinite series. Let I = [a, b] be any real interval.
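Returning to the formal solution (2): a small numerical sketch (my own code; the initial data f(x) = x(l − x) is my choice, not the text's) that computes the Fourier sine coefficients b_n by quadrature and evaluates the truncated series.

```python
import numpy as np

l, D, N, M = 1.0, 0.1, 50, 2000
y = (np.arange(M) + 0.5) * (l / M)                 # midpoint grid on (0, l)
f = y * (l - y)                                    # assumed initial data

n = np.arange(1, N + 1)
b = (2.0 / l) * np.sum(f * np.sin(n[:, None] * np.pi * y / l), axis=1) * (l / M)

def u(x, t):
    return np.sum(b * np.exp(-(n * np.pi)**2 * D * t / l**2) * np.sin(n * np.pi * x / l))

print(u(0.5, 0.0), 0.5 * (l - 0.5))   # at t = 0 the series reproduces f(0.5) = 0.25
print(u(0.5, 1.0))                    # the solution decays as t grows
```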
Calculus Terminology
AP Calculus BC: Calculus Terminology

Absolute Convergence, Absolute Maximum, Absolute Minimum, Absolutely Convergent, Acceleration, Alternating Series, Alternating Series Remainder, Alternating Series Test, Analytic Methods, Annulus, Antiderivative of a Function, Approximation by Differentials, Arc Length of a Curve, Area below a Curve, Area between Curves, Area of an Ellipse, Area of a Parabolic Segment, Area under a Curve, Area Using Parametric Equations, Area Using Polar Coordinates, Asymptote, Average Rate of Change, Average Value of a Function, Axis of Rotation, Boundary Value Problem, Bounded Function, Bounded Sequence, Bounds of Integration, Calculus, Cartesian Form, Cavalieri's Principle, Center of Mass Formula, Centroid, Chain Rule, Comparison Test, Concave, Concave Down, Concave Up, Conditional Convergence, Constant Term, Continued Sum, Continuous Function, Continuously Differentiable Function, Converge, Converge Absolutely, Converge Conditionally, Convergence Tests, Convergent Sequence, Convergent Series, Critical Number, Critical Point, Critical Value, Curly d, Curve, Curve Sketching, Cusp, Cylindrical Shell Method, Decreasing Function, Definite Integral, Definite Integral Rules, Degenerate, Del Operator, Deleted Neighborhood, Derivative, Derivative of a Power Series, Derivative Rules, Difference Quotient, Differentiable, Differential, Divergent Series, e, Ellipsoid, End Behavior, Essential Discontinuity, Explicit Differentiation, Explicit Function, Exponential Decay, Function Operations, Fundamental Theorem of Calculus, GLB, Global Maximum, Global Minimum, Golden Spiral, Graphic Methods, Greatest Lower Bound
20-Finding Taylor Coefficients
Taylor Coefficients

Learning goal: Let's generalize the process of finding the coefficients of a Taylor polynomial. Let's find a general formula for the coefficients of a Taylor polynomial. (Students complete worksheet series03 (Taylor coefficients).)

What we have discovered is that if we have a polynomial C_0 + C_1 x + C_2 x^2 + ··· that we are trying to match values and derivatives to a function, then when we differentiate it a bunch of times, we get C_n n! + C_{n+1}(n+1)! x + ···. Plugging in x = 0, everything except C_n n! goes away. So to match the nth derivative, we must have C_n = f^{(n)}(0)/n!.

If we want to match the values and derivatives at some point other than zero, we will use the polynomial C_0 + C_1(x − a) + C_2(x − a)^2 + ···. Then when we differentiate, we get C_n n! + C_{n+1}(n+1)!(x − a) + ···. Now, plugging in x = a makes everything except C_n n! go away, and we get C_n = f^{(n)}(a)/n!.

So we can generate a polynomial of any degree (well, as long as the function has enough derivatives!) that will match the function and its derivatives to that degree at a particular point a. We can also extend to create a power series called the Taylor series (or Maclaurin series if a is specifically 0). We have natural questions:

• For what x does the series converge?
• What does the series converge to? Is it, hopefully, f(x)?
• If the series converges to the function, we can use parts of the series (Taylor polynomials) to approximate the function, at least nearby a.
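A short check of the formula C_n = f^{(n)}(a)/n! derived above (my own illustration, not part of the worksheet): compute the coefficients of ln x about a = 1 and compare with sympy's built-in expansion.

```python
import sympy as sp

x = sp.symbols('x')
f, a = sp.log(x), 1

coeffs = [sp.diff(f, x, n).subs(x, a) / sp.factorial(n) for n in range(6)]
print(coeffs)                      # [0, 1, -1/2, 1/3, -1/4, 1/5]
print(sp.series(f, x, a, 6))       # the same coefficients on powers of (x - 1)
```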
Series of Functions Theorem 6.4.2
Math 341 Lecture #30, §6.4: Series of Functions

Recall that we constructed the continuous nowhere differentiable function from Section 5.4 by using a series. We will develop the tools necessary to show that this pointwise convergent series is indeed a continuous function.

Definition 6.4.1. Let f_n, n ∈ N, and f be functions on A ⊆ R. The infinite series

  Σ_{n=1}^{∞} f_n(x) = f_1(x) + f_2(x) + f_3(x) + ···

converges pointwise on A to f(x) if the sequence of partial sums, s_k(x) = f_1(x) + f_2(x) + ··· + f_k(x), converges pointwise to f(x). The series converges uniformly on A to f if the sequence s_k(x) converges uniformly on A to f(x).

Uniform convergence of a series on A implies pointwise convergence of the series on A. For a pointwise or uniformly convergent series we write

  f = Σ_{n=1}^{∞} f_n   or   f(x) = Σ_{n=1}^{∞} f_n(x).

When the functions f_n are continuous on A, each partial sum s_k(x) is continuous on A by the Algebraic Continuity Theorem (Theorem 4.3.4). We can therefore apply the theory for uniformly convergent sequences to series.

Theorem 6.4.2 (Term-by-term Continuity Theorem). Let f_n be continuous functions on A ⊆ R. If Σ_{n=1}^{∞} f_n converges uniformly to f on A, then f is continuous on A.

Proof. We apply Theorem 6.2.6 to the partial sums s_k = f_1 + f_2 + ··· + f_k.

When the functions f_n are differentiable on a closed interval [a, b], we have that each partial sum s_k is differentiable on [a, b] as well.
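A numeric illustration with a uniformly convergent series of continuous terms (my own choice of series, not the one from Section 5.4): Σ (1/2)^n cos(3^n πx) is dominated termwise by (1/2)^n, so its partial sums converge uniformly and the Term-by-term Continuity Theorem guarantees the limit is continuous.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 5001)

def s(k):
    """Partial sum s_k(x) = sum_{n=0}^{k} (1/2)^n cos(3^n pi x)."""
    return sum(0.5**n * np.cos(3**n * np.pi * xs) for n in range(k + 1))

s_ref = s(40)                                     # proxy for the full sum
for k in (5, 10, 20):
    tail_bound = 2 * 0.5**(k + 1)                 # geometric tail sum_{n>k} (1/2)^n
    print(k, np.max(np.abs(s_ref - s(k))), tail_bound)
```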