
Lecture 37:

Terminology: Maclaurin polynomials of a function f(x) are the Taylor polynomials at a = 0. See Table 5, p. 278, for famous Maclaurin polynomials.

Recall simple properties of big O:
1. If f(x) = O(u(x)) as x → a, then Cf(x) = O(u(x)) as x → a, for any constant C.
2. If f(x) = O(u(x)) and g(x) = O(u(x)) as x → a, then f(x) ± g(x) = O(u(x)) as x → a (use the triangle inequality).
3. If f(x) = O((x − a)^k u(x)) as x → a, then f(x)/(x − a)^k = O(u(x)) as x → a. (Note that technically f(x)/(x − a)^k need not be defined at a; so f(x)/(x − a)^k = O(u(x)) should be taken to mean that there is a constant K > 0 s.t. |f(x)/(x − a)^k| < K|u(x)| for all x ≠ a in an open interval containing a.)
4. If f(x) = O(u(x)) as x → a and u(x) = O(w(x)) as x → a, then f(x) = O(w(x)) as x → a.

Recall Theorem 13: If f(x) = Q(x) + O((x − a)^(n+1)) as x → a, for some polynomial Q(x) of degree at most n, then Q(x) = P_n(x), the n-th Taylor polynomial of f at a.

Application of Theorem 13: Find Taylor polynomials of a function from Taylor polynomials of other functions, without computing any new derivatives.

Example: Compute Taylor polynomials for f(x) = e^(2x) at a = 1. Use Maclaurin polynomials for e^x:

e^(2x) = e^2 e^(2(x−1))
       = e^2 (1 + 2(x−1) + 2^2(x−1)^2/2 + ... + 2^n(x−1)^n/n! + O(2^(n+1)(x−1)^(n+1)))
       = e^2 (1 + 2(x−1) + 2^2(x−1)^2/2 + ... + 2^n(x−1)^n/n!) + O((x−1)^(n+1))

(by property 1, multiplication by a constant preserves big O). So, by Theorem 13, the n-th Taylor polynomial for e^(2x) at a = 1 is:

e^2 (1 + 2(x−1) + 2^2(x−1)^2/2 + ... + 2^n(x−1)^n/n!).

Of course, this is easy enough to verify directly: f^(n)(x) = 2^n e^(2x), and so f^(n)(1) = 2^n e^2.

Example: Compute Maclaurin polynomials for

cosh(x) = (e^x + e^(−x))/2.

Let P_n(x) be the n-th Taylor polynomial for e^x at a = 0.

e^x = P_(2n+1)(x) + O(x^(2n+2)) = 1 + x + x^2/2 + x^3/3! + ... + x^(2n+1)/(2n+1)! + O(x^(2n+2))
e^(−x) = P_(2n+1)(−x) + O(x^(2n+2)) = 1 − x + x^2/2 − x^3/3! + ... − x^(2n+1)/(2n+1)! + O(x^(2n+2))

So by property 2 (big O is preserved under sums and differences),

cosh(x) = (P_(2n+1)(x) + P_(2n+1)(−x))/2 + O(x^(2n+2))
        = 1 + x^2/2 + x^4/4! + ... + x^(2n)/(2n)! + O(x^(2n+2))

By Theorem 13,

Q(x) = 1 + x^2/2 + x^4/4! + ... + x^(2n)/(2n)!

is the (2n+1)-th Taylor polynomial for cosh(x) at a = 0. Since the degree of Q(x) is 2n, it is also the 2n-th Taylor polynomial because, by property 4 above, if g(x) = O(x^(2n+2)), then g(x) = O(x^(2n+1)).

Hyperbolic functions: Definition of cosh(x), sinh(x).
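A quick numerical sanity check of these two results (a sketch of my own in Python; the function names are mine, not from the notes):

```python
import math

# n-th Taylor polynomial of e^(2x) at a = 1, as derived above:
#   e^2 * sum_{k=0}^{n} 2^k (x - 1)^k / k!
def taylor_e2x(x, n):
    return math.exp(2) * sum(2**k * (x - 1)**k / math.factorial(k)
                             for k in range(n + 1))

# 2n-th Maclaurin polynomial of cosh, as derived above:
#   sum_{k=0}^{n} x^(2k) / (2k)!
def maclaurin_cosh(x, n):
    return sum(x**(2 * k) / math.factorial(2 * k) for k in range(n + 1))

# Near the expansion points the polynomials agree closely with
# e^(2x) and cosh(x); both printed errors are tiny.
print(abs(taylor_e2x(1.3, 10) - math.exp(2 * 1.3)))
print(abs(maclaurin_cosh(0.5, 5) - math.cosh(0.5)))
```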

cosh(x) := (e^x + e^(−x))/2,    sinh(x) := (e^x − e^(−x))/2

Derivatives:

cosh'(x) = (e^x − e^(−x))/2 = sinh(x),    sinh'(x) = (e^x + e^(−x))/2 = cosh(x)

So, for f(x) = cosh(x),

f^(2n)(x) = cosh(x), so f^(2n)(0) = cosh(0) = 1
f^(2n+1)(x) = sinh(x), so f^(2n+1)(0) = sinh(0) = 0

(this directly verifies the Maclaurin polynomials for cosh(x) above).

Sketch the graph of cosh(x): add e^x and e^(−x) and divide by 2 (equivalently, cosh(x) is the average of e^x and e^(−x)). It appears to have an absolute minimum at x = 0, is always concave up, and lim_(x→±∞) cosh(x) = ∞.
– Indeed, the critical point for f(x) = cosh(x): f'(x) = sinh(x) = 0, which means e^x = e^(−x), so e^(2x) = 1 and so x = 0. And f''(x) = cosh(x) > 0.

Applications: the shape of a hanging cable (a catenary) and others.
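The derivative formulas can be spot-checked with a difference quotient (a sketch of my own; `deriv` is a hypothetical helper, not from the notes):

```python
import math

# Definitions from the lecture:
def cosh(x):
    return (math.exp(x) + math.exp(-x)) / 2

def sinh(x):
    return (math.exp(x) - math.exp(-x)) / 2

def deriv(f, x, h=1e-6):
    # central-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# cosh' = sinh and sinh' = cosh, up to tiny finite-difference error:
for x in (-1.5, 0.0, 2.0):
    assert abs(deriv(cosh, x) - sinh(x)) < 1e-6
    assert abs(deriv(sinh, x) - cosh(x)) < 1e-6
print("derivative identities check out")
```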

Sketch the graph of sinh(x): subtract e^(−x) from e^x and divide by 2. It appears to have no relative extrema, but an inflection point at 0, and lim_(x→∞) sinh(x) = ∞, lim_(x→−∞) sinh(x) = −∞.
– Indeed, there is no critical point for g(x) = sinh(x), because g'(x) = cosh(x) > 0. And g''(x) = sinh(x), which is positive for x > 0 and negative for x < 0.

Fact: cosh^2(x) − sinh^2(x) = 1.
Proof: cosh^2(x) − sinh^2(x) = (1/4)((e^(2x) + 2 + e^(−2x)) − (e^(2x) − 2 + e^(−2x))) = 1.

So, for all t, (cosh(t), sinh(t)) lies on the hyperbola x^2 − y^2 = 1, and this explains why they are called "hyperbolic" functions. In fact, this gives a 1:1 parametrization of the right half of the hyperbola:
– it lies in the right half because cosh(t) is always > 0;
– it is 1:1 because the vertical coordinate is increasing with t: sinh'(t) = cosh(t) > 0;
– it covers the entire right half because lim_(x→∞) sinh(x) = ∞ and lim_(x→−∞) sinh(x) = −∞.

Hyperbolic functions are given "trigonometric" names because they mimic cos and sin: (cos(t), sin(t)) parameterizes the unit circle (but not 1:1). E.g., cosh(x) is even and sinh(x) is odd.
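These facts are easy to verify numerically (a small sketch of my own, using Python's built-in hyperbolic functions):

```python
import math

# (cosh(t), sinh(t)) lies on the right branch of x^2 - y^2 = 1,
# and cosh is even while sinh is odd.
for t in (-2.0, -0.3, 0.0, 1.0, 4.0):
    x, y = math.cosh(t), math.sinh(t)
    assert abs(x**2 - y**2 - 1) < 1e-9                # on the hyperbola
    assert x > 0                                      # right branch only
    assert abs(math.cosh(-t) - math.cosh(t)) < 1e-12  # even
    assert abs(math.sinh(-t) + math.sinh(t)) < 1e-12  # odd
print("hyperbola identity checks out")
```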

Other hyperbolic functions, e.g.:

tanh(x) := sinh(x)/cosh(x)

And there are inverse hyperbolic functions.

Lecture 38:

Recall: Defn: An anti-derivative of a function f is a function F s.t. F'(x) = f(x).

Defn: The general anti-derivative or indefinite integral of a function f is written

∫ f(x) dx = F(x) + C

where F is any particular anti-derivative and C is an arbitrary constant. This makes sense because of the

Fact: Functions F and G defined on an interval I are anti-derivatives of the same function if and only if F − G is a constant.
Proof: If F − G = C, then F' − G' = 0 and so F' = G'. Conversely, if F' = G' on I, then (F − G)' = 0 on I, and we showed long ago, by the MVT, that F − G is a constant. □

Note: this assumes that the common domain is an interval. Consider the domain (0, 1) ∪ (2, 3), with F = 0 everywhere, and G = 0 on (0, 1), G = 1 on (2, 3). Then F' = G' = 0, but F − G is not constant.

Recall the Examples on p. 150 and ∫ (1/x) dx = ln|x| + C.

Recall: Earlier we discussed differential equations a bit. A first order differential equation is an equation of the form

dy/dx = f(x, y).

A solution to the differential equation is a function y = y(x) which satisfies the equation.

Under mild assumptions on f (continuity of f and of ∂f/∂y), there always exists a solution, and the solution is uniquely determined once one specifies an initial condition y(x_0) = y_0.

Special case: f(x, y) = f(x).
— A solution is the same thing as an anti-derivative.
— An initial condition uniquely determines the constant of integration.

Example: Solve y' = 6x^2 − 1, y(2) = 10.

y = y(x) = 2x^3 − x + C
10 = 16 − 2 + C
C = −4.

Solution: y = y(x) = 2x^3 − x − 4.

Recall also another special case: y' = ky. We showed that the solution is y = y(x) = Ce^(kx) and is uniquely determined by an initial condition y(x_0) = y_0:

y_0 = Ce^(kx_0),  so  C = y_0/e^(kx_0).

A second order differential equation is of the form:

y'' = f(x, y, y')
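The worked first-order example can be verified in a few lines (my own sketch; `dy` is a hypothetical finite-difference helper, not from the notes):

```python
def y(x):
    # the solution found above: y(x) = 2x^3 - x - 4
    return 2 * x**3 - x - 4

def dy(x, h=1e-6):
    # central-difference approximation to y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

assert y(2) == 10                                # initial condition y(2) = 10
for x in (-1.0, 0.5, 3.0):
    assert abs(dy(x) - (6 * x**2 - 1)) < 1e-5    # ODE: y' = 6x^2 - 1
print("first-order example checks out")
```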

Again, under mild conditions on f (continuity of f and of ∂f/∂y and ∂f/∂y'), there always exists a solution, and the solution is uniquely determined once one specifies two initial conditions y(x_0) = y_0, y'(x_0) = y'_0.

Special case: f(x, y, y') = f(x).
— A solution is the same thing as a second anti-derivative, which will have two arbitrary constants.
— Initial conditions uniquely determine the two constants of integration.

Example: Solve y'' = sin(x), y(π) = 2, y'(π) = −1.

y' = −cos(x) + C
−1 = −cos(π) + C
C = −2
y' = −cos(x) − 2

y = −sin(x) − 2x + D
2 = −sin(π) − 2π + D
D = 2 + 2π.

y = −sin(x) − 2x + 2 + 2π.
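The second-order example can be checked the same way (my own sketch; the finite-difference helpers are mine, not from the notes):

```python
import math

def y(x):
    # the solution found above: y(x) = -sin(x) - 2x + 2 + 2*pi
    return -math.sin(x) - 2 * x + 2 + 2 * math.pi

def dy(x, h=1e-5):
    # central-difference approximation to y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

def d2y(x, h=1e-4):
    # central-difference approximation to y''(x)
    return (y(x + h) - 2 * y(x) + y(x - h)) / h**2

assert abs(y(math.pi) - 2) < 1e-12             # y(pi) = 2
assert abs(dy(math.pi) + 1) < 1e-6             # y'(pi) = -1
for x in (0.3, 1.0, 2.5):
    assert abs(d2y(x) - math.sin(x)) < 1e-4    # ODE: y'' = sin(x)
print("second-order example checks out")
```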

Lecture 39: Review

Lecture 40: Midterm 2
