
Jim Lambers
MAT 280
Spring Semester 2009-10
Lecture 5 Notes

These notes correspond to Section 11.4 in Stewart and Section 2.3 in Marsden and Tromba.

Tangent Planes, Linear Approximations and Differentiability

Now that we have learned how to compute partial derivatives of functions of several independent variables, in order to measure their instantaneous rates of change with respect to these variables, we will discuss another essential application of derivatives: the approximation of functions by linear functions. Linear functions are the simplest to work with, and for this reason, there are many instances in which functions are replaced by a linear approximation in the context of solving a problem, such as solving a differential equation.

Tangent Planes and Linear Approximations

In single-variable calculus, we learned that the graph of a function $f(x)$ can be approximated near a point $x_0$ by its tangent line, which has the equation
$$y = f(x_0) + f'(x_0)(x - x_0).$$
For this reason, the function $L_f(x) = f(x_0) + f'(x_0)(x - x_0)$ is also referred to as the linearization, or linear approximation, of $f(x)$ at $x_0$.

Now, suppose that we have a function of two variables, $f : D \subseteq \mathbb{R}^2 \to \mathbb{R}$, and a point $(x_0, y_0) \in D$. Furthermore, suppose that the first partial derivatives of $f$, $f_x$ and $f_y$, exist at $(x_0, y_0)$. Because the graph of this function is a surface, it follows that a linear function that approximates $f$ near $(x_0, y_0)$ would have a graph that is a plane.

Just as the tangent line of $f(x)$ at $x_0$ passes through the point $(x_0, f(x_0))$, and has a slope equal to $f'(x_0)$, the instantaneous rate of change of $f(x)$ with respect to $x$ at $x_0$, a plane that best approximates $f(x, y)$ at $(x_0, y_0)$ must pass through the point $(x_0, y_0, f(x_0, y_0))$, and the slopes of the plane in the $x$- and $y$-directions, respectively, should be equal to the values of $f_x(x_0, y_0)$ and $f_y(x_0, y_0)$.
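As a quick numerical illustration of the single-variable case (the particular function $f(x) = \sqrt{x}$ and the point $x_0 = 4$ are illustrative choices, not from the notes), the sketch below builds the tangent-line approximation and compares it with the true function value nearby.

```python
import math

def linearize(f, fprime, x0):
    """Return the linearization L(x) = f(x0) + f'(x0)(x - x0)."""
    f0, slope = f(x0), fprime(x0)
    return lambda x: f0 + slope * (x - x0)

# Illustrative choice (an assumption, not from the notes): f(x) = sqrt(x) at x0 = 4.
f = math.sqrt
fprime = lambda x: 1.0 / (2.0 * math.sqrt(x))
L = linearize(f, fprime, 4.0)

print(L(4.1))  # tangent-line estimate of sqrt(4.1): 2.025
print(f(4.1))  # actual value; the two agree closely near x0
```

As the notes observe for two variables below, the error shrinks rapidly as the evaluation point moves toward $x_0$.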
Since a general linear function of two variables can be described by the formula
$$L_f(x, y) = A(x - x_0) + B(y - y_0) + C,$$
so that $L_f(x_0, y_0) = C$, and a simple differentiation yields
$$\frac{\partial L_f}{\partial x} = A, \qquad \frac{\partial L_f}{\partial y} = B,$$
we conclude that the linear function that best approximates $f(x, y)$ near $(x_0, y_0)$ is the linear approximation
$$L_f(x, y) = f(x_0, y_0) + \frac{\partial f}{\partial x}(x_0, y_0)(x - x_0) + \frac{\partial f}{\partial y}(x_0, y_0)(y - y_0).$$
Furthermore, the graph of this function is called the tangent plane of $f(x, y)$ at $(x_0, y_0)$. Its equation is
$$z - z_0 = \frac{\partial f}{\partial x}(x_0, y_0)(x - x_0) + \frac{\partial f}{\partial y}(x_0, y_0)(y - y_0).$$

Example Let $f(x, y) = 2x^2 y + 3y^2$, and let $(x_0, y_0) = (1, 1)$. Then $f(x_0, y_0) = 5$, and the first partial derivatives at $(x_0, y_0)$ are
$$f_x(1, 1) = 4xy|_{x=1, y=1} = 4, \qquad f_y(1, 1) = 2x^2 + 6y|_{x=1, y=1} = 8.$$
It follows that the tangent plane at $(1, 1)$ has the equation
$$z - 5 = 4(x - 1) + 8(y - 1),$$
and the linearization of $f$ at $(1, 1)$ is
$$L_f(x, y) = 5 + 4(x - 1) + 8(y - 1).$$
Let $(x, y) = (1.1, 1.1)$. Then $f(x, y) = 6.292$, while $L_f(x, y) = 6.2$, for an error of $6.292 - 6.2 = 0.092$. However, if $(x, y) = (1.01, 1.01)$, then $f(x, y) = 5.120902$, while $L_f(x, y) = 5.12$, for an error of $5.120902 - 5.12 = 0.000902$. That is, moving 10 times as close to $(1, 1)$ decreased the error by a factor of over 100. $\Box$

Another useful application of a linear approximation is to estimate the error in the value of a function, given estimates of the error in its inputs. Given a function $z = f(x, y)$, and its linearization $L_f(x, y)$ around a point $(x_0, y_0)$, if $x_0$ and $y_0$ are measured values and $dx = x - x_0$ and $dy = y - y_0$ are regarded as errors in $x_0$ and $y_0$, then the error in $z$ can be estimated by computing
$$\begin{aligned} dz = z - z_0 &= L_f(x, y) - f(x_0, y_0) \\ &= [f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)] - f(x_0, y_0) \\ &= f_x(x_0, y_0)\, dx + f_y(x_0, y_0)\, dy. \end{aligned}$$
The variables $dx$ and $dy$ are called differentials, and $dz$ is called the total differential, as it depends on the values of $dx$ and $dy$.
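The numbers in the example above are easy to check directly. The sketch below evaluates $f(x, y) = 2x^2 y + 3y^2$ and its linearization at $(1, 1)$ and reproduces both errors quoted in the notes.

```python
def f(x, y):
    return 2 * x**2 * y + 3 * y**2

def L(x, y):
    # Linearization of f at (1, 1): L(x, y) = 5 + 4(x - 1) + 8(y - 1)
    return 5 + 4 * (x - 1) + 8 * (y - 1)

for x, y in [(1.1, 1.1), (1.01, 1.01)]:
    err = f(x, y) - L(x, y)
    print(f"({x}, {y}): f = {f(x, y):.6f}, L = {L(x, y):.6f}, error = {err:.6f}")
```

The printed errors, 0.092 and 0.000902, confirm that moving 10 times closer to the base point shrinks the error by roughly a factor of 100, consistent with the quadratic behavior of the remainder.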
The total differential $dz$ is only an estimate of the error in $z$; the actual error is given by $\Delta z = f(x, y) - f(x_0, y_0)$, when the actual errors in $x$ and $y$, $\Delta x = x - x_0$ and $\Delta y = y - y_0$, are known. Since this is rarely the case in practice, one instead estimates the error in $z$ from estimates $dx$ and $dy$ of the errors in $x$ and $y$.

Example Recall that the volume of a cylinder with radius $r$ and height $h$ is $V = \pi r^2 h$. Suppose that $r = 5$ cm and $h = 10$ cm. Then the volume is $V = 250\pi$ cm$^3$. If the measurement error in $r$ and $h$ is at most 0.1 cm, then, to estimate the error in the computed volume, we first compute
$$V_r = 2\pi r h = 100\pi, \qquad V_h = \pi r^2 = 25\pi.$$
It follows that the error in $V$ is approximately
$$dV = V_r\, dr + V_h\, dh = 0.1(100\pi + 25\pi) = 12.5\pi \text{ cm}^3.$$
If we specify $\Delta r = 0.1$ and $\Delta h = 0.1$, and compute the actual volume using radius $r + \Delta r = 5.1$ and height $h + \Delta h = 10.1$, we obtain
$$V + \Delta V = \pi (5.1)^2 (10.1) = 262.701\pi \text{ cm}^3,$$
which yields the actual error
$$\Delta V = 262.701\pi - 250\pi = 12.701\pi \text{ cm}^3.$$
Therefore, the estimate of the error, $dV$, is quite accurate. $\Box$

Functions of More than Two Variables

The concepts of a tangent plane and linear approximation generalize to more than two variables in a straightforward manner. Specifically, given $f : D \subseteq \mathbb{R}^n \to \mathbb{R}$ and $p_0 = (x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)}) \in D$, we define the tangent space of $f(x_1, x_2, \ldots, x_n)$ at $p_0$ to be the $n$-dimensional hyperplane in $\mathbb{R}^{n+1}$ whose points $(x_1, x_2, \ldots, x_n, y)$ satisfy the equation
$$y - y_0 = \frac{\partial f}{\partial x_1}(p_0)(x_1 - x_1^{(0)}) + \frac{\partial f}{\partial x_2}(p_0)(x_2 - x_2^{(0)}) + \cdots + \frac{\partial f}{\partial x_n}(p_0)(x_n - x_n^{(0)}),$$
where $y_0 = f(p_0)$. Similarly, the linearization of $f$ at $p_0$ is the function $L_f(x_1, x_2, \ldots, x_n)$ defined by
$$L_f(x_1, x_2, \ldots, x_n) = y_0 + \frac{\partial f}{\partial x_1}(p_0)(x_1 - x_1^{(0)}) + \frac{\partial f}{\partial x_2}(p_0)(x_2 - x_2^{(0)}) + \cdots + \frac{\partial f}{\partial x_n}(p_0)(x_n - x_n^{(0)}).$$

The Gradient Vector

It can be seen from the above definitions that writing formulas that involve the partial derivatives of functions of $n$ variables can be cumbersome.
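The cylinder computation translates directly into a few lines of code. The sketch below re-derives the total differential $dV$ and the actual error $\Delta V$ from the example, confirming that the two agree closely.

```python
import math

r, h = 5.0, 10.0           # measured radius and height (cm)
dr = dh = 0.1              # maximum measurement error in each (cm)

V = math.pi * r**2 * h     # computed volume: 250*pi cm^3
V_r = 2 * math.pi * r * h  # partial derivative dV/dr = 100*pi
V_h = math.pi * r**2       # partial derivative dV/dh = 25*pi

dV = V_r * dr + V_h * dh                       # estimated error: 12.5*pi
actual = math.pi * (r + dr)**2 * (h + dh) - V  # actual error: 12.701*pi

print(dV, actual)  # the total differential closely tracks the true error
```

Printing both values shows roughly 39.27 versus 39.90 cm³, so the linear estimate is accurate to within about 1.6 percent here.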
This can be addressed by expressing collections of partial derivatives of functions of several variables using vectors and matrices, especially for vector-valued functions of several variables.

By convention, a point $p_0 = (x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)})$, which can be identified with the position vector $\mathbf{p}_0 = \langle x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)} \rangle$, is considered to be a column vector
$$\mathbf{p}_0 = \begin{bmatrix} x_1^{(0)} \\ x_2^{(0)} \\ \vdots \\ x_n^{(0)} \end{bmatrix}.$$
Also, by convention, given a function of $n$ variables, $f : D \subseteq \mathbb{R}^n \to \mathbb{R}$, the collection of its partial derivatives with respect to all of its variables is written as a row vector
$$\nabla f(p_0) = \begin{bmatrix} \frac{\partial f}{\partial x_1}(p_0) & \frac{\partial f}{\partial x_2}(p_0) & \cdots & \frac{\partial f}{\partial x_n}(p_0) \end{bmatrix}.$$
This vector is called the gradient of $f$ at $p_0$.

Viewing the partial derivatives of $f$ as a vector allows us to use vector operations to describe, much more concisely, the linearization of $f$. Specifically, the linearization of $f$ at $p_0$, evaluated at a point $p = (x_1, x_2, \ldots, x_n)$, can be written as
$$\begin{aligned} L_f(p) &= f(p_0) + \frac{\partial f}{\partial x_1}(p_0)(x_1 - x_1^{(0)}) + \frac{\partial f}{\partial x_2}(p_0)(x_2 - x_2^{(0)}) + \cdots + \frac{\partial f}{\partial x_n}(p_0)(x_n - x_n^{(0)}) \\ &= f(p_0) + \sum_{i=1}^n \frac{\partial f}{\partial x_i}(p_0)(x_i - x_i^{(0)}) \\ &= f(p_0) + \nabla f(p_0) \cdot (p - p_0), \end{aligned}$$
where $\nabla f(p_0) \cdot (p - p_0)$ is the dot product, also known as the inner product, of the vectors $\nabla f(p_0)$ and $p - p_0$. Recall that given two vectors $\mathbf{u} = \langle u_1, u_2, \ldots, u_n \rangle$ and $\mathbf{v} = \langle v_1, v_2, \ldots, v_n \rangle$, the dot product of $\mathbf{u}$ and $\mathbf{v}$, denoted by $\mathbf{u} \cdot \mathbf{v}$, is defined by
$$\mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n = \|\mathbf{u}\| \|\mathbf{v}\| \cos \theta,$$
where $\theta$ is the angle between $\mathbf{u}$ and $\mathbf{v}$.

Example Let $f : \mathbb{R}^3 \to \mathbb{R}$ be defined by
$$f(x, y, z) = 3x^2 y^3 z^4.$$
Then
$$\nabla f(x, y, z) = \begin{bmatrix} f_x & f_y & f_z \end{bmatrix} = \begin{bmatrix} 6xy^3 z^4 & 9x^2 y^2 z^4 & 12x^2 y^3 z^3 \end{bmatrix}.$$
Let $(x_0, y_0, z_0) = (1, 2, -1)$.
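Analytic gradients like the one above can be sanity-checked numerically with central differences. The sketch below (the step size $h$ is an arbitrary choice) approximates $\nabla f$ for $f(x, y, z) = 3x^2 y^3 z^4$ at $(1, 2, -1)$ and compares it with the formula just derived.

```python
def f(p):
    x, y, z = p
    return 3 * x**2 * y**3 * z**4

def grad_analytic(p):
    # The gradient computed by hand: [6xy^3z^4, 9x^2y^2z^4, 12x^2y^3z^3]
    x, y, z = p
    return [6*x*y**3*z**4, 9*x**2*y**2*z**4, 12*x**2*y**3*z**3]

def grad_numeric(f, p, h=1e-6):
    """Central-difference approximation of the gradient of f at p."""
    g = []
    for i in range(len(p)):
        fwd = list(p); fwd[i] += h
        bwd = list(p); bwd[i] -= h
        g.append((f(fwd) - f(bwd)) / (2 * h))
    return g

p0 = [1.0, 2.0, -1.0]
print(grad_analytic(p0))    # [48.0, 36.0, -96.0]
print(grad_numeric(f, p0))  # agrees to several digits
```

This kind of finite-difference check is a standard way to catch sign or exponent mistakes when differentiating by hand.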
Then
$$\nabla f(x_0, y_0, z_0) = \nabla f(1, 2, -1) = \begin{bmatrix} f_x(1, 2, -1) & f_y(1, 2, -1) & f_z(1, 2, -1) \end{bmatrix} = \begin{bmatrix} 48 & 36 & -96 \end{bmatrix}.$$
It follows that the linearization of $f$ at $(x_0, y_0, z_0)$ is
$$\begin{aligned} L_f(x, y, z) &= f(1, 2, -1) + \nabla f(1, 2, -1) \cdot \langle x - 1, y - 2, z + 1 \rangle \\ &= 24 + \langle 48, 36, -96 \rangle \cdot \langle x - 1, y - 2, z + 1 \rangle \\ &= 24 + 48(x - 1) + 36(y - 2) - 96(z + 1) \\ &= 48x + 36y - 96z - 192. \end{aligned}$$
At the point $(1.1, 1.9, -1.1)$, we have $f(1.1, 1.9, -1.1) \approx 36.5$, while $L_f(1.1, 1.9, -1.1) = 34.8$. $\Box$
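The gradient form of the linearization, $L_f(p) = f(p_0) + \nabla f(p_0) \cdot (p - p_0)$, translates directly into code. The sketch below reproduces the three-variable example, using the gradient values computed above.

```python
def f(x, y, z):
    return 3 * x**2 * y**3 * z**4

def dot(u, v):
    """Dot product of two same-length vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

p0 = (1.0, 2.0, -1.0)
grad = (48.0, 36.0, -96.0)  # gradient of f at p0, from the example

def L(p):
    """Linearization L(p) = f(p0) + grad . (p - p0)."""
    step = [pi - p0i for pi, p0i in zip(p, p0)]
    return f(*p0) + dot(grad, step)

p = (1.1, 1.9, -1.1)
print(L(p))   # 34.8, matching the notes
print(f(*p))  # about 36.45
```

Note that the approximation is noticeably less accurate here than in the two-variable example, since $(1.1, 1.9, -1.1)$ lies a distance of about 0.17 from the base point rather than 0.014.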