
Total derivatives
Math 131 Multivariate Calculus
D Joyce, Spring 2014

Last time. We found that the total derivative of a scalar-valued function, also called a scalar field, f : R^n → R, is the gradient

\[
\nabla f = (f_{x_1}, f_{x_2}, \ldots, f_{x_n})
= \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right).
\]

When n = 2 the gradient, ∇f = (f_x, f_y), gives the slopes of the tangent plane in the x-direction and the y-direction.

Total derivatives of vector-valued functions. Let f : R^n → R^m be a vector-valued function. As always, a vector-valued function is determined by its m scalar-valued component functions:

f(x) = (f_1(x), f_2(x), . . . , f_m(x)).

We'll say f is differentiable if all its component functions are differentiable, and in that case, we'll take the derivative of f, denoted Df, to be the m × n matrix of partial derivatives of the component functions:

\[
Df(x_1, x_2, \ldots, x_n) =
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial f_m}{\partial x_1} & \frac{\partial f_m}{\partial x_2} & \cdots & \frac{\partial f_m}{\partial x_n}
\end{bmatrix}.
\]

Thus, the ij-th entry is ∂f_i/∂x_j, the j-th partial derivative of the i-th component function f_i.

In terms of limits, a logically equivalent definition says f is differentiable at a if (1) all the mn partial derivatives exist, and (2) the linear function

h(x) = f(a) + Df(a)(x − a)

is a good approximation of f near a in the sense that

\[
\lim_{x \to a} \frac{f(x) - h(x)}{\lVert x - a \rVert} = 0.
\]

Note that when m = 1, this definition says the total derivative Df of the scalar-valued function f is the 1 × n row matrix

\[
Df(x_1, x_2, \ldots, x_n) =
\begin{bmatrix}
\frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} & \cdots & \frac{\partial f}{\partial x_n}
\end{bmatrix},
\]

which is the same thing as the gradient ∇f, but written out as a row matrix rather than an n-tuple.

Example 1. This derivative Df looks complicated, but it isn't, really. For an example, let f : R^3 → R^4 be defined by

f(x, y, z) = (x + 2y + 3z, xyz, cos x, sin x).

Then Df is the 4 × 3 matrix

\[
Df =
\begin{bmatrix}
\frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} & \frac{\partial f_1}{\partial z} \\
\frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} & \frac{\partial f_2}{\partial z} \\
\frac{\partial f_3}{\partial x} & \frac{\partial f_3}{\partial y} & \frac{\partial f_3}{\partial z} \\
\frac{\partial f_4}{\partial x} & \frac{\partial f_4}{\partial y} & \frac{\partial f_4}{\partial z}
\end{bmatrix}
=
\begin{bmatrix}
1 & 2 & 3 \\
yz & xz & xy \\
-\sin x & 0 & 0 \\
\cos x & 0 & 0
\end{bmatrix}.
\]

Rules of differentiation. Typically, when derivatives of multivariate functions are actually computed, they're computed one partial derivative at a time. Partial derivatives are just ordinary derivatives when only one variable actually varies, so no new rules of differentiation are needed for them. But there are rules for gradients and total derivatives.

The usual properties of derivatives for functions of one variable f : R → R are the sum rule, difference rule, product rule, quotient rule, and chain rule. For the most part, these properties are enjoyed by functions f : R^n → R^m, but the product and quotient rules will need to be modified and restricted.

For instance, derivatives are linear in the sense that they preserve addition and scalar multiplication, and therefore they preserve subtraction and, in general, all linear combinations. So, for vector-valued differentiable functions R^n → R^m, we have

\[
\begin{aligned}
D(f + g) &= Df + Dg, & D(cf) &= c\,Df, \\
D(f - g) &= Df - Dg, & D(c_1 f_1 + \cdots + c_r f_r) &= c_1\,Df_1 + \cdots + c_r\,Df_r.
\end{aligned}
\]
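
These rules, and the matrix Df itself, can be checked symbolically. The following is a minimal sketch using Python's sympy library (sympy is not part of these notes, and the second map g below is made up purely for illustration). It recomputes the 4 × 3 matrix of Example 1 with Matrix.jacobian and verifies the first two linearity rules.

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    # f from Example 1, written as a column of its four component functions
    f = sp.Matrix([x + 2*y + 3*z, x*y*z, sp.cos(x), sp.sin(x)])

    # Df is the 4 x 3 matrix of partial derivatives
    Df = f.jacobian([x, y, z])
    print(Df)   # Matrix([[1, 2, 3], [y*z, x*z, x*y], [-sin(x), 0, 0], [cos(x), 0, 0]])

    # a second map g : R^3 -> R^4, chosen only to illustrate linearity
    g = sp.Matrix([x*y, y + z, x**2, 0])
    Dg = g.jacobian([x, y, z])

    c = sp.symbols('c')
    assert (f + g).jacobian([x, y, z]) == Df + Dg   # D(f + g) = Df + Dg
    assert (c*f).jacobian([x, y, z]) == c*Df        # D(cf) = c Df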

When the functions are scalar-valued functions R^n → R, i.e., when m = 1, we can also write these rules in terms of gradients as

\[
\begin{aligned}
\nabla(f + g) &= \nabla f + \nabla g, & \nabla(cf) &= c\,\nabla f, \\
\nabla(f - g) &= \nabla f - \nabla g, & \nabla(c_1 f_1 + \cdots + c_r f_r) &= c_1\,\nabla f_1 + \cdots + c_r\,\nabla f_r.
\end{aligned}
\]
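
In the m = 1 case the same sympy construction returns the gradient as a 1 × n row matrix, and the gradient rules above can be checked the same way. A short sketch (the scalar fields phi and psi are made up for the example, not taken from the notes):

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    # a scalar field R^3 -> R, viewed as a 1-component vector-valued function
    phi = sp.Matrix([x*y + sp.sin(z)])
    print(phi.jacobian([x, y, z]))   # Matrix([[y, x, cos(z)]]), the gradient as a row matrix

    # one of the linearity rules for gradients: grad(phi + psi) = grad(phi) + grad(psi)
    psi = sp.Matrix([x**2 + y*z])
    assert (phi + psi).jacobian([x, y, z]) == phi.jacobian([x, y, z]) + psi.jacobian([x, y, z])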

The product and quotient rules apply to scalar-valued functions f and g, both R^n → R:

\[
\nabla(fg) = (\nabla f)\,g + f\,\nabla g
\]

\[
\nabla\!\left(\frac{f}{g}\right) = \frac{(\nabla f)\,g - f\,\nabla g}{g^2}
\]
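
These two rules can also be checked symbolically. Here is another sympy sketch (again an illustration, not part of the notes; the fields f and g below are arbitrary choices, with g never zero so the quotient makes sense):

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    # two scalar fields R^3 -> R, chosen only for illustration
    f = x*y + z
    g = sp.exp(x) + y**2

    def grad(h):
        # the gradient of a scalar field as a 1 x 3 row matrix
        return sp.Matrix([h]).jacobian([x, y, z])

    # product rule: grad(fg) = (grad f) g + f (grad g)
    diff_prod = grad(f*g) - (grad(f)*g + f*grad(g))
    assert diff_prod.applyfunc(sp.simplify) == sp.zeros(1, 3)

    # quotient rule: grad(f/g) = ((grad f) g - f (grad g)) / g^2
    diff_quot = grad(f/g) - (grad(f)*g - f*grad(g))/g**2
    assert diff_quot.applyfunc(sp.simplify) == sp.zeros(1, 3)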

The product and quotient rules don't easily generalize to vector-valued functions. Later, we'll see what happens to the chain rule.

Math 131 Home Page at http://math.clarku.edu/~djoyce/ma131/
