Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions


Jingzhao Zhang 1   Hongzhou Lin 1   Stefanie Jegelka 1   Suvrit Sra 1   Ali Jadbabaie 1

1Massachusetts Institute of Technology. Correspondence to: Jingzhao Zhang <[email protected]>.

Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).
arXiv:2002.04130v3 [math.OC] 29 Jun 2020

Abstract

We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions. In particular, we study the class of Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions for which the chain rule of calculus holds. This class contains examples such as ReLU neural networks and others with non-differentiable activation functions. We first show that finding an ε-stationary point with first-order methods is impossible in finite time. We then introduce the notion of (δ, ε)-stationarity, which allows for an ε-approximate gradient to be the convex combination of generalized gradients evaluated at points within distance δ to the solution. We propose a series of randomized first-order methods and analyze their complexity of finding a (δ, ε)-stationary point. Furthermore, we provide a lower bound and show that our stochastic algorithm has min-max optimal dependence on δ. Empirically, our methods perform well for training ReLU neural networks.

Table 1. When the problem is nonconvex and nonsmooth, finding an ε-stationary point is intractable, see Theorem 11. Thus we introduce a refined notion, (δ, ε)-stationarity, and provide non-asymptotic convergence rates for finding a (δ, ε)-stationary point.

  DETERMINISTIC RATES   CONVEX       NONCONVEX
  L-SMOOTH              O(ε^-0.5)    O(ε^-2)
  L-LIPSCHITZ           O(ε^-2)      Õ(ε^-3 δ^-1)

  STOCHASTIC RATES      CONVEX       NONCONVEX
  L-SMOOTH              O(ε^-2)      O(ε^-4)
  L-LIPSCHITZ           O(ε^-2)      Õ(ε^-4 δ^-1)

1. Introduction

Gradient based optimization underlies most of machine learning and has attracted tremendous research attention over the years. While non-asymptotic complexity analysis of gradient based methods is well-established for convex and smooth nonconvex problems, little is known for nonsmooth nonconvex problems. We summarize the known rates in Table 1, based on the references (Nesterov, 2018; Carmon et al., 2017; Arjevani et al., 2019).

Within the nonsmooth nonconvex setting, recent research results have focused on asymptotic convergence analysis (Benaïm et al., 2005; Kiwiel, 2007; Majewski et al., 2018; Davis et al., 2018; Bolte & Pauwels, 2019). Despite their advances, these results fail to address finite-time, non-asymptotic convergence rates. Given the widespread use of nonsmooth nonconvex problems in machine learning, a canonical example being deep ReLU neural networks, obtaining a non-asymptotic convergence analysis is an important open problem of fundamental interest.

We tackle this problem for nonsmooth functions that are Lipschitz and directionally differentiable. This class is rich enough to cover common machine learning problems, including ReLU neural networks. Surprisingly, even for this seemingly restricted class, finding an ε-stationary point, i.e., a point x̄ for which d(0, ∂f(x̄)) ≤ ε, is intractable. In other words, no algorithm can guarantee to find an ε-stationary point within a finite number of iterations.

This intractability suggests that, to obtain meaningful non-asymptotic results, we need to refine the notion of stationarity. We introduce such a notion and base our analysis on it, leading to the following main contributions of the paper:

• We show that a traditional ε-stationary point cannot be obtained in finite time (Theorem 5).
• We study the notion of (δ, ε)-stationary points (see Definition 4). For smooth functions, this notion reduces to usual ε-stationarity by setting δ = O(ε/L). We provide an Ω(δ^-1) lower bound on the number of calls if algorithms are only allowed access to a generalized gradient oracle.
• We propose a normalized "gradient descent" style algorithm that achieves Õ(ε^-3 δ^-1) complexity in finding a (δ, ε)-stationary point in the deterministic setting.
• We propose a momentum based algorithm that achieves Õ(ε^-4 δ^-1) complexity in finding a (δ, ε)-stationary point in the stochastic finite variance setting.

As a proof of concept to validate our theoretical findings, we implement our stochastic algorithm and show that it matches the performance of the empirically used SGD with momentum method for training ResNets on the CIFAR-10 dataset.

Our results attempt to bridge the gap from recent advances in developing a non-asymptotic theory for nonconvex optimization algorithms to settings that apply to training deep neural networks, where, due to non-differentiability of the activations, most existing theory does not directly apply.

1.1. Related Work

Asymptotic convergence for nonsmooth nonconvex functions. Benaïm et al. (2005) study the convergence of subgradient methods from a differential inclusion perspective; Majewski et al. (2018) extend the result to include proximal and implicit updates. Bolte & Pauwels (2019) focus on formally justifying the back propagation rule under nonsmooth conditions. In parallel, Davis et al. (2018) proved asymptotic convergence of subgradient methods assuming the objective function to be Whitney stratifiable. The class of Whitney stratifiable functions is broader than the regular functions studied in (Majewski et al., 2018), and it does not assume the regularity inequality (see Lemma 6.3 and (51) in (Majewski et al., 2018)). Another line of work (Mifflin, 1977; Kiwiel, 2007; Burke et al., 2018) studies convergence of gradient sampling algorithms. These algorithms assume a deterministic generalized gradient oracle. Our methods draw intuition from these algorithms and their analysis, but are non-asymptotic in contrast.

Structured nonsmooth nonconvex problems. Another line of research in nonconvex optimization is to exploit structure: Duchi & Ruan (2018); Drusvyatskiy & Paquette (2019); Davis & Drusvyatskiy (2019) consider the composition structure f ∘ g of convex and smooth functions; Bolte et al. (2018); Zhang & He (2018); Beck & Hallak (2020) study composite objectives of the form f + g where one function is differentiable or convex/concave. With such structure, one can apply proximal gradient algorithms if the proximal mapping can be efficiently evaluated. However, this usually requires weak convexity, i.e., adding a quadratic function makes the function convex, which is not satisfied by several simple functions, e.g., −|x|.

Stationary points under smoothness. When the objective function is smooth, SGD finds an ε-stationary point in O(ε^-4) gradient calls (Ghadimi & Lan, 2013), which improves to O(ε^-2) for convex problems. Fast upper bounds under a variety of settings (deterministic, finite-sum, stochastic) are studied in (Carmon et al., 2018; Fang et al., 2018; Zhou et al., 2018; Nguyen et al., 2019; Allen-Zhu, 2018; Reddi et al., 2016). More recently, lower bounds have also been developed (Carmon et al., 2017; Drori & Shamir, 2019; Arjevani et al., 2019; Foster et al., 2019). When the function enjoys high-order smoothness, a stronger goal is to find an approximate second-order stationary point and could thus escape saddle points too. Many methods focus on this goal (Ge et al., 2015; Agarwal et al., 2017; Jin et al., 2017; Daneshmand et al., 2018; Fang et al., 2019).

2. Preliminaries

In this section, we set up the notion of generalized directional derivatives that will play a central role in our analysis. Throughout the paper, we assume that the nonsmooth function f is L-Lipschitz continuous (more precise assumptions on the function class are outlined in §2.3).

2.1. Generalized gradients

We start with the definition of generalized gradients, following (Clarke, 1990), for which we first need:

Definition 1. Given a point x ∈ R^d and a direction d, the generalized directional derivative of f is defined as

  f°(x; d) := lim sup_{y→x, t↓0} [f(y + td) − f(y)] / t.

Definition 2. The generalized gradient of f is defined as

  ∂f(x) := {g | ⟨g, d⟩ ≤ f°(x; d), for all d ∈ R^d}.

We recall below the following basic properties of the generalized gradient, see e.g., (Clarke, 1990) for details.

Proposition 1 (Properties of generalized gradients).
1. ∂f(x) is a nonempty, convex, compact set. For all vectors g ∈ ∂f(x), we have ‖g‖ ≤ L.
2. f°(x; d) = max{⟨g, d⟩ | g ∈ ∂f(x)}.
3. ∂f(x) is an upper-semicontinuous set-valued map.
4. f is differentiable almost everywhere (as it is L-Lipschitz); letting conv(·) denote the convex hull, we have
   ∂f(x) = conv{g | g = lim_{k→∞} ∇f(x_k), x_k → x}.
5. Let B denote the unit Euclidean ball. Then
   ∂f(x) = ∩_{δ>0} ∪_{y ∈ x+δB} ∂f(y).
6. For any y, z, there exist λ ∈ (0, 1) and g ∈ ∂f(λy + (1−λ)z) such that f(y) − f(z) = ⟨g, y − z⟩.

2.2. Directional derivatives

Since general nonsmooth functions can have arbitrarily large variations in their "gradients," we must restrict the function class to be able to develop a meaningful complexity theory. We show below that directionally differentiable functions match this purpose well.

Definition 3. A function f is called directionally differentiable in the sense of Hadamard (cf. (Sova, 1964; Shapiro, 1990)) if for any mapping φ : R₊ → X for which φ(0) = x and lim_{t→0⁺} (φ(t) − φ(0))/t = d, the following limit exists:

  f′(x; d) = lim_{t→0⁺} (1/t) [f(φ(t)) − f(x)].   (1)

In the rest of the paper, we will say a function f is directionally differentiable if it is directionally differentiable in the sense of Hadamard at all x.

2.3. Nonsmooth function class of interest

Throughout the paper, we focus on the set of Lipschitz, directionally differentiable and bounded (below) functions:

  F(Δ, L) := {f | f is L-Lipschitz, f is directionally differentiable, f(x₀) − inf_x f(x) ≤ Δ},   (2)

where a function f : R^n → R is L-Lipschitz if |f(x) − f(y)| ≤ L‖x − y‖ for all x, y.
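The idea behind (δ, ε)-stationarity — that a convex combination of generalized gradients taken within a δ-ball of a point can be small even when no single gradient is — can be illustrated numerically. Below is a minimal sketch for the one-dimensional function f(x) = |x|; the function names and sampling scheme are our own illustration, not the paper's algorithm.

```python
import random

def grad_abs(x):
    # gradient of f(x) = |x|, defined almost everywhere
    return 1.0 if x > 0 else -1.0

def averaged_gradient(x, delta, k=1000, seed=0):
    """Average gradients sampled uniformly in the ball of radius delta
    around x. The average is a convex combination of generalized
    gradients at nearby points, i.e. an element of the set used to
    define (delta, epsilon)-stationarity."""
    rng = random.Random(seed)
    samples = [grad_abs(x + rng.uniform(-delta, delta)) for _ in range(k)]
    return sum(samples) / k

# Far from the kink, every nearby gradient has the same sign.
assert averaged_gradient(1.0, delta=0.1) == 1.0
# At the kink x = 0, gradients +1 and -1 nearly cancel: x = 0 is
# (delta, epsilon)-stationary for small epsilon, even though every
# individual nearby gradient has norm 1.
g0 = averaged_gradient(0.0, delta=0.1)
assert abs(g0) < 0.2
```

This also mirrors property 5 of Proposition 1: shrinking δ recovers the generalized gradient at x itself.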
Recommended publications
  • Tutorial 8: Solutions
Tutorial 8: Solutions — Applications of the Derivative

1. We are asked to find the absolute maximum and minimum values of f on the given interval, and state where those values occur.

(a) f(x) = 2x³ + 3x² − 12x on [1, 4].

The absolute extrema occur either at a critical point (a stationary point or a point of non-differentiability) or at the endpoints. The stationary points are given by f′(x) = 0, which in this case gives

  6x² + 6x − 12 = 0  ⟹  x = 1, x = −2.

Checking the values at the critical points (only x = 1 is in the interval [1, 4]) and endpoints:

  f(1) = −7,  f(4) = 128.

Therefore the absolute maximum occurs at x = 4 and is given by f(4) = 128, and the absolute minimum occurs at x = 1 and is given by f(1) = −7.

(b) f(x) = (x² + x)^(2/3) on [−2, 3].

As before, we find the critical points. The derivative is given by

  f′(x) = (2/3) · (2x + 1) / (x² + x)^(1/3),

and hence we have a stationary point when 2x + 1 = 0 and points of non-differentiability whenever x² + x = 0. Solving these, we get the three critical points

  x = −1/2, x = 0, x = −1.

Checking the value of the function at these critical points and the endpoints:

  f(−2) = 2^(2/3), f(−1) = 0, f(−1/2) = 2^(−4/3), f(0) = 0, f(3) = 12^(2/3).

Hence the absolute minimum occurs at either x = −1 or x = 0, since in both these cases the minimum value is 0, while the absolute maximum occurs at x = 3 and is f(3) = 12^(2/3).
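The endpoint-and-critical-point recipe in part (a) can be checked mechanically; a short sketch with the function and interval taken from the tutorial:

```python
def f(x):
    # part (a): f(x) = 2x^3 + 3x^2 - 12x on [1, 4]
    return 2 * x**3 + 3 * x**2 - 12 * x

def fprime(x):
    return 6 * x**2 + 6 * x - 12

# Candidates on [1, 4]: the stationary point x = 1 (x = -2 lies outside
# the interval) and the two endpoints.
assert fprime(1) == 0
candidates = [1, 4]
values = {x: f(x) for x in candidates}

assert min(values, key=values.get) == 1 and values[1] == -7    # abs. minimum
assert max(values, key=values.get) == 4 and values[4] == 128   # abs. maximum
```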
  • Lecture Notes – 1
Optimization Methods: Optimization using Calculus — Stationary Points

Module 2, Lecture Notes 1
Stationary points: Functions of Single and Two Variables

Introduction
In this session, stationary points of a function are defined. The necessary and sufficient conditions for the relative maximum of a function of single or two variables are also discussed. The global optimum is also defined in comparison to the relative or local optimum.

Stationary points
For a continuous and differentiable function f(x), a stationary point x* is a point at which the slope of the function vanishes, i.e. f′(x) = 0 at x = x*, where x* belongs to its domain of definition. A stationary point may be a minimum, maximum or an inflection point (Fig. 1: minimum, maximum, inflection point).

Relative and Global Optimum
A function is said to have a relative or local minimum at x = x* if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative values of h, i.e. in the near vicinity of the point x*. Similarly, a point x* is called a relative or local maximum if f(x*) ≥ f(x* + h) for all values of h sufficiently close to zero. A function is said to have a global or absolute minimum at x = x* if f(x*) ≤ f(x) for all x in the domain over which f(x) is defined. Similarly, a function is said to have a global or absolute maximum at x = x* if f(x*) ≥ f(x) for all x in the domain over which f(x) is defined.

D Nagesh Kumar, IISc, Bangalore — M2L1
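The definition of a relative minimum — f(x*) ≤ f(x* + h) for all sufficiently small positive and negative h — can be tested numerically. A small sketch; the example function f(x) = x⁴ − x² is our own choice, not from the notes:

```python
def f(x):
    # our example: two global minima at x = ±sqrt(1/2), local maximum at 0
    return x**4 - x**2

def is_local_min(f, x_star, hs=(1e-3, -1e-3, 1e-4, -1e-4)):
    # the notes' definition: f(x*) <= f(x* + h) for all sufficiently
    # small positive and negative h
    return all(f(x_star) <= f(x_star + h) for h in hs)

x_min = 0.5 ** 0.5                 # stationary point, a relative minimum
assert is_local_min(f, x_min)
assert not is_local_min(f, 0.0)    # x = 0 is a relative maximum, not a minimum
```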
  • The Theory of Stationary Point Processes By
THE THEORY OF STATIONARY POINT PROCESSES
BY FREDERICK J. BEUTLER and OSCAR A. Z. LENEMAN (1)
The University of Michigan, Ann Arbor, Michigan, U.S.A. (2)

Table of contents
Abstract .................................... 159
1.0. Introduction and Summary ......................... 160
2.0. Stationarity properties for point processes ................... 161
2.1. Forward recurrence times and points in an interval ............. 163
2.2. Backward recurrence times and stationarity ................ 167
2.3. Equivalent stationarity conditions .................... 170
3.0. Distribution functions, moments, and sample averages of the s.p.p. ....... 172
3.1. Convexity and absolute continuity .................... 172
3.2. Existence and global properties of moments ................ 174
3.3. First and second moments ........................ 176
3.4. Interval statistics and the computation of moments ............ 178
3.5. Distribution of the t_n .......................... 182
3.6. An ergodic theorem ........................... 184
4.0. Classes and examples of stationary point processes ............... 185
4.1. Poisson processes ............................ 186
4.2. Periodic processes ........................... 187
4.3. Compound processes .......................... 189
4.4. The generalized skip process ....................... 189
4.5. Jitter processes ............................ 192
4.6. Independent identically distributed intervals ................ 193
Acknowledgments ............................... 196
References ................................... 196

Abstract
An axiomatic formulation is presented for point processes which may be interpreted as ordered sequences of points randomly located on the real line. Such concepts as forward recurrence times and number of points in intervals are defined and related in set-theoretic

(1) Presently at Massachusetts Institute of Technology, Lexington, Mass., U.S.A.
(2) This work was supported by the National Aeronautics and Space Administration under research grant NsG-2-59.

11 − 662901. Acta Mathematica 116. Imprimé le 19 septembre 1966.
  • The First Derivative and Stationary Points
Mathematics Learning Centre
The first derivative and stationary points
Jackie Nicholas
© 2004 University of Sydney

The first derivative and stationary points

The derivative dy/dx of a function y = f(x) tells us a lot about the shape of a curve. In this section we will discuss the concepts of stationary points and increasing and decreasing functions. However, we will limit our discussion to functions y = f(x) which are well behaved. Certain functions cause technical difficulties so we will concentrate on those that don't!

The first derivative
The derivative, dy/dx, is the slope of the tangent to the curve y = f(x) at the point x. If we know about the derivative we can deduce a lot about the curve itself.

Increasing functions
If dy/dx > 0 for all values of x in an interval I, then we know that the slope of the tangent to the curve is positive for all values of x in I, and so the function y = f(x) is increasing on the interval I.

For example, let y = x³ + x. Then

  dy/dx = 3x² + 1 > 0 for all values of x.

That is, the slope of the tangent to the curve is positive for all values of x. So, y = x³ + x is an increasing function for all values of x. [Figure: the graph of y = x³ + x.]

We know that the function y = x² is increasing for x > 0. We can work this out from the derivative. If y = x², then

  dy/dx = 2x > 0 for all x > 0.
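The claim that a positive derivative forces the function to increase can be checked on a grid of sample points; a short sketch for the example y = x³ + x from the notes:

```python
def f(x):
    return x**3 + x

def dfdx(x):
    # derivative of x^3 + x
    return 3 * x**2 + 1

# Sample a grid of x values and confirm both facts numerically:
# the derivative is positive everywhere, and f is increasing.
xs = [i / 10 for i in range(-50, 51)]
assert all(dfdx(x) > 0 for x in xs)
assert all(f(a) < f(b) for a, b in zip(xs, xs[1:]))
```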
  • EE2 Maths: Stationary Points
EE2 Maths: Stationary Points

Univariate case: When we maximize f(x) (solving for the x₀ such that df/dx = 0), the gradient is zero at these points. What about the rate of change of gradient, d/dx(df/dx), at the minimum x₀? For a minimum the gradient increases as x₀ → x₀ + Δx (Δx > 0). It follows that d²f/dx² > 0. The opposite is true for a maximum: d²f/dx² < 0, the gradient decreases upon positive steps away from x₀. For a point of inflection d²f/dx² = 0.

Multivariate case: Stationary points occur when ∇f = 0. In 2-d this is (∂f/∂x, ∂f/∂y) = 0, namely, a generalization of the univariate case. Recall that df = (∂f/∂x)dx + (∂f/∂y)dy can be written as df = ds · ∇f where ds = (dx, dy). If ∇f = 0 at (x₀, y₀) then any infinitesimal step ds away from (x₀, y₀) will still leave f unchanged, i.e. df = 0. There are three types of stationary points of f(x, y): maxima, minima and saddle points. We'll draw some of their properties on the board.

We will now attempt to find ways of identifying the character of each of the stationary points of f(x, y). Consider a Taylor expansion about a stationary point (x₀, y₀). We know that ∇f = 0 at (x₀, y₀), so writing (Δx, Δy) = (x − x₀, y − y₀) we find:

  Δf = f(x, y) − f(x₀, y₀)
     ≈ (1/2!) [ (∂²f/∂x²)(Δx)² + 2(∂²f/∂x∂y)ΔxΔy + (∂²f/∂y²)(Δy)² ].   (1)

Maxima: At a maximum all small steps away from (x₀, y₀) lead to Δf < 0.
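The quadratic form in the expansion above determines the character of the stationary point: Δf < 0 for all steps at a maximum, Δf > 0 at a minimum, and both signs occur at a saddle. A small numerical sketch; the example f(x, y) = x² − y², whose only stationary point (0, 0) is a saddle, is our own choice:

```python
# Second-order change in f for a step (dx, dy) away from a stationary
# point, using the quadratic form from the Taylor expansion:
# (1/2)(fxx dx^2 + 2 fxy dx dy + fyy dy^2).
def second_order_df(fxx, fxy, fyy, dx, dy):
    return 0.5 * (fxx * dx**2 + 2 * fxy * dx * dy + fyy * dy**2)

fxx, fxy, fyy = 2.0, 0.0, -2.0   # second derivatives of x^2 - y^2 at (0, 0)

signs = set()
for dx, dy in [(0.01, 0.0), (0.0, 0.01), (0.01, 0.01), (-0.01, 0.02)]:
    q = second_order_df(fxx, fxy, fyy, dx, dy)
    signs.add(1 if q > 0 else -1 if q < 0 else 0)

# Both signs occur, so (0, 0) is neither a maximum nor a minimum: a saddle.
assert 1 in signs and -1 in signs
```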
  • Applications of Differentiation Applications of Differentiation– a Guide for Teachers (Years 11–12)
    1 Supporting Australian Mathematics Project 2 3 4 5 6 12 11 7 10 8 9 A guide for teachers – Years 11 and 12 Calculus: Module 12 Applications of differentiation Applications of differentiation – A guide for teachers (Years 11–12) Principal author: Dr Michael Evans, AMSI Peter Brown, University of NSW Associate Professor David Hunt, University of NSW Dr Daniel Mathews, Monash University Editor: Dr Jane Pitkethly, La Trobe University Illustrations and web design: Catherine Tan, Michael Shaw Full bibliographic details are available from Education Services Australia. Published by Education Services Australia PO Box 177 Carlton South Vic 3053 Australia Tel: (03) 9207 9600 Fax: (03) 9910 9800 Email: [email protected] Website: www.esa.edu.au © 2013 Education Services Australia Ltd, except where indicated otherwise. You may copy, distribute and adapt this material free of charge for non-commercial educational purposes, provided you retain all copyright notices and acknowledgements. This publication is funded by the Australian Government Department of Education, Employment and Workplace Relations. Supporting Australian Mathematics Project Australian Mathematical Sciences Institute Building 161 The University of Melbourne VIC 3010 Email: [email protected] Website: www.amsi.org.au Assumed knowledge ..................................... 4 Motivation ........................................... 4 Content ............................................. 5 Graph sketching ....................................... 5 More graph sketching ..................................
  • Lecture 3: 20 September 2018 3.1 Taylor Series Approximation
COS 597G: Toward Theoretical Understanding of Deep Learning — Fall 2018
Lecture 3: 20 September 2018
Lecturer: Sanjeev Arora    Scribe: Abhishek Bhrushundi, Pritish Sahu, Wenjia Zhang

Recall the basic form of an optimization problem:

  min (or max) f(x) subject to x ∈ B,

where B ⊆ R^d and f : R^d → R is the objective function. Gradient descent (ascent) and its variants are a popular choice for finding approximate solutions to these sorts of problems. In fact, it turns out that these methods also work well empirically even when the objective function f is non-convex, most notably when training neural networks with various loss functions. In this lecture we will explore some of the basic ideas behind gradient descent, try to understand its limitations, and also discuss some of its popular variants that try to get around these limitations. We begin with the Taylor series approximation of functions, which serves as a starting point for these methods.

3.1 Taylor series approximation

We begin by recalling the Taylor series for univariate real-valued functions from Calculus 101: if f : R → R is infinitely differentiable at x ∈ R, then the Taylor series for f at x is the following power series:

  f(x) + f′(x)Δx + f″(x)(Δx)²/2! + … + f⁽ᵏ⁾(x)(Δx)ᵏ/k! + …

Truncating this power series at some power (Δx)ᵏ results in a polynomial that approximates f around the point x. In particular, for small Δx,

  f(x + Δx) ≈ f(x) + f′(x)Δx + f″(x)(Δx)²/2! + … + f⁽ᵏ⁾(x)(Δx)ᵏ/k!.

Here the error of the approximation goes to zero at least as fast as (Δx)ᵏ as Δx → 0.
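The error-rate claim can be observed directly: for a degree-k truncation the error scales like (Δx)^(k+1), so halving Δx should shrink the degree-1 error of sin by roughly 2³ = 8. A small sketch (the choice of sin and of Δx = 0.1 is ours):

```python
import math

def taylor_sin(dx, k):
    # Taylor polynomial of sin at x = 0, truncated at degree k
    return sum((-1)**n * dx**(2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(k // 2 + 1) if 2 * n + 1 <= k)

dx = 0.1
err_big = abs(math.sin(dx) - taylor_sin(dx, 1))          # degree-1 truncation
err_small = abs(math.sin(dx / 2) - taylor_sin(dx / 2, 1))

# Halving dx shrinks the degree-1 error by roughly 2^3 = 8.
ratio = err_big / err_small
assert 7 < ratio < 9
```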
  • Motion in a Straight Line Motion in a Straight Line – a Guide for Teachers (Years 11–12)
    1 Supporting Australian Mathematics Project 2 3 4 5 6 12 11 7 10 8 9 A guide for teachers – Years 11 and 12 Calculus: Module 17 Motion in a straight line Motion in a straight line – A guide for teachers (Years 11–12) Principal author: Dr Michael Evans AMSI Peter Brown, University of NSW Associate Professor David Hunt, University of NSW Dr Daniel Mathews, Monash University Editor: Dr Jane Pitkethly, La Trobe University Illustrations and web design: Catherine Tan, Michael Shaw Full bibliographic details are available from Education Services Australia. Published by Education Services Australia PO Box 177 Carlton South Vic 3053 Australia Tel: (03) 9207 9600 Fax: (03) 9910 9800 Email: [email protected] Website: www.esa.edu.au © 2013 Education Services Australia Ltd, except where indicated otherwise. You may copy, distribute and adapt this material free of charge for non-commercial educational purposes, provided you retain all copyright notices and acknowledgements. This publication is funded by the Australian Government Department of Education, Employment and Workplace Relations. Supporting Australian Mathematics Project Australian Mathematical Sciences Institute Building 161 The University of Melbourne VIC 3010 Email: [email protected] Website: www.amsi.org.au Assumed knowledge ..................................... 4 Motivation ........................................... 4 Content ............................................. 6 Position, displacement and distance ........................... 6 Constant velocity .....................................
  • 1 Overview 2 a Characterization of Convex Functions
AM 221: Advanced Optimization — Spring 2016
Prof. Yaron Singer
Lecture 8 — February 22nd

1 Overview
In the previous lecture we saw characterizations of optimality in linear optimization, and we reviewed the simplex method for finding optimal solutions. In this lecture we return to convex optimization and give a characterization of optimality. The main theorem we will prove today shows that for a convex function f : S → R defined over a convex set S, a stationary point (a point x̄ for which ∇f(x̄) = 0) is a global minimum. We will review basic definitions and properties from multivariate calculus, prove some important properties of convex functions, and obtain various characterizations of optimality.

2 A Characterization of Convex Functions
Recall the definitions we introduced in the second lecture of convex sets and convex functions:

Definition. A set S is called a convex set if any two points in S contain the segment between them, i.e. for any x₁, x₂ ∈ S we have λx₁ + (1 − λ)x₂ ∈ S for any λ ∈ [0, 1].

Definition. For a convex set S ⊆ R^n, we say that a function f : S → R is convex on S if for any two points x₁, x₂ ∈ S and any λ ∈ [0, 1] we have:

  f(λx₁ + (1 − λ)x₂) ≤ λf(x₁) + (1 − λ)f(x₂).

We will first give an important characterization of convex functions. To do so, we need to characterize multivariate functions via their Taylor expansion.

2.1 First-order Taylor approximation
Definition. The gradient of a function f : R^n → R at a point x ∈ R^n, denoted ∇f(x), is:

  ∇f(x) := ( ∂f(x)/∂x₁, …, ∂f(x)/∂xₙ )ᵀ

First-order Taylor expansion.
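The defining inequality of convexity can be probed by random sampling: draw pairs of points and mixing weights and check f(λx₁ + (1 − λ)x₂) ≤ λf(x₁) + (1 − λ)f(x₂). A minimal sketch; the example functions x² and −x² are ours:

```python
import random

def f(x):
    # convex example: f(x) = x^2
    return x * x

def jensen_holds(func, trials=1000, seed=1):
    # sample pairs (x1, x2) and weights lam, and test the convexity
    # inequality f(lam*x1 + (1-lam)*x2) <= lam*f(x1) + (1-lam)*f(x2)
    rng = random.Random(seed)
    for _ in range(trials):
        x1, x2 = rng.uniform(-10, 10), rng.uniform(-10, 10)
        lam = rng.random()
        lhs = func(lam * x1 + (1 - lam) * x2)
        rhs = lam * func(x1) + (1 - lam) * func(x2)
        if lhs > rhs + 1e-9:
            return False
    return True

assert jensen_holds(f)                      # x^2 passes every sampled test
assert not jensen_holds(lambda x: -x * x)   # a concave function fails
```

Sampling can only refute convexity, never prove it; the first-order characterization developed in the lecture is what gives an actual proof.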
  • Implicit Differentiation
Learning Development Service — Differentiation 2
To book an appointment email: [email protected]

Who is this workshop for?
• Scientists — physicists, chemists, social scientists etc.
• Engineers
• Economists
• Calculus is used almost everywhere!

What we covered last time:
• What is differentiation? Differentiation from first principles.
• Differentiating simple functions
• Product and quotient rules
• Chain rule
• Using differentiation tables
• Finding the maxima and minima of functions

What we will cover this time:
• Second order differentiation
• Maxima and minima (revisited)
• Parametric differentiation
• Implicit differentiation

The Chain Rule: Examples
Differentiate:
(i) y = (x² + 3x + 1)^(1/3)
(ii) y = sin(cos x)
(iii) y = cos(3x + 1)
(iv) y = (2x + 1)³ / (2x − 1)

Revisit: maxima and minima — finding the maxima and minima.

The second derivative
The second derivative is found by taking the derivative of a function twice:

  y = f(x),  dy/dx = f′(x),  d²y/dx² = f″(x).

The second derivative can be used to classify a stationary point. If x₀ is the position of a stationary point, the point is a maximum if the second derivative is < 0:

  d²y/dx² < 0,

and a minimum if:

  d²y/dx² > 0.

The nature of the stationary point is inconclusive if the second derivative = 0.

The second derivative: Examples
• Find the stationary points of the function f(x) = x³ − 6x and classify the stationary points.
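The second-derivative test can be run directly on the worksheet's example f(x) = x³ − 6x: the stationary points solve 3x² − 6 = 0, i.e. x = ±√2, and the sign of f″(x) = 6x classifies each. A short sketch:

```python
def fprime(x):
    return 3 * x**2 - 6      # derivative of f(x) = x^3 - 6x

def fsecond(x):
    return 6 * x             # second derivative

# Stationary points: 3x^2 - 6 = 0  =>  x = -sqrt(2), +sqrt(2)
roots = [-(2 ** 0.5), 2 ** 0.5]
classification = {x: ("max" if fsecond(x) < 0 else
                      "min" if fsecond(x) > 0 else "inconclusive")
                  for x in roots}

assert all(abs(fprime(x)) < 1e-12 for x in roots)
assert classification[roots[0]] == "max"   # x = -sqrt(2): f'' < 0
assert classification[roots[1]] == "min"   # x = +sqrt(2): f'' > 0
```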
  • Calculus Online Textbook Chapter 13 Sections 13.5 to 13.7
13.5 The Chain Rule

34 A quadratic function ax² + by² + cx + dy has the gradients shown in Figure B. Estimate a, b, c, d and sketch two level curves.
35 The level curves of f(x, y) are circles around (1, 1). The curve f = c has radius 2c. What is f? What is grad f at (0, 0)?
36 Suppose grad f is tangent to the hyperbolas xy = constant in Figure C. Draw three level curves of f(x, y). Is |grad f| larger at P or Q? Is |grad f| constant along the hyperbolas? Choose a function that could be f: x² + y², x² − y², xy, x²y².
37 Repeat Problem 36, if grad f is perpendicular to the hyperbolas in Figure C.
38 If f = 0, 1, 2 at the points (0, 1), (1, 0), (2, 1), estimate grad f.
42 f = x,  x = cos 2t,  y = sin 2t
43 f = x² − y²,  x = x₀ + 2t,  y = y₀ + 3t
44 f = xy,  x = t² + 1,  y = 3
45 f = ln xyz,  x = eᵗ,  y = e²ᵗ,  z = e⁻ᵗ
46 f = 2x² + 3y² + z²,  x = t,  y = t²,  z = t³
47 (a) Find df/ds and df/dt for the roller-coaster f = xy along the path x = cos 2t, y = sin 2t. (b) Change to f = x² + y² and explain why the slope is zero.
48 The distance D from (x, y) to (1, 2) has D² = (x − 1)² + (y − 2)². Show that ∂D/∂x = (x − 1)/D and ∂D/∂y = (y − 2)/D and |grad D| = 1.
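Problem 47(a) is a direct application of the chain rule: df/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt). A small sketch that checks the chain-rule answer against a finite difference along the path (the helper names are ours):

```python
import math

def df_dt(t):
    # chain rule for f = xy along x = cos 2t, y = sin 2t:
    # df/dt = y * dx/dt + x * dy/dt
    x, y = math.cos(2 * t), math.sin(2 * t)
    return y * (-2 * math.sin(2 * t)) + x * (2 * math.cos(2 * t))

def df_dt_numeric(t, h=1e-6):
    # central finite difference of f along the same path
    g = lambda s: math.cos(2 * s) * math.sin(2 * s)
    return (g(t + h) - g(t - h)) / (2 * h)

for t in (0.0, 0.3, 1.1):
    assert abs(df_dt(t) - df_dt_numeric(t)) < 1e-6

# Along the path, f = cos 2t sin 2t = (1/2) sin 4t, so df/dt = 2 cos 4t.
assert abs(df_dt(0.0) - 2.0) < 1e-12
```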
  • Module C6 Describing Change – an Introduction to Differential Calculus 6 Table of Contents
    Module C6 – Describing change – an introduction to differential calculus Module C6 Describing change – an introduction to differential calculus 6 Table of Contents Introduction .................................................................................................................... 6.1 6.1 Rate of change – the problem of the curve ............................................................ 6.2 6.2 Instantaneous rates of change and the derivative function .................................... 6.15 6.3 Shortcuts for differentiation ................................................................................... 6.19 6.3.1 Polynomial and other power functions ............................................................. 6.19 6.3.2 Exponential functions ....................................................................................... 6.28 6.3.3 Logarithmic functions ...................................................................................... 6.30 6.3.4 Trigonometric functions ................................................................................... 6.34 6.3.5 Where can’t you find a derivative of a function? ............................................. 6.37 6.4 Some applications of differential calculus ............................................................. 6.40 6.4.1 Displacement-velocity-acceleration: when derivatives are meaningful in their own right ........................................................................................................... 6.41 6.4.2 Twists and turns ...............................................................................................