Chapter 3 Gradient-Based Optimization


© J. Fessler, January 16, 2020, 17:44 (class version)

Contents

3.0 Introduction
3.1 Lipschitz continuity
3.2 Gradient descent for smooth convex functions
3.3 Preconditioned steepest descent
    Preconditioning: overview
    Descent direction
    Complex case
3.4 Descent direction for edge-preserving regularizer: complex case
    GD step size with preconditioning
    Finite difference implementation
    Orthogonality for steepest descent and conjugate gradients
3.5 General inverse problems
3.6 Convergence rates
    Heavy ball method
    Generalized convergence analysis of PGD
    Generalized Nesterov fast gradient method (FGM)
3.7 First-order methods
    General first-order method classes
    Optimized gradient method (OGM)
3.8 Machine learning via logistic regression for binary classification
    Adaptive restart of OGM
3.9 Summary

3.0 Introduction

To solve a problem like
$$\hat{x} = \arg\min_{x \in \mathbb{F}^N} \Psi(x)$$
via an iterative method, we start with some initial guess $x_0$, and the algorithm then produces a sequence $\{x_t\}$ that we hope converges to $\hat{x}$, meaning $\|x_t - \hat{x}\| \to 0$ as $t \to \infty$ for some norm $\|\cdot\|$. Which algorithm to use depends greatly on the properties of the cost function $\Psi : \mathbb{F}^N \mapsto \mathbb{R}$.

EECS 551 explored the gradient descent (GD) and preconditioned gradient descent (PGD) algorithms for solving least-squares problems in detail. Here we review the general form of gradient descent (GD) for convex minimization problems; the LS application is simply a special case.

(Venn diagram of classes of nonsmooth convex functions: differentiable; Lipschitz continuous gradient; twice differentiable with bounded curvature; quadratic (LS); composite.)

Motivating application(s)

We focus initially on the numerous SIPML applications where the cost function is convex and smooth, meaning it has a Lipschitz continuous gradient. A concrete family of applications is edge-preserving image recovery, where the measurement model is $y = Ax + \varepsilon$ for some matrix $A$, and we estimate $x$ using
$$\hat{x} = \arg\min_{x \in \mathbb{F}^N} \frac{1}{2} \|Ax - y\|_2^2 + \beta R(x),$$
where the regularizer is convex and smooth, such as
$$R(x) = \sum_k \psi([Cx]_k)$$
for a potential function $\psi$ that has a Lipschitz continuous derivative, such as the Fair potential.

Example. Here is an example of image deblurring or image restoration performed using such a method: the left image is the blurry noisy image $y$, and the right image is the restored image $\hat{x}$.
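To make this setup concrete, here is a minimal numerical sketch (not from the notes) of gradient descent applied to such an edge-preserving cost, using the Fair potential derivative $\dot\psi(t) = t/(1 + |t|/\delta)$. The helper names `fair_dpot` and `gd_edge_preserving`, and the toy denoising setup with $A = I$, are illustrative assumptions; the $1/L$ step size anticipates the analysis in §3.2.

```python
import numpy as np

def fair_dpot(t, delta):
    # Derivative of the Fair potential: psi_dot(t) = t / (1 + |t|/delta).
    # Its Lipschitz constant is 1 for any delta > 0.
    return t / (1 + np.abs(t) / delta)

def gd_edge_preserving(A, y, C, beta, delta, niter=500):
    # GD for Psi(x) = 0.5*||Ax - y||_2^2 + beta * sum_k psi([Cx]_k).
    # grad Psi(x) = A'(Ax - y) + beta * C' psi_dot(Cx), which is Lipschitz
    # with constant L = |||A|||_2^2 + beta*|||C|||_2^2 since psi_dot is 1-Lipschitz.
    L = np.linalg.norm(A, 2) ** 2 + beta * np.linalg.norm(C, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(niter):
        grad = A.T @ (A @ x - y) + beta * (C.T @ fair_dpot(C @ x, delta))
        x = x - grad / L  # step size 1/L (see Section 3.2)
    return x

# Toy 1D restoration: denoising (A = I) of a piecewise-constant signal,
# with C = first-order finite differences.
rng = np.random.default_rng(0)
N = 64
A = np.eye(N)
C = np.diff(np.eye(N), axis=0)
xtrue = np.repeat([0.0, 1.0, 0.3, 0.8], N // 4)
y = A @ xtrue + 0.05 * rng.standard_normal(N)
xhat = gd_edge_preserving(A, y, C, beta=2.0, delta=0.1)
```

In an actual deblurring problem, $A$ would be a blur matrix rather than the identity, and $C$ would be 2D finite differences, but the iteration is unchanged.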
Step sizes and Lipschitz constants: preview

For gradient-based optimization methods, a key issue is choosing an appropriate step size (aka learning rate in ML). Usually the appropriate range of step sizes is determined by the Lipschitz constant of $\nabla\Psi$, so we focus on that next.

3.1 Lipschitz continuity

The concept of Lipschitz continuity is defined for general metric spaces, but we focus on vector spaces.

Define. A function $g : \mathbb{F}^N \mapsto \mathbb{F}^M$ is Lipschitz continuous if there exists $L < \infty$, called a Lipschitz constant, such that
$$\|g(x) - g(z)\| \le L \|x - z\|, \quad \forall x, z \in \mathbb{F}^N.$$
In general the norms on $\mathbb{F}^N$ and $\mathbb{F}^M$ can differ, and $L$ will depend on the choice of the norms. We focus on the Euclidean norms unless otherwise specified.

Define. The smallest such $L$ is called the best Lipschitz constant. (Often just "the" Lipschitz constant.)

Algebraic properties

Let $f$ and $g$ be Lipschitz continuous functions with (best) Lipschitz constants $L_f$ and $L_g$ respectively. Then $h$ has Lipschitz constant $L_h$ as follows:
- $h(x) = \alpha f(x) + \beta$: $L_h = |\alpha| L_f$ (scale/shift)
- $h(x) = f(x - x_0)$: $L_h = L_f$ (translate)
- $h(x) = f(x) + g(x)$: $L_h \le L_f + L_g$ (add)
- $h(x) = f(g(x))$: $L_h \le L_f L_g$ (compose) (HW)
- $h(x) = Ax + b$: $L_h = |||A|||$ (affine, for the same norm on $\mathbb{F}^M$ and $\mathbb{F}^N$)
- $h(x) = f(x) g(x)$: $L_h = ?$ (multiply; see below)

If $f$ and $g$ are Lipschitz continuous functions on $\mathbb{R}$, then $h(x) = f(x) g(x)$ is a Lipschitz continuous function on $\mathbb{R}$. (?) A: True B: False ??

Fact. If $f : \mathbb{F}^N \mapsto \mathbb{F}$ and $g : \mathbb{F}^N \mapsto \mathbb{F}$ are Lipschitz continuous functions, $h(x) \triangleq f(x) g(x)$, and $|f(x)| \le f_{\max} < \infty$ and $|g(x)| \le g_{\max} < \infty$, then $h(\cdot)$ is Lipschitz continuous on $\mathbb{F}^N$ with
$$L_h \le f_{\max} L_g + g_{\max} L_f.$$
Proof.
$$|h(x) - h(z)| = |f(x) g(x) - f(z) g(z)| = |f(x)(g(x) - g(z)) - (f(z) - f(x)) g(z)|$$
$$\le |f(x)| \, |g(x) - g(z)| + |g(z)| \, |f(x) - f(z)| \quad \text{(triangle inequality)}$$
$$\le (f_{\max} L_g + g_{\max} L_f) \|x - z\|_2. \qquad \square$$

Is boundedness of both $f$ and $g$ a necessary condition? (group) No. Think $f(x) = \alpha$ and $g(x) = x$: then $L_{fg} = |\alpha| = f_{\max}$ (since $L_g = 1$), but $g(\cdot)$ is unbounded yet Lipschitz. Or think $f(x) = g(x) = \sqrt{|x|}$, both unbounded; yet $h(x) = f(x) g(x) = |x|$ has $L_h = 1$.

For our purposes, we especially care about cost functions whose gradients are Lipschitz continuous. We call these smooth functions. The definition of gradient is subtle for functions on $\mathbb{C}^N$, so here we focus on $\mathbb{R}^N$.

Define. A differentiable function $f(x)$ is called smooth iff it has a Lipschitz continuous gradient, i.e., iff $\exists L < \infty$ such that
$$\|\nabla f(x) - \nabla f(z)\|_2 \le L \|x - z\|_2, \quad \forall x, z \in \mathbb{R}^N.$$
Lipschitz continuity of $\nabla f$ is a stronger condition than mere continuity, so any differentiable function whose gradient is Lipschitz continuous is in fact a continuously differentiable function. The set of differentiable functions on $\mathbb{R}^N$ having $L$-Lipschitz continuous gradients is sometimes denoted $C_L^{1,1}(\mathbb{R}^N)$ [1, p. 20].

Example. For $f(x) = \frac{1}{2} \|Ax - y\|_2^2$ we have
$$\|\nabla f(x) - \nabla f(z)\|_2 = \|A'(Ax - y) - A'(Az - y)\|_2 = \|A'A(x - z)\|_2 \le |||A'A|||_2 \|x - z\|_2.$$
So a Lipschitz constant of $\nabla f$ is $L_{\nabla f} = |||A'A|||_2 = |||A|||_2^2 = \sigma_1^2(A) = \rho(A'A)$.

The value $L_{\nabla f} = |||A'A|||_2$ is the best Lipschitz constant for $\nabla f(\cdot)$. (?) A: True B: False ??

Here is an interesting geometric property of functions in $C_L^{1,1}(\mathbb{R}^N)$ [1, p. 22, Lemma 1.2.3]:
$$|f(x) - f(z) - \langle \nabla f(z), x - z \rangle| \le \frac{L}{2} \|x - z\|_2^2, \quad \forall x, z \in \mathbb{R}^N.$$
In other words, for any point $z$, the function $f(x)$ is bounded between the two quadratic functions
$$q_\pm(x) \triangleq f(z) + \langle \nabla f(z), x - z \rangle \pm \frac{L}{2} \|x - z\|_2^2.$$
(Picture: the sinusoid $\sin(x)$ with bounding upward and downward parabolas.)
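As a quick numerical check of the least-squares example above (a sketch, not part of the notes), the following verifies that $L = \sigma_1^2(A)$ bounds the gradient-difference ratios for random pairs, and that the bound is attained when $x - z$ lies along the top right singular vector $v_1$ (since $\nabla f$ is affine with matrix $A'A$, this value is indeed attainable):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 30, 20
A = rng.standard_normal((M, N))
y = rng.standard_normal(M)

def grad(x):
    # Gradient of f(x) = 0.5 * ||Ax - y||_2^2.
    return A.T @ (A @ x - y)

# Candidate Lipschitz constant: L = |||A'A|||_2 = sigma_1(A)^2.
L = np.linalg.svd(A, compute_uv=False)[0] ** 2

# Ratios ||grad f(x) - grad f(z)|| / ||x - z|| for random pairs never exceed L.
ratios = [np.linalg.norm(grad(x) - grad(z)) / np.linalg.norm(x - z)
          for x, z in (rng.standard_normal((2, N)) for _ in range(1000))]
print(max(ratios) <= L + 1e-9)   # True

# The bound is attained for x - z along v_1, the top right singular vector:
# grad f(v1) - grad f(0) = A'A v1 = sigma_1^2 v1.
v1 = np.linalg.svd(A)[2][0]
print(np.isclose(np.linalg.norm(grad(v1) - grad(np.zeros(N))), L))  # True
```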
Convex functions with Lipschitz continuous gradients

See [1, p. 56] for many equivalent conditions for a convex differentiable function $f$ to have a Lipschitz continuous gradient, such as the following holding for all $x, z \in \mathbb{R}^N$:
$$\underbrace{f(z) + \langle \nabla f(z), x - z \rangle}_{\text{tangent plane property}} \le f(x) \le \underbrace{f(z) + \langle \nabla f(z), x - z \rangle + \frac{L}{2} \|x - z\|_2^2}_{\text{quadratic majorization property}}. \quad \text{(Picture)}$$
The left inequality holds for all differentiable convex functions.

Fact. If $f(x)$ is twice differentiable and there exists $L < \infty$ such that its Hessian matrix has a bounded spectral norm:
$$|||\nabla^2 f(x)|||_2 \le L, \quad \forall x \in \mathbb{R}^N, \qquad (3.1)$$
then $f(x)$ has a Lipschitz continuous gradient with Lipschitz constant $L$. So twice differentiability with bounded curvature is sufficient, but not necessary, for a function to have a Lipschitz continuous gradient.

Proof. Using Taylor's theorem, the triangle inequality, and the definition of the spectral norm:
$$\|\nabla f(x) - \nabla f(z)\|_2 = \left\| \left( \int_0^1 \nabla^2 f(x + \tau (z - x)) \, d\tau \right) (x - z) \right\|_2 \le \int_0^1 |||\nabla^2 f(x + \tau (z - x))|||_2 \, d\tau \, \|x - z\|_2 \le \int_0^1 L \, d\tau \, \|x - z\|_2 = L \|x - z\|_2. \qquad \square$$

Example. $f(x) = \frac{1}{2} \|Ax - y\|_2^2 \implies \nabla^2 f = A'A$, so $|||\nabla^2 f|||_2 = |||A'A|||_2 = |||A|||_2^2$.

Example. The Lipschitz constant for the gradient of $f(x) \triangleq x' \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} x$ is found from $\nabla^2 f = 2 z z'$ where $z = [1\ 2]'$, so $|||\nabla^2 f|||_2 = 2 \|z\|_2^2 = 10$.

Boundedness of the 2nd derivative is not a necessary condition in general, because Lipschitz continuity of the derivative of a function does not require the function to be twice differentiable.

Example. Consider $f(x) = \frac{1}{2} ([x]_+)^2$. The derivative of this function is $\dot{f}(x) = [x]_+$, which has Lipschitz constant $L = 1$; yet $f$ is not twice differentiable (at $x = 0$).

However, if a 1D function from $\mathbb{R}$ to $\mathbb{R}$ is twice differentiable, then its derivative is Lipschitz iff its second derivative is bounded.

Proof. The "if" direction follows from (3.1). For the "only if" direction, suppose $\ddot{f}$ is unbounded. Then for any $L < \infty$ there exists a point $x \in \mathbb{R}$ such that $\ddot{f}(x) > L$. Let $g = \dot{f}$ and consider $z = x \pm \epsilon$. Then
$$\frac{g(x) - g(z)}{x - z} = \frac{g(x) - g(x \pm \epsilon)}{\mp \epsilon} \to \ddot{f}(x) > L \quad \text{as } \epsilon \to 0,$$
so $g$ cannot be $L$-Lipschitz continuous. Because this holds for every $L < \infty$, the derivative $\dot{f}$ is not Lipschitz. $\square$
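The $2 \times 2$ quadratic example above is easy to verify numerically. Here is a small sketch (an illustration, not from the notes) that computes $|||\nabla^2 f|||_2$ two ways and checks the Lipschitz bound on the gradient:

```python
import numpy as np

z = np.array([1.0, 2.0])
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # B = z z'
H = 2 * B                             # Hessian of f(x) = x' B x is 2B = 2 z z'

# Spectral norm of the (constant) Hessian two ways: both give 10.
print(np.linalg.norm(H, 2))           # 10.0
print(2 * z @ z)                      # 10.0, i.e., 2 * ||z||_2^2

# The gradient, grad f(x) = 2 B x, is therefore 10-Lipschitz:
grad = lambda x: H @ x
rng = np.random.default_rng(2)
x, w = rng.standard_normal(2), rng.standard_normal(2)
ratio = np.linalg.norm(grad(x) - grad(w)) / np.linalg.norm(x - w)
print(ratio <= 10.0 + 1e-12)          # True
```

Because the Hessian here is constant, the spectral norm bound (3.1) is attained exactly, and $L = 10$ is the best Lipschitz constant for $\nabla f$.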