On Sharpness of Error Bounds for Univariate Approximation by Single Hidden Layer Feedforward Neural Networks

Steffen Goebbels*

* Steff[email protected], Niederrhein University of Applied Sciences, Faculty of Electrical Engineering and Computer Science, Institute for Pattern Recognition, D-47805 Krefeld, Germany

Sixth uploaded version: This is a pre-print of an accepted journal paper that will appear in Results in Mathematics (RIMA) under the title "On Sharpness of Error Bounds for Univariate Approximation by Single Hidden Layer Feedforward Neural Networks", 17.06.2020 (arXiv:1811.05199v6 [math.FA]).

Abstract

A new non-linear variant of a quantitative extension of the uniform boundedness principle is used to show sharpness of error bounds for univariate approximation by sums of sigmoid and ReLU functions. Single hidden layer feedforward neural networks with one input node perform such operations. Errors of best approximation can be expressed using moduli of smoothness of the function to be approximated (i.e., to be learned). In this context, the quantitative extension of the uniform boundedness principle indeed allows the construction of counterexamples that show approximation rates to be best possible: approximation errors do not belong to the little-o class of the given bounds. By choosing piecewise linear activation functions, the discussed problem becomes one of free knot spline approximation. Results of the present paper also hold for non-polynomial (and not piecewise defined) activation functions like the inverse tangent. Based on the Vapnik-Chervonenkis dimension, first results are shown for the logistic function.

Keywords: Neural Networks, Rates of Convergence, Sharpness of Error Bounds, Counter Examples, Uniform Boundedness Principle

AMS Subject Classification 2010: 41A25, 41A50, 62M45

1 Introduction

A feedforward neural network with an activation function σ, one input node, one output node, and one hidden layer of n neurons as shown in Figure 1 implements a univariate real function g of type

  g(x) = \sum_{k=1}^{n} a_k \sigma(b_k x + c_k)

with weights a_k, b_k, c_k ∈ ℝ. The given paper does not deal with multivariate approximation, but some results can be extended to multiple input nodes, see Section 5.

[Fig. 1: One hidden layer neural network realizing \sum_{k=1}^{n} a_k \sigma(b_k x + c_k)]

Often, activation functions are sigmoid. A sigmoid function σ : ℝ → ℝ is a measurable function with

  \lim_{x \to -\infty} \sigma(x) = 0 \quad\text{and}\quad \lim_{x \to \infty} \sigma(x) = 1.

Sometimes monotonicity, boundedness, continuity, or even differentiability are additionally prescribed. Deviant definitions are based on convexity and concavity. In the case of differentiability, such functions have a bell-shaped first derivative. Throughout this paper, approximation properties of the following sigmoid functions are discussed:

  \sigma_h(x) := \begin{cases} 0, & x < 0 \\ 1, & x \ge 0 \end{cases} \quad \text{(Heaviside function)},

  \sigma_c(x) := \begin{cases} 0, & x < -\frac{1}{2} \\ x + \frac{1}{2}, & -\frac{1}{2} \le x \le \frac{1}{2} \\ 1, & x > \frac{1}{2} \end{cases} \quad \text{(cut function)},

  \sigma_a(x) := \frac{1}{2} + \frac{1}{\pi}\arctan(x) \quad \text{(inverse tangent)},

  \sigma_l(x) := \frac{1}{1 + e^{-x}} = \frac{1}{2}\left(1 + \tanh\left(\frac{x}{2}\right)\right) \quad \text{(logistic function)}.

Although not a sigmoid function, the ReLU function (Rectified Linear Unit)

  \sigma_r(x) := \max\{0, x\}

is often used as activation function for deep neural networks due to its computational simplicity. The Exponential Linear Unit (ELU) activation function

  \sigma_e(x) := \begin{cases} \alpha(e^x - 1), & x < 0 \\ x, & x \ge 0 \end{cases}

with parameter α ≠ 0 is a smoother variant of ReLU for α = 1.
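To make these objects concrete, the following minimal Python sketch (assuming NumPy; function names such as network, sigma_h, and sigma_l are illustrative and not taken from the paper) evaluates the activation functions listed above and a single hidden layer network g(x) = \sum_{k=1}^{n} a_k \sigma(b_k x + c_k) for given weights.

```python
import numpy as np

# Activation functions discussed in the paper (illustrative implementations).
def sigma_h(x):            # Heaviside function
    return np.where(x >= 0.0, 1.0, 0.0)

def sigma_c(x):            # cut function
    return np.clip(x + 0.5, 0.0, 1.0)

def sigma_a(x):            # inverse tangent
    return 0.5 + np.arctan(x) / np.pi

def sigma_l(x):            # logistic function
    return 1.0 / (1.0 + np.exp(-x))

def sigma_r(x):            # ReLU (not a sigmoid function)
    return np.maximum(0.0, x)

def network(x, a, b, c, sigma):
    """Single hidden layer network g(x) = sum_k a_k * sigma(b_k * x + c_k)."""
    x = np.asarray(x, dtype=float)
    # One column per neuron; sum the weighted activations over the neurons.
    return np.sum(a * sigma(np.outer(x, b) + c), axis=1)

# Example: three neurons with logistic activation on [0, 1].
a = np.array([0.5, -1.0, 2.0])
b = np.array([3.0, 5.0, -2.0])
c = np.array([0.0, -2.5, 1.0])
print(network(np.linspace(0.0, 1.0, 5), a, b, c, sigma_l))
```

The same routine can be used with any of the activation functions above, since only σ is exchanged.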
Qualitative approximation properties of neural networks have been studied extensively. For example, it is possible to choose an infinitely often differentiable, almost monotonous, sigmoid activation function σ such that for each continuous function f, each compact interval, and each bound ε > 0 there exist weights a_0, a_1, b_1, c_1 ∈ ℝ such that f can be approximated uniformly on the interval by a_0 + a_1σ(b_1 x + c_1) within the bound ε, see [27] and the literature cited there. In this sense, a neural network with only one hidden neuron is capable of approximating every continuous function. However, activation functions are typically chosen fixed when neural networks are applied to solve application problems; they do not depend on the unknown function to be approximated.

In the late 1980s it was already known that, by increasing the number of neurons, all continuous functions can be approximated arbitrarily well in the sup-norm on a compact set with each non-constant, bounded, monotonically increasing, and continuous activation function (universal approximation or density property, see the proof of Funahashi in [22]). For each continuous sigmoid activation function (which does not have to be monotone), the universal approximation property was proved by Cybenko in [14] on the unit cube. The result was extended to bounded sigmoid activation functions by Jones [31] without requiring continuity or monotonicity. For monotone sigmoid (and not necessarily continuous) activation functions, Hornik, Stinchcombe and White [29] extended the universal approximation property to the approximation of measurable functions. Hornik [28] proved density in L^p spaces for arbitrary non-constant, bounded, and continuous activation functions. A rather general theorem is proved in [35]: Leshno et al. showed, for any continuous activation function σ, that the universal approximation property is equivalent to σ not being an algebraic polynomial. There are many other density results, e.g., cf. [7, 12]. Literature overviews can be found in [42] and [44].

To approximate or interpolate a given but unknown function f, the constants a_k, b_k, and c_k are typically obtained by learning based on sampled function values of f. The underlying optimization algorithm (like gradient descent with back propagation) might get stuck in a local but not global minimum. Thus, it might not find constants that approximate f best possible. This paper does not focus on learning but on general approximation properties of the function spaces

  \Phi_{n,\sigma} := \left\{ g : [0,1] \to \mathbb{R} : g(x) = \sum_{k=1}^{n} a_k \sigma(b_k x + c_k),\ a_k, b_k, c_k \in \mathbb{R} \right\}.

Thus, we discuss functions on the interval [0, 1]. Without loss of generality, it is used instead of an arbitrary compact interval [a, b]. In some papers, an additional constant summand a_0 is allowed in the definition of Φ_{n,σ}. Please note that a_kσ(0 · x + b_k) already is a constant, so that the definitions do not differ significantly.

For f ∈ B[0, 1], the set of bounded real-valued functions on the interval [0, 1], let \|f\|_{B[0,1]} := \sup\{|f(x)| : x \in [0,1]\}. By C[0, 1] we denote the Banach space of continuous functions on [0, 1] equipped with the norm \|f\|_{C[0,1]} := \|f\|_{B[0,1]}. For the Banach spaces L^p[0, 1], 1 ≤ p < ∞, of measurable functions we denote the norm by

  \|f\|_{L^p[0,1]} := \left( \int_0^1 |f(x)|^p \, dx \right)^{1/p}.

To avoid case differentiation, we set X^\infty[0,1] := C[0,1] with \|\cdot\|_{X^\infty[0,1]} := \|\cdot\|_{B[0,1]}, and X^p[0,1] := L^p[0,1] with \|f\|_{X^p[0,1]} := \|f\|_{L^p[0,1]}, 1 ≤ p < ∞. The error of best approximation E(Φ_{n,σ}, f)_p, 1 ≤ p ≤ ∞, is defined via

  E(\Phi_{n,\sigma}, f)_p := \inf\{ \|f - g\|_{X^p[0,1]} : g \in \Phi_{n,\sigma} \}.

We use the abbreviation E(Φ_{n,σ}, f) := E(Φ_{n,σ}, f)_∞ for p = ∞. A trained network cannot approximate a function better than the error of best approximation. It is therefore an important measure of what can and what cannot be done with such a network.
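Since E(Φ_{n,σ}, f)_p is an infimum, every explicitly constructed g ∈ Φ_{n,σ} gives an upper bound for it. The following sketch (illustrative only, not a construction from the paper; NumPy assumed, all names hypothetical) builds a step function in Φ_{n,σ_h} with Heaviside activation that matches f at the knots (k−1)/n and measures the sup-norm distance on a grid, i.e., a numerical stand-in for an upper bound on E(Φ_{n,σ_h}, f).

```python
import numpy as np

def sigma_h(x):
    return np.where(x >= 0.0, 1.0, 0.0)

def heaviside_step_approx(f, n):
    """Weights (a, b, c) of a g in Phi_{n, sigma_h} that matches f at the
    knots t_k = (k-1)/n, k = 1, ..., n (a simple upper-bound construction,
    not the best approximation)."""
    t = np.arange(n) / n                 # knots 0, 1/n, ..., (n-1)/n
    ft = f(t)
    a = np.empty(n)
    a[0] = ft[0]                         # jump to f(0) at t_1 = 0
    a[1:] = np.diff(ft)                  # jump by f(t_k) - f(t_{k-1}) at t_k
    b = np.ones(n)                       # sigma_h(b_k x + c_k) = sigma_h(x - t_k)
    c = -t
    return a, b, c

def sup_error(f, a, b, c, m=2001):
    """Grid estimate of ||f - g||_{B[0,1]}; since g is one admissible element
    of Phi_{n, sigma_h}, this value approximates an upper bound for
    E(Phi_{n, sigma_h}, f)."""
    x = np.linspace(0.0, 1.0, m)
    g = np.sum(a * sigma_h(np.outer(x, b) + c), axis=1)
    return np.max(np.abs(f(x) - g))

f = lambda x: np.sqrt(x)                 # example target function
for n in (10, 100, 1000):
    a, b, c = heaviside_step_approx(f, n)
    print(n, sup_error(f, a, b, c))
```

For f(x) = √x the printed errors decrease roughly like n^{-1/2}, which equals ω_1(f, 1/n) and thus matches the order of the direct estimate discussed next.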
The error of best approximation depends on the smoothness of f, which is measured in terms of moduli of smoothness (or moduli of continuity). In contrast to derivatives, first and higher differences of f always exist. By applying a norm to such differences, moduli of smoothness measure a "degree of continuity" of f. For a natural number r ∈ ℕ = {1, 2, ...}, the rth difference of f ∈ B[0, 1] at a point x ∈ [0, 1 − rh] with step size h > 0 is defined as

  \Delta_h^1 f(x) := f(x+h) - f(x), \qquad \Delta_h^r f(x) := \Delta_h \Delta_h^{r-1} f(x), \ r > 1,

or, equivalently,

  \Delta_h^r f(x) := \sum_{j=0}^{r} (-1)^{r-j} \binom{r}{j} f(x + jh).

The rth uniform modulus of smoothness is the smallest upper bound of the absolute values of rth differences:

  \omega_r(f, \delta) := \omega_r(f, \delta)_\infty := \sup\left\{ |\Delta_h^r f(x)| : x \in [0, 1 - rh],\ 0 < h \le \delta \right\}.

With respect to L^p spaces, 1 ≤ p < ∞, let

  \omega_r(f, \delta)_p := \sup\left\{ \left( \int_0^{1-rh} |\Delta_h^r f(x)|^p \, dx \right)^{1/p} : 0 < h \le \delta \right\}.

Obviously, \omega_r(f, \delta)_p \le 2^r \|f\|_{X^p[0,1]}, and for r-times continuously differentiable functions f there holds (cf. [16, p. 46])

  \omega_r(f, \delta)_p \le \delta^r \|f^{(r)}\|_{X^p[0,1]}.   (1.1)

Barron applied Fourier methods in [4], cf. [36], to establish rates of convergence in an L^2 norm, i.e., he estimated the error E(Φ_{n,σ}, f)_2 with respect to n for n → ∞. Makovoz [40] analyzed rates of uniform convergence. With respect to moduli of smoothness, Debao [15] proved a direct estimate that is presented here in the version of the textbook [9, p. 172ff]. This estimate is independent of the choice of the bounded, sigmoid function σ. The doctoral thesis [10], cf. [11], provides an overview of such direct estimates in its Section 1.3. According to Debao,

  E(\Phi_{n,\sigma}, f) \le \sup_{x \in \mathbb{R}} |\sigma(x)| \, \omega_1\!\left(f, \frac{1}{n}\right)   (1.2)

holds for each f ∈ C[0, 1]. This is the prototype estimate for which sharpness is discussed in this paper. In fact, Debao's result for E(Φ_{n,σ}, f) allows the weights to be additionally restricted such that b_k ∈ ℕ and c_k ∈ ℤ. The estimate has to hold true even when σ is the discontinuous Heaviside function.
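The quantities appearing in (1.1) and (1.2) can be checked numerically directly from the definitions. The sketch below (illustrative; the helper name uniform_modulus is an assumption, NumPy and Python's math.comb are used) estimates ω_r(f, δ) by sampling step sizes 0 < h ≤ δ and admissible points x ∈ [0, 1 − rh]. For f(x) = √x it reproduces ω_1(f, 1/n) = n^{-1/2}, the quantity on the right-hand side of Debao's estimate (1.2).

```python
import numpy as np
from math import comb

def uniform_modulus(f, r, delta, grid=2001, steps=50):
    """Numerical estimate of the r-th uniform modulus of smoothness
    omega_r(f, delta) on [0, 1]: maximize |Delta_h^r f(x)| over sampled
    step sizes 0 < h <= delta and points x in [0, 1 - r*h]."""
    best = 0.0
    for h in np.linspace(delta / grid, delta, steps):      # sampled step sizes
        x = np.linspace(0.0, 1.0 - r * h, grid)            # admissible points
        diff = np.zeros_like(x)
        for j in range(r + 1):                             # Delta_h^r f(x)
            diff += (-1) ** (r - j) * comb(r, j) * f(x + j * h)
        best = max(best, np.max(np.abs(diff)))
    return best

f = lambda x: np.sqrt(x)
for n in (10, 100, 1000):
    # omega_1(f, 1/n) is the modulus that appears in Debao's estimate (1.2).
    print(n, uniform_modulus(f, 1, 1.0 / n))
```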
