On the Approximation of the Cut and Step Functions by Logistic and Gompertz Functions

Total Pages: 16

File Type: pdf, Size: 1020 KB

On the Approximation of the Cut and Step Functions by Logistic and Gompertz Functions
www.biomathforum.org/biomath/index.php/biomath
ORIGINAL ARTICLE

Anton Iliev*†, Nikolay Kyurkchiev†, Svetoslav Markov†
*Faculty of Mathematics and Informatics, Paisii Hilendarski University of Plovdiv, Plovdiv, Bulgaria. Email: [email protected]
†Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Sofia, Bulgaria. Emails: [email protected], [email protected]
Received: , accepted: , published: will be added later
Citation: Anton Iliev, Nikolay Kyurkchiev, Svetoslav Markov, On the Approximation of the Cut and Step Functions by Logistic and Gompertz Functions, Biomath 4 (2015), 15, http://dx.doi.org/10.11145/j.biomath.2015..

Abstract: We study the uniform approximation of the sigmoid cut function by smooth sigmoid functions such as the logistic and the Gompertz functions. The limiting case of the interval-valued step function is discussed using the Hausdorff metric. Various expressions for the error estimates of the corresponding uniform and Hausdorff approximations are obtained. Numerical examples are presented using CAS MATHEMATICA.

Keywords: cut function; step function; sigmoid function; logistic function; Gompertz function; squashing function; Hausdorff approximation.

I. INTRODUCTION

In this paper we discuss some computational, modelling and approximation issues related to several classes of sigmoid functions. Sigmoid functions find numerous applications in various fields related to life sciences, chemistry, physics, artificial intelligence, etc. In fields such as signal processing, pattern recognition, machine learning and artificial neural networks, sigmoid functions are also known as "activation" and "squashing" functions. In this work we concentrate on several practically important classes of sigmoid functions. Two of them are the cut (or ramp) functions and the step functions. Cut functions are continuous but they are not smooth (differentiable) at the two endpoints of the interval where they increase. Step functions can be viewed as a limiting case of cut functions; they are not continuous but they are Hausdorff continuous (H-continuous) [4], [43]. In some applications smooth sigmoid functions are preferred; some authors even require smoothness in the definition of sigmoid functions. Two familiar classes of smooth sigmoid functions are the logistic and the Gompertz functions. There are situations when one needs to pass from nonsmooth sigmoid functions (e.g. cut functions) to smooth sigmoid functions, and vice versa. Such a necessity raises the issue of approximating nonsmooth sigmoid functions by smooth sigmoid functions.

One can encounter similar approximation problems when looking for appropriate models for fitting time-course measurement data coming e.g. from cellular growth experiments. Depending on the general view of the data one can decide to use initially a cut function in order to obtain rough initial values for certain parameters, such as the maximum growth rate. Then one can use a more sophisticated model (logistic or Gompertz) to obtain a better fit to the measurement data. The presented results may be used to indicate to what extent and in what sense a model can be improved by another one and how the two models can be compared.

Section 2 contains preliminary definitions and motivations. In Section 3 we study the uniform and Hausdorff approximation of the cut functions by logistic functions. Curiously, the uniform distance between a cut function and the logistic function of best uniform approximation is an absolute constant not depending on the slope of the functions, a result observed in [18]. By contrast, it turns out that the Hausdorff distance (H-distance) depends on the slope and tends to zero when the slope increases. Having shown that the family of logistic functions cannot approximate the cut function arbitrarily well, we then consider the limiting case when the cut function tends to the step function (in Hausdorff sense). In this way we obtain an extension of a previous result on the Hausdorff approximation of the step function by logistic functions [4]. In Section 4 we discuss the approximation of the cut function by a family of squashing functions induced by the logistic function. It has been shown in [18] that the latter family approximates the cut function uniformly and arbitrarily well. We propose a new estimate for the H-distance between the cut function and its best approximating squashing function. Our estimate is then extended to cover the limiting case of the step function. In Section 5 the approximation of the cut function by Gompertz functions is considered using similar techniques as in the previous sections. The application of the logistic and Gompertz functions in life sciences is briefly discussed. Numerical examples are presented throughout the paper using the computer algebra system MATHEMATICA.
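As a quick illustration of the two smooth sigmoid families discussed above, the minimal sketch below uses standard parameterizations of the logistic and Gompertz functions (an assumption of this sketch; the exact forms used later in the paper are not part of this excerpt) and checks numerically that both are normalized sigmoids, i.e. monotone non-decreasing with asymptotes 0 and 1.

```python
import numpy as np

# Standard parameterizations, assumed for this sketch; the exact forms used
# later in the paper are not shown in this excerpt.
def logistic(t, k=1.0):
    """Verhulst logistic sigmoid with slope parameter k."""
    return 1.0 / (1.0 + np.exp(-k * t))

def gompertz(t, b=1.0, c=1.0):
    """Gompertz sigmoid exp(-b * exp(-c * t))."""
    return np.exp(-b * np.exp(-c * t))

t = np.linspace(-20.0, 20.0, 2001)
for s in (logistic, gompertz):
    y = s(t)
    # monotone non-decreasing, left asymptote ~0, right asymptote ~1
    print(s.__name__, np.all(np.diff(y) >= 0.0), y[0], y[-1])
```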
II. PRELIMINARIES

Sigmoid functions. In this work we consider sigmoid functions of a single variable defined on the real line, that is functions s of the form s : R → R. Sigmoid functions can be defined as bounded monotone non-decreasing functions on R. One usually makes use of normalized sigmoid functions defined as monotone non-decreasing functions s(t), t ∈ R, such that lim_{t→−∞} s(t) = 0 and lim_{t→∞} s(t) = 1. In the fields of neural networks and machine learning, sigmoid-like functions of many variables are used, familiar under the name activation functions. (In some applications the sigmoid functions are normalised so that the lower asymptote is assumed to be −1: lim_{t→−∞} s(t) = −1.)

Cut (ramp) functions. Let ∆ = [γ − δ, γ + δ] be an interval on the real line R with centre γ ∈ R and radius δ ∈ R. A cut function (on ∆) is defined as follows:

Definition 1. The cut function c_{γ,δ} on ∆ is defined for t ∈ R by

$$c_{\gamma,\delta}(t) = \begin{cases} 0, & \text{if } t < \Delta, \\ \dfrac{t - \gamma + \delta}{2\delta}, & \text{if } t \in \Delta, \\ 1, & \text{if } \Delta < t. \end{cases} \qquad (1)$$

Note that the slope of the function c_{γ,δ}(t) on the interval ∆ is 1/(2δ) (the slope is constant in the whole interval ∆). Two special cases are of interest for our discussion in the sequel.

Special case 1. For γ = 0 we obtain a cut function on the interval ∆ = [−δ, δ]:

$$c_{0,\delta}(t) = \begin{cases} 0, & \text{if } t < -\delta, \\ \dfrac{t + \delta}{2\delta}, & \text{if } -\delta \le t \le \delta, \\ 1, & \text{if } \delta < t. \end{cases} \qquad (2)$$

Special case 2. For γ = δ we obtain the cut function on ∆ = [0, 2δ]:

$$c_{\delta,\delta}(t) = \begin{cases} 0, & \text{if } t < 0, \\ \dfrac{t}{2\delta}, & \text{if } 0 \le t \le 2\delta, \\ 1, & \text{if } 2\delta < t. \end{cases} \qquad (3)$$
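A minimal sketch of Definition 1 in code, assuming only the formulas (1)-(3) above; the helper name `cut` is this sketch's own.

```python
import numpy as np

def cut(t, gamma, delta):
    """Cut function c_{gamma,delta} of Definition 1 (eq. (1)): 0 to the left of
    [gamma - delta, gamma + delta], 1 to the right, and linear with slope
    1/(2*delta) inside the interval."""
    t = np.asarray(t, dtype=float)
    return np.clip((t - gamma + delta) / (2.0 * delta), 0.0, 1.0)

delta = 0.5
t = np.linspace(-2.0, 2.0, 9)
print(cut(t, 0.0, delta))     # special case (2): c_{0,delta} on [-delta, delta]
print(cut(t, delta, delta))   # special case (3): c_{delta,delta} on [0, 2*delta]

# The slope inside the interval is constant and equals 1/(2*delta).
eps = 1e-6
slope = (cut(eps, 0.0, delta) - cut(0.0, 0.0, delta)) / eps
print(slope, 1.0 / (2.0 * delta))
```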
Step functions. The step function (with "jump" at γ ∈ R) can be defined by

$$h_\gamma(t) = c_{\gamma,0}(t) = \begin{cases} 0, & \text{if } t < \gamma, \\ [0, 1], & \text{if } t = \gamma, \\ 1, & \text{if } t > \gamma, \end{cases} \qquad (4)$$

which is an interval-valued function (or just interval function) [4], [43]. In the literature various point values, such as 0, 1/2 or 1, are prescribed to the step function (4) at the point γ; we prefer the interval value [0, 1]. When the jump is at the origin, that is γ = 0, then the step function is known as the Heaviside step function; its "interval" formulation is:

$$h_0(t) = c_{0,0}(t) = \begin{cases} 0, & \text{if } t < 0, \\ [0, 1], & \text{if } t = 0, \\ 1, & \text{if } t > 0. \end{cases} \qquad (5)$$

H-distance. The step function can be perceived as a limiting case of the cut function. Namely, for δ → 0, the cut function c_{δ,δ} tends in "Hausdorff sense" to the step function. Here "Hausdorff sense" means Hausdorff distance, briefly H-distance. The H-distance ρ(f, g) between two interval functions f, g on Ω ⊆ R is the distance between their completed graphs F(f) and F(g) considered as closed subsets of Ω × R [24], [41]. More precisely,

$$\rho(f, g) = \max\left\{ \sup_{A \in F(f)} \inf_{B \in F(g)} \|A - B\|,\; \sup_{B \in F(g)} \inf_{A \in F(f)} \|A - B\| \right\}, \qquad (6)$$

wherein ||·|| is any norm in R², e.g. the maximum norm ||(t, x)|| = max{|t|, |x|}.

For the sake of simplicity, throughout the paper we shall work with some of the special cut functions (2), (3) instead of the more general (arbitrarily shifted) cut function (1); these special choices will not lead to any loss of generality concerning the results obtained. Moreover, for all sigmoid functions considered in the sequel we shall define a "basic" sigmoid function such that any member of the corresponding class is obtained by replacing the argument t by t − γ, that is, by shifting the basic function by some γ ∈ R.

Logistic and Gompertz functions: applications to life sciences. In this work we focus on two familiar smooth sigmoid functions, namely the Gompertz function and the Verhulst logistic function. Both their inventors, B. Gompertz and P.-F. Verhulst, were motivated by the famous demographic studies of Thomas Malthus. The Gompertz function was introduced by Benjamin Gompertz [22] for the study of demographic phenomena, more specifically human aging [38], [39], [47]. Gompertz functions find numerous applications in biology, ecology and medicine. A. K. Laird successfully used the Gompertz curve to fit data on the growth of tumors [32]; tumors are cellular populations growing in a confined space where the availability of nutrients is limited [1], [2], [15], [19]. A number of experimental scientists apply Gompertz models in bacterial cell growth, more specifically in food control [10], [31], [42], [48], [49], [50]. Gompertz models prove to be useful in animal and agro-sciences as well [8], [21], [27], [48].
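The H-distance (6) can be checked numerically by discretizing the completed graphs. The rough sketch below assumes the standard logistic parameterization 1/(1 + e^{-kt}) (not given in this excerpt) and shows the distance to the Heaviside step shrinking as the slope k grows, in line with the discussion in the Introduction.

```python
import numpy as np

def hausdorff(A, B):
    """Brute-force Hausdorff distance (eq. (6)) between two finite point sets
    A, B in R^2, using the maximum norm ||(t, x)|| = max(|t|, |x|)."""
    D = np.max(np.abs(A[:, None, :] - B[None, :, :]), axis=2)  # pairwise max-norm
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def logistic(t, k):
    return 1.0 / (1.0 + np.exp(-k * t))

t = np.linspace(-5.0, 5.0, 1001)
y = np.linspace(0.0, 1.0, 101)
# Completed graph F(h_0) of the Heaviside step: the two horizontal pieces plus
# the vertical segment {0} x [0, 1] at the jump.
step_graph = np.vstack([
    np.column_stack([t[t < 0], np.zeros(np.count_nonzero(t < 0))]),
    np.column_stack([t[t > 0], np.ones(np.count_nonzero(t > 0))]),
    np.column_stack([np.zeros_like(y), y]),
])

for k in (5.0, 20.0, 80.0):
    sig_graph = np.column_stack([t, logistic(t, k)])
    print(k, hausdorff(step_graph, sig_graph))   # shrinks as the slope k grows
```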
Recommended publications
  • Discontinuous Forcing Functions
Math 135A, Winter 2012. Discontinuous forcing functions.

1 Preliminaries. If f(t) is defined on the interval [0, ∞), then its Laplace transform is defined to be
$$F(s) = \mathcal{L}(f(t)) = \int_0^\infty e^{-st} f(t)\, dt,$$
as long as this integral is defined and converges. In particular, if f is of exponential order and is piecewise continuous, the Laplace transform of f(t) will be defined.
• f is of exponential order if there are constants M and c so that |f(t)| ≤ M e^{ct}. Since the integral $\int_0^\infty e^{-st} M e^{ct}\, dt$ converges if s > c, then by a comparison test (like (11.7.2) in Salas-Hille-Etgen), the integral defining the Laplace transform of f(t) will converge.
• f is piecewise continuous if over each interval [0, b], f(t) has only finitely many discontinuities, and at each point a in [0, b], both of the limits $\lim_{t \to a^-} f(t)$ and $\lim_{t \to a^+} f(t)$ exist; they need not be equal, but they must exist. (At the endpoints 0 and b, the appropriate one-sided limits must exist.)

2 Step functions. Define u(t) to be the function
$$u(t) = \begin{cases} 0 & \text{if } t < 0, \\ 1 & \text{if } t \ge 0. \end{cases}$$
Then u(t) is called the step function, or sometimes the Heaviside step function: it jumps from 0 to 1 at t = 0. Note that for any number a > 0, the graph of the function u(t − a) is the same as the graph of u(t), but translated right by a: u(t − a) jumps from 0 to 1 at t = a.
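As a sanity check of the definitions above, the sketch below evaluates the Laplace transform integral of the shifted step u(t − a) numerically and compares it with the standard table result e^{−as}/s (the table formula is not stated in this excerpt; it is quoted here as a known fact).

```python
import numpy as np
from scipy.integrate import quad

def laplace(f, s, a_break, T=200.0):
    """Numerically evaluate the Laplace transform integral of f at s,
    truncating the improper integral at t = T and splitting at the jump."""
    val, _ = quad(lambda t: np.exp(-s * t) * f(t), 0.0, T,
                  points=[a_break], limit=400)
    return val

a, s = 1.5, 2.0
u_shift = lambda t: 1.0 if t >= a else 0.0    # u(t - a)
print(laplace(u_shift, s, a))                 # numerical value
print(np.exp(-a * s) / s)                     # table result exp(-a*s)/s
```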
  • Laplace Transforms: Theory, Problems, and Solutions
Laplace Transforms: Theory, Problems, and Solutions. Marcel B. Finan, Arkansas Tech University. © All Rights Reserved.

Contents: 43 The Laplace Transform: Basic Definitions and Results; 44 Further Studies of Laplace Transform; 45 The Laplace Transform and the Method of Partial Fractions; 46 Laplace Transforms of Periodic Functions; 47 Convolution Integrals; 48 The Dirac Delta Function and Impulse Response; 49 Solving Systems of Differential Equations Using Laplace Transform; 50 Solutions to Problems.

43 The Laplace Transform: Basic Definitions and Results. The Laplace transform is yet another operational tool for solving constant-coefficient linear differential equations. The process of solution consists of three main steps:
• The given "hard" problem is transformed into a "simple" equation.
• This simple equation is solved by purely algebraic manipulations.
• The solution of the simple equation is transformed back to obtain the solution of the given problem.
In this way the Laplace transformation reduces the problem of solving a differential equation to an algebraic problem. The third step is made easier by tables, whose role is similar to that of integral tables in integration. The above procedure can be summarized by Figure 43.1. In this section we introduce the concept of Laplace transform and discuss some of its properties. The Laplace transform is defined in the following way. Let f(t) be defined for t ≥ 0. Then the Laplace transform of f, which is denoted by L[f(t)] or by F(s), is defined by the following equation:
$$\mathcal{L}[f(t)] = F(s) = \lim_{T \to \infty} \int_0^T f(t)\, e^{-st}\, dt = \int_0^\infty f(t)\, e^{-st}\, dt.$$
The integral which defines a Laplace transform is an improper integral.
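A small sketch of the three-step procedure above using SymPy, for an illustrative ODE chosen here (y'' + y = 1 with zero initial conditions); the ODE and the symbol names are this sketch's assumptions, not an example from the text.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Ys = sp.symbols('Ys')

# Step 1: transform the "hard" problem y'' + y = 1, y(0) = y'(0) = 0 into a
# "simple" algebraic equation: L[y''] = s**2 * Y(s) and L[1] = 1/s here.
alg_eq = sp.Eq(s**2 * Ys + Ys, 1 / s)

# Step 2: solve the simple equation by purely algebraic manipulation.
Ys_sol = sp.solve(alg_eq, Ys)[0]

# Step 3: transform back (inverse Laplace transform) to obtain the solution.
y = sp.inverse_laplace_transform(Ys_sol, s, t)
print(sp.simplify(y))   # expected: 1 - cos(t), times Heaviside(t)
```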
  • 10 Heat Equation: Interpretation of the Solution
Math 124A, October 26, 2011. Viktor Grigoryan.

10 Heat equation: interpretation of the solution

Last time we considered the IVP for the heat equation on the whole line
$$u_t - k u_{xx} = 0 \quad (-\infty < x < \infty,\; 0 < t < \infty), \qquad u(x, 0) = \varphi(x), \qquad (1)$$
and derived the solution formula
$$u(x, t) = \int_{-\infty}^{\infty} S(x - y, t)\, \varphi(y)\, dy, \quad \text{for } t > 0, \qquad (2)$$
where S(x, t) is the heat kernel,
$$S(x, t) = \frac{1}{\sqrt{4\pi k t}}\, e^{-x^2/4kt}. \qquad (3)$$
Substituting this expression into (2), we can rewrite the solution as
$$u(x, t) = \frac{1}{\sqrt{4\pi k t}} \int_{-\infty}^{\infty} e^{-(x - y)^2/4kt}\, \varphi(y)\, dy, \quad \text{for } t > 0. \qquad (4)$$
Recall that to derive the solution formula we first considered the heat IVP with the following particular initial data
$$Q(x, 0) = H(x) = \begin{cases} 1, & x > 0, \\ 0, & x < 0. \end{cases} \qquad (5)$$
Then using dilation invariance of the Heaviside step function H(x), and the uniqueness of solutions to the heat IVP on the whole line, we deduced that Q depends only on the ratio x/√t, which led to a reduction of the heat equation to an ODE. Solving the ODE and checking the initial condition (5), we arrived at the following explicit solution
$$Q(x, t) = \frac{1}{2} + \frac{1}{\sqrt{\pi}} \int_0^{x/\sqrt{4kt}} e^{-p^2}\, dp, \quad \text{for } t > 0. \qquad (6)$$
The heat kernel S(x, t) was then defined as the spatial derivative of this particular solution Q(x, t), i.e.
$$S(x, t) = \frac{\partial Q}{\partial x}(x, t), \qquad (7)$$
and hence it also solves the heat equation by the differentiation property.
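A rough numerical check of the formulas above: evaluating the convolution formula (4) with Heaviside initial data and comparing it with the closed form (6), which can be rewritten as (1/2)(1 + erf(x/√(4kt))). The sample values of k, x, t below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from math import erf, sqrt, pi

k, x, t = 1.0, 0.7, 0.3

# Convolution formula (4) with phi = H: integrate the heat kernel over y > 0.
kernel = lambda y: np.exp(-(x - y)**2 / (4 * k * t)) / sqrt(4 * pi * k * t)
u_conv, _ = quad(kernel, 0.0, np.inf)

# Closed form (6), expressed via the error function.
u_closed = 0.5 * (1.0 + erf(x / sqrt(4 * k * t)))

print(u_conv, u_closed)   # the two values should agree closely
```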
  • Delta Functions and Distributions
When functions have no value(s): Delta functions and distributions. Steven G. Johnson, MIT course 18.303 notes. Created October 2010, updated March 8, 2017.

Abstract. These notes give a brief introduction to the motivations, concepts, and properties of distributions, which generalize the notion of functions f(x) to allow derivatives of discontinuities, "delta" functions, and other nice things. This generalization is increasingly important the more you work with linear PDEs, as we do in 18.303. For example, Green's functions are extremely cumbersome if one does not allow delta functions. Moreover, solving PDEs with functions that are not classically differentiable is of great practical importance (e.g. a plucked string with a triangle shape is not twice differentiable, making

One would like the function δ(x) = 0 for all x ≠ 0, but with ∫ δ(x) dx = 1 for any integration region that includes x = 0; this concept is called a "Dirac delta function" or simply a "delta function." δ(x) is usually the simplest right-hand side for which to solve differential equations, yielding a Green's function. It is also the simplest way to consider physical effects that are concentrated within very small volumes or times, for which you don't actually want to worry about the microscopic details in this volume; for example, think of the concepts of a "point charge," a "point mass," a force plucking a string at "one point," a "kick" that "suddenly" imparts some momentum to an object, and so on.
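A standard way to make the informal definition concrete is to replace δ(x) by a narrow Gaussian and watch ∫ δ_ε(x) φ(x) dx approach φ(0); the mollifier and test function below are this sketch's own choices, not taken from the notes.

```python
import numpy as np
from scipy.integrate import quad

def delta_eps(x, eps):
    """Narrow Gaussian of width eps: a smooth stand-in for the delta function."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

phi = lambda x: np.cos(x) + x**3          # any smooth test function
for eps in (0.5, 0.1, 0.02):
    val, _ = quad(lambda x: delta_eps(x, eps) * phi(x), -10, 10, points=[0])
    print(eps, val)                        # tends to phi(0) = 1 as eps -> 0
```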
  • An Analytic Exact Form of the Unit Step Function
Mathematics and Statistics 2(7): 235-237, 2014. http://www.hrpub.org. DOI: 10.13189/ms.2014.020702

An Analytic Exact Form of the Unit Step Function

J. Venetis, Section of Mechanics, Faculty of Applied Mathematics and Physical Sciences, National Technical University of Athens. *Corresponding Author: [email protected]. Copyright © 2014 Horizon Research Publishing. All rights reserved.

Abstract. In this paper, the author obtains an analytic exact form of the unit step function, which is also known as the Heaviside function and constitutes a fundamental concept of the Operational Calculus. Particularly, this function is equivalently expressed in a closed form as the summation of two inverse trigonometric functions. The novelty of this work is that the exact representation which is proposed here is not performed in terms of non-elementary special functions, e.g. the Dirac delta function or the Error function, and also is neither the limit of a function, nor the limit of a sequence of functions with pointwise or uniform convergence. Therefore it may be much more appropriate in

Meanwhile, there are many smooth analytic approximations to the unit step function, as can be seen in the literature [4,5,6]. Besides, Sullivan et al. [7] obtained a linear algebraic approximation to this function by means of a linear combination of exponential functions. However, the majority of all these approaches lead to closed-form representations consisting of non-elementary special functions, e.g. the Logistic function, Hyperfunction, or Error function, and also most of its algebraic exact forms are expressed in terms of generalized integrals or infinitesimal terms, something that complicates the related computational procedures.
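The excerpt does not reproduce the paper's exact two-term inverse-trigonometric representation, so the sketch below instead shows one of the familiar smooth approximations it alludes to, 1/2 + arctan(x/ε)/π, which tends to the unit step as ε → 0; this is an illustrative stand-in, not the paper's result.

```python
import numpy as np

def step_arctan(x, eps):
    """Smooth arctan approximation to the unit step; tends to H(x) as eps -> 0.
    (Illustrative only; this is not the exact two-term inverse-trigonometric
    form derived in the cited paper.)"""
    return 0.5 + np.arctan(x / eps) / np.pi

x = np.array([-1.0, -0.1, 0.1, 1.0])
for eps in (1.0, 0.1, 0.001):
    print(eps, np.round(step_arctan(x, eps), 4))
```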
  • Appendix I Integrating Step Functions
Appendix I: Integrating Step Functions

In Chapter I, section 1, we were considering step functions of the form (1), i.e. linear combinations
$$f = \lambda_1 f_1 + \cdots + \lambda_n f_n, \qquad (1)$$
where f_1, ..., f_n were brick functions and λ_1, ..., λ_n were real coefficients. The integral of f was defined by the formula (2). In the case of a single real variable, it is almost obvious that the value ∫f does not depend on the representation (2). In Chapter I, we admitted formula (2) for several variables, relying on analogy only. There are various methods to prove that formula precisely. In this Appendix we are going to give a proof which uses the concept of Heaviside functions. Since the geometrical intuition, above all in the multidimensional case, might be deceptive, all the given arguments will be purely algebraic.

1. Heaviside functions. The Heaviside function H(x) on the real line R¹ admits the value 1 for x ≥ 0 and vanishes elsewhere; it thus can be defined on writing
$$H(x) = \begin{cases} 1 & \text{for } x \ge 0, \\ 0 & \text{for } x < 0. \end{cases}$$
By a Heaviside function H(x) in the plane R², where x actually denotes points x = (ξ₁, ξ₂) of R², we understand the function which admits the value 1 for x ≥ 0, i.e., for ξ₁ ≥ 0, ξ₂ ≥ 0, and vanishes elsewhere. In symbols,
$$H(x) = \begin{cases} 1 & \text{for } x \ge 0, \\ 0 & \text{for } x \not\ge 0. \end{cases} \qquad (1.1)$$
Note that we cannot use here the notation x < 0 (which would mean ξ₁ < 0, ξ₂ < 0) instead of x ≱ 0, because then H(x) would not be defined in the whole plane R². Similarly, in Euclidean q-dimensional space R^q, the Heaviside function H(x), where x = (ξ₁, ...
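A direct transcription of the two-dimensional Heaviside function (1.1), with the convention that H(x) = 1 exactly when both coordinates are non-negative and 0 everywhere else.

```python
import numpy as np

def heaviside_2d(x1, x2):
    """Heaviside function on R^2 in the sense of (1.1): value 1 where both
    coordinates are >= 0, value 0 everywhere else (i.e. where x is not >= 0)."""
    return np.where((x1 >= 0) & (x2 >= 0), 1.0, 0.0)

pts = [(1.0, 2.0), (1.0, -0.5), (-1.0, 2.0), (-1.0, -1.0), (0.0, 0.0)]
for p in pts:
    print(p, heaviside_2d(*p))   # only (1, 2) and (0, 0) give 1
```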
  • On the Approximation of the Step Function by Raised-Cosine and Laplace Cumulative Distribution Functions
European International Journal of Science and Technology, Vol. 4 No. 9, December 2015. ISSN: 2304-9693. www.eijst.org.uk

On the Approximation of the Step function by Raised-Cosine and Laplace Cumulative Distribution Functions

Vesselin Kyurkchiev¹, Faculty of Mathematics and Informatics, Institute of Mathematics and Informatics, Paisii Hilendarski University of Plovdiv, Bulgarian Academy of Sciences, 24 Tsar Assen Str., 4000 Plovdiv, Bulgaria. Email: [email protected]
Nikolay Kyurkchiev, Faculty of Mathematics and Informatics, Institute of Mathematics and Informatics, Paisii Hilendarski University of Plovdiv, Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Bl. 8, 1113 Sofia, Bulgaria. Email: [email protected]
¹ Corresponding Author

Abstract. In this note the Hausdorff approximation of the Heaviside step function by Raised-Cosine and Laplace Cumulative Distribution Functions arising from lifetime analysis, financial mathematics, signal theory and communications systems is considered, and precise upper and lower bounds for the Hausdorff distance are obtained. Numerical examples that illustrate our results are also given.

Key words: Heaviside step function, Raised-Cosine Cumulative Distribution Function, Laplace Cumulative Distribution Function, Alpha-Skew-Laplace cumulative distribution function, Hausdorff distance, upper and lower bounds. 2010 Mathematics Subject Classification: 41A46.

1. Introduction. The Cosine Distribution is sometimes used as a simple, and more computationally tractable, approximation to the Normal Distribution. The Raised-Cosine Distribution Function (RC.pdf) and Raised-Cosine Cumulative Distribution Function (RC.cdf) are functions commonly used to avoid intersymbol interference in communications systems [1], [2], [3]. The Laplace distribution function (L.pdf) and Laplace Cumulative Distribution function (L.cdf) are used for modeling in signal processing, various biological processes, finance, and economics.
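For a concrete look at the two families, the sketch below uses the standard raised-cosine and Laplace CDFs (the paper's exact parameterizations are not in this excerpt) and shows both tending to the Heaviside step as the scale parameter shrinks.

```python
import numpy as np

def raised_cosine_cdf(x, mu=0.0, s=1.0):
    """CDF of the raised-cosine distribution with support [mu - s, mu + s]."""
    z = np.clip((np.asarray(x, dtype=float) - mu) / s, -1.0, 1.0)
    return 0.5 * (1.0 + z + np.sin(np.pi * z) / np.pi)

def laplace_cdf(x, mu=0.0, b=1.0):
    """CDF of the Laplace distribution with location mu and scale b."""
    x = np.asarray(x, dtype=float)
    return np.where(x < mu, 0.5 * np.exp((x - mu) / b),
                    1.0 - 0.5 * np.exp(-(x - mu) / b))

# Both tend to the Heaviside step function as the scale parameter shrinks.
x = np.array([-0.5, -0.05, 0.0, 0.05, 0.5])
for scale in (1.0, 0.1, 0.01):
    print(scale, np.round(raised_cosine_cdf(x, s=scale), 3),
          np.round(laplace_cdf(x, b=scale), 3))
```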
  • Lecture 2 ELE 301: Signals and Systems
Lecture 2, ELE 301: Signals and Systems. Prof. Paul Cuff, Princeton University, Fall 2011-12.

Models of Continuous Time Signals. Today's topics:
Signals: sinusoidal signals, exponential signals, complex exponential signals, unit step and unit ramp, impulse functions.
Systems: memory, invertibility, causality, stability, time invariance, linearity.

Sinusoidal Signals. A sinusoidal signal is of the form
$$x(t) = \cos(\omega t + \theta),$$
where the radian frequency is ω, which has the units of radians/s. It is also very commonly written as
$$x(t) = A \cos(2\pi f t + \theta),$$
where f is the frequency in Hertz. We will often refer to ω as the frequency, but it must be kept in mind that it is really the radian frequency, and the frequency is actually f.

The period of the sinusoid is
$$T = \frac{1}{f} = \frac{2\pi}{\omega},$$
with the units of seconds. The phase or phase angle of the signal is θ, given in radians. [Plots of cos(ωt) and cos(ωt − θ) versus t over −2T ≤ t ≤ 2T.]

Complex Sinusoids. The Euler relation defines $e^{j\varphi} = \cos\varphi + j\sin\varphi$. A complex sinusoid is
$$A e^{j(\omega t + \theta)} = A \cos(\omega t + \theta) + jA \sin(\omega t + \theta).$$
[Plot of e^{jωt} over −2T ≤ t ≤ 2T.] A real sinusoid can be represented as the real part of a complex sinusoid:
$$A \cos(\omega t + \theta) = \Re\{A e^{j(\omega t + \theta)}\}.$$

Exponential Signals. An exponential signal is given by
$$x(t) = e^{\sigma t}.$$
If σ < 0 this is exponential decay.
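A quick numerical confirmation of the Euler relation and of the fact that the real sinusoid is the real part of the complex sinusoid; the particular amplitude, frequency and phase below are arbitrary.

```python
import numpy as np

A, f, theta = 2.0, 5.0, np.pi / 6
w = 2 * np.pi * f                    # radian frequency omega = 2*pi*f
t = np.linspace(0.0, 1.0 / f, 9)     # one period T = 1/f

x = A * np.cos(w * t + theta)                 # real sinusoid
z = A * np.exp(1j * (w * t + theta))          # complex sinusoid A e^{j(wt+theta)}

# Euler relation: the real sinusoid equals the real part of the complex one.
print(np.allclose(x, z.real))   # True
```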
  • Maximizing Acquisition Functions for Bayesian Optimization
Maximizing acquisition functions for Bayesian optimization. James T. Wilson (Imperial College London), Frank Hutter (University of Freiburg), Marc Peter Deisenroth (Imperial College London, PROWLER.io).

Abstract. Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes' decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, high-dimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose properties not only facilitate but justify use of greedy approaches for their maximization.

1 Introduction. Bayesian optimization (BO) is a powerful framework for tackling complicated global optimization problems [32, 40, 44]. Given a black-box function f : X → Y, BO seeks to identify a maximizer x* ∈ arg max_{x ∈ X} f(x) while simultaneously minimizing incurred costs. Recently, these strategies have demonstrated state-of-the-art results on many important, real-world problems ranging from material sciences [17, 57], to robotics [3, 7], to algorithm tuning and configuration [16, 29, 53, 56]. From a high-level perspective, BO can be understood as the application of Bayesian decision theory to optimization problems [11, 14, 45]. One first specifies a belief over possible explanations for f using a probabilistic surrogate model and then combines this belief with an acquisition function to convey the expected utility for evaluating a set of queries X.
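A toy illustration (not the paper's code) of a Monte Carlo acquisition estimate: expected improvement at a single query point under an assumed Gaussian posterior, compared with the standard closed form.

```python
import numpy as np
from scipy.stats import norm

def mc_expected_improvement(mu, sigma, best, n_samples=100_000, seed=0):
    """Monte Carlo estimate of EI(x) = E[max(f(x) - best, 0)] assuming a
    Gaussian posterior N(mu, sigma^2) at the query point (toy illustration)."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(mu, sigma, n_samples)
    return np.maximum(samples - best, 0.0).mean()

def ei_closed_form(mu, sigma, best):
    """Standard closed-form EI for a Gaussian posterior, for comparison."""
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

print(mc_expected_improvement(1.2, 0.5, best=1.0))
print(ei_closed_form(1.2, 0.5, best=1.0))
```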
  • Step and Delta Functions 18.031 Haynes Miller and Jeremy Orloff 1
Step and Delta Functions. 18.031, Haynes Miller and Jeremy Orloff.

1 The unit step function

1.1 Definition. Let's start with the definition of the unit step function, u(t):
$$u(t) = \begin{cases} 0 & \text{for } t < 0, \\ 1 & \text{for } t > 0. \end{cases}$$
We do not define u(t) at t = 0. Rather, at t = 0 we think of it as in transition between 0 and 1. It is called the unit step function because it takes a unit step at t = 0. It is sometimes called the Heaviside function. The graph of u(t) is simple. [Graph of u(t).]

We will use u(t) as an idealized model of a natural system that goes from 0 to 1 very quickly. In reality it will make a smooth transition, such as the following. [Figure 1. u(t) is an idealized version of this curve.] But, if the transition happens on a time scale much smaller than the time scale of the phenomenon we care about, then the function u(t) is a good approximation. It is also much easier to deal with mathematically.

One of our main uses for u(t) will be as a switch. It is clear that multiplying a function f(t) by u(t) gives
$$u(t) f(t) = \begin{cases} 0 & \text{for } t < 0, \\ f(t) & \text{for } t > 0. \end{cases}$$
We say the effect of multiplying by u(t) is that for t < 0 the function f(t) is switched off and for t > 0 it is switched on.

1.2 Integrals of u′(t). From calculus we know that
$$\int u'(t)\, dt = u(t) + c \quad \text{and} \quad \int_a^b u'(t)\, dt = u(b) - u(a).$$
For example:
$$\int_{-2}^{5} u'(t)\, dt = u(5) - u(-2) = 1, \qquad \int_{1}^{3} u'(t)\, dt = u(3) - u(1) = 0, \qquad \int_{-5}^{-3} u'(t)\, dt = u(-3) - u(-5) = 0.$$
In fact, the following rule for the integral of u′(t) over any interval is obvious:
$$\int_a^b u'(t)\, dt = \begin{cases} 1 & \text{if 0 is inside the interval } (a, b), \\ 0 & \text{if 0 is outside the interval } [a, b]. \end{cases} \qquad (1)$$
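The rule (1) follows from the fundamental theorem of calculus, since the integral over [a, b] equals u(b) − u(a); the sketch below simply evaluates that difference for the three examples above.

```python
def u(t):
    """Unit step: 0 for t < 0, 1 for t > 0 (the value at t = 0 is left
    undefined in the notes; it is never needed below)."""
    return 1.0 if t > 0 else 0.0

# int_a^b u'(t) dt = u(b) - u(a): 1 if 0 is inside (a, b), 0 if 0 is outside [a, b].
for a, b in [(-2, 5), (1, 3), (-5, -3)]:
    print((a, b), u(b) - u(a))   # 1.0, 0.0, 0.0
```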
  • Expanded Use of Discontinuity and Singularity Functions in Mechanics
AC 2010-1666: EXPANDED USE OF DISCONTINUITY AND SINGULARITY FUNCTIONS IN MECHANICS

Robert Soutas-Little, Michigan State University. Professor Soutas-Little received his B.S. in Mechanical Engineering in 1955 from Duke University and his M.S. in 1959 and Ph.D. in 1962 in Mechanics from the University of Wisconsin. Until 1982, when he married Patricia Soutas, his name was Robert William Little; under this name, he wrote an Elasticity book and published over 20 articles. Since 1982, he has written over 100 papers for journals and conference proceedings and written books in Statics and Dynamics. He has taught courses in Statics, Dynamics, Mechanics of Materials, Elasticity, Continuum Mechanics, Viscoelasticity, Advanced Dynamics, Advanced Elasticity, Tissue Biomechanics and Biodynamics. He has won teaching excellence awards and the Distinguished Faculty Award. During his tenure at Michigan State University, he chaired the Department of Mechanical Engineering for 5 years and the Department of Biomechanics for 13 years. He directed the Biomechanics Evaluation Laboratory from 1990 until he retired in 2002. He served as Major Professor for 22 PhD students and over 100 MS students. He has received numerous research grants and consulted with engineering companies. He now is Professor Emeritus of Mechanical Engineering at Michigan State University.

© American Society for Engineering Education, 2010

Expanded Use of Discontinuity and Singularity Functions in Mechanics

Abstract. W. H. Macauley published Notes on the Deflection of Beams [1] in 1919, introducing the use of discontinuity functions into the calculation of the deflection of beams. In particular, he introduced the singularity functions: the unit doublet to model a concentrated moment, the Dirac delta function to model a concentrated load, and the Heaviside step function to start a uniform load at any point on the beam.
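A hypothetical sketch (not taken from the paper) of the Macaulay bracket ⟨x − a⟩^n for n ≥ 0, the discontinuity functions mentioned above, together with a numerical check of the integration rule that makes them convenient in beam problems.

```python
import numpy as np

def macaulay(x, a, n):
    """Macaulay bracket <x - a>^n for n >= 0: zero for x < a, (x - a)^n for
    x >= a. The n = 0 case is the Heaviside step that switches on a uniform
    load at x = a; the delta and doublet correspond to n = -1, -2 and are
    handled symbolically rather than pointwise."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= a, (x - a)**n, 0.0)

# Integration rule for n >= 0: int_0^x <s - a>^n ds = <x - a>^(n+1) / (n + 1).
# Checked numerically here for n = 0 (a load switched on at x = a).
a, x = 1.0, 2.5
s = np.linspace(0.0, x, 100_001)
lhs = np.trapz(macaulay(s, a, 0), s)
rhs = macaulay(x, a, 1) / 1.0
print(lhs, rhs)   # both approximately 1.5
```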
  • Twisting Algorithm for Controlling F-16 Aircraft
TWISTING ALGORITHM FOR CONTROLLING F-16 AIRCRAFT WITH PERFORMANCE MARGINS IDENTIFICATION, by AKSHAY KULKARNI. A THESIS submitted in partial fulfillment of the requirements for the degree of Master of Science in Engineering in The Department of Electrical and Computer Engineering to The School of Graduate Studies of The University of Alabama in Huntsville. HUNTSVILLE, ALABAMA, 2014.

In presenting this thesis in partial fulfillment of the requirements for a master's degree from The University of Alabama in Huntsville, I agree that the Library of this University shall make it freely available for inspection. I further agree that permission for extensive copying for scholarly purposes may be granted by my advisor or, in his absence, by the Chair of the Department or the Dean of the School of Graduate Studies. It is also understood that due recognition shall be given to me and to The University of Alabama in Huntsville in any scholarly use which may be made of any material in this thesis. (Student signature) (Date)

THESIS APPROVAL FORM. Submitted by Akshay Kulkarni in partial fulfillment of the requirements for the degree of Master of Science in Engineering and accepted on behalf of the Faculty of the School of Graduate Studies by the thesis committee. We, the undersigned members of the Graduate Faculty of The University of Alabama in Huntsville, certify that we have advised and/or supervised the candidate on the work described in this thesis. We further certify that we have reviewed the thesis manuscript and approve it in partial fulfillment of the requirements for the degree of Master of Science in Engineering.