Laplace Transforms

SECTION 5: LAPLACE TRANSFORMS
ESE 330 – Modeling & Analysis of Dynamic Systems
K. Webb

Introduction – Transforms

This section of notes contains an introduction to Laplace transforms. This should mostly be a review of material covered in your differential equations course.

What is a transform?
- A mapping of a mathematical function from one domain to another
- A change in perspective, not a change of the function

Why use transforms?
- Some mathematical problems are difficult to solve in their natural domain
- Transform to and solve in a new domain, where the problem is simplified
- Transform back to the original domain
- Trade off the extra effort of transforming/inverse-transforming for simplification of the solution procedure

Transform Example – Slide Rules
- Slide rules make use of a logarithmic transform
- Multiplication/division of large numbers is difficult
- Transform the numbers to the logarithmic domain
- Add/subtract (easy) in the log domain to multiply/divide (difficult) in the linear domain
- Apply the inverse transform to get back to the original domain
- Extra effort is required, but the problem is simplified

Laplace Transforms

An integral transform mapping functions from the time domain to the Laplace domain, or s-domain:

  f(t) \overset{\mathcal{L}}{\longleftrightarrow} F(s)

Time-domain functions are functions of time, t; Laplace-domain functions are functions of s, where s is a complex variable:

  s = \sigma + j\omega

Laplace Transforms – Motivation
- We'll use Laplace transforms to solve differential equations
- Differential equations in the time domain are difficult to solve
- Apply the Laplace transform to move to the s-domain, where differential equations become algebraic equations that are easy to solve
- Transform the s-domain solution back to the time domain
- Transforming back and forth requires extra effort, but the solution is greatly simplified

Laplace Transform

The Laplace transform is defined as

  F(s) = \mathcal{L}\{f(t)\} = \int_0^\infty f(t)\, e^{-st}\, dt    (1)

- Unilateral or one-sided transform
- Lower limit of integration is t = 0
- It is assumed that the time-domain function is zero for all negative time, i.e. f(t) = 0 for t < 0

Laplace Transform Properties

In the following section of notes, we'll derive a few important properties of the Laplace transform.

Laplace Transform – Linearity

Say we have two time-domain functions, f_1(t) and f_2(t). Applying the transform definition, (1),

  \mathcal{L}\{\alpha f_1(t) + \beta f_2(t)\} = \int_0^\infty \left[\alpha f_1(t) + \beta f_2(t)\right] e^{-st}\, dt
  = \int_0^\infty \alpha f_1(t)\, e^{-st}\, dt + \int_0^\infty \beta f_2(t)\, e^{-st}\, dt
  = \alpha \int_0^\infty f_1(t)\, e^{-st}\, dt + \beta \int_0^\infty f_2(t)\, e^{-st}\, dt
  = \alpha \mathcal{L}\{f_1(t)\} + \beta \mathcal{L}\{f_2(t)\}

so

  \mathcal{L}\{\alpha f_1(t) + \beta f_2(t)\} = \alpha F_1(s) + \beta F_2(s)    (2)

The Laplace transform is a linear operation.

Laplace Transform of a Derivative

Of particular interest, given that we want to use Laplace transforms to solve differential equations:

  \mathcal{L}\{\dot{f}(t)\} = \int_0^\infty \dot{f}(t)\, e^{-st}\, dt

Use integration by parts to evaluate. Let

  u = e^{-st}  and  dv = \dot{f}(t)\, dt

then

  du = -s e^{-st}\, dt  and  v = f(t)

which gives

  \mathcal{L}\{\dot{f}(t)\} = \left[ f(t)\, e^{-st} \right]_0^\infty - \int_0^\infty f(t)\left(-s e^{-st}\right) dt
  = \left[0 - f(0)\right] + s \int_0^\infty f(t)\, e^{-st}\, dt = -f(0) + s\,\mathcal{L}\{f(t)\}

The Laplace transform of the derivative of a function is the Laplace transform of that function multiplied by s, minus the initial value of that function:

  \mathcal{L}\{\dot{f}(t)\} = sF(s) - f(0)    (3)

Higher-Order Derivatives

The Laplace transform of a second derivative is

  \mathcal{L}\{\ddot{f}(t)\} = s^2 F(s) - s f(0) - \dot{f}(0)    (4)

In general, the Laplace transform of the nth derivative of a function is given by

  \mathcal{L}\{f^{(n)}(t)\} = s^n F(s) - s^{n-1} f(0) - s^{n-2} \dot{f}(0) - \cdots - f^{(n-1)}(0)    (5)
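The derivative property (3) is easy to check with a computer algebra system. The following is a minimal sketch using Python's sympy; the choice of library and the example function f(t) = e^{-2t} are assumptions made here for illustration and are not part of the original notes.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Assumed example function: f(t) = exp(-2t), so f(0) = 1
f = sp.exp(-2*t)
F = sp.laplace_transform(f, t, s, noconds=True)            # F(s) = 1/(s + 2)

# Left side: transform the derivative directly
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)

# Right side: apply the derivative property (3), s*F(s) - f(0)
rhs = s*F - f.subs(t, 0)

print(sp.simplify(lhs - rhs))   # 0 -> both sides equal -2/(s + 2)
```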
Laplace Transform of an Integral

The Laplace transform of a definite integral of a function is given by

  \mathcal{L}\left\{ \int_0^t f(\tau)\, d\tau \right\} = \frac{1}{s} F(s)    (6)

- Differentiation in the time domain corresponds to multiplication by s in the Laplace domain
- Integration in the time domain corresponds to division by s in the Laplace domain

Laplace Transforms of Common Functions

Next, we'll derive the Laplace transforms of some common mathematical functions.

Unit Step Function

A useful and common way of characterizing a linear system is with its step response, the system's response (output) to a unit step input. The unit step function, or Heaviside step function, is

  1(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0 \end{cases}

Unit Step Function – Laplace Transform

Using the definition of the Laplace transform,

  \mathcal{L}\{1(t)\} = \int_0^\infty 1(t)\, e^{-st}\, dt = \int_0^\infty e^{-st}\, dt = \left[ -\frac{1}{s} e^{-st} \right]_0^\infty = 0 - \left( -\frac{1}{s} \right)

The Laplace transform of the unit step is

  \mathcal{L}\{1(t)\} = \frac{1}{s}    (7)

Note that the unilateral Laplace transform assumes that the signal being transformed is zero for t < 0, which is equivalent to multiplying any signal by a unit step.

Unit Ramp Function

The unit ramp function is a useful input signal for evaluating how well a system tracks a constantly-increasing input. The unit ramp function is

  r(t) = \begin{cases} 0, & t < 0 \\ t, & t \ge 0 \end{cases}

Unit Ramp Function – Laplace Transform

We could evaluate the transform integral directly, but that requires integration by parts. Alternatively, recognize the relationship between the unit ramp and the unit step: the unit ramp is the integral of the unit step. Applying the integration property, (6),

  \mathcal{L}\{t\} = \mathcal{L}\left\{ \int_0^t 1(\tau)\, d\tau \right\} = \frac{1}{s} \cdot \frac{1}{s}

  \mathcal{L}\{t\} = \frac{1}{s^2}    (8)

Exponential – Laplace Transform

Consider f(t) = e^{-at}. Exponentials are common components of the responses of dynamic systems.

  \mathcal{L}\{e^{-at}\} = \int_0^\infty e^{-at}\, e^{-st}\, dt = \int_0^\infty e^{-(s+a)t}\, dt = \left[ -\frac{1}{s+a} e^{-(s+a)t} \right]_0^\infty = 0 - \left( -\frac{1}{s+a} \right)

  \mathcal{L}\{e^{-at}\} = \frac{1}{s+a}    (9)

Sinusoidal Functions

Another class of commonly occurring signals, when dealing with dynamic systems, is sinusoidal signals, both \sin(\omega t) and \cos(\omega t). Let

  f(t) = \sin(\omega t)

Recall Euler's formula,

  e^{j\omega t} = \cos(\omega t) + j \sin(\omega t)

from which it follows that

  \sin(\omega t) = \frac{e^{j\omega t} - e^{-j\omega t}}{2j}

Then

  \mathcal{L}\{\sin(\omega t)\} = \int_0^\infty \frac{e^{j\omega t} - e^{-j\omega t}}{2j}\, e^{-st}\, dt
  = \frac{1}{2j} \int_0^\infty \left[ e^{-(s - j\omega)t} - e^{-(s + j\omega)t} \right] dt
  = \frac{1}{2j} \left[ \int_0^\infty e^{-(s - j\omega)t}\, dt - \int_0^\infty e^{-(s + j\omega)t}\, dt \right]
  = \frac{1}{2j} \left[ -\frac{1}{s - j\omega} e^{-(s - j\omega)t} \Big|_0^\infty + \frac{1}{s + j\omega} e^{-(s + j\omega)t} \Big|_0^\infty \right]
  = \frac{1}{2j} \left[ \frac{1}{s - j\omega} - \frac{1}{s + j\omega} \right] = \frac{1}{2j} \cdot \frac{(s + j\omega) - (s - j\omega)}{s^2 + \omega^2}

  \mathcal{L}\{\sin(\omega t)\} = \frac{\omega}{s^2 + \omega^2}    (10)

It can similarly be shown that

  \mathcal{L}\{\cos(\omega t)\} = \frac{s}{s^2 + \omega^2}    (11)

Note that neither \sin(\omega t) nor \cos(\omega t) is equal to zero for t < 0, as the Laplace transform assumes. Really, what we've derived is \mathcal{L}\{1(t)\cdot\sin(\omega t)\} and \mathcal{L}\{1(t)\cdot\cos(\omega t)\}.

More Properties and Theorems

Multiplication by an Exponential, e^{-at}

We've seen that

  \mathcal{L}\{e^{-at}\} = \frac{1}{s+a}

What if another function is multiplied by the decaying exponential term?

  \mathcal{L}\{e^{-at} f(t)\} = \int_0^\infty e^{-at} f(t)\, e^{-st}\, dt = \int_0^\infty f(t)\, e^{-(s+a)t}\, dt

This is just the Laplace transform of f(t) with s replaced by (s + a):

  \mathcal{L}\{e^{-at} f(t)\} = F(s + a)    (12)

Decaying Sinusoids

The Laplace transform of a sinusoid is

  \mathcal{L}\{\sin(\omega t)\} = \frac{\omega}{s^2 + \omega^2}

and multiplication by a decaying exponential, e^{-\sigma t}, results in a substitution of (s + \sigma) for s, so

  \mathcal{L}\{e^{-\sigma t} \sin(\omega t)\} = \frac{\omega}{(s + \sigma)^2 + \omega^2}

and

  \mathcal{L}\{e^{-\sigma t} \cos(\omega t)\} = \frac{s + \sigma}{(s + \sigma)^2 + \omega^2}
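The transform pairs derived above, (7) through (12), can be checked symbolically. Below is a minimal sketch using Python's sympy; the library choice is an assumption, since these notes do not prescribe any software.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

# Transform pairs derived in (7)-(12), computed directly by sympy
signals = {
    "unit step 1(t)":     sp.Heaviside(t),
    "unit ramp t":        t,
    "exp(-a*t)":          sp.exp(-a*t),
    "sin(w*t)":           sp.sin(w*t),
    "cos(w*t)":           sp.cos(w*t),
    "exp(-a*t)*sin(w*t)": sp.exp(-a*t)*sp.sin(w*t),
}

for name, f in signals.items():
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f"{name:20s} -> {sp.simplify(F)}")
# Results match (7)-(12): 1/s, 1/s^2, 1/(s+a), w/(s^2+w^2),
# s/(s^2+w^2), and w/((s+a)^2+w^2)
```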
Time Shifting

Consider a time-domain function, f(t). To Laplace transform f(t) we've assumed f(t) = 0 for t < 0, or equivalently multiplied it by 1(t). To shift f(t) by an amount, a, in time, we must also multiply by a shifted step function, 1(t - a).

Time Shifting – Laplace Transform

The transform of the shifted function is given by

  \mathcal{L}\{f(t - a)\cdot 1(t - a)\} = \int_0^\infty f(t - a)\, 1(t - a)\, e^{-st}\, dt

Performing a change of variables, let \tau = t - a and d\tau = dt. The transform becomes

  \mathcal{L}\{f(\tau)\cdot 1(\tau)\} = \int_0^\infty f(\tau)\, e^{-s(\tau + a)}\, d\tau = e^{-as} \int_0^\infty f(\tau)\, e^{-s\tau}\, d\tau = e^{-as} F(s)

A shift by a in the time domain corresponds to multiplication by e^{-as} in the Laplace domain:

  \mathcal{L}\{f(t - a)\cdot 1(t - a)\} = e^{-as} F(s)    (13)

Multiplication by Time, t

The Laplace transform of a function multiplied by time is

  \mathcal{L}\{t \cdot f(t)\} = -\frac{d}{ds} F(s)    (14)

Consider a unit ramp function:

  \mathcal{L}\{t\} = \mathcal{L}\{t \cdot 1(t)\} = -\frac{d}{ds}\left(\frac{1}{s}\right) = \frac{1}{s^2}

Or a parabola:

  \mathcal{L}\{t^2\} = \mathcal{L}\{t \cdot t\} = -\frac{d}{ds}\left(\frac{1}{s^2}\right) = \frac{2}{s^3}

In general,

  \mathcal{L}\{t^m\} = \frac{m!}{s^{m+1}}

Initial and Final Value Theorems

Initial Value Theorem – we can determine the initial value of a time-domain signal or function from its Laplace transform:

  f(0) = \lim_{s \to \infty} s F(s)    (15)

Final Value Theorem – we can determine the steady-state value of a time-domain signal or function from its Laplace transform:

  f(\infty) = \lim_{s \to 0} s F(s)    (16)

Convolution

Convolution of two functions or signals is given by

  f(t) * g(t) = \int_0^t f(\tau)\, g(t - \tau)\, d\tau

- The result is a function of time
- One signal is flipped in time and shifted by t
- Multiply the flipped/shifted signal and the other signal
- Integrate the result from \tau = 0 to \tau = t
- This may seem like an odd, arbitrary operation now, but we'll later see why it is very important

Convolution in the time domain corresponds to multiplication in the Laplace domain:

  \mathcal{L}\{f(t) * g(t)\} = F(s)\, G(s)    (17)

Impulse Function

Another common way to describe a dynamic system is with its impulse response, the system output in response to an impulse function input. The impulse function is defined by

  \delta(t) = 0, \quad t \ne 0
  \int_{-\infty}^{\infty} \delta(t)\, dt = 1

It is an infinitely tall, infinitely narrow pulse.

Impulse Function – Laplace Transform

To derive \mathcal{L}\{\delta(t)\}, consider the following function:

  f(t) = \begin{cases} 1/t_0, & 0 \le t \le t_0 \\ 0, & t < 0 \ \text{or} \ t > t_0 \end{cases}

We can think of f(t) as the sum of two step functions:

  f(t) = \frac{1}{t_0} 1(t) - \frac{1}{t_0} 1(t - t_0)

The transform of the first term is

  \mathcal{L}\left\{ \frac{1}{t_0} 1(t) \right\} = \frac{1}{t_0 s}

Using the time-shifting property, the second term transforms to

  \mathcal{L}\left\{ \frac{1}{t_0} 1(t - t_0) \right\} = \frac{e^{-t_0 s}}{t_0 s}

In the limit, as t_0 \to 0, f(t) \to \delta(t), so

  \mathcal{L}\{\delta(t)\} = \lim_{t_0 \to 0} \frac{1 - e^{-t_0 s}}{t_0 s}

Applying l'Hôpital's rule,

  \mathcal{L}\{\delta(t)\} = \lim_{t_0 \to 0} \frac{s e^{-t_0 s}}{s} = \lim_{t_0 \to 0} e^{-t_0 s} = 1

The Laplace transform of an impulse function is one:

  \mathcal{L}\{\delta(t)\} = 1    (18)

Common Laplace Transforms

  f(t)                          F(s)
  \delta(t)                     1
  1(t)                          1/s
  t                             1/s^2
  t^m                           m!/s^{m+1}
  e^{-at}                       1/(s+a)
  \sin(\omega t)                \omega/(s^2+\omega^2)
  \cos(\omega t)                s/(s^2+\omega^2)
  e^{-at}\sin(\omega t)         \omega/((s+a)^2+\omega^2)
  e^{-at}\cos(\omega t)         (s+a)/((s+a)^2+\omega^2)
  \dot{f}(t)                    sF(s) - f(0)
  \ddot{f}(t)                   s^2 F(s) - s f(0) - \dot{f}(0)
  \int_0^t f(\tau)\, d\tau      F(s)/s
  f(t-a)\, 1(t-a)               e^{-as} F(s)

Example – Piecewise Function Laplace Transform

Determine the Laplace transform of a piecewise function:
- Treat it as a summation of functions with known transforms: a ramp, and a pulse (the sum of a positive and a negative step)
- The transform is then the sum of the individual, known transforms

Treat the piecewise function as a sum of individual functions,

  f(t) = f_1(t) + f_2(t)

where f_1(t) is a time-shifted, gated ramp and f_2(t) is a time-shifted pulse, i.e. a sum of staggered positive and negative steps.
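As a sanity check on the theorems above, the following sketch uses Python's sympy (an assumed tool; the test signals f(t) = 1 - e^{-3t} and g(t) = e^{-t} are also assumptions chosen for illustration) to verify the initial and final value theorems, (15) and (16), and the convolution property, (17).

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

# Assumed test signal: f(t) = 1 - exp(-3t), a typical first-order step response
f = 1 - sp.exp(-3*t)
F = sp.laplace_transform(f, t, s, noconds=True)

# Initial value theorem (15): f(0) = lim_{s->oo} s*F(s)
print(sp.limit(s*F, s, sp.oo))   # 0, matching f(0) = 0

# Final value theorem (16): f(oo) = lim_{s->0} s*F(s)
print(sp.limit(s*F, s, 0))       # 1, matching the steady-state value of f(t)

# Convolution property (17), checked against g(t) = exp(-t)
g = sp.exp(-t)
conv = sp.integrate(f.subs(t, tau)*g.subs(t, t - tau), (tau, 0, t))
lhs = sp.laplace_transform(conv, t, s, noconds=True)    # L{f * g}
rhs = F*sp.laplace_transform(g, t, s, noconds=True)     # F(s)*G(s)
print(sp.simplify(lhs - rhs))    # 0 -> the two agree
```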