Neural Networks
Shuai Li, John Hopcroft Center, Shanghai Jiao Tong University


Lecture 6: Neural Networks
Shuai Li, John Hopcroft Center, Shanghai Jiao Tong University
https://shuaili8.github.io
https://shuaili8.github.io/Teaching/VE445/index.html

Outline
• Perceptron
• Activation functions
• Multilayer perceptron networks
• Training: backpropagation
• Examples
• Overfitting
• Applications

Brief history of artificial neural nets
• The first wave
  • 1943: McCulloch and Pitts proposed the McCulloch-Pitts neuron model
  • 1958: Rosenblatt introduced the simple single-layer networks now called perceptrons
  • 1969: Minsky and Papert's book "Perceptrons" demonstrated the limitations of single-layer perceptrons, and almost the whole field went into hibernation
• The second wave
  • 1986: The back-propagation learning algorithm for multi-layer perceptrons was rediscovered, and the whole field took off again
• The third wave
  • 2006: Deep (neural network) learning gained popularity
  • 2012: Deep learning made significant breakthroughs in many applications

Biological neuron structure
• The neuron receives signals through its dendrites and sends its own signal out through the axon terminals

Biological neural communication
• The electrical potential across the cell membrane exhibits spikes called action potentials
• A spike originates in the cell body, travels down the axon, and causes the synaptic terminals to release neurotransmitters
• The chemicals diffuse across the synapse to the dendrites of other neurons
• Neurotransmitters can be excitatory or inhibitory
• If the net neurotransmitter input to a neuron from other neurons is excitatory and exceeds some threshold, the neuron fires an action potential

Perceptron
• Inspired by the biological neuron, researchers built a simple model called the perceptron
• It receives signals $x_i$, multiplies them by weights $w_i$, and outputs the weighted sum passed through an activation function, here a step function

Neuron vs. perceptron (figure)

Artificial neural networks
• Multilayer perceptron network
• Convolutional neural network
• Recurrent neural network

Perceptron (figure)

Training
• Update rule: $w_i \leftarrow w_i - \eta\,(o - y)\,x_i$
  • $y$: the true label
  • $o$: the output of the perceptron
  • $\eta$: the learning rate
• Explanation
  • If the output is correct, do nothing
  • If the output is too high, lower the weights
  • If the output is too low, increase the weights
• (A short code sketch of this rule follows the demo below)

Properties
• Rosenblatt [1958] proved that training converges if the two classes are linearly separable and $\eta$ is reasonably small

Limitation
• Minsky and Papert [1969] showed that some rather elementary computations, such as the XOR problem, cannot be done by Rosenblatt's one-layer perceptron
• Rosenblatt believed the limitations could be overcome by adding more layers of units, but no learning algorithm for obtaining the weights of such networks was known yet

Solution: add hidden layers
• Two-layer feedforward neural network (figure)

Demo
• Larger neural networks can represent more complicated functions. The data are shown as circles colored by their class, and the decision regions learned by a trained neural network are shown underneath. You can play with these examples in the ConvNetJS demo.
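As promised above, a minimal sketch of the perceptron training rule in NumPy. The AND dataset, learning rate, and epoch count are illustrative assumptions, not from the slides:

```python
import numpy as np

def step(z):
    """Step activation: fires 1 if the weighted sum is positive, else 0."""
    return (z > 0).astype(float)

def train_perceptron(X, y, eta=0.1, epochs=20):
    """Perceptron rule from the slide: w_i <- w_i - eta * (o - y) * x_i.
    A bias column of ones is appended so the threshold is learned too."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            o = step(xi @ w)            # perceptron output
            w -= eta * (o - yi) * xi    # no change when o == yi
    return w

# Linearly separable toy data (AND) -- converges per Rosenblatt [1958];
# XOR labels, per Minsky and Papert [1969], would never converge.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
print(train_perceptron(X, y))
```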
Activation Functions

Activation functions
• Sigmoid: $\sigma(z) = \frac{1}{1 + e^{-z}}$ (most popular in fully connected neural networks)
• Tanh: $\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$
• ReLU (Rectified Linear Unit): $\mathrm{ReLU}(z) = \max(0, z)$ (most popular in deep learning)
• (A numerical sketch of these functions and their derivatives appears at the end of this section)

Activation function values and derivatives (figure)

Sigmoid activation function
• Sigmoid: $\sigma(z) = \frac{1}{1 + e^{-z}}$
• Its derivative: $\sigma'(z) = \sigma(z)\,(1 - \sigma(z))$
• Output range $(0, 1)$
• Motivated by biological neurons; the output can be interpreted as the probability of an artificial neuron "firing" given its inputs
• However, saturated neurons make values vanish (why?)
  • Iterating $f(f(f(\cdots)))$ with $f = \sigma$: $f([0, 1]) \subseteq [0.5, 0.732]$ and $f([0.5, 0.732]) \subseteq [0.622, 0.676]$, so repeated application contracts the range toward a fixed point

Tanh activation function
• Tanh function: $\tanh(z) = \frac{\sinh(z)}{\cosh(z)} = \frac{e^z - e^{-z}}{e^z + e^{-z}}$
• Its derivative: $\tanh'(z) = 1 - \tanh^2(z)$
• Output range $(-1, 1)$
• Thus strongly negative inputs to the tanh map to negative outputs
• Only zero-valued inputs are mapped to near-zero outputs
• These properties make the network less likely to get "stuck" during training

ReLU activation function
• ReLU function: $\mathrm{ReLU}(z) = \max(0, z)$
• Its derivative: $\mathrm{ReLU}'(z) = 1$ if $z > 0$, and $0$ if $z \le 0$
• ReLU can be approximated by the softplus function
• ReLU's gradient doesn't vanish as $z$ increases
• It speeds up the training of neural networks
  • The gradient computation is very simple
  • The computational step is simple: no exponentials, no multiplication or division operations (compared to the others)
  • The gradient on the positive portion is larger than for the sigmoid or tanh functions, so weights update more rapidly
• The "dead neuron" problem on the left portion can be ameliorated by Leaky ReLU

ReLU activation function (cont.)
• The only non-linearity comes from the path selection, with individual neurons being active or not
• It allows sparse representations: for a given input, only a subset of neurons is active (sparse propagation of activations and gradients)

Multilayer Perceptron Networks

Neural networks
• Neural networks are built by connecting many perceptrons together, layer by layer

Universal approximation theorem
• A feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions
• Example: a 1-20-1 NN approximates a noisy sine function
• Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. "Multilayer feedforward networks are universal approximators." Neural Networks 2.5 (1989): 359-366

Increasing power of approximation
• With more neurons, the approximation power increases and the decision boundary covers more details
• In applications, we usually use more layers with structure to approximate complex functions, instead of one hidden layer with many neurons

Common neural network structures at a glance (figure)

Multilayer perceptron network (figure)

Single / multiple layers of calculation
• Single-layer function: $f_\theta(x) = \sigma(\theta_0 + \theta_1 x_1 + \theta_2 x_2)$
• Multiple-layer function:
  • $h_1(x) = \sigma\big(\theta_0^{(1)} + \theta_1^{(1)} x_1 + \theta_2^{(1)} x_2\big)$
  • $h_2(x) = \sigma\big(\theta_0^{(2)} + \theta_1^{(2)} x_1 + \theta_2^{(2)} x_2\big)$
  • $f_\theta(h) = \sigma(\theta_0 + \theta_1 h_1 + \theta_2 h_2)$

Training: Backpropagation

How to train?
• As with previous models, we use the gradient descent method to train the neural network
• Given the topology of the network (number of layers, number of neurons, their connections), find a set of weights that minimizes the error function $E = \frac{1}{2}\sum_{d \in D}(t_d - o_d)^2$, where $D$ is the set of training examples, $t_d$ the target, and $o_d$ the output

Gradient interpretation
• The gradient is the vector along which the value of the function increases most rapidly; its opposite direction is therefore the direction in which the value decreases most rapidly
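Before moving on to gradient descent, here is a minimal numerical sketch of the three activation functions and the derivatives quoted above; the iteration count in the saturation check is an illustrative choice:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)          # sigma'(z) = sigma(z)(1 - sigma(z))

def d_tanh(z):
    return 1.0 - np.tanh(z) ** 2  # tanh'(z) = 1 - tanh^2(z)

def relu(z):
    return np.maximum(0.0, z)

def d_relu(z):
    return (z > 0).astype(float)  # 1 for z > 0, 0 for z <= 0

# Saturation effect from the sigmoid slide: iterating sigma contracts [0, 1]
lo, hi = 0.0, 1.0
for _ in range(3):
    lo, hi = sigmoid(lo), sigmoid(hi)
    print(round(lo, 3), round(hi, 3))  # [0.5, 0.731], [0.622, 0.675], ...
```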
Gradient descent
• To find a (local) minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or an approximation of it) of the function at the current point
• For a smooth function $f(x)$, $\frac{\partial f}{\partial x}$ is the direction in which $f$ increases most rapidly, so we apply
  $x_{t+1} = x_t - \eta\,\frac{\partial f}{\partial x}(x_t)$
  until $x$ converges

Gradient descent training rule (figure)
• Partial derivatives are the key to each update

The chain rule
• The challenge in the neural network model is that we only know the target for the output layer; we don't know the targets for the hidden and input layers, so how can we update their connection weights using gradient descent?
• The answer is the chain rule that you have learned in calculus:
  $y = f(g(x)) \Rightarrow \frac{dy}{dx} = f'(g(x))\,g'(x)$

Feed forward vs. backpropagation (figure)

Make a prediction (figures: forward passes through the network toward targets $t_1, \ldots, t_k$)

Backpropagation
• Assume all the activation functions are sigmoid
• Error function: $E = \frac{1}{2}\sum_k (y_k - t_k)^2$
• $\frac{\partial E}{\partial y_k} = y_k - t_k$
• $\frac{\partial y_k}{\partial w_{k,j}^{(2)}} = f_{(2)}'\big(net_k^{(2)}\big)\,h_j^{(1)} = y_k(1 - y_k)\,h_j^{(1)}$, where $h_j^{(1)}$ is the output of hidden unit $j$
• $\Rightarrow \frac{\partial E}{\partial w_{k,j}^{(2)}} = -(t_k - y_k)\,y_k(1 - y_k)\,h_j^{(1)}$
• $\Rightarrow w_{k,j}^{(2)} \leftarrow w_{k,j}^{(2)} + \eta\,\delta_k^{(2)}\,h_j^{(1)}$, with $\delta_k^{(2)} = (t_k - y_k)\,y_k(1 - y_k)$

Backpropagation (cont.)
• $\frac{\partial y_k}{\partial h_j^{(1)}} = y_k(1 - y_k)\,w_{k,j}^{(2)}$
• $\frac{\partial h_j^{(1)}}{\partial w_{j,m}^{(1)}} = f_{(1)}'\big(net_j^{(1)}\big)\,x_m = h_j^{(1)}\big(1 - h_j^{(1)}\big)\,x_m$
• $\frac{\partial E}{\partial w_{j,m}^{(1)}} = -h_j^{(1)}\big(1 - h_j^{(1)}\big)\sum_k w_{k,j}^{(2)}(t_k - y_k)\,y_k(1 - y_k)\,x_m = -h_j^{(1)}\big(1 - h_j^{(1)}\big)\sum_k w_{k,j}^{(2)}\,\delta_k^{(2)}\,x_m$
• $\Rightarrow w_{j,m}^{(1)} \leftarrow w_{j,m}^{(1)} + \eta\,\delta_j^{(1)}\,x_m$, with $\delta_j^{(1)} = h_j^{(1)}\big(1 - h_j^{(1)}\big)\sum_k w_{k,j}^{(2)}\,\delta_k^{(2)}$

Backpropagation algorithm
• Activation function: sigmoid
• Initialize all weights to small random numbers
• Do until convergence, for each training example:
  1. Input it to the network and compute the network output
  2. For each output unit $k$, with output $o_k$: $\delta_k \leftarrow o_k(1 - o_k)(t_k - o_k)$
  3. For each hidden unit $j$, with output $o_j$: $\delta_j \leftarrow o_j(1 - o_j)\sum_{k \in \text{next layer}} w_{k,j}\,\delta_k$
  4. Update each network weight, where $x_i$ is the input to unit $j$ from unit $i$: $w_{j,i} \leftarrow w_{j,i} + \eta\,\delta_j\,x_i$

See the backpropagation demo
• https://google-developers.appspot.com/machine-learning/crash-course/backprop-scroll/

Formula example for backpropagation (figures)

Calculation example
• Consider the simple network below (figure); assume the neurons have sigmoid activation functions
• Perform a forward pass on the network and find the predicted output
• Perform a reverse pass (one training step, target = 0.5) with $\eta = 1$
• Perform a further forward pass and comment on the result
• Answer (i): input to the top hidden neuron $= 0.35 \times 0.1 + 0.9 \times 0.8 = 0.755$
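To make the algorithm concrete, here is a minimal NumPy sketch of one training step on a 2-2-1 sigmoid network in the spirit of the calculation example. The inputs (0.35, 0.9), the top hidden neuron's weights (0.1, 0.8), the target 0.5, and $\eta = 1$ come from the slides; the bottom hidden neuron's weights and the output weights are hypothetical placeholders, since the network figure is not reproduced here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.35, 0.9])     # inputs from the slide
W1 = np.array([[0.1, 0.8],    # top hidden neuron (slide: net = 0.755)
               [0.4, 0.6]])   # bottom hidden neuron: assumed values
w2 = np.array([0.3, 0.9])     # hidden -> output weights: assumed values
t, eta = 0.5, 1.0             # target and learning rate from the slide

# Forward pass
h = sigmoid(W1 @ x)           # hidden outputs; (W1 @ x)[0] == 0.755
y = sigmoid(w2 @ h)           # predicted output

# Reverse pass: deltas from the algorithm above
delta_out = y * (1 - y) * (t - y)          # output unit
delta_hid = h * (1 - h) * w2 * delta_out   # hidden units
w2 = w2 + eta * delta_out * h              # w <- w + eta * delta * input
W1 = W1 + eta * np.outer(delta_hid, x)

# Further forward pass: the new output has moved toward the target 0.5
y_new = sigmoid(w2 @ sigmoid(W1 @ x))
print(y, "->", y_new)
```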