Multilayer Perceptrons and Backpropagation


Multilayer Perceptrons and Backpropagation
Informatics 1 CG: Lecture 6
Mirella Lapata, School of Informatics, University of Edinburgh
Reading: Kevin Gurney's Introduction to Neural Networks, Chapters 5-6.5
January 22, 2016

Recap: Perceptrons

Connectionism is a computer modeling approach inspired by neural networks. The anatomy of a connectionist model: units and connections. The Perceptron is a linear classifier: inputs x_1, ..., x_n arrive on connections with weights w_1, ..., w_n, together with a bias weight w_0 on a constant input of 1, and the input function computes

x = \sum_{i=0}^{n} w_i x_i

which is then thresholded to an output of 0 or 1. There is a learning algorithm for Perceptrons, but a key limitation: it only works for linearly separable data.

Multilayer Perceptrons (MLPs)

MLPs are feed-forward neural networks, organized in layers: one input layer, one or more hidden layers, and one output layer. Each node in a layer is connected to all nodes in the next layer, and each connection has a weight (which can be zero).

Activation Functions

The step function outputs 0 or 1. The sigmoid function outputs a real value between 0 and 1.

Learning with MLPs

As with Perceptrons, finding the right weights is very hard! The solution technique is learning: adjusting the weights based on training examples.

Supervised Learning

General idea:
1. Send the MLP an input pattern, x, from the training set.
2. Get the output from the MLP, y.
3. Compare y with the "right answer", or target t, to get the error quantity.
4. Use the error quantity to modify the weights, so next time y will be closer to t.
5. Repeat with another x from the training set.

When updating weights after seeing x, the network does not just change the way it deals with x, but the way it deals with other inputs too, including inputs it has not seen yet. Generalization is the ability to deal accurately with such unseen inputs.

Learning and Error Minimization

Recall the Perceptron learning rule, which minimizes the difference between the actual and desired outputs:

w_i \leftarrow w_i + \eta (t - o) x_i

An error function represents this difference over a whole set of inputs.
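The Perceptron learning rule recalled above can be sketched in a few lines of Python. This is not part of the lecture; the AND dataset, learning rate, and epoch count are illustrative assumptions, with the bias handled as a constant input x_0 = 1.

```python
def perceptron_output(weights, inputs):
    # inputs include a constant bias input x0 = 1
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train_perceptron(data, eta=0.1, epochs=20):
    # data: list of (inputs, target) pairs, each input vector starting with the bias 1
    n = len(data[0][0])
    weights = [0.0] * n
    for _ in range(epochs):
        for x, t in data:
            o = perceptron_output(weights, x)
            # Perceptron learning rule: w_i <- w_i + eta * (t - o) * x_i
            weights = [w + eta * (t - o) * xi for w, xi in zip(weights, x)]
    return weights

# Linearly separable example: logical AND
data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
w = train_perceptron(data)
print([perceptron_output(w, x) for x, _ in data])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the rule converges; on XOR it would not, which is exactly the limitation motivating MLPs.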
Error Function: Mean Squared Error (MSE)

E(\vec{w}) = \frac{1}{2N} \sum_{p=1}^{N} (t^p - o^p)^2

where N is the number of patterns, t^p is the target output for pattern p, and o^p is the output obtained for pattern p. The 2 makes little difference, but makes life easier later on!

Gradient Descent

One technique that can be used for minimizing functions is gradient descent. Can we use it on our error function E? We would like a learning rule that tells us how to update the weights, like this:

w'_{ij} = w_{ij} + \Delta w_{ij}

But what should \Delta w_{ij} be?

Gradient and Derivatives: The Idea

The derivative is a measure of the rate of change of a function as its input changes. For a function y = f(x), the derivative dy/dx indicates how much y changes in response to changes in x. If x and y are real numbers, and the graph of y is plotted against x, the derivative measures the slope or gradient of the line at each point, i.e., it describes the steepness or incline.

- dy/dx > 0 implies that y increases as x increases. If we want to find the minimum of y, we should reduce x.
- dy/dx < 0 implies that y decreases as x increases. If we want to find the minimum of y, we should increase x.
- dy/dx = 0 implies that we are at a minimum, a maximum, or a plateau.

To get closer to the minimum:

x_{new} = x_{old} - \eta \frac{dy}{dx}

So we know how to use derivatives to adjust one input value, but we have several weights to adjust! We need partial derivatives: a partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. For example, if y = f(x_1, x_2), then we can have \partial y / \partial x_1 and \partial y / \partial x_2. In our learning rule, if we can work out the partial derivatives, we can use this rule to update the weights:

w'_{ij} = w_{ij} + \Delta w_{ij}, \quad \text{where} \quad \Delta w_{ij} = -\eta \frac{\partial E}{\partial w_{ij}}
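The one-dimensional update x_new = x_old - eta * dy/dx can be sketched directly. This example is not from the lecture; the target function and step count are illustrative.

```python
def gradient_descent_1d(df, x0, eta=0.1, steps=100):
    """Repeatedly apply x_new = x_old - eta * df(x_old)."""
    x = x0
    for _ in range(steps):
        x = x - eta * df(x)
    return x

# Minimize y = (x - 3)^2, whose derivative is dy/dx = 2(x - 3)
x_min = gradient_descent_1d(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # → 3.0
```

Note how the sign of the derivative steers the update: when df(x) < 0 the step increases x, when df(x) > 0 it decreases x, exactly as in the three cases listed above.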
Summary So Far

We learned what a multilayer perceptron is. We know a learning rule for updating weights in order to minimize the error:

w'_{ij} = w_{ij} + \Delta w_{ij}, \quad \text{where} \quad \Delta w_{ij} = -\eta \frac{\partial E}{\partial w_{ij}}

\Delta w_{ij} tells us in which direction and how much we should change each weight to roll down the slope (descend the gradient) of the error function E, the mean squared error we want to minimize:

E(\vec{w}) = \frac{1}{2N} \sum_{p=1}^{N} (t^p - o^p)^2

So, how do we calculate \partial E / \partial w_{ij}?

Using Gradient Descent to Minimize the Error

If we use a sigmoid activation function f, then the output of neuron i for pattern p is

o_i^p = f(u_i) = \frac{1}{1 + e^{-a u_i}}

where a is a pre-defined constant and u_i is the result of the input function in neuron i:

u_i = \sum_j w_{ij} x_{ij}

For the pth pattern and the ith neuron, gradient descent on the error function gives

\Delta w_{ij} = -\eta \frac{\partial E^p}{\partial w_{ij}} = \eta (t_i^p - o_i^p) f'(u_i) x_{ij}

where f'(u_i) = df/du_i is the derivative of f with respect to u_i. This means we need the derivative of the activation function, so f must be differentiable! If f is the sigmoid function, f'(u_i) = a f(u_i)(1 - f(u_i)).
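The closed-form derivative f'(u) = a f(u)(1 - f(u)) stated above can be checked numerically against a finite-difference approximation. This sketch is not from the lecture; the test point u = 0.5 and tolerance are arbitrary choices.

```python
import math

def sigmoid(u, a=1.0):
    # f(u) = 1 / (1 + e^(-a u))
    return 1.0 / (1.0 + math.exp(-a * u))

def sigmoid_deriv(u, a=1.0):
    # Analytic form from the slides: f'(u) = a f(u) (1 - f(u))
    fu = sigmoid(u, a)
    return a * fu * (1.0 - fu)

# Compare against a central finite-difference approximation of the derivative
u, h = 0.5, 1e-6
numeric = (sigmoid(u + h) - sigmoid(u - h)) / (2 * h)
print(abs(numeric - sigmoid_deriv(u)) < 1e-8)  # → True
```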
The threshold activation function is not continuous, thus not differentiable! The sigmoid, in contrast, has a derivative which is easy to calculate.

Updating Output vs Hidden Neurons

We can update weights after processing each pattern, using the rule

\Delta w_{ij} = \eta \, \delta_i^p \, x_{ij}, \quad \text{where} \quad \delta_i^p = (t_i^p - o_i^p) f'(u_i)

This is known as the generalized delta rule. However, this \delta_i^p is only good for the output neurons, since it relies on target outputs, and we do not have target outputs for the hidden nodes! What can we use instead? For hidden neuron i:

\delta_i = \left( \sum_k w_{ki} \delta_k \right) f'(u_i)

where k ranges over the neurons that receive input from neuron i. This rule propagates error back from output nodes to hidden nodes. In effect, it blames hidden nodes according to how much influence they had. So now we have rules for updating both output and hidden neurons!

Backpropagation

(The slides illustrate each step on a small layered network of Σ units.)

1. Present the pattern at the input layer.
2. Propagate the activations forward.
3. Calculate the error for the output neurons, and propagate the error backward.

Online Backpropagation

1: Initialize all weights to small random values.
2: repeat
3:   for each training example do
4:     Forward propagate the input features of the example to determine the MLP's outputs.
5:     Back propagate the error to generate \Delta w_{ij} for all weights w_{ij}.
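The two delta rules, output versus hidden, can be sketched side by side. This is not from the slides; the numeric values of u, t, and the downstream weight are hypothetical, chosen only to show the mechanics with a = 1.

```python
import math

def f(u, a=1.0):                      # sigmoid activation
    return 1.0 / (1.0 + math.exp(-a * u))

def f_prime(u, a=1.0):                # f'(u) = a f(u)(1 - f(u))
    return a * f(u, a) * (1.0 - f(u, a))

def delta_output(t, o, u):
    # Output neuron: delta = (t - o) * f'(u)
    return (t - o) * f_prime(u)

def delta_hidden(u, downstream):
    # Hidden neuron i: delta_i = (sum_k w_ki * delta_k) * f'(u_i),
    # where the sum runs over the neurons k that i feeds into
    return sum(w_ki * d_k for w_ki, d_k in downstream) * f_prime(u)

# Hypothetical values, just to show the mechanics:
u_out = 0.4
d_out = delta_output(t=1.0, o=f(u_out), u=u_out)
d_hid = delta_hidden(u=0.2, downstream=[(0.5, d_out)])
print(d_out > 0 and d_hid > 0)  # → True
```

The hidden delta never touches a target output; it is assembled entirely from the deltas of the neurons downstream, which is what makes the backward propagation possible.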
6:     Update the weights using \Delta w_{ij}.
7:   end for
8: until stopping criteria reached.

(To compute the full batch gradient instead of updating after each example, calculate \partial E / \partial w_{ij} for each pattern, repeat for all patterns, and sum up before updating.)

Summary

We learned what a multilayer perceptron is. We have some intuition about using gradient descent on an error function. We know a learning rule for updating weights in order to minimize the error:

\Delta w_{ij} = -\eta \frac{\partial E}{\partial w_{ij}}

If we use the squared error, we get the generalized delta rule:

\Delta w_{ij} = \eta \, \delta_i^p \, x_{ij}
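Putting the pieces together, here is a minimal sketch of online backpropagation, not taken from the slides: the 2-2-1 network shape, the XOR data, the learning rate, and the epoch count are all illustrative assumptions, with the sigmoid constant a = 1.

```python
import math, random

random.seed(0)

def f(u):          # sigmoid activation, a = 1
    return 1.0 / (1.0 + math.exp(-u))

# A 2-2-1 MLP: each neuron has one weight per input plus a bias weight
W_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W_out = [random.uniform(-1, 1) for _ in range(3)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    xb = x + [1]                                   # append bias input
    u_hid = [sum(w * xi for w, xi in zip(ws, xb)) for ws in W_hid]
    h = [f(u) for u in u_hid] + [1]                # hidden outputs + bias
    u_out = sum(w * hi for w, hi in zip(W_out, h))
    return xb, h, f(u_out)

def mse():
    # E = 1/(2N) * sum_p (t^p - o^p)^2
    return sum((t - forward(x)[2]) ** 2 for x, t in data) / (2 * len(data))

eta, e0 = 0.5, mse()
for _ in range(2000):
    for x, t in data:                              # online: update per example
        xb, h, o = forward(x)
        # Output neuron: delta = (t - o) f'(u), with f'(u) = f(u)(1 - f(u)) = o(1 - o)
        d_out = (t - o) * o * (1 - o)
        # Hidden neurons: delta_i = w_out_i * d_out * f'(u_i)
        d_hid = [W_out[i] * d_out * h[i] * (1 - h[i]) for i in range(2)]
        # Weight updates: delta_w = eta * delta * input
        for i in range(3):
            W_out[i] += eta * d_out * h[i]
        for i in range(2):
            for j in range(3):
                W_hid[i][j] += eta * d_hid[i] * xb[j]

print("error before:", e0, "after:", mse())
```

Under this setup the mean squared error drops well below its initial value, something the single-layer Perceptron above could never achieve on XOR.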