A Developmental Method That Computes Optimal Networks Without Post-Selections

Total Pages: 16

File Type: PDF, Size: 1020 KB

Juyang Weng, Department of Computer Science and Engineering, Cognitive Science Program, and Neuroscience Program, Michigan State University, East Lansing, MI 48824 USA; GENISAMA LLC, Okemos, MI 48864 USA

Abstract—This work is a theory of the Post-Selection practices that have rarely been studied. Post-Selections mean selections of systems after the systems have been trained. Post-Selections Using Validation Sets (PSUVS) are wasteful, and Post-Selections Using Test Sets (PSUTS) are wasteful and unethical. Both result in systems whose generalization powers are weak. The PSUTS fall into two kinds, machine PSUTS and human PSUTS. The connectionist AI school received criticisms for its "scruffiness" due to a huge number of network parameters and now the machine PSUTS; but the seemingly "clean" symbolic AI school seems more brittle because of its human PSUTS. This paper analyzes why, in deep learning, error-backprop methods with random initial weights suffer from severe local minima, why PSUTS violates well-established research ethics, and why publications that used PSUTS should have transparently reported PSUTS. This paper proposes a Developmental Methodology that trains only a single but optimal network for each application lifetime, using a new standard for performance evaluation in machine learning, called developmental errors, for all networks trained in a project that the selection of the luckiest network depends on, along with Three Learning Conditions: (1) framework restrictions, (2) training experience, and (3) computational resources. This paper also discusses how the brain-inspired Developmental Networks (DNs) avoid PSUTS by reporting developmental errors and their maximum likelihood (ML) optimality under the Three Learning Conditions. DNs are not "scruffy" because they are ML estimators of the observed Emergent Turing Machines at each time during their "lives". This implies the best performance given a limited amount of overall computational resources available for a project.

I. INTRODUCTION

In 2000, the author published with six (6) co-authors what is called "Autonomous Mental Development (AMD)" in Science [1]. The Science editor commented that developmental AI had not been known before; the article published a new direction, "task-nonspecific programs across lifetime". All programs before then were task-specific, including all published neural networks, due to the PSUTS to be explained below. A series of advances has been made since 2000. The following is only an outline of the most important events. For a more detailed account of the entire picture, the reader is referred to a book by the author [2].

In 2017, the author and his coworkers presented the "emergent universal Turing machine" [3]. This is the first emergent model of Auto-Programming for General Purposes (APFGP), presented along with experimental results.

Last year, 2020, the author argued that APFGP enables conscious machines, the first model of conscious machines that is based on emergent universal Turing machines [4].

Here, this author presents a controversial practice called Post-Selection Using Test Sets (PSUTS). This report also explains why the author's brain model (Developmental Networks, DNs) avoids PSUTS. Using PSUTS means the corresponding project wastes much resource of computation and manpower, and the superficial error of the reported system is misleading because the system does not give a similar error on new data sets. This paper not only raises a controversy but also presents a solution to avoid PSUTS.

Since 2012, AI has attracted much attention from the public and the media. A large number of AI projects have been published. If the authors of these projects use the method of this report, they could benefit much by reducing the time and manpower used to reach their target systems, as well as by improving the generalization powers of their target systems.

Let us first discuss in Sec. II what PSUTS is. Sec. III reasons why error backprop needs PSUTS. Sec. IV explains why Developmental Networks (DNs) do not need and avoid PSUTS. Sec. V provides details of the methodology. Sec. VI outlines experiments. Sec. VII presents concluding remarks.

II. POST-SELECTIONS

Many machine-learning methods were evaluated without considering how much computational resource is necessary for the development of a reported system. Thus, comparisons of system performance have been biased toward a competition of how many resources a group has, as we will see below after we define the Post-Selection, regardless of how many networks have been trained and discarded and how large each network is. Worse still, test sets were used in a controversial way. Here we explicitly define a set of Three Learning Conditions for the development of an AI system. The Three Learning Conditions for developing an AI system are: (1) the framework restrictions, including whether task-specific or task-nonspecific, batch learning or incremental learning, and the body's sensors and effectors; (2) the teaching experience; (3) the computational resources, including the number of hidden neurons.
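To make the comparison conditions concrete, a project could record its Three Learning Conditions as an explicit declaration that accompanies any reported error. The following Python sketch is only an illustration under assumed names and example values (the class name, field names, and numbers are not from the paper):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningConditions:
    """A declared budget under which a reported network was developed.

    Mirrors the Three Learning Conditions: (1) framework restrictions,
    (2) teaching experience, (3) computational resources.
    """
    # (1) Framework restrictions
    task_nonspecific: bool           # task-nonspecific vs. task-specific program
    incremental_learning: bool       # incremental vs. batch learning
    sensors_effectors: List[str] = field(default_factory=list)
    # (2) Teaching experience
    training_samples: int = 0
    training_epochs: int = 0
    # (3) Computational resources
    hidden_neurons: int = 0
    networks_trained_total: int = 1  # every network trained in the project, not just the reported one

# Hypothetical example: a project that trained 200 networks but reports only one.
conditions = LearningConditions(
    task_nonspecific=False,
    incremental_learning=False,
    sensors_effectors=["camera"],
    training_samples=1_200_000,
    training_epochs=90,
    hidden_neurons=60_000_000,
    networks_trained_total=200,
)
print(conditions)
```

Declaring the third condition, including the total number of networks trained and discarded, is what makes the Post-Selections defined next visible to the reader.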
A. Post-Selections

The available data set D is divided into three mutually disjoint sets: a training set T, a validation set V, and a test set T'. Two sets are disjoint if they do not share any elements. Let us consider how the validation sets and test sets are used in experiments.

A network architecture has a set of parameters represented by a vector, where each component corresponds to an architecture parameter, such as the convolution kernel sizes and stride values at each level of a deep hierarchy, the neuronal learning rate, the neuronal learning momentum value, etc. Let k be the finite number of grid points along which such architecture parameter vectors need to be tried, A = \{a_i \mid i = 1, 2, \dots, k\}. If there are 10 parameters in each architecture and each of them has 10 grid points to try, there is a total of k = 10^{10} = 10B architecture parameter vectors to try, an extremely large number.

For each architecture vector a_i, assume n sets of random weights w_j, resulting in kn networks

\{N(a_i, w_j) \mid i = 1, 2, \dots, k;\ j = 1, 2, \dots, n\}

that are trained, each of which starts with a different set of random weights w_j, using error backprop that locally and numerically minimizes the fitting error f_{i,j} on the training set T. Graves et al. [5] seem to have mentioned n = 20. Using the above example of k = 10B, kn = 200B, a huge number that requires a lot of computational resources and manpower.

Let us define the Post-Selection. Suppose that the trainer is first aware of the validation sets and the test sets. He trains multiple systems using the training sets. After these systems have been trained, he post-selects a system by searching, manually or assisted by computers, among the trained systems based on the validation sets or the test sets. This is called Post-Selection: selection after training.

Obviously, a post-selection wastes all trained systems except the selected one. As we will see next, a system from the post-selection has a weak generalization power.
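The Post-Selection procedure described above can be summarized by the following Python sketch. It is a simplified illustration under assumed names (train_network, error_on, and the small loop structure are placeholders, not the procedure of any specific published system): a grid of architecture vectors is crossed with random initializations, every network is trained on T, and a single "luckiest" network is then picked by searching over the trained pool.

```python
import itertools
import random

def post_selection(architectures, n_seeds, train_set, selection_set,
                   train_network, error_on):
    """Train k*n networks, then post-select the single luckiest one.

    `architectures` plays the role of A = {a_i}, `n_seeds` is n, and
    `selection_set` is V (PSUVS) or T' (machine PSUTS).
    `train_network` and `error_on` are assumed, user-supplied callables.
    """
    trained = []
    for a_i, j in itertools.product(architectures, range(n_seeds)):
        rng = random.Random(j)                    # a different set of random weights w_j
        net = train_network(a_i, rng, train_set)  # error backprop on T, reaching a local minimum
        trained.append(net)

    # Post-Selection: search among all k*n trained systems and keep the one
    # with the smallest error on the selection set; the rest are discarded.
    best = min(trained, key=lambda net: error_on(net, selection_set))
    return best, len(trained) - 1                 # the selected network and the number wasted
```

When selection_set is the validation set V, this procedure is the PSUVS discussed next; when it is the test set T', it is the machine PSUTS of Sec. II-C.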
First, consider Post-Selections Using Validation Sets.

B. PSUVS

A Machine PSUVS is defined as follows: if the test set T' is not available, suppose the validation error of N(a_i, w_j) is e_{i,j} on the validation set V; find the best network N(a_{i^*}, w_{j^*}) so that it reaches the minimum validation error:

e_{i^*,j^*} = \min_{1 \le i \le k,\ 1 \le j \le n} e_{i,j}.   (1)

Since a PSUVS procedure picks the best system based on the errors on the validation set, the resulting system might not do well on the test sets, because doing well on validation sets does not guarantee doing well on the test sets.

Worse are Post-Selections Using Test Sets (PSUTS). There are two kinds of PSUTS, machine PSUTS and human PSUTS.

C. Machine PSUTS

If the test set T' is available, which seems to be true for almost all neural network publications, we define Post-Selection Using Test Sets (PSUTS). A Machine PSUTS is defined as follows: if the test set T' is available, suppose the test error of N(a_i, w_j) is e'_{i,j} on the test set T'; find the best network N(a_{i^*}, w_{j^*}) so that it reaches the minimum test error:

e'_{i^*,j^*} = \min_{1 \le i \le k,\ 1 \le j \le n} e'_{i,j}   (3)

and report only the performance e'_{i^*,j^*} but not the performances of the other remaining kn - 1 trained neural networks.

Imagine that we want to remove luck from the above expression by using averages, as we did in Eq. (2):

\bar{e}'_{i^*,j^*} = \min_{1 \le i \le k} \frac{1}{n} \sum_{j=1}^{n} e'_{i,j}.   (4)

But the above error is still unethical and misleading, since each term under minimization for a_i has peeked into the test sets!

There are some variations of Machine PSUTS: the validation set V or the test set T' is not disjoint with T. If T = V, we call it validation-vanished PSUTS. If T = T', we call it test-vanished PSUTS.

The distributions of fitting errors, validation errors, and test errors are defined as follows: the distributions of all kn trained networks' fitting errors {f_{i,j}}, validation errors {e_{i,j}}, and test errors {e'_{i,j}}, i = 1, 2, ..., k, j = 1, 2, ..., n, together with the values of k and n.

It is necessary to present some key statistical characteristics of such distributions, for example, ranked errors in decreasing order. Then give the maximum, 75%, 50%, 25%, and minimum values of these kn values for the fitting errors, validation errors, and test errors, so that the research community can see whether error-backprop can avoid local minima in deep learning.
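The difference between Eqs. (1), (3), and (4), and the distribution statistics requested above, can be illustrated with a short Python sketch. The error arrays here are synthetic stand-ins used only to show the computations; they are not results from any experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 5, 20                                    # k architectures, n random initializations each

# Synthetic stand-ins for the k x n error tables (not experimental data).
e_val  = rng.uniform(0.05, 0.30, size=(k, n))   # validation errors e_{i,j} on V
e_test = rng.uniform(0.05, 0.30, size=(k, n))   # test errors e'_{i,j} on T'

# Eq. (1), PSUVS: pick the single network with the smallest validation error.
i1, j1 = np.unravel_index(np.argmin(e_val), e_val.shape)

# Eq. (3), machine PSUTS: pick the single network with the smallest test error
# and report only that number, hiding the other k*n - 1 networks.
i3, j3 = np.unravel_index(np.argmin(e_test), e_test.shape)

# Eq. (4): average over the n initializations first, then minimize over
# architectures; each averaged term still depends on the test set T'.
mean_test_per_arch = e_test.mean(axis=1)
i4 = int(np.argmin(mean_test_per_arch))

# Distribution statistics the paper asks every project to report:
# maximum, 75%, 50%, 25%, and minimum of all k*n test errors.
quartiles = np.percentile(e_test, [100, 75, 50, 25, 0])

print("PSUVS pick (i*, j*):", (int(i1), int(j1)), "val error:", float(e_val[i1, j1]))
print("PSUTS pick (i*, j*):", (int(i3), int(j3)), "test error:", float(e_test[i3, j3]))
print("Eq. (4) architecture i*:", i4, "mean test error:", float(mean_test_per_arch[i4]))
print("Test-error distribution (max, 75%, 50%, 25%, min):", quartiles.round(3))
```

Reporting the full spread of all kn errors, rather than only the minimum, is exactly what keeps the luck of the selected network from being mistaken for generalization power.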
Recommended publications
  • Lecture 9 Recurrent Neural Networks “I’m glad that I’m Turing Complete now”
  • Neural Turing Machines arXiv:1410.5401v2 [cs.NE]
  • Memory Augmented Matching Networks for Few-Shot Learnings
  • Training Robot Policies Using External Memory Based Networks via Imitation Learning
  • Learning Operations on a Stack with Neural Turing Machines
  • Neural Shuffle-Exchange Networks
  • Advances in Neural Turing Machines
  • ELEC 576: Understanding and Visualizing Convnets & Introduction to Recurrent Neural Networks
  • A Review on Neural Turing Machine
  • Evolving Neural Turing Machines
  • Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes
  • Outline for Today's Presentation