Deep Learning in Neural Networks: An Overview
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
- Backpropagation and Deep Learning in the Brain
Backpropagation and Deep Learning in the Brain. Simons Institute -- Computational Theories of the Brain, 2018. Timothy Lillicrap (DeepMind, UCL), with Sergey Bartunov, Adam Santoro, Jordan Guerguiev, Blake Richards, Luke Marris, Daniel Cownden, Colin Akerman, Douglas Tweed, and Geoffrey Hinton.

The slides cover the "credit assignment" problem and its solution in artificial networks: backprop. Credit assignment by backprop works well in practice and shows up in virtually all of the state-of-the-art supervised, unsupervised, and reinforcement learning algorithms. The talk then asks why backprop isn't "biologically plausible", reviews neuroscience evidence for backprop in the brain, and surveys a spectrum of credit assignment algorithms. On how to convince a neuroscientist that the cortex is learning via [something like] backprop: an appeal to variance in gradient estimates might be enough to convince a machine learning researcher, but it is rarely enough to convince a neuroscientist, so what lines of argument help? By "something like backprop" is meant that learning is achieved across multiple layers by sending information from neurons closer to the output back to "earlier" layers to help compute their synaptic updates. Argument 1: feedback connections in cortex are ubiquitous and modify the …
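A minimal numpy sketch of this scheme (illustrative only, not code from the talk; the two-layer network and squared-error loss are assumptions) shows output error being sent back to an earlier layer to compute its synaptic updates:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))               # 4 samples, 3 input features
y = rng.normal(size=(4, 2))               # regression targets

W1 = rng.normal(scale=0.1, size=(3, 5))   # input -> hidden synapses
W2 = rng.normal(scale=0.1, size=(5, 2))   # hidden -> output synapses
lr = 0.1

for step in range(100):
    # Forward pass.
    h = np.tanh(x @ W1)                   # hidden activations
    y_hat = h @ W2                        # linear readout

    # Credit assignment: error computed at the output...
    err = y_hat - y                       # dL/dy_hat for squared error

    # ...travels back through the forward weights W2. Reusing the same
    # weights on the backward path ("weight transport") is a key reason
    # backprop is considered biologically implausible.
    d_h = (err @ W2.T) * (1.0 - h**2)     # error signal at the hidden layer

    # Each layer's synaptic update uses only its local error signal.
    W2 -= lr * h.T @ err / len(x)
    W1 -= lr * x.T @ d_h / len(x)
```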
- Automatic Feature Learning Using Recurrent Neural Networks
Predicting Purchasing Intent: Automatic Feature Learning Using Recurrent Neural Networks. Humphrey Sheil (Cardiff University, Cardiff, Wales), Omer Rana (Cardiff University, Cardiff, Wales), Ronan Reilly (Maynooth University, Maynooth, Ireland). [email protected] [email protected] [email protected]

ABSTRACT: We present a neural network for predicting purchasing intent in an Ecommerce setting. Our main contribution is to address the significant investment in feature engineering that is usually associated with state-of-the-art methods such as Gradient Boosted Machines. We use trainable vector spaces to model varied, semi-structured input data comprising categoricals, quantities and unique instances. Multi-layer recurrent neural networks capture both session-local and dataset-global event dependencies and relationships for user sessions of any length. An exploration of model design decisions including parameter sharing and skip connections further increases model accuracy. Results on benchmark datasets deliver classification accuracy within 98% of state-of-the-art on one and exceed state-of-the-art on the second, without the need for any domain- or dataset-specific feature engineering, on both short and long event sequences.

… influence three out of four major variables that affect profit. In addition, merchants increasingly rely on (and pay advertising to) much larger third-party portals (for example eBay, Google, Bing, Taobao, Amazon) to achieve their distribution, so any direct measure the merchant group can use to increase their profit is sorely needed.

Table 1: Effect of improving different variables on operating profit, from [22].

                      McKinsey   A.T. Kearney   Affected by shopping intent
    Price management  11.1%      8.2%           Yes
    Variable cost     7.8%       5.1%           Yes
    Sales volume      3.3%       3.0%           Yes
    Fixed cost        2.3%       2.0%           No
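As a rough illustration of the pipeline the abstract describes, the following PyTorch sketch (the class name, sizes, and single-logit head are assumptions of this sketch, not the paper's exact architecture) embeds categorical events into a trainable vector space and runs a multi-layer recurrent network over a session:

```python
import torch
import torch.nn as nn

class SessionClassifier(nn.Module):
    """Purchase-intent scoring over a session of categorical events."""
    def __init__(self, n_event_types: int, embed_dim: int = 32, hidden: int = 64):
        super().__init__()
        # Trainable vector space: each event id maps to a learned vector.
        self.embed = nn.Embedding(n_event_types, embed_dim)
        # Multi-layer RNN handles sessions of any length.
        self.rnn = nn.GRU(embed_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, event_ids: torch.Tensor) -> torch.Tensor:
        vecs = self.embed(event_ids)             # (batch, length, embed_dim)
        _, h_n = self.rnn(vecs)                  # final hidden state per layer
        return self.head(h_n[-1]).squeeze(-1)    # one intent logit per session

model = SessionClassifier(n_event_types=1000)
logits = model(torch.randint(0, 1000, (8, 20)))  # 8 sessions of 20 events each
```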
- The Problem of Storing Common Sense in Artificial Intelligence: Context in CYC
UNIVERSITY OF NAVARRA, ECCLESIASTICAL FACULTY OF PHILOSOPHY. Mark Telford Georges, "The Problem of Storing Common Sense in Artificial Intelligence: Context in CYC". Doctoral dissertation directed by Dr. Jaime Nubiola. Pamplona, 2000.

Table of contents:
- Abbreviations
- Introduction
- Chapter I: A Short History of Artificial Intelligence
  - 1.1. The Origin and Use of the Term "Artificial Intelligence"
    - 1.1.1. Influences in AI
    - 1.1.2. "Artificial Intelligence" in popular culture
    - 1.1.3. "Artificial Intelligence" in Applied AI
    - 1.1.4. Human AI and alien AI
    - 1.1.5. "Artificial Intelligence" in Cognitive Science
  - 1.2. Trends in AI
    - 1.2.1. Classical AI …
- Exploring the Potential of Sparse Coding for Machine Learning
Portland State University, PDXScholar, Dissertations and Theses, 10-19-2020. Exploring the Potential of Sparse Coding for Machine Learning. Sheng Yang Lundquist, Portland State University. Recommended citation: Lundquist, Sheng Yang, "Exploring the Potential of Sparse Coding for Machine Learning" (2020). Dissertations and Theses. Paper 5612. https://doi.org/10.15760/etd.7484. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science. Dissertation committee: Melanie Mitchell (Chair), Feng Liu, Bart Massey, Garrett Kenyon, Bruno Jedynak. Portland State University, 2020. © 2020 Sheng Y. Lundquist.

Abstract: While deep learning has proven to be successful for various tasks in the field of computer vision, there are several limitations of deep-learning models when compared to human performance. Specifically, human vision is largely robust to noise and distortions, whereas deep-learning performance tends to be brittle to modifications of test images, including being susceptible to adversarial examples. Additionally, deep-learning methods typically require very large collections of training examples for good performance on a task, whereas humans can learn to perform the same task with a much smaller number of training examples. …
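For context on the dissertation's central technique, here is a minimal numpy sketch of sparse coding inference via the classic ISTA iteration (a generic textbook formulation under assumed parameters, not the dissertation's own models):

```python
import numpy as np

def ista(x, D, lam=0.1, step=0.05, iters=300):
    """Find a sparse code a minimizing 0.5*||x - D a||^2 + lam*||a||_1."""
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = a + step * D.T @ (x - D @ a)       # gradient step on reconstruction
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 64))                  # overcomplete dictionary
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
x = D[:, :3] @ np.array([1.0, -0.5, 2.0])      # signal built from 3 atoms
code = ista(x, D)
print(np.count_nonzero(np.abs(code) > 1e-3))   # only a few active coefficients
```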
- Deep Belief Networks for phone recognition
Deep Belief Networks for phone recognition. Abdel-rahman Mohamed, George Dahl, and Geoffrey Hinton, Department of Computer Science, University of Toronto. {asamir,gdahl,hinton}@cs.toronto.edu

Abstract: Hidden Markov Models (HMMs) have been the state-of-the-art technique for acoustic modeling despite their unrealistic independence assumptions and the very limited representational capacity of their hidden states. There are many proposals in the research community for deeper models that are capable of modeling the many types of variability present in the speech generation process. Deep Belief Networks (DBNs) have recently proved to be very effective for a variety of machine learning problems, and this paper applies DBNs to acoustic modeling. On the standard TIMIT corpus, DBNs consistently outperform other techniques, and the best DBN achieves a phone error rate (PER) of 23.0% on the TIMIT core test set.

1 Introduction. A state-of-the-art Automatic Speech Recognition (ASR) system typically uses Hidden Markov Models (HMMs) to model the sequential structure of speech signals, with local spectral variability modeled using mixtures of Gaussian densities. HMMs make two main assumptions. The first is that the hidden state sequence can be well approximated by a first-order Markov chain, where each state S_t at time t depends only on S_{t-1}. The second is that observations at different time steps are conditionally independent given the state sequence. Although these assumptions are not realistic, they enable tractable decoding and learning even with large amounts of speech data. Many methods have been proposed for relaxing the very strong conditional independence assumptions of standard HMMs (e.g. …
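The two assumptions can be made concrete with a toy discrete-observation HMM and the forward algorithm (a generic sketch with invented numbers, not the paper's Gaussian-mixture acoustic model); both assumptions are exactly what make the recursion below tractable:

```python
import numpy as np

pi = np.array([0.6, 0.4])        # initial state distribution
A = np.array([[0.7, 0.3],        # A[i, j] = P(S_t = j | S_{t-1} = i):
              [0.2, 0.8]])       # the first-order Markov assumption
B = np.array([[0.9, 0.1],        # B[i, k] = P(o_t = k | S_t = i):
              [0.3, 0.7]])       # observations independent given the state

obs = [0, 1, 1, 0]               # an observation sequence

alpha = pi * B[:, obs[0]]        # forward recursion
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
print(alpha.sum())               # P(o_1..o_T), summed over all state sequences
```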
- Multiscale Computation and Dynamic Attention in Biological and Artificial Intelligence
Brain Sciences, Review: Multiscale Computation and Dynamic Attention in Biological and Artificial Intelligence. Ryan Paul Badman (1,*), Thomas Trenholm Hills (2), and Rei Akaishi (1,*). 1 Center for Brain Science, RIKEN, Saitama 351-0198, Japan. 2 Department of Psychology, University of Warwick, Coventry CV4 7AL, UK; [email protected]. * Correspondence: [email protected] (R.P.B.); [email protected] (R.A.). Received: 4 March 2020; Accepted: 17 June 2020; Published: 20 June 2020.

Abstract: Biological and artificial intelligence (AI) are often defined by their capacity to achieve a hierarchy of short-term and long-term goals that require incorporating information over time and space at both local and global scales. More advanced forms of this capacity involve the adaptive modulation of integration across scales, which resolves computational inefficiency and explore-exploit dilemmas at the same time. Research in both neuroscience and AI has made progress towards understanding architectures that achieve this. Insight into biological computations comes from phenomena such as decision inertia, habit formation, information search, risky choices and foraging. Across these domains, the brain is equipped with mechanisms (such as the dorsal anterior cingulate and dorsolateral prefrontal cortex) that can represent and modulate across scales, both with top-down control processes and by local-to-global consolidation as information progresses from sensory to prefrontal areas. Paralleling these biological architectures, progress in AI is marked by innovations in dynamic multiscale modulation, moving from recurrent and convolutional neural networks (with fixed scalings) to attention, transformers, dynamic convolutions, and consciousness priors, which modulate scale to input and increase scale breadth.
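As a concrete contrast with the fixed scalings of convolution, a minimal numpy sketch of scaled dot-product attention (generic, not drawn from the review itself) shows integration weights being recomputed from each input:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over a sequence of feature vectors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)              # data-dependent relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)      # softmax over positions
    return w @ V                               # input-modulated integration

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 8))                   # 10 positions, 8 features
out = attention(x, x, x)                       # self-attention: scale set by input
```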
- Deep Neural Network Models for Sequence Labeling and Coreference Tasks
Federal state autonomous educational institution for higher education "Moscow Institute of Physics and Technology (National Research University)". On the rights of a manuscript. Le The Anh, "Deep Neural Network Models for Sequence Labeling and Coreference Tasks". Specialty 05.13.01, "System analysis, control theory, and information processing (information and technical systems)". A dissertation submitted in requirements for the degree of candidate of technical sciences. Supervisor: Mikhail Sergeevich Burtsev, PhD in physical and mathematical sciences. Dolgoprudny, 2020.

Contents:
- Abstract
- Acknowledgments
- Abbreviations
- List of Figures
- List of Tables
- 1 Introduction
  - 1.1 Overview of Deep Learning
    - 1.1.1 Artificial Intelligence, Machine Learning, and Deep Learning
    - 1.1.2 Milestones in Deep Learning History
    - 1.1.3 Types of Machine Learning Models
  - 1.2 Brief Overview of Natural Language Processing
  - 1.3 Dissertation Overview
    - 1.3.1 Scientific Actuality of the Research
    - 1.3.2 The Goal and Task of the Dissertation
    - 1.3.3 Scientific Novelty
    - 1.3.4 Theoretical and Practical Value of the Work in the Dissertation
    - 1.3.5 Statements to be Defended
    - 1.3.6 Presentations and Validation of the Research Results …
- Covariance Matrix Adaptation for the Rapid Illumination of Behavior Space
Covariance Matrix Adaptation for the Rapid Illumination of Behavior Space. Matthew C. Fontaine (Viterbi School of Engineering, University of Southern California, Los Angeles, CA), Julian Togelius (Tandon School of Engineering, New York University, New York City, NY), Stefanos Nikolaidis (Viterbi School of Engineering, University of Southern California, Los Angeles, CA), Amy K. Hoover (Ying Wu College of Computing, New Jersey Institute of Technology, Newark, NJ). [email protected] [email protected] [email protected] [email protected]

ABSTRACT: We focus on the challenge of finding a diverse collection of quality solutions on complex continuous domains. While quality diversity (QD) algorithms like Novelty Search with Local Competition (NSLC) and MAP-Elites are designed to generate a diverse range of solutions, these algorithms require a large number of evaluations for exploration of continuous spaces. Meanwhile, variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are among the best-performing derivative-free optimizers in single-objective continuous domains. This paper proposes a new QD algorithm called Covariance Matrix Adaptation MAP-Elites (CMA-ME). Our new algorithm combines the self-adaptation techniques of CMA-ES with archiving and mapping techniques for maintaining diversity in QD. Results from experiments based on standard continuous optimization benchmarks show that CMA-ME finds better-quality solutions than MAP-Elites; similarly, results on the strategic game Hearthstone show that CMA-ME finds both a higher overall win rate across 200 games against difficult opponents …

Figure 1: Comparing Hearthstone archives. Sample archives for both MAP-Elites and CMA-ME from the Hearthstone experiment. Our new method, CMA-ME, both fills more cells in behavior space and finds higher-quality policies to play Hearthstone than MAP-Elites. Each grid cell is an elite (high-performing policy) and the intensity value represents the …
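For orientation, a minimal sketch of the plain MAP-Elites loop that CMA-ME builds on appears below; the toy objective, behavior descriptor, and grid size are invented for illustration, and the paper's actual contribution (replacing the random mutation with CMA-ES-style adaptive sampling) is not shown:

```python
import random

def fitness(x):                  # quality to maximize (toy objective)
    return -sum(v * v for v in x)

def behavior(x):                 # 2-D behavior descriptor in [0, 1)
    return (abs(x[0]) % 1.0, abs(x[1]) % 1.0)

GRID = 10                        # GRID x GRID cells over behavior space
archive = {}                     # cell -> (fitness, solution): the elites

for _ in range(10_000):
    if archive:                  # perturb a randomly chosen elite...
        _, parent = random.choice(list(archive.values()))
        child = [v + random.gauss(0.0, 0.1) for v in parent]
    else:                        # ...or bootstrap with a random solution
        child = [random.uniform(-2.0, 2.0) for _ in range(2)]
    b = behavior(child)
    cell = (int(b[0] * GRID), int(b[1] * GRID))
    f = fitness(child)
    # Replace the cell's incumbent only if the child is better (or cell empty).
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, child)

print(len(archive), "cells filled out of", GRID * GRID)
```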
- Mind-Crafting: Anticipatory Critique of Transhumanist Mind-Uploading in German High Modernist Novels
Mind-Crafting: Anticipatory Critique of Transhumanist Mind-Uploading in German High Modernist Novels. Nathan Jensen Bates. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Washington, 2018. Reading committee: Richard Block (Chair), Sabine Wilke, Ellwood Wiggins. Program authorized to offer degree: Germanics. © 2018 Nathan Jensen Bates.

Abstract (Chair of the Supervisory Committee: Professor Richard Block, Germanics): This dissertation explores the question of how German modernist novels anticipate and critique the transhumanist theory of mind-uploading in an attempt to avert binary thinking. German modernist novels simulate the mind and expose the indistinct limits of that simulation. Simulation is understood in this study as defined by Jean Baudrillard in Simulacra and Simulation. The novels discussed in this work include Thomas Mann's Der Zauberberg; Hermann Broch's Die Schlafwandler; and Alfred Döblin's Berlin Alexanderplatz: Die Geschichte von Franz Biberkopf; in the conclusion, Irmgard Keun's Das kunstseidene Mädchen is offered as a field of future inquiry. These primary sources disclose at least three aspects of the mind that are resistant to discrete articulation, that is, to the uploading or extraction of the mind into a foreign context. A fourth is proposed, but only provisionally, in the conclusion of this work. The aspects resistant to uploading are defined and discussed as situatedness, plurality, and adaptability to ambiguity. Each of these aspects relates to one of the three steps of mind-uploading summarized in Nick Bostrom's treatment of the subject. …
- AI, Robots, and Swarms: Issues, Questions, and Recommended Studies
AI, Robots, and Swarms: Issues, Questions, and Recommended Studies. Andrew Ilachinski, January 2017. Approved for Public Release; Distribution Unlimited. This document contains the best opinion of CNA at the time of issue. It does not necessarily represent the opinion of the sponsor. Specific authority: N00014-11-D-0323. Copies of this document can be obtained through the Defense Technical Information Center at www.dtic.mil or from the CNA Document Control and Distribution Section at 703-824-2123. Approved by Dr. David A. Broyles, Special Activities and Innovation, Operations Evaluation Group, January 2017. Copyright © 2017 CNA.

Abstract: The military is on the cusp of a major technological revolution, in which warfare is conducted by unmanned and increasingly autonomous weapon systems. However, unlike the last "sea change" during the Cold War, when advanced technologies were developed primarily by the Department of Defense (DoD), the key technology enablers today are being developed mostly in the commercial world. This study looks at the state of the art of AI, machine learning, and robot technologies, and their potential future military implications for autonomous (and semi-autonomous) weapon systems. While no one can predict how AI will evolve or what its impact on the development of military autonomous systems will be, it is possible to anticipate many of the conceptual, technical, and operational challenges that DoD will face as it increasingly turns to AI-based technologies. …
- Long Term Memory Assistance for Evolutionary Algorithms
Mathematics, Article: Long Term Memory Assistance for Evolutionary Algorithms. Matej Črepinšek (1,*), Shih-Hsi Liu (2), Marjan Mernik (1), and Miha Ravber (1). 1 Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia; [email protected] (M.M.); [email protected] (M.R.). 2 Department of Computer Science, California State University Fresno, Fresno, CA 93740, USA; [email protected]. * Correspondence: [email protected]. Received: 7 September 2019; Accepted: 12 November 2019; Published: 18 November 2019.

Abstract: Short term memory that records the current population has been an inherent component of Evolutionary Algorithms (EAs). As hardware technologies advance, inexpensive memory with massive capacity could become a performance boost to EAs. This paper introduces Long Term Memory Assistance (LTMA), which records the entire search history of an evolutionary process. With LTMA, individuals already visited (i.e., duplicate solutions) do not need to be re-evaluated, and thus resources originally designated to fitness evaluations can be reallocated to continue search-space exploration or exploitation. Three sets of experiments were conducted to prove the superiority of LTMA. In the first experiment, it was shown that LTMA recorded at least 50% more duplicate individuals than a short term memory. In the second experiment, ABC and jDElscop were applied to the CEC-2015 benchmark functions. By avoiding fitness re-evaluation, LTMA improved the execution time of the most time-consuming problems, F03 and F05, by between 7% and 28% and between 7% and 16%, respectively. In the third experiment, on a hard real-world problem of determining soil-model parameters, LTMA improved execution time by between 26% and 69%. …
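The core mechanism reduces to memoizing fitness over the entire run. A minimal sketch follows (generic elitist GA scaffolding with a toy objective, not the paper's ABC or jDElscop variants):

```python
import random

memory = {}                        # genotype -> fitness for the whole run
evaluations = 0                    # count of true (non-cached) evaluations

def evaluate(ind):
    global evaluations
    key = tuple(ind)
    if key in memory:              # duplicate individual: reuse, costs nothing
        return memory[key]
    evaluations += 1
    fit = sum(v * v for v in ind)  # toy objective (minimize)
    memory[key] = fit
    return fit

pop = [[random.randint(-5, 5) for _ in range(4)] for _ in range(20)]
for gen in range(50):
    pop.sort(key=evaluate)         # selection hits the memory, not the objective
    # Elitist reproduction with small integer mutations (illustrative only).
    pop = pop[:10] + [[v + random.choice((-1, 0, 1)) for v in p] for p in pop[:10]]

print("true evaluations:", evaluations, "of", 50 * 20, "fitness lookups")
```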
- Sustainable Development, Technological Singularity and Ethics
European Research Studies Journal, Volume XXI, Issue 4, 2018, pp. 714-725. Sustainable Development, Technological Singularity and Ethics. Vyacheslav Mantatov (1), Vitaly Tutubalin (2). 1 East Siberia State University of Technology and Management, Ulan-Ude, Russia, [email protected]. 2 East Siberia State University of Technology and Management, Ulan-Ude, Russia, [email protected].

Abstract: The development of modern convergent technologies opens the prospect of a new technological order. Its image as a "technological singularity", i.e. a "transhuman" stage of scientific and technical progress at which superintelligence will be practically implemented, seems quite realistic. Determining the basic philosophical coordinates of this future reality along the path of sustainable development of mankind is a most important task of modern science. The article studies the basic ontological, epistemological and moral aspects of the reception of the coming technological singularity. The method of this study integrates dialectical and system approaches. The authors come to the conclusion that the technological singularity in the form of a "computronium" (superintelligence) opens up broad prospects for the sustainable development of mankind in the cosmic dimension. This superintelligence will become an ally of man in the process of cosmic evolution.

Keywords: Technological Singularity, Superintelligence, Convergent Technologies, Cosmocentrism, Human and Universe. JEL codes: Q01, Q56.

1. Introduction. "Intelligence organizes the world by organizing itself." (J. Piaget). Technological singularity is defined as a certain moment or stage in the development of mankind when scientific and technological progress becomes so fast and complex that it is unpredictable.