The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years


AI Magazine Volume 27 Number 4 (2006) (© AAAI)

Reports

The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years

James Moor

■ The Dartmouth College Artificial Intelligence Conference: The Next 50 Years (AI@50) took place July 13–15, 2006. The conference had three objectives: to celebrate the Dartmouth Summer Research Project, which occurred in 1956; to assess how far AI has progressed; and to project where AI is going or should be going. AI@50 was generously funded by the office of the Dean of Faculty and the office of the Provost at Dartmouth College, by DARPA, and by some private donors.

Copyright © 2006, American Association for Artificial Intelligence. All rights reserved.

Reflections on 1956

Dating the beginning of any movement is difficult, but the Dartmouth Summer Research Project of 1956 is often taken as the event that initiated AI as a research discipline. John McCarthy, a mathematics professor at Dartmouth at the time, had been disappointed that the papers in Automata Studies, which he coedited with Claude Shannon, did not say more about the possibilities of computers possessing intelligence. Thus, in the proposal written by John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester for the 1956 event, McCarthy wanted, as he explained at AI@50, "to nail the flag to the mast." McCarthy is credited for coining the phrase "artificial intelligence" and solidifying the orientation of the field. It is interesting to speculate whether the field would have been any different had it been called "computational intelligence" or any of a number of other possible labels.

Five of the attendees from the original project attended AI@50 (figure 1). Each gave some recollections. McCarthy acknowledged that the 1956 project did not live up to expectations in terms of collaboration. The attendees did not come at the same time and most kept to their own research agenda. McCarthy emphasized that nevertheless there were important research developments at the time, particularly Allen Newell, Cliff Shaw, and Herbert Simon's Information Processing Language (IPL) and the Logic Theory Machine.

Marvin Minsky commented that, although he had been working on neural nets for his dissertation a few years prior to the 1956 project, he discontinued this earlier work because he became convinced that advances could be made with other approaches using computers. Minsky expressed the concern that too many in AI today try to do what is popular and publish only successes. He argued that AI can never be a science until it publishes what fails as well as what succeeds.

Oliver Selfridge highlighted the importance of many related areas of research before and after the 1956 summer project that helped to propel AI as a field. The development of improved languages and machines was essential. He offered tribute to many early pioneering activities such as J. C. R. Licklider developing time-sharing, Nat Rochester designing IBM computers, and Frank Rosenblatt working with perceptrons.

Trenchard More was sent to the summer project for two separate weeks by the University of Rochester. Some of the best notes describing the AI project were taken by More, although ironically he admitted that he never liked the use of "artificial" or "intelligence" as terms for the field.

Ray Solomonoff said he went to the summer project hoping to convince everyone of the importance of machine learning. He came away knowing a lot about Turing machines that informed future work.

Thus, in some respects the 1956 summer research project fell short of expectations. The participants came at various times and worked on their own projects, and hence it was not really a conference in the usual sense. There was no agreement on a general theory of the field and in particular on a general theory of learning. The field of AI was launched not by agreement on methodology or choice of problems or general theory, but by the shared vision that computers can be made to perform intelligent tasks. This vision was stated boldly in the proposal for the 1956 conference: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

Figure 1. Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge, and Ray Solomonoff. (Photographer: Joe Mehling)

Evaluations at 2006

There were more than three dozen excellent presentations and events at AI@50, and there is not space to give them the individual treatment each deserves. Leading researchers reported on learning, search, networks, robotics, vision, reasoning, language, cognition, and game playing.1 These presentations documented significant accomplishments in AI over the past half century.

Consider robotics as one example. As Daniela Rus pointed out, 50 years ago there were no robots as we know them. There were fixed automata for specific jobs. Today robots are everywhere. They vacuum our homes, explore the oceans, travel over the Martian surface, and win the DARPA Grand Challenge in a race of 132 miles in the Mojave Desert. Rus speculated that in the future we might have our own personal robots as we now have our own personal computers, robots that could be tailored to help us with the kind of activities that each of us wants to do. Robot parts might be smart enough to self-assemble to become the kind of structure we need at a given time. Much has been accomplished in robotics, and much to accomplish seems not too far over the horizon.

Although AI has enjoyed much success over the last 50 years, numerous dramatic disagreements remain within the field. Different research areas frequently do not collaborate, researchers utilize different methodologies, and there still is no general theory of intelligence or learning that unites the discipline.

One of the disagreements debated at AI@50 is whether AI should be logic based or probability based. McCarthy continues to be fond of a logic-based approach. Ronald Brachman argued that a core idea in the proposal for the 1956 project was that "a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture" and that this key idea has served as a common basis for much of AI during the past 50 years. This was the AI revolution or, as McCarthy explained, the counter-revolution, as it was an attack on behaviorism, which had become the dominant position in psychology in the 1950s.

Figure 2. Dartmouth Hall, Where the Original Activities Took Place. (Photographer: Joe Mehling)

David Mumford argued on the contrary that the last 50 years have experienced the gradual displacement of brittle logic with probabilistic methods. Eugene Charniak supported this position by explaining how natural language processing is now statistical natural language processing. He stated frankly, "Statistics has taken over natural language processing because it works."

Another axis of disagreement, correlated with the logic versus probability issue, is the psychology versus pragmatic paradigm debate. Pat Langley, in the spirit of Allen Newell and Herbert Simon, vigorously maintained that AI should return to its psychological roots if human-level AI is to be achieved. Other AI researchers are more inclined to explore what succeeds even if done in nonhuman ways. Peter Norvig suggested that searching, particularly given the huge repository of data on the web, can show encouraging signs of solving traditional AI problems, though not in terms of human psychology. For instance, machine translation with a reasonable degree of accuracy between Arabic and English is now possible through statistical methods, though nobody on the relevant research staff speaks Arabic.

Finally, there is the ongoing debate over how useful neural networks might be in achieving AI. Simon Osindero, working with Geoffrey Hinton, discussed more powerful networks. Both Terry Sejnowski and Rick Granger explained how much we have learned about the brain in the last decade and how this information is very suggestive for building computer models of intelligent activity.

These various differences can be taken as a sign of health in the field. As Nils Nilsson put it, there are many routes to the summit. Of course, not all of the methods may be fruitful in the long run. Since we don't know which way is best, it is good to have a variety of approaches. The field remains united, as in 1956, by the shared vision that computers can be made to perform intelligent tasks. Perhaps this vision is all it takes to unite the field.

Figure 3. Dartmouth Hall Commemorative Plaque. (Photographer: Joe Mehling)

Projections to 2056

Many predictions about the future of AI were given at AI@50. When asked what AI will be like 50 years from now, the participants from the original conference had diverse positions. McCarthy offered his view that human-level AI is likely but not assured by 2056. Minsky thought what is needed for significant future progress is a few bright researchers pursuing their own good ideas, not doing what their advisors have done. He lamented that too few students today are pursuing such ideas but rather are attracted into entrepreneurships or law. More hoped that machines would always be under the domination of humans and suggested that machines were very unlikely ever to match the imagination …