DEPARTMENT of COMPUTER SCIENCE, University of Toronto

2012-2013 AWARDS

This annual listing includes awards received between May 1, 2012 and April 30, 2013 by: Department of Computer Science faculty across the St. George, Mississauga, and Scarborough campuses; departmental graduate students; undergraduate students in the computer science program on the St. George campus; and alumni engaged in startup activities.

RESEARCH & SERVICE AWARDS AND HONOURS

Gerhard Herzberg Canada Gold Medal for Science and Engineering (NSERC) 2013
Stephen Cook

Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Accelerator Supplement (DAS)
Craig Boutilier
Renée Miller

Google Focused Research Award 2012
Geoffrey Hinton
Ruslan Salakhutdinov (cross-appointed to Statistics)
Richard Zemel

Genome Canada and the Ontario Genomics Institute 2012
Gary Bader (cross-appointed to Molecular Genetics)
Michael Brudno

Fellow, Association for Computing Machinery (ACM) 2012
Craig Boutilier

Connaught New Researcher Awards, University of Toronto 2012
Azadeh Farzan
Ruslan Salakhutdinov (cross-appointed to Statistics)
Bianca Schroeder

Alfred P. Sloan Research Fellowship 2013
Bianca Schroeder
Ruslan Salakhutdinov (cross-appointed to Statistics)
Vinod Vaikuntanathan

Inventor of the Year Award, University of Toronto's Office of Research and Innovation 2013
Daniel Wigdor
Ricardo Jota (Postdoctoral Fellow)

Outstanding Contribution to ACM Award 2012
C.C. (Kelly) Gotlieb

Bell University Laboratories Chair in Information Systems 2012
Renée Miller

Bell University Laboratories Chair in Software Engineering 2012
Marsha Chechik

Order of Ontario 2012
Stephen Cook

Award for Research Excellence, International Joint Conference on Artificial Intelligence (IJCAI) 2013
Hector Levesque

Outstanding Senior Program Committee Member, 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2012
Craig Boutilier

Lifetime Research Achievement Award, Canadian Image Processing and Pattern Recognition Society (CIPPRS) 2012
Sven Dickinson

President of the Association for Symbolic Logic 2013
Alasdair Urquhart

Lifetime Achievement Award, Canadian Artificial Intelligence Association (CAIAC) 2012
Hector Levesque

Long Service Award, University of Toronto 2012
Ronald Baecker (40 years)
Wayne Enright (40 years)
Eugene Fiume (25 years)

The Queen Elizabeth II Diamond Jubilee Medal 2012
Stephen Cook
Michelle Craig
C.C. (Kelly) Gotlieb
J.N. Patterson (Patt) Hume
Sam Toueg

Outstanding Young Computer Science Researcher Prize (CACS/AIC) 2012
Eyal de Lara

TEACHING AWARDS AND HONOURS

Computer Science Student Union Teaching Award 2012-2013
Steve Engels
Danny Heap

STAFF AWARDS

Excellence Through Innovation Award, University of Toronto 2012
John Hancock
Joseph Raghubar

Long Service Award, University of Toronto 2012
Angela Glinos (25 years)
Marina Haloulos (25 years)

BEST PAPER AWARDS

16th International Conference on Database Theory (ICDT), Test of Time Award 2013: "Data Exchange: Semantics and Query Answering" (ICDT '03)
Ronald Fagin (IBM Almaden Research Center), Phokion Kolaitis (IBM Almaden Research Center), Renée Miller, Lucian Popa (IBM Almaden Research Center)

27th AAAI Conference on Artificial Intelligence, Honorable Mention, Outstanding Paper Award 2013
Reshef Meir (Hebrew University), Tyler Lu (PhD Student), Moshe Tennenholtz (Technion - Israel Institute of Technology), and Craig Boutilier

45th Annual ACM Symposium on the Theory of Computing (STOC), SIAM Journal on Computing, Best Paper Special Issue 2013
Sergey Gorbunov (PhD Student), Vinod Vaikuntanathan, and Hoeteck Wee (George Washington University)

45th Annual ACM Symposium on the Theory of Computing (STOC), SIAM Journal on Computing, Best Paper Special Issue 2013
Shafi Goldwasser (MIT), Yael Kalai (Microsoft Research), Raluca Ada Popa (MIT), Vinod Vaikuntanathan, and Nickolai Zeldovich (MIT)

SIGCHI Conference on Human Factors in Computing Systems, Best Paper, Honorable Mention 2012
Stephanie Santosa (MSc Student), Fanny Chevalier (Postdoctoral Fellow), Ravin Balakrishnan, Karan Singh

SIGMETRICS/Performance, Best Paper Award 2012
Nosayba El-Sayed (PhD Student), Ioan Stefanovici (PhD Student), George Amvrosiadis (PhD Student), Andy Hwang (PhD Student), Bianca Schroeder

34th International Conference on Software Engineering (ICSE), ACM Distinguished Paper Award 2012
Michalis Famelis (PhD Student), Rick Salay (Postdoctoral Fellow), and Marsha Chechik

17th European Conference on Software Maintenance and Reengineering (CSMR), Best Paper 2013
Yael Dubinsky (IBM), Julia Rubin (PhD Student), Thorsten Berger (IT University of Copenhagen), Slawomir Duszynski (Fraunhofer Institute for Experimental Software Engineering), Martin Becker (Fraunhofer Institute for Experimental Software Engineering), Krzysztof Czarnecki (University of Waterloo)

17th Annual Conference on Innovation and Technology in Computer Science Education, Best Working Group Paper 2012
Michael Goldweber (Xavier University), John Barr (Ithaca College), Tony Clear (Auckland University of Technology), Renzo Davoli (Università di Bologna), Samuel Mann (Otago Polytechnic), Elizabeth Patitsas (PhD Student), Scott Portnoff (Downtown Magnets High School)

International Colloquium on Automata, Languages and Programming (ICALP), Co-recipient of Best Student Paper for Track A 2012
Anastasios Zouzias (PhD Student)

International Conference on Very Large Data Bases (VLDB), Best Paper 2012
Albert Angel (PhD Student), Nick Koudas, Nikos Sarkas (Tower Research Capital), Divesh Srivastava (AT&T Labs - Research)

Eurographics Sketch Based Interfaces and Modeling (SBIM), Best Paper 2012
Nilgun Donmez (PhD Student), Karan Singh

3rd Annual Conference on Graphics, Animation and New Media (GRAND), Best Research Note Award 2012
Kinfe Tadesse Mengistu (Postdoctoral Fellow), Richard Compton (U of T Linguistics), and Gerald Penn

38th Graphics Interface Conference (GIC), Best Paper 2012
Cyrus Rahgoshay (CMLabs Simulations Inc.), Amir Rabbani (McGill University), Karan Singh, Paul G. Kry (McGill University)

Graphics Interface Conference (GIC), Best Paper 2013
Michael Glueck (PhD Student), Tovi Grossman (Autodesk Research), and Daniel Wigdor

COMPETITION RESULTS

2nd International Competition on Software Verification (SV-COMP), 4 Gold Medals and 1 Bronze Medal 2013
Aws Albarghouthi (PhD Student), Arie Gurfinkel (Software Engineering Institute), Yi Li (PhD Student), Sagar Chaki (Software Engineering Institute), and Marsha Chechik

Google Places API Developer Challenge, First Place 2012
Christian Muise (PhD Student)

ImageNet Large Scale Visual Recognition Challenge, Winners 2012
Alex Krizhevsky (PhD Student), Ilya Sutskever (PhD Student), Geoffrey Hinton

Merck Molecular Activity Challenge, 1st Prize 2012
George Dahl (PhD Student), Ruslan Salakhutdinov (DCS/Stats), Navdeep Jaitly (PhD Student), Chris Jordan-Squire (University of Washington), Geoffrey Hinton

Ontario Provincial 3 Minute Thesis Competition, Silver 2013
Abraham Heifets (PhD Student)

ACM International Collegiate Programming Contest, East Central North America Regional, Second Place 2012
Jacob Plachta (U of T Music), Andre Hahn (U of T ECE), Sandro Feuz (MSc Student); Coaches: Wesley May (MSc Student), Carolyn McLeod (PhD Student)

University of Chicago Invitational Programming Contest, First Place 2013
Jacob Plachta (U of T Music), Andre Hahn (U of T ECE), Brian Bi (U of T Chemistry); Coaches: Wesley May (MSc Student), Carolyn McLeod (PhD Student)

GRADUATE SCHOLARSHIPS, GRANTS & FELLOWSHIPS

MITACS Globalink Graduate Fellowship 2012
Jaiganesh Balasundaram

NSERC Alexander Graham Bell Canada Graduate Scholarship 2012-2013
Aws Hamza Qasem Albarghouthi, Aditya Bhargava, Julian Brooke, Jonathan Calver, Jackie Chi Kit Cheung, Recep Colak, Eric Corlett, Michael Dzamba, Marco Fiume, Daniel Fryer, Frank Chun Yat Li, Yue Li, David Liu, Christopher Joseph Maddison, Tomasz Nykiel, Peter O'Donovan, Suprio Ray, Utkarsh Roy, Amirali Salehi-Abari, Mark Sun, Kevin Swersky, Yichuan Tang

Postdoctoral Fellowships Program, NSERC 2012
Marcus Brubaker

Google Fellowship in Machine Learning 2012
James Martens

Google Fellowship in Computational Economics 2012
Tyler Lu

NSERC Industrial Research & Development Fellowship 2012
Mohammad Shakourifar

OCE SmartStart 2012
Tyler Lu

Facebook Graduate Fellowship 2013-2014
Jackie C.K. Cheung

Ontario Graduate Scholarship (OGS) 2012-2013
Christopher Franco De Paoli, Erin Delisle, Carrie Demmans Epp, Joanna Drummond, Lila Fontes, Justin Foong, Wesley George, Peter Goodman, Sergey Gorbunov, Marina Gueroussova, Kirill Ignatiev, David Kordalewski, Ahmed Mashiyat, Abdel-rahman Mohamed, Mohammad Norouzi, Nirvana Nursimulu, Rohan Palaniappan, Elizabeth Patitsas, Nitish Srivastava, Sahil Suneja

Acres Productive Technologies, Joseph Yonan Memorial Fellowship 2012-2013
Ka-Chun Wong

Alfred B. Lehman Graduate Scholarship in Computer Science 2012-2013
Dai Tri Man Le

C.C. Gotlieb (Kelly) Graduate Fellowship 2012
Sajad Shirali-Shahreza

Fonds de recherche Nature et technologies (FQRNT) Industrial Innovations Scholarship 2012-2013
Kaiwen Zhang

Outstanding Achievement Award 2012, awarded by the Iranian Canadian Intercultural Center and the Federal Government of Canada
Mohammad Shakourifar

Robertson Fellowship 2012
Christian Muise

Gordon Cressy Student Leadership Award 2013
Utkarsh Roy

Queen Elizabeth II Graduate Scholarship in Science and Technology 2012-2013
Volodymyr Mnih
Matthew Patrick O'Toole
Andrew Perrault

Microsoft Research Graduate Women's Scholarship 2012
Joanna Drummond

Microsoft Research PhD Fellowship 2012
George Dahl

Robert E. Lansdale/Okino Computer Graphics