One-Shot Imitation Learning
Pages: 16
File type: PDF, size: 1020 KB
Recommended publications
- Improved Policy Networks for Computer Go
Tristan Cazenave, Université Paris-Dauphine, PSL Research University, CNRS, LAMSADE, Paris, France.

Abstract. Golois uses residual policy networks to play Go. Two improvements to these residual policy networks are proposed and tested. The first one is to use three output planes. The second one is to add Spatial Batch Normalization.

1 Introduction. Deep learning for the game of Go with convolutional neural networks has been addressed by [2]. It has been further improved using larger networks [7, 10]. AlphaGo [9] combines Monte Carlo Tree Search with a policy and a value network. Residual networks improve the training of very deep networks [4]. These networks can gain accuracy from considerably increased depth: on the ImageNet dataset, a 152-layer network achieves 3.57% error and won first place in the ILSVRC 2015 classification task. The principle of residual nets is to add the input of the layer to the output of each layer. With this simple modification, training is faster and deeper networks can be trained. Residual networks were recently successfully adapted to computer Go [1]. As a follow-up to this paper, we propose improvements to residual networks for computer Go. The second section details the proposed improvements to policy networks for computer Go, the third section gives experimental results, and the last section concludes.

2 Proposed Improvements. We propose two improvements for policy networks. The first improvement is to use multiple output planes, as in DarkForest. The second improvement is to use Spatial Batch Normalization.

2.1 Multiple Output Planes. In DarkForest [10], training with multiple output planes containing the next three moves to play has been shown to improve the level of play of a usual policy network with 13 layers.
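A minimal sketch of the residual principle the excerpt describes (adding a layer's input to its output), written in PyTorch; the channel count and board size are illustrative assumptions, not Golois's exact architecture:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = relu(conv(conv(x)) + x)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # 3x3 convolutions with padding=1 preserve the 19x19 board shape.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)  # spatial batch normalization
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.relu(y + x)  # residual connection: add the input to the output

# Toy usage: a batch of 8 board encodings with 64 feature planes on a 19x19 board.
x = torch.randn(8, 64, 19, 19)
print(ResidualBlock(64)(x).shape)  # torch.Size([8, 64, 19, 19])
```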
- A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms
Journal of Machine Learning Research 22 (2021) 1-82. Submitted 9/19; Revised 8/20; Published 1/21.

Oliver Kroemer, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA. Scott Niekum, Department of Computer Science, The University of Texas at Austin, Austin, TX 78712, USA. George Konidaris, Department of Computer Science, Brown University, Providence, RI 02912, USA. Editor: Daniel Lee.

Abstract. A key challenge in intelligent robotics is creating robots that are capable of directly interacting with the world around them to achieve their goals. The last decade has seen substantial growth in research on the problem of robot manipulation, which aims to exploit the increasing availability of affordable robot arms and grippers to create robots capable of directly interacting with the world to achieve their goals. Learning will be central to such autonomous systems, as the real world contains too much variation for a robot to expect to have an accurate model of its environment, the objects in it, or the skills required to manipulate them, in advance. We aim to survey a representative subset of that research which uses machine learning for manipulation. We describe a formalization of the robot manipulation learning problem that synthesizes existing research into a single coherent framework and highlight the many remaining research opportunities and challenges.

Keywords: manipulation, learning, review, robots, MDPs.

1. Introduction. Robot manipulation is central to achieving the promise of robotics: the very definition of a robot requires that it has actuators, which it can use to effect change on the world.
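Per the keywords, the review's formalization is grounded in Markov decision processes; below is a generic MDP container in Python as a sketch of that framing, not the paper's exact notation (the state and action encodings are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Tuple[int, ...]   # e.g., a discretized object/gripper configuration
Action = str              # e.g., a named manipulation primitive like "grasp"

@dataclass
class MDP:
    """A Markov decision process (S, A, T, R, gamma)."""
    states: List[State]
    actions: List[Action]
    transition: Callable[[State, Action], Dict[State, float]]  # P(s' | s, a)
    reward: Callable[[State, Action], float]
    gamma: float = 0.99

# Toy instance: one object that is either on the table (0,) or grasped (1,).
toy = MDP(
    states=[(0,), (1,)],
    actions=["grasp", "noop"],
    transition=lambda s, a: {(1,): 1.0} if a == "grasp" else {s: 1.0},
    reward=lambda s, a: 1.0 if (a == "grasp" and s == (0,)) else 0.0,
)
print(toy.transition((0,), "grasp"))  # {(1,): 1.0}
```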
- Lecture Notes (Geoffrey Hinton)
[Excerpt garbled in the source; the only recoverable fragments are citation-like references: "A Practical Guide to Training Restricted Boltzmann Machines", Lecture Notes in …; "Sparse autoencoder", CS294A lecture notes, Andrew Ng, Stanford University; "Advanced Machine Learning", CSC2535, Toronto, Geoffrey Hinton, Spring 2011; Jimmy Ba, Geoffrey Hinton, Volodymyr Mnih, Joel Z. Leibo, and Catalin Ionescu.]
- CSC321 Lecture 23: Go
CSC321 Lecture 23: Go. Roger Grosse.

Final Exam: Monday, April 24, 7-10 pm (A-O: NR 25; P-Z: ZZ VLAD). Covers all lectures, tutorials, homeworks, and programming assignments; 1/3 from the first half, 2/3 from the second half. If there's a question on this lecture, it will be easy. Emphasis on concepts covered in multiple of the above. Similar in format and difficulty to the midterm, but about 3x longer. Practice exams will be posted.

Overview. Most of the problem domains we've discussed so far were natural application areas for deep learning (e.g. vision, language): we know they can be done on a neural architecture (i.e. the human brain), and the predictions are inherently ambiguous, so we need to find statistical structure. Board games are a classic AI domain which relied heavily on sophisticated search techniques with a little bit of machine learning. Full observations, deterministic environment: why would we need uncertainty? This lecture is about AlphaGo, DeepMind's Go-playing system which took the world by storm in 2016 by defeating the human Go champion Lee Sedol.

Some milestones in computer game playing (see the sketch after this list):
- 1949: Claude Shannon proposes the idea of game tree search, explaining how games could be solved algorithmically in principle
- 1951: Alan Turing writes a chess program that he executes by hand
- 1956: Arthur Samuel writes a program that plays checkers better than he does
- 1968: An algorithm defeats human novices at Go
- 1992: …
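A minimal illustration of Shannon's game-tree-search idea from the milestone list above: exhaustive minimax on a toy take-away game (a sketch of the principle, not AlphaGo's actual search):

```python
def minimax(pile: int, maximizing: bool) -> int:
    """Exhaustive game-tree search for a toy Nim-style game:
    players alternately take 1-3 stones; taking the last stone wins.
    Returns +1 if the original player to move can force a win, else -1."""
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    children = [minimax(pile - take, not maximizing)
                for take in (1, 2, 3) if take <= pile]
    return max(children) if maximizing else min(children)

# Piles that are multiples of 4 are losses for the player to move.
print([minimax(n, True) for n in range(1, 9)])  # [1, 1, 1, -1, 1, 1, 1, -1]
```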
- Self-corrective Apprenticeship Learning for Quadrotor Control
Self-corrective Apprenticeship Learning for Quadrotor Control, by Haoran Yuan. A thesis to obtain the degree of Master of Science at the Delft University of Technology, to be defended publicly on Wednesday December 11, 2019 at 10:00 AM. Student number: 4710118. Project duration: December 12, 2018 - December 11, 2019. Thesis committee: Dr. ir. E. van Kampen (C&S, daily supervisor), Dr. Q. P. Chu (C&S, chairman of committee), Dr. ir. D. Dirkx (A&SM, external examiner), B. Sun, MSc (C&S). An electronic version of this thesis is available at http://repository.tudelft.nl/.

Contents:
1 Introduction (1)
I Scientific Paper (5)
II Literature Review & Preliminary Analysis (23)
2 Literature Review (25)
2.1 Reinforcement Learning Preliminaries (25)
2.2 Temporal-Difference Learning and Q-Learning (26)
2.2.1 Deep Q-Learning (27)
2.3 Policy Gradient Method (28)
2.3.1 Actor-Critic and Deep Deterministic Policy Gradient (30)
2.4 Reinforcement Learning for Flight Control (30)
2.5 Learning from Demonstrations Preliminaries (33)
2.6 Learning from Demonstrations: Mapping (34)
2.6.1 Mapping Methods without Reinforcement Learning (35)
2.6.2 Mapping Methods with Reinforcement Learning (35)
2.7 Learning from Demonstrations: Planning (37)
2.8 Learning from Demonstrations: Model-Learning (38)
2.8.1 Inverse Reinforcement Learning (39)
2.8.2 Apprenticeship Learning through Inverse Reinforcement Learning (40)
2.8.3 Maximum Entropy Inverse Reinforcement Learning (42)
2.8.4 Relative Entropy Inverse Reinforcement Learning (42)
2.9 Conclusion of Literature Review (43)
3 Preliminary Analysis (45)
3.1 Simulation Setup …
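The Q-learning material listed in the contents above rests on the tabular temporal-difference update; a minimal sketch, with a toy two-state chain as the running example (the chain itself is an illustrative assumption, not from the thesis):

```python
import random
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning (TD) update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

# Toy usage: in state 0, action 1 yields reward 1 and moves to a terminal state 1.
Q = defaultdict(float)
for _ in range(200):
    a = random.choice([0, 1])
    r, s_next = (1.0, 1) if a == 1 else (0.0, 0)
    q_learning_update(Q, 0, a, r, s_next, actions=[0, 1])
print(round(Q[(0, 1)], 2))  # approaches 1.0, the value of the rewarding action
```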
- The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches
The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. Md Zahangir Alom, Tarek M. Taha, Chris Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Brian C. Van Essen, Abdul A. S. Awwal, and Vijayan K. Asari.

Abstract. In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly, and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and un-supervised learning. Experimental results show state-of-the-art performance using deep learning when compared to traditional machine learning approaches in the fields of image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bio-informatics, natural language processing (NLP), cybersecurity, and many others.

I. Introduction. Since the 1950s, a small subset of Artificial Intelligence (AI), often called Machine Learning (ML), has revolutionized several fields in the last few decades. Neural Networks (NN) are a subfield of ML, and it was this subfield that spawned Deep Learning (DL). Since its inception, DL has been creating ever larger disruptions, showing outstanding success in almost every application domain. Fig. 1 shows the taxonomy of AI. DL (using either a deep architecture of learning or hierarchical learning approaches) is a class of ML developed largely from 2006 onward. Learning is a procedure consisting of estimating …
- Uncertainty in Deep Learning
References:
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
C. Angermueller and O. Stegle. Multi-task deep neural network to predict CpG methylation profiles from low-coverage sequencing data. In NIPS MLCB workshop, 2015.
O. Anjos, C. Iglesias, F. Peres, J. Martínez, Á. García, and J. Taboada. Neural networks applied to discriminate botanical origin of honeys. Food Chemistry, 175:128-136, 2015.
Christopher Atkeson and Juan Santamaria. A comparison of direct and model-based reinforcement learning. In International Conference on Robotics and Automation. Citeseer, 1997.
Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pages 3084-3092, 2013.
P. Baldi, P. Sadowski, and D. Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5, 2014.
Pierre Baldi and Peter J. Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pages 2814-2822, 2013.
- Mesh-TensorFlow: Deep Learning for Supercomputers
Mesh-TensorFlow: Deep Learning for Supercomputers. Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, Blake Hechtman. Google Brain. {noam, ylc, nikip, trandustin, avaswani, penporn, phawkins, hyouklee, hongm, cliffy, rsepassi, blakehechtman}@google.com

Abstract. Batch-splitting (data-parallelism) is the dominant distributed Deep Neural Network (DNN) training strategy, due to its universal applicability and its amenability to Single-Program-Multiple-Data (SPMD) programming. However, batch-splitting suffers from problems including the inability to train very large models (due to memory constraints), high latency, and inefficiency at small batch sizes. All of these can be solved by more general distribution strategies (model-parallelism). Unfortunately, efficient model-parallel algorithms tend to be complicated to discover, describe, and implement, particularly on large clusters. We introduce Mesh-TensorFlow, a language for specifying a general class of distributed tensor computations. Where data-parallelism can be viewed as splitting tensors and operations along the "batch" dimension, in Mesh-TensorFlow the user can specify any tensor-dimensions to be split across any dimensions of a multi-dimensional mesh of processors. A Mesh-TensorFlow graph compiles into a SPMD program consisting of parallel operations coupled with collective communication primitives such as Allreduce. We use Mesh-TensorFlow to implement an efficient data-parallel, model-parallel version of the Transformer [16] sequence-to-sequence model. Using TPU meshes of up to 512 cores, we train Transformer models with up to 5 billion parameters, surpassing state-of-the-art results on the WMT'14 English-to-French translation task and the one-billion-word language modeling benchmark.
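A plain-NumPy sketch of the idea the abstract describes, splitting different tensor dimensions across the dimensions of a processor mesh; this emulates the SPMD layout on one machine and is not the Mesh-TensorFlow API (shapes and mesh size are illustrative assumptions):

```python
import numpy as np

# A [batch, d_in] x [d_in, d_out] matmul laid out on a 2x2 "mesh":
# the batch dimension is split across mesh rows (data parallelism) and
# the d_out dimension across mesh columns (model parallelism).
batch, d_in, d_out = 8, 4, 6
x = np.random.randn(batch, d_in)
w = np.random.randn(d_in, d_out)

mesh_rows, mesh_cols = 2, 2
x_shards = np.split(x, mesh_rows, axis=0)   # batch-split activations
w_shards = np.split(w, mesh_cols, axis=1)   # output-split weights

# Each of the 4 processors holds one (x_shard, w_shard) pair and computes a
# [batch/2, d_out/2] tile; no communication is needed for this layer because
# the contracted dimension d_in is replicated on every processor.
tiles = [[xs @ ws for ws in w_shards] for xs in x_shards]
y = np.block(tiles)
assert np.allclose(y, x @ w)  # the distributed result matches the full matmul
```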
- Combining Tactical Search and Deep Learning in the Game of Go
Combining tactical search and deep learning in the game of Go. Tristan Cazenave, PSL-Université Paris-Dauphine, LAMSADE CNRS UMR 7243, Paris, France.

Abstract. In this paper we experiment with a Deep Convolutional Neural Network for the game of Go. We show that even if it leads to strong play, it has weaknesses at tactical search. We propose to combine tactical search with Deep Learning to improve Golois, the resulting Go program. A related work is AlphaGo; it combines tactical search with Deep Learning, giving as input to the network the results of ladders. We propose to extend this further to other kinds of tactical search, such as life and death search.

1 Introduction. Deep Learning has been recently used with a lot of success in multiple different artificial intelligence tasks. Elaborated search algorithms have been developed to solve tactical problems in the game of Go, such as capture problems [Cazenave, 2003] or life and death problems [Kishimoto and Müller, 2005]. In this paper we propose to combine tactical search algorithms with deep learning. Other recent works combine symbolic and deep learning approaches, for example in image surveillance systems [Maynord et al., 2016] or in systems that combine reasoning with visual processing [Aditya et al., 2015]. The next section presents our deep learning architecture. The third section presents tactical search in the game of Go. The fourth section details experimental results.

2 Deep Learning. In the design of our network we follow previous work [Maddison et al., 2014; Tian and Zhu, 2015]. Our network is fully …
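A minimal sketch of the combination described above: the result of a tactical (ladder) search is encoded as an extra input plane for the network. The board encoding and the stub search here are illustrative assumptions; a real ladder reader would play out forced capturing sequences:

```python
import numpy as np

def ladder_feature(board, ladder_search):
    """Build an extra input plane marking stones that a tactical (ladder)
    search says can be captured; meant to be stacked with the board planes."""
    plane = np.zeros_like(board, dtype=np.float32)
    for point in zip(*np.nonzero(board)):
        if ladder_search(board, point):
            plane[point] = 1.0
    return plane

# Toy usage with a stub search; here we simply flag stones on the first row.
board = np.zeros((19, 19), dtype=np.int8)
board[0, 3] = board[5, 5] = 1
stub_search = lambda b, p: p[0] == 0
planes = np.stack([board.astype(np.float32),
                   ladder_feature(board, stub_search)])
print(planes.shape)  # (2, 19, 19): board plane + ladder-result plane
```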
- Deep Q-Learning from Demonstrations
The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). Deep Q-Learning from Demonstrations. Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, Gabriel Dulac-Arnold, John Agapiou, Joel Z. Leibo, Audrunas Gruslys. Google DeepMind.

Abstract. Deep reinforcement learning (RL) has achieved several high-profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment.

[Introduction, excerpt] … include deep model-free Q-learning for general Atari game-playing (Mnih et al. 2015), end-to-end policy search for control of robot motors (Levine et al. 2016), model predictive control with embeddings (Watter et al. 2015), and strategic policies that, combined with search, led to defeating a top human expert at the game of Go (Silver et al. 2016). An important part of the success of these approaches has been to leverage the recent contributions to scalability and performance …
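One ingredient of the DQfD method behind this abstract is a large-margin supervised loss that pushes the demonstrated action's Q-value above all alternatives; a minimal NumPy sketch, with an illustrative margin value and toy Q-values (the full method combines this with TD and regularization losses):

```python
import numpy as np

def large_margin_loss(q_values, expert_action, margin=0.8):
    """Large-margin supervised loss on a demonstration step:
    max_a [Q(s,a) + l(a_E, a)] - Q(s, a_E), where l is 0 for the
    expert action a_E and a positive margin for every other action."""
    l = np.full_like(q_values, margin)
    l[expert_action] = 0.0
    return np.max(q_values + l) - q_values[expert_action]

q = np.array([1.0, 2.5, 1.5])  # Q(s, a) for three actions
# 0.0: the expert action's value exceeds all others by at least the margin.
print(large_margin_loss(q, expert_action=1))
```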
- Scaling Laws for Transfer (arXiv:2102.01293v1 [cs.LG], 2 Feb 2021)
Scaling Laws for Transfer. Danny Hernandez, Jared Kaplan, Tom Henighan, Sam McCandlish. OpenAI.

Abstract. We study empirical scaling laws for transfer learning between distributions in an unsupervised, fine-tuning setting. When we train increasingly large neural networks from scratch on a fixed-size dataset, they eventually become data-limited and stop improving in performance (cross-entropy loss). When we do the same for models pre-trained on a large language dataset, the slope in performance gains is merely reduced rather than going to zero. We calculate the effective data "transferred" from pre-training by determining how much data a transformer of the same size would have required to achieve the same loss when training from scratch. In other words, we focus on units of data while holding everything else fixed. We find that the effective data transferred is described well in the low-data regime by a power-law of parameter count and fine-tuning dataset size. We believe the exponents in these power-laws correspond to measures of the generality of a model and proximity of distributions (in a directed rather than symmetric sense). We find that pre-training effectively multiplies the fine-tuning dataset size. Transfer, like overall performance, scales predictably in terms of parameters, data, and compute.

Contents: 1 Introduction (1.1 Key Results; 1.2 Notation); 2 Methods; 3 Results (3.1 Ossification – can pre-training harm performance? …)
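The power-law relationship the abstract describes can be made concrete with a small sketch; the functional form below follows that description (effective data transferred as a power-law in parameter count and fine-tuning dataset size), while the constants are illustrative placeholders, not the paper's fitted values:

```python
def effective_data_transferred(n_params, d_finetune, k=1.0, alpha=0.3, beta=0.4):
    """Effective data transferred from pre-training, modeled as a power-law
    in model size N and fine-tuning dataset size D_F:
        D_T = k * D_F**alpha * N**beta
    k, alpha, and beta are placeholder constants, not fitted values."""
    return k * d_finetune**alpha * n_params**beta

# Pre-training acts like extra fine-tuning data: total ~ D_F + D_T.
d_f = 1e6
d_t = effective_data_transferred(n_params=1e8, d_finetune=d_f)
print(f"effective dataset multiplier: {(d_f + d_t) / d_f:.2f}")  # 1.10 here
```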