Residual Networks for Computer Go
Tristan Cazenave
Recommended publications
Residual Networks for Computer Go
IEEE TCIAIG. Tristan Cazenave, Université Paris-Dauphine, PSL Research University, CNRS, LAMSADE, 75016 Paris, France.
Abstract: Deep Learning for the game of Go recently had a tremendous success with the victory of AlphaGo against Lee Sedol in March 2016. We propose to use residual networks so as to improve the training of a policy network for computer Go. Training is faster than with usual convolutional networks, and residual networks achieve high accuracy on our test set and a 4 dan level.
Index Terms: Deep Learning, Computer Go, Residual Networks.
I. INTRODUCTION
Deep Learning for the game of Go with convolutional neural networks has been addressed by Clark and Storkey [1]. It has been further improved by using larger networks [2]. Learning multiple moves in a row instead of only one move has also been shown to improve the playing strength of Go playing programs that choose moves according to a deep neural network [3]. Then came AlphaGo [4], which combines Monte Carlo Tree Search (MCTS) with a policy and a value network. Deep neural networks are good at recognizing shapes in the game of Go. However, they have weaknesses at tactical search such as ladders and life and death. The way this is handled in AlphaGo is to give the results of ladders as input to the network. Reading ladders is not enough to understand more complex problems that require search, so AlphaGo combines deep networks with MCTS [5]. It learns a value network [...] the last section concludes.
Fig. 1. The usual layer for computer Go (Input → Convolution → ReLU → Output).
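To make the residual-network idea concrete, here is a minimal sketch, assuming PyTorch; the input planes, channel width, and depth are illustrative assumptions, not the architecture trained in the paper. Each block wraps the usual convolution-plus-ReLU layer of Fig. 1 with a skip connection that adds the block input to its output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """The usual conv+ReLU layer of Fig. 1, wrapped with a skip connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        # Residual principle: add the layer input to the layer output.
        return F.relu(x + self.conv(x))

class PolicyNet(nn.Module):
    """Toy policy network: board planes in, one logit per board point out."""
    def __init__(self, in_planes=8, channels=64, blocks=4):
        super().__init__()
        self.stem = nn.Conv2d(in_planes, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                     # x: (batch, in_planes, 19, 19)
        h = self.blocks(F.relu(self.stem(x)))
        return self.head(h).flatten(1)        # logits over the 19x19 points

logits = PolicyNet()(torch.zeros(1, 8, 19, 19))
print(logits.shape)  # torch.Size([1, 361])
```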
Deep Reinforcement Learning: An Overview
Yuxi Li ([email protected]). arXiv:1701.07274v3 [cs.LG], 15 Jul 2017.
Abstract: We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background on machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, in particular the differentiable neural computer (DNC), unsupervised learning, transfer learning, semi-supervised learning, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular AlphaGo, robotics, natural language processing, including dialogue systems (a.k.a. chatbots), machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet. After listing a collection of RL resources, we present a brief summary, and close with discussions.
Contents (excerpt): 1 Introduction; 2 Background; 2.1 Machine Learning; 2.2 Deep Learning; 2.3 Reinforcement Learning; 2.3.1 Problem Setup; 2.3.2 Value Function; 2.3.3 Temporal Difference Learning; 2.3.4 Multi-step Bootstrapping; 2.3.5 Function Approximation; 2.3.6 Policy Optimization; 2.3.7 Deep Reinforcement Learning; 2.3.8 RL Parlance; 2.3.9 Brief Summary
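As a concrete instance of the value-function and temporal-difference ideas listed above, here is a minimal tabular Q-learning sketch in plain Python; the toy chain environment, step cap, and hyperparameters are illustrative assumptions, not anything taken from the survey.

```python
import random
from collections import defaultdict

def q_learning(env_step, n_actions, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        for _ in range(50):                           # cap episode length
            # epsilon-greedy behaviour policy
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda b: Q[(s, b)])
            s2, r, done = env_step(s, a)
            target = r if done else r + gamma * max(Q[(s2, b)] for b in range(n_actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            if done:
                break
            s = s2
    return Q

# Toy 3-state chain: action 1 moves right and earns reward 1 at the end; action 0 resets.
def chain_step(s, a):
    if a == 1:
        return (s + 1, 1.0, True) if s + 1 == 2 else (s + 1, 0.0, False)
    return 0, 0.0, False

Q = q_learning(chain_step, n_actions=2)
print(max(range(2), key=lambda a: Q[(0, a)]))  # greedy action learned for the start state
```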
ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick (Facebook AI Research; Oculus).
Abstract: In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. Using ELF, we implement a highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frames per second (FPS) per core on a laptop. When coupled with modern reinforcement learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, and changes in game parameters, and can host existing C/C++-based game environments like ALE [4]. Using ELF, we thoroughly explore training parameters and show that a network with Leaky ReLU [17] and Batch Normalization [11], coupled with long-horizon training and progressive curriculum, beats the rule-based built-in AI more than 70% of the time in the full game of Mini-RTS. Strong performance is also achieved on the other two games. In game replays, we show our agents learn interesting strategies. ELF, along with its RL platform, is open sourced at https://github.com/facebookresearch/ELF.
1 Introduction
Game environments are commonly used for research in Reinforcement Learning (RL), i.e. [...]
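As an illustration of the Leaky ReLU plus Batch Normalization combination highlighted in the abstract, here is a small PyTorch sketch of a convolutional trunk with policy and value heads; the input planes, channel counts, and action count are assumptions for illustration, not the actual ELF network.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Conv -> BatchNorm -> LeakyReLU, the combination reported to help in Mini-RTS.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1),
    )

class TinyRTSNet(nn.Module):
    def __init__(self, in_ch=22, channels=64, n_actions=9):
        super().__init__()
        self.trunk = nn.Sequential(
            conv_block(in_ch, channels),
            conv_block(channels, channels),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.policy = nn.Linear(channels, n_actions)  # action logits
        self.value = nn.Linear(channels, 1)           # state value

    def forward(self, x):
        h = self.trunk(x)
        return self.policy(h), self.value(h)

pi, v = TinyRTSNet()(torch.zeros(2, 22, 20, 20))
print(pi.shape, v.shape)  # torch.Size([2, 9]) torch.Size([2, 1])
```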
Improved Policy Networks for Computer Go
Tristan Cazenave, Université Paris-Dauphine, PSL Research University, CNRS, LAMSADE, Paris, France.
Abstract: Golois uses residual policy networks to play Go. Two improvements to these residual policy networks are proposed and tested. The first one is to use three output planes. The second one is to add Spatial Batch Normalization.
1 Introduction
Deep Learning for the game of Go with convolutional neural networks has been addressed by [2]. It has been further improved using larger networks [7, 10]. AlphaGo [9] combines Monte Carlo Tree Search with a policy and a value network. Residual networks improve the training of very deep networks [4]. These networks can gain accuracy from considerably increased depth. On the ImageNet dataset, a 152-layer network achieves 3.57% error; it won 1st place on the ILSVRC 2015 classification task. The principle of residual nets is to add the input of the layer to the output of each layer. With this simple modification, training is faster and enables deeper networks. Residual networks were recently successfully adapted to computer Go [1]. As a follow-up to this paper, we propose improvements to residual networks for computer Go. The second section details the proposed improvements to policy networks for computer Go, the third section gives experimental results, and the last section concludes.
2 Proposed Improvements
We propose two improvements for policy networks. The first improvement is to use multiple output planes as in DarkForest. The second improvement is to use Spatial Batch Normalization.
2.1 Multiple Output Planes
In DarkForest [10], training with multiple output planes containing the next three moves to play has been shown to improve the level of play of a usual policy network with 13 layers.
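A minimal PyTorch sketch of the two proposed improvements, where nn.BatchNorm2d stands in for Spatial Batch Normalization and the head emits three 19x19 planes, one per predicted future move; channel counts and the input encoding are illustrative assumptions, not the Golois architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSBNBlock(nn.Module):
    """Residual block with Spatial Batch Normalization (BatchNorm2d) after each conv."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        h = F.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return F.relu(x + h)                     # add the block input to its output

class ThreePlanePolicy(nn.Module):
    """Policy network predicting the next three moves as three output planes."""
    def __init__(self, in_planes=8, channels=64, blocks=6):
        super().__init__()
        self.stem = nn.Conv2d(in_planes, channels, 3, padding=1)
        self.body = nn.Sequential(*[ResidualSBNBlock(channels) for _ in range(blocks)])
        self.head = nn.Conv2d(channels, 3, 1)    # 3 planes: move t, t+1, t+2

    def forward(self, x):                         # x: (batch, in_planes, 19, 19)
        return self.head(self.body(F.relu(self.stem(x))))

out = ThreePlanePolicy()(torch.zeros(1, 8, 19, 19))
print(out.shape)  # torch.Size([1, 3, 19, 19])
```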
CSC321 Lecture 23: Go
Roger Grosse (lecture slides).
Final Exam: Monday, April 24, 7-10pm (A-O: NR 25; P-Z: ZZ VLAD). Covers all lectures, tutorials, homeworks, and programming assignments; 1/3 from the first half, 2/3 from the second half. If there's a question on this lecture, it will be easy. Emphasis on concepts covered in multiple of the above. Similar in format and difficulty to the midterm, but about 3x longer. Practice exams will be posted.
Overview: Most of the problem domains we've discussed so far were natural application areas for deep learning (e.g. vision, language). We know they can be done on a neural architecture (i.e. the human brain). The predictions are inherently ambiguous, so we need to find statistical structure. Board games are a classic AI domain which relied heavily on sophisticated search techniques with a little bit of machine learning. Full observations, deterministic environment: why would we need uncertainty? This lecture is about AlphaGo, DeepMind's Go playing system which took the world by storm in 2016 by defeating the human Go champion Lee Sedol.
Overview: Some milestones in computer game playing:
1949: Claude Shannon proposes the idea of game tree search, explaining how games could be solved algorithmically in principle
1951: Alan Turing writes a chess program that he executes by hand
1956: Arthur Samuel writes a program that plays checkers better than he does
1968: An algorithm defeats human novices at Go
1992 [...]
FML-Based Dynamic Assessment Agent for Human-Machine Cooperative System on Game of Go
Accepted for publication in the International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, July 2017.
Chang-Shing Lee, Mei-Hui Wang, Sheng-Chi Yang (Department of Computer Science and Information Engineering, National University of Tainan, Taiwan); Pi-Hsia Hung, Su-Wei Lin (Department of Education, National University of Tainan, Taiwan); Nan Shuo, Naoyuki Kubota (Dept. of System Design, Tokyo Metropolitan University, Japan); Chun-Hsun Chou, Ping-Chiang Chou (Haifong Weiqi Academy, Taiwan); Chia-Hsiu Kao (Department of Computer Science and Information Engineering, National University of Tainan, Taiwan).
Abstract: In this paper, we demonstrate the application of Fuzzy Markup Language (FML) to construct an FML-based Dynamic Assessment Agent (FDAA), and we present an FML-based Human-Machine Cooperative System (FHMCS) for the game of Go. The proposed FDAA comprises an intelligent decision-making and learning mechanism, an intelligent game bot, a proximal development agent, and an intelligent agent. The intelligent game bot is based on the open-source code of Facebook's Darkforest, and it features a representational state transfer application programming interface mechanism. The proximal development agent contains a dynamic assessment mechanism, a GoSocket mechanism, and an FML engine with a fuzzy knowledge base and rule base.
Move Prediction in Gomoku Using Deep Learning
Youth Academic Annual Conference of Chinese Association of Automation, Wuhan, China, November.
Kun Shao, Dongbin Zhao, Zhentao Tang, and Yuanheng Zhu, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
Abstract: The Gomoku board game is a longstanding challenge for artificial intelligence research. With the development of deep learning, move prediction can help to promote the intelligence of board game agents, as proven in AlphaGo. Following this idea, we train deep convolutional neural networks by supervised learning to predict the moves made by expert Gomoku players from the RenjuNet dataset. We put forward a number of deep neural networks with different architectures and different hyperparameters to solve this problem. With only the board state as the input, the proposed deep convolutional neural networks are able to recognize some special features of Gomoku and select the most likely next move. The final neural network achieves a move-prediction accuracy of about 42% on the RenjuNet dataset, which reaches the level of expert Gomoku players. In addition, it is promising to generate strong human-level Gomoku agents with the move prediction as a guide.
Keywords: Gomoku; move prediction; deep learning; deep convolutional network
Fig. 1. Example of positions from a game of Gomoku after 58 moves have passed. It can be seen that White wins the game at last.
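A minimal sketch of this kind of supervised move prediction, assuming PyTorch; the 15x15 board, three input planes, network width, and dummy training batch are illustrative assumptions rather than the architectures evaluated in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GomokuMovePredictor(nn.Module):
    """Board planes in, one logit for each of the 15x15 intersections out."""
    def __init__(self, in_planes=3, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_planes, channels, 5, padding=2)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv3 = nn.Conv2d(channels, 1, 1)    # one logit per intersection

    def forward(self, x):                          # x: (batch, 3, 15, 15)
        h = F.relu(self.conv1(x))
        h = F.relu(self.conv2(h))
        return self.conv3(h).flatten(1)            # (batch, 225) move logits

# One supervised training step against the expert move index (cross-entropy).
model = GomokuMovePredictor()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
boards = torch.zeros(8, 3, 15, 15)                 # dummy batch of positions
expert_moves = torch.randint(0, 225, (8,))         # dummy expert move labels
loss = F.cross_entropy(model(boards), expert_moves)
loss.backward()
optim.step()
print(float(loss))
```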
Achieving Master Level Play in 9x9 Computer Go
Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008).
Sylvain Gelly (Univ. Paris Sud, LRI, CNRS, INRIA, France) and David Silver (University of Alberta, Edmonton, Alberta, Canada).
Abstract: The UCT algorithm uses Monte-Carlo simulation to estimate the value of states in a search tree from the current state. However, the first time a state is encountered, UCT has no knowledge, and is unable to generalise from previous experience. We describe two extensions that address these weaknesses. Our first algorithm, heuristic UCT, incorporates prior knowledge in the form of a value function. The value function can be learned offline, using a linear combination of a million binary features, with weights trained by temporal-difference learning. Our second algorithm, UCT-RAVE, forms a rapid online generalisation based on the value of moves. We applied our algorithms to the domain of 9x9 Computer Go [...]
[...] simulated, using self-play, starting from the current position. Each position in the search tree is evaluated by the average outcome of all simulated games that pass through that position. The search tree is used to guide simulations along promising paths. This results in a highly selective search that is grounded in simulated experience, rather than an external heuristic. Programs using UCT search have outperformed all previous Computer Go programs (Coulom 2006; Gelly et al. 2006). Monte-Carlo tree search algorithms suffer from two sources of inefficiency. First, when a position is encountered for the first time, no knowledge is available to guide the search.
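To illustrate the flavour of tree-search selection the abstract builds on, here is a small Python sketch of UCB-style child selection with a heuristic prior; the constants, prior-weighting scheme, and node structure are illustrative assumptions, not the exact heuristic UCT or UCT-RAVE formulas of the paper.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float = 0.5           # heuristic value estimate used before any visits
    visits: int = 0
    total_value: float = 0.0
    children: dict = field(default_factory=dict)    # move -> Node

    def mean_value(self, prior_weight=10):
        # Blend the offline prior with simulated outcomes; the prior acts like
        # `prior_weight` virtual games and washes out as real visits accumulate.
        return ((self.prior * prior_weight + self.total_value)
                / (prior_weight + self.visits))

def select_child(node, c=0.7):
    """Pick the child maximizing mean value plus a UCB exploration bonus."""
    log_n = math.log(node.visits + 1)
    def ucb(child):
        bonus = c * math.sqrt(log_n / (child.visits + 1))
        return child.mean_value() + bonus
    move, _child = max(node.children.items(), key=lambda kv: ucb(kv[1]))
    return move

# Toy usage: two candidate moves, one visited more and with a better prior.
root = Node(visits=20, children={"A": Node(prior=0.6, visits=8, total_value=5.0),
                                 "B": Node(prior=0.4, visits=2, total_value=1.0)})
print(select_child(root))
```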
Computer Go: from the Beginnings to AlphaGo (Martin Müller, University of Alberta)
Lecture slides, 2017.
Outline of the Talk: ✤ Game of Go ✤ Short history - Computer Go from the beginnings to AlphaGo ✤ The science behind AlphaGo ✤ The legacy of AlphaGo
The Game of Go: ✤ Classic two-player board game ✤ Invented in China thousands of years ago ✤ Simple rules, complex strategy ✤ Played by millions ✤ Hundreds of top experts - professional players ✤ Until 2016, computers weaker than humans
Go Rules: ✤ Start with empty board ✤ Place stone of your own color ✤ Goal: surround empty points or opponent - capture ✤ Win: control more than half the board (final score, 9x9 board) ✤ Komi: first player advantage
Measuring Go Strength: ✤ People in Europe and America use the traditional Japanese ranking system ✤ Kyu (student) and Dan (master) levels ✤ Separate Dan ranks for professional players ✤ Kyu grades go down from 30 (absolute beginner) to 1 (best) ✤ Dan grades go up from 1 (weakest) to about 6 ✤ There is also a numerical (Elo) system, e.g. 2500 = 5 Dan
Short History of Computer Go
Computer Go History - Beginnings: ✤ 1960's: initial ideas, designs on paper ✤ 1970's: first serious program - Reitman & Wilcox ✤ Interviews with strong human players ✤ Try to build a model of human decision-making ✤ Level: "advanced beginner", 15-20 kyu ✤ One game costs thousands of dollars in computer time
1980-89 The Arrival of PC: ✤ From 1980: PC (personal computers) arrive ✤ Many people get cheap access to computers ✤ Many start writing Go programs ✤ First competitions, Computer Olympiad, Ing Cup ✤ Level 10-15 kyu
1990-2005: Slow Progress: ✤ Slow progress, commercial successes ✤ 1990 Ing Cup in Beijing ✤ 1993 Ing Cup in Chengdu ✤ Top programs Handtalk (Prof. [...]
Reinforcement Learning of Local Shape in the Game of Go
David Silver, Richard Sutton, and Martin Müller, Department of Computing Science, University of Alberta, Edmonton, Canada T6G 2E8. {silver, sutton, mmueller}@cs.ualberta.ca
Abstract: We explore an application to the game of Go of a reinforcement learning approach based on a linear evaluation function and large numbers of binary features. This strategy has proved effective in game playing programs and other reinforcement learning applications. We apply this strategy to Go by creating over a million features based on templates for small fragments of the board, and then use temporal difference learning and self-play. This method identifies hundreds of low level shapes with recognisable significance to expert Go players, and provides quantitative estimates of their values. We analyse the relative contributions to performance of templates of different types and sizes.
[...] effective. They are fast to compute; easy to interpret, modify and debug; and they have good convergence properties. Secondly, weights are trained by temporal difference learning and self-play. The world champion Checkers program Chinook was hand-tuned by expert players over 5 years. When weights were trained instead by self-play using a temporal difference learning algorithm, the program equalled the performance of the original version [7]. A similar approach attained master level play in Chess [1]. TD-Gammon achieved world class Backgammon performance after training by TD(0) and self-play [13]. A program trained by TD(λ) and self-play outperformed an expert, hand-tuned version at the card game Hearts [11]. Experience generated by self-play was also used to train the weights of the world champion Othello and Scrabble programs, using least squares [...]
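A minimal Python sketch of a linear evaluation over binary features trained by TD(0); the placeholder feature extractor, learning rate, and sigmoid squashing are illustrative assumptions, not the shape templates or exact update of the paper.

```python
import numpy as np

N_FEATURES = 1_000_000   # e.g. one weight per local shape template (illustrative)
weights = np.zeros(N_FEATURES)

def active_features(position):
    """Placeholder: indices of the binary shape features matched by `position`."""
    base = abs(hash(str(position))) % N_FEATURES
    return (base + np.arange(8)) % N_FEATURES

def value(position):
    # Linear evaluation: sum of the weights of the active binary features,
    # squashed to a win-probability-like range.
    return 1.0 / (1.0 + np.exp(-weights[active_features(position)].sum()))

def td0_update(position, next_position, reward=0.0, alpha=0.1):
    """TD(0): move V(s) toward reward + V(s'); semi-gradient on the active features."""
    v, v_next = value(position), value(next_position)
    delta = reward + v_next - v
    weights[active_features(position)] += alpha * delta * v * (1 - v)

td0_update(position=[(3, 3)], next_position=[(3, 3), (5, 5)], reward=1.0)
print(value([(3, 3)]))
```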
ELF Opengo: an Analysis and Open Reimplementation of Alphazero
Yuandong Tian, Jerry Ma*, Qucheng Gong*, Shubho Sengupta*, Zhuoyuan Chen, James Pinkerton, C. Lawrence Zitnick.
Abstract: The AlphaGo, AlphaGo Zero, and AlphaZero series of algorithms are remarkable demonstrations of deep reinforcement learning's capabilities, achieving superhuman performance in the complex game of Go with progressively increasing autonomy. However, many obstacles remain in the understanding of and usability of these promising approaches by the research community. Toward elucidating unresolved mysteries and facilitating future research, we propose ELF OpenGo, an open-source reimplementation of the AlphaZero algorithm. ELF OpenGo is the first open-source Go AI to convincingly demonstrate superhuman performance with a perfect (20:0) record against global top professionals. We apply ELF OpenGo to conduct extensive ablation studies, and to identify and analyze numerous interesting phenomena in both the model training [...]
[...] However, these advances in playing ability come at significant computational expense. A single training run requires millions of selfplay games and days of training on thousands of TPUs, which is an unattainable level of compute for the majority of the research community. When combined with the unavailability of code and models, the result is that the approach is very difficult, if not impossible, to reproduce, study, improve upon, and extend. In this paper, we propose ELF OpenGo, an open-source reimplementation of the AlphaZero (Silver et al., 2018) algorithm for the game of Go. We then apply ELF OpenGo toward the following three additional contributions. First, we train a superhuman model for ELF OpenGo. After running our AlphaZero-style training software on 2,000 GPUs for 9 days, our 20-block model has achieved superhuman performance that is arguably comparable to the 20-block models described in Silver et al. (2017) and Silver et al. (2018).
AI-FML Agent for Robotic Game of Go and AIoT Real-World Co-Learning Applications
Chang-Shing Lee, Yi-Lin Tsai, Mei-Hui Wang, Wen-Kai Kuan, Zong-Han Ciou (Dept. of Computer Science and Information Engineering, National University of Tainan, Tainan, Taiwan); Naoyuki Kubota (Dept. of Mechanical Systems Engineering, Tokyo Metropolitan University, Tokyo, Japan).
Abstract: In this paper, we propose an AI-FML agent for robotic game of Go and AIoT real-world co-learning applications. The fuzzy machine learning mechanisms are adopted in the proposed model, including fuzzy markup language (FML)-based genetic learning (GFML), eXtreme Gradient Boost (XGBoost), and a seven-layered deep fuzzy neural network (DFNN) with backpropagation learning, to predict the win rate of the game of Go as Black or White. This paper uses the Google AlphaGo Master sixty games as the dataset to evaluate the performance of the fuzzy machine learning, and the desired output dataset was predicted by the Facebook AI Research (FAIR) ELF Open Go AI bot. In addition, we use the IEEE 1855 standard for FML to describe the [...]
[...] IEEE CEC 2019 and FUZZ-IEEE 2019. The goal of the competition is to understand the basic concepts of an FML-based fuzzy inference system, to use the FML intelligent decision tool to establish the knowledge base and rule base of the fuzzy inference system, and to optimize the FML knowledge base and rule base through the methodologies of evolutionary computation and machine learning [2]. Machine learning has been a hot topic in research and industry and new methodologies keep developing all the time. Regression, ensemble methods, and deep learning are important machine learning methods for data scientists [10].
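As a small illustration of one of the named mechanisms, here is a sketch using the XGBoost scikit-learn wrapper (XGBRegressor) to regress a win rate from per-position features; the synthetic data, feature meaning, and hyperparameters are assumptions for illustration and do not reproduce the paper's pipeline.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)

# Hypothetical per-move features (e.g. move number, player to move, simple board
# statistics); the target is a win rate in [0, 1] for the player to move.
X = rng.random((200, 5))
y = np.clip(0.5 + 0.3 * (X[:, 0] - 0.5) + 0.1 * rng.standard_normal(200), 0, 1)

model = XGBRegressor(n_estimators=100, max_depth=4, learning_rate=0.1)
model.fit(X[:150], y[:150])                    # train on the first 150 positions

pred = model.predict(X[150:])                  # predicted win rates on held-out positions
print(float(np.abs(pred - y[150:]).mean()))    # mean absolute error
```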