How Does a General-Purpose Neural Network with No Domain Knowledge Operate As Opposed to a Domain-Specific Adapted Chess Engine?


DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2020

How does a general-purpose neural network with no domain knowledge operate as opposed to a domain-specific adapted chess engine?

ISHAQ ALI JAVID
KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT

Abstract—This report examines how a general-purpose neural network (LC0) operates compared to the domain-specific adapted chess engine Stockfish. Specifically, it examines the depth and the total number of simulations per move, and investigates how moves are selected. The conclusion was that Stockfish searches and evaluates a significantly larger number of positions than LC0. Moreover, Stockfish analyses every possible move at a rather great depth. By contrast, LC0 selects moves sensibly and explores a few moves at greater depth. Consequently, the argument can be made that a general-purpose neural network can conserve resources and calculation time, which could serve us towards sustainability. However, training the neural network is not very environmentally friendly. Therefore, stakeholders should seek collaboration and pursue a general-purpose approach that could solve problems in many fields.

Sammanfattning—This report is about how a general-purpose neural network (LC0) that plays chess operates compared with the domain-specific adapted chess engine Stockfish. Specifically, it examines the depth and the total number of simulations per move in order to understand how moves are selected and evaluated. The conclusion was that Stockfish searches and evaluates significantly more positions than LC0. Furthermore, Stockfish consumed more resources, roughly seven times more electricity. An argument was made that a general-purpose neural network has the potential to save resources and help us towards a sustainable society. However, training the neural networks costs considerable resources, and we should therefore collaborate to avoid unnecessary training runs and learn from each other's mistakes. Finally, we must strive for a general-purpose neural network that can solve many problems in several fields.

I. INTRODUCTION

CHESS is a two-player strategy game that has been played and analyzed for over a thousand years. The game involves no hidden information, i.e. everything that happens in the game is evident to both players, and skill alone decides the game. In theory, the result of a game of chess under optimal play is a draw [1].

In most states of a chess game there are various possible moves, each move can be answered by numerous reasonable moves, and as the process continues the move variations grow exponentially. Therefore, it is very challenging to always find the best moves, even for computers. In the early 1990s, computers could not beat the top-level chess players, since it was unmanageable to calculate all the states and combinations efficiently. The IBM computer Deep Blue was the first engine to beat a reigning world chess champion when it defeated Garry Kasparov in 1997 [2].

Computer chess has advanced greatly in the past decades and is now well beyond the best human players. Most engines use sophisticated search techniques, domain-specific adaptation, and handcrafted evaluation functions that have been refined by human experts over the decades [3]. Stockfish is an example that has been one of the strongest chess engines in the past decade. It has won the most Top Chess Engine Championship (TCEC) titles in recent years and was considered the best chess engine [4].

Stockfish is a rule-based chess engine with a "brute force" strategy based on numerical calculations and deep searches of the positions. Stockfish analyses every legal move in a state of the game at a great depth. This strategy was described as an inefficient way of playing chess by Shannon [5]. Shannon suggested a more humanlike approach to searching. A decent human chess player, given a "quiet" position (not in check and with no piece about to be captured), considers only a few of the possible moves and searches to a depth of 1-4, while grandmasters search to a depth of 10-25 in forcing variations. Shannon's idea was that the machine should evaluate positions based on consistent interpretation and search sensibly, i.e. search a few promising paths rather than perform "brute force" calculation.

Stockfish has managed to calculate and search a tremendous number of positions rather efficiently; it can search 60 million positions per second when competing at TCEC [4]. With modern computers it is now possible to calculate many positions efficiently in the game of chess. However, other games such as Shogi and Go are far more complex than chess. In the game of Go especially, the number of possible positions that can arise from a given state grows significantly faster than in chess. With today's technology it is not possible to calculate the positions deep enough to achieve high-level play. In 2016 Google's Deepmind developed a neural network named AlphaGo that could outperform an expert-level player of Go. It was the first time that an engine could outperform an expert human Go player. Deepmind trained the neural network on the games of expert human players. Later, they challenged Lee Sedol, who had 18 international titles and was considered by many as one of the best Go players of all time. AlphaGo defeated Lee Sedol 4-1, and the network later received the name AlphaGo Lee [3].

In the following year, Deepmind took a more general approach: they built a general neural network that masters the games of Go, chess, and Shogi by self-play, using the same algorithm and network architecture for all three games. They built a general-purpose neural network that had no domain knowledge except the rules of the game. The network was trained starting from random play and then learning and improving through self-play.

Shannon aspired to more general machines that could solve many problems through reasoning and sensibility. He explained that machines should be able to take other inputs such as mathematical expressions, chess positions, words, etc. rather than plain numbers, using a method developed by trial and error rather than a strict computing process. In addition, the machines should learn from their mistakes [5].

Shannon's aspired approach was implemented by Deepmind in some ways. Deepmind's approach was to combine a general-purpose reinforcement learning algorithm with a general-purpose tree search algorithm. They built the neural network, and the network got the name AlphaGo Zero (AlphaZero in chess). The general neural network outperformed all other engines in all three fields. AlphaGo Zero outperformed AlphaGo Lee with a score of 100-0, and AlphaZero outperformed Stockfish in 100 games with a score of 28 wins, 0 losses, and 72 draws [6].

AlphaZero is owned by Deepmind and is not available to others. However, they published the pseudo-code [6]. Programmers then created a new chess engine based on AlphaZero called Leela Chess Zero (LC0), which is open source and available for experiments. LC0 has become one of the strongest chess engines right now; it defeated Stockfish in the latest TCEC to become the champion.

A general game-playing system has been a long-standing ambition in artificial intelligence. If a general-purpose neural network can play highly complex games such as Go and chess beyond the superhuman level, then perhaps we are close to fulfilling that ambition.

Most machine learning research is too focused on specific algorithms and is implemented in specific areas [7]. A general approach is desirable to implement in different parts of life, including healthcare, manufacturing, education, financial modeling, policing, and marketing. Such an approach could also lead to a more evidence-based decision-making process [8].

II. AIM

A. What is the purpose of the study?

The study is divided into two parts, section x and section y.

Section x: In this part, the focus is to compare the approaches and algorithms of Stockfish and LC0: specifically, how they evaluate each position and how the search is conducted, mainly because these are the most challenging aspects of a chess program. It is interesting to analyze how the general-purpose neural network manages these challenges compared to a rule-based engine.

Section y: In this part, we use the results from section x to evaluate the costs and benefits of the general-purpose neural network from an environmental viewpoint, i.e. from society's perspective, are the gains worth the expense of the training?

B. What is NOT the purpose of the study?

Which engine is better? The performance of the engines is heavily reliant on the hardware they run on. Thus, for a comparison of performance, we compare their results in TCEC, where optimal hardware and environment can be assumed [4].

…parameter tuning. CLOP is an approach to local regression and is used to optimize the evaluation parameters. It has been argued that when the function to be optimized is smooth, this method outperforms all other tested algorithms [13].

2) How LC0 evaluates a position: LC0 and AlphaZero evaluate each state with the neural network. The network takes board positions with features as input and outputs two vectors p and v (1). The vector p (2) is the probability of moves that an expert-level player would make given the state (in the training process the neural network also learns from the moves it is analyzing, develops a probability distribution over the moves that lead to good results, and then treats these as "expert player moves") [11]. The vector v is the estimated value of the moves. If the expected outcome is z, then the approximate value of the position is (3).

s = board position with features
v = a vector of values
p = a vector of move probabilities
a = next move, given the position

(p, v) = f(s)    (1)

p_a = p(a | s)    (2)
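To make equations (1) and (2) concrete, the minimal Python sketch below shows a toy network f that maps a board-state vector s to a move-probability vector p and a scalar value v, and then keeps only the most promising moves, illustrating the "search sensibly" idea. This is an illustration under assumed conditions, not LC0's actual network or code: the layer shapes, feature encoding, random stand-in weights, and the helper select_promising_moves are hypothetical choices made only for this example.

import numpy as np

def f(s: np.ndarray, theta: dict) -> tuple[np.ndarray, float]:
    """(p, v) = f(s): toy two-layer network standing in for LC0's net."""
    h = np.tanh(theta["W1"] @ s)                 # hidden features of the position
    logits = theta["W2"] @ h                     # one logit per candidate move
    p = np.exp(logits) / np.exp(logits).sum()    # p_a = p(a | s), cf. eq. (2)
    v = float(np.tanh(theta["w_v"] @ h))         # estimated value of the position
    return p, v

def select_promising_moves(p: np.ndarray, k: int = 3) -> np.ndarray:
    """Search 'sensibly': keep only the k moves with the highest prior."""
    return np.argsort(p)[::-1][:k]

# Usage with random stand-in weights and an 8x8x12 one-hot board encoding.
rng = np.random.default_rng(0)
n_features, n_moves, n_hidden = 8 * 8 * 12, 20, 64
theta = {
    "W1": rng.normal(size=(n_hidden, n_features)) * 0.01,
    "W2": rng.normal(size=(n_moves, n_hidden)) * 0.01,
    "w_v": rng.normal(size=n_hidden) * 0.01,
}
s = rng.integers(0, 2, size=n_features).astype(float)
p, v = f(s, theta)
print("top moves to explore:", select_promising_moves(p), "value:", v)

In a real engine the prior p and value v would then guide a Monte-Carlo tree search, so that only the few moves with high prior probability are explored deeply, in contrast to Stockfish's full-width deep search of every legal move.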