Connect6 Opening Leveraging AlphaZero Algorithm and Job-Level Computing

Total Pages: 16

File Type: PDF, Size: 1020 KB

Connect6 Opening Leveraging AlphaZero Algorithm and Job-Level Computing
4N4-IS-1c-05, The 35th Annual Conference of the Japanese Society for Artificial Intelligence, 2021

Shao-Xiong Zheng*1,2, Wei-Yuan Hsu*1,2, Kuo-Chan Huang*3, I-Chen Wu*1,2,4
*1 Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
*2 Research Center for IT Innovation, Academia Sinica, Taiwan
*3 Department of Computer Science, National Taichung University of Education, Taichung, Taiwan
*4 Pervasive Artificial Intelligence Research (PAIR) Labs, Taiwan

For most board games, players commonly learn to increase their strength by following the opening moves played by experts, usually in the first stage of play. In the past, many efforts have been made to use game-specific knowledge to construct opening books. Recently, DeepMind developed AlphaZero (2017), which can master game playing without domain knowledge. In this paper, we present an approach based on AlphaZero to constructing an opening book. To demonstrate the approach, we use a Connect6 program trained based on AlphaZero to evaluate positions, and then expand the opening game tree with a job-level computing algorithm called JL-UCT (job-level Upper Confidence Tree), developed by Wu et al. (2013) and Wei et al. (2015). In our experiments, the strength of the Connect6 program using this opening book is significantly improved: the version with the opening book has a win rate of 65% against the version without it. In addition, the version without the opening book lost to Polygames in the Connect6 tournament of the TCGA 2020 competitions, while the version with the opening book won against Polygames in the TAAI and Computer Olympiad competitions later in 2020.

1. Introduction

Opening book construction is an important research topic for increasing the strength of game-playing programs [Wei 2015]. Opening books of a strategy game are databases containing good actions at the opening stage of the game. An opening book can bring an especially significant advantage in time-limited game competitions, because a lot of search and computation time can be saved.

In the past, many efforts have been made to use game-specific knowledge to construct opening books, including analyzing opening moves made by top players and using programs that implement domain-specific algorithms to suggest opening moves. These methods may encounter two problems: the quality of the opening book depends on human knowledge of the game, and a method successful in one game might not be applicable to another game.

To solve these two issues, this paper proposes an approach based on AlphaZero [Silver 2018] to constructing a high-quality opening book without domain knowledge.

The AlphaZero algorithm [Silver 2018], developed by DeepMind, demonstrates the capability of reinforcement learning to master game playing without domain knowledge. In our opening book construction approach, a program trained based on AlphaZero is used to evaluate positions while the opening game tree is expanded. For game tree expansion, we use the Job-Level Upper Confidence Tree (JL-UCT) distributed algorithm [Wei 2015] to explore the game tree and select opening positions to evaluate. The evaluation data on the opening game tree are then collected and converted into an opening book.
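To make the final step concrete, the following Python sketch shows one way the collected evaluation data could be turned into a lookup-table opening book and consulted at game time. It is only an illustration of the idea described above, not the authors' implementation; the node attributes (children, visits, position_key) and the fallback_search function are assumptions.

```python
# Illustrative sketch (not the authors' code): converting an expanded opening
# game tree into a lookup-table opening book and consulting it during a game.

def extract_opening_book(root, min_visits=50):
    """Walk the expanded opening tree and, for every sufficiently visited
    position, keep the action whose child node was visited most often."""
    book = {}
    stack = [root]
    while stack:
        node = stack.pop()
        if node.children and node.visits >= min_visits:
            best_action, _ = max(node.children.items(),
                                 key=lambda item: item[1].visits)
            book[node.position_key()] = best_action
            stack.extend(node.children.values())
    return book

def choose_move(position, book, fallback_search):
    """Play from the book while the position is covered; otherwise fall back
    to the normal, more time-consuming search."""
    action = book.get(position.position_key())
    return action if action is not None else fallback_search(position)
```

A minimum visit count is used so that only positions with reasonably reliable statistics are trusted during competition play; the threshold shown here is arbitrary.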
To demonstrate the feasibility of our approach, we used the proposed approach to construct a Connect6 opening book and assessed its quality by comparing the strength of our two Connect6 programs, with and without the opening book respectively. It turns out that the opening book significantly improves the strength of our Connect6 program: in our experiment, the version with the opening book has a win rate of 65% against the version without it. The opening book also helped our Connect6 program achieve higher rankings in real tournaments. Our original program without the opening book lost to Polygames [Cazenave 2020] in the Connect6 tournament of the TCGA 2020 competitions, while the new one with the opening book defeated Polygames and won the gold medal in both the TAAI and Computer Olympiad competitions later in 2020.

The remainder of this paper is organized as follows. Section 2 presents the necessary background knowledge on MCTS, AlphaZero, and job-level computing, and discusses related work on opening book construction. In Section 3, we describe our approach to constructing a Connect6 opening book and evaluate the performance of the opening book. Section 4 concludes this paper.

Contact: I-Chen Wu, Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, +886-3-5731855, +886-3-5733777, [email protected].

2. Background and Related Work

2.1 MCTS

Monte-Carlo tree search (MCTS) is a decision-making algorithm based on Monte Carlo evaluation and best-first search, typically used in turn-based games [Chaslot 2008]. The algorithm repeatedly simulates the possible consequences of each action, in such a way that promising actions, selected based on the current simulation results, are given additional simulations. Each iteration of MCTS consists of the following four stages.

Selection: Starting at the root node, a selection policy is recursively applied to choose a child for each visited node until a leaf node is reached. A key issue in this stage is the balance between exploration and exploitation. A commonly used selection policy first evaluates the UCT value of each child $i$ with the following formula:

$UCT_i = x_i + C \times \sqrt{\dfrac{\log N}{N_i}}$    (1)

where $x_i$ is the win rate of child $i$, $N$ and $N_i$ are the visit counts of the node and its child $i$ respectively, and $C$ is a coefficient. MCTS is inclined to explore for larger $C$ and tends to exploit for smaller $C$. The policy then selects the child with the maximum $UCT_i$. This allows MCTS to converge to the optimal decision after a sufficiently large number of simulations [Browne 2012].

Expansion: The tree is expanded by adding one or more child nodes to the selected leaf node, according to the available actions at the state represented by that node.

Simulation: Simulations are run from the new node(s) by taking a series of random actions, or actions chosen by a default policy, until outcomes are obtained at terminal states.

Backpropagation: The simulation results are used to update the UCT values of all ancestors.

The four MCTS stages are applied repeatedly until a predefined time or iteration constraint is reached.
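As a hedged illustration of formula (1), and of the AlphaZero-style variant discussed in Section 2.2 below, the following Python sketch computes both selection scores; the exploration constants and function names are assumptions for illustration, not values taken from the paper.

```python
import math

def uct_value(win_rate, parent_visits, child_visits, c=1.0):
    """UCT value of a child as in formula (1): the exploitation term (win rate)
    plus an exploration term that shrinks as the child is visited more often."""
    if child_visits == 0:
        return float("inf")  # unvisited children are tried first
    return win_rate + c * math.sqrt(math.log(parent_visits) / child_visits)

def puct_value(q_value, prior, parent_visits, child_visits, c=1.5):
    """AlphaZero-style selection score (see Section 2.2): the exploration term
    is additionally weighted by the policy network's prior for the action."""
    return q_value + c * prior * math.sqrt(parent_visits) / (1 + child_visits)

def select_child(children, parent_visits, c=1.0):
    """Selection step: return the index of the child with the maximum UCT
    value; `children` is assumed to be a list of (win_rate, visits) pairs."""
    return max(range(len(children)),
               key=lambda i: uct_value(children[i][0], parent_visits,
                                       children[i][1], c))
```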
2.2 AlphaZero

AlphaZero is an algorithm that allows programs to learn to master a game without human knowledge. The algorithm combines Monte Carlo tree search and deep neural networks in a reinforcement learning framework, described as follows. During self-play, an MCTS-based program plays against itself to generate game records, and a deep neural network is trained on the game outcomes as well as the probability distributions of the actions chosen by MCTS. Note that the selection in the MCTS of AlphaZero slightly modifies formula (1) by weighting the second term with the probability provided by the network policy, as described in greater detail in [Silver 2018].

AlphaZero trains programs without using game-specific knowledge, so it can be applied generally to training programs for many other games or applications, such as Go, Chess, and Shogi, and reaches state-of-the-art strength [Silver 2018].

2.3 Connect6

Compared with the similar game Gomoku, Connect6 has better fairness. In Gomoku, a larger board size tends to give the first player a greater advantage [Hsu 2020]. It has even been proven that the first player wins 15x15 Gomoku [Allis 1994]. In contrast, so far there is no evidence that the same unfairness exists in Connect6. In addition, Connect6 is more complex than many other games, since placing two stones in one move makes the number of actions much higher. These properties make Connect6 one of the ideal games for studying computer game playing [Tao 2009].

2.4 Job-Level Computing

Since solving game problems often requires a large amount of computation, parallelization is usually necessary in practice. To help solve game problems, Wu et al. proposed a general distributed computing model named job-level computing [Wu 2013]. Job-level (JL) computing consists of JL clients and the JL system. A JL client dynamically divides game problem solving into tasks that can be completed by specific executions of game programs. The requests to execute game programs are encapsulated as jobs and sent to the JL system. The JL system, comprising a broker and a collection of (remote) workers, performs the jobs simultaneously by dispatching them to available workers. The job results are then returned to the JL client.

Under the JL computing model, many general problem-solving algorithms that are not limited to specific games have been proposed. A useful one, called JL-UCT [Wei 2015], will be used in our opening book construction in Section 3. JL-UCT is a game tree expansion algorithm adopting ideas similar to MCTS, and works as follows. In the JL system, a game-playing program serves as an agent that suggests actions and evaluates the expected outcomes of given positions. The JL client builds a JL game tree rooted at a given position. Then, the JL client repeatedly requests executions of the game-playing program and expands the tree with nodes representing the succeeding positions corresponding to the suggested actions. For JL-UCT, the JL client recursively applies the UCT formula to select child nodes to visit until it reaches a leaf node. JL-UCT can perform the above game tree expansion in different games when implemented on different games. For example, when applied on ...
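A hedged sketch of the JL client's expansion loop may help make the model concrete. It is not the authors' JL system: a local process pool stands in for the broker and the remote workers, and select_leaf, run_game_program, and expand_and_backpropagate are assumed callbacks (the recursive UCT descent, a wrapper around the game-playing program, and the tree update, respectively).

```python
# Illustrative sketch (not the authors' JL system) of a JL client that expands
# an opening game tree by dispatching position-evaluation jobs to parallel
# workers, as described in Section 2.4.

from concurrent.futures import FIRST_COMPLETED, ProcessPoolExecutor, wait

def jl_uct_expand(root, select_leaf, run_game_program,
                  expand_and_backpropagate, num_jobs, max_workers=8):
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        pending = {}          # future -> leaf node awaiting its evaluation
        submitted = 0
        while submitted < num_jobs or pending:
            # Keep the workers busy: select leaves with the UCT formula and
            # encapsulate each evaluation request as a job.
            while submitted < num_jobs and len(pending) < max_workers:
                leaf = select_leaf(root)
                pending[pool.submit(run_game_program, leaf.position)] = leaf
                submitted += 1
            # Wait for at least one finished job, then expand the tree with
            # the suggested actions and update statistics along the path.
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for future in done:
                leaf = pending.pop(future)
                actions, value = future.result()
                expand_and_backpropagate(leaf, actions, value)
    return root
```

A production JL client would also have to handle several outstanding jobs selecting the same leaf (for example with temporary "virtual" visit counts), a detail this sketch omits.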
Recommended publications
  • Paper and Pencils for Everyone
    (CM^2) Math Circle Lesson: Game Theory of Gomoku and (m,n,k)-games. Overview: Learning Objectives/Goals: to expose students to (m,n,k)-games and learn the general history of the games throughout Asian cultures. SWBAT… play variations of m,n,k-games of varying degrees of difficulty and complexity as well as identify various strategies of play for each of the variations as identified by pattern recognition through experience. Materials: Paper and pencils for everyone. Vocabulary: Game – we will create a working definition for this…. Objective – the goal or point of the game, how to win. Win – to do (achieve) what a certain game requires, beat an opponent. Diplomacy – working with other players in a game. Luck/Chance – using dice or cards or something else “random”. Strategy – techniques for winning a game. Agenda: Check in (10-15 min.), Warm-up (10-15 min.), Lesson and game (30-45 min), Wrap-up and chill time (10 min). Lesson: Warm-up questions: Ask these questions after the warm-up to the youth in small groups. They may discuss the answers in the groups and report back to you as the instructor. Write down the answers to these questions and compile a working definition. Try to lead the youth so that they do not name a specific game but keep in mind various games that they know and use specific attributes of them to make generalizations. · What is a game? · Are there different types of games? · What makes something a game and something else not a game? · What is a board game? · How is it different from other types of games? · Do you always know what your opponent (other player) is doing during the game, can they be sneaky? · Do all games have the same qualities as the game definition that we just made? Why or why not? Game history: The earliest known board games are thought to be either ‘Go’ from China (which we are about to learn a variation of), or Senet and Mehen from Egypt (a country in Africa), or Mancala.
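To accompany this lesson excerpt, a minimal sketch of the (m,n,k)-game winning condition is shown below; the board encoding (a list of rows whose cells hold 'X', 'O', or None) is an assumption for illustration, not part of the lesson.

```python
def has_k_in_a_row(board, k, player):
    """Return True if `player` has k consecutive marks on an m x n board,
    horizontally, vertically, or diagonally. Cells hold 'X', 'O', or None."""
    m, n = len(board), len(board[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, two diagonals
    for r in range(m):
        for c in range(n):
            for dr, dc in directions:
                count, rr, cc = 0, r, c
                while 0 <= rr < m and 0 <= cc < n and board[rr][cc] == player:
                    count += 1
                    if count == k:
                        return True
                    rr, cc = rr + dr, cc + dc
    return False
```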
  • Alpha-Beta Pruning
    Alpha-Beta Pruning Carl Felstiner May 9, 2019 Abstract This paper serves as an introduction to the ways computers are built to play games. We implement the basic minimax algorithm and expand on it by finding ways to reduce the portion of the game tree that must be generated to find the best move. We tested our algorithms on ordinary Tic-Tac-Toe, Hex, and 3-D Tic-Tac-Toe. With our algorithms, we were able to find the best opening move in Tic-Tac-Toe by only generating 0.34% of the nodes in the game tree. We also explored some mathematical features of Hex and provided proofs of them. 1 Introduction Building computers to play board games has been a focus for mathematicians ever since computers were invented. The first computer to beat a human opponent in chess was built in 1956, and towards the late 1960s, computers were already beating chess players of low-medium skill [1]. Now, it is generally recognized that computers can beat even the most accomplished grandmaster. Computers build a tree of different possibilities for the game and then work backwards to find the move that will give the computer the best outcome. Although computers can evaluate board positions very quickly, in a game like chess where there are over 10^120 possible board configurations it is impossible for a computer to search through the entire tree. The challenge for the computer is then to find ways to avoid searching in certain parts of the tree. Humans also look ahead a certain number of moves when playing a game, but experienced players already know of certain theories and strategies that tell them which parts of the tree to look at.
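A compact, hedged Python sketch of the pruning idea summarized in this abstract is given below; the `game` interface (`is_terminal`, `score`, `moves`, `apply`) is an assumption for illustration, not taken from the paper.

```python
def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Minimax with alpha-beta pruning: branches that cannot affect the final
    decision (alpha >= beta) are skipped, so large parts of the game tree are
    never generated."""
    if depth == 0 or game.is_terminal(state):
        return game.score(state)
    if maximizing:
        best = float("-inf")
        for move in game.moves(state):
            best = max(best, alphabeta(game.apply(state, move), depth - 1,
                                       alpha, beta, False, game))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff: the minimizing opponent avoids this branch
        return best
    best = float("inf")
    for move in game.moves(state):
        best = min(best, alphabeta(game.apply(state, move), depth - 1,
                                   alpha, beta, True, game))
        beta = min(beta, best)
        if beta <= alpha:
            break  # alpha cutoff
    return best
```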
  • Ai12-General-Game-Playing-Pre-Handout
    Artificial Intelligence 12. General Game Playing: One AI to Play All Games and Win Them All. Jana Koehler, Álvaro Torralba, Summer Term 2019. Thanks to Dr. Peter Kissmann for slide sources. Agenda: 1 Introduction; 2 The Game Description Language (GDL); 3 Playing General Games; 4 Learning Evaluation Functions: Alpha Zero; 5 Conclusion. Deep Blue Versus Garry Kasparov (1997). Games That Deep Blue Can Play: 1 Chess. Chinook Versus Marion Tinsley (1992). Games That Chinook Can Play: 1 Checkers. Games That a General Game Player Can Play: 1 Chess, 2 Checkers, 3 Chinese Checkers, 4 Connect Four, 5 Tic-Tac-Toe, 6 ... Games That a General Game Player Can Play (Ctd.): 5 ..., 6 Othello, 7 Nine Men's Morris, 8 15-Puzzle, 9 Nim, 10 Sudoku, 11 Pentago, 12 Blocker, 13 Breakthrough, 14 Lights Out, 15 Amazons, 16 Knightazons, 17 Blocksworld, 18 Zhadu, 19 Pancakes, 20 Quarto, 21 Knight's Tour, 22 n-Queens, 23 Blob Wars, 24 Bomberman (simplified), 25 Catch a Mouse, 26 Chomp, 27 Gomoku, 28 Hex, 29 Cubicup, 30 ...
  • A Scalable Neural Network Architecture for Board Games
    A Scalable Neural Network Architecture for Board Games. Tom Schaul, Jürgen Schmidhuber. Abstract— This paper proposes to use Multi-dimensional Recurrent Neural Networks (MDRNNs) as a way to overcome one of the key problems in flexible-size board games: scalability. We show why this architecture is well suited to the domain and how it can be successfully trained to play those games, even without any domain-specific knowledge. We find that performance on small boards correlates well with performance on large ones, and that this property holds for networks trained by either evolution or coevolution. I. INTRODUCTION Games are a particularly interesting domain for studies of machine learning techniques. They form a class of clean and elegant environments, usually described by a small set of formal rules and clear success criteria, and yet they often involve highly complex strategies. II. BACKGROUND A. Flexible-size board games There is a large variety of board games, many of which either have flexible board dimensions, or have rules that can be trivially adjusted to make them flexible. The most prominent of them is the game of Go, research on which has been considering board sizes between the minimum of 5x5 and the regular 19x19. The rules are simple [5], but the strategies deriving from them are highly complex. Players alternately place stones onto any of the intersections of the board, with the goal of conquering maximal territory. A player can capture a single stone or a connected group of his opponent’s stones by completely surrounding them with his own stones. A move is not legal if it leads to a ...
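The capture rule described at the end of this excerpt can be illustrated with a short flood-fill check for liberties; the board encoding ('B', 'W', or None at each intersection) is an assumption for illustration, not code from the paper.

```python
def group_has_liberty(board, r, c):
    """Flood-fill the connected group containing (r, c) and report whether it
    touches at least one empty intersection; a group with no such liberty
    would be captured. board[r][c] holds 'B', 'W', or None."""
    size = len(board)
    color = board[r][c]
    seen = {(r, c)}
    stack = [(r, c)]
    while stack:
        y, x = stack.pop()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < size and 0 <= nx < size:
                if board[ny][nx] is None:
                    return True  # found a liberty
                if board[ny][nx] == color and (ny, nx) not in seen:
                    seen.add((ny, nx))
                    stack.append((ny, nx))
    return False
```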
  • Chapter 6 Two-Player Games
    Introduction to Using Games in Education: A Guide for Teachers and Parents. Chapter 6: Two-Player Games. There are many different kinds of two-person games. You may have played a variety of these games, such as chess, checkers, backgammon, and cribbage. While all of these games are competitive, many people play them mainly for social purposes. A two-person game environment is a situation that facilitates communication and companionship. Two major ideas illustrated in this chapter: 1. Look ahead: learning to consider what your opponent will do as a response to a move that you are planning. 2. Computer as opponent. In essence, this makes a two-player game into a one-player game. In addition, we will continue to explore general-purpose, high-road transferable, problem-solving strategies. Tic-Tac-Toe: To begin, we will look at the game of tic-tac-toe (TTT). TTT is a two-player game, with players taking turns. One player is designated as X and the other as O. A turn consists of marking an unused square of a 3x3 grid with one’s mark (an X or an O). The goal is to get three of one’s mark in a file (vertical, horizontal, or diagonal). Traditionally, X is the first player. A sample game is given below (Figure 6.1 shows the board after each move; X wins on X’s fourth move).
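The "look ahead" idea from this chapter can be made concrete with a small sketch that rejects a candidate move if it hands the opponent an immediate win on the next turn; the 3x3 board encoding and helper names are assumptions for illustration, not taken from the chapter.

```python
def wins(board, player):
    """True if `player` has three in a row on a 3x3 board (rows, columns, diagonals)."""
    lines = [[(i, 0), (i, 1), (i, 2)] for i in range(3)] + \
            [[(0, j), (1, j), (2, j)] for j in range(3)] + \
            [[(0, 0), (1, 1), (2, 2)], [(0, 2), (1, 1), (2, 0)]]
    return any(all(board[r][c] == player for r, c in line) for line in lines)

def is_safe_move(board, move, me, opponent):
    """One-ply look ahead: reject `move` if it lets the opponent win immediately."""
    r, c = move
    board[r][c] = me
    safe = True
    for rr in range(3):
        for cc in range(3):
            if board[rr][cc] is None:
                board[rr][cc] = opponent
                if wins(board, opponent):
                    safe = False
                board[rr][cc] = None
    board[r][c] = None  # undo the candidate move
    return safe
```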
  • Ultimate Tic-Tac-Toe
    ULTIMATE TIC-TAC-TOE. Scott Powell, Alex Merrill. Professor: Professor Christman. An algorithmic solver for Ultimate Tic-Tac-Toe, May 2021. ABSTRACT Ultimate Tic-Tac-Toe is a deterministic game played by two players where each player’s turn has a direct effect on what options their opponent has. Each player’s viable moves are determined by their opponent on the previous turn, so players must decide whether the best move in the short term actually is the best move overall. Ultimate Tic-Tac-Toe relies entirely on strategy and decision-making. There are no random variables such as dice rolls to interfere with each player’s strategy. This is relatively rare in the field of board games, which often use chance to determine turns. Because Ultimate Tic-Tac-Toe uses no random elements, it is a great choice for adversarial search algorithms. We may use the deterministic aspect of the game to our advantage by pruning the search trees to only contain moves that result in a good board state for the intelligent agent, and to only consider strong moves from the opponent. This speeds up the efficiency of the algorithm, allowing for an artificial intelligence capable of winning the game without spending extended periods of time evaluating each potential move. We create an intelligent agent capable of playing the game with strong moves using adversarial minimax search. We propose novel heuristics for evaluating the state of the game at any given point, and evaluate them against each other to determine the strongest heuristics. TABLE OF CONTENTS: 1 Introduction; 1.1 Problem Statement; 1.2 Related Work; 2 Methods; 2.1 Simple Heuristic: Greedy; 2.2 New Heuristic; 2.3 Alpha-Beta Pruning and Depth Limit
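The move constraint described in this abstract (each move determines which sub-board the opponent must play in) can be written down as a short legal-move generator; the data layout below is an assumption for illustration, not the authors' representation.

```python
def legal_moves(boards, winners, last_move):
    """Ultimate tic-tac-toe move generation sketch. boards[b][cell] holds 'X',
    'O', or None for the 9 cells of sub-board b; winners[b] is the winner of
    sub-board b or None; last_move is (sub_board, cell) or None. The cell of
    the previous move normally forces the opponent into that sub-board, unless
    it is already decided or full."""
    if last_move is not None:
        target = last_move[1]
        if winners[target] is None and any(v is None for v in boards[target]):
            return [(target, cell) for cell in range(9)
                    if boards[target][cell] is None]
    # First move, or the forced sub-board is unavailable: any open cell in any
    # undecided sub-board is legal.
    return [(b, cell)
            for b in range(9) if winners[b] is None
            for cell in range(9) if boards[b][cell] is None]
```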
  • Combinatorial Game Theory
    Combinatorial Game Theory Aaron N. Siegel Graduate Studies in Mathematics Volume 146 American Mathematical Society Combinatorial Game Theory https://doi.org/10.1090//gsm/146 Combinatorial Game Theory Aaron N. Siegel Graduate Studies in Mathematics Volume 146 American Mathematical Society Providence, Rhode Island EDITORIAL COMMITTEE David Cox (Chair) Daniel S. Freed Rafe Mazzeo Gigliola Staffilani 2010 Mathematics Subject Classification. Primary 91A46. For additional information and updates on this book, visit www.ams.org/bookpages/gsm-146 Library of Congress Cataloging-in-Publication Data Siegel, Aaron N., 1977– Combinatorial game theory / Aaron N. Siegel. pages cm. — (Graduate studies in mathematics ; volume 146) Includes bibliographical references and index. ISBN 978-0-8218-5190-6 (alk. paper) 1. Game theory. 2. Combinatorial analysis. I. Title. QA269.S5735 2013 519.3—dc23 2012043675 Copying and reprinting. Individual readers of this publication, and nonprofit libraries acting for them, are permitted to make fair use of the material, such as to copy a chapter for use in teaching or research. Permission is granted to quote brief passages from this publication in reviews, provided the customary acknowledgment of the source is given. Republication, systematic copying, or multiple reproduction of any material in this publication is permitted only under license from the American Mathematical Society. Requests for such permission should be addressed to the Acquisitions Department, American Mathematical Society, 201 Charles Street, Providence, Rhode Island 02904-2294 USA. Requests can also be made by e-mail to [email protected]. © 2013 by the American Mathematical Society. All rights reserved. The American Mathematical Society retains all rights except those granted to the United States Government.
  • Game Tree Search
    CSC384: Introduction to Artificial Intelligence. Game Tree Search. • Chapter 5.1, 5.2, 5.3, 5.6 cover some of the material we cover here. Section 5.6 has an interesting overview of state-of-the-art game playing programs. • Section 5.5 extends the ideas to games with uncertainty (we won’t cover that material, but it makes for interesting reading). Generalizing Search Problem: • So far: our search problems have assumed the agent has complete control of the environment. • State does not change unless the agent (robot) changes it. • All we need to compute is a single path to a goal state. • Assumption not always reasonable: • Stochastic environment (e.g., the weather, traffic accidents). • Other agents whose interests conflict with yours. • Search can find a path to a goal state, but the actions might not lead you to the goal, as the state can be changed by other agents (nature or other intelligent agents). • We need to generalize our view of search to handle state changes that are not in the control of the agent. • One generalization yields game tree search: • Agent and some other agents. • The other agents are acting to maximize their profits; this might not have a positive effect on your profits. General Games: • What makes something a game? • There are two (or more) agents making changes to the world (the state). • Each agent has their own interests, e.g., each agent has a different goal, or assigns different costs to different paths/states. • Each agent tries to alter the world so as to best benefit itself. (Slides: Fahiem Bacchus, CSC384 Introduction to Artificial Intelligence, University of Toronto.)
  • Lecture Notes Part 2
    Claudia Vogel, Mathematics for IBA, Winter Term 2009/2010. Outline: 5. Game Theory • Introduction & General Techniques • Sequential Move Games • Simultaneous Move Games with Pure Strategies • Combining Sequential & Simultaneous Moves • Simultaneous Move Games with Mixed Strategies • Discussion. Motivation: Why study Game Theory? • Games are played in many situations of everyday life – Roommates and Families – Professors and Students – Dating • Other fields of application – Politics, Economics, Business – Conflict Resolution – Evolutionary Biology – Sports. The beginnings of Game Theory: 1944, “Theory of Games and Economic Behavior”, Oskar Morgenstern & John von Neumann. Decisions vs Games: • Decision: a situation in which a person chooses from different alternatives without considering reactions from others • Game: interaction between mutually aware players – Mutual awareness: the actions of person A affect person B; B knows this and reacts or takes actions in advance; A includes this into his decision process. Sequential vs Simultaneous Move Games: • Sequential Move Game: players move one after the other – Example: chess • Simultaneous Move Game: players act at the same time and without knowing what action the other player chose – Example: race to develop a new medicine. Conflict in Players’ Interests: • Zero Sum Game: one player’s gain is the other player’s loss – Total available gain: zero – Complete conflict of players’ interests • Constant Sum Game: the total available gain is not exactly zero, but constant. • Games in trade or other economic activities usually offer benefits for everyone and are not zero-sum.
  • Combinatorial Game Theory: an Introduction to Tree Topplers
    Georgia Southern University Digital Commons@Georgia Southern Electronic Theses and Dissertations Graduate Studies, Jack N. Averitt College of Fall 2015 Combinatorial Game Theory: An Introduction to Tree Topplers John S. Ryals Jr. Follow this and additional works at: https://digitalcommons.georgiasouthern.edu/etd Part of the Discrete Mathematics and Combinatorics Commons, and the Other Mathematics Commons Recommended Citation Ryals, John S. Jr., "Combinatorial Game Theory: An Introduction to Tree Topplers" (2015). Electronic Theses and Dissertations. 1331. https://digitalcommons.georgiasouthern.edu/etd/1331 This thesis (open access) is brought to you for free and open access by the Graduate Studies, Jack N. Averitt College of at Digital Commons@Georgia Southern. It has been accepted for inclusion in Electronic Theses and Dissertations by an authorized administrator of Digital Commons@Georgia Southern. For more information, please contact [email protected]. COMBINATORIAL GAME THEORY: AN INTRODUCTION TO TREE TOPPLERS by JOHN S. RYALS, JR. (Under the Direction of Hua Wang) ABSTRACT The purpose of this thesis is to introduce a new game, Tree Topplers, into the field of Combinatorial Game Theory. Before covering the actual material, a brief background of Combinatorial Game Theory is presented, including how to assign advantage values to combinatorial games, as well as information on another, related game known as Domineering. Please note that this document contains color images so please keep that in mind when printing. Key Words: combinatorial game theory, tree topplers, domineering, hackenbush 2009 Mathematics Subject Classification: 91A46 COMBINATORIAL GAME THEORY: AN INTRODUCTION TO TREE TOPPLERS by JOHN S. RYALS, JR. B.S. in Applied Mathematics A Thesis Submitted to the Graduate Faculty of Georgia Southern University in Partial Fulfillment of the Requirement for the Degree MASTER OF SCIENCE STATESBORO, GEORGIA 2015 © 2015 JOHN S.
  • CONNECT6 I-Chen Wu, Dei-Yen Huang, and Hsiu-Chen Chang
    234 ICGA Journal December 2005 NOTES CONNECT6 I-Chen Wu, Dei-Yen Huang, and Hsiu-Chen Chang. Hsinchu, Taiwan. ABSTRACT This note introduces the game Connect6, a member of the family of the k-in-a-row games, and investigates some related issues. We analyze the fairness of Connect6 and show that Connect6 is potentially fair. Then we describe other characteristics of Connect6, e.g., the high game-tree and state-space complexities. Thereafter we present some threat-based winning strategies for Connect6 players or programs. Finally, the note describes the current developments of Connect6 and lists five new challenges. 1. INTRODUCTION Traditionally, the game k-in-a-row is defined as follows. Two players, henceforth represented as B and W, alternately place one stone, black and white respectively, on one empty square of an m × n board; B is assumed to play first. The player who first obtains k consecutive stones (horizontally, vertically or diagonally) of his own colour wins the game. Recently, Wu and Huang (2005) presented a new family of k-in-a-row games, Connect(m,n,k,p,q), which are analogous to the traditional k-in-a-row games, except that B places q stones initially and then both W and B alternately place p stones subsequently. The additional parameter q is a key that significantly influences the fairness. The games in the family are also referred to as Connect games. For simplicity, Connect(k,p,q) denotes the games Connect(∞,∞,k,p,q), played on infinite boards. A well-known and popular Connect game is five-in-a-row, also called Go-Moku.
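The Connect(m,n,k,p,q) move schedule described in this note (Black places q stones on the first turn, then both players alternately place p stones per turn) can be written down as a tiny generator; this is an illustrative sketch, not code from the note.

```python
def connect_move_schedule(p, q, turns):
    """Yield (player, stones_to_place) for the first `turns` turns of a
    Connect(m,n,k,p,q) game: Black ('B') opens with q stones, then White ('W')
    and Black alternate placing p stones per turn. Connect6 is p=2, q=1."""
    yield ("B", q)
    player = "W"
    for _ in range(turns - 1):
        yield (player, p)
        player = "B" if player == "W" else "W"
```

For Connect6, list(connect_move_schedule(2, 1, 4)) gives [('B', 1), ('W', 2), ('B', 2), ('W', 2)], matching the note's description of how the opening parameter q differs from the later turns.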