Game Tree Search


Game Tree Search
CS 7180: Behavioral Modeling and Decision-making in AI
Prof. Amy Sliva, September 28, 2012

Outline
• Introduction to games
• State-space representation of a game tree
• Adversarial search
• Minimax search
• Alpha-beta pruning

Decisions in multiagent environments
• Automated planning from last time assumes a single agent
  • Decisions depend only on the state space, possible actions, and the desired goal
• Multiagent domains—strategic decisions are necessary
  • Consider the actions of other agents when making a decision
• Competitive multiagent environments are often called games
  • Goals of the agents are in conflict
  • Often some cooperation is involved, but we will get to this later…
• Formulate decision making as a state-space search problem
  • Adversarial search

What do we mean by "games"?

                        Deterministic                   Chance
Perfect information     Chess, Checkers, Go, Othello    Backgammon, Monopoly
Imperfect information   Kriegspiel, Battleship          Bridge, Poker, Scrabble, Nuclear war

• Not typically considered—physical games like tennis, croquet, ice hockey, etc.
• Well, sometimes… Robot Soccer!
• RoboCup http://www.robocup.org/
• The goal: "By mid-21st century, a team of fully autonomous humanoid robot soccer players shall win the soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup."
• We'll stick with games like chess and checkers today…

Common subset of games for AI
• Deterministic, turn-taking, two-player, zero-sum, perfect-information games (e.g., chess, checkers, Go, Othello)

Deterministic, turn-taking, zero-sum games
• Deterministic
  • No chance (e.g., dice) involved
• Turn taking
  • Two agents whose actions alternate
• Zero-sum
  • Utility values for each agent are equal and opposite of the other (i.e., if one agent wins, the other necessarily loses)
  • Creates the adversarial situation
• Perfect information
  • Fully observable environments
  • No information is hidden from either player
• Generalizes to stochastic games, multiple players, non-zero-sum, etc.

Why study these games in AI?
• Fun!
• Clear criteria for success
• Interesting hard problems
  • Very large search spaces (e.g., chess has on average 10^134 nodes!)
• Problems involving hostile, adversarial, or competing agents
• Insight into other real-world applications with strategic decisions
• Like the real world, games require that a decision be made even when calculating the optimal decision is impossible
• Different from the games studied in game theory

How to play a game
1. Consider all legal moves—each move leads to a new state
2. Evaluate the possible next states and determine the best
3. Make your move!
4.
Wait for your opponent to move, then do it all again
• Represent the game as a tree, and this becomes a search problem

Game trees
• Root node is the current state
  • Looking for the best single move to make next
• Evaluation function f(s) rates each possible state
• Edges are the legal moves for a player
• Terminal nodes represent end-game states (i.e., win, lose, or draw)
• How do we search this tree to find the optimal move?

Planning vs. adversarial search
• Planning—no adversary
  • Solution is a (heuristic) method for finding the goal (i.e., a plan)
  • Heuristic techniques can find an optimal solution
  • Evaluation function—an estimate of the cost from start to goal through a given node
• Games—multiagent and adversarial
  • Solution is a strategy
  • A strategy specifies our move for every possible opponent reply
  • Time limits force an approximate solution
  • Evaluation function—evaluates the "goodness" or utility of a game position

Formulating games as search
• Two players: MAX and MIN—MAX moves first
  • Take turns until the game is over
  • Winner gets a reward, loser gets a penalty
  • Zero sum—the sum of the reward and the penalty is a constant
• Formal definition
  • Initial state: set-up specified by the rules, e.g., the initial board configuration of chess
  • Player(s): defines which player has the move in state s
  • Actions(s): returns the set of legal moves in state s
  • Result(s,a): transition model defining the result of taking move a in state s
  • Terminal-Test(s): is the game finished? True if finished, false otherwise
  • Utility(s,p): gives a numerical value of terminal state s for player p
    • E.g., win (+1), lose (-1), and draw (0) in tic-tac-toe
    • E.g., win (+1), lose (0), and draw (1/2) in chess
• MAX uses the search tree to determine its next move, assuming MIN plays optimally

Game trees for MAX and MIN
• The current player searches from the perspective of MAX
  • If it is our turn to move, then the root is a MAX node
• Each level of the tree has nodes that are all MAX or all MIN
  • MAX and MIN levels alternate
• Complete game tree
  • All configurations generated by legal moves from the root to the end of the game
• Incomplete game tree
  • All configurations generated by legal moves from the root to a given depth (look ahead some number of steps)

Evaluating a game state
• Evaluation function f(n) evaluates the "goodness" of a game position (i.e., its utility to each player)
  • Contrast with heuristic search, where the evaluation function estimates cost/distance to the goal
• The zero-sum assumption allows us to use one evaluation function for both players
  • f(n) > 0: position n is good for MAX and bad for MIN
  • f(n) < 0: position n is bad for MAX and good for MIN
  • f(n) near 0: position n is a neutral position
  • f(n) >> 0: win for MAX!
  • f(n) << 0: win for MIN!
• Goal of game tree search—determine one move for MAX that maximizes the payoff for a given game tree according to f(n)
  • Regardless of the moves MIN takes

Evaluation function for Tic-Tac-Toe
• f(n) = +1 if the position is a win for X
• f(n) = -1 if the position is a win for O
• f(n) = 0 if the position is a draw
Minimax evaluation rule
• Minimax value of a node: Minimax(n) = the utility for MAX of being in the state at n, assuming optimal play from MIN
• MAX wants a state with the maximum minimax value, and MIN wants a state with the minimum
• Assumes the worst-case scenario: MIN is also trying to get its highest utility/payoff (i.e., is trying to minimize the payoff to MAX)

Minimax(s) =
  Utility(s)                                  if Terminal-Test(s)
  max_{a ∈ Actions(s)} Minimax(Result(s,a))   if Player(s) = MAX
  min_{a ∈ Actions(s)} Minimax(Result(s,a))   if Player(s) = MIN

• Minimax(s) at a terminal state is its utility according to the rules of the game
• Minimax(s) at a MAX node is the maximum of the child nodes' values
• Minimax(s) at a MIN node is the minimum of the child nodes' values

Minimax game tree (example)
• MAX node A has moves a1, a2, a3 leading to MIN nodes B, C, and D
• Terminal utilities for MAX: 3, 12, 8 under B; 2, 4, 6 under C; 14, 5, 2 under D
• Backed-up minimax values: B = 3, C = 2, D = 2, and therefore A = 3
• Terminal nodes show the utility values for MAX; the other nodes are labeled with their minimax values
• MAX's best move is a1—leads to the state with the highest minimax value
• MIN's best reply is b1—leads to the state with the lowest minimax value

Minimax algorithm for optimal play
Find the optimal strategy for MAX:
1. Generate the whole game tree, down to the leaves
2. Apply the utility (payoff) function to each leaf
3. Recursively back up values from the leaves:
   a) a MAX node computes the max of its child values
   b) a MIN node computes the min of its child values
4. At the root, choose the move leading to the child with the highest minimax value
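The recursive rule and the backup procedure above can be sketched in a few lines of Python. The list-based tree encoding is an illustrative assumption (a node is either a terminal utility or a list of children), and the values reproduce the A/B/C/D example tree from the slides.

```python
def minimax(node, is_max):
    # Terminal-Test: a leaf directly holds Utility(s).
    if isinstance(node, int):
        return node
    values = [minimax(child, not is_max) for child in node]
    # A MAX node backs up the maximum child value, a MIN node the minimum.
    return max(values) if is_max else min(values)

def best_move(root):
    # At the root (a MAX node), choose the move whose MIN child backs up
    # the highest minimax value; returns (move index, minimax value).
    values = [minimax(child, is_max=False) for child in root]
    best = max(range(len(values)), key=lambda i: values[i])
    return best, values[best]

# The example tree from the slides: A -> B, C, D with the listed utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
move, value = best_move(tree)   # move 0 (a1), minimax value 3
```

Running this on the example backs up B = 3, C = 2, D = 2 and picks a1 at the root, matching the slide's answer.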
Recommended publications
  • Alpha-Beta Pruning
    Alpha-Beta Pruning. Carl Felstiner, May 9, 2019. Abstract: This paper serves as an introduction to the ways computers are built to play games. We implement the basic minimax algorithm and expand on it by finding ways to reduce the portion of the game tree that must be generated to find the best move. We tested our algorithms on ordinary Tic-Tac-Toe, Hex, and 3-D Tic-Tac-Toe. With our algorithms, we were able to find the best opening move in Tic-Tac-Toe by generating only 0.34% of the nodes in the game tree. We also explored some mathematical features of Hex and provided proofs of them. 1 Introduction. Building computers to play board games has been a focus for mathematicians ever since computers were invented. The first computer to beat a human opponent in chess was built in 1956, and towards the late 1960s, computers were already beating chess players of low-to-medium skill.[1] Now, it is generally recognized that computers can beat even the most accomplished grandmaster. Computers build a tree of different possibilities for the game and then work backwards to find the move that will give the computer the best outcome. Although computers can evaluate board positions very quickly, in a game like chess, where there are over 10^120 possible board configurations, it is impossible for a computer to search through the entire tree. The challenge for the computer is then to find ways to avoid searching certain parts of the tree. Humans also look ahead a certain number of moves when playing a game, but experienced players already know certain theories and strategies that tell them which parts of the tree to look at.
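The abstract above describes pruning away parts of the game tree that cannot affect the best move. Below is a minimal sketch of the standard alpha-beta pruning algorithm, not the paper's own implementation; the list-encoded tree is an illustrative assumption, with values reused from the lecture's A/B/C/D example.

```python
def alphabeta(node, alpha, beta, is_max):
    # alpha: best value MAX can already guarantee on the path to the root.
    # beta:  best value MIN can already guarantee on that path.
    if isinstance(node, int):          # leaf holds the terminal utility
        return node
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # MIN would never let play reach here
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:              # MAX would never let play reach here
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
root_value = alphabeta(tree, float("-inf"), float("inf"), True)   # 3
```

On this tree the root value is still 3, but the cutoff fires inside node C: once its first leaf (2) is seen, C's value is at most 2 < alpha = 3, so the remaining leaves 4 and 6 are never examined.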
  • Combinatorial Game Theory
    Combinatorial Game Theory. Aaron N. Siegel. Graduate Studies in Mathematics, Volume 146. American Mathematical Society, Providence, Rhode Island. https://doi.org/10.1090//gsm/146 Editorial committee: David Cox (Chair), Daniel S. Freed, Rafe Mazzeo, Gigliola Staffilani. 2010 Mathematics Subject Classification: Primary 91A46. For additional information and updates on this book, visit www.ams.org/bookpages/gsm-146. Library of Congress Cataloging-in-Publication Data: Siegel, Aaron N., 1977– Combinatorial game theory / Aaron N. Siegel. (Graduate studies in mathematics; volume 146). Includes bibliographical references and index. ISBN 978-0-8218-5190-6 (alk. paper). 1. Game theory. 2. Combinatorial analysis. I. Title. QA269.S5735 2013 519.3—dc23 2012043675. © 2013 by the American Mathematical Society. All rights reserved.
  • Game Tree Search
    CSC384: Introduction to Artificial Intelligence. Game Tree Search. Chapter 5.1, 5.2, 5.3, 5.6 cover some of the material we cover here. Section 5.6 has an interesting overview of state-of-the-art game playing programs. Section 5.5 extends the ideas to games with uncertainty (we won't cover that material, but it makes for interesting reading). Fahiem Bacchus, CSC384 Introduction to Artificial Intelligence, University of Toronto. Generalizing Search Problems. So far, our search problems have assumed the agent has complete control of the environment: the state does not change unless the agent (robot) changes it, and all we need to compute is a single path to a goal state. This assumption is not always reasonable: stochastic environments (e.g., the weather, traffic accidents), or other agents whose interests conflict with yours. Search can find a path to a goal state, but the actions might not lead you to the goal, as the state can be changed by other agents (nature or other intelligent agents). We need to generalize our view of search to handle state changes that are not in the control of the agent. One generalization yields game tree search: the agent and some other agents, where the other agents act to maximize their profits, which might not have a positive effect on your profits. General Games. What makes something a game? There are two (or more) agents making changes to the world (the state). Each agent has its own interests; e.g., each agent has a different goal, or assigns different costs to different paths/states. Each agent tries to alter the world so as to best benefit itself.
  • Lecture Notes Part 2
    Claudia Vogel, Mathematics for IBA, Winter Term 2009/2010. Outline: 5. Game Theory • Introduction & General Techniques • Sequential Move Games • Simultaneous Move Games with Pure Strategies • Combining Sequential & Simultaneous Moves • Simultaneous Move Games with Mixed Strategies • Discussion. Motivation: Why study Game Theory? Games are played in many situations of everyday life: roommates and families, professors and students, dating. Other fields of application: politics, economics, business; conflict resolution; evolutionary biology; sports. The beginnings of Game Theory: 1944, "Theory of Games and Economic Behavior" by Oskar Morgenstern & John von Neumann. Decisions vs. Games. Decision: a situation in which a person chooses from different alternatives without considering reactions from others. Game: interaction between mutually aware players. Mutual awareness: the actions of person A affect person B; B knows this and reacts or takes advance actions; A includes this in his decision process. Sequential vs. Simultaneous Move Games. Sequential move game: players move one after the other. Example: chess. Simultaneous move game: players act at the same time, without knowing what action the other player chose. Example: a race to develop a new medicine. Conflict in players' interests. Zero-sum game: one player's gain is the other player's loss; the total available gain is zero; complete conflict of players' interests. Constant-sum game: the total available gain is not exactly zero, but constant. Games in trade or other economic activities usually offer benefits for everyone and are not zero-sum.
  • Combinatorial Game Theory: an Introduction to Tree Topplers
    Georgia Southern University Digital Commons@Georgia Southern Electronic Theses and Dissertations Graduate Studies, Jack N. Averitt College of Fall 2015 Combinatorial Game Theory: An Introduction to Tree Topplers John S. Ryals Jr. Follow this and additional works at: https://digitalcommons.georgiasouthern.edu/etd Part of the Discrete Mathematics and Combinatorics Commons, and the Other Mathematics Commons Recommended Citation Ryals, John S. Jr., "Combinatorial Game Theory: An Introduction to Tree Topplers" (2015). Electronic Theses and Dissertations. 1331. https://digitalcommons.georgiasouthern.edu/etd/1331 This thesis (open access) is brought to you for free and open access by the Graduate Studies, Jack N. Averitt College of at Digital Commons@Georgia Southern. It has been accepted for inclusion in Electronic Theses and Dissertations by an authorized administrator of Digital Commons@Georgia Southern. For more information, please contact [email protected]. COMBINATORIAL GAME THEORY: AN INTRODUCTION TO TREE TOPPLERS by JOHN S. RYALS, JR. (Under the Direction of Hua Wang) ABSTRACT The purpose of this thesis is to introduce a new game, Tree Topplers, into the field of Combinatorial Game Theory. Before covering the actual material, a brief background of Combinatorial Game Theory is presented, including how to assign advantage values to combinatorial games, as well as information on another, related game known as Domineering. Please note that this document contains color images so please keep that in mind when printing. Key Words: combinatorial game theory, tree topplers, domineering, hackenbush 2009 Mathematics Subject Classification: 91A46 COMBINATORIAL GAME THEORY: AN INTRODUCTION TO TREE TOPPLERS by JOHN S. RYALS, JR. B.S. in Applied Mathematics A Thesis Submitted to the Graduate Faculty of Georgia Southern University in Partial Fulfillment of the Requirement for the Degree MASTER OF SCIENCE STATESBORO, GEORGIA 2015 c 2015 JOHN S.
  • CONNECT6 I-Chen Wu1, Dei-Yen Huang1, and Hsiu-Chen Chang1
    234 ICGA Journal December 2005. NOTES. CONNECT6. I-Chen Wu, Dei-Yen Huang, and Hsiu-Chen Chang. Hsinchu, Taiwan. ABSTRACT: This note introduces the game Connect6, a member of the family of the k-in-a-row games, and investigates some related issues. We analyze the fairness of Connect6 and show that Connect6 is potentially fair. Then we describe other characteristics of Connect6, e.g., the high game-tree and state-space complexities. Thereafter we present some threat-based winning strategies for Connect6 players or programs. Finally, the note describes the current developments of Connect6 and lists five new challenges. 1. INTRODUCTION. Traditionally, the game k-in-a-row is defined as follows. Two players, henceforth represented as B and W, alternately place one stone, black and white respectively, on one empty square of an m × n board; B is assumed to play first. The player who first obtains k consecutive stones (horizontally, vertically, or diagonally) of his own colour wins the game. Recently, Wu and Huang (2005) presented a new family of k-in-a-row games, Connect(m,n,k,p,q), which are analogous to the traditional k-in-a-row games, except that B places q stones initially and then both W and B alternately place p stones subsequently. The additional parameter q is a key that significantly influences the fairness. The games in the family are also referred to as Connect games. For simplicity, Connect(k,p,q) denotes the game Connect(∞,∞,k,p,q), played on an infinite board. A well-known and popular Connect game is five-in-a-row, also called Go-Moku.
  • Part 4: Game Theory II Sequential Games
    Part 4: Game Theory II. Sequential Games. Games in Extensive Form, Backward Induction, Subgame Perfect Equilibrium, Commitment. June 2016.
    Sequential Games: games in matrix (normal) form can only represent situations where people move simultaneously, so the sequential nature of decision making is suppressed and the concept of "time" plays no role. But many situations involve a player choosing actions sequentially (over time) rather than simultaneously, so we need games in extensive form, i.e., sequential games. Example, Battle of the Sexes (BS): Sally and Harry each choose between Local Latte and Starbucks; the payoffs (Sally, Harry) are (1, 2) if both choose Local Latte, (2, 1) if both choose Starbucks, and (0, 0) if they choose differently.
    Battle of the Sexes reconsidered: suppose Sally moves first (and leaves Harry a text message telling him where he can find her), and Harry moves second (after reading Sally's message). This gives an extensive form game (game tree). The game still has two Nash equilibria, (LL,LL) and (SB,SB), but (LL,LL) is no longer plausible...
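The sequential Battle of the Sexes described in the excerpt above can be solved by backward induction in a few lines. The payoff table and function names below are illustrative assumptions for this sketch; payoffs are written (Sally, Harry), with Sally moving first.

```python
# Payoffs (Sally, Harry) for each (Sally's move, Harry's move) profile.
payoffs = {
    ("LL", "LL"): (1, 2),
    ("LL", "SB"): (0, 0),
    ("SB", "LL"): (0, 0),
    ("SB", "SB"): (2, 1),
}

def harry_best_reply(sally_move):
    # Last mover: Harry observes Sally's move and maximizes his own payoff.
    return max(("LL", "SB"), key=lambda h: payoffs[(sally_move, h)][1])

def solve():
    # Backward induction: Sally anticipates Harry's best reply to each of
    # her moves and picks the move with the best resulting payoff for her.
    sally = max(("LL", "SB"),
                key=lambda s: payoffs[(s, harry_best_reply(s))][0])
    return sally, harry_best_reply(sally)

outcome = solve()   # ('SB', 'SB'): the subgame perfect equilibrium
```

Moving first lets Sally commit to Starbucks, so backward induction selects (SB, SB) and discards the implausible (LL, LL) equilibrium, exactly as the excerpt argues.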
  • Alpha-Beta Pruning
    CMSC 474, Introduction to Game Theory. Game-tree Search and Pruning Algorithms. Mohammad T. Hajiaghayi, University of Maryland. Finite perfect-information zero-sum games. Finite: finitely many agents, actions, states, histories. Perfect information: every agent knows all of the players' utility functions, all of the players' actions and what they do, and the history and current state. No simultaneous actions: agents move one at a time. Constant sum (or zero-sum): there is a constant k such that regardless of how the game ends, Σ_{i=1,…,n} u_i = k. For every such game, there's an equivalent game in which k = 0. Examples. Deterministic: chess, checkers, go, gomoku, reversi (othello), tic-tac-toe, qubic, connect-four, mancala (awari, kalah), 9 men's morris (merelles, morels, mill). Stochastic: backgammon, monopoly, yahtzee, parcheesi, roulette, craps. For now, we'll consider just the deterministic games. Outline: a brief history of work on this topic; restatement of the Minimax Theorem; game trees; the minimax algorithm; α-β pruning; resource limits, approximate evaluation. Most of this isn't in the game-theory book. For further information, look at Russell & Norvig's Artificial Intelligence: A Modern Approach (there are 3 editions of this book; in the 2nd edition, it's Chapter 6). Brief history: 1846 (Babbage) designed a machine to play tic-tac-toe; 1928 (von Neumann) minimax theorem; 1944 (von Neumann & Morgenstern) backward induction; 1950 (Shannon) minimax algorithm (finite-horizon search); 1951 (Turing) program (on paper) for playing chess; 1952–7
  • Game Tree Search Complete Control of Environment ■ State Does Not Change Unless the Agent (Robot) Changes It
    CSC384: Intro to Artificial Intelligence. Game Tree Search. Chapter 6.1, 6.2, 6.3, 6.6 cover some of the material we cover here. Section 6.6 has an interesting overview of state-of-the-art game playing programs. Section 6.5 extends the ideas to games with uncertainty (we won't cover that material, but it makes for interesting reading).
    Generalizing Search Problems. So far, our search problems have assumed the agent has complete control of the environment: the state does not change unless the agent (robot) changes it, and all we need to compute is a single path to a goal state. This assumption is not always reasonable: stochastic environments (e.g., the weather, traffic accidents), or other agents whose interests conflict with yours. Problem: you might not traverse the path you are expecting. In these cases, we need to generalize our view of search to handle state changes that are not in the control of the agent. One generalization yields game tree search: the agent and some other agents, where the other agents act to maximize their profits, which might not have a positive effect on your profits.
    Two-person Zero-Sum Games. Two-person, zero-sum games: chess, checkers, tic-tac-toe, backgammon, go, Doom, "find the last parking space." Your winning means that your opponent loses, and vice versa. Zero-sum means the sum of your and your opponent's payoffs is zero: anything you gain comes at your opponent's cost (and vice versa). Key insight: how you act depends on how the other agent acts. Hojjat Ghaderi, University of Toronto.
  • How Game Complexity Affects the Playing Behavior of Synthetic Agents
    Kiourt, C., Kalles, Dimitris and Kanellopoulos, P., (2017), How game complexity affects the playing behavior of synthetic agents, 15th European Conference on Multi-Agent Systems (EUMAS 2017), Evry, 14-15 December 2017. Chairi Kiourt and Dimitris Kalles (School of Science and Technology, Hellenic Open University, Patras, Greece; chairik, [email protected]) and Panagiotis Kanellopoulos (CTI Diophantus and University of Patras, Rion, Greece; [email protected]). Abstract: Agent-based simulation of social organizations, via the investigation of agents' training and learning tactics and strategies, has been inspired by the ability of humans to learn from social environments which are rich in agents, interactions, and partial or hidden information. Such richness is a source of complexity that an effective learner has to be able to navigate. This paper focuses on the investigation of the impact of the environmental complexity on the game playing-and-learning behavior of synthetic agents. We demonstrate our approach using two independent turn-based zero-sum games as the basis of forming social events which are characterized both by competition and cooperation. In our investigation, we adopted the state-space complexity approach, which is the best-known and most widely used [13]–[15]. The complexity of a large set of well-known games has been calculated [14], [15] at various levels, but their usability in multi-agent systems, as regards the impact on the agents' learning/playing progress, is still a flourishing research field. In this article, we study the impact of game complexity on the learning/training progress of synthetic agents.
  • Games Solved: Now and in the Future by H
    Games solved: Now and in the future, by H. J. van den Herik, J. W. H. M. Uiterwijk, and J. van Rijswijck. Presented by Tsan-sheng Hsu, [email protected], http://www.iis.sinica.edu.tw/~tshsu. Abstract: Which game characteristics are predominant when the solution of a game is the main target? It is concluded that decision complexity is more important than state-space complexity. There is a trade-off between knowledge-based methods and brute-force methods. There is a clear correlation between the first player's initiative and the necessary effort to solve a game. TCG: two-player games, 20121019, Tsan-sheng Hsu. Definitions (1/4). Domain: two-person zero-sum games with perfect information. Zero-sum means one player's loss is exactly the other player's gain, and vice versa; there is no way for both players to win at the same time. Game-theoretic value of a game: the outcome, i.e., win, loss, or draw, when all participants play optimally. Classification of games' solutions according to L.V. Allis [Ph.D. thesis, 1994], if they are considered solved: Ultra-weakly solved: the game-theoretic value of the initial position has been determined. Weakly solved: for the initial position, a strategy has been determined to achieve the game-theoretic value against any opponent. Strongly solved: a strategy has been determined for all legal positions. The game-theoretic values of many games are unknown or are only known for some legal positions. Definitions (2/4). State-space complexity of a game: the number of legal positions in a game.
  • CS 520: Intro AI Jingjin Yu | Rutgers Games
    Lecture 06: Games & Adversarial Search. CS 520: Intro AI, Jingjin Yu, Rutgers. Games: tic-tac-toe, backgammon, Monopoly, chess (the Chinese version, the Japanese version). Just this past week (January 28th)… Why are we fascinated with games? Games are "benchmarks" of human intelligence. They model real-world competitive and cooperative behaviors: Monopoly involves bargaining, cooperation, and competition; chess and go involve competition and strategy. But they are much simplified; e.g., chess has 32 pieces and a board with 64 positions. This makes them ideal for mathematical study as well as for applying computational techniques. This lecture will cover: the MinMax algorithm (also known as minimax, MM…), alpha-beta pruning, and stochastic games & partially observable games. Focus: alternating 2-player games. Such games are sequential, with the players taking turns. The game ends with a terminal state with utilities for both players. Zero-sum games: one player's utility is the negation of the other player's utility, hence the summed utility is zero. Non-zero-sum games: total utility is non-zero; e.g., in soccer qualifying matches, 3 points for a win, 1 point for a draw, 0 for a loss. Game as a type of search problem: games fall into a category of search problems that we have touched on, those with non-deterministic actions. In fact, we may assume we always face the worst outcome after we make a choice. This is known as adversarial search. The process of playing a two-player game: assuming Max and Min are playing a zero-sum game, at a step k of the sequential game, Max wants to maximize her utility; at step k + 1, Min wants to maximize