Exploring Monte Carlo Negotiation Search with Nontrivial Agreements

Elijah Malaby, John Licato
Advancing Machine and Human Reasoning (AMHR) Lab
Department of Computer Science and Engineering
University of South Florida

Abstract

The application of automated negotiation to general game playing is a research area with far-reaching implications. Non-zero-sum games can be used to model a wide variety of real-world scenarios, and automated negotiation provides a framework for more realistically modeling the behavior of agents in these scenarios. A recent development in this space is the Monte Carlo Negotiation Search (MCNS) algorithm, which can negotiate to find valuable cooperative strategies for a wide array of games (such as those of the Game Description Language). However, MCNS only proposes agreements corresponding to individual sequences of moves, without any higher-level notion of conditional or stateful strategy. Our work attempts to lift this restriction. We present two contributions: extensions to the MCNS algorithm to support more complex agreements, and an agreement language for GDL games suitable for use with our algorithm. We also present the results of a preliminary experiment in which we use our algorithm to search for an optimal agreement for the iterated prisoner's dilemma. We demonstrate significant improvement of our algorithm over random agreement sampling, although further work is required to more consistently produce optimal agreements.

1 Introduction

Game playing provides a valuable framework for developing and experimenting in artificial intelligence. Games provide controlled environments and clear goals. Furthermore, a broad class of multi-agent systems can be analyzed in terms of games. Turn-based zero-sum games like Chess and Go have already seen extensive research (recently, (Silver et al. 2018)), and more recent efforts have been made to develop AI for more complex environments like the real-time strategy game StarCraft 2 (Churchill et al. 2016). Cooperative games are ones where agents are allowed to collaborate in their strategies (and, broadly speaking, are incentivized to do so). Automated negotiation is a similarly important field in AI, tasked with solving the problem of agents coming to mutually beneficial agreements in complex environments (Fatima, Kraus, and Wooldridge 2014). At the intersection of automated negotiation and game playing is the potential to build robust AIs that can work cooperatively based only upon knowledge of their shared environment and understanding of each other's goals.

A particularly compelling example of a game where negotiation and careful cooperation are valuable is Diplomacy. Diplomacy is a perfect-information game with a large state space, no chance, simultaneous moves, and up to seven concurrent players. Players are encouraged to come to agreements during the game. The large state space makes analyzing Diplomacy games challenging, let alone discovering effective cooperative strategies and negotiating agreements. Recent work has explored the problem of automated negotiation in Diplomacy (Fabregues and Sierra 2011; de Jonge et al. 2018; Theodoridis and Chalkiadakis 2020; Marinheiro and Cardoso 2018).

A recent development in the game-negotiation space is the work on Monte Carlo Negotiation Search (de Jonge and Zhang 2017), an algorithm for negotiating cooperative agreements that does not assume any foreknowledge about the game in question. Instead, it is provided the rules of the game in the form of the Game Description Language (GDL) (Genesereth, Love, and Pell 2005) and must be prepared to negotiate over any such game. However, MCNS as described works over an agreement space where agreements correspond to sequences of moves. These move sequences are discovered by the random sampling of its internal Monte Carlo Tree Search. For simple games, MCNS can reliably find an optimal strategy, but this approach does not scale as effectively to more complex games (the likelihood that a given random game will be optimal decreases rapidly) or to games where only a subset of players are party to the agreement (a move sequence may be insufficient to describe the optimal strategy under all opposing responses). Our work explores an approach to decoupling agreement search from the rest of the MCNS algorithm, allowing it to be applied to more expressive agreement spaces capable of describing more complex strategies. We make two main contributions. First, in §3 and §4 we document our proposed extensions to MCTS and MCNS to support these agreements. Second, in §5 we document our approach to a general-purpose GDL-based agreement language suitable for our algorithm. Finally, §6 documents a preliminary experiment we performed applying our agreement search process to generating the optimal agreement for the iterated prisoner's dilemma.

Copyright © 2021 by the authors. All rights reserved.
2 Foundation and related work

Games

We focus on simultaneous-move two-player games (games with two players where each player's moves are selected concurrently and applied simultaneously). For the rest of the paper, definitions are based on an abstract game with two players p_i for i ∈ {0, 1} and a finite set of game states s ∈ S. There is an initial state s_0 ∈ S and a nonempty subset of terminal states T ⊂ S. There are two sets of actions A_0 and A_1 corresponding to the full set of actions the respective player could take. There are two legality functions, L_0 and L_1, defined on (S \ T) → 2^(A_i) and identifying the nonempty subset of actions legal for a player to take in a given nonterminal state. For each nonterminal state s ∈ S \ T there is a game transition function δ_s defined on L_0(s) × L_1(s) → S giving the transitions out of that state. Finally, there are two utility functions U_0 and U_1 defined on T → ℝ⁺ giving the utility values of each terminal state to each player.
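To make these definitions concrete, the following sketch (our illustration, not code from the paper) renders the formalism in Python, instantiated as a small finite-horizon prisoner's dilemma in the spirit of the experiment mentioned in the abstract; the names PDState, ROUNDS, and PAYOFF are ours.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PDState:
        round: int        # rounds played so far
        payoffs: tuple    # accumulated (U_0, U_1)

    ROUNDS = 3  # horizon; any finite bound keeps S finite
    # Stage payoffs from the mover's perspective: (my move, their move) -> reward.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def initial_state():                    # s_0
        return PDState(0, (0, 0))

    def is_terminal(s):                     # s in T iff the horizon is reached
        return s.round >= ROUNDS

    def legal(i, s):                        # L_i(s): both moves always legal here
        return frozenset({"C", "D"})

    def transition(s, a0, a1):              # delta_s on L_0(s) x L_1(s)
        return PDState(s.round + 1,
                       (s.payoffs[0] + PAYOFF[(a0, a1)],
                        s.payoffs[1] + PAYOFF[(a1, a0)]))

    def utility(i, s):                      # U_i, defined only on terminal states
        assert is_terminal(s)
        return s.payoffs[i]

Under this encoding T is exactly the set of states whose round counter has reached the horizon, and the legality functions happen to be constant because both moves are always available.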
Monte Carlo Tree Search

Monte Carlo Tree Search is a family of algorithms for estimating properties of games (in particular, Nash equilibria). A more detailed discussion of MCTS can be found in (Kocsis and Szepesvári 2006), and particular policies for simultaneous-move games are discussed in (Lisý et al. 2013). MCTS has recently been used for automated negotiation in the work of (Buron, Guessoum, and Ductor 2019), in which they directly apply MCTS to a negotiation game formalism of their own design.

MCTS algorithms work by iteratively exploring the move tree of the game, with each step incrementally improving the overall estimates. Each node in the game's move tree corresponds directly to a game state. The children of a state correspond to valid state transitions (moves). Each iteration of MCTS proceeds in four steps:

1. Selection: select a promising unvisited node
2. Rollout: simulate a random game starting at this node
3. Update: use the reward from the simulation to update metrics about this node and its parents
4. Expansion: add the children of this node as unvisited nodes in the tree

Selection proceeds recursively from the root by selecting visited nodes according to a policy. A simple greedy policy selects the node with the highest average result so far, but this policy is highly chaotic in its results.

The Update step walks back up the tree to update each node based on the rollout result. The update is dependent on what information is being recorded and what policy is being used to select nodes. UCT, for example, records the cumulative reward and the number of times each node is updated.

The Expansion step performs the maintenance needed for Selection to pick children from the explored node. It allows the tree to be generated lazily in memory, which is important given that even finite games have trees of explosive size.
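The four steps can be made concrete with a compact UCT-style loop. The sketch below is our own simplification rather than any published implementation: it reuses the game functions from the earlier prisoner's dilemma sketch, treats each joint move (a0, a1) as a single tree edge, and scores nodes by player 0's reward alone, glossing over the simultaneous-move selection policies that (Lisý et al. 2013) discuss.

    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children = []         # filled in by the Expansion step
            self.visits, self.total = 0, 0.0

    def joint_moves(s):
        return [(a0, a1) for a0 in legal(0, s) for a1 in legal(1, s)]

    def uct_score(parent, child, c=1.4):
        if child.visits == 0:
            return float("inf")        # always try unvisited children first
        return (child.total / child.visits
                + c * math.sqrt(math.log(parent.visits) / child.visits))

    def mcts(root_state, iterations=1000):
        root = Node(root_state)
        for _ in range(iterations):
            # 1. Selection: descend to a leaf via the UCT policy.
            node = root
            while node.children:
                node = max(node.children, key=lambda ch: uct_score(node, ch))
            # 2. Rollout: play random joint moves to a terminal state.
            s = node.state
            while not is_terminal(s):
                s = transition(s, *random.choice(joint_moves(s)))
            reward = utility(0, s)
            # 3. Update: refresh visit counts and totals up to the root.
            n = node
            while n is not None:
                n.visits, n.total = n.visits + 1, n.total + reward
                n = n.parent
            # 4. Expansion: lazily add this node's children to the tree.
            if not is_terminal(node.state):
                node.children = [Node(transition(node.state, a0, a1), node)
                                 for a0, a1 in joint_moves(node.state)]
        return root

After the loop, the most-visited child of the root is the usual choice of recommended move.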

General game playing and GDL

General Game Playing explores the problem of agents playing games without any foreknowledge of the game rules. The Game Description Language (GDL) was popularized by the AAAI General Game Playing competition (Genesereth, Love, and Pell 2005) (which provides a complete reference for it) as the format all submitted agents must consume to know what game they are playing. Since then, GDL has been used to develop game agents based upon minimax, alpha-beta pruning, and varieties of MCTS. de Jonge and Zhang (2016) explore the general premise of applying GDL as a foundation for developing general negotiating agents.

GDL is a language based on Datalog (Ceri, Gottlob, and Tanca 1989), a logic programming language. In particular, GDL describes a game in terms of a set of logical rules which must hold in every state of the game. Each of the rules defines logical inferences to be made about the game, framed as implications. A special predicate true is used to refer to facts about particular game states, so a rule like true(x) → legal(p, y) says "in states where x holds, it is legal for player p to take action y." Like Datalog (and Prolog before it), the rules are written with the consequent first, so in practice that rule would be written legal(p, y) :- true(x), or in the more common s-expression notation, (<= (legal p y) (true x)). The design of GDL allows agents to follow these logical inferences and derive all of the relevant information about any given game state, including how the actions performed in a given state influence the set of true facts in the next state.
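To illustrate how an agent mechanically follows these inferences, here is a toy evaluator of our own devising, written in Python for continuity with the earlier sketches. It encodes the example rule legal(p, y) :- true(x) as ground tuples and saturates the fact set by fixed-point iteration; real GDL additionally has variables and requires unification, which this omits.

    # Each rule is (consequent, antecedents), written consequent-first to
    # mirror "legal(p, y) :- true(x)". Everything here is ground: no variables.
    RULES = [
        (("legal", "p", "y"), [("true", "x")]),
    ]

    def derive(state_facts, rules):
        # Fixed-point iteration: apply every rule until no new fact appears.
        facts = {("true", f) for f in state_facts}
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                if head not in facts and all(b in facts for b in body):
                    facts.add(head)
                    changed = True
        return facts

    facts = derive({"x"}, RULES)
    assert ("legal", "p", "y") in facts   # y is legal for p whenever x holds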

Automated Negotiation

Automated negotiation is a broad subfield of AI, with many recent works centering on the Automated Negotiating Agents Competition (Fujita et al. 2016; Aydoğan et al. 2020). However, much of the work in this space either makes too many simplifying assumptions about the negotiation…
