Rock, Paper, StarCraft: Strategy Selection in Real-Time Strategy Games

Proceedings of the Twelfth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE-16)

Anderson Tavares, Hector Azpúrua, Amanda Santos, Luiz Chaimowicz
Laboratory of Multidisciplinary Research in Games, Computer Science Department, Universidade Federal de Minas Gerais
Email: {anderson, hector.azpurua, amandasantos, [email protected]}

Abstract

The correct choice of strategy is crucial for a successful real-time strategy (RTS) game player. Generally speaking, a strategy determines the sequence of actions the player will take in order to defeat his/her opponents. In this paper we present a systematic study of strategy selection in the popular RTS game StarCraft. We treat the choice of strategy as a game itself and test several strategy selection techniques, including Nash Equilibrium and safe opponent exploitation. We adopt a subset of AIIDE 2015 StarCraft AI tournament bots as the available strategies, and our results suggest that it is useful to deviate from Nash Equilibrium to exploit sub-optimal opponents in strategy selection, confirming insights from computer rock-paper-scissors tournaments.

1 Introduction

In games with large state spaces, players often resort to strategies, i.e., sequences of actions that guide their behavior, to help achieve their goals. For example, games like Chess, Go, and StarCraft have known opening libraries, which are strategies that help players reach favorable situations from initial game states. Strategies usually interact with each other: dedicated players study and develop new strategies that counter older ones, and these new strategies will in turn be studied and countered as well. Thus, the study and correct selection of strategies is crucial for a player to succeed in a game.

In real-time strategy games, several works deal with strategy construction, which involves developing a sequence of actions that would reach desired situations, as in (Stanescu, Barriga, and Buro 2014; Uriarte and Ontañón 2014), or strategy prediction, which involves comparing opponent behavior with known behaviors, as in (Weber and Mateas 2009; Synnaeve and Bessière 2011; Stanescu and Čertický 2016). Moreover, most state-of-the-art software-controlled StarCraft players (bots) employ simple mechanisms for strategy selection, ignoring the adversarial nature of this process, i.e., that the opponent is reasoning as well.

In the present work, we perform a systematic study of strategy selection in StarCraft. We model the process of strategy selection as a normal-form game. This adds a layer of abstraction upon StarCraft, which we call the strategy selection metagame. We fill the metagame's payoff matrix with a data-driven approach: strategies play a number of matches against each other and we register their relative performances. We then estimate a Nash Equilibrium in the metagame, which specifies a probability distribution over strategies that achieves a theoretically-guaranteed expected performance.

We observe that some strategies interact in a cyclical way in StarCraft, so we draw some insights from computer rock-paper-scissors tournaments (Billings 2001). In those, a player usually benefits by deviating from equilibrium to exploit sub-optimal opponents, but must guard itself against exploitation by resorting to equilibrium when its performance drops.

Experiments in this paper are performed with participant bots of the AIIDE 2015 StarCraft AI tournament. Each bot is seen as a strategy; thus, in our experiments, strategy selection becomes the choice of which behavior (specified by the chosen bot) the player will adopt to play a match. Results show that it is indeed useful to deviate from equilibrium to exploit sub-optimal strategy selection opponents, and that a player benefits from adopting safe exploitation techniques (McCracken and Bowling 2004) with guaranteed bounded losses.
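Before the formal treatment, a minimal sketch may help fix ideas. The class below illustrates the safe exploitation principle of McCracken and Bowling (2004) as we read it, not the exact procedures evaluated later in this paper: it best-responds to the opponent's observed strategy frequencies while a loss budget ε lasts, and reverts to the equilibrium mixture once the budget is spent. The class name, the frequency-count opponent model, and the budget-accounting scheme are our own illustrative assumptions.

```python
import random

class SafeExploiter:
    """Sketch of an epsilon-safe selector: best-respond to the opponent's
    observed strategy frequencies while a loss budget lasts, then fall
    back to the equilibrium mixture, keeping total expected losses
    (relative to the game value) bounded by roughly epsilon."""

    def __init__(self, payoff, equilibrium, game_value=0.0, epsilon=1.0):
        self.P = payoff                       # P[i][j]: our payoff playing i vs j
        self.sigma = equilibrium              # safe (equilibrium) mixture
        self.value = game_value               # expected value guaranteed by sigma
        self.budget = epsilon                 # how far below value we may risk going
        self.opp_counts = [1] * len(payoff)   # smoothed opponent frequency model

    def choose(self):
        if self.budget <= 0:                  # budget exhausted: play safe
            return random.choices(range(len(self.P)), weights=self.sigma)[0]
        total = sum(self.opp_counts)          # otherwise: exploit the model
        freqs = [c / total for c in self.opp_counts]
        expected = [sum(p * f for p, f in zip(row, freqs)) for row in self.P]
        return max(range(len(expected)), key=expected.__getitem__)

    def observe(self, opp_strategy, payoff_received):
        self.opp_counts[opp_strategy] += 1
        # Wins above the game value replenish the budget, losses consume it.
        self.budget += payoff_received - self.value

# Example: three cyclically-interacting strategies, uniform equilibrium,
# game value 0 (hypothetical numbers):
# agent = SafeExploiter([[0, 1, -1], [-1, 0, 1], [1, -1, 0]],
#                       equilibrium=[1/3, 1/3, 1/3], epsilon=2.0)
```

With ε = 0 this selector never leaves equilibrium, and as ε grows it approaches a pure best-responder; safe exploitation mediates between those two extremes.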
This paper is organized as follows: Section 2 reviews related work. Section 3 presents the strategy selection metagame, whereas Section 4 instantiates it upon StarCraft. Section 5 presents and discusses results of experiments consisting of a tournament between different strategy selection techniques. Section 6 presents concluding remarks and opportunities for future study.

2 Related Work

Works on strategic reasoning in real-time strategy games can be divided (non-exhaustively) into strategy prediction, strategy construction, and strategy selection itself.

Strategy prediction is concerned with recognizing a player's strategy, classifying it into a set of known strategies, or predicting a player's next moves, possibly from partial and noisy observations. This can be done by encoding replay data into feature vectors and applying classification algorithms (Weber and Mateas 2009), by using Bayesian reasoning (Synnaeve and Bessière 2011), or by using answer set programming, a logic paradigm able to deal with uncertainty and incomplete knowledge (Stanescu and Čertický 2016).

Strategy construction is concerned with building a sequence of actions from a given game state, which is related to search and planning. To deal with the huge search space of real-time strategy games, hierarchical or abstract representations of game states can be used in conjunction with adapted versions of classical search algorithms, as in (Stanescu, Barriga, and Buro 2014; Uriarte and Ontañón 2014). Also, portfolio-based techniques (a portfolio is a set of predefined strategies, or scripted behaviors) are used either as components for playout simulation (Churchill and Buro 2013) or to generate candidate actions for evaluation by higher-level game-tree search algorithms (Churchill and Buro 2015).
Strategy selection is concerned with the choice of a course of action to adopt from a set of predefined strategies. Marcolino et al. (2014) study this in the context of team formation. The authors demonstrate that, from a pool of stochastic strategies, teams composed of varied strategies can outperform a uniform team made of copies of the best strategy as the size of the action space increases. In the proposed approach, a team of strategies votes for moves in the game of Go and their suggestions are combined. Go allows this consulting stage due to its discrete and synchronous-time nature. As real-time strategy games are dynamic and continuous in time, such an approach would be difficult to evaluate.

Regarding strategy selection in real-time strategy games, Preuss et al. (2013) use fuzzy rules to determine strategies' usefulness according to the game state. The most useful strategy is activated and dictates the behavior of a StarCraft bot. A drawback of this approach is the need for expert knowledge to create and adjust the fuzzy rules.

A case-based strategy selection method is studied by Aha, Molineaux, and Ponsen (2005) in Wargus, an open-source Warcraft II clone. Their system learns which strategy to select according to the game state. However, the study assumes that the opponent makes choices according to a uniform distribution over strategies, which is unrealistic.

State-of-the-art StarCraft bots perform strategy selection, according to the survey of Ontañón et al. (2013): they choose a behavior according to past performance against their opponent. However, their mechanisms lack game-theoretic analysis or performance guarantees because they ignore the opponent's adaptation and strategic reasoning. The survey also notes, based on previous AI tournament analysis, that bots interact in a rock-paper-scissors fashion.

A game-theoretic approach to strategy selection is studied by Sailer, Buro, and Lanctot (2007), where players simulate strategy outcomes to fill a payoff matrix and play according to a calculated Nash Equilibrium in a simplified real-time strategy game focused on army deployment. Moreover, that game's forward model is available, so that simulation of players' decisions is possible. In the present work we consider StarCraft, a complete real-time strategy game with an unavailable forward model. Besides, in the present work we go beyond Nash Equilibrium by testing various strategy selection techniques, including safe opponent exploitation (see Section 5).

The major novelty of the present work is the focus on strategy selection instead of planning or prediction for real-time strategy games. Moreover, we analyse the interaction among sophisticated strategies (full game-playing agents) in a complex and increasingly popular AI benchmark, analysing the performance of various strategy selection techniques.

3 The Strategy Selection Metagame

We refer to the process of strategy selection as the strategy selection metagame because it adds an abstract layer of reasoning to a game. We refer to the game upon which the metagame is built as the underlying game. In principle, the strategy selection metagame can be built upon any game where it is possible to identify strategies.

In this paper, we define a strategy as a (stochastic) policy, i.e., a mapping from underlying game states to (a probability distribution over) actions. The policy specified by a strategy must be a total function, that is, any valid underlying game state must be mapped to a valid action. This is important to guarantee that a strategy plays the entire game, so that we can abstract away from the underlying game mechanics.

3.1 Definition

The strategy selection metagame can be seen as a normal-form game, formally represented by a payoff matrix P where component P_ij is the expected value obtained in the underlying game by adopting strategy i while the opponent adopts strategy j.

The payoff matrix P can be filled according to domain-specific knowledge: an expert in the underlying game would fill the matrix according to the strategies' relative performance. If domain-specific knowledge is not available, but the strategies are known, data-driven approaches can be employed to populate the matrix. For instance, match records could be used to identify strategies and register their relative performances. Another data-driven approach is to execute a number of matches to approximate the expected value of each strategy against each other.
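As an illustration of the second data-driven approach, the sketch below estimates P from pairwise match results and approximates the equilibrium mixture by fictitious play. The win counts are hypothetical (their cyclic structure mirrors the rock-paper-scissors interactions noted earlier), and this is our own minimal example, not the tooling used in this paper's experiments:

```python
import numpy as np

# Hypothetical win counts for three strategies with cyclic (rock-paper-
# scissors-like) interactions: wins[i][j] = matches strategy i won
# against strategy j, out of 100 matches per pairing.
wins = np.array([[50, 90, 20],
                 [10, 50, 80],
                 [80, 20, 50]])
P = 2 * wins / 100.0 - 1.0   # win rate -> expected payoff in [-1, 1]

def fictitious_play(P, iters=20000):
    """Approximate an equilibrium mixture of the zero-sum metagame:
    both players repeatedly best-respond to the opponent's empirical
    frequencies, which converge to a Nash Equilibrium."""
    n = P.shape[0]
    row_counts = np.zeros(n); row_counts[0] = 1.0
    col_counts = np.zeros(n); col_counts[0] = 1.0
    for _ in range(iters):
        # Row player (maximizer) best-responds to the column mixture.
        row_counts[np.argmax(P @ (col_counts / col_counts.sum()))] += 1
        # Column player (minimizer) best-responds to the row mixture.
        col_counts[np.argmin((row_counts / row_counts.sum()) @ P)] += 1
    return row_counts / row_counts.sum()

sigma = fictitious_play(P)
print("equilibrium mixture ~", np.round(sigma, 2))
print("worst-case expected payoff:", round(float((sigma @ P).min()), 3))
```

For these illustrative numbers the mixture approaches (0.3, 0.3, 0.4): weight shifts toward strategy 2 because its counter, strategy 1, is itself the most heavily countered strategy in the pool, and the worst-case expected payoff tends to the game value of 0.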
