Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015)

An Expert-Level Card Playing Agent Based on a Variant of Perfect Information Monte Carlo Sampling

Florian Wisser
Vienna University of Technology
Vienna, Austria
[email protected]

Abstract

Despite some success of Perfect Information Monte Carlo Sampling (PIMC) in imperfect information games in the past, it has been eclipsed by other approaches in recent years. Standard PIMC has well-known shortcomings in the accuracy of its decisions, but has the advantage of being simple, fast, robust and scalable, making it well-suited for imperfect information games with large state-spaces. We propose Presumed Value PIMC, resolving the problem of overestimation of the opponent's knowledge of hidden information in future game states. The resulting AI agent was tested against human experts in Schnapsen, a Central European 2-player trick-taking card game, and performs above human expert-level.

it would take at least 10 exabytes to store it. Using state-space abstraction [Johanson et al., 2013] EAAs may still be able to find good strategies for larger problems, but they depend on finding an appropriate simplification of manageable size. So, we think it is still a worthwhile task to search for in-time heuristics like PIMC that are able to tackle larger problems.

On the other hand, in the 2nd edition (and only there) of their textbook, Russell and Norvig [Russell and Norvig, 2003, p179] quite accurately use the term "averaging over clairvoyancy" for PIMC. A more formal critique of PIMC was given in a series of publications by Frank, Basin, et al. [Frank and Basin, 1998b; Frank et al., 1998; Frank and Basin, 1998a; Frank and Basin, 2001], where the authors show that the heuristic of PIMC suffers from strategy-fusion and non-locality, producing erroneous move selection due to an over-
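The critique above targets the core PIMC heuristic: sample hidden-information worlds consistent with the player's view, score each legal move with a perfect-information evaluator in every sampled world, and pick the move with the best average score. A minimal sketch may help fix ideas; the function names and the toy game below are hypothetical illustrations, not taken from the paper:

```python
import random

def pimc_choose_move(sample_world, legal_moves, perfect_info_value,
                     n_samples=200, rng=None):
    """Plain PIMC move selection ("averaging over clairvoyancy"):
    average perfect-information evaluations over sampled determinizations
    of the hidden information and return the best-scoring move."""
    rng = rng or random.Random(0)
    totals = {m: 0.0 for m in legal_moves}
    for _ in range(n_samples):
        world = sample_world(rng)  # one determinization: hidden cards fixed
        for m in legal_moves:
            # The evaluator sees the full deal -- this is exactly the
            # clairvoyancy the text criticizes.
            totals[m] += perfect_info_value(world, m)
    return max(totals, key=totals.get)

# Toy game (hypothetical): the opponent secretly holds card 0 or 1 with
# equal probability; "safe" always scores 1, "risky" scores 3 only when
# the opponent holds card 0.  PIMC averages "risky" to about 1.5 > 1.
def value(world, move):
    if move == "safe":
        return 1.0
    return 3.0 if world == 0 else 0.0

best = pimc_choose_move(lambda rng: rng.choice([0, 1]),
                        ["safe", "risky"], value)
```

Note that this evaluator assumes both players see all hidden cards in each sampled world, which is the source of the strategy-fusion and non-locality errors discussed in the cited critique.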