An Expert-Level Card Playing Agent Based on a Variant of Perfect Information Monte Carlo Sampling


Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015)

Florian Wisser
Vienna University of Technology, Vienna, Austria

Abstract

Despite some success of Perfect Information Monte Carlo Sampling (PIMC) in imperfect information games in the past, it has been eclipsed by other approaches in recent years. Standard PIMC has well-known shortcomings in the accuracy of its decisions, but it has the advantage of being simple, fast, robust and scalable, making it well-suited for imperfect information games with large state-spaces. We propose Presumed Value PIMC, which resolves the problem of overestimating the opponent's knowledge of hidden information in future game states. The resulting AI agent was tested against human experts in Schnapsen, a Central European 2-player trick-taking card game, and performs above human expert level.

1 Introduction

Perfect Information Monte Carlo Sampling (PIMC) in tree search for games of imperfect information has been around for many years. The approach is appealing for a number of reasons: it allows the use of well-known methods from perfect information games, its complexity is orders of magnitude lower than that of weakly solving a game in the sense of game theory, it can be used in a just-in-time manner even for games with large state-spaces, and it has proven to produce competitive AI agents in some games. Since we will mainly deal with trick-taking card games, let us mention Bridge [Ginsberg, 1999], [Ginsberg, 2001], Skat [Buro et al., 2009] and Schnapsen [Wisser, 2010].
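To make this baseline concrete, the following is a minimal sketch of straight PIMC in Python. The game interface used here (sample_world, legal_moves, value_in_world) is illustrative only and not taken from the paper; the point is the structure of the heuristic: sample full deals consistent with the current information set, evaluate every move in each deal as if the game had perfect information, and pick the move with the best average.

```python
def pimc_choose_move(info_set, num_worlds, value_in_world):
    """Straight PIMC: average perfect-information values over sampled worlds.

    info_set.legal_moves()       -- moves available at the current state (illustrative API)
    info_set.sample_world()      -- draws a full deal consistent with the information set
    value_in_world(world, move)  -- perfect-information (minimax) value of `move`
                                    once all hidden cards in `world` are revealed
    """
    totals = {move: 0.0 for move in info_set.legal_moves()}
    for _ in range(num_worlds):
        world = info_set.sample_world()
        for move in totals:
            totals[move] += value_in_world(world, move)
    # choose the move with the best average value over all sampled worlds
    return max(totals, key=totals.get)
```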
In recent years, research in AI for games of imperfect information has been heavily centered around equilibrium approximation algorithms (EAA). One of the reasons might be the concentration on simple poker variants, as in the renowned annual computer poker competition. Wherever they are applicable, agents like Cepheus [Bowling et al., 2015] for heads-up limit hold'em (HULHE) will not be beaten by any other agent, since they are nearly perfect. However, while HULHE has a small state-space (~10^14), solving this problem still required very substantial amounts of computing power to calculate a strategy stored in 12 terabytes. Our prime example, Schnapsen, has a state-space of around 10^20, so even if a near-equilibrium strategy were computable within reasonable time, it would take at least 10 exabytes to store it. Using state-space abstraction [Johanson et al., 2013], EAAs may still be able to find good strategies for larger problems, but they depend on finding an appropriate simplification of manageable size. So we think it is still a worthwhile task to search for in-time heuristics like PIMC that are able to tackle larger problems.

On the other hand, in the 2nd edition (and only there) of their textbook, Russell and Norvig [Russell and Norvig, 2003, p. 179] quite accurately use the term "averaging over clairvoyancy" for PIMC. A more formal critique of PIMC was given in a series of publications by Frank, Basin, et al. [Frank and Basin, 1998b], [Frank et al., 1998], [Frank and Basin, 1998a], [Frank and Basin, 2001], where the authors show that the PIMC heuristic suffers from strategy-fusion and non-locality, producing erroneous move selection due to an overestimation of MAX's knowledge of hidden information in future game states. A further investigation of PIMC, and why it still works well for many games, is given by Long et al. [Long et al., 2010], who propose three easily measurable properties of a game tree meant to predict the success of PIMC in a game. More recently, overestimation of MAX's knowledge has also been dealt with in the field of general game playing [Schofield et al., 2013]. To the best of our knowledge, all literature on the deficiencies of PIMC concentrates on the overestimation of MAX's knowledge. Frank et al. [Frank and Basin, 1998a] explicitly formalize the "best defense model", which basically assumes a clairvoyant opponent, and state that this would be the typical assumption in game analysis in expert texts. This may be true for some games, but clearly not for all.

Think, for example, of a game of heads-up no-limit hold'em poker played against an opponent with perfect information, who knows your hand as well as all community cards before they even appear on the table. The only reasonable strategy left against such an opponent would be to immediately concede the game, since one will not achieve much more than stealing a few blinds. And in fact, expert texts in poker never assume play against a clairvoyant opponent when analyzing the correctness of a player's actions.

In the following — and in contrast to the references above — we start off with an investigation of the problem of overestimation of MIN's knowledge, from which PIMC and its known variants suffer. We set this in context to the best defense model and show why its very assumption is doomed to produce sub-optimal play in many situations. For the purpose of demonstration we use the simplest possible synthetic games we could think of. We go on to define two new heuristic algorithms: first Presumed Payoff PIMC, targeting imperfect information games decided within a single hand (or leg), followed by Presumed Value PIMC for games of imperfect information played in a sequence of hands (or legs), both dealing with the problem of MIN overestimation. Finally, we perform an experimental analysis on a simple synthetic game, as well as on the Central European trick-taking card game Schnapsen.

[Figure 1: PIMC Tree for XX]
[Figure 2: PIMC Tree for XI]

2 Background Considerations

In a 2-player game of imperfect information there are generally 4 types of information: information publicly available (ιP), information private to MAX (ιX), information private to MIN (ιI) and information hidden to both (ιH). To exemplify the effect of "averaging over clairvoyancy" we introduce two very simple 2-player games: XX, with only two successive decisions by MAX, and XI, with two decisions, the first by MAX followed by one by MIN. The reader is free to omit the rules we give and view the game trees as abstract ones. Both games are played with a deck of four aces: ♠A, ♥A, ♦A and ♣A. The deck is shuffled and each player is dealt 1 card, with the remaining 2 cards lying face down on the table. ιP consists of the actions taken by the players, ιH are the 2 cards face down on the table, ιX is the card held by MAX and ιI the card held by MIN.

In XX, MAX first has to decide whether to fold or call. In case MAX calls, the second decision is to bet whether the card MIN holds matches the color of MAX's own card (match, both red or both black) or differs in color (diff). Fig. 1 shows the game tree of XX with payoffs, assuming […]

To the left of node C, the evaluation of straight PIMC (for better distinction abbreviated SP in the following) of this node is given, averaging over the payoffs in the different worlds after building the point-wise maximum of the payoff vectors in D and E. We see that SP is willing to turn down an ensured payoff of 1 by folding, in order to go for an expected payoff of 0 by calling and then placing either bet. The reason is the well-known overestimation of hidden knowledge: SP assumes to know ιI when deciding whether to bet on matching or differing colors in node C, and thereby evaluates C to an average payoff (ap) of 4/3. Frank et al. [Frank and Basin, 1998b] analyzed this behavior in detail and termed it strategy-fusion. We will call it MAX-strategy-fusion in the following, since it is strategy-fusion happening in MAX nodes.
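The numbers in this example are small enough to recompute directly. The sketch below, with the payoff vectors for the three worlds taken from the discussion of nodes D and E, reproduces the MAX-strategy-fusion effect: taking the point-wise maximum before averaging values node C at 4/3, so SP prefers calling over the ensured payoff of 1 for folding.

```python
# Payoff vectors over the three worlds MAX cannot distinguish at node C,
# as given in the text: D = bet "match", E = bet "diff"; folding ensures 1.
D = [1, 1, -2]
E = [-1, -1, 2]
FOLD = [1, 1, 1]

def sp_max_node(vectors):
    """Straight PIMC at a MAX node: point-wise maximum over the children,
    then the average over worlds (this is exactly MAX-strategy-fusion)."""
    pointwise_max = [max(values) for values in zip(*vectors)]
    return sum(pointwise_max) / len(pointwise_max)

print(sp_max_node([D, E]))    # 1.333... = 4/3, so SP calls ...
print(sum(FOLD) / len(FOLD))  # 1.0, ... although folding guarantees 1 in every world
```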
The basic solution given for this problem is vector minimax. In MAX node C, vector minimax picks the vector with the highest mean, instead of building a vector of point-wise maxima over the worlds; i.e., the vector attached to C would be either (1, 1, −2) or (−1, −1, 2), not (1, 1, 2), leading to the correct decision to fold. We list the average payoffs for various agents playing XX on the left-hand side of Table 1. Looking at the results, we see that a uniformly random agent (RAND) plays worse than a Nash equilibrium strategy (NASH), and SP plays even worse than RAND. VM stands for vector minimax, but includes all variants proposed by Frank et al., most notably payoff-reduction-minimax, vector-αβ and payoff-reduction-αβ. Any of these algorithms resolves the deficiency of MAX-strategy-fusion in this example and plays optimally.

Let us now turn to the game XI, whose game tree is given in Fig. 2. The two differences to XX are that MAX's payoff at node B is −1, not 1, and that C is a MIN (not a MAX) node.
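For comparison with the previous sketch, the following example implements only the node-local rule described above for MAX node C of XX, not the full recursive vector-minimax algorithm: keep the single child vector with the highest mean instead of mixing components from different children. This drops the value of calling to 0, so the decision at A correctly flips to folding.

```python
D = [1, 1, -2]   # bet "match" in the three worlds MAX cannot distinguish
E = [-1, -1, 2]  # bet "diff"

def vector_minimax_max_node(vectors):
    # keep the single child vector with the highest mean,
    # never mixing components from different children
    return max(vectors, key=lambda v: sum(v) / len(v))

call_vector = vector_minimax_max_node([D, E])
print(call_vector, sum(call_vector) / len(call_vector))  # [1, 1, -2] 0.0 (D and E are tied)
# calling is now worth 0 < 1, so MAX correctly folds at node A
```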