THE MARKOV CHAIN PERSPECTIVE OF ROULETTE

A Research Presented to Professor Cristina Anton

In partial fulfillment of the requirements for Stat 322: Finite Markov Chains and Applications

Presented by: Iogue Macaraig

December 5, 2017

INTRODUCTION

Roulette is one of the most famous games in a casino. The mechanics of the game are easy to understand, and it is considered one of the most approachable games for newcomers. The game is also attractive to visitors in general: the big roulette table, the long monitor displaying recent outcomes and the spinning wheel always catch the eyes of guests in any casino.

Many believe that the game of Roulette was created by Blaise Pascal, a French mathematician, in the 17th century, at the time when he was studying the concept of a perpetual motion machine. The game started to appear in casinos in France in 1796, but did not stay for long due to strict laws in the country. In 1842, Francois and Louis Blanc introduced the single zero roulette, now known as the European Roulette, in Hamburg, Germany. The game caught the attention of gamblers in Germany, spread quickly across different casinos and became one of the most played games in the house. Later on, Francois and Louis brought the game back to France and started the Casino Resort.

In the 19th century, European settlers in Louisiana introduced Roulette to the casinos in the United States. However, it did not succeed at first because casino owners were dissatisfied with the size of the advantage the game gave the house. The casino owners therefore modified the game to increase the favor for the house. This was the birth of the double zero roulette, now known as the American Roulette.

This study focuses only on the American Roulette, where the casino has an advantage of 5.26% (see Appendix A). It attempts to establish a different perspective on playing the game, different from how the casinos designed the game to be played.

THE WHEEL, THE TABLE AND THE GAME

The wheel for the American Roulette and the betting table are as shown above. In a round, the dealer spins a ball on the wheel, in the direction opposite to the rotation of the wheel. The decisions on the betting table are then based on where the ball lands on the wheel. In other words, a player wins if he or she has a bet on the winning number, located on the inside section of the table (referred to as the inside game), or on the correct classification of the winning number, located on the outside section of the table (referred to as the outside game). It is clearly more difficult to win if a player bets on the inside game, and so the reward in the inside game is higher. A player who wins in the inside game gains 35 times his or her bet. A bet on a single number in the inside game has a winning probability of 1/38, or about 0.03.

The chances of winning in the outside game are higher than in the inside game. As a consequence, the winning multiplier for the outside game is lower. A winning bet on the thirds (1 to 12, 13 to 24, and 25 to 36) or the columns (1st column, 2nd column, and 3rd column) gains 2 times the bet; such a bet has a winning probability of 12/38, or about 0.32. A winning bet on the even money games (even or odd, red or black, and 1 to 18 or 19 to 36) gains 1 times the bet; an even money bet has a winning probability of 18/38, or about 0.47.
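
These probabilities and multipliers follow directly from the 38 pockets of the American wheel (numbers 1 to 36, plus 0 and 00), as the short R check below shows; the variable names are illustrative only.

# Winning probabilities on the American wheel (38 equally likely pockets)
p_single <- 1 / 38    # single number on the inside game, pays 35 to 1
p_third  <- 12 / 38   # thirds and columns, pay 2 to 1
p_even   <- 18 / 38   # even money bets, pay 1 to 1
round(c(single = p_single, third = p_third, even_money = p_even), 2)
#   single      third even_money
#     0.03       0.32       0.47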

There are a few other ways to bet on the inside game besides those discussed above, but they will not be explained in this study. The focus of this study is to analyze the game in terms of the even money bets. For simplicity, the outside game will be described only in terms of red and black.

A DIFFERENT PERSPECTIVE IN PLAYING THE OUTSIDE GAME

The casinos design the game for a player to sit down and play roulette in consecutive rounds. Played this way, guests usually leave empty-handed, since few have the self-control to leave while they are winning, and most feel the urge to win back their money when they are losing. With this behavior, a person will keep on betting until he or she loses all of his or her chips or reaches the break-even point. Either way, the house does not lose. The computation of the expected amount a player wins when betting a dollar on the outside game in every round can be found in Appendix A.

Now, what if a player sits down and just watches the game? Instead of betting on every round, the player times his or her bets based on the results of previous spins. Will it make a difference? This is the gaming perspective that inspired this study.

THE MARKOV CHAIN REPRESENTATION

Now, how can the outside game be represented by a Markov chain? We define the states of the Markov chain as the number of consecutive spins showing the same color. With this, 0 and 00 are treated as taking the color of the previous spin: if the spin before the 0 or 00 was black, the 0 or 00 is counted as black, and the same idea applies if the previous spin was red. In this way, a change in color is always represented by state 1. So if the outcome of two consecutive spins is RED BLACK, the first and second spins are both represented by state 1. If the outcome of three consecutive spins is BLACK BLACK RED, the second spin is represented by state 2 and the first and third spins are both represented by state 1, and so on. In this study, if the number of consecutive colors reaches 10 or more, the outcome is absorbed in state 10; that is, state 10 is reached whenever the number of consecutive colors is greater than or equal to 10. The detailed definitions of the states of the Markov chain are shown below.

S1 = outcome is the 1st consecutive color (a change in color occurred)
S2 = outcome is the 2nd consecutive color
S3 = outcome is the 3rd consecutive color
S4 = outcome is the 4th consecutive color
S5 = outcome is the 5th consecutive color
S6 = outcome is the 6th consecutive color
S7 = outcome is the 7th consecutive color
S8 = outcome is the 8th consecutive color
S9 = outcome is the 9th consecutive color
S10 = outcome is the 10th or higher consecutive color
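
To make the state definition concrete, the base R sketch below maps a sequence of spin colors to the corresponding state sequence. The function name and the "G" code for 0 and 00 are illustrative choices (not taken from the original analysis); greens simply inherit the previous color, as described above.

# Map a vector of spin colors ("R", "B", "G" for 0/00) to states 1..10.
# A green (0 or 00) inherits the color of the previous spin.
spins_to_states <- function(colors) {
  states <- integer(length(colors))
  run <- 1
  prev <- NA
  for (i in seq_along(colors)) {
    col <- colors[i]
    if (col == "G" && !is.na(prev)) col <- prev   # 0/00 keeps the previous color
    if (!is.na(prev) && col == prev) {
      run <- min(run + 1, 10)                     # cap the run length at state 10
    } else {
      run <- 1                                    # a change in color -> state 1
    }
    states[i] <- run
    prev <- col
  }
  states
}

spins_to_states(c("R", "B", "B", "R"))   # 1 1 2 1, matching the examples above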

It can easily be noticed that the only way to reach a higher state is from the state immediately below it. For instance, to reach the 5th consecutive color, the previous outcome must be a 4th consecutive color; that is, S5 can only be reached from S4. Also, since S1 is the state where a change of color occurs, S1 can be reached from any state. To give a better view of this perspective of the game and the communication between states, the diagram of the Markov chain representation is shown below.

With this new perspective of the game, the next concern is the probability of reaching a certain state from an initial state. The probability transition matrix is given below. It provides the probability of reaching a state of interest from an initial state in one step; that is, it gives the probabilities of the possible occurrences in the next spin.

P =
        s1    s2    s3    s4    s5    s6    s7    s8    s9    s10
  s1   0.47  0.53  0     0     0     0     0     0     0     0
  s2   0.47  0     0.53  0     0     0     0     0     0     0
  s3   0.47  0     0     0.53  0     0     0     0     0     0
  s4   0.47  0     0     0     0.53  0     0     0     0     0
  s5   0.47  0     0     0     0     0.53  0     0     0     0
  s6   0.47  0     0     0     0     0     0.53  0     0     0
  s7   0.47  0     0     0     0     0     0     0.53  0     0
  s8   0.47  0     0     0     0     0     0     0     0.53  0
  s9   0.47  0     0     0     0     0     0     0     0     0.53
  s10  0.47  0     0     0     0     0     0     0     0     0.53

From the matrix above, we can see that the probability values are the same as in a single-round play. This is the Markov chain representation (in terms of the 10 states) of betting in every round, which is how the casino houses want their guests to play the game: 0.47 is the probability that the player wins and 0.53 is the probability that the player loses in a single round.
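
The transition matrix can be constructed in a few lines of base R. The sketch below uses the exact probabilities 18/38 (the color changes, returning the chain to S1) and 20/38 (the run continues, counting 0 and 00 as the previous color); the variable names are illustrative.

states <- paste0("s", 1:10)
p_change   <- 18 / 38   # next spin is the other color: back to s1
p_continue <- 20 / 38   # same color or 0/00: the run continues

P <- matrix(0, nrow = 10, ncol = 10, dimnames = list(states, states))
P[, "s1"] <- p_change                      # a change of color from any state leads to s1
for (k in 1:9) P[k, k + 1] <- p_continue   # s1 -> s2, ..., s9 -> s10
P["s10", "s10"] <- p_continue              # runs of 10 or more stay absorbed in s10

round(P, 2)                                # matches the matrix shown above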

THE MARKOV CHAIN

Before the game is analyzed further, we first discuss the Markov chain itself and its properties. From the diagram, it is evident that all the states communicate, so there is only one class (and it is a closed class). Hence, all states are recurrent. The chain has period one and is therefore a regular Markov chain. The output from R that shows the summary of the chain is shown below.
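
The original R code is not shown in this report; the sketch below, which assumes the markovchain package and the matrix P constructed above, is one way such a summary can be produced.

library(markovchain)

mc <- new("markovchain", states = states, transitionMatrix = P,
          name = "Roulette color runs")

summary(mc)   # one closed (recurrent) class containing all ten states, no transient states
period(mc)    # 1, confirming the chain is aperiodic and hence regular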

Also, since the probability transition matrix is identical for any 1-step transition in the Markov chain, the chain is homogeneous. The fundamental matrix of the Markov chain has no significance of its own in this analysis and is only used in the calculation of the other matrices. It will therefore not be discussed here, but it can be seen in Appendix B.

The stationary distribution (vector) of the chain, found using R, is given below.

π' = (0.47, 0.25, 0.13, 0.07, 0.04, 0.02, 0.01, 0.01, 0.00, 0.00)
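
One way to obtain this vector in base R (with P as constructed above) is to take the left eigenvector of P associated with the eigenvalue 1 and normalize it; steadyStates(mc) from the markovchain package would give the same result. This is only a sketch of the calculation.

# Left eigenvector of P for eigenvalue 1, normalized to sum to 1
eig <- eigen(t(P))
pi_hat <- Re(eig$vectors[, 1]) / sum(Re(eig$vectors[, 1]))
round(pi_hat, 2)   # 0.47 0.25 0.13 0.07 0.04 0.02 0.01 0.01 0.00 0.00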

Theoretically, since this is a regular Markov chain, raising the probability transition matrix to a power n yields, as n increases, a matrix in which every row is identical to the stationary distribution vector (the stationary distribution matrix). The interesting characteristic of this chain, however, is the speed of convergence. Even though the probability transition matrix is ten by ten, it converges (to two decimal places) to the stationary distribution matrix at n = 9. The convergence of the probability transition matrix of this Markov chain is fast. The output from R is shown below.

P^9 =
        s1    s2    s3    s4    s5    s6    s7    s8    s9    s10
  s1   0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0
  s2   0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0
  s3   0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0
  s4   0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0
  s5   0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0
  s6   0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0
  s7   0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0
  s8   0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0
  s9   0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0
  s10  0.47  0.25  0.13  0.07  0.04  0.02  0.01  0.01  0     0

The probability values in the columns for S9 and S10 are not actually zero, but since they are extremely small, they appear as zero after rounding.
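
The speed of convergence can be checked by direct matrix multiplication. The base R sketch below (with P from the construction sketch above) raises P to the 9th power; the helper mat_pow is defined here purely for illustration.

# Raise P to the 9th power by repeated multiplication
mat_pow <- function(M, n) Reduce(`%*%`, replicate(n, M, simplify = FALSE))
round(mat_pow(P, 9), 2)   # every row agrees (to two decimals) with the stationary vector pi' above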

THE MARKOV CHAIN PERSPECTIVE OF ROULETTE

The idea in this gaming perspective of Roulette is to just watch the spins and wait for the right time to bet. The dilemma in this gaming perspective, however, is: when is the right time to bet? With the knowledge of the characteristics of the Markov chain representation of the Roulette, we look into the probabilities and means of the first passage times to analyze the game. All the derivations were done using R. The mean first passage time matrix and the probabilities of first passage within 3 steps from S4 and S5 are given below.

M =
         s1    s2    s3     s4     s5     s6     s7     s8      s9      s10
  s1    2.13  1.89  5.45  12.16  24.84  48.75  93.87  178.99  339.61  642.66
  s2    2.13  4.01  3.56  10.28  22.95  46.86  91.98  177.11  337.73  640.78
  s3    2.13  4.01  7.57   6.72  19.39  43.30  88.42  173.55  334.17  637.22
  s4    2.13  4.01  7.57  14.29  12.67  36.59  81.70  166.83  327.45  630.50
  s5    2.13  4.01  7.57  14.29  26.96  23.91  69.02  154.16  314.77  617.83
  s6    2.13  4.01  7.57  14.29  26.96  50.88  45.12  130.25  290.86  593.91
  s7    2.13  4.01  7.57  14.29  26.96  50.88  95.99   85.19  245.74  548.80
  s8    2.13  4.01  7.57  14.29  26.96  50.88  95.99  181.12  160.62  463.67
  s9    2.13  4.01  7.57  14.29  26.96  50.88  95.99  181.12  341.74  303.05
  s10   2.13  4.01  7.57  14.29  26.96  50.88  95.99  181.12  341.74  303.05

Probabilities to reach other states from S4 and S5 in at most 3 steps
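
The mean first passage time matrix can be reproduced with the classical formula based on the fundamental matrix Z = (I − P + W)^(-1) of Appendix B, where W is the matrix whose rows all equal the stationary vector; the first passage probabilities can be obtained with firstPassage() if the markovchain package is used. The sketch below continues the R session above (P, pi_hat and mc as defined earlier); small differences from the table above may appear depending on whether the rounded (0.47/0.53) or exact (18/38, 20/38) probabilities are used.

# Mean first passage times: M[i, i] = 1 / pi_i (mean recurrence time),
# M[i, j] = (Z[j, j] - Z[i, j]) / pi_j for i != j (Kemeny-Snell formula).
W <- matrix(pi_hat, nrow = 10, ncol = 10, byrow = TRUE)
Z <- solve(diag(10) - P + W)          # fundamental matrix (Appendix B)
M <- matrix(0, 10, 10, dimnames = dimnames(P))
for (i in 1:10) for (j in 1:10) {
  M[i, j] <- if (i == j) 1 / pi_hat[i] else (Z[j, j] - Z[i, j]) / pi_hat[j]
}
round(M, 2)

# First passage probabilities in exactly 1, 2, 3 steps (markovchain package);
# column sums give the probability of reaching each state within 3 steps.
colSums(firstPassage(mc, state = "s4", n = 3))
colSums(firstPassage(mc, state = "s5", n = 3))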

Looking at the probabilities from S4 and S5, the probabilities of reaching the next 3 higher states or going back to the lower states are all equal. However, if we look into the matrix M, some interesting ideas can be pulled out. Starting from any state, the mean number of steps to reach the next higher state is always less than the sum of the mean numbers of steps for the lower states. Then, looking at the probabilities of the first passage times, the probability mass is concentrated in the lower states. This means that, with respect to any initial state, the chain is more likely to stay in the lower states most of the time. To stay in the lower states, the chain first has to go back to S1. So the idea here is to time the bet on the moment the chain returns to S1. To allow for some discrepancy, it is assumed for now that this observation, if it is true at all, holds only up to state 5. To examine this, a simulation of 1000 steps is generated using R. The results are shown below, and the sequence of states from the simulation can be found in Appendix C.
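
A simulated state sequence of this kind can be generated with a short base R script. The sketch below, with P as constructed above, samples each state from the row of the current state; the seed and object names are illustrative and not those used to produce Appendix C.

set.seed(322)                         # illustrative seed only
n_steps <- 1000
path <- integer(n_steps)
path[1] <- 1                          # start at s1, i.e. right after a change of color
for (t in 2:n_steps) {
  path[t] <- sample(1:10, size = 1, prob = P[path[t - 1], ])
}
table(path)                           # visit counts; proportions approach the stationary vector
diff(which(path >= 6))                # gaps (in spins) between occurrences of the higher states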

From this simulation, we keep an eye on the time intervals while analyzing the results. It is noticeable that, as the simulation progresses, the time interval between occurrences of the higher states increases, while the time interval between the lower states is invariant of the duration of the simulation (in this case, the length of the game). Based on the result of the simulation, there is evidence that, more likely than not, a significant number of steps (a long time interval) passes between two occurrences of higher states, say S6 and S7. In playing roulette, it takes a while for a player to gather a large number of spins; however, once a higher state occurs, one can expect that it will take a while before another higher state occurs. And so, the idea is to time the bet on S1. It is also difficult to pinpoint the exact extremity of the states. Therefore, one way to work with this is the following: once a higher state is starting to occur (which can be told from the state the chain is currently in) and another higher state has happened not long ago, a player can split his or her betting into exponentially increasing bets in consecutive rounds, stop once he or she wins, and then wait again for the next opportunity.
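
The exact betting progression is not specified above; one natural reading of "exponentially increasing" bets is to double the bet after each loss and stop at the first win. The small base R illustration below works through the arithmetic of that assumed progression: whenever the win arrives within the sequence the net gain is one betting unit, while the whole stake is lost if every bet in the sequence loses, which is why the approach still demands patience and a sufficient bankroll.

# Doubling progression (assumed interpretation, not prescribed by the analysis)
bets <- 2^(0:4)                                            # 1, 2, 4, 8, 16 units
net_if_win_at <- sapply(seq_along(bets),
                        function(k) bets[k] - sum(bets[seq_len(k - 1)]))
net_if_win_at   # 1 1 1 1 1: a net gain of one unit whenever the win comes
sum(bets)       # 31 units at risk if all five bets lose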

CONCLUSION

Playing Roulette by applying the results from the Markov chain representation is better than betting in every round. However, this perspective is not foolproof. The randomness of the wheel is not eliminated by the Markov chain representation; it just allows a player to gauge a time interval in which betting on a color change is a favourable choice.

The gaming perspective from the Markov chain representation gives a player more confidence to win in the end, with fewer attempts. A player who wants to try this gaming perspective should have the utmost patience and contentment. Though it is better than betting every round, it will only allow a player to win a little, since waiting is a big part of it. This study, however, has a flaw: the simulation of the Markov chain was done using R. The results of this analysis would have been stronger and more convincing if the simulation were done in the actual game.

Appendix A

Expectation of the winnings in the Roulette

• Inside Game

E($) = $35 × (1/38) − $1 × (37/38) = -$0.0526

• Double your Money Game

E($) = $2 × (12/38) − $1 × (26/38) = -$0.0526

• Even Money Game

E($) = $1 × (18/38) − $1 × (20/38) = -$0.0526
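
The three expectations above can be verified with a one-line base R computation each:

# Expected winnings per $1 bet on the American wheel (38 pockets)
inside     <- 35 * (1 / 38)  - 1 * (37 / 38)
thirds     <-  2 * (12 / 38) - 1 * (26 / 38)
even_money <-  1 * (18 / 38) - 1 * (20 / 38)
round(c(inside, thirds, even_money), 4)   # -0.0526 for all three bets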

Appendix B

The Fundamental Matrix of the Markov Chain from R output

Appendix C

The result of the Simulation (n=1000)