Part VIII Markov Chains

Chapter 8. RANDOM WALKS – MARKOV CHAINS/MODELS

Random walks on graphs are a simple, interesting and fundamental tool with surprisingly many applications in informatics as well as in mathematics and the natural sciences. The design and analysis of randomized algorithms is one of them. The concept of a random walk is closely related to that of a Markov chain/model – one of the key concepts of discrete stochastic processes.

prof. Jozef Gruska, IV054, Chapter 8: Markov chains – random walks

APPLICATIONS

In the physical sciences, Markov chains provide a fundamental model for the emergence of global properties from local interactions.

In informatics, random walks provide a general paradigm for the random exploration of exponentially large combinatorial structures (for example, graphs) by a sequence of simple and local transitions.

HIDDEN MARKOV MODELS

A Hidden Markov Model (HMM) is a Markov model, with random transitions among states, extended by random outputs of the states. An HMM works in such a way that an observer can see only a sequence of the states' outputs and not the internal structure (states, transition and emission probabilities) of the underlying Markov model.

Hidden Markov Models have many applications, especially in artificial intelligence. For example, almost all fast speech and pattern recognition systems use HMMs.

DEVELOPMENT of the MARKOV CHAIN CONCEPT

The concept of a Markov chain was introduced by Andrei Andreevich Markov in 1906, in order to consider sequences of random experiments in which the result of any experiment depends also on the result of the previous experiment. Before that, only such sequences of random experiments were considered in which the result of each experiment was fully independent of all previous experiments.
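The observer's view of an HMM described above can be illustrated with a small sampler. This is a minimal sketch, not any standard library API; the dictionary encoding of transition and emission probabilities is an assumption made purely for illustration:

```python
import random

def sample_hmm(trans, emit, start, steps, rng=random):
    """Sample the visible output sequence of a (hypothetical) HMM.

    `trans[s]` maps state s to a dict of next-state probabilities,
    `emit[s]` maps state s to a dict of output probabilities.
    Only the outputs are returned, as an outside observer would see
    them; the state sequence itself stays hidden."""
    def draw(dist):
        # draw one key of `dist` according to its probabilities
        r, acc = rng.random(), 0.0
        for x, p in dist.items():
            acc += p
            if r < acc:
                return x
        return x  # guard against floating-point round-off
    state, outputs = start, []
    for _ in range(steps):
        outputs.append(draw(emit[state]))   # emit a visible output
        state = draw(trans[state])          # hidden state transition
    return outputs
```

With deterministic transition and emission tables the hidden two-state cycle produces the output pattern x, y, x, y, ... even though the observer never sees the states themselves.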
UNIVERSALITY of QUANTUM RANDOM WALKS

It can also be shown that any quantum evolution can be seen as a so-called continuous quantum walk.

BASIC CONCEPTS I

Notation: Let G = (V, E) be a connected, undirected graph with n nodes (in V) and m edges (in E). For a node v ∈ V, let Γ_G(v) denote the set of neighbors of v in G.

A random walk on G is the following sequence of moves of a process that starts in some node v0: a neighbor v1 of v0 is chosen, randomly and independently, and the process moves (walks) from v0 to v1; afterwards a neighbor v2 of v1 is chosen randomly and independently and the process walks from v1 to v2; the process continues to walk this way forever.

Typical problems to explore for a given graph G:

  • What is the expected number of steps to get from a given node u to a given node v?
  • What is the expected number of steps needed to visit all nodes of G when starting in a given node u?

DRUNKARD's WALK

Let a drunken seaman walk on a linear graph, infinite in both directions, each time choosing a step to the right or to the left with the same probability. What are the probabilities for such a drunken man to be at a particular position after some number of steps, in case he starts at some other position?

EXAMPLE

Let G = Kn be the complete graph on n nodes and let u ≠ v be any two vertices of G. What is the expected number of steps of a random walk that begins in u and ends when first reaching v?

[Figures: a simple hypercube and a 6-dimensional hypercube]
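The drunkard's-walk question above – the probability of being at a particular position after a given number of steps – can be answered exactly by propagating the position distribution one step at a time. A minimal sketch (the function name and the dict representation are illustrative choices, not from the source):

```python
from fractions import Fraction

def drunkard_distribution(steps):
    """Exact position distribution of a symmetric random walk on the
    integer line after `steps` steps, starting at position 0.
    Returns a dict mapping position -> probability (as a Fraction)."""
    dist = {0: Fraction(1)}
    for _ in range(steps):
        nxt = {}
        for pos, p in dist.items():
            # step left or right with probability 1/2 each
            nxt[pos - 1] = nxt.get(pos - 1, Fraction(0)) + p / 2
            nxt[pos + 1] = nxt.get(pos + 1, Fraction(0)) + p / 2
        dist = nxt
    return dist
```

For example, after two steps the walker is back at the origin with probability 1/2 and at positions −2 and +2 with probability 1/4 each; a start at a different position just shifts the whole distribution.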
EXAMPLE – finished

Let G = Kn be the complete graph on n nodes. The expected number p of steps of a random walk in Kn that begins in a fixed node u and ends when first reaching a fixed node v satisfies

    p = 1 · 1/(n − 1) + (1 + p) · (n − 2)/(n − 1),

and therefore p = n − 1.

Indeed, with probability 1/(n − 1) the walk moves directly to v in one step, and with probability (n − 2)/(n − 1) it moves to one of the other nodes, from which the expected number of remaining steps is again p.

The expected number of steps to visit all nodes of Kn, starting from any node u, is (n − 1)H_{n−1}, where H_k = 1 + 1/2 + ... + 1/k is the k-th Harmonic number.

A RELATED PROBLEM – COUPON COLLECTION

Coupon collector problem: there are n types of coupons, and at each trial a coupon is chosen randomly (and then returned). It can be shown that the expected number of trials needed to obtain a coupon of each type is nH_n.

EXAMPLE

Let us consider the graph K3 with initial probabilities p0, p1, p2 of its three nodes A0, A1, A2, and let 1/2 be the probability of transmission through any edge.

If p is the initial probability of a node X, then the probability of being in the same node after one step is

    P_1 = (1 − p) · 1/2 = 1/2 − p/2,

after two steps it is

    P_2 = (1 − P_1) · 1/2 = 1/2 − 1/4 + p/4,

and after i steps the probability is

    P_i = Σ_{j=1}^{i} (−1)^{j+1}/2^j + (−1)^i · p/2^i.

Therefore

    lim_{i→∞} P_i = 1/3.

EXAMPLE – 2-SATISFIABILITY

A simple polynomial-time (Monte Carlo) algorithm will be given to solve the 2-SAT problem (for a formula in conjunctive normal form in which each clause has two literals) by finding a single satisfying assignment.

Algorithm:
  1. Start with an arbitrary assignment.
  2. While there is an unsatisfied clause C, choose randomly one of the two literals of C and complement its value in the current assignment.

Theorem. The expected number of steps of the above algorithm before finding a satisfying assignment is O(n^2), where n is the number of variables.
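The 2-SAT algorithm above can be sketched as follows. The encoding of a literal as a signed integer (i for variable x_i, −i for its negation) and of a clause as a pair of literals is an assumption made for illustration; by the theorem, a step budget of order n^2 suffices in expectation:

```python
import random

def random_walk_2sat(n_vars, clauses, max_steps, rng=random):
    """Monte Carlo 2-SAT solver sketched above: start from an arbitrary
    assignment and, while some clause is unsatisfied, complement the
    value of a randomly chosen literal of one such clause.

    A literal is an int i in {1..n_vars} or -i for its negation; a
    clause is a pair of literals.  Returns a satisfying assignment
    (dict var -> bool) or None if max_steps is exhausted."""
    assign = {v: False for v in range(1, n_vars + 1)}  # arbitrary start

    def sat(lit):
        # a positive literal is satisfied iff its variable is True,
        # a negative literal iff its variable is False
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_steps):
        unsat = [c for c in clauses if not (sat(c[0]) or sat(c[1]))]
        if not unsat:
            return assign                     # all clauses satisfied
        lit = rng.choice(rng.choice(unsat))   # random literal of a random unsatisfied clause
        assign[abs(lit)] = not assign[abs(lit)]
    return None
```

Since this is a Monte Carlo procedure, a run may fail within the step budget even on a satisfiable formula; in practice a budget of a few times n^2 steps makes failure very unlikely.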
RELATION TO A RANDOM WALK ON THE LINE

Let A be a particular satisfying assignment. The progress of the above algorithm can be represented by a particle moving between the integers {0, 1, ..., n} on the real line. The position of the particle always indicates how many variables in the current assignment have the correct value (the same value as in A).

Crucial observation: in an unsatisfied clause, at least one of the two literals has an incorrect value. Therefore, at each step of the algorithm, with probability at least 1/2 we increase by one the number of variables having their correct value, and with probability at most 1/2 that number is decreased by one. The motion of the particle therefore resembles a random walk on the line (that is, on the linear graph).

STOCHASTIC PROCESSES in RANDOMIZED ALGORITHMS

A stochastic process is a sequence of random variables {X_t}_{t≥1}, where we think of the value of X_t as the value (state) of the process at time t.

Two types of stochastic processes come up over and over again in the analysis of randomized algorithms:

  1. Martingales, where the value of each next variable may depend, even in a complicated way, on the past history, but the expected change is 0.
  2. Markov chains, where the next state always depends only on the current state and not on the way it was reached – not on the past history.

The most useful algorithmic property of Markov chains, to be explored in what follows, is their convergence to a fixed (probability) distribution on states.
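The convergence just mentioned can be observed numerically for the random walk on K3 from the earlier example: repeatedly applying the transition matrix to any initial distribution drives it toward the uniform distribution (1/3, 1/3, 1/3). A minimal sketch:

```python
def step_distribution(dist, P):
    """One step of a Markov chain on the distribution level:
    new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Transition matrix of the random walk on K3: from each node,
# move to either neighbour with probability 1/2.
P = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]

dist = [1.0, 0.0, 0.0]   # start in node A0 with certainty
for _ in range(50):
    dist = step_distribution(dist, P)
# dist is now very close to the uniform distribution (1/3, 1/3, 1/3)
```

This matches the closed form P_i = Σ_{j=1}^{i} (−1)^{j+1}/2^j + (−1)^i p/2^i: the deviation from 1/3 is halved (with alternating sign) at every step.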
MARKOV CHAINS – 1st DEFINITION

A Markov chain is a discrete-time stochastic process defined over a set of states S in terms of a matrix P of transition probabilities:

    P(i, j) = p_ij = the probability that the next state will be j if the current state is i.

Probability conditions: for all i and j it has to hold that

    0 ≤ p_ij ≤ 1   and   Σ_j p_ij = 1.

Denote by X_t the state of the Markov chain at time t. The stochastic process {X_t}_{t=0}^∞, specifying the history of the evolution of the Markov chain, has the following memoryless property: the future behavior of a Markov chain depends only on its current state, and not on how the chain arrived at the current state. That is, for any t ≥ 1:

    Pr[X_{t+1} = j | X_0 = i_0, X_1 = i_1, ..., X_t = i] = Pr[X_{t+1} = j | X_t = i] = p_ij.

NOTE

Markov chains do not need to have prespecified initial states. In general, the initial state X_0 is chosen according to some probability distribution over S. Such a distribution is called the initial (probability) distribution.

MARKOV CHAIN – 2nd DEFINITION

A Markov chain is a discrete-time stochastic process {X_k}_{k≥0} of random variables with values in a countable set I such that for every k ≥ 1 and all i_0, i_1, ..., i_k from I we have

    Pr[X_k = i_k | X_{k−1} = i_{k−1}, ..., X_0 = i_0] = Pr[X_k = i_k | X_{k−1} = i_{k−1}] = p_{i_{k−1} i_k}.

The matrix P(i, j) = p_ij is called the transition matrix of one-step transition probabilities.
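The probability conditions above can be checked mechanically for any concrete transition matrix. A small illustrative helper (not a standard API; the tolerance parameter is an assumption to absorb floating-point round-off):

```python
def is_stochastic(P, tol=1e-12):
    """Check the probability conditions of a transition matrix given as
    a list of rows: every entry p_ij must lie in [0, 1] and every row
    must sum to 1 (up to a small floating-point tolerance)."""
    return all(
        all(0.0 <= p <= 1.0 for p in row) and abs(sum(row) - 1.0) <= tol
        for row in P
    )
```

For instance, [[0.5, 0.5], [1.0, 0.0]] is a valid transition matrix, while a matrix with a row summing to 1.2 or with a negative entry is not.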
