Markov Chains
J. Alfredo Blakeley-Ruiz

The Markov Chain
• Concept developed by Andrey Andreyevich Markov
• A set of states that can be inhabited at a moment in time
• These states are connected by transitions
• Each transition represents the probability of moving from one state to another in discrete time
• The transition to the next state depends only on the current state, not on any states visited in the past
  – The Markov property

Explained Mathematically
Given a series of states X1, X2, …, Xn, the Markov property holds as long as
  P(Xn+1 = x | X1, X2, …, Xn) = P(Xn+1 = x | Xn)

Markov chains can be represented as a weighted digraph
http://www.mathcs.emory.edu/~cheung/Courses/558/Syllabus/00/queueing/discrete-Markov.html

Markov chains can also be represented by a transition matrix

        1    2    3    4
   1   0.6  0.2  0.0  0.2
   2   0.6  0.4  0.0  0.0
   3   0.0  0.3  0.4  0.3
   4   0.3  0.0  0.0  0.7

(each row holds the outgoing transition probabilities of one state and sums to 1)
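The slides do not include code, but the transition matrix above is enough to drive a simulation. Below is a minimal NumPy sketch; the function name `simulate` and the fixed seed are my own choices, not from the slides.

```python
import numpy as np

# Transition matrix from the slide: P[i, j] is the probability of moving
# from state i+1 to state j+1 in one discrete time step (rows sum to 1).
P = np.array([
    [0.6, 0.2, 0.0, 0.2],
    [0.6, 0.4, 0.0, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.3, 0.0, 0.0, 0.7],
])

rng = np.random.default_rng(0)  # seeded so the sketch is reproducible

def simulate(P, start, steps):
    """Sample a trajectory of 0-indexed states from the chain."""
    state = start
    path = [state]
    for _ in range(steps):
        # Markov property: the next state depends only on the current row.
        state = int(rng.choice(len(P), p=P[state]))
        path.append(state)
    return path

print(simulate(P, start=0, steps=10))  # e.g. a path such as [0, 3, 3, 0, ...]
```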
Peter the Great
• Lived 1672-1725
• Tsar: 1682-1725
• Ushered Russia into the modern era
  – Changed fashions – beard tax
  – Modernized the Russian military
  – Moved the capital to St. Petersburg
  – Created a meritocracy
  – Founded the Academy of Sciences in St. Petersburg
    • Imported many foreign experts (Euler)
    • Later, native Russians began to make their mark
      – Nikolai Lobachevsky – non-Euclidean geometry
      – Pafnuty Chebyshev – Markov's advisor
• Some not-so-good things
  – Huge wars of attrition with the Ottomans and the Swedish
  – Meritocracy only for the nobles and clergy
  – New tax laws turned the serfs into slaves
https://en.wikipedia.org/wiki/Peter_the_Great

Jacob Bernoulli
• Born: Basel, Switzerland
  – 1654-1705
• University of Basel
• Theologian, astronomer, and mathematician
  – Supporter of Leibniz in the calculus controversy
  – Credited with a long list of mathematical contributions
    • Discovery of the constant e
    • Bernoulli numbers
    • Bernoulli's golden theorem – the law of large numbers

The Law of Large Numbers
• The relative frequency hn of an event with probability p = r/t, where t = r + s, converges to p as the number n of independent trials grows.
• This is also often called the weak law of large numbers.
https://en.wikipedia.org/wiki/Law_of_large_numbers

Adolphe Quetelet
• Born: Ghent, French Republic
  – 1796-1874
• Alma mater: University of Ghent
• Brussels Observatory
  – Astronomer
  – Statistician
  – Mathematician
  – Sociologist
• From a sociological perspective he concluded that while free will was real, the law of large numbers demonstrated that at the level of populations it didn't matter.
https://en.wikipedia.org/wiki/Adolphe_Quetelet

Pafnuty Chebyshev
• Born in Akatovo, Russian Empire
  – 1821-1894
• Alma mater: Moscow University
• Professor: St. Petersburg University
  – Students: Andrei Markov, Aleksandr Lyapunov
• Chebyshev inequality
  – Proves the weak law of large numbers
https://en.wikipedia.org/wiki/Pafnuty_Chebyshev

Chebyshev's Inequality
• Let x1, …, xn be independent, identically distributed random variables with mean μ and standard deviation σ.
• Let X̄ = (x1 + … + xn)/n be their average.
• Then E[X̄] = μ and Var(X̄) = σ²/n.
• Chebyshev's inequality states that for all ε > 0:
  P(|X̄ − μ| ≥ ε) ≤ Var(X̄)/ε² = σ²/(nε²)
• Therefore lim n→∞ P(|X̄ − μ| ≥ ε) = 0, which is the weak law of large numbers.

Pavel Nekrasov
• Lived: 1853-1923
• University of Moscow
• Monarchist and supporter of the Orthodox Church
• Attempted to use the law of large numbers to prove free will
• Came up with a proof of the law of large numbers using independent variables
• His 1902 paper on these two topics inspired Markov to invent Markov chains
http://bit-player.org/wp-content/extras/markov/#/33

Andrey Andreyevich Markov
• Born: Ryazan, Russian Empire
  – 1856-1922
• Alma mater: St. Petersburg University
  – Advisor: Pafnuty Chebyshev
• Professor: St. Petersburg University
• Notable achievements
  – Published more than 120 papers
  – Chebyshev inequality
  – Markov inequality
  – Invention of the Markov chain
    • Proving that the law of large numbers can apply to dependent random variables
• Markov had a prickly personality:
"I note with astonishment that in the book of A. A. Chuprov, Essays on the Theory of Statistics, on page 195, P. A. Nekrasov, whose work in recent years represents an abuse of mathematics, is mentioned next to Chebyshev."

The Feud
• Nekrasov and Markov were opposites in every respect
  – Moscow vs. St. Petersburg
  – Monarchist vs. anti-Tsarist
  – Religious vs. secularist
• These differences played as large a role in the later perception of the two mathematicians as the mathematics itself

Nekrasov's Argument for Free Will (1902 paper)
• Nekrasov disagreed with Quetelet
• Used the Chebyshev inequality to prove the law of large numbers for independent variables
• Showed: independent variables → law of large numbers
• Conjectured that independence was necessary for the law of large numbers
• Conjectured that voluntary acts can be treated like independent trials in probability theory
• Stated that the law of large numbers had been shown to hold for social behaviors
• Argued that this was proof of free will

Markov's Counterargument
• Markov did not care about the philosophical arguments and was only interested in the mathematics
• Nekrasov and others assumed that the law of large numbers applied only to independent events
• Markov invented Markov chains and used them to prove that the law of large numbers can also apply to dependent events

Markov's 1906-1907 Paper: Groundwork
• In his first paper on Markov chains, Markov considered a chain with two states
• A simple chain is an infinite series x1, x2, …, xk, xk+1
  – where k is the current time and xk is the state at time k
• For any k, xk+1 is independent of x1, x2, …, xk−1 given that xk is known
• The chain is also time-homogeneous: the distribution of xk+1 given xk does not depend on k

Some Variables
• Pα,β = the probability that xk+1 = β given xk = α
• Pβ(k+1) = Σα Pα(k) · Pα,β (the state distribution propagates through the transition probabilities)
• ai = expected value of the independent variable xi
• Aγ(i) = E[xk+i | xk = γ] = expected value of xk+i given xk = γ

Markov's Theorem
Theorem: For a chain with a positive transition matrix, all of the numbers ak+i and Aγ(i) have the same limit, from which they differ by amounts smaller than Δi, where Δi < C·H^i for constants C and H with 0 < H < 1.

This theorem shows that the influence of the starting state dies out geometrically, so averages over a Markov chain converge just as they do for independent variables: the law of large numbers holds for these dependent variables as well.

"I am concerned only with questions of pure analysis.... I refer to the question of the applicability of probability theory with indifference."

Markov's 1913 Paper
• Markov's experiment on Pushkin's Eugene Onegin (Hayes, B. 2013)

Simple Markov chains can be used to create a simple weather prediction model (Hayes, B. 2013)

Simple Markov Chains in Economics
Recession prediction (states: ng = normal growth, mr = mild recession, sr = severe recession):

        ng     mr     sr
  ng   0.971  0.029  0.000
  mr   0.145  0.778  0.077
  sr   0.000  0.508  0.492

Social class mobility prediction:

          poor  middle  rich
  poor    0.9   0.1     0.0
  middle  0.4   0.4     0.2
  rich    0.1   0.1     0.8

http://quant-econ.net/jl/finite_markov.html

Chemical Kinetics: As a Stochastic Process
• a ⇌ b
  – k1 = rate at which a transitions to b
  – k2 = rate at which b transitions to a
• Modeled by a linear chain where each state is a different number of a and b molecules
  – x1, x2, …, xk, xk+1
  – If at state xk there are 50 molecules of a and 0 molecules of b, what will be the state at xk+1? (The examples below explore this by simulation; a minimal code sketch follows.)
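The slides show these dynamics as plots and do not give the code. Below is a minimal Gillespie-style sketch in Python of the a ⇌ b chain, assuming each "iteration" fires one reaction chosen in proportion to its propensity (the embedded jump chain; the waiting times of the full Gillespie algorithm are omitted since the slides count iterations). The function name `simulate_ab` and the seeded generator are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def simulate_ab(a0, b0, k1, k2, iterations):
    """Stochastic simulation of the reversible reaction a <-> b.

    Each state of the chain is the pair (a, b) of molecule counts;
    each iteration converts a single molecule in one direction.
    """
    a, b = a0, b0
    history = [(a, b)]
    for _ in range(iterations):
        rate_fwd = k1 * a  # propensity of a -> b
        rate_bwd = k2 * b  # propensity of b -> a
        total = rate_fwd + rate_bwd
        if total == 0:
            break  # no molecules left to react
        # Choose the next transition with probability proportional to its
        # rate; this is one step of the Markov chain over molecule counts.
        if rng.random() < rate_fwd / total:
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
        history.append((a, b))
    return history

# Parameters of the first example below: a0 = 5, b0 = 0, 20 iterations, k1 = k2 = 1
for state in simulate_ab(5, 0, 1, 1, 20):
    print(state)
```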
Chemical Kinetics Example
• a0 = 5, b0 = 0
• 20 iterations
• k1 = 1, k2 = 1
• Small molecule counts chosen to emphasize randomness

Chemical Kinetics Example
• a0 = 50, b0 = 0
• 100 iterations
• k1 = 1, k2 = 1

Chemical Kinetics Example
• a0 = 50, b0 = 0
• 100 iterations
• k1 = 5, k2 = 1

Other Applications of Simple Markov Chains
• Genetic drift
• Google PageRank algorithm
• Social sciences
• Games
  – Snakes and ladders
  – Monopoly

Hidden Markov Model
• So far we have discussed observable Markov models
• Sometimes the underlying stochastic process in a system cannot be observed
• Observations resulting from the process can be used to infer the underlying stochastic process
• Developed by Leonard E. Baum and colleagues

Hidden Markov Model
• X(t) = state at time t
• X(t) ∈ {x1, …, xn}
  – n = number of unobservable states
• Y(t) ∈ {y1, …, yk}
  – k = number of possible observations
• P(X1:T, Y1:T) = P(x1) P(y1 | x1) ∏t=2..T P(xt | xt−1) P(yt | xt)
https://en.wikipedia.org/wiki/Hidden_Markov_model

Simple Example HMM
• Bob and Alice talk on the phone every day
• Alice cannot see the weather where Bob lives
• Bob only likes to talk about three activities
• The weather's Markov model (the hidden states) can be inferred from Alice's observations (see the sketch at the end of this deck)
https://en.wikipedia.org/wiki/Hidden_Markov_model

Left-to-Right HMM
• State transitions have the property aij = 0 for j < i
  – No transitions are allowed to states whose indices are lower than the current state
• Transitions can be further restricted: aij = 0 for j > i + Δ

Single-Word Speech Recognition
• V words to be identified
  – Each word is modeled by a distinct HMM
• k occurrences of each word, spoken by one or more talkers
• λv = the HMM for word v in the vocabulary
• O = {O1, O2, …, On} = the observation sequence
• Compute P(O | λv) for 1 ≤ v ≤ V
• v* = argmax_v [P(O | λv)]
http://www.cs.ubc.ca/~murphyk/Bayes/rabiner.pdf

Identifying Unknown Proteins
• Homologous proteins are proteins that share a common ancestry (and likely a similar function)
  – Orthologs – homologous proteins separated by a speciation event
  – Paralogs – homologous proteins that originate from gene-duplication events in the same ancestor
• The sequences of known homologous proteins can be used to predict the function of unknown homologous proteins

Identifying Unknown Proteins
• Global alignment
  – CD-HIT
  – UCLUST
• Local alignment
  – BLAST
http://drive5.com/usearch/manual/uclust_algo.html
• The problem is that sequence does not directly determine function; structure does.

Protein Homology Identification
• Protein domains represent functional subunits of a protein
• A single protein can be made up of one or more domains
  – We can use the domain architecture to predict remote homology

Protein Homology Identification
• Pfam and other domain databases identify domains with highly conserved sequences
• For each domain they align hundreds of representative sequences and use the alignment to build a profile HMM
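To close the loop on the hidden Markov model material, here is a minimal Viterbi decoder for the Bob-and-Alice weather example referenced above. The transition and emission probabilities are the familiar illustrative values from the cited Wikipedia article, not values given in these slides, and the function name `viterbi` is my own.

```python
import numpy as np

# Hidden states (weather) and observations (what Bob talks about).
states = ["Rainy", "Sunny"]
obs_names = ["walk", "shop", "clean"]

start = np.array([0.6, 0.4])       # P(x1): initial weather distribution
trans = np.array([[0.7, 0.3],      # P(x_t | x_{t-1}), rows = previous state
                  [0.4, 0.6]])
emit = np.array([[0.1, 0.4, 0.5],  # P(y_t | x_t), rows = hidden state
                 [0.6, 0.3, 0.1]])

def viterbi(observations):
    """Most likely hidden weather sequence given Alice's observations."""
    T, n = len(observations), len(states)
    prob = np.zeros((T, n))              # probability of best path ending in each state
    back = np.zeros((T, n), dtype=int)   # backpointers for path recovery
    prob[0] = start * emit[:, observations[0]]
    for t in range(1, T):
        for j in range(n):
            # Best previous state: only the current state matters (Markov property).
            scores = prob[t - 1] * trans[:, j]
            back[t, j] = np.argmax(scores)
            prob[t, j] = scores[back[t, j]] * emit[j, observations[t]]
    # Trace the best path backwards from the most probable final state.
    path = [int(np.argmax(prob[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 2]))  # Bob reports walk, shop, clean -> ['Sunny', 'Rainy', 'Rainy']
```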