CS70: Jean Walrand: Lecture 30

Markov & Chebyshev; A brief review

Bounds: An Overview
- Bounds:
  1. Markov
  2. Chebyshev
- A brief review
- A mini-quiz

Andrey Markov
Andrey Markov is best known for his work on stochastic processes. A primary subject of his research later became known as Markov chains and Markov processes. Pafnuty Chebyshev was one of his teachers. Markov was an atheist. In 1912 he protested Leo Tolstoy's excommunication from the Russian Orthodox Church by requesting his own excommunication. The Church complied with his request.

Monotonicity
Let X be a RV. We write X ≥ 0 if all the possible values of X are nonnegative. That is, Pr[X ≥ 0] = 1. It is obvious that

  X ≥ 0 ⇒ E[X] ≥ 0.

We write X ≥ Y if X − Y ≥ 0. Then,

  X ≥ Y ⇒ X − Y ≥ 0 ⇒ E[X] − E[Y] = E[X − Y] ≥ 0.

Hence,

  X ≥ Y ⇒ E[X] ≥ E[Y].

We say that expectation is monotone. (Which means monotonic, but not monotonous!)

Markov's inequality
The inequality is named after Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev. It should be (and is sometimes) called Chebyshev's first inequality.

Theorem (Markov's Inequality). Assume f : ℜ → [0, ∞) is nondecreasing. Then,

  Pr[X ≥ a] ≤ E[f(X)]/f(a), for all a such that f(a) > 0.

Proof: Observe that

  1{X ≥ a} ≤ f(X)/f(a).

Indeed, if X < a, the inequality reads 0 ≤ f(X)/f(a), which holds since f(·) ≥ 0. Also, if X ≥ a, it reads 1 ≤ f(X)/f(a), which holds since f(·) is nondecreasing. Taking the expectation yields the inequality.

A picture
(Figure from the slides not reproduced in this extraction.)

Markov Inequality Example: G(p)
Let X = G(p). Recall that E[X] = 1/p and E[X²] = (2 − p)/p².
Choosing f(x) = x, we get

  Pr[X ≥ a] ≤ E[X]/a = 1/(ap).

Choosing f(x) = x², we get

  Pr[X ≥ a] ≤ E[X²]/a² = (2 − p)/(p²a²).

Markov Inequality Example: P(λ)
Let X = P(λ). Recall that E[X] = λ and E[X²] = λ + λ².
Choosing f(x) = x, we get

  Pr[X ≥ a] ≤ E[X]/a = λ/a.

Choosing f(x) = x², we get

  Pr[X ≥ a] ≤ E[X²]/a² = (λ + λ²)/a².

Chebyshev's Inequality
This is Pafnuty's inequality:

Theorem. Pr[|X − E[X]| > a] ≤ var[X]/a², for all a > 0.

Proof: Let Y = |X − E[X]| and f(y) = y². Then,

  Pr[Y ≥ a] ≤ E[f(Y)]/f(a) = var[X]/a².

This result confirms that the variance measures the "deviations from the mean."

Chebyshev and P(λ)
Let X = P(λ). Then, E[X] = λ and var[X] = λ. Thus,

  Pr[|X − λ| ≥ a] ≤ var[X]/a² = λ/a².

Moreover, for a > λ,

  Pr[X ≥ a] ≤ Pr[|X − λ| ≥ a − λ] ≤ λ/(a − λ)².

Fraction of H's
Here is a classical application of Chebyshev's inequality. How likely is it that the fraction of H's differs from 50%?

Let X_m = 1 if the m-th flip of a fair coin is H and X_m = 0 otherwise. Define

  Y_n = (X_1 + ··· + X_n)/n, for n ≥ 1.

We want to estimate

  Pr[|Y_n − 0.5| ≥ 0.1] = Pr[Y_n ≤ 0.4 or Y_n ≥ 0.6].

By Chebyshev,

  Pr[|Y_n − 0.5| ≥ 0.1] ≤ var[Y_n]/(0.1)² = 100 var[Y_n].

Now,

  var[Y_n] = (1/n²)(var[X_1] + ··· + var[X_n]) = var[X_1]/n = 1/(4n).

Hence,

  Pr[|Y_n − 0.5| ≥ 0.1] ≤ 25/n.

For n = 1,000, we find that this probability is less than 2.5%. As n → ∞, this probability goes to zero. We look at a general case next.
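This bound is easy to check numerically. The following sketch (our illustration, not part of the lecture; the helper name and trial count are arbitrary choices) estimates Pr[|Y_n − 0.5| ≥ 0.1] by Monte Carlo and prints it next to the Chebyshev bound 25/n.

```python
# Monte Carlo check of the Chebyshev bound Pr[|Y_n - 0.5| >= 0.1] <= 25/n.
# Illustrative sketch: the function name and trial count are our own choices.
import random

def deviation_probability(n, trials=10_000):
    """Estimate Pr[|Y_n - 0.5| >= 0.1], where Y_n is the fraction of H's in n fair flips."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n))
        if abs(heads / n - 0.5) >= 0.1:
            hits += 1
    return hits / trials

for n in (100, 1000):
    print(f"n = {n}: simulated {deviation_probability(n):.4f}, Chebyshev bound {25 / n:.4f}")
```

For n = 100 the simulated probability typically comes out near 0.06, well below the bound of 0.25: Chebyshev is valid but loose, since it uses only the variance of Y_n.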
Weak Law of Large Numbers

Theorem (Weak Law of Large Numbers). Let X_1, X_2, ... be independent with the same distribution. Then, for all ε > 0,

  Pr[|(X_1 + ··· + X_n)/n − E[X_1]| ≥ ε] → 0, as n → ∞.

Proof: Let Y_n = (X_1 + ··· + X_n)/n. Then

  Pr[|Y_n − E[X_1]| ≥ ε] ≤ var[Y_n]/ε² = var[X_1]/(nε²) → 0, as n → ∞.

Summary
Bounds!
- Markov: Pr[X ≥ a] ≤ E[f(X)]/f(a) where ...
- Chebyshev: Pr[|X − E[X]| ≥ a] ≤ var[X]/a²

A brief review - 1
- Framework: Ω, Pr[ω], Pr[A] = ∑_{ω ∈ A} Pr[ω]
- Conditional Probability and Bayes' Rule:
  Pr[A | B] := Pr[A ∩ B]/Pr[B]; Pr[A_m | B] = Pr[A_m] Pr[B | A_m] / ∑_k Pr[A_k] Pr[B | A_k]
- Independence: A, B ind. ⇔ Pr[A ∩ B] = Pr[A] Pr[B]
- Mutual Independence: A, B, C, ... mutually independent ⇔ ···
- Random Variable: X : Ω → ℜ

A brief review - 2
- Expectation: E[X] := ∑_a a × Pr[X = a] = ∑_ω X(ω) Pr[ω]
- Linearity: E[aX + bY] = a E[X] + b E[Y]
- Independent RVs: X, Y are independent if and only if Pr[X ∈ A, Y ∈ B] = ···
- If independent, then
  - f(X), g(Y) independent
  - E[XY] = E[X] E[Y]
  - var[X + Y] = var[X] + var[Y]

A brief review - 3
- Uniform distribution: U[1, ..., n]
- Random experiment: pick a number uniformly in {1, ..., n}
- Definition: Pr[X = m] = 1/n, m = 1, ..., n
- Mean: E[X] = (n + 1)/2; calculation
- Variance: var[X] = (n² − 1)/12; calculation

A brief review - 4
- Geometric distribution: G(p), p ∈ [0, 1]
- Random experiment: number of flips until the first H
- Definition: Pr[X = m] = (1 − p)^{m−1} p, m ≥ 1
- Mean: E[X] = 1/p; "trick" or renewal
- Variance: var[X] = (1 − p)/p²; "trick" or renewal
- Renewal: X = 1 + ZY, Y ≡ X, Pr[Z = 1] = q := 1 − p, so that
  E[X] = 1 + q E[X]; E[X²] = 1 + 2q E[X] + q E[X²]

A brief review - 5
- Binomial distribution: B(n, p), n ≥ 1, p ∈ [0, 1]
- Random experiment: number of heads out of n flips
- Definition: Pr[X = m] = (n choose m) p^m (1 − p)^{n−m}, m = 0, 1, ..., n
- Mean: E[X] = np; with X = X_1 + ··· + X_n, E[X] = n E[X_1], by linearity
- Variance: var[X] = np(1 − p); with X = X_1 + ··· + X_n, var[X] = n var[X_1] by independence

A brief review - 6
- Poisson distribution: P(λ), λ > 0
- Random experiment: number of H's out of n ≫ 1 flips with p = λ/n; P(λ) ≈ B(n, λ/n)
- Definition: Pr[X = m] = (λ^m / m!) e^{−λ}, m ≥ 0
- Mean: E[X] = λ; E[X] = n(λ/n) = λ
- Variance: var[X] = λ; var[X] = n(λ/n)(1 − λ/n) = λ − λ²/n ≈ λ, since n ≫ 1

A mini-quiz
True or False?
- Probability Space:
  - Pr[A ∪ B] = Pr[A] + Pr[B]. False; true iff disjoint.
  - Pr[A ∩ B] = Pr[A] Pr[B]. False; true iff independent.
  - A ∩ B = ∅ ⇒ A, B independent. False.
  - For all A, B, one has Pr[A | B] ≥ Pr[A]. False.
  - Pr[A ∩ B ∩ C] = Pr[A] Pr[B | A] Pr[C | B]. False.
- Random Variables:
  - For all X, Y, one has E[XY] = E[X] E[Y]. False; true if (not iff) independent.
  - var[3X + 1] = 3 var[X]. False; it equals 9 var[X].
  - var[2X + 3Y] = 4 var[X] + 9 var[Y]. False; true if (not iff) independent.
  - E[2X + 3Y + 1] = 2 E[X] + 3 E[Y] + 1. True.

Inequalities: Some Applications
Below, Y is measured and used to construct a guess g(Y) about X. Here are some possible goals.
- Detection: Pr[X ≠ g(Y)] ≤ 10⁻⁸ (e.g., bit error rate)
- Estimation: Pr[|X − g(Y)| > a] ≤ 5% (e.g., polling)
- Hypothesis Testing: Pr[g(Y) = 1 | X = 0] ≤ 5% and Pr[g(Y) = 0 | X = 1] ≤ 5% (e.g., fire alarm)
- Design: Pr[buffer overflow] ≤ 5%
- Control: Pr[crash] ≤ 10⁻⁶

There are many situations where exact probabilities are difficult to compute but bounds can be found.
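When the exact tail probability is computable, the looseness of these bounds can be seen directly. The sketch below (our illustration, not from the lecture; λ = 10 and a = 20 are arbitrary sample values) compares the exact Pr[X ≥ a] for X = P(λ) with the three bounds derived above.

```python
# Compare the exact Poisson tail Pr[X >= a] with the lecture's bounds:
# Markov with f(x) = x, Markov with f(x) = x^2, and the shifted Chebyshev
# bound (valid for a > lam). Parameter values are illustrative only.
import math

def poisson_tail(lam, a):
    """Exact Pr[X >= a] for X ~ P(lam), with a a nonnegative integer."""
    return 1.0 - sum(math.exp(-lam) * lam ** m / math.factorial(m) for m in range(a))

lam, a = 10, 20
print(f"exact Pr[X >= {a}]      = {poisson_tail(lam, a):.5f}")
print(f"Markov, f(x) = x:    <= {lam / a:.5f}")                    # E[X]/a
print(f"Markov, f(x) = x^2:  <= {(lam + lam ** 2) / a ** 2:.5f}")  # E[X^2]/a^2
print(f"Chebyshev (shifted): <= {lam / (a - lam) ** 2:.5f}")       # lam/(a - lam)^2
```

Each refinement tightens the estimate (0.5, then 0.275, then 0.1, against an exact value of about 0.0035), yet all three bounds remain far above the truth; they trade precision for being computable from just a couple of moments.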