Martingale Theory Problem Set 3, with Solutions: Martingales


The solutions of problems 1, 2, 3, 4, 5, 6, and 11 are written down. The rest will come soon.

3.1 Let ξ_j, j = 1, 2, ..., be i.i.d. random variables with common distribution

    P(ξ_i = +1) = p,    P(ξ_i = -1) = q := 1 - p,

and F_n = σ(ξ_j, 0 ≤ j ≤ n), n ≥ 0, their natural filtration. Denote S_n := Σ_{j=1}^n ξ_j, n ≥ 0.

(a) Prove that M_n := (q/p)^{S_n} is an (F_n)_{n≥0}-martingale.

(b) For λ > 0 determine C = C(λ) so that Z_n := C(λ)^{-n} λ^{S_n} be an (F_n)_{n≥0}-martingale.

SOLUTION:

(a)

    E[M_{n+1} | F_n] = E[M_n (q/p)^{ξ_{n+1}} | F_n] = M_n E[(q/p)^{ξ_{n+1}} | F_n]
                     = M_n E[(q/p)^{ξ_{n+1}}] = M_n (p(q/p) + q(p/q)) = M_n.

(b)

    C = C(λ) = E[λ^ξ] = λp + λ^{-1}q.

3.2 Gambler's Ruin, 1

A gambler wins or loses one pound in each round of betting, with equal chances and independently of the past events. She starts betting with the firm determination that she will stop gambling when either she has won a pounds or she has lost b pounds.

(a) What is the probability that she will be winning when she stops playing?

(b) What is the expected number of her betting rounds before she stops playing?

SOLUTION: Model the experiment with a simple symmetric random walk. Let ξ_j, j = 1, 2, ..., be i.i.d. random variables with common distribution

    P(ξ_i = +1) = 1/2 = P(ξ_i = -1),

and F_n = σ(ξ_j, 0 ≤ j ≤ n), n ≥ 0, their natural filtration.
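A side note on 3.1 before continuing: both parts reduce to the one-step identities E[(q/p)^ξ] = p(q/p) + q(p/q) = 1 and E[λ^ξ] = λp + λ^{-1}q = C(λ), which are easy to confirm numerically. A minimal sketch (the values p = 0.3 and λ = 1.7 are arbitrary illustrative choices, not from the text):

```python
def one_step_mean(p, base):
    """E[base^xi] for P(xi = +1) = p, P(xi = -1) = q = 1 - p."""
    q = 1.0 - p
    return p * base + q / base

p, lam = 0.3, 1.7          # arbitrary illustrative values
q = 1.0 - p

# (a): the factor multiplying M_n in one step is exactly 1,
# so M_n = (q/p)^{S_n} has constant conditional expectation.
factor = one_step_mean(p, q / p)

# (b): C(lambda) = E[lambda^xi] = lambda*p + lambda^{-1}*q is the
# normalising constant making Z_n = C(lambda)^{-n} lambda^{S_n} a martingale.
C = one_step_mean(p, lam)

print(factor, C)
```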
Denote

    S_0 = 0,    S_n := Σ_{j=1}^n ξ_j,    n ≥ 1.

Define the stopping times

    T_L := inf{n > 0 : S_n = -b},    T_R := inf{n > 0 : S_n = +a},    T := min{T_L, T_R}.

Note that

    {the gambler wins a pounds} = {T = T_R},    {the gambler loses b pounds} = {T = T_L}.

(a) By the Optional Stopping Theorem,

    E[S_T] = E[S_0] = 0.

Hence

    -b P(T = T_L) + a P(T = T_R) = 0.

On the other hand,

    P(T = T_L) + P(T = T_R) = 1.

Solving the last two equations we get

    P(T = T_L) = a/(a + b),    P(T = T_R) = b/(a + b).

(b) First prove that M_n := S_n^2 - n is yet another martingale:

    E[M_{n+1} | F_n] = E[S_{n+1}^2 | F_n] - (n + 1)
                     = E[S_n^2 + 2 S_n ξ_{n+1} + 1 | F_n] - (n + 1) = ... = M_n.

Now apply the Optional Stopping Theorem:

    0 = E[M_T] = E[S_T^2 - T] = P(T = T_L) b^2 + P(T = T_R) a^2 - E[T].

Hence, using the result from (a),

    E[T] = ab.

3.3 HW Gambler's Ruin, 2

Answer the same questions as in problem 2 when the probability of winning or losing one pound in each round is p, respectively q := 1 - p, with p ∈ (0, 1), p ≠ 1/2. Hint: use the martingales constructed in problem 1.

SOLUTION: Model the experiment with a simple biased random walk. Let ξ_j, j = 1, 2, ..., be i.i.d. random variables with common distribution

    P(ξ_i = +1) = p,    P(ξ_i = -1) = q,

and F_n = σ(ξ_j, 0 ≤ j ≤ n), n ≥ 0, their natural filtration. Denote

    S_0 = 0,    S_n := Σ_{j=1}^n ξ_j,    n ≥ 1.

Define the stopping times

    T_L := inf{n > 0 : S_n = -b},    T_R := inf{n > 0 : S_n = +a},    T := min{T_L, T_R}.

Note that

    {the gambler wins a pounds} = {T = T_R},    {the gambler loses b pounds} = {T = T_L}.

(a) Use the Optional Stopping Theorem for the martingale (q/p)^{S_n}:

    1 = E[(q/p)^{S_T}] = (p/q)^b P(T = T_L) + (q/p)^a P(T = T_R).

On the other hand,

    P(T = T_L) + P(T = T_R) = 1.

Solving the last two equations we get

    P(T = T_L) = (1 - (q/p)^a) / ((p/q)^b - (q/p)^a),    P(T = T_R) = (1 - (p/q)^b) / ((q/p)^a - (p/q)^b).

(b) Now apply the Optional Stopping Theorem to the martingale S_n - (p - q)n.
Hence

    E[T] = (p - q)^{-1} E[S_T]
         = (p - q)^{-1} [ a (1 - (p/q)^b)/((q/p)^a - (p/q)^b) - b (1 - (q/p)^a)/((p/q)^b - (q/p)^a) ]
         = (p - q)^{-1} [ a (1 - (p/q)^b) + b (1 - (q/p)^a) ] / ((q/p)^a - (p/q)^b).

3.4 Let ξ_j, j = 1, 2, 3, ..., be independent and identically distributed random variables and F_n := σ(ξ_j, 0 ≤ j ≤ n), n ≥ 0, the natural filtration generated by them. Assume that for some γ ∈ R the exponential moment m(γ) := E[e^{γ ξ_j}] < ∞ exists. Denote S_0 := 0, S_n := Σ_{j=1}^n ξ_j, n ≥ 1. Prove that the process

    M_n := m(γ)^{-n} exp{γ S_n},    n ∈ N,

is an (F_n)_{n≥0}-martingale.

SOLUTION: Very much the same as problem 1 (b).

3.5 Let (Ω, F, (F_n)_{n≥0}, P) be a filtered probability space and Y_n, n ≥ 0, a sequence of absolutely integrable random variables adapted to the filtration (F_n)_{n≥0}. Assume that there exist real numbers u_n, v_n, n ≥ 0, such that

    E[Y_{n+1} | F_n] = u_n Y_n + v_n.

Find two real sequences a_n and b_n, n ≥ 0, so that the sequence of random variables M_n := a_n Y_n + b_n, n ≥ 0, be a martingale w.r.t. the same filtration.

SOLUTION: Write down the martingale condition for M_n:

    E[M_{n+1} | F_n] = E[a_{n+1} Y_{n+1} + b_{n+1} | F_n] = a_{n+1} u_n Y_n + a_{n+1} v_n + b_{n+1} = a_n Y_n + b_n.

We get the recursions

    a_{n+1} = a_n u_n^{-1},    b_{n+1} = b_n - a_{n+1} v_n.

The solution is

    a_0 = 1,    a_n = ( Π_{k=0}^{n-1} u_k )^{-1},
    b_0 = 0,    b_n = - Σ_{k=1}^n a_k v_{k-1}.

3.6 HW We place N balls in K urns (in whatever way) and perform the following discrete time process. At each time unit we choose one of the balls uniformly at random (that is: each ball is chosen with probability 1/N) and place it in one of the urns, also uniformly chosen at random (that is: each urn is chosen with probability 1/K). Denote by X_n the number of balls in the first urn at time n and let F_n := σ(X_j, 1 ≤ j ≤ n), n ≥ 0, be the natural filtration generated by the process n ↦ X_n.

(a) Compute E[X_{n+1} | F_n].

(b) Using the result from problem 5, find real numbers a_n, b_n, n ≥ 0, such that Z_n := a_n X_n + b_n be a martingale with respect to the filtration (F_n)_{n≥0}.
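A quick mechanical check of the recursions in problem 3.5: with the closed forms given there, a_{n+1} u_n = a_n and a_{n+1} v_n + b_{n+1} = b_n must hold for every n, which is exactly the coefficient-wise martingale condition E[M_{n+1} | F_n] = M_n. A sketch (the particular u_k, v_k values are arbitrary test data, not from the text):

```python
def coefficients(u, v):
    """Build a_0..a_n, b_0..b_n from the recursions of 3.5:
    a_0 = 1, a_{n+1} = a_n / u_n;  b_0 = 0, b_{n+1} = b_n - a_{n+1} v_n."""
    a = [1.0]
    b = [0.0]
    for n in range(len(u)):
        a.append(a[-1] / u[n])            # a_{n+1} = a_n u_n^{-1}
        b.append(b[-1] - a[-1] * v[n])    # b_{n+1} = b_n - a_{n+1} v_n
    return a, b

u = [0.9, 1.2, 0.5, 2.0]     # arbitrary nonzero u_n
v = [0.3, -1.0, 0.7, 0.1]    # arbitrary v_n
a, b = coefficients(u, v)

# Coefficient-wise martingale condition, checked for each n:
checks = [(a[n + 1] * u[n] - a[n], a[n + 1] * v[n] + b[n + 1] - b[n])
          for n in range(len(u))]
print(checks)   # every entry should be (0, 0) up to rounding
```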
SOLUTION:

(a)

    E[X_{n+1} | F_n] = (X_n + 1) · (N - X_n)/N · 1/K + (X_n - 1) · X_n/N · (K - 1)/K
                       + X_n · (N - X_n)/N · (K - 1)/K + X_n · X_n/N · 1/K
                     = X_n (N - 1)/N + 1/K.

(b) Apply the result from problem 5 with

    u_n = (N - 1)/N,    v_n = 1/K.

3.7 Let X_j, j ≥ 1, be absolutely integrable random variables and F_n := σ(X_j, 1 ≤ j ≤ n), n ≥ 0, their natural filtration. Define the new random variables

    Z_0 := 0,    Z_n := Σ_{j=0}^{n-1} ( X_{j+1} - E[X_{j+1} | F_j] ).

Prove that the process n ↦ Z_n is an (F_n)_{n≥0}-martingale.

3.8 A biased coin shows HEAD = 1 with probability θ ∈ (0, 1), and TAIL = 0 with probability 1 - θ. The value θ of the bias is not known. For t ∈ [0, 1] and n ∈ N we define p_{n,t} : {0, 1}^n → [0, 1] by

    p_{n,t}(x_1, ..., x_n) := t^{Σ_{j=1}^n x_j} (1 - t)^{n - Σ_{j=1}^n x_j}.

We make two hypotheses about the possible value of θ: either θ = a, or θ = b, where a, b ∈ [0, 1] and a ≠ b. We toss the coin repeatedly and form the sequence of random variables

    Z_n := p_{n,a}(ξ_1, ..., ξ_n) / p_{n,b}(ξ_1, ..., ξ_n),

where ξ_j, j = 1, 2, ..., are the results of the successive trials (HEAD = 1, TAIL = 0). Prove that the process n ↦ Z_n is a martingale (with respect to the natural filtration generated by the coin tosses) if and only if the true bias of the coin is θ = b.

SOLUTION:

3.9 Bonus: Bellman's Optimality Principle

We model a sequence of gambles as follows. Let ξ_j, j = 1, 2, ..., be independent random variables with the common distribution

    P(ξ_j = +1) = p,    P(ξ_j = -1) = 1 - p =: q,    1/2 < p < 1.

We denote

    α := 1 + p log_2 p + q log_2 q,

that is, 1 minus the entropy of the distribution of ξ_j. Here ξ_j is the return of a unit bet in the j-th round. A gambler starts playing with initial fortune Y_0 > 0 and her fortune after round n is

    Y_n = Y_{n-1} + C_n ξ_n,

where C_n is the amount she bets in this round. C_n may depend on the values of ξ_1, ..., ξ_{n-1}, and 0 ≤ C_n ≤ Y_{n-1}. The expected rate of winnings within n rounds is

    r_n := E[log_2(Y_n / Y_0)].

The gambler's goal is to maximize r_n within a fixed number of rounds.
(a) Prove that no matter what strategy the gambler chooses (that is: no matter how she chooses C_n = C_n(ξ_1, ..., ξ_{n-1}) ∈ [0, Y_{n-1}]),

    X_n := log_2 Y_n - nα

is a supermartingale, and hence that r_n ≤ nα. This means that she will not be able to make her average winning rate, over any number of rounds, larger than α.

(b) However, there exists a gambling strategy which makes X_n defined above a martingale and thus realizes the maximal average winning rate. Find this strategy. That is: determine the optimal choice of C_n = C_n(ξ_1, ..., ξ_{n-1}).

SOLUTION:

3.10 Let n ↦ η_n be a homogeneous Markov chain on the countable state space S := {0, 1, 2, ...} and F_n := σ(η_j, 0 ≤ j ≤ n), n ≥ 0, its natural filtration.
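As a closing numerical check of problems 3.2 and 3.3: the formulas P(T = T_R) = b/(a + b) and E[T] = ab in the symmetric case, and the biased-case hitting probability from 3.3(a), can be compared against a direct simulation of the walk. A sketch (the parameters a = 3, b = 2, p = 0.6, the trial count, and the seed are arbitrary choices for illustration):

```python
import random

def simulate_ruin(p, a, b, trials=20000, seed=1):
    """Run the +1/-1 walk until it hits +a or -b.
    Returns (empirical P(hit +a first), empirical E[number of rounds])."""
    rng = random.Random(seed)
    wins, total_t = 0, 0
    for _ in range(trials):
        s, t = 0, 0
        while -b < s < a:
            s += 1 if rng.random() < p else -1
            t += 1
        wins += (s == a)
        total_t += t
    return wins / trials, total_t / trials

a, b, p = 3, 2, 0.6
q = 1.0 - p

# Symmetric case (3.2): theory says P(win) = b/(a+b) = 0.4 and E[T] = a*b = 6.
p_win_sym, mean_t_sym = simulate_ruin(0.5, a, b)

# Biased case (3.3a): P(T = T_R) = (1 - (p/q)^b) / ((q/p)^a - (p/q)^b).
p_win_biased, _ = simulate_ruin(p, a, b)
theory_biased = (1 - (p / q) ** b) / ((q / p) ** a - (p / q) ** b)

print(p_win_sym, mean_t_sym, p_win_biased, theory_biased)
```

With 20000 trials the empirical frequencies typically agree with the closed forms to about two decimal places.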