Continuous-Time Markov Chains of the Ehrenfest Type with Applications*


RIMS Kôkyûroku No. 1193 (2001), pp. 79-103

ASYMPTOTIC BEHAVIOR OF CONTINUOUS-TIME MARKOV CHAINS OF THE EHRENFEST TYPE WITH APPLICATIONS*

Kiyomasa Narita, Faculty of Engineering, Kanagawa University

*This research was supported in part by Grant-in-Aid for General Scientific Research (No. 10640138), The Ministry of Education, Science, Sports and Culture.

Abstract

Kac [6] considers the discrete-time Markov chain for the original Ehrenfest urn model and gives an expression for the $n$-step transition probabilities in terms of the Krawtchouk polynomials; moreover, Kac's formulas for the transition laws show that the model yields Newton's law of cooling at the level of the evolution of the averages. Here we first give a version of Newton's law of cooling for Dette's [4] Ehrenfest urn model. We next discuss the continuous-time formulation of the Markov chain for Krafft and Schaefer's [9] Ehrenfest urn model and calculate the probability laws of transitions and first-passage times, both in the reduced description, whose states count the number of balls in a specified urn, and in the full description, whose states enumerate the vertices of the $N$-cube. Our results generalize the work of Bingham [3]. We also treat several applications of urn models to queueing networks and reliability theory.

1. Introduction

Let $s$ and $t$ be two parameters such that $0 < s \le 1$ and $0 < t \le 1$, and consider the homogeneous Markov chain with state space $\{0, 1, \ldots, N\}$ and transition probabilities

$$
p_{ij} = \begin{cases}
\left(1 - \frac{i}{N}\right)s, & \text{if } j = i+1, \\[2pt]
\frac{i}{N}\,t, & \text{if } j = i-1, \\[2pt]
1 - \left(1 - \frac{i}{N}\right)s - \frac{i}{N}\,t, & \text{if } j = i, \\[2pt]
0, & \text{otherwise},
\end{cases}
\qquad 0 \le i, j \le N, \tag{1.1}
$$

which is proposed by Krafft and Schaefer [9]. In the case $s = t = 1$, the Markov chain is the one that P. and T. Ehrenfest [5] use to illustrate the process of heat exchange between two bodies that are in contact and insulated from the outside. The temperatures are assumed to change in steps of one unit and are represented by the numbers of balls in two urns. The two urns are marked I and II, and there are $N$ balls labeled $1, 2, \ldots, N$ distributed between the two urns. The chain is in state $i$ when there are $i$ balls in urn I. At each point of time $n$ one ball is chosen at random (i.e. with equal probabilities $1/N$ among the numbers $1, 2, \ldots, N$) and moved from its urn to the other urn in the following way. If it is in urn I, it is placed in II with probability $t$ and returned to I with probability $1-t$. If it is in urn II, it is placed in I with probability $s$ and returned to II with probability $1-s$. The case where $s + t = 1$ is the one-parameter Ehrenfest urn model as investigated by Karlin and McGregor [7]. The case where $s = t$ is the thinning of the Ehrenfest urn model as proposed by Vincze [14] and Takács [13].

Remark 1.1. The process of the Markov chain in the model (1.1) corresponds to a 'reduced description' with $N+1$ states $0, 1, \ldots, N$, counting the number of balls in urn I. If we use a 'full description' with $2^N$ states, representing the possible configurations by $N$-tuples $i = (i_1, \ldots, i_k, \ldots, i_N)$, where $i_k = 1$ or $0$ according as ball $k$ is in urn I or II, one may identify the states with the vertices of the $N$-cube.
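The reduced description is easy to experiment with numerically. The following Python sketch is not part of the original paper; the values of $N$, $s$, $t$ are arbitrary illustrations. It builds the $(N+1)\times(N+1)$ transition matrix of the chain (1.1) and simulates a short trajectory.

```python
import numpy as np

def krafft_schaefer_matrix(N, s, t):
    """Transition matrix of the two-parameter Ehrenfest chain (1.1)."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        up = (1 - i / N) * s      # probability of moving to j = i + 1
        down = (i / N) * t        # probability of moving to j = i - 1
        if i < N:
            P[i, i + 1] = up
        if i > 0:
            P[i, i - 1] = down
        P[i, i] = 1 - up - down   # hold in state i
    return P

# illustrative parameter values (not taken from the paper)
N, s, t = 10, 0.7, 0.4
P = krafft_schaefer_matrix(N, s, t)
assert np.allclose(P.sum(axis=1), 1.0)    # every row is a probability vector

# simulate one trajectory of the reduced chain, started with all balls in urn I
rng = np.random.default_rng(0)
state, path = N, [N]
for _ in range(50):
    state = rng.choice(N + 1, p=P[state])
    path.append(state)
print(path[:11])
```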
Let $a$ and $b$ denote real numbers such that $a, b > -1$ or $a, b < -N$, and consider the following homogeneous Markov chain with state space $\{0, 1, \ldots, N\}$ and transition probabilities

$$
p_{ij} = \begin{cases}
\left(1 - \frac{i}{N}\right)\left(\frac{a+1+i}{N+a+b+2}\right)\nu, & \text{if } j = i+1, \\[2pt]
\frac{i}{N}\left(\frac{N+b+1-i}{N+a+b+2}\right)\nu, & \text{if } j = i-1, \\[2pt]
1 - \left(1 - \frac{i}{N}\right)\left(\frac{a+1+i}{N+a+b+2}\right)\nu - \frac{i}{N}\left(\frac{N+b+1-i}{N+a+b+2}\right)\nu, & \text{if } j = i, \\[2pt]
0, & \text{otherwise},
\end{cases}
\tag{1.2}
$$

where $\nu > 0$ is arbitrary such that $0 \le p_{ij} \le 1$ for all $i, j = 0, 1, \ldots, N$.

Remark 1.2. The two-parameter Ehrenfest urn model (1.1) is obtained by putting $a = su/(s+t)$, $b = tu/(s+t)$ and $\nu = s+t$ and taking the limit as $u \to \infty$.

For the model (1.2), Dette [4] obtains more general integral representations of the $n$-step transition probabilities and the (unique) stationary probability distribution with respect to the spectral measure of the random walk and shows the following.

Remark 1.3. (i) The $n$-step transition probabilities $p_{ik}^{(n)}$ in the model (1.1) are given by

$$
p_{ik}^{(n)} = \binom{N}{k}\left(\frac{p}{q}\right)^{k} \sum_{x=0}^{N} \binom{N}{x} p^{x} q^{N-x} K_{i}(x, p, N)\, K_{k}(x, p, N) \left(1 - \frac{s+t}{N}\,x\right)^{n},
$$

where $p = s/(s+t)$ and $q = t/(s+t)$. Here $\{K_n(x, p, N) : n = 0, 1, \ldots, N\}$ is the family of Krawtchouk polynomials, which will appear in Remark 1.4 below. This result is also found by Krafft and Schaefer [9].

(ii) The (unique) stationary probability distribution $\pi = (\pi_0, \ldots, \pi_N)$ in the model (1.2) is given by

$$
\pi_{j} = \binom{N}{j}\frac{\beta(a+1+j,\, N+b+1-j)}{\beta(a+1,\, b+1)},
$$

where $\beta(x, y)$ denotes the beta function, that is, $\beta(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} = \int_{0}^{1} u^{x-1}(1-u)^{y-1}\,du$ for $x, y > 0$.

For the model (1.1) with $s = t = 1$, Kac [6] finds the same form as (i) of Remark 1.3 and establishes Newton's law of cooling, which concerns the estimate of the excess of the number of balls in urn I over the mean of the stationary probability distribution. Our purpose is the following:

• For the model (1.2), we will show the analogue of Newton's law of cooling [Theorem 2.1].

• For the model (1.1), we will give the continuous-time formulation and obtain the laws of transition probabilities and first-passage times in both the 'reduced' and the 'full' descriptions, using the method of Bingham [3] [Theorems 2.2-2.3].

Remark 1.4. With $0 < p = 1 - q < 1$ and integer $N \ge 0$ the Krawtchouk polynomials are defined by the hypergeometric series

$$
K_{n}(x, p, N) = {}_{2}F_{1}(-n, -x; -N; 1/p) = \sum_{\nu=0}^{n} (-1)^{\nu} \frac{\binom{n}{\nu}\binom{x}{\nu}}{\binom{N}{\nu}} \left(\frac{1}{p}\right)^{\nu}, \qquad n = 0, 1, \ldots, N.
$$

When the meaning is clear, we write simply $K_n(x)$ instead of $K_n(x, p, N)$. From Karlin and McGregor [7] we cite the following properties of $K_n(x)$, which will be used in later sections:

(i) $\sum_{n=0}^{N} \binom{N}{n} K_{n}(x) z^{n} = (1+z)^{N-x}\left(1 - \frac{q}{p}z\right)^{x}$, $x = 0, 1, \ldots, N$;

(ii) $K_{n}(0) = 1$, $K_{0}(x) = 1$;

(iii) $K_{n}(x, p, N) = K_{x}(n, p, N)$, $n, x = 0, 1, \ldots, N$;

(iv) $K_{n}(N-x, p, N) = (-1)^{n}\left(\frac{q}{p}\right)^{n} K_{n}(x, q, N)$.
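As a numerical cross-check of (i) of Remark 1.3 (a sketch only, with arbitrary parameter values; the helper `krawtchouk` follows the series of Remark 1.4, and the spectral formula is used in the reconstructed form above, with the binomial weights written out explicitly), one can compare the $n$-step matrix power of the chain (1.1) with the Krawtchouk-polynomial representation. The matrix helper repeats the construction of the previous sketch so that the block runs on its own.

```python
import math
import numpy as np

def krawtchouk(n, x, p, N):
    """Krawtchouk polynomial K_n(x, p, N), series form of Remark 1.4."""
    return sum((-1) ** v * math.comb(n, v) * math.comb(x, v)
               / math.comb(N, v) * (1 / p) ** v
               for v in range(n + 1))

def ehrenfest_matrix(N, s, t):
    """Transition matrix of model (1.1)."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        up, down = (1 - i / N) * s, (i / N) * t
        if i < N:
            P[i, i + 1] = up
        if i > 0:
            P[i, i - 1] = down
        P[i, i] = 1 - up - down
    return P

N, s, t, n = 8, 0.6, 0.3, 5          # illustrative values
p, q = s / (s + t), t / (s + t)
Pn = np.linalg.matrix_power(ehrenfest_matrix(N, s, t), n)

def p_ik(i, k):
    """Spectral (Krawtchouk) form of the n-step transition probability."""
    total = sum(math.comb(N, x) * p ** x * q ** (N - x)
                * krawtchouk(i, x, p, N) * krawtchouk(k, x, p, N)
                * (1 - (s + t) * x / N) ** n
                for x in range(N + 1))
    return math.comb(N, k) * (p / q) ** k * total

spectral = np.array([[p_ik(i, k) for k in range(N + 1)] for i in range(N + 1)])
print(np.max(np.abs(spectral - Pn)))   # agreement up to floating-point error
```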
2. Theorems

We first concern ourselves with evaluating the excess of the number of balls in urn I over the mean of the stationary probability distribution for the model (1.2).

Theorem 2.1. Let $\{X_n : n = 0, 1, \ldots\}$ be the Markov chain with transition probabilities (1.2), where $a > -1$ and $b > -1$. Let $\pi = (\pi_0, \ldots, \pi_N)$ be the stationary probability distribution given by (ii) of Remark 1.3, and denote by $\langle\pi\rangle$ the mean of $\pi$, that is, $\langle\pi\rangle = \sum_{j=0}^{N} j\pi_j$. Set $Y_n = X_n - \langle\pi\rangle$ and put $e_n = E_i(Y_n)$, the expected value of $Y_n$ given $X_0 = i$. Then we have the following:

(i) $\langle\pi\rangle = N\left(\frac{a+1}{a+b+2}\right)$.

(ii) $e_n = (i - \langle\pi\rangle)\left[1 - \nu\left(\frac{a+b+2}{N(N+a+b+2)}\right)\right]^{n}$.

Remark 2.1. Suppose that the frequency of transitions is $\tau$ per second. Then in time $T$ there are $n = T\tau$ transitions. Write

$$
c = -\log\left[1 - \nu\left(\frac{a+b+2}{N(N+a+b+2)}\right)\right]^{\tau}.
$$

Then (ii) of the above theorem yields

$$
e_n = (i - \langle\pi\rangle)\exp[-cT]. \tag{2.1}
$$

Let $a$, $b$ and $\nu$ be taken as in Remark 1.2. Then

$$
\langle\pi\rangle \to N\left(\frac{s}{s+t}\right) \quad\text{and}\quad 1 - \nu\left(\frac{a+b+2}{N(N+a+b+2)}\right) \to 1 - \frac{s+t}{N}
$$

in the limit as $u \to \infty$, which shows

$$
e_n = \left\{i - N\left(\frac{s}{s+t}\right)\right\}\exp[-cT] \quad\text{with}\quad c = -\log\left[1 - \frac{s+t}{N}\right]^{\tau}. \tag{2.2}
$$

Newton's law of cooling in Kac [6] is the special case of (2.2) where $s = t = 1$.
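Theorem 2.1 can be checked numerically in the same spirit (again only an illustrative sketch; the parameter values are arbitrary): build the chain (1.2), propagate the distribution started at $X_0 = i$, and compare the mean excess $e_n$ with the closed form in (ii).

```python
import numpy as np

def dette_matrix(N, a, b, nu):
    """Transition matrix of Dette's model (1.2)."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        up = (1 - i / N) * (a + 1 + i) / (N + a + b + 2) * nu
        down = (i / N) * (N + b + 1 - i) / (N + a + b + 2) * nu
        if i < N:
            P[i, i + 1] = up
        if i > 0:
            P[i, i - 1] = down
        P[i, i] = 1 - up - down
    return P

N, a, b, nu, i0 = 12, 2.0, 0.5, 1.0, 12      # illustrative values; start in state i0
P = dette_matrix(N, a, b, nu)
mean_pi = N * (a + 1) / (a + b + 2)          # Theorem 2.1(i)
rho = 1 - nu * (a + b + 2) / (N * (N + a + b + 2))

dist = np.zeros(N + 1)
dist[i0] = 1.0
states = np.arange(N + 1)
for n in range(1, 21):
    dist = dist @ P
    e_n_exact = dist @ states - mean_pi      # E_i[X_n] - <pi> from the matrix
    e_n_formula = (i0 - mean_pi) * rho ** n  # closed form of Theorem 2.1(ii)
    assert abs(e_n_exact - e_n_formula) < 1e-10
print("mean excess after 20 steps:", e_n_exact)
```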