Math 5010 Introduction to Probability

Daniel Conus, Lehigh University
Davar Khoshnevisan, University of Utah
Firas Rassoul-Agha, University of Utah

Last scribed May 8, 2013

Lecture 1

1. A few examples

Example 1.1 (The Monty Hall problem; Steve Selvin, 1975). Three doors: behind one is a nice prize; behind the other two lie goats. You choose a door however you wish. The host (Monty Hall) knows where the prize is, and proceeds by showing you a door that has a goat behind it. Then, the host gives you the option to switch from the door you had chosen to the remaining unopened door. Should you switch?

The answer is "yes." Here is why: If you do not switch, then you win only if you had chosen the correct door to begin with, so your chance of winning is 1/3. On the other hand, if you do switch, then you win exactly when you chose a goat right from the start (for then the host opens the other door with a goat, and when you switch you get the prize). This happens with probability 2/3. In other words, your chance of winning doubles if you switch; therefore, you should.
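The switching argument can also be checked empirically. Below is a minimal Monte Carlo sketch in Python (not part of the original notes); the helper name simulate_monty_hall and the 100,000-trial count are illustrative choices.

```python
import random

def simulate_monty_hall(trials=100_000):
    """Estimate the chance of winning when staying vs. switching."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        prize = random.randrange(3)    # door hiding the prize
        choice = random.randrange(3)   # contestant's initial pick
        # Host opens a goat door that is neither the pick nor the prize.
        opened = random.choice([d for d in range(3) if d != choice and d != prize])
        # Switching means taking the one remaining unopened door.
        switched = next(d for d in range(3) if d != choice and d != opened)
        stay_wins += (choice == prize)
        switch_wins += (switched == prize)
    return stay_wins / trials, switch_wins / trials

if __name__ == "__main__":
    p_stay, p_switch = simulate_monty_hall()
    print(f"stay: {p_stay:.3f}  switch: {p_switch:.3f}")   # about 0.333 vs 0.667
```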
2. The sample space, events, and outcomes

We need a math model for describing "random" events that result from performing an "experiment." We cannot use "frequency of occurrence" as a model because it does not have the power of "prediction." For instance, if our definition of a fair coin is that the frequency of heads has to converge to 1/2 as the number of tosses grows to infinity, then we have done things backwards: we have used our prediction to make a definition. What we should do is first define a model, and then draw from it that prediction about the frequency of heads.

Here is how we will do things. First, we define a state space (or sample space) that we will denote by Ω. We think of the elements of Ω as outcomes of the experiment. Then, we specify a collection F of subsets of Ω. Each of these subsets is called an event. These events are the ones we are "allowed" to talk about the probability of. When Ω is finite, F can be taken to be the collection of all subsets of Ω. The next step is to assign a probability P(A) to every A ∈ F. We will talk about this after the following examples.

Example 1.2. Roll a six-sided die; what is the probability of rolling a six? First, write a sample space. Here is a natural one: Ω = {1, 2, 3, 4, 5, 6}. In this case, Ω is finite and we want F to be the collection of all subsets of Ω. That is,

F = { ∅, {1}, ..., {6}, {1, 2}, ..., {1, 6}, ..., {1, 2, ..., 6} }.

Example 1.3. Toss two coins; what is the probability that we get two heads? A natural sample space is

Ω = { (H1, H2), (H1, T2), (T1, H2), (T1, T2) }.

Once we have readied a sample space Ω and an event-space F, we need to assign a probability to every event. This assignment cannot be made at whim; it has to satisfy some properties.

3. Rules of probability

3.1. Aside: Algebra of sets. Before we set forth the axioms [or rules] of probability, we need to recall some notation from set theory ["modern math" in high school]. Suppose A and B are two sets. We recall the following:

• A ∪ B denotes the union of A and B. In other words, a point x is an element of A ∪ B if and only if x is in A or in B or both;
• A ∩ B denotes the intersection of A and B. In other words, a point x is in A ∩ B if and only if x is in both A and B;
• A^c denotes the "complement" of A. In other words, a point x is in A^c if and only if x is not in A.

We write A \ B for A ∩ B^c. The elements of A \ B are points that are in A but not in B.

Clearly, A ∪ B = B ∪ A and A ∩ B = B ∩ A. Also, A ∪ (B ∪ C) = (A ∪ B) ∪ C; therefore, we do not have to put parentheses, and will simply write A ∪ B ∪ C instead. In this way we can define A_1 ∪ ··· ∪ A_n inductively without any ambiguities. Similar remarks apply to intersections.

We say that A is a subset of B, and write A ⊆ B, when A is inside B; that is, all elements of A are also elements of B. It should be clear that if A ⊆ B, then A ∩ B = A and A ∪ B = B. Therefore, if A_1 ⊆ A_2 ⊆ ··· ⊆ A_n, then A_1 ∩ ··· ∩ A_n = A_1 and A_1 ∪ ··· ∪ A_n = A_n.

It should also be clear that A ∩ A^c = ∅ and A ∪ A^c = Ω. It is also not very hard to see that (A ∪ B)^c = A^c ∩ B^c. (Not being in A or B is the same thing as not being in A and not being in B.) Similarly, (A ∩ B)^c = A^c ∪ B^c.

Rule 1. 0 ≤ P(A) ≤ 1 for every event A.

Rule 2. P(Ω) = 1. "Something will happen with probability one."

Rule 3 (Addition rule). If A and B are disjoint events [i.e., A ∩ B = ∅], then the probability that at least one of the two occurs is the sum of the individual probabilities. More precisely put, P(A ∪ B) = P(A) + P(B).

Homework Problems

Exercise 1.1. You ask a friend to choose an integer N between 0 and 9. Let A = {N ≤ 5}, B = {3 ≤ N ≤ 7}, and C = {N is even and > 0}. List the points that belong to the following events:
(a) A ∩ B ∩ C
(b) A ∪ (B ∩ C^c)
(c) (A ∪ B) ∩ C^c
(d) (A ∩ B) ∩ (A ∪ C)^c

Exercise 1.2. Let A, B, and C be events in a sample space Ω. Prove the following identities:
(a) (A ∪ B) ∪ C = A ∪ (B ∪ C)
(b) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
(c) (A ∪ B)^c = A^c ∩ B^c
(d) (A ∩ B)^c = A^c ∪ B^c

Exercise 1.3. Let A, B, and C be arbitrary events in a sample space Ω. Express each of the following events in terms of A, B, and C using intersections, unions, and complements.
(a) A and B occur, but not C;
(b) A is the only one to occur;
(c) at least two of the events A, B, C occur;
(d) at least one of the events A, B, C occurs;
(e) exactly two of the events A, B, C occur;
(f) exactly one of the events A, B, C occurs;
(g) not more than one of the events A, B, C occurs.

Exercise 1.4. Two sets are disjoint if their intersection is empty. If A and B are disjoint events in a sample space Ω, are A^c and B^c disjoint? Are A ∩ C and B ∩ C disjoint? What about A ∪ C and B ∪ C?

Exercise 1.5. We roll a die 3 times. Give a probability space (Ω, F, P) for this experiment.

Exercise 1.6. An urn contains three chips: one black, one green, and one red. We draw one chip at random. Give a sample space Ω and a collection of events F for this experiment.

Exercise 1.7. If A_n ⊂ A_{n-1} ⊂ ··· ⊂ A_1, show that A_1 ∩ ··· ∩ A_n = A_n and A_1 ∪ ··· ∪ A_n = A_1.

Lecture 2

1. Algebra of events, continued

We say that A_1, ..., A_n are disjoint if A_1 ∩ ··· ∩ A_n = ∅. We say they are pairwise disjoint if A_i ∩ A_j = ∅ for all i ≠ j.

Example 2.1. The sets {1, 2}, {2, 3}, and {1, 3} are disjoint but not pairwise disjoint.

Example 2.2. If A and B are disjoint, then A ∪ C and B ∪ C are disjoint only when C = ∅. To see this, we write (A ∪ C) ∩ (B ∪ C) = (A ∩ B) ∪ C = ∅ ∪ C = C. On the other hand, A ∩ C and B ∩ C are obviously disjoint.

Example 2.3. If A, B, C, and D are some events, then the event "B, and at least one of A or C, but not D" is written as B ∩ (A ∪ C) \ D or, equivalently, B ∩ (A ∪ C) ∩ D^c. Similarly, the event "A but not B, or C and D" is written (A ∩ B^c) ∪ (C ∩ D).

Example 2.4. Now, to be more concrete, let A = {1, 3, 7, 13}, B = {2, 3, 4, 13, 15}, C = {1, 2, 3, 4, 17}, D = {13, 17, 30}. Then A ∪ C = {1, 2, 3, 4, 7, 13, 17}, B ∩ (A ∪ C) = {2, 3, 4, 13}, and B ∩ (A ∪ C) \ D = {2, 3, 4}. Similarly, A ∩ B^c = {1, 7}, C ∩ D = {17}, and (A ∩ B^c) ∪ (C ∩ D) = {1, 7, 17}.
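As a quick sanity check (not part of the original notes), the computations in Example 2.4 and the De Morgan identities can be verified with Python's built-in set type; the universe Omega = {0, ..., 30} used for the complements is an assumed choice for illustration.

```python
# A short sketch verifying Example 2.4 and De Morgan's laws with Python sets.
A = {1, 3, 7, 13}
B = {2, 3, 4, 13, 15}
C = {1, 2, 3, 4, 17}
D = {13, 17, 30}

print(A | C)                     # union: {1, 2, 3, 4, 7, 13, 17}
print(B & (A | C))               # intersection: {2, 3, 4, 13}
print((B & (A | C)) - D)         # set difference: {2, 3, 4}
print((A - B) | (C & D))         # {1, 7, 17}

# De Morgan's laws, with complements taken inside an assumed universe Omega.
Omega = set(range(31))
assert Omega - (A | B) == (Omega - A) & (Omega - B)
assert Omega - (A & B) == (Omega - A) | (Omega - B)
```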
Example 2.5. We want to write the solutions to |x − 5| + |x − 3| ≥ |x| as a union of disjoint intervals. For this, we first need to figure out what the absolute values are equal to, and there are four cases. If x ≤ 0, then the inequality becomes 5 − x + 3 − x ≥ −x, that is 8 ≥ x, which is always satisfied (since x ≤ 0). Next is the case 0 ≤ x ≤ 3, where we have 5 − x + 3 − x ≥ x, which means 8 ≥ 3x, and so 8/3 < x ≤ 3 is not allowed.
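A small numerical cross-check of the case analysis so far (again not in the original notes): test the inequality directly on a grid of points. The grid ranges and step sizes below are arbitrary illustrative choices.

```python
# Hedged sketch: check |x - 5| + |x - 3| >= |x| pointwise on a fine grid.
def holds(x):
    return abs(x - 5) + abs(x - 3) >= abs(x)

# Case x <= 0: the inequality should always hold.
assert all(holds(-k / 100) for k in range(0, 1001))

# Case 0 <= x <= 3: the inequality should fail exactly past x = 8/3.
bad = [k / 1000 for k in range(0, 3001) if not holds(k / 1000)]
print(min(bad))   # just above 8/3 ≈ 2.667, matching the case analysis
```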