The Law of Large Numbers and Its Applications

by Kelly Sedor

A project submitted to the Department of Mathematical Sciences in conformity with the requirements for Math 4301 (Honours Seminar), Lakehead University, Thunder Bay, Ontario, Canada. Copyright (c) 2015 Kelly Sedor.

Abstract

This honours project discusses the Law of Large Numbers (LLN). The LLN is an extremely intuitive and applicable result in the field of probability and statistics. Essentially, the LLN states that, with regard to statistical observations, as the number of trials increases, the sample mean gets increasingly close to the hypothetical mean. In this project, a brief historical context will be outlined, basic probability theory concepts will be reviewed, the various versions of the theorem will be stated, and one of these variations will be proved using two different methods. The project will then illustrate numerous situations in which the LLN can be applied in several different scientific and mathematical fields.

Acknowledgements

I would like to thank all of the friends, family and classmates that helped me reach this point. I would also especially like to thank Dr. Deli Li and Dr. Razvan Anisca for the support and encouragement they have given me over the past four years. I am very fortunate to have met so many wonderful people throughout my university career.

Contents

Abstract
Acknowledgements
Chapter 1. Introduction
  1. Historical Background of the Law of Large Numbers
  2. Law of Large Numbers Today
Chapter 2. Preliminaries
  1. Definitions
  2. Notation
Chapter 3. The Law of Large Numbers
  1. Theorems and Proofs
  2. The Weak Law vs. the Strong Law
Chapter 4. Applications of the Law of Large Numbers
  1. General Examples
  2. Monte Carlo Methods
Chapter 5. Further Discussion
Bibliography

CHAPTER 1

Introduction
1. Historical Background of the Law of Large Numbers

Early in the sixteenth century, the Italian mathematician Gerolamo Cardano (1501-1575) observed what would later become known as the Law of Large Numbers. He observed that in statistics the accuracy of observations tended to improve as the number of trials increased. However, he made no known attempt to prove this observation, and it was not until over two hundred years later that the conjecture was formally proved.

In the year 1713, the Swiss mathematician Jacob Bernoulli published the first proof of what Cardano had observed centuries earlier. Bernoulli recognized the intuitive nature of the problem as well as its importance, and spent twenty years formulating a complicated proof for the case of a binary random variable, first published posthumously in his book Ars Conjectandi. Bernoulli referred to this as his "Golden Theorem", but it quickly became known as "Bernoulli's Theorem". In his book, Bernoulli details the problem of drawing balls from an urn that contains both black and white balls. He describes trying to estimate the proportion of white balls to black if they are consecutively drawn from the urn and then replaced, and states that, when estimating the unknown proportion, any degree of accuracy can be achieved through an appropriate number of trials.

The official name of the theorem, "The Law of Large Numbers", was not coined until the year 1837, by the French mathematician Simeon Denis Poisson. Over the years, many more mathematicians contributed to the evolution of the Law of Large Numbers, refining it to make it what it is today. These mathematicians include Andrey Markov; Pafnuty Chebyshev, who proved a more general case of the Law of Large Numbers for averages; and Khinchin, who was the first to provide a complete proof for the case of arbitrary random variables. Additionally, several mathematicians created their own variations of the theorem.
Andrey Kolmogorov's Strong Law of Large Numbers, which describes the behaviour of the variance of a random variable, and Emile Borel's Law of Large Numbers, which describes the convergence in probability of the proportion of an event occurring during a given trial, are examples of these variations of Bernoulli's Theorem.

2. Law of Large Numbers Today

In the present day, the Law of Large Numbers remains an important limit theorem that is used in a variety of fields, including statistics, probability theory, and areas of economics and insurance. The LLN can be used to optimize sample sizes as well as to approximate calculations that could otherwise be troublesome. As we will see throughout our paper, there are numerous situations in which the Law of Large Numbers is effective, as well as others in which it is not. The goal of this project is to introduce the various forms of the Law of Large Numbers as well as to answer questions such as "How powerful is the Law of Large Numbers?", "How can we prove the Law of Large Numbers?" and "How can the Law of Large Numbers be used?"

Our paper is structured as follows. In Chapter 2 we will lay the groundwork for our theorems by providing key definitions and notation that will be used throughout the project. Chapter 3 will state the various forms of the Law of Large Numbers; we will focus primarily on the Weak Law of Large Numbers and the Strong Law of Large Numbers, and will answer one of the above questions by using several different methods to prove the Weak Law of Large Numbers. In Chapter 4 we will address the last question by exploring a variety of applications of the Law of Large Numbers, including approximations of sample sizes, Monte Carlo methods and more. We will conclude the project in Chapter 5 by providing topics for further discussion as well as important implications of the Law of Large Numbers.
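The convergence that the LLN describes is easy to watch in a short simulation. The sketch below is illustrative only (the function name and seed are mine, not part of the project): it rolls a fair six-sided die, whose true mean is 3.5, and reports how far the sample mean is from that value as the number of trials grows.

```python
import random

def sample_mean_error(n_trials, seed=1):
    """Roll a fair six-sided die n_trials times and return the absolute
    difference between the sample mean and the true mean 3.5."""
    rng = random.Random(seed)
    total = sum(rng.randint(1, 6) for _ in range(n_trials))
    return abs(total / n_trials - 3.5)

# As the LLN predicts, the error of the sample mean tends to shrink
# as the number of trials increases.
for n in (10, 1_000, 100_000):
    print(n, sample_mean_error(n))
```

With a seed fixed for reproducibility, the printed errors typically fall from tenths toward hundredths as n grows, which is exactly the behaviour the theorem makes precise.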
CHAPTER 2

Preliminaries

Before we begin our discussion of the Law of Large Numbers, we first introduce suitable notation and define the important terms needed for the discussion. In this chapter, some elementary definitions with corresponding examples will be provided, along with the notation that will be seen throughout the remainder of the project. In order to best understand the content of the project, one should have an appropriate grasp of important concepts such as expected value (or mean), variance, random variables, and probability distributions.

1. Definitions

Definition 2.1. A population is a group of objects about which inferences can be made.

Definition 2.2. A sample is a subset of the population.

Definition 2.3. A random sample of size n from the distribution of X is a collection of n independent random variables, each with the same distribution as X.

Definition 2.4. An experiment is a set of possible outcomes that can be repeated.

Definition 2.5. A random variable, X, is a function that assigns to every outcome of an experiment a real numerical value. If X can assume at most a finite or countably infinite number of possible values, X is said to be a discrete random variable. If X can assume any value in some interval or intervals of real numbers, and the probability that it assumes any specific given value is 0, then X is said to be a continuous random variable.

Example 2.6. Say, for instance, we are flipping a fair coin and are concerned with how many times the coin lands on heads. We can define the random variable X by

    X = 1, if the coin lands on heads;
        0, if the coin lands on tails.

This is a classic example of a random variable with a Bernoulli distribution, which we will see again later in our discussion.

Definition 2.7. The probability distribution of a discrete random variable X is a function f that assigns a probability to each potential value of X.
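The coin flip of Example 2.6, combined with Definition 2.3, can be sketched in code. The fragment below is an illustration under my own naming conventions (the function, default probability, and seed are assumptions, not from the project): it draws a random sample of size n from the Bernoulli distribution of X.

```python
import random

def bernoulli_sample(n, p=0.5, seed=7):
    """Return a random sample of size n from a Bernoulli(p) distribution:
    each draw is 1 if the coin lands on heads, 0 if it lands on tails."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

flips = bernoulli_sample(10)
# Each entry is 0 or 1: one observed value of the random variable X per flip.
```

Because the draws use independent uniform values, the list behaves like the "random sample of size n" of Definition 2.3.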
Additionally, f must satisfy the following necessary and sufficient conditions:
(1) f(x) = P(X = x) for all real x,
(2) f(x) ≥ 0,
(3) ∑_x f(x) = 1.

Definition 2.8. The probability density function of a continuous random variable X, denoted by f(x), is a function such that
(1) f(x) ≥ 0 for all x in R,
(2) ∫_{−∞}^{+∞} f(x) dx = 1,
(3) P(a < X < b) = ∫_a^b f(x) dx for all a < b.

Definition 2.9. Let X be a discrete random variable with density f(x). The expected value or mean of X, denoted E(X) or µ, is

    µ = E(X) = ∑_x x f(x),

provided that ∑_x x f(x) is finite.

Example 2.10. A drug is used to maintain a steady heart rate in patients who have suffered a mild heart attack. Let X denote the number of heartbeats per minute obtained per patient, and consider the hypothetical density given in the table below. What is the average heart rate obtained by all patients receiving the drug, that is, what is E(X)?

Table 1.2
x    | 40   | 60   | 68   | 70   | 72   | 80   | 100
f(x) | 0.01 | 0.04 | 0.05 | 0.80 | 0.05 | 0.04 | 0.01

    E(X) = ∑_x x f(x) = 40(0.01) + 60(0.04) + 68(0.05) + ... + 100(0.01) = 70.

Definition 2.11. Let X be a random variable with mean µ. The variance of X, denoted Var(X) or σ², is

    σ² = Var(X) = E((X − µ)²) = E(X²) − µ² = E(X²) − (E(X))².

Definition 2.12. The standard deviation of a random variable X, denoted σ, is the positive square root of the variance.

Example 2.13. Let X and f(x) be the same as in Example 2.10; compute the variance and standard deviation for the given data.
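The arithmetic in Example 2.10 (and the computation asked for in Example 2.13) follows directly from Definitions 2.9 and 2.11; a minimal Python check, with variable names of my own choosing:

```python
# Density of X from Table 1.2 in Example 2.10.
density = {40: 0.01, 60: 0.04, 68: 0.05, 70: 0.80, 72: 0.05, 80: 0.04, 100: 0.01}

mean = sum(x * f for x, f in density.items())               # E(X), approx. 70
second_moment = sum(x * x * f for x, f in density.items())  # E(X^2)
variance = second_moment - mean ** 2                        # Var(X) = E(X^2) - (E(X))^2
std_dev = variance ** 0.5                                   # sigma, the positive root
```

Up to floating-point rounding this recovers E(X) = 70 and gives Var(X) = 26.4, so σ = √26.4 ≈ 5.14 heartbeats per minute.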
