Part V - Chance Variability


Dr. Joseph Brennan (Math 148, BU)

Law of Averages

In Chapter 13 we discussed the Kerrich coin-tossing experiment. Kerrich was a South African who spent World War II as a prisoner of the Nazis. He passed the time by flipping a coin 10,000 times, faithfully recording the results.

Law of Averages: If an experiment is independently repeated a large number of times, the percentage of occurrences of a specific event E will be close to the theoretical probability of the event occurring, but off by some amount - the chance error.

As the coin toss was repeated, the percentage of heads approached its theoretical expectation: 50%.

Caution: The Law of Averages is commonly misunderstood as the Gambler's Fallacy: "By some magic everything will balance out. With a run of 10 heads, a tail is becoming more likely." This is false. After a run of 10 heads the probability of tossing a tail is still 50%!

In fact, the number of heads above half grows as the experiment proceeds. A gambler betting on tails and hoping for balance would be devastated: after 10,000 tosses, tails appeared about 134 fewer times than heads.

In our coin-flipping experiment, the number of heads will be around half the number of tosses, plus or minus the chance error. As the number of tosses goes up, the chance error gets larger in absolute terms. However, viewed relatively, the chance error as a percentage of the number of tosses decreases.
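Both trends are easy to see in a quick simulation. Below is a minimal sketch (Python with NumPy); the toss counts and the seed are illustrative choices, not Kerrich's data.

```python
import numpy as np

rng = np.random.default_rng(seed=148)  # illustrative seed, not part of the original experiment

for n in [100, 1_000, 10_000, 100_000]:
    tosses = rng.integers(0, 2, size=n)   # 1 = heads, 0 = tails
    heads = tosses.sum()
    chance_error = heads - n / 2           # heads above (or below) half
    percent_heads = 100 * heads / n
    print(f"n = {n:>6}:  heads = {heads:>6},  "
          f"chance error = {chance_error:+8.1f},  "
          f"percent heads = {percent_heads:6.2f}%")

# Typical output: the absolute chance error tends to drift upward with n,
# while the percentage of heads settles toward 50%.
```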
Sample Spaces

Recall that a sample space S lists all the possible outcomes of a study.

Example (3 coins): We can record an outcome as a string of heads and tails, such as HHT. The corresponding sample space is

S = {HHH, HHT, HTH, THH, TTH, THT, HTT, TTT}.

It is often more convenient to deal with outcomes as numbers rather than as verbal statements. Suppose we are interested in the number of heads. Let X denote the number of heads in 3 tosses. For instance, if the outcome is HHT, then X = 2. The possible values of X are 0, 1, 2, and 3. For every outcome from S, X takes a particular value:

Outcome:  HHH  HHT  HTH  THH  TTH  THT  HTT  TTT
X:          3    2    2    2    1    1    1    0

Random Variable

A random variable is an unknown subject to random change; often it is an unknown numerical result of a study. A random variable has a numerical sample space in which each outcome has an assigned probability, and the assigned probabilities are not necessarily equal. The quantity X in the example above is a random variable because its value is unknown unless the tossing experiment is performed.

Definition: A random variable is an unknown numerical result of a study. Mathematically, a random variable is a function which assigns a numerical value to each outcome in a sample space S.

Example (3 coins): We now have two different sample spaces for our 3-coin experiment:

S = {HHH, HHT, HTH, THH, TTH, THT, HTT, TTT}
S* = {0, 1, 2, 3}

The sample space S describes 8 equally likely outcomes for our coin flips, while S* describes 4 outcomes that are not equally likely. Recall that S* represents the values of the random variable X, the number of heads resulting from three coin flips.

P(X = 0) = P(TTT) = (1/2)(1/2)(1/2) = 1/8
P(X = 1) = P(HTT or TTH or THT) = 3/8
P(X = 2) = 3/8
P(X = 3) = 1/8

S* does not contain information about the order of heads and tails.
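These probabilities can be verified by brute-force enumeration of the 8 equally likely outcomes; a short sketch using only the Python standard library:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# Enumerate the 8 equally likely outcomes of three coin flips.
outcomes = ["".join(flips) for flips in product("HT", repeat=3)]

# X = number of heads in each outcome.
counts = Counter(outcome.count("H") for outcome in outcomes)

for x in sorted(counts):
    prob = Fraction(counts[x], len(outcomes))
    print(f"P(X = {x}) = {prob}")   # prints 1/8, 3/8, 3/8, 1/8
```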
Discrete and Continuous Random Variables

Discrete Random Variables: A discrete random variable has possible values that can be listed. Mathematically, we say the set of possible values is countable. The variable X in Example (3 coins) is discrete. Simple actions are discrete: rolling dice, flipping coins, dealing cards, drawing names from a hat, spinning a wheel, and so on.

Continuous Random Variables: A continuous random variable takes values in an interval of numbers. It is impossible to list or count all the possible values of a continuous random variable; mathematically, we say the set of possible values is uncountable. For data on heights of people, the average height x̄ is a continuous random variable which takes values in some interval, say [0, 200] (in inches).

Probability Distributions

Any random variable X, discrete or continuous, can be described with:

  • A probability distribution.
  • A mean and standard deviation.

The probability distribution of a random variable X is defined by specifying the possible values of X and their probabilities. For discrete random variables the probability distribution is given by a probability table and is represented graphically as a probability histogram. For continuous random variables the probability distribution is given by a probability density function and is represented graphically by a density curve. Recall that we discussed density curves in Part II.

The Mean of a Random Variable X

In Part II (Descriptive Statistics) we discussed the mean and standard deviation, x̄ and s, of data sets as measures of the center and spread of the observations. Similar definitions exist for random variables: the mean of the random variable X, denoted μ, measures the center of the probability distribution. The mean μ is computed from the probability distribution of X as a weighted average of the possible values of X, with the weights being the probabilities of those values.

The Expected Value

The mean μ of a random variable X is often called the expected value of X. The observed value of a random variable is expected to be around its expected value; the difference is the chance error. In other words,

observed value of X = μ + chance error.

We never expect a random variable X to be exactly equal to its expected value μ. The likely size of the chance error is described by the standard deviation, denoted σ. The standard deviation σ measures the distribution's spread and is also computed from the probability distribution of X.

Random Variable X and Population

A population of interest is often characterized by a random variable X.

Example: Suppose we are interested in the distribution of American heights. The random variable X (height) describes the population (US people). The distribution of X is called the population distribution, and the distribution parameters, μ and σ, are the population parameters. Population parameters are fixed constants which are usually unknown and need to be estimated. A sample (data set) should be viewed as values (realizations) of the random variable X drawn from the probability distribution. The sample mean x̄ and standard deviation s estimate the unknown population mean μ and standard deviation σ.

Discrete Random Variables

The distribution of a discrete random variable X is summarized in the distribution table:

Value of X:   x1  x2  x3  ...  xk
Probability:  p1  p2  p3  ...  pk

The symbols xi represent the distinct possible values of X, and pi is the probability assigned to xi. The probabilities satisfy

p1 + p2 + ... + pk = 1 (or 100%),

because all possible values of X are listed in the sample space S = {x1, x2, ..., xk}. The events X = xi and X = xj, i ≠ j, are disjoint since the random variable X cannot take two distinct values at the same time.

Example (Fish)

A resort on a lake claims that the distribution of the number of fish X in the daily catch of an experienced fisherman is given below.

x:         0     1     2     3     4     5     6     7
P(X = x):  0.02  0.08  0.10  0.18  0.25  0.20  0.15  0.02

Find the following:

(a) P(X ≥ 5) = 0.37
(b) P(2 < X < 5) = 0.43
(c) y such that P(X ≤ y) = 0.2: y = 2
(d) y such that P(X > y) = 0.37: y = 4
(e) P(X ≠ 5) = 1 − 0.20 = 0.80
(f) P(X < 2 or X = 6) = 0.25
(g) P(X < 2 and X > 4) = 0
(h) P(X = 9) = 0

Probability Histograms

The probability distribution of a discrete random variable X is displayed graphically as a probability histogram. There are k bars, where k is the number of possible values of X. The i-th bar is centered at xi and has unit width and height pi. The areas of the bars of the probability histogram display the assignment of probabilities to the possible values of X.
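The Fish example answers can be checked directly from the table. Here is a minimal Python sketch; the dictionary simply encodes the claimed distribution, and the rounding is only to keep the printed output tidy.

```python
# Claimed distribution of the daily catch X (from the Fish example).
dist = {0: 0.02, 1: 0.08, 2: 0.10, 3: 0.18, 4: 0.25, 5: 0.20, 6: 0.15, 7: 0.02}

def prob(event):
    """Total probability of the possible values x satisfying the event."""
    return round(sum(p for x, p in dist.items() if event(x)), 2)

print(prob(lambda x: x >= 5))            # (a) 0.37
print(prob(lambda x: 2 < x < 5))         # (b) 0.43
print(prob(lambda x: x != 5))            # (e) 0.8
print(prob(lambda x: x < 2 or x == 6))   # (f) 0.25
print(prob(lambda x: x < 2 and x > 4))   # (g) 0.0  (impossible event)
print(prob(lambda x: x == 9))            # (h) 0.0  (9 is not a possible value)
```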
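The mean and standard deviation of the catch follow from the same table: μ is the probability-weighted average of the values, and σ is the square root of the probability-weighted squared deviations (the usual definitions for a discrete random variable). The same table can then be drawn as a probability histogram; the plotting portion below is a sketch that assumes matplotlib is installed.

```python
import math
import matplotlib.pyplot as plt

dist = {0: 0.02, 1: 0.08, 2: 0.10, 3: 0.18, 4: 0.25, 5: 0.20, 6: 0.15, 7: 0.02}

# Mean (expected value): weighted average of the values, weights = probabilities.
mu = sum(x * p for x, p in dist.items())

# Standard deviation: square root of the probability-weighted squared deviations.
sigma = math.sqrt(sum((x - mu) ** 2 * p for x, p in dist.items()))

print(f"mu = {mu:.2f}, sigma = {sigma:.2f}")   # roughly mu = 3.86, sigma = 1.61

# Probability histogram: one unit-width bar per possible value, height = probability.
plt.bar(list(dist.keys()), list(dist.values()), width=1.0, edgecolor="black")
plt.xlabel("x (number of fish caught)")
plt.ylabel("P(X = x)")
plt.title("Probability histogram for the daily catch")
plt.show()
```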