Factors Affecting Probability Matching Behavior

Total Pages: 16

File Type: PDF, Size: 1020 KB

Factors Affecting Probability Matching Behavior

Jie Gao

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy under the Executive Committee of the Graduate School of Arts and Sciences

COLUMBIA UNIVERSITY
2013

© 2013 Jie Gao. All rights reserved.

ABSTRACT

FACTORS AFFECTING PROBABILITY MATCHING BEHAVIOR

Jie Gao

In life, people commonly face repeated decisions under risk or uncertainty. While normative economic models assume that people tend to make choices that maximize their expected utility, suboptimal behavior – in particular, probability matching – is frequently observed in research on repeated decisions. Probability matching is the tendency to match the prediction probabilities of each outcome to the observed outcome probabilities in a random binary prediction task. For example, when people are faced with a sequence of predictions, such as repeatedly predicting the outcome of rolling a die with four sides colored green and two sides colored red, most people allocate about two-thirds of their predictions to green and one-third to red. The optimal strategy, referred to as maximizing, is to choose the outcome with the higher probability on every trial of the prediction task.

Various causes for probability matching have been proposed during the past several decades. Here it is proposed that implicit adoption of a perfect prediction goal by decision makers might tend to elicit probability matching behavior. Thus, one factor that might affect the prevalence of probability matching behavior (investigated in Studies 1 and 2) is the type of performance goal. The manipulation in Study 1 contrasted single-trial prediction with prediction of four-trial sequences, which is hypothesized to create an implicit perfect prediction goal for the sequence. In Study 2, three levels of goal were explicitly manipulated for each sequence: a perfect prediction goal, an 80% correct goal, and a 60% correct goal. In both studies it was predicted that more matching behavior would be observed for those who have a goal of perfect prediction than for those who have a more reasonable (lower) goal. The results of both studies, conducted in an online worker marketplace, supported the goal-level hypothesis.

The second factor proposed to affect the prevalence of probability matching is the type of conceptual schema describing the events to be predicted: independent events or complementary events. Study 3 investigated the effects of schema type and abstraction level of context on matching or maximizing behavior. Three abstraction levels of stories were included: abstract, concrete random devices, and real-world stories. The main hypothesis was that when the two options to be predicted are independent events, less matching and more maximizing behavior should be observed. Data from Study 3 supported the hypothesis that independent events tend to elicit more maximizing behavior. No effects of abstraction level were observed.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
CHAPTER I. INTRODUCTION
  Research Questions
CHAPTER II. REVIEW OF LITERATURE
  Overview of Probability Matching Research
    1. Reinforcement Learning
    2. Expectation Matching
    3. Dual-Systems Account
    4. Pattern-Search Account
    5. The Role of Unrealistically High Goals
    6. Summary and Discussion of the Previous Literature
  Misconception of Randomness
  Description-based vs. Experience-based Decision Making
  Goals and Decision Making
  Pragmatic Reasoning Schemas
CHAPTER III. STUDY ONE
  Introduction
  Method
  Results
  Discussion
CHAPTER IV. STUDY TWO
  Introduction
  Method
  Results
  Discussion
CHAPTER V. STUDY THREE
  Introduction
  Method
  Results
  Discussion
CHAPTER VI. GENERAL DISCUSSION
REFERENCES
APPENDIX A. STUDY 1
  1. Probability of choosing the more likely outcome
  2. Earned payoff information
  3. ANOVA tables
  4. Analysis of individual differences
APPENDIX B. STUDY 2
  1. Probability of choosing the more likely outcome
  2. Earned payoff information
  3. ANOVA tables
  4. Survey analysis
APPENDIX C. STUDY 3
  1. Earned payoff information
  2. ANOVA table

LIST OF TABLES

Table 1. Comparison of Expectation Matching and Win-Stay-Lose-Shift, summarized from Otto, Taylor, and Markman (2011)
Table 2. Comparison of Pattern Search and Win-Stay-Lose-Shift, summarized from Gaissmaier and Schooler (2008)
Table 3. Illustration of Counterbalanced Conditions in Study 1
Table 4. Number of Participants in Each Condition of Study 1
Table 5. Study Design and Number of Participants in Each Condition of Study 2
Table 6. Descriptive Statistics for the Matching Variable for Different Goal-Level Groups
Table 7. Conditions and Stories in Study 3
Table 8. Descriptive Statistics for Each Condition
Table 9. Bonus Information for Study 1 (Earned and Expected)
Table 10. ANOVA Table for No-Bias Condition (DV = Matching)
Table 11. ANOVA Table for Bias Condition (DV = Matching)
Table 12. ANOVA Table for Bias Condition (DV = Maximizing)
Table 13. Individual Differences: Frequencies of Participants Showing Various Patterns of Matching Behavior
Table 14. Individual Differences: Frequencies of Participants Showing Various Patterns of Maximizing Behavior
Table 15. Expected Score Calculation Results
Table 16. Payoff Information for Study 2 (Earned and Expected Bonus)
Table 17. ANOVA Table for Matching
Table 18. ANOVA Table for Maximizing
Table 19. Distribution of Coded Responses to Survey Question 1 in Study 2
Table 20. Distribution of Responses to Survey Question 2a in Study 2
Table 21. Distribution of Responses to Survey Question 2b in Study 2
Table 22. Distribution (Frequencies) of Responses to Survey Question 3 in Study 2
Table 23. Distribution of Responses (Mean estimated ...)
Table 24. Distribution of Responses to Survey Question 5 in Study 2
Table 25. Payoff Information (Earned and Expected) for Different Types of Context
Table 26. Payoff Information (Earned and Expected) for Different Schema Types
Table 27. ANOVA Table for Predictions

LIST OF FIGURES

Figure 1. Interface for single-trial prediction tasks
Figure 2. Interface for four-trial set prediction tasks
Figure 3. The proportion of participants using a probability matching strategy, for no-bias conditions (single-trial and sequence conditions)
Figure 4. The proportion of participants using a probability matching strategy, for bias conditions (single-trial and sequence conditions)
Figure 5. The proportion using a maximizing strategy, for bias conditions only (single-trial and sequence conditions)
Figure 6. The distribution of individuals' probabilities of choosing the red light over the first 20 trials (left panels) and last 20 trials (right panels) in the no-bias (50%) condition
Figure 7. The distribution of individuals' probabilities of choosing the more likely outcome over the first 20 trials (left panels) and last 20 trials (right panels) in the bias (75%) conditions
Figure 8. The proportion of participants using a matching strategy across sets in the three goal conditions
Figure 9. Error-bar plot for the mean of the matching variables
Figure 10. The proportion of participants using a maximizing strategy across sets in the three goal conditions
Figure 11. Profile plot of the probability of choosing the more likely outcome (Schema Type x Sets)
Figure 12. Profile plot of the probability of choosing the more likely outcome (Schema Type x Context Level)
Figure 13. Profile plot of the probability of choosing the more likely outcome, under the different contexts
Figure 14. The probability of predicting the more likely outcome for bias conditions, and the probability of choosing the red light for no-bias conditions
Figure 15. The probability of choosing the more likely event across the 5-trial sets in the three goal conditions

ACKNOWLEDGEMENTS

There are many people I wish to thank for their contributions to this dissertation. First, I want to thank my advisor, Professor James Corter, for his guidance, support, and encouragement during my wonderful journey of doctoral study. I also want to gratefully acknowledge the financial support of the Department of Human Development in Teachers College of Columbia University. In addition, I wish to thank Professors Joanna P. Williams, Matthew S. Johnson, Elissa L. Perry, and Eric Johnson for their helpful feedback and advice. I am grateful to HuiYun Tseng, Yichun Chen, and Yunjin Rho for their invaluable advice as senior fellow students. Additionally, I would like to thank Yuan Zhang, Daoquan Li, Qing Xia, Wei Huang, Shijiao Jiang, Debbie and Eric Carson and their family for their enduring friendship, emotional support, and laughter throughout graduate school. Finally, I want to express my gratitude to my entire family, and especially to my husband, Yu Li, for their love, encouragement, and faith in me.

Chapter I. Introduction

Probability matching is the tendency to match the prediction probabilities of each outcome to the observed outcome probabilities in a random binary prediction task (Fiorina, 1971; Fantino & Esfandiari, 2002; Shanks, Tunney, & McCarthy, 2002). This is non-optimal or "irrational" behavior. For example, when people are faced with a sequence of prediction tasks (repeated trials), such as predicting the outcome of rolling a die with four sides colored green and two sides colored red, most people allocate about two-thirds of their predictions to green and one-third to red. This probability-matching prediction strategy is sub-optimal; the optimal strategy is maximizing, meaning to choose the outcome with the higher probability on every trial of the prediction task.

In real life, people do face repeated decisions under risk or uncertainty. For example, in the stock market, people make repeated decisions to buy or sell a certain stock whose price may increase or decrease; companies make repeated decisions about hiring students who graduated with certain degrees or from certain colleges, believing that these qualifications affect the probability of success of the new employee.
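To make the cost of matching concrete for the die example: if the more likely outcome occurs with probability p = 2/3, maximizing yields an expected per-trial accuracy of p ≈ 0.67, whereas matching yields p² + (1 − p)² ≈ 0.56. The Python sketch below is illustrative only (it is not taken from the dissertation; the function names and trial counts are arbitrary) and checks both figures by simulation.

```python
import random

def expected_accuracy_matching(p):
    """Expected per-trial accuracy when predictions are allocated in proportion
    to the outcome probabilities (probability matching)."""
    return p * p + (1 - p) * (1 - p)

def expected_accuracy_maximizing(p):
    """Expected per-trial accuracy when the more likely outcome is predicted
    on every trial (maximizing)."""
    return max(p, 1 - p)

def simulate(strategy_p, outcome_p, n_trials=10_000, seed=0):
    """Monte Carlo check: predict 'green' with probability strategy_p when the
    true probability of 'green' is outcome_p; return the hit rate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        prediction = rng.random() < strategy_p
        outcome = rng.random() < outcome_p
        hits += prediction == outcome
    return hits / n_trials

p = 2 / 3  # the die example: four green sides out of six
print(expected_accuracy_matching(p))    # 5/9, about 0.556
print(expected_accuracy_maximizing(p))  # 2/3, about 0.667
print(simulate(p, p))                   # close to 0.556
print(simulate(1.0, p))                 # close to 0.667
```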
Recommended publications
  • Effective Program Reasoning Using Bayesian Inference
    EFFECTIVE PROGRAM REASONING USING BAYESIAN INFERENCE. Sulekha Kulkarni. A DISSERTATION in Computer and Information Science, presented to the Faculties of the University of Pennsylvania in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2020. Supervisor of Dissertation: Mayur Naik, Professor, Computer and Information Science. Graduate Group Chairperson: Mayur Naik, Professor, Computer and Information Science. Dissertation Committee: Rajeev Alur, Zisman Family Professor of Computer and Information Science; Val Tannen, Professor of Computer and Information Science; Osbert Bastani, Research Assistant Professor of Computer and Information Science; Suman Nath, Partner Research Manager, Microsoft Research, Redmond.

    To my father, who set me on this path, to my mother, who leads by example, and to my husband, who is an infinite source of courage.

    Acknowledgments. I want to thank my advisor Prof. Mayur Naik for giving me the invaluable opportunity to learn and experiment with different ideas at my own pace. He supported me through the ups and downs of research, and helped me make the Ph.D. a reality. I also want to thank Prof. Rajeev Alur, Prof. Val Tannen, Prof. Osbert Bastani, and Dr. Suman Nath for serving on my dissertation committee and for providing valuable feedback. I am deeply grateful to Prof. Alur and Prof. Tannen for their sound advice and support, and for going out of their way to help me through challenging times. I am also very grateful for Dr. Nath's able and inspiring mentorship during my internship at Microsoft Research, and during the collaboration that followed. Dr. Aditya Nori helped me start my Ph.D.
  • 1 Dependent and Independent Events 2 Complementary Events 3 Mutually Exclusive Events 4 Probability of Intersection of Events
    1 Dependent and Independent Events. Let A and B be events. We say that A is independent of B if P(A | B) = P(A). That is, the marginal probability of A is the same as the conditional probability of A, given B. This means that the probability of A occurring is not affected by B occurring. It turns out that, in this case, B is independent of A as well. So, we just say that A and B are independent. We say that A depends on B if P(A | B) ≠ P(A). That is, the marginal probability of A is not the same as the conditional probability of A, given B. This means that the probability of A occurring is affected by B occurring. It turns out that, in this case, B depends on A as well. So, we just say that A and B are dependent. Consider these events from the card draw: A = drawing a king, B = drawing a spade, C = drawing a face card. Events A and B are independent. If you know that you have drawn a spade, this does not change the likelihood that you have actually drawn a king. Formally, the marginal probability of drawing a king is P(A) = 4/52. The conditional probability that your card is a king, given that it is a spade, is P(A | B) = 1/13, which is the same as 4/52. Events A and C are dependent. If you know that you have drawn a face card, it is much more likely that you have actually drawn a king than it would be ordinarily.
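The king/spade/face-card example can be checked by brute-force enumeration of the deck. The snippet below is a minimal sketch (not part of the original notes; the helper names are made up) that reproduces P(A) = 4/52, P(A | B) = 1/13, and the dependence of A on C.

```python
from fractions import Fraction
from itertools import product

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['spades', 'hearts', 'diamonds', 'clubs']
deck = list(product(ranks, suits))  # 52 equally likely cards

def prob(event, given=None):
    """P(event | given) by counting equally likely outcomes."""
    space = [c for c in deck if given(c)] if given else deck
    return Fraction(sum(event(c) for c in space), len(space))

is_king = lambda c: c[0] == 'K'
is_spade = lambda c: c[1] == 'spades'
is_face = lambda c: c[0] in ('J', 'Q', 'K')

print(prob(is_king))                  # 1/13 (= 4/52)
print(prob(is_king, given=is_spade))  # 1/13 -> A and B are independent
print(prob(is_king, given=is_face))   # 1/3  -> A and C are dependent
```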
  • Lecture 2: Modeling Random Experiments
    Department of Mathematics, Ma 3/103, KC Border. Introduction to Probability and Statistics, Winter 2021. Lecture 2: Modeling Random Experiments. Relevant textbook passages: Pitman [5], Sections 1.3–1.4, pp. 26–46; Larsen–Marx [4], Sections 2.2–2.5, pp. 18–66.

    2.1 Axioms for probability measures. Recall from last time that a random experiment is an experiment that may be conducted under seemingly identical conditions, yet give different results. Coin tossing is everyone's go-to example of a random experiment. The way we model random experiments is through the use of probabilities. We start with the sample space Ω, the set of possible outcomes of the experiment, and consider events, which are subsets E of the sample space. (We let F denote the collection of events.)

    2.1.1 Definition. A probability measure P or probability distribution attaches to each event E a number between 0 and 1 (inclusive) so as to obey the following axioms of probability: Normalization: P(∅) = 0 and P(Ω) = 1. Nonnegativity: for each event E, we have P(E) ≥ 0. Additivity: if E ∩ F = ∅, then P(E ∪ F) = P(E) + P(F). Note that while the domain of P is technically F, the set of events, that is, P : F → [0, 1], we may also refer to P as a probability (measure) on Ω, the set of realizations.

    2.1.2 Remark. To reduce the visual clutter created by layers of delimiters in our notation, we may simply write something like P(f(ω) = 1) instead of P({ω ∈ Ω : f(ω) = 1}), and we may write P(ω) instead of P({ω}).
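As a quick sanity check of the three axioms, the sketch below defines a uniform probability measure on a six-element sample space (the fair-die example) and verifies normalization, nonnegativity, and additivity for disjoint events. It is an illustration only, not code from the lecture notes.

```python
from fractions import Fraction

# A finite sample space for one roll of a fair die; F is taken to be all subsets.
omega = frozenset(range(1, 7))

def P(event):
    """Uniform probability measure on omega, defined for subsets of omega."""
    event = frozenset(event)
    assert event <= omega
    return Fraction(len(event), len(omega))

# Normalization
assert P(set()) == 0 and P(omega) == 1
# Nonnegativity
assert all(P({k}) >= 0 for k in omega)
# Additivity for disjoint events E and F
E, F = {1, 2}, {5}
assert E.isdisjoint(F) and P(E | F) == P(E) + P(F)
print("all three axioms hold for this measure")
```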
  • Probability and Counting Rules
    CHAPTER 4: Probability and Counting Rules

    Objectives. After completing this chapter, you should be able to:
    1. Determine sample spaces and find the probability of an event, using classical probability or empirical probability.
    2. Find the probability of compound events, using the addition rules.
    3. Find the probability of compound events, using the multiplication rules.
    4. Find the conditional probability of an event.
    5. Find the total number of outcomes in a sequence of events, using the fundamental counting rule.
    6. Find the number of ways that r objects can be selected from n objects, using the permutation rule.
    7. Find the number of ways that r objects can be selected from n objects without regard to order, using the combination rule.
    8. Find the probability of an event, using the counting rules.

    Outline: 4–1 Introduction; 4–2 Sample Spaces and Probability; 4–3 The Addition Rules for Probability; 4–4 The Multiplication Rules and Conditional Probability; 4–5 Counting Rules; 4–6 Probability and Counting Rules; 4–7 Summary.

    Statistics Today: Would You Bet Your Life? Humans not only bet money when they gamble, but also bet their lives by engaging in unhealthy activities such as smoking, drinking, using drugs, and exceeding the speed limit when driving. Many people don't care about the risks involved in these activities since they do not understand the concepts of probability.
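For readers who want to experiment, the permutation, combination, and counting-rule calculations in a chapter like this map directly onto Python's standard library (assuming Python 3.8+ for `math.perm` and `math.comb`). The example below is not from the textbook; the poker-hand probability is just a standard illustration of combining the rules.

```python
from math import comb, perm

# Permutation rule: ordered selections of r objects from n distinct objects.
print(perm(10, 3))   # 720 ways to award gold/silver/bronze among 10 runners

# Combination rule: unordered selections of r objects from n.
print(comb(52, 5))   # 2,598,960 possible five-card hands

# Probability via counting rules: P(a five-card hand contains exactly two kings)
favorable = comb(4, 2) * comb(48, 3)   # choose 2 kings, then 3 non-kings
print(favorable / comb(52, 5))         # about 0.0399
```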
  • 1 — a Single Random Variable
    1 | A SINGLE RANDOM VARIABLE. Questions involving probability abound in Computer Science:
    • What is the probability of the PWF world falling over next week?
    • What is the probability of one packet colliding with another in a network?
    • What is the probability of an undergraduate not turning up for a lecture?
    When addressing such questions there are often added complications: the question may be ill posed or the answer may vary with time. Which undergraduate? What lecture? Is the probability of turning up different on Saturdays? Let's start with something which appears easy to reason about...

    Introduction | Throwing a die. Consider an experiment or trial which consists of throwing a mathematically ideal die. Such a die is often called a fair die or an unbiased die. Common sense suggests that:
    • The outcome of a single throw cannot be predicted.
    • The outcome will necessarily be a random integer in the range 1 to 6.
    • The six possible outcomes are equiprobable, each having a probability of 1/6.
    Without further qualification, serious probabilists would regard this collection of assertions, especially the second, as almost meaningless. Just what is a random integer? Giving proper mathematical rigour to the foundations of probability theory is quite a taxing task. To illustrate the difficulty, consider probability in a frequency sense. Thus a probability of 1/6 means that, over a long run, one expects to throw a 5 (say) on one-sixth of the occasions that the die is thrown. If the actual proportion of 5s after n throws is p_5(n), it would be nice to say: lim_{n→∞} p_5(n) = 1/6. Unfortunately this is utterly bogus mathematics! This is simply not a proper use of the idea of a limit.
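Although the frequency "limit" above is not legitimate mathematics, the empirical tendency it gestures at is easy to see by simulation. The sketch below is illustrative only (the seed and throw counts are arbitrary, not from the notes); it tracks the relative frequency p_5(n) of throwing a 5 as n grows.

```python
import random

def proportion_of_fives(n_throws, seed=42):
    """Relative frequency p_5(n) of throwing a 5 in n throws of a fair die."""
    rng = random.Random(seed)
    fives = sum(rng.randint(1, 6) == 5 for _ in range(n_throws))
    return fives / n_throws

for n in (60, 600, 6_000, 600_000):
    print(n, round(proportion_of_fives(n), 4))
# The proportions drift toward 1/6 ≈ 0.1667, illustrating the frequency
# interpretation without claiming a formal limit.
```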
  • Probabilities, Random Variables and Distributions A
    Probabilities, Random Variables and Distributions (Appendix A)

    Contents
    A.1 Events and Probabilities
        A.1.1 Conditional Probabilities and Independence
        A.1.2 Bayes' Theorem
    A.2 Random Variables
        A.2.1 Discrete Random Variables
        A.2.2 Continuous Random Variables
        A.2.3 The Change of Variables Formula
        A.2.4 Multivariate Normal Distributions
    A.3 Expectation, Variance and Covariance
        A.3.1 Expectation
        A.3.2 Variance
        A.3.3 Moments
        A.3.4 Conditional Expectation and Variance
        A.3.5 Covariance
        A.3.6 Correlation
        A.3.7 Jensen's Inequality
        A.3.8 Kullback–Leibler Discrepancy and Information Inequality
    A.4 Convergence of Random Variables
        A.4.1 Modes of Convergence
        A.4.2 Continuous Mapping and Slutsky's Theorem
        A.4.3 Law of Large Numbers
        A.4.4 Central Limit Theorem
        A.4.5 Delta Method
    A.5 Probability Distributions
        A.5.1 Univariate Discrete Distributions
        A.5.2 Univariate Continuous Distributions
  • Why Retail Therapy Works: It Is Choice, Not Acquisition, That Primarily Alleviates Sadness
    ASSOCIATION FOR CONSUMER RESEARCH. Labovitz School of Business & Economics, University of Minnesota Duluth, 11 E. Superior Street, Suite 210, Duluth, MN 55802. Why Retail Therapy Works: It Is Choice, Not Acquisition, That Primarily Alleviates Sadness. Beatriz Pereira, University of Michigan, USA; Scott Rick, University of Michigan, USA. Can shopping be used strategically as an emotion regulation tool? Although prior work demonstrates that sadness encourages spending, it is unclear whether and why shopping actually alleviates sadness. Our work suggests that shopping can heal, but that it is the act of choosing (e.g., between money and products), rather than the act of acquiring (e.g., simply being endowed with money or products), that primarily alleviates sadness. Two experiments that induced sadness and then manipulated whether participants made monetarily consequential choices support our conclusions. [to cite]: Beatriz Pereira and Scott Rick (2011), "Why Retail Therapy Works: It Is Choice, Not Acquisition, That Primarily Alleviates Sadness", in NA - Advances in Consumer Research Volume 39, eds. Rohini Ahluwalia, Tanya L. Chartrand, and Rebecca K. Ratner, Duluth, MN: Association for Consumer Research, pages 732-733. [url]: http://www.acrwebsite.org/volumes/1009733/volumes/v39/NA-39 [copyright notice]: This work is copyrighted by The Association for Consumer Research. For permission to copy or use this work in whole or in part, please contact the Copyright Clearance Center at http://www.copyright.com/.

    SIGNIFICANCE AND IMPLICATIONS OF THE RESEARCH. In this study, we examine how people's judgment of the probability of a conjunctive event influences their subsequent inference (e.g., after successfully getting five papers accepted, what is the probability of getting tenure?).
  • Probabilities and Expectations
    Probabilities and Expectations. Ashique Rupam Mahmood. September 9, 2015. Probabilities tell us about the likelihood of an event in numbers. If an event is certain to occur, such as sunrise, the probability of that event is said to be 1: Pr(sunrise) = 1. If an event will certainly not occur, then its probability is 0. So, probability maps events to a number in [0, 1]. How do you specify an event? In discussions of probabilities, events are technically described as a set. At this point it is important to go through some basic concepts of sets and maybe also functions.

    Sets. A set is a collection of distinct objects. For example, if we toss a coin once, the set of all possible distinct outcomes will be S = {head, tail}, where head denotes a head and tail denotes a tail. All sets we consider here are finite. An element of a set is denoted as head ∈ S. A subset of a set is denoted as {head} ⊂ S. What are the possible subsets of S? These are: {head}, {tail}, S = {head, tail}, and φ = {}. So, note that a set is a subset of itself: S ⊂ S. Also note that an empty set (a collection of nothing) is a subset of any set: φ ⊂ S. A union of two sets A and B is comprised of all the elements of both sets and denoted as A ∪ B. An intersection of two sets A and B is comprised of only the common elements of both sets and denoted as A ∩ B.
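Python's built-in set type mirrors this notation closely: `in` for membership, `<=` for subset, `|` for union, and `&` for intersection. The following lines are a small illustration of the coin-toss sets above; they are not part of the original notes.

```python
S = {'head', 'tail'}    # outcomes of one coin toss

print('head' in S)      # element of a set: head ∈ S        -> True
print({'head'} <= S)    # subset: {head} ⊆ S                -> True
print(S <= S)           # every set is a subset of itself   -> True
print(set() <= S)       # the empty set is a subset of any set -> True

A = {'head'}
B = {'head', 'tail'}
print(A | B)            # union A ∪ B        -> {'head', 'tail'}
print(A & B)            # intersection A ∩ B -> {'head'}
```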
  • Lecture 2: Random Experiments; Probability Spaces; Random Variables; Independence
    Department of Mathematics, Ma 3/103, KC Border. Introduction to Probability and Statistics, Winter 2017. Lecture 2: Random Experiments; Probability Spaces; Random Variables; Independence. Relevant textbook passages: Pitman [4], Sections 1.3–1.4, pp. 26–46; Larsen–Marx [3], Sections 2.2–2.5, pp. 18–66.

    The great coin-flipping experiment. This year there were 194 submissions of 128 flips, for a total of 24,832 tosses! You can find the data at http://www.math.caltech.edu/~2016-17/2term/ma003/Data/FlipsMaster.txt Recall that I put predictions into a sealed envelope. Here are the predictions of the average number of runs, by length, compared to the experimental results.

    Run length   Theoretical average   Predicted range    Total runs   Average runs   How well did I do?
    1            32.5                  31.3667–33.6417    6340         32.680412      Nailed it.
    2            16.125                15.4583–16.8000    3148         16.226804      Nailed it.
    3            8                     7.5500–8.4583      1578         8.134021       Nailed it.
    4            3.96875               3.6417–4.3000      725          3.737113       Nailed it.
    5            1.96875               1.7333–2.2083      388          2.000000       Nailed it.
    6            0.976563              0.8083–1.1500      187          0.963918       Nailed it.
    7            0.484375              0.3667–0.6083      101          0.520619       Nailed it.
    8            0.240234              0.1583–0.3333      49           0.252577       Nailed it.
    9            0.119141              0.0583–0.1833      16           0.082474       Nailed it.
    10           0.059082              0.0167–0.1083      12           0.061856       Nailed it.
    11           0.0292969             0.0000–0.0667      9            0.046392       Nailed it.
    12           0.0145264             0.0000–0.0417      2            0.010309       Nailed it.
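The run counts in the table can be approximated with a short Monte Carlo sketch. The code below is not the instructor's analysis and does not use the linked class data; it simply generates 194 random sequences of 128 fair flips and averages the number of maximal runs of each length, which should land near the "Theoretical average" column above.

```python
import random
from collections import Counter
from itertools import groupby

def run_length_counts(flips):
    """Count the maximal runs of each length in one sequence of flips."""
    counts = Counter()
    for _, run in groupby(flips):
        counts[len(list(run))] += 1
    return counts

def average_runs(n_sequences=194, n_flips=128, seed=1):
    """Average number of runs of each length over n_sequences random sequences."""
    rng = random.Random(seed)
    totals = Counter()
    for _ in range(n_sequences):
        flips = [rng.randint(0, 1) for _ in range(n_flips)]
        totals.update(run_length_counts(flips))
    return {k: totals[k] / n_sequences for k in sorted(totals)}

for length, avg in average_runs().items():
    if length <= 6:
        print(length, round(avg, 3))
# Expect roughly 32.5 runs of length 1, 16.1 of length 2, 8 of length 3, and so on.
```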
  • More Notes to Be Added
    Unit 3 of 22 (more notes to be added)

    Introduction. So the next lessons will be concerned with probabilities and particularly with structured probabilities using Bayes networks. This is some of the most involved material in this class. And since this is a Stanford-level class, you will find out that some of the quizzes are actually really hard. So as you go through the material, I hope the hardness of the quizzes won't discourage you; it'll really entice you to take a piece of paper and a pen and work them out. Let me give you a flavor of a Bayes network using an example. Suppose you find in the morning that your car won't start. Well, there are many causes why your car might not start. One is that your battery is flat. Even for a flat battery there are multiple causes. One, it's just plain dead, and one is that the battery is okay but it's not charging. The reason why a battery might not charge is that the alternator might be broken or the fan belt might be broken. If you look at this influence diagram, also called a Bayes network, you'll find there's many different ways to explain that the car won't start. And a natural question you might have is, "Can we diagnose the problem?" One diagnostic tool is a battery meter, which may increase or decrease your belief that the battery may cause your car failure. You might also know your battery age. Older batteries tend to go dead more often.
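The diagnostic question ("Can we diagnose the problem?") is, for a single cause, just Bayes' rule applied to one slice of the network. The numbers below are hypothetical, chosen only to illustrate the update; they are not from the lesson.

```python
# Two-node slice of the "car won't start" network.
# B = battery is flat, S = car does not start. All probabilities are made up.
P_B = 0.1              # prior probability the battery is flat
P_S_given_B = 0.9      # car fails to start when the battery is flat
P_S_given_not_B = 0.2  # car can fail to start for other reasons

# Bayes' rule: P(B | S) = P(S | B) * P(B) / P(S)
P_S = P_S_given_B * P_B + P_S_given_not_B * (1 - P_B)
P_B_given_S = P_S_given_B * P_B / P_S
print(round(P_B_given_S, 3))  # about 0.333: a no-start raises belief in a flat battery
```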
  • 1 Probability 1.1 Introduction the Next Topic We Want to Take up Is Probability
    1 Probability

    1.1 Introduction

    The next topic we want to take up is probability. Probability is a mathematical area that studies randomness. The question about what exactly "random" means is something for philosophers to ponder. Basically we call a process a random experiment if its exact outcome is unpredictable, haphazard, and without pattern. We usually can just recognize randomness when we see it. Rolling a pair of dice is a random experiment, as is tossing a coin and drawing a card from a well-shuffled deck. In mathematics something is said to be random when it can reasonably be assumed that individual results are unpredictable. Probability Theory, however, provides a mathematical way to make predictions about the results anyway. Probability can make rather strong predictive statements about repeated random events. The results of an individual event remain a surprise, but after a large enough number of repetitions, the overall results can form very strong patterns. Probability theory is the mathematical framework for describing those patterns. Probability cannot predict exact results, but can make very strong statements about general results. Probability theory is the mathematical background of the normal intuition we all have about random things. Consider the following example. Suppose you go for coffee each morning with a co-worker. Rather than argue over it, you strike a deal to determine who should pay. The co-worker flips a quarter; if it comes up heads, he pays; if it comes up tails, you pay. All this seems fair until you realize that, after 10 days of this, you have paid every time.
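The coffee example is a quick calculation: with a fair quarter, the chance of losing the toss on 10 consecutive mornings is (1/2)^10. A one-liner, for illustration only:

```python
from fractions import Fraction

# Probability that a fair coin comes up tails (you pay) on 10 straight mornings.
p_pay_every_day = Fraction(1, 2) ** 10
print(p_pay_every_day, float(p_pay_every_day))  # 1/1024, about 0.00098
```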
  • 15.5 Exercise 2
    Discrete Math, Lesson 15.5 – Equiprobable Spaces, Exercise 2. Name ______________________________________ Period ______________

    Complementary Events. Two outcomes or events that cannot occur at the same time are said to be mutually exclusive events. Consider the following events when rolling a pair of dice: A: At least one of the dice is an Ace, A = {11, 12, 13, 14, 15, 16, 21, 31, 41, 51, 61}; B: Neither die is an Ace, B = {22, 23, 24, 25, 26, 32, 33, 34, 35, 36, 42, 43, 44, 45, 46, 52, 53, 54, 55, 56, 62, 63, 64, 65, 66}; C: Roll two of a kind, C = {11, 22, 33, 44, 55, 66}. Events A and B are mutually exclusive because every outcome in event A includes at least one Ace while none of the outcomes in event B has an Ace in it. Events A and C are not mutually exclusive because event C includes the outcome of rolling two Aces, which is also an outcome in event A. When two events are mutually exclusive and make up all of the possible outcomes in the sample space, the two events are said to be complementary events. Because two complementary events make up all the possible outcomes in a sample space, the probabilities of two complementary events add up to 1. Two events A and B are said to be complementary events if P(A) = 1 – P(B). Example. Consider complementary events A (at least one die is an Ace) and B (neither die is an Ace). We know from listing all possible outcomes of event A that P(A) = 11/36.
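The worked example can be confirmed by enumerating all 36 rolls. The snippet below is an illustrative check (not part of the worksheet) that P(A) = 11/36, P(B) = 25/36, and that the two complementary probabilities sum to 1.

```python
from fractions import Fraction
from itertools import product

dice = list(product(range(1, 7), repeat=2))   # 36 equally likely rolls

A = [roll for roll in dice if 1 in roll]      # at least one die is an Ace
B = [roll for roll in dice if 1 not in roll]  # neither die is an Ace

P_A = Fraction(len(A), len(dice))
P_B = Fraction(len(B), len(dice))
print(P_A, P_B, P_A + P_B == 1)               # 11/36, 25/36, True
```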