
Note: This manuscript is a pre-print version of a manuscript accepted for publication in the Journal of Personality and Social Psychology. This paper is not the copy of record and may not exactly replicate the final, authoritative version of the article. Please do not copy or cite without the authors' permission. The final article will be available, upon publication, via its DOI: 10.1037/pspi0000375. Data and materials for this manuscript are available at: https://osf.io/9627e/?view_only=4f5c8a589e014c92a1fca171bb0e4369. Please email [email protected] if you have questions.

Partisan-Motivated Sampling: Re-Examining Politically Motivated Reasoning Across the Information Processing Stream

Yrian Derreumaux (1), Robin Bergh (2), Brent L. Hughes (1)
(1) University of California, Riverside; (2) Uppsala University

Correspondence should be addressed to:
Brent Hughes
Department of Psychology
University of California, Riverside
900 University Ave, Riverside CA, USA
Email: [email protected]

Abstract

The U.S. is increasingly politically polarized, fueling intergroup conflict and intensifying partisan biases in cognition and behavior. To date, research on intergroup bias has separately examined biases in how people search for information and how they interpret information. Here, we integrate these two perspectives to elucidate how partisan biases manifest across the information processing stream, beginning with (1) a biased selection of information, leading to (2) skewed samples of information that interact with (3) motivated interpretations to produce evaluative biases. Across 3 experiments and 4 internal meta-analyses, participants (N = 2,431) freely sampled information about ingroup and outgroup members, or ingroup and outgroup political candidates, until they felt confident enough to evaluate them. Across experiments, we reliably find that most participants begin sampling information from the ingroup, which was associated with individual differences in group-based motives, and that participants sampled more information overall from the ingroup. This sampling behavior, in turn, generates more variability in ingroup (relative to outgroup) experiences. We find that more variability in ingroup experiences predicted when participants decided to stop sampling and was associated with more biased evaluations. We further demonstrate that participants employ different sampling strategies over time when the ingroup is de facto worse (obfuscating real group differences), and that participants selectively integrate their experiences into evaluations based on congeniality. The proposed framework extends classic findings in psychology by demonstrating how biases in sampling behavior interact with motivated interpretations to produce downstream evaluative biases, and it has implications for intergroup bias interventions.

Keywords: intergroup bias, political partisanship, experience sampling, motivated reasoning

The U.S. is more polarized now than during the civil rights movement (Abramowitz & Saunders, 2008; Levendusky, 2010; Pew Research Center, 2017). Elevated polarization increases partisan biases in information processing (Druckman et al., 2013).
Consider the electorate's experience of President Donald Trump's impeachment hearings: people were exposed to different amounts of pro- vs. anti-impeachment arguments, based largely on which news sources they consumed (e.g., Fox News vs. CNN). Moreover, even when people had access to the same information (e.g., that Trump asked Ukrainian President Zelensky to investigate the Bidens), conclusions tended to diverge along party lines (i.e., 91% of Democrats thought Trump had acted illegally vs. 32% of Republicans; Pew Research Center, 2020). These examples highlight two important aspects of information processing that can generate biased evaluations and beliefs. First, people tend to reach conclusions that re-affirm their pre-existing partisan beliefs, even when exposed to the same data (an interpretive bias). This phenomenon fits into a long tradition of research on motivated reasoning, which describes how pre-existing beliefs (e.g., that the ingroup is better) bias the interpretation of information to form conclusions that are most desired rather than those that are most accurate (Greenberg & Pyszczynski, 1985; Kunda, 1987; Wyer & Frey, 1983; Kunda, 1990). Second, although Republicans and Democrats had access to diverse and comprehensive information, they gathered only a subset of that information, which likewise led to congenial conclusions (a sampling bias). Such sampling biases stem from people accessing only a subset of information that is non-representative and skewed relative to all available information (e.g., Fiedler, 2000; Lindskog, Winman, & Juslin, 2013). To date, research has typically examined sampling and interpretive biases separately (e.g., Denrell, 2005; Gampa et al., 2019), rather than jointly modeling these influences on evaluations of groups (or other judgments and decisions).

Here, we argue that group-based bias is driven in part by a motivation preceding those ubiquitous in studies of motivated reasoning: a wish to gather information first and most frequently from one's own groups. This, in turn, leads to non-representative samples that, when unaccounted for, interact with partisan-based interpretive motives to bolster biased evaluations favoring the ingroup. In other words, we examine connections between three sources of evaluative bias across the information-processing stream: (1) a biased selection of information, which leads to (2) skewed samples of information, which interact with (3) motivated interpretations to produce evaluative biases. Together, we provide a comprehensive framework for understanding the emergence of partisan biases across information processing by integrating sampling and evaluative sources of bias. We review the literature on each source of bias, stepping backwards through the causal chain to capture the evolution of the literature (interpretive biases have the longest history; e.g., Hastorf & Cantril, 1954), before outlining the framework for the current experiments.

Partisan Evaluative Biases

Social identities provide people with an important source of value and status, which in turn increases their tendency to engage in motivated reasoning when exposed to identity-relevant information (e.g., Kahan, 2013; Kahan et al., 2011, 2013). Political identity in particular has become one of the strongest social identity attachments in the U.S., along with race, ethnicity, and religion (Iyengar et al., 2019; Iyengar & Krupenkin, 2018; Mason, 2018).
As such, a wealth of research has examined interpretive biases that arise in politically motivated reasoning (Cohen, 2003; for a meta-analysis, see Ditto et al., 2019; for a review of methods, see Kahan, 2016; Tappin et al., 2020). In a typical study within this tradition, participants are asked to evaluate information that is matched as closely as possible on all dimensions except whether the information (or the source of the information) favors one's political identity. A consistent finding is that congenial information (i.e., information that favors pre-existing beliefs) or information from congenial sources (e.g., an ingroup source) is interpreted more favorably and perceived as more valid than uncongenial information (Flynn et al., 2017; Gampa et al., 2019; Hughes et al., 2017; Lord et al., 1979; Tappin et al., 2017). For example, Democrats and Republicans evaluate identical information more favorably when it is supported by members of their own party (i.e., "party over policy"; Cohen, 2003). A growing body of research suggests that people not only favor congenial information, but are also motivated skeptics of uncongenial information (disconfirmation bias; Ditto et al., 1998; Ditto & Lopez, 1992; Kraft et al., 2015; Taber et al., 2009; Taber & Lodge, 2012). From this perspective, people adopt differential judgement criteria when evaluating uncongenial relative to congenial information, holding arguments they dislike to higher standards that require more and stronger evidence. Taken together, this research explains how politically motivated reasoning, which affects both Democrats and Republicans (Ditto et al., 2019; cf. Baron & Jost, 2019), can generate discrepancies in people's evaluations of the same (i.e., matched) information based on group membership.

Importantly, the majority of these frameworks focus on the interpretation phase of information processing, where the congeniality of the information, most often qualitative in nature (e.g., the title of an article that is pro- or anti-gun rights), or the partisan source of the information (e.g., a conservative or liberal outlet) is explicitly labeled. However, they do not consider the sampling phase of information processing, wherein people gather and explore the information that precedes interpretation. In addition, it is often difficult to disentangle partisan motivations from other motivations and explanations, due in part to the qualitative nature of the information used in studies of politically motivated reasoning (see, e.g., Tappin et al., 2020). Moreover, in the real world, people often have to make inferences about underlying distributions from "noisy" data that require prolonged search to arrive at an accurate judgement.
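To make this sampling-then-interpretation pathway concrete, here is a minimal simulation sketch in Python (using only NumPy). It is an illustration under explicitly hypothetical assumptions, not the authors' computational model or analysis code: the ingroup-sampling probability, the standard-error stopping rule (a crude stand-in for "feeling confident enough to evaluate"), and the congeniality weights are all invented parameters. Because the two groups share an identical true distribution, any average evaluative gap the simulation produces comes entirely from biased sampling combined with congenial integration.

```python
# Minimal sketch of the three-stage framework described above:
# (1) biased selection -> (2) skewed samples -> (3) motivated interpretation.
# All parameter values (p_ingroup, se_threshold, the congeniality weights)
# are illustrative assumptions, not quantities estimated in the manuscript.
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_until_confident(p_ingroup=0.7, se_threshold=0.25, max_draws=200):
    """Sequentially draw noisy experiences from two identical groups,
    favoring the ingroup, until a crude confidence criterion (the larger
    of the two standard errors of the mean) falls below a threshold."""
    ingroup, outgroup = [], []
    ingroup.append(rng.normal(0.0, 1.0))  # biased selection: start with the ingroup
    while len(ingroup) + len(outgroup) < max_draws:
        target = ingroup if rng.random() < p_ingroup else outgroup
        target.append(rng.normal(0.0, 1.0))  # identical true distributions
        if min(len(ingroup), len(outgroup)) >= 3:
            se = max(np.std(ingroup, ddof=1) / np.sqrt(len(ingroup)),
                     np.std(outgroup, ddof=1) / np.sqrt(len(outgroup)))
            if se < se_threshold:  # subjective confidence reached: stop sampling
                break
    return np.array(ingroup), np.array(outgroup)

def congenial_evaluation(experiences, w_congenial=1.5, w_uncongenial=1.0):
    """Motivated interpretation: congenial (positive) experiences are
    weighted more heavily than uncongenial (negative) ones."""
    weights = np.where(experiences > 0, w_congenial, w_uncongenial)
    return np.average(experiences, weights=weights)

n_in, n_out, spread_in, spread_out, gap = [], [], [], [], []
for _ in range(5000):
    ing, out = sample_until_confident()
    n_in.append(len(ing)); n_out.append(len(out))
    spread_in.append(np.ptp(ing)); spread_out.append(np.ptp(out))  # observed range
    gap.append(congenial_evaluation(ing) - np.mean(out))

print(f"mean draws:          ingroup {np.mean(n_in):5.1f} | outgroup {np.mean(n_out):5.1f}")
print(f"mean observed range: ingroup {np.mean(spread_in):5.2f} | outgroup {np.mean(spread_out):5.2f}")
print(f"mean evaluative gap (ingroup - outgroup): {np.mean(gap):+.3f}")
```

Under these assumptions, the ingroup accumulates more draws and therefore a wider observed range of experiences, and the congenially weighted ingroup mean sits above the unweighted outgroup mean even though the groups do not differ, mirroring the logic of the framework outlined above.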