Reproducibility and Replicability in Science (2019)

Total Pages: 16

File Type: PDF, Size: 1020 KB

THE NATIONAL ACADEMIES PRESS. This PDF is available at http://nap.edu/25303.

DETAILS: 218 pages | 6 x 9 | PAPERBACK | ISBN 978-0-309-48616-3 | DOI 10.17226/25303

CONTRIBUTORS: Committee on Reproducibility and Replicability in Science; Committee on National Statistics; Board on Behavioral, Cognitive, and Sensory Sciences; Division of Behavioral and Social Sciences and Education; Nuclear and Radiation Studies Board; Division on Earth and Life Studies; Committee on Applied and Theoretical Statistics; Board on Mathematical Sciences and Analytics; Division on Engineering and Physical Sciences; Committee on Science, Engineering, Medicine, and Public Policy; Board on Research Data and Information; Policy and Global Affairs; National Academies of Sciences, Engineering, and Medicine

SUGGESTED CITATION: National Academies of Sciences, Engineering, and Medicine. 2019. Reproducibility and Replicability in Science. Washington, DC: The National Academies Press. https://doi.org/10.17226/25303.

Distribution, posting, or copying of this PDF is strictly prohibited without written permission of the National Academies Press. Unless otherwise indicated, all materials in this PDF are copyrighted by the National Academy of Sciences. Copyright © National Academy of Sciences. All rights reserved.

A Consensus Study Report of the National Academies of Sciences, Engineering, and Medicine (prepublication copy, uncorrected proofs).

THE NATIONAL ACADEMIES PRESS, 500 Fifth Street, NW, Washington, DC 20001

This activity was supported by contracts between the National Academy of Sciences and the Alfred P. Sloan Foundation (G-2018-10102) and the National Science Foundation (1743856). Support for the work of the Board on Behavioral, Cognitive, and Sensory Sciences is provided primarily by a grant from the National Science Foundation (Award No. BCS-1729167). Any opinions, findings, conclusions, or recommendations expressed in this publication do not necessarily reflect the views of any organization or agency that provided support for the project.
International Standard Book Number-13: 978-0-309-XXXXX-X
International Standard Book Number-10: 0-309-XXXXX-X
Digital Object Identifier: https://doi.org/10.17226/25303

Additional copies of this publication are available for sale from the National Academies Press, 500 Fifth Street, NW, Keck 360, Washington, DC 20001; (800) 624-6242 or (202) 334-3313; http://www.nap.edu.

Copyright 2019 by the National Academy of Sciences. All rights reserved. Printed in the United States of America.

Suggested citation: National Academies of Sciences, Engineering, and Medicine. 2019. Reproducibility and Replicability in Science. Washington, DC: The National Academies Press. doi: https://doi.org/10.17226/25303.

The National Academy of Sciences was established in 1863 by an Act of Congress, signed by President Lincoln, as a private, nongovernmental institution to advise the nation on issues related to science and technology. Members are elected by their peers for outstanding contributions to research. Dr. Marcia McNutt is president.

The National Academy of Engineering was established in 1964 under the charter of the National Academy of Sciences to bring the practices of engineering to advising the nation. Members are elected by their peers for extraordinary contributions to engineering. Dr. C. D. Mote, Jr., is president.

The National Academy of Medicine (formerly the Institute of Medicine) was established in 1970 under the charter of the National Academy of Sciences to advise the nation on medical and health issues. Members are elected by their peers for distinguished contributions to medicine and health. Dr. Victor J. Dzau is president.

The three Academies work together as the National Academies of Sciences, Engineering, and Medicine to provide independent, objective analysis and advice to the nation and conduct other activities to solve complex problems and inform public policy decisions. The National Academies also encourage education and research, recognize outstanding contributions to knowledge, and increase public understanding in matters of science, engineering, and medicine. Learn more about the National Academies of Sciences, Engineering, and Medicine at www.nationalacademies.org.

Consensus Study Reports published by the National Academies of Sciences, Engineering, and Medicine document the evidence-based consensus on the study's statement of task by an authoring committee of experts. Reports typically include findings, conclusions, and recommendations based on information gathered by the committee and the committee's deliberations. Each report has been subjected to a rigorous and independent peer-review process, and it represents the position of the National Academies on the statement of task.

Proceedings published by the National Academies of Sciences, Engineering, and Medicine chronicle the presentations and discussions at a workshop, symposium, or other event convened by the National Academies. The statements and opinions contained in proceedings are those of the participants and are not endorsed by other participants, the planning committee, or the National Academies.

For information about other products and activities of the National Academies, please visit www.nationalacademies.org/about/whatwedo.
COMMITTEE ON REPRODUCIBILITY AND REPLICABILITY IN SCIENCE

HARVEY V. FINEBERG,1 (Chair), Gordon and Betty Moore Foundation
DAVID B. ALLISON,1 School of Public Health-Bloomington, Indiana University
LORENA A. BARBA, School of Engineering and Applied Science, George Washington University
DIANNE CHONG,2 Boeing Research and Technology (retired)
DAVID DONOHO,3,4 Department of Statistics, Stanford University
JULIANA FREIRE, Tandon School of Engineering, New York University
GERALD GABRIELSE,3 Department of Physics, Northwestern University
CONSTANTINE GATSONIS, Center for Statistical Sciences, Brown University
EDWARD HALL, Department of Philosophy, Harvard University
THOMAS H. JORDAN,3 Department of Earth Sciences, University of Southern California
DIETRAM A. SCHEUFELE, Madison and Morgridge Institute for Research, University of Wisconsin-Madison
VICTORIA STODDEN, Institute for Data Sciences and Engineering, University of Illinois at Urbana-Champaign
SIMINE VAZIRE,5 Department of Psychology, University of California, Davis
TIMOTHY D. WILSON, Department of Psychology, University of Virginia
WENDY WOOD, Department of Psychology, University of Southern California and INSEAD-Sorbonne University

Study Staff

JENNIFER HEIMBERG, Study Director
THOMAS ARRISON, Program Director
MICHAEL COHEN, Senior Program Officer
MICHELLE SCHWALBE, Director, Board on Mathematical Sciences and Analytics
ADRIENNE STITH BUTLER, Associate Board Director
BARBARA A. WANCHISEN, Director, Board on Behavioral, Cognitive, and Sensory Sciences
TINA WINTERS, Associate Program Officer
REBECCA MORGAN, Senior Librarian
THELMA COX, Program Coordinator (beginning January 2019)
LESLEY WEBB, Program Assistant (September 2018 through January 2019)
GARRET TYSON, Program Assistant (September 2017 through August 2018)
ERIN HAMMERS FORSTAG, Consultant Writer

1 Member of the National Academy of Medicine
2 Member of the National Academy of Engineering
3 Member of the National Academy of Sciences
4 Resigned from the committee on July 24, 2018
5 Resigned from the committee on October 11, 2018

Division of Behavioral and Social Sciences and Education
BOARD ON BEHAVIORAL, COGNITIVE, AND SENSORY SCIENCES

SUSAN FISKE,1 (Chair), Princeton University
JOHN BAUGH, Washington University in St. Louis
LAURA CARSTENSEN,2 Stanford University
JUDY DUBNO, Medical University of South Carolina
JENNIFER EBERHARDT,1 Stanford University
WILSON S. GEISLER,1 The University of Texas
Recommended publications
  • Reproducibility in Evolutionary Computation (arXiv:2102.03380v2 [cs.AI], 29 Jun 2021)
    Reproducibility in Evolutionary Computation. MANUEL LÓPEZ-IBÁÑEZ, University of Málaga, Spain; JUERGEN BRANKE, University of Warwick, UK; LUÍS PAQUETE, University of Coimbra, CISUC, Department of Informatics Engineering, Portugal.

    Experimental studies are prevalent in Evolutionary Computation (EC), and concerns about the reproducibility and replicability of such studies have increased in recent times, reflecting similar concerns in other scientific fields. In this article, we discuss, within the context of EC, the different types of reproducibility and suggest a classification that refines the badge system of the Association for Computing Machinery (ACM) adopted by ACM Transactions on Evolutionary Learning and Optimization (https://dlnext.acm.org/journal/telo). We identify cultural and technical obstacles to reproducibility in the EC field. Finally, we provide guidelines and suggest tools that may help to overcome some of these reproducibility obstacles. Additional Key Words and Phrases: Evolutionary Computation, Reproducibility, Empirical study, Benchmarking.

    1 INTRODUCTION. As in many other fields of Computer Science, most published research in Evolutionary Computation (EC) relies on experiments to justify its conclusions. The ability to reach similar conclusions by repeating an experiment performed by other researchers is the only way a research community can reach a consensus on an empirical claim until a mathematical proof is discovered. From an engineering perspective, the assumption that experimental findings hold under similar conditions is essential for making sound decisions and predicting their outcomes when tackling a real-world problem. The "reproducibility crisis" refers to the realisation that many experimental findings described in peer-reviewed scientific publications cannot be reproduced because, for example, they lack the necessary data, they lack enough detail to repeat the experiment, or repeating the experiment leads to different conclusions.
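    Guidelines of this kind typically center on making runs repeatable: fixing and reporting random seeds, and archiving the software environment alongside the results. The sketch below is an illustration of those two practices only, built around a toy (1+1)-style one-max loop of our own; it is not code from the paper, and the seed and parameter values are invented.

```python
# Illustrative sketch only (not from the paper): a toy EC-style experiment that
# fixes its RNG seed and records the environment next to the result, so a
# reader can rerun it and obtain the identical outcome.
import json
import platform
import random

SEED = 42  # reported alongside the results

def one_max(bits):
    """Toy fitness function: number of 1-bits."""
    return sum(bits)

def run_experiment(seed, n_bits=20, generations=200):
    rng = random.Random(seed)  # isolated, seeded RNG: no hidden global state
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(generations):
        # (1+1)-style step: flip each bit with probability 1/n, keep if no worse
        child = [b ^ (rng.random() < 1.0 / n_bits) for b in best]
        if one_max(child) >= one_max(best):
            best = child
    return one_max(best)

record = {
    "seed": SEED,
    "best_fitness": run_experiment(SEED),
    "python_version": platform.python_version(),  # environment travels with the data
}
print(json.dumps(record, indent=2))
```

    Because the only randomness flows through the seeded `rng` object, rerunning the script with the recorded seed reproduces the same best fitness, which is the minimal repeatability property the article's badge discussion presupposes.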
  • Reproducibility and Replicability in Science (2019)
    THE NATIONAL ACADEMIES PRESS. This PDF is available at http://nap.edu/25303. DETAILS: 256 pages | 6 x 9 | PAPERBACK | ISBN 978-0-309-48616-3 | DOI 10.17226/25303. CONTRIBUTORS: Committee on Reproducibility and Replicability in Science; Board on Behavioral, Cognitive, and Sensory Sciences; Committee on National Statistics; Division of Behavioral and Social Sciences and Education; Nuclear and Radiation Studies Board; Division on Earth and Life Studies; Board on Mathematical Sciences and Analytics; Committee on Applied and Theoretical Statistics; Division on Engineering and Physical Sciences; Board on Research Data and Information; Committee on Science, Engineering, Medicine, and Public Policy; Policy and Global Affairs; National Academies of Sciences, Engineering, and Medicine. SUGGESTED CITATION: National Academies of Sciences, Engineering, and Medicine. 2019. Reproducibility and Replicability in Science. Washington, DC: The National Academies Press. https://doi.org/10.17226/25303. Distribution, posting, or copying of this PDF is strictly prohibited without written permission of the National Academies Press. Unless otherwise indicated, all materials in this
  • FDA Oncology Experience in Innovative Adaptive Trial Designs
    Innovative Adaptive Trial Designs. Rajeshwari Sridhara, Ph.D., Director, Division of Biometrics V, Office of Biostatistics, CDER, FDA (Ovarian cancer workshop, 9/3/2015).

    Fixed Sample Designs: the patient population, disease assessments, treatment, sample size, hypothesis to be tested, and primary outcome measure are all fixed; there is no change in the design features during the study.

    Adaptive Designs: a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of interim data from subjects in the study.

    Bayesian Designs: in the Bayesian paradigm, the parameter measuring treatment effect is regarded as a random variable, and Bayesian inference is based on the posterior distribution (Bayes' rule, updated based on observed data). Such outcome-adaptive designs are, by definition, adaptive designs.

    Adaptive designs (frequentist or Bayesian) allow for planned design modifications based on data accrued in the trial up to the interim time, using unblinded or blinded interim results. They must control the probability of a false positive across the multiple options, control operational bias, and assume independent increments of information.

    Enrichment Designs (prognostic or predictive). Untargeted or all-comers designs: post-hoc enrichment, prospective-retrospective designs, with marker evaluation after randomization (example: KRAS in cetuximab
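    To make the Bayesian bullet concrete: with a conjugate Beta prior on a response rate, Bayes' rule reduces to a simple parameter update at each interim look. A minimal sketch follows; the prior, interim counts, and the 0.25 benchmark are illustrative assumptions of ours, not numbers from the slides.

```python
# Sketch of the Bayesian updating behind outcome-adaptive designs: a Beta(a, b)
# prior on response rate p, combined with r responses in n patients, yields a
# Beta(a + r, b + n - r) posterior. All numbers below are illustrative.
from scipy import stats

a, b = 1.0, 1.0   # Beta(1, 1): uniform prior on the response rate
r, n = 7, 20      # interim data: 7 responses among 20 patients

posterior = stats.beta(a + r, b + n - r)  # Bayes' rule in conjugate form
print(f"Posterior mean response rate: {posterior.mean():.3f}")

# An outcome-adaptive rule might act on a posterior probability like this one,
# e.g. shifting the randomization ratio or stopping early if it crosses a
# prespecified threshold.
print(f"P(p > 0.25 | interim data) = {1.0 - posterior.cdf(0.25):.3f}")
```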
  • FORMULAS from EPIDEMIOLOGY KEPT SIMPLE (3E) Chapter 3: Epidemiologic Measures
    FORMULAS FROM EPIDEMIOLOGY KEPT SIMPLE (3e), Chapter 3: Epidemiologic Measures.

    Basic epidemiologic measures are used to quantify: the frequency of occurrence of disease, the effect of an exposure, and the potential impact of an intervention.

    Measures of disease frequency:
      • Incidence: incidence proportion (cumulative incidence, incidence risk), incidence rate (incidence density, hazard rate, person-time rate), incidence odds
      • Prevalence
    Measures of association ("measures of effect"):
      • Absolute measures of effect: risk difference (incidence proportion difference), rate difference (incidence density difference), prevalence difference
      • Relative measures of effect: risk ratio (incidence proportion ratio), rate ratio (incidence density ratio), odds ratio
    Measures of potential impact:
      • Attributable fraction in exposed cases
      • Attributable fraction in the population

    3.1 Measures of Disease Frequency

    Incidence proportion = (no. of onsets) / (no. at risk at beginning of follow-up)
      • Also called risk, average risk, and cumulative incidence.
      • Can be measured in cohorts (closed populations) only.
      • Requires follow-up of individuals.

    Incidence rate = (no. of onsets) / (Σ person-time)
      • Also called incidence density and average hazard.
      • When disease is rare (incidence proportion < 5%), incidence rate ≈ incidence proportion.
      • In cohorts (closed populations), it is best to sum individual person-time longitudinally. It can also be estimated as Σ person-time ≈ (average population size) × (duration of follow-up); actuarial adjustments may be needed when the disease outcome is not rare.
      • In open populations, Σ person-time ≈ (average population size) × (duration of follow-up). Examples of incidence rates in open populations include:
        Crude birth rate (per m) = births / (mid-year population size) × m
        Crude mortality rate (per m) = deaths / (mid-year population size) × m
        Infant mortality rate (per m) = (deaths < 1 year of age) / (live births) × m
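    These frequency formulas translate directly into code. A small sketch (the helper functions and example figures are our own, not from the book):

```python
# Direct transcriptions of the Chapter 3 frequency measures; the example
# numbers are invented for illustration.

def incidence_proportion(onsets, at_risk):
    """Risk / cumulative incidence: onsets per person at risk at baseline."""
    return onsets / at_risk

def incidence_rate(onsets, person_time):
    """Incidence density: onsets per unit of person-time."""
    return onsets / person_time

def crude_rate(events, mid_year_pop, m=1000):
    """Open-population rate per m people (births, deaths, etc.)."""
    return events / mid_year_pop * m

# A closed cohort of 400 people followed for 5 years with 12 onsets:
print(incidence_proportion(12, 400))   # 0.03, i.e. a 3% five-year risk
# Person-time approximated as (average population size) x (duration):
print(incidence_rate(12, 400 * 5))     # 0.006 onsets per person-year
# An open population: 950 births, mid-year population of 120,000:
print(crude_rate(950, 120_000))        # ~7.92 births per 1,000
```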
  • Performance on Indirect Measures of Race Evaluation Predicts Amygdala Activation
    Performance on Indirect Measures of Race Evaluation Predicts Amygdala Activation. Elizabeth A. Phelps (New York University), Kevin J. O'Connor (Massachusetts Institute of Technology), William A. Cunningham and E. Sumie Funayama (Yale University), J. Christopher Gatenby and John C. Gore (Yale University Medical School), Mahzarin R. Banaji (Yale University).

    Citation: Phelps, Elizabeth A., Kevin J. O'Connor, William A. Cunningham, E. Sumie Funayama, J. Christopher Gatenby, John C. Gore, and Mahzarin R. Banaji. 2000. Performance on indirect measures of race evaluation predicts amygdala activation. Journal of Cognitive Neuroscience 12(5): 729-738. Published version: doi:10.1162/089892900562552. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:3512208.

    Abstract: We used fMRI to explore the neural substrates involved in the unconscious evaluation of Black and White social groups. Specifically, we focused on the amygdala, a subcortical structure known to play a role in emotional learning and evaluation. In Experiment 1, White American subjects observed faces of unfamiliar Black and White individuals; amygdala activation correlated with indirect (unconscious) measures of race evaluation (Implicit Association Test [IAT] and potentiated startle), but not with the direct (conscious) expression of race attitudes. In Experiment 2, these patterns were not obtained when the stimulus faces belonged to familiar and positively regarded Black and White individuals.
  • Statistical Guidance on Reporting Results from Studies Evaluating Diagnostic Tests Document Issued On: March 13, 2007
    Guidance for Industry and FDA Staff: Statistical Guidance on Reporting Results from Studies Evaluating Diagnostic Tests. Document issued on March 13, 2007; the draft of this document was issued on March 12, 2003. For questions regarding this document, contact Kristen Meier at 240-276-3060, or send an e-mail to [email protected]. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Devices and Radiological Health, Diagnostic Devices Branch, Division of Biostatistics, Office of Surveillance and Biometrics. Contains Nonbinding Recommendations.

    Preface: Public Comment. Written comments and suggestions may be submitted at any time for Agency consideration to the Division of Dockets Management, Food and Drug Administration, 5630 Fishers Lane, Room 1061 (HFA-305), Rockville, MD, 20852. Alternatively, electronic comments may be submitted to http://www.fda.gov/dockets/ecomments. When submitting comments, please refer to Docket No. 2003D-0044. Comments may not be acted upon by the Agency until the document is next revised or updated.

    Additional Copies: Additional copies are available from the Internet at http://www.fda.gov/cdrh/osb/guidance/1620.pdf. You may also send an e-mail request to [email protected] to receive an electronic copy of the guidance, or send a fax request to 240-276-3151 to receive a hard copy. Please use the document number 1620 to identify the guidance you are requesting.
  • Evolution of Clinical Trials Throughout History
    Evolution of clinical trials throughout history. Emma M. Nellhaus1, Todd H. Davies, PhD1. Author affiliations: 1. Office of Research and Graduate Education, Marshall University Joan C. Edwards School of Medicine, Huntington, West Virginia. The authors have no financial disclosures to declare and no conflicts of interest to report. Corresponding author: Todd H. Davies, PhD, Director of Research Development and Translation, Marshall University Joan C. Edwards School of Medicine, Huntington, West Virginia. Email: [email protected].

    Abstract: The history of clinical research accounts for the high ethical, scientific, and regulatory standards represented in current practice. In this review, we aim to describe the advances that grew from failures and provide a comprehensive view of how the current gold standard of clinical practice was born. This discussion of the evolution of clinical trials considers the time and effort required to establish the primary objective of clinical research: providing improved care for our patients. A gradual, historic progression of scientific methods such as comparison of interventions, randomization, blinding, and placebos in clinical trials demonstrates how these techniques are collectively responsible for a continuous advancement of clinical care. Developments over the years have been ethical as well as clinical. The Belmont Report, which many investigators underappreciate due to time constraints, represents the pinnacle of ethical standards and was developed in response to significant misconduct. Understanding the history of clinical research may help investigators value the responsibility of conducting human subjects research.

    Keywords: Clinical Trials, Clinical Research, History, Belmont Report.

    In modern medicine, the clinical trial is the gold standard and most dominant form of clinical research.
  • Case 1:20-cv-00323-LY Document 49-2 Filed 04/02/20 Page 1 of 34
    EXHIBIT 12. IN THE UNITED STATES DISTRICT COURT FOR THE WESTERN DISTRICT OF TEXAS, AUSTIN DIVISION. PLANNED PARENTHOOD CENTER FOR CHOICE, et al., Plaintiffs, v. GREG ABBOTT, in his official capacity as Governor of Texas, et al., Defendants. No. 1:20-cv-00323-LY. DECLARATION OF MARY TRAVIS BASSETT, M.D., M.P.H., IN SUPPORT OF PLAINTIFFS' MOTION FOR PRELIMINARY INJUNCTION.

    I, Mary Travis Bassett, M.D., M.P.H., declare as follows:

    1. I am the Director of the François-Xavier Bagnoud ("FXB") Center for Health and Human Rights at Harvard University, as well as the FXB Professor of the Practice of Health and Human Rights at the Harvard T.H. Chan School of Public Health. I am offering this declaration on my own behalf and not on that of Harvard University or other professional organizations that are noted.

    2. I served as Commissioner of the New York City Department of Health and Mental Hygiene (DOHMH) from 2014–2018 and led New York's response to the Ebola pandemic. I also led DOHMH as the City responded to a large outbreak of Legionnaires' disease and the Zika outbreak in South America and the Caribbean. Previously, I had been the Program Director for the African Health Initiative and the Child Well-Being Program at the Doris Duke Charitable Foundation (2009–2014). Prior to that, I served as Deputy Commissioner of Health Promotion and Disease Prevention for the New York City Department of Health and Mental Hygiene (2002–2009).
  • Soviet Jewry (8) Box: 24
    Ronald Reagan Presidential Library Digital Library Collections. This is a PDF of a folder from our textual collections. Collection: Green, Max: Files. Folder Title: Soviet Jewry (8). Box: 24. To see more digitized collections visit: https://reaganlibrary.gov/archives/digital-library. To see all Ronald Reagan Presidential Library inventories visit: https://reaganlibrary.gov/document-collection. Contact a reference archivist at: [email protected]. Citation guidelines: https://reaganlibrary.gov/citing. National Archives catalogue: https://catalog.archives.gov/.

    Page 3. PRISONERS OF CONSCIENCE.

    VLADIMIR LIFSHITZ. Arrested: January 8, 1986. Charge: Anti-Soviet Slander. Date of trial: March 19, 1986. Sentence: 3 years labor camp. Prison:

    ALEXEI MAGARIK. Arrested: March 14, 1986. Charge: Illegal Possession of Drugs. Date of trial: Sentence: Prison: UCHR P.O. 123/1, Tbilisi, Georgian SSR, USSR.

    ALEXEI MURZHENKO. (Re)arrested: June 1, 1985 (imprisoned 1970–1984). Charge: Parole Violations. Date of trial: Sentence: Prison: URP 10 4, 45/183, Ulitza Parkomienko 13, Kiev 50, USSR.

    MARK NEPOMNIASHCHY. Arrested: October 12, 1984. Charge: Defaming the Soviet State. Date of trial: January 31, 1985. Sentence: 3 years labor camp. Prison: 04-8578 2/22, Simferopol 333000, Krimskaya Oblast, USSR.

    BETZALEL SHALOLASHVILI. Arrested: March 14, 1986. Charge: Evading Military Service. Date of trial: Sentence: Prison:

    UNION OF COUNCILS FOR SOVIET JEWS, 1411 K Street, NW, Suite 402, Washington, DC 20005, (202) 393-4117.

    Page 4. PRISONERS OF CONSCIENCE. LEV SHEFER. Arrested:
  • The Authentic Speaker Revisited: a Look at Ethnic Perception Data from White Hip Hoppers
    University of Pennsylvania Working Papers in Linguistics, Volume 9, Issue 2 (Papers from NWAV 31), Article 6, 2003. Recommended citation: Cutler, Cecilia (2003) "The authentic speaker revisited: A look at ethnic perception data from white hip hoppers," University of Pennsylvania Working Papers in Linguistics: Vol. 9, Iss. 2, Article 6. Available at: https://repository.upenn.edu/pwpl/vol9/iss2/6.

    1 Introduction. The ever-expanding popularity of rap music and hip hop culture exposes urban and suburban white youth to the speech of urban black youth. This paper examines how nine white middle-class hip hoppers are identified in terms of ethnicity on the basis of their speech by undergraduate students. Additionally, it makes reference to past debates about what constitutes an authentic speaker and proposes that we reconsider the value of a socially defined authenticity. In 1976, Eileen Hatala completed a study of the speech of a 13-year-old white girl ("Carla") who grew up in a predominantly African-American working-class neighborhood in Camden, New Jersey.
  • U.S. Investments in Medical and Health Research and Development, 2013–2017
    Fall 2018. U.S. Investments in Medical and Health Research and Development, 2013–2017. Advocacy has helped bring about five years of much-needed growth in U.S. medical and health research investment. More advocacy is critical now to ensure our nation steps up in response to health threats that we can—we must—overcome.

    More Than Half Favor Doubling Federal Spending on Medical Research. Do you favor or oppose doubling federal spending on medical research over the next five years? Strongly favor: 23%; somewhat favor: 35%; somewhat oppose: 16%; strongly oppose: 8%; not sure: 19%. Source: A Research!America survey of U.S. adults conducted in partnership with Zogby Analytics in January 2018.

    Introduction. Investment in medical and health research and development (R&D) in the U.S. grew by $38.8 billion, or 27%, from 2013 to 2017. Industry continues to invest more than any other sector, accounting for 67% of total spending in 2017, followed by the federal government at 22%. Federal investments increased from 2016 to 2017, the second year of growth after a dip from 2014 to 2015. Overall, federal investment increased by $6.1 billion, or 18.4%, from 2013 to 2017, but growth has been uneven across federal health agencies. Investment by other sectors, including academic and research institutions, foundations, state and local governments, and voluntary health associations and professional societies, also increased from 2013 to 2017. Looking ahead, medical and health R&D spending is expected to move in an upward trajectory in 2018 but will continue to fall short relative to the health and economic impact of major health threats.
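    The headline percentages can be cross-checked from the figures quoted above. A quick derivation (our arithmetic on the report's stated growth numbers, not values taken from its tables):

```python
# Back-of-the-envelope check of the report's figures: derive the implied 2013
# and 2017 totals from the stated dollar growth and percentage growth.
total_growth, growth_pct = 38.8, 0.27     # total R&D: +$38.8B, +27% (2013-2017)
total_2013 = total_growth / growth_pct    # implied 2013 total: ~$143.7B
total_2017 = total_2013 + total_growth    # implied 2017 total: ~$182.5B

fed_growth, fed_pct = 6.1, 0.184          # federal: +$6.1B, +18.4% (2013-2017)
fed_2013 = fed_growth / fed_pct           # implied 2013 federal: ~$33.2B
fed_2017 = fed_2013 + fed_growth          # implied 2017 federal: ~$39.3B

print(f"Implied 2017 total R&D: ${total_2017:.1f}B")
print(f"Federal share of 2017 total: {fed_2017 / total_2017:.0%}")  # ~22%, matching the text
print(f"Industry at 67% of total: ${0.67 * total_2017:.1f}B")
```

    The derived federal share (about 22% of roughly $182.5 billion) agrees with the share the report states directly, so the quoted growth figures are internally consistent.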
  • What Is Implicit Bias? March 1, 2017
    Prepared for the Senate Select Committee on Women and Inequality by Megan Lane of the Senate Office of Research. What is Implicit Bias? March 1, 2017.

    Brief Summary of Implicit Bias. Implicit bias involves "attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner."1 According to a body of scholarly research, the unconscious attitudes we hold about other people often are based on categories such as race, gender, age, or ethnicity. Studies suggest that implicit bias is pervasive, not necessarily in line with our declared beliefs, developed early in life, and not fixed.2 Further, implicit bias is expressed at both an individual and institutional level. Institutional bias has been studied in the education, employment, and criminal justice contexts,3 and it may present itself in an organization's policies and procedures. Some researchers believe implicit bias influences behavior and contributes to discriminatory conduct.4 Conversely, other experts contend that the evidence linking unconscious bias to discriminatory behavior is weak and warrants further study.5

    The State of Research on Implicit Bias. In the 1950s, psychologists studying social cognition focused on the nature of prejudice. Early research on prejudice led to studies that attempted to separate out our "automatic" versus "controlled" processing of information.6 In the 1990s, researchers focused much of their attention on understanding our automatic or unconscious judgments. In 1995, social psychologists Anthony Greenwald and Mahzarin Banaji

    1 Kirwan Institute for the Study of Race and Ethnicity, "Understanding Implicit Bias," Ohio State University, 2015. http://kirwaninstitute.osu.edu/research/understanding-implicit-bias/.