Confirmation Bias Examples in Politics
Total pages: 16. File type: PDF, size: 1020 KB.
Recommended publications
Guidance for Health Care Worker (HCW) Surveys in Humanitarian Contexts in LMICs
Analytics for Operations Working Group GUIDANCE BRIEF: Guidance for Health Care Worker (HCW) Surveys in humanitarian contexts in LMICs. Developed by the Analytics for Operations Working Group to support those working with communities and healthcare workers in humanitarian and emergency contexts. This document has been developed for response actors working in humanitarian contexts who seek rapid approaches to gathering evidence about the experience of healthcare workers, and the communities of which they are a part. Understanding healthcare worker experience is critical to inform and guide humanitarian programming, effective strategies to promote infection prevention and control (IPC), and the identification of psychosocial support needs. This evidence also informs humanitarian programming that interacts with HCWs and facilities, such as nutrition, health reinforcement, communication, SGBV and gender. In low- and middle-income countries (LMICs), healthcare workers are often faced with limited resources, equipment, performance support and even formal training to provide the life-saving work expected of them. In humanitarian contexts, where human resources are also scarce, HCWs may comprise formally trained doctors, nurses, pharmacists, dentists, allied health professionals etc., as well as community members who perform formal health worker related duties with little or no training. These HCWs frequently work in contexts of multiple public health crises, including COVID-19. Their work will be affected by availability of resources (limited supplies, materials), behaviour and emotion (fear), flows of (mis)information (e.g. understanding of expected IPC measures) or services (healthcare policies, services and use). Multiple factors can therefore impact patients, HCWs and their families, not only in terms of risk of exposure to COVID-19, but also secondary health, socio-economic and psycho-social risks, as well as constraints that interrupt or hinder healthcare provision, such as physical distancing practices.
Studying Political Bias Via Word Embeddings
Studying Political Bias via Word Embeddings. Josh Gordon, Marzieh Babaeianjelodar, Jeanna Matthews (Clarkson University). Abstract: Machine Learning systems learn bias in addition to other patterns from input data on which they are trained. Bolukbasi et al. pioneered a method for quantifying gender bias learned from a corpus of text. Specifically, they compute a gender subspace into which words, represented as word vectors, can be placed and compared with one another. In this paper, we apply a similar methodology to a different type of bias, political bias. Unlike with gender bias, it is not obvious how to choose a set of definitional word pairs to compute a political bias subspace. We propose a methodology for doing so that could be used for modeling other types of bias as well. We collect and examine a 26 GB corpus of tweets from Republican and Democratic politicians in the United States (presidential candidates …). Introduction: In 2016, Bolukbasi et al. [1] published an influential paper demonstrating a method to quantify and remove gender bias that a machine learning (ML) model learned from a corpus of human text. This is important for revealing bias in human text as well as reducing the impact biased ML systems can have on hiring, housing, and credit. In this paper, we explore how we might use a similar methodology to model other kinds of bias such as political bias. As with gender, we begin as Bolukbasi et al. did with attempting to model political bias as simply two binary extremes along a single axis. Neither gender nor political bias is as simple in the real world as two points on a single axis, but we wanted to see how useful this model could be in the case of political bias.
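The Bolukbasi-style construction referenced above can be illustrated with a short sketch: centre each definitional word pair about its mean, take the top principal component of the centred vectors as the bias direction, and project other words onto it. The snippet below is a minimal illustration with randomly initialised vectors and placeholder word pairs, not the authors' code or their tweet-trained embeddings.

```python
import numpy as np

# Hypothetical 50-dimensional embeddings; in the paper these would come from
# embeddings trained on the 26 GB tweet corpus, not from a random generator.
rng = np.random.default_rng(0)
vocab = ["democrat", "republican", "liberal", "conservative",
         "healthcare", "taxes", "immigration", "climate"]
emb = {w: rng.normal(size=50) for w in vocab}

# Candidate definitional pairs meant to span a left-right axis. Choosing such
# pairs is exactly the hard part the abstract points out; these are placeholders.
pairs = [("democrat", "republican"), ("liberal", "conservative")]

# Bolukbasi-style subspace: centre each pair about its mean, stack the centred
# vectors, and take the top principal component as the bias direction.
centred = []
for a, b in pairs:
    mid = (emb[a] + emb[b]) / 2
    centred.append(emb[a] - mid)
    centred.append(emb[b] - mid)
_, _, vt = np.linalg.svd(np.array(centred), full_matrices=False)
bias_direction = vt[0]

# Project topic words onto the direction; the sign indicates which side of the
# learned axis a word leans toward, the magnitude how strongly.
for w in ["healthcare", "taxes", "immigration", "climate"]:
    v = emb[w] / np.linalg.norm(emb[w])
    print(f"{w:12s} projection onto bias axis: {v @ bias_direction:+.3f}")
```

With embeddings actually trained on the tweet corpus, the sign and size of these projections would indicate how strongly a topic word aligns with the learned left-right axis.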
QAnon • 75 Years of the Bomb • Vaccine History • Raising the Dead?
Skeptic magazine (Extraordinary Claims, Revolutionary Ideas & the Promotion of Science), Vol. 25 No. 4, 2020, $6.95 USA and Canada, www.skeptic.com. Cover stories: What Is QAnon? • How QAnon Recycles Centuries-Old Conspiracy Beliefs • How QAnon Hurts Their Own Cause • QAnon in Conspiratorial Context. Also on the cover: 75 Years of the Bomb, Vaccine History, and Raising the Dead? The excerpt additionally lists episodes of the Michael Shermer Show podcast (guests including Donald Prothero, Greg Lukianoff, Agustin Fuentes, Nicholas Christakis, Debra Soh, Mona Sue Weissmark, Michael Shellenberger, William Perry and Tom Collina, Sarah Scoles, Dave Rubin, Ann Druyan, Daniel Chirot, and Diana Pasulka).
Report on the First Workshop on Bias in Automatic Knowledge Graph Construction at AKBC 2020
EVENT REPORT: Report on the First Workshop on Bias in Automatic Knowledge Graph Construction at AKBC 2020. Tara Safavi (University of Michigan), Edgar Meij (Bloomberg), Fatma Özcan (Google), Miriam Redi (Wikimedia Foundation), Gianluca Demartini (University of Queensland), Chenyan Xiong (Microsoft Research). Abstract: We report on the First Workshop on Bias in Automatic Knowledge Graph Construction (KG-BIAS), which was co-located with the Automated Knowledge Base Construction (AKBC) 2020 conference. Identifying and possibly remediating any sort of bias in knowledge graphs, or in the methods used to construct or query them, has clear implications for downstream systems accessing and using the information in such graphs. However, this topic remains relatively unstudied, so our main aim for organizing this workshop was to bring together a group of people from a variety of backgrounds with an interest in the topic, in order to arrive at a shared definition and roadmap for the future. Through a program that included two keynotes, an invited paper, three peer-reviewed full papers, and a plenary discussion, we have made initial inroads towards a common understanding and shared research agenda for this timely and important topic. Introduction: Knowledge graphs (KGs) store human knowledge about the world in structured relational form. Because KGs serve as important sources of machine-readable relational knowledge, extensive research efforts have gone into constructing and utilizing knowledge graphs in various areas of artificial intelligence over the past decade [Nickel et al., 2015; Xiong et al., 2017; Dietz et al., 2018; Voskarides et al., 2018; Shinavier et al., 2019; Safavi and Koutra, 2020].
Encoding Information Bias in Causal Diagrams. Eyal Shahar, MD, MPH
Encoding information bias in causal diagrams. Eyal Shahar, MD, MPH, Professor, Division of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, The University of Arizona, 1295 N. Martin Ave., Tucson, AZ 85724. Email: [email protected]. Phone: 520-626-8025. Fax: 520-626-2767. Abstract. Background: Epidemiologists usually classify bias into three main categories: confounding, selection bias, and information bias. Previous authors have described the first two categories in the logic and notation of causal diagrams, formally known as directed acyclic graphs (DAG). Methods: I examine common types of information bias, disease related and exposure related, from the perspective of causal diagrams. Results: Disease or exposure information bias always involves the use of an effect of the variable of interest, specifically, an effect of true disease status or an effect of true exposure status. The bias typically arises from a causal or an associational path of no interest to the researchers. In certain situations, it may be possible to prevent or remove some of the bias analytically. Conclusions: Common types of information bias, just like confounding and selection bias, have a clear and helpful representation within the framework of causal diagrams. Key words: Information bias, Surrogate variables, Causal diagram. Introduction: Epidemiologists usually classify bias into three main categories: confounding bias, selection bias, and information (measurement) bias. Previous authors have described the first two categories in the logic and notation of directed acyclic graphs (DAG) [1, 2]. Briefly, confounding bias arises from a common cause of the exposure and the disease, whereas selection bias arises from conditioning on a common effect (a collider) by unnecessary restriction, stratification, or "statistical adjustment".
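The structure the abstract describes, where the recorded exposure is an effect of the true exposure and may also be influenced by disease status, can be written down as a small directed acyclic graph. The sketch below uses networkx and illustrative node names (E, D, E_star); it is a rough rendering of the idea, not the paper's own notation or code.

```python
import networkx as nx

# Illustrative DAG for exposure information bias: the recorded exposure E_star
# is an effect of the true exposure E, and differential reporting makes it an
# effect of disease status D as well.
G = nx.DiGraph()
G.add_edges_from([
    ("E", "D"),        # causal effect of interest: true exposure -> disease
    ("E", "E_star"),   # measurement: recorded exposure reflects true exposure
    ("D", "E_star"),   # differential reporting: disease status influences recording
])
assert nx.is_directed_acyclic_graph(G)

# An analysis that substitutes E_star for E mixes two connections between
# E_star and D: the intended one through E, and a path of no interest
# created by the D -> E_star edge.
for path in nx.all_simple_paths(G.to_undirected(), source="E_star", target="D"):
    print(" -- ".join(path))
```

Deleting the D -> E_star edge corresponds to non-differential measurement, in which the extra path of no interest disappears.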
Mitigating Political Bias in Language Models Through Reinforced Calibration
Mitigating Political Bias in Language Models Through Reinforced Calibration. Ruibo Liu (1), Chenyan Jia (2), Jason Wei (3), Guangxuan Xu (1), Lili Wang (1), Soroush Vosoughi (1). (1) Department of Computer Science, Dartmouth College; (2) Moody College of Communication, University of Texas at Austin; (3) ProtagoLabs. Abstract: Current large-scale language models can be politically biased as a result of the data they are trained on, potentially causing serious problems when they are deployed in real-world settings. In this paper, we describe metrics for measuring political bias in GPT-2 generation and propose a reinforcement learning (RL) framework for mitigating political biases in generated text. By using rewards from word embeddings or a classifier, our RL framework guides debiased generation without having access to the training data or requiring the model to be retrained. In empirical experiments on three attributes sensitive to political bias (gender, location, and topic), our methods reduced bias according to both our metrics and human evaluation, while maintaining readability and semantic coherence. From the body of the paper: … with particular keywords of the aforementioned attributes, and 2) Direct Bias, which measures bias in texts generated using prompts that have directly ideological triggers (e.g., democrat, republican) in addition to keywords of aforementioned attributes. Table 1 shows four samples of text generated by off-the-shelf GPT-2 with different attribute keywords in the prompts; all samples exhibit political bias. For example, when triggered with a prompt including marijuana, the generated text tends to present a favorable attitude (e.g., "I believe it should be legal and not regulated."), which is mostly a liberal stance. More interestingly, even a prompt including a conservative trigger (republican) results in generation which leans to the liberal side ("vote for …").
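As a rough illustration of the kind of probing described in the excerpt, the sketch below generates continuations from off-the-shelf GPT-2 for prompts that pair an ideological trigger with an attribute keyword. The prompts and the political_lean_score stub are this sketch's own placeholders (the paper uses its own prompt sets and word-embedding or classifier rewards), so treat it as an outline rather than the authors' method.

```python
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")  # off-the-shelf GPT-2, as in the excerpt

def political_lean_score(text: str) -> float:
    """Placeholder for a stance scorer (e.g., a trained classifier or an
    embedding-based score); negative = liberal-leaning, positive = conservative-leaning."""
    raise NotImplementedError("plug in a real scorer here")

# Hypothetical direct-bias-style prompts: ideological trigger + attribute keyword.
prompts = [
    "As a democrat, my opinion on marijuana is",
    "As a republican, my opinion on marijuana is",
]

for prompt in prompts:
    outputs = generator(prompt, max_new_tokens=40, num_return_sequences=3,
                        do_sample=True, pad_token_id=generator.tokenizer.eos_token_id)
    continuations = [o["generated_text"][len(prompt):].strip() for o in outputs]
    # With a real scorer, average political_lean_score(c) over continuations
    # to estimate the leaning induced by this trigger/attribute pair.
    print(prompt, "->", continuations[0][:80])
```

Comparing the average scores across triggers would give a crude, classifier-dependent analogue of the paper's direct bias measurement.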
Building Representative Corpora from Illiterate Communities: a Review of Challenges and Mitigation Strategies for Developing Countries
Building Representative Corpora from Illiterate Communities: A Review of Challenges and Mitigation Strategies for Developing Countries. Stephanie Hirmer (1), Alycia Leonard (1), Josephine Tumwesige (2), Costanza Conforti (2,3). (1) Energy and Power Group, University of Oxford; (2) Rural Senses Ltd.; (3) Language Technology Lab, University of Cambridge. Abstract: Most well-established data collection methods currently adopted in NLP depend on the assumption of speaker literacy. Consequently, the collected corpora largely fail to represent swathes of the global population, which tend to be some of the most vulnerable and marginalised people in society, and often live in rural developing areas. Such underrepresented groups are thus not only ignored when making modeling and system design decisions, but also prevented from benefiting from development outcomes achieved through data-driven NLP. This paper aims to address the under-representation of illiterate communities in NLP corpora: we identify potential biases and ethical issues that might arise when collecting … From the introduction: … areas where the bulk of the population lives (Roser and Ortiz-Ospina (2016), Figure 1). As a consequence, common data collection techniques – designed for use in HICs – fail to capture data from a vast portion of the population when applied to LICs. Such techniques include, for example, crowdsourcing (Packham, 2016), scraping social media (Le et al., 2016) or other websites (Roy et al., 2020), collecting articles from local newspapers (Marivate et al., 2020), or interviewing experts from international organizations (Friedman et al., 2017). While these techniques are important to easily build large corpora, they implicitly rely on the above-mentioned assumptions (i.e. internet access and literacy), and might result in demographic misrepresentation (Hovy and Spruit, 2016).
Does Political Affiliation Trump Outcome Bias? Evan D. Lester
Does Political Affiliation Trump Outcome Bias? Evan D. Lester, Department of Psychology, Hampden-Sydney College. Abstract: Research in the field of judgment and decision making has consistently shown that information pertaining to the outcome of a decision has a significant impact on people's attitudes of the decision itself. This effect is referred to as outcome bias. Data was collected from approximately equal numbers of Republicans and Democrats. Participants were presented with descriptions and outcomes of decisions made by a hypothetical politician. The decisions concerned public policies in response to the Coronavirus (COVID-19) pandemic. Participants evaluated the quality of the thinking that went into each decision. Results showed that policies that yielded successful outcomes received significantly better evaluations than policies that yielded failures. Democrats exhibited this tendency to a greater extent compared to Republicans. Conversely, Republicans exhibited a greater bias toward their own political party than did Democrats. The findings of this project are discussed within the context of classic and contemporary findings in the field of judgment and decision making. Keywords: Outcome Bias, Hindsight Bias, Political Affiliation. Introduction: Individuals make countless decisions every day. Some decisions are trivial (i.e. what to eat) while other decisions can impact many people (i.e. public policy decisions). But how do individuals decide what makes a good or bad decision? In explaining how individuals evaluate decision quality, expected utility theory (EUT) suggests that individuals are rational and deliberate in their estimates of the options available.
Self-Serving Assessments of Fairness and Pretrial Bargaining
Self-Serving Assessments of Fairness and Pretrial Bargaining. George Loewenstein, Samuel Issacharoff, Colin Camerer, and Linda Babcock. "In life it is hard enough to see another person's view of things; in a law suit it is impossible." (Janet Malcolm, The Journalist and the Murderer, 1990). I. Introduction: A persistently troubling question in the legal-economic literature is why cases proceed to trial. Litigation is a negative-sum proposition for the litigants: the longer the process continues, the lower their aggregate wealth. Although civil litigation is resolved by settlement in an estimated 95 percent of all disputes, what accounts for the failure of the remaining 5 percent to settle prior to trial? The standard economic model of legal disputes posits that settlement occurs when there exists a positively valued settlement zone, a range of transfer amounts from defendant to plaintiff that leave both parties better off than they would be if they went to trial. The location of the settlement zone depends on three factors: the parties' probability distributions of award amounts, the litigation costs they face, and their risk preferences. (Loewenstein is Professor of Economics, Carnegie Mellon University; Issacharoff is Assistant Professor of Law, University of Texas at Austin School of Law; Camerer is Professor of Organizational Behavior and Strategy, University of Chicago Graduate School of Business; Babcock is Assistant Professor of Economics, the Heinz School, Carnegie Mellon University. We thank Douglas Laycock for his helpful comments. Jodi Gillis, Nathan Johnson, Ruth Silverman, and Arlene Simon provided valuable research assistance. Loewenstein's research was supported by the Russell Sage Foundation.)
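The settlement-zone logic in this excerpt can be made concrete with a tiny risk-neutral calculation (illustrative numbers only; the paper's full model also allows for risk preferences and entire award distributions): the plaintiff accepts anything above her expected award net of her own litigation costs, the defendant pays anything up to his expected award plus his costs, and self-serving divergence in those expectations can erase the overlap entirely.

```python
def settlement_zone(plaintiff_expected_award, defendant_expected_award,
                    plaintiff_costs, defendant_costs):
    """Risk-neutral sketch of the standard settlement model described above.

    Returns the (lowest, highest) mutually acceptable transfer from defendant
    to plaintiff, or None if no settlement zone exists. Figures are illustrative.
    """
    plaintiff_minimum = plaintiff_expected_award - plaintiff_costs
    defendant_maximum = defendant_expected_award + defendant_costs
    if defendant_maximum >= plaintiff_minimum:
        return (plaintiff_minimum, defendant_maximum)
    return None

# Unbiased expectations: both sides expect a $100,000 award and face $10,000 in costs.
print(settlement_zone(100_000, 100_000, 10_000, 10_000))  # (90000, 110000): settlement possible

# Self-serving assessments: the plaintiff expects $130,000, the defendant $90,000.
print(settlement_zone(130_000, 90_000, 10_000, 10_000))   # None: no zone, the case goes to trial
```

In the second call the parties' self-serving estimates diverge by more than their combined litigation costs, so no transfer satisfies both sides and the case proceeds to trial.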
Recall Bias: a Proposal for Assessment and Control
International Journal of Epidemiology, Vol. 16, No. 2, © International Epidemiological Association 1987. Recall Bias: A Proposal for Assessment and Control. Karen Raphael. Probably every major text on epidemiology offers at least some discussion of recall bias in retrospective research. A potential for recall bias exists whenever historical self-report information is elicited from respondents. Thus, the potential for its occurrence is greatest in case-control studies or cross-sectional studies which include retrospective components. Recall bias is said to occur when accuracy of recall regarding prior exposures is different for cases versus controls. The current paper will describe the process and consequences of recall bias. Most critically, it will suggest methods for assessment and adjustment of its effects. WHAT IS RECALL BIAS? Generally, recall bias has been described in terms of 'embroidery' of personal history by those respondents … Whether the source of the bias is underreporting of true exposures in controls or overreporting of true exposures in cases, the net effect of the bias is to exaggerate the magnitude of the difference between cases and controls in reported rates of exposure to risk factors under investigation. Consequently, recall bias leads to an inflation of the odds ratio. It leads to the likelihood that significant research findings based upon retrospective data can be interpreted in terms of a methodological artefact rather than substantive theory. It is important to note that recall bias is not equivalent to memory failure itself. If memory failure regarding prior events is equal in case and control groups, recall bias will not occur. Rather, memory failure itself will lead to measurement error which, in turn, will usually lead to a loss of statistical power.
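The claim that recall bias inflates the odds ratio can be checked with a small worked example (the numbers are illustrative, not taken from the article): give cases and controls the same true exposure prevalence, but let cases recall exposure more completely than controls, and compare the reported odds.

```python
def reported_odds(prevalence, sensitivity, specificity):
    """Odds of *reporting* exposure, given true prevalence and recall accuracy."""
    p_report = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    return p_report / (1 - p_report)

# Same true exposure prevalence (30%) in cases and controls, so the true OR is 1.0.
# Cases recall exposure with sensitivity 0.95, controls with 0.75; specificity is 0.98 for both.
cases_odds = reported_odds(0.30, sensitivity=0.95, specificity=0.98)
controls_odds = reported_odds(0.30, sensitivity=0.75, specificity=0.98)
print(f"observed OR = {cases_odds / controls_odds:.2f}")  # ~1.36 despite a true OR of 1.0
```

Here the observed odds ratio comes out around 1.4 even though the true odds ratio is exactly 1, which is the inflation the author describes; with equal recall accuracy in both groups (non-differential error), the spurious association disappears and only a loss of power remains.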
A Framework to Address Cognitive Biases of Climate Change. Jiaying Zhao and Yu Luo
[In press at Neuron] A framework to address cognitive biases of climate change. Jiaying Zhao (1,2) and Yu Luo (1). (1) Department of Psychology, University of British Columbia; (2) Institute for Resources, Environment and Sustainability, University of British Columbia. Please address correspondence to: Jiaying Zhao, Department of Psychology, Institute for Resources, Environment and Sustainability, University of British Columbia, Vancouver, B.C., Canada, V6T 1Z4. Email: [email protected]. Abstract: We propose a framework that outlines several predominant cognitive biases of climate change, identifies potential causes, and proposes debiasing tools, with the ultimate goal of depolarizing climate beliefs and promoting actions to mitigate climate change. Keywords: decision making, cognition, behavior change, polarization, debias. Introduction: Climate change is an urgent crisis facing humanity. To effectively combat climate change, we need concerted efforts not only from policymakers and industry leaders to institute top-down structural changes (e.g., policies, infrastructure) but also from individuals and communities to instigate bottom-up behavior changes to collectively reduce greenhouse gas emissions. To this end, psychology has offered useful insights on what motivates people to act on climate change, from social psychology investigating the underlying beliefs and attitudes of climate change, to cognitive psychology uncovering the attentional, perceptual, and decision processes of climate actions, and more recently neuroscience pinpointing the neural circuitry on motivated cognition. Insights gained from these fields have started to generate interventions to shift beliefs and promote behavior change to mitigate climate change. In the search for behavioral climate solutions, a stubborn challenge remains. That is, a sizeable portion of the public still hold disbelief on climate change and refuse to take actions, despite the fact that the vast majority of climate scientists have reached a consensus on anthropogenic causes of climate change.