Selection Bias Explorations and Debias Methods for Natural Language Sentence Matching Datasets

Total Pages: 16

File Type: PDF, Size: 1020 KB

Selection Bias Explorations and Debias Methods for Natural Language Sentence Matching Datasets

Guanhua Zhang (1,2)*, Bing Bai (1)*, Jian Liang (1), Kun Bai (1), Shiyu Chang (3), Mo Yu (3), Conghui Zhu (2), Tiejun Zhao (2)
(1) Cloud and Smart Industries Group, Tencent, China; (2) Harbin Institute of Technology, China; (3) MIT-IBM Watson AI Lab, IBM Research, USA

Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4418-4429, Florence, Italy, July 28 - August 2, 2019. (c) 2019 Association for Computational Linguistics.

* Equal contributions from both authors. This work was done when Guanhua Zhang was an intern at Tencent.

Abstract

Natural Language Sentence Matching (NLSM) has gained substantial attention from both academia and industry, and rich public datasets contribute a lot to this process. However, biased datasets can also hurt the generalization performance of trained models and give untrustworthy evaluation results. For many NLSM datasets, the providers select some pairs of sentences into the datasets, and this sampling procedure can easily bring in an unintended pattern, i.e., selection bias. One example is the QuoraQP dataset, where some content-independent naïve features are unreasonably predictive. Such features are the reflection of the selection bias and are termed "leakage features." In this paper, we investigate the problem of selection bias on six NLSM datasets and find that four of them are significantly biased. We further propose a training and evaluation framework to alleviate the bias. Experimental results on QuoraQP suggest that the proposed framework can improve the generalization ability of trained models and give more trustworthy evaluation results for real-world adoptions.

1 Introduction

Natural Language Sentence Matching (NLSM) aims at comparing two sentences and identifying their relationship (Wang et al., 2017), and serves as the core of many NLP tasks such as question answering and information retrieval (Wang et al., 2016b). Natural Language Inference (NLI) (Bowman et al., 2015) and Semantic Textual Similarity (STS) (Wang et al., 2016b) are both typical NLSM problems. A large number of publicly available datasets have benefited the research to a great extent (Kim et al., 2018; Wang et al., 2017; Tien et al., 2018), including QuoraQP [1], SNLI (Bowman et al., 2015), SICK (Marelli et al., 2014), etc. These datasets provide resources for both training and evaluation of different algorithms (Torralba and Efros, 2011).

However, most of these datasets are prepared through procedures that involve a sampling process, which can easily introduce a selection bias (Heckman, 1977; Zadrozny, 2004). It gets even worse when the bias reveals label information, resulting in the "leakage features," which are irrelevant to the content/semantics of the sentences but are predictive of the label. One example is QuoraQP, a dataset for classifying whether two sentences are duplicated (labeled as 1) or not (labeled as 0), which has been widely used to evaluate STS models (Gong et al., 2017; Kim et al., 2018; Wang et al., 2017; Devlin et al., 2018). In QuoraQP, three leakage features have been identified: S1_freq, the number of occurrences of the first sentence in the dataset; S2_freq, the number of occurrences of the second sentence; and S1S2_inter, the number of sentences that are paired with both the first and the second sentences in the dataset for comparison.

[Figure 1: Visualization of the distributions of normalized features (negative WMD, S1_freq, S2_freq, S1S2_inter) versus the label in QuoraQP. The right part (in red) represents the distributions of duplicated pairs, and the left part (in blue) represents the distributions of not duplicated pairs. Best viewed in color.]

Figure 1 shows the distributions of the normalized (negative) Word Mover's Distance (WMD) (Kusner et al., 2015) and the normalized leakage features versus the labels in QuoraQP. The features are all normalized to their quantiles. As illustrated, the leakage features are more predictive than the WMD, as the differences between the distributions of positive and negative pairs are more significant. Moreover, combining S1_freq and S2_freq makes the predictions even more accurate, as illustrated in Figure 2, where we calculate the averages of the labels under different S1_freq and S2_freq. We find that when both features' values are large, the pairs tend to be duplicated (marked in red), while when one is large and the other is small, the pairs tend to be not duplicated (marked in blue).

[Figure 2: The averages of the labels under different S1_freq and S2_freq. Red squares indicate that the averages are close to 1, and blue squares indicate that the averages are close to 0. Best viewed in color.]

These leakage features play a critical role in the QuoraQP competition [2]. As the evaluations are conducted on the same biased datasets, models that fit the bias pattern can take additional advantage over unbiased models, making the benchmark results untrustworthy. On the other hand, the bias pattern does not exist in the real world, so if a model fits the bias pattern (intentionally or unintentionally), its generalization performance will be hurt, limiting the value of these datasets for further applications (Torralba and Efros, 2011).

In this paper, we study this problem and demonstrate the impact of the selection bias through a series of experiments. We focus on the selection bias embodied in the comparing relationships of sentences, and the main contributions of this paper are the answers to the following questions:

• Does selection bias exist in other NLSM datasets? We identify four out of six publicly available datasets that suffer from the selection bias.
• Would Deep Neural Network (DNN)-based methods learn from the bias pattern unintentionally? We find that Siamese-LSTM models trained on QuoraQP do capture the bias pattern.
• Can we help the model learn the useful semantic pattern from the content without fitting the bias pattern? We propose an easy-to-adopt method to mitigate the bias. Experiments show that this method can improve the generalization performance of the trained models.
• Can we build an evaluation framework that gives more reliable results for real-world adoption? We propose a more trustworthy evaluation method that demonstrates consistent results with unbiased cross-dataset evaluations.

The rest of the paper is organized as follows. Section 2 gives an empirical look at the selection bias on a variety of NLSM datasets and analyzes why the leakage features are effective. Section 3 examines whether DNN-based methods fit the bias pattern unintentionally. Section 4 introduces the training and evaluation framework to alleviate the biasedness. Taking QuoraQP as an example, we report the experimental results in Section 5. Section 6 summarizes related work, and Section 7 draws the conclusion.

[1] https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs
[2] https://www.kaggle.com/c/quora-question-pairs/discussion/34355 and https://www.kaggle.com/c/quora-question-pairs/discussion/33168

2 Empirical Study of the Selection Bias

In this section, we investigate the problem of selection bias on six NLSM datasets and then analyze why the leakage features are effective.

2.1 Quantifying the Biasedness in Datasets

To quantify the severity of the leakage from the selection bias, we formulate a toy problem for NLSM: predicting the semantic relationship of two sentences based only on the comparing relationships between sentences. We refer to the semantic relationship of two sentences as their label, for example, duplicated for STS and entailment for NLI, and to the comparing relationship as whether they are paired for comparison in the dataset.

[Figure 3: Illustration of the graph built for Problem 1. Each row (Sentence1 ID, Sentence2 ID, Label) — (1, 2, ?), (1, 3, ?), (1, 5, ?), (2, 3, ?), (2, 4, ?), (2, 6, ?) — becomes an edge with an unknown label. We only use the comparing relationships to build the graph.]

The six datasets (see Table 1) are SNLI, MultiNLI, QuoraQP, MSRP, SICK, and ByteDance [3]. We apply two different methods to classify the edges on the graph: Leakage, which uses the three leakage features introduced in Section 1, and Advanced, which uses some more advanced graph-based features (Perozzi et al., 2014; Zhou et al., 2009; Liben-Nowell and Kleinberg, 2007) together with the three leakage features [4]. We also report the results of three baselines: Majority, which predicts the most frequent label; Unlexicalized, which uses 15 handcrafted features from the content of the sentences (Bowman et al., 2015) (e.g., ...); and an LSTM model.

Method                 SNLI     MultiNLI   MultiNLI     QuoraQP    MSRP      SICK      SICK      ByteDance
                                Matched    Mismatched                        (NLI)     (STS)
Majority               33.7     35.6       36.5         50.00      66.5      56.7      50.3      68.59
Unlexicalized          47.7     44.9       45.5         68.20      73.9      70.1      70.2      75.23
LSTM                   77.6(a)  66.9(b)    66.9(b)      82.58(c)   70.6(d)   71.3(e)   70.2      86.45
Leakage                36.6     32.1       31.1         79.63      66.7      56.7      55.5      78.24
Advanced               39.1     32.7       33.8         80.47      67.9      57.5      56.3      85.73
Leakage vs Majority    +8.61    -9.83      -14.79       +59.26     +0.30     0.00      +10.34    +14.07
Advanced vs Majority   +16.02   -8.15      -7.40        +60.94     +2.11     +1.41     +11.93    +24.99

Table 1: Accuracy scores ("%" omitted) of predicting the label with unlexicalized features, leakage features, and advanced graph-based features, plus the improvements relative to Majority. Results marked (a) are from Bowman et al. (2015); (b) from Williams et al. (2018); (c) from Wang et al. (2017); (d) from Shen et al. (2018); (e) from Baudiš et al. (2016). Other results are based on our implementations.
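The leakage features are pure bookkeeping over the list of pairs, so they are straightforward to reproduce. The sketch below is our illustration rather than code from the paper, and the choice to count each sentence's occurrences over both slots of the pair list is an assumption:

```python
from collections import Counter, defaultdict

def leakage_features(pairs):
    """Compute S1_freq, S2_freq, and S1S2_inter (Section 1) for each pair.

    pairs: list of (sentence1, sentence2) tuples; stable IDs work as well.
    """
    freq = Counter()              # how often each sentence occurs in the dataset
    partners = defaultdict(set)   # sentence -> set of sentences it is compared with
    for s1, s2 in pairs:
        freq[s1] += 1
        freq[s2] += 1
        partners[s1].add(s2)
        partners[s2].add(s1)

    features = []
    for s1, s2 in pairs:
        features.append({
            "S1_freq": freq[s1],
            "S2_freq": freq[s2],
            # number of sentences paired with BOTH s1 and s2 for comparison
            "S1S2_inter": len(partners[s1] & partners[s2]),
        })
    return features

# The toy graph of Figure 3: six compared pairs over sentence IDs 1-6.
toy_pairs = [(1, 2), (1, 3), (1, 5), (2, 3), (2, 4), (2, 6)]
print(leakage_features(toy_pairs)[0])
# -> {'S1_freq': 3, 'S2_freq': 4, 'S1S2_inter': 1}
```

The Leakage method of Table 1 then amounts to training an ordinary classifier on these three content-free features alone; such a model beating the Majority baseline is precisely the signature of selection bias. A minimal version of that experiment, assuming scikit-learn and a logistic-regression classifier (the paper does not specify the classifier it used):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def leakage_baseline_accuracy(pairs, labels):
    """Accuracy of a classifier that sees only the three leakage features."""
    X = [[f["S1_freq"], f["S2_freq"], f["S1S2_inter"]]
         for f in leakage_features(pairs)]   # from the sketch above
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels,
                                              test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```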
Recommended publications
  • A Task-Based Taxonomy of Cognitive Biases for Information Visualization
    A Task-based Taxonomy of Cognitive Biases for Information Visualization. Evanthia Dimara, Steven Franconeri, Catherine Plaisant, Anastasia Bezerianos, and Pierre Dragicevic. Three kinds of limitations: the computer, the display, and the human. On the human side, both vision and reasoning have limitations. Perceptual biases include magnitude estimation and color perception. Cognitive biases are behaviors in which humans consistently act irrationally; Pohl's criteria distilled: they are predictable and consistent, people are unaware they are doing them, and they are not misunderstandings. Examples: Ambiguity effect, Anchoring or focalism, Anthropocentric thinking, Anthropomorphism or personification, Attentional bias, Attribute substitution, Automation bias, Availability heuristic, Availability cascade, Backfire effect, Bandwagon effect, Base rate fallacy or Base rate neglect, Belief bias, Ben Franklin effect, Berkson's paradox, Bias blind spot, Choice-supportive bias, Clustering illusion, Compassion fade, Confirmation bias, Congruence bias, Conjunction fallacy, Conservatism (belief revision), Continued influence effect, Contrast effect, Courtesy bias, Curse of knowledge, Declinism, Decoy effect, Default effect, Denomination effect, Disposition effect, Distinction bias, Dread aversion, Dunning–Kruger effect, Duration neglect, Empathy gap, End-of-history illusion, Endowment effect, Exaggerated expectation, Experimenter's or expectation bias, ...
  • Cognitive Bias Mitigation: How to Make Decision-Making More Rational?
    Cognitive Bias Mitigation: How to make decision-making more rational? Abstract: Cognitive biases distort judgement and adversely impact decision-making, which results in economic inefficiencies. Initial attempts to mitigate these biases met with little success. However, recent studies which used computer games and educational videos to train people to avoid biases (Clegg et al., 2014; Morewedge et al., 2015) showed that this form of training reduced selected cognitive biases by 30%. In this work I report the results of an experiment which investigated the debiasing effects of training on confirmation bias. The debiasing training took the form of a short video which contained information about confirmation bias, its impact on judgement, and mitigation strategies. The results show that participants exhibited confirmation bias both in the selection and processing of information, and that debiasing training effectively decreased the level of confirmation bias by 33% at the 5% significance level. Key words: behavioural economics, cognitive bias, confirmation bias, cognitive bias mitigation, confirmation bias mitigation, debiasing. JEL classification: D03, D81, Y80. 1 Introduction: Empirical research has documented a panoply of cognitive biases which impair human judgement and make people depart systematically from models of rational behaviour (Gilovich et al., 2002; Kahneman, 2011; Kahneman & Tversky, 1979; Pohl, 2004). Besides distorting decision-making and judgement in the areas of medicine, law, and the military (Nickerson, 1998), cognitive biases can also lead to economic inefficiencies. Slovic et al. (1977) point out how they distort insurance purchases; Hyman Minsky (1982) partly blames psychological factors for economic cycles. Shefrin (2010) argues that confirmation bias and some other cognitive biases were among the significant factors leading to the global financial crisis which broke out in 2008.
  • The Role of Publication Selection Bias in Estimates of the Value of a Statistical Life
    The Role of Publication Selection Bias in Estimates of the Value of a Statistical Life. W. Kip Viscusi. ABSTRACT: Meta-regression estimates of the value of a statistical life (VSL) controlling for publication selection bias often yield bias-corrected estimates of VSL that are substantially below the mean VSL estimates. Labor market studies using the more recent Census of Fatal Occupational Injuries (CFOI) data are subject to less measurement error and also yield higher bias-corrected estimates than do studies based on earlier fatality rate measures. These results are borne out by the findings for a large sample of all VSL estimates based on labor market studies using CFOI data and for four meta-analysis data sets consisting of the authors' best estimates of VSL. The confidence intervals of the publication bias-corrected estimates of VSL based on the CFOI data include the values that are currently used by government agencies, which are in line with the most precisely estimated values in the literature. KEYWORDS: value of a statistical life, VSL, meta-regression, publication selection bias, Census of Fatal Occupational Injuries, CFOI. JEL CLASSIFICATION: I18, K32, J17, J31. 1. Introduction: The key parameter used in policy contexts to assess the benefits of policies that reduce mortality risks is the value of a statistical life (VSL). This measure of the risk-money trade-off for small risks of death serves as the basis for the standard approach used by government agencies to establish monetary benefit values for the predicted reductions in mortality risks from health, safety, and environmental policies. Recent government applications of the VSL have used estimates in the $6 million to $10 million range, where these and all other dollar figures in this article are in 2013 dollars using the Consumer Price Index for all Urban Consumers (CPI-U).
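    In this literature, "meta-regression controlling for publication selection bias" usually denotes a funnel-asymmetry / precision-effect (FAT-PET) specification; the following is a gloss on that standard form under our own assumptions, not necessarily the paper's exact model:

```latex
% FAT-PET meta-regression (standard form in this literature; a gloss,
% not necessarily the specification used in the paper being summarized)
\[
  \widehat{VSL}_i = \beta_0 + \beta_1 \, SE_i + \varepsilon_i ,
\]
% where $SE_i$ is the standard error of estimate $i$: $\beta_1 \neq 0$
% signals publication selection (funnel asymmetry), and $\beta_0$ is the
% bias-corrected ("precision-effect") estimate of the VSL.
```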
  • Bias and Fairness in NLP
    Bias and Fairness in NLP. Margaret Mitchell (Google Brain), Kai-Wei Chang (UCLA), Vicente Ordóñez Román (University of Virginia), Vinodkumar Prabhakaran (Google Brain). Tutorial outline — Part 1: Cognitive Biases / Data Biases / Bias Laundering; Part 2: Bias in NLP and Mitigation Approaches; Part 3: Building Fair and Robust Representations for Vision and Language; Part 4: Conclusion and Discussion. "Bias Laundering": Cognitive Biases, Data Biases, and ML — Vinodkumar Prabhakaran and Margaret Mitchell (Google Brain), with Timnit Gebru, Andrew Zaldivar, Emily Denton, Simone Wu, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Deb Raji, Adrian Benton, Brian Zhang, Dirk Hovy, Josh Lovejoy, Alex Beutel, Blake Lemoine, Hee Jung Ryu, Hartwig Adam, and Blaise Agüera y Arcas. What's in this tutorial: motivation for fairness research in NLP; how and why NLP models may be unfair; various types of NLP fairness issues and mitigation approaches; what can/should we do? What's NOT in this tutorial: definitive answers to fairness/ethical questions; prescriptive solutions to fix ML/NLP (un)fairness. What do you see? Bananas; stickers; Dole bananas; bananas at a store; bananas on shelves; bunches of bananas ...
  • Why So Confident? the Influence of Outcome Desirability on Selective Exposure and Likelihood Judgment
    Why so confident? The influence of outcome desirability on selective exposure and likelihood judgment. Authors: Paul D. Windschitl, Aaron M. Scherer, Andrew R. Smith, and Jason P. Rose. Abstract: Previous studies that have directly manipulated outcome desirability have often found little effect on likelihood judgments (i.e., no desirability bias or wishful thinking). The present studies tested whether selections of new information about outcomes would be impacted by outcome desirability, thereby biasing likelihood judgments. In Study 1, participants made predictions about novel outcomes and then selected additional information to read from a buffet. They favored information supporting their prediction, and this fueled an increase in confidence. Studies 2 and 3 directly manipulated outcome desirability through monetary means. If a target outcome (randomly preselected) was made especially desirable, then participants tended to select information that supported the outcome. If made undesirable, less supporting information was selected. Selection bias was again linked to subsequent likelihood judgments. These results constitute novel evidence for the role of selective exposure in cases of overconfidence and desirability bias in likelihood judgments. Citation: Windschitl, P. D., Scherer, A. M., Smith, A. R., & Rose, J. P. (2013). "Why so confident? The influence of outcome desirability on selective exposure and likelihood judgment." Organizational Behavior and Human Decision Processes, 120, 73–86 (ISSN: 0749-5978). Version of record: http://dx.doi.org/10.1016/j.obhdp.2012.10.002
  • Correcting Sampling Bias in Non-Market Valuation with Kernel Mean Matching
    Correcting Sampling Bias in Non-Market Valuation with Kernel Mean Matching. Rui Zhang, Department of Agricultural and Applied Economics, University of Georgia. Selected paper prepared for presentation at the 2017 Agricultural & Applied Economics Association Annual Meeting, Chicago, Illinois, July 30 - August 1. Copyright 2017 by Rui Zhang. All rights reserved. Readers may make verbatim copies of this document for non-commercial purposes by any means, provided that this copyright notice appears on all such copies. Abstract: Non-response is common in surveys used in non-market valuation studies and can bias the parameter estimates and mean willingness to pay (WTP) estimates. One approach to correcting this bias is to reweight the sample so that the distribution of the characteristic variables of the sample matches that of the population. We use a machine learning algorithm, Kernel Mean Matching (KMM), to produce resampling weights in a non-parametric manner. We test KMM's performance through Monte Carlo simulations under multiple scenarios and show that KMM can effectively correct mean WTP estimates, especially when the sample size is small and the sampling process depends on covariates. We also confirm KMM's robustness to skewed bid design and model misspecification. Key words: contingent valuation, Kernel Mean Matching, non-response, bias correction, willingness to pay. 1. Introduction: Nonrandom sampling can bias contingent valuation estimates in two ways. Firstly, when the sample selection process depends on the covariates, the WTP estimates are biased due to the divergence between the covariate distributions of the sample and the population, even when the parameter estimates are consistent; this is usually called non-response bias.
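    Whatever method produces the resampling weights, the correction step itself is just a weighted average over the survey responses. A toy illustration with made-up numbers (not data from the paper):

```python
import numpy as np

# Hypothetical WTP survey responses and resampling weights; weights > 1
# up-weight respondents who are under-represented relative to the population.
wtp = np.array([12.0, 25.0, 40.0, 8.0, 30.0, 18.0])
weights = np.array([1.6, 0.7, 0.5, 1.8, 0.6, 1.2])

naive_mean = wtp.mean()                            # ignores the selection process
corrected_mean = np.average(wtp, weights=weights)  # bias-corrected mean WTP
print(f"naive: {naive_mean:.2f}, corrected: {corrected_mean:.2f}")
```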
  • Evaluation of Selection Bias in an Internet-Based Study of Pregnancy Planners
    HHS Public Access author manuscript; Epidemiology. Author manuscript; available in PMC 2016 April 04. Published in final edited form as: Epidemiology. 2016 January; 27(1): 98–104. doi:10.1097/EDE.0000000000000400. Evaluation of Selection Bias in an Internet-based Study of Pregnancy Planners. Elizabeth E. Hatch (a), Kristen A. Hahn (a), Lauren A. Wise (a), Ellen M. Mikkelsen (b), Ramya Kumar (a), Matthew P. Fox (a), Daniel R. Brooks (a), Anders H. Riis (b), Henrik Toft Sorensen (b), and Kenneth J. Rothman (a,c). (a) Department of Epidemiology, Boston University School of Public Health, Boston, MA; (b) Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark; (c) RTI Health Solutions, Durham, NC. Abstract: Selection bias is a potential concern in all epidemiologic studies, but it is usually difficult to assess. Recently, concerns have been raised that internet-based prospective cohort studies may be particularly prone to selection bias. Although use of the internet is efficient and facilitates recruitment of subjects that are otherwise difficult to enroll, any compromise in internal validity would be of great concern. Few studies have evaluated selection bias in internet-based prospective cohort studies. Using data from the Danish Medical Birth Registry from 2008 to 2012, we compared six well-known perinatal associations (e.g., smoking and birth weight) in an internet-based preconception cohort (Snart Gravid, n = 4,801) with the total population of singleton live births in the registry (n = 239,791). We used log-binomial models to estimate risk ratios (RRs) and 95% confidence intervals (CIs) for each association. We found that most results in both populations were very similar.
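    The comparison hinges on estimating the same risk ratios in the internet cohort and in the full registry and checking that they agree. As a simplified sketch only: a crude (unadjusted) RR with a 95% CI can be computed from 2x2 counts as below; the study itself used log-binomial regression, and every number here is made up:

```python
import math

def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Crude risk ratio with a 95% CI (log-normal approximation)."""
    rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)
    # standard error of log(RR) from the 2x2 counts
    se_log_rr = math.sqrt(1 / events_exposed - 1 / n_exposed
                          + 1 / events_unexposed - 1 / n_unexposed)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# Selection bias would show up as diverging estimates between the two sources.
print(risk_ratio(45, 500, 180, 4000))          # hypothetical cohort counts
print(risk_ratio(2100, 24000, 8200, 190000))   # hypothetical registry counts
```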
  • Testing for Selection Bias IZA DP No
    IZA DP No. 8455, September 2014. Testing for Selection Bias. Joonhwi Joo (University of Chicago) and Robert LaLonde (University of Chicago and IZA). DISCUSSION PAPER SERIES, Forschungsinstitut zur Zukunft der Arbeit / Institute for the Study of Labor (IZA), P.O. Box 7240, 53072 Bonn, Germany. Phone: +49-228-3894-0, Fax: +49-228-3894-180. Any opinions expressed here are those of the author(s) and not those of IZA. Research published in this series may include views on policy, but the institute itself takes no institutional policy positions. The IZA research network is committed to the IZA Guiding Principles of Research Integrity. The Institute for the Study of Labor (IZA) in Bonn is a local and virtual international research center and a place of communication between science, politics and business. IZA is an independent nonprofit organization supported by Deutsche Post Foundation. The center is associated with the University of Bonn and offers a stimulating research environment through its international network, workshops and conferences, data service, project support, research visits and doctoral program. IZA engages in (i) original and internationally competitive research in all fields of labor economics, (ii) development of policy concepts, and (iii) dissemination of research results and concepts to the interested public. IZA Discussion Papers often represent preliminary work and are circulated to encourage discussion. Citation of such a paper should account for its provisional character. A revised version may be available directly from the author.
  • Correcting Sample Selection Bias by Unlabeled Data
    Correcting Sample Selection Bias by Unlabeled Data. Jiayuan Huang (School of Computer Science, Univ. of Waterloo, Canada), Alexander J. Smola (NICTA, ANU, Canberra, Australia), Arthur Gretton (MPI for Biological Cybernetics, Tübingen, Germany), Karsten M. Borgwardt (Ludwig-Maximilians-University, Munich, Germany), Bernhard Schölkopf (MPI for Biological Cybernetics, Tübingen, Germany). Abstract: We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover the sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice. 1 Introduction: The default assumption in many learning scenarios is that training and test data are independently and identically distributed (iid), drawn from the same distribution. When the distributions on the training and test sets do not match, we are facing sample selection bias or covariate shift. Specifically, given a domain of patterns X and labels Y, we obtain training samples Z = {(x_1, y_1), ..., (x_m, y_m)} ⊆ X × Y from a Borel probability distribution Pr(x, y), and test samples Z′ = {(x′_1, y′_1), ..., (x′_{m′}, y′_{m′})} ⊆ X × Y drawn from another such distribution Pr′(x, y). Although there exists previous work addressing this problem [2, 5, 8, 9, 12, 16, 20], sample selection bias is typically ignored in standard estimation algorithms.
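    A compact version of this idea can be written directly from the abstract: reweight the training points so that their kernel mean matches that of the test points. The sketch below is an assumption-laden re-implementation (scipy's SLSQP in place of a dedicated QP solver, an RBF kernel, default constraint settings), not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kmm_weights(X_train, X_test, gamma=1.0, B=10.0, eps=None):
    """Kernel Mean Matching: find weights b minimizing
       0.5 * b'Kb - kappa'b   s.t.   0 <= b_i <= B,  |sum(b) - m| <= m*eps,
    which matches the weighted training mean to the test mean in feature space."""
    m, n = len(X_train), len(X_test)
    eps = eps if eps is not None else B / np.sqrt(m)
    K = rbf_kernel(X_train, X_train, gamma)
    kappa = (m / n) * rbf_kernel(X_train, X_test, gamma).sum(axis=1)

    objective = lambda b: 0.5 * b @ K @ b - kappa @ b
    gradient = lambda b: K @ b - kappa
    constraints = [
        {"type": "ineq", "fun": lambda b: m * (1 + eps) - b.sum()},
        {"type": "ineq", "fun": lambda b: b.sum() - m * (1 - eps)},
    ]
    result = minimize(objective, np.ones(m), jac=gradient,
                      bounds=[(0.0, B)] * m, constraints=constraints,
                      method="SLSQP")
    return result.x  # per-example weights for the training loss
```

    The returned weights can then be plugged into any weighted learner, e.g. as sample_weight in scikit-learn estimators.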
  • The Bias Bias in Behavioral Economics
    Review of Behavioral Economics, 2018, 5: 303–336. The Bias Bias in Behavioral Economics. Gerd Gigerenzer, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany. ABSTRACT: Behavioral economics began with the intention of eliminating the psychological blind spot in rational choice theory and ended up portraying psychology as the study of irrationality. In its portrayal, people have systematic cognitive biases that are not only as persistent as visual illusions but also costly in real life—meaning that governmental paternalism is called upon to steer people with the help of "nudges." These biases have since attained the status of truisms. In contrast, I show that such a view of human nature is tainted by a "bias bias," the tendency to spot biases even when there are none. This may occur by failing to notice when small sample statistics differ from large sample statistics, mistaking people's random error for systematic error, or confusing intelligent inferences with logical errors. Unknown to most economists, much of psychological research reveals a different portrayal, where people appear to have largely fine-tuned intuitions about chance, frequency, and framing. A systematic review of the literature shows little evidence that the alleged biases are potentially costly in terms of less health, wealth, or happiness. Getting rid of the bias bias is a precondition for psychology to play a positive role in economics. Keywords: Behavioral economics, Biases, Bounded Rationality, Imperfect information. Behavioral economics began with the intention of eliminating the psychological blind spot in rational choice theory and ended up portraying psychology as the source of irrationality.
  • Do Visitors of Product Evaluation Portals Select Reviews in a Biased Manner? Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 15(1), Article 4
    Winter, K., Zapf, B., Hütter, M., Tichy, N., & Sassenberg, K. (2021). Selective exposure in action: Do visitors of product evaluation portals select reviews in a biased manner? Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 15(1), article 4. https://doi.org/10.5817/CP2021-1-4 Selective Exposure in Action: Do Visitors of Product Evaluation Portals Select Reviews in a Biased Manner? Kevin Winter (1), Birka Zapf (1,2), Mandy Hütter (2), Nicolas Tichy (3), & Kai Sassenberg (1,2). (1) Leibniz-Institut für Wissensmedien, Tübingen, Germany; (2) University of Tübingen, Tübingen, Germany; (3) Ludwig Maximilian University of Munich, Munich, Germany. Abstract: Most people in industrialized countries regularly purchase products online. Consumers often rely on previous customers' reviews to make purchasing decisions. The current research investigates whether potential online customers select these reviews in a biased way and whether typical interface properties of product evaluation portals foster biased selection. Based on selective exposure research, potential online customers should have a bias towards selecting positive reviews when they have an initial preference for a product. We tested this prediction across five studies (total N = 1376) while manipulating several typical properties of the review selection interface that should – according to earlier findings – facilitate biased selection. Across all studies, we found some evidence for a bias in favor of selecting positive reviews, but the aggregated effect was non-significant in an internal meta-analysis. Contrary to our hypothesis and not replicating previous research, none of the interface properties that were assumed to increase biased selection led to the predicted effects. Overall, the current research suggests that biased information selection, which has regularly been found in many other contexts, only plays a minor role in online review selection.
  • Bias and Confounding
    M1 - PopMed. Bias and Confounding. Saba Masho, MD, MPH, DrPH, Department of Epidemiology and Community Health. OBJECTIVES: Describe the key types of bias; identify sources of bias in study design, data collection and analysis; identify confounders and methods for controlling confounders. Bias is any systematic error in an epidemiologic study that results in an incorrect estimate of the association between risk factor and outcome. It is introduced in the design of the study, including data collection, analysis, interpretation, publication, or review of data. It is impossible to correct for bias during the analysis, and it may also be hard to evaluate; thus, studies need to be carefully designed. Types: 1. Selection; 2. Information/measurement; 3. Confounding. SELECTION BIAS: A systematic error that arises in the process of selecting the study populations. It may occur when different criteria are used to select cases/controls or exposed/non-exposed (e.g., a study in uterine cancer excluding patients with hysterectomy from cases but not from controls). It may also occur when membership in one group differs systematically from the general population or the control group, called membership bias (e.g., selecting cases from inner hospitals and controls from suburban hospitals; the differences in hospital admission between cases and controls may conceal an association in a study — this is called "Berkson's bias" or "admission ...). Detection bias occurs when the risk factor under investigation requires thorough diagnostic ...