My bias is better than your bias
Why we should all be aware of our cognitive biases
Seminar series: How to ruin your carefully planned study? Tips for improving data analysis, Session 3
Bart Jacobs

We are all biased
- Psychologists claim it is easier to recognize "negative" traits in others.
- Actual research on cognitive biases is very difficult, and subject to many of the very biases it studies!
- We won't dive into the burden of proof for whether a given bias is real, or highly prevalent; that could fill a seminar series by itself.
- In this presentation, we will focus on:
  - recognising potential biases in ourselves and others;
  - avoiding the logical fallacies associated with them;
  - important sources of bias associated with data analysis.

Confirmation bias
- Also known as "experimenter's bias".
- Key problem: researchers are convinced of a hypothesis, or hold more belief in it than the actual data justifies.
- May present itself in a variety of ways.
- What are the negative consequences in science? Examples?

Confirmation bias in practice
- Selecting information that confirms a previously held belief.
- Explaining results in a way that matches a desirable hypothesis.
- Choosing or evaluating arguments based on how strongly they match or support the envisioned conclusion ("belief bias").
- Ignoring or dismissing results that don't support the hypothesis.
- Failing to update one's opinion when confronted with new and/or contradictory information ("conservatism bias", "continued influence effect").
- Designing experiments to study the hypothesis while ignoring alternatives: the worst outcome of such an experiment is "effect not found", so the hypothesis can never be disproven by it ("congruence bias").

Bandwagon effect
- Acceptance of popular ideas:
  - based on the number of people who support them;
  - based on the clout of the people who believe them.
- Problematic when scientific justification is lacking!
- What if an idea is pushed by established scientists without justification?
- Specialists with confirmation bias may block opposing theories.
- Conflicting interests may play a role.
- A major source of literature bias!
- Always judge the content, not the messenger.

Prediction biases
How good are we at judging our own prediction skills?
- Hindsight bias (the "knew-it-all-along" attitude): after events have occurred, viewing them as more predictable than they actually were.
  - May lead to false claims that "errors were preventable".
  - May introduce wrong explanations or imagined causes.
  - May lead to wrongly predicting similar events.
  - Could lead people to think that "they could have done better".
- Always remember the original context and setting!
- Knowledge gap: "could have known" ≠ "should have known".

Other prediction biases
- Wishful thinking without sufficient justification (known as "exaggerated expectation", "optimism bias", "pro-innovation bias"):
  - overestimation of a desirable outcome;
  - expecting a large positive effect when previous data does not imply it;
  - unjustified optimism about the applicability of a new methodology.
- Failure to predict extreme events ("normalcy bias"): refusing to plan for a worst-case outcome that has never happened before.
- Misjudging one's ability to distinguish true patterns from noise ("clustering illusion"): we are trained to see patterns that aren't there.

Which disease would you eliminate?
- Pneumonia: 450 000 000 active cases*, 4 000 000 yearly deaths (2017; includes 808 694 child deaths, 15% of all deaths under the age of 5).
- Diabetes: 422 000 000 active cases*, 1 500 000 yearly deaths (2017; does not include 3.7 million indirect deaths).
- HIV/Aids: 37 000 000 active cases*, 770 000 yearly deaths (2018; includes 1.8 million children under age 15 living with HIV).
- Tuberculosis: 10 000 000 active cases*, 1 300 000 yearly deaths (2017; 1.6 million deaths if PLHIV are included; includes 230 000 child deaths).
- Malaria: 219 000 000 active cases*, 435 000 yearly deaths (2017; includes 266 000 deaths of children under the age of 5, 61% of all malaria deaths).
- Ebola: 3 224 active cases*, 2 152 yearly deaths (2018-2019 outbreak, as of October 16; total cases in history ~35 000, total deaths ~15 000).
* Calculated either as new cases per year, or as people living with the disease; all numbers as officially reported by WHO or partners.

Extension neglect biases
- Ignoring, or insufficiently accounting for, the size of a problem.
- Only a bias if the size is relevant!
- Often linked to "scope neglect" and the "identifiable victim effect": identifiable patients are prioritized over the number of people affected.
- Why I put pneumonia before diabetes: "children", social desirability.
- Ebola: sensational and striking images, the impact of media.
- The opposite may also happen!
  - Overly trusting promising results from studies with a small sample size.
  - Base rate fallacy: overconfidence in a positive test result, even in a low-prevalence population.

Should we trust your answers to the questions?
Probably not!
- Social desirability biases ("choice-supportive bias", "courtesy bias"):
  - viewing or reporting your own actions more favourably;
  - giving "the preferred answer" rather than an honest opinion;
  - matching social norms, avoiding conflict, "choosing the path of least resistance".
  - Problematic if patients do this; worse if the doctor does it.
  - May be countered by "negativity bias": negative events are easier to recall.
- Selective refusal: when the reason not to answer is caused by the (missing) result, the sample is no longer representative.

Data-driven cognitive biases
- In the rest of the presentation, we will focus on some biases that can have a critical impact on data!
- Overlooking these sources of bias can result in major mistakes!
- Beware: easy to illustrate with obvious examples, but often sneakily subtle in real data!
- Let's start with a "textbook" example.

Sampling effects
- The million-dollar question in statistics: can we do inference on a population when only a (sub)sample was observed?
- The answer is never trivial!
- Key question: is the sample representative of the population?
- Always ask yourself this question!
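The base rate fallacy mentioned above can be made concrete with Bayes' theorem. A minimal sketch; the prevalence, sensitivity, and specificity below are illustrative assumptions, not numbers from the seminar:

```python
# Base rate fallacy: even an accurate test yields mostly false positives
# in a low-prevalence population. Assumed illustrative numbers:
# prevalence 0.1%, sensitivity 99%, specificity 95%.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(prevalence=0.001, sensitivity=0.99, specificity=0.95)
print(f"P(disease | positive) = {ppv:.1%}")  # roughly 2%, not 99%
```

Despite the test being right 99% of the time on sick patients, almost all positives come from the large healthy majority.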
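A quick simulation can show why representativeness, not size, is what matters. The income distribution and the response mechanism below are invented for illustration; they are not from the seminar:

```python
# A large sample does not fix a biased sampling mechanism. Here we
# estimate a mean "income" (arbitrary units) from a survey that
# higher-income people are more likely to answer. Toy model only.
import random

random.seed(1)
population = [random.lognormvariate(0, 1) for _ in range(100_000)]

# Probability of responding grows with income -> non-representative sample.
biased_sample = [x for x in population if random.random() < min(1.0, 0.2 * x)]

true_mean = sum(population) / len(population)
biased_mean = sum(biased_sample) / len(biased_sample)
print(len(biased_sample), round(true_mean, 2), round(biased_mean, 2))
```

The biased sample contains tens of thousands of observations, yet its mean systematically overshoots the population mean.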
Note: representativeness is NOT directly linked to sample size.

Surrogation
- Many things cannot be observed directly; instead, a surrogate is measured.
- Surrogation: mistaking the surrogate for the original goal.
- A problem when the surrogate is "improved", but not the original goal!
- Example: improving vaccination rates in children.
  - Surrogate: vaccination rates in children who visit child care providers.
  - Intervention: campaigns to vaccinate children who visit child care.
  - The observed effect is a surrogate effect, not representative of the original goal.

Surrogacy
[Diagram: the intervention acts on the surrogate rather than on the true endpoint.]

Berkson's paradox
- My friends are either very smart or really nice, but rarely both, so very smart people are typically not that nice?
- Students who take a second-chance exam in statistics often don't have to retake math, while those who retake math typically already got a passing grade for statistics, so it turns out the scores are negatively correlated?
- Hospitalized patients who wore a helmet had greater injury severity, so helmets are associated with greater injury severity?

[Three figure slides illustrating Berkson's paradox.]

Simpson's paradox
Famous example: a study on kidney stones.
- Overall: Treatment A 273/350 (78%), Treatment B 289/350 (83%); difference A - B: [-10%; 1%].
Yet treatment A may be better! Why?
- Small stones: Treatment A 81/87 (93%), Treatment B 234/270 (87%); difference A - B: [0%; 13%].
- Large stones: Treatment A 192/263 (73%), Treatment B 55/80 (69%); difference A - B: [-7%; 16%].

Blyth's paradox
If treatment A is more successful than treatment B, and A is more successful than C, is A the most successful among all three?
Population: adult men (41%), adult women (44%), children (15%).
- Choosing between A and B: men choose A, women choose B, children choose A. Result: A 56%, B 44%.
- Choosing between A and C: men choose C, women choose A, children choose A. Result: A 59%, C 41%.
- Choosing among all three: men choose C, women choose B, children choose A. Result: A 15%, B 44%, C 41%.

Example: plane hits during World War II
[Figure slides.]

Survivorship bias
- Also known as "immortal time bias": subjects were "immortal" until inclusion in the study (otherwise they would never have been included).
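Berkson's paradox can be reproduced with a short simulation. A sketch of the "smart or nice friends" example above, with an assumed toy model in which the two traits are independent in the full population:

```python
# Berkson's paradox: "smart" and "nice" are independent in the
# population, but if we only befriend people whose combined score is
# high, the two traits become negatively correlated among our friends.
import random

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
smart = [random.gauss(0, 1) for _ in range(50_000)]
nice = [random.gauss(0, 1) for _ in range(50_000)]

print(correlation(smart, nice))  # near zero: independent in the population

# Selection: keep only people with smart + nice above a threshold.
friends = [(s, n) for s, n in zip(smart, nice) if s + n > 1.5]
fs, fn = zip(*friends)
print(correlation(list(fs), list(fn)))  # clearly negative
```

Conditioning on the selection variable (being a friend, being hospitalized, retaking an exam) induces a correlation that does not exist in the population.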
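The kidney-stone example of Simpson's paradox can be verified directly from the counts on the slide (the confidence intervals are omitted here):

```python
# Simpson's paradox with the kidney-stone counts from the slide:
# treatment B looks better overall, but A is better within each
# stone-size stratum, because A was given mostly to the harder
# (large-stone) cases.
data = {
    "small": {"A": (81, 87), "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

for treatment in ("A", "B"):
    succ = sum(data[s][treatment][0] for s in data)
    tot = sum(data[s][treatment][1] for s in data)
    print(treatment, f"overall: {succ}/{tot} = {succ / tot:.0%}")

for stratum in data:
    a_succ, a_tot = data[stratum]["A"]
    b_succ, b_tot = data[stratum]["B"]
    print(stratum, f"A: {a_succ / a_tot:.0%}  B: {b_succ / b_tot:.0%}")
```

The aggregation reverses the direction of the effect because stone size is associated both with the choice of treatment and with the outcome.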
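The Blyth's paradox numbers can also be recomputed. The preference orders below are inferred from the choices shown on the slide; the children's ranking of B versus C is not given and does not affect the result:

```python
# Blyth's paradox with the slide's numbers: treatment A wins both
# pairwise comparisons, yet finishes last when all three are offered.
groups = {"adult men": 0.41, "adult women": 0.44, "children": 0.15}
prefs = {
    "adult men": ["C", "A", "B"],
    "adult women": ["B", "A", "C"],
    "children": ["A", "B", "C"],  # B-vs-C order assumed, irrelevant here
}

def vote(options):
    """Share of the population choosing each option among `options`."""
    tally = {o: 0.0 for o in options}
    for group, weight in groups.items():
        choice = next(o for o in prefs[group] if o in options)
        tally[choice] += weight
    return tally

for opts in (["A", "B"], ["A", "C"], ["A", "B", "C"]):
    tally = vote(opts)
    print(" vs ".join(opts), {o: round(s, 2) for o, s in tally.items()})
```

A beats B (56% vs 44%) and beats C (59% vs 41%), yet gets only 15% when all three options compete, because its support is spread as everyone's second choice.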
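Survivorship bias, in the spirit of the WWII planes example, can be sketched in a few lines. The sections and return probabilities are assumed toy numbers:

```python
# Survivorship bias (toy model): shots hit four plane sections uniformly,
# but engine hits are far more often fatal. Counting hits only on planes
# that RETURN makes the engine look like the safest place to be hit.
import random
from collections import Counter

random.seed(7)
SECTIONS = ["engine", "cockpit", "fuselage", "wings"]
P_RETURN = {"engine": 0.2, "cockpit": 0.4, "fuselage": 0.95, "wings": 0.9}

observed = Counter()
for _ in range(10_000):
    hit = random.choice(SECTIONS)        # true hits are uniform
    if random.random() < P_RETURN[hit]:  # only survivors are observed
        observed[hit] += 1

print(observed)  # fuselage and wings dominate; engine hits look rare
```

The observed hit counts are shaped by who survived, not by where planes are actually hit, so the naive conclusion (reinforce where the holes are) is exactly backwards.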
- In the example: the planes that returned.
- Survivorship bias is all around us, at all times! We can only measure what still exists and is documented!
- Not always a problem: the "survivors" may be representative for the purpose of the study.
- But typically it is a major issue that should be examined.

Survivorship bias – practical issues
- A major source of publication bias: only positive results "survive".
- A cause of hindsight bias: survivor properties become the norm.
  - E.g. a focus on "success stories", while failures go unnoticed or are ignored.
  - The resulting predictions are typically too optimistic.
- Unfair comparisons when one group gets a "head start".
  - E.g.: HIV-positive people live longer than HIV-negative people. What about all the children dying from pneumonia who might have become HIV-positive at a later age? HIV+ people are "immortal until HIV infection".
- Strong link with informative missingness/censoring.

Conclusions
- We are all biased, but that's OK.
- Computers are not necessarily better!
  - AIs copy the human prejudice and bias present in their training data.
  - Black-box algorithms cannot always distinguish between signal and noise.
- My advice: be aware of these biases and try to recognize them
  - in yourself,
  - in others,
  - in your field of research.
- Educate others without judging.

Questions / Comments?

Next seminar is on November 7: "Making noise with results, not with data – Sources of variation". Presenter: Bart Karl Jacobs.
Typically, we draw our conclusions from an estimate that we distil from the data. In most cases, it is equally important to understand how precise we can expect our result to be. In this session, we will discuss the importance of reporting the precision of an estimate and give an introduction to different sources of variation.