Making Crime Analysis More Inferential

Dr Michael Townsley
School of Criminology and Criminal Justice, Griffith University
International Summit on Scientific Criminal Analysis, 21–27 April 2014

Outline

• Defining what analysis is
• Five principles of statistical reasoning
• Three strategies to avoid errors

What Do We Mean by Analysis?

Analysis is not simply descriptive: it must include some component of reasoning, inference or interpretation. Regurgitating numerical values or summarising the situation is not analysis.

We need a system for doing this, comprising:

• appropriate theory
• methods to generate and test hypotheses

Such a system will allow you to generate knowledge about the criminal environment.

Theories for Crime Analysis: Environmental Criminology

crime = motivation + opportunity

• Rational choice
• Routine activity
• Crime pattern theory

Problems

Let's acknowledge the range of factors limiting analysts from doing their work:

• Organisational: tasking; operational imperatives
• Individual: training; highly variable performance; cognitive biases

Humans Find Patterns Anywhere

• Apophenia is the experience of seeing patterns or connections in random or meaningless data.
• Pareidolia is a type of apophenia involving the perception of images or sounds in random stimuli, such as seeing faces in inanimate objects.

[Image slides illustrating pareidolia: seeing faces in objects; "The hungry helicopter eats delicious soldiers"; "These boxes are planning something…"; "Cookie Monster spotted in Aisle 4"]

Principles of Statistics

The field of statistics is decision making under uncertainty. Without being overly simplistic, the entirety of statistics can be distilled into five core principles:

1. rates over counts
2. making comparisons
3. retrospective versus prospective risks
4. sampling (selection) bias
5. Simpson's paradox

Because decisions in operational law enforcement must be made with incomplete data, in imperfect conditions and under significant time pressure, a statistical approach should enable better analysis, or at least help analysts avoid common pitfalls.

Statistical Principle 1: Frequencies Versus Rates

• A rate is a frequency adjusted for the underlying population at risk.
• The denominator is usually a residential population, but it could be, say, the number of properties.

International comparisons

Using data from the UNODC: police-recorded assaults in Chile, the US, Australia and New Zealand, 2004–2010.

[Figure: two panels plotting assault counts and assault rates by year, 2004–2010, for the US, Australia, NZ and Chile. The countries rank differently on counts than on rates.]

Issues

• the practical significance of counts
• what is the population at risk?
• relevance with respect to time (time of day, seasonal patterns)
• most crimes are measured at point level, but calculating a rate requires some level of aggregation (to streets or areas, say)

Aggregation effects (Safe as Houses project)

[Map slides.] What is the population at risk? How relevant are the boundaries here?
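To see why the first principle matters, here is a minimal Python sketch. The figures are invented for illustration (they are not the UNODC data above); only the count-versus-rate logic is the point.

```python
# Counts and populations for two hypothetical areas.
crimes = {"Area A": 900, "Area B": 300}             # recorded assaults
population = {"Area A": 600_000, "Area B": 90_000}  # residents at risk

for area in crimes:
    count = crimes[area]
    rate = count / population[area] * 100_000       # per 100,000 residents
    print(f"{area}: {count} assaults = {rate:.1f} per 100,000")

# Area A has three times the count (900 vs 300), but Area B has more
# than double the rate (333.3 vs 150.0 per 100,000): the ranking flips
# once the population at risk is taken into account.
```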
Statistical Principle 2: Making Comparisons

• Crime figures are meaningless without reference to a comparison area or some baseline crime level.
• A confounding issue is regression to the mean.
• Comparison groups can be misleading due to contextual differences.
• Hot spot maps (with hot and cold areas) do not constitute a valid comparison.
• Comparisons of the causal factor, not just of crime, need to be included.

[Figure: some connection between street network and drug arrests. Source: Eck (1997)]

[Figure: strong relationship between poor place management and drug arrests. Source: Eck (1997)]

Statistical Principle 3: Retrospective Versus Prospective Risks

• Risk factors can be computed through two different types of study: retrospective and prospective.
• Retrospective studies take a group experiencing an outcome and examine their past.
• Prospective studies follow a population, examining their lifestyle and whether the outcome occurs.
• Many risk factors are computed using a retrospective study, but expressed in prospective terms.

Sexual assaults on public transport

A study claims that 80% of sexual assaults take place on the public transport system. The inference drawn is that there is a high chance of victimisation if you use public transport. Two distinct claims are in play:

• if victimised, there is a high chance of having used public transport (what the study says); and
• if using public transport, there is a high chance of being victimised (what gets communicated).

Each of these statements is a conditional probability (the proportion of an event within a subsample), but the subsamples and events have been swapped.

Let's look at data to make this concrete:

                       Victimised   Not victimised    Total
Public transport               80           10,000   10,080
Not public transport           20           11,000   11,020
Total                         100           21,000   21,100

From the same table: if victimised (N = 100), 80% used public transport (the retrospective reading); if using public transport (N = 10,080), fewer than 1% were victimised (the prospective reading).

To make a retrospective comparison (conditioning on the outcome):

• if victimised (N = 100), 80% used public transport;
• if not victimised (N = 21,000), 48% used public transport.

To make a prospective comparison (conditioning on the exposure):

• if using public transport (N = 10,080), almost 0.8% were victimised;
• if not using public transport (N = 11,020), 0.2% were victimised.

Retrospective vs Prospective

1. Crime analysts will virtually always be working with retrospective data, so this problem will come up.
2. Make sure valid comparisons are made: compare conditional probabilities appropriately.
3. Retrospective proportions overstate the size of the risk factor.
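This swap is easy to make concrete in code. A minimal Python sketch of the calculations from the table above (variable names are my own):

```python
# 2x2 table from the text: public transport use vs victimisation.
victimised     = {"public_transport": 80, "other": 20}
not_victimised = {"public_transport": 10_000, "other": 11_000}

n_victimised     = sum(victimised.values())      # 100
n_not_victimised = sum(not_victimised.values())  # 21,000
n_public     = victimised["public_transport"] + not_victimised["public_transport"]  # 10,080
n_not_public = victimised["other"] + not_victimised["other"]                        # 11,020

# Retrospective comparison: condition on the outcome (victimised or not).
print(f"P(public transport | victimised)     = {victimised['public_transport'] / n_victimised:.1%}")         # 80.0%
print(f"P(public transport | not victimised) = {not_victimised['public_transport'] / n_not_victimised:.1%}") # 47.6%

# Prospective comparison: condition on the exposure (public transport or not).
print(f"P(victimised | public transport)     = {victimised['public_transport'] / n_public:.2%}")  # 0.79%
print(f"P(victimised | not public transport) = {victimised['other'] / n_not_public:.2%}")         # 0.18%
```

The same cell (80 people victimised on public transport) yields 80% or 0.79% depending on which subsample it is divided by; quoting the retrospective figure as if it were prospective dramatically overstates the risk.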
Statistical Principle 4: Selection Bias

• Selection bias occurs naturally whenever secondary data analysis is conducted.
• Consider the differences between victimisation surveys and official statistics: various filters operate on which offences are reported to police and which of those get recorded.
• Survivorship bias is a related concern.

Spatial Selection Bias

Ratcliffe (2001) catalogues the ways that geocoding can go wrong:

• out-of-date property parcel maps
• abbreviations and misspellings
• local name variations
• address duplication
• non-existent addresses
• non-addresses

Bichler and Balchak (2007) found distinctive systematic biases in the geocoding errors of the major GIS applications, and Ratcliffe (2001) found that between 5% and 7% of records were geocoded to incorrect census tracts.

Statistical Principle 5: Simpson's Paradox

Table: Aggregate crime rates (per 1,000) for Areas 1 and 2

         Area 1   Area 2
Total      7.75     7.20

Area 2 is safer than Area 1 in aggregate, so it would be worth considering what examples of best practice might be transferred to Area 1. To do so, we look at the individual crime types.

Table: Crime rates by crime type

                       Area 1                  Area 2
Crime Type        Freq   Denom.   Rate    Freq   Denom.   Rate
Assault            256   41,250   6.21     430   54,000   7.96
Comm. burglary     178    2,800   6.36      30      350   8.57
Car theft           69   20,850   3.31      66   18,750   3.52
Total              503   64,900   7.75     526   73,100   7.20

Area 2 has a higher crime rate than Area 1 for every crime type, despite having the lower aggregate rate.

Explaining Simpson's Paradox

• The paradox operates when patterns of rates (or proportions) calculated for an entire sample are not consistent with the patterns for subgroups of the data.
• It is a consequence of changing denominators in crime rates, and of relying only on proportions or rates as indicators of activity.
• It is usually a sign of a lurking variable.
• Note: this directly contradicts the first principle listed here — rates are preferable to counts, but rates alone can still mislead.
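The reversal can be reproduced directly from the table. A minimal Python sketch (frequencies and denominators as printed above; computed per 1,000 the commercial burglary rates come out as 63.6 and 85.7, suggesting the slide prints that row per 100 premises — the within-type comparison is unaffected either way):

```python
# Simpson's paradox: Area 2's rate is higher within every crime type,
# yet lower in aggregate. Frequencies and denominators from the table.
data = {
    "Assault":        {"Area 1": (256, 41_250), "Area 2": (430, 54_000)},
    "Comm. burglary": {"Area 1": (178, 2_800),  "Area 2": (30, 350)},
    "Car theft":      {"Area 1": (69, 20_850),  "Area 2": (66, 18_750)},
}

def rate(freq, denom):
    """Crime rate per 1,000 units at risk."""
    return freq / denom * 1_000

for crime, areas in data.items():
    r1, r2 = rate(*areas["Area 1"]), rate(*areas["Area 2"])
    print(f"{crime}: Area 1 = {r1:.2f}, Area 2 = {r2:.2f} (Area 2 higher: {r2 > r1})")

# Aggregate: pool frequencies and denominators before dividing.
for area in ("Area 1", "Area 2"):
    freq  = sum(areas[area][0] for areas in data.values())
    denom = sum(areas[area][1] for areas in data.values())
    print(f"{area} overall: {rate(freq, denom):.2f} per 1,000")

# Area 2 wins every within-type comparison but loses the aggregate one
# (7.20 vs 7.75) because each area's crime mix weights the denominators
# differently — the lurking variable is the composition of places at risk.
```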
Spatial Principles

Spatial data and analyses have a number of unique attributes that need to be controlled for:

• the modifiable areal unit problem – results change when point-level information is aggregated to arbitrary administrative boundaries
• spatial autocorrelation – things that are close together are more similar than things that are far apart

Three strategies to avoid errors

1. Be more scientific. (Come to my next talk!)
2. Employ more sophisticated methods: upskill analysts and collaborate with researchers.
3. Be more focused and use crime theories: read the Eck (1997) chapter, "What Do Those Dots Mean?"

Bibliography

Bichler, G. and Balchak, S. (2007). Address Matching Bias: Ignorance Is Not Bliss. Policing: An International Journal of Police Strategies & Management, 30(1):32–60.

Eck, J. E. (1997). What Do Those Dots Mean? Mapping Theories with Data. In Weisburd, D. L. and McEwen, T., editors, Crime Mapping and Crime Prevention, volume 8 of Crime Prevention Studies, pages 377–406. Criminal Justice Press, Monsey, NY.

Ratcliffe, J. H. (2001). On the Accuracy of TIGER-Type Geocoded Address Data in Relation to Cadastral and Census Areal Units. International Journal of Geographical Information Science, 15(5):473–485.