Reassessing Quasi-Experiments: Policy Evaluation, Induction, and SUTVA
Recommended publications
-
1.2. INDUCTIVE REASONING Much Mathematical Discovery Starts with Inductive Reasoning – the Process of Reaching General Conclusions
1.2. INDUCTIVE REASONING Much mathematical discovery starts with inductive reasoning – the process of reaching general conclusions, called conjectures, through the examination of particular cases and the recognition of patterns. These conjectures are then more formally proved using deductive methods, which will be discussed in the next section. Below we look at three examples that use inductive reasoning.

Number Patterns

Suppose you had to predict the sixth number in the following sequence: 1, -3, 6, -10, 15, ? How would you proceed with such a question? The trick, it seems, is to discern a specific pattern in the given sequence of numbers. This kind of approach is a classic example of inductive reasoning. By identifying a rule that generates all five numbers in this sequence, your hope is to establish a pattern and be in a position to predict the sixth number with confidence. So can you do it? Try to work out this number before continuing.

One fact is immediately clear: the numbers alternate sign from positive to negative. We thus expect the answer to be a negative number, since it follows 15, a positive number. On closer inspection, we also realize that the difference between the magnitudes (or absolute values) of successive numbers increases by 1 each time:

3 – 1 = 2
6 – 3 = 3
10 – 6 = 4
15 – 10 = 5
…

We have then found the rule we sought for generating the next number in this sequence. The sixth number should be -21, since the difference between 15 and 21 is 6 and we had already determined that the answer should be negative.
-
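The conjectured rule (alternating signs, absolute differences growing by 1) can be sketched in a few lines of Python; the function name and interface are illustrative, not from the text:

```python
def next_term(seq):
    """Predict the next term of a sequence like 1, -3, 6, -10, 15, ...
    using the conjectured rule: successive absolute values grow by an
    increment that rises by 1 each step, and the sign alternates."""
    step = abs(seq[-1]) - abs(seq[-2])     # increment between last two magnitudes
    magnitude = abs(seq[-1]) + step + 1    # next increment is one larger
    sign = -1 if seq[-1] > 0 else 1        # signs alternate
    return sign * magnitude

print(next_term([1, -3, 6, -10, 15]))  # -21
```

As with any inductive conjecture, the code only extrapolates a pattern observed in the given cases; it proves nothing about the "true" rule behind the sequence.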
There Is No Pure Empirical Reasoning
There Is No Pure Empirical Reasoning

1. Empiricism and the Question of Empirical Reasons

Empiricism may be defined as the view that there is no a priori justification for any synthetic claim. Critics object that empiricism cannot account for all the kinds of knowledge we seem to possess, such as moral knowledge, metaphysical knowledge, mathematical knowledge, and modal knowledge.1 In some cases, empiricists try to account for these types of knowledge; in other cases, they shrug off the objections, happily concluding, for example, that there is no moral knowledge, or that there is no metaphysical knowledge.2 But empiricism cannot shrug off just any type of knowledge; to be minimally plausible, empiricism must, for example, at least be able to account for paradigm instances of empirical knowledge, including especially scientific knowledge. Empirical knowledge can be divided into three categories: (a) knowledge by direct observation; (b) knowledge that is deductively inferred from observations; and (c) knowledge that is non-deductively inferred from observations, including knowledge arrived at by induction and inference to the best explanation. Category (c) includes all scientific knowledge. This category is of particular import to empiricists, many of whom take scientific knowledge as a sort of paradigm for knowledge in general; indeed, this forms a central source of motivation for empiricism.3 Thus, if there is any kind of knowledge that empiricists need to be able to account for, it is knowledge of type (c). I use the term "empirical reasoning" to refer to the reasoning involved in acquiring this type of knowledge – that is, to any instance of reasoning in which (i) the premises are justified directly by observation, (ii) the reasoning is non-deductive, and (iii) the reasoning provides adequate justification for the conclusion.
-
A Philosophical Treatise on the Connection of Scientific Reasoning with Fuzzy Logic
mathematics Review A Philosophical Treatise on the Connection of Scientific Reasoning with Fuzzy Logic Evangelos Athanassopoulos 1 and Michael Gr. Voskoglou 2,* 1 Independent Researcher, Giannakopoulou 39, 27300 Gastouni, Greece; [email protected] 2 Department of Applied Mathematics, Graduate Technological Educational Institute of Western Greece, 22334 Patras, Greece * Correspondence: [email protected] Received: 4 May 2020; Accepted: 19 May 2020; Published: 1 June 2020 Abstract: The present article studies the connection of scientific reasoning with fuzzy logic. Induction and deduction are the two main types of human reasoning. Although deduction is the basis of the scientific method, almost all scientific progress (with pure mathematics being probably the unique exception) has its roots in inductive reasoning. Fuzzy logic restores induction, disdained by classical/bivalent logic, to its proper place and importance as a fundamental component of scientific reasoning. The error of induction is transferred to deductive reasoning through its premises. Consequently, although deduction is always a valid process, it is not an infallible method. Thus, there is a need to quantify the degree of truth not only of inductive but also of deductive arguments. In the former case, probability and statistics, and in cases of imprecision fuzzy logic, are the tools available for this purpose. In the latter case, Bayesian probabilities play a dominant role. As many specialists argue nowadays, the whole of science could be viewed as a Bayesian process. A timely example, concerning the validity of virus tests, is presented, illustrating the importance of Bayesian processes for scientific reasoning.
-
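The Bayesian point about test validity can be illustrated with a short sketch; the sensitivity, specificity, and prevalence figures below are hypothetical, not taken from the article:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A fairly accurate test (95% sensitivity and specificity) applied to a
# rare condition (1% prevalence): most positives are still false positives.
ppv = positive_predictive_value(0.95, 0.95, 0.01)
print(round(ppv, 3))  # 0.161
```

This is the sense in which the degree of truth of a deductively framed conclusion ("the test is positive, so the patient is infected") must itself be quantified by a Bayesian posterior.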
How Large Are the Social Returns to Education? Evidence from Compulsory Schooling Laws
NBER WORKING PAPER SERIES HOW LARGE ARE THE SOCIAL RETURNS TO EDUCATION? EVIDENCE FROM COMPULSORY SCHOOLING LAWS Daron Acemoglu Joshua Angrist Working Paper 7444 http://www.nber.org/papers/w7444 NATIONAL BUREAU OF ECONOMIC RESEARCH 1050 Massachusetts Avenue Cambridge, MA 02138 December 1999 We thank Chris Mazingo and Xuanhui Ng for excellent research assistance. Paul Beaudry, Bill Evans, Bob Hall, Larry Katz, Enrico Moretti, Jim Poterba, Robert Shimer, and seminar participants at the 1999 NBER Summer Institute, University College London, the University of Maryland, and the University of Toronto provided helpful comments. Special thanks to Stefanie Schmidt for helpful advice on compulsory schooling data. The views expressed herein are those of the authors and not necessarily those of the National Bureau of Economic Research. © 1999 by Daron Acemoglu and Joshua Angrist. All rights reserved. Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source. How Large are the Social Returns to Education? Evidence from Compulsory Schooling Laws Daron Acemoglu and Joshua Angrist NBER Working Paper No. 7444 December 1999 JEL No. I20, J31, J24, D62, O15 ABSTRACT Average schooling in US states is highly correlated with state wage levels, even after controlling for the direct effect of schooling on individual wages. We use an instrumental variables strategy to determine whether this relationship is driven by social returns to education. The instruments for average schooling are derived from information on the child labor laws and compulsory attendance laws that affected men in our Census samples, while quarter of birth is used as an instrument for individual schooling.
-
Instrumental Variables Methods in Experimental Criminological Research: What, Why and How
Journal of Experimental Criminology (2006) 2: 23–44 # Springer 2006 DOI: 10.1007/s11292-005-5126-x Research article Instrumental variables methods in experimental criminological research: what, why and how JOSHUA D. ANGRIST* MIT Department of Economics, 50 Memorial Drive, Cambridge, MA 02142-1347, USA NBER, USA: E-mail: [email protected] Abstract. Quantitative criminology focuses on straightforward causal questions that are ideally addressed with randomized experiments. In practice, however, traditional randomized trials are difficult to implement in the untidy world of criminal justice. Even when randomized trials are implemented, not everyone is treated as intended and some control subjects may obtain experimental services. Treatments may also be more complicated than a simple yes/no coding can capture. This paper argues that the instrumental variables methods (IV) used by economists to solve omitted variables bias problems in observational studies also solve the major statistical problems that arise in imperfect criminological experiments. In general, IV methods estimate causal effects on subjects who comply with a randomly assigned treatment. The use of IV in criminology is illustrated through a re-analysis of the Minneapolis domestic violence experiment. The results point to substantial selection bias in estimates using treatment delivered as the causal variable, and IV estimation generates deterrent effects of arrest that are about one-third larger than the corresponding intention-to-treat effects. Key words: causal effects, domestic violence, local average treatment effects, non-compliance, two-stage least squares

Background

I'm not a criminologist, but I've long admired criminology from afar. As an applied economist who puts the task of convincingly answering causal questions at the top of my agenda, I've been impressed with the no-nonsense outcome-oriented approach taken by many quantitative criminologists.
-
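The compliance-adjusted logic the abstract describes (the intention-to-treat effect scaled up by the first stage) can be sketched on simulated data. This is an illustrative toy model with invented numbers, not the Minneapolis data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.integers(0, 2, n)            # randomized assignment (the instrument)
risk = rng.normal(size=n)            # unobserved confounder
# Treatment take-up depends on assignment AND the confounder: imperfect compliance.
d = (0.8 * z + 0.3 * risk + rng.normal(size=n) > 0.5).astype(float)
y = 2.0 * d + 1.5 * risk + rng.normal(size=n)   # true causal effect of d is 2.0

# Naive "as-treated" contrast is contaminated by selection on the confounder.
naive = y[d == 1].mean() - y[d == 0].mean()

# IV (Wald) estimate: intention-to-treat effect divided by the first stage.
itt = y[z == 1].mean() - y[z == 0].mean()
first_stage = d[z == 1].mean() - d[z == 0].mean()
iv = itt / first_stage
```

Because assignment is independent of the confounder, the ITT contrast is clean but diluted by non-compliance; dividing by the first stage recovers the effect on compliers, while the naive as-treated contrast is biased upward.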
MIT Pre-Doctoral Research Fellow Professors Joshua Angrist and Parag Pathak
MIT Pre-Doctoral Research Fellow Professors Joshua Angrist and Parag Pathak

Position Overview

We are seeking a motivated, independent, and organized Pre-Doctoral Research Fellow to support efforts to evaluate and improve education programs and policies in the U.S. Research Fellows receive a two-year full-time appointment with the School Effectiveness and Inequality Initiative (SEII), a research lab based at the MIT Department of Economics and the National Bureau of Economic Research. SEII's current research projects involve studies of the impact of education policies and programs in states like Massachusetts and cities such as Boston, Chicago, New York City, Indianapolis, and Denver.

Principal Duties and Responsibilities

Fellows will work closely with SEII Directors Joshua Angrist and Parag Pathak, as well as our collaborators at universities across the country, including Harvard University. Specific responsibilities include:

- constructing data sets and preparing data for analysis
- conducting analysis in Stata, R, and Matlab to answer research questions
- presenting results and engaging in discussion in weekly team meetings
- editing papers for publication

The fellowship will be a full-time position located in Cambridge, Massachusetts. An employment term of two years is expected. This position is intended to act as a pathway to graduate school for candidates who plan to apply to an Economics or related Ph.D. program in the future. Previous fellows have gone to top-tier Economics Ph.D. programs, such as UC-Berkeley, MIT, and Stanford. Start date is flexible, with a strong preference for candidates who can begin on or before June 1, 2020.
-
Argument, Structure, and Credibility in Public Health Writing Donald Halstead Instructor and Director of Writing Programs Harvard TH Chan School of Public Health
Argument, Structure, and Credibility in Public Health Writing Donald Halstead Instructor and Director of Writing Programs Harvard TH Chan School of Public Health

Some of the most important questions we face in public health include what policies we should follow, which programs and research we should fund, how and where we should intervene, and what our priorities should be in the face of overwhelming needs and scarce resources. These questions, like many others, are best decided on the basis of arguments, a word that has its roots in the Latin arguere, to make clear. Yet arguments themselves vary greatly in terms of their strength, accuracy, and validity. Furthermore, public health experts often disagree on matters of research, policy and practice, citing conflicting evidence and arriving at conflicting conclusions. As a result, critical readers, such as researchers, policymakers, journal editors and reviewers, approach arguments with considerable skepticism. After all, they are not going to change their programs, priorities, practices, research agendas or budgets without very solid evidence that it is necessary, feasible, and beneficial. This raises an important challenge for public health writers: How can you best make your case, in the face of so much conflicting evidence? To illustrate, let's assume that you've been researching mother-to-child transmission (MTCT) of HIV in a sub-Saharan African country and have concluded that (claim) the country's maternal programs for HIV counseling and infant nutrition should be integrated because (reasons) this would be more efficient in decreasing MTCT, improving child nutrition, and using scant resources efficiently. The evidence to back up your claim might consist of original research you have conducted that included program assessments and interviews with health workers in the field, your assessment of the other relevant research, the experiences of programs in other countries, and new WHO guidelines.
-
Can Successful Schools Replicate? Scaling up Boston's Charter School Sector
NBER WORKING PAPER SERIES CAN SUCCESSFUL SCHOOLS REPLICATE? SCALING UP BOSTON’S CHARTER SCHOOL SECTOR Sarah Cohodes Elizabeth Setren Christopher R. Walters Working Paper 25796 http://www.nber.org/papers/w25796 NATIONAL BUREAU OF ECONOMIC RESEARCH 1050 Massachusetts Avenue Cambridge, MA 02138 May 2019 Special thanks go to Carrie Conaway, Cliff Chuang, the staff of the Massachusetts Department of Elementary and Secondary Education, and Boston’s charter schools for data and assistance. We also thank Josh Angrist, Bob Gibbons, Caroline Hoxby, Parag Pathak, Derek Neal, Eric Taylor and seminar participants at the NBER Education Program Meetings, Columbia Teachers College Economics of Education workshop, the Association for Education Finance and Policy Conference, the Society for Research on Educational Effectiveness Conference, Harvard Graduate School of Education, Federal Reserve Bank of New York, MIT Organizational Economics Lunch, MIT Labor Lunch, and University of Michigan Causal Inference for Education Research Seminar for helpful comments. We are grateful to the school leaders who shared their experiences expanding their charter networks: Shane Dunn, Jon Clark, Will Austin, Anna Hall, and Dana Lehman. Setren was supported by a National Science Foundation Graduate Research Fellowship. The Massachusetts Department of Elementary and Secondary Education had the right to review this paper prior to circulation in order to determine no individual’s data was disclosed. The authors obtained Institutional Review Board (IRB) approvals for this project from NBER and Teachers College Columbia University. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research. NBER working papers are circulated for discussion and comment purposes. -
Field Experiments in Development Economics, Esther Duflo, Massachusetts Institute of Technology
Field Experiments in Development Economics1 Esther Duflo Massachusetts Institute of Technology (Department of Economics and Abdul Latif Jameel Poverty Action Lab) BREAD, CEPR, NBER January 2006 Prepared for the World Congress of the Econometric Society Abstract There is a long tradition in development economics of collecting original data to test specific hypotheses. Over the last 10 years, this tradition has merged with an expertise in setting up randomized field experiments, resulting in an increasingly large number of studies where an original experiment has been set up to test economic theories and hypotheses. This paper extracts some substantive and methodological lessons from such studies in three domains: incentives, social learning, and time-inconsistent preferences. The paper argues that we need both to continue testing existing theories and to start thinking of how the theories may be adapted to make sense of the field experiment results, many of which are starting to challenge them. This new framework could then guide a new round of experiments. 1 I would like to thank Richard Blundell, Joshua Angrist, Orazio Attanasio, Abhijit Banerjee, Tim Besley, Michael Kremer, Sendhil Mullainathan and Rohini Pande for comments on this paper and/or having been instrumental in shaping my views on these issues. I thank Neel Mukherjee and Kudzai Takavarasha for carefully reading and editing a previous draft. 1 There is a long tradition in development economics of collecting original data in order to test a specific economic hypothesis or to study a particular setting or institution. This is perhaps due to a conjunction of the lack of readily available high-quality, large-scale data sets commonly available in industrialized countries and the low cost of data collection in developing countries, though development economists also like to think that it has something to do with the mindset of many of them. -
UvA-DARE (Digital Academic Repository)
UvA-DARE (Digital Academic Repository)
Evaluating the effect of tax deductions on training
Oosterbeek, H.; Leuven, E.
DOI: 10.1086/381257
Publication date: 2004
Published in: Journal of labor economics
Citation for published version (APA): Oosterbeek, H., & Leuven, E. (2004). Evaluating the effect of tax deductions on training. Journal of labor economics, 22, 461-488. https://doi.org/10.1086/381257

Evaluating the Effect of Tax Deductions on Training
Edwin Leuven, University of Amsterdam and Tinbergen Institute
Hessel Oosterbeek, University of Amsterdam and Tinbergen Institute

Dutch employers can claim an extra tax deduction when they train employees older than age 40.
-
Psychology 205, Revelle, Fall 2014 Research Methods in Psychology Mid-Term
Psychology 205, Revelle, Fall 2014 Research Methods in Psychology Mid-Term Name: ________________________________

1. (2 points) What is the primary advantage of using the median instead of the mean as a measure of central tendency? It is less affected by outliers.

2. (2 points) Why is counterbalancing important in a within-subjects experiment? Ensuring that conditions are independent of order and of each other. This allows us to determine the effect of each variable independently of the other variables. If conditions are related to order or to each other, we are unable to determine which variable is having an effect. Short answer: order effects.

3. (6 points) Define reliability and compare it to validity. Give an example of when a measure could be valid but not reliable. 2 points: Reliability is the consistency or dependability of a measurement technique. ["Getting the same result" was not accepted; it was too vague in that it did not specify the conditions (e.g., the same phenomenon) in which the same result was achieved.] 2 points: Validity is the extent to which a measurement procedure actually measures what it is intended to measure. 2 points: Example (from class) is a weight scale that gives a different result every time the same person stands on it repeatedly. Another example: a scale that actually measures hunger but has poor test-retest reliability. [Other examples were accepted.]

4. (4 points) A consumer research company wants to compare the "coverage" of two competing cell phone networks throughout Illinois. To do so fairly, they have decided that they will only compare survey data taken from customers who are all using the same cell phone model - one that is functional on both networks and has been newly released in the last 3 months.
-
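The answer to the first question can be seen directly with Python's statistics module; the data values are made up for illustration:

```python
from statistics import mean, median

values = [1, 2, 3, 4, 100]  # one extreme outlier
print(mean(values))    # 22 -- dragged far from the typical value by the outlier
print(median(values))  # 3  -- unaffected by how extreme the outlier is
```

Moving the outlier from 100 to 1,000,000 changes the mean drastically but leaves the median at 3, which is exactly the robustness the answer key refers to.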
The Problem of Induction
The Problem of Induction Gilbert Harman Department of Philosophy, Princeton University Sanjeev R. Kulkarni Department of Electrical Engineering, Princeton University July 19, 2005

The Problem

The problem of induction is sometimes motivated via a comparison between rules of induction and rules of deduction. Valid deductive rules are necessarily truth preserving, while inductive rules are not. So, for example, one valid deductive rule might be this:

(D) From premises of the form "All F are G" and "a is F," the corresponding conclusion of the form "a is G" follows.

The rule (D) is illustrated in the following depressing argument:

(DA) All people are mortal. I am a person. So, I am mortal.

The rule here is "valid" in the sense that there is no possible way in which premises satisfying the rule can be true without the corresponding conclusion also being true. A possible inductive rule might be this:

(I) From premises of the form "Many many Fs are known to be G," "There are no known cases of Fs that are not G," and "a is F," the corresponding conclusion of the form "a is G" can be inferred.

The rule (I) might be illustrated in the following "inductive argument":

(IA) Many many people are known to have been mortal. There are no known cases of people who are not mortal. I am a person. So, I am mortal.

The rule (I) is not valid in the way that the deductive rule (D) is valid. The "premises" of the inductive inference (IA) could be true even though its "conclusion" is not true.
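Rule (D) is just universal instantiation followed by modus ponens, so the depressing argument can be checked mechanically in a proof assistant such as Lean (the variable names here are illustrative):

```lean
-- Rule (D): from "All F are G" and "a is F", conclude "a is G".
example {α : Type} (F G : α → Prop)
    (all_FG : ∀ x, F x → G x) (a : α) (ha : F a) : G a :=
  all_FG a ha

-- No analogous proof exists for rule (I): its premises can all be true
-- while its conclusion is false, which is exactly the sense in which
-- inductive rules fail to be truth preserving.
```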