EVALUATION OF EXPERIMENTAL DATA 2 (THE CURSE OF ERROR ANALYSIS)


All experimental measurements are subject to some uncertainty, or error. At the most fundamental level, the uncertainty principle of physics tells us there are some things we can never know exactly. But most measurements we are likely to make will be limited by ourselves or by our apparatus, and it is useful to learn to deal with these inherent shortcomings in our experiments.

Accuracy is a measure of the difference between our measurement and the true value of the quantity we are measuring. High accuracy is what is ultimately desired. In order to obtain it, we often make multiple measurements of the same quantity, and precision is a measure of the spread of these measurements. The meanings of these two terms, accuracy and precision, are thus quite different. A set of measurements may have high precision but low accuracy. So a student might make three determinations of the chloride content of an unknown and get 43.44%, 43.51%, and 43.47%. These results are fairly precise, but the true value might be 41.70%, and the determination would not be very accurate. (This is the most common case. Just because a measurement is reproducible does not mean it is right!) Note that these terms are used here in a qualitative, descriptive sense.

ACCURACY AND SYSTEMATIC ERRORS

How can we evaluate the accuracy of our result? The true result is normally not known; otherwise we would not be making the measurements in the first place. (Present lab excluded, of course.) The error in our measurement is thus unknown. We must make estimates of both the accuracy and the precision of the various parts of our measurement, hoping that we have included everything, and then we combine these estimates to see how the final answer is affected.

In our discussion of errors, it is useful to distinguish between systematic, personal, and random errors, because they have different sources and affect the measurements differently. All three kinds of errors affect the accuracy of the measurement, but in different ways and for different reasons.

Personal errors frequently plague beginners. This is a kind way of saying that the procedure wasn't carried out properly. Examples of such errors are:

- Weighing a sample which isn't dry.
- Not thoroughly mixing a solution.
- Using the wrong chemical.
- Measuring voltage with a meter set to "resistance".

Now imagine that the person carrying out the experiment is a tireless automaton capable of repeating the experiment flawlessly many times. Such an automaton is incapable of making a personal error. Nevertheless, the results will differ somewhat from run to run because of the inherent limitations of the equipment and procedures employed. These run-to-run differences are examples of random errors, which ultimately limit the precision of the results. As we shall see, the scatter they produce can be reduced by repeating the experiment many times and averaging the results. But even after improving precision by averaging repeated measurements, there is no guarantee that the final answer will be correct, even in the limit of an infinite number of repetitions giving perfect precision.
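To make the precision/accuracy distinction concrete, the short sketch below (a minimal illustration, not part of the original chapter; the variable names and the use of Python's statistics module are our own choices) computes the spread and the bias of the three chloride determinations quoted above:

```python
from statistics import mean, stdev

# Three replicate determinations of chloride content (percent),
# from the example in the text.
determinations = [43.44, 43.51, 43.47]
true_value = 41.70  # the (normally unknown) true chloride content

avg = mean(determinations)      # best estimate from the replicates
spread = stdev(determinations)  # sample standard deviation: precision
bias = avg - true_value         # difference from truth: (in)accuracy

print(f"mean = {avg:.2f}%, spread = {spread:.3f}%, bias = {bias:+.2f}%")
# mean = 43.47%, spread = 0.035%, bias = +1.77%
```

The spread is tiny (high precision) while the bias is large (low accuracy): reproducible does not mean right.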
The remaining error is called systematic error; it can have more than one source, so we speak of systematic errors. Such errors are important to consider, but there are no mathematical formulae to predict them, and as a consequence their discovery can be inordinately difficult. They are inherent in the system and can often be traced to some fundamental flaw in the experimental apparatus. Systematic errors do not average out over multiple observations: they always influence the measurements in the same direction. A given systematic error might make the result low, for instance, and repeated measurements would then always yield a low result. If we are lucky enough to discover the source of a systematic error, we can usually eliminate it, and sometimes we can completely correct old measurements for it, provided that we have recorded the necessary data. The kinds of systematic error vary enormously with the experiment under consideration; we list a few below:

- Miscalibration of the apparatus. Weights, scales, and volumetric equipment all need to be calibrated.
- Procedural errors. These can be very subtle, such as using a platinum resistance thermometer to measure temperature and finding, after a lot of detective work, that the wire in the thermometer was not platinum but probably something like osmium!
- Inappropriate theoretical analysis. Experimental results are usually interpreted using well-established theory, but sometimes the theory is extended to regions where it does not apply. For example, the perfect-gas law is often used to describe gases, and this normally works quite well; but if a gas were undergoing either liquefaction or reaction, PV would not equal nRT, and deviations from the "ideal" law would have to be considered. As another example, Stokes' law describes the fall of a sphere in a viscous medium, but Millikan was forced to apply corrections to this law when he measured the charge on the electron in the famous oil-drop experiment.

The dividing line between the three types of errors is fuzzy. Thus we classify reading a buret to 0.1 rather than 0.01 ml as a personal error rather than a random one, even though the effect might appear random, because we know that the buret can be read more accurately. Some of the examples of systematic errors given above could be thought of as personal errors, because the person devising the procedure should have known better.

We can find out about random errors by examining the precision of several workers. We can avoid personal errors by being extremely careful in following the procedure. But how can we identify and eliminate the systematic errors in our experiments? This is not so easy. The best approach is usually to make the measurement using a completely different method. For example, we could use a volumetric method of chloride analysis to check the results of a gravimetric analysis. But often we are called upon to assess the accuracy of our results without recourse to alternative experiments. About the best we can do is to consider how a miscalibration or mistake might affect each step of our measurement, and then evaluate how that affects the final result. To do this we must consider how errors are propagated in experiments.

PROPAGATION OF SYSTEMATIC ERRORS

Suppose that we measure a quantity X, but there is a systematic error δX in the measurement. Remember that δX will not average out, and that it has a definite sign (which we can guess when estimating errors).
We now ask how this error appears in the final result R.

a) Addition and subtraction. If two measurements are to be added, we have

    R = (X1 + δX1) + (X2 + δX2) = (X1 + X2) + (δX1 + δX2)    (1)

so

    δR = δX1 + δX2    (2)

and the error in R is just the sum (or difference) of the errors in the individual measurements. Note that it is possible in principle for the errors to cancel out completely. For example, if a section of a buret were mis-numbered, the volume might be read as 1 ml too high. If the result is the volume delivered, the difference between two readings, this mis-numbering would cancel out completely if the second buret reading were also in the mis-numbered section. Note that here the absolute errors add (or subtract).

b) Multiplication and division. Suppose the result depends on several measurements X, Y, Z as a product or ratio:

    R = X^n Y / Z    (3)

Errors could be introduced in the measurement of X, Y, or Z. If, for example, there were no error in Y or Z, we would expect an error δX in X to affect the result according to the rules of differential calculus as

    δR = (n X^(n-1) Y / Z) δX    (4)

where Y and Z are held constant in the differentiation and the finite δX is assumed to behave as an infinitesimal dX. Similar expressions could be written for the way in which an error in Y or an error in Z would affect R. In general, errors may be made in each variable, and the error in R is then the sum of the individual contributions:

    δR = (n X^(n-1) Y / Z) δX + (X^n / Z) δY - (X^n Y / Z^2) δZ    (5)

If we now divide by R, we have

    δR/R = n (δX/X) + δY/Y - δZ/Z    (6)

Notice that the fractional errors (weighted by the appropriate exponents) add to give the fractional error in the result.

EXAMPLE

As an example, we cite a very elementary experiment performed to measure the density of water. Here 10 ml of water was pipetted into a weighing bottle and weighed. How do we evaluate how the uncertainties in weight and volume affect the uncertainty in density? We measure the volume with a buret, which requires two readings. If we assume that the error in the first buret reading is δV = 0.03 ml and that the error in the second is zero, we have from above

    V = (V1 + δV) - (V2 + 0) = V1 - V2 + δV    (7)

The absolute error in volume is thus δV = 0.03 ml. The fractional error is δV/V = 0.003, or 0.3%. If we further assume that we have spilled a little water, so that the error in weight is δw = -0.001 g, then with ρ = weight/volume we have

    δρ/ρ = δw/w - δV/V    (8)

Since V ≈ 10 ml and w ≈ 10 g, we have

    δρ/ρ ≈ -0.001/10 - 0.03/10 = -0.0001 - 0.003 ≈ -0.003

We conclude that the error in weight is very small in comparison to the error in volume, and that the accuracy expected (provided that everything is calibrated) is about 3 parts per thousand, or 0.3%.
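A short numerical check of the propagation rule in equation (6) reproduces the density example. This is a minimal sketch, not part of the original chapter: the helper name fractional_error is our own, and the numbers are the assumed errors from the text (δw = -0.001 g, δV = 0.03 ml, w ≈ V ≈ 10).

```python
# Fractional systematic error in R = X^n * Y / Z, per equation (6).
def fractional_error(n, dX_over_X, dY_over_Y, dZ_over_Z):
    return n * dX_over_X + dY_over_Y - dZ_over_Z

# Density example: rho = w / V, i.e. X = w with n = 1, no Y term, Z = V.
w, dw = 10.0, -0.001   # grams; a little water spilled, so weight reads low
V, dV = 10.0, 0.03     # ml; systematic error in the first buret reading

drho_over_rho = fractional_error(n=1, dX_over_X=dw / w,
                                 dY_over_Y=0.0, dZ_over_Z=dV / V)
print(f"fractional error in density: {drho_over_rho:.4f}")
# fractional error in density: -0.0031
```

The result, about -0.003 or 0.3%, confirms that the volume error dominates and the weight error is negligible.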