Uncertainty Guide


ICSBEP Guide to the Expression of Uncertainties

Guide to the Expression of Uncertainties for the Evaluation of Critical Experiments

Editor: V. F. Dean, Subcontractor to Idaho National Laboratory
Independent Reviewer: Larry G. Blackwood (retired), Idaho National Laboratory

Revision: 4
Date: November 30, 2007

ACKNOWLEDGMENT

We are most grateful to Fritz H. Fröhner, Kernforschungszentrum Karlsruhe, Institut für Neutronenphysik und Reaktortechnik, for his preliminary review of this document and for his helpful comments and answers to questions.

This guide was prepared by an ICSBEP subgroup led by G. Poullot, D. Doutriaux, and J. Anno of IRSN, France. The subgroup membership includes the following individuals:

J. Anno, IRSN (retired), France
J. B. Briggs, INL, USA
R. Bartholomay, Westinghouse Safety Management Solutions, USA
V. F. Dean (editor), Subcontractor to INL, USA
D. Doutriaux, IRSN (retired), France
K. Elam, ORNL, USA
C. Hopper, ORNL, USA
R. Jeraj, J. Stefan Institute, Slovenia
Y. Kravchenko, RRC Kurchatov Institute, Russian Federation
V. Lyutov, VNIITF, Russian Federation
R. D. McKnight, ANL, USA
Y. Miyoshi, JAERI, Japan
R. D. Mosteller, LANL, USA
A. Nouri, OECD Nuclear Energy Agency, France
G. Poullot, IRSN (retired), France
R. W. Schaefer, ANL (retired), USA
N. Smith, AEAT, United Kingdom
Z. Szatmary, Technical University of Budapest, Hungary
F. Trumble, Westinghouse Safety Management Solutions, USA
A. Tsiboulia, IPPE, Russian Federation
O. Zurron, ENUSA, Spain

The contribution of French participants had strong technical support from Benoit Queffelec, Director of the Société Industrielle d'Etudes et Réalisations, Enghien (France). He is a consultant on statistics applied to industrial and fabrication processes.
FOREWORD

The methods used by the International Criticality Safety Benchmark Evaluation Project (ICSBEP) to treat uncertainties encountered in experimental data and in the derivation of benchmark models have evolved significantly since the project was initiated in 1992. At an ICSBEP Meeting in Portland, Maine (1999), the Working Group expressed the need for a guide to the treatment of uncertainties in order to assist the work of both evaluator and reader.

Because of this history, when using an evaluation reported in the "International Handbook of Evaluated Criticality Safety Benchmark Experiments," the reader will often encounter, especially in evaluations written before 2001,

• mixing of uncertainties in physical parameters that are standard deviations coming from a large number of measurements with those given as tolerances, or bounds,
• lack of information about the sources of given uncertainty values, including the level of confidence and degrees of freedom corresponding to reported uncertainties,
• inconsistencies in the coverage factor (corresponding to 1, 2, or 3 σ) among the different uncertainties,
• a total experimental uncertainty, obtained by propagating the uncertainties in physical parameters to an equivalent reactivity uncertainty in keff, reported without mention of the level of confidence.

The mean and standard deviation are now recommended as optimal estimates for a parameter and its uncertainty. If these are used for the benchmark model and to calculate the uncertainty in keff, then, when the probability distribution of keff is Gaussian (a reasonable assumption in many cases), the level of confidence of the keff uncertainty range will be approximately 68%.
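The propagated uncertainty in keff described above is, in GUM terms, a combined standard uncertainty: each parameter's standard uncertainty is scaled by a sensitivity coefficient and the contributions are combined in quadrature. A minimal sketch follows; the parameter names, sensitivity coefficients, and uncertainty values are invented for illustration and are not taken from any actual evaluation.

```python
import math

# Hypothetical example: (parameter, sensitivity of keff to the parameter,
# standard uncertainty of the parameter). All numbers are illustrative only.
components = [
    ("fuel enrichment (wt.%)",   0.0035, 0.05),   # dkeff per wt.%, u = 0.05 wt.%
    ("pellet diameter (cm)",     0.0210, 0.001),  # dkeff per cm,   u = 0.001 cm
    ("solution density (g/cm3)", 0.0150, 0.002),  # dkeff per g/cm3
]

# First-order combination for independent inputs:
# u_c(keff) = sqrt( sum_i (c_i * u_i)^2 ), c_i being the sensitivity coefficients.
u_c = math.sqrt(sum((c * u) ** 2 for _, c, u in components))

for name, c, u in components:
    print(f"{name:28s} contribution: {abs(c * u):.2e}")
print(f"combined standard uncertainty (1 sigma): {u_c:.2e}")
```

With a Gaussian assumption, quoting keff ± u_c then corresponds to the approximately 68% level of confidence mentioned in the foreword.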
This guide, written to develop consistency among evaluators in the treatment of uncertainties (Sections 2, 3.1, and 3.5 of the ICSBEP evaluation), is based on two main references, which are the respective U.S. and European standards. The standards are very similar and have the same origin, namely, the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement, published in 1995. These two main references are

o Reference 1 – "American National Standard for Expressing Uncertainty – U.S. Guide to the Expression of Uncertainty in Measurement," ANSI/NCSL Z540-2-1997 (Copyright © 1997 by National Conference of Standards Laboratories, All rights reserved.), and

o Reference 2 – "Guide pour l'expression de l'incertitude de mesure" ("Guide to the Expression of Uncertainty in Measurement"), European Pre-standard NF ENV 13005, August 1999.

TABLE OF CONTENTS

1.0 GENERAL CONSIDERATIONS IN THE DETERMINATION OF UNCERTAINTIES ... 1
    1.1 Benefits of determining best estimates of uncertainties ... 1
    1.2 The uncertainty analysis ... 2
        1.2.1 Evaluation of experimental data ... 2
        1.2.2 Additional uncertainty from creating the benchmark model ... 2
        1.2.3 An objective, rigorous, graded approach ... 3
    1.3 Level of confidence ... 4
    1.4 The guide ... 4
2.0 METROLOGY AND ASSOCIATED STATISTICS ... 6
    2.1 Type A evaluation of standard uncertainty ... 8
    2.2 Type B evaluation of standard uncertainty ... 9
    2.3 "Uncertainty of the uncertainty" ... 11
    2.4 Determining combined standard uncertainty ... 11
3.0 INVENTORY OF UNCERTAINTIES ... 14
    3.1 Uncertainties from physics, chemistry, and isotopics of materials ... 17
    3.2 Uncertainties on geometry ... 19
    3.3 Uncertainties of dates ... 20
    3.4 Uncertainties from modeling ... 21
    3.5 General remarks on calculations of uncertainties ... 21
4.0 CALCULATION OF EFFECTS OF UNCERTAINTIES ON keff ... 23
    4.1 One-variable-at-a-time strategy ... 23
        4.1.1 Analytical method ... 24
        4.1.2 Deterministic method ... 24
        4.1.3 Monte Carlo method ... 25
        4.1.4 Monte Carlo perturbation method ... 25
        4.1.5 Uncertainties of the calculated uncertainties ... 26
    4.2 Experimental-design methodology ... 27
5.0 ESTIMATIONS OF SYSTEMATIC EFFECTS AND THEIR UNCERTAINTIES ... 29
    5.1 Systematic error ... 29
    5.2 Uncertainty of the estimate of systematic error ... 29
    5.3 Systematic effect of modeling simplifications ... 30
    5.4 Uncertainty from modeling simplifications ... 30
    5.5 Systematic effect of modeling similar components as equal ... 30
6.0 ESTIMATION OF THE FINAL COMBINED STANDARD UNCERTAINTY ... 32
    6.1 General formula ... 32
    6.2 Application cases ... 32
    6.3 Value, small or dominant, of individual uncertainty ... 33
    6.4 Effective degrees of freedom and level of confidence ... 33
7.0 REFERENCES ... 36

APPENDICES

APPENDIX A: GLOSSARY OF METROLOGY AND ASSOCIATED STATISTICS ... 37
    A.1 Metrology ... 37
    A.2 Basic statistical terms ... 38
APPENDIX B: SUMMARY OF SOME PRINCIPLES FOR
Recommended publications
  • What Is the Balance Between Certainty and Uncertainty?
    What is the balance between certainty and uncertainty? Susan Griffin, Centre for Health Economics, University of York.

    Overview: decision analysis (the objective function; the decision to invest/disinvest in health care programmes; the decision to require further evidence); characterising uncertainty (epistemic uncertainty, quality of evidence, modelling uncertainty); possibility for research.

    Objective function: the methods can be applied to any decision problem with an objective and a quantitative index measure of outcome. Maximise health subject to a budget constraint: value an improvement in quality of life in terms of the equivalent improvement in length of life (QALYs). Maximise health and equity subject to a budget constraint: value an improvement in equity in terms of the equivalent improvement in health.

    The decision to invest: maximising health subject to a budget constraint is a constrained optimisation problem. Given the costs and health outcomes of all available programmes, identify the optimal set. Under the marginal approach, introducing a new programme displaces existing activities, so compare the health gained to the health forgone. [Two slides plot health against resources; the slope gives the rate at which resources are converted to health gains with currently funded programmes.]

    Requiring more evidence: costs and health outcomes are estimated with uncertainty. The cost of uncertainty is that a programme funded on the basis of estimated costs and outcomes might turn out not to be the one that maximises health.
  • Linear Approximation of the Derivative
    Section 3.9 – Linear Approximation and the Derivative

    Idea: When we zoom in on a smooth function and its tangent line, they are very close together. So for a function that is hard to evaluate, we can use the tangent line to approximate values of the function.

    Example: Use a tangent line to approximate ∛8.1 without a calculator.

    Let f(x) = ∛x = x^(1/3). We know f(8) = ∛8 = 2. We'll use the tangent line at x = 8.

    f′(x) = (1/3)x^(−2/3), so f′(8) = (1/3)(8)^(−2/3) = (1/3)·(1/4) = 1/12.

    Our tangent line passes through (8, 2) and has slope 1/12:

    y = (1/12)(x − 8) + 2.

    To approximate f(8.1) we find the value of the tangent line at x = 8.1:

    y = (1/12)(8.1 − 8) + 2 = (1/12)(0.1) + 2 = 1/120 + 2 = 241/120.

    So ∛8.1 ≈ 241/120.

    How accurate was this approximation? Using a calculator, ∛8.1 ≈ 2.00829885. Taking the difference, 2.00829885 − 241/120 ≈ −3.4483 × 10⁻⁵, a pretty good approximation!

    General Formulas for Linear Approximation: The tangent line to f(x) at x = a passes through the point (a, f(a)) and has slope f′(a), so its equation is y = f′(a)(x − a) + f(a). The tangent line is the best linear approximation to f(x) near x = a, so f(x) ≈ f(a) + f′(a)(x − a); this is called the local linear approximation of f(x) near x = a.
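The cube-root example above can be checked in a few lines; this sketch uses the same function and tangent point as the worked example.

```python
def cube_root_approx(x, a=8.0):
    """Tangent-line (linear) approximation of x**(1/3) around x = a."""
    f_a = a ** (1.0 / 3.0)                       # f(a) = 2 when a = 8
    fprime_a = (1.0 / 3.0) * a ** (-2.0 / 3.0)   # f'(a) = 1/12 when a = 8
    return f_a + fprime_a * (x - a)

approx = cube_root_approx(8.1)   # 241/120 = 2.008333...
exact = 8.1 ** (1.0 / 3.0)       # about 2.00829885
print(approx, exact, approx - exact)
```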
  • Would ''Direct Realism'' Resolve the Classical Problem of Induction?
    Noûs 38:2 (2004) 197–232. Would ‘‘Direct Realism’’ Resolve the Classical Problem of Induction? MARC LANGE, University of North Carolina at Chapel Hill.

    I. Recently, there has been a modest resurgence of interest in the ‘‘Humean’’ problem of induction. For several decades following the recognized failure of Strawsonian ‘‘ordinary-language’’ dissolutions and of Wesley Salmon’s elaboration of Reichenbach’s pragmatic vindication of induction, work on the problem of induction languished. Attention turned instead toward confirmation theory, as philosophers sensibly tried to understand precisely what it is that a justification of induction should aim to justify. Now, however, in light of Bayesian confirmation theory and other developments in epistemology, several philosophers have begun to reconsider the classical problem of induction. In section 2, I shall review a few of these developments. Though some of them will turn out to be unilluminating, others will profitably suggest that we not meet inductive scepticism by trying to justify some alleged general principle of ampliative reasoning. Accordingly, in section 3, I shall examine how the problem of induction arises in the context of one particular ‘‘inductive leap’’: the confirmation, most famously by Henrietta Leavitt and Harlow Shapley about a century ago, that a period-luminosity relation governs all Cepheid variable stars. This is a good example for the inductive sceptic’s purposes, since it is difficult to see how the sparse background knowledge available at the time could have entitled stellar astronomers to regard their observations as justifying this grand inductive generalization. I shall argue that the observation reports that confirmed the Cepheid period-luminosity law were themselves ‘‘thick’’ with expectations regarding as yet unknown laws of nature.
  • Climate Scepticism, Epistemic Dissonance, and the Ethics of Uncertainty
    SYMPOSIUM: A CHANGING MORAL CLIMATE. Climate Scepticism, Epistemic Dissonance, and the Ethics of Uncertainty. By Axel Gelfert. © 2013 – Philosophy and Public Issues (New Series), Vol. 3, No. 1 (2013): 167-208, Luiss University Press. E-ISSN 2240-7987 | P-ISSN 1591-0660.

    Abstract. When it comes to the public debate about the challenge of global climate change, moral questions are inextricably intertwined with epistemological ones. This manifests itself in at least two distinct ways. First, for a fixed set of epistemic standards, it may be irresponsible to delay policy-making until everyone agrees that such standards have been met. This has been extensively discussed in the literature on the precautionary principle. Second, key actors in the public debate may—for strategic reasons, or out of simple carelessness—engage in the selective variation of epistemic standards in response to evidence that would threaten to undermine their core beliefs, effectively leading to epistemic double standards that make rational agreement impossible. The latter scenario is aptly described as an instance of what Stephen Gardiner calls “epistemic corruption.” In the present paper, I aim to give an explanation of the cognitive basis of epistemic corruption and discuss its place within the moral landscape of the debate. In particular, I argue that epistemic corruption often reflects an agent’s attempt to reduce dissonance between incoming scientific evidence and the agent’s ideological core commitments. By selectively discounting the former, agents may attempt to preserve the coherence of the latter, yet in doing so they threaten to damage the integrity of evidence-based deliberation.
  • Generalized Finite-Difference Schemes
    Generalized Finite-Difference Schemes. By Blair Swartz* and Burton Wendroff**.

    Abstract. Finite-difference schemes for initial boundary-value problems for partial differential equations lead to systems of equations which must be solved at each time step. Other methods also lead to systems of equations. We call a method a generalized finite-difference scheme if the matrix of coefficients of the system is sparse. Galerkin's method, using a local basis, provides unconditionally stable, implicit generalized finite-difference schemes for a large class of linear and nonlinear problems. The equations can be generated by computer program. The schemes will, in general, be not more efficient than standard finite-difference schemes when such standard stable schemes exist. We exhibit a generalized finite-difference scheme for Burgers' equation and solve it with a step function for initial data.

    1. Well-Posed Problems and Semibounded Operators. We consider a system of partial differential equations of the following form:

        (1.1)  ∂u/∂t = Au + f,

    where u = (u₁(x, t), …, u_m(x, t)), f = (f₁(x, t), …, f_m(x, t)), and A is a matrix of partial differential operators in x = (x₁, …, x_n),

        A = A(x, t, D) = Σᵢ aᵢ Dⁱ,  Dⁱ = (∂/∂x₁)^{i₁} ⋯ (∂/∂x_n)^{i_n},

    where aᵢ(x, t) = a_{i₁⋯i_n}(x, t) is a matrix. Equation (1.1) is assumed to hold in some n-dimensional region Ω with boundary ∂Ω. An initial condition is imposed in Ω,

        (1.2)  u(x, 0) = u₀(x),  x ∈ Ω.

    The boundary conditions are that there is a collection ℬ of operators B(x, D) such that

        (1.3)  B(x, D)u = 0,  x ∈ ∂Ω,  B ∈ ℬ.

    We assume the operator A satisfies the following condition: there exists a scalar product (·,·) such that for all sufficiently smooth functions φ(x) which satisfy (1.3),

        (1.4)  2 Re (Aφ, φ) ≤ C(φ, φ),  0 < t ≤ T,

    where C is a constant independent of φ. An operator A satisfying (1.4) is called semibounded.
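For a concrete feel of a finite-difference time step applied to the same model problem, here is a much simpler illustration than the paper's implicit Galerkin schemes: a basic explicit conservative upwind step for the inviscid Burgers equation u_t + (u²/2)_x = 0 with step initial data. The grid size and time step are arbitrary choices, not the paper's.

```python
# Explicit conservative upwind scheme for the inviscid Burgers equation
# with step initial data. This is NOT the paper's implicit Galerkin scheme;
# it is only a sketch of a finite-difference time step on the same problem.
nx = 100
dx = 1.0 / nx
dt = 0.004                    # dt/dx = 0.4, stable here since max|u| = 1
u = [1.0 if j * dx < 0.5 else 0.0 for j in range(nx)]  # step initial data

def step(u, dt, dx):
    """One conservative upwind update with flux F(u) = u^2 / 2 (valid for u >= 0)."""
    flux = [0.5 * v * v for v in u]
    new = u[:]                # leftmost cell kept fixed (inflow of the left state)
    for j in range(1, len(u)):
        new[j] = u[j] - (dt / dx) * (flux[j] - flux[j - 1])
    return new

for _ in range(50):           # advance to t = 0.2
    u = step(u, dt, dx)

# The step (a shock) moves right at speed (uL + uR)/2 = 0.5, so it sits
# near x = 0.6 at t = 0.2, smeared over a few cells by numerical diffusion.
print(min(u), max(u))
```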
  • Measurement, Uncertainty, and Uncertainty Propagation 205
    Measurement, Uncertainty, and Uncertainty Propagation

    Name _____________________ Date __________ Partners ________________ TA ________________ Section _______

    Objective: To understand the importance of reporting both a measurement and its uncertainty, and to address how to properly treat uncertainties in the lab.

    Equipment: meter stick, 2-meter stick.

    DISCUSSION. Understanding nature requires measuring things, be it distance, time, acidity, or social status. However, measurements cannot be “exact”. Rather, all measurements have some uncertainty associated with them. Thus all measurements consist of two numbers: the value of the measured quantity and its uncertainty. The uncertainty reflects the reliability of the measurement. The range of measurement uncertainties varies widely. Some quantities, such as the mass of the electron, me = (9.1093897 ± 0.0000054) × 10⁻³¹ kg, are known to better than one part per million. Other quantities are only loosely bounded: there are 100 to 400 billion stars in the Milky Way. Note that we are not talking about “human error”! We are not talking about mistakes! Rather, uncertainty is inherent in the instruments and methods that we use even when perfectly applied. The goddess Athena could not read a digital scale any better than you.

    Significant Figures. The electron mass above has eight significant figures (or digits). However, the measured number of stars in the Milky Way has barely one significant figure, and it would be misleading to write it with more than one figure of precision. The number of significant figures reported should be consistent with the uncertainty of the measurement. In general, uncertainties are usually quoted with no more significant figures than the measured result, and the last significant figure of a result should match that of the uncertainty.
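The significant-figures rule above can be automated: round the uncertainty to a chosen number of significant figures, then round the value to the same decimal place. The helper below (`format_measurement`) is a hypothetical sketch, not from the handout; conventions vary between labs.

```python
import math

def format_measurement(value, uncertainty, sig_figs=2):
    """Round `uncertainty` to `sig_figs` significant figures, then round
    `value` to the same decimal place (illustrative helper, conventions vary)."""
    exponent = math.floor(math.log10(abs(uncertainty)))
    decimals = sig_figs - 1 - exponent   # decimal place of the last kept digit
    u = round(uncertainty, decimals)
    v = round(value, decimals)
    if decimals > 0:
        return f"({v:.{decimals}f} ± {u:.{decimals}f})"
    return f"({v:g} ± {u:g})"

print(format_measurement(9.1093897, 0.0000054, sig_figs=2))  # electron-mass mantissa
print(format_measurement(3.14159, 0.02, sig_figs=1))
```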
  • Uncertainty 2: Combining Uncertainty
    Introduction to Mobile Robots. Uncertainty 2: Combining Uncertainty. Alonzo Kelly, Fall 1996.

    1. Combining Uncertainties. The basic theory behind everything to do with the Kalman Filter can be obtained from two sources: (1) uncertainty transformation with the Jacobian matrix, and (2) the Central Limit Theorem.

    1.1 Variance of a Sum of Random Variables. We can use our uncertainty transformation rules to compute the variance of a sum of random variables. Let there be n random variables, each of which is normally distributed with the same distribution:

        xᵢ ~ N(µ, σ),  i = 1 … n.

    Let us define the new random variable y as the sum of all of these:

        y = Σᵢ₌₁ⁿ xᵢ.

    By our transformation rules, σy² = J σx² Jᵀ, where the Jacobian is simply n ones, J = [1 … 1]. Hence:

        σy² = n σx².

    Thus the variance of a sum of identical random variables grows linearly with the number of elements in the sum.

    1.2 Variance of a Continuous Sum. Consider a problem where an integral is being computed from noisy measurements. This will be important later in dead reckoning models. In such a case, where a new random variable is added at a regular frequency, the expression gives the development of the standard deviation versus time, because n = t/∆t in a discrete time system. In this simple model, uncertainty expressed as standard deviation grows with the square root of time.
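The rule σy² = n·σx² is easy to confirm by simulation. A quick Monte Carlo sketch (the parameter values and trial count are arbitrary choices for illustration):

```python
import random

# Monte Carlo check that the variance of a sum of n i.i.d. normal variables
# is n times the single-variable variance.
random.seed(0)
n, sigma, trials = 25, 2.0, 20_000

sums = [sum(random.gauss(0.0, sigma) for _ in range(n)) for _ in range(trials)]
mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / (trials - 1)

# The sample variance should be close to n * sigma^2 = 25 * 4 = 100.
print(var, n * sigma ** 2)
```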
  • Measurements and Uncertainty Analysis
    Measurements & Uncertainty Analysis. Measurement and Uncertainty Analysis Guide. "It is better to be roughly right than precisely wrong." – Alan Greenspan

    Table of Contents:
    The Uncertainty of Measurements ... 2
    Types of Uncertainty ... 6
    Estimating Experimental Uncertainty for a Single Measurement ... 9
    Estimating Uncertainty in Repeated Measurements ... 9
    Standard Deviation ... 12
    Standard Deviation of the Mean (Standard Error) ... 14
    When to Use Standard Deviation vs Standard Error ... 14
    Anomalous Data ... 15
    Fractional Uncertainty ... 15
    Biases and the Factor of N–1 ...
  • Investment Behaviors Under Epistemic Versus Aleatory Uncertainty
    Investment Behaviors Under Epistemic versus Aleatory Uncertainty

    Daniel J. Walters, Assistant Professor of Marketing, INSEAD Europe, Marketing Department, Boulevard de Constance, 77300 Fontainebleau, 01 60 72 40 00, [email protected]
    Gülden Ülkümen, Associate Professor of Marketing at University of Southern California, 701 Exposition Boulevard, Hoffman Hall 516, Los Angeles, CA 90089-0804, [email protected]
    Carsten Erner, UCLA Anderson School of Management, 110 Westwood Plaza, Los Angeles, CA, USA 90095-1481, [email protected]
    David Tannenbaum, Assistant Professor of Management, University of Utah, Salt Lake City, UT, USA, (801) 581-8695, [email protected]
    Craig R. Fox, Harold Williams Professor of Management at the UCLA Anderson School of Management, Professor of Psychology and Medicine at UCLA, 110 Westwood Plaza #D511, Los Angeles, CA, USA 90095-1481, (310) 206-3403, [email protected]

    Abstract. In nine studies, we find that investors intuitively distinguish two independent dimensions of uncertainty: epistemic uncertainty that they attribute to missing knowledge, skill, or information, versus aleatory uncertainty that they attribute to chance or stochastic processes. Investors who view stock market uncertainty as higher in epistemicness (knowability) are more sensitive to their available information when choosing whether or not to invest, and are more likely to reduce uncertainty by seeking guidance from experts. In contrast, investors who view stock market uncertainty as higher
  • Calculus I - Lecture 15 Linear Approximation & Differentials
    Calculus I – Lecture 15: Linear Approximation & Differentials

    Lecture Notes: http://www.math.ksu.edu/~gerald/math220d/
    Course Syllabus: http://www.math.ksu.edu/math220/spring-2014/indexs14.html
    Gerald Hoehn (based on notes by T. Cochran), March 11, 2014

    Equation of Tangent Line. Recall the equation of the tangent line of a curve y = f(x) at the point x = a. The general equation of the tangent line is

        y = Lₐ(x) := f(a) + f′(a)(x − a).

    That is the point-slope form of a line through the point (a, f(a)) with slope f′(a).

    Linear Approximation. It follows from the geometric picture, as well as from the equation

        lim_{x→a} (f(x) − f(a))/(x − a) = f′(a),

    which means that (f(x) − f(a))/(x − a) ≈ f′(a), that

        f(x) ≈ f(a) + f′(a)(x − a) = Lₐ(x)

    for x close to a. Thus Lₐ(x) is a good approximation of f(x) for x near a. If we write x = a + ∆x and let ∆x be sufficiently small, this becomes f(a + ∆x) ≈ f(a) + f′(a)∆x. Writing also ∆y = ∆f := f(a + ∆x) − f(a), this becomes

        ∆y = ∆f ≈ f′(a)∆x.

    In words: for small ∆x, the change ∆y in y if one goes from x to x + ∆x is approximately equal to f′(a)∆x.

    Visualization of Linear Approximation.

    Example: a) Find the linear approximation of f(x) = √x at x = 16. b) Use it to approximate √15.9. Solution: a) We have to compute the equation of the tangent line at x = 16.
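The example's arithmetic can be finished numerically. A small sketch using only the tangent-line formula quoted above: f(x) = √x at a = 16 gives f(16) = 4 and f′(16) = 1/8, and the tangent line is evaluated at x = 15.9.

```python
import math

a = 16.0
f_a = math.sqrt(a)                      # f(16) = 4
fprime_a = 1.0 / (2.0 * math.sqrt(a))   # f'(x) = 1/(2 sqrt(x)), so f'(16) = 1/8

def L(x):
    """Tangent-line approximation L_a(x) = f(a) + f'(a)(x - a)."""
    return f_a + fprime_a * (x - a)

print(L(15.9))          # 3.9875
print(math.sqrt(15.9))  # about 3.98748
```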
  • The Flaw of Averages)
    THE NEED FOR GREATER REFLECTION OF HETEROGENEITY IN CEA, or (The fLaw of Averages). Warren Stevens, PhD. © 2018 PAREXEL INTERNATIONAL CORP.

    [Opening slides show "Shimmering Substance," Jackson Pollock, 1946, with a tongue-in-cheek "statistical summary" of the painting: average color brown (beige-black), average position of brushstroke.]

    We know data on effectiveness is far more complex than we like to represent it: many patients are better off with the comparator. [Slide: distribution of QALYs gained across n patients.]

    A thought experiment. [Slide: cost per QALY for two treatments with means on either side of the cost-effectiveness threshold, yet with treatment #2 valuable to Joe and treatment #1 valuable to Bob.]

    • Joe is receiving a treatment that is cost-effective for him, but deemed not cost-effective by/for society.
    • Bob is receiving a treatment that is not cost-effective for him, but has been deemed cost-effective by/for society.
    • So whose treatment is cost-effective?

    Hypothetically, there are as many levels of effectiveness (or cost-effectiveness) as there are patients. What questions or uncertainties stop us recognizing this basic truth in scientific practice?

    1. Uncertainty (empiricism) – how do we gauge uncertainty in a sample size of 1, avoiding false positives?
    2. Policy – how do we institute a separate treatment policy for each and every individual?
    3. Information (data) – how do we make 'informed' decisions without the level of detail required in terms of information? (We know more about populations than we do about people.)
  • Chapter 3, Lecture 1: Newton's Method 1 Approximate
    Math 484: Nonlinear Programming1 Mikhail Lavrov Chapter 3, Lecture 1: Newton's Method April 15, 2019 University of Illinois at Urbana-Champaign 1 Approximate methods and asumptions The final topic covered in this class is iterative methods for optimization. These are meant to help us find approximate solutions to problems in cases where finding exact solutions would be too hard. There are two things we've taken for granted before which might actually be too hard to do exactly: 1. Evaluating derivatives of a function f (e.g., rf or Hf) at a given point. 2. Solving an equation or system of equations. Before, we've assumed (1) and (2) are both easy. Now we're going to figure out what to do when (2) and possibly (1) is hard. For example: • For a polynomial function of high degree, derivatives are straightforward to compute, but it's impossible to solve equations exactly (even in one variable). • For a function we have no formula for (such as the value function MP(z) from Chapter 5, for instance) we don't have a good way of computing derivatives, and they might not even exist. 2 The classical Newton's method Eventually we'll get to optimization problems. But we'll begin with Newton's method in its basic form: an algorithm for approximately finding zeroes of a function f : R ! R. This is an iterative algorithm: starting with an initial guess x0, it makes a better guess x1, then uses it to make an even better guess x2, and so on. We hope that eventually these approach a solution.
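The classical iteration described above can be sketched in a few lines. The update rule xₖ₊₁ = xₖ − f(xₖ)/f′(xₖ) is the standard form of Newton's method; the example function f(x) = x² − 2 is an illustrative choice, not one from the lecture.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical Newton's method for a zero of f: R -> R."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)   # Newton update: x_{k+1} = x_k - f(x_k)/f'(x_k)
    return x

# Example: the positive zero of f(x) = x^2 - 2 is sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # about 1.4142135623730951
```

Each guess roughly doubles the number of correct digits near the root, which is why only a handful of iterations are needed here.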