Nonstandard Analysis Based Calculus
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Nonstandard Methods in Analysis: An Elementary Approach to Stochastic Differential Equations
Nonstandard Methods in Analysis: An elementary approach to Stochastic Differential Equations. Vieri Benci, Dipartimento di Matematica Applicata, June 2008. The aim of this talk is to make two points about NSA. First, in most applications of NSA to analysis, only elementary tools and techniques of nonstandard calculus seem to be necessary. Second, the advantages of a theory which includes infinitesimals rely more on the possibility of building new models than on the proof techniques. These two points are illustrated using α-theory in the study of Brownian motion.
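For orientation, a minimal sketch of the kind of infinitesimal model this refers to, in the spirit of Anderson's well-known hyperfinite random-walk representation of Brownian motion (the notation here is generic and not necessarily Benci's α-theoretic formulation): fix an infinite hypernatural number $N$, set the infinitesimal time step $\Delta t = 1/N$, and let $\epsilon_1, \ldots, \epsilon_N$ be independent fair coin flips with values $\pm 1$. Define
\[
W(t_k) \;=\; \sqrt{\Delta t}\,\sum_{i=1}^{k} \epsilon_i, \qquad t_k = k\,\Delta t, \quad k = 0, 1, \ldots, N .
\]
The standard part of this hyperfinite walk, taken with respect to the associated Loeb measure, is a Brownian motion on $[0,1]$, so stochastic differential equations can be handled as (hyper)finite difference equations.
-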
An Introduction to Nonstandard Analysis
AN INTRODUCTION TO NONSTANDARD ANALYSIS. ISAAC DAVIS. Abstract. In this paper we give an introduction to nonstandard analysis, starting with an ultrapower construction of the hyperreals. We then demonstrate how theorems in standard analysis "transfer over" to nonstandard analysis, and how theorems in standard analysis can be proven using theorems in nonstandard analysis. 1. Introduction. For many centuries, early mathematicians and physicists would solve problems by considering infinitesimally small pieces of a shape, or movement along a path by an infinitesimal amount. Archimedes derived the formula for the area of a circle by thinking of a circle as a polygon with infinitely many infinitesimal sides [1]. In particular, the construction of calculus was first motivated by this intuitive notion of infinitesimal change. G.W. Leibniz's derivation of calculus made extensive use of "infinitesimal" numbers, which were both nonzero but small enough to add to any real number without changing it noticeably. Although intuitively clear, infinitesimals were ultimately rejected as mathematically unsound, and were replaced with the common ε-δ method of computing limits and derivatives. However, in 1960 Abraham Robinson developed nonstandard analysis, in which the reals are rigorously extended to include infinitesimal numbers and infinite numbers; this new extended field is called the field of hyperreal numbers. The goal was to create a system of analysis that was more intuitively appealing than standard analysis but without losing any of the rigor of standard analysis. In this paper, we will explore the construction and various uses of nonstandard analysis. In section 2 we will introduce the notion of an ultrafilter, which will allow us to do a typical ultrapower construction of the hyperreal numbers.
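A compressed sketch of the ultrapower construction the abstract describes, under the usual conventions (the particular representative sequences are assumptions of this illustration): fix a nonprincipal ultrafilter $\mathcal{U}$ on $\mathbb{N}$ and set
\[
{}^*\mathbb{R} \;=\; \mathbb{R}^{\mathbb{N}} / \sim, \qquad (a_n) \sim (b_n) \iff \{\, n : a_n = b_n \,\} \in \mathcal{U}.
\]
The class $\varepsilon = [(1, \tfrac12, \tfrac13, \ldots)]$ is then a positive infinitesimal: for any real $r > 0$ the set $\{\, n : 1/n < r \,\}$ is cofinite, hence belongs to $\mathcal{U}$, so $0 < \varepsilon < r$.
-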
Cauchy, Infinitesimals and Ghosts of Departed Quantifiers
CAUCHY, INFINITESIMALS AND GHOSTS OF DEPARTED QUANTIFIERS. JACQUES BAIR, PIOTR BŁASZCZYK, ROBERT ELY, VALÉRIE HENRY, VLADIMIR KANOVEI, KARIN U. KATZ, MIKHAIL G. KATZ, TARAS KUDRYK, SEMEN S. KUTATELADZE, THOMAS MCGAFFEY, THOMAS MORMANN, DAVID M. SCHAPS, AND DAVID SHERRY. arXiv:1712.00226v1 [math.HO], 1 Dec 2017. Abstract. Procedures relying on infinitesimals in Leibniz, Euler and Cauchy have been interpreted in both a Weierstrassian and Robinson's frameworks. The latter provides closer proxies for the procedures of the classical masters. Thus, Leibniz's distinction between assignable and inassignable numbers finds a proxy in the distinction between standard and nonstandard numbers in Robinson's framework, while Leibniz's law of homogeneity with the implied notion of equality up to negligible terms finds a mathematical formalisation in terms of standard part. It is hard to provide parallel formalisations in a Weierstrassian framework, but scholars since Ishiguro have engaged in a quest for ghosts of departed quantifiers to provide a Weierstrassian account for Leibniz's infinitesimals. Euler similarly had notions of equality up to negligible terms, of which he distinguished two types: geometric and arithmetic. Euler routinely used product decompositions into a specific infinite number of factors, and used the binomial formula with an infinite exponent. Such procedures have immediate hyperfinite analogues in Robinson's framework, while in a Weierstrassian framework they can only be reinterpreted by means of paraphrases departing significantly from Euler's own presentation. Cauchy gives lucid definitions of continuity in terms of infinitesimals that find ready formalisations in Robinson's framework, but scholars working in a Weierstrassian framework bend over backwards either to claim that Cauchy was vague or to engage in a quest for ghosts of departed quantifiers in his work.
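As a concrete instance of the formalisation in terms of standard part mentioned above (stated in generic Robinson-framework notation, not necessarily the authors' own): a function $f$ is continuous at a real point $x$ precisely when
\[
f(x + \varepsilon) \approx f(x) \quad \text{for every infinitesimal } \varepsilon,
\]
that is, $\operatorname{st}\bigl(f(x+\varepsilon)\bigr) = f(x)$, where $\approx$ means "equal up to an infinitesimal" and $\operatorname{st}$ denotes the standard part (shadow).
-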
Data Envelopment Analysis with Fuzzy Parameters: An Interactive Approach
International Journal of Operations Research and Information Systems, 2(3), 39-53, July-September 2011. Data Envelopment Analysis with Fuzzy Parameters: An Interactive Approach. Adel Hatami-Marbini, Université Catholique de Louvain, Belgium; Saber Saati, Islamic Azad University, Iran; Madjid Tavana, La Salle University, USA. ABSTRACT. Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. In the conventional DEA, all the data assume the form of specific numerical values. However, the observed values of the input and output data in real-life problems are sometimes imprecise or vague. Previous methods have not considered the preferences of the decision makers (DMs) in the evaluation process. This paper proposes an interactive evaluation process for measuring the relative efficiencies of a set of DMUs in fuzzy DEA with consideration of the DMs' preferences. The authors construct a linear programming (LP) model with fuzzy parameters and calculate the fuzzy efficiency of the DMUs for different α levels. Then, the DM identifies his or her most preferred fuzzy goal for each DMU under consideration. A modified Yager index is used to develop a ranking order of the DMUs. This study allows the DMs to use their preferences or value judgments when evaluating the performance of the DMUs. Keywords: Data Envelopment Analysis, Efficiency Evaluation, Fuzzy Mathematical Programming, Interactive Solution, Preference Modeling. INTRODUCTION. The changing economic conditions have challenged […] efficiency. A DMU is considered efficient when no other DMU can produce more outputs using an equal or lesser amount of inputs.
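For orientation, the crisp linear program that fuzzy DEA models of this kind generalise is the standard input-oriented CCR multiplier model (given here as a generic sketch; the paper's own fuzzy formulation and α-level computations are built on fuzzy data rather than the crisp $x_{ij}, y_{rj}$ shown):
\[
\max_{u,v} \; \sum_{r} u_r\, y_{ro}
\quad \text{s.t.} \quad
\sum_{i} v_i\, x_{io} = 1, \qquad
\sum_{r} u_r\, y_{rj} - \sum_{i} v_i\, x_{ij} \le 0 \;\; \forall j, \qquad
u_r, v_i \ge 0,
\]
where DMU$_o$ is the unit under evaluation, $x_{ij}$ are inputs and $y_{rj}$ are outputs of DMU$_j$. Replacing the data by α-cuts of fuzzy numbers turns the optimal value into an interval efficiency for each α level, which is where ranking indices such as the Yager index come in.
-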
Fermat, Leibniz, Euler, and the Gang: The True History of the Concepts of Limit and Shadow
FERMAT, LEIBNIZ, EULER, AND THE GANG: THE TRUE HISTORY OF THE CONCEPTS OF LIMIT AND SHADOW. TIZIANA BASCELLI, EMANUELE BOTTAZZI, FREDERIK HERZBERG, VLADIMIR KANOVEI, KARIN U. KATZ, MIKHAIL G. KATZ, TAHL NOWIK, DAVID SHERRY, AND STEVEN SHNIDER. arXiv:1407.0233v1 [math.HO], 1 Jul 2014. Abstract. Fermat, Leibniz, Euler, and Cauchy all used one or another form of approximate equality, or the idea of discarding "negligible" terms, so as to obtain a correct analytic answer. Their inferential moves find suitable proxies in the context of modern theories of infinitesimals, and specifically the concept of shadow. We give an application to decreasing rearrangements of real functions. Contents: 1. Introduction 2; 2. Methodological remarks 4; 2.1. A-track and B-track 5; 2.2. Formal epistemology: Easwaran on hyperreals 6; 2.3. Zermelo–Fraenkel axioms and the Feferman–Levy model 8; 2.4. Skolem integers and Robinson integers 9; 2.5. Williamson, complexity, and other arguments 10; 2.6. Infinity and infinitesimal: let both pretty severely alone 13; 3. Fermat's adequality 13; 3.1. Summary of Fermat's algorithm 14; 3.2. Tangent line and convexity of parabola 15; 3.3. Fermat, Galileo, and Wallis 17; 4. Leibniz's Transcendental law of homogeneity 18; 4.1. When are quantities equal? 19; 4.2. Product rule 20; 5. Euler's Principle of Cancellation 20; 6. What did Cauchy mean by "limit"? 22; 6.1. Cauchy on Leibniz 23; 6.2. Cauchy on continuity 23; 7. Modern formalisations: a case study 25; 8. A combinatorial approach to decreasing rearrangements 26; 9.
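A schematic example of the "discarding negligible terms" procedure and its modern proxy, the shadow (illustrative only; Fermat's own notation and argument differ in detail): to find the slope of the tangent to the parabola $y = x^2$, adequality compares $f(x+E)$ with $f(x) + sE$,
\[
x^2 + 2xE + E^2 \;\sim\; x^2 + sE \;\Longrightarrow\; s \;\sim\; 2x + E,
\]
and then the "negligible" $E$ is discarded, giving $s = 2x$. In a modern infinitesimal framework the same step is the shadow: $s = \operatorname{st}\!\bigl((f(x+\varepsilon)-f(x))/\varepsilon\bigr) = \operatorname{st}(2x+\varepsilon) = 2x$.
-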
Fuzzy Integer Linear Programming with Fuzzy Decision Variables
Applied Mathematical Sciences, Vol. 4, 2010, no. 70, 3493-3502. Fuzzy Integer Linear Programming with Fuzzy Decision Variables. C. Sudhagar, Department of Mathematics and Applied Sciences, Middle East College of Information Technology, Muscat, Oman, [email protected]; K. Ganesan, Department of Mathematics, SRM University, Chennai, India, gansan [email protected]. Abstract. In this paper a new method for dealing with Fuzzy Integer Linear Programming Problems (FILPP) has been proposed. A FILPP model with fuzzy variables is taken up for solution. The solution method is based on a fuzzy ranking method. The proposed method can serve decision makers by providing a reasonable range of values for the fuzzy variable, which is comparatively better than the currently available solutions. Numerical examples demonstrate the effectiveness and accuracy of the proposed method. Mathematics Subject Classification: 65K05, 90C10, 90C70, 90C90. Keywords: Fuzzy numbers, Ranking, Fuzzy integer linear programming. 1 Introduction. Linear Programming Problems (LPP) have an outstanding relevance in the fields of decision making, artificial intelligence, control theory, management sciences, job placement interventions, etc. In many practical applications the available information in the system under consideration is not precise. In such situations, it is more appropriate to use fuzzy LPP. The concept of fuzzy linear programming problems was first introduced by Tanaka et al. [13, 12]. After their work, several kinds of fuzzy linear programming problems have appeared in the literature and different methods have been proposed to solve such problems. Numerous methods for comparison of fuzzy numbers have been suggested in the literature. In the Campos–Verdegay paper [2], linear programming problems with fuzzy constraints and fuzzy coefficients in both the matrix and the right-hand side of the constraint set are considered.
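As an example of the kind of ranking function such methods rely on (one common Yager-type index, stated only as an illustration; the ranking actually used in the paper may differ): for a fuzzy number $\tilde{a}$ with α-cuts $[\,a^L_\alpha, a^U_\alpha\,]$,
\[
F(\tilde{a}) \;=\; \int_0^1 \frac{a^L_\alpha + a^U_\alpha}{2}\, d\alpha,
\]
which for a triangular fuzzy number $\tilde{a} = (a, b, c)$ evaluates to $(a + 2b + c)/4$. Candidate fuzzy values of the decision variables can then be ordered by comparing their indices, e.g. declaring $\tilde{a} \preceq \tilde{b}$ when $F(\tilde{a}) \le F(\tilde{b})$.
-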
0.999… = 1: An Infinitesimal Explanation (Bryan Dawson)
0.999… = 1: An Infinitesimal Explanation. Bryan Dawson. "I know the proofs, but I still don't believe it." Those words were uttered to me by a very good undergraduate mathematics major regarding the equation 0.999… = 1. This fact is possibly the most-argued-about result of arithmetic, one that can evoke great passion. But why? According to Robert Ely [2] (see also Tall and Vinner [4]), the answer for some students lies in their intuition about the infinitely small: while they may understand that the difference between 0.999… and 1 is less than any positive real number, they still perceive a nonzero but infinitely small difference—an infinitesimal difference—between the two. And it's not just students; most professional mathematicians have not formally studied infinitesimals and their larger setting, the hyperreal numbers, and as a result sometimes wonder […]. What exactly does that mean? Just as real numbers have decimal expansions, with one digit for each integer power of 10, so do hyperreal numbers. But the hyperreals contain "infinite integers," so there are digits representing not just the 237th digit past the decimal point and the 12,598th digit, but also the Yth digit past the decimal point, for Y an infinite hyperreal integer (so the corresponding power of 10 is a negative infinite hyperreal integer). We have four 0s followed by a 1 in the fifth decimal place (that is, 0.00001), and also a number in which "…" represents zeros, followed by a 1 in the Yth decimal place. (Since we'll see later that not all infinite hyperreal integers are equal, a more precise, but also uglier, notation would be […].) Confused? Perhaps a little background information […]
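A compressed version of the reconciliation the article develops (standard hyperreal facts, not a quotation from the article): for an infinite hyperinteger $N$, the terminating hyperreal expansion with $N$ nines is
\[
\underbrace{0.999\ldots 9}_{N \text{ nines}} \;=\; 1 - 10^{-N},
\]
which really is less than 1, but only by the infinitesimal $10^{-N}$; its standard part (shadow) is exactly 1. The real number $0.999\ldots$, with a 9 in every finite decimal place, carries no such infinitesimal deficit and equals 1.
-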
Connes on the Role of Hyperreals in Mathematics
Found Sci, DOI 10.1007/s10699-012-9316-5. Tools, Objects, and Chimeras: Connes on the Role of Hyperreals in Mathematics. Vladimir Kanovei · Mikhail G. Katz · Thomas Mormann. © Springer Science+Business Media Dordrecht 2012. Abstract. We examine some of Connes' criticisms of Robinson's infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes' own earlier work in functional analysis. Connes described the hyperreals as both a "virtual theory" and a "chimera", yet acknowledged that his argument relies on the transfer principle. We analyze Connes' "dart-throwing" thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are Lebesgue measurable, suggesting that Connes views a theory as being "virtual" if it is not definable in a suitable model of ZFC. If so, Connes' claim that a theory of the hyperreals is "virtual" is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters aren't definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 80s, and in Noncommutative Geometry, raising the question whether the latter may not be vulnerable to Connes' criticism of virtuality. We analyze the philosophical underpinnings of Connes' argument based on Gödel's incompleteness theorem, and detect an apparent circularity in Connes' logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace (featured on the front cover of Connes' magnum opus).
-
Lecture 25: Ultraproducts
LECTURE 25: ULTRAPRODUCTS. CALEB STANFORD. First, recall that given a collection of sets and an ultrafilter on the index set, we formed an ultraproduct of those sets. It is important to think of the ultraproduct as a set-theoretic construction rather than a model-theoretic construction, in the sense that it is a product of sets rather than a product of structures. I.e., if X_i are sets for i = 1, 2, 3, …, then the ultraproduct ∏ X_i / U is another set. The set we use does not depend on what constant, function, and relation symbols may exist and have interpretations in the X_i. (There are of course profound model-theoretic consequences of this, but the underlying construction is a way of turning a collection of sets into a new set, and doesn't make use of any notions from model theory!) We are interested in the particular case where the index set is ℕ and where there is a set X such that X_i = X for all i. Then ∏ X_i / U is written X^ℕ/U, and is called the ultrapower of X by U. From now on, we will consider the ultrafilter to be a fixed nonprincipal ultrafilter, and will just consider the ultrapower of X to be the ultrapower by this fixed ultrafilter. It doesn't matter which one we pick, in the sense that none of our results will require anything from U beyond its nonprincipality. The ultrapower has two important properties. The first of these is the Transfer Principle. The second is ℵ₀-saturation. 1. The Transfer Principle. Let L be a language, X a set, and X_L an L-structure on X.
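A generic statement of the first property for the ultrapower X^ℕ/U (Łoś's theorem, given here as a sketch; the lecture's own formulation may differ in detail): for a first-order formula $\varphi$ of the language $L$ and elements $[f_1], \ldots, [f_k]$ of the ultrapower,
\[
X^{\mathbb{N}}/\mathcal{U} \models \varphi\bigl([f_1], \ldots, [f_k]\bigr)
\iff
\{\, n \in \mathbb{N} : X \models \varphi\bigl(f_1(n), \ldots, f_k(n)\bigr) \,\} \in \mathcal{U}.
\]
For sentences (no free variables) the right-hand set is either empty or all of ℕ, so with $X = \mathbb{R}$ this is exactly the Transfer Principle: a first-order statement holds in the ultrapower iff it holds in the reals.
-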
Decision Analysis in the UK Energy Supply Chain Risk Management: Tools Development and Application
Decision Analysis in the UK Energy Supply Chain Risk Management: Tools Development and Application by Amin Vafadarnikjoo Registration number: 100166891 Thesis submitted to The University of East Anglia for the Degree of Doctor of Philosophy (PhD) August 2020 Norwich Business School "This copy of the thesis has been supplied on condition that anyone who consults it is understood to recognise that its copyright rests with the author and that use of any information derived therefrom must be in accordance with current UK Copyright Law. In addition, any quotation or extract must include full attribution." Abstract Large infrastructures like electricity supply networks are widely presumed to be crucial for the functioning of societies as they create conditions for essential economic activities. There has always been a continuing concern and complexity around risks in the field of energy security and particularly power grids within the energy supply chain. Drawing on this complexity and a need for useful tools, this research contributes to developing and utilising proper decision-making tools (i.e. methods and models) to deal with risk identification and mitigation in the UK energy supply chain as a compound networked system. This thesis is comprised of four study phases (Figure I.A.). It is aimed at developing decision-making tools for risk identification, risk interdependency analysis, risk prioritisation, and long-term risk mitigation strategy recommendations. The application of the tools has focused on the UK power supply chain. The five new tools which are introduced and applied in this thesis are: (1) Proposed Expert Selection Model (ESM) and its application under hesitant fuzzy environment (i.e. -
Calculus/Print Version
Calculus/Print version. Wikibooks.org, March 24, 2011. Contents: 1. Table of Contents (1.1 Precalculus; 1.2 Limits; 1.3 Differentiation; 1.4 Integration; 1.5 Parametric and Polar Equations; 1.6 Sequences and Series; 1.7 Multivariable and Differential Calculus; 1.8 Extensions; 1.9 Appendix; 1.10 Exercise Solutions; 1.11 References; 1.12 Acknowledgements and Further Reading). 2. Introduction (2.1 What is calculus?; 2.2 Why learn calculus?; 2.3 What is involved in learning calculus?; 2.4 What you should know before using this text; 2.5 Scope). 3. Precalculus. 4. Algebra (4.1 Rules of arithmetic and algebra; 4.2 Interval notation; 4.3 Exponents and radicals; 4.4 Factoring and roots; 4.5 Simplifying rational expressions; 4.6 Formulas of multiplication of polynomials). -
Calculus for a New Century: A Pump, Not a Filter
DOCUMENT RESUME ED 300 252 SE 050 088 AUTHOR Steen, Lynn Arthur, Ed. TITLE Calculus for a New Century: A Pump, Not a Filter. Papers Presented at a Colloquium (Washington, D.C., October 28-29, 1987). MAA Notes Number 8. INSTITUTION Mathematical Association of America, Washington, D.C. REPORT NO ISBN-0-88385-058-3 PUB DATE 88 NOTE 267p. AVAILABLE FROM Mathematical Association of America, 1529 18th Street, NW, Washington, DC 20007 ($12.50). PUB TYPE Viewpoints (120) -- Collected Works - Conference Proceedings (021) EDRS PRICE MF01 Plus Postage. PC Not Available from EDRS. DESCRIPTORS *Calculus; Change Strategies; College Curriculum; *College Mathematics; Curriculum Development; *Educational Change; Educational Trends; Higher Education; Mathematics Curriculum; *Mathematics Instruction; Mathematics Tests ABSTRACT This document, intended as a resource for calculus reform, contains 75 separate contributions, comprising a very diverse set of opinions about the shape of calculus for a new century. The authors agree on the forces that are reshaping calculus, but disagree on how to respond to these forces. They agree that the current course is not satisfactory, yet disagree about new content emphases. They agree that the neglect of teaching must be repaired, but do not agree on the most promising avenues for improvement. The document contains: (1) a record of presentations prepared for a colloquium; (2) a collage of reactions to the colloquium by a variety of individuals representing diverse calculus constituencies; (3) summaries of 16 discussion groups that elaborate on particular themes of importance to reform efforts; (4) a series of background papers providing context for the calculus colloquium; (5) a selection of final examinations from Calculus I, II, and III from universities, colleges, and two-year colleges around the country; (6) a collection of reprints of documents related to calculus; and (7) a list of colloquium participants.