Program-At-A-Glance (PDF, 16 pages, 1020 KB)
Recommended publications
- United States Patent Re. 33,629: Numeric Data Processor
United States Patent Re. 33,629; Palmer et al.; reissued Jul. 2, 1991. Title: Numeric Data Processor. Inventors: John F. Palmer, Cambridge, Mass.; Bruce W. Ravenel, Nederland, Colo.; Rafi Nave, Haifa, Israel. Assignee: Intel Corporation, Santa Clara, Calif. Reissue Appl. No. 461,538, filed Jun. 1, 1990; reissue of Patent No. 4,338,675, issued Jul. 6, 1982 (Appl. No. 120,995, filed Feb. 13, 1980). Int. Cl.: G06F 7/48; G06F 9/00; G06F 11/00. U.S. Cl.: 364/748; 364/737; 364/745; 364/258. Field of Search: 364/748, 745, 737, 736.

References cited include: IEEE Transactions on Computers, vol. C-22, pp. 577-586, 1973; D. M. Bulman, "Stack Computers: An Introduction," Computer, May 1977, pp. 18-28; D. H. Siewiorek, C. G. Bell, and A. Newell, Computer Structures: Principles and Examples, McGraw-Hill, 1977, Chapter 29, pp. 470-485; J. F. Palmer, "The Intel Standard for Floating-Point Arithmetic," IEEE COMPSAC 77 Proceedings, Nov. 8-11, 1977, pp. 107-112; J. T. Coonen, "Specifications for a Proposed Standard for Floating-Point Arithmetic," Oct. 13, 1978, Mem. #UCB/ERL M78/72, pp. 1-32; T. Pittman and R. G. Stewart, "Microprocessor Standards," AFIPS Conference Proceedings, vol. 47, 1978, pp. 935-938; W. Kahan, "7094-II System Support for Numerical Analysis," Dept. of Computer Science, Univ. of Toronto, Aug. 1966, pp. 1-51; and "A Unified Decimal Floating-Point Architecture for the Support of High-Level Languages," by Frederic N. ...
- Faster Math Functions, Soundly
Ian Briggs, University of Utah, USA; Pavel Panchekha, University of Utah, USA. Standard library implementations of functions like sin and exp optimize for accuracy, not speed, because they are intended for general-purpose use. But applications tolerate inaccuracy from cancellation, rounding error, and singularities (sometimes even very high error), and many applications could tolerate error in function implementations as well. This raises an intriguing possibility: speeding up numerical code by tuning standard function implementations. This paper thus introduces OpTuner, an automatic method for selecting the best implementation of mathematical functions at each use site. OpTuner assembles dozens of implementations for the standard mathematical functions from across the speed-accuracy spectrum. OpTuner then uses error Taylor series and integer linear programming to compute optimal assignments of function implementation to use site, and presents the user with a speed-accuracy Pareto curve they can use to speed up their code. In a case study on the POV-Ray ray tracer, OpTuner speeds up a critical computation, leading to a whole-program speedup of 9% with no change in the program output (whereas human efforts result in slower code and lower-quality output). On a broader study of 37 standard benchmarks, OpTuner matches 216 implementations to 89 use sites and demonstrates speed-ups of 107% for negligible decreases in accuracy and of up to 438% for error-tolerant applications. Additional key words and phrases: floating point, rounding error, performance, approximation theory, synthesis, optimization. 1 INTRODUCTION. Floating-point arithmetic is foundational for scientific, engineering, and mathematical software. This is because, while floating-point arithmetic is approximate, most applications tolerate minute errors [Piparo et al. ...]
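To make the speed-accuracy trade concrete, here is a minimal Python sketch (not OpTuner itself, and not its error-Taylor-series/ILP machinery): a few sin implementations of increasing cost and accuracy, and the cheapest one that meets a caller-supplied error bound on an interval is picked. The helpers sin_p3, sin_p7, and pick_impl are hypothetical names invented for this illustration.

```python
import math

# Truncated Taylor polynomials for sin: cheaper but less accurate.
def sin_p3(x): return x - x**3/6
def sin_p7(x): return x - x**3/6 + x**5/120 - x**7/5040

IMPLS = [sin_p3, sin_p7, math.sin]   # ordered from fastest to most accurate

def max_error(f, lo, hi, n=10_000):
    """Measured max absolute error versus math.sin on [lo, hi]."""
    xs = (lo + (hi - lo) * i / n for i in range(n + 1))
    return max(abs(f(x) - math.sin(x)) for x in xs)

def pick_impl(err_bound, lo, hi):
    """Cheapest implementation whose measured error meets the bound."""
    for f in IMPLS:
        if max_error(f, lo, hi) <= err_bound:
            return f
    return math.sin

f = pick_impl(1e-3, -0.5, 0.5)
print(f.__name__)   # sin_p3: the degree-3 polynomial suffices at this tolerance
```

OpTuner's contribution is doing this selection soundly and globally, with per-use-site error bounds derived analytically rather than by sampling as in this toy.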
- Implicit Particle Filters for Data Assimilation (Alexandre Chorin, Matthias Morzfeld, and Xuemin Tu)
Communications in Applied Mathematics and Computational Science, vol. 5, no. 2, 2010 (Mathematical Sciences Publishers). Implicit particle filters for data assimilation update the particles by first choosing probabilities and then looking for particle locations that assume them, guiding the particles one by one to the high-probability domain. We provide a detailed description of these filters, with illustrative examples, together with new, more general methods for solving the algebraic equations and with a new algorithm for parameter identification. 1. Introduction. There are many problems in science, for example in meteorology and economics, in which the state of a system must be identified from an uncertain equation supplemented by noisy data (see, for instance, [9; 22]). A natural model of this situation consists of an Ito stochastic differential equation (SDE):

dx = f(x, t) dt + g(x, t) dw,    (1)

where x = (x_1, x_2, ..., x_m) is an m-dimensional vector, f is an m-dimensional vector function, g(x, t) is an m-by-m matrix, and w is Brownian motion which encapsulates all the uncertainty in the model. In the present paper we assume for simplicity that the matrix g(x, t) is diagonal. The initial state x(0) is given and may be random as well. The SDE is supplemented by measurements b^n at times t^n, n = 0, 1, ...
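The model equation (1) is easy to simulate; below is a minimal Euler-Maruyama sketch of the SDE model above (an illustration only, not the implicit particle filter), with a scalar state and hypothetical drift and diffusion functions chosen for the example.

```python
import numpy as np

def euler_maruyama(f, g, x0, t_end, dt, rng):
    """Simulate dx = f(x,t) dt + g(x,t) dw with the Euler-Maruyama scheme."""
    n_steps = int(t_end / dt)
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    for k in range(n_steps):
        t = k * dt
        dw = rng.normal(0.0, np.sqrt(dt))      # Brownian increment over dt
        xs[k + 1] = xs[k] + f(xs[k], t) * dt + g(xs[k], t) * dw
    return xs

rng = np.random.default_rng(0)
# Hypothetical Ornstein-Uhlenbeck-style drift with constant diffusion.
path = euler_maruyama(lambda x, t: -x, lambda x, t: 0.5,
                      x0=1.0, t_end=5.0, dt=1e-3, rng=rng)
print(path[-1])
```

A particle filter would run many such paths in parallel and reweight or reposition them using the noisy measurements b^n.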
- On the Second-Order Accuracy of Volume-of-Fluid Interface Reconstruction Algorithms: Convergence in the Max Norm (Elbridge Gerry Puckett)
Communications in Applied Mathematics and Computational Science, vol. 5, no. 1, 2010 (Mathematical Sciences Publishers). Given a two times differentiable curve in the plane, I prove that, using only the volume fractions associated with the curve, one can construct a piecewise linear approximation that is second-order in the max norm. I derive two parameters that depend only on the grid size and the curvature of the curve, respectively. When the maximum curvature in the 3 by 3 block of cells centered on a cell through which the curve passes is less than the first parameter, the approximation in that cell will be second-order. Conversely, if the grid size in this block is greater than the second parameter, the approximation in the center cell can be less than second-order. Thus, this parameter provides an a priori test for when the interface is under-resolved, so that when the interface reconstruction method is coupled to an adaptive mesh refinement algorithm, this parameter may be used to determine when to locally increase the resolution of the grid. 1. Introduction. In this article I study the interface reconstruction problem for a volume-of-fluid method in two space dimensions. Let Ω ⊂ R² denote a simply connected domain and let z(s) = (x(s), y(s)), where s is arc length, denote a curve in Ω.
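As a concrete taste of volume-of-fluid reconstruction, here is a small Python sketch of one classic way to recover a piecewise-linear interface from volume fractions: Youngs' finite-difference estimate of the interface normal in a 3 by 3 block of cells. This is a standard textbook method offered purely for illustration, not the algorithm or the error analysis of the paper above.

```python
import numpy as np

def youngs_normal(vof, h=1.0):
    """Interface normal in the center cell of a 3x3 block of volume
    fractions, via Youngs' finite-difference gradient. The normal points
    from the fluid (vof = 1) toward the empty region (vof = 0)."""
    dc_dx = ((vof[0, 2] + 2*vof[1, 2] + vof[2, 2])
           - (vof[0, 0] + 2*vof[1, 0] + vof[2, 0])) / (8.0 * h)
    dc_dy = ((vof[2, 0] + 2*vof[2, 1] + vof[2, 2])
           - (vof[0, 0] + 2*vof[0, 1] + vof[0, 2])) / (8.0 * h)
    n = -np.array([dc_dx, dc_dy])
    return n / np.linalg.norm(n)

# Vertical interface, fluid filling the left half of the block:
vof = np.array([[1.0, 0.5, 0.0],
                [1.0, 0.5, 0.0],
                [1.0, 0.5, 0.0]])
print(youngs_normal(vof))   # -> [1. 0.]
```

Given the normal, the linear interface in the center cell is then positioned so that the area it cuts off matches that cell's volume fraction; Puckett's paper asks when such piecewise-linear reconstructions can be second-order accurate.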
- Modern Computer Arithmetic (Richard P. Brent and Paul Zimmermann, Version 0.2.1)
Modern Computer Arithmetic. Richard P. Brent and Paul Zimmermann. Version 0.2.1. Copyright © 2003-2009 Richard P. Brent and Paul Zimmermann. This electronic version is distributed under the terms and conditions of the Creative Commons license "Attribution-Noncommercial-No Derivative Works 3.0". You are free to copy, distribute and transmit this book under the following conditions:
- Attribution. You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
- Noncommercial. You may not use this work for commercial purposes.
- No Derivative Works. You may not alter, transform, or build upon this work.
For any reuse or distribution, you must make clear to others the license terms of this work. The best way to do this is with a link to the web page below. Any of the above conditions can be waived if you get permission from the copyright holder. Nothing in this license impairs or restricts the author's moral rights. For more information about the license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/
Preface. This is a book about algorithms for performing arithmetic, and their implementation on modern computers. We are concerned with software more than hardware; we do not cover computer architecture or the design of computer hardware, since good books are already available on these topics. Instead we focus on algorithms for efficiently performing arithmetic operations such as addition, multiplication and division, and their connections to topics such as modular arithmetic, greatest common divisors, the Fast Fourier Transform (FFT), and the computation of special functions.
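To give the flavor of the book's subject matter (efficient multiple-precision multiplication is a typical topic), here is a minimal Python sketch of Karatsuba's classic divide-and-conquer multiplication for nonnegative integers. It is a standard algorithm, not code taken from the book, and the cutoff value is an arbitrary choice for the example.

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using three half-size products
    instead of four (Karatsuba's divide-and-conquer scheme)."""
    if x < 1 << 32 or y < 1 << 32:        # small operands: direct multiply
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)   # split x = xh * 2**n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)
    hi = karatsuba(xh, yh)
    lo = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hi - lo
    return (hi << (2 * n)) + (mid << n) + lo

a, b = 3**200, 7**150
assert karatsuba(a, b) == a * b
```

Replacing four half-size multiplications with three is what drops the cost from O(n²) to O(n^1.585), the kind of asymptotic saving the book studies systematically.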
- Unitary and Symmetric Structure in Deep Neural Networks
University of Kentucky, UKnowledge: Theses and Dissertations--Mathematics, 2020. Unitary and Symmetric Structure in Deep Neural Networks. Kehelwala Dewage Gayan Maduranga, University of Kentucky, [email protected]. Author ORCID identifier: https://orcid.org/0000-0002-6626-9024. Digital Object Identifier: https://doi.org/10.13023/etd.2020.380
Recommended citation: Maduranga, Kehelwala Dewage Gayan, "Unitary and Symmetric Structure in Deep Neural Networks" (2020). Theses and Dissertations--Mathematics. 77. https://uknowledge.uky.edu/math_etds/77
This doctoral dissertation is brought to you for free and open access by the Mathematics department at UKnowledge. It has been accepted for inclusion in Theses and Dissertations--Mathematics by an authorized administrator of UKnowledge. For more information, please contact [email protected].
Student agreement: I represent that my thesis or dissertation and abstract are my original work. Proper attribution has been given to all outside sources. I understand that I am solely responsible for obtaining any needed copyright permissions. I have obtained needed written permission statement(s) from the owner(s) of each third-party copyrighted matter to be included in my work, allowing electronic distribution (if such use is not permitted by the fair use doctrine), which will be submitted to UKnowledge as an Additional File. I hereby grant to The University of Kentucky and its agents the irrevocable, non-exclusive, and royalty-free license to archive and make accessible my work in whole or in part in all forms of media, now or hereafter known.
- Alan Mathison Turing and the Turing Award Winners
Alan Turing and the Turing Award Winners: A Short Journey Through the History of Computer Science. Luis Lamb, 22 June 2012. [Slide images: A. M. Turing, 1951; Turing sculpture by Stephen Kettle, 2007.]
Assumptions: I assume knowledge of computing as a science. I shall not talk about computing before Turing: Leibniz, Babbage, Boole, Gödel... I shall not detail theorems or algorithms. I shall apologize for omissions at the end of this presentation. Comprehensive information about Turing can be found at http://www.mathcomp.leeds.ac.uk/turing2012/ The full version of this talk is available upon request.
Alan Mathison Turing. Born 23 June 1912: 2 Warrington Crescent, Maida Vale, London W9.
Short biography:
- 1922: Attends Hazlehurst Preparatory School.
- 1926: Sherborne School, Dorset.
- 1931: King's College Cambridge, Maths (graduates in 1934).
- 1935: Elected to Fellowship of King's College Cambridge.
- 1936: Publishes "On Computable Numbers, with an Application to the Entscheidungsproblem," Journal of the London Math. Soc.
- 1938: PhD, Princeton (viva on 21 June): "Systems of Logic Based on Ordinals," supervised by Alonzo Church. Letter to Philip Hall: "I hope Hitler will not have invaded England before I come back."
- 1939: Joins Bletchley Park; designs the "Bombe."
- 1940: First Bombes are fully operational.
- 1941: Breaks the German Naval Enigma.
- 1942-44: Several contributions to the war effort on codebreaking, secure speech devices, and computing.
- 1945: Automatic Computing Engine (ACE) computer. ...
- Historical Perspective and Further Reading
3.10 Historical Perspective and Further Reading. Epigraph: "Gresham's Law ('Bad money drives out Good') for computers would say, 'The Fast drives out the Slow even if the Fast is wrong.'" (W. Kahan, 1992)
This section surveys the history of floating point going back to von Neumann, including the surprisingly controversial IEEE standards effort, plus the rationale for the 80-bit stack architecture for floating point in the IA-32. At first it may be hard to imagine a subject of less interest than the correctness of computer arithmetic or its accuracy, and harder still to understand why a subject so old and mathematical should be so controversial. Computer arithmetic is as old as computing itself, and some of the subject's earliest notions, like the economical reuse of registers during serial multiplication and division, still command respect today. Maurice Wilkes [1985] recalled a conversation about that notion during his visit to the United States in 1946, before the earliest stored-program computer had been built:
". . . a project under von Neumann was to be set up at the Institute of Advanced Studies in Princeton. Goldstine explained to me the principal features of the design, including the device whereby the digits of the multiplier were put into the tail of the accumulator and shifted out as the least significant part of the product was shifted in. I expressed some admiration at the way registers and shifting circuits were arranged . . . and Goldstine remarked that things of that nature came very easily to von Neumann."
There is no controversy here; it can hardly arise in the context of exact integer arithmetic, so long as there is general agreement on what integer the correct result should be.
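The register trick Goldstine described, keeping the multiplier in the tail of the accumulator and shifting it out as product bits shift in, is easy to mimic in software. Here is a small Python sketch of binary shift-and-add multiplication in that style; it illustrates the idea rather than reproducing the IAS machine's actual logic.

```python
def serial_multiply(a, b, nbits=8):
    """Shift-and-add multiplication in which the multiplier occupies the
    low half of a double-width accumulator and is shifted out as the
    product bits shift in. Requires 0 <= a, b < 2**nbits."""
    assert 0 <= a < (1 << nbits) and 0 <= b < (1 << nbits)
    acc = b                        # low half: multiplier; high half: zero
    for _ in range(nbits):
        if acc & 1:                # inspect the departing multiplier bit
            acc += a << nbits      # add the multiplicand into the high half
        acc >>= 1                  # product shifts in, multiplier shifts out
    return acc                     # full 2*nbits-bit product

assert serial_multiply(13, 11) == 143
```

The economy is that one double-width register serves as both product accumulator and multiplier holder, exactly the reuse of registers the passage credits to the earliest designs.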
- Floating Point
From a compiled collection of Wikipedia articles (contents: Floating point; Positional notation; plus article sources, image licenses, and license appendices).
Floating point. In computing, floating point describes a method of representing an approximation of a real number in a way that can support a wide range of values. The numbers are, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent. The base for the scaling is normally 2, 10 or 16. The typical number that can be represented exactly is of the form: significant digits × base^exponent. The idea of floating-point representation over intrinsically integer fixed-point numbers, which consist purely of significand, is that expanding it with the exponent component achieves greater range. [Figure: an early electromechanical programmable computer, the Z3, included floating-point arithmetic (replica on display at Deutsches Museum in Munich).] For instance, to represent large values, e.g. distances between galaxies, there is no need to keep all 39 decimal places down to femtometre resolution (employed in particle physics). Assuming that the best resolution is in light years, only the 9 most significant decimal digits matter, whereas the remaining 30 digits carry pure noise, and thus can be safely dropped. This represents a savings of 100 bits of computer data storage. Instead of these 100 bits, much fewer are used to represent the scale (the exponent), e.g. 8 bits or 2 decimal digits. [Figure: a diagram showing a representation of a decimal floating-point number using a mantissa and an exponent.] Given that one number can encode both astronomic and subatomic distances with the same nine digits of accuracy, but ...
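The significand/exponent split described above is directly observable in any language with IEEE 754 doubles; here is a tiny Python illustration using the standard math module (base 2 rather than the decimal base of the article's example).

```python
import math

x = 9.4607e15            # roughly one light year in metres
m, e = math.frexp(x)     # x == m * 2**e, with 0.5 <= m < 1
print(m, e)              # binary significand and exponent
assert math.ldexp(m, e) == x   # the decomposition is exact

# The same few significant digits serve at any scale:
print(f"{9.4607e15:.4e}")   # astronomic
print(f"{1.0e-15:.4e}")     # subatomic (a femtometre)
```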
- Mathematical genealogy chart, 1400s-2000s
[Chart residue: a poster-style mathematical genealogy with panels labeled by century, 2000s back through 1400s, listing advisor-student lineages; only the columns of names survive extraction, running from recent faculty (Hala Shehadeh, Alexis Stevens, Cassie Williams, ...) back through figures such as Alexandre Chorin, Bernd Sturmfels, and David Mumford.]
- Bibliography
[1] E. Abu-Shama and M. Bayoumi. A new cell for low power adders. In Int. Midwest Symposium on Circuits and Systems, pages 1014-1017, 1995.
[2] R. C. Agarwal, F. G. Gustavson, and M. S. Schmookler. Series approximation methods for divide and square root in the Power3 microprocessor. In Koren and Kornerup, editors, Proceedings of the 14th IEEE Symposium on Computer Arithmetic (Adelaide, Australia), pages 116-123. IEEE Computer Society Press, Los Alamitos, CA, April 1999.
[3] T. Ahrendt. Fast high-precision computation of complex square roots. In Proceedings of ISSAC'96 (Zurich, Switzerland), 1996.
[4] M. Ajtai. The shortest vector problem in L2 is NP-hard for randomized reductions (extended abstract). In Proceedings of the Annual ACM Symposium on Theory of Computing (STOC), pages 10-19, 1998.
[5] M. Ajtai, R. Kumar, and D. Sivakumar. A sieve algorithm for the shortest lattice vector problem. In Proceedings of the Annual ACM Symposium on Theory of Computing (STOC), pages 601-610, 2001.
[6] L. Aksoy, E. Costa, P. Flores, and J. Monteiro. Optimization of area in digital FIR filters using gate-level metrics. In Design Automation Conference, pages 420-423, 2007.
[7] E. Allen, D. Chase, V. Luchangco, J.-W. Maessen, and G. L. Steele Jr. Object-oriented units of measurement. In OOPSLA '04: Proceedings of the 19th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, pages 384-403, New York, NY, 2004. ACM Press.
[8] Altera Corporation. FFT/IFFT Block Floating Point Scaling, 2005. Application note 404-1.0.
- Leroy P. Steele Prize for Mathematical Exposition
American Mathematical Society, Leroy P. Steele Prize for Mathematical Exposition. The Leroy P. Steele Prizes were established in 1970 in honor of George David Birkhoff, William Fogg Osgood, and William Caspar Graustein and are endowed under the terms of a bequest from Leroy P. Steele. Prizes are awarded in up to three categories. The following citation describes the award for Mathematical Exposition.
Citation: John B. Garnett. An important development in harmonic analysis was the discovery, by C. Fefferman and E. Stein in the early seventies, that the space of functions of bounded mean oscillation (BMO) can be realized as the limit of the Hardy spaces H^p as p tends to infinity. A crucial link in their proof is the use of "Carleson measure," a quadratic norm condition introduced by Carleson in his famous proof of the "Corona" problem in complex analysis. In his book Bounded Analytic Functions (Pure and Applied Mathematics, 96, Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York-London, 1981, xvi+467 pp.), Garnett brings together these far-reaching ideas by adopting the techniques of singular integrals of the Calderón-Zygmund school and combining them with techniques in complex analysis. The book, which covers a wide range of beautiful topics in analysis, is extremely well organized and well written, with elegant, detailed proofs. The book has educated a whole generation of mathematicians with backgrounds in complex analysis and function algebras. It has had a great impact on the early careers of many leading analysts and has been widely adopted as a textbook for graduate courses and learning seminars in both the U.S. ...
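For context, the standard definition of the BMO seminorm mentioned in the citation (a textbook fact, not part of the citation itself) is

$$\|f\|_{\mathrm{BMO}} \;=\; \sup_{Q} \frac{1}{|Q|} \int_{Q} \bigl|f(x) - f_Q\bigr|\, dx,
\qquad f_Q = \frac{1}{|Q|} \int_{Q} f(x)\, dx,$$

where the supremum is taken over all cubes Q with sides parallel to the axes. Fefferman's duality theorem identifies BMO with the dual of the real Hardy space H^1, which is the sense in which BMO arises as the limit of the H^p spaces as p tends to infinity.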