FPGA Based Quadruple Precision Floating Point Arithmetic for Scientific Computations


International Journal of Advanced Computer Research (ISSN (print): 2249-7277; ISSN (online): 2277-7970), Volume-2, Number-3, Issue-5, September-2012

1 Mamidi Nagaraju, 2 Geedimatla Shekar
1 Department of ECE, VLSI Lab, National Institute of Technology (NIT), Calicut, Kerala, India
2 Asst. Professor, Department of ECE, Amrita Vishwa Vidyapeetham University, Amritapuri, Kerala, India

Abstract

In this project we explore the capability and flexibility of FPGA solutions for accelerating scientific computing applications that require very high precision arithmetic, based on the IEEE 754 standard 128-bit floating-point number representation. Field Programmable Gate Arrays (FPGAs) are increasingly being used to design high-end, computationally intense processors capable of handling floating-point mathematical operations. Quadruple precision floating-point arithmetic is important in computational fluid dynamics and physical modelling, which require accurate numerical computations. However, modern computers perform binary arithmetic, which has flaws in representing and rounding numbers. As the demand for quadruple precision floating-point arithmetic is predicted to grow, the IEEE 754 Standard for Floating-Point Arithmetic includes specifications for quadruple precision floating-point arithmetic. We implement a quadruple precision floating-point arithmetic unit for all the common operations, i.e. addition, subtraction, multiplication and division. While previous work has considered circuits for low-precision floating-point formats, we consider the implementation of 128-bit quadruple precision circuits. The project provides the arithmetic operations, simulation results and hardware design, with input via a PS/2 keyboard interface and results displayed on an LCD, using a Xilinx Virtex-5 (XC5VLX110TFF1136) FPGA device.

Keywords

FPGA, Floating-point, Quadruple precision, Arithmetic.

1. Introduction

Integer arithmetic is common throughout computation. Integers govern loop behavior, determine array sizes, measure pixel coordinates on the screen, determine the exact colors displayed on a computer display, and perform many other tasks. However, integers cannot easily represent fractional amounts, and fractions are essential to many computations. Floating-point arithmetic lies at the heart of computer graphics cards, physics engines, simulations and many models of the natural world. Floating-point computations suffer from errors due to rounding and quantization. Fast computers let programmers write numerically intensive programs, but computed results can be far from the true results due to the accumulation of errors in arithmetic operations.

Implementing floating-point arithmetic in hardware can solve two separate problems. First, it greatly speeds up floating-point calculations. Emulating a floating-point instruction in software will require, at a generous estimate, at least twenty integer instructions, many of them conditional operations, and even if the instructions are executed on an architecture which goes to great lengths to speed up execution, this will be slow. In contrast, even the simplest implementation of basic floating-point arithmetic in hardware will require perhaps ten clock cycles per instruction, a small fraction of the time a software implementation would require. Second, implementing the logic once in hardware allows the considerable cost of implementation to be amortized across all users, including users who may not be able to use a software floating-point implementation (say, because the relevant functions are not publicly available in shared libraries).
Quadruple precision arithmetic increases the accuracy and reliability of numerical computations by providing floating-point numbers that have more than twice the precision of double precision numbers. This is important in applications, such as computational fluid dynamics and physical modelling, which require accurate numerical computations. Most modern processors have hardware support for double precision (64-bit) or double-extended precision (typically 80-bit) floating-point multiplication, but not for quadruple precision (128-bit) floating-point arithmetic operations. It is also true, however, that double precision and double-extended precision are not enough for many scientific applications, including climate modelling, computational physics, and computational geometry. The use of quadruple precision arithmetic can greatly improve the numerical stability and reproducibility of many of these applications. Due to the advantages quadruple precision arithmetic can provide in scientific computing applications, specifications for quadruple precision numbers are being added to the revised version of the IEEE 754 Standard for Floating-Point Arithmetic.

Recently, the use of FPGA-based accelerators has become a promising approach for speeding up scientific applications. The computational capability of FPGAs is increasing rapidly. A top-level FPGA chip from the Xilinx Virtex-5 series contains 51,840 logic slices, 10,368 Kbits of storage and 192 DSP processing blocks (25 × 18 MAC).

Initial plans for the floating-point unit were ambitious: full support for quadruple-precision-format IEEE 754 floating-point addition, subtraction, multiplication, and division, along with full support for exceptions. Floating-point unit functionality can be loosely divided into the following areas: the Adder module, the Subtracter module, the Multiplier module and the Division module. Together these modules compose the internals of the FPU module, which encapsulates all behavior in one location and provides one central interface for floating-point calculations. A keyboard is interfaced with the floating point unit to feed the 128-bit input operands, making use of the PS/2 interface available on the FPGA board. The 128-bit output is displayed on the 16×2 LCD present on the FPGA board.

Field Programmable Gate Array (FPGA) is a silicon chip with unconnected logic blocks; these logic blocks can be defined and redefined by the user at any time. FPGAs are increasingly being used for applications which require high numerical stability and accuracy. With less time to market and lower cost, FPGAs are becoming a more attractive solution compared to Application Specific Integrated Circuits (ASICs). FPGAs are mostly used in low-volume applications that cannot afford silicon fabrication, or in designs which require frequent changes or upgrades. In FPGAs, the bottleneck for designing efficient floating-point units has mostly been area. With advancements in FPGA architecture, however, there has been a significant increase in FPGA densities. Devices with millions of gates and frequencies reaching up to 500 MHz are becoming more suitable for applications reliant on floating-point arithmetic.

2. Floating Point Numerical Representation

The IEEE 754 standard specifies that a quadruple precision number consists of a 1-bit sign, a 15-bit biased exponent, and a 112-bit significand field. The quadruple precision number format is shown in Fig. 2.1.

Fig. 2.1: Quadruple Precision Format

E is an unsigned biased number and the true exponent e is obtained as e = E − Ebias with Ebias = 16383. For quadruple precision numbers the value of E ranges from 0 to 32767. The number zero is represented with E = 0 and f = 0. An exponent E = 32767 and f = 0 represents infinity. The fraction f represents a number in the range [0, 1), and the significand S is given by S = 1.f, so S is in the range [1, 2). The actual value of a quadruple precision floating point number is the following:

Value = (−1)^(sign bit) × 2^(exponent − 16383) × 1.(mantissa)

The basic format is described in IEEE 754 as quadruple precision, using 128 bits.
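To make the field layout and the value formula concrete, here is a minimal Python sketch (ours, purely illustrative, and not part of the paper's hardware design; the helper name decode_quad and the example bit patterns are assumptions) that unpacks a 128-bit pattern into its sign, biased exponent and fraction fields and evaluates Value = (−1)^s × 2^(E − 16383) × 1.f for normal numbers:

from fractions import Fraction

EBIAS = 16383

def decode_quad(bits: int) -> Fraction:
    """Decode a normal binary128 value given as a 128-bit integer pattern."""
    sign = bits >> 127                       # 1-bit sign
    E = (bits >> 112) & 0x7FFF               # 15-bit biased exponent
    f = bits & ((1 << 112) - 1)              # 112-bit fraction field
    assert 0 < E < 32767, "this sketch handles normal numbers only"
    significand = Fraction(f, 1 << 112) + 1           # S = 1.f, in [1, 2)
    value = significand * Fraction(2) ** (E - EBIAS)  # true exponent e = E - Ebias
    return -value if sign else value

# 1.0 is encoded with sign = 0, E = 16383 (so e = 0) and f = 0:
one = (0 << 127) | (16383 << 112) | 0
print(decode_quad(one))        # 1
# -1.5 has sign = 1, e = 0 and the top fraction bit set (f = 0.5):
minus_1_5 = (1 << 127) | (16383 << 112) | (1 << 111)
print(decode_quad(minus_1_5))  # -3/2

Using exact rational arithmetic (Fraction) rather than native floats keeps the decoded value exact, which is convenient when checking a hardware unit's outputs bit-for-bit.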
Floating-point arithmetic differs in a number of ways from standard integer arithmetic. Floating-point arithmetic is almost always inexact. Only floating-point numbers which are the sum of a limited sequence of powers of two may be exactly represented in the format specified by IEEE 754. This contrasts with integer arithmetic, where (for example) the sum or product of two numbers always equals their exact value, excluding the rare case of overflow. For example, in IEEE double-precision arithmetic 0.1 + 0.2 is not equal to 0.3, but rather is equal to 0.30000000000000004. This has many subtle ramifications, the most common being that equality comparisons should almost never be exact; they should instead be bounded by some epsilon. Another difference between floating-point and integer numbers is that floating-point numbers include the special values positive and negative infinity, positive and negative zero, and not-a-number (NaN). These values are produced in certain circumstances by calculations with particular arguments. For example, infinity is the result of dividing a large positive number by a small positive number. A third difference is that floating-point operations can sometimes be made safe against certain conditions which may be considered errors. In particular, floating-point exceptions provide a mechanism for detecting possibly-invalid operations such as the canonical division by zero, overflow to an infinity value, and underflow to a number too small to be represented.
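These behaviours are easy to observe directly. The following short snippet (ours, in Python using ordinary double-precision floats, since any IEEE 754 implementation exhibits the same effects) demonstrates the inexact sum, an epsilon-bounded comparison, and the special values:

import math

# Inexactness: 0.1 + 0.2 is not exactly 0.3 in binary floating point...
print(0.1 + 0.2 == 0.3)              # False
print(repr(0.1 + 0.2))               # 0.30000000000000004

# ...so equality tests should be bounded by some epsilon instead:
print(math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-12))  # True

# Special values: infinities, signed zero and NaN.
print(1e308 * 10)                    # inf  (overflow to infinity)
print(-1.0 / float('inf'))           # -0.0 (negative zero)
nan = float('inf') - float('inf')    # invalid operation yields NaN
print(nan == nan)                    # False: NaN compares unequal even to itself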
3. Hardware Implementation

The block diagram of hardware