International Journal of Advanced Computer Research (ISSN (print): 2249-7277 ISSN (online): 2277-7970) Volume-2 Number-3 Issue-5 September-2012

FPGA Based Quadruple Precision Floating Point Arithmetic for Scientific Computations

1Mamidi Nagaraju, 2Geedimatla Shekar
1Department of ECE, VLSI Lab, National Institute of Technology (NIT), Calicut, Kerala, India
2Asst. Professor, Department of ECE, Amrita Vishwa Vidyapeetham University, Amritapuri, Kerala, India

Abstract

In this project we explore the capability and flexibility of FPGA solutions for accelerating scientific computing applications which require very high precision arithmetic, based on the IEEE 754 standard 128-bit floating-point number representation. Field Programmable Gate Arrays (FPGAs) are increasingly being used to design high-end, computationally intense microprocessors capable of handling floating-point mathematical operations. Quadruple precision floating-point arithmetic is important in computational fluid dynamics and physical modelling, which require accurate numerical computations. However, modern computers perform binary arithmetic, which has flaws in representing and rounding numbers. As the demand for quadruple precision floating-point arithmetic is predicted to grow, the IEEE 754 Standard for Floating-Point Arithmetic includes specifications for quadruple precision floating-point arithmetic. We implement a quadruple precision floating-point arithmetic unit for all the common operations, i.e. addition, subtraction, multiplication and division. While previous work has considered circuits for low-precision floating-point formats, we consider the implementation of 128-bit quadruple precision circuits. The project provides the arithmetic operations, simulation results, and hardware design, with input via a PS/2 keyboard interface and results displayed on an LCD, using a Xilinx Virtex-5 (XC5VLX110TFF1136) FPGA device.

Keywords

FPGA, Floating-point, Quadruple precision, Arithmetic.

1. Introduction

Integer arithmetic is common throughout computation. Integers govern loop behavior, determine array sizes, measure pixel coordinates on the screen, determine the exact colors displayed on a computer display, and perform many other tasks. However, integers cannot easily represent fractional amounts, and fractions are essential to many computations. Floating-point arithmetic lies at the heart of computer graphics cards, physics engines, simulations and many models of the natural world. Floating-point computations suffer from errors due to rounding and quantization. Fast computers let programmers write numerically intensive programs, but computed results can be far from the true results due to the accumulation of errors in arithmetic operations.

Implementing floating-point arithmetic in hardware can solve two separate problems. First, it greatly speeds up floating-point calculations. Implementing a floating-point instruction in software will require, at a generous estimate, at least twenty integer instructions, many of them conditional operations, and even if the instructions are executed on an architecture which goes to great lengths to speed up execution, this will be slow. In contrast, even the simplest implementation of basic floating-point arithmetic in hardware will require perhaps ten clock cycles per instruction, a small fraction of the time a software implementation would require. Second, implementing the logic once in hardware allows the considerable cost of implementation to be amortized across all users, including users who may not be able to use a software floating-point implementation (say, because the relevant functions are not publicly available in shared libraries).

Quadruple precision arithmetic increases the accuracy and reliability of numerical computations by providing floating-point numbers that have more than twice the precision of double precision numbers. This is important in applications, such as computational fluid dynamics and physical modelling, which require accurate numerical computations.
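The error accumulation described above can be seen directly. The following is an illustrative sketch of our own in double precision (not part of the paper's hardware design), showing the effect that higher precision mitigates:

```python
# 0.1 has no finite binary representation, so every addition rounds.
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)          # False: ten tiny rounding errors accumulate
print(abs(total - 1.0))      # small but nonzero residual error
```

In quadruple precision the same residual would still be nonzero, but roughly 10^18 times smaller, which is why long chains of operations benefit from the wider format.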
Most modern processors have hardware support for double precision (64-bit) or double-extended precision (typically 80-bit) floating-point arithmetic, but not for quadruple precision (128-bit) floating-point operations. It is also true, however, that double precision and double-extended precision are not enough for many scientific applications, including climate modelling, computational physics, and computational geometry. The use of quadruple precision arithmetic can greatly improve the numerical stability and reproducibility of many of these applications. Due to these advantages that quadruple precision arithmetic can provide in scientific computing applications, specifications for quadruple precision numbers are being added to the revised version of the IEEE 754 Standard for Floating-Point Arithmetic.

Recently, the use of FPGA-based accelerators has become a promising approach for speeding up scientific applications. The computational capability of FPGAs is increasing rapidly. A top-level FPGA chip from the Xilinx Virtex-5 series contains 51840 logic slices, 10368 Kbits of storage and 192 DSP processing blocks (25 × 18 MAC).

Initial plans for the floating-point unit were ambitious: full support for quadruple precision IEEE 754 floating-point addition, subtraction, multiplication, and division, along with full support for exceptions. Floating-point unit functionality can be loosely divided into the following areas: the Adder module, the Subtracter module, the Multiplier module and the Division module. Together these modules compose the internals of the FPU module, which encapsulates all behavior in one location and provides one central interface for floating-point calculations. A keyboard is interfaced with the floating-point unit to feed the 128-bit input operands, making use of the PS/2 interface available on the FPGA board. The 128-bit output is displayed on the 16x2 LCD present on the FPGA board.

A Field Programmable Gate Array (FPGA) is a silicon chip with unconnected logic blocks; these logic blocks can be defined and redefined by the user at any time. FPGAs are increasingly being used for applications which require high numerical stability and accuracy. With less time to market and low cost, FPGAs are becoming a more attractive solution compared to Application Specific Integrated Circuits (ASICs). FPGAs are mostly used in low-volume applications that cannot afford silicon fabrication, or in designs which require frequent changes or upgrades. In FPGAs, the bottleneck for designing efficient floating-point units has mostly been area. With advancements in FPGA architecture, however, there has been a significant increase in FPGA densities. Devices with millions of gates and frequencies reaching up to 500 MHz are becoming more suitable for applications reliant on floating-point arithmetic.

2. Floating Point Numerical Representation

The IEEE 754 standard specifies that a quadruple precision number consists of a 1-bit sign, a 15-bit biased exponent, and a 112-bit significand. The quadruple precision number format is shown in fig. 2.1.

Fig. 2.1: Quadruple Precision Format

E is an unsigned biased number, and the true exponent e is obtained as e = E - Ebias with Ebias = 16383. For quadruple precision numbers the value of E ranges from 0 to 32767. The number zero is represented with E = 0 and f = 0. An exponent E = 32767 with f = 0 represents infinity. The fraction f represents a number in the range [0,1), and the significand S is given by S = 1.f, so S is in the range [1,2). The actual value of a quadruple precision floating-point number is:

Value = (-1)^sign x 2^(E - 16383) x 1.f
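To make the field layout and the value formula concrete, the decoding can be modelled exactly in software. This is an illustrative sketch using exact rational arithmetic; the helper `decode_quad` and its restriction to zero and normalized numbers are our assumptions, not part of the paper's hardware design:

```python
from fractions import Fraction

EBIAS = 16383          # quadruple precision exponent bias

def decode_quad(bits: int) -> Fraction:
    """Decode a 128-bit IEEE 754 quadruple precision bit pattern.

    Exact-rational sketch of Value = (-1)^sign * 2^(E - 16383) * 1.f,
    covering zero and normalized numbers only (subnormals, infinity
    and NaN are omitted for brevity).
    """
    sign = (bits >> 127) & 0x1            # 1-bit sign
    E = (bits >> 112) & 0x7FFF            # 15-bit biased exponent
    f = bits & ((1 << 112) - 1)           # 112-bit fraction
    if E == 0 and f == 0:
        return Fraction(0)                # +0 and -0 both decode to 0
    S = 1 + Fraction(f, 1 << 112)         # significand S = 1.f, in [1, 2)
    value = S * Fraction(2) ** (E - EBIAS)
    return -value if sign else value

# 1.0 is encoded as sign = 0, E = 16383 (true exponent 0), f = 0:
print(decode_quad(16383 << 112))          # → 1
```

The hardware units described later operate on exactly these three unpacked fields: the exponents are compared or added, the significands aligned and combined, and the result renormalized and repacked.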
The basic format, quadruple precision, is described by IEEE 754 using 128 bits. Floating-point arithmetic differs in a number of ways from standard integer arithmetic. Floating-point arithmetic is almost always inexact. Only floating-point numbers which are the sum of a limited sequence of powers of two may be exactly represented using the format specified by IEEE 754. This contrasts with integer arithmetic, where (for example) the sum or product of two numbers always equals its exact value, excluding the rare case of overflow. For example, in IEEE double precision arithmetic 0.1 + 0.2 is not equal to 0.3, but rather to 0.30000000000000004. This has many subtle ramifications, the most common being that equality comparisons should almost never be exact; they should instead be bounded by some epsilon.
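The inexactness and the epsilon-bounded comparison just described can be demonstrated in a few lines. This is a double precision illustration of our own, and the tolerance values are arbitrary choices:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)                              # False: both sides are rounded
print(a)                                     # 0.30000000000000004
# Bound the comparison by an epsilon instead of testing exact equality:
print(abs(a - 0.3) < 1e-12)                  # True
print(math.isclose(a, 0.3, rel_tol=1e-9))    # True
```

The appropriate epsilon depends on the magnitudes involved, which is why a relative tolerance is usually preferable to a fixed absolute one.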
Another difference between floating-point and integer numbers is that floating-point numbers include the special values positive and negative infinity, positive and negative zero, and not-a-number (NaN). These values are produced in certain circumstances by calculations with particular arguments. For example, infinity is the result of dividing a large positive number by a very small positive number. A third difference is that floating-point operations can sometimes be made safe against certain conditions which may be considered errors. In particular, floating-point exceptions provide a mechanism for detecting possibly-invalid operations such as the canonical division by zero, overflow to an infinity value, and underflow to a number

3. Hardware Implementation

The block diagram of hardware