
Precision & Performance: Floating Point and IEEE 754 Compliance for NVIDIA GPUs

Nathan Whitehead    Alex Fit-Florea

ABSTRACT

A number of issues related to floating point accuracy and compliance are a frequent source of confusion on both CPUs and GPUs. The purpose of this white paper is to discuss the most common issues related to NVIDIA GPUs and to supplement the documentation in the CUDA C Programming Guide [7].

Permission to make digital or hard copies of all or part of this work for any use is granted without fee provided that copies bear this notice and the full citation on the first page. Copyright 2011 NVIDIA.

1. INTRODUCTION

Since the widespread adoption in 1985 of the IEEE Standard for Binary Floating-Point Arithmetic (IEEE 754-1985 [1]) virtually all mainstream computing systems have implemented the standard, including NVIDIA with the CUDA architecture. IEEE 754 standardizes how arithmetic results should be approximated in floating point. Whenever working with inexact results, programming decisions can affect accuracy. It is important to consider many aspects of floating point behavior in order to achieve the highest performance with the precision required for any specific application. This is especially true in a heterogeneous computing environment where operations will be performed on different types of hardware.

Understanding some of the intricacies of floating point and the specifics of how NVIDIA hardware handles floating point is obviously important to CUDA programmers striving to implement correct numerical algorithms. In addition, users of libraries such as CUBLAS and CUFFT will also find it informative to learn how NVIDIA handles floating point under the hood.

We review some of the basic properties of floating point calculations in Section 2. We also discuss the fused multiply-add operator, which was added to the IEEE 754 standard in 2008 [2] and is built into the hardware of NVIDIA GPUs. In Section 3 we work through an example of computing the dot product of two short vectors to illustrate how different choices of implementation affect the accuracy of the final result. In Section 4 we describe NVIDIA hardware versions and NVCC compiler options that affect floating point calculations. In Section 5 we consider some issues regarding the comparison of CPU and GPU results. Finally, in Section 6 we conclude with concrete recommendations to programmers that deal with numeric issues relating to floating point on the GPU.

2. FLOATING POINT

2.1 Formats

Floating point encodings and functionality are defined in the IEEE 754 Standard [2], last revised in 2008. Goldberg [5] gives a good introduction to floating point and many of the issues that arise.

The standard mandates binary floating point data be encoded on three fields: a one bit sign field, followed by exponent bits encoding the exponent offset by a numeric bias specific to each format, and bits encoding the significand (or fraction):

    | sign | exponent | fraction |

In order to ensure consistent computations across platforms and to exchange floating point data, IEEE 754 defines basic and interchange formats. The 32 and 64 bit basic binary floating point formats correspond to the C data types float and double. Their corresponding representations have the following bit lengths:

    float   | 1 |  8 | 23 |
    double  | 1 | 11 | 52 |

For numerical data representing finite values, the sign is either negative or positive, the exponent field encodes the exponent in base 2, and the fraction field encodes the significand without the most significant non-zero bit. For example the value −192 equals (−1)^1 × 2^7 × 1.5, and can be represented as having a negative sign, an exponent of 7, and a fractional part .5. The exponents are biased by 127 and 1023, respectively, to allow exponents to extend from negative to positive. Hence the exponent 7 is represented by bit strings with values 134 for float and 1030 for double. The integral part of 1. is implicit in the fraction:

    float   | 1 | 10000110    | 10000000000000000000000 |
    double  | 1 | 10000000110 | 1000000000000000000000000000000000000000000000000000 |

Also, encodings to represent infinity and not-a-number (NaN) data are reserved. The IEEE 754 Standard [2] describes floating point encodings in full.

Given that the fraction field uses a limited number of bits, not all real numbers can be represented exactly. For example the mathematical value of the fraction 2/3 represented in binary is 0.10101010... which has an infinite number of bits after the binary point. The value 2/3 must be rounded first in order to be represented as a floating point number with limited precision. The rules for rounding and the rounding modes are specified in IEEE 754. The most frequently used is the round-to-nearest-or-even mode (abbreviated as round-to-nearest). The value 2/3 rounded in this mode is represented in binary as:

    float   | 0 | 01111110    | 01010101010101010101011 |
    double  | 0 | 01111111110 | 0101010101010101010101010101010101010101010101010101 |

The sign is positive and the stored exponent value represents an exponent of −1.
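As a concrete illustration of this layout, the short host-side C program below (a minimal sketch written for this discussion, not code from the paper) unpacks the three fields of a 32-bit float and prints them for −192.0f; the variable names and output format are our own choices.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Unpack the sign, exponent, and fraction fields of a 32-bit float. */
    int main(void)
    {
        float x = -192.0f;
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);            /* reinterpret the float's bit pattern */

        uint32_t sign     = bits >> 31;            /* 1 sign bit                       */
        uint32_t exponent = (bits >> 23) & 0xFFu;  /* 8 exponent bits, biased by 127   */
        uint32_t fraction = bits & 0x7FFFFFu;      /* 23 fraction bits, implicit "1."  */

        printf("sign = %u, exponent field = %u (unbiased %d), fraction = 0x%06X\n",
               (unsigned)sign, (unsigned)exponent, (int)exponent - 127, (unsigned)fraction);
        /* For -192.0f this prints: sign = 1, exponent field = 134 (unbiased 7),
           fraction = 0x400000, i.e. binary 100...0, giving significand 1.5.  */
        return 0;
    }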
2.2 Operations and Accuracy

The IEEE 754 standard requires support for a handful of operations. These include the arithmetic operations add, subtract, multiply, divide, square root, fused-multiply-add, remainder, conversion operations, scaling, sign operations, and comparisons. The results of these operations are guaranteed to be the same for all implementations of the standard, for a given format and rounding mode.

The rules and properties of mathematical arithmetic do not hold directly for floating point arithmetic because of floating point's limited precision. For example, the table below shows single precision values A, B, and C, and the mathematically exact value of their sum computed using different associativity:

    A = 2^1 × 1.00000000000000000000001
    B = 2^0 × 1.00000000000000000000001
    C = 2^3 × 1.00000000000000000000001
    (A + B) + C = 2^3 × 1.01100000000000000000001011
    A + (B + C) = 2^3 × 1.01100000000000000000001011

Mathematically, (A + B) + C does equal A + (B + C).

Let rn(x) denote one rounding step on x. Performing these same computations in single precision floating point arithmetic in round-to-nearest mode according to IEEE 754, we obtain:

    A + B             = 2^1 × 1.100000000000000000000011
    rn(A + B)         = 2^1 × 1.10000000000000000000010
    B + C             = 2^3 × 1.00100000000000000000001001
    rn(B + C)         = 2^3 × 1.00100000000000000000001
    A + B + C         = 2^3 × 1.01100000000000000000001011
    rn(rn(A + B) + C) = 2^3 × 1.01100000000000000000010
    rn(A + rn(B + C)) = 2^3 × 1.01100000000000000000001

For reference, the exact mathematical results are computed as well in the table above. Not only are the results computed according to IEEE 754 different from the exact mathematical results, but also the results corresponding to the sum rn(rn(A + B) + C) and the sum rn(A + rn(B + C)) are different from each other. In this case, rn(A + rn(B + C)) is closer to the correct mathematical result than rn(rn(A + B) + C).

Here, the order in which operations are executed affects the accuracy of the result. The results are independent of the host system. These same results would be obtained using any microprocessor, CPU or GPU, which supports single precision floating point.
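The effect can be reproduced on any IEEE 754 machine. The following minimal C sketch (written for this discussion, not code from the paper) evaluates both orderings with the same values A, B, and C, written as C99 hexadecimal float literals; it assumes the compiler evaluates float expressions in single precision (FLT_EVAL_METHOD == 0), which is typical on x86-64 and on GPUs.

    #include <stdio.h>

    int main(void)
    {
        /* The single precision values from the table above: */
        float a = 0x1.000002p+1f;   /* A = 2^1 * 1.00000000000000000000001 */
        float b = 0x1.000002p+0f;   /* B = 2^0 * 1.00000000000000000000001 */
        float c = 0x1.000002p+3f;   /* C = 2^3 * 1.00000000000000000000001 */

        float left  = (a + b) + c;  /* rn(rn(A + B) + C) */
        float right = a + (b + c);  /* rn(A + rn(B + C)) */

        printf("(A + B) + C = %a\n", left);
        printf("A + (B + C) = %a\n", right);
        printf("equal: %s\n", left == right ? "yes" : "no");
        /* Prints "no": the two sums differ in the last bit of the significand. */
        return 0;
    }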
2.3 The Fused Multiply-Add (FMA)

In 2008 the IEEE 754 standard was revised to include the fused multiply-add operation (FMA). The FMA operation computes rn(X × Y + Z) with only one rounding step. Without the FMA operation the result would have to be computed as rn(rn(X × Y) + Z) with two rounding steps, one for the multiply and one for the add. Because the FMA uses only a single rounding step, the result is computed more accurately.

Let's consider an example to illustrate how the FMA operation works, using decimal arithmetic first for clarity. Let's compute x^2 − 1 in finite precision with four digits of precision after the decimal point, or a total of five digits of precision including the leading digit before the decimal point.

For x = 1.0008, the correct mathematical result is x^2 − 1 = 1.60064 × 10^−4. The closest number using only four digits after the decimal point is 1.6006 × 10^−4. In this case rn(x^2 − 1) = 1.6006 × 10^−4, which corresponds to the fused multiply-add operation rn(x × x + (−1)). The alternative is to compute separate multiply and add steps. For the multiply, x^2 = 1.00160064, so rn(x^2) = 1.0016. The final result is rn(rn(x^2) − 1) = 1.6000 × 10^−4.

Rounding the multiply and add separately yields a result that is wrong by 0.00064 × 10^−4. The corresponding FMA computation is wrong by only 0.00004 × 10^−4, and its result is closest to the correct mathematical answer. The results are summarized below:

    x = 1.0008
    x^2 = 1.00160064
    x^2 − 1 = 1.60064 × 10^−4            true value
    rn(x^2 − 1) = 1.6006 × 10^−4         fused multiply-add
    rn(x^2) = 1.0016
    rn(rn(x^2) − 1) = 1.6000 × 10^−4     multiply, then add

Below is another example, using binary single precision values:

    A = 2^0 × 1.00000000000000000000001
    B = −2^0 × 1.00000000000000000000010
    rn(A × A + B) = 2^−46 × 1.00000000000000000000000
    rn(rn(A × A) + B) = 0

In this particular case, computing rn(rn(A × A) + B) as an IEEE 754 multiply followed by an IEEE 754 add loses all bits of precision, and the computed result is 0. The alternative of computing the FMA rn(A × A + B) provides a result equal to the mathematical value. In general, the fused-multiply-add operation generates more accurate results than computing one multiply followed by one add. The choice of whether or not to use the fused operation depends on whether the platform provides the operation and also on how the code is compiled.

Figure 1 shows CUDA C code and output correspond-
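As a rough host-side analogue of that figure (a minimal sketch of our own, not the paper's Figure 1 code), the C program below contrasts the fused and unfused computations for the binary values A and B above. It uses the C99 fmaf function from math.h; it also assumes the compiler is not allowed to contract a * a + b into an FMA on its own, so compile with contraction disabled, e.g. cc -std=c99 -ffp-contract=off fma_demo.c -lm with GCC or Clang.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* The binary single precision values A and B from the example above: */
        float a =  0x1.000002p+0f;   /* A =  2^0 * 1.00000000000000000000001 */
        float b = -0x1.000004p+0f;   /* B = -2^0 * 1.00000000000000000000010 */

        float fused   = fmaf(a, a, b);  /* rn(A*A + B): a single rounding step        */
        float unfused = a * a + b;      /* rn(rn(A*A) + B): two rounding steps,
                                           provided the compiler does not contract
                                           this expression into an FMA by itself      */

        printf("fused   (FMA)          = %a\n", fused);    /* 0x1p-46, the exact value */
        printf("unfused (mul then add) = %a\n", unfused);  /* 0, all precision lost    */
        return 0;
    }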