
LifeJacket: Verifying precise floating-point optimizations in LLVM

Andres Nötzli    Fraser Brown
Stanford University
{noetzli,mlfbrown}@stanford.edu

Abstract

Optimizing floating-point arithmetic is vital because it is ubiquitous, costly, and used in compute-heavy workloads. Implementing precise optimizations correctly, however, is difficult, since developers must account for all the esoteric properties of floating-point arithmetic to ensure that their transformations do not alter the output of a program. Manual reasoning is error prone and stifles incorporation of new optimizations.

We present an approach to automate reasoning about floating-point optimizations using satisfiability modulo theories (SMT) solvers. We implement the approach in LifeJacket, a system for automatically verifying precise floating-point optimizations for the LLVM assembly language. We have used LifeJacket to verify 43 LLVM optimizations and to discover eight incorrect ones, including three previously unreported problems. LifeJacket is an open source extension of the Alive system for optimization verification.

[arXiv:1603.09290v1 [cs.PL] 30 Mar 2016]

Name: PR26746
%a = fsub -0.0, %x
%r = fsub +0.0, %a
  =>
%r = %x

Figure 1. Incorrect transformation involving floating-point instructions in LLVM 3.7.1.

1. Introduction

In this paper, we present LifeJacket, a system for automatically verifying floating-point optimizations. Floating-point arithmetic is ubiquitous—modern hardware architectures natively support it and programming languages treat it as a canonical representation of real numbers—but writing correct floating-point programs is difficult. Optimizing these programs is even more difficult. Unfortunately, despite hardware support, floating-point computations are still expensive, so avoiding optimization is undesirable.

Reasoning about floating-point optimizations and programs is difficult because of floating-point arithmetic's unintuitive semantics. Floating-point arithmetic is inherently imprecise and lossy, and programmers must account for rounding, signed zeroes, special values, and non-associativity [7]. Before standardization, a wide range of incompatible floating-point hardware existed, with varying support for range, precision, and rounding. These implementations were not only incompatible but also had undesirable properties, such as numbers that were not equal to zero for comparisons but were treated as zeros for multiplication and division [13]. The IEEE 754-1985 standard and its stricter IEEE 754-2008 successor were carefully designed to avoid many of these pitfalls, and were designed for (contrary to popular opinion, perhaps) non-expert users. Despite these advances, program correctness and reproducibility still rest on a fragile interplay between developers, programming languages, compilers, and hardware implementations.

Compiler optimizations that alter the semantics of programs, even in subtle ways, can confuse users, make problems hard to debug, and cause cascading issues. IEEE 754-2008 acknowledges this by recommending that language standards and implementations provide a means to generate reproducible results for programs, independent of optimizations. In practice, many transformations that are valid for real numbers change the precision of floating-point expressions. As a result, compilers optimizing floating-point programs face the dilemma of choosing between speed and reproducibility. They often address this dilemma by dividing floating-point optimizations into two groups, precise and imprecise optimizations, where imprecise optimizations are optional (e.g. the -ffast-math flag in clang). While precise optimizations always produce the same result, imprecise ones produce reasonable results on common inputs (e.g. not for special values) but are arbitrarily bad in the general case. To implement precise optimizations, developers have to reason about all edge cases of floating-point arithmetic, making it challenging to avoid bugs.

To illustrate the challenge of developing floating-point optimizations, Figure 1 shows an example of an invalid transformation implemented in LLVM 3.7.1. We discuss the specification language in more detail in Section 3.2, but, at a high level, the transformation simplifies +0.0 − (−0.0 − x) to x, an optimization that is correct in the realm of real numbers. Because floating-point numbers distinguish between negative and positive zero, however, the optimization is not valid if x = −0.0: the original code returns +0.0 while the optimized code returns −0.0. While the zero's sign may be insignificant for many applications, the unexpected sign change may cause a ripple effect. For example, the reciprocal of zero is defined as 1/+0.0 = +∞ and 1/−0.0 = −∞.

Since reasoning manually about floating-point operations and optimizations is difficult, we argue that automated reasoning can help ensure correct optimizations. The goal of LifeJacket is to allow LLVM developers to automatically verify precise floating-point optimizations. Our work focuses on precise optimizations because they are both more amenable to verification and arguably harder to get right. LifeJacket builds on Alive [9], a tool for verifying LLVM optimizations, extending it with floating-point support.

Our contributions are as follows:

• We describe the background for verifying precise floating-point optimizations in LLVM and propose an approach using SMT solvers.

• We implemented the approach in LifeJacket, an open source fork of Alive that adds support for floating-point types, floating-point instructions, floating-point predicates, and certain fast-math flags.

• We validated the approach by verifying 43 optimizations. LifeJacket finds 8 incorrect optimizations, including three previously unreported problems in LLVM 3.7.1.

In addition to the core contributions, our work also led to the discovery of two issues related to floating-point support in Z3 [6], the SMT solver used by LifeJacket.

2. Related Work

Alive is a system that verifies LLVM peephole optimizations. LifeJacket is a fork of this project that extends it with support for floating-point arithmetic. We are not the only ones interested in verifying floating-point optimizations; close to the submission deadline, we found that one of the Alive authors had independently begun a reimplementation of Alive that seems to include support for floating-point arithmetic.

Our work intersects with the areas of compiler correctness, optimization correctness, and analysing floating-point expressions. Research on compiler correctness has addressed floating-point arithmetic and floating-point optimizations. CompCert, a formally-verified compiler, supports IEEE 754-2008 floating-point types and implements two floating-point optimizations [3]. In CompCert, developers use Coq to prove optimizations correct, while LifeJacket proves optimization correctness automatically.

Regarding optimization correctness, researchers have explored both the consequences of existing optimizations and techniques for generating new optimizations. Recent work has discussed consequences of unexpected optimizations [14]. In terms of new optimizations, STOKE [12] is a stochastic optimizer that supports floating-point arithmetic and verifies instances of floating-point optimizations with random testing. Souper [1] discovers new LLVM

Alive already verifies some InstCombine optimizations, but it does not support optimizations involving floating-point arithmetic. Instead of building LifeJacket from scratch, we extend Alive with the machinery to verify floating-point optimizations. To give the necessary context for discussing our implementation in Section 4, we describe LLVM's floating-point types and instructions and give a brief overview of Alive.

3.1 Floating-point arithmetic in LLVM

In the following, we discuss LLVM's semantics of floating-point types and instructions. The information is largely based on the LLVM Language Reference Manual for LLVM 3.7.1 [2] and the IEEE 754-2008 standard. For completeness, we note that the language reference does not explicitly state that LLVM floating-point arithmetic is based on IEEE 754. However, the language reference refers to the IEEE standard multiple times, and LLVM's floating-point software implementation, APFloat, is explicitly based on the standard.

Floating-point types. LLVM defines six different floating-point types with bit-widths ranging from 16 bit to 128 bit. Floating-point values are stored in the IEEE binary interchange format, which encodes them in three parts: the sign s, the exponent e, and the significand t. The value of a normal floating-point number is given by (−1)^s × (1 + 2^(1−p) × t) × 2^(e−bias), where bias = 2^(w−1) − 1, w is the number of bits in the exponent, and p is the precision in bits. The range of the exponents for normal floating-point numbers is [1, 2^w − 2]. Exponents outside of this range are used to encode special values: subnormal numbers, Not-a-Number values (NaNs), and infinities.

Floating-point zeros are signed, meaning that −0.0 and +0.0 are distinct. While most operations ignore the sign of a zero, the sign has an observable effect in some situations: a division by zero (generally) returns +∞ or −∞ depending on the zero's sign, for example. As a consequence, x = y does not imply 1/x = 1/y: if x = 0 and y = −0, then x = y is true, since floating-point comparison treats 0 = −0; on the other hand, 1/x = 1/y is false, since 1/x = 1/0 = +∞ ≠ −∞ = 1/−0 = 1/y.

Infinities (±∞) are used to represent an overflow or a division by zero. They are encoded with e = 2^w − 1 and t = 0.