Errors in Floating-Point Subtraction

Part V: Real Arithmetic

Parts and chapters of the textbook (Parts II-IV together cover the elementary operations):
  I. Number Representation: 1. Numbers and Arithmetic; 2. Representing Signed Numbers; 3. Redundant Number Systems; 4. Residue Number Systems
  II. Addition/Subtraction: 5. Basic Addition and Counting; 6. Carry-Lookahead Adders; 7. Variations in Fast Adders; 8. Multioperand Addition
  III. Multiplication: 9. Basic Multiplication Schemes; 10. High-Radix Multipliers; 11. Tree and Array Multipliers; 12. Variations in Multipliers
  IV. Division: 13. Basic Division Schemes; 14. High-Radix Dividers; 15. Variations in Dividers; 16. Division by Convergence
  V. Real Arithmetic: 17. Floating-Point Representations; 18. Floating-Point Operations; 19. Errors and Error Control; 20. Precise and Certifiable Arithmetic
  VI. Function Evaluation: 21. Square-Rooting Methods; 22. The CORDIC Algorithms; 23. Variations in Function Evaluation; 24. Arithmetic by Table Lookup
  VII. Implementation Topics: 25. High-Throughput Arithmetic; 26. Low-Power Arithmetic; 27. Fault-Tolerant Arithmetic; 28. Past, Present, and Future

About This Presentation

This presentation is intended to support the use of the textbook Computer Arithmetic: Algorithms and Hardware Designs (Oxford University Press, 2000, ISBN 0-19-512583-5). It is updated regularly by the author as part of his teaching of the graduate course ECE 252B, Computer Arithmetic, at the University of California, Santa Barbara. Instructors can use these slides freely in classroom teaching and for other educational purposes. Unauthorized uses are strictly prohibited. © Behrooz Parhami. First edition released Jan. 2000; revised Sep. 2001, Sep. 2003, Oct. 2005, and May 2007.

V Real Arithmetic

Review floating-point numbers, arithmetic, and errors:
  • How to combine wide range with high precision
  • Formats and arithmetic operations; the ANSI/IEEE standard
  • Causes and consequences of computation errors
  • When can we trust computation results?

Topics in this part:
  • Chapter 17: Floating-Point Representations
  • Chapter 18: Floating-Point Operations
  • Chapter 19: Errors and Error Control
  • Chapter 20: Precise and Certifiable Arithmetic

17 Floating-Point Representations

Chapter goals: study a representation method offering both wide range (e.g., astronomical distances) and high precision (e.g., atomic distances).

Chapter highlights:
  • Floating-point formats and related tradeoffs
  • The need for a floating-point standard
  • Finiteness of precision and range
  • Fixed-point and logarithmic representations as special cases at the two extremes

Topics in this chapter:
  17.1 Floating-Point Numbers
  17.2 The ANSI/IEEE Floating-Point Standard
  17.3 Basic Floating-Point Algorithms
  17.4 Conversions and Exceptions
  17.5 Rounding Schemes
  17.6 Logarithmic Number Systems

17.1 Floating-Point Numbers

No finite number system can represent all real numbers; various systems can be used for a subset of the reals:
  • Fixed-point, ±w.f: low precision and/or range
  • Rational, ±p/q: difficult arithmetic
  • Floating-point, ±s × b^e: most common scheme
  • Logarithmic, ±log_b x: limiting case of floating-point

Fixed-point numbers:
  x = (0000 0000 . 0000 1001)_two    Small number
  y = (1001 0000 . 0000 0000)_two    Large number

Floating-point numbers have the form x = ±s × b^e, that is, ±significand × base^exponent. Note that a floating-point number comes with two signs:
  • Number sign, usually represented by a separate bit
  • Exponent sign, usually embedded in the biased exponent
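The two-signs point can be seen directly in Python. The sketch below is my own illustration, not from the slides; it splits a value into number sign, significand, and exponent using math.frexp, which normalizes the significand to [0.5, 1) rather than the [1, 2) range used with a hidden 1. The two fixed-point examples above (9/256 and 144) share the significand pattern 1001 and differ only in exponent.

    # Minimal sketch: number sign vs. the (embedded) exponent sign.
    import math

    def decompose(x: float):
        """Split x into (sign, s, e) with x == sign * s * 2**e and 0.5 <= s < 1."""
        sign = -1 if math.copysign(1.0, x) < 0 else 1
        s, e = math.frexp(abs(x))      # frexp returns s in [0.5, 1), not [1, 2)
        return sign, s, e

    print(decompose(0.03515625))       # (1, 0.5625, -4)   small number, (.0000 1001)_two
    print(decompose(144.0))            # (1, 0.5625,  8)   large number, (1001 0000)_two
    print(decompose(-144.0))           # (-1, 0.5625, 8)   separate number sign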
Floating-Point Number Format and Distribution

Fig. 17.1: Typical floating-point number format. The fields are:
  • Sign: 0 for +, 1 for −
  • Exponent e: a signed integer, often represented as an unsigned value by adding a bias; with h exponent bits, the representable range is [−bias, 2^h − 1 − bias]
  • Significand s: represented as a fixed-point number, usually normalized by shifting so that its MSB becomes nonzero; in radix 2 the fixed leading 1 can be removed to save one bit, and this bit is known as the "hidden 1"

Fig. 17.2: Subranges and special values in floating-point number representations. Going from −∞ to +∞: an overflow region beyond max^−, the negative FLP numbers from max^− to min^−, underflow regions between ±min and ±0, the positive FLP numbers from min^+ to max^+, and a second overflow region beyond max^+. Representable values are denser near zero and sparser toward ±max; the special values −∞, ±0, and +∞ mark the extremes.

17.2 The ANSI/IEEE Floating-Point Standard

Fig. 17.3: The ANSI/IEEE standard floating-point number representation formats (IEEE 754, now being revised to yield IEEE 754R). Each format holds a sign, an exponent, and a significand:
  • Short (32-bit) format: 8 exponent bits (bias = 127, exponent range −126 to 127) and 23 bits for the fractional part (plus hidden 1 in the integer part)
  • Long (64-bit) format: 11 exponent bits (bias = 1023, exponent range −1022 to 1023) and 52 bits for the fractional part (plus hidden 1 in the integer part)
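As a concrete check of the short-format layout (1 sign bit, 8 exponent bits with bias 127, 23 fraction bits plus a hidden 1), the following sketch unpacks the three fields of a 32-bit encoding; it assumes Python's struct module round-trips through a genuine IEEE single, and the function name fields32 is mine.

    # Minimal sketch: extract the sign / biased-exponent / fraction fields
    # of the IEEE 754 short (32-bit) format.
    import struct

    def fields32(x: float):
        bits, = struct.unpack(">I", struct.pack(">f", x))   # reinterpret as a 32-bit word
        sign = bits >> 31                  # 1 bit
        biased_exp = (bits >> 23) & 0xFF   # 8 bits, bias = 127
        fraction = bits & 0x7FFFFF         # 23 bits; the leading 1 is hidden, not stored
        return sign, biased_exp, fraction

    # 1.5 = +1.1_two * 2^0: sign 0, biased exponent 0 + 127 = 127, fraction 100...0_two
    print(fields32(1.5))                   # (0, 127, 4194304)
    print(fields32(-2.0 ** -126))          # (1, 1, 0): smallest normalized magnitude, min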
Overview of IEEE 754 Standard Formats

Table 17.1: Some features of the ANSI/IEEE standard floating-point number representation formats.

  Feature              Single/Short                   Double/Long
  Word width (bits)    32                             64
  Significand bits     23 + 1 hidden                  52 + 1 hidden
  Significand range    [1, 2 − 2^−23]                 [1, 2 − 2^−52]
  Exponent bits        8                              11
  Exponent bias        127                            1023
  Zero (±0)            e + bias = 0, f = 0            e + bias = 0, f = 0
  Denormal             e + bias = 0, f ≠ 0            e + bias = 0, f ≠ 0
                       represents ±0.f × 2^−126       represents ±0.f × 2^−1022
  Infinity (±∞)        e + bias = 255, f = 0          e + bias = 2047, f = 0
  Not-a-number (NaN)   e + bias = 255, f ≠ 0          e + bias = 2047, f ≠ 0
  Ordinary number      e + bias ∈ [1, 254]            e + bias ∈ [1, 2046]
                       e ∈ [−126, 127]                e ∈ [−1022, 1023]
                       represents 1.f × 2^e           represents 1.f × 2^e
  min                  2^−126 ≅ 1.2 × 10^−38          2^−1022 ≅ 2.2 × 10^−308
  max                  ≅ 2^128 ≅ 3.4 × 10^38          ≅ 2^1024 ≅ 1.8 × 10^308

Exponent Encoding

Exponent encoding in 8 bits for the single/short (32-bit) ANSI/IEEE format:
  Decimal code:    0    1     ...  126  127  128  ...  254   255
  Hex code:        00   01    ...  7E   7F   80   ...  FE    FF
  Exponent value:       −126  ...  −1   0    +1   ...  +127
Codes 0 and 255 are reserved:
  • Code 0, f = 0: representation of ±0
  • Code 0, f ≠ 0: representation of denormals, ±0.f × 2^−126
  • Code 255, f = 0: representation of ±∞
  • Code 255, f ≠ 0: representation of NaNs
Exponent encoding in 11 bits for the double/long (64-bit) format is similar.

Special Operands and Denormals

Operations on special operands:
  • Ordinary number ÷ (+∞) = ±0
  • (+∞) × Ordinary number = ±∞
  • NaN + Ordinary number = NaN

Fig. 17.4: Denormals in the IEEE single-precision format. Biased exponent values 1 through 254 encode ordinary FLP numbers with exponents −126 through +127; biased value 0 encodes ±0 and the denormals ±0.f × 2^−126, which fill the gap between 0 and min = 2^−126; biased value 255 encodes ±∞ and NaN.
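The encoding rules in Table 17.1 and the exponent-encoding scheme above amount to a small decision procedure on the biased-exponent and fraction fields. The sketch below is my own illustration (classify32 and its output strings are not part of the standard); it interprets the fields returned by fields32 from the previous sketch.

    # Minimal sketch: interpret a single-precision encoding per Table 17.1.
    def classify32(sign: int, biased_exp: int, fraction: int) -> str:
        s = "-" if sign else "+"
        if biased_exp == 0:                        # reserved code 0
            return s + "0" if fraction == 0 else f"denormal {s}0.f x 2^-126"
        if biased_exp == 255:                      # reserved code 255
            return s + "inf" if fraction == 0 else "NaN"
        return f"ordinary {s}1.f x 2^{biased_exp - 127}"   # e in [-126, +127]

    print(classify32(0, 0, 0))      # +0
    print(classify32(0, 0, 1))      # denormal +0.f x 2^-126
    print(classify32(1, 255, 0))    # -inf
    print(classify32(0, 128, 0))    # ordinary +1.f x 2^1  (the value 2.0)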
Extended Formats

The bias of an extended format is unspecified, but its exponent range must include that of the next wider basic format:
  • Single extended: ≥ 11 exponent bits, ≥ 32 significand bits; exponent range must include [−1022, 1023]
  • Double extended: ≥ 15 exponent bits, ≥ 64 significand bits; exponent range must include [−16 382, 16 383]

Requirements for Arithmetic

Results of the 4 basic arithmetic operations (+, −, ×, ÷) as well as square-rooting must match those obtained if all intermediate computations were infinitely precise. That is, a floating-point arithmetic operation should introduce no more imprecision than the error attributable to the final rounding of a result that has no exact representation (this is the best possible).

Example: (1 + 2^−1) × (1 + 2^−23)
  Exact result:    1 + 2^−1 + 2^−23 + 2^−24
  Rounded result:  1 + 2^−1 + 2^−22
  Error = ½ ulp

17.3 Basic Floating-Point Algorithms

Addition. Assume e1 ≥ e2; an alignment shift (preshift) is needed if e1 > e2:
  (±s1 × b^e1) + (±s2 × b^e2) = (±s1 × b^e1) + (±s2 / b^(e1−e2)) × b^e1
                              = (±s1 ± s2 / b^(e1−e2)) × b^e1
                              = ±s × b^e
Rounding, overflow, and underflow issues are discussed later. Example (reproduced in the sketch at the end of this section):
  Numbers to be added:
    x = 2^5 × 1.00101101
    y = 2^1 × 1.11101101         (operand with smaller exponent, to be preshifted)
  Operands after alignment shift:
    x = 2^5 × 1.00101101
    y = 2^5 × 0.000111101101     (extra bits to be rounded off)
  Result of addition:
    s = 2^5 × 1.010010111101
    s = 2^5 × 1.01001100         (rounded sum)

Floating-Point Multiplication and Division

Multiplication: (±s1 × b^e1) × (±s2 × b^e2) = ±(s1 × s2) × b^(e1+e2)
  Because s1 × s2 ∈ [1, 4), postshifting may be needed for normalization. Overflow or underflow can occur during multiplication or normalization.

Division: (±s1 × b^e1) / (±s2 × b^e2) = ±(s1 / s2) × b^(e1−e2)
  Because s1/s2 ∈ (0.5, 2), postshifting may be needed for normalization. Overflow or underflow can occur during division or normalization.
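To tie the pieces together, here is a small sketch that reproduces the addition example from Section 17.3 with exact rational arithmetic (an illustration only, not the hardware datapath the book describes): align the operand with the smaller exponent, add the significands exactly, then round the sum back to 8 fractional bits.

    # Minimal sketch of aligned floating-point addition with final rounding.
    from fractions import Fraction

    def fp_add(e1, s1, e2, s2, frac_bits=8):
        if e1 < e2:                                # ensure e1 >= e2
            e1, s1, e2, s2 = e2, s2, e1, s1
        s2_aligned = s2 / 2 ** (e1 - e2)           # alignment preshift of the smaller operand
        s = s1 + s2_aligned                        # exact sum: 1.010010111101_two here
        ulp = Fraction(1, 2 ** frac_bits)
        return e1, round(s / ulp) * ulp            # round to nearest; postshift/overflow omitted

    x = (5, Fraction(0b100101101, 2 ** 8))         # 2^5 * 1.00101101
    y = (1, Fraction(0b111101101, 2 ** 8))         # 2^1 * 1.11101101
    print(fp_add(*x, *y))                          # (5, Fraction(83, 64))

The printed significand 83/64 is exactly 1.01001100_two, matching the rounded sum in the example above.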
