AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 8: Floating Point Arithmetic; Accuracy and Stability

Xiangmin Jiao, Stony Brook University

Outline
1 Floating Point Arithmetic
2 Accuracy and Stability
3 Stability of Algorithms

Floating Point Representations

- Computers can use only a finite number of bits to represent a real number:
  - Numbers cannot be arbitrarily large or small (associated risks of overflow and underflow).
  - There must be gaps between representable numbers (potential round-off errors).
- Commonly used computer representations are floating point representations, which resemble scientific notation:
  ±(d₀ + d₁β^(−1) + ··· + d_(p−1)β^(−p+1))·β^e,  0 ≤ dᵢ < β,
  where β is the base, p is the number of digits of precision, and e is the exponent, between e_min and e_max.
- The representation is normalized if d₀ ≠ 0 (except for 0).
- Gaps between adjacent numbers scale with the size of the numbers.
- The relative resolution is given by the machine epsilon ε_machine = 0.5·β^(1−p): for all x, there exists a floating point number x′ such that |x − x′| ≤ ε_machine·|x|.

IEEE Floating Point Representations

- Single precision: 32 bits
  - 1 sign bit (S), 8 exponent bits (E), 23 significand bits (M); value (−1)^S × 1.M × 2^(E−127)
  - ε_machine is 2^(−24) ≈ 6e−8
- Double precision: 64 bits
  - 1 sign bit (S), 11 exponent bits (E), 52 significand bits (M); value (−1)^S × 1.M × 2^(E−1023)
  - ε_machine is 2^(−53) ≈ 1.1e−16
- Special quantities:
  - +∞ and −∞ are returned when an operation overflows; e.g., x/0 for nonzero x
  - NaN (Not a Number) is returned when an operation has no well-defined result; e.g., 0/0, √−1, arcsin(2)

Machine Epsilon

- Define fl(x) as the closest floating point approximation to x.
- By the definition of ε_machine: for all x ∈ ℝ, there exists ε with |ε| ≤ ε_machine such that fl(x) = x(1 + ε).
- Given an operation +, −, ×, or / (denoted by ∗), floating point numbers x and y, and the corresponding floating point arithmetic operation (denoted by ⊛), we require that x ⊛ y = fl(x ∗ y). This is guaranteed by IEEE floating point arithmetic.
- Fundamental axiom of floating point arithmetic: for all x, y ∈ F, there exists ε with |ε| ≤ ε_machine such that x ⊛ y = (x ∗ y)(1 + ε).
- These properties are the basis of error analysis in the presence of rounding errors.

Potential "Catastrophic" Error 1: Cancellation

- Catastrophic cancellation: if x ≈ y and |x − y| ≪ |x| + |y|, then the computed result x̂ − ŷ is in general inaccurate.
- This is an issue if an intermediate result involves cancellation errors.
- It can sometimes be avoided by using alternative formulas.
- Example: the roots of the quadratic equation ax² + bx + c = 0 can be computed as
  x = (−b ± √(b² − 4ac)) / (2a)
  or
  x = 2c / (−b ∓ √(b² − 4ac)),
  depending on the sign of b.

Potential "Catastrophic" Error 2: Swamping

- Swamping: if |x| ≪ |y| (more precisely, |x| ≲ ε_machine·|y|), then x + y ≈ y (or fl(x) ⊕ fl(y) = fl(y)). In other words, the effect of the smaller value may be lost.
- This is an issue if there are large intermediate values in the computation.
- It can sometimes be avoided by reordering computations.
- Examples:
  - Approximating e with Σ_{n=0}^{N} 1/n! versus Σ_{n=0}^{N} 1/(N−n)!
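The cancellation hazard in the quadratic formula can be illustrated with a short Python sketch (the helper name `quadratic_roots` is mine, not from the lecture): it chooses, for each root, the variant of the formula that avoids subtracting nearly equal quantities, based on the sign of b.

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, avoiding cancellation.

    A sketch assuming real roots and a != 0: q has the same sign as b,
    so -b and -sign(b)*sqrt(disc) add constructively instead of cancelling.
    """
    disc = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(disc, b))
    return q / a, c / q   # the two roots, each computed without cancellation

# b >> a, c: the naive formula cancels badly for the small root
x1, x2 = quadratic_roots(1.0, 1e8, 1.0)

# naive small root (-b + sqrt(b^2 - 4ac)) / (2a): the subtraction
# destroys most of the significant digits here
naive = (-1e8 + math.sqrt(1e8 * 1e8 - 4.0)) / 2.0
```

Here `x2` agrees with the true small root ≈ −1e−8 to machine precision, while `naive` differs from it by a large relative error.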
  - In Gaussian elimination, using small values as pivots can lead to large intermediate values.

Potential "Catastrophic" Error 3: Overflow

- Overflow: when multiplying two very large numbers x and y, |xy| can cause unnecessary overflow.
- This is an issue, for example, when computing squares before taking a square root, but it can often be addressed by rescaling.
- Analogously, unnecessary underflow can occur when multiplying two small numbers, but underflow is typically less harmful.
- Examples:
  - In finding the roots of ax² + bx + c = 0, it is advisable to rescale the problem by dividing it by max{|a|, |b|, |c|} (this avoids unnecessary overflow and underflow).
  - When computing ‖x‖₂, rescale the problem to compute ‖x‖_∞ · ‖x/‖x‖_∞‖₂, i.e., first divide x by max_i |x_i| and multiply the factor back afterwards. This can avoid unnecessary overflow and underflow.

Accuracy

- Roughly speaking, accuracy means that the "error" is small in an asymptotic sense, say O(ε_machine).
- The notation φ(t) = O(ψ(t)) means there exists C such that |φ(t)| ≤ C|ψ(t)| as t approaches 0 (or ∞).
  - Example: sin²t = O(t²) as t → 0.
- If φ depends on s and t, then φ(s, t) = O(ψ(t)) means there exists C such that |φ(s, t)| ≤ C|ψ(t)| for any s as t approaches 0 (or ∞).
  - Example: sin²t · sin²s = O(t²) as t → 0.
- When we say O(ε_machine), we are thinking of a series of idealized machines for which ε_machine → 0.

More on Accuracy

- An algorithm f̃ is accurate if the relative error is on the order of machine precision, i.e.,
  ‖f̃(x) − f(x)‖ / ‖f(x)‖ = O(ε_machine), i.e., ≤ C₁·ε_machine as ε_machine → 0,
  where the constant C₁ may depend on the condition number and on the algorithm itself.
- In most cases, we expect
  ‖f̃(x) − f(x)‖ / ‖f(x)‖ = O(κ·ε_machine), i.e., ≤ C·κ·ε_machine as ε_machine → 0,
  where the constant C should be independent of κ and of the value of x (although it may depend on the dimension of x).
- How do we determine whether an algorithm is accurate or not?
  - It turns out to be an extremely subtle question.
  - A forward error analysis (operation by operation) is often too difficult and impractical, and cannot capture the dependence on the condition number.
  - An effective solution is backward error analysis.

Stability

- We say an algorithm is stable if it gives "nearly the right answer to nearly the right question". More formally, an algorithm f̃ for problem f is stable if (for all x)
  ‖f̃(x) − f(x̃)‖ / ‖f(x̃)‖ = O(ε_machine)
  for some x̃ with ‖x̃ − x‖ / ‖x‖ = O(ε_machine).
- We say an algorithm is backward stable if it gives "exactly the right answer to nearly the right question". More formally, an algorithm f̃ for problem f is backward stable if (for all x)
  f̃(x) = f(x̃)
  for some x̃ with ‖x̃ − x‖ / ‖x‖ = O(ε_machine).
- Is stability or backward stability stronger? Backward stability is stronger.
- Does (backward) stability depend on the condition number of f(x)? No.
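The fundamental axiom x ⊛ y = (x ∗ y)(1 + ε) already makes each IEEE operation backward stable: x ⊕ y = x(1 + ε) + y(1 + ε) is the exact sum of slightly perturbed inputs. A minimal Python sketch (the helper name is mine) recovers ε exactly with rational arithmetic and checks it against ε_machine = 2⁻⁵³ for double precision:

```python
from fractions import Fraction

# machine epsilon (unit roundoff) for IEEE double precision
EPS_MACHINE = Fraction(1, 2**53)

def rel_rounding_error(x, y):
    """Relative error eps in x (+) y = (x + y)(1 + eps), computed exactly.

    Fraction(float) converts a double to its exact rational value, so
    both the computed sum and the true sum of the two doubles are
    represented with no further rounding (a sketch).
    """
    computed = Fraction(x + y)           # result of floating point addition
    exact = Fraction(x) + Fraction(y)    # exact sum of the two doubles
    return abs(computed - exact) / abs(exact)

# The fundamental axiom promises |eps| <= eps_machine for every pair
for x, y in [(0.1, 0.2), (1.0, 1e-16), (3.14, 2.71e-7)]:
    assert rel_rounding_error(x, y) <= EPS_MACHINE
```

The bound holds for any pair of doubles whose sum is nonzero, because IEEE round-to-nearest rounds the exact sum to within half an ulp.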
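Returning to the overflow-avoidance example from earlier: the rescaled computation of ‖x‖₂ can be sketched in Python (the function name is mine, not from the lecture).

```python
import math

def norm2_rescaled(x):
    """2-norm computed as ||x||_inf * ||x / ||x||_inf||_2.

    Dividing by the largest magnitude first keeps every square in
    [0, 1], avoiding unnecessary overflow and underflow (a sketch).
    """
    m = max(abs(xi) for xi in x)
    if m == 0.0:
        return 0.0
    return m * math.sqrt(sum((xi / m) ** 2 for xi in x))

big = [3e200, 4e200]
assert math.isinf(big[0] * big[0])   # naive sum of squares overflows to inf
assert math.isclose(norm2_rescaled(big), 5e200, rel_tol=1e-12)
```

The same idea underlies library routines such as `math.hypot`, which compute 2-norms without intermediate overflow.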
