arXiv:1603.04483v1 [cs.MS] 14 Mar 2016

Fast calculation of inverse square root with the use of magic constant – analytical approach

Leonid V. Moroz^1*, Cezary J. Walczyk^2†, Andriy Hrynchyshyn^1‡, Vijay Holimath^3§, and Jan L. Cieśliński^2¶

^1 Lviv Polytechnic National University, Department of Security of Information and Technology, st. Kn. Romana 1/3, 79000 Lviv, Ukraine
^2 Uniwersytet w Białymstoku, Wydział Fizyki, ul. Ciołkowskiego 1L, 15-245 Białystok, Poland
^3 VividSparks IT Solutions, No. 38, BSK Layout, Hubli 580031, India

* [email protected]   † [email protected]   ‡ [email protected]   § [email protected]   ¶ [email protected]

Abstract

We present a mathematical analysis of transformations used in fast calculation of the inverse square root for single-precision floating-point numbers. Optimal values of the so-called magic constants are derived in a systematic way, minimizing either absolute or relative errors at subsequent stages of the discussed algorithm.

Keywords: floating-point arithmetics; inverse square root; magic constant; Newton-Raphson method

1 Introduction

Floating-point arithmetics has become widespread in many applications such as 3D graphics, scientific computing and signal processing [1, 2, 3]. Basic operators such as addition, subtraction and multiplication are easier to design and yield higher performance and throughput, but advanced operators such as division, square root, inverse square root and trigonometric functions consume more hardware, are slower in performance and have lower throughput [4, 5, 6, 7, 8]. The inverse square root function is widely used in 3D graphics, especially in lighting reflections [9, 10, 11]. Many algorithms can be used to approximate the inverse square root function [12, 13, 14, 15, 16]. All of these algorithms require an initial seed to approximate the function. If the initial seed is accurate, then fewer iterations are required and the function is less time-consuming; in other words, the function requires fewer cycles. In most cases the initial seed is obtained from a look-up table (LUT), and the LUT consumes a significant silicon area of a chip. In this paper we present an initial seed using the so-called magic constant [17, 18], which does not require a LUT; we then use this magic constant to approximate the inverse square root function using the Newton-Raphson method and discuss its analytical approach. We present the first mathematically rigorous description of the fast algorithm for computing the inverse square root for single-precision IEEE Standard 754 floating-point numbers (type float).

1. float InvSqrt(float x) {
2.   float halfnumber = 0.5f * x;
3.   int i = *(int*) &x;
4.   i = R - (i>>1);
5.   x = *(float*)&i;
6.   x = x*(1.5f - halfnumber*x*x);
7.   x = x*(1.5f - halfnumber*x*x);
8.   return x;
9. }

This code, written in C, will be referred to as the function InvSqrt. It realizes a fast algorithm for calculation of the inverse square root. In line 3 we transfer the bits of the variable x (type float) to the variable i (type int). In line 4 we determine an initial value (then subject to the iteration process) of the inverse square root, where R = 0x5f3759df is a "magic constant". In line 5 we transfer the bits of the variable i (type int) back to the variable x (type float). Lines 6 and 7 contain subsequent iterations of the Newton-Raphson algorithm.

The algorithm InvSqrt has numerous applications, see [19, 20, 21, 22, 23]. The most important among them is 3D graphics, where normalization of vectors is ubiquitous. InvSqrt is characterized by a high speed, more than 3 times higher than in computing the inverse square root using library functions. This property is discussed in detail in [24]. The errors of the fast inverse square root algorithm depend on the choice of R. In several theoretical papers [18, 24, 25, 26, 27] (see also Eberly's monograph [9]) attempts were made to determine analytically the optimal value (i.e. minimizing errors) of the magic constant. These attempts were not fully successful. In our paper we present the missing mathematical description of all steps of the fast inverse square root algorithm.
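Remark (our addition, not part of the original code): the pointer casts in lines 3 and 5 technically violate C's strict-aliasing rule; in modern C the same bit transfers are usually expressed with memcpy, which compilers reduce to a plain register move. A minimal strict-aliasing-safe sketch of the same algorithm:

    #include <stdint.h>
    #include <string.h>

    /* A sketch of InvSqrt restated with memcpy-based type punning;
       the behaviour is the same as in the listing above. */
    float InvSqrtSafe(float x)
    {
        float halfnumber = 0.5f * x;
        uint32_t i;
        memcpy(&i, &x, sizeof i);              /* line 3: float bits -> integer */
        i = 0x5f3759df - (i >> 1);             /* line 4: magic constant R      */
        memcpy(&x, &i, sizeof x);              /* line 5: integer bits -> float */
        x = x * (1.5f - halfnumber * x * x);   /* line 6: Newton-Raphson step   */
        x = x * (1.5f - halfnumber * x * x);   /* line 7: second iteration      */
        return x;
    }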

2 Preliminaries

The value of a floating-point number can be represented as:

$$ x = (-1)^{s_x} (1 + m_x)\, 2^{e_x}, \qquad (2.1) $$

where $s_x$ is the sign bit ($s_x = 1$ for negative numbers and $s_x = 0$ for positive numbers), $1 + m_x$ is the normalized mantissa (or significand), where $m_x \in \langle 0, 1)$ and, finally, $e_x$ is an integer.

Figure 1: The layout of a 32-bit floating-point number.

In the case of the standard IEEE-754, a floating-point number is encoded by 32 bits (Fig. 1). The first bit corresponds to the sign, the next 8 bits correspond to the exponent $e_x$, and the last 23 bits encode the mantissa. The fractional part of the mantissa is represented by an integer (without a sign) $M_x$:

$$ M_x = N_m m_x, \quad \text{where } N_m = 2^{23}, \qquad (2.2) $$

and the exponent is represented by a positive value $E_x$ resulting from the shift of $e_x$ by a constant $B$ (biased exponent):

$$ E_x = e_x + B, \quad \text{where } B = 127. \qquad (2.3) $$

Bits of a floating-point number can be interpreted as an integer given by:

$$ I_x = (-1)^{s_x} (N_m E_x + M_x), \qquad (2.4) $$

where:

$$ M_x = N_m (2^{-e_x} x - 1). \qquad (2.5) $$

In what follows we confine ourselves to positive numbers ($s_x = 0$). Then, to a given integer $I_x \in \langle 0, 2^{32} - 1 \rangle$ there corresponds a floating-point number $x$ of the form (2.1), where

$$ e_x := \lfloor N_m^{-1} I_x \rfloor - B, \qquad m_x := N_m^{-1} I_x - \lfloor N_m^{-1} I_x \rfloor. \qquad (2.6) $$

This map, denoted by $f$, is inverse to the map $x \to I_x$. In other words,

$$ f(I_x) = x. \qquad (2.7) $$
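The maps (2.4)-(2.7) are easy to verify numerically. The following sketch is our illustration (the test value x = 6.5 is arbitrary): it extracts the biased exponent $E_x$ and the mantissa bits $M_x$ from the integer $I_x$ and reconstructs $x$ according to (2.1):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const uint32_t Nm = 1u << 23;      /* N_m = 2^23, cf. (2.2)         */
        const int B = 127;                 /* exponent bias, cf. (2.3)      */
        float x = 6.5f;                    /* arbitrary positive test value */
        uint32_t Ix;
        memcpy(&Ix, &x, sizeof Ix);        /* I_x, cf. (2.4)                */

        uint32_t Ex = Ix / Nm;             /* biased exponent E_x           */
        uint32_t Mx = Ix % Nm;             /* mantissa bits M_x             */
        int ex = (int)Ex - B;              /* e_x, cf. (2.6)                */
        double mx = (double)Mx / Nm;       /* m_x, cf. (2.6)                */

        /* reconstruct x = (1 + m_x) 2^{e_x}, cf. (2.1) and (2.7) */
        printf("x = %g, e_x = %d, m_x = %g, (1+m_x)*2^e_x = %g\n",
               x, ex, mx, (1.0 + mx) * exp2(ex));
        return 0;
    }

For x = 6.5 this prints e_x = 2 and m_x = 0.625, and the reconstruction returns 6.5.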

The range of available 32-bit floating-point numbers for which we can determine inverse square roots can be divided into 127 disjoint intervals:

$$ x \in \bigcup_{n=-63}^{63} A_n, \quad \text{where } A_n = \{2^{2n}\} \cup \underbrace{(2^{2n}, 2^{2n+1})}_{A_n^{I}} \cup \underbrace{\langle 2^{2n+1}, 2^{2(n+1)})}_{A_n^{II}}. \qquad (2.8) $$

Therefore, $e_x = 2n$ for $x \in A_n^{I} \cup \{2^{2n}\}$ and $e_x = 2n + 1$ for $x \in A_n^{II}$. For any $x \in A_n$ the exponents and significands of $y = 1/\sqrt{x}$ are given by

$$ e_y = \begin{cases} -n & \text{for } x = 2^{2n} \\ -n-1 & \text{for } x \in A_n^{I} \\ -n-1 & \text{for } x \in A_n^{II} \end{cases}, \qquad m_y = \begin{cases} 0 & \text{for } x = 2^{2n} \\ 2/\sqrt{1+m_x} - 1 & \text{for } x \in A_n^{I} \\ \sqrt{2}/\sqrt{1+m_x} - 1 & \text{for } x \in A_n^{II} \end{cases}. \qquad (2.9) $$

It is convenient to introduce new variables $\tilde{x} = 2^{-2n} x$ and $\tilde{y} = 2^{n} y$ (in order to have $\tilde{y} = 1/\sqrt{\tilde{x}}$). Then:

$$ e_{\tilde{y}} = \begin{cases} 0 & \text{for } x = 2^{2n} \\ -1 & \text{for } x \in A_n^{I} \\ -1 & \text{for } x \in A_n^{II} \end{cases}, \qquad m_{\tilde{y}} = \begin{cases} 0 & \text{for } x = 2^{2n} \\ 2/\sqrt{1+m_x} - 1 & \text{for } x \in A_n^{I} \\ \sqrt{2}/\sqrt{1+m_x} - 1 & \text{for } x \in A_n^{II} \end{cases}, \qquad (2.10) $$

which means that without loss of generality we can confine ourselves to $\tilde{x} \in \langle 1, 4)$:

$$ \begin{cases} \tilde{x} = 1 & \text{for } x = 2^{2n} \\ \tilde{x} \in \langle 1, 2) & \text{for } x \in A_n^{I} \\ \tilde{x} \in \langle 2, 4) & \text{for } x \in A_n^{II} \end{cases}. \qquad (2.11) $$

3 Theoretical explanation of InvSqrt code

In this section we present a mathematical interpretation of the code InvSqrt. The most important part of the code is contained in line 4. Lines 4 and 5 produce a zeroth approximation of the inverse square root of a given positive floating-point number $x$ ($s_x = 0$). The zeroth approximation will be used as an initial value for the Newton-Raphson iterations (lines 6 and 7 of the code).

Theorem 3.1. The procedure of determining an initial value using the magic constant, described by lines 4 and 5 of the code, can be represented by the following function:

$$ \tilde{y}_0(\tilde{x}, t) = \begin{cases} -\frac{1}{4}\tilde{x} + \frac{3}{4} + \frac{1}{8} t & \text{for } \tilde{x} \in \langle 1, 2) \\[4pt] -\frac{1}{8}\tilde{x} + \frac{1}{2} + \frac{1}{8} t & \text{for } \tilde{x} \in \langle 2, t) \\[4pt] -\frac{1}{16}\tilde{x} + \frac{1}{2} + \frac{1}{16} t & \text{for } \tilde{x} \in \langle t, 4) \end{cases} \qquad (3.1) $$

where

$$ t = t_x = 2 + 4 m_R + 2 \mu_x N_m^{-1}, \qquad (3.2) $$

$m_R := N_m^{-1} R - \lfloor N_m^{-1} R \rfloor$, and $\mu_x = 0$ for $M_x$ even and $\mu_x = 1$ for $M_x$ odd. Finally, the floating-point number $f(R)$, corresponding to the magic constant $R$, satisfies

$$ e_R = 63, \qquad m_R < \frac{1}{2}, \qquad (3.3) $$

where $f(R) = (1 + m_R)\, 2^{e_R}$.

Proof: Line 4 in the definition of the InvSqrt function consists of two operations. The first one is a right bit shift of the number $I_x$, defined by (2.4), which yields the integer part of its half:

$$ I_{x/2} = \lfloor I_x / 2 \rfloor = 2^{-1} N_m (B + e_x) + \lfloor 2^{-1} N_m (2^{-e_x} x - 1) \rfloor. \qquad (3.4) $$

The second operation yields

$$ I_{y_0} := R - I_{x/2}, \qquad (3.5) $$

and $y_0 \equiv f(R - I_{x/2})$ is computed in line 5. This floating-point number will be used as a zeroth approximation of the inverse square root, i.e., $y_0 \simeq 1/\sqrt{x}$ (for a justification see the next section). Denoting, as usual,

$$ y_0 = (1 + m_{y_0})\, 2^{e_{y_0}}, \qquad (3.6) $$

and remembering that

$$ R = N_m (e_R + B + m_R), \qquad (3.7) $$

we see from (2.6), (3.4) and (3.5) that

$$ m_{y_0} = e_R + m_R - e_{y_0} - N_m^{-1} I_{x/2}, \qquad e_{y_0} = \lfloor e_R + m_R - N_m^{-1} I_{x/2} \rfloor. \qquad (3.8) $$

Here $e_R$ is an integer part and $m_R$ is a mantissa of the floating-point number given by $f(R)$. It means that $e_R = 63$ and $m_R < 1/2$. According to formulas (3.4), (3.8) and (2.5), confining ourselves to $\tilde{x} \in \langle 1, 4)$, we obtain:

$$ I_{\tilde{x}/2} = \lfloor 2^{-1} N_m m_{\tilde{x}} \rfloor + \begin{cases} 2^{-1} N_m B & \text{for } \tilde{x} \in \langle 1, 2) \\ 2^{-1} N_m (B + 1) & \text{for } \tilde{x} \in \langle 2, 4) \end{cases}. \qquad (3.9) $$

Hence

$$ e_{\tilde{y}_0} = e_R - \frac{B+1}{2} + \begin{cases} \left\lfloor m_R + 2^{-1} - N_m^{-1} \lfloor 2^{-1} N_m m_{\tilde{x}} \rfloor \right\rfloor & \text{for } \tilde{x} \in \langle 1, 2) \\[4pt] \left\lfloor m_R - N_m^{-1} \lfloor 2^{-1} N_m m_{\tilde{x}} \rfloor \right\rfloor & \text{for } \tilde{x} \in \langle 2, 4) \end{cases}. \qquad (3.10) $$

Therefore, requiring $e_R = \frac{1}{2}(B-1) = 63$ and $m_R < \frac{1}{2}$, we get $e_{\tilde{y}_0} = -1$, which means that $e_{\tilde{y}_0} = e_{\tilde{y}}$ for $\tilde{x} \in \langle 1, 2)$. The condition $m_R < 1/2$ implies

$$ \left\lfloor m_R + 2^{-1} - N_m^{-1} \lfloor 2^{-1} N_m m_{\tilde{x}} \rfloor \right\rfloor = 0, \qquad (3.11) $$

and

$$ \left\lfloor m_R - N_m^{-1} \lfloor 2^{-1} N_m m_{\tilde{x}} \rfloor \right\rfloor = \begin{cases} 0 & \text{for } m_{\tilde{x}} \in \langle 0, 2 m_R \rangle \\ -1 & \text{for } m_{\tilde{x}} \in (2 m_R, 1) \end{cases}, \qquad (3.12) $$

which means that

$$ e_{\tilde{y}_0} = \begin{cases} e_R - \frac{B+1}{2} = -1 & \text{for } \tilde{x} \in \langle 1, 2 + 4 m_R \rangle \\[4pt] e_R - \frac{B+3}{2} = -2 & \text{for } \tilde{x} \in (2 + 4 m_R, 4) \end{cases}, \qquad (3.13) $$

which ends the proof.

In order to get a simple expression for the mantissa $m_{\tilde{y}_0}$ we can make a next approximation:

$$ \lfloor 2^{-1} N_m m_{\tilde{x}} \rfloor \simeq 2^{-1} N_m m_{\tilde{x}} - 2^{-1}, \qquad (3.14) $$

which yields a new estimation of the inverse square root:

$$ \tilde{y}_{00} = (1 + m_{\tilde{y}_{00}})\, 2^{e_{\tilde{y}_{00}}}, \qquad (3.15) $$

where for $t = 2 + 4 m_R + 2 N_m^{-1}$:

$$ \begin{cases} e_{\tilde{y}_{00}} = -1, & m_{\tilde{y}_{00}} = 2^{-2} t - 2^{-1} m_{\tilde{x}} & \text{for } \tilde{x} \in \langle 1, 2) = \tilde{A}^{I} \\ e_{\tilde{y}_{00}} = -1, & m_{\tilde{y}_{00}} = 2^{-2} t - 2^{-1} m_{\tilde{x}} - 2^{-1} & \text{for } \tilde{x} \in \langle 2, t \rangle = \tilde{A}^{II} \\ e_{\tilde{y}_{00}} = -2, & m_{\tilde{y}_{00}} = 2^{-2} t - 2^{-1} m_{\tilde{x}} + 2^{-1} & \text{for } \tilde{x} \in (t, 4) = \tilde{A}^{III} \end{cases} \qquad (3.16) $$

Because $m_{\tilde{x}} = 2^{-e_{\tilde{x}}} \tilde{x} - 1$, the above equations yield

$$ \tilde{y}_{00} = 2^{e_{\tilde{y}_{00}}} (\tilde{\alpha} \tilde{x} + \tilde{\beta}), \quad \text{where: } \tilde{\alpha} = -2^{-(1+e_{\tilde{x}})}, \quad \tilde{\beta} = \frac{1}{4} t - \frac{1}{2} e_{\tilde{x}} - e_{\tilde{y}_{00}} + \frac{1}{2}, $$

which means that $\tilde{y}_{00}$ is a piecewise linear function of $\tilde{x}$:

$$ \tilde{y}_{00}(\tilde{x}, t) = \begin{cases} \tilde{y}_0^{I}(\tilde{x}, t) = -2^{-2} (\tilde{x} - 2^{-1} t - 3) & \text{for } \tilde{x} \in \langle 1, 2) = \tilde{A}^{I} \\ \tilde{y}_0^{II}(\tilde{x}, t) = -2^{-3} (\tilde{x} - t - 4) & \text{for } \tilde{x} \in \langle 2, t \rangle = \tilde{A}^{II} \\ \tilde{y}_0^{III}(\tilde{x}, t) = -2^{-4} (\tilde{x} - t - 8) & \text{for } \tilde{x} \in (t, 4) = \tilde{A}^{III} \end{cases} \qquad (3.17) $$
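The piecewise linear formula (3.17) can be checked directly against lines 4-5 of the code. The sketch below is our numerical experiment (the classic value R = 0x5f3759df and the sample count are arbitrary choices): it compares the bit-level zeroth approximation with the model (3.17) on random $\tilde{x} \in \langle 1, 4)$; the observed deviation should be of order $10^{-7}$, consistent with the accuracy of the approximation (3.14):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Bit-level zeroth approximation: lines 4-5 of InvSqrt. */
    static float y0_bits(float x, uint32_t R)
    {
        uint32_t i;
        float y;
        memcpy(&i, &x, sizeof i);
        i = R - (i >> 1);
        memcpy(&y, &i, sizeof y);
        return y;
    }

    /* Piecewise linear model (3.17) on <1,4). */
    static double y0_model(double x, double t)
    {
        if (x < 2.0) return -0.25   * (x - 0.5 * t - 3.0);
        if (x < t)   return -0.125  * (x - t - 4.0);
        return              -0.0625 * (x - t - 8.0);
    }

    int main(void)
    {
        const uint32_t R = 0x5f3759df;     /* classic magic constant */
        const double mR = (R % (1u << 23)) / (double)(1u << 23);
        const double t = 2.0 + 4.0 * mR;   /* cf. (3.2), mu_x term dropped */
        double worst = 0.0;
        for (int n = 0; n < 100000; ++n) {
            double x = 1.0 + 3.0 * rand() / (double)RAND_MAX;  /* x in <1,4) */
            double d = y0_bits((float)x, R) / y0_model(x, t) - 1.0;
            if (d < 0.0) d = -d;
            if (d > worst) worst = d;
        }
        printf("max relative deviation, bits vs model: %.3g\n", worst);
        return 0;
    }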

Corollary 3.2. $\tilde{y}_0(\tilde{x})$ can be approximated by the piecewise linear function

$$ \tilde{y}_{00}(\tilde{x}) := \tilde{y}_0(\tilde{x}, t_1), \qquad (3.18) $$

where $t_1 = 2 + 4 m_R + 2 N_m^{-1}$, with a good accuracy $(2 N_m)^{-1} \approx 5.96 \cdot 10^{-8}$. This function is presented in Fig. 2 for a particular value of $m_R$:

$$ m_R = (R - 190 N_m)/N_m = 3630127/N_m \simeq 0.4327449, $$

known from the literature. The right part of Fig. 2 shows a very small relative error $(\tilde{y}_{00} - \tilde{y}_0)/\tilde{y}_0$, which confirms the validity and accuracy of the approximation (3.14). In order to improve the accuracy, the zeroth approximation $\tilde{y}_{00}$ is corrected twice using the Newton-Raphson method (lines 6 and 7 in the InvSqrt code):

$$ \tilde{y}_{01} = \tilde{y}_{00} - f(\tilde{y}_{00})/f'(\tilde{y}_{00}), \qquad \tilde{y}_{02} = \tilde{y}_{01} - f(\tilde{y}_{01})/f'(\tilde{y}_{01}), $$

where $f(y) = y^{-2} - x$. Therefore:

$$ \tilde{y}_{01}(\tilde{x}, t) = 2^{-1}\, \tilde{y}_{00}(\tilde{x}, t)\, (3 - \tilde{y}_{00}^2(\tilde{x}, t)\, \tilde{x}), \qquad (3.19) $$

$$ \tilde{y}_{02}(\tilde{x}, t) = 2^{-1}\, \tilde{y}_{01}(\tilde{x}, t)\, (3 - \tilde{y}_{01}^2(\tilde{x}, t)\, \tilde{x}). \qquad (3.20) $$

In this section we gave a theoretical explanation of the InvSqrt code. In the original form of the code the magic constant $R$ was guessed. Our interpretation gives us a natural possibility to treat $R$ as a free parameter. In the next section we will find its optimal values, minimizing errors.
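Treating R as a free parameter is straightforward to express in code. The following sketch is ours (the name InvSqrtParam is not from the paper); it exposes the magic constant and the number of Newton-Raphson steps (3.19)-(3.20) as arguments, which is convenient for the error measurements of the next sections:

    #include <stdint.h>
    #include <string.h>

    /* InvSqrt with the magic constant R and the number of
       Newton-Raphson iterations k as explicit parameters.
       The original code corresponds to R = 0x5f3759df, k = 2. */
    float InvSqrtParam(float x, uint32_t R, int k)
    {
        float halfnumber = 0.5f * x;
        uint32_t i;
        memcpy(&i, &x, sizeof i);
        i = R - (i >> 1);                         /* zeroth approximation */
        memcpy(&x, &i, sizeof x);
        while (k-- > 0)
            x = x * (1.5f - halfnumber * x * x);  /* corrections (3.19)-(3.20) */
        return x;
    }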

Figure 2: Left: the function $1/\sqrt{\tilde{x}}$ and its zeroth approximation $\tilde{y}_{00}(\tilde{x}, t)$ given by (3.17). Right: relative error of the zeroth approximation $\tilde{y}_{00}(\tilde{x}, t)$ for 2000 random values of $\tilde{x}$.

4 Minimization of the relative error

Approximations of the inverse square root presented in the previous section depend on the parameter $t$, directly related to the magic constant. The value of this parameter can be estimated by analysing the relative error of $\tilde{y}_{0k}(\tilde{x}, t)$ with respect to $1/\sqrt{\tilde{x}}$:

$$ \tilde{\delta}_k(\tilde{x}, t) = \sqrt{\tilde{x}}\, \tilde{y}_{0k}(\tilde{x}, t) - 1, \quad \text{where } k \in \{0, 1, 2\}. \qquad (4.1) $$

As the best estimation we consider $t = t_k^{(r)}$ minimizing the relative error $\tilde{\delta}_k(\tilde{x}, t)$:

$$ \forall_{t \ne t_k^{(r)}} \quad \max_{\tilde{x} \in \tilde{A}} |\tilde{\delta}_k(\tilde{x}, t_k^{(r)})| < \max_{\tilde{x} \in \tilde{A}} |\tilde{\delta}_k(\tilde{x}, t)|, \quad \text{where } \tilde{A} = \tilde{A}^{I} \cup \tilde{A}^{II} \cup \tilde{A}^{III}. \qquad (4.2) $$

4.1 Zeroth approximation

In order to determine $t_0^{(r)}$ we have to find the extrema of $\tilde{\delta}_0(\tilde{x}, t)$ with respect to $\tilde{x}$, identify the maxima, and compare them with the boundary values $\tilde{\delta}_0(1,t) = \tilde{\delta}_0^{I}(1,t)$, $\tilde{\delta}_0(2,t) = \tilde{\delta}_0^{I}(2,t) = \tilde{\delta}_0^{II}(2,t)$, $\tilde{\delta}_0(t,t) = \tilde{\delta}_0^{II}(t,t) = \tilde{\delta}_0^{III}(t,t)$, $\tilde{\delta}_0(4,t) = \tilde{\delta}_0^{III}(4,t)$ at the ends of the considered intervals $\tilde{A}^{I}$, $\tilde{A}^{II}$, $\tilde{A}^{III}$. Equating to zero the derivatives of $\tilde{\delta}_0^{I}(\tilde{x},t)$, $\tilde{\delta}_0^{II}(\tilde{x},t)$, $\tilde{\delta}_0^{III}(\tilde{x},t)$:

$$ 0 = \partial_{\tilde{x}} \tilde{\delta}_0^{I}(\tilde{x}, t) = 2^{-3} \tilde{x}^{-1/2} (3 - 3\tilde{x} + 2^{-1} t), $$
$$ 0 = \partial_{\tilde{x}} \tilde{\delta}_0^{II}(\tilde{x}, t) = 2^{-2} \tilde{x}^{-1/2} (1 - 3 \cdot 2^{-2} \tilde{x} + 2^{-2} t), $$
$$ 0 = \partial_{\tilde{x}} \tilde{\delta}_0^{III}(\tilde{x}, t) = 2^{-2} \tilde{x}^{-1/2} (1 - 3 \cdot 2^{-3} \tilde{x} + 2^{-3} t), \qquad (4.3) $$

we find the local extrema:

$$ \tilde{x}_0^{I} = (6 + t)/6, \qquad \tilde{x}_0^{II} = (4 + t)/3, \qquad \tilde{x}_0^{III} = (8 + t)/3. \qquad (4.4) $$

We easily verify that $\tilde{\delta}_0^{I}(\tilde{x},t)$, $\tilde{\delta}_0^{II}(\tilde{x},t)$, $\tilde{\delta}_0^{III}(\tilde{x},t)$ are concave functions:

$$ \partial_{\tilde{x}}^2 \tilde{\delta}_0^{K}(\tilde{x}, t) < 0, \quad \text{for } K \in \{I, II, III\}, $$

which means that we have local maxima at $\tilde{x}_0^{K}$ (where $K \in \{I, II, III\}$). One of them is negative:

$$ \tilde{\delta}_{0m}^{III}(t) = \tilde{\delta}_0(\tilde{x}_0^{III}, t) = -2^{-1} + 2^{-3} t < 0 \quad \text{for } t \in (2, 4). $$

The other maxima, given by

$$ \tilde{\delta}_{0m}^{I}(t) = \tilde{\delta}_0(\tilde{x}_0^{I}, t) = -1 + 2^{-1} (1 + t/6)^{3/2}, $$
$$ \tilde{\delta}_{0m}^{II}(t) = \tilde{\delta}_0(\tilde{x}_0^{II}, t) = -1 + 2 \cdot 3^{-3/2} (1 + t/4)^{3/2}, \qquad (4.5) $$

are increasing functions of $t$, satisfying

$$ \tilde{\delta}_{0m}^{II}(t) < \tilde{\delta}_{0m}^{I}(t) \;\wedge\; \tilde{\delta}_{0m}^{I}(t) \le 0, \quad \text{for } t \in (2,\; 3 \cdot 2^{5/3} - 6\rangle, \qquad (4.6) $$
$$ \tilde{\delta}_{0m}^{II}(t) < \tilde{\delta}_{0m}^{I}(t) \;\wedge\; \tilde{\delta}_{0m}^{I}(t) > 0, \quad \text{for } t \in (3 \cdot 2^{5/3} - 6,\; 2^{5/3} + 2^{4/3} - 2), \qquad (4.7) $$
$$ \tilde{\delta}_{0m}^{II}(t) \ge \tilde{\delta}_{0m}^{I}(t) \;\wedge\; \tilde{\delta}_{0m}^{I}(t) > 0, \quad \text{for } t \in \langle 2^{5/3} + 2^{4/3} - 2,\; 4). \qquad (4.8) $$

Because the functions $\tilde{\delta}_0^{I}(\tilde{x},t)$, $\tilde{\delta}_0^{II}(\tilde{x},t)$, $\tilde{\delta}_0^{III}(\tilde{x},t)$ are concave, their minimal values with respect to $\tilde{x}$ are attained at the boundaries of the intervals $\tilde{A}^{I}$, $\tilde{A}^{II}$, $\tilde{A}^{III}$. It turns out that the global minimum is described by the following function:

$$ \tilde{\delta}_0^{II}(t, t) = \tilde{\delta}_0^{III}(t, t) = -1 + 2^{-1} \sqrt{t}, \qquad (4.9) $$

which is increasing and negative for $t \in (2, 4)$. Therefore, taking into account (4.6), (4.7) and (4.8), the condition (4.2) reduces to

$$ \tilde{\delta}_{0m}^{I}(t_0^{(r)}) = |\tilde{\delta}_0^{II}(t_0^{(r)}, t_0^{(r)})| \quad \text{for } t_0^{(r)} \in (2,\; -2 + 2^{4/3} + 2^{5/3}), \qquad (4.10) $$
$$ \tilde{\delta}_{0m}^{II}(t_0^{(r)}) = |\tilde{\delta}_0^{II}(t_0^{(r)}, t_0^{(r)})| \quad \text{for } t_0^{(r)} \in \langle -2 + 2^{4/3} + 2^{5/3},\; 4). \qquad (4.11) $$

The right answer results from equation (4.11):

$$ t_0^{(r)} \simeq 3.7309796. \qquad (4.12) $$

Thus we obtain an estimation minimizing the maximal relative error of the zeroth approximation:

$$ \tilde{\delta}_{0\,\max} = \max_{\tilde{x} \in \tilde{A}} |\tilde{\delta}_0(\tilde{x}, t_0^{(r)})| = |\tilde{\delta}_0(t_0^{(r)}, t_0^{(r)})| \simeq 0.03421281, \qquad (4.13) $$

and the magic constant $R_0^{(r)}$:

$$ R_0^{(r)} = N_m (e_R + B) + \lfloor 2^{-2} N_m (t_0^{(r)} - 2) - 2^{-1} \rceil = 1597465647 = \texttt{0x5F37642F}. \qquad (4.14) $$

The resulting relative error is presented in Fig. 3.

Figure 3: Relative error of the zeroth approximation of the inverse square root. Grey points were generated by the function InvSqrt without the two lines (6 and 7) of code and with $R = R_0^{(r)}$, for 4000 random values $x \in \langle 2^{-126}, 2^{128})$.

4.2 Newton-Raphson corrections

The relative error can be reduced by the Newton-Raphson corrections (3.19) and (3.20). Substituting $\tilde{y}_{0k}(\tilde{x}, t) = (1 + \tilde{\delta}_k(\tilde{x}, t))/\sqrt{\tilde{x}}$ we rewrite them as

$$ \tilde{\delta}_k(\tilde{x}, t) = -\frac{1}{2}\, \tilde{\delta}_{k-1}^2(\tilde{x}, t)\, \bigl( 3 + \tilde{\delta}_{k-1}(\tilde{x}, t) \bigr), \quad \text{where } k \in \{1, 2\}. \qquad (4.15) $$

The quadratic dependence on $\tilde{\delta}_{k-1}$ implies a fast convergence of the Newton-Raphson iterations. For instance, inserting the zeroth-order error $|\tilde{\delta}_0| \approx 0.0342$ from (4.13) into (4.15) gives $|\tilde{\delta}_1| \approx \frac{1}{2}(0.0342)^2 (3 - 0.0342) \approx 1.74 \cdot 10^{-3}$, consistent in magnitude with (4.22) below. Note that the obtained functions $\tilde{\delta}_k$ are non-positive. Their derivatives with respect to $\tilde{x}$ can be easily calculated:

$$ \partial_{\tilde{x}} \tilde{\delta}_k(\tilde{x}, t) = -\frac{3}{2}\, \tilde{\delta}_{k-1}(\tilde{x}, t)\, (2 + \tilde{\delta}_{k-1}(\tilde{x}, t))\, \partial_{\tilde{x}} \tilde{\delta}_{k-1}(\tilde{x}, t), $$
$$ \partial_{\tilde{x}}^2 \tilde{\delta}_k(\tilde{x}, t) = -\frac{3}{2}\, \tilde{\delta}_{k-1}(\tilde{x}, t)\, (2 + \tilde{\delta}_{k-1}(\tilde{x}, t))\, \partial_{\tilde{x}}^2 \tilde{\delta}_{k-1}(\tilde{x}, t) - 3\, (\tilde{\delta}_{k-1}(\tilde{x}, t) + 1)\, [\partial_{\tilde{x}} \tilde{\delta}_{k-1}(\tilde{x}, t)]^2. $$

One can easily see that the extrema of $\tilde{\delta}_k$ can be determined by studying the extrema and zeros of $\tilde{\delta}_{k-1}$:

• Local maxima of $\tilde{\delta}_k(\tilde{x}, t)$ correspond to negative local minima of $\tilde{\delta}_{k-1}(\tilde{x}, t)$ or to zeros of $\tilde{\delta}_{k-1}(\tilde{x}, t)$. However, a local maximum of a non-positive function cannot be a candidate for a global maximum of its modulus (compare (4.2)).

• Local minima of $\tilde{\delta}_k(\tilde{x}, t)$ correspond to positive maxima and negative minima of $\tilde{\delta}_{k-1}(\tilde{x}, t)$, which means that they are given by $\tilde{\delta}_{0m}^{I}(t)$, $\tilde{\delta}_{0m}^{II}(t)$, see (4.5).

Then, we compute

$$ \frac{\partial \tilde{\delta}_k(\tilde{x}, t)}{\partial \tilde{\delta}_{k-1}(\tilde{x}, t)} = -\frac{3}{2}\, \tilde{\delta}_{k-1}(\tilde{x}, t)\, (2 + \tilde{\delta}_{k-1}(\tilde{x}, t)), \qquad (4.16) $$

which implies that $\tilde{\delta}_k(\tilde{x}, t)$ is an increasing function of $\tilde{\delta}_{k-1}(\tilde{x}, t)$ for $\tilde{\delta}_{k-1}(\tilde{x}, t) < 0$ and a decreasing function of $\tilde{\delta}_{k-1}(\tilde{x}, t)$ for $\tilde{\delta}_{k-1}(\tilde{x}, t) > 0$. It means that there are only two candidates for the minimum of $\tilde{\delta}_1(\tilde{x}, t)$: one corresponding to the minimal negative value of $\tilde{\delta}_{k-1}(\tilde{x}, t)$ and one corresponding to the maximal positive value of $\tilde{\delta}_{k-1}(\tilde{x}, t)$. The smallest value of $\tilde{\delta}_k(\tilde{x}, t)$ evaluated at the boundaries of the regions $\tilde{A}^{I}$, $\tilde{A}^{II}$ and $\tilde{A}^{III}$ is still attained at $\tilde{x} = t$. In the case $k = 1$ (the first Newton-Raphson correction) we have negative minima ($\tilde{\delta}_{1m}^{I}(t)$ and $\tilde{\delta}_{1m}^{II}(t)$) corresponding to the positive maxima of $\tilde{\delta}_0(\tilde{x}, t)$. The minima are decreasing functions of $t$ and satisfy the conditions

$$ |\tilde{\delta}_{1m}^{II}(t)| < |\tilde{\delta}_{1m}^{I}(t)| \quad \text{for } t \in (2,\; -2 + 2^{4/3} + 2^{5/3}), \qquad (4.17) $$
$$ |\tilde{\delta}_{1m}^{II}(t)| \ge |\tilde{\delta}_{1m}^{I}(t)| \quad \text{for } t \in \langle -2 + 2^{4/3} + 2^{5/3},\; 4). \qquad (4.18) $$

In the case $k = 2$ we get minima at the same locations as for $k = 1$ and with the same monotonicity with respect to $t$. All this leads to the conclusion that the condition (4.2) for the first and second corrections will be satisfied for the same value $t = t_1^{(r)} = t_2^{(r)}$ (smaller than $t_0^{(r)}$):

$$ t_1^{(r)} = t_2^{(r)} \simeq 3.7298003, \qquad (4.19) $$

where $t_1^{(r)}$ is a solution to the equation

$$ \tilde{\delta}_1(t_1^{(r)}, t_1^{(r)}) = \tilde{\delta}_1\Bigl(\underbrace{4/3 + t_1^{(r)}/3}_{\tilde{x}_0^{II}},\; t_1^{(r)}\Bigr). \qquad (4.20) $$

Thus we obtained a new estimation of the magic constant:

$$ R_1^{(r)} = R_2^{(r)} = N_m (e_R + B) + \lfloor 2^{-2} N_m (t_1^{(r)} - 2) - 2^{-1} \rceil = 1597463174 = \texttt{0x5F375A86}, \qquad (4.21) $$

with the corresponding maximal relative errors:

$$ |\tilde{\delta}_1|_{\max} = |\tilde{\delta}_1(t_1^{(r)}, t_1^{(r)})| \simeq 1.75118 \cdot 10^{-3}, \qquad |\tilde{\delta}_2|_{\max} = |\tilde{\delta}_2(t_1^{(r)}, t_1^{(r)})| \simeq 4.60 \cdot 10^{-6}. \qquad (4.22) $$
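The values (4.13) and (4.22) can be reproduced empirically. The following test harness is our sketch (by (2.11) it is enough to sample $\tilde{x} \in \langle 1, 4)$); it measures the worst relative error (4.1) for a given constant R and number of iterations k. For k = 2 the measured maximum is additionally blurred by single-precision round-off (cf. Fig. 5):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static float invsqrt_k(float x, uint32_t R, int k)
    {
        float halfnumber = 0.5f * x;
        uint32_t i;
        memcpy(&i, &x, sizeof i);
        i = R - (i >> 1);
        memcpy(&x, &i, sizeof x);
        while (k-- > 0)
            x = x * (1.5f - halfnumber * x * x);
        return x;
    }

    /* Worst relative error sqrt(x)*y - 1, cf. (4.1), over random x in <1,4). */
    static double max_rel_err(uint32_t R, int k, int trials)
    {
        double worst = 0.0;
        for (int n = 0; n < trials; ++n) {
            double x = 1.0 + 3.0 * rand() / (double)RAND_MAX;
            double d = fabs(invsqrt_k((float)x, R, k) * sqrt(x) - 1.0);
            if (d > worst) worst = d;
        }
        return worst;
    }

    int main(void)
    {
        /* expected: ~3.42e-2, ~1.75e-3, ~4.6e-6; cf. (4.13) and (4.22) */
        printf("k=0, R0r: %.6g\n", max_rel_err(0x5F37642F, 0, 1000000));
        printf("k=1, R1r: %.6g\n", max_rel_err(0x5F375A86, 1, 1000000));
        printf("k=2, R2r: %.6g\n", max_rel_err(0x5F375A86, 2, 1000000));
        return 0;
    }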

5 Minimization of the absolute error

Similarly as in the previous section, we will derive optimal values of the magic constant minimizing the absolute error of the approximations $\tilde{y}_{00}(\tilde{x},t)$, $\tilde{y}_{01}(\tilde{x},t)$ and $\tilde{y}_{02}(\tilde{x},t)$, i.e., by minimizing

$$ \tilde{\Delta}_k(\tilde{x}, t) = \tilde{y}_{0k}(\tilde{x}, t) - 1/\sqrt{\tilde{x}}, \qquad k \in \{0, 1, 2\}. \qquad (5.1) $$

In other words, we will find $t_0^{(a)}$, $t_1^{(a)}$ and $t_2^{(a)}$ such that

$$ \forall_{t \ne t_k^{(a)}} \quad \max_{\tilde{x} \in \tilde{A}} |\tilde{\Delta}_k(\tilde{x}, t_k^{(a)})| < \max_{\tilde{x} \in \tilde{A}} |\tilde{\Delta}_k(\tilde{x}, t)| \qquad (5.2) $$

for $k = 0, 1, 2$, where $\tilde{A} = \tilde{A}^{I} \cup \tilde{A}^{II} \cup \tilde{A}^{III}$.

Figure 4: Relative error of the first Newton-Raphson correction of the inverse square root approximation. Grey points were generated by the function InvSqrt without line 7 of the code and with $R = R_1^{(r)}$, for 4000 random values $x \in \langle 2^{-126}, 2^{128})$.

Figure 5: Relative error of the second Newton-Raphson correction of the inverse square root approximation. Grey points were generated by the function InvSqrt with $R = R_1^{(r)} = R_2^{(r)}$, for 4000 random values $x \in \langle 2^{-126}, 2^{128})$. The visible blur is a consequence of round-off errors.

5.1 Zeroth approximation

In order to find $t_0^{(a)}$ minimizing the absolute error of $\tilde{y}_{00}(\tilde{x}, t)$, we will compute and study the extrema of $\tilde{\Delta}_0(\tilde{x}, t)$ inside the intervals $\tilde{A}^{I}$, $\tilde{A}^{II}$ and $\tilde{A}^{III}$, and compare them with the values of $\tilde{\Delta}_0(\tilde{x}, t)$ at the ends of the intervals. First, we compute the boundary values:

$$ \tilde{\Delta}_0(1, t) = \frac{t}{8} - \frac{1}{2}, \qquad \tilde{\Delta}_0(2, t) = \frac{t}{8} - \frac{2\sqrt{2} - 1}{4}, $$
$$ \tilde{\Delta}_0(t, t) = \frac{1}{2} - \frac{1}{\sqrt{t}}, \qquad \tilde{\Delta}_0(4, t) = \frac{t}{16} - \frac{1}{4}. \qquad (5.3) $$

All of them are negative (for $t < 4$). The derivatives of the error functions $\tilde{\Delta}_0^{I}(\tilde{x},t)$, $\tilde{\Delta}_0^{II}(\tilde{x},t)$ and $\tilde{\Delta}_0^{III}(\tilde{x},t)$ are given by:

$$ \partial_{\tilde{x}} \tilde{\Delta}_0^{I}(\tilde{x}, t) = 2^{-1} \tilde{x}^{-3/2} - 2^{-2}, $$
$$ \partial_{\tilde{x}} \tilde{\Delta}_0^{II}(\tilde{x}, t) = 2^{-1} \tilde{x}^{-3/2} - 2^{-3}, \qquad (5.4) $$
$$ \partial_{\tilde{x}} \tilde{\Delta}_0^{III}(\tilde{x}, t) = 2^{-1} \tilde{x}^{-3/2} - 2^{-4}. $$

Therefore the local extrema are located at

$$ \tilde{x}_{0a}^{I} = 2^{2/3}, \qquad \tilde{x}_{0a}^{II} = 2^{4/3}, \qquad \tilde{x}_{0a}^{III} = 4 \qquad (5.5) $$

(the locations do not depend on t). The second derivative is negative:

$$ \partial_{\tilde{x}}^2 \tilde{\Delta}_0^{K}(\tilde{x}, t) = -3 \cdot 2^{-2} \tilde{x}^{-5/2} < 0 $$

for $K \in \{I, II, III\}$. Therefore all these extrema are local maxima, given by

$$ \tilde{\Delta}_0(\tilde{x}_{0a}^{I}, t) = \tilde{y}_0^{I}(\tilde{x}_{0a}^{I}, t) - (\tilde{x}_{0a}^{I})^{-1/2} = \frac{3}{4} - \frac{3}{2\sqrt[3]{2}} + \frac{1}{8} t, $$
$$ \tilde{\Delta}_0(\tilde{x}_{0a}^{II}, t) = \tilde{y}_0^{II}(\tilde{x}_{0a}^{II}, t) - (\tilde{x}_{0a}^{II})^{-1/2} = \frac{1}{2} - \frac{3\sqrt[3]{2}}{4} + \frac{1}{8} t, \qquad (5.6) $$
$$ \tilde{\Delta}_0(\tilde{x}_{0a}^{III}, t) = \tilde{y}_0^{III}(\tilde{x}_{0a}^{III}, t) - (\tilde{x}_{0a}^{III})^{-1/2} = \frac{1}{16} t - \frac{1}{4}. $$

We see that $\tilde{\Delta}_0(\tilde{x}_{0a}^{III}, t) < 0$ for $t < 4$ (and negative maxima obviously are not important). A direct computation shows that for $t \le 4$ the first value of (5.6) is the greatest, i.e.,

$$ \max_{\tilde{x} \in \tilde{A}} \tilde{\Delta}_0(\tilde{x}, t) = \tilde{\Delta}_0(\tilde{x}_{0a}^{I}, t). \qquad (5.7) $$

Evaluating $\tilde{\Delta}_0(\tilde{x}, t)$ at the ends of the intervals $\tilde{A}^{I}$, $\tilde{A}^{II}$ and $\tilde{A}^{III}$, we find the global minimum:

$$ \min_{\tilde{x} \in \tilde{A}} \tilde{\Delta}_0(\tilde{x}, t) = \tilde{\Delta}_0(1, t) = -\frac{1}{2} + \frac{t}{8} < 0, \qquad (5.8) $$

which enables us to formulate the condition (5.2) in the following form:

$$ \max_{\tilde{x} \in \tilde{A}} |\tilde{\Delta}_0(\tilde{x}, t)| = \Bigl| \min_{\tilde{x} \in \tilde{A}} \tilde{\Delta}_0(\tilde{x}, t) \Bigr| \quad \Longleftrightarrow \quad \frac{3}{4} - \frac{3}{2\sqrt[3]{2}} + \frac{1}{8} t = \frac{1}{2} - \frac{t}{8}. \qquad (5.9) $$

Solving this equation we get:

$$ t_0^{(a)} = -1 + 3 \cdot 2^{2/3} \simeq 3.7622, \qquad (5.10) $$

which corresponds to a magic constant $R_0^{(a)}$ given by

$$ R_0^{(a)} = N_m (e_R + B) + \lfloor 2^{-2} N_m (t_0^{(a)} - 2) - 2^{-1} \rceil = 1597531127 = \texttt{0x5F3863F7}. \qquad (5.11) $$

The resulting maximal error of the zeroth approximation reads:

$$ \tilde{\Delta}_{0\,\max} = \max_{\tilde{x} \in \tilde{A}} |\tilde{\Delta}_0(\tilde{x}, t_0^{(a)})| = \frac{5}{8} - \frac{3}{4\sqrt[3]{2}} \simeq 0.0297246. \qquad (5.12) $$

Figure 6: Absolute error of the zeroth approximation of the inverse square root. Grey points were generated by the function InvSqrt without the two lines (6 and 7) of the code and with $R = R_0^{(a)}$, for 4000 random values $x \in \langle 1, 4)$.

5.2 Newton-Raphson corrections

The absolute error after the Newton-Raphson corrections is a non-positive function, similarly as the relative error. This function reaches its maximal value, equal to zero, in the intervals $\tilde{A}^{I}$ and $\tilde{A}^{II}$ (which corresponds to zeros of $\tilde{\Delta}_0^{I}$ and $\tilde{\Delta}_0^{II}$) and has a negative maximum in the interval $\tilde{A}^{III}$. The other extrema (minima) are decreasing functions of the parameter $t$. They are located at $x$ defined by the following equations:

$$ 0 = -\frac{75}{128} - \frac{27 t}{256} - \frac{9 t^2}{512} - \frac{t^3}{1024} + \frac{1}{2 x^{3/2}} + \frac{27 x}{64} + \frac{9 t x}{64} + \frac{3 t^2 x}{256} - \frac{27 x^2}{128} - \frac{9 t x^2}{256} + \frac{x^3}{32}, \quad \text{for } x \in \tilde{A}^{I}, \qquad (5.13) $$

Figure 7: Absolute error of the first Newton-Raphson correction of the inverse square root approximation. Grey points were generated by the function InvSqrt without line 7 of the InvSqrt code and with $R = R_1^{(a)}$, for 4000 random values $\tilde{x} \in \langle 1, 4)$.

$$ 0 = -\frac{1}{4} - \frac{3 t}{64} - \frac{3 t^2}{256} - \frac{t^3}{1024} + \frac{1}{2 x^{3/2}} + \frac{3 x}{32} + \frac{3 t x}{64} + \frac{3 t^2 x}{512} - \frac{9 x^2}{256} - \frac{9 t x^2}{1024} + \frac{x^3}{256}, \quad \text{for } x \in \tilde{A}^{II}. \qquad (5.14) $$

The condition (5.2) reduces to the equality of the local minimum (located in $\tilde{A}^{I}$) and $\tilde{\Delta}_1(1, t)$ (which is an increasing function of $t$). The equality is obtained for $t = t_1^{(a)}$, where

$$ t_1^{(a)} \simeq 3.74699138. \qquad (5.15) $$

The corresponding maximal error and magic constant are given by

$$ \Delta_{1\,\max} \simeq 0.001484497, \qquad (5.16) $$
$$ R_1^{(a)} = 1597499226 = \texttt{0x5F37E75A}. \qquad (5.17) $$

In the case of the second Newton-Raphson correction the minimization of errors is obtained similarly, by equating the local minimum of $\tilde{\Delta}_2^{I}(\tilde{x}, t)$ with $\tilde{\Delta}_2^{I}(1, t)$. Hence we get another value of the magic constant:

$$ R_2^{(a)} = 1597484501 = \texttt{0x5F37ADD5}, \qquad (5.18) $$

corresponding to

$$ t_2^{(a)} \simeq 3.73996986, \qquad \Delta_{2\,\max} \simeq 3.684 \cdot 10^{-6}. \qquad (5.19) $$
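The absolute-error constants (5.11), (5.17) and (5.18) can be checked in the same way. The sketch below is our harness; it measures $\max |\tilde{y}_{0k} - 1/\sqrt{\tilde{x}}|$, cf. (5.1), over random $\tilde{x} \in \langle 1, 4)$, pairing each constant with the matching number of iterations, and should reproduce (5.12), (5.16) and (5.19) up to sampling and round-off effects:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static float invsqrt_k(float x, uint32_t R, int k)
    {
        float halfnumber = 0.5f * x;
        uint32_t i;
        memcpy(&i, &x, sizeof i);
        i = R - (i >> 1);
        memcpy(&x, &i, sizeof x);
        while (k-- > 0)
            x = x * (1.5f - halfnumber * x * x);
        return x;
    }

    int main(void)
    {
        /* R0a, R1a, R2a from (5.11), (5.17), (5.18);
           expected maxima ~2.97e-2, ~1.48e-3, ~3.7e-6 */
        const uint32_t R[3] = { 0x5F3863F7, 0x5F37E75A, 0x5F37ADD5 };
        for (int k = 0; k < 3; ++k) {
            double worst = 0.0;
            for (int n = 0; n < 1000000; ++n) {
                double x = 1.0 + 3.0 * rand() / (double)RAND_MAX;
                double d = fabs(invsqrt_k((float)x, R[k], k) - 1.0 / sqrt(x));
                if (d > worst) worst = d;
            }
            printf("k=%d: max |Delta| = %.6g\n", k, worst);
        }
        return 0;
    }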

6 Conclusions

In this paper we have presented a theoretical interpretation of the InvSqrt code, giving a precise meaning to the two values of the magic constant existing in the literature

Figure 8: Absolute error of the second Newton-Raphson correction of the inverse square root approximation. Grey points were generated by the function InvSqrt with $R = R_2^{(a)}$, for 4000 random values $\tilde{x} \in \langle 1, 4)$.

and adding two more values of the magic constant. Using these magic constants we have conducted an error analysis for the Newton-Raphson method and proved that the error bounds for single-precision computation are acceptable. The magic constant can be easily incorporated into an existing floating-point multiplier or fused floating-point multiply-add unit; one only needs to replace the LUT with the magic constant.

References

[1] M. Sadeghian and J. Stine: Optimized Low-Power Elementary Function Approximation for Chebyshev Series Approximation, 46th Asilomar Conf. on Signals, Systems and Computers, 2012.
[2] K. Diefendorff, P. K. Dubey, R. Hochsprung and H. Scales: AltiVec Extension to PowerPC Accelerates Media Processing, IEEE Micro, pp. 85-95, Mar./Apr. 2000.
[3] D. Harris: A Powering Unit for an OpenGL Lighting Engine, Proc. 35th Asilomar Conf. on Signals, Systems, and Computers, pp. 1641-1645, 2001.
[4] D. M. Russinoff: A Mechanically Checked Proof of Correctness of the AMD K5 Floating Point Square Root Microcode, Formal Methods in System Design, Vol. 14, Issue 1, pp. 75-125, Jan. 1999.
[5] J.-M. Muller, N. Brisebarre, F. de Dinechin, C.-P. Jeannerod, V. Lefèvre, G. Melquiond, N. Revol, D. Stehlé and S. Torres: Software Implementation of Floating-Point Arithmetic, in: Handbook of Floating-Point Arithmetic, pp. 321-372, Oct. 2009.
[6] D. E. Metafas and C. E. Goutis: A Floating-Point Advanced Processor, Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, Vol. 10, Issue 1, pp. 53-65, Jan. 1995.
[7] M. Cornea, C. Anderson and C. Tsen: Software Implementation of the IEEE 754R Decimal Floating-Point Arithmetic, in: Software and Data Technologies, Vol. 10 of Communications in Computer and Information Science, pp. 97-109.

[8] J.-M. Muller, N. Brisebarre, F. de Dinechin, C.-P. Jeannerod, V. Lefèvre, G. Melquiond, N. Revol, D. Stehlé and S. Torres: Hardware Implementation of Floating-Point Arithmetic, in: Handbook of Floating-Point Arithmetic, pp. 269-320, Oct. 2009.
[9] D. H. Eberly: GPGPU Programming for Games and Science, CRC Press, 2015.
[10] N. Ide, M. Hirano, Y. Endo, S. Yoshioka, H. Murakami, A. Kunimatsu, T. Sato, T. Kamei, T. Okada and M. Suzuki: 2.44-GFLOPS 300-MHz Floating-Point Vector-Processing Unit for High-Performance 3D Graphics Computing, IEEE J. Solid-State Circuits, Vol. 35, No. 7, pp. 1025-1033, July 2000.
[11] S. Oberman, G. Favor and F. Weber: AMD 3DNow! Technology: Architecture and Implementations, IEEE Micro, Vol. 19, No. 2, pp. 37-48, Mar./Apr. 1999.
[12] W. Liu and A. Nannarelli: Power Efficient Division and Square Root Unit, IEEE Trans. Computers, Vol. 61, No. 8, pp. 1059-1070, Aug. 2012.
[13] T. J. Kwon and J. Draper: Floating-Point Division and Square Root Implementation using a Taylor-Series Expansion Algorithm with Reduced Look-Up Table, 51st Midwest Symposium on Circuits and Systems, 2008.
[14] L. Xuan and D. J. An: A Low Latency High-Throughput Elementary Function Generator based on Enhanced Double Rotation CORDIC, Symposium on Computer Applications and Communications, 2014.
[15] M. X. Nguyen and A. Dinh-Duc: Hardware-Based Algorithm for Sine and Cosine Computations using Fixed Point Processor, 11th International Conf. on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, 2014.
[16] B. Parhami: Computer Arithmetic: Algorithms and Hardware Designs, Oct. 2010.
[17] id Software: quake3-1.32b/code/game/q_math.c, Quake III Arena, 1999.
[18] C. Lomont: Fast Inverse Square Root, Purdue University, Tech. Rep., 2003. Available online: http://www.matrix67.com/data/InvSqrt.pdf, http://www.lomont.org/Math/Papers/2003/InvSqrt.pdf.
[19] S. Zafar and R. Adapa: Hardware Architecture Design and Mapping of "Fast Inverse Square Root" Algorithm, Advances in Electrical Engineering (ICAEE), 2014 International Conference on, IEEE, 2014, pp. 1-4.
[20] J. Blinn: Floating-Point Tricks, IEEE Computer Graphics and Applications 17 (4) (1997) 80-84.
[21] Q. Avril, V. Gouranton and B. Arnaldi: Fast Collision Culling in Large-Scale Environments Using GPU Mapping Function, ACM Eurographics Parallel Graphics and Visualization, Cagliari, Italy (2012).
[22] E. Ardizzone, R. Gallea, O. Gambino and R. Pirrone: Effective and Efficient Interpolation for Mutual Information based Multimodality Elastic Image Registration, 2003.

[23] J. L. V. M. Stanislaus and T. Mohsenin: High Performance Compressive Sensing Reconstruction Hardware with QRD Process, IEEE International Symposium on Circuits and Systems (ISCAS '12), May 2012.
[24] M. Robertson: A Brief History of InvSqrt, Bachelor Thesis, Univ. of New Brunswick, 2012.
[25] D. Eberly: Fast Inverse Square Root, Geometric Tools, LLC (2010), http://geometrictools.com/Documentation/FastInverseSqrt.pdf.
[26] C. McEniry: The Mathematics Behind the Fast Inverse Square Root Function Code, Tech. Rep., 2007.
[27] B. Self: Efficiently Computing the Inverse Square Root Using Integer Operations, May 31, 2012.
