Computing Transcendental Functions

Iris Oved [email protected] 29 April 1999

Exponential, logarithmic, and trigonometric functions are transcendental. They are peculiar in that their values cannot be computed in terms of finite compositions of arithmetic and/or algebraic operations. So how might one go about computing such functions? There are several approaches to this problem, three of which will be outlined below. We will begin with Taylor approximations, then take a look at a method involving Minimax Polynomials, and we will conclude with a brief consideration of CORDIC approximations. While the discussion will specifically cover methods for calculating the sine function, the procedures can be generalized for the calculation of other transcendental functions as well.

1 Taylor Approximations

One method for computing the sine function is to find a polynomial that allows us to approximate the sine locally. A polynomial is a good approximation to the sine function near some point, x = a, if the derivatives of the polynomial at that point are equal to the derivatives of the sine curve at that point. The more derivatives the polynomial has in common with the sine function at the point, the better the approximation. Thus the higher the degree of the polynomial, the better the approximation. A polynomial found by this method is called a Taylor Polynomial. Suppose we are looking for the Taylor Polynomial of degree 1 approximating the sine function near x = 0. We will call the polynomial p(x) and the sine function f(x). We want to find a linear function p(x) such that p(0) = f(0) and p'(0) = f'(0). Since we are looking for a polynomial of degree 1, we know that it will be of the form p(x) = Bx + A. Now we set p(0) = f(0). Since we know that f(0) = 0, we have

p(0) = 0.

Substituting x = 0 into p(x) gives

p(0) = 0B + A.

Thus, in our polynomial, we know the coefficient A = 0. To find the value of the coefficient B, we take the derivative of our linear function and we have: p'(x) = B. Now we set p'(0) = f'(0). Since we know that the derivative of the sine function is the cosine function, we know that f'(0) = cos(0) = 1. So we have

p'(0) = 1.

Substituting x = 0 into p'(x), we get

p'(0) = B.

Thus we have the coefficient B = 1. Now that we have both coefficients, we arrive at the Taylor Polynomial of degree 1 that approximates the sine curve near x = 0:

p(x) = x.

Notice that this is the line tangent to the sine function at x = 0.[1]

[1] Indeed, the degree 1 Taylor Polynomial of any function is necessarily the line tangent to the function at the point about which the above procedures are performed.

Figure 1.1. Sine curve and degree 1 Taylor Polynomial with −2π ≤ x ≤ 2π

We can see in Figure 1.1 that the degree 1 Taylor Polynomial is a good approximation to the sine curve near x = 0, but clearly it gets worse as we move farther away from that point. Higher degree polynomials have a higher number of meaningful derivatives (derivatives that are not identically zero). Higher degree Taylor Polynomials thus have more derivatives in common with the function that is being approximated. This means that the higher the degree of the Taylor Polynomial, the more accurately it approximates the sine function.

To demonstrate this, let us find the Taylor Polynomial of degree 3 for the sine curve, centered at x = 0. Since we are looking for a polynomial of degree 3, we know that it will be of the form p(x) = Dx^3 + Cx^2 + Bx + A. As before, we set p(0) = f(0), and conclude that in our polynomial we have the coefficient A = 0. Next, to find the coefficient B, we take the first derivative of our polynomial and get p'(x) = 3Dx^2 + 2Cx + B. Now we set p'(0) = f'(0), and we have the coefficient B = 1. Now, to find the coefficient C, we take the second derivative of our polynomial and get p''(x) = 6Dx + 2C. Next we set p''(0) = f''(0). Since we know that the derivative of the cosine function is the negative of the sine function, we know that f''(0) = −sin(0) = 0. So we have

p''(0) = 0.

Substituting x = 0 into p''(x), we get

p''(0) = 2C.

Thus we have the coefficient C = 0. Finally, to find the coefficient D, we take the third derivative of our polynomial and get p'''(x) = 6D. Now we set p'''(0) = f'''(0).

Since we know that the derivative of the negative of the sine function is the negative of the cosine function, we know that f'''(0) = −cos(0) = −1. So we have p'''(0) = −1. Substituting x = 0 into p'''(x), we get

p'''(0) = 6D.

Thus we have the coefficient D = −1/6. By this method, we have discovered that the degree 3 Taylor Polynomial that approximates the sine function near x = 0 is

p(x) = −(1/6)x^3 + 0x^2 + x + 0.

Figure 1.2. Sine curve and degree 3 Taylor Polynomial with −2π ≤ x ≤ 2π

Although the degree 3 Taylor Polynomial is a better approximation than the degree 1 Taylor Polynomial, it still becomes very inaccurate as we move farther away from x = 0. The general formula for the Taylor Polynomial of degree n approximating some function f(x) for x near 0 is

f(0) + f'(0)x + (f''(0)/2!)x^2 + (f'''(0)/3!)x^3 + ··· + (f^(n)(0)/n!)x^n.

So suppose we want to find the Taylor Polynomial of degree 9 that approximates the sine function near x = 0.[2] Substituting f(x) = sin(x) and n = 9 into the general formula gives

x − x^3/3! + x^5/5! − x^7/7! + x^9/9!.

[2] Since sin(−x) = −sin(x), there are no interesting even-degree Taylor Polynomials at x = 0.
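As a concrete illustration (our addition, not part of the original text), the degree 9 polynomial above can be evaluated with a few lines of C++; the function name taylorSine9 is ours, and the standard library sine is called only to compare against the approximation.

    #include <iostream>
    #include <cmath>

    // Degree 9 Taylor Polynomial for sin(x) centered at x = 0:
    //   x - x^3/3! + x^5/5! - x^7/7! + x^9/9!
    // Each term is obtained from the previous one by multiplying by
    // -x^2/((2k)(2k+1)), which avoids recomputing factorials.
    double taylorSine9(double x)
    {
        double term = x;                     //current term of the series
        double sum = x;                      //running value of the polynomial
        for (int k = 1; k <= 4; k++)         //adds the x^3, x^5, x^7, x^9 terms
        {
            term *= -x*x/((2*k)*(2*k + 1));
            sum += term;
        }
        return sum;
    }

    int main()
    {
        for (double x = 0; x <= 6.3; x += 0.7)
            std::cout << x << "  " << taylorSine9(x)
                      << "  " << std::sin(x) << std::endl;
        return 0;
    }

Near x = 0 the two printed columns agree to several digits; toward x = 2π they drift apart, exactly as Figure 1.3 suggests.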

Figure 1.3. Sine curve and degree 9 Taylor Polynomial with −2π ≤ x ≤ 2π

Figure 1.3 shows that the Taylor Polynomial of degree 9 is a better approximation to the sine curve than the linear and cubic polynomials. Still, even this high degree polynomial provides approximations that become worse as we move farther away from x = 0. We can arrive at better and better approximations to the sine curve if we find Taylor Polynomials of higher and higher degree. However, no matter how large the degree of the polynomial, since the sine curve is periodic, while polynomials are not, there will always be a point at which the graph of the polynomial will deviate significantly from the sine curve in some direction. This is to be expected because the sine function is transcendental. No finite composition of arithmetic and/or algebraic operations will capture the behavior of a transcendental function.

One solution to this problem is to limit the sine curve to a domain of x-values from which we can infer the sine of any real number. Since the sine function has a period of 2π, it is clear that we can infer the sine of any real number if we can calculate the sine of any real x in the domain 0 ≤ x ≤ 2π. In this domain, the sine function has a finite number of turns, so a high degree polynomial would be an adequate approximation to the curve for all real x in the domain. However, we would benefit by further reducing the domain of inputs so that each of the inputs will be even closer to x = 0. This is important because Taylor Polynomials are good approximations near a point, but not so good farther away from that point.

Figure 1.4. Sine curve with 0 ≤ x ≤ 2π

If we look at the sine curve in the domain 0 ≤ x ≤ 2π, we can see that the domain can be further limited. Notice that all of the sine-values at inputs between π and 2π are negations of values between 0 and π. As long as we make a note to negate our solution, we can infer the sine of all of the x-values in the domain π ≤ x ≤ 2π from the sine of the x-values in the domain 0 ≤ x ≤ π. A polynomial of high degree that resembles the sine curve in this domain will provide information from which we can infer a reasonable approximation to the sine of any real x. Still, there is some symmetry in the sine curve that we can exploit to shrink the domain further. The sine of the x-values between π/2 and π can be inferred from the sine of the x-values between 0 and π/2 by reflecting across the line x = π/2. Now the Taylor Polynomial only needs to be a good approximation to the sine curve over the limited domain 0 ≤ x ≤ π/2 in order for us to compute the sine function for any real input.
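For reference, the reductions described above amount to applying the following identities, in order, to any real input x:

sin(x + 2πk) = sin(x) for every integer k,
sin(x) = −sin(x − π) for π ≤ x ≤ 2π,
sin(x) = sin(π − x) for π/2 ≤ x ≤ π.

After applying them, we are left with an equivalent input in the domain 0 ≤ x ≤ π/2 and, possibly, a sign to restore at the end.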

Figure 1.5. Sine curve with 0 ≤ x ≤ π/2

Of course, one can always find further tricks for solving such problems, and such tricks usually involve further complexities. Perhaps for some practical reasons, accuracy carries more weight than complexity. In these cases, it may be reasonable to find several Taylor Polynomials that are good approximations near different points, and split the curve accordingly. For example, one might want to find a Taylor Polynomial for the sine curve at x = 0 to approximate the sine of values in the first part of the domain, and another Taylor Polynomial for the sine curve at x = π/2 to approximate the sine of values in the second part of the domain. Suppose we want to use the degree 5 Taylor Polynomial at x = 0 and the degree 4 Taylor Polynomial at x = π/2.[3] A reasonable place to divide the domain is the point at which these two polynomials intersect. Setting the degree 5 Taylor Polynomial at x = 0 equal to the degree 4 Taylor Polynomial at x = π/2 will allow us to find the x-value at which these polynomials intersect. Entering these polynomials into a graphing calculator and zooming in, we can guess that the point of intersection is approximately at x = 0.92.

Below is an implementation of the above method as a TI-82 calculator program.[4] The polynomials are evaluated by Horner's method, which is to write a polynomial of the form ax^4 + bx^3 + cx^2 + dx + e in the form (((ax + b)x + c)x + d)x + e. Rewriting polynomials in this way increases the efficiency of the computation by reducing the number of operations. Rather than computing a fourth degree polynomial using 10 multiplications, Horner's method gives the same result with only 4 multiplications (the number of additions is unchanged).

:Prompt X
:If (X<0):pi-X->X                      //limit to positive
:2pi(fPart(X/(2pi)))->X                //limit to 0<=x<=2pi
:1->N                                  //initialize negation flag
:If(X>pi)
:Then                                  //limit to 0<=x<=pi
:X-pi->X
:-N->N                                 //flip negation flag
:End
:If(X>pi/2):pi-X->X                    //limit to 0<=x<=pi/2
:If(X<=0.92)
:((((((1/120)X)X+(-1/6))X)X+1)X)->G    //degree 5 Taylor at x=0
:If(X>0.92)
:Then
:(X-pi/2)^2->X                         //degree 4 Taylor at x=Pi/2
:(X(X/24-0.5))+1->G
:End
:Disp N*G

[3] Since sin(π/2 + x) = sin(π/2 − x), there are no interesting odd-degree Taylor Polynomials at x = π/2.

[4] Note: When entering the program, use the π key in place of "pi", and use the "STO→" key in place of "->".
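For readers who would rather experiment on a computer than on a calculator, here is a rough C++ sketch of the same scheme (the function name taylorSine is ours; the standard library sine appears only to check the answer). The domain-limiting steps mirror the calculator program line by line.

    #include <iostream>
    #include <cmath>

    const double PI = 3.14159265358979323846;

    // Same method as the TI-82 program: reduce the input to [0, pi/2],
    // then evaluate one of two Taylor Polynomials by Horner's method.
    double taylorSine(double x)
    {
        int sign = 1;
        if (x < 0)                     //sin(pi - x) = sin(x), and pi - x > 0 here
            x = PI - x;
        x -= 2*PI*floor(x/(2*PI));     //limit to 0 <= x <= 2*pi
        if (x > PI)                    //limit to 0 <= x <= pi, remembering the sign
        {
            x -= PI;
            sign = -1;
        }
        if (x > PI/2)                  //limit to 0 <= x <= pi/2
            x = PI - x;

        double g;
        if (x <= 0.92)                 //degree 5 Taylor at x = 0
            g = ((((x/120)*x - 1.0/6)*x)*x + 1)*x;
        else                           //degree 4 Taylor at x = pi/2
        {
            double t = (x - PI/2)*(x - PI/2);
            g = t*(t/24 - 0.5) + 1;
        }
        return sign*g;
    }

    int main()
    {
        std::cout << taylorSine(-21.0) << "  " << std::sin(-21.0) << std::endl;
        return 0;
    }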

2 Minimax Polynomials

Taylor Polynomials become less accurate as we get farther away from the point at which they are centered. This means that, if we have a Taylor Polynomial for the sine curve centered at x = 0 over some interval around x = 0, we expect that the maximum errors between the polynomial and the sine function will occur at the endpoints of the interval. If we have the degree 5 Taylor Polynomial at x = 0, the error function is

err(x) = sin(x) − ((1/120)x^5 − (1/6)x^3 + x).

Since we can infer the sine of any real x from the sine of x-values between 0 and π/2, the farthest point from x = 0 that we need to consider is at x = π/2. Comparing the graphs of sin(x) and (1/120)x^5 − (1/6)x^3 + x, we see that the maximum distance between them, on the domain 0 ≤ x ≤ π/2, indeed occurs at π/2. Substituting x = π/2 into err(x) gives

err(π/2) ≈ −0.004525.

Thus the maximum error (in absolute value) of the degree 5 Taylor approximation to the sine curve centered at x = 0 is 0.004525.

Minimax Polynomials are polynomials that minimize the maximum error. Rather than being highly accurate near a specific point and increasingly inaccurate away from that point, these approximations are reasonably accurate over an entire domain. According to Chebyshev's Equioscillation Theorem,[5] for every continuous function over the interval a ≤ x ≤ b, there exist Minimax Polynomials of every degree, and the Minimax Polynomial of degree ≤ n is uniquely characterized by the property that there exist at least n + 2 points in the domain a ≤ x ≤ b at which the errors between the polynomial and the function oscillate in sign and are equal in magnitude. Symbolically, if m(x) is the degree n Minimax Polynomial approximating a function f(x) on the interval a ≤ x ≤ b, then there exist n + 2 points, a ≤ x1 < x2 < ··· < xn+2 ≤ b, such that:[6]

f(x1) − m(x1) = m(x2) − f(x2) = f(x3) − m(x3) = ....

Like Taylor Polynomials, Minimax Polynomials of higher degree are better approximations than ones of lower degree.

[5] I don't have a good reference for this theorem. My teacher, Alexander Perlis, learned about it from a fellow graduate student, Dan Coombs, who learned about it in a class several years ago.

[6] Notice the switching in the order of subtraction.

2.1 Linear Minimax Polynomials

Suppose we want to find the degree 1 Minimax Polynomial for the sine curve over the domain 0 ≤ x ≤ π/2. This means that we are looking for a polynomial of the form p(x) = mx + b such that the maximum error between the polynomial and the sine curve is minimized. This would be the optimal straight line approximation to the sine curve over that domain. The error between the sine curve and the polynomial can be expressed by the function

err(x) = (mx + b) − sin(x).     (1)

We know from the Equioscillation Theorem that the error between the Minimax Polynomial and the sine curve will reach extrema at three points. Imagining all possible lines, it seems clear[7] that the maximum error will occur at the two endpoints of the domain (at x = 0 and at x = π/2) and at some other x in the domain. Let us call this third critical value x = q, where 0 < q < π/2.

Since we are looking for the polynomial with the minimum of possible maximum errors, we must find the point x = q at which the magnitude of the error reaches its interior maximum. At such a point the slope of the error curve is zero, so the derivative of the error function is zero there. To find this point, we take the derivative of (1) and set

err'(q) = 0.

Thus m − cos(q) = 0, so that q = arccos(m). Thus we know that one of the points at which the error between our Minimax Polynomial and the sine curve will reach its extremum is at x = arccos(m).

We want to find the polynomial that makes the error at x = arccos(m) equal in absolute value to the error at each of the two endpoints of the domain (at x = 0 and at x = π/2). To find which values of m and b make up the linear Minimax Polynomial, we set the three maximum errors equal in absolute value to one another. The error magnitude at x = 0 is

err(0) = (0m + b) − sin(0) = b,

the error magnitude at x = arccos(m) is

err(arccos(m)) = sin(arccos(m)) − (m·arccos(m) + b),

and the error magnitude at x = π/2 is

err(π/2) = (m(π/2) + b) − sin(π/2) = m(π/2) + b − 1.

(The order of subtraction alternates so that the signs of the actual errors alternate, in accordance with Chebyshev's theorem.)

[7] I don't know a rigorous proof for this.

Equating the three error magnitudes gives us a system of two equations from which we can find values for m and b:

b = sin(arccos(m)) − (m·arccos(m) + b)     (2)

b = m(π/2) + b − 1.     (3)

To solve for m, we can cancel out the bs in equation (3), add 1 to both sides, and divide by π/2 to get

m = 2/π.

Now that we have the value m = 2/π, we can find the value of b. Substituting m = 2/π into equation (2), adding b to both sides, and dividing by 2 gives

b = (sin(arccos(2/π)) − (2/π)arccos(2/π)) / 2 ≈ 0.1052568312.

Now that we have values for m and b, we have arrived at the degree 1 Minimax Polynomial for the sine curve over the domain 0 ≤ x ≤ π/2:

p(x) = (2/π)x + 0.10527.

Figure 2.1. Sine curve and linear Minimax Polynomial

Since the maximum error between p(x) and sin(x) is b, we know that the maximum error of the degree 1 Minimax Polynomial for the sine curve over the domain 0 ≤ x ≤ π/2 is 0.10527. Not bad for a linear polynomial!
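As a quick numerical check (our addition, not part of the original text), a few lines of C++ evaluate the error of this line at the three critical points; the printed values should come out approximately 0.10526, −0.10526, and 0.10526, confirming the equioscillation.

    #include <iostream>
    #include <cmath>

    // Print the error (mx + b) - sin(x) of the linear Minimax Polynomial
    // at x = 0, x = arccos(m), and x = pi/2.
    int main()
    {
        const double pi = 4*atan(1.0);
        double m = 2/pi;
        double b = (sin(acos(m)) - m*acos(m)) / 2;   //same formula as in the text
        double points[3] = { 0.0, acos(m), pi/2 };
        for (int i = 0; i < 3; i++)
        {
            double x = points[i];
            std::cout << "err(" << x << ") = " << (m*x + b) - sin(x) << std::endl;
        }
        return 0;
    }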

2.2 Quadratic Minimax Polynomials

The higher the degree, n, of a Minimax Polynomial, the better the approximation. This is because Chebyshev's theorem allows us to find the polynomial approximation that is optimal among all polynomials of degree ≤ n.

Suppose we want to find the degree 2 Minimax Polynomial for the sine curve over the domain 0 ≤ x ≤ π/2. In order to do this we must find the points at which the error reaches local extrema and set the errors at these points equal to one another. We know from Chebyshev's Equioscillation Theorem that the error between the degree 2 Minimax Polynomial and the sine curve will reach extrema at four points in the domain. Imagining the possible parabolas, it seems plausible[8] that these extrema will occur at the endpoints of the domain (at x = 0 and at x = π/2) and at two additional points in the domain. Let us call these two additional points x = w and x = z, where 0 < w < z < π/2. We know that the polynomial will be of the form p(x) = ax^2 + bx + c. The error between this polynomial and the sine curve can be expressed by the function

err(x) = ax^2 + bx + c − sin(x).

[8] Again, I don't know a rigorous proof.

The error function reaches its extrema whenever its first derivative is equal to zero and its second derivative is not equal to zero. The first derivative of the error function is

err'(x) = 2ax + b − cos(x).

By setting the derivative of the error function equal to zero and substituting x = w and x = z, we have the following two equations:

2aw + b = cos(w) (4)

2az + b = cos(z).     (5)

We also know that the maximum error will be equal to c, since we have a maximum error at x = 0. We want to set err(0) = −err(w) = err(z) = −err(π/2). So we have:

c = sin(w) − (aw^2 + bw + c) = az^2 + bz + c − sin(z) = 1 − ((π^2/4)a + (π/2)b + c).     (6)

From equations (4), (5) and (6), we have a system of five equations and five unknown variables. We can eliminate two variables by rewriting equations (4) and (5),



a = (cos(z) − cos(w)) / (2z − 2w)     (7)

b = (z·cos(w) − w·cos(z)) / (z − w),     (8)

and then substituting into (6) to obtain the system:

0 = az^2 + bz − sin(z),     (9)

which comes from err(0) = err(z),

sin(w) − aw^2 − bw = 1 − (π^2/4)a − (π/2)b,     (10)

which comes from −err(w) = −err(π/2), and

c = (sin(w) − aw^2 − bw) / 2,     (11)

which comes from err(0) = −err(w). Keeping the substitutions (7) and (8) in mind, equations (9), (10), and (11) are a system of three equations in the three unknowns w, z, and c. But c does not appear in (9) and (10), so that gives us a system of two equations in two unknowns, which needs to be solved. (After doing so, we obtain the value of c from (11).) Solving (9) and (10) would be an exhausting (and perhaps impossible) exercise. The following list of commands in Mathematica is a reasonable method for approximating the correct values.

In[1]:= << Graphics`ImplicitPlot`

::Note: When entering the command, put back-quotes around `ImplicitPlot`.

In[2]:= A := (Cos[z]-Cos[w])/(2z-2w)

::This is the value of the coefficient a in terms of w and z.

In[3]:= B:= (z*Cos[w]-w*Cos[z])/(z-w)

::This is the value of the coefficient b in terms of w and z.

In[4]:= E1 = Simplify[(2z-2w)(A*z^2+B*z-Sin[z])]

::Thus E1 == 0 corresponds to equation (9).

In[5]:= E2 = Expand[Simplify[4(2z-2w) (Sin[w]-A*w^2-B*w-1+Pi^2*A/4+Pi*B/2)]]

::Thus E2 == 0 corresponds to equation (10).

In[6]:= ImplicitPlot[{E1==0, E2==0}, {w, 0, Pi/2}, {z, 0, Pi/2}]

::This graph shows where the two equations intersect. The diagonal line z == w should be ignored, since we must have w < z.[9]

[9] The line appears because our definition of E1 and E2 has a (z−w) factor. The only reason for including that factor is that it makes some of the intermediate Mathematica output less messy. The user is encouraged to try these commands without including that factor.

Figure 2.2.1. Intersection when E1 == 0 and E2 == 0

We want to find the point (w, z) at which the curves E1 == 0 and E2 == 0 intersect. We find this by repeatedly zooming in on the graph, and each time guessing the point of intersection.

In[7]:= ImplicitPlot[{E1==0, E2==0}, {w, 0.361145, 0.361146}, {z, 1.13333, 1.13334}, AspectRatio->.8]

In[8]:= A/. {w->0.361145, z->1.13334}

::This will output the value for the coefficient a ≈ −0.331429.

In[9]:= B/. {w->0.361145, z->1.13334}

::This will output the value for the coefficient b ≈ 1.17488.

In[10]:= c := (Sin[w]-A*w^2-B*w)/2

::This is the value of the coefficient c from equation (11).

In[11]:= c /. {w->0.361145, z->1.13334}

::This will output the value for the coefficient c ≈ −0.0138649.

Now that we have values for a, b, and c, we know that the quadratic Minimax Polynomial for the sine curve is:

p(x) = −0.331429x^2 + 1.17488x − 0.0138649.

If we plot the error function, err(x) = (−0.331429x^2 + 1.17488x − 0.0138649) − sin(x), we can see that the errors at x = 0, x = w, x = z, and x = π/2 are equal in absolute value.
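The same check can be done numerically (our addition, using the rounded coefficients quoted above); each of the four printed errors should come out near ±0.0139.

    #include <iostream>
    #include <cmath>

    // Evaluate the error of the quadratic Minimax Polynomial at the four
    // equioscillation points x = 0, w, z, pi/2 (coefficients from the text).
    int main()
    {
        const double pi = 4*atan(1.0);
        const double a = -0.331429, b = 1.17488, c = -0.0138649;
        double points[4] = { 0.0, 0.361145, 1.13334, pi/2 };
        for (int i = 0; i < 4; i++)
        {
            double x = points[i];
            std::cout << "err(" << x << ") = " << (a*x*x + b*x + c) - sin(x) << std::endl;
        }
        return 0;
    }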

Figure 2.2.2. Error of quadratic Minimax to sine curve

Since the maximum error is equal in absolute value to the coefficient c, we know that the maximum error between this polynomial and the sine curve is 0.0138649. Compare this to a degree 2 Taylor approximation.[10] Figure 2.2.3 shows our Minimax Polynomial and the sine curve. We can see that there are four points at which the distance between the polynomial and the sine curve is maximized. We can also see that, in accordance with Chebyshev's theorem, the errors at these points oscillate in sign and are equal in magnitude.

[10] There are many degree 2 Taylor approximations since we have to pick a point of expansion. Centered at x = 0, the degree 2 Taylor Polynomial is the same as the degree 1 Taylor (since sin(x) is an odd function), so it is just y = x, whose maximum deviation from sin(x) over 0 ≤ x ≤ π/2 is 0.570796.

Figure 2.2.3. Sine curve and quadratic Minimax

3 COordinate Rotation DIgital Computer (CORDIC) Approximations

The final method that we will consider for computing the sine function is an approach that is quite different from the polynomial methods described earlier. This approach is called Coordinate Rotation Digital Computer. As its name suggests, this method for computing transcendental functions involves an algorithm for rotating the coordinates of an angle. By starting with an angle α = 0 and rotating the coordinates of the angle by a series of smaller and smaller positive or negative angles, we can efficiently bring the angle α arbitrarily close to any angle between 0 and π/2. By this algorithm, we can arrive at an accurate approximation to many of the common transcendental functions. The formula for rotating the coordinates (X, Y) by an angle θ is given by

X' = X cos(θ) − Y sin(θ)

Y' = X sin(θ) + Y cos(θ),

where (X', Y') are the new coordinates after rotating by angle θ. We can factor out cos(θ) from both coordinates leaving

X' = cos(θ)(X − Y tan(θ))

Y' = cos(θ)(X tan(θ) + Y).

Factoring out cos(θ) is not necessary for the computation. However, it will be made clear shortly that doing so increases the overall computational efficiency. The concept behind the CORDIC approximation is that by following a specific series of angle rotations, one can arrive at an accurate approximation to any angle between 0 and π/2. We begin by storing a table of values similar to the following:

θ                  tan(θ)     cos(θ)
arctan(2^0)        2^0        cos(arctan(2^0))
arctan(2^-1)       2^-1       cos(arctan(2^-1))
arctan(2^-2)       2^-2       cos(arctan(2^-2))
arctan(2^-3)       2^-3       cos(arctan(2^-3))
...                ...        ...
arctan(2^-59)      2^-59      cos(arctan(2^-59))

Table 3.1

Using this table of values, we can rotate the coordinates of an angle α by a series of positive or negative increments of arctan(2^-i), which allows us to hone in on any angle between 0 and π/2. We choose to store values using exponents

of base 2 only because the binary computation of such increments involves a simple "bitshift" operation rather than a multiplication or division. We can perform the angle rotations using the rotation formula, and once each angle in the table has been either added or subtracted, we arrive at the approximate coordinates for the angle we are interested in.

Since every angle in the table is used exactly once, the rotation formula shows that each coordinate is multiplied by cos(θ) exactly once for each angle θ in the table. Since cos(θ) = cos(−θ), it does not matter whether we are rotating clockwise or counterclockwise; the cosine factor remains the same. This means that we can factor out cos(θ) from the rotation formula, store the product of the cosines, and multiply the final coordinates by this cosine product. Storing the product of the cosines allows us to perform the computation without multiplying the coordinates by cos(θ) after each rotation, thereby adding efficiency to the process.

Since the coordinates (X, Y) of the angle α, as a point on the unit circle, are equal to (cos α, sin α), it is clear that we can use this method to solve for sine, cosine, and tangent. It turns out that variations of this method can be used to compute other common transcendental functions as well. As before, we will illustrate the CORDIC method by concentrating on the sine function.
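The C++ program in section 3.2 keeps everything in floating point, so the multiplications by 2^-i are ordinary multiplications. To show where the "bitshift" remark pays off, here is a rough fixed-point sketch of a single rotation step (our own illustration, not part of the original program): the coordinates are stored as 32-bit integers scaled by 2^30, so multiplying by 2^-i becomes an arithmetic right shift. The angle accumulator and the table lookup are omitted.

    #include <cstdint>

    // One CORDIC rotation step in fixed-point arithmetic (illustrative sketch).
    // x and y hold the coordinates as signed 32-bit integers scaled by 2^30,
    // so multiplying by tan(theta_i) = 2^-i reduces to a right shift by i bits.
    void cordicStep(int32_t &x, int32_t &y, int i, bool positiveRotation)
    {
        int32_t dx = y >> i;       //Y * 2^-i, computed with a bitshift
        int32_t dy = x >> i;       //X * 2^-i, computed with a bitshift
        if (positiveRotation)      //counterclockwise rotation by arctan(2^-i)
        {
            x -= dx;
            y += dy;
        }
        else                       //clockwise rotation by arctan(2^-i)
        {
            x += dx;
            y -= dy;
        }
    }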

3.1 An Example Using CORDIC Approximation

Suppose we want to compute sin(−21) using the CORDIC approach. Before we begin, we must find the sine input within the domain 0 ≤ x ≤ π/2 that will output a value from which we can infer sin(−21). The reasons for limiting the sine curve to this domain were explained earlier, in the discussion of Taylor approximations (section 1).

Since our input is a negative number, we start by finding a non-negative input value that has a sine output that is equivalent to ±sin(−21). We can find such an input value easily by multiplying −21 by −1 and adding π. Then the input that we have to work with is 21 + π, which is approximately 24.1416.

Next we find the input between 0 and 2π that has an output from which we can infer sin(24.1416). We can find this x-value by repeatedly subtracting 2π from 24.1416 until we get a value between 0 and 2π. We can achieve the same result if we take the integer part of 24.1416/(2π), which is 3, and subtract 2π times that value from our input: 24.1416 − 6π = 5.2920. Now the input that we have to work with is 5.2920.

Next, we shift the domain to 0 ≤ x ≤ π. Since 5.2920 is between π and 2π, and the sine is negative in this domain, we make a note to negate our final solution. We can now shift the input into the domain 0 ≤ x ≤ π by subtracting π from our input, which leaves 5.2920 − π = 2.1504.

Now we can limit our domain for the final time to 0 ≤ x ≤ π/2. We do this by reflecting about the line x = π/2, in other words, by changing our input, 2.1504, to π minus that input: π − 2.1504 = 0.9911.

The number that we will work with for the remainder of the calculation is 0.9911. Once we have computed the sine of 0.9911, we simply negate our solution and the resulting value will be sin(−21). For the purposes of this example, it will be sufficient to calculate the sine using a shortened table of angle rotations. By the end of this illustration, it should be clear that a longer table is likely to make the approximation more accurate. The following is a table of the first five angle rotations. The table can be extended by following the simple incrementation pattern.[11]

[11] These are approximate decimal values. A computational version should store many more decimal digits than shown in these tables so as to achieve higher precision in the final answer.

θi                       tan(θi)    cos(θi)
arctan(2^0) = 0.785      2^0        cos(arctan(2^0)) = 0.7071
arctan(2^-1) = 0.4636    2^-1       cos(arctan(2^-1)) = 0.8944
arctan(2^-2) = 0.245     2^-2       cos(arctan(2^-2)) = 0.9701
arctan(2^-3) = 0.1244    2^-3       cos(arctan(2^-3)) = 0.9923
arctan(2^-4) = 0.0624    2^-4       cos(arctan(2^-4)) = 0.9981

cosine product for first five rotations: ≈ 0.6076

Table 3.1.1

Now we will step through this table, rotating by each angle θ and arriving at more and more accurate approximations to the coordinates of 0.9911. For each element in the table, if our angle approximation is less than 0.9911, we will rotate in the positive direction (counterclockwise). Otherwise, we will rotate in the negative (clockwise) direction. We are going to begin the rotations at α0 = 0. To avoid confusion, it is important to keep in mind that the angle α refers to the angle that is being rotated to approximate 0.9911, and the angle θi is the angle in the table at position i by which we rotate the angle α. We know that, on the unit circle, when we have an angle α = 0, the corresponding coordinates of the point on the circle are (1, 0). This means that when we begin the CORDIC approximation, we have

α0 = 0

X-coordinate = 1
Y-coordinate = 0.

(1) The first angle in our table is θ0 = arctan(2^0), or arctan(1), which is approximately 0.785. Since our initial approximation is α0 = 0, and our desired angle is 0.9911, we will rotate the coordinates of α in the positive direction. Once again, the rotation formula without the cos(θ) factor is

X' = X − Y tan(θ)

Y' = X tan(θ) + Y,


where (X', Y') are the new coordinates after rotating by angle θ. Substituting X = 1, Y = 0, and θ0 = 0.785 gives

X' = 1 − 0·tan(0.785) = 1

Y' = 1·tan(0.785) + 0 ≈ 1.

So our new coordinates after the first rotation are (1, 1) and our new angle approximation is α1 = 0.785.

(2) Now we refer back to our stored table of values. We see that the second angle in the table is θ1 = arctan(2^-1), or arctan(1/2), which is approximately 0.4636. Since our current approximation is α1 = 0.785, and our desired angle is 0.9911, we will again rotate our coordinates in the positive direction. Substituting X = 1, Y = 1, and θ1 = 0.4636 into the rotation formula gives

X' = 1 − 1·tan(0.4636) ≈ 0.5

Y' = 1·tan(0.4636) + 1 ≈ 1.5.

So our new coordinates after the second rotation are (0.5, 1.5) and our new angle approximation is α2 = θ0 + θ1 ≈ 1.24905.

(3) The third angle in our table is θ2 = arctan(2^-2), or arctan(1/4), which is approximately 0.245. Since our current approximation is α2 ≈ 1.24905, and our desired angle is 0.9911, this time we will rotate in the negative direction, that is, by −0.245. Substituting X = 0.5, Y = 1.5, and θ2 = −0.245 into the rotation formula gives

X' = 0.5 − 1.5·tan(−0.245) ≈ 0.875

Y' = 0.5·tan(−0.245) + 1.5 ≈ 1.375.

So our new coordinates after the third rotation are (0.875, 1.375) and our new angle approximation is α3 = α2 − θ2 ≈ 1.00407.

We can see already that we are beginning to hone in on our desired angle, 0.9911. If we were to continue in this fashion, we would rotate by the fourth angle in the negative direction. This would bring us to α4 = 1.00407 − arctan(2^-3), which is approximately 0.879712. After that we would rotate by the fifth angle in the positive direction, bringing us to α5 = 0.879712 + arctan(2^-4), which is approximately 0.942131. If we were to continue down an extended table, we would rotate the coordinates in either the positive or negative direction, using each angle on the list exactly once. When we have completed every rotation in the list, we multiply the Y-coordinate by the cosine product and negate this result to get the value of sin(−21).[12]

[12] Multiplying the X-coordinate by the cosine product will give the value of cos(0.9911), but this is not the same as cos(−21). Recall that we arrived at 0.9911 by shifting the sine curve into the range 0 ≤ x ≤ π/2.

Listed below are the angle and coordinate approximations that would be computed to approximate sin(−21) if the table contains 15 entries.

angle approximation     x-coordinate     y-coordinate
α1  ≈ 0.785398          X = 1            Y ≈ 1
α2  ≈ 1.24905           X ≈ 0.5          Y ≈ 1.5
α3  ≈ 1.00407           X ≈ 0.875        Y ≈ 1.375
α4  ≈ 0.879712          X ≈ 1.04688      Y ≈ 1.26562
α5  ≈ 0.942131          X ≈ 0.967773     Y ≈ 1.33105
α6  ≈ 0.973371          X ≈ 0.926178     Y ≈ 1.3613
α7  ≈ 0.988994          X ≈ 0.904908     Y ≈ 1.37577
α8  ≈ 0.996807          X ≈ 0.89416      Y ≈ 1.38284
α9  ≈ 0.992901          X ≈ 0.899561     Y ≈ 1.37935
α10 ≈ 0.990947          X ≈ 0.902255     Y ≈ 1.37759
α11 ≈ 0.991924          X ≈ 0.90091      Y ≈ 1.37847
α12 ≈ 0.991436          X ≈ 0.901583     Y ≈ 1.37803
α13 ≈ 0.991192          X ≈ 0.901919     Y ≈ 1.37781
α14 ≈ 0.99107           X ≈ 0.902088     Y ≈ 1.3777
α15 ≈ 0.991131          X ≈ 0.902004     Y ≈ 1.37776

cosine product for 15 rotations: 0.607253
sine approximation: 1.37776 · 0.607253 · (−1) = −0.8366489

Table 3.1.2

In longer lists of angle rotations, the angle approximation αi might actually hit our desired angle. It is important, however, that the computation proceed through the final angle rotation in the table. This is because we factored out cos(θi) from the rotation formula and pre-computed the product of a particular number (the length of the table) of these cosine factors. By increasing the length of the table of angles, we can obtain an arbitrarily accurate approximation to the coordinates of a chosen angle. After using each angle in the table, we simply multiply both coordinates by our cosine product. At the end, some cleanup work (for example, negating the Y-coordinate, as in our previous example) finishes the calculation.

The following is a C++ implementation of the CORDIC method for computing the sine function.[13]

[13] Note: The initialize() routine is only called once. It computes the table of values and the cosine product used by CORDIC. In keeping with the spirit of this paper, the math library calls in this routine ought to be replaced with the appropriate computations (for example, since efficiency does not matter here, ridiculously high degree Taylor Polynomials could be used). The manner in which one precomputes the table has nothing to do with CORDIC, so we kept this part of the code as short as possible.

3.2 C++ Program Code for CORDIC

#include <iostream>
#include <cmath>

using namespace std;


const double pi = 4*atan(1.0);
const int tableSize = 60;

double angles[tableSize];        //array that stores angles
double twopower[tableSize];      //array of tangents of angles

double initialize()              //iteratively fills array twopower with 2^-i
                                 //iteratively fills array angles with
                                 //arctan(twopower[i])
{
    double cosineProduct = 1;    //product of cosines starts at 1
    for (int i = 0; i < tableSize; i++)
    {
        twopower[i] = exp(-i*log(2.0));
        angles[i] = atan(twopower[i]);
        cosineProduct *= cos(angles[i]);
    }
    return cosineProduct;
}

double limitDomain(int flag, double input)
                                 //sets the value of input to its equivalent
                                 //within domain 0<=x<=Pi/2
{
    if(input < 0)                //sets 0<=input
        input = pi - input;
    input -= 2*pi*floor(input/(2*pi));   //sets 0<=input<=2*Pi
    if(input > pi)               //sets 0<=input<=Pi
    {
        input -= pi;
        flag = -1;               //sines in here are negative
    }
    if(input > pi/2)             //sets 0<=input<=Pi/2
        input = pi - input;
    return flag * input;
}

double rotateCoords(double number)
                                 //initializes flag, angle, x, and y.
                                 //rotates coordinates and adds or subtracts angle for
                                 //each element in array angles.
{
    int flag = 1;                //flag for negation of solution
    double angle = 0;            //current angle approx to number
    double x = 1;                //X-coordinate
    double y = 0;                //y-coordinate

    double cosineProduct = initialize();
    double num = limitDomain(flag, number);
    if(num < 0)
        flag = -1;
    num *= flag;
    for(int j = 0; j < tableSize; j++)
    {
        double newx = x;
        double newy = y;
        if(angle < num)          //positive rotation
        {
            newx -= y * twopower[j];
            newy += x * twopower[j];
            angle += angles[j];
        }
        else                     //negative rotation
        {
            newx += y * twopower[j];
            newy -= x * twopower[j];
            angle -= angles[j];
        }
        x = newx;
        y = newy;
    }

    x *= cosineProduct;          //multiply coords by the cosines
    y *= cosineProduct;          //that were factored out
    return y *= flag;
}

int main()
{
    char response;
    double number;

    cout << "Please enter (s)ine, or press any other key to quit: ";
    cin >> response;
    while(response == 's')
    {
        cout << "Please enter the angle (in radians): ";
        cin >> number;
        double num = number;
        cout << "The sine of " << num << " is: " << rotateCoords(num) << endl;
        cout << "Please enter (s)ine, or press any other key to quit: ";
        cin >> response;

    }
    return 0;
}

SAMPLE RUN
==========

Please enter (s)ine, or press any other key to quit: s Please enter the angle (in radians): -21 The sine of -21 is: -0.836656

Please enter (s)ine, or press any other key to quit: s Please enter the angle (in radians): 9.32 The sine of 9.32 is: 0.104586

Please enter (s)ine, or press any other key to quit: s Please enter the angle (in radians): -5.1 The sine of -5.1 is: 0.925815

Please enter (s)ine, or press any other key to quit: m
