Second Paper on Chebyshev Polynomials in the Unipodal Algebra

P. Reany February 2, 2020

Abstract This paper is a follow-up to the paper Chebyshev Polynomials, Fibonacci, and Lucas in the Unipodal Algebra. It contains more material that could have been included in the first paper, but was omitted to keep that paper short. It also breaks new ground by generalizing the Chebyshev polynomials to what some call the ‘bivariate Chebyshev polynomials’, and then using that generalization to find the solutions to the cubic equation.

It’s hard to avoid Chebyshev polynomials. They appear in just about every branch of mathematics, including geometry, combinatorics, number theory, differential equations, approximation theory, numerical analysis, and statistics.

— A. Benjamin & D. Walton

1 Introduction

In the last paper, we saw how the Chebyshev polynomials of both the first and second kinds can be reduced to the complex and uniplex parts of the following equation:¹

\[
X^n = b , \tag{1}
\]
with one simple necessary constraint:
\[
X X^- = 1 , \tag{2}
\]
where $X^-$ is the unegation of the unipodal number $X$. By working out the results of this pair of equations, we are able to ferret out the two unipodal versions of the Chebyshev polynomials of the first and second kinds:

\[
T_n(x) = \tfrac{1}{2}\left( X^n + X^{-n} \right) , \tag{3a}
\]
\[
U_n(x) = \frac{u}{2X_1}\left( X^{n+1} - X^{-(n+1)} \right) , \tag{3b}
\]
where $X = X_0 + X_1 u$ and $X_0$ has been replaced by $x$. A useful alternative formulation for $T_n(X_0)$ is given by

\[
T_n(x) = \tfrac{1}{2}\left[ X_+^n + X_-^n \right] , \tag{4}
\]
¹ It’s unlikely that the reader will be able to follow this paper without first having a good knowledge of the contents of the previous paper.

the proof of which is provided in the previous paper. A rather different proof can be found in Appendix 1. An obvious generalization of the Chebyshev polynomials, accomplished by placing a more general constraint on $X$ than that given in (2), is

\[
X X^- = r , \tag{5}
\]
where $r$ is any nonzero complex number. This leads to the so-called ‘bivariate’ Chebyshev polynomials, which we’ll encounter later on.

Lemma 1

\[
\sum_{n=0}^{\infty} T_n(x)\,\frac{t^n}{n!} = \tfrac{1}{2}\left( e^{(x+\sqrt{x^2-1})\,t} + e^{(x-\sqrt{x^2-1})\,t} \right) = e^{tx}\cosh\!\left(t\sqrt{x^2-1}\right) . \tag{6}
\]

Proof: We begin with (4) and multiply through by $t^n$:

\[
T_n(X_0)\,t^n = \tfrac{1}{2}\left[ (tX_+)^n + (tX_-)^n \right] . \tag{7}
\]
On dividing through by $n!$ and summing from $n = 0$ to $\infty$, we get

\[
\begin{aligned}
\sum_{n=0}^{\infty} T_n(X_0)\,\frac{t^n}{n!}
&= \sum_{n=0}^{\infty} \frac{\tfrac{1}{2}[tX_+]^n + \tfrac{1}{2}[tX_-]^n}{n!}
= \tfrac{1}{2}\sum_{n=0}^{\infty} \frac{[tX_+]^n}{n!} + \tfrac{1}{2}\sum_{n=0}^{\infty} \frac{[tX_-]^n}{n!} \\
&= \tfrac{1}{2}\,e^{tX_+} + \tfrac{1}{2}\,e^{tX_-}
= \tfrac{1}{2}\,e^{t(X_0+\sqrt{X_0^2-1})} + \tfrac{1}{2}\,e^{t(X_0-\sqrt{X_0^2-1})}
= e^{tX_0}\cosh\!\left(t\sqrt{X_0^2-1}\right) .
\end{aligned} \tag{8}
\]

Then, on replacing $X_0$ by $x$ in this last equation, we get (6).
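As a quick numerical sanity check of (6), the following minimal Python sketch (my own illustration, not part of either paper) compares a truncated form of the exponential generating function with the closed form on the right, computing $T_n(x)$ by the standard three-term recurrence.

```python
import math

def chebyshev_T(n, x):
    """T_n(x) via the standard recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    prev, curr = 1.0, x
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, curr = curr, 2 * x * curr - prev
    return curr

x, t = 1.7, 0.3   # x > 1 so that sqrt(x^2 - 1) is real
N = 40            # truncation order of the series on the left of (6)

lhs = sum(chebyshev_T(n, x) * t**n / math.factorial(n) for n in range(N))
rhs = math.exp(t * x) * math.cosh(t * math.sqrt(x**2 - 1))
print(lhs, rhs)   # the two values agree to machine precision
```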

Lemma 2 The Chebyshev polynomials of the first kind in X0 can be written as

\[
T_n(X_0) = \sum_{\ell=0,1,\ldots}^{\lfloor n/2 \rfloor} \binom{n}{2\ell}\, X_0^{\,n-2\ell}\,(X_0^2 - 1)^{\ell} . \tag{9}
\]

Before we begin, given that $T_n(X_0)$ is pure complex (has no unisor part), it’s good to anticipate that the uniplex part of the expansion of (3a) is identically zero. Also, note that for any nonnegative integer $k$:
\[
[1 + (-1)^k]\,u^k =
\begin{cases}
2 , & \text{for } k \text{ even,} \\
0 , & \text{for } k \text{ odd.}
\end{cases} \tag{10}
\]
Note: We will be using the floor function $\lfloor\,\cdot\,\rfloor$, which operates on real numbers and returns the largest integer less than or equal to its argument.

Proof: Now, we expand both $X^n$ and $X^{-n} = (X^-)^n$ with the binomial formula.

\[
\begin{aligned}
T_n(X_0) &= \tfrac{1}{2}\left[ X^n + (X^-)^n \right] \\
&= \tfrac{1}{2}\left[ (X_0 + X_1 u)^n + (X_0 - X_1 u)^n \right] \\
&= \tfrac{1}{2} \sum_{k=0}^{n} \binom{n}{k}\, X_0^{\,n-k} X_1^{\,k}\,[1 + (-1)^k]\,u^k \\
&= \sum_{k=0,2,\ldots}^{n} \binom{n}{k}\, X_0^{\,n-k} X_1^{\,k} \\
&= \sum_{\ell=0,1,\ldots}^{\lfloor n/2 \rfloor} \binom{n}{2\ell}\, X_0^{\,n-2\ell} X_1^{\,2\ell} \\
&= \sum_{\ell=0,1,\ldots}^{\lfloor n/2 \rfloor} \binom{n}{2\ell}\, X_0^{\,n-2\ell}\,(X_0^2 - 1)^{\ell} .
\end{aligned}
\]
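The closed form (9) is easy to test numerically. The sketch below (my own illustration, not from the paper) compares the sum in (9) against $T_n(x)$ generated by the usual recurrence, for a few values of $n$ and $x$.

```python
from math import comb

def T_recurrence(n, x):
    a, b = 1.0, x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * x * b - a
    return b

def T_sum(n, x):
    """Equation (9): sum over ell of C(n, 2*ell) * x^(n-2*ell) * (x^2 - 1)^ell."""
    return sum(comb(n, 2 * ell) * x**(n - 2 * ell) * (x**2 - 1)**ell
               for ell in range(n // 2 + 1))

for n in range(8):
    for x in (0.4, 1.3, 2.0):
        assert abs(T_sum(n, x) - T_recurrence(n, x)) < 1e-9
print("equation (9) agrees with the recurrence")
```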

Corollary

Since $T_n(x) = b_0^{(n)} = \cosh n\theta$, we have from (9) that

\[
\cosh n\theta = \sum_{\ell=0,1,\ldots}^{\lfloor n/2 \rfloor} \binom{n}{2\ell}\,(\cosh^{n-2\ell}\theta)\,(\cosh^2\theta - 1)^{\ell} . \tag{11}
\]

On replacing θ by iθ in this last equation, we get

\[
\cos n\theta = \sum_{\ell=0,1,\ldots}^{\lfloor n/2 \rfloor} \binom{n}{2\ell}\,(\cos^{n-2\ell}\theta)\,(\cos^2\theta - 1)^{\ell} . \tag{12}
\]

Lemma 3 For the Chebyshev polynomials of the second kind in X0 alone, we have

\[
U_{n-1}(X_0) = \sum_{\ell=1,2,\ldots}^{\lceil n/2 \rceil} \binom{n}{2\ell-1}\, X_0^{\,n-2\ell+1}\,(X_0^2 - 1)^{\ell-1} . \tag{13}
\]

Note that for any nonnegative integer $k$:
\[
[1 - (-1)^k]\,u^k =
\begin{cases}
0 , & \text{for } k \text{ even,} \\
2u , & \text{for } k \text{ odd.}
\end{cases} \tag{14}
\]

Note: We will be using the floor function $\lfloor\,\cdot\,\rfloor$, which was explained in the last lemma.

Proof: As before, we expand both $X^n$ and $X^{-n}$ with the binomial formula.

\[
\begin{aligned}
U_{n-1}(x) &= \frac{u}{2X_1}\left[ X^n - (X^-)^n \right] \\
&= \frac{u}{2X_1}\left[ (X_0 + X_1 u)^n - (X_0 - X_1 u)^n \right] \\
&= \frac{u}{2X_1} \sum_{k=0}^{n} \binom{n}{k}\, X_0^{\,n-k} X_1^{\,k}\,[1 - (-1)^k]\,u^k \\
&= \sum_{k=1,3,\ldots}^{n} \binom{n}{k}\, X_0^{\,n-k} X_1^{\,k-1} \\
&= \sum_{\ell=1,2,\ldots}^{\lceil n/2 \rceil} \binom{n}{2\ell-1}\, X_0^{\,n-2\ell+1} X_1^{\,2\ell-2} \\
&= \sum_{\ell=1,2,\ldots}^{\lceil n/2 \rceil} \binom{n}{2\ell-1}\, X_0^{\,n-2\ell+1}\,(X_0^2 - 1)^{\ell-1} .
\end{aligned}
\]
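The same kind of check works for (13). In the sketch below (again my own illustration, not the paper's), note the upper limit $\lceil n/2 \rceil$, which is needed to pick up the top odd binomial index when $n$ is odd; $U_{n-1}(x)$ is generated by the recurrence $U_{k+1} = 2xU_k - U_{k-1}$ with $U_0 = 1$, $U_1 = 2x$.

```python
from math import ceil, comb

def U_recurrence(m, x):
    """U_m(x) of the second kind: U_0 = 1, U_1 = 2x."""
    a, b = 1.0, 2 * x
    if m == 0:
        return a
    for _ in range(m - 1):
        a, b = b, 2 * x * b - a
    return b

def U_sum(n, x):
    """Equation (13): U_{n-1}(x) as a sum over odd binomial indices."""
    return sum(comb(n, 2 * ell - 1) * x**(n - 2 * ell + 1) * (x**2 - 1)**(ell - 1)
               for ell in range(1, ceil(n / 2) + 1))

for n in range(1, 9):
    for x in (0.3, 1.2, 1.9):
        assert abs(U_sum(n, x) - U_recurrence(n - 1, x)) < 1e-9
print("equation (13) agrees with the recurrence")
```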

2 Introducing the unicoid of complexity r.

For the rest of the lemmas in this paper, we will be using a generalization of the unipodes X available to us. So far, we have restricted our unipodes to the unicoid of complexity unity, by the constraint:

\[
X X^- = 1 , \tag{15}
\]
which enforces the relation between $X_0$ and $X_1$:

\[
X_0^2 - X_1^2 = 1 . \tag{16}
\]
But now we must generalize:

Definitions: The set of all unipodes $X$ that satisfy the equation $X X^- = r$, for $r$ a nonzero complex number, is said to be the unicoid of complexity $r$, and such an $X$ is called a unipode of complexity $r$. Also, every polynomial constructed from such an $X$ and its unegate $X^-$ is called a polynomial of complexity $r$. So, now, our unipodes $X$ will lie on the unicoid of complexity $r$, and $X$ satisfies the constraint

\[
X X^- = r \quad \text{where } r \neq 0 , \tag{17}
\]
which enforces the relation between $X_0$ and $X_1$:

\[
X_0^2 - X_1^2 = r . \tag{18}
\]
We also retain the constraint relating $X$ to $b$:

\[
X^n = b . \tag{19}
\]

Taken together, these last two equations imply that

\[
b\,b^- = r^n . \tag{20}
\]
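To make the bookkeeping concrete, here is a minimal numerical sketch of unipodal arithmetic (my own illustration; the class name Unipode and its methods are ad hoc, not taken from the paper). It represents $X = X_0 + X_1 u$ with $u^2 = 1$ and confirms that $XX^- = r$ forces $bb^- = r^n$ for $b = X^n$, i.e., Eq. (20).

```python
class Unipode:
    """A number X0 + X1*u with complex components and u**2 = 1."""
    def __init__(self, x0, x1):
        self.x0, self.x1 = complex(x0), complex(x1)

    def __mul__(self, other):
        # (a0 + a1 u)(b0 + b1 u) = (a0 b0 + a1 b1) + (a0 b1 + a1 b0) u
        return Unipode(self.x0 * other.x0 + self.x1 * other.x1,
                       self.x0 * other.x1 + self.x1 * other.x0)

    def unegate(self):
        """The unegation X^-: flip the sign of the uniplex part."""
        return Unipode(self.x0, -self.x1)

    def power(self, n):
        result = Unipode(1, 0)
        for _ in range(n):
            result = result * self
        return result

# Pick X1 freely, then choose X0 so that X X^- = X0^2 - X1^2 = r (complexity r).
r = 2.5 - 1.0j
X1 = 0.7 + 0.2j
X0 = (r + X1**2) ** 0.5
X = Unipode(X0, X1)

n = 5
b = X.power(n)
bb_minus = b * b.unegate()                 # should be r**n plus a vanishing uniplex part
print(bb_minus.x0, r**n, abs(bb_minus.x1))
```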

For completeness, I need to comment on the relation between Equations (17) and (19). Given a nonzero complex number $r$, there are $n$ distinct choices for the value of $XX^-$ that all produce the same value of $bb^-$. Let $\alpha_n$ be any primitive $n$th root of unity in the complex numbers.² Then $\alpha_n^{\,n} = 1$. Thus we could choose a different constraint on $XX^-$, namely,

\[
X X^- = \alpha_n^{\,j}\, r , \qquad 0 \le j \le n-1 , \tag{21}
\]
and produce the same value of $bb^-$:

\[
b\,b^- = X^n (X^-)^n = (X X^-)^n = (\alpha_n^{\,j}\, r)^n = (\alpha_n^{\,n})^j\, r^n = r^n . \tag{22}
\]

Now, there are $n^2$ roots $X$ of Eq. (19) over the unipodal numbers. For each value of $j$, Eq. (21) culls out $n$ of those roots over the complex numbers. (More about this later.) Anyway, for the sake of simplicity, we will choose $j = 0$ in (21) to get (17).³ Note: Because of (17), unless $r = 1$, it’s not true that $X^- = X^{-1}$. Instead,

\[
X^{-1} = r^{-1} X^- . \tag{23}
\]

Lemma 4 Recurrence relation for Chebyshev polynomials of complexity r of the first kind. Given that X and b are consistent with the last few equations, the appropriate form for the recurrence relation for Chebyshev polynomials of complexity r of the first kind is

\[
T_{n+1,r}(X_0) = 2X_0\, T_{n,r}(X_0) - r\, T_{n-1,r}(X_0) . \tag{24}
\]

Note # 1: In the literature ([15], p. 3), the previous relation is known as the ‘bivariate Chebyshev polynomials $T_n(x,s)$ of the first kind’, and is defined by the recurrence relation

\[
T_n(x,s) = 2x\,T_{n-1}(x,s) + s\,T_{n-2}(x,s) , \tag{25}
\]
with $T_0(x,s) = 1$ and $T_1(x,s) = x$. Of course, (24) and (25) are essentially the same upon letting $r = -s$. However, at the moment, I can’t bring myself to regard $r$ in (24) as anything more than a parameter.

Note # 2: As we have done before, put simply, we define $T_{n,r}(X_0)$ as the complex part of $X^n$, and from that deduce the recurrence relation for $T_{n,r}$.

Proof: As before,

\[
T_{n,r}(X_0) \equiv b_0^{(n)} , \quad \text{where} \tag{26a}
\]
\[
b_0^{(n)} = b_0 = \tfrac{1}{2}\,(b + b^-) . \tag{26b}
\]

Once again, we see that $T_{n,r}(X_0)$ is the complex (or scalar) part of $X^n$. Thus,

\[
T_{n,r}(X_0) = \tfrac{1}{2}\left[ X^n + (X^-)^n \right] , \tag{27}
\]
² A primitive $n$th root of unity is any complex number $\alpha$ such that the smallest positive integer power of $\alpha$ equal to unity is $n$.
³ Since we are free to choose the $j$ in $\alpha_n^{\,j}$, we can think of this choice as a discrete gauge freedom.

which expresses $T_{n,r}(X_0)$ as a polynomial in $X$ (and in $X^-$). Again, a useful alternative is

\[
T_{n,r}(x) = \tfrac{1}{2}\left[ X_+^n + X_-^n \right] , \tag{28}
\]
where
\[
X_+ = X_0 + \sqrt{X_0^2 - r} \quad \text{and} \quad X_- = X_0 - \sqrt{X_0^2 - r} . \tag{29}
\]
We have an easy corollary to the previous lemma, which requires the following fact. The Lucas polynomials can be expressed as

\[
L_n(x) = \frac{1}{2^n}\left[ \left(x + \sqrt{x^2+4}\right)^n + \left(x - \sqrt{x^2+4}\right)^n \right] . \tag{30}
\]

Corollary. The Lucas polynomials presented in (30) are easily expressed in terms of Tn,r, first by setting r = −4 and then, using (28):

\[
L_n(X_0) = \frac{1}{2^n}\left[ \left(X_0 + \sqrt{X_0^2+4}\right)^n + \left(X_0 - \sqrt{X_0^2+4}\right)^n \right] = \frac{1}{2^{n-1}}\, T_{n,-4}(X_0) . \tag{31}
\]
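A quick numerical confirmation of (31) (my own sketch, not from the paper): build the Lucas polynomials from their recurrence $L_0 = 2$, $L_1 = x$, $L_n = xL_{n-1} + L_{n-2}$, and build $T_{n,-4}$ from the bivariate recurrence (24).

```python
def lucas_poly(n, x):
    """Lucas polynomials: L_0 = 2, L_1 = x, L_n = x*L_{n-1} + L_{n-2}."""
    a, b = 2.0, x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, x * b + a
    return b

def T_bivariate(n, x, r):
    """T_{n,r}: T_0 = 1, T_1 = x, T_{n+1} = 2x*T_n - r*T_{n-1}  (eq. 24)."""
    a, b = 1.0, x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * x * b - r * a
    return b

for n in range(1, 10):
    for x in (0.5, 1.0, 2.3):
        assert abs(lucas_poly(n, x) - T_bivariate(n, x, -4) / 2**(n - 1)) < 1e-9
print("equation (31): L_n(x) = T_{n,-4}(x) / 2^(n-1)")
```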

Now, we proceed as we did before when we established the recurrence relation for $T_n(X_0)$ in the first paper. On multiplying (27) by
\[
2X_0 = X + X^- , \tag{32}
\]
and using $XX^- = r$, this yields

\[
\begin{aligned}
2X_0\, T_{n,r}(X_0) &= (X + X^-)\,\tfrac{1}{2}\left( X^n + (X^-)^n \right) \\
&= \tfrac{1}{2}\left[ X^{n+1} + r\,(X^-)^{n-1} + r\,X^{n-1} + (X^-)^{n+1} \right] \\
&= T_{n+1,r}(X_0) + r\,T_{n-1,r}(X_0) ,
\end{aligned} \tag{33}
\]
which is the relation we were looking for. Now, let’s examine the first few polynomials:

\[
\begin{aligned}
n = 0, \quad & T_{0,r}(X_0) = 1 , \\
n = 1, \quad & T_{1,r}(X_0) = X_0 , \\
n = 2, \quad & T_{2,r}(X_0) = 2X_0^2 - r , \\
n = 3, \quad & T_{3,r}(X_0) = 4X_0^3 - 3rX_0 , \\
n = 4, \quad & T_{4,r}(X_0) = 8X_0^4 - 8rX_0^2 + r^2 , \\
n = 5, \quad & T_{5,r}(X_0) = 16X_0^5 - 20rX_0^3 + 5r^2 X_0 .
\end{aligned} \tag{34}
\]
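The table (34) can be reproduced symbolically. The sketch below (my own illustration; it assumes the sympy package is available) runs the recurrence (24) in exact arithmetic and also checks the closed form (28) with $X_\pm = X_0 \pm \sqrt{X_0^2 - r}$.

```python
import sympy as sp

x, r = sp.symbols('x r')

# Bivariate recurrence (24): T_{n+1,r} = 2x T_{n,r} - r T_{n-1,r}
T = [sp.Integer(1), x]
for n in range(1, 5):
    T.append(sp.expand(2 * x * T[n] - r * T[n - 1]))

for n, poly in enumerate(T):
    print(f"T_{n},r =", poly)            # reproduces the list in (34)

# Closed form (28): T_{n,r}(x) = (X_+^n + X_-^n)/2 with X_+- = x +- sqrt(x^2 - r)
Xp = x + sp.sqrt(x**2 - r)
Xm = x - sp.sqrt(x**2 - r)
for n, poly in enumerate(T):
    assert sp.simplify((Xp**n + Xm**n) / 2 - poly) == 0
print("closed form (28) matches the recurrence (24)")
```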

Lemma 5 Differential equation for Chebyshev polynomials of complexity r of the first kind.

The differential equation we seek that is satisfied by y = Tn,r(x) is

\[
(r - x^2)\,y'' - x\,y' + n^2 y = 0 , \tag{35}
\]

where $X_0$, $X_1$, $b_0$, and $b_1$ are real-valued.⁴

⁴ By making these variables real-valued, I’m bypassing the complications that arise from complex differentiation. Of course, the choices we then have for the parameter $r$ will be likewise constrained.

Proof:

As before, the simplest second-order linear differential equation in $b_{0,r}^{(n)}$ is

\[
\frac{d^2 b_{0,r}^{(n)}}{d\theta^2} - n^2\, b_{0,r}^{(n)} = 0 . \tag{36}
\]

And as before, we need to replace $b_{0,r}$ by $T_{n,r}$; we therefore need to parameterize $X$ by an angle $\theta$ via

\[
X = r^{1/2} e^{u\theta} , \tag{37}
\]
which gives us the values for $X_0$ and $X_1$:

\[
X_0 = r^{1/2}\cosh\theta \quad \text{and} \quad X_1 = r^{1/2}\sinh\theta , \tag{38}
\]
which is consistent with $XX^- = r$. Again, we must convert derivatives by $\theta$ to derivatives by $X_0$. Proceeding as we did last time, we get, with minor modifications,
\[
\frac{dX_0}{d\theta} = r^{1/2}\sinh\theta = X_1 , \tag{39}
\]
and so
\[
\frac{d}{d\theta} = \frac{dX_0}{d\theta}\,\frac{d}{dX_0} = X_1\,\frac{d}{dX_0} . \tag{40}
\]
We also need that
\[
\frac{dX_1}{dX_0} = \frac{X_0}{X_1} . \tag{41}
\]
Therefore,

\[
\frac{d^2}{d\theta^2} = X_1\,\frac{d}{dX_0}\!\left( X_1\,\frac{d}{dX_0} \right) = X_0\,\frac{d}{dX_0} + X_1^2\,\frac{d^2}{dX_0^2} . \tag{42}
\]

Then, substituting this result into (36) with $b_0^{(n)} \to T_{n,r}(X_0)$, we get

\[
(r - X_0^2)\, T_{n,r}''(X_0) - X_0\, T_{n,r}'(X_0) + n^2\, T_{n,r}(X_0) = 0 . \tag{43}
\]

We can make this equation look more familiar by replacing X0 by x and Tn,r(X0) by y, to get

\[
(r - x^2)\,y'' - x\,y' + n^2 y = 0 , \tag{44}
\]
with solution $y_n = T_{n,r}(x)$; this is the generalized differential equation for the Chebyshev polynomials of the first kind.

Lemma 6 The recurrence relation for the Chebyshev polynomials of complexity r of the second kind.

The recurrence relation for the Chebyshev polynomials of complexity $r$ of the second kind is:

\[
U_{n+1,r}(x) = 2x\,U_{n,r}(x) - r\,U_{n-1,r}(x) \quad \text{for } n \ge 1 . \tag{45}
\]

Proof: Returning to (19), this time we focus on $b_1$ instead of $b_0$. We start with

\[
u\,b_1^{(n)} = \tfrac{1}{2}\left( X^n - (X^-)^n \right) . \tag{46}
\]

Proceeding in a manner similar to the substitution we made to replace $b_0^{(n)}$ by $T_n(X_0)$, we set

\[
f_n(X_1) \equiv b_1^{(n)} , \tag{47}
\]
where $f_n$, although not the Chebyshev polynomial of the second kind we seek, is, again, a convenient precursor to it. Multiplying
\[
u\,f_n = \tfrac{1}{2}\left( X^n - (X^-)^n \right) \tag{48}
\]
by the identity
\[
2X_0 = X + X^- , \tag{49}
\]
yields

\[
\begin{aligned}
2u X_0 f_n &= (X + X^-)\,\tfrac{1}{2}\left( X^n - (X^-)^n \right) \\
&= \tfrac{1}{2}\left[ X^{n+1} - r\,(X^-)^{n-1} + r\,X^{n-1} - (X^-)^{n+1} \right] \\
&= u f_{n+1} + u r f_{n-1} .
\end{aligned} \tag{50}
\]

On rearranging, and letting n → n − 1, we get the recurrence relation:

\[
f_n(X_0) = 2X_0\, f_{n-1}(X_0) - r\, f_{n-2}(X_0) \quad \text{for } n \ge 2 . \tag{51}
\]

Setting n = 0, 1, 2, 3, 4, 5, in turn, in (48), with X = X0 + X1u, gives us, respectively:

\[
\begin{aligned}
n = 0, \quad & f_0 = 0 , \\
n = 1, \quad & f_1 = X_1 , \\
n = 2, \quad & f_2 = 2X_1 X_0 , \\
n = 3, \quad & f_3 = X_1\left[ 4X_0^2 - r \right] , \\
n = 4, \quad & f_4 = X_1\left[ 8X_0^3 - 4rX_0 \right] , \\
n = 5, \quad & f_5 = X_1\left[ 16X_0^4 - 12rX_0^2 + r^2 \right] .
\end{aligned} \tag{52}
\]

As we did before, we can define the set of Chebyshev polynomials $U_{n,r}$ (of the ‘second kind’) of complexity $r$ by the relation
\[
U_{n-1,r}(X_0) \equiv \frac{f_n}{X_1} \quad \text{for } n \ge 1 , \tag{53}
\]
where $f_n$ is evenly divisible by $X_1$ for all $n \ge 1$. If we divide (51) through by $X_1$ and use (53), we get the recurrence relation for the Chebyshev polynomials of complexity $r$ of the second kind:

\[
U_{n+1,r}(X_0) = 2X_0\, U_{n,r}(X_0) - r\, U_{n-1,r}(X_0) \quad \text{for } n \ge 1 . \tag{54}
\]

Q.E.D.

Explicit forms for Un−1,r and Un,r are

\[
U_{n-1,r}(X_0) = \frac{u}{2X_1}\left( X^n - (X^-)^n \right) \quad \text{for } n \ge 1 , \tag{55}
\]
and
\[
U_{n,r}(X_0) = \frac{u}{2X_1}\left( X^{n+1} - (X^-)^{n+1} \right) \quad \text{for } n \ge 0 . \tag{56}
\]

As was shown in the first paper, we have the alternative forms

\[
U_{n-1,r}(X_0) = \frac{1}{2X_1}\left( X_+^n - X_-^n \right) \quad \text{for } n \ge 1 , \tag{57}
\]
or
\[
U_{n,r}(X_0) = \frac{1}{2X_1}\left( X_+^{n+1} - X_-^{n+1} \right) \quad \text{for } n \ge 0 . \tag{58}
\]
Despite the fact that the forms for the Chebyshev polynomials $U_{n,r}$ of complexity $r$ look so similar to the Chebyshev polynomials $U_n$ of complexity unity, we must remember that in the former case $X_1^2$ is replaced by $X_0^2 - r$, but in the latter case $X_1^2$ is replaced by $X_0^2 - 1$. The first few Chebyshev polynomials of complexity $r$ of the second kind are:

\[
\begin{aligned}
n = 0, \quad & U_{0,r} = 1 , \\
n = 1, \quad & U_{1,r} = 2X_0 , \\
n = 2, \quad & U_{2,r} = 4X_0^2 - r , \\
n = 3, \quad & U_{3,r} = 8X_0^3 - 4rX_0 , \\
n = 4, \quad & U_{4,r} = 16X_0^4 - 12rX_0^2 + r^2 .
\end{aligned} \tag{59}
\]
An easy corollary follows from the fact that the Fibonacci polynomials can be expressed as

\[
F_n(x) = \frac{1}{2^n\sqrt{x^2+4}}\left[ \left(x + \sqrt{x^2+4}\right)^n - \left(x - \sqrt{x^2+4}\right)^n \right] . \tag{60}
\]

Corollary: The Fibonacci polynomials are easily expressed in terms of the Chebyshev polynomials of complexity r of the second kind, first by setting r = −4 and then, using (57):

\[
F_n(x) = \frac{1}{2^n\sqrt{x^2+4}}\left[ \left(x + \sqrt{x^2+4}\right)^n - \left(x - \sqrt{x^2+4}\right)^n \right] = \frac{1}{2^{n-1}}\, U_{n-1,-4}(x) . \tag{61}
\]
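As with the Lucas case, (61) checks out numerically. The sketch below (mine, not the paper's) builds $F_n(x)$ from the recurrence $F_0 = 0$, $F_1 = 1$, $F_n = xF_{n-1} + F_{n-2}$, and $U_{n-1,-4}$ from the recurrence (54).

```python
def fibonacci_poly(n, x):
    """Fibonacci polynomials: F_0 = 0, F_1 = 1, F_n = x*F_{n-1} + F_{n-2}."""
    a, b = 0.0, 1.0
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, x * b + a
    return b

def U_bivariate(m, x, r):
    """U_{m,r}: U_0 = 1, U_1 = 2x, U_{m+1} = 2x*U_m - r*U_{m-1}  (eq. 54)."""
    a, b = 1.0, 2 * x
    if m == 0:
        return a
    for _ in range(m - 1):
        a, b = b, 2 * x * b - r * a
    return b

for n in range(1, 10):
    for x in (0.5, 1.0, 2.3):
        assert abs(fibonacci_poly(n, x) - U_bivariate(n - 1, x, -4) / 2**(n - 1)) < 1e-9
print("equation (61): F_n(x) = U_{n-1,-4}(x) / 2^(n-1)")
```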

Lemma 7 The differential equation for the Chebyshev polynomials of complexity r of the second kind.

The differential equation obeyed by the Chebyshev polynomials of complexity $r$ of the second kind is
\[
(r - x^2)\,y'' - 3x\,y' + n(n+2)\,y = 0 , \tag{62}
\]
where $X_0$, $X_1$, $b_0$, and $b_1$ are real-valued.

Proof: Once again we consider the differential equation in $b_1$:
\[
\frac{d^2 b_1^{(n)}}{d\theta^2} - n^2\, b_1^{(n)} = 0 , \tag{63}
\]
which will lead us to the Chebyshev differential equation of complexity $r$ of the second kind. First, we substitute for $b_1^{(n)}$ the expression $U_{n-1,r}\,X_1$ into (63), writing $y = U_{n-1,r}$ for compactness. Then, noting that
\[
X_0 = r^{1/2}\cosh\theta , \tag{64a}
\]

\[
\frac{d}{d\theta} = \frac{dX_0}{d\theta}\,\frac{d}{dX_0} = X_1\,\frac{d}{dX_0} , \tag{64b}
\]
\[
\frac{dX_1}{dX_0} = \frac{X_0}{X_1} , \tag{64c}
\]
\[
\frac{d^2}{d\theta^2} = X_0\,\frac{d}{dX_0} + X_1^2\,\frac{d^2}{dX_0^2} . \tag{64d}
\]

Thus, Eq. (63) becomes
\[
\left( X_0\,\frac{d}{dX_0} + X_1^2\,\frac{d^2}{dX_0^2} \right)(X_1 y) - n^2 X_1 y = 0 , \tag{65}
\]
which, after some tedious computations, becomes

\[
(r - X_0^2)\,y'' - 3X_0\,y' + (n^2 - 1)\,y = 0 , \tag{66}
\]
where the primes represent derivatives by $X_0$, and (66) has solution $y = U_{n-1,r}(X_0)$. On setting $x = X_0$ and replacing $n$ by $n+1$, we get the differential equation for the Chebyshev polynomials of complexity $r$ of the second kind, which is satisfied by $U_n(x)$:

\[
(r - x^2)\,y'' - 3x\,y' + n(n+2)\,y = 0 . \tag{67}
\]
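Both differential equations can be verified symbolically. The sketch below (my own, assuming sympy is installed) generates $T_{n,r}$ and $U_{n,r}$ from their recurrences, substitutes them into (44) and (67) respectively, and confirms that the left-hand sides reduce to zero.

```python
import sympy as sp

x, r = sp.symbols('x r')

def bivariate(kind, n):
    """T_{n,r} or U_{n,r} from the shared recurrence p_{k+1} = 2x p_k - r p_{k-1}."""
    a, b = sp.Integer(1), (x if kind == 'T' else 2 * x)
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, sp.expand(2 * x * b - r * a)
    return b

for n in range(6):
    T = bivariate('T', n)
    U = bivariate('U', n)
    lhs_T = (r - x**2) * sp.diff(T, x, 2) - x * sp.diff(T, x) + n**2 * T              # eq. (44)
    lhs_U = (r - x**2) * sp.diff(U, x, 2) - 3 * x * sp.diff(U, x) + n * (n + 2) * U   # eq. (67)
    assert sp.expand(lhs_T) == 0 and sp.expand(lhs_U) == 0

print("T_{n,r} satisfies (44) and U_{n,r} satisfies (67) for n = 0..5")
```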

Lemma 8 Chebyshev Polynomials of complexity r and roots to the cubic equation

In this last lemma we will develop the standard expressions for the roots of the cubic equation by adapting the Chebyshev polynomials of complexity $r$. The roots themselves have been known for hundreds of years. The novelty here is the approach to finding them. We’ll be employing the now familiar equations:⁵

\[
X^n = b , \tag{68a}
\]
\[
X X^- = r , \tag{68b}
\]
\[
b\,b^- = r^n . \tag{68c}
\]

We begin with some background information. The general cubic equation has the form

\[
a y^3 + b y^2 + c y + d = 0 , \tag{69}
\]
where $a, b, c, d$ are arbitrary complex numbers, except that $a \neq 0$. Now, the common approach to all algebraic formulae for the roots of the cubic is to think in terms of an analogy with the formula for the roots of the quadratic equation
\[
a y^2 + b y + c = 0 , \tag{70}
\]
where $a, b, c$ are arbitrary complex numbers, except that $a \neq 0$. We all know that the two solutions to this equation are given by the quadratic formula:
\[
y_\pm = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} . \tag{71}
\]
The point here is that the roots of the quadratic as presented in (71) are given as functions of the coefficients. So, by analogy, we’re looking for formulae for the roots of the cubic equation (69) that present themselves as functions of its coefficients. However, our task of finding these roots is considerably simplified by the fact that employing the well-known substitution $y = x - b/3a$ in (69), together with some trivial rescaling, yields the so-called reduced cubic equation:

\[
x^3 + p x + q = 0 , \tag{72}
\]
where $p, q$ are complex numbers and are functions of $a, b, c, d$. What’s truly remarkable about this transformation is that it comes without any loss of generality. We won’t go into the details of

⁵ This approach was introduced in the paper [2] back in 1991, where solutions to both the cubic and the quartic were derived. For a similar treatment of the subject, see ‘The Cubic Equation’, pp. 35–37 of [13].

these claims, since, in the first place, they are readily available online, and in the second place, our unipodal methods require us to start with the simpler cubic equation. Therefore, we’ll take (72) as our starting-point cubic, where $p$ and $q$ are assumed to be given. The basic reasoning of the unipodal solution to the cubic is as follows: We begin with a general reduced cubic (72) in which we’ll characterize $x$ as ‘one independent variable’ and $p$ and $q$ as ‘two free parameters’. The natural question to ask is then: What is the simplest unipodal equation that might be used to solve for the roots of (72)? Let’s try

\[
X^3 = b . \tag{73}
\]

On expanding this last equation in the standard basis, we find that

\[
(X_0 + u X_1)^3 = b_0 + b_1 u . \tag{74}
\]

Without adding any other constraints to (74), we have two independent variables $X_0$ and $X_1$ and two free parameters, $b_0$ and $b_1$. But we’ve seen this equation before. The complex part of the LHS is $T_{3,r}(X_0)$; hence, equating complex parts of both sides yields

\[
4X_0^3 - 3rX_0 - b_0 = 0 . \tag{75}
\]
Dividing this through by 4 gives us
\[
X_0^3 - \tfrac{3}{4}\,r X_0 - \tfrac{1}{4}\,b_0 = 0 . \tag{76}
\]
On equating like coefficients between this last equation and Eq. (72), we find that
\[
p = -\tfrac{3}{4}\,r \quad \text{and} \quad q = -\tfrac{1}{4}\,b_0 . \tag{77}
\]
Reversing these, we get
\[
r = -\tfrac{4}{3}\,p \quad \text{and} \quad b_0 = -4q . \tag{78}
\]

But to make (74) conformable with (72), we need to enforce a constraint between $X_0$ and $X_1$ to reduce the number of independent variables by one. That’s the purpose of the constraint (68b); however, this adds the free parameter $r$ to the unipodal system, though that addition is canceled by the induced constraint (68c), which enforces a relation between $b_0$ and $b_1$, thus reducing the number of independent parameters between them to one. So now, both equations (72) and (74) have ‘one independent variable’ (which we’ve chosen to be $X_0$) and ‘two free parameters’ (which we’ve chosen to be $b_0$ and $r$) — just what is needed for the cubic. Now, to business. In the unipodal algebra, we can directly take the cube root of (73), once we reframe its variables in the idempotent basis. Thus, reframing (74), we get

\[
(X_+ u_+ + X_- u_-)^3 = b_+ u_+ + b_- u_- , \tag{79}
\]

\[
X_+ u_+ + X_- u_- = \alpha^j\,(b_+)^{1/3}\, u_+ + \alpha^k\,(b_-)^{1/3}\, u_- , \tag{80}
\]
where $\alpha = \tfrac{1}{2}(-1 + i\sqrt{3})$ is a primitive cube root of unity and both $j$ and $k$ independently take on values $\{0, 1, 2\}$, for a total of $3^2 = 9$ cube roots. On equating like components and expanding, we get

\[
X_0 + X_1 = \alpha^j\,(b_0 + b_1)^{1/3} , \tag{81a}
\]
\[
X_0 - X_1 = \alpha^k\,(b_0 - b_1)^{1/3} . \tag{81b}
\]

Adding these equations together produces

\[
X_0 = \tfrac{1}{2}\left[ \alpha^j\,(b_0 + b_1)^{1/3} + \alpha^k\,(b_0 - b_1)^{1/3} \right] . \tag{82a}
\]
And subtracting them produces

\[
X_1 = \tfrac{1}{2}\left[ \alpha^j\,(b_0 + b_1)^{1/3} - \alpha^k\,(b_0 - b_1)^{1/3} \right] . \tag{82b}
\]

However, the relation $X_0^2 - X_1^2 = (X_0 + X_1)(X_0 - X_1) = r$ will place a constraint on $j$ and $k$. By multiplying (81a) and (81b) together, we get

\[
\left( \alpha^j\,(b_0 + b_1)^{1/3} \right)\left( \alpha^k\,(b_0 - b_1)^{1/3} \right) = r , \tag{83}
\]
or
\[
\alpha^{j+k}\,(b_0^2 - b_1^2)^{1/3} = \alpha^{j+k}\,(r^3)^{1/3} = r , \tag{84}
\]
where we used (68c). From this we conclude that $\alpha^{j+k} = 1$; therefore $k = -j$ (mod 3), so (82a) becomes

\[
X_0^{(j)} = \tfrac{1}{2}\left[ \alpha^j\,(b_0 + b_1)^{1/3} + \alpha^{-j}\,(b_0 - b_1)^{1/3} \right] . \tag{85}
\]

Now, considering that $b_+ = b_0 + b_1$ and that, from (68c), $b_- = b_0 - b_1 = r^3/b_+$, then (85) can alternatively be written as
\[
X_0^{(j)} = \frac{1}{2}\left[ \alpha^j\, b_+^{1/3} + \frac{r}{\alpha^j\, b_+^{1/3}} \right] . \tag{86}
\]
So, great! The $X_0^{(j)}$ in (85) or (86) are the three roots of the cubic equation (76). Now we need to calculate $b_1$:
\[
b_1 = \pm\sqrt{b_0^2 - r^3} = \pm\sqrt{16q^2 - \left(-\tfrac{4}{3}\,p\right)^{3}} = \pm 4\sqrt{q^2 + \frac{4p^3}{27}} . \tag{87}
\]
Therefore, the three roots of (72) are given by
\[
x^{(j)} = \alpha^j \left( -\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}} \right)^{1/3} + \alpha^{-j} \left( -\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}} \right)^{1/3} , \tag{88}
\]
for $j = 0, 1, 2$.
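To close the loop numerically, here is a small sketch (my own illustration; it assumes numpy is installed) that implements (86) for a sample reduced cubic and compares the three roots with numpy's general-purpose root finder. Note that the two cube roots appearing in (88) must be paired so that their product is $-p/3$; the safest way to arrange this in floating point is to compute the second factor as $r/(\alpha^j b_+^{1/3})$, exactly as written in (86).

```python
import cmath
import numpy as np

def reduced_cubic_roots(p, q):
    """Roots of x^3 + p x + q = 0 via the unipodal route (eqs. 78, 86, 87)."""
    r = -4 * p / 3                                 # complexity r, eq. (78)
    b0 = -4 * q                                    # eq. (78)
    b1 = 4 * cmath.sqrt(q**2 + 4 * p**3 / 27)      # eq. (87), one choice of sign
    b_plus = b0 + b1
    alpha = complex(-0.5, 3 ** 0.5 / 2)            # primitive cube root of unity
    c = b_plus ** (1 / 3)                          # any fixed cube root of b_+
    return [0.5 * (alpha**j * c + r / (alpha**j * c)) for j in range(3)]   # eq. (86)

p, q = -7 + 2j, 3 - 1j
mine = sorted(reduced_cubic_roots(p, q), key=lambda z: (z.real, z.imag))
ref = sorted(np.roots([1, 0, p, q]), key=lambda z: (z.real, z.imag))
for a, b in zip(mine, ref):
    print(a, b)        # the two lists agree to rounding error
```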

Conclusion

A trivial change from the unicoid of complexity unity to the unicoid of complexity $r$, a nonzero complex number, allows us to generalize the Chebyshev polynomials to the so-called bivariate Chebyshev polynomials, from which further relationships between the Chebyshev polynomials and the Fibonacci and Lucas polynomials can be drawn. That the cubic has a solution in the Chebyshev polynomials surprised me. I wonder what else modern mathematics has overlooked about these remarkable polynomials. By the way, don’t forget to read the appendices – they’re pretty interesting.

References

[1] A. Benjamin & D. Walton. ‘Counting on Chebyshev Polynomials’. Mathematics Magazine 82(2):117–126, April 2009.

[2] D. Hestenes, P. Reany, G. Sobczyk. ‘Unipodal Algebra and Roots of Polynomials’. Advances in Applied Clifford Algebras, Vol. 1, No. 1, 51–64, 1991, Mexico City.

[3] J. P. Tignol. Galois’ Theory of Algebraic Equations, 1980 (English translation 1988), The Bath Press, Avon, Great Britain.

[4] P. Reany. ‘Unipodal Algebra Reopens the Theory of Algebraic Equations’. Advances in Applied Clifford Algebras 4(2), January 1994, Mexico City.

[5] M. Karim. ‘Solutions of General Pentagonal Equation Under Certain Conditions’. Journal of Mathematical Sciences and Computer Applications 1(1), 2010.

[6] A. Kumar, P. Kumar. ‘Bicomplex Version of Laplace Transform’. International Journal of Engineering and Technology, Vol. 3(3), 2011, 225–232.

[7] D. Rochon. ‘A Bicomplex Riemann Zeta Function’. Tokyo J. Math., Vol. 27, No. 2, 2004.

[8] A. Benjamin, et al. ‘Combinatorial Trigonometry with Chebyshev Polynomials’. Journal of Statistical Planning and Inference, 140(8):2157–2160. DOI: 10.1016/j.jspi.2010.

[9] Y. Li. ‘On Chebyshev Polynomials, Fibonacci Polynomials, and Their Derivatives’. Journal of Applied Mathematics, Volume 2014, Article ID 451953, 8 pages.

[10] G. Arfken, H. Weber, F. Harris. Mathematical Methods for Physicists, 7th ed. Elsevier, New York, 2013, pp. 899–910.

[11] S. K. Nurkan, F. Niu, I. A. Guven. ‘A Note on Bicomplex Fibonacci and Lucas Numbers’. International Journal of Pure and Applied Mathematics, Volume 120, No. 3, 2018, 365–377. doi: 10.12732/ijpam.v120i3.7, url: http://www.ijpam.eu.

[12] F. Qi, D. Niu, and D. Lim. ‘Notes on Explicit and Inversion Formulas for the Chebyshev Polynomials of the First Two Kinds’. https://www.researchgate.net.

[13] G. Sobczyk. New Foundations in Mathematics, Birkhäuser/Springer Science, 2013, New York.

[14] Z. Li. ‘Some Identities Involving Chebyshev Polynomials’. Mathematical Problems in Engineering, Hindawi Publishing Corporation, 2015.

[15] J. Cigler. ‘A simple approach to q-Chebyshev polynomials’. Preprint: https://arxiv.org/ftp/arxiv/papers/1201/1201.4703.pdf.

[16] A. Benjamin & D. Walton. ‘Combinatorially Composing Chebyshev Polynomials’. https://www.math.hmc.edu/benjamin/papers, 2008.

[17] D. Hestenes. ‘A Unified Language for Mathematics and Physics’. In J. S. R. Chisholm, A. K. Commons (Eds.), Clifford Algebras and their Applications in Mathematical Physics, Reidel, Dordrecht/Boston (1986), 1–23.

[18] F. Klein. Elementary Mathematics from an Advanced Standpoint, Macmillan, 1932, London.

Appendix 1

Knowing that $\tfrac{1}{2}(X^n + X^{-n})$ is pure complex, we can prove (4) by another method that is worth knowing. Let $\langle\,\cdot\,\rangle$ be a selector that selects only the complex (scalar) part of its argument. In other words, if, say, we have the unipode $a + bu$ with $a, b$ complex numbers, then

\[
\langle a + bu \rangle = \langle a \rangle = a . \tag{89}
\]

Turning this around, we can conclude that, for arbitrary complex numbers a, b,

\[
a = \langle a \rangle = \langle a + bu \rangle , \tag{90}
\]
where we say that the term $bu$ has been virtually emplaced into the argument of the selector. Using this selector operator, we can prove (4) by virtually emplacing a carefully chosen unisor (vector) term.

Proof:

\[
\begin{aligned}
\tfrac{1}{2}(X^n + X^{-n}) &= \left\langle \tfrac{1}{2}X^n + \tfrac{1}{2}X^{-n} \right\rangle \\
&= \left\langle X^n\,\tfrac{1}{2}(1+u) + X^{-n}\,\tfrac{1}{2}(1-u) \right\rangle \\
&= \left\langle X^n u_+ + X^{-n} u_- \right\rangle \\
&= \left\langle X_+^n u_+ + X_-^n u_- \right\rangle \\
&= \left\langle X_+^n\,\tfrac{1}{2}(1+u) + X_-^n\,\tfrac{1}{2}(1-u) \right\rangle \\
&= \tfrac{1}{2}\left( X_+^n + X_-^n \right) .
\end{aligned} \tag{91}
\]

Appendix 2: My Personal Highlights of the Unipodal Algebra

By the ‘Highlights of the Unipodal Algebra’, I refer to my own experience with it, rather than a full history of all algebras isomorphic to it, such as would include the bicomplex algebra, the tessarines, etc. In particular, I want to include the role that serendipity played in the process. In 1984, a year after completing my undergraduate degree in mathematics, I was taking some time to go over solutions to cubic and quartic equations, which had not been part of the formal curriculum I studied in college. The next year, my friend Tony showed me the results of his work on the properties of certain ‘special’ numbers he had discovered (the 10-adic numbers). The 10-adic numbers are (for our purposes here) the ring formed by the set of all infinite strings of digits with addition, subtraction, and multiplication defined on them, and they include the integers as a ‘subring’. Tony showed me that within this number system are nontrivial idempotents, $e_1 = e_1^2$ and $e_2 = e_2^2$. Once he showed me that

\[
e_1 \cdot e_2 = 0 \quad \text{and} \quad (e_1 - e_2)^2 = 1 , \tag{92}
\]

I was intrigued and I promised him I’d spend some time working out the properties of this number system with him. I named the number system the ‘ring of hyperintegers’, and I made the obvious connection to the trivial Clifford algebra $C_1$ with one unit vector $\sigma$, where $\sigma^2 = 1$. (There’s a superficial similarity, but they’re not isomorphic.) Therefore, I defined

\[
\sigma = e_1 - e_2 . \tag{93}
\]

When I asked Tony how he came to know these numbers, he answered that he had discovered them by accident while he was attempting to find a proof of Fermat’s Last Theorem. His trick

was to look at the behavior of the $r$ rightmost digits of positive integers $x, y, z$ when placed in the equation
\[
x^n + y^n = z^n . \tag{94}
\]

Let $w$ be an integer, and let $w_r$ be the $r$ rightmost digits of $w$. Tony found, to his consternation, that when $x_1 = 5$, $y_1 = 6$, and $z_1 = 1$, then (94) would hold for any positive integer $n$ in the $r$ rightmost digits, whenever $x$ and $y$ are the special idempotents truncated to $r$ digits. So it looked to him that his discovery was taking him in the wrong direction to find a proof consistent with Fermat’s conjecture. Then he found a formula for this special $x$ and $y$ satisfying, for $r = 1$,

\[
5 + 6 = 1 . \tag{95}
\]

That formula would produce for an arbitrary r the r rightmost digits of the idempotent e1, whose rightmost digit is 5, and another for e2, whose rightmost digit is 6. And since these two numbers are idempotents of the hyperintegers, of course

\[
e_{1,r}^{\,n} + e_{2,r}^{\,n} = 00..01 \pmod{10^r} , \tag{96}
\]
where ‘00..01’ is a 1 preceded by $r - 1$ zeros. For the curious, the ten rightmost digits of these idempotents are
\[
e_{1,10} = 8212890625 , \qquad e_{2,10} = 1787109376 . \tag{97}
\]
For some fun, place this string in WolframAlpha.com for computation:

\[
a = 8212890625, \quad b = 1787109376, \quad ab, \quad a + b, \quad a - b, \quad (a - b)^2, \quad a^2 \tag{98}
\]
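Plain Python integer arithmetic will also do the job. The snippet below (my own sketch) checks the idempotent identities behind (92), (96), and (97) modulo $10^{10}$.

```python
M = 10**10
a = 8212890625    # ten rightmost digits of e1
b = 1787109376    # ten rightmost digits of e2

print(a * a % M == a)         # True: a is idempotent mod 10^10
print(b * b % M == b)         # True: b is idempotent mod 10^10
print(a * b % M == 0)         # True: e1 * e2 = 0, as in (92)
print((a + b) % M == 1)       # True: e1 + e2 = 1
print((a - b)**2 % M == 1)    # True: (e1 - e2)^2 = 1, as in (92)
print(all(pow(a, n, M) + pow(b, n, M) == M + 1 for n in range(2, 20)))   # eq. (96)
```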

A month or two into this quest, something amazing occurred when I sought possible cube roots of hyperintegers. I stumbled upon a cubic equation over the hyperintegers that was of the form of a ‘reduced cubic’, which can be formed from the hyperinteger equation

\[
X^3 = C , \tag{99}
\]
where $X = X_0 + \sigma X_1$ and $C = C_0 + \sigma C_1$, and the components $X_0, X_1, C_0, C_1$ are hyperintegers. Fortunately, I knew from my self-study of the previous year that the reduced cubic could be obtained from a general cubic by a simple variable substitution, with no loss of generality. Had I not known of this connection to the general cubic, I might have discarded my discovery as an uninteresting special case. Anyway, it was then immediately obvious to me that if I translated the cubic results into a one-dimensional Clifford algebra, I’d have a novel method to solve for the roots of the general cubic equation, essentially by just taking a cube root of (99), when the numbers are expressed in idempotent form, to get
\[
X = C^{1/3} , \tag{100}
\]
where $X = X_0 + \sigma X_1$ and $C = C_0 + \sigma C_1$, which I referred to as just ‘Clifford numbers’, and their components $X_0, X_1, C_0, C_1$ were complex numbers. So, I wrote up a short paper on what I had discovered and showed it to my former professor, David Hestenes. That was in 1985/1986. After he read it, he was intrigued and commented a bit philosophically that the algebraists’ emphasis on fields/division-algebras was a bit overrated. Still, he was not completely enthralled with my use of complex scalars. As he put it ([17], pp. 7–8):

I am of half a mind to outlaw the Complex Clifford Algebras altogether, because the imaginary scalars do not have a natural geometric interpretation, and their algebraic features exist already in the real Clifford Algebras. However, there is already a considerable literature on complex Clifford Algebras, and they do have some formal advantages. For example, a student of mine, Patrick Reany, has recently shown [9] that the solutions of any cubic equation over the complex numbers are contained in the solutions X of the

simple cubic equations $X^3 = C$, where $X$ and $C$ are elements of the complex algebra $C_1$. This opens up new possibilities for the theory of algebraic equations. Note that $C_1$ is the largest Clifford Algebra for which all the elements commute. But note also that $C_1$ can be regarded as a subalgebra of the real Euclidean algebra $R_3$, with the pseudoscalar of $R_3$ playing the role of imaginary scalars.

Hestenes has been engaged for decades in the fruitful research program to eliminate from physics uninterpreted square roots of $-1$, i.e., $i = \sqrt{-1}$. He started with the $i$ in the Pauli equation, having revealed an interpretation of the Pauli matrices as vectors represented in $2\times 2$ matrix form. He has subsequently developed and used Geometric Algebra for deep theoretical work. On the other hand, mathematicians do sometimes benefit by giving the $i$ of the complex numbers a geometrical interpretation. As stated by Felix Klein ([18], pp. 101–102):

A. The Fundamental Theorem of Algebra. This is, as you know, the theorem that every algebraic equation of degree n in the field of complex numbers has, in general, n roots, or, more accurately, that every polynomial f(z) of degree n can be separated into n linear factors. All proofs of this theorem make fundamental use of the geometric interpretation of the complex quantity x + iy in the xy plane....

Anyway, back to the timeline: About the same time I wrote the paper on the cubic, I also wrote up the work on the theory of hyperintegers that Tony and I had done, and submitted it to a mathematician Hestenes knew at ASU. I considered the paper to be accessible even to high school students, by which they could be exposed to rings, ideals, zero-divisors, idempotents, unipotent numbers, induction proofs,⁶ and, finally, modulo arithmetic — all new and interesting stuff to most of them. And it still could be, right? But the official reply I received on the paper was that the theory and the results in the paper were already known, and thus there was little reason to recommend it. However, I still feel now as I did back then: the professor who reviewed the paper, though correct on technical grounds, missed the point of the paper — hyperintegers are fun!

Then, in 1991, out of the blue, Hestenes showed me a pre-print paper on the unipodal algebra and the roots of polynomial equations (which, at the time, would soon be published as [2]). Garret Sobczyk wrote the majority of the final submitted paper, invented the names ‘unipodal’ and ‘unipotent’, and extended the basic work I had done on the cubic to include a unipodal solution for the roots of the quartic equation as well, with the aid of Mathematica. But I think his greatest contribution to the subject was to invent the unicoid of complexity r (which I had missed), by which the $n^2$ unipodal roots can be reduced to $n$ complex roots.

Anyway, three years after that, while I was studying from the book Galois’ Theory of Algebraic Equations by J. P. Tignol [3], I noticed some equations and results in the book that looked like they’d be better done in the unipodal algebra, instead of the complex algebra. So I wrote up a paper on that, titled ‘Unipodal Algebra Reopens the Theory of Algebraic Equations’ [4], and it was published in 1994. In that paper I arrived at the recurrence relation for the Chebyshev polynomials of the first kind, without my knowing that that’s what they were. (I mean, how many undergraduate math majors in the 1980s were exposed to the Chebyshev polynomials?)

Next, we jump ahead to about 2015, when I undertook a restudy of linear algebra, and it was in that effort that I came across the Chebyshev polynomials in earnest, presumably for the first time. If I recall correctly, the point of the exercise was for the student to prove the orthogonality of these polynomials. But at that time, I made no connection back to my 1994 paper, simply because I had forgotten the precise form of those polynomials.
That only occurred when I reread the 1994 paper in July 2019 (and then made the connection), and the rest of the story I already presented above.

⁶ An induction proof is the least sophisticated way to establish the existence of the idempotent $e_1$.

Appendix 3: Adding Apples and Oranges

When mathematicians first decided to formulate the complex numbers into a legitimate algebraic system, I’m sure some people had the reasonable skeptical reaction, “But how can one add real numbers and imaginary numbers together? What does the result mean? Isn’t that adding apples and oranges together, so to speak?” The root of the problem of accepting complex numbers as ‘mathematically legitimate’ lies in psychology, not in mathematics. One way to assuage the fears of the naysayers is to claim that the complex number $x + iy$ is just a familiar geometric object, i.e., a point in the Argand Plane. “That’s what it is!” So, if you believe in the points of the 2-d plane, you should be able to accept the complex numbers. A less geometrical justification of the complex numbers is to find a mapping of them into a matrix algebra, which can be done. So, if one believes in matrix algebras, that should be enough. Formally, we can think of the complex numbers as the extension of the real numbers by adjoining a unit imaginary $i$, and denote it by $\mathbb{R}[i]$.

When one studies the properties of the complex numbers, one quickly learns that they can be used to represent rotations in the plane. Wanting to find a suitable generalization of the complex numbers to represent rotations in 3d, Sir William Rowan Hamilton set about a many-year research program, finally finding them in 1843. He called them the quaternions. It is claimed that one reason it took him so long to find them was that he was sure he would find what he wanted as a commutative extension of the complex numbers, but that was not to be. And the race to invent new and useful systems was on. According to Wikipedia, in 1848, James Cockle invented the commutative hypercomplex number system called the tessarines. They are numbers of the form

\[
t = w + x i + y j + z k , \quad \text{where } w, x, y, z \in \mathbb{R} , \tag{101}
\]
and $i^2 = -1$, $j^2 = 1$, $k = ij = ji$. Formally, the tessarines can be thought of as the extension of the reals by both $i$ and $j$, or as $\mathbb{R}[i][j] = \mathbb{R}[i,j]$. By the way, in the language of this paper, $j$ would be called the ‘unipotent number’ of the system. Apparently, Cockle was not too sure about the ontology of the zero divisors one can form within this algebra. (After all, the tessarines are just the unipodal numbers, with $u$ replaced by $j$.) So he referred to them as ‘impossibles’. Well, now you see the danger of adding mathematical ‘apples and oranges’ together, folks. Not only can you get ‘imaginary’ numbers; even worse, you can get ‘impossible’ numbers. Okay, to be fair, I admit that I have a bit of sympathy for the people who claim that every new mathematical system should be proven correct before it is used. However, I will counter with this comment: Wisdom is justified of all her children. Anyway, as for the ontology of zero-divisors, even in Cockle’s day no one seemed too concerned about the zero divisors 2 and 3 in the ring $\mathbb{Z}_6$. As for myself, I look at all number systems as purely formal. If you can find a practical use for any one of them, so much the better. Moving on, history tells us that in 1892, Corrado Segre invented the bicomplex numbers, given as numbers of the form

\[
X = x + y i + z j + t k , \quad \text{where } x, y, z, t \in \mathbb{R} , \tag{102}
\]
and $i^2 = -1$, $j^2 = -1$, $k = ij = ji$; therefore $k^2 = 1$. This system is isomorphic to the tessarines. But the name suggests how we should think of these numbers: The bicomplex numbers are the extension of the reals, first by the unit imaginary number $i$, and then by the second (independent) unit imaginary number $j$. That is, the bicomplex numbers are the extension of the reals by two different imaginary numbers. The unipotent number $k$ is then thought of as a derived number. I suppose that after one has made the psychological adjustment to adding real and imaginary numbers together, one can rather easily stomach adding into the mix another distinct unit imaginary — hence the bi-complex numbers.

Perhaps someday soon the bicomplex numbers $\mathbb{R}[i, j]$ and the unipodal numbers $\mathbb{C}[u]$ can be brought under the same roof, but that won’t be easy. Though the two systems are isomorphic, they have different traditions on how to approach problem solving, and different vocabularies that reflect the ways their adherents think about how to use those systems.

Appendix 4: Just for fun!

# 1) Show that the square root of a vector in the unipodal algebra makes sense by finding one value each for the complex numbers $\alpha$ and $\beta$ in the equation
\[
\sqrt{u} = \alpha u_+ + \beta u_- , \tag{103}
\]
which has four distinct solutions.

Ans: One choice is $\alpha = 1$ and $\beta = i$, yielding $\sqrt{u} = u_+ + i u_-$.

# 2) Show that the logarithm of a vector in the unipodal algebra makes sense by finding one value each for the complex numbers $\alpha$ and $\beta$ in the equation

\[
\operatorname{Log} u = \alpha u_+ + \beta u_- . \tag{104}
\]

Ans: $\alpha = 0$ and $\beta = -i\pi$, yielding (?)
\[
\operatorname{Log} u = -i\pi\, u_- . \tag{105}
\]
But something is amiss, because if we multiply through by 2, we get

\[
2\operatorname{Log} u = \operatorname{Log} u^2 = \operatorname{Log} 1 = 0 = -2i\pi\, u_- . \tag{106}
\]

Is there a resolution?
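A numerical way to play with both puzzles (my own sketch, not from the paper) is to work directly in the idempotent basis, where a unipode $\alpha u_+ + \beta u_-$ is just the pair $(\alpha, \beta)$ and multiplication, squaring, and exponentiation all act componentwise; $u$ itself is the pair $(1, -1)$.

```python
import cmath

# In the idempotent basis, A0 + A1*u is the pair (A0 + A1, A0 - A1),
# and products act componentwise.  u itself is the pair (1, -1).
u = (1, -1)

# Problem 1: sqrt(u).  The candidate alpha*u_+ + beta*u_- is the pair (alpha, beta).
alpha, beta = 1, 1j
square = (alpha**2, beta**2)                 # componentwise square
print(square == u)                           # True: (u_+ + i*u_-)^2 = u

# Problem 2: Log(u).  Exponentiation is also componentwise.
alpha, beta = 0, -1j * cmath.pi
exp_of_candidate = (cmath.exp(alpha), cmath.exp(beta))
print(exp_of_candidate)                      # approximately (1, -1), i.e. u again
```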
