Hyperoperation: Introduction To The Theory And Potential Solutions


HYPEROPERATION: INTRODUCTION TO THE THEORY AND POTENTIAL SOLUTIONS

By

MARK DALTHORP

______

A Thesis Submitted to The Honors College

In Partial Fulfillment of the Bachelor's Degree With Honors in

Mathematics

THE UNIVERSITY OF ARIZONA

MAY 2019

Approved by:

______

Dr. Douglas Pickrell, Department of Mathematics

Mark Dalthorp May 1, 2019

Abstract

We investigate the sequence of hyperoperations, which begins with addition, multiplication, and exponentiation. The nth hyperoperation is defined by iterating the previous one. We address the problem of extending hyperoperations to non-integer values. We show the existence of an analytic solution, and present several approaches to its construction.

1 Introduction

Our motivation is the observation that the arithmetic operations of addition, multiplication, and exponentiation can be thought of as a sequence, since multiplication and exponentiation are iterated forms of addition and multiplication, respectively:

a · b = a + a + ... + a   (b times)   (1)
a^b = a · a · ... · a   (b times)   (2)

One could easily imagine extending this and defining an operation to represent iterated exponentiation:

a ⋆ b = a^(a^(...^a))   (b times)   (3)

Since exponentiation is not commutative, there are two possible conventions for this. The standard one is right-nested parentheses a^(a^(...^a)), because the other direction is relatively easy to understand: (((a^a)^a)^...)^a = a^(a^(b−1)).

The iterative perspective on addition, multiplication, and our new operation ⋆ can be seen as a common pattern. For example, observe that a · b = a + a + ... + a (b terms) = a + (a + a + ... + a) (b − 1 terms) = a + a · (b − 1). In general we have

a · b = a + (a · (b − 1))   (4)
a^b = a · (a^(b−1))   (5)
a ⋆ b = a^(a ⋆ (b−1))   (6)

where each operation ∗ satisfies a ∗ 1 = a. It is fairly natural to continue this idea, defining a new operation to compound ⋆, and so on. We define a sequence of hyperoperations recursively for n > 1 by:

a ∗1 b = a + b (7)

a ∗n 1 = a (8)

a ∗n b = a ∗n−1 (a ∗n (b − 1)) (9)

This definition only applies to integer values of a and b. It is straightforward to see that ∗2 and ∗3 are multiplication and exponentiation, respectively. To see the definition in action, let us consider a simple example with n > 3: computing 2 ∗5 3.

2 ∗5 3 = 2 ∗4 (2 ∗5 2) = 2 ∗4 (2 ∗4 (2 ∗5 1))   (10)
= 2 ∗4 (2 ∗4 2) = 2 ∗4 (2 ∗3 (2 ∗4 1))   (11)
= 2 ∗4 (2 ∗3 2) = 2 ∗4 (2^2) = 2 ∗4 4   (12)
= 2 ∗3 (2 ∗4 3) = 2 ∗3 (2 ∗3 (2 ∗4 2))   (13)
= 2 ∗3 (2 ∗3 (2 ∗3 (2 ∗4 1)))   (14)
= 2 ∗3 (2 ∗3 (2 ∗3 2))   (15)
= 2^(2^(2^2)) = 2^(2^4) = 2^16   (16)
= 65536   (17)
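For small integer arguments the recursion (7)-(9) can be checked directly on a computer. The following R sketch (the function name hyper is my own, not notation used elsewhere in this thesis) unrolls the recursion in b into a loop so the call depth stays shallow:

# hyper(a, n, b): a *_n b for integer b >= 1, following (7)-(9)
hyper <- function(a, n, b) {
  if (n == 1) return(a + b)        # a *_1 b = a + b
  y <- a                           # a *_n 1 = a
  if (b >= 2) for (i in 2:b) y <- hyper(a, n - 1, y)   # a *_n i = a *_{n-1} (a *_n (i-1))
  y
}
hyper(2, 3, 4)   # 2^4 = 16
hyper(2, 4, 3)   # 2 *_4 3 = 16
hyper(2, 5, 3)   # 65536, matching the computation above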

We know that multiplication and exponentiation have natural extensions to non-integer values. This means we can make sense of a ∗4 b for noninteger a (but not for noninteger b). What about non-integer b? Our goal in this paper is to find a sequence of hyperoperations satisfying the recursion (7)-(9) that are real-analytic functions in both arguments for a, b > 1. A solution to this recursion is not unique: given any 1-periodic function θ(x) with θ(0) = 0, the operation a ∗̂n b = a ∗n (b + θ(b)) will also satisfy the same recursion, because

a ∗̂n b = a ∗n (b + θ(b)) = a ∗n−1 (a ∗n (b − 1 + θ(b))) = a ∗n−1 (a ∗n ((b − 1) + θ(b − 1))) = a ∗n−1 (a ∗̂n (b − 1))

Thus, once we have constructed a solution, we actually have an infinite family of solutions. Ideally, our solution would also be monotonic in both variables. However, constructing a solution is more difficult than might be expected. In terms of construction, we will focus on ∗4, since this is the first difficult operation, and we can use the same techniques to construct the higher-order hyperoperations. Other authors have constructed analytic forms of a ∗4 b on limited domains, specifically for a ∈ (1, e^{1/e}) or for a ∈ (e^{1/e}, ∞). Ideally, we would be able to construct a "universal" solution, i.e. one that is analytic on the whole region from 1 to ∞. Though I do not yet have such a solution, herein I offer several promising approaches. In this paper, we will briefly discuss the general theory of iteration of analytic functions, because this provides a framework for thinking about hyperoperation. Then, we will look at properties that must hold for any nice solution. Finally, we will examine the known non-universal constructions of hyperoperations, and include some approaches to finding a universal solution.

2 Theory of function iteration

The most straightforward way to approach the problem of solving the hyperoperation recursion is to look at functional iteration, since hyperoperations are defined by iteration.

2.1 Preliminary Definitions

Definition 1. For an arbitrary function f whose image is a subset of its domain, we use the following notation for the iterates of f:

f◦n(x) = f(f(...f(f(x))...))   (n iterations)   (18)
f◦−n(x) = f^{−1}(f^{−1}(...f^{−1}(f^{−1}(x))...))   (n iterations)   (19)
f◦0(x) = x   (20)

Note that f◦(n+m) = f◦n ◦ f◦m. Also notice that if f(b) = a ∗n b for some fixed a, then f◦b(1) = a ∗n+1 b, so if we can find an analytic way to define non-integer iterates we will have found an analytic solution to the hyperoperator equation.

Definition 2. A point c is a fixed point of f if f(c) = c. A fixed point c is attracting if 0 < |f'(c)| < 1, repelling if |f'(c)| > 1, superattracting if f'(c) = 0, and neutral or parabolic if |f'(c)| = 1. We will mostly be concerned with attracting and repelling fixed points, as these are the easiest to analyze.
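As a quick illustration of the relation f◦b(1) = a ∗n+1 b noted above (a minimal sketch; iterate is an ad hoc helper, not notation used elsewhere in this thesis): with f(x) = a ∗3 x = a^x, iterating f three times starting from 1 produces a ∗4 3.

iterate <- function(f, n, x) { for (i in 1:n) x <- f(x); x }   # f composed n times
a <- sqrt(2)
f <- function(x) a^x       # f(x) = a *_3 x
iterate(f, 3, 1)           # a^(a^a) = a *_4 3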

Proposition 1. For all x in a neighborhood of an attracting or superattracting fixed point c, lim_{n→∞} f◦n(x) = c; moreover, if c is attracting, then f◦n(x) − c ∼ κ f'(c)^n for some constant κ (dependent on f, x, and c, but independent of n).

The proof of this proposition follows easily by expanding f to the first term of its Taylor series about c. Also, notice that if c is a repelling fixed point of f, it is an attracting fixed point of f^{−1}. This will be useful for understanding the iteration of functions around repelling fixed points.

2.2 The Schroeder Function

One of the easiest ways to solve for non-integer iteration is by using a function to control the iteration. Given a function f, the Schroeder function for f is a function h satisfying

h(f(x)) = λ h(x)   (21)

i.e. h is an eigenfunction of the composition operator associated with f. One could imagine defining Schroeder functions for arbitrary f : A → A, but in this paper we will only consider Schroeder functions for complex analytic f. The iteration of f can then be determined by f◦n(x) = h^{−1}(λ^n h(x)), which opens up the possibility of non-integer iteration of f, since the right-hand side does not require an integer n. The following theorem tells us how to find Schroeder functions:

Theorem 1. If c is an attracting or repelling fixed point of an analytic function f : Ω → C, where Ω is a connected open set containing c, then there exists a unique function h analytic in a neighborhood of c such that h(f(x)) = λ h(x) and h'(c) = 1. In this case, λ = f'(c). Also, if c is an attracting fixed point of f, then h is given by

h(x) = lim_{n→∞} (f◦n(x) − c)/f'(c)^n = ∫_c^x ∏_{k=0}^∞ (f'(f◦k(t))/f'(c)) dt   (22)

and in this case h extends to the entire basin of attraction of c.

Proof. First, we deal with the attracting case. We begin by showing that (22) solves (21). If h is defined by the limit definition (which converges in a neighborhood of c by Proposition 1), we have

h(f(x)) = lim_{n→∞} (f◦n(f(x)) − c)/f'(c)^n = lim_{n→∞} (f◦(n+1)(x) − c)/f'(c)^n = f'(c) lim_{n→∞} (f◦(n+1)(x) − c)/f'(c)^(n+1) = f'(c) h(x)   (23)

Now we show that this function is analytic and equal to the integral expression. Since f◦n(x) → c exponentially, the product inside the integral converges absolutely, and one can show that it converges uniformly on any compact subset of the basin of attraction of c; hence the product defines an analytic function on the basin of attraction of c. Uniform convergence also allows us to take the limit out of the integral:

∫_c^x ∏_{k=0}^∞ (f'(f◦k(t))/f'(c)) dt = ∫_c^x lim_{n→∞} ∏_{k=0}^{n−1} (f'(f◦k(t))/f'(c)) dt = lim_{n→∞} ∫_c^x ∏_{k=0}^{n−1} (f'(f◦k(t))/f'(c)) dt   (24)
= lim_{n→∞} (1/f'(c)^n) ∫_c^x ∏_{k=0}^{n−1} f'(f◦k(t)) dt = lim_{n→∞} (1/f'(c)^n) ∫_c^x (d/dt) f◦n(t) dt   (25)
= lim_{n→∞} (f◦n(x) − c)/f'(c)^n   (26)

Notice also that the integral formula yields h'(c) = 1. When c is a repelling fixed point of f, observe that c is an attracting fixed point of f^{−1} (which exists locally by the inverse function theorem). Hence we can apply the same method to f^{−1}(x), which will still solve (21).

To show uniqueness, we observe that (21) and the condition h'(c) = 1 uniquely determine the Taylor series of h. Differentiating (21) yields

f'(x) h'(f(x)) = λ h'(x)   (27)

which, together with the condition h'(c) = 1, implies that λ = f'(c). For the higher-order terms, we use Faa di Bruno's formula to obtain

f'(c) h^{(n)}(x) = (d^n/dx^n) h(f(x)) = Σ_{k=1}^n h^{(k)}(f(x)) B_{n,k}(f'(x), f''(x), ..., f^{(n−k+1)}(x))   (28)

where B_{n,k} is an incomplete Bell polynomial. Noting that B_{n,n}(f'(x)) = f'(x)^n, and evaluating at x = c, we can rearrange the above equation to become:

h^{(n)}(c) = (1/(f'(c) − f'(c)^n)) Σ_{k=1}^{n−1} h^{(k)}(c) B_{n,k}(f'(c), f''(c), ..., f^{(n−k+1)}(c))   (29)

As long as |f'(c)| ≠ 0, 1, this is well-defined and gives us h^{(n)}(c) in terms of the lower-order derivatives of h at c, so the Taylor series of h at c is uniquely determined.

The second construction makes it clear that if f is increasing on some interval containing c, then we can find a Schroeder function that is also increasing on that interval. Also important to notice is that, for any f with fixed point c and Schroeder function h, we have

f◦n(x) = h^{−1}(f'(c)^n h(x))   (30)

for any n. This allows for an extension of f◦n(x) to noninteger n, which gives us our simplest definition of analytic hyperoperation. Using (30), if c is an attracting fixed point of f, we can define fractional iterates of f by

f◦t(x) = h^{−1}(f'(c)^t h(x)) = lim_{n→∞} f◦−n(c + f'(c)^t (f◦n(x) − c))   (31)

This works because h^{−1}(y) can be computed as

y = h(z) = lim_{n→∞} (f◦n(z) − c)/f'(c)^n   (32)
z = h^{−1}(y) = lim_{n→∞} f◦−n(c + f'(c)^n y)   (33)
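Formula (31) is easy to test numerically. The sketch below (helper names frac_iter, cfix, etc. are mine; it assumes 1 < a < e^{1/e} so that a^x has an attracting real fixed point) computes fractional iterates of f(x) = a^x and checks that the half-iterate composed with itself recovers f:

a <- sqrt(2)
f    <- function(x) a^x
finv <- function(y) log(y) / log(a)            # inverse of f
cfix <- 1; for (i in 1:2000) cfix <- f(cfix)   # attracting fixed point (here cfix = 2)
lambda <- log(a) * a^cfix                      # f'(cfix)
frac_iter <- function(t, x, n = 30) {          # formula (31), truncated at finite n
  y <- x
  for (i in 1:n) y <- f(y)                     # f composed n times
  y <- cfix + lambda^t * (y - cfix)
  for (i in 1:n) y <- finv(y)                  # f inverse composed n times
  y
}
half <- function(x) frac_iter(0.5, x)
half(half(1.3))                                # should closely match f(1.3)
f(1.3)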

3 Properties of Hyperoperators

In this section, we examine properties of hyperoperation that can be determined solely from the recursion and the assumption of monotonicity.

3.1 Definitions

Throughout this section, we will use the notation ∗n to represent a sequence of operations satisfying

a ∗1 b = a + b (34)

a ∗n 1 = a , for n > 1 (35)

a ∗n b = a ∗n−1 (a ∗n (b − 1))   (36)

and such that each ∗n is real analytic and increasing in both variables for (a, b) ∈ (1, ∞) × (0, ∞). We will also need the inverse functions:

Definition 3.

a = srt^n_b (a ∗n b) = (srt^n_b a) ∗n b   (37)
b = slog^n_a (a ∗n b) = a ∗n (slog^n_a b)   (38)

read as the "nth super-root" and "nth super-logarithm" respectively. We can also use the recursion properties of hyperoperators to obtain a recursion for the super-logarithm:

slog^n_a b = slog^n_a (slog^{n−1}_a b) + 1 = slog^n_a (a ∗n−1 b) − 1   (39)
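The recursion (39) pins the super-logarithm down once its values are chosen on a single unit interval. As a crude illustration (a sketch only: it uses the simplest approximation slog_a(x) ≈ x − 1 on (0, 1], in the spirit of the polynomials p_n of Section 6, and slog4 is an ad hoc name):

# crude 4th-order super-logarithm, extended from (0,1] by the recursion (39)
slog4 <- function(x, a = 2) {
  if (x <= 0) return(slog4(a^x, a) - 1)        # slog_a(x) = slog_a(a^x) - 1
  if (x <= 1) return(x - 1)                    # linear approximation on (0,1]
  slog4(log(x, base = a), a) + 1               # slog_a(x) = slog_a(log_a x) + 1
}
slog4(65536, 2)   # 4, since 2 *_4 4 = 65536
slog4(4, 2)       # 2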

We will also need the partial derivatives:

Definition 4.

Cn(a, b) = (d/da)(a ∗n b)   (40)
Dn(a, b) = (d/db)(a ∗n b)   (41)

Notice that the following identities hold:

Cn(a, b) = Cn−1(a, a ∗n (b − 1)) + Dn−1(a, a ∗n (b − 1)) Cn(a, b − 1) (42)

Dn(a, b) = Dn−1(a, a ∗n (b − 1)) Dn(a, b − 1) (43)

We can use the recursion on ∗n to extend its definition to a negative second argument, and if we do, we obtain specific values at the negative integers:

Theorem 2. If n and k are integers such that n ≥ 3 and 0 ≥ k ≥ 3 − n, then a ∗n k = k + 1

Proof. By definition, a ∗n 1 = a for n ≥ 2. Furthermore, the assumption of monotonicity implies that slog^n_a x is well-defined. Therefore, for n ≥ 3 we have 1 = slog^{n−1}_a a = slog^{n−1}_a (a ∗n 1) = slog^{n−1}_a (a ∗n−1 (a ∗n 0)) = a ∗n 0. This shows the theorem for n = 3. We will prove the theorem for all n by induction. Suppose that the theorem holds for some n; we show that it holds for n + 1, proving the statement for 0 ≥ k ≥ 3 − (n + 1) by induction on k. By the above reasoning, it holds for k = 0. Assuming that it holds for some k with 0 ≥ k ≥ 3 − n, we show it holds for k − 1: observe that k = slog^n_a (k + 1) = slog^n_a (a ∗n+1 k) = slog^n_a (a ∗n (a ∗n+1 (k − 1))) = a ∗n+1 (k − 1). Hence, for all 0 ≥ k ≥ 3 − (n + 1), we have a ∗n+1 k = k + 1. By induction, the result holds for all n ≥ 3.

3.2 Limiting and Asymptotic Properties

In this section, we will examine some of the limiting properties of hyperoperators, so it will be convenient to adopt the following notation:

Definition 5.

a ∗n ∞ = lim_{b→∞} (a ∗n b)   (44)
a ∗∞ b = lim_{n→∞} (a ∗n b)   (45)

Notice that a ∗n ∞ can be thought of as iterating x ↦ a ∗n−1 x infinitely.

Theorem 3. Define ζn = sup{a : a ∗n ∞ converges}. Then for 1 < a < ζn, a ∗n ∞ converges, while for a > ζn and n ≥ 3 we have lim_{x→∞} (a ∗n x)/x = ∞. Furthermore, for a > ζn, the map x ↦ a ∗n−1 x has no real fixed points in (0, ∞), and for a < ζn, x ↦ a ∗n−1 x has at least one fixed point in (0, ∞) and the smallest such fixed point is a ∗n ∞.

Proof. First we show that a ∗n ∞ converges when 1 < a < ζn (by definition it cannot converge for a > ζn). Fix a < ζn. By definition, there exists a' with a < a' < ζn such that a' ∗n ∞ converges. By monotonicity we must have a ∗n b < a' ∗n ∞ for all b, so a ∗n b must converge as b goes to infinity, since it is bounded and increasing.

If a ∗n ∞ converges, then a ∗n ∞ = a ∗n−1 (a ∗n (∞ − 1)) = a ∗n−1 (a ∗n ∞), so a ∗n ∞ is a fixed point of x ↦ a ∗n−1 x.

Suppose that f(x) = a ∗n−1 x has a fixed point, and let c be the smallest such fixed point. Notice that f(0) = a ∗n−1 0 = 1 by Theorem 2, and it is simple to see that f◦k(1) = a ∗n k. Notice that f(1) = a > 1. By continuity, for x between 0 and c we have f(x) > x, but by monotonicity f(x) < c. Thus the sequence f◦k(1) = a ∗n k is increasing and bounded above, so it must converge. It must converge to a fixed point of f, but since c is the smallest fixed point, we conclude c = a ∗n ∞. Notice also that this argument shows x ↦ a ∗n−1 x has no fixed points in (0, ∞) if a > ζn.

Now consider a > ζn. To begin, we suppose a ∗n x > x for all x > 0 (we will prove this later), so a ∗n ∞ must be infinity, and we can apply L'Hopital's rule to find that the limit becomes lim_{x→∞} (a ∗n x)/x = lim_{x→∞} Dn(a, x). We prove the claim by induction on n. Notice that ζ3 = 1 and that D3(a, x) = (d/dx) a^x = (ln a) a^x, which clearly goes to infinity for all a > 1. For the inductive step, if the limit diverges for n − 1, then for sufficiently large x we know that Dn−1(a, x) > 2. To show the limit goes to ∞ for n, notice:

lim_{x→∞} (a ∗n x)/x = lim_{x→∞} Dn(a, x) = lim_{x→∞} Dn−1(a, a ∗n (x − 1)) Dn(a, x − 1)
≥ 2 lim_{x→∞} Dn(a, x − 1) = 2 lim_{x→∞} Dn(a, x) = 2 lim_{x→∞} (a ∗n x)/x

Since, as we have already observed, the limit must be greater than 1, it must be infinity, because only 0 and ±∞ are greater than or equal to twice themselves.

The following lemma is strengthened by Theorem 4:
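Numerically, the threshold ζ4 is the classical tetration bound e^{1/e} ≈ 1.4447: iterating x ↦ a^x from 1 stays bounded just below it and blows up just above it. A quick illustration (tower is an ad hoc helper; the offsets ±0.01 are arbitrary):

zeta4 <- exp(1/exp(1))                   # e^(1/e)
tower <- function(a, n = 2000) { x <- 1; for (i in 1:n) x <- a^x; x }
tower(zeta4 - 0.01)                      # converges to a finite fixed point
tower(zeta4 + 0.01)                      # diverges (overflows to Inf)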

Lemma 1. ζn is an increasing sequence.

Proof. As we have seen in the above theorem, a < ζn implies that a ∗n x converges as x → ∞. Since a ∗n 0 = 1 > 0 and a ∗n ∞ < ∞, the function x ↦ a ∗n x must have a fixed point, and therefore a ∗n+1 ∞ must converge; hence a ≤ ζn+1, which implies ζn ≤ ζn+1.

Proposition 2.

lim_{x→∞} srt^n_x x = ζn   (46)

Proof. Suppose there exists c > ζn such that limsup_{x→∞} srt^n_x x ≥ c. Then:

1 = limsup_{x→∞} ((srt^n_x x) ∗n x)/x ≥ lim_{x→∞} (c ∗n x)/x = ∞   (47)

This is a contradiction, so limsup_{x→∞} srt^n_x x ≤ ζn. Similarly, suppose there exists c < ζn such that liminf_{x→∞} srt^n_x x ≤ c. Then:

1 = liminf_{x→∞} ((srt^n_x x) ∗n x)/x ≤ lim_{x→∞} (c ∗n x)/x = 0   (48)

which is again a contradiction. Therefore liminf_{x→∞} srt^n_x x ≥ ζn. Since liminf_{x→∞} srt^n_x x ≥ ζn ≥ limsup_{x→∞} srt^n_x x, we conclude lim_{x→∞} srt^n_x x = ζn.

Proposition 3.

sup_{x∈(1,∞)} (srt^n_x x) = ζn+1   (49)

Proof. Given a with 1 = srt^n_1 1 < a < sup_{x∈(1,∞)}(srt^n_x x), there exists some x such that srt^n_x x = a. This identity can be rearranged to x = a ∗n x, so x ↦ a ∗n x has a fixed point and, by Theorem 3, a ≤ ζn+1. Similarly, if a > sup_{x∈(1,∞)}(srt^n_x x), then by monotonicity a ∗n x > x for all x > 1, so by Theorem 3 a ≥ ζn+1. Thus a < sup_{x∈(1,∞)}(srt^n_x x) implies a ≤ ζn+1 and a > sup_{x∈(1,∞)}(srt^n_x x) implies a ≥ ζn+1, so it must be that sup_{x∈(1,∞)}(srt^n_x x) = ζn+1.

Theorem 4. ζn+1 > ζn and ζn ∗n ∞ converges for all n.

Proof. Suppose that, for some value of n, ζn ∗n ∞ converges (this is easy to show for n = 3, because ζ3 = 1 and a ∗3 b = a^b). Thus, for large enough x, ζn ∗n x < x. By continuity, for some ε > 0 we have (ζn + ε) ∗n x < x, which by Theorem 3 implies that ζn + ε ≤ ζn+1. Now, if we can show that ζn+1 ∗n+1 ∞ converges, by induction we will have convergence for all values of n. By Proposition 2, Proposition 3, and the fact that 1 ≤ ζn < ζn+1, there must exist some 1 < b < ∞ such that srt^n_b b = ζn+1. Rearranging this equation gives b = ζn+1 ∗n b, which, as in the proof of Theorem 3, implies that ζn+1 ∗n+1 ∞ exists.

We now move on to looking at limits in n of a ∗n b.

3.3 Inequalities

If we assume our sequence of hyperoperators is increasing in a and b, this gives us some information on the behavior of a ∗n b as a sequence in n. Here, we present a number of minor results consisting of inequalities for a ∗n b.

Lemma 2. If a ≤ b and 1 < b ≤ 2, then a ∗n b ≤ a ∗n−1 b, with equality if and only if a = b = 2.

Proof. For b < 2:

a ∗n b = a ∗n−1 (a ∗n (b − 1)) < a ∗n−1 (a ∗n 1) = a ∗n−1 a ≤ a ∗n−1 b   (50)

For a < 2 and b = 2:

a ∗n 2 = a ∗n−1 a < a ∗n−2 a = a ∗n−1 2   (51)

For a = b = 2, clearly 2 ∗n 2 = 2 ∗n−1 2. Since 2 + 2 = 4, this gives us 2 ∗n 2 = 4 for any n.

Notice that we also know a ∗n b > a for a, b > 1, so Lemma 2 in fact tells us that under those conditions a ∗∞ b converges.

Lemma 3. a ∗n b > a ∗n−1 b if and only if a ∗n (b − 1) > b, and a ∗n b < a ∗n−1 b if and only if a ∗n (b − 1) < b.

Proof. Simply apply slog^{n−1}_a to both sides of the inequalities.

Lemma 4. If a ≥ 2 and x ≥ 3, then a ∗n x > a ∗n−1 x.

Proof. For integer values of x ≥ 2, notice that we can use induction (for n ≥ 2) to obtain

2 ∗n+1 x = 2 ∗n (2 ∗n (... 2 ∗n (2 ∗n 2)) ...) ≥ 2 ∗n−1 (2 ∗n−1 (... 2 ∗n−1 (2 ∗n−1 2)) ...) = 2 ∗n x   (x times on each side)   (52)

with equality if and only if x = 2. This in fact gives us 2 ∗n x ≥ 2 ∗1 x = x + 2 for integer x ≥ 2. Now, for any x ≥ 3:

2 ∗n x = 2 ∗n−1 (2 ∗n (x − 1)) ≥ 2 ∗n−1 (2 ∗n (⌊x⌋ − 1)) ≥ 2 ∗n−1 (⌊x⌋ + 1) > 2 ∗n−1 x   (53)

Now suppose a > 2:

a ∗n x = a ∗n−1 (a ∗n (x − 1)) > a ∗n−1 (2 ∗n (x − 1)) > a ∗n−1 x   (54)

Notice that this lemma tells us that ζn < 2 for any n, because 2 ∗n ∞ ≥ 2 ∗1 ∞ = 2 + ∞ = ∞.

Lemma 5. If a ≤ srt^n_b b, then a ∗n b < a ∗n−1 b.

Proof.

a ∗n b = a ∗n−1 (a ∗n (b − 1)) < a ∗n−1 (a ∗n b) ≤ a ∗n−1 b   (55)

Lemma 6. If a < lim_{n→∞} srt^n_b b and x ≥ b, then a ∗∞ x = a ∗∞ b.

Proof. Since a < lim_{n→∞} srt^n_b b, for sufficiently large n we have a ∗n b < b, which gives us

a ∗∞ (b + 1) = lim_{n→∞} a ∗n−1 (a ∗n b) ≤ lim_{n→∞} a ∗n−1 b = a ∗∞ b ≤ a ∗∞ (b + 1)   (56)

Hence a ∗∞ (b + 1) = a ∗∞ b.

a ∗∞ (b + 2) = lim_{n→∞} a ∗n−1 (a ∗n (b + 1)) = lim_{n→∞} a ∗n−1 (a ∗n−1 (a ∗n b)) ≤ lim_{n→∞} a ∗n−1 (a ∗n−1 b) ≤ lim_{n→∞} a ∗n−1 b = a ∗∞ b   (57)

We can easily see that a ∗∞ (b + n) will telescope similarly for any b. Thus, if x ≥ b, we can find an n such that b + n ≥ x ≥ b, so we have:

a ∗∞ (b + n) ≤ a ∗∞ b ≤ a ∗∞ x ≤ a ∗∞ (b + n)   (58)

Thus, for any x ≥ b, we have that a ∗∞ x = a ∗∞ b.

Corollary 1. If a < lim_{n→∞} srt^n_2 2 and x ≥ a, then a ∗∞ x = a ∗∞ a.

Proof. By Lemma 6 (with b = 2), a ∗∞ x = a ∗∞ 2 for all x ≥ 2. Note also that a ∗∞ 2 = a ∗∞ a; by monotonicity this also gives the identity for all a ≤ x ≤ 2, and so a ∗∞ x = a ∗∞ a for any x ≥ a.

3.4 Behavior for b < 0

For limits in the negative direction, we have the following result, which implies that for b < 0, the even hyperoperators a ∗n b have a vertical asymptote as a function of b, and the odd hyperoperators have a horizontal asymptote.

Proposition 4. For every odd n ≥ 3, lim_{x→−∞} a ∗n x ≥ 3 − n. For every even n > 3, lim_{x→−∞} slog^n_a x ≥ 3 − n − 1.

Proof. For n = 3 the property clearly holds for a ∗3 x = a^x (recall we assume a > 1): lim_{x→−∞} a^x = 0 = 3 − 3. Suppose lim_{x→−∞} a ∗n x = ∆ ≥ 3 − n. Then,

lim_{x→−∞} slog^{n+1}_a x = lim_{x→−∞} (slog^{n+1}_a (a ∗n x)) − 1 = (slog^{n+1}_a ∆) − 1   (59)

Notice that ∆ ≥ 3 − n = 3 − (n + 1) + 1. By Theorem 2 and monotonicity, we can see that slog^{n+1}_a ∆ exists and slog^{n+1}_a ∆ − 1 ≥ slog^{n+1}_a (3 − n) − 1 = 3 − (n + 1) − 1.

Now suppose lim_{x→−∞} slog^n_a x = ∆ ≥ 3 − n − 1. Then,

lim_{x→−∞} a ∗n+1 x = lim_{x→−∞} slog^n_a (a ∗n+1 (x + 1)) = slog^n_a (lim_{x→−∞} a ∗n+1 x)   (60)

Thus the limit is a fixed point of slog^n_a x (notice the limit cannot be −∞ because slog^n_a(−∞) ≠ −∞). Furthermore, since the limit is less than a ∗n+1 (−1) = 0, it must be negative. Since slog^n_a 0 < 0 and slog^n_a(−∞) > −∞, slog^n_a must have a negative fixed point, and one of those fixed points must be lim_{x→−∞} a ∗n+1 x. Furthermore, since slog^n_a(−∞) ≥ 3 − n − 1, the value of the fixed point (i.e. the value of lim_{x→−∞} a ∗n+1 x) must also be greater than or equal to 3 − (n + 1). Thus, since the proposition holds for n = 3, by induction it holds for all n.

Corollary 2. If n is even, then a ∗n x has a negative fixed point. If n is odd, then a ∗n x has no negative fixed points.

Proof. In the proof of Proposition 4, we showed that for even n, slog^n_a x has a negative fixed point, which equals lim_{x→−∞} a ∗n+1 x; since slog^n_a is the inverse of x ↦ a ∗n x, this point is also a negative fixed point of a ∗n x. For odd values of n, lim_{x→−∞} a ∗n x ≥ 3 − n, so if a ∗n x were to have a negative fixed point c, then c would have to be strictly greater than 3 − n. By Theorem 2, however, a ∗n k = k + 1 for all negative integers k ≥ 3 − n. By monotonicity, we can easily see that there can be no fixed point between k and k + 1. Hence, a ∗n x cannot have any negative fixed points.

3.5 Absolute bounds on the growth of ∗4

Lemma 7. Let z ∈ C and n ∈ N. Then

e^{|log z|} ∗4 n ≥ |z ∗4 n| ≥ 1/(e^{|log z|} ∗4 n)

Proof. We prove this inductively. For n = 0, note that the result is trivial. Let z = re^{iθ}. We assume the statement is true for n and show it for n + 1:

|z ∗4 (n + 1)| = |z^{z ∗4 n}| = |e^{(log z)(z ∗4 n)}| ≤ e^{|log z| |z ∗4 n|} ≤ e^{|log z| (e^{|log z|} ∗4 n)} = e^{|log z|} ∗4 (n + 1)

|z ∗4 (n + 1)| = |e^{(log z)(z ∗4 n)}| ≥ e^{−|(log z)(z ∗4 n)|} = e^{−|log z| |z ∗4 n|} ≥ e^{−|log z| (e^{|log z|} ∗4 n)} = 1/(e^{|log z|} ∗4 (n + 1))
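Lemma 7 is easy to spot-check numerically with principal-branch logarithms (a sketch; tet is an ad hoc helper and the particular z is an arbitrary choice):

tet <- function(z, n) { w <- 1; if (n > 0) for (i in 1:n) w <- exp(log(z) * w); w }   # z *_4 n
z <- 0.9 + 0.3i
wbase <- exp(Mod(log(z)))                              # e^{|log z|}
n <- 6
c(tet(wbase, n), Mod(tet(z, n)), 1 / tet(wbase, n))    # should be non-increasing, per Lemma 7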

4 The Known Solutions

4.1 Hyperoperators and Schroeder Functions

Let f(x) = a ∗n x have a fixed point c ∈ R. Then we can define a ∗n+1 b by using the Schroeder function of f at the point c.

a ∗n+1 b = h^{−1}(f'(c)^b h(1))   (61)

The problem with this construction is that it fails whenever |f'(c)| = 1, and it will not yield a real-valued function when a ∗n x has no real fixed point.

Theorem 5. For n = 3, the above construction is increasing in both arguments for (a, b) ∈ (1, e^{1/e}) × (0, ∞).

Proof. For a ∈ (1, e^{1/e}), let c(a) be the smallest positive fixed point of a^x (notice that for a > e^{1/e}, a^x > x for all x). Then, by the definition of Schroeder functions, we have

h(x) = lim_{n→∞} (exp_a^{◦n}(x) − c(a))/(log c(a))^n

and h^{−1} is given by

h^{−1}(x) = lim_{n→∞} log_a^{◦n}(c(a) + (log c(a))^n x)

Then, observe:

a ∗4 b = h^{−1}((log c(a))^b h(1)) = lim_{n→∞} log_a^{◦n}(c(a) + (log c(a))^{b+n} (exp_a^{◦n}(1) − c(a))/(log c(a))^n)   (62)
= lim_{n→∞} log_a^{◦n}(c(a) + (log c(a))^b (exp_a^{◦n}(1) − c(a)))   (63)
= lim_{n→∞} log_a^{◦n}(c(a) + (log c(a))^{b+n} (1 − c(a)))   (64)

(the last equality can be seen by expanding the Taylor series of a^x about x = c(a)). Notice that 1 < c(a) < e for a ∈ (1, e^{1/e}), and c(a) is an increasing function on this interval (in fact, c(a) is the inverse of the function x ↦ x^{1/x}). The construction is clearly increasing as a function of b. So far I have only been able to show that it is increasing in a for a < 1.3306..., which can be done by showing that f(c, b) = (∂²/∂c ∂b)((c^{1/c}) ∗4 b) > 0 for b > 0 and c < 1.3306.
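The limit in (63) can be evaluated directly for bases in (1, e^{1/e}). The following sketch (tet_schroeder is an ad hoc name; the truncation level n = 40 is an arbitrary choice) checks the construction against the integer values a ∗4 1 = a and a ∗4 2 = a^a:

tet_schroeder <- function(a, b, n = 40) {
  f <- function(x) a^x
  cfix <- 1; for (i in 1:2000) cfix <- f(cfix)   # fixed point c(a)
  lam <- log(cfix)                               # log c(a), the Schroeder multiplier
  x <- 1; for (i in 1:n) x <- f(x)               # exp_a iterated n times at 1
  y <- cfix + lam^b * (x - cfix)
  for (i in 1:n) y <- log(y, base = a)           # log_a iterated n times
  y
}
a <- sqrt(2)
c(tet_schroeder(a, 1), a)      # both ~ 1.414214
c(tet_schroeder(a, 2), a^a)    # both ~ 1.632527
tet_schroeder(a, 0.5)          # an interpolated value, a *_4 0.5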

4.2 Kneser's solution for first argument > e^{1/e}

In 1950, Helmut Kneser devised a solution for fractional iteration of e^z which can be generalized to find a ∗4 b for any a > e^{1/e}. We outline this method, based on the description by Paulsen and Cowgill [2].

Fix a > e^{1/e}. Note that a^z has a fixed point, call it c, in the strip ℑz ∈ (0, π/log a). This will be a repelling fixed point, and therefore it is an attracting fixed point of the principal branch of log_a(z) as a function on the upper half-plane. It is fairly straightforward to show that the basin of attraction of c is the entire upper half-plane. Thus, by Theorem 1, we have the Schroeder function of log_a(z), defined on the upper half-plane, as

h(z) = lim_{n→∞} (log_a^{◦n}(z) − c)/(log_a'(c))^n

Since this is a limit of univalent functions, h will be univalent on the upper half-plane. By extension, s(z) = log(h(z))/log(log_a'(c)) is a univalent function on the upper half-plane. Note that s(log_a(z)) + 1 = s(z), so s is almost the super-logarithm we want, but it does not map 1 to 0 (in fact, s(1) is undefined), and it does not map R into R. Let u = s([0, 1]), and let V be the region of the plane bounded below by U = ∪_{n=−∞}^{∞}(u + n). Note V + 1 = V. By the Riemann mapping theorem, there exists a biholomorphic function ρ that maps V to the upper half-plane. In fact, we can find ρ such that ρ(z + 1) = ρ(z) + 1. This is possible because, for some fixed z0 in the upper half-plane, we can find a unique ρ such that ρ(z0 + 1) = ρ(z0) + 1 and ρ'(z0) = ρ'(z0 + 1) (this is a slight modification of the uniqueness part of the Riemann mapping theorem). In this case, notice that ρ(z) and ρ*(z) = ρ(z − 1) + 1 have the same value and derivative at z0 + 1, so by the Riemann mapping theorem they are the same function. Then we define slog^4_a z = ρ(s(z)) − ρ(s(1)). Since s and ρ are univalent, this definition gives a univalent function. It also satisfies the correct recursion, because ρ(z + 1) = ρ(z) + 1, and it maps R into R because the boundary of V contains the image of R under s. Since this construction relies on a^x having no real fixed points, it does not work for a ≤ e^{1/e}. Thus the fractional-iterative approach and Kneser's approach are incompatible.

5 A Topological Perspective

In this section, we show topologically that analytic hyperoperators exist. The proof is non-constructive, however, so the following sections contain some attempts to construct a solution. The topological argument is based on a theorem of Belitskii and Lyubich [3], who proved the following for any bijective f.

Theorem 6. Let A be an analytic manifold homeomorphic to R, and let f : A → A be an injective function which is a diffeomorphism onto its image such that every domain of f is a wandering domain, i.e. for any compact K and sufficiently large n, f◦n(K) is disjoint from K. Let B and g be another manifold and function with the same properties. Then there exists an analytic function φ : A → B such that g◦m ◦ φ ◦ f = g◦n ◦ φ for some n, m ∈ N with n ≠ m.

Proof. For a function f, we define an equivalence relation ∼f by saying x ∼f y if there exist n, m ∈ N such that f◦n(x) = f◦m(y). We write A/f for the topological space of ∼f-equivalence classes on A.

We will work under the assumption that A/f ≅ B/g. Let πA and πB be the natural projections from A and B onto these sets. Let φ be an analytic isomorphism A/f → B/g. Then, given a ∈ A and b ∈ πB^{−1} ◦ φ ◦ πA(a), there exists ψ : A → B such that ψ(a) = b and φ ◦ πA = πB ◦ ψ. From this, observe:

πB ψ(f(x)) = φ πA(f(x)) = φ πA(x) = πB ψ(x)

which implies that g◦n ψ(f(x)) = g◦m ψ(x) for some n, m ∈ N. One can check that m ≠ n and that this identity holds with m, n constant.

Another way to prove this result, which would make it a more direct corollary of Belitskii and Lyubich's theorem, is as follows. If f and g are bijective, the result is identical to their construction. Suppose f is not bijective but only injective. Let C = A \ f(A). Then let Â = A ∪ ⋃_{n∈N} f◦−n(C), where the f◦−n(C) are taken formally as sets diffeomorphic to C, such that f extends to a diffeomorphism with f(f◦−n(C)) = f◦−(n−1)(C). Then f is a bijection on Â, so Belitskii and Lyubich's construction can be used for Â and then restricted to A as a subset of Â.

5.1 Application to Hyperoperation

Let A = {(x, y) ∈ R² | x > 1, y < x ∗4 n for some n ∈ N}, let B = (1, ∞) × (−2, ∞), and let f(x, y) = (x, x^y) and g(x, y) = (x, y + 1). These functions satisfy the assumptions of Theorem 6. Hence there exists a real analytic function φ : A → B such that

g◦m(φ(f(x, y))) = g◦n(φ(x, y))   (65)
(φ_x(x, x^y), φ_y(x, x^y) + m) = (φ_x(x, y), φ_y(x, y) + n)   (66)

Then we can define the super-logarithm by

slog^4_x y = (1/(n − m)) (φ_y(x, y) − φ_y(x, 0))   (67)

which, it is straightforward to verify, satisfies the necessary recursion. Higher-order hyperoperations can be defined analogously, each one based on the previous operation. Hence we know that universal solutions for the hyperoperators exist, though this does not tell us how to construct them.

6 The matrix method

One method that other authors have attempted (e.g. [1]) is to approximate the super-logarithm on [0, 1] by a sequence of polynomials p_n, each of degree n + 1, such that p_n(0) = −1 and the resulting function would be n-times differentiable at the endpoints if we extended it using the recursion slog^k_a(slog^{k−1}_a(x)) + 1 = slog^k_a(x). Taking the limit of the polynomials p_n as n goes to infinity then yields a possible solution. If the individual coefficients of p_n converge in the limit, this gives a Taylor series for the super-logarithm. More formally, for fixed k ≥ 4 and a ∈ R, we define p_n to be the unique degree-(n + 1) polynomial such that p_n(0) = −1 and the function S(x) defined below is n times differentiable:

S(x) = p_n(x) if x ∈ [0, 1],
S(x) = S(slog^{k−1}_a(x)) + 1 if x > 1,
S(x) = S(a ∗k−1 x) − 1 if x < 0.   (68)

For the rest of this section, we will only deal with the case of k = 4, and for clarity the superscript will be omitted. Andrew Robbins has used this method in [1], and provided numerical evidence that it converges, but he has not proven that it does. In his paper, he was solving for the coefficients of pn(x) at 0. Here, I will look at the coefficients at x = 1, which shows some interesting patterns that do not appear for the series expansion at x = 0.

Lemma 8. For n ≥ 1,

slog_a^{(n)}(log_a x) = (ln a)^n Σ_{k=1}^n s(n, k) x^k slog_a^{(k)}(x)

where the s(n, k) are the Stirling numbers of the second kind.

Proof. We prove this result inductively. Observe for n = 1:

slog_a'(x) = (d/dx)(slog_a(log_a x) + 1)   (69)
= slog_a'(log_a x) · 1/(x log a)   (70)
(log a) x slog_a'(x) = slog_a'(log_a x)   (71)

so the case n = 1 holds. For higher n, assume the result holds for n. Then:

slog_a^{(n)}(log_a x) = (ln a)^n Σ_{k=1}^n s(n, k) x^k slog_a^{(k)}(x)   (72)
(d/dx) slog_a^{(n)}(log_a x) = (ln a)^n Σ_{k=1}^n s(n, k) (d/dx)[x^k slog_a^{(k)}(x)]   (73)
(1/(x log a)) slog_a^{(n+1)}(log_a x) = (ln a)^n Σ_{k=1}^n [s(n, k) k x^{k−1} slog_a^{(k)}(x) + s(n, k) x^k slog_a^{(k+1)}(x)]   (74)
slog_a^{(n+1)}(log_a x) = (ln a)^{n+1} Σ_{k=1}^n [s(n, k) k x^k slog_a^{(k)}(x) + s(n, k) x^{k+1} slog_a^{(k+1)}(x)]   (75)
slog_a^{(n+1)}(log_a x) = (ln a)^{n+1} (s(n, n) x^{n+1} slog_a^{(n+1)}(x) + Σ_{k=1}^n (s(n, k) k + s(n, k − 1)) x^k slog_a^{(k)}(x))   (76)

Since s(n, n) = 1 for all n, the n + 1 term in that series is correct. Note that Stirling numbers of the second kind satisfy the recursion s(n + 1, k) = ks(n, k) + s(n, k − 1), so by induction the result holds for all n.

Evaluating at x = 1, this gives us a restriction on the coefficients of p_n. We let p_n(x) = Σ_{k=1}^{n+1} q_k(n) (x − 1)^k/k!. (Note that p_n(1) = 0 by the fact that we require S to be continuous.) The requirement that p_n(0) = −1 gives us the restriction

Σ_{k=1}^{n+1} ((−1)^k/k!) q_k(n) = −1

Then, using the lemma, we have the restriction that

Σ_{k=0}^{n+1−m} ((−1)^k/k!) q_{k+m}(n) = (ln a)^m Σ_{k=1}^{m} s(m, k) q_k(n)

for 1 ≤ m ≤ n. Thus we have a system of n + 1 equations in n + 1 unknowns to solve for p_n(x). In matrix form, writing q(n) = (q_1(n), ..., q_{n+1}(n))^T,

A_n q(n) = B_n q(n) + (−1, 0, 0, ..., 0)^T   (77)

where

A_n = ( −1/1!    1/2!    −1/3!    ...    (−1)^{n+1}/(n+1)!
          1/0!   −1/1!     1/2!   ...    (−1)^n/n!
          0       1/0!    −1/1!   ...    (−1)^{n−1}/(n−1)!
          ...                     ...
          0       ...      0      1/0!   −1/1!             )

B_n = ( 0                  0                  ...   0
        (log a) s(1,1)     0                  ...   0
        (log a)^2 s(2,1)   (log a)^2 s(2,2)   0 ... 0
        ...                                   ...
        (log a)^n s(n,1)   (log a)^n s(n,2)   ...   (log a)^n s(n,n)   0 )

These matrices are relatively easy to understand on their own, but combining them makes it difficult to compute the inverse of B_n − A_n, or to prove that lim_{n→∞} q_k(n) converges for any k.

6.1 Numerical approximations

Let A_n be the matrix on the left-hand side of (77) and let B_n be the matrix on the right-hand side. Then the coefficient vector q(n) is given by the first column of (B_n − A_n)^{−1}. Numerically, we investigate whether this seems to converge as n goes to infinity. For a = e, the sequence seems to converge, but my code in R fails for n ≥ 14, so it is difficult to tell whether it actually converges. To maximum precision we have

q(13) = (0.9159780, −0.4176662, −0.3275293, 1.729634, −2.437250, −8.662994, 66.04953, −85.3417, −1546.532, 9508.837, 49336.12, −594032.0, −5178330, −12182530)^T   (78)

For a = √2, the same approximation is computationally possible for n up to at least 17, but the computation is slow. As in the case n = 13 above, the coefficients appear to converge. To highlight the apparent convergence, let us look at the sequence q_1(n) for a = e and a = √2.

n     a = e        a = √2
1     1.0000000    1.485251
2     0.9230769    1.651225
3     0.9230769    1.695852
4     0.9175351    1.702848
5     0.9175246    1.701120
6     0.9164576    1.698866
7     0.9164430    1.697627
8     0.9161470    1.697211
9     0.9161325    1.697205
10    0.9160351    1.697332
11    0.9160223    1.697458
12    0.9159885    1.697542
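For reference, the computation above can be reproduced directly from the Appendix code (a usage sketch; it assumes the functions An, Bn, Cn and the assignment a <- exp(1) from the Appendix have already been run):

q13 <- Cn(13)[, 1]   # first column of (B_13 - A_13)^(-1), i.e. the vector q(13) in (78)
q13[1]               # ~ 0.9159780, continuing the a = e column of the table above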

7 The Indefinite Summation Method

First, we define the indefinite summation:

Definition 6. Given a function f : A → C, where A is some set closed under the map x ↦ x + 1, we say g : A → C is an indefinite sum of f if

g(x + 1) = f(x) + g(x)

We denote the indefinite sum by g(x) = Σ_x f(x). Note that g is unique up to adding 1-periodic functions. If f is fairly regular, we can find g explicitly:

Lemma 9. If lim_{x→∞} f(x) = 0, then

Σ_x f(x) = Σ_{n=0}^∞ (f(n) − f(n + x))

as long as this sum converges.

Proof. First, we show that the sum converges for x ∈ [0, 1] if the derivative of f is negative for sufficiently large x:

Σ_{n=0}^∞ (f(n) − f(n + x)) = Σ_{n=0}^∞ ∫_n^{n+x} −f'(t) dt ≤ Σ_{n=0}^∞ ∫_n^{n+1} −f'(t) dt = f(0)   (79)

If x > 1, note:

Σ_{n=0}^∞ (f(n) − f(n + x)) = Σ_{n=0}^∞ ∫_n^{n+x} −f'(t) dt = Σ_{n=0}^∞ (∫_n^{n+x−1} −f'(t) dt + ∫_{n+x−1}^{n+x} −f'(t) dt)   (80)
= Σ_{n=0}^∞ ∫_n^{n+x−1} −f'(t) dt + Σ_{n=0}^∞ ∫_{n+x−1}^{n+x} −f'(t) dt = g(x − 1) + ∫_{x−1}^∞ −f'(t) dt   (81)
= g(x − 1) + f(x − 1)   (82)

which shows both that the sum converges for x > 1 and that the desired recursion holds.
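Lemma 9 is straightforward to test numerically. A sketch (indef_sum is an ad hoc name; the test function f and the truncation N are arbitrary choices, and the trigamma comparison applies only to this particular f):

indef_sum <- function(f, x, N = 1e5) sum(f(0:N) - f((0:N) + x))   # truncated form of Lemma 9
f <- function(x) 1 / (x + 1)^2
g1 <- indef_sum(f, 1.5)
g2 <- indef_sum(f, 2.5)
c(g2 - g1, f(1.5))               # should agree: g(x + 1) - g(x) = f(x)
c(g1, pi^2/6 - trigamma(2.5))    # closed form for this particular f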

7.1 Application: ∗4

Suppose we have a function μ0(t) = (e^{1/e}) ∗4 t. We will use this to find functions μn(t) such that

(e^{x+1/e}) ∗4 t = Σ_{n=0}^∞ (μn(t)/n!) x^n

Expanding the Taylor series around the problematic point e^{1/e} guarantees that, if this series has non-zero radius of convergence, the function is analytic at this point. We can expand the series formally as follows:

(e^{x+1/e}) ∗4 (t + 1) = e^{(x+1/e)((e^{x+1/e}) ∗4 t)} = e^{(x+1/e) Σ_{n=0}^∞ (μn(t)/n!) x^n}   (83)

= e^{(1/e)μ0(t) + Σ_{n=1}^∞ (μ_{n−1}(t) + (1/e)μn(t)) x^n/n!} = μ0(t + 1) exp(Σ_{n=1}^∞ (μ_{n−1}(t) + (1/e)μn(t)) x^n/n!)   (84)
= μ0(t + 1) (1 + Σ_{n=1}^∞ B_n(μ0(t) + (1/e)μ1(t), ..., μ_{n−1}(t) + (1/e)μn(t)) x^n/n!)   (85)

This gives us the identity:

μn(t + 1) = μ0(t + 1) B_n(μ0(t) + (1/e)μ1(t), ..., μ_{n−1}(t) + (1/e)μn(t))   (86)

This in fact defines μn by an indefinite summation in terms of the previous μ's. The Bell polynomials can be written as

B_n(x_1, ..., x_n) = x_n + B*_n(x_1, ..., x_{n−1})

Furthermore, notice that μ0'(t) = (d/dt) e^{(1/e)μ0(t−1)} = (1/e) μ0'(t − 1) e^{(1/e)μ0(t−1)} = (1/e) μ0'(t − 1) μ0(t). Thus we have

μn(t + 1)/μ0'(t + 1) = (μ0(t + 1)/μ0'(t + 1)) (B*_n(μ0(t) + (1/e)μ1(t), ..., μ_{n−2}(t) + (1/e)μ_{n−1}(t)) + μ_{n−1}(t)) + (μ0(t + 1)/(e μ0'(t + 1))) μn(t)   (87)
= (μ0(t + 1)/μ0'(t + 1)) (B*_n(μ0(t) + (1/e)μ1(t), ..., μ_{n−2}(t) + (1/e)μ_{n−1}(t)) + μ_{n−1}(t)) + μn(t)/μ0'(t)   (88)

hence

μn(t)/μ0'(t) = Σ_t (μ0(t + 1)/μ0'(t + 1)) (B*_n(μ0(t) + (1/e)μ1(t), ..., μ_{n−2}(t) + (1/e)μ_{n−1}(t)) + μ_{n−1}(t))

This method suffers from the obvious problem that it is difficult to determine the radius of convergence of the Taylor series described; in fact, it is not trivial to show that the radius of convergence is non-zero. It also has the disadvantage that indefinite sums are difficult to evaluate numerically.

8 Interpolating the super-roots

The sequence of functions srt^4_n x seems to be easier to work with than finding ∗4 directly, because the sequence is much better behaved.
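For integer n the super-roots are easy to compute by inverting the tower in its base, which makes the sequence convenient to examine numerically (a sketch; tet_int and srt4 are ad hoc names, and the overflow cap at 1e300 is only there to keep uniroot happy):

tet_int <- function(a, n) {                      # a *_4 n for integer n >= 1
  y <- 1
  for (i in 1:n) { y <- a^y; if (!is.finite(y) || y > 1e300) return(1e300) }
  y
}
srt4 <- function(x, n)                           # solve a *_4 n = x for the base a
  uniroot(function(a) tet_int(a, n) - x, lower = 1 + 1e-9, upper = x + 2)$root
srt4(16, 2)    # a with a^a = 16
srt4(16, 3)    # a with a^(a^a) = 16
srt4(16, 6)    # the values decrease toward e^(1/e) as n grows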

8.1 Ramanujan's Master Theorem

One famous way to interpolate functions is a method devised by Ramanujan, who showed that if we define f as

f(x) = Σ_{k=0}^∞ (φ(k)/k!) (−x)^k

then, if φ is sufficiently regular,

∫_0^∞ x^{s−1} f(x) dx = Γ(s) φ(−s)

James Nixon (cite) has used this method to construct hyperoperations a ∗n b for a < e^{1/e}. His solution is in fact equivalent to the fractional iteration definition used above. In an attempt to use this method for general bases, I have attempted to apply the theorem to the sequence of functions gn(θ) = 1/log(srt^4_{n+1} θ); in this case

f(x) = Σ_{n=0}^∞ (gn(θ)/n!) (−x)^n

then we should hope that the integral ∫_0^∞ x^{s−1} f(x) dx converges for some values of s, in which case this gives us an interpolation that will hopefully work. However, we have yet to prove whether this integral converges. Unfortunately, the Taylor series of f does not give much insight into its asymptotic behavior, and numerically the coefficients are difficult to compute for large n. The reader might wonder why we chose the gn as the coefficients instead of using srt^4_n. This is partially because of the bound on ∗4 established by Lemma 7:

|x| = |(srt^4_n x) ∗4 n| ≤ exp(|log srt^4_n x|) ∗4 n   (89)
log srt^4_n |x| ≤ |log srt^4_n x|   (90)
1/log srt^4_n |x| ≥ 1/|log srt^4_n x|   (91)
gn(|x|) ≥ |gn(x)|   (92)

which means that gn is relatively tame in the complex plane. Also, the recursion on gn is much easier to work with than the recursion on srt^4_n, because gn(θ) satisfies gn+1(θ) = gn((log θ) gn+1(θ)).
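As a sanity check of the master theorem itself (not of the hyperoperation application), take φ(k) = 1, so that f(x) = e^{−x} and the integral should return Γ(s) φ(−s) = Γ(s); a quick numerical check in R:

s <- 1.5
lhs <- integrate(function(x) x^(s - 1) * exp(-x), 0, Inf)$value
rhs <- gamma(s)                   # Gamma(s) * phi(-s), with phi identically 1
c(lhs, rhs)                       # both ~ 0.8862269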

8.2 The W Transformation

For any function f : R+ → R+ such that f'(x) ∈ [0, 1] for all x and f(0) = 0, we define the W-transform of f by

z = W[f](z) e^{W[f](z) − f(W[f](z))}

If f has Taylor series Σ_{n=1}^∞ (a_n/n!) z^n, then we can compute the Taylor series of W[f] explicitly; suppose W[f](z) = Σ_{n=1}^∞ (w_n/n!) z^n. (The series starts at n = 1 because W[f](z) is guaranteed to be 0 at 0.) Then we have:

z = W[f](z) e^{W[f](z) − f(W[f](z))}   (93)
= W[f](z) exp(Σ_{n=1}^∞ (w_n − Σ_{k=1}^n w_k B_{n,k}(a_1, ..., a_{n−k+1})) z^n/n!)   (94)
= W[f](z) (1 + Σ_{n=1}^∞ B_n(w_1 − w_1 a_1, ..., w_n − Σ_{k=1}^n w_k B_{n,k}(a_1, ..., a_{n−k+1})) z^n/n!)   (95)

This implies w_1 = 1, and gives us equations for each of the higher-order w_n as functions of {w_i, a_i} for i < n. Hence there is a sequence of polynomials w_n(x_1, ..., x_{n−1}) (with w_1(·) = 1) such that

W[f](x) = Σ_{n=1}^∞ w_n(f'(0), ..., f^{(n−1)}(0)) x^n/n!   (96)

This is relevant to the problem because, if we define φn(t) = W^{n−1}[0](t), then

srt^4_n(θ) = θ^{exp(−φn(log θ))}   (97)

We prove this inductively. For n = 1, observe that φ1(t) = 0 and srt^4_1(θ) = θ = exp(log(θ) exp(0)) = θ^{exp(−φ1(log θ))}, so the base case holds. (Note that φ2 = W[0] is the Lambert W function.) Using the properties of super-roots, we can transform (97) into

srt^4_n(θ) = srt^4_{n+1}((srt^4_n(θ)) ∗4 (n + 1)) = srt^4_{n+1}((srt^4_n(θ))^θ)   (98)
θ^{exp(−φn(log θ))} = srt^4_{n+1}(θ^{θ exp(−φn(log θ))})   (99)
= srt^4_{n+1}(e^{(log θ) exp(log θ − φn(log θ))})   (100)

Now suppose log θ = φn+1(log u) for some u. Then we have

θ^{exp(−φn(log θ))} = srt^4_{n+1}(e^{(log θ) exp(log θ − φn(log θ))})   (101)
θ^{exp(−φn(log θ))} = srt^4_{n+1}(e^{φn+1(log u) exp(φn+1(log u) − φn(φn+1(log u)))})   (102)
e^{φn+1(log u) exp(−φn(φn+1(log u)))} = srt^4_{n+1}(e^{log u})   (103)
e^{(log u) exp(−φn+1(log u))} = srt^4_{n+1}(u)   (104)
u^{exp(−φn+1(log u))} = srt^4_{n+1}(u)   (105)

Hence the identity holds for n + 1, and the inductive step is complete. If we can fractionally iterate the W-transformation, then we would have a definition for srt^4_n θ that applies to noninteger n. Observing that, for t < 1, φn(t) converges to t at a rate of t^n, we might define the fractional iteration of W analogously to the definition in (31) by

φ_s(t) = lim_{n→∞} W^{◦−n}[t ↦ t − t^s (t − φn(t))](t)   (106)

Proving that this converges, or that the limit is analytic in either s or t, is difficult, however, and it is also daunting to test numerically.
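The W-transform and the identity (97) can at least be checked numerically for small integer n by solving the defining equation with a root finder (a sketch; Wtrans, phi1, phi2, phi3 are ad hoc names, and the bracket [0, z + 1] relies on f(w) ≤ w):

Wtrans <- function(f) function(z)               # numerical W[f]: solve z = w * exp(w - f(w))
  uniroot(function(w) w * exp(w - f(w)) - z, lower = 0, upper = z + 1)$root
phi1 <- function(t) 0                           # phi_1 = W^0[0] = 0
phi2 <- Wtrans(phi1)                            # phi_2 = W[0], the Lambert W function
phi3 <- Wtrans(phi2)
theta <- 100
a2 <- theta^exp(-phi2(log(theta)))              # candidate 2nd super-root of theta, per (97)
a3 <- theta^exp(-phi3(log(theta)))              # candidate 3rd super-root of theta, per (97)
c(a2^a2, a3^(a3^a3))                            # both should be ~ 100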

References

[1] Robbins, Andrew. "Solving for the Analytic Piecewise Extension of Tetration and the Super-logarithm." Published online, accessed April 2019.
[2] Paulsen, William and Cowgill, Samuel. "Solving F(z + 1) = b^{F(z)} in the complex plane." 2010. Mathematics of Computation 78(267). DOI: 10.1090/S0025-5718-09-02188-7.
[3] Belitskii, G. and Lyubich, Yu. "The real-analytic solutions of the Abel functional equation." 1999. Studia Mathematica 134(2).

9 Appendix: Code

For the matrix method, I computed the coefficients using the following code in R:

a <- exp(1)

# Stirling numbers of the second kind, via s(n+1, k) = k*s(n, k) + s(n, k-1)
stirling <- function(n, k) {
  if (n^2 + k^2 == 0) {
    1
  } else if (n * k == 0) {
    0
  } else {
    stirling(n - 1, k - 1) + k * stirling(n - 1, k)
  }
}

# the vector (log a)^n * (s(n, 1), ..., s(n, m)); entries with k > n are zero
rn <- function(n, m = n) {
  cc <- numeric(0)
  for (i in 1:m) {
    cc <- c(cc, stirling(n, i))
  }
  cc * log(a)^n
}

# the matrix B_n on the right-hand side of (77)
Bn <- function(n) {
  rr <- numeric(0)
  for (i in 0:n) {
    rr <- c(rr, rn(i, n + 1))
  }
  t(matrix(rr, nrow = n + 1, ncol = n + 1))
}

# shift a vector one position to the right, padding with a leading zero
shift <- function(xs) {
  c(0, xs[-length(xs)])
}

# the matrix A_n on the left-hand side of (77)
An <- function(n) {
  r0 <- (-1)^(0:(n + 1)) / factorial(0:(n + 1))
  rows <- r0[-1]
  for (i in 1:n) {
    r0 <- shift(r0)
    rows <- c(rows, r0[-1])
  }
  t(matrix(rows, ncol = n + 1, nrow = n + 1))
}

# (B_n - A_n)^(-1); its first column is the coefficient vector q(n)
Cn <- function(n) {
  solve(Bn(n) - An(n))
}
