
Tetration Mathematical Miniatures

Gottfried Helms - Univ. Kassel - 01-2008 - Miniatures

Tetration by matrix-operations

Preview! Missing references and indexing of equations.

Abstract: An introduction into my version of the extension of discrete to continuous iteration is given. Tetration is expressed in terms of powerseries, and as action on formal powerseries (exponentialseries). To be able to repeat the basic operation of exponentiation in this framework, the use of matrices is introduced; the matrices which transform one powerseries in x into another in f(x) are called "operators" here. This text-version is only a preview; only the introduction and the part about "naive tetration" (using finitely truncated matrices with basic matrix-operations) are present; the part about the full "analytical approach" is not yet ready.

Version: 0804-04 (preceding version: 0803-30)

Contents

1. "Iterated exponentiation" (IE or T(x)) and decremented IE (DIE or U(x))
   1.1. Introduction
   1.2. The exponential-series and its iterate
   1.3. Fractional iterates
   1.4. The matrix-approach
   1.5. Matrix-notations and definitions
2. The naive matrix-approach
   2.1. Example computation of x {b} h with integer height h >= 0
   2.2. Negative height
   2.3. Acceleration of convergence - Euler-summation
   2.4. Fractional powers: matrix-logarithm or diagonalization
   2.5. Sums and series of powertowers (here: alternating series)
   2.6. Naive U-tetration (decremented iterated exponentiation)
   2.7. Improvement of naive T-tetration by fixpoint-shift
   2.8. Conclusion
3. An analytical matrix-approach
   3.1. Intro
4. Citations/snippets
5. Appendix
   5.1. Relation between Bell-matrix B[U] and Ut-matrix
6. References and online-resources

Project-Index: http://go.helms-net.de/math/binomial/index
Intro/notation: http://go.helms-net.de/math/binomial/intro.pdf

1. "Iterated exponentiation" (IE or T(x)) and decremented IE (DIE or U(x))

1.1. Introduction

In the following I'll discuss two different types of tetration,

   b^x, b^b^x, b^b^b^x, ...     -  T-tetration, "iterated exponentiation"(a) (IE)
   t^x - 1, t^(t^x - 1) - 1, ...  -  U-tetration, "decremented iterated exponentiation" (DIE)

I shall call one such term b^b^x a "powertower", which has the "base" b, the "height" (or "iteration-parameter") h (the number of b's) and the "top-exponent" (or "initial value") x. Note that for exponentiation I also use the caret-symbol "^", as in b^b^x with evaluation from the right, because of difficult typesetting if the height is more than one or two. Also, from common use in the tetration-discussion, where no special parameter x is assumed, I adopt the notation b^^h for b^b^...^b with the b occurring h times (this is using x=1 in my notation) for shortness.

T-tetration: let's define the first version in functional notation
   Tb(x) = b^x
   Tb°0(x) = x       Tb°h(x) = Tb°h-1(Tb(x))     (b)
The height-parameter h may be assumed as integer in the beginning and complex in the following chapters. There is no restriction on x; however, if x=1, then we deal with the classical tetration, and Tb°h(1) = b^^h. In the more current notation of members of the tetration-forum, Tb°h(1) = b[4]h, while I announce the ascii-version for my subject of discussion
   Tb°h(x) = x {4,b} h ,
which is immediately concatenable:
   x {4,b} h1 {4,b} h2 = x {4,b} (h1+h2) ,
and the more convenient notation, since the operator-number "4" is assumed here everywhere:
   Tb°h(x) = x {4,b} h = x {b} h

U-tetration: define the second version as
   Ut(x) = t^x - 1
   Ut°0(x) = x       Ut°h(x) = Ut°h-1(Ut(x))
proposing the ascii-version for this text:
   Ut°h(x) = x {4_,t} h = x {_t} h
The same restrictions are valid for t, h, x as before. If the subscript t is omitted, a general remark is meant or the current base is assumed. I use the letter t instead of b for the base-parameter, since U- and T-tetration are closely related via a certain pair of bases, so it may be useful to distinguish the letters from the beginning.
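Both recursions are immediate to check for integer heights; here is a minimal sketch in plain Python (the worked examples later in this text use Pari/GP; the helper names T and U are my own):

```python
import math

def T(b, x, h):
    """T-tetration Tb°h(x), i.e. x {b} h: apply x -> b^x a total of h times."""
    for _ in range(h):
        x = b ** x
    return x

def U(t, x, h):
    """U-tetration Ut°h(x), i.e. x {_t} h, with Ut(x) = t^x - 1."""
    for _ in range(h):
        x = t ** x - 1
    return x

b = math.sqrt(2)
y = T(b, 1, 2)        # 1 {sqrt(2)} 2 = b^b
z = U(math.e, 0, 3)   # 0 is a fixpoint of U_e, since e^0 - 1 = 0
```

With b = sqrt(2), T(b, 2, h) stays at the fixpoint 2 for every h, which is exactly the observation discussed further below.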

(a) see [TF08 1], "Notations and Opinions", Andrew Robbins in the Tetration-forum; [Wiki:TE] "Tetration" in Wikipedia; or "Tetration in Context" [AR07 09], same author.
(b) Note the recursive definition of the iteration. Sometimes I see this (in [Wiki:TE] for instance) written differently with respect to the height/iterator-parameter: Tb°h(x) = Tb(Tb°h-1(x)), which I do not propose. The problem is that for a limit in the infinite, where h-1 is still infinite, we would never have a starting operation except the evaluation of the - again infinite - partial powertower in advance.

Base-parameter and domain

The base-parameter b deserves a specific consideration. In the analytical discussion we'll make frequent use of the following relation of symbols:
   t = exp(u)      b = t^(1/t)
So in fact, concerning the domain for the base-parameter, I tend to use u as the primary variable, which may assume any complex value. Depending on this, t = exp(u), and the range for t is thus C \ {0}. Since t cannot be zero, this definition also prevents the discussion of b=0 as b = 0^(1/0), and b can assume the value 0 only as a limit, if at all. So actually this formulates a map
   u -> exp(u/exp(u))
which is C -> C \ {0}.

With the notions of "iterated exponentials" and "decremented iterated exponentials" there are two tiny, but significant, differences from the common concept of tetration, or powertower.

First, the additional parameter x is not known in the usual idea of tetration and powertower (it may be assumed to be x=1 to model tetration). In T- and U-tetration it is seen as a starting value for the repeated operation. The terms "iterated exponentiation" for the T-operation and "decremented iterated exponentiation" for the U-operation were proposed by Andrew Robbins [TF 1]; I'll follow that in principle, but for simplicity I misuse the more common term tetration as a generalism.

Second, the order of operation is here not in contrast to the indexing of the parameters. For instance, D.F. Barrow in [BR36] indexes the general iterated exponentiation as

   a0^a1^...^an
although the evaluation is right-associative, thus beginning with an.

D.F. Barrow, 1936

This does not make a difference to the usual concept of powertowers/tetration for finite height. But it is significant when we want to deal formally with infinite height. If the evaluation of powertowers for the infinite case is discussed in the common way, then it is thought in terms of partial evaluation
   b, b^b, b^b^b, ..., b^b^b^... = b^^1, b^^2, b^^3, ..., b^^inf
and then obviously in the infinite case there cannot be assumed a parameter "in infinity" different from b. This way already Euler studied this operation and found a range of convergence for b, namely e^-e <= b <= e^(1/e). Let's call this range the "Euler-range". However, the observation that for b=sqrt(2) the sequential partial evaluations
   b^2 = 2, b^b^2 = 2, ...     or: 2 {b} h = 2 for all h>0
   b^4 = 4, b^b^4 = 4, ...     or: 4 {b} h = 4 for all h>0

are constant with two different "top-parameters" t leaves us perplexed. Euler discussed this also (although it is inconsistent with the said partial evaluation) and concluded
   t1 = 2:  b^2 = 2  =>  b = 2^(1/2)
   t2 = 4:  b^4 = 4  =>  b = 4^(1/4) = 2^(1/2)
   2 {4,b} inf = 2 and 4 {4,b} inf = 4  ==>  b = sqrt(2)
but finally the powertower/tetration-concept did not adopt the required redefinition of indexing. With the definitions here, the above considerations read without contradictions in the infinite case as
   x = x {b} 0,        b^x = x {b} 1,
   b^b^x = x {b} 2,    b^...^b^b^x = x {b} h,    // h times b repeated
   ...^b^b^x = x {b} inf,                        // infinitely many b's assumed
where the dots occur at the left side of the expression. Then setting x=0, x=1, x=b or generally x = 1 {b} h allows to write the infinite case for two different t, where b=sqrt(2):

   t1 {b} inf = t1  =>  b = t1^(1/t1)
   t2 {b} inf = t2  =>  b = t2^(1/t2)
allowing statements about the infinitely iterated case, beginning with an arbitrary starting-parameter t, which may assume the value of one of multiple possible fixpoints. So, if b^t = t, then b^b^t = b^t = t, ..., and this doesn't change however many b's we append to the left. Then t is called a fixpoint for b. The concept of fixpoints is essential for non-integer tetration, but we shall discuss this later. Also it should be mentioned here that there are not only two, but infinitely many fixpoints for each base b, if the complex domain is considered for b and t.


1.2. The exponential-series and its iterate

The value of b^x, when b and x are general complex values, is described by means of the exponential- and the logarithm-functions, which in turn are defined by their power-series expansions. So we define b^x as b^x = exp(log(b)*x) and, for actual computation, approximate this by evaluation of the exponential-series, using c = log(b):
   y = b^x = 1 + cx + (cx)^2/2! + (cx)^3/3! + ...
up to a finite truncation, where the approximation has a suitable accuracy. If we want to iterate this, we would replace x by y in this series:
   z = b^y = 1 + cy + (cy)^2/2! + (cy)^3/3! + ...
           = 1 + c (1 + cx + (cx)^2/2! + (cx)^3/3! + ...)
               + c^2 (1 + cx + (cx)^2/2! + (cx)^3/3! + ...)^2 / 2! + ...
and for an analytical description as a new powerseries in x we would (again assumed as an infinite series) collect terms of like powers of x, and construct the new powerseries, which then provides the value for b^b^x. (But note: since this is a new powerseries in x with different coefficients than before, we have a new function!) This can - however complicated for the first few coefficients - principally be done by induction, and the powerseries for the iterated exponential-series for an arbitrary natural value of the height-parameter h can be given analytically.
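The collection of like powers can be carried out mechanically on truncated series. A small Python sketch (the truncation length n, the helper compose and the evaluation point are my own choices, not part of the text):

```python
import math

n = 16                      # truncation: keep coefficients of x^0 .. x^(n-1)
b = math.sqrt(2)
c = math.log(b)

# coefficients of b^x = sum_k (c x)^k / k!
exp_series = [c**k / math.factorial(k) for k in range(n)]

def compose(outer, inner, n):
    """Coefficients of outer(inner(x)), both given as truncated powerseries.
    Builds the powers inner(x)^m term by term and collects like powers of x."""
    result = [0.0] * n
    power = [0.0] * n
    power[0] = 1.0                       # inner(x)^0 = 1
    for m, a in enumerate(outer):
        for k in range(n):
            result[k] += a * power[k]    # add a_m * (inner(x)^m)
        # multiply 'power' by 'inner', truncating at degree n-1;
        # the low-order coefficients of the product are exact
        power = [sum(power[i] * inner[k - i] for i in range(k + 1)) for k in range(n)]
    return result

iterated = compose(exp_series, exp_series, n)   # powerseries of b^(b^x)

x = 0.1
approx = sum(a * x**k for k, a in enumerate(iterated))
exact = b ** (b ** x)
```

The constant coefficient of the composed series is b^(b^0) = b, and the evaluation at small x agrees with b^(b^x) to high accuracy, illustrating that the composed truncation really codes the new function.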

1.3. Fractional iterates

But this is possible directly only for integer iterates, for which the computer-implementation of the exponential and the logarithm realizes the approximation and truncation of the series for us. For fractional or even complex iterates (assume a fractional or complex height-parameter h for this), such functions are not implemented in the programming-languages. Coefficients of powerseries of such functions may be found by interpolation of the coefficients which occur if the coefficients of the powerseries for consecutive integer heights are listed. Say, if we have for x {b} 1 = b^x, x {b} 2, x {b} 3 the powerseries
   x {b} 1 = b^x     = a_1,0 + a_1,1 x + a_1,2 x^2 + ...
   x {b} 2 = b^b^x   = a_2,0 + a_2,1 x + a_2,2 x^2 + ...
   x {b} 3 = b^b^b^x = a_3,0 + a_3,1 x + a_3,2 x^2 + ...
and we find a reasonable interpolation scheme for these coefficients, depending on h, we may find for height h=0.5
   x {b} 0.5 = a_0.5,0 + a_0.5,1 x + a_0.5,2 x^2 + ...
and generally
   x {b} h = a_h,0 + a_h,1 x + a_h,2 x^2 + ...
where the a_h,k are expressible as functions of h: a_h,k = A(h,k). The most direct approach is surely a polynomial interpolation, which is indeed easily possible for U-tetration with u=1, t=exp(1), and for T-tetration with b=e^(1/e); for other bases we get series where h occurs in the exponent, and there may be other approaches for such interpolations (which I don't discuss here).
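For illustration only, the interpolation step itself can be sketched with a plain Lagrange polynomial over the heights h = 1, 2, 3 (Python; the coefficient values in the little table are hypothetical placeholders, not computed from an actual base b, and, as said above, for general bases this simple polynomial scheme is not the appropriate one):

```python
def lagrange(nodes, values, h):
    """Evaluate the interpolation polynomial through (nodes[i], values[i]) at h."""
    total = 0.0
    for i, (hi, vi) in enumerate(zip(nodes, values)):
        w = 1.0
        for j, hj in enumerate(nodes):
            if j != i:
                w *= (h - hj) / (hi - hj)
        total += vi * w
    return total

# a toy coefficient-column a_{h,k} for one fixed k at heights h = 1, 2, 3
# (dummy values; in practice these come from the powerseries of x {b} h)
nodes = [1, 2, 3]
a_k = [0.25, 0.40, 0.52]
a_half = lagrange(nodes, a_k, 0.5)   # interpolated a_{0.5,k}
```

By construction the interpolating polynomial reproduces the tabulated coefficients at the integer heights exactly; the value at h=0.5 is then one candidate for the fractional coefficient.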

1.4. The matrix-approach

This interpolation-approach leads then naturally to the concept of matrix-operations, since for iteration we need not only the powerseries themselves, but also their formal powers, in the same way as a powerseries in x needs its argument x as well as x's powers. In these matrices the exponential-series (and also their powers) are coded in their columns (in my concept; in some other concepts in their rows), and we can implement fractional powers for these powerseries by fractional powers of the matrices, using the matrix-logarithm or eigensystem-decomposition ("diagonalization").

In Aldrovandi/Freitas [AF97] this is dealt with in a more general and rigorous way; a statement which supports this approach is, for instance:

   There are two main approaches to describe the evolution of a dynamical system. The first has its roots in classical mechanics: the solutions of the dynamical differential equations provide the continuous motion of the representative point in phase space. The second takes a quite different point of view: it models evolution by the successive iterations of a well-chosen map, so that the state is known after each step, as if the "time" parameter of the system were only defined at discrete values. It is possible to go from the first kind of description to the second through the snapshots leading to a Poincaré map.

They focus then on the matrix-operation for interpolation:

   Our aim here is to present the first steps into a converse procedure, going from a discrete to a continuous description while preserving the idea of iteration. This is possible if we are able to "interpolate" between the discrete values in such a way that the notion of iteration keeps its meaning in the intervals.

They continue to explain:

   The clue to the question lies in the formalism of Bell polynomials, which attributes to every such function f a matrix B[f], whose inverse represents the inverse function and such that the matrix product represents the composition operation.
   In other words, these matrices provide a linear representation of the group formed by the functions with the operation of composition. Composition is thus represented by matrix product and, consequently, iterations are represented by matrix powers. Furthermore, the representation is faithful, and the function f is completely determined by B[f]. Now, in the matrix group there does exist a clear interpolation of discrete powers by real powers, and the inverse way, going from matrices to functions, yields a map interpolation with the desired properties.

In the following they base their discussion on functions whose powerseries-representation does not contain the constant coefficient, for instance Ue(x) = exp(x)-1 or Ut(x) = t^x - 1, and come to a Bell-matrix, which is a slightly different representation (scaling) of the coefficients which I provide here for U-tetration (see there). The Bell-matrix, which - in my simpler representation here - I call "matrix-operator", is triangular for Ut(x) = t^x - 1, and thus accessible for formal inversion, diagonalization (if t<>exp(1)) and exponentiation. Unfortunately the same is not true for the T-tetration: here the matrix-operator is an infinite square matrix. However, this operator can be factored into triangular matrices, and using the associativity of matrix-operations we may represent the T-tetration-problem by better suited series-manipulations.

A simple approach, here called "naive" (preceding the analytic approach), is possible for a certain range of parameters, where the occurring powerseries converge well. This can be done by practical use of finite matrices, basic eigensystem-analysis and matrix-logarithm based on these finite truncations. However, while one gets quickly approximative results, no reliable estimation of the approximation-error can be given because of the numerical sensitivity of canned eigenanalysis-procedures.
Two examples shall illustrate the inherent problems:
a) the inverse of a finite matrix M_nxn can be given uniquely,
b) and the inverse then satisfies the condition M_nxn * M_nxn^-1 = I_nxn for a given dimension n.

However, in the infinite case (which expresses the analytical description of the iterated exponential) there is
a) no such uniqueness, so a "naive" solution may lead to a completely wrong path. We have infinitely many reciprocals for the U-tetration matrix, since the complex logarithm is a multivalued function and has thus different matrix-operators;
b) the truncation of the analytical reciprocal, (M^-1)_nxn, is different from the inverse of the truncated matrix, (M_nxn)^-1, and M_nxn * (M^-1)_nxn must be systematically different from the identity-matrix I_nxn for any size if M is not row- or column-finite (for instance for T-tetration it is infinite and square); so using a truncated (M^-1)_nxn as if it were computed by the solution of M_nxn * M_nxn^-1 = I_nxn is inherently wrong.

This extends then also to the "naive" computation of matrix-logarithms and eigensystem-decompositions, which are based on (or imply) the naive matrix/matrix-inverse relation. Problem b) is a bit milder if we can use triangular matrices as in the U-tetration, since the entries of its inverse and/or eigenmatrices can be determined by finite computation (using easy recursions), and they are exact and consistent entries for any chosen size of truncation; however, the non-uniqueness-problem cannot be overcome by this.

So the "naive" use of a canned eigensystem-solver (or of a matrix-inversion-routine) with a truncated matrix might give a useful approximated result, with which then even iterations up to a certain degree may be done, but the results are likely to be systematically biased in this or that direction. But because the "naive" approach has a good introductory power and can even be applied for some complex parameters with useful approximation (possibly much enhanced by convergence-acceleration, say Euler-summation), I present it in the following as a first example.


1.5. Matrix-notations and definitions

Theoretically all vectors and matrices are thought of as of infinite dimension; this is due to the infinite number of terms of a powerseries. Practically we deal with finite truncations, hoping we have enough terms for a reasonable approximation of the resulting powerseries.

1.5.1. Basic notations, vectors and elements

We define an arbitrary vector A:
   A = [a0, a1, a2, a3, ...]    a column-vector; indexing begins at zero
   A~                           a row-vector, as its transpose
   dA                           A used as diagonal-matrix
A special type of vector, called "vandermonde-vector" here:
   V(x) = [1, x, x^2, x^3, x^4, ...]   containing the consecutive powers of x
A vector A is only denoted as V(x) if its elements are the consecutive powers of x, beginning at x^0.
   dV(x)                        V(x) taken as diagonal-matrix
This notation will be heavily used in algebraic formulae. Another special type of vector:
   dF                           the vector of factorials, taken as diagonal-matrix
   dF = diag(0!, 1!, 2!, 3!, ...),    dF^-1 = diag(1/0!, 1/1!, 1/2!, 1/3!, ...)
Elements of a vector A are indicated either by index or brackets:
   A_k = A[k]                   the k'th element of A, where k=0 indicates the first element
   V(x)_1 = x                   the parameter of a vandermonde-vector is its second element
Elements of a matrix M are indicated either by row-/column-index or brackets:
   M_r,c = M[r,c]               the r'th/c'th element of M
   M_*,c = M[,c]                the c'th column of M
and a matrix M may be defined by a description of its elements this way:
   M_r,c := c^r/r!

1.5.2. Matrix-constants

The Pascal-matrix P, with the binomial-coefficients as entries: P_r,c = binomial(r,c) (lower triangular). The notation for the transpose is as usual in Pari/GP: P~ = transpose(P).
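In code these conventions amount to a few one-liners; a Python sketch mirroring the notation (the names V, dV, dF, M are taken from the definitions above; the truncation size n is arbitrary):

```python
from math import factorial

n = 6   # truncation size

def V(x, n=n):
    """Vandermonde-vector [1, x, x^2, ...] of length n."""
    return [x**k for k in range(n)]

def dV(x, n=n):
    """V(x) taken as a diagonal matrix."""
    return [[x**r if r == c else 0 for c in range(n)] for r in range(n)]

# dF = diag(0!, 1!, 2!, ...)
dF = [[factorial(r) if r == c else 0 for c in range(n)] for r in range(n)]

# a matrix defined by its elements, M[r][c] := c^r / r!  (0-based indices)
M = [[c**r / factorial(r) for c in range(n)] for r in range(n)]
```

Note that Python's `0**0 == 1` matches the convention V(0) = [1, 0, 0, ...] used here.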

The matrices of Stirling-numbers of the first and second kind, Stir1 and Stir2. Note that they are mutual reciprocals(a), so Stir1 = Stir2^-1. For finite size this is unique; for the infinite case modified versions of Stir1 are possible. Their factorially similarity-scaled versions are
   S1 = dF^-1 * Stir1 * dF
   S2 = dF^-1 * Stir2 * dF
The same remark concerning uniqueness applies here.
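The mutual reciprocity of the finite truncations can be verified exactly in integer arithmetic, using the standard recurrences for the signed Stirling numbers of the first kind and the Stirling numbers of the second kind (Python sketch):

```python
n = 8

# signed Stirling numbers of the first kind: s(r,c), lower triangular,
# via s(r,c) = s(r-1,c-1) - (r-1)*s(r-1,c)
s1 = [[0] * n for _ in range(n)]
s1[0][0] = 1
for r in range(1, n):
    for c in range(n):
        s1[r][c] = (s1[r-1][c-1] if c > 0 else 0) - (r - 1) * s1[r-1][c]

# Stirling numbers of the second kind: S(r,c) = S(r-1,c-1) + c*S(r-1,c)
s2 = [[0] * n for _ in range(n)]
s2[0][0] = 1
for r in range(1, n):
    for c in range(n):
        s2[r][c] = (s2[r-1][c-1] if c > 0 else 0) + c * s2[r-1][c]

# the matrix product Stir2 * Stir1 should be the identity
prod = [[sum(s2[r][k] * s1[k][c] for k in range(n)) for c in range(n)]
        for r in range(n)]
```

The product comes out as the exact identity-matrix for any truncation size, as stated; in the infinite case this uniqueness is lost.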

(a) See [AS], page

The "Vandermonde-matrix" VZ, defined by VZ_r,c := c^r.

The factorially row-scaled version of VZ:
   B = dF^-1 * VZ          or as definition: B_r,c := c^r/r!
   Bb = dV(log(b)) * B     or: Bb_r,c := log(b)^r * c^r/r!

1.5.3. Operators (on formal powerseries)

Matrices which translate one vandermonde-vector into another are called (matrix-)"operators":
   if V(x)~ * M = V(y)~, then M is called an "operator", acting on the ring of formal powerseries.
For instance
   V(x)~ * P~ = V(x+1)~          // by the binomial theorem
   V(1/(x+1))~ * P = V(1/x)~     // by geometric series and derivatives

   V(x)~ * Bb = V(b^x)~          // by the exponential-series
   V(x)~ * S2_t = V(t^x - 1)~    // by the exponential-series (S2_t analogous to Bb)
   V(x)~ * S1 = V(log(1+x))~     // by the logarithm-series
and all mentioned matrices are "operators on powerseries" (or, in this text, simply "operators").
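These operator-properties are easy to check numerically on a finite truncation; for instance for Bb (Python sketch; the size n and the evaluation point x are ad-hoc choices):

```python
import math

n = 24
b = math.sqrt(2)
lb = math.log(b)

# Bb[r][c] = log(b)^r * c^r / r!   (0-based indices, as defined above)
Bb = [[(lb * c)**r / math.factorial(r) for c in range(n)] for r in range(n)]

x = 0.7
Vx = [x**r for r in range(n)]

# row-vector times matrix: (V(x)~ * Bb)_c should approximate (b^x)^c
y_vec = [sum(Vx[r] * Bb[r][c] for r in range(n)) for c in range(n)]
bx = b ** x
```

The low-indexed entries of the result agree with V(b^x) to near machine precision; the agreement deteriorates in the highest entries, as expected from the truncation of the exponential-series.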

1.5.4. Relation to Bell-matrices as operators on formal exponential-series

Note the relation to Bell-matrices B[f], which act on a function f, as given in [AF97]. The convention for Bell-matrices is to use the taylor-coefficients of the function f only. So, if in my approach S2 is the "operator" for the function U(x) = exp(x)-1, which converts V(x) into V(exp(x)-1), the according Bell-matrix B[U] contains the coefficients which convert dF^-1*V(x) into dF^-1*V(exp(x)-1), thus acting on formal exponential-series instead of formal powerseries, but in a completely analogous way. (Note that in [AF97] the Bell-matrices do not contain the first column/row as computed here.) The coefficients of the B[U]-matrix can be converted into the coefficients of S2 by factorial similarity-scaling
   B[U] = dF * S2 * dF^-1
and, having a definition like VE(x) = [1, x/1!, x^2/2!, x^3/3!, ...], then
   V(x)~ * S2 = V(exp(x)-1)~
   VE(x)~ * B[U] = VE(exp(x)-1)~     // where in B[U] a leading row/column is appended
In some articles the Bell-matrix of a function may also be expressed in the transposed notation.

2. The naive matrix-approach

2.1. Example computation of x {b} h with integer height h >=0

If we want to compute Tb°1(1) = 1 {b} 1 = b using a nicely convergent example, b = sqrt(2), we select a dimension n, for instance n=32, create the 32x32-matrix Bb and write (in Pari/GP, whose matrix-indices begin at 1, so r and c are shifted against the 0-based convention used here)
   n = 32; b = sqrt(2); lb = log(b);
   Bb = matrix(n,n,r,c, (lb*(c-1))^(r-1) / (r-1)! )
   Y~ = V(1)~ * Bb
   y = Y_1
   y ≈ Tb(1)
Then we get in y = Y_1 a good approximation to Tsqrt(2)°1(1) = 1 {sqrt(2)} 1 = sqrt(2). (Actually, in a CAS we would have to write the second formula of this display as

   Y = (V(1)~ * Bb)~
but this shouldn't matter here; the notation is kept as above for clarity.) Y will not be an exact vandermonde-vector, and the approximation to a true vandermonde-vector will be worse in the higher-indexed entries (representing higher powers of y). But we can generate the approximative result in a new vandermonde-vector V(y), containing the consecutive powers of y, and thus we may iterate this:

   Z~ = V(y)~ * Bb
and get
   z = Z_1 ≈ sqrt(2)^sqrt(2) = b^b = 1 {sqrt(2)} 2
We might as well have written
   Z~ = V(1)~ * Bb^2
introducing integer powers of Bb because of the associativity of matrix-operations. More generally, we might set x and h to certain values in a range of good convergence, and write
   Y~ = V(x)~ * Bb^h
to get

   y = Y_1 ≈ x {sqrt(2)} h ≈ Tsqrt(2)°h(x)
in the second entry of Y.
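The same experiment, transcribed from the Pari/GP computation above into plain Python (re-building a fresh vandermonde-vector from the scalar result after each step, as described):

```python
import math

n = 24
b = math.sqrt(2)
lb = math.log(b)
# Bb[r][c] = (lb*c)^r / r!  (0-based indices)
Bb = [[(lb * c)**r / math.factorial(r) for c in range(n)] for r in range(n)]

def apply_op(vec, M):
    """row-vector times matrix: V(x)~ * M."""
    return [sum(vec[r] * M[r][c] for r in range(len(vec)))
            for c in range(len(M[0]))]

def tet(x, h):
    """x {b} h for integer h >= 0, via repeated application of the operator."""
    for _ in range(h):
        x = apply_op([x**r for r in range(n)], Bb)[1]   # second entry = new value
    return x

val = tet(1, 3)          # 1 {sqrt(2)} 3 = b^b^b
ref = b ** (b ** b)
```

For this well-convergent base the truncated operator reproduces the direct evaluation to near machine precision.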

2.2. Negative height

We may also introduce negative heights, which simply employs positive powers of the "naive" inverse of Bb:
   Y~ = V(x)~ * Bb^-h = V(x)~ * (Bb^-1)^h

Unfortunately, the matrix Bb is much like a vandermonde-matrix, which is numerically delicate to invert. The entries vary strongly with increasing size and don't converge to a stable value. This can nicely be seen if we factorize Bb into two triangular factors (and one diagonal factor). Using again lb for log(b), we have

   Bb = dV(lb) * S2 * P~
and the inverse, computed by inverting the diagonal and triangular factors, is (analytically, in infinite size)
   Bb^-1 = (P~)^-1 * S2^-1 * dV(lb)^-1
         = (P~)^-1 * S1 * dV(1/lb)
         = (P~)^-1 * S1b


Here all row-by-column multiplications in (P~)^-1 * S1b, needed to get the entries of Bb^-1, give divergent series if infinite size is assumed; and truncations of matrices of increasing size lead to ever increasing absolute values. So here we are principally lost in space.

However, Bb with a finite size can be inverted and gives "just a matrix", which provides size-dependent optimal coefficients for the series-inversion (which in the truncated case is in fact a polynomial inversion). It will provide some approximation to the expected values of the so-coded powerseries, with the condition that
   V(y)~ = V(x)~ * Bb        y ≈ b^x
   V(x)~ = V(y)~ * Bb^-1     x ≈ log(y)/log(b)
but accurate in the sense of
   V(x)~ = (V(x)~ * Bb) * Bb^-1
with a quality of approximation which heavily depends on the size of the matrix. One can play with such computations, and using sizes of 32x32 or 64x64 and a reasonable base b, the error may be less than 1e-12 or even better.
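This round-trip property is easy to reproduce (Python sketch; the Gauss-Jordan routine and the chosen size are my own, and the size is kept small since the conditioning worsens quickly):

```python
import math

n = 8
b = math.sqrt(2)
lb = math.log(b)
Bb = [[(lb * c)**r / math.factorial(r) for c in range(n)] for r in range(n)]

def inverse(M):
    """plain Gauss-Jordan inversion with partial pivoting."""
    m = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(m)]
         for i, row in enumerate(M)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        for r in range(m):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[m:] for row in A]

Bb_inv = inverse(Bb)

x = 0.5
Vx = [x**r for r in range(n)]
Vy = [sum(Vx[r] * Bb[r][c] for r in range(n)) for c in range(n)]        # ≈ V(b^x)
back = [sum(Vy[r] * Bb_inv[r][c] for r in range(n)) for c in range(n)]  # ≈ V(x)
```

The round trip (V(x)~ * Bb) * Bb^-1 is exact up to floating-point error for the finite truncation, while reading log(y)/log(b) out of V(y)~ * Bb^-1 for an independently given y is only approximate, matching the distinction between the truncated inverse and the truncation of the analytical inverse made above.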

2.3. Acceleration of convergence - Euler-summation

A very short note on acceleration of convergence may be appropriate here. In many situations powers of the operators, or even the operators themselves, may define poorly converging or even divergent series. In many cases the simple method of Euler-summation may accelerate convergence, or even provide convergence if the original series are divergent but their coefficients alternate in sign. This is especially true if the coefficients have at most a geometric growth-rate. Euler-summation can be implemented by a simple matrix-multiplication, and even arbitrary orders can be implemented by simple means. I've defined a matrix-function ESum(o), where o denotes the order and o=1 is just ordinary summing. Then we might write

   Y~ = ESum(2.0) * dV(x) * Bb
using an order of o=2.0 (the basic Euler-application); note that we have to use V(x) as the diagonal matrix dV(x) now. If we configure ESum(o) as a triangular matrix, we see the progression of partial sums and can adapt the order-parameter o to get the best results; then we finally use only the last row as the result for Y~. If we configure ESum(o) to give only the last row (index r = n-1), the result is just the row-vector Y~, but without the possibility to inspect whether we have used the most appropriate order and approached the best and most reliable result.

This is not the place to discuss Euler-summation in detail; I'll prepare another article about this subject. The interested reader may consult K. Knopp, "Infinite series in theory and application" [KN], or G.H. Hardy's monography "Divergent series" [HA] for deeper insight. Introducing the concept of "summation of divergent series" extends the range for all parameters in the naive approach remarkably, and for some computations this is even needed to arrive at a consistent approximation at all. In the following this acceleration is rarely mentioned; the algebraic manipulations in this chapter about the "naive matrix-approach" may imply the application of these summations, where needed, without further explication.
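The ESum-matrix itself is left to the announced separate article; as a stand-in, plain first-order Euler summation of an alternating series can be sketched via the classical forward-difference (binomial) transform (Python; this is the textbook form, not my ESum-matrix):

```python
def euler_sum(a, terms=60):
    """Euler summation of sum_{k>=0} (-1)^k a[k] via the Euler transform
    sum_n (-1)^n (Δ^n a)(0) / 2^(n+1), with forward differences Δ."""
    diffs = list(a)           # row 0 of the forward-difference table
    total = 0.0
    for m in range(terms):
        total += (-1)**m * diffs[0] / 2**(m + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total

# 1 - 2 + 4 - 8 + ... : divergent, Euler-summable to 1/3
geom = euler_sum([2.0**k for k in range(61)])
```

The transform sums the divergent series 1 - 2 + 4 - 8 + ... to 1/3, the value of the analytically continued geometric series; this kind of service is exactly what the alternating powertower-series below require.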


2.4. Fractional powers: matrix-logarithm or diagonalization

To get fractional powers we may use a built-in matrix-logarithm, or implement it ourselves according to the definition of the logarithmic series, using the powers of the matrix Bb as coefficients. We may need a lot of terms in that series, but we may come out with some good approximation, and may write, for an example fractional power h=0.5:
   h = 0.5; Bb05 = exp(h * log(Bb))
and then

   Y~ = V(x)~ * Bb05
to get

   y = Y_1 ≈ x {sqrt(2)} 0.5 ≈ Tsqrt(2)°0.5(x)
in the second entry of the resulting (approximately vandermonde-type) vector Y. If we iterate this:

   Z~ = V(y)~ * Bb05
or Z~ = (V(x)~ * Bb05) * Bb05
or Z~ = V(x)~ * (Bb05)^2
we get

   z = Z_1 ≈ x {sqrt(2)} 0.5 {sqrt(2)} 0.5 ≈ x {sqrt(2)} (0.5+0.5) ≈ x {sqrt(2)} 1 ≈ sqrt(2)^x
in the second entry of Z, as expected.

In the same practical way we might employ diagonalization (or: "eigensystem-decomposition") using an eigensystem-procedure of our choice. Assume your CAS or matrix-enabled calculator allows to decompose a matrix into its three eigen-components and to store them as the three result-matrices W, D, WI:

   [W, D, WI] = eigen(Bb)
where W contains the eigenvectors, D is the diagonal-matrix of eigenvalues and WI is the inverse of W. Then you might, according to the rules of diagonalization, compute fractional powers of Bb via fractional powers of D only, simply by fractional powers of the scalar diagonal entries of D:
   Y~ = V(x)~ * (W * D^h * WI)
then
   y = Y_1 ≈ Tb°h(x)
The parentheses are optional in the first formula; you may compute from left to right as well, but (W * D^0.5 * WI) contains just the coefficients of the (approximated) powerseries for Tb°0.5(x) in its second column - the same which you would get by the previous approach exp(0.5 * log(Bb)).

Note that Bb is a very badly configured matrix for eigenanalysis; small sizes of, say, 32x32 already need very high float-precision of some hundred significant digits (in Pari/GP) and minutes of computation. (I've heard that using a commercial CAS this can be improved, but beyond dimension >128 numerical eigenanalysis seems hopeless with reasonable resources in memory, precision and time.)
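For experiments without a canned eigensystem-solver, the h=0.5 case can also be reached through the matrix square root of the truncated Bb, computed here by the Denman-Beavers iteration (this is my stand-in for the matrix-logarithm/diagonalization routes described above; size and iteration count are ad-hoc choices):

```python
import math

n = 12
b = math.sqrt(2)
lb = math.log(b)
Bb = [[(lb * c)**r / math.factorial(r) for c in range(n)] for r in range(n)]
I = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

def inverse(M):
    """plain Gauss-Jordan inversion with partial pivoting."""
    m = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(m)]
         for i, row in enumerate(M)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        for r in range(m):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[m:] for row in A]

# Denman-Beavers iteration: Y -> Bb^0.5, Z -> Bb^-0.5
Y, Z = Bb, I
for _ in range(30):
    Zi, Yi = inverse(Z), inverse(Y)
    Y = [[(Y[r][c] + Zi[r][c]) / 2 for c in range(n)] for r in range(n)]
    Z = [[(Z[r][c] + Yi[r][c]) / 2 for c in range(n)] for r in range(n)]
Bb05 = Y

def half_step(x):
    """second entry of V(x)~ * Bb05, the approximate half-iterate."""
    return sum(x**r * Bb05[r][1] for r in range(n))

yh = half_step(1.0)   # ≈ 1 {sqrt(2)} 0.5
zh = half_step(yh)    # ≈ 1 {sqrt(2)} 1 = sqrt(2)
```

Applying the resulting half-iterate twice to x=1 returns a value close to sqrt(2), i.e. close to 1 {sqrt(2)} 1, in line with the iteration shown above; the residual Bb05*Bb05 - Bb measures how well the square root was found.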


2.5. Sums and series of powertowers (here: alternating series)

2.5.1. Sums

Since the matrix-concept implements linear transformations, sums and series are easily handled. For instance, if

   y1 = x1 {b} h
   y2 = x2 {b} h
then in matrix-notation this is (analytically, with infinite dimension)
   V(y1)~ = V(x1)~ * Bb^h
   V(y2)~ = V(x2)~ * Bb^h
and we may rewrite the sum simply as
   (V(y1) + V(y2))~ = V(x1)~ * Bb^h + V(x2)~ * Bb^h
                    = (V(x1) + V(x2))~ * Bb^h

If we have

   y1 = x {b} h1
   y2 = x {b} h2
then in matrix-notation this is
   V(y1)~ = V(x)~ * Bb^h1
   V(y2)~ = V(x)~ * Bb^h2
and we may rewrite the sum simply as
   V(y1)~ + V(y2)~ = V(x)~ * Bb^h1 + V(x)~ * Bb^h2
                   = V(x)~ * (Bb^h1 + Bb^h2)
and even
                   = V(x)~ * Bb^h1 * (I + Bb^(h2-h1))

2.5.2. Series

As well as sums of powertowers we may formally discuss series of powertowers; but since we deal here with the "naive" version, the application is much limited, and a wider discussion may be found in the chapter on analytical matrix-based tetration. An idea, however, may be given by the following example. Assume an alternating series AS:

   ASb(h) = 1 {b} h - 2 {b} h + 3 {b} h - 4 {b} h + ...
which for h=1 is the geometric series:
   ASb(1) = b^1 - b^2 + b^3 - b^4 + ...
We know the value is

   ASb(1) = b/(1+b).
In matrix-notation we need the notion of a linear combination of infinitely many vandermonde-vectors:
   Y~ = (V(1) - V(2) + V(3) - V(4) + - ...)~ * Bb^h     and ASb(h) = Y_1

Here the parenthesis, if evaluated first, could be written as a vector of values of the eta-function(a) η(s):
   Y~ = [η(0), η(-1), η(-2), ...] * Bb^h
First we have then the interesting result that with h=1 this must give (approximately) the value of the geometric series above.

Writing the entries of the second column of Bb as c_k,

   ASb(1) = b/(1+b) = Σ_{k=0..inf} η(-k) * c_k = Σ_{k=0..inf} η(-k) * log(b)^k/k!
This can be crosschecked by conventional or Euler-summation of the terms of this series for some b.
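Such a crosscheck can be carried out with the known rational values η(-k) = (2^(k+1)-1)/(k+1) * B_(k+1), with Bernoulli numbers B; the case k=0, η(0) = 1/2, is treated separately because of the sign-convention for B_1. A Python sketch (exact Bernoulli numbers via the defining recurrence):

```python
import math
from fractions import Fraction

def bernoulli(m):
    """Bernoulli numbers B_0..B_m (convention B_1 = -1/2), by the recurrence
    B_n = -1/(n+1) * sum_{j<n} C(n+1, j) B_j."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for k in range(1, m + 1):
        s = sum(Fraction(math.comb(k + 1, j)) * B[j] for j in range(k))
        B[k] = -s / (k + 1)
    return B

K = 24
B = bernoulli(K + 1)

def eta_neg(k):
    """η(-k) for integer k >= 0."""
    if k == 0:
        return Fraction(1, 2)
    return Fraction(2**(k + 1) - 1, k + 1) * B[k + 1]

b = math.sqrt(2)
lb = math.log(b)
AS1 = sum(float(eta_neg(k)) * lb**k / math.factorial(k) for k in range(K + 1))
target = b / (1 + b)
```

The series Σ η(-k) log(b)^k/k! converges quickly here (its generating function e^z/(1+e^z) has radius of convergence π around 0) and reproduces b/(1+b) to high accuracy.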

2.5.3. A simple conjecture about series, triggered by the naive approach

Interestingly, we are able to look at a modification and come immediately to a new, bold, general and quite interesting hypothesis on a property of series of powertowers.

If we define another series as

   ARb(h) = (-1) {b} h - (-2) {b} h + (-3) {b} h - (-4) {b} h + ...
which for h=1 is the geometric series:
   ARb(1) = b^-1 - b^-2 + b^-3 - b^-4 + ... = (1/b) / (1 + 1/b) = 1/(b+1)
Written in matrix-notation (resorting to general h):
   Z~ = (V(-1) - V(-2) + V(-3) - V(-4) + - ...)~ * Bb^h     and ARb(h) = Z_1
then the parenthesis could again be written as a vector of eta-values:
   Z~ = [η(0), -η(-1), η(-2), -η(-3), ...] * Bb^h
and we have, in serial notation using the entries of the second column of Bb, denoted as c_k,

   ARb(1) = 1/(b+1) = Σ_{k=0..inf} (-1)^k η(-k) * c_k = Σ_{k=0..inf} (-1)^k η(-k) * log(b)^k/k!

Although the geometric series representation of ARb(h) does not converge where that of ASb(h) converges, and vice versa for a given b, we may use the analytic continuation of the geometric series (as given by the closed-form formula) to arrive at an interesting result if the two series are added. The sum of the two series is, by addition of the two geometric-sum formulae,

   AQ_b(1) = AS_b(1) + AR_b(1) = b/(1+b) + 1/(b+1) = 1

independent of the chosen b, and by the serial representation of the terms, provided by the matrix-equation using eta-values, we get:

   AQ_b(1) = AS_b(1) + AR_b(1) = Σ_{k=0..inf} (1+(-1)^k) η(-k)*log(b)^k / k!

*) The eta-function η(s) is also called the "alternating zeta-series".

Here for even k>1 the term η(-k) = 0, and for odd k the term (1+(-1)^k) = 0. This leaves the single summand for k=0 and we have

   AQ_b(1) = AS_b(1) + AR_b(1) = (1+(-1)^0) η(0)*log(b)^0/0! = 2*η(0) = 2*1/2 = 1

formally for any base b. In algebraic matrix-notation this is

   YZ~ = ( ( V(1) - V(2) + V(3) - V(4) + ... - ... )
         + ( V(-1) - V(-2) + V(-3) - V(-4) + ... - ... ) ) * Bb^h

       = ( [ η(0),  η(-1), η(-2),  η(-3), ... ]
         + [ η(0), -η(-1), η(-2), -η(-3), ... ] ) * Bb^h

       = ( [ η(0),  η(-1), 0,  η(-3), 0, ... ]
         + [ η(0), -η(-1), 0, -η(-3), 0, ... ] ) * Bb^h

       = [ 2 η(0), 0, 0, 0, 0, ... ] * Bb^h

       = [ 1, 0, 0, 0, 0, ... ] * Bb^h

       = V(0)~ * Bb^h

And using h=1, writing y = x{b}1 with x=0 means y = b^0 = 1, which is the same result as before. Based on the observation in the last derivation, that b and h can be kept variable, we may even conjecture that this holds analogously for other heights, at least where h is a natural number h>=0.

Example-Conjecture: let h be an integer >= 0; then

   AQ_b(h) = 0{b}h = 1{b}(h-1) = b^^(h-1)     or     AQ_b(h) - 0{b}h = 0     or     Σ_{k=-inf..inf} (k{b}h) = 0
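The term-wise cancellation used in this derivation can be verified exactly in rational arithmetic. The following sketch (illustrative only; helper names are ad hoc) checks that every summand-coefficient (1+(-1)^k)*η(-k) of the AQ-series vanishes except for k=0, where it equals 1:

```python
from fractions import Fraction
import math

def bernoulli(m):
    """Bernoulli numbers B_0 .. B_m, with the B_1 = +1/2 convention
    so that eta(0) = 1/2."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(Fraction(math.comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1)
    B[1] = Fraction(1, 2)
    return B

def AQ_coefficients(kmax=20):
    """Exact coefficients (1 + (-1)^k) * eta(-k) of the AQ-series,
    i.e. the serial terms before multiplying by log(b)^k / k!."""
    B = bernoulli(kmax + 1)
    eta = [(2 ** (k + 1) - 1) * B[k + 1] / (k + 1) for k in range(kmax + 1)]
    return [(1 + (-1) ** k) * eta[k] for k in range(kmax + 1)]
```

Since only the k=0 coefficient survives (with value 2*η(0) = 1), the whole series collapses to 1*log(b)^0/0! = 1 for every base b, as derived above.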

For h=0 (giving the eta-function) and h=1 (giving the geometric series) this is easily verified; for h>1 the verification via serial summation needs techniques of divergent summation, since we don't yet know closed-form evaluations for such series of higher powertowers. The special charm of this conjecture is that it connects the eta-series and the geometric series by a simple, number-theoretically interesting property, and also sorts these basic series into a completely new, more general framework of tetrational series, which then all share one common property.


2.6. naive U-tetration (decremented iterated exponentiation)

The U-tetration is much more straightforward and better accessible with the naive approach, since the involved matrices are triangular. The consequence is that its entries, as well as those of its inverse, of its matrix-logarithm and of its eigen-decomposition, are stable under varying matrix-sizes – the resulting powerseries are just truncations, where the error of approximation can (in principle) be quantified.
The basic matrix is the S2-matrix of Stirling numbers 2nd kind, and approximations to Ut°h(x) are expressible by (using t as base parameter and u = log(t))

   Ut = dV(u) * S2

then

   V(y)~ = V(x)~ * Ut^h      // analytically, infinite size assumed
   Y~    = V(x)~ * Ut^h      // practically, truncated matrices used; Y is not perfectly vandermonde

   y = Y_1 ≈ Ut°h(x)

Here we may also use negative heights immediately, since the inverse of S2 is just S1, and the entries do not change with varying size of the matrix.

   Ut^-1 = ( dV(u)*S2 )^-1 = S1 * dV(1/u)

   V(y)~ = V(x)~ * (Ut^-1)^h     // analytically, infinite size assumed
   Y~    = V(x)~ * (Ut^-1)^h     // practically, truncated matrices used; Y is not perfectly vandermonde

   y = Y_1 ≈ Ut°-h(x)
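For integer heights this recipe is easy to try out numerically. The sketch below (illustrative Python, not the author's matrix routines; truncation size N and helper names are ad hoc) builds the factorially scaled Stirling matrices, forms Ut resp. Ut^-1, and applies them to V(x)~:

```python
import math

def stirling2_scaled(N):
    """Scaled Stirling numbers 2nd kind: m[n][k] = S2(n,k)*k!/n!,
    so column k holds the powerseries-coefficients of (exp(x)-1)^k
    and V(x)~ * m = V(exp(x)-1)~ ."""
    S = [[0] * N for _ in range(N)]
    S[0][0] = 1
    for n in range(1, N):
        for k in range(1, n + 1):
            S[n][k] = k * S[n - 1][k] + S[n - 1][k - 1]
    f = [math.factorial(i) for i in range(N)]
    return [[S[n][k] * f[k] / f[n] for k in range(N)] for n in range(N)]

def stirling1_scaled(N):
    """Scaled signed Stirling numbers 1st kind: column k holds the
    coefficients of log(1+x)^k, so V(x)~ * m = V(log(1+x))~ ."""
    s = [[0] * N for _ in range(N)]
    s[0][0] = 1
    for n in range(1, N):
        for k in range(1, n + 1):
            s[n][k] = s[n - 1][k - 1] - (n - 1) * s[n - 1][k]
    f = [math.factorial(i) for i in range(N)]
    return [[s[n][k] * f[k] / f[n] for k in range(N)] for n in range(N)]

def U_naive(x, t, h, N=24):
    """y = Y_1 ~ Ut°h(x): for h>=0 use Ut = dV(u)*S2 (row-scaled by u^n),
    for h<0 use Ut^-1 = S1*dV(1/u) (column-scaled by (1/u)^k)."""
    u = math.log(t)
    if h >= 0:
        M = [[u ** n * e for e in row] for n, row in enumerate(stirling2_scaled(N))]
        steps = h
    else:
        M = [[e * (1 / u) ** k for k, e in enumerate(row)] for row in stirling1_scaled(N)]
        steps = -h
    row = [x ** n for n in range(N)]   # V(x)~
    for _ in range(steps):
        row = [sum(row[n] * M[n][k] for n in range(N)) for k in range(N)]
    return row[1]                      # y = Y_1
```

With t=2 and x=0.3 the truncated computation reproduces Ut(x) = t^x - 1, its iterate, and Ut°-1(x) = log_t(1+x) to high accuracy, as the triangularity of S2/S1 promises.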

2.7. Improvement of naive T-tetration by fixpoint-shift

An interesting relation exists between Tb()- and Ut()-tetration, if the bases are related by b = t^(1/t).

Looking at the entries in the eigenmatrices of Bb (and in the discussion in [] it is stated as a fact), it appears that (using t^(1/t) = b and u = log(t))

   Bb = dV(1/t) * P~^-1 * Ut * P~ * dV(t)

(which, since it is a similarity-transform, extends also to powers of Bb and Ut), so that

   V(x)~ * Bb = V(x)~ * ( dV(1/t) * P~^-1 * Ut * P~ * dV(t) )

Using associativity, this gives, if infinite size is assumed here:

   V(x)~ * Bb = ( V(x/t - 1)~ * Ut ) * ( P~ * dV(t) )
              = V(t^(x/t-1) - 1)~ * P~ * dV(t)
              = V(t^(x/t-1) - 1 + 1)~ * dV(t)
              = V(t^(x/t-1))~ * dV(t)
              = V(t^(x/t))~ = V(t^((1/t)*x))~ = V(b^x)~

and the unavoidable truncation can be made at a point of the computation where its error can be better controlled. Where we (implicitly) have principally divergent infinite series in the matrix-product at the position indicated by the []-brackets:

   V(x)~ * Bb = V(x)~ * dV(1/t) * [ P~^-1 * Ut ] * P~ * dV(t)

we can avoid this implicit series-evaluation by changing the order and computing the differently bracketed term first:


   V(x)~ * Bb = [ V(x)~ * dV(1/t) * P~^-1 ] * Ut * P~ * dV(t)
   V(y)~      = [ V(x/t - 1)~ ] * Ut

and then

   V(x)~ * Bb = V((y+1)*t)~     and     b^x = (y+1)*t

which is also valid for integer powers of Bb resp. Ut. For fractional powers we find that the result depends on the selection of the fixpoint t, although the differences may be smaller than the generally accepted inaccuracy of the "naive" method. This problem is stated as still unsolved in the tetration-discussion, for instance in the (online) discussion-group "tetration-forum".
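For integer heights the fixpoint-shift can be tested directly: with b = t^(1/t) we iterate Ut on y = x/t - 1 and map back via (y+1)*t. A sketch (illustrative Python; ad-hoc names; integer h only, so no eigen-analysis is needed):

```python
import math

def stirling2_scaled(N):
    """m[n][k] = S2(n,k)*k!/n!, so that V(x)~ * m = V(exp(x)-1)~ ."""
    S = [[0] * N for _ in range(N)]
    S[0][0] = 1
    for n in range(1, N):
        for k in range(1, n + 1):
            S[n][k] = k * S[n - 1][k] + S[n - 1][k - 1]
    f = [math.factorial(i) for i in range(N)]
    return [[S[n][k] * f[k] / f[n] for k in range(N)] for n in range(N)]

def T_fixpoint(x, t, h, N=24):
    """T_b°h(x) for b = t^(1/t), computed through the fixpoint-shift:
    shift x into Ut-coordinates (y = x/t - 1), iterate Ut h times via
    its second column, then shift back with (y+1)*t."""
    u = math.log(t)
    Ut = [[u ** n * e for e in row] for n, row in enumerate(stirling2_scaled(N))]
    y = x / t - 1
    for _ in range(h):
        row = [y ** n for n in range(N)]                 # V(y)~
        y = sum(row[n] * Ut[n][1] for n in range(N))     # y <- Ut(y)
    return (y + 1) * t
```

With t=2 (so b = 2^(1/2) = sqrt(2)) the matrix-route reproduces the directly iterated powertower. For fractional h the result would additionally depend on the chosen fixpoint t (e.g. t=2 vs. t=4 for b = sqrt(2)), as discussed above.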

2.8. Conclusion

The naive approach is in principle limited to a narrow range of the parameters b, x and h. But confining oneself to this range of parameters, the naive approach may give easily usable approximate results for fractional T- and U-tetration. Moreover, if the entries of the involved matrices converge with increasing size (or are even constant, as in the triangular case), then an analysis of the structure of these matrices may give hints for the expected values of these entries in the limit case of ideally infinite size, and help to find valid analytical expressions for analytical tetration with matrices by careful inspection of the matrix-entries. Interesting "special values" could be computed, a new conjecture about series of powertowers could easily be introduced, and also some nice graphs of the general behaviour of some aspects of tetration can be produced.
The matrix for T-tetration is numerically very difficult to handle if eigen-analysis is required for fractional or complex iterates. In contrast, the matrices for U-tetration are triangular and more stable in these cases too.
To come to reliable statements about tetration, an analytical approach is still needed, be it only to derive algebraic identities and simplifications. Such an analytical approach would then provide a complete description of each entry of the involved matrices, so that unavoidable approximation-errors through truncation can be quantified and compensated, to get in principle arbitrary precision in the results.

3. An analytical matrix-approach

3.1. Intro

The chapter on the "naive" matrix-approach shows a quick, usable approach to approximate results for continuous T- and U-tetration, given some restricted range of parameters. However, as also mentioned, the error of approximation cannot be quantified in general, and we are not able to deal with matrices of more than some dozen rows/columns; thus our resulting powerseries-truncations are very harsh.
We need – in principle – no new ideas; the only main aspect here is to define the analytical description of the entries of matrix-logarithms and of the eigen-matrices, to derive series-descriptions which may or may not be expressible by closed-form expressions, but which at least can be handled by further algebraic manipulations. Finding short analytical descriptions for the matrix-entries (or at least finite formulae) may then allow to recognize a column of entries as a composition of coefficients of known powerseries, for which we have "closed form" formulae, and for whose vector-products we can extrapolate values from the closed forms of more elementary functions.
A binomial composition, for instance, can analytically be resolved using a matrix-multiplication with (a power of) the Pascal-/Binomial-matrix P (or its algebraic notation in a matrix-formula), or it may provide a useful shift of a powerseries f(x) into g(x-1) according to the binomial-theorem, where f(x) seems inaccessible, but g(x) has a nice closed form.
Matrix-logarithm/diagonalization: We'll find that the concept of matrix-logarithm and/or diagonalization/eigensystem-decomposition, which I introduced in the "naive matrix-method", is well founded and expressive enough for the full analytical description.
Non-uniqueness of inverse: However, the problem of the non-uniqueness of the matrix-reciprocal, if infinite size is assumed, is not resolved here. An example, where not only a triangular matrix but also another, square matrix can be seen as the inverse of a triangular matrix, was sketched in the article "Continuous iteration of powerseries-defined functions" [HE08 1], where I discussed the multivaluedness of the logarithm-function and the according square matrix-operators as various reciprocals of the unique triangular matrix-operator for the exponential-series, with one example.
Divergent summation: Also, extending the ranges of all parameters to the complex domain means introducing divergent series on a very general level and with apparently ubiquitous occurrence. Even the simple half-iterate produces matrices whose entries, real or complex, form divergent powerseries with convergence-radius zero. Well – techniques to deal with summation of divergent series exist, most prominently the extremely simple to understand Euler-summation, which can provide meaningful numerical values in many cases. However, the divergence of those powerseries surpasses even the rate of geometric series (for which Euler-summation is best suited) and follows some hypergeometric or squared-exponential rate (exp(k^2), k being the series-index), for which numerical and theoretical summation-methods are less widely described and available (if at all). So here is an important area of research: find a divergent-summation method for hypergeometric series where also complex coefficients are involved...
Derivative-notation: As I came up with the analysis which I present here in my own language, I found that several analytical approaches already exist in the literature, but some language-transfer might be useful. For instance, a matrix is presented in an article containing the notion of derivatives of a function f(x), writing f"(0)/2! for the third entry. If the powerseries for f is known, this means simply the third coefficient of the underlying powerseries. Example:

   f(x) = a0 + a1 x + a2 x^2 + a3 x^3 + ...
Clearly, the second derivative is

   f"(x) = 0 + 0 + a2*2! + a3*3!*x + ...

and at x=0

   f"(0) = 0 + 0 + a2*2! + 0 + 0 + ... = a2*2!     so that     f"(0)/2! = a2
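The translation between the two notations can be checked mechanically: differentiating the coefficient-list twice and evaluating at 0 returns 2!*a2. A small sketch in exact arithmetic (illustrative only; the sample coefficients are arbitrary):

```python
from fractions import Fraction

def poly_deriv(coeffs, m):
    """m-fold derivative of a powerseries/polynomial given by its
    coefficient list [a0, a1, a2, ...]; the constant term of the result
    is the value of the m-th derivative at x=0."""
    for _ in range(m):
        coeffs = [Fraction(n) * c for n, c in enumerate(coeffs)][1:]
    return coeffs

# f(x) = a0 + a1 x + a2 x^2 + a3 x^3 with arbitrary sample coefficients
a = [Fraction(5), Fraction(-3), Fraction(7, 2), Fraction(1, 3)]
f2 = poly_deriv(a, 2)   # coefficients of f"(x)
```

Here f2[0] = f"(0) = 2!*a2, so f"(0)/2! recovers the third coefficient a2, exactly as stated in the text.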

So, while the research-article uses the notation f"(0)/2!, in the current text we'll simply find the reference to the 3rd coefficient a2 of the powerseries, which is the according entry of the current matrix-operator for that function. The "derivative-notation" is surely more concise and appropriate if the behaviour of iteration of functions is discussed in more generality (for instance not only for functions defined by powerseries but also by Dirichlet-series), but for the article here the simpler reference is assumed to be sufficient (and is used), as far as functions given by powerseries are discussed, to avoid overhead of complexity.

------ (further text not yet ready) ------

4. citations/snippets

[AF97] Aldrovandi/Freitas:"Continuous iteration of dynamical maps" There are two main approaches to describe the evolution of a dynamical system. 1 The first has its roots in classical mechanics – the solutions of the dynamical differential equations provide the continuous motion of the representative point in phase space. 2 The second takes a quite different point of view: it models evolution by the successive iterations of a well-chosen map, so that the state is known after each step, as if the "time" parameter of the system were only de- fined at discrete values. 3 It is possible to go from the first kind of description to the second through the snapshots leading to a Poincaré map. Our aim here is to present the first steps into a converse procedure, going from a discrete to a continuous description while preserving the idea of iteration . This is possible if we are able to "interpolate" between the discrete values in such a way that the notion of iteration keeps its meaning in the intervals.

The clue to the question lies in the formalism of Bell polynomials, which attributes to every such function f a matrix B[f], whose inverse represents the inverse function and such that the matrix product represents the composition operation. In other words, these matrices provide a linear representation of the group formed by the functions with the operation of composition. Composition is thus represented by matrix product and, consequently, iterations are represented by matrix powers. Furthermore, the representation is faithful, and the function f is completely determined by B[f]. Now, in the matrix group there does exist a clear interpolation of discrete powers by real powers and the inverse way, going from matrices to functions, yields a map interpolation with the desired properties.

[AF97] Pg.8

The polynomials B_nk[g] are the entries of a (lower-)triangular matrix B[g], with n as row index and k as the column index. From (2.5), the function coefficients constitute the first column, so that actually B[g] is an overcomplete representative of g. From (2.6), the eigenvalues of B[g] are (g_1)^j.

Summing up, infinite Bell matrices constitute a linear representation of the group of invertible formal series. If we consider only the first N rows and columns, what we have is an approxima- tion, but it is important to notice that the group properties hold at each order N.

The general aspect of a Bell matrix can be illustrated by the case N = 5:

Because B[g°n] = B[g]^n, Bell matrices convert function iteration into matrix power and provide a linearization of the process of iteration.

[AF97] Pg. 15 (the x*-expression denotes, in the text, that the coefficients of the inverse are sought)

It seems a difficult task to improve the above expressions, as it would mean knowing a closed analytical expression for the recurrent summation of the form Σ_{j=0..k-1} u^j σ_j[x*]. A closed expression for σ_j[x*] would be necessary and, even for the simple alphabet consisting of powers of a fixed letter a, which we shall find in the application to Bell matrices, this would be equivalent to solving an as yet unsolved problem in Combinatorics. (...)
(...) but have no known closed expression. They are calculated, one by one, just in this way.
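In the operator-convention of the present text (V(x)~ * G[g] = V(g(x))~, with the columns of G[g] holding the coefficients of the powers of g), the group property quoted above – that it holds at each truncation order N – can be illustrated as follows. The example pair g(x) = x/(1-x) and g°-1(x) = x/(1+x) is an ad-hoc choice; the code is a sketch, not the author's routines:

```python
def operator_from_series(a, N):
    """G[n][k] = coefficient of x^n in g(x)^k, for g(x) = a[1]x + a[2]x^2 + ...
    (a[0] must be 0). Then V(x)~ * G = V(g(x))~ ."""
    G = [[0.0] * N for _ in range(N)]
    col = [1.0] + [0.0] * (N - 1)        # coefficients of g^0 = 1
    for k in range(N):
        for n in range(N):
            G[n][k] = col[n]
        # next column: multiply the coefficient-list by g(x)
        col = [sum(a[j] * col[n - j] for j in range(1, n + 1)) for n in range(N)]
    return G

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][m] * B[m][j] for m in range(N)) for j in range(N)]
            for i in range(N)]

N = 8
g  = [0.0] + [1.0] * (N - 1)                          # x/(1-x) = x + x^2 + x^3 + ...
gi = [0.0] + [(-1.0) ** (j - 1) for j in range(1, N)] # x/(1+x) = x - x^2 + x^3 - ...
I = matmul(operator_from_series(g, N), operator_from_series(gi, N))
```

Since g has no constant term, both operators are lower triangular, so the N x N truncation of the product equals the product of the truncations – the inverse pair multiplies to the exact identity at each order N, just as the quotation states.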

5. Appendix

5.1. Relation between Bell-matrix B[U] and Ut-matrix

B[g] for a general function g, expressed by its coefficients [0, g1/1!, g2/2!, ...]

   G = OperatorFromSeries(setvector([0, g1/1!, g2/2!, g3/3!, g4/4!, g5/5!]))
   BellG = dF * G * dF^-1

The G-matrix in terms of g1, g2, ... . You get V(x)~ * G = V(g(x))~ :

   1   .         .                     .                          .           .
   0   g1        .                     .                          .           .
   0   1/2 g2    g1^2                  .                          .           .
   0   1/6 g3    g2 g1                 g1^3                       .           .
   0   1/24 g4   1/3 g3 g1 + 1/4 g2^2  3/2 g2 g1^2                g1^4        .
   0   1/120 g5  1/12 g4 g1 + 1/6 g3 g2  1/2 g3 g1^2 + 3/4 g2^2 g1  2 g2 g1^3   g1^5

The Bell-matrix BellG = B[g] (according to Aldrovandi/Freitas; top-row and left-column added for consistency). You get VE(x)~ * B[g] = VE(g(x))~ :

   1   .    .                    .                        .            .
   0   g1   .                    .                        .            .
   0   g2   g1^2                 .                        .            .
   0   g3   3 g2 g1              g1^3                     .            .
   0   g4   4 g3 g1 + 3 g2^2     6 g2 g1^2                g1^4         .
   0   g5   5 g4 g1 + 10 g3 g2   10 g3 g1^2 + 15 g2^2 g1  10 g2 g1^3   g1^5

The G-matrix in terms of a1, a2, ..., where a1 = g1/1!, a2 = g2/2!, ... is assumed (my usual notation)

   G = OperatorFromSeries(setvector([0, a1, a2, a3, a4, a5]))

   1   .    .                     .                       .           .
   0   a1   .                     .                       .           .
   0   a2   a1^2                  .                       .           .
   0   a3   2 a2 a1               a1^3                    .           .
   0   a4   2 a3 a1 + a2^2        3 a2 a1^2               a1^4        .
   0   a5   2 a4 a1 + 2 a3 a2     3 a3 a1^2 + 3 a2^2 a1   4 a2 a1^3   a1^5
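The entries tabulated above can be reproduced mechanically: column k of G contains the coefficients of g(x)^k. A sketch in exact rational arithmetic (illustrative; the sample values for a1..a5 are arbitrary):

```python
from fractions import Fraction

def series_pow_coeff(a, k, n):
    """Coefficient of x^n in g(x)^k, where g(x) = a[1]x + a[2]x^2 + ...
    (a[0] must be 0); computed by k-fold multiplication with g."""
    col = [Fraction(1)] + [Fraction(0)] * (len(a) - 1)   # g^0 = 1
    for _ in range(k):
        col = [sum(a[j] * col[m - j] for j in range(1, m + 1))
               for m in range(len(a))]
    return col[n]

# arbitrary sample coefficients a1..a5
a = [Fraction(0), Fraction(2), Fraction(-1), Fraction(1, 3), Fraction(5), Fraction(-2)]
a1, a2, a3, a4, a5 = a[1], a[2], a[3], a[4], a[5]
```

Each closed-form entry of the table, e.g. 2 a3 a1 + a2^2 at position [4,2], agrees with the corresponding series-coefficient.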

B[g] for g(x)=exp(x)-1 ( = U(x) in my notation)

For g1 = g2 = g3 = ... = 1, meaning g(x) = exp(x)-1, this gives a Bell-matrix identical to Stir2 in my notation, which is just the factorial similarity-scaling of S2, which I use for the same function. While the Bell-matrix acts on formal exponential-series (VE(x)), the S2 (= U) matrix acts on formal powerseries (V(x)) in a completely analogous way.
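This identification can be checked entry-wise: with a_k = 1/k! the G-entry [n,k] must equal S2(n,k)*k!/n!, and the factorial similarity-scaling by n!/k! recovers the plain Stirling numbers 2nd kind as Bell-matrix entries. A sketch in exact arithmetic (illustrative; helper names are ad hoc):

```python
from fractions import Fraction
import math

def stirling2(n, k):
    """Stirling numbers 2nd kind via the standard recurrence."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def G_entry(n, k, a):
    """[x^n] g(x)^k for g(x) = a[1]x + a[2]x^2 + ... (a[0] = 0)."""
    col = [Fraction(1)] + [Fraction(0)] * (len(a) - 1)
    for _ in range(k):
        col = [sum(a[j] * col[m - j] for j in range(1, m + 1))
               for m in range(len(a))]
    return col[n]

N = 7
# coefficients of exp(x)-1: a_k = 1/k!
a = [Fraction(0)] + [Fraction(1, math.factorial(j)) for j in range(1, N)]
```

Every entry satisfies G[n][k] = S2(n,k)*k!/n!, and hence n!/k! * G[n][k] = S2(n,k), which is exactly the stated relation BellG = dF * G * dF^-1 for this g.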

6. References and online-resources

[AF97]    R. Aldrovandi and L.P. Freitas; Continuous iteration of dynamical maps; 1997;
          online at arXiv: physics/9712026, 16 Dec 1997
[AR07 09] Andrew Robbins; Tetration in Context; handout (not published), 09/2007, for the "Tetration-Forum";
          http://tetration.itgo.com/pdf/Robbins_handout.pdf
[BR36]    D.F. Barrow; Infinite Exponentials; The American Mathematical Monthly, Vol. 43, No. 3 (Mar. 1936), 150-160 (JSTOR)
[BA58]    I.N. Baker; Zusammensetzung ganzer Funktionen; Math. Zeitschr. Bd. 69, pp. 121-163 (1958)
          (also online at digicenter Göttingen)
[EJ61]    Paul Erdös, Eri Jabotinsky; On analytic iteration; J. Anal. Math. 8, 361-376 (1961)
          (also online at digicenter Göttingen)
[HE08 1]  Gottfried Helms; Continuous iteration of powerseries-defined functions; (draft) 2008,
          for discussion in the "Tetration-Forum";
          http://go.helms-net.de/math/tetdocs/ContinuousfunctionalIteration.pdf
[TF08 1]  Andrew Robbins; Notations and Opinions; in "Tetration-Forum" (2008);
          http://math.eretrandre.org/tetrationforum/showthread.php?tid=114 (ff)
[WA91]    Peter L. Walker; Infinitely Differentiable Generalized Logarithmic and Exponential Functions;
          Mathematics of Computation, Vol. 57, No. 196 (Oct. 1991), 723-733 (JSTOR)
[Wiki:IF] (unknown authors); Wikipedia: "Iterated function"; Jan 2008;
          http://en.wikipedia.org/wiki/Iterated_function
[Wiki:TE] Andrew Robbins and unknown authors; Wikipedia: "Tetration"; 2008;
          http://en.wikipedia.org/wiki/Tetration
