<<

An Exponential Formula for Vector Fields ∗

Rudolf Winkel
Institut für Reine und Angewandte Mathematik
RWTH Aachen, D-52056 Aachen, Germany
e-mail: [email protected]
May 1995

Abstract: The well known exponential formula for linear first order systems of ODEs is generalized to polynomial vector fields. The substitution process of our Exponential Formula is closely related to the usual iteration of polynomial mappings.

The n-dimensional linear autonomous system of differential equations $\dot x = Mx$, $x \in IR^n$, $M \in IR^{n\times n}$, has the flow $\varphi(t) = e^{Mt} := \sum_{i=0}^{\infty} \frac{M^i}{i!} t^i$ defined for all $t \in IR$, where $M^i$ is the i-th power of the coefficient matrix. Application to an initial value $x_0 \in IR^n$ gives the global solution $\varphi(t, x_0) = e^{Mt} x_0$ of the IVP $\dot x = Mx$, $x(0) = x_0$. The phase portrait of such a linear system is completely understood when one knows it in some neighborhood of the origin – in fact this is true for all homogeneous systems. Furthermore, in the linear case the global flow is already determined by the infinitesimal, i.e. algebraic, structure of the system, and explicit solutions can be calculated using the Jordan normal form of M. All this is very well known (cf. [HS]). In this paper we describe a simple, straightforward generalization of the matrix exponentiation, which gives the local flow for polynomial vector fields $\dot x = p(x)$, $x \in IR^n$, $p \in (IR[x])^n$, p of degree $m \ge 2$, in the form $\exp(t\mu) = \sum_{i=0}^{\infty} (t^i/i!)\,\mu^i$. In this formula µ is a multilinear mapping corresponding to the homogenization of p, and the $\mu^i$ are sums of suitably composed multilinear mappings, which generalize the matrix powers $M^i$. The structure of this composition can be represented in a natural way by graphs, namely by forests of certain rooted trees (see [W1]). Application of $\exp(t\mu)$ to an initial value $x_0 \in IR^n$ then gives the analytic solution of the (homogenized form of the) "polynomial IVP" $\dot x = p(x)$, $x(0) = x_0$. We prove these facts in Section 1 using a fundamental recursion formula for the $\mu^i$; moreover this section includes a proof of the 1-parameter-group property of $\exp(t\mu)$ and remarks on the "history" of

∗proposed running head: “An Exponential Formula”

the Exponential Formula and on homogeneous polynomial vector fields. Two further recursion formulas for the coefficients of $\exp(t\mu)x_0$, the Simplex and the Convolution Formula, are discussed in Section 2; the latter seems to be the most simple and effective instrument for the practical computation of these coefficients. In general it is easy to see that there exist recursion formulas for the solutions $\varphi(t, x_0) \equiv \sum_{i=0}^{\infty} (c_i/i!)\, t^i$ of polynomial IVPs, which are of the form $c_{n+1} = \Phi_n(c_n, \ldots, c_0)$ with a certain sequence of mappings $\Phi \equiv (\Phi_0, \Phi_1, \Phi_2, \ldots)$ depending on the polynomial vector field; just substitute the above power series $\varphi(t, x_0)$ for x in $\dot x = p(x)$, differentiate n times and evaluate at t = 0. But of course one would like to have general formulas such as in matrix exponentiation, which are applicable to polynomial vector fields of all degrees and (finite) dimensions, and which give a clear structural understanding of what 'really' happens in building up a sequence $\Phi$. At present we know several such conceptual formulas:

1. The simplest version is to use in dimension n = 1 Faà di Bruno's Formula for the higher derivatives of a composition of functions (cf. [Co]). The result is rather unenlightening due to the appearance of Bell polynomials; in higher dimensions it is possible in principle to find a generalization of Faà di Bruno's Formula, but the situation is of course even more complicated.

2. Lie series (cf. [G]) give a very clean approach for analytic (finite dimensional) vector fields, but they lack concreteness in the polynomial case due to the use of powers of the linear partial differential operator $p_1(x)\partial_1 + \ldots + p_n(x)\partial_n$, where $p_1, \ldots, p_n$ are the components of p(x).

3. The straightforward generalization of matrix exponentiation introduced in this paper realizes the sequence $\Phi$ as a highly symmetric and simply structured substitutional process. It is possible to show the equivalence of the Exponential Formula and the Lie series in the case of polynomial vector fields with the help of certain rooted trees; this will be one of the main results contained in [W1]. The Simplex Formula in Section 2 can be considered as a variant of the Exponential Formula, which describes the exact contribution of different additive parts of the polynomial vector field to the solution coefficients.

4. The Convolution Formula in Section 2 describes the recursion $c_{n+1} = \Phi_n(c_n, \ldots, c_0)$ as a simply structured convolution with 'structure constants' given by the polynomial vector field. In fact, as we will show in Section 2.3, Faà di Bruno's Formula in dimension n = 1 can be seen as a 'desymmetrisation' of the Convolution Formula.

What are the possible uses of such formulas? In contrast to the complete information in the linear case one encounters great difficulties in understanding the dynamics of even the most simple polynomial vector fields: In dimensions $n \ge 3$ one has chaotic systems like the extensively studied Lorenz system (m = 2, n = 3) (cf. [GH]), which is an example of a polynomial approximation of a system of partial differential equations. See [Sa] for other examples arising

from hydrodynamics, plasma physics, chemical reactors, and for the genuinely quadratic Lotka-Volterra system, which models an ecosystem of n species. In two dimensions, where no chaos is possible, Hilbert's 16th problem (dynamical part) asks for the maximum possible number of limit cycles, i.e. isolated periodic orbits, for given polynomial degree. The answer to this problem is largely open since its proposal in 1900:

1. The finiteness of the number of limit cycles for every single plane polynomial vector field has been proved rather recently by J. Ecalle and Y.S. Il'yashenko independently (see [I1, I2, E] and the papers in [Sch]), but it is not known whether there is a uniform bound for given polynomial degree. The methods used in these proofs are very general and valid even for analytic vector fields. Therefore they largely disregard the concrete polynomial information.

2. Although more than 2000 papers exist on quadratic plane vector fields, there is not yet a complete classification. So far at most four limit cycles for such vector fields have been observed, but the possibility of arbitrarily large numbers of limit cycles has not been excluded (see the surveys [C, CT, Y1, L], the books [Y2, Z], and again [Sch]). Most of the methods used are very special with lots of case divisions; bifurcation methods are frequently used too – but limit cycles are dynamically stable!

Clearly the situation demands new analytical tools of truly global nature, which avoid the two extremes of too much generality and too much specialization. Although the power series solutions of polynomial vector fields are in general only locally valid, it is our conviction that the investigation of the sequence of solution coefficients as a whole will be – by analyticity – the key ingredient in the understanding of the global dynamics of finite dimensional polynomial vector fields.

In Section 3 we show by a "coalgebraic construction" that the substitution process in the Exponential Formula is related to the group (IR, +) of real numbers under addition in the same way as the usual iteration of polynomial mappings is related to the group (IR*, ·) of nonzero real numbers under multiplication. In fact, from the viewpoint of coalgebras and multilinear algebra the constructions for the Exponential Formula and the iterates of polynomial mappings are the two most simple instances of a general procedure of "iteration", which can be carried out for any affine algebraic group or coassociative comultiplication. The presentation in this paper is as simple and direct as possible. For example we use only real coefficients, although everything is valid for arbitrary fields of characteristic zero and a major portion even over arbitrary commutative rings with unit. Finally I want to thank Prof. Dr. Volker Enss for his encouragement and his careful reading of preliminary versions ([W]) of this paper.

1 The Exponential Formula

1.1 The Definition

The matrix exponentiation $\varphi(t) = e^{Mt} := \sum_{i=0}^{\infty} \frac{M^i}{i!} t^i$ for finite dimensional (autonomous) vector fields

$\dot x = Mx, \quad x = (x_1, \ldots, x_n)^T, \quad M \in IR^{n\times n}$ (1)

will be generalized in this section to finite dimensional polynomial vector fields

$\dot x = p(x), \quad x = (x_1, \ldots, x_n)^T, \quad p \in (IR[x])^n, \quad \deg p = m.$ (2)

We write (2) componentwise as:

$\dot x_i = p_i(x_1, \ldots, x_n), \quad p_i \in IR[x], \quad 1 \le i \le n, \quad \max_i \deg p_i = m \in IN.$ (3)

Applying the technique of homogenization or projectivisation from algebraic geometry, the n-dimensional system becomes an (n+1)-dimensional system by introduction of the homogenizing variable z:

$\dot x_i = z^m p_i\Big(\frac{x_1}{z}, \ldots, \frac{x_n}{z}\Big) \quad (i = 1, \ldots, n), \qquad \dot z = 0.$

The original system is easily restored by the substitution z = 1. Subsequently we therefore investigate only n-dimensional polynomial vector fields homogeneous of degree m:

$\dot x_i = D_i(x), \quad i = 1, \ldots, n, \quad D_i \in IR[x]$ homogeneous of degree m. (4)

We choose once and for all the canonical orthonormal basis $E = \{e_1, \ldots, e_n\}$ of $IR^n$ with $e_i = (\delta_{ij})_{j=1,\ldots,n}$. The matrix $M = ((m_{ij}))_{i,j=1,\ldots,n}$ of the linear system (1) is then simply the representation of an IR-linear endomorphism $\mu : IR^n \to IR^n$ with respect to the chosen basis: $\mu(e_k) := \sum_{i=1}^{n} m_{ik} e_i$ on E, i.e. $\mu(e_k)$ is the k-th column vector of M resp. the coefficient vector of $x_k$ in (1). $M^i$ represents the i-fold composition of µ.

We now generalize the construction to the polynomial case $m \ge 2$. Let $\bigoplus^m IR^n$ be the m-fold direct sum of $IR^n$ and $E^m = \{(e_{k_1}, \ldots, e_{k_m}) \mid k_1, \ldots, k_m \in \{1, \ldots, n\}\}$ its basis. Furthermore let

$D_i(x_1, \ldots, x_n) := \sum_{k_1,\ldots,k_m=1}^{n} a^i_{k_1 \ldots k_m} x_{k_1} \cdot \ldots \cdot x_{k_m} \quad (i = 1, \ldots, n),$ (5)

then we define µ to be the m-fold multilinear mapping over IR:

$\mu : \bigoplus^m IR^n \to IR^n, \qquad \mu(e_{k_1}, \ldots, e_{k_m}) := \sum_{i=1}^{n} a^i_{k_1 \ldots k_m} e_i,$ (6)

i.e. the image of a basis vector $(e_{k_1}, \ldots, e_{k_m}) \in E^m$ is the coefficient vector of $x_{k_1} \cdot \ldots \cdot x_{k_m}$ in (4).

By the universal property of the tensor product, µ can be represented equivalently as an IR-linear mapping

$\mu : T_m IR^n \equiv \bigotimes^m IR^n \to IR^n, \qquad \mu(e_{k_1} \otimes \ldots \otimes e_{k_m}) := \sum_{i=1}^{n} a^i_{k_1 \ldots k_m} e_i,$ (7)

which is the definition used subsequently.

µ is called commutative, iff

$\mu(e_{k_1} \otimes \ldots \otimes e_{k_m}) = \mu(e_{k_{\pi(1)}} \otimes \ldots \otimes e_{k_{\pi(m)}})$

for all permutations π of the numbers 1, . . . , m. In general $\mu \in L(T_m IR^n, IR^n)$ is noncommutative, but commutativity can be easily achieved over IR by suitably rewriting the $D_i$:

Example 1.1 (m = n = 2): µ for $\dot x = 2xy + y^2$, $\dot y = 3x^2 + y^2$ is defined by $\mu(e_1 \otimes e_1) = 3e_2$, $\mu(e_2 \otimes e_2) = e_1 + e_2$, $\mu(e_1 \otimes e_2) = 2e_1 \ne 0 = \mu(e_2 \otimes e_1)$. So µ is noncommutative. But writing the same differential equations as $\dot x = xy + yx + y^2$, $\dot y = 3x^2 + y^2$ we get a commutative µ: $\mu(e_1 \otimes e_2) = e_1 = \mu(e_2 \otimes e_1)$. 2

Commutativity of µ will be used in this paper only in Section 2.1.
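To make the rewriting step concrete, here is a minimal sketch (not from the original paper; the array layout and the use of NumPy are my own choices) that stores the noncommutative µ of Example 1.1 as an array A[i, k1, k2] = a^i_{k1 k2} and averages over the two input slots, which is exactly the passage from 2xy to xy + yx:

```python
import numpy as np

# Noncommutative mu for xdot = 2xy + y^2, ydot = 3x^2 + y^2 (Example 1.1),
# stored as A[i, k1, k2] = a^i_{k1 k2} with indices 0 <-> e1, 1 <-> e2.
A = np.zeros((2, 2, 2))
A[1, 0, 0] = 3.0          # mu(e1 ⊗ e1) = 3 e2
A[0, 1, 1] = 1.0          # mu(e2 ⊗ e2) = e1 + e2
A[1, 1, 1] = 1.0
A[0, 0, 1] = 2.0          # mu(e1 ⊗ e2) = 2 e1, while mu(e2 ⊗ e1) = 0

# Symmetrizing over the two input slots gives the commutative form,
# corresponding to rewriting the system as xdot = xy + yx + y^2:
A_sym = 0.5 * (A + A.transpose(0, 2, 1))
print(A_sym[0, 0, 1], A_sym[0, 1, 0])   # 1.0 1.0, i.e. mu(e1 ⊗ e2) = e1 = mu(e2 ⊗ e1)
```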

So far we have established a bijective correspondence between the n-dimensional m-homogeneous vector fields in noncommuting variables $x_1, \ldots, x_n$ and the space of IR-linear mappings $L(T_m IR^n, IR^n)$ with respect to the chosen basis. It remains to generalize the i-fold composition of µ from the linear case. For $m \ge 2$, µ is a mapping of degree $-(m-1)$ with respect to the tensor product and therefore cannot be composed directly with itself. First we define for any given µ a family of IR-linear mappings $d_{\mu,p}$:

$d_{\mu,p} : T_p IR^n \to T_{p-m+1} IR^n$ (8)

$d_{\mu,p} := \begin{cases} 0, & \text{if } p < m \\ \sum_{\nu=1}^{p-m+1} \otimes^{\nu-1} id \otimes \mu \otimes \otimes^{p-m-\nu+1} id, & \text{if } p \ge m, \end{cases}$

i.e. for $p \ge m$, µ is shifted componentwise from the left to the right on $T_p IR^n$, leaving the other factors unchanged. From the definition it is immediately clear that $d_{\mu,p}$ is IR-linear on $T_p IR^n$, $d_{\mu,m} = \mu$, and $d_{\mu,p}$ is IR-linear in µ, i.e.

$d_{\lambda\mu,p} = \lambda\, d_{\mu,p}$ for all $\lambda \in IR$, and (9)

$d_{\mu+\mu',p} = d_{\mu,p} + d_{\mu',p}$ for all $\mu, \mu' \in L(T_m IR^n, IR^n)$.

Because p is used only with certain values depending on m, we introduce the abbreviation:

$[k] := k(m-1) + 1$ for all $k \in IN_0$ and fixed $m \in IN$. (10)

Moreover we define

$(\mu)^i := \begin{cases} id, & \text{if } i = 0 \\ d_{\mu,[1]} \circ \ldots \circ d_{\mu,[i]}, & \text{if } i > 0, \end{cases}$ (11)

$\delta_p : IR^n \to T_p IR^n, \qquad v \mapsto \otimes^p v \quad \text{if } p > 0$ (12)

– the latter is the p-fold diagonal mapping – and finally

$\mu^i := \begin{cases} id, & \text{if } i = 0 \\ (\mu)^i \circ \delta_{[i]}, & \text{if } i > 0. \end{cases}$ (13)

Directly from the definition one sees: $(\mu)^i$ is an IR-linear mapping from $T_{[i]} IR^n$ to $IR^n$, but $\delta_p$ is linear only in the case p = 1. Note that $(\mu)^1 = \mu : T_m IR^n \to IR^n$ and $\mu^1 : IR^n \to IR^n$ are different! $\mu^i$ for i > 0 can be represented as follows:

$\mu^i : IR^n \xrightarrow{\ \delta_{[i]}\ } T_{[i]} IR^n \xrightarrow{\ d_{\mu,[i]}\ } T_{[i-1]} IR^n \xrightarrow{\ d_{\mu,[i-1]}\ } \ldots \xrightarrow{\ d_{\mu,[2]}\ } T_{[1]} IR^n = T_m IR^n \xrightarrow{\ d_{\mu,[1]} = \mu\ } IR^n.$

Definition of the Exponential Formula: For $m, n \in IN$ and µ corresponding to the vector field (4), we define, purely formally, with the notations (8) and (10)-(13):

$\exp(\mu) := \sum_{i=0}^{\infty} \frac{\mu^i}{i!}$

Remark 1.1: Since $\{t\mu\}^i = (t\mu)^i \circ \delta_{[i]} \overset{(9)}{=} t^i (\mu)^i \circ \delta_{[i]} = t^i \mu^i$ for all $t \in IR$, we get:

$\exp(t\mu) := \sum_{i=0}^{\infty} \frac{t^i}{i!} \mu^i.$ (14)

For all initial values $x_0 \in IR^n$ all n power series in the components of $\exp(t\mu)(x_0)$ have a nonvanishing radius of convergence (see Section 1.5).

Remark 1.2: For m = 1 one has as a special case of (14) (identifying µ and M): $\exp(t\mu) = \sum_{i=0}^{\infty} \frac{M^i}{i!} t^i = e^{Mt}$.

Remark 1.3: In Remark 1.1, $\exp(t\mu)$ for $t \in IR$ has been computed easily with the help of (9); what is the result for $\exp(\mu_1 + \mu_2)$, if $\mu_1, \mu_2 \in L(T_m IR^n, IR^n)$?

$\exp(\mu_1 + \mu_2) = \sum_{i=0}^{\infty} \frac{(\mu_1 + \mu_2)^i}{i!} \circ \delta_{[i]} \overset{(8),(11)}{=} \sum_{i=0}^{\infty} \frac{1}{i!} \Big( \sum_{j_1,\ldots,j_i=1}^{2} d_{\mu_{j_1},[1]} \circ \ldots \circ d_{\mu_{j_i},[i]} \Big) \circ \delta_{[i]}.$

The formula shows a strong mixing of $\mu_1$ and $\mu_2$, but in Section 2.1 we will nevertheless see that it is possible to separate different additive parts of µ in a transparent way.

Remark 1.4: Application of the Exponential Formula to the system (3) homogenized with the variable z shows for $x' = (x_1, \ldots, x_n, c)^T \in IR^{n+1}$ that $\mu^i(x') = (*, \ldots, *, 0)^T$ for i > 0 and consequently $\exp(t\mu)(x') = x' + \sum_{i=1}^{\infty} \frac{\mu^i(x')}{i!} t^i = (*, \ldots, *, c)^T$. Hence the

different n-dimensional hyperplanes $\{z \equiv c\}$ are invariant under the flow $\exp(t\mu)$, and especially we have on $\{z \equiv 1\}$ the unaltered original system. 2

We close this subsection with a simple example:

Example 1.2: Given the IVP $\dot x = -x^2$, $x(0) = a \in IR$. With $e \equiv e_1 = (1)$ one has $\mu(e \otimes e) = -e$, $\mu(a \otimes a) = a^2 \mu(e \otimes e) = -a^2 = (-a) \cdot a$, and for i > 0: $(\mu)^i(\otimes^{i+1} a) = (\mu)^{i-1} \circ d_{\mu,[i]}(\otimes^{i+1} a) \overset{(8)}{=} (\mu)^{i-1} \circ (-a) \sum_{\nu=1}^{i} \otimes^{i} a = (-a) \cdot i \cdot (\mu)^{i-1}(\otimes^{i} a) = \ldots = (-a)^i \cdot i! \cdot a$. Hence: $\exp(t\mu)(a) = a + \sum_{i=1}^{\infty} \frac{t^i}{i!} (\mu)^i(\otimes^{i+1} a) = a \cdot \sum_{i=0}^{\infty} (-at)^i = \frac{a}{1+at}$ for $|at| < 1$, which is the well known solution. 2
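The Exponential Formula lends itself to a direct, if naive, implementation. The following sketch is my own addition (the helper names d_mu and mu_power are hypothetical, and NumPy is assumed): it realizes $d_{\mu,p}$ from (8) as a tensor contraction, builds $\mu^i(x_0)$ according to (10)-(13), and reproduces Example 1.2 numerically.

```python
import numpy as np
from math import factorial

def d_mu(A, T):
    """Apply d_{mu,p} from (8) to a tensor T of order p (shape (n,)*p).
    A has shape (n,)*(m+1) with A[i, k1, ..., km] = a^i_{k1...km}."""
    m = A.ndim - 1
    p = T.ndim
    if p < m:                        # d_{mu,p} = 0 for p < m
        return np.zeros(1)
    out = 0.0
    for nu in range(p - m + 1):
        # contract mu with the factors nu, ..., nu+m-1 of T ...
        contracted = np.tensordot(A, T,
                                  axes=(list(range(1, m + 1)), list(range(nu, nu + m))))
        # ... and put the output of mu back at position nu
        out = out + np.moveaxis(contracted, 0, nu)
    return out

def mu_power(A, x0, i):
    """mu^i(x0) = d_{mu,[1]} o ... o d_{mu,[i]} (⊗^{[i]} x0), cf. (10)-(13)."""
    m = A.ndim - 1
    if i == 0:
        return x0
    T = x0
    for _ in range(i * (m - 1)):     # delta_{[i]}: the [i]-fold tensor power of x0
        T = np.multiply.outer(T, x0)
    for _ in range(i):               # apply d_{mu,[i]}, then d_{mu,[i-1]}, ..., d_{mu,[1]}
        T = d_mu(A, T)
    return T

# Example 1.2: xdot = -x^2, x(0) = a, i.e. n = 1, m = 2, mu(e ⊗ e) = -e.
a, t = 0.5, 0.3
A = np.array([[[-1.0]]])
x0 = np.array([a])
series = sum(mu_power(A, x0, i) * t**i / factorial(i) for i in range(15))
print(series[0], a / (1 + a * t))    # both values should agree closely
```

With the data above the two printed numbers agree up to the truncation error, since |at| = 0.15 < 1.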

1.2 A recursion formula and the solution property of the Exponential Formula

First of all we formulate and prove a recursion formula of fundamental importance. For $l, k \in IN_0$, $l+1 \ge k \ge 1$, we define the IR-linear mapping

$(\mu)^l_k : T_{[l]} IR^n \to T_{[k-1]} IR^n$ (15)

$(\mu)^l_k := \begin{cases} id_{T_{[l]} IR^n}, & \text{if } k = l+1 \\ d_{\mu,[k]} \circ \ldots \circ d_{\mu,[l]}, & \text{if } k \le l. \end{cases}$

Special cases are $(\mu)^l_l = d_{\mu,[l]}$, $(\mu)^1_1 = (\mu)^1 = d_{\mu,[1]} = d_{\mu,m} = \mu$ and $(\mu)^l_1 = (\mu)^l = (\mu)^k \circ (\mu)^l_{k+1}$ for $l \ge k \ge 0$.

Theorem (Recursion formula): For $m \in IN$, $i \ge k \ge 0$ and $j_1, \ldots, j_{[k]} \in IN_0$:

$(\mu)^i_{k+1} = \sum_{j_1+\ldots+j_{[k]}=i-k} \frac{(i-k)!}{j_1! \cdot \ldots \cdot j_{[k]}!} \, (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_{[k]}}.$ (16)

Especially for k = 1:

$(\mu)^i_2 = \sum_{j_1+\ldots+j_m=i-1} \frac{(i-1)!}{j_1! \cdot \ldots \cdot j_m!} \, (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_m}.$ (17)

Proof: Case k = 0: [0] = 1 for all $m \in IN$ and both sides of (16) are equal to $(\mu)^i$. Case k = i: both sides are equal to the identity on $T_{[i]} IR^n$. In the linear case m = 1, i.e. $[k] \equiv 1$, the claim reduces to $(\mu)^i_{k+1} = (\mu)^{i-k} = \frac{(i-k)!}{(i-k)!} (\mu)^{i-k}$. So it remains to show the recursion formula in the case $m \ge 2$ for $i \ge k+1 > 1$ by induction on i:

For i = 2 and k = 1 one computes $(\mu)^2_2 = d_{\mu,[2]}$ for the l.h.s. of (16), and for the r.h.s.:

$\sum_{j_1+\ldots+j_m=1} \frac{1}{j_1! \cdot \ldots \cdot j_m!} (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_m} = (\mu)^1 \otimes (\mu)^0 \otimes \ldots \otimes (\mu)^0 + (\mu)^0 \otimes (\mu)^1 \otimes \ldots \otimes (\mu)^0 + \ldots + (\mu)^0 \otimes \ldots \otimes (\mu)^0 \otimes (\mu)^1$

$\overset{(11)}{=} \mu \otimes \otimes^{m-1} id + id \otimes \mu \otimes \ldots \otimes id + \ldots + \otimes^{m-1} id \otimes \mu \overset{(8)}{=} d_{\mu,[2]}.$

Suppose now validity of (16) for some i and all k with 0 < k < i; then (16) is valid for i+1 and all k with $0 < k \le i$ too:

Case k = i: $(\mu)^{i+1}_{i+1} = d_{\mu,[i+1]}$ and $d_{\mu,[i+1]}$ equals

$\sum_{j_1+\ldots+j_{[i]}=1} \frac{1}{j_1! \cdot \ldots \cdot j_{[i]}!} (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_{[i]}} = (\mu)^1 \otimes \otimes^{[i]-1} id + \ldots + \otimes^{[i]-1} id \otimes (\mu)^1.$

Case 0 < k < i:

$(\mu)^{i+1}_{k+1} \overset{(15)}{=} (\mu)^i_{k+1} \circ d_{\mu,[i+1]} = \Big( \sum_{j_1+\ldots+j_{[k]}=i-k} \frac{(i-k)!}{j_1! \cdot \ldots \cdot j_{[k]}!} (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_{[k]}} \Big) \circ d_{\mu,[i+1]}.$

Fix now a [k]-tuple $(j_1, \ldots, j_{[k]})$ with the sum $j_1 + \ldots + j_{[k]} = i - k$; then $[j_1], \ldots, [j_{[k]}]$ is an ordered partition of [i]:

$\sum_{\nu=1}^{[k]} [j_\nu] = \sum_{\nu=1}^{k(m-1)+1} \{j_\nu(m-1) + 1\} = \Big\{\sum_{\nu=1}^{k(m-1)+1} j_\nu\Big\}(m-1) + k(m-1) + 1 = \{i-k\}(m-1) + k(m-1) + 1 = i(m-1) + 1 = [i].$

Therefore $d_{\mu,[i+1]} : T_{[i+1]} IR^n \to T_{[i]} IR^n$ can be written:

$d_{\mu,[i+1]} \overset{(8)}{=} \sum_{\lambda=1}^{[i]} \otimes^{\lambda-1} id \otimes \mu \otimes \otimes^{[i]-\lambda} id = \sum_{\nu=1}^{[k]} \otimes^{[j_1]+\ldots+[j_{\nu-1}]} id \otimes d_{\mu,[j_\nu+1]} \otimes \otimes^{[j_{\nu+1}]+\ldots+[j_{[k]}]} id.$

Hence:

$\big((\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_{[k]}}\big) \circ d_{\mu,[i+1]} =$ (18)

$\sum_{\nu=1}^{[k]} (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_{\nu-1}} \otimes \{(\mu)^{j_\nu} \circ d_{\mu,[j_\nu+1]}\} \otimes (\mu)^{j_{\nu+1}} \otimes \ldots \otimes (\mu)^{j_{[k]}}.$

Finally, using the substitution $j'_\nu = j_\nu + 1$ (note $j'_\nu > 0$):

$(\mu)^{i+1}_{k+1} \overset{(18)}{=} \sum_{j_1+\ldots+j_{[k]}=i-k} \frac{(i-k)!}{j_1! \cdot \ldots \cdot j_{[k]}!} \sum_{\nu=1}^{[k]} (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_\nu+1} \otimes \ldots \otimes (\mu)^{j_{[k]}}$

$= \sum_{\nu=1}^{[k]} \sum_{j_1+\ldots+j'_\nu+\ldots+j_{[k]}=i-k+1} \frac{(i-k)!}{j_1! \cdot \ldots \cdot (j'_\nu - 1)! \cdot \ldots \cdot j_{[k]}!} \times$

$\times\, (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j'_\nu} \otimes \ldots \otimes (\mu)^{j_{[k]}}$

$= \sum_{j_1+\ldots+j_{[k]}=i+1-k} \frac{(i+1-k)!}{j_1! \cdot \ldots \cdot j_{[k]}!} (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_{[k]}},$

where the last step – omitting the primes at $j'_\nu$ – follows from the well known formula for multinomial coefficients, which generalizes the recursion relation for Pascal's triangle. 2

Next we compute for vectors $v_1, \ldots, v_m \in IR^n$ with $v_\nu = (v_\nu^k)_{k=1,\ldots,n}$ ($\nu = 1, \ldots, m$):

$\mu(v_1 \otimes \ldots \otimes v_m) = \sum_{k_1,\ldots,k_m=1}^{n} v_1^{k_1} \cdot \ldots \cdot v_m^{k_m} \, \mu(e_{k_1} \otimes \ldots \otimes e_{k_m}) \overset{(7)}{=} \Big( \sum_{k_1,\ldots,k_m=1}^{n} a^i_{k_1 \ldots k_m} v_1^{k_1} \cdot \ldots \cdot v_m^{k_m} \Big)_{i=1,\ldots,n}.$ (19)

For $v_1 = \ldots = v_m = v \equiv (v^k)_{k=1,\ldots,n}$ this specializes to

$\mu^1(v) \overset{(13)}{=} (\mu \circ \delta_m)(v) = \mu(\otimes^m v) = \Big( \sum_{k_1,\ldots,k_m=1}^{n} a^i_{k_1 \ldots k_m} v^{k_1} \cdot \ldots \cdot v^{k_m} \Big)_{i=1,\ldots,n} \overset{(5)}{=} D(v),$

where in the last step clearly $D(v) \equiv (D_i(v^1, \ldots, v^n))_{i=1,\ldots,n}$. As an important consequence we have:

$\mu^1 \equiv D$ as mappings of $IR^n$ into itself. (20)

Moreover, for all mappings A of $IR^n$ into itself and all $m \in IN$:

$\mu^1 \circ A = \mu \circ \delta_m \circ A = \mu \circ (\otimes^m A) \circ \delta_m.$ (21)

Finally we need the identity

$\delta_{p_1+\ldots+p_m} = (\delta_{p_1} \otimes \ldots \otimes \delta_{p_m}) \circ \delta_m$ for $p_1, \ldots, p_m, m \in IN$, (22)

which we apply with $p_\nu = [j_\nu]$, $\sum j_\nu = i$ and $\sum [j_\nu] = [i+1]$.

Theorem (Solution property of the Exponential Formula): For $m, n \in IN$ let µ correspond to the n-dimensional polynomial vector field (4) according to (7). Then $X(t) = \exp(t\mu)$ solves the differential equation $\frac{d}{dt} X = D \circ X$, and the IVP $\dot x = D(x)$, $x(0) = x_0 \in IR^n$ is solved by $x(t) = \exp(t\mu)(x_0) = \sum_{i=0}^{\infty} \frac{\mu^i(x_0)}{i!} t^i$. The latter is a vector of convergent power series.

Proof: Formal differentiation of the power series gives:

$\frac{d}{dt} \exp(t\mu) \overset{(14)}{=} \frac{d}{dt} \Big( \sum_{i=0}^{\infty} \frac{t^i}{i!} \mu^i \Big) = \sum_{i=1}^{\infty} \frac{t^{i-1}}{(i-1)!} \mu^i = \sum_{i=0}^{\infty} \frac{t^i}{i!} \mu^{i+1}$

$\overset{(13)}{=} \sum_{i=0}^{\infty} \frac{t^i}{i!} (\mu)^{i+1} \circ \delta_{[i+1]} \overset{(15)}{=} \sum_{i=0}^{\infty} \frac{t^i}{i!} \, \mu \circ (\mu)^{i+1}_2 \circ \delta_{[i+1]}$

$\overset{(17)}{=} \sum_{i=0}^{\infty} \frac{t^i}{i!} \, \mu \circ \Big( \sum_{j_1+\ldots+j_m=i} \frac{i!}{j_1! \cdot \ldots \cdot j_m!} (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_m} \Big) \circ \delta_{[i+1]}$

$\overset{(22)}{=} \sum_{i=0}^{\infty} t^i \, \mu \circ \Big( \sum_{j_1+\ldots+j_m=i} \frac{1}{j_1! \cdot \ldots \cdot j_m!} (\mu)^{j_1} \otimes \ldots \otimes (\mu)^{j_m} \Big) \circ (\delta_{[j_1]} \otimes \ldots \otimes \delta_{[j_m]}) \circ \delta_m$

$\overset{(13)}{=} \mu \circ \sum_{i=0}^{\infty} t^i \Big( \sum_{j_1+\ldots+j_m=i} \frac{\mu^{j_1}}{j_1!} \otimes \ldots \otimes \frac{\mu^{j_m}}{j_m!} \Big) \circ \delta_m$

$= \mu \circ \Big( \otimes^m \sum_{i=0}^{\infty} \frac{t^i}{i!} \mu^i \Big) \circ \delta_m \overset{(21)}{=} \mu^1 \circ \exp(t\mu) \overset{(20)}{=} D \circ \exp(t\mu).$

The second part of the claim concerning x(t) follows at once and the convergence will be discussed below. 2

Remark 1.5: In the linear case m = 1 the Theorem specializes to the familiar form $\frac{d}{dt} \exp(tM) = M \exp(tM)$, where of course $\mu \in L(IR^n, IR^n)$ is represented by the matrix M.
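As a quick plausibility check of the solution property (my own addition, not part of the paper; it reuses the hypothetical mu_power helper from the sketch in Section 1.1 and evaluates D via (19) with an einsum), one can compare a numerical derivative of the truncated series with D applied to it:

```python
import numpy as np
from math import factorial

# Commutative mu of Example 1.1: xdot = xy + yx + y^2, ydot = 3x^2 + y^2.
A = np.zeros((2, 2, 2))
A[1, 0, 0] = 3.0
A[0, 0, 1] = A[0, 1, 0] = 1.0
A[0, 1, 1] = A[1, 1, 1] = 1.0

x0 = np.array([0.1, 0.2])
K = 20                                            # truncation order of the series

def x(t):
    """Truncated Exponential Formula exp(t mu)(x0), using mu_power from above."""
    return sum(mu_power(A, x0, i) * t**i / factorial(i) for i in range(K))

def D(v):
    """D(v) = mu(v ⊗ v), evaluated componentwise via (19)."""
    return np.einsum('ijk,j,k->i', A, v, v)

t, h = 0.05, 1e-6
print((x(t + h) - x(t - h)) / (2 * h))            # numerical derivative of the series
print(D(x(t)))                                    # should agree with the line above
```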

1.3 The flow property

The solution property proven in the last section is of a formal nature; but it is well known ([H, p.207 ff.]) that the local power series solution for a finite dimensional analytic vector field has a nonvanishing radius of convergence. The formal solution therefore represents a vector of convergent power series. On the other hand, in the nonlinear case the radii of convergence will not be bounded away from zero uniformly for all starting points, whence one can expect "only" a formal or local flow. More precisely: the formal flow as an analytical mapping $IR^n \times IR \supset U \to IR^n$ is defined on an open neighborhood $U \supset IR^n \times \{0\}$, but in general there does not exist $\varepsilon > 0$ such that $U \supset IR^n \times (-\varepsilon, \varepsilon)$. Already for $C^1$-vector fields the solutions of all IVPs generate a formal flow ([A, Thm.10.3]), so that one can conclude indirectly, by the solution property of $\exp(t\mu)(x_0)$, that $\exp(t\mu)$ is a formal flow and therefore a formal 1-parameter-group. We will now show this property of $\exp(t\mu)$ directly by means of the recursion formula (16).

Theorem: $\exp(t\mu)$ is a formal 1-parameter-group.

Proof: $\exp(t\mu)|_{t=0} = \frac{1}{0!} \mu^0 \overset{(13)}{=} id$. It remains to show:

$\exp((s+t)\mu) = \exp(s\mu) \circ \exp(t\mu).$ (23)

R.h.s.(23) $= \sum_{k=0}^{\infty} \frac{s^k}{k!} (\mu)^k \big(\otimes^{[k]} \exp(t\mu)\big)$, and l.h.s.(23) gives:

$\sum_{i=0}^{\infty} \frac{(s+t)^i}{i!} \mu^i = \sum_{i=0}^{\infty} \Big( \sum_{k=0}^{i} \frac{s^k}{k!} \frac{t^{i-k}}{(i-k)!} \Big) \mu^i = \sum_{i=0}^{\infty} \sum_{k=0}^{i} \frac{s^k}{k!} (\mu)^k \circ \Big\{ \frac{t^{i-k}}{(i-k)!} (\mu)^i_{k+1} \Big\} \circ \delta_{[i]}.$

Now for $0 \le k \le i$ one has by the recursion formula (16):

$\frac{t^{i-k}}{(i-k)!} (\mu)^i_{k+1} = \sum_{j_1+\ldots+j_{[k]}=i-k} \frac{t^{j_1}}{j_1!} (\mu)^{j_1} \otimes \ldots \otimes \frac{t^{j_{[k]}}}{j_{[k]}!} (\mu)^{j_{[k]}}$

and by collecting terms with the same k the argument of $\frac{s^k}{k!} (\mu)^k$ becomes:

$\sum_{i=k}^{\infty} \Big( \sum_{j_1+\ldots+j_{[k]}=i-k} \frac{t^{j_1}}{j_1!} (\mu)^{j_1} \otimes \ldots \otimes \frac{t^{j_{[k]}}}{j_{[k]}!} (\mu)^{j_{[k]}} \Big) = \otimes^{[k]} \exp(t\mu).$ 2

1.4 Some remarks on the 'history' of the Exponential Formula

The correspondence of homogeneous vector fields with multilinear mappings was observed for the first time by L. Marcus in 1960 [M] for the quadratic case m = 2 and was used by him for the classification of plane homogeneous quadratic vector fields. C. Coleman [C] generalized this correspondence in 1970 to arbitrary homogeneous polynomial vector fields. Marcus and Coleman didn't isolate µ as a multilinear mapping, but interpreted it as an in general nonassociative nonunitary binary resp. m-nary algebra on the underlying $IR^n$. In 1977 Röhrl introduced and investigated in [R] categories of m-nary algebras with certain linear and nonlinear morphisms.

Certain elementary dynamical phenomena correspond to structural properties of m-nary algebras ([R, Cor.(1.10)]), e.g. straight lines through the origin invariant under the flow correspond to the idempotent elements of the algebra. Therefore one may try to use homological methods as in algebraic topology to investigate the category of m-nary algebras and then to translate the results to dynamics. Unfortunately the first step turns out to be too complex: indeed the m-nary algebras with structure preserving morphisms form an algebraic category, but finitely generated free objects are infinite dimensional in general, so that the cohomology theories which are canonically available for this type of category are practically useless. Back to the Exponential Formula: Röhrl [R], inspired by an unpublished manuscript of M. Köcher, came rather close to the Exponential Formula by investigating automorphisms of m-nary algebras which map solutions to solutions. He didn't find the Formula because of his predominantly algebraic-categorical viewpoint, unfavorable notation and not recognizing the 1-parameter-group property. We found the Exponential Formula and the fundamental recursion formula (16) by trying to establish the 1-parameter-group property for Röhrl's formalism. 2

1.5 Estimating the radius of convergence

As already noted above (Sec. 1.3), analytic vector fields are well known to possess locally analytic solutions. In this section we give a new proof of this result in the case of polynomial vector fields by establishing a lower bound for the radius of convergence of $\exp(t\mu)(x_0)$ depending on µ and the distance of $x_0$ to the origin.

Clearly $a_i = i!$ ($i \in IN_0$) solves the recursion $a_{i+1} = \sum_{\nu=0}^{i} \binom{i}{\nu} a_\nu a_{i-\nu}$ with $a_0 := 1$. To solve the more general recursion $a_{i+1} = \sum_{j_1+\ldots+j_m=i} \frac{i!}{j_1! \cdot \ldots \cdot j_m!} a_{j_1} \cdot \ldots \cdot a_{j_m}$ with $a_0 := 1$ for arbitrary $m \in IN$, we introduce generalized factorials for $n \in IN_0$ and all $r \in IR$:

$(n+1)!_r := n!_r \cdot (1 + nr)$ with $0!_r := 1.$ (24)

As special cases we have $n!_1 = n!$ and $n!_0 = 1$. Then $a_i := i!_{m-1}$ solves the above recursion:

Lemma: For all $m \in IN$ and $i \in IN_0$:

$(i+1)!_{m-1} = \sum_{j_1+\ldots+j_m=i} \frac{i!}{j_1! \cdot \ldots \cdot j_m!} \, j_1!_{m-1} \cdot \ldots \cdot j_m!_{m-1}.$ (25)

Proof: The case m = 1 is trivial and m = 2 is already done. Using Appell's symbol $(x, n) := x(x+1) \cdot \ldots \cdot (x+n-1)$ and the series ${}_1F_0(a; z) \equiv \sum_{i=0}^{\infty} \frac{(a,i)}{i!} z^i = (1-z)^{-a}$ for $|z| < 1$ one can show

$\sum_{j_1+\ldots+j_m=i} \frac{(b_1, j_1)}{j_1!} \cdot \ldots \cdot \frac{(b_m, j_m)}{j_m!} = \frac{(b_1 + \ldots + b_m, i)}{i!}$

([Ca, Thm.2.3-2]). Since $n!_r = 1(1+r)(1+2r) \cdot \ldots \cdot (1+(n-1)r) = r^n (\frac{1}{r}, n)$ for all $r \in IR \setminus \{0\}$, an easy calculation with $b_1 = \ldots = b_m = \frac{1}{m-1}$ gives the result. 2
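The generalized factorials and identity (25) are easy to check by brute force; the following sketch (mine, not from the paper; the helper names are arbitrary) does this for small m and i:

```python
from itertools import product
from math import factorial, prod

def gen_factorial(n, r):
    """n!_r := 1 · (1+r) · (1+2r) · ... · (1+(n-1)r), with 0!_r := 1, cf. (24)."""
    out = 1
    for k in range(n):
        out *= 1 + k * r
    return out

def lemma_25_holds(m, i):
    """Check identity (25) for given m and i by summing over the simplex j1+...+jm = i."""
    lhs = gen_factorial(i + 1, m - 1)
    rhs = sum(
        factorial(i) // prod(factorial(jv) for jv in j)
        * prod(gen_factorial(jv, m - 1) for jv in j)
        for j in product(range(i + 1), repeat=m) if sum(j) == i
    )
    return lhs == rhs

print(all(lemma_25_holds(m, i) for m in range(1, 5) for i in range(7)))   # True
```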

Let $\| \cdot \|$ be an arbitrary norm on $IR^n$ with $|x^i| \le \|x\|$ for each component $x^i$ of the vector $x \in IR^n$. For concrete calculations the maximum norm is most easily handled and gives the sharpest results. The following estimate is valid uniformly for all initial conditions in a ball of radius r (with respect to the chosen norm) around the origin.

Theorem: For $X = (X_1, \ldots, X_n)^T$ let $p \in (IR[X])^n$ be homogeneous of degree $m \ge 1$ and µ the mapping associated by (7) to the ODE $\dot X = p(X)$. For r > 0 let $X(0) = x_0 \in IR^n$ be any initial condition with $\|x_0\| \le r$; then there exists an explicit P > 0 depending on µ such that

$\|\mu^i(x_0)\| \le i!_{m-1}\, P^i\, r^{i(m-1)+1}$ for all $i \in IN_0$. (26)

Proof: In the case m = 1 choose P as the operator norm of the matrix of coefficients M and the estimate is valid. Therefore let $m \ge 2$ subsequently. We need the notations $\nu = (\nu_1, \ldots, \nu_m) \in \{1, \ldots, n\}^m$ for a multi-index, $\sum_\nu \equiv \sum_{\nu_1,\ldots,\nu_m=1}^{n}$, $p(X) = \sum_\nu p_\nu X^{\nu_1} \cdot \ldots \cdot X^{\nu_m}$ with coefficient vector $p_\nu = (p^k_\nu)_{k=1,\ldots,n}$ and $x_i := \mu^i(x_0) \in IR^n$; define $P := \sum_\nu \|p_\nu\|$. Then:

$\|\mu(x_{j_1} \otimes \ldots \otimes x_{j_m})\| \le P\, \|x_{j_1}\| \cdot \ldots \cdot \|x_{j_m}\|,$ (27)

because l.h.s.(27) $\overset{(19)}{=} \big\| \sum_\nu p_\nu x_{j_1}^{\nu_1} \cdot \ldots \cdot x_{j_m}^{\nu_m} \big\| \le \sum_\nu \big\| p_\nu x_{j_1}^{\nu_1} \cdot \ldots \cdot x_{j_m}^{\nu_m} \big\| = \sum_\nu \big( \|p_\nu\|\, |x_{j_1}^{\nu_1}| \cdot \ldots \cdot |x_{j_m}^{\nu_m}| \big) \le \sum_\nu \big( \|p_\nu\|\, \|x_{j_1}\| \cdot \ldots \cdot \|x_{j_m}\| \big) =$ r.h.s.(27). Using (27) and the recursion formula (17) we now show the estimate by induction on i: For i = 0 clearly $\|x_0\| = \|\mu^0 x_0\| \le r = 0!_{m-1} P^0 r^{0(m-1)+1}$. Assuming (26) for all numbers $0, \ldots, i \in IN_0$ yields validity for i + 1 (with $\sum_j \binom{i}{j} \equiv \sum_{j_1+\ldots+j_m=i} \frac{i!}{j_1! \cdot \ldots \cdot j_m!}$):

$\|\mu^{i+1}(x_0)\| \overset{(13)}{=} \|\mu \circ (\mu)^{i+1}_2 \circ \delta_{[i+1]}(x_0)\| \overset{(17)}{=} \big\| \sum_j \binom{i}{j} \mu(x_{j_1} \otimes \ldots \otimes x_{j_m}) \big\|$

$\le \sum_j \binom{i}{j} \|\mu(x_{j_1} \otimes \ldots \otimes x_{j_m})\| \le \sum_j \binom{i}{j} P\, \|x_{j_1}\| \cdot \ldots \cdot \|x_{j_m}\|$

$\le \sum_j \binom{i}{j}\, j_1!_{m-1} \cdot \ldots \cdot j_m!_{m-1}\, P \cdot P^{j_1+\ldots+j_m} \times r^{(i+1)(m-1)+1},$

because $\sum_{\nu=1}^{m} (j_\nu(m-1) + 1) = (i+1)(m-1) + 1$. Formula (25) now proves the claim for $\mu^{i+1}(x_0)$. The remaining statement in the theorem is obvious. 2

Corollary (Estimate of the radius of convergence): With the above notations the common radius of convergence of all power series in the components of $\exp(t\mu)(x_0)$ is at least $\rho := (P (m-1) r^{m-1})^{-1}$.

Proof: Let $\exp(t\mu)(x_0) = \sum_{i=0}^{\infty} \frac{\mu^i(x_0)}{i!} t^i \equiv \sum_{i=0}^{\infty} a_i t^i$. By $i!_{m-1} \le i! (m-1)^i$ for $m \ge 2$:

$\|a_i\| \le (m-1)^i P^i r^{i(m-1)+1} = r \big( (m-1) P r^{m-1} \big)^i$,

and consequently the common radius of convergence is bounded from below by $(\limsup_{i\to\infty} \|a_i\|^{1/i})^{-1} \ge (P (m-1) r^{m-1})^{-1}$. The case m = 1 is trivial. 2

Corollary (Estimate of the remainder term): With the above notations let $|t| \le t_0 < \rho$ and $q := t_0/\rho = P(m-1) r^{m-1} t_0 < 1$. Then the error of the Taylor approximation of order k for $|t| \le t_0$ is at most $r q^{k+1}/(1-q)$.

Proof: By assumption $|q| < 1$. Therefore:

$\Big\| \sum_{i=k+1}^{\infty} \frac{\mu^i(x_0)}{i!} t^i \Big\| \le r \sum_{i=k+1}^{\infty} | P(m-1) r^{m-1} t_0 |^i = r \sum_{i=k+1}^{\infty} q^i = \frac{r q^{k+1}}{1-q}.$ 2
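Both corollaries translate directly into a few lines of code. The sketch below is my own (the function names and the sample data are assumptions, reusing the plane quadratic system of Example 1.1 in its commutative form); it computes the lower bound ρ and the remainder bound with respect to the maximum norm:

```python
import numpy as np

def lower_bound_rho(coeff_vectors, m, r):
    """rho = (P (m-1) r^(m-1))^(-1) with P = sum of norms of the coefficient vectors,
    valid for m >= 2 and initial values with ||x0|| <= r (maximum norm here)."""
    P = sum(np.max(np.abs(v)) for v in coeff_vectors)
    return 1.0 / (P * (m - 1) * r ** (m - 1)), P

def remainder_bound(P, m, r, t0, k):
    """r q^(k+1)/(1-q) with q = P (m-1) r^(m-1) t0 < 1: error bound of the order-k
    Taylor approximation for |t| <= t0."""
    q = P * (m - 1) * r ** (m - 1) * t0
    assert q < 1
    return r * q ** (k + 1) / (1 - q)

# Plane quadratic system of Example 1.1 written commutatively:
# coefficient vectors p_nu for the monomials x^2, xy, yx, y^2.
coeffs = [np.array([0.0, 3.0]), np.array([1.0, 0.0]),
          np.array([1.0, 0.0]), np.array([1.0, 1.0])]
rho, P = lower_bound_rho(coeffs, m=2, r=1.0)      # initial values with ||x0||_inf <= 1
print(rho, remainder_bound(P, m=2, r=1.0, t0=0.5 * rho, k=10))
```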

1.6 Homogenizing variable and radius of convergence

In Remark 1.4 we have seen that for polynomial systems homogenized with the variable z all n-dimensional hyperplanes $\{z \equiv const.\}$ are invariant under the flow. Clearly $\{z \equiv 1\}$ contains the original system. Moreover all restrictions of the flow to the hyperplanes are, modulo orientation, topologically equivalent, with the exception of $\{z \equiv 0\}$, which contains only the homogeneous subsystem of highest degree. Because of this equivalence one might hope that for power series solutions with corresponding starting values, i.e. starting values on the same ray emanating from the origin, when choosing the "right" hyperplane, a "greater" part of the orbit is contained in the domain of convergence of the respective power series, thereby giving more dynamical information. This turns out to be wrong: the union of all parts of orbits contained in the domains of convergence of power series solutions with corresponding starting values forms a homogeneous set in $IR^{n+1}$, i.e. a set which is the union of rays emanating from the origin. We now briefly indicate the proof of this fact. A simple calculation shows:

Lemma: Let $\varphi(t, x_0)$ be the solution of the IVP $\dot x = f(x)$, $x(0) = x_0$ with $x = (x_1, \ldots, x_n)^T$ and f homogeneous of degree m; then for all $\lambda > 0$: $\varphi(t, \lambda x_0) = \lambda \varphi(\lambda^{m-1} t, x_0)$ is the solution of $\dot x = f(x)$, $x(0) = \lambda x_0$.

Corollary: With the above notations let $y := \varphi(T, x_0)$ for some T > 0; then $\lambda y = \varphi(T', \lambda x_0)$ with $T' = \lambda^{-(m-1)} T$.

Theorem: Let $\exp(t\mu)(x_0)$ be the solution of the IVP from the Lemma above with one of its n components having radius of convergence R; then for all $\lambda > 0$, $\exp(t\mu)(\lambda x_0)$ has radius of convergence $R' = \lambda^{-(m-1)} R$ in the same component.

Proof: $\mu^i(\lambda x) = (\mu)^i \circ \delta_{[i]}(\lambda x) = (\mu)^i(\lambda^{[i]} \delta_{[i]}(x)) = \lambda^{[i]} (\mu)^i \circ \delta_{[i]}(x) = \lambda^{[i]} \mu^i(x)$ by (10)-(13) and IR-linearity of $(\mu)^i$. Let $a_i$ be the i-th Taylor coefficient of the component in question of $\exp(t\mu)(x_0)$ and $a'_i$ the coefficient in the corresponding component of $\exp(t\mu)(\lambda x_0)$; then: $R'^{-1} = \limsup_{i\to\infty} |a'_i|^{1/i} = \limsup_{i\to\infty} (\lambda^{i(m-1)+1} |a_i|)^{1/i} = \lambda^{m-1} \limsup_{i\to\infty} \lambda^{1/i} |a_i|^{1/i} = \lambda^{m-1} R^{-1}$. 2
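A tiny numerical sanity check of this scaling law on Example 1.2 (my own addition): there the coefficients are $a_i = (-1)^i a^{i+1}$, so the radius of convergence is 1/|a| and scaling the initial value by λ divides it by $\lambda^{m-1} = \lambda$.

```python
# Example 1.2 (m = 2): a_i = (-1)^i a^{i+1}, so |a_i|^(1/i) -> |a| and R = 1/|a|.
a, lam, i = 0.5, 3.0, 200
R = 1 / abs(a ** (i + 1)) ** (1 / i)               # approximates 1/|a| = 2
R_scaled = 1 / abs((lam * a) ** (i + 1)) ** (1 / i)
print(R, R_scaled, R / R_scaled)                    # ratio close to lambda^(m-1) = 3
```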

1.7 Some remarks on homogeneous polynomial vector fields

In the introduction we have already mentioned that the global dynamics of all homogeneous vector fields are completely determined by the local dynamics in any neighborhood of the origin and that for linear systems these local dynamics are determined by the infinitesimal or algebraic information of the Jordan normal form. Therefore the question arises whether there is some substitute for the Jordan normal form in the case of finite dimensional homogeneous polynomial vector fields. First of all we investigate dimension 2 (dimension 1 is trivial): Quadratic homogeneous plane vector fields have been investigated and classified by several authors. Marcus [M] proceeded by first classifying all nonassociative binary 2-dimensional algebras up to linear isomorphism and then grouping together the corresponding vector fields into topological equivalence classes, using his earlier results on plane foliations. Date [D] and Sibirskii [Si] used the theory of invariants of tensors, and Newton [N], inspired by a paper of Lyagina, found a third approach, which appears to us as the most natural: one observes that all lines through the origin for all homogeneous systems in all dimensions are isoclinals and that especially in two dimensions invariant lines (the "eigenspaces") form natural topological barriers separating the plane into hyperbolic, parabolic, and elliptic sectors. The analysis in [N] uses only elementary facts on polynomials and there is no principal obstacle to extending the analysis to higher degrees.

The situation in dimensions ≥ 3 is completely different:

• Invariant lines don’t separate anything now, and invariant hyperplanes – when they exist at all – do not reveal much information.

• 3-dimensional systems include as special cases the homogenized plane polynomial vector fields, which shows that a local analysis at the origin in 3 dimensions (homogeneous case) is at least as difficult as a global analysis in 2 dimensions (non-homogeneous case).

• van Strien and dos Santos have shown [SS] that for 3-dimensional homogeneous quadratic systems moduli appear, i.e. one-parameter families of topologically nonequivalent vector fields. This clearly destroys all hopes for a neat classification of homogeneous polynomial systems in higher dimensions.

2 The Simplex and the Convolution Formula

Direct computation of $\mu^i(x_0)$ needs at least (i-1)! evaluations of µ using (19) (with different arguments). Assuming µ commutative there is a recursive calculation scheme, which needs a number of evaluations of µ growing exponentially in i. This calculation scheme is the same for all polynomial degrees m and therefore seems suited for numerical applications, namely a lower order Taylor method. This will be explained in detail in the forthcoming paper [W1]. In this section we discuss two further recursion formulas for the Taylor coefficients in the Exponential Formula. In both cases these are "total history recursions", i.e. for the calculation of the (i+1)-th coefficient one needs all preceding coefficients. These formulas need to be programmed for each polynomial degree m separately, but then yield coefficients up to a rather high degree, e.g. for a plane quadratic IVP a 486-PC easily calculates the first 100 or 200 coefficients of the local power series solution. The two recursion formulas are:

1. The Simplex Formula: This formula provides insight into how the different homogeneous parts of the original polynomial vector field contribute to the $\mu^i(x_0)$; the commutative form of µ is needed. The number of evaluations of µ grows polynomially of degree m-1 in i.

2. The Convolution Formula: This formula dispenses with multilinear algebra and gives a connection to the properties of polynomial algebras; no homogenization or commutative form is needed. The effort for calculating the Taylor coefficients grows linearly in i for all degrees m.

2.1 The Simplex Formula

Notations: A commutative $\mu \in L(T_m IR^n, IR^n)$ is written in this section as the sum of its homogeneous parts $\mu_\nu$ of degree ν: $\mu = \mu_1 + \mu_2 + \ldots + \mu_m$, corresponding to the homogeneous parts of the original vector field p in (2), which in general is non-homogeneous. Let $\xi_0 \equiv x_0 \in IR^n$ be the initial value and $\xi_i := \mu^i(\xi_0)$ for i > 0. Moreover for a multi-index $j \equiv (j_1, \ldots, j_m) \in IN_0^m$ with $j! := j_1! \cdot \ldots \cdot j_m!$ let $\otimes_j \xi := \xi_{j_1} \otimes \ldots \otimes \xi_{j_m}$. $\Delta_{m,i} := \{j \mid j_1 + \ldots + j_m = i,\ j_\nu \ge 0\}$ is the set of lattice points of $ZZ^m$ contained in the simplex of $IR^m$ spanned by $i \cdot e_1, \ldots, i \cdot e_m$. $\Delta_{m,i}$ is called the simplex of degree m and size i. A special case is $\Delta_{m,0} = \{(0, \ldots, 0)\}$. The dimension of $\Delta_{m,i}$ is zero for i = 0 and m-1 for i > 0. For i > 0 and $0 \le l \le m$ let $\Delta^l_{m,i} := \{j \in \Delta_{m,i} \mid$ at most l of the $j_\nu$ are $\ne 0\}$. A special case is $\Delta^0_{m,i} = \emptyset$. $\Delta^l_{m,i}$ is called the (l-1)-dimensional boundary of $\Delta_{m,i}$, because the sum $j_1 + \ldots + j_m = i$ has (l-1) degrees of freedom in the $j_\nu$. Especially, $\Delta^1_{m,i}$ is the set of vertices and $\Delta^2_{m,i}$ the set of edges; $\Delta^m_{m,i} = \Delta_{m,i}$. For $1 \le l \le m$ we call $\Delta^{l\setminus l-1}_{m,i} := \Delta^l_{m,i} \setminus \Delta^{l-1}_{m,i} = \{j \in \Delta^l_{m,i} \mid$ exactly l of the $j_\nu$ are $\ne 0\}$ the inside of $\Delta^l_{m,i}$. 2

With this notation the recursion formula (17) reads:

$\xi_{i+1} = \sum_{j \in \Delta_{m,i}} \frac{i!}{j!} \, \mu(\otimes_j \xi).$ (28)

Observe that $\Delta^l_{m,i}$ decomposes as:

$\Delta^l_{m,i} = \Delta^{l\setminus l-1}_{m,i} \cup \ldots \cup \Delta^{2\setminus 1}_{m,i} \cup \Delta^{1\setminus 0}_{m,i}.$ (29)

In $\mu_\nu(y_1 \otimes \ldots \otimes y_m)$ the components of $y_1, \ldots, y_m$ belonging to the homogenizing variable appear exactly m-ν times as a factor in each summand (see (19)); hence $\mu_\nu$ applied to

$\underbrace{(*, \ldots, *, 0)^T \otimes \ldots \otimes (*, \ldots, *, 0)^T}_{l} \otimes \underbrace{(*, \ldots, *, 1)^T \otimes \ldots \otimes (*, \ldots, *, 1)^T}_{m-l}$

gives zero if m-ν > m-l, i.e. if ν < l. In $\xi_i$ the coefficient of the homogenizing variable is zero for i > 0 (Rem. 1.4), hence for $1 < l \le m$

$\sum_{j \in \Delta^{l\setminus l-1}_{m,i}} \frac{i!}{j!} \, \mu(\otimes_j \xi) = \sum_{j \in \Delta^{l\setminus l-1}_{m,i}} \frac{i!}{j!} \, (\mu_l + \ldots + \mu_m)(\otimes_j \xi),$ (30)

because every $j \in \Delta^{l\setminus l-1}_{m,i}$ has exactly l components $j_\nu \ne 0$, and consequently $\mu(\otimes_j \xi)$ has exactly l factors $\xi_i$ with i > 0. Since for i > 0:

$\xi_{i+1} = \sum_{j \in \Delta_{m,i}} \frac{i!}{j!} \mu(\otimes_j \xi) = \sum_{j \in \Delta^1_{m,i}} \frac{i!}{j!} \mu(\otimes_j \xi) + \sum_{j \in \Delta^{2\setminus 1}_{m,i}} \frac{i!}{j!} \mu(\otimes_j \xi) + \ldots + \sum_{j \in \Delta^{m\setminus m-1}_{m,i}} \frac{i!}{j!} \mu(\otimes_j \xi)$

and using (30) one gets

Formula A:

$\xi_{i+1} = \sum_{j \in \Delta^{1\setminus 0}_{m,i}} \frac{i!}{j!} \mu(\otimes_j \xi) + \sum_{j \in \Delta^{2\setminus 1}_{m,i}} \frac{i!}{j!} (\mu_2 + \ldots + \mu_m)(\otimes_j \xi) + \ldots + \sum_{j \in \Delta^{m\setminus m-1}_{m,i}} \frac{i!}{j!} \mu_m(\otimes_j \xi).$

Collecting summands belonging to $\mu_l$ and using (29) gives for all i > 0:

Formula B:

$\xi_{i+1} = \sum_{j \in \Delta^1_{m,i}} \frac{i!}{j!} \mu_1(\otimes_j \xi) + \sum_{j \in \Delta^2_{m,i}} \frac{i!}{j!} \mu_2(\otimes_j \xi) + \ldots + \sum_{j \in \Delta^m_{m,i}} \frac{i!}{j!} \mu_m(\otimes_j \xi).$

Remark 2.1: Formulas A and B are also valid for i = 0, because by definition $\Delta^l_{m,0} = \{(0, \ldots, 0)\}$ for $1 \le l \le m$ and $\Delta^{l\setminus l-1}_{m,0} = \emptyset$ for $1 < l \le m$. Clearly formulas A and B are extreme cases of a whole set of formulas using other partitions and collections of the $\mu_\nu$ and $\Delta^{l\setminus l-1}_{m,i}$.

Remark 2.2: $\sum_{j \in \Delta^1_{m,i}} \frac{i!}{j!} \mu_1(\otimes_j \xi) = m\, \mu_1(\xi_i \otimes \otimes^{m-1} \xi_0) = M \xi_i$ for all m, where M is the matrix of coefficients of the linear part of the polynomial vector field. By $\xi_{i+1} = M\xi_i + \ldots = M^{i+1}\xi_0 + \ldots$ one sees $\exp(t\mu)(\xi_0) = e^{Mt}\xi_0 + \text{remainder}$, where "remainder" is of course the crucial part in the nonlinear case.

Remark 2.3: Let $D_{m,i}$, $D^l_{m,i}$ and $D^{l\setminus l-1}_{m,i}$ denote the respective cardinalities of the sets $\Delta_{m,i}$, $\Delta^l_{m,i}$ and $\Delta^{l\setminus l-1}_{m,i}$. Then by simple combinatorial considerations and different partitions of the summation simplex one can show the following formulas: $D_{m,i} = c^*(m-1, i+1) = \binom{i+m-1}{m-1}$, $D_{m+1,i} = \sum_{\nu=0}^{i} D_{m,\nu}$, $D^{l\setminus l-1}_{m,i} = \binom{m}{l} D^{l\setminus l-1}_{l,i}$ for $1 \le l \le m$, $D^{m\setminus m-1}_{m,i} = D_{m,i-m}$, $D_{m,i+1} = \sum_{\nu=1}^{m} D_{\nu,i}$, $D_{m,i} = \sum_{l=1}^{m} D^{l\setminus l-1}_{m,i}$, $D_{m,i} = D_{m,i-m} + D^{m-1}_{m,i}$, and for i = sm + r with $0 \le r < m$: $D_{m,i} = D_{m,r} + \sum_{\nu=0}^{s-1} D^{m-1}_{m,i-\nu m}$.
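The simplex cardinalities of Remark 2.3 are easily tabulated; the following snippet (my own illustration, not part of the paper) spot-checks two of the identities:

```python
from math import comb

def D(m, i):
    """Number of lattice points of the simplex Delta_{m,i}: D_{m,i} = C(i+m-1, m-1)."""
    return comb(i + m - 1, m - 1)

# Spot checks: D_{m+1,i} = sum_{nu=0..i} D_{m,nu}  and  D_{m,i+1} = sum_{nu=1..m} D_{nu,i}.
m, i = 3, 7
print(D(m + 1, i) == sum(D(m, nu) for nu in range(i + 1)))     # True
print(D(m, i + 1) == sum(D(nu, i) for nu in range(1, m + 1)))  # True
```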

2.2 The Convolution Formula

In order to avoid the occurrence of multinomial coefficients, we choose in this section the notation $\exp(t\mu)(x_0) = \sum_{i=0}^{\infty} \frac{\mu^i(x_0)}{i!} t^i \equiv \sum_{i=0}^{\infty} \frac{\xi_i}{i!} t^i \equiv \sum_{i=0}^{\infty} x_i t^i$, i.e. $x_i = \mu^i(x_0)/i! = \xi_i/i!$ with $x_i \equiv (x_i^l)_{l=1,\ldots,n} \in IR^n$. We depart from formula B ($i \ge 0$):

$\xi_{i+1} = \sum_{\nu=1}^{m} \sum_{j \in \Delta^\nu_{m,i}} \frac{i!}{j!} \mu_\nu(\otimes_j \xi) \iff (i+1)\, x_{i+1} = \frac{\xi_{i+1}}{i!} = \sum_{\nu=1}^{m} \sum_{j \in \Delta^\nu_{m,i}} \mu_\nu(\otimes_j x),$

because $\frac{1}{j!} \mu_\nu(\otimes_j \xi) = \frac{1}{j_1! \cdot \ldots \cdot j_m!} \mu_\nu(\xi_{j_1} \otimes \ldots \otimes \xi_{j_m}) = \mu_\nu(x_{j_1} \otimes \ldots \otimes x_{j_m}) = \mu_\nu(\otimes_j x)$. Hence:

$x_{i+1} = \frac{1}{i+1} \Big( \sum_{\nu=1}^{m} \sum_{j \in \Delta^\nu_{m,i}} \mu_\nu(\otimes_j x) \Big).$ (31)

Next we sum over j the individual summands of $\mu_\nu$ separately; this generates convolutions of the Taylor coefficients. We demonstrate this in the case m = n = 2 for the system:

$\dot x = Mx + Ny + Ax^2 + Bxy + Cy^2$
$\dot y = M'x + N'y + A'x^2 + B'xy + C'y^2$

By (31) and (19) (writing $x_i \equiv x_i^1$ and $y_i \equiv x_i^2$) one gets for the first equation (the computation for the second equation is the same with primed capitals etc.):

$x_{i+1} = \frac{1}{i+1} \Big( M x_i + N y_i + \sum_{j+k=i} \big( A x_j x_k + \tfrac{B}{2}(x_j y_k + y_j x_k) + C y_j y_k \big) \Big)$

$= \frac{1}{i+1} \Big( M x_i + N y_i + A \sum_{j+k=i} x_j x_k + B \sum_{j+k=i} x_j y_k + C \sum_{j+k=i} y_j y_k \Big)$

$= \frac{1}{i+1} \big( M x_i + N y_i + A (x * x)_i + B (x * y)_i + C (y * y)_i \big).$

Note that the commutative form of µ is no longer necessary. In the last step we used the following

Notation: For ν sequences $x^{(k)} = (x^{(k)}_0, x^{(k)}_1, x^{(k)}_2, \ldots)$ ($1 \le k \le \nu$) of elements of a commutative ring R the ν-fold convolution is defined by:

$*_\nu(x^{(1)}, \ldots, x^{(\nu)}) := \Big( \sum_{j_1+\ldots+j_\nu=i} x^{(1)}_{j_1} \cdot \ldots \cdot x^{(\nu)}_{j_\nu} \Big)_{i \in IN_0}.$

$*_{\nu,i}(x^{(1)}, \ldots, x^{(\nu)}) := (*_\nu(x^{(1)}, \ldots, x^{(\nu)}))_i$ is the i-th component of the ν-fold convolution. Clearly $*_2(x, y) \equiv x * y$ (the usual binary convolution) and $*_1(x) = x$, hence $*_{1,i}(x) = x_i$. The ν-fold convolution is commutative and ν-linear, and $(x*y)*z = x*(y*z) = *_3(x, y, z)$ implies that the ν-fold convolution can be written as a product of (binary) convolutions with arbitrary bracketing.

Definition of $\mathcal{R}[\mathbf{x}]$: Let R be a commutative ring with unit and $F(R) \equiv R^{IN_0}$ the free R-module of all R-sequences on $IN_0$. The convolution $*$ defined above turns F(R) into an R-algebra $\mathcal{R} \equiv (F(R), *)$. R as an R-algebra is embedded into $\mathcal{R}$ by the mapping $\Phi : R \to \mathcal{R}$ with $r \mapsto \mathbf{r} := (r, 0, 0, \ldots)$. Let $\mathcal{R}[\mathbf{x}] \equiv \mathcal{R}[x^{(1)}, \ldots, x^{(n)}]$ be the ring of polynomials over $\mathcal{R}$ in the variables $x^{(1)}, \ldots, x^{(n)}$; of course $x^{(k)} = (x^{(k)}_0, x^{(k)}_1, x^{(k)}_2, \ldots)$ for $1 \le k \le n$ and $*$ is the multiplication. Note that $\mathcal{R}[\mathbf{x}]$ is an $\mathcal{R}$-algebra with componentwise multiplication and $\mathbf{r}\, p(\mathbf{x}) = \mathbf{r} * p(\mathbf{x})$. Moreover $\Phi : R \to \mathcal{R}$ extends to an R-algebra homomorphism $\Phi : R[x] \to \mathcal{R}[\mathbf{x}]$ by $\Phi(x_l) := x^{(l)}$. Now let p as in (2) be a polynomial vector field in $IR^n$, i.e. an n-tuple of polynomials $p_l \in R[x]$ in the variables $x_1, \ldots, x_n$; then we denote by $\mathbf{p}$ the n-tuple with $\mathbf{p}_l := \Phi(p_l) \in \mathcal{R}[\mathbf{x}]$ $(l = 1, \ldots, n)$, i.e. $\mathbf{p}$ maps an n-tuple $\mathbf{x}$ of sequences $x^{(k)} \in \mathcal{R}$ to the n-tuple $\mathbf{p}(\mathbf{x})$ of sequences with the components $\mathbf{p}_{,i}(\mathbf{x})$. Let $\mathbf{p}_\nu$ denote the homogeneous part of degree ν and $\mathbf{p}_{\nu,i}$ its components. Completely as in the special case above one can deduce from (31) the:

Convolution Formula:

$x_{i+1} = \frac{1}{i+1}\, \mathbf{p}_{,i}(\mathbf{x}) = \frac{1}{i+1} \Big( \sum_{\nu=1}^{m} \mathbf{p}_{\nu,i}(\mathbf{x}) \Big) \quad \forall i \in IN_0.$ (32)

In addition to the deduction of the Convolution Formula from the Exponential Formula via formula B and (31) we now give a

Direct proof: Substitute a vector of power series into the system $\dot x = p(x)$ and equate coefficients for $t^i$. On the l.h.s. one has $(i+1) x_{i+1}$, and on the r.h.s. for any ν-homogeneous monomial the i-th component of a ν-fold product of power series, i.e. the i-th component of the ν-fold convolution of their respective coefficients. 2

Remark 2.4: Since Φ is an R-algebra homomorphism, the first equality in (32) is valid for any form of the polynomial vector field compatible with the rules of calculation for

polynomials, for example a factorised p.

Remark 2.5: In the case m = 1 the Convolution Formula gives, by $*_{1,i}(x) = x_i$: $x_{i+1} = \frac{1}{i+1} \mathbf{p}_{1,i}(\mathbf{x}) = \frac{1}{i+1} M x_i = \frac{1}{(i+1)!} M^{i+1} x_0$, where M is the matrix of coefficients of $p_1$.

Remark 2.6: For fixed m and n the effort for calculating the Taylor coefficients clearly grows linearly in i. It is not necessary to store completely the intermediary results of the ν-fold convolutions for ν = 1 up to ν = m-1 (see the next example). For the Simplex Formula it is not clear how to store intermediary results efficiently, because for all $i, i', m \in IN$ we have $\Delta_{m,i} \cap \Delta_{m,i'} = \emptyset$ if $i \ne i'$.

Example 2.1: Let n = 2, m = 3. We have to store as intermediary results the coefficients $x_i$, $y_i$, and the numbers $(x*x)_i$ and $(y*y)_i$, but not the numbers $(x*y)_i$. Then all 3-fold convolutions can be calculated: e.g. $*_{3,i}(x, y, y) = (x*(y*y))_i = \sum_{\nu=0}^{i} x_\nu \cdot (y*y)_{i-\nu}$.
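For the plane quadratic system used above, the Convolution Formula becomes a handful of lines of code. The sketch below is my own reading of (32) (the function name and the argument layout are assumptions); it reproduces, as a degenerate special case, the coefficients of Example 1.2:

```python
def taylor_coefficients(x0, y0, M, N, A, B, C, Mp, Np, Ap, Bp, Cp, order):
    """Taylor coefficients x[i], y[i] of the solution of
        xdot = M x + N y + A x^2 + B x y + C y^2
        ydot = M'x + N'y + A'x^2 + B'x y + C'y^2
    computed with the Convolution Formula (32)."""
    conv = lambda u, v, i: sum(u[j] * v[i - j] for j in range(i + 1))
    x, y = [x0], [y0]
    for i in range(order):
        x.append((M * x[i] + N * y[i]
                  + A * conv(x, x, i) + B * conv(x, y, i) + C * conv(y, y, i)) / (i + 1))
        y.append((Mp * x[i] + Np * y[i]
                  + Ap * conv(x, x, i) + Bp * conv(x, y, i) + Cp * conv(y, y, i)) / (i + 1))
    return x, y

# Degenerate check with Example 1.2: xdot = -x^2, x(0) = a (y identically zero).
a = 0.5
xs, _ = taylor_coefficients(a, 0.0, 0, 0, -1.0, 0, 0, 0, 0, 0, 0, 0, order=6)
print(xs)   # a, -a^2, a^3, -a^4, ...  i.e. the coefficients of a/(1+at)
```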

2.3 Faà di Bruno's Formula and the Convolution Formula

Let $\varphi(t) \equiv \sum_{i=0}^{\infty} (\xi_i/i!) t^i \equiv \sum_{i=0}^{\infty} x_i t^i$ be the power series solution of the polynomial IVP $\dot x = p(x)$, $x(0) = 0$, where $p(x) \equiv \sum_{k=0}^{m} (q_k/k!) x^k \equiv \sum_{k=0}^{m} Q_k x^k$. (Note that the initial value x(0) = 0 is uncomfortable but no restriction, because p can be translated.) Then clearly

$\xi_{i+1} = \Big( \frac{d}{dt} \Big)^i (p \circ \varphi(t)) \big|_{t=0}.$ (33)

For i = 0, 1 one has $\xi_0 = x_0 = 0$ and $\xi_1 = x_1 = p(0) = Q_0 = q_0$; so it remains to investigate $i \ge 1$. In dimension n = 1 the r.h.s. of (33) can be evaluated using Faà di Bruno's Formula and the formula for Bell polynomials (cf. [Co]):

$\xi_{i+1} = \sum_{k=1}^{i} q_k \, B_{i,k}(\varphi^{(1)}, \varphi^{(2)}, \ldots, \varphi^{(i-k+1)})\big|_{t=0} = \sum_{k=1}^{i} q_k \, B_{i,k}(\xi_1, \xi_2, \ldots, \xi_{i-k+1}) =$

$\sum_{k=1}^{i} q_k \sum \frac{i!}{c_1!\, c_2! \cdots (1!)^{c_1} (2!)^{c_2} \cdots} \, \xi_1^{c_1} \xi_2^{c_2} \cdots,$

where the summation takes place over all $c_1, c_2, \ldots \in IN_0$ such that $c_1 + c_2 + c_3 + \ldots = k$ and $c_1 + 2c_2 + 3c_3 + \ldots = i$. This can be transformed further to

$\xi_{i+1} = \sum_{k=1}^{i} \frac{q_k}{k!} \, i! \sum_{j_1+\ldots+j_k=i} x_{j_1} \ldots x_{j_k},$

where $j_1, j_2, \ldots \in IN$, and finally to:

$(i+1)\, x_{i+1} = \xi_{i+1}/i! = \sum_{k=1}^{m} Q_k \sum_{j_1+\ldots+j_k=i} x_{j_1} \ldots x_{j_k},$

which is our Convolution Formula. Therefore the use of Faà di Bruno's Formula in one dimension amounts to a desymmetrisation of the Convolution Formula, which obscures the structural simplicity of the convolution process, and is not well suited to higher dimensional generalization.

3 The Exponential Formula and the iteration of polynomial mappings

Let k be a field and A a k-algebra; then every k-linear map from A to $A \otimes A$ is called a comultiplication on A. In this section we are especially interested in two important examples of comultiplications: $\Delta(x) = x \otimes 1 + 1 \otimes x$ and $\Delta^*(x) = x \otimes x$. The following 'excursion' will put these two examples into perspective; we will use without further comment some elementary facts about coalgebras, Hopf algebras and affine algebraic groups, a full account of which the reader may find in the books of Sweedler [Sw] and Abe [Abe]. It is perfectly possible to understand the main constructions without reading the 'excursion', but skimming through it should give a feeling of the context.

Excursion: A Hopf algebra H over a field k is called affine if it is finitely generated as an algebra and commutative. The set of all k-algebra mappings $G(k) := Alg(H, k)$ is then a group under the convolution $*$ defined by $f * g := m_k \circ (f \otimes g) \circ \Delta$, where $m_k$ is the multiplication in k and ∆ the comultiplication of H. The G(k) constructed in this way is called an affine algebraic group. We are interested here in the following two special cases:

$G_a(k) := Alg(H, k)$ for H = k[x] with comultiplication $\Delta(x) = x \otimes 1 + 1 \otimes x$, i.e. x is primitive, counit $\varepsilon(x) = 0$ and antipode $S(x) = -x$. $G_a(k) \cong (k, +)$, i.e. $G_a(k)$ and the additive group (k, +) of k are isomorphic.

$G_m(k) := Alg(H, k)$ for $H = k[x, x^{-1}]$ with comultiplication $\Delta^*(x) = x \otimes x$, i.e. x is grouplike, counit $\varepsilon^*(x) = 1$ and antipode $S^*(x) = x^{-1}$. $G_m(k) \cong (k^*, \cdot)$, i.e. $G_m(k)$ and the multiplicative group $(k^*, \cdot)$ of nonzero elements of k are isomorphic.

Moreover, if one uses the above two Hopf algebras as measuring coalgebras, i.e. let $Hom_k(A, B)$ be the space of k-linear mappings between the k-algebras A and B, and $\varphi : H \to Hom_k(A, B)$ a k-linear mapping such that the comultiplication of H is "compatible" with the multiplications of A and B (cf. [Sw]), then the image of x under ϕ in the case $H = k[x, x^{-1}]$ will be an algebra homomorphism and in the case H = k[x] a derivation. 2

We will now show that the sequence of $\mu^i$'s in the Exponential Formula is associated to the comultiplication $\Delta(x) = x \otimes 1 + 1 \otimes x$ in the same way as the sequence of iterates of real polynomial mappings is associated to the comultiplication $\Delta^*(x) = x \otimes x$. Define $\Delta_0 = id$, $\Delta_1 = \Delta$ and recursively: $\Delta_n = (\otimes^{n-1} 1 \otimes \Delta)\, \Delta_{n-1}$ for n > 1; by coassociativity, i.e. $(1 \otimes \Delta)\Delta = (\Delta \otimes 1)\Delta$, the definition does not depend on the place of ∆ in the n-fold tensor product. In the same way one defines the 'higher order comultiplications' $\Delta^*_n$ for $\Delta^*$, and in fact this definition is possible in any coassociative coalgebra. More concretely one computes:

$\Delta_n(x) = \sum_{\nu=1}^{n+1} \otimes^{\nu-1} 1 \otimes x \otimes \otimes^{n+1-\nu} 1 \quad \text{and} \quad \Delta^*_n(x) = \otimes^{n+1} x.$ (34)

Let now $\mu \in L(T_m IR^n, IR^n)$ (as in Sec. 1.1) correspond to a polynomial mapping or vector field $p : IR^n \to IR^n$, which is (made) homogeneous of degree m. Substitution of µ for x and id for 1 in the above formulas yields the IR-linear mappings:

$\Delta_{p-m}(\mu) \overset{(8)}{=} d_{\mu,p},$ (35)

if we define $\Delta_{p-m}(\mu) := 0$ for p < m, and

$d^*_{\mu,pm} := \Delta^*_{p-1}(\mu) = \otimes^p \mu : T_{pm} IR^n \to T_p IR^n$ for all $p \in IN$. (36)

In Section 1 we have seen that the mappings $d_{\mu,p}$, suitably composed (cf. (10)-(11)) and evaluated on the 'diagonal' (cf. (12)-(13)), yield the coefficients of power series solutions for polynomial IVPs. We now investigate the analogous construction for the $d^*_{\mu,p}$:

$(\mu^*)^i := \begin{cases} id, & \text{if } i = 0 \\ d^*_{\mu,m} \circ d^*_{\mu,m^2} \circ \ldots \circ d^*_{\mu,m^i}, & \text{if } i > 0, \end{cases}$ (37)

and

$\mu^{*\,i} := \begin{cases} id, & \text{if } i = 0 \\ (\mu^*)^i \circ \delta_{m^i}, & \text{if } i > 0. \end{cases}$ (38)

The 'evaluation formula' (19) then shows that $\mu^{*\,i}$ is nothing else than the i-fold composition of the polynomial mapping $p : IR^n \to IR^n$ with itself, expressed in terms of multilinear algebra. Note that $\mu^{*\,i}$ corresponds to a homogeneous polynomial of degree $m^i$.

Remark 3.1: The above constructions are valid more generally if we replace IR by an arbitrary commutative ring R with unit, and also for arbitrary affine algebraic groups, respectively affine Hopf algebras. For example, let $GL_N(R)$ be the general linear group of order N over R, which is represented by a Hopf algebra with comultiplication $\Delta(T_{ij}) = \sum_{k=1}^{N} T_{ik} \otimes T_{kj}$ (cf. [Abe, Ex.4.3]); then, following the lines of the above construction, one substitutes for the $N^2$ variables $T_{ij}$, $i, j = 1, \ldots, N$, the $N^2$ mappings $\mu_{ij} \in L(T_m IR^n, IR^n)$ such that for the corresponding polynomial mappings $p_{ij} : IR^n \to IR^n$ one has $\det(p_{ij}) \ne 0$.

Remark 3.2: Suppose $\mu, \tilde\mu \in L(T_m IR^n, IR^n)$ and $B : IR^n \to IR^n$ a bijective and in general nonlinear 'transformation of coordinates', such that a given µ is transformed to a 'nice' $\tilde\mu$ according to $\tilde\mu := B \circ \mu \circ \otimes^m B^{-1}$. We choose the letter B because B is called the "Böttcher coordinate" in the iteration theory of 1-dimensional polynomials, where a given polynomial $p(x) = x^m + \text{lower order terms}$ is transformed to $\tilde p(x) = x^m = B \circ p \circ B^{-1}(x)$. If in the above construction one uses $\tilde\mu$ instead of µ, then it is not hard to see (note: $\otimes^p B^{-1} \circ \delta_p = \delta_p \circ B^{-1}$) that $\tilde\mu^{*\,i} = B \circ \mu^{*\,i} \circ B^{-1}$ and $\tilde\mu^i = B \circ \mu^i \circ B^{-1}$.

References:

[A] H. Amann, "Ordinary differential equations; an introduction to nonlinear analysis", de Gruyter, Berlin, 1990.
[Abe] E. Abe, "Hopf Algebras", Cambridge University Press, Cambridge, 1977.
[C] C. Coleman, Growth and Decay Estimates near Non-elementary Stationary Points, Can. J. Math. 22 (1970), 1156-1167.
[Ca] B.C. Carlson, "Special functions of applied mathematics", Academic Press, New York, 1977.
[Co] L. Comtet, "Advanced Combinatorics", Reidel, Dordrecht, 1974.
[CT] C. Chicone, J. Tian, On general properties of quadratic systems, Amer. Math. Monthly 89 (1982), 167-189.
[D] T. Date, Classification and analysis of two-dimensional real homogeneous systems, J. Diff. Eqns. 32 (1979), 311.
[E] J. Ecalle, Finitude des cycles limites et accéléro-sommation de l'application de retour, in: J.-P. Françoise, R. Roussarie (eds.), "Bifurcations of Planar Vector Fields", LNM 1455, Springer, Berlin, 1990.
[G] W. Gröbner, "Die Lie-Reihen und ihre Anwendungen", VEB Deutscher Verlag der Wissenschaften, Berlin, 1960.
[H] H. Hochstadt, "Differential Equations", Rinehart & Winston, New York, 1964.
[HS] M.W. Hirsch, S. Smale, "Differential Equations, Dynamical Systems, and Linear Algebra", Academic Press, New York, 1974.
[I1] Y.S. Il'yashenko, Dulac's Memoir "On limit cycles" and related problems of the local theory of differential equations, Russian Math. Surveys 40(6) (1985), 1-49.
[I2] Y.S. Il'yashenko, "Finiteness Theorems for Limit Cycles", Trans. Math. Monographs AMS 94, Providence, 1991.
[L] N.G. Lloyd, Limit cycles of polynomial systems, in: T. Bedford, J. Swift (eds.), "New Directions in Dynamical Systems", London Math. Soc. Lecture Note Ser. 127, 192-234, Cambridge, 1988.
[M] L. Marcus, Quadratic Differential Equations and Non-associative Algebras, in: "Contributions to the Theory of Non-linear Oscillations", Vol. V, 185-213, Princeton University Press, 1960.
[N] T.A. Newton, Two dimensional homogeneous quadratic differential systems, SIAM Rev. 20 (1978), 120-138.
[R] H. Röhrl, Algebras and Differential Equations, Nagoya Math. J. 68 (1977), 59-122.
[Sa] P.L. Sachdev, "Nonlinear Ordinary Differential Equations and Their Applications", Marcel Dekker, New York, 1991.
[Sch] D. Schlomiuk (ed.), "Bifurcations and Periodic Orbits of Vector Fields", NATO ASI Series C, Kluwer, Dordrecht, 1993.
[Si] K.S. Sibirskii, "Introduction to the algebraic theory of invariants of differential equations", Manchester Univ. Press, Manchester and New York, 1988.
[Sw] M.E. Sweedler, "Hopf Algebras", Benjamin, New York, 1969.
[SS] S.J. van Strien, G.T. dos Santos, Moduli of Stability for Germs of Homogeneous Vectorfields on R3, J. Diff. Eqns. 69 (1987), 63-84.
[W] R. Winkel, "Eine Exponentialformel für endlichdimensionale polynomiale Vektorfelder", Thesis, RWTH Aachen, 1992.
[W1] R. Winkel, Lie series and rooted trees for polynomial vector fields, in preparation.
[Y1] Ye Yanqian, Some Problems in the Qualitative Theory of Ordinary Differential Equations, J. Diff. Eqns. 46 (1982), 153-164.
[Y2] Ye Yanqian et al., "Theory of Limit Cycles" (2nd ed.), Trans. Math. Monographs AMS 66, Providence, 1986.
[Z] Zhang Zhi-fen et al., "Qualitative Theory of Differential Equations", Trans. Math. Monographs AMS 101, Providence, 1992.
