
arXiv:1411.6268v2 [math.PR] 19 Nov 2015

INFINITESIMAL GENERATORS FOR A CLASS OF POLYNOMIAL PROCESSES

WLODZIMIERZ BRYC AND JACEK WESOLOWSKI

Abstract. We study the infinitesimal generators of evolutions of linear mappings on the space of polynomials which correspond to a special class of Markov processes with polynomial regressions called quadratic harnesses. We relate the infinitesimal generator to the unique solution of a certain commutation equation, and we use the commutation equation to find an explicit formula for the infinitesimal generator of free quadratic harnesses.

Date: Printed August 1, 2018 (PF27.tex). This is an expanded (arxiv) version of the paper.
2000 Mathematics Subject Classification. 60J25; 46L53.
Key words and phrases. Infinitesimal generators; quadratic conditional variances; polynomial processes.

1. Introduction

In this paper we study properties of evolutions of degree-preserving linear mappings on the space of polynomials. We analyze a number of algebraic properties of some special mappings, which were abstracted out from the properties of a family of Markov processes called quadratic harnesses. The Markov processes that motivate our theory have polynomial regressions with respect to the past σ-fields. Such Markov processes enjoy many interesting properties and appeared in numerous references [2, 3, 10, 12, 15, 16, 17, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. As observed by Cuchiero in [14] and Szablowski [32], transition probabilities P_{s,t}(x, dy) of such a process on an infinite state space define a family of linear transformations (P_{s,t}), 0 ≤ s ≤ t, that map the linear space P = P(R) of all polynomials in variable x into itself. The crucial property that holds in many interesting examples is that the transformations P_{s,t} do not increase the degree of a polynomial, that is, P_{s,t}(P_{≤k}) ⊆ P_{≤k}, k = 0, 1, ..., where P_{≤k} denotes the linear space of polynomials of degree ≤ k. Cuchiero [14] introduced the term "polynomial process" to denote such a process in the time-homogeneous case; see also [15]. We will use this term more broadly, to denote the family of operators P_{s,t} rather than a Markov process. That is, we adopt the point of view that the linear mappings P_{s,t} can be analyzed "in abstract", without explicit reference to the transition operators of the underlying Markov process. This will be the basis for our algebraic approach. We therefore begin with a review of the relevant theory of Markov processes that motivates our assumptions.

We note that the operators P_{s,t} are well-defined whenever the support of X_s is infinite. So, strictly speaking, the operators P_{0,t} are well defined only for the so-called Markov families that can be started at an infinite number of values of X_0. However, it will turn out that in the cases that we are interested in, even if a Markov process starts with X_0 = 0, we can pass to the limit in P_{s,t} as s → 0 and in this way define a unique degree-preserving mapping P_{0,t} : P → P.

With the above in mind, we introduce the following definition.

Definition 1.1. Suppose P is the linear space of polynomials in one variable x. A polynomial process is a family of linear maps {P_{s,t} : P → P, 0 ≤ s ≤ t} with the following properties:

(i) for k = 0, 1, ... and 0 ≤ s ≤ t,

P_{s,t}(P_{≤k}) = P_{≤k},

(ii) P_{s,t}(1) = 1,

(iii) for 0 ≤ s ≤ t ≤ u,

(1.1) P_{s,t} ∘ P_{t,u} = P_{s,u}.

Our next task is to abstract out properties of polynomial processes that correspond to a special class of such Markov processes with linear regressions and quadratic conditional variances under two-sided conditioning. The two-sided linearity of regression, which is sometimes called the "harness property", takes the following form: for 0 ≤ s < t < u,

(1.2) E(X_t | X_s, X_u) = ((u − t)/(u − s)) X_s + ((t − s)/(u − s)) X_u.

The quadratic conditional variance condition states that Var(X_t | X_s, X_u) is a quadratic polynomial in the pair (X_s, X_u); in the standard parametrization with parameters η, θ, σ, τ, γ it reads

(1.3) Var(X_t | X_s, X_u) = ((u − t)(t − s))/(u(1 + σs) + τ − γs) × (1 + σ ((u X_s − s X_u)/(u − s))^2 + η (u X_s − s X_u)/(u − s) + τ ((X_u − X_s)/(u − s))^2 + θ (X_u − X_s)/(u − s) − (1 − γ)(X_u − X_s)(u X_s − s X_u)/(u − s)^2).
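The conditions of Definition 1.1 are easy to verify in a concrete case. The following sketch is our own illustration (not code from the paper): for the Wiener process, P_{s,t}(x^n) = E(x + B_{t−s})^n is a polynomial of degree n in x, and the semigroup property (1.1) holds exactly.

```python
# A sketch (ours) of Definition 1.1 for the Wiener process:
# P_{s,t} maps x^n to E(x + B_{t-s})^n, again a degree-n polynomial in x.
from fractions import Fraction
from math import comb, factorial

def gauss_moment(j):
    """E Z^(2j) for a standard normal Z: (2j-1)!! = (2j)! / (2^j j!)."""
    return Fraction(factorial(2 * j), 2 ** j * factorial(j))

def P(s, t, p):
    """Apply P_{s,t} to a polynomial p given as a coefficient list."""
    v = Fraction(t - s)                 # variance of the Gaussian increment
    out = [Fraction(0)] * len(p)
    for n, c in enumerate(p):
        # E(x + B_v)^n = sum_j C(n, 2j) (2j-1)!! v^j x^(n-2j)
        for j in range(n // 2 + 1):
            out[n - 2 * j] += c * comb(n, 2 * j) * gauss_moment(j) * v ** j
    return out

s, t, u = Fraction(1), Fraction(2), Fraction(5)
p = [Fraction(c) for c in (3, 0, -1, 2, 0, 1)]   # 3 - x^2 + 2x^3 + x^5
```

With exact rational arithmetic, P(s, t, P(t, u, p)) and P(s, u, p) agree coefficient by coefficient, which is (1.1) for this example.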

1.1. Algebra Q of polynomials. We consider the linear space Q of all infinite sequences of polynomials in variable x, with a nonstandard multiplication defined as follows. For P = (p_0, p_1, ..., p_k, ...) and Q = (q_0, q_1, ..., q_k, ...) in Q, we define their product R = PQ ∈ Q as the sequence of polynomials R = (r_0, r_1, ...) given by

(1.4) r_k(x) = Σ_{j=0}^{deg(q_k)} [q_k]_j p_j(x), k = 0, 1, ...,

where [q_k]_j denotes the coefficient of q_k at x^j.

It is easy to check that the algebra Q has identity

E = (1, x, x^2, ...).

In the sequel we will frequently use two special elements of Q

(1.5) F = (x, x^2, x^3, ...), and

(1.6) D = (0, 1, x, x^2, ...), which is the left-inverse of F.

It may be useful to note that PF and PD act as shifts: if P = (p_0(x), p_1(x), ..., p_n(x), ...), then PF = (p_1(x), p_2(x), ..., p_{n+1}(x), ...) and PD = (0, p_0(x), p_1(x), ..., p_{n−1}(x), ...). On the other hand, FP is just multiplication by x, i.e., FP = (x p_0(x), x p_1(x), ..., x p_n(x), ...).
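These algebraic rules can be checked mechanically on a finite truncation of Q. The sketch below is our own illustration (all names are ours): it implements the product (1.4) on sequences of coefficient lists and verifies the shift and inverse properties of F and D.

```python
# A finite truncation of the algebra Q (our own sketch, not code from the
# paper).  An element of Q is a list of polynomials; a polynomial is a
# coefficient list [c0, c1, ...] representing c0 + c1*x + ... .
N = 6  # keep sequence entries 0, ..., N

def mono(k):
    """The monomial x^k."""
    return [0] * k + [1]

def padd(p, q):
    m = max(len(p), len(q))
    get = lambda r, i: r[i] if i < len(r) else 0
    return [get(p, i) + get(q, i) for i in range(m)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def mul(P, Q):
    """Product (1.4): the k-th entry of PQ is sum_j [q_k]_j p_j
    (entries that would need P beyond the truncation are dropped)."""
    R = []
    for qk in Q:
        rk = [0]
        for j, c in enumerate(qk):
            if c and j < len(P):
                rk = padd(rk, [c * a for a in P[j]])
        R.append(rk)
    return R

E = [mono(k) for k in range(N + 1)]                  # identity: (1, x, x^2, ...)
F = [mono(k + 1) for k in range(N + 1)]              # (1.5)
D = [[0]] + [mono(k - 1) for k in range(1, N + 1)]   # (1.6)
```

Within the truncation, DF = E (so D is a left inverse of F), FD = (0, x, x^2, ...), and FP multiplies each entry by x.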

The algebra Q is isomorphic to the algebra of all linear mappings P → P under composition: to each P : P → P we associate a unique sequence of polynomials P = (p_0, p_1, ..., p_k, ...) in variable x by setting p_k = P(x^k). Of course, if P does not increase the degree, then p_k is of degree at most k. Under this isomorphism, the composition R = P ∘ Q of linear operators P and Q on P induces the multiplication operation R = PQ for the corresponding sequences of polynomials P = (p_0, p_1, ..., p_k, ...) and Q = (q_0, q_1, ..., q_k, ...), which was introduced in (1.4). It is clear that degree-preserving linear mappings of P are invertible.

Proposition 1.2. If for every n the polynomial p_n is of degree n, then P = (p_0, p_1, ...) has a multiplicative inverse Q = (q_0, q_1, ...) and each polynomial q_n is of degree n.

Proof. Write p_n(x) = Σ_{k=0}^n a_{n,k} x^k. The inverse Q = (q_0, q_1, ...) is given by the family of polynomials q_n of degree n which solve the following recursion:

q_0 = 1/a_{0,0}, and q_n = (1/a_{n,n}) (x^n − Σ_{j=0}^{n−1} a_{n,j} q_j(x)) for n ≥ 1.

It is then clear that QP = E. Since both P and Q consist of polynomials of degree n at the n-th position of the sequence, the corresponding linear mappings P, Q on the set of polynomials preserve the degree of a polynomial. Since Q ∘ P is the identity on each finite-dimensional space P_{≤n}, we see that P ∘ Q is also an identity, so PQ = E. □
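The recursion in the proof is easy to run numerically. In the sketch below (our own illustration) we invert the sequence p_n(x) = (1 + x)^n; the inverse turns out to consist of the polynomials q_n(x) = (x − 1)^n.

```python
# A sketch (ours) of the inversion recursion from the proof of
# Proposition 1.2.  P is stored as a lower-triangular coefficient matrix
# a, with p_n(x) = sum_{k<=n} a[n][k] x^k and a[n][n] != 0.
from fractions import Fraction
from math import comb

def invert(a):
    """Solve sum_{j<=n} a[n][j] q_j(x) = x^n for q_0, q_1, ...."""
    q = []
    for n in range(len(a)):
        qn = [Fraction(0)] * n + [Fraction(1)]   # start from x^n
        for j in range(n):                        # subtract a[n][j] q_j
            for k, c in enumerate(q[j]):
                qn[k] -= a[n][j] * c
        q.append([c / a[n][n] for c in qn])
    return q

def product_entry(a, q, n):
    """n-th entry of QP by formula (1.4): sum_j [p_n]_j q_j."""
    rn = [Fraction(0)] * (n + 1)
    for j in range(n + 1):
        for k, c in enumerate(q[j]):
            rn[k] += a[n][j] * c
    return rn

# Example: p_n(x) = (1 + x)^n, i.e. a[n][k] = C(n, k).
N = 6
a = [[Fraction(comb(n, k)) for k in range(n + 1)] for n in range(N + 1)]
q = invert(a)
```

The check that QP = E amounts to Σ_j C(n, j) (x − 1)^j = x^n, the binomial theorem in disguise.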

In this paper we study P_{s,t} through the corresponding elements P_{s,t} of the algebra Q. We therefore rewrite Definition 1.1 in the language of the algebra Q.

Definition 1.3 (Equivalent form of Definition 1.1). A polynomial process is a family {P_{s,t} ∈ Q : 0 ≤ s ≤ t} with the following properties:

(i) for 0 ≤ s ≤ t and n = 0, 1, ..., the n-th component of P_{s,t} is a polynomial of degree n,
(ii) P_{s,t}(E − FD) = E − FD,
(iii) for 0 ≤ s ≤ t ≤ u we assume

(1.7) P_{s,t} P_{t,u} = P_{s,u}.

Since the special elements (1.5) and (1.6) satisfy (1, 0, 0, ...) = E − FD, property (ii) is P_{s,t}(1, 0, 0, ...) = (1, 0, 0, ...); see Definition 1.1(ii).

1.2. Martingale polynomials. In this section we re-derive [32, Theorem 1]. We note that by Proposition 1.2 each P_{s,t} is an invertible element of Q. So from (1.7) we see that P_{t,t} = E for all t ≥ 0. In particular, P_{0,t} is invertible and M_t = P_{0,t}^{-1} consists of polynomials m_k(x; t) in variable x of degree k. From (1.7), we get P_{0,s} P_{s,t} M_t = P_{0,t} M_t = E. Multiplying this on the left by M_s = P_{0,s}^{-1}, we see that

(1.8) M_s = P_{s,t} M_t,

i.e., {M_t} is a sequence of martingale polynomials for P_{s,t} and in addition M_0 = E. Conversely,

(1.9) P_{s,t} = M_s M_t^{-1}.

Ref. [32] points out that in general martingale polynomials are not unique. However, any sequence M̃_t = (m̃_0(x; t), m̃_1(x; t), ...) of martingale polynomials (with m̃_k(x; t) of degree k for k = 0, 1, ...) still determines P_{s,t} uniquely via P_{s,t} = M̃_s M̃_t^{-1}.

1.3. Quadratic harnesses. Recall that our goal is to abstract out the properties of the (algebraic) polynomial process {P_{s,t}} that correspond to relations (1.2) and (1.3). Since this correspondence is not direct, we first give the self-contained algebraic definition, and then point out the motivation for such a definition.

Definition 1.4. We will say that a polynomial process {P_{s,t} : 0 ≤ s ≤ t} is a quadratic harness with parameters η, θ, σ, τ, γ if the following three conditions hold:

(i) (martingale property) P_{s,t}(FD − F^2 D^2) = FD − F^2 D^2,
(ii) (harness property) there exists X ∈ Q such that for all t ≥ 0 we have

(1.10) P_{0,t} F = (F + tX) P_{0,t},

(iii) (quadratic harness property) the element X ∈ Q introduced in (1.10) satisfies the following quadratic equation

(1.11) XF − γFX = E + ηF + θX + σF^2 + τX^2.

Remark 1.5. Using the martingale polynomials M_t = P_{0,t}^{-1} from Section 1.2, we can re-write assumption (1.10) as

(1.12) F M_t = M_t (F + tX).

We now give a brief explanation of how the algebraic properties in conditions (i)-(iii) of Definition 1.4 are related to the corresponding properties of a (polynomial) Markov process.

1.3.1. Motivation for the martingale property. For a Markov process (X_t) with the natural filtration (F_t), condition (i) takes the more familiar form E(X_t | F_s) = X_s. Translated back into the language of linear operators on polynomials, this is P_{s,t}(x) = x. In the language of the algebra Q this is P_{s,t}(0, x, 0, 0, ...) = (0, x, 0, 0, ...). To get the final form of condition (i) we note that FD − F^2 D^2 = (0, x, 0, 0, ...).

1.3.2. Motivation for harness property (1.10). For a Markov process (X_t) with martingale polynomials m_n(x; t), property (1.2) implies that

E(X_t m_n(X_t; t) | X_s) = E(X_t m_n(X_u; u) | X_s) = E(E(X_t | X_s, X_u) m_n(X_u; u) | X_s)
= ((u − t)/(u − s)) X_s E(m_n(X_u; u) | X_s) + ((t − s)/(u − s)) E(X_u m_n(X_u; u) | X_s)
= ((u − t)/(u − s)) X_s m_n(X_s; s) + ((t − s)/(u − s)) E(X_u m_n(X_u; u) | X_s).

The resulting identity

E(X_t m_n(X_t; t) | X_s) = ((u − t)/(u − s)) X_s m_n(X_s; s) + ((t − s)/(u − s)) E(X_u m_n(X_u; u) | X_s)

in the language of the algebra Q with M_t = (m_0(x; t), m_1(x; t), ...) becomes

(1.13) P_{s,t} F M_t = ((u − t)/(u − s)) F M_s + ((t − s)/(u − s)) P_{s,u} F M_u.

Since each polynomial x m_n(x; t) can be written as a linear combination of m_0(x; t), ..., m_{n+1}(x; t), one can find J_t ∈ Q such that F M_t = M_t J_t. Inserting this into (1.13), we see that the martingale property (1.8) eliminates P_{s,t} and, after left-multiplication by M_s^{-1}, we get

(u − s) J_t = (u − t) J_s + (t − s) J_u.

In particular, J_t depends linearly on t and thus

J_t = J_0 + t (J_1 − J_0).

This shows that for any martingale polynomials the harness property implies that there exist Y, X ∈ Q such that

(1.14) F M_t = M_t (Y + tX).

In our special case of M_t = P_{0,t}^{-1}, we get Y = J_0 = F M_0 = F E = F. This establishes (1.12), which of course is equivalent to (1.10).

1.3.3. Motivation for quadratic harness property (1.11). Suppose a polynomial process {P_{s,t}} in the sense of Definition 1.3 arises from a Markov process with polynomial conditional moments which is a harness and in addition has quadratic conditional variances (1.3). Then, from the previous discussion, (1.14) holds, and under mild technical assumptions [8, Theorem 2.3] shows that X, Y satisfy a commutation equation which reduces to (1.11) when Y = F. This motivates condition (iii). In fact, it is known that under some additional assumptions on the growth of moments, (1.10) and (1.11) imply (1.2) and (1.3); see [31, Section 4.1]. This equivalence is also implicit in the proof of [8, Theorem 2.3] and is explicitly used in [12, page 1244]. However, this has no direct bearing on our paper, as here we simply adopt the algebraic Definition 1.4.

1.4. Infinitesimal generator. A polynomial process {P_{s,t}} with harness property (1.10) is uniquely determined by X. Indeed, the n-th element of the sequence on the left hand side of (1.10) is the (n + 1)-th polynomial in P_{0,t}, while the n-th element of the sequence on the right hand side of (1.10) depends only on the first n polynomials in P_{0,t}. In fact, one can check that

(1.15) P_{0,t} = Σ_{k=0}^∞ (F + tX)^k (E − FD) D^k.

Proof. From (1.10) we get

(1.16) P_{0,t} F (F^k D^k − F^{k+1} D^{k+1}) = (F + tX) P_{0,t} (F^k D^k − F^{k+1} D^{k+1}).

Denote Y_k = P_{0,t}(F^k D^k − F^{k+1} D^{k+1}). Multiplying (1.16) by D from the right, we get

Y_{k+1} = (F + tX) Y_k D.

Since Y_0 = P_{0,t}(E − FD) = E − FD, we get Y_k = (F + tX)^k (E − FD) D^k. Telescoping gives P_{0,t} = Σ_{k=0}^∞ Y_k. □

Since the k-th element of F + tX has degree k + 1, it follows from (1.12) that M_t is a rational function of t. Formula (1.9) shows that the left infinitesimal generator

(1.17) A_t = lim_{h→0+} (1/h)(P_{t−h,t} − E), t > 0,

is a well defined element of Q, and that

(1.18) A_t M_t = −(∂/∂t) M_t

with differentiation defined componentwise.

Since P_{s,t} is continuous in t, the right infinitesimal generator exists and is given by the same expression. To see this, we compute lim_{h→0+} (1/h)(P_{t,t+h} − E) on M_t. We have

lim_{h→0+} (1/h)(P_{t,t+h} M_t − M_t) = lim_{h→0+} P_{t,t+h} (1/h)(M_t − M_{t+h})
= lim_{h→0+} P_{t,t+h} · lim_{h→0+} (1/h)(M_t − M_{t+h}) = −(∂/∂t) M_t.

The infinitesimal generator A_t determines P_{s,t} uniquely; for an algebraic proof see Proposition 2.3. Clearly,

M_t = E − ∫_0^t A_s M_s ds.

The infinitesimal generator A_t and its companion operator A_t : P → P are the main objects of interest in this paper.

Remark 1.6. We note that A_t = (0, a_1(x; t), a_2(x; t), ...) always starts with a 0, as from P_{s,t}(1) = 1 it follows that A_t(1) = 0. It is clear that a_n(x; t) is a polynomial in x of degree at most n. The martingale property implies that a_1(x; t) = 0, as A_t(x) = 0. In the latter case the infinitesimal generator, considered as an element of Q, starts with two zeros, A_t = (0, 0, a_2(x; t), a_3(x; t), ...).

2. Basic properties of infinitesimal generators for quadratic harnesses

Our first result introduces an auxiliary element H_t ∈ Q that is related to A_t by a commutation equation.

Theorem 2.1. Suppose that {P_{s,t} ∈ Q : 0 ≤ s ≤ t} is a quadratic harness as in Definition 1.4 with generator A_t. For t > 0, let

(2.1) H_t = A_t F − F A_t

and denote T_t = F − t H_t. Then

(2.2) H_t T_t − γ T_t H_t = E + θ H_t + η T_t + τ H_t^2 + σ T_t^2.

Proof. Differentiating (1.12) and then using (1.18) we get

−F A_t M_t = −A_t M_t (F + tX) + M_t X.

Multiplying this from the right by M_t^{-1} and using (1.12) to replace M_t(F + tX)M_t^{-1} by F, we get

−F A_t = −A_t F + M_t X M_t^{-1}.

Comparing this with (2.1) we see that

(2.3) H_t = M_t X M_t^{-1}.

We now use this and (1.12) in the definition T_t = F − tH_t = M_t(F + tX)M_t^{-1} − tH_t. Using (2.3) we get

T_t = M_t F M_t^{-1}.

To derive equation (2.2) we now multiply (1.11) by M_t from the left and by M_t^{-1} from the right. □

Remark 2.2. As observed in Remark 1.6, the n-th element of A_t is a polynomial of degree at most n. Thus, writing H_t = (h_0(x), h_1(x), ...), from (2.1) we see that h_n is of degree at most n + 1. The martingale property implies that h_0(x) = 0.

We will need several uniqueness results that follow from a more detailed elementwise analysis of the sequences of polynomials. We first show that the generator determines the polynomial process uniquely, at least under the harness condition (1.10).

Proposition 2.3. A polynomial process {P_{s,t} : 0 ≤ s < t} with harness property (1.10) is uniquely determined by its infinitesimal generator {A_t : t > 0}.

Proof. Fix s

P F FP = M M−1F FM M−1 s,t − s,t s t − s t = M (Y + tX)M−1 M (Y + sX)M−1 = (t s)M M−1 M XM−1 . s t − s t − s t t t  Therefore, from (2.3) we get

P_{s,t} F = F P_{s,t} + (t − s) P_{s,t} H_t.

With s = 0, this implies

M_t F = (F − t H_t) M_t.

This equation is similar to equation (1.10), and again uniqueness follows from consideration of the degrees of the polynomials. The solution is

M_t = Σ_{k=0}^∞ (F − t H_t)^k (E − FD) D^k;

compare (1.15). Thus M_t is determined uniquely, and (1.9) shows that the operators P_{s,t} are uniquely determined. Since H_t is expressed in terms of A_t by (2.1), this ends the proof. □

Next, we show that H_t = (h_0, h_1, ...) is uniquely determined by the commutation equation (2.2) with the "initial condition" h_0 = 0. From the proof of Proposition 2.3 it therefore follows that the entire quadratic harness {P_{s,t}}, as well as its generator, are also uniquely determined by (2.2).

Proposition 2.4. If σ, τ ≥ 0 and στ ≠ 1, then equation (2.2) has a unique solution among H_t ∈ Q such that h_0(x) = 0.

Proof. Eliminating T_t = F − t H_t from (2.2), we can rewrite it in the following equivalent form:

(2.4) H_t F − γ F H_t = E + θ H_t + η (F − t H_t) + τ H_t^2 + σ (F − t H_t)^2 + (1 − γ) t H_t^2.

Write H_t = H = (h_n(x))_{n=0,1,...} for a fixed t > 0, with h_0 = 0. We will simultaneously prove that for n ≥ 1 the polynomial h_n(x) is uniquely determined and that its degree is at most n + 1. The proof is by induction. With h_0 = 0, it is clear that the thesis holds for n = 0. We therefore assume that h_0, h_1, ..., h_n are given polynomials of degrees at most 1, 2, ..., n + 1, respectively.

Looking at the n-th element of (2.4) for n ≥ 0, we get the following equation for h_{n+1}(x):

(2.5) (1 + σt − [h_n]_{n+1} (σt^2 + (1 − γ)t + τ)) h_{n+1}(x)
= x^n + η x^{n+1} + σ x^{n+2} + (θ − tη) h_n(x) + (γ − σt) x h_n(x) + (σt^2 + (1 − γ)t + τ) Σ_{j=0}^{n} [h_n]_j h_j(x).

The degree of the polynomial on the right hand side is at most n + 2, as the highest degree term on the right hand side is (σ + (γ − σt)[h_n]_{n+1}) x^{n+2}. In order to complete the proof, we only need to verify that for all n ≥ 0 the coefficient 1 + σt − [h_n]_{n+1}(σt^2 + (1 − γ)t + τ) on the left hand side of (2.5) does not vanish. We need to consider separately two cases.

Case γ + στ ≠ 0: We proceed by contradiction. Suppose that for some n ≥ 0 the coefficient at h_{n+1}(x) on the left hand side of (2.5) is 0. Since 1 + σt > 0, this implies that σt^2 + (1 − γ)t + τ cannot be zero. So we get

[h_n]_{n+1} = (1 + σt)/(σt^2 + (1 − γ)t + τ).

We now use this value to compute the coefficient at x^{n+2} on the right hand side of (2.5). We get

σ + (γ − σt)[h_n]_{n+1} = (γ + στ)/(σt^2 + (1 − γ)t + τ) ≠ 0.

Since the left hand side of (2.5) is 0, and the degree of the right hand side of (2.5) is n + 2, this is a contradiction. This shows that the coefficient of h_{n+1}(x) on the left hand side of (2.5) is non-zero for all n ≥ 0. So each polynomial h_{n+1}(x) is determined uniquely and has degree at most n + 2 for all n ≥ 0.

Case γ + στ = 0: In this case σt^2 + (1 − γ)t + τ = (1 + σt)(τ + t). Comparing the coefficients at x^{n+2} on both sides of (2.5) we get

(2.6) (1 + σt)(1 − [h_n]_{n+1}(τ + t)) [h_{n+1}]_{n+2} = σ(1 − [h_n]_{n+1}(τ + t)).

Since h_0(x) = 0, this gives [h_1]_2 = σ/(1 + σt). So for n = 1 we get 1 − [h_n]_{n+1}(τ + t) = (1 − στ)/(1 + σt) ≠ 0. Dividing both sides of (2.6) by this expression, we get recursively [h_n]_{n+1} = σ/(1 + σt) for all n ≥ 1. Therefore, for n ≥ 1, the left hand side of (2.5) simplifies to (1 − στ) h_{n+1}(x). Using again 1 − στ ≠ 0, this shows that the polynomial h_{n+1} is determined uniquely and its degree is at most n + 2. Of course, (2.5) determines h_1(x) uniquely, too, as h_0(x) = 0. □

Our main result is the identification of the infinitesimal generator for quadratic harnesses with γ = −στ. Such processes were called "free quadratic harnesses" in [8, Section 4.1]. Markov processes with the free quadratic harness property were constructed in [9], but the construction required more restrictions on the parameters than what we impose here.

In this section we represent the infinitesimal generators using the auxiliary series ϕ_t(D), with D defined by (1.6). In Section 4 we will use the results of this section to derive the integral representation (4.3) for the operator A_t under a more restricted range of parameters η, θ, σ, τ.

For a power series ϕ(z) = Σ_{k=0}^∞ c_k z^k we shall write ϕ(D) for the series Σ_k c_k D^k. We note that since the sequence D^k begins with k zeros, Σ_k c_k D^k is a sequence of finite sums:

ϕ(D) = (c_0, c_0 x + c_1, c_0 x^2 + c_1 x + c_2, ..., Σ_{j=0}^n c_j x^{n−j}, ...).

So ϕ(D) is a well defined element of Q.

We will also need D_1 = Σ_{k=0}^∞ F^k D^{k+1} = (0, 1, 2x, 3x^2, ...), which represents the derivative.

Theorem 2.5. Fix η, θ ∈ R and σ, τ ≥ 0 such that στ ≠ 1. Then the infinitesimal generator of the quadratic harness with the above parameters and with γ = −στ is given by

(2.7) A_t = (1/(1 + σt)) (E + ηF + σF^2) D_1 ϕ_t(D) D, t > 0,

where ϕ_t(z) = Σ_{k=1}^∞ c_k(t) z^{k−1} for small enough z solves the quadratic equation

(2.8) (z^2 + ηz + σ)(t + τ) ϕ_t^2 + ((θ − tη)z − 2tσ − στ − 1) ϕ_t + tσ + 1 = 0,

and the solution is chosen so that ϕ_t(0) = 1. (For στ < 1 this solution is written explicitly in formula (4.4) below.)

We note that for each fixed t equation (2.8) has two real roots for z close enough to 0. As ϕ_t(z) we choose the smaller root when στ < 1 and the larger root when στ > 1. For z = 0, equation (2.8) becomes

σ(t + τ) ϕ_t^2(0) − (1 + στ + 2tσ) ϕ_t(0) + 1 + tσ = 0,

so this procedure ensures that ϕ_t(0) = 1.
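Since (2.8) is an explicit quadratic in ϕ_t, the chosen root can be checked numerically. The sketch below is our own illustration (the parameter values are an arbitrary test choice with στ < 1): it evaluates the smaller root in the stable form 2(1 + tσ)/(A + √(A^2 − 4B)) used in Section 4, and verifies both the normalization ϕ_t(0) = 1 and equation (2.8).

```python
# A numerical check (ours) of the smaller root of (2.8) for sigma*tau < 1.
from math import sqrt, isclose

eta, theta, sigma, tau, t = 0.5, -0.3, 0.25, 2.0, 0.7   # arbitrary test values

def phi(z):
    A = z * (t * eta - theta) + 2 * t * sigma + sigma * tau + 1
    B = (z * z + z * eta + sigma) * (1 + t * sigma) * (t + tau)
    return 2 * (1 + t * sigma) / (A + sqrt(A * A - 4 * B))

def residual(z):
    """Left-hand side of (2.8) evaluated at phi(z); should vanish."""
    p = phi(z)
    return ((z * z + eta * z + sigma) * (t + tau) * p * p
            + ((theta - t * eta) * z - 2 * t * sigma - sigma * tau - 1) * p
            + t * sigma + 1)
```

At z = 0 the discriminant equals (1 − στ)^2 > 0, so phi(0) = 1 exactly, in agreement with the root-selection rule above.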

3. Proof of Theorem 2.5

The plan of the proof is to solve equation (2.2) for H_t, and then to use equation (2.1) to determine A_t.

3.1. Part I of proof: solution of equation (2.2) when γ = −στ. Equation (2.2) takes the form

H_t F − t H_t^2 + στ F H_t − στ t H_t^2 = E + θ H_t + η(F − t H_t) + τ H_t^2 + σ(F^2 − t H_t F − t F H_t + t^2 H_t^2).

So after simplifications, the equation to solve for the unknown H_t is

(3.1) (1 + σt) H_t F = E + ηF + σF^2 + (θ − ηt) H_t − σ(t + τ) F H_t + (t + τ)(1 + σt) H_t^2.

Lemma 3.1. The solution of (3.1) with the initial element h_0 = 0 is

(3.2) H_t = (1/(1 + σt)) (E + ηF + σF^2) ϕ_t(D) D,

where ϕ_t satisfies equation (2.8) and ϕ_t(0) = 1.

Proof. Since t > 0 is fixed, we suppress the dependence on t and use Remark 2.2 to write H_t = H = (0, h_1(x), ...). From (3.1) we read out that h_1(x) = (1 + ηx + σx^2)/(1 + σt). From Proposition 2.4 we see that (3.1) has a unique solution. In view of uniqueness, we seek the solution in a special form

(3.3) H = (1/(1 + σt)) (E + ηF + σF^2) Σ_{k=1}^∞ c_k D^k

with c_1 = 1 and c_k = c_k(t) ∈ R. Note that

H^2 = (1/(1 + σt)^2) (E + ηF + σF^2) Σ_{k=1}^∞ c_k D^k (E + ηF + σF^2) Σ_{j=1}^∞ c_j D^j

= (1/(1 + σt)^2) (E + ηF + σF^2) Σ_{k=1}^∞ c_k D^k Σ_{j=1}^∞ c_j D^j
+ (η/(1 + σt)^2) (E + ηF + σF^2) Σ_{k=1}^∞ c_k D^{k−1} Σ_{j=1}^∞ c_j D^j
+ (σ/(1 + σt)^2) (F + ηF^2 + σF^3) Σ_{j=1}^∞ c_j D^j + (σ/(1 + σt)^2) (E + ηF + σF^2) Σ_{k=2}^∞ c_k D^{k−2} Σ_{j=1}^∞ c_j D^j,

where we used DF = E to write Σ_{k=1}^∞ c_k D^k (E + ηF + σF^2) = Σ_{k=1}^∞ c_k D^k + η Σ_{k=1}^∞ c_k D^{k−1} + σ(c_1 F + Σ_{k=2}^∞ c_k D^{k−2}) with c_1 = 1.

Inserting this into (3.1) we get

E + ηF + σF^2 + (E + ηF + σF^2) Σ_{k=2}^∞ c_k D^{k−1} = E + ηF + σF^2
+ ((θ − tη)/(1 + σt)) (E + ηF + σF^2) Σ_{k=1}^∞ c_k D^k − (σ(t + τ)/(1 + σt)) (F + ηF^2 + σF^3) Σ_{k=1}^∞ c_k D^k
+ ((t + τ)/(1 + σt)) [ (E + ηF + σF^2) Σ_{k=1}^∞ c_k D^k Σ_{j=1}^∞ c_j D^j + η (E + ηF + σF^2) Σ_{k=1}^∞ c_k D^{k−1} Σ_{j=1}^∞ c_j D^j
+ σ (F + ηF^2 + σF^3) Σ_{j=1}^∞ c_j D^j + σ (E + ηF + σF^2) Σ_{k=2}^∞ c_k D^{k−2} Σ_{j=1}^∞ c_j D^j ].

The terms with F + ηF^2 + σF^3 cancel out, so E + ηF + σF^2 factors out. We further restrict our search for the solution by requiring that the remaining factors match, i.e.

Σ_{k=1}^∞ c_{k+1} D^k = ((θ − tη)/(1 + σt)) Σ_{k=1}^∞ c_k D^k
+ ((t + τ)/(1 + σt)) ( Σ_{k=1}^∞ c_k D^k Σ_{j=1}^∞ c_j D^j + η Σ_{k=1}^∞ c_k D^{k−1} Σ_{j=1}^∞ c_j D^j + σ Σ_{k=2}^∞ c_k D^{k−2} Σ_{j=1}^∞ c_j D^j ).

Collecting the coefficients at the powers of D we get

Σ_{k=1}^∞ c_{k+1} D^k = ((θ − tη)/(1 + σt)) Σ_{k=1}^∞ c_k D^k
+ ((t + τ)/(1 + σt)) ( Σ_{k=2}^∞ Σ_{j=1}^{k−1} c_j c_{k−j} D^k + η Σ_{k=1}^∞ Σ_{j=0}^{k−1} c_{j+1} c_{k−j} D^k + σ Σ_{k=1}^∞ Σ_{j=0}^{k−1} c_{j+2} c_{k−j} D^k ).

We now compare the coefficients at the powers of D. Since στ ≠ 1, for k = 1 we get

c_2 = (θ − ηt)/(1 + σt) + ((t + τ)/(1 + σt)) η + ((t + τ)/(1 + σt)) σ c_2.

So c_2 = (θ + ητ)/(1 − στ) = β (say). For k ≥ 2, we have the recurrence

(3.4) c_{k+1} = ((θ − ηt)/(1 + σt)) c_k + ((t + τ)/(1 + σt)) ( Σ_{j=1}^{k−1} c_j c_{k−j} + η Σ_{j=0}^{k−1} c_{j+1} c_{k−j} + σ Σ_{j=0}^{k−1} c_{j+2} c_{k−j} ).

We solve this recurrence by the method of generating functions. One can proceed here with a formal power series, and then invoke uniqueness to verify that the power series has positive radius of convergence. Or one can use an a priori bound from Lemma 3.2 below and restrict the argument of the generating function to small enough |z|. Note that in this argument t is fixed.

Let ϕ(z) = Σ_{k=1}^∞ c_k z^{k−1}. Then

ϕ(z) = 1 + βz + Σ_{k=2}^∞ c_{k+1} z^k
= 1 + βz + ((θ − ηt)/(1 + σt)) z(ϕ(z) − 1) + ((t + τ)/(1 + σt)) z^2 ϕ^2(z) + η ((t + τ)/(1 + σt)) z(ϕ^2(z) − 1)
+ σ ((t + τ)/(1 + σt)) (ϕ(z)(ϕ(z) − 1) − βz).

This gives the quadratic equation (2.8) for ϕ = ϕ_t. □
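The recurrence (3.4) and the quadratic equation (2.8) can be cross-checked with exact rational arithmetic. In the sketch below (our own illustration; the parameter values are an arbitrary choice with στ ≠ 1) we generate c_1, ..., c_K from (3.4), moving the c_{k+1} term hidden in the σ-sum to the left hand side, and then verify that the truncated series annihilates (2.8) coefficient by coefficient.

```python
# A consistency check (ours) of recursion (3.4) against equation (2.8).
from fractions import Fraction

eta, theta = Fraction(1, 2), Fraction(-1, 3)
sigma, tau, t = Fraction(1, 4), Fraction(2), Fraction(3, 5)
K = 12                                             # compute c_1, ..., c_K

c = [None, Fraction(1)]                            # c_1 = 1
c.append((theta + eta * tau) / (1 - sigma * tau))  # c_2 = beta
A = (theta - eta * t) / (1 + sigma * t)
B = (t + tau) / (1 + sigma * t)
for k in range(2, K):
    s1 = sum(c[j] * c[k - j] for j in range(1, k))
    s2 = sum(c[j + 1] * c[k - j] for j in range(k))
    s3 = sum(c[j + 2] * c[k - j] for j in range(k - 1))   # j = k-1 term moved left
    c.append((A * c[k] + B * (s1 + eta * s2 + sigma * s3))
             * (1 + sigma * t) / (1 - sigma * tau))       # solve (3.4) for c_{k+1}

# Truncated series phi(z) = sum_m phi[m] z^m and the residual of (2.8)
phi = [c[m + 1] for m in range(K)]
phi2 = [sum(phi[i] * phi[m - i] for i in range(m + 1)) for m in range(K)]
quad = [sigma, eta, Fraction(1)]                   # sigma + eta z + z^2
lin = [-(2 * t * sigma + sigma * tau + 1), theta - t * eta]
res = [Fraction(0)] * K
for m in range(K):
    res[m] += sum((t + tau) * quad[i] * phi2[m - i] for i in range(3) if 0 <= m - i)
    res[m] += sum(lin[i] * phi[m - i] for i in range(2) if 0 <= m - i)
res[0] += t * sigma + 1
```

Every computed coefficient of the residual vanishes exactly, so the recursion and the quadratic equation encode the same series.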

The following technical lemma assures that the series ϕ(z) = Σ_{k=1}^∞ c_k z^{k−1} converges for all small enough |z|.

Lemma 3.2. Suppose στ ≠ 1 and {c_k} is a solution of recursion (3.4) with c_1 = 1 for a fixed t. Then for every p > 1 there exists a constant M such that

(3.5) |c_k| ≤ M^{k−2}/k^p for all k ≥ 3.

Proof. We will prove the case p = 2 only, as this suffices to justify the convergence of the series. Solving (3.4) for c_{k+1}, which appears also in one term on the right hand side of (3.4), we get

(|1 − στ|/(1 + σt)) |c_{k+1}| ≤ (|θ − ηt|/(1 + σt)) |c_k| + ((t + τ)/(1 + σt)) ( Σ_{j=1}^{k−1} |c_j c_{k−j}| + |η| Σ_{j=0}^{k−1} |c_{j+1} c_{k−j}| + σ Σ_{j=0}^{k−2} |c_{j+2} c_{k−j}| ).

Since we are not going to keep track of the constants, we simplify this as

θ ηt t + τ k−1 k−1 k−2 c | − | c + c c + η c c +σ c c | k+1|≤ 1 στ | k| 1 στ | j k−j | | | | j+1 k−j | | j+2 k−j | − | − |  Xj=1 Xj=0 Xj=0 

(3.6) |c_{n+1}| ≤ A |c_n| + B ( Σ_{k=1}^{n−1} |c_k c_{n−k}| + Σ_{k=0}^{n−1} |c_{k+1} c_{n−k}| + Σ_{k=0}^{n−2} |c_{k+2} c_{n−k}| )

with A = |θ − ηt|/|1 − στ| and B = ((t + τ)/|1 − στ|)(1 + |η| + σ). Here, n ≥ 2.

We now choose M ≥ 1 large enough so that (3.5) holds for k = 3, 4, 5, 6, and we also require that

(3.7) 4A + (24 |c_2| + 175) B ≤ M.

We now proceed by induction and assume that (3.5) holds for all indices k between 3 and n, for some n ≥ 6. To complete the induction step we provide bounds for the sums on the right hand side of (3.6). The first sum is handled as follows:

Σ_{k=1}^{n−1} |c_k c_{n−k}| = 2 |c_1 c_{n−1}| + 2 |c_2 c_{n−2}| + Σ_{k=3}^{n−3} |c_k c_{n−k}|.

We apply the induction bound (3.5) to c_3, ..., c_{n−1}. Noting that c_1 = 1, we get

Σ_{k=1}^{n−1} |c_k c_{n−k}| ≤ 2 M^{n−3}/(n−1)^2 + 2 |c_2| M^{n−4}/(n−2)^2 + M^{n−4} Σ_{k=3}^{n−3} 1/(k^2 (n−k)^2)
≤ 8 M^{n−2}/(n+1)^2 + 8 |c_2| M^{n−2}/(n+1)^2 + M^{n−2} Σ_{k=3}^{[n/2]} 1/(k^2 (n−k)^2) + M^{n−2} Σ_{k=[n/2]+1}^{n−3} 1/(k^2 (n−k)^2)
≤ 8 M^{n−2}/(n+1)^2 + 8 |c_2| M^{n−2}/(n+1)^2 + (4 M^{n−2}/n^2) Σ_{k=3}^{[n/2]} 1/k^2 + (4 M^{n−2}/n^2) Σ_{k=[n/2]+1}^{n−3} 1/(n−k)^2
< (8 + 8 |c_2| + 16 π^2/3) M^{n−2}/(n+1)^2 < (61 + 8 |c_2|) M^{n−2}/(n+1)^2.

Here we used several times the bound (n+1)/(n−2) ≤ 2 for n ≥ 6, the inequality M ≥ 1, and we estimated two finite sums by the infinite series Σ_{k=1}^∞ 1/k^2 = π^2/6.

We proceed similarly with the second sum:

Σ_{k=0}^{n−1} |c_{k+1} c_{n−k}| = 2 |c_1 c_n| + 2 |c_2 c_{n−1}| + Σ_{k=2}^{n−3} |c_{k+1} c_{n−k}|
≤ 2 M^{n−2}/n^2 + 2 |c_2| M^{n−3}/(n−1)^2 + (4 M^{n−3}/n^2) Σ_{k=1}^{[n/2]} 1/k^2 + (4 M^{n−3}/n^2) Σ_{k=[n/2]+1}^{n−3} 1/(n−k)^2
≤ (8 + 8 |c_2| + 16 π^2/3) M^{n−2}/(n+1)^2 ≤ (61 + 8 |c_2|) M^{n−2}/(n+1)^2.

Finally, we bound the third sum on the right hand side of (3.6). We get

Σ_{k=0}^{n−2} |c_{k+2} c_{n−k}| = 2 |c_2 c_n| + Σ_{k=1}^{n−3} |c_{k+2} c_{n−k}|
≤ 2 |c_2| M^{n−2}/n^2 + (4 M^{n−2}/n^2) Σ_{k=1}^{[n/2]} 1/k^2 + (4 M^{n−2}/n^2) Σ_{k=[n/2]+1}^{n−3} 1/(n−k)^2
≤ (8 |c_2| + 16 π^2/3) M^{n−2}/(n+1)^2 ≤ (8 |c_2| + 53) M^{n−2}/(n+1)^2.

Combining these bounds together and using the induction assumption for the first term on the right hand side of (3.6), we get

|c_{n+1}| ≤ (4A + (24 |c_2| + 175) B) M^{n−2}/(n+1)^2.

In view of (3.7), this shows that |c_{n+1}| ≤ M^{n−1}/(n+1)^2, thus completing the proof of (3.5) by induction. □

3.2. Part II of proof: solution of equation (2.1). We now use (3.2) to determine A_t.

Since E − FD = (1, 0, 0, ...), from Remark 1.6 it follows that A_t FD = A_t. Therefore, multiplying (2.1) by D from the right we get

A_t = F A_t D + H_t D.

Iterating this, we get

(3.8) A_t = Σ_{k=0}^∞ F^k H_t D^{k+1},

which is well defined, as the series consists of finite sums elementwise. Since H_t is given by (3.2), we get

(3.9) A_t = (1/(1 + σt)) Σ_{k=0}^∞ F^k (E + ηF + σF^2) ϕ_t(D) D D^{k+1}
= (1/(1 + σt)) (E + ηF + σF^2) Σ_{k=0}^∞ F^k D^{k+1} ϕ_t(D) D.

We now note that

(3.10) Σ_{k=0}^∞ F^k D^{k+1} = D_1.

(This can be seen either by examining each element of the sequence, or by solving the equation D_1 F − F D_1 = E, which is just the product formula for the derivative, by the previous technique. The latter equation is of course of the same form as (2.1).) Replacing the series in (3.9) by the right hand side of (3.10), we get (2.7). This ends the proof of Theorem 2.5.
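Both identities in this step can be verified on a finite truncation of Q. The sketch below (our own illustration) reuses the elementwise product (1.4) to check the partial sums of (3.10) against the derivative element D_1, and the commutation D_1 F − F D_1 = E.

```python
# A truncated check (ours) of identity (3.10) and of the product-rule
# commutation D_1 F - F D_1 = E mentioned in its alternative proof.
N = 6

def mono(k):
    return [0] * k + [1]

def padd(p, q):
    m = max(len(p), len(q))
    get = lambda r, i: r[i] if i < len(r) else 0
    return [get(p, i) + get(q, i) for i in range(m)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def mul(P, Q):
    """Product (1.4); entries needing P beyond the truncation are dropped."""
    R = []
    for qk in Q:
        rk = [0]
        for j, c in enumerate(qk):
            if c and j < len(P):
                rk = padd(rk, [c * a for a in P[j]])
        R.append(rk)
    return R

E = [mono(k) for k in range(N + 1)]
F = [mono(k + 1) for k in range(N + 1)]
D = [[0]] + [mono(k - 1) for k in range(1, N + 1)]
D1 = [[0]] + [[0] * (n - 1) + [n] for n in range(1, N + 1)]  # (0, 1, 2x, 3x^2, ...)

# Partial sums of (3.10): S = sum_{k=0}^{N} F^k D^{k+1}
S = [[0] for _ in range(N + 1)]
Fk, Dk = E, D                       # F^k and D^{k+1}
for _ in range(N + 1):
    S = [padd(a, b) for a, b in zip(S, mul(Fk, Dk))]
    Fk, Dk = mul(F, Fk), mul(Dk, D)
```

Entrywise, the n-th component of the sum is n copies of x^{n−1}, i.e. the derivative of x^n.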

4. Integral representation

In this section it is more convenient to use the linear operators on P instead of the sequences of polynomials. We assume that the polynomial process {P_{s,t} : 0 ≤ s ≤ t} corresponds to a quadratic harness from Definition 1.4, and as before we use the parameters η, θ, σ, τ and γ to describe the quadratic harness.

Infinitesimal generators of several quadratic harnesses, all different than those in Theorem 2.5, have been studied in this language by several authors. For quadratic harnesses with parameters η = σ = 0 and γ = q ∈ (−1, 1), according to [13], the infinitesimal generator A_t acting on a polynomial f is

(4.1) A_t(f)(x) = ∫_R (∂/∂x) ((f(y) − f(x))/(y − x)) ν_{x,t}(dy),

where ν_{x,t}(dy) is a uniquely determined probability measure. By inspecting the recurrences for the orthogonal polynomials {Q_n} and {W_n} from [13, Theorem 1.1(ii)], one can read out that for q^2 t ≥ (1 + q)τ the probability measure ν_{x,t} can be expressed in terms of the transition probabilities P_{s,t}(x, dy) of the Markov process by the formula ν_{x,t}(dy) = P_{t q^2 − (1+q)τ, t}(θ + qx, dy). In this form, the formula coincides with Anshelevich [1, Corollary 22], who considered the case η = θ = τ = σ = 0 and γ = q ∈ [−1, 1]. (However, the domain of the generator in [1] is much larger than the polynomials.) Earlier results in Refs. [5, page 392], [6, Example 4.9], and [7] dealt with infinitesimal generators for quadratic harnesses such that σ = η = γ = 0.

The following result gives an explicit formula for the infinitesimal generator of the evolution corresponding to the "free quadratic harness" in an integral form similar to (4.1). The main new feature is the presence of an extra quadratic factor in front of the integral in expression (4.3) for the infinitesimal generator. Denote

(4.2) α = (η + θσ)/(1 − στ), β = (ητ + θ)/(1 − στ).

Theorem 4.1. Fix σ, τ ≥ 0 such that στ < 1 and η, θ ∈ R such that 1 + αβ > 0. Let γ = −στ and let ϕ_t be a continuous solution of (2.8) with ϕ_t(0) = 1. Then ϕ_t is the moment generating function of a unique probability measure ν_t, and the generator of the quadratic harness with the above parameters acts on p ∈ P by

(4.3) A_t(p)(x) = ((1 + ηx + σx^2)/(1 + σt)) ∫ (∂/∂x) ((p(y) − p(x))/(y − x)) ν_t(dy), t > 0.

Proof. Since στ < 1, the solution of the quadratic equation (2.8) with ϕ_t(0) = 1 is

(4.4) ϕ_t(z) = (z(tη − θ) + 2tσ + στ + 1 − √((z(tη − θ) + 2tσ + στ + 1)^2 − 4(z^2 + zη + σ)(1 + tσ)(t + τ))) / (2(t + τ)(z^2 + zη + σ)).

(We omit the re-write for the case σ = η = 0.)

To avoid issues with σ = η = 0, it is better to rewrite (4.4) as

ϕ_t(z) = 2(1 + tσ)/(A + √(A^2 − 4B))

with A = z(tη − θ) + 2tσ + στ + 1 and B = (z^2 + zη + σ)(1 + tσ)(t + τ).

We will identify ν_t through its Cauchy-Stieltjes transform

G_{ν_t}(z) = ∫ (1/(z − x)) ν_t(dx),

which in our case will be well defined for all real z large enough. To this end we compute

(4.5) ϕ_t(1/z)/z = ((1 + στ + 2σt)z + tη − θ − √([(1 − στ)z − (α + σβ)t − β − ατ]^2 − 4(1 + σt)(t + τ)(1 + αβ))) / (2(t + τ)(σz^2 + ηz + 1)).

Under our assumptions, the above expression is well defined for real large enough z ∈ R.

Expression (4.5) coincides with the Cauchy-Stieltjes transform in [21, Proposition 2.3], with their parameters

c_{SY} = (1 − στ)/(1 + σt), α_{SY} = (ητ + θ)/(1 − στ), a_{SY} = (2ητ + θστ + θ + t(ηστ + η + 2θσ))/((στ − 1)^2)

and

b_{SY} = ((σt + 1)(t + τ)(η^2 τ + ηθ(στ + 1) + θ^2 σ + (1 − στ)^2))/((1 − στ)^3).

(We added the subscript "SY" to avoid confusion with our use of α in (4.2).) This shows that ϕ_t(1/z)/z is the Cauchy-Stieltjes transform of a unique compactly supported probability measure ν_t. For a more detailed description of the measure ν_t and explicit formulas for its discrete and absolutely continuous components we refer to [21, Theorem 2.1]; see also Remark 4.2 below.

It is well known that a Cauchy-Stieltjes transform is an analytic function in the upper complex plane, determines the measure uniquely, and if it extends to real z with |z| large enough, then the corresponding moment generating function is well defined for all |z| small enough and is given by G_{ν_t}(1/z)/z = ϕ_t(z). This shows that ϕ_t(z) is the moment generating function of the probability measure ν_t.

Next we observe that (3.3) in operator notation is
$$
H_t(x^{n})=\frac{1+\eta x+\sigma x^{2}}{1+\sigma t}\sum_{k=1}^{n}c_k(t)x^{n-k}.
$$
Writing $c_k(t)=\int y^{k-1}\,\nu_t(dy)$, we therefore get
$$
H_t(f)(x)=\frac{1+\eta x+\sigma x^{2}}{1+\sigma t}\int\frac{f(y)-f(x)}{y-x}\,\nu_t(dy).\tag{4.6}
$$
Since the operator version of relation (2.1) is $A_t(x^{n+1})=H_t(x^{n})+xA_t(x^{n})$, we derive (4.3) from (4.6) by induction on $n$; for a similar reasoning see [13, Lemma 2.4]. □
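The induction step rests on the pointwise identity $\partial_x\big[(y^{n+1}-x^{n+1})/(y-x)\big]=(y^{n}-x^{n})/(y-x)+x\,\partial_x\big[(y^{n}-x^{n})/(y-x)\big]$; integrating it against $\nu_t$ and multiplying by the common prefactor $(1+\eta x+\sigma x^2)/(1+\sigma t)$ gives $A_t(x^{n+1})=H_t(x^{n})+xA_t(x^{n})$. A sympy check of the identity for small $n$ (function names are ours):

```python
import sympy as sp

x, y = sp.symbols('x y')

def dd(f):
    """Divided difference (f(y) - f(x))/(y - x), reduced to a polynomial."""
    return sp.cancel((f.subs(x, y) - f)/(y - x))

checks = []
for n in range(1, 8):
    lhs = sp.diff(dd(x**(n + 1)), x)
    rhs = dd(x**n) + x*sp.diff(dd(x**n), x)
    checks.append(sp.expand(lhs - rhs) == 0)
```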

Remark 4.2. Denote by $\pi_{t,\eta,\theta,\sigma,\tau}(dx)$ the univariate law of $X_t$ for the free quadratic harness $(X_t)$ with parameters $\eta,\theta,\sigma,\tau$ as in [9, Section 3]. Then $\nu_t$ is given by
$$
\nu_t(dx)=\frac{1}{t(t+\tau)}\,(t^{2}+\theta t x+\tau x^{2})\,\pi_{t,\eta,\theta,\sigma,\tau}(dx).\tag{4.7}
$$
We read out this answer from [9, Eqn. (3.4)] using the following elementary relation between Cauchy-Stieltjes transforms: if $\nu(dx)=(ax^{2}+bx+c)\pi(dx)$ and $m=\int x\,\pi(dx)$, then the Cauchy-Stieltjes transforms of $\pi$ and $\nu$ are related by the formula
$$
G_{\nu}(z)=(az^{2}+bz+c)G_{\pi}(z)-am-az-b.\tag{4.8}
$$
In our setting, $m=0$, $a=\frac{\tau}{t(t+\tau)}$, $b=\frac{\theta}{t+\tau}$, $c=\frac{t}{t+\tau}$, and [9, Eqn. (3.4)] gives
$$
G_{\pi}(z)=\frac{\tau z+\theta t}{\tau z^{2}+\theta t z+t^{2}}+\frac{t\big[(1+\sigma\tau+2\sigma t)z+t\eta-\theta\big]-t\sqrt{\big[(1-\sigma\tau)z-(\alpha+\sigma\beta)t-\beta-\alpha\tau\big]^{2}-4(1+\sigma t)(t+\tau)(1+\alpha\beta)}}{2(\sigma z^{2}+\eta z+1)(\tau z^{2}+\theta t z+t^{2})}.\tag{4.9}
$$
Inserting this expression into the right-hand side of (4.8) we get the right-hand side of (4.5). Uniqueness of the Cauchy-Stieltjes transform implies (4.7).
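Relation (4.8) follows by integrating the algebraic identity $\frac{ax^{2}+bx+c}{z-x}=\frac{az^{2}+bz+c}{z-x}-a(x+z)-b$ against $\pi(dx)$. A sympy verification for a hypothetical two-atom measure $\pi=\tfrac12(\delta_u+\delta_v)$ (our example; any measure with finite first moment works the same way):

```python
import sympy as sp

z, a, b, c, u, v = sp.symbols('z a b c u v')

w = sp.Rational(1, 2)                      # equal weights on the atoms u, v
G_pi = w/(z - u) + w/(z - v)               # Cauchy-Stieltjes transform of pi
# nu(dx) = (a x^2 + b x + c) pi(dx)
G_nu = w*(a*u**2 + b*u + c)/(z - u) + w*(a*v**2 + b*v + c)/(z - v)
m = w*u + w*v                              # first moment of pi

# relation (4.8): G_nu(z) = (a z^2 + b z + c) G_pi(z) - a m - a z - b
residual = sp.simplify(G_nu - ((a*z**2 + b*z + c)*G_pi - a*m - a*z - b))
```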

5. Some other cases

Several other cases can be worked out by a similar technique based on (2.2). Recasting [18, Section IV] in our notation, for $m=0,1,\dots$ we have
$$
D_1^{m+1}F-FD_1^{m+1}=(m+1)D_1^{m},\tag{5.1}
$$
and if
$$
H=\sum_{k=1}^{\infty}\frac{c_k}{k!}\,D_1^{k}
$$
then
$$
A=\sum_{m=1}^{\infty}\frac{c_m}{(m+1)!}\,D_1^{m+1}.\tag{5.2}
$$
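In the classical realization where $D_1$ acts as $d/dx$ and $F$ as multiplication by $x$ (the standard Weyl pair; this concrete realization is our illustration of (5.1), not necessarily the abstract operators of the paper), (5.1) is the iterated commutator $[D^{m+1},x]=(m+1)D^{m}$, which sympy confirms on polynomials:

```python
import sympy as sp

x = sp.symbols('x')
p = (1 + x)**6 + x**3          # arbitrary test polynomial

checks = []
for m in range(0, 5):
    # (D^{m+1} F - F D^{m+1}) p, with D = d/dx and F = multiplication by x
    lhs = sp.diff(x*p, x, m + 1) - x*sp.diff(p, x, m + 1)
    rhs = (m + 1)*sp.diff(p, x, m)     # (m+1) D^m p
    checks.append(sp.expand(lhs - rhs) == 0)
```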

The generators for a Lévy process $(\xi_t)$ with exponential moments act on polynomials in the variable $x$ via $\kappa(\frac{\partial}{\partial x})$, where $t\kappa(\theta)=\log E(\exp(\theta\xi_t))$; this well-known formula appears e.g. in [1, Section 3].

5.1. Centered Poisson process. For example, the quadratic harness with $\gamma=1$ and $\eta=\sigma=\tau=0$, which corresponds to the Poisson process, can also be analyzed by the algebraic technique. Equation (2.4) assumes the form
$$
HF-FH=E+\theta H.\tag{5.3}
$$

We seek a solution $H$ of the form
$$
H=\sum_{k=1}^{\infty}\frac{c_k}{k!}\,D_1^{k}=\varphi(D_1).
$$
By (5.1), equation (5.3) implies $\varphi'(d)=1+\theta\varphi(d)$, and the solution with $\varphi(0)=0$ is
$$
\varphi(d)=\frac{1}{\theta}\left(e^{\theta d}-1\right).
$$
We get
$$
H=\frac{1}{\theta}\left(e^{\theta D_1}-E\right).
$$
It follows from (5.2) that

$$
A=\sum_{m=1}^{\infty}\frac{\theta^{m-1}}{(m+1)!}\,D_1^{m+1}=\frac{1}{\theta^{2}}\sum_{m=2}^{\infty}\frac{\theta^{m}}{m!}\,D_1^{m}.
$$
Consequently,

$$
A=\frac{1}{\theta^{2}}\left(e^{\theta D_1}-E-\theta D_1\right).
$$

5.2. A non-Lévy example. Here we consider a quadratic harness with parameters $\gamma=1$, $\tau=\sigma=0$. This quadratic harness appeared under the name quantum Bessel process in [4] (see also [20] for a multidimensional version), and as the classical bi-Poisson process in [11]. Then equation (2.4) assumes the form
$$
HF-FH=E+\theta H+\eta(F-tH)=E+\eta F+(\theta-t\eta)H.\tag{5.4}
$$
For $H=(E+\eta F)\widetilde{H}$ we obtain
$$
(E+\eta F)(\widetilde{H}F-F\widetilde{H})=(E+\eta F)\big(E+(\theta-t\eta)\widetilde{H}\big).
$$
Comparing this with (5.3) we conclude that by uniqueness the solution of (5.4) is

$$
H=\frac{1}{\theta-t\eta}\,(E+\eta F)\left(e^{(\theta-t\eta)D_1}-E\right).
$$
From (3.8) it follows that $A=A(H)=(E+\eta F)A(\widetilde{H})$. Therefore we obtain

$$
A=\frac{1}{(\theta-t\eta)^{2}}\,(E+\eta F)\left(e^{(\theta-t\eta)D_1}-E-(\theta-t\eta)D_1\right).
$$
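In the concrete realization where $D_1$ acts as $d/dx$ on polynomials (so that $e^{cD_1}$ is the shift $p(x)\mapsto p(x+c)$), $E$ is the identity and $F$ is multiplication by $x$ — again our illustrative assumption for the abstract operators — one can check directly that the solution $H$ of (5.4) satisfies the commutation equation. A sympy sketch:

```python
import sympy as sp

x, eta, theta, t = sp.symbols('x eta theta t')
c = theta - t*eta              # shorthand for theta - t*eta

def H(p):
    # H = (1/c)(E + eta F)(e^{c D_1} - E): weighted shift-difference operator
    return (1 + eta*x)*(p.subs(x, x + c) - p)/c

p = x**4 + 3*x**2 - x + 1      # arbitrary test polynomial

# (5.4): (HF - FH) p = (E + eta F + (theta - t eta) H) p
lhs = H(x*p) - x*H(p)
rhs = p + eta*x*p + c*H(p)
residual = sp.cancel(lhs - rhs)
```

With $\eta=0$ (so $c=\theta$) the operator $H$ reduces to the shift-difference solution of the centered Poisson case in Section 5.1.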

Acknowledgement. J.W.'s research was supported in part by NCN grant 2012/05/B/ST1/00554. W.B.'s research was supported in part by the Taft Research Center at the University of Cincinnati.

References

[1] M. Anshelevich, Generators of some non-commutative stochastic processes, Probability Theory and Related Fields, 157 (2013), pp. 777–815. Arxiv preprint arXiv:1104.1381.
[2] D. Bakry and O. Mazet, Characterization of Markov semigroups on R associated to some families of orthogonal polynomials, in Séminaire de Probabilités XXXVII, Springer, 2003, pp. 60–80.
[3] P. Barrieu and W. Schoutens, Iterates of the infinitesimal generator and space–time harmonic polynomials of a Markov process, Journal of Computational and Applied Mathematics, 186 (2006), pp. 300–323.
[4] P. Biane, Quelques propriétés du mouvement brownien non-commutatif, Astérisque, 236 (1996), pp. 73–102.
[5] P. Biane and R. Speicher, Stochastic calculus with respect to free Brownian motion and analysis on Wigner space, Probability Theory and Related Fields, 112 (1998), pp. 373–409.
[6] M. Bożejko, B. Kümmerer, and R. Speicher, q-Gaussian processes: non-commutative and classical aspects, Comm. Math. Phys., 185 (1997), pp. 129–154.
[7] W. Bryc, Markov processes with free Meixner laws, Stochastic Processes and their Applications, 120 (2010), pp. 1393–1403.
[8] W. Bryc, W. Matysiak, and J. Wesołowski, Quadratic harnesses, q-commutations, and orthogonal martingale polynomials, Transactions of the American Mathematical Society, 359 (2007), pp. 5449–5483.
[9] W. Bryc, W. Matysiak, and J. Wesołowski, Free quadratic harness, Stochastic Processes and their Applications, 121 (2011), pp. 657–671.
[10] W. Bryc and J. Wesołowski, Conditional moments of q-Meixner processes, Probability Theory and Related Fields, 131 (2005), pp. 415–441.
[11] W. Bryc and J. Wesołowski, Classical bi-Poisson process: an invertible quadratic harness, Statistics & Probability Letters, 76 (2006), pp. 1664–1674.
[12] W. Bryc and J. Wesołowski, Askey–Wilson polynomials, quadratic harnesses and martingales, Annals of Probability, 38 (2010), pp. 1221–1262.
[13] W. Bryc and J. Wesołowski, Infinitesimal generators of q-Meixner processes, Stochastic Processes and their Applications, 124 (2014), pp. 915–926.
[14] C. Cuchiero, Affine and polynomial processes, PhD thesis, ETH Zürich, 2011.
[15] C. Cuchiero, M. Keller-Ressel, and J. Teichmann, Polynomial processes and their applications to mathematical finance, Finance and Stochastics, 16 (2012), pp. 711–740.
[16] E. Di Nardo, On a representation of time space-harmonic polynomials via symbolic Lévy processes, Sci. Math. Jpn., 76 (2013), pp. 99–118.
[17] E. Di Nardo and I. Oliva, A new family of time-space harmonic polynomials with respect to Lévy processes, Ann. Mat. Pura Appl. (4), 192 (2013), pp. 917–929.
[18] P. Feinsilver, Operator calculus, Pacific J. Math., 78 (1978), pp. 95–116.
[19] R. Mansuy and M. Yor, Harnesses, Lévy bridges and Monsieur Jourdain, Stochastic Process. Appl., 115 (2005), pp. 329–338.
[20] W. Matysiak and M. Świeca, Zonal polynomials and a multidimensional quantum Bessel process, Stochastic Processes and their Applications, 125 (2015), pp. 3430–3457.
[21] N. Saitoh and H. Yoshida, The infinite divisibility and orthogonal polynomials with a constant recursion formula in free probability theory, Probab. Math. Statist., 21 (2001), pp. 159–170.
[22] W. Schoutens, Stochastic processes and orthogonal polynomials, Springer Verlag, 2000.
[23] W. Schoutens and J. L. Teugels, Lévy processes, polynomials and martingales, Stochastic Models, 14 (1998), pp. 335–349.
[24] A. Sengupta, Time-space harmonic polynomials for continuous-time processes and an extension, Journal of Theoretical Probability, 13 (2000), pp. 951–976.
[25] A. Sengupta, Markov processes, time–space harmonic functions and polynomials, Statistics & Probability Letters, 78 (2008), pp. 3277–3280.
[26] A. Sengupta and A. Sarkar, Finitely polynomially determined Lévy processes, Electron. J. Probab., 6 (2001), pp. 1–22.
[27] J. L. Solé and F. Utzet, On the orthogonal polynomials associated with a Lévy process, The Annals of Probability, 36 (2008), pp. 765–795.
[28] J. L. Solé and F. Utzet, Time–space harmonic polynomials relative to a Lévy process, Bernoulli, 14 (2008), pp. 1–13.
[29] P. J. Szabłowski, Lévy processes, martingales, reversed martingales and orthogonal polynomials, arXiv preprint arXiv:1212.3121, (2012).
[30] P. J. Szabłowski, On stationary Markov processes with polynomial conditional moments, arXiv preprint arXiv:1312.4887, (2013).
[31] P. J. Szabłowski, Markov processes, polynomial martingales and orthogonal polynomials, arXiv preprint arXiv:1410.6731, (2014).
[32] P. J. Szabłowski, On Markov processes with polynomial conditional moments, Trans. Amer. Math. Soc., 367 (2015), pp. 8487–8519.

Department of Mathematics, University of Cincinnati, PO Box 210025, Cincinnati, OH 45221–0025, USA
E-mail address: [email protected]

Faculty of Mathematics and Information Science, Warsaw University of Technology, pl. Politechniki 1, 00-661 Warszawa, Poland
E-mail address: [email protected]