arXiv:1612.09216v2 [math.PR] 25 Aug 2017

A NOTE ON CHAOTIC AND PREDICTABLE REPRESENTATIONS FOR ITÔ-MARKOV ADDITIVE PROCESSES

ZBIGNIEW PALMOWSKI, ŁUKASZ STETTNER, AND ANNA SULIMA

ABSTRACT. In this paper we provide predictable and chaotic representations for Itô-Markov additive processes X. Such a process is governed by a finite-state CTMC J which allows one to modify the parameters of the Itô-jump process (in the so-called regime switching manner). In addition, a transition of J triggers a jump of X distributed depending on the state of J just prior to the transition. This family of processes includes Markov modulated Itô-Lévy processes and Markov additive processes. The derived chaotic representation of a square-integrable random variable is given as a sum of stochastic integrals with respect to some explicitly constructed orthogonal martingales. We identify the predictable representation of a square-integrable martingale as a sum of stochastic integrals of predictable processes with respect to Brownian motion and power-jump martingales related to all the jumps appearing in the model. This result generalizes the seminal result of Jacod-Yor and is of importance in financial mathematics. The derived representation then allows one to enlarge the incomplete market by a series of power-jump assets and to price all market-derivatives.

KEYWORDS: Markov additive processes ⋆ stochastic integral ⋆ Brownian motion ⋆ martingale representation ⋆ regime switching ⋆ power-jump process ⋆ complete market ⋆ orthogonal polynomials

1991 Mathematics Subject Classification: 60J30; 60H05.

This work was partially supported by the National Science Centre under the grant 2015/17/B/ST1/01102.

CONTENTS
1. Introduction
2. Main results
3. Proof of predictable representation
3.1. Polynomial representation
4. Proofs of main results
4.1. Proof of Theorem 2
4.2. Proof of Theorem 4
References

1. INTRODUCTION

The Martingale Representation Theorem allows one to represent all martingales, in a certain space, as stochastic integrals. This result is the cornerstone of mathematical finance that produces hedging of some derivatives (see e.g. Föllmer and Schied [21]). It is also the basis for the theory of backward stochastic differential equations (see El Karoui et al. [16] for a review). The martingale representation theorem can also lead to Wiener-like chaos expansions, which appear in the Malliavin calculus (see Di Nunno et al. [15]). The martingale representation theorem is commonly considered in a Brownian setting. It states that any square-integrable martingale M adapted to the natural filtration of the Brownian motion W can be expressed as a stochastic integral with respect to the same Brownian motion W:

(1) M(t) = M(0) + ∫_0^t ξ(s) dW(s),

where ξ satisfies suitable integrability conditions. This is the classical Itô theorem (see Rogers and Williams [36, Thm. 36.1]).
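As a quick illustration of (1) (a standard example, not specific to this paper): for the martingale M(t) := E[W(T)² | F_t^W] = W(t)² + (T − t), where {F_t^W} denotes the natural filtration of W, Itô's formula applied to W(t)² gives

M(t) = T + ∫_0^t 2W(s) dW(s),

so the predictable integrand in (1) is ξ(s) = 2W(s).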

For processes having jumps coming according to a compensated Poisson random measure Π̄, the representation theorem was originally established by Kunita and Watanabe in their seminal paper [25, Prop. 5.2] (see also Applebaum [1, Thm. 5.3.5]). In particular, on a probability space (Ω, F, P) equipped with the augmented natural filtration {H_t}_{t≥0} of a Lévy process, any locally square-integrable {H_t}-martingale M has the following representation:

(2) M(t) = M(0) + ∫_0^t ξ_1(s) dW(s) + ∫_0^t ∫_{R∖{0}} ξ_2(s, x) Π̄(ds, dx),

where ξ_1 and ξ_2 are {H_t}-predictable processes satisfying suitable integrability conditions. Later the results were generalized in a series of papers by Chou and Meyer [8], Davis [12] and Elliott [17, 18]. In fact, Elliott [17] assumed that there are only finitely many jumps within each finite interval. To deal with a general Lévy process (of unbounded variation), Nualart and Schoutens [28] derived a martingale representation using the chaotic representation property in terms of a suitable orthogonal sequence of martingales, obtained as the orthogonalization of the compensated Teugels power-jump processes of the Lévy process (see also Corcuera et al. [11]). More on martingale representation can be found in Jacod and Shiryaev [23], Liptser and Shiryaev [27], Protter [35] and in the review by Davis [13].

The goal of this paper is to derive the martingale representation theorem for an Itô-Markov additive process X. Itô-Markov additive processes are a natural generalization of Itô-Lévy processes and hence also of Lévy processes. The use of Itô-Markov additive processes is widespread, making them a classical model in applied probability with a variety of application areas, such as queues, insurance risk, inventories, data communication, finance, environmental problems and so forth (see Asmussen [2], Asmussen and Albrecher [3], Asmussen et al. [4], Asmussen and Kella [5], Çinlar [9, 10], Prabhu [34, Ch. 7], Pacheco and Prabhu [30], Palmowski and Rolski [33], Palmowski and Ivanovs [32] and references therein). In this paper, we will also derive a chaotic representation of any square-integrable random variable in terms of certain orthogonal martingales. It should be underlined that the derived representation is of predictable type, that is, any square-integrable martingale is a sum of stochastic integrals of some predictable processes with respect to explicit martingales. This representation is hence different from the one given in (2) (in the jump part) and it can be used to prove completeness of some financial markets. This paper is organized as follows. In Section 2 we present the main results. Their proofs are given in Section 4, preceded by a crucial preliminary fact presented in Section 3.

2. MAIN RESULTS

We begin by defining Itô-Markov additive processes. Let (Ω, F, P) be a complete probability space and let T := [0, T] be a finite time horizon, where 0 < T < ∞. On this space we consider a continuous-time Markov chain J = {J(t) : t ∈ T} with finite state space {e_1, …, e_N}, where e_i denotes the ith canonical basis vector of R^N, and with intensity matrix [λ_ij]_{i,j=1,2,…,N}. The element λ_ij is the transition intensity of the Markov chain J jumping from state e_i to state e_j.

A process (J, X) = {(J(t), X(t)) : t ∈ T} on the state space {e_1, …, e_N} × R is a Markov additive process (MAP) if (J, X) is a Markov process and the conditional distribution of (J(s + t), X(s + t) − X(s)) for s, t ∈ T, given (J(s), X(s)), depends only on J(s) (see Çinlar [9, 10]). Every MAP has a very special structure. It is usually said that X is the additive component and J is the background process representing the environment. Moreover, the process X evolves as a Lévy process while J(t) = e_j. Following Asmussen and Kella [5] we can decompose the process X as follows:

(3) X(t) = X̄(t) + X̲(t),

where

(4) X̲(t) := Σ_{i=1}^N Ψ_i(t)

for

(5) Ψ_i(t) := Σ_{n≥1} U_n^(i) 1_{{J(T_n)=e_i, T_n≤t}}

and for the jump epochs {T_n} of J. Here U_n^(i) (n ≥ 1, 1 ≤ i ≤ N) are independent random variables which are also independent of X̄ and such that, for every i, the random variables U_n^(i) are identically distributed. Note that we can express the process Ψ_i as follows:

Ψ_i(t) = ∫_0^t ∫_R x Π_U^i(ds, dx)

for the point measures

(6) Π_U^i([0,t], dx) := Σ_{n≥1} 1_{{U_n^(i) ∈ dx}} 1_{{J(T_n)=e_i, T_n≤t}}, i = 1,…,N.

Remark 1. One can consider jumps U^(ij) with distribution depending also on the state e_j the Markov chain is jumping to, by extending the state space to the pairs (e_i, e_j) (see Gautam et al. [22, Thm. 5] for details).

The first component in equation (3) is an Itô-Lévy process and it has the following decomposition (see Oksendal and Sulem [29, p. 5]):

(7) X̄(t) := X̄(0) + ∫_0^t μ_0(s) ds + ∫_0^t σ_0(s) dW(s) + ∫_0^t ∫_R γ(s−, x) Π̄(ds, dx),

where W denotes the standard Brownian motion independent of J, Π̄(dt, dx) := Π(dt, dx) − ν(dx) dt is the compensated Poisson random measure which is independent of J and W, and

(8) μ_0(t) := ⟨μ_0, J(t)⟩ = Σ_{i=1}^N μ_0^i ⟨e_i, J(t)⟩,

(9) σ_0(t) := ⟨σ_0, J(t)⟩ = Σ_{i=1}^N σ_0^i ⟨e_i, J(t)⟩,

(10) γ(t, x) := ⟨γ(x), J(t)⟩ = Σ_{i=1}^N γ_i(x) ⟨e_i, J(t)⟩,

for some vectors μ_0 = (μ_0^1, …, μ_0^N)′ ∈ R^N, σ_0 = (σ_0^1, …, σ_0^N)′ ∈ R^N and a vector-valued measurable function γ(x) = (γ_1(x), …, γ_N(x)). The measure ν is the so-called jump measure identifying the distribution of the sizes of the jumps of the Poisson measure Π. The components X̄ and X̲ in (3) are, conditionally on the state of the Markov chain J, independent. Additionally, we suppose that the Lévy measure satisfies, for some ε > 0 and λ > 0,

(11) ∫_{(−ε,ε)^c} exp(λ|γ(s−, x)|) ν(dx) < ∞, P-a.e., ∫_{(−ε,ε)^c} exp(λx) P(U^(i) ∈ dx) < ∞

for i = 1,…,N, where U^(i) is a generic size of the jump occurring when the Markov chain J jumps into state e_i. This implies that

∫_R |γ(s−, x)|^k ν(dx) < ∞, P-a.e., E(U^(i))^k < ∞, i = 1,…,N, k ≥ 2

(since |x|^k ≤ (k!/λ^k) exp(λ|x|)), and that the characteristic function E[exp(iuX(t))] is analytic in a neighborhood of 0. Moreover, X has moments of all orders and the polynomials are dense in L²(R, dϕ(t, x)), where ϕ(t, x) := P(X(t) ≤ x).

Now we are ready to define the main process considered in this paper. The process (J, X) = {(J(t), X(t)) : t ∈ T} (for simplicity we sometimes write only X) with the decomposition (3) is called an Itô-Markov additive process. This process evolves as the Itô-Lévy process X̄ between changes of states of the Markov chain J, that is, its parameters depend on the current state e_i of the Markov chain J. In addition, a transition of J from e_i to e_j triggers a jump of X distributed as U^(j). This is a so-called non-anticipative Itô-Markov additive process. Itô-Markov additive processes are a natural generalization of Itô-Lévy processes and thus of Lévy processes. Moreover, if γ(s, x) = x then X is a Markov additive process. If additionally N = 1, then X is a Lévy process. If U^(j) ≡ 0 and N > 1 then X is a Markov modulated Lévy process (see Pacheco et al. [31]). If additionally there are no jumps, that is, Π̄(ds, dx) = 0, we have a Markov modulated Brownian motion.
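To fix ideas, the following minimal simulation sketch (our illustration, not the paper's code; all parameters are hypothetical, N = 2 and γ ≡ 0, i.e. no Lévy jumps) generates a path of an Itô-Markov additive process: between switches of J the path follows the regime's drift and volatility, and each entry into state e_j adds an independent jump distributed as U^(j).

    import numpy as np

    rng = np.random.default_rng(0)

    # Minimal sketch (hypothetical parameters): a two-state Ito-Markov
    # additive path with gamma = 0 (no Levy jumps).  Between switches of J
    # the path follows the regime's drift and volatility; entering state j
    # adds an independent jump distributed as U^(j).
    T, n = 1.0, 10_000
    dt = T / n
    lam = np.array([[-2.0, 2.0],
                    [3.0, -3.0]])             # intensity matrix [lambda_ij]
    mu = np.array([0.05, -0.02])              # regime drifts mu_0^i
    sigma = np.array([0.10, 0.30])            # regime volatilities sigma_0^i
    jump = (lambda: rng.normal(0.00, 0.02),   # U^(1): drawn on entering state 1
            lambda: rng.normal(-0.01, 0.05))  # U^(2): drawn on entering state 2

    J = np.zeros(n + 1, dtype=int)            # state index: 0 <-> e_1, 1 <-> e_2
    X = np.zeros(n + 1)
    for k in range(n):
        i = J[k]
        if rng.random() < -lam[i, i] * dt:    # switch with prob ~ lambda_ij * dt
            j = 1 - i                         # with N = 2 there is one target
            extra = jump[j]()                 # jump triggered by the transition
        else:
            j, extra = i, 0.0
        J[k + 1] = j
        # Euler step for the Ito-Levy part with state-dependent coefficients
        X[k + 1] = X[k] + mu[i] * dt + sigma[i] * np.sqrt(dt) * rng.normal() + extra

    print(f"X(T) = {X[-1]:.4f}, fraction of time in state 1: {np.mean(J == 0):.2%}")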

From now on, we will work with the following filtration on (Ω, F, P):

(12) F_t := G_t ∨ N,

where N denotes the set of P-null sets of F and

(13) G_t := σ{J(s), W(s), Γ(s), Π_U^1([0,s], dx), …, Π_U^N([0,s], dx); s ≤ t}

for

(14) Γ(t) := ∫_0^t ∫_R γ(s−, x) Π(ds, dx).

Note that the filtration {F_t}_{t≥0} is right-continuous (see Karatzas and Shreve [24, Prop. 7.7] and also Protter [35, Thm. 31]). By the same arguments as in the proof of Thm. 3.3 of Liao [26], the filtration

{G_t}_{t≥0} is equivalent to

(15) G_t := σ{J(s), X(s), Π_U^1([0,s], dx), …, Π_U^N([0,s], dx); s ≤ t}.

To present the main result we need a few additional processes. We observe that the Markov chain

J can be represented in terms of a marked point process Φ_j defined by

Φ_j(t) := Φ([0,t] × {e_j}) = Σ_{n≥1} 1_{{J(T_n)=e_j, T_n≤t}} = Π_U^j([0,t], R), j = 1, 2,…,N,

for the jump epochs {T_n} of J. Note that the process Φ_j describes the number of jumps into state e_j up to time t. Let φ_j be the dual predictable projection of Φ_j (sometimes called the compensator). That is, the process

(16) Φ̄_j(t) := Φ_j(t) − φ_j(t), j = 1,…,N,

is an {F_t}-martingale and it is called the jth Markovian jump martingale. Note that φ_j is unique and

(17) φ_j(t) := ∫_0^t λ_j(s) ds

for

(18) λ_j(t) := Σ_{i≠j} 1_{{J(t−)=e_i}} λ_ij

(see Zhang et al. [38, p. 290]).
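For example (a two-state illustration of (17)-(18), not taken from the paper): if N = 2, then λ_1(t) = 1_{{J(t−)=e_2}} λ_21 and λ_2(t) = 1_{{J(t−)=e_1}} λ_12, so

φ_1(t) = λ_21 ∫_0^t 1_{{J(s−)=e_2}} ds,

that is, the compensator of the number of jumps into e_1 is λ_21 times the time spent by J in state e_2 up to time t.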

Following Corcuera et al. [11] we introduce the power-jump processes

X̄^(k)(t) := Σ_{0<s≤t} (ΔX̄(s))^k, k ≥ 2,

with X̄^(1)(t) := X̄(t). Note that

E[X̄^(k)(t) | 𝒥_t] = E[Σ_{0<s≤t} (ΔX̄(s))^k | 𝒥_t] = ∫_0^t ∫_R γ^k(s−, x) ν(dx) ds < ∞, P-a.e., k ≥ 2,

where 𝒥_t := σ{J(s); s ≤ t}. Hence the compensated power-jump process (Teugels martingale)

(19) X̃^(k)(t) := X̄^(k)(t) − ∫_0^t ∫_R γ^k(s−, x) ν(dx) ds, k ≥ 2,

is an {F_t}-martingale; for k = 1 we set X̃^(1)(t) := X̄(t) − ∫_0^t μ_0(s) ds. Furthermore, any integral of an {F_t}-adapted process with respect to an {F_t}-adapted stochastic measure or an {F_t}-adapted semimartingale is still adapted (see Jacod and Shiryaev [23, Prop. 3.5 and Thm 4.31(i)]). Hence X̄^(k) is {F_t}-adapted for k ≥ 2.

We will also need power martingales related to the second component of X given in (3), namely to X̲ or to Ψ_i defined in (5). For l ≥ 1 and i = 1,…,N we define

Ψ_i^(l)(t) := Σ_{n≥1} (U_n^(i))^l 1_{{J(T_n)=e_i, T_n≤t}} = ∫_0^t ∫_R x^l Π_U^i(ds, dx)

for Π_U^i given by (6). The compensated version of Ψ_i^(l) is called an impulse regime-switching martingale:

(20) Ψ̄_i^(l)(t) := Ψ_i^(l)(t) − E[(U^(i))^l] φ_i(t) = ∫_0^t ∫_R x^l Π̄_U^i(ds, dx),

where Π̄_U^i(dt, dx) := Π_U^i(dt, dx) − λ_i(t) η_i(dx) dt for λ_i defined in (18) and η_i(dx) = P(U_n^(i) ∈ dx). Using similar arguments to those above, it follows that Ψ̄_i^(l) is an {F_t}-martingale for l ≥ 1 and i = 1,…,N.

Corcuera et al. [11] motivate trading in power-jump assets as follows. The power-jump process of order two is just the variation process of degree two, i.e. the quadratic variation process (see Barndorff-Nielsen and Shephard [6, 7]), and is related to the so-called realized variance. Contracts on realized variance have found their way into OTC markets and are now traded regularly. Typically, a 3rd-power-jump asset measures a kind of asymmetry ("skewness") and a 4th-power-jump asset measures extremal movements ("kurtosis"). Trade in such assets can be of use if one likes to bet on the realized skewness or realized kurtosis of the stock. Furthermore, an insurance contract against a crash can also be easily built from 4th-power-jump (or i-th-power-jump, i > 4) assets.

We denote by M² the set of square-integrable {F_t}-martingales, i.e. M ∈ M² if M is a martingale, M(0) = 0 and sup_t E[M²(t)] < ∞. The martingale convergence theorem implies that each M ∈ M² is closed, i.e. there is a random variable M(∞) ∈ L²(Ω, F) such that M(t) → M(∞) in L²(Ω, F) and, for each t, M(t) = E[M(∞)|F_t]. Thus there is a one-to-one correspondence between M² and L²(Ω, F), so that M² is a Hilbert space under the inner product M₁ · M₂ := E[M₁(∞)M₂(∞)]. Following Protter [35, p. 179], we say that two martingales M₁, M₂ ∈ M² are strongly orthogonal if their product M₁M₂ is a uniformly integrable martingale. As noted in Protter [35], M₁, M₂ ∈ M² are strongly orthogonal if and only if [M₁, M₂] is a uniformly integrable martingale. We say that two random variables Y₁, Y₂ ∈ L²(Ω, F) are weakly orthogonal if E[Y₁Y₂] = 0. Clearly, strong orthogonality implies weak orthogonality.

One can obtain a set {H^(k), k ≥ 1} of pairwise strongly orthonormal martingales such that each H^(k) is a linear combination of the X̃^(n) (n = 1,…,k), namely

(21) H^(k) = a_{k,k} X̃^(k) + a_{k,k−1} X̃^(k−1) + ⋯ + a_{k,1} X̃^(1), k ≥ 1.

The constants a_{·,·} can be calculated as described in Schoutens [37]: they correspond to the coefficients of the orthonormalization of the polynomials 1, x, x², … . Similarly we proceed with the martingales Ψ̄_i^(l) and Φ̄_i and construct pairwise strongly orthonormal martingales G_i^(l) (i = 1,…,N, l ≥ 1) that are appropriate linear combinations of the processes Ψ̄_i^(l) and Φ̄_i:

(22) G_i^(l) = c_{l,l}^(i) Ψ̄_i^(l−1) + c_{l,l−1}^(i) Ψ̄_i^(l−2) + ⋯ + c_{l,2}^(i) Ψ̄_i^(1) + c_{l,1}^(i) Φ̄_i.

We will find the coefficients c_{·,·}^(i) as follows. Fix i ∈ {1,…,N} and consider two spaces. The first one is the space S₁ of all real polynomials on the positive real line endowed with the scalar product

⟨P(x), Q(x)⟩₁ := E[P(U^(i)) Q(U^(i))] E[Φ_i(1)].

Note that

⟨x^l, x^h⟩₁ = E[(U^(i))^{l+h}] E[Φ_i(1)].

The other space, S₂, is the space of all linear transformations of the processes Ψ̄_i^(l) and Φ̄_i, i.e.,

S₂ := {c_l Ψ̄_i^(l−1) + c_{l−1} Ψ̄_i^(l−2) + ⋯ + c₂ Ψ̄_i^(1) + c₁ Φ̄_i ; l ∈ {1, 2, …}, c_i ∈ R}.

We endow this space with the scalar product

⟨Ψ̄_i^(l)(t), Ψ̄_i^(h)(t)⟩₂ := E[Ψ̄_i^(l), Ψ̄_i^(h)](1) = E[(U^(i))^{l+h}] E[Φ_i(1)],
⟨Ψ̄_i^(l)(t), Φ̄_i(t)⟩₂ := E[Ψ̄_i^(l), Φ̄_i](1) = E[(U^(i))^l] E[Φ_i(1)],
⟨Φ̄_i(t), Φ̄_i(t)⟩₂ := E[Φ̄_i, Φ̄_i](1) = E[Φ_i(1)],

for i = 1,…,N and l, h ≥ 0. One clearly sees that x^l ↔ Ψ̄_i^(l) (with 1 ↔ Φ̄_i) is an isometry between S₁ and S₂. An orthogonalization of {1, x, x², …} in S₁ therefore gives an orthogonalization of {Φ̄_i, Ψ̄_i^(1), Ψ̄_i^(2), …}.

Finally, for i ≠ j, the processes Ψ̄_i^(l), Φ̄_i and Ψ̄_j^(h), Φ̄_j do not jump at the same time, so

[Ψ̄_i^(l), Ψ̄_j^(h)](t) = Σ_{s≤t} ΔΨ̄_i^(l)(s) ΔΨ̄_j^(h)(s) = 0, [Ψ̄_i^(l), Φ̄_j](t) = Σ_{s≤t} ΔΨ̄_i^(l)(s) ΔΦ̄_j(s) = 0

and

[Φ̄_i, Φ̄_j](t) = Σ_{s≤t} ΔΦ̄_i(s) ΔΦ̄_j(s) = 0.

For the same reason we have

[Ψ̄_i^(l), H^(k)](t) = 0, [Φ̄_i, H^(k)](t) = 0

for i ∈ {1,…,N}, k ≥ 1 and l ≥ 1. In this way all martingales in (21) and (22) are pairwise strongly orthogonal.
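The orthonormalization coefficients can be computed numerically. The following sketch (our illustration, not the paper's code, with hypothetical moments: U^(i) ~ Exp(1), so E[(U^(i))^m] = m!, and E[Φ_i(1)] = 1) orthonormalizes 1, x, x², … under ⟨·,·⟩₁ via a Cholesky factorization of the moment Gram matrix; by the isometry above, the resulting coefficients give the c_{·,·}^(i) in (22) (up to the normalization convention), and the same recipe applied to the Gram matrix E[X̃^(k), X̃^(l)](1) yields the a_{·,·} in (21).

    import numpy as np
    from math import factorial

    # Sketch (our illustration): orthonormalization coefficients under the
    # scalar product <x^l, x^h>_1 = E[(U^(i))^(l+h)] * E[Phi_i(1)], computed
    # from the Cholesky factor of the moment Gram matrix.
    # Hypothetical moments: U^(i) ~ Exp(1), E[(U^(i))^m] = m!, E[Phi_i(1)] = 1.
    deg = 4
    moments = [float(factorial(m)) for m in range(2 * deg + 1)]
    gram = np.array([[moments[l + h] for h in range(deg + 1)]
                     for l in range(deg + 1)])   # Gram matrix <x^l, x^h>_1

    L = np.linalg.cholesky(gram)                 # gram = L @ L.T
    coeffs = np.linalg.inv(L).T                  # column k: coefficients of the
                                                 # k-th orthonormal polynomial
                                                 # in the basis 1, x, ..., x^k

    # Orthonormality check: coeffs.T @ gram @ coeffs should be the identity.
    print(np.round(coeffs.T @ gram @ coeffs, 8))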

The main results of this paper are given in the next two theorems.

Theorem 2. Any square-integrable, F_t-measurable random variable F can be represented as follows:

(23) F(t) = E[F(t)] + Σ_{i=1}^N Σ_{s=1}^∞ Σ_{τ=1}^∞ Σ_{ι_1,…,ι_s ≥ 1} Σ_{υ_1,…,υ_τ ≥ 1} ∫_0^t ∫_0^{t_1−} … ∫_0^{t_{s+τ−1}−} f_{(υ_1,…,υ_τ,ι_1,…,ι_s,i)}(t_1, t_2, …, t_{s+τ}) dG_i^{(υ_τ)}(t_{s+τ}) … dG_i^{(υ_1)}(t_{s+1}) dH^{(ι_s)}(t_s) … dH^{(ι_1)}(t_1),

where the f_{(υ_1,…,υ_τ,ι_1,…,ι_s,i)} are some random fields for which (23) is well-defined in L²(Ω, F), the processes G_i^(l) and H^(k) (i = 1,…,N, l, k ≥ 1) are the orthogonal martingales constructed above and the convergence is in the L² sense.

Remark 3. The right-hand side of (23) is understood as follows. We take a finite sum

Σ_{i=1}^N Σ_{s=1}^{A_1} Σ_{τ=1}^{A_2} Σ_{ι_1,…,ι_s ≥ 1} Σ_{υ_1,…,υ_τ ≥ 1} ∫_0^t ∫_0^{t_1−} … ∫_0^{t_{s+τ−1}−} f_{(υ_1,…,υ_τ,ι_1,…,ι_s,i)}(t_1, t_2, …, t_{s+τ}) dG_i^{(υ_τ)}(t_{s+τ}) … dG_i^{(υ_1)}(t_{s+1}) dH^{(ι_s)}(t_s) … dH^{(ι_1)}(t_1)

in L²(Ω, F). Since L²(Ω, F) is a Hilbert space, the right-hand side of (23) is understood as the limit of the above expression in L²(Ω, F) as A_1 → ∞ and A_2 → ∞.

Theorem 4. Any square-integrable {F_t}-martingale M can be represented as follows:

(24) M(t) = M(0) + ∫_0^t h_X^{(1)}(s) dX(s) + Σ_{k=2}^∞ ∫_0^t h_X^{(k)}(s) dX̃^(k)(s) + Σ_{j=1}^N ∫_0^t h_Φ^{(j)}(s) dΦ̄_j(s) + Σ_{l=1}^∞ Σ_{i=1}^N ∫_0^t h_Ψ^{(l,i)}(s) dΨ̄_i^(l)(s),

where h_X^{(1)}, h_X^{(k)}, h_Φ^{(j)} and h_Ψ^{(l,i)} (for i, j = 1,…,N, k ≥ 2 and l ≥ 1) are predictable processes.

Remark 5. The right-hand side of (24) is understood in the same way as in Remark 3, that is, it is the limit in M² of

∫_0^t h_X^{(1)}(s) dX(s) + Σ_{k=2}^{B_1} ∫_0^t h_X^{(k)}(s) dX̃^(k)(s) + Σ_{j=1}^N ∫_0^t h_Φ^{(j)}(s) dΦ̄_j(s) + Σ_{l=1}^{B_2} Σ_{i=1}^N ∫_0^t h_Ψ^{(l,i)}(s) dΨ̄_i^(l)(s)

as B_1 → ∞ and B_2 → ∞.

The proofs of these two theorems are given in Sections 4.1 and 4.2. The main idea is that every random variable F in L²(Ω, F_t) can be approximated by polynomials of a certain type. For these polynomials we use the Itô formula together with induction to get an appropriate representation, first in terms of orthogonal multiple stochastic integrals and then as a sum of single stochastic integrals. The main step of the construction is given in Section 3.

Remark 6. Note that the representation in Theorem 4 is given as a sum of integrals with respect to X and the compensated processes X̃^(k), Φ̄_j and Ψ̄_i^(l). In fact, the above representation could also be derived for non-compensated processes at the cost of an additional Lebesgue integral with respect to an appropriate sum of compensators.

This generalizes the classical results of Emery [20], Dellacherie et al. [14, p. 207] and Nualart and Schoutens [28].
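For orientation, here is how (24) collapses in the purely Lévy case (our reading of the reduction, not a separate statement from the paper): if N = 1 and γ(s, x) = x, the chain J never switches, so Φ_1 ≡ 0 and Ψ_1 ≡ 0, and (24) reduces to

M(t) = M(0) + ∫_0^t h_X^{(1)}(s) dX(s) + Σ_{k=2}^∞ ∫_0^t h_X^{(k)}(s) dX̃^(k)(s),

i.e. the predictable representation of Nualart and Schoutens [28] with respect to the Lévy process X and its compensated (Teugels) power-jump martingales.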

3. PROOF OF PREDICTABLE REPRESENTATION

3.1. Polynomial representation. The main result of this section gives a stochastic integral representation of polynomials of the form X̄^g · Φ_j^p · Ψ_i^b. We follow the argument of Nualart and Schoutens [28].

Theorem 7. We have the following representation:

(25) X̄^g(t) Φ_j^p(t) Ψ_i^b(t) = f^{(g+p+b)}(t) + Σ_{s=1}^g Σ_{τ=1}^b Σ_{ζ=1}^p Σ_{(ι_1,…,ι_s)∈{1,…,g}^s} Σ_{(v_1,…,v_τ)∈{1,…,b}^τ} ∫_0^t ∫_0^{t_1−} … ∫_0^{t_{s+τ+ζ−1}−} f^{(g+p+b)}_{(v_1,…,v_τ,ι_1,…,ι_s,i,j)}(t, t_1, t_2, …, t_{s+τ+ζ}) dΦ̄_j(t_{s+τ+ζ}) … dΦ̄_j(t_{s+τ+1}) dΨ̄_i^{(v_τ)}(t_{s+τ}) … dΨ̄_i^{(v_2)}(t_{s+2}) dΨ̄_i^{(v_1)}(t_{s+1}) dX̃^{(ι_s)}(t_s) … dX̃^{(ι_2)}(t_2) dX̃^{(ι_1)}(t_1),

where f^{(g+p+b)} and f^{(g+p+b)}_{(v_1,…,v_τ,ι_1,…,ι_s,i,j)} are random fields, each a sum of products of processes predictable with respect to t, t_1, t_2, …, t_{s+τ+ζ}, defined on L²(Ω, F).

Proof. We will express X̄^g(t) Φ_j^p(t) Ψ_i^b(t) (for t ≥ 0, g, p, b ≥ 0, i, j = 1,…,N) as a sum of stochastic integrals of lower powers of X̄, Φ_j and Ψ_i with respect to the processes X̃^(k), Φ̄_j and Ψ̄_i^(l) (for k ≤ g and l ≤ b). Note that the processes Φ_j and Ψ_i have bounded variation and they are constant between jumps. Then (see Protter [35, p. 75]) the following holds true: [X̄, Φ_j]^c(s) = 0, [Φ_j, Φ_j]^c(s) = 0, [Ψ_i, Φ_j]^c(s) = 0, [Ψ_i, Ψ_i]^c(s) = 0 and [Ψ_i, X̄]^c(s) = 0. Moreover, from the definition of X̄ in (7) we have [X̄, X̄]^c(s) = ∫_0^s σ_0²(u−) du. Using Itô's formula (see Protter [35, p. 81]) we can write

(26) X̄^g(t) Φ_j^p(t) Ψ_i^b(t) = X̄^g(0) Φ_j^p(0) Ψ_i^b(0) + ∫_0^t g X̄^{g−1}(s−) Φ_j^p(s−) Ψ_i^b(s−) dX̄(s) + ∫_0^t p X̄^g(s−) Φ_j^{p−1}(s−) Ψ_i^b(s−) dΦ_j(s) + ∫_0^t b X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−1}(s−) dΨ_i(s) + (1/2) g(g−1) I_1 + I_2,

where

I_1 := ∫_0^t X̄^{g−2}(s−) Φ_j^p(s−) Ψ_i^b(s−) σ_0²(s−) ds

and

(27) I_2 := Σ_{0<s≤t} [ I_3 − X̄^g(s−) Φ_j^p(s−) Ψ_i^b(s−) − g X̄^{g−1}(s−) Φ_j^p(s−) Ψ_i^b(s−) ΔX̄(s) − p X̄^g(s−) Φ_j^{p−1}(s−) Ψ_i^b(s−) ΔΦ_j(s) − b X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−1}(s−) ΔΨ_i(s) ]

with

(28) I_3 := X̄^g(s) Φ_j^p(s) Ψ_i^b(s).

Using integration by parts we can rewrite I_1 as follows:

(29) I_1 = X̄^{g−2}(t−) Φ_j^p(t−) Ψ_i^b(t−) ∫_0^t σ_0²(u−) du − ∫_0^t (∫_0^s σ_0²(u−) du) X̄^{g−2}(s) Φ_j^p(s) dΨ_i^b(s) − ∫_0^t (∫_0^s σ_0²(u−) du) X̄^{g−2}(s) Ψ_i^b(s) dΦ_j^p(s) − ∫_0^t (∫_0^s σ_0²(u−) du) X̄^{g−2}(s) d[Φ_j^p, Ψ_i^b](s) − ∫_0^t (∫_0^s σ_0²(u−) du) Φ_j^p(s) Ψ_i^b(s) dX̄^{g−2}(s)

with d[Φ_j^p, Ψ_i^b](s) = dΨ_i^b(s) if i = j, and [Φ_j^p, Ψ_i^b](s) = 0 otherwise. Note that X̄(s) = X̄(s−) + ΔX̄(s), Φ_j(s) = Φ_j(s−) + ΔΦ_j(s) and Ψ_i(s) = Ψ_i(s−) + ΔΨ_i(s). Using the Binomial Theorem we can rewrite I_3, defined in (28), as follows:

I_3 = (X̄(s−) + ΔX̄(s))^g (Φ_j(s−) + ΔΦ_j(s))^p (Ψ_i(s−) + ΔΨ_i(s))^b = I_4 · I_5 · I_6,

where

(30) I_4 := X̄^g(s−) + Σ_{m_1=1}^g \binom{g}{m_1} X̄^{g−m_1}(s−) (ΔX̄(s))^{m_1},

(31) I_5 := Φ_j^p(s−) + Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−) (ΔΦ_j(s))^{m_2},

(32) I_6 := Ψ_i^b(s−) + Σ_{m_3=1}^b \binom{b}{m_3} Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3}.

Then

(33) I_4 · I_6 = (Σ_{m_1=1}^g \binom{g}{m_1} X̄^{g−m_1}(s−) (ΔX̄(s))^{m_1}) (Σ_{m_3=1}^b \binom{b}{m_3} Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3}) + X̄^g(s−) Ψ_i^b(s−) + Ψ_i^b(s−) Σ_{m_1=1}^g \binom{g}{m_1} X̄^{g−m_1}(s−) (ΔX̄(s))^{m_1} + X̄^g(s−) Σ_{m_3=1}^b \binom{b}{m_3} Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3}.

Since X̄ and Ψ_i do not jump at the same time, we have ΔX̄(s) ΔΨ_i(s) = 0. Thus, the first component on the right-hand side of (33) is zero, and so

(34) I_4 · I_6 = X̄^g(s−) Ψ_i^b(s−) + Ψ_i^b(s−) Σ_{m_1=1}^g \binom{g}{m_1} X̄^{g−m_1}(s−) (ΔX̄(s))^{m_1} + X̄^g(s−) Σ_{m_3=1}^b \binom{b}{m_3} Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3}.

By (31) and (34),

(35) I_3 = I_4 · I_5 · I_6 = X̄^g(s−) Φ_j^p(s−) Ψ_i^b(s−) + X̄^g(s−) Ψ_i^b(s−) Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−) (ΔΦ_j(s))^{m_2} + (Σ_{m_1=1}^g \binom{g}{m_1} X̄^{g−m_1}(s−) Ψ_i^b(s−) (ΔX̄(s))^{m_1}) (Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−) (ΔΦ_j(s))^{m_2}) + (Σ_{m_3=1}^b \binom{b}{m_3} X̄^g(s−) Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3}) (Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−) (ΔΦ_j(s))^{m_2}) + Σ_{m_1=1}^g \binom{g}{m_1} X̄^{g−m_1}(s−) Φ_j^p(s−) Ψ_i^b(s−) (ΔX̄(s))^{m_1} + Σ_{m_3=1}^b \binom{b}{m_3} X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3}.

The third component of the above sum is zero. This follows from the observation that ΔX̄(s) ΔΦ_j(s) =

0, because X̄ and Φ_j do not jump at the same time. Note that the fifth component of the sum (35) can be written as follows:

(36) Σ_{m_1=1}^g \binom{g}{m_1} X̄^{g−m_1}(s−) Φ_j^p(s−) Ψ_i^b(s−) (ΔX̄(s))^{m_1} = g X̄^{g−1}(s−) Φ_j^p(s−) Ψ_i^b(s−) ΔX̄(s) + Σ_{m_1=2}^g \binom{g}{m_1} X̄^{g−m_1}(s−) Φ_j^p(s−) Ψ_i^b(s−) (ΔX̄(s))^{m_1}.

Combining (35) and (36) we conclude that

I_3 = X̄^g(s−) Φ_j^p(s−) Ψ_i^b(s−) + X̄^g(s−) Ψ_i^b(s−) Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−) (ΔΦ_j(s))^{m_2} + (Σ_{m_3=1}^b \binom{b}{m_3} X̄^g(s−) Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3}) (Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−) (ΔΦ_j(s))^{m_2}) + g X̄^{g−1}(s−) Φ_j^p(s−) Ψ_i^b(s−) ΔX̄(s) + Σ_{m_1=2}^g \binom{g}{m_1} X̄^{g−m_1}(s−) Φ_j^p(s−) Ψ_i^b(s−) (ΔX̄(s))^{m_1} + Σ_{m_3=1}^b \binom{b}{m_3} X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3}.

Now, inserting the above I_3 into I_2 defined by (27), we derive

I_2 = Σ_{0<s≤t} [ X̄^g(s−) Ψ_i^b(s−) Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−) (ΔΦ_j(s))^{m_2} + (Σ_{m_3=1}^b \binom{b}{m_3} X̄^g(s−) Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3}) (Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−) (ΔΦ_j(s))^{m_2}) + Σ_{m_1=2}^g \binom{g}{m_1} X̄^{g−m_1}(s−) Φ_j^p(s−) Ψ_i^b(s−) (ΔX̄(s))^{m_1} + Σ_{m_3=1}^b \binom{b}{m_3} X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−m_3}(s−) (ΔΨ_i(s))^{m_3} − p X̄^g(s−) Φ_j^{p−1}(s−) Ψ_i^b(s−) ΔΦ_j(s) − b X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−1}(s−) ΔΨ_i(s) ].

Now, we rewrite I_2 as a sum of stochastic integrals:

I_2 = Σ_{m_2=1}^p \binom{p}{m_2} ∫_0^t X̄^g(s−) Φ_j^{p−m_2}(s−) Ψ_i^b(s−) dΦ_j(s) + Σ_{m_3=1}^b \binom{b}{m_3} ∫_0^t X̄^g(s−) Ψ_i^{b−m_3}(s−) (Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−)) δ_{ij} dΨ_i^{(m_3)}(s) + Σ_{m_1=2}^g \binom{g}{m_1} ∫_0^t X̄^{g−m_1}(s−) Φ_j^p(s−) Ψ_i^b(s−) dX̄^{(m_1)}(s) + Σ_{m_3=1}^b \binom{b}{m_3} ∫_0^t X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−m_3}(s−) dΨ_i^{(m_3)}(s) − ∫_0^t p X̄^g(s−) Φ_j^{p−1}(s−) Ψ_i^b(s−) dΦ_j(s) − ∫_0^t b X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−1}(s−) dΨ_i(s),

where we used that ΔΦ_j ∈ {0, 1}, so (ΔΦ_j(s))^{m_2} = ΔΦ_j(s), and that for i ≠ j the processes Φ_j and Ψ_i have no common jumps (hence the Kronecker delta δ_{ij}). We shall rewrite the above expression in terms of integrals with respect to the compensated processes X̃^{(m_1)}, Φ̄_j and Ψ̄_i^{(m_3)} given in (19), (16) and (20), respectively:

(37) I_2 = Σ_{m_2=1}^p \binom{p}{m_2} ∫_0^t X̄^g(s−) Φ_j^{p−m_2}(s−) Ψ_i^b(s−) dΦ̄_j(s) + Σ_{m_3=1}^b \binom{b}{m_3} ∫_0^t X̄^g(s−) Ψ_i^{b−m_3}(s−) (Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−)) δ_{ij} dΨ̄_i^{(m_3)}(s) + Σ_{m_1=2}^g \binom{g}{m_1} ∫_0^t X̄^{g−m_1}(s−) Φ_j^p(s−) Ψ_i^b(s−) dX̃^{(m_1)}(s) + Σ_{m_3=1}^b \binom{b}{m_3} ∫_0^t X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−m_3}(s−) dΨ̄_i^{(m_3)}(s) − ∫_0^t p X̄^g(s−) Φ_j^{p−1}(s−) Ψ_i^b(s−) dΦ̄_j(s) − ∫_0^t b X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−1}(s−) dΨ̄_i(s) + I_7,

where

(38) I_7 := Σ_{m_2=1}^p \binom{p}{m_2} ∫_0^t X̄^g(s−) Φ_j^{p−m_2}(s−) Ψ_i^b(s−) λ_j(s) ds + Σ_{m_3=1}^b \binom{b}{m_3} ∫_0^t X̄^g(s−) Ψ_i^{b−m_3}(s−) (Σ_{m_2=1}^p \binom{p}{m_2} Φ_j^{p−m_2}(s−)) δ_{ij} E[(U_n^{(j)})^{m_3}] λ_j(s) ds + Σ_{m_1=2}^g \binom{g}{m_1} ∫_0^t ∫_R X̄^{g−m_1}(s−) Φ_j^p(s−) Ψ_i^b(s−) γ^{m_1}(s−, x) ν(dx) ds + Σ_{m_3=1}^b \binom{b}{m_3} ∫_0^t X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−m_3}(s−) E[(U_n^{(i)})^{m_3}] λ_i(s) ds − ∫_0^t p X̄^g(s−) Φ_j^{p−1}(s−) Ψ_i^b(s−) λ_j(s) ds − ∫_0^t b X̄^g(s−) Φ_j^p(s−) Ψ_i^{b−1}(s−) E[U_n^{(i)}] λ_i(s) ds.

Note that I_7 is a sum of terms of the form

(39) I_8 := ∫_0^t X̄^{g−m_1}(s−) Φ_j^{p−m_2}(s−) Ψ_i^{b−m_3}(s−) Θ(s−) ds

for various predictable processes Θ (e.g. Θ(s−) = λ_j(s), or Θ(s−) = δ_{ij} E[(U_n^{(j)})^{m_3}] λ_j(s) for m_3 = 1,…,b, or Θ(s−) = ∫_R γ^{m_1}(s−, x) ν(dx) for m_1 = 2,…,g). Similarly as in (29), using integration by parts we can rewrite (39) in the following way:

(40) I_8 = X̄^{g−m_1}(t−) Φ_j^{p−m_2}(t−) Ψ_i^{b−m_3}(t−) ∫_0^t Θ(u−) du − ∫_0^t (∫_0^s Θ(u−) du) X̄^{g−m_1}(s) Φ_j^{p−m_2}(s) dΨ_i^{b−m_3}(s) − ∫_0^t (∫_0^s Θ(u−) du) X̄^{g−m_1}(s) Ψ_i^{b−m_3}(s) dΦ_j^{p−m_2}(s) − ∫_0^t (∫_0^s Θ(u−) du) X̄^{g−m_1}(s) d[Φ_j^{p−m_2}, Ψ_i^{b−m_3}](s) − ∫_0^t (∫_0^s Θ(u−) du) Φ_j^{p−m_2}(s) Ψ_i^{b−m_3}(s) dX̄^{g−m_1}(s).

Combining (29), (37), (38) and (40) we can express (26) as a sum of stochastic integrals of powers of X̄, Φ_j and Ψ_i strictly lower than g + p + b with respect to the processes X̃^(k), Φ̄_j and Ψ̄_i^(l) for k ≤ g and l ≤ b. Then all the above identities and induction complete the proof. □
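As a minimal sanity check of (26) (our illustration): for g = 2 and p = b = 0 it reads

X̄²(t) = X̄²(0) + 2 ∫_0^t X̄(s−) dX̄(s) + ∫_0^t σ_0²(s−) ds + Σ_{0<s≤t} (ΔX̄(s))² = X̄²(0) + 2 ∫_0^t X̄(s−) dX̄(s) + ∫_0^t σ_0²(s−) ds + X̄^(2)(t),

i.e. the usual Itô/quadratic-variation identity for X̄, with the power-jump process of order two appearing as the jump part of [X̄, X̄].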

4. PROOFS OF MAIN RESULTS

4.1. Proof of Theorem 2. We start with the following proposition.

Proposition 8. For fixed t ≥ 0 and s_1 ≤ … ≤ s_m ≤ t let

X := (X̄(s_1), …, X̄(s_m)),

Φ_j := (Φ_j(s_1), …, Φ_j(s_m)) for j = 1,…,N,

Ψ_i := (Ψ_i(s_1), …, Ψ_i(s_m)) for i = 1,…,N, and Z ∈ L²(Ω, F_t).

Then, for any ε > 0, there exist m ∈ N and a random variable Z_ε ∈ L²(Ω, σ(X, Φ_1, …, Φ_N, Ψ_1, …, Ψ_N)) such that

E(Z − Z_ε)² < ε.

Proof. The conclusion follows from more general considerations. Let Z ∈ L²(Ω, σ(Z_1, Z_2, …)) for certain random variables {Z_1, Z_2, …}. Then Z can be approximated with any given accuracy in the L² norm by

Z̃_ε := Σ_{l=1}^m k_l 1_{A_l},

where A_l = {Z_1 ∈ B_1^l, Z_2 ∈ B_2^l, …} for Borel sets B_k^l and constants k_l. For any ε > 0, there exists a positive integer M such that

P(A_l^M \ A_l) ≤ ε,

where A_l^M := {Z_1 ∈ B_1^l, …, Z_M ∈ B_M^l}; indeed, A_l^M ↓ A_l as M → ∞, so continuity of the probability measure applies. It remains to define Z_ε := Σ_{l=1}^m k_l 1_{A_l^M}. □

Let

P := ⋃_{i,j=1}^N { X̄^{g_1}(t_1) · … · X̄^{g_m}(t_m) · Φ_j^{p_1}(t_1) · … · Φ_j^{p_m}(t_m) · Ψ_i^{b_1}(t_1) · … · Ψ_i^{b_m}(t_m) : 0 ≤ t_1 < … < t_m ≤ t, g_k, p_k, b_k ≥ 0, m ≥ 1 }.

Lemma 9. The family P is a total family in L²(Ω, F_t), that is, the linear span of P is dense in L²(Ω, F_t).

Proof. Assume that P is not a total family. Then there is Z ∈ L²(Ω, F), Z ≠ 0, such that Z is orthogonal to P. From Proposition 8 there exists a Borel function f_ε such that

Z_ε = f_ε(X, Φ_1, …, Φ_N, Ψ_1, …, Ψ_N). Recall that under assumption (11) the polynomials are dense in L²(R, dϕ), so we can approximate

Z_ε by polynomials. Furthermore, since Z ⊥ P, we have E(Z Z_ε) = 0. Then from the Schwarz inequality we obtain

E(Z²) = E[Z(Z − Z_ε)] ≤ √(E(Z²)) · √(E[(Z − Z_ε)²]) ≤ √ε · √(E(Z²)).

Letting ε → 0 yields Z = 0 a.s., a contradiction. □

Proof of Theorem 2. Since by a linear transformation we can switch from the X̃^(k) to the H^(k), from the Ψ̄_i^(n) to the G_i^(n) and from the Φ̄_i to the G_i^(1), we can rewrite representation (25) as follows:

(41) X̄^g(t) Φ_j^p(t) Ψ_i^b(t) = f^{(g+p+b)}(t) + Σ_{s=1}^g Σ_{τ=1}^{b+p} Σ_{ι_1,…,ι_s ≥ 1} Σ_{υ_1,…,υ_τ ≥ 1} ∫_0^t ∫_0^{t_1−} … ∫_0^{t_{s+τ−1}−} f^{(g+p+b)}_{(υ_1,…,υ_τ,ι_1,…,ι_s,i)}(t_1, t_2, …, t_{s+τ}) dG_i^{(υ_τ)}(t_{s+τ}) … dG_i^{(υ_1)}(t_{s+1}) dH^{(ι_s)}(t_s) … dH^{(ι_1)}(t_1),

where f^{(g+p+b)} and f^{(g+p+b)}_{(υ_1,…,υ_τ,ι_1,…,ι_s,i)} are random fields on L²(Ω, F), and G_i^(l) and H^(k) (i = 1,…,N, l, k ≥ 1) are the orthogonal martingales defined in (22) and (21), respectively. In fact, stochastic integrals with respect to orthogonal martingales are again orthogonal (see Protter [35, Lemma 2, p. 180 and Theorem 35, p. 178]).

Since by Lemma 9 the set P is a total family in L²(Ω, F_t), any F_t-measurable random variable F in L²(Ω, F_t) can be approximated by a polynomial constructed from processes of the form X̄^g · Φ_j^p · Ψ_i^b appearing on the left side of (41), where i, j vary over {1,…,N}. First observe that if A and B have the form of a multiple integral with respect to G_i^(l) and H^(k) (i = 1,…,N, l, k ≥ 1) as on the right side of (41), then AB is also of this form (see the proof of [28, Thm. 1] for details). Thus every random variable F in L²(Ω, F_t) has the following representation:

F(t) = E[F(t)] + Σ_{i=1}^N Σ_{s=1}^∞ Σ_{τ=1}^∞ Σ_{ι_1,…,ι_s ≥ 1} Σ_{υ_1,…,υ_τ ≥ 1} ∫_0^t ∫_0^{t_1−} … ∫_0^{t_{s+τ−1}−} f_{(υ_1,…,υ_τ,ι_1,…,ι_s,i)}(t_1, t_2, …, t_{s+τ}) dG_i^{(υ_τ)}(t_{s+τ}) … dG_i^{(υ_1)}(t_{s+1}) dH^{(ι_s)}(t_s) … dH^{(ι_1)}(t_1),

where f_{(υ_1,…,υ_τ,ι_1,…,ι_s,i)} is a random field on L²(Ω, F). □

4.2. Proof of Theorem 4. Now, in Theorem 2 we can again switch by a linear transformation from the H^(k) to the X̃^(k) and from the G_i^(n) to the Ψ̄_i^(n) and Φ̄_i. Thus we get the following representation:

F(t) − E[F(t)] = Σ_{i,j=1}^N Σ_{s=1}^∞ Σ_{τ=1}^∞ Σ_{ζ=1}^∞ Σ_{ι_1,…,ι_s ≥ 1} Σ_{v_1,…,v_τ ≥ 1} ∫_0^t ∫_0^{t_1−} … ∫_0^{t_{s+τ+ζ−1}−} f_{(v_1,…,v_τ,ι_1,…,ι_s,i,j)}(t, t_1, t_2, …, t_{s+τ+ζ}) dΦ̄_j(t_{s+τ+ζ}) … dΦ̄_j(t_{s+τ+1}) dΨ̄_i^{(v_τ)}(t_{s+τ}) … dΨ̄_i^{(v_2)}(t_{s+2}) dΨ̄_i^{(v_1)}(t_{s+1}) dX̃^{(ι_s)}(t_s) … dX̃^{(ι_2)}(t_2) dX̃^{(ι_1)}(t_1).

Using the same arguments as in the proof of Nualart and Schoutens [28, Thm. 2] we observe that F − E[F] can be represented as a sum of single integrals with respect to all processes appearing in the multiple integrals. Indeed, the procedure can be described as follows. First one takes s = 1 and produces a sum of integrals with respect to X̃^{(ι_1)}, ι_1 ≥ 1. Then one takes s = 2 and adds integrals with respect to X̃^{(ι_2)}, ι_2 ≥ 1. This procedure continues until all processes appear in the integrals. Thus we get

(42) F(t) = E[F(t)] + ∫_0^t h_X^{(1)}(s) dX̄(s) + Σ_{k=2}^∞ ∫_0^t h_X^{(k)}(s) dX̃^(k)(s) + Σ_{j=1}^N ∫_0^t h_Φ^{(j)}(s) dΦ̄_j(s) + Σ_{l=1}^∞ Σ_{i=1}^N ∫_0^t h̃_Ψ^{(l,i)}(s) dΨ̄_i^(l)(s),

where h_X^{(1)}, h_X^{(k)}, h_Φ^{(j)} and h̃_Ψ^{(l,i)} (for i, j = 1,…,N, k ≥ 2 and l ≥ 1) are predictable processes. Now we want to obtain the above representation with respect to the Itô-Markov additive process X. From the definitions in (3) and (4), we have

∫_0^t h_X^{(1)}(s) dX̄(s) = ∫_0^t h_X^{(1)}(s) dX(s) − Σ_{i=1}^N ∫_0^t h_X^{(1)}(s) dΨ_i(s).

For

(43) h_Ψ^{(l,i)}(s) := h̃_Ψ^{(l,i)}(s) − h_X^{(1)}(s) for l = 1 and i = 1,…,N, and h_Ψ^{(l,i)}(s) := h̃_Ψ^{(l,i)}(s) for l ≥ 2 and i = 1,…,N,

and for any square-integrable {F_t}-martingale M, the representation given in (42) can be rewritten as follows:

(44) M(t) = M(0) + ∫_0^t h_X^{(1)}(s) dX(s) + Σ_{k=2}^∞ ∫_0^t h_X^{(k)}(s) dX̃^(k)(s) + Σ_{j=1}^N ∫_0^t h_Φ^{(j)}(s) dΦ̄_j(s) + Σ_{l=1}^∞ Σ_{i=1}^N ∫_0^t h_Ψ^{(l,i)}(s) dΨ̄_i^(l)(s),

where h_X^{(1)}, h_X^{(k)}, h_Φ^{(j)} and h_Ψ^{(l,i)} (for i, j = 1,…,N, k ≥ 2 and l ≥ 1) are predictable processes. □

REFERENCES

[1] Applebaum, D. (2004). Lévy Processes and Stochastic Calculus. Vol. 93 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge.

[2] Asmussen, S. (2003). Applied Probability and Queues. 2nd ed. Springer.
[3] Asmussen, S. and Albrecher, H. (2010). Ruin Probabilities. 2nd ed. World Scientific, Singapore.
[4] Asmussen, S., Avram, F. and Pistorius, M. (2004). Russian and American put options under exponential phase-type Lévy models. Stochastic Processes and their Applications 109, 79–111.
[5] Asmussen, S. and Kella, O. (2000). Multi-dimensional martingale for Markov additive processes and its applications. Advances in Applied Probability 32(2), 376–380.
[6] Barndorff-Nielsen, O.E. and Shephard, N. (2003). Realised power variation and stochastic volatility models. Bernoulli 9, 243–265.
[7] Barndorff-Nielsen, O.E. and Shephard, N. (2004). Financial Volatility: Stochastic Volatility and Lévy Based Models. Cambridge University Press, Cambridge.
[8] Chou, C.S. and Meyer, P.A. (1975). Sur la représentation des martingales comme intégrales stochastiques dans les processus ponctuels. Séminaire de Probabilités IX, Lecture Notes in Math. 465, Springer.
[9] Çinlar, E. (1972). Markov additive processes: I. Z. Wahrscheinlichkeitstheorie und verwandte Gebiete 24, 85–93.
[10] Çinlar, E. (1972). Markov additive processes: II. Z. Wahrscheinlichkeitstheorie und verwandte Gebiete 24, 95–121.
[11] Corcuera, J.M., Nualart, D. and Schoutens, W. (2005). Completion of a Lévy market by power-jump assets. Finance and Stochastics 9(1), 109–127.
[12] Davis, M.H.A. (1976). The representation of martingales of jump processes. SIAM Journal on Control and Optimization 14, 623–638.
[13] Davis, M.H.A. (2005). Martingale representation and all that. In Advances in Control, Communication Networks, and Transportation Systems: In Honor of Pravin Varaiya, Systems and Control: Foundations and Applications. Birkhäuser.
[14] Dellacherie, C., Maisonneuve, B. and Meyer, P.A. (1992). Probabilités et Potentiel. Hermann, Paris.
[15] Di Nunno, G., Oksendal, B. and Proske, F. (2009). Malliavin Calculus for Lévy Processes with Applications to Finance. Springer.
[16] El Karoui, N., Peng, S. and Quenez, M.C. (1997). Backward stochastic differential equations in finance. Mathematical Finance 7(1), 1–71.
[17] Elliott, R.J. (1976). Stochastic integrals for martingales of a jump process with partially accessible jump times. Z. Wahrscheinlichkeitstheorie und verwandte Gebiete 36, 213–226.
[18] Elliott, R.J. (1977). Innovation projections of a jump process and local martingales. Mathematical Proceedings of the Cambridge Philosophical Society 81, 77–90.
[19] Elliott, R.J., Aggoun, L. and Moore, J.B. (2004). Hidden Markov Models: Estimation and Control. Springer-Verlag, Berlin.
[20] Emery, M. (1989). On the Azéma martingales. Lecture Notes in Mathematics 1372, 66–87, Springer, Berlin.
[21] Föllmer, H. and Schied, A. (2002). Stochastic Finance: An Introduction in Discrete Time. Studies in Mathematics 27, de Gruyter, Berlin-New York.
[22] Gautam, N., Kulkarni, V., Palmowski, Z. and Rolski, T. (1999). Bounds for fluid models driven by semi-Markov inputs. Probability in the Engineering and Informational Sciences 13(4), 429–475.
[23] Jacod, J. and Shiryaev, A.N. (2003). Limit Theorems for Stochastic Processes. 2nd ed. Springer-Verlag.
[24] Karatzas, I. and Shreve, S. (1998). Brownian Motion and Stochastic Calculus. Springer, Berlin-Heidelberg-New York.
[25] Kunita, H. and Watanabe, S. (1967). On square-integrable martingales. Nagoya Mathematical Journal 30, 209–245.
[26] Liao, M. (2004). Lévy Processes in Lie Groups. Vol. 162 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge.
[27] Liptser, R.S. and Shiryaev, A.N. (2001). Statistics of Random Processes: II. Applications. Springer-Verlag.
[28] Nualart, D. and Schoutens, W. (2000). Chaotic and predictable representations for Lévy processes. Stochastic Processes and their Applications 90(1), 109–122.
[29] Oksendal, B. and Sulem, A. (2004). Applied Stochastic Control of Jump Diffusions. Springer.
[30] Pacheco, A. and Prabhu, N.U. (1995). Markov-additive processes of arrivals. In: Dshalalow, J.D. (ed.), Advances in Queueing, CRC Press.
[31] Pacheco, A., Tang, L.C. and Prabhu, N.U. (2009). Markov-Modulated Processes and Semiregenerative Phenomena. World Scientific, Hackensack, New Jersey.
[32] Ivanovs, J. and Palmowski, Z. (2012). Occupation densities in solving exit problems for Markov additive processes and their reflections. Stochastic Processes and their Applications 122(9), 3342–3360.
[33] Palmowski, Z. and Rolski, T. (2002). A technique for the exponential change of measure for Markov processes. Bernoulli 8(6), 767–785.
[34] Prabhu, N.U. (1998). Stochastic Storage Processes: Queues, Insurance Risk, Dams, and Data Communication. 2nd ed. Springer-Verlag.
[35] Protter, P.E. (2005). Stochastic Integration and Differential Equations: A New Approach. 2nd ed. Springer-Verlag, Berlin.
[36] Rogers, L.C.G. and Williams, D. (2000). Diffusions, Markov Processes and Martingales. Vol. 2. 2nd ed. Cambridge Mathematical Library, Cambridge University Press.
[37] Schoutens, W. (1999). Stochastic Processes and Orthogonal Polynomials. Lecture Notes in Statistics 146. Springer, New York.
[38] Zhang, X., Elliott, R.J., Siu, T.K. and Guo, J.Y. (2012). Markovian regime-switching market completion using additional Markov jump assets. IMA Journal of Management Mathematics 23(3), 283–305.

FACULTY OF PURE AND APPLIED MATHEMATICS, WROCŁAW UNIVERSITY OF SCIENCE AND TECHNOLOGY, WYB. WYSPIAŃSKIEGO 27, 50-370 WROCŁAW, POLAND
E-mail address: [email protected]

INSTITUTE OF MATHEMATICS, POLISH ACAD. SCI., ŚNIADECKICH 8, 00-656 WARSAW; ALSO VISTULA UNIVERSITY
E-mail address: [email protected]

INSTITUTE OF MATHEMATICS, JAGIELLONIAN UNIVERSITY, ŁOJASIEWICZA 6, 30-348 CRACOW, AND INSTITUTE OF ECONOMICS, POLISH ACADEMY OF SCIENCES, PARADE SQUARE 1, 00-901 WARSAW
E-mail address: [email protected]