arXiv:1612.09216v2 [math.PR] 25 Aug 2017

A NOTE ON CHAOTIC AND PREDICTABLE REPRESENTATIONS FOR ITÔ-MARKOV ADDITIVE PROCESSES

ZBIGNIEW PALMOWSKI, ŁUKASZ STETTNER, AND ANNA SULIMA

ABSTRACT. In this paper we provide predictable and chaotic representations for Itô-Markov additive processes X. Such a process is governed by a finite-state CTMC J which allows one to modify the parameters of the Itô-jump process (in a so-called regime switching manner). In addition, a transition of J triggers a jump of X distributed depending on the states of J just prior to the transition. This family of processes includes Markov modulated Itô-Lévy processes and Markov additive processes. The derived chaotic representation of a square-integrable random variable is given as a sum of stochastic integrals with respect to some explicitly constructed orthogonal martingales. We identify the predictable representation of a square-integrable martingale as a sum of stochastic integrals of predictable processes with respect to Brownian motion and power-jump processes related to all the jumps appearing in the model. This result generalizes the seminal result of Jacod-Yor and is of importance in financial mathematics. The derived representation then allows one to enlarge the incomplete market by a series of power-jump assets and to price all market-derivatives.

KEYWORDS. Markov additive processes ⋆ martingale representation ⋆ power-jump process ⋆ stochastic integral ⋆ Brownian motion ⋆ regime switching ⋆ complete market

Mathematics Subject Classification: 60J30; 60H05.

This work was partially supported by the National Science Centre under the grant 2015/17/B/ST1/01102.

CONTENTS
1. Introduction
2. Main results
3. Proof of predictable representation
3.1. Polynomial representation
4. Proofs of main results
4.1. Proof of Theorem 2
4.2. Proof of Theorem 4
References
1. INTRODUCTION
The Martingale Representation Theorem allows one to represent all martingales, in a certain space, as stochastic integrals. This result is the cornerstone of mathematical finance, where it produces hedges of certain derivatives (see e.g. Föllmer and Schied [21]). It is also the basis for the theory of backward stochastic differential equations (see El Karoui et al. [16] for a review). The martingale representation theorem can also lead to Wiener-like chaos expansions, which appear in the Malliavin calculus (see Di Nunno et al. [15]). The martingale representation theorem is most commonly considered in a Brownian setting. It states that any square-integrable martingale M adapted to the natural filtration of the Brownian motion W can be expressed as a stochastic integral with respect to the same Brownian motion W:

(1)  M(t) = M(0) + \int_0^t \xi(s)\, dW(s),

where \xi satisfies suitable integrability conditions. This is the classical Itô theorem (see Rogers and Williams [36, Thm. 36.1]). For processes having jumps coming according to a compensated Poisson measure \bar{\Pi}, the representation theorem was originally established by Kunita and Watanabe in their seminal paper [25, Prop. 5.2] (see also Applebaum [1, Thm. 5.3.5]). In particular, on a probability space (\Omega, \mathcal{F}, \mathbb{P}) with the augmented natural filtration of a Lévy process
\{\mathcal{H}_t\}_{t \geq 0}, any locally square-integrable \{\mathcal{H}_t\}-martingale M has the following representation:

(2)  M(t) = M(0) + \int_0^t \xi_1(s)\, dW(s) + \int_0^t \int_{\mathbb{R} \setminus \{0\}} \xi_2(s, x)\, \bar{\Pi}(ds, dx),

where \xi_1 and \xi_2 are \{\mathcal{H}_t\}-predictable processes satisfying suitable integrability conditions. Later the results were generalized in a series of papers by Chou and Meyer [8], Davis [12] and Elliott [17, 18]. In fact, Elliott [17] assumed that there are finitely many jumps within each finite interval. To deal with a general Lévy process (of unbounded variation), Nualart and Schoutens [28] derived a martingale representation using the chaotic representation property in terms of a suitable orthogonal sequence of martingales, obtained as the orthogonalization of the compensated Teugels power jump processes of the Lévy process (see also Corcuera et al. [11]). More on martingale representation can be found in Jacod and Shiryaev [23], Liptser and Shiryaev [27], Protter [35] and in a review by Davis [13].
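As a numerical illustration of the Brownian representation (1) (not part of the paper's argument), one can verify on a simulated path that the martingale M(t) = W(t)^2 - t is reproduced by a left-point Euler sum of the integrand \xi(s) = 2W(s); the step count and seed below are arbitrary choices.

```python
import numpy as np

# Sketch: M(t) = W(t)^2 - t has predictable representation
# M(t) = int_0^t 2 W(s) dW(s); check this with a discretized Ito integral.
rng = np.random.default_rng(0)
n, T = 200_000, 1.0
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))

M = W[-1] ** 2 - T                   # M(T) = W(T)^2 - T
ito_sum = np.sum(2.0 * W[:-1] * dW)  # int_0^T 2 W(s-) dW(s), left-point rule

print(abs(M - ito_sum))              # small discretization error
```

The left-point (non-anticipating) evaluation of the integrand is essential here; a midpoint or right-point rule would converge to a Stratonovich-type integral instead.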
The goal of this paper is to derive the martingale representation theorem for an Itô-Markov additive process X. Itô-Markov additive processes are a natural generalization of Itô-Lévy processes and hence also of Lévy processes. The use of Itô-Markov additive processes is widespread, making them a classical model in applied probability with a variety of application areas, such as queues, insurance risk, inventories, data communication, finance, environmental problems and so forth (see Asmussen [2], Asmussen and Albrecher [3], Asmussen et al. [4], Asmussen and Kella [5], Çinlar [9, 10], Prabhu [34, Ch. 7], Pacheco and Prabhu [30], Palmowski and Rolski [33], Palmowski and Ivanovs [32] and references therein). In this paper, we will also derive a chaotic representation of any square-integrable random variable in terms of certain orthogonal martingales. It should be underlined that the derived representation is of predictable type, that is, any square-integrable martingale is a sum of stochastic integrals of some predictable processes with respect to explicit martingales. This representation is hence different from the one given in (2) (in the jump part) and it can be used to prove completeness of some financial markets. This paper is organized as follows. In Section 2 we present the main results. Their proofs are given in Section 4, preceded by a crucial preliminary fact presented in Section 3.
2. MAIN RESULTS
We begin by defining Itô-Markov additive processes. Let (\Omega, \mathcal{F}, \mathbb{P}) be a complete probability space and let \mathbb{T} := [0, T] be a finite time horizon, where 0 < T < \infty. On this space we consider a continuous-time Markov chain J = \{J(t) : t \in \mathbb{T}\} with finite state space \{e_1, \ldots, e_N\} (identified with the canonical basis vectors of \mathbb{R}^N) and intensity matrix [\lambda_{ij}]_{i,j=1,2,\ldots,N}. The element \lambda_{ij} is the transition intensity of the Markov chain J jumping from state e_i to state e_j.
A process (J, X) = \{(J(t), X(t)) : t \in \mathbb{T}\} on the state space \{e_1, \ldots, e_N\} \times \mathbb{R} is a Markov additive process (MAP) if (J, X) is a Markov process and the conditional distribution of (J(s+t), X(s+t) - X(s)) for s, t \in \mathbb{T}, given (J(s), X(s)), depends only on J(s) (see Çinlar [9, 10]). Every MAP has a very special structure. It is usually said that X is the additive component and J is the background process representing the environment. Moreover, the process X evolves as a Lévy process while J stays in a fixed state e_j. Following Asmussen and Kella [5] we can decompose the process X as follows:
(3)  X(t) = \bar{X}(t) + \widetilde{X}(t),

where

(4)  \widetilde{X}(t) := \sum_{i=1}^{N} \Psi_i(t)

for
(5)  \Psi_i(t) := \sum_{n \geq 1} U_n^{(i)} 1_{\{J(T_n) = e_i,\ T_n \leq t\}}

and for the jump epochs \{T_n\} of J. Here U_n^{(i)} (n \geq 1, 1 \leq i \leq N) are independent random variables which are also independent of \bar{X} and such that, for every i, the random variables U_n^{(i)} are identically distributed. Note that we can express the process \Psi_i as follows:

\Psi_i(t) = \int_0^t \int_{\mathbb{R}} x\, \Pi_U^i(ds, dx)

for the point measures

(6)  \Pi_U^i([0, t], dx) := \sum_{n \geq 1} 1_{\{U_n^{(i)} \in dx\}} 1_{\{J(T_n) = e_i,\ T_n \leq t\}}, \qquad i = 1, \ldots, N.

Remark 1. One can consider jumps U^{(ij)} with distribution depending also on the state e_j the Markov chain is jumping to, by extending the state space to the pairs (e_i, e_j) (see Gautam et al. [22, Thm. 5] for details).
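The processes \Psi_i in (5) are easy to simulate: run the chain J by sampling exponential holding times and add an independent copy of U^{(i)} each time J enters state e_i. The sketch below uses a two-state chain with assumed intensities and an assumed N(0, 1) jump law, purely for illustration.

```python
import numpy as np

# Sketch of Psi_i(t) = sum_n U_n^{(i)} 1{J(T_n) = e_i, T_n <= t}
# for a 2-state chain; Lambda and the law of U^{(i)} are illustrative.
rng = np.random.default_rng(1)
Lam = np.array([[-1.0, 1.0],
                [2.0, -2.0]])   # intensity matrix [lambda_ij]
T_horizon, state, t = 10.0, 0, 0.0
Psi = np.zeros(2)               # running values of Psi_1(t), Psi_2(t)

while True:
    # exponential holding time with rate -Lam[state, state]
    t += rng.exponential(1.0 / -Lam[state, state])
    if t > T_horizon:
        break
    state = 1 - state           # with two states, every jump flips the state
    Psi[state] += rng.normal()  # U^{(i)} ~ N(0, 1), an assumed jump law

print(Psi)
```

Each \Psi_i accumulates jumps only at epochs where the chain lands in state e_i, matching the indicator in (5).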
The first component in equation (3) is an Itô-Lévy process and it has the following decomposition (see Oksendal and Sulem [29, p. 5]):

(7)  \bar{X}(t) := X(0) + \int_0^t \mu_0(s)\, ds + \int_0^t \sigma_0(s)\, dW(s) + \int_0^t \int_{\mathbb{R}} \gamma(s-, x)\, \bar{\Pi}(ds, dx),

where W denotes the standard Brownian motion independent of J, \bar{\Pi}(dt, dx) := \Pi(dt, dx) - \nu(dx)\, dt is the compensated Poisson random measure which is independent of J and W, and
(8)  \mu_0(t) := \langle \mu_0, J(t) \rangle = \sum_{i=1}^{N} \mu_0^i \langle e_i, J(t) \rangle,

(9)  \sigma_0(t) := \langle \sigma_0, J(t) \rangle = \sum_{i=1}^{N} \sigma_0^i \langle e_i, J(t) \rangle,

(10)  \gamma(t, x) := \langle \gamma(x), J(t) \rangle = \sum_{i=1}^{N} \gamma^i(x) \langle e_i, J(t) \rangle,

for some vectors \mu_0 = (\mu_0^1, \ldots, \mu_0^N)' \in \mathbb{R}^N, \sigma_0 = (\sigma_0^1, \ldots, \sigma_0^N)' \in \mathbb{R}^N and the vector-valued measurable function \gamma(x) = (\gamma^1(x), \ldots, \gamma^N(x)). The measure \nu is the so-called jump measure identifying the distribution of the sizes of the jumps of the Poisson measure \Pi. The components \bar{X} and \widetilde{X} in (3) are, conditionally on the state of the Markov chain J, independent. Additionally, we suppose that the Lévy measure satisfies, for some \varepsilon > 0 and \lambda > 0,
(11)  \int_{(-\varepsilon, \varepsilon)^c} \exp(\lambda |\gamma(s-, x)|)\, \nu(dx) < \infty, \quad \mathbb{P}\text{-a.e.}, \qquad \int_{(-\varepsilon, \varepsilon)^c} \exp(\lambda x)\, \mathbb{P}(U^{(i)} \in dx) < \infty

for i = 1, \ldots, N, where U^{(i)} is a generic jump size when the Markov chain J jumps into state e_i. This implies that
\int_{\mathbb{R}} |\gamma(s-, x)|^k\, \nu(dx) < \infty, \quad \mathbb{P}\text{-a.e.}, \qquad \mathbb{E}(U^{(i)})^k < \infty, \quad i = 1, \ldots, N, \ k \geq 2,

and that the characteristic function \mathbb{E}[\exp(iuX(t))] is analytic in a neighborhood of 0. Moreover, X has moments of all orders and the polynomials are dense in L^2(\mathbb{R}, d\varphi(t, x)), where \varphi(t, x) := \mathbb{P}(X(t) \leq x).
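Before the formal definition given next, a minimal Euler-type sketch may help picture such a path: between transitions the process diffuses with state-dependent drift and volatility (8)-(9), and each modulating transition adds an extra jump. All parameters and the jump law below are assumed, and the Poisson part of (7) is omitted for brevity.

```python
import numpy as np

# Euler sketch of a regime-switching diffusion with an extra jump at each
# transition of the modulating chain J (illustrative parameters throughout).
rng = np.random.default_rng(2)
mu0 = np.array([0.05, -0.02])     # per-state drifts mu_0^i
sig0 = np.array([0.1, 0.3])       # per-state volatilities sigma_0^i
Lam = np.array([[-0.5, 0.5],
                [1.0, -1.0]])     # intensity matrix of J
T_h, n = 5.0, 5000
dt = T_h / n
X, J = 0.0, 0
for _ in range(n):
    # diffusion step under the current regime
    X += mu0[J] * dt + sig0[J] * np.sqrt(dt) * rng.normal()
    # transition of J occurs with probability ~ -Lam[J, J] * dt
    if rng.random() < -Lam[J, J] * dt:
        J = 1 - J
        X += rng.normal(0.0, 0.2)  # transition-triggered jump, assumed law
print(X, J)
```

The discretization of the transition mechanism is first order in dt; exact simulation would sample the exponential holding times directly, as in the previous sketch.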
Now we are ready to define the main process considered in this paper. The process (J, X) = \{(J(t), X(t)) : t \in \mathbb{T}\} (for simplicity we sometimes write only X) with the decomposition (3) is called an Itô-Markov additive process. This process evolves as the Itô-Lévy process \bar{X} between changes of states of the Markov chain J, that is, its parameters depend on the current state e_i of the Markov chain J. In addition, a transition of J from e_i to e_j triggers a jump of X distributed as U^{(j)}. This is a so-called non-anticipative Itô-Markov additive process. Itô-Markov additive processes are a natural generalization of Itô-Lévy processes and thus of Lévy processes. Moreover, if \gamma(s, x) = x then X is a Markov additive process. If additionally N = 1, then X is a Lévy process. If U^{(j)} \equiv 0 and N > 1, then X is a Markov modulated Lévy process (see Pacheco et al. [31]). If additionally there are no jumps, that is, \bar{\Pi}(ds, dx) = 0, we have a Markov modulated Brownian motion. From now on, we will work with the following filtration on (\Omega, \mathcal{F}, \mathbb{P}):
(12)  \mathcal{F}_t := \mathcal{G}_t \vee \mathcal{N}, where \mathcal{N} is the collection of \mathbb{P}-null sets of \mathcal{F} and
(13)  \mathcal{G}_t := \sigma\{J(s), W(s), \Gamma(s), \Pi_U^1([0, s], dx), \ldots, \Pi_U^N([0, s], dx);\ s \leq t\}

for

(14)  \Gamma(t) := \int_0^t \int_{\mathbb{R}} \gamma(s-, x)\, \Pi(ds, dx).

Note that the filtration \{\mathcal{F}_t\}_{t \geq 0} is right-continuous (see Karatzas and Shreve [24, Prop. 7.7] and also Protter [35, Thm. 31]). By the same arguments as in the proof of Thm. 3.3 of Liao [26], the filtration
\{\mathcal{G}_t\}_{t \geq 0} is equivalent to
(15)  \mathcal{G}_t := \sigma\{J(s), X(s), \Pi_U^1([0, s], dx), \ldots, \Pi_U^N([0, s], dx);\ s \leq t\}.

To present the main results we need a few additional processes. We observe that the Markov chain
J can be represented in terms of a marked point process \Phi_j defined by

\Phi_j(t) := \Phi([0, t] \times \{e_j\}) = \sum_{n \geq 1} 1_{\{J(T_n) = e_j,\ T_n \leq t\}} = \Pi_U^j([0, t], \mathbb{R}), \qquad j = 1, 2, \ldots, N,

for the jump epochs \{T_n\} of J. Note that the process \Phi_j counts the number of jumps into state e_j up to time t. Let \phi_j be the dual predictable projection of \Phi_j (sometimes called the compensator). That is, the process
(16)  \bar{\Phi}_j(t) := \Phi_j(t) - \phi_j(t), \qquad j = 1, \ldots, N,

is an \{\mathcal{F}_t\}-martingale; it is called the jth Markovian jump martingale. Note that \phi_j is unique and

(17)  \phi_j(t) := \int_0^t \lambda_j(s)\, ds

for
(18)  \lambda_j(t) := \sum_{i \neq j} 1_{\{J(t-) = e_i\}} \lambda_{ij}

(see Zhang et al. [38, p. 290]). Following Corcuera et al. [11] we introduce the power-jump processes

X^{(k)}(t) := \sum_{s \leq t} (\Delta \bar{X}(s))^k, \qquad k \geq 2,
\mathbb{E}\big(X^{(k)}(t) \,\big|\, \mathcal{F}_t^J\big) = \mathbb{E}\Big(\sum_{s \leq t} (\Delta \bar{X}(s))^k \,\Big|\, \mathcal{F}_t^J\Big) = \int_0^t \int_{\mathbb{R}} \gamma^k(s-, x)\, \nu(dx)\, ds < \infty, \quad \mathbb{P}\text{-a.e.}, \quad k \geq 2,

where \mathcal{F}_t^J denotes the \sigma-field generated by \{J(s) : s \leq t\}.
Furthermore, any integral of an \{\mathcal{F}_t\}-adapted process with respect to an \{\mathcal{F}_t\}-adapted stochastic measure or an \{\mathcal{F}_t\}-adapted stochastic process is still adapted (see Jacod and Shiryaev [23, Prop. 3.5 and Thm. 4.31(i)]). Hence X^{(k)} is \{\mathcal{F}_t\}-adapted for k \geq 2. We will also need power martingales related to the second component of X given in (3), namely to \widetilde{X} or to \Psi_i defined in (5). For l \geq 1 and i = 1, \ldots, N we define

\Psi_i^{(l)}(t) := \sum_{n \geq 1} \big(U_n^{(i)}\big)^l 1_{\{J(T_n) = e_i,\ T_n \leq t\}} = \int_0^t \int_{\mathbb{R}} x^l\, \Pi_U^i(ds, dx)

for \Pi_U^i given by (6). The compensated version of \Psi_i^{(l)} is called an impulse regime-switching martingale:

(20)  \bar{\Psi}_i^{(l)}(t) := \Psi_i^{(l)}(t) - \mathbb{E}\big(U_n^{(i)}\big)^l \phi_i(t) = \int_0^t \int_{\mathbb{R}} x^l\, \bar{\Pi}_U^i(ds, dx),

where \bar{\Pi}_U^i(dt, dx) = \Pi_U^i(dt, dx) - \lambda_i(t)\eta_i(dx)\, dt for \lambda_i defined in (18) and \eta_i(dx) = \mathbb{P}(U_n^{(i)} \in dx). Using similar arguments to those above, it follows that \bar{\Psi}_i^{(l)} is an \{\mathcal{F}_t\}-martingale for l \geq 1 and i = 1, \ldots, N. Corcuera et al. [11] motivate trading in power-jump assets as follows. The power-jump process of order two is just the variation process of degree two, i.e. the quadratic variation process (see Barndorff-Nielsen and Shephard [6, 7]), and is related to the so-called realized variance. Contracts on realized variance have found their way into OTC markets and are now traded regularly. Typically, a third-power-jump asset measures a kind of asymmetry ("skewness") and a fourth-power-jump process measures extremal movements ("kurtosis"). Trade in such assets can be of use if one wants to bet on the realized skewness or realized kurtosis of the stock. Furthermore, an insurance contract against a crash can also easily be built from fourth-power-jump (or ith-power-jump, i > 4) assets.
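Given a simulated jump path, the power-jump functionals are just sums of powers of the jump sizes. The sketch below uses assumed compound-Poisson-style jump data (rate and jump law are illustrative) and evaluates the order-2, 3 and 4 functionals discussed above.

```python
import numpy as np

# X^{(k)}(t) = sum_{s <= t} (Delta X(s))^k for a pure-jump path:
# k = 2 is the quadratic variation of the jump part (realized variance),
# k = 3 and k = 4 are the skewness- and kurtosis-type functionals.
rng = np.random.default_rng(3)
jump_times = np.cumsum(rng.exponential(0.5, 40))  # assumed Poisson epochs
jumps = rng.normal(0.0, 1.0, 40)                  # assumed jump sizes Delta X
t = 10.0
in_window = jump_times <= t

X2 = np.sum(jumps[in_window] ** 2)   # X^{(2)}(t): realized (jump) variance
X3 = np.sum(jumps[in_window] ** 3)   # X^{(3)}(t): asymmetry ("skewness")
X4 = np.sum(jumps[in_window] ** 4)   # X^{(4)}(t): extremal moves ("kurtosis")
print(X2, X3, X4)
```

Even powers are nonnegative by construction, which is why X^{(2)} and X^{(4)} can serve as variance- and crash-insurance-type payoffs while X^{(3)} can take either sign.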
We denote by \mathcal{M}^2 the set of square-integrable \{\mathcal{F}_t\}-martingales, i.e. M \in \mathcal{M}^2 if M is a martingale, M(0) = 0 and \sup_t \mathbb{E} M^2(t) < \infty. The martingale convergence theorem implies that each M \in \mathcal{M}^2 is closed, i.e. there is an \mathcal{F}_\infty-measurable random variable M(\infty) such that M(t) \to M(\infty) in L^2(\Omega, \mathcal{F}) and, for each t, M(t) = \mathbb{E}[M(\infty)|\mathcal{F}_t]. Thus there is a one-to-one correspondence between \mathcal{M}^2 and L^2(\Omega, \mathcal{F}), so that \mathcal{M}^2 is a Hilbert space under the inner product M_1 \cdot M_2 = \mathbb{E}[M_1(\infty)M_2(\infty)]. Following Protter [35, p. 179], we say that two martingales M_1, M_2 \in \mathcal{M}^2 are strongly orthogonal if their product M_1 M_2 is a uniformly integrable martingale. As noted in Protter [35], M_1, M_2 \in \mathcal{M}^2 are strongly orthogonal if and only if [M_1, M_2] is a uniformly integrable martingale. We say that two random variables Y_1, Y_2 \in L^2(\Omega, \mathcal{F}) are weakly orthogonal if \mathbb{E}[Y_1 Y_2] = 0. Clearly, strong orthogonality implies weak orthogonality. One can obtain a set \{H^{(k)}, k \geq 1\} of pairwise strongly orthonormal martingales such that each H^{(k)} is a linear combination of the \bar{X}^{(n)} (n = 1, \ldots, k),
(21)  H^{(k)} = a_{k,k}\bar{X}^{(k)} + a_{k,k-1}\bar{X}^{(k-1)} + \cdots + a_{k,1}\bar{X}^{(1)}, \qquad k \geq 1.
The constants a_{\cdot,\cdot} can be calculated as described in Schoutens [37]; they correspond to the coefficients of the orthonormalization of the polynomials 1, x, x^2, \ldots. Similarly, we proceed with the martingales \bar{\Psi}_i^{(l)} and \bar{\Phi}_i and construct pairwise strongly orthonormal martingales G_i^{(l)} (i = 1, \ldots, N, l \geq 1) that are appropriate linear combinations of the processes \bar{\Psi}_i^{(l)} and \bar{\Phi}_i:

(22)  G_i^{(l)} = c_{l,l}^{(i)}\bar{\Psi}_i^{(l-1)} + c_{l,l-1}^{(i)}\bar{\Psi}_i^{(l-2)} + \cdots + c_{l,2}^{(i)}\bar{\Psi}_i^{(1)} + c_{l,1}^{(i)}\bar{\Phi}_i.

We will find the coefficients c_{\cdot,\cdot}^{(i)} as follows. Fix i \in \{1, \ldots, N\}. Let us consider two spaces. The first one is the space S_1 of all real polynomials on the positive real line endowed with the scalar product
\langle P(x), Q(x) \rangle_1 := \mathbb{E}\big[P(U^{(i)})\, Q(U^{(i)})\big]\, \mathbb{E}\,\Phi_i(1).
Note that

\langle x^l, x^h \rangle_1 = \mathbb{E}\big(U^{(i)}\big)^{l+h}\, \mathbb{E}\,\Phi_i(1).

The other space, S_2, is the space of all linear transformations of the processes \bar{\Psi}_i^{(l)} and \bar{\Phi}_i, i.e.,

S_2 = \big\{ c_l \bar{\Psi}_i^{(l-1)} + c_{l-1}\bar{\Psi}_i^{(l-2)} + \cdots + c_2\bar{\Psi}_i^{(1)} + c_1\bar{\Phi}_i;\ l \in \{1, 2, \ldots\},\ c_i \in \mathbb{R} \big\}.

We endow this space with the scalar product

\langle \bar{\Psi}_i^{(l)}(t), \bar{\Psi}_i^{(h)}(t) \rangle_2 := \mathbb{E}\,[\bar{\Psi}_i^{(l)}, \bar{\Psi}_i^{(h)}](1) = \mathbb{E}\big(U^{(i)}\big)^{l+h}\, \mathbb{E}\,\Phi_i(1),

\langle \bar{\Psi}_i^{(l)}(t), \bar{\Phi}_i(t) \rangle_2 := \mathbb{E}\,[\bar{\Psi}_i^{(l)}, \bar{\Phi}_i](1) = \mathbb{E}\big(U^{(i)}\big)^{l}\, \mathbb{E}\,\Phi_i(1),

\langle \bar{\Phi}_i(t), \bar{\Phi}_i(t) \rangle_2 := \mathbb{E}\,[\bar{\Phi}_i, \bar{\Phi}_i](1) = \mathbb{E}\,\Phi_i(1),

for i = 1, \ldots, N and l, h \geq 0. One clearly sees that x^l \leftrightarrow \bar{\Psi}_i^{(l)} is an isometry between S_1 and S_2. An orthogonalization of \{1, x, x^2, \ldots\} in S_1 thus gives an orthogonalization of \{\bar{\Phi}_i, \bar{\Psi}_i^{(1)}, \bar{\Psi}_i^{(2)}, \ldots\}. Finally, for i \neq j, the processes \bar{\Psi}_i^{(l)}, \bar{\Phi}_i and \bar{\Psi}_j^{(h)}, \bar{\Phi}_j do not jump at the same time, so

[\bar{\Psi}_i^{(l)}, \bar{\Psi}_j^{(h)}](t) = \sum_{s \leq t} \Delta\bar{\Psi}_i^{(l)}(s)\, \Delta\bar{\Psi}_j^{(h)}(s) = 0, \qquad [\bar{\Psi}_i^{(l)}, \bar{\Phi}_j](t) = \sum_{s \leq t} \Delta\bar{\Psi}_i^{(l)}(s)\, \Delta\bar{\Phi}_j(s) = 0

and
[\bar{\Phi}_i, \bar{\Phi}_j](t) = \sum_{s \leq t} \Delta\bar{\Phi}_i(s)\, \Delta\bar{\Phi}_j(s) = 0.

For the same reason we have
[\bar{\Psi}_i^{(l)}, H^{(k)}](t) = 0, \qquad [\bar{\Phi}_i, H^{(k)}](t) = 0

for i \in \{1, \ldots, N\}, k \geq 1 and l \geq 1. In this way all martingales in (21) and (22) are pairwise strongly orthogonal.
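The orthonormalization step behind (21)-(22) can be carried out numerically: with the scalar product \langle x^l, x^h \rangle_1 = \mathbb{E}(U^{(i)})^{l+h}\, \mathbb{E}\Phi_i(1), the Gram matrix of \{1, x, x^2, \ldots\} is a scaled moment (Hankel) matrix, and the coefficients of the orthonormal polynomials can be read off an inverse Cholesky factor. The law of U^{(i)} (Uniform(0, 1)) and the value of \mathbb{E}\Phi_i(1) below are assumed for illustration.

```python
import numpy as np

# Orthonormalize {1, x, x^2, x^3} in the scalar product
# <x^l, x^h> = E[U^{l+h}] * E[Phi_i(1)]  (U ~ Uniform(0,1) assumed).
deg = 3
c_phi = 2.0                                      # assumed E Phi_i(1)
moments = np.array([1.0 / (m + 1) for m in range(2 * deg + 2)])  # E[U^m]

# Gram matrix G[l, h] = <x^l, x^h>: a scaled Hankel matrix of moments
G = c_phi * np.array([[moments[l + h] for h in range(deg + 1)]
                      for l in range(deg + 1)])
L = np.linalg.cholesky(G)        # G = L L^T
A = np.linalg.inv(L).T           # column k: coefficients of the k-th
                                 # orthonormal polynomial (degree k)

# sanity check: the new basis is orthonormal, i.e. A^T G A = I
print(np.allclose(A.T @ G @ A, np.eye(deg + 1)))  # prints True
```

Column k of A plays the role of the coefficient vector (c_{k,1}, \ldots, c_{k,k}) in (22); the upper-triangular shape of A reflects that each orthonormal polynomial has exact degree k.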
The main results of this paper are given in the next two theorems.
Theorem 2. Any square-integrable {Ft}-measurable random variable F can be represented as follows:
(23)  F = \mathbb{E}[F] + \sum_{i=1}^{N} \sum_{s=1}^{\infty} \sum_{\tau=1}^{\infty} \sum_{\iota_1, \ldots, \iota_s \geq 1} \sum_{\upsilon_1, \ldots, \upsilon_\tau \geq 1} \int_0^t \int_0^{t_1-} \cdots \int_0^{t_{s+\tau-1}-} f_{(\upsilon_1, \ldots, \upsilon_\tau, \iota_1, \ldots, \iota_s, i)}(t_1, t_2, \ldots, t_{s+\tau})\, dG_i^{(\upsilon_\tau)}(t_{s+\tau}) \cdots dG_i^{(\upsilon_1)}(t_{s+1})\, dH^{(\iota_s)}(t_s) \cdots dH^{(\iota_1)}(t_1),
where the f_{(\upsilon_1, \ldots, \upsilon_\tau, \iota_1, \ldots, \iota_s, i)} are random fields for which (23) is well-defined in L^2(\Omega, \mathcal{F}), the processes G_i^{(l)} and H^{(k)} (i = 1, \ldots, N, l, k \geq 1) are orthogonal martingales and the convergence is in the L^2 sense.
Remark 3. The right-hand side of (23) is understood as follows. We take the finite sum

\sum_{i=1}^{N} \sum_{s=1}^{A_1} \sum_{\tau=1}^{A_2} \sum_{\iota_1, \ldots, \iota_s \geq 1} \sum_{\upsilon_1, \ldots, \upsilon_\tau \geq 1} \int_0^t \int_0^{t_1-} \cdots \int_0^{t_{s+\tau-1}-} f_{(\upsilon_1, \ldots, \upsilon_\tau, \iota_1, \ldots, \iota_s, i)}(t_1, t_2, \ldots, t_{s+\tau})\, dG_i^{(\upsilon_\tau)}(t_{s+\tau}) \cdots dG_i^{(\upsilon_1)}(t_{s+1})\, dH^{(\iota_s)}(t_s) \cdots dH^{(\iota_1)}(t_1)

in L^2(\Omega, \mathcal{F}). Since L^2(\Omega, \mathcal{F}) is a Hilbert space, the right-hand side of (23) is understood as the limit of the above expression in L^2(\Omega, \mathcal{F}) as A_1 \to \infty and A_2 \to \infty.
Theorem 4. Any square-integrable {Ft}-martingale M can be represented as follows:
(24)  M(t) = M(0) + \int_0^t h_X^{(1)}(s)\, d\bar{X}(s) + \sum_{k=2}^{\infty} \int_0^t h_X^{(k)}(s)\, d\bar{X}^{(k)}(s) + \sum_{j=1}^{N} \int_0^t h_\Phi^{(j)}(s)\, d\bar{\Phi}_j(s) + \sum_{l=1}^{\infty} \sum_{i=1}^{N} \int_0^t h_\Psi^{(l,i)}(s)\, d\bar{\Psi}_i^{(l)}(s),

where h_X^{(1)}, h_X^{(k)}, h_\Phi^{(j)} and h_\Psi^{(l,i)} (for i, j = 1, \ldots, N, k \geq 2 and l \geq 1) are predictable processes.

Remark 5. The right-hand side of (24) is understood in the same way as in Remark 3, that is, it is the limit in \mathcal{M}^2 of
\int_0^t h_X^{(1)}(s)\, d\bar{X}(s) + \sum_{k=2}^{B_1} \int_0^t h_X^{(k)}(s)\, d\bar{X}^{(k)}(s) + \sum_{j=1}^{N} \int_0^t h_\Phi^{(j)}(s)\, d\bar{\Phi}_j(s) + \sum_{l=1}^{B_2} \sum_{i=1}^{N} \int_0^t h_\Psi^{(l,i)}(s)\, d\bar{\Psi}_i^{(l)}(s)

as B_1 \to \infty and B_2 \to \infty.
The proof of this theorem will be given in Section 4.1. The main idea of the proof is that every random variable F in L^2(\Omega, \mathcal{F}_t) can be approximated by polynomials of a certain type. For these polynomials we will use the Itô formula together with induction to get an appropriate representation, first in terms of orthogonal multiple stochastic integrals and then as a sum of single stochastic integrals. The main step of the construction is given in Section 3.
Remark 6. Note that the representation in Theorem 4 is given as a sum of integrals with respect to the compensated processes \bar{X}, \bar{X}^{(k)}, \bar{\Phi}_j and \bar{\Psi}_i^{(l)}. In fact, the above representation could also be derived for non-compensated processes at the cost of an additional Lebesgue integral with respect to an appropriate sum of compensators.
This generalizes the classical results of Emery [20], Dellacherie et al. [14, p. 207] and Nualart and Schoutens [28].
3. PROOF OF PREDICTABLE REPRESENTATION
3.1. Polynomial representation. The main result of this section gives a stochastic integral representation of polynomials of the form X^g \cdot \Phi_j^p \cdot \Psi_i^b. We follow the argument of Nualart and Schoutens [28].
Theorem 7. We have the following representation:
(25)  X^g(t)\Phi_j^p(t)\Psi_i^b(t) = f^{(g+p+b)}(t) + \sum_{s=1}^{g} \sum_{\tau=1}^{b} \sum_{\zeta=1}^{p} \sum_{(v_1, \ldots, v_\tau) \in \{1, \ldots, b\}^\tau} \sum_{(\iota_1, \ldots, \iota_s) \in \{1, \ldots, g\}^s} \int_0^t \int_0^{t_1-} \cdots \int_0^{t_{s+\tau+\zeta-1}-} f^{(g+p+b)}_{(v_1, \ldots, v_\tau, \iota_1, \ldots, \iota_s, i, j)}(t, t_1, t_2, \ldots, t_{s+\tau+\zeta})\, d\Phi_j(t_{s+\tau+\zeta}) \cdots d\Phi_j(t_{s+\tau+1})\, d\Psi_i^{(v_\tau)}(t_{s+\tau}) \cdots d\Psi_i^{(v_2)}(t_{s+2})\, d\Psi_i^{(v_1)}(t_{s+1})\, dX^{(\iota_s)}(t_s) \cdots dX^{(\iota_2)}(t_2)\, dX^{(\iota_1)}(t_1),

where f^{(g+p+b)} and f^{(g+p+b)}_{(v_1, \ldots, v_\tau, \iota_1, \ldots, \iota_s, i, j)} are random fields, being sums of products of processes predictable with respect to t, t_1, t_2, \ldots, t_{s+\tau+\zeta}, defined on L^2(\Omega, \mathcal{F}).

Proof. We will express X^g(t)\Phi_j^p(t)\Psi_i^b(t) (for t \geq 0, g, p, b \geq 0, i, j = 1, \ldots, N) as a sum of stochastic integrals of lower powers of X, \Phi_j and \Psi_i with respect to the processes X^{(k)}, \Phi_j and \Psi_i^{(l)} (for k \leq g and l \leq b). Note that the processes \Phi_j and \Psi_i have bounded variation and are constant between jumps. Then (see Protter [35, p. 75]) the following holds true: [X, \Phi_j]^c(s) = 0, [\Phi_j, \Phi_j]^c(s) = 0, [\Psi_i, \Phi_j]^c(s) = 0, [\Psi_i, \Psi_i]^c(s) = 0 and [\Psi_i, X]^c(s) = 0. Moreover, from the definition of \bar{X} in (7) we have [X, X]^c(s) = \int_0^s \sigma_0^2(u-)\, du. Using Itô's formula (see Protter [35, p. 81]) we can write

(26)  X^g(t)\Phi_j^p(t)\Psi_i^b(t) = X^g(0)\Phi_j^p(0)\Psi_i^b(0) + g\int_0^t X^{g-1}(s-)\Phi_j^p(s-)\Psi_i^b(s-)\, dX(s) + p\int_0^t X^g(s-)\Phi_j^{p-1}(s-)\Psi_i^b(s-)\, d\Phi_j(s) + b\int_0^t X^g(s-)\Phi_j^p(s-)\Psi_i^{b-1}(s-)\, d\Psi_i(s) + \frac{1}{2}g(g-1)I_1 + I_2,

where

I_1 := \int_0^t X^{g-2}(s-)\Phi_j^p(s-)\Psi_i^b(s-)\, \sigma_0^2(s-)\, ds

and

(27)  I_2 := I_3 - \sum_{s \leq t} X^g(s-)\Phi_j^p(s-)\Psi_i^b(s-) - g\sum_{s \leq t} X^{g-1}(s-)\Phi_j^p(s-)\Psi_i^b(s-)\, \Delta X(s)