Communications on Stochastic Analysis, Vol. 9, No. 4 (2015) 467–476
Serials Publications, www.serialspublications.com
DOI: 10.31390/cosa.9.4.03

DOOB’S DECOMPOSITION THEOREM FOR NEAR-SUBMARTINGALES

HUI-HSIUNG KUO AND KIMIAKI SAITÔ*

Abstract. We study the discrete parameter case of near-martingales, near-submartingales, and near-supermartingales. In particular, we prove Doob's decomposition theorem for near-submartingales. This generalizes the classical case for submartingales.

1. Motivation From Non-adapted Stochastic Integration

Let B(t), t ≥ 0, be a Brownian motion starting at 0 and {F_t} the filtration given by B(t), namely, F_t = σ{B(s); 0 ≤ s ≤ t}, t ≥ 0. The Itô integral ∫_a^b f(t) dB(t) (see, e.g., the book [8]) is defined for {F_t}-adapted stochastic processes f(t) with almost all sample paths in L²[a, b]. Several extensions of the Itô theory of stochastic integration to cover non-adapted integrands have been introduced and extensively studied by, just to mention a few names, Buckdahn [3], Dorogovtsev [4], Hitsuda [5], Itô [6], Kuo–Potthoff [10], León–Protter [12], Nualart–Pardoux [13], Pardoux–Protter [14], Russo–Vallois [15], and Skorokhod [16]. In particular, in his lecture for the 1976 Kyoto Symposium, Itô [6] gave rather elegant ideas to define the following non-adapted stochastic integral

(I) ∫_0^t B(1) dB(s), 0 ≤ t ≤ 1, (1.1)

namely, enlarging the σ-field F_t to G_t = σ{B(1), B(s); 0 ≤ s ≤ t}, 0 ≤ t ≤ 1, so that the integrand B(1) is adapted and B(t) is a quasimartingale with respect to the filtration {G_t}. Then the stochastic integral in equation (1.1) is defined as a stochastic integral with respect to a quasimartingale and has the value

(I) ∫_0^t B(1) dB(s) = B(1)B(t), 0 ≤ t ≤ 1. (1.2)

On the other hand, the Hitsuda–Skorokhod integral (see [5], [16]) can be expressed in terms of a white noise integral (see the book [7]) and has the value

(HS) ∫_0^t B(1) dB(s) = ∫_0^t ∂_s* B(1) ds = B(1)B(t) − t, 0 ≤ t ≤ 1. (1.3)

Received 2015-9-17; Communicated by the editors.
2010 Mathematics Subject Classification. Primary 60G42, 60G48; Secondary 60G50, 60H05.
Key words and phrases. Brownian motion, stochastic integral, Hitsuda–Skorokhod integral, conditional expectation, martingale, near-martingale, near-submartingale, near-supermartingale, Doob's decomposition theorem, instantly independent sequence.
*This work was supported by JSPS Grant-in-Aid Scientific Research 15K04940.

Being motivated by Itô's ideas and observing the different values of equations (1.2) and (1.3), we have defined in [1], [2] the stochastic integral ∫_0^t B(1) dB(s) in the following way. Decompose the integrand B(1) as

B(1) = B(t) + (B(1) − B(t)),

where the first term B(t) is the Itô part of B(1) and the second term B(1) − B(t) is the counterpart of B(1). For the Itô part, the evaluation points are the left endpoints of subintervals, while the evaluation points for the counterpart are the right endpoints of subintervals. Thus for 0 ≤ t ≤ 1, we have

∫_0^t B(1) dB(s) = ∫_0^t [B(s) + (B(1) − B(s))] dB(s)
  = lim_{n→∞} Σ_{i=1}^n [B(s_{i−1}) + (B(1) − B(s_i))] (B(s_i) − B(s_{i−1}))
  = lim_{n→∞} Σ_{i=1}^n [B(1) − (B(s_i) − B(s_{i−1}))] (B(s_i) − B(s_{i−1}))
  = lim_{n→∞} ( B(1) Σ_{i=1}^n (B(s_i) − B(s_{i−1})) − Σ_{i=1}^n (B(s_i) − B(s_{i−1}))² )
  = B(1)B(t) − t, (1.4)

where the limit is convergence in probability. Note that this value is the same as the Hitsuda–Skorokhod integral in equation (1.3).
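The computation in equation (1.4) can also be checked numerically. The following Python sketch is only an illustration (the step count, the time t, and the random seed are arbitrary choices): it evaluates the anticipative Riemann sum on a single simulated Brownian path and compares it with B(1)B(t) − t; the two printed values agree up to the fluctuation of the quadratic variation around t.

# Numerical sanity check of equation (1.4): on one simulated Brownian path,
# the Riemann sum with the Ito part evaluated at left endpoints and the
# counterpart B(1) - B(s) at right endpoints is close to B(1)B(t) - t.
import numpy as np

rng = np.random.default_rng(0)
n_steps, t = 10_000, 0.7                      # illustrative choices
s = np.linspace(0.0, 1.0, n_steps + 1)        # partition of [0, 1]
dB = rng.normal(0.0, np.sqrt(1.0 / n_steps), n_steps)
B = np.concatenate(([0.0], np.cumsum(dB)))    # B(s_0), ..., B(s_n)

m = round(t * n_steps)                        # subintervals inside [0, t]
# evaluation rule of (1.4): B(s_{i-1}) + (B(1) - B(s_i)) on the i-th subinterval
integrand = B[:m] + (B[-1] - B[1 : m + 1])
riemann_sum = float(np.sum(integrand * dB[:m]))

print(riemann_sum, B[-1] * B[m] - s[m])       # nearly equal, pathwise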

There is an intrinsic difference between the stochastic processes

X_t = B(1)B(t) − t, Y_t = B(1)B(t), 0 ≤ t ≤ 1, (1.5)

given by equations (1.4) and (1.2), respectively. For any s ≤ t, we see that

E[X_t | F_s] = B(s)² − s. (1.6)

In particular, put t = s to get

E[X_s | F_s] = B(s)² − s. (1.7)

It follows from equations (1.6) and (1.7) that

E[X_t | F_s] = E[X_s | F_s], ∀ s ≤ t. (1.8)

On the other hand, it is easy to check that the process Y_t = B(1)B(t) in equation (1.5) does not satisfy equation (1.8): indeed, E[Y_t | F_s] = B(s)² + (t − s), while E[Y_s | F_s] = B(s)². This leads to the following concept introduced in [11].

Definition 1.1. A stochastic process X_t with E|X_t| < ∞ for a ≤ t ≤ b is called a near-martingale with respect to a filtration {F_t} if it satisfies the condition in equation (1.8). We can define near-submartingale and near-supermartingale with respect to a filtration {F_t} by the following respective conditions:

E[X_t | F_s] ≥ E[X_s | F_s], ∀ s ≤ t, (1.9)

and

E[X_t | F_s] ≤ E[X_s | F_s], ∀ s ≤ t.

Observe that if a stochastic process X_t is adapted to a filtration {F_t}, then near-martingale, near-submartingale, and near-supermartingale reduce to martingale, submartingale, and supermartingale, respectively. In this paper we will study the discrete parameter case of near-martingales and near-submartingales. In particular, we will prove Doob's decomposition theorem for near-submartingales.

2. Near-martingales and Near-submartingales

Let {F_n; 1 ≤ n ≤ N} be a fixed filtration, i.e., an increasing sequence of σ-fields.

Definition 2.1. A sequence X_n, 1 ≤ n ≤ N, of integrable random variables is called a near-martingale with respect to {F_n; 1 ≤ n ≤ N} if

E[X_{n+1} | F_n] = E[X_n | F_n], ∀ 1 ≤ n ≤ N − 1. (2.1)

Remark 2.2. It is easy to see that the equality in equation (2.1) is equivalent to the equality

E[X_m | F_n] = E[X_n | F_n], ∀ 1 ≤ n ≤ m ≤ N. (2.2)

Indeed, for m > n the tower property gives E[X_m | F_n] = E{E[X_m | F_{m−1}] | F_n} = E{E[X_{m−1} | F_{m−1}] | F_n} = E[X_{m−1} | F_n], and iterating yields equation (2.2). Similarly, we can define near-submartingale and near-supermartingale just by replacing the equality sign in equation (2.1) with ≥ and ≤, respectively. They also have the corresponding equivalent conditions as in equation (2.2). Obviously, if a sequence X_n, 1 ≤ n ≤ N, is adapted to {F_n; 1 ≤ n ≤ N}, then near-martingale, near-submartingale, and near-supermartingale are martingale, submartingale, and supermartingale, respectively.

Example 2.3. Take a sequence ξ_1, ξ_2, …, ξ_N of independent random variables with mean 0. Let {F_n} be the filtration given by F_n = σ{ξ_k; 1 ≤ k ≤ n}. Put

S_n = ξ_1 + ··· + ξ_n, X_n = S_N − S_n, 1 ≤ n ≤ N. (2.3)

The sequence S_n, 1 ≤ n ≤ N, is a martingale. On the other hand,

E[X_{n+1} | F_n] = E[ξ_{n+2} + ··· + ξ_N | F_n] = E(ξ_{n+2} + ··· + ξ_N) = 0.

Similarly, we have E[X_n | F_n] = 0. Thus E[X_{n+1} | F_n] = E[X_n | F_n], which shows that X_n, 1 ≤ n ≤ N, is a near-martingale. Furthermore, suppose ξ_n, n ≥ 1, is a sequence of independent random variables with mean 0. For fixed N, X_n = S_N − S_n, 1 ≤ n ≤ N, is a near-martingale as shown above. However, X_n = S_N − S_n, n ≥ N, is a martingale.
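Because the filtration in Example 2.3 is generated by finitely many ξ_k, the near-martingale equality can be verified exactly by brute force. The following Python sketch is illustrative only; it assumes fair coin flips ξ_k = ±1 (one convenient choice of mean-zero variables) so that E[· | F_n] is literally the average over all equally likely extensions of the first n flips.

# Exact check of Example 2.3 for fair coin flips xi_k = +-1: with N flips
# there are 2^N equally likely paths, so E[X | F_n] is the average of X
# over all extensions of the first n flips.
from itertools import product

N = 4  # illustrative size

def cond_exp(X, n):
    """E[X | F_n] as a map from the first n flips to its value."""
    return {w: sum(X(w + tail) for tail in product((-1, 1), repeat=N - n))
               / 2 ** (N - n)
            for w in product((-1, 1), repeat=n)}

def X(w, n):                     # X_n = S_N - S_n = xi_{n+1} + ... + xi_N
    return sum(w[n:])

for n in range(1, N):            # near-martingale: E[X_{n+1}|F_n] = E[X_n|F_n]
    assert cond_exp(lambda w: X(w, n + 1), n) == cond_exp(lambda w: X(w, n), n)

Here both conditional expectations vanish identically, exactly as in the computation above.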

Example 2.4. Let ξ_1, ξ_2, …, ξ_N be a sequence of independent random variables with mean 0 and var(ξ_n) = σ_n². Let F_n = σ{ξ_k; 1 ≤ k ≤ n}. Put

S_n = ξ_1 + ··· + ξ_n, X_n = S_nS_N − Σ_{k=1}^n σ_k², 1 ≤ n ≤ N. (2.4)

It is easy to check that

E[X_{n+1} | F_n] = E[S_{n+1}S_N − Σ_{k=1}^{n+1} σ_k² | F_n]
  = E[(S_n + ξ_{n+1})(S_n + ξ_{n+1} + ··· + ξ_N) | F_n] − Σ_{k=1}^{n+1} σ_k²
  = S_n² + σ_{n+1}² − Σ_{k=1}^{n+1} σ_k²
  = S_n² − Σ_{k=1}^n σ_k². (2.5)

Similarly, we can easily derive

E[X_n | F_n] = S_n² − Σ_{k=1}^n σ_k². (2.6)

It follows from equations (2.5) and (2.6) that E[X_{n+1} | F_n] = E[X_n | F_n]. Hence the sequence X_n = S_nS_N − Σ_{k=1}^n σ_k², 1 ≤ n ≤ N, is a near-martingale. Moreover, let ξ_n, n ≥ 1, be a sequence of independent random variables with mean 0 and var(ξ_n) = σ_n². Take F_n = σ{ξ_k; 1 ≤ k ≤ n}. Define S_n and X_n as in equation (2.4). For fixed N, the sequence X_n, 1 ≤ n ≤ N, is a near-martingale as shown above. On the other hand, the sequence X_n, n ≥ N, is a martingale.

Theorem 2.5. Let S_n, 1 ≤ n ≤ N, be a square integrable martingale with respect to a filtration {F_n; 1 ≤ n ≤ N}. Then

V_n = S_n(S_N − S_n), 1 ≤ n ≤ N,

is a near-martingale.

Proof. Note that

V_{n+1} − V_n = (S_{n+1} − S_n)S_N − S_{n+1}² + S_n². (2.7)

Hence we have

E[V_{n+1} − V_n | F_n] = E[(S_{n+1} − S_n)S_N | F_n] − E[S_{n+1}² | F_n] + E[S_n² | F_n]
  = E{E[(S_{n+1} − S_n)S_N | F_{n+1}] | F_n} − E[S_{n+1}² | F_n] + S_n²
  = E{(S_{n+1} − S_n)E[S_N | F_{n+1}] | F_n} − E[S_{n+1}² | F_n] + S_n²
  = E{(S_{n+1} − S_n)S_{n+1} | F_n} − E[S_{n+1}² | F_n] + S_n²
  = −S_nE[S_{n+1} | F_n] + S_n²
  = −S_n² + S_n² = 0.

Hence E[V_{n+1} | F_n] = E[V_n | F_n] and so V_n, 1 ≤ n ≤ N, is a near-martingale. □

Theorem 2.6. Suppose S_n, n = 1, 2, …, is a square integrable martingale with respect to a filtration {F_n; n ≥ 1}. For a fixed natural number N, let

V_n = S_n(S_N − S_n), n = 1, 2, ….

Then

(1) V_n, 1 ≤ n ≤ N, is a near-martingale,
(2) V_n, n ≥ N, is a supermartingale.

Proof. The first assertion follows from Theorem 2.5. To prove the second assertion, we use equation (2.7) to show that for n ≥ N,

E[V_{n+1} − V_n | F_n] = S_N E[S_{n+1} − S_n | F_n] − E[S_{n+1}² | F_n] + S_n²
  = −E[S_{n+1}² | F_n] + S_n² ≤ 0,

since S_n² is a submartingale. Thus E[V_{n+1} | F_n] ≤ E[V_n | F_n] for n ≥ N. But the sequence V_n, n ≥ N, is adapted to the filtration {F_n}. Therefore, we have

E[V_{n+1} | F_n] ≤ V_n, n ≥ N.

This shows that V_n, n ≥ N, is a supermartingale. □
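The same brute-force conditional expectations give an exact finite check of Theorems 2.5 and 2.6. The sketch below is illustrative: fair coin flips ξ_k = ±1 serve as the square integrable martingale S_n, and N = 3 with paths of length M = 5 are arbitrary sizes. The near-martingale equality holds for n < N, while for n ≥ N the sequence V_n is adapted and the supermartingale inequality E[V_{n+1} | F_n] ≤ V_n holds.

# Exact check of Theorems 2.5 and 2.6 with fair coin flips xi_k = +-1.
from itertools import product

N, M = 3, 5                          # fixed N of the theorem; M flips per path

def cond_exp(X, n):
    """E[X | F_n] as a map from the first n flips to its value."""
    return {w: sum(X(w + tail) for tail in product((-1, 1), repeat=M - n))
               / 2 ** (M - n)
            for w in product((-1, 1), repeat=n)}

def V(w, n):                         # V_n = S_n (S_N - S_n)
    S_n, S_N = sum(w[:n]), sum(w[:N])
    return S_n * (S_N - S_n)

for n in range(1, M):
    lhs = cond_exp(lambda w: V(w, n + 1), n)
    if n < N:                        # Theorem 2.5: near-martingale equality
        assert lhs == cond_exp(lambda w: V(w, n), n)
    else:                            # Theorem 2.6(2): V_n is adapted here and
        assert all(lhs[w] <= V(w, n) for w in lhs)   # E[V_{n+1}|F_n] <= V_n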

3. Doob's Decomposition Theorem

In this section we prove Doob's decomposition theorem for near-submartingales.

Theorem 3.1. Let X_n, n ≥ 1, be a near-submartingale with respect to a filtration {F_n}. Then there exists a unique decomposition

X_n = M_n + A_n, n ≥ 1, (3.1)

with M_n and A_n satisfying the following conditions:

(1) M_n, n ≥ 1, is a near-martingale.
(2) A_1 = 0.
(3) A_n is F_{n−1}-measurable for n ≥ 2.
(4) A_n is increasing almost surely.

Proof. • Existence of a decomposition

Define A_1 = 0 and M_1 = X_1. Then we have equation (3.1) for n = 1. To find A_2 and M_2 such that X_2 = M_2 + A_2 with the desired properties, we take the conditional expectation with respect to F_1:

E[X_2 | F_1] = E[M_2 | F_1] + E[A_2 | F_1]
  = E[M_1 | F_1] + A_2
  = E[X_1 | F_1] + A_2.

Therefore, we define

A_2 = E[X_2 | F_1] − E[X_1 | F_1], M_2 = X_2 − A_2.

Then we have equation (3.1) for n = 2. Observe that A_2 is F_1-measurable and A_1 ≤ A_2 almost surely since {X_n} is a near-submartingale.

Inductively, we repeat the above argument to define A_n and M_n for n ≥ 3 by

A_n = Σ_{k=2}^n (E[X_k | F_{k−1}] − E[X_{k−1} | F_{k−1}]),
M_n = X_n − A_n.

Then we have equation (3.1) for n ≥ 3. Notice that A_n is F_{n−1}-measurable and A_{n−1} ≤ A_n almost surely since {X_n} is a near-submartingale. Now, we need to show that M_n, n ≥ 1, is a near-martingale with respect to {F_n}. Note that for n ≥ 2, we have

M_n = X_n − Σ_{k=2}^n (E[X_k | F_{k−1}] − E[X_{k−1} | F_{k−1}]),

which yields the equality

M_n − M_{n−1} = X_n − X_{n−1} − E[X_n | F_{n−1}] + E[X_{n−1} | F_{n−1}].

Then we take the conditional expectation with respect to F_{n−1} to show that

E[M_n − M_{n−1} | F_{n−1}] = 0,

namely, E[M_n | F_{n−1}] = E[M_{n−1} | F_{n−1}]. Hence M_n, n ≥ 1, is a near-martingale with respect to {F_n}.

• Uniqueness of a decomposition

Suppose we have two such decompositions

X_n = M_n + A_n = N_n + B_n, n ≥ 1. (3.2)

Then we have

M_n − N_n = B_n − A_n, n ≥ 1. (3.3)

For n = 1, we have B_1 = A_1 = 0. Hence M_1 = N_1. For n ≥ 2, take the conditional expectation of equation (3.3) with respect to F_{n−1} to get

E[M_n − N_n | F_{n−1}] = E[B_n − A_n | F_{n−1}] = B_n − A_n, (3.4)

where in the last equality we have used the fact that A_n and B_n are F_{n−1}-measurable. On the other hand, use equation (3.3) for n − 1 and the fact that M_n and N_n are near-martingales to get

E[M_n − N_n | F_{n−1}] = E[M_{n−1} − N_{n−1} | F_{n−1}]
  = E[B_{n−1} − A_{n−1} | F_{n−1}]
  = B_{n−1} − A_{n−1}, (3.5)

where the last equality holds since B_{n−1} and A_{n−1} are F_{n−2}-measurable and so are F_{n−1}-measurable. Thus by equations (3.4) and (3.5),

B_n − A_n = B_{n−1} − A_{n−1}, n ≥ 2.

This equation together with A_1 = B_1 implies that A_n = B_n almost surely for all n ≥ 1. Then by equation (3.2) we have M_n = N_n almost surely for all n ≥ 1. Hence the decomposition is unique. □
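The existence part of the proof is constructive, so the decomposition can be computed directly. The following Python sketch is illustrative only: it assumes fair coin flips ξ_k = ±1 so that conditional expectations are exact finite averages, implements A_n = Σ_{k=2}^n (E[X_k | F_{k−1}] − E[X_{k−1} | F_{k−1}]), and checks it on the adapted submartingale X_n = S_n², for which the classical Doob decomposition gives A_n = n − 1 (one unit of variance per step).

# Constructive step of Theorem 3.1 for fair coin flips xi_k = +-1.
from itertools import product

N = 4                                 # illustrative horizon

def cond_exp(X, n, w):
    """E[X | F_n] evaluated on the prefix w[:n]."""
    tails = list(product((-1, 1), repeat=N - n))
    return sum(X(w[:n] + tail) for tail in tails) / len(tails)

def doob(X, w):
    """(M_1..M_N, A_1..A_N) on the path w, following the existence proof."""
    A = [0.0]                         # A_1 = 0
    for n in range(2, N + 1):
        A.append(A[-1] + cond_exp(lambda v: X(v, n), n - 1, w)
                       - cond_exp(lambda v: X(v, n - 1), n - 1, w))
    M = [X(w, n) - A[n - 1] for n in range(1, N + 1)]
    return M, A

X = lambda w, n: sum(w[:n]) ** 2      # adapted submartingale S_n^2
for w in product((-1, 1), repeat=N):
    M, A = doob(X, w)
    assert A == [float(n - 1) for n in range(1, N + 1)]  # A_n = n - 1

For an adapted submartingale this recovers the classical Doob decomposition; for a genuine near-submartingale the same formula applies, as the next example illustrates.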

Example 3.2. Let ξ_n, n ≥ 1, be a sequence of independent random variables with mean 0 and var(ξ_n) = σ_n². Take F_n = σ{ξ_k; 1 ≤ k ≤ n}. Define S_n = ξ_1 + ··· + ξ_n. For fixed N, consider the sequence

X_n = S_nS_N, 1 ≤ n ≤ N. (3.6)

First we show that the sequence X_n, 1 ≤ n ≤ N, is a near-submartingale. It is easy to see that

E[X_{n+1} | F_n] = E[S_{n+1}S_N | F_n]
  = E[(S_n + ξ_{n+1})(ξ_1 + ··· + ξ_N) | F_n]
  = E[(S_n + ξ_{n+1})² | F_n]
  = E[S_n² + 2S_nξ_{n+1} + ξ_{n+1}² | F_n]
  = S_n² + σ_{n+1}². (3.7)

On the other hand, we have

E[X_n | F_n] = E[S_nS_N | F_n] = S_nE[S_N | F_n] = S_n². (3.8)

By equations (3.7) and (3.8), we have E[X_{n+1} | F_n] ≥ E[X_n | F_n] almost surely. Hence X_n, 1 ≤ n ≤ N, is a near-submartingale. To find the Doob decomposition of X_n, 1 ≤ n ≤ N, recall from Example 2.4 that the sequence

Z_n ≡ S_nS_N − Σ_{k=1}^n σ_k², 1 ≤ n ≤ N,

is a near-martingale. This motivates us to define M_n and A_n by

M_n = S_1S_N if n = 1, and M_n = S_nS_N − Σ_{k=2}^n σ_k² if n ≥ 2,

and

A_n = 0 if n = 1, and A_n = Σ_{k=2}^n σ_k² if n ≥ 2.

Note that M_n = Z_n + σ_1². Hence M_n is a near-martingale. Then we can easily see that the Doob decomposition of S_nS_N is given by

S_nS_N = M_n + A_n, 1 ≤ n ≤ N.
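As a concrete check, the construction from the proof of Theorem 3.1 can be run on this example. The sketch below is illustrative (fair coins, so σ_k² = 1): the computed decomposition has A_n = n − 1 and hence M_n = S_nS_N − (n − 1), matching the formulas above.

# Doob decomposition of X_n = S_n S_N for fair coin flips (sigma_k^2 = 1).
from itertools import product

N = 4

def cond_exp(X, n, w):
    tails = list(product((-1, 1), repeat=N - n))
    return sum(X(w[:n] + tail) for tail in tails) / len(tails)

X = lambda w, n: sum(w[:n]) * sum(w)          # X_n = S_n S_N
for w in product((-1, 1), repeat=N):
    A = [0.0]                                 # A_1 = 0
    for n in range(2, N + 1):
        A.append(A[-1] + cond_exp(lambda v: X(v, n), n - 1, w)
                       - cond_exp(lambda v: X(v, n - 1), n - 1, w))
    assert A == [float(n - 1) for n in range(1, N + 1)]  # A_n = sum of sigma_k^2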

We need to point out a difference between the martingale case and the near-martingale case. Suppose X_n is a square integrable martingale. It is well known that X_n² is a submartingale. However, for a square integrable near-martingale X_n, it is not true in general that X_n² is a near-submartingale. For instance, the sequence X_n = S_N − S_n, 1 ≤ n ≤ N, in Example 2.3 is a near-martingale. However, it is easy to check that X_n², 1 ≤ n ≤ N, is not a near-submartingale. In fact, it is a near-supermartingale.

4. Instantly Independent Sequences

Note that martingales must be adapted to an associated filtration. In [11], we introduced the concept of instantly independent stochastic processes, which play the counterpart role of adapted stochastic processes. Thus for the discrete case, we have instantly independent sequences of random variables.

Definition 4.1. A sequence {Φ_n} of random variables is said to be instantly independent with respect to a filtration {F_n} if Φ_n and F_n are independent for each n.

We have the following two basic properties of instantly independent sequences of random variables.

Theorem 4.2. If X_n is a near-martingale, then EX_n is a constant (independent of n). Conversely, if EX_n is a constant and X_n is instantly independent, then X_n is a near-martingale.

Proof. Suppose X_n is a near-martingale. Then we have

E[X_{n+1} | F_n] = E[X_n | F_n], ∀ n ≥ 1.

Upon taking expectation, we immediately get EX_{n+1} = EX_n for all n ≥ 1. Hence EX_n is a constant. Conversely, suppose EX_n is a constant and X_n is instantly independent with respect to a filtration {F_n}. Then

E[X_{n+1} | F_n] = E{E[X_{n+1} | F_{n+1}] | F_n}
  = E{EX_{n+1} | F_n}
  = EX_{n+1} = c,

where c is a constant. On the other hand, since X_n and F_n are independent, we have

E[X_n | F_n] = EX_n = c.

Hence E[X_{n+1} | F_n] = E[X_n | F_n] and so X_n, n ≥ 1, is a near-martingale. □

Theorem 4.3. Suppose X_n is a square integrable martingale and Φ_n is a square integrable sequence of instantly independent random variables with EΦ_n being a constant. Then the product X_nΦ_n is a near-martingale.

Proof. Using the assumptions we can easily derive

E[X_{n+1}Φ_{n+1} | F_n] = E{E[X_{n+1}Φ_{n+1} | F_{n+1}] | F_n}
  = E{X_{n+1}E[Φ_{n+1} | F_{n+1}] | F_n}
  = E{X_{n+1}EΦ_{n+1} | F_n}
  = EΦ_{n+1} · E[X_{n+1} | F_n]
  = cX_n, (4.1)

where c = EΦ_n is a constant. On the other hand, we have

E[X_nΦ_n | F_n] = X_nE[Φ_n | F_n] = X_nEΦ_n = cX_n. (4.2)

It follows from equations (4.1) and (4.2) that E[X_{n+1}Φ_{n+1} | F_n] = E[X_nΦ_n | F_n] almost surely. Hence X_nΦ_n is a near-martingale. □

Example 4.4. Let ξ_1, ξ_2, …, ξ_N be a sequence of independent random variables with mean 0 and finite variances. Let F_n = σ{ξ_k; 1 ≤ k ≤ n}. Put

S_n = ξ_1 + ξ_2 + ··· + ξ_n.

Then S_n is a martingale with respect to the filtration {F_n}. Let θ be a real-valued function on R. For fixed N, assume that the random variables

θ(S_N − S_n), 1 ≤ n ≤ N,

are square integrable. Then the following sequence

Φ_n = θ(S_N − S_n) − Eθ(S_N − S_n), 1 ≤ n ≤ N,

is instantly independent with respect to the filtration {F_n} with mean 0. Hence by Theorem 4.3 the sequence

Y_n = S_n(θ(S_N − S_n) − Eθ(S_N − S_n)), 1 ≤ n ≤ N,

is a near-martingale.
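This example, too, can be verified exactly by enumeration. The sketch below is illustrative: it assumes fair coin flips ξ_k = ±1 and the particular choice θ(x) = x², although any θ with square integrable θ(S_N − S_n) would do.

# Exact check of Example 4.4 with fair coin flips and theta(x) = x^2.
from itertools import product

N = 4
paths = list(product((-1, 1), repeat=N))
theta = lambda x: x * x                # illustrative choice of theta

def cond_exp(X, n):
    """E[X | F_n] as a map from the first n flips to its value."""
    return {w: sum(X(w + tail) for tail in product((-1, 1), repeat=N - n))
               / 2 ** (N - n)
            for w in product((-1, 1), repeat=n)}

def Y(w, n):                  # Y_n = S_n (theta(S_N - S_n) - E theta(S_N - S_n))
    mean = sum(theta(sum(v[n:])) for v in paths) / len(paths)
    return sum(w[:n]) * (theta(sum(w) - sum(w[:n])) - mean)

for n in range(1, N):                  # near-martingale equality of Theorem 4.3
    assert cond_exp(lambda w: Y(w, n + 1), n) == cond_exp(lambda w: Y(w, n), n)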

Acknowledgment. The mathematical concepts and the results in this paper were obtained in many discussions with K. Saitô during Kuo's visits to Meijo University since 2011. Kuo would like to give his deepest appreciation to Professor Saitô for the invitations and for the warm hospitality.

References

1. Ayed, W. and Kuo, H.-H.: An extension of the Itô integral, Communications on Stochastic Analysis 2, no. 3 (2008) 323–333.
2. Ayed, W. and Kuo, H.-H.: An extension of the Itô integral: toward a general theory of stochastic integration, Theory of Stochastic Processes 16(32), no. 1 (2010) 1–11.
3. Buckdahn, R.: Anticipative Girsanov transformations, Probab. Th. Rel. Fields 89 (1991) 211–238.
4. Dorogovtsev, A. A.: Itô–Volterra equations with an anticipating right-hand side in the absence of moments, Infinite-dimensional Stochastic Analysis (Russian) 41–50, Akad. Nauk Ukrain. SSR, Inst. Mat., Kiev, 1990.
5. Hitsuda, M.: Formula for Brownian partial derivatives, Second Japan-USSR Symp. Probab. Th. 2 (1972) 111–114.
6. Itô, K.: Extension of stochastic integrals, Proc. Intern. Symp. on Stochastic Differential Equations, K. Itô (ed.) (1978) 95–109, Kinokuniya.
7. Kuo, H.-H.: White Noise Distribution Theory, CRC Press, 1996.
8. Kuo, H.-H.: Introduction to Stochastic Integration, Universitext (UTX), Springer, 2006.
9. Kuo, H.-H.: The Itô calculus and white noise theory: a brief survey toward general stochastic integration, Communications on Stochastic Analysis 8, no. 1 (2014) 111–139.
10. Kuo, H.-H. and Potthoff, J.: Anticipating stochastic integrals and stochastic differential equations, in: White Noise Analysis: Math. and Appl., T. Hida et al. (eds.), World Scientific (1990) 256–273.
11. Kuo, H.-H., Sae-Tang, A., and Szozda, B.: A stochastic integral for adapted and instantly independent stochastic processes, in: Advances in Statistics, Probability and Actuarial Science, Vol. I, Stochastic Processes, Finance and Control: A Festschrift in Honour of Robert J. Elliott (eds.: Cohen, S., Madan, D., Siu, T. and Yang, H.), World Scientific, 2012, 53–71.
12. León, J. A. and Protter, P.: Some formulas for anticipative Girsanov transformations, in: Chaos Expansions, Multiple Wiener-Itô Integrals and Their Applications, C. Houdré and V. Pérez-Abreu (eds.), CRC Press, 1994.
13. Nualart, D. and Pardoux, E.: Stochastic calculus with anticipating integrands, Probab. Th. Rel. Fields 78 (1988) 535–581.
14. Pardoux, E. and Protter, P.: A two-sided stochastic integral and its calculus, Probab. Th. Rel. Fields 76 (1987) 15–49.
15. Russo, F. and Vallois, P.: Anticipative Stratonovich equation via Zvonkin method, Stochastic Processes and Related Topics (Siegmundsberg, 1994), 129–138, Stochastics Monogr., 10, Gordon and Breach, Yverdon, 1996.
16. Skorokhod, A. V.: On a generalization of a stochastic integral, Theory Probab. Appl. 20 (1975) 219–233.

Hui-Hsiung Kuo: Department of Mathematics, Louisiana State University, Baton Rouge, LA 70803, USA
E-mail address: kuo@math.lsu.edu

Kimiaki Saitô: Department of Mathematics, Meijo University, Tenpaku, Nagoya 468-8502, Japan
E-mail address: [email protected]