Interpolation for Partly Hidden Diffusion Processes*
Interpolation for partly hidden diffusion processes*

Changsun Choi, Dougu Nam**

Division of Applied Mathematics, KAIST, Daejeon 305-701, Republic of Korea

Abstract

Let $X_t$ be an $n$-dimensional diffusion process and $S(t)$ a smooth set-valued function. Suppose $X_t$ is invisible when $X_t \in S(t)$, but we can see the process exactly otherwise. Let $X_{t_0} \in S(t_0)$, and suppose we observe the process from the beginning until the signal reappears out of the obstacle after $t_0$. With this information, we evaluate the estimators for the functionals of $X_t$ on a time interval containing $t_0$ where the signal is hidden. We solve the three related PDEs in general cases. We give a generalized last exit decomposition for $n$-dimensional Brownian motion to evaluate its estimators. An alternative Monte Carlo method is also proposed for Brownian motion. We illustrate several examples and compare the solutions obtained by the closed-form result, the finite difference method, and Monte Carlo simulations.

Key words: interpolation, diffusion process, excursions, backward boundary value problem, last exit decomposition, Monte Carlo method

1 Introduction

Usually, the interpolation problem for a Markov process means evaluating the probability distribution of the process between the discretely observed times. One origin of the problem is Schrödinger's Gedankenexperiment, where we are interested in the probability distribution of a diffusive particle in a given force field when the initial and final time density functions are given. We can obtain the desired density function at each time in a given time interval by solving the forward and backward Kolmogorov equations. The background and methods for the problem are well reviewed in [17]. The problem also attracts interest in signal processing. It is useful in the restoration of lost

* This work is supported by BK21 project.
** Corresponding author. Email address: [email protected]

Preprint submitted to Elsevier Science, 20 July 2002

signal samples [8]. In this paper, we consider a special type of interpolation problem for which similar applications are expected.

Let $X_t$ be the signal process represented by some system of stochastic differential equations. Suppose $X_t$ is invisible when $X_t \in S(t)$ for some set-valued function $S(t)$, $0 \le t \le T < \infty$, and we can observe the process exactly otherwise, i.e. the observation process is
$$Y_t = X_t I_{X_t \notin S(t)}, \quad 0 \le t \le T.$$

In [10], a 1-dimensional signal process and a fixed obstacle $S(t) = (0, \infty)$ are considered, and the optimal estimator $E[f(X_1) I_{X_1 > 0} \mid Y_{[0,1]}]$ was derived for bounded Borel functions $f$. The key identity is
$$E[f(X_1) I_{X_1 > 0} \mid Y_{[0,1]}] = E[f(X_1) I_{X_1 > 0} \mid \tau] \qquad (1)$$
where $1 - \tau$ is the last observed time before 1, before the signal enters the obstacle. This is not a stopping time, but if we consider the reverse-time process $\chi_t$ of $X_t$, then $\tau$ is a stopping time and we can prove (1) by applying the strong Markov property to $\chi_t$. This simplified estimator is in turn evaluated by the formula
$$E[f(X_1) I_{X_1 > 0} \mid \tau] = \int_0^\infty f(x)\, q(x \mid \tau)\, dx \qquad (2)$$
where $q(x \mid \tau) = \dfrac{p_1(x)\, u_x(\tau)}{\int_0^\infty p_1(z)\, u_z(\tau)\, dz}$, $p_1(x)$ is the density function of $X_1$, and $u_x(t)$ is the density of the first hitting time to 0 at time $t$ of $\chi_t$ starting at $x$.

In [11], this problem was considered in a more general setting, where $X_t$ is an $n$-dimensional diffusion process and the obstacle $S(t)$ is a time-varying set-valued function. In this setting, we must also consider the first hitting state $\chi_\tau$ as well as the time $\tau$, and the corresponding identity becomes
$$E[f(X_1) I_{X_1 \in S(1)} \mid Y_{[0,1]}] = E[f(X_1) I_{X_1 \in S(1)} \mid \tau, \chi_\tau] \qquad (3)$$
and $q(x \mid t)$ is replaced by
$$q(x \mid t, y) = \frac{p_1(x)\, u_x(t, y)}{\int p_1(z)\, u_z(t, y)\, dz}, \qquad (4)$$
where $u_x(t, y)$ is the first hitting density on the surface of the images of $S(t)$.
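To make formula (2) concrete, here is a small numerical sketch for the special case of [10], under the simplifying assumption that the reverse-time process behaves like a standard Brownian motion started at $x > 0$, so that $u_x(\tau)$ is the classical reflection-principle first-passage density $x\,e^{-x^2/2\tau}/\sqrt{2\pi\tau^3}$ and $p_1$ is the standard normal density. The function names are ours, for illustration only.

```python
import numpy as np

def p1(x):
    # standard normal density: the density of X_1 for Brownian motion from 0
    return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

def u(x, tau):
    # reflection-principle first-passage density to 0 at time tau, start x > 0
    return x * np.exp(-x ** 2 / (2 * tau)) / np.sqrt(2 * np.pi * tau ** 3)

def q(x, tau, grid, dx):
    # conditional density q(x | tau) of formula (2), normalized numerically
    norm = np.sum(p1(grid) * u(grid, tau)) * dx
    return p1(x) * u(x, tau) / norm

dx = 1e-3
grid = np.arange(dx, 10.0, dx)   # truncation of (0, infinity)
tau = 0.3
qx = q(grid, tau, grid, dx)
mass = np.sum(qx) * dx                # q(. | tau) integrates to 1
mean_est = np.sum(grid * qx) * dx     # E[X_1 I_{X_1 > 0} | tau] with f(x) = x
```

Replacing `f(x) = x` by any bounded Borel `f` in the last line gives the corresponding estimator of (2) by the same Riemann sum.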
This problem can be thought of as a kind of filtering problem (not a prediction problem), because we observe the hidden signal up to time 1 (to be estimated), and the fact that the process is still hidden after time $1 - \tau$ itself gives the information that the signal is in the obstacle. In this paper, we consider related extended problems. We continue observing the hidden signal process until it reappears out of the obstacle. With this additional information we can estimate the hidden state with more accuracy. An interesting fact is that even if the signal does not reappear within a finite fixed time $T$, this information still improves the estimator. We also evaluate the extrapolator and the backward filter. The interpolator developed in Section 3.4, in particular, can be applied to reconstructing the lost parts of random signals in a probabilistically optimal way.

In Section 3, we prove the corresponding Bernstein-type [17] identities, similar to (1) and (3), and derive the optimal estimator. Thanks to these identities, the problem reduces to computing the density functions of general excursions (or meanders) of the solution of a stochastic differential equation on given excursion intervals. It is well known that the transition density function for a stochastic differential equation is evaluated through a parabolic PDE. For the interpolator, we should solve a Kolmogorov equation and two backward boundary value problems. We explain a numerical algorithm based on the finite difference method for general 1-dimensional diffusion processes. When the signal is a simple Brownian motion, we can use the so-called last exit decomposition principle and derive the estimators more conveniently, which will be confirmed by a numerical computation example. Alternatively, for Brownian motion, we can use a Monte Carlo method for some boundaries by numerical simulation of various conditional Brownian motions. We give several examples and compare the results of each method.
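To give a flavor of the Monte Carlo approach for Brownian motion, the following sketch (our own illustration, not the paper's algorithm) simulates Brownian paths on $[0,1]$, records the elapsed time $\tau$ since the last zero before time 1, and estimates $P(X_1 > 0 \mid \tau \approx \tau_0)$ by averaging over paths whose $\tau$ falls in a small bin. By the sign symmetry of the excursion straddling $[1-\tau, 1]$, this conditional probability should be close to $1/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20000, 400
dt = 1.0 / n_steps
t = np.linspace(0.0, 1.0, n_steps + 1)

# simulate standard Brownian motion paths on [0, 1]
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.concatenate(
    [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)

# last zero before time 1, detected as the last sign change on the grid
sign_change = paths[:, :-1] * paths[:, 1:] <= 0
# index of the last sign change per path (paths start at 0, so one exists)
last_idx = n_steps - 1 - np.argmax(sign_change[:, ::-1], axis=1)
tau = 1.0 - t[last_idx]          # elapsed time since the last zero

# condition on tau falling in a small bin around tau0
tau0, width = 0.3, 0.05
sel = np.abs(tau - tau0) < width
p_pos = np.mean(paths[sel, -1] > 0)  # estimate of P(X_1 > 0 | tau ~ tau0)
```

The same binning device extends to estimating $E[f(X_1) \mid \tau \approx \tau_0]$ for other functionals $f$, at the cost of discarding the paths outside the bin.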
2 Reverse Time Diffusion Process

Consider the following system of stochastic differential equations
$$dX_t = a(t, X_t)\, dt + b(t, X_t)\, dW_t, \quad X_0 = 0, \quad 0 \le t < T, \qquad (5)$$
where $a : [0,T] \times \mathbb{R}^n \to \mathbb{R}^n$ and $b : [0,T] \times \mathbb{R}^n \to \mathbb{R}^{n \times m}$ are measurable functions and $W_t$ is an $m$-dimensional Brownian motion. According to [10] and [11], to obtain the desired estimators we need the reverse-time diffusion process $\chi_t$ of $X_t$ on $[0, T)$. In [3], the conditions for the existence and uniqueness of the reverse-time diffusion process are stated; they are abbreviated to

Assumption 1 the functions $a_i(t,x)$ and $b_{ij}(t,x)$, $1 \le i \le n$, $1 \le j \le m$, are bounded and uniformly Lipschitz continuous in $(t,x)$ on $[0,T] \times \mathbb{R}^n$;

Assumption 2 the functions $(b\, b^\top)_{ij}$ are uniformly Hölder continuous in $x$;

Assumption 3 there exists $c > 0$ such that
$$\sum_{i,j=1}^n (b\, b^\top)_{ij}\, y_i y_j \ge c\, |y|^2 \quad \text{for all } y \in \mathbb{R}^n,\ (t,x) \in [0,T] \times \mathbb{R}^n;$$

Assumption 4 the functions $a_x$, $b_x$ and $b_{xx}$ satisfy Assumptions 1 and 2.

We say that $X_t$ in (5) is time reversible iff the coefficients satisfy the above four conditions.

Theorem 1 [3] Assume Assumptions 1-4 hold. Let $p(t,x)$ be the solution of the Kolmogorov equation for $X_t$. Then $X_t$ can be realized as
$$X_t = X_T + \int_T^t \bar a(s, X_s)\, d(-s) + \int_T^t b(s, X_s)\, d\bar W_s, \quad 0 \le t < T,$$
where $\bar W_s$ is a Brownian motion independent of $X_T$, and
$$\bar a(s, X_s) = -a(s, X_s) + \frac{\displaystyle \sum_{i=1}^n \frac{\partial}{\partial x_i} \Big( \sum_{k=1}^m b_{ik}(s,x)\, b_{\cdot k}(s,x)\, p(s,x) \Big) \Big|_{x = X_s}}{p(s, X_s)}. \qquad (6)$$

3 Derivation of The Optimal Estimators

3.1 Notations & Definitions

We will use the following notations throughout this paper.

- $P_{cc}(\mathbb{R}^n)$: the space of closed and connected subsets of $\mathbb{R}^n$.
- $S : [0,T] \to P_{cc}(\mathbb{R}^n)$: a set-valued function.
- $S_{(a,b)} := \bigcup_{a<t<b} S(t)$, $\partial S_{(a,b)} := \bigcup_{a<t<b} \partial S(t)$, $\partial S := \partial S_{(0,T)}$.

We say $S$ is smooth iff for each $x \in \partial S$ there exists a unique tangent plane at $x$. We only consider smooth set-valued functions.

- $X_t$: the time reversible signal process satisfying (5).
- $Y_t := X_t I_{X_t \notin S(t)}$, $t \in [0,T]$; $Y_{[a,b]} := \{Y_t\}_{t \in [a,b]}$.
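For a concrete instance of Theorem 1, take $X_t$ a standard 1-dimensional Brownian motion: $a \equiv 0$, $b \equiv 1$, and $p(s,x)$ is the $N(0,s)$ density, so (6) gives $\bar a(s,x) = \partial_x p(s,x)/p(s,x) = -x/s$, the pull back toward $X_0 = 0$ that the reversed path must feel. A minimal numerical check of this closed form, using a central finite difference, might look as follows (our own sketch).

```python
import math

def p(s, x):
    # N(0, s) transition density of standard Brownian motion started at 0
    return math.exp(-x * x / (2.0 * s)) / math.sqrt(2.0 * math.pi * s)

def a_bar(s, x, h=1e-5):
    # formula (6) specialized to a = 0, b = 1: a_bar = (d/dx p) / p
    dp_dx = (p(s, x + h) - p(s, x - h)) / (2.0 * h)
    return dp_dx / p(s, x)

s, x = 0.7, 1.3
numeric = a_bar(s, x)
exact = -x / s   # closed form for standard Brownian motion
```

For a general diffusion one would replace the Gaussian `p` by a numerical solution of the Kolmogorov equation and differentiate it on the same grid.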
- $X^{t_0}_t := X_{t_0 + t}$, $\chi^{t_0}_t := X_{t_0 - t}$, $t \in [0,T]$: the shifted and reverse-time processes.
- $\tilde X^{t_0}_t := X^{t_0}_t I_{X^{t_0}_t \notin S(t_0 + t)}$, $\tilde\chi^{t_0}_t := \chi^{t_0}_t I_{\chi^{t_0}_t \notin S(t_0 - t)}$.

Note that $X^{t_0}_t$ and $\chi^{t_0}_t$ have the same initial distribution $\xi$, $\xi \overset{d}{\sim} X_{t_0}$. Let $\mathcal{F}_t = \sigma\{W_t : 0 \le t < T - t_0\}$ and $\mathcal{G}_t = \sigma\{\bar W_t : 0 \le t < t_0\}$, with $\xi$ independent of both, and consider the filtrations $\sigma(\xi, \mathcal{F}_t)$, $0 \le t < T - t_0$, and $\sigma(\xi, \mathcal{G}_t)$, $0 \le t < t_0$. Let $X_{t_0} \in S(t_0)$ and define two random times

- $\tau_{t_0} := \big(t_0 - \sup\{s \mid s < t_0,\ X_s \notin S(s)\}\big) \vee 0$,
- $\sigma_{t_0} := \inf\{s \mid s > t_0,\ X_s \notin S(s)\} \wedge T - t_0$.

We omit the subscript $t_0$ for convenience. $\tau$ and $\sigma$ can be rephrased w.r.t.