Interpolation for partly hidden diffusion processes*

Changsun Choi, Dougu Nam**

Division of Applied Mathematics, KAIST, Daejeon 305-701, Republic of Korea

* This work is supported by the BK21 project.
** Corresponding author. Email address: [email protected]

Abstract

Let $X_t$ be an $n$-dimensional diffusion process and $S(t)$ a smooth set-valued function. Suppose $X_t$ is invisible when $X_t \in S(t)$, but we can see the process exactly otherwise. Let $X_{t_0} \in S(t_0)$, and suppose we observe the process from the beginning until the signal reappears out of the obstacle after $t_0$. With this information, we evaluate estimators for functionals of $X_t$ on a time interval containing $t_0$ in which the signal is hidden. We solve the three related PDEs in general cases. We give a generalized last exit decomposition for $n$-dimensional Brownian motion to evaluate its estimators. An alternative Monte Carlo method is also proposed for Brownian motion. We illustrate several examples and compare the solutions given by the closed-form result, the finite difference method, and Monte Carlo simulations.

Key words: interpolation, diffusion process, excursions, backward boundary value problem, last exit decomposition, Monte Carlo method

1 Introduction

Usually, the interpolation problem for a Markov process means evaluating the probability distribution of the process between discretely observed times. One origin of the problem is Schrödinger's Gedankenexperiment, where we are interested in the probability distribution of a diffusive particle in a given force field when the initial and final time density functions are given. We can obtain the desired density function at each time in a given time interval by solving the forward and backward Kolmogorov equations. The background and methods for the problem are well reviewed in [17]. The problem also attracts interest in signal processing, where it is useful in the restoration of lost signal samples [8]. In this paper, we consider a special type of interpolation problem for which similar applications are expected.

Let $X_t$ be the signal process represented by some system of stochastic differential equations. Suppose $X_t$ is invisible when $X_t \in S(t)$ for some set-valued function $S(t)$, $0 \le t \le T < \infty$, and we can observe the process exactly otherwise, i.e. the observation process is

$$Y_t = X_t\, I_{X_t \notin S(t)}, \quad 0 \le t \le T.$$

In [10], a 1-dimensional signal process and a fixed obstacle $S(t) = (0, \infty)$ were considered, and the optimal estimator $E[f(X_1) I_{X_1 > 0} \mid Y_{[0,1]}]$ was derived for a bounded Borel function $f$.
The key identity is

$$E[f(X_1)\, I_{X_1>0} \mid Y_{[0,1]}] = E[f(X_1)\, I_{X_1>0} \mid \tau] \tag{1}$$

where $1 - \tau$ is the last time before 1 at which the signal was observed, just before it entered the obstacle. $\tau$ is not a stopping time, but if we consider the reverse time process $\chi_t$ of $X_t$, then $\tau$ is a stopping time and we can prove (1) by applying the strong Markov property to $\chi_t$. This simplified estimator is in turn evaluated by the formula

$$E[f(X_1)\, I_{X_1>0} \mid \tau] = \int_0^\infty f(x)\, q(x \mid \tau)\, dx \tag{2}$$

where

$$q(x \mid \tau) = \frac{p_1(x)\, u_x(\tau)}{\int_0^\infty p_1(z)\, u_z(\tau)\, dz},$$

$p_1(x)$ is the density function of $X_1$, and $u_x(t)$ is the density of the first hitting time to 0 at time $t$ of $\chi_t$ started at $x$.

In [11], this problem was considered in a more general setting, where $X_t$ is an $n$-dimensional diffusion process and the obstacle $S(t)$ is a time-varying set-valued function. In this setting we must consider the first hitting state $\chi_\tau$ as well as the time $\tau$, and the corresponding identity becomes

$$E[f(X_1)\, I_{X_1 \in S(1)} \mid Y_{[0,1]}] = E[f(X_1)\, I_{X_1 \in S(1)} \mid \tau, \chi_\tau] \tag{3}$$

and $q(x \mid t)$ is replaced by

$$q(x \mid t, y) = \frac{p_1(x)\, u_x(t,y)}{\int_0^\infty p_1(z)\, u_z(t,y)\, dz}, \tag{4}$$

where $u_x(t,y)$ is the first hitting density on the surface of the images of $S(t)$. This problem can be regarded as a kind of filtering problem (not a prediction), because we observe the hidden signal up to time 1, the time to be estimated, and the hidden process after time $1-\tau$ still carries the information that the signal is in the obstacle. In this paper, we consider related extended problems. We continue observing the hidden signal process until it reappears out of the obstacle. With this additional information we can estimate the hidden state more accurately. An interesting fact is that even if the signal does not reappear within a finite fixed time $T$, this information still improves the estimator. We also evaluate the extrapolator and the backward filter.
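To make formula (2) concrete, here is a minimal numerical sketch in the setting of [10]: a standard Brownian motion signal started at 0 with the fixed obstacle $S(t) = (0, \infty)$. The plug-in densities `p1` and `u` are illustrative assumptions, not the paper's exact densities: `u` uses the classical first-passage density of Brownian motion from level $x$ to 0 as a stand-in for the hitting density of the reverse time process $\chi_t$, and `f` and `tau` are arbitrary test inputs.

```python
import numpy as np

def estimator(f, p1, u, tau, x):
    """Quadrature for (2): E[f(X_1) I_{X_1>0} | tau], where
    q(x|tau) = p1(x) u_x(tau) / int_0^inf p1(z) u_z(tau) dz."""
    w = p1(x) * u(x, tau)                 # unnormalized q(x | tau) on the grid
    return np.sum(f(x) * w) / np.sum(w)   # uniform grid spacing cancels in the ratio

# Illustrative plug-ins (assumptions):
p1 = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # X_1 ~ N(0, 1)
u = lambda x, t: x * np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t**3)  # BM first-passage density

x = np.linspace(1e-6, 8.0, 4000)          # truncation of (0, infinity)
print(estimator(lambda x: x, p1, u, 0.3, x))   # e.g. E[X_1 I_{X_1>0} | tau = 0.3]
```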
In particular, the interpolator developed in Section 3.4 can be applied to reconstructing the lost parts of random signals in a probabilistically optimal way.

In Section 3, we prove the corresponding Bernstein-type [17] identities, similar to (1) and (3), and derive the optimal estimator. Owing to these identities, the problem reduces to computing the density functions of general excursions (or meanders) of the solution of a stochastic differential equation on given excursion intervals. It is well known that the transition density function for a stochastic differential equation is evaluated through a parabolic PDE. For the interpolator, we must solve one Kolmogorov equation and two backward boundary value problems. We explain a numerical algorithm based on the finite difference method for general 1-dimensional diffusion processes. When the signal is a simple Brownian motion, we can use the so-called last exit decomposition principle to derive the estimators more conveniently, which is confirmed by a numerical example. Alternatively, for Brownian motion and certain boundaries, we can use a Monte Carlo method based on numerical simulation of various conditional Brownian motions. We give several examples and compare the results of the respective methods.

2 Reverse Time Diffusion Process

Consider the following system of stochastic differential equations:

$$dX_t = a(t, X_t)\, dt + b(t, X_t)\, dW_t, \quad X_0 = 0, \quad 0 \le t < T \tag{5}$$

where $a : [0,T] \times \mathbb{R}^n \to \mathbb{R}^n$ and $b : [0,T] \times \mathbb{R}^n \to \mathbb{R}^{n \times m}$ are measurable functions and $W_t$ is an $m$-dimensional Brownian motion. According to [10] and [11], to obtain the desired estimators we need the reverse time diffusion process $\chi_t$ of $X_t$ on $[0,T)$. In [3], the conditions for the existence and uniqueness of the reverse time diffusion process are stated; they can be abbreviated as follows.

Assumption 1 The functions $a_i(t,x)$ and $b_{ij}(t,x)$, $1 \le i \le n$, $1 \le j \le m$, are bounded and uniformly Lipschitz continuous in $(t,x)$ on $[0,T] \times \mathbb{R}^n$;

Assumption 2 The functions $(b\,b^\top)_{ij}$ are uniformly Hölder continuous in $x$;

Assumption 3 There exists $c > 0$ such that

$$\sum_{i,j=1}^{n} (b\,b^\top)_{ij}\, y_i\, y_j \ge c\,|y|^2 \quad \text{for all } y \in \mathbb{R}^n,\ (t,x) \in [0,T] \times \mathbb{R}^n;$$

Assumption 4 The functions $a_x$, $b_x$ and $b_{xx}$ satisfy Assumptions 1 and 2.

We say that $X_t$ in (5) is time reversible iff the coefficients satisfy the above four conditions.

Theorem 1 ([3]) Assume Assumptions 1–4 hold.
Let $p(t,x)$ be the solution of the Kolmogorov equation for $X_t$. Then $X_t$ can be realized as

$$X_t = X_T + \int_T^t \bar a(s, X_s)\, d(-s) + \int_T^t b(s, X_s)\, d\bar W_s, \quad 0 \le t < T,$$

where $\bar W_s$ is a Brownian motion independent of $X_T$, and

$$\bar a(s, X_s) = -a(s, X_s) + \frac{\displaystyle \sum_{i=1}^{n} \frac{\partial}{\partial x_i} \left( \sum_{k=1}^{m} b_{ik}(s,x)\, b_{\cdot k}(s,x)\, p(s,x) \right) \Bigg|_{x = X_s}}{p(s, X_s)}. \tag{6}$$
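As an illustrative check of (6), not taken from [3]: for a standard 1-dimensional Brownian motion we have $a \equiv 0$ and $b \equiv 1$, and, with $X_0 = 0$ as in (5), the Kolmogorov equation gives the Gaussian density $p(s,x) = (2\pi s)^{-1/2} e^{-x^2/(2s)}$. Formula (6) then collapses to

$$\bar a(s, x) = \frac{\partial_x p(s,x)}{p(s,x)} = \partial_x \log p(s,x) = -\frac{x}{s},$$

the familiar bridge-like drift that pins the time-reversed Brownian motion back to its starting point $X_0 = 0$ as $s \downarrow 0$.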
3 Derivation of the Optimal Estimators

3.1 Notations & Definitions

We use the following notation throughout this paper.

• $P_{cc}(\mathbb{R}^n)$ : the space of closed and connected subsets of $\mathbb{R}^n$.
• $S : [0,T] \to P_{cc}(\mathbb{R}^n)$ : a set-valued function.
• $S_{(a,b)} := \bigcup_{a<t<b} S(t)$, $\partial S_{(a,b)} := \bigcup_{a<t<b} \partial S(t)$, $\partial S := \partial S_{(0,T)}$.

We say $S$ is smooth iff for each $x \in \partial S$ there exists a unique tangent plane at $x$. We consider only smooth set-valued functions.

• $X_t$ : the time reversible signal process satisfying (5).
• $Y_t := X_t\, I_{X_t \notin S(t)}$, $t \in [0,T]$; $Y_{[a,b]} := \{Y_t\}_{t \in [a,b]}$.
• $X_t^{t_0} := X_{t+t_0}$; $\chi_t^{t_0} := X_{t_0-t}\, I_{t \in [0,T]}$, the reverse time process.
• $\tilde X_t^{t_0} := X_t^{t_0}\, I_{X_t^{t_0} \notin S(t_0+t)}$; $\tilde \chi_t^{t_0} := \chi_t^{t_0}\, I_{\chi_t^{t_0} \notin S(t_0-t)}$.

Note that $X_t^{t_0}$ and $\chi_t^{t_0}$ have the same initial distribution $\xi \overset{d}{\sim} X_{t_0}$.

Let $\mathcal{F}_t = \sigma\{W_t : 0 \le t < T - t_0\}$, $\mathcal{G}_t = \sigma\{\bar W_t : 0 \le t < t_0\}$ and $\xi$ be all independent, and consider the filtrations $\sigma(\xi, \mathcal{F}_t)$, $0 \le t < T - t_0$, and $\sigma(\xi, \mathcal{G}_t)$, $0 \le t < t_0$. Let $X_{t_0} \in S(t_0)$ and define two random times

• $\tau_{t_0} := t_0 - \sup\{s \mid s < t_0,\ X_s \notin S(s)\} \vee 0$,
• $\sigma_{t_0} := \inf\{s \mid s > t_0,\ X_s \notin S(s)\} \wedge T - t_0$.
We omit the subscript $t_0$ for convenience. $\tau$ and $\sigma$ can be rephrased in terms of $X_t^{t_0}$ and $\chi_t^{t_0}$ as follows:

$\tau = \tau(\xi)$ : the first hitting time of $\chi_t^{t_0}$ to $\partial S(t_0 - t)$ when $\xi \in S(t_0)$;
$\sigma = \sigma(\xi)$ : the first hitting time of $X_t^{t_0}$ to $\partial S(t_0 + t)$ when $\xi \in S(t_0)$.

Let $X_t^x$ and $\chi_t^x$ be the processes $X_t^{t_0}$ and $\chi_t^{t_0}$ started at $x$, and let $P_X^x$ and $P_\chi^x$ be the probability measures induced by them; together with the measure $P_\xi$ induced by $\xi$, these three probability measures are independent. We write $E_\chi^x$, $E_X^x$ and $E_\xi$ for the corresponding integrations.

We define the first hitting density of $X_t^{t_0}$ on $\partial S_{(t_0,T)}$ as follows. We first define two measures on $\partial S_{(t_0,T)}$. Let $B$ be an open set in $\partial S_{(t_0,T)}$ and $x \in S(t_0)$:

$$\nu_x(B) := P_X^x\big( (\sigma(x), \tilde X_{\sigma(x)}^{t_0}) \in B \big),$$
$$\mu_n(B) : \text{the $n$-dimensional Hausdorff measure on } \partial S_{(t_0,T)}.$$

Since $\nu_x \ll \mu_n$, there exists a unique density function $v_x(t,y) := \partial \nu_x / \partial \mu_n\,(t,y)$, $(t,y) \in \partial S_{(t_0,T)}$, such that

$$\nu_x(B) = \int_B v_x(t,y)\, d\mu_n(t,y).$$

We can similarly define $u_x(s,z) := \partial \lambda_x / \partial \mu_n\,(s,z)$, $(s,z) \in \partial S_{(0,t_0)}$, where

$$\lambda_x(A) := P_\chi^x\big( (\tau(x), \tilde \chi_{\tau(x)}^{t_0}) \in A \big) \quad \text{for open sets } A \subset \partial S_{(0,t_0)}.$$
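In practice, the hitting densities $v_x$ and $u_x$ can be approximated by simulation, which is the basis of the Monte Carlo alternative mentioned in the introduction. The following is a minimal sketch under assumed illustrative choices: the signal is a 1-dimensional Brownian motion, the obstacle is the region above a moving boundary, $S(t) = [c(t), \infty)$ with a hypothetical $c(t)$, and $\sigma(x)$ is estimated on an Euler grid.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sigma(x0, c, t0, T, dt=1e-3, n_paths=100_000):
    """Euler approximation of sigma(x0): the first time a Brownian motion
    X_t^{t0} started at x0 in S(t0) = [c(t0), inf) leaves the obstacle
    S(t0 + t), capped at T - t0 for paths that never exit."""
    n_steps = int(round((T - t0) / dt))
    x = np.full(n_paths, float(x0))
    sigma = np.full(n_paths, T - t0)          # cap, as in the definition of sigma_{t0}
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, n_steps + 1):
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        hit = alive & (x < c(t0 + k * dt))    # crossed the boundary: signal reappears
        sigma[hit] = k * dt
        alive &= ~hit
    return sigma

c = lambda t: 0.5 * np.sin(t)                 # hypothetical smooth boundary
samples = simulate_sigma(x0=1.0, c=c, t0=0.0, T=1.0)
density, edges = np.histogram(samples[samples < 1.0], bins=50, density=True)
```

One caveat: the discrete grid misses excursions across the boundary between steps, so this estimate of $\sigma$ is biased upward; a Brownian-bridge crossing correction or a smaller `dt` reduces the bias.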
3.2 Extrapolation

Suppose $X_{t_0} \in S(t_0)$, we have observed $Y_t$ for $0 \le t \le t_0$, and we want to estimate a future state $X_t^{t_0}$, $0 \le t \le T - t_0$. From the last observed point before $t_0$, we obtain the conditional density $v(x) = q(x \mid \tau, \chi_\tau^{t_0})$ for the present estimator as in (4), and we use this density as the initial condition for the Kolmogorov equation:

$$\frac{\partial p^{t_0}(t,x)}{\partial t} = -\sum_{i=1}^{n} \frac{\partial \big( p^{t_0}(t,x)\, a_i(t+t_0, x) \big)}{\partial x_i} + \frac{1}{2} \sum_{i,j=1}^{n} \frac{\partial^2 \big( (b\, b^\top)_{ij}(t+t_0, x)\, p^{t_0}(t,x) \big)}{\partial x_i\, \partial x_j}, \qquad p^{t_0}(0,x) = v(x). \tag{7}$$

The solution $p^{t_0}(t,x)$ is the conditional density for the extrapolator.
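For the 1-dimensional case, the forward equation (7) can be solved by the finite difference method mentioned in the introduction. Below is a minimal sketch under stated assumptions: central differences in space, explicit forward Euler in time, the density forced to zero at the edges of an artificial truncated domain, and coefficients `a` and `b` assumed to already absorb the time shift $t + t_0$. The initial density `v0` is a peaked stand-in for $v(x)$.

```python
import numpy as np

def solve_forward_eq(v0, a, b, x, t_final):
    """Explicit finite-difference scheme for (7) in one dimension:
        dp/dt = -d(a p)/dx + 0.5 d^2(b^2 p)/dx^2,  p(0, x) = v0(x)."""
    dx = x[1] - x[0]
    dt = 0.4 * dx**2 / np.max(b(0.0, x) ** 2)   # heuristic stability bound
    p, t = v0(x).astype(float), 0.0
    while t < t_final:
        h = min(dt, t_final - t)
        flux = a(t, x) * p                       # drift term  a p
        diff = b(t, x) ** 2 * p                  # diffusion term  b^2 p
        p[1:-1] += h * (-(flux[2:] - flux[:-2]) / (2 * dx)
                        + 0.5 * (diff[2:] - 2 * diff[1:-1] + diff[:-2]) / dx**2)
        p[0] = p[-1] = 0.0                       # artificial boundary condition
        t += h
    return p

x = np.linspace(-5.0, 5.0, 401)
v0 = lambda x: np.exp(-(x - 1.0)**2 / 0.02) / np.sqrt(0.02 * np.pi)  # stand-in for v(x)
p = solve_forward_eq(v0, a=lambda t, x: -x, b=lambda t, x: np.ones_like(x), x=x, t_final=0.5)
```

An implicit (Crank–Nicolson) scheme would allow larger time steps; the explicit form is kept only for brevity.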
3.3 Backward filtering

Suppose $X_{t_0} \in S(t_0)$, that we observe the hidden signal from the present time $t_0$ up to a future time, and that we want to estimate $X_{t_0}$. Applying the strong Markov property, we can obtain an identity similar to (3):

$$E[f(X_{t_0})\, I_{X_{t_0} \in S(t_0)} \mid Y_{[t_0,T]}] = E[f(X_{t_0})\, I_{X_{t_0} \in S(t_0)} \mid \sigma, X_\sigma^{t_0}]. \tag{8}$$

There are two cases to consider: either the hidden signal reappears before $T$, or it does not reappear up to time $T$.

3.3.1 Case 1 : $t_0 < t_0 + \sigma \le T$

Consider the joint density function

$$v(x; s, z) = \frac{P\big[ \xi \in dx;\ (\sigma(\xi), X_{\sigma(\xi)}^{t_0}) \in d(s,z) \big]}{dx\; d(s,z)}, \qquad t_0 < s \le T.$$

By conditioning on $\xi$, we can decompose this density as

$$v(x; s, z) = p_{t_0}(x)\, v_x(s, z),$$

where $p_{t_0}(x)$ is the density function of $X_{t_0}$. Hence, by the identity (8), we
