<<

arXiv:0805.0048v2 [math.PR] 10 Jun 2008

A Discrete Construction for Gaussian Markov Processes

Thibaud Taillefumier
Laboratory of Mathematical Physics
1230 York Avenue, New York, NY 10065, USA
e-mail: [email protected]

Antoine Toussaint
Financial Math Program
390 Serra Mall, Stanford, CA 94305, USA
e-mail: [email protected]

Abstract: In the Lévy construction of Brownian motion, a Haar-derived basis of functions is used to form a finite-dimensional process $X^N$ and to define the Wiener process as the almost sure path-wise limit of $X^N$ when $N$ tends to infinity. We generalize such a construction to the class of centered Gaussian Markov processes $X$ which can be written $X_t = g(t) \cdot \int_0^t f(t)\, dW_t$, $f$ and $g$ being continuous functions. We build the finite-dimensional process $X^N$ so that it gives an exact representation of the conditional expectation of $X$ with respect to the filtration generated by $\{X_{k/2^N}\}$ for $0 \le k \le 2^N$. Moreover, we prove that the process $X^N$ converges in distribution toward $X$.

AMS 2000 subject classifications: Primary 60G15, 60J25, 28C20.
Keywords and phrases: Gaussian Process, Markov Process, Discrete Representation.

1. Introduction

Given some probability space, it is often challenging to establish the mere existence of continuous adapted stochastic processes. As a matter of fact, proving the existence of such processes can be lengthy and technical: a direct approach consists in evaluating the desired finite-dimensional distributions of the process, and then constructing the measure associated with the process consistently on an appropriate measurable space, so that this measure yields the expected finite-dimensional distributions (13). In that respect, it is advantageous to have a discrete construction of a continuous stochastic process. For a more general purpose, a discrete representation of a continuous process proves very useful as well. Assuming some mode of probability convergence, what is at stake is to write a process $X_t$ as a convergent series of random functions

$$X_t = \sum_{n=0}^{\infty} f_n(t) \cdot \xi_n = \lim_{N\to\infty} \sum_{n=0}^{N} f_n(t) \cdot \xi_n,$$

where $f_n$ is a deterministic function and $\xi_n$ is a given random variable. The Lévy construction of the Wiener process provides us with a first example of a discrete representation for a continuous stochastic process. Noticing the simple form of the probability density of a Brownian bridge, it is based on completing sample paths by interpolation according to the conditional probabilities of the Wiener process (10). More especially, the coefficients $\xi_n$ are independent Gaussian variables and the elements $f_n$, called Schauder elements, are obtained by time-dependent integration of the Haar elements. This latter point is of relevance since, for being a Hilbert system, the introduction of the Haar basis greatly simplifies the demonstration of the existence of the Wiener process (3). From another perspective, fundamental among discrete representations is the Karhunen-Loève decomposition.
Instead of yielding a convenient construction scheme, it represents a stochastic process by expanding it on a basis of orthogonal functions (9; 11). The definition of the basis elements $f_n$ depends only on the second-order statistics of the considered process and the coefficients $\xi_n$ are pairwise uncorrelated random variables. Incidentally, such a decomposition is especially suited to the study of Gaussian processes because the coefficients of the representation then become independent Gaussian variables. For these reasons, the Karhunen-Loève decomposition is of primary importance in exploratory data analysis, leading to methods referred to as "principal component analysis", "Hotelling transform" (6) or "proper orthogonal decomposition" (12) according to the field of application. In particular, it was directly applied to the study of stationary Gaussian Markov processes in the theory of random noise in radio receivers (7).

In view of this, we propose a construction of Gaussian Markov processes using a Haar-like basis of functions. The class of processes we consider is general enough to encompass commonly studied centered Gaussian Markov processes that satisfy minimal properties of continuity. We stress the connection with the Haar basis because our basis of decomposition is the exact analog of the Haar-derived Schauder functions used in the Lévy construction of the Wiener process. As opposed to the Karhunen-Loève decomposition, our basis is not made of orthogonal functions, but the elements are such that the random coefficients $\xi_n$ are always independent and Gaussian with law $\mathcal{N}(0,1)$, i.e. with zero mean and unitary variance.

The almost sure path-wise convergence of our decomposition toward a well-defined continuous process is quite straightforward. Most of the work lies in proving that the candidate process provides us with an exact representation of a Gaussian Markov process and in demonstrating that our decomposition converges in distribution toward this representation. Validating the decomposition essentially consists in proving the weak convergence of all the finite-dimensional measures induced by our construction on the Wiener space: it requires the introduction of an auxiliary orthonormal system of functions in view of using the Parseval relation. To furthermore establish the convergence in distribution of the representation, we only need to demonstrate the tightness of this family of induced measures.

The discrete construction we present displays both analytical and numerical interest for further applications.

On the analytical side, even if it does not exhibit the same orthogonality properties as the Karhunen-Loève decomposition, our representation can prove advantageous to establish analytical results about Gaussian Markov processes. It is especially noticeable when computing quantities such as the characteristic functional of random processes (5; 2), as shown in annex. This is just an example of how, equipped with a discrete representation, one can expect to make demonstrations of properties of continuous Gaussian Markov processes more tractable. While the measures induced by our decomposition on the Wiener space converge weakly toward the measure of a Gaussian Markov process, we put forward that the convergence of our decomposition is almost sure path-wise toward the representation of a Gaussian Markov process. This result contrasts with the convergence in mean of the Karhunen-Loève decomposition.

From another point of view, three Haar-like properties make our decomposition particularly suitable for certain numerical computations: all basis elements have compact support on an open interval with dyadic rational endpoints; these intervals are nested and become smaller for larger indices of the basis element; and for any dyadic rational, only a finite number of basis elements is nonzero at that number. Thus the expansion in our basis, when evaluated at a dyadic rational, terminates in a finite number of steps. These properties suggest an exact scheme to simulate sample paths of a Gaussian Markov process $X$ in an iterative "top-down" fashion, as sketched below. Assuming conditional knowledge of a sample path on the dyadic points of $D_N = \{k 2^{-N} \mid 0 \le k \le 2^N\}$, one can decide to further the simulation of this sample path at any time $t$ in $D_{N+1}$ by drawing a point according to the conditional law of $X_t$ knowing $\{X_t\}_{t \in D_N}$, which is simply expressed in the framework of our construction. It can be used to great advantage in numerical computations such as dichotomic search algorithms for first passage times: considering a continuous boundary, we shall present elsewhere a fast Monte-Carlo algorithm that simulates sample paths with increasing accuracy only in time regions where a first passage is likely to occur.
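To make this top-down refinement concrete in the simplest setting, the following minimal Python sketch implements the Lévy midpoint construction for the Wiener process, the special case $g \equiv 1$, $h(t) = t$ in which the conditional law reduces to the classical Brownian bridge; the function names are our own and the snippet is illustrative rather than part of the construction proper.

```python
import numpy as np

def levy_wiener_path(levels, rng=None):
    """Levy midpoint construction of a Wiener path on [0, 1].

    Starts from W_0 = 0 and W_1 ~ N(0, 1), then repeatedly fills in the
    dyadic midpoints with the Brownian-bridge conditional law: given
    W_l = a and W_r = b, the midpoint value is N((a + b)/2, (r - l)/4).
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.array([0.0, 1.0])
    w = np.array([0.0, rng.standard_normal()])
    for _ in range(levels):
        mid_t = 0.5 * (t[:-1] + t[1:])
        mid_w = 0.5 * (w[:-1] + w[1:]) \
              + np.sqrt((t[1:] - t[:-1]) / 4.0) * rng.standard_normal(mid_t.size)
        # interleave the refined midpoints with the existing grid
        t = np.insert(t, np.arange(1, t.size), mid_t)
        w = np.insert(w, np.arange(1, w.size), mid_w)
    return t, w

t, w = levy_wiener_path(levels=10)  # 2**10 + 1 grid points
```

The same refinement logic carries over to the general case once the Brownian-bridge law is replaced by the conditional law (4)-(5) derived in section 3.3.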

2. Main Result

Beforehand, we emphasize that the analytical and numerical advantages granted by the use of our decomposition come at the price of generality, being only suited for Gaussian Markov processes with minimal properties of continuity. We also remark that, while the Karhunen-Loève decomposition is widely used in data analysis, our decomposition mainly provides us with a discrete construction scheme for Gaussian Markov processes.

Proposition. Let $X = \{X_t, \mathcal{F}_t;\, 0 \le t \le 1\}$ be a real adapted process on some probability space $(\Omega, \mathcal{F}, \mathbf{P})$ which takes values in the set of real numbers and let

$\xi_{n,k}$ with $n \ge 0$ and $0 \le k < 2^n$ be Gaussian random variables of law $\mathcal{N}(0,1)$. If there exist some non-zero continuous functions $f$ and $g$ such that

$$X_t = g(t) \cdot \int_0^t f(t)\, dW_t, \quad \text{with } 0 \le t \le 1,$$

then there exists a basis of continuous functions $\Psi_{n,k}$ for $n \ge 0$ and $0 \le k < 2^n$ such that the random variable

$$X^N_t = \sum_{n=0}^{N} \sum_{0 \le k < 2^{n-1}} \Psi_{n,k}(t) \cdot \xi_{n,k}$$

follows the same law as the conditional expectation of $X_t$ with respect to the filtration generated by $\{X_{k/2^N}\}$ for $0 \le k \le 2^N$. The functions $\Psi_{n,k}$ thus defined have support in $S_{n,k} = \big[k \cdot 2^{-n+1},\, (k+1)\, 2^{-n+1}\big]$ and admit simple analytical expressions in terms of the functions $g$ and $f$.
Moreover, the path-wise limit $\lim_{N\to\infty} X^N$ defines almost surely a continuous process $\overline{X}$ which is an exact representation of $X$ and we have

$$X^N \xrightarrow{\;\mathcal{D}\;} X,$$

meaning that the finite-dimensional process $X^N$ converges in distribution toward $X$ when $N$ tends to infinity.

Remark. The function $f$ can possibly be zero on a negligible set in $[0,1]$ in the previous proposition.

To prove this proposition, the paper is organized as follows. We first review some background about Gaussian Markov processes $X$ and their Doob representations as $X_t = g(t) \cdot \int_0^t f(t)\, dW_t$. Then we develop the rationale of our construction by focusing on the conditional expectations of the process $X$ with respect to the filtration generated by $\{X_{k/2^N}\}$ for $0 \le k \le 2^N$. In the fifth section, we propose a basis of expansion to form the finite-dimensional candidate processes $X^N$ and justify the limit process $\overline{X}$ as the almost sure path-wise convergent process $\lim_{N\to\infty} X^N$. In the sixth section, we introduce the auxiliary Hilbert system and prove an important intermediate result. In the last section, we show that the finite-dimensional processes $X^N$ converge in distribution toward $X$ and that $\overline{X}$ is an exact representation of $X$.

3. Background on Gaussian Markov Processes

3.1. Basic Definitions

We first define the class of Gaussian Markov processes. Let us consider on some probability space $(\Omega, \mathcal{F}, \mathbf{P})$ a real adapted process $X = \{X_t, \mathcal{F}_t;\, 0 \le t < \infty\}$ which takes values in the set of real numbers. We stress that the index $t$ of the random variable $X_t$ runs in the continuous set $\mathbb{R}^+$. For a given realization $\omega$ in $\Omega$, the collection of outcomes $t \mapsto X_t(\omega)$ is a sample path of the process $X$. We only consider processes $X$ for which the sample paths $t \mapsto X_t(\omega)$ are continuous. With these definitions, we are in a position to state the two properties characterizing a Gaussian Markov process.

1. We say that $X$ is a Gaussian process if, for any integer $k$ and positive

reals $t_1 < t_2 < \cdots < t_k$, the random vector $(X_{t_1}, X_{t_2}, \cdots, X_{t_k})$ has a Gaussian distribution.
2. We say that $X$ is a Markov process if, for any $0 \le s \le t$, the law of $X_t$ conditioned on $\{X_u,\, u \le s\}$ only depends on $X_s$.

The Wiener process $W$ is the simplest example of such a Gaussian Markov process. Another classical example is the Ornstein-Uhlenbeck process of parameter $\alpha$, defined as the solution of the stochastic differential equation

$$dX_t = \alpha X_t\, dt + dW_t \quad \text{with } \alpha \in \mathbb{R}.$$

We designate by $U^\alpha_t$ the Ornstein-Uhlenbeck process of parameter $\alpha$ starting at $0$ for $t = 0$ and we give its integral expression

$$U^\alpha_t = \int_0^t e^{\alpha(t-u)}\, dW_u. \tag{1}$$

The process $U^\alpha$ naturally appears as a Gaussian Markov process as well: it is Gaussian for integrating independent Gaussian contributions, and Markov for being the solution of a first-order stochastic differential equation. It is known that both processes can be described as discrete processes with an appropriate basis of random functions (14).
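As an aside, the integral expression (1) lends itself to exact simulation on any grid, since between grid points the process obeys a linear Gaussian recursion. The following Python sketch is our own and assumes $\alpha \neq 0$; the increment variance follows from the Itô isometry applied to (1).

```python
import numpy as np

def sample_ou(alpha, n_steps, T=1.0, rng=None):
    """Exact sampling of U_t = int_0^t exp(alpha (t - u)) dW_u on a grid.

    Between grid points, U_{t+dt} = exp(alpha dt) U_t + xi with
    xi ~ N(0, (exp(2 alpha dt) - 1) / (2 alpha)); assumes alpha != 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    decay = np.exp(alpha * dt)
    std = np.sqrt((np.exp(2.0 * alpha * dt) - 1.0) / (2.0 * alpha))
    u = np.zeros(n_steps + 1)
    for i in range(n_steps):
        u[i + 1] = decay * u[i] + std * rng.standard_normal()
    return u

path = sample_ou(alpha=-1.0, n_steps=512)
```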

3.2. The Doob Representation

The discrete construction of the Wiener process and the Ornstein-Uhlenbeck process is likely to be generalized to a wider class of Gaussian Markov processes because any element of this class can be represented in terms of the Wiener process. By Doob's theorem (4), for any Gaussian Markov process $X$, there

exist a real non-zero function $g$ and a real function $f$ in $L^2_{\mathrm{loc}}(\mathbb{R}^+)$ such that we have the integral representation of $X$

$$X_t = g(t) \cdot \int_0^t f(t)\, dW_t, \tag{2}$$

where $W$ is the standard Wiener process. If we introduce the non-decreasing function $h$ defined as

$$h(t) = \int_0^t f^2(u)\, du,$$

then, for any $t, s \ge 0$, the covariance of $X$ can be expressed in terms of the functions $h$ and $g$:

$$\mathbf{E}(X_t \cdot X_s) = g(t)g(s) \cdot h\big(\min(t,s)\big). \tag{3}$$

The Doob representation (2) indicates that $X$ is obtained from $W$ by a change of variable in time $t \mapsto h(t)$ and, at any time $t$, by a change of variable in space by a time-dependent factor $x \mapsto g(t) \cdot x$. The couple of functions $(f,g)$ that intervenes in the Doob representation of $X$ is not determined univocally. Yet, one can define a canonical class of functions $(f,g)$ which are uniquely defined almost surely in $L^2_{\mathrm{loc}}(\mathbb{R}^+)$ if we omit their signs (5). Incidentally, we can compare the integral formulation (2) with expression (1) of the Ornstein-Uhlenbeck process: we remark that the representation of this Gaussian Markov process is provided by setting $g(t) = e^{\alpha t}$ and $f(t) = e^{-\alpha t}$, which happens to be its canonical representation.
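For completeness, (3) follows in one line from the Itô isometry: assuming $s \le t$ and using the independence of the increments of $W$,

$$\mathbf{E}(X_t \cdot X_s) = g(t)g(s)\, \mathbf{E}\left(\int_0^t f(u)\, dW_u \int_0^s f(v)\, dW_v\right) = g(t)g(s) \int_0^{s} f^2(u)\, du = g(t)g(s) \cdot h\big(\min(t,s)\big).$$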

3.3. Analytical Results

The discrete construction of Gaussian Markov processes will rely on two analytical results that we detail in the following.
First, the Doob representation allows us to give an analytical expression for the transition kernel $p(X_t = x \mid X_{t_0} = x_0)$ of a general Gaussian Markov process. As the Doob representation is a simple change of variables, it is easy to transform the expression of the Wiener transition kernel $p(W_t = x \mid W_{t_0} = x_0)$ to establish

$$p(X_t = x \mid X_{t_0} = x_0) = \frac{1}{g(t)\sqrt{2\pi\big(h(t)-h(t_0)\big)}} \cdot \exp\left(-\frac{\left(\frac{x}{g(t)} - \frac{x_0}{g(t_0)}\right)^{2}}{2\big(h(t)-h(t_0)\big)}\right).$$

We have to mention that this expression is only valid if $h(t) \neq h(t_0)$; otherwise $X_t$ is deterministic and $p(X_t = x \mid X_{t_0} = x_0) = \delta_{x_0}(x)$.
Second, we can use this result to evaluate $q(X_{t_y} = y \mid X_{t_x} = x, X_{t_z} = z)$ with $t_x < t_y < t_z$. The Markov property of the process $X$ entails

$$q(X_{t_y} = y \mid X_{t_x} = x, X_{t_z} = z) = \frac{p(X_{t_y} = y \mid X_{t_x} = x) \cdot p(X_{t_z} = z \mid X_{t_y} = y)}{p(X_{t_z} = z \mid X_{t_x} = x)}.$$

Thanks to the previous expression, we can compute the distribution of $X_{t_y}$ knowing $X_{t_x}$ and $X_{t_z}$, which is expected to be a normal law because we only consider Gaussian processes. For a general Gaussian Markov process $X$, we refer to that probability law as $\mathcal{N}\big(\mu_X(t_y), \sigma_X(t_y)^2\big)$, with mean value $\mu_X(t_y)$ and variance $\sigma_X(t_y)^2$. We show in annex that these parameters satisfy

$$\mu_X(t_y) = \frac{g(t_y)}{g(t_x)} \cdot \frac{h(t_z)-h(t_y)}{h(t_z)-h(t_x)} \cdot x + \frac{g(t_y)}{g(t_z)} \cdot \frac{h(t_y)-h(t_x)}{h(t_z)-h(t_x)} \cdot z, \tag{4}$$

$$\sigma_X(t_y)^2 = g^2(t_y) \cdot \frac{\big(h(t_y)-h(t_x)\big)\big(h(t_z)-h(t_y)\big)}{h(t_z)-h(t_x)}. \tag{5}$$

Once more, we have to mention that these expressions are only valid if $h(t_y) \neq h(t_x)$. Whether considering a Wiener process or an Ornstein-Uhlenbeck process, the evaluation of (4) and (5) with the corresponding expressions of $g$ and $h$ leads to the already known results (14).
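As a quick illustration, formulas (4) and (5) translate directly into code; the following Python helper uses our own naming, with $g$ and $h$ passed as callables, and recovers the familiar Brownian-bridge values in the Wiener case $g \equiv 1$, $h(t) = t$.

```python
import numpy as np

def bridge_params(t_x, t_y, t_z, x, z, g, h):
    """Conditional mean (4) and variance (5) of X_{t_y} given
    X_{t_x} = x and X_{t_z} = z; g and h are callables and we
    require h(t_z) > h(t_x)."""
    span = h(t_z) - h(t_x)
    mu = (g(t_y) / g(t_x)) * (h(t_z) - h(t_y)) / span * x \
       + (g(t_y) / g(t_z)) * (h(t_y) - h(t_x)) / span * z
    var = g(t_y) ** 2 * (h(t_y) - h(t_x)) * (h(t_z) - h(t_y)) / span
    return mu, var

# Wiener case g = 1, h = id recovers the Brownian-bridge formulas:
mu, var = bridge_params(0.0, 0.5, 1.0, 0.0, 1.0, lambda t: 1.0, lambda t: t)
print(mu, var)  # 0.5 0.25
```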

4. The Rationale of the Construction

4.1. Form of the Discrete Representation

From now on, we will suppose that the zeros of the function $f$ form a negligible set in $[0,1]$, causing the function $h$ to be strictly increasing. We will further restrict ourselves to Gaussian Markov processes for which the functions $f$ and $g$ belong to the set of continuous functions on $[0,1]$, denoted

$C(0,1)$. We remark that in such a case, the functions $t \mapsto \mu_X(t)$ and $t \mapsto \sigma_X(t)$ are continuous on $[0,1]$.
Bearing in mind the example of the Lévy construction for the Wiener process, we want to define a basis of continuous functions $\Psi_{n,k}$ in $C(0,1)$ with $0 \le k < 2^n$ to form the discrete process

$$X^N_t = \sum_{n=0}^{N} \sum_{0 \le k < 2^{n-1}} \Psi_{n,k}(t) \cdot \xi_{n,k},$$

where the $\xi_{n,k}$ are independent Gaussian random variables of standard normal law $\mathcal{N}(0,1)$. We want to choose the $\Psi_{n,k}$ so that $t \mapsto X^N_t(\omega)$ converges almost surely toward $t \mapsto X_t(\omega)$ when $N$ tends to infinity. Given the continuous nature of the processes $X^N$, we require that the convergence is uniform and normal on $[0,1]$ to ensure the definition of a continuous limit process $\overline{X} = \lim_{N\to\infty} X^N$ on $[0,1]$. Moreover, we want to define $\Psi_{n,k}$ on supports $S_{n,k}$ of the form

$$S_{n,k} = \big[k \cdot 2^{-n+1},\, (k+1)\, 2^{-n+1}\big].$$

As a consequence, the basis of functions $\Psi_{n,k}$ will have the following properties: all basis elements have compact support on an open interval with dyadic endpoints; these intervals are nested and become smaller for larger indices $n$ of the basis element; and for any dyadic rational, only a finite number of basis elements is nonzero at that number.

4.2. Conditional Averages of the Process

Now remains to propose an analytical expression for $\Psi_{n,k}$. If we denote by $D_N$ the set of reals $\{k 2^{-N} \mid 0 \le k \le 2^N\}$, the key point is to consider $Z^N_t = \mathbf{E}(X_t \mid \{X_s\},\, s \in D_N)$, the conditional expectation of the random variable $X_t$ given $X_s$ with $s$ pertaining to the set of dyadic points $D_N$. The collection of random variables $Z^N_t$ defined on $\Omega$ specifies a continuous random process $Z^N$ on $\Omega$. We notice that, if $t_x = k 2^{-N}$ and $t_z = (k+1) 2^{-N}$ with $0 \le k < 2^N$ are the two successive points of $D_N$ framing $t$, the random variable $Z^N_t$ is only conditioned by $X_{t_x}$ and $X_{t_z}$:

$$Z^N_t = \mathbf{E}(X_t \mid \{X_s\},\, s \in D_N) = \mathbf{E}(X_t \mid X_{t_x}, X_{t_z}).$$

Using expression (4), we can express the sample paths $t \mapsto Z^N_t(\omega)$ as a function of $t$ on $[t_x, t_z]$: for a given $\omega$ in the sample space $\Omega$, we write

$$Z^N_t(\omega) = \mu_{X;\,t_x,t_z}(t, x, z) \;\stackrel{\text{def}}{=}\; \mu_X^{N,k}(t),$$

where the conditional dependency upon the parameters $X_{t_x}(\omega) = x$ and $X_{t_z}(\omega) = z$ is implicit in $\mu_X^{N,k}$. The random process $Z^N_t$ then appears as a parametric function of $\{X_s\}_{s \in D_N}$: for any $\omega$ in the sample space $\Omega$, the sample path $t \mapsto X_t(\omega)$ determines a set of values $\{x_s\}_{s \in D_N} = \{X_s(\omega)\}_{s \in D_N}$ and by extension a sample path $t \mapsto Z^N_t(\omega)$ for the process $Z^N$.

Now two points are worth noticing: first, $t \mapsto X_t(\omega)$ and $t \mapsto Z^N_t(\omega)$ are continuous sample paths that coincide on the set $D_N$; second, we have $\{0,1\} = D_0 \subset D_1 \subset \cdots \subset D_N$, a growing sequence of sets whose limit ensemble is the set of dyadic points in $[0,1]$, which is dense in $[0,1]$. Then, provided the path-wise convergence is almost surely normal and uniform on $[0,1]$, the limit process of $Z^N$ when $N$ tends to infinity should be continuous, and the processes $\lim_{N\to\infty} Z^N$ and $X$ should be indistinguishable on $\Omega$.

4.3. Identification of Conditional Averages and Partial Sums

Identifying the process $Z^N$ with the partial sums $X^N$ provides us with a rationale to build the functions $\Psi_{n,k}$.
We first need to consider the random variable $Z^{N+1}_t$ on the support $S_{N+1,k} = [t_x, t_z]$ of the function $\Psi_{N+1,k}$. The Markov property of the process $X$ entails

$$Z^{N+1}_t = \mathbf{E}(X_t \mid X_{t_x}, X_{t_y}, X_{t_z}) = \begin{cases} \mathbf{E}(X_t \mid X_{t_x}, X_{t_y}) & \text{if } t_x \le t \le t_y, \\ \mathbf{E}(X_t \mid X_{t_y}, X_{t_z}) & \text{if } t_y \le t \le t_z. \end{cases}$$

Hence, for a given $\omega$, the estimation of the sample paths $t \mapsto Z^{N+1}_t(\omega)$ is now dependent upon $y = X_{t_y}(\omega)$ with $t_y$ the midpoint of $t_x$ and $t_z$:

$$Z^{N+1}_t(\omega) = \begin{cases} \mu_{X;\,t_x,t_y}(t, x, y) & \text{if } t_x \le t \le t_y \\ \mu_{X;\,t_y,t_z}(t, y, z) & \text{if } t_y \le t \le t_z \end{cases} \;\stackrel{\text{def}}{=}\; \nu_X^{N,k}(t, y).$$

We can identify the conditional process $Z^N$ and the partial sums $X^N$. Then, for any $\omega$ in $\Omega$, writing the sample path $t \mapsto Z^{N+1}_t(\omega)$ as a function of $y$, we have

$$\Psi_{N+1,k}(t) \cdot \xi_{N+1,k}(\omega) = X^{N+1}_t(\omega) - X^N_t(\omega) = Z^{N+1}_t(\omega) - Z^N_t(\omega) = \nu_X^{N,k}(t, y) - \mu_X^{N,k}(t).$$

Assuming conditional knowledge on $D_N$, the quantity $Z^N_t = \mathbf{E}(X_t \mid \{X_s\},\, s \in D_N)$ becomes deterministic and the outcome of the random variable $Z^{N+1}_t$ is only dependent upon the values of the process on $D_{N+1} \setminus D_N$. More precisely, on the support $S_{N+1,k}$, the outcome of $Z^{N+1}_t$ is determined through the function $\nu_X^{N,k}$ by the outcome of $X_{t_y}$ given $X_{t_x} = x$ and $X_{t_z} = z$.
The distribution of $X_{t_y}$ given $X_{t_x} = x$ and $X_{t_z} = z$ follows the law $\mathcal{N}\big(\mu_X(t_y), \sigma_X(t_y)^2\big)$ and we denote by $Y_{N,k}$ a Gaussian variable distributed according to such a law. With this notation, we are in a position to propose the following criterion to compute the function $\Psi_{N+1,k}$: the element $\Psi_{N+1,k}$ is the only positive function with support included in $S_{N+1,k}$ such that the random variable $\Psi_{N+1,k}(t) \cdot \xi_{N+1,k}$ has the same law as $\nu_X^{N,k}(t, Y_{N,k}) - \mu_X^{N,k}(t)$. Direct calculations confirm that the previous relation provides us with a consistent paradigm to define the functions $\Psi_{N+1,k}$. Incidentally, we have an interpretation for the statistical contribution of the components $\Psi_{N+1,k} \cdot \xi_{N+1,k}$: if one has previous knowledge of $X_t$ on $D_N$, the function $\Psi_{N+1,k} \cdot \xi_{N+1,k}$ represents the uncertainty about $X_t$ that is discarded by the knowledge of its value on $D_{N+1} \setminus D_N$.

5. The Candidate Discrete Process

5.1. The Basis of Functions

We recall that we carry out the case for which the function f has a negligible set of zeros in [0, 1], which directly follows from the previous section. Before specifying the candidate basis elements Ψn,k, we introduce the following short notations to simplify the writing of their expressions

$$l_{n,k} = (2k)\, 2^{-n}, \qquad m_{n,k} = (2k+1)\, 2^{-n}, \qquad r_{n,k} = (2k+2)\, 2^{-n}.$$

Then for $n > 0$ and $0 \le k < 2^{n-1}$, the explicit formulation of the basis of functions $\Psi_{n,k}$ reads

$$\Psi_{n,k}(t) = \begin{cases} L_{n,k} \cdot g(t) \cdot \big(h(t) - h(l_{n,k})\big) & \text{if } l_{n,k} \le t < m_{n,k}, \\ R_{n,k} \cdot g(t) \cdot \big(h(r_{n,k}) - h(t)\big) & \text{if } m_{n,k} \le t < r_{n,k}, \\ 0 & \text{otherwise,} \end{cases} \tag{6}$$

with the coefficients $L_{n,k}$ and $R_{n,k}$ defined as

$$L_{n,k} = \sqrt{\frac{h(r_{n,k}) - h(m_{n,k})}{\big(h(r_{n,k}) - h(l_{n,k})\big)\big(h(m_{n,k}) - h(l_{n,k})\big)}},$$

$$R_{n,k} = \sqrt{\frac{h(m_{n,k}) - h(l_{n,k})}{\big(h(r_{n,k}) - h(l_{n,k})\big)\big(h(r_{n,k}) - h(m_{n,k})\big)}}.$$

For $N = 0$, the basis element $\Psi_{0,0}$ needs to satisfy the relation

$$\Psi_{0,0}(t) \cdot \xi_{0,0} = \mathbf{E}\big(X_t \mid \{X_s\},\, s \in D_0 = \{0,1\}\big),$$

which completely defines the analytical expression of $\Psi_{0,0}$ as follows:

$$\Psi_{0,0}(t) = \frac{g(t) \cdot \big(h(t) - h(l_{0,0})\big)}{\sqrt{h(r_{0,0}) - h(l_{0,0})}}.$$

As expected, we directly ascertain the continuity of the $\Psi_{n,k}$ by continuity of $f$ and $g$.
We should briefly discuss the form of the functions $\Psi_{n,k}$. In the case for which $g(t) = 1$ and $h(t) = t$, we find the usual expression of $\Psi_{n,k}$ for the Lévy construction of a Wiener process: the elements of the basis are the triangular wedge functions obtained by integration of $H_{n,k}$, the standard Haar functions. In the general case of a Gaussian Markov process, the expression of $\Psi_{n,k}$ can be derived from the Wiener process basis elements by three operations: a change of variable in time $dt \mapsto h'(t)\,dt = f^2(t)\,dt$, a time-dependent change of variable in space $x \mapsto g(t) \cdot x$, and a multiplication by the coefficients $L_{n,k}$ and $R_{n,k}$. The effect of this multiplication by $L_{n,k}$ and $R_{n,k}$ will be explained in section 7.
Moreover, the paradigm of the construction makes no assumption about the form of the binary tree of nested compact supports $S_{n,k}$. Let us consider a given segment $I_{0,0} = [l_{0,0}, r_{0,0})$ and construct by recurrence such a tree. We suppose that we have the following partition

$$I_{0,0} = \bigcup_{0 \le k < 2^{n-1}} I_{n,k} = \bigcup_{0 \le k < 2^{n-1}} \big[l_{n,k}, r_{n,k}\big).$$

For each $k$ such that $0 \le k < 2^{n-1}$, we draw a point $m_{n,k}$ in $I_{n,k}$. Then, we have

$$I_{n,k} = \big[l_{n,k}, m_{n,k}\big) \cup \big[m_{n,k}, r_{n,k}\big) = I_{n+1,2k} \cup I_{n+1,2k+1},$$

and by construction, we posit $m_{n,k} = l_{n+1,2k+1} = r_{n+1,2k}$. Iterating the process for increasing $n$, we build a tree of nested compact supports $S_{n,k} = I_{n,k}$. The definition (6) enables us to make explicit elements $\Psi_{n,k}$ that are adapted to any such tree. The so-defined functions $\Psi_{n,k}$ will appear to be valid basis elements to build a discrete representation of $X$ under the only requirement that

$$\lim_{n \to \infty}\; \sup_{0 \le k < 2^{n-1}} \big|r_{n,k} - l_{n,k}\big| = 0. \tag{7}$$
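To fix ideas, here is a small Python sketch of the basis elements (6) on the dyadic tree; the helper names and the callable signature for $g$ and $h$ are our own choices (both assumed vectorized), and only the dyadic midpoints $m_{n,k} = (2k+1) 2^{-n}$ are implemented.

```python
import numpy as np

def dyadic_nodes(n, k):
    """Endpoints and midpoint of the support S_{n,k} for n > 0."""
    return (2 * k) * 2.0 ** -n, (2 * k + 1) * 2.0 ** -n, (2 * k + 2) * 2.0 ** -n

def psi(n, k, t, g, h):
    """Basis element Psi_{n,k}(t) of (6), vectorized in t; g and h are
    callables taken from the Doob representation of X."""
    t = np.asarray(t, dtype=float)
    l, m, r = dyadic_nodes(n, k)
    hl, hm, hr = h(l), h(m), h(r)
    L = np.sqrt((hr - hm) / ((hr - hl) * (hm - hl)))
    R = np.sqrt((hm - hl) / ((hr - hl) * (hr - hm)))
    out = np.zeros_like(t)
    left, right = (t >= l) & (t < m), (t >= m) & (t < r)
    out[left] = L * g(t[left]) * (h(t[left]) - hl)
    out[right] = R * g(t[right]) * (hr - h(t[right]))
    return out

def psi00(t, g, h):
    """Root element Psi_{0,0} on S_{0,0} = [0, 1]."""
    t = np.asarray(t, dtype=float)
    return g(t) * (h(t) - h(0.0)) / np.sqrt(h(1.0) - h(0.0))
```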

5.2. The almost sure Normal and Uniform Convergence

We want to prove the validity of the discrete representation of a Gaussian Markov process with Doob representation (2) using the proposed basis of functions $\Psi_{n,k}$. Let us consider the partial sums $X^N$ defined on $\Omega$ by

$$X^N_t = \sum_{n=0}^{N} \sum_{0 \le k < 2^{n-1}} \Psi_{n,k}(t) \cdot \xi_{n,k} \quad \text{for } t \in S_{0,0} = [0,1].$$

We need to study the path-wise convergence of the partial sums $X^N$ on $\Omega$ to see in which sense we can consider $\lim_{N\to\infty} X^N$ as a proper stochastic process. Again, we only consider Gaussian Markov processes for which the functions $f$ and $g$ belong to the set of continuous functions $C[0,1]$. If we designate the $L^\infty$ norms of $f$ and $g$ on $[0,1]$ by $\|f\|_\infty$ and $\|g\|_\infty$, we can show that

$$\sup_{0 \le k < 2^{n-1}}\; \sup_{0 \le t \le 1} \Psi_{n,k}(t) \;\le\; \sqrt{\frac{(r_{n,k} - m_{n,k})(m_{n,k} - l_{n,k})}{r_{n,k} - l_{n,k}}} \cdot \|g\|_\infty \cdot \|f\|_\infty \;\le\; 2^{-\frac{n+1}{2}} \cdot \|g\|_\infty \cdot \|f\|_\infty. \tag{8}$$

For $(f,g)$-bounded Gaussian Markov processes, this inequality provides us with the same upper bound on the elements $\Psi_{n,k}$ as in the case of a Wiener process, up to the constant $\|g\|_\infty \cdot \|f\|_\infty$. By the same Borel-Cantelli argument as for the Haar construction of the Wiener process (8), for almost every $\omega$ in $\Omega$, the sample path converges almost surely normally and uniformly in $t$ to a function $t \mapsto \overline{X}_t(\omega)$ when $N$ goes to infinity.
It is worth noticing that, since $f$ and $g$ are continuous functions, so are the basis functions $\Psi_{n,k}$. Then, for every $\omega$ in $\Omega$, the sample path $t \mapsto X^N_t(\omega)$ is a continuous function in $C[0,1]$. As the convergence when $N$ tends to infinity is normal and uniform in $t$, the limit function $t \mapsto \overline{X}_t(\omega)$ results to be in $C[0,1]$ almost surely on $\Omega$. This allows us to define on $\Omega$ a limit process $\overline{X} = \lim_{N\to\infty} X^N$ with continuous paths.
Showing that $\overline{X}$ is an admissible discrete representation of the Gaussian Markov process $X$ only amounts to demonstrating that, for any integer $k$ and positive reals $t_1 < t_2 < \cdots < t_k$, the random vector $(\overline{X}_{t_1}, \overline{X}_{t_2}, \cdots, \overline{X}_{t_k})$ has the same joint distribution as $(X_{t_1}, X_{t_2}, \cdots, X_{t_k})$. As $\overline{X}$ is defined as the path-wise almost sure limit of $X^N$ when $N$ tends to infinity, this result is implied by the convergence in distribution of the continuous processes $X^N$ toward their limit $\overline{X}$ (8; 1). We will therefore establish the convergence in distribution of our representation in section 8 and incidentally validate $\overline{X}$ as an exact representation of $X$. In that perspective, we devote the following section to setting out the meaning of the convergence in distribution of our candidate process $X^N$.
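As a companion sketch, the partial sums $X^N$ can be sampled directly by drawing the coefficients $\xi_{n,k}$; this assumes the psi and psi00 helpers sketched after (7) are in scope, and the grid resolution is an arbitrary choice of ours.

```python
import numpy as np

def sample_XN(N, g, h, n_points=513, rng=None):
    """Draw one sample path of the partial sum X^N on a uniform grid.

    X^N has the law of E(X_t | X_s, s in D_N); reuses the psi/psi00
    sketches of section 5.1.
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, 1.0, n_points)
    x = rng.standard_normal() * psi00(t, g, h)    # level n = 0
    for n in range(1, N + 1):
        for k in range(2 ** (n - 1)):             # level n elements
            x += rng.standard_normal() * psi(n, k, t, g, h)
    return t, x

# Wiener case: g = 1, h = id.
t, x = sample_XN(8, lambda s: np.ones_like(s), lambda s: s)
```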

6. The Convergence in Distribution

6.1. The Finite-dimensional Measures

In this section, we specify the finite-dimensional probability measures $P^N$ induced by the processes $X^N$. Beforehand, we introduce the notations

$$[0,0] = 0, \qquad [n,k] = 2^{n-1} + k \quad \text{with} \quad [0,1] = 2^N$$

to allow us to list the midpoints $m_{n,k}$ of the tree of supports $S_{n,k}$ in the prefix order. By reindexing according to

$$t_{[n,k]} = m_{n,k} = (2k+1)\, 2^{-n},$$

we get an ordered sequence $t_0 < t_1 < \cdots < t_{2^N}$ of dyadic points in $[0,1]$, and we consider the $2^N$-dimensional vectorial space

$$C^N = \mathrm{Vect}\Big(\big\{\Psi_{n,k}\big\}_{0 \le n \le N,\; 0 \le k < 2^{n-1}}\Big).$$

When it is equipped with the $L^\infty$ norm, $C^N$ is a complete, separable metric space under the distance $d(f,g) = \|f - g\|_\infty$. We can provide the space $C^N$ with the $\sigma$-algebra $\mathcal{B}(C^N)$ generated by the cylinder sets $C_{B_0,\cdots,B_{2^N}}$, which are defined for any collection of Borel sets $B_0, B_1, \cdots, B_{2^N}$ in $\mathcal{B}(\mathbb{R})$ by

$$C_{B_0,\cdots,B_{2^N}} = \big\{x \in C^N \;\big|\; x(t_{[n,k]}) \in B_{[n,k]},\; 0 \le n \le N,\; 0 \le k < 2^{n-1}\big\}.$$

The random process $X^N$ induces a natural measure $P^N$ on $(C^N, \mathcal{B}(C^N))$, such that

$$\forall\, B \in \mathcal{B}(C^N), \quad P^N(B) = \mathbf{P}\big(\omega \;\big|\; X^N(\omega) \in B\big).$$

Since for any $x$ in $C^N$ we have $x(0) = 0$, the induced measure $P^N$ is entirely determined on the cylinder sets of the form $C_{B_0,\cdots,B_{2^N}}$ with $B_0 = \{0\}$. Keeping this in mind, we show in annex that $P^N$ admits a probability density $p^N$: for any cylinder set $C_{B_0,\cdots,B_{2^N}}$ of $\mathcal{B}(C^N)$ with $B_0 = \{0\}$ we have

$$P^N\big(C_{B_0,\cdots,B_{2^N}}\big) = \int_{B_1}\cdots\int_{B_{2^N}} p^N(x_1, \cdots, x_{2^N})\; dx_1 \cdots dx_{2^N},$$

where $p^N$ is made explicit with the help of the transition kernel $p$ of $X$ given in section 3.3:

$$p^N(x_1, \cdots, x_{2^N}) = \prod_{k=0}^{2^N-1} p\big((x_{k+1}, t_{k+1}) \mid (x_k, t_k)\big).$$

We want to specify in which sense the finite-dimensional probability measures $P^N$ defined on $(C^N, \mathcal{B}(C^N))$ converge to a limit measure $P$ associated to $X$.
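For illustration, the product form of $p^N$ translates into the following Python transcription, under our own naming: it attaches the values to the ordered dyadic times $k 2^{-N}$, posits $x_0 = 0$ at $t = 0$, and uses the transition kernel of section 3.3.

```python
import numpy as np

def log_pN(xs, g, h):
    """Log-density log p^N(x_1, ..., x_{2^N}) with the x_k attached to
    the ordered dyadic times t_k = k 2^{-N} and x_0 = 0.

    Each factor is the kernel of section 3.3: given X_{t0} = x0,
    X_t / g(t) - x0 / g(t0) is N(0, h(t) - h(t0)).
    """
    xs = np.concatenate(([0.0], np.asarray(xs, dtype=float)))
    ts = np.linspace(0.0, 1.0, xs.size)
    total = 0.0
    for t0, x0, t, x in zip(ts[:-1], xs[:-1], ts[1:], xs[1:]):
        var = h(t) - h(t0)
        z = x / g(t) - x0 / g(t0)
        total += -0.5 * z ** 2 / var - 0.5 * np.log(2.0 * np.pi * var) - np.log(g(t))
    return total

# Wiener case on D_2: density of (W_{1/4}, W_{1/2}, W_{3/4}, W_1).
print(log_pN([0.1, 0.0, -0.2, 0.3], lambda t: 1.0, lambda t: t))
```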

6.2. The Weak Convergence

Here, we consider that the stochastic processes $X^N$ and $X$ take values in the Wiener space, that is, the space of continuous functions $C[0,1]$. This allows us to characterize the $X$-induced measure $\mu = P$ associated with $X$ on $C[0,1]$. Defining the $X^N$-induced measures $\mu^N$ on $C[0,1]$ as well, we then state the convergence of $P^N$ toward $P$ in terms of weak convergence of $\mu^N$ toward $\mu$.
The processes $X^N$ defined on some probability space $(\Omega, \mathcal{F}, \mathbf{P})$ have continuous sample paths $t \mapsto X^N_t(\omega)$ and so does the Gaussian Markov process $X$. Being a complete, separable metric space under the distance $d(f,g) = \|f - g\|_\infty$, the Wiener space is a natural space to define $X^N$- and $X$-induced measures. We consider $X^N$ and $X$ as random variables with values in the measurable space $(C[0,1], \mathcal{B}(C[0,1]))$, where $\mathcal{B}(C[0,1])$ is the $\sigma$-field generated by the cylinder sets of $C[0,1]$: as previously, $X^N$ and $X$ naturally induce the probability measures $\mu^N$ and $\mu$ defined by

$$\mu^N(B) = \mathbf{P}\big\{\omega \in \Omega \;\big|\; X^N(\omega) \in B\big\} \quad \text{and} \quad \mu(B) = \mathbf{P}\big\{\omega \in \Omega \;\big|\; X(\omega) \in B\big\}$$

for any $B$ in $\mathcal{B}(C[0,1])$. More specially, the measure $\mu$ is called the Wiener measure of the Gaussian Markov process $X$. Assuming a general cylinder set to be

$$C^{t_1,\cdots,t_n}_{B_1,\cdots,B_n} = \big\{x \in C[0,1] \;\big|\; x(t_k) \in B_k,\; 0 < k \le n\big\},$$

the measure $\mu$ can be characterized by

$$\mu\big(C^{t_1,\cdots,t_n}_{B_1,\cdots,B_n}\big) = \int_{B_1}\cdots\int_{B_n} p(x_1, t_1 \mid 0, 0)\, \cdots\, p(x_n, t_n \mid x_{n-1}, t_{n-1})\; dx_{t_1} \cdots dx_{t_n}.$$

Through the induced measures $\mu^N$ and $\mu$, we want to study the convergence of the process $X^N$ toward $X$ from a probabilistic point of view. In that respect, the most general form of convergence one might expect is the weak convergence. We recall that $\mu^N$ is weakly convergent to $\mu$ if and only if, for any bounded continuous function $\phi$ on $C[0,1]$, we have

$$\lim_{N\to\infty} \int_C \phi(x)\, d\mu^N(x) = \int_C \phi(x)\, d\mu(x),$$

where $C$ is a short notation for $C[0,1]$.

6.3. The Convergence in Distribution

We can now translate on the Wiener space the much desirable property that the continuous processes $X^N$ converge in distribution toward the Gaussian Markov process $X$, which we denote

$$X^N \xrightarrow{\;\mathcal{D}\;} X.$$

We recall that, by definition, XN converges in distribution to X if and only if for any bounded continuous function φ in C[0, 1] we have

$$\lim_{N\to\infty} \mathbf{E}_N\big(\phi(X^N)\big) = \mathbf{E}\big(\phi(X)\big),$$

where $\mathbf{E}_N$ and $\mathbf{E}$ are the expectations with respect to $P^N$ and $P$ respectively. With the definitions made in the previous section, the convergence in distribution of the representation $X^N$ is rigorously equivalent to the weak convergence of the measures $\mu^N$ toward $\mu$ on the Wiener space $(C[0,1], \mathcal{B}(C[0,1]))$. We will show this point in section 8 following the usual two-step reasoning inspired by the Prohorov theorem (1). To show the weak convergence of the sequence of measures $\mu^N$, it is enough to prove the two statements:

1. For every integer $k > 0$ and reals $0 \le t_1 < \cdots < t_k \le 1$, the random vectors $(X^N_{t_1}, \cdots, X^N_{t_k})$ converge in distribution toward $(X_{t_1}, \cdots, X_{t_k})$.
2. For any $\eta > 0$, there exists a compact $K \subset C[0,1]$ such that $\mu^N(K) \ge 1 - \eta$ for every $N$.

Before establishing these two criteria, we first need to compute the limit of the covariance of $X^N$ when $N$ tends to infinity. This calculation is the crucial point to validate our representation and we will carefully detail it in the following section.

7. The Covariance Calculation

7.1. Definition of the Auxiliary Basis

Let us remember that the Gaussian Markov process $X$ admits a Doob representation (2). If we posit appropriate regularity properties for $f$ and $g$, it is straightforward to see that, by the Itô formula, such a process is a solution of the stochastic differential equation

$$d\left(\frac{X_t}{g(t)}\right) = f(t)\, dW_t.$$

It is then tempting to inject the proposed basis element Ψn,k in the previous equation and to consider the functions Φn,k defined as

$$\Phi_{n,k}(t) = \frac{1}{f(t)}\left[\frac{d}{du}\left(\frac{\Psi_{n,k}(u)}{g(u)}\right)\right]_{u=t}.$$

The functions $\Phi_{n,k}$ are actually well-defined despite the division by $f$, which is potentially zero, since calculations show that the $\Phi_{n,k}$ are given explicitly for $n > 0$ by

$$\Phi_{n,k}(t) = \begin{cases} L_{n,k}\, f(t) & \text{if } l_{n,k} \le t < m_{n,k}, \\ -R_{n,k}\, f(t) & \text{if } m_{n,k} \le t < r_{n,k}, \\ 0 & \text{otherwise}, \end{cases} \tag{9}$$

and for $n = 0$ by

$$\Phi_{0,0}(t) = \frac{f(t)}{\sqrt{h(r_{0,0}) - h(l_{0,0})}}.$$

We will show that the family of functions $\Phi_{n,k}$ is a Hilbert system of a subspace of $L^2(0,1)$, a property that will prove useful to compute, for any $t$ and $s$ in $[0,1]$, the limit of the covariance $\mathbf{E}(X^N_t \cdot X^N_s)$ when $N$ tends to infinity. The consideration of this result will also enable us to interpret the coefficients $L_{n,k}$ and $R_{n,k}$.

7.2. Characterization as a Hilbert System

From now on, we denote by $(f,g)$ the usual inner product of $f$ and $g$ in $L^2(0,1)$. Let us introduce $\mathcal{E}_f$, the closure of the vectorial space defined as

$$\mathcal{E}_f = \big\{\phi \in L^2(0,1) \;\big|\; \exists\, \varphi \in L^2(0,1),\; \phi = f \cdot \varphi\big\}.$$

With the usual inner product, the space $\mathcal{E}_f$ inherits the structure of a Hilbert space from $L^2(0,1)$ for being a closed subspace of a Hilbert space. It is immediate to see that the $\Phi_{n,k}$ belong to $\mathcal{E}_f$ when written as (9). We need to prove that the $\Phi_{n,k}$ constitute an orthonormal family for the usual inner product and that the vectorial space of their finite linear combinations is dense in $\mathcal{E}_f$.
Let us start with the orthonormality of the family and consider two functions $\Phi_{n,k}$ and $\Phi_{n',k'}$ with $(n,k) \neq (n',k')$. If $n = n'$ and $k \neq k'$, $\Phi_{n,k}$ and $\Phi_{n',k'}$ have disjoint supports and their inner product is necessarily zero. Assuming that $n' > n$, $\Phi_{n,k}$ and $\Phi_{n',k'}$ have intersecting supports if and only if $S_{n',k'}$ is strictly included in $S_{n,k}$. Then, it is very useful to remark the nullity of the following inner product:

$$(f, \Phi_{n,k}) = L_{n,k} \int_{l_{n,k}}^{m_{n,k}} f^2(u)\, du - R_{n,k} \int_{m_{n,k}}^{r_{n,k}} f^2(u)\, du = \sqrt{\frac{\big(h(m_{n,k})-h(l_{n,k})\big)\big(h(r_{n,k})-h(m_{n,k})\big)}{h(r_{n,k})-h(l_{n,k})}} - \sqrt{\frac{\big(h(m_{n,k})-h(l_{n,k})\big)\big(h(r_{n,k})-h(m_{n,k})\big)}{h(r_{n,k})-h(l_{n,k})}} = 0,$$

which entails that $\Phi_{n,k}$ and $\Phi_{n',k'}$ are orthogonal for $n \neq n'$. As for the norm of the functions $\Phi_{n,k}$, we directly compute for $n > 0$

$$(\Phi_{n,k}, \Phi_{n,k}) = L_{n,k}^2 \int_{l_{n,k}}^{m_{n,k}} f^2(u)\, du + R_{n,k}^2 \int_{m_{n,k}}^{r_{n,k}} f^2(u)\, du = \frac{h(r_{n,k}) - h(m_{n,k})}{h(r_{n,k}) - h(l_{n,k})} + \frac{h(m_{n,k}) - h(l_{n,k})}{h(r_{n,k}) - h(l_{n,k})} = 1,$$

and it is straightforward to see that $(\Phi_{0,0}, \Phi_{0,0}) = 1$ for $n = 0$. Hence, we have proved that the collection of $\Phi_{n,k}$ forms an orthonormal family of functions in $\mathcal{E}_f$.
We still have to show that the linear combinations of the $\Phi_{n,k}$ generate a dense vectorial space in $\mathcal{E}_f$. We can easily be convinced of this point once we consider the family of functions $f \cdot H_{n,k}$, where the $H_{n,k}$ designate the Haar functions adapted to the supports $S_{n,k}$. The orthonormal system $H_{n,k}$ is dense in $L^2(0,1)$ as soon as condition (7) is satisfied. Then, as each $f \cdot H_{n,k}$ can be obtained by finite linear combination of the $\Phi_{n,k}$, we conclude that

$$\mathrm{Vect}\Big(\big\{\Phi_{n,k}\big\}_{n \ge 0,\; 0 \le k < 2^{n-1}}\Big) = \mathcal{E}_f.$$

The interpretation of the coefficients $R_{n,k}$ and $L_{n,k}$ is made conspicuous in this context. The natural surjective morphism of $L^2[0,1]$ onto $\mathcal{E}_f$ is the application $\phi \mapsto f \cdot \phi$. The family $f \cdot H_{n,k}$ in $\mathcal{E}_f$ is the image of the Haar basis $H_{n,k}$, but it is not a Hilbert system of $\mathcal{E}_f$. The coefficients $R_{n,k}$ and $L_{n,k}$ result from the orthonormalization of the family $f \cdot H_{n,k}$ to form a Hilbert system of $\mathcal{E}_f$.
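A quick numerical sanity check of this orthonormality can be run for the Ornstein-Uhlenbeck canonical pair $f(t) = e^{-\alpha t}$, $h(t) = (1 - e^{-2\alpha t})/(2\alpha)$; the quadrature grid and the sampled index pairs below are arbitrary choices of ours.

```python
import numpy as np

def phi(n, k, t, f, h):
    """Auxiliary Hilbert-system element Phi_{n,k}(t) of (9)."""
    t = np.asarray(t, dtype=float)
    l, m, r = (2 * k) * 2.0 ** -n, (2 * k + 1) * 2.0 ** -n, (2 * k + 2) * 2.0 ** -n
    hl, hm, hr = h(l), h(m), h(r)
    L = np.sqrt((hr - hm) / ((hr - hl) * (hm - hl)))
    R = np.sqrt((hm - hl) / ((hr - hl) * (hr - hm)))
    return (np.where((t >= l) & (t < m), L, 0.0)
            - np.where((t >= m) & (t < r), R, 0.0)) * f(t)

alpha = 1.0
f = lambda t: np.exp(-alpha * t)
h = lambda t: (1.0 - np.exp(-2.0 * alpha * t)) / (2.0 * alpha)
t = np.linspace(0.0, 1.0, 200001)
for (n1, k1), (n2, k2) in [((1, 0), (1, 0)), ((2, 1), (2, 1)), ((1, 0), (2, 1))]:
    ip = np.trapz(phi(n1, k1, t, f, h) * phi(n2, k2, t, f, h), t)
    print((n1, k1), (n2, k2), round(float(ip), 4))  # ~1, ~1, ~0
```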

7.3. Application of the Parseval Relation

Now that these preliminary remarks have been made, we can evaluate, for any $t$ and $s$ in $[0,1]$, the limit of the covariance $\mathbf{E}(X^N_t \cdot X^N_s)$ when $N$ tends to infinity. As the $\xi_{n,k}$ are independent Gaussian random variables of normal law $\mathcal{N}(0,1)$, we see that the covariance of $X^N$ is given by

$$\mathbf{E}\big(X^N_t \cdot X^N_s\big) = \sum_{n=0}^{N} \sum_{0 \le k < 2^{n-1}} \Psi_{n,k}(t) \cdot \Psi_{n,k}(s). \tag{10}$$

To compute the limit of the right-hand side in (10), we need to remark that the elements of the basis $\Psi_{n,k}$ and the functions of the auxiliary Hilbert system $\Phi_{n,k}$ are linked by the following relation:

$$\Psi_{n,k}(t) = g(t) \int_0^1 \chi_{[0,t]}(u)\left[\frac{d}{dv}\left(\frac{\Psi_{n,k}(v)}{g(v)}\right)\right]_{v=u} du = \int_0^1 \chi_{[0,t]}(u)\; g(t) f(u)\, \Phi_{n,k}(u)\, du. \tag{11}$$

In the previous expression, we use the indicator function of the segment $[0,t]$ defined as

$$\chi_{[0,t]}(u) = \begin{cases} 1 & \text{if } 0 \le u \le t, \\ 0 & \text{otherwise.} \end{cases}$$

In order to simplify the notations, we now introduce the functions $\eta_t$, elements of the Hilbert space $\mathcal{E}_f$:

$$\eta_t(u) = \chi_{[0,t]}(u)\; g(t) f(u).$$

With the help of the functions $\eta_t$, we can then write the integral definition (11) of the basis element $\Psi_{n,k}$ as the inner product in $\mathcal{E}_f$:

$$\Psi_{n,k}(t) = (\eta_t, \Phi_{n,k}).$$

Remembering that the family of functions $\Phi_{n,k}$ is a Hilbert system of $\mathcal{E}_f$, we can make use of the Parseval identity, which reads

$$(\eta_s, \eta_t) = \sum_{n=0}^{\infty} \sum_{0 \le k < 2^{n-1}} (\eta_s, \Phi_{n,k}) \cdot (\eta_t, \Phi_{n,k}). \tag{12}$$

Thanks to this relation, we can conclude the evaluation of the covariance of $X$, since a direct explicitation of (12) yields

$$\int_0^1 \chi_{[0,t]}(u)\, g(t) f(u) \cdot \chi_{[0,s]}(u)\, g(s) f(u)\; du = \sum_{n=0}^{\infty} \sum_{0 \le k < 2^{n-1}} \Psi_{n,k}(t) \cdot \Psi_{n,k}(s). \tag{13}$$

The left term in (13) precisely happens to be the same as the covariance of X given by relation (3) and we recap the statement by saying

$$\lim_{N\to\infty} \mathbf{E}\big(X^N_t \cdot X^N_s\big) = g(s)g(t) \cdot h\big(\min(t,s)\big) = \mathbf{E}(X_t \cdot X_s).$$

In section 8, we will use this relation to show that the random vector $(X^N_{t_1}, X^N_{t_2}, \cdots, X^N_{t_k})$ converges in distribution toward $(X_{t_1}, X_{t_2}, \cdots, X_{t_k})$ for any reals $0 \le t_1 < t_2 < \cdots < t_k \le 1$.
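The limit can also be checked numerically: truncating the Parseval sum at increasing $N$ for the Ornstein-Uhlenbeck pair $g(t) = e^{\alpha t}$, $h(t) = (1 - e^{-2\alpha t})/(2\alpha)$, the partial covariance approaches $g(t)g(s)\, h(\min(t,s))$. The sketch below assumes the psi and psi00 helpers of section 5.1 are in scope.

```python
import numpy as np

alpha = 1.0
g = lambda t: np.exp(alpha * np.asarray(t, dtype=float))
h = lambda t: (1.0 - np.exp(-2.0 * alpha * np.asarray(t, dtype=float))) / (2.0 * alpha)

def rho_N(N, t, s):
    """Truncated covariance sum of (10) at points t and s."""
    t, s = np.array([t]), np.array([s])
    total = psi00(t, g, h)[0] * psi00(s, g, h)[0]
    for n in range(1, N + 1):
        for k in range(2 ** (n - 1)):
            total += psi(n, k, t, g, h)[0] * psi(n, k, s, g, h)[0]
    return total

t, s = 0.3, 0.7
print("exact:", float(g(t) * g(s) * h(min(t, s))))
for N in (2, 5, 8, 11):
    print(N, float(rho_N(N, t, s)))
```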

8. The Convergence of the Representation

8.1. Formulation of the two Sufficient Criteria

We are now in a position to proceed to the demonstration of the weak convergence of the induced measures $\mu^N$ toward $\mu$. This will prove the convergence in distribution of the representation

$$X^N \xrightarrow{\;\mathcal{D}\;} X,$$

and validate the fact that the limit process $\overline{X}$ is an exact representation of $X$. We just need to verify that our representation satisfies the two criteria mentioned in section 6.
First, we show that the family of measures $\mu^N$ fulfills the tightness condition, using a characterization of tightness on $C[0,1]$. This characterization is a version of the Arzelà-Ascoli theorem. It is formulated in terms of $\mathrm{mod}(x,\delta)$, the $\delta$-modulus of continuity of a function $x$ in $C[0,1]$:

$$\forall\, \delta > 0, \quad \mathrm{mod}(x, \delta) = \max_{|t-s| \le \delta} \big|x(t) - x(s)\big|.$$

Bearing in mind that we only consider the subset of functions x in C[0, 1] for which x(0)=0, the characterization reads

$$\lim_{\delta \to 0}\; \sup_{N \in \mathbb{N}} \mu^N\Big(\big\{x \in C[0,1] \;\big|\; \mathrm{mod}(x,\delta) > \epsilon\big\}\Big) = 0. \tag{14}$$

Then, for any reals $0 \le t_1 < t_2 < \cdots < t_r \le 1$, we show the convergence in distribution of the random vectors $(X^N_{t_1}, X^N_{t_2}, \cdots, X^N_{t_r})$:

$$\big(X^N_{t_1}, X^N_{t_2}, \cdots, X^N_{t_r}\big) \xrightarrow{\;\mathcal{D}\;} \big(X_{t_1}, X_{t_2}, \cdots, X_{t_r}\big).$$

Dealing with finite-dimensional vectors, we can use the Cramér-Wold device (8): if we designate by $\mathcal{C}^N$ and $\mathcal{C}$ the characteristic functions of $(X^N_{t_1}, X^N_{t_2}, \cdots, X^N_{t_r})$ and $(X_{t_1}, X_{t_2}, \cdots, X_{t_r})$ defined on $\mathbb{R}^r$, we just have to show the point-wise convergence of the characteristic functions $\mathcal{C}^N$ toward $\mathcal{C}$.

8.2. The Tightness of the Induced Family of Distributions

We will confirm that the family of induced measures $\mu^N$ satisfies the tightness criterion (14) on $C[0,1]$. First, define the random variables

$$b_n = \max_{0 \le k < 2^{n-1}} |\xi_{n,k}|.$$

We apply the usual Borel-Cantelli lemma and state: there exists a set $\Omega'$ with $\mathbf{P}(\Omega') = 1$ such that for every $\omega$ in $\Omega'$, there is an integer $n(\omega)$ such that for all $n > n(\omega)$ we have $b_n(\omega) \le n$. Let us set

$$\Omega'_n = \{\omega \in \Omega' \mid n(\omega) \le n\}.$$

It clearly defines an increasing sequence of sets $\Omega'_0 \subset \Omega'_1 \subset \cdots \subset \Omega'_n$ with $\lim_{n\to\infty}\mathbf{P}(\Omega'_n) = 1$. For any $\eta > 0$, there is an $n(\eta)$ such that for all $n > n(\eta)$, we have $\mathbf{P}(\Omega'_n) > 1 - \eta/2$. Then, considering the existence of the upper bound (8), for every $\omega$ in $\Omega'_n$ with $N > n(\eta)$, we can write

$$\big|X^N_t(\omega) - X^N_s(\omega)\big| \;\le\; \sum_{n=0}^{n(\eta)} \sum_{0 \le k < 2^{n-1}} \big|\Psi_{n,k}(t) - \Psi_{n,k}(s)\big| \cdot |\xi_{n,k}(\omega)| \;+\; \sum_{n=n(\eta)+1}^{N} 2n \cdot 2^{-\frac{n+1}{2}} \cdot \|g\|_\infty \cdot \|f\|_\infty. \tag{15}$$

The previous inequality actually holds for any $N > 0$ with the convention of setting the elements $\Psi_{n,k}$ to zero when $n > N$. For all $\epsilon > 0$ and all $\eta > 0$, there exists an integer $n(\epsilon,\eta) > n(\eta)$ such that

$$\sum_{n=n(\epsilon,\eta)+1}^{\infty} 2n \cdot 2^{-\frac{n+1}{2}} \cdot \|g\|_\infty \cdot \|f\|_\infty \;\le\; \frac{\epsilon}{2}.$$

Now, for any given $\epsilon > 0$ and $\eta > 0$, we choose a real $M(\epsilon,\eta) > 0$ large enough so that for any $\omega$ in $\Omega$

$$\mathbf{P}\Big(\max_{n \le n(\epsilon,\eta)} b_n(\omega) > M(\epsilon,\eta)\Big) \le \frac{\eta}{2},$$

and we finally define the set $\Omega_{\epsilon,\eta}$ as

$$\Omega_{\epsilon,\eta} = \Big\{\omega \in \Omega'_{n(\epsilon,\eta)} \;\Big|\; \max_{n \le n(\epsilon,\eta)} b_n(\omega) \le M(\epsilon,\eta)\Big\}.$$

Every element $\Psi_{n,k}$ is a continuous function defined on a compact support $S_{n,k}$ in $[0,1]$. As a result, the finite set of functions $\Psi_{n,k}$ for $n \le n(\epsilon,\eta)$ is uniformly equicontinuous: for any given $\epsilon > 0$ and $\eta > 0$, there is a real $\delta(\epsilon,\eta) > 0$ such that

$$\forall\, t,s \in [0,1], \quad |t-s| \le \delta(\epsilon,\eta) \;\Longrightarrow\; \big|\Psi_{n,k}(t) - \Psi_{n,k}(s)\big| \le \frac{\epsilon}{2^{\,n(\epsilon,\eta)+1}\, M(\epsilon,\eta)} \quad \text{for all } n,k \text{ with } 0 \le n \le n(\epsilon,\eta),\; 0 \le k < 2^{n-1}.$$

For all $\epsilon > 0$ and all $\eta > 0$, writing the inequality (15) on $\Omega_{\epsilon,\eta}$ yields

$$\forall\, \omega \in \Omega_{\epsilon,\eta}, \quad \delta < \delta(\epsilon,\eta) \;\Longrightarrow\; \forall\, N \in \mathbb{N}, \quad \mathrm{mod}\big(X^N(\omega), \delta\big) \le \epsilon.$$

As we have defined $\Omega_{\epsilon,\eta}$ so that $\mathbf{P}(\Omega_{\epsilon,\eta}) \ge 1 - \eta$, we have shown that

$$\forall\, \epsilon, \eta > 0, \quad \delta < \delta(\epsilon,\eta) \;\Longrightarrow\; \sup_{N \in \mathbb{N}} \mathbf{P}\Big(\mathrm{mod}\big(X^N(\omega), \delta\big) > \epsilon\Big) \le \eta,$$

which proves the tightness criterion on the Wiener space $C[0,1]$ for the family of continuous processes $X^N$.

8.3. Point-wise Convergence of the Characteristic Functions

For any $(\lambda_1, \lambda_2, \cdots, \lambda_r)$ in $\mathbb{R}^r$, evaluating $\mathcal{C}^N$, the characteristic function of $(X^N_{t_1}, X^N_{t_2}, \cdots, X^N_{t_r})$, yields

$$\mathcal{C}^N(\lambda_1, \lambda_2, \cdots, \lambda_r) = \int_C e^{i \sum_{p=1}^r \lambda_p x(t_p)}\, d\mu^N(x).$$

We use the definition of the partial sum $X^N$ in terms of the basis elements $\Psi_{n,k}$ to make explicit the calculation of $\mathcal{C}^N$ on $C^N$, the space of admissible functions for $X^N$:

$$\mathcal{C}^N(\lambda_1, \lambda_2, \cdots, \lambda_r) = \int_{C^N} e^{i \sum_{p=1}^r \lambda_p X^N_{t_p}}\, \mathbf{P}(dX^N).$$

Remember that each coefficient $\xi_{n,k}$ is independently distributed according to a normal law $\mathcal{N}(0,1)$ in the representation of $X^N$. We then have

$$\mathcal{C}^N(\lambda_1, \lambda_2, \cdots, \lambda_r) = \prod_{n=0}^{N} \prod_{0 \le k < 2^{n-1}} \int_{\mathbb{R}} e^{i \xi_{n,k} \sum_p \lambda_p \Psi_{n,k}(t_p)}\, \mathbf{P}(d\xi_{n,k}).$$

Therefore, we can compute each term of the previous product:

$$\int_{\mathbb{R}} e^{i \xi_{n,k} \left(\sum_p \lambda_p \Psi_{n,k}(t_p)\right)}\, \mathbf{P}(d\xi_{n,k}) = \int_{\mathbb{R}} e^{i \xi_{n,k} \left(\sum_p \lambda_p \Psi_{n,k}(t_p)\right)}\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{\xi_{n,k}^2}{2}}\, d\xi_{n,k} = \exp\left(-\frac{1}{2}\Big(\sum_p \lambda_p \Psi_{n,k}(t_p)\Big)^2\right).$$

Back to the formulation of $\mathcal{C}^N$, we end up with the analytical expression

$$\mathcal{C}^N(\lambda_1, \lambda_2, \cdots, \lambda_r) = \exp\left(-\frac{1}{2} \sum_p \sum_q \lambda_p \lambda_q\, \rho^N(t_p, t_q)\right),$$

where we have used the short notation $\rho^N$ for the covariance of $X^N$:

$$\rho^N(t,s) = \mathbf{E}(X^N_t \cdot X^N_s) = \sum_{n=0}^{N} \sum_{0 \le k < 2^{n-1}} \Psi_{n,k}(t) \cdot \Psi_{n,k}(s).$$

With the covariance calculation of section 7, we have demonstrated that for $t$, $s$ in $[0,1]$

$$\lim_{N\to\infty} \rho^N(t,s) = \mathbf{E}(X_t \cdot X_s) = g(t)g(s) \cdot h\big(\min(t,s)\big) = \rho(t,s).$$

For any $(\lambda_1, \lambda_2, \cdots, \lambda_r)$ in $\mathbb{R}^r$, it directly entails the point-wise convergence of $\mathcal{C}^N$:

$$\lim_{N\to\infty} \mathcal{C}^N(\lambda_1, \lambda_2, \cdots, \lambda_r) = \exp\left(-\frac{1}{2}\sum_p \sum_q \lambda_p \lambda_q\, \rho(t_p, t_q)\right) \;\stackrel{\text{def}}{=}\; \mathcal{C}(\lambda_1, \lambda_2, \cdots, \lambda_r),$$

where we remark that $\mathcal{C}$ is the characteristic function of the random vector

$(X_{t_1}, X_{t_2}, \cdots, X_{t_r})$. By the Cramér-Wold device, the point-wise convergence of $\mathcal{C}^N$ toward $\mathcal{C}$ proves

the convergence in distribution of the finite-dimensional random vectors $(X^N_{t_1}, X^N_{t_2}, \cdots, X^N_{t_r})$ toward the random vector $(X_{t_1}, X_{t_2}, \cdots, X_{t_r})$:

$$\big(X^N_{t_1}, X^N_{t_2}, \cdots, X^N_{t_r}\big) \xrightarrow{\;\mathcal{D}\;} \big(X_{t_1}, X_{t_2}, \cdots, X_{t_r}\big).$$

It also proves that $\overline{X}$ is an exact representation of the Gaussian Markov process $X$ since, as the almost sure path-wise limit of $X^N$ when $N$ tends to infinity, $\overline{X}$ follows the same law as the process $X$. As we have already established the tightness of the family of induced distributions $\mu^N$, this is enough to prove the weak convergence of $\mu^N$ toward $\mu$ on the Wiener space, which is equivalent to the convergence in distribution of the continuous processes $X^N$ toward $X$.

Appendix A: Gaussian calculation

We want to establish the analytical results given in section 3.3. Assuming $t_x < t_y < t_z$, we want to compute $q(X_{t_y} = y \mid X_{t_x} = x, X_{t_z} = z)$. We use the Markov property as mentioned in the third section

$$q(X_{t_y} = y \mid X_{t_x} = x, X_{t_z} = z) = \frac{p(X_{t_y} = y \mid X_{t_x} = x) \cdot p(X_{t_z} = z \mid X_{t_y} = y)}{p(X_{t_z} = z \mid X_{t_x} = x)}$$

to get its analytical expression in terms of $g$ and $h$:

$$\frac{\dfrac{1}{g(t_y)\sqrt{2\pi}\,\sigma_{x,y}} \exp\left(-\dfrac{\left(\frac{y}{g(t_y)}-\frac{x}{g(t_x)}\right)^2}{2\,\sigma_{x,y}^2}\right) \cdot \dfrac{1}{g(t_z)\sqrt{2\pi}\,\sigma_{y,z}} \exp\left(-\dfrac{\left(\frac{z}{g(t_z)}-\frac{y}{g(t_y)}\right)^2}{2\,\sigma_{y,z}^2}\right)}{\dfrac{1}{g(t_z)\sqrt{2\pi}\,\sigma_{x,z}} \exp\left(-\dfrac{\left(\frac{z}{g(t_z)}-\frac{x}{g(t_x)}\right)^2}{2\,\sigma_{x,z}^2}\right)}, \tag{16}$$

with the expressions of the variance of $X_t$ between any two consecutive points

$$\sigma_{x,y}^2 = h(t_y) - h(t_x), \qquad \sigma_{y,z}^2 = h(t_z) - h(t_y), \qquad \sigma_{x,z}^2 = h(t_z) - h(t_x). \tag{17}$$

After factorization of the exponentials, the resulting exponent of expression (16) is written

$$-\frac{1}{2\sigma_{x,y}^2}\left(\frac{y}{g(t_y)}-\frac{x}{g(t_x)}\right)^2 - \frac{1}{2\sigma_{y,z}^2}\left(\frac{z}{g(t_z)}-\frac{y}{g(t_y)}\right)^2 + \frac{1}{2\sigma_{x,z}^2}\left(\frac{z}{g(t_z)}-\frac{x}{g(t_x)}\right)^2 =$$
$$-\underbrace{\frac{1}{2\,g^2(t_y)}\left(\frac{1}{\sigma_{x,y}^2}+\frac{1}{\sigma_{y,z}^2}\right)}_{C_y}\, y^2 + \frac{1}{g(t_y)}\left(\frac{x}{g(t_x)\,\sigma_{x,y}^2}+\frac{z}{g(t_z)\,\sigma_{y,z}^2}\right) y - \frac{x^2}{2\,g^2(t_x)\,\sigma_{x,y}^2} - \frac{z^2}{2\,g^2(t_z)\,\sigma_{y,z}^2} + \frac{1}{2\sigma_{x,z}^2}\left(\frac{z}{g(t_z)}-\frac{x}{g(t_x)}\right)^2.$$

We then factorize the term $C_y$ so that we can write the exponent in the form

$$-\frac{1}{2\,g^2(t_y)}\cdot\frac{\sigma_{x,z}^2}{\sigma_{x,y}^2\,\sigma_{y,z}^2}\left(y^2 - 2\left(\frac{g(t_y)}{g(t_x)}\cdot\frac{\sigma_{y,z}^2}{\sigma_{x,z}^2}\cdot x + \frac{g(t_y)}{g(t_z)}\cdot\frac{\sigma_{x,y}^2}{\sigma_{x,z}^2}\cdot z\right) y\right) + \frac{1}{2\sigma_{x,z}^2}\left(\frac{z}{g(t_z)}-\frac{x}{g(t_x)}\right)^2 - \frac{x^2}{2\,g^2(t_x)\,\sigma_{x,y}^2} - \frac{z^2}{2\,g^2(t_z)\,\sigma_{y,z}^2}$$

to obtain the canonical expression

$$-\frac{1}{2\,g^2(t_y)}\cdot\frac{\sigma_{x,z}^2}{\sigma_{x,y}^2\,\sigma_{y,z}^2}\left(y - \frac{g(t_y)}{g(t_x)}\cdot\frac{\sigma_{y,z}^2}{\sigma_{x,z}^2}\cdot x - \frac{g(t_y)}{g(t_z)}\cdot\frac{\sigma_{x,y}^2}{\sigma_{x,z}^2}\cdot z\right)^2 + \frac{1}{2\sigma_{x,z}^2}\left(\frac{z}{g(t_z)}-\frac{x}{g(t_x)}\right)^2$$
$$+ \underbrace{\frac{1}{2\,g^2(t_y)}\cdot\frac{\sigma_{x,z}^2}{\sigma_{x,y}^2\,\sigma_{y,z}^2}\left(\frac{g(t_y)}{g(t_x)}\cdot\frac{\sigma_{y,z}^2}{\sigma_{x,z}^2}\cdot x + \frac{g(t_y)}{g(t_z)}\cdot\frac{\sigma_{x,y}^2}{\sigma_{x,z}^2}\cdot z\right)^2 - \frac{x^2}{2\,g^2(t_x)\,\sigma_{x,y}^2} - \frac{z^2}{2\,g^2(t_z)\,\sigma_{y,z}^2}}_{Q}.$$

The quantity $Q$ can be further simplified after being expanded:

$$Q = \frac{x^2}{2\,g^2(t_x)}\left(\frac{\sigma_{y,z}^2}{\sigma_{x,y}^2\,\sigma_{x,z}^2} - \frac{1}{\sigma_{x,y}^2}\right) + \frac{xz}{g(t_x)g(t_z)}\cdot\frac{1}{\sigma_{x,z}^2} + \frac{z^2}{2\,g^2(t_z)}\left(\frac{\sigma_{x,y}^2}{\sigma_{y,z}^2\,\sigma_{x,z}^2} - \frac{1}{\sigma_{y,z}^2}\right)$$
$$= -\frac{x^2}{2\,g^2(t_x)\,\sigma_{x,z}^2} + \frac{xz}{g(t_x)g(t_z)\,\sigma_{x,z}^2} - \frac{z^2}{2\,g^2(t_z)\,\sigma_{x,z}^2} = -\frac{1}{2\sigma_{x,z}^2}\left(\frac{z}{g(t_z)} - \frac{x}{g(t_x)}\right)^2.$$

All terms except quadratic ones in $y$ cancel out in the exponent, giving

$$q(X_{t_y} = y \mid X_{t_x} = x, X_{t_z} = z) = \frac{1}{\sqrt{2\pi}\; g(t_y)\, \dfrac{\sigma_{x,y}\sigma_{y,z}}{\sigma_{x,z}}}\, \exp\left(-\frac{\left(y - \dfrac{g(t_y)}{g(t_x)}\cdot\dfrac{\sigma_{y,z}^2}{\sigma_{x,z}^2}\cdot x - \dfrac{g(t_y)}{g(t_z)}\cdot\dfrac{\sigma_{x,y}^2}{\sigma_{x,z}^2}\cdot z\right)^2}{2\, g^2(t_y)\, \dfrac{\sigma_{x,y}^2\,\sigma_{y,z}^2}{\sigma_{x,z}^2}}\right). \tag{18}$$

We sum up the expression (18) by noticing that it represents the distribution of a normal law $\mathcal{N}\big(\mu_X(t_y), \sigma_X(t_y)^2\big)$, whose parameters read

$$\mu_X(t_y) = \frac{g(t_y)}{g(t_x)}\cdot\frac{\sigma_{y,z}^2}{\sigma_{x,z}^2}\cdot x + \frac{g(t_y)}{g(t_z)}\cdot\frac{\sigma_{x,y}^2}{\sigma_{x,z}^2}\cdot z = \frac{g(t_y)}{g(t_x)}\cdot\frac{h(t_z)-h(t_y)}{h(t_z)-h(t_x)}\cdot x + \frac{g(t_y)}{g(t_z)}\cdot\frac{h(t_y)-h(t_x)}{h(t_z)-h(t_x)}\cdot z,$$

$$\sigma_X(t_y)^2 = g^2(t_y)\cdot\frac{\sigma_{x,y}^2\,\sigma_{y,z}^2}{\sigma_{x,z}^2} = g^2(t_y)\cdot\frac{\big(h(t_y)-h(t_x)\big)\big(h(t_z)-h(t_y)\big)}{h(t_z)-h(t_x)}.$$

Appendix B: Induced measures

Finite-dimensional measures

We first recall the definition of the 2N -dimensional vectorial space CN defined as

$$C^N = \mathrm{Vect}\Big(\big\{\Psi_{n,k}\big\}_{0 \le n \le N,\; 0 \le k < 2^{n-1}}\Big).$$

We specify a given element $x$ of $C^N$ by writing

$$x(t) = \sum_{n=0}^{N} \sum_{0 \le k < 2^{n-1}} \Psi_{n,k}(t) \cdot a_{n,k} \quad \text{with } a_{n,k} \in \mathbb{R},$$

and we remark that $x$ can be viewed as a sample path of the process $X^N$ if we posit $\xi_{n,k}(\omega) = a_{n,k}$. We then introduce the following notations

$$[0,0] = 0, \qquad [n,k] = 2^{n-1} + k \quad \text{with} \quad [0,1] = 2^N$$

to enumerate indices between $0$ and $2^N$ in the prefix order. With this convention, we introduce the positive reals $0 = t_0 < t_1 < \cdots < t_{2^N} = 1$ defined as

$$t_{[n,k]} = m_{n,k} = (2k+1)\, 2^{-n} \quad \text{for } 0 < n \le N,\; 0 \le k < 2^{n-1}.$$

We want to show that

$$\mathbf{P}\big(x \in C_{B_0, B_1, \cdots, B_{2^N}}\big) = \int_{B_1}\cdots\int_{B_{2^N}} p^N(x_1, \cdots, x_{2^N})\; dx_1 \cdots dx_{2^N}$$

and that $p^N$ has the following simple expression in terms of the transition kernel $p$ associated with the Gaussian Markov process $X$:

$$p^N(x_1, \cdots, x_{2^N}) = \prod_{k=0}^{2^N-1} p\big((x_{k+1}, t_{k+1}) \mid (x_k, t_k)\big). \tag{19}$$

We start by noticing that, by construction of $X^N$, we have

$$\mathbf{P}(\xi_{n,k} \in da_{n,k}) = \mathbf{P}\Big(X_{m_{n,k}} - \mu_X(m_{n,k}) \in dx_{[n,k]} \;\Big|\; X_{l_{n,k}} = x_{[n-1,2k]},\, X_{r_{n,k}} = x_{[n-1,2k+1]}\Big)$$

with the usual definitions for $l_{n,k}$, $r_{n,k}$ and $m_{n,k}$, and with $dx_{[n,k]} = \sigma_{n,k} \cdot da_{n,k}$. As the Gaussian variables $\xi_{n,k}$ are all independent in the definition of $X^N$ and remembering the definition of the probability density $q$ in section 3.3, we define $p^N$ for $N \ge 0$ as

$$p^N(x_1, \cdots, x_{2^N}) = p\big((x_{2^N}, r_{1,0}) \mid (x_0, l_{1,0})\big)\; q\big((x_{[1,0]}, m_{1,0}) \mid (x_0, l_{1,0}), (x_{2^N}, r_{1,0})\big) \cdots \prod_{k=0}^{2^{N-1}-1} q\big((x_{[N,k]}, m_{N,k}) \mid (x_{[N-1,2k]}, l_{N,k}), (x_{[N-1,2k+1]}, r_{N,k})\big),$$

and we will show by recurrence on $N$ that $p^N$ has the expected expression (19). The base statement is obvious for $N = 0$. As for the inductive step, let us write the probability density $p^{N+1}$ as

$$p^{N+1}(x_1, \cdots, x_{2^{N+1}}) = p^N(x_1, \cdots, x_{2^N})\, \prod_{k=0}^{2^N-1} q\big((x_{[N+1,k]}, m_{N+1,k}) \mid (x_{[N,2k]}, l_{N+1,k}), (x_{[N,2k+1]}, r_{N+1,k})\big).$$

By our recurrence hypothesis, we then have

$$p^{N+1}(x_1, \cdots, x_{2^{N+1}}) = \prod_{k=0}^{2^N-1} p\big((x_{k+1}, t_{k+1}) \mid (x_k, t_k)\big)\; \prod_{k=0}^{2^N-1} q\big((x_{[N+1,k]}, m_{N+1,k}) \mid (x_{[N,2k]}, l_{N+1,k}), (x_{[N,2k+1]}, r_{N+1,k})\big).$$

But by definition of $q$, we have

$$q\big((x_{[N+1,k]}, m_{N+1,k}) \mid (x_{[N,2k]}, l_{N+1,k}), (x_{[N,2k+1]}, r_{N+1,k})\big) = \frac{p\big((x_{[N,2k+1]}, r_{N+1,k}) \mid (x_{[N+1,k]}, m_{N+1,k})\big)\; p\big((x_{[N+1,k]}, m_{N+1,k}) \mid (x_{[N,2k]}, l_{N+1,k})\big)}{p\big((x_{[N,2k+1]}, r_{N+1,k}) \mid (x_{[N,2k]}, l_{N+1,k})\big)}.$$

The product of the denominators in the previous expressions cancels out the term pN in pN+1 so that we have proven that

$$p^{N+1}(x_1, \cdots, x_{2^{N+1}}) = \prod_{k=0}^{2^{N+1}-1} p\big((x_{k+1}, t_{k+1}) \mid (x_k, t_k)\big).$$

Characteristic Functionals

We want to express the characteristic functionals of the induced measures µN and of the Wiener measure µ. In view of this, we recall that, by the Riesz representation theorem, the dual space of C[0, 1] is the space M[0, 1] of finite measures on [0, 1]. For x in C[0, 1] and ν in M[0, 1], we write the duality product

$$\langle x, \nu \rangle = \int_0^1 x(t)\, d\nu(t).$$

Considering first the induced measure $\mu^N$, the characteristic functional $\mathcal{C}_N$ is defined on $M[0,1]$ as the Fourier transform of $\mu^N$ on $C[0,1]$:

$$\mathcal{C}_N(\nu) = \int_C e^{i\langle x, \nu\rangle}\, d\mu^N(x) \quad \text{with } \nu \in M[0,1].$$

By construction of the process $X^N$ and independence of the random variables $\xi_{n,k}$, we have

$$\mathcal{C}_N(\nu) = \int_{C^N} e^{i\langle X^N, \nu\rangle}\, \mathbf{P}(dX^N) = \prod_{n=0}^{N} \prod_{0 \le k < 2^{n-1}} \int_{\mathbb{R}} e^{i \xi_{n,k} \langle \Psi_{n,k}, \nu\rangle}\, \mathbf{P}(d\xi_{n,k}).$$

The random variables $\xi_{n,k}$ being Gaussian of law $\mathcal{N}(0,1)$, we furthermore have

$$\int_{\mathbb{R}} e^{i \xi_{n,k} \langle \Psi_{n,k}, \nu\rangle}\, \mathbf{P}(d\xi_{n,k}) = \int_{\mathbb{R}} e^{i \xi_{n,k} \langle \Psi_{n,k}, \nu\rangle}\, \frac{1}{\sqrt{2\pi}}\, e^{-\frac{\xi_{n,k}^2}{2}}\, d\xi_{n,k} = e^{-\frac{1}{2}\langle \Psi_{n,k}, \nu\rangle^2}.$$

This finally allows us to write the expression of the characteristic functional $\mathcal{C}_N$:

$$\mathcal{C}_N(\nu) = \exp\left(-\frac{1}{2} \sum_{n=0}^{N} \sum_{0 \le k < 2^{n-1}} \langle \Psi_{n,k}, \nu\rangle^2\right).$$

If we express the duality product, we can formulate the functional $\mathcal{C}_N$ in its common form (2):

$$\mathcal{C}_N(\nu) = \exp\left(-\frac{1}{2} \sum_{n=0}^{N} \sum_{0 \le k < 2^{n-1}} \int_0^1 \Psi_{n,k}(t)\, d\nu(t) \int_0^1 \Psi_{n,k}(s)\, d\nu(s)\right) = \exp\left(-\frac{1}{2} \int_0^1\!\!\int_0^1 \rho^N(t,s)\, d\nu(t)\, d\nu(s)\right),$$

where we made apparent $\rho^N$, the correlation function of the process $X^N$:

$$\rho^N(t,s) = \mathbf{E}(X^N_t \cdot X^N_s) = \sum_{n=0}^{N} \sum_{0 \le k < 2^{n-1}} \Psi_{n,k}(t) \cdot \Psi_{n,k}(s).$$

From the convergence results of our expansion, we directly obtain the characteristic functional $\mathcal{C}$ of the Wiener measure $\mu$:

$$\mathcal{C}(\nu) = \exp\left(-\frac{1}{2} \int_0^1\!\!\int_0^1 \rho(t,s)\, d\nu(t)\, d\nu(s)\right), \tag{20}$$

with $\rho$ the continuous correlation function of the Gaussian Markov process $X$:

$$\rho(t,s) = \mathbf{E}(X_t \cdot X_s) = g(t)g(s) \cdot h\big(\min(t,s)\big).$$

Now let $t_1 < t_2 < \cdots < t_k$ be some reals in $[0,1]$; we posit the measure $\nu$ as

$$\nu = \sum_{p=1}^{k} \lambda_p\, \delta_{t_p},$$

where $\delta_{t_p}$ denotes the Dirac distribution concentrated at $t_p$. If we inject the expression of $\nu$ in the result (20), we find the expression of the characteristic function of $(X_{t_1}, X_{t_2}, \cdots, X_{t_k})$, as one might expect.
Then, assuming the distribution $\nu$ admits a density $\theta$ in $L^2(0,1)$ with respect to the Lebesgue measure, we have

$$\langle \Psi_{n,k}, \nu \rangle = \int_0^1 \Psi_{n,k}(t)\, \theta(t)\, dt = (\Psi_{n,k}, \theta).$$

Thanks to the auxiliary orthonormal basis $\Phi_{n,k}$, we can further write

$$(\Psi_{n,k}, \theta) = \int_0^1 \underbrace{\frac{1}{f(t)}\left[\frac{d}{du}\left(\frac{\Psi_{n,k}(u)}{g(u)}\right)\right]_{u=t}}_{\Phi_{n,k}(t)}\, f(t)\left(\int_t^1 g(u)\,\theta(u)\, du\right) dt,$$

which directly leads to the following simple expression for the characteristic functional:

$$\mathcal{C}(\nu) = \exp\left(-\frac{1}{2}\int_0^1 \left(f(t)\int_t^1 g(s)\,\theta(s)\, ds\right)^2 dt\right).$$

References

[1] Billingsley, P. (1999). Convergence of Probability Measures, Second ed. Wiley Series in Probability and Statistics. John Wiley & Sons Inc., New York. MR1700749 (2000e:60008)

[2] Cameron, R. H. and Donsker, M. D. (1959). Inversion formulae for characteristic functionals of stochastic processes. Ann. of Math. (2) 69, 15–36. MR0108850 (21 #7562)
[3] Ciesielski, Z. (1961). Hölder conditions for realizations of Gaussian processes. Trans. Amer. Math. Soc. 99, 403–413. MR0132591 (24 #A2431)
[4] Doob, J. L. (1949). Heuristic approach to the Kolmogorov-Smirnov theorems. Ann. Math. Statistics 20, 393–403. MR0030732 (11,43a)
[5] Hida, T. (1960/1961). Canonical representations of Gaussian processes and their applications. Mem. Coll. Sci. Univ. Kyoto. Ser. A. Math. 33, 109–155. MR0119246 (22 #10012)
[6] Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology 24, 7, 498–520.
[7] Kac, M. and Siegert, A. J. F. (1947). An explicit representation of a stationary Gaussian process. Ann. Math. Statistics 18, 438–442. MR0021672 (9,97a)
[8] Karatzas, I. and Shreve, S. E. (1991). Brownian Motion and Stochastic Calculus, Second ed. Graduate Texts in Mathematics, Vol. 113. Springer-Verlag, New York. MR1121940 (92h:60127)
[9] Karhunen, K. (1946). Zur Spektraltheorie stochastischer Prozesse. Ann. Acad. Sci. Fennicae. Ser. A. I. Math.-Phys. 1946, 34, 7. MR0023012 (9,292h)
[10] Lévy, P. (1948). Processus Stochastiques et Mouvement Brownien. Suivi d'une note de M. Loève. Gauthier-Villars, Paris. MR0029120 (10,551a)
[11] Loève, M. (1963). Probability Theory. Third edition. D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto, Ont.-London. MR0203748 (34 #3596)
[12] Lumley, J. L. (1967). The structure of inhomogeneous turbulent flows. Atmospheric Turbulence and Radio Propagation, 166–178.
[13] Rogers, L. C. G. and Williams, D. (2000). Diffusions, Markov Processes, and Martingales. Vol. 1. Cambridge Mathematical Library. Cambridge University Press, Cambridge. MR1796539 (2001g:60188)
[14] Taillefumier, T. and Magnasco, M. A Haar-like construction for the Ornstein-Uhlenbeck process. Accepted in J. Stat. Phys.