UPPSALA DISSERTATIONS IN MATHEMATICS 86

Gaussian Bridges - Modeling and Inference

Maik Görgens

Department of Mathematics, Uppsala University, Uppsala 2014

Dissertation presented at Uppsala University to be publicly examined in Häggsalen, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, Friday, 7 November 2014 at 13:15 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Professor Mikhail Lifshits (St. Petersburg State University and Linköping University).

Abstract

Görgens, M. 2014. Gaussian Bridges - Modeling and Inference. Uppsala Dissertations in Mathematics 86. 32 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-506-2420-5.

This thesis consists of a summary and five papers, dealing with the modeling of Gaussian bridges and membranes and inference for the α-Brownian bridge. In Paper I we study continuous Gaussian processes conditioned on the event that certain functionals of their sample paths vanish. We deduce anticipative and non-anticipative representations for them. Generalizations to Gaussian random variables with values in separable Banach spaces are discussed. In Paper II we present a unified approach to the construction of generalized Gaussian random fields. Then we show how to extract different Gaussian processes, such as fractional Brownian motion, Gaussian bridges and their generalizations, and Gaussian membranes from them. In Paper III we study a simple decision problem on the scaling parameter in α-Brownian bridges. We generalize the Karhunen-Loève theorem and obtain the distribution of the involved likelihood ratio based on Karhunen-Loève expansions and Smirnov's formula. The presented approach is applied to a simple decision problem for Ornstein-Uhlenbeck processes as well. In Paper IV we calculate the bias of the maximum likelihood estimator for the scaling parameter and propose a bias-corrected estimator. We compare it with the maximum likelihood estimator and two alternative Bayesian estimators in a simulation study. In Paper V we solve an optimal stopping problem for the α-Brownian bridge. In particular, the limiting behavior as α tends to zero is discussed.

Maik Görgens, Department of Mathematics, Analysis and Probability Theory, Box 480, Uppsala University, SE-75106 Uppsala, Sweden.

© Maik Görgens 2014

ISSN 1401-2049
ISBN 978-91-506-2420-5
urn:nbn:se:uu:diva-232544 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-232544)

Für Bine und Milo

List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I M. Görgens. Conditioning of Gaussian processes and a zero area Brownian bridge. Manuscript.

II M. Görgens and I. Kaj. Gaussian processes, bridges and membranes extracted from selfsimilar random fields. Manuscript.

III M. Görgens. Inference for α-Brownian bridge based on Karhunen-Loève expansions. Submitted for publication.

IV M. Görgens and M. Thulin. Bias-correction of the maximum likelihood estimator for the α-Brownian bridge. Statistics and Probability Letters, 93, 78–86, 2014.

V M. Görgens. Optimal stopping of an α-Brownian bridge. Submitted for publication.

Reprints were made with permission from the publishers.

Contents

1 Introduction
   1.1 Gaussian processes
      1.1.1 The Brownian bridge
      1.1.2 Representation of Gaussian processes
      1.1.3 Series expansions of Gaussian processes
   1.2 Models for Gaussian bridges and membranes
      1.2.1 Generalized Gaussian bridges
      1.2.2 Gaussian selfsimilar random fields
   1.3 Inference for α-Brownian bridges
      1.3.1 Estimation
      1.3.2 Hypothesis testing
      1.3.3 Optimal stopping

2 Summary of Papers
   2.1 Paper I
   2.2 Paper II
   2.3 Paper III
   2.4 Paper IV
   2.5 Paper V

3 Summary in Swedish

Acknowledgements

References

1. Introduction

This is a thesis in the mathematical field of stochastics. This field is often divided into the areas probability theory, theoretical statistics, and stochastic processes. The boundaries between these areas are not sharp and the areas overlap. Paper I and Paper II contribute to the theory of stochastic modeling of Gaussian bridges and membranes and belong to the intersection of probability theory and stochastic processes, whereas in Papers III–V we study inference for a continuous time stochastic process, and those papers thus belong to the intersection of theoretical statistics and stochastic processes. Moreover, throughout the thesis we make use of functional analytic tools.

The term Gaussian bridges is to be understood in a broad sense. While originally introduced to describe Gaussian processes which attain a certain value at a specific time almost surely¹, it was later (with the prefix "generalized") used to denote Gaussian processes conditioned on the event that one or more functionals of the sample paths vanish². Paper I contributes to the theory of such generalized Gaussian bridges. In Paper II we present a general method to construct selfsimilar Gaussian random fields and study how to extract Gaussian processes, bridges, and membranes from them. In Papers III–V we consider another generalization of Gaussian bridges, the α-Brownian bridges, and study problems of inference for the scaling parameter α and optimal stopping of such bridges. In all five papers of this thesis the Brownian bridge occurs at least as a special case³.

In this first chapter we give a short introduction to Gaussian processes in general and to the topics studied in this thesis in particular. In Chapter 2 we summarize the included papers and in Chapter 3 we give an outline of the thesis in Swedish.

1.1 Gaussian processes

Among all probability distributions the normal distribution is of particular importance since, by the central limit theorem, sums of independent and identically distributed (i.i.d.) random variables with finite variance behave roughly like normal random variables.

¹ See for example [24].
² We refer to [1] and in particular to [43].
³ A plot of the Brownian bridge is given on the cover page.

The central limit theorem has its functional analogue as well: random walks $S = (S_n)_{n\in\mathbb{N}}$ of the form $S_n = \sum_{k=1}^n X_k$, where the $X_k$'s are i.i.d. random variables with finite variance, behave (suitably scaled) roughly like Brownian motion. The Brownian motion is the unique continuous stochastic process on the real line with i.i.d. and symmetric increments. It serves as a building block for many other Gaussian and non-Gaussian processes.

Gaussian processes are not only of particular importance, but also very accessible for investigation, since their finite dimensional distributions are solely determined by their mean and covariance functions and, moreover, Gaussian random variables are independent whenever they are orthogonal in the Hilbert space spanned by them. This importance and treatability makes the class of Gaussian processes an object of intensive study. Here we just mention the monographs [12], [27], [29], and [33].

1.1.1 The Brownian bridge

If we consider standard Brownian motion $W = (W_s)_{s\in\mathbb{R}}$ (i.e., Brownian motion scaled to fulfill $\mathbb{E}W_0 = 0$ and $\mathbb{E}W_1^2 = 1$) and tie it down to 0 at time 1 we obtain (restricted to the interval [0,1]) the Brownian bridge. As mentioned before, this process appears in all papers included in this thesis.

The Brownian bridge $B = (B_s)_{s\in[0,1]}$ is a continuous centered Gaussian process uniquely defined by its covariance function $\mathbb{E}B_sB_t = s(1-t)$ for $0 \le s \le t \le 1$. It is of particular importance in asymptotic statistics (cf. [23]): Given n i.i.d. random variables with a continuous distribution function F, consider their empirical distribution function $F_n$. By the law of large numbers, $F_n(s) \to F(s)$ almost surely as $n \to \infty$. In 1933, Glivenko [25] and Cantelli [15] showed that this convergence is uniform on the real line. Now, by the central limit theorem,
$$\sqrt{n}\,\big(F_n(s) - F(s)\big) \xrightarrow{d} B(F(s)), \quad \text{as } n \to \infty. \qquad (1.1)$$
Kolmogorov showed in [31] that⁴, as $n \to \infty$,
$$\sqrt{n}\,\sup_{s\in\mathbb{R}} |F_n(s) - F(s)| \xrightarrow{d} \sup_{s\in\mathbb{R}} |B(F(s))| = \sup_{0\le s\le 1} |B(s)|. \qquad (1.2)$$
Moreover, he proved that the law of the left hand side in (1.2) is independent of F and studied the distribution of the right hand side in (1.2) (now known as the Kolmogorov distribution). These results, together with the work of Smirnov in [41], form the theoretical foundation of the Kolmogorov-Smirnov goodness-of-fit tests.

⁴ All three papers [15], [25], and [31] were published in 1933 (with almost the same title) in the same issue of the Italian Giornale dell'Istituto Italiano degli Attuari.
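To make (1.2) concrete, the following minimal Python sketch (an illustration added here, not taken from the papers; sample size and seed are arbitrary) simulates n i.i.d. uniform random variables, computes the scaled statistic $\sqrt{n}\sup_s|F_n(s)-F(s)|$, and evaluates its tail probability under the Kolmogorov distribution, which SciPy provides as `kstwobign`.

```python
import numpy as np
from scipy.stats import kstwobign  # limiting Kolmogorov distribution in (1.2)

rng = np.random.default_rng(0)
n = 10_000
u = np.sort(rng.uniform(size=n))            # i.i.d. U(0,1) sample, so F(s) = s
upper = np.arange(1, n + 1) / n - u         # F_n(s+) - s at the jump points
lower = u - np.arange(0, n) / n             # s - F_n(s-) at the jump points
stat = np.sqrt(n) * max(upper.max(), lower.max())   # sqrt(n) * sup_s |F_n(s) - s|
print(f"statistic = {stat:.3f}, P(sup|B| > statistic) = {kstwobign.sf(stat):.3f}")
```

Over repeated runs under the uniform null hypothesis, the printed tail probability behaves (asymptotically) like a uniform p-value.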

1.1.2 Representation of Gaussian processes

After the aforementioned work of Kolmogorov and others, the Brownian bridge was studied in more detail. In doing so, it was fruitful to work with different representations of it. Introducing the Brownian bridge B as a Brownian motion conditioned to end at 0 at time 1 leads immediately to

$$B_s = W_s - sW_1, \quad 0 \le s \le 1. \qquad (1.3)$$

This representation is anticipative in the sense that, in order to compute $B_s$, we use $W_1$, a random variable "not available" at time $s < 1$.

An alternative representation of the Brownian bridge is given by the stochastic differential equation
$$dB_s = dW_s - \frac{B_s}{1-s}\,ds, \quad B_0 = 0, \quad 0 \le s < 1. \qquad (1.4)$$
The solution of (1.4) is
$$B_s = \int_0^s \frac{1-s}{1-x}\,dW_x, \quad 0 \le s < 1, \qquad (1.5)$$
and one has $\lim_{s\nearrow 1} B_s = 0$. Clearly, the stochastic processes defined by (1.3) and (1.5) are different (for example they induce different filtrations). However, they induce the same probability law on C([0,1]), the Banach space of continuous functions on [0,1] equipped with the supremum norm.

Another way to represent Gaussian processes is by means of series expansions, which we discuss in the following section.
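As a small added illustration (not part of the thesis), the sketch below simulates one Brownian path and builds a bridge both through the anticipative formula (1.3) and through an Euler discretization of the SDE (1.4); driven by the same noise, the two paths differ, but both are (approximately) Brownian bridges.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 1000
s = np.linspace(0, 1, m + 1)
dW = rng.normal(scale=np.sqrt(1 / m), size=m)
W = np.concatenate(([0.0], np.cumsum(dW)))

B_anticipative = W - s * W[-1]            # representation (1.3), uses W_1

B_sde = np.zeros(m + 1)                   # Euler scheme for the SDE (1.4)
for k in range(m):
    B_sde[k + 1] = B_sde[k] + dW[k] - B_sde[k] / (1 - s[k]) / m

# different paths, but both end (approximately) at 0
print(B_anticipative[-1], B_sde[-1])
```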

1.1.3 Series expansions of Gaussian processes

Gaussian processes $X = (X_s)_{s\in[0,T]}$ with continuous sample paths may be represented as a random series of the form
$$X_s \overset{d}{=} \sum_{n=1}^{\infty} \xi_n f_n(s), \quad 0 \le s \le T, \qquad (1.6)$$
where $(\xi_n)_{n=1}^{\infty}$ is a sequence of i.i.d. standard normal random variables. While there exist many such series representations we present two of them in more detail.

Operator generated processes

Given a separable Hilbert space H and a linear and bounded operator $u: H \to C([0,T])$ we can define a Gaussian process $X = (X_s)_{s\in[0,T]}$ via
$$X_s = \sum_{n=1}^{\infty} \xi_n (ue_n)(s), \qquad (1.7)$$

11 ( )∞ where en n=1 is an orthonormal basis in H. The convergence in (1.7) is almost surely. However, the null-set for which the right hand side does not converge depends in general on s and thus the convergence in (1.7) is in general not uniform. Now assume that X =(Xs)s∈[0,T] is a stochastic process with almost surely continuous paths. Then (see Theorem 3.5.1 in [12]) there exists a separable Hilbert space H and an operator u : H −→ C([0,T]) such that X can be written as in (1.7) almost surely. In particular, in this case the convergence is uniform for all s ∈ [0,T]. The operator u is called the generating operator (or the asso- ciated operator) of X (note that the different choices of u and H are equivalent only up to isomorphisms). The generating operator u : H −→ C([0,T]) encapsulates all information on the distribution of a Gaussian process X. In particular, changing the orthonor- mal basis in (1.7) does not change the distribution of X. In order to give an example we state that the Brownian bridge B on [0,1] 0([ , ]) −→ ([ , ]) 0([ , ]) is generated by the operator u : L2 0 1 C 0 1 , where L2 0 1 is the orthogonal complement of the function f (x) ≡ 1inL2([0,1]) and s ( )( )= ( ) , ≤ ≤ , ∈ 0([ , ]). ue s e x dx 0 s 1 e L2 0 1 0 √ { ( π ) ≥ } 0([ , ]) In particular, the orthonormal basis 2cos n x : n 1 in L2 0 1 yields the representation √ ∞ ( π ) = ξ sin n s . Bs 2 ∑ n π (1.8) n=1 n

Karhunen-Loève expansions

Another important series representation of a continuous Gaussian process $X = (X_s)_{s\in[0,T]}$ is given by its Karhunen-Loève expansion. Let R be the covariance function of X, $R(s,t) = \mathbb{E}X_sX_t$, and let μ be a finite measure on [0,T]. Let $L_2([0,T],\mu)$ be the space of square integrable measurable functions on [0,T] with respect to the measure μ, and define the covariance operator of X, $A_R: L_2([0,T],\mu) \to L_2([0,T],\mu)$, by
$$(A_R e)(t) = \int_0^T R(t,s)\,e(s)\,\mu(ds), \quad e \in L_2([0,T],\mu).$$

Then $A_R$ is a linear and bounded, compact and self-adjoint, and non-negative definite operator. Hence, its eigenvalues $(\lambda_n)_{n=1}^{\infty}$ are real and non-negative. Now, an application of a generalized version of the Karhunen-Loève Theorem (see Theorem 34.5.B in [36] for the classical Karhunen-Loève Theorem and Theorem 2 of Paper III for its extension) yields the following series expansion of X:
$$X_s = \sum_{n=1}^{\infty} Z_n e_n(s) \quad \text{with} \quad Z_n = \int_0^T X_s\, e_n(s)\,\mu(ds), \qquad (1.9)$$

where $(e_n)_{n=1}^{\infty}$ is the sequence of corresponding orthonormalized continuous eigenfunctions of the eigenvalues $(\lambda_n)_{n=1}^{\infty}$, and $(Z_n)_{n=1}^{\infty}$ is a sequence of independent normal random variables with mean 0 and variance $\lambda_n$. The convergence in (1.9) holds almost surely and uniformly in s for all s in the support of the measure μ. Setting $f_n = \sqrt{\lambda_n}\,e_n$ we obtain a series representation of X of the form (1.6).

The advantage of the Karhunen-Loève expansion is that it gives an orthogonal decomposition of X since the eigenfunctions $(e_n)_{n=1}^{\infty}$ are orthogonal in $L_2([0,T],\mu)$. Moreover, if we sort the eigenvalues in descending order, then truncations based on the Karhunen-Loève expansion minimize the total mean square error, i.e., for all $n \in \mathbb{N}$ the expected $L_2([0,T],\mu)$-norm of the tail sum $\sum_{k=n+1}^{\infty} Z_k e_k$ is minimal among all series expansions of the form (1.6).

Calculating the Karhunen-Loève expansion of the Brownian bridge with respect to the Lebesgue measure on [0,1] yields the eigenvalues $\lambda_n = 1/(n^2\pi^2)$ and the normalized eigenfunctions $e_n(s) = \sqrt{2}\sin(n\pi s)$, $n \ge 1$, which eventually leads to the same representation of the Brownian bridge as in (1.8).
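The eigenvalues $\lambda_n = 1/(n^2\pi^2)$ can be checked numerically by discretizing the covariance operator $A_R$ on a grid; the sketch below is an added illustration (grid size chosen arbitrarily) using the midpoint rule for the Lebesgue measure on [0,1].

```python
import numpy as np

m = 2000                                         # grid size (arbitrary choice)
s = (np.arange(m) + 0.5) / m                     # midpoint quadrature nodes
R = np.minimum.outer(s, s) - np.outer(s, s)      # covariance R(s,t) = min(s,t) - s*t
eig = np.sort(np.linalg.eigvalsh(R / m))[::-1]   # discretized operator A_R, weight 1/m
for n in range(1, 6):
    print(n, eig[n - 1], 1 / (n * np.pi) ** 2)   # numerical vs. exact 1/(n^2 pi^2)
```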

1.2 Models for Gaussian bridges and membranes

In Paper I and Paper II we study the modeling of Gaussian bridges and membranes. In Paper I, generalized Gaussian bridges are obtained by conditioning Gaussian processes on the event that certain functionals of their sample paths vanish. In Paper II, Gaussian bridges and their higher dimensional analogue, Gaussian membranes, are extracted from certain selfsimilar Gaussian random fields.

1.2.1 Generalized Gaussian bridges

In (1.1) we have seen that
$$\sqrt{n}\,\big(F_n(s) - s\big) \xrightarrow{d} B(s), \quad \text{as } n \to \infty, \qquad (1.10)$$
where $B = (B_s)_{s\in[0,1]}$ is the Brownian bridge on [0,1] and $F_n$ is the empirical distribution function of the first n elements in the sequence $U_1, U_2, \ldots$ of independent and uniformly distributed random variables on [0,1]. In fact, by Donsker's Theorem, the probability measure induced by the left hand side of (1.10) converges weakly to the probability measure induced by the Brownian bridge on the Skorokhod space D([0,1]). Now, for $n \in \mathbb{N}$, let $U_1^n, U_2^n, \ldots, U_n^n$ be independent and uniformly distributed random variables on [0,1], conditioned that $\sum_{i=1}^n U_i^n = n/2$, and let $G_n$ be the empirical distribution function of the random variables $U_1^n, \ldots, U_n^n$. From the conditioning of the $U_i^n$'s it follows that
$$\int_0^1 \big(G_n(s) - s\big)\,ds = 0.$$

Figure 1.1. A realization of a zero area Brownian bridge (figure taken from Paper I).

We may thus expect that $\sqrt{n}\,(G_n(s) - s)$ converges, at least in the sense of finite dimensional distributions, to the Brownian bridge conditioned that its integral over [0,1] vanishes. We call this process the zero area Brownian bridge. A typical sample path is given in Figure 1.1 (taken from Paper I).

The zero area Brownian bridge is one example of a generalized Gaussian bridge (or, as we call it in Paper I, conditioned Gaussian process): given a continuous Gaussian process $X = (X_s)_{s\in[0,T]}$, $T > 0$, and a finite subset $A \subset C([0,T])^*$ from the dual space of C([0,T]), let $P_X^{(A)}$ be the conditioned measure
$$P_X^{(A)}(\cdot) = P_X\Big(\cdot \;\Big|\; \bigcap_{a\in A} a^{-1}(0)\Big),$$
where $P_X$ is the induced measure of X on C([0,T]). Every continuous Gaussian process $X^{(A)}$ whose induced measure $P_{X^{(A)}}$ on C([0,T]) coincides with $P_X^{(A)}$ is called a generalized Gaussian bridge (or conditioned Gaussian process of X with respect to A).

The Brownian bridge on [0,1] is the standard Brownian motion on [0,1] conditioned by $A = \{\delta_1\}$, and the zero area Brownian bridge appears by conditioning the Brownian bridge on [0,1] further by $A = \{a\}$, where $a \in C([0,1])^*$ is Lebesgue measure on [0,1], or alternatively, by conditioning standard Brownian motion on [0,1] by $A = \{\delta_1, a\}$.

The random variables $X_s$, $0 \le s \le T$, and $a(X)$, $a \in A$, are centered Gaussian random variables. Hence, conditioning becomes orthogonal projection in the Gaussian Hilbert space spanned by the random variables $X_s$, $0 \le s \le T$. In particular, anticipative representations of generalized Gaussian bridges are obtained easily. For example the anticipative representation of the Brownian bridge B as a conditioned Brownian motion W is, as in (1.3), $B_s = W_s - sW_1$ for $0 \le s \le 1$.
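The projection recipe can be made explicit for the zero area Brownian bridge. A standard computation (carried out here only for illustration, not quoted from Paper I) gives $\mathrm{cov}(B_s, \int_0^1 B_u\,du) = s(1-s)/2$ and $\mathrm{Var}(\int_0^1 B_u\,du) = 1/12$, so subtracting the projection $6s(1-s)\int_0^1 B_u\,du$ from a Brownian bridge produces a path whose area vanishes:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 2000
s = np.linspace(0, 1, m + 1)
dW = rng.normal(scale=np.sqrt(1 / m), size=m)
W = np.concatenate(([0.0], np.cumsum(dW)))
B = W - s * W[-1]                               # Brownian bridge via (1.3)

area = (B[:-1] + B[1:]).sum() / (2 * m)         # trapezoidal approximation of the area
Z = B - 6 * s * (1 - s) * area                  # orthogonal projection onto {a(X) = 0}
print((Z[:-1] + Z[1:]).sum() / (2 * m))         # approximately 0: the area condition holds
```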

Finding non-anticipative representations for the conditioned process $X^{(A)}$, i.e., representations where the filtrations induced by the processes X and $X^{(A)}$ coincide (such as for example the representations (1.4) and (1.5) for the Brownian bridge), is more involved. In general, additional assumptions on X and A are required. Generalized Gaussian bridges and special cases of them were studied before, for example in [1], [8], [9], [19], [24], and [43]⁵. In particular, in the recent work [43] non-anticipative representations for generalized bridges of a wide class of Gaussian processes were found.

Generalized bridges as described in this section were used in connection with insider trading, where the additional information of an insider is modeled by functionals of the price process of some financial derivative. In [8] and [43] the additional expected utility for the inside trader is calculated for different models. Recently, in [17], generalized Gaussian bridges arising by conditioning Gaussian processes on the event that their first coordinates in the Karhunen-Loève expansion vanish were considered in the context of partial functional quantization.

In Paper I we present another approach to the study of generalized Gaussian bridges and show how this approach extends to the conditioning of Gaussian random variables with values in arbitrary separable Banach spaces.

1.2.2 Gaussian selfsimilar random fields

A stochastic process $X = (X_s)_{s\in\mathbb{R}}$ on the real line is called selfsimilar with Hurst index H if, for all $c > 0$, the processes $(X_{cs})_{s\in\mathbb{R}}$ and $(c^H X_s)_{s\in\mathbb{R}}$ coincide in distribution. It is said to have stationary increments if the distribution of $X_t - X_s$ depends solely on the length $t - s$ for all $s, t \in \mathbb{R}$. The only continuous Gaussian selfsimilar process with stationary increments is, up to constants, the fractional Brownian motion $B^H = (B^H_s)_{s\in\mathbb{R}}$, which has covariance function
$$\mathbb{E}\,B^H_s B^H_t = \frac{1}{2}\big(|s|^{2H} + |t|^{2H} - |s-t|^{2H}\big), \quad s, t \in \mathbb{R}.$$
Moreover, the self-similarity index H needs to be restricted to $0 < H < 1$. In the particular case $H = 1/2$ we obtain standard Brownian motion. Fractional Brownian motion is widely used in applications, for example in statistical physics, telecommunications, financial mathematics and many more.

The generalization of Gaussian processes, Gaussian random fields, are usually defined as Gaussian probability measures on the space of distributions $S^*$, the dual space of the Schwartz functions S on $\mathbb{R}^d$. A Gaussian random field is called selfsimilar with index H if, for all $c > 0$, the random fields $(X(\varphi_c))_{\varphi\in S}$ and $(c^H X(\varphi))_{\varphi\in S}$ coincide in distribution, where the dilation $\varphi_c$ of $\varphi$ is defined by $\varphi_c(x) = c^{-d}\varphi(c^{-1}x)$, $x \in \mathbb{R}^d$.

⁵ In [1] and [43] under the name generalized Gaussian bridges.

For $r \in \mathbb{N}$, it is said to have stationary increments of order r if its restriction to $S_r$ is invariant under translations, where $S_r \subset S$ is defined as
$$S_r = \Big\{\varphi \in S : \int_{\mathbb{R}^d} x^j \varphi(x)\,dx = 0 \text{ for all } |j| < r\Big\} \subset S,$$

where $j = (j_1,\ldots,j_d)$ is a multi-index, $|j| = \sum_{k=1}^d j_k$, and $x^j = \prod_{k=1}^d x_k^{j_k}$ for $x = (x_1,\ldots,x_d) \in \mathbb{R}^d$.

In [20], Dobrushin gave a complete characterization of the covariance functionals of stationary selfsimilar Gaussian random fields. In particular, he has shown that, for $H < r$, the covariance of all H-selfsimilar Gaussian random fields with stationary increments of order r equals
$$\mathbb{E}\,X(\varphi)X(\psi) = \int_{S^{d-1}} \int_0^{\infty} \hat{\varphi}(rx)\,\hat{\psi}(rx)\, r^{-2H-1}\,dr\,\sigma(dx),$$
where $\hat{\varphi}$ and $\hat{\psi}$ denote the Fourier transforms of $\varphi \in S$ and $\psi \in S$ and σ is a finite, positive, and reflection invariant measure on the unit sphere $S^{d-1}$ of $\mathbb{R}^d$. For example, choosing $\sigma = \varpi$, where $\varpi$ is the uniform measure on $S^{d-1}$, and $H = -d/2$ yields, up to constants, Gaussian white noise M on S with covariance functional
$$\mathbb{E}\,M(\varphi)M(\psi) = \int_{\mathbb{R}^d} \varphi(x)\psi(x)\,dx, \quad \varphi, \psi \in S.$$

Selfsimilar and fractional random fields have been studied from different perspectives. The monograph [16] gives a good account of the theory. For example, it was shown that Gaussian selfsimilar random fields appear as scaling limits of certain Poisson random ball models (confer for example [10] and the references mentioned therein). Moreover, Gaussian random fields were extended from S to a suitable subset of the space of signed measures with finite total variation, and it was described how to extract fractional Brownian motion $B^H$ from certain Gaussian random fields X via $B^H_s = X(\mu_s)$ for suitable choices of measures $(\mu_s)_{s\in\mathbb{R}^d}$.
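For illustration (an added sketch, not from Paper II), fractional Brownian motion with a given Hurst index H can be sampled directly from the covariance function above by a Cholesky factorization:

```python
import numpy as np

def fbm_sample(H, times, rng):
    """Sample B^H at the given (positive) times via Cholesky factorization of
    E[B^H_s B^H_t] = (|s|^(2H) + |t|^(2H) - |s - t|^(2H)) / 2."""
    s, t = times[:, None], times[None, :]
    cov = 0.5 * (np.abs(s) ** (2 * H) + np.abs(t) ** (2 * H) - np.abs(s - t) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(times)))  # small jitter for stability
    return L @ rng.standard_normal(len(times))

rng = np.random.default_rng(4)
times = np.linspace(0.01, 1.0, 100)     # avoid 0, where the covariance matrix degenerates
print(fbm_sample(0.75, times, rng)[:3])
```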

1.3 Inference for α-Brownian bridges

In Papers III–V we study problems of inference for the α-Brownian bridge. Inference for continuous time stochastic processes has been studied for a long time. One of the first systematic treatments of such problems was given in [26]. However, the approach described in [26] includes a reduction of the continuous time sample paths to a collection of countably many random variables. How this collection is chosen is a non-trivial problem which, however, is very relevant for the power of the deduced estimates and tests. Further work (for example [11] and [14]) studied parameter estimation for continuous time stochastic processes without this reduction, but under the assumption

of stationarity or ergodicity (or both). Statistical inference for stochastic processes based on continuous observations has been studied extensively ever since. Here we just mention the sources [34] and [35].

We next motivate the introduction of the α-Brownian bridges by an example: assume that Sweden decides to join the European monetary union (EMU). In order to do this, at some date before the planned entrance, the exchange rate at which Swedish crowns will be exchanged to Euro needs to be fixed at some level K. Then, in the time between the announcement of this rate and the date of the entrance to the EMU, currency dealers will tend to change Euro to Swedish crowns if the current rate is below K, and to change Swedish crowns to Euro if the current rate is above K. Moreover, this effect will be the stronger the closer the date of entrance is. Considering the exchange rate as a function of time, we thus obtain a mapping which, at the day of entrance to the EMU, attains the fixed value K. α-Brownian bridges have been used as a building block in [42] and [44] to model such behavior.

Given a standard Brownian motion $W = (W_s)_{s\in[0,1]}$ and a real number $\alpha > 0$, consider the stochastic differential equation

$$dX_s^{(\alpha)} = dW_s - \alpha\,\frac{X_s^{(\alpha)}}{1-s}\,ds, \quad X_0^{(\alpha)} = 0, \quad 0 \le s < 1. \qquad (1.11)$$
The unique strong solution of (1.11) is
$$X_s^{(\alpha)} = \int_0^s \Big(\frac{1-s}{1-x}\Big)^{\alpha}\,dW_x, \quad 0 \le s < 1, \qquad (1.12)$$
and is called the α-Brownian bridge. It fulfills $\lim_{s\nearrow 1} X_s^{(\alpha)} = 0$ almost surely and the parameter α determines how the process returns to 0 at time 1. Hence, $X^{(\alpha)}$ has a continuous extension on [0,1] and therefore the term "bridge" is justified. The (usual) Brownian bridge is covered as the special case $\alpha = 1$.

In (1.11) and (1.12) we may allow non-positive values for α as well. Then Brownian motion is included as the special case $\alpha = 0$. However, for $\alpha \le 0$, we do not have $\lim_{s\nearrow 1} X_s^{(\alpha)} = 0$ any longer. In order to visualize the effect of the parameter α graphically, we give a plot of the "expected future" $\mathbb{E}\big[X_t^{(\alpha)} \,\big|\, (X_x^{(\alpha)})_{x\in[0,s]}\big]$, $0 \le s \le t \le 1$, for different values of α in Figure 1.2 (taken from Paper III).

α-Brownian bridges were introduced in [13] for the modeling of riskless profit given some future contracts in the absence of transaction costs. This extended the earlier work [2], where the arbitrage opportunity is derived from a model including the Brownian bridge. Later it was used in economical ([42] and [44]) and biological ([28]) contexts. In particular, in [44] a very similar situation to our motivating example was studied: the conversion of the exchange rates between the Greek Drachma and the Euro to a fixed exchange rate on January 1st, 2001.

Figure 1.2. The influence of α on the "expected future" for different values of α (curves for α = 0, 0 < α < 1, α = 1, and α > 1; figure taken from Paper III).
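Sample paths of $X^{(\alpha)}$ for different values of α can be simulated from (1.11); the following sketch (added here for illustration, with arbitrary step size and seed) uses an Euler scheme stopped just before time 1, where the drift blows up.

```python
import numpy as np

def alpha_brownian_bridge(alpha, m, rng):
    """Euler scheme for the SDE (1.11) on the grid s_k = k/m, k = 0, ..., m-1."""
    ds = 1.0 / m
    s = np.arange(m) * ds
    x = np.zeros(m)
    for k in range(m - 1):
        dW = rng.normal(scale=np.sqrt(ds))
        x[k + 1] = x[k] + dW - alpha * x[k] / (1 - s[k]) * ds
    return s, x

rng = np.random.default_rng(5)
for a in (0.0, 0.5, 1.0, 4.0):
    _, x = alpha_brownian_bridge(a, 2000, rng)
    print(a, x[-1])      # a larger alpha pulls the path more strongly towards 0 near time 1
```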

The first more theoretical investigation of α-Brownian bridges was given in [37]. In this reference it was, among other results, shown that the α-Brownian bridge is not a bridge of a Gaussian Markov process in the sense of Section 1.2.1 unless $\alpha = 1$. In [5] sample path properties were studied and in [3] the Karhunen-Loève expansion of $X^{(\alpha)}$ (under the Lebesgue measure) was computed. Some further references are given in subsequent sections. Here, we only remark that generalizations have been discussed in the literature, where the constant α was replaced by a mapping $s \mapsto \alpha(s)$ [4], and where the Brownian motion in (1.11) was replaced by fractional Brownian motion [22].

Returning to the scenario of Sweden joining the European monetary union, assume that at some time close to the entrance to the EMU, a currency dealer holds Swedish crowns and has to decide whether to sell them or not. Suppose that she works under the assumption that the exchange rate follows an α-Brownian bridge with an unknown parameter α. Then she will have to estimate α based on the past exchange rates (we study hypothesis testing and estimation for α-Brownian bridges in Paper III and Paper IV) and, once she has found a good estimate, she will have to find the best selling strategy given the now fully defined model (we consider this problem in Paper V).

1.3.1 Estimation

An application of Girsanov's Theorem yields the log-likelihood function of α given a sample path of $X^{(\alpha)}$ until time T,
$$\ln L_{\alpha}\big((X_s^{(\alpha)})_{s\in[0,T]}\big) = -\alpha \int_0^T \frac{X_s^{(\alpha)}}{1-s}\,dX_s^{(\alpha)} - \frac{\alpha^2}{2} \int_0^T \frac{(X_s^{(\alpha)})^2}{(1-s)^2}\,ds. \qquad (1.13)$$

It follows that the maximum likelihood estimator for α equals
$$\hat{\alpha}_{\mathrm{MLE}}(T) = -\int_0^T \frac{X_s^{(\alpha)}}{1-s}\,dX_s^{(\alpha)} \bigg/ \int_0^T \frac{(X_s^{(\alpha)})^2}{(1-s)^2}\,ds. \qquad (1.14)$$
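The estimator (1.14) is straightforward to evaluate from a discretely observed path by replacing the integrals with sums. The sketch below (an added illustration with arbitrary parameter choices, using a simple Euler simulation of (1.11)) computes one such estimate.

```python
import numpy as np

def alpha_mle(s, x, T):
    """Discretized version of (1.14) from observations (s_k, x_k) with s_k <= T."""
    keep = s <= T
    s, x = s[keep], x[keep]
    num = np.sum(x[:-1] / (1 - s[:-1]) * np.diff(x))            # Ito integral, left endpoints
    den = np.sum(x[:-1] ** 2 / (1 - s[:-1]) ** 2 * np.diff(s))  # Riemann integral
    return -num / den

rng = np.random.default_rng(6)
alpha_true, m = 2.0, 20_000
ds = 1.0 / m
s = np.arange(m) * ds
x = np.zeros(m)
for k in range(m - 1):                                          # Euler scheme for (1.11)
    x[k + 1] = x[k] + rng.normal(scale=np.sqrt(ds)) - alpha_true * x[k] / (1 - s[k]) * ds

print(alpha_mle(s, x, T=0.8))      # one noisy estimate of alpha_true = 2.0
```

Averaging such estimates over many simulated paths exhibits the bias that Paper IV quantifies analytically.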

In [7] it was shown that $\hat{\alpha}_{\mathrm{MLE}}$ is a strongly consistent estimator for α, that is, that $\lim_{T\nearrow 1} \hat{\alpha}_{\mathrm{MLE}}(T) = \alpha$ almost surely. Moreover, in [6] (for the first case) and [7] (for the second and third cases) it was shown that, as $T \nearrow 1$,
$$\sqrt{I_{\alpha}(T)}\,\big(\hat{\alpha}_{\mathrm{MLE}}(T) - \alpha\big) \xrightarrow{d} \begin{cases} \zeta, & \text{for } \alpha < 1/2, \\[4pt] -\dfrac{W_1^2 - 1}{2\sqrt{2}\int_0^1 W_s^2\,ds}, & \text{for } \alpha = 1/2, \\[4pt] \xi, & \text{for } \alpha > 1/2, \end{cases}$$
where ζ denotes a standard Cauchy-distributed random variable, ξ a standard normal random variable, and $I_{\alpha}(T)$ is the Fisher information. Moreover, in [45], it was proven that under the assumption $\alpha > 1/2$, the maximum likelihood estimator $\hat{\alpha}_{\mathrm{MLE}}$ satisfies the large deviation principle with speed $|\ln(1-T)|$ and good rate function
$$J(x) = \begin{cases} \dfrac{(\alpha - x)^2}{2(2x-1)}, & \text{if } x \ge (1+\alpha)/3, \\[4pt] \dfrac{2\alpha - 4x + 1}{2}, & \text{if } x < (1+\alpha)/3. \end{cases}$$
All the aforementioned results are only of asymptotic nature in the sense that they describe the behavior of $\hat{\alpha}_{\mathrm{MLE}}(T)$ as $T \nearrow 1$. The aim of Paper III and Paper IV is to give precise results for all values of T smaller than 1. In Paper IV we show that $\hat{\alpha}_{\mathrm{MLE}}(T)$ is a heavily biased estimator for α unless T is very close to 1 and we propose a bias-corrected estimator for α.

1.3.2 Hypothesis testing

Hypothesis testing for the α-Brownian bridge was studied in [46]. More precisely, the simple statistical decision problem

$$H_0: \alpha = \alpha_0 \quad \text{vs.} \quad H_1: \alpha = \alpha_1, \qquad (1.15)$$
where $\alpha_0, \alpha_1 \ge 1/2$, was considered. The decision should be based on an observed trajectory until time $T < 1$ and should be made in such a way that the probability of making an error of the second kind is minimized, while at the same time bounding the probability of making an error of the first kind from above by some value $q < 1$ (usually $q = 0.1$, $q = 0.05$ or $q = 0.01$). The Neyman–Pearson Lemma provides us with the (in the just described sense) optimal test: we have to reject the null hypothesis whenever
$$\varphi_{\alpha_0,\alpha_1}(T) := \frac{L_{\alpha_1}\big((X_s^{(\alpha)})_{s\in[0,T]}\big)}{L_{\alpha_0}\big((X_s^{(\alpha)})_{s\in[0,T]}\big)} > c_{\alpha_0,\alpha_1,T}(q). \qquad (1.16)$$

Here, the constant $c_{\alpha_0,\alpha_1,T}(q)$ is to be chosen such that
$$P^{(\alpha_0)}\big(\varphi_{\alpha_0,\alpha_1}(T) > c_{\alpha_0,\alpha_1,T}(q)\big) = q,$$
that is, in order to find the best decision in (1.15) we need to know the distribution of the likelihood ratio $\varphi_{\alpha_0,\alpha_1}(T)$ under the null hypothesis. In [46] approximations for this distribution were given for T close to 1 by means of large deviations.

In Paper III we consider the same problem (1.15) but with the less restrictive assumption $\alpha_0 + \alpha_1 \ge 1$. Applying Smirnov's formula to the Karhunen-Loève expansion of $X^{(\alpha)}$ under a certain measure μ allows us to determine the distribution of $\varphi_{\alpha_0,\alpha_1}(T)$ under the null hypothesis exactly for all $T < 1$ (see Section 2.3).

1.3.3 Optimal stopping

Optimal stopping has its roots in sequential analysis, where the size n of the data $(X_1, X_2, \ldots, X_n)$ on which decisions and estimates are based is not predefined but depends on some stopping rule η. This rule is chosen such that the cost of collecting the data is as small as possible while still providing the required level of significance for the inference. This idea is embedded into a continuous setting in the following way: Given a stochastic process $Y = (Y_s)_{s\in[0,T]}$ and a progressively measurable function $G: [0,T] \times \mathbb{R}^{[0,T]} \to \mathbb{R}$ we consider

$$V = \sup_{0\le\tau\le T} \mathbb{E}\,G(\tau, Y),$$
where the supremum is taken over all stopping times, and where the term progressively measurable means that the value of $G(t,Y)$ is solely based on t and $(Y_s)_{s\in[0,t]}$. However, often the simpler problem

$$V = \sup_{0\le\tau\le T} \mathbb{E}\,Y_{\tau} \qquad (1.17)$$
is studied, i.e., the value of G at time t is just $Y_t$. A solution of the optimal stopping problem (1.17) consists of the value V and a stopping time $\tau^*$ for which the supremum is attained (if such a stopping time exists).

If Y is a (possibly time-inhomogeneous) Markov process, one usually considers the augmented problem

$$V(x,t) = \sup_{t\le\tau\le T} \mathbb{E}_{x,t}\,Y_{\tau}, \qquad (1.18)$$
where $\mathbb{E}_{x,t}$ denotes expectation under the assumption that $Y_t = x$. Then the solution to (1.17) follows via $V = V(Y_0, 0)$. If we assume that Y is a continuous process, it appears natural to continue the observation as long as $Y_t < V(Y_t, t)$

and to stop immediately as soon as $Y_t = V(Y_t, t)$. Hence, we expect that the stopping time
$$\tau^* = \inf\{t \ge 0 : Y_t = V(Y_t, t)\}$$
is optimal in (1.17). In fact, this is true under some regularity conditions on Y (see Theorem 2.4 in [39]). Finding the value function $V(x,t)$ for time-inhomogeneous Markov processes (such as α-Brownian bridges) is non-trivial and different approaches are discussed in the literature.

In [21] the problem
$$V = \sup_{0\le\tau\le 1} \mathbb{E}\,X_{\tau}^{(1)},$$
i.e., the optimal stopping problem for the Brownian bridge, was solved. We extend these results in Paper V, by replacing the Brownian bridge $X^{(1)}$ by the α-Brownian bridge $X^{(\alpha)}$ for arbitrary $\alpha \ge 0$.
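For the Brownian bridge case $\alpha = 1$ the value function can also be approximated by crude backward induction over a space-time grid. The sketch below is an added illustration with arbitrary grid sizes; it is not the free boundary approach of [21] or Paper V, and only uses the Gaussian transition kernel of the bridge.

```python
import numpy as np

nt, nx = 200, 401                                 # time and space grid sizes (arbitrary)
tgrid = np.linspace(0.0, 1.0, nt + 1)
xgrid = np.linspace(-3.0, 3.0, nx)
dx = xgrid[1] - xgrid[0]
h = 1.0 / nt

V = np.zeros(nx)                                  # V(x, 1) = 0, since the bridge ends at 0
for i in range(nt - 1, -1, -1):
    t, u = tgrid[i], tgrid[i + 1]
    mean = xgrid * (1 - u) / (1 - t)              # E[B_u | B_t = x]
    sd = max(np.sqrt(h * (1 - u) / (1 - t)), dx)  # Std[B_u | B_t = x], floored at dx
    w = np.exp(-(xgrid[None, :] - mean[:, None]) ** 2 / (2 * sd ** 2))
    w /= w.sum(axis=1, keepdims=True)             # row-normalized transition weights
    V = np.maximum(xgrid, w @ V)                  # stop (reward x) or continue

print("V(0, 0) approx.:", V[nx // 2])             # approximates (2.7) below for alpha = 1
```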

2. Summary of Papers

In this chapter we give a short summary of each paper included in the thesis.

2.1 Paper I

The first paper deals with conditioned Gaussian processes as introduced under the name generalized Gaussian bridges in Section 1.2.1. Let $X = (X_s)_{s\in[0,T]}$ be a continuous Gaussian process and let $A \subset C([0,T])^*$ be finite (we call elements in A conditions). Denote by $X^{(A)}$ the conditioned Gaussian process of X with respect to the set of conditions A, and let $P_X$ and $P_{X^{(A)}}$ be the induced measures of X and $X^{(A)}$ on C([0,T]).

Let $u: H \to C([0,T])$ be a generating operator of X as introduced in Section 1.1.3. We show that the conditioned process $X^{(A)}$ admits a series expansion of the form
$$X_s^{(A)} = \sum_{n=1}^{\infty} \xi_n (uf_n)(s), \qquad (2.1)$$
where $(\xi_n)_{n=1}^{\infty}$ is a sequence of i.i.d. standard normal random variables and $(f_n)_{n=1}^{\infty}$ is an orthonormal basis in the Hilbert space

$$H^{(A)} = \{h \in H : a(uh) = 0 \text{ for all } a \in A\} \subset H.$$

From the series expansion (2.1) we deduce an anticipative representation for the conditioned process $X^{(A)}$, i.e., we express $X^{(A)}$ in terms of X, where, in general, the complete realization of $X(\omega)$ is required in order to compute $X_s^{(A)}(\omega)$, $0 \le s \le T$. Moreover, the series expansion (2.1), together with an application of the Cameron-Martin Theorem, leads to a simple criterion for determining the equivalence of the measures $P_X$ and $P_{X^{(A)}}$.

Next, we study non-anticipative representations of $X^{(A)}$. We show that, whenever X is a solution of a stochastic differential equation of the form

$$dX_s = \alpha\,dW_s + \beta(s, X)\,ds, \quad X_0 = 0, \quad 0 \le s < T,$$
where $(W_s)_{s\in[0,T]}$ is standard Brownian motion and β a progressively measurable functional, then $X^{(A)}$ solves a stochastic differential equation

$$dX_s^{(A)} = \alpha\,dW_s + \delta(s, X^{(A)})\,ds, \quad X_0^{(A)} = 0, \quad 0 \le s < T,$$

for some progressively measurable functional δ. Moreover, if X is a Markov process we determine δ explicitly.

After giving examples (e.g. the zero area Brownian bridge mentioned in Section 1.2.1) we finally study extensions to arbitrary separable Banach spaces and consider conditioning of Gaussian processes on [0,∞) and conditioning of Gaussian random measures.

2.2 Paper II

In Paper II we present a unified framework for the construction of selfsimilar generalized Gaussian random fields on $\mathbb{R}^d$. These fields are driven by Gaussian random balls white noise $M_{\beta}$, defined as Gaussian random measures on $\mathbb{R}^d \times \mathbb{R}_+$ with control measures $\nu_{\beta}(dz) = \nu_{\beta}(dx, du) = dx\,u^{-\beta-1}\,du$ for some $\beta > 0$.

Given a point $z = (x,u) \in \mathbb{R}^d \times \mathbb{R}_+$ and a function $h: \mathbb{R}^d \to \mathbb{R}$ we define the shift and scale map $\tau_z h: \mathbb{R}^d \to \mathbb{R}$ by $\tau_z h(y) = h((y-x)/u)$. For a signed measure μ on $\mathbb{R}^d$ and an $m > 0$, let $(-\Delta)^{-m/2}\mu$ be the absolutely continuous measure with density
$$\big((-\Delta)^{-m/2}\mu\big)(x) = \int_{\mathbb{R}^d} |x-y|^{-(d-m)}\,\mu(dy), \quad x \in \mathbb{R}^d.$$
Denote the evaluation of a function $g: \mathbb{R}^d \to \mathbb{R}$ with respect to a signed measure η on $\mathbb{R}^d$ by
$$\langle \eta, g\rangle = \int_{\mathbb{R}^d} g(y)\,\eta(dy)$$
and consider the Gaussian random field
$$X(\mu) = \int_{\mathbb{R}^d \times \mathbb{R}_+} \big\langle (-\Delta)^{-m/2}\mu,\, \tau_z h\big\rangle\, M_{\beta}(dz). \qquad (2.2)$$
The notion of stationarity and self-similarity carries over from generalized Gaussian random fields defined on S as in Section 1.2.2 to generalized Gaussian random fields defined on spaces of measures in a natural way. We analyze for which choices of the parameters m and β, of the measure μ, and of the shot noise function h the random field (2.2) is well defined. Moreover, we study their self-similarity properties in Theorem 2 of Paper II. Modifications of (2.2), where the driving Gaussian random measure is replaced by Gaussian white noise on $\mathbb{R}^d$, are studied in Theorem 3 of Paper II.

We then show how to extract Gaussian processes and Gaussian bridges from these generalized Gaussian random fields. For example, we discuss the extraction of fractional Brownian motion in different representations, the extraction of generalized Gaussian bridges in the sense of Section 1.2.1, and the extraction of Gaussian bridges and membranes on bounded domains $D \subset \mathbb{R}^d$, which are Gaussian random fields $X = (X_s)_{s\in\bar{D}}$ such that $X_s \to 0$ as $s \to s_0 \in \partial D$.

In a final section we study a second approach to the construction of Gaussian membranes through a modification of the control measure of the driving Gaussian random measures. This yields random fields which are not selfsimilar in a global sense but in a local sense, as shown in Theorem 4 of Paper II.

2.3 Paper III

The statistical decision problem (1.15) is considered in Paper III, i.e., given an observed trajectory of an α-Brownian bridge $X^{(\alpha)}$ with unknown scaling parameter α until time $T < 1$, we want to test

$$H_0: \alpha = \alpha_0 \quad \text{vs.} \quad H_1: \alpha = \alpha_1.$$

We assume that $\alpha_0 + \alpha_1 \ge 1$. As pointed out in Section 1.3.2, in order to find the optimal test it is crucial to know the distribution of the likelihood ratio $\varphi_{\alpha_0,\alpha_1}(T)$ (as defined in (1.16)) under the null hypothesis. We show that $\varphi_{\alpha_0,\alpha_1}(T)$ can be recast as (Proposition 1 of Paper III)
$$\varphi_{\alpha_0,\alpha_1}(T) = \exp\big((\alpha_0 - \alpha_1)\big(\psi_{\alpha_0,\alpha_1}(T) + \ln(1-T)\big)/2\big), \qquad (2.3)$$
where
$$\psi_{\alpha_0,\alpha_1}(T) = \frac{(X_T^{(\alpha)})^2}{1-T} + (\alpha_0 + \alpha_1 - 1)\int_0^T \frac{(X_s^{(\alpha)})^2}{(1-s)^2}\,ds. \qquad (2.4)$$
We then generalize the Karhunen-Loève Theorem (Theorem 2 of Paper III) and calculate the Karhunen-Loève expansion of $X^{(\alpha_0)}$ under the positive measure
$$\mu = \mu_{\alpha_0,\alpha_1,T}(ds) = \frac{\delta_T(ds)}{1-T} + \frac{(\alpha_0 + \alpha_1 - 1)\,\mathbb{I}(s\le T)\,ds}{(1-s)^2}$$

(Theorem 3 of Paper III), where $\delta_T$ denotes the point measure at T and $\mathbb{I}$ the indicator function. This yields
$$X_s^{(\alpha_0)} = \sum_{n=1}^{\infty} Z_n e_n(s), \qquad (2.5)$$
where the convergence in (2.5) holds almost surely and uniformly in $s \in [0,T]$ (see also Section 1.1.3). Here, $(Z_n)_{n=1}^{\infty}$ is a sequence of centered normal random variables with variance $\lambda_n$, with $(\lambda_n)_{n=1}^{\infty}$ being the decreasing sequence of eigenvalues in the Karhunen-Loève expansion of $X^{(\alpha_0)}$, and $(e_n)_{n=1}^{\infty}$ is the sequence of corresponding normalized continuous eigenfunctions. In particular, the sequence $(e_n)_{n=1}^{\infty}$ forms an orthonormal system in the Hilbert space $L_2([0,1],\mu)$. From this fact, (2.4) and (2.5) it follows that under the null hypothesis
$$\psi_{\alpha_0,\alpha_1}(T) = \big\|X^{(\alpha_0)}\big\|_{\mu}^2 = \sum_{n=1}^{\infty} Z_n^2 \overset{d}{=} \sum_{n=1}^{\infty} \lambda_n \xi_n^2, \qquad (2.6)$$

where $(\xi_n)_{n=1}^{\infty}$ is a sequence of i.i.d. standard normal random variables. Random sums of the form (2.6) were studied by Smirnov [40] and Martynov [38] and concise formulas for their distributions were given. Based on the distribution of $\psi_{\alpha_0,\alpha_1}(T)$, the distribution of $\varphi_{\alpha_0,\alpha_1}(T)$ is obtained via (2.3) (Theorem 1 of Paper III).

In a final section we apply the presented method to hypothesis testing for Ornstein-Uhlenbeck processes (Theorem 4 and Theorem 5 of Paper III).
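To illustrate random sums of the form (2.6), tail probabilities of $\sum_n \lambda_n \xi_n^2$ can be approximated by plain Monte Carlo; Paper III instead obtains the exact distribution via Smirnov's formula. The sketch below is an added illustration: since the eigenvalues of $X^{(\alpha_0)}$ under μ are derived in Paper III and not reproduced here, the Brownian bridge eigenvalues $1/(n^2\pi^2)$ from Section 1.1.3 are used as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(7)
N, reps = 500, 20_000
lam = 1.0 / (np.arange(1, N + 1) * np.pi) ** 2        # stand-in eigenvalues 1/(n^2 pi^2)
psi = (lam * rng.standard_normal((reps, N)) ** 2).sum(axis=1)
for x in (0.2, 0.5, 1.0):
    print(f"P(sum > {x}) approx. {np.mean(psi > x):.4f}")
```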

2.4 Paper IV

In Paper IV we study the bias of the maximum likelihood estimator for α given an observation of a sample path of the α-Brownian bridge $X^{(\alpha)}$ until time $T < 1$. From (2.3) and (2.4) it can be deduced that

$$\hat{\alpha}_{\mathrm{MLE}}(T) = -\frac{(X_T^{(\alpha)})^2}{2(1-T)\,I_T^{(\alpha)}} + \frac{1}{2} - \frac{\ln(1-T)}{2 I_T^{(\alpha)}}, \quad \text{where } I_T^{(\alpha)} = \int_0^T \frac{(X_s^{(\alpha)})^2}{(1-s)^2}\,ds.$$
The moment generating function of a random variable does not only contain information about positive but also about negative moments of that random variable (provided that they exist). In particular, in [18], formulas for the moments of quotients of random variables based on their joint moment generating function are derived. The joint Laplace transform of $(X_T^{(\alpha)})^2$ and $I_T^{(\alpha)}$ was computed in [7]. Based on the mentioned results from [7] and [18] we compute the expected value $\mathbb{E}_{\alpha}[\hat{\alpha}_{\mathrm{MLE}}(T)]$ of $\hat{\alpha}_{\mathrm{MLE}}(T)$ (Proposition 1 of Paper IV), and show that $\alpha \mapsto \mathbb{E}_{\alpha}[\hat{\alpha}_{\mathrm{MLE}}(T)]$ is, as a mapping from $\mathbb{R}$ into $\mathbb{R}$, surjective (Proposition 2 and Proposition 3 of Paper IV).

Finally, we propose a bias-corrected maximum likelihood estimator for α and compare its bias and mean squared error with those of the maximum likelihood estimator and two further Bayesian estimators in a simulation study.

2.5 Paper V

In Section 1.3 we presented a currency dealer who wants to change Swedish crowns to Euro under the assumption that the exchange rate follows an α-Brownian bridge $X^{(\alpha)}$ with unknown parameter α. After an estimation of α she will have to solve the optimal stopping problem

$$V(\alpha) = \sup_{0\le\tau\le 1} \mathbb{E}\,X_{\tau}^{(\alpha)}, \qquad (2.7)$$
where the supremum is taken over all stopping times τ with $0 \le \tau \le 1$ almost surely. In Paper V we solve this problem by following the classical steps in

optimal stopping theory described for example in [39]. First, we augment problem (2.7) as in (1.18) and consider

$$V(x, t, \alpha) = \sup_{t\le\tau\le 1} \mathbb{E}_{x,t}\,X_{\tau}^{(\alpha)}. \qquad (2.8)$$
Then we formulate a two-dimensional free boundary problem for the value function $V(\cdot,\cdot,\alpha)$; its solution is computed in Theorem 1 of Paper V and is a candidate for problem (2.8). The final step would be to verify that the found candidate is actually the correct solution of (2.8). However, this can be done in exactly the same way as in [21] (i.e., by an application of the Itô formula together with the optional sampling theorem).

Of particular interest is the limiting behavior of $V(\alpha)$ as $\alpha \searrow 0$. Since $X^{(0)}$ is a Brownian motion (and thus a martingale) we have $V(0) = 0$. On the other hand, if we consider the stopping time
$$\tau = \begin{cases} 1/2, & \text{if } X_{1/2}^{(\alpha)} > 0, \\ 1, & \text{otherwise,} \end{cases}$$
then there exists a constant $c > 0$ such that
$$\mathbb{E}\,X_{\tau}^{(\alpha)} = \mathbb{E}\,X_{1/2}^{(\alpha)}\,\mathbb{I}\big(X_{1/2}^{(\alpha)} > 0\big) > c$$
for all $\alpha < 1$. Hence, we cannot expect $V(\cdot)$ to be continuous at 0. The details of the limiting behavior are given in Theorem 2 of Paper V.

3. Summary in Swedish

This thesis studies the modeling of Gaussian stochastic processes, in particular Gaussian bridges and membranes, and inference for the α-Brownian bridge. The work belongs to the field of mathematical statistics, but throughout we also use tools from functional analysis.

An important special case in all five papers is the Brownian bridge, which arises by conditioning Brownian motion to attain the value 0 at time 1. The process is particularly important in asymptotic statistics, since empirical distribution functions of independent random variables asymptotically behave like the Brownian bridge. In particular, a detailed analysis of the supremum of the Brownian bridge underlies the Kolmogorov-Smirnov test, which is a central method in theoretical statistics. In all papers we study different generalizations of the Brownian bridge.

In Paper I we treat continuous Gaussian processes whose realizations are controlled by a given conditioning functional A. We give series expansions for the unconditioned process X and the conditioned process $X^{(A)}$. By applying the Cameron-Martin theorem we use these expansions to prove an equivalence criterion for the distributions induced by X and $X^{(A)}$. Under certain conditions on the process X and the functional A we derive explicit canonical representations for $X^{(A)}$. Finally, we discuss generalizations to Gaussian random variables taking values in separable Banach spaces.

In Paper II we present a unified framework for the construction of selfsimilar Gaussian random fields. These fields are indexed by Schwartz functions or by a wider class of signed measures on $\mathbb{R}^d$ and can be parametrized by a so-called Hurst index. We show how an extraction method can be used to construct Gaussian processes on $\mathbb{R}^d$ by considering these fields for a family of measures indexed by $\mathbb{R}^d$. In particular, we show how to obtain different representations of fractional Brownian motion, generalized Gaussian bridges, and Gaussian membranes. The latter are Gaussian processes defined on a bounded domain which attain the value zero on the boundary. We also study local self-similarity of such membranes.

The α-Brownian bridge is a generalization of the Brownian bridge with a scaling parameter $\alpha \ge 0$ which determines the rate of convergence to 0 at time 1. Inference for the scaling parameter α based on a realization up to time $T < 1$ has been studied in the literature before, but only for values of T close to 1. In Paper III we consider the statistical decision problem $H_0: \alpha = \alpha_0$ vs. $H_1: \alpha = \alpha_1$ for α-Brownian bridges. We show that the relevant likelihood ratio can be written as a squared $L_2$-norm of the process under a certain

measure μ. We generalize the Karhunen-Loève theorem and compute the Karhunen-Loève expansion of the α-Brownian bridge under the measure μ. Based on this expansion we obtain the distribution of the likelihood ratio by an application of Smirnov's formula. This leads to optimal tests for all $0 < T \le 1$. We also discuss generalizations of this approach to Ornstein-Uhlenbeck processes.

In Paper IV the bias of the maximum likelihood estimator for α is computed. It turns out that the estimator has a considerable bias when T is not close to 1. We therefore propose a bias-corrected maximum likelihood estimator and compare it with the uncorrected one and with two alternative Bayesian estimators in a simulation study.

In the concluding Paper V we consider optimal stopping problems for the α-Brownian bridge. We follow the classical approach of optimal stopping theory by studying a corresponding free boundary PDE. Solving the free boundary problem yields a candidate solution of the optimal stopping problem, which is then shown to be the solution sought. We also study how the loss of continuity of the α-Brownian bridge at time 1, as α tends to 0, affects the solution of the optimal stopping problem.

Acknowledgements

I would like to express my deepest gratitude to my supervisor Ingemar Kaj for his support and encouragement throughout my graduate studies. It was only due to his patient guidance that my chaotic first drafts finally turned into (hopefully) coherent papers. I am also indebted to my second supervisor Svante Janson, who, with his tremendous knowledge, always got me back on track whenever I got stuck (in particular with Paper I).

I would like to thank Allan Gut for always caring about me like a mentor, for reading most of my manuscripts, and for pointing my attention to reference [46] which eventually led to Paper III.

Katja, thank you for your friendship and for sharing your lunch breaks with me. I wish you all the best in the final year of your graduate studies and beyond. Måns, you have been a great friend, office mate, and co-author. Thank you very much for your companionship. I wish you only the best as well.

I would like to thank the past and present members of the MatStat group for creating a great working atmosphere. I enjoyed uncounted cakes with Fredrik, Ioannis, Jesper, Saeid, Silvelyn, and all the others.

I think the decision about which subject one specializes in is to a large extent influenced by the teachers one has. I am very grateful to Werner Linde from the University of Jena for awakening my interest in stochastic processes.

Finishing this thesis is only the preliminary end of a long journey. I would like to thank my parents and my sister for their love and support over all the years. Finally, this thesis is dedicated to Bine and Milo. Being with you is my greatest joy and with you I can be myself completely. I am glad to have you by my side for everything to come.

References

[1] L. Alili. Canonical decompositions of certain generalized Brownian bridges. Electron. Comm. Probab., 7:27–36 (electronic), 2002.
[2] C. A. Ball and W. N. Torous. Bond price dynamics and options. The Journal of Financial and Quantitative Analysis, 18(4):517–531, 1983.
[3] M. Barczy and E. Iglói. Karhunen-Loève expansions of α-Wiener bridges. Cent. Eur. J. Math., 9(1):65–84, 2011.
[4] M. Barczy and P. Kern. General alpha-Wiener bridges. Commun. Stoch. Anal., 5(3):585–608, 2011.
[5] M. Barczy and G. Pap. α-Wiener bridges: singularity of induced measures and sample path properties. Stoch. Anal. Appl., 28(3):447–466, 2010.
[6] M. Barczy and G. Pap. Asymptotic behavior of maximum likelihood estimator for time inhomogeneous diffusion processes. J. Statist. Plann. Inference, 140(6):1576–1593, 2010.
[7] M. Barczy and G. Pap. Explicit formulas for Laplace transforms of certain functionals of some time inhomogeneous diffusions. J. Math. Anal. Appl., 380(2):405–424, 2011.
[8] F. Baudoin. Conditioned stochastic differential equations: theory, examples and application to finance. Stochastic Process. Appl., 100:109–145, 2002.
[9] F. Baudoin and L. Coutin. Volterra bridges and applications. Markov Process. Related Fields, 13(3):587–596, 2007.
[10] H. Biermé, A. Estrade, and I. Kaj. Self-similar random fields and rescaled random balls models. J. Theoret. Probab., 23(4):1110–1141, 2010.
[11] P. Billingsley. Statistical inference for Markov processes. Statistical Research Monographs, Vol. II. The University of Chicago Press, Chicago, Ill., 1961.
[12] V. I. Bogachev. Gaussian measures, volume 62 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1998.
[13] M. J. Brennan and E. S. Schwartz. Arbitrage in stock index futures. The Journal of Business, 63(1):S7–S31, 1990.
[14] B. M. Brown and J. I. Hewitt. Asymptotic likelihood theory for diffusion processes. Journal of Applied Probability, 12(2):228–238, 1975.
[15] F. P. Cantelli. Sulla determinazione empirica delle leggi di probabilità. Giorn. Ist. Ital. Attuari, 4:421–424, 1933.
[16] S. Cohen and J. Istas. Fractional fields and applications, volume 73 of Mathématiques & Applications. Springer, Heidelberg, 2013.
[17] S. Corlay. Partial functional quantization and generalized bridges. Bernoulli, 20(2):716–746, 2014.
[18] N. Cressie, A. S. Davis, J. L. Folks, and G. E. Policello. The moment-generating function and negative integer moments. Amer. Statist., 35(3):148–150, 1981.
[19] P. Deheuvels. A Karhunen-Loève expansion for a mean-centered Brownian bridge. Statist. Probab. Lett., 77(12):1190–1200, 2007.
[20] R. L. Dobrushin. Gaussian and their subordinated self-similar random generalized fields. Ann. Probab., 7(1):1–28, 1979.
[21] E. Ekström and H. Wanntorp. Optimal stopping of a Brownian bridge. J. Appl. Probab., 46(1):170–180, 2009.
[22] K. Es-Sebaiy and I. Nourdin. Parameter estimation for α-fractional bridges. In Malliavin calculus and stochastic analysis, volume 34 of Springer Proc. Math. Stat., pages 385–412. Springer, New York, 2013.
[23] K. Ford. From Kolmogorov's theorem on empirical distribution to number theory. In Kolmogorov's heritage in mathematics, pages 97–108. Springer, Berlin, 2007.
[24] D. Gasbarra, T. Sottinen, and E. Valkeila. Gaussian bridges. In Stochastic analysis and applications, volume 2 of Abel Symp., pages 361–382. Springer, Berlin, 2007.
[25] V. I. Glivenko. Sulla determinazione empirica della legge di probabilità. Giorn. Ist. Ital. Attuari, 4:92–99, 1933.
[26] U. Grenander. Stochastic processes and statistical inference. Ark. Mat., 1:195–277, 1950.
[27] T. Hida and M. Hitsuda. Gaussian processes, volume 120 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1993.
[28] J. S. Horne, E. O. Garton, S. M. Krone, and J. S. Lewis. Analyzing animal movements using Brownian bridges. Ecology, 88:2354–2363, 2007.
[29] I. A. Ibragimov and Y. A. Rozanov. Gaussian random processes, volume 9 of Applications of Mathematics. Springer-Verlag, New York-Berlin, 1978.
[30] I. Kaj and M. S. Taqqu. Convergence to fractional Brownian motion and to the Telecom process: the integral representation approach. In In and out of equilibrium. 2, volume 60 of Progr. Probab., pages 383–427. Birkhäuser, Basel, 2008.
[31] A. N. Kolmogorov. Sulla determinazione empirica di una legge di distribuzione. Giorn. Ist. Ital. Attuari, 4:83–91, 1933.
[32] W. V. Li and W. Linde. Approximation, metric entropy and small ball estimates for Gaussian measures. Ann. Probab., 27(3):1556–1578, 1999.
[33] M. Lifshits. Lectures on Gaussian processes. Springer Briefs in Mathematics. Springer, Heidelberg, 2012.
[34] R. S. Liptser and A. N. Shiryaev. Statistics of random processes. I, volume 5 of Applications of Mathematics (New York). Springer-Verlag, Berlin, expanded edition, 2001.
[35] R. S. Liptser and A. N. Shiryaev. Statistics of random processes. II, volume 6 of Applications of Mathematics (New York). Springer-Verlag, Berlin, expanded edition, 2001.
[36] M. Loève. Probability theory. Third edition. D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto, Ont.-London, 1963.
[37] R. Mansuy. On a one-parameter generalization of the Brownian bridge and associated quadratic functionals. J. Theoret. Probab., 17(4):1021–1029, 2004.
[38] G. V. Martynov. Computation of the distribution functions of quadratic forms of normal random variables. Theory Probab. Appl., 20(4):797–809, 1975.
[39] G. Peskir and A. Shiryaev. Optimal stopping and free-boundary problems. Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel, 2006.
[40] N. V. Smirnov. On the distribution of von Mises ω²-test. Matem. Sb., 2(1):973–993, 1937. In Russian.
[41] N. V. Smirnov. On the estimation of the discrepancy between empirical curves of distribution for two independent samples. Bulletin of Moscow University, 2:3–16, 1939. In Russian.
[42] D. Sondermann, M. Trede, and B. Wilfling. Estimating the degree of interventionist policies in the run-up to EMU. Applied Economics, 43(2):207–218, 2009.
[43] T. Sottinen and A. Yazigi. Generalized Gaussian bridges. Stochastic Process. Appl., 124(9):3084–3105, 2014.
[44] M. Trede and B. Wilfling. Estimating exchange rate dynamics with diffusion processes: an application to Greek EMU data. Empirical Economics, 33(1):23–39, 2007.
[45] S. Zhao and Q. Liu. Large deviations for parameter estimators of α-Brownian bridge. J. Statist. Plann. Inference, 142(3):695–707, 2012.
[46] S. Zhao and Y. Zhou. Sharp large deviations for the log-likelihood ratio of an α-Brownian bridge. Statist. Probab. Lett., 83(12):2750–2758, 2013.
