Introduction to Lévy Processes

Huang Lorick
[email protected]

Document type: These are lecture notes. Typos, errors, and imprecisions are expected. Comments are welcome! This version is available at http://perso.math.univ-toulouse.fr/lhuang/enseignements/

Year of publication: 2021

Terms of use: This work is licensed under a Creative Commons Attribution 4.0 International license: https://creativecommons.org/licenses/by/4.0/

Contents

1 Introduction and Examples
  1.1 Infinitely divisible distributions
  1.2 Examples of infinitely divisible distributions
  1.3 The Lévy-Khintchine formula
  1.4 Digression on Relativity
2 Lévy processes
  2.1 Definition of a Lévy process
  2.2 Examples of Lévy processes
  2.3 Exploring the Jumps of a Lévy Process
3 Proof of the Lévy-Khintchine formula
  3.1 The Lévy-Itô Decomposition
  3.2 Consequences of the Lévy-Itô Decomposition
  3.3 Exercises
  3.4 Discussion
4 Lévy processes as Markov Processes
  4.1 Properties of the Semi-group
  4.2 The Generator
  4.3 Recurrence and Transience
  4.4 Fractional Derivatives
5 Elements of Stochastic Calculus with Jumps
  5.1 Example of Use in Applications
  5.2 Stochastic Integration
  5.3 Construction of the Stochastic Integral
  5.4 Quadratic Variation and Itô Formula with jumps
  5.5 Stochastic Differential Equation
Bibliography

Chapter 1. Introduction and Examples

In this introductory chapter, we start by defining the notion of infinitely divisible distributions. We then give examples of such distributions and end the chapter by stating the celebrated Lévy-Khintchine formula. The proof of the latter will be given in a subsequent chapter.

1.1 Infinitely divisible distributions

Historically, Paul Lévy was interested in the "arithmetic of probabilities", where he investigated properties of probability distributions that can be decomposed as sums of independent copies of themselves. This field gave rise to what we now call infinitely divisible distributions. Infinitely divisible distributions and Lévy processes are closely related, as Lévy processes have infinitely divisible distributions. We start by introducing the concept of an infinitely divisible distribution and give some examples.

Definition 1.1.1. We say that a random variable $X$ is infinitely divisible if for all $n \in \mathbb{N}$ there exist independent and identically distributed random variables $Y_1, \dots, Y_n$ such that
\[ X \overset{(d)}{=} Y_1 + \cdots + Y_n. \]

A very simple consequence of this definition is the following result:

Proposition 1.1.2. The following are equivalent:
  • $X$ has an infinitely divisible distribution;
  • for each $n$, the distribution $\mu_X$ of $X$ has an $n$-fold convolution root that is itself the distribution of a random variable;
  • for each $n$, the characteristic function $\varphi_X$ of $X$ has an $n$-th root that is itself the characteristic function of a random variable.

We leave the proof as an exercise.

1.2 Examples of infinitely divisible distributions

Gaussian random variables. Let $X$ be a random vector in $\mathbb{R}^d$. We say that $X$ has a Gaussian distribution if there exist $m \in \mathbb{R}^d$ and a symmetric positive definite matrix $A$ such that $X$ has density
\[ f(x) = \frac{1}{(2\pi)^{d/2}\sqrt{\det A}} \exp\Big(-\frac{1}{2}\big\langle x - m,\, A^{-1}(x - m)\big\rangle\Big). \]
In this case, we write $X \sim \mathcal{N}(m, A)$; $m$ is the mean and $A$ the covariance matrix. An easy exercise gives that the Fourier transform of such a random variable is
\[ \varphi_X(\xi) = \exp\Big( i\langle \xi, m\rangle - \frac{1}{2}\langle \xi, A\xi\rangle \Big). \]
Hence, it is easy to see that
\[ \varphi_X(\xi)^{1/n} = \exp\Big( i\Big\langle \xi, \frac{m}{n}\Big\rangle - \frac{1}{2}\Big\langle \xi, \frac{A}{n}\,\xi\Big\rangle \Big). \]
Consequently, $X$ is infinitely divisible with $Y_i \sim \mathcal{N}\big(\frac{m}{n}, \frac{A}{n}\big)$.
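To make the divisibility concrete, here is a short numerical sanity check (a sketch using numpy, not part of the original notes; the dimension, sample size, and the particular $m$ and $A$ below are arbitrary choices): we sample $X$ directly from $\mathcal{N}(m, A)$, then as a sum of $n$ independent $\mathcal{N}(m/n, A/n)$ vectors, and compare empirical means and covariances.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, N = 2, 5, 200_000                      # dimension, number of summands, Monte Carlo size
m = np.array([1.0, -0.5])                    # mean vector (arbitrary choice)
A = np.array([[2.0, 0.3], [0.3, 1.0]])       # symmetric positive definite covariance matrix

# Direct samples from N(m, A)
X = rng.multivariate_normal(m, A, size=N)

# Sum of n i.i.d. N(m/n, A/n) vectors: same distribution as X by the computation above
Y = sum(rng.multivariate_normal(m / n, A / n, size=N) for _ in range(n))

# Both should print True, up to Monte Carlo error
print(np.allclose(X.mean(axis=0), Y.mean(axis=0), atol=1e-2))
print(np.allclose(np.cov(X.T), np.cov(Y.T), atol=3e-2))
```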
Poisson random variable. We say that a discrete random variable $X$ has a Poisson distribution with parameter $\lambda$ if
\[ \mathbb{P}(X = k) = e^{-\lambda} \frac{\lambda^k}{k!}. \]
Consider now $Y$ independent of $X$ with Poisson distribution of parameter $\mu$. We have:
\[ \mathbb{P}(X + Y = k) = \sum_{l=0}^{k} \mathbb{P}(X = l,\, Y = k - l) = \sum_{l=0}^{k} e^{-\lambda}\frac{\lambda^l}{l!}\, e^{-\mu}\frac{\mu^{k-l}}{(k-l)!}. \]
Grouping terms in the last identity, we get:
\[ \mathbb{P}(X + Y = k) = e^{-(\lambda+\mu)} \frac{1}{k!}\sum_{l=0}^{k} \frac{k!}{l!(k-l)!}\,\lambda^l \mu^{k-l} = e^{-(\lambda+\mu)}\frac{(\lambda+\mu)^k}{k!}. \]
Consequently, the convolution of two Poisson distributions is a Poisson distribution, and Poisson distributions are infinitely divisible. Alternatively, one can show that the characteristic function of a Poisson distribution is
\[ \varphi_X(\xi) = \exp\big(\lambda(e^{i\xi} - 1)\big), \]
giving that Poisson distributions are infinitely divisible, with $Y_i$ Poisson with parameter $\frac{\lambda}{n}$.

Compound Poisson random variable. Consider $N$ a Poisson random variable with parameter $\lambda$. Since $N$ is integer-valued, one can form the sum
\[ X = \sum_{i=1}^{N} Y_i, \]
where the $Y_i$ are independent and identically distributed, and independent of $N$. Let us denote by $\mu_Y$ their common distribution.

Proposition 1.2.1. The characteristic function of $X$ is
\[ \varphi_X(\xi) = \exp\Big( \lambda \int \big(e^{i\langle\xi,y\rangle} - 1\big)\,\mu_Y(dy) \Big). \]

Proof. We have
\[ \varphi_X(\xi) = \mathbb{E}\big(e^{i\langle\xi,X\rangle}\big) = \mathbb{E}\Big(e^{i\langle\xi,\sum_{i=1}^{N} Y_i\rangle}\Big) = \sum_{k=0}^{+\infty} \mathbb{E}\Big(e^{i\langle\xi,\sum_{i=1}^{k} Y_i\rangle}\,\mathbf{1}_{N=k}\Big). \]
Now, exploiting the independence of $N$ and the $Y_i$'s, we can write:
\[ \varphi_X(\xi) = \sum_{k=0}^{+\infty} \mathbb{E}\Big(e^{i\langle\xi,\sum_{i=1}^{k} Y_i\rangle}\Big)\,\mathbb{P}(N = k) = \sum_{k=0}^{+\infty} \mathbb{E}\Big(e^{i\langle\xi,\sum_{i=1}^{k} Y_i\rangle}\Big)\, e^{-\lambda}\frac{\lambda^k}{k!}. \]
Next, we note that
\[ \mathbb{E}\Big(e^{i\langle\xi,\sum_{i=1}^{k} Y_i\rangle}\Big) = \prod_{i=1}^{k}\mathbb{E}\big(e^{i\langle\xi,Y_i\rangle}\big) = \varphi_Y(\xi)^k, \]
denoting by $\varphi_Y$ the common characteristic function of the $Y_i$'s. We thus obtain
\[ \varphi_X(\xi) = e^{-\lambda}\sum_{k=0}^{+\infty} \frac{\lambda^k}{k!}\,\varphi_Y(\xi)^k = \exp\big(\lambda(\varphi_Y(\xi) - 1)\big). \]
To conclude, we only write $\varphi_Y(\xi)$ as $\int e^{i\langle\xi,y\rangle}\mu_Y(dy)$.

Hence, we see that a compound Poisson random variable also has an infinitely divisible distribution.

1.3 The Lévy-Khintchine formula

One can notice that in every example above, the characteristic function has an exponential form. This is no coincidence: it is a property shared by all infinitely divisible distributions. In fact, one can even give more information on the exponent. This is the so-called Lévy-Khintchine formula. In this section we only state the result; the proof will be given later.

Theorem 1.3.1. A probability distribution $\mu$ on $\mathbb{R}^d$ is infinitely divisible if and only if there exist
  • a vector $b \in \mathbb{R}^d$, called the drift, or mean,
  • a symmetric positive definite $d \times d$ matrix $A$, called the covariance matrix,
  • a measure $\nu$ on $\mathbb{R}^d \setminus \{0\}$ such that $\int_{\mathbb{R}^d\setminus\{0\}} \min(|y|^2, 1)\,\nu(dy) < +\infty$,
such that:
\[ \int_{\mathbb{R}^d} e^{i\langle\xi,y\rangle}\mu(dy) = \exp\Big( i\langle b,\xi\rangle - \frac{1}{2}\langle\xi, A\xi\rangle + \int_{\mathbb{R}^d\setminus\{0\}} \big(e^{i\langle\xi,y\rangle} - 1 - i\langle\xi,y\rangle\,\mathbf{1}_{\{|y|\le 1\}}\big)\,\nu(dy) \Big). \]

Remark 1.3.2. Such a measure $\nu$ is called a Lévy measure. Later on, this measure will be linked to the jumps when discussing Lévy processes. It shall be noted that one can state the whole theory by adding that $\nu(\{0\}) = 0$ and integrating over $\mathbb{R}^d$, the point being that there should be no jumps of size 0. Besides, there is nothing special about the cut-off $\mathbf{1}_{\{|y|\le 1\}}$ appearing above: one could take any $\varepsilon > 0$ and consider instead $\mathbf{1}_{\{|y|\le\varepsilon\}}$, or even $\frac{1}{1+|y|^2}$. Doing so would change the value of $b$.

Remark 1.3.3. Obviously, the outstanding part of the previous theorem is the "only if" part. Indeed, if we are given a distribution with the above characteristic function, it is quite easy to see that it is infinitely divisible.
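As an illustration (a numerical sketch, not from the original notes; the rate $\lambda = 3$ and the choice of standard Gaussian jumps are arbitrary), one can check Proposition 1.2.1 by simulation: sample a compound Poisson variable and compare its empirical characteristic function with $\exp(\lambda(\varphi_Y(\xi) - 1))$. This is also the simplest instance of the Lévy-Khintchine formula, corresponding to $A = 0$ and $\nu = \lambda\,\mu_Y$ (up to the small-jump compensation absorbed into $b$).

```python
import numpy as np

rng = np.random.default_rng(1)
lam, N = 3.0, 200_000                     # Poisson rate, Monte Carlo sample size

# Simulate X = sum_{i=1}^{K} Y_i with K ~ Poisson(lam) and Y_i ~ N(0,1)
counts = rng.poisson(lam, size=N)
X = np.array([rng.standard_normal(k).sum() for k in counts])

xi = np.linspace(-2.0, 2.0, 9)
emp = np.array([np.exp(1j * u * X).mean() for u in xi])   # empirical characteristic function
phi_Y = np.exp(-xi**2 / 2)                                # characteristic function of N(0,1)
theory = np.exp(lam * (phi_Y - 1))                        # Proposition 1.2.1

print(np.max(np.abs(emp - theory)))   # small (order 1/sqrt(N)), up to Monte Carlo error
```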
Definition 1.3.4. The triplet $(A, \nu, b)$ above is called the characteristic triplet, and it completely determines the distribution $\mu$. Note that since $A$ is a symmetric positive definite matrix, we will interchangeably write $(Q, \nu, b)$ for the generating triplet, where $Q$ is the quadratic form defined by $Q(z) = \langle z, Az\rangle$.

One interpretation of this result is that any infinitely divisible distribution can be decomposed as a sum of fundamental building blocks. One immediately observes that the term $\frac{1}{2}\langle\xi, A\xi\rangle$ in the exponent comes from a Gaussian distribution. Besides, barring the term multiplied by the indicator function, the integral $\int_{\mathbb{R}^d\setminus\{0\}} \big(e^{i\langle\xi,y\rangle} - 1\big)\,\nu(dy)$ is the characteristic exponent of a compound Poisson distribution.

Stable distributions

In this paragraph, we introduce a very important class of distributions known as stable distributions. Historically, these distributions arise from extensions of the Central Limit Theorem. Let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables, and for $(a_n)$, $(b_n)$ two sequences of real numbers, form
\[ S_n = \frac{X_1 + \cdots + X_n - a_n}{b_n}. \]
If there exists a random variable $X$ such that $S_n$ converges in distribution to $X$, then we say that $X$ has a stable distribution. A rather classical example is the case where $X_1$ has a finite second moment, with mean $m$ and variance $\sigma^2$: one can take $b_n = \sigma\sqrt{n}$ and $a_n = nm$, and the Central Limit Theorem gives $S_n \Rightarrow \mathcal{N}(0,1)$; in particular, every Gaussian distribution $\mathcal{N}(m, \sigma^2)$ is stable. As an exercise, the reader can prove the following result:

Proposition 1.3.5. $S_n \Rightarrow X$ if and only if for all $n$ there exist $c_n$ and $d_n$ such that
\[ X_1 + \cdots + X_n \overset{(d)}{=} c_n X + d_n, \]
where $X_1, \dots, X_n$ are independent copies of $X$.

Remark 1.3.6. In the previous proposition, if $d_n$ can be taken to be 0, then $X$ is said to be strictly stable.
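To see a stable limit appear beyond the Gaussian case, here is a numerical sketch (my own illustration, not from the notes; the tail index $\alpha = 1.5$, the Pareto jump law, and all sample sizes are arbitrary choices): for i.i.d. Pareto variables with tail index $\alpha \in (1,2)$, the variance is infinite, and centering by the mean with scaling $b_n = n^{1/\alpha}$ makes the law of $S_n$ stabilize as $n$ grows; the limit is a totally skewed $\alpha$-stable law rather than a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5                                 # tail index in (1, 2): finite mean, infinite variance
mean = alpha / (alpha - 1)                  # E[X] for a Pareto(alpha) variable with x_min = 1

def normalized_sum(n, reps):
    """Sample S_n = (X_1 + ... + X_n - n*mean) / n^(1/alpha), reps times."""
    X = rng.pareto(alpha, size=(reps, n)) + 1.0   # classical Pareto samples on [1, infinity)
    return (X.sum(axis=1) - n * mean) / n**(1.0 / alpha)

q = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(normalized_sum(500, 10_000), q))
print(np.quantile(normalized_sum(2_000, 10_000), q))
# The two rows of quantiles are close (up to Monte Carlo error): the law of S_n
# stabilizes under the n^(1/alpha) scaling, suggesting an alpha-stable limit.
```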