
SEMIMARTINGALES, MARKOV PROCESSES AND THEIR APPLICATIONS IN MATHEMATICAL FINANCE

BY JIN WANG

A dissertation submitted to the Graduate School—New Brunswick, Rutgers, The State University of New Jersey, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Graduate Program in Mathematics

Written under the direction of Paul Feehan and approved by

New Brunswick, New Jersey
October 2010

ABSTRACT OF THE DISSERTATION

Semimartingales, Markov processes and their Applications in Mathematical Finance

by Jin Wang Dissertation Director: Paul Feehan

We show that the one-dimensional marginal distributions of a discontinuous semimartingale can be matched by those of a Markov process. This is an extension of Gyöngy's theorem to discontinuous semimartingales.

Acknowledgements

I would like to thank my advisor, Professor Paul Feehan. His support and kindness helped me tremendously. I am deeply grateful for his constant encouragement and insightful discussions in all these years at Rutgers.

I would like to thank all of my committee members, Professor Dan Ocone, Professor Richard Gundy and Professor Panagiota Daskalopoulos, for their comments and encouragement.

I also want to thank the mathematics department of Rutgers for its support. Last but not least, I want to thank all my friends at Rutgers; they made my PhD life colorful and enjoyable.

Dedication

To my parents, Wang Xizhi and Wu Jinfeng.

Table of Contents

Abstract
Acknowledgements
Dedication
1. Introduction
2. Background
2.1. Volatility models
2.1.1. The Black-Scholes model
2.1.2. Stochastic volatility models
2.1.3. Dupire's local volatility model
2.1.4. Volatility models with jumps
2.2. Mimicking theorems
2.2.1. Gyöngy's theorem
2.2.2. Brunick's theorem
2.2.3. Bentata and Cont's theorems
3. A Proof of Gyöngy's Theorem
3.1. Uniqueness of a solution to a partial differential equation
4. Forward Equation for a Semimartingale
4.1. Semimartingales and Markovian projection
4.2. Forward equation
4.3. Construction of fundamental solutions
4.4. Existence and uniqueness of weak solutions
4.5. Uniqueness of fundamental solutions
5. Markov Processes and Pseudo-Differential Operators
5.1. Operator semigroups and Feller processes
5.2. Pseudo-differential operators
5.3. Construction of a Markov process using a symbol
6. Application of Brunick's Theorem to Barrier Options
6.1. Introduction
6.2. Application to up-and-out calls
References
Vita


Chapter 1

Introduction

In order to price and hedge financial derivatives, models of the dynamics of the underlying stocks have been introduced. The Black-Scholes model is based on the assumption that the stock price process follows a geometric Brownian motion with constant drift and volatility. It is well known that this model is too simple to capture the risk-neutral dynamics of many price processes. Dupire [10] significantly improved the Black-Scholes model by replacing the constant volatility parameter by a deterministic function of time and the stock price process. By construction, the asset price process given by Dupire's local volatility model has the same one-dimensional marginal distributions as the market price process. Therefore, European-style vanilla options, whose values are determined by the marginal distributions, can be priced correctly. In practice, the local volatility model is widely used to price not only vanilla options but also complex options with path-dependent payoffs, even though such options cannot be priced correctly by Dupire's model.

Dupire's model is a practical implementation of a famous result of Gyöngy [15]. Given an Itô process, Gyöngy proved the existence of a Markov process with the same one-dimensional marginal distributions as the given Itô process. The coefficients of Gyöngy's process are given by Dupire's local volatility model.

In [8] Brunick generalized Gyöngy's result under a weaker assumption so that the prices of path-dependent options can be determined exactly. For example, consider a barrier option whose value depends on the joint distribution of the stock price and its running maximum; Brunick's result shows that there exists a two-dimensional Markov process with the same joint distributions. It gives a model process which is perfectly calibrated against market data, but simpler than the market process.

Bentata and Cont [3], [4] extended Gyöngy's theorem to semimartingales with jumps. They showed that the flow of marginal distributions of a discontinuous semimartingale can be matched by the marginal distributions of a Markov process. The Markov process was constructed as a solution to a martingale problem. However, their proof of the main theorem is incomplete. Using this result, they derived a partial integro-differential equation for call options. This is a generalization of Dupire's local volatility formula. We will discuss the details in Section 2.2.3.

Motivated by these results, we present in this thesis a partial differential equation based proof of the mimicking theorem for semimartingales.

The thesis is organized as follows:

In chapter 2, we review some stochastic process models and mimicking theorems. The first section is dedicated to volatility models: the Black-Scholes model, stochastic volatility models, including the Heston and SABR models, Dupire's local volatility model, and some jump models, including Merton's jump diffusion model and Bates' model. In the second section we introduce the theorems of Gyöngy, Brunick, and Bentata and Cont.

In chapter 3, we give a new proof of Gyöngy's theorem. The proof is based on a uniqueness result for a parabolic equation. We first derive the partial differential equation satisfied by the marginal distributions of an Itô process. Then we construct a Markov process and show that the marginal distributions of this Markov process satisfy the same partial differential equation. By the uniqueness result in [14], we conclude that the Markov process has the same marginal distributions as the Itô process.

In chapter 4, we extend the partial differential equation proof of Gyöngy's theorem for Itô processes to the case of semimartingales. We first derive the forward equation satisfied by the probability density functions of a semimartingale and show it is obeyed by the probability density functions of the mimicking process. This forward equation is a partial integro-differential equation (PIDE). By analogy with the construction of fundamental solutions to a partial differential equation, we construct fundamental solutions of the PIDE using the parametrix method [14]. Through our construction, we discover the conditions which ensure this equation has fundamental solutions. These conditions guarantee the existence of the transition probability density function for the semimartingale. Then we derive and apply energy estimates to prove uniqueness of the fundamental solution.

Pseudo-differential operators should be natural tools to study our partial integro-differential equation. In chapter 5, we recall the relevant theory of pseudo-differential operators, discuss their relationship with the martingale problem, and indicate areas of further research.

In the last chapter, we apply Brunick's result to derive a local volatility formula for single barrier options. The market for exotic options is most developed in foreign exchange, so this formula allows us to price exotic options in the FX market, with prices consistent with the market prices of single barrier options we observe.

More ideas for further research

1. In Gyöngy's theorem, the drift and covariance processes are bounded, and the covariance process satisfies a uniform ellipticity condition. However, the covariance process in the Heston model is a CIR process which is neither bounded nor bounded away from zero. So we plan to remove these constraints.

2. We plan to give a numerical illustration of Bentata and Cont's result using Monte Carlo simulation; a toy version of such an experiment is sketched after this list.

3. We also plan to give a numerical illustration of Brunick's result by computing up-and-out call prices for 푆(0) ∈ [0, 푆max] using the original and mimicking processes, where the mimicking processes are a geometric Brownian motion or a Heston process.
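As a preview of item 2, the following Python sketch, entirely our own toy construction (the model, parameter values, and binning scheme are illustrative assumptions, not taken from [3] or [8]), estimates the local variance 퐸[푉푡∣푋푡 = 푥] of an uncorrelated Heston-type model by binned Monte Carlo regression, then simulates the mimicking local-volatility process and compares marginal quantiles:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, T = 50_000, 200, 1.0
dt = T / n_steps
bins = np.linspace(-1.5, 1.5, 31)

# 1) Simulate a toy stochastic-volatility model for the log-price X,
#    recording a binned estimate of E[V_t | X_t] at each time step.
X = np.zeros(n_paths)
V = np.full(n_paths, 0.04)
local_var = []
for _ in range(n_steps):
    Vp = np.maximum(V, 0.0)  # full truncation keeps sqrt well defined
    idx = np.digitize(X, bins)
    est = np.array([Vp[idx == k].mean() if np.any(idx == k) else Vp.mean()
                    for k in range(len(bins) + 1)])
    local_var.append(est)
    X += -0.5 * Vp * dt + np.sqrt(Vp * dt) * rng.normal(size=n_paths)
    V += 2.0 * (0.04 - Vp) * dt + 0.3 * np.sqrt(Vp * dt) * rng.normal(size=n_paths)

# 2) Simulate the mimicking (local volatility) process with those coefficients.
Y = np.zeros(n_paths)
for step in range(n_steps):
    v_loc = local_var[step][np.digitize(Y, bins)]
    Y += -0.5 * v_loc * dt + np.sqrt(v_loc * dt) * rng.normal(size=n_paths)

# The marginal laws of X_T and Y_T should be close (compare a few quantiles).
print(np.quantile(X, [0.1, 0.5, 0.9]))
print(np.quantile(Y, [0.1, 0.5, 0.9]))
```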

Chapter 2

Background

In this chapter, we review some models and theorems. The first section is dedicated to volatility models: the Black-Scholes model, stochastic volatility models, Dupire's local volatility model, and some jump models. In the second section we introduce Gyöngy's theorem, Brunick's theorem, and Bentata and Cont's result.

2.1 Volatility models

2.1.1 The Black-Scholes model

The Black-Scholes model [5] shows how to price options on a stock. A European call option gives its owner the right to buy one unit of the stock at time 푇 for the price 퐾, where 푇 is called the date of maturity and the positive number 퐾 is called the strike or exercise price. The well-known Black-Scholes formula gives the price of such an option as a function of the stock price 푆0, the strike price 퐾, the maturity 푇, the short rate of interest 푟, and the volatility of the stock 휎. Only this last parameter is not directly observable, but it can be estimated from historical data. Since call options are now actively traded, the work of Black and Scholes on how to price them might seem irrelevant, as the prices are already known. However, this is not the case. First, there exist more complex options, often called exotic options; a simple example that we will consider later is a barrier option. These are not actively traded and therefore need to be priced. More importantly, the work of Black and Scholes shows that "it is possible to create a hedged position, consisting of a long position in the stock and a short position in [calls on the same stock], whose value will not depend on the price of the stock". In other words, they provide a method to hedge the future risk of a portfolio. The replicating portfolio has to be rebalanced continuously in a precise way. Black-Scholes pricing is widely used in practice because it is easy to calculate and provides an explicit formula in terms of observable variables.
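Since the formula is used repeatedly in what follows, we record a minimal Python sketch of it here (the function name and the sample inputs are our own illustrative choices, not part of the thesis):

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S0, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

# Example: an at-the-money one-year call.
print(black_scholes_call(S0=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2))
```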

However, prices obtained from the Black-Scholes formula often differ from market prices. In the model, the underlying asset is assumed to follow a geometric Brownian motion with constant volatility 휎, that is,

$$dS_t = \mu S_t\,dt + \sigma S_t\,dW(t),$$

where 푊(푡) is a Brownian motion. Given the market price of a call or put option, the implied volatility is the unique volatility parameter that, inserted into the Black-Scholes formula, reproduces the market price. At a given maturity, options with different strikes have different implied volatilities. When plotted against strike, implied volatilities exhibit a smile or a skew effect. Although volatility is not constant, results from the Black-Scholes model are helpful in practice. The language of implied volatility is a useful alternative to market prices: it gives a metric by which option prices can be compared across different strikes, maturities, underlyings, and observation times.
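Numerically, the implied volatility is obtained by inverting the Black-Scholes formula in 휎; since the call price is strictly increasing in 휎, a simple bisection converges. A sketch reusing black_scholes_call from the previous snippet (the bracket and tolerance are illustrative assumptions):

```python
def implied_volatility(price, S0, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert the Black-Scholes formula for sigma by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if black_scholes_call(S0, K, T, r, mid) < price:
            lo = mid  # price too low: volatility must be larger
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: recover sigma = 0.2 from its own Black-Scholes price.
p = black_scholes_call(100.0, 110.0, 1.0, 0.05, 0.2)
print(implied_volatility(p, 100.0, 110.0, 1.0, 0.05))
```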

2.1.2 Stochastic volatility models

Stochastic volatility models are useful because they explain in a self-consistent way why options with different strikes and expirations have different Black-Scholes implied volatilities. Under the assumption that the volatility of the underlying price is a stochastic process rather than a constant, they produce more realistic dynamics for the underlying.

Suppose that the underlying price 푆 and its variance 푉 satisfy the following SDEs:

$$dS_t = \mu_t S_t\,dt + \sqrt{V_t}\,S_t\,dW_1,$$
$$dV_t = \alpha(S_t, V_t, t)\,dt + \eta\,\beta(S_t, V_t, t)\sqrt{V_t}\,dW_2,$$
$$\langle dW_1, dW_2\rangle = \rho\,dt,$$

where 휇푡 is the instantaneous drift of 푆, 휂 is the volatility of the volatility process, 휌 is the correlation between random stock price returns and changes in 푉푡, and 푊1, 푊2 are Brownian motions.

The stochastic process followed by the stock price is equivalent to the one assumed in the Black-Scholes model. As 휂 approaches 0, the stochastic volatility model becomes a time-dependent volatility version of the Black-Scholes model. The stochastic process followed by the variance can be very general. In the Black-Scholes model, there is only one source of randomness, the stock price, which can be hedged by stocks. In the stochastic volatility model, random changes in volatility also need to be hedged in order to construct a risk free portfolio.

The most popular and commonly used stochastic volatility models are the Heston model and the SABR model.

In the Heston model [16], the volatility follows a square root process, namely a CIR process

$$dV_t = \theta(\omega - V_t)\,dt + \eta\sqrt{V_t}\,dW_2, \tag{2.1}$$

where 휔 is the mean long-term volatility, 휃 is the speed at which the volatility reverts to its long-term mean, and 휂 is the volatility of the volatility process.
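The CIR process (2.1) is straightforward to simulate with an Euler scheme; a common device is "full truncation", which replaces 푉 by max(푉, 0) inside the drift and the square root so the scheme stays well defined even when the discretized variance dips below zero. A minimal sketch, with parameter values that are our own illustrative choices:

```python
import numpy as np

def simulate_cir(v0, theta, omega, eta, T, n_steps, rng):
    """Full-truncation Euler scheme for dV = theta*(omega - V) dt + eta*sqrt(V) dW."""
    dt = T / n_steps
    v = np.empty(n_steps + 1)
    v[0] = v0
    for i in range(n_steps):
        vp = max(v[i], 0.0)  # truncated value used in drift and diffusion
        dw = rng.normal(0.0, np.sqrt(dt))
        v[i + 1] = v[i] + theta * (omega - vp) * dt + eta * np.sqrt(vp) * dw
    return v

path = simulate_cir(v0=0.04, theta=2.0, omega=0.04, eta=0.3,
                    T=1.0, n_steps=252, rng=np.random.default_rng(0))
print(path[-1])
```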

The SABR model (stochastic alpha, beta, rho) [22] describes a single forward 퐹 under stochastic volatility 푉푡:

$$dF_t = V_t F_t^{\beta}\,dW_1,$$
$$dV_t = \alpha V_t\,dW_2.$$

The initial values 퐹0 and 푉0 are the current forward price and volatility, whereas 푊1 and 푊2 are two correlated Brownian motions with correlation coefficient 휌. The constant parameters 훽, 훼 satisfy 0 ≤ 훽 ≤ 1 and 훼 ≥ 0.

Once a particular stochastic volatility model is chosen, it must be calibrated against existing market data. Calibration is the process of identifying the set of model parameters that best reproduce the observed data. For instance, in the Heston model, the set of model parameters {휔, 휃, 휂, 휌} can be estimated from historical underlying prices. Once the calibration has been performed, the model needs to be re-calibrated over time.

2.1.3 Dupire’s local volatility model

Given the computational complexity of stochastic volatility models and the difficulty of calibrating their parameters to the market prices of vanilla options, practitioners sought a simpler way to price exotic options consistently. Since Breeden and Litzenberger [7], it has been well understood that the risk-neutral probability density function can be derived from the market prices of European options. Dupire [10] and Derman and Kani [9] showed that, under the risk-neutral measure, there exists a unique diffusion process consistent with these distributions.

In Dupire's local volatility model, the constant volatility is replaced by a deterministic function of time and the stock price process, which is known as the local volatility function. The underlying price 푆푡 is assumed to follow a stochastic differential equation

푑푆푡 = 푟푆푡 푑푡 + 휎(푡, 푆푡)푆푡 푑푊 (푡),

where 푟 is the instantaneous risk free rate and 푊 (푡) is a Brownian motion.

The diffusion coefficient 휎(푡, 푆푡) is consistent with the market prices of all options on a given underlying. Dupire's formula shows

$$\sigma^2(K, T; S_0) = \frac{\dfrac{\partial C}{\partial T} + rK\dfrac{\partial C}{\partial K}}{\dfrac{1}{2}K^2\dfrac{\partial^2 C}{\partial K^2}}.$$

By construction, European-style vanilla options whose values are determined by the marginal distributions are priced correctly. In practice, this model is used to determine prices of exotic options which are consistent with observed prices of vanilla options.
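In practice one evaluates Dupire's formula on a grid of quoted call prices by finite differences. The sketch below implements exactly the quotient above; the array layout and uniform grids are our own assumptions, and real market quotes would first need smoothing and arbitrage-free interpolation, since the second derivative in the denominator is numerically unstable otherwise:

```python
import numpy as np

def dupire_local_vol(C, T_grid, K_grid, r):
    """Central finite differences for Dupire's formula on interior grid points.

    C[i, j] is the call price at maturity T_grid[i] and strike K_grid[j];
    both grids are assumed uniform. Returns sigma on the interior points.
    """
    dT = T_grid[1] - T_grid[0]
    dK = K_grid[1] - K_grid[0]
    C_T = (C[2:, 1:-1] - C[:-2, 1:-1]) / (2 * dT)
    C_K = (C[1:-1, 2:] - C[1:-1, :-2]) / (2 * dK)
    C_KK = (C[1:-1, 2:] - 2 * C[1:-1, 1:-1] + C[1:-1, :-2]) / dK ** 2
    K = K_grid[1:-1][None, :]
    local_var = (C_T + r * K * C_K) / (0.5 * K ** 2 * C_KK)
    return np.sqrt(np.maximum(local_var, 0.0))
```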

In [11], Dupire showed that the local variance is a conditional expectation of the instantaneous variance. Assuming the stock price follows

$$dS_t = \mu_t S_t\,dt + \sigma_t S_t\,dW(t),$$

he derived

$$\sigma^2(K, T; S_0) = E[\sigma_T^2 \mid S_T = K],$$

that is, the local variance is the expectation of the instantaneous variance conditional on the final stock price 푆푇 being equal to the strike price 퐾. This equation implies that

Dupire's local volatility model is in fact a practical implementation of Gyöngy's theorem, which we will introduce in the next section.

2.1.4 Volatility models with jumps

Diffusion-based volatility models cannot explain why the implied volatility skew is so steep for very short expirations, nor why the short-dated term structure of skew is inconsistent with any diffusion model. Therefore, jumps need to be included in the model. Examples of such models are Merton's jump diffusion model [21] and Bates' jump-diffusion stochastic volatility model [2]. In these models, the dynamics of the underlying is easy to understand and describe, since the distribution of jump sizes is known, and they are easy to simulate using the Monte Carlo method; a path-simulation sketch is given below.
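A minimal Monte Carlo sketch of a Merton-type jump diffusion (the parameter names anticipate the next subsection; the values and function name are our own illustrative choices):

```python
import numpy as np

def simulate_merton(S0, gamma, sigma, lam, mu_j, delta, T, n_steps, n_paths, seed=0):
    """Simulate terminal prices in Merton's jump diffusion.

    Between jumps the log-price diffuses; jump counts per step are Poisson
    with mean lam*dt, and each jump adds a N(mu_j, delta^2) log-increment.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    logS = np.full(n_paths, np.log(S0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        k = rng.poisson(lam * dt, n_paths)  # number of jumps in this step
        jumps = mu_j * k + delta * np.sqrt(k) * rng.normal(size=n_paths)
        logS += (gamma - 0.5 * sigma ** 2) * dt + sigma * dW + jumps
    return np.exp(logS)

ST = simulate_merton(100.0, 0.05, 0.2, 0.5, -0.1, 0.15, 1.0, 252, 10_000)
print(ST.mean())
```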

Merton’s jump diffusion model

Assume the stock price follows the SDE

$$dS = \gamma S\,dt + \sigma S\,dW + (J - 1)S\,dq,$$

where 푞 is a Poisson process: 푑푞 = 1 with probability 휆푑푡 and 푑푞 = 0 with probability 1 − 휆푑푡. The jump size 퐽 is lognormally distributed with mean log-jump 휇 and standard deviation 훿. We can rewrite the SDE as

$$dS = \gamma S\,dt + \sigma S\,dW + (e^{\mu + \delta Z} - 1)S\,dq,$$

with 푍 ∼ 푁(0, 1). This allows us to write the probability density 푝푡(푥) of the log-price:

$$p_t(x) = e^{-\lambda t}\sum_{k=0}^{\infty}\frac{(\lambda t)^k}{k!}\,\frac{\exp\!\left(-\dfrac{(x - \gamma t - k\mu)^2}{2(\sigma^2 t + k\delta^2)}\right)}{\sqrt{2\pi(\sigma^2 t + k\delta^2)}}.$$

Prices of European options in this model can be obtained as a series in which each term involves a Black-Scholes formula.
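The series can be evaluated directly by truncating at a moderate number of terms, since the Poisson weights decay factorially. A sketch (the truncation level and parameter values are illustrative assumptions):

```python
import numpy as np

def merton_log_density(x, t, gamma, sigma, lam, mu, delta, n_terms=50):
    """Truncated series for p_t(x) above, conditioning on k jumps."""
    x = np.asarray(x, dtype=float)
    p = np.zeros_like(x)
    log_w = -lam * t  # log of the Poisson weight e^{-lam t} (lam t)^k / k!
    for k in range(n_terms):
        var = sigma ** 2 * t + k * delta ** 2
        mean = gamma * t + k * mu
        p += np.exp(log_w - (x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        log_w += np.log(lam * t) - np.log(k + 1)
    return p

print(merton_log_density(0.0, 1.0, gamma=0.05, sigma=0.2, lam=0.5, mu=-0.1, delta=0.15))
```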

Bates’ model

Bates introduced the jump-diffusion stochastic volatility model by adding proportional log-normal jumps to the Heston stochastic volatility model. The model has the following form:

$$dS_t = \mu_t S_t\,dt + \sqrt{V_t}\,S_t\,dW_1 + dJ_t,$$
$$dV_t = \theta(\omega - V_t)\,dt + \eta\sqrt{V_t}\,dW_2,$$

where 푑푊1 and 푑푊2 are Brownian motions with correlation 휌, and 퐽푡 is a compound Poisson process with intensity 휆 and a log-normal distribution of jump sizes such that, if 푘 is its jump size, then $\ln(1 + k) \sim N(\ln(1 + \bar k) - \frac{1}{2}\delta^2, \delta^2)$. The no-arbitrage condition fixes the drift of the risk-neutral process: under the risk-neutral probability, $\mu_t = r - \lambda\bar k$. Applying Itô's lemma, we obtain the equation for the log-price $X(t) = \ln S_t$,

$$dX(t) = \Big(r - \lambda\bar k - \tfrac{1}{2}V_t\Big)dt + \sqrt{V_t}\,dW_1(t) + d\tilde J_t,$$

where $\tilde J_t$ is a compound Poisson process with intensity 휆 and a normal distribution of jump sizes.

Bates' model can also be viewed as a generalization of Merton's jump diffusion model allowing for stochastic volatility. Although the no-arbitrage condition fixes the drift of the price process, the risk-neutral measure is not unique, because other parameters of the model, for example the intensity of jumps and the parameters of the jump size distribution, can be changed.

2.2 Mimicking theorems

We want to construct simple processes which mimic certain features of the behavior of more complicated processes.

Since European option prices are uniquely determined by the marginal distributions of the underlying price process, we review in this section some theorems in which the marginal distributions of a general process are matched by those of a Markov process.

2.2.1 Gyöngy's theorem

Dupire derived the local volatility formula using the forward equation. In an earlier work, Gyöngy [15] developed a result that is considered a rigorous proof of the existence of the local volatility model. He proved the following result.

Theorem 2.1. (Gyöngy [15, Theorem 4.6]) Let 푊 be an 푟-dimensional Brownian motion, and let

$$dX(t) = \mu_t\,dt + \sigma_t\,dW(t)$$

be a 푑-dimensional Itô process, where 휇 is a bounded 푑-dimensional adapted process and 휎 is a bounded 푑 × 푟-dimensional adapted process such that $\sigma\sigma^T$ is uniformly positive definite. Then there exist deterministic measurable functions $\hat\mu$ and $\hat\sigma$ such that

$$\hat\mu(t, X(t)) = E[\mu_t \mid X(t)] \quad \text{a.s. for each } t,$$
$$\hat\sigma\hat\sigma^T(t, X(t)) = E[\sigma_t\sigma_t^T \mid X(t)] \quad \text{a.s. for each } t,$$

and there exists a weak solution to the stochastic differential equation

$$d\hat X_t = \hat\mu(t, \hat X_t)\,dt + \hat\sigma(t, \hat X_t)\,d\hat W(t)$$

such that $\mathscr{L}(\hat X_t) = \mathscr{L}(X(t))$ for all $t \in \mathbb{R}_+$, where $\mathscr{L}$ denotes the law of a random variable and $\hat W(t)$ denotes another Brownian motion, possibly on another probability space.

The assumptions on the drift and covariance processes in Gyöngy's theorem seem too restrictive for many applications. For example, Atlan [1] computed the conditional expectations for the Heston model using properties of the CIR process and then used Gyöngy's theorem to ensure the existence of a diffusion with the same marginal distributions. However, the covariance process in the Heston model is a CIR process (2.1) which is neither bounded nor bounded away from zero, so the conditions of Gyöngy's theorem are not satisfied in this application.

Gyöngy's theorem is only valid for continuous Itô processes. It is important to extend the result to processes with jumps.

2.2.2 Brunick’s theorem

In practice, the local volatility model is also used to price complex options with path-dependent payoffs. However, such prices cannot be uniquely determined by the marginal distributions of the asset price process. For example, pricing a barrier option requires knowledge of the joint probability distribution of the asset price process and its running maximum. Brunick [8] generalized Gyöngy's result under a weaker assumption so that the prices of path-dependent options can be determined exactly.

Definition 2.1. ([8, Definition 2.1]) Let $Y: \mathbb{R}_+ \times C(\mathbb{R}_+, \mathbb{R}^d) \to \mathbb{R}^n$ be a predictable process. We say that the path-functional 푌 is a measurably updatable statistic if there exists a measurable function

$$\phi: \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}_+ \times C(\mathbb{R}_+, \mathbb{R}^d) \to \mathbb{R}^n$$

such that $Y(t+u, x) = \phi(t, Y(t, x); u, \Delta(t, x))$ for all $x \in C(\mathbb{R}_+, \mathbb{R}^d)$, where the map $\Delta: \mathbb{R}_+ \times C(\mathbb{R}_+, \mathbb{R}^d) \to C(\mathbb{R}_+, \mathbb{R}^d)$ is defined by $\Delta(t, x)(u) = x(t+u) - x(t)$.

A measurably updatable statistic is a path functional whose path-dependence can be summed up by a single vector in $\mathbb{R}^n$. For example, $Y^1(t, x) = x(t)$ is an updatable statistic, as $Y^1(t+u, x) = Y^1(t, x) + \Delta(t, x)(u)$. Letting $x^*(t) = \sup_{u \le t} x(u)$, we see that $Y^2(t, x) = [x(t), x^*(t)] \in \mathbb{R}^2$ is an updatable statistic, as we can write (a numerical check of this update rule is sketched below)

$$Y^2(t+u, x) = \Big[x(t) + \Delta(t, x)(u),\ \max\big\{x^*(t),\ \sup_{v\in[0,u]} x(t) + \Delta(t, x)(v)\big\}\Big].$$
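The following toy Python sketch, entirely our own construction, receives the current statistic $Y^2(t, x)$ and the increment path $\Delta(t, x)$ and reproduces the terminal value and running maximum of the whole path, verifying the update rule on a sampled random walk:

```python
import numpy as np

def update_statistic(y, increments):
    """Update Y^2 = [x(t), x*(t)] from the increment path u -> x(t+u) - x(t)."""
    x_t, x_star = y
    return (x_t + increments[-1],
            max(x_star, x_t + np.max(increments)))

rng = np.random.default_rng(1)
path = np.cumsum(rng.normal(size=200))
t = 120
y_t = (path[t], path[: t + 1].max())   # statistic observed at time t
increments = path[t:] - path[t]        # the map Delta(t, x)
y_T = update_statistic(y_t, increments)
assert np.isclose(y_T[0], path[-1]) and np.isclose(y_T[1], path.max())
```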

Theorem 2.2. ([8, Theorem 2.11]) Let 푊 be an 푟-dimensional Brownian motion, and let

$$dX(t) = \mu_t\,dt + \sigma_t\,dW(t)$$

be a 푑-dimensional Itô process, where 휇 is a left-continuous 푑-dimensional adapted process and 휎 is a left-continuous 푑 × 푟-dimensional adapted process with

$$E\left[\int_0^t |\mu_s| + |\sigma_s\sigma_s^T|\,ds\right] < \infty \quad \text{for all } t.$$

Also suppose that $Y: \mathbb{R}_+ \times C(\mathbb{R}_+, \mathbb{R}^d) \to \mathbb{R}^n$ is a measurably updatable statistic such that the maps $x \mapsto Y(t, x)$ are continuous for each fixed 푡. Then there exist deterministic measurable functions $\hat\mu$ and $\hat\sigma$ such that

$$\hat\mu(t, Y(t, X)) = E[\mu_t \mid Y(t, X)] \quad \text{a.s. for Lebesgue-a.e. } t,$$
$$\hat\sigma\hat\sigma^T(t, Y(t, X)) = E[\sigma_t\sigma_t^T \mid Y(t, X)] \quad \text{a.s. for Lebesgue-a.e. } t,$$

and there exists a weak solution to the SDE

$$d\hat X_t = \hat\mu(t, Y(t, \hat X))\,dt + \hat\sigma(t, Y(t, \hat X))\,d\hat W(t)$$

such that $\mathscr{L}(Y(t, \hat X)) = \mathscr{L}(Y(t, X))$ for all $t \in \mathbb{R}_+$, where $\mathscr{L}$ denotes the law of a random variable and $\hat W(t)$ denotes another Brownian motion.

Brunick's theorem is more general than Gyöngy's. First, the requirements on 휇 and 휎 are weaker than the boundedness and uniform ellipticity in Gyöngy's theorem. Secondly, this theorem implies the existence of a weak solution which preserves the one-dimensional marginal distributions of path-dependent functionals.

Corollary 2.1. ([8, Corollary 2.16]) Let 푊 be an 푟-dimensional Brownian motion, and let

$$dX(t) = \mu_t\,dt + \sigma_t\,dW(t)$$

be a 푑-dimensional Itô process, where 휇 is a left-continuous 푑-dimensional adapted process and 휎 is a left-continuous 푑 × 푟-dimensional adapted process with

$$E\left[\int_0^t |\mu_s| + |\sigma_s\sigma_s^T|\,ds\right] < \infty \quad \text{for all } t.$$

Then there exist deterministic measurable functions $\hat\mu$ and $\hat\sigma$ such that

$$\hat\mu(t, X(t)) = E[\mu_t \mid X(t)] \quad \text{a.s. for Lebesgue-a.e. } t,$$
$$\hat\sigma\hat\sigma^T(t, X(t)) = E[\sigma_t\sigma_t^T \mid X(t)] \quad \text{a.s. for Lebesgue-a.e. } t,$$

and there exists a weak solution to the SDE

$$d\hat X_t = \hat\mu(t, \hat X_t)\,dt + \hat\sigma(t, \hat X_t)\,d\hat W(t)$$

such that $\mathscr{L}(\hat X_t) = \mathscr{L}(X(t))$ for all $t \in \mathbb{R}_+$, where $\mathscr{L}$ denotes the law of a random variable and $\hat W(t)$ denotes another Brownian motion.

This corollary relaxes the requirements on the coefficients of the Itô process in Gyöngy's theorem.

2.2.3 Bentata and Cont’s theorems

Bentata and Cont [3], [4] extended Gyöngy's theorem to discontinuous semimartingales. They showed that the flow of marginal distributions of a semimartingale 푋 can be matched by the marginal distributions of a Markov process 푌 whose infinitesimal generator is expressed in terms of the local characteristics of 푋. They gave a construction of the Markov process 푌 as the solution to a martingale problem for an integro-differential operator. They applied these results to derive a partial integro-differential equation for call options in a general semimartingale model. This generalizes Dupire's local volatility formula.

Consider a semimartingale given by the decomposition

$$X(t) = X_0 + \int_0^t \beta_s^X\,ds + \int_0^t \sigma_s^X\,dW(s) + \int_0^t\!\int_{\|y\|\le 1} y\,\tilde M_X(ds\,dy) + \int_0^t\!\int_{\|y\|>1} y\,M_X(ds\,dy),$$

where 푊 is an $\mathbb{R}^d$-valued Brownian motion, $M_X$ is a positive, integer-valued random measure on $[0, \infty) \times \mathbb{R}^n$ with compensator $\mu_X$, $\tilde M_X = M_X - \mu_X$ is the compensated measure, and $\beta_t^X$ and $\sigma_t^X$ are adapted processes valued in $\mathbb{R}^n$ and $M_{n\times d}(\mathbb{R})$. Define, for $t \ge 0$ and $z \in \mathbb{R}^d$,

$$a^Y(t, z) = E[\sigma_t^X(\sigma_t^X)^T \mid X(t-) = z],$$
$$\beta^Y(t, z) = E[\beta_t^X \mid X(t-) = z],$$
$$m_Y(t, y, z) = E[m_X(t, y) \mid X(t-) = z].$$

Let $M_Y$ be an integer-valued random measure on $[0, T] \times \mathbb{R}^d$ with compensator $m_Y(t, y, Y(t-))\,dy\,dt$, let $\tilde M_Y = M_Y - m_Y$ be the associated compensated random measure, and let $\sigma^Y: [0, \infty) \times \mathbb{R}^d \to M_{d\times n}(\mathbb{R})$ be a measurable function such that $\sigma^Y(t, z)(\sigma^Y(t, z))^T = a^Y(t, z)$.

Theorem 2.3. ([3, Theorem 1]) If the functions $\beta^Y$, $a^Y$ and $m_Y$ are continuous in $(t, z)$ on $[0, T] \times \mathbb{R}^d$, then the stochastic differential equation

$$Y(t) = X_0 + \int_0^t \beta^Y(s, Y(s))\,ds + \int_0^t \sigma^Y(s, Y(s))\,dW(s) + \int_0^t\!\int_{\|y\|\le 1} y\,\tilde M_Y(ds\,dy) + \int_0^t\!\int_{\|y\|>1} y\,M_Y(ds\,dy)$$

admits a weak solution $(Y(t))_{t\in[0,T]}$ whose marginal distributions mimic those of 푋:

$$Y(t) = X(t) \quad \text{in distribution, for any } t \in [0, T].$$

In the proof of this theorem, Bentata and Cont showed that, for any $C^2$ function 푓 with compact support,

$$E[f(X(T))] = f(X_0) + \int_0^T E[\nabla f(X(t-))\cdot\beta^Y(t, X(t-))]\,dt + \frac{1}{2}\int_0^T E\big[\mathrm{tr}[\nabla^2 f(X(t-))\,a^Y(t, X(t-))]\big]\,dt$$
$$\qquad + \int_0^T E\Big[\int_{\mathbb{R}^d}\big(f(X(t-)+y) - f(X(t-)) - 1_{\{\|y\|\le 1\}}\,y\cdot\nabla f(X(t-))\big)\,m_Y(t, dy, X(t-))\Big]\,dt.$$

It is easy to see that the mimicking process $Y(t)$ satisfies the same identity:

$$E[f(Y(T))] = f(X_0) + \int_0^T E[\nabla f(Y(t-))\cdot\beta^Y(t, Y(t-))]\,dt + \frac{1}{2}\int_0^T E\big[\mathrm{tr}[\nabla^2 f(Y(t-))\,a^Y(t, Y(t-))]\big]\,dt$$
$$\qquad + \int_0^T E\Big[\int_{\mathbb{R}^d}\big(f(Y(t-)+y) - f(Y(t-)) - 1_{\{\|y\|\le 1\}}\,y\cdot\nabla f(Y(t-))\big)\,m_Y(t, dy, Y(t-))\Big]\,dt.$$

In order to show that 푋(푡) and 푌(푡) have the same marginal distributions, they asserted that

$$E[f(X(t))] = E[f(Y(t))]$$

for any $f \in C_0^2$. However, they did not provide a proof of this statement in [3].

In [3], 푌 is constructed as the solution $((Y(t))_{t\in[0,T]}, Q_{X_0})$ of the martingale problem for the integro-differential operator

$$Lf(t, x) = \sum_{i=1}^d \beta_i^Y(t, x)\frac{\partial f}{\partial x_i}(t, x) + \sum_{i,j=1}^d \frac{a_{ij}^Y(t, x)}{2}\frac{\partial^2 f}{\partial x_i\partial x_j}(t, x) + \int_{\mathbb{R}^d}\Big[f(t, x+y) - f(t, x) - 1_{\{\|y\|\le 1\}}\sum_{i=1}^d y_i\frac{\partial f}{\partial x_i}(t, x)\Big]m_Y(t, y, x)\,dy.$$

For any $f \in E \subset \mathrm{dom}(L)$,

$$M_t^f = f(Y(t)) - f(x) - \int_0^t Lf(Y(u))\,du$$

is a $Q_x$-martingale. The uniqueness of 푌(푡) is guaranteed by the following theorem.

Theorem 2.4. ([27]) If either

(i) for any $t \in [0, T]$ and $x, z \in \mathbb{R}^d$, $x^T a_t^X x > 0$, or

(ii) there exist $\beta > 0$ and $c > 0$ such that $m(t, y, \omega) \ge \dfrac{c}{|y|^{1+\beta}}$,

then 푌 is the unique Markov process with infinitesimal generator 퐿.

Bentata and Cont also derived a Dupire-type formula for an asset price 푆 whose dynamics under the pricing measure is a stochastic volatility model with random jumps,

$$S_T = S_0 + \int_0^T r(t)S_{t-}\,dt + \int_0^T S_{t-}\delta_t\,dW(t) + \int_0^T\!\int_{-\infty}^{\infty} S_{t-}(e^y - 1)\,\tilde M(dt\,dy),$$

where $r(t)$ is the discount rate, $\delta_t$ is the spot volatility process, and $\tilde M$ is a compensated random measure with compensator

$$\mu(\omega; dt\,dy) = m(\omega; t, y)\,dt\,dy.$$

Theorem 2.5. ([4, Proposition 3]) Assume $r(t)$ and $\delta_t$ are locally bounded processes and that, for any $T > 0$, there exists a constant $0 < c_T < \infty$ such that

$$\int_0^T\!\int_{y>1} e^{2y}\,m(\cdot; t, y)\,dy\,dt \le c_T, \qquad \int (1 \wedge y^2)\,m_X(\cdot; t, y)\,dy \le c_T.$$

Define

$$\sigma(t, z) = \sqrt{E[\delta_t^2 \mid S_{t-} = z]}, \qquad \nu(t, y, z) = E[m(t, y) \mid S_{t-} = z].$$

The value $C_t(T, K)$ at time 푡 of a call option with expiry $T > t$ and strike $K > 0$ is given by

$$C_t(T, K) = E[\max(S_T - K, 0) \mid \mathcal{F}_t].$$

Then the call option price $(T, K) \mapsto C_t(T, K)$, as a function of maturity and strike, is a solution (in the sense of distributions) of the partial integro-differential equation on $[t, \infty) \times (0, \infty)$

$$\frac{\partial C}{\partial T} = \frac{K^2\sigma(T, K)^2}{2}\frac{\partial^2 C}{\partial K^2} - rK\frac{\partial C}{\partial K} + \int_{-\infty}^{\infty} \nu(T, y, K)\,e^y\Big\{C(T, Ke^{-y}) - C(T, K) + K(1 - e^{-y})\frac{\partial C}{\partial K}\Big\}\,dy,$$

with initial condition $C(t, K) = (S_t - K)^+$ for any $K > 0$.

This partial integro-differential equation generalizes the Dupire formula derived in [10] for continuous processes to the case of semimartingales with jumps. It implies that any arbitrage-free option price surface across strike and maturity may be parameterized by a local volatility function 휎(푡, 푆) and a local Lévy measure 푚(푡, 푆, 푧).

They also gave some examples of stochastic models, including marked point processes and time-changed Lévy processes.

In this thesis, we consider the same mimicking theorem for semimartingales as Bentata and Cont did; however, our approach is entirely different. In [3], Bentata and Cont concluded that $E[f(X(t))] = E[f(Y(t))]$ for any $C^2$ function 푓 and, therefore, that 푋(푡) and 푌(푡) have the same marginal distributions. In our proof, we show that $p_X(x, t)$ and $p_Y(x, t)$, the probability density functions of 푋(푡) and 푌(푡), satisfy the same partial integro-differential equation, and we then prove that this equation has a unique fundamental solution under some assumptions on the diffusion and variance coefficients.

In [25], Ming Shi extended Gyöngy's theorem to pure jump processes and applied his results to multi-name credit modeling. He also discussed the mimicking theorem for semimartingales in Section 3.4, which was joint work with me. We started with the function $f(x) = e^{-ix\cdot\xi}$ for $x, \xi \in \mathbb{R}^n$, so that $E[f(X(t))]$ is the Fourier transform of the marginal density $p_X(x, t)$. We then used this fact to derive the forward equation satisfied by $p_X(x, t)$.

In this thesis, we generalize this result by choosing 푓 to be any $C^2$ function with compact support.

Chapter 3

A Proof of Gyöngy's Theorem

Gyöngy proved his theorem by extending a result of Krylov [20]. We summarize the main ideas of his proof here. Consider the Green measure of an Itô process 푋(푡) with killing rate 훾(푡), defined by

$$\mu(A) = E\left[\int_0^{+\infty} 1_A(X(t))\,e^{-\int_0^t \gamma(s,\omega)\,ds}\,dt\right]$$

for every Borel set $A \subset \mathbb{R}^n$, where $1_A$ denotes the indicator function of the set 퐴 and 훾 is a non-negative adapted stochastic process. Denoting the mimicking process by 푌(푡), Gyöngy showed that the Green measure of $(t, X(t))$ is identical to the Green measure of $(t, Y(t))$. Taking 훾(푡) ≡ 1, we have

$$E\left[\int_0^{+\infty} e^{-t} f(t, X(t))\,dt\right] = E\left[\int_0^{+\infty} e^{-t} f(t, Y(t))\,dt\right]$$

for every bounded non-negative Borel measurable function 푓. Taking $f(t, x) = e^{-\lambda t}g(x)$ with an arbitrary non-negative constant 휆 and functions $g \in C_0(\mathbb{R}^n)$, we get

$$\int_0^{+\infty} e^{-\lambda t}e^{-t}E[g(X(t))]\,dt = \int_0^{+\infty} e^{-\lambda t}e^{-t}E[g(Y(t))]\,dt.$$

Since this holds for every $\lambda \ge 0$, the uniqueness of the Laplace transform gives $E[g(X(t))] = E[g(Y(t))]$.

Hence it follows that the distributions of 푋(푡) and 푌(푡) are the same for every 푡. However, the proof that 푋(푡) and 푌(푡) have the same Green measure is quite long and technical.

We present an intuitive proof of Gyöngy's theorem, based on the uniqueness of solutions to a parabolic equation. Assume 푋(푡) has probability density functions $p_X(x, t)$. We first derive the partial differential equation satisfied by the marginal distributions of an Itô process. Then we construct a Markov process and show that the marginal distributions of this Markov process satisfy the same partial differential equation. By the uniqueness of solutions to a parabolic equation [14], we conclude that the Markov process has the same marginal distributions as the Itô process.

3.1 Uniqueness of a solution to a partial differential equation

In this section, we give a proof of Gyöngy's theorem using the uniqueness of solutions to a partial differential equation. We shall use the following definition and theorems.

Definition 3.1. (Fundamental solutions of a parabolic equation) [14, Sections 1.1 & 1.6] Suppose $a_{ij}$, $b_i$ and 푐 are ℝ-valued functions on Ω. A fundamental solution of

$$Lu := \sum_{i,j=1}^n a_{ij}(x, t)\frac{\partial^2 u}{\partial x_i\partial x_j} + \sum_{i=1}^n b_i(x, t)\frac{\partial u}{\partial x_i} + c(x, t)u - \frac{\partial u}{\partial t} = 0 \tag{3.1}$$

in $\Omega = D \times [T_0, T_1]$ is an ℝ-valued function $\Gamma(x, t; y, \tau)$ defined for all $(x, t) \in \Omega$, $(y, \tau) \in \Omega$, $t > \tau$, which satisfies the following conditions:
(i) for fixed $(y, \tau) \in \Omega$, it satisfies (3.1), as a function of $(x, t)$, $x \in D$, $\tau < t < T_1$;
(ii) for every continuous ℝ-valued function $f(x)$ in $\bar D$ such that [14, Section 1.6, Equation (6.1)]

$$|f(x)| \le \text{const.}\,\exp(h|x|^2), \quad x \in D,$$

for some positive constant ℎ, we have

$$\lim_{t\to\tau}\int_D \Gamma(x, t; y, \tau)f(y)\,dy = f(x).$$

The integrals in Definition 3.1 exist only if $4h(t - \tau) < \lambda$, where $\lambda = \lambda_0$ is defined as in [14, Section 1.2, Equation (2.2)] and the proof of Lemma 4.2 in this thesis. If

$$h < \frac{\lambda_0}{4(T_1 - T_0)},$$

then the integrals in Definition 3.1 exist for all $T_0 \le \tau < t \le T_1$ [14, Section 1.6].

Definition 3.2. [14] A function $f(x, t)$ is said to be Hölder continuous with exponent 훼 if

$$|f(x, t) - f(x_0, t_0)| \le C(|x - x_0|^{\alpha} + |t - t_0|^{\alpha/2})$$

for all $(x, t), (x_0, t_0) \in \mathbb{R}^n \times [T_0, T_1]$ and some constant $C > 0$.

Theorem 3.1. [14, Theorem 1.15] Assume that 퐿 is parabolic and that the coefficients of 퐿,

$$a_{ij},\quad \frac{\partial a_{ij}}{\partial x_k},\quad \frac{\partial^2 a_{ij}}{\partial x_k\partial x_l},\quad b_i,\quad \frac{\partial b_i}{\partial x_k},\quad c,$$

are bounded, continuous ℝ-valued functions on $\mathbb{R}^n \times [T_0, T_1]$ satisfying a uniform Hölder condition with exponent 훼 in $x \in \mathbb{R}^n$, and that $a_{ij}$ satisfies the ellipticity condition. Then a fundamental solution $\Gamma^*$ of $L^*u = 0$ exists and

$$\Gamma(x, t; y, \tau) = \Gamma^*(y, \tau; x, t),$$

where $L^*$ is the formal adjoint of 퐿,

$$L^*u = \sum_{i,j=1}^n \frac{\partial^2}{\partial x_i\partial x_j}(a_{ij}(x, t)u) - \sum_{i=1}^n \frac{\partial}{\partial x_i}(b_i(x, t)u) + c(x, t)u + \frac{\partial u}{\partial t}.$$

Theorem 3.2. ([13, Theorem 3.4]) The fundamental solution of (3.1) is unique.

Proof. Suppose that $\Gamma(x, t; y, \tau)$ and $\tilde\Gamma(x, t; y, \tau)$ are two fundamental solutions of $Lu = 0$. Applying [14, Theorem 1.15], we see that

$$\Gamma(x, t; y, \tau) = \Gamma^*(y, \tau; x, t) = \tilde\Gamma(x, t; y, \tau)$$

for any $(x, t), (y, \tau) \in \mathbb{R}^n \times [T_0, T_1]$, $t > \tau$, and so $\Gamma(x, t; y, \tau) = \tilde\Gamma(x, t; y, \tau)$ as desired.

Now we can proceed to give another proof of Gyöngy's theorem.

Theorem 3.3. Let 푊 be an 푟-dimensional Brownian motion on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, and let

$$dX(t) = \mu_t\,dt + \sigma_t\,dW(t)$$

be an $\mathbb{R}^n$-valued Itô process, where 휇 is a bounded $\mathbb{R}^n$-valued adapted process and 휎 is a bounded $\mathbb{R}^{n\times d}$-valued adapted process such that $\sigma\sigma^T$ is uniformly positive definite. Then there exist deterministic measurable functions $a(t, x)$ and $b(t, x)$ such that

$$b(t, x) = E[\mu_t \mid X(t) = x] \quad \text{a.s. for } t \ge 0,$$
$$a(t, x)a^T(t, x) = E[\sigma_t\sigma_t^T \mid X(t) = x] \quad \text{a.s. for } t \ge 0,$$

and

$$a_{ij},\quad \frac{\partial a_{ij}}{\partial x_k},\quad \frac{\partial^2 a_{ij}}{\partial x_k\partial x_l},\quad b_i,\quad \frac{\partial b_i}{\partial x_k}$$

are bounded and Hölder continuous with exponent 훼, $0 < \alpha < 1$. Assume 푋(푡) has probability density functions $p_X(x, t)$. Then there exists a weak solution to the stochastic differential equation

$$dY(t) = b(t, Y(t))\,dt + a(t, Y(t))\,d\hat W(t)$$

such that $\mathscr{L}(Y(t)) = \mathscr{L}(X(t))$ for all $t > 0$, where $\mathscr{L}$ denotes the law of a random variable and $\hat W(t)$ denotes another Brownian motion, possibly on another probability space $(\tilde\Omega, \tilde{\mathcal{F}}, \tilde{\mathbb{P}})$.

Proof. For any function $f \in C_0^2(\mathbb{R}^n)$, Itô's formula shows

$$f(X_T) - f(x) = \sum_{i=1}^n\int_0^T \frac{\partial f}{\partial x_i}(X_t)\,dX_i(t) + \frac{1}{2}\sum_{i,j=1}^n\int_0^T \frac{\partial^2 f}{\partial x_i\partial x_j}(X_t)\,d[X_i, X_j]_t$$
$$= \sum_{i=1}^n\int_0^T \frac{\partial f}{\partial x_i}(X_t)\,(\mu_t\,dt + \sigma_t\,dW(t))_i + \frac{1}{2}\sum_{i,j=1}^n\int_0^T \frac{\partial^2 f}{\partial x_i\partial x_j}(X_t)(\sigma(t)\sigma^T(t))_{ij}\,dt.$$

Taking expectations on both sides, we get

$$E[f(X_T)] = f(x) + E\left[\int_0^T\sum_{i=1}^n \frac{\partial f}{\partial x_i}(X_t)(\mu_t)_i\,dt\right] + \frac{1}{2}E\left[\int_0^T\sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i\partial x_j}(X_t)(\sigma(t)\sigma^T(t))_{ij}\,dt\right].$$

Using iterated conditional expectations, conditioning on $X_t$, gives

$$E[f(X_T)] = f(x) + E\left[\int_0^T\sum_{i=1}^n \frac{\partial f}{\partial x_i}(X_t)\,E[(\mu_t)_i \mid X_t]\,dt\right] + \frac{1}{2}E\left[\int_0^T\sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i\partial x_j}(X_t)\,E[(\sigma(t)\sigma^T(t))_{ij} \mid X_t]\,dt\right]$$
$$= f(x) + E\left[\int_0^T\Big(\sum_{i=1}^n \frac{\partial f}{\partial x_i}(X_t)\,b_i(X_t, t) + \frac{1}{2}\sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i\partial x_j}(X_t)\,a_{ij}(X_t, t)\Big)\,dt\right].$$

Denoting

$$Af(x) = \sum_{i=1}^n b_i(x, t)\frac{\partial f}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^n a_{ij}(x, t)\frac{\partial^2 f}{\partial x_i\partial x_j},$$

we have

$$E[f(X_T)] = f(x) + \int_0^T E\big[Af(X(t))\big]\,dt. \tag{3.2}$$

If 푋(푡) has probability density functions $p_X(x, t)$, we can rewrite (3.2) as

$$\int_{\mathbb{R}^n} f(x)p_X(x, T)\,dx = f(x) + \int_0^T\!\int_{\mathbb{R}^n} Af(x)\,p_X(x, t)\,dx\,dt. \tag{3.3}$$

Differentiating (3.3) with respect to 푇, we get

$$\frac{\partial}{\partial T}\int_{\mathbb{R}^n} f(x)p_X(x, T)\,dx = \int_{\mathbb{R}^n}\big(Af(x)\big)\,p_X(x, T)\,dx = \int_{\mathbb{R}^n}\Big(\sum_{i=1}^n b_i(x, T)\frac{\partial f}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^n a_{ij}(x, T)\frac{\partial^2 f}{\partial x_i\partial x_j}\Big)p_X(x, T)\,dx.$$

Integrating by parts on the right-hand side gives

$$\frac{\partial}{\partial T}\int_{\mathbb{R}^n} f(x)p_X(x, T)\,dx = \int_{\mathbb{R}^n} f(x)\Big(-\sum_{i=1}^n\frac{\partial}{\partial x_i}(b_i(x, T)p_X(x, T)) + \frac{1}{2}\sum_{i,j=1}^n\frac{\partial^2}{\partial x_i\partial x_j}(a_{ij}(x, T)p_X(x, T))\Big)\,dx = \int_{\mathbb{R}^n} f(x)(A^*p_X)(x, T)\,dx, \tag{3.4}$$

where $A^*$ is the formal adjoint of 퐴,

$$A^*p_X(x, t) := -\sum_{i=1}^n\frac{\partial}{\partial x_i}(b_i(x, t)p_X(x, t)) + \frac{1}{2}\sum_{i,j=1}^n\frac{\partial^2}{\partial x_i\partial x_j}(a_{ij}(x, t)p_X(x, t)).$$

Since (3.4) holds for any $C_0^2$ function 푓, $p_X(x, t)$ is a weak solution of

$$\frac{\partial p_X}{\partial t} = A^*p_X(x, t).$$

Consider the process 푌(푡) defined as the solution to

$$dY(t) = b(t, Y(t))\,dt + a(t, Y(t))\,d\hat W(t).$$

Itô's formula gives

$$E[f(Y_T)] = f(x) + E\left[\int_0^T\Big(\sum_{i=1}^n \frac{\partial f}{\partial x_i}(Y_t)\,b_i(Y_t, t) + \frac{1}{2}\sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i\partial x_j}(Y_t)\,a_{ij}(Y_t, t)\Big)\,dt\right].$$

Letting $p_Y(x, t)$ denote the probability density function of $Y_t$, we have

$$\int_{\mathbb{R}^n} f(x)p_Y(x, T)\,dx = f(x) + \int_0^T\!\int_{\mathbb{R}^n} Af(x)\,p_Y(x, t)\,dx\,dt. \tag{3.5}$$

Differentiating with respect to 푇 and integrating by parts, we obtain

$$\frac{\partial}{\partial T}\int_{\mathbb{R}^n} f(x)p_Y(x, T)\,dx = \int_{\mathbb{R}^n} f(x)(A^*p_Y)(x, T)\,dx.$$

Therefore, $p_Y(x, t)$ is a weak solution of

$$\frac{\partial p_Y}{\partial t} = A^*p_Y(x, t)$$

with initial condition $p_Y(x, 0) = \delta(x)$, where $\delta(x)$ is the Dirac delta function.

We now consider the initial value problem

$$\frac{\partial u}{\partial t} = A^*u(x, t), \qquad u(x, 0) = \delta(x). \tag{3.6}$$

Since $p_X(x, t)$ and $p_Y(x, t)$ are both solutions of (3.6), it is enough to show that (3.6) has a unique fundamental solution.

In [14, Section 1.4], a fundamental solution of

$$Lu := A^*u - \frac{\partial u}{\partial t} = 0$$

is constructed by the parametrix method. By Theorem 3.2, we obtain the uniqueness of the fundamental solution.

Chapter 4

Forward Equation for a Semimartingale

In this chapter, we develop the main theorem of this dissertation. We first introduce the definition of semimartingales and define the Markovian projection of semimartingales. Then we derive the generator of a semimartingale and the backward equation. When the semimartingale has a probability density function, we show that this function satisfies the forward equation, the adjoint of the backward equation. Next we construct a fundamental solution of the forward equation and, through our construction, discover the conditions which ensure that this equation has fundamental solutions. These conditions guarantee the existence of the probability density functions for the semimartingale. Finally, we show that the fundamental solution of the forward equation is unique. Our existence and uniqueness result implies that the mimicking process has the same marginal distributions as the original semimartingale.

4.1 Semimartingales and Markovian projection

In this section, we give the definition of a semimartingale and define its Markovian projection.

Definition 4.1. (Semimartingale, [24, Definition IV.15.1]) A process 푋 is a semimartingale on $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, \mathbb{P})$ if it can be written in the form

$$X(t) = X_0 + M_t + A_t, \tag{4.1}$$

where 푀 is a local martingale null at 0 with càdlàg paths, 퐴 is an adapted process with paths of finite variation, and the filtration is right-continuous.

An $\mathbb{R}^n$-valued process $X = (X_1, \ldots, X_n)$ is a semimartingale if each of its components $X_i$ is a semimartingale.

We emphasize that the decomposition (4.1) need not be unique. Semimartingales are good integrators, forming the largest class of processes with respect to which the Itô integral can be defined. The class of semimartingales is quite large, including, for example, all Itô processes and Lévy processes.

Proposition 4.1. (Itô decomposition for semimartingales, [4, Equation (2)]) On a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, \mathbb{P})$, a semimartingale can be given by the decomposition

$$X(t) = X(0) + \int_0^t \beta(s)\,ds + \int_0^t \sigma(s)\,dW(s) + \int_0^t\!\int_{\|y\|\le 1} y\,\tilde M(ds\,dy) + \int_0^t\!\int_{\|y\|>1} y\,M(ds\,dy), \tag{4.2}$$

where 푊 is an $\mathbb{R}^d$-valued Brownian motion, 푀 is a positive, integer-valued random measure on $[0, \infty) \times \mathbb{R}^n$ with compensator 휇, $\tilde M = M - \mu$ is the compensated measure, 휇 has a density $m(t, \omega, y)\,dy$, and 훽(푡) and 휎(푡) are adapted processes valued in $\mathbb{R}^n$ and $M_{n\times d}(\mathbb{R})$.

Proposition 4.2. (Itô formula for semimartingales, [23, Theorem 71]) Let $(X(t))_{t\ge 0}$ be a semimartingale. For any twice continuously differentiable function $f: \mathbb{R}^n \to \mathbb{R}$,

$$f(X_T) - f(X(0)) = \sum_{i=1}^n\int_0^T \frac{\partial f}{\partial x_i}(X(t-))\,dX_i(t) + \frac{1}{2}\sum_{i,j=1}^n\int_0^T \frac{\partial^2 f}{\partial x_i\partial x_j}(X(t-))\,d[X_i, X_j]_t$$
$$\qquad + \sum_{0\le t\le T}\Big[f(X(t-) + \Delta X(t)) - f(X(t-)) - \sum_{i=1}^n\frac{\partial f}{\partial x_i}(X(t-))\,\Delta X(t)_i\Big].$$

We want to construct a stochastic differential equation whose solution mimics the marginal distributions of a semimartingale. We call the mimicking process a Markovian projection of a semimartingale.

Gyöngy [15] showed that there exists a Markovian projection of an Itô process. Brunick [8] generalized Gyöngy's result under a weaker assumption and proved that there exists a Markovian projection of a two-dimensional process. Bentata and Cont [3], [4] gave the existence of the Markovian projection of a semimartingale. The Markovian projection is constructed as a solution to a martingale problem.

Here we present a partial differential equation based proof of the mimicking theorem. Consider a semimartingale 푋(푡) with decomposition (4.2) and further require:

(A1) $\sigma(t)\sigma(t)^T$ satisfies the uniform ellipticity condition for any $0 \le t \le T$. That is, there is a constant $\lambda > 0$ such that

$$\sum_{i,j=1}^n (\sigma(t)\sigma(t)^T)_{ij}\,\xi_i\xi_j \ge \lambda|\xi|^2$$

for all $\xi \in \mathbb{R}^n$ and any $0 \le t \le T$. Ellipticity thus means that the symmetric matrix $\sigma(t)\sigma(t)^T \in \mathbb{R}^{n\times n}$ is positive definite, with smallest eigenvalue greater than or equal to $\lambda > 0$.

(A2) For any 푇 > 0, 훽(푡) and 휎(푡) are bounded functions of 푡 ∈ [0, 푇 ].

Suppose the process 푋(푡) starts at $X(t_0) = x_0$ and the density function of 푋(푡) is $p_X(t_0, x_0; t, \cdot)$; we denote this density by $p_X(t_0, x_0; t, x)$ and abbreviate it as $p_X(t, x)$.

Theorem 4.1. (Markovian projection of a semimartingale) Let 푋(푡) be a semimartingale starting at $X(t_0) = x_0$ with decomposition

$$X(t) = X(t_0) + \int_{t_0}^t \beta(s)\,ds + \int_{t_0}^t \sigma(s)\,dW(s) + \int_{t_0}^t\!\int_{\|y\|\le 1} y\,\tilde M(ds\,dy) + \int_{t_0}^t\!\int_{\|y\|>1} y\,M(ds\,dy),$$

where 훽(푡), 휎(푡) satisfy (A1) and (A2). Assume 푋(푡) has probability density functions $p_X(t, x)$. Define

$$a(x, t) := E[\sigma(t)\sigma^T(t) \mid X(t-) = x],$$
$$b(x, t) := E[\beta(t) \mid X(t-) = x],$$
$$n(t, A, x) := E[m(t, A, \cdot) \mid X(t-) = x],$$

for all $t \ge 0$, $x \in \mathbb{R}^n$, and any Borel set $A \subset \mathbb{R}^n$. Let 푁 be a positive, integer-valued random measure on $[0, \infty) \times \mathbb{R}^n$ with compensator 푛, where 푛 has a density $\nu(t, \omega, x)\,dx$, and let $\tilde N = N - \nu$ be the associated compensated random measure.

We assume that

(i) 푎, 푏, and $A \mapsto n(t, A, x)$ are continuous in $(t, x)$ on $[0, \infty) \times \mathbb{R}^n$;

(ii) there is a constant 퐾 such that $\int_{\mathbb{R}^n} m(t, \cdot, dy) \le K < \infty$ a.s.;

(iii) with probability 1, there is no jump exactly at 푡 for any fixed 푡;

(iv) 휈 has compact support.

Define a process 푌(푡) by

$$Y(t) = X(t_0) + \int_{t_0}^t b(s, Y(s))\,ds + \int_{t_0}^t a^{1/2}(s, Y(s))\,d\tilde W_s + \int_{t_0}^t\!\int_{\|y\|\le 1} y\,\tilde N(ds\,dy) + \int_{t_0}^t\!\int_{\|y\|>1} y\,N(ds\,dy),$$

where $\tilde W_t$ is an $\mathbb{R}^n$-valued Brownian motion. Then 푌(푡) is a Markovian projection of the semimartingale 푋(푡), and 푌(푡) and 푋(푡) have the same marginal distributions for any $0 \le t \le T$.

4.2 Forward equation

Suppose 푋(푡) is a semimartingale with generator 퐴 and 푌(푡) is the process defined in Theorem 4.1 with generator $\hat A$. We want to show that the density $p_X(t_0, x_0; t, x)$ of 푋(푡) and the density $p_Y(t_0, x_0; t, y)$ of 푌(푡) are solutions to the forward equation defined by $\hat A$.

Definition 4.2. The generator $\hat A_t$, $t \ge 0$, of a (time-inhomogeneous) process $(Y(t))_{t\ge 0}$ is defined by

$$\hat A_{t_0}f(x) := \lim_{t\to t_0}\frac{E^x[f(Y(t))] - f(x)}{t - t_0}, \qquad x \in \mathbb{R}^n,\ t_0 \ge 0,$$

where $E^x$ is the expectation with respect to the probability law of 푌(푡) starting at $Y(t_0) = x$. The set of functions $f: \mathbb{R}^n \to \mathbb{R}$ such that the limit exists at $(t_0, x)$ is denoted by $\mathscr{D}_{\hat A_{t_0}}(x)$, while $\mathscr{D}_{\hat A}$ is the set of functions for which the limit exists for all $x \in \mathbb{R}^n$ and $t_0 \ge 0$.

We assume $\hat A$ has the property that $\mathscr{D}_{\hat A} \subset C_\infty(\mathbb{R}^n, \mathbb{R})$ is dense, where $C_\infty(\mathbb{R}^n, \mathbb{R})$ is the set of continuous functions from $\mathbb{R}^n$ to $\mathbb{R}$ which vanish at infinity.

If 푌 (푡) also obeys the conditions

(i) 푌 (푡) is time-homogeneous;

ˆ (ii) (퐴, D퐴ˆ) satisfies the positive maximum principle;

(iii) $R(\lambda - \hat A)$ is dense in $C_\infty(\mathbb{R}^n, \mathbb{R})$ for some $\lambda > 0$;

then by [18, Theorem 4.5.3], 퐴ˆ is a generator of a Feller semigroup and 푌 (푡) is a time-homogeneous .

The Courrège theorem [18, Theorem 4.5.21] shows that if $A: C_\infty(\mathbb{R}^n, \mathbb{R}) \to C(\mathbb{R}^n, \mathbb{R})$ is a linear operator satisfying the positive maximum principle, then there exist functions $a, b, c: \mathbb{R}^n \to \mathbb{R}$ and a kernel 휇 such that, for $u \in C_0^\infty(\mathbb{R}^n, \mathbb{R})$,

$$Au(x) = \sum_{k,l=1}^n a_{kl}(x)\frac{\partial^2 u(x)}{\partial x_k\partial x_l} + \sum_{j=1}^n b_j(x)\frac{\partial u(x)}{\partial x_j} + c(x)u(x) + \int_{\mathbb{R}^n}\Big[u(y) - \chi(y-x)u(x) - \chi(y-x)\sum_{j=1}^n (y_j - x_j)\frac{\partial u(x)}{\partial x_j}\Big]\,\mu(x, dy),$$

where $\chi \in C_\infty(\mathbb{R}^n, \mathbb{R})$ with $0 \le \chi \le 1$ and $\chi|_{B_1(0)} = 1$.

The Courrège theorem gives us an example of the structure of $\hat A$. Conversely, when $\hat A$ has the structure described in the Courrège theorem, $\hat A$ is the generator of a time-homogeneous Feller process.

Proposition 4.3. Let 푋 be a semimartingale with decomposition (4.2) satisfying the assumptions in Theorem 4.1. If $f \in C_0^2(\mathbb{R}^n)$, that is, if 푓 is a $C^2$ function with compact support on $\mathbb{R}^n$, then $f \in \mathscr{D}_{\hat A}$ and the generator of 푌(푡) is

$$\hat A_t f(x) = \sum_{i=1}^n b_i(x, t)\frac{\partial f}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^n a_{ij}(x, t)\frac{\partial^2 f}{\partial x_i\partial x_j} + \int_{\mathbb{R}^n}\Big(f(x+y) - f(x) - 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i\frac{\partial f}{\partial x_i}\Big)\nu(t, y, x)\,dy.$$

Proof. Since 푋 is a semimartingale with $X(t-) = x$, the Itô formula of Proposition 4.2 gives

$$f(X_T) - f(x) = \int_t^T \nabla f(X(s-))\cdot dX(s) + \frac{1}{2}\sum_{i,j=1}^n\int_t^T \frac{\partial^2 f}{\partial x_i\partial x_j}(X(s-))\,d[X_i, X_j]_s$$
$$\qquad + \sum_{t\le s\le T}\Big[f(X(s-) + \Delta X(s)) - f(X(s-)) - \sum_{i=1}^n\frac{\partial f}{\partial x_i}(X(s-))\,\Delta X(s)_i\Big].$$

Substituting the decomposition (4.2) for $dX(s)$, writing the jump sum as an integral against 푀, and combining the two integrals against 푀, we obtain

$$f(X_T) - f(x) = \int_t^T \nabla f(X(s-))\cdot\beta(s)\,ds + \int_t^T \nabla f(X(s-))\cdot\sigma(s)\,dW(s) + \frac{1}{2}\int_t^T\sum_{i,j=1}^n\frac{\partial^2 f}{\partial x_i\partial x_j}(X(s-))(\sigma(s)\sigma^T(s))_{ij}\,ds$$
$$\qquad + \int_t^T\!\int_{\|y\|\le 1} y\cdot\nabla f(X(s-))\,\tilde M(ds\,dy) + \int_t^T\!\int_{\mathbb{R}^n}\big(f(X(s-)+y) - f(X(s-)) - 1_{\{\|y\|\le 1\}}\,y\cdot\nabla f(X(s-))\big)\,M(ds\,dy).$$

푥 퐸 [푓(푋푇 )] ⎡ ⎤ [∫ ] ∫ 푇 1 푇 ∑푛 ∂2 =푓(푥) + 퐸푥 ∇푓(푋(푠−)) ⋅ 훽(푠)푑푠 + 퐸푥 ⎣ 푓(푋(푠−))(휎(푠)휎푇 (푠)) 푑푠⎦ 2 ∂푥 ∂푥 푖푗 푡 푡 푖,푗=1 푖 푗 [∫ 푇 ∫ ] 푥 + 퐸 (푓(푋(푠−) + 푦) − 푓(푋(푠−)) − 1∥푦∥≤1푦 ⋅ ∇푓(푋(푠−)))푚(푠, 푦)푑푠푑푦 . 푡 ℝ푛 We apply Fubini theorem and find that

푥 퐸 [푓(푋푇 )] ∫ 푇 =푓(푥) + 퐸푥[∇푓(푋(푠−)) ⋅ 훽(푠)]푑푠 푡 ∫ [ ] 푇 ∑푛 2 1 푥 ∂ 푇 + 퐸 푓(푋(푠−))(휎(푠)휎 (푠))푗푘 푑푠 2 ∂푥푗∂푥푘 푡 푗,푘=1 ∫ 푇 ∫ 푥 + 퐸 [ (푓(푋(푠−) + 푦) − 푓(푋(푠−)) − 1∥푦∥≤1푦 ⋅ ∇푓(푋(푠−)))푚(푠, 푦)]푑푠푑푦. 푡 ℝ푛 30

Using iterated expectations conditioned on $X(s-)$ and the definitions of 푏, 푎 and 휈, this simplifies to

$$E^x[f(X_T)] = f(x) + E^x\left[\int_t^T\Big(\sum_{i=1}^n b_i(X(s-), s)\frac{\partial f}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^n a_{ij}(X(s-), s)\frac{\partial^2 f}{\partial x_i\partial x_j}\right.$$
$$\left.\qquad + \int_{\mathbb{R}^n}\big(f(X(s-)+y) - f(X(s-)) - 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i\frac{\partial f}{\partial x_i}\big)\,\nu(s, y, X(s-))\,dy\Big)\,ds\right]. \tag{4.3}$$

Similarly, if the process 푌 starts with $Y(t-) = x$, then

$$E^x[f(Y_T)] = f(x) + E^x\left[\int_t^T\Big(\sum_{i=1}^n b_i(Y(s-), s)\frac{\partial f}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^n a_{ij}(Y(s-), s)\frac{\partial^2 f}{\partial x_i\partial x_j}\right.$$
$$\left.\qquad + \int_{\mathbb{R}^n}\big(f(Y(s-)+y) - f(Y(s-)) - 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i\frac{\partial f}{\partial x_i}\big)\,\nu(s, y, Y(s-))\,dy\Big)\,ds\right].$$

Then, by Definition 4.2, we get

$$\hat A_s f(x) = \sum_{i=1}^n b_i(x, s)\frac{\partial f}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^n a_{ij}(x, s)\frac{\partial^2 f}{\partial x_i\partial x_j} + \int_{\mathbb{R}^n}\Big(f(x+y) - f(x) - 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i\frac{\partial f}{\partial x_i}\Big)\nu(s, y, x)\,dy, \tag{4.4}$$

and

$$E^x[f(Y(T))] = f(x) + E^x\left[\int_t^T \hat A_s f(Y(s-))\,ds\right]. \tag{4.5}$$

Also, by equation (4.3), we have

$$E^x[f(X(T))] = f(x) + E^x\left[\int_t^T \hat A_s f(X(s-))\,ds\right]. \tag{4.6}$$

Since $Y(t) = Y(t-)$ with probability one, we have

$$\lim_{T\downarrow t}\frac{E^x[f(Y(T)) \mid Y(t-) = x] - f(x)}{T - t} = (\hat A_t f)(x),$$

and similarly for 푋(푡), since $X(t) = X(t-)$ with probability one,

$$\lim_{T\downarrow t}\frac{E^x[f(X(T)) \mid X(t-) = x] - f(x)}{T - t} = (\hat A_t f)(x).$$

Now we can derive the forward equation.

Proposition 4.4. (Forward equation) Let 푋 be a semimartingale with decomposition (4.2) on $\mathbb{R}^n$ with generator

$$\hat A_t f(x) = \sum_{i=1}^n b_i(x, t)\frac{\partial f}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^n a_{ij}(x, t)\frac{\partial^2 f}{\partial x_i\partial x_j} + \int_{\mathbb{R}^n}\Big(f(x+y) - f(x) - 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i\frac{\partial f}{\partial x_i}\Big)\nu(t, y, x)\,dy$$

for $f \in C_0^2(\mathbb{R}^n)$, and assume that the probability measure of 푋(푡) has a density $p_X(x, t)$, i.e.

$$E[f(X(t))] = \int_{\mathbb{R}^n} f(x)p_X(x, t)\,dx, \qquad f \in C_0^2.$$

Then $p_X(x, t)$ satisfies the forward equation

$$\hat A_t^* p = \frac{\partial p}{\partial t} \quad \text{on } \mathbb{R}^n \times (0, \infty),$$

where $\hat A_t^*$ is the formal adjoint of $\hat A_t$ and is given by

$$\hat A_t^* g(x) = \frac{1}{2}\sum_{i,j=1}^n\frac{\partial^2}{\partial x_i\partial x_j}(a_{ij}(x, t)g) - \sum_{i=1}^n\frac{\partial}{\partial x_i}(b_i(x, t)g)$$
$$\qquad + \int_{\mathbb{R}^n}\Big(g(x-y, t)\nu(t, y, x-y) - g(x)\nu(t, y, x) + 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i\frac{\partial}{\partial x_i}(g\,\nu(t, y, x))\Big)\,dy.$$

Proof. For $f \in C_0^2(\mathbb{R}^n)$, we have

$$E[f(X(t))] = f(X(t_0)) + E\left[\int_{t_0}^t \hat A_s f(X(s-))\,ds\right].$$

Since $p_X(x, t)$ is the probability density function of 푋(푡), we can rewrite this equation as

$$\int_{\mathbb{R}^n} f(x)p_X(x, t)\,dx = f(X(t_0)) + \int_{t_0}^t\!\int_{\mathbb{R}^n} \hat A_s f(x)\,p_X(x, s)\,dx\,ds.$$

Differentiating both sides with respect to 푡, we get

$$\frac{\partial}{\partial t}\int_{\mathbb{R}^n} f(x)p_X(x, t)\,dx = \int_{\mathbb{R}^n}(\hat A_t f)(x)\,p_X(x, t)\,dx.$$

Integrating by parts in the drift and diffusion terms, shifting 푥 by $-y$ in the first jump term, and integrating by parts once in the last jump term, we get

$$\frac{\partial}{\partial t}\int_{\mathbb{R}^n} f(x)p_X(x, t)\,dx = \int_{\mathbb{R}^n} f(x)\Big(-\sum_{i=1}^n\frac{\partial}{\partial x_i}(b_i(x, t)p_X(x, t)) + \frac{1}{2}\sum_{i,j=1}^n\frac{\partial^2}{\partial x_i\partial x_j}(a_{ij}(x, t)p_X(x, t))\Big)\,dx$$
$$\qquad + \int_{\mathbb{R}^n} f(x)\int_{\mathbb{R}^n}\Big(\nu(t, y, x-y)p_X(x-y, t) - \nu(t, y, x)p_X(x, t) + 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i\frac{\partial}{\partial x_i}(\nu(t, y, x)p_X(x, t))\Big)\,dy\,dx$$
$$= \int_{\mathbb{R}^n} f(x)(\hat A_t^* p_X)(x, t)\,dx.$$

We obtain

$$\int_{\mathbb{R}^n} f(x)\frac{\partial}{\partial t}p_X(x, t)\,dx = \int_{\mathbb{R}^n}(\hat A_t f)(x)\,p_X(x, t)\,dx = \int_{\mathbb{R}^n} f(x)(\hat A_t^* p_X)(x, t)\,dx,$$

which we can write in terms of the $L^2$ inner product as

$$\Big\langle f, \frac{\partial p_X}{\partial t}\Big\rangle = \langle \hat A_t f, p_X\rangle = \langle f, \hat A_t^* p_X\rangle,$$

where $\hat A_t^*$ is the formal adjoint of $\hat A_t$. Hence $p_X(x, t)$ is a solution of

$$\frac{\partial p_X}{\partial t} = \hat A_t^* p_X(x, t)$$

in the weak sense, as in [12, Section 7.2].

Similarly, $p_Y(y, t)$, the density function of the process 푌(푡), satisfies the equation

$$\frac{\partial p_Y}{\partial t} = \hat A_t^* p_Y(y, t).$$

4.3 Construction of fundamental solutions

In this section, we will consider the fundamental solutions of the forward equation defined by the generator $\hat A$.

The construction of fundamental solutions of parabolic differential equations is described by Friedman in [14, Sections 1.2 & 1.4] in the case of bounded domains $U \Subset \mathbb{R}^n$ and extended to unbounded domains $U \subset \mathbb{R}^n$ (including $\mathbb{R}^n$) in [14, Section 1.6]. We follow his approach and construct fundamental solutions of the forward equation we derived in the previous section, noting that the forward equation is a partial integro-differential equation. Construction of fundamental solutions of integro-differential operators of this kind has also been described by Garroni and Menaldi [?, Theorem 4.3.6] for bounded domains in $\mathbb{R}^n$ and extended to the case of unbounded domains (in particular, $\mathbb{R}^n$) in [?, §4.3.3].

Define $L^*$ by the expression

$$L^* := \frac{\partial}{\partial t} - \hat A_t^*.$$

We simplify this expression before we proceed:

$$L^*p(x, t) = p_t(x, t) + \sum_{i=1}^n(b_i(x, t)p(x, t))_{x_i} - \frac{1}{2}\sum_{i,j=1}^n(a_{ij}(x, t)p(x, t))_{x_i x_j}$$
$$\qquad - \int_{\mathbb{R}^n}\Big(p(x-y, t)\nu(t, y, x-y) - p(x, t)\nu(t, y, x) + 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i(\nu(t, y, x)p(x, t))_{x_i}\Big)\,dy.$$

Expanding the derivatives, this leads to

$$L^*p(x, t) = p_t(x, t) - \frac{1}{2}\sum_{i,j=1}^n a_{ij}(x, t)p_{x_i x_j}(x, t) + \sum_{i=1}^n\Big(b_i(x, t) - \frac{1}{2}\sum_{j=1}^n(a_{ij}(x, t))_{x_j}\Big)p_{x_i}(x, t) + \sum_{i=1}^n\Big((b_i(x, t))_{x_i} - \frac{1}{2}\sum_{j=1}^n(a_{ij}(x, t))_{x_i x_j}\Big)p(x, t)$$
$$\qquad - \int_{\mathbb{R}^n} p(x-y, t)\nu(t, y, x-y)\,dy + p(x, t)\int_{\mathbb{R}^n}\nu(t, y, x)\,dy - p(x, t)\int_{\mathbb{R}^n} 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i(\nu(t, y, x))_{x_i}\,dy - \sum_{i=1}^n p_{x_i}(x, t)\int_{\mathbb{R}^n} 1_{\{\|y\|\le 1\}}\,y_i\,\nu(t, y, x)\,dy.$$

Denote

$$A_{ij}(x, t) := \frac{1}{2}a_{ij}(x, t),$$
$$B_i(x, t) := b_i(x, t) - \frac{1}{2}\sum_{j=1}^n(a_{ij}(x, t))_{x_j} - \int_{\mathbb{R}^n} 1_{\{\|y\|\le 1\}}\,y_i\,\nu(t, y, x)\,dy,$$
$$C(x, t) := \sum_{i=1}^n\Big((b_i(x, t))_{x_i} - \frac{1}{2}\sum_{j=1}^n(a_{ij}(x, t))_{x_i x_j}\Big) + \int_{\mathbb{R}^n}\Big(\nu(t, y, x) - 1_{\{\|y\|\le 1\}}\sum_{i=1}^n y_i(\nu(t, y, x))_{x_i}\Big)\,dy.$$

Then

$$L^*p(x, t) = p_t(x, t) - \sum_{i,j=1}^n A_{ij}(x, t)p_{x_i x_j}(x, t) + \sum_{i=1}^n B_i(x, t)p_{x_i}(x, t) + C(x, t)p(x, t) - \int_{\mathbb{R}^n} p(z, t)\nu(t, x-z, z)\,dz.$$

Consider the equation

$$L^*p = 0, \tag{4.7}$$

where the coefficients $A_{ij}$, $B_i$, $C$ and $\nu$ are defined on $\bar U_T \equiv \bar U \times [0, T]$. The domain $U \subset \mathbb{R}^n$ can be unbounded and includes the important case $U = \mathbb{R}^n$. Throughout this section we assume:

(A3) 퐿∗ is parabolic in 푈.

(A4) The coefficients of $L^*$ are continuous functions in $U$ and, in addition, for all $(x, t), (x_0, t_0) \in U_T$, there exist constants $M > 0$ and $0 < \alpha < 1$ such that

$$|A_{ij}(x, t) - A_{ij}(x_0, t_0)| \le M\big(|x - x_0|^{\alpha} + |t - t_0|^{\alpha/2}\big),$$
$$|B_i(x, t) - B_i(x_0, t_0)| \le M|x - x_0|^{\alpha},$$
$$|C(x, t) - C(x_0, t_0)| \le M|x - x_0|^{\alpha}.$$

(A5) $\nu(t, y, x)$ is compactly supported; namely, there exists a constant $M_1$ such that $\nu(t, y, x) = 0$ if $|x| \ge M_1$ or $|y| \ge M_1$.

We define the fundamental solutions of a forward equation as follows, by analogy with the standard definition in [14, Section 1.1] of a fundamental solution for a parabolic partial differential equation.

Definition 4.3. (Fundamental solutions of a forward equation) A fundamental solution of a forward equation $L^*p = 0$ in $U_T$ is a function $p(x, t; y, \tau)$ defined for all $(x, t), (y, \tau) \in U_T$, $t > \tau$, which satisfies the following conditions:

(i) for fixed $(y, \tau) \in U_T$, it satisfies, as a function of $(x, t)$, $x \in U$, $\tau < t < T$, the equation $L^*p = 0$;

(ii) for every continuous function 푓 in $\bar U$ obeying the growth condition in Definition 3.1, if $x \in U$ then

$$\lim_{t\to\tau}\int_U p(x, t; y, \tau)f(y)\,dy = f(x).$$

For an Itô process, the operator $L^*$ is

$$L^*p = p_t - \sum_{i,j=1}^n A_{ij}(x, t)p_{x_i x_j} + \sum_{i=1}^n B_i(x, t)p_{x_i} + C(x, t)p;$$

for a semimartingale, the operator $L^*$ is

$$L^*p = p_t - \sum_{i,j=1}^n A_{ij}(x, t)p_{x_i x_j} + \sum_{i=1}^n B_i(x, t)p_{x_i} + C(x, t)p - \int_{\mathbb{R}^n} p(z, t)\nu(t, x-z, z)\,dz.$$

We adapt the parametrix method in [14, Section 1.2] for the construction of fundamental solutions of linear second-order parabolic PDEs to the construction of fundamental solutions of our linear second-order parabolic PIDE; see also [?, Chapter V] and [?, Section IV.11]. Existence of fundamental solutions for parabolic PIDEs of the kind examined in this thesis is also proved in [?], using related methods. The construction of fundamental solutions for our parabolic PIDE closely mirrors that of parabolic PDEs due to the assumption on the density defining the integral term.

First we introduce the function $G(x, t; y, \tau)$, for $t > \tau$,

$$G(x, t; y, \tau) = \frac{C(y, \tau)}{(t - \tau)^{n/2}}\exp\left[-\frac{1}{4(t - \tau)}\sum_{i,j=1}^n A^{ij}(y, \tau)(x_i - y_i)(x_j - y_j)\right],$$

where

$$C(y, \tau) = \frac{1}{(2\sqrt{\pi})^n}\big[\det(A^{ij}(y, \tau))\big]^{1/2}$$

and $(A^{ij}(x, t))$ is the inverse matrix of $(A_{ij}(x, t))$. The function 퐺 is called the parametrix.

For each fixed $(y, \tau)$, the function $G(x, t; y, \tau)$ satisfies the following equation with "constant coefficients",

$$L_0^* p(x, t) := p_t - \sum_{i,j=1}^n A_{ij}(y, \tau)p_{x_i x_j} = 0,$$

and also satisfies the following proposition.

Proposition 4.5. ([14, Section 1.2, Theorem 1]) Let 푓 be a continuous function in $U_T$ obeying the growth condition in Definition 3.1. Then

$$J(x, t, \tau) := \int_U G(x, t; y, \tau)f(y, \tau)\,dy$$

is a continuous function in $(x, t, \tau)$, $x \in \bar U$, $0 \le \tau < t \le T$, and

$$\lim_{t\to\tau} J(x, t, \tau) = f(x, \tau)$$

uniformly with respect to $(x, t)$, $x \in S$, $0 < t \le T$, where 푆 is any closed subset of $U$.

From now on, we use 푝 to denote the fundamental solutions of the forward equation, because 푝 is traditional for the transition probability density function. It follows from Proposition 4.5 that property (ii) in Definition 4.3 of the fundamental solution is also satisfied for $G = p$.

In order to construct a fundamental solution for $L^* p = 0$, we view $L_0^*$ as a first approximation to $L^*$ and we view $G$ as the principal part of the fundamental solution $p$. We then try to find $p$ in the form
\[
p(x,t;y,\tau) = G(x,t;y,\tau) + \int_\tau^t \int_U G(x,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma,
\]
where $\Phi$ is to be determined by the condition that $p$ satisfies the equation

\[
0 = L^* p(x,t;y,\tau) = L^* G(x,t;y,\tau) + L^* \int_\tau^t \int_U G(x,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma.
\]

We first consider the term $L^* G(x,t;y,\tau)$. For fixed $(y,\tau)$ we have $L_0^* G(x,t;y,\tau) = 0$, and so
\[
\begin{aligned}
L^* G(x,t;y,\tau) &= L^* G(x,t;y,\tau) - L_0^* G(x,t;y,\tau) \\
&= G_t - \sum_{i,j=1}^n A_{ij}(x,t)\, G_{x_i x_j} + \sum_{i=1}^n B_i(x,t)\, G_{x_i} + C(x,t)\, G \\
&\quad - \int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz - G_t + \sum_{i,j=1}^n A_{ij}(y,\tau)\, G_{x_i x_j} \\
&= -\sum_{i,j=1}^n \left( A_{ij}(x,t) - A_{ij}(y,\tau) \right) G_{x_i x_j} + \sum_{i=1}^n B_i(x,t)\, G_{x_i} + C(x,t)\, G \\
&\quad - \int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz. \qquad (4.8)
\end{aligned}
\]

We need the following lemma to calculate
\[
L^* \int_\tau^t \int_U G(x,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma.
\]

Lemma 4.1. ([14, Section 1.3, Lemma 1]) Let $f$ be a continuous function in $U_T$ obeying the growth condition in Definition 3.1 and locally Hölder continuous in $x \in U$, uniformly with respect to $t$, and let
\[
V(x,t) = \int_0^t \int_U G(x,t;\eta,\sigma)\, f(\eta,\sigma)\, d\eta\, d\sigma.
\]
Then
\[
\frac{\partial}{\partial x_i} V(x,t) = \int_0^t \int_U \frac{\partial}{\partial x_i} G(x,t;\eta,\sigma)\, f(\eta,\sigma)\, d\eta\, d\sigma,
\]
\[
\frac{\partial^2}{\partial x_i \partial x_j} V(x,t) = \int_0^t \int_U \frac{\partial^2}{\partial x_i \partial x_j} G(x,t;\eta,\sigma)\, f(\eta,\sigma)\, d\eta\, d\sigma,
\]
\[
\frac{\partial}{\partial t} V(x,t) = f(x,t) + \int_0^t \int_U \sum_{i,j=1}^n A_{ij}(\eta,\sigma)\, \frac{\partial^2}{\partial x_i \partial x_j} G(x,t;\eta,\sigma)\, f(\eta,\sigma)\, d\eta\, d\sigma.
\]

If $\Phi$ is such that Lemma 4.1 applies to $f(x,t) := \Phi(x,t;y,\tau)$, then we may set
\[
V(x,t) = \int_\tau^t \int_U G(x,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma.
\]

Consequently,
\[
\begin{aligned}
L^* V(x,t) &= V_t - \sum_{i,j=1}^n A_{ij}(x,t)\, V_{x_i x_j} + \sum_{i=1}^n B_i(x,t)\, V_{x_i} + C(x,t)\, V - \int_{\mathbb{R}^n} V(z,t)\, \nu(t,x-z,z)\, dz \\
&= \Phi(x,t;y,\tau) + \int_\tau^t \int_U \sum_{i,j=1}^n A_{ij}(\eta,\sigma)\, \frac{\partial^2}{\partial x_i \partial x_j} G(x,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma \\
&\quad + \int_\tau^t \int_U \left[ -\sum_{i,j=1}^n A_{ij}(x,t)\, G_{x_i x_j} + \sum_{i=1}^n B_i(x,t)\, G_{x_i} + C(x,t)\, G \right] \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma \\
&\quad - \int_\tau^t \int_U \int_U G(z,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, \nu(t,x-z,z)\, dz\, d\eta\, d\sigma.
\end{aligned}
\]
Therefore,
\[
L^* V(x,t) = \Phi(x,t;y,\tau) + \int_\tau^t \int_U L^* G(x,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma. \qquad (4.13)
\]
Combining (4.8) and (4.13), we get

\[
0 = L^* p(x,t;y,\tau) = L^* G(x,t;y,\tau) + L^* V(x,t) = L^* G(x,t;y,\tau) + \Phi(x,t;y,\tau) + \int_\tau^t \int_U L^* G(x,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma.
\]
Therefore,
\[
-\Phi(x,t;y,\tau) = L^* G(x,t;y,\tau) + \int_\tau^t \int_U L^* G(x,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma. \qquad (4.17)
\]
Thus, for each fixed $(y,\tau)$, the function $\Phi(x,t;y,\tau)$ is a solution of a Volterra integral equation with singular kernel $L^* G(x,t;y,\tau)$.

Before we proceed to prove the existence of the fundamental solution, we give two useful lemmas. Lemma 4.2 is modeled after [14, Section 1.4, Inequality (4.3)], but we need to estimate an extra term, namely the integral term
\[
\int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz
\]
in the PIDE.

Lemma 4.2. Let the notation be as above. Then
\[
|L^* G(x,t;y,\tau)| \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}}, \qquad (4.18)
\]
where $\mu$ and $\alpha$ are constants with $1 - \frac{\alpha}{2} < \mu < 1$.

Proof. From the definition of $L^*$, we have
\[
L^* G(x,t;y,\tau) = -\sum_{i,j=1}^n \left( A_{ij}(x,t) - A_{ij}(y,\tau) \right) G_{x_i x_j} + \sum_{i=1}^n B_i(x,t)\, G_{x_i} + C(x,t)\, G - \int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz,
\]
where
\[
G(x,t;y,\tau) = \frac{C(y,\tau)}{(t-\tau)^{n/2}} \exp\left[ \frac{-1}{4(t-\tau)} \sum_{i,j=1}^n A^{ij}(y,\tau)(x_i-y_i)(x_j-y_j) \right]
\]
and
\[
C(y,\tau) = \frac{1}{(2\sqrt{\pi})^n} \left[ \det\left( A^{ij}(y,\tau) \right) \right]^{1/2}.
\]
Here $(A^{ij})$ is the inverse matrix of $(A_{ij})$ and satisfies the ellipticity condition: there exists a constant $\lambda > 0$ such that
\[
\sum_{i,j=1}^n A^{ij}(y,\tau)(x_i - y_i)(x_j - y_j) \ge \lambda |x-y|^2.
\]

For fixed $(y,\tau)$, if $0 < \nu < \frac{n}{2}$ (here $\nu$ denotes an exponent, not the jump kernel), then
\[
\begin{aligned}
G(x,t;y,\tau) &\le \frac{\mathrm{const}}{(t-\tau)^{n/2}} \exp\left[ -\frac{\lambda |x-y|^2}{4(t-\tau)} \right] \qquad (4.19) \\
&= \frac{\mathrm{const}}{(t-\tau)^{\nu}}\, \frac{1}{|x-y|^{n-2\nu}} \cdot \frac{|x-y|^{n-2\nu}}{(t-\tau)^{\frac{n}{2}-\nu}} \exp\left[ -\frac{\lambda |x-y|^2}{4(t-\tau)} \right] \\
&\le \frac{\mathrm{const}}{(t-\tau)^{\nu}}\, \frac{1}{|x-y|^{n-2\nu}}.
\end{aligned}
\]
The last inequality is true because
\[
\frac{|x-y|^{n-2\nu}}{(t-\tau)^{\frac{n}{2}-\nu}} \exp\left[ -\frac{\lambda |x-y|^2}{4(t-\tau)} \right] = \left( \frac{|x-y|}{\sqrt{t-\tau}} \right)^{n-2\nu} \exp\left[ -\frac{\lambda}{4} \left( \frac{|x-y|}{\sqrt{t-\tau}} \right)^2 \right]
\]
is bounded as a function of $(x,t;y,\tau)$ in $U_T \times U_T \setminus D_T$, where $D_T = \{(x,t;y,\tau) \in U_T \times U_T : x = y,\ t = \tau\}$, as long as $0 < \nu < \frac{n}{2}$ for any $n \ge 1$.

Let $\mu := \nu + \left(1 - \frac{\alpha}{2}\right)$; then $1 - \frac{\alpha}{2} < \mu < 1$ (provided $0 < \nu < \frac{\alpha}{2}$), and since $n - 2\nu = n + 2 - 2\mu - \alpha$,
\[
G(x,t;y,\tau) \le \frac{\mathrm{const}}{(t-\tau)^{\mu - (1 - \frac{\alpha}{2})}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}} \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}},
\]
since $(t-\tau)^{1-\frac{\alpha}{2}} \le T^{1-\frac{\alpha}{2}}$ is bounded, as $1 - \frac{\alpha}{2} > 0$ and $0 < t - \tau \le T$.

We assume that $C(x,t)$ is bounded as a function of $(x,t) \in U_T$; thus
\[
|C(x,t)\, G(x,t;y,\tau)| \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}}. \qquad (4.20)
\]

Similarly, we can apply this trick to $G_{x_i}$ and $G_{x_i x_j}$ and get
\[
\left| \frac{\partial}{\partial x_i} G(x,t;y,\tau) \right| \le \frac{\mathrm{const}\, |x-y|}{(t-\tau)^{\frac{n}{2}+1}} \exp\left[ -\frac{\lambda |x-y|^2}{4(t-\tau)} \right] \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}},
\]
\[
\left| \frac{\partial^2}{\partial x_i \partial x_j} G(x,t;y,\tau) \right| \le \frac{\mathrm{const}\, |x-y|^2}{(t-\tau)^{\frac{n}{2}+2}} \exp\left[ -\frac{\lambda |x-y|^2}{4(t-\tau)} \right] \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu}}.
\]
By assumption (A4), $B_i(x,t)$ is bounded and $|A_{ij}(x,t) - A_{ij}(y,\tau)| \le \mathrm{const}\, |x-y|^{\alpha}$, so we get
\[
\left| B_i(x,t)\, \frac{\partial}{\partial x_i} G(x,t;y,\tau) \right| \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}}, \qquad (4.21)
\]
\[
\left| \left( A_{ij}(x,t) - A_{ij}(y,\tau) \right) \frac{\partial^2}{\partial x_i \partial x_j} G(x,t;y,\tau) \right| \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}}. \qquad (4.22)
\]
Now we consider the integral
\[
\int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz.
\]
Because $\nu$ is compactly supported as a function of $(x,z) \in \mathbb{R}^n \times \mathbb{R}^n$, for $|x-y| \le 2M_1$ we have
\[
\int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz \le \mathrm{const} \int_U G(z,t;y,\tau)\, dz \le \mathrm{const} \int_U \frac{1}{(t-\tau)^{n/2}} \exp\left[ -\frac{\lambda |z-y|^2}{4(t-\tau)} \right] dz \le \mathrm{const},
\]

where the last estimate follows because the function
\[
\frac{1}{(t-\tau)^{n/2}} \exp\left[ -\frac{\lambda |z-y|^2}{4(t-\tau)} \right]
\]
is integrable in $z$ over $\mathbb{R}^n$, with integral bounded independently of $t - \tau$.

When $|x-y| > 2M_1$, the change of variables $z \mapsto x - z$ gives
\[
\int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz = \int G(x-z,t;y,\tau)\, \nu(t,z,x-z)\, dz.
\]
Since $\nu$ is a bounded function with compact support, we have
\[
\int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz \le \mathrm{const} \int_{|z| \le M_1} G(x-z,t;y,\tau)\, dz.
\]
By inequality (4.19), we get
\[
\int_{|z| \le M_1} G(x-z,t;y,\tau)\, dz \le \mathrm{const} \int_{|z| \le M_1} \frac{1}{(t-\tau)^{n/2}} \exp\left[ -\frac{\lambda |x-y-z|^2}{4(t-\tau)} \right] dz.
\]
Because $|x-y| > 2M_1$ and $|z| < M_1$,
\[
|x-y-z| \ge |x-y| - |z| \ge |x-y| - M_1 \ge |x-y| - \tfrac{1}{2}|x-y| = \tfrac{1}{2}|x-y|, \quad \text{so} \quad -|x-y-z|^2 \le -\tfrac{1}{4}|x-y|^2;
\]
thus
\[
\int_{|z| \le M_1} \frac{1}{(t-\tau)^{n/2}} \exp\left[ -\frac{\lambda |x-y-z|^2}{4(t-\tau)} \right] dz \le \mathrm{const}\, \frac{1}{(t-\tau)^{n/2}} \exp\left[ -\frac{(\lambda/4)\, |x-y|^2}{4(t-\tau)} \right].
\]
Using the same trick as in (4.19), we get
\[
\frac{1}{(t-\tau)^{n/2}} \exp\left[ -\frac{(\lambda/4)\, |x-y|^2}{4(t-\tau)} \right] \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}}.
\]
Therefore the integral term also satisfies
\[
\int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}}. \qquad (4.23)
\]
Adding inequalities (4.20), (4.21), (4.22) and (4.23), we obtain
\[
|L^* G(x,t;y,\tau)| \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}},
\]
when $1 - \frac{\alpha}{2} < \mu < 1$. $\square$

The following lemma is based on [14, Section 1.4, Lemma 2], which holds only for a bounded domain. We extend it to the case of an unbounded domain.

Lemma 4.3. Suppose that $U$ is a domain in $\mathbb{R}^n$ (possibly $U = \mathbb{R}^n$), $0 < \alpha < n$, $0 < \beta < n$, and $\alpha + \beta > n$. Then for any $x, z \in U$ with $x \ne z$ we have
\[
\int_U \frac{1}{|x-y|^{\alpha}\, |y-z|^{\beta}}\, dy \le \mathrm{const} \cdot |x-z|^{n-\alpha-\beta}.
\]

Proof. Let $U_1 = \{ y \in U : |y-z| < \frac{|x-z|}{2} \}$, $U_2 = \{ y \in U : |y-x| < \frac{|x-z|}{2} \}$ and $U_3 = U \setminus (U_1 \cup U_2)$.

First, we estimate the integral on $U_1$, where $|x-y| \ge \frac{|x-z|}{2}$:
\[
\int_{U_1} \frac{dy}{|x-y|^{\alpha}\, |y-z|^{\beta}} \le \int_{U_1} \frac{2^{\alpha}\, dy}{|x-z|^{\alpha}\, |y-z|^{\beta}} \le \frac{2^{\alpha}\, \omega_{n-1}}{|x-z|^{\alpha}} \int_0^{\frac{|x-z|}{2}} r^{n-1-\beta}\, dr = \mathrm{const} \cdot |x-z|^{n-\alpha-\beta}.
\]
Similarly,
\[
\int_{U_2} \frac{dy}{|x-y|^{\alpha}\, |y-z|^{\beta}} \le \mathrm{const} \cdot |x-z|^{n-\alpha-\beta}.
\]
Now we estimate $I_3 := \int_{U_3} \frac{dy}{|x-y|^{\alpha} |y-z|^{\beta}}$. First we notice that
\[
\frac{1}{|x-y|^{\alpha}\, |y-z|^{\beta}} \le \frac{1}{|x-y|^{\alpha+\beta}} + \frac{1}{|z-y|^{\alpha+\beta}}.
\]
Thus
\[
I_3 \le \int_{U_3} \left[ \frac{1}{|x-y|^{\alpha+\beta}} + \frac{1}{|z-y|^{\alpha+\beta}} \right] dy \le 2\omega_{n-1} \int_{\frac{|x-z|}{2}}^{\infty} r^{n-1-\alpha-\beta}\, dr = \mathrm{const} \cdot |x-z|^{n-\alpha-\beta},
\]
since $\alpha + \beta > n$. $\square$
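Lemma 4.3 is easily checked numerically. The following Python sketch (not part of the original argument) evaluates the integral for $n = 1$, $U = \mathbb{R}$ and one illustrative choice of $\alpha$ and $\beta$; by scaling, the ratio to $|x-z|^{n-\alpha-\beta}$ is in fact constant in this case.

```python
# Numerical check of Lemma 4.3 for n = 1, U = R; alpha and beta are
# illustrative choices with alpha + beta > n = 1.
import numpy as np
from scipy.integrate import quad

alpha, beta, n = 0.6, 0.7, 1
x = 0.0

for z in [0.5, 1.0, 2.0, 4.0]:
    integrand = lambda y: abs(x - y) ** -alpha * abs(y - z) ** -beta
    # split the line at the two integrable singularities y = x and y = z
    I = sum(quad(integrand, a, b, limit=200)[0]
            for (a, b) in [(-np.inf, x), (x, z), (z, np.inf)])
    print(f"|x-z| = {abs(x - z):4.1f}   ratio = {I / abs(x - z) ** (n - alpha - beta):.4f}")
# The ratio I / |x-z|^(n - alpha - beta) stays bounded (here constant),
# as the lemma predicts.
```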

The following result closely follows the construction of $\Phi$ in [14, Section 1.4]; see the proofs of [14, Section 1.4, Theorems 7 and 8]. The construction is extended to unbounded domains in [14, Section 1.6]; see the proofs of [14, Section 1.6, Theorems 10 and 11].

Theorem 4.2. There exists a solution $\Phi$ of equation (4.17) of the form
\[
-\Phi(x,t;y,\tau) = \sum_{k=1}^{\infty} (L^* G)_k(x,t;y,\tau), \qquad (4.24)
\]
where
\[
(L^* G)_1 := L^* G, \qquad (L^* G)_{k+1}(x,t;y,\tau) := \int_\tau^t \int_U L^* G(x,t;\eta,\sigma)\, (L^* G)_k(\eta,\sigma;y,\tau)\, d\eta\, d\sigma.
\]

Proof. Using Lemma 4.2 and Lemma 4.3 (see Remark 4.1 for additional details in the case of unbounded domains $U \subset \mathbb{R}^n$), we get
\[
\begin{aligned}
|(L^* G)_2(x,t;y,\tau)| &= \left| \int_\tau^t \int_U L^* G(x,t;\eta,\sigma)\, (L^* G)(\eta,\sigma;y,\tau)\, d\eta\, d\sigma \right| \\
&\le \mathrm{const} \int_\tau^t \frac{d\sigma}{(t-\sigma)^{\mu} (\sigma-\tau)^{\mu}} \int_U \frac{d\eta}{|x-\eta|^{n+2-2\mu-\alpha}\, |\eta-y|^{n+2-2\mu-\alpha}} \\
&\le \frac{\mathrm{const}}{(t-\tau)^{2\mu-1}}\, \frac{1}{|x-y|^{n+2(2-2\mu-\alpha)}},
\end{aligned}
\]

when $2\mu > 1$ and $n + 2(2-2\mu-\alpha) > 0$. Since $1 - \frac{\alpha}{2} < \mu < 1$, the singularity of $(L^* G)_2$ is weaker than that of $L^* G$. Similarly, we can proceed to estimate $(L^* G)_3$:

\[
\begin{aligned}
|(L^* G)_3(x,t;y,\tau)| &= \left| \int_\tau^t \int_U L^* G(x,t;\eta,\sigma)\, (L^* G)_2(\eta,\sigma;y,\tau)\, d\eta\, d\sigma \right| \\
&\le \mathrm{const} \int_\tau^t \frac{d\sigma}{(t-\sigma)^{\mu} (\sigma-\tau)^{2\mu-1}} \int_U \frac{d\eta}{|x-\eta|^{n+2-2\mu-\alpha}\, |\eta-y|^{n+2(2-2\mu-\alpha)}} \\
&\le \frac{\mathrm{const}}{(t-\tau)^{3\mu-2}}\, \frac{1}{|x-y|^{n+3(2-2\mu-\alpha)}},
\end{aligned}
\]
when $3\mu > 2$ and $n + 3(2-2\mu-\alpha) > 0$.

After finitely many steps we arrive at some $k_0$ for which $k_0 \mu < k_0 - 1$ and $n + k_0(2 - 2\mu - \alpha) < 0$; thus
\[
|(L^* G)_{k_0}(x,t;y,\tau)| \le \mathrm{const}.
\]

From $k_0$ on, we proceed by induction on $m$: we claim that
\[
|(L^* G)_{m+k_0}(x,t;y,\tau)| \le C_0\, \frac{[C (t-\tau)^{1-\mu}]^m}{\Gamma(1 + (1-\mu)m)},
\]
where $C_0$ and $C$ are constants and $\Gamma$ is the gamma function. For $m = 0$, this follows from $|(L^* G)_{k_0}(x,t;y,\tau)| \le \mathrm{const}$. Assuming now that it holds for some integer $m \ge 0$, and using Lemma 4.2, we get

\[
|(L^* G)_{m+1+k_0}(x,t;y,\tau)| \le \mathrm{const} \cdot C_0\, \frac{C^m}{\Gamma((1-\mu)m + 1)} \int_\tau^t (t-\sigma)^{-\mu} (\sigma-\tau)^{(1-\mu)m}\, d\sigma.
\]

Substituting $\rho = \frac{\sigma - \tau}{t - \tau}$ into the preceding expression and using the formula
\[
\int_0^1 (1-\rho)^{a-1} \rho^{b-1}\, d\rho = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)},
\]
we obtain
\[
\int_\tau^t (t-\sigma)^{-\mu} (\sigma-\tau)^{(1-\mu)m}\, d\sigma = (t-\tau)^{(1-\mu)(m+1)} \int_0^1 (1-\rho)^{-\mu} \rho^{(1-\mu)m}\, d\rho = (t-\tau)^{(1-\mu)(m+1)}\, \frac{\Gamma(1-\mu)\, \Gamma(1 + m(1-\mu))}{\Gamma(1 + (1-\mu)(m+1))}.
\]

Thus
\[
|(L^* G)_{m+1+k_0}(x,t;y,\tau)| \le C_0\, \frac{[C (t-\tau)^{1-\mu}]^{m+1}}{\Gamma(1 + (1-\mu)(m+1))},
\]
and the induction step holds for $m+1$.
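The Beta-integral identity used in this induction step can be confirmed numerically; the following short Python check (with illustrative values of $\mu$ and $m$) is not part of the original proof.

```python
# Check of the Beta integral used above, with a = 1 - mu and
# b = 1 + (1 - mu) * m for the illustrative values mu = 0.6, m = 3.
from scipy.integrate import quad
from scipy.special import gamma

mu, m = 0.6, 3
a, b = 1 - mu, 1 + (1 - mu) * m
lhs = quad(lambda r: (1 - r) ** (a - 1) * r ** (b - 1), 0, 1)[0]
rhs = gamma(a) * gamma(b) / gamma(a + b)
print(lhs, rhs)   # the two values agree
```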

It follows that the series
\[
-\Phi(x,t;y,\tau) = \sum_{k=1}^{\infty} (L^* G)_k(x,t;y,\tau) = \sum_{k=1}^{k_0} (L^* G)_k(x,t;y,\tau) + \sum_{m=1}^{\infty} (L^* G)_{k_0+m}(x,t;y,\tau)
\]
converges, with
\[
|\Phi(x,t;y,\tau)| \le \frac{\mathrm{const}}{(t-\tau)^{\mu}}\, \frac{1}{|x-y|^{n+2-2\mu-\alpha}}. \qquad \square
\]

From Theorem 4.2 it follows that the series expansion (4.24) for $\Phi(x,t;y,\tau)$ converges and that the integral term in (4.17) is equal to
\[
\sum_{k=1}^{\infty} \int_\tau^t \int_U L^* G(x,t;\eta,\sigma) \cdot (L^* G)_k(\eta,\sigma;y,\tau)\, d\eta\, d\sigma.
\]
Therefore
\[
p(x,t;y,\tau) = G(x,t;y,\tau) + \int_\tau^t \int_U G(x,t;\eta,\sigma)\, \Phi(\eta,\sigma;y,\tau)\, d\eta\, d\sigma
\]

satisfies (4.7), so property (i) in Definition 4.3 holds. By Proposition 4.5, property (ii) also holds. Therefore $p(x,t;y,\tau)$ is a fundamental solution of (4.7). Compare the statements and proofs of [14, Section 1.4, Theorem 8] for bounded domains and [14, Section 1.6, Theorem 10] for unbounded domains.

Remark 4.1. Friedman notes in [14, Section 1.6] that the construction of the fundamental solution $p(x,t;y,\tau)$ extends from the case of a bounded domain $U \Subset \mathbb{R}^n$ to an unbounded domain $U \subset \mathbb{R}^n$, and in particular to $U = \mathbb{R}^n$. We briefly summarize one approach to the changes for unbounded domains here and refer the reader to standard references for further details [?, Chapter V] and [?, Section IV.11]. The estimate in Lemma 4.2 is replaced [?, Chapter V, Equation (3.19)], [?, Chapter 4, Equation (11.17)] by the better behaved
\[
|L^* G(x,t;y,\tau)| \le c\, (t-\tau)^{\frac{1}{2}(\alpha-n-2)} \exp\left( -C\, \frac{|x-y|^2}{t-\tau} \right),
\]
where $\alpha \in (0,1)$ is the Hölder exponent, as before, and $C$, $c$ are positive constants; Lemma 4.3 is then not needed. This estimate is standard when no integral term appears in the definition of $L^*$, while the proof of Lemma 4.2 shows that the integral term in $L^* G(x,t;y,\tau)$ also obeys this estimate; indeed, the proof of Lemma 4.2 yields
\[
\int_U G(z,t;y,\tau)\, \nu(t,x-z,z)\, dz \le c\, (t-\tau)^{-\frac{n}{2}} \exp\left( -C\, \frac{|x-y|^2}{t-\tau} \right) \le c\, (t-\tau)^{\frac{1}{2}(\alpha-n-2)} \exp\left( -C\, \frac{|x-y|^2}{t-\tau} \right),
\]
using $\frac{1}{2}(\alpha-2) \in (-1,-\frac{1}{2})$ and $0 \le t-\tau \le T$, and this bound replaces (4.23). Next, the estimate appearing in the proof of Theorem 4.2 for the term $(L^* G)_k(x,t;y,\tau)$ in the infinite series defining $\Phi(x,t;y,\tau)$, obtained by the iterative method of solving the Volterra integral equation (4.17), is replaced by [?, Chapter V, Equation (3.22)], [?, Chapter 4, Equation (11.25)]
\[
|(L^* G)_k(x,t;y,\tau)| \le c^k \left( \frac{\pi}{C} \right)^{\frac{n(k-1)}{2}} \frac{\Gamma^k(\alpha/2)}{\Gamma(k\alpha/2)}\, (t-\tau)^{\frac{1}{2}(k\alpha-n-2)} \exp\left( -C\, \frac{|x-y|^2}{t-\tau} \right),
\]
for $k \ge 1$, where $(L^* G)_1 := L^* G$. These estimates for $(L^* G)_k(x,t;y,\tau)$ ensure uniform convergence of the series in the statement of Theorem 4.2 for $t-\tau > 0$ and yield the estimate
\[
|\Phi(x,t;y,\tau)| \le c\, (t-\tau)^{\frac{1}{2}(\alpha-n-2)} \exp\left( -C\, \frac{|x-y|^2}{t-\tau} \right),
\]
just as in [?, Chapter 4, Equation (11.26)].

4.4 Existence and uniqueness of weak solutions

In this section, we show the existence and uniqueness of weak solutions of the partial integro-differential equation (4.7), $L^* p = 0$.

Let $U$ be an open subset of $\mathbb{R}^n$ and set $U_T = U \times (0,T]$. The domain $U$ can be unbounded in $\mathbb{R}^n$, and the special case $U = \mathbb{R}^n$ is of particular importance.

Let
\[
A u = -\sum_{i,j=1}^n \left( a_{ij}(x,t)\, u_{x_i} \right)_{x_j} + \sum_{i=1}^n \left( b_i(x,t)\, u \right)_{x_i} + c(x,t)\, u + \int_U u(z,t)\, \nu(t,x-z,z)\, dz,
\]
where $\nu(t,x,z) : [0,T] \times U \times U \to \mathbb{R}$.

We will study the following parabolic equation with initial and boundary conditions:
\[
\begin{aligned}
u_t + A u &= f \quad \text{in } U_T, \qquad (4.25) \\
u &= 0 \quad \text{on } \partial U \times [0,T], \\
u &= g \quad \text{on } U \times \{t = 0\}.
\end{aligned}
\]

We assume that the coefficients of $A$ satisfy the following conditions (A6):
\[
\sum_{i,j=1}^n a_{ij}(x,t)\, \xi_i \xi_j \ge \theta |\xi|^2 \quad \text{for all } (x,t) \in U_T,\ \xi \in \mathbb{R}^n, \qquad (4.26)
\]
\[
a_{ij}, b_i, c \in L^{\infty}(U_T), \qquad (4.27)
\]
\[
\nu \in L^{\infty}(0,T; L^2(U \times U)), \qquad (4.28)
\]
\[
f \in L^2(U_T), \qquad (4.29)
\]
\[
g \in L^2(U). \qquad (4.30)
\]

Remark 4.2. The condition $\nu \in L^{\infty}(0,T; L^2(U \times U))$ means that
\[
\| \nu(t,\cdot,\cdot) \|_{L^2(U \times U)}^2 := \int_U \int_U \nu^2(t,x,z)\, dx\, dz
\]
is bounded by a finite constant for a.e. $t \in [0,T]$.

To apply a theorem in [26, Section 3.2], we need the following lemma.

Lemma 4.4. Suppose $a_{ij}$, $b_i$, $c$ and $\nu$ satisfy (A6). Then there exist positive constants $\lambda$ and $C$, depending only on the coefficients of $A$, such that
\[
\langle A u, u \rangle + \lambda \| u \|_{L^2(U)}^2 \ge C \| u \|_{H_0^1(U)}^2
\]
for a.e. $t \in [0,T]$ and all $u \in H_0^1(U)$.

Proof. Since $a_{ij}$ satisfies the ellipticity condition (4.26), we have
\[
\int_U a_{ij}(x,t)\, u_{x_i} u_{x_j}\, dx \ge \theta \| \nabla u \|_{L^2(U)}^2 \qquad (4.31)
\]
for a.e. $t \in [0,T]$ and the positive constant $\theta$.

Since $b_i \in L^{\infty}(U_T)$, for a.e. $t \in [0,T]$ the Cauchy inequality yields
\[
\left| \int_U b_i(t,x)\, u(t,x)\, u_{x_i}(x)\, dx \right| \le M \| u \|_{L^2(U)} \| u \|_{H^1(U)}.
\]
Then for any $\varepsilon > 0$ there exists $M_{\varepsilon} > 0$ such that
\[
\int_U b_i(t,x)\, u(t,x)\, u_{x_i}(x)\, dx \ge -M \| u \|_{L^2(U)} \| u \|_{H^1(U)} \ge -M_{\varepsilon} \| u \|_{L^2(U)}^2 - \varepsilon \| u \|_{H^1(U)}^2. \qquad (4.32)
\]

Because $c \in L^{\infty}(U_T)$, for a.e. $t \in [0,T]$ there exists a constant $M_1$ such that
\[
\left| \int_U c\, u^2(t,x)\, dx \right| \le M_1 \| u \|_{L^2(U)}^2. \qquad (4.34)
\]

Since
\[
\int_{U \times U} \nu^2(t,x,z)\, dx\, dz
\]
is bounded by a finite number for a.e. $t \in [0,T]$, the Cauchy inequality gives
\[
\begin{aligned}
\left| \int_U \int_U u(z)\, u(x)\, \nu(t,x-z,z)\, dx\, dz \right|
&\le \left( \int_U u^2(x)\, dx \right)^{1/2} \left( \int_U \left( \int_U u(z)\, \nu(t,x-z,z)\, dz \right)^2 dx \right)^{1/2} \\
&\le \| u \|_{L^2(U)} \left( \int_U \left( \int_U u^2(z)\, dz \int_U \nu^2(t,x-z,z)\, dz \right) dx \right)^{1/2} \\
&= \| u \|_{L^2(U)}^2 \left( \int_U \int_U \nu^2(t,x-z,z)\, dx\, dz \right)^{1/2} \\
&\le M_2 \| u \|_{L^2(U)}^2 \qquad (4.35)
\end{aligned}
\]
for a.e. $t \in [0,T]$ and some constant $M_2$. By combining (4.31), (4.32), (4.34) and (4.35), we obtain
\[
\begin{aligned}
\langle A u, u \rangle_{L^2(U)} &= \int_U \left\{ a_{ij}(x,t)\, u_{x_i} u_{x_j} + b_i(t,x)\, u(t,x)\, u_{x_i}(x) + c\, u^2 \right\} dx + \int_U \int_U u(z)\, u(x)\, \nu(t,x-z,z)\, dz\, dx \\
&\ge \theta \| \nabla u \|_{L^2(U)}^2 - M_{\varepsilon} \| u \|_{L^2(U)}^2 - \varepsilon \| u \|_{H^1(U)}^2 - M_1 \| u \|_{L^2(U)}^2 - M_2 \| u \|_{L^2(U)}^2.
\end{aligned}
\]

Setting $\varepsilon := \theta/2$ and $\lambda := M_{\varepsilon} + M_1 + M_2 + \theta$, we obtain
\[
\langle A u, u \rangle_{L^2(U)} + \lambda \| u \|_{L^2(U)}^2 \ge C \| u \|_{H_0^1(U)}^2,
\]
where $C := \theta/2$. This completes the proof. $\square$
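The key new ingredient in Lemma 4.4, compared with the classical Gårding inequality, is the bound (4.35) on the nonlocal term. The following Python sketch (not from the original text) verifies this Cauchy-Schwarz bound on a grid; the test function $u$ and kernel $\nu$ are arbitrary illustrative choices.

```python
# Grid check of the Cauchy-Schwarz bound (4.35): the double integral of
# u(x) u(z) nu(t, x-z, z) is dominated by ||nu||_{L^2(UxU)} * ||u||_{L^2}^2.
import numpy as np

L, N = 4.0, 400
x = np.linspace(-L, L, N)
h = x[1] - x[0]

u = np.exp(-x ** 2) * np.sin(3 * x)            # illustrative test function
X, Z = np.meshgrid(x, x, indexing="ij")
nu = np.exp(-((X - Z) ** 2 + Z ** 2))          # plays the role of nu(t, x-z, z)

lhs = abs(np.sum(u[:, None] * u[None, :] * nu) * h * h)
nu_L2 = np.sqrt(np.sum(nu ** 2) * h * h)       # ||nu(t,.,.)||_{L^2(U x U)}
rhs = nu_L2 * np.sum(u ** 2) * h               # ||nu|| * ||u||_{L^2}^2
print(lhs, "<=", rhs)                          # the inequality holds
```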

Let $V$ be a separable Hilbert space with dual $V'$; then $L^2(0,T;V)$ is a Hilbert space with dual $L^2(0,T;V')$. Assume that for each $t \in [0,T]$ we are given a continuous bilinear form $a(t;\cdot,\cdot)$ on $V$ or, equivalently, an operator $\mathcal{A}(t) \in \mathcal{L}(V,V')$,
\[
\mathcal{A}(t) u(v) = a(t;u,v), \qquad u, v \in V,\ t \in [0,T],
\]
such that for each pair $u, v \in V$ the function $a(\cdot;u,v)$ belongs to $L^{\infty}(0,T;\mathbb{R})$. Assume $H$ is a Hilbert space identified with its dual and that the embedding $V \hookrightarrow H$ is dense and continuous; hence $H \subset V'$ by restriction. Finally, let $f \in L^2(0,T;V')$ and $u_0 \in H$ be given.

Consider the abstract Cauchy problem
\[
u \in L^2(0,T;V): \quad u' + \mathcal{A} u = f \ \text{ in } L^2(0,T;V'), \qquad u(0) = u_0,
\]
where the separable Hilbert spaces $V \hookrightarrow H \hookrightarrow V'$, the bounded and measurable operators $\mathcal{A}(t) : V \to V'$, and $f \in L^2(0,T;V')$, $u_0 \in H$ are given as above.

Proposition 4.6. ([26, Proposition 2.3]) Assume the operators are uniformly coercive: there is a $c > 0$ such that
\[
\mathcal{A}(t) v(v) \ge c \| v \|_V^2, \qquad v \in V,\ t \in [0,T].
\]
Then there exists a unique solution of the Cauchy problem, and it satisfies
\[
\| u \|_{L^2(0,T;V)}^2 \le (1/c)^2 \left( \| f \|_{L^2(0,T;V')}^2 + | u_0 |_H^2 \right).
\]

This result can be extended by means of an exponential shift: $u$ solves the preceding Cauchy problem exactly when $v(t) = e^{-\lambda t} u(t)$, $0 \le t \le T$, is the corresponding solution of the problem
\[
v' + (\mathcal{A}(\cdot) + \lambda I)\, v = e^{-\lambda t} f(t), \qquad v(0) = u_0.
\]

Corollary 4.1. A sufficient condition for existence in Proposition 4.6 is that there exist $\lambda \in \mathbb{R}$ and $c > 0$ such that
\[
\mathcal{A}(t) v(v) + \lambda | v |_H^2 \ge c \| v \|_V^2, \qquad v \in V,\ t \in [0,T].
\]
Similarly, uniqueness is obtained from such an estimate, even with $c = 0$.

Theorem 4.3. If the coefficients of $A$ satisfy (A4) and (A6), then there exists a unique weak solution of (4.25).

Proof. We choose $H = L^2(U)$, and $V = H_0^1(U)$ for $U \subsetneq \mathbb{R}^n$ or $V = H^1(\mathbb{R}^n)$ for $U = \mathbb{R}^n$; then $V' = H^{-1}(U)$. Lemma 4.4 implies
\[
\langle A u, u \rangle_{L^2(U)} + \lambda \| u \|_{L^2(U)}^2 \ge C \| u \|_{H_0^1(U)}^2.
\]
We obtain the desired result by Proposition 4.6 and Corollary 4.1. $\square$

4.5 Uniqueness of fundamental solutions

Theorem 4.4. Equation (4.7) has a unique fundamental solution.

Proof. Assume that there exist two fundamental solutions $p(x,t)$ and $\tilde p(x,t)$. Given any initial condition $g \in C_0^{\infty}(U)$, the parabolic equation with initial and boundary conditions
\[
\begin{aligned}
u_t + A u &= 0 \quad \text{in } U_T, \\
u &= 0 \quad \text{on } \partial U \times [0,T], \\
u &= g \quad \text{on } U \times \{t = 0\},
\end{aligned}
\]
has two solutions which can be expressed in terms of the fundamental solutions $p(x,t)$ and $\tilde p(x,t)$:
\[
u(x,t) = \int_U p(x-z,t)\, g(z)\, dz, \qquad \tilde u(x,t) = \int_U \tilde p(x-z,t)\, g(z)\, dz.
\]
According to Theorem 4.3, the functions $u$ and $\tilde u$ are equal for a.e. $t \in [0,T]$. Thus
\[
\int_U \left( p(x-z,t) - \tilde p(x-z,t) \right) g(z)\, dz = 0 \quad \text{for a.e. } (x,t) \in U_T,
\]
for every $g \in C_0^{\infty}(U)$. This implies
\[
p(x,t) = \tilde p(x,t) \quad \text{for a.e. } (x,t) \in U_T. \qquad \square
\]

In other words, the partial integro-differential equation
\[
L^* p = p_t - \sum_{i,j=1}^n A_{ij}(x,t)\, p_{x_i x_j} + \sum_{i=1}^n B_i(x,t)\, p_{x_i} + C(x,t)\, p - \int_{\mathbb{R}^n} p(z,t)\, \nu(t,x-z,z)\, dz = 0 \qquad (4.36)
\]
has a unique fundamental solution. Since the marginal density function $p_X(x,t)$ of the semimartingale $X(t)$ and the marginal density function $p_Y(y,t)$ of the mimicking process $Y(t)$ both satisfy (4.36), the uniqueness of the fundamental solution of (4.36) implies that $X(t)$ and $Y(t)$ have the same marginal distributions.

This extends Theorem 3.3 to the case of a semimartingale. It is also justified in [25, Theorem 3.4.2].

Chapter 5

Markov Processes and Pseudo-Differential Operators

When we have a partial integro-differential equation, it is natural to investigate it using pseudo-differential operators. However, in the forward equation we derive, the integral term is $\int_{\mathbb{R}^n} p(z,t)\, \nu(t,x-z,z)\, dz$, which is not a standard convolution, so we cannot directly apply the general theory of pseudo-differential operators.

In this chapter, we review operator semigroups and Feller processes and discuss how pseudo-differential operators arise in the martingale problem. To end the chapter, we indicate an area of further study: we plan to investigate the generator $A$ of a semimartingale $X(t)$. We first want to show that $A$ is a pseudo-differential operator with a symbol $a(t,x,\xi)$. In principle, we could then check that $a(t,x,\xi)$ satisfies certain conditions in [6, Theorem 4.2], which would imply the existence of a Markov process with generator $A$. We believe that this could lead to a new proof of the mimicking theorems, but it appears to be a challenging problem and we leave it for future research.

5.1 Operator semigroups and Feller processes

In this section, we give a brief introduction to operator semigroups and their gener- ators from a probabilistic perspective. We outline the relationship between operator semigroups and Feller processes.

Definition 5.1. (Operator semigroups, [18, Definition 4.1.1])
Let $(X, \| \cdot \|_X)$ be a Banach space. A one-parameter family of bounded linear operators $(T_t)_{t \ge 0} \subset \mathcal{L}(X,X)$ is called an operator semigroup if
\[
T_{t+s} = T_t \circ T_s \quad \text{for all } s, t \ge 0, \qquad T_0 = I. \qquad (5.1)
\]
We call $(T_t)_{t \ge 0}$ strongly continuous if
\[
\lim_{t \to 0} \| T_t u - u \|_X = 0 \quad \text{for all } u \in X.
\]
The semigroup $(T_t)_{t \ge 0}$ is called a contraction semigroup if $\| T_t \| \le 1$ holds for all $t \ge 0$, that is, if each of the operators $T_t$ is a contraction. As usual, $\| T_t \|$ denotes the operator norm.

It is easy to see that (5.1) corresponds to the exponential Cauchy functional equation
\[
g(s+t) = g(s)\, g(t), \qquad g(0) = 1,
\]
where $g(\cdot)$ is a nonnegative function from $\mathbb{R}$ to $\mathbb{R}$. The solutions of the exponential Cauchy functional equation are the exponential functions $g(t) = e^{\alpha t}$, $\alpha \in \mathbb{R}$. However, this family represents all possible solutions only under an additional continuity assumption; in fact, the assumption that $g$ is right-continuous at the origin already suffices to make the functions $g(t) = e^{\alpha t}$ the only solutions. By analogy with the Cauchy equation, we now want to show that an operator semigroup can be represented in the form $T_t = e^{tA}$ for a suitable operator $A$.

Definition 5.2. (Generators of semigroups, [18, Definition 4.1.11])
Let $(T_t)_{t \ge 0}$ be a strongly continuous semigroup of operators on a Banach space $(X, \| \cdot \|_X)$. The generator $A$ of $(T_t)_{t \ge 0}$ is defined by
\[
(A u)(x) = \lim_{t \to 0^+} \frac{T_t u(x) - u(x)}{t}, \qquad (5.2)
\]
with domain
\[
D(A) = \left\{ u \in X : \lim_{t \to 0^+} \frac{T_t u - u}{t} \ \text{exists as a strong limit} \right\}.
\]

While the infinitesimal generator $A$ is defined as the right-hand derivative of $T_t$ at $0$, the derivative of $T_t$ at any point can be calculated by
\[
\frac{d}{dt} (T_t u)(x) = \lim_{h \to 0} \frac{((T_{t+h} - T_t) u)(x)}{h},
\]
and we have the following lemma.

Lemma 5.1. ([18, Lemma 4.1.14]) Let $(T_t)_{t \ge 0}$ be a strongly continuous semigroup of operators on a Banach space $(X, \| \cdot \|_X)$, and denote by $A$ its generator with domain $D(A) \subset X$. Then:

(i) For any $u \in X$ and $t \ge 0$, it follows that $\int_0^t T_s u\, ds \in D(A)$ and
\[
T_t u - u = A \int_0^t T_s u\, ds.
\]

(ii) For $u \in D(A)$ and $t \ge 0$, we have $T_t u \in D(A)$, that is, $D(A)$ is invariant under $T_t$, and
\[
\frac{d}{dt} T_t u = A T_t u = T_t A u.
\]

(iii) For $u \in D(A)$ and $t \ge 0$, we get
\[
T_t u - u = \int_0^t A T_s u\, ds = \int_0^t T_s A u\, ds.
\]

The derivative $\frac{d}{dt} T_t u$ is well defined on the domain $D(A)$ of $A$; in fact, the equation $D_t T_t f = A T_t f$ is Kolmogorov's backward equation and $D_t T_t f = T_t A f$ is Kolmogorov's forward equation.

From a stochastic point of view, operator semigroups start from the study of Markov processes.

Definition 5.3. Given a Markov process $X$, we can define the corresponding family of operators $(T_{s,t})$, $0 \le s \le t$, by
\[
(T_{s,t} f)(x) = E[f(X(t)) \mid X(s) = x], \qquad (5.3)
\]
for each $f \in B_b(\mathbb{R}^n)$, $x \in \mathbb{R}^n$, where $B_b(\mathbb{R}^n)$ denotes the space of bounded Borel measurable functions on $\mathbb{R}^n$.

If a Markov process 푋(푡) is time-homogeneous, we can write 푇푡−푠 = 푇푠,푡.

Theorem 5.1. Let $X(t)$ be a time-homogeneous Markov process. Then the transition operators $(T_t)_{t \ge 0}$ form a semigroup.

Proof. We want to show that $T_{t+s} f(x) = T_t T_s f(x)$ holds for any $f \in B_b(\mathbb{R}^n)$. We have
\[
T_s f(x) = T_{0,s} f(x) = E^x[f(X(s))] =: g(x).
\]
Then, by the Markov property of $X$, we get
\[
T_t (T_s f)(x) = T_{0,t}\, g(x) = E^x[g(X(t))] = E^x\left[ E^{X(t)}[f(X(s))] \right] = E^x\left[ E^x[f(X_{s+t}) \mid \mathcal{F}_t] \right] = E^x[f(X_{s+t})] = T_{s+t} f(x).
\]
Hence $T_{s+t} = T_s T_t$, and $(T_t)_{t \ge 0}$ forms a semigroup. $\square$
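The semigroup property of Theorem 5.1 can be illustrated numerically. The following Python sketch (not from the original text) discretizes the Brownian transition operators $T_t$ as quadrature matrices against the Gaussian kernel and checks the identity $T_{t+s} f = T_t T_s f$ on a grid; the grid, times and test function are illustrative choices.

```python
# Grid check of the semigroup property for Brownian motion: T_t acts by
# integration against the heat kernel.
import numpy as np

y = np.linspace(-20.0, 20.0, 2001)
h = y[1] - y[0]

def heat_op(t):
    """Quadrature matrix of (T_t f)(y_i) = sum_j K_t(y_i, y_j) f(y_j) h."""
    K = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return K * h

f = 1.0 / (1.0 + y ** 2)
s, t = 0.3, 0.5
lhs = heat_op(s + t) @ f
rhs = heat_op(t) @ (heat_op(s) @ f)
err = np.max(np.abs(lhs - rhs)[np.abs(y) < 5.0])  # ignore truncation near the edges
print(err)   # close to zero: T_{t+s} = T_t T_s on the grid
```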

Example 5.1. (The generator of a compound Poisson process)
Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of independent and identically distributed random variables with distribution function $F(x)$, and let $N_t$ be a Poisson process with intensity $\lambda$. Denote $S_n = X_1 + \cdots + X_n$. The compound Poisson process $Y$ is defined by
\[
Y(t) = \sum_{n \ge 1} S_n\, \mathbf{1}_{\{N_t = n\}}.
\]
The transition operator $T_t$ of $Y$ is given by
\[
T_t f(x) = E^x[f(Y(t))],
\]
where we assume that $f \in C_b(\mathbb{R})$. To simplify calculations, we define an operator $L$ by
\[
L f(x) := E[f(x + X_1)] = \int_{\mathbb{R}} f(x+y)\, F(dy)
\]
and note that
\[
L^n f(x) = E[f(x + X_1 + \cdots + X_n)] = E[f(x + S_n)].
\]
Now it holds that
\[
(T_t f)(x) = E^x[f(Y(t))] = \sum_{n \ge 0} E[f(x + S_n)]\, P(N_t = n) = e^{-\lambda t} \sum_{n \ge 0} \frac{(\lambda t)^n}{n!}\, E[f(x + S_n)] = e^{-\lambda t} \sum_{n \ge 0} \frac{(\lambda t)^n}{n!}\, L^n f(x) = \left( e^{\lambda t (L - I)} f \right)(x),
\]
and the transition semigroup can be written as $e^{tA}$, where $A$ is given by
\[
A f(x) = \lambda (L - I) f(x) = \lambda \int_{\mathbb{R}} \left( f(x+y) - f(x) \right) F(dy).
\]
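A Monte Carlo check of this generator formula is straightforward. In the following Python sketch (with illustrative parameters and standard normal jumps $F = N(0,1)$, neither of which is a choice made in the thesis), the left side approximates $(T_t f(x) - f(x))/t$ for small $t$, and the right side computes $\lambda \int (f(x+y) - f(x))\, F(dy)$ by quadrature.

```python
# Monte Carlo check of A f = lam * E[f(x + X_1) - f(x)] for a compound
# Poisson process with intensity lam and N(0,1) jumps (illustrative choices).
import numpy as np

rng = np.random.default_rng(0)
lam, t, x = 2.0, 0.01, 0.3
f = lambda v: np.sin(v)

# left side: (E^x[f(Y_t)] - f(x)) / t.  Given N_t = k, the sum of k
# independent N(0,1) jumps is exactly N(0, k), so we sample it directly.
n_paths = 2_000_000
k = rng.poisson(lam * t, n_paths)
jump_sums = rng.standard_normal(n_paths) * np.sqrt(k)
lhs = (f(x + jump_sums).mean() - f(x)) / t

# right side: lam * integral of (f(x+y) - f(x)) against the N(0,1) density
yq = np.linspace(-8.0, 8.0, 8001)
hq = yq[1] - yq[0]
dens = np.exp(-yq ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
rhs = lam * np.sum((f(x + yq) - f(x)) * dens) * hq

print(lhs, rhs)   # agree up to Monte Carlo error and O(t) bias
```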

Example 5.2. (The generator of a Lévy process)
Let $(X(t))_{t \ge 0}$ be a Lévy process on $\mathbb{R}^n$ with characteristic triple $(A, \mu, \gamma)$. Then the generator of $X$ is defined for any $u \in C_0(\mathbb{R}^n)$ by
\[
(\mathcal{A} u)(x) = \frac{1}{2} \sum_{j,k=1}^n A_{jk}\, \frac{\partial^2 u}{\partial x_j \partial x_k}(x) + \sum_{j=1}^n \gamma_j\, \frac{\partial u}{\partial x_j}(x) + \int_{\mathbb{R}^n} \left( u(x+y) - u(x) - \sum_{j=1}^n y_j\, \frac{\partial u}{\partial x_j}(x)\, \mathbf{1}_{\{|y| \le 1\}} \right) \mu(dy).
\]

We now define Feller processes, a class of Markov processes satisfying some additional mild regularity assumptions.

Definition 5.4. (Feller process, [18, Definition 4.1.4]) Let $(T_t)_{t \ge 0}$ be a strongly continuous semigroup of operators on the Banach space $(C_{\infty}(\mathbb{R}^n, \mathbb{R}), \| \cdot \|_{\infty})$ which is positivity preserving, i.e. $u \ge 0$ implies $T_t u \ge 0$. Then $(T_t)_{t \ge 0}$ is called a Feller semigroup.

A Markov process $X$ with transition semigroup $(T_t)_{t \ge 0}$ is a Feller process if $(T_t)_{t \ge 0}$ is a Feller semigroup.

The class of Feller processes includes Lévy processes, Dupire local volatility processes and affine processes in finance. Feller processes may have nonstationary increments, while Lévy processes necessarily have stationary increments.

Recall that the positive maximum principle [12, Section 6.4, Theorem 4] also holds for an elliptic second-order differential operator
\[
L u = \sum_{i,j=1}^n a_{ij}(x)\, u_{x_i x_j} + \sum_{i=1}^n b_i(x)\, u_{x_i} + c(x)\, u.
\]
The connection to semigroups is made by the fact that generators of Feller processes satisfy the same maximum principle.

Proposition 5.1. (Maximum principle, [18, Theorem 4.5.1])
Let $(T_t)_{t \ge 0}$ be a Feller semigroup on $C_{\infty}(\mathbb{R}^n, \mathbb{R})$ with generator $(A, D(A))$, $D(A) \subset C_{\infty}(\mathbb{R}^n, \mathbb{R})$. Then $A$ satisfies the positive maximum principle; that is, for $u \in D(A)$ and $x_0 \in \mathbb{R}^n$, the fact that $u(x_0) = \sup_{x \in \mathbb{R}^n} u(x) \ge 0$ implies that $A u(x_0) \le 0$.

Proof. Suppose that $u \in D(A)$ and that for some $x_0 \in \mathbb{R}^n$ we have $u(x_0) = \sup_{x \in \mathbb{R}^n} u(x) \ge 0$. Since each of the operators $T_t$, $t \ge 0$, is positivity preserving, we find that
\[
(T_t u)(x_0) = (T_t u^+)(x_0) - (T_t u^-)(x_0) \le (T_t u^+)(x_0) \le \| u^+ \|_{\infty} = u(x_0),
\]
which implies
\[
A u(x_0) = \lim_{t \to 0} \frac{T_t u(x_0) - u(x_0)}{t} \le 0. \qquad \square
\]

The fact that generators of Feller semigroups and elliptic operators both satisfy the positive maximum principle suggests a connection between them. Denote by $C_c^{\infty}(\mathbb{R}^n)$ the class of functions on $\mathbb{R}^n$ which are infinitely differentiable and have compact support. Then we recall the following theorem.

Theorem 5.2. ([19]) If $X$ is a continuous Feller process on $[0,T]$ with generator $A$ and $C_c^{\infty}(\mathbb{R}) \subseteq D(A)$, then $A$ is elliptic.

5.2 Pseudo-differential operators

In the preceding section we have seen that if $X$ is a continuous Feller process, its generator $A$ is a second-order elliptic differential operator. As an example, consider an $\mathbb{R}$-valued Lévy process $X$ and its transition operator $T_t$:
\[
T_t f(x) = E[f(X(t)) \mid X(0) = x] = \int_{\mathbb{R}} f(x+y)\, \mu_t(dy).
\]

The second equality holds because of the independence and stationarity of increments; $\mu_t$ is a probability measure. The Fourier transform of $\mu_t$ is
\[
\hat{\mu}_t(u) = \int_{\mathbb{R}} e^{ixu}\, \mu_t(dx) = E[e^{iuX(t)} \mid X(0) = 0] = e^{t\phi(u)},
\]
where $\phi$ is the characteristic exponent of the Lévy process $X$. Using the convolution theorem, we get
\[
\widehat{T_t f}(u) = \widehat{f * \mu_t}(u) = \hat{f}(u)\, \hat{\mu}_t(u) = \hat{f}(u)\, e^{t\phi(u)}.
\]
The inverse Fourier transform gives
\[
T_t f(x) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-iux}\, e^{t\phi(u)}\, \hat{f}(u)\, du.
\]

The generator $A$ of $X$ is then given by
\[
A f(x) = \lim_{t \to 0} \frac{T_t f(x) - f(x)}{t} = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-iux}\, \lim_{t \to 0} \frac{e^{t\phi(u)} - 1}{t}\, \hat{f}(u)\, du = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{-iux}\, \phi(u)\, \hat{f}(u)\, du.
\]
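This Fourier representation can be implemented directly. The following Python sketch (not in the original text) applies the generator of Brownian motion as a Fourier multiplier, using NumPy's FFT conventions and the characteristic exponent $\phi(u) = -u^2/2$, and compares the result with $\frac{1}{2} f''$ for the illustrative choice $f(x) = e^{-x^2}$.

```python
# The generator of Brownian motion as a Fourier multiplier:
# A f = F^{-1}[ phi(u) * F f ] with phi(u) = -u^2/2, which equals f''/2.
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
f = np.exp(-x ** 2)

u = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # Fourier variable
phi = -u ** 2 / 2.0                            # characteristic exponent of BM
Af = np.real(np.fft.ifft(phi * np.fft.fft(f)))

exact = 0.5 * (4.0 * x ** 2 - 2.0) * np.exp(-x ** 2)   # (1/2) f'' exactly
print(np.max(np.abs(Af - exact)))                      # ~ machine precision
```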

In general, operators with such a representation are called pseudo-differential operators. We give a formal definition, beginning with the notion of a continuous negative definite function.

Definition 5.5. (Continuous negative definite functions) A function $\phi : \mathbb{R}^n \to \mathbb{C}$ is continuous negative definite if it is continuous and if, for any choice of $k \in \mathbb{N}$ and vectors $\xi^1, \ldots, \xi^k \in \mathbb{R}^n$, the matrix
\[
\left( \phi(\xi^j) + \overline{\phi(\xi^l)} - \phi(\xi^j - \xi^l) \right)_{j,l=1,\ldots,k}
\]
is positive Hermitian, i.e. for all $c_1, \ldots, c_k \in \mathbb{C}$,
\[
\sum_{j,l=1}^k \left( \phi(\xi^j) + \overline{\phi(\xi^l)} - \phi(\xi^j - \xi^l) \right) c_j \bar{c}_l \ge 0.
\]

Some typical examples of continuous negative definite functions are:

∙ $|\xi|^{\alpha}$ for $\alpha \in (0,2]$;

∙ $1 - e^{-is\xi}$ for $s \ge 0$;

∙ $\log(1 + \xi^2) + i \arctan \xi$.

We now recall

Definition 5.6. (Pseudo-differential operators, [18])
Let $(A, D(A))$ be an operator with $C_0^{\infty}(\mathbb{R}^n) \subset D(A)$. Then $A$ is a pseudo-differential operator if
\[
(A u)(x) = -a(x,D)\, u(x) = -(2\pi)^{-n/2} \int_{\mathbb{R}^n} e^{ix\cdot\xi}\, a(x,\xi)\, \hat{u}(\xi)\, d\xi \qquad (5.4)
\]
for $u \in C_0^{\infty}(\mathbb{R}^n)$, where
\[
\hat{u}(\xi) = (2\pi)^{-n/2} \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\, u(x)\, dx
\]
is the Fourier transform of $u$. The symbol $a(x,\xi) : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{C}$ is locally bounded in $(x,\xi)$, $a(\cdot,\xi)$ is measurable for every $\xi$, and $a(x,\cdot)$ is a continuous negative definite function for every $x$.

Time-inhomogeneous processes

Definition 5.7. The family of generators of $X$ is defined by
\[
A_s u = \lim_{h \to 0^+} \frac{T_{s-h,s}\, u - u}{h}, \qquad (5.5)
\]
for all $s > 0$ and $u \in D(A_s)$, where $D(A_s)$ denotes the domain of $A_s$.

Definition 5.8. Let $(A_s)_{s > 0}$ be a family of operators with $C_0^{\infty}(\mathbb{R}^n) \subset D(A_s)$. Then $A_s$ is a pseudo-differential operator for all $s > 0$ if
\[
(A_s u)(x) = -a(s,x,D)\, u(x) = -(2\pi)^{-n/2} \int_{\mathbb{R}^n} e^{ix\cdot\xi}\, a(s,x,\xi)\, \hat{u}(\xi)\, d\xi \qquad (5.6)
\]
for $u \in C_0^{\infty}(\mathbb{R}^n)$, where $\hat{u}(\xi) = (2\pi)^{-n/2} \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\, u(x)\, dx$ is the Fourier transform of $u$. The symbol $a(s,x,\xi) : \mathbb{R}^+ \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{C}$ is locally bounded in $(x,\xi)$, $a(s,\cdot,\xi)$ is measurable for every $(s,\xi)$, and $a(s,x,\cdot)$ is a continuous negative definite function for every $(s,x)$.

5.3 Construction of a Markov process using a symbol

Time-homogeneous case

In [17], Hoh showed that if a symbol of a pseudo-differential operator satisfies certain conditions, then one can construct a unique Markov process whose generator has that symbol.

Let $(E,d)$ be a separable metric space and let $D_E$ denote the space of all càdlàg paths with values in $E$,
\[
D_E := \{ \omega : [0,\infty) \to E \mid \omega \text{ is right continuous and } \lim_{s \uparrow t} \omega(s) \text{ exists for all } t > 0 \},
\]
and let $M(D_E)$ denote the set of probability measures on $D_E$.

Definition 5.9. ([17]) Let $A$ be a linear operator with domain $D(A)$. A probability measure $P \in M(D_E)$ is called a solution of the martingale problem for the operator $A$ if for every $\phi \in D(A)$ the process
\[
\phi(X(t)) - \int_0^t A\phi(X(s))\, ds
\]
is a martingale under $P$ with respect to the filtration $\mathbb{F} = (\mathcal{F}_t)_{t \ge 0}$.

If for every probability measure $\mu \in M(E)$ there is a unique solution $P_{\mu}$ of the martingale problem for $A$ with initial distribution
\[
P_{\mu} \circ X(0)^{-1} = \mu,
\]
then the martingale problem for $A$ is called well posed.

We assume that $\phi : \mathbb{R}^n \to \mathbb{R}$ is a continuous negative definite reference function satisfying $\phi(\xi) \ge c |\xi|^r$ for $|\xi| \ge 1$, for some $r > 0$ and $c > 0$. Define $\lambda(\xi) = (1 + \phi(\xi))^{1/2}$, $\xi \in \mathbb{R}^n$.

Theorem 5.3. ([17]) Let $a : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ be a continuous negative definite symbol such that $a(x,0) = 0$ for all $x \in \mathbb{R}^n$. Let $M$ be the smallest integer such that
\[
M > \left( \frac{n}{r} \vee 2 \right) + n,
\]
and suppose that

(i) $a(x,\xi)$ is $(2M + 1 - n)$ times continuously differentiable with respect to $x$ and, for all multi-indices $\beta$ with $|\beta| \le 2M + 1 - n$,
\[
|\partial_x^{\beta} a(x,\xi)| \le c\, \lambda^2(\xi) \qquad (5.7)
\]
holds for all $x \in \mathbb{R}^n$, $\xi \in \mathbb{R}^n$;

(ii) for some strictly positive function $\gamma : \mathbb{R}^n \to \mathbb{R}^+$,
\[
a(x,\xi) \ge \gamma(x) \cdot \lambda^2(\xi) \qquad (5.8)
\]
for all $x \in \mathbb{R}^n$, $|\xi| \ge 1$.

Then the martingale problem for the operator $-a(x,D)$ with domain $C_0^{\infty}(\mathbb{R}^n)$ is well posed.

We illustrate Theorem 5.3 with some examples. Let the continuous negative definite reference function be $\phi(\xi) = \xi^2$; then $\lambda(\xi) = (1 + \xi^2)^{1/2}$.

Examples:

1. For Brownian motion $W(t)$, the symbol of its generator is
\[
a(x,\xi) = \frac{1}{2} \xi^2.
\]

2. For geometric Brownian motion $X(t)$, $dX(t) = \mu X(t)\, dt + \sigma X(t)\, dW(t)$, the symbol of its generator is
\[
a(x,\xi) = \frac{1}{2} \sigma^2 x^2 \xi^2 - i \mu x \xi.
\]

3. For the CIR process $X(t)$, $dX(t) = \theta(\mu - X(t))\, dt - \sigma \sqrt{X(t)}\, dW(t)$, the symbol of its generator is
\[
a(x,\xi) = \frac{1}{2} \sigma^2 x \xi^2 - i\, \theta(\mu - x)\, \xi.
\]

4. For a Lévy process $X(t)$ with Lévy triplet $(\sigma, \nu, \gamma)$, the symbol of its generator is
\[
a(x,\xi) = -i\gamma\xi + \frac{1}{2} \sigma^2 \xi^2 + \int_{\mathbb{R}} \left( 1 - e^{iy\xi} + \frac{iy\xi}{1 + y^2} \right) \nu(dy).
\]

5. For a continuous diffusion process $X(t)$, $dX(t) = \sigma(X(t))\, dW(t)$, where $\sigma(\cdot)$ is a given function of the state, the symbol of its generator is
\[
a(x,\xi) = \frac{1}{2} \sigma(x)^2 \xi^2.
\]

In order to satisfy (5.7) with $M = 3$ and $n = 1$, $\sigma(x)$ needs to be $6$ times continuously differentiable with respect to $x$. If $\gamma(x) = \frac{1}{4} \sigma(x)^2$, then (5.8) holds for all $|\xi| \ge 1$.
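The symbols listed above can be read off mechanically from $a(x,\xi) = -e^{-i\xi x}\, (A\, e^{i\xi \cdot})(x)$. The following sympy sketch (not in the original text) performs this computation for the geometric Brownian motion generator $Af = \mu x f' + \frac{1}{2}\sigma^2 x^2 f''$ of example 2.

```python
# Reading off the symbol of the GBM generator from its action on e^{i xi x}.
import sympy as sp

x, xi, mu, sigma = sp.symbols("x xi mu sigma", real=True)
e = sp.exp(sp.I * xi * x)

# A f = mu*x*f' + (1/2)*sigma^2*x^2*f'', applied to f = e^{i xi x}
Af = mu * x * sp.diff(e, x) + sp.Rational(1, 2) * sigma ** 2 * x ** 2 * sp.diff(e, x, 2)
symbol = sp.simplify(-sp.exp(-sp.I * xi * x) * Af)
print(symbol)   # sigma**2 * x**2 * xi**2 / 2 - I*mu*x*xi, as in example 2
```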

Time-inhomogeneous case

Böttcher showed that one can construct a time-inhomogeneous Markov process using pseudo-differential operators.

Definition 5.10. ([6]) A continuous negative definite function $\phi : \mathbb{R}^n \to \mathbb{R}$ belongs to the class $\Lambda$ if for all $\alpha \in \mathbb{N}_0^n$ there exist constants $c_{\alpha} \ge 0$ such that
\[
|\partial_{\xi}^{\alpha} (1 + \phi(\xi))| \le c_{\alpha}\, (1 + \phi(\xi))^{(2 - (|\alpha| \wedge 2))/2}.
\]

Definition 5.11. ([6]) Let $m \in \mathbb{R}$, $j \in \{0,1,2\}$ and $\phi \in \Lambda$. A function $a : \mathbb{R}^+ \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{C}$ is in the class $S_j^{\phi,m}$ if for all $\alpha, \beta \in \mathbb{N}_0^n$ and for any compact $K \subset \mathbb{R}^+$ there are constants $c_{\alpha,\beta,K} \ge 0$ such that
\[
|\partial_{\xi}^{\alpha} \partial_x^{\beta} a(t,x,\xi)| \le c_{\alpha,\beta,K}\, (1 + \phi(\xi))^{(m - (|\alpha| \wedge j))/2}
\]
holds for all $t \in K$, $x \in \mathbb{R}^n$ and $\xi \in \mathbb{R}^n$. Here $m$ is called the order of the symbol. Furthermore, the notation $a \in (t-s)\, S_j^{\phi,m}$ is used if, for $s \le t$,
\[
|\partial_{\xi}^{\alpha} \partial_x^{\beta} a(s,t,x,\xi)| \le (t-s)\, c_{\alpha,\beta}\, (1 + \phi(\xi))^{(m - (|\alpha| \wedge j))/2}
\]
holds, where the constants $c_{\alpha,\beta}$ are independent of $s$ and $t$.

We now recall

Theorem 5.4. ([6, Theorem 4.2])
Suppose $\phi \in \Lambda$ is a continuous negative definite function and there exist $c_0 > 0$, $r_0 > 0$ such that $\phi(\xi) \ge c_0 |\xi|^{r_0}$ for all $|\xi|$ large. If a pseudo-differential operator with symbol $a(s,x,\xi)$ satisfies the following conditions:

∙ $a(\cdot,x,\xi)$ is a continuous function for all $x \in \mathbb{R}^n$, $\xi \in \mathbb{R}^n$;

∙ $a(t,x,\cdot)$ is continuous negative definite for all $t \in \mathbb{R}^+$, $x \in \mathbb{R}^n$;

∙ $a(t,x,0) = 0$ holds for all $t$ and $x$;

∙ $a \in S_2^{\phi,m}$ is elliptic, that is, there exist $R > 0$, $c > 0$ such that for any $x$ and $|\xi| \ge R$, $\Re\, a(t,x,\xi) \ge c\, (1 + \phi(\xi))^{m/2}$ holds uniformly in $t$ on compact sets;

then $a(s,x,\xi)$ defines a family of operators $T_{s,t}$ on $C_{\infty}$ such that

(a) $T_{s,t}$ is a linear operator;

(b) $T_{s,t}$ is a contraction;

(c) $T_{s,t}$ is positivity preserving;

(d) $T_{t,t} = I$;

(e) $T_{s,t} = T_{s,\tau}\, T_{\tau,t}$ for $s \le \tau \le t$;

(f) $T_{s,t} 1 = 1$, that is, $\lim_{k \to \infty} T_{s,t} u_k = 1$ holds for $u_k \in C_{\infty}$ with $u_k \to 1$.

Theorem 5.4 has the following important corollary.

Corollary 5.1. ([6, Corollary 4.3]) The operator family given in Theorem 5.4 defines a Markov process.

Examples:

1. For the Dupire local volatility model, $dS_t = \mu S_t\, dt + \sigma(t, S_t)\, S_t\, dW(t)$,
\[
a(t,x,\xi) = \frac{1}{2} \sigma(t,x)^2 x^2 \xi^2 - i \mu x \xi.
\]

2. For a time-inhomogeneous Markov process in $\mathbb{R}^n$ with generator
\[
A_t u(t,x) = \frac{1}{2} \sum_{i,j} a_{ij}(t,x)\, \frac{\partial^2 u(t,x)}{\partial x_i \partial x_j} + \sum_{i=1}^n b_i(t,x)\, \frac{\partial u(t,x)}{\partial x_i} + c(t,x)\, u(t,x) + \int_{\mathbb{R}^n \setminus \{0\}} \left( u(t,x+y) - u(t,x) - \frac{\langle y, \nabla u(t,x) \rangle}{1 + |y|^2} \right) \mu(t,x,dy),
\]
the symbol is
\[
a(t,x,\xi) = \frac{1}{2}\, \xi^T (a_{ij}(t,x))_{ij}\, \xi - i\, b(t,x) \cdot \xi - c(t,x) + \int_{\mathbb{R}^n \setminus \{0\}} \left( 1 - e^{iy\cdot\xi} + \frac{iy\cdot\xi}{1 + |y|^2} \right) \mu(t,x,dy).
\]

Future research plan

Suppose that $X(t)$ is a semimartingale with decomposition (4.2),
\[
X(t) = X(0) + \int_0^t \beta(s)\, ds + \int_0^t \sigma(s)\, dW(s) + \int_0^t \int_{\|y\| \le 1} y\, \tilde{M}(ds\, dy) + \int_0^t \int_{\|y\| > 1} y\, M(ds\, dy).
\]
In Section 4.2 we derived the generator $A$ of $X$,
\[
A f(x) = \sum_{i=1}^n b_i(x,t)\, \frac{\partial f}{\partial x_i} + \frac{1}{2} \sum_{i,j=1}^n a_{ij}(x,t)\, \frac{\partial^2 f}{\partial x_i \partial x_j} + \int_{\mathbb{R}^n} \left( f(x+y) - f(x) - \mathbf{1}_{\{\|y\| \le 1\}} \sum_{i=1}^n y_i\, \frac{\partial f}{\partial x_i} \right) \nu(t,y,x)\, dy.
\]
We plan to show that $A$ is a pseudo-differential operator with a symbol $a(t,x,\xi)$ as in Definition 5.8. We would then like to determine conditions on the coefficients $a_{ij}$, $b_i$, $c$ and $\nu$ in the generator $A$ under which $a(t,x,\xi)$ satisfies the conditions of Theorem 5.4. With such a result, we could conclude that there exists a unique Markov process $Y(t)$ which mimics $X(t)$ for all $0 \le t \le T$.

Chapter 6

Application of Brunick’s Theorem to Barrier Options

6.1 Introduction

In practice, the local volatility model is used to price both vanilla options with path-independent payoffs and complex options with path-dependent payoffs. However, the price of an option with a path-dependent payoff cannot be uniquely determined by the marginal distributions of the asset price process. For example, the price of a barrier option requires knowledge of the joint probability distribution of the asset price process and its running maximum. Brunick [8] generalizes Gyöngy's result under a weaker assumption so that the prices of path-dependent options can be determined exactly.

Definition 6.1. ([8, Definition 2.1])
Let $Y : \mathbb{R}_+ \times C(\mathbb{R}_+, \mathbb{R}^d) \to \mathbb{R}^n$ be a predictable process. We say that the path-functional $Y$ is a measurably updatable statistic if there exists a measurable function
\[
\phi : \mathbb{R}_+ \times \mathbb{R}^n \times \mathbb{R}_+ \times C(\mathbb{R}_+, \mathbb{R}^d) \to \mathbb{R}^n
\]
such that $Y(t+u, x) = \phi(t, Y(t,x); u, \Delta(t,x))$ for all $x \in C(\mathbb{R}_+, \mathbb{R}^d)$, where the map $\Delta : \mathbb{R}_+ \times C(\mathbb{R}_+, \mathbb{R}^d) \to C(\mathbb{R}_+, \mathbb{R}^d)$ is defined by $\Delta(t,x)(u) = x(t+u) - x(t)$.

A measurably updatable statistic is a functional whose path-dependence can be summed up by a single vector in $\mathbb{R}^n$. For instance, $Y^1(t,x) = x(t)$ is an updatable statistic, as $Y^1(t+u, x) = Y^1(t,x) + \Delta(t,x)(u)$. Letting $x^*(t) = \sup_{u \le t} x(u)$, we see that $Y^2(t,x) = [x(t), x^*(t)] \in \mathbb{R}^2$ is an updatable statistic, as we can write
\[
Y^2(t+u, x) = \left[ x(t) + \Delta(t,x)(u),\ \max\left\{ x^*(t),\ \sup_{v \in [0,u]} \left( x(t) + \Delta(t,x)(v) \right) \right\} \right].
\]
We first recall Brunick's extension of Gyöngy's theorem.

Theorem 6.1. ([8, Theorem 2.11]) Let $W$ be an $r$-dimensional Brownian motion, and let
\[
dX(t) = \mu_t\, dt + \sigma(t)\, dW(t) \qquad (6.1)
\]
be a $d$-dimensional Itô process, where $\mu$ is a left-continuous $d$-dimensional adapted process and $\sigma$ is a left-continuous $d \times r$-dimensional adapted process with
\[
E\left[ \int_0^t |\mu_s| + |\sigma(s)\sigma(s)^T|\, ds \right] < \infty \quad \text{for all } t.
\]
Also suppose that $Y : \mathbb{R}_+ \times C(\mathbb{R}_+, \mathbb{R}^d) \to \mathbb{R}^n$ is a measurably updatable statistic such that the maps $x \mapsto Y(t,x)$ are continuous for each fixed $t$. Then there exist deterministic measurable functions $\hat{\mu}$ and $\hat{\sigma}$ such that
\[
\hat{\mu}(t, Y(t,X)) = E[\mu_t \mid Y(t,X)] \quad \text{a.s. for Lebesgue-a.e. } t,
\]
\[
\hat{\sigma}\hat{\sigma}^T(t, Y(t,X)) = E[\sigma(t)\sigma(t)^T \mid Y(t,X)] \quad \text{a.s. for Lebesgue-a.e. } t,
\]
and there exists a weak solution $\hat{X}$ to the SDE
\[
d\hat{X}_t = \hat{\mu}(t, Y(t,\hat{X}))\, dt + \hat{\sigma}(t, Y(t,\hat{X}))\, d\hat{W}(t)
\]
such that $\mathcal{L}(Y(t,\hat{X})) = \mathcal{L}(Y(t,X))$ for all $t \in \mathbb{R}_+$, where $\mathcal{L}$ denotes the law of a random variable and $\hat{W}(t)$ is another Brownian motion.

Brunick's theorem is more general than Gyöngy's. First, the requirements on $\mu$ and $\sigma$ are weaker than the boundedness and uniform ellipticity assumed in Gyöngy's theorem. Second, this theorem gives the existence of a weak solution which preserves the one-dimensional marginal distributions of a path-dependent functional.

6.2 Application to up-and-out calls

We can apply Brunick's theorem to explicitly construct a two-dimensional Markov process which mimics the joint marginal distributions of an Itô process and its running maximum (or minimum).

For example, consider an up-and-out European call. The payoff of this option is
\[
(S(T) - K)^+\, \mathbf{1}_{\{S^*(T) \le B\}},
\]
where $S^*(T) = \sup_{0 \le u \le T} S(u)$ is the running maximum of $S(t)$. The stock price $S(t)$ follows the SDE
\[
dS(t) = r S(t)\, dt + \sigma(t)\, S(t)\, dW(t),
\]
where $\sigma(t)$ is any adapted stochastic process, $r$ is the constant interest rate and $\{W(t)\}_{t \ge 0}$ is a Brownian motion on a probability space $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{Q})$. Our goal is to find an analytic formula for
\[
\hat{\sigma}(t,x,y) := E[\sigma(t) \mid S(t) = x,\ S^*(t) = y].
\]

We apply this result to the problem of pricing single-barrier options. Theorem 6.1 tells us that there exists a measurable, deterministic function $\hat{\sigma}$ such that
\[
\hat{\sigma}(t,x,y) = E[\sigma(t) \mid S(t) = x,\ S^*(t) = y],
\]
and there exists a weak solution to the SDE
\[
d\hat{S}(t) = r \hat{S}(t)\, dt + \hat{\sigma}(t, \hat{S}(t), \hat{S}^*(t))\, \hat{S}(t)\, d\hat{W}(t),
\]
where $(\hat{S}(t), \hat{S}^*(t))$ has the same joint distribution as $(S(t), S^*(t))$.
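The conditional expectation defining $\hat{\sigma}$ can be estimated by simulation. The following Python sketch (not part of the original text) simulates paths of $S$ with a stochastic volatility driven by an exponential Ornstein-Uhlenbeck factor, tracks the running maximum, and estimates $\hat{\sigma}(T,x,y)$ by binning; the volatility model and all parameters are illustrative assumptions.

```python
# Monte Carlo sketch of sigma_hat(T,x,y) = E[sigma(T) | S(T)=x, S*(T)=y],
# estimated by binning simulated paths.  All model choices are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T, r = 200_000, 200, 1.0, 0.02
dt = T / n_steps

S = np.full(n_paths, 100.0)
Smax = S.copy()
v = np.zeros(n_paths)                          # OU factor driving sigma(t)

for _ in range(n_steps):
    sigma = 0.2 * np.exp(v)                    # sigma(t): an adapted process
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    dB = np.sqrt(dt) * rng.standard_normal(n_paths)
    S = S * (1.0 + r * dt + sigma * dW)        # Euler step for dS = rS dt + sigma S dW
    Smax = np.maximum(Smax, S)
    v += -v * dt + 0.3 * dB                    # dv = -v dt + 0.3 dB

x, y, tol = 100.0, 110.0, 2.0                  # bin around (S(T), S*(T)) = (x, y)
mask = (np.abs(S - x) < tol) & (np.abs(Smax - y) < tol)
print(mask.sum(), "paths in bin; sigma_hat(T,x,y) ~", sigma[mask].mean())
```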

The risk-neutral pricing formula tells us that the price of an up-and-out call option with strike $K$, maturity $T$ and barrier $B$ is given by
\[
C(T,K,B) = e^{-rT} \int_{\mathbb{R}} \int_{\mathbb{R}} (S(T) - K)^+\, \mathbf{1}_{\{S^*(T) \le B\}}\, \phi(T, S(T), S^*(T))\, dS(T)\, dS^*(T) = e^{-rT} \int_{-\infty}^{B} \int_K^{\infty} (S(T) - K)\, \phi(T, S(T), S^*(T))\, dS(T)\, dS^*(T),
\]
where $\phi(T, S(T), S^*(T))$ is the joint density of $(S(T), S^*(T))$. We differentiate both sides with respect to $B$ and get
\[
\frac{\partial}{\partial B} C(T,K,B) = e^{-rT} \int_K^{\infty} (S(T) - K)\, \phi(T, S(T), B)\, dS(T). \qquad (6.2)
\]
We differentiate both sides with respect to $K$ and get
\[
\frac{\partial^2}{\partial B \partial K} C(T,K,B) = -e^{-rT} \int_K^{+\infty} \phi(T, S(T), B)\, dS(T). \qquad (6.3)
\]

Again, we can differentiate both sides with respect to $K$ and get
\[
\frac{\partial^3}{\partial B \partial K^2} C(T,K,B) = -e^{-rT}\, \phi(T,K,B). \qquad (6.4)
\]
Hence, the joint transition density function can be represented as a derivative of the up-and-out call option price.
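In practice, (6.4) can be implemented with finite differences on a surface of up-and-out call prices. The sketch below (not in the original text) assumes a hypothetical pricing routine `uoc_price(T, K, B)` supplied by the reader; the step sizes are illustrative.

```python
# Recovering the joint density phi(T,K,B) from up-and-out call prices via
# (6.4): phi = -exp(rT) * d^3 C / (dB dK^2), by central finite differences.
# `uoc_price(T, K, B)` is a hypothetical user-supplied pricing function.
import numpy as np

def joint_density(uoc_price, T, K, B, r, dK=0.5, dB=0.5):
    d2C_dK2 = lambda b: (uoc_price(T, K + dK, b)
                         - 2.0 * uoc_price(T, K, b)
                         + uoc_price(T, K - dK, b)) / dK ** 2
    d3C = (d2C_dK2(B + dB) - d2C_dK2(B - dB)) / (2.0 * dB)
    return -np.exp(r * T) * d3C
```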

Next we apply the Itô-Tanaka formula [24, Theorem 43.3] to the function $(\hat{S}(t) - K)^+$ and multiply by $\mathbf{1}_{\{\hat{S}^*(T) \le B\}}$; we get
\[
(\hat{S}(T) - K)^+\, \mathbf{1}_{\{\hat{S}^*(T) \le B\}} = \left[ (\hat{S}_0 - K)^+ + \int_0^T \mathbf{1}_{[K,\infty)}(\hat{S}_u)\, d\hat{S}_u + \frac{1}{2} L_T^K \right] \mathbf{1}_{\{\hat{S}^*(T) \le B\}} \qquad (6.5)
\]
\[
= \left[ (\hat{S}_0 - K)^+ + \int_0^T \mathbf{1}_{[K,\infty)}(\hat{S}_u)\, \hat{S}_u \left( r\, du + \hat{\sigma}(u, \hat{S}_u, \hat{S}_u^*)\, d\hat{W}(u) \right) + \frac{1}{2} L_T^K \right] \mathbf{1}_{\{\hat{S}^*(T) \le B\}}, \qquad (6.6)
\]
where $L_T^K$ is the local time of $\hat{S}$ at $K$. By the definition of local time,
\[
L_T^K = \lim_{\varepsilon \downarrow 0} \frac{1}{2\varepsilon} \int_0^T \mathbf{1}_{(K-\varepsilon, K+\varepsilon)}(\hat{S}_u)\, d\langle \hat{S}, \hat{S} \rangle_u = \lim_{\varepsilon \downarrow 0} \frac{1}{2\varepsilon} \int_0^T \mathbf{1}_{(K-\varepsilon, K+\varepsilon)}(\hat{S}_u)\, \hat{\sigma}^2(u, \hat{S}_u, \hat{S}_u^*)\, \hat{S}_u^2\, du.
\]
Taking expectations of (6.5), we get
\[
\begin{aligned}
C(T,K,B) &= e^{-rT} E\left[ (S(T) - K)^+\, \mathbf{1}_{\{S^*(T) \le B\}} \right] = e^{-rT} E\left[ (\hat{S}(T) - K)^+\, \mathbf{1}_{\{\hat{S}^*(T) \le B\}} \right] \\
&= e^{-rT} \Big\{ (\hat{S}_0 - K)^+\, E\left[ \mathbf{1}_{\{\hat{S}^*(T) \le B\}} \right] + E\left[ \int_0^T r\, \mathbf{1}_{[K,\infty)}(\hat{S}_u)\, \hat{S}_u\, \mathbf{1}_{\{\hat{S}^*(T) \le B\}}\, du \right] \\
&\qquad + \frac{1}{2} E\left[ \lim_{\varepsilon \downarrow 0} \frac{1}{2\varepsilon} \int_0^T \mathbf{1}_{(K-\varepsilon, K+\varepsilon)}(\hat{S}_u)\, \hat{\sigma}^2(u, \hat{S}_u, \hat{S}_u^*)\, \hat{S}_u^2\, du\, \mathbf{1}_{\{\hat{S}^*(T) \le B\}} \right] \Big\} \\
&= e^{-rT} \Big\{ (\hat{S}_0 - K)^+\, P(\hat{S}^*(T) \le B) + r \int_0^B \int_K^{\infty} \int_0^T \hat{S}_u\, \phi(u, \hat{S}_u, \hat{S}_u^*)\, du\, d\hat{S}_u\, d\hat{S}_u^* \\
&\qquad + \frac{1}{2} \int_0^B \int_0^T \hat{\sigma}^2(u, K, \hat{S}_u^*)\, K^2\, \phi(u, K, \hat{S}_u^*)\, du\, d\hat{S}_u^* \Big\},
\end{aligned}
\]

where $\phi(u, \hat{S}_u, \hat{S}_u^*)$ is the joint density of $(\hat{S}_u, \hat{S}_u^*)$. Taking derivatives with respect to $B$ gives
\[
\frac{\partial}{\partial B} C(T,K,B) = e^{-rT} \left\{ (\hat{S}_0 - K)^+\, \phi_{\hat{S}^*(T)}(B) + r \int_K^{\infty} \int_0^T \hat{S}_u\, \phi(u, \hat{S}_u, B)\, du\, d\hat{S}_u + \frac{1}{2} \int_0^T \hat{\sigma}^2(u,K,B)\, K^2\, \phi(u,K,B)\, du \right\},
\]
and by taking derivatives with respect to $T$ we obtain
\[
\begin{aligned}
\frac{\partial^2}{\partial B \partial T} C(T,K,B) &= -r\, \frac{\partial}{\partial B} C(T,K,B) + e^{-rT} \Big\{ (\hat{S}_0 - K)^+\, \frac{\partial}{\partial T} \phi_{\hat{S}^*(T)}(B) \\
&\qquad + r \int_K^{\infty} \hat{S}(T)\, \phi(T, \hat{S}(T), B)\, d\hat{S}(T) + \frac{1}{2}\, \hat{\sigma}^2(T,K,B)\, K^2\, \phi(T,K,B) \Big\} \\
&= -r\, \frac{\partial}{\partial B} C(T,K,B) + e^{-rT} \Big\{ (\hat{S}_0 - K)^+\, \frac{\partial}{\partial T} \phi_{\hat{S}^*(T)}(B) + \frac{1}{2}\, \hat{\sigma}^2(T,K,B)\, K^2\, \phi(T,K,B) \\
&\qquad + r \int_K^{\infty} K\, \phi(T, \hat{S}(T), B)\, d\hat{S}(T) + r \int_K^{\infty} (\hat{S}(T) - K)\, \phi(T, \hat{S}(T), B)\, d\hat{S}(T) \Big\},
\end{aligned}
\]

where $\phi_{\hat{S}^*(T)}(B)$ is the marginal density of $\hat{S}^*(T)$. Combining (6.2), (6.3) and (6.4) yields
\[
rK\, \frac{\partial^2}{\partial B \partial K} C(T,K,B) + \frac{\partial^2}{\partial B \partial T} C(T,K,B) = e^{-rT}\, (\hat{S}_0 - K)^+\, \frac{\partial}{\partial T} \phi_{\hat{S}^*(T)}(B) - \frac{1}{2}\, \hat{\sigma}^2(T,K,B)\, K^2\, \frac{\partial^3}{\partial B \partial K^2} C(T,K,B),
\]
where $\hat{\sigma}(T,K,B)$ is the local volatility of the up-and-out call and $C(T,K,B)$ is the price of an up-and-out call.

Consider the density $\phi_{\hat{S}^*(T)}(B)$:
\[
\phi_{\hat{S}^*(T)}(B) = \int_0^{\infty} \phi(T, \hat{S}(T), B)\, d\hat{S}(T) = \int_0^K \phi(T, \hat{S}(T), B)\, d\hat{S}(T) + \int_K^{\infty} \phi(T, \hat{S}(T), B)\, d\hat{S}(T).
\]
From (6.3), we have
\[
\int_K^{+\infty} \phi(T, S(T), B)\, dS(T) = -e^{rT}\, \frac{\partial^2}{\partial B \partial K} C(T,K,B).
\]
We can derive a similar formula for $\int_0^K \phi(T, \hat{S}(T), B)\, d\hat{S}(T)$ by differentiating the up-and-out put option price.

The price of an up-and-out put with strike $K$, maturity $T$ and barrier $B$ is given by
\[
P(T,K,B) = e^{-rT} \int_{\mathbb{R}} \int_{\mathbb{R}} (K - S(T))^+\, \mathbf{1}_{\{S^*(T) \le B\}}\, \phi(T, S(T), S^*(T))\, dS(T)\, dS^*(T) = e^{-rT} \int_{-\infty}^{B} \int_0^K (K - S(T))\, \phi(T, S(T), S^*(T))\, dS(T)\, dS^*(T). \qquad (6.7)
\]
We differentiate (6.7) with respect to $B$ to get
\[
\frac{\partial}{\partial B} P(T,K,B) = e^{-rT} \int_0^K (K - S(T))\, \phi(T, S(T), B)\, dS(T).
\]
Differentiating the preceding expression with respect to $K$ yields
\[
\frac{\partial^2}{\partial B \partial K} P(T,K,B) = e^{-rT} \int_0^K \phi(T, S(T), B)\, dS(T).
\]

Therefore,
\[
\phi_{\hat{S}^*(T)}(B) = e^{rT} \left\{ \frac{\partial^2}{\partial B \partial K} P(T,K,B) - \frac{\partial^2}{\partial B \partial K} C(T,K,B) \right\},
\]
and
\[
\frac{\partial}{\partial T} \phi_{\hat{S}^*(T)}(B) = r\, \phi_{\hat{S}^*(T)}(B) + e^{rT} \left\{ \frac{\partial^3}{\partial B \partial K \partial T} P(T,K,B) - \frac{\partial^3}{\partial B \partial K \partial T} C(T,K,B) \right\}.
\]

Substituting the preceding expression into the identity above, we obtain
\[
\begin{aligned}
rK\, \frac{\partial^2}{\partial B \partial K} C(T,K,B) + \frac{\partial^2}{\partial B \partial T} C(T,K,B)
&= (\hat{S}_0 - K)^+ \left\{ \frac{\partial^3}{\partial B \partial K \partial T} P(T,K,B) - \frac{\partial^3}{\partial B \partial K \partial T} C(T,K,B) + e^{-rT}\, r\, \phi_{\hat{S}^*(T)}(B) \right\} \\
&\qquad - \frac{1}{2}\, \hat{\sigma}^2(T,K,B)\, K^2\, \frac{\partial^3}{\partial B \partial K^2} C(T,K,B).
\end{aligned}
\]

Solving this equation for $\hat{\sigma}(T,K,B)$ yields
\[
\hat{\sigma}^2(T,K,B) = \frac{1}{\frac{1}{2} K^2\, \frac{\partial^3 C}{\partial B \partial K^2}} \left\{ (\hat{S}_0 - K)^+ \left[ \frac{\partial^3 P}{\partial B \partial K \partial T} - \frac{\partial^3 C}{\partial B \partial K \partial T} + r \left( \frac{\partial^2 P}{\partial B \partial K} - \frac{\partial^2 C}{\partial B \partial K} \right) \right] - rK\, \frac{\partial^2 C}{\partial B \partial K} - \frac{\partial^2 C}{\partial B \partial T} \right\}.
\]

This is an analogue of the Dupire local volatility formula for barrier options.

References

[1] M. Atlan. Localizing volatilities. arXiv:math/0604316v1, 2006.

[2] D. Bates. Jumps and stochastic volatility: the exchange rate processes implicit in Deutschemark options. Rev. Fin. Studies, 9:69–107, 1996.

[3] A. Bentata and R. Cont. Forward equations for option prices in semimartingale models. arXiv:1001.1380v2.

[4] A. Bentata and R. Cont. Mimicking the marginal distributions of a semimartingale. arXiv:0910.3992v2.

[5] F. Black and M. Scholes. The pricing of options and corporate liabilities. The Journal of Political Economy, 81(3):637–654, 1973.

[6] B. Böttcher. Construction of time-inhomogeneous Markov processes via evolution equations using pseudo-differential operators. J. Lond. Math. Soc. (2), 78(3):605–621, 2008.

[7] D.T. Breeden and R.H. Litzenberger. Prices of state-contingent claims implicit in option prices. The Journal of Business, 51(4):621–651, 1978.

[8] G. Brunick. A weak existence result with application to the financial engineer's calibration problem. Ph.D. dissertation, Carnegie Mellon University, 2008.

[9] E. Derman and I. Kani. Riding on a smile. Risk, 7(2):32–39, 1994.

[10] B. Dupire. Pricing with a smile. Risk, 7(1):18–20, 1994.

[11] B. Dupire. A unified theory of volatility. Discussion paper, Paribas Capital Markets, 1996; reprinted in "Derivative Pricing: The Classic Collection", edited by P. Carr, Risk Books, London, 2004.

[12] L.C. Evans. Partial differential equations, volume 19 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, second edition, 2010.

[13] P. Feehan. Notes on existence and uniqueness for parabolic differential equations on Euclidean space.

[14] A. Friedman. Partial differential equations of parabolic type. Prentice-Hall, Englewood Cliffs, N.J., 1964.

[15] I. Gyöngy. Mimicking the one-dimensional marginal distributions of processes having an Itô differential. Probab. Theory Relat. Fields, 71(4):501–516, 1986.

[16] S. Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. Review of Financial Studies, 6(2):327–343, 1993.

[17] W. Hoh. A symbolic calculus for pseudo-differential operators generating Feller semigroups. Osaka J. Math., 35(4):789–820, 1998.

[18] N. Jacob. Pseudo differential operators and Markov processes. Vol. I: Fourier analysis and semigroups. Imperial College Press, London, 2001.

[19] N. Jacob. Pseudo differential operators and Markov processes. Vol. II: Generators and their potential theory. Imperial College Press, London, 2002.

[20] N.V. Krylov. On the relation between differential operators of second order and the solutions of stochastic differential equations. Steklov Seminar, Publication Division, New York, pages 214–229, 1984.

[21] R. Merton. Option pricing when underlying stock returns are discontinuous. J. Financial Economics, 3:125–144, 1976.

[22] P. Hagan, D. Kumar, A. Lesniewski and D. Woodward. Managing smile risk. Wilmott Magazine, pages 84–108, 2002.

[23] P.E. Protter. Stochastic integration and differential equations, volume 21 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, second edition, 2005. Version 2.1, corrected third printing.

[24] L.C.G. Rogers and D. Williams. Diffusions, Markov processes, and martingales. Vol. 2: Itô calculus. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 2000. Reprint of the second (1994) edition.

[25] M. Shi. Local intensity and its dynamics in multi-name credit derivatives modeling. Ph.D. dissertation, Rutgers University, 2010.

[26] R.E. Showalter. Monotone operators in Banach space and nonlinear partial differential equations, volume 49 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1997.

[27] D.W. Stroock. Diffusion processes associated with Lévy generators. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 32(3):209–244, 1975.

Vita

Jin Wang

2004-2010 Ph.D. in Mathematics, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA

1999-2003 B.Sc. in Mathematics, Tsinghua University, Beijing, China

2005-2010 Teaching assistant, Department of Mathematics, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA