Florida State University Libraries

Electronic Theses, Treatises and Dissertations The Graduate School

2006 Option Pricing with Selfsimilar Additive Processes Mack L. (Mack Laws) Galloway

Follow this and additional works at the FSU Digital Library. For more information, please contact [email protected] THE FLORIDA STATE UNIVERSITY

COLLEGE OF ARTS AND SCIENCES

OPTION PRICING WITH SELFSIMILAR ADDITIVE PROCESSES

By

MACK L. GALLOWAY

A Dissertation submitted to the Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Degree Awarded: Fall Semester, 2006 The members of the Committee approve the Dissertation of Mack L. Galloway defended on August 7, 2006.

Craig Nolder Professor Directing Dissertation

Fred Huffer Outside Committee Member

Paul Beaumont Committee Member

Bettye Anne Case Committee Member

John R. Quine Committee Member

The Office of Graduate Studies has verified and approved the above named committee members.

ii ACKNOWLEDGEMENTS

I thank the following friends and faculty for their help and support: Craig Nolder, Warren Beaves, Bettye Anne Case, Jennifer Mann, Fred Huffer, Paul Beaumont, Alec Kercheval, David Kopriva, John Quine, Max Gunzberger, James Doran, and Giles Levy. I also thank my parents, Harriett and Louie Galloway, III for their love and support throughout my life.

iii TABLE OF CONTENTS

List of Tables ...... vi

List of Figures ...... vii

Abstract ...... ix

1. Introduction ...... 1

2. Review of Stochastic Processes ...... 5 2.1 Additive Processes ...... 5 2.2 L´evy Stochastic Volatility Models ...... 9

3. Selfsimilar Additive Processes ...... 11 3.1 Selfdecomposability and Selfsimilarity ...... 11 3.2 Strictly Stable L´evy Processes ...... 12 3.3 Sato Processes ...... 13

4. Subordinated, Additive Processes ...... 17 4.1 Supporting Theorems and Definition ...... 17 4.2 The Generalized Subordination Theorem ...... 20

5. Construction of Additive Models ...... 23 5.1 Derivation of Characteristic Functions ...... 23 5.2 Expressions for the Central Moments ...... 28 5.3 Properties of the Increments: ...... 30

6. Option Pricing and Calibration ...... 32 6.1 Definition of the Price Process ...... 32 6.2 Numerical Pricing Method for European Options ...... 35 6.3 Calibration ...... 37

7. Empirical Study ...... 39 7.1 Models of Interest ...... 39 7.2 Data Specifications ...... 39 7.3 Algorithmic Specifications ...... 40 7.4 Results ...... 41

iv 8. Conclusion ...... 51 8.1 Future Research ...... 51

9. Plots ...... 53

A. Truncation and Sampling Error: Lee’s Theorems ...... 67 A.1 Bounds on the Discounted Characteristic Function ...... 68

B. Plots of Difference between Calculated Pricing Error and Upper Bound on Theoretical Pricing Error ...... 73

C. Additive Models: Discounted Characteristic Function Bounds for the European Call Option ...... 78 C.1 Variance Gamma Family ...... 78 C.2 Normal Inverse Gaussian Family ...... 80

D. VG-OU-Γ Discounted Characteristic Function Bounds for the European Call Option ...... 83

E. NIG-OU-Γ Discounted Characteristic Function Bounds for the European Call Option ...... 91

REFERENCES ...... 101

BIOGRAPHICAL SKETCH ...... 103

v LIST OF TABLES

7.1 Classification of Calibrated Models ...... 39

7.2 Upper Bound on the Time-averaged Theoretical Average Pricing Error (Jan. - Dec. 2005) ...... 45

7.3 Upper Bound on the Time-averaged Theoretical Root Mean Square Pricing Error (Jan. - Dec. 2005) ...... 45

7.4 Intra-family Comparison: Twelve Quote Date Time-averaged Ratio of the Upper Bounds on the Theoretical Average Pricing Errors (Jan. - Dec. 2005) 46

7.5 Intra-family Comparison: Twelve Quote Date Time-averaged Ratio of the Upper Bounds on the Theoretical Root Mean Square Pricing Errors (Jan. - Dec. 2005) ...... 46

7.6 Intra-class Comparison: Twelve Quote Date Time-averaged Ratio of the Upper Bounds on the Theoretical Average Pricing Errors (Jan. - Dec. 2005) 47

7.7 Intra-class Comparison: Twelve Quote Date Time-averaged Ratio of the Upper Bounds on the Theoretical Root Mean Square Pricing Errors (Jan. - Dec. 2005) ...... 47 7.8 Variance Gamma Models: Twelve Quote Date Time-average of the Risk- Neutral Parameters (Jan. - Dec. 2005): “( )” denotes the sample standard deviation of the mean...... 49

7.9 Normal Inverse Gaussian Models: Twelve Quote Date Time-average of the Risk-Neutral Parameters (Jan. - Dec. 2005): “( )” denotes the sample standard deviation of the mean...... 49

7.10 Normal H-Inverse Gaussian Model: Three Quote Date Time-average of the Risk-Neutral Parameters (Sep. - Nov. 2005): “( )” denotes the sample standard deviation of the mean...... 50

vi LIST OF FIGURES

9.1 Upper Bounds on Theoretical Average Pricing Error [in-sample, out-of-the- money options] ...... 54

9.2 Upper Bounds on Theoretical Average Pricing Error [out-of-sample, out-of- the-money options] ...... 55

9.3 Upper Bounds on Theoretical Root Mean Square Error [in-sample, out-of-the- money options] ...... 56

9.4 Upper Bounds on Theoretical Root Mean Square Error [out-of-sample, out- of-the-money options] ...... 57

9.5 Market and Model Option Prices vs. Moneyness (K/S) for Variance Gamma family : April 2005 quote date, out-of-the-money options ...... 58

9.6 Market and Model Option Prices vs. Moneyness (K/S) for Variance Gamma family : August 2005 quote date, out-of-the-money options ...... 59

9.7 Market and Model Option Prices vs. Moneyness (K/S) for Variance Gamma family : December 2005 quote date, out-of-the-money options ...... 60

9.8 Market and Model Option Prices vs. Moneyness (K/S) for Normal Inverse Gaussian family : April 2005 quote date, out-of-the-money options ...... 61

9.9 Market and Model Option Prices vs. Moneyness (K/S) for Normal Inverse Gaussian family : August 2005 quote date, out-of-the-money options .... 62

9.10 Market and Model Option Prices vs. Moneyness (K/S) for Normal Inverse Gaussian family : December 2005 quote date, out-of-the-money options ... 63

9.11 Risk-neutral Mean, Variance, , for April 2005 quote date [VG models - top row][NIG models - bottom row] ...... 64

9.12 Risk-neutral Mean, Variance, Skewness, Kurtosis for August 2005 quote date [VG models - top row][NIG models - bottom row] ...... 65

9.13 Risk-neutral Mean, Variance, Skewness, Kurtosis for December 2005 quote date [VG models - top row][NIG models - bottom row] ...... 66

vii B.1 In-sample Average Pricing Error Differences: (Upper Bound of Theoretical Value - Calculated Value using out-of-the-money options) ...... 74

B.2 Out-of-sample Average Pricing Error Differences: (Upper Bound of Theoret- ical Value - Calculated Value using out-of-the-money options) ...... 75

B.3 In-sample Root Mean Square Error Differences: (Upper Bound of Theoretical Value - Calculated Value using out-of-the-money options) ...... 76

B.4 Out-of-sample Root Mean Square Error Differences: (Upper Bound of Theo- retical Value - Calculated Value using out-of-the-money options) ...... 77

viii ABSTRACT

The use of time-inhomogeneous additive models in option pricing has gained attention in recent years due to their potential to adequately price options across both strike and maturity with relatively few parameters. In this thesis two such classes of models based on the selfsimilar additive processes of Sato are developed. One class of models consists of the risk-neutral exponentials of a selfsimilar , while the other consists of the risk-neutral exponentials of a time-changed by an independent, increasing, selfsimilar additive process. Examples from each class are constructed in which the time one distributions are Variance Gamma or Normal Inverse Gaussian distributed. Pricing errors are assessed for the case of Standard and Poor’s 500 index options from the year 2005. Both sets of time-inhomogeneous additive models show dramatic improvement in pricing error over their associated L´evy processes. Furthermore, with regard to the average of the pricing errors over the quote dates studied, the selfsimilar Normal Inverse Gaussian model yields a mean pricing error significantly less than that implied by the bid-ask spreads of the options, and also significantly less than that given by its associated, less parsimonious, L´evy stochastic volatility model.

ix CHAPTER 1

Introduction

The purpose of this thesis is to study the ability of certain non-L´evy, additive processes to price options on the Standard and Poor’s 500 index (SPX) across both strike and maturity. As noted by Carr and Wu [10] and Eberlein and Kluge [15], L´evy models are incapable of fitting surfaces of equity options across both strike and maturity. Although stochastic volatility models with jumps have success in pricing both across strike and maturity [12], the use of such models likely incurs the cost of computing with a significantly larger number of parameters than in the L´evy case. By using additive models which allow for nonstationarity of increments, one obtains greater flexibility in pricing across maturity than with L´evy processes. Furthermore, the additive processes developed in this thesis require only one more parameter than a L´evy model of like time one distribution. This additional parameter is known as the Hurst exponent. One class of models studied consists of price processes whose logarithm is given by a H-selfsimilar additive process of Sato. These models were previously studied by Nolder in 2003 [23]. Recently Carr, Geman, Madan, and Yor [7] demonstrated that exponential H-selfsimilar additive models could be calibrated with great success using options on the Standard and Poor’s 500 index, as well as for individual equity options. One reason for their success is due to the fact that these processes may be constructed from any of a multitude of selfdecomposable distributions already used in finance. In contrast, selfsimilar L´evy models consist solely of Brownian motion and Paretian stable processes [21]. The selfsimilar additive models studied in this thesis are those whose time one distributions coincide with those of the Variance Gamma and Normal Inverse Gaussian processes, denoted by “HssVG” and “HssNIG,” respectively. The other class of models studied consists of those processes whose logarithm is given by

1 a L´evy process time-changed by an independent, increasing H-selfsimilar additive process. Using a specific subset of the class of subordinated additive processes, Madan and Yor [20] showed that for a prespecified set of zero mean probability densities, q , one could construct { t} a Markov martingale process, M for which the one-dimensional distributions were those { t} corresponding to the set q . Furthermore, such a Markov martingale is represented as a { t} time-changed by an independent, increasing, additive process. This idea is crucial in the construction of mean-corrected exponential L´evy stochastic volatility models. Here, subordination of a L´evy process is studied with examples of a two parameter Brownian motion time-changed by an independent, increasing, selfsimilar additive process. When the directing process used for the time-change is a selfsimilar (resp. selfsimilar Inverse ) the resulting process is denoted by “VHG” (resp. “NHIG”). In order that this research may be viewed in historical context, a brief history of pricing with additive processes now follows. The use of additive processes to price options dates prior to 1900 when Louis Bachelier, a student of Henri Poincar´e, began the development of the theory of Brownian motion [14]. In his dissertation, “Theorie de la Speculation,” changes in the price of a stock were modelled 1 with what is now known as a zero mean, arithmetic Brownian motion, a 2 -selfsimilar additive process [21]. Later, Paul L´evy studied and further developed the theory associated with a class of stochastic processes which included Brownian motion as a special case, the stable processes [18]. As in the case of Brownian motion, the increments of these stable processes were both independent and stationary. Unlike the associated one-dimensional distributions of Brownian motion, the non-Gaussian stable distributions permitted existence of, at most, the first [25]. The forthcoming financial application came in 1963 when Mandelbrot coined the phrase “stable Paretian” to denote the set of non-Gaussian stable distributions. By modelling the change in the logarithm of the closing spot price of cotton [22] as a Paretian stable distributed random variable, he was able to demonstrate that the tail behavior of his model was consistent with that corresponding to the observed change in the logarithm of the spot price [21]. A special case of this stable Paretian distribution would later be revived by Carr and Wu for the purpose of option pricing [10]. In January of 1973 Peter Clark used Bochner’s idea of subordination to introduce a new pricing model, the Lognormal-Normal process. Given σ1,σ2,µ > 0, the time one

2 2 distribution of the process being time–changed was Normal(0,σ2) distributed while the 2 directing process (subordinator) had a time one distribution which was Lognormal(µ, σ1) distributed. Clark attributed the nonuniformity of the rate of change in cotton futures prices to the nonuniformity of the rate at which information became available to traders. In his argument, fast trading days were caused by the influx of new information inconsistent with previously held expectations, while slow trading days were caused from a lack of new information [11]. This idea would prove useful in the subsequent development of the L´evy stochastic volatility models [6]. During that same year Fisher Black and Myron Scholes produced their closed-form expression for the price of a European call option, where the underlying price process was modelled by a geometric Brownian motion [3]. Another closed-form option pricing formula appeared in 1998 in which the logarithm of the underlier was distributed according to a non-stable L´evy process, known as the . In this model, a Brownian motion was time-changed by an independent Gamma process, yielding a model which could better accommodate the leptokurtic and negatively skewed risk-neutral distributions of the logarithm of the return data [19]. Al- though this process lacked selfsimilarity, its increments were independent and stationary, with notably, finite moments. Other well-known L´evy processes include: Normal Inverse Gaussian, Meixner, Generalized Hyperbolic [26], the five parameter CGMY process with drift term included [5], and the more general 6 parameter KoBoL process with drift term included [4]. As noted by Carr and Wu, L´evy models have associated model implied volatility surfaces as a function of maturity, T, and moneyness, ln(K/S) , which tend to a constant value √T as maturity gets large. This was contrary to the observed behavior associated with SPX options where, as maturity increased, the market implied volatility tended to a function which appeared to be independent of maturity alone and decreasing with moneyness [10]. One answer to the problem of fitting option prices across both strike and maturity came in 2003 when Carr, Madan, Geman, and Yor showed that one may subordinate a L´evy process by the time integral of a mean-reverting, positive process in order to obtain a process which was quite flexible across both strike and maturity [6]. Since closed-form characteristic functions exist where the rate of time change is given by the solution to the Cox-Ingersoll Ross or Ornstein-Uhlenbeck stochastic differential equation with one-dimensional Gamma or Inverse Gausssian distribution, one could use Fourier transform methods to quickly price options under such models.

3 Another answer to this problem came with the simpler Finite Moment Log , a stable Paretian process in which the skewness parameter, β, was set to an extreme value of 1. With β fixed and the drift term set so that the martingale condition was − satisfied, the FMLS model was able to fit SPX option data relatively well with only two free parameters [10]. In an effort to find models which had more flexibility than the Finite Moment Log Stable process and were more parsimonious than the stochastic volatility models, researchers pursued the development of time-inhomogeneous additive processes. In 2003 Nolder proposed the modelling of price processes with the H-selfsimilar additive processes of Sato [23]. Eberlein and Kluge in 2004 used Euro-caplets and swaptions to calibrate a model which incorporated a piecewise stationary additive process [15]. More recently in 2005 Carr, Madan, Geman, and Yor used spot options on the Standard and Poor’s 500 index to calibrate exponential H-selfsimilar additive processes of Sato [7]. In this thesis the set of additive models developed and studied consists of the L´evy models, the H-selfsimilar additive models, and the models formed by time-changing a L´evy process by an independent, increasing H-selfsimilar additive process. Examples from each of these classes are constructed in which the models share like time one distribution. These additive models, along with their related L´evy stochastic volatility models, are calibrated using SPX option data in order to assess their relative pricing performance. Chapter two consists of introductory definitions and theorems regarding additive processes along with a review of L´evy stochastic volatility models. Chapter three explains Sato’s construction of H- selfsimilar additive processes and gives the L´evy characteristics for these processes. Chapter four generalizes Sato’s subordination theorem to the case of additive directing processes. It also contains the needed generalized Laplace transform for the additive case, along with the L´evy spot characteristics of the subordinated additive processes. Chapter five contains the derivations of characteristic functions for both the Variance Gamma and Normal Inverse Gaussian families of additive processes. Also included are the first four central moments of the one-dimensional distributions, as well as those of the increments. The formulation of the risk-neutral price processes and discussions of the numerical option pricing method and calibration problem are found in chapter six. Chapter seven contains the data filters, market parameter values used for pricing, and the calibration results. Chapter eight concludes and suggests future research endeavors.

4 CHAPTER 2

Review of Stochastic Processes

2.1 Additive Processes

The simplest of all distributions is the Dirac measure, or trivial distribution, defined as follows.

Definition 1 [1] Dirac measure, δ : for x Rd, x ∈

1 if x A d δx (A)= ∈ A R 0 if x / A ∀ ∈B  ∈   Extending this definition to the case of random variables yields the constant random variables.

d Definition 2 [25] A random variable, X on R , is trivial or constant if its distribution PX is trivial. Furthermore X is non-zero if P = δ . X 6 0

Further extending to the case of processes, one obtains the deterministic processes as defined below.

Definition 3 [25]A Rd-valued X is trivial or deterministic if for every { t} t 0, X is trivial. X is non-zero if t 0 with P = δ . ≥ t { t} ∃ ≥ Xt 6 0

Of course, one might not choose to model financial assets with such processes. However, as will be shown, these definitions are needed for technical reasons in the later proofs. In order to define more complicated distributions than the trivial ones, a function is sought which may determine them uniquely, the characteristic function.

5 Definition 4 [25] Characteristic function: The characteristic function µ (z) of a probability measure µ on Rd is defined by b i z,x d µ (z)= e h iµ (dx) , z R . Rd ∈ Z Proposition 5 [25] If µ (z)=b µ (z) for z Rd then µ = µ . 1 2 ∈ 1 2

Furthermore, one mayb determineb convergence in distribution by pointwise convergence of a sequence of characteristic functions as shown below.

Proposition 6 [1] (Glivenko) If µ (z) µ (z) for every z, then µ µ. n → n →

Much of the theory which willb be presentedb is based on a special subset of distributions, the infinitely divisible distributions. In order to define these distributions, one must first define the convolution of two distributions.

Definition 7 [25] Convolution of two distributions: The convolution of two distributions µ and µ on Rd, denoted by µ = µ µ , is a distribution defined by 1 2 1∗ 2

d µ (B)= 1B (x + y) µ1 (dx) µ2 (dy) , B R . Rd Rd ∈B Z Z ×  Definition 8 [25] Infinite Divisibility: A probability measure µ on Rd is infinitely divisible d if, for any positive integer n, there is a probability measure µn on R such that

n µ =(µn) .

The characteristic function of an infinitelyb b divisible distribution has a unique representa- tion in terms of three key quantities: a symmetric nonnegative-definite matrix, a Rd-valued parameter, and a positive measure on Rd which satisfies a certain integrability condition. The following theorem shows how these quantities are related to an infinitely divisible distribution.

Proposition 9 [25] (i) ( L´evy-Khintchine) Let D = x : x 1 . If µ is an infinitely { | |≤ } divisible distribution on Rd, then

1 i z,x d µ (z) = exp z, Az + i γ,z + e h i 1 i z, x 1D (x) ν (dx) , z R −2 h i h i Rd − − h i ∈  Z   b 6 where A is a symmetric nonnegative-definite d d matrix, γ Rd, and ν is a measure on × ∈ Rd satisfying

ν ( 0 ) = 0, { } x 2 1 ν (dx) < . Rd | | ∧ ∞ Z  (ii) Conversely, if A is a symmetric nonnegative-definite d d matrix, ν is a measure on × Rd with ν ( 0 ) = 0 and satisfies the previously stated integrability condition, and γ Rd, { } ∈ then there exists an infinitely divisible distribution, µ, whose characteristic function is given by the previously defined, µ (z) in (i). (iii) The representation of µ (z) in (i) by (A, ν, γ) is unique. b A few infinitely divisible distributionsb include the following: the dirac measure, the expo- nential, the L´evy, the student-t, and the Cauchy distributions. Some familiar distributions which are not infinitely divisible are the uniform and binomial distributions [25]. A rather special subset of infinitely divisible distributions are the set of selfdecomposable distributions. Such distributions will play a key role in the formulation of the characteristic functions of the distributions for the processes introduced in later chapters.

Definition 10 [25] An additive process in law is a Rd-valued stochastic process X : t 0 { t ≥ } such that

For any choice of n 1 and 0 t < t < < t , random variables X , X • ≥ ≤ 0 1 ··· n t0 t1 − X , X X ,..., X X − are independent. t0 t2 − t1 tn − tn 1 X =0 a.s. • 0 X is stochastically continuous. •

With the extra technical requirement of c`adl`ag sample paths, the definition of an additive process is obtained below.

Definition 11 [25] An additive process on a measurable space (Ω, ) is an additive process F in law in which there is a Ω with P [Ω ]=1 such that, for every ω Ω , X (ω) is 0 ∈ F 0 ∈ 0 t right-continuous in t 0 and has left limits in t> 0. ≥

7 The following proposition by Sato makes explicit the relationship between infinitely divisible distributions and additive processes in law.

Proposition 12 [25] Let X be a Rd-valued additive process in law and, for 0 s t< { t} ≤ ≤ , let µ be the distribution of X X . Then µ is infinitely divisible and ∞ s,t t − s s,t µ µ = µ for 0 s t u< s,t ∗ t,u s,u ≤ ≤ ≤ ∞ µ = δ for 0 s< s,s 0 ≤ ∞ µ δ as t s s,t → 0 ↑ µ δ as t s. s,t → 0 ↓

Thus, the increments of an additive process in law have distributions which are infinitely divisible. Additive processes in law share other special properties. Suppose one wishes to determine if an additive process X is identical in law with another additive process Y . { t} { t} It is unnecessary to check all of the finite-dimensional distributions of both processes. One d only needs to check equality of the one-dimensional distributions, namely Xt = Yt for each t> 0. This property is made explicit in the following theorem.

′ Rd Proposition 13 [25] If Xt and Xt are -valued additive processes in law such that ′ { } ′ X =d X for any t 0, then X and X are identical in law. t t ≥ { t}  t  It is noteworthy to mention of a special subset of these additive processes: the L´evy processes.

Definition 14 [25] A L´evyprocess is an additive process where the distribution of X X s+t − s does not depend on s for any s, t 0. ≥

Alternatively, a L´evy process (resp. L´evy process in law) is an additive process (resp. additive process in law) with stationary increments. Another important property of an additive process in law is illustrated in the following proposition.

Proposition 15 [25] Let X be a Rd-valued L´evyor additive process in law. Then it has a modification which is, respectively, a L´evyor additive process.

8 2.2 L´evy Stochastic Volatility Models

L´evy stochastic volatility models may be viewed as an extension of the work of Peter Clark who used Bochner’s idea of subordination in order to model asset prices with a L´evy process time-changed by an independent, increasing L´evy process, namely the Lognormal-Normal process. Instead of subordinating a L´evy process by another L´evy process and thus preserving time-homogeneity and independent increments, Carr, Madan, Geman, and Yor proposed time-changing a L´evy process by a process which could be represented as the time-integral of mean reverting, positive process. The property of mean reversion associated with the the stochastic rate of time change caused the sample paths of the subordinated process to exhibit volatility persistence, otherwise known as ”volatility clustering.” [6]. A couple of popular models used to define the rate of time change of the market are the Cox-Ingersoll Ross (CIR) and the Orstein-Uhlenbeck processes (OU). The CIR process, y is given by { t}

1 dy = κ (η y ) dt + λy 2 dW t − t t t with W as a standard Brownian motion, η as the long-run rate of time change, κ as the rate of mean reversion, and λ as a constant of proportionality with respect to the standard deviation of the solution, y . The OU process y , is defined by letting z be an increasing t { t} { t} L´evy process with dy = λy dt + dz , t − t λt where λ is a mean reversion parameter and y > 0. Another name for z is the 0 { t} “Background Driving L´evy Process,” BDLP [26], [12]. Let X be a Rd-valued L´evy process and y be a solution to one of the previously { t} { t} defined stochastic differential equations. The corresponding L´evy stochastic volatility process, Z is defined by { t}

t Z =d X where Y = y ds { t} { Yt } t s Z0 [26]. The following theorem of Bardorff-Nielsen shows how to choose a BDLP in order to obtain an OU process with a prespecified stationary distribution.

9 Proposition 16 [2] Let c (ξ) be a differentiable and selfdecomposable characteristic function (λ) and let κ (ξ) = log c (ξ) . Suppose that ξκ′ (ξ) is continuous at zero and let φ (ξ)= λξκ′ (ξ) for a λ > 0. Then exp φ(λ) (ξ) is an infinitely divisible characteristic function. Further- (λ) (λ) more, letting z (t) be the homogeneous L´evyprocess for which z (1) has characteristic function exp φ(λ) (ξ) and defining the process y by dy = λy dt + dz(λ) we have that t t − t t a stationary version of yt exists, with one-dimensional marginal distribution given by the characteristic function c (ξ) .

Two popular choices for the stationary distributions are the Gamma and Inverse Gaussian distributions. The characteristic function for the Gamma case is given below [26].

1 λt ϕGamma OU (ξ; t, λ, a, b, y0) = exp iy0λ− 1 e− u − { − λa b + blog  itu iu λb b iλ 1 (1 e λt) u − } −   − − − −   Let X be a Rd-valued L´evy process, independent of Y = t y ds, where y is a { t} t 0 s { s} stationary OU process with Gamma(a, b) distribution. Let −→θ Rn, be the n-parameter X1 ∈R vector which determines the distribution of X1. It follows that the characteristic function,

ϕXYt , of XYt is given by

ϕX ξ; −→θ X , λ, a, b, y0 = ϕY i log ϕX ξ; −→θ X ; λ, a, b, y0 . Yt 1 t − 1 1        If X is a Variance Gamma process with characteristic function given by { t} GM Ct ϕ (ξ; C, G, M)= VGt GM +(M G) iξ + ξ2  −  and X is time-changed by the time integral of an OU-Gamma process, then the { t} characteristic function of the time t distribution of the subordinated process is given by:

ϕX (ξ; C, G, M, λ, a, b, y0) = ϕY ( i log ϕX (ξ; C, G, M); λ, a, b, y0) Yt t − 1 = ϕ ( iCy log ϕ (ξ;1, G, M); λ, a, by , 1) Yt − 0 X1 0

= ϕXYt (ξ; Cy0, G, M, λ, a, by0, 1) . This representation is possible due to the fact that the characteristic functions of the time t distributions of X and Y could be factored by C and y , respectively. This is helpful, { t} { t} 0 as it reduces the dimensionality of the domain over which one must minimize a prescribed distance between a set of model and market prices.

10 CHAPTER 3

Selfsimilar Additive Processes

3.1 Selfdecomposability and Selfsimilarity

In this section the definitions of selfdecomposability and selfsimilarity are presented, along with several theorems which will be useful in the construction of time-inhomogeneous selfsimilar processes.

Definition 17 Selfdecomposable [25]: A probability measure µ on Rd is selfdecomposable if, d for any b> 1, there is a probability measure ρb on R such that z µ (z)= µ ρ (z) . b b   Not only is a selfdecomposableb distributionb b infinitely divisible, as mentioned in the previous chapter, but the measure, ρ, in the previous definition is also infinitely divisible.

Proposition 18 [25, p.93] If µ is selfdecomposable, then µ is infinitely divisible and, for any b> 1, ρb is uniquely determined and infinitely divisible.

Furthermore, when µ is a selfdecomposable distribution on R, its characteristic function has a very useful representation, as shown in the following proposition.

Proposition 19 [25] A probability measure µ on R is selfdecomposable if and only if

1 2 izx k (x) µ (z) = exp Az + iγz + e 1 izx1[ 1,1] (x) dx −2 R − − − x  Z | |   2 k(x) b R ∞ where A 0, γ , k (x) 0, x 1 x dx < , and k (x) is increasing on ≥ ∈ ≥ −∞ | | ∧ | | ∞ ( , 0) and decreasing on (0, ) . −∞ ∞ R 

11 Definition 20 [25] A stochastic process X is selfsimilar if, for any a > 0, there is a { t} b> 0 such that X t 0 =d bX t 0 . (3.1) { at | ≥ } { t | ≥ } Of particular importance to the construction of these processes is the representation of the number b in the previous definition.

Proposition 21 [25, p.73] Let X be a Rd-valued, selfsimilar, stochastically continuous, { t} and non-trivial process with X0 = constant a.s. Denote by Γ the set of all a > 0 such that there are b> 0 such that X t 0 =d bX t 0 . It follows that there is a H > 0 such { at | ≥ } { t | ≥ } that, for every a Γ, b = aH . ∈ This H is known as the Hurst exponent, the exponent of the non-trivial selfsimilar process. In the case of time-homogeneous additive processes, the reciprocal of H defines an important parameter for the class of selfsimilar L´evy processes, the strictly stable processes. The value 1 H is known as the index of the non-trivial strictly stable process [25]. 3.2 Strictly Stable L´evy Processes

Before studying the strictly stable processes, their corresponding distributions are first defined.

Definition 22 [25] Strictly Stable Distribution: A probability measure, µ on Rd is strictly stable if for any a> 0, there is a b> 0 such that

µ (z)a = µ (bz) .

Furthermore, a L´evyprocess, X, is strictlyb stableb if the distribution of X1 is strictly stable.

Interestingly, the L´evy processes whose time one distributions are strictly stable coincide with the selfsimilar L´evy processes, as stated in the following.

Proposition 23 [25, p.71] A L´evyprocess is strictly stable iff it is selfsimilar.

Remark 24 The more difficult forward direction of the proof of the previous proposition makes use of Proposition 13 in Chapter 2. In fact, selfsimilarity follows directly upon application of this proposition.

12 It is also true that strictly stable processes have selfdecomposable distributions, as stated in the next proposition. Proposition 25 [25, p.91] Any strictly stable distribution on Rd is selfdecomposable.

Remark 26 As shown by Sato, this theorem follows from the definition of an α-strictly stable distribution, µ, where

a 1 µ (z) = µ a α z , z R,α> 0. ∀ ∈   By fixing b > 1, defining a = bα, and making the appropriate change of variables, ξ 1 ξ b b 7→ b for ξ R the proof is obtained. If 0 0 for all ξ R. It follows that for all a> 0 one may satisfy the definition | | ∈ of selfdecomposability by defining a residual measure, ρb, by b a 1 1 − ρ (ξ)= µ ξ . b b    As will be shown in the next section, this relationship between selfsimilarity and b b selfdecomposability extends beyond the L´evy case. 3.3 Sato Processes

The properties of selfsimilarity and selfdecomposability are shared not only with the strictly stable processes but also with a larger subset of additive processes which are not necessarily time-homogeneous. They were recently named after Ken-Iti Sato, who proved existence of these processes in the following theorem. Proposition 27 [25, p.99] If µ is a non-trivial selfdecomposable distribution on Rd, then, for any H > 0, there exists, uniquely in law, a non-trivial H selfsimilar additive process − X : t 0 such that P = µ. { t ≥ } X1 The following proposition is a key element in the proof of the previous proposition. Proposition 28 [25, p.51] If µ :0 s t< is a system of probability measures on { s,t ≤ ≤ ∞} Rd with

µ µ = µ for 0 s t u< , (3.2) s,t ∗ t,u s,u ≤ ≤ ≤ ∞ µ = δ for 0 s< , s,s 0 ≤ ∞ µ δ as s t, s,t → 0 ↑ µ δ as t s. s,t → 0 ↓ 13 then there is an additive process in law X : t 0 such that, for 0 s t< , X X { t ≥ } ≤ ≤ ∞ t − s has the distribution µs,t.

The proof of Proposition 27 may now be explained in the following remark.

H Remark 29 Given a selfdecomposable distribution µ on R, Sato defines µt (z) = µ t z P and X1 = µ. This is a reasonable choice if the corresponding one-dimensional distributions  d b b of an additive process X are µ , since for each t> 0, X = aH X as shown below. { t} { t} at t H µat (z) = µ [at] z = µ tH aH z b b H = µt a z  b  d Since X is additive, equality in distribution follows: X = aH X . In order to show { t} b { at} t that such an additive process X with the chosen set of distributions exists, Sato uses { t}  Proposition 28. Making use of selfdecomposability of µ, it follows that for any b> 1,t> 0

H µt (z) = µ t z 1 = µ tHz ρ tH z . b b b b    b t H b Since the above relationship holds for b = s > 1, it follows that

H s  H H µt (z) = µ t z ρ t H t z t ( s )   h i H  = µs (z) ρ t H t z . b b ( s ) b

H  By defining µs,t by µs,t = ρ t H t z ,bthe followingb convolution equation is obtained. ( s )  µ = µ µ (3.3) b b t s ∗ s,t As it is necessary to define µ for 0 s t, one must define the following quantities: { t} ≤ ≤ µ0, µ0,t, and µt,t. The choices of µ0 = δ0, µ0,t = µt, and µt,t = δ0 are consistent with the convolution equations 3.2 and imply, along with 3.3, the following

µt µu µs,tµt,u = = µs,u for 0 s t u. µs µt ≤ ≤ ≤

b b H Let 0 s t. Since bµtb(z) µ t z band infinite divisibility of µ implies that µ is ≤ ≤ ≡b b continuous in its argument [1, p. 30] and has no zero [25, p.32], then there is left and b b b 14 right continuity of µs,t by 3.3. Now that the required set of transition measures has been produced, Sato lets X be the additive process in law such that for any t > s 0, µ is { t} ≥ s,t the distribution of X X by Proposition 28. By construction, µ = µ = P , so that t − s 0,1 X1 µ (z) = µ aH z implies X =d aH X for all t,a > 0. As mentioned earlier, since X at t at t { at} H and a Xt are both additive processes in law, by Proposition 13 there is identity in law and b b thus selfsimilarity of X .  { t} Interestingly, for a non-trivial selfdecomposable distribution, µ, there is an associated L´evy process, as well as a selfsimilar additive process for which the time one distributions of both processes are given by µ. As Sato points out [25, p.100], these processes are identical in law if and only if H 1 and µ is strictly stable with index, α = 1 . ≥ 2 H Since the increments of an additive process are infinitely divisible, one may derive the corresponding system of L´evy characteristics.

Theorem 30 L´evyCharacteristics of the Selfsimilar Additive Process: Given a non-trivial selfdecomposable distribution, µ, on R, let Y be a R-valued H selfsimilar additive process { t} − on (Ω, , P) with H > 0 and P = µ. If the L´evytriplet corresponding to µ is given F Y1 by (A, ν, γ) where ρ is the L´evydensity of ν, then the generating triplet of Yt is given by

2H AYt = t A H H H t γ + t− [ tH ,tH ] [ 1,1] uρ t− u du 0 1  − − △ −  H H ρ (u) = t− ρht− u , u R R 0  i Yt  ∈ \{ }   Proof. In Sato’s theorem Y is constructed by defining µ (ξ) = µ tH ξ . The { t} Yt representation of the characteristic function of the distribution of Yt in terms of (A, ρ, γ) b b is obtained simply by performing the necessary change of variables.

H µYt (ξ) = µ t ξ

H 2 H ix(tH ξ) H = exp A t ξ + iγ t ξ + e 1 ix t ξ 1[ 1,1] (x) ρ (x) dx b b − R − − −  Z    h  i 2H 2 H iuξ H H = exp At ξ + i γt ξ + e 1 iuξ1[ tH ,tH ] (u) t− ρ t− u du − R − − −  Z       The integral term may now be written as two terms: one consisting of the familiar L´evy-Khintchine integral, and an additional drift term. This additional drift term is the

15 integral over the symmetric difference of [ 1, 1] and tH ,tH , where [ 1, 1] tH ,tH c− − − △ − ≡ [ 1, 1]c tH ,tH [ 1, 1] tH ,tH . Finiteness of this second term is due to the fact − ∩ − ∪ − ∩ −     that µ is selfdecomposable,  by construction.  Since selfdecomposability implies that the L´evy measure is increasing on ( , 0) and decreasing on (0, ) , it follows that ρ is bounded on −∞ ∞ [ 1, 1] tH ,tH . − △ −  

16 CHAPTER 4

Subordinated, Additive Processes

4.1 Supporting Theorems and Definition

A theorem is needed which provides the characteristic function and system of L´evy triplets for a process obtained by time-changing a L´evy process by an independent, increasing additive process. A few useful tools must first be developed in order to proceed with the theorem.

4.1.1 Generalized Laplace Transform

In order to state the subordination theorem of Sato for the case of additive directing processes, the following theorem is needed.

Theorem 31 If X is a R-valued additive process where, for each t 0, the associated ≥ system of generating triplets A ,ν ,γ satisfies the following: A =0, S [0, ), γ0 0, { t t t} t νt ⊂ ∞ t ≥ and sν (ds) < , then the generalized Laplace Transform of the distribution of X is (0,1] t ∞ t givenR by wXt 0 wx E e = exp γt w + (e 1) νt (dx) (4.1) (0, ) −  Z ∞    for each w C, with (ω) 0. ∈ ℜ ≤

Remark 32 The proof to this theorem follows directly from that of the case for L´evy subordinators as given by Sato [25]. As in the L´evycase, this theorem relies on the infinite divisibility of the distributions of the additive process, X . { t}

17 4.1.2 Additivity in law of a L´evy process time-changed by an independent, increasing additive process

Another theorem required to generalize the subordination theorem of Sato establishes that time-changing a L´evy process by an independent, increasing additive process results in a process which is additive in law.

Theorem 33 On (Ω, , P) let X be a Rd-valued L´evy process and Z be a R-valued, F independent, increasing additive process. Define Y (ω) = X (ω) , t 0, ω Ω. It t Zt(ω) ∀ ≥ ∀ ∈ follows that Y is a Rd-valued additive process in law.

a.s a.s Proof. Since Z0 (ω) = X0 = 0, the time zero condition of the subordinated process, Y , easily follows. Independence of the increments of Y is shown within Sato’s proof of independent increments for subordination of a L´evy process by an increasing L´evy process [25, Theorem 30.1]. Finally, stochastic continuity is proven using an argument similar to that given by Applebaum [1, Theorem 1.3.25]. Let a, t, and ε > 0. By stochastic continuity of X, there is a δ > 0 such that P [ X > a] < ε if 0 δ ] < ε if 0 < t s < δ . Define δ = min (δ ,δ ). 2 | t − s| 1 2 | − | 2 1 2 It follows that 0 a] = [ Xu > a] µZt Zs (du) | − | [0, ) | | − Z ∞ P P = [ Xu > a] µZt Zs (du)+ [ Xu > a] µZt Zs (du) [0,δ) | | − [δ, ) | | − Z Z ∞ P sup ( [ Xu > a]) + µZt Zs (du) ≤ 0 u δ | | [δ, ) − ≤ ≤ Z ∞ sup (P [ Xu > a]) + P [Zt Zs >δ] ≤ 0 u δ | | − ε≤ ≤ ε + . ≤ 2 2

4.1.3 A useful system of L´evy measures

Following Sato, a system of L´evy measures is defined in order to simplify the represen- tation of the system of L´evy characteristics for the case of additive directing processes.

18 Definition 34 On (Ω, , P) let X be Rd-valued L´evyprocess and Z be a R-valued, in- F s dependent, increasing additive process. For t > 0 define νt (B) = (0, ) µX1 (B) νZt (ds), ∞ B Rd 0 . ∈B \{ } R

The next two propositions are used to establish that for all t > 0, νt is a L´evy measure. The third proposition is used in the derivation of the L´evy characteristics of the subordinated process in order to express the integral term of the L´evy-Khintchine formula in terms of its standard truncation function, rather than the zero truncation function used for subordinators. Proposition 35 [25, p.198] Let X be a Rd-valued L´evyprocess. For any ε > 0 there is a

C = C (ε) such that, for any t, P [ X > ε] Ct. (4.2) | t| ≤ There are C1, and C2 such that, for any t,

E X 2 ; X 1 C t (4.3) | t| | t|≤ ≤ 1 E[X ; X 1]  C t. (4.4) | t | t|≤ |≤ 2 As done in the proof of Theorem 30.1[25] it will be shown that ν is a system of L´evy { t} measures. Clearly, for each t > 0, ν is a Borel measure with ν 0 = 0. Furthermore, it t t { } satisfies the L´evy integrability condition in the following:

2 2 s x 1 νt (dx) = x 1 µX1 (dx) νZt (ds) R | | ∧ R (0, ) | | ∧ Z Z Z ∞  2  = E Xs ; Xs 1 νZt (ds) (0, ) | | | |≤ Z ∞   + P [ Xs > 1] νZt (ds) by Fubini (0, ) | | Z ∞ 2 E Xs ; Xs 1 νZt (ds)+ νZt (ds) ≤ (0,1) | | | |≤ (1, ) Z Z ∞   + P [ Xs > 1] νZt (ds)+ νZt (ds) (0,1) | | (1, ) Z Z ∞

C1 sνZt (ds)+ νZt (ds) by 4.3 ≤ (0,1) (1, ) Z Z ∞

+C2 sνZt (ds)+ νZt (ds) by 4.2 (0,1) (1, ) Z Z ∞ < . ∞ 19 By definition νt is a L´evy measure for all t> 0. 4.2 The Generalized Subordination Theorem

Using the definition and theorems of the previous section, the extension to Sato’s subordi- nation theorem will now be proven. Notation 36 Let the following quantities be defined for an additive process, X . { t} Ψ (w) = log E ewXt , w C with (w) 0 Xt ∈ ℜ ≤ η (w) = log E eiwXt , w Rd Xt ∈ Theorem 37 On (Ω, , P) let X be a Rd-valued L´evy process with generating triplet F R (AX1 ,γX1 ,νX1 ) and Z be a -valued, increasing, additive process with time t generating 0 0 triplet 0,γZt ,νZt where γZt 0, and (0, ) (1 s) νZt (ds) < . Define the exponent of the ≥ ∞ ∧ ∞ generalized Laplace transform of Zt byR

0 ξx ΨZt (ξ)= γZt ξ + e 1 νZt (dx) , (ξ) 0. (0, ) − ℜ ≤ Z ∞ If X and Z are independent and Y is defined by Y (ω)= X (ω) for all t 0 and ω Ω, t Zt(ω) ≥ ∈ then Y is a Rd-valued additive process in law with the following properties:

iξYt E e = exp ΨZt ηX1(ξ) and

 0   AYt = γZt AX1 (4.5)

0 s γYt = γZt γX1 + xµX1 (dx) νZt (ds) (4.6) (0, ) x 1 Z ∞ Z| |≤ 0 s νYt (B) = γZt νX1 (B)+ µX1 (B) νZt (ds) . (4.7) (0, ) Z ∞ Proof. By the previous theorem Y is a Rd-valued additive process. Following Sato, its characteristic function is given by

iξYt iξXs E e = E e µZt (ds) by independence of X and Z (0, ) Z ∞     ηXs (ξ) = e µZt (ds) by infinite divisibility of µXs (0, ) Z ∞

sηX1 (ξ) = e µZt (ds) by time homogeneity of Xt (0, ) { } Z ∞ η (ξ) Z = E e( X1 ) t = exph Ψ η i since (η (ξ)) 0 µ (ξ) 1. Zt X1(ξ) ℜ X1 ≤ ⇐| X1 |≤  20 b The L´evy characteristics are proven as follows. Define D = x : x < 1, x Rd . Using the | | ∈ previous result, the logarithm of the characteristic function of the subordinated process is given by the following.

iξYt log E e = ΨZt ηX1(ξ)

0 η (ξ) s    [ X1 ] = γZt ηX1 (ξ)+ e 1 νZt (ds) (0, ) − Z ∞ h i 0 i ξ,x = γZt ηX1 (ξ)+ e h iµXs (dx) 1 νZt (ds) (0, ) R − Z ∞ Z  0 i ξ,x = γZt ηX1 (ξ)+ e h i 1 i x, ξ 1D (x) µXs (dx) νZt (ds) (0, ) R − − h i Z ∞ Z  

+ i x, ξ µXs (dx) νZt (ds) (0, ) D h i Z ∞ Z 

since xµXs (dx) νZt (ds) is finite by 4.4 (0, ) D Z ∞ Z

0 i ξ,x = γZt ξ, AX1 ξ + i γX1 ,ξ + e h i 1 i x, ξ 1D (x) νX1 (dx) −h i h i R − − h i  Z   i ξ,x + e h i 1 i x, ξ 1D (x) νt (dx)+ i x, ξ µXs (dx) νZt (ds) R − − h i (0, ) D h i Z Z ∞ Z   0 0 = ξ,γZt AX1 ξ + i γZt γX1 ,ξ + x, ξ µXs (dx) νZt (ds) − (0, ) D h i  Z ∞ Z  

i ξ,x 0 + e h i 1 i x, ξ 1D (x) γZt νX1 + νt (dx) R − − h i Z   Since Y is additive, there exist unique L´evy characteristics (AYt ,γYt ,νYt ) which satisfy the

L´evy-Khintchine Theorem. Upon comparing the L´evy-Khintchine representation of the characteristic function of the distribution of Yt on the left-hand side of the equation with the terms on the right, the theorem is proved.

4.2.1 Special Cases

With the general theorem in hand, characteristic functions and L´evy characteristics are obtained for the cases of time-homogeneous and time-inhomogeneous directing processes.

L´evy Subordinator

Corollary 38 On (Ω, , P) let X be a R-valued L´evy process with generating triplet F R (AX1 ,γX1 ,νX1 ) and Z be a -valued, independent, increasing L´evyprocess with generat-

21 ing triplet given by 0,γ0 ,ν , where ̺ is the L´evy density of ν , γ0 0, and Z1 Z1 Z1 Z1 Z1 ≥ (0, ) (1 s) νZ1 (ds) < . If Y is defined by Yt (ω) = XZt(ω) (ω) for all t 0 and ω Ω, ∞ ∧ ∞ ≥ ∈ Rthen Y is a R-valued additive process in law with the following properties:

iξYt E e = exp tΨZ1 ηX1(ξ) and   

0 AYt = t γZ1 AX1  0  s γYt = t γZ1 γX1 + xµX1 (dx) ̺Z1 (s) ds (0, ) x 1  Z ∞ Z| |≤  0 s R νYt (B) = t γZ1 νX1 (B)+ µX1 (B) ̺Z1 (s) ds , B ( ) . (0, ) ∈B  Z ∞  Proof. For a L´evy subordinator Z , the distribution of Z is given by the tth- fold { t} t convolution of the distribution of Z1. It follows that the L´evy triplet at time, t , is given by

(0, tνZ1 , tγZ1 )[25, p.38].

Selfsimilar, Additive Subordinator

Corollary 39 On (Ω, , P) let X be a R-valued L´evy process with generating triplet F R (AX1 ,γX1 ,νX1 ) and Z be a -valued, independent, increasing H-selfsimilar additive process 0 with time one generating triplet given by 0,γZ1 ,νZ1 , where ̺Z1 is the L´evydensity of νZ1 , 0 γZ1 0 , and (0, ) (1 s) νZ1 (ds) < . If Y is defined by Yt (ω)= XZt(ω) (ω) for all t 0 ≥ ∞ ∧ ∞ ≥ and ω Ω, then Y is a R-valued additive process in law with the following properties: ∈ R E eiξYt = exp Ψ tH η and Z1 · X1(ξ)    H 0 AYt = t γZ1 AX1 H 0 H s H γYt = t γZ1 γX1 + t− xµX1 (dx) ̺Z1 t− s ds (0, ) x 1 Z ∞ Z| |≤  H 0 H s H R νYt (B) = t γZ1 νX1 (B)+ t− µX1 (B) ̺Z1 t− s ds, B ( ) . (0, ) ∈B Z ∞  Proof. The representation of the expectation follows from the construction of the selfsimilar additive process from the selfdecomposable distribution, µZ1 . The L´evy triplets are found by recalling the formula for the L´evy triplets of the selfsimilar additive process in terms of the characteristics of the selfdecomposable distribution, Z1, and making the appropriate 0 H 0 H H substitutions, γZt = t γZ1 and ̺Zt (u)= t− ̺Z1 t− u .  22 CHAPTER 5

Construction of Additive Models

5.1 Derivation of Characteristic Functions

For each of the L´evy, selfsimilar additive, and subordinated additive model classes, two processes are defined by their associated characteristic functions. The first process of each class has a time one distribution which is VG1 distributed while the second has a time one distribution which is NIG1 distributed. Let Ψ and η be defined as in the previous chapter.

5.1.1 L´evy Processes

The three parameter Variance Gamma Process may be defined as a Brownian motion with drift which is time-changed by an independent Gamma process [19]. Let σ,ν > 0 and µ R. On the probability space (Ω, , P) define X X : t 0 to be a R-valued ∈ F ≡ { t ≥ } Brownian motion, where X is Normal(µ, σ2) distributed, and Z Z : t 0 to be an R- 1 ≡ { t ≥ } t 1 valued, independent Gamma process where, for any t > 0, Zt is Gamma ν , ν distributed. R Since for all t > 0 and ξ , the characteristic exponent of Zt is given by ηZ 1 , 1 (ξ) = ∈ t[ ν ν ] t log (1 iξν)− ν , it follows by the subordination theorem for the L´evy case that the − characteristich functioni of the time t distribution of the Variance Gamma process is given by

E eiξ(VGt) = exp Ψ (η (ξ)) { Zt X1 } t   = exp log (1 i iη (ξ) ν)− ν − {− X1 }  t  1 − ν = 1+ ν iµξ + σ2ξ2 . − 2    The Normal Inverse Gaussian process is defined as a Brownian motion with drift which is time-changed by an independent Inverse Gaussian process [2]. Fix δ,γ > 0 and β R. ∈ On the probability space (Ω, , P) define X X : t 0 to be a R-valued Brownian F ≡ { t ≥ } 23 motion, where X is Normal(βδ2,δ2) distributed, and Z Z : t 0 to be a R-valued, 1 ≡ { t ≥ } independent Inverse Gaussian process where, for any t > 0 , Zt is IG(t,δγ) distributed with characteristic exponent given by η (ξ)= t 2iξ +(δγ)2 δγ , ξ R. The Zt[1,δγ] − − − ∈ subordination theorem for the L´evy case yields the followingq characteristic function of the time t distribution of the Normal Inverse Gaussian process.

E eiξ(NIGt) = exp Ψ (η (ξ)) { Zt X1 }   = exp t 2i iη (ξ) +(δγ)2 δγ − − {− X1 } −  q  = exp δt ξ2 2iβξ + γ2 γ . − − −  np o 5.1.2 H-selfsimilar Additive Processes

H-selfsimilar additive processes are constructed as in the proof to Sato’s existence theorem in Chapter 3. It must first be verified that the time one distributions of the Variance Gamma and Normal Inverse Gaussian processes are selfdecomposable. Consequently, the following integral representation of the modified Bessel function of the second kind [12] is needed. 2 2 ν ∞ α t β (1+ν) α exp t− dt =2 K (αβ) − 2 − 2t β ν Z0     On the probability space (Ω, , P) define X X : t 0 to be a R-valued Brownian, F ≡ { t ≥ } motion, where X is Normal(µ, σ2) distributed, and Z Z : t 0 to be an R-valued, 1 ≡ { t ≥ } t 1 independent Gamma process where for any t> 0, Zt is Gamma ν , ν distributed. If for any t > 0, ̺Zt is the L´evy density corresponding to Zt, then the L´evy density corresponding to the time t Variance Gamma random variable, V Gt, is given by

dµXs ̺VGt (x) = (x) ̺Zt (s) ds (0, ) dx Z ∞ 1 [x µs]2 t s = exp − exp ds 2 2 (0, ) √2πσ s − 2σ s ! νs −ν Z ∞   1 µ2 2 2 2 t xµ σ2 + ν x µ 2 = exp 2 σ K 1 | | +  2 q 2 2 √2πσν σ  x  σ rσ ν !    | |      t 1 1 µ2 2 µ  = exp + x x . ν x −σ σ2 ν | |− σ | | "r #! 24 The exponential is a decreasing function of x if ν > 0. Consequently, by Proposition 19 of Chapter 3 with the k-function given by

t 1 µ2 2 µ k (x)= exp 2 + x x ν −σ "rσ ν | |− σ #! and t = 1, the selfdecomposability of the time one distribution of the Variance Gamma process is shown. On the probability space (Ω, , P) define X X : t 0 to be a R-valued Brownian F ≡ { t ≥ } motion, where X is Normal(βδ2,δ2) distributed, and Z Z : t 0 to be a R-valued, 1 ≡ { t ≥ } independent Inverse Gaussian process where, for any t> 0, Zt is IG(t,δγ) distributed. If, for any t> 0, ̺Zt is the L´evy density corresponding to Zt, then the L´evy density corresponding to the time t Normal Inverse Gaussian random variable, NIGt, is given by

dµXs ̺NIGt (x) = (x) ̺Zt (s) ds (0, ) dx Z ∞ 2 2 1 [x βδ s] t 3 1 2 2 = exp − s− 2 exp δ γ s ds 2 2 (0, ) √2πδ s − 2δ s √2π −2 Z ∞ !   1 2 2 t δ β + γ 2 2 x = exp(βx) 2 x K1 δ β + γ | | 2πδ  | | δ  p δ !  p  t 1  = δα exp(βx) K (α x ) where α β2 + γ2. π x 1 | | ≡ | | p In order to check selfdecomposability, another integral representation theorem for the modified Bessel function of the second kind is needed [12]:

z ν 1/2 e− π ∞ t ν 1/2 t − Kν (z)= e− t − 1+ dt. Γ ν + 1 2z 2z 2 r Z0   The L´evy density may now be expressed as

α x 1/2 t 1 e− | | π ∞ t 1/2 t ̺NIG (x) = δα exp(βx) e− t 1+ dt t π x Γ 3 2α x 2α x | | " 2 r | | Z0  | | # 1/2 1 t  1 π ∞ t 1/2 t = exp(βx α x ) δα e− t 1+ dt . − | | x π Γ 3 2α x 2α x "| | 2 r | | Z0  | | #  + Consequently, the product of ̺ and x is decreasing on R and increasing on R− if NIGt | | α> β . Upon applying the representation theorem for selfdecomposable distributions on R | | 25 with

1 1/2 − t 1 π ∞ t 1/2 t k (x) = exp(βx α x ) δα e− t 1+ dt − | | π Γ 3 2α x 2α x " 2 r | | Z0  | | # and t = 1, selfdecomposability of the time  one distribution of the Normal Inverse Gaussian process is verified. It follows that the proof to Sato’s existence theorem for selfsimilar additive processes allows one to construct the characteristic functions associated with the H-selfsimilar additive versions of the Variance Gamma and Normal Inverse Gaussian processes.

For σ,ν,H > 0 and µ R, the characteristic function of the distribution of HssVG ∈ t is determined as follows. On the probability space (Ω, , P) let m = P be the F VG1 selfdecomposable time one Variance Gamma(σ, µ, ν) distribution on R. By Sato’s theorem and its proof, the characteristic function of the time t distribution corresponding to the H-selfsimilar additive process, HssV G is given by { t}

E eiξ(HssV Gt) = m tH ξ; σ, µ, ν 1    1 − ν = 1+ ν iµtH ξ + σ2t2H ξ2 . b − 2   

Similarly, for the characteristic function of the distribution of HssNIGt, let δ,γ,H > 0, β R, and µ = P be the selfdecomposable Normal Inverse Gaussian(δ,β,γ) distribution ∈ NIG1 on R. By applying Sato’s existence theorem along with its proof, the characteristic function of the time t distribution corresponding to the H-selfsimilar additive process, HssNIG { t} is given by

E eiξ(HssNIGt) = µ tH ξ; β,δ,γ   = exp δ t2H ξ2 2iβtH ξ + γ2 γ . b − − −  np o 5.1.3 Subordinated Additive Processes

Subordinated additive processes are constructed via their associated characteristic functions using the generalized subordination theorem of chapter 4. The first process is a Brownian motion time changed by an independent H-selfsimilar additive Gamma process (VHG) while the second is a Brownian motion time changed by an independent H-selfsimilar additive Inverse Gaussian process (NHIG). In order to construct the selfsimilar additive versions of

26 the directing processes, selfdecomposability of the time one distributions of the Gamma and Inverse Gaussian distributions must first be verified. The L´evy densities corresponding to the Gamma process and Inverse Gaussian process are given below [26]

Distribution L´evy Density 1 Gamma(a, b) ̺ (x)= a exp( bx) x− , x> 0 1 −3 1 2 Inverse Gaussian(a, b) ̺ (x)= ax− 2 exp b x , x> 0 √2π − 2 where a, b > 0 for both distributions. Since the corresponding k-functions given by

k = a exp( bx) , x> 0 and Γ − 1 1 1 2 kIG = ax− 2 exp b x , x> 0 √2π −2   are decreasing on the positive reals, it follows that the Gamma and Inverse Gaussian distributions are selfdecomposable.

The characteristic function of the distribution of V HGt is determined as follows. Let σ,ν,H > 0 and µ R. On the probability space (Ω, , P) define X X : t 0 ∈ F ≡ { t ≥ } 2 to be a R-valued Brownian motion where X1 is Normal(µ, σ ) distributed. On the same probability space define Z Z : t 0 to be an independent, increasing H-selfsimilar ≡ { t ≥ } 1 1 R additive process such that Z1 is Gamma ν , ν distributed . Since for all ξ , the 1 ∈ − ν characteristic exponent of Z1 is given by ηZ 1 , 1 (ξ) = log (1 iξν) , it follows by the 1[ ν ν ] − generalized subordination theorem for additive directing processesh thati the characteristic function of the time t distribution corresponding to the subordinated process, V HG , is { t} given by

E eiξ(VHGt) = exp Ψ (η (ξ)) { Zt X1 } H   = exp ΨZ1 t ηX1 (ξ) 1   1 − ν = exp log 1 i itH σ2ξ2 + iµξ ν − − −2     !  1 1 − ν = 1+ tH ν iµξ + σ2ξ2 . − 2    The characteristic function of the distribution of NHIGt is similarly defined. Fix δ,γ,H > 0 and β R. On the probability space (Ω, , P) define X X : t 0 to be a R- ∈ F ≡ { t ≥ } 2 2 valued Brownian motion where X1 is Normal(βδ ,δ ) distributed. On the same probability space define Z Z : t 0 to be an independent, increasing H-selfsimilar additive process ≡ { t ≥ } 27 such that Z is IG(1,δγ) distributed. Since for all ξ R, the characteristic exponent 1 ∈ of Z is given by η (ξ) = 2iξ +(δγ)2 δγ , it follows by the generalized 1 Z1[1,δγ] − − − subordination theorem for additive directingq processes that the characteristic function of the time t distribution of the subordinated process, NHIG , is given by { t}

E eiξ(NHIGt) = exp Ψ (η (ξ)) { Zt X1 } H   = exp ΨZ1 t ηX1 (ξ)

= exp  2i it H η (ξ) +(δγ)2 δγ − − {− X1 } −  q  = exp δ tH [ξ2 2iβξ]+ γ2 γ . − − −  np o For the sake of comparison, tables of characteristic functions for each family of processes are given below.

t 1 2 2 − ν VG 1+ ν iµξ + 2 σ ξ − 1 H 1 2 2H 2 − ν HssVG 1+ ν  iµt ξ + 2 σ t ξ − 1 VHG 1+ tH ν iµξ + 1 σ2ξ2 − ν  − 2    NIG exp δt ξ2 2iβξ + γ2 γ − − − HssNIG exp  δ npt2H ξ2 2iβtH ξ + γ2o γ − − − NHIG exp  δ nptH [ξ2 2iβξ]+ γ2 γ o − − −  np o In both tables, the only difference between the selfsimilar process and the subordinated process is an extra factor of tH multiplied by the squared term of the argument, ξ, of the characteristic function. In order to observe what influence this has on the resulting distributions, the first four central moments of all six additive processes are calculated. 5.2 Expressions for the Central Moments

Let Y be an R-valued additive process on (Ω, , P) and Φ be the characteristic function { t} F Yt of Yt, t > 0. The first four central moments of the distribution of Yt are calculated via E [Y n]= 1 ∂ Φ (u) and shown in the tables below. t in ∂u Yt |u=0

28 Central Moment Formula

E [Yt] iηY′ t (0) 2 − E Y Y η(2) (0) − − Yt 3 E hY Y  i iη(3) (0) − Yt 4 2 E hY Y  i η(4) (0)+3 η(2) (0) − Yt Yt h  i h i The following tables contain the first four central moments of the time t distribution corresponding to each of the six additive processes.

Moment VG E [Yt] tµ 2 E Y Y t [µ2ν + σ2] − 3 E hY Y  i t [2µ3ν2 +3σ2νµ] − 4 E hY Y  i t2 [3µ4ν2 +6σ2νµ2 +3σ4]+ t [6µ4ν3 + 12σ2ν2µ2 +3σ4ν] − h  i Moment HssVG H E [Yt] t µ 2 E Y Y t2H [µ2ν + σ2] − 3 E hY Y  i t3H [2µ3ν2 +3σ2νµ] − 4 E hY Y  i t4H [3µ4ν2 (1+2ν)+6σ2νµ2 (1+2ν)+3σ4 (1 + ν)] − h  i Moment VHG H E [Yt] t µ 2 E Y Y t2H [µ2ν]+ tH [σ2] − 3 E hY Y  i t3H [2µ3ν2]+ t2H [3σ2νµ] − 4 E hY Y  i t4H [3µ4ν2 (1+2ν)] + t3H [6σ2νµ2 (1+2ν)] + t2H [3σ4 (1 + ν)] − h  i Moment NIG 1 E [Yt] t [δβγ− ] 2 2 2 3 E Y Y t [β + γ ] δγ− − 3 2 2 5 E hY Y  i t [β + γ ]3δβγ− − 4 2 4 2 3 5 4 2 2 4 7 E hY Y  i t [β δγ +2β γ δ + δγ ]+ t [5β +6β γ + γ ] 3δγ− − { } h  i

29 Moment HssNIG H 1 E [Yt] t [δβγ− ] 2 2H 2 2 3 E Y Y t [β + γ ] δγ− − 3 3H 2 2 5 E hY Y  i t [β + γ ] 3βδγ− − 4 4H 4 2 2 4 7 E hY Y  i t [β (δγ +5)+2 β γ (δγ +3)+ γ (δγ + 1)] 3δγ− − h  i  Moment NHIG H 1 E [Yt] t [δβγ− ] 2 2H 2 H 2 3 E Y Y t β ]+ t [γ δγ− − 3 3H 2 2H 2 5 E hY Y  i t β ]+ t [γ 3βδγ− − 4 4H 4 3H 2 2 2H 4 7 E hY Y  i t β (δγ +5)]+ t [2β γ (δγ +3)]+ t [γ (δγ + 1) 3δγ− − h  i    5.3 Properties of the Increments:

In order to investigate the distributional properties of the increments of the additive processes, the first four central moments of the distribution of the increment over the time interval [s, t] for 0 0. By additivity of Y, the characteristic function of the increment is t+s − t given by

µYt+s (ξ) µYt+s Yt (ξ)= , 0 s t. − µYt (ξ) ≤ ≤ b Since the distributions of Ytb are infinitely divisible, set (At,νt,γt) as the corresponding { } b { } system of unique L´evy triplets. By the previous equation the L´evy triplet of the increment is given by (A A ,ν ν ,γ γ ) . Let Y denote the VG process or either of its t+s − t t+s − t t+s − t { t} two time-inhomogeneous versions. In the following tables the first four central moments of the increments, X Y Y for 0 s

Moment VG E [Xs,t] sµ 2 E X X s [µ2ν + σ2] s,t − s,t 3 E hX X  i s [2µ3ν2 +3σ2νµ] s,t − s,t 4 E hX X  i [t + s]2 t2 [3µ4ν2 +6σ2νµ2 +3σ4]+ s [6µ4ν3 + 12σ2ν2µ2 +3σ4ν] s,t − s,t − h  i 

30 Moment HssVG E [X ] [t + s]H tH µ s,t − 2 E X X [t + s]2H t2H [µ2ν + σ2] s,t − s,t − 3 E hX X  i [t + s]3H t3H  [2µ3ν2 +3σ2νµ] s,t − s,t − 4 E hX X  i [t + s]4H t4H  [3µ4ν2 (1+2ν)+6σ2νµ2 (1+2ν)+3σ4 (1 + ν)] s,t − s,t − h  i   Moment VHG E [X ] [t + s]H tH µ s,t − 2 E X X [t + s]2H t2H [µ2ν]+ [t + s]H tH [σ2] s,t − s,t − − 3 E hX X  i [t + s]3H t3H  [2µ3ν2]+ [t + s]2H t2H [3σ2νµ] s,t − s,t − − 4 E hX X  i [t + s]4H t4H  [3µ4ν2 (1+2 ν)]  s,t − s,t − 3H 2H h  i + [t + s] t3H [6σ2νµ2 (1+2ν)] + [t + s] t2H [3σ4 (1 + ν)] − −     As shown above, the skewness and kurtosis of the increment distribution for a selfsimilar process is constant with respect to maturity, while that of a subordinated additive process is a rational function of maturity. Another interesting property among the three classes of models is the variance structure of the increment. For the L´evy case, the increment variance is an increasing function of maturity. If H 0, 1 both the selfsimilar and subordinated ∈ 2 additive models have an increment variance which decreases with maturity. For H 1 , 1  ∈ 2 the increment variance of the selfsimilar distribution is increasing with maturity, while for the subordinated case, the increment variance is either decreasing or increasing depending on the choice of parameters. Finally, if H 1 both models have an increment variance ≥ which increases with maturity.

31 CHAPTER 6

Option Pricing and Calibration

6.1 Definition of the Price Process

R P Let X Xt t [0,T ] be a -valued on (Ω, , ) in which the characteristic ≡ { } ∈ F function of its time t distribution with n-dimensional parameter vector, θ Rn, is denoted ∈ by φθ . Define Θ Rn as the set of parameter vectors such that for each θ Θ, φθ (ξ) is Xt ⊂ ∈ Xt defined for each ξ R i and each t [0,T ]. In the case where X has independent ∈ ∪ {− } ∈ { t} θ P increments, let S St t [0,T ] be the price process on (Ω, , t , ) used to model a risk- ≡ ∈ F {F } neutral underlier of an option with continuously compounded values of both interest rate, r, and dividend yield, q, where

d S exp(X + [r q] t) Sθ = 0 t − for each t [0,T ] . t φθ ( i) ∈ Xt − (r q)t θ By definition, e− − S is an ( P) martingale, and the time-consistent linear pricing t Ft − θ rule used to determine the fair price of a contingent claim, H, on St t [0,T ] is given by π, ∈ where  θ r(T t) θ π (H)= e− − EP H S t,T T |Ft [12].    In the case where X is a L´evy stochastic volatility model, the method of Carr, Geman, Madan and Yor (CGMY) [6] is used to obtain a martingale price process whose marginals (r q)t θ match those of e− − St . The reason for doing this, as noted by CGMY, is that in the previous case, the additivity of X enabled the conditional expectation of the valuation operator to be evaluated as an unconditional one [6]. As a result, the normalization term of φθ ( i) occurred in the definition of Sθ,t [0,T ]. Consequently, when X lacks independent Xt − t ∈ increments, one may not simply evaluate the pricing rule using the conditional expectation under P.

32 In order to establish existence of a martingale whose marginals match those of (r q)t θ e− − St , one needs only to assume that the option quotes are free of static arbitrage [9]. Specifically, it is assumed that there are no arbitrage opportunities in which dynamic trades are limited to those where the position in the underlier during the life of the option depends only on the current time, t, and its spot price, St. By results of CGMY [9], [20], there exists a martingale M on some probability space Ω, , , P for which the the { t} G {Gt} following holds   e e d (r q)t θ M = e− − S for each t [0,T ] . t t ∈ Consequently, the price process, S Sθ , on Ω, , , P is defined by ≡ t G {Gt}    θ d (r qe)t θ e St = e − Mt .

(r q)t θ   Since e− − S is a P martingale, it follows that the pricing rule is given by t Gt −    θ r(T t) θ π e (H)= e− − E❡ H S . t,T P T |Gt    Below are the characteristic functions of the time t distributions of the logarithm of the risk-neutral underlier for the models used in this study.

risk-neutral exponential Variance Gamma •

t 1 2 2 VG exp iξ (ln S0 + t [r q]) ν log 1+ ν iµξ + 2 σ ξ Φln St (ξ)= − − t −  1 ν µ + 1 σ2 − ν   − 2 risk-neutral exponential Variance H Gamma   • − 1 H 1 2 2 VHG exp iξ (ln S0 + t [r q]) ν log 1+ t ν iµξ + 2 σ ξ Φln St (ξ)= − − 1 −  1 tH ν µ +1 σ2 − ν   − 2 risk-neutral exponential H-selfsimilar Variance Gamma • 1 H 1 2 2H 2 HssV G exp iξ (ln S0 + t [r q]) ν log 1+ ν iµt ξ + 2 σ t ξ Φln St (ξ)= − − − 1  1 ν µtH +1 σ2t2H − ν  − 2 risk-neutral exponential Normal Inverse Gaussian  • 2 2 exp iξ (ln S0 + t [r q]) δt ξ 2iβξ + γ γ NIG − − − − Φln St (ξ)= n exp δt 1 2hβp+ γ2 γ io − − − −  hp i 33 risk-neutral exponential Normal H-Inverse Gaussian •

H 2 2 exp iξ (ln S0 + t [r q]) δ t (ξ 2iβξ)+ γ γ NHIG − − − − Φln St (ξ)= n exp δ tH ( 1hp2β)+ γ2 γ io − − − −  hp i risk-neutral exponential H-selfsimilar Normal Inverse Gaussian •

2H 2 H 2 exp iξ (ln S0 + t [r q]) δ t ξ 2iβt ξ + γ γ HssNIG − − − − Φln St (ξ)= n exp δ t2H hp2βtH + γ2 γ io − − − −  hp i risk-neutral exponential Variance Gamma - Ornstein Uhlenbeck with stationary • Gamma distribution

VG-OU-Γ ΦOU Γt ( i log ΦVG1 (ξ; C, G, M); λ, a, b, 1) − Φln St (ξ) = exp(iξ (ln S0 + t [r q])) − − ΦOU Γt ( i log ΦVG1 ( i; C, G, M); λ, a, b, 1) − − − where GM C Φ (ξ; C, G, M)= VG1 GM + i (M G) ξ + ξ2  −  and

1 λt ϕOU Γt (ξ; λ, a, b, y0) = exp iy0λ− 1 e− ξ − { − λa b + blog  itξ iξ λb b iλ 1 (1 e λt) ξ − } −   − − − −   risk-neutral exponential Normal Inverse Gaussian - Ornstein Uhlenbeck with stationary • Gamma distribution

NIG-OU-Γ ΦOU Γt ( i log ΦNIG1 (ξ; β,δ,γ); λ, a, b, 1) − Φln St (ξ)= exp(iξ (ln S0 + t [r q])) − − ΦOU Γt ( i log ΦNIG1 ( i; β,δ,γ); λ, a, b, 1) − − − where Φ (ξ; β,δ,γ) = exp δ ξ2 2iβξ + γ2 γ NIG1 − − − n h io and p

1 λt ϕOU Γt (ξ; λ, a, b, y0) = exp iy0λ− 1 e− ξ − { − λa b + blog  itξ iξ λb b iλ 1 (1 e λt) ξ − } −   − − − −  

34 It is noted that in the Variance Gamma case, the parametrization (µ, σ, ν) (C, G, M) 7→ is used since it allows a single parameter,C, to be associated with time, unlike the original parametrization. Below is the required mapping. 1 C = ν 1 1 1 1 − G = µ2ν2 + σ2ν µν r4 2 − 2 ! 1 1 1 1 − M = µ2ν2 + σ2ν + µν r4 2 2 ! The parametrization chosen for the NIG process, however, does not require such a trans- formation of variables, since the parameter, δ, is a multiplicative constant of time, t, and appears only once in the characteristic function given. 6.2 Numerical Pricing Method for European Options

For any of the eight previously defined models, let the risk-neutral underlier, S S , be ≡ { t} (r q)t defined on (Ω, , , P) so that e− − S is a ( P) martingale. Before proceeding, F {Ft} t Ft − it is necessary to first extend the characteristic function of the distribution of the logarithm of the risk-neutral stock price, Φln St , to a larger domain in the complex plane. Suppose that for each t > 0, there exists a nonempty set, D , such that D = x R : E ex ln St < . t t ∈ ∞ Let Λ = ζ C : (ζ) D . The following proposition by Lee extends the domain of ln St { ∈ −ℑ ∈ t}    each Φln St to a horizontal strip in the complex plane.

rt Proposition 40 (Lee)[17] The discounted characteristic function, e− Φln St , is well-defined rt and analytic in Λln St , which is a convex set. Partial derivatives of e− Φln St may be taken through the expectation.

In order to calculate European option prices, the “modified call” method of Carr and Madan [8] is used, requiring a closed form expression for the inverse Fourier transform of a damped call price in terms of the characteristic function of the distribution of the logarithm of the risk-neutral stock price. The steps of this method are summarized below.

In order to obtain an integrable version of the call price function, for a given strike, • K, define k = ln (K) for K > 0 and let cT , the modified call price, be given by

cT (k) = exp(αk) πT (k)

35 for some α> 0 such that c L (R) . T ∈ 1 Let ψ : R C be defined by • → iξk ψT (ξ)= e cT (k) dk. ZR

Consequently, a sufficient condition for c to be integrable is ψ (0) < . Therefore, T T ∞ choose an upper bound, α+, for α > 0, so that ψ (0) < if 0 <α<α+. The call price T ∞ may now be expressed as

iνk ψT π (k) = exp( αk) e− (ν) dν. T − 2π ZR The last task is the computation of ψ . Let ρ be the density of ln (S ) and Φ be • T T T ln ST the characteristic function of the distribution of ln (ST ). As derived in Carr [8], the expression for the inverse Fourier tranform of the modified call is given by

iνk ψT (ν) = e cT (k) dk ZR

rT iνk αk s k + = e− e e e e ρ (s) dsdk − T ZR ZR  rT iνk αk s k + = e− ρ (s) e e e e dkds by Fubini and ψ (ν) < T − | T | ∞ ZR ZR  s(1+iν+α) rT e = e− ρ (s) ds T α2 + α ν2 + iν (1+2α) ZR − rT e− P i( i(1+iν+α))s = E e − α2 + α ν2 + iν (1+2α) rT − e− Φ (ν i [1 + α])   = ln ST − . α2 + α ν2 + iν (2α + 1) − Consequently, ψ (0) < if and only if Φ ( i (α + 1)) < . T ∞ ln ST − ∞ The price of a T -maturity call at strike level ek is given by • π (k) = exp( αk) c (k) T − T iνk ψT = exp( αk) e− (ν) dν − 2π ZR αk e− ∞ ixk = e− ψ (x) dx since π (k) R π ℜ T T ∈ Z0  36 where rT e− Φ (ν i [α + 1]) ψ (ν)= ln ST − , ν R. T α2 + α ν2 + iν (2α + 1) ∈ − A discrete Fourier transform is implemented using the Trapezoidal rule. Fix ∆x > 0 and define the following quantities for u =1 ...N, N =2m for some m N. ∈ 2π ∆k = N∆x N k = ∆k +(u 1)∆k u − 2 − x = (u 1)∆x u −

Using the previous work, option prices on the set of discrete log strikes, k for { u} u =1 ...N, are computed as follows.

αk e− ∞ ixku π (k ) = e− ψ (x) dx T u π ℜ T Z0  αku N e− ∆x ixj ku ≈ [2 δj 1] e− ψT (xj) π 2 ℜ − − j=1 !   X by the composite Trapezoidal Rule N ∆x N αku i(j 1)∆x( ∆k+(u 1)∆k) ≈ e− [2 δj 1] e− − − 2 − ψT (xj) 2π ℜ − − j=1 ! X N ∆x 2π N αku i (j 1)(u 1) ixj ( ∆k) = e− e− N − − [2 δj 1] e 2 ψT (xj) 2π ℜ − − j=1 ! X n o For the purpose of calibration, a linear interpolation between the output nodes of the discrete Fourier transform is performed, as done by Carr and Wu [10] since the model call prices are given only at these log strike values. 6.3 Calibration

Now that a price process, pricing operator, and its implementation have been described, the inverse problem of estimating the model parameters given a set of option prices is addressed. Given any of the previously defined risk-neutral price processes, S, on (Ω, , P), let Θ denote F the set of model parameter vectors which determine the distribution corresponding to the

37 logarithm of the risk-neutral underlier, ln St. Furthermore, let the set of N model option prices corresponding to θ Θ be given by πθ and the corresponding set of weights ∈ i i=1...N be given by w . In order to estimate the set of parameters chosen by the market, the { i}i=1...N  quadratic pricing error, : Θ R+, is minimized with E →

N θ 2 (θ)= wi πi (r, q, S0,Ti,Ki) Pi (Ti,Ki) , θ Θ (6.1) E i=1 − ∈ [12]. X   It is noteworthy to mention a few of the theoretical and computational difficulties pertaining to the calibration of a given model. First, the number of options used to calibrate a model is generally inadequate to uniquely identify the optimal model parameter vector [13]. Consequently, there may be many model parameter vectors which yield pricing errors to within a given tolerance [29]. Second, the set of values used to represent the current market values of the options, to within bid-ask, may induce numerical instability with respect to calendar time, resulting in relatively large variations in calibrated parameter vectors [29]. Finally, implementation of the least squares minimization may be difficult since the objective function is not convex in θ and may have many local minima [13]. Without resorting to a regularization scheme which includes a relative entropy penaliza- tion term [13] or an evolutionary optimization technique [16], the data sets are calibrated as posed, with the following conditions. First, the cut-off criterion for the optimization routine does not require that the gradient norm achieve a certain tolerance, but rather the relative change in the infinity norm of the minimizer (solution) achieve a certain tolerance. Second, in a manner similar to CGMY [7], deep out-of-the-money options with relatively small price to spot ratios are deleted from the data set in order to decrease noise in the error surface.

38 CHAPTER 7

Empirical Study

7.1 Models of Interest

In this study the relative pricing performance of the six additive models developed in the previous chapters, along with two L´evy stochastic volatility models is assessed. The stochastic volatility models included are the Variance Gamma and Normal Inverse Gaussian models, time-changed by the time integral of a stationary Ornstein-Uhlenbeck process with one-dimensional Gamma distribution [2]. The table below serves to classify these eight stochastic models.

Table 7.1: Classification of Calibrated Models

Process Class VG family NIG family L´evy VG NIG selfsimilar, additive HssVG HssNIG subordinated, additive VHG NHIG subordinated, stochastic volatility VG-OU-Gamma NIG-OU-Gamma

7.2 Data Specifications

The options used to calibrate the risk-neutral models in this study were the European exercised, Standard and Poor’s 500 index options with 4 p.m. EST time-stamped bid and ask quotes, obtained by private communication with Warren Beaves. The quote dates coincided with the second Tuesday of the month for the time period spanning January 2005 through December 2005. Following Carr and Wu [10], only out-of-the-money options were used

39 since their liquidity is typically greater than that of in-the-money options of like strike and maturity. The maturities of the options ranged from approximately 1 month to 1 year. Conse- quently, the nearest term SPX options were not included in this analysis, leaving a set of SPX options with a total of five different maturities for each quote date. Another filter on the option data was a high pass filter on the ratio of option price to index value. Similar to CGMY [7], deep out-of-the-money options with price to spot ratio less than 0.0006 were discarded. In addition, any option with an ask or bid quote of zero was discarded. Finally, options with a moneyness (strike/spot) outside the interval, [0.55, 1.4], were not included in the calibration. After all of the filters were applied, the moneyness for each of the remaining options was bounded below by 0.58 and above by 1.27. Furthermore, the number of options per quote date ranged from 109 to 134. The value of the index on a given quote date was that taken at 4 p.m. EST. The risk free rate was taken to be the three-month constant maturity Treasury [24], as of the given quote date. The quarterly dividend yield was obtained from the Standard and Poor’s web site [28]. The most recent available dividend yield was used for each option quote date. 7.3 Algorithmic Specifications

For each of the twelve data sets, each model was calibrated by minimizing the weighted sum of squared differences between market and model price over the feasible model parameter set, Θ , as defined in the previous chapter. A sequential quadratic programming technique with line search was implemented in the “fmincon” optimization routine of Matlab. The coding of the minimization termination condition was modified so that the final iterate, k, occurred −→x k −→x k−1 when the relative change in the infinity norm of the minimizer, k − k, was less than −→x k−1 5 k k 10− . In order to calculate the value of the objective function at a given point, θ , the squared difference between market and model option price was normalized by the square of the Black- Scholes vega evaluated at the implied volatility of the market option price for each option. Given a data set of N options, let I : R+ R+ be the Black-Scholes implied volatility of the → model price, and Ii denote the Black-Scholes implied volatility corresponding to the average of the bid and ask quotes for option i 1 ...N. Following Cont and Tankov [12], the sum of ∈ squared differences between market and model implied volatility was approximated in terms

40 θ of the sum of squared differences between market price, Ci, and model price, πi , as shown below. 2 2 N N N θ θ 2 ∂I θ πi Ci I πi Ii πi Ci = − . i=1 − ≈ i=1 ∂C i − i=1 vega    i  Since the meanX number  of options X per quote date was 123, thisX approximation allowed more weight to be given to the price differences associated with the cheaper options in a manner much more efficient than by numerically inverting the Black-Scholes formula for each option price [12]. Model option prices were determined by the “modified call” method of Carr, et. al. with puts calculated by the put-call parity formula. The number of FFT points used was 212 or 4096 points with a constant quadrature spacing of 0.25. As a simplification, the number of days per year was set to 252. Consequently, discounting with respect to the risk-free rate and computing accrued dividends was done with respect to the same time to expiry. The maturity of each option was taken as the time difference between 4 p.m. EST on the quote date and 4 p.m. EST on the Thursday before the third Friday of a contract’s expiration month. This was a simplification because the settlement value of the index is determined on Friday morning while the trading of options ceases on the previous day. 7.4 Results

The following definitions of error were used to measure the pricing performance of each of the eight models. Denote N by the number of options used on a particular quote date, P , { i} by the set of observed bid-ask averages, and π , by the set of corresponding calculated { i} model prices. The average pricing error (AP E) and root mean square error (RMSE) as defined by Schoutens [26] are the following.

N i=1 Pi πi AP E = N| − | P i=1 Pi 1P N 2 RMSE = (Pi πi) rN i=1 − X The out-of-sample pricing errors were obtained by pricing options for the current quote date using the previous day’s parameter estimates. Since the option prices were taken as the set of averages of the bid and ask quotes on a given quote date, each model price may deviate at most by (ask bid) /2 if it is to lie between the bid and ask quotes. Setting the price −

41 difference to be (ask bid) /2 and the observed option price to be (ask + bid) /2 in the − previous formulas yields the two measures of uncertainty in market prices given below.

N i=1 (aski bidi) AP Emarket = N − Pi=1 (aski + bidi) 1P 1 N 2 RMSEmarket = (aski bidi) 2rN i=1 − X For the set of models considered, this “market” benchmark was rather difficult to attain in practice. For the purpose of calculating an upperbound on the theoretical AP E and RMSE, all option prices were calculated without interpolation. Consequently, an upper bound on the numerical error in the model pricing measures was determined from the truncation and sampling errors associated with the corresponding calculated option prices. For the jth quote (j) (j) date with N options where j = 1 ... 12, let Pi denote the average of the bid and ask th (j) th (j) quotes for the i option, ϑi denote the theoretical value of the i option, πi denote the (j) corresponding calculated option price using the midpoint quadrature rule, and εi denote the sum of truncation and sampling error associated with the ith calculated option price using the bounds given by Lee [17]. Let AP E(j) (resp. RMSE(j)) denote the Average Pricing Error (resp. Root Mean Square Error) for the jth quote date calculated using the midpoint quadrature rule. An upper bound for the theoretical Average Pricing Error for the jth quote date is determined by

N (j) ϑ(j) P (j) i − i AP E(j) = i=1 theoretical N (j) P (j) Pi i=1 N (j) P (j) (j) (j) (j) ϑi πi + πi Pi i=1 − − N (j) ≤ P (j ) Pi i=1 NP(j) (j) εi AP E(j) + i=1 . N (j) ≤ P (j) Pi i=1 P Similarly, an upper bound for the theoretical Root Mean Square Error for the jth quote

42 date is given by

N (j) 1 2 RMSE(j) = ϑ(j) P (j) theoretical vN (j) i − i u i=1 u X   t N (j) 1 2 = ϑ(j) π(j) + π(j) P (j) vN (j) i − i i − i u i=1 u X   t N (j) 1 2 2 ε(j) + π(j) P (j) +2ε(j) π(j) P (j) ≤ vN (j) i i − i i i − i u i=1 u X h i h i t N (j) N (j) 1 2 2 RMSE(j) + ε(j) + ε(j) π(j) P (j) . ≤ vN (j) i vN (j) i i − i u i=1 u i=1 u X h i u X t t Bar graphs of the difference between the calculated Average Pricing Error and the upper bound on the theoretical Average Pricing Error for each quote date and for each process are found in Appendix B. In figures 9.1-9.4 are the bar graphs of the in-sample and out-of-sample upper bounds on the theoretical APE and RMSE for each quote date, for each model. It is noted that the August 2005 out-of-sample quote date (fig. 9.2) had a relatively large APE market value. This however, was consistent with the larger bid-ask spreads which occurred for that day. Furthermore, figures 9.1-9.4 show that with respect to APE and RMSE, the NIG models consistently out-performed their corresponding VG models. This observation is made apparent in the following set of stataistics.

12 resp. 11 : the fraction of quote dates in which each of the four models of the • 12 12 NIG family had a lower in-sample (resp. out-of-sample) APE bound than that of the corresponding member of the VG family

11 resp. 10 : the fraction of quote dates in which each of the four models of the NIG • 12 12 family had a lower in-sample (resp. out-of-sample) RMSE bound than that of the corresponding member of the VG family

Furthermore, the selfsimilar NIG model (HssNIG) stood out above the other additive models with superior performance in two categories: the ability to yield a lower in-sample and out-of-sample APE and RMSE bound than its stochastic volatility counterpart (NIG- OU-Gamma), and the ability to attain the effective market APE. Below is a summary of the HssNIG pricing performance frequencies with respect to APE and RMSE.

43 Average Pricing Error:

10 resp. 12 : the fraction of quote dates in which the HssNIG model had a lower • 12 12 in-sample (resp. out-of-sample) APE bound than that of each stochastic volatility model

8 resp. 5 : the fraction of quote dates in which the HssNIG model had a lower • 12 12 in-sample (resp. out-of-sample) APE bound than that of the market Root Mean Square Error:

10 resp. 11 : the fraction of quote dates in which the HssNIG model had a lower • 12 12 in-sample (resp. out-of-sample) RMSE bound than that of each stochastic volatility model

1 resp. 2 : the fraction of quote dates in which the HssNIG model had a lower • 12 12 in-sample (resp. out-of-sample) RMSE bound than that of the market

The fact that the models failed to consistently attain the RMSE market values is not too surprising since the squared difference between model and market option price was 1 weighted by vega2 for each option in the data set. After having observed the individual pricing performance measures for each model, for each quote date, attention is now directed toward the twelve quote date time averages of these measures. The averages of in-sample and out-of-sample upper bounds on the theoretical APE and RMSE over the twelve quote dates are given by

N (j) ε(j) 1 12 1 12 i AP EU.B. = AP E(j) +  i=1  and 12 12 NP(j) j=1 j=1  P (j)  X X  i   i=1   P  12 12 N (j) N (j) 1 1 1 2 2 RMSEU.B. = RMSE(j) + ε(j) + ε(j) π(j) P (j) . 12 12 vN (j) i vN (j) i i − i  j=1 j=1 u i=1 u i=1 X X u X h i u X t t    The in-sample and out-of-sample upper bounds to the twelve quote date time-averaged theoretical Average Pricing Error and Root Mean Square Error are given in Tables 7.2 and 7.3 below.

44 Table 7.2: Upper Bound on the Time-averaged Theoretical Average Pricing Error (Jan. - Dec. 2005)

Process Name in-sample AP EU.B. (%) out-sample AP EU.B. (%) VG 12.682 13.619 NIG 11.288 12.456 VHG 8.111 8.636 NHIG 6.349 7.157 HssVG 6.725 7.218 HssNIG 4.346 5.393 VG-OU-Gamma 5.860 6.651 NIG-OU-Gamma 5.175 6.289 market 4.724 5.089

Table 7.3: Upper Bound on the Time-averaged Theoretical Root Mean Square Pricing Error (Jan. - Dec. 2005)

Process Name in-sample RMSEU.B. ($) out-sample RMSEU.B. ($) VG 2.082 2.236 NIG 1.824 2.016 VHG 1.442 1.537 NHIG 1.109 1.247 HssVG 1.094 1.226 HssNIG 0.759 0.955 VG-OU-Gamma 1.085 1.199 NIG-OU-Gamma 0.954 1.113 market 0.624 0.677

In Table 7.2 the only process with an upper bound on the time-averaged theoretical Average Pricing Error which was less than the market average for the in-sample case was the HssNIG model. In the case of RMSE in Table 7.3 no model had an upper bound less than the market average. Furthermore, the worst performing model class, with respect to in-sample and out-of-sample average APE and RMSE bounds, was the L´evy class. Not surprising is that with one additional parameter, the subordinated additive and selfsimilar models had lower APE and RMSE bounds than their time-homogeneous counterpart. Remarkable,

45 however, is the difference in performance between the two, four parameter additive models. Tables 7.4-7.5 contain the twelve quote date time averages of the ratio of the upper bound for the selfsimilar model to that of either the corresponding subordinated additive model or the corresponding stochastic volatility model. In each of the first two rows of Table 7.4 and 7.5, the model pairs of the additive processes have identical time one distributions.

Table 7.4: Intra-family Comparison: Twelve Quote Date Time-averaged Ratio of the Upper Bounds on the Theoretical Average Pricing Errors (Jan. - Dec. 2005)

U.B. U.B. APE1 APE1 model 1 : model 2 in-sample U.B. out-sample U.B. APE2 APE2 HssVG: VHG 0.825 D E 0.835 D E HssNIG: NHIG 0.683 0.754 HssVG: VG-OU-Gamma 1.15 1.09 HssNIG: NIG-OU-Gamma 0.836 0.856

Table 7.5: Intra-family Comparison: Twelve Quote Date Time-averaged Ratio of the Upper Bounds on the Theoretical Root Mean Square Pricing Errors (Jan. - Dec. 2005)

U.B. U.B. RMSE1 RMSE1 model 1 : model 2 in-sample U.B. out-sample U.B. RMSE2 RMSE2 HssVG: VHG 0.756 D E 0.795 D E HssNIG: NHIG 0.685 0.767 HssVG: VG-OU-Gamma 1.02 1.02 HssNIG: NIG-OU-Gamma 0.807 0.855

In terms of APE and RMSE, the four parameter selfsimilar models outperformed their corresponding four parameter subordinated additive models for the in-sample and out-of- sample cases. As shown in the first and second rows of Tables 7.4-7.5, the mean ratio of the upper bound for the selfsimilar process to that of its corresponding subordinated additive model ranged between 0.68 and 0.84. Remarkably, the selfsimilar additive NIG model had pricing error bounds which were, on average, 80 to 86 percent of that corresponding to the NIG-OU-Gamma process, both for in-sample and out-of-sample APE and RMSE bounds,

46 as shown in the fourth rows of Tables 7.4 and 7.5. This is in contrast to the VG case where the selfsimilar process, on average, had pricing error bounds which were 102 to 115 percent of that corresponding to the VG-OU-Gamma process for in-sample and out-of-sample APE and RMSE bounds. In Tables 7.6 and 7.7 each pair of models lies within the same model class: L´evy, selfsimilar, subordinated additive, and stochastic volatility. For each of the four model classes, the twelve month time-averaged ratio of the theoretical APE and RMSE upper bounds of the NIG model to that of its corresponding VG model is given.

Table 7.6: Intra-class Comparison: Twelve Quote Date Time-averaged Ratio of the Upper Bounds on the Theoretical Average Pricing Errors (Jan. - Dec. 2005)

U.B. U.B. APE1 APE1 model 1 : model 2 in-sample U.B. out-sample U.B. APE2 APE2 NIG: VG 0.888 D E 0.915 D E NHIG: VHG 0.784 0.827 HssNIG: HssVG 0.644 0.747 NIG-OU-G: VG-OU-G 0.891 0.944

Table 7.7: Intra-class Comparison: Twelve Quote Date Time-averaged Ratio of the Upper Bounds on the Theoretical Root Mean Square Pricing Errors (Jan. - Dec. 2005)

U.B. U.B. RMSE1 RMSE1 model 1 : model 2 in-sample U.B. out-sample U.B. RMSE2 RMSE2 NIG: VG 0.871 D E 0.899 D E NHIG: VHG 0.768 0.805 HssNIG: HssVG 0.696 0.776 NIG-OU-G: VG-OU-G 0.884 0.927

Notable in rows three of Tables 7.6 and 7.7 is that for both in-sample and out-of-sample cases, the time-averaged ratio of APE and RMSE bounds for the selfsimilar NIG model to that of its VG model was less than 0.78. Furthermore, the time-averaged ratio of in-sample APE bounds of the selfsimilar NIG to that of its VG counterpart was approximately 2/3. In figures 9.5 - 9.10 the market and calculated model option prices are plotted versus simple moneyness (K/S) for the sets of options corresponding to the April, August, and

47 December 2005 quote dates. Figures 9.5 - 9.7 consist of the plots for the VG family while figures 9.8 - 9.10 consist of the plots for the NIG family of models. In each panel of plots, the subordinated additive model showed a drastic improvement in pricing over the L´evy model with respect to long and near-term puts. Further improvement in the long and near-term puts and calls are seen as one moves from the subordinated additive case to the selfsimilar case. For the NIG family of models, the selfsimilar model usually provided better pricing of the two longest maturity set of puts than did its L´evy stochastic volatility counterpart. In order to illustrate the rather distinct distributional properties of the increments of the additive processes in this study, the term structures of the first four central moments of the distributions calibrated with the April, August, and December 2005 quote dates are included. In figures 9.11 - 9.13 the top (resp. bottom) rows consist of the term structures of the first four central moments pertaining to the VG (resp. NIG) family of processes. Notable is the signature linear dependence of mean and variance on maturity for the L´evy processes. Furthermore, the absolute values of skewness and kurtosis decrease in a manner consistent 1 1 with the well-known √t dependence for skewness and t dependence for kurtosis. For the set of parameters chosen by the market, the skewness and kurtosis of the subordinated additive models are monotonic with respect to maturity with values lying in a smaller interval than that pertaining to the L´evy process. Furthermore, the skewness and kurtosis term structures of these models have concavities of opposite sign to that of the corresponding term structures for the L´evy processes. The selfsimilar processes have a rather striking constant skewness and kurtosis. Invari- ance with respect to maturity is due to the fact that the characteristic function is obtained by a composition of a characteristic function at time one with the map, ξ tH ξ. When 7→ the chain rule is applied to the composition, the monomial term yields a factor of tH . Consequently, the moment number is matched by the same number of factors of tH , thereby allowing cancellation in the calculation of skewness and kurtosis. The stability of the parameters for each model over time is denoted by the sample standard deviations of the corresponding time-averaged parameter estimates over the twelve quote dates. In Tables 7.8-7.10 the standard deviations are included in parentheses.

In the case of the NHIG model, the standard deviations of the β and γ parameters were large compared to the same parameters for the other NIG additive models. These large

48 Table 7.8: Variance Gamma Models: Twelve Quote Date Time-average of the Risk-Neutral Parameters (Jan. - Dec. 2005): “( )” denotes the sample standard deviation of the mean.

parameter VG VHG HssVG VG-OU-Gamma µ -0.131 (0.034) -0.132 (0.030) -0.094 (0.015) -0.147 (0.055) σ 0.118 (0.006) 0.107 (0.010) 0.132 (0.007) 0.113 (0.01) ν 0.312 (0.079) 0.996 (0.229) 1.023 (0.163) 0.091 (0.036) H 0.935 (0.085) 0.622 (0.028) λ 4.173 (1.811) a 0.466 (0.088) b 0.502 (0.190)

Table 7.9: Normal Inverse Gaussian Models: Twelve Quote Date Time-average of the Risk- Neutral Parameters (Jan. - Dec. 2005): “( )” denotes the sample standard deviation of the mean.

parameter NIG NHIG HssNIG NIG-OU-Gamma β -8.379 (2.588) -25.914 (18.622) -6.884 (1.862) -11.935 (3.529) δ 0.189 (0.035) 0.111 (0.011) 0.138 (0.016) 0.320 (0.063) γ 13.251 (2.015) 14.479 (6.171) 8.580 (1.112) 25.692 (5.665) H 0.874 (0.116) 0.622 (0.027) λ 2.719 (1.628) a 0.589 (0.180) b 0.591 (0.221)

standard deviations were due to the relatively large estimates of β and γ for the September, October, and November 2005 quote dates. The time-averaged parameter estimates and sample standard deviations over these quote dates are found in Table 7.10. Not only did the NHIG model calibrate with large variations in parameter estimates, but the error surface associated with the NHIG model had multiple local minima. Such behavior was not observed in the case of the selfsimilar NIG model, where the relative simplicity of the skewness and kurtosis appears to have been beneficial with regard to parameter stability.

49 Table 7.10: Normal H-Inverse Gaussian Model: Three Quote Date Time-average of the Risk-Neutral Parameters (Sep. - Nov. 2005): “( )” denotes the sample standard deviation of the mean.

parameter NHIG β -52.6 (4.2) δ 0.109 (0.010) γ 22.73 (0.89) H 0.729 (0.039)

50 CHAPTER 8

Conclusion

Two classes of additive processes have been developed and their respective characteristic functions and time-dependent L´evy characteristics derived. Models based on the Variance Gamma process and Normal Inverse Gaussian process were constructed and used to model the logarithm of the risk-neutral underlier of European exercised options. Out-of-the-money SPX spot options from the year 2005 were used to obtain the risk-neutral parameters for the underlier. Not too surprisingly, the subordinated additive models usually had lower in- sample and one-day-ahead out-of-sample APE and RMSE errors than did the corresponding L´evy models. In like manner, it was found that the selfsimilar models had lower in-sample and out-of-sample pricing errors than the corresponding subordinated additive models. The most successful model was the selfsimilar NIG process, HssNIG, which outperformed both of the six-parameter L´evy stochastic volatility models, VG-OU-Γ and NIG-OU-Γ, in addition to both of the subordinated additive models. Furthermore, the stability of the parameters of the selfsimilar NIG model, as measured by the sample standard deviation of the mean parameter estimate, was noticeably better than that of its subordinated additive counterpart. Finally, it is interesting to note that whether the form of the time one distribution was held in common between two models, as in the NHIG vs. HssNIG comparison, or the manner of time evolution of the distribution was held in common, as in the HssVG vs. HssNIG comparison, the pricing error associated with the HssNIG model was significantly less than that of the other model. 8.1 Future Research

Although the selfsimilar models performed better than the subordinated models for SPX spot options, there may be other markets for which the subordinated additive models yield lower

51 pricing errors. Consequently, a next step for future research is to perform the same analysis on options in other markets. An idea suggested by CGMY[7] was to model the parameters of the selfsimilar additive model with a stochastic process. It would be very interesting to study how the time-dependent L´evy densities varied with the spot parameters over time. Yet another area of research is to price exotics using these models in order to study their path dependent properties, similar to the kind of study previously done by Wim Schoutens [27]. It would also be interesting to derive the relative entropy functions, the Kullback-Leibler distances, for these models and perform calibrations using a relative entropy regularization as discussed by Cont [12]. Another research path is to incorporate stochastic volatility and additivity into one model as mentioned in a conversation with Ernst Eberlein in 2004.

52 CHAPTER 9

Plots

53 iue91 pe onso hoeia vrg rcn ro i-ape out-of-the [in-sample, Error Pricing Average Theoretical on Bounds options] Upper money 9.1: Figure Average Pricing Error Bound Comparison: In−Sample OutMoneys VG NIG 15 VHG NHIG HssVG HssNIG 10 VG−OU−Gamma NIG−OU−Gamma Market

APE Bound [%] 5

0 Jan05 Feb05 Mar05 Apr05 May05 Jun05 Quote Date

54 Average Pricing Error Bound Comparison: In−Sample OutMoneys VG NIG 15 VHG NHIG HssVG HssNIG 10 VG−OU−Gamma NIG−OU−Gamma Market

APE Bound [%] 5

0 Jul05 Aug05 Sep05 Oct05 Nov05 Dec05 Quote Date - iue92 pe onso hoeia vrg rcn ro oto-ape out- [out-of-sample, Error Pricing Average Theoretical options] on money Bounds Upper 9.2: Figure Average Pricing Error Bound Comparison: Out−of−Sample OutMoneys VG NIG 15 VHG NHIG HssVG 10 HssNIG VG−OU−Gamma NIG−OU−Gamma Market

APE Bound [%] 5

0 Jan05 Feb05 Mar05 Apr05 May05 Jun05 Quote Date

55 Average Pricing Error Bound Comparison: Out−of−Sample OutMoneys VG NIG 15 VHG NHIG HssVG 10 HssNIG VG−OU−Gamma NIG−OU−Gamma Market

APE Bound [%] 5

0 Jul05 Aug05 Sep05 Oct05 Nov05 Dec05 Quote Date of-the- iue93 pe onso hoeia otMa qaeErr[nsml,out-of- [in-sample, Error Square Mean Root Theoretical on options] Bounds money Upper 9.3: Figure Root Mean Square Pricing Error Bound Comparison: In−Sample OutMoneys 3 VG NIG 2.5 VHG NHIG 2 HssVG HssNIG 1.5 VG−OU−Gamma NIG−OU−Gamma Market 1 RMSE Bound [$] 0.5

0 Jan05 Feb05 Mar05 Apr05 May05 Jun05 Quote Date

56 Root Mean Square Pricing Error Bound Comparison: In−Sample OutMoneys 3 VG NIG 2.5 VHG NHIG 2 HssVG HssNIG 1.5 VG−OU−Gamma NIG−OU−Gamma Market 1 RMSE Bound [$] 0.5

0 Jul05 Aug05 Sep05 Oct05 Nov05 Dec05 Quote Date the- iue94 pe onso hoeia otMa qaeErr[u-fsml,o [out-of-sample, Error Square Mean Root Theoretical on options] Bounds the-money Upper 9.4: Figure Root Mean Square Pricing Error Bound Comparison: Out−of−Sample OutMoneys 3 VG NIG 2.5 VHG NHIG 2 HssVG HssNIG 1.5 VG−OU−Gamma NIG−OU−Gamma 1 Market RMSE Bound [$] 0.5

0 Jan05 Feb05 Mar05 Apr05 May05 Jun05 Quote Date

57 Root Mean Square Pricing Error Bound Comparison: Out−of−Sample OutMoneys 3 VG NIG 2.5 VHG NHIG 2 HssVG HssNIG 1.5 VG−OU−Gamma NIG−OU−Gamma 1 Market RMSE Bound [$] 0.5

0 Jul05 Aug05 Sep05 Oct05 Nov05 Dec05 Quote Date ut-of- iue95 aktadMdlOto rcsv.Mnyes(/)frVrac G Variance for (K/S) Moneyness options out-of-the-money vs. date, Prices quote 2005 Option April Model : and family Market 9.5: Figure VG: 12−Apr−2005 APE = 11.05% VG−OU−Gamma: 12−Apr−2005 APE = 8.00% 28 days 60 60 47 days 110 days 50 50 174 days 235 days 40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S]

58 VHG: 12−Apr−2005 APE = 9.93% HssVG: 12−Apr−2005 APE = 7.24%

60 60

50 50

40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S] amma iue96 aktadMdlOto rcsv.Mnyes(/)frVrac G Variance for (K/S) Moneyness options out-of-the-money vs. date, Prices quote 2005 Option August Model : and family Market 9.6: Figure VG: 09−Aug−2005 APE = 14.23% VG−OU−Gamma: 09−Aug−2005 APE = 5.08% 27 days 60 60 52 days 91 days 50 50 152 days 215 days 40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S]

59 VHG: 09−Aug−2005 APE = 7.41% HssVG: 09−Aug−2005 APE = 5.97%

60 60

50 50

40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S] amma iue97 aktadMdlOto rcsv.Mnyes(/)frVrac G Variance for (K/S) Moneyness options out-of-the-money vs. date, Prices quote 2005 Option December Model : and family Market 9.7: Figure VG: 13−Dec−2005 APE = 16.01% VG−OU−Gamma: 13−Dec−2005 APE = 6.20% 70 70 25 days 45 days 60 60 64 days 127 days 50 50 190 days 40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S]

60 VHG: 13−Dec−2005 APE = 7.89% HssVG: 13−Dec−2005 APE = 7.09% 70 70

60 60

50 50

40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S] amma iue98 aktadMdlOto rcsv.Mnyes(/)frNra Inv Normal for (K/S) Moneyness options out-of-the-money vs. date, Prices quote 2005 Option April Model : and family Gaussian Market 9.8: Figure NIG: 12−Apr−2005 APE = 10.43% NIG−OU−Gamma: 12−Apr−2005 APE = 5.96% 28 days 60 60 47 days 110 days 50 50 174 days 235 days 40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S]

61 NHIG: 12−Apr−2005 APE = 8.34% HssNIG: 12−Apr−2005 APE = 4.67%

60 60

50 50

40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S] erse iue99 aktadMdlOto rcsv.Mnyes(/)frNra Inv Normal for (K/S) Moneyness options out-of-the-money vs. date, Prices quote 2005 Option August Model : and family Gaussian Market 9.9: Figure NIG: 09−Aug−2005 APE = 12.54% NIG−OU−Gamma: 09−Aug−2005 APE = 4.85% 27 days 60 60 52 days 91 days 50 50 152 days 215 days 40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S]

62 NHIG: 09−Aug−2005 APE = 6.28% HssNIG: 09−Aug−2005 APE = 3.57%

60 60

50 50

40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S] erse iue91:Mre n oe pinPie s oens KS o omlIn Normal for (K/S) Moneyness options out-of-the-money date, vs. quote Prices 2005 Option December Model : family and Gaussian Market 9.10: Figure NIG: 13−Dec−2005 APE = 14.56% NIG−OU−Gamma: 13−Dec−2005 APE = 6.10% 70 70 25 days 45 days 60 60 64 days 127 days 50 50 190 days 40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S]

63 NHIG: 13−Dec−2005 APE = 6.80% HssNIG: 13−Dec−2005 APE = 5.33% 70 70

60 60

50 50

40 40

30 30

20 20

10 10

*: model, o: market option price [$] 0 *: model, o: market option price [$] 0 0.7 0.8 0.9 1 1.1 1.2 0.7 0.8 0.9 1 1.1 1.2 moneyness [K/S] moneyness [K/S] verse oes-tprw[I oes-bto row] bottom - quo models 2005 April row][NIG for top Kurtosis - Skewness, Variance, models Mean, Risk-neutral 9.11: Figure Mean( ln( S )) vs. Maturity Variance( ln( S )) vs. Maturity Skewness( ln( S )) vs. Maturity Kurtosis( ln( S )) vs. Maturity t t t t quote date: Apr05 quote date: Apr05 quote date: Apr05 quote date: Apr05 7.0815 0.025 −0.5 16 VG VHG 7.081 14 0.02 HssVG −1 7.0805 VG−OU−Gamma 12 7.08 0.015 −1.5 10 7.0795 0.01 8 7.079 −2 0.005 7.0785 6

7.078 0 −2.5 4 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs]

64 Mean( ln( S )) vs. Maturity Variance( ln( S )) vs. Maturity Skewness( ln( S )) vs. Maturity Kurtosis( ln( S )) vs. Maturity t t t t quote date: Apr05 quote date: Apr05 quote date: Apr05 quote date: Apr05 7.081 0.03 −0.5 30 NIG NHIG 7.0805 0.025 −1 25 HssNIG 7.08 NIG−OU−Gamma 0.02 7.0795 −1.5 20 0.015 7.079 −2 15 0.01 7.0785 −2.5 10 7.078 0.005

edt [VG date te 7.0775 0 −3 5 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs] V oes-tprw[I oes-bto row] q bottom 2005 - August models for row][NIG Kurtosis top Skewness, - Variance, models Mean, [VG Risk-neutral 9.12: Figure Mean( ln( S )) vs. Maturity Variance( ln( S )) vs. Maturity Skewness( ln( S )) vs. Maturity Kurtosis( ln( S )) vs. Maturity t t t t quote date: Aug05 quote date: Aug05 quote date: Aug05 quote date: Aug05 7.123 0.025 −0.5 18 VG 7.122 16 VHG 0.02 HssVG −1 7.121 14 VG−OU−Gamma

7.12 0.015 12 −1.5 7.119 0.01 10 7.118 8 −2 0.005 7.117 6

7.116 0 −2.5 4 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs]

65 Mean( ln( S )) vs. Maturity Variance( ln( S )) vs. Maturity Skewness( ln( S )) vs. Maturity Kurtosis( ln( S )) vs. Maturity t t t t quote date: Aug05 quote date: Aug05 quote date: Aug05 quote date: Aug05 7.123 0.025 −1 35 NIG NHIG 7.122 30 0.02 −1.5 HssNIG 7.121 NIG−OU−Gamma 25 7.12 0.015 −2 20 7.119 0.01 −2.5 15 7.118 0.005 −3 7.117 10

7.116 0 −3.5 5 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 oedate uote Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs] V oes-tprw[I oes-bto row] bottom - models row][NIG Decembe top for - Kurtosis models Skewness, Variance, [VG Mean, Risk-neutral 9.13: Figure Mean( ln( S )) vs. Maturity Variance( ln( S )) vs. Maturity Skewness( ln( S )) vs. Maturity Kurtosis( ln( S )) vs. Maturity t t t t quote date: Dec05 quote date: Dec05 quote date: Dec05 quote date: Dec05 7.156 0.02 −0.4 11 VG −0.6 10 VHG 7.154 HssVG 0.015 −0.8 9 VG−OU−Gamma

7.152 −1 8 0.01 7.15 −1.2 7 −1.4 6 0.005 7.148 −1.6 5

7.146 0 −1.8 4 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs]

66 Mean( ln( S )) vs. Maturity Variance( ln( S )) vs. Maturity Skewness( ln( S )) vs. Maturity Kurtosis( ln( S )) vs. Maturity t t t t quote date: Dec05 quote date: Dec05 quote date: Dec05 quote date: Dec05 7.156 0.02 −0.5 25 NIG NHIG 7.154 HssNIG 0.015 −1 20 NIG−OU−Gamma 7.152 0.01 −1.5 15 7.15

0.005 −2 10 7.148 05qoedate quote 2005 r

7.146 0 −2.5 5 0 0.5 1 0 0.5 1 0 0.5 1 0 0.5 1 Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs] Mauturity (t) [yrs] APPENDIX A

Truncation and Sampling Error: Lee’s Theorems

The truncation and sampling error which arise from the discrete Fourier transform method of pricing a European exercised call option are derived by Lee [17]. Following Lee’s convention, given a R valued random variable, X, let A = ν R : EeνX < . Define − X ∈ ∞ Λ = ζ C : (ζ) A and let f : Λ C denote the discounted characteristic X { ∈ −ℑ ∈ X } X →  rT iζX function where f (ζ)= E e− e . Let C : R R, the price of a European exercised call → k option of strike, e , be given by  αk e− ∞ iuk C (k)= e− c (u) du, π α Z0 where cα is the inverse Fourier transform of the dampedb call price for some α > 0 with

α +1 AX . ∈b Proposition 41 (Theorem 6.1 [17]) Let 1 A and α be any real number such that ∈ X α +1 A . If f is such that c decays as a power c (u) Φ(u) /u1+γ for all u > u , and ∈ X α | α |≤ 0 some γ > 0 with Φ being a decreasing function of u, then the truncation error associated b b N with the price of a European exercised call option, ∞ (k) (k) , satisfies −

N PΦ(N∆)P ∞ (k) (k) , − ≤ πeαkγ [N∆]γ

X X provided N∆ > u0.

γu If f is such that c decays exponentially c (u) Φ(u) e− for all u > u , where α | α | ≤ 0 γ > 0 and Φ (u) is decreasing in u, then the truncation error associated with the price of a b N b European exercised call option, ∞ (k) (k) , satisfies −

P N P ∆Φ (N∆) ∞ (k) (k) , − ≤ 2πeαk+γN∆ sinh (γ∆/2)

X X 67 provided N∆ > u0.

Proposition 42 (Theorem 6.2 [17]) the sampling error, C ∞ , satisfies, (α> 0) | − α | 2π[α p] P 2πα − p ∞ exp f ( i) exp ∆t f ( i [p + 1]) ∆t − p C inf − 4πα− + . p>α: p+1 AX   4π[α p] p +1 − ≤ ∈  1 exp ∆t exp(pk)[p + 1] 1 exp −  α − − ∆t   X −   h  i 

A.1 Bounds on the Discounted Characteristic Function

Given a R-valued semi-martingale X , let A = ν R : EeνXt < and define { t} Xt ∈ ∞ Λ = ζ C : (ζ) A . Xt { ∈ −ℑ ∈ Xt } 

Theorem 43 Let the discounted characteristic function, fXt , of the distribution at time t of the logarithm of the risk-neutral underlier which follows an exponential additive process, X be given by { t}

rt f (ξ)= e− exp iξ [ln (S )+(r q) t ln (φ ( i ))] φ (ξ) , Xt { 0 − − Xt − } Xt for ξ Λ . Define g : A ( , 0) R with ∈ Xt Xt − Xt ∩ −∞ → g (z) = exp( z [ln (S )+(r q) t ln (φ ( i))]) . Xt − 0 − − Xt −

If X is a Variance Gamma process with parameters satisfying { t} µ < 0 σ,ν > 0, and if ξ Λ with x (ξ) 0 and z (ξ) < 0, it follows that the discounted ∈ Xt ≡ ℜ ≥ ≡ ℑ characteristic function, fXt , satisfies

t ν rt 20 2t/ν f (ξ x + iz) e− g (z) x − VGt[µ,σ,ν] ≡ ≤ VGt 9σ2ν | |   provided x > √10 z . | | 68 If X is a selfsimilar Variance Gamma process with parameters satisfying { t} µ < 0 σ,ν,H > 0, and if ξ Λ with x (ξ) 0 and z (ξ) < 0, it follows that the discounted ∈ Xt ≡ ℜ ≥ ≡ ℑ characteristic function, fXt , satisfies

1 ν rt 20 2/ν f (ξ x + iz) e− g (z) x − HssV Gt[µ,σ,ν,H] ≡ ≤ HssV Gt 9σ2t2H ν | |   provided x > √10 z . | | If X is a Variance H-Gamma process with parameters satisfying { t} µ < 0 σ,ν,H > 0, and if ξ Λ with x (ξ) 0 and z (ξ) < 0, it follows that the discounted ∈ Xt ≡ ℜ ≥ ≡ ℑ characteristic function, fXt , satisfies

1 ν rt 20 2/ν f (ξ x + iz) e− g (z) x − VHGt[µ,σ,ν,H] ≡ ≤ VHGt 9σ2tH ν | |   provided x > √10 z . | | If X is a Normal Inverse Gaussian process with parameters satisfying { t} β < 0 δ,γ > 0, and if ξ Λ with x (ξ) 0 and z (ξ) < 0, it follows that the discounted ∈ Xt ≡ ℜ ≥ ≡ ℑ characteristic function, fXt , satisfies

rt δtγ 3 fNIG[β,δ,γ] (ξ x + iz) e− g (z) e exp δt x ≡ ≤ −r8 | |!

4 provided x > max √2 z , z β . | | √3 | − |  

69 If X is a selfsimilar Normal Inverse Gaussian process with parameters satisfying { t} β < 0 δ,γ,H > 0, and if ξ Λ with x (ξ) 0 and z (ξ) < 0, it follows that the discounted ∈ Xt ≡ ℜ ≥ ≡ ℑ characteristic function, fXt , satisfies

rt δγ 3 H fHssNIG[β,δ,γ,H] (ξ x + iz) e− g (z) e exp δt x ≡ ≤ −r8 | |!

4 H provided x > max √2 z , z βt− . | | √3 −  

If X is a Normal H-Inverse Gaussian process with parameters satisfying { t} β < 0 δ,γ,H > 0, and if ξ Λ with x (ξ) 0 and z (ξ) < 0, it follows that the discounted ∈ Xt ≡ ℜ ≥ ≡ ℑ characteristic function, fXt , satisfies

rt δγ 3 H fNHIG[β,δ,γ,H] (ξ x + iz) e− g (z) e exp δt 2 x ≡ ≤ −r8 | |!

4 provided x > max √2 z , z β . | | √3 | − |  

Theorem 44 Let X be a VG-OU-Γ process with parameters satisfying { t}

C, G, M, λ, a, b, y0 > 0.

Define A = ν R : EeνXt < and Λ = ζ C : (ζ) A . The discounted Xt ∈ ∞ Xt { ∈ −ℑ ∈ Xt } characteristic function of the logarithm of the risk-neutral underlier, f : Λ C, is given  Xt → by

rt f (ξ) = e− exp iξ [ln (S0)+(r q) t ln (φOU Γt ( i log (φVG1 ( i))))] { − − − − − }

φOU Γt ( i log (φVG1 (ξ))) , · − −

70 where ξ Λ with x (ξ) 0 and z (ξ) < 0. Define ∈ Xt ≡ ℜ ≥ ≡ ℑ bλ λab 2 ϑ = max 1, , , √2λabC 1 e λt ln (2) ( − −   ) and h : A ( , 0) R by − Xt ∩ −∞ →

h (z) = exp z [ln (S0)+(r q) t ln (φOU Γt ( i log (φVG1 ( i))))] . {− − − − − − } The discounted characteristic function, f, satisfies

λt a − 3a 1−e λt rt +2 1 e− 10 GM Cy0 f (ξ x + iz) e− h (z) 2 2 1+ − λ | ≡ | ≤ · · bλ 9 x2     M G +2z + 2GM + z (M G) + z2, | − | | − |  p  if x > max 10(z2 + z (M G) ), .  | − |     p  GM (eϑ/C 1) + z (M G)+ z2  | − − |     p  Theorem 45 Let X be a NIG-OU-Γ process with parameters satisfying { t} β < 0

δ,γ,λ,a,b,y0 > 0.

Define A = ν R : EeνXt < and Λ = ζ C : (ζ) A . The discounted Xt ∈ ∞ Xt { ∈ −ℑ ∈ Xt } characteristic function, f : Λ C, of the logarithm of the risk-neutral underlier is given  Xt → by

rt f (ξ) = e− exp iξ [ln (S0)+(r q) t ln (φOU Γt ( i log (φNIG1 ( i))))] { − − − − − }

φOU Γt ( i log (φNIG1 (ξ))) , · − − where ξ Λ with x (ξ) 0 and z (ξ) < 0. Define ∈ Xt ≡ ℜ ≥ ≡ ℑ bλ λab 2 ϑ = max 1, , , 2λab 1 e λt ln (2) ( − −   ) and h : A ( , 0) R by − Xt ∩ −∞ →

h (z) = exp z [ln (S0)+(r q) t ln (φOU Γt ( i log (φNIG1 ( i))))] . {− − − − − − } 71 The discounted characteristic function, f, satisfies

λt a rt 3a +2 1 e− f (ξ x + iz) e− h (z) 2 2 1+ − | ≡ | ≤ · · bλ   δγy0 λt δy0 λt 3 exp 1 e− exp 1 e− x · λ − · − λ − 8 | |   " r # !     2 z2 + 4 γ + 1 ϑ γ2, 3 δ −  q    7γ2 + z2,    if x > max   .  p   √2 z ,  | |  4   √ z β   3 | − |     

72 APPENDIX B

Plots of Difference between Calculated Pricing Error and Upper Bound on Theoretical Pricing Error

Figure B.1: In-sample Average Pricing Error Differences (Upper Bound on Theoretical Value minus Calculated Value, using out-of-the-money options). Two panels, January-June 2005 and July-December 2005, plot log10(difference) against quote date for the VG, NIG, VHG, NHIG, HssVG, HssNIG, VG-OU-Gamma, and NIG-OU-Gamma models.

Figure B.2: Out-of-sample Average Pricing Error Differences (Upper Bound on Theoretical Value minus Calculated Value, using out-of-the-money options); same panel layout and models as Figure B.1.

Figure B.3: In-sample Root Mean Square Error Differences (Upper Bound on Theoretical Value minus Calculated Value, using out-of-the-money options); same panel layout and models as Figure B.1.

Figure B.4: Out-of-sample Root Mean Square Error Differences (Upper Bound on Theoretical Value minus Calculated Value, using out-of-the-money options); same panel layout and models as Figure B.1.

APPENDIX C

Additive Models: Discounted Characteristic Function Bounds for the European Call Option

Let $\{X_t\}$ be an $\mathbb{R}$-valued additive process and define $A_{X_t}=\{\nu\in\mathbb{R}:Ee^{\nu X_t}<\infty\}$ and $\Lambda_{X_t}=\{\zeta\in\mathbb{C}:-\Im(\zeta)\in A_{X_t}\}$. The discounted characteristic function of the logarithm of the risk-neutral underlier, $f:\Lambda_{X_t}\to\mathbb{C}$, is given by
\[
f(\xi)=e^{-rt}\exp\{i\xi[\ln(S_{0})+(r-q)t-\ln(\phi_{X_t}(-i))]\}\cdot\phi_{X_t}(\xi).
\]
Let $\xi\in\Lambda_{X_t}$ with $x\equiv\Re(\xi)\ge 0$ and $z\equiv\Im(\xi)<0$. Define $h:-A_{X_t}\cap(-\infty,0)\to\mathbb{R}$ with
\[
h(z)=\exp\{-z[\ln(S_{0})+(r-q)t-\ln(\phi_{X_t}(-i))]\}.
\]

C.1 Variance Gamma Family

If $\{X_t\}$ is a Variance Gamma process with parameters satisfying $\mu<0$ and $\sigma,\nu>0$, then a bound on the discounted characteristic function is given by
\begin{align*}
|f_{VG}(\xi\equiv x+iz)| &= e^{-rt}h(z)\left|\phi_{X_t}(x+iz)\right|
= e^{-rt}h(z)\left|1-i[x+iz]\mu\nu+\tfrac{1}{2}\sigma^{2}\nu[x+iz]^{2}\right|^{-\frac{t}{\nu}}\\
&\le e^{-rt}h(z)\left[\tfrac{1}{2}\sigma^{2}\nu x^{2}\left(1-\tfrac{z^{2}}{x^{2}}\right)\right]^{-\frac{t}{\nu}}\quad\text{if }x>|z|\\
&= e^{-rt}h(z)\left[\tfrac{9}{20}\sigma^{2}\nu\right]^{-\frac{t}{\nu}}x^{-\frac{2t}{\nu}},
\end{align*}
where $x>\sqrt{10}\,|z|$.

If $\{X_t\}$ is a selfsimilar Variance Gamma process with parameters satisfying $\mu<0$ and $\sigma,\nu,H>0$, then
\begin{align*}
|f_{ssVG}(\xi\equiv x+iz)| &= e^{-rt}h(z)\left|\phi_{X_1}\!\left(t^{H}(x+iz)\right)\right|
= e^{-rt}h(z)\left|1-it^{H}[x+iz]\mu\nu+\tfrac{1}{2}\sigma^{2}\nu t^{2H}[x+iz]^{2}\right|^{-\frac{1}{\nu}}\\
&\le e^{-rt}h(z)\left[\tfrac{1}{2}\sigma^{2}\nu t^{2H}x^{2}\left(1-\tfrac{z^{2}}{x^{2}}\right)\right]^{-\frac{1}{\nu}}\quad\text{if }x>|z|\\
&= e^{-rt}h(z)\left[\tfrac{9}{20}\sigma^{2}\nu t^{2H}\right]^{-\frac{1}{\nu}}x^{-\frac{2}{\nu}},
\end{align*}
where $x>\sqrt{10}\,|z|$.

If $\{X_t\}$ is a Variance H-Gamma process with parameters satisfying $\mu<0$ and $\sigma,\nu,H>0$, then
\begin{align*}
|f_{VHG}(\xi\equiv x+iz)| &= e^{-rt}h(z)\left|\phi_{X_t}(x+iz)\right|
= e^{-rt}h(z)\left|1-it^{H}[x+iz]\mu\nu+\tfrac{1}{2}\sigma^{2}\nu t^{H}[x+iz]^{2}\right|^{-\frac{1}{\nu}}\\
&\le e^{-rt}h(z)\left[\tfrac{1}{2}\sigma^{2}\nu t^{H}x^{2}\left(1-\tfrac{z^{2}}{x^{2}}\right)\right]^{-\frac{1}{\nu}}\quad\text{if }x>|z|\\
&= e^{-rt}h(z)\left[\tfrac{9}{20}\sigma^{2}\nu t^{H}\right]^{-\frac{1}{\nu}}x^{-\frac{2}{\nu}},
\end{align*}
where $x>\sqrt{10}\,|z|$.
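
All three bounds above decay polynomially in $x$. As a quick numerical illustration (not part of the original derivation), the following Python sketch evaluates the VG discounted characteristic function and the corresponding bound at a few points; every parameter value ($S_0$, $r$, $q$, $t$, $\mu$, $\sigma$, $\nu$, and the damping $z$) is a hypothetical choice satisfying the stated sign restrictions.

```python
import numpy as np

# Illustrative (hypothetical) parameters only; mu < 0, sigma, nu > 0, z < 0.
S0, r, q, t = 100.0, 0.04, 0.01, 0.5
mu, sigma, nu = -0.1, 0.2, 0.3

def phi_vg(u, t):
    """VG characteristic function E[exp(i*u*X_t)], complex u allowed."""
    return (1 - 1j * u * mu * nu + 0.5 * sigma**2 * nu * u**2) ** (-t / nu)

# Mean correction so that exp(X_t) carries the risk-neutral drift.
m = np.log(S0) + (r - q) * t - np.log(phi_vg(-1j, t)).real

def f_disc(xi):
    """Discounted characteristic function of the log risk-neutral underlier."""
    return np.exp(-r * t) * np.exp(1j * xi * m) * phi_vg(xi, t)

def h(z):
    return np.exp(-z * m)

# Check |f(x+iz)| <= e^{-rt} h(z) (9 sigma^2 nu / 20)^{-t/nu} x^{-2t/nu}.
z = -1.5                       # damping, z = Im(xi) < 0
for x in [10.0, 50.0, 200.0]:  # all satisfy x > sqrt(10)*|z|
    lhs = abs(f_disc(x + 1j * z))
    rhs = np.exp(-r * t) * h(z) * (9 * sigma**2 * nu / 20) ** (-t / nu) * x ** (-2 * t / nu)
    print(f"x={x:6.1f}  |f|={lhs:.3e}  bound={rhs:.3e}  ok={lhs <= rhs}")
```

The same script, with the obvious changes to the characteristic function and exponent, applies to the selfsimilar VG and VHG cases.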

C.2 Normal Inverse Gaussian Family

If $\{X_t\}$ is a Normal Inverse Gaussian process with parameters satisfying $\beta<0$ and $\delta,\gamma>0$, then
\begin{align*}
|f_{NIG}(\xi\equiv x+iz)| &= e^{-rt}h(z)\left|\phi_{X_t}(x+iz)\right|
= e^{-rt}h(z)\left|\exp\!\left(-\delta t\left[\sqrt{[x+iz]^{2}-2i\beta[x+iz]+\gamma^{2}}-\gamma\right]\right)\right|\\
&= e^{-rt}h(z)\,e^{\gamma\delta t}\left|\exp\!\left(-\delta t\sqrt{x^{2}-z^{2}+2\beta z+\gamma^{2}+2ix(z-\beta)}\right)\right|\\
&= e^{-rt}h(z)\,e^{\gamma\delta t}\exp\!\left(-\delta t\left|x^{2}-z^{2}+2\beta z+\gamma^{2}+2ix(z-\beta)\right|^{\frac{1}{2}}\cos\!\left(\tfrac{1}{2}\arg\!\left(x^{2}-z^{2}+2\beta z+\gamma^{2}+2ix(z-\beta)\right)\right)\right).
\end{align*}
If $x>|z|$, then $\arg\!\left(x^{2}-z^{2}+2\beta z+\gamma^{2}+2ix(z-\beta)\right)\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. Bounding $|\arg(\cdot)|$ by $\frac{\pi}{3}$ requires
\[
\frac{2x|z-\beta|}{x^{2}-z^{2}+2\beta z+\gamma^{2}}\le\sqrt{3}
\quad\Leftarrow\quad \frac{4|z-\beta|}{x}\le\sqrt{3}\ \text{ if }x>\sqrt{2}\,|z|.
\]
Consequently, if $x\ge\max\left\{\sqrt{2}\,|z|,\ \frac{4|z-\beta|}{\sqrt{3}}\right\}$, then
\begin{align*}
|f_{NIG}(\xi\equiv x+iz)| &\le e^{-rt}h(z)\,e^{\gamma\delta t}\exp\!\left(-\frac{\sqrt{3}}{2}\,\delta t\left|x^{2}-z^{2}+2\beta z+\gamma^{2}+2ix(z-\beta)\right|^{\frac{1}{2}}\right)\\
&\le e^{-rt}h(z)\,e^{\gamma\delta t}\exp\!\left(-\frac{\sqrt{3}}{2}\,\delta t\left(x^{2}-z^{2}\right)^{\frac{1}{2}}\right)\\
&\le e^{-rt}h(z)\,e^{\gamma\delta t}\exp\!\left(-\sqrt{\tfrac{3}{8}}\,\delta t\,|x|\right)\quad\text{since }x>\sqrt{2}\,|z|.
\end{align*}

If $\{X_t\}$ is a selfsimilar Normal Inverse Gaussian process with parameters satisfying $\beta<0$ and $\delta,\gamma,H>0$, then
\begin{align*}
|f_{ssNIG}(\xi\equiv x+iz)| &= e^{-rt}h(z)\left|\phi_{X_t}(x+iz)\right|
= e^{-rt}h(z)\left|\exp\!\left(-\delta\left[\sqrt{t^{2H}[x+iz]^{2}-2i\beta t^{H}[x+iz]+\gamma^{2}}-\gamma\right]\right)\right|\\
&= e^{-rt}h(z)\,e^{\gamma\delta}\left|\exp\!\left(-\delta\sqrt{t^{2H}(x^{2}-z^{2})+2\beta z t^{H}+\gamma^{2}+2ix\left(zt^{2H}-\beta t^{H}\right)}\right)\right|\\
&= e^{-rt}h(z)\,e^{\gamma\delta}\exp\!\left(-\delta\left|t^{2H}(x^{2}-z^{2})+2\beta z t^{H}+\gamma^{2}+2ix\left(zt^{2H}-\beta t^{H}\right)\right|^{\frac{1}{2}}\cos\!\left(\tfrac{1}{2}\arg(\cdot)\right)\right).
\end{align*}
If $x>|z|$, then $\arg\!\left(t^{2H}(x^{2}-z^{2})+2\beta z t^{H}+\gamma^{2}+2ix\left(zt^{2H}-\beta t^{H}\right)\right)\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. Bounding $|\arg(\cdot)|$ by $\frac{\pi}{3}$ requires
\[
\frac{2x\,t^{2H}\left|z-\beta t^{-H}\right|}{t^{2H}(x^{2}-z^{2})+2\beta z t^{H}+\gamma^{2}}\le\sqrt{3}
\quad\Leftarrow\quad \frac{4\left|z-\beta t^{-H}\right|}{x}\le\sqrt{3}\ \text{ if }x>\sqrt{2}\,|z|.
\]
Consequently, if $x\ge\max\left\{\sqrt{2}\,|z|,\ \frac{4\left|z-\beta t^{-H}\right|}{\sqrt{3}}\right\}$, then
\begin{align*}
|f_{ssNIG}(\xi\equiv x+iz)| &\le e^{-rt}h(z)\,e^{\gamma\delta}\exp\!\left(-\frac{\sqrt{3}}{2}\,\delta\left|t^{2H}(x^{2}-z^{2})+2\beta z t^{H}+\gamma^{2}+2ix\left(zt^{2H}-\beta t^{H}\right)\right|^{\frac{1}{2}}\right)\\
&\le e^{-rt}h(z)\,e^{\gamma\delta}\exp\!\left(-\frac{\sqrt{3}}{2}\,\delta\left(t^{2H}(x^{2}-z^{2})\right)^{\frac{1}{2}}\right)\\
&\le e^{-rt}h(z)\,e^{\gamma\delta}\exp\!\left(-\sqrt{\tfrac{3}{8}}\,\delta t^{H}|x|\right)\quad\text{since }x>\sqrt{2}\,|z|.
\end{align*}

If $\{X_t\}$ is a Normal H-Inverse Gaussian process with parameters satisfying $\beta<0$ and $\delta,\gamma,H>0$, then
\begin{align*}
|f_{NHIG}(\xi\equiv x+iz)| &= e^{-rt}h(z)\left|\phi_{X_t}(x+iz)\right|
= e^{-rt}h(z)\left|\exp\!\left(-\delta\left[\sqrt{t^{H}[x+iz]^{2}-2i\beta t^{H}[x+iz]+\gamma^{2}}-\gamma\right]\right)\right|\\
&= e^{-rt}h(z)\,e^{\gamma\delta}\left|\exp\!\left(-\delta\sqrt{t^{H}\left(x^{2}-z^{2}+2\beta z\right)+\gamma^{2}+2ixt^{H}(z-\beta)}\right)\right|\\
&= e^{-rt}h(z)\,e^{\gamma\delta}\exp\!\left(-\delta\left|t^{H}\left(x^{2}-z^{2}+2\beta z\right)+\gamma^{2}+2ixt^{H}(z-\beta)\right|^{\frac{1}{2}}\cos\!\left(\tfrac{1}{2}\arg\!\left(t^{H}\left(x^{2}-z^{2}+2\beta z\right)+\gamma^{2}+2ixt^{H}(z-\beta)\right)\right)\right).
\end{align*}
If $x>|z|$, then $\arg\!\left(t^{H}\left(x^{2}-z^{2}+2\beta z\right)+\gamma^{2}+2ixt^{H}(z-\beta)\right)\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. Bounding $|\arg(\cdot)|$ by $\frac{\pi}{3}$ requires
\[
\frac{2x|z-\beta|}{x^{2}-z^{2}+2\beta z+\gamma^{2}t^{-H}}\le\sqrt{3}
\quad\Leftarrow\quad \frac{4|z-\beta|}{x}\le\sqrt{3}\ \text{ if }x>\sqrt{2}\,|z|.
\]
Consequently, if $x\ge\max\left\{\sqrt{2}\,|z|,\ \frac{4|z-\beta|}{\sqrt{3}}\right\}$, then
\begin{align*}
|f_{NHIG}(\xi\equiv x+iz)| &\le e^{-rt}h(z)\,e^{\gamma\delta}\exp\!\left(-\frac{\sqrt{3}}{2}\,\delta\left(t^{H}\left(x^{2}-z^{2}\right)\right)^{\frac{1}{2}}\right)\\
&\le e^{-rt}h(z)\,e^{\gamma\delta}\exp\!\left(-\sqrt{\tfrac{3}{8}}\,\delta t^{\frac{H}{2}}|x|\right)\quad\text{since }x>\sqrt{2}\,|z|.
\end{align*}
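
The NIG-family bounds decay exponentially in $x$. One practical use of such a bound, in the spirit of the error control of Appendix A, is to pick the truncation point of the Fourier pricing integral so that the tail mass of the bound falls below a tolerance. The sketch below does this for the plain NIG bound; it deliberately ignores the strike-dependent constants in Lee's inequality, and every numerical parameter value is an illustrative assumption.

```python
import numpy as np

# Hypothetical NIG parameters and damping z < 0, beta < 0.
S0, r, q, t = 100.0, 0.04, 0.01, 0.5
beta, delta, gamma = -3.0, 0.4, 5.0
z = -1.5

def phi_nig(u, t):
    """NIG characteristic function E[exp(i*u*X_t)]."""
    return np.exp(-delta * t * (np.sqrt(u**2 - 2j * beta * u + gamma**2) - gamma))

m = np.log(S0) + (r - q) * t - np.log(phi_nig(-1j, t)).real
h = np.exp(-z * m)

# Tail bound |f(x+iz)| <= K * exp(-c*x) for x beyond x_min (Section C.2 above).
K = np.exp(-r * t) * h * np.exp(gamma * delta * t)
c = np.sqrt(3.0 / 8.0) * delta * t
x_min = max(np.sqrt(2) * abs(z), 4 * abs(z - beta) / np.sqrt(3))

# Choose A so that the remaining tail mass of the bound,
# integral_A^inf K exp(-c*x) dx = (K/c) exp(-c*A), is below a tolerance.
tol = 1e-8
A = max(x_min, np.log(K / (c * tol)) / c)
print(f"integrate the pricing transform on [0, {A:.1f}]")
```

This is only a rough recipe showing how the decay rate enters; the truncation and sampling choices used in the empirical study follow Lee's theorems as stated in Appendix A.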

APPENDIX D

VG-OU-Γ Discounted Characteristic Function Bounds for the European Call Option

Let $\{X_t\}$ be a VG-OU-$\Gamma$ process with parameters satisfying
\[
C,\,G,\,M,\,\lambda,\,a,\,b,\,y_{0}>0.
\]
Define $A_{X_t}=\{\nu\in\mathbb{R}:Ee^{\nu X_t}<\infty\}$ and $\Lambda_{X_t}=\{\zeta\in\mathbb{C}:-\Im(\zeta)\in A_{X_t}\}$. The discounted characteristic function of the logarithm of the risk-neutral underlier, $f:\Lambda_{X_t}\to\mathbb{C}$, is given by
\[
f(\xi)=e^{-rt}\exp\{i\xi[\ln(S_{0})+(r-q)t-\ln(\phi_{OU\text{-}\Gamma_t}(-i\log(\phi_{VG_1}(-i))))]\}\cdot\phi_{OU\text{-}\Gamma_t}(-i\log(\phi_{VG_1}(\xi))).
\]
Let $\xi\in\Lambda_{X_t}$ with $x\equiv\Re(\xi)\ge 0$ and $z\equiv\Im(\xi)<0$ and define
\[
u=\Re\left(-i\log(\phi_{VG_1}(\xi))\right),\qquad w=\Im\left(-i\log(\phi_{VG_1}(\xi))\right).
\]
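
As a small numerical sketch (not part of the original text), the composite discounted characteristic function can be evaluated directly, assuming $\phi_{VG_1}$ in the $(C,G,M)$ form used just below and the Gamma-OU integrated-clock characteristic function in the form appearing later in this appendix; all numeric parameter values here are hypothetical.

```python
import numpy as np

# Hypothetical parameters, chosen only to exercise the formulas.
S0, r, q, t = 100.0, 0.04, 0.01, 0.5
C, G, M = 5.0, 20.0, 30.0            # VG(C, G, M) parameters
lam, a, b, y0 = 1.5, 1.0, 2.0, 1.0   # Gamma-OU clock parameters

def log_phi_vg1(u):
    """log of the VG characteristic function at unit time."""
    return C * np.log(G * M / (G * M + (M - G) * 1j * u + u**2))

def phi_ou_gamma(u):
    """Characteristic function of the integrated Gamma-OU clock at time t."""
    eps = (1 - np.exp(-lam * t)) / lam
    return np.exp(1j * u * y0 * eps
                  + lam * a / (1j * u - lam * b)
                  * (b * np.log(b / (b - 1j * u * eps)) - 1j * u * t))

def f_disc(xi):
    """Discounted characteristic function of the log risk-neutral underlier."""
    mart = np.log(phi_ou_gamma(-1j * log_phi_vg1(-1j)))
    m = np.log(S0) + (r - q) * t - mart.real
    return np.exp(-r * t) * np.exp(1j * xi * m) * phi_ou_gamma(-1j * log_phi_vg1(xi))

print(f_disc(0.0))            # should equal exp(-r*t), since phi(0) = 1
print(abs(f_disc(10.0 - 1.5j)))
```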

For the VG case u is given by

\begin{align*}
u &= \Re\left(-i\log(\phi_{VG_1}(x+iz))\right)\\
&= \Re\left(-iC\log\!\left(\frac{GM}{GM+[M-G]i[x+iz]+[x+iz]^{2}}\right)\right)\\
&= \Re\left(-iC\{\ln|\eta|+i\arg(\eta)\}\right),\quad\text{where }\eta\equiv\frac{GM}{GM+[M-G]i[x+iz]+[x+iz]^{2}}\\
&= C\arg(\eta).
\end{align*}

83 Similarly,

\begin{align*}
w &= \Im\left(-i\log(\phi_{VG_1}(x+iz))\right) = -C\ln|\eta|\\
&= -C\ln\left|\frac{GM}{GM+[M-G]i[x+iz]+[x+iz]^{2}}\right|\\
&= -C\ln\left|\frac{GM}{GM+[M-G][ix-z]+x^{2}-z^{2}+2ixz}\right|\\
&= -C\ln\!\left(\frac{GM}{\left|GM-z[M-G]+x^{2}-z^{2}+ix\{2z+[M-G]\}\right|}\right).
\end{align*}
Let $h:-A_{X_t}\cap(-\infty,0)\to\mathbb{R}$ with
\[
h(z)=\exp\{-z[\ln(S_{0})+(r-q)t-\ln(\phi_{OU\text{-}\Gamma_t}(-i\log(\phi_{VG_1}(-i))))]\}.
\]
The absolute value of the discounted characteristic function is given by

\begin{align*}
|f(\xi\equiv x+iz)| &= e^{-rt}h(z)\left|\phi_{OU\text{-}\Gamma_t}(u+iw)\right|\\
&= e^{-rt}h(z)\left|\exp\!\left(i[u+iw]\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(\frac{\lambda a}{i[u+iw]-\lambda b}\left[b\log\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-it[u+iw]\right]\right)\right|\\
&= e^{-rt}h(z)\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\left|\exp\!\left(\frac{\lambda a}{i[u+iw]-\lambda b}\left[b\log\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-it[u+iw]\right]\right)\right|.
\end{align*}

The term involving the complex logarithm is simplified as follows:
\begin{align*}
&\frac{\lambda a}{i[u+iw]-\lambda b}\left[b\log\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-it[u+iw]\right]\\
&\quad=\frac{-\lambda a}{(w+\lambda b)-iu}\left[b\ln\left|\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right|+ib\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-itu+tw\right]\\
&\quad=\frac{-\lambda a[(w+\lambda b)+iu]}{(w+\lambda b)^{2}+u^{2}}\left[\left(b\ln\left|\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right|+tw\right)+i\left(b\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-tu\right)\right]\\
&\quad=\frac{-\lambda a(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}\left[b\ln\left|\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right|+tw\right]
+\frac{\lambda a u}{(w+\lambda b)^{2}+u^{2}}\left[b\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-tu\right]
+ i\,(\text{real-valued terms}).
\end{align*}

Define A and B as follows:

\[
A=\frac{-\lambda a(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}\left[b\ln\left|\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right|+tw\right],\qquad
B=\frac{\lambda a u}{(w+\lambda b)^{2}+u^{2}}\left[b\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-tu\right].
\]
Consequently,

\[
|f(\xi\equiv x+iz)|=e^{-rt}h(z)\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp(A+B).
\]
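
As a numerical sanity check of the decomposition into $A$ and $B$ (added here for illustration, with hypothetical parameter values and an arbitrary sample point $(u,w)$), the sketch below compares $\ln\left|\phi_{OU\text{-}\Gamma_t}(u+iw)\right|$ computed directly with $-w\frac{y_{0}}{\lambda}(1-e^{-\lambda t})+A+B$.

```python
import numpy as np

# Hypothetical parameter values, only to check the A + B decomposition.
lam, a, b, y0, t = 1.5, 1.0, 2.0, 1.0, 0.5
eps = (1 - np.exp(-lam * t)) / lam

def phi_ou_gamma(u):
    """Characteristic function of the integrated Gamma-OU clock (complex u)."""
    return np.exp(1j * u * y0 * eps
                  + lam * a / (1j * u - lam * b)
                  * (b * np.log(b / (b - 1j * u * eps)) - 1j * u * t))

u, w = 0.7, 1.2          # sample values of Re and Im of -i log phi_VG1(xi)
eta = b / (b - 1j * (u + 1j * w) * eps)
D = (w + lam * b) ** 2 + u ** 2
A = -lam * a * (w + lam * b) / D * (b * np.log(abs(eta)) + t * w)
B = lam * a * u / D * (b * np.angle(eta) - t * u)

lhs = np.log(abs(phi_ou_gamma(u + 1j * w)))
rhs = -w * y0 * eps + A + B
print(lhs, rhs)          # the two values should agree to machine precision
```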

Claim 46. If $x\ge|M-G+2z|+\sqrt{z^{2}+|z(M-G)|}$, then $-1<\frac{u}{C}<1$.

Proof. Let $x>\sqrt{z^{2}+|z(M-G)|}$ so that $\theta\equiv\tan^{-1}\!\left(\frac{-x(M-G+2z)}{x^{2}-z^{2}+GM-z[M-G]}\right)\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. A lower bound on $x$ is found by the following:
\begin{align*}
-1<\frac{u}{C}<1
&\Leftrightarrow -1<\arg\!\left(\frac{GM}{GM+[M-G]i[x+iz]+[x+iz]^{2}}\right)<1\\
&\Leftrightarrow -1<\arg\!\left(\frac{GM}{GM-z[M-G]+x^{2}-z^{2}+ix\{2z+[M-G]\}}\right)<1\\
&\Leftarrow -1<-\frac{\pi}{4}<\tan^{-1}\!\left(\frac{-x(M-G+2z)}{x^{2}-z^{2}+GM-z[M-G]}\right)<\frac{\pi}{4}<1\\
&\Leftarrow -1<\frac{-x(M-G+2z)}{x^{2}-z^{2}+GM-z[M-G]}<1\\
&\Leftarrow x^{2}-z^{2}+GM-|z(M-G)|>x|M-G+2z|\\
&\Leftarrow x^{2}-z^{2}-|z(M-G)|>x|M-G+2z|\\
&\Leftarrow x>\frac{|M-G+2z|+\sqrt{|M-G+2z|^{2}+4\left(z^{2}+|z(M-G)|\right)}}{2}\\
&\Leftarrow x>|M-G+2z|+\sqrt{z^{2}+|z(M-G)|}.
\end{align*}

Another lower bound on x is sought for the purpose of bounding w from below.

Claim 47. Let $\vartheta>0$. If $x\ge\sqrt{\left|GM\left(e^{\vartheta/C}-1\right)\right|+z(M-G)+z^{2}}$, then $w\ge\vartheta$.

Proof. By definition,
\[
\frac{w}{C}=-\ln\!\left(\frac{GM}{\left|GM+(M-G)i[x+iz]+[x+iz]^{2}\right|}\right)
\]
implies that for $\vartheta>0$ the following hold:
\begin{align*}
\frac{w}{\vartheta}&=-\frac{C}{\vartheta}\ln\!\left(\frac{GM}{\left|GM+(M-G)i[x+iz]+[x+iz]^{2}\right|}\right)
\ge\frac{C}{\vartheta}\ln\!\left(\frac{GM-z(M-G)+x^{2}-z^{2}}{GM}\right)\\
&\ge 1\quad\text{if }\frac{GM-z(M-G)+x^{2}-z^{2}}{GM}\ge\exp\!\left(\frac{\vartheta}{C}\right)\\
&\Leftarrow x^{2}\ge GM\left(e^{\vartheta/C}-1\right)+z(M-G)+z^{2}\\
&\Leftarrow x\ge\left[GM\left(e^{\vartheta/C}-1\right)+z(M-G)+z^{2}\right]^{\frac{1}{2}}.
\end{align*}

Note that $x\ge\sqrt{2GM+|z(M-G)|+z^{2}}$ implies $w\ge C$. Also, by inspection of the log term, $x>\sqrt{z^{2}+|z(M-G)|}$ implies $w>0$.

Claim 48. If $x\ge|M-G+2z|+\sqrt{2GM+|z(M-G)|+z^{2}}$, then $-1<\frac{u}{w}<1$.

Proof. Take $x>|M-G+2z|+\sqrt{2GM+|z(M-G)|+z^{2}}$. Consequently, the previous claims hold, so that $-1\le\frac{u}{C}\le 1$ and $w\ge C$ imply $-1\le\frac{u}{w}\le 1$.

Claim 49. Define $\vartheta=\max\left\{1,\;\frac{b\lambda}{1-e^{-\lambda t}},\;\left[\frac{\lambda ab}{\ln(2)}\right]^{2}\right\}$. If
\[
x>\max\left\{\sqrt{\left|GM\left(e^{\vartheta/C}-1\right)\right|+z(M-G)+z^{2}},\;\;|M-G+2z|+\sqrt{2GM+|z(M-G)|+z^{2}}\right\},
\]
then $\exp(A)\le 2^{\frac{3a}{2}+1}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}$.

Proof. Take $x\ge|M-G+2z|+\sqrt{2GM+|z(M-G)|+z^{2}}$ so that $\frac{u}{w}\le 1$ and $w>0$. An upper bound on $e^{A}$ is given by
\begin{align*}
e^{A}&\le\exp\!\left(\frac{-\lambda a(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}\left[b\ln\left|\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right|+tw\right]\right)\\
&\le\exp\!\left(\frac{\lambda a(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}\left[b\ln\left|\frac{b+w\frac{1-e^{-\lambda t}}{\lambda}-iu\frac{1-e^{-\lambda t}}{\lambda}}{b}\right|-tw\right]\right)\\
&\le\exp\!\left(\frac{\lambda ab(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}\,\ln\left|\frac{b+w\frac{1-e^{-\lambda t}}{\lambda}-iu\frac{1-e^{-\lambda t}}{\lambda}}{b}\right|\right)\quad\text{since }w>0\text{ and }\ln|\cdot|>0\\
&=\left[\left(1+w\frac{1-e^{-\lambda t}}{b\lambda}\right)^{2}+\left(u\frac{1-e^{-\lambda t}}{b\lambda}\right)^{2}\right]^{\frac{1}{2}\frac{\lambda ab(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}}
\le\left[2\left(1+w\frac{1-e^{-\lambda t}}{b\lambda}\right)^{2}\right]^{\frac{1}{2}\frac{\lambda ab(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}}\quad\text{since }\frac{u^{2}}{w^{2}}\le 1\\
&\le 2^{\frac{a}{2}}\left(2w\frac{1-e^{-\lambda t}}{b\lambda}\right)^{\frac{\lambda ab(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}}\quad\text{if }w>\frac{b\lambda}{1-e^{-\lambda t}}\\
&\le 2^{\frac{a}{2}}\,2^{a}\,w^{\frac{\lambda ab}{w}}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}\quad\text{if }w>1\\
&=2^{\frac{3a}{2}}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}w^{\frac{\lambda ab}{w}}
\le 2^{\frac{3a}{2}+1}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}\quad\text{if }w>\left[\frac{\lambda ab}{\ln(2)}\right]^{2}.
\end{align*}
The last inequality is true since $w>\left[\frac{\lambda ab}{\ln(2)}\right]^{2}\Rightarrow\frac{\lambda ab\ln(w)}{w}\le\frac{\lambda ab}{\sqrt{w}}\le\ln(2)$. The lower bound on $x$ which satisfies the previously stated lower bounds on $w$ is obtained by defining $\vartheta=\max\left\{1,\frac{b\lambda}{1-e^{-\lambda t}},\left[\frac{\lambda ab}{\ln(2)}\right]^{2}\right\}$ and applying Claim 47.

Claim 50. If
\[
x>\max\left\{\sqrt{GM\left(e^{\sqrt{2\lambda ab/C}}-1\right)+z(M-G)+z^{2}},\;\;|M-G+2z|+\sqrt{2GM+|z(M-G)|+z^{2}}\right\},
\]
then $\exp(B)\le 2$.

Proof. Take $x\ge|M-G+2z|+\sqrt{2GM+|z(M-G)|+z^{2}}$ so that $\frac{u}{w}\le 1$, $\frac{u}{C}\le 1$, and $w>0$. An upper bound for $e^{B}$ is given by
\begin{align*}
e^{B}&=\exp\!\left(\frac{\lambda a u}{(w+\lambda b)^{2}+u^{2}}\left[b\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-tu\right]\right)\\
&\le\exp\!\left(\frac{\lambda a u}{(w+\lambda b)^{2}+u^{2}}\,b\arg\!\left(\frac{b}{b+w\frac{1-e^{-\lambda t}}{\lambda}-iu\frac{1-e^{-\lambda t}}{\lambda}}\right)\right).
\end{align*}
Since $w>0$, it follows that
\[
\arg\!\left(\frac{b}{b+w\frac{1-e^{-\lambda t}}{\lambda}-iu\frac{1-e^{-\lambda t}}{\lambda}}\right)=\tan^{-1}\!\left(\frac{u}{w+\frac{b\lambda}{1-e^{-\lambda t}}}\right)\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right).
\]
Furthermore, $\frac{u}{w}\le 1$ implies for $u>0$ that
\[
0\le\tan^{-1}\!\left(\frac{u}{w+\frac{b\lambda}{1-e^{-\lambda t}}}\right)\le\tan^{-1}\!\left(\frac{u}{w}\right)\le 1,
\]
and, if $u<0$, then
\[
0\ge\tan^{-1}\!\left(\frac{u}{w+\frac{b\lambda}{1-e^{-\lambda t}}}\right)\ge\tan^{-1}\!\left(\frac{u}{w}\right)\ge -1.
\]
Consequently, $\left|\arg\!\left(\frac{b}{b+w\frac{1-e^{-\lambda t}}{\lambda}-iu\frac{1-e^{-\lambda t}}{\lambda}}\right)\right|\le 1$. Using these results, for $u\ge 0$,
\begin{align*}
e^{B}&\le\exp\!\left(\frac{\lambda ab\,u}{(w+\lambda b)^{2}+u^{2}}\right)\le\exp\!\left(\frac{\lambda abC}{(w+\lambda b)^{2}+u^{2}}\right)\quad\text{since }\frac{u}{C}\le 1\\
&\le\exp\!\left(\frac{\lambda abC}{w^{2}}\right)\le e^{\frac{1}{2}}\le 2\quad\Leftarrow\quad w>\sqrt{2\lambda abC}.
\end{align*}
Likewise, for $u\le 0$,
\begin{align*}
e^{B}&\le\exp\!\left(\frac{-\lambda ab\,u}{(w+\lambda b)^{2}+u^{2}}\right)\le\exp\!\left(\frac{\lambda abC}{(w+\lambda b)^{2}+u^{2}}\right)\quad\text{since }\frac{|u|}{C}\le 1\\
&\le 2\quad\text{if }w>\sqrt{2\lambda abC}.
\end{align*}
The lower bound on $x$ corresponding to that of $w$ in the last step, for both cases $u>0$ and $u\le 0$, is given by Claim 47 with $\vartheta=\sqrt{2\lambda abC}$.

Claim 51. If $x>\sqrt{10\left(z^{2}+|z(M-G)|\right)}$, then
\[
\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\le\left(\frac{10}{9}\,\frac{GM}{x^{2}}\right)^{\frac{Cy_{0}}{\lambda}\left(1-e^{-\lambda t}\right)}.
\]

Proof. For $x>\sqrt{z^{2}+|z(M-G)|}$ the following is true:
\begin{align*}
\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)
&=\exp\!\left(\frac{Cy_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\ln\!\left(\frac{GM}{\left|GM-z(M-G)+x^{2}-z^{2}+ix\{2z+(M-G)\}\right|}\right)\right)\\
&\le\left(\frac{GM}{x^{2}-z^{2}-|z(M-G)|+GM}\right)^{\frac{Cy_{0}}{\lambda}\left(1-e^{-\lambda t}\right)}
\le\left(\frac{GM}{x^{2}-z^{2}-|z(M-G)|}\right)^{\frac{Cy_{0}}{\lambda}\left(1-e^{-\lambda t}\right)}\\
&=\left(\frac{GM}{x^{2}\left(1-\frac{z^{2}+|z(M-G)|}{x^{2}}\right)}\right)^{\frac{Cy_{0}}{\lambda}\left(1-e^{-\lambda t}\right)}
\le\left(\frac{10}{9}\,\frac{GM}{x^{2}}\right)^{\frac{Cy_{0}}{\lambda}\left(1-e^{-\lambda t}\right)}\quad\text{if }x>\sqrt{10\left(z^{2}+|z(M-G)|\right)}.
\end{align*}

Theorem 52. Define $\vartheta=\max\left\{1,\;\frac{b\lambda}{1-e^{-\lambda t}},\;\left[\frac{\lambda ab}{\ln(2)}\right]^{2},\;\sqrt{2\lambda abC}\right\}$. The discounted characteristic function, $f$, is bounded by
\[
|f(\xi\equiv x+iz)|\le e^{-rt}\,h(z)\,2^{\frac{3a}{2}+2}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}\left(\frac{10\,GM}{9\,x^{2}}\right)^{\frac{Cy_{0}}{\lambda}\left(1-e^{-\lambda t}\right)},
\]
if
\[
x>\max\left\{\,|M-G+2z|+\sqrt{2GM+|z(M-G)|+z^{2}},\;\sqrt{10\left(z^{2}+|z(M-G)|\right)},\;\sqrt{\left|GM\left(e^{\vartheta/C}-1\right)\right|+z(M-G)+z^{2}}\,\right\}.
\]

Proof. Let the lower bound on $x$ given in the hypothesis of the theorem be satisfied. By the previous claims it follows that
\begin{align*}
|f(\xi\equiv x+iz)|&=e^{-rt}h(z)\exp(A+B)\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\\
&\le e^{-rt}h(z)\,2^{\frac{3a}{2}+1}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}(2)\left(\frac{10\,GM}{9\,x^{2}}\right)^{\frac{Cy_{0}}{\lambda}\left(1-e^{-\lambda t}\right)}.
\end{align*}

APPENDIX E

NIG-OU-Γ Discounted Characteristic Function Bounds for the European Call Option

Let $\{X_t\}$ be a NIG-OU-$\Gamma$ process with parameters satisfying $\beta<0$ and
\[
\delta,\,\gamma,\,\lambda,\,a,\,b,\,y_{0}>0.
\]
Define $A_{X_t}=\{\nu\in\mathbb{R}:Ee^{\nu X_t}<\infty\}$ and $\Lambda_{X_t}=\{\zeta\in\mathbb{C}:-\Im(\zeta)\in A_{X_t}\}$. The discounted characteristic function of the logarithm of the risk-neutral underlier, $f:\Lambda_{X_t}\to\mathbb{C}$, is given by
\[
f(\xi)=e^{-rt}\exp\{i\xi[\ln(S_{0})+(r-q)t-\ln(\phi_{OU\text{-}\Gamma_t}(-i\log(\phi_{NIG_1}(-i))))]\}\cdot\phi_{OU\text{-}\Gamma_t}(-i\log(\phi_{NIG_1}(\xi))).
\]
Let $\xi\in\Lambda_{X_t}$ with $x\equiv\Re(\xi)\ge 0$ and $z\equiv\Im(\xi)<0$ and define
\[
u=\Re\left(-i\log(\phi_{NIG_1}(\xi))\right),\qquad w=\Im\left(-i\log(\phi_{NIG_1}(\xi))\right).
\]

For the NIG case the following is true:

\begin{align*}
\log(\phi_{NIG_1}(x+iz)) &= -\delta\left(\sqrt{(x+iz)^{2}-2i\beta(x+iz)+\gamma^{2}}-\gamma\right)\\
&= \delta\gamma-\delta\sqrt{\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)}\\
&= \delta\gamma-\delta\exp\!\left(\tfrac{1}{2}\log\!\left(\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right)\right)\\
&= \delta\gamma-\delta\exp\!\left(\tfrac{1}{2}\ln\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|+\tfrac{i}{2}\arg\!\left(\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right)\right)\\
&= \delta\gamma-\delta\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|^{\frac{1}{2}}\left(\cos\tfrac{\theta}{2}+i\sin\tfrac{\theta}{2}\right).
\end{align*}

Define

\[
\psi=\delta\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|^{\frac{1}{2}},\qquad
\theta=\arg\!\left(\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right).
\]
We note that $\psi>0$ if $x>|z|$. The real and imaginary parts of $-i\log(\phi_{NIG_1}(\xi))$ are given by

\[
u=\Re\left(-i\log(\phi_{NIG_1}(x+iz))\right)
=\Re\left(-i\left[\delta\gamma-\psi\cos\tfrac{\theta}{2}-i\psi\sin\tfrac{\theta}{2}\right]\right)
=-\psi\sin\!\left(\tfrac{\theta}{2}\right)
\]
and
\[
w=\Im\left(-i\log(\phi_{NIG_1}(x+iz))\right)
=\Im\left(-i\left[\delta\gamma-\psi\cos\tfrac{\theta}{2}-i\psi\sin\tfrac{\theta}{2}\right]\right)
=-\delta\gamma+\psi\cos\!\left(\tfrac{\theta}{2}\right).
\]
Let $h:-A_{X_t}\cap(-\infty,0)\to\mathbb{R}$ with
\[
h(z)=\exp\{-z[\ln(S_{0})+(r-q)t-\ln(\phi_{OU\text{-}\Gamma_t}(-i\log(\phi_{NIG_1}(-i))))]\}.
\]
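
As a brief numerical check of the expressions for $u$ and $w$ above (an illustration added here, with hypothetical parameter values), they can be compared against a direct evaluation of $-i\log(\phi_{NIG_1}(x+iz))$:

```python
import numpy as np

# Hypothetical NIG parameters and a sample point xi = x + i*z with x > |z|, z < 0.
beta, delta, gamma = -3.0, 0.4, 5.0
x, z = 10.0, -1.5
xi = x + 1j * z

log_phi = -delta * (np.sqrt(xi**2 - 2j * beta * xi + gamma**2) - gamma)  # log phi_NIG1(xi)
u_direct, w_direct = (-1j * log_phi).real, (-1j * log_phi).imag

R = gamma**2 + x**2 - z**2 + 2 * beta * z + 2j * x * (z - beta)
psi, theta = delta * np.sqrt(abs(R)), np.angle(R)
u_formula = -psi * np.sin(theta / 2)
w_formula = -delta * gamma + psi * np.cos(theta / 2)

print(u_direct, u_formula)   # should agree
print(w_direct, w_formula)   # should agree
```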

The absolute value of the discounted characteristic function is given by
\begin{align*}
|f(\xi\equiv x+iz)| &= e^{-rt}h(z)\left|\phi_{OU\text{-}\Gamma_t}(u+iw)\right|\\
&= e^{-rt}h(z)\left|\exp\!\left(i[u+iw]\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(\frac{\lambda a}{i[u+iw]-\lambda b}\left[b\log\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-it[u+iw]\right]\right)\right|\\
&= e^{-rt}h(z)\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\left|\exp\!\left(\frac{\lambda a}{i[u+iw]-\lambda b}\left[b\log\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-it[u+iw]\right]\right)\right|.
\end{align*}

We now simplify the complex logarithm:
\begin{align*}
&\frac{\lambda a}{i[u+iw]-\lambda b}\left[b\log\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-it[u+iw]\right]\\
&\quad=\frac{-\lambda a}{(w+\lambda b)-iu}\left[b\ln\left|\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right|+ib\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-itu+tw\right]\\
&\quad=\frac{-\lambda a[(w+\lambda b)+iu]}{(w+\lambda b)^{2}+u^{2}}\left[\left(b\ln\left|\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right|+tw\right)+i\left(b\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-tu\right)\right]\\
&\quad=\frac{-\lambda a(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}\left[b\ln\left|\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right|+tw\right]
+\frac{\lambda a u}{(w+\lambda b)^{2}+u^{2}}\left[b\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-tu\right]
+ i\,(\text{real-valued terms}).
\end{align*}
Define $A$ and $B$ as follows.

\[
A=\frac{-\lambda a(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}\left[b\ln\left|\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right|+tw\right],\qquad
B=\frac{\lambda a u}{(w+\lambda b)^{2}+u^{2}}\left[b\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)-tu\right].
\]
Consequently, for $x>0$ and $z<0$,

\[
|f(\xi\equiv x+iz)|=e^{-rt}h(z)\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp(A+B).
\]

Claim 53. If $x>\max\left\{\sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$ and $\beta z>0$, then $\cos\!\left(\frac{|\theta|}{2}\right)>\frac{\sqrt{3}}{2}$.

Proof. If $x>|z|$ and $\beta z>0$, then $\theta=\tan^{-1}\!\left(\frac{2x(z-\beta)}{\gamma^{2}+x^{2}-z^{2}+2\beta z}\right)\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$. The argument may now be bounded as follows:
\begin{align*}
|\theta|<\frac{\pi}{3}
&\Leftrightarrow\left|\tan^{-1}\!\left(\frac{2x(z-\beta)}{\gamma^{2}+x^{2}-z^{2}+2\beta z}\right)\right|<\frac{\pi}{3}
\Leftrightarrow\tan^{-1}\!\left(\frac{2x|z-\beta|}{\gamma^{2}+x^{2}-z^{2}+2\beta z}\right)<\frac{\pi}{3}\\
&\Leftarrow\tan^{-1}\!\left(\frac{2x|z-\beta|}{x^{2}-z^{2}}\right)<\frac{\pi}{3}\quad\text{since }\beta z>0\\
&\Leftarrow\frac{2x|z-\beta|}{x^{2}\left(1-\frac{z^{2}}{x^{2}}\right)}<\sqrt{3}
\Leftarrow\frac{4|z-\beta|}{x}<\sqrt{3}\quad\text{if }x\ge\sqrt{2}\,|z|\\
&\Leftarrow x\ge\max\left\{\frac{4}{\sqrt{3}}|z-\beta|,\ \sqrt{2}\,|z|\right\}.
\end{align*}

Claim 54. Let $w_{0}>0$. If $x\ge\max\left\{\sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{w_{0}}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$ and $\beta z>0$, then $w\ge w_{0}$.

Proof. Let $x>\max\left\{\sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$ with $\beta<0$. A lower bound for $w$ is given by
\begin{align*}
w&=-\delta\gamma+\psi\cos\!\left(\frac{\theta}{2}\right)
=-\delta\gamma+\delta\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|^{\frac{1}{2}}\cos\!\left(\frac{\theta}{2}\right)\\
&\ge-\delta\gamma+\frac{\sqrt{3}}{2}\,\delta\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|^{\frac{1}{2}}.
\end{align*}
Bounding below by $w_{0}$ yields
\begin{align*}
-\delta\gamma+\frac{\sqrt{3}}{2}\,\delta\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|^{\frac{1}{2}}\ge w_{0}
&\Leftrightarrow\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|\ge\frac{4}{3}\left(\gamma+\frac{w_{0}}{\delta}\right)^{2}\\
&\Leftarrow\gamma^{2}+x^{2}-z^{2}\ge\frac{4}{3}\left(\gamma+\frac{w_{0}}{\delta}\right)^{2}\quad\text{since }\beta z>0\\
&\Leftarrow x\ge\sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{w_{0}}{\delta}\right)^{2}-\gamma^{2}}.
\end{align*}
We note that if $x>\max\left\{\sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{1}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$ and $\beta z>0$, then $w\ge 1$.

Theorem 55. If
\[
x>\max\left\{\sqrt{7\gamma^{2}+z^{2}},\ \sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{1}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\},
\]
then $0<\frac{|u|}{w}\le 1$.

Proof. Let $x>\max\left\{\sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{1}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$ so that $w>0$ by the previous claim. Such a choice of $x$ further implies that
\[
0<\frac{|u|}{w}\le 1
\Leftrightarrow\frac{\delta\gamma}{\psi}\le\cos\!\left(\frac{|\theta|}{2}\right)-\sin\!\left(\frac{|\theta|}{2}\right),\quad\theta\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right),
\]
since $\psi>0$ whenever $x>|z|$. Choose $|\theta(x)|$ small enough so that
\[
\frac{\delta\gamma}{\psi}<\frac{\sqrt{3}-1}{2}<\cos\!\left(\frac{|\theta|}{2}\right)-\sin\!\left(\frac{|\theta|}{2}\right).
\]
By the proof of Claim 53, if $x>\max\left\{\sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$, then $|\theta|<\frac{\pi}{3}$ and $\frac{\sqrt{3}-1}{2}<\cos\!\left(\frac{|\theta|}{2}\right)-\sin\!\left(\frac{|\theta|}{2}\right)$. It will now be shown that $\frac{\delta\gamma}{\psi}<\frac{\sqrt{3}-1}{2}$ for $x$ sufficiently large:
\begin{align*}
\frac{\delta\gamma}{\psi}<\frac{\sqrt{3}-1}{2}
&\Leftrightarrow\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|^{\frac{1}{2}}>\frac{2\gamma}{\sqrt{3}-1}\\
&\Leftrightarrow\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|>\left(\frac{2\gamma}{\sqrt{3}-1}\right)^{2}\\
&\Leftarrow\gamma^{2}+x^{2}-z^{2}+2\beta z>\left(\frac{2\gamma}{\sqrt{3}-1}\right)^{2}
\Leftarrow\gamma^{2}+x^{2}-z^{2}>\left(\frac{2\gamma}{\sqrt{3}-1}\right)^{2},\quad\beta z>0.
\end{align*}
Thus $x>\sqrt{7\gamma^{2}+z^{2}}>\sqrt{\gamma^{2}\left[\left(\frac{2}{\sqrt{3}-1}\right)^{2}-1\right]+z^{2}}$ implies $\frac{\delta\gamma}{\psi}<\frac{\sqrt{3}-1}{2}$. Finally, upon combining the constraints, the proof easily follows.

Corollary 56. If
\[
x>\max\left\{\sqrt{7\gamma^{2}+z^{2}},\ \sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{1}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\},
\]
then
\[
-1<\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)<1.
\]

Proof. By the previous theorem, if $x$ satisfies the stated lower bound, then $0<\frac{|u|}{w}<1$. It follows that
\[
\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)=\tan^{-1}\!\left(\frac{u}{w+\frac{b\lambda}{1-e^{-\lambda t}}}\right)\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\quad\text{since }w>0.
\]
Furthermore, for $u>0$ the argument is at most $\tan^{-1}\!\left(\frac{u}{w}\right)<\frac{\pi}{4}$, and for $u<0$ it is at least $-\tan^{-1}\!\left(\frac{|u|}{w}\right)>-\frac{\pi}{4}$. Hence $\left|\arg\!\left(\frac{b}{b-i[u+iw]\frac{1-e^{-\lambda t}}{\lambda}}\right)\right|<1$.

Claim 57. Define $\vartheta=\max\left\{1,\;\frac{b\lambda}{1-e^{-\lambda t}},\;\left[\frac{\lambda ab}{\ln(2)}\right]^{2}\right\}$. An upper bound for $\exp(A)$ is given by
\[
\exp(A)\le 2^{\frac{3a}{2}+1}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}
\quad\text{if }x>\max\left\{\sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{\vartheta}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{7\gamma^{2}+z^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}.
\]

Proof. The method of proof is the same as that of the VG-OU-$\Gamma$ case for $e^{A}$. Since $x>\max\left\{\sqrt{7\gamma^{2}+z^{2}},\ \sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{1}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$, it follows that $0<\frac{|u|}{w}<1$ by Theorem 55. Consequently,
\begin{align*}
e^{A}&\le\exp\!\left(\frac{\lambda ab(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}\,\ln\left|\frac{b+w\frac{1-e^{-\lambda t}}{\lambda}-iu\frac{1-e^{-\lambda t}}{\lambda}}{b}\right|\right)\quad\text{since }w>0\text{ and }\ln|\cdot|>0\\
&=\left[\left(1+w\frac{1-e^{-\lambda t}}{b\lambda}\right)^{2}+\left(u\frac{1-e^{-\lambda t}}{b\lambda}\right)^{2}\right]^{\frac{1}{2}\frac{\lambda ab(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}}
\le\left[2\left(1+w\frac{1-e^{-\lambda t}}{b\lambda}\right)^{2}\right]^{\frac{1}{2}\frac{\lambda ab(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}}\quad\text{since }0<\frac{|u|}{w}\le 1\\
&\le 2^{\frac{a}{2}}\left(2w\frac{1-e^{-\lambda t}}{b\lambda}\right)^{\frac{\lambda ab(w+\lambda b)}{(w+\lambda b)^{2}+u^{2}}}\quad\text{if }w>\frac{b\lambda}{1-e^{-\lambda t}}\\
&\le 2^{\frac{3a}{2}}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}w^{\frac{\lambda ab}{w}}\quad\text{if }w>1\\
&\le 2^{\frac{3a}{2}+1}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}\quad\text{if }w>\left[\frac{\lambda ab}{\ln(2)}\right]^{2}.
\end{align*}
The last inequality is true since $w>\left[\frac{\lambda ab}{\ln(2)}\right]^{2}\Rightarrow\frac{\lambda ab\ln(w)}{w}\le\frac{\lambda ab}{\sqrt{w}}\le\ln(2)$. The lower bound on $x$ which satisfies the previously stated lower bounds on $w$ is obtained by defining $\vartheta=\max\left\{1,\frac{b\lambda}{1-e^{-\lambda t}},\left[\frac{\lambda ab}{\ln(2)}\right]^{2}\right\}$ and applying Claim 54.

Claim 58. Define $\vartheta=\max\{1,\,2\lambda ab\}$. A bound for $\exp(B)$ is given by
\[
\exp(B)\le 2
\quad\text{if }x>\max\left\{\sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{\vartheta}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{7\gamma^{2}+z^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}.
\]

Proof. Let $x>\max\left\{\sqrt{7\gamma^{2}+z^{2}},\ \sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{1}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$, so that if $u>0$,
\begin{align*}
e^{B}&\le\exp\!\left(\frac{\lambda a u\,b}{(w+\lambda b)^{2}+u^{2}}\right)\quad\text{since }|\arg(\cdot)|\le 1\text{ by the previous corollary}\\
&\le\exp\!\left(\frac{\lambda abu}{(w+\lambda b)^{2}}\right)\le\exp\!\left(\frac{\lambda abu}{w^{2}}\right)=\exp\!\left(\frac{\lambda ab}{w}\cdot\frac{u}{w}\right)\le\exp\!\left(\frac{\lambda ab}{w}\right)\quad\text{since }0<\frac{|u|}{w}\le 1\\
&\le 2\quad\text{if }w>2\lambda ab.
\end{align*}
Similarly, if $u<0$, then
\begin{align*}
e^{B}&\le\exp\!\left(\frac{-\lambda a u\,b}{(w+\lambda b)^{2}+u^{2}}\right)\quad\text{since }|\arg(\cdot)|\le 1\\
&\le\exp\!\left(\frac{-\lambda abu}{w^{2}}\right)=\exp\!\left(\frac{\lambda ab}{w}\cdot\frac{|u|}{w}\right)\le 2\quad\text{since }\frac{|u|}{w}\le 1\text{ and if }w>2\lambda ab.
\end{align*}
Since $x>\sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{2\lambda ab}{\delta}\right)^{2}-\gamma^{2}}$ implies $w>2\lambda ab$ by Claim 54, the claim is proved.

It will now be shown that $\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)$ may be expressed in terms of $x$ so that Lee's theorem may be used.

Claim 59. If $x>\max\left\{\sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$, then
\[
\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)
\le\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{\delta y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\sqrt{\tfrac{3}{8}}\,|x|\right).
\]

Proof. Fix $x>\max\left\{\sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}$ and let $\beta<0$, so that
\begin{align*}
\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)
&=\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\cos\!\left(\tfrac{\theta}{2}\right)\delta\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|^{\frac{1}{2}}\right)\\
&\le\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\frac{\sqrt{3}}{2}\,\delta\left|\gamma^{2}+x^{2}-z^{2}+2\beta z+i\,2x(z-\beta)\right|^{\frac{1}{2}}\right)\quad\text{by Claim 53}\\
&\le\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\frac{\sqrt{3}}{2}\,\delta\left(\gamma^{2}+x^{2}-z^{2}+2\beta z\right)^{\frac{1}{2}}\right)\\
&\le\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\frac{\sqrt{3}}{2}\,\delta\left(x^{2}-z^{2}\right)^{\frac{1}{2}}\right)\\
&=\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\frac{\sqrt{3}}{2}\,\delta\,|x|\left(1-\frac{z^{2}}{x^{2}}\right)^{\frac{1}{2}}\right)\\
&\le\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\frac{\sqrt{3}}{2}\,\delta\,\frac{|x|}{\sqrt{2}}\right)\quad\text{since }x>\sqrt{2}\,|z|\\
&=\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{\delta y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\sqrt{\tfrac{3}{8}}\,|x|\right).
\end{align*}

Theorem 60. Define $\vartheta=\max\left\{1,\;\frac{b\lambda}{1-e^{-\lambda t}},\;\left[\frac{\lambda ab}{\ln(2)}\right]^{2},\;2\lambda ab\right\}$. A bound on the discounted characteristic function, $f$, is given by
\[
|f(\xi\equiv x+iz)|\le e^{-rt}\,h(z)\,2^{\frac{3a}{2}+2}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{\delta y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\sqrt{\tfrac{3}{8}}\,|x|\right)
\]
if
\[
x>\max\left\{\sqrt{z^{2}+\frac{4}{3}\left(\gamma+\frac{\vartheta}{\delta}\right)^{2}-\gamma^{2}},\ \sqrt{7\gamma^{2}+z^{2}},\ \sqrt{2}\,|z|,\ \frac{4}{\sqrt{3}}|z-\beta|\right\}.
\]

Proof. Let the lower bound on $x$ given in the hypothesis of the theorem be satisfied. By the previous claims it follows that
\begin{align*}
|f(\xi\equiv x+iz)|&=e^{-rt}h(z)\exp(A+B)\exp\!\left(-w\frac{y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\\
&\le e^{-rt}h(z)\,2^{\frac{3a}{2}+1}\left(1+\frac{1-e^{-\lambda t}}{b\lambda}\right)^{a}(2)\,\exp\!\left(\frac{\delta\gamma y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\right)\exp\!\left(-\frac{\delta y_{0}}{\lambda}\left(1-e^{-\lambda t}\right)\sqrt{\tfrac{3}{8}}\,|x|\right).
\end{align*}

REFERENCES

[1] D. Applebaum. Lévy Processes and Stochastic Calculus. Cambridge University Press, Cambridge, first edition, 2004.

[2] O. Barndorff-Nielsen. Processes of Normal Inverse Gaussian type. Finance and Stochastics, 2:41-68, 1998.

[3] F. Black and M. Scholes. The pricing of options and corporate liabilities. The Journal of Political Economy, 81(3):637-654, May-Jun 1973.

[4] S. Boyarchenko and S. Levendorskii. Non-Gaussian Merton-Black-Scholes Theory. World Scientific, Singapore, first edition, 2002.

[5] P. Carr, H. Geman, D. Madan, and M. Yor. The fine structure of asset returns: An empirical investigation. Journal of Business, 75(2), 2002.

[6] P. Carr, H. Geman, D. Madan, and M. Yor. Stochastic volatility for Lévy processes. Mathematical Finance, 13(3):345-382, July 2003.

[7] P. Carr, H. Geman, D. Madan, and M. Yor. Selfdecomposability and option pricing. Mathematical Finance, forthcoming 2006.

[8] P. Carr and D. Madan. Option pricing and the Fast Fourier Transform. Journal of Computational Finance, 2(4):61-73, Summer 1999.

[9] P. Carr and D. Madan. A note on sufficient conditions for no arbitrage. Finance Research Letters, (2):125-130, June 2005.

[10] P. Carr and L. Wu. The Finite Moment Log Stable process and option pricing. Journal of Finance, 58(2):753-778, April 2003.

[11] P. Clark. A subordinated stochastic process model with finite variance for speculative prices. Econometrica, 41(1):135-155, January 1973.

[12] R. Cont and P. Tankov. Financial Modelling with Jump Processes. CRC Press, Boca Raton, first edition, 2004.

[13] R. Cont and P. Tankov. Recovering exponential Lévy models from option prices: regularization of an ill-posed inverse problem. SIAM Journal on Control and Optimization, 43, 2006.

[14] J. Courtault, Y. Kabanov, B. Bru, P. Crépel, I. Lebon, and A. Le Marchand. Louis Bachelier on the centenary of Théorie de la Spéculation. Mathematical Finance, 10(3):341-353, July 2000.

[15] E. Eberlein and W. Kluge. Exact pricing formulae for caps and swaptions in a Lévy term structure model. Journal of Computational Finance, 9(2), 2005.

[16] S. Hamida and R. Cont. Recovering volatility from option prices by evolutionary optimization. Journal of Computational Finance, 8(4), 2005.

[17] R. Lee. Option pricing by transform methods: extensions, unification and error control. Journal of Computational Finance, 7(3):51-86, 2004.

[18] M. Loeve. Paul Lévy. The Annals of Probability, 1(1):1-8, February 1973.

[19] D. Madan, P. Carr, and E. Chang. The Variance Gamma process and option pricing. European Finance Review, 2(1):79-105, June 1998.

[20] D. Madan and M. Yor. Making Markov martingales meet marginals: with explicit constructions. Bernoulli, 8:509-536, March 2002.

[21] B. Mandelbrot. The variation of certain speculative prices. Journal of Business, 36(4):394-419, October 1963.

[22] B. Mandelbrot. Scaling in financial prices: I. Tails and dependence. Quantitative Finance, 1, 2001.

[23] C. Nolder. Selfsimilar additive processes and financial modelling. In Proceedings of the 2003 Financial Engineering and Applications Conference, Banff, Canada, 2003.

[24] Board of Governors of the Federal Reserve System. http://www.federalreserve.gov.

[25] K. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge, first edition, 2002.

[26] W. Schoutens. Lévy Processes in Finance: Pricing Financial Derivatives. John Wiley and Sons, West Sussex, first edition, 2003.

[27] W. Schoutens, E. Simons, and J. Tistaert. A perfect calibration! Now what? Wilmott Magazine, 2004.

[28] Standard and Poor's. http://www2.standardandpoors.com.

[29] P. Tankov. Lévy processes in finance: Inverse problems and dependence modelling. Dissertation, École Polytechnique, 2004.

BIOGRAPHICAL SKETCH

Mack L. Galloway

Mack L. Galloway was born on May 1, 1972, in Shreveport, Louisiana. In the spring of 1995, he completed a Bachelor of Science in Physics at The University of Texas at Austin. He obtained a Master's degree in Financial Mathematics in the spring of 2001 from the Department of Mathematics at FSU. Under the advisement of Professor Craig A. Nolder, he obtained a Doctorate in Financial Mathematics in the fall of 2006, also from the Department of Mathematics at FSU.
