
Degree Project

Regularized Calibration of Jump-Diffusion Pricing Models

Hiba Nassar 2010-10-21 Subject: Mathematics Level: Master Course code: 5MA11E


Contents

1 Introduction to stochastics
   1.1 Stochastic process
   1.2 Martingale
   1.3 Lévy models
   1.4 Equivalence of measures
   1.5 Pricing options
       1.5.1 Pricing a call option
       1.5.2 Option pricing for the Black-Scholes model
       1.5.3 Option pricing for the Merton model
   1.6 Relative entropy function
       1.6.1 Properties
   1.7 Direct problem and inverse problem
2 Inverse problem and regularization methods
   2.1 How to solve the inverse problem?
   2.2 Regularization method
   2.3 Tikhonov Regularization
3 Calibration problem
   3.1 Non-linear Least Squares (NLS)
       3.1.1 Choice of weights w_i
       3.1.2 What is wrong with NLS?
   3.2 Regularization by relative entropy
       3.2.1 Choice of regularization parameter
4 Numerical implementation
5 Some comments on possible extensions
A Matlab Codes
References

Acknowledgment

I am deeply indebted to associate professor Irina Asekritova and associate professor Roger Pettersson. Without their guidance and support, I would never have been able to complete this work. It is a pleasure to thank everyone who has helped me along the way. I would like to say many thanks to my family, without whose support none of this would have been possible. Lastly, the most special thanks goes to my partner and friend, my fiancé Rani Basna.

Abstract

An important issue in finance is model calibration. The calibration problem is the inverse of the option pricing problem. Calibration is performed on a set of option prices generated from a given exponential Lévy model. By numerical examples, it is shown that the usual formulation of the inverse problem via Non-linear Least Squares is an ill-posed problem. To achieve well-posedness of the problem, some regularization is needed. Therefore a regularization method based on relative entropy is applied.

1 Introduction to stochastics

It is useful to start with some definitions and basic concepts in stochastics. The definitions and theorems presented below can be found, for example, in [13] and [15].

1.1 Stochastic process

A stochastic process is an extension of a deterministic process: instead of dealing with only one possible 'reality' of how the process might evolve in time, its future evolution is described by probability distributions. This means that even if the initial condition (or starting point) is known, there are many possible paths the process might follow.

Definition 1.1.1. A stochastic process is a collection of random variables

\[
(X_t;\ t \in \mathcal{T}) = (X_t(\omega);\ t \in \mathcal{T},\ \omega \in \Omega),
\]
defined on some probability space $(\Omega, \mathcal{F}, P)$. We call $X$ a continuous-time process if $\mathcal{T}$ is an interval, such as $\mathcal{T} = [0, T]$, and a discrete-time process if $\mathcal{T}$ is a finite or countably infinite set, such as $\mathcal{T} = \{0, 1, 2, \ldots\}$; such processes are also called random sequences.

Remark 1.1.2. A stochastic process is a function of two variables $t$ and $\omega$, where the first variable represents time and the second uncertainty.

- For a fixed time $t$,
\[
X_t = X_t(\omega),\ \omega \in \Omega,
\]
is a random variable; here $X_t(\omega)$ is a number.

- For a fixed $\omega$, the map
\[
t \mapsto X_t(\omega),\ t \in \mathcal{T},
\]
is a function of time, called a realization, or a sample path, of the process $X$.
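The two viewpoints in Remark 1.1.2 can be illustrated with a simple discrete-time process. The following sketch (the function name `sample_paths` is ours, not from the thesis) simulates a symmetric random walk: a row is a sample path (fixed $\omega$), a column is a random variable (fixed $t$):

```python
import random

def sample_paths(n_paths, n_steps, seed=0):
    """Simulate a symmetric random walk X_t(omega): one row per
    outcome omega, one column per time t (a discrete-time process)."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        x, path = 0, [0]
        for _ in range(n_steps):
            x += rng.choice((-1, 1))  # one +/-1 step per unit of time
            path.append(x)
        paths.append(path)
    return paths

paths = sample_paths(n_paths=3, n_steps=5)
# Fixed omega (a row): a sample path, i.e. a function of t.
path_of_first_omega = paths[0]
# Fixed t (a column): a random variable, one number per omega.
values_at_t2 = [p[2] for p in paths]
```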

1.2 Martingale

- A filtration is an increasing family of sub $\sigma$-algebras $(\mathcal{F}_t)_{t \in \mathcal{T}}$, i.e. $\mathcal{F}_s \subseteq \mathcal{F}_t$ for $s \le t$.

- If, for each $t$, $X_t$ is $\mathcal{F}_t$-measurable, the process is said to be adapted to the filtration $(\mathcal{F}_t)$.

Definition 1.2.1. A process $X_t$ is a martingale with respect to a filtration $(\mathcal{F}_t)$ if:

1. $E[|X_t|] < \infty$ for each $t$;
2. $X_t$ is adapted to the filtration $(\mathcal{F}_t)$;
3. $E[X_t \mid \mathcal{F}_s] = X_s$ for $s \le t$.

1.3 Lévy models

Definition 1.3.1. A stochastic process $(X_t)_{t \ge 0}$ is a Lévy process if:

1. It has independent increments: for any times $t_0 < t_1 < \cdots < t_n$, the random variables $X_{t_0}, X_{t_1} - X_{t_0}, X_{t_2} - X_{t_1}, \ldots, X_{t_n} - X_{t_{n-1}}$ are independent.

2. It has stationary increments: the distribution of $X_{t+h} - X_t$ does not depend on $t$.

3. It is continuous in probability:
\[
\lim_{h \to 0} P\left(|X_{t+h} - X_t| \ge \varepsilon\right) = 0 \quad \text{for all } \varepsilon > 0.
\]

4. Its sample paths are right-continuous with left limits ("càdlàg").

Definition 1.3.2. Let $X_t$ be a Lévy process. The Lévy measure $\nu$ is defined by
\[
\nu(A) = E\left[\#\{t \in [0,1] : \Delta X_t \in A,\ \Delta X_t \neq 0\}\right]
\]
for any Borel set $A$, i.e. it is the expected number of jumps, in the time interval $[0,1]$, whose height belongs to $A$.

Note: a Borel set is any set in a topological space that can be formed from open sets (or, equivalently, from closed sets) through the operations of countable union, countable intersection, and relative complement. Here our topological space is just the space $(X_t)_{t > 0}$ lives on, namely $\mathbb{R}$.

Example 1. A Poisson process $\{N_t\}$ is a Lévy process with Lévy measure $\nu$ given by
\[
\nu(A) = \begin{cases} \lambda, & 1 \in A, \\ 0, & 1 \notin A, \end{cases}
\]
for Borel sets $A$. It means that the jump heights are always one. The parameter $\lambda$ is the mean number of jumps of $X_t$ in the interval $[0,1]$, called the intensity.

Example 2. A compound Poisson process is a Lévy process
\[
X_t = \sum_{i=1}^{N_t} Y_i,
\]
where $\{N_t\}$ is a Poisson process and the $Y_i$ are independent and identically distributed random variables with distribution function $F$. The Lévy measure of $X_t$ is given by
\[
\nu(A) = \lambda \int_A dF(x),
\]
where $\lambda$ is the intensity of $\{N_t\}$.

Example 3. A standard Brownian motion (Wiener process) $\{W_t\}$ is a Lévy process whose increments $W_{t+h} - W_t$ are normally distributed with mean zero and variance $h$. Its Lévy measure is identically zero.
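A sample path of the compound Poisson process of Example 2 can be simulated directly from the definition. The following sketch (function name ours) draws Poisson jump counts on a time grid and adds i.i.d. jumps; Gaussian jumps anticipate the Merton model of Section 1.5.3:

```python
import math
import random

def compound_poisson_path(lam, T, n_steps, jump_sampler, seed=0):
    """Simulate X_t = sum_{i=1}^{N_t} Y_i on a grid, where N_t is a
    Poisson process with intensity lam and Y_i ~ jump_sampler(rng)."""
    rng = random.Random(seed)
    dt = T / n_steps
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        # Number of jumps in (t, t+dt] is Poisson(lam*dt); draw it
        # exactly by inversion of the Poisson CDF.
        u = rng.random()
        k, p = 0, math.exp(-lam * dt)
        cdf = p
        while u > cdf and k < 100:
            k += 1
            p *= lam * dt / k
            cdf += p
        for _ in range(k):
            x += jump_sampler(rng)
        path.append(x)
    return path

# Gaussian jumps Y_i ~ N(m, delta^2), as in the Merton model below.
path = compound_poisson_path(
    lam=5.0, T=1.0, n_steps=1000,
    jump_sampler=lambda rng: rng.gauss(0.0, 0.1))
```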

Theorem 1.3.3 (Lévy-Itô decomposition). Let $(X_t)_{t \ge 0}$ be a Lévy process and $\nu$ its Lévy measure, and assume $\int_{|x|<1} |x|\, \nu(dx) < \infty$. Then there exist a constant $\gamma$ and a Brownian motion $B_t = \mu t + \sigma W_t$ such that
\[
X_t = \gamma t + B_t + X_t^{l} + \lim_{\varepsilon \downarrow 0} X_t^{\varepsilon}, \tag{1}
\]
where $X_t^{l}$ denotes the sum of large jumps, of size larger than 1, and $X_t^{\varepsilon}$ denotes the sum of small jumps, of size between $\varepsilon$ and 1. The triple $(\sigma^2, \nu, \gamma)$ is called the characteristic triple, or the Lévy triplet, of the process $X_t$.

Note: if $\int_{|x|<1} |x|\, \nu(dx) < \infty$ does not hold, a more careful decomposition is needed.

1.4 Equivalence of measures

Let $P$ and $Q$ be two probability measures defined on $\Omega$, equipped with a $\sigma$-algebra $\mathcal{F}$. We say that $P$ is absolutely continuous with respect to $Q$ ($P \ll Q$) if
\[
Q(A) = 0 \ \Rightarrow\ P(A) = 0, \quad \forall A \in \mathcal{F}.
\]
If $P \ll Q$ and $Q \ll P$, then we say that $P$ and $Q$ are equivalent measures ($Q \sim P$).

Theorem 1.4.1. Let $P$ and $Q$ be two measures on a probability space $(\Omega, \mathcal{F})$, such that $Q$ is absolutely continuous with respect to $P$, i.e. $Q \ll P$. Then there exists a unique non-negative function $Z : \Omega \to \mathbb{R}$ such that:

- $Z$ is $\mathcal{F}$-measurable;
- $Q(A) = \int_A Z(x)\, dP(x)$ for all $A \in \mathcal{F}$;
- $Q(A) < \infty$.

In this case we write
\[
dQ = Z\, dP, \quad \text{i.e.} \quad Z = \frac{dQ}{dP}.
\]
$Z$ is called the Radon-Nikodym derivative of $Q$ with respect to $P$.

Properties:

1. If $Q \ll P$, then
\[
\frac{d(aQ)}{dP} = a\,\frac{dQ}{dP}, \quad \forall a \in \mathbb{R}.
\]

2. If $Q_1 \ll P$ and $Q_2 \ll P$, then
\[
\frac{d(Q_1 + Q_2)}{dP} = \frac{dQ_1}{dP} + \frac{dQ_2}{dP}.
\]

Theorem 1.4.2 (Girsanov). Let $(\Omega, \mathcal{F}, P)$ be a probability space, and $W(t)$, $0 \le t \le T$, a Brownian motion. Let $\Theta(t)$, $0 \le t \le T$, be an adapted process, i.e. $\Theta(t) \in \mathcal{F}_t$ for all $t$. Define
\[
Z(t) = \exp\left(-\int_0^t \Theta(u)\, dW(u) - \frac{1}{2}\int_0^t \Theta^2(u)\, du\right)
\]
and
\[
\widetilde{W}(t) = W(t) + \int_0^t \Theta(u)\, du.
\]
Let $Z = Z(T)$, and assume the Novikov condition, i.e. $E^P\left[e^{\frac{1}{2}\int_0^T \Theta^2(s)\, ds}\right] < \infty$. Then $E(Z) = 1$, the measure $Q$ defined by $Z = \frac{dQ}{dP}$ is a probability measure, and the process $\widetilde{W}(t)$ is a standard Wiener process under the probability measure $Q$.

In many applications of the Girsanov theorem, $\Theta$ is just a constant $\neq 0$, for which the Novikov condition clearly holds.

In finance, a probability measure $Q$ is called a risk-neutral measure if $\hat{S}_t = e^{-rt} S_t$ is a $Q$-martingale, where $r$ is the interest rate.
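For a constant $\Theta$, the property $E(Z) = 1$ of the Girsanov density can be checked numerically. A minimal Monte Carlo sketch in Python (the function name is ours, not from the thesis):

```python
import math
import random

def girsanov_check(theta, T, n_samples, seed=0):
    """For constant Theta, Z = exp(-Theta*W_T - Theta^2*T/2) with
    W_T ~ N(0, T). The Girsanov density satisfies E[Z] = 1; estimate
    the expectation by Monte Carlo."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        w_T = rng.gauss(0.0, math.sqrt(T))
        total += math.exp(-theta * w_T - 0.5 * theta**2 * T)
    return total / n_samples

mean_Z = girsanov_check(theta=0.5, T=1.0, n_samples=200_000)
# mean_Z is close to 1, the exact value of E[Z].
```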

1.5 Pricing options

Definition 1.5.1. An option is a financial contract that gives the owner the right, but not the obligation, to make a specified transaction within a specified time.

Definition 1.5.2. A European call option is an option that gives the owner the right, but not the obligation, to buy a share at a given price, known as the strike $K$, at a certain time, known as the maturity date $T$.

Remark 1.5.3. An American call option is similar to a European call option, with the only difference that the owner has the right to exercise the option at any time before the maturity, not only at the expiry date.

There are two important concepts in the economic world, "arbitrage" and "complete market", but there is no unique way to define them. Here is an attempt to define them as simply as possible.

Definition 1.5.4. Arbitrage: "something from nothing". Here specifically: a chance to make money with no possible loss.

Definition 1.5.5. A complete market is a market in which the complete set of possible gambles on future states can be constructed with existing assets.

Remark 1.5.6. The term "exponential Lévy model" is used when the price of a financial asset is represented (under a measure $P$) as the exponential of a Lévy process:
\[
S_t = S_0 e^{rt + X_t}, \tag{2}
\]
where $X_t$ is a Lévy process (under $P$) and $r$ is the interest rate (here assumed to be constant).

Remark 1.5.7 (No arbitrage). A market is arbitrage-free if and only if there is a measure $Q \sim P$ such that $\hat{S}_t = e^{-rt} S_t = S_0 e^{X_t}$ is a $Q$-martingale, where $r$ denotes the interest rate (in our case constant).

Remark 1.5.8 (Completeness). A market is complete if and only if there is a unique measure $Q \sim P$ such that $\hat{S}_t = e^{-rt} S_t = S_0 e^{X_t}$ is a $Q$-martingale.

1.5.1 Pricing a call option

Pricing a call option is one of the most important problems in mathematical finance. It deals with the question of how much the option buyer has to pay the seller per share.

The option price is primarily influenced by several factors:

- the difference between the strike $K$ and the stock price $S_t$,
- the time remaining for exercising the option,
- the volatility of the underlying stock,
- and the interest rate $r$, which has less influence than the previous factors.

A good way to understand how to price a European call option is to start at the maturity time $T$. Say we are now at time $T$, the stock price is $S_T$ and the strike is $K$. The usual argument is as follows.

- If the option price $C_T > (S_T - K)^+$, no one will buy the option, because the buyer would lose money for sure.

- If the option price $C_T < (S_T - K)^+$, everyone wants to buy the option, but in this case the seller would lose money for sure.

For that reason, the fair price of the option at time $T$ is
\[
C_T = (S_T - K)^+.
\]

At an earlier time $t < T$, under a risk-neutral measure $Q$, the fair price is given by discounting the expected payoff:
\[
C_t = e^{-r(T-t)} E^Q\left[(S_T - K)^+ \mid S_t = s\right]. \tag{3}
\]
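The risk-neutral pricing formula $C_t = e^{-r(T-t)} E^Q[(S_T - K)^+ \mid S_t = s]$ can be approximated by Monte Carlo simulation under any model for $S_T$. A minimal Python sketch (names ours; the lognormal terminal price is only an illustration, anticipating the Black-Scholes model of the next subsection):

```python
import math
import random

def mc_call_price(s0, K, r, T, sample_ST, n_samples, seed=0):
    """Monte Carlo estimate of the risk-neutral price: discount the
    average payoff (S_T - K)^+ over simulated terminal prices S_T."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_samples):
        payoff_sum += max(sample_ST(rng) - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_samples

# Illustration with a lognormal (Black-Scholes type) terminal price;
# any risk-neutral model for S_T could be plugged in instead.
s0, K, r, T, sigma = 100.0, 100.0, 0.05, 1.0, 0.2
price = mc_call_price(
    s0, K, r, T,
    sample_ST=lambda rng: s0 * math.exp(
        (r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * rng.gauss(0, 1)),
    n_samples=100_000)
# price is close to the Black-Scholes value (about 10.45 here)
```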

1.5.2 Option pricing for the Black-Scholes model

The Black-Scholes model is one of the most important models in modern financial theory. It is named after Fischer Black and Myron Scholes, who developed it in 1973.

The stock price is given (under some measure $P$) as an exponential Lévy model (2), where $X_t = \mu t + \sigma W_t$.

The stock price $S_t$ can also be written as the solution of the stochastic differential equations
\[
dS_t = \left(\mu + \frac{\sigma^2}{2}\right) S_t\, dt + \sigma S_t\, dW_t \quad \text{(stock)},
\]
\[
dB_t = r B_t\, dt \quad \text{(bond)},
\]
where $W_t$ is a standard Wiener process, $\mu$ represents a drift, $\sigma$ is a dispersion/volatility, and $r$ is the bank interest rate. The parameters $\mu$, $\sigma$ and $r$ are assumed to be constant. The volatility can be interpreted as a measure of risk.


Figure 1: One path of Xt for the Black-Scholes model describing a stock price.
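A path like the one in Figure 1 can be generated by simulating $X_t = \mu t + \sigma W_t$ on a time grid (the Gaussian increments of Brownian motion are exact on the grid). A minimal Python sketch (function name ours):

```python
import math
import random

def bs_log_path(mu, sigma, T, n_steps, seed=0):
    """Simulate one path of X_t = mu*t + sigma*W_t, the log-price
    process of the Black-Scholes model, as in Figure 1."""
    rng = random.Random(seed)
    dt = T / n_steps
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        # Increment over dt: mu*dt plus a N(0, sigma^2*dt) draw.
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = bs_log_path(mu=0.1, sigma=0.3, T=1.0, n_steps=1000)
```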

There is an equivalent measure $Q$, obtained from the Girsanov theorem (Theorem 1.4.2), where $\Theta$ in this case is given by
\[
\Theta = \frac{\mu - r + \frac{\sigma^2}{2}}{\sigma}.
\]
Then
\[
Z = \frac{dQ}{dP} = \exp\left(-\frac{\mu - r + \frac{\sigma^2}{2}}{\sigma}\, W_T - \frac{1}{2}\left(\frac{\mu - r + \frac{\sigma^2}{2}}{\sigma}\right)^2 T\right)
\]
and
\[
\widetilde{W}_t = W_t + \frac{\mu - r + \frac{\sigma^2}{2}}{\sigma}\, t
\]
is a $Q$-standard Wiener process. The Black-Scholes model under the measure $Q$ is
\[
dS_t = \left(\mu + \frac{\sigma^2}{2}\right) S_t\, dt + \sigma S_t\, dW_t
= \left(\mu + \frac{\sigma^2}{2}\right) S_t\, dt + \sigma S_t \left(d\widetilde{W}_t - \frac{\mu - r + \frac{\sigma^2}{2}}{\sigma}\, dt\right)
\]
\[
= \left(\mu + \frac{\sigma^2}{2}\right) S_t\, dt + \sigma S_t\, d\widetilde{W}_t + r S_t\, dt - \left(\mu + \frac{\sigma^2}{2}\right) S_t\, dt
= r S_t\, dt + \sigma S_t\, d\widetilde{W}_t,
\]
where $\widetilde{W}_t$ is a standard Wiener process.

Hence
\[
S_t = S_0 e^{\left(r - \frac{\sigma^2}{2}\right) t + \sigma \widetilde{W}_t} = S_0 e^{rt + \widetilde{X}_t},
\]
where $\widetilde{X}_t = \widetilde{B}_t = -\frac{\sigma^2}{2} t + \sigma \widetilde{W}_t$, i.e. $\widetilde{X}_t$ has Lévy triplet $(\sigma^2, 0, 0)$; to see that, recall (1). Note that the Girsanov theorem does not change $\sigma$; only the drift is changed: $\mu$ is replaced by $-\frac{\sigma^2}{2}$.

It is easy to see that $\hat{S}_t = e^{-rt} S_t$ is a $Q$-martingale, since $\hat{S}_t = S_0 e^{-\frac{\sigma^2}{2} t + \sigma \widetilde{W}_t}$ and hence
\[
E^Q\left[\hat{S}_t \mid \mathcal{F}_s\right]
= E^Q\left[\hat{S}_s e^{-\frac{\sigma^2}{2}(t-s) + \sigma(\widetilde{W}_t - \widetilde{W}_s)} \mid \mathcal{F}_s\right]
= \hat{S}_s\, e^{-\frac{\sigma^2}{2}(t-s)}\, E^Q\left[e^{\sigma(\widetilde{W}_t - \widetilde{W}_s)} \mid \mathcal{F}_s\right]
= \hat{S}_s,
\]
where the well-known exponential martingale equality $E^Q\left[e^{\sigma(\widetilde{W}_t - \widetilde{W}_s)} \mid \mathcal{F}_s\right] = e^{\frac{\sigma^2}{2}(t-s)}$ was used. Hence $Q$ is risk-neutral. One can show that the Black-Scholes model admits no other risk-neutral measure $Q$, see [?]. Hence, by Remarks 1.5.7 and 1.5.8, the Black-Scholes model is arbitrage-free and complete.

Pricing the option: recalling the European call option on a stock price $S_t$ given by the Black-Scholes model, with strike $K$ and maturity date $T$, the Black-Scholes option pricing formula is:

\[
C^{BS}(S, K, \tau, \sigma) = e^{-r\tau} E^Q\left[(S_T - K)^+ \mid S_t = S\right]
= e^{-r\tau} E^Q\left[\left(S\, e^{\left(r - \frac{\sigma^2}{2}\right)\tau + \sigma \widetilde{W}_\tau} - K\right)^+\right]
= S\, N(d_+) - K e^{-r\tau} N(d_-), \tag{4}
\]
with $\tau = T - t$, where
\[
d_{\pm} = \frac{\ln\frac{S}{K} + \tau\left(r \pm \frac{\sigma^2}{2}\right)}{\sigma\sqrt{\tau}}
\]
and
\[
N(u) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{u} e^{-\frac{z^2}{2}}\, dz.
\]

1.5.3 Option pricing for the Merton model

The Merton model extends the Black-Scholes model by adding discrete jumps to the Brownian motion: a jump-diffusion model (2) is obtained, where $X_t = \mu t + \sigma W_t + \sum_{s \le t} \Delta X_s$, i.e.

\[
X_t = \mu t + \sigma W_t + \sum_{i=1}^{N_t} Y_i, \tag{5}
\]
where $(N_t)_{t \ge 0}$ is a Poisson process with intensity $\lambda$ and the jump sizes $Y_i$ are assumed to have a Gaussian distribution: $Y_i \sim N(m, \delta^2)$.

In [2] there is a brief discussion of option pricing in the Merton model. Here more detailed calculations are presented.

The stock price $S_t$ satisfies the stochastic differential equations
\[
dS_t = \left(\mu + \frac{\sigma^2}{2}\right) S_t\, dt + \sigma S_t\, dW_t + S_t\left(e^{Y_1} - 1\right) dN_t \quad \text{(stock)},
\]
\[
dB_t = r B_t\, dt \quad \text{(bond)}.
\]
By Girsanov's theorem (Theorem 1.4.2) there is a risk-neutral measure $Q$ such that $e^{-rt} S_t$ is a $Q$-martingale, where $\Theta$ is defined as
\[
\Theta = \frac{\mu - r + \frac{\sigma^2}{2} + \lambda E\left(e^{Y_1} - 1\right)}{\sigma}.
\]
Then
\[
Z = \frac{dQ}{dP} = \exp\left(-\Theta W_T - \frac{1}{2}\Theta^2 T\right)
\]
and
\[
\widetilde{W}_t = W_t + \Theta\, t
\]
is a standard $Q$-Wiener process. The Merton model under the measure $Q$ is
\[
dS_t = \left(\mu + \frac{\sigma^2}{2}\right) S_t\, dt + \sigma S_t\, dW_t + S_t\left(e^{Y_1} - 1\right) dN_t
= \left(\mu + \frac{\sigma^2}{2}\right) S_t\, dt + \sigma S_t\left(d\widetilde{W}_t - \Theta\, dt\right) + S_t\left(e^{Y_1} - 1\right) dN_t
\]
\[
= \left(r - \lambda E\left(e^{Y_1} - 1\right)\right) S_t\, dt + \sigma S_t\, d\widetilde{W}_t + S_t\left(e^{Y_1} - 1\right) dN_t.
\]


Figure 2: One path of $X_t$ for the Merton model describing a stock price.

Then, under $Q$,
\[
S_t = S_0 \exp\left(\left(r - \frac{\sigma^2}{2} - \lambda E\left(e^{Y_1} - 1\right)\right) t + \sigma \widetilde{W}_t + \sum_{i=1}^{N_t} Y_i\right).
\]
Let
\[
\mu^M = r - \frac{\sigma^2}{2} - \lambda E\left(e^{Y_1} - 1\right) = r - \frac{\sigma^2}{2} - \lambda\left(e^{m + \frac{\delta^2}{2}} - 1\right),
\]
so that $S_t = S_0 e^{\mu^M t + \sigma \widetilde{W}_t + \sum_{i=1}^{N_t} Y_i} = S_0 e^{rt + \widetilde{X}_t}$, where $\widetilde{X}_t = \widetilde{B}_t + \sum_{i=1}^{N_t} Y_i$ and $\widetilde{B}_t = (\mu^M - r)t + \sigma \widetilde{W}_t$. It means $\widetilde{X}_t$ has Lévy triplet $(\sigma^2, \nu, 0)$, according to (1), where $\widetilde{W}_t$ is a standard Wiener process and
\[
\nu(dx) = \frac{\lambda}{\sqrt{2\pi}\,\delta} \exp\left(-\frac{(x - m)^2}{2\delta^2}\right) dx. \tag{6}
\]

Note that also here the Girsanov theorem does not change $\sigma$; only the drift is changed. Let us now, for the sake of self-containment, prove that $\hat{S}_t = e^{-rt} S_t$ is a $Q$-martingale. Since
\[
\hat{S}_t = S_0 \exp\left(\left(-\frac{\sigma^2}{2} - \lambda E\left[e^{Y_1} - 1\right]\right) t + \sigma \widetilde{W}_t + \sum_{i=1}^{N_t} Y_i\right),
\]
we have
\[
E^Q\left[\hat{S}_t \mid \mathcal{F}_s\right]
= \hat{S}_s\, e^{-\frac{\sigma^2}{2}(t-s)}\, E^Q\left[e^{\sigma(\widetilde{W}_t - \widetilde{W}_s)} \mid \mathcal{F}_s\right]\, e^{-\lambda E\left[e^{Y_1} - 1\right](t-s)}\, E^Q\left[e^{\sum_{i=N_s+1}^{N_t} Y_i} \mid \mathcal{F}_s\right],
\]
where, as before, $E^Q\left[e^{\sigma(\widetilde{W}_t - \widetilde{W}_s)} \mid \mathcal{F}_s\right] = e^{\frac{\sigma^2}{2}(t-s)}$. Furthermore, conditioning on the number of jumps,
\[
E^Q\left[e^{\sum_{i=N_s+1}^{N_t} Y_i} \mid \mathcal{F}_s\right]
= \sum_{n=0}^{\infty} E\left[e^{\sum_{i=1}^{n} Y_i}\right] P\left(N_t - N_s = n\right)
= \sum_{n=0}^{\infty} \left(E\left[e^{Y_1}\right]\right)^n \frac{(\lambda(t-s))^n}{n!}\, e^{-\lambda(t-s)}
= e^{\lambda\left(E\left[e^{Y_1}\right] - 1\right)(t-s)}.
\]

Hence
\[
E^Q\left[\hat{S}_t \mid \mathcal{F}_s\right]
= \hat{S}_s\, e^{-\frac{\sigma^2}{2}(t-s)}\, e^{\frac{\sigma^2}{2}(t-s)}\, e^{-\lambda E\left[e^{Y_1} - 1\right](t-s)}\, e^{\lambda E\left[e^{Y_1} - 1\right](t-s)} = \hat{S}_s.
\]
Thus $Q$ is a risk-neutral measure. Note that there are other risk-neutral measures $Q$; see for example [2, page 341], where also the jumps are involved in $Z$ given by Theorem 1.4.2. Hence the Merton model is arbitrage-free but incomplete.

Pricing the option: to determine the price of the option, we start from the general option pricing

formula (3) and substitute $S_T$ by its expression in the Merton model. With $\tau = T - t$, conditioning on the number of jumps gives
\[
C_t^M = e^{-r\tau} E^Q\left[\left(S\, e^{\mu^M \tau + \sigma \widetilde{W}_\tau + \sum_{i=1}^{N_\tau} Y_i} - K\right)^+\right]
= \sum_{n=0}^{\infty} e^{-r\tau} E^Q\left[\left(S\, e^{\mu^M \tau + \sigma \widetilde{W}_\tau + \sum_{i=1}^{n} Y_i} - K\right)^+\right] \frac{(\lambda\tau)^n}{n!}\, e^{-\lambda\tau}.
\]
Since $Y_i \sim N(m, \delta^2)$, we have $\sum_{i=1}^{n} Y_i \stackrel{L}{=} nm + \sqrt{n}\,\delta\, N(0,1)$, and, combining the two independent Gaussian terms,
\[
\sigma \widetilde{W}_\tau + \sqrt{n/\tau}\,\delta\, \widetilde{W}'_\tau \stackrel{L}{=} \sigma_n \widetilde{W}_\tau, \qquad \sigma_n := \sqrt{\sigma^2 + \frac{n\delta^2}{\tau}},
\]
where $\stackrel{L}{=}$ denotes equality in law, in other words in distribution. Moreover,
\[
\mu^M = r - \frac{\sigma^2}{2} - \lambda E\left(e^{Y_1} - 1\right) = r - \frac{1}{2}\left(\sigma_n^2 - \frac{n\delta^2}{\tau}\right) - \lambda\left(e^{m + \frac{\delta^2}{2}} - 1\right).
\]
Therefore
\[
C_t^M = \sum_{n=0}^{\infty} e^{-r\tau} E^Q\left[\left(S_n\, e^{\left(r - \frac{\sigma_n^2}{2}\right)\tau + \sigma_n \widetilde{W}_\tau} - K\right)^+\right] \frac{(\lambda\tau)^n}{n!}\, e^{-\lambda\tau}
= \sum_{n=0}^{\infty} e^{-\lambda\tau}\, \frac{(\lambda\tau)^n}{n!}\; C^{BS}(S_n, K, \tau, \sigma_n), \tag{7}
\]
where
\[
S_n = S \exp\left(nm + \frac{n\delta^2}{2} - \lambda\tau\left(e^{m + \frac{\delta^2}{2}} - 1\right)\right)
\]
and $C^{BS}$ is the Black-Scholes European call option price (4).
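Formula (7) is straightforward to implement by truncating the series. A Python sketch (function names ours; the thesis's own implementation, in MATLAB, appears in Appendix A):

```python
import math

def norm_cdf(u):
    """Standard normal CDF N(u), via the error function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def bs_call(S, K, tau, sigma, r):
    """Black-Scholes price (4): S*N(d+) - K*exp(-r*tau)*N(d-)."""
    d_plus = (math.log(S / K) + tau * (r + 0.5 * sigma**2)) / (sigma * math.sqrt(tau))
    d_minus = d_plus - sigma * math.sqrt(tau)
    return S * norm_cdf(d_plus) - K * math.exp(-r * tau) * norm_cdf(d_minus)

def merton_call(S, K, tau, r, sigma, lam, m, delta, n_terms=50):
    """Merton price (7): a Poisson-weighted series of BS prices with
    adjusted spot S_n and volatility sigma_n, truncated at n_terms."""
    total = 0.0
    for n in range(n_terms):
        sigma_n = math.sqrt(sigma**2 + n * delta**2 / tau)
        S_n = S * math.exp(n * m + n * delta**2 / 2
                           - lam * tau * (math.exp(m + delta**2 / 2) - 1))
        weight = math.exp(-lam * tau) * (lam * tau)**n / math.factorial(n)
        total += weight * bs_call(S_n, K, tau, sigma_n, r)
    return total

price = merton_call(S=100.0, K=100.0, tau=1.0, r=0.05,
                    sigma=0.2, lam=0.1, m=-0.1, delta=0.15)
```

As a sanity check, when the jump intensity $\lambda$ tends to zero the series collapses to its $n = 0$ term and the plain Black-Scholes price is recovered.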

1.6 Relative entropy function

In Section 1.4 we defined equivalent measures. Moreover, it is possible to measure how close two equivalent measures are to each other. One such measure is the "relative entropy function", which can be defined as follows:

Definition 1.6.1. Let $P$ and $Q$ be two equivalent probability measures on $(\Omega, \mathcal{F})$. The relative entropy of $Q$ with respect to $P$ is defined as
\[
\varepsilon(Q, P) = E^Q\left[\ln \frac{dQ}{dP}\right] = E^P\left[\frac{dQ}{dP} \ln \frac{dQ}{dP}\right].
\]
The entropy function $\varepsilon(Q, P)$ is a measure of the discrepancy between two probability measures: it describes the amount of inefficiency of assuming that $Q$ is the true distribution of a random variable $X$ when the true one is $P$.
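For discrete distributions the definition reduces to a finite sum, $\varepsilon(Q, P) = \sum_x q(x) \ln\frac{q(x)}{p(x)}$. A small Python illustration (ours, not from the thesis):

```python
import math

def relative_entropy(q, p):
    """Relative entropy eps(Q, P) = sum_x q(x) * ln(q(x)/p(x)) for two
    discrete distributions given as lists of probabilities (Q << P)."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# eps(P, P) = 0, and eps(Q, P) > 0 whenever Q differs from P.
same = relative_entropy([0.5, 0.5], [0.5, 0.5])   # 0.0
diff = relative_entropy([0.7, 0.3], [0.5, 0.5])   # positive
```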

In [2], there is a brief presentation of the entropy function with the explanatory calculations omitted. Here some more details are included.

Note that $\varepsilon(Q, P)$ can be written as
\[
\varepsilon(Q, P) = E^P\left[f\left(\frac{dQ}{dP}\right)\right],
\]
where $f(x) = x \ln x$ is a strictly convex function, i.e.
\[
f(\alpha x_1 + (1 - \alpha) x_2) < \alpha f(x_1) + (1 - \alpha) f(x_2), \quad 0 < \alpha < 1,\ x_1 \neq x_2.
\]

1.6.1 Properties

1. $Q \mapsto \varepsilon(Q, P)$ is a convex function: if $Q_1, Q_2$ are two probability measures equivalent to $P$, then
\[
\varepsilon(\alpha Q_1 + (1 - \alpha) Q_2, P)
= E^P\left[f\left(\frac{d(\alpha Q_1 + (1 - \alpha) Q_2)}{dP}\right)\right]
= E^P\left[f\left(\alpha \frac{dQ_1}{dP} + (1 - \alpha) \frac{dQ_2}{dP}\right)\right]
\]
\[
\le E^P\left[\alpha f\left(\frac{dQ_1}{dP}\right) + (1 - \alpha) f\left(\frac{dQ_2}{dP}\right)\right]
= \alpha\, \varepsilon(Q_1, P) + (1 - \alpha)\, \varepsilon(Q_2, P).
\]

2. $Q \mapsto \varepsilon(Q, P)$ is non-negative. This can be proven by Jensen's inequality, which states that if $f$ is a convex function then $f(E(X)) \le E(f(X))$. Since $f(x) = x \ln x$ is convex,
\[
0 = f(1) = f\left(E^P\left[\frac{dQ}{dP}\right]\right) \le E^P\left[f\left(\frac{dQ}{dP}\right)\right] = \varepsilon(Q, P).
\]

3. $\varepsilon(Q, P) = 0$ if and only if $\frac{dQ}{dP} = 1$ almost surely.

Proposition 1.6.2 (Relative entropy of Lévy processes). Let $P$ and $Q$ be equivalent probability measures on $(\Omega, \mathcal{F})$ generated by exponential Lévy models with Lévy triplets $(\sigma^2, \nu^P, \gamma^P)$ and $(\sigma^2, \nu^Q, \gamma^Q)$, with $\sigma > 0$. The relative entropy $\varepsilon(Q, P)$ is given by
\[
\varepsilon(Q, P) = \frac{T}{2\sigma^2}\left(\gamma^Q - \gamma^P - \int_{-1}^{1} x\left(\nu^Q - \nu^P\right)(dx)\right)^2
+ T \int_{-\infty}^{\infty}\left(\frac{d\nu^Q}{d\nu^P} \ln \frac{d\nu^Q}{d\nu^P} + 1 - \frac{d\nu^Q}{d\nu^P}\right) \nu^P(dx). \tag{8}
\]
If $P$ and $Q$ correspond to risk-neutral exponential Lévy models, the relative entropy reduces to
\[
\varepsilon(Q, P) = \frac{T}{2\sigma^2}\left(\int_{-\infty}^{\infty} (e^x - 1)\left(\nu^Q - \nu^P\right)(dx)\right)^2
+ T \int_{-\infty}^{\infty}\left(\frac{d\nu^Q}{d\nu^P} \ln \frac{d\nu^Q}{d\nu^P} + 1 - \frac{d\nu^Q}{d\nu^P}\right) \nu^P(dx). \tag{9}
\]

Example 4 (Relative entropy of the Merton jump-diffusion process). Let $X_t$ follow the Merton model (5) (defined on a filtered probability space $(\Omega, \mathcal{F}, P)$) with Lévy triplet $(\sigma^2, \nu^P, 0)$. Define a risk-neutral martingale probability measure $Q \sim P$ under which the jump-diffusion process $X_t$ has the triplet $(\sigma^2, \nu^Q, 0)$, where the Lévy measures take the form
\[
\nu^P(dx) = \frac{\lambda^P}{\sqrt{2\pi}\,\delta^P} \exp\left(-\frac{(x - m^P)^2}{2(\delta^P)^2}\right) dx
\]
and
\[
\nu^Q(dx) = \frac{\lambda^Q}{\sqrt{2\pi}\,\delta^Q} \exp\left(-\frac{(x - m^Q)^2}{2(\delta^Q)^2}\right) dx.
\]
In [12], it is stated that the entropy function for the Merton model is
\[
\varepsilon(Q, P) = \frac{T}{2\sigma^2}\left(\lambda^Q\left(e^{m^Q + \frac{(\delta^Q)^2}{2}} - 1\right) - \lambda^P\left(e^{m^P + \frac{(\delta^P)^2}{2}} - 1\right)\right)^2
+ T\lambda^Q \ln \frac{\lambda^Q \delta^P}{\lambda^P \delta^Q}
+ T\lambda^Q\left(\frac{(\delta^Q)^2 + \left(m^Q - m^P\right)^2}{2(\delta^P)^2} - \frac{1}{2}\right)
+ T\left(\lambda^P - \lambda^Q\right). \tag{10}
\]

Now we will show (10) by applying the formula (9) for risk-neutral exponential Lévy models. We find
\[
\frac{1}{T}\,\varepsilon(Q, P) = \frac{1}{2\sigma^2}\left(\int_{-\infty}^{\infty} (e^x - 1)\left(\nu^Q - \nu^P\right)(dx)\right)^2
+ \int_{-\infty}^{\infty} \nu^P(dx) - \int_{-\infty}^{\infty} \nu^Q(dx)
+ \int_{-\infty}^{\infty} \ln \frac{d\nu^Q}{d\nu^P}\, \nu^Q(dx)
\]
\[
= \frac{1}{2\sigma^2}\left(\lambda^Q\left(e^{m^Q + \frac{(\delta^Q)^2}{2}} - 1\right) - \lambda^P\left(e^{m^P + \frac{(\delta^P)^2}{2}} - 1\right)\right)^2
+ \lambda^P - \lambda^Q
+ \lambda^Q \ln \frac{\lambda^Q \delta^P}{\lambda^P \delta^Q}
- \int_{-\infty}^{\infty} \frac{(x - m^Q)^2}{2(\delta^Q)^2}\, \nu^Q(dx)
+ \int_{-\infty}^{\infty} \frac{(x - m^P)^2}{2(\delta^P)^2}\, \nu^Q(dx).
\]
Now
\[
\int_{-\infty}^{\infty} (x - m^Q)^2\, \nu^Q(dx)
= \lambda^Q\left(E^Q(x^2) - 2m^Q E^Q(x) + (m^Q)^2\right)
= \lambda^Q (\delta^Q)^2
\]
and
\[
\int_{-\infty}^{\infty} (x - m^P)^2\, \nu^Q(dx)
= \lambda^Q\left((\delta^Q)^2 + (m^Q)^2 - 2m^Q m^P + (m^P)^2\right)
= \lambda^Q\left((\delta^Q)^2 + \left(m^Q - m^P\right)^2\right).
\]
Thus
\[
\frac{1}{T}\,\varepsilon(Q, P)
= \frac{1}{2\sigma^2}\left(\lambda^Q\left(e^{m^Q + \frac{(\delta^Q)^2}{2}} - 1\right) - \lambda^P\left(e^{m^P + \frac{(\delta^P)^2}{2}} - 1\right)\right)^2
+ \lambda^Q \ln \frac{\lambda^Q \delta^P}{\lambda^P \delta^Q}
+ \lambda^Q\left(\frac{(\delta^Q)^2 + \left(m^Q - m^P\right)^2}{2(\delta^P)^2} - \frac{1}{2}\right)
+ \lambda^P - \lambda^Q,
\]
which is (10).
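The closed-form entropy (10) translates directly into code. A Python sketch (function name ours):

```python
import math

def merton_entropy(T, sigma, lamP, mP, dP, lamQ, mQ, dQ):
    """Relative entropy (10) between two Merton models with Levy
    triplets (sigma^2, nu^P, 0) and (sigma^2, nu^Q, 0)."""
    martingale_term = (lamQ * (math.exp(mQ + dQ**2 / 2) - 1)
                       - lamP * (math.exp(mP + dP**2 / 2) - 1))
    jump_term = (lamQ * math.log(lamQ * dP / (lamP * dQ))
                 + lamQ * ((dQ**2 + (mQ - mP)**2) / (2 * dP**2) - 0.5)
                 + lamP - lamQ)
    return T * (martingale_term**2 / (2 * sigma**2) + jump_term)

# The entropy vanishes when Q and P share the same jump parameters.
zero = merton_entropy(T=1.0, sigma=0.2,
                      lamP=0.1, mP=-0.1, dP=0.15,
                      lamQ=0.1, mQ=-0.1, dQ=0.15)   # 0.0
```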

1.7 Direct problem and inverse problem

To find the price of an option in a market, we need to know at least:

- a model that is reasonable for the market (a Lévy triplet, if the market model is assumed to be a Lévy process),
- the maturity time $T$,
- the strike $K$.

Then we can find the price by
\[
C_t = e^{-r(T-t)} E^Q\left[(S_T - K)^+ \mid S_t = s\right].
\]
This can be considered a direct problem. Whenever there is a direct problem, there exists an inverse one. The inverse problem in our case is that we know the option prices $C_i$ in a market, together with the option strikes $K_i$ and maturities $T_i$, and our aim is to find a model of the market. There is always an error between real option prices and modeled option prices; that error can be considered as noise. The noise can lead to enormous errors while solving the inverse problem, and if that is the case the problem is ill-posed. To deal with this type of problem we need regularization methods. For that, we shall preface the detailed exposition of inverse problems and regularization methods with brief intuitive arguments in the next chapter.

2 Inverse problem and regularization methods

This section contains an introduction and some definitions concerning inverse problems and regularization methods, which can be found in [9]. Let $X$ and $Y$ be two Hilbert spaces and $A : X \to Y$ an operator. The problem of finding $y$ given $x$, where $y = A(x)$, can be considered as a direct problem. The inverse problem is the problem where we have $y$ and need to find $x$ from the equation $y = A(x)$.

Definition 2.0.1. The inverse problem is the problem for which we have the results (data) $y$ and need to find the cause, i.e. solve the equation
\[
y = A(x) \tag{11}
\]
with respect to $x$.

However, in reality there is always some noise $\delta$ in the data that we measure. Therefore the problem can be formulated as finding a solution of the equation
\[
y^\delta = A(x) + \delta,
\]
where the measured data $y^\delta$ is known. Schematically: $x \xrightarrow{A} y \xrightarrow{+\delta} y^\delta$.

We recall the definition of a well-posed problem in the sense of Hadamard (see for example [9]).

Definition 2.0.2. The problem of solving y = A(x) is called well-posed if:

1. Existence: there exists a solution. 2. Uniqueness: there is at most one solution of the problem.

3. Stability (continuity): the solution depends continuously on the data.

If the problem does not have one of these properties it is called ill-posed.
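A classic illustration of an ill-posed problem (our example, not from the thesis) is numerical differentiation of noisy data: existence and uniqueness hold, but stability fails, since a data error of size $\delta$ produces a derivative error of order $\delta/\Delta t$:

```python
import math
import random

def central_diff(samples, dt):
    """Approximate the derivative of sampled data by central differences."""
    return [(samples[i + 1] - samples[i - 1]) / (2 * dt)
            for i in range(1, len(samples) - 1)]

n, dt, delta = 1000, 0.001, 1e-3
rng = random.Random(0)
t = [i * dt for i in range(n)]
clean = [math.sin(x) for x in t]
noisy = [y + rng.uniform(-delta, delta) for y in clean]

# Maximum error against the exact derivative cos(t).
err_clean = max(abs(d - math.cos(x))
                for d, x in zip(central_diff(clean, dt), t[1:-1]))
err_noisy = max(abs(d - math.cos(x))
                for d, x in zip(central_diff(noisy, dt), t[1:-1]))
# err_clean is tiny, while err_noisy is of order delta/dt = 1: the
# solution does not depend continuously on the data.
```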

2.1 How to solve the inverse problem?

We want to solve $y^\delta = A(x) + \delta$, which is a continuous problem. Numerically, we have to transform it into a finite-dimensional problem, i.e. we approximate $x$ and $y$ by values in finite-dimensional spaces. This process is called discretization. Let $x = (x_1, \ldots, x_n)$ and $y^\delta = (y_1^\delta, \ldots, y_n^\delta)$. Then our problem can be written as
\[
A \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} + \begin{pmatrix} \delta_1 \\ \vdots \\ \delta_n \end{pmatrix} = \begin{pmatrix} y_1^\delta \\ \vdots \\ y_n^\delta \end{pmatrix}.
\]
The most common way of solving such a problem is the non-linear least squares (NLS) method (sometimes it is better to use weighted non-linear least squares). The minimization can simply be expressed as
\[
x^* = \arg\min_{x} \sum_{i=1}^{n} \left((Ax)_i - y_i^\delta\right)^2,
\]
where $y_i^\delta$, $i = 1 : n$, are the measured data.

2.2 Regularization method

Regularization, in general terms, is the approximation of an ill-posed problem by a family of neighboring well-posed problems.

Definition 2.2.1. [9] Let $A : X \to Y$ be a linear and bounded operator between Hilbert spaces. A regularization strategy is a family of linear and bounded operators $R_\alpha : Y \to X$, $\alpha > 0$, such that
\[
R_\alpha y^\delta = x^{\alpha, \delta} \to x \quad \text{as } \delta \to 0, \quad \text{for all } x,
\]
where $\|y - y^\delta\| \le \delta$. Note that $\|R_\alpha\| \to \infty$ as $\alpha \to 0$.

Definition 2.2.1. [9] Let A : X Y be a linear and bounded operator between Hilbert spaces. A Regularization→ Strategy is a family of linear and bounded operators Rα : Y X ; α> 0 such that → δ α,δ Rαy = x x as δ 0 forall x → → δ where y y δ. Note that Rα . − ≤ → ∞ We can estimate the error between the exact and computed solution by using the triangle inequality and norm properties:

α,δ δ x x Rαy Rαy + Rαy x − ≤ − − δ Rα . y y + RαAx x (12) ≤ − − δ Rα + RαAx x . ≤ − error

Figure 3: Behavior of the total error.

Here the first term describes the error in the data multiplied by $\|R_\alpha\|$, which goes to infinity as $\alpha \to 0$. The second term denotes the approximation error $\|R_\alpha A x - x\| = \|(R_\alpha - A^{-1}) y\|$, and this term goes to zero as $\alpha \to 0$, as illustrated in Figure 3. A delicate problem is to choose $\alpha = \alpha(\delta)$ in order to keep the total error as small as possible.

2.3 Tikhonov Regularization

Andrey Nikolayevich Tikhonov developed the Tikhonov regularization method for the approximation of solutions in 1963, and his approach led to a very good mathematical theory which has been developed over the past four decades. Here we introduce his method.

Definition 2.3.1. Let $A : X \to Y$ be a linear and bounded operator between two Hilbert spaces, and let $\alpha > 0$ be a given constant. The approximate solution $x^\alpha$ of the inverse problem can be found by minimizing the Tikhonov functional
\[
J_\alpha(x) := \|Ax - y^\delta\|_Y^2 + \alpha \|x\|_X^2, \quad x \in X, \tag{13}
\]
i.e.
\[
x^\alpha = \arg\min_{x} J_\alpha(x),
\]
provided that a minimizer exists, which follows from Theorem 2.3.2. The term $\alpha \|x\|_X^2$ is called the penalty term, the functional (13) is called the Tikhonov functional, and the parameter $\alpha > 0$ is called the regularization parameter.

A common method to find $\alpha$ is the discrepancy principle of Morozov. In this method we choose $\alpha = \alpha(\delta, y^\delta)$ and $x^\alpha$ such that
\[
\rho_1 \delta \le \|A x^\alpha - y^\delta\| \le \rho_2 \delta \quad \text{for two constants } 1 < \rho_1 < \rho_2.
\]

Theorem 2.3.2. Let $A : X \to Y$ be a linear and bounded operator between Hilbert spaces, and let $\alpha > 0$. Then the Tikhonov functional $J_\alpha$ has a unique minimum $x^\alpha \in X$. This minimum is the unique solution of the normal equation
\[
\alpha x^\alpha + A^* A x^\alpha = A^* y, \tag{14}
\]
where $A^* : Y \to X$ is the adjoint operator of $A$. The solution $x^\alpha$ of equation (14) can be written in the form $x^\alpha = R_\alpha y$ with
\[
R_\alpha := (\alpha I + A^* A)^{-1} A^*,
\]
where $I$ is the identity operator.
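For a discretized linear operator $A$, the regularized solution of the normal equation (14) can be computed directly. A self-contained Python sketch using only the standard library (function name ours):

```python
def tikhonov_solve(A, y, alpha):
    """Solve the normal equation (14): (alpha*I + A^T A) x = A^T y,
    for a small dense matrix A, by Gaussian elimination."""
    n = len(A[0])
    # Build M = alpha*I + A^T A and b = A^T y.
    M = [[alpha * (i == j) + sum(A[k][i] * A[k][j] for k in range(len(A)))
          for j in range(n)] for i in range(n)]
    b = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# A has a well-determined first column and a tiny second one: the
# penalty term damps the poorly determined component.
A = [[1.0, 0.0], [0.0, 1e-3], [0.0, 0.0]]
x = tikhonov_solve(A, y=[1.0, 1e-3, 0.5], alpha=1e-6)
# x[0] is close to 1; x[1] is 0.5, damped from the unregularized 1.
```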

3 Calibration problem

In a market where options are known and traded, their prices $C_i$, maturities $T_i$, and strikes $K_i$ are available. These can be considered as a source of information and used for selecting a model describing the market.

Selecting a model from data can be done in at least two ways:

- by econometric analysis of the time series of prices of call options,
- by solving the calibration problem, which will be introduced in this chapter.

The calibration problem is a problem in which we try to identify a vector of parameters $\theta^Q$ which produces a model consistent with observed option prices in the market.

In other words: the calibration problem is to look at the prices $(C_i)_{i \in I}$ of a set of call options with different strikes $K_i$ and maturities $T_i$, and search for a risk-neutral model which prices the options correctly, i.e.
\[
C_i^{market} \approx C_i^{model} = e^{-r T_i} E^Q\left[(S_{T_i} - K_i)^+ \mid S_t = s\right].
\]
This problem can be considered as an inverse problem to the option pricing problem, which we introduced in subsection 1.5.1. Comparing with the classical inverse problem $y^\delta = Ax + \delta$, the collected option prices $C_i^{market}$ can be considered as $y^\delta$, and
\[
C_i^{model} = e^{-r T_i} E^Q\left[(S_{T_i} - K_i)^+ \mid S_t = s\right]
\]
can be considered as a non-linear operator $A$. Our aim is to find $\theta^Q = (\sigma, \lambda, m, \delta)$ which satisfies the calibration problem. First, we will use a non-linear least squares method for solving this problem and show its shortcomings. Next, we will apply the regularization approach using relative entropy and discuss its properties.

3.1 Non-linear Least Squares (NLS)

As mentioned in the previous chapter, in many articles it is suggested to use non-linear least squares (NLS) in order to obtain the parameters as a solution to the calibration problem:
\[
\theta^* = \arg\min_{\theta} \sum_{i=1}^{N} w_i \left(C^\theta(T_i, K_i) - C_i\right)^2, \tag{15}
\]
where the $w_i$ are weights, whose choice is discussed below. The risk-neutral parameter vector $\theta^Q$ is chosen by minimizing the sum of squared errors between market prices and model prices.

3.1.1 Choice of weights $w_i$

The relative weights $w_i$ of the option prices should reflect our confidence in the individual data points: we trust option prices with a higher trade volume more than option prices with a lower trade volume. The weights are therefore chosen to balance the magnitudes of the different terms, and can be assessed from the bid-ask spreads (the difference between the lowest price for which a seller is willing to sell an asset and the highest price that a buyer is willing to pay for it):
\[
w_i = \frac{1}{\left|C_i^{bid} - C_i^{ask}\right|^2}, \quad i = 1 : N.
\]
However, bid-ask prices are not always available in option price databases. A reasonable substitute for $w_i$ is then based on the Black-Scholes "vega" (the first derivative of the Black-Scholes price with respect to volatility), which shows how sensitive the option prices are with respect to the Black-Scholes volatility. Then $w_i$ can be written as
\[
w_i = \frac{1}{(vega_i)^2} = \frac{1}{\left(K_i e^{-r T_i} N'(d_-) \sqrt{T_i}\right)^2}, \quad i = 1 : N,
\]
or
\[
w_i = \frac{1}{vega_i}, \quad i = 1 : N,
\]
which is what is done in my implementation.
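The vega-based weights can be computed in closed form. A Python sketch (ours, not the thesis's MATLAB implementation):

```python
import math

def bs_vega(S, K, tau, sigma, r):
    """Black-Scholes vega dC/dsigma = K*exp(-r*tau)*sqrt(tau)*N'(d_-),
    where N' is the standard normal density."""
    d_minus = (math.log(S / K) + tau * (r - 0.5 * sigma**2)) / (sigma * math.sqrt(tau))
    pdf = math.exp(-0.5 * d_minus**2) / math.sqrt(2.0 * math.pi)
    return K * math.exp(-r * tau) * math.sqrt(tau) * pdf

# Weights w_i = 1/vega_i for a strip of strikes; vega peaks near the
# money, so in- and out-of-the-money options get larger weights.
S, r, tau, sigma = 100.0, 0.05, 1.0, 0.2
weights = [1.0 / bs_vega(S, K, tau, sigma, r) for K in (80.0, 100.0, 120.0)]
```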

3.1.2 What is wrong with NLS?

The calibration problem (15) is ill-posed for the following reasons:

1. As we can see in Figure 4, the NLS functional is not convex, and therefore the solution is not unique; see Figure 5.

2. The solution is very sensitive to the initial value (instability), so an estimate is likely to get stuck at a local minimum. In Figure 5 we can see that if we choose, for example, two different initial points $a$ and $b$, we obtain two different local minima $a^*$ and $b^*$, respectively.

3.2 Regularization by relative entropy

As we have seen in 3.1, the non-linear least squares method does not resolve the uniqueness and stability issues of the calibration problem. One way to enforce uniqueness and stability of the solution is to inject prior information into the problem, by specifying a prior model $P$ and adding a convex penalization term to the non-linear least squares criterion. The prior parameters can be estimated by various techniques depending on the measured data. However, we do not trust the data completely, due to non-liquidity (small trade volume), so the domain of reasonable parameters usually reflects a personal view of the history of the market.

23 5000

4500

4000

3500

3000

2500 NLS 2000

1500

1000

500

0 −2 −1.5 −1 0 −0.5 0.5 0 1 0.5 1.5 2 1 2.5 1.5 3 3.5 2 4

lambda m

Figure 4: The NLS functional is a non-convex function.


Figure 5: The sensitivity of the NLS functional to the initial values.

A common choice of the regularization term is the relative entropy $\varepsilon(Q, P)$ of the pricing measure $Q$ with respect to the prior model $P$:
\[
\varepsilon(Q, P) = E^Q\left[\ln \frac{dQ}{dP}\right] = E^P\left[\frac{dQ}{dP} \ln \frac{dQ}{dP}\right].
\]
This choice is natural because of the following properties of the relative entropy:

- The relative entropy function is a convex function (see Figure 8), as we proved in (1.6.1).

Figure 6: The relative entropy for the Merton model as a function of λ and m.

• The relative entropy function preserves absolute continuity of the calibrated measure Q with respect to the prior: if the Lévy measure approaches zero at some points where the prior P is nonzero, the gradient of the relative entropy term pushes it away from zero by becoming arbitrarily large.

• By choosing the prior to be a risk-neutral measure, we are ensured that the solution of the calibration problem will be an equivalent martingale measure.

• The relative entropy function is easy to calculate; it can be obtained by applying Proposition 1.6.2 when working with exponential Lévy processes.
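The first property can be checked numerically. The following sketch (illustrative Python, not part of the thesis code) evaluates the relative entropy of one Gaussian jump-size distribution with respect to another, both in closed form and as a Monte Carlo estimate of E^Q[ln dQ/dP]:

```python
import math, random

def gauss_kl(mq, dq, mp, dp):
    """Closed-form relative entropy of N(mq, dq^2) with respect to N(mp, dp^2)."""
    return math.log(dp / dq) + (dq ** 2 + (mq - mp) ** 2) / (2 * dp ** 2) - 0.5

def mc_kl(mq, dq, mp, dp, n=200000, seed=1):
    """Monte Carlo estimate of E^Q[ln(dQ/dP)] by sampling from Q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mq, dq)
        log_q = -math.log(dq) - 0.5 * ((x - mq) / dq) ** 2  # log-densities up to a
        log_p = -math.log(dp) - 0.5 * ((x - mp) / dp) ** 2  # common constant
        total += log_q - log_p
    return total / n

# entropy is zero iff Q = P, and strictly positive otherwise
assert gauss_kl(0.5, 0.1, 0.5, 0.1) == 0.0
```

The convexity and non-negativity of this quantity are what make it a usable penalty.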

The regularized calibration problem then becomes to minimize the functional

$$J(\theta) = \sum_{i=1}^{N} w_i \left( C^{\theta}(T_i, K_i) - C_i \right)^2 + \alpha\, \varepsilon(Q, P) \qquad (16)$$

over the set of parameters θ that correspond to martingale measures equivalent to the prior P, where α is a regularization parameter.
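Structurally, the functional (16) is a weighted least-squares term plus a scaled entropy penalty. A minimal sketch (Python; the helper names `price_model` and `entropy` are hypothetical placeholders, not thesis code):

```python
def regularized_objective(theta, quotes, weights, alpha, price_model, entropy):
    """J(theta): weighted squared pricing errors plus alpha times the entropy
    penalty. quotes is a list of (maturity, strike, market_price) triples."""
    nls = sum(w * (price_model(theta, t, k) - c) ** 2
              for w, (t, k, c) in zip(weights, quotes))
    return nls + alpha * entropy(theta)

# toy usage: a linear "pricing model" and a quadratic penalty around theta = 1
toy_price = lambda theta, t, k: theta * t + k
toy_entropy = lambda theta: (theta - 1.0) ** 2
quotes = [(1.0, 2.0, 3.5), (2.0, 1.0, 3.0)]
j = regularized_objective(1.0, quotes, [1.0, 1.0], 0.5, toy_price, toy_entropy)
```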

3.2.1 Choice of regularization parameter

The regularization parameter α cannot be fixed a priori, because it depends on the level of noise present in the data.

• If we choose α small, we trust the new information contained in the market option prices relatively more than the prior information.

• If we choose α large enough, we trust the prior information relatively more.

• If α → 0, the calibration problem reduces to nonlinear least squares.

Therefore the correct choice of α is important. A good way to compute the regularization parameter is the discrepancy principle of Morozov. If we choose α large enough, the convexity of the relative entropy makes the non-convex objective function J(θ) convex, as shown in Figure 7. The minimization of the functional J(θ) then has a unique and stable solution.
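To illustrate the discrepancy principle, the sketch below (a toy scalar Tikhonov problem in Python, not the option-calibration setting) bisects on α until the residual matches a prescribed noise level δ:

```python
# min_x (a*x - b)^2 + alpha*(x - x0)^2; Morozov: pick alpha with residual = delta.
def tikhonov_solution(a, b, x0, alpha):
    """Closed-form minimizer of (a*x - b)^2 + alpha*(x - x0)^2."""
    return (a * b + alpha * x0) / (a * a + alpha)

def residual(a, b, x0, alpha):
    return abs(a * tikhonov_solution(a, b, x0, alpha) - b)

def morozov_alpha(a, b, x0, delta, lo=1e-8, hi=1e8, iters=200):
    """Bisect on alpha: here the residual grows monotonically with alpha."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if residual(a, b, x0, mid) < delta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

alpha = morozov_alpha(a=2.0, b=3.1, x0=1.0, delta=0.05)
```

The idea carries over to (16): increase α until the weighted pricing residual is of the same order as the noise in the quotes, so that the data are fitted but not over-fitted.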

Figure 7: The convexity of the regularized functional J over λ and m.

4 Numerical implementation

Note: all computations are done using Matlab; the codes are given in Appendix A. Let us suppose the market follows the Merton model with triplet (σ = 0.8, ν, 0), where ν is as in (6) with m = 0.5, δ = 0.1 and λ = 0.8, and at time t = 0 the stock price is S_0 = 100. The direct problem is to generate the option prices C_i^* for different maturities T_i and strikes K_i by applying formula (7). As discussed before, the inverse problem is to use C_i^*, T_i and K_i to find the best triplet (σ, ν, 0) describing the market. In reality the prices of course differ from the Merton model; therefore I added noise to the Merton option prices to simulate real data, i.e.

$$C_i = C_i^{*} + \text{noise}.$$
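For reference, the direct problem — pricing a call under the Merton model as a Poisson-weighted series of Black-Scholes prices, as in formula (7) — can be sketched in Python as follows (my own sketch, not the thesis's Matlab; the truncation level `n_max` and all names are assumptions):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s0, k, sigma, t, r):
    """Black-Scholes call price."""
    d_plus = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d_minus = d_plus - sigma * math.sqrt(t)
    return s0 * norm_cdf(d_plus) - k * math.exp(-r * t) * norm_cdf(d_minus)

def merton_call(s0, k, t, r, m, delta, lam, sigma, n_max=20):
    """Merton call: sum over the number of jumps n of Poisson weights times
    Black-Scholes prices with jump-adjusted spot s_n and volatility sigma_n."""
    kappa = math.exp(m + 0.5 * delta ** 2) - 1.0  # expected relative jump size
    price = 0.0
    for n in range(n_max + 1):
        s_n = s0 * math.exp(n * m + 0.5 * n * delta ** 2 - lam * t * kappa)
        sigma_n = math.sqrt(sigma ** 2 + n * delta ** 2 / t)
        weight = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
        price += weight * bs_call(s_n, k, sigma_n, t, r)
    return price
```

With λ = 0 the series collapses to the single Black-Scholes term, which gives a quick sanity check of the implementation.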

The calibration problem, in this case, is to identify the characteristic triplet of the Lévy process, which is defined by the vector of parameters $\theta^{Q} = \left(\sigma^{Q}, \lambda^{Q}, m^{Q}, \delta^{Q}\right)$. First, we use the NLS method, which amounts to minimizing the function:

$$\sum_{i=1}^{N} \frac{1}{\left(K_i e^{-rT_i} N'(d_-)\sqrt{T_i}\right)^2} \left( \sum_{n \ge 0} \frac{e^{-\lambda T_i} (\lambda T_i)^n}{n!}\, C^{BS}(S_n, K_i, T_i, \sigma_n) - C_i \right)^2. \qquad (17)$$

For simplicity, we fixed the two parameters σ and δ and tried to find λ and m. Minimizing the previous sum, we got two minimum points for λ and m depending on the initial value. In Figure 5 you can see that the minimization gets stuck at the point (λ = 1.8252, m = −0.4201) when the initial value is (λ = 1.2, m = −0.3). If, on the other hand, we change the initial values to (λ = 0.4, m = 0.6), minimizing leads to (λ = 0.8102, m = 0.5021), which confirms that the NLS solution is neither unique nor stable, i.e. the problem is ill-posed. Next, we apply regularized calibration with relative entropy. There are four main steps in the numerical solution:

1. Choice of the weights: as suggested above, the vegas are good choices of weights.

2. Choice of the prior measure P: there are two ways, either using the data (but we do not trust the data completely, so this is not recommended), or relying on the point of view of the trader, i.e. on his/her experience in the market. In this implementation two parameters are fixed, and the following choices were made: δ^P = δ = 0.1, σ^P = σ = 0.8, m^P = 0.47 and λ^P = 0.78.

3. Choice of regularization parameter α: we choose α = 0.03.

4. Solution of the regularization problem for given α and P by minimizing the J functional, which is the sum of (17) and (10), i.e.

$$J(\theta) = \sum_{i=1}^{N} \frac{1}{\left(K_i e^{-rT_i} N'(d_-)\sqrt{T_i}\right)^2} \left( \sum_{n \ge 0} \frac{e^{-\lambda T_i} (\lambda T_i)^n}{n!}\, C^{BS}(S_n, K_i, T_i, \sigma_n) - C_i \right)^2$$
$$\quad +\ \alpha \Bigg[ \frac{T}{2\sigma^2} \left( \lambda^{Q} \left( e^{\mu^{Q} + \frac{(\delta^{Q})^2}{2}} - 1 \right) - \lambda^{P} \left( e^{\mu^{P} + \frac{(\delta^{P})^2}{2}} - 1 \right) \right)^2 + T\lambda^{Q} \ln \frac{\lambda^{Q} \delta^{P}}{\lambda^{P} \delta^{Q}} + T\lambda^{Q}\, \frac{\left(\mu^{Q} - \mu^{P}\right)^2 + (\delta^{Q})^2}{2 (\delta^{P})^2} - \frac{3}{2} T\lambda^{Q} + T\lambda^{P} \Bigg]$$
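The entropy term of this functional can be sketched as a small function (Python, parameter names assumed; jump means written as mu and jump standard deviations as delta, with a shared diffusion coefficient sigma):

```python
import math

def merton_entropy(t, mu_q, del_q, lam_q, sigma, mu_p, del_p, lam_p):
    """Relative entropy of a Merton measure Q w.r.t. a prior Merton measure P:
    a drift-mismatch term scaled by 1/(2 sigma^2) plus compound-Poisson terms."""
    drift = (t / (2.0 * sigma ** 2)) * (
        lam_q * (math.exp(mu_q + del_q ** 2 / 2.0) - 1.0)
        - lam_p * (math.exp(mu_p + del_p ** 2 / 2.0) - 1.0)
    ) ** 2
    jumps = (t * lam_q * math.log(lam_q * del_p / (lam_p * del_q))
             + t * lam_q * ((mu_q - mu_p) ** 2 + del_q ** 2) / (2.0 * del_p ** 2)
             - 1.5 * t * lam_q + t * lam_p)
    return drift + jumps

# entropy vanishes (up to rounding) when Q and P coincide
assert abs(merton_entropy(1.0, 0.5, 0.1, 0.8, 0.8, 0.5, 0.1, 0.8)) < 1e-12
```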

The minimization gives λ_reg = 0.7970 and m_reg = 0.4859, independently of the initial value of the minimization procedure. In Figure 8 we compare the true Lévy density with the estimated Lévy density, and we observe the following:

• Since δ was fixed, both curves have the same spread.

• Both curves have almost the same height, since λ_reg ≈ λ.

• There is a small shift, since the means do not coincide exactly.
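The two curves in Figure 8 are scaled Gaussian densities ν(x) = λ N(m, δ²)(x); a small sketch (Python, using the parameter values quoted above) of how their peaks compare:

```python
import math

def merton_levy_density(x, lam, m, delta):
    """Merton Levy density: lam times the N(m, delta^2) pdf evaluated at x."""
    return lam * math.exp(-0.5 * ((x - m) / delta) ** 2) / (delta * math.sqrt(2.0 * math.pi))

# peak heights (each density peaks at its own mean)
true_peak = merton_levy_density(0.5, 0.8, 0.5, 0.1)         # lambda = 0.8, m = 0.5
est_peak = merton_levy_density(0.4859, 0.797, 0.4859, 0.1)  # calibrated values
```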


Figure 8: Comparison of the true L´evy density and the estimated L´evy density

5 Some comments on possible extensions

The model calibration can be done for models other than the Merton model. In fact, a non-parametric estimate of the Lévy density is possible (see [2]). The technique should also be applied to real data, for example the DAX index with European option data. If the Merton model is applied to real data, then it is a delicate question how to estimate all the parameters θ^Q = (σ, λ, m, δ). Economically, it is a matter of which parameter domain is reasonable. Statistically and numerically, we may face severe non-convexity with local minima.

A Matlab Codes

• Black-Scholes model, Figure (1)

%Hiba Nassar
mu=0.5; sigma=2; N=1000;
t=linspace(0,1,N);
dt=t(2)-t(1);
w=cumsum(sqrt(dt)*randn(1,N));        % Brownian motion path
S0=1;
S=S0*exp((mu-0.5*sigma^2)*t+sigma*w); % geometric Brownian motion
plot(t,S)

• Merton model, Figure (2)

%Roger Pettersson
Mertonrnd(1,1,0,1,-.8,0.3,1000,1);
function [t,X]=Mertonrnd(lambda,T,mu,sigma,alpha,delta,ngrid,Ns)
Nvektor=poissrnd(lambda*T,1,Ns);
hopp=alpha+delta*randn(max(Nvektor),Ns);
hoppt=sort(unifrnd(0,T,max(Nvektor),Ns).*(hopp~=0)+T*(hopp==0));
tgrid=linspace(0,T,ngrid)'*ones(1,Ns);
[t preind]=sort([hoppt;tgrid]);
ind=size(preind,1)*ones(size(preind,1),1)*((1:Ns)-1)+preind;
preaddhopp=[hopp;0*tgrid];
addhopp=preaddhopp(ind);
compX=cumsum(addhopp);  % compound Poisson part
W=cumsum([zeros(1,Ns);(sqrt(diff(t)).*normrnd(0,1,size(t,1)-1,Ns))]);
X=mu*t+sigma*W+compX;
plot(t,X)

• Pricing by Black-Scholes

%Hiba Nassar
function cBS=BSpricing(s0,k,sigma,T,r)
dplus=(log(s0./k)+(r+1/2*sigma.^2).*T)./(sigma.*sqrt(T));
dminus=(log(s0./k)+(r-1/2*sigma.^2).*T)./(sigma.*sqrt(T));
cBS=s0.*normcdf(dplus)-k.*exp(-r*T).*normcdf(dminus);

• Pricing by Merton

%Hiba Nassar
function cM=mertonpricing(s0,T,k,r,m,delta,lambda,sigma)
cM=0;
for n=0:4  % truncation of the Poisson series
    sn=s0.*exp(n*m+n*delta^2/2-lambda*T.*exp(m+delta^2/2)+lambda*T);
    sigman=sqrt(sigma.^2+n*delta^2./T);
    cM=cM+exp(-r*T).*(exp(-lambda*T).*(lambda*T).^n)/prod(1:n).*...
        BSpricing(sn,k,sigman,T,r);
end

• Direct and inverse problem

%Hiba Nassar
m0=0.5;      % the mean of Y
T=unifrnd(0,1,1,100);
k=100*ones(1,100);
sigma0=0.8;  % the diffusion
r=0.001;     % the discount
delta0=0.1;  % the variance of Y
s0=100;
lambda0=0.8; % Poisson intensity
cBS0=BSpricing(s0,k,sigma0,T,r);
% pricing by Merton
cM=mertonpricing(s0,T,k,r,m0,delta0,lambda0,sigma0);
theta=0.03;
cM0=cM+theta*randn(1,length(T));  % add noise to the model prices
% inverse problem
w=1./vega(s0,k,sigma0,T,r);
error=@(lambda,m,delta)w*((mertonpricing(s0,T,k,r,m,delta,lambda,sigma0)-cM0).^2)';
%x(1)=lambda, x(2)=m, x(3)=delta
prior=fminsearch(@(x)error(x(1),x(2),delta0),[1.2 -1]);
lambdainv=prior(1)
minv=prior(2)
[lambdav,mv]=meshgrid(0:.2:4,-2:.2:2);
for i=1:length(lambdav)
    for j=1:length(lambdav)
        z(i,j)=error(lambdav(i,j),mv(i,j),delta0);
    end
end
subplot(161),surfc(lambdav,mv,z)
[lambdav,mv]=meshgrid(0:.2:4,-2:.2:2);
for i=1:length(lambdav)
    for j=1:length(lambdav)
        z(i,j)=entropy(max(T),mv(i,j),delta0,lambdav(i,j),...
            sigma0,minv,delta0,lambdainv);
    end
end
subplot(162),surfc(lambdav,mv,z)
xlabel('lambda')
ylabel('m')

zlabel('entropy')
alpha=0.03;
J=@(lambda,m,delta)(w*((mertonpricing(s0,T,k,r,m,delta,lambda,sigma0)-cM0).^2)'+...
    alpha*(ones(1,length(T))*entropy(T,m,delta,lambda,sigma0,.47,delta0,0.78)'));
reg=fminsearch(@(x)J(x(1),x(2),delta0),[1 -1]);
lambdareg=reg(1)
mreg=reg(2)

[lambdav,mv]=meshgrid(0:.2:4,-2:.2:2);
for i=1:length(lambdav)
    for j=1:length(lambdav)
        a(i,j)=J(lambdav(i,j),mv(i,j),delta0);
    end
end
reg=J(lambdareg,mreg,delta0);
subplot(163),surf(lambdav,mv,a)
hold on
subplot(164),plot3(lambdareg,mreg,reg,'r*')
xlabel('lambda')
ylabel('m')
zlabel('J')

[lambdav,mv]=meshgrid(0:.2:4,-2:.2:2);
for i=1:length(lambdav)
    for j=1:length(lambdav)
        q(i,j)=J(lambdav(i,j),mv(i,j),delta0);
    end
end
subplot(165),contour(lambdav,mv,q)
% compare the true Levy density with the estimated one
x=m0+delta0*linspace(-2.9,2.9);
h0=lambda0*normpdf((x-m0)/delta0)/delta0;
h=lambdareg*normpdf((x-mreg)/delta0)/delta0;
subplot(166),plot(x,h0,x,h,'--')
legend('true Levy density','estimated Levy density')

References

[1] Benth, F.E., Option Theory with Stochastic Analysis: An Introduction to Mathematical Finance, Springer-Verlag, Berlin Heidelberg, 2004.

[2] Cont, R. and Tankov, P., Financial Modelling with Jump Processes, Chapman & Hall/CRC Financial Mathematics Series, Chapman & Hall/CRC, 2004.

[3] Cont, R. and Tankov, P., Calibration of jump-diffusion option pricing models: A robust non-parametric approach, Rapport Interne 490, CMAP, École Polytechnique, 2002. Forthcoming in: Journal of Computational Finance.

[4] Cont, R. and Tankov, P., Non-parametric calibration of jump-diffusion option pricing models, Journal of Computational Finance 7, pp. 1-49, 2004.

[5] Cont, R., Tankov, P. and Voltchkova, E., Option pricing models with jumps: integro-differential equations and inverse problems, European Congress on Computational Methods in Applied Sciences and Engineering, pp. 1-20, 2004.

[6] Engl, H.W., Hanke, M., and Neubauer, A., Regularization of Inverse Problems, Mathematics and its Applications, Volume 375, Kluwer Academic Publishers, London, 2000.

[7] Tan, S.M. and Fox, C., Inverse Problems, Course in Physics (707), The University of Auckland.

[8] Karatzas, I. and Shreve, S.E., Methods of Mathematical Finance, Springer Finance (textbook), Springer-Verlag New York, LLC, 2004.

[9] Kirsch, A., An Introduction to the Mathematical Theory of Inverse Problems, Applied Mathematical Sciences 120, Springer, 1998.

[10] Kolbjørnsen, O., Fundamentals of inverse problems, Preprint series: Department of Mathematics at the Norwegian University of Science and Technology; No. 6/2002, 2002.

[11] Lavrentev, M.M., Romanov, V.G., Shishatskii, S.P., Ill-posed Problems of Mathematical Physics and Analysis, Translations of Mathematical Monographs, vol. 64, American Mathematical Society, Providence, R.I., 1986.

[12] Matsuda, K., Parametric Regularized Calibration of Merton Jump-Diffusion Model with Relative Entropy: What Difference Does It Make?, The Graduate Center, The City University of New York, 2005.

[13] Mikosch, T., Elementary Stochastic Calculus with Finance in View, Advanced Series on Statistical Science & Applied Probability, Vol. 6, World Scientific, 1999.

[14] Shreve, S.E. and Karatzas, I., Methods of Mathematical Finance, Applications of Mathematics; 39, Springer, 1998.

[15] Shreve, S.E., Stochastic Calculus for Finance II: Continuous-Time Models, Springer Finance (textbook), Springer-Verlag New York, LLC, 2004.

[16] Kaipio, J. and Somersalo, E., Statistical and Computational Inverse Problems, Applied Mathematical Sciences 160, Springer, 2005.

[17] Tarantola, A., Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM, Philadelphia, 2005.

[18] Vogel, C.R., Computational Methods for Inverse Problems, SIAM, Philadelphia, 2002.


SE-351 95 Växjö / SE-391 82 Kalmar Tel +46-772-28 80 00 [email protected] Lnu.se/dfm