INTRADAY SURFACE CALIBRATION

Master Thesis

Tobias Blomé & Adam Törnqvist

Master thesis, 30 credits
Department of Mathematics and Mathematical Statistics
Spring Term 2020

Intraday volatility surface calibration
Adam Törnqvist, [email protected]
Tobias Blomé, [email protected]

© Copyright by Adam Törnqvist and Tobias Blomé, 2020

Supervisors: Jonas Nylén, Nasdaq; Oskar Janson, Nasdaq; Xijia Liu, Department of Mathematics and Mathematical Statistics

Examiner: Natalya Pya Arnqvist Department of Mathematics and Mathematical Statistics

Master of Science Thesis in Industrial Engineering and Management, 30 ECTS
Department of Mathematics and Mathematical Statistics
Umeå University, SE-901 87 Umeå, Sweden

Abstract

On the financial markets, investors seek to achieve their economic goals while simultaneously being exposed to minimal risk. Volatility surfaces are used for estimating options' implied volatilities and corresponding prices, which in turn are used for various risk calculations.

Currently, volatility surfaces are constructed from yesterday's market information and are used for estimating options' implied volatilities today. Such a construction becomes outdated very quickly during periods of high volatility, which leads to inaccurate risk calculations.

With the aim of reducing volatility surfaces' estimation errors, this thesis explores the possibilities of calibrating volatility surfaces intraday using incomplete market information. Through statistical analysis of the volatility surfaces' historical movements, characteristics are identified showing sections with resembling motion patterns. These insights are used to adjust the volatility surfaces intraday.

The results of this thesis show that calibrating the volatility surfaces intraday can reduce the estimation errors significantly during periods of both high and low volatility. However, these results depend strongly on the choices made when constructing and analyzing the volatility surfaces, which leaves room for further research.

Sammanfattning

For investors on financial markets around the world, the goal is to reach their economic targets with as little risk as possible. Correct and precise risk calculations are therefore of the highest priority. Volatility surfaces are used in risk calculations to estimate option prices and options' implied volatilities. Today, volatility surfaces are constructed from one day's market information and used for estimation the next day. During periods of high volatility this kind of construction quickly becomes outdated, which leads to incorrect risk calculations.

The goal of this thesis was to reduce a volatility surface's estimation error by exploring the possibility of calibrating a volatility surface based on incoming information during the day. By analyzing how volatility surfaces move over time, characteristics and patterns were identified that can be used to calibrate volatility surfaces continuously during a day.

The results of this thesis show that intraday calibration of volatility surfaces can reduce their estimation errors in periods of both high and low volatility. This result does, however, depend on how the volatility surfaces are constructed and analyzed, which leaves room for further studies in the area.

Acknowledgements

Firstly, we would like to thank Nasdaq for bringing the idea of this thesis to us and for letting us do our thesis at the Umeå office. Under the circumstances of the COVID-19 pandemic, we are deeply thankful for being allowed to use Nasdaq's equipment to complete this thesis from home.

Secondly, we thank Jonas Nylén and Oskar Janson at Nasdaq for providing valuable insights, discussions and stunning supervision. We would also like to direct our gratitude to Markus Nyberg for useful expertise and assistance on how to write an academic report.

Thirdly, we would like to thank our supervisor Xijia Liu, at Umeå University, for his knowledge of data analysis and his remarks on the structure and content of this thesis.

Finally, we direct a thank you to our fiancées Victoria Bertilsson and Melina Åhlenius for your support. Without you we would probably still be analysing the volatility surfaces. A special thanks to you, Victoria, for letting us turn the apartment into a home office, and to you, Melina, for your help with the report writing and for proposing the idea of dividing the volatility surfaces into sections.

//Adam & Tobias

Contents

1 Introduction
  1.1 Background
  1.2 Problematization
  1.3 Project goal
  1.4 Datasets
    1.4.1 LME Copper
    1.4.2 WTI NYMEX
  1.5 Limitations
  1.6 Literature review
  1.7 Software

2 Theory
  2.1 Options
  2.2 Implied volatility
  2.3 The volatility surface
  2.4 Moneyness
  2.5 Total implied volatility and variance
  2.6 Construction of a volatility surface
  2.7 Arbitrage
    2.7.1 Arbitrage conditions for options
    2.7.2 Arbitrage conditions for implied volatility
  2.8 K-Means clustering
  2.9 Transition matrix

3 Method
  3.1 Calculate implied volatility
  3.2 Arbitrage: tests and how to eliminate it
    3.2.1 Calendar spread arbitrage
    3.2.2 Butterfly arbitrage
  3.3 Construction of a volatility surface
    3.3.1 Calculate change and error between market data and the surface
  3.4 Analysis of the surface
    3.4.1 Part 1: Implied volatility time series analysis
    3.4.2 Part 2: Magnitude of change analysis
  3.5 Intraday calibration
  3.6 The Intraday Calibration Model (ICM)
    3.6.1 Model description
    3.6.2 Model evaluation
  3.7 Parallel shift

4 Results
  4.1 Review of assumptions and approach
  4.2 LME Copper
  4.3 WTI NYMEX
  4.4 Comparison and conclusion of results
  4.5 Volatile periods

5 Discussion
  5.1 Results
  5.2 Construction of the volatility surfaces
  5.3 Analysis of the volatility surface
  5.4 Future studies

List of Figures

1 S&P 500 from April 28th 2015 to April 28th 2020.
2 S&P 500's intraday price process from March 23rd to March 25th 2020.
3 Example of a volatility surface.
4 A raw SVI parameterization fitted to market data.
5 Visualization of Durrleman's condition. Arbitrage opportunities are introduced as g(x) falls below the dashed line.
6 Polynomial regression interpolation in the time to maturity direction for log-moneyness = 0.
7 Volatility smiles for different times to maturity T where no calendar arbitrage opportunities are introduced.
8 A volatility surface expressed in total implied variance, ω, in (a) and in implied volatility, σ_imp, in (b).
9 An example of relative day-to-day changes, δ, within a volatility surface.
10 Example of a regression model of 4th degree fitted to a time series d_t^(i,j).
11 A volatility surface divided into sections using K-means clustering of y^(i,j), here using K = 6.
12 Relative day-to-day changes of the time series d_t^(i,j) shown in Figure 10. The solid line represents the mean change over the time period.
13 A section from Figure 11 divided into subsections using K-means based on standard deviation, here using K = 4.
14 Intraday calibration adjustments of a volatility surface based upon 20% of available data.
15 Relative day-to-day changes within a volatility surface. This figure should be examined as a reference to Figure 14.
16 Daily TSS in the LME Copper volatility surface for the second year.
17 Method performances for the first 30 days in the second year.
18 Error increases for the parallel shift. The occasions are indicated by red dots.
19 Error increases for the Intraday Calibration Model. The occasions are indicated by red dots.
20 Daily TSS in the WTI NYMEX volatility surface for the second year.
21 Method performances for the first 30 days in the second year.
22 Error increases for the parallel shift. The occasions are indicated by red dots.
23 Error increases for the Intraday Calibration Model. The occasions are indicated by red dots.
24 Daily TSS in the volatility surfaces for the second year.
25 Relative day-to-day changes within a volatility surface.
26 Method performances on LME Copper for the six most volatile days in the second year.
27 Method performances on WTI NYMEX for the six most volatile days in the second year.
28 Example of a volatility surface divided into sections and subsections.

List of Tables

1 Examples of estimated coefficients of the regression models fitted to different implied volatility time series.
2 Example of a transition matrix.
3 Hyper-parameters of ICM for the LME Copper dataset.
4 Method performances for LME Copper for the second year.
5 Results on the occasions when error is increased for LME Copper.
6 Hyper-parameters of ICM for the WTI NYMEX dataset.
7 Method performances for WTI NYMEX for the second year.
8 Results on the occasions when error is increased for WTI NYMEX.
9 Method performances for the 95% quantile of the most volatile days of the second year.


1 Introduction

This section provides a background on the financial market and examples of volatility in the market. Then the problematization, project goal, available data and limitations of this thesis are presented.

1.1 Background

On the financial markets, henceforth the market, participants aim to achieve their economic goals while being exposed to minimal risk. With the rapid evolution of technology, there exists a wide, ever-increasing set of assets, financial contracts and derivatives available on the market. An example of a financial contract is an option, which gives the owner the right, but not the obligation, to buy or sell an underlying asset at an agreed-upon price and/or date. The underlying asset can be anything from commodities like corn or wheat to an index, for example the S&P 500. An option can be described as a "financial insurance" since it has a limited downside. Consider buying a stock for $100 and an option giving you the right to sell the stock for $80 in the future. If the stock price falls below $80 in the future, the option limits the loss. Options are therefore often used as a tool for limiting the risk of investments.

In finance, risk is defined in terms of the volatility σ, which is the degree of variation of an asset's price process. It is measured by calculating the standard deviation of the returns over a given period of time. Recalling that the standard deviation is the square root of the variance, volatility represents how much an asset's price swings around its mean. An asset with high volatility is therefore considered a higher risk since its price is expected to be less predictable. In Figure 1, the price process of the S&P 500 over the last five years is shown along with its corresponding logarithmic returns. It is clear that the volatility is not constant over time but fluctuates, and that high volatility implies large movements in the price process. The abnormal movements in the price process and the logarithmic returns during 2020 are due to the COVID-19 pandemic. If one only examines the price process, it seems like the S&P 500's price decreases steadily during this period, and one could argue that the price is rather predictable: it will be less than the day before. However, examining the logarithmic returns as well, one finds that there exist large positive returns during the period, making the price process less predictable.
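The volatility measure described above can be sketched in a few lines. This is a minimal illustration: the price series is invented, and scaling by 252 trading days per year is an assumed annualization convention, not a figure from the thesis.

```python
import numpy as np

def realized_volatility(prices, periods_per_year=252):
    """Annualized historical volatility: the standard deviation of the
    logarithmic returns of a price series, scaled to a yearly figure."""
    prices = np.asarray(prices, dtype=float)
    log_returns = np.diff(np.log(prices))          # r_t = ln(P_t / P_{t-1})
    return np.std(log_returns, ddof=1) * np.sqrt(periods_per_year)

# Hypothetical daily closing prices (not real index data):
vol = realized_volatility([100.0, 101.5, 99.8, 102.3, 101.1, 103.0])
```

A constant price series gives zero volatility, while larger day-to-day swings in the log returns directly increase the estimate, matching the intuition in the paragraph above.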


(a) S&P 500’s price process. (b) S&P 500’s logarithmic returns.

Figure 1: S&P 500 from April 28th 2015 to April 28th 2020.

Imagine that we are financial experts and our task is to set a fair price on an option where the S&P 500 is the underlying asset; then we are interested in the volatility. The option is valuable only if the underlying asset reaches a certain price within a given period of time. The probability of reaching this level is higher if the volatility is high, which motivates a higher option price. However, if the price is too high the option will not sell and we will miss out on potential profits. On the other hand, if the price is too low we will lose money to smart investors who take advantage of this opportunity. But how do we determine the "correct" volatility and the corresponding price? We can examine the asset's historical volatility, ask what the market participants think is a reasonable price, or examine the price history of similar contracts.

Theoretically, an option’s price is determined using a pricing formula with in- formation available in the option’s contract and the market as input, including the volatility of the underlying asset. The choice of formula to use depends on the option’s style but the most common one is Black-Scholes formula, originally derived for pricing European call options.

In practice, it is not as straightforward, since the volatility of the underlying asset is unknown. However, from an existing option contract, one can derive the option's implied volatility by using a pricing formula in reverse. The pricing formula is evaluated for various values of volatility until the theoretical price equals the option's market price. Implied volatility is called a forward-looking and subjective measure since it is derived from an existing option and current market conditions, being the volatility that justifies the option's market price.

As financial experts, we can now determine the fair price of an option by studying similar options' contracts. But what if similar options don't exist? The answer is to use the volatility surface. Using existing option contracts we can determine their implied volatilities and then interpolate between these discrete points to construct a surface, which can be used to find any option's implied volatility and the corresponding price. It sounds simple in

theory, but in reality it is not.

For a volatility surface to be trustworthy, there must exist options for various contract specifications, and these need to be frequently and recently traded. One common way to create a volatility surface is to use option trades for a whole day to represent the next day's prices and market conditions. However, during periods of very high volatility, such as a financial crisis or the COVID-19 pandemic, yesterday's market conditions do not hold for today.

In Figure 2, the intraday price process of the S&P 500 is shown for March 23rd and 24th, two days during the COVID-19 pandemic with very high volatility. Considering these two days, imagine once again that we are financial experts, but our task now is to construct a volatility surface based on the market conditions of March 23rd that will be used for estimation on March 24th. If we only examine the price process, we can safely state that something changed in the market conditions between the close on March 23rd and the opening on March 24th. This illustrates how yesterday's prices and market conditions can fail to represent today's, and the need for methods to adjust the volatility surface intraday using incomplete market information.

Figure 2: S&P 500’s intraday price process from March 23rd to March 25th 2020.

1.2 Problematization

The problem with intraday adjustments is two-fold: there are fewer option trades, and these are not synchronized. Adjusting the volatility surface is only justified if the new option trades reflect the current market conditions and the volatility surface's market fit is improved.

At Nasdaq, where this thesis work is conducted, the front office and risk management functions require accurate estimates of option portfolios intraday to

properly value options and measure risk from trading activities. Nasdaq has implemented end-of-day volatility surface generation for equity options as part of a product line offering but has not yet extended it with intraday calibration [13]. End-of-day volatility surfaces might not be sufficient during periods of very high volatility, and using incorrect volatility surfaces for risk calculations may give rise to significant inaccuracies. One should also remember that during periods of high volatility it is of utmost importance that risk calculations are accurate and trustworthy.

1.3 Project goal

Under circumstances as in Figure 2, a volatility surface based upon yesterday's information is to a large extent outdated and needs to be adjusted intraday using today's incomplete market data in order to be useful. The goal of this thesis is to find one or several approaches to calibrate a volatility surface intraday given incomplete market data.

1.4 Datasets

This section provides a brief description of the datasets used for this thesis. All datasets are provided by the Nasdaq Umeå office, but due to confidentiality they cannot be disclosed in detail.

1.4.1 LME Copper

This dataset consists of end-of-day volatility surfaces for American options on copper over two years, traded at the London Metal Exchange [4]. It contains implied volatilities and corresponding market information for a uniform set of options with different strike prices and maturities. Since no information is available about how the volatility surfaces were constructed, the data is treated as "real" market data expressed in implied volatility. For more information about contract specifications, visit the London Metal Exchange website at https://www.lme.com.

1.4.2 WTI NYMEX

This dataset consists of end-of-day volatility surfaces for American options on Light Sweet Crude Oil Futures over two years, traded at the New York Mercantile Exchange [11]. It contains implied volatilities and corresponding market information for a uniform set of options with different strike prices and maturities. Since no information is available about how the volatility surfaces were constructed, the data is treated as "real" market data expressed in implied volatility. For more information about contract specifications, visit the CME Group website at https://www.cmegroup.com.


1.5 Limitations

The data provided by the Nasdaq Umeå office is not real market data but data of very high quality. This should be considered when examining the methods and results presented in this thesis. As an effect of the COVID-19 pandemic, it has not been possible to access real data to validate the results of this thesis, which should be kept in mind when examining the results. On the other hand, we have tried to simulate the data as intraday market data to give the models in this thesis a representation as close to reality as possible.

1.6 Literature review

Implied volatility and the nature of volatility surfaces have been well studied by both practitioners and scholars ever since the publication of the Black-Scholes formula in 1973 [6]. While there exists extensive research on methods for modelling implied volatility and constructing volatility surfaces, covering several families of models [10, 6, 1], the field of calibrating volatility surfaces intraday has remained rather unexplored. Practitioners have studied the possibility of moving volatility surfaces by parallel shifts, which has been shown to improve the volatility surfaces' accuracy during periods of high volatility but fail when volatility is low [15]. Models with inconsistent performance are not suitable for risk calculations, which are one of the volatility surfaces' main uses, and it is therefore motivated to investigate the field of intraday calibration of volatility surfaces further.

1.7 Software

The software used for the implementations throughout this thesis work was Python. The following packages were used:

(i) NumPy: calculations.
(ii) pandas: data analysis.
(iii) scikit-learn: predictive data analysis.
(iv) matplotlib: visualization.
(v) SciPy: statistical tools.


2 Theory

This section provides the theory behind the methods used in this thesis. It starts with how to determine the implied volatility of an option. Then, the theory for constructing a volatility surface is given, along with the necessary arbitrage constraints.

2.1 Options

In Section 1.1 the concept of options was briefly introduced; in this section a more detailed explanation follows. An option gives the owner the right to buy or sell an underlying asset at an agreed-upon price, known as the strike price, at a later point in time, known as the maturity date. Options which give the owner the right to buy the underlying asset for the strike price are known as call options, while options which give the owner the right to sell the underlying asset are known as put options. Consider a scenario where the price of the underlying asset is higher than the strike price; then call options are denoted as "in the money" and put options as "out of the money". If the price of the underlying asset is less than the strike price, then call options are out of the money and put options are in the money.

There exist various styles of options depending on the contract specifications, but the two most common ones are European style and American style. If the option contract only allows the owner to exercise the option at the maturity date, it is called a European option. If it is allowed to exercise the option at any time up until the maturity date, it is called an American option. The price of a European call option is determined by the Black-Scholes formula.

Definition 2.1.1. The price of a European call option with strike price K and time to maturity T is given by the Black-Scholes formula:
\[ C = sN[d_1] - Ke^{-rT}N[d_2] \tag{1} \]
where s is the price of the underlying asset, N[·] is the cumulative distribution function of the standard normal distribution N(0, 1), r is the risk-free rate and
\[ d_1 = \frac{\ln(s/K) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T}, \]
where σ is the volatility of the underlying asset.

The payoff of a European call option is max(s − K, 0), which can be written more compactly as (s − K)^+. To price a European put option one can use the put-call parity.


Definition 2.1.2. Consider a European call option and a European put option with strike price K and time to maturity T. Denoting the pricing functions by c(t, s) and p(t, s), the following relation holds:

\[ p(t, s) = Ke^{-r(T-t)} + c(t, s) - s, \]
known as the put-call parity. The payoff of a European put option is:

max(K − s, 0), which can be written more compactly as:

(K − s)+.
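Definition 2.1.1 and the put-call parity of Definition 2.1.2 translate directly into code. The sketch below is illustrative (it uses `scipy.stats.norm` for the normal CDF and takes t = 0 in the parity relation); it is not the pricing implementation used later in the thesis.

```python
from math import exp, log, sqrt

from scipy.stats import norm

def bs_call(s, K, T, r, sigma):
    """European call price from the Black-Scholes formula (1)."""
    d1 = (log(s / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return s * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def bs_put(s, K, T, r, sigma):
    """European put price via the put-call parity of Definition 2.1.2
    with t = 0: p = K e^{-rT} + c - s."""
    return K * exp(-r * T) + bs_call(s, K, T, r, sigma) - s
```

By construction the two prices satisfy c − p = s − Ke^{−rT}, so the parity relation holds exactly in this sketch.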

While European options have a closed-form formula for determining their prices, American options do not. Due to the possibility of exercising the options at any time, the prices of American options change up until the maturity date and cannot be calculated explicitly but need to be approximated. One common method for approximating American option prices is to use a binomial tree, which traces the option price's evolution in discrete time steps. This method is fully described in [8] along with other pricing methods.
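The binomial-tree approximation can be sketched as follows. This is a Cox-Ross-Rubinstein tree under assumed, illustrative parameter conventions (step count, up/down factors); it is not the implementation referenced in [8] or used in the thesis.

```python
import numpy as np

def american_price(s, K, T, r, sigma, n=500, call=True):
    """Cox-Ross-Rubinstein binomial tree for an American option.

    Backward induction: at each node, the option value is the maximum of
    the discounted continuation value and the immediate exercise payoff.
    """
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))          # up factor per step
    d = 1.0 / u                              # down factor per step
    q = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-r * dt)

    payoff = ((lambda x: np.maximum(x - K, 0.0)) if call
              else (lambda x: np.maximum(K - x, 0.0)))

    # Option values at maturity for all terminal asset prices.
    j = np.arange(n + 1)
    v = payoff(s * u**j * d**(n - j))

    # Step backwards through the tree, checking early exercise at each level.
    for i in range(n - 1, -1, -1):
        j = np.arange(i + 1)
        si = s * u**j * d**(i - j)
        v = disc * (q * v[1:] + (1 - q) * v[:-1])
        v = np.maximum(v, payoff(si))
    return float(v[0])
```

On a non-dividend-paying underlying, the American call from the tree should agree with the European Black-Scholes call, while the American put carries an early-exercise premium over the European put.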

2.2 Implied volatility

The implied volatility of an option can be derived by using the Black-Scholes formula (1), or any other pricing formula, in reverse: the formula is evaluated for various values of volatility until the theoretical price equals the option's market price.
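This inversion is a one-dimensional root-finding problem, sketched below with Brent's method from SciPy. The search bracket [1e-6, 5] is an assumed, illustrative range, not a choice taken from the thesis.

```python
from math import exp, log, sqrt

from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(s, K, T, r, sigma):
    """European call price from the Black-Scholes formula (1)."""
    d1 = (log(s / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return s * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, s, K, T, r):
    """Find the volatility at which the theoretical Black-Scholes price
    equals the option's market price."""
    return brentq(lambda sig: bs_call(s, K, T, r, sig) - market_price,
                  1e-6, 5.0)
```

Pricing an option at a known volatility and then inverting the price recovers that volatility, which is exactly the "formula in reverse" procedure described above.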

The Black-Scholes formula was derived under assumptions on the dynamics of the underlying asset, one of which is that the volatility of the underlying asset is constant. Under this assumption one would expect the same implied volatility for all strike prices and maturities. But when extracting the implied volatility from available market data, variations depending on the strike price and time to maturity are often observed. For a detailed review of all assumptions made for the Black-Scholes formula, [2] is recommended. Since the implied volatility of an underlying asset varies for different strike prices and times to maturity, it is defined as below.

Definition 2.2.1. Implied volatility is a function of strike price K and time to maturity T:
\[ \sigma_{imp} = f(K, T \mid C, s, r) \tag{2} \]
where C, s and r are observed in the market.


2.3 The volatility surface

Given the options available on the market for a certain underlying asset, the implied volatility can be derived for each option. This set of discrete data points can be interpolated to create a surface, known as the implied volatility surface, henceforth volatility surface, which can be used to determine the implied volatility for any combination of strike price and time to maturity. An example of a volatility surface is shown in Figure 3, here visualized in log-moneyness instead of strike price; see Section 2.4 for further explanation.
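As a toy illustration of the interpolation idea, discrete implied-volatility quotes can be interpolated to query any intermediate point. The quotes below are invented, and plain linear interpolation is a naive stand-in for the arbitrage-aware SVI construction used later in Section 2.6.

```python
import numpy as np
from scipy.interpolate import griddata

# Invented quotes: (log-moneyness, time to maturity) -> implied volatility.
points = np.array([[-0.2, 0.25], [0.0, 0.25], [0.2, 0.25],
                   [-0.2, 1.00], [0.0, 1.00], [0.2, 1.00]])
vols = np.array([0.25, 0.20, 0.23,
                 0.22, 0.19, 0.21])

# Query the interpolated surface at an arbitrary (x, T) inside the grid.
sigma = float(griddata(points, vols, [(0.1, 0.5)], method="linear")[0])
```

The interpolated value lies between the surrounding quotes; a production surface would instead fit a parameterization per maturity slice and enforce the arbitrage conditions of Section 2.7.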

Investigations of volatility surfaces have identified a handful of general characteristics [5, 14]. Regarding the volatility surface's general appearance, the following profile characteristics have been observed.

(i) The volatility surface has a smile profile in the strike price direction, known as a volatility smile.
(ii) The volatility surface has a linearly leaning profile in the time to maturity direction, known as the term structure.
(iii) The magnitude of the volatility smile decreases as time to maturity increases. Shorter maturities display pronounced smiles and longer maturities give rise to shallow smiles.

Observations also show that the volatility surface changes over time, and therefore the following time-dependent characteristics can be stated.

(i) Implied volatility has high positive auto-correlation and mean-reversion, known as volatility clustering.
(ii) Implied volatility and returns in the underlying asset are negatively correlated. This is known as the leverage effect.

(iii) The variance of daily variations in the surface can be described with very few principal components.


Figure 3: Example of a volatility surface.

2.4 Moneyness

When visualizing a volatility surface it is common to replace the strike price direction with moneyness or log-moneyness, as shown in Figure 3.

Definition 2.4.1. For an option with strike price K and underlying price s, moneyness is defined as
\[ x = \frac{K}{s} \]
and the corresponding log-moneyness is defined as
\[ \ln(x) = \ln\left(\frac{K}{s}\right). \]

Using moneyness or log-moneyness instead of strike price is convenient since it centers the volatility surface around 1 or 0, respectively. This is more suitable when comparing surfaces of different underlying assets.

2.5 Total implied volatility and variance

Implied volatility is often transformed into total implied variance since it allows for simplicity in calculations and expressions.


Definition 2.5.1. For an option with implied volatility σ_imp and time to maturity T, the total implied volatility is defined as
\[ \Sigma = \sigma_{imp}\sqrt{T} \]
and the total implied variance as:

\[ \omega = \sigma_{imp}^2 T. \tag{3} \]

2.6 Construction of a volatility surface

There exist various methods for constructing volatility surfaces, and the choice of which method to use is up to the practitioner. One commonly used method is the stochastic volatility inspired parameterization, henceforth denoted the raw SVI parameterization. This method's popularity among practitioners is due to its two key properties [7]:

(i) For a fixed time to maturity T, the implied variance σ_imp² is linear in the log-moneyness x as |x| → ∞.
(ii) It is easy to fit against options' market prices whilst ensuring no calendar spread arbitrage.

The raw SVI parameterization can be used for constructing volatility surfaces slice-by-slice in the time to maturity direction. There exist extensions of the raw SVI parameterization method, for example the natural SVI parameterization or the surface SVI parameterization presented in [7]. However, for this thesis the raw SVI parameterization, proposed in [12], will be used.

Definition 2.6.1. The raw SVI parameterization of the total implied variance for a fixed time to maturity T is defined as:

\[ \omega_{SVI}(x) = a + b\left(\rho(x - m) + \sqrt{(x - m)^2 + \sigma^2}\right), \]
where x is log-moneyness and {a, b, ρ, m, σ} is the parameter set.

Remark: The raw SVI parameter σ is not to be confused with the volatility of the underlying's price process.
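Definition 2.6.1 translates directly into a one-line slice function. The parameter values below are arbitrary illustrations, not calibrated values from the thesis's datasets.

```python
import numpy as np

def svi_raw(x, a, b, rho, m, sigma):
    """Raw SVI total implied variance (Definition 2.6.1) for one maturity
    slice; x is (log-)moneyness and (a, b, rho, m, sigma) the parameter set."""
    return a + b * (rho * (x - m) + np.sqrt((x - m) ** 2 + sigma ** 2))

# With rho = 0 and m = 0 the resulting smile is symmetric around x = 0,
# and the at-the-money total variance is a + b * sigma.
x = np.array([-0.1, 0.0, 0.1])
w = svi_raw(x, a=0.02, b=0.4, rho=0.0, m=0.0, sigma=0.1)
```

Varying the parameters reproduces the smile effects listed next: a shifts the slice vertically, ρ rotates it, m translates it horizontally, and so on.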

When adjusting the raw SVI parameters, the effects on the volatility smile are:

(i) a changes the vertical translation of the volatility smile in the positive direction,
(ii) b affects the angle between the put and call wings,
(iii) ρ rotates the smile,
(iv) m changes the horizontal translation of the smile,


(v) σ reduces the at-the-money (ATM) curvature of the smile.

These parameters can be difficult to interpret, and therefore the SVI-jump-wing (SVI-JW) parameters are introduced.

Definition 2.6.2. The SVI-jump-wing (SVI-JW) parameterization expressed in terms of the raw SVI parameters, for a fixed time to maturity T, is defined as:
\[ v_T = \frac{a + b\left(-\rho m + \sqrt{m^2 + \sigma^2}\right)}{T}, \]
\[ \psi_T = \frac{1}{\sqrt{\omega_T}}\,\frac{b}{2}\left(\rho - \frac{m}{\sqrt{m^2 + \sigma^2}}\right), \]
\[ p_T = \frac{1}{\sqrt{\omega_T}}\, b(1 - \rho), \]
\[ c_T = \frac{1}{\sqrt{\omega_T}}\, b(1 + \rho), \]
\[ \hat{v}_T = \frac{1}{T}\left(a + b\sigma\sqrt{1 - \rho^2}\right), \]

where ω_T = v_T T. This parameterization depends explicitly on the time to maturity T and can be viewed as generalizing the raw SVI parameterization. The SVI-JW parameters have the following interpretations:

(i) v_T gives the ATM implied total variance,

(ii) ψ_T gives the ATM skew,

(iii) p_T gives the slope of the left wing (put options),

(iv) c_T gives the slope of the right wing (call options),

(v) v̂_T is the minimum implied variance.

If smiles scaled perfectly as 1/√ω_T, these parameters would be constant, hence independent of T. This makes it easy to extrapolate the volatility surface to larger values of T. Also note that, by definition, for any T > 0 we have:

\[ \psi_T = \left.\frac{\partial \sigma_{imp}(x, T)}{\partial x}\right|_{x=0}. \]

Lemma 2.6.1. Assume that m ≠ 0. For any T > 0, define the (T-dependent) quantities
\[ \beta = \rho - \frac{2\psi_T\sqrt{\omega_T}}{b} \quad \text{and} \quad \alpha = \operatorname{sign}(\beta)\sqrt{\frac{1}{\beta^2} - 1}, \]

where we have further assumed that β ∈ [−1, 1]. Then the raw SVI and SVI-JW parameters are related as follows:
\[ b = \frac{\sqrt{\omega_T}}{2}(c_T + p_T), \]
\[ \rho = 1 - \frac{p_T\sqrt{\omega_T}}{b}, \]
\[ a = \hat{v}_T T - b\sigma\sqrt{1 - \rho^2}, \]
\[ m = \frac{(v_T - \hat{v}_T)T}{b\left\{-\rho + \operatorname{sign}(\alpha)\sqrt{1 + \alpha^2} - \alpha\sqrt{1 - \rho^2}\right\}}, \]
\[ \sigma = \alpha m. \]

If m = 0, then the formulas above for b, ρ and a still hold, but σ = (v_T T − a)/b.

Proof. See Gatheral and Jacquier [7].

The relationships between the raw SVI and SVI-JW parameters are strong tools for calibrating the parameterization for a better market fit and for eliminating arbitrage possibilities. They are also key components when calibrating the volatility surface intraday.
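As an illustration, Definition 2.6.2 and the m ≠ 0 case of Lemma 2.6.1 can be implemented and checked against each other on an arbitrary raw parameter set. The values are made up, and this sketch is not the thesis's calibration code.

```python
import numpy as np

def raw_to_jw(a, b, rho, m, sigma, T):
    """Map raw SVI parameters to SVI-JW parameters (Definition 2.6.2)."""
    v = (a + b * (-rho * m + np.sqrt(m**2 + sigma**2))) / T
    sw = np.sqrt(v * T)                           # sqrt(omega_T), omega_T = v_T * T
    psi = (b / 2.0) * (rho - m / np.sqrt(m**2 + sigma**2)) / sw
    p = b * (1.0 - rho) / sw
    c = b * (1.0 + rho) / sw
    v_min = (a + b * sigma * np.sqrt(1.0 - rho**2)) / T
    return v, psi, p, c, v_min

def jw_to_raw(v, psi, p, c, v_min, T):
    """Map SVI-JW parameters back to raw SVI (Lemma 2.6.1, case m != 0)."""
    sw = np.sqrt(v * T)
    b = 0.5 * sw * (c + p)
    rho = 1.0 - p * sw / b
    beta = rho - 2.0 * psi * sw / b
    alpha = np.sign(beta) * np.sqrt(1.0 / beta**2 - 1.0)
    m = ((v - v_min) * T
         / (b * (-rho + np.sign(alpha) * np.sqrt(1.0 + alpha**2)
                 - alpha * np.sqrt(1.0 - rho**2))))
    sigma = alpha * m
    a = v_min * T - b * sigma * np.sqrt(1.0 - rho**2)
    return a, b, rho, m, sigma

# Round trip on an arbitrary raw parameter set (a, b, rho, m, sigma):
raw = (0.02, 0.4, -0.4, 0.05, 0.2)
jw = raw_to_jw(*raw, T=0.5)
back = jw_to_raw(*jw, T=0.5)
```

Mapping raw parameters to SVI-JW and back recovers the original set, which is exactly the consistency the lemma asserts.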

2.7 Arbitrage

Arbitrage is a phenomenon that guarantees a positive payoff with zero probability of a negative payoff for a portfolio [2]. It is defined as follows.

Definition 2.7.1. An arbitrage possibility is a portfolio h with the properties

\[ V_t^h = 0, \qquad V_T^h > 0 \text{ with probability } 1, \]

where V^h denotes the payoff and t < T. Arbitrage can be classified as static or dynamic. Dynamic arbitrage refers to a strategy that re-balances the portfolio over time, while for static arbitrage the portfolio remains unchanged [9].

In the context of a volatility surface it is important to handle static arbitrage to avoid mispricing of options. Gatheral gives in [7] a compact definition of a volatility surface free of static arbitrage:

Definition 2.7.2. A volatility surface is free of static arbitrage if and only if the following conditions are satisfied:

(i) it is free of calendar spread arbitrage.

(ii) each volatility smile is free of butterfly arbitrage.


The calendar spread arbitrage concerns arbitrage possibilities in the time to maturity direction of the volatility surface, while the butterfly arbitrage concerns arbitrage possibilities in the moneyness direction. Before handling these conditions, arbitrage conditions for options and implied volatility are needed.

2.7.1 Arbitrage conditions for options

Theorem 2.7.1. Let $s > 0$ be a constant and denote the price of a European call option by $C(K, T)$, where $K$ is the strike price and $T$ is the time to maturity.

(a) Let $C : (0, \infty) \times [0, \infty) \to \mathbb{R}$ satisfy the following conditions.

(A1) (Convexity in $K$) $C(\cdot, T)$ is a convex function, $\forall T \geq 0$;

(A2) (Monotonicity in $T$) $C(K, \cdot)$ is non-decreasing, $\forall K > 0$;

(A3) (Large strike limit) $\lim_{K \to \infty} C(K, T) = 0$, $\forall T \geq 0$;

(A4) (Bounds) $(s - K)^+ \leq C(K, T) \leq s$, $\forall K > 0, T \geq 0$; and

(A5) (Expiry value) $C(K, 0) = (s - K)^+$, $\forall K > 0$.

Then

(i) the function $\hat{C} : [0, \infty) \times [0, \infty) \to \mathbb{R}$,

$$\hat{C}(K, T) = \begin{cases} s, & \text{if } K = 0 \\ C(K, T), & \text{if } K > 0, \end{cases}$$

satisfies assumptions (A1)-(A5) but with $K \geq 0$ instead of $K > 0$; and

(ii) there exists a non-negative Markov martingale $X$ with the property that $\hat{C}(K, T) = E\left[(X_T - K)^+ \mid X_0 = s\right]$ for all $K, T \geq 0$.

(b) All of the listed conditions in part (a) of this theorem are necessary properties of $\hat{C}$ for it to be the conditional expectation of a call option under the assumption that $X$ is a (non-negative) martingale.

Remark: $(s - K)^+$ is the payoff function of a European call option.


2.7.2 Arbitrage conditions for implied volatility

The following arbitrage conditions follow the ones stated in Theorem 2.9 in [16]. These conditions handle static arbitrage and they are applied to total implied volatility.

Theorem 2.7.2. Let $s > 0$, $x = \ln(K/s)$ and $\Sigma(x, T) = \sigma_{imp}(x, T)\sqrt{T}$ satisfy the following conditions:

(i) (Smoothness) for every $T > 0$, $\Sigma(x, T)$ is twice differentiable with respect to $x$.

(ii) (Positivity) for every $x \in \mathbb{R}$ and $T > 0$,

Σ(x, T ) > 0.

(iii) (Durrleman's condition) for every $T > 0$ and $x \in \mathbb{R}$,

$$0 \leq \left(1 - \frac{x\,\partial_x\Sigma}{\Sigma}\right)^2 - \frac{1}{4}\Sigma^2(\partial_x\Sigma)^2 + \Sigma\,\partial_{xx}\Sigma,$$

where $\Sigma$ denotes $\Sigma(x, T)$.

(iv) (Monotonicity in $T$) for every $x \in \mathbb{R}$, $\Sigma(x, \cdot)$ is non-decreasing.

(v) (Large moneyness behaviour) for every $T > 0$,

$$\lim_{x \to \infty} d_+(x, \Sigma(x, T)) = -\infty.$$

(vi) (Value at maturity) for every x ∈ R,

Σ(x, 0) = 0.

Then

$$\tilde{C} : [0, \infty) \times [0, \infty) \to \mathbb{R}, \qquad (K, T) \mapsto \begin{cases} s\,B(x, \Sigma(x, T)), & \text{if } K > 0 \\ s, & \text{if } K = 0, \end{cases}$$

is a call price surface parameterised by s that is free of static arbitrage.

Remark: $\partial_x$ and $\partial_{xx}$ denote the first and second order partial derivatives of $\Sigma(x, T)$ with respect to $x$. $d_+(\cdot)$ is a component of $B(\cdot)$, which is a "scaled Black-Scholes" function. For further explanations, see Roper [16], Definition 2.3 and Definition 2.4.

Now, all theory for constructing a volatility surface free of arbitrage has been presented. The remaining theory covers essential concepts used for analyzing and adjusting the volatility surface.


2.8 K-Means clustering

K-means clustering is used for partitioning a dataset into K distinct, non-overlapping clusters. After the number of clusters has been specified, the K-means algorithm assigns each observation in the dataset to exactly one of the K clusters. Let $C_1, \ldots, C_K$ denote sets containing the indices of the observations in each cluster. The clustering sets will satisfy two properties:

$$C_1 \cup C_2 \cup \ldots \cup C_K = \{1, \ldots, n\},$$
$$C_k \cap C_{k'} = \emptyset, \quad \forall k \neq k'.$$

That is, each observation belongs to exactly one cluster and the clusters are non-overlapping. A good clustering is one for which the within-cluster variation is as small as possible. Let the within-cluster variation for cluster $C_k$ be denoted by $W(C_k)$; then the problem to solve is:

$$\min_{C_1, \ldots, C_K} \sum_{k=1}^{K} W(C_k). \tag{4}$$

The formula above says that we want to partition the observations into K clusters such that the total within-cluster variation is as small as possible. In order to make this calculation, the within-cluster variation needs to be defined. The most common choice is the squared Euclidean distance, defined as:

$$W(C_k) = \frac{1}{|C_k|} \sum_{i, i' \in C_k} \sum_{j=1}^{p} (x_{ij} - x_{i'j})^2, \tag{5}$$

where $|C_k|$ denotes the number of observations in the $k$th cluster. Combining (4) and (5) gives the optimization problem that defines K-means clustering:

$$\min_{C_1, \ldots, C_K} \left\{ \sum_{k=1}^{K} \frac{1}{|C_k|} \sum_{i, i' \in C_k} \sum_{j=1}^{p} (x_{ij} - x_{i'j})^2 \right\}. \tag{6}$$

To solve equation (6) and determine the clusters, the step-by-step implementa- tion below can be used.

(i) Randomly assign a number, from 1 to K, to each of the observations. These serve as initial cluster assignments for the observations.

(ii) Iterate until the cluster assignments stop changing:

(a) For each of the K clusters, compute the cluster centroid. The kth cluster centroid is the vector of the p feature means for the observations in the kth cluster.

(b) Assign each observation to the cluster whose centroid is the closest, where closest is defined using Euclidean distance.
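The step-by-step implementation above can be sketched in a few lines of NumPy (a minimal illustration, not the implementation used in the thesis):

```python
import numpy as np

def k_means(X, K, max_iter=100, seed=0):
    """Minimal K-means following the step-by-step implementation above.

    X: (n, p) array of observations; K: number of clusters.
    Returns an array of cluster labels in {0, ..., K-1}.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # (i) Randomly assign a cluster number to each observation.
    labels = rng.integers(0, K, size=n)
    for _ in range(max_iter):
        # (ii)(a) Compute the centroid of each cluster; re-seed empty clusters
        # with a random observation so every centroid is defined.
        centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                              else X[rng.integers(0, n)] for k in range(K)])
        # (ii)(b) Reassign each observation to the closest centroid
        # (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):  # assignments stopped changing
            break
        labels = new_labels
    return labels
```
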


2.9 Transition matrix

A transition matrix describes the movements of a Markov chain over a finite state space with $S$ states. Let the probability of moving from state $i$ to state $j$ in one time step be defined as $Pr(j \mid i) = P_{i,j}$, and let the total probability of moving from state $i$ to all other states equal 1. Then the transition matrix $P$ is defined as:

$$P = \begin{pmatrix} P_{1,1} & P_{1,2} & \cdots & P_{1,j} & \cdots & P_{1,S} \\ P_{2,1} & P_{2,2} & \cdots & P_{2,j} & \cdots & P_{2,S} \\ \vdots & \vdots & \ddots & \vdots & & \vdots \\ P_{i,1} & P_{i,2} & \cdots & P_{i,j} & \cdots & P_{i,S} \\ \vdots & \vdots & & \vdots & \ddots & \vdots \\ P_{S,1} & P_{S,2} & \cdots & P_{S,j} & \cdots & P_{S,S} \end{pmatrix}.$$
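The defining property above (non-negative entries, each row summing to 1) is easy to verify numerically; a small sketch:

```python
import numpy as np

def is_transition_matrix(P, tol=1e-12):
    """Check the defining property of a transition matrix: each row holds
    the probabilities of moving from state i to every state j, so entries
    must be non-negative and every row must sum to 1."""
    P = np.asarray(P, dtype=float)
    return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))
```
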


3 Method

This section provides methods for calculating the implied volatility of an option, constructing a volatility surface, eliminating arbitrage, measuring errors and, finally, analyzing the volatility surface. Figures and examples in this section are constructed using the LME Copper dataset.

3.1 Calculate implied volatility

Implied volatility can be determined using various approaches and one common way is to use an iterative search. Given the market price $C$ of a European call option with strike price $K$, time to maturity $T$, the risk-free rate $r$ and the Black-Scholes formula as the pricing formula, Newton's method can be used. Below follows a step-by-step implementation to determine the implied volatility (2) using Newton's method.

(i) Let {C, s, K, T, r} be known and constant.

(ii) Set $\sigma_n = \sigma_0 = \sqrt{\frac{2\pi}{T}}\frac{C}{s}$ as the initial guess of the implied volatility.

(iii) Determine the theoretical price $\hat{C}$ using the Black-Scholes formula (1) with input parameters $\{s, K, T, r, \sigma_n\}$.

(iv) If $|\hat{C} - C| < \epsilon$, STOP, where $\epsilon$ is the accepted estimation error.

(v) Set $\sigma_{n+1} = \sigma_n - \frac{\hat{C} - C}{\nu(\sigma_n)}$, where $\nu(\sigma_n)$ is the vega, i.e. the derivative of $C$ with respect to the implied volatility.

(vi) Go to step (iii).

Remark: The initial guess $\sigma_0$ is a closed form estimate of the implied volatility derived by Menachem Brenner and Marti G. Subrahmanyam in [3].
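The procedure can be sketched as follows. The Black-Scholes call price and vega formulas are standard, while the tolerance and iteration cap are illustrative choices:

```python
from math import log, sqrt, exp, pi, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, K, T, r, sigma):
    # Black-Scholes price of a European call option.
    d1 = (log(s / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return s * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def vega(s, K, T, r, sigma):
    # Derivative of the call price with respect to sigma.
    d1 = (log(s / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return s * sqrt(T) * exp(-0.5 * d1**2) / sqrt(2.0 * pi)

def implied_vol(C, s, K, T, r, eps=1e-8, max_iter=100):
    # (ii) Brenner-Subrahmanyam closed-form initial guess.
    sigma = sqrt(2.0 * pi / T) * C / s
    for _ in range(max_iter):
        C_hat = bs_call(s, K, T, r, sigma)   # (iii) theoretical price
        if abs(C_hat - C) < eps:             # (iv) stop on small error
            break
        sigma -= (C_hat - C) / vega(s, K, T, r, sigma)  # (v) Newton update
    return sigma
```
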

3.2 Arbitrage: tests and how to eliminate it The arbitrage conditions for implied volatility stated in Theorem 2.7.2 can be linked to either calendar spread arbitrage or butterfly arbitrage and the two conditions stated in Definition 2.7.2 can therefore be extended.

Extension of Definition 2.7.2 (i): A volatility surface is free of calendar spread arbitrage if and only if conditions (iv) and (vi) in Theorem 2.7.2 are satisfied.

Extension of Definition 2.7.2 (ii): Each volatility smile is free of butter- fly arbitrage if and only if conditions (iii) and (v) in Theorem 2.7.2 are satisfied.

Using these extensions it is possible to construct tests for detecting arbitrage possibilities in the volatility surface.


3.2.1 Calendar spread arbitrage

To determine if a volatility surface is free of calendar spread arbitrage, the following test can be used:

Procedure 3.2.1 (Calendar spread arbitrage test). For each maturity slice $T_i$ and log-moneyness $x_j$, determine the total implied variance $\omega(x_j, T_i)$, where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$. If for all $x_j$,

$$\omega(x_j, T_i) \leq \omega(x_j, T_{i+1}),$$

the volatility surface is free of calendar spread arbitrage.

Remark: This test was originally constructed by Öhman [17].

For visualization of this test one can plot the total implied variance against the log-moneyness for each maturity slice in the same figure. If none of the lines intersect, the volatility surface is free of calendar spread arbitrage.
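On a discrete grid of total implied variances, Procedure 3.2.1 reduces to a monotonicity check along the maturity axis; a minimal sketch:

```python
import numpy as np

def calendar_spread_free(omega):
    """Procedure 3.2.1 on a grid: omega[i, j] is the total implied variance
    at maturity T_i (increasing in i) and log-moneyness x_j. The surface is
    free of calendar spread arbitrage iff omega is non-decreasing in i
    for every column j."""
    omega = np.asarray(omega, dtype=float)
    return bool(np.all(omega[:-1, :] <= omega[1:, :]))
```
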

To eliminate calendar spread arbitrage, Gatheral and Jacquier [7] propose Lemma 5.1, which states that it is possible to interpolate between two maturity slices T1 and T2 without risk of introducing static arbitrage in the interpolated region. Here follows this lemma:

Lemma 3.2.1 (Lemma 5.1 in [7]). Given two volatility smiles $\omega(x, T_1)$ and $\omega(x, T_2)$ with $T_1 < T_2$, where the two smiles are free of butterfly arbitrage and such that $\omega(x, T_2) > \omega(x, T_1)$ for all $x$, there exists an interpolation such that the interpolated region is free of static arbitrage for $T_1 < T < T_2$.

Proof. The proof is omitted; see Gatheral and Jacquier [7].

3.2.2 Butterfly arbitrage To determine if a volatility surface is free of butterfly arbitrage, the following test can be used: Procedure 3.2.2 (Butterfly arbitrage test). For a fixed maturity slice T , de- termine condition (iii) in Theorem 2.7.2 for each xj, j = {1, 2, . . . , m}. In other words, let

$$g(x) = \left(1 - \frac{x\,\partial_x\Sigma}{\Sigma}\right)^2 - \frac{1}{4}\Sigma^2(\partial_x\Sigma)^2 + \Sigma\,\partial_{xx}\Sigma$$

and calculate $g(x_j)$ for $j = 1, 2, \ldots, m$. If for all $x_j$,

$$g(x_j) > 0,$$

the volatility surface is free of butterfly arbitrage.
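Durrleman's condition can be checked numerically on a discrete maturity slice. Approximating the derivatives with finite differences is our simplification; in the parametric SVI setting they can also be computed analytically:

```python
import numpy as np

def g_durrleman(x, sigma_tot):
    """Evaluate g(x) from Procedure 3.2.2 for one maturity slice.

    x: increasing grid of log-moneyness; sigma_tot: Sigma(x, T) on that grid.
    First and second derivatives are approximated with np.gradient."""
    S = np.asarray(sigma_tot, dtype=float)
    dS = np.gradient(S, x)    # first derivative of Sigma w.r.t. x
    d2S = np.gradient(dS, x)  # second derivative
    return (1.0 - x * dS / S) ** 2 - 0.25 * S**2 * dS**2 + S * d2S

def butterfly_free(x, sigma_tot):
    # The slice is free of butterfly arbitrage if g > 0 everywhere.
    return bool(np.all(g_durrleman(x, sigma_tot) > 0))
```

For a flat slice the derivatives vanish, so g(x) = 1 everywhere and the test passes, as expected.
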


To eliminate butterfly arbitrage, Gatheral and Jacquier [7] propose to fix the SVI-JW parameters $v_T$, $\psi_T$ and $p_T$ for a maturity slice $T$ and choose the remaining parameters as:

$$c_T' = p_T + 2\psi_T \quad \text{and} \quad \hat{v}_T' = v_T \frac{4 p_T c_T'}{(p_T + c_T')^2}. \tag{7}$$

It follows by continuity of the parameterization in all of the SVI-JW parameters that there must exist a pair of parameters $(c_T^*, \hat{v}_T^*)$, where $c_T^* \in (c_T, c_T')$ and $\hat{v}_T^* \in (\hat{v}_T, \hat{v}_T')$, such that the new volatility smile is free of butterfly arbitrage and as close as possible to the original one in some sense.

According to Gatheral and Jacquier [7], this set of parameters should ensure no butterfly arbitrage, but it has been shown that in some particular cases, when arbitrage is introduced in the left wing of the smile, $p_T$ cannot be fixed but also has to be chosen [17]. The choice of $p_T$ is not restricted to a closed interval, unlike the other two parameters.

The optimal choice of parameters can be found using least squares, with an objective function set to the differences between the total implied variances given by the new parameterization and the original, plus a large penalty $P$ for butterfly arbitrage. The expression to minimize is then:

$$\min_{c_T, \hat{v}_T, p_T} \sum_{j=1}^{m} (\omega_j^{SVI} - \omega_j')^2 + P,$$

where $\omega_j^{SVI}$ is the total implied variance given by the original parameterization and $\omega_j'$ is the total implied variance given by the new parameterization for log-moneyness $j$.

Remark: The size of $P$ should be large enough to ensure no butterfly arbitrage. From our empirical findings, $P = 10000$ is sufficient.

3.3 Construction of a volatility surface

In this section we describe how to use the raw SVI parameterization for constructing a volatility surface. When finished, the volatility surface is represented by a uniformly distributed discrete set of implied volatilities. Henceforth, total implied variance is used since the raw SVI parameterization is defined in total implied variance.

For a fixed time to maturity $T$, the raw SVI parameterization, Definition 2.6, is a function that determines the total implied variance for any choice of log-moneyness $x$. The goal is to determine the set of parameters $\{a, b, \rho, m, \sigma\}$ allowing the best fit to the market data. This is an optimization problem which can be solved using least squares, where the expression to minimize is defined as:


$$\min_{a, b, \rho, m, \sigma} \sum_{j=1}^{m} w_j (\omega_j^{SVI} - \hat{\omega}_j)^2, \tag{8}$$

where $w_j$ is a weight, $\omega_j^{SVI}$ is the raw SVI parameterization and $\hat{\omega}_j$ is the observed total implied variance, obtained by using Newton's method, described in 2.2, to determine the implied volatility and transforming it into total implied variance using (3).

Remark: Various methods exist for solving (8), for example the Levenberg-Marquardt algorithm or the Trust Region Reflective algorithm; the latter was used throughout this thesis work since it is robust and the default method in the SciPy package.
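A minimal sketch of the fit in equation (8) using SciPy's `least_squares` (whose default method is Trust Region Reflective, as in the thesis). The starting values and parameter bounds are illustrative assumptions, not the ones used in the thesis:

```python
import numpy as np
from scipy.optimize import least_squares

def svi_raw(x, a, b, rho, m, sigma):
    # Raw SVI total implied variance (Definition 2.6).
    return a + b * (rho * (x - m) + np.sqrt((x - m) ** 2 + sigma**2))

def fit_svi_raw(x, omega_obs, w=None):
    """Weighted least-squares fit of the raw SVI parameters, equation (8).

    x: log-moneyness grid; omega_obs: observed total implied variances;
    w: optional weights. Returns the fitted (a, b, rho, m, sigma)."""
    if w is None:
        w = np.ones_like(omega_obs)

    def residuals(p):
        # sqrt-weighted residuals so the squared sum matches equation (8).
        return np.sqrt(w) * (svi_raw(x, *p) - omega_obs)

    p0 = [0.01, 0.1, 0.0, 0.0, 0.1]  # a, b, rho, m, sigma (illustrative)
    bounds = ([-1.0, 0.0, -0.999, -1.0, 1e-4],
              [1.0, 10.0, 0.999, 1.0, 10.0])
    return least_squares(residuals, p0, bounds=bounds).x
```
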

In Figure 4, a raw SVI parameterization is displayed together with correspond- ing market data.

Figure 4: A raw SVI parameterization fitted to market data.

Having a raw SVI parameterization, we must ensure that no butterfly arbitrage opportunities are introduced. In [7], the authors give an example of when butterfly arbitrage is introduced, which is re-created in Figure 5. The set of parameters used for this example is $\{a, b, \rho, m, \sigma\} = \{-0.0410, 0.1331, 0.3060, 0.3586, 0.4153\}$.


Figure 5: Visualization of Durrleman’s condition. Arbitrage opportunities are introduced as g(x) falls below the dashed line.

To eliminate butterfly arbitrage we transform the raw SVI parameters into SVI-JW parameters and then apply least squares. The objective function is set to the differences between the total implied variances given by the new parameterization and the original, plus a large penalty for butterfly arbitrage.

When all butterfly arbitrage opportunities are eliminated we can safely inter- polate between a pair of raw SVI parameterizations without risk of introducing new butterfly arbitrage. We therefore choose a uniformly distributed set of log-moneyness and use the raw SVI parameterization to determine the corre- sponding set of total implied variances for each time to maturity.

The interpolation between the sets of total implied variances can be done in various ways. Since we know that the surface has an approximately linear profile in the time to maturity direction, linear or polynomial regression are suitable approaches. Using such models we can predict a uniformly distributed set of maturities for each log-moneyness. In Figure 6, an interpolation of market data using linear regression and the predicted set of maturities is shown.


Figure 6: Polynomial regression interpolation in the time to maturity direction for log-moneyness = 0.

Now, the calendar spread arbitrage test, defined in Procedure 3.2.1, can be performed. In Figure 7, volatility smiles for times to maturity of 40, 76 and 112 days are shown. Here, no calendar arbitrage opportunities are introduced since each volatility smile lies strictly below those with longer time to maturity. In the case of intersections between volatility smiles, Lemma 3.2.1 is used to eliminate the calendar arbitrage.

Figure 7: Volatility smiles for different time to maturities T where no calendar arbitrage opportunities are introduced.


With no calendar spread arbitrage, we now have an arbitrage-free set of discrete total implied variances to represent the surface. The volatility surface is shown in Figure 8, where plot (a) displays the surface expressed in total implied variance and (b) displays it in implied volatility. The surface is constructed using a total of 2500 discrete points, with 50 steps in the range [−0.5, 0.5] for log-moneyness and 50 steps in the range [4, 300] for time to maturity.


(a) A volatility surface expressed in total implied variance, ω.

(b) A volatility surface expressed in implied volatility, σimp.

Figure 8: A volatility surface expressed in total implied variance, ω, in (a) and expressed in implied volatility, σimp, in (b).


3.3.1 Calculate change and error between market data and the surface

To measure change in the volatility surface, relative change is used. For a fixed time to maturity $T_i$ and log-moneyness $x_j$, a relative change is defined as:

$$\delta_{i,j} = \frac{\sigma_{t+1} - \sigma_t}{\sigma_t} + 1, \tag{9}$$

where $\sigma_t$ is the implied volatility $\sigma_{imp}$ in the volatility surface for day $t$ and $\sigma_{t+1}$ is the implied volatility $\sigma_{imp}$ in the volatility surface for day $t+1$, at time to maturity $T_i$ and log-moneyness $x_j$.

To measure a volatility surface’s fit against the market and available options, the total squared relative error is used. For a certain option’s time to maturity Ti and log-moneyness xj, a squared relative error is defined as:

$$\varepsilon_{i,j} = \left(\frac{\hat{\sigma}_{imp} - \sigma_{imp}}{\sigma_{imp}}\right)^2, \tag{10}$$

where $\hat{\sigma}_{imp}$ is the option's implied volatility and $\sigma_{imp}$ is the implied volatility in the volatility surface at time to maturity $T_i$ and log-moneyness $x_j$.

The total squared relative error is defined as:

$$TSS = \sum_{i}^{n} \sum_{j}^{m} \varepsilon_{i,j}, \tag{11}$$

where $i$ denotes the time to maturity and $j$ denotes the log-moneyness. TSS is the total error of a volatility surface's fit against the market.
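The three error measures, equations (9)-(11), translate directly into code; a small sketch:

```python
import numpy as np

def relative_change(sigma_t, sigma_t1):
    # Equation (9): delta = (sigma_{t+1} - sigma_t) / sigma_t + 1.
    return (sigma_t1 - sigma_t) / sigma_t + 1.0

def squared_relative_error(sigma_hat, sigma_surf):
    # Equation (10): eps = ((sigma_hat - sigma_surf) / sigma_surf)^2.
    return ((sigma_hat - sigma_surf) / sigma_surf) ** 2

def tss(sigma_hat, sigma_surf):
    # Equation (11): total squared relative error over all grid points.
    return float(np.sum(squared_relative_error(np.asarray(sigma_hat),
                                               np.asarray(sigma_surf))))
```
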

3.4 Analysis of the surface

From the theory presented in Section 2.3 we have knowledge about volatility surfaces' characteristics. We know there exists volatility clustering, leading to periods of high and low day-to-day changes in the volatility surface. However, these characteristics fail to capture the movements within the volatility surface. In Figure 9, a relative day-to-day change in the surface is shown. It is clear that the movements are not constant over the whole volatility surface but fluctuate in both the moneyness and time to maturity directions.


Figure 9: An example of relative day-to-day changes, δ, within a volatility surface.

The non-constant changes within the volatility surface motivate analysing the evolution over time in order to identify further characteristics. This analysis consists of two parts. The first part concerns the relative day-to-day changes over time, with a goal to find sections within the surface with similar behaviour. In the second part, the magnitudes of the relative day-to-day changes are considered, with a goal to find subsections within the sections with similar magnitude.

3.4.1 Part 1: Implied volatility time series analysis To analyze the relative day-to-day changes we examine the evolution of a volatil- ity surface over time. Assume a sequence of time dependent volatility surfaces constructed as described in Section 3.3. Then for a fixed time to maturity and log-moneyness we define an implied volatility time series as:

$$\{d_t^{(i,j)} : t \in T\},$$

where $i$ denotes the time to maturity, $j$ denotes the log-moneyness and $t$ denotes the index of the time set $T$.

For each time series $d_t^{(i,j)}$ we fit a polynomial regression model and create a feature vector

$$y^{(i,j)} = (\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_p)$$

with the model's $p$ coefficients as its elements. By doing this, the behaviour of a time series is described by its corresponding feature vector $y^{(i,j)}$. In Figure 10, an example of an implied volatility time series is shown together with the corresponding regression model, and in Table 1, examples of $y^{(i,j)}$ are shown.

Figure 10: Example of a regression model of 4th degree fitted to a time series $d_t^{(i,j)}$.

Table 1: Examples of estimated coefficients of the regression models fitted to different implied volatility time series.

$y^{(i,j)}$   $\hat{\beta}_0$   $\hat{\beta}_1$   $\hat{\beta}_2$   $\hat{\beta}_3$   $\hat{\beta}_4$
1             0.160409          0.00273398        -7.49381e-05      7.60478e-07       -3.13995e-09
2             0.203659          0.00283794        -8.4716e-05       8.99169e-07       -3.82849e-09
...           ...               ...               ...               ...               ...
n             0.24649           0.00170954        -3.17275e-05      2.18391e-07       -5.64291e-10

Having feature vectors $y^{(i,j)}$ to describe the behaviour of the volatility surface for each time to maturity and log-moneyness, we can apply K-means clustering, following the step-by-step implementation presented in Section 2.8, to determine which observations can be treated similarly. The division performed by the K-means clustering is shown in Figure 11.
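This first part of the analysis can be sketched with NumPy's `polyfit` and scikit-learn's `KMeans`. The choice of tooling is ours; the thesis does not prescribe a library:

```python
import numpy as np
from sklearn.cluster import KMeans

def surface_sections(series, degree=4, n_sections=6, seed=0):
    """Fit a polynomial regression model to each implied volatility time
    series and cluster the coefficient vectors into sections.

    series: array of shape (n_points, n_days), one row per (T_i, x_j) grid
    point. Returns one section label per grid point."""
    t = np.arange(series.shape[1])
    # Feature vector y^(i,j): the fitted polynomial coefficients.
    features = np.array([np.polyfit(t, s, deg=degree) for s in series])
    return KMeans(n_clusters=n_sections, n_init=10,
                  random_state=seed).fit_predict(features)
```
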


Figure 11: A volatility surface divided into sections using K-means clustering of y(i,j), here using K = 6.

3.4.2 Part 2: Magnitude of change analysis

In the second part of the analysis, the relative day-to-day changes of the implied volatility time series are considered. In Figure 12, the relative day-to-day changes for the implied volatility time series $d_t^{(i,j)}$ from Figure 10 are shown.

Figure 12: Relative day-to-day changes of the time series $d_t^{(i,j)}$ shown in Figure 10. The solid line represents the mean change over the time period.


From the relative day-to-day change example in Figure 9, one would expect a variation in magnitude of change for different times to maturity and log-moneyness. To determine the magnitude of change, the standard deviation of an implied volatility time series' relative day-to-day changes is used. This metric is a representation of the magnitude of change within the volatility surface for a certain time to maturity and log-moneyness.

Considering only one of the six sections discovered in the first part of the analysis, we can apply K-means clustering once more to determine which observations within the section can be treated similarly with respect to their standard deviations. A result of this clustering is shown in Figure 13.

Figure 13: A section from Figure 11 divided into subsections using K-means based on standard deviation, here using K = 4.

Having a volatility surface divided into sections and subsections provides valuable insight when adjusting the volatility surface to improve the fit against the market. However, for the subsections to be useful we need to determine how they relate to each other.

Considering only one section, the mean standard deviation of each subsection is calculated. This metric is then used to determine the relative differences between the subsections. Using the theory presented in Section 2.9 about transition matrices, a matrix with the relative differences as elements is created. Instead of a transition matrix with the probabilities of moving from subsection $i$ to subsection $j$ as elements, we have a matrix, henceforth denoted a transformation matrix, with the relative differences between the subsections as elements. This transformation matrix describes the relationships between the subsections and will be used to determine how changes in one subsection affect the others.


An example of a transformation matrix is shown in Table 2.

Table 2: Example of a transformation matrix.

Subsection:   1           2          3          4
1             1           10.1313    4.40932    2.09985
2             0.0987042   1          0.435218   0.207264
3             0.226792    2.2977     1          0.47623
4             0.476225    4.82477    2.09983    1
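Under our reading of Table 2, element (i, j) is the mean standard deviation of subsection j divided by that of subsection i, so the diagonal is one and element (i, j) = 1/element (j, i). With that assumption the transformation matrix can be built as:

```python
import numpy as np

def transformation_matrix(mean_stds):
    """Build a transformation matrix from the mean standard deviation of
    each subsection: element (i, j) = mean_stds[j] / mean_stds[i], so row i
    tells how a change observed in subsection i scales to the others.

    The ratio convention is our interpretation of Table 2."""
    m = np.asarray(mean_stds, dtype=float)
    return m[None, :] / m[:, None]
```
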

3.5 Intraday calibration

A problem with dynamic adjustments of a volatility surface is preserving its properties while increasing its fit against the market. Intraday, this problem is amplified: option trades are fewer and do not occur at the same or at predetermined times.

To handle the issue of low liquidity we introduce a concept called artificial changes, produced by the transformation matrices, and batch calibration. Given a new option trade, we identify which section and subsection it belongs to and calculate the relative change to the volatility surface, henceforth denoted a true change. Using the section's transformation matrix we can determine the changes in the related subsections and use these as artificial changes. The information from one option trade is therefore enough to change a whole section of the volatility surface, which solves the problem of absent option trades for some strike prices or maturities.

Batch calibration means adjustments are performed based on several option trades. By this, option trades done up until the calibration can be treated as synchronized, which solves the problem of option trades not occurring at the same time and ensures a higher number of option trades per calibration.

During a calibration, the volatility surface is adjusted by the mean change of each subsection in the volatility surface. When calculating the mean change of a subsection, artificial changes should be weighted less than true changes. To perform an intraday calibration, the step-by-step implementation in Procedure 3.5.1 can be used.

Procedure 3.5.1 (Intraday calibration). Assume the existence of an arbitrage-free end-of-day volatility surface and a batch of new intraday option trades. The steps of an intraday calibration are then:

(i) For every new intraday option trade up until the calibration:

(a) Determine the relative change between the option trade's implied volatility and the implied volatility given by the existing end-of-day volatility surface using equation (9).


(b) Identify which section and subsection the option trade belongs to.

(c) Use the section's transformation matrix to determine artificial changes in the related subsections.

(ii) Calculate the mean change for each subsection.

(iii) Adjust each subsection by its mean change.

(iv) Eliminate arbitrage opportunities within the volatility surface; see Section 3.2 for methods.

Remark: Artificial changes should be weighted less than true changes. From our empirical findings, a weight of 0.5 is suitable. However, the choice of weight highly depends on the available data.
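Procedure 3.5.1 can be sketched as below, with the arbitrage-elimination step (iv) omitted. The data layout (grid-indexed trades and precomputed label arrays) is our assumption for illustration:

```python
import numpy as np

def intraday_calibration(surface, trades, sections, subsections, trans_mats,
                         artificial_weight=0.5):
    """Sketch of Procedure 3.5.1, steps (i)-(iii).

    surface: (n, m) end-of-day implied volatilities.
    trades: list of (i, j, sigma) with grid indices and traded implied vol.
    sections[i, j], subsections[i, j]: labels from the analysis in Section 3.4.
    trans_mats[s]: transformation matrix for section s.
    """
    changes = {}  # (section, subsection) -> list of (change, weight)
    for i, j, sig in trades:
        s, ss = sections[i, j], subsections[i, j]
        # (i)(a) True change; this is delta - 1 in the notation of eq. (9).
        true_change = (sig - surface[i, j]) / surface[i, j]
        changes.setdefault((s, ss), []).append((true_change, 1.0))
        # (i)(c) Artificial changes in the section's other subsections,
        # weighted less than true changes.
        for other in range(trans_mats[s].shape[0]):
            if other != ss:
                art = true_change * trans_mats[s][ss, other]
                changes.setdefault((s, other), []).append(
                    (art, artificial_weight))
    new_surface = surface.copy()
    for (s, ss), ch in changes.items():
        vals = np.array([c for c, _ in ch])
        w = np.array([wt for _, wt in ch])
        mean_change = np.average(vals, weights=w)  # (ii) weighted mean
        mask = (sections == s) & (subsections == ss)
        new_surface[mask] *= (1.0 + mean_change)   # (iii) adjust subsection
    return new_surface
```
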

The implementation above can be used for performing several calibrations throughout a day, as long as a copy of the original end-of-day volatility surface is available. For additional calibrations, the batch of new intraday options consists of the ones used in the previous calibrations plus the ones available since the latest calibration.

An example of how the intraday calibration adjusts the volatility surface is shown in Figure 14. This adjustment is based upon 20% of the available data for a day. The data is randomly selected in order to represent the option trades during the day. In Figure 15, the whole relative day-to-day change of the volatility surface is shown as a reference. It is clear that the intraday adjustments mimic the main characteristics of the relative day-to-day changes.


Figure 14: Intraday calibration adjustments of a volatility surface based upon 20% of available data.


Figure 15: Relative day-to-day changes within a volatility surface. This figure should be examined as a reference to Figure 14.

Remark: After a volatility surface has been adjusted, step (iii) in Procedure 3.5.1, there may exist arbitrage opportunities. Eliminating these opportunities will make the adjustments smoother than shown in Figure 14.

3.6 The Intraday Calibration Model (ICM)

The analysis of the volatility surface can be accomplished for various combinations of sections, subsections and polynomial degrees of the linear regression models. We therefore introduce the Intraday Calibration Model, for which these parameters can be chosen.

3.6.1 Model description

The Intraday Calibration Model is based on one main concept: analyzing historical data and identifying characteristics. For the analysis part to perform well, sufficient historical data is needed. What defines sufficient? We have identified at least two important aspects. The first is good data quality, for example arbitrage-free end-of-day surfaces constructed using whole days of data. If the analysis is based upon bad quality data, it will produce insights of bad quality. The second aspect is the amount of data used for analysis. If the data covers too short a time period, the analysis might not capture enough information. Vice versa, the analysis might not capture the details if the time period is too long.

The Intraday Calibration Model has the following set of hyper-parameters, $\Omega = \{d, K_s, K_{ss}\}$, which need to be tuned for the model to perform well. These hyper-parameters are:

(a) $d$: Polynomial degree of the linear regression models.

(b) $K_s$: Number of sections.

(c) $K_{ss}$: Number of subsections within each section.

$d$ decides the flexibility of the linear regression models and their ability to fit the implied volatility time series. The purpose of a linear regression model is to capture the general behaviour of the time series. A low polynomial degree might fail to capture the general behaviour, while a high degree increases the risk of overfitting. During time periods of constant low or high volatility, a lower polynomial degree might be sufficient to capture the general behaviour. However, a higher polynomial degree might be needed during time periods of fluctuating volatility. No matter the choice of degree, the final decision should yield a result that is reasonable for the practitioner.

The choices of $K_s$ and $K_{ss}$ should be based upon the practitioner's expertise. The purpose of the sections and subsections is to identify which regions within a volatility surface can be treated similarly. A section or subsection should not be scattered across the surface but should preferably be concentrated to a region. It should also be of reasonable size. Our empirical findings show that using a high number of sections and subsections yields non-intuitive results, where sections are scattered across the surface and cover only very small areas. The final choices of $K_s$ and $K_{ss}$ should, similar to the choice of $d$, be reasonable and yield an intuitive result.

3.6.2 Model evaluation

To determine an optimal set of hyper-parameters $\Omega^* = \{d^*, K_s^*, K_{ss}^*\}$, the Intraday Calibration Model's performance is evaluated over a tuning period of $T$ days for various combinations of hyper-parameters. The performance is measured by the total sum of squared relative errors, TSS, between market data and the volatility surface, see equation (11). The optimal set of hyper-parameters is the one yielding the smallest sum of TSS over the time period. One thing to keep in mind is that there may be local minima of the sum of TSS that do not reflect the optimal set of hyper-parameters.

For a set of hyper-parameters $\Omega$, perform an analysis of the available historical volatility surfaces, as presented in Section 3.4. Starting at day $t$, where $t = 1, 2, \ldots, T$, calculate the relative errors between the option trades' implied volatilities and the volatility surface's implied volatilities up until the next intraday calibration. Then perform an intraday calibration, as described in Procedure 3.5.1. Continue until no more option trades are available, then move on to the next day.

3.7 Parallel shift

Another method for adjusting a volatility surface is a parallel shift, which simply shifts the whole surface in the desired direction. The step-by-step implementation in Procedure 3.7.1 can be used to adjust the surface using a parallel shift.

Procedure 3.7.1 (Intraday calibration using parallel shift). Assume the existence of an arbitrage-free end-of-day volatility surface and a batch of new intraday option trades. The steps of an intraday calibration using parallel shift are then:

(i) For every new intraday option trade up until the calibration, determine the relative change between the option trade's implied volatility and the implied volatility given by the existing end-of-day volatility surface using equation (9).

(ii) Calculate the mean change of all new intraday option trades.

(iii) Shift the whole surface by the mean change.
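Procedure 3.7.1 is short enough to sketch directly; the grid-indexed trade format is an assumption for illustration:

```python
import numpy as np

def parallel_shift(surface, trades):
    """Procedure 3.7.1: shift the whole surface by the mean relative change
    of the new intraday option trades.

    surface: (n, m) end-of-day implied volatilities.
    trades: list of (i, j, sigma) with grid indices and traded implied vol."""
    # (i) Relative change per trade (delta - 1 in the notation of eq. (9)).
    changes = [(sig - surface[i, j]) / surface[i, j] for i, j, sig in trades]
    # (ii)-(iii) Shift the whole surface by the mean change.
    return surface * (1.0 + np.mean(changes))
```
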

The problem of preserving the volatility surface's properties is avoided when using a parallel shift, since the surface itself is not adjusted, only its position. However, the relative day-to-day change needs to be of the same magnitude across the whole volatility surface for this method to perform well.


4 Results

This section presents the performance of the Intraday Calibration Model and the parallel shift for the LME Copper and WTI NYMEX datasets. First, a brief review of how the results are produced is given. Then, the results are presented for each dataset separately. Finally, the results of the two datasets are compared and high volatility periods are presented.

4.1 Review of assumptions and approach

The methods are evaluated over the second year in both datasets. For each day, four intraday calibrations are performed, with batch sizes of 20%, 40%, 60% and 80% of the available data.

The first year of data is used for analysis of the volatility surfaces in the Intraday Calibration Model. We have used a rolling time window for the analysis of the volatility surfaces, meaning that for each day evaluated the time window moves one day ahead.

The optimal set of hyper-parameters for the Intraday Calibration Model is determined by model evaluation, see Section 3.6.2, where the tuning period is the first 30 days of the second year. This period is included when the methods' performance is measured.

In figures and tables, the methods are presented along with the total squared relative error, TSS, of the last end-of-day volatility surface. This gives perspective on how the methods' adjustments perform in relation to no adjustments intraday.

4.2 LME Copper

The Intraday Calibration Model’s hyper-parameters used for the LME Copper dataset are presented in Table 3.

Table 3: Hyper-parameters of ICM for the LME Copper dataset.

Degree  Sections  Subsections
4       6         6

In Figure 16, the daily TSS of the volatility surface with no adjustments over the second year is shown. The figure shows the magnitude of change for the surface over the time period. It is clear that the magnitude of change fluctuates and that the second year captures periods of both low and high volatility.


Figure 16: Daily TSS in the LME Copper volatility surface for the second year.

In Figure 17, the methods’ performances over the first 30 days are shown. Both methods manage to reduce the TSS in general, but the parallel shift is almost always outperformed by the Intraday Calibration Model.

Figure 17: Method performances for the first 30 days in the second year.

The methods’ performances are presented in Table 4. Sum of TSS is the sum of the total squared relative errors over the whole time period. Error reduction indicates how much each method has reduced the error in relation to no adjustments, and Error increases is the number of occasions when a method produces a higher daily TSS than no adjustments.

Table 4: Method performances for LME Copper for the second year.

Method                        No adjustments  Parallel shift  ICM
Sum of TSS                    21.12           19.37           13.26
Error reduction*              −               8.27%           37.19%
Error increases (occasions)*  −               118             14

* Relative to No adjustments.
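The summary statistics of Table 4 can be reproduced from the daily TSS series. The sketch below assumes TSS is computed as the sum of squared relative errors between the estimated surface and the end-of-day surface fitted to the market; the function names are our own:

```python
import numpy as np

def daily_tss(surface_est, surface_market):
    """Total squared relative error between an estimated surface and the
    surface fitted to the market at end of day (both on the same grid)."""
    rel_err = (surface_est - surface_market) / surface_market
    return float(np.sum(rel_err ** 2))

def summarize(tss_method, tss_noadj):
    """Sum of TSS, error reduction relative to no adjustments, and the
    number of days on which the method increased the daily TSS."""
    sum_method, sum_noadj = sum(tss_method), sum(tss_noadj)
    error_reduction = 1.0 - sum_method / sum_noadj
    error_increases = sum(m > n for m, n in zip(tss_method, tss_noadj))
    return sum_method, error_reduction, error_increases
```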

In Figures 18 and 19, the daily TSS for the second year are shown with red dots indicating when the methods’ adjustments increased the TSS. As stated in Table 4, both methods reduce the error in general. However, the parallel shift’s error reduction is not very large and the daily TSS is increased on 118 occasions, while the Intraday Calibration Model has a significantly higher error reduction and only increases the daily TSS on 14 occasions. Examining the figures, it is clear that the parallel shift has trouble reducing the errors no matter how volatile a day is, in contrast to the Intraday Calibration Model, which only increases the daily TSS when the movements in the surface are very small.

Figure 18: Error increases for the parallel shift. The occasions are indicated by red dots.


Figure 19: Error increases for the Intraday Calibration Model. The occasions are indicated by red dots.

The analysis of the occasions when the error is increased is presented in Table 5. Mean error is the mean of the TSS on the occasions when the TSS is increased, Highest error is the highest TSS of the method when it performed worse than no adjustments, and Mean relative error increase is how much the methods increase the TSS on average relative to no adjustments.

Table 5: Results on the occasions when error is increased for LME Copper.

Method                        Parallel shift  ICM
Mean error                    0.022           0.001
Highest error                 0.720           0.064
Mean relative error increase  0.014           0.003

In Table 5, the red dots from Figures 18 and 19 are expressed as values. The mean error for no adjustments over the whole period is 0.081, which is significantly higher than the Mean error presented for both methods. This indicates that the methods tend to increase the error when the daily TSS of the volatility surface is small. Examining Figures 18 and 19, we find that most of the red dots are located at small daily TSS. It is also visible that the parallel shift increases the error on volatile days as well.

The Mean relative error increase shows that when errors are increased, the parallel shift increases the error more than the Intraday Calibration Model, which hardly increases the error at all.


4.3 WTI NYMEX

The Intraday Calibration Model’s hyper-parameters for the WTI NYMEX dataset are presented in Table 6.

Table 6: Hyper-parameters of ICM for the WTI NYMEX dataset.

Degree  Sections  Subsections
3       7         7

In Figure 20, the daily TSS of the volatility surface with no adjustments over the second year is shown. Worth noticing is that the daily TSS is generally greater for the WTI NYMEX dataset than for the LME Copper dataset.

Examining Figure 21, we find that both methods reduce the TSS, and that the Intraday Calibration Model outperforms the parallel shift in general, similarly to the result in Figure 17.

Figure 20: Daily TSS in the WTI NYMEX volatility surface for the second year.


Figure 21: Method performances for the first 30 days in the second year.

In Table 7, the methods’ performances are shown. Both the parallel shift and the Intraday Calibration Model have a greater error reduction for the WTI NYMEX dataset than for the LME Copper dataset. Both methods also have fewer occasions with increased errors, which supports the methods’ reliability.

Table 7: Method performances for WTI NYMEX for the second year.

Method                        No adjustments  Parallel shift  ICM
Sum of TSS                    117.83          90.59           60.40
Error reduction*              −               23.12%          48.74%
Error increases (occasions)*  −               55              11

* Relative to No adjustments.

In Figures 22 and 23, the daily TSS for the second year are shown with red dots indicating when the methods’ adjustments increased the TSS. The parallel shift performs better on this dataset than on the LME Copper dataset: it has a larger error reduction and fewer occasions where the daily TSS is increased. The Intraday Calibration Model shows a similar trend, with fewer error increases and an improved error reduction.


Figure 22: Error increases for the parallel shift. The occasions are indicated by red dots.

Figure 23: Error increases for the Intraday Calibration Model. The occasions are indicated by red dots.

The analysis of the occasions when the error is increased is presented in Table 8.


Table 8: Results on the occasions when error is increased for WTI NYMEX.

Method                        Parallel shift  ICM
Mean error                    0.044           0.005
Highest error                 1.515           0.690
Mean relative error increase  0.016           0.025

In Table 8, the red dots from Figures 22 and 23 are expressed as values. The mean error for no adjustments over the whole period is 0.453, which is significantly higher than the Mean error presented for both methods. As in Section 4.2, we find that the methods perform better during periods of high volatility than during periods of low volatility, which is strengthened by the results presented in Table 8. Also noticeable is that the Highest error for both methods is much higher here than for the LME Copper dataset.

The Mean relative error increase shows that both methods increase the error by only a very small amount.

4.4 Comparison and conclusion of results

Comparing the results for the two datasets, we find both similarities and disparities. In both cases the methods reduce the errors and the parallel shift is outperformed by the Intraday Calibration Model. The parallel shift struggles with the LME Copper dataset; the method has a small error reduction (8.27%) while it also increases the daily TSS on several occasions (118). Comparing these results with those on the WTI NYMEX dataset, where the parallel shift has a rather good error reduction (23.12%) and a lower frequency of error increases (55), we can conclude that the parallel shift might not be suitable in all cases.

(a) LME Copper (b) WTI NYMEX

Figure 24: Daily TSS in the volatility surfaces for the second year.

Comparing the daily TSS of the surfaces, see Figure 24, we can conclude that the changes in the volatility surfaces are significantly larger in the WTI NYMEX dataset than in the LME Copper dataset. When the changes are large, it is intuitive that a vertical adjustment of the surface can reduce the error, but when the changes are smaller, vertical adjustments are not sufficient. Recalling the relative day-to-day changes for a volatility surface in Figure 9, also displayed in Figure 25 below, we know that the changes are not constant over the whole volatility surface. From our empirical findings, the bigger portion of the total change is located in the ”front” of the volatility surface, at low times to maturity. For the parallel shift, these changes are heavily weighted and have a large effect on the amount of vertical alignment, making the adjustment too large during periods when most of the changes in the volatility surface are very small.

Figure 25: Relative day-to-day changes within a volatility surface.

One difference between the methods’ performances is when errors increase. As presented in Tables 4 and 7 above, the Intraday Calibration Model has a lower frequency of error increases than the parallel shift for both datasets. Perhaps the most important aspect of the occasions when the methods increase the TSS is when they occur. As visualized in Figures 19 and 23, the Intraday Calibration Model increases the error when the daily TSS is small. For the parallel shift, visualized in Figures 18 and 22, the increased errors are more random and can even make the TSS higher on the most volatile days.

From the results in Sections 4.2 and 4.3, we conclude that the Intraday Calibra- tion Model is robust and reliable, showing good performance for various market conditions.


4.5 Volatile periods

The results so far show how the methods operate during a whole year, but how do the methods perform during the days when the TSS is at its highest peaks? In Figures 26 and 27, the most volatile days of the second year are shown for each method and dataset.

Figure 26: Method performances on LME Copper for the six highest volatile days in the second year.

In the LME Copper dataset, the highest daily TSS peak was at day 152 with TSS = 0.71, as visualized in Figure 26. This day is particularly interesting since the parallel shift actually increases the error while the Intraday Calibration Model reduces the error significantly.


Figure 27: Method performances on WTI NYMEX for the six highest volatile days in the second year.

In the WTI NYMEX dataset, the highest daily TSS is at day 184 with TSS = 12.2. The six most volatile days of this dataset are visualized in Figure 27. In this dataset the parallel shift performs well and reduces the daily TSS on all six days. However, it is still outperformed by the Intraday Calibration Model. In Table 9, the method performances for the 5% most volatile days in each dataset are shown. For these days, the Intraday Calibration Model’s performance is better than its average error reduction. The same conclusion can be stated for the parallel shift’s performance. This discovery is of utmost interest since it is during the periods of high volatility that the performances are of the highest importance.

Table 9: Method performances for the 5% most volatile days (above the 95% quantile) of the second year.

Method                         Parallel shift  ICM
Error reduction* (LME Copper)  10.08%          45.81%
Error reduction* (WTI NYMEX)   29.52%          53.94%

* Relative to No adjustments.


5 Discussion

This section contains discussions regarding the results and methods presented throughout this thesis, potential drawbacks of the Intraday Calibration Model and suggestions for future studies.

5.1 Results

The goal of this thesis was to find one or several approaches to calibrate a volatility surface intraday in order to perform more accurate risk calculations. The Intraday Calibration Model reduces the error against the market during days of both high and low volatility.

Starting with the days of high volatility, the Intraday Calibration Model performs very well and reduces the errors by almost half (45.81%) for LME Copper and more than half (53.94%) for WTI NYMEX, compared to no adjustments intraday. It is during the periods of high volatility that accurate risk calculations are of utmost importance. The adjustments made with the Intraday Calibration Model adapt the volatility surface’s fit as market conditions change intraday.

The Intraday Calibration Model struggles during days of low volatility and may on occasion increase the error instead of reducing it, a deficiency showing the model’s imperfection. However, as pointed out, it is not during periods of low volatility that it is critical to be precise. If the Intraday Calibration Model were inconsistent and increased errors when volatility is both low and high, it would not be trustworthy. Since this shortcoming only applies when volatility is low, we feel confident about the Intraday Calibration Model’s performance.

5.2 Construction of the volatility surfaces

The results presented in Section 4 highly depend on the choices made when constructing the volatility surfaces. The first thing we want to address is our choice of the raw SVI parameterization for constructing the volatility surfaces. This method allows us to represent the volatility surfaces with a number of uniformly distributed data points which can easily be modified when adjusting the volatility surfaces. However, the raw SVI parameterization has its advantages and disadvantages. The most obvious one is that the volatility surfaces are constructed slice-by-slice in the time to maturity direction. This is an advantage as the method is straightforward, easy to understand and easy to apply. On the other hand, the method easily introduces calendar arbitrage opportunities, as each volatility slice is independent of the others, which is a problem when the volatility surface is adjusted intraday. The calendar arbitrage opportunities need to be eliminated, and this can alter the adjustments, introducing a source of uncontrollable error.


Another important aspect we want to highlight is the size of the volatility surface, i.e. how far the surface spans in each direction. The size highly depends on the underlying asset and the options available on the market, but also on the market liquidity. We chose to study the volatility surface in the range of -0.5 to 0.5 in the log-moneyness direction and 4 to 300 days in the time to maturity direction for both datasets. These choices are motivated by the nature of the underlying assets in the datasets, but due to confidentiality, we cannot disclose the motivations in detail. However, we want to address the importance of limiting the size of the volatility surface in such a way that it actually means something. A volatility surface should only span an area where options are traded, since it is only in this region that information is available. Considering the log-moneyness direction, options far in or out of the money are often not traded frequently and therefore introduce large uncertainties regarding their implied volatilities and, consequently, their prices. Therefore we want to raise a concern regarding extrapolation in the log-moneyness direction. As previously stated, there is no point in creating a volatility surface unless it can fit the market reasonably, and if the information is stale or not available, this cannot be done properly. In the time to maturity direction this problem is not equally present. The volatility surface’s characteristics tell us it has a linear leaning profile in the time to maturity direction, making methods such as linear regression suitable for both interpolation and extrapolation as long as the arbitrage conditions of Theorem 2.7.2 are satisfied.
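As a minimal sketch of that last point, a straight line can be fitted through the implied volatilities observed at one moneyness level and evaluated at a maturity outside the observed range. The function name is ours, and the arbitrage checks of Theorem 2.7.2 are omitted for brevity:

```python
import numpy as np

def extrapolate_maturity(maturities, ivs, new_maturity):
    """Fit sigma(T) = a*T + b through the implied volatilities observed at a
    fixed moneyness level, then evaluate the line at a new maturity."""
    a, b = np.polyfit(maturities, ivs, deg=1)   # least-squares straight line
    return a * new_maturity + b
```

Any extrapolated value would still have to pass the calendar-arbitrage conditions before being accepted into the surface.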

The volatility surfaces we created are represented by points in the moneyness direction and the time to maturity direction, creating a predetermined grid of implied volatilities. This representation is essential for the analysis of the volatility surface since it allows us to study how the volatility surface evolves over time. One needs to recall that a volatility surface is a continuous representation of discrete points, i.e. the options available on the market, and that the options are not consistent from one day to another. As time goes by, an option’s time to maturity decreases and its placement on the volatility surface changes. Similarly, if the underlying asset’s price changes, so will the log-moneyness, also affecting the placement on the volatility surface.

5.3 Analysis of the volatility surface

The results presented in Section 4 do not only depend on how the volatility surfaces are constructed, but also on how they are analyzed. Analyzing historical volatility surfaces is motivated by the lack of research and knowledge regarding the changes within the volatility surfaces. Without such knowledge, it is almost impossible to determine which adjustments will improve the volatility surface’s market fit intraday. By visual inspection of the relative day-to-day changes, recall Figures 9, 15 or 25, we conclude that the change within the volatility surface is not homogeneous, which is a motivation for cluster analysis. K-means was chosen as the clustering algorithm since it does not have any distributional dependencies.


Having the volatility surface represented by a fixed grid of points allowed us to study each point’s evolution over time: the implied volatility time series. The choice to describe the time series by linear regression models allowed us to represent the volatility surface’s evolution over time by the linear regression models’ coefficients. Such a representation could have been achieved using a generalized additive model with splines or similar, but that would introduce additional complexity for understanding the model. Since the K-means clustering produced sections that seemed reasonable based upon the linear regression models, we decided not to investigate this further.

At this point in the analysis, we have identified sections within the volatility surface that change in unison, which confirms the suspicion of non-homogeneous changes within the volatility surface. However, these divisions do not address how large the changes are within each section, which calls for further analysis. Therefore we chose to calculate the standard deviation of the relative returns of the time series and perform an additional clustering within each section. The choice of standard deviation as a metric for describing the magnitude of change felt rather intuitive. It also has a strong connection to how movements are defined in finance.
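The two-stage analysis described above can be sketched as follows. The feature choices (regression coefficients per time series, then the standard deviation of relative returns) follow the text, while the minimal k-means helper, its deterministic seeding and all names are our own illustrative choices:

```python
import numpy as np

def _kmeans(X, k, iters=50):
    """Minimal Lloyd's algorithm with deterministic farthest-point seeding."""
    centers = [X[0]]
    for _ in range(1, k):
        dist = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(dist)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def section_surface(iv_series, n_sections, n_subsections):
    """Divide a surface's grid points into sections and subsections.

    iv_series : array of shape (n_points, n_days), one implied-volatility
                time series per grid point on the fixed surface grid.
    """
    n_points, n_days = iv_series.shape
    t = np.arange(n_days)

    # Stage 1: summarise each point's evolution by its linear-regression
    # coefficients (slope, intercept) and cluster those into sections.
    coeffs = np.array([np.polyfit(t, s, deg=1) for s in iv_series])
    sections = _kmeans(coeffs, n_sections)

    # Stage 2: within each section, cluster on the standard deviation of
    # the relative day-to-day returns to capture the magnitude of change.
    rel_returns = np.diff(iv_series, axis=1) / iv_series[:, :-1]
    stds = rel_returns.std(axis=1).reshape(-1, 1)
    subsections = np.empty(n_points, dtype=int)
    for sec in np.unique(sections):
        mask = sections == sec
        k = min(n_subsections, int(mask.sum()))
        subsections[mask] = _kmeans(stds[mask], k)
    return sections, subsections
```

Grid points whose time series trend together end up in the same section; the subsection labels then separate them by how strongly they move.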

Another aspect having a strong influence on the analysis of the volatility surface is the time window, i.e. how much historical data should be analyzed? Our answer to this question is ”it depends”. For our results, we used a rolling one-year time window where every historical day was weighted equally. Another approach is to weight older information less than newer information, meaning that yesterday’s changes have a bigger impact on the model than changes a week, or a month, ago. At the beginning of a period with a changed level of volatility, such an approach might be suitable. As a general guideline, the amount of historical data to analyze should be enough for the model to identify reasonable patterns. If the time window is too small, the analysis might not capture enough information. Vice versa, the analysis might not capture the details if the time window is too large.

The last thing we want to highlight is the concept of the transformation matrix, a centerpiece of the Intraday Calibration Model. The transformation matrices describe the relationships between the subsections within each section, which enables the Intraday Calibration Model to extract additional information from each option trade and create the artificial changes, which handle the issue of fewer option trades being available intraday. A major drawback of the transformation matrix is its stiffness. To explain why this is a problem, follow the example below.

Example: Assume having a volatility surface divided into sections and subsections, see Figure 28 below. Now consider a scenario where the volatility surface increases in one subsection but decreases in another subsection within the same section. Then, for a new option trade located in the ”increasing” subsection, the Intraday Calibration Model will create an artificial change in the ”decreasing” subsection based upon the section’s transformation matrix. Recall that the subsections are determined based upon their average standard deviation and that the transformation matrices contain the relationships between the subsections. Also recall that standard deviation is a measure of variation for a set of values and therefore, by definition, cannot be negative.

(a) Sections. (b) Subsections.

Figure 28: Example of a volatility surface divided into sections and subsections.

In this scenario, the method will suggest an adjustment in the wrong direction, since the artificial changes always have the same sign as the real change, only a different size. As mentioned in Procedure 3.5.1, step (ii), artificial changes should be weighted less than true changes in order to counteract this behaviour, but if liquidity is low there might not exist true changes for all subsections. However, as our results show, the effect of the model’s stiffness is rather small and the model performs well during periods of both low and high volatility.
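A hypothetical reconstruction makes the stiffness concrete. Suppose, as an assumption on our part (the thesis's exact definition is given in Section 3.5), that entry (i, j) of a section's transformation matrix is the ratio of subsection j's to subsection i's average standard deviation. Since standard deviations are non-negative, every entry is non-negative, so an artificial change can never flip the sign of the observed change:

```python
import numpy as np

def transformation_matrix(subsection_stds):
    """Hypothetical form: T[i, j] = std_j / std_i, the ratio of the
    subsections' average standard deviations.  All entries are non-negative
    because a standard deviation cannot, by definition, be negative."""
    s = np.asarray(subsection_stds, dtype=float)
    return s[None, :] / s[:, None]

def artificial_changes(T, observed_change, source):
    """Propagate one observed relative change in subsection `source` to all
    subsections of its section; the sign is always preserved."""
    return observed_change * T[source]
```

With standard deviations (0.02, 0.01), a +10% observed change in subsection 0 yields a +5% artificial change in subsection 1, even if that subsection actually moved down, which is exactly the failure mode of the example above.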

5.4 Future studies

A possible extension of our work is to examine the Intraday Calibration Model for other underlying assets to confirm whether the model can be used independently of the underlying asset.

When constructing the volatility surfaces, Section 3.3, the raw SVI parameterization was used. Many studies in this field have criticized the simplicity of this method, which argues for using a more advanced method, such as the SSVI mentioned in [17]. For part two of the analysis of the volatility surface, Section 3.4.2, it might have been possible to use a more sophisticated approach to determine the magnitude of change, for example a generalized autoregressive conditional heteroskedasticity model, instead of using the subsections’ average standard deviation. This was not possible due to limited resources and time, but is something that we urge further studies to explore.


An aspect that we would have liked to investigate is the possibility of incorporating the options’ trade volumes in the model. This was not possible for us to achieve since the available data did not include trade volumes. The Intraday Calibration Model should also be examined on true market data. We have developed the model to handle true market data, but its performance in such a scenario has not been verified.

Also, we would like future work to explore the possibility of using insights from one underlying asset’s volatility surface on another underlying asset. For example, is it possible to analyze the volatility surface of a stock index, such as the S&P 500, and use the insights to model the volatility surface of a stock included in the index, such as Apple Inc.?


References

[1] Jesper Graa Andreasen and Brian Norsk Huge. ”Volatility Interpolation”. In: Econometrics: Econometric & Statistical Methods - Special Topics eJournal (2010).

[2] Tomas Björk. Arbitrage Theory in Continuous Time. Oxford University Press Inc., New York, 2009. isbn: 978-0-19-957474-2.

[3] Menachem Brenner and Marti G. Subrahmanyam. ”A Simple Formula to Compute the Implied Standard Deviation”. In: Financial Analysts Journal 44.5 (1988), pp. 80-83. doi: 10.2469/faj.v44.n5.80.

[4] The London Metal Exchange. Options Contract Specifications. url: https://www.lme.com/en-GB/Metals/Non-ferrous/Copper/Options (accessed: 2020-03-23).

[5] Matthias R. Fengler. ”Option Data and Modeling BSM Implied Volatility”. In: Handbook of Computational Finance. Springer-Verlag, Berlin, Heidelberg, 2012, pp. 117-142.

[6] Jim Gatheral. The Volatility Surface: A Practitioner’s Guide. John Wiley & Sons, Inc., Hoboken, New Jersey, 2006. isbn: 978-0-471-79251-2.

[7] Jim Gatheral and Antoine Jacquier. ”Arbitrage-Free SVI Volatility Surfaces”. In: Quantitative Finance 14.1 (2014), pp. 59-71. doi: 10.2139/ssrn.2033323.

[8] Martin B. Haugh and Leonid Kogan. ”Pricing American Options: A Duality Approach”. In: Operations Research 52.2 (2004), pp. 258-270.

[9] Stefano Herzel. ”Arbitrage Opportunities on Derivatives: A Linear Programming Approach”. In: Dynamics of Continuous, Discrete and Impulsive Systems Series B: Applications and Algorithms 12 (Aug. 2005).

[10] Cristian Homescu. ”Implied Volatility Surface: Construction Methodologies and Characteristics”. In: Risk Management eJournal (2011).

[11] CME Group Inc. Crude Oil Option Contract Specs. url: https://www.cmegroup.com/trading/energy/crude-oil/light-sweet-crude_contractSpecs_options.html (accessed: 2020-04-21).

[12] Jim Gatheral. A Parsimonious Arbitrage-Free Implied Volatility Parameterization with Application to the Valuation of Volatility Derivatives. Merrill Lynch, May 26, 2004. url: http://faculty.baruch.cuny.edu/jgatheral/madrid2004.pdf (visited on 02/24/2019).

[13] Jonas Nylén. Thesis Proposal: Intraday Volatility Surface Calibration. Nasdaq Umeå Office.

[14] Riccardo Rebonato. Volatility and Correlation, 2nd Edition. John Wiley & Sons Ltd, Chichester, West Sussex, England, 2004. isbn: 0-470-09139-8.


[15] L. C. G. Rogers and M. R. Tehranchi. ”Can the Implied Volatility Surface Move by Parallel Shifts?” In: Finance & Stochastics 14.2 (2010), pp. 235-248. issn: 09492984. url: http://proxy.ub.umu.se/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=buh&AN=48537849&site=ehost-live&scope=site.

[16] Michael Roper. Arbitrage Free Implied Volatility Surfaces. School of Mathematics and Statistics, The University of Sydney, Mar. 2010.

[17] Adam Öhman. ”The Calibrated SSVI Method - Implied Volatility Surface Construction”. MA thesis. Stockholm, Sweden: KTH Royal Institute of Technology, 2019.
