Value at Risk and Expected Shortfall: Traditional Measures and Extreme Value Theory Enhancements


VALUE AT RISK AND EXPECTED SHORTFALL: TRADITIONAL MEASURES AND EXTREME VALUE THEORY ENHANCEMENTS WITH A SOUTH AFRICAN MARKET APPLICATION

by Anelda Dicks

Assignment presented at the University of Stellenbosch in partial fulfilment of the requirements for the degree of Master of Commerce

Department of Statistics and Actuarial Science, University of Stellenbosch, Private Bag X1, 7602 Matieland, South Africa

Supervisors: Prof. W.J. Conradie and Prof. T. de Wet

December 2013

Stellenbosch University, http://scholar.sun.ac.za
Copyright © 2013 University of Stellenbosch. All rights reserved.

Declaration

I, the undersigned, hereby declare that the work contained in this assignment is my own original work and that I have not previously in its entirety or in part submitted it at any university for a degree.

Signature: ………………………………. A. Dicks
Date: ……………………………………..

Abstract

Accurate estimation of Value at Risk (VaR) and Expected Shortfall (ES) is critical in the management of extreme market risks. These risks occur with small probability, but their financial impact can be large. Traditional models for estimating VaR and ES are investigated. Following usual practice, 99% 10-day VaR and ES measures are calculated. A comprehensive theoretical background is first provided, after which the models are applied to the Africa Financials Index from 29/01/1996 to 30/04/2013. The models considered include independent, identically distributed (i.i.d.) models and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) stochastic volatility models. Extreme Value Theory (EVT) models that focus specifically on extreme market returns are also investigated. For this, the Peaks Over Threshold (POT) approach to EVT is followed.
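Under the i.i.d. normal model listed above, the traditional measures have closed forms. A minimal sketch of the 99% one-day VaR and ES and their square-root-of-time scaling to ten days, assuming an illustrative daily mean and volatility rather than values estimated from the Africa Financials Index:

```python
import math

def inv_norm_cdf(p, lo=-10.0, hi=10.0):
    """Standard normal quantile via bisection on the erf-based CDF."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def normal_var_es(mu, sigma, alpha=0.01):
    """One-day VaR and ES (as positive loss numbers) under i.i.d. normal returns."""
    z = inv_norm_cdf(1.0 - alpha)                    # about 2.326 for alpha = 0.01
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    var = sigma * z - mu                             # 99% VaR
    es = sigma * phi / alpha - mu                    # 99% ES: average loss beyond VaR
    return var, es

# Illustrative daily parameters (assumed, not estimated from the index).
mu, sigma = 0.0003, 0.015
var1, es1 = normal_var_es(mu, sigma)
# Square-root-of-time rule, applied to the whole measure for simplicity.
var10, es10 = var1 * math.sqrt(10), es1 * math.sqrt(10)
```

Under normality ES always exceeds VaR at the same level; the square-root rule is exact only for i.i.d. zero-mean normal returns, which is one reason the thesis evaluates several scaling methods.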
For the calculation of VaR, various methods for scaling from one day to ten days are considered and their performance is evaluated. The GARCH models fail to converge during periods of extreme returns; during these periods, EVT forecast results may be used. As a novel approach, this study considers the augmentation of the GARCH models with EVT forecasts. The two-step procedure of pre-filtering with a GARCH model and then applying EVT, as suggested by McNeil (1999), is also investigated. This study identifies some of the practical issues in model fitting. It is shown that no single forecasting model is universally optimal and that the choice depends on the nature of the data. For this data series, the best approach was to augment the GARCH stochastic volatility models with EVT forecasts during the periods in which the former do not converge. Model performance is judged by comparing the actual number of VaR and ES violations with the expected number, where the expected number is taken as the number of return observations over the entire sample period, multiplied by 0.01 for the 99% VaR and ES calculations.

Opsomming (Afrikaans abstract, translated)

Accurate estimation of Value at Risk and Expected Shortfall is critical for the management of extreme market risks. These risks occur with small probability, but the financial impact is potentially large. Traditional models for estimating Value at Risk and Expected Shortfall are investigated. In line with common practice, 99% 10-day measures are calculated. A comprehensive theoretical background is first given, after which the models are applied to the Africa Financials Index from 29/01/1996 to 30/04/2013. The models considered include independent, identically distributed models and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) stochastic volatility models. Extreme Value Theory models, which focus specifically on extreme market returns, are also investigated. In this regard the Peaks Over Threshold (POT) approach to Extreme Value Theory is followed. For the calculation of Value at Risk, various methods for scaling from one day to ten days are considered and the performance of each is evaluated. The GARCH models do not converge during periods of extreme returns; during these periods, Extreme Value Theory models may be used. As a new approach, this study considers augmenting the GARCH models with Extreme Value Theory forecasts. The so-called two-step procedure, which involves pre-filtering with a GARCH model followed by the application of Extreme Value Theory (as proposed by McNeil, 1999), is also investigated. This study identifies some of the practical problems in model fitting. It is shown that no single forecasting model is universally optimal and that the choice of model depends on the nature of the data. For the data series used in this study, the best approach was to augment the GARCH stochastic volatility models with Extreme Value Theory forecasts where the former do not converge. Model performance is judged by comparing the actual number of Value at Risk and Expected Shortfall violations with the expected number, taken as the number of returns observed over the entire sample period, multiplied by 0.01 for the 99% calculations.

Acknowledgements

The author thanks the following people for their contribution towards this study:

• Prof. Tertius de Wet, my co-study leader, for theoretical assistance and proofreading, as well as for moral support and encouragement throughout the research period.
Thank you for unforgettable study years and valuable life insights.
• Prof. Willie Conradie, co-study leader, for research suggestions, meticulous proofreading and guidance during my research. Your passion for financial risk management is inspiring. Thank you.
• Prof. N.J. le Roux for advice on plots in R.
• My family for their patience and emotional support.
• My friends for all their interest and for making this journey a pleasant one.

Contents

Declaration
Abstract
Opsomming
Acknowledgements
Contents
List of Abbreviations
List of Notation
1. Introduction
2. Brief Literature Review of Extreme Value Theory Estimation Approaches
2.1. General results
2.2. Alternatives to the GPD
2.3. Empirical quantile approach
2.4. Semi-nonparametric EVT approach
2.5. Alternative to dynamic back-testing
3. Traditional VaR and ES measures
3.1. Basic definitions
3.2. Comparison of VaR and ES
3.3. Models for independent, identically distributed returns
3.3.1. Normal model
3.3.2. Student t model
3.3.3. The problem with the i.i.d. assumption
3.4. The GARCH conditional model
3.4.1. Model description
3.4.2. Maximum likelihood estimation
3.4.2.1. Normal model
3.4.2.2. Student t model
3.4.3. Forecasts and long term volatility
3.4.3.1. Long term volatility
3.4.3.2. One day ahead forecasts
3.4.3.3. Term structure of volatility forecasts
3.4.3.4. Convergence of forecasts to long term volatility
3.5. Asymmetric GARCH conditional models
3.5.1. The rationale for asymmetric models
3.5.2. A-GARCH and GJR-GARCH
3.5.2.1. Model definition, long term variance and forecasts
3.5.2.2. Estimation
3.5.3. E-GARCH
3.5.3.1. Model definition
3.5.3.2. Properties
3.5.3.3. Long term variance
3.5.3.4. Estimation
3.5.3.5. One day forecasts
3.6. Summary
4. Traditional EVT
4.1. Introduction
4.2. Method of Block Maxima
4.3. Peaks Over Threshold (POT) approach
4.3.1. Theoretical motivation for using the GPD
4.3.2. Generalised Pareto Distribution (GPD)
4.3.2.1. Parameter estimation
4.4. Fréchet class of distributions
4.4.1. Basic definitions
4.4.2. The extreme value index
4.4.3. Threshold selection and exploratory data analysis
4.4.3.1. Quantile-Quantile (QQ) plots
4.4.3.2. Hill plots
4.4.3.3. Mean excess plots
4.5. Quantile estimation
4.5.1. First order estimation
4.5.1.1. The Weissman estimator
4.5.1.2. The probability approach estimator
4.5.2. Second order refinements
4.5.2.1. Quantile view
4.5.2.2. Probability view
4.6. Derivation of VaR and ES using the POT approach
4.6.1. Confidence intervals for VaR and ES
4.7. Limitations of EVT
4.8. Scaling of VaR and ES measures
4.8.1. The square root of time rule
4.8.2. The alpha root rule
4.8.3. Empirically established scaling laws
4.9. Summary
5. Combining EVT with stochastic volatility models
5.1. Two step procedure
5.3. h-day time horizons
6. Practical issues in model fitting
7. The data
7.1. Description of the data
8. Analysis of the Africa Financials Index Data
8.1. Schematic overview
8.2. Independent identically distributed models
8.3. GARCH models
8.3.1. Estimation results
8.3.2. Analysis of the absolute exceedances
8.4. Extreme Value Analysis
8.4.1. Exploratory analysis for the first data window
8.4.2. VaR and ES over the whole data series
8.4.2.1. One day VaR and ES estimates
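The model check described in the abstract, comparing the actual number of VaR violations with the expected number (observations multiplied by 0.01 at the 99% level), can be sketched as follows; the simulated returns and the constant VaR forecast are illustrative assumptions, not the thesis's data:

```python
import numpy as np

def count_var_violations(returns, var_forecasts):
    """Count days on which the realised loss exceeds the forecast VaR.

    Both arrays are aligned by date; VaR is expressed as a positive loss number.
    """
    returns = np.asarray(returns)
    var_forecasts = np.asarray(var_forecasts)
    return int(np.sum(-returns > var_forecasts))

# Illustrative check on simulated data (assumed parameters, fixed seed).
rng = np.random.default_rng(42)
n = 4000
returns = rng.normal(0.0, 0.015, size=n)
var99 = np.full(n, 0.015 * 2.3263)      # constant 99% normal VaR forecast
actual = count_var_violations(returns, var99)
expected = n * 0.01                     # the criterion used in the study
```

A model is judged adequate when the actual count is close to the expected one; a large gap in either direction indicates under- or over-estimation of risk.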
Recommended publications
  • Back-Testing of Expected Shortfall: Main Challenges and Methodologies
    Back-testing of Expected Shortfall: Main challenges and methodologies. By Leonard Brie, with Benoit Genest and Matthieu Arsac. Global Research & Analytics Dept., Chappuis Halder & Co., 2018. Many collaborators from Chappuis Halder & Co. were involved in the writing of, and reflection around, this paper; special thanks to Claire Poinsignon, Mahdi Kallel, Mikaël Benizri and Julien Desnoyers-Chehade.

    Executive Summary: In the context of an ever-changing regulatory environment over recent years, banks have witnessed the drafting and publication of several regulatory guidelines and requirements intended to frame and structure their internal risk management. Among these guidelines, one has been specifically designed for the risk measurement of market activities. In January 2016, the Basel Committee on Banking Supervision (BCBS) published the Fundamental Review of the Trading Book (FRTB). Among the many evolutions discussed there, the BCBS presents the technical context in which the potential loss estimation has changed from a Value-at-Risk (VaR) computation to an Expected Shortfall (ES) evaluation. The many advantages of an ES measure need no demonstration; however, the measure is also known for its major drawback: the difficulty of back-testing it. Therefore, after recalling the context around VaR and ES models, this white paper reviews ES back-testing findings and insights across several methodologies, drawn either from the latest publications or from work developed by the Global Research & Analytics (GRA) team of Chappuis Halder & Co.
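One simple diagnostic in this spirit compares the average realised loss on VaR-violation days with the forecast ES; if realised exceedances are systematically larger, the tail is underestimated. A sketch, where the fat-tailed sample and the thin-tailed normal forecasts are illustrative assumptions rather than the paper's methodology:

```python
import numpy as np

def es_exceedance_check(returns, var_forecasts, es_forecasts):
    """Compare the average realised loss beyond VaR with the forecast ES.

    Returns (mean excess loss, mean forecast ES on violation days); a mean
    loss well above the forecast ES suggests the tail is underestimated.
    """
    losses = -np.asarray(returns)
    var_forecasts = np.asarray(var_forecasts)
    es_forecasts = np.asarray(es_forecasts)
    hits = losses > var_forecasts
    if not hits.any():
        return 0.0, 0.0
    return float(losses[hits].mean()), float(es_forecasts[hits].mean())

rng = np.random.default_rng(7)
r = rng.standard_t(df=4, size=5000) * 0.01      # fat-tailed returns (assumed)
var99 = np.full(r.size, 0.01 * 2.3263)          # thin-tailed normal 99% VaR forecast
es99 = np.full(r.size, 0.01 * 2.665)            # matching normal 99% ES forecast
mean_loss, mean_es = es_exceedance_check(r, var99, es99)
```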
  • Incorporating Extreme Events Into Risk Measurement
    Lecture notes on risk management, public policy, and the financial system: Incorporating extreme events into risk measurement. Allan M. Malz, Columbia University. © 2021 Allan M. Malz. Last updated: July 25, 2021.

    Outline: stress testing and scenario analysis; expected shortfall; extreme value theory.

    What are stress tests? Stress tests analyze performance under extreme loss scenarios; they are a form of heuristic portfolio analysis. Steps in carrying out a stress test: 1. Determine appropriate scenarios. 2. Calculate shocks to risk factors in each scenario. 3. Value the portfolio in each scenario. Objectives of stress testing: address tail risk; reduce model risk by reducing reliance on models; "know the book", since stress tests can reveal vulnerabilities in specific positions or groups of positions. Criteria for appropriate stress scenarios: they should be tailored to the firm's specific key vulnerabilities, should avoid assumptions that favor the firm (e.g. competitive advantages in a crisis), and should be extreme but not implausible. Approaches to formulating stress scenarios: historical
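The three stress-testing steps listed above can be sketched as a toy revaluation loop; the scenario shocks and the linear portfolio sensitivities below are entirely hypothetical, and a real stress test would revalue positions fully rather than linearly:

```python
# Step 1: scenarios as named shocks to risk factors (hypothetical values).
scenarios = {
    "equity_crash": {"equity": -0.30, "credit_spread": 0.02, "fx": -0.10},
    "rates_shock":  {"equity": -0.05, "credit_spread": 0.01, "fx": 0.05},
}

# Portfolio sensitivities: P&L per unit move in each factor (hypothetical).
sensitivities = {"equity": 1_000_000, "credit_spread": -20_000_000, "fx": 500_000}

def stress_pnl(shocks, sens):
    """Steps 2-3: apply the factor shocks and revalue the (linear) portfolio."""
    return sum(sens[f] * shock for f, shock in shocks.items())

results = {name: stress_pnl(shocks, sensitivities) for name, shocks in scenarios.items()}
```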
  • Expected Shortfall
    REPRINTED FROM Cutting edge, Risk.net, December 2014. Back-testing expected shortfall. Cutting edge: Introduction. End of the back-test quest? Ever since regulators suggested replacing value-at-risk with expected shortfall, the industry has been debating how, and whether, it can be back-tested. Quants at MSCI are proposing three methods. Nazneen Sherif introduces this month's technical articles. From the start, expected shortfall has suffered in comparison with one of the key advantages of the measure it is supposed to be replacing: it cannot be back-tested, critics claimed, while tests of value-at-risk are simple and intuitive. Regulators have ploughed on regardless. Expected shortfall has been endorsed as VaR's successor in two consultation papers on the Fundamental Review of the Trading Book because of its supposed benefits as a measure of tail risk. It's too soon to break out the champagne, however. The trading book review also attempts to capture liquidity risks by introducing a spread of different time horizons for individual risk factors, which would sink any attempt to back-test, Acerbi warns: "Back-testing any measure, including VAR, on asynchronous time horizons spoils a fundamental assumption in back-testing: time independence of different observations." The widely contested solution to back-testing difficulties is to perform capital calculations using expected shortfall, and then to back-test using VaR. This means the tail is left untested, an outcome regulators concede looks odd (www.risk.net/2375204).
  • On Estimation of Loss Distributions and Risk Measures
    ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA, Volume 16, Number 1, 2012. Available online at www.math.ut.ee/acta/. On estimation of loss distributions and risk measures. Meelis Käärik and Anastassia Žegulova. Abstract. The estimation of a certain loss distribution and the analysis of its properties is a key issue in several financial mathematical and actuarial applications. It is common to apply the tools of extreme value theory and the generalized Pareto distribution in problems related to heavy-tailed data. Our main goal is to study third party liability claims data obtained from the Estonian Traffic Insurance Fund (ETIF). The data are quite typical for insurance claims, containing very many observations and being heavy-tailed. In our approach the fitting consists of two parts: for the main part of the distribution we use a lognormal fit (which was the most suitable based on our previous studies) and a generalized Pareto distribution is used for the tail. The main emphasis of the fitting techniques is on proper threshold selection. We seek stability of parameter estimates and study the behaviour of risk measures over a wide range of thresholds. Two related lemmas are proved. 1. Introduction. The estimation of loss distributions has several practical and theoretical aspects, the first of them being the choice of theoretical candidate distributions. There are a few intuitive choices, such as the lognormal, gamma, log-gamma, Weibull and Pareto distributions, but it is not rare that the mentioned distributions do not fit very well. This work is a follow-up to our preliminary research (Käärik and Umbleja, 2010, 2011), where we established that the lognormal distribution had the best fit among the candidates.
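The GPD tail fit described here leads to standard closed-form tail risk measures (in the usual POT notation: threshold u, GPD scale beta and shape xi, and n_u of n observations exceeding u). A sketch with assumed parameter values rather than the paper's ETIF estimates:

```python
def gpd_var(u, beta, xi, n, n_u, p):
    """POT quantile estimator: VaR at level p implied by a GPD fitted
    above threshold u, with n_u of n observations exceeding u."""
    return u + (beta / xi) * (((n / n_u) * (1.0 - p)) ** (-xi) - 1.0)

def gpd_es(u, beta, xi, n, n_u, p):
    """Expected Shortfall implied by the same GPD tail (requires xi < 1)."""
    v = gpd_var(u, beta, xi, n, n_u, p)
    return v / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)

# Assumed fit: threshold at the 95th percentile of losses, mildly heavy tail.
var99 = gpd_var(u=0.02, beta=0.008, xi=0.2, n=4000, n_u=200, p=0.99)
es99 = gpd_es(u=0.02, beta=0.008, xi=0.2, n=4000, n_u=200, p=0.99)
```

Stability of these outputs as u varies is exactly the threshold-selection criterion the paper emphasises.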
  • Measuring Systemic Risk∗
    Measuring Systemic Risk. Viral V. Acharya, Lasse H. Pedersen, Thomas Philippon, and Matthew Richardson. First version: March 10, 2010. This version: July, 2016. Abstract. We present an economic model of systemic risk in which undercapitalization of the financial sector as a whole is assumed to harm the real economy, leading to a systemic risk externality. Each financial institution's contribution to systemic risk can be measured as its systemic expected shortfall (SES), that is, its propensity to be undercapitalized when the system as a whole is undercapitalized. SES increases in the institution's leverage and its marginal expected shortfall (MES), that is, its losses in the tail of the system's loss distribution. We demonstrate empirically the ability of components of SES to predict emerging systemic risk during the financial crisis of 2007-2009. We would like to thank Rob Engle for many useful discussions. We are grateful to Christian Brownlees, Farhang Farazmand, Hanh Le and Tianyue Ruan for excellent research assistance. We also received useful comments from Tobias Adrian, Mark Carey, Matthias Drehman, Dale Gray and Jabonn Kim (discussants), Andrew Karolyi (editor), and seminar participants at several central banks and universities where the current paper and related systemic risk rankings at vlab.stern.nyu.edu/welcome/risk have been presented. Pedersen gratefully acknowledges support from the European Research Council (ERC grant no. 312417) and the FRIC Center for Financial Frictions (grant no. DNRF102). Acharya, Philippon, and Richardson are at New York University, Stern School of Business, 44 West 4th St., New York, NY 10012; Pedersen is at Copenhagen Business School, New York University, AQR Capital Management, and CEPR.
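MES as defined above has a simple empirical counterpart: the firm's average return on the days when the system's return falls in its worst alpha-fraction. A sketch on simulated, correlated returns; the beta and volatilities are assumed, not the paper's estimates:

```python
import numpy as np

def marginal_expected_shortfall(firm_ret, system_ret, alpha=0.05):
    """Average firm return on days when the system return is in its worst
    alpha-fraction of days; a negative value indicates tail losses."""
    firm_ret = np.asarray(firm_ret)
    system_ret = np.asarray(system_ret)
    cutoff = np.quantile(system_ret, alpha)
    return float(firm_ret[system_ret <= cutoff].mean())

rng = np.random.default_rng(1)
n = 2500
system = rng.normal(0.0, 0.01, n)
firm = 1.2 * system + rng.normal(0.0, 0.005, n)   # assumed beta of 1.2 to the system
mes = marginal_expected_shortfall(firm, system)
```

With a beta of 1.2, the firm's MES is roughly 1.2 times the system's own tail mean, which is what makes MES a useful cross-sectional ranking device.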
  • Comparative Analyses of Expected Shortfall and Value-At-Risk Under Market Stress
    Comparative analyses of expected shortfall and value-at-risk under market stress. Yasuhiro Yamai and Toshinao Yoshiba, Bank of Japan. Abstract. In this paper, we compare value-at-risk (VaR) and expected shortfall under market stress. Assuming that the multivariate extreme value distribution represents asset returns under market stress, we simulate asset returns with this distribution. With these simulated asset returns, we examine whether market stress affects the properties of VaR and expected shortfall. Our findings are as follows. First, VaR and expected shortfall may underestimate the risk of securities with fat-tailed properties and a high potential for large losses. Second, VaR and expected shortfall may both disregard the tail dependence of asset returns. Third, expected shortfall has less of a problem in disregarding the fat tails and the tail dependence than VaR does. 1. Introduction. It is a well-known fact that value-at-risk (VaR) models do not work under market stress. VaR models are usually based on normal asset returns and do not work under extreme price fluctuations. A case in point is the financial market crisis of autumn 1998. Concerning this crisis, CGFS (1999) notes that "a large majority of interviewees admitted that last autumn's events were in the 'tails' of distributions and that VaR models were useless for measuring and monitoring market risk". Our question is this: Is this a problem of the estimation methods, or of VaR as a risk measure? The estimation methods used for standard VaR models have problems measuring extreme price movements: they assume that the asset returns follow a normal distribution.
  • Fitting a Distribution to Value-at-Risk and Expected Shortfall, with an Application to Covered Bonds (arXiv:1505.07484v2)
    Fitting a distribution to Value-at-Risk and Expected Shortfall, with an application to covered bonds. Dirk Tasche. Covered bonds are a specific example of senior secured debt. If the issuer of the bonds defaults, the proceeds of the assets in the cover pool are used for their debt service. If in this situation the cover pool proceeds do not suffice for the debt service, the creditors of the bonds have recourse to the issuer's assets and their claims are pari passu with the claims of the creditors of senior unsecured debt. Historically, covered bonds have been very safe investments: during their more than two hundred years of existence, investors never suffered losses due to missed payments from covered bonds. From a risk management perspective, modelling covered bond losses is therefore mainly of interest for estimating the impact that the asset encumbrance by the cover pool has on the loss characteristics of the issuer's senior unsecured debt. We explore one-period structural modelling approaches for covered bonds and senior unsecured debt losses with one and two asset value variables respectively. Obviously, two-asset models with separate values for the cover pool and the issuer's remaining portfolio allow for more realistic modelling. However, we demonstrate that exact calibration of such models may be impossible. We also investigate a one-asset model in which the riskiness of the cover pool is reflected by a risk-based adjustment of the encumbrance ratio of the issuer's assets. Keywords: Covered bonds, expected loss, asset value distribution, quantile, expected shortfall, method of moments, two-parameter distribution family.
  • Comparative Analyses of Expected Shortfall and Value-At-Risk: Their Estimation Error, Decomposition, and Optimization
    MONETARY AND ECONOMIC STUDIES/JANUARY 2002. Comparative Analyses of Expected Shortfall and Value-at-Risk: Their Estimation Error, Decomposition, and Optimization. Yasuhiro Yamai and Toshinao Yoshiba. We compare expected shortfall with value-at-risk (VaR) in three aspects: estimation errors, decomposition into risk factors, and optimization. We describe the advantages and the disadvantages of expected shortfall over VaR. We show that expected shortfall is easily decomposed and optimized while VaR is not. We also show that expected shortfall needs a larger sample size than VaR for the same level of accuracy. Key words: Expected shortfall; Value-at-risk; Optimization. Research Division I, Institute for Monetary and Economic Studies, Bank of Japan (E-mail: [email protected], [email protected]). The authors would like to thank Tsukasa Yamashita (BNP Paribas Tokyo) for his helpful comments. I. Introduction. Artzner et al. (1997) proposed the use of expected shortfall to alleviate the problems inherent in value-at-risk (VaR). Expected shortfall considers loss beyond the VaR level and is shown to be sub-additive, while VaR disregards loss beyond the percentile and is not sub-additive. In this paper, we compare expected shortfall with VaR in three aspects: their estimation errors, their decomposition into risk factors, and their optimization. These comparative analyses can help risk managers consider the usefulness of expected shortfall as a risk management tool. The rest of the paper is organized as follows. Section II gives our definitions and concepts of VaR and expected shortfall. Section III examines the estimation error of expected shortfall using Monte Carlo simulations.
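The paper's finding that expected shortfall needs a larger sample than VaR for the same accuracy can be illustrated by Monte Carlo, comparing the dispersion of the two historical estimators on heavy-tailed data; the Student-t sample and sizes below are illustrative assumptions:

```python
import numpy as np

def empirical_var_es(losses, alpha=0.01):
    """Historical 99% VaR (a loss quantile) and ES (mean loss beyond it)."""
    q = np.quantile(losses, 1.0 - alpha)
    tail = losses[losses >= q]
    return q, tail.mean()

rng = np.random.default_rng(0)
n, reps = 250, 2000
var_est, es_est = [], []
for _ in range(reps):
    losses = rng.standard_t(df=3, size=n)      # heavy-tailed loss sample
    v, e = empirical_var_es(losses)
    var_est.append(v)
    es_est.append(e)

# The ES estimator averages the few most extreme points, so it is noisier.
sd_var, sd_es = np.std(var_est), np.std(es_est)
```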
  • A Fast, Accurate Method for Value-At-Risk and Expected Shortfall
    Econometrics 2014, 2, 98-122; doi:10.3390/econometrics2020098. OPEN ACCESS, ISSN 2225-1146, www.mdpi.com/journal/econometrics. Article: A Fast, Accurate Method for Value-at-Risk and Expected Shortfall. Jochen Krause and Marc S. Paolella. Department of Banking and Finance, University of Zurich, CH-8032 Zurich, Switzerland; Swiss Finance Institute, CH-8006 Zurich, Switzerland. Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +41-44-634-45-84. Received: 5 June 2014; in revised form: 21 June 2014 / Accepted: 22 June 2014 / Published: 25 June 2014. Abstract: A fast method is developed for value-at-risk and expected shortfall prediction for univariate asset return time series exhibiting leptokurtosis, asymmetry and conditional heteroskedasticity. It is based on a GARCH-type process driven by noncentral t innovations. While the method involves the use of several shortcuts for speed, it performs admirably in terms of accuracy and actually outperforms highly competitive models. Most remarkably, this is the case also for sample sizes as small as 250. Keywords: GARCH; mixture-normal-GARCH; noncentral t; lookup table. JEL classifications: C51; C53; G11; G17. 1. Introduction. Value-at-risk (VaR) and, more recently, expected shortfall (ES) are fundamental risk measures. In May 2012, the Basel Committee on Banking Supervision announced its intention to replace VaR with ES in banks' internal models for determining regulatory capital requirements ([1] p. 3). Calculation of the ES requires first calculating the VaR, so accurate methods for VaR are still necessary. Numerous papers have proposed and tested models for VaR prediction, which amounts to delivering a prediction of a left-tail quantile of the h-step-ahead predictive distribution of a univariate time series of financial asset returns.
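A stripped-down version of the GARCH-based one-day-ahead VaR forecast at the heart of such methods, using a plain normal-GARCH(1,1) filter with assumed parameters in place of the paper's fitted noncentral-t model:

```python
import numpy as np

def garch11_next_sigma2(returns, omega, alpha, beta):
    """Filter the GARCH(1,1) variance recursion through the sample and
    return the one-day-ahead variance forecast."""
    sigma2 = omega / (1.0 - alpha - beta)          # start at long-run variance
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.012, 250)              # illustrative return sample
omega, alpha, beta = 1e-6, 0.08, 0.90              # assumed GARCH parameters
sigma_next = np.sqrt(garch11_next_sigma2(returns, omega, alpha, beta))
var99_next = 2.3263 * sigma_next                   # one-day 99% VaR, normal quantile
```

Replacing the 2.3263 normal quantile with a leptokurtic innovation quantile is precisely where methods like the paper's noncentral-t lookup table come in.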
  • In Defence of Var Risk Measurement and Backtesting in Times of Crisis 1 June, 2020
    In Defence of VaR: Risk measurement and Backtesting in times of Crisis. 1 June, 2020. Executive Summary. Since the Financial Crisis, Value at Risk (VaR) has been on the receiving end of a lot of criticism. It has been blamed for, amongst other things, not measuring risk accurately, allowing banks to get away with holding insufficient capital, creating over-reliance on a single model, and creating pro-cyclical positive feedback in financial markets leading to 'VaR shocks'. In this article we do not seek to defend VaR against all these claims. But we do seek to defend the idea of modelling risk, and of attempting to model risk accurately. We point out that no single model can meet mutually contradictory criteria, and we demonstrate that certain approaches to VaR modelling would at least provide accurate (as measured by backtesting) near-term risk estimates. We note that new market risk regulation, the Fundamental Review of the Trading Book (FRTB), could provide an opportunity for banks and regulators to treat capital VaR and risk management VaR sufficiently differently as to end up with measures that work for both rather than for neither. Contents: 1. Market Risk VaR and Regulatory Capital; 2. Can VaR accurately measure risk?; 3. Regulation and Risk Management; 4. Conclusion. Market Risk VaR was developed at JP Morgan in the wake of the 1987 crash, as a way to answer then chairman Dennis Weatherstone's question "how much might we lose on our trading portfolio by tomorrow's close?" VaR attempts to identify a quantile of loss for a given timeframe.
  • Extreme Risk, Value-At-Risk and Expected Shortfall in the Gold Market
    International Business & Economics Research Journal – January/February 2015, Volume 14, Number 1. Extreme Risk, Value-at-Risk and Expected Shortfall in the Gold Market. Knowledge Chinhamu, University of KwaZulu-Natal, South Africa; Chun-Kai Huang, University of Cape Town, South Africa; Chun-Sung Huang, University of Cape Town, South Africa; and Delson Chikobvu, University of Free State, South Africa. ABSTRACT. Extreme value theory (EVT) has been widely applied in fields such as hydrology and insurance. It is a tool used to reflect on probabilities associated with extreme, and thus rare, events. EVT is useful in modeling the impact of crashes or situations of extreme stress on investor portfolios. It describes the behavior of maxima or minima in a time series, i.e., the tails of a distribution. In this paper, we propose the use of the generalised Pareto distribution (GPD) to model extreme returns in the gold market. This method provides an effective means of estimating tail risk measures such as Value-at-Risk (VaR) and Expected Shortfall (ES). This is confirmed by various backtesting procedures. In particular, we utilize the Kupiec unconditional coverage test and the Christoffersen conditional coverage test for VaR backtesting, while the Bootstrap test is used for ES backtesting. The results indicate that the GPD is superior to the traditional Gaussian and Student's t models for VaR and ES estimation. Keywords: Gold Prices; Generalised Pareto Distribution; Value-At-Risk; Expected Shortfall; Kupiec; Christoffersen. INTRODUCTION. Risk measures are used primarily to safeguard a financial position against severe losses. To successfully model such tail-related risks, we need to find suitable techniques to measure and capture these extreme events.
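The Kupiec unconditional coverage test used here is a likelihood-ratio test on the VaR violation count; a sketch, with the observation counts chosen for illustration:

```python
import math

def kupiec_lr(n, x, p):
    """Kupiec unconditional coverage LR statistic for n observations,
    x VaR violations and nominal violation probability p.
    Asymptotically chi-squared with 1 degree of freedom under H0."""
    if x == 0:
        return -2.0 * n * math.log(1.0 - p)
    pi_hat = x / n
    log_h0 = (n - x) * math.log(1.0 - p) + x * math.log(p)
    log_h1 = (n - x) * math.log(1.0 - pi_hat) + x * math.log(pi_hat)
    return -2.0 * (log_h0 - log_h1)

# 99% VaR over 1000 days: 10 violations expected; reject at 5% if LR > 3.84.
lr_ok = kupiec_lr(1000, 10, 0.01)      # exactly the expected count
lr_bad = kupiec_lr(1000, 25, 0.01)     # far too many violations
```

The Christoffersen conditional coverage test extends this by additionally testing that violations are not clustered in time.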
  • A Joint Quantile and Expected Shortfall Regression Framework
    A Joint Quantile and Expected Shortfall Regression Framework. Timo Dimitriadis and Sebastian Bayer. University of Konstanz, Department of Economics, 78457 Konstanz, Germany. This Version: 07.08.2017. Abstract. We introduce a novel regression framework which simultaneously models the quantile and the Expected Shortfall (ES) of a response variable given a set of covariates. This regression is based on a strictly consistent loss function for the pair quantile and ES, which allows for M- and Z-estimation of the joint regression parameters. We show consistency and asymptotic normality for both estimators under weak regularity conditions. The underlying loss function depends on two specification functions, whose choice affects the properties of the resulting estimators. We find that the Z-estimator is numerically unstable and thus we rely on M-estimation of the model parameters. Extensive simulations verify the asymptotic properties and analyze the small-sample behavior of the M-estimator for different specification functions. This joint regression framework allows for various applications including estimating, forecasting, and backtesting ES, which is particularly relevant in light of the recent introduction of ES into the Basel Accords. Keywords: Expected Shortfall, Joint Elicitability, Joint Regression, M-estimation, Quantile Regression. 1. Introduction. Measuring and forecasting risks is essential for a variety of academic disciplines. For this purpose, risk measures, which are formally defined as a map (with certain properties) from a space of random variables to a real number, are applied to condense the complex nature of the involved risks to a single number (Artzner et al., 1999). In the context of financial risk measurement, to date the most commonly used risk measure is the Value-at-Risk (VaR), which is the α-quantile of the return distribution.
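A strictly consistent joint loss of the kind this framework builds on can be sketched with the FZ0 member of the Fissler-Ziegel family (as popularised by Patton, Ziegel and Chen): minimising its sample average jointly recovers the α-quantile and the ES. The grid search and the standard-normal sample below are illustrative assumptions, not the paper's regression setup:

```python
import numpy as np

def fz0_loss(y, q, e, alpha):
    """FZ0 joint loss for quantile forecast q and ES forecast e (both < 0)."""
    hit = (y <= q).astype(float)
    return -hit * (q - y) / (alpha * e) + q / e + np.log(-e) - 1.0

rng = np.random.default_rng(0)
y = rng.standard_normal(20000)
alpha = 0.05

# Crude grid minimisation of the average loss over feasible (q, e) pairs.
qs = np.linspace(-2.2, -1.2, 41)
es = np.linspace(-2.6, -1.7, 37)
best = min(
    (fz0_loss(y, q, e, alpha).mean(), q, e)
    for q in qs for e in es if e < q        # ES must lie below the quantile
)
_, q_hat, e_hat = best
# True values for N(0,1) at alpha = 0.05: quantile -1.645, ES about -2.063.
```

In the paper, this grid search is replaced by M-estimation of regression parameters that enter q and e linearly through covariates.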