ARCH/GARCH Models in Applied Financial Econometrics


ROBERT F. ENGLE, PhD
Michael Armellino Professorship in the Management of Financial Services, Leonard N. Stern School of Business, New York University

SERGIO M. FOCARDI
Partner, The Intertek Group

FRANK J. FABOZZI, PhD, CFA, CPA
Professor in the Practice of Finance, School of Management, Yale University

Contents: Review of Linear Regression and Autoregressive Models; ARCH/GARCH Models; Application to Value at Risk; Why ARCH/GARCH?; Generalizations of the ARCH/GARCH Models; Integration of First, Second, and Higher Moments; Generalizations to High-Frequency Data; Multivariate Extensions; Summary; References

Abstract: Volatility is a key parameter used in many financial applications, from derivatives valuation to asset management and risk management. Volatility measures the size of the errors made in modeling returns and other financial variables. It was discovered that, for vast classes of models, the average size of volatility is not constant but changes with time and is predictable. Autoregressive conditional heteroskedasticity (ARCH)/generalized autoregressive conditional heteroskedasticity (GARCH) models and stochastic volatility models are the main tools used to model and forecast volatility. Moving from single assets to portfolios made of multiple assets, we find that not only idiosyncratic volatilities but also correlations and covariances between assets are time varying and predictable. Multivariate ARCH/GARCH models and dynamic factor models, eventually in a Bayesian framework, are the basic tools used to forecast correlations and covariances.

Keywords: autoregressive conditional duration, ACD-GARCH, autoregressive conditional heteroskedasticity (ARCH), autoregressive models, conditional autoregressive value at risk (CAViaR), dynamic factor models, generalized autoregressive conditional heteroskedasticity (GARCH), exponential GARCH (EGARCH), F-GARCH, GARCH-M, heteroskedasticity, high-frequency data, homoskedasticity, integrated GARCH (IGARCH), MGARCH, threshold ARCH (TARCH), temporal aggregation, ultra-high-frequency data, value at risk (VaR), VEC, volatility

In this chapter we discuss the modeling of the time behavior of the uncertainty related to many econometric models when applied to financial data. Finance practitioners know that errors made in predicting markets are not of a constant magnitude. There are periods when unpredictable market fluctuations are larger and periods when they are smaller. This behavior, known as heteroskedasticity, refers to the fact that the size of market volatility tends to cluster in periods of high volatility and periods of low volatility. The discovery that it is possible to formalize and generalize this observation was a major breakthrough in econometrics. In fact, we can describe many economic and financial data with models that predict, simultaneously, the economic variables and the average magnitude of the squared prediction error.

In this chapter we show how the average error size can be modeled as an autoregressive process. Given their autoregressive nature, these models are called autoregressive conditional heteroskedasticity (ARCH) or generalized autoregressive conditional heteroskedasticity (GARCH). This discovery is particularly important in financial econometrics, where the error size is, in itself, a variable of great interest.

REVIEW OF LINEAR REGRESSION AND AUTOREGRESSIVE MODELS

Let's first discuss two examples of basic econometric models, the linear regression model and the autoregressive model, and illustrate the meaning of homoskedasticity or heteroskedasticity in each case.

The linear regression model is the workhorse of economic modeling. A univariate linear regression represents a proportionality relationship between two variables:

y = α + βx + ε

The preceding linear regression model states that the expectation of the variable y is β times the expectation of the variable x plus a constant α. The proportionality relationship between y and x is not exact but subject to an error ε. In standard regression theory, the error ε is assumed to have a zero mean and a constant standard deviation σ. The standard deviation is the square root of the variance, which is the expectation of the squared error: σ² = E(ε²). It is a positive number that measures the size of the error.

We call homoskedasticity the assumption that the expected size of the error is constant and does not depend on the size of the variable x. We call heteroskedasticity the assumption that the expected size of the error term is not constant.

The assumption of homoskedasticity is convenient from a mathematical point of view and is standard in regression theory. However, it is an assumption that must be verified empirically. In many cases, especially if the range of variables is large, the assumption of homoskedasticity might be unreasonable. For example, assuming a linear relationship between consumption and household income, we can expect that the size of the error depends on the size of household income, since high-income households have more freedom in the allocation of their income.
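As a concrete illustration of the two assumptions, the following short Python sketch (our illustrative example, not from the chapter; the coefficients and the income proxy are arbitrary) simulates a regression whose error size grows with x:

import numpy as np

rng = np.random.default_rng(0)

n = 1000
alpha, beta = 1.0, 0.5         # illustrative regression coefficients
x = rng.uniform(10, 100, n)    # stand-in for household income

# Homoskedastic errors: constant standard deviation, independent of x
eps_homo = rng.normal(0.0, 5.0, n)
# Heteroskedastic errors: standard deviation proportional to x
eps_hetero = rng.normal(0.0, 0.1 * x)

y = alpha + beta * x + eps_hetero   # regression with heteroskedastic errors

# The expected error size is constant in the first case but grows with x:
low, high = x < 55, x >= 55
print(eps_homo[low].std(), eps_homo[high].std())      # roughly equal
print(eps_hetero[low].std(), eps_hetero[high].std())  # clearly different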
In the preceding household-income example, the linear regression represents a cross-sectional model without any time dimension. However, in finance and economics in general, we deal primarily with time series, that is, sequences of observations at different moments of time. Let's call Xt the value of an economic time series at time t. Since the groundbreaking work of Haavelmo (1944), economic time series have been considered realizations of stochastic processes. That is, each point of an economic time series is considered to be an observation of a random variable.

We can look at a stochastic process as a sequence of variables characterized by joint-probability distributions for every finite set of different time points. In particular, we can consider the distribution ft of each variable Xt at each moment. Intuitively, we can visualize a stochastic process as a very large (infinite) number of paths. A process is called weakly stationary if all of its second moments are constant. In particular, this means that the mean and variance are constants, µt = µ and σt² = σ², that do not depend on the time t. A process is called strictly stationary if none of its finite distributions depends on time. A strictly stationary process is not necessarily weakly stationary, as its finite distributions, though time-independent, might have infinite moments.

The terms µt and σt² are the unconditional mean and variance of a process. In finance and economics, however, we are typically interested in making forecasts based on past and present information. Therefore, we consider the distribution ft2(x | It1) of the variable Xt2 at time t2 conditional on the information It1 known at time t1. Based on information available at time t − 1, It−1, we can also define the conditional mean and the conditional variance, (µt | It−1) and (σt² | It−1).

A process can be weakly stationary but have time-varying conditional variance. If the conditional mean is constant, then the unconditional variance is the unconditional expectation of the conditional variance. If the conditional mean is not constant, the unconditional variance is not equal to the unconditional expectation of the conditional variance; the difference is due to the dynamics of the conditional mean.
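Both statements are the two cases of the standard variance decomposition (the law of total variance), written here in the notation of this section as an added reasoning step:

Var(Xt) = E[Var(Xt | It−1)] + Var(E[Xt | It−1])

When the conditional mean E[Xt | It−1] is constant, the second term vanishes, so the unconditional variance equals the unconditional expectation of the conditional variance; when it is not, the second term contributes the variability of the conditional mean.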
In describing ARCH/GARCH behavior, we focus on the error process. In particular, we assume that the errors are an innovation process, that is, we assume that the conditional mean of the errors is zero. We write the error process as εt = σt zt, where σt is the conditional standard deviation and the zt terms are a sequence of independent, zero-mean, unit-variance, normally distributed variables. Under this assumption, the unconditional variance of the error process is the unconditional mean of the conditional variance. Note, however, that the unconditional variance of the process variable does not, in general, coincide with the unconditional variance of the error terms.

In financial and economic models, conditioning is often stated as regressions of the future values of the variables on the present and past values of the same variable. For example, if we assume that time is discrete, we can express conditioning as an autoregressive model:

Xt+1 = α0 + β0 Xt + ··· + βn Xt−n + εt+1

The error term εt+1 is conditional on the information It that, in this example, is represented by the present and the past n values of the variable X. The simplest autoregressive model is the random walk model of the logarithms of prices pt:

pt+1 = µ + pt + εt+1

In terms of returns, the random walk model is simply:

rt = ∆pt = µ + εt

A major breakthrough in econometric modeling was the discovery that, for many families of econometric models, […] Thus, the statement that ARCH models describe the time evolution of the variance of returns is true only if returns have a constant expectation.

ARCH/GARCH effects are important because they are very general. […]
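To see the error process εt = σt zt in action, the following Python sketch simulates it with an ARCH(1)-type conditional-variance recursion (the recursion is developed formally in the chapter's next section; the parameter values here are our illustrative assumptions):

import numpy as np

rng = np.random.default_rng(42)

T = 2000
omega, a1 = 0.1, 0.5   # illustrative ARCH(1) parameters; a1 < 1 keeps the variance finite

z = rng.standard_normal(T)              # independent N(0, 1) innovations z_t
eps = np.zeros(T)                       # error process eps_t = sigma_t * z_t
sigma2 = np.full(T, omega / (1 - a1))   # start from the unconditional variance

for t in range(1, T):
    sigma2[t] = omega + a1 * eps[t - 1] ** 2  # ARCH(1) conditional variance
    eps[t] = np.sqrt(sigma2[t]) * z[t]

# The unconditional variance of the errors matches the mean conditional variance:
print(eps.var(), sigma2.mean(), omega / (1 - a1))

def autocorr(x, lag=1):
    x = x - x.mean()
    return (x[:-lag] * x[lag:]).mean() / x.var()

# Volatility clustering: eps is serially uncorrelated, but eps**2 is not.
print(autocorr(eps), autocorr(eps ** 2))

The positive autocorrelation of the squared errors, alongside near-zero autocorrelation of the errors themselves, is the clustering behavior described at the start of the chapter.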