ARCH/GARCH Models in Applied Financial Econometrics
Recommended publications
-
Financial Econometrics Lecture Notes
Financial Econometrics Lecture Notes. Professor Doron Avramov, The Hebrew University of Jerusalem & Chinese University of Hong Kong. Introduction: why do we need a course in Financial Econometrics? Syllabus: Motivation. The past few decades have been characterized by an extraordinary growth in the use of quantitative methods in the analysis of various asset classes, be it equities, fixed income securities, commodities, or derivatives. In addition, financial economists have routinely been using advanced mathematical, statistical, and econometric techniques in a host of applications, including investment decisions, risk management, volatility modeling, interest rate modeling, and the list goes on. Syllabus: Objectives. This course attempts to provide a fairly deep understanding of such techniques. The purpose is twofold: to provide research tools in financial economics and to comprehend investment designs employed by practitioners. The course is intended for advanced master's and PhD-level students in finance and economics. Syllabus: Prerequisites. I will assume prior exposure to matrix algebra, distribution theory, Ordinary Least Squares, Maximum Likelihood Estimation, Method of Moments, and the Delta Method. I will also assume you have some skills in computer programming beyond Excel. MATLAB and R are the most recommended for this course. OCTAVE could be used as well, as it is free software and practically identical to MATLAB within the scope of the course. If you desire to use STATA, SAS, or other comparable tools, please consult with the TA. Syllabus: Grade Components. Assignments (36%): there will be two problem sets during the term. -
POISSON PROCESSES 1.1. The Rutherford-Chadwick-Ellis Experiment
POISSON PROCESSES 1. THE LAW OF SMALL NUMBERS 1.1. The Rutherford-Chadwick-Ellis Experiment. About 90 years ago Ernest Rutherford and his collaborators at the Cavendish Laboratory in Cambridge conducted a series of pathbreaking experiments on radioactive decay. In one of these, a radioactive substance was observed in N = 2608 time intervals of 7.5 seconds each, and the number of decay particles reaching a counter during each period was recorded. The table below shows the number N_k of these time periods in which exactly k decays were observed for k = 0, 1, 2, ..., 9, together with N p_k, where

p_k = (3.87)^k exp(-3.87) / k!

The parameter value 3.87 was chosen because it is the mean number of decays per period for Rutherford's data.

k    N_k   N p_k        k     N_k   N p_k
0     57    54.4        6     273   253.8
1    203   210.5        7     139   140.3
2    383   407.4        8      45    67.9
3    525   525.5        9      27    29.2
4    532   508.4      >=10     16    17.1
5    408   393.5

This is typical of what happens in many situations where counts of occurrences of some sort are recorded: the Poisson distribution often provides an accurate, sometimes remarkably accurate, fit. Why? 1.2. Poisson Approximation to the Binomial Distribution. The ubiquity of the Poisson distribution in nature stems in large part from its connection to the Binomial and Hypergeometric distributions. The Binomial(N, p) distribution is the distribution of the number of successes in N independent Bernoulli trials, each with success probability p. -
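The fit in this excerpt is easy to reproduce. A minimal sketch using only the counts and the mean 3.87 quoted above (variable names are my own):

```python
import math

# Rutherford-Chadwick-Ellis data: N_k = number of 7.5-second intervals
# (out of N = 2608) in which exactly k decays were observed.
N = 2608
observed = {0: 57, 1: 203, 2: 383, 3: 525, 4: 532,
            5: 408, 6: 273, 7: 139, 8: 45, 9: 27}
lam = 3.87  # mean number of decays per period for Rutherford's data

def poisson_pmf(k, lam):
    """p_k = lam^k * exp(-lam) / k!"""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Expected counts N * p_k next to the observed N_k.
for k, n_k in observed.items():
    print(f"k={k}  N_k={n_k:3d}  N*p_k={N * poisson_pmf(k, lam):6.1f}")
```

Running this reproduces the "N p_k" column of the table to one decimal place.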
Advanced Financial Econometrics
MFE 2013-14 Trinity Term | Advanced Financial Econometrics. Kevin Sheppard, Department of Economics, Oxford-Man Institute of Quantitative Finance. Course Site: http://www.kevinsheppard.com/Advanced_Econometrics Weblearn Site: https://weblearn.ox.ac.uk/portal/site/socsci/sbs/mfe2012_2013/ttafe

1 Aims, objectives and intended learning outcomes
1.1 Basic aims
This course aims to extend the core tools of Financial Econometrics I & II to some advanced forecasting problems. The first component covers techniques for working with many forecasting models, often referred to as data snooping. The second component covers techniques for handling many predictors, including the case where the number of predictors is much larger than the number of data points available.
1.2 Specific objectives
Part I (Week 1-4)
• Revisiting the bootstrap and extensions appropriate for time-series data
• Technical trading rules and large numbers of forecasts
• Formalized data-snooping-robust inference
• False-discovery rates
• Detecting sets of models that are statistically indistinguishable
Part II (Week 5-8)
• Dynamic Factor Models and Principal Component Analysis
• The Kalman Filter and the Expectation-Maximization Algorithm
• Partial Least Squares and the 3-pass Regression Filter
• Regularized Reduced Rank Regression
1.3 Intended learning outcomes
Following class attendance, successful completion of assigned readings, assignments, both theoretical and empirical, and recommended private study, students should be able to
• Detect superior performance using methods that are robust to data snooping
• Produce and evaluate forecasts for macro-financial time series in data-rich environments
2 Teaching resources
2.1 Lecturing
Lectures are provided by Kevin Sheppard. Kevin Sheppard is an Associate Professor and a fellow at Keble College. -
An Introduction to Financial Econometrics
An introduction to financial econometrics. Jianqing Fan, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544. November 14, 2004. What is financial econometrics? This simple question does not have a simple answer. The boundary of such an interdisciplinary area is always moot, and any attempt to give a formal definition is unlikely to be successful. Broadly speaking, financial econometrics studies quantitative problems arising from finance. It uses statistical techniques and economic theory to address a variety of problems from finance. These include building financial models, estimation and inference for financial models, volatility estimation, risk management, testing financial economics theory, capital asset pricing, derivative pricing, portfolio allocation, risk-adjusted returns, simulating financial systems, and hedging strategies, among others. Technological invention and trade globalization have brought us into a new era of financial markets. Over the last three decades, an enormous number of new financial products have been created to meet customers' demands. For example, to reduce the impact of fluctuations of currency exchange rates on a firm's finances, which makes its profit more predictable and competitive, a multinational corporation may decide to buy options on foreign exchange futures; to reduce the risk of price fluctuations of a commodity (e.g. lumber, corn, soybeans), a farmer may enter into futures contracts on the commodity; to reduce the risk of weather exposure, amusement parks (too hot or too cold reduces the number of visitors) and energy companies may decide to purchase financial derivatives based on temperature. An important milestone is that in the year 1973, the world's first options exchange opened in Chicago. -
Rescaling Marked Point Processes
Rescaling Marked Point Processes. David Vere-Jones (1), Victoria University of Wellington; Frederic Paik Schoenberg (2), University of California, Los Angeles. Abstract: In 1971, Meyer showed how one could use the compensator to rescale a multivariate point process, forming independent Poisson processes with intensity one. Meyer's result has been generalized to multi-dimensional point processes. Here, we explore generalization of Meyer's theorem to the case of marked point processes, where the mark space may be quite general. Assuming simplicity and the existence of a conditional intensity, we show that a marked point process can be transformed into a compound Poisson process with unit total rate and a fixed mark distribution. Key words: residual analysis, random time change, Meyer's theorem, model evaluation, intensity, compensator, Poisson process. (1) Department of Mathematics, Victoria University of Wellington, P.O. Box 196, Wellington, New Zealand. [email protected] (2) Department of Statistics, 8142 Math-Science Building, University of California, Los Angeles, 90095-1554. [email protected] 1 Introduction. Before other matters, both authors would like to express their appreciation to Daryl for his stimulating and forgiving company, and to wish him a long and fruitful continuation of his mathematical, musical, woodworking, and many other activities. In particular, it is a real pleasure for the first author to acknowledge his gratitude to Daryl for his hard work, good humour, generosity, and continuing friendship throughout the development of (innumerable draft versions and now even two editions of) their joint text on point processes. -
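The unmarked special case of the rescaling result described above is the familiar random time change: if t_1 < t_2 < ... are events of an inhomogeneous Poisson process with intensity lambda(t) and compensator Lambda(t), the rescaled times Lambda(t_i) form a unit-rate Poisson process. A numerical sketch; the intensity and the thinning simulator below are illustrative choices, not from the paper:

```python
import math
import random

def simulate_inhomogeneous(lam, lam_max, horizon, rng):
    """Lewis-Shedler thinning: candidate events at rate lam_max,
    each accepted with probability lam(t) / lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > horizon:
            return events
        if rng.random() < lam(t) / lam_max:
            events.append(t)

lam = lambda t: 1.0 + 0.5 * math.sin(t)        # illustrative intensity
Lam = lambda t: t + 0.5 * (1.0 - math.cos(t))  # its compensator (integral)

rng = random.Random(7)
times = simulate_inhomogeneous(lam, 1.5, 2000.0, rng)
rescaled = [Lam(t) for t in times]
gaps = [b - a for a, b in zip([0.0] + rescaled, rescaled)]
# After rescaling, the gaps should look like i.i.d. Exp(1) draws,
# so their sample mean should be close to 1.
```

This residual-analysis check (mean rescaled gap near 1, exponential-looking gaps) is the basic diagnostic the abstract's "random time change" keyword refers to.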
Lecture 26: Poisson Point Processes
Lecture 26: Poisson Point Processes. STAT205. Lecturer: Jim Pitman. Scribe: Ben Hough <[email protected]>. In this lecture, we consider a measure space (S, 𝒮, µ), where µ is a σ-finite measure (i.e. S may be written as a disjoint union of sets of finite µ-measure). 26.1 The Poisson Point Process. Definition 26.1. A Poisson Point Process (P.P.P.) with intensity µ is a collection of random variables N(A, ω), A ∈ 𝒮, ω ∈ Ω, defined on a probability space (Ω, ℱ, P) such that: 1. N(·, ω) is a counting measure on (S, 𝒮) for each ω ∈ Ω. 2. N(A, ·) is Poisson with mean µ(A): P(N(A) = k) = e^{-µ(A)} µ(A)^k / k! for all A ∈ 𝒮. 3. If A_1, A_2, ... are disjoint sets, then N(A_1, ·), N(A_2, ·), ... are independent random variables. Theorem 26.2. P.P.P.'s exist. Proof sketch: It's not enough to quote Kolmogorov's Extension Theorem. The only convincing argument is to give an explicit construction from sequences of independent random variables. We begin by considering the case µ(S) < ∞. 1. Take X_1, X_2, ... to be i.i.d. random variables such that P(X_i ∈ A) = µ(A)/µ(S). 2. Take N(S) to be a Poisson random variable with mean µ(S), independent of the X_i's. Assume all random variables are defined on the same probability space (Ω, ℱ, P). 3. Define N(A) = Σ_{i=1}^{N(S)} 1(X_i ∈ A) for all A ∈ 𝒮. Now verify that this N(A, ω) is a P.P.P. -
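The explicit construction in the proof sketch translates directly into code. A sketch for the finite-measure case µ(S) < ∞ (function names and the example measure are my own):

```python
import math
import random

def sample_ppp_finite(total_mass, sample_point, rng):
    """Explicit construction of a P.P.P. when mu(S) < infinity:
    draw N(S) ~ Poisson(mu(S)), then place N(S) i.i.d. points,
    each with law mu(.)/mu(S) (drawn by `sample_point`)."""
    # Poisson(total_mass) by CDF inversion (fine for moderate means).
    u, k, p = rng.random(), 0, math.exp(-total_mass)
    cdf = p
    while u > cdf:
        k += 1
        p *= total_mass / k
        cdf += p
    return [sample_point(rng) for _ in range(k)]

# Example: a homogeneous P.P.P. with intensity 1 on S = [0, 5], mu(S) = 5.
rng = random.Random(1)
points = sample_ppp_finite(5.0, lambda r: 5.0 * r.random(), rng)
# N(A) for A = [0, 2] is then just a count of the points landing in A.
n_A = sum(1 for x in points if x <= 2.0)
```

Averaging over many realizations, the total count has mean µ(S) = 5 and counts over disjoint sets are independent, as the definition requires.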
A Course in Interacting Particle Systems
A Course in Interacting Particle Systems. J.M. Swart, January 14, 2020. arXiv:1703.10007v2 [math.PR] 13 Jan 2020.
Contents
1 Introduction
1.1 General set-up / 1.2 The voter model / 1.3 The contact process / 1.4 Ising and Potts models / 1.5 Phase transitions / 1.6 Variations on the voter model / 1.7 Further models
2 Continuous-time Markov chains
2.1 Poisson point sets / 2.2 Transition probabilities and generators / 2.3 Poisson construction of Markov processes / 2.4 Examples of Poisson representations
3 The mean-field limit
3.1 Processes on the complete graph / 3.2 The mean-field limit of the Ising model / 3.3 Analysis of the mean-field model / 3.4 Functions of Markov processes / 3.5 The mean-field contact process / 3.6 The mean-field voter model / 3.7 Exercises
4 Construction and ergodicity
4.1 Introduction / 4.2 Feller processes / 4.3 Poisson construction / 4.4 Generator construction / 4.5 Ergodicity / 4.6 Application to the Ising model / 4.7 Further results
5 Monotonicity
5.1 The stochastic order / 5.2 The upper and lower invariant laws / 5.3 The contact process / 5.4 Other examples / 5.5 Exercises
6 Duality
6.1 Introduction / 6.2 Additive systems duality / 6.3 Cancellative systems duality / 6.4 Other dualities -
Point Processes, Temporal
Point processes, temporal. David R. Brillinger, Peter M. Guttorp & Frederic Paik Schoenberg. Volume 3, pp 1577–1581 in Encyclopedia of Environmetrics (ISBN 0471 899976), edited by Abdel H. El-Shaarawi and Walter W. Piegorsch. John Wiley & Sons, Ltd, Chichester, 2002. A temporal point process is a random process whose realizations consist of the times {τ_j}, j = 0, ±1, ±2, ..., of isolated events scattered in time. A point process is also known as a counting process or a random scatter. The times may correspond to events of several types. ... is an encyclopedic account of the theory of the subject. Figure 1 presents an example of temporal point process data. The figure actually provides three different ways of representing the timing of floods on the Amazon River near Manaus, Brazil, during the period 1892–1992 (see Hydrological extremes) [7]. Examples. Examples of point processes abound in the environment; we have already mentioned times of floods. There are also times of earthquakes, fires, deaths, accidents, hurricanes, storms (hail, ice, thunder), volcanic eruptions, lightning strikes, tornadoes, power outages, chemical spills (see Meteorological extremes; Natural disasters). The formal use of the concept of point process has a long history going back at least to the life tables of Graunt [14]. Physicists contributed many ideas in the first half of the twentieth century; see, for example, [23]. Questions. The questions that scientists ask involving point process data include the following. Is a point process associated with another process? Is the associ- -
Martingale Representation in the Enlargement of the Filtration Generated by a Point Process Paolo Di Tella, Monique Jeanblanc
Martingale Representation in the Enlargement of the Filtration Generated by a Point Process. Paolo Di Tella (1,2) and Monique Jeanblanc (3). To cite this version: Paolo Di Tella, Monique Jeanblanc. Martingale Representation in the Enlargement of the Filtration Generated by a Point Process. 2019. hal-02146840. HAL Id: hal-02146840, https://hal.archives-ouvertes.fr/hal-02146840. Preprint submitted on 4 Jun 2019. Abstract: Let X be a point process and let F^X denote the filtration generated by X. In this paper we study martingale representation theorems in the filtration G obtained as an initial and progressive enlargement of F^X. In particular, the progressive enlargement is done by means of a whole point process H. We work here in full generality, without requiring any further assumption on the process H, and we recover the special case in which F^X is enlarged progressively by a random time τ. Keywords: point processes, martingale representation, progressive enlargement, initial enlargement, random measures. -
Markov Determinantal Point Processes
Markov Determinantal Point Processes. Raja Hafiz Affandi (Department of Statistics, The Wharton School, University of Pennsylvania), Alex Kulesza (Dept. of Computer and Information Science, University of Pennsylvania), Emily B. Fox (Department of Statistics, The Wharton School, University of Pennsylvania). Emails: [email protected], [email protected], [email protected]. Abstract: A determinantal point process (DPP) is a random process useful for modeling the combinatorial problem of subset selection. In particular, DPPs encourage a random subset Y to contain a diverse set of items selected from a base set Y. For example, we might use a DPP to display a set of news headlines that are relevant to a user's interests while covering a variety of topics. Suppose, however, that we are asked to sequentially select multiple diverse sets of items, for example, displaying new headlines day-by-day. We might want these sets to be diverse not just individually but also through time, offering headlines today that are unlike the ones shown yesterday. [From the introduction:] ... locations are likewise over-dispersed relative to uniform placement (Bernstein and Gobbel, 1979). Likewise, many practical tasks can be posed in terms of diverse subset selection. For example, one might want to select a set of frames from a movie that are representative of its content. Clearly, diversity is preferable to avoid redundancy; likewise, each frame should be of high quality. A motivating example we consider throughout the paper is the task of selecting a diverse yet relevant set of news headlines to display to a user. One could imagine employing binary Markov random fields with negative correlations, but such models often involve notoriously intractable inference problems. Determinantal point processes (DPPs), which arise in random matrix theory (Mehta and Gaudin, 1960; Ginibre, 1965) repul- -
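To make the diversity property concrete: for a DPP with marginal kernel K, the probability that a set A is contained in the random subset Y is det(K_A), the principal minor of K indexed by A. The 2x2 kernel below is a hypothetical illustration, not from the paper:

```python
# For a DPP with marginal kernel K, P(A in Y) = det(K_A).  A large
# off-diagonal entry (similar items) makes joint inclusion unlikely,
# which is the "diversity" behaviour described in the abstract.

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Hypothetical marginal kernel for two very similar headlines.
K = [[0.50, 0.45],
     [0.45, 0.50]]

p_first = K[0][0]                     # P(item 0 in Y) = 0.5
p_both = det2(K)                      # P(both in Y) = 0.25 - 0.2025 = 0.0475
p_if_independent = K[0][0] * K[1][1]  # 0.25 under independent selection
# p_both is far below p_if_independent: the DPP repels similar items.
```

Setting the off-diagonal entries to 0 recovers independent selection, so the off-diagonal magnitude directly controls the strength of the repulsion.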
Two Pillars of Asset Pricing †
American Economic Review 2014, 104(6): 1467–1485. http://dx.doi.org/10.1257/aer.104.6.1467. Two Pillars of Asset Pricing. By Eugene F. Fama. The Nobel Foundation asks that the Nobel lecture cover the work for which the Prize is awarded. The announcement of this year's Prize cites empirical work in asset pricing. I interpret this to include work on efficient capital markets and work on developing and testing asset pricing models, the two pillars, or perhaps more descriptive, the Siamese twins of asset pricing. I start with efficient markets and then move on to asset pricing models. I. Efficient Capital Markets. A. Early Work. The year 1962 was a propitious time for PhD research at the University of Chicago. Computers were coming into their own, liberating econometricians from their mechanical calculators. It became possible to process large amounts of data quickly, at least by previous standards. Stock prices are among the most accessible data, and there was burgeoning interest in studying the behavior of stock returns, centered at the University of Chicago (Merton Miller, Harry Roberts, Lester Telser, and Benoit Mandelbrot as a frequent visitor) and MIT (Sidney Alexander, Paul Cootner, Franco Modigliani, and Paul Samuelson). Modigliani often visited Chicago to work with his longtime coauthor Merton Miller, so there was frequent exchange of ideas between the two schools. It was clear from the beginning that the central question is whether asset prices reflect all available information, what I labelled the efficient markets hypothesis (Fama 1965b). The difficulty is making the hypothesis testable. We can't test whether the market does what it is supposed to do unless we specify what it is supposed to do. -
Financial Econometrics Value at Risk
Financial Econometrics: Value at Risk. Gerald P. Dwyer, Trinity College, Dublin. January 2016.
Outline
1 Value at Risk
• Introduction
• VaR
• RiskMetrics™
• Summary
Risk
What do we mean by risk?
• Dictionary: possibility of loss or injury
• Volatility is a common measure for assets
• Two points of view on volatility as a measure:
  - Risk is both good and bad changes
  - Volatility is useful because there is symmetry in gains and losses
What sorts of risk?
• Market risk
• Credit risk
• Liquidity risk
• Operational risk
• Other risks sometimes mentioned:
  - Legal risk
  - Model risk
Different ways of dealing with risk
• Maximize expected utility, with preferences about risk implicit in the utility function
  - What are problems with this?
• The worst that can happen to you
  - What are problems with this?
• Safety first
  - One definition (Roy): the investor chooses a portfolio that minimizes the probability of a loss greater in magnitude than some disaster level
  - What are problems with this?
  - Another definition (Telser): the investor specifies a maximum probability of a return less than some level, and then chooses the portfolio that maximizes the expected return subject to this restriction
Value at risk
Value at risk summarizes the maximum loss over some horizon with a given confidence level.
• Lower tail of the distribution function of returns for a long position
• Upper tail of the distribution function of returns for a short position
  - Can use the lower tail if the distribution is symmetric
• Suppose a standard normal distribution, which implies an expected return of zero
• 99 percent of the time, the loss is at most -2.32634 -
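The closing figure, the 1% quantile of the standard normal, can be reproduced without statistical libraries. A sketch; the position size and volatility in the scaled example are hypothetical, not from the slides:

```python
import math

def norm_ppf(p, tol=1e-10):
    """Standard normal quantile via bisection on the CDF (no SciPy)."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The 1% quantile: with zero-mean, unit-variance returns, the loss on a
# long position exceeds 2.32634 standard deviations only 1% of the time.
q01 = norm_ppf(0.01)  # about -2.32635

# Hypothetical scaled example: a 1,000,000 position with 2% daily return
# volatility gives a 99% one-day VaR of roughly 46,500.
var_99 = -q01 * 0.02 * 1_000_000
```

The same recipe (quantile times volatility times position size) underlies the RiskMetrics approach mentioned in the outline, with volatility replaced by a time-varying estimate.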