Generalized Autoregressive Score Models
by Drew Creal, Siem Jan Koopman, André Lucas


AENORM vol. 20 (75), May 2012

To capture the dynamic behavior of univariate and multivariate time series processes, we can allow parameters to be time-varying by specifying them as functions of lagged dependent variables as well as exogenous variables. Although other approaches to introducing time dependence exist, this particular approach has become popular in applied statistics and econometrics. Here we discuss a further development by Creal, Koopman, and Lucas (2012), which is based on the score function of the predictive model density at time $t$. Typical examples are the generalized autoregressive conditional heteroskedasticity (GARCH) models of Engle (1982) and Bollerslev (1986), the autoregressive conditional duration and intensity (ACD and ACI, respectively) models of Engle and Russell (1998), and the dynamic copula models of Patton (2006). Creal, Koopman, and Lucas (2012) argue that the score function is an effective choice for introducing a driving mechanism for time-varying parameters. In particular, by scaling the score function appropriately, standard dynamic models such as the GARCH, ACD, and ACI models can be recovered.
Application of this framework to other non-linear, non-Gaussian, possibly multivariate, models will lead to the formulation of new time-varying parameter models. Creal, Koopman, and Lucas have labeled their model the generalized autoregressive score (GAS) model. Here we aim to introduce the GAS model and to illustrate the approach for a class of multivariate point-process models that is used empirically for the modeling of credit risk. We further aim to show that time-varying parameters in a multi-state model for pooled marked point-processes can be introduced naturally in this framework.

Siem Jan Koopman
Prof. Dr. Siem Jan Koopman is Professor of Econometrics at the Vrije Universiteit Amsterdam and a research fellow at the Tinbergen Institute since 1999. His Ph.D. is from the London School of Economics (LSE) and dates back to 1992. He held positions at the LSE between 1992 and 1997 and at CentER (Tilburg University) between 1997 and 1999. His research interests are statistical analysis of time series, time series econometrics, financial econometrics, the Kalman filter, simulation-based estimation, and forecasting.

The GAS model

Let the $N \times 1$ vector $y_t$ denote the dependent variable of interest, $f_t$ the time-varying parameter vector, and $x_t$ a vector of exogenous variables (covariates), all at time $t$, and let $\theta$ be a vector of static parameters. Define $Y^t = \{y_1, \dots, y_t\}$, $F^t = \{f_0, f_1, \dots, f_t\}$, and $X^t = \{x_1, \dots, x_t\}$. The available information set at time $t$ consists of $\{f_t, \mathcal{F}_t\}$, where

$$\mathcal{F}_t = \{Y^{t-1}, F^{t-1}, X^{t}\}, \qquad t = 1, \dots, n.$$

We assume that $y_t$ is generated by the observation density

$$y_t \sim p(y_t \mid f_t, \mathcal{F}_t; \theta). \tag{1}$$

Furthermore, we assume that the mechanism for updating the time-varying parameter $f_t$ is given by the familiar autoregressive updating equation

$$f_{t+1} = \omega + \sum_{i=1}^{p} A_i s_{t-i+1} + \sum_{j=1}^{q} B_j f_{t-j+1}, \tag{2}$$

where $\omega$ is a vector of constants, the coefficient matrices $A_i$ and $B_j$ have appropriate dimensions for $i = 1, \dots, p$ and $j = 1, \dots, q$, and $s_t$ is an appropriate function of past data, $s_t = s_t(y_t, f_t, \mathcal{F}_t; \theta)$. The unknown coefficients in (2) are functions of $\theta$, that is, $\omega = \omega(\theta)$, $A_i = A_i(\theta)$, and $B_j = B_j(\theta)$ for $i = 1, \dots, p$ and $j = 1, \dots, q$.

The approach is based on the observation density (1) for a given parameter $f_t$.
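As a concrete illustration of the updating equation (2), a minimal Python sketch of a scalar GAS(1, 1) recursion follows. This is our addition, not part of the article: the function names are illustrative, and the Gaussian-volatility driver $s_t = y_t^2 - f_t$ is one particular choice of scaled score.

```python
import numpy as np

def gas_filter(y, omega, A, B, score_fn, f0):
    """Run a scalar GAS(1, 1) recursion: f_{t+1} = omega + A*s_t + B*f_t.

    score_fn(y_t, f_t) must return the scaled score s_t; its form depends
    on the observation density and on the chosen scaling matrix.
    """
    n = len(y)
    f = np.empty(n + 1)
    f[0] = f0
    for t in range(n):
        s_t = score_fn(y[t], f[t])
        f[t + 1] = omega + A * s_t + B * f[t]
    return f

# Illustrative driver: Gaussian volatility model with inverse-information
# scaling, for which s_t = y_t^2 - f_t (f_t is the conditional variance).
gaussian_vol_score = lambda y_t, f_t: y_t**2 - f_t

rng = np.random.default_rng(0)
y = rng.standard_normal(500)
f = gas_filter(y, omega=0.05, A=0.1, B=0.95,
               score_fn=gaussian_vol_score, f0=1.0)
```

Swapping `score_fn` for the scaled score of another observation density yields the corresponding GAS model without changing the recursion itself.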
When observation $y_t$ is realized, the time-varying parameter $f_t$ is updated to the next period $t+1$ using (2) with

$$s_t = S_t \cdot \nabla_t, \qquad \nabla_t = \frac{\partial \ln p(y_t \mid f_t, \mathcal{F}_t; \theta)}{\partial f_t}, \qquad S_t = S(t, f_t, \mathcal{F}_t; \theta), \tag{3}$$

where $S(\cdot)$ is a matrix function. Given the dependence of the driving mechanism in (2) on the scaled score vector (3), equations (1)–(3) define the generalized autoregressive score model with orders $p$ and $q$. We refer to the model as GAS($p$, $q$), and we typically take $p = q = 1$.

The use of the score for updating $f_t$ is intuitive. It defines a steepest ascent direction for improving the model's local fit in terms of the likelihood or density at time $t$, given the current position of the parameter $f_t$. This provides the natural direction for updating the parameter. In addition, the score depends on the complete density, and not only on the first or second order moments of the observations $y_t$. Via the choice of the scaling matrix $S_t$, the GAS model allows for additional flexibility in how the score is used for updating $f_t$.
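The score $\nabla_t$ in (3) can also be checked or obtained numerically when an analytical expression is inconvenient. A minimal sketch for a scalar parameter, using a central finite difference (our addition; the helper names are illustrative):

```python
import numpy as np

def score(logpdf, y_t, f_t, eps=1e-6):
    """Central finite-difference approximation of the score
    d/df ln p(y_t | f_t), evaluated at a scalar f_t."""
    return (logpdf(y_t, f_t + eps) - logpdf(y_t, f_t - eps)) / (2 * eps)

# Gaussian density with variance f_t: ln p = -0.5*ln(2*pi*f) - y^2/(2f).
gauss_logpdf = lambda y, f: -0.5 * np.log(2 * np.pi * f) - y**2 / (2 * f)

# The analytical score is (y^2 - f) / (2 f^2); compare the two.
y_t, f_t = 1.3, 2.0
approx = score(gauss_logpdf, y_t, f_t)
exact = (y_t**2 - f_t) / (2 * f_t**2)
```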
In many situations, it is natural to consider a form of scaling that depends on the variance of the score. For example, we can define the scaling matrix as

$$S_t = \mathcal{I}_{t|t-1}^{-1}, \qquad \mathcal{I}_{t|t-1} = \mathrm{E}_{t-1}\!\left[\nabla_t \nabla_t'\right], \tag{4}$$

where $\mathrm{E}_{t-1}$ denotes the expectation with respect to the density $p(y_t \mid f_t, \mathcal{F}_t; \theta)$. For this choice of $S_t$, the GAS model encompasses well-known models such as GARCH, ACD and ACI. Another possibility for scaling is

$$S_t = \mathcal{J}_{t|t-1}, \qquad \mathcal{J}_{t|t-1} \mathcal{J}_{t|t-1}' = \mathcal{I}_{t|t-1}^{-1}, \tag{5}$$

where $S_t$ is defined as the square root matrix of the (pseudo-)inverse information matrix for (1) with respect to $f_t$. An advantage of this specific choice for $S_t$ is that the statistical properties of the corresponding GAS model become more tractable. In particular, the driver $s_t$ becomes a martingale difference with unit variance.

A convenient property of the GAS model is the relatively simple way of estimating parameters by maximum likelihood (ML).
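As a worked check on the inverse-information scaling (4) (our addition, not from the article): for the Gaussian volatility model with $f_t = \sigma_t^2$, the score is $\nabla_t = (y_t^2 - f_t)/(2 f_t^2)$ and the information is $\mathcal{I}_{t|t-1} = 1/(2 f_t^2)$, so $S_t \nabla_t = y_t^2 - f_t$, the familiar GARCH-type driver. A quick Monte Carlo confirmation:

```python
import numpy as np

rng = np.random.default_rng(42)
f = 2.0                                  # conditional variance f_t = sigma_t^2
y = rng.normal(0.0, np.sqrt(f), 1_000_000)

grad = (y**2 - f) / (2 * f**2)           # score with respect to f_t
info_mc = np.mean(grad**2)               # Monte Carlo estimate of E[grad^2]
info_exact = 1 / (2 * f**2)              # Fisher information 1/(2 f^2)
s = (1 / info_exact) * grad              # inverse-information scaling, cf. (4)
# s equals y^2 - f_t, the driver of the Gaussian volatility case.
```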
This feature applies to all special cases of GAS models. For an observed time series $y_1, \dots, y_n$, and by adopting the standard prediction error decomposition, we can express the maximization problem as

$$\hat{\theta} = \arg\max_{\theta} \sum_{t=1}^{n} \ell_t, \tag{6}$$

where $\ell_t = \ln p(y_t \mid f_t, \mathcal{F}_t; \theta)$ for a realization of $y_t$. Evaluating the log-likelihood function of the GAS model is particularly simple. It only requires the implementation of the GAS updating equation (2) and the evaluation of $\ell_t$ for a particular value $\theta^*$ of $\theta$.

Example: GARCH models

Consider the basic model $y_t = \sigma_t \varepsilon_t$, where the Gaussian disturbance $\varepsilon_t$ has zero mean and unit variance while $\sigma_t$ is a time-varying standard deviation. It is a basic exercise to show that the GAS(1, 1) model with $S_t = \mathcal{I}_{t|t-1}^{-1}$ and $f_t = \sigma_t^2$ reduces to

$$f_{t+1} = \omega + A_1 \left( y_t^2 - f_t \right) + B_1 f_t, \tag{7}$$

which is equivalent to the standard GARCH(1, 1) model as given by

$$f_{t+1} = \alpha_0 + \alpha_1 y_t^2 + \beta_1 f_t, \qquad f_t = \sigma_t^2, \tag{8}$$

where the coefficients $\alpha_0 = \omega$, $\alpha_1 = A_1$ and $\beta_1 = B_1 - A_1$ are unknown. When we assume that $\varepsilon_t$ follows a Student's $t$ distribution with $\nu$ degrees of freedom and unit variance, the GAS(1, 1) specification for the conditional variance leads to the updating equation

$$f_{t+1} = \omega + A_1 (1 + 3\nu^{-1}) \left( \frac{(1 + \nu^{-1})\, y_t^2}{(1 - 2\nu^{-1})\left(1 + \nu^{-1} y_t^2 / \left((1 - 2\nu^{-1}) f_t\right)\right)} - f_t \right) + B_1 f_t. \tag{9}$$

This model is clearly different from the standard $t$-GARCH(1, 1) model, which has the Student's $t$ density in (1) together with the updating equation (7). The denominator of the second term on the right-hand side of (9) causes a more moderate increase in the variance for a large realization of $|y_t|$ as long as $\nu$ is finite. The intuition is clear: if the errors are modeled by a fat-tailed distribution, a large absolute realization of $y_t$ does not necessitate a substantial increase in the variance. Multivariate extensions of this approach are developed in Creal, Koopman, and Lucas (2011).

Example: Regression model

The time-varying linear regression model $y_t = x_t' \beta_t + \varepsilon_t$ has a $k \times 1$ vector $x_t$ of exogenous variables, a $k \times 1$ vector of time-varying regression coefficients $\beta_t$, and normally independently distributed disturbances $\varepsilon_t \sim N(0, \sigma^2)$. Let $f_t = \beta_t$. The scaled score function based on $S_t = \mathcal{J}_{t|t-1}$ for this regression model is given by

$$s_t = (x_t' x_t)^{-1/2}\, x_t \,(y_t - x_t' f_t) / \sigma, \tag{10}$$

where the inverse $\mathcal{I}_{t|t-1}^{-1}$ used to construct $\mathcal{J}_{t|t-1}$ is the Moore–Penrose pseudo-inverse, to account for the singularity of $x_t x_t'$. The GAS(1, 1) specification for the time-varying regression coefficient becomes

$$f_{t+1} = \omega + A_1 \frac{x_t \,(y_t - x_t' f_t)}{(x_t' x_t)^{1/2}\, \sigma} + B_1 f_t. \tag{11}$$

The updating equation (11) can be extended by including $\sigma^2$ as a time-varying factor and by adjusting the scaled score function (10) for the time-varying parameter vector $f_t = (\beta_t', \sigma_t^2)'$.

Illustration: dynamic pooled marked point process

Let […] denote the last event time before time $t$, and let $\lambda_{k,t} = (\lambda_{1k,t}, \dots, \lambda_{Jk,t})'$ be a $J \times 1$ vector of log-intensities. We model the log-intensities by

$$\lambda_{k,t} = d + Z f_t + X_{k,t} \beta, \tag{12}$$

where $d$ is a $J \times 1$ vector of baseline intensities, $Z$ is a $J \times r$ matrix of factor loadings, and $\beta$ is a $p \times 1$ vector of regression parameters for the exogenous covariates $X_{k,t}$.
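The estimation recipe in (6), applied to the Gaussian GAS(1, 1) volatility model of (7), can be sketched as follows. This is a minimal illustration under our own choices (SciPy's Nelder–Mead optimizer, a sample-variance initialization for $f_1$, and arbitrary starting values), not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, y):
    """Negative Gaussian log-likelihood of the GAS(1,1) volatility model
    f_{t+1} = omega + A1*(y_t^2 - f_t) + B1*f_t, cf. (6)-(7)."""
    omega, A1, B1 = params
    f = np.var(y)                      # initialize at the sample variance
    nll = 0.0
    for y_t in y:
        if f <= 0:
            return np.inf              # reject an invalid variance path
        nll += 0.5 * (np.log(2 * np.pi * f) + y_t**2 / f)
        f = omega + A1 * (y_t**2 - f) + B1 * f
    return nll

# Simulate a GARCH(1,1)-type path and re-estimate the parameters.
rng = np.random.default_rng(1)
omega0, A10, B10 = 0.1, 0.1, 0.9       # true values (alpha1+beta1 = 0.9 < 1)
n, f = 2000, 1.0
y = np.empty(n)
for t in range(n):
    y[t] = np.sqrt(f) * rng.standard_normal()
    f = omega0 + A10 * (y[t]**2 - f) + B10 * f

x0 = np.array([0.05, 0.05, 0.8])
res = minimize(neg_loglik, x0=x0, args=(y,), method="Nelder-Mead")
omega_hat, A1_hat, B1_hat = res.x
```

The whole likelihood is produced by one pass of the updating equation, which is exactly the "particularly simple" evaluation the text describes.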