A University of Sussex PhD Thesis Available Online via Sussex
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
-
Parameters Estimation for Asymmetric Bifurcating Autoregressive Processes with Missing Data
Electronic Journal of Statistics, ISSN 1935-7524. arXiv:1012.2012v3 [math.PR], 27 Sep 2011. Benoîte de Saporta (Université de Bordeaux, GREThA CNRS UMR 5113, IMB CNRS UMR 5251 and INRIA Bordeaux Sud-Ouest team CQFD, France), Anne Gégout-Petit (Université de Bordeaux, IMB CNRS UMR 5251 and INRIA Bordeaux Sud-Ouest team CQFD, France) and Laurence Marsalle (Université de Lille 1, Laboratoire Paul Painlevé, CNRS UMR 8524, France).
Abstract: We estimate the unknown parameters of an asymmetric bifurcating autoregressive process (BAR) when some of the data are missing. To this end, we model the observed data by a two-type Galton-Watson process consistent with the binary tree structure of the data. Under independence between the process leading to the missing data and the BAR process, and suitable assumptions on the driving noise, we establish the strong consistency of our estimators on the set of non-extinction of the Galton-Watson process via a martingale approach. We also prove a quadratic strong law and asymptotic normality.
AMS 2000 subject classifications: Primary 62F12, 62M09; secondary 60J80, 92D25, 60G42. Keywords and phrases: least squares estimation, bifurcating autoregressive process, missing data, Galton-Watson process, joint model, martingales, limit theorems.
1. Introduction. Bifurcating autoregressive (BAR) processes generalize autoregressive (AR) processes to data with a binary tree structure. Typically, they are involved in modeling cell lineage data, since each cell in one generation gives birth to two offspring in the next one.
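A minimal simulation sketch of the setting described above: an asymmetric BAR(1) process on a binary tree in which each cell is observed with some probability. The recursion, the Gaussian noise, the parameter values, and the missing-data mechanism are illustrative assumptions, not the paper's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bar(n_gen, a0=0.5, b0=0.3, a1=0.4, b1=0.2, sigma=1.0, p_obs=0.8):
    """Simulate an asymmetric BAR(1) process on a binary tree.

    Cell k has daughters 2k and 2k+1:
        X_{2k}   = a0 + b0 * X_k + eps_{2k}
        X_{2k+1} = a1 + b1 * X_k + eps_{2k+1}
    Each daughter of an observed cell is itself observed with probability p_obs,
    so the observed cells form a two-type branching structure.
    """
    n = 2 ** (n_gen + 1)              # nodes indexed 1 .. 2^(n_gen+1) - 1
    X = np.full(n, np.nan)
    observed = np.zeros(n, dtype=bool)
    X[1] = rng.normal()
    observed[1] = True
    for k in range(1, n // 2):
        if not observed[k]:
            continue                   # unobserved cells leave no observed offspring
        eps = sigma * rng.normal(size=2)
        X[2 * k] = a0 + b0 * X[k] + eps[0]
        X[2 * k + 1] = a1 + b1 * X[k] + eps[1]
        observed[2 * k] = rng.random() < p_obs
        observed[2 * k + 1] = rng.random() < p_obs
    return X, observed

X, obs = simulate_bar(n_gen=10)
print(f"{obs.sum()} observed cells out of {obs.size - 1}")
```
-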
Stochastic Comparisons and Aging Properties of an Extended Gamma Process (Zeina Al Masry, Sophie Mercier, Ghislain Verdier)
Zeina Al Masry (FEMTO-ST, Univ. Bourgogne Franche-Comté, CNRS, ENSMM), Sophie Mercier and Ghislain Verdier (Université de Pau et des Pays de l'Adour, E2S UPPA, CNRS, LMAP, Pau, France). Journal of Applied Probability, Cambridge University Press, 2021, 58 (1), pp. 140-163. doi:10.1017/jpr.2020.74. HAL Id: hal-02894591, https://hal.archives-ouvertes.fr/hal-02894591, submitted 9 Jul 2020.
Abstract: Extended gamma processes have been seen as a flexible extension of standard gamma processes in the recent reliability literature, for cumulative deterioration modeling purposes. The probabilistic properties of the standard gamma process have been well explored since the 1970s, whereas those of its extension remain largely unexplored. In particular, stochastic comparisons between degradation levels modeled by standard gamma processes and aging properties for the corresponding level-crossing times are nowadays well understood.
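A rough simulation sketch of the processes being compared: a standard gamma process built from independent gamma increments, and an extended variant obtained by weighting those increments with a deterministic scale function (a crude discretization of the weighted-integral construction). The shape, scale, and weight function used here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_process_path(T=10.0, n_steps=1000, a=1.0, b=0.5, scale_fun=None):
    """Simulate one path of a (possibly extended) gamma process on [0, T].

    Standard gamma process: independent increments
        Z(t + dt) - Z(t) ~ Gamma(shape=a*dt, scale=b).
    Extended variant (illustrative): each increment is multiplied by a
    deterministic weight scale_fun(t), a simple way to obtain time-varying scaling.
    """
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    increments = rng.gamma(shape=a * dt, scale=b, size=n_steps)
    if scale_fun is not None:
        increments *= scale_fun(t[:-1])
    return t, np.concatenate([[0.0], np.cumsum(increments)])

t, z_std = gamma_process_path()                                    # standard
t, z_ext = gamma_process_path(scale_fun=lambda s: 1.0 + 0.2 * s)   # extended variant
print(z_std[-1], z_ext[-1])
```
-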
Lecture Notes
Lecture Notes. Lars Peter Hansen, October 8, 2007.
Contents: 1. Approximation Results (1.1 One Way to Build a Stochastic Process; 1.2 Stationary Stochastic Process; 1.3 Invariant Events and the Law of Large Numbers; 1.4 Another Way to Build a Stochastic Process: 1.4.1 Stationarity, 1.4.2 Limiting Behavior, 1.4.3 Ergodicity; 1.5 Building Nonstationary Processes; 1.6 Martingale Approximation).
Chapter 1. Approximation Results.
1.1 One Way to Build a Stochastic Process. Consider a probability space (Ω, F, Pr), where Ω is a set of sample points, F is an event collection (sigma algebra) and Pr assigns probabilities to events. Introduce a function S : Ω → Ω such that for any event Λ, S^{-1}(Λ) = {ω ∈ Ω : S(ω) ∈ Λ} is an event. Introduce a (Borel measurable) measurement function x : Ω → R^n; x is a random vector. Construct a stochastic process {X_t : t = 1, 2, ...} via the formula X_t(ω) = x[S^t(ω)], that is, X_t = x ∘ S^t.
Example 1.1.1. Let Ω be a collection of infinite sequences of real numbers. Specifically, ω = (r_0, r_1, ...), S(ω) = (r_1, r_2, ...) and x(ω) = r_0. Then X_t(ω) = r_t.
1.2 Stationary Stochastic Process. Definition 1.2.1. The transformation S is measure-preserving if Pr(Λ) = Pr{S^{-1}(Λ)} for all Λ ∈ F. Proposition 1.2.2. When S is measure-preserving, the process {X_t : t = 1, 2, ...} has identical distributions for every t.
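Example 1.1.1 can be reproduced almost literally in code: a sample point is a (truncated) sequence of reals, S is the shift, x reads off the first coordinate, and X_t = x ∘ S^t recovers the t-th coordinate. The sketch below uses an arbitrary Gaussian sequence purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# A sample point omega: a (truncated) infinite sequence of real numbers.
omega = rng.normal(size=1000)

def S(seq):
    """Shift transformation: S(r_0, r_1, ...) = (r_1, r_2, ...)."""
    return seq[1:]

def x(seq):
    """Measurement function: read off the first coordinate."""
    return seq[0]

def X(t, seq):
    """X_t = x o S^t: apply the shift t times, then measure."""
    for _ in range(t):
        seq = S(seq)
    return x(seq)

# The construction recovers X_t(omega) = r_t.
print([X(t, omega) for t in range(5)])
print(omega[:5])
```
-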
Network Traffic Modeling
Chapter in The Handbook of Computer Networks, Hossein Bidgoli (ed.), Wiley, to appear 2007. Thomas M. Chen, Southern Methodist University, Dallas, Texas.
Outline: 1. Introduction (1.1 Packets, flows, and sessions; 1.2 The modeling process; 1.3 Uses of traffic models). 2. Source Traffic Statistics (2.1 Simple statistics; 2.2 Burstiness measures; 2.3 Long range dependence and self-similarity; 2.4 Multiresolution timescale; 2.5 Scaling). 3. Continuous-Time Source Models (3.1 Traditional Poisson process; 3.2 Simple on/off model; 3.3 Markov modulated Poisson process (MMPP); 3.4 Stochastic fluid model; 3.5 Fractional Brownian motion). 4. Discrete-Time Source Models (4.1 Time series; 4.2 Box-Jenkins methodology). 5. Application-Specific Models (5.1 Web traffic; 5.2 Peer-to-peer traffic; 5.3 Video). 6. Access Regulated Sources (6.1 Leaky bucket regulated sources; 6.2 Bounding-interval-dependent (BIND) model). 7. Congestion-Dependent Flows (7.1 TCP flows with congestion avoidance; 7.2 TCP flows with active queue management). 8. Conclusions.
Key words: traffic model, burstiness, long range dependence, policing, self-similarity, stochastic fluid, time series, Poisson process, Markov modulated process, transmission control protocol (TCP).
Abstract: From the viewpoint of a service provider, demands on the network are not entirely predictable. Traffic modeling is the problem of representing our understanding of dynamic demands by stochastic processes. Accurate traffic models are necessary for service providers to properly maintain quality of service. Many traffic models have been developed based on traffic measurement data. This chapter gives an overview of a number of common continuous-time and discrete-time traffic models.
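As a concrete example of one of the continuous-time source models listed in the outline, here is a small sketch of a two-state Markov-modulated Poisson process (MMPP); the sojourn rates and arrival rates are made-up values, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_mmpp(T=100.0, rates=(1.0, 10.0), q=(0.1, 0.5)):
    """Two-state Markov-modulated Poisson process on [0, T].

    The modulating chain stays in state i for an Exp(q[i]) sojourn; while in
    state i, packets arrive as a Poisson process of rate rates[i].
    Returns the array of arrival times.
    """
    t, state, arrivals = 0.0, 0, []
    while t < T:
        sojourn = rng.exponential(1.0 / q[state])
        end = min(t + sojourn, T)
        # Poisson number of arrivals during this sojourn, placed uniformly in time.
        n = rng.poisson(rates[state] * (end - t))
        arrivals.extend(np.sort(rng.uniform(t, end, size=n)))
        t, state = end, 1 - state
    return np.array(arrivals)

arrivals = simulate_mmpp()
print(len(arrivals), "arrivals; empirical mean rate", len(arrivals) / 100.0)
```
-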
On Conditionally Heteroscedastic AR Models with Thresholds
Statistica Sinica 24 (2014), 625-652. doi:10.5705/ss.2012.185. Kung-Sik Chan, Dong Li, Shiqing Ling and Howell Tong (University of Iowa, Tsinghua University, Hong Kong University of Science & Technology, and London School of Economics & Political Science).
Abstract: Conditional heteroscedasticity is prevalent in many time series. By viewing conditional heteroscedasticity as the consequence of a dynamic mixture of independent random variables, we develop a simple yet versatile observable mixing function, leading to the conditionally heteroscedastic AR model with thresholds, or a T-CHARM for short. We demonstrate its many attributes and provide comprehensive theoretical underpinnings with efficient computational procedures and algorithms. We compare, via simulation, the performance of T-CHARM with the GARCH model. We report some experiences using data from economics, biology, and geoscience.
Key words and phrases: compound Poisson process, conditional variance, heavy tail, heteroscedasticity, limiting distribution, quasi-maximum likelihood estimation, random field, score test, T-CHARM, threshold model, volatility.
1. Introduction. We can often model a time series as the sum of a conditional mean function (the drift or trend) and a conditional variance function (the diffusion); see, e.g., Tong (1990). The drift attracted attention from very early days, although the importance of the diffusion did not go entirely unnoticed, with an example in ecological populations as early as Moran (1953); a systematic modelling of the diffusion did not seem to attract serious attention before the 1980s. For the discrete-time case, which is our focus, it is, as far as we are aware, in the econometric and finance literature that the modelling of the conditional variance has been treated seriously, although Tong and Lim (1980) did include conditional heteroscedasticity.
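An illustrative sketch of the general idea: a two-regime threshold AR model whose conditional variance switches with the regime. This is not the paper's exact T-CHARM specification, and all coefficient values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_threshold_ar(n=2000, r=0.0,
                          phi=(0.5, -0.3), mu=(0.2, -0.2), sigma=(0.5, 1.5)):
    """Two-regime threshold AR with regime-dependent noise variance.

    Regime j is selected by whether X_{t-1} <= r or X_{t-1} > r:
        X_t = mu[j] + phi[j] * X_{t-1} + sigma[j] * eps_t,  eps_t ~ N(0, 1),
    so both the conditional mean and the conditional variance switch with the
    threshold variable (a sketch of the idea, not the paper's model).
    """
    x = np.zeros(n)
    for t in range(1, n):
        j = 0 if x[t - 1] <= r else 1
        x[t] = mu[j] + phi[j] * x[t - 1] + sigma[j] * rng.normal()
    return x

x = simulate_threshold_ar()
print("sample variance below / above threshold:",
      x[1:][x[:-1] <= 0].var(), x[1:][x[:-1] > 0].var())
```
-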
Arbitrage Free Approximations to Candidate Volatility Surface Quotations
Journal of Risk and Financial Management. Dilip B. Madan (Robert H. Smith School of Business, University of Maryland, College Park, MD 20742, USA) and Wim Schoutens (Department of Mathematics, KU Leuven, 3000 Leuven, Belgium). Received: 19 February 2019; Accepted: 10 April 2019; Published: 21 April 2019.
Abstract: It is argued that the growth in the breadth of option strikes traded after the financial crisis of 2008 poses difficulties for the use of Fourier inversion methodologies in volatility surface calibration. Continuous-time Markov chain approximations are proposed as an alternative. They are shown to be adequate, competitive, and stable, though slow for the moment. Further research can be devoted to speed enhancements. The Markov chain approximation is general and not constrained to processes with independent increments. Calibrations are illustrated for data on 2695 options across 28 maturities for SPY as at 8 February 2018.
Keywords: bilateral gamma; fast Fourier transform; Sato process; matrix exponentials. JEL Classification: G10; G12; G13.
1. Introduction. A variety of arbitrage-free models for approximating quoted option prices have been available for some time. The quotations are typically in terms of (Black and Scholes 1973; Merton 1973) implied volatilities for the traded strikes and maturities. Consequently, the set of such quotations is now popularly referred to as the volatility surface. Some of the models are nonparametric, with the Dupire (1994) local volatility model being a popular example. The local volatility model is specified in terms of a deterministic function σ(K, T) that expresses the local volatility for the stock if the stock were to be at level K at time T.
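As a toy illustration of pricing with a continuous-time Markov chain approximation and a matrix exponential, the sketch below builds a birth-death generator that locally matches Black-Scholes drift and variance on a stock-price grid. The paper's chains are calibrated to far richer dynamics; every numerical value and modeling choice here is an assumption.

```python
import numpy as np
from scipy.linalg import expm

def ctmc_call_price(s0=100.0, K=100.0, T=1.0, r=0.02, sigma=0.2,
                    s_min=20.0, s_max=400.0, m=200):
    """Price a European call via a CTMC approximation of geometric Brownian motion.

    The generator Q is an upwind birth-death scheme on a uniform grid: jump rates
    are chosen so that each state locally matches the risk-neutral drift r*S and
    variance (sigma*S)^2. The price vector is exp(Q*T) applied to the payoff,
    discounted at rate r (boundary states are left absorbing).
    """
    grid = np.linspace(s_min, s_max, m)
    h = grid[1] - grid[0]
    Q = np.zeros((m, m))
    for i in range(1, m - 1):
        drift = r * grid[i]
        var = (sigma * grid[i]) ** 2
        up = (var / 2 + h * max(drift, 0.0)) / h ** 2
        dn = (var / 2 + h * max(-drift, 0.0)) / h ** 2
        Q[i, i + 1], Q[i, i - 1], Q[i, i] = up, dn, -(up + dn)
    payoff = np.maximum(grid - K, 0.0)
    values = np.exp(-r * T) * expm(Q * T) @ payoff
    return np.interp(s0, grid, values)

print(ctmc_call_price())
```
-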
Portfolio Optimization and Option Pricing Under Defaultable Lévy Driven Models
Università degli Studi di Padova, Dipartimento di Matematica, Scuola di Dottorato in Scienze Matematiche, Indirizzo Matematica Computazionale, XXVI ciclo. Portfolio optimization and option pricing under defaultable Lévy driven models. Director of the school: Prof. Paolo Dai Pra. Programme coordinator: Prof. Michela Redivo Zaglia. Supervisor: Prof. Tiziano Vargiolu (Università degli Studi di Padova). Co-advisors: Prof. Agostino Capponi (Johns Hopkins University) and Prof. Andrea Pascucci (Università degli Studi di Bologna). Doctoral candidate: Stefano Pagliarani.
To Carolina and to my family. "An old man, who had spent his life looking for a winning formula (martingale), spent the last days of his life putting it into practice, and his last pennies to see it fail. The martingale is as elusive as the soul." Alexandre Dumas, Mille et un fantômes, 1849.
Acknowledgements. First and foremost, I would like to thank my supervisor Prof. Tiziano Vargiolu and my co-advisors Prof. Agostino Capponi and Prof. Andrea Pascucci, for coauthoring with me all the material appearing in this thesis. In particular, thank you to Tiziano Vargiolu for supporting me scientifically and financially throughout my research and all the activities related to it. Thank you to Agostino Capponi for inviting me twice to work with him at Purdue University, and for his hospitality and kindness during my double stay in West Lafayette. Thank you to Andrea Pascucci for wisely leading me throughout my research during the last four years, and for giving me the chance to collaborate with him side by side, thus sharing with me his knowledge and his experience.
-
Multivariate ARMA Processes
LECTURE 10: Multivariate ARMA Processes. A vector sequence y(t) of n elements is said to follow an n-variate ARMA process of orders p and q if it satisfies the equation
(1) A_0 y(t) + A_1 y(t-1) + ... + A_p y(t-p) = M_0 ε(t) + M_1 ε(t-1) + ... + M_q ε(t-q),
wherein A_0, A_1, ..., A_p, M_0, M_1, ..., M_q are matrices of order n × n and ε(t) is a disturbance vector of n elements determined by serially uncorrelated white-noise processes that may have some contemporaneous correlation. In order to signify that the i-th element of the vector y(t) is the dependent variable of the i-th equation, for every i, it is appropriate to have units for the diagonal elements of the matrix A_0; these represent the so-called normalisation rules. Moreover, unless it is intended to explain the value of y_i in terms of the remaining contemporaneous elements of y(t), it is natural to set A_0 = I. It is also usual to set M_0 = I. Unless a contrary indication is given, it will be assumed that A_0 = M_0 = I. It is assumed that the disturbance vector ε(t) has E{ε(t)} = 0 for its expected value. On the assumption that M_0 = I, the dispersion matrix, denoted by D{ε(t)} = Σ, is an unrestricted positive-definite matrix of variances and of contemporaneous covariances. If, instead, the restriction D{ε(t)} = I is imposed, then M_0 need no longer be the identity, and D(M_0 ε(t)) = M_0 M_0' = Σ. The equations of (1) can be written in summary notation as
(2) A(L) y(t) = M(L) ε(t),
where L is the lag operator, which has the effect that L x(t) = x(t-1), and where A(z) = A_0 + A_1 z + ... + A_p z^p and M(z) = M_0 + M_1 z + ... + M_q z^q are matrix-valued polynomials assumed to be of full rank.
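A short sketch of equation (1) in the simplest case p = q = 1 with A_0 = M_0 = I, simulated as y(t) = -A_1 y(t-1) + ε(t) + M_1 ε(t-1); the coefficient matrices and noise covariance are illustrative, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_varma(n=500, A1=None, M1=None, Sigma=None):
    """Simulate a bivariate VARMA(1,1) process with A0 = M0 = I:

        y(t) = -A1 y(t-1) + e(t) + M1 e(t-1),   e(t) ~ N(0, Sigma),

    i.e. equation (1) with p = q = 1 and A1 y(t-1) moved to the right-hand side.
    """
    A1 = np.array([[-0.5, 0.1], [0.2, -0.4]]) if A1 is None else A1
    M1 = np.array([[0.3, 0.0], [0.0, 0.3]]) if M1 is None else M1
    Sigma = np.eye(2) if Sigma is None else Sigma
    e = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
    y = np.zeros((n, 2))
    for t in range(1, n):
        y[t] = -A1 @ y[t - 1] + e[t] + M1 @ e[t - 1]
    return y

y = simulate_varma()
print(y.mean(axis=0))
print(np.cov(y.T))
```
-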
A Representation Free Quantum Stochastic Calculus
Journal of Functional Analysis 104, 149-197 (1992). L. Accardi (Centro Matematico V. Volterra, Dipartimento di Matematica, Università di Roma II, Rome, Italy), F. Fagnola (Dipartimento di Matematica, Università di Trento, Povo, Italy) and J. Quaegebeur (Department of Mathematics, Universiteit Leuven, Louvain, Belgium). Communicated by L. Gross. Received January 2; revised March 1, 1991.
Abstract: We develop a representation free stochastic calculus based on three inequalities (semimartingale inequality, scalar forward derivative inequality, scalar conditional variance inequality). We prove that our scheme includes all the previously developed stochastic calculi and some new examples. The abstract theory is applied to prove a Boson Lévy martingale representation theorem in bounded form and a general existence, uniqueness, and unitarity theorem for quantum stochastic differential equations.
0. Introduction. Stochastic calculus is a powerful tool in classical probability theory. Recently various kinds of stochastic calculi have been introduced in quantum probability [18, 12, 10, 21]. The common features of these calculi are: one starts from a representation of the canonical commutation (or anticommutation) relations (CCR or CAR) over a space of the form L^2(R_+, dt; H), where H is a given Hilbert space; one introduces a family of operator-valued measures on R_+ related to this representation, e.g., expressed in terms of creation, annihilation, number, or field operators; and one shows that it is possible to develop a theory of stochastic integration with respect to these operator-valued measures sufficiently rich to allow one to solve some nontrivial stochastic differential equations.
-
Autoregressive Model with Partial Forgetting within Rao-Blackwellized Particle Filter (Kamil Dedecius, Radek Hofman)
Kamil Dedecius and Radek Hofman (Department of Adaptive Systems, Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic). Communications in Statistics - Simulation and Computation, Taylor & Francis, 2011, 41 (05), pp. 582-589. doi:10.1080/03610918.2011.598992. HAL Id: hal-00768970, https://hal.archives-ouvertes.fr/hal-00768970, submitted 27 Dec 2012.
Keywords: particle filters, Bayesian methods, recursive estimation.
Abstract: We are concerned with Bayesian identification and prediction of a nonlinear discrete stochastic process. The fact that a nonlinear process can be approximated by a piecewise linear function advocates the use of adaptive linear models.
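A sketch in the spirit of the adaptive linear models mentioned above: recursive least-squares estimation of an AR(1) coefficient with plain exponential forgetting (a simpler device than the paper's partial forgetting), applied to data whose true coefficient changes halfway through. All numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def rls_with_forgetting(y, lam=0.98):
    """Recursive least-squares estimate of an AR(1) coefficient with forgetting.

    Model: y_t = a_t * y_{t-1} + e_t, with a_t allowed to drift slowly; the
    forgetting factor lam < 1 inflates the parameter uncertainty each step so
    the estimate can track changes.
    """
    a_hat, P = 0.0, 100.0              # prior mean and variance of a
    estimates = []
    for t in range(1, len(y)):
        x = y[t - 1]
        P = P / lam                     # forgetting step
        k = P * x / (1.0 + P * x * x)   # Kalman-style gain
        a_hat = a_hat + k * (y[t] - a_hat * x)
        P = (1.0 - k * x) * P
        estimates.append(a_hat)
    return np.array(estimates)

# Data from an AR(1) whose coefficient switches halfway through.
n = 400
a_true = np.where(np.arange(n) < n // 2, 0.9, 0.3)
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true[t] * y[t - 1] + 0.5 * rng.normal()
est = rls_with_forgetting(y)
print(est[n // 2 - 2], est[-1])        # tracks ~0.9, then moves toward ~0.3
```
-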
Normalized Random Measures Driven by Increasing Additive Processes
The Annals of Statistics 2004, Vol. 32, No. 6, 2343-2360. doi:10.1214/009053604000000625. Luis E. Nieto-Barajas (ITAM, México), Igor Prünster (Università degli Studi di Pavia) and Stephen G. Walker (University of Kent).
Abstract: This paper introduces and studies a new class of nonparametric prior distributions. Random probability distribution functions are constructed via normalization of random measures driven by increasing additive processes. In particular, we present results for the distribution of means under both prior and posterior conditions and, via the use of strategic latent variables, undertake a full Bayesian analysis. Our class of priors includes the well-known and widely used mixture of a Dirichlet process.
1. Introduction. This paper considers the problem of constructing a stochastic process, defined on the real line, whose sample paths behave almost surely (a.s.) as a probability distribution function (d.f.). The law governing the process acts as a prior in Bayesian nonparametric problems. One popular idea is to take the random probability d.f. as a normalized increasing process F(t) = Z(t)/Z̄, where Z̄ = lim_{t→∞} Z(t) < +∞ (a.s.). For example, the Dirichlet process [Ferguson (1973)] arises when Z is a suitably reparameterized gamma process. We consider the case of normalized random d.f.s driven by an increasing additive process (IAP) L, that is, Z(t) = ∫ k(t, x) dL(x), and provide regularity conditions on k and L to ensure F is a random probability d.f.
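A small sketch of the construction in its simplest case: normalizing a (discretized) gamma process yields a draw of a random distribution function, which is the Dirichlet-process example mentioned above. The grid, the base measure, and the total mass are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def normalized_gamma_df(t_grid, alpha):
    """Draw a random distribution function F(t) = Z(t) / Z_bar by normalizing
    an increasing independent-increment process Z, here a gamma process whose
    shape measure is approximated by its mass alpha on each grid cell.

    With Z a gamma process this is (a discretization of) the Dirichlet process,
    the simplest member of the class of priors discussed above.
    """
    masses = np.diff(alpha(t_grid))            # alpha-mass of each grid cell
    increments = rng.gamma(shape=masses, scale=1.0)
    Z = np.cumsum(increments)
    return Z / Z[-1]                           # F evaluated at t_grid[1:]

t_grid = np.linspace(-5.0, 5.0, 401)
# Assumed base measure: alpha(t) = c * Phi(t) with total mass c = 5 (standard normal cdf).
F = normalized_gamma_df(t_grid, alpha=lambda t: 5.0 * norm.cdf(t))
print(F[:3], F[-1])                            # nondecreasing, ends at 1
```
-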
Uniform Noise Autoregressive Model Estimation
L. P. de Lima and J. C. S. de Miranda.
Abstract: In this work we present a methodology for the estimation of the coefficients of an autoregressive model where the random component is assumed to be uniform white noise. The uniform noise hypothesis makes MLE equivalent to a simple consistency requirement on the possible values of the coefficients given the data. In the case of the autoregressive model of order one, X(t+1) = a X(t) + ε(t) with i.i.d. ε(t) ~ U[α, β], the estimator â of the coefficient a can be written down analytically. The performance of the estimator is assessed via simulation.
Keywords: autoregressive model, estimation of parameters, simulation, uniform noise.
1. Introduction. Autoregressive models have been extensively studied and one can find a vast literature on the subject. However, the majority of these works deal with autoregressive models where the innovations are assumed to be normal random variables or vectors. Comparatively few works are dedicated to these models in the case of uniform innovations.
2. Estimator construction. Let us consider the real-valued autoregressive model of order one, X(t+1) = a X(t) + ε(t+1), where ε(t), t ∈ N, are i.i.d. real random variables. Our aim is to estimate the real-valued coefficient a given an observation of the process, i.e., a sequence (X(t))_{t ∈ [1:N]}, with [1:N] ⊂ N. We will suppose a ∈ A ⊂ R. A standard way to accomplish this is maximum likelihood estimation. Let us assume that the noise probability law is such that it admits a probability density.
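A sketch of the consistency idea described in the abstract, for the case where the noise bounds α and β are known: the likelihood is constant on the set of coefficients a satisfying α ≤ X(t+1) - a X(t) ≤ β for every t, and zero elsewhere, so any point of that interval is an MLE. Taking the midpoint as a point estimate, and all parameter values below, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def feasible_a_interval(x, alpha, beta):
    """Consistency set for the AR(1) coefficient under known U[alpha, beta] noise.

    Intersects, over t, the intervals of a for which
        alpha <= X(t+1) - a * X(t) <= beta.
    Every point of the resulting interval maximizes the likelihood.
    """
    lo, hi = -np.inf, np.inf
    for xt, xt1 in zip(x[:-1], x[1:]):
        if xt > 0:
            lo = max(lo, (xt1 - beta) / xt)
            hi = min(hi, (xt1 - alpha) / xt)
        elif xt < 0:
            lo = max(lo, (xt1 - alpha) / xt)
            hi = min(hi, (xt1 - beta) / xt)
        # xt == 0 gives no information about a
    return lo, hi

# Simulate and estimate.
a_true, alpha, beta, n = 0.6, -1.0, 1.0, 500
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = a_true * x[t] + rng.uniform(alpha, beta)
lo, hi = feasible_a_interval(x, alpha, beta)
a_hat = 0.5 * (lo + hi)            # one natural point estimate within the MLE set
print(lo, hi, a_hat)
```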