EE290 Mathematics of Data Science, Lecture 22 (11/19/2019): Robust Location Estimation
Lecturer: Jiantao Jiao    Scribe: Vignesh Subramanian


In this lecture, we get a historical perspective on the robust estimation problem and discuss Huber's work [1] on robust estimation of a location parameter. The Huber loss function is given by

$$\rho_{\mathrm{Huber}}(t) = \begin{cases} \frac{1}{2}t^2, & |t| \le k, \\ k|t| - \frac{1}{2}k^2, & |t| > k. \end{cases} \qquad (1)$$

Here $k$ is a parameter, and the idea behind the loss function is to penalize outliers (beyond $k$) linearly instead of quadratically. Figure 1 shows the Huber loss function for $k = 1$.

[Figure 1 (not reproduced): the green line plots the Huber loss function for $k = 1$, and the blue line plots the quadratic function $\frac{1}{2}t^2$.]

In this lecture we will build an intuitive understanding of the reasons behind the particular form of this function (quadratic in the interior, linear in the exterior, and convex), and we will see that this loss function is optimal for one-dimensional robust mean estimation in the Gaussian location model. First we describe the problem setting.

1 Problem Setting

Suppose we observe i.i.d. samples $X_1, X_2, \ldots, X_n$ where $X_i - \mu \sim F \in \mathcal{F}$. Here,

$$\mathcal{F} = \{F \mid F = (1-\epsilon)G + \epsilon H,\ H \in \mathcal{M}\}, \qquad (2)$$

where $G \in \mathcal{M}$ is some fixed distribution function, usually assumed to have zero mean, and $\mathcal{M}$ denotes the space of all probability measures. This describes the corruption model in which the observed distribution is a convex combination of the true distribution $G$ and an arbitrary corruption distribution $H$. It is a location model since we assume $X - \mu$ has distribution $F$, where $\mu \in \mathbb{R}$ is unknown. The goal is to estimate the parameter $\mu$.

First we must determine how we evaluate estimators. In the paper, Huber restricted his attention to M-estimators of the form

$$\hat{\mu} = \arg\min_{t} \sum_{i=1}^{n} \rho(X_i - t).$$

As an example, if $\rho(t) = \frac{1}{2}t^2$, then $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} X_i$, the empirical mean, which is sensitive to outliers.
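As a quick numerical illustration (not from the lecture): since $\psi(r) = \rho'_{\mathrm{Huber}}(r)$ can be written as $w(r)\,r$ with $w(r) = \min(1, k/|r|)$, the first-order condition $\sum_i \psi(X_i - t) = 0$ is a weighted-mean fixed point, and the Huber M-estimate can be computed by iteratively reweighted averaging. The sketch below is a minimal pure-Python version; the function names and the default $k = 1.345$ are illustrative choices, not from the lecture.

```python
def huber_psi(t, k):
    # psi(t) = rho'(t): identity in the interior, clipped at +/- k outside
    return max(-k, min(k, t))

def huber_m_estimate(xs, k=1.345, tol=1e-10, max_iter=500):
    """Solve sum_i psi(x_i - t) = 0 by iteratively reweighted averaging.

    Writing psi(r) = w(r) * r with w(r) = min(1, k/|r|) turns the
    first-order condition into a weighted-mean fixed-point iteration.
    """
    t = sorted(xs)[len(xs) // 2]  # robust starting point: a sample median
    for _ in range(max_iter):
        ws = [1.0 if abs(x - t) <= k else k / abs(x - t) for x in xs]
        t_new = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t
```

On data clustered around 5 with one gross outlier at 1000, the empirical mean is dragged above 100 while the Huber estimate stays near 5, exactly the sensitivity contrast described above.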
To evaluate estimators, Huber looks at asymptotics.

2 Asymptotics

Let $\psi(t) = \rho'(t)$. Then from the first-order condition of optimality, an optimizer $T_n$ must satisfy

$$\sum_{i=1}^{n} \psi(X_i - T_n) = 0. \qquad (3)$$

Assume for now $\mu = 0$ and $\mathbb{E}_F[\psi(X)] = 0$. This means that for the population version of (3), $T_n = 0$ is a solution. We now assume that $T_n \to 0$ as $n \to \infty$, and we provide a proof sketch showing that $T_n$ is asymptotically normal, and compute its asymptotic variance. From (3), using the first-order expansion of $\psi(X_i - T_n)$ around the point $X_i$ and the mean-value theorem, for some $0 \le \theta \le 1$ we have

$$\sum_{i=1}^{n} \psi(X_i) - T_n \sum_{i=1}^{n} \psi'(X_i - \theta T_n) = 0.$$

Rearranging, we get

$$\sqrt{n}\, T_n = \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \psi(X_i)}{\frac{1}{n}\sum_{i=1}^{n} \psi'(X_i - \theta T_n)}.$$

Since $\mathbb{E}_F[\psi(X)] = 0$, the numerator converges weakly, by the Central Limit Theorem, to $N \sim \mathcal{N}(0, \mathbb{E}_F[\psi(X)^2])$. Further, since we assumed $T_n \to 0$ as $n \to \infty$, the denominator converges, by the weak law of large numbers, to $\mathbb{E}_F[\psi'(X)]$. Thus we have

$$\sqrt{n}(T_n - 0) \xrightarrow{w} \mathcal{N}\left(0,\ \frac{\mathbb{E}_F[\psi(X)^2]}{(\mathbb{E}_F[\psi'(X)])^2}\right).$$

One basic result for M-estimators is that the maximum likelihood estimator achieves the smallest asymptotic variance among all M-estimators. We provide a proof below. Letting $f(x)$ denote the density function of $F$, supported on $[a, b]$, we have

$$\mathbb{E}_F[\psi'(X)] = \int_a^b f(x)\psi'(x)\,dx = \Big[f(x)\psi(x)\Big]_a^b - \int_a^b \psi(x)f'(x)\,dx.$$

If we assume that $f(a) = f(b) = 0$, then we have

$$\mathbb{E}_F[\psi'(X)] = -\int_a^b \psi(x)f'(x)\,dx.$$

Thus,

$$\frac{\mathbb{E}_F[\psi(X)^2]}{(\mathbb{E}_F[\psi'(X)])^2} = \frac{\int_a^b \psi(x)^2 f(x)\,dx}{\left(\int_a^b \psi(x)\,\frac{f'(x)}{f(x)}\,f(x)\,dx\right)^2} \ \ge\ \frac{1}{\int_a^b \left(\frac{f'(x)}{f(x)}\right)^2 f(x)\,dx},$$

where we used the Cauchy–Schwarz inequality. Observe that the RHS does not depend on $\psi$, and the inequality is tight when $\psi(x) \propto -\frac{f'(x)}{f(x)}$, which results in $f(t) \propto e^{-\rho(t)}$ up to a normalizing constant. Thus minimizing $\sum_i \rho(X_i - t)$ is equivalent to computing the maximum likelihood estimator. When $f(x)$ is a Gaussian density, $\rho$ is the squared loss and the optimizer $T_n$ is the empirical mean.
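The asymptotic variance formula above can be checked numerically. The sketch below (an illustration, not part of the lecture) evaluates $\mathbb{E}_F[\psi(X)^2]/(\mathbb{E}_F[\psi'(X)])^2$ under $F = \mathcal{N}(0,1)$ by simple quadrature; the names and the tuning constant $k = 1.345$ are illustrative assumptions.

```python
import math

def phi(x):  # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def asymptotic_variance(psi, dpsi, lo=-8.0, hi=8.0, n=20001):
    """E_F[psi(X)^2] / (E_F[psi'(X)])^2 for F = N(0,1), via the trapezoidal rule."""
    h = (hi - lo) / (n - 1)
    num = den = 0.0
    for i in range(n):
        x = lo + i * h
        w = h * (0.5 if i in (0, n - 1) else 1.0)  # trapezoid endpoint weights
        num += w * psi(x) ** 2 * phi(x)
        den += w * dpsi(x) * phi(x)
    return num / den ** 2

K = 1.345  # a conventional Huber tuning constant (illustrative choice)
psi_h = lambda x: max(-K, min(K, x))            # Huber psi: clipped identity
dpsi_h = lambda x: 1.0 if abs(x) <= K else 0.0  # its derivative
```

For the squared loss ($\psi(x) = x$, $\psi'(x) = 1$) this returns the variance 1 of the sample mean, while for the Huber $\psi$ with $k = 1.345$ it returns roughly 1.05: about a 5% efficiency loss at the Gaussian, paid in exchange for robustness away from it.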
3 Two-Player Game and Huber's Theorem

Consider a two-player game with payoff function $-V(\psi, F)$. Here $\psi$ is the action chosen by the statistician to maximize the payoff (minimize the asymptotic variance), and $F$ is chosen by the adversary to minimize the payoff (maximize the asymptotic variance).

Theorem 1. Assume $G$ is symmetric around 0 and log-concave, with density function $g(x)$ with convex support. Define

$$\mathcal{F}_S = \{F \mid F = (1-\epsilon)G + \epsilon H,\ H \text{ symmetric around } 0\}. \qquad (4)$$

The two-player game under the assumptions described above has a saddle point $(\psi_0, F_0)$, i.e.,

$$\sup_{F \in \mathcal{F}_S} V(\psi_0, F) = V(\psi_0, F_0) = \inf_{\psi} V(\psi, F_0).$$

First we describe the form of $f_0(x)$, the density function of $F_0$. Let $[t_0, t_1]$ be the interval where $\left|\frac{g'(x)}{g(x)}\right| \le k$. We know that this interval exists since $g(x)$ is log-concave with convex support. Here $k$ is the solution to the equation

$$\frac{1}{1-\epsilon} = \int_{t_0}^{t_1} g(t)\,dt + \frac{g(t_0) + g(t_1)}{k}. \qquad (5)$$

Then

$$f_0(t) = \begin{cases} (1-\epsilon)\,g(t_0)\,e^{k(t - t_0)}, & t \le t_0, \\ (1-\epsilon)\,g(t), & t_0 < t < t_1, \\ (1-\epsilon)\,g(t_1)\,e^{-k(t - t_1)}, & t \ge t_1, \end{cases} \qquad (6)$$

$$\psi_0(t) = -\frac{f_0'(t)}{f_0(t)}. \qquad (7)$$

Before we look at the proof of this theorem, we look at an example.

Example 2. Let $g(t) = \frac{1}{\sqrt{2\pi}} e^{-t^2/2}$. Then $-t_0 = t_1 = k$. We can solve for $k$, either by binary search or by line search, using the equation

$$\frac{1}{1-\epsilon} = \int_{-k}^{k} g(t)\,dt + \frac{2g(k)}{k}.$$

The optimal loss function to use in this case is the Huber loss function given by (1):

$$\rho_{\mathrm{Huber}}(t) = \begin{cases} \frac{1}{2}t^2, & |t| \le k, \\ k|t| - \frac{1}{2}k^2, & |t| > k. \end{cases}$$

Note that for a generic distribution $g(t)$ the dependence of $t_0$ and $t_1$ on $k$ can be highly non-linear, and it is not easy to solve for $k$ using (5).

Next we look at the proof of Theorem 1.

Proof. First we verify that the distribution $H_0$ determined by $F_0$ and $G$ is indeed a probability distribution, i.e., that its density function $h_0(t)$ is non-negative and integrates to one. We have

$$h_0(t) = \begin{cases} \frac{1-\epsilon}{\epsilon}\left(g(t_0)e^{k(t-t_0)} - g(t)\right), & t \le t_0, \\ 0, & t_0 < t < t_1, \\ \frac{1-\epsilon}{\epsilon}\left(g(t_1)e^{-k(t-t_1)} - g(t)\right), & t \ge t_1. \end{cases} \qquad (8)$$

Since $g(t)$ and $f_0(t)$ both integrate to one, $h_0(t)$ integrates to one.
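In the Gaussian case of Example 2, equation (5) reduces to one-dimensional root finding in $k$: the residual (left side minus right side, rearranged) is strictly decreasing in $k$ (its derivative works out to $-2g(k)/k^2 < 0$), blows up as $k \to 0^+$, and is negative for large $k$, so binary search applies. A minimal sketch (illustrative, not from the lecture; `solve_k` and its defaults are assumptions):

```python
import math

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def g(x):  # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def solve_k(eps, lo=1e-6, hi=10.0, iters=200):
    """Bisection for k in 1/(1-eps) = int_{-k}^{k} g(t) dt + 2 g(k)/k.

    residual(k) is strictly decreasing: +inf as k -> 0+, and it tends to
    1 - 1/(1-eps) < 0 as k -> inf, so a unique root exists in (lo, hi).
    """
    def residual(k):
        return (2.0 * Phi(k) - 1.0 + 2.0 * g(k) / k) - 1.0 / (1.0 - eps)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid  # root lies to the right of mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $\epsilon = 0.05$ this gives $k \approx 1.40$; a larger contamination level $\epsilon$ yields a smaller $k$, i.e., more aggressive clipping.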
To show non-negativity of $h_0(t)$ we use the fact that $g(t)$ is log-concave, which implies that $-\log g(t)$ is a convex function. For any $t \le t_0$,

$$-\log g(t) \ \ge\ -\log g(t_0) - k(t - t_0) \quad\Rightarrow\quad g(t) \ \le\ g(t_0)\,e^{k(t - t_0)},$$

where we used the facts $\frac{g'(t_0)}{g(t_0)} = k$ and $(\log g(t))' = \frac{g'(t)}{g(t)}$. The proof for the case $t \ge t_1$ follows by a similar argument.

Next we need to show that $(\psi_0, F_0)$ is a saddle point. We have

$$V(\psi_0, F_0) = \inf_{\psi} V(\psi, F_0),$$

because for the given $F_0$, $\psi_0$ is optimal: it makes the optimizer the maximum likelihood estimator, as discussed in Section 2. Next we show that

$$V(\psi_0, F_0) = \sup_{F \in \mathcal{F}_S} V(\psi_0, F).$$

For any $F \in \mathcal{F}_S$ we have

$$V(\psi_0, F) = \frac{\mathbb{E}_F[\psi_0(X)^2]}{(\mathbb{E}_F[\psi_0'(X)])^2}.$$

We can rewrite the numerator as

$$\mathbb{E}_F[\psi_0(X)^2] = (1-\epsilon)\,\mathbb{E}_G[\psi_0(X)^2] + \epsilon\,\mathbb{E}_H[\psi_0(X)^2] \ \le\ (1-\epsilon)\,\mathbb{E}_G[\psi_0(X)^2] + \epsilon k^2,$$

where we upper bound $\mathbb{E}_H[\psi_0(X)^2]$ using $\psi_0(t) = -\frac{f_0'(t)}{f_0(t)}$ and the form of $f_0(t)$ from (6), which gives $|\psi_0(t)| = k$ for $t \le t_0$ or $t \ge t_1$, and $|\psi_0(t)| = \left|\frac{g'(t)}{g(t)}\right| \le k$ for $t_0 < t < t_1$. Note that $F_0$ results in $h_0(t) = 0$ for $t_0 < t < t_1$ and thus maximizes the numerator. Similarly, the denominator can be written as

$$(\mathbb{E}_F[\psi_0'(X)])^2 = \big((1-\epsilon)\,\mathbb{E}_G[\psi_0'(X)] + \epsilon\,\mathbb{E}_H[\psi_0'(X)]\big)^2 \ \ge\ \big((1-\epsilon)\,\mathbb{E}_G[\psi_0'(X)]\big)^2,$$

where we used the facts that $\psi_0' \ge 0$ pointwise and $\psi_0'(t) = 0$ for $t \le t_0$ or $t \ge t_1$. Again, note that $F_0$ results in $h_0(t) = 0$ for $t_0 < t < t_1$ and minimizes the denominator. Thus $F_0$ is the maximizer of $V(\psi_0, F)$ among all $F \in \mathcal{F}_S$.

4 Summary

There were several criticisms of Huber's work, including of the assumptions that $G$ and $H$ are symmetric, and of the requirement that $\epsilon$ be known in order to compute the Huber loss. Further, in higher dimensions the breakdown point scales as $\frac{1}{1+d}$, which is undesirable. (From Wikipedia: intuitively, the breakdown point of an estimator is the proportion of incorrect observations (e.g., arbitrarily large observations) an estimator can handle before giving an incorrect (e.g., arbitrarily large) result.)
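As a numerical sanity check on the least favorable distribution (an illustration, not from the lecture): in the Gaussian example one can verify that $f_0$ integrates to one and that the implied contamination density $h_0 = (f_0 - (1-\epsilon)g)/\epsilon$ is non-negative and integrates to one. The value $K \approx 1.3984$ below is an assumed approximate root of (5) for $\epsilon = 0.05$, and all names are illustrative.

```python
import math

def g(x):  # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

EPS, K = 0.05, 1.3984  # K: approximate root of equation (5) for eps = 0.05

def f0(t):
    # least favorable density (6) in the Gaussian case, where t0 = -K, t1 = K
    if t <= -K:
        return (1 - EPS) * g(-K) * math.exp(K * (t + K))
    if t >= K:
        return (1 - EPS) * g(K) * math.exp(-K * (t - K))
    return (1 - EPS) * g(t)

def h0(t):
    # implied contamination density: f0 = (1 - eps) g + eps h0
    return (f0(t) - (1 - EPS) * g(t)) / EPS

def integrate(fn, lo=-40.0, hi=40.0, n=80001):
    # trapezoidal rule; the exponential tails of f0 are negligible beyond +/- 40
    h = (hi - lo) / (n - 1)
    s = 0.5 * (fn(lo) + fn(hi))
    for i in range(1, n - 1):
        s += fn(lo + i * h)
    return s * h
```

Both integrals come out to 1 (up to the accuracy of the root $K$ and the quadrature), $h_0$ vanishes on $(-K, K)$, and $h_0 \ge 0$ everywhere, exactly as the log-concavity argument above guarantees.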