Asymptotic Theory of Robustness: A Short Summary


Matthias Kohl
"Mathematical Statistics", University of Bayreuth

Contents

1 Introduction
2 L2 Differentiability
3 Local Asymptotic Normality
4 Convolution Representation and Asymptotic Minimax Bound
5 Asymptotically Linear Estimators
    5.1 Definitions
    5.2 Cramér-Rao Bound
6 Infinitesimal Robust Setup
7 Optimal Influence Curves
    7.1 Introduction
    7.2 Bias Terms
    7.3 Minimum Trace Subject to Bias Bound
    7.4 Mean Square Error
References
Author Index
Subject Index

1 Introduction

In this technical report we give a short summary of results contained in Chapters 2-5 of [Rieder, 1994], where we restrict our considerations to the estimation of a finite-dimensional parameter in the one-sample i.i.d. case (i.e., no testing, no functionals). More precisely, we assume a parametric family

    $\mathcal{P} = \{ P_\theta \mid \theta \in \Theta \} \subset \mathcal{M}_1(\mathcal{A})$    (1.1)

of probability measures on some sample space $(\Omega, \mathcal{A})$, whose parameter space $\Theta$ is an open subset of some finite-dimensional $\mathbb{R}^k$. Sections 2-5 contain a brief review of some classical results of asymptotic statistics. For a more detailed introduction to these topics we also refer to Chapter 2 of [Bickel et al., 1998] and Chapters 6-9 of [van der Vaart, 1998], respectively. In the infinitesimal robust setup introduced in Section 6, the family $\mathcal{P}$ will serve as the ideal center model, and at least under the null hypothesis $P_\theta \in \mathcal{P}$ the observations $y_1, \ldots, y_n$ at time $n \in \mathbb{N}$ are assumed to be i.i.d. Finally, in Section 7 we give the solutions (i.e., optimal influence curves) to the optimization problems motivated in Subsection 7.1. For the derivation of optimal influence curves confer also [Hampel, 1968] and [Hampel et al., 1986].
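To fix ideas with a concrete instance of a parametric family of the form (1.1), consider the normal location model $N(\theta, \sigma^2)$ with $\sigma$ known (our own illustrative choice, not taken from [Rieder, 1994]). Its score function $\Lambda_\theta(y) = (y - \theta)/\sigma^2$ and Fisher information $1/\sigma^2$ reappear in Section 2 as the $L_2$ derivative and its covariance, and the two defining moment properties can be checked by a small Monte Carlo simulation:

```python
import numpy as np

# Monte Carlo check of the score function and Fisher information for the
# normal location model N(theta, sigma^2) (an illustrative example):
# Lambda_theta(y) = (y - theta) / sigma^2 and I_theta = 1 / sigma^2.
rng = np.random.default_rng(0)
theta, sigma, n = 1.5, 2.0, 200_000

y = rng.normal(theta, sigma, size=n)
score = (y - theta) / sigma**2

mean_score = score.mean()        # E_theta[Lambda_theta] = 0, cf. (2.9) below
fisher_hat = (score**2).mean()   # E_theta[Lambda_theta^2] = 1/sigma^2 = 0.25

print(mean_score, fisher_hat)
```

With 200,000 draws, the empirical score mean sits within a few thousandths of 0 and the empirical second moment within a few thousandths of $1/\sigma^2 = 0.25$.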
2 L2 Differentiability

To avoid domination assumptions in the definition of $L_2$ differentiability, we employ the following square root calculus, which was introduced by Le Cam. The following definition is taken from [Rieder, 1994]; for more details confer Subsection 2.3.1 of [Rieder, 1994].

Definition 2.1 For any measurable space $(\Omega, \mathcal{A})$ and $k \in \mathbb{N}$ we define the following real Hilbert space, which includes the ordinary $L_2^k(P)$:

    $L_2^k(\mathcal{A}) = \{ \xi \sqrt{dP} \mid \xi \in L_2^k(P),\ P \in \mathcal{M}_b(\mathcal{A}) \}$    (2.1)

On this space, an equivalence relation is given by

    $\xi \sqrt{dP} \equiv \eta \sqrt{dQ} \iff \int |\xi \sqrt{p} - \eta \sqrt{q}|^2 \, d\mu = 0$    (2.2)

where $|\cdot|$ denotes the Euclidean norm on $\mathbb{R}^k$ and $\mu \in \mathcal{M}_b(\mathcal{A})$ may be any measure, depending on $P$ and $Q$, such that $dP = p\, d\mu$ and $dQ = q\, d\mu$. We define linear combinations with real coefficients and a scalar product by

    $\alpha \xi \sqrt{dP} + \beta \eta \sqrt{dQ} = (\alpha \xi \sqrt{p} + \beta \eta \sqrt{q}) \sqrt{d\mu}$    (2.3)

    $\langle \xi \sqrt{dP} \mid \eta \sqrt{dQ} \rangle = \int \xi^\tau \eta \sqrt{pq} \, d\mu$    (2.4)

We fix some $\theta \in \Theta$ and define $L_2$ differentiability of the family $\mathcal{P}$ at $\theta$ using this square root calculus; confer Definition 2.3.6 of [Rieder, 1994]. Here $E_\theta$ denotes expectation taken under $P_\theta$.

Definition 2.2 The model $\mathcal{P}$ is called $L_2$ differentiable at $\theta$ if there exists some function $\Lambda_\theta \in L_2^k(P_\theta)$ such that, as $t \to 0$,

    $\big\| \sqrt{dP_{\theta+t}} - \sqrt{dP_\theta}\,(1 + \tfrac{1}{2} t^\tau \Lambda_\theta) \big\|_{L_2} = o(|t|)$    (2.5)

and

    $\mathcal{I}_\theta = E_\theta\, \Lambda_\theta \Lambda_\theta^\tau \succ 0$    (2.6)

The function $\Lambda_\theta$ is called the $L_2$ derivative and the $k \times k$ matrix $\mathcal{I}_\theta$ the Fisher information of $\mathcal{P}$ at $\theta$.

Remark 2.3 A concise definition of $L_2$ differentiability for arrays of probability measures on general sample spaces may be found in Section 2.3 of [Rieder, 1994]. ////

We now consider a parameter sequence $(\theta_n)$ about $\theta$ of the form

    $\theta_n = \theta + \dfrac{t_n}{\sqrt{n}}, \qquad t_n \to t \in \mathbb{R}^k$    (2.7)

Corresponding to these parametric alternatives $(\theta_n)$, two sequences of product measures are defined on the $n$-fold product measurable space $(\Omega^n, \mathcal{A}^n)$:

    $P_\theta^n = \bigotimes_{i=1}^n P_\theta, \qquad P_{\theta_n}^n = \bigotimes_{i=1}^n P_{\theta_n}$    (2.8)

Theorem 2.4 If $\mathcal{P}$ is $L_2$ differentiable at $\theta$, its $L_2$ derivative $\Lambda_\theta$ is uniquely determined in $L_2^k(P_\theta)$.
Moreover,

    $E_\theta \Lambda_\theta = 0$    (2.9)

and the alternatives given by (2.7) and (2.8) have the log-likelihood expansion

    $\log \dfrac{dP_{\theta_n}^n}{dP_\theta^n} = \dfrac{t^\tau}{\sqrt{n}} \sum_{i=1}^n \Lambda_\theta(y_i) - \dfrac{1}{2}\, t^\tau \mathcal{I}_\theta\, t + o_{P_\theta^n}(n^0)$    (2.10)

where

    $\dfrac{1}{\sqrt{n}} \sum_{i=1}^n \Lambda_\theta(y_i)\,(P_\theta^n) \xrightarrow{w} \mathcal{N}(0, \mathcal{I}_\theta)$    (2.11)

Proof: This is a special case of Theorem 2.3.7 in [Rieder, 1994]. ////

3 Local Asymptotic Normality

We first state a result of asymptotic statistics that is known as Le Cam's third lemma.

Theorem 3.1 Let $P_n, Q_n \in \mathcal{M}_1(\mathcal{A}_n)$ be two sequences of probabilities with log-likelihoods $L_n = \log \frac{dQ_n}{dP_n}$, and let $S_n$ be a sequence of statistics on $(\Omega_n, \mathcal{A}_n)$ taking values in some finite-dimensional $(\bar{\mathbb{R}}^p, \bar{\mathbb{B}}^p)$ such that, for $a, c \in \mathbb{R}^p$, $\sigma \in [0, \infty)$ and $C \in \mathbb{R}^{p \times p}$,

    $\begin{pmatrix} S_n \\ L_n \end{pmatrix}(P_n) \xrightarrow{w} \mathcal{N}\!\left( \begin{pmatrix} a \\ -\sigma^2/2 \end{pmatrix}, \begin{pmatrix} C & c \\ c^\tau & \sigma^2 \end{pmatrix} \right)$    (3.1)

Then

    $\begin{pmatrix} S_n \\ L_n \end{pmatrix}(Q_n) \xrightarrow{w} \mathcal{N}\!\left( \begin{pmatrix} a + c \\ \sigma^2/2 \end{pmatrix}, \begin{pmatrix} C & c \\ c^\tau & \sigma^2 \end{pmatrix} \right)$    (3.2)

Proof: [Rieder, 1994], Corollary 2.2.6. ////

The following definition corresponds to Definition 2.2.9 of [Rieder, 1994].

Definition 3.2 A sequence $(Q_n)$ of statistical models on sample spaces $(\Omega_n, \mathcal{A}_n)$,

    $Q_n = \{ Q_{n,t} \mid t \in \Theta_n \} \subset \mathcal{M}_1(\mathcal{A}_n)$    (3.4)

with the same finite-dimensional parameter space $\Theta_n = \mathbb{R}^k$ (or at least $\Theta_n \uparrow \mathbb{R}^k$), is called asymptotically normal if there exists a sequence of random variables $Z_n \colon (\Omega_n, \mathcal{A}_n) \to (\mathbb{R}^k, \mathbb{B}^k)$ that are asymptotically normal,

    $Z_n(Q_{n,0}) \xrightarrow{w} \mathcal{N}(0, C)$    (3.5)

with positive definite covariance $C \in \mathbb{R}^{k \times k}$, and such that for all $t \in \mathbb{R}^k$ the log-likelihoods $L_{n,t} = \log \frac{dQ_{n,t}}{dQ_{n,0}}$ have the approximation

    $L_{n,t} = t^\tau Z_n - \dfrac{1}{2}\, t^\tau C\, t + o_{Q_{n,0}}(n^0)$    (3.6)

The sequence $Z = (Z_n)$ is called the asymptotically sufficient statistic and $C$ the asymptotic covariance of the asymptotically normal models $(Q_n)$.

We now state Remark 2.2.10 of [Rieder, 1994], to which we add parts (c) and (d).

Remark 3.3 (a) The covariance $C$ is uniquely defined by (3.5) and (3.6), and (3.6) implies that another sequence of statistics $W = (W_n)$ is asymptotically sufficient iff $W_n = Z_n + o_{Q_{n,0}}(n^0)$.
(b) Neglecting the approximation, the terminology "asymptotically sufficient" may be justified in regard to Neyman's criterion; confer Proposition C.1.1 of [Rieder, 1994]. One speaks of local asymptotic normality if, as in Section 2, asymptotic normality arises under suitable local reparametrizations.

(c) The notion of local asymptotic normality (LAN, for short) was introduced by [Le Cam, 1960].

(d) The sequence of statistical models $Q_n = \{ Q_{n,t} \mid Q_{n,t} = P_{\theta_n}^n \}$ given by the alternatives (2.7) and (2.8) is LAN with asymptotically sufficient statistic $Z_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n \Lambda_\theta(y_i)$ and asymptotic covariance $C = \mathcal{I}_\theta$. ////

4 Convolution Representation and Asymptotic Minimax Bound

In this section we present the convolution and asymptotic minimax theorems in the parametric case; confer Theorems 3.2.3 and 3.3.8 of [Rieder, 1994]. These two mathematical results of asymptotic statistics are mainly due to Le Cam and Hájek. Assume a sequence of statistical models $(Q_n)$ on sample spaces $(\Omega_n, \mathcal{A}_n)$,

    $Q_n = \{ Q_{n,t} \mid t \in \Theta_n \} \subset \mathcal{M}_1(\mathcal{A}_n)$    (4.1)

with the same finite-dimensional parameter space $\Theta_n = \mathbb{R}^k$ (or $\Theta_n \uparrow \mathbb{R}^k$). The parameter of interest is $Dt$ for some $p \times k$ matrix $D$ of full rank $p \le k$. Moreover, we consider asymptotic estimators

    $S = (S_n), \qquad S_n \colon (\Omega_n, \mathcal{A}_n) \to (\mathbb{R}^p, \mathbb{B}^p)$    (4.2)

The following definition corresponds to Definition 3.2.2 of [Rieder, 1994].

Definition 4.1 An asymptotic estimator $S$ is called regular for the parameter transform $D$, with limit law $M \in \mathcal{M}_1(\mathbb{B}^p)$, if for all $t \in \mathbb{R}^k$,

    $(S_n - Dt)(Q_{n,t}) \xrightarrow{w} M$    (4.3)

that is, $S_n(Q_{n,t}) \xrightarrow{w} M * I_{Dt}$ as $n \to \infty$, for every $t \in \mathbb{R}^k$.

Remark 4.2 For a motivation of this regularity assumption we refer to Example 3.2.1 of [Rieder, 1994]. The Hodges estimator introduced there is asymptotically normal but superefficient. However, it is not regular in the sense of Definition 4.1.
Moreover, in the light of the asymptotic minimax theorem (Theorem 4.5), the Hodges estimator has maximal estimator risk; confer [Rieder, 1994], Example 3.3.10. ////

We may now state the convolution theorem.

Theorem 4.3 Assume models $(Q_n)$ that are asymptotically normal with asymptotic covariance $C \succ 0$ and asymptotically sufficient statistic $Z = (Z_n)$. Let $D \in \mathbb{R}^{p \times k}$ be a matrix of rank $p \le k$. Let the asymptotic estimator $S$ be regular for $D$ with limit law $M$. Then there exists a probability $M_0 \in \mathcal{M}_1(\mathbb{B}^p)$ such that

    $M = M_0 * \mathcal{N}(0, \Gamma), \qquad \Gamma = D C^{-1} D^\tau$    (4.4)

and

    $(S_n - D C^{-1} Z_n)(Q_{n,0}) \xrightarrow{w} M_0$    (4.5)

An asymptotic estimator $S^*$ is regular for $D$ and achieves the limit law $M^* = \mathcal{N}(0, \Gamma)$ iff

    $S_n^* = D C^{-1} Z_n + o_{Q_{n,0}}(n^0)$    (4.6)

Proof: Three variants of the proof are given in [Rieder, 1994], Theorem 3.2.3.
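The convolution representation (4.4) can be made tangible by simulation in the one-dimensional model $N(\theta, 1)$, where $C = \mathcal{I}_\theta = 1$, $D = 1$ and hence $\Gamma = 1$ (our own illustration; the specific estimators are not taken from [Rieder, 1994]). The standardized sample mean coincides with the efficient estimator $D C^{-1} Z_n$ of (4.6) and attains the limit $\mathcal{N}(0, \Gamma)$, while the standardized sample median is also regular but its limit $\mathcal{N}(0, \pi/2) = M_0 * \mathcal{N}(0, 1)$ carries a nondegenerate convolution factor $M_0$, visible as excess variance $\pi/2 - 1$:

```python
import numpy as np

# Simulation sketch of the convolution theorem in the model N(theta, 1):
# sqrt(n)(mean - theta) attains the efficient bound Gamma = 1, while
# sqrt(n)(median - theta) has asymptotic variance pi/2 > 1, the extra
# variance being contributed by the convolution factor M_0.
rng = np.random.default_rng(3)
theta, n, reps = 0.0, 201, 10_000

y = rng.normal(theta, 1.0, size=(reps, n))
mean_stat = np.sqrt(n) * (y.mean(axis=1) - theta)
med_stat = np.sqrt(n) * (np.median(y, axis=1) - theta)

print(mean_stat.var())  # close to 1, the efficient bound Gamma
print(med_stat.var())   # close to pi/2, i.e. about 1.57
```

The simulated variance of the standardized median exceeds that of the mean by roughly $\pi/2 - 1 \approx 0.57$, matching the extra variance contributed by $M_0 = \mathcal{N}(0, \pi/2 - 1)$ in this model.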