Chapter 1
Order Statistics

This presentation was delivered to CE397 Statistics in Water Resources, University of Texas at Austin, Spring 2009, on March 31, 2009, by William H. Asquith, Ph.D. The software used is R from www.r-project.org and is available for all platforms. The lmomco package provides some functions discussed in class related to order statistics and L-moments and can be found at www.cran.r-project.org/package=lmomco.

1 Order Statistics

1.1 Introduction

A branch of statistics known as order statistics plays a prominent role in L-moment theory. The study of order statistics is the study of the statistics of ordered (sorted) random variables and samples. This chapter presents a very brief introduction to order statistics to provide a foundation for later chapters. A comprehensive exposition of order statistics is provided by David (1981), and an R-oriented approach is described in various contexts by Baclawski (2008).

The random variable X for a sample of size n, when sorted, forms the order statistics of X: X_{1:n} ≤ X_{2:n} ≤ ··· ≤ X_{n:n}. The sample order statistics from a random sample are created by sorting the sample into ascending order: x_{1:n} ≤ x_{2:n} ≤ ··· ≤ x_{n:n}. As we will see, the concept and use of order statistics take into account both the value (magnitude) and the relative relation (order) to other observations. Barrett (2004, p. 23) reports that "… the effects of ordering can be impressive in terms of both what aspects of sample behavior can be usefully employed and the effectiveness and efficiency of resulting inferences" and that "… linear combinations of all ordered sample values can provide efficient estimators." This presentation will show that the L-moments, which are based on linear combinations of order statistics, do in fact provide efficient estimators of distributional geometry.
In general, order statistics are already a part of the basic summary statistic repertoire that most individuals, including nonscientists and statisticians alike, are familiar with. The minimum and maximum are examples of extreme order statistics and are defined by the following notation:

min{X_n} = X_{1:n}    (1.1)
max{X_n} = X_{n:n}    (1.2)

The familiar median X_{0.50} by convention is

X_{0.50} = (X_{[n/2]:n} + X_{[(n/2)+1]:n})/2   if n is even    (1.3)
X_{0.50} = X_{[(n+1)/2]:n}                     if n is odd

and thus clearly is defined in terms of one order statistic or a linear combination of two order statistics. Other order statistics exist, and several interpretations important for the purposes of this presentation can be made. Concerning L-moments, Hosking (1990, p. 109) and Hosking and Wallis (1997, p. 21) provide an "intuitive" justification for L-moments and, by association, the probability-weighted moments. The justification follows:

• The order statistic X_{1:1} (a single observation) contains information about the location of the distribution on the real-number line R;
• For a sample of n = 2, the order statistics are X_{1:2} (smallest) and X_{2:2} (largest). For a highly dispersed distribution, the expected difference E[X_{2:2} − X_{1:2}] would be large, whereas for a tightly dispersed distribution, the difference would be small. The expected differences between order statistics of an n = 2 sample hence can be used to express the variability or scale of a distribution; and
• For a sample of n = 3, the order statistics are X_{1:3} (smallest), X_{2:3} (median), and X_{3:3} (largest). For a negatively skewed distribution, the difference X_{2:3} − X_{1:3} would be larger (more data to the left) than X_{3:3} − X_{2:3}. The opposite (more data to the right) would occur if a distribution were positively skewed.

These interpretations show the importance of the intra-sample differences in the expression of distribution geometry.
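The even/odd convention of eq. (1.3) is easy to verify in R. The following sketch uses an arbitrary illustrative data vector (not from the presentation) and compares the order-statistic construction against R's built-in median() function.

```r
# Sketch: the sample median from order statistics, per the even/odd
# convention of eq. (1.3). The data vector is an arbitrary illustration.
x  <- c(8.1, 2.4, 5.6, 9.0, 3.3, 7.7)  # n = 6, an even sample size
xs <- sort(x)                          # order statistics x_{1:6} <= ... <= x_{6:6}
n  <- length(xs)
med <- if (n %% 2 == 0) (xs[n/2] + xs[n/2 + 1])/2 else xs[(n + 1)/2]
med        # 6.65
median(x)  # 6.65, R's built-in function agrees
```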
Expectations and Distributions of Order Statistics

A fundamental definition regarding order statistics, which will be critically important in the computation of L-moments and probability-weighted moments, is the expectation of an order statistic. The expectation is defined in terms of the QDF. The expectation of the jth order statistic of a sample of size n is defined (David, 1981, p. 33) in terms of the QDF x(F) as

E[X_{j:n}] = n!/[(j−1)!(n−j)!] × ∫_0^1 x(F) × F^{j−1} × (1−F)^{n−j} dF    (1.4)

The expectation of an order statistic for a sample of size n = 1 is especially important because

E[X_{1:1}] = ∫_0^1 x(F) dF = μ = arithmetic mean    (1.5)

Therefore, the familiar mean can be interpreted thus: The mean is the expected value of a single observation if one and only one sample is drawn from the distribution.

Hosking (2006) reports from references cited therein that "the expectations of extreme order statistics characterize a distribution." In particular, if the expectation of a random variable X is finite, then either of the sets {E[X_{1:n}] : n = 1, 2, ···} or {E[X_{n:n}] : n = 1, 2, ···} uniquely determines the distribution. Hosking (2006) reports that such sets of expectations contain redundant information and that, technically, a subset of expectations can be dropped and the smaller set is still sufficient to characterize the distribution.

USING R

Using eq. (1.4) and R, the expected value of the 123rd-ordered (increasing) value of a sample of size n = 300 is computed for an Exponential distribution in example 1–1. The ratio of factorial functions in eq. (1.4) is difficult to compute for large values—judicious use of the fact that n! = Γ(n+1) and use of logarithms of the complete Gamma function Γ(a) suffices. The results of the integration using the QDF of the Exponential by the qexp() function and the stochastic computation using random variates of the Exponential by the rexp() function for E[X_{123:300}] are equivalent.
1–1
nsim <- 10000; n <- 300; j <- 123
int <- integrate(function(f,n=NULL,j=NULL) {
         exp(lgamma(n+1) - lgamma(j) - lgamma(n-j+1)) *
           qexp(f) * f^(j-1) * (1-f)^(n-j)
       }, lower=0, upper=1, n=n, j=j)
E_integrated <- int$value
E_stochastic <- mean(replicate(nsim, sort(rexp(n))[j]))
cat(c("RESULTS:", round(E_integrated, digits=3),
      "and", round(E_stochastic, digits=3), "\n"))
RESULTS: 0.526 and 0.527

Distributions of Order Statistic Extrema

The extrema X_{1:n} and X_{n:n} are of special interest in many practical problems of distributional analysis. Let us consider the sample maximum of a random variable X having CDF F(x), and denote the CDF of X_{n:n} by F_{n:n}(x) = Pr[X_{n:n} ≤ x]. Because X_{n:n} ≤ x requires that x_i ≤ x for all i = 1, 2, ···, n, it can be shown that

F_{n:n}(x) = Pr[X_{n:n} ≤ x] = {F(x)}^n    (1.6)

Similarly, it can be shown for the sample minimum that

Pr[X_{1:n} > x] = {1 − F(x)}^n, and hence F_{1:n}(x) = 1 − {1 − F(x)}^n    (1.7)

Using the arguments producing eqs. (1.6) and (1.7) with a focus on the QDF, Gilchrist (2000, p. 85) provides

x_{n:n}(F_{n:n}) = x(F_{n:n}^{1/n})    (1.8)
x_{1:n}(F_{1:n}) = x(1 − (1 − F_{1:n})^{1/n})    (1.9)

for the maximum and minimum, respectively. Gilchrist (2000, p. 85) comments, at least for x_{n:n}, that "the quantile function of the largest observation is thus found from the original quantile function in the simplest of calculations."

For the general computation of the distribution of order statistics other than the extrema, the computations are more difficult. Gilchrist (2000, p. 86) shows that the QDF of the distribution of the jth order statistic of a sample of size n is

x_{j:n}(F_{j:n}) = x[B^(−1)(F_{j:n}; j, n − j + 1)]    (1.10)

where x_{j:n}(F_{j:n}) is to be read as "the QDF of the jth order statistic for a sample of size n given by nonexceedance probability F_{j:n}." The function B^(−1)(F; a, b) is the QDF of the Beta distribution—the notation represents the inverse of the CDF, which is of course a QDF.
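Equation (1.10) maps directly onto R because qbeta() is the QDF of the Beta distribution. The sketch below, assuming the unit Exponential parent and reusing the n = 300 and j = 123 of example 1–1, composes qbeta() with qexp(); the choice of F = 0.5 (the median of the order statistic) is an illustrative assumption.

```r
# Sketch of eq. (1.10): the QDF of the jth order statistic obtained by
# composing the Beta QDF (qbeta) with the parent QDF (qexp, unit Exponential).
n <- 300; j <- 123
xjn <- function(F) qexp(qbeta(F, j, n - j + 1))  # x_{j:n}(F) per eq. (1.10)
xjn(0.5)  # the median of X_{123:300}
# Stochastic check: the median of many simulated 123rd order statistics
set.seed(1)  # arbitrary seed for reproducibility
median(replicate(5000, sort(rexp(n))[j]))
```

The two values agree closely, and both sit near the expectation 0.526 computed in example 1–1, as would be anticipated for a nearly symmetric order-statistic distribution.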
It follows that the QDFs of the order statistic extrema for an F are

x_{1:n}(F) = x[B^(−1)(F; 1, n)]    (1.11)
x_{n:n}(F) = x[B^(−1)(F; n, 1)]    (1.12)

for the minimum X_{1:n} and maximum X_{n:n}, respectively.

USING R

In the context of eqs. (1.6) and (1.7), the expectations of extrema for the Exponential distribution are stochastically computed in example 1–2 using the min() and max() functions. The random variates from the Exponential are computed by the rexp() function. The example begins by setting the sample size n = 2, the size of a simulation run in nsim, and finally, the scale parameter (note that R uses a rate expression for the dispersion parameter) of the Exponential distribution is set to 1000. (A location parameter of 0 is implied.) The example reports (1000, 1500, 500) for the respective mean, maximum, and minimum values. (It is well known that the mean of this Exponential distribution is 1000; for a sample of n = 2, the expected maximum is 1500 and the expected minimum is 500.)

1–2
n <- 2; nsim <- 200000
s <- 1/1000 # inverse of scale parameter = 1000
# Expectation of the Exponential distribution (the mean)
mean(replicate(nsim, mean(rexp(n, rate=s))))
[1] 1000.262
# Expectation of the maximum from the Exponential distribution
mean(replicate(nsim, max(rexp(n, rate=s))))
[1] 1504.404
# Expectation of the minimum from the Exponential distribution
mean(replicate(nsim, min(rexp(n, rate=s))))
[1] 499.6178

The demonstration continues in example 1–3 with the stochastic computation of the expected values of the maximum and minimum through eqs.
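Eqs. (1.11) and (1.12) also permit a nonstochastic check: integrating the QDF of an extremum over F yields its expectation. The sketch below assumes the same Exponential parent (scale 1000, expressed as rate 1/1000 in R) and a sample size of n = 2, consistent with the values reported in example 1–2.

```r
# Sketch: E[X_{1:n}] and E[X_{n:n}] by numerically integrating the QDFs of
# eqs. (1.11) and (1.12). Assumes an Exponential parent with scale 1000
# (rate 1/1000) and n = 2.
n <- 2; s <- 1/1000
Emax <- integrate(function(F) qexp(qbeta(F, n, 1), rate=s), 0, 1)$value
Emin <- integrate(function(F) qexp(qbeta(F, 1, n), rate=s), 0, 1)$value
Emax  # close to the theoretical 1500 and the stochastic 1504.404
Emin  # close to the theoretical  500 and the stochastic  499.6178
```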