An Analysis of Drag Forces Based on L-Moments
Advances in Safety and Reliability – Kołowrocki (ed.) © 2005 Taylor & Francis Group, London, ISBN 0 415 38340 4

An analysis of drag forces based on L-moments

P.H.A.J.M. van Gelder
Delft University of Technology, Delft, The Netherlands
CeSOS, NTNU, Trondheim, Norway

M.D. Pandey
University of Waterloo, Ontario, Canada

ABSTRACT: Since L-moment estimators are linear functions of the ordered data values, they are virtually unbiased and have relatively small sampling variance, especially in comparison to the classical coefficients of skewness and kurtosis. Moreover, estimators of L-moments are relatively insensitive to outliers. Liaw and Zheng (2004) calculated drag forces on cylinders by polynomial approximations in which the coefficients are estimated by least-squares and moment methods. In this paper, the coefficients will be estimated with L-moment methods. Its advantages will be shown to be that: (i) an L-moments approach leads to a linear solution of the polynomialisation of drag forces, and (ii) other distribution types for the turbulent current and wave heights acting on the cylinder can be analysed fairly simply.

1 INTRODUCTION

Since Hosking (1990) introduced L-moments, they have become popular tools for solving various statistical problems related to parameter estimation, distribution identification, and regionalization. The L-moments are linear functions of probability weighted moments (PWMs) and hence, for certain applications such as the estimation of distribution parameters, serve identical purposes (Hosking, 1986). In other situations, however, L-moments have significant advantages over PWMs due to their ability to summarize a statistical distribution in a more meaningful way. Since L-moment estimators are linear functions of the ordered data values, they are virtually unbiased and have relatively small sampling variance. L-moment ratio estimators also have small bias and variance, especially in comparison with the classical coefficients of skewness and kurtosis. Moreover, estimators of L-moments are relatively insensitive to outliers. These often-heard arguments in favor of estimating distribution parameters by L-moments (or PWMs) should, nevertheless, not be accepted without any scrutiny. For instance, in wave height frequency analysis the interest is in the estimation of a given quantile, not in the L-moments themselves. Although the latter may have desirable sampling properties, the same does not necessarily apply to a function of them, such as a quantile estimator. In fact, several simulation studies have demonstrated that for some distributions other estimation methods may be superior to the method of L-moments in terms of mean square errors of quantile estimators (Hosking and Wallis, 1987; Rosbjerg et al., 1992). As compared with, for example, the classical method of moments, robustness vis-à-vis sample outliers is clearly a characteristic of L-moment estimators. However, estimators can be "too robust" in the sense that large (or small) sample values reflecting important information on the tail of the parent distribution are given too little weight in the estimation. Hosking (1990) assessed that L-moments weigh each element of a sample according to its relative importance.
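The robustness argument above can be illustrated with a short sketch. The Python code below (illustrative only, not part of the original paper; function names are ours) computes the first four sample L-moments via the standard sample PWM estimators of Hosking (1990), then compares the classical skewness coefficient with the sample L-skewness t3 = l3/l2 when a gross outlier is appended to a symmetric sample.

```python
import numpy as np

def sample_pwms(x, rmax=3):
    # Unbiased sample probability weighted moments b_0..b_rmax (Hosking, 1990)
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b = []
    for r in range(rmax + 1):
        w = np.ones(n)
        for k in range(1, r + 1):
            w *= (j - k) / (n - k)   # weight ((j-1)...(j-r)) / ((n-1)...(n-r))
        b.append(float(np.mean(w * x)))
    return b

def l_moments(x):
    # First four sample L-moments as linear combinations of the PWMs
    b0, b1, b2, b3 = sample_pwms(x, 3)
    return (b0, 2*b1 - b0, 6*b2 - 6*b1 + b0, 20*b3 - 30*b2 + 12*b1 - b0)

def classical_skewness(x):
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float(np.mean(d**3) / np.mean(d**2)**1.5)

# A symmetric sample, then the same sample with one gross outlier appended
clean = np.arange(1.0, 21.0)
dirty = np.append(clean, 200.0)
_, l2, l3, _ = l_moments(dirty)
t3 = l3 / l2   # L-skewness: bounded by 1, so the outlier has limited influence
```

For the contaminated sample, the classical skewness coefficient grows without bound as the outlier grows, while t3 stays within (−1, 1), which is the "too robust" trade-off discussed above.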
The present paper first briefly describes the theory of L-moments, followed by an overview of papers with applications of L-moments. The literature review has shown that the theory of L-moments has mostly been applied in a regionalized setting combining data from more than one site. However, in univariate settings the method of L-moments has not been investigated in detail. Therefore, this paper presents a Monte Carlo experiment in a univariate setting in order to compare the L-moments method with the classical parameter estimation methods (MOM, MML, and MLS). The performance of these methods will also be analyzed with respect to inhomogeneous data.

L-moments are in fact summary statistics for probability distributions and data samples. They are analogous to ordinary moments – they provide measures of location, dispersion, skewness, kurtosis, and other aspects of the shape of probability distributions or data samples – but are computed from linear combinations of the ordered data values (hence the prefix L). Hosking and Wallis (1997) give an excellent overview of the whole theory of L-moments.

Liaw and Zheng (2004) calculated drag forces by polynomial approximations in which the coefficients are estimated by least-squares and moment methods. In this paper, the coefficients will be estimated with L-moment methods, and their advantages will be shown.
2 L-MOMENTS FOR DATA SAMPLES

Probability weighted moments, defined by Greenwood et al. (1979), are precursors of L-moments. Sample probability weighted moments, computed from data values x_{1:n}, x_{2:n}, …, x_{n:n} arranged in increasing order, are given by:

b_r = n^{-1} \sum_{j=r+1}^{n} \frac{(j-1)(j-2)\cdots(j-r)}{(n-1)(n-2)\cdots(n-r)} \, x_{j:n}   (1)

L-moments are certain linear combinations of probability weighted moments that have simple interpretations as measures of the location, dispersion and shape of the data sample. A sample of size 2 contains two observations in ascending order, x_{1:2} and x_{2:2}.
The difference between the two observations, x_{2:2} − x_{1:2}, is a measure of the scale of the distribution. A sample of size 3 contains three observations in ascending order, x_{1:3}, x_{2:3} and x_{3:3}. The difference x_{2:3} − x_{1:3} and the difference x_{3:3} − x_{2:3} can be subtracted from each other to obtain a measure of the skewness of the distribution. This leads to: (x_{3:3} − x_{2:3}) − (x_{2:3} − x_{1:3}) = x_{3:3} − 2x_{2:3} + x_{1:3}. A sample of size 4 contains four observations in ascending order, x_{1:4}, x_{2:4}, x_{3:4} and x_{4:4}. A measure of the kurtosis of the distribution is given by: x_{4:4} − x_{1:4} − 3(x_{3:4} − x_{2:4}). In short: the above linear combinations of the elements of the ordered sample contain information about the location, scale, skewness and kurtosis of the distribution from which the sample was drawn.
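The size-2, size-3 and size-4 contrasts above extend to a sample of any size n by averaging each contrast over all ordered sub-samples of the corresponding size. A brief Python sketch (ours, illustrative only) computes l2, l3 and l4 this way:

```python
import numpy as np
from itertools import combinations

def l_moments_by_subsamples(x):
    # l2, l3, l4 as plain averages of the scale, skewness and kurtosis
    # contrasts over every ordered sub-sample of size 2, 3 and 4
    x = np.sort(np.asarray(x, dtype=float))
    l2 = np.mean([(b - a) / 2.0 for a, b in combinations(x, 2)])
    l3 = np.mean([(c - 2*b + a) / 3.0 for a, b, c in combinations(x, 3)])
    l4 = np.mean([(d - 3*c + 3*b - a) / 4.0 for a, b, c, d in combinations(x, 4)])
    return float(l2), float(l3), float(l4)
```

Because the input is sorted first, `itertools.combinations` yields every sub-sample already in ascending order, so each tuple directly supplies the ordered contrast. The averages agree with the PWM-based formulas, at a combinatorial cost that makes the PWM route preferable for large n.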
A natural way to generalize the above approach to samples of size n is to take all possible sub-samples of size 2 and take the average of the halved differences, i.e., (x_{2:2} − x_{1:2})/2:

l_2 = \frac{1}{2} \binom{n}{2}^{-1} \sum_{i>j} (x_{i:n} - x_{j:n})   (2)

Furthermore, the skewness and kurtosis are similarly obtained as:

l_3 = \frac{1}{3} \binom{n}{3}^{-1} \sum_{i>j>k} (x_{i:n} - 2x_{j:n} + x_{k:n})   (3)

l_4 = \frac{1}{4} \binom{n}{4}^{-1} \sum_{i>j>k>l} (x_{i:n} - 3x_{j:n} + 3x_{k:n} - x_{l:n})   (4)

Hosking (1990) showed that the first few L-moments follow from PWMs via:

l_1 = b_0, \quad l_2 = 2b_1 - b_0, \quad l_3 = 6b_2 - 6b_1 + b_0, \quad l_4 = 20b_3 - 30b_2 + 12b_1 - b_0   (5)

The coefficients in Eqn. (5) are those of the shifted Legendre polynomials. The first L-moment is the sample mean, a measure of location. The second L-moment is (a multiple of) Gini's mean difference statistic (Johnson et al., 1994), a measure of the dispersion of the data values about their mean. By dividing the higher-order L-moments by the dispersion measure, we obtain the L-moment ratios:

t_r = l_r / l_2, \quad r = 3, 4, \ldots   (6)

These are dimensionless quantities, independent of the units of measurement of the data; t_3 is a measure of skewness and t_4 is a measure of kurtosis – these are respectively the L-skewness and L-kurtosis. They take values between −1 and +1 (exception: some even-order L-moment ratios computed from very small samples can be less than −1). The L-moment analogue of the coefficient of variation (standard deviation divided by the mean) is the L-CV, defined by:

t = l_2 / l_1   (7)

It takes values between 0 and 1 (if X ≥ 0).

3 L-MOMENTS FOR PROBABILITY DISTRIBUTIONS

For a probability distribution with cumulative distribution function F(x), probability weighted moments are defined by:

\beta_r = E[X\{F(X)\}^r] = \int x \, \{F(x)\}^r \, dF(x)   (8)

L-moments are defined in terms of probability weighted moments, analogously to the sample L-moments:

\lambda_1 = \beta_0, \quad \lambda_2 = 2\beta_1 - \beta_0, \quad \lambda_3 = 6\beta_2 - 6\beta_1 + \beta_0, \quad \lambda_4 = 20\beta_3 - 30\beta_2 + 12\beta_1 - \beta_0   (9)

L-moment ratios are defined by:

\tau_r = \lambda_r / \lambda_2, \quad r = 3, 4, \ldots   (10)

The L-moment analogue of the coefficient of variation is the L-CV, defined by:

\tau = \lambda_2 / \lambda_1   (11)

Examples (for a complete overview, see the Appendix):

Uniform (rectangular) distribution on (0,1): λ_1 = 1/2, λ_2 = 1/6, τ_3 = τ_4 = 0. Indeed, note that β_r = \int_0^1 u^{r+1} \, du = 1/(r+2).

Normal distribution with mean 0 and variance 1: λ_1 = 0, λ_2 = π^{−1/2}, τ_3 = 0, τ_4 ≈ 0.1226.

The theory of L-moments has been applied in numerous papers. The following work is worth mentioning: Rao and Hamed (1997), Duan et al. (1998), Ben-Zvi and Azmon (1997), Van Gelder and Neykov (1998), Demuth and Kuells (1997), and Pearson et al. (1991).
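The values quoted for the standard normal distribution (λ_2 = π^{−1/2} and τ_4 ≈ 0.1226) can be checked numerically from the PWM definition β_r = E[X{F(X)}^r]. The Python sketch below (ours, not from the paper) evaluates the β_r by trapezoidal integration on a fine grid:

```python
import math
import numpy as np

def trapezoid(y, x):
    # plain trapezoidal rule (avoids NumPy version differences around np.trapz)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# beta_r = E[X F(X)^r] for the standard normal, integrated over x in [-8, 8]
x = np.linspace(-8.0, 8.0, 16001)
pdf = np.exp(-x**2 / 2.0) / math.sqrt(2.0 * math.pi)
cdf = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in x])
beta = [trapezoid(x * cdf**r * pdf, x) for r in range(4)]

lam2 = 2*beta[1] - beta[0]                               # should equal pi**-0.5
lam4 = 20*beta[3] - 30*beta[2] + 12*beta[1] - beta[0]
tau4 = lam4 / lam2                                       # approx 0.1226
```

The truncation at |x| = 8 is harmless because the integrand decays like the normal density; with this grid the computed λ_2 matches π^{−1/2} to better than 10^{−5}.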
4 RELATION OF L-MOMENTS WITH ORDER STATISTICS

Consider a sample consisting of n observations {x_1, x_2, …, x_n} randomly drawn from a statistical population. If the sample values are rearranged in non-decreasing order of magnitude, x_{1:n} ≤ x_{2:n} ≤ … ≤ x_{n:n}, then the r-th member x_{r:n} of this new sequence is called the r-th order statistic of the sample (Harter, 1969). When all the sample values come from a common parent population with cumulative distribution function F(x), the distribution (CDF) of the r-th order statistic, Prob[X_{r:n} ≤ x], is the probability that at least r observations in a sample of n do not exceed a fixed value x. A sample randomly drawn from a distribution is analogous to a Bernoulli experiment in which success is defined by the sampled value being less than the threshold x. Hence:

F_{r:n}(x) = \sum_{i=r}^{n} \binom{n}{i} \{F(x)\}^i \{1 - F(x)\}^{n-i}   (15)

in which:

\binom{n}{i} = \frac{n!}{i!\,(n-i)!}

So, the expression (15) can be written in terms of an incomplete Beta function as:

F_{r:n}(x) = I_{F(x)}(r, \, n - r + 1)   (16)

The probability density function of X_{r:n} is given by the first derivative of Eqn. (16):

f_{r:n}(x) = \frac{n!}{(r-1)!\,(n-r)!} \{F(x)\}^{r-1} \{1 - F(x)\}^{n-r} f(x)   (17)

Now, the expected value of the r-th order statistic can be obtained as:

E[X_{r:n}] = \int_{-\infty}^{\infty} x \, f_{r:n}(x) \, dx   (18)

Substituting Eqn. (17) into (18) and introducing the transformation u = F(x), or x = F^{-1}(u), 0 ≤ u ≤ 1, leads to:

E[X_{r:n}] = \frac{n!}{(r-1)!\,(n-r)!} \int_0^1 x(u) \, u^{r-1} (1-u)^{n-r} \, du   (19)

Note that x(u) denotes the quantile function of a random variable. The expectation of the maximum and minimum of a sample of size n can be easily obtained from Eqn. (19) by setting r = n and r = 1, respectively.
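Eqn. (19) is easy to check numerically. For the uniform distribution on (0,1) the quantile function is x(u) = u and the exact result is E[X_{r:n}] = r/(n+1). A Python sketch (ours, illustrative; the function name is an assumption):

```python
import math
import numpy as np

def expected_order_stat(quantile, r, n, m=20001):
    # Eq. (19): E[X_{r:n}] = n!/((r-1)!(n-r)!) * int_0^1 x(u) u^(r-1) (1-u)^(n-r) du
    u = np.linspace(0.0, 1.0, m)
    c = math.factorial(n) / (math.factorial(r - 1) * math.factorial(n - r))
    y = quantile(u) * u**(r - 1) * (1.0 - u)**(n - r)
    return c * float(np.sum((y[1:] + y[:-1]) * np.diff(u)) / 2.0)

# Uniform(0,1): quantile is the identity, exact answer r/(n+1)
e_max = expected_order_stat(lambda u: u, r=5, n=5)   # expected maximum of 5 draws
e_min = expected_order_stat(lambda u: u, r=1, n=5)   # expected minimum of 5 draws
```

Passing any other quantile function (e.g., one built from a fitted distribution) gives the corresponding expected order statistics, which is exactly the link between L-moments and order statistics exploited in the remainder of the paper.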