Swedish Translation for the ISI Multilingual Glossary of Statistical Terms, Prepared by Jan Enger, Bernhard Huitfeldt, Ulf Jorner, and Jan Wretman

Total Pages: 16

File Type: PDF, Size: 1020 KB

Swedish translation for the ISI Multilingual Glossary of Statistical Terms, prepared by Jan Enger, Bernhard Huitfeldt, Ulf Jorner, and Jan Wretman. Finally revised version, January 2008. For principles, see the appendix.
Recommended publications
  • Circularly-Symmetric Complex Normal Ratio Distribution for Scalar Transmissibility Functions. Part II: Probabilistic Model and Validation
    Wang-Ji Yan and Wei-Xin Ren, Department of Civil Engineering, Hefei University of Technology, Hefei, Anhui 23009, China. Mechanical Systems and Signal Processing 80 (2016) 78–98. Received 29 May 2015; revised 3 February 2016; accepted 27 February 2016; available online 23 March 2016. Keywords: transmissibility; uncertainty quantification; signal processing; statistics; structural dynamics; ambient vibration.
    Abstract: In Part I of this study, some new theorems, corollaries, and lemmas on the circularly-symmetric complex normal ratio distribution were mathematically proved. This Part II paper provides a rigorous treatment of the statistical properties of raw scalar transmissibility functions at an arbitrary frequency line. On the basis of the statistics of raw FFT coefficients and the circularly-symmetric complex normal ratio distribution, explicit closed-form probabilistic models are established for both multivariate and univariate scalar transmissibility functions. Remarks on the independence of transmissibility functions at different frequency lines and on the shape of the probability density function (PDF) in the univariate case are also presented. The statistical structures of the probabilistic models are concise, compact, and easy to implement with low computational effort. They hold for general stationary vector processes, whether Gaussian or non-Gaussian. The accuracy of the proposed models is verified using a numerical example as well as field test data from a high-rise building and a long-span cable-stayed bridge.
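As a small illustration of the quantity this paper models, the sketch below (all signals simulated, amplitudes and delay chosen arbitrarily) forms a raw scalar transmissibility as the frequency-line-by-frequency-line ratio of the raw FFT coefficients of two response channels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two response channels of a linear system driven by the same
# broadband excitation; channel 2 is a scaled, delayed copy plus noise.
n = 4096
excitation = rng.standard_normal(n)
x_i = excitation + 0.05 * rng.standard_normal(n)
x_o = 0.5 * np.roll(excitation, 3) + 0.05 * rng.standard_normal(n)

# Raw scalar transmissibility at each frequency line: the ratio of the
# raw FFT coefficients of the two channels (a complex-valued quantity).
X_i = np.fft.rfft(x_i)
X_o = np.fft.rfft(x_o)
T = X_o / X_i

# With low noise, |T| should hover near the 0.5 amplitude ratio.
print(np.median(np.abs(T)))
```

The paper's contribution is the closed-form distribution of such ratios, which a ratio of two (approximately) circularly-symmetric complex normal FFT coefficients follows.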
  • A More Representative "Best Representative Value" for Daily Total Column Ozone Reporting
    Andrew R. D. Smedley, John S. Rimmer, and Ann R. Webb, Centre for Atmospheric Science, University of Manchester, Manchester, M13 9PL, UK. Correspondence to: Andrew R. D. Smedley ([email protected]). Atmos. Meas. Tech., 10, 4697–4704, 2017, https://doi.org/10.5194/amt-10-4697-2017. © Author(s) 2017; distributed under the Creative Commons Attribution 4.0 License. Received 8 June 2017; discussion started 13 July 2017; revised 20 October 2017; accepted 20 October 2017; published 5 December 2017.
    Abstract: Long-term trends of total column ozone (TCO), assessments of stratospheric ozone recovery, and satellite validation are underpinned by a reliance on daily "best representative values" from Brewer spectrophotometers and other ground-based ozone instruments. In turn, reporting of these daily total column ozone values to the World Ozone and Ultraviolet Radiation Data Centre (WOUDC) has traditionally been predicated upon a simple choice between direct sun (DS) and zenith sky (ZS) observations. For mid- and high-latitude monitoring sites impacted by cloud cover, we discuss the potential deficiencies of this approach in terms of its rejection of otherwise valid observations and its capability to evenly sample throughout the day.
    Introduction (excerpt): Global ground-based monitoring of total column ozone (TCO) has relied on the international network of Brewer spectrophotometers since they were first developed in the 1980s (Kerr et al., 1981), which has expanded the number of sites and measurement possibilities from their still-operating predecessor instrument, the Dobson spectrophotometer. Together these networks provide validation of satellite-retrieved total column ozone as well as instantaneous point measurements that have value for near-real-time low-ozone alerts, particularly when sited near population centres.
  • Basic Econometrics / Statistics Statistical Distributions: Normal, T, Chi-Sq, & F
    Course: Basic Econometrics (HC43) / Statistics, B.A. Hons Economics, Semester IV / Semester III, Delhi University. Course instructor: Siddharth Rathore, Assistant Professor, Economics Department, Gargi College.
    Appendix C: Some Important Probability Distributions. In Appendix B we noted that a random variable (r.v.) can be described by a few characteristics, or moments, of its probability function (PDF or PMF), such as the expected value and variance. This, however, presumes that we know the PDF of that r.v., which is a tall order since there are all kinds of random variables. In practice, however, some random variables occur so frequently that statisticians have determined their PDFs and documented their properties. For our purpose, we will consider only those PDFs that are of direct interest to us. But keep in mind that there are several other PDFs that statisticians have studied, which can be found in any standard statistics textbook. In this appendix we will discuss the following four probability distributions: (1) the normal distribution, (2) the t distribution, (3) the chi-square (χ²) distribution, and (4) the F distribution. These probability distributions are important in their own right, but for our purposes they are especially important because they help us find the probability distributions of estimators (or statistics), such as the sample mean and sample variance. Recall that estimators are random variables. Equipped with that knowledge, we will be able to draw inferences about their true population values.
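The four distributions in this appendix all grow out of the normal, and that construction can be checked by simulation. The sketch below (sample sizes and degrees of freedom chosen for illustration) builds chi-square, t, and F variates from standard normal draws and confirms the familiar fact that the t has heavier tails than the normal:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200_000  # Monte Carlo replications

# Build the derived distributions from standard normal draws, mirroring
# how they arise in sampling theory.
z = rng.standard_normal(m)                              # N(0, 1)
chi2_5 = (rng.standard_normal((m, 5)) ** 2).sum(axis=1)  # chi-square(5): sum of 5 squared N(0,1)
t_5 = z / np.sqrt(chi2_5 / 5)                            # t(5): N(0,1) over sqrt(chi2(5)/5)
chi2_7 = (rng.standard_normal((m, 7)) ** 2).sum(axis=1)
f_5_7 = (chi2_5 / 5) / (chi2_7 / 7)                      # F(5,7): ratio of scaled chi-squares

# The t distribution has heavier tails than the normal: more mass beyond +/- 2.
print(np.mean(np.abs(z) > 2), np.mean(np.abs(t_5) > 2))
```

The simulated tail probabilities come out near 0.046 for the normal and roughly twice that for t(5), which is exactly why t-based critical values exceed normal ones in small samples.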
  • © 2013 Johannes Traa
    MULTICHANNEL SOURCE SEPARATION AND TRACKING WITH PHASE DIFFERENCES BY RANDOM SAMPLE CONSENSUS. Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical and Computer Engineering in the Graduate College of the University of Illinois at Urbana-Champaign, 2013. Adviser: Assistant Professor Paris Smaragdis.
    Abstract: Blind audio source separation (BASS) is a fascinating problem that has been tackled from many different angles. The use case of interest in this thesis is that of multiple moving and simultaneously active speakers in a reverberant room. This is a common situation, for example, in social gatherings. We human beings have the remarkable ability to focus attention on a particular speaker while effectively ignoring the rest. This is referred to as the "cocktail party effect" and has been the holy grail of source separation for many decades. Replicating this feat in real time with a machine is the goal of BASS. Single-channel methods attempt to identify the individual speakers from a single recording. However, with the advent of hand-held consumer electronics, techniques based on microphone array processing are becoming increasingly popular. Multichannel methods record a sound field from various locations to incorporate spatial information. If the speakers move over time, we need an algorithm capable of tracking their positions in the room. For compact arrays with 1-10 cm of separation between the microphones, this can be accomplished by applying a temporal filter on estimates of the directions of arrival (DOA) of the speakers. In this thesis, we review recent work on BASS with inter-channel phase difference (IPD) features and provide extensions to the case of moving speakers.
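The IPD-to-DOA step the abstract describes can be sketched in a few lines. Below, a noise-free two-microphone setup (spacing, sample rate, and a 60-degree arrival angle are all made-up values) recovers the inter-channel delay from the slope of the phase difference and inverts it for the direction of arrival; real recordings would add noise, reverberation, and phase wrapping that RANSAC-style methods are designed to handle:

```python
import numpy as np

# Assumed setup: two microphones d = 0.05 m apart; a single source whose
# signal reaches mic 2 delayed by tau seconds, so the inter-channel phase
# difference (IPD) at frequency f is -2*pi*f*tau.
c = 343.0          # speed of sound, m/s
d = 0.05           # mic spacing, m
fs = 16_000
n = 1024
tau_true = d * np.cos(np.deg2rad(60.0)) / c   # delay for a 60-degree DOA

rng = np.random.default_rng(2)
freqs = np.fft.rfftfreq(n, 1 / fs)
X1 = np.fft.rfft(rng.standard_normal(n))      # reference channel spectrum
X2 = X1 * np.exp(-2j * np.pi * freqs * tau_true)  # delayed copy

# Estimate the delay from the IPD slope, then invert for the DOA.
ipd = np.angle(X2 * np.conj(X1))
mask = (freqs > 0) & (freqs < c / (2 * d))    # below the spatial aliasing limit
tau_hat = -np.polyfit(2 * np.pi * freqs[mask], ipd[mask], 1)[0]
doa_hat = np.degrees(np.arccos(np.clip(tau_hat * c / d, -1, 1)))
print(round(doa_hat, 1))   # → 60.0
```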
  • Wrapped Log Kumaraswamy Distribution and Its Applications
    K.K. Jose and Jisha Varghese, Department of Statistics, St. Thomas College, Palai, Arunapuram, Mahatma Gandhi University, Kottayam, Kerala 686 574, India. International Journal of Mathematics and Computer Research, ISSN 2320-7167, Volume 06, Issue 10, October 2018, pp. 1924-1930. DOI: 10.31142/ijmcr/v6i10.01. Published online 10 October 2018.
    Abstract: A new circular distribution called the Wrapped Log Kumaraswamy Distribution (WLKD) is introduced in this paper. We obtain an explicit form for the probability density function and derive expressions for the distribution function, characteristic function, and trigonometric moments. The method of maximum likelihood is used for estimation of parameters. The proposed model is also applied to a real data set on repair times, and it is established that the WLKD is better than the log Kumaraswamy distribution for modeling these data. Keywords: Log Kumaraswamy-geometric distribution, wrapped Log Kumaraswamy distribution, trigonometric moments.
    Introduction (excerpt): Kumaraswamy (1980) introduced a two-parameter distribution over the support [0, 1], called the Kumaraswamy distribution (KD), for double-bounded random processes in hydrological applications. The Kumaraswamy distribution is very similar to the Beta distribution but has the important advantage of an invertible closed-form cumulative distribution function. This distribution could also be appropriate in situations where scientists use probability distributions that have infinite lower and (or) upper bounds to fit data, whereas in reality the bounds are finite (e.g., scores on a test, atmospheric temperatures, hydrological data, etc.). Gupta and Kirmani (1988) discussed the connection between the non-homogeneous Poisson process (NHPP) and record values.
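The generic wrapping construction behind any "wrapped" distribution is simply reduction modulo 2π. The sketch below is not the WLKD itself: the linear part is illustrated with a lognormal for convenience, where the paper uses the log Kumaraswamy; it also shows the first trigonometric moment, one of the quantities the paper derives in closed form:

```python
import numpy as np

rng = np.random.default_rng(3)

# Wrapping: draw from a distribution on the positive real line and reduce
# modulo 2*pi to obtain a circular random variable on [0, 2*pi).
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
theta = np.mod(x, 2 * np.pi)

# First trigonometric moment E[e^{i*theta}] = E[cos theta] + i E[sin theta];
# its modulus (the mean resultant length) summarizes circular concentration.
m1 = np.mean(np.exp(1j * theta))
print(abs(m1))
```

For the actual WLKD one would replace the lognormal draws with log Kumaraswamy draws; the wrapping and moment computations are unchanged.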
  • The Effect of Changing Scores for Multi-Way Tables with Open-Ended Ordered Categories
    Ayfer Ezgi Yilmaz and Tulay Saracbasi.
    Abstract: Log-linear models are used to analyze contingency tables. If the variables are ordinal or interval, the selection of score values matters, because the score values affect both the model significance and the parameter estimates. Sometimes an interval variable contains an open-ended category as its first or last class. When a variable has open-ended classes, estimates of the lowermost and/or uppermost values of the distribution must be handled carefully. In that case, the unknown values of the first and last classes can be estimated first, and then the score values can be calculated. In previous studies, the unknown boundaries were estimated using the interquartile range (IQR). In this study, we suggest the interdecile range (IDR), the interpercentile range (IPR), and the mid-distance range (MDR) as alternatives to the IQR to detect the effects of score values on model parameters. Keywords: Contingency tables, log-linear models, interval measurement, open-ended categories, scores. 2000 AMS Classification: 62H17.
    Introduction (excerpt): Categorical variables, which have a measurement scale consisting of a set of categories, are of importance in many fields, often in the medical, social, and behavioral sciences. The tables that represent these variables are called contingency tables, and log-linear model equations are applied to analyze them. Interaction, row-effects, and association parameters are strictly important for interpreting the tables. In the presence of an ordinal variable, score values should be considered. As row-effects parameters are used for nominal-ordinal tables, the association parameter is suggested for ordinal-ordinal tables.
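To make the scoring problem concrete, the toy sketch below assigns midpoint scores to five ordered income classes whose outer categories are open-ended. The boundary rule used here (extend each open end by the width of the adjacent closed class) is a simple stand-in assumption for illustration only, not the IQR/IDR/IPR/MDR estimators the paper compares:

```python
# Hypothetical classes (thousands): "< 20", "20-40", "40-60", "60-80", "> 80".
# Midpoint scores need imputed boundaries for the two open-ended classes.
inner = [20.0, 40.0, 60.0, 80.0]          # the known, closed boundaries
width = inner[1] - inner[0]               # width of an adjacent closed class
# Stand-in rule (assumption): extend each open end by one closed-class width.
bounds = [inner[0] - width] + inner + [inner[-1] + width]

scores = [(lo + hi) / 2 for lo, hi in zip(bounds[:-1], bounds[1:])]
print(scores)   # midpoint score for each of the five ordered categories
```

Swapping in a different boundary estimator changes only the `bounds` line, which is precisely the sensitivity the paper studies.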
  • Measures of Dispersion
    MEASURES OF DISPERSION (lecture notes). While measures of central tendency indicate what value of a variable is (in one sense or other) "average" or "central" or "typical" in a set of data, measures of dispersion (or variability or spread) indicate the extent to which the observed values are "spread out" around that center: how "far apart" observed values typically are from each other and therefore from some average value (in particular, the mean). Thus:
    – if all cases have identical observed values (and thereby are also identical to [any] average value), dispersion is zero;
    – if most cases have observed values that are quite "close together" (and thereby are also quite "close" to the average value), dispersion is low (but greater than zero); and
    – if many cases have observed values that are quite "far away" from many others (or from the average value), dispersion is high.
    A measure of dispersion provides a summary statistic that indicates the magnitude of such dispersion and, like a measure of central tendency, is a univariate statistic. The notes illustrate why the magnitude of dispersion around the average matters: Baltimore and Seattle have about the same mean daily temperature (about 65 degrees) but very different dispersions around that mean, and the same point holds for dispersion (inequality) around average household income and for hypothetical ideological dispersion. Because dispersion is concerned with how "close together" or "far apart" observed values are (i.e., with the magnitude of the intervals between them), measures of dispersion are defined only for interval (or ratio) variables, or in any case variables we are willing to treat as interval (like IDEOLOGY in the preceding charts).
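The Baltimore/Seattle point is easy to reproduce numerically. In the sketch below the temperature values are made up to match the notes' description (same mean of about 65 degrees, very different spread); the standard deviation and range then quantify the difference in dispersion:

```python
import statistics

# Two small samples with the same mean but different spread, echoing the
# Baltimore/Seattle temperature example (values are illustrative only).
seattle = [60, 63, 65, 67, 70]
baltimore = [40, 55, 65, 75, 90]

for name, temps in [("seattle", seattle), ("baltimore", baltimore)]:
    mean = statistics.fmean(temps)
    sd = statistics.pstdev(temps)       # population standard deviation
    spread = max(temps) - min(temps)    # range, the crudest dispersion measure
    print(name, mean, round(sd, 1), spread)
```

Both cities average 65, but Baltimore's standard deviation (about 17) dwarfs Seattle's (about 3.4): identical central tendency, very different dispersion.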
  • A Two Parameter Discrete Lindley Distribution
    Tassaddaq Hussain (Department of Statistics, Government Postgraduate College, Rawalakot, Pakistan), Muhammad Aslam (Department of Statistics, Faculty of Sciences, King Abdulaziz University, Jeddah, Saudi Arabia), and Munir Ahmad (National College of Business Administration and Economics, Lahore, Pakistan). Revista Colombiana de Estadística, January 2016, Volume 39, Issue 1, pp. 45-61. DOI: http://dx.doi.org/10.15446/rce.v39n1.55138.
    Abstract: In this article we propose and discuss a two-parameter discrete Lindley distribution. The derivation of this new model is based on a two-step methodology, i.e., mixing then discretizing, and it can be viewed as a new generalization of the geometric distribution. The proposed model proved to lose the least information when applied to a number of data sets (in over- and under-dispersed structures); the competing models, such as the Poisson, negative binomial, generalized Poisson, and discrete gamma distributions, are well-known standard discrete distributions. Its lifetime classification, kurtosis, skewness, ascending and descending factorial moments, as well as its recurrence relations, negative moments, parameter estimation via the maximum likelihood method, characterization, and discretized bivariate case are presented. Key words: Characterization, discretized version, estimation, geometric distribution, mean residual life, mixture, negative moments.
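The "discretizing" half of the mix-then-discretize construction is a standard device: given a continuous survival function S, set P(Y = k) = S(k) - S(k + 1). The sketch below applies it to the classical one-parameter Lindley distribution (whose survival function is well known) purely as an illustration; the paper's model first generalizes to two parameters via a mixing step:

```python
import math

# Survival function of the one-parameter Lindley distribution:
# S(x) = (1 + theta + theta*x) / (1 + theta) * exp(-theta*x), x >= 0.
def lindley_survival(x: float, theta: float) -> float:
    return (1 + theta + theta * x) / (1 + theta) * math.exp(-theta * x)

# Discretization step: P(Y = k) = S(k) - S(k + 1) for k = 0, 1, 2, ...
def discrete_pmf(k: int, theta: float) -> float:
    return lindley_survival(k, theta) - lindley_survival(k + 1, theta)

theta = 0.8
pmf = [discrete_pmf(k, theta) for k in range(50)]
print(sum(pmf))   # the probabilities telescope to S(0) - S(50), essentially 1
```

Because the sum telescopes, the discretized probabilities are automatically non-negative and sum to one, whatever continuous model is plugged in.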
  • Methods to Calculate Uncertainty in the Estimated Overall Effect Size from a Random-Effects Meta-Analysis
    Areti Angeliki Veroniki (Li Ka Shing Knowledge Institute, St. Michael's Hospital, 209 Victoria Street, Toronto, Ontario, M5B 1T8, Canada, and Department of Primary Education, School of Education, University of Ioannina, Ioannina, Greece); Dan Jackson (MRC Biostatistics Unit, Institute of Public Health, Cambridge CB2 0SR, UK); Ralf Bender (Department of Medical Biometry, Institute for Quality and Efficiency in Health Care (IQWiG), Im Mediapark 8, 50670 Cologne, Germany); Oliver Kuss (Institute for Biometrics and Epidemiology, German Diabetes Center, Leibniz Institute for Diabetes Research at Heinrich Heine University, 40225 Düsseldorf, Germany, and Institute of Medical Statistics, Heinrich-Heine-University, Medical Faculty, Düsseldorf, Germany); Dean Langan (Institute of Child Health, UCL, London, WC1E 6BT, UK); Julian PT Higgins (Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK); Guido Knapp; Georgia Salanti. Accepted author manuscript; cite as DOI: 10.1002/jrsm.1319.
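As background for the methods this paper compares, the sketch below computes the most common baseline: a DerSimonian-Laird random-effects pooled estimate with a normal-approximation Wald confidence interval. The effect sizes and within-study variances are made-up numbers; the paper's subject is precisely the alternatives (and corrections) to this simple interval:

```python
import math

# Made-up study effect sizes y_i with within-study variances v_i.
y = [0.30, 0.10, 0.45, 0.25, 0.05]
v = [0.010, 0.020, 0.015, 0.030, 0.025]

w = [1 / vi for vi in v]                                  # fixed-effect weights
y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)                   # DL between-study variance

w_re = [1 / (vi + tau2) for vi in v]                      # random-effects weights
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)  # pooled effect
se = math.sqrt(1 / sum(w_re))
ci = (mu - 1.96 * se, mu + 1.96 * se)                     # Wald 95% interval
print(round(mu, 3), tuple(round(x, 3) for x in ci))
```

Methods such as Hartung-Knapp replace the 1.96 normal quantile and the standard error formula; the weighting structure above stays the same.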
  • Multivariate Statistical Functions in R
    Michail T. Tsagris, [email protected], College of Engineering and Technology, American University of the Middle East, Egaila, Kuwait. Version 6.1, Athens, Nottingham and Abu Halifa (Kuwait), 31 October 2014.
    Contents:
    1. Mean vectors: Hotelling's one-sample T² test; Hotelling's two-sample T² test; two two-sample tests without assuming equality of the covariance matrices; MANOVA without assuming equality of the covariance matrices.
    2. Covariance matrices: one-sample covariance test; multi-sample covariance matrices (log-likelihood ratio test; Box's M test).
    3. Regression, correlation and discriminant analysis: correlation (correlation coefficient confidence intervals and hypothesis testing using Fisher's transformation; non-parametric bootstrap hypothesis testing for a zero correlation coefficient; hypothesis testing for two correlation coefficients); regression (classical multivariate regression; k-NN regression; kernel regression; choosing the bandwidth in kernel regression in a very simple way; principal components regression; choosing the number of components in principal component regression; the spatial median and spatial median regression; multivariate ridge regression); discriminant analysis (Fisher's linear discriminant function).
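The first entry in the contents, Hotelling's one-sample T² test, is compact enough to sketch directly. This is a generic Python rendering of the standard statistic (the document itself provides R functions), with simulated bivariate data under the null hypothesis:

```python
import numpy as np

# One-sample Hotelling T^2: tests H0: mu = mu0 for a p-variate sample.
# Under H0 with normal data, (n - p) / (p * (n - 1)) * T^2 ~ F(p, n - p).
def hotelling_t2(x: np.ndarray, mu0: np.ndarray) -> tuple[float, float]:
    n, p = x.shape
    diff = x.mean(axis=0) - mu0
    s_inv = np.linalg.inv(np.cov(x, rowvar=False))  # inverse sample covariance
    t2 = n * diff @ s_inv @ diff
    f_stat = (n - p) / (p * (n - 1)) * t2
    return float(t2), float(f_stat)

rng = np.random.default_rng(4)
x = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=40)
t2, f_stat = hotelling_t2(x, np.array([0.0, 0.0]))
print(round(t2, 2), round(f_stat, 2))
```

Comparing `f_stat` against an F(p, n - p) critical value completes the test; a badly misspecified `mu0` inflates the statistic sharply.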
  • Probabilistic Inferences for the Sample Pearson Product Moment Correlation
    Jeffrey R. Harring and John A. Wasko, University of Maryland, College Park, MD. Journal of Modern Applied Statistical Methods, Volume 10, Issue 2, Article 8, November 2011, pp. 476-493. DOI: 10.22237/jmasm/1320120420. Available at: http://digitalcommons.wayne.edu/jmasm/vol10/iss2/8.
    Abstract: Fisher's correlation transformation is commonly used to draw inferences regarding the reliability of tests comprised of dichotomous or polytomous items. It is illustrated theoretically and empirically that omitting test length and difficulty results in inflated Type I error. An empirically unbiased correction is introduced within the transformation that is applicable under any test conditions. Key words: correlation coefficients, measurement, test characteristics, reliability, parallel forms, test equivalency.
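The baseline procedure the article critiques is the standard Fisher z-transformation interval for a sample Pearson correlation. The sketch below implements that textbook version (the r and n values are arbitrary); the article's point is that this interval, applied without accounting for test length and difficulty, inflates Type I error:

```python
import math

# Textbook Fisher z confidence interval for a sample correlation r from n pairs:
# transform, build a normal interval on the z scale, back-transform.
def fisher_ci(r: float, n: int, z_crit: float = 1.96) -> tuple[float, float]:
    z = math.atanh(r)                    # z = 0.5 * ln((1 + r) / (1 - r))
    se = 1 / math.sqrt(n - 3)            # approximate standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

lo, hi = fisher_ci(0.60, 50)
print(round(lo, 3), round(hi, 3))
```

Note the asymmetry of the interval around r = 0.60 on the correlation scale, a consequence of the nonlinear back-transformation.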
  • Principles of Statistical Inference
    In this important book, D. R. Cox develops the key concepts of the theory of statistical inference, in particular describing and comparing the main ideas and controversies over foundational issues that have rumbled on for more than 200 years. Continuing a 60-year career of contribution to statistical thought, Professor Cox is ideally placed to give the comprehensive, balanced account of the field that is now needed. The careful comparison of frequentist and Bayesian approaches to inference allows readers to form their own opinion of the advantages and disadvantages. Two appendices give a brief historical overview and the author's more personal assessment of the merits of different ideas. The content ranges from the traditional to the contemporary. While specific applications are not treated, the book is strongly motivated by applications across the sciences and associated technologies. The underlying mathematics is kept as elementary as feasible, though some previous knowledge of statistics is assumed. This book is for every serious user or student of statistics, in particular for anyone wanting to understand the uncertainty inherent in conclusions from statistical analyses.
    D. R. Cox, Nuffield College, Oxford. Cambridge University Press, 2006. Information on this title: www.cambridge.org/9780521866736. © D. R. Cox 2006.