On the Use of Relative Likelihood Ratios in Statistical Inference


Charles G. Boncelet*, Lisa M. Marvel**, Michael E. Picollelli*

*University of Delaware, Newark DE USA
**US Army Research Laboratory, APG, Aberdeen MD USA

CGB can be reached at [email protected], LMM at [email protected], and MEP at [email protected].

ABSTRACT

We ask the question, "How good are parameter estimates?", and offer criticism of confidence intervals as an answer. Instead we suggest the engineering community adopt a little known idea, that of defining plausible intervals from the relative likelihood ratio. Plausible intervals answer the question, "What range of values could plausibly have given rise to the data we have seen?" We develop a simple theorem for computing plausible intervals for a wide variety of common distributions, including the Gaussian, exponential, and Poisson, among others.

1. INTRODUCTION

We consider a basic question of statistical analysis: "How good are the parameter estimates?" Most often this question is answered with confidence intervals. We argue that confidence intervals are flawed. We propose that a little known alternative based on relative likelihood ratios be used. We term these "plausible intervals." We present a theorem on computing plausible intervals that applies to a wide range of distributions, including the Gaussian, exponential, and chi-square. A similar result holds for the Poisson. Lastly, we show how to compute plausible regions for the Gaussian when both location and scale parameters are estimated.

This work is part of a larger effort to understand why statistical and signal processing procedures do not work as well in practice as theory indicates they should. Furthermore, we strive to develop better and more robust procedures. In this work, we begin to understand why "statistically significant" results are, upon further evaluation, not as reliable as thought. One reason is that confidence intervals are too small: they do not accurately reflect the range of possible inputs that could have given rise to the observed data.

This work is not defense specific, though there are numerous defense applications of statistical inference and signal processing. We expect this work to influence a wide range of procedures beyond simple parameter estimation. For instance, Kalman filtering, spectral estimation, and hypothesis testing are potential applications.

Standard statistical procedures, including maximum likelihood estimates and confidence intervals, can be found in many textbooks, including Bickel and Doksum [1] and Kendall and Stuart [4]. Relative likelihoods have received some attention in the statistics and epidemiological literature, but little attention in the engineering literature. The best reference on relative likelihood methods is the text by Sprott [7]. One engineering reference is a recent paper by Sander and Beyerer [6].

In this paper, we adopt a "frequentist" interpretation of probability and statistical inference. Bayesian statisticians adopt a different view, one with which we have some sympathy, but that view is not explored herein. For a recent discussion of Bayesian statistics, see Jaynes [3].

2. PRELIMINARIES: LIKELIHOOD FUNCTIONS, MAXIMUM LIKELIHOOD ESTIMATES, AND CONFIDENCE INTERVALS

Consider a common estimation problem: estimating the mean of a Gaussian distribution. Let $X_1, X_2, \ldots, X_n$ be IID (independent and identically distributed) Gaussian random variables with mean $\mu$ and variance $\sigma^2$, i.e., $X_i \sim N(\mu, \sigma^2)$. For the moment, we assume we know $\sigma^2$ and seek to estimate $\mu$. The density of each $X_i$ is

$$f(x_i; \mu) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( \frac{-(x_i - \mu)^2}{2\sigma^2} \right)$$

The likelihood function of $\mu$ is

$$L(\mu; x_1^n) = f(x_1; \mu) f(x_2; \mu) \cdots f(x_n; \mu) = (2\pi\sigma^2)^{-n/2} \exp\left( \frac{-\sum_{i=1}^n (x_i - \mu)^2}{2\sigma^2} \right)$$

where we use the notation $x_1^n = x_1, x_2, \ldots, x_n$.

The maximum likelihood estimate (MLE) of $\mu$ is found by setting the first derivative of $L(\mu; x_1^n)$ to 0,

$$0 = \left. \frac{d}{d\mu} L(\mu; x_1^n) \right|_{\mu = \hat\mu}$$

The calculations are somewhat easier if we find the maximum of the log-likelihood function,

$$0 = \left. \frac{d}{d\mu} \log L(\mu; x_1^n) \right|_{\mu = \hat\mu} = \left. \frac{d}{d\mu} \left( -\frac{n}{2} \log 2\pi\sigma^2 - \frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \mu)^2 \right) \right|_{\mu = \hat\mu} = \frac{1}{\sigma^2} \left( \sum_{i=1}^n x_i - n\hat\mu \right)$$

from which we conclude the MLE of $\mu$ is the sample mean,

$$\hat\mu = \bar{x}_n = \frac{1}{n} \sum_{i=1}^n x_i$$

Just how good is the sample mean as an estimator of $\mu$? A commonly used measure of goodness is the confidence interval. Find $u(X_1^n)$ and $v(X_1^n)$ such that

$$\Pr\left( u(X_1^n) \le \mu \le v(X_1^n) \right) \ge 1 - \alpha$$

for some $\alpha > 0$. Normally, the confidence interval is selected as the minimal range $v(X_1^n) - u(X_1^n)$. For the Gaussian example, the confidence interval is

$$u(X_1^n) = \hat\mu - \frac{c\sigma}{\sqrt{n}}, \qquad v(X_1^n) = \hat\mu + \frac{c\sigma}{\sqrt{n}}$$

where $c = \Phi^{-1}(1 - \alpha/2)$ and $\Phi(\cdot)$ is the Normal distribution function. In the usual case where $\alpha = 0.05$, $c = 1.96$.
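For readers who want to try this numerically, the following minimal sketch (not from the paper; it assumes NumPy and SciPy are available, and the data values are illustrative) computes the sample mean and the known-$\sigma$ confidence interval above.

```python
import numpy as np
from scipy.stats import norm

def gaussian_mean_ci(x, sigma, alpha=0.05):
    """Confidence interval for the mean of N(mu, sigma^2), sigma known."""
    n = len(x)
    mu_hat = np.mean(x)            # MLE of mu: the sample mean
    c = norm.ppf(1 - alpha / 2)    # c = Phi^{-1}(1 - alpha/2) = 1.96 for alpha = 0.05
    hw = c * sigma / np.sqrt(n)
    return mu_hat - hw, mu_hat + hw

# Illustrative data: 100 draws from N(5, 2^2), with sigma = 2 treated as known.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)
u, v = gaussian_mean_ci(x, sigma=2.0)
print(f"95% confidence interval for mu: ({u:.3f}, {v:.3f})")
```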
3. CRITICISMS OF CONFIDENCE INTERVALS

Many criticisms of confidence intervals have been raised. Here we list a few of them:

There is considerable confusion as to what a confidence interval actually represents. The standard interpretation goes something like this: before the experiment is done, we agree to compute a sample average and a confidence interval $(u, v)$ as above. Then the probability the interval will cover $\mu$ is $1 - \alpha$. Note, however, that after the experiment is done, the $X_i$ have values, $x_i$. Then $\hat\mu$, $u$, and $v$ are numbers. Since $\mu$ is not considered to be random, we cannot even ask the question, "What is the probability $\mu$ is in the interval $(u, v)$?" $\mu$ is either in the interval or not, but the question is not within the realm of probability.

The confidence interval contains probability mass about the true mean of $1 - \alpha$ only if $\mu = \hat\mu$. In general $\mu \ne \hat\mu$, and the interval contains less mass,

$$\Phi\left( \frac{v - \mu}{\sigma} \right) - \Phi\left( \frac{u - \mu}{\sigma} \right) < 1 - \alpha$$

The confidence interval is not too helpful at predicting future values either. For instance, consider the following change to the experiment: after observing $n$ measurements we will compute a sample average and a confidence interval. Then we will make another $n$ measurements (independent of the first) and ask what is the probability the second sample mean is in the confidence interval. Let the second sample mean be denoted $\hat\mu'$. Then,

$$\Pr(u \le \hat\mu' \le v) = \Pr\left( \hat\mu - \frac{c\sigma}{\sqrt{n}} \le \hat\mu' \le \hat\mu + \frac{c\sigma}{\sqrt{n}} \right) = \Pr\left( -\frac{c\sigma}{\sqrt{n}} \le \hat\mu' - \hat\mu \le \frac{c\sigma}{\sqrt{n}} \right)$$

Since $\hat\mu$ and $\hat\mu'$ are independent $N(\mu, \sigma^2/n)$ random variables, their difference is $N(0, 2\sigma^2/n)$, so the probability is

$$\Phi(c/\sqrt{2}) - \Phi(-c/\sqrt{2}) = 2\Phi(c/\sqrt{2}) - 1$$

For example, when $\alpha = 0.05$, $c = 1.96$ and the probability evaluates to only 0.834.

We regard the latter criticism as particularly damning. The usual reason to perform statistical analysis is to determine something about future observations. After all, the current observations are already known. Parameter estimates are often useful to the extent they help inform us about future observations. Confidence intervals are misleading indicators of future values. To guarantee that $\hat\mu'$ is in the confidence interval with probability $1 - \alpha$, $c$ must increase to $1.96\sqrt{2} = 2.78$.
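The 0.834 figure is easy to check by simulation. A short sketch (not from the paper; it assumes NumPy, and the parameter values are illustrative) draws the two independent sample means directly from their $N(\mu, \sigma^2/n)$ distribution:

```python
import numpy as np

mu, sigma, n, c = 5.0, 2.0, 100, 1.96
trials = 200_000
rng = np.random.default_rng(1)

# Each sample mean of n IID N(mu, sigma^2) observations is N(mu, sigma^2/n).
mu_hat = rng.normal(mu, sigma / np.sqrt(n), size=trials)    # first experiment
mu_hat2 = rng.normal(mu, sigma / np.sqrt(n), size=trials)   # repeated experiment

hw = c * sigma / np.sqrt(n)                 # confidence interval half-width
covered = np.abs(mu_hat2 - mu_hat) <= hw    # second mean inside first interval?
print(covered.mean())                       # approx. 2*Phi(1.96/sqrt(2)) - 1 = 0.834
```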
4. RELATIVE LIKELIHOOD RATIO INTERVALS

We hope to revive an older idea that has received little attention in the engineering literature: relative likelihood ratios. A good reference is the text by Sprott [7]. The relative likelihood ratio is the following:

$$R(\theta; x_1^n) = \frac{L(\theta; x_1^n)}{\sup_\theta L(\theta; x_1^n)} = \frac{L(\theta; x_1^n)}{L(\hat\theta; x_1^n)}$$

where $\theta$ represents the unknown parameter or parameters and $\hat\theta$ is the MLE of $\theta$.

The relative likelihood ratio helps answer the question, "What values of $\theta$ could plausibly have given the data $x_1^n$ that we observed?" The relative likelihood is useful after the experiment is run, while probabilities are most useful before the experiment is run.

As an example, we consider the Gaussian example above. The unknown parameter is $\theta = \mu$ and the MLE is $\hat\theta = \hat\mu$. Compare the relative likelihood ratio to a threshold,

$$R(\theta; x_1^n) = \frac{\exp\left( -\sum_{i=1}^n (x_i - \mu)^2 / 2\sigma^2 \right)}{\exp\left( -\sum_{i=1}^n (x_i - \hat\mu)^2 / 2\sigma^2 \right)} \ge \alpha \qquad (1)$$

After taking logs and simplifying, the relation becomes

$$(\mu - \hat\mu)^2 \le \frac{2\sigma^2}{n} \log(1/\alpha) \qquad (2)$$

Solving for $\mu$ gives a relative likelihood ratio interval, which we shall refer to as a plausible interval,

$$\hat\mu - \sqrt{\frac{2\sigma^2 \log(1/\alpha)}{n}} \le \mu \le \hat\mu + \sqrt{\frac{2\sigma^2 \log(1/\alpha)}{n}}$$

or equivalently

$$\hat\mu - \frac{c\sigma}{\sqrt{n}} \le \mu \le \hat\mu + \frac{c\sigma}{\sqrt{n}} \qquad (3)$$

with $c = \sqrt{2\log(1/\alpha)}$. When $\alpha = 0.05$, $c = 2.45$. We see the plausible interval is bigger than the confidence interval. It is a more conservative measure. To reiterate, the plausible interval gives all values of $\theta$ that could have plausibly given rise to the data observed, where "plausibly" is measured by the ratio of the likelihood at $\theta$ to the maximum likelihood.

As another example, consider estimating the parameter in an exponential distribution.

Fig. 1: Graphical representation of the plausible interval for an exponential distribution. The curve $\gamma e^{1-\gamma}$, which peaks at 1.0 when $\gamma = 1$, is compared against the threshold $\alpha^{1/n}$; the crossing points $u < 1 < v$ bound the plausible interval in $\gamma$.

A similar computation applies to a discrete distribution over $k$ cells with probabilities $p_j$ and observed proportions $\hat p_j$, $j = 1, \ldots, k$. After taking logs, multiplying by $-1$, and comparing to $\alpha$, the relation reduces to

$$\sum_{j=1}^k -\hat p_j \log\frac{p_j}{\hat p_j} = \mathrm{KL}(\hat p \,\|\, p) \le -\frac{\log\alpha}{n} \qquad (5)$$
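To compare the two intervals numerically, and to find the exponential plausible interval of Fig. 1, one can use a short sketch like the following (not from the paper; it assumes NumPy and SciPy, uses illustrative values for $\hat\mu$ and $\hat\lambda$, and takes the exponential relation $R(\lambda) = (\gamma e^{1-\gamma})^n$ with $\gamma = \lambda/\hat\lambda$ as read off Fig. 1):

```python
import numpy as np
from scipy.optimize import brentq

alpha, n, sigma = 0.05, 100, 2.0
mu_hat = 5.0    # illustrative sample mean

# Gaussian mean: confidence interval vs. plausible interval (eq. 3).
for name, c in [("confidence", 1.96), ("plausible", np.sqrt(2 * np.log(1 / alpha)))]:
    hw = c * sigma / np.sqrt(n)
    print(f"{name:10s}: ({mu_hat - hw:.3f}, {mu_hat + hw:.3f})")

# Exponential rate: R(lambda) >= alpha becomes gamma * exp(1 - gamma) >= alpha**(1/n),
# where gamma = lambda / lambda_hat. Solve for the crossings u < 1 < v (cf. Fig. 1).
def g(gamma):
    return gamma * np.exp(1 - gamma) - alpha ** (1 / n)

u = brentq(g, 1e-9, 1.0)    # crossing left of the peak at gamma = 1
v = brentq(g, 1.0, 50.0)    # crossing right of the peak
lambda_hat = 0.8            # illustrative MLE, 1 / (sample mean of the data)
print(f"exponential plausible interval: ({u * lambda_hat:.3f}, {v * lambda_hat:.3f})")
```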