Maximum Likelihood Estimation in an Alpha-Power


Annals of Biostatistics & Biometric Applications
ISSN: 2641-6336. DOI: 10.33552/ABBA.2019.02.000541
Short Communication. Copyright © All rights are reserved by Clement Boateng Ampadu.
This work is licensed under Creative Commons Attribution 4.0 License. ABBA.MS.ID.000541.

Clement Boateng Ampadu*¹ and Abdulzeid Yen Anafo²
¹Department of Biostatistics, USA
²African Institute for Mathematical Sciences (AIMS), Ghana
*Corresponding author: Clement Boateng Ampadu, Department of Biostatistics, USA.
Received Date: May 26, 2019. Published Date: June 03, 2019.

Abstract

In this paper a new kind of alpha-power transformed family of distributions (APTA-F for short) is introduced. We discuss the maximum likelihood method of estimating the unknown parameters in a sub-model of this new family. The simulation study performed indicates that the method of maximum likelihood is adequate for estimating the unknown parameters in the new family. The applications indicate that the new family can be used to model real-life data in various disciplines. Finally, we propose obtaining some properties of this new family in the conclusions section of the paper.

Keywords: Maximum likelihood estimation; Monte Carlo simulation; Alpha-power transformation; Guinea pig data

Introduction

Mahdavi and Kundu [1] proposed the celebrated APT-F family of distributions with CDF

G(x; α, ξ) = (α^{F(x; ξ)} − 1) / (α − 1),

where α > 0, α ≠ 1, x ∈ ℝ, and ξ is a vector of parameters all of whose entries are positive. By modifying the above CDF, the Zubair-G family of distributions appeared in [2] with the following CDF

F(x; α, ξ) = (e^{α G(x; ξ)²} − 1) / (e^{α} − 1),

where α > 0, x ∈ ℝ, and ξ is a vector of parameters all of whose entries are positive. Subsequently, in [3] we introduced the Ampadu-G family of distributions by modifying the above CDF, to give the following CDF

F(x; λ, ξ) = (1 − e^{−λ G(x; ξ)²}) / (1 − e^{−λ}),

where λ > 0, x ∈ ℝ, and ξ is a vector of parameters all of whose entries are positive. Consequently, we introduced new variants of the APT-F family of distributions, which are recorded in Table 1 below.

Table 1: Variants of the Alpha Power Transformed Family of Distributions.

Year | Name of Distribution | Author
2019 | Ampadu APT-G | Ampadu [4]
2019 | α PT-G | Ampadu [4]
2019 | e^α PT-F Within Quantile Distribution (New) | Ampadu, to appear in [5]
2019 | APTA-F (Current Manuscript) | Ampadu

The rest of this paper is organized as follows. In Section 2 we introduce and illustrate the new family. In Section 3 we derive the quantile function of the APTA-F family of distributions. In Section 4 we discuss approximate random sampling from the APTA-F family of distributions, and maximum likelihood estimation in Section 5. Section 6 presents simulation results associated with the APTA-F family of distributions, assuming F follows the Weibull distribution as defined in Section 2. Section 7 discusses the usefulness of the APTA-F family of distributions in fitting real-life data. Section 8 is devoted to the conclusions.

The New Family

Definition 2.1. Let x ∈ ℝ, α ∈ ℝ, and let F(x; ξ) be a baseline CDF all of whose parameters are in the vector ξ. We say a random variable X follows the alpha-power transformed distribution of the Ampadu type (APTA-F for short) if the CDF is given by

F(x; ξ) e^{−α F(x; ξ)} / e^{−α}.

If the baseline distribution is given by the Weibull distribution with CDF

1 − e^{−(x/b)^a}

for x, a, b > 0, then we have the following from the above definition.

Proposition 2.2. The CDF of the APTA-Weibull distribution is given by

(1 − e^{−(x/b)^a}) e^{α e^{−(x/b)^a}},

where x, a, b > 0 and α ∈ ℝ.

Notation 2.3. If a random variable V follows the APTA-Weibull distribution, we write V ~ APTAW(a, b, α) (Figure 1).

Proposition 2.4. The PDF of the alpha-power transformed distribution of the Ampadu type is given by

e^{α − α F(x; ξ)} (1 − α F(x; ξ)) f(x; ξ),

where x ∈ ℝ, α ∈ ℝ, and F(x; ξ) and f(x; ξ) are the CDF and PDF, respectively, of some baseline distribution all of whose parameters are in the vector ξ (Figure 2).
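The CDF of Proposition 2.2 and the PDF of Proposition 2.4 are straightforward to evaluate numerically. The following sketch is our illustration, not part of the paper: it assumes numpy, and the function names `aptaw_cdf` and `aptaw_pdf` are ours.

```python
import numpy as np

def aptaw_cdf(x, a, b, alpha):
    # APTAW(a, b, alpha) CDF: F(x) * exp(-alpha*F(x)) / exp(-alpha)
    # with Weibull baseline F(x) = 1 - exp(-(x/b)**a)
    F = 1.0 - np.exp(-(x / b) ** a)
    return F * np.exp(alpha * (1.0 - F))

def aptaw_pdf(x, a, b, alpha):
    # PDF from Proposition 2.4: exp(alpha*(1-F)) * (1 - alpha*F) * f,
    # where f is the Weibull baseline density
    F = 1.0 - np.exp(-(x / b) ** a)
    f = (a / b) * (x / b) ** (a - 1.0) * np.exp(-(x / b) ** a)
    return np.exp(alpha * (1.0 - F)) * (1.0 - alpha * F) * f
```

A quick sanity check is that the CDF tends to 1 for large x and that its numerical derivative matches the PDF.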
The Quantile

Theorem 3.1. Let Q_F(·) = F^{−1}(·) denote the quantile function of some baseline distribution with CDF F and PDF f, let α ∈ ℝ, and let 0 < u < 1. The quantile function of the alpha-power transformed distribution of the Ampadu type is given by

Q_F( −W(−α e^{−α} u) / α ),

where W(z) := ProductLog(z) gives the principal solution for w in z = w e^{w}.

Proof. Let 0 < u < 1. Since Q_F(·) = F^{−1}(·), the quantile function can be obtained by solving the following equation for y:

F(y) e^{−α F(y)} / e^{−α} = u.

Equivalently, F(y) e^{−α F(y)} = u e^{−α}. Setting w = −α F(y) gives w e^{w} = −α u e^{−α}, so that w = W(−α u e^{−α}) and hence F(y) = −W(−α u e^{−α}) / α. Applying Q_F yields the stated quantile function.

Figure 1: The CDF of APTAW (1.28352800, 48.41833872, -0.07850413) fitted to the empirical distribution of the data on patients with breast cancer, Section 5.2 [6].

Approximate Random Number Generation

It is well known that the principal solution for w in z = w e^{w}, denoted W(z) := ProductLog(z), admits the following Taylor series expansion [7]:

W(z) = ∑_{n=1}^{∞} ((−n)^{n−1} / n!) z^{n} = z − z² + (3/2) z³ − (8/3) z⁴ + (125/24) z⁵ − ⋯

Using the first term of this series to approximate W(z) := ProductLog(z) gives us a way to approximate a random sample from the APTA-F family of distributions. In particular, if u ~ U(0, 1), that is, u is a uniform random variable, then an (approximate) random sample from the APTA-F family of distributions can be obtained via

X = Q_F(e^{−α} u),

where Q_F(·) = F^{−1}(·) denotes the quantile function of some baseline distribution with CDF F and PDF f, and α ∈ ℝ.

Parameter Estimation

In this section, we obtain the maximum likelihood estimators (MLEs) for the parameters of the APTA-F family of distributions. For this, let X₁, X₂, …, X_n be a random sample of size n from the APTA-F family of distributions.
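Theorem 3.1 and the first-order approximation above can be sketched for the Weibull baseline as follows. This is our illustration, not the paper's code: it assumes `scipy.special.lambertw` for the principal ProductLog branch (which returns a complex value whose real part is taken), the exact quantile assumes α ≠ 0, and the function names are ours.

```python
import numpy as np
from scipy.special import lambertw

def aptaw_quantile(u, a, b, alpha):
    # Exact quantile (Theorem 3.1): F = -W(-alpha*u*exp(-alpha))/alpha,
    # then invert the Weibull baseline Q_F(p) = b*(-log(1-p))**(1/a).
    # Assumes alpha != 0 (alpha = 0 reduces to the baseline quantile).
    F = -np.real(lambertw(-alpha * u * np.exp(-alpha))) / alpha
    return b * (-np.log(1.0 - F)) ** (1.0 / a)

def aptaw_sample_approx(n, a, b, alpha, rng=None):
    # Approximate sampling using W(z) ~ z (first Taylor term):
    # X = Q_F(exp(-alpha)*u). Note exp(-alpha)*u must stay in (0, 1),
    # which always holds for alpha >= 0.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    return b * (-np.log(1.0 - np.exp(-alpha) * u)) ** (1.0 / a)
```

As a check, feeding the exact quantile back through the CDF of Proposition 2.2 recovers u.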
The likelihood function from Proposition 2.4 is given by

L = ∏_{i=1}^{n} e^{α − α F(x_i; ξ)} (1 − α F(x_i; ξ)) f(x_i; ξ).

Figure 2: The PDF of APTAW (1.28352800, 48.41833872, -0.07850413) fitted to the histogram of the data on patients with breast cancer, Section 5.2 [6].

From the above, the log-likelihood function is given by

ln L = ∑_{i=1}^{n} α(1 − F(x_i; ξ)) + ∑_{i=1}^{n} ln(1 − α F(x_i; ξ)) + ∑_{i=1}^{n} ln f(x_i; ξ).

The MLEs of α and ξ can be obtained by maximizing the equation immediately above. The derivatives of the equation immediately above with respect to the unknown parameters are given as follows:

∂ln L/∂α = ∑_{i=1}^{n} (1 − F(x_i; ξ)) + ∑_{i=1}^{n} F(x_i; ξ) / (α F(x_i; ξ) − 1),

∂ln L/∂ξ = −α ∑_{i=1}^{n} ∂F(x_i; ξ)/∂ξ + ∑_{i=1}^{n} α (∂F(x_i; ξ)/∂ξ) / (α F(x_i; ξ) − 1) + ∑_{i=1}^{n} (∂f(x_i; ξ)/∂ξ) / f(x_i; ξ).

Now solving the following system for α and ξ gives the maximum likelihood estimators, α̂ and ξ̂, of the unknown parameters:

∂ln L/∂α = 0 and ∂ln L/∂ξ = 0.

Figure 3: Decreasing MSE of a for increasing sample size.
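In practice the score equations need not be solved analytically; the log-likelihood can be maximized numerically. Below is a minimal sketch for the APTA-Weibull sub-model, assuming scipy's Nelder-Mead optimizer; the function names, starting values, and optimizer settings are our choices, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def aptaw_negloglik(theta, x):
    # Negative log-likelihood of APTA-Weibull:
    # ln L = sum alpha*(1-F) + sum ln(1 - alpha*F) + sum ln f
    a, b, alpha = theta
    if a <= 0 or b <= 0:
        return np.inf
    z = (x / b) ** a
    F = 1.0 - np.exp(-z)
    inner = 1.0 - alpha * F
    if np.any(inner <= 0):          # density must stay nonnegative
        return np.inf
    logf = np.log(a / b) + (a - 1.0) * np.log(x / b) - z
    return -np.sum(alpha * (1.0 - F) + np.log(inner) + logf)

def fit_aptaw(x):
    # Crude but serviceable starting values: unit shape, sample-mean scale
    start = np.array([1.0, float(np.mean(x)), 0.0])
    res = minimize(aptaw_negloglik, start, args=(x,), method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-8})
    return res.x
```

Fitting a moderately large Weibull sample (the α = 0 sub-case) should return estimates near the generating parameters.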
Simulation Studies

In this section, a Monte Carlo simulation study is carried out to assess the performance of the estimation method, when the baseline distribution is defined as in Section 2.

Simulation Study One

Approximate samples of sizes 200, 350, 500, and 700 are drawn from the APTA-Weibull distribution. The approximate samples have been drawn for (a, b, α) = (1.3, 48.4, 0) using

X = b(−log(1 − e^{−α} u))^{1/a},

and the maximum likelihood estimators for the parameters a, b, and α are obtained. The procedure has been repeated 200 times, and the mean and mean square error for the estimates are computed; the results are summarized in Table 2 below.

Table 2: Result of Simulation Study.

Parameter a
Sample Size | Average Estimate | MSE
200 | 1.276885 | 0.037847
350 | 1.269485 | 0.019242
500 | 1.261209 | 0.021077
700 | 1.268349 | 0.016317

Parameter b
Sample Size | Average Estimate | MSE
200 | 48.55782 | 198.7113
350 | 48.73268 | 150.1065
500 | 48.3844 | 140.3203
700 | 48.10531 | 123.0901

Parameter α
Sample Size | Average Estimate | MSE
200 | -0.16417 | 0.436073
350 | -0.10479 | 0.377691
500 | -0.08719 | 0.311987
700 | -0.06227 | 0.213638

From Table 2 above, we find that the simulated estimates are close to the true values of the parameters, and hence the estimation method is adequate. We have also observed that the estimated mean square errors (MSEs) consistently decrease with increasing sample size, as seen in the Figures below.

Figure 4: Decreasing MSE of b for increasing sample size.

Figure 5: Decreasing MSE of α for increasing sample size.

Simulation Study Two

Approximate samples of sizes 400, 550, 700, and 900 are drawn from the APTA-Weibull distribution. The approximate samples have been drawn for (a, b, α) = (1.3, 1.3, 0) using

X = b(−log(1 − e^{−α} u))^{1/a},

and the maximum likelihood estimators for the parameters a, b, and α are obtained. The procedure has been repeated 400 times, and the bias and root mean square error for the estimates are computed; the results are summarized in Table 3 below.

From Table 3 above, we find that the estimated root mean square errors (RMSEs) consistently decrease with increasing sample size, as seen in the Figures below.
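The simulation design above can be reproduced in outline as follows. This is a self-contained sketch under our own choices (optimizer, starting values near the truth for speed, and a reduced replication count); the paper itself uses 200 repetitions per sample size.

```python
import numpy as np
from scipy.optimize import minimize

def sample_aptaw(n, a, b, alpha, rng):
    # Approximate sampler: X = Q_F(exp(-alpha)*u) with Weibull baseline
    u = rng.uniform(size=n)
    return b * (-np.log(1.0 - np.exp(-alpha) * u)) ** (1.0 / a)

def negloglik(theta, x):
    # Negative APTA-Weibull log-likelihood (as in Section 5)
    a, b, alpha = theta
    if a <= 0 or b <= 0:
        return np.inf
    z = (x / b) ** a
    F = 1.0 - np.exp(-z)
    inner = 1.0 - alpha * F
    if np.any(inner <= 0):
        return np.inf
    logf = np.log(a / b) + (a - 1.0) * np.log(x / b) - z
    return -np.sum(alpha * (1.0 - F) + np.log(inner) + logf)

def monte_carlo(n, reps, true=(1.3, 48.4, 0.0), seed=1):
    # Repeatedly sample, refit, and report the mean estimate and MSE
    rng = np.random.default_rng(seed)
    fits = []
    for _ in range(reps):
        x = sample_aptaw(n, *true, rng)
        res = minimize(negloglik, np.asarray(true) + 0.1, args=(x,),
                       method="Nelder-Mead", options={"maxiter": 2000})
        fits.append(res.x)
    fits = np.asarray(fits)
    mean = fits.mean(axis=0)
    mse = ((fits - np.asarray(true)) ** 2).mean(axis=0)
    return mean, mse
```

Running this over the sample sizes 200, 350, 500, and 700 with 200 repetitions would mirror Table 2; the sketch below uses a small replication count only to keep the check fast.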