Bayesian Analysis with Python

Total pages: 16

File type: PDF, size: 1020 KB

Bayesian Analysis with Python

Unleash the power and flexibility of the Bayesian framework

Osvaldo Martin

BIRMINGHAM - MUMBAI

Copyright © 2016 Packt Publishing. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: November 2016
Production reference: 1211116

Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK.
ISBN 978-1-78588-380-4
www.packtpub.com

Credits

Author: Osvaldo Martin
Reviewer: Austin Rochford
Commissioning Editor: Veena Pagare
Acquisition Editor: Tushar Gupta
Content Development Editor: Aishwarya Pandere
Technical Editor: Suwarna Patil
Copy Editors: Safis Editing, Vikrant Phadke
Project Coordinator: Nidhi Joshi
Proofreader: Safis Editing
Indexer: Mariammal Chettiyar
Graphics: Disha Haria
Production Coordinator: Nilesh Mohite
Cover Work: Nilesh Mohite

About the Author

Osvaldo Martin is a researcher at the National Scientific and Technical Research Council (CONICET), the main organization in charge of the promotion of science and technology in Argentina. He has worked on structural bioinformatics and computational biology problems, especially on how to validate structural protein models. He has experience in using Markov chain Monte Carlo methods to simulate molecules and loves to use Python to solve data analysis problems. He has taught courses on structural bioinformatics, Python programming, and, more recently, Bayesian data analysis. Python and Bayesian statistics have transformed the way he looks at science and thinks about problems in general. Osvaldo was motivated to write this book to help others develop probabilistic models with Python, regardless of their mathematical background. He is an active member of the PyMOL community (a C/Python-based molecular viewer), and recently he has been making small contributions to the probabilistic programming library PyMC3.

I would like to thank my wife, Romina, for her support while writing this book and, in general, for her support in all my projects, especially the unreasonable ones. I also want to thank Walter Lapadula, Juan Manuel Alonso, and Romina Torres-Astorga for providing invaluable feedback and suggestions on my drafts. A special thanks goes to the core developers of PyMC3. This book was possible only because of the dedication, love, and hard work they have put into PyMC3. I hope this book contributes to the spread and adoption of this great library.

About the Reviewer

Austin Rochford is a principal data scientist at Monetate Labs, where he develops products that allow retailers to personalize their marketing across billions of events a year.
He is a mathematician by training and is a passionate advocate of Bayesian methods.

www.PacktPub.com

eBooks, discount offers, and more

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

https://www.packtpub.com/mapt

Get the most in-demand software skills with Mapt. Mapt gives you full access to all Packt books and video courses, as well as industry-leading tools to help you plan your personal development and advance your career.

Why subscribe?

• Fully searchable across every book published by Packt
• Copy and paste, print, and bookmark content
• On demand and accessible via a web browser

Table of Contents

Preface

Chapter 1: Thinking Probabilistically - A Bayesian Inference Primer
Statistics as a form of modeling
Exploratory data analysis
Inferential statistics
Probabilities and uncertainty
Probability distributions
Bayes' theorem and statistical inference
Single parameter inference
The coin-flipping problem
The general model
Choosing the likelihood
Choosing the prior
Getting the posterior
Computing and plotting the posterior
Influence of the prior and how to choose one
Communicating a Bayesian analysis
Model notation and visualization
Summarizing the posterior
Highest posterior density
Posterior predictive checks
Installing the necessary Python packages
Summary
Exercises

Chapter 2: Programming Probabilistically – A PyMC3 Primer
Probabilistic programming
Inference engines
Non-Markovian methods
Markovian methods
PyMC3 introduction
Coin-flipping, the computational approach
Model specification
Pushing the inference button
Diagnosing the sampling process
Summarizing the posterior
Posterior-based decisions
ROPE
Loss functions
Summary
Keep reading
Exercises

Chapter 3: Juggling with Multi-Parametric and Hierarchical Models
Nuisance parameters and marginalized distributions
Gaussians, Gaussians, Gaussians everywhere
Gaussian inferences
Robust inferences
Student's t-distribution
Comparing groups
The tips dataset
Cohen's d
Probability of superiority
Hierarchical models
Shrinkage
Summary
Keep reading
Exercises

Chapter 4: Understanding and Predicting Data with Linear Regression Models
Simple linear regression
The machine learning connection
The core of linear regression models
Linear models and high autocorrelation
Modifying the data before running
Changing the sampling method
Interpreting and visualizing the posterior
Pearson correlation coefficient
Pearson coefficient from a multivariate Gaussian
Robust linear regression
Hierarchical linear regression
Correlation, causation, and the messiness of life
Polynomial regression
Interpreting the parameters of a polynomial regression
Polynomial regression – the ultimate model?
Multiple linear regression
Confounding variables and redundant variables
Multicollinearity or when the correlation is too high
Masking effect variables
Adding interactions
The GLM module
Summary
Keep reading
Exercises

Chapter 5: Classifying Outcomes with Logistic Regression
Logistic regression
The logistic model
The iris dataset
The logistic model applied to the iris dataset
Making predictions
Multiple logistic regression
The boundary decision
Implementing the model
Dealing with correlated variables
Dealing with unbalanced classes
How do we solve this problem?
Interpreting the coefficients of a logistic regression
Generalized linear models
Softmax regression or multinomial logistic regression
Discriminative and generative models
Summary
Keep reading
Exercises

Chapter 6: Model Comparison
Occam's razor – simplicity and accuracy
Too many parameters leads to overfitting
Too few parameters leads to underfitting
The balance between simplicity and accuracy
Regularizing priors
Regularizing priors and hierarchical models
Predictive accuracy measures
Cross-validation
Information criteria
The log-likelihood and the deviance
Akaike information criterion
Deviance information criterion
Widely available information criterion
Pareto smoothed importance sampling leave-one-out cross-validation
Bayesian information criterion
Computing information criteria with PyMC3
A note on the reliability of WAIC and LOO computations
Interpreting and using information criteria measures
Posterior predictive checks
Bayes factors
Analogy with information criteria
Computing Bayes factors
Common problems computing Bayes factors
Bayes factors and information criteria
Summary
Keep reading
Exercises

Chapter 7: Mixture Models
Mixture models
How to build mixture models
Marginalized Gaussian mixture model
Mixture models and count data
The Poisson distribution
The Zero-Inflated Poisson model
Poisson regression and ZIP regression
Robust logistic regression
Model-based clustering
Fixed component clustering
Non-fixed component clustering
Continuous mixtures
Beta-binomial and negative binomial
The Student's t-distribution
Summary
Keep reading
Exercises

Chapter 8: Gaussian Processes
Non-parametric statistics
Kernel-based models
The Gaussian kernel
Kernelized linear regression
Overfitting and priors
Gaussian processes
Building the covariance matrix
Sampling from a GP prior
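The chapter list above centers on building models in PyMC3 and then "pushing the inference button". As a rough illustration of what that workflow looks like (a minimal sketch written for the PyMC3 3.x API with made-up data, not the book's own listing), here is the Chapter 1-2 coin-flipping model:

```python
import numpy as np
import pymc3 as pm

# Hypothetical data: 4 heads out of 10 flips (1 = heads, 0 = tails)
data = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])

with pm.Model() as coin_model:
    theta = pm.Beta("theta", alpha=1.0, beta=1.0)   # uniform prior on the coin's bias
    pm.Bernoulli("y", p=theta, observed=data)       # likelihood
    trace = pm.sample(1000, tune=1000)              # "pushing the inference button"

print(pm.summary(trace))                            # posterior mean, credible interval, diagnostics
```

Because a Beta prior with a Bernoulli likelihood is conjugate, the sampled posterior can be checked against the closed-form Beta(1 + heads, 1 + tails) result.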
Recommended publications
  • DSA 5403 Bayesian Statistics: an Advancing Introduction 16 Units – Each Unit a Week’S Work
DSA 5403 Bayesian Statistics: An Advancing Introduction. 16 units – each unit a week's work. The following are the contents of the course, divided into chapters of the book Doing Bayesian Data Analysis by John Kruschke. The course is structured around the above book but will be embellished with more theoretical content as needed. The book can be obtained from the library: http://www.sciencedirect.com/science/book/9780124058880

Topics: The course is divided into three parts. From the book:

"Part I The basics: models, probability, Bayes' rule and R: Introduction: credibility, models, and parameters; The R programming language; What is this stuff called probability?; Bayes' rule

Part II All the fundamentals applied to inferring a binomial probability: Inferring a binomial probability via exact mathematical analysis; Markov chain Monte Carlo; JAGS; Hierarchical models; Model comparison and hierarchical modeling; Null hypothesis significance testing; Bayesian approaches to testing a point ("null") hypothesis; Goals, power, and sample size; Stan

Part III The generalized linear model: Overview of the generalized linear model; Metric-predicted variable on one or two groups; Metric predicted variable with one metric predictor; Metric predicted variable with multiple metric predictors; Metric predicted variable with one nominal predictor; Metric predicted variable with multiple nominal predictors; Dichotomous predicted variable; Nominal predicted variable; Ordinal predicted variable; Count predicted variable; Tools in the trunk"

Part 1: The Basics: Models, Probability, Bayes' Rule and R

Unit 1 (Chapters 1, 2): Credibility, Models, and Parameters
• Derivation of Bayes' rule
• Re-allocation of probability
• The steps of Bayesian data analysis
• Classical use of Bayes' rule
  o Testing – false positives etc.
  o Definitions of prior, likelihood, evidence and posterior

Unit 2 (Chapter 3): The R language
• Get the software
• Variable types
• Loading data
• Functions
• Plots

Unit 3 (Chapter 4): Probability distributions.
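Part II's first topic, inferring a binomial probability via exact mathematical analysis, can be reproduced in a few lines. The sketch below uses Python (rather than the course's R) and hypothetical counts, relying only on Beta-binomial conjugacy:

```python
from scipy import stats

# A Beta(a, b) prior updated with k successes in n trials gives a Beta(a + k, b + n - k) posterior.
a, b = 1, 1            # uniform prior
k, n = 6, 9            # hypothetical data: 6 successes in 9 trials
posterior = stats.beta(a + k, b + n - k)

print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```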
  • University of North Carolina at Charlotte Belk College of Business
University of North Carolina at Charlotte, Belk College of Business
BPHD 8220 Financial Bayesian Analysis, Fall 2019
Course Time: Tuesday 12:20 - 3:05 pm
Location: Friday Building 207
Professor: Dr. Yufeng Han
Office Location: Friday Building 340A
Telephone: (704) 687-8773
E-mail: [email protected]
Office Hours: by appointment

Textbooks:
Introduction to Bayesian Econometrics, 2nd Edition, by Edward Greenberg (required)
An Introduction to Bayesian Inference in Econometrics, 1st Edition, by Arnold Zellner
Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan, 2nd Edition, by Joseph M. Hilbe, Rafael S. de Souza, Emille E. O. Ishida
Bayesian Analysis with Python: Introduction to statistical modeling and probabilistic programming using PyMC3 and ArviZ, 2nd Edition, by Osvaldo Martin
The Oxford Handbook of Bayesian Econometrics, 1st Edition, by John Geweke

Topics:
1. Principles of Bayesian Analysis
2. Simulation
3. Linear Regression Models
4. Multivariate Regression Models
5. Time-Series Models
6. State-Space Models
7. Volatility Models
8. Endogeneity Models

Software:
1. Stan
2. Edward
3. JAGS
4. BUGS & MultiBUGS
5. Python Modules: PyMC & ArviZ
6. R, STATA, SAS, Matlab, etc.

Course Assessment: Homework assignments, paper presentation, and replication (or project).

Grading:
Homework: 30%
Paper presentation: 40%
Replication (project): 30%

Selected Papers:
Vasicek, O. A. (1973). A Note on Using Cross-Sectional Information in Bayesian Estimation of Security Betas. Journal of Finance, 28: 1233-1239. doi:10.1111/j.1540-6261.1973.tb01452.x
Shanken, J. (1987). A Bayesian approach to testing portfolio efficiency. Journal of Financial Economics, 19(2), 195–215. https://doi.org/10.1016/0304-405X(87)90002-X
Guofu, C., & Harvey, R.
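The Vasicek (1973) paper on the reading list concerns Bayesian estimation of security betas. As a minimal PyMC3 sketch of that kind of model (the return series are simulated stand-ins and the priors are illustrative assumptions, not anything prescribed by the course):

```python
import numpy as np
import pymc3 as pm

# Hypothetical daily excess returns for the market and for one stock
rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, size=250)
stock = 0.0002 + 1.2 * market + rng.normal(0.0, 0.015, size=250)

with pm.Model() as capm:
    alpha = pm.Normal("alpha", mu=0.0, sd=0.01)
    beta = pm.Normal("beta", mu=1.0, sd=1.0)       # prior shrinking beta toward the market beta of 1
    sigma = pm.HalfNormal("sigma", sd=0.05)
    pm.Normal("ret", mu=alpha + beta * market, sd=sigma, observed=stock)
    trace = pm.sample(1000, tune=1000)

print(pm.summary(trace))
```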
  • The Bayesian Approach to Statistics
THE BAYESIAN APPROACH TO STATISTICS

ANTHONY O'HAGAN

INTRODUCTION

By far the most widely taught and used statistical methods in practice are those of the frequentist school. The ideas of frequentist inference, as set out in Chapter 5 of this book, rest on the frequency definition of probability (Chapter 2), and were developed in the first half of the 20th century. This chapter concerns a radically different approach to statistics, the Bayesian approach, which depends instead on the subjective definition of probability (Chapter 3). In some respects, Bayesian methods are older than frequentist ones, having been the basis of very early statistical reasoning as far back as the 18th century. Bayesian statistics as it is now understood, however, dates back to the 1950s, with subsequent development in the second half of the 20th century. Over that time, the Bayesian approach has steadily gained ground, and is now recognized as a legitimate alternative to the frequentist approach.

This chapter is organized into three sections.

the true nature of scientific reasoning. The final section addresses various features of modern Bayesian methods that provide some explanation for the rapid increase in their adoption since the 1980s.

BAYESIAN INFERENCE

We first present the basic procedures of Bayesian inference.

Bayes's Theorem and the Nature of Learning

Bayesian inference is a process of learning from data. To give substance to this statement, we need to identify who is doing the learning and what they are learning about.

Terms and Notation

The person doing the learning is an individual
  • Posterior Propriety and Admissibility of Hyperpriors in Normal
The Annals of Statistics 2005, Vol. 33, No. 2, 606–646. DOI: 10.1214/009053605000000075. © Institute of Mathematical Statistics, 2005. arXiv:math/0505605v1 [math.ST] 27 May 2005.

POSTERIOR PROPRIETY AND ADMISSIBILITY OF HYPERPRIORS IN NORMAL HIERARCHICAL MODELS

By James O. Berger, William Strawderman and Dejun Tang
Duke University and SAMSI, Rutgers University and Novartis Pharmaceuticals

Hierarchical modeling is wonderful and here to stay, but hyperparameter priors are often chosen in a casual fashion. Unfortunately, as the number of hyperparameters grows, the effects of casual choices can multiply, leading to considerably inferior performance. As an extreme, but not uncommon, example, use of the wrong hyperparameter priors can even lead to impropriety of the posterior.

For exchangeable hierarchical multivariate normal models, we first determine when a standard class of hierarchical priors results in proper or improper posteriors. We next determine which elements of this class lead to admissible estimators of the mean under quadratic loss; such considerations provide one useful guideline for choice among hierarchical priors. Finally, computational issues with the resulting posterior distributions are addressed.

1. Introduction.

1.1. The model and the problems. Consider the block multivariate normal situation (sometimes called the "matrix of means problem") specified by the following hierarchical Bayesian model:

(1) X ~ Np(θ, I), θ ~ Np(B, Σπ),

where X = (X1, X2, ..., Xm)' and θ = (θ1, θ2, ..., θm)' are p × 1 vectors of stacked blocks.

Received February 2004; revised July 2004. Supported by NSF Grants DMS-98-02261 and DMS-01-03265. AMS 2000 subject classifications: Primary 62C15; secondary 62F15. Key words and phrases: Covariance matrix, quadratic loss, frequentist risk, posterior impropriety, objective priors, Markov chain Monte Carlo.
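For readers who want to see the model in (1) as a program rather than a theorem, the sketch below writes a small exchangeable normal hierarchy in PyMC3 with an explicit, proper hyperprior. The data, dimensions, and the half-Cauchy hyperprior are illustrative assumptions, not the prior class analyzed in the paper:

```python
import numpy as np
import pymc3 as pm

# Hypothetical data: m blocks, each a p-dimensional observation X_i ~ N_p(theta_i, I)
m, p = 5, 3
rng = np.random.default_rng(0)
true_theta = rng.normal(0.0, 1.0, size=(m, p))
X = true_theta + rng.normal(0.0, 1.0, size=(m, p))

with pm.Model() as hier:
    B = pm.Normal("B", mu=0.0, sd=10.0, shape=p)          # common mean across blocks
    tau = pm.HalfCauchy("tau", beta=1.0, shape=p)          # explicit (proper) hyperprior on the spread
    theta = pm.Normal("theta", mu=B, sd=tau, shape=(m, p))
    pm.Normal("X", mu=theta, sd=1.0, observed=X)           # X_i ~ N_p(theta_i, I)
    trace = pm.sample(1000, tune=1000)

print(pm.summary(trace))
```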
  • The Deviance Information Criterion: 12 Years On
J. R. Statist. Soc. B (2014)

The deviance information criterion: 12 years on

David J. Spiegelhalter, University of Cambridge, UK
Nicola G. Best, Imperial College School of Public Health, London, UK
Bradley P. Carlin, University of Minnesota, Minneapolis, USA
and Angelika van der Linde, Bremen, Germany

[Presented to The Royal Statistical Society at its annual conference in a session organized by the Research Section on Tuesday, September 3rd, 2013, Professor G. A. Young in the Chair]

Summary. The essentials of our paper of 2002 are briefly summarized and compared with other criteria for model comparison. After some comments on the paper's reception and influence, we consider criticisms and proposals for improvement made by us and others.

Keywords: Bayesian; Model comparison; Prediction

1. Some background to model comparison

Suppose that we have a given set of candidate models, and we would like a criterion to assess which is 'better' in a defined sense. Assume that a model for observed data y postulates a density p(y|θ) (which may include covariates etc.), and call D(θ) = −2 log{p(y|θ)} the deviance, here considered as a function of θ. Classical model choice uses hypothesis testing for comparing nested models, e.g. the deviance (likelihood ratio) test in generalized linear models. For non-nested models, alternatives include the Akaike information criterion

AIC = −2 log{p(y|θ̂)} + 2k,

where θ̂ is the maximum likelihood estimate and k is the number of parameters in the model (dimension of Θ). AIC is built with the aim of favouring models that are likely to make good predictions.
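To make these quantities concrete, here is a small numerical sketch (not from the paper) that computes the deviance, AIC, and DIC for a normal-mean model with known σ; the data are simulated and the flat-prior posterior for μ is used so that no sampler is needed:

```python
import numpy as np
from scipy import stats

# Hypothetical data modelled as y ~ Normal(mu, sigma) with sigma known
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)
sigma = 1.0

def deviance(mu):
    # D(theta) = -2 log p(y | theta)
    return -2.0 * stats.norm(mu, sigma).logpdf(y).sum()

# AIC: deviance at the maximum likelihood estimate plus 2k
mu_hat = y.mean()                  # MLE of mu
aic = deviance(mu_hat) + 2 * 1     # k = 1 free parameter

# DIC: mean posterior deviance plus pD, where pD = mean(D) - D(posterior mean).
# Under a flat prior the posterior for mu is Normal(ybar, sigma^2 / n), so we draw from it directly.
post = stats.norm(y.mean(), sigma / np.sqrt(len(y)))
draws = post.rvs(size=4000, random_state=1)
d_draws = np.array([deviance(m) for m in draws])
p_d = d_draws.mean() - deviance(draws.mean())
dic = d_draws.mean() + p_d

print(f"AIC = {aic:.1f}, DIC = {dic:.1f}, pD = {p_d:.2f}")
```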
  • A Widely Applicable Bayesian Information Criterion
Journal of Machine Learning Research 14 (2013) 867-897. Submitted 8/12; Revised 2/13; Published 3/13

A Widely Applicable Bayesian Information Criterion

Sumio Watanabe [email protected]
Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Mailbox G5-19, 4259 Nagatsuta, Midori-ku, Yokohama, Japan 226-8502

Editor: Manfred Opper

Abstract

A statistical model or a learning machine is called regular if the map taking a parameter to a probability distribution is one-to-one and if its Fisher information matrix is always positive definite. If otherwise, it is called singular. In regular statistical models, the Bayes free energy, which is defined by the minus logarithm of Bayes marginal likelihood, can be asymptotically approximated by the Schwarz Bayes information criterion (BIC), whereas in singular models such approximation does not hold. Recently, it was proved that the Bayes free energy of a singular model is asymptotically given by a generalized formula using a birational invariant, the real log canonical threshold (RLCT), instead of half the number of parameters in BIC. Theoretical values of RLCTs in several statistical models are now being discovered based on algebraic geometrical methodology. However, it has been difficult to estimate the Bayes free energy using only training samples, because an RLCT depends on an unknown true distribution. In the present paper, we define a widely applicable Bayesian information criterion (WBIC) by the average log likelihood function over the posterior distribution with the inverse temperature 1/log n, where n is the number of training samples. We mathematically prove that WBIC has the same asymptotic expansion as the Bayes free energy, even if a statistical model is singular for or unrealizable by a statistical model.
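WBIC as defined above is the posterior average of the negative log likelihood, taken at inverse temperature 1/log n. The sketch below approximates it on a one-dimensional grid for a toy normal-mean model; the data, prior, and grid are assumptions made for illustration, not from the paper:

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

# Hypothetical data from a Normal(mu, 1) model
rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=100)
n = len(y)
beta = 1.0 / np.log(n)                 # the inverse temperature 1 / log n

mu_grid = np.linspace(-3.0, 3.0, 2001)
log_lik = np.array([stats.norm(m, 1.0).logpdf(y).sum() for m in mu_grid])
log_prior = stats.norm(0.0, 10.0).logpdf(mu_grid)

# Tempered posterior weights, proportional to prior * likelihood**beta
log_w = log_prior + beta * log_lik
weights = np.exp(log_w - logsumexp(log_w))

# WBIC: tempered-posterior average of the negative log likelihood
wbic = -(weights * log_lik).sum()
print(f"WBIC ~ {wbic:.1f}")
```

For a regular one-parameter model like this one, the result should land near the Schwarz approximation of the free energy, -log p(y|mu_hat) + (1/2) log n.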
  • An Introduction to Data Analysis Using the PyMC3 Probabilistic Programming Framework: A Case Study with Gaussian Mixture Modeling
An introduction to data analysis using the PyMC3 probabilistic programming framework: A case study with Gaussian Mixture Modeling

Shi Xian Liew, Mohsen Afrasiabi, and Joseph L. Austerweil
Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
August 28, 2019

Author Note: Correspondence concerning this article should be addressed to: Shi Xian Liew, 1202 West Johnson Street, Madison, WI 53706. E-mail: [email protected]. This work was funded by a Vilas Life Cycle Professorship and the VCRGE at University of Wisconsin-Madison with funding from the WARF.

Abstract

Recent developments in modern probabilistic programming have offered users many practical tools of Bayesian data analysis. However, the adoption of such techniques by the general psychology community is still fairly limited. This tutorial aims to provide non-technicians with an accessible guide to PyMC3, a robust probabilistic programming language that allows for straightforward Bayesian data analysis. We focus on a series of increasingly complex Gaussian mixture models – building up from fitting basic univariate models to more complex multivariate models fit to real-world data. We also explore how PyMC3 can be configured to obtain significant increases in computational speed by taking advantage of a machine's GPU, in addition to the conditions under which such acceleration can be expected. All example analyses are detailed with step-by-step instructions and corresponding Python code.

Keywords: probabilistic programming; Bayesian data analysis; Markov chain Monte Carlo; computational modeling; Gaussian mixture modeling

1 Introduction

Over the last decade, there has been a shift in the norms of what counts as rigorous research in psychological science.
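As a taste of the kind of model the tutorial builds up to, here is a minimal univariate two-component Gaussian mixture in PyMC3; the synthetic data and the priors are assumptions for illustration, not the tutorial's own case study:

```python
import numpy as np
import pymc3 as pm

# Hypothetical data drawn from two Gaussian components
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.8, 300), rng.normal(3.0, 1.2, 200)])

with pm.Model() as gmm:
    w = pm.Dirichlet("w", a=np.ones(2))               # mixture weights
    mu = pm.Normal("mu", mu=0.0, sd=10.0, shape=2)    # component means
    sigma = pm.HalfNormal("sigma", sd=5.0, shape=2)   # component scales
    pm.NormalMixture("obs", w=w, mu=mu, sd=sigma, observed=data)
    trace = pm.sample(1000, tune=1000)

print(pm.summary(trace))
```

Mixtures are prone to label switching, so in practice an ordering constraint or more informative priors on the component means are usually added.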
  • Bayes Theorem and Its Recent Applications
BAYES THEOREM AND ITS RECENT APPLICATIONS

Nicholas Stylianides, Eleni Kontou
March 2020

Abstract. Individuals who have studied maths to a specific level have come across Bayes' Theorem or Bayes Formula. Bayes' Theorem has many applications in areas such as mathematics, medicine, finance, marketing, engineering and many others. This paper covers Bayes' Theorem at a basic level and explores how the formula was derived. We also look at some extended forms of the formula and give an explicit example. Lastly, we discuss recent applications of Bayes' Theorem in evaluating depression and predicting water quality conditions. The goal of this expository paper is to break down the interesting topic of Bayes' Theorem to a basic level which can be easily understood by a wide audience.

What is Bayes Theorem? Thomas Bayes was an 18th-century British mathematician who developed a mathematical formula to calculate conditional probability in order to provide a way to re-examine current expectations. This mathematical formula is well known as Bayes Theorem or Bayes' rule or Bayes' Law. It is also the basis of a whole field of statistics known as Bayesian Statistics. For example, in finance, Bayes' Theorem can be utilized to rate the risk of loaning cash to possible borrowers. The formula for Bayes' Theorem is [1]

P(A|B) = P(B|A) P(A) / P(B),

where:
P(A) = the probability of A occurring
P(B) = the probability of B occurring
P(A|B) = the probability of A given B
P(B|A) = the probability of B given A
P(A∩B) = the probability of both A and B occurring

Bayes' Theorem: for events A and B with P(B) > 0, P(A|B) = P(B|A) P(A) / P(B).

Proof: From Bayes' Formula, P(A|B) = P(A∩B) / P(B), and P(A∩B) = P(B|A) P(A) (1). From the Total Probability Theorem, P(B) = P(B|A) P(A) + P(B|A') P(A') (2). Substituting equation (2) into equation (1) we obtain the result.
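The same calculation is easy to script. The function below is a small illustration of the formula and its total-probability form, applied to made-up diagnostic-test numbers rather than anything from the paper:

```python
def bayes(prior_a, p_b_given_a, p_b_given_not_a):
    """Return P(A|B) from P(A), P(B|A) and P(B|not A) via the total probability theorem."""
    p_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)   # P(B), must be > 0
    return p_b_given_a * prior_a / p_b

# Hypothetical test: 1% prevalence, 95% sensitivity, 5% false-positive rate
print(bayes(0.01, 0.95, 0.05))   # ~0.16: most positive results are still false positives
```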
  • A Review of Bayesian Optimization
Taking the Human Out of the Loop: A Review of Bayesian Optimization

Citation: Shahriari, Bobak, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando de Freitas. 2016. "Taking the Human Out of the Loop: A Review of Bayesian Optimization." Proc. IEEE 104 (1) (January): 148-175. doi:10.1109/JPROC.2015.2494218.
Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:27769882

Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams and Nando de Freitas

Abstract—Big data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involves many tunable configuration parameters.

and the analytics company that sits between them. The analytics company must develop procedures to automatically design game variants across millions of users; the objective is to enhance user experience and maximize the content provider's revenue. The preceding examples highlight the importance of au-
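The review surveys the surrogate-plus-acquisition-function loop at the heart of Bayesian optimization. The sketch below is a bare-bones version of that loop, assuming a made-up one-dimensional objective, a scikit-learn Gaussian process surrogate, and the expected-improvement acquisition function; it is a toy, not the paper's algorithms:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Hypothetical expensive black-box function (to be minimized)
    return np.sin(3.0 * x) + 0.1 * x ** 2

def expected_improvement(X_cand, gp, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    imp = y_best - mu                  # improvement over the incumbent (minimization)
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(3, 1))   # small initial design
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):
    gp.fit(X, y)
    X_cand = np.linspace(-2.0, 2.0, 500).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, y.min())
    x_next = X_cand[np.argmax(ei)]         # query where expected improvement is largest
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmin(y), 0], "best value:", y.min())
```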
  • Is Structure Based Drug Design Ready for Selectivity Optimization?
bioRxiv preprint doi: https://doi.org/10.1101/2020.07.02.185132; this version posted July 3, 2020. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY 4.0 International license.

PREPRINT AHEAD OF SUBMISSION — July 2, 2020

IS STRUCTURE BASED DRUG DESIGN READY FOR SELECTIVITY OPTIMIZATION?

Steven K. Albanese (1,2,†), John D. Chodera (2), Andrea Volkamer (3), Simon Keng (4), Robert Abel (4), Lingle Wang (4,*)

(1) Louis V. Gerstner, Jr. Graduate School of Biomedical Sciences, Memorial Sloan Kettering Cancer Center, New York, NY 10065; (2) Computational and Systems Biology Program, Sloan Kettering Institute, Memorial Sloan Kettering Cancer Center, New York, NY 10065; (3) Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin; (4) Schrödinger, New York, NY 10036

*For correspondence: [email protected] (LW)
†Present address: Schrödinger, New York, NY 10036

Abstract

Alchemical free energy calculations are now widely used to drive or maintain potency in small molecule lead optimization with a roughly 1 kcal/mol accuracy. Despite this, the potential to use free energy calculations to drive optimization of compound selectivity among two similar targets has been relatively unexplored in published studies. In the most optimistic scenario, the similarity of binding sites might lead to a fortuitous cancellation of errors and allow selectivity to be predicted more accurately than affinity. Here, we assess the accuracy with which selectivity can be predicted in the context of small molecule kinase inhibitors, considering the very similar binding sites of human kinases CDK2 and CDK9, as well as another series of ligands attempting to achieve selectivity between the more distantly related kinases CDK2 and ERK2.
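The "fortuitous cancellation of errors" argument is quantitative: selectivity is a difference of two binding free energies, so correlated per-target errors partly cancel. A tiny numerical illustration (with assumed error magnitudes and correlations, not the paper's results):

```python
import numpy as np

# Assumed per-target free energy errors of ~1 kcal/mol (the rough accuracy quoted above)
sigma_cdk2, sigma_cdk9 = 1.0, 1.0
for rho in (0.0, 0.5, 0.9):
    # Selectivity ddG = dG(CDK2) - dG(CDK9); the variance of a difference of correlated errors is
    # sigma1^2 + sigma2^2 - 2*rho*sigma1*sigma2, so cancellation grows with the correlation rho.
    sigma_ddg = np.sqrt(sigma_cdk2**2 + sigma_cdk9**2 - 2.0 * rho * sigma_cdk2 * sigma_cdk9)
    print(f"correlation {rho:.1f} -> selectivity error ~ {sigma_ddg:.2f} kcal/mol")
```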
  • Migrating MATLAB®To Python
Migrating MATLAB® to Python
Strategies, Comparisons and a Guide to Converting for Experts

Alexandre Chabot-Leclerc, Enthought, Inc.

©2020 Enthought, Inc. Written by Enthought, Inc. All Rights Reserved. Use only permitted under license. Copying, sharing, redistributing, or other unauthorized use strictly prohibited. All trademarks and registered trademarks are the property of their respective owners. MATLAB and Simulink are registered trademarks of The MathWorks, Inc.

Enthought, Inc., 200 W Cesar Chavez St, Suite 202, Austin, TX 78701, United States. www.enthought.com. Version 1.2.0

Contents:
Introduction
Why Python
Differences Between Python and MATLAB®
  Fundamental Data Types
  Organizing Code in Packages, not Toolboxes
  Syntax
  Indexing and Slicing: Why Zero-Based Indexing
  NumPy Arrays Are Not Matrices
  Programming Paradigm: Object-Oriented vs. Procedural
  Anti-Patterns
How Do I?
  Load Data
  Signal Processing
  Linear Algebra
  Machine Learning
  Statistical Analysis
  Image Processing and Computer Vision
  Optimization
  Natural Language Processing
  Data Visualization
  Save Data
  What Else?
Strategies for Converting to Python
  From the Bottom Up: Converting One Function at a Time
  From the Top Down: Calling Python from MATLAB®
  What Next?
Acknowledgments
Appendix
  Code Example: Profiling Contiguous Array Operations
  Complete Version of main.py, in Chapter 4
References

Accelerate your Python migration with Enthought's Python for Scientists and Engineers training course!

Introduction

This document will guide you through the transition from MATLAB® to Python.
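Two of the differences listed above, zero-based indexing and the fact that NumPy arrays are not matrices, are the ones that most often trip up MATLAB users. The snippet below is a small illustrative comparison, not taken from the guide:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

print(a[0, 0])    # first element: index 0, not 1 as in MATLAB
print(a[:, 0])    # first column; slices are views, not copies
print(a * b)      # elementwise product (MATLAB's a .* b)
print(a @ b)      # matrix product (MATLAB's a * b)
```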
  • Probabilistic Programming in Python Using PyMC3
Probabilistic programming in Python using PyMC3

John Salvatier (1), Thomas V. Wiecki (2) and Christopher Fonnesbeck (3)
(1) AI Impacts, Berkeley, CA, United States
(2) Quantopian Inc, Boston, MA, United States
(3) Department of Biostatistics, Vanderbilt University, Nashville, TN, United States

ABSTRACT

Probabilistic programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamiltonian Monte Carlo, requires gradient information which is often not readily available. PyMC3 is a new open source probabilistic programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed. Contrary to other probabilistic programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.

Subjects: Data Mining and Machine Learning, Data Science, Scientific Computing and Simulation
Keywords: Bayesian statistics, Probabilistic Programming, Python, Markov chain Monte Carlo, Statistical modeling

INTRODUCTION

Probabilistic programming (PP) allows for flexible specification and fitting of Bayesian statistical models. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS) (Hoffman & Gelman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC) (Duane et al., 1987).

Submitted 9 September 2015; Accepted 8 March 2016; Published 6 April 2016. Corresponding author: Thomas V. Wiecki, [email protected]
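The claim that models are specified directly in Python is easiest to see with a short example. The following is a minimal sketch of a linear regression written against the PyMC3 3.x API with simulated data; it is illustrative, not the paper's tutorial code:

```python
import numpy as np
import pymc3 as pm

# Hypothetical data: y = 1 + 2.5 x + noise
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 2.5 * x + rng.normal(0.0, 0.5, size=100)

with pm.Model() as model:
    alpha = pm.Normal("alpha", mu=0.0, sd=10.0)
    beta = pm.Normal("beta", mu=0.0, sd=10.0)
    sigma = pm.HalfNormal("sigma", sd=1.0)
    pm.Normal("y_obs", mu=alpha + beta * x, sd=sigma, observed=y)
    trace = pm.sample(2000, tune=1000)   # NUTS is selected automatically for continuous models

print(pm.summary(trace))
```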