Turing: a Language for Flexible Probabilistic Inference


Hong Ge (University of Cambridge), Kai Xu (University of Edinburgh), Zoubin Ghahramani (University of Cambridge, Uber)

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS) 2018, Lanzarote, Spain. PMLR: Volume 84. Copyright 2018 by the author(s).

Abstract

Probabilistic programming is becoming an attractive approach to probabilistic machine learning. By relieving researchers of the tedious burden of hand-deriving inference algorithms, it not only enables the development of more accurate and interpretable models but also encourages reproducible research. However, successful probabilistic programming systems require flexible, generic and efficient inference engines. In this work, we present a system called Turing for flexible composable probabilistic programming inference. Turing has an intuitive modelling syntax and supports a wide range of sampling-based inference algorithms. Most importantly, Turing inference is composable: it combines Markov chain sampling operations on subsets of model variables, e.g. using a combination of a Hamiltonian Monte Carlo (HMC) engine and a particle Gibbs (PG) engine. This composable inference engine allows the user to easily switch between black-box style inference methods such as HMC, and customised inference methods. Our aim is to present Turing and its composable inference engines to the community and encourage other researchers to build on this system to help advance the field of probabilistic machine learning.

1 Introduction

Probabilistic model-based machine learning [Ghahramani, 2015, Bishop, 2013] has been used successfully for a wide range of problems, and new applications are constantly being explored. For each new application, however, it is currently necessary first to derive the inference method, e.g. in the form of a variational or Markov chain Monte Carlo (MCMC) algorithm, and then to implement it in application-specific code. Worse yet, building models from data is often an iterative process, in which a model is proposed, fit to data and modified depending on its performance. Each of these steps is time-consuming, error-prone and usually requires expert knowledge in mathematics and computer science, an impediment for researchers who are not experts. In contrast, deep learning methods have benefited enormously from easy-to-use frameworks based on automatic differentiation that implement end-to-end optimisation. There is a real potential for automated probabilistic inference methods (in conjunction with existing automated optimisation systems) to revolutionise machine learning practice.

Probabilistic programming languages [Goodman et al., 2008, Wood et al., 2014, Lunn et al., 2000, Minka et al., 2014, Stan Development Team, 2014, Murray, 2013, Pfeffer, 2001, 2009, Paige and Wood, 2014, Murray et al., 2017] aim to fill this gap by providing a very flexible framework for defining probabilistic models and automating the model learning process using generic inference engines. This frees researchers from writing complex models by hand, enables them to focus on designing a suitable model using their insight and expert knowledge, and accelerates the iterative process of model modification. Moreover, probabilistic programming languages make it possible to implement and publish novel learning and inference algorithms in the form of generic inference engines. This enables fair, direct comparison between new and existing learning and inference algorithms on the same set of problems, something that is sorely needed by the scientific community. Furthermore, open problems that cannot be solved by state-of-the-art algorithms can be published in the form of challenging problem sets, allowing inference experts to easily identify open research questions in the field.

In this work, we introduce a system for probabilistic machine learning called Turing. Turing is an expressive probabilistic programming language developed with a focus on intuitive modelling syntax and inference efficiency. The organisation of the remainder of this paper is as follows. Section 2 sets up the problem and notation. Section 3 describes the proposed inference engines and Section 4 describes some techniques used in their implementation. Section 5 discusses related work. Section 6 concludes.

2 Background

In probabilistic modelling, we are often interested in the problem of simulating from a probability distribution p(θ | y, γ). Here, θ could represent the parameters of interest, y some observed data and γ some fixed model hyper-parameters. The target distribution p(θ | y, γ) arises from conditioning a probabilistic model p(y, θ | γ) on some observed data y.

2.1 Models as computer programs

One way to represent a probabilistic model is as a computer program. Perhaps the earliest and most influential probabilistic programming system so far is BUGS [Lunn et al., 2000]. The BUGS language dates back to the 1990s. In BUGS, a probabilistic model is encoded using a simple programming language that conveniently resembles statistical notation. After specifying a BUGS model and conditioning on some observed data, Monte Carlo samples can be automatically drawn from the model's posterior distribution. Algorithm 2.1 shows the generic structure of a probabilistic program.

Algorithm 2.1. A generic probabilistic program.

Input: data y and hyper-parameter γ

Step 1: Define global parameters

    θ^global ∼ p(· | γ),    (1)

Step 2: For each observation y_n, define (local) latent variables and compute likelihoods

    θ^local_n ∼ p(· | θ^local_{1:n−1}, θ^global, γ),    (2)

    y_n ∼ p(· | θ^local_{1:n}, θ^global, γ),    (3)

where n = 1, 2, …, N.

Above, model variables (or parameters) are divided into two groups: θ^local_n denotes model parameters (or latent variables) specific to observation y_n, such as a mixture indicator for a data item in mixture models, and θ^global denotes global parameters.

Currently there are two main approaches to probabilistic programming. The first approach is based on the idea that probabilistic programs should only support the family of models for which we can perform efficient inference. Although motivated from a pragmatic point of view, this approach has led to a fruitful collection of software systems including BUGS, Stan [Stan Development Team, 2014], and Infer.NET [Minka et al., 2014]. The second approach relaxes the constraints imposed by existing inference algorithms, and attempts to introduce languages that are flexible enough to encode arbitrary probabilistic models.

2.2 Inference for probabilistic programs

Probabilistic programs can only realise their flexibility potential when accompanied by efficient inference engines. To explain how inference in probabilistic programming works, we consider the following HMM example with K states:

    π_k ∼ Dir(θ),  φ_k ∼ p(γ)    (k = 1, 2, …, K)
    z_t | z_{t−1} ∼ Cat(· | π_{z_{t−1}})    (4)
    y_t | z_t ∼ h(· | φ_{z_t})    (t = 1, 2, …, T)

Here Dir and Cat denote the Dirichlet and Categorical distributions respectively. The complete collection of parameters in this model is {π_{1:K}, φ_{1:K}, z_{1:T}}. An efficient Gibbs sampler with the following series of three steps is often used for Bayesian inference:

Step 1: Sample z_{1:T} ∼ z_{1:T} | φ_{1:K}, π_{1:K}, y_{1:T};
Step 2: Sample φ_k ∼ φ_k | z_{1:T}, y_{1:T}, γ;
Step 3: Sample π_k ∼ π_k | z_{1:T}, θ    (k = 1, …, K).

2.3 Computation graph based inference

One challenge of performing inference for probabilistic programs is how to obtain the computation graph between model variables. This is in contrast with graphical models, where the dependence structure (or computation graph) is normally static and known ahead of time. For example, consider the variables in the HMM model: the chain z_1, z_2, …, z_T is dependent, while the other variables are independent given this chain. However, when the HMM model is represented in the form of a probabilistic program (i.e. θ = {π_{1:K}, φ_{1:K}, z_{1:T}}, cf. Algorithm 2.1), we no longer have the computation graph between model parameters.

For certain probabilistic programs, it is possible to construct the computation graph between variables through static analysis. This is the approach taken by the BUGS language [Lunn et al., 2000] and Infer.NET [Minka et al., 2014]. Once the computation graph is constructed, a Gibbs sampler or message passing algorithm [Minka, 2001, Winn and Bishop, 2005] can be applied to each random node of the computation graph. However, one caveat of this approach is that the computation graph underlying a probabilistic program needs to be fixed during inference. For programs involving stochastic branches, this requirement may not be satisfied. In such cases, we have to resort to other inference methods. Models with stochastic branches [Goodman et al., 2008, Mansinghka et al., 2014, Wood et al., 2014], such as conditionals, loops and recursions, pose a substantially more challenging Bayesian inference problem because inference engines have to manage a varying number of model dimensions, dynamic computation graphs and so on.

Sampler | Requires gradients? | Requires adaption? | Supports discrete variables? | Supports universal programs? | Composable MCMC operator?
HMC     | Yes | Yes | No  | No  | Yes
NUTS    | Yes | Yes | No  | No  | Yes
IS      | No  | No  | Yes | Yes | No
SMC     | No  | No  | Yes | Yes | No
PG      | No  | No  | Yes | Yes | Yes
PMMH    | No  | No  | Yes | Yes | Yes
IPMCMC  | No  | No  | Yes | Yes | Yes

Table 1: Supported Monte Carlo algorithms in Turing (v0.4).
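The three-step Gibbs sweep of Section 2.2 can be sketched in code. The following is a minimal sketch in Python (not Turing's own syntax), under assumptions not made in the paper: emissions h(· | φ_k) are taken to be Categorical over V symbols so that Steps 2 and 3 are Dirichlet updates by conjugacy, the initial state is uniform, and Step 1 samples z_{1:T} jointly by forward filtering and backward sampling. All names (`gibbs_sweep`, etc.) are illustrative.

```python
import numpy as np

def gibbs_sweep(y, K, V, pi, phi, rng, theta=1.0, gamma=1.0):
    """One sweep of the three-step Gibbs sampler for a discrete-emission HMM.

    y   : observed symbols in {0..V-1}, length T
    pi  : (K, K) transition matrix, pi[i] = transition probs out of state i
    phi : (K, V) emission probabilities
    theta, gamma : symmetric Dirichlet hyper-parameters
    Returns updated (z, pi, phi).
    """
    T = len(y)
    # Step 1: sample z_{1:T} jointly via forward filtering, backward sampling.
    alpha = np.zeros((T, K))
    alpha[0] = phi[:, y[0]] / K              # uniform initial state x emission
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ pi) * phi[:, y[t]]
        alpha[t] /= alpha[t].sum()
    z = np.zeros(T, dtype=int)
    z[T - 1] = rng.choice(K, p=alpha[T - 1])
    for t in range(T - 2, -1, -1):
        w = alpha[t] * pi[:, z[t + 1]]
        z[t] = rng.choice(K, p=w / w.sum())
    # Step 2: sample phi_k | z, y, gamma (Dirichlet-Categorical conjugacy).
    for k in range(K):
        counts = np.bincount(y[z == k], minlength=V)
        phi[k] = rng.dirichlet(gamma + counts)
    # Step 3: sample pi_k | z, theta from transition counts.
    trans = np.zeros((K, K))
    for t in range(1, T):
        trans[z[t - 1], z[t]] += 1
    for k in range(K):
        pi[k] = rng.dirichlet(theta + trans[k])
    return z, pi, phi
```

Iterating this sweep yields draws whose stationary distribution is the posterior over {π_{1:K}, φ_{1:K}, z_{1:T}}; a probabilistic programming system aims to derive such a scheme automatically rather than by hand.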
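Why stochastic branches defeat static graph construction (Section 2.3) can be made concrete with a small generative program. The program below is a hypothetical illustration, not taken from the paper: the number of latent variables it instantiates depends on a random draw, so no fixed computation graph covers all of its executions.

```python
import random

def branching_program(rng=random):
    """A generative program whose computation graph is run-dependent."""
    latents = {}
    # Stochastic branch: a geometric number of components.
    k = 1
    while rng.random() < 0.5:   # each run may allocate a different number
        k += 1
    latents["k"] = k
    # The latents mu_0..mu_{k-1} exist only on runs where the loop iterated
    # enough times, so the dependence structure varies between executions.
    for i in range(k):
        latents[f"mu_{i}"] = rng.gauss(0.0, 1.0)
    return latents

# Two executions generally instantiate different variable sets:
a, b = branching_program(), branching_program()
```

A static analyser in the style of BUGS or Infer.NET cannot assign a fixed node set to this program, which is why such models fall back on the simulation-based engines discussed in Section 2.5.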
2.4 Hamiltonian Monte Carlo based inference

For the family of models whose log probability is pointwise computable and differentiable, there exists an efficient sampling method using Hamiltonian dynamics. Developed as an extension of the Metropolis algorithm,

2.5 Simulation based inference

Currently, most inference engines for universal probabilistic programs (those involving stochastic branches) use forward-simulation based sampling methods such as rejection sampling (RS), sequential Monte Carlo (SMC), and particle MCMC [Andrieu et al., 2010].
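The requirement stated in Section 2.4 — a pointwise-computable, differentiable log probability — is exactly the interface a basic HMC transition consumes. The following is a minimal sketch of one standard leapfrog HMC step for a user-supplied log density and gradient; the function name, the fixed step size and the trajectory length are illustrative assumptions (real engines, including Turing's, adapt these quantities).

```python
import numpy as np

def hmc_step(theta, logp, grad_logp, rng, eps=0.1, n_leapfrog=20):
    """One HMC transition targeting the density proportional to exp(logp)."""
    r = rng.normal(size=theta.shape)                 # sample momentum
    theta_new, r_new = theta.copy(), r.copy()
    # Leapfrog integration of Hamiltonian dynamics.
    r_new += 0.5 * eps * grad_logp(theta_new)
    for _ in range(n_leapfrog - 1):
        theta_new += eps * r_new
        r_new += eps * grad_logp(theta_new)
    theta_new += eps * r_new
    r_new += 0.5 * eps * grad_logp(theta_new)
    # Metropolis accept/reject on the (negated) total energy.
    h_old = logp(theta) - 0.5 * r @ r
    h_new = logp(theta_new) - 0.5 * r_new @ r_new
    if np.log(rng.uniform()) < h_new - h_old:
        return theta_new
    return theta
```

For example, logp = -0.5 * θ·θ with gradient -θ targets a standard Gaussian; iterating `hmc_step` then produces approximately independent posterior draws, which is what makes HMC attractive whenever gradients are available.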