Intermediate Bayes 2016

Kerrie Mengersen, QUT Brisbane, ACEMS
Collaborative Centre for Data Analysis, Modelling and Computation, QUT, GPO Box 2434, Brisbane 4001, Australia

Course Outline
1. Foundations
2. Linear and hierarchical modelling
3. Computational methods
4. Software
5. Case Study
6. Latent variable models
7. Spatial models
8. Bayesian networks

Acknowledgement to Dr Clair Alston, Griffith University, Australia, for some of these course notes.

1. Foundations

A Bayesian and a Frequentist were to be executed. The judge asked them what their last wishes were. The Bayesian replied that he would like to give the Frequentist one more lecture. The judge granted the Bayesian's wish and then turned to the Frequentist for his last wish. The Frequentist quickly responded that he wished to hear the lecture again and again and again and again... (Xiao-Li Meng)

p(θ|y) = p(y|θ) p(θ) / p(y)

A timeline of key figures: Bayes (1763), Laplace (1812), Boole and Venn (1838), Fisher and Neyman (1930s), Jeffreys (1950s), Geman and Geman (1980s), Gelfand and Smith (1990s), and today's Bayesians (2000s). Over this period the field's name evolved from "probability theory" through "inverse probability" to "Bayesian analysis".

Recall Bayes' Rule
p(A|B) = p(A and B) / p(B)
p(B|A) = p(B and A) / p(A)
p(A and B) = p(B and A) = p(B|A) p(A)
so that
p(A|B) = p(B|A) p(A) / p(B)

Bayes' Theorem
p(A|B) = p(B|A) p(A) / p(B)
Think of A = θ (unknown parameters, etc.) and B = y (known/observed 'data'). So:
p(θ|y) = p(y|θ) p(θ) / p(y)
The Reverend Thomas Bayes (1701-61) studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology).

Bayesian Modelling

Frequentist approach to modelling
We have some data y, and want to know about θ given y. Here θ can be unknown parameters, missing data, latent variables, etc.
Eg 1: sample y "successes" from n trials. What is Pr(success), θ?
Eg 2: sample y from N(θ,1). What is the population mean θ?
The frequentist estimates θ through the likelihood p(y|θ): how likely is y for specified values of θ? Eg: the probability of observing y if y ~ Bin(n, θ=0.3) or y ~ N(θ=1, 1). This is solved using moment estimators or maximum likelihood. But we really want to know about p(θ|y).

Bayesian approach to modelling

Example: Estimating a proportion
• From an ecologist: I want to know where koalas might be present. I surveyed 29 sites and 22 have koalas. What is the probability that a koala will be present at a different site in the same area, given this information?
• From a clinician: I want to know about the safety of a medical procedure. I treated 29 patients and 22 survived. What is the probability of survival, given this information?
• What is unobserved? θ = probability of success (presence of koalas, survival).

DAG: Binomial model
Likelihood: y ~ Binomial(θ, n)
Prior: θ ~ Beta(a, b)
DAG: a, b → θ; θ, n → y
Posterior: θ | y ~ Beta(y + a, n − y + b)

Your turn
Binomial example with 22 successes out of 29 trials. Consider the following priors for θ: Beta(1,1), Beta(9,1), Beta(100,100). Choose one of these priors:
1. What is the prior mean for θ?
2. What is the posterior distribution for θ?
3. What is the posterior mean for θ?
4. What general conclusions can you make about the influence of priors and sample size?
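The conjugate update θ | y ~ Beta(y + a, n − y + b) makes these questions a one-line calculation. A minimal R sketch (not from the original notes; it reproduces the answers that follow):

y = 22; n = 29
priors = rbind(c(1, 1), c(9, 1), c(100, 100))     # rows give the (a, b) of each prior
for (k in 1:nrow(priors)) {
  a = priors[k, 1]; b = priors[k, 2]
  prior.mean = a / (a + b)                        # mean of Beta(a, b)
  post.mean  = (y + a) / (n + a + b)              # mean of Beta(y+a, n-y+b)
  cat(sprintf("Beta(%g,%g): prior mean %.2f, posterior mean %.2f\n",
              a, b, prior.mean, post.mean))
}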
Answers
Sample proportion = 22/29 = 0.76
Beta(1,1): Prior mean = 1/(1+1) = 0.5; Posterior mean = (22+1)/(22+1+7+1) = 23/31 = 0.74
Beta(9,1): Prior mean = 9/(9+1) = 0.90; Posterior mean = (22+9)/(22+9+7+1) = 31/39 = 0.79
Beta(100,100): Prior mean = 100/(100+100) = 0.5; Posterior mean = (22+100)/(22+100+7+100) = 122/229 = 0.53

Your turn
Simulate and plot the density for the likelihood and these sets of priors and posteriors.

Sample code
# calculate the likelihood
# p(y=22|theta) = Bin(n=29, theta)
y = 22
n = 29
theta = c(0, 0.001, seq(0.01, 0.99, 0.01), 0.999, 1)
lik = dbinom(x = y, size = n, prob = theta)
lik = lik / sum(lik)                     # normalise over the grid for plotting
plot(theta, lik, type = "l", ylab = "prob", ylim = c(0, 0.2))

# calculate the prior
# p(theta) = Beta(1,1); change to Beta(9,1), Beta(100,100) later
a1 = 1; b1 = 1
prior1 = dbeta(theta, a1, b1)
prior1 = prior1 / sum(prior1)
lines(theta, prior1, col = 2, lty = 2)

# calculate the corresponding posterior, Beta(y+a1, n-y+b1)
post1 = dbeta(theta, y + a1, n - y + b1)
post1 = post1 / sum(post1)
lines(theta, post1, col = 2, lty = 1)

Influences on posterior
• The posterior mean is a compromise between the prior mean and the data.
• The stronger the prior, the more weight the prior has in the posterior.
• The larger the sample size, the more weight the likelihood has in the posterior.

Conjugate priors
• It might be reasonable to expect the posterior distribution to be of the same form as the prior distribution. This is the principle of conjugacy.
• A conjugate prior for a Binomial likelihood is a Beta distribution: the posterior is then also a Beta distribution.

Dynamic Updating
If we obtain more data, we do not have to redo all of the analysis: our posterior from the first analysis simply becomes our prior for the next analysis.
Binomial example:
Stage 0. Prior: p(θ) ~ Beta(1,1); ie E(θ) = 0.5.
Stage 1. Observe y = 22 presences from 29 sites. Likelihood: p(y|θ) ~ Bin(n=29, θ). Posterior: p(θ|y) ~ Beta(23,8); ie E(θ|y) = 0.74.
Stage 2. Observe 5 more presences from 10 sites. Likelihood: p(y|θ) ~ Bin(n=10, θ). Prior: p(θ) ~ Beta(23,8). Posterior: p(θ|y) ~ Beta(28,13); ie E(θ|y) = 0.68.

Your turn
Confirm that the dynamic updating method described in the previous slide gives the same outcome as analysing all of the data together.
Data:
Prior:
Posterior:
Posterior mean:

Example: Estimating a normal mean
We have n observations Y = (y1, ..., yn) from a normal distribution with unknown mean μ and known variance σ².

Normal model, unknown mean and unknown variance
Common prior choices for the variance:
σ² ~ Inverse Gamma(a, b)
σ ~ Uniform(a, b)
σ ~ Half-Cauchy(a, b)
What do these 'look like'?

Linear regression
Model: y = Xβ + ε with ε ~ N(0, σ²I). Priors are placed on β (eg normal) and on σ² (eg inverse gamma); under conjugate choices the posterior for (β, σ²) is available in closed form.

Model Comparison
• Bayes factors, posterior odds, BIC, DIC
• Reversible jump MCMC, birth-and-death MCMC
• Model averaging

Bayes factors
• Consider models M1 and M2 (not necessarily nested).
• Choose a model based on its posterior probability given the data. This is proportional to the prior probability of the model multiplied by the likelihood of the model given the data, so we consider:
p(M2|y) ∝ p(M2) p(y|M2)
• To compare M2 versus M1:
p(M2|y) / p(M1|y) = {p(M2) / p(M1)} {p(y|M2) / p(y|M1)}
• The second term (the ratio of marginal likelihoods) is termed the Bayes factor B21. This is similar to a likelihood ratio, but p(y|M) is integrated over the parameters instead of maximised: eg,
p(y|M1) = ∫ p(y|M1, θ1) p(θ1) dθ1
• 2 log(B21) is on the same scale as the usual deviance and likelihood ratio statistics.
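For the Beta-Binomial example this integral has a closed form, p(y|M) = C(n,y) B(y+a, n−y+b) / B(a,b), so a Bayes factor can be computed directly. A sketch (treating the Beta(9,1) and Beta(1,1) priors from the earlier exercise as two 'models' is an illustration added here, not a comparison from the notes):

# marginal likelihood of y under a Beta(a, b) prior (Beta-Binomial)
marg.lik = function(y, n, a, b) choose(n, y) * beta(y + a, n - y + b) / beta(a, b)
y = 22; n = 29
B21 = marg.lik(y, n, 9, 1) / marg.lik(y, n, 1, 1)   # M2: Beta(9,1) vs M1: Beta(1,1)
c(B21 = B21, "2log(B21)" = 2 * log(B21))

The resulting value can then be read against the guidelines below.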
Guidelines for Bayes Factors (arbitrary!)

B21          2log(B21)    Interpretation
<1           Negative     Supports M1
1 to 3       0 to 2       Weak support for M2
3 to 20      2 to 6       Supports M2
20 to 150    6 to 10      Strong evidence for M2
>150         >10          Very strong support for M2

Bayesian Information Criterion (BIC)
• Approximates the Bayes factor.
• Under some assumptions, if p is the dimension of the model and n is the number of observations:
BIC = log p(y|θ*, M) − (p/2) log n
where θ* is the maximum likelihood estimate. For a normal linear model this can be rewritten as BIC = n log(1 − R²) + p log(n).

Discussion of BIC
• BIC penalises models which improve fit at the expense of more parameters (it encourages parsimony).
• A problem is that the true dimensionality (number of parameters p) of the model is often not known, and the number of parameters may increase with sample size n.
• The effective number of parameters can be used as an approximation (Spiegelhalter et al., 1999).
• Alternatives are the DIC (deviance information criterion, calculated in WinBUGS), conditional posterior predictive probabilities, etc.

Markov chain Monte Carlo
• "Decompose" the joint posterior distribution into a sequence of conditional distributions; these are often much simpler (eg, simple univariate normals).
• Simulate from each conditional distribution in turn, using a simulation method that forms a Markov chain (the new simulated value relies only on the previous value). This gives a set of simulated values θ^(1), θ^(2), ..., θ^(i), ... which converges to the required distribution, so the resulting simulations come from the required joint posterior.
• We can use Markov chain theory to make statements about the behaviour and convergence of the chain.

Computational Algorithms
• Gibbs sampling: sample from the conditionals themselves.
• Metropolis-Hastings: sample from an "easy" distribution and accept those values that conform to the conditional distribution.
• Lots of variations: reversible jump, slice sampling, particle filters, perfect sampling, adaptive rejection sampling, etc.
• Need to ensure conditions, eg detailed balance and reversibility.
• Approximations: Variational Bayes (VB), Approximate Bayesian Computation (ABC), Sequential Monte Carlo (SMC).

Gibbs sampling
Suppose we have a joint posterior p(θ1, θ2 | y, ...).
0. Choose starting values θ1^(0), θ2^(0).
1. At the ith iteration:
   Sample θ1^(i) from p(θ1 | θ2^(i-1), y, ...)
   Sample θ2^(i) from p(θ2 | θ1^(i), y, ...)
2. Repeat step 1 many times.
3. Make inferences based on the simulated values.

Exercise
Data: yi | λ ~ Poisson(λ), i = 1, ..., m
      yi | φ ~ Poisson(φ), i = m+1, ..., n
Priors: λ ~ Gamma(a, b); φ ~ Gamma(c, d); m is discrete over {1, ..., n}
with a, b, c, d known constants.
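The full conditionals here are available in closed form: λ and φ are conditionally Gamma, and m has a discrete conditional proportional to the likelihood. One possible solution sketch in R; the simulated data (true change point at 20, rates 2 and 6), the seed, and the unit hyperparameters are assumptions chosen for illustration, and m is restricted to {1, ..., n−1} so both segments are non-empty:

# Gibbs sampler for the Poisson change-point model (a sketch)
set.seed(1)
n = 50; true.m = 20
y = c(rpois(true.m, lambda = 2), rpois(n - true.m, lambda = 6))

a = 1; b = 1; c0 = 1; d0 = 1   # known constants; (c, d) renamed to avoid masking base::c
n.iter = 5000
lambda = numeric(n.iter); phi = numeric(n.iter); m = numeric(n.iter)
m[1] = n %/% 2                 # starting value for the change point

for (i in 2:n.iter) {
  # lambda | rest ~ Gamma(a + sum(y[1:m]), b + m)
  lambda[i] = rgamma(1, a + sum(y[1:m[i-1]]), b + m[i-1])
  # phi | rest ~ Gamma(c + sum(y[(m+1):n]), d + n - m)
  phi[i] = rgamma(1, c0 + sum(y[(m[i-1] + 1):n]), d0 + n - m[i-1])
  # m | rest: with a uniform prior on m this is proportional to the
  # likelihood; work on the log scale for numerical stability
  logp = sapply(1:(n - 1), function(k)
    sum(dpois(y[1:k], lambda[i], log = TRUE)) +
    sum(dpois(y[(k + 1):n], phi[i], log = TRUE)))
  m[i] = sample(1:(n - 1), size = 1, prob = exp(logp - max(logp)))
}

keep = 1001:n.iter             # discard burn-in
c(lambda = mean(lambda[keep]), phi = mean(phi[keep]), m = mean(m[keep]))

Note that sample() accepts unnormalised probabilities, so the discrete conditional for m only needs to be computed up to a constant.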