Bayesian Statistics Lecture Notes 2015

B.J.K. Kleijn
Korteweg-de Vries Institute for Mathematics

Contents

Preface
1 Introduction
  1.1 Frequentist statistics
    1.1.1 Data and model
    1.1.2 Frequentist estimation
  1.2 Bayesian statistics
  1.3 The frequentist analysis of Bayesian methods
  1.4 Exercises
2 Bayesian basics
  2.1 Bayes's rule, prior and posterior distributions
    2.1.1 Bayes's rule
    2.1.2 Bayes's billiard
    2.1.3 The Bayesian view of the model
    2.1.4 A frequentist's view of the posterior
  2.2 Bayesian point estimators
    2.2.1 Posterior predictive and posterior mean
    2.2.2 Small-ball and formal Bayes estimators
    2.2.3 The Maximum-A-Posteriori estimator
  2.3 Confidence sets and credible sets
    2.3.1 Frequentist confidence intervals
    2.3.2 Bayesian credible sets
  2.4 Tests and Bayes factors
    2.4.1 Neyman-Pearson tests
    2.4.2 Randomized tests and the Neyman-Pearson lemma
    2.4.3 Test sequences and minimax optimality
    2.4.4 Bayes factors
  2.5 Decision theory and classification
    2.5.1 Decision theory
    2.5.2 Bayesian decision theory
    2.5.3 Frequentist versus Bayesian classification
  2.6 Exercises
3 Choice of the prior
  3.1 Subjective priors
    3.1.1 Motivation for the subjectivist approach
    3.1.2 Methods for the construction of subjective priors
  3.2 Non-informative priors
    3.2.1 Uniform priors
    3.2.2 Jeffreys prior and reference priors
  3.3 Conjugate families
    3.3.1 Basic definitions and some examples
    3.3.2 Exponential families
  3.4 Hierarchical priors
    3.4.1 Hyperparameters and hyperpriors
    3.4.2 Hierarchical prior construction in an example
  3.5 Empirical prior choice
    3.5.1 Model selection with empirical methods
  3.6 Dirichlet priors
    3.6.1 Dirichlet distributions
    3.6.2 The Dirichlet process
  3.7 Exercises
4 Bayesian asymptotics
  4.1 Efficiency in smooth parametric models
    4.1.1 Optimality in smooth, parametric estimation problems
    4.1.2 Regularity and efficiency
  4.2 Le Cam's Bernstein-von Mises theorem
    4.2.1 Proof of the Bernstein-von Mises theorem
    4.2.2 Three subsidiary lemmas
  4.3 Exercises
A Measure theory
  A.1 Sets and sigma-algebras
  A.2 Measures
  A.3 Measurability, random variables and integration
  A.4 Existence of stochastic processes
  A.5 Conditional distributions
  A.6 Convergence in spaces of probability measures
Bibliography

Preface

These lecture notes were written for the course 'Bayesian Statistics', taught at the University of Amsterdam in the spring of 2007. The course was aimed at first-year MSc students in statistics, mathematics and related fields. The aim was for students to understand the basic properties of Bayesian statistical methods, to be able to apply this knowledge to statistical questions, and to know the extent (and limitations) of conclusions based thereon. Considered were the basic properties of the procedure, choice of the prior by objective and subjective criteria, and Bayesian methods of inference. In addition, some non-parametric Bayesian modelling and basic posterior asymptotic behaviour received due attention.

An attempt has been made to make these lecture notes as self-contained as possible. Nevertheless, the reader is expected to have been exposed to some statistics, preferably from a mathematical perspective.
It is not assumed that the reader is familiar with asymptotic statistics. Where possible, definitions, lemmas and theorems have been formulated such that they cover parametric and nonparametric models alike. An index, references and an extensive bibliography are included.

Since Bayesian statistics is formulated in terms of probability theory, some background in measure theory is prerequisite to understanding these notes in detail. However, the reader is not supposed to have all measure-theoretic knowledge handy: appendix A provides an overview of relevant measure-theoretic material. In the description and handling of nonparametric statistical models, functional analysis and topology play a role. Of the latter two, however, only the most basic notions are used and all necessary detail in this respect will be provided during the course.

For corrections to early versions of these notes the author thanks C. Muris.

Bas Kleijn, Amsterdam, January 2007

Chapter 1
Introduction

The goal of statistical inference is to understand, describe and estimate (aspects of) the randomness of measured data. Quite naturally, this invites the assumption that the data represents a sample from an unknown but fixed probability distribution.

1.1 Frequentist statistics

Any frequentist inferential procedure relies on three basic ingredients: the data, a model and an estimation procedure. The central assumption in frequentism is that the data has a definite but unknown underlying distribution to which all inference pertains.

1.1.1 Data and model

The data is a measurement or observation which we denote by $Y$, taking values in a corresponding samplespace.

Definition 1.1. The samplespace for an observation $Y$ is a measurable space $(\mathscr{Y}, \mathscr{B})$ (see definition A.2) containing all values that $Y$ can take upon measurement.

Measurements and data can take any form, ranging from categorical data (sometimes referred to as nominal data, where the samplespace is simply a (usually finite) set of points or labels with no further mathematical structure), through ordinal data (also known as ranked data, where the samplespace is endowed with a total ordering) and interval data (where, in addition to having an ordering, the samplespace allows one to compare differences or distances between points), to ratio data (where we have all the structure of the real line). Moreover, $Y$ can collect the results of a number of measurements, so that it takes its values in the form of a vector (think of an experiment involving repeated, stochastically independent measurements of the same quantity, leading to a so-called independent and identically distributed (or i.i.d.) sample). The data $Y$ may even take its values in a space of functions or in other infinite-dimensional spaces, for example in the statistical study of continuous-time time-series.

The samplespace $\mathscr{Y}$ is assumed to be a measurable space to enable the consideration of probability measures on $\mathscr{Y}$, formalizing the uncertainty in measurement of $Y$.
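To make this sampling setup concrete before the formal assumption is stated below, here is a minimal simulation sketch (not part of the original notes; NumPy, a standard normal "truth" and $n = 100$ are purely illustrative assumptions). It generates an i.i.d. sample $Y = (Y_1, \ldots, Y_n)$ with values in $\mathbb{R}^n$ from a distribution that the simulation chooses but that the statistician is supposed to treat as unknown.

```python
# Minimal illustration of the frequentist data model (illustrative choices only):
# the sample Y = (Y_1, ..., Y_n) is i.i.d. from a fixed but "unknown" distribution.
import numpy as np

rng = np.random.default_rng(seed=0)

n = 100
# The simulation picks a concrete distribution to play the role of the truth;
# the statistician is assumed to see only the realized sample y, never this choice.
y = rng.normal(loc=0.0, scale=1.0, size=n)

# All frequentist inference is a function of y alone, e.g. the sample mean and
# sample variance as data-based summaries of the unknown distribution.
print(y.mean(), y.var(ddof=1))
```

The point of the sketch is the division of roles: the distribution generating the data is fixed but inaccessible, so every inferential statement has to be a function of the observed sample alone.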
As was said in the opening words of this chapter, frequentist statistics hinges on the assumption that there exists a probability measure $P_0 : \mathscr{B} \to [0,1]$ on the samplespace $\mathscr{Y}$ representing the "true distribution of the data":

$$ Y \sim P_0. \tag{1.1} $$

Hence, from the frequentist perspective, statistics revolves around the central question "What does the data make clear about $P_0$?", which may be considered in parts by questions like "From the data, what can we say about the mean of $P_0$?", "Based on the data that we have, how sharply can we formulate hypotheses concerning the value of the variance of $P_0$?", etcetera.

The second ingredient of a statistical procedure is a model, which contains all explanations under consideration of the randomness in $Y$.

Definition 1.2. A (frequentist) statistical model $\mathscr{P}$ is a collection of probability measures $P : \mathscr{B} \to [0,1]$ on the samplespace $(\mathscr{Y}, \mathscr{B})$. The distributions $P_\theta$ are called model distributions. For every samplespace $(\mathscr{Y}, \mathscr{B})$, the collection of all probability distributions is called the full model (sometimes referred to as the full non-parametric model), denoted $\mathscr{M}(\mathscr{Y}, \mathscr{B})$.

The model $\mathscr{P}$ contains the candidate distributions for $Y$ that the statistician considers "reasonable" explanations of the uncertainty he observes (or expects to observe) in $Y$. As such, it constitutes a choice of the statistician analyzing the data rather than a given.

From a more mathematical perspective, we observe that a model $\mathscr{P}$ on $(\mathscr{Y}, \mathscr{B})$ is a subset of the vector space $\mathrm{ca}(\mathscr{Y}, \mathscr{B})$ of all signed measures $\mu : \mathscr{B} \to \mathbb{R}$ (that is, all countably additive, real-valued set functions) that are of finite total variation. Equipped with the total-variational norm (see appendix A, definition A.7), $\mu \mapsto \|\mu\|$, $\mathrm{ca}(\mathscr{Y}, \mathscr{B})$ is a Banach space, in which the full model can be characterized as follows:

$$ \mathscr{M}(\mathscr{Y}, \mathscr{B}) = \bigl\{ P \in \mathrm{ca}(\mathscr{Y}, \mathscr{B}) : P \geq 0,\ P(\mathscr{Y}) = 1 \bigr\}. $$

Often, we describe models as families of probability densities rather than distributions.

Definition 1.3. If there exists a $\sigma$-finite measure $\mu : \mathscr{B} \to [0,\infty]$ such that $P \ll \mu$ for all $P \in \mathscr{P}$, we say that the model is dominated.

The Radon-Nikodym theorem (see theorem A.26) guarantees that we may represent a dominated probability measure $P$ in terms of a probability density function $p = dP/d\mu : \mathscr{Y} \to [0,\infty)$ that satisfies $\int_A p(y)\,d\mu(y) = P(A)$ for all $A \in \mathscr{B}$. For dominated models, it makes sense to adopt a slightly different mathematical perspective: if $\mu$ dominates $\mathscr{P}$, we map $\mathscr{P}$ to the space of all $\mu$-integrable functions $L^1(\mu)$ by means of the Radon-Nikodym mapping.

Example 1.4. Suppose that $\mathscr{Y}$ is countable (and let $\mathscr{B}$ be the powerset of $\mathscr{Y}$): then the measure $\mu$ that puts mass one at every point in $\mathscr{Y}$, also known as the counting measure on $\mathscr{Y}$, is $\sigma$-finite and dominates every other (finite) measure on $\mathscr{Y}$. Consequently, any model on $(\mathscr{Y}, \mathscr{B})$ can be represented in terms of elements $p$ of the Banach space $L^1(\mu)$, more commonly denoted as

$$ \ell^1 = \Bigl\{ (f_1, f_2, \ldots) \in \mathbb{R}^\infty : \sum_{i \geq 1} |f_i| < \infty \Bigr\}, $$

where it is noted that $p_i \geq 0$ and $\|p\| = \sum_i |p_i| = \sum_i p_i = 1$ for all $P \in \mathscr{M}(\mathscr{Y}, \mathscr{B})$.
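As a numerical companion to example 1.4 (again not part of the original notes; NumPy/SciPy and the Poisson family with parameters 3.0 and 3.5 are illustrative assumptions), the sketch below represents two distributions on the countable samplespace $\mathscr{Y} = \{0, 1, 2, \ldots\}$ by their densities with respect to the counting measure, i.e. as (truncated) elements of $\ell^1$.

```python
# Densities with respect to the counting measure on Y = {0, 1, 2, ...},
# represented as (truncated) sequences in ell^1 (illustrative Poisson choices).
import numpy as np
from scipy.stats import poisson

support = np.arange(200)          # truncation point; the omitted tail mass is negligible here
p = poisson.pmf(support, mu=3.0)  # density of one model distribution P
q = poisson.pmf(support, mu=3.5)  # density of another model distribution Q

# Each density is non-negative with ell^1 norm (essentially) equal to one,
# as required for elements of the full model M(Y, B).
print(p.min() >= 0.0, p.sum(), q.sum())

# The ell^1 distance between the densities is the total-variational norm of
# the signed measure P - Q in this representation.
print(np.abs(p - q).sum())
```

In this representation the model distributions are exactly the non-negative elements of the unit sphere of $\ell^1$, which is how the full model $\mathscr{M}(\mathscr{Y}, \mathscr{B})$ sits inside the Banach space $\mathrm{ca}(\mathscr{Y}, \mathscr{B})$.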