Bayesian Inference Chapter 4: Regression and Hierarchical Models


Conchi Ausín and Mike Wiper
Department of Statistics, Universidad Carlos III de Madrid
Master in Business Administration and Quantitative Methods / Master in Mathematical Engineering

Objective

[Slide photographs: A. F. M. Smith and Dennis Lindley.]

We analyze the Bayesian approach to fitting normal and generalized linear models and introduce the Bayesian hierarchical modeling approach. We also study the modeling and forecasting of time series.

Contents

1 Normal linear models
  1.1 ANOVA model
  1.2 Simple linear regression model
2 Generalized linear models
3 Hierarchical models
4 Dynamic models

Normal linear models

A normal linear model has the form

$$y = X\theta + \epsilon,$$

where $y = (y_1, \ldots, y_n)^T$ is the observed data, $X$ is a known $n \times k$ matrix, called the design matrix, $\theta = (\theta_1, \ldots, \theta_k)^T$ is the parameter vector, and $\epsilon$ follows a multivariate normal distribution. Usually, it is assumed that

$$\epsilon \sim \mathcal{N}\left(\mathbf{0}, \frac{1}{\phi}\,\mathbf{I}\right).$$

A simple example of a normal linear model is the simple linear regression model, where

$$X^T = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \end{pmatrix} \quad \text{and} \quad \theta = (\alpha, \beta)^T.$$

Consider a normal linear model, $y = X\theta + \epsilon$. A conjugate prior distribution is a normal-gamma distribution:

$$\theta \mid \phi \sim \mathcal{N}\left(m, \frac{1}{\phi}V\right), \qquad \phi \sim \mathcal{G}\left(\frac{a}{2}, \frac{b}{2}\right).$$

Then, the posterior distribution given $y$ is also a normal-gamma distribution, with

$$m^* = \left(X^TX + V^{-1}\right)^{-1}\left(X^Ty + V^{-1}m\right), \qquad V^* = \left(X^TX + V^{-1}\right)^{-1},$$
$$a^* = a + n, \qquad b^* = b + y^Ty + m^TV^{-1}m - m^{*T}V^{*-1}m^*.$$

The posterior mean is given by

$$E[\theta \mid y] = \left(X^TX + V^{-1}\right)^{-1}\left(X^Ty + V^{-1}m\right)
= \left(X^TX + V^{-1}\right)^{-1}\left(X^TX\left(X^TX\right)^{-1}X^Ty + V^{-1}m\right)
= \left(X^TX + V^{-1}\right)^{-1}\left(X^TX\,\hat{\theta} + V^{-1}m\right),$$

where $\hat{\theta} = \left(X^TX\right)^{-1}X^Ty$ is the maximum likelihood estimator. Thus, this expression may be interpreted as a weighted average of the prior estimator, $m$, and the MLE, $\hat{\theta}$, with weights proportional to precisions since, conditional on $\phi$, the prior variance is $\frac{1}{\phi}V$ and the distribution of the MLE from the classical viewpoint is $\hat{\theta} \mid \phi \sim \mathcal{N}\left(\theta, \frac{1}{\phi}\left(X^TX\right)^{-1}\right)$.

Consider a normal linear model, $y = X\theta + \epsilon$, and assume the limiting prior distribution $p(\theta, \phi) \propto \frac{1}{\phi}$. Then we have that

$$\theta \mid y, \phi \sim \mathcal{N}\left(\hat{\theta}, \frac{1}{\phi}\left(X^TX\right)^{-1}\right), \qquad
\phi \mid y \sim \mathcal{G}\left(\frac{n-k}{2}, \frac{y^Ty - \hat{\theta}^T\left(X^TX\right)\hat{\theta}}{2}\right).$$

Note that $\hat{\sigma}^2 = \frac{y^Ty - \hat{\theta}^T(X^TX)\hat{\theta}}{n-k}$ is the usual classical estimator of $\sigma^2 = \frac{1}{\phi}$. In this case, Bayesian credible intervals, estimators, etc. will coincide with their classical counterparts.
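To make the conjugate update concrete, the sketch below (an editorial illustration, not part of the original slides) computes $m^*$, $V^*$, $a^*$ and $b^*$ with NumPy on simulated data; the variable names and the simulated design are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from y = X theta + eps, with eps ~ N(0, (1/phi) I)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
theta_true = np.array([1.0, 2.0, -0.5])
phi_true = 4.0                                   # precision; error variance = 1/phi
y = X @ theta_true + rng.normal(scale=1 / np.sqrt(phi_true), size=n)

# Normal-gamma prior: theta | phi ~ N(m, V/phi), phi ~ Gamma(a/2, b/2)
m = np.zeros(k)
V = 100.0 * np.eye(k)
a, b = 1.0, 1.0

# Conjugate posterior update (the formulas above)
V_inv = np.linalg.inv(V)
V_star = np.linalg.inv(X.T @ X + V_inv)
m_star = V_star @ (X.T @ y + V_inv @ m)
a_star = a + n
b_star = b + y @ y + m @ V_inv @ m - m_star @ (X.T @ X + V_inv) @ m_star

print("posterior mean of theta:", m_star)
print("posterior mean of phi  :", a_star / b_star)   # E[phi | y] for Gamma(a*/2, b*/2)
```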
ANOVA model

The ANOVA model is an example of a normal linear model where

$$y_{ij} = \theta_i + \epsilon_{ij}, \qquad \epsilon_{ij} \sim \mathcal{N}\left(0, \frac{1}{\phi}\right),$$

for $i = 1, \ldots, k$ and $j = 1, \ldots, n_i$. Thus, the parameters are $\theta = (\theta_1, \ldots, \theta_k)$, the observed data are $y = (y_{11}, \ldots, y_{1n_1}, y_{21}, \ldots, y_{2n_2}, \ldots, y_{k1}, \ldots, y_{kn_k})^T$, and the design matrix is the block matrix

$$X = \begin{pmatrix}
\mathbf{1}_{n_1} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \mathbf{1}_{n_2} & \cdots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{0} & \mathbf{0} & \cdots & \mathbf{1}_{n_k}
\end{pmatrix},$$

where $\mathbf{1}_{n_i}$ denotes a column of $n_i$ ones.

Assume conditionally independent normal priors, $\theta_i \mid \phi \sim \mathcal{N}\left(m_i, \frac{1}{\alpha_i \phi}\right)$, for $i = 1, \ldots, k$, and a gamma prior $\phi \sim \mathcal{G}\left(\frac{a}{2}, \frac{b}{2}\right)$. This corresponds to a normal-gamma prior distribution for $(\theta, \phi)$ with $m = (m_1, \ldots, m_k)^T$ and $V = \mathrm{diag}\left(\frac{1}{\alpha_1}, \ldots, \frac{1}{\alpha_k}\right)$.

Then, it is obtained that

$$\theta \mid y, \phi \sim \mathcal{N}\left(\begin{pmatrix} \frac{n_1\bar{y}_{1\cdot} + \alpha_1 m_1}{n_1 + \alpha_1} \\ \vdots \\ \frac{n_k\bar{y}_{k\cdot} + \alpha_k m_k}{n_k + \alpha_k} \end{pmatrix}, \; \frac{1}{\phi}\,\mathrm{diag}\left(\frac{1}{\alpha_1 + n_1}, \ldots, \frac{1}{\alpha_k + n_k}\right)\right)$$

and

$$\phi \mid y \sim \mathcal{G}\left(\frac{a + n}{2}, \; \frac{b + \sum_{i=1}^{k}\sum_{j=1}^{n_i}\left(y_{ij} - \bar{y}_{i\cdot}\right)^2 + \sum_{i=1}^{k}\frac{n_i\alpha_i}{n_i + \alpha_i}\left(\bar{y}_{i\cdot} - m_i\right)^2}{2}\right).$$

If we assume instead the reference prior, $p(\theta, \phi) \propto \frac{1}{\phi}$, we have

$$\theta \mid y, \phi \sim \mathcal{N}\left(\begin{pmatrix} \bar{y}_{1\cdot} \\ \vdots \\ \bar{y}_{k\cdot} \end{pmatrix}, \; \frac{1}{\phi}\,\mathrm{diag}\left(\frac{1}{n_1}, \ldots, \frac{1}{n_k}\right)\right), \qquad
\phi \mid y \sim \mathcal{G}\left(\frac{n-k}{2}, \frac{(n-k)\,\hat{\sigma}^2}{2}\right),$$

where $\hat{\sigma}^2 = \frac{1}{n-k}\sum_{i=1}^{k}\sum_{j=1}^{n_i}\left(y_{ij} - \bar{y}_{i\cdot}\right)^2$ is the classical variance estimate for this problem. A 95% posterior interval for $\theta_1 - \theta_2$ is given by

$$\bar{y}_{1\cdot} - \bar{y}_{2\cdot} \pm \hat{\sigma}\,\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}\; t_{n-k}(0.975),$$

which is equal to the usual, classical interval.

Example: ANOVA model

Suppose that an ecologist is interested in analysing how the masses of starlings (a type of bird) vary between four locations. Sample data on the weights of 10 starlings from each of the four locations can be downloaded from http://arcue.botany.unimelb.edu.au/bayescode.html. Assume a Bayesian one-way ANOVA model for these data, in which a different mean is considered for each location and the variation in mass between different birds is described by a normal distribution with a common variance. Compare the results with those obtained with classical methods.

Simple linear regression model

Another example of a normal linear model is the simple regression model

$$y_i = \alpha + \beta x_i + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}\left(0, \frac{1}{\phi}\right),$$

for $i = 1, \ldots, n$. Suppose that we use the limiting prior

$$p(\alpha, \beta, \phi) \propto \frac{1}{\phi}.$$

Then, we have that

$$\begin{pmatrix}\alpha \\ \beta\end{pmatrix} \,\Big|\, y, \phi \sim \mathcal{N}\left(\begin{pmatrix}\hat{\alpha} \\ \hat{\beta}\end{pmatrix}, \; \frac{1}{\phi\, n\, s_x}\begin{pmatrix}\sum_{i=1}^n x_i^2 & -n\bar{x} \\ -n\bar{x} & n\end{pmatrix}\right), \qquad
\phi \mid y \sim \mathcal{G}\left(\frac{n-2}{2}, \frac{s_y\left(1 - r^2\right)}{2}\right),$$

where

$$\hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}, \quad \hat{\beta} = \frac{s_{xy}}{s_x}, \quad s_x = \sum_{i=1}^n (x_i - \bar{x})^2, \quad s_y = \sum_{i=1}^n (y_i - \bar{y})^2,$$
$$s_{xy} = \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y}), \quad r = \frac{s_{xy}}{\sqrt{s_x s_y}}, \quad \hat{\sigma}^2 = \frac{s_y\left(1 - r^2\right)}{n-2}.$$

Thus, the marginal posterior distributions of $\alpha$ and $\beta$ are Student-t distributions:

$$\frac{\alpha - \hat{\alpha}}{\sqrt{\hat{\sigma}^2\,\frac{\sum_{i=1}^n x_i^2}{n\, s_x}}} \,\Big|\, y \sim t_{n-2}, \qquad
\frac{\beta - \hat{\beta}}{\sqrt{\hat{\sigma}^2 / s_x}} \,\Big|\, y \sim t_{n-2}.$$

Therefore, for example, a 95% credible interval for $\beta$ is given by

$$\hat{\beta} \pm \frac{\hat{\sigma}}{\sqrt{s_x}}\; t_{n-2}(0.975),$$

equal to the usual classical interval.
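As an illustration of these reference-prior results (added here; not part of the original slides), the following sketch computes $\hat{\alpha}$, $\hat{\beta}$, $\hat{\sigma}^2$ and the 95% credible interval for $\beta$ with NumPy and SciPy; the data are simulated and the function name is an assumption made for the example.

```python
import numpy as np
from scipy import stats

def bayes_simple_regression(x, y, level=0.95):
    """Posterior summaries for y_i = alpha + beta*x_i + eps_i under p(alpha, beta, phi) ∝ 1/phi."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    sx = np.sum((x - xbar) ** 2)
    sy = np.sum((y - ybar) ** 2)
    sxy = np.sum((x - xbar) * (y - ybar))
    beta_hat = sxy / sx
    alpha_hat = ybar - beta_hat * xbar
    r2 = sxy ** 2 / (sx * sy)
    sigma2_hat = sy * (1 - r2) / (n - 2)
    tq = stats.t.ppf(0.5 + level / 2, df=n - 2)          # e.g. t_{n-2}(0.975)
    half_width = tq * np.sqrt(sigma2_hat / sx)
    return alpha_hat, beta_hat, sigma2_hat, (beta_hat - half_width, beta_hat + half_width)

# Simulated example
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=40)
y = 2.0 + 0.7 * x + rng.normal(scale=0.5, size=40)
print(bayes_simple_regression(x, y))
```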
Suppose now that we wish to predict a future observation,

$$y_{\mathrm{new}} = \alpha + \beta x_{\mathrm{new}} + \epsilon_{\mathrm{new}}.$$

Note that

$$E[y_{\mathrm{new}} \mid \phi, y] = \hat{\alpha} + \hat{\beta}x_{\mathrm{new}},$$
$$V[y_{\mathrm{new}} \mid \phi, y] = \frac{1}{\phi}\left(\frac{\sum_{i=1}^n x_i^2 + n x_{\mathrm{new}}^2 - 2n\bar{x}x_{\mathrm{new}}}{n\, s_x} + 1\right)
= \frac{1}{\phi}\left(\frac{s_x + n\bar{x}^2 + n x_{\mathrm{new}}^2 - 2n\bar{x}x_{\mathrm{new}}}{n\, s_x} + 1\right).$$

Therefore,

$$y_{\mathrm{new}} \mid \phi, y \sim \mathcal{N}\left(\hat{\alpha} + \hat{\beta}x_{\mathrm{new}}, \; \frac{1}{\phi}\left(\frac{(\bar{x} - x_{\mathrm{new}})^2}{s_x} + \frac{1}{n} + 1\right)\right),$$

and then

$$\frac{y_{\mathrm{new}} - \left(\hat{\alpha} + \hat{\beta}x_{\mathrm{new}}\right)}{\hat{\sigma}\sqrt{\frac{(\bar{x} - x_{\mathrm{new}})^2}{s_x} + \frac{1}{n} + 1}} \,\Big|\, y \sim t_{n-2},$$

leading to the following 95% credible interval for $y_{\mathrm{new}}$:

$$\hat{\alpha} + \hat{\beta}x_{\mathrm{new}} \pm \hat{\sigma}\sqrt{\frac{(\bar{x} - x_{\mathrm{new}})^2}{s_x} + \frac{1}{n} + 1}\; t_{n-2}(0.975),$$

which coincides with the usual, classical interval.

Example: Simple linear regression model

Consider the data file prostate.data, which can be downloaded from http://statweb.stanford.edu/~tibs/ElemStatLearn/. Among other clinical measures, it includes the level of prostate-specific antigen in logs (lpsa) and the log cancer volume (lcavol) for 97 men who were about to receive a radical prostatectomy. Use a Bayesian linear regression model to predict lpsa in terms of lcavol. Compare the results with a classical linear regression fit.

Generalized linear models

The generalized linear model generalizes the normal linear model by allowing the possibility of non-normal error distributions and by allowing for a non-linear relationship between $y$ and $x$. A generalized linear model is specified by two functions:

1. A conditional, exponential-family density function of $y$ given $x$, parameterized by a mean parameter, $\mu = \mu(x) = E[Y \mid x]$, and (possibly) a dispersion parameter, $\phi > 0$, that is independent of $x$.

2. A (one-to-one) link function, $g(\cdot)$, which relates the mean, $\mu = \mu(x)$, to the covariate vector, $x$, as $g(\mu) = x\theta$.
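A familiar generalized linear model beyond the normal one is logistic regression, where $y$ is binary and $g(\mu) = \log\{\mu/(1-\mu)\}$. The sketch below (an editorial illustration; the slides contain no code) fits a logistic regression by maximum likelihood using iteratively reweighted least squares on simulated data; the function name and simulated coefficients are assumptions made for the example.

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25):
    """Maximum-likelihood logistic regression (logit link) via iteratively reweighted least squares."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ theta                              # linear predictor x theta
        mu = 1.0 / (1.0 + np.exp(-eta))              # inverse logit link
        W = np.clip(mu * (1.0 - mu), 1e-10, None)    # working weights
        z = eta + (y - mu) / W                       # working response
        theta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return theta

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
theta_true = np.array([-0.5, 1.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ theta_true)))
print(fit_logistic_irls(X, y))                       # roughly recovers (-0.5, 1.5)
```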
Recommended publications
  • Generalized Linear Models (GLMs)
    San José State University, Math 261A: Regression Theory & Methods. Generalized Linear Models (GLMs). Dr. Guangliang Chen. This lecture is based on textbook sections 13.1–13.3. Outline of this presentation: What is a GLM? Logistic regression; Poisson regression. In ordinary linear regression, we assume that the response is a linear function of the regressors plus Gaussian noise: y = β0 + β1x1 + ··· + βkxk + ε, i.e. y ~ N(x'β, σ²), where x'β is the linear form and ε ~ N(0, σ²) is the noise. The model can be reformulated in terms of the distribution of the response, y | x ~ N(µ, σ²), and the dependence of the mean on the predictors, µ = E(y | x) = x'β. [Figure: fitted line β0 + β1x through simulated data with β = (1, 2).] Generalized linear models (GLMs) extend linear regression by allowing the response variable to have a general distribution (with mean µ = E(y | x)) and a mean that depends on the predictors through a link function g: that is, g(µ) = x'β or, equivalently, µ = g⁻¹(x'β). In a GLM, the response is typically assumed to have a distribution in the exponential family, which is a large class of probability distributions that have pdfs of the form f(x | θ) = a(x)b(θ) exp(c(θ) · T(x)), including: Normal (ordinary linear regression); Bernoulli (logistic regression, modeling binary data); Binomial (multinomial logistic regression, modeling general categorical data); Poisson (Poisson regression, modeling count data); Exponential and Gamma (survival analysis).
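    Since the outline above covers Poisson regression for count data, a minimal sketch of such a fit is given below; it assumes the statsmodels package and simulated data, and is an editorial illustration rather than part of the recommended lecture.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0, 2, size=300)
X = sm.add_constant(x)                     # design matrix with an intercept column
mu = np.exp(0.3 + 1.2 * x)                 # log link: log(mu) = 0.3 + 1.2 x
y = rng.poisson(mu)

# Poisson GLM with the canonical log link
result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(result.params)                       # estimates close to (0.3, 1.2)
```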
  • Assessing Fairness with Unlabeled Data and Bayesian Inference
    Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference. Disi Ji (Department of Computer Science, University of California, Irvine), Padhraic Smyth (Department of Computer Science, University of California, Irvine), Mark Steyvers (Department of Cognitive Sciences, University of California, Irvine). Abstract. We investigate the problem of reliably assessing group fairness when labeled examples are few but unlabeled examples are plentiful. We propose a general Bayesian framework that can augment labeled data with unlabeled data to produce more accurate and lower-variance estimates compared to methods based on labeled data alone. Our approach estimates calibrated scores for unlabeled examples in each group using a hierarchical latent variable model conditioned on labeled examples. This in turn allows for inference of posterior distributions with associated notions of uncertainty for a variety of group fairness metrics. We demonstrate that our approach leads to significant and consistent reductions in estimation error across multiple well-known fairness datasets, sensitive attributes, and predictive models. The results show the benefits of using both unlabeled data and Bayesian inference in terms of assessing whether a prediction model is fair or not. 1 Introduction. Machine learning models are increasingly used to make important decisions about individuals. At the same time it has become apparent that these models are susceptible to producing systematically biased decisions with respect to sensitive attributes such as gender, ethnicity, and age [Angwin et al., 2017, Berk et al., 2018, Corbett-Davies and Goel, 2018, Chen et al., 2019, Beutel et al., 2019]. This has led to a significant amount of recent work in machine learning addressing these issues, including research on both (i) definitions of fairness in a machine learning context (e.g., Dwork et al.
  • Bayesian Methods: Review of Generalized Linear Models
    Bayesian Methods: Review of Generalized Linear Models. Ryan Bakker, University of Georgia, ICPSR Day 2. Likelihood and Maximum Likelihood Principles. Likelihood theory is an important part of Bayesian inference: it is how the data enter the model. The basis is Fisher's principle: what value of the unknown parameter is "most likely" to have generated the observed data. Example: flip a coin 10 times, get 5 heads; the MLE for p is 0.5. This is easily the most common and well-understood general estimation process. Starting details: Y is an n × k design or observation matrix, θ is a k × 1 unknown coefficient vector to be estimated; we want p(θ | Y) (the joint sampling distribution or posterior) from p(Y | θ) (the joint probability function). Define the likelihood function, L(θ | Y) = ∏_{i=1}^{n} p(Y_i | θ), which is no longer on the probability metric. Our goal is the maximum likelihood value of θ: θ̂ such that L(θ̂ | Y) ≥ L(θ | Y) for all θ ∈ Θ, where Θ is the class of admissible values for θ. It is actually easier to work with the natural log of the likelihood function, ℓ(θ | Y) = log L(θ | Y). We also find it useful to work with the score function, the first derivative of the log-likelihood function with respect to the parameters of interest: ℓ̇(θ | Y) = ∂ℓ(θ | Y)/∂θ. Setting ℓ̇(θ | Y) equal to zero and solving gives the MLE θ̂, the "most likely" value of θ from the parameter space Θ, treating the observed data as given.
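    As a small numerical companion to the coin-flip example in this excerpt (an editorial illustration assuming SciPy), maximizing the Bernoulli log-likelihood for 5 heads in 10 flips recovers the MLE p = 0.5:

```python
import numpy as np
from scipy.optimize import minimize_scalar

heads, n = 5, 10

def neg_log_likelihood(p):
    """Negative Bernoulli log-likelihood for n flips with `heads` successes."""
    return -(heads * np.log(p) + (n - heads) * np.log(1 - p))

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(result.x)   # approximately 0.5, the MLE for p
```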
  • Generalized Linear Models
    Generalized Linear Models. A generalized linear model (GLM) consists of three parts. i) The first part is a random variable giving the conditional distribution of a response Y_i given the values of a set of covariates X_ij. In the original work on GLMs by Nelder and Wedderburn (1972) this random variable was a member of an exponential family, but later work has extended beyond this class of random variables. ii) The second part is a linear predictor, η_i = α + β_1 X_i1 + β_2 X_i2 + ··· + β_k X_ik. iii) The third part is a smooth and invertible link function g(·) which transforms the expected value of the response variable, μ_i = E(Y_i), and is equal to the linear predictor: g(μ_i) = η_i = α + β_1 X_i1 + β_2 X_i2 + ··· + β_k X_ik. As shown in Tables 15.1 and 15.2, both the general linear model that we have studied extensively and the logistic regression model from Chapter 14 are special cases of this model. One property of members of the exponential family of distributions is that the conditional variance of the response is a function of its mean, V(μ), and possibly a dispersion parameter φ. The expressions for the variance functions for common members of the exponential family are shown in Table 15.2. Also, for each distribution there is a so-called canonical link function, which simplifies some of the GLM calculations, and which is also shown in Table 15.2. Estimation and Testing for GLMs. Parameter estimation in GLMs is conducted by the method of maximum likelihood. As with logistic regression models from the last chapter, the generalization of the residual sum of squares from the general linear model is the residual deviance, D_m = 2(log L_s − log L_m), where L_m is the maximized likelihood for the model of interest, and L_s is the maximized likelihood for a saturated model, which has one parameter per observation and fits the data as well as possible.
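    To illustrate the residual deviance formula D_m = 2(log L_s − log L_m) quoted above, the sketch below uses made-up Poisson counts and fitted means; it is an editorial addition, not part of the recommended text.

```python
import numpy as np
from scipy import stats

# Hypothetical count data and fitted means from some Poisson model of interest
y = np.array([2, 0, 3, 1, 4, 2])
mu_model = np.array([1.8, 0.5, 2.6, 1.2, 3.5, 2.4])

log_L_model = stats.poisson.logpmf(y, mu_model).sum()
# Saturated model: one parameter per observation, fitted mean equal to the observation
log_L_saturated = stats.poisson.logpmf(y, np.maximum(y, 1e-12)).sum()

deviance = 2 * (log_L_saturated - log_L_model)
print(deviance)
```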
  • The Bayesian Lasso
    The Bayesian Lasso. Trevor Park and George Casella, University of Florida, Gainesville, Florida, USA. Summary. The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the priors on the regression parameters are independent double-exponential (Laplace) distributions. This posterior can also be accessed through a Gibbs sampler using conjugate normal priors for the regression parameters, with independent exponential hyperpriors on their variances. This leads to tractable full conditional distributions through a connection with the inverse Gaussian distribution. Although the Bayesian Lasso does not automatically perform variable selection, it does provide standard errors and Bayesian credible intervals that can guide variable selection. Moreover, the structure of the hierarchical model provides both Bayesian and likelihood methods for selecting the Lasso parameter. The methods described here can also be extended to other Lasso-related estimation methods like bridge regression and robust variants. Keywords: Gibbs sampler, inverse Gaussian, linear regression, empirical Bayes, penalised regression, hierarchical models, scale mixture of normals. 1. Introduction. The Lasso of Tibshirani (1996) is a method for simultaneous shrinkage and model selection in regression problems. It is most commonly applied to the linear regression model y = µ1_n + Xβ + ε, where y is the n × 1 vector of responses, µ is the overall mean, X is the n × p matrix of standardised regressors, β = (β_1, ..., β_p)^T is the vector of regression coefficients to be estimated, and ε is the n × 1 vector of independent and identically distributed normal errors with mean 0 and unknown variance σ². The estimate of µ is taken as the average ȳ of the responses, and the Lasso estimate β̂ minimises the sum of the squared residuals, subject to a given bound t on its L1 norm.
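    A rough sketch of the Gibbs sampler described in this abstract is given below. It treats the Lasso parameter λ as fixed, assumes y has been centred and X standardised, and uses NumPy only; the function name, hyperparameter choices, and numerical safeguards are assumptions made for the illustration, not details from the original paper.

```python
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=2000, seed=0):
    """Rough Gibbs sampler for the Bayesian Lasso (lambda fixed, y centred, X standardised)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    sigma2, inv_tau2 = 1.0, np.ones(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 A^{-1}), with A = X'X + diag(1/tau_j^2)
        A_inv = np.linalg.inv(XtX + np.diag(inv_tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # sigma2 | rest ~ Inverse-Gamma
        resid = y - X @ beta
        shape = (n - 1) / 2 + p / 2
        rate = resid @ resid / 2 + beta @ (inv_tau2 * beta) / 2
        sigma2 = 1.0 / rng.gamma(shape, 1.0 / rate)
        # 1/tau_j^2 | rest ~ Inverse-Gaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
        mu_ig = lam * np.sqrt(sigma2) / np.maximum(np.abs(beta), 1e-10)
        inv_tau2 = rng.wald(mu_ig, lam ** 2)
        draws[t] = beta
    return draws

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 5)); X = (X - X.mean(0)) / X.std(0)
y = X @ np.array([2.0, 0.0, 0.0, -1.5, 0.0]) + rng.normal(size=100); y = y - y.mean()
print(bayesian_lasso_gibbs(X, y)[500:].mean(axis=0))   # posterior means after burn-in
```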
  • Posterior Propriety and Admissibility of Hyperpriors in Normal Hierarchical Models
    The Annals of Statistics, 2005, Vol. 33, No. 2, 606–646. DOI: 10.1214/009053605000000075. © Institute of Mathematical Statistics, 2005. Posterior Propriety and Admissibility of Hyperpriors in Normal Hierarchical Models. By James O. Berger, William Strawderman and Dejun Tang; Duke University and SAMSI, Rutgers University, and Novartis Pharmaceuticals. Hierarchical modeling is wonderful and here to stay, but hyperparameter priors are often chosen in a casual fashion. Unfortunately, as the number of hyperparameters grows, the effects of casual choices can multiply, leading to considerably inferior performance. As an extreme, but not uncommon, example, use of the wrong hyperparameter priors can even lead to impropriety of the posterior. For exchangeable hierarchical multivariate normal models, we first determine when a standard class of hierarchical priors results in proper or improper posteriors. We next determine which elements of this class lead to admissible estimators of the mean under quadratic loss; such considerations provide one useful guideline for choice among hierarchical priors. Finally, computational issues with the resulting posterior distributions are addressed. 1. Introduction. 1.1. The model and the problems. Consider the block multivariate normal situation (sometimes called the "matrix of means problem") specified by the following hierarchical Bayesian model: (1) X ~ N_p(θ, I), θ ~ N_p(B, Σ_π), where the p × 1 vectors X and θ are stacked from the blocks X_1, X_2, ..., X_m and θ_1, θ_2, ..., θ_m, respectively. AMS 2000 subject classifications: Primary 62C15; secondary 62F15. Key words and phrases: Covariance matrix, quadratic loss, frequentist risk, posterior impropriety, objective priors, Markov chain Monte Carlo.
  • Heteroscedastic Errors
    Heteroscedastic Errors. Richard Lockhart, STAT 350: Heteroscedastic Errors and GLIM. Sometimes plots and/or tests show that the error variances σ_i² = Var(ε_i) depend on i. There are several standard approaches to fixing the problem, depending on the nature of the dependence: weighted least squares, transformation of the response, and generalized linear models. Weighted Least Squares: suppose the variances are known except for a constant factor, that is, σ_i² = σ²/w_i; then use weighted least squares (see Chapter 10 in the text). This usually arises realistically in the following situations: Y_i is an average of n_i measurements where you know n_i, in which case w_i = n_i; or plots suggest that σ_i² might be proportional to some power of some covariate, σ_i² = k x_i^γ, in which case w_i = x_i^{−γ}. Variances depending on (the mean of) Y: two standard approaches are available; the older approach is transformation, and the newer approach is the use of a generalized linear model (see STAT 402). Transformation: compute Y_i* = g(Y_i) for some function g like the logarithm or square root, then regress Y_i* on the covariates. This approach sometimes works for skewed response variables like income; after transformation we occasionally find the errors are more nearly normal, more homoscedastic, and that the model is simpler (see page 130ff and check under transformations and Box–Cox in the index). Generalized Linear Models: transformation uses the model E(g(Y_i)) = x_i^T β, while generalized linear models use g(E(Y_i)) = x_i^T β; generally the latter approach offers more flexibility.
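    A minimal weighted-least-squares sketch for the case described above, where each response is an average of n_i measurements so that w_i = n_i; this is an editorial illustration with simulated data, and the function name is an assumption.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """WLS estimate: minimise sum_i w_i (y_i - x_i'beta)^2, i.e. beta = (X'WX)^{-1} X'Wy."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(5)
n_i = rng.integers(1, 20, size=60)                         # y_i is an average of n_i measurements
x = rng.uniform(0, 5, size=60)
X = np.column_stack([np.ones(60), x])
y = 1.0 + 0.8 * x + rng.normal(scale=1.0 / np.sqrt(n_i))   # Var(eps_i) = sigma^2 / n_i
print(weighted_least_squares(X, y, w=n_i))                 # estimates close to (1.0, 0.8)
```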
  • Generalized Linear Models
    CHAPTER 6. Generalized linear models. 6.1 Introduction. Generalized linear modeling is a framework for statistical analysis that includes linear and logistic regression as special cases. Linear regression directly predicts continuous data y from a linear predictor Xβ = β0 + X1β1 + ··· + Xkβk. Logistic regression predicts Pr(y = 1) for binary data from a linear predictor with an inverse-logit transformation. A generalized linear model involves: 1. A data vector y = (y1, ..., yn). 2. Predictors X and coefficients β, forming a linear predictor Xβ. 3. A link function g, yielding a vector of transformed data ŷ = g⁻¹(Xβ) that are used to model the data. 4. A data distribution, p(y | ŷ). 5. Possibly other parameters, such as variances, overdispersions, and cutpoints, involved in the predictors, link function, and data distribution. The options in a generalized linear model are the transformation g and the data distribution p. In linear regression, the transformation is the identity (that is, g(u) ≡ u) and the data distribution is normal, with standard deviation σ estimated from data. In logistic regression, the transformation is the inverse-logit, g⁻¹(u) = logit⁻¹(u) (see Figure 5.2a on page 80), and the data distribution is defined by the probability for binary data: Pr(y = 1) = ŷ. This chapter discusses several other classes of generalized linear model, which we list here for convenience: The Poisson model (Section 6.2) is used for count data; that is, where each data point y_i can equal 0, 1, 2, .... The usual transformation g used here is the logarithmic, so that g⁻¹(u) = exp(u) transforms a continuous linear predictor X_iβ to a positive ŷ_i. The data distribution is Poisson. It is usually a good idea to add a parameter to this model to capture overdispersion, that is, variation in the data beyond what would be predicted from the Poisson distribution alone.
  • Empirical Bayes Methods for Combining Likelihoods Bradley EFRON
    Empirical Bayes Methods for Combining Likelihoods. Bradley Efron. Suppose that several independent experiments are observed, each one yielding a likelihood L_k(θ_k) for a real-valued parameter of interest θ_k. For example, θ_k might be the log-odds ratio for a 2 × 2 table relating to the k-th population in a series of medical experiments. This article concerns the following empirical Bayes question: how can we combine all of the likelihoods L_k to get an interval estimate for any one of the θ_k's, say θ_1? The results are presented in the form of a realistic computational scheme that allows model building and model checking in the spirit of a regression analysis. No special mathematical forms are required for the priors or the likelihoods. This scheme is designed to take advantage of recent methods that produce approximate numerical likelihoods L_k(θ_k) even in very complicated situations, with all nuisance parameters eliminated. The empirical Bayes likelihood theory is extended to situations where the θ_k's have a regression structure as well as an empirical Bayes relationship. Most of the discussion is presented in terms of a hierarchical Bayes model and concerns how such a model can be implemented without requiring large amounts of Bayesian input. Frequentist approaches, such as bias correction and robustness, play a central role in the methodology. Key words: ABC method; Confidence expectation; Generalized linear mixed models; Hierarchical Bayes; Meta-analysis for likelihoods; Relevance; Special exponential families. 1. Introduction. A typical statistical analysis blends data from independent experimental units into a single combined inference for a parameter of interest θ. Empirical Bayes, or hierarchical or meta-analytic, analyses involve a second level of data acquisition: several independent experiments are observed, each involving many units, but each perhaps having a different value of the parameter θ. The statistic θ̂_k is an estimate of the true log-odds ratio θ_k in the k-th experimental population; a typical estimated standard deviation for θ̂_k, also appearing in Table 1, is SD_k = sqrt(1/(a_k + .5) + 1/(b_k + .5) + 1/(c_k + .5) + 1/(d_k + .5)), with SD_13 = .61, for example.
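    For concreteness (an editorial addition with made-up 2 × 2 counts), the log-odds ratio and the standard deviation formula quoted above, with the 0.5 continuity correction, can be computed as:

```python
import numpy as np

def log_odds_ratio(a, b, c, d):
    """Log-odds ratio of a 2x2 table [[a, b], [c, d]] and its standard deviation,
    using the 0.5 continuity correction."""
    a, b, c, d = (v + 0.5 for v in (a, b, c, d))
    theta_hat = np.log(a * d / (b * c))
    sd = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return theta_hat, sd

# e.g. treatment arm: 8 events, 52 non-events; control arm: 15 events, 45 non-events
print(log_odds_ratio(8, 52, 15, 45))
```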
  • Generalized Linear Models
    Generalized Linear Models. Advanced Methods for Data Analysis (36-402/36-608), Spring 2014. 1 Generalized linear models. 1.1 Introduction: two regressions. So far we've seen two canonical settings for regression. Let X ∈ R^p be a vector of predictors. In linear regression, we observe Y ∈ R and assume a linear model: E(Y | X) = β^T X, for some coefficients β ∈ R^p. In logistic regression, we observe Y ∈ {0, 1} and we assume a logistic model: log[P(Y = 1 | X) / (1 − P(Y = 1 | X))] = β^T X. What's the similarity here? Note that in the logistic regression setting, P(Y = 1 | X) = E(Y | X). Therefore, in both settings we are assuming that a transformation of the conditional expectation E(Y | X) is a linear function of X, i.e., g(E(Y | X)) = β^T X, for some function g. In linear regression, this transformation was the identity transformation g(u) = u; in logistic regression, it was the logit transformation g(u) = log(u/(1 − u)). Different transformations might be appropriate for different types of data. E.g., the identity transformation g(u) = u is not really appropriate for logistic regression (why?), and the logit transformation g(u) = log(u/(1 − u)) is not appropriate for linear regression (why?), but each is appropriate in its own intended domain. For a third data type, it is entirely possible that neither transformation is really appropriate. What to do then? We think of another transformation g that is in fact appropriate, and this is the basic idea behind a generalized linear model. 1.2 Generalized linear models. Given predictors X ∈ R^p and an outcome Y, a generalized linear model is defined by three components: a random component, that specifies a distribution for Y | X; a systematic component, that relates a parameter η to the predictors X; and a link function, that connects the random and systematic components. The random component specifies a distribution for the outcome variable (conditional on X).
  • Generalized Linear Models Outperform Commonly Used Canonical Analysis in Estimating Spatial Structure of Presence/Absence Data
    Generalized Linear Models outperform commonly used canonical analysis in estimating spatial structure of presence/absence data. Lélis A. Carlos-Júnior (1,2,3), Joel C. Creed (4), Rob Marrs (2), Rob J. Lewis (5), Timothy P. Moulton (4), Rafael Feijó-Lima (1,6) and Matthew Spencer (2). 1 Programa de Pós-Graduação em Ecologia e Evolução, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil; 2 School of Environmental Sciences, University of Liverpool, Liverpool, United Kingdom; 3 Departamento de Biologia, Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, Brazil; 4 Departamento de Ecologia, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil; 5 Department of Forest Genetics and Biodiversity, Norwegian Institute of Bioeconomy Research, Bergen, Norway; 6 Division of Biological Sciences, University of Montana, Missoula, MT, United States of America. Abstract. Background. Ecological communities tend to be spatially structured due to environmental gradients and/or spatially contagious processes such as growth, dispersion and species interactions. Data transformation followed by usage of algorithms such as Redundancy Analysis (RDA) is a fairly common approach in studies searching for spatial structure in ecological communities, despite recent suggestions advocating the use of Generalized Linear Models (GLMs). Here, we compared the performance of GLMs and RDA in describing spatial structure in ecological community composition data. We simulated realistic presence/absence data typical of many β-diversity studies. For model selection we used standard methods commonly used in most studies involving RDA and GLMs. Methods. We simulated communities with known spatial structure, based on three real spatial community presence/absence datasets (one terrestrial, one marine and one freshwater).
  • A New Hyperprior Distribution for Bayesian Regression Model with Application in Genomics
    bioRxiv preprint, doi: https://doi.org/10.1101/102244 (posted January 22, 2017). A New Hyperprior Distribution for Bayesian Regression Model with Application in Genomics. Renato Rodrigues Silva, Institute of Mathematics and Statistics, Federal University of Goiás, Goiânia, Goiás, Brazil. Abstract. In regression analysis, there are situations where the model has more predictor variables than observations of the dependent variable, resulting in the problem known as "large p, small n". In the last fifteen years, this problem has received a lot of attention, especially in the genome-wide context. Here we propose the Bayes H model, a Bayesian regression model using a mixture of two scaled inverse chi-square distributions as the hyperprior distribution of the variance of each regression coefficient. This model is implemented in the R package BayesH. Introduction. In regression analysis, there are situations where the model has more predictor variables than observations of the dependent variable, resulting in the problem known as "large p, small n" [1]. To address this problem, several methods have already been developed, such as ridge regression [2], least absolute shrinkage and selection operator (LASSO) regression [3], bridge regression [4], smoothly clipped absolute deviation (SCAD) regression [5] and others. This class of regression models is known in the literature as regression models with penalized likelihood [6]. In the Bayesian paradigm, some methods have also been proposed, such as stochastic search variable selection [7] and the Bayesian LASSO [8].