A Compendium of Conjugate Priors
Recommended publications
Univariate and Bivariate Truncated von Mises Distributions
TR:UPM-ETSINF/DIA/2015-1. Univariate and bivariate truncated von Mises distributions. Pablo Fernandez-Gonzalez, Concha Bielza, Pedro Larrañaga. Department of Artificial Intelligence, Universidad Politécnica de Madrid. Abstract: In this article we study the univariate and bivariate truncated von Mises distribution as a generalization of the von Mises distribution (Jupp and Mardia (1989), Mardia and Jupp (2000)). This implies the addition of two or four new truncation parameters in the univariate and bivariate cases, respectively. The results include the definition and properties of the distribution and maximum likelihood estimators for the univariate and bivariate cases. Additionally, the analysis of the bivariate case shows that the conditional distribution is a truncated von Mises distribution, whereas the marginal distribution generalizes the distribution introduced in Singh (2002). From the viewpoint of applications, we test the distribution with simulated data, as well as with data regarding leaf inclination angles (Bowyer and Danson (2005)) and dihedral angles in protein chains (Murzin AG (1995)). This research aims to establish this probability distribution as a potential option for modelling or simulating any kind of phenomena where circular distributions are applicable. Key words and phrases: Angular probability distributions, Directional statistics, von Mises distribution, Truncated probability distributions. 1 Introduction. The von Mises distribution has received undisputed attention in the field of directional statistics (Jupp and Mardia (1989)) and in other areas like supervised classification (Lopez-Cruz et al. (2013)). Thanks to desirable properties such as its symmetry, mathematical tractability and convergence to the wrapped normal distribution (Mardia and Jupp (2000)) for high concentrations, it is a viable option for many statistical analyses.
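As a concrete illustration of the truncation idea, the sketch below (my own, not the authors' code) evaluates a univariate von Mises kernel exp(κ cos(x − μ)) restricted to an arc [a, b] and renormalizes it numerically; the parameter values and the quadrature-based normalization are assumptions for illustration rather than the paper's closed-form treatment.

```python
import numpy as np
from scipy.integrate import quad

def truncated_von_mises_pdf(x, mu, kappa, a, b):
    """Density of a von Mises(mu, kappa) truncated to the arc [a, b].

    Illustrative only: the kernel exp(kappa * cos(x - mu)) is renormalized
    over [a, b] by numerical quadrature rather than a closed form.
    """
    kernel = lambda t: np.exp(kappa * np.cos(t - mu))
    norm, _ = quad(kernel, a, b)            # normalizing constant over the arc
    x = np.asarray(x, dtype=float)
    inside = (x >= a) & (x <= b)
    return np.where(inside, np.exp(kappa * np.cos(x - mu)) / norm, 0.0)

# Example: truncation to the upper half circle
xs = np.linspace(-np.pi, np.pi, 5)
print(truncated_von_mises_pdf(xs, mu=0.0, kappa=2.0, a=0.0, b=np.pi))
```

Points outside the truncation arc get density zero; inside the arc the shape is exactly the von Mises kernel, only rescaled.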
The Exponential Family
The Exponential Family. David M. Blei, Columbia University. November 9, 2016. The exponential family is a class of densities (Brown, 1986). It encompasses many familiar forms of likelihoods, such as the Gaussian, Poisson, multinomial, and Bernoulli. It also encompasses their conjugate priors, such as the gamma, Dirichlet, and beta. 1 Definition. A probability density in the exponential family has the form

p(x | η) = h(x) exp{η⊤ t(x) − a(η)},   (1)

where η is the natural parameter, t(x) are the sufficient statistics, h(x) is the "base measure," and a(η) is the log normalizer. Examples of exponential family distributions include the Gaussian, gamma, Poisson, Bernoulli, multinomial, and Markov models. Examples of distributions that are not in this family include the Student-t, mixtures, and hidden Markov models. (We are considering these families as distributions of data. The latent variables are implicitly marginalized out.) The statistic t(x) is called sufficient because the probability, as a function of η, depends on x only through t(x). The exponential family has fundamental connections to the world of graphical models (Wainwright and Jordan, 2008). For our purposes, we'll use exponential families as components in directed graphical models, e.g., in mixtures of Gaussians. The log normalizer ensures that the density integrates to 1,

a(η) = log ∫ h(x) exp{η⊤ t(x)} dx.   (2)

This is the negative logarithm of the normalizing constant. The function h(x) can be a source of confusion. One way to interpret h(x) is as the (unnormalized) distribution of x when η = 0. It might involve statistics of x that are not in t(x), i.e., that do not vary with the natural parameter.
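To make the notation concrete, here is a small sketch (mine, not from the notes) that writes the Bernoulli distribution in the form of Equation (1), with t(x) = x, h(x) = 1, natural parameter η = log(p/(1−p)), and log normalizer a(η) = log(1 + e^η), and checks it against the usual parameterization.

```python
import numpy as np

# Bernoulli(p) written in exponential-family form:
#   p(x | eta) = h(x) exp{eta * t(x) - a(eta)}
# with t(x) = x, h(x) = 1, natural parameter eta = log(p / (1 - p)),
# and log normalizer a(eta) = log(1 + exp(eta)).

def log_prob_bernoulli_expfam(x, eta):
    t = x                       # sufficient statistic
    log_h = 0.0                 # base measure (constant for the Bernoulli)
    a = np.log1p(np.exp(eta))   # log normalizer
    return log_h + eta * t - a

p = 0.3
eta = np.log(p / (1 - p))
for x in (0, 1):
    assert np.isclose(np.exp(log_prob_bernoulli_expfam(x, eta)),
                      p if x == 1 else 1 - p)
print("Bernoulli matches its exponential-family representation.")
```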
Machine Learning Conjugate Priors and Monte Carlo Methods
Hierarchical Bayes for Non-IID Data, Conjugate Priors, Monte Carlo Methods. CPSC 540: Machine Learning, Conjugate Priors and Monte Carlo Methods. Mark Schmidt, University of British Columbia, Winter 2016.
Admin: Nothing exciting?
Last Time: Bayesian Statistics. In Bayesian statistics we work with the posterior over parameters,

p(θ | x, α, β) = p(x | θ) p(θ | α, β) / p(x | α, β).

We discussed empirical Bayes, where you optimize the prior using the marginal likelihood,

argmax_{α,β} p(x | α, β) = argmax_{α,β} ∫ p(x | θ) p(θ | α, β) dθ.

This can be used to optimize λ_j, the polynomial degree, the RBF σ_i, polynomial vs. RBF features, etc. We also considered hierarchical Bayes, where you put a prior on the prior,

p(α, β | x, γ) = p(x | α, β) p(α, β | γ) / p(x | γ).

But is the hyper-prior really needed?
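A minimal sketch of the empirical Bayes recipe above, assuming a beta-Bernoulli model in which several groups share one Beta(α, β) prior; the model choice and the simulated data are my illustration, not the course's code. For this conjugate pair the marginal likelihood is available in closed form, so the hyperparameters can be chosen by direct optimization.

```python
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize

# Empirical Bayes: pick (alpha, beta) by maximizing the marginal likelihood
#   p(x | alpha, beta) = prod_j B(alpha + s_j, beta + n_j - s_j) / B(alpha, beta),
# which is the closed-form integral over each group's success probability.

rng = np.random.default_rng(1)
true_theta = rng.beta(4.0, 2.0, size=20)        # one success rate per group
n = np.full(20, 30)                             # 30 trials per group
s = rng.binomial(n, true_theta)                 # successes per group

def neg_log_marginal(log_ab):
    a, b = np.exp(log_ab)                       # log scale keeps a, b > 0
    return -np.sum(betaln(a + s, b + n - s) - betaln(a, b))

res = minimize(neg_log_marginal, x0=np.zeros(2), method="Nelder-Mead")
print("empirical-Bayes estimate of (alpha, beta):", np.exp(res.x))
```

The fitted (α, β) then define the prior used for each group's posterior, exactly the "optimize the prior using the marginal likelihood" step in the slide.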
Package 'Rfast'
Package 'Rfast', May 18, 2021
Type: Package
Title: A Collection of Efficient and Extremely Fast R Functions
Version: 2.0.3
Date: 2021-05-17
Author: Manos Papadakis, Michail Tsagris, Marios Dimitriadis, Stefanos Fafalios, Ioannis Tsamardinos, Matteo Fasiolo, Giorgos Borboudakis, John Burkardt, Changliang Zou, Kleanthi Lakiotaki and Christina Chatzipantsiou.
Maintainer: Manos Papadakis <[email protected]>
Depends: R (>= 3.5.0), Rcpp (>= 0.12.3), RcppZiggurat
LinkingTo: Rcpp (>= 0.12.3), RcppArmadillo
SystemRequirements: C++11
BugReports: https://github.com/RfastOfficial/Rfast/issues
URL: https://github.com/RfastOfficial/Rfast
Description: A collection of fast (utility) functions for data analysis. Column- and row-wise means, medians, variances, minimums, maximums, many t, F and G-square tests, many regressions (normal, logistic, Poisson), are some of the many fast functions. References: a) Tsagris M., Papadakis M. (2018). Taking R to its limits: 70+ tips. PeerJ Preprints 6:e26605v1 <doi:10.7287/peerj.preprints.26605v1>. b) Tsagris M. and Papadakis M. (2018). Forward regression in R: from the extreme slow to the extreme fast. Journal of Data Science, 16(4): 771--780. <doi:10.6339/JDS.201810_16(4).00006>.
License: GPL (>= 2.0)
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2021-05-17 23:50:05 UTC
R topics documented: Rfast-package; All k possible combinations from n elements; Analysis of covariance; Analysis of variance with a count variable; Angular central Gaussian random values simulation; ANOVA for two quasi Poisson regression models; Apply method to Positive and Negative number …
Polynomial Singular Value Decompositions of a Family of Source-Channel Models
Polynomial Singular Value Decompositions of a Family of Source-Channel Models. Citation: Makur, Anuran and Lizhong Zheng. "Polynomial Singular Value Decompositions of a Family of Source-Channel Models." IEEE Transactions on Information Theory 63, 12 (December 2017): 7716-7728. © 2017 IEEE. As published: http://dx.doi.org/10.1109/tit.2017.2760626. Publisher: Institute of Electrical and Electronics Engineers (IEEE). Version: author's final manuscript. Citable link: https://hdl.handle.net/1721.1/131019. Terms of use: Creative Commons Attribution-Noncommercial-Share Alike, http://creativecommons.org/licenses/by-nc-sa/4.0/.
Anuran Makur, Student Member, IEEE, and Lizhong Zheng, Fellow, IEEE.
Abstract—In this paper, we show that the conditional expectation operators corresponding to a family of source-channel models, defined by natural exponential families with quadratic variance functions and their conjugate priors, have orthonormal polynomials as singular vectors. These models include the Gaussian channel with Gaussian source, the Poisson channel with gamma source, and the binomial channel with beta source. To derive the singular vectors of these models, we prove and employ the equivalent condition that their conditional moments are strictly degree preserving polynomials.
… interested in a particular subclass of one-parameter exponential families known as natural exponential families with quadratic variance functions (NEFQVF). So, we define natural exponential families next. Definition 1 (Natural Exponential Family). Given a measurable space (Y, B(Y)) with a σ-finite measure µ, where Y ⊆ ℝ and B(Y) denotes the Borel σ-algebra on Y, the parametrized …
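A quick Monte Carlo sanity check of the degree-preserving, orthogonal-polynomial claim for one member of this family, the Gaussian channel with Gaussian source; the check, its parameters, and the binning scheme are my own sketch, not the paper's derivation. The expected behavior is that conditional expectations of probabilists' Hermite polynomials of the input come out proportional to Hermite polynomials of a rescaled output.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Model: X ~ N(0, 1), Y = X + N(0, sigma^2).
# Claim checked: E[He_k(X) | Y = y] = (1 + sigma^2)^(-k/2) * He_k(y / sqrt(1 + sigma^2)),
# where He_k is the k-th probabilists' Hermite polynomial.

rng = np.random.default_rng(0)
sigma2 = 0.5
x = rng.normal(size=2_000_000)
y = x + rng.normal(scale=np.sqrt(sigma2), size=x.size)

k = 3
coeffs = np.zeros(k + 1)
coeffs[k] = 1.0                                   # selects He_k
he_k_x = hermeval(x, coeffs)

# Crude conditional expectation: average He_k(X) within narrow bins of Y.
bins = np.linspace(-2.5, 2.5, 26)
idx = np.digitize(y, bins)
for j in (5, 13, 20):                             # a few representative bins
    mask = idx == j
    y_mid = 0.5 * (bins[j - 1] + bins[j])
    empirical = he_k_x[mask].mean()
    predicted = (1 + sigma2) ** (-k / 2) * hermeval(y_mid / np.sqrt(1 + sigma2), coeffs)
    print(f"y = {y_mid:+.2f}:  MC {empirical:+.3f}   theory {predicted:+.3f}")
```

The Monte Carlo and theoretical values agree up to binning and sampling error, which is the degree-preserving property the abstract refers to, specialized to the Gaussian-Gaussian pair.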
Conservative Wasserstein Training for Pose Estimation (arXiv:1911.00962v1 [cs.CV], 3 Nov 2019)
Conservative Wasserstein Training for Pose Estimation. Xiaofeng Liu (1,2, †, *), Yang Zou (1, †), Tong Che (3, †), Peng Ding (4), Ping Jia (4), Jane You (5), B.V.K. Vijaya Kumar (1). 1 Carnegie Mellon University; 2 Harvard University; 3 MILA; 4 CIOMP, Chinese Academy of Sciences; 5 The Hong Kong Polytechnic University. † Contributed equally. * Corresponding author: [email protected].
Abstract. This paper targets the task with discrete and periodic class labels (e.g., pose/orientation estimation) in the context of deep learning. The commonly used cross-entropy or regression loss is not well matched to this problem as they ignore the periodic nature of the labels and the class similarity, or assume labels are continuous values. We propose to incorporate inter-class correlations in a Wasserstein training framework by pre-defining (i.e., using arc length of a circle) or adaptively learning the ground metric. We extend the ground metric as a linear, convex or concave increasing function w.r.t. arc length from an optimization perspective.
Figure 1 (caption). The limitation of CE loss for pose estimation. The ground truth direction of the car is t_{j*}. Two possible softmax predictions (green bar) of the pose estimator have the same probability at the t_{j*} position. Therefore, both predicted distributions have the same CE loss. [Panel annotations: pose with true label; probability of softmax prediction; expected probability (i.e., 1 in the one-hot case) of each pose, with Σ_{i=0}^{N−1} s_i = 1; same cross-entropy; more ideal distribution vs. inferior distribution.]
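The sketch below (my own, not the paper's released code) shows the core idea behind the figure: with a circular arc-length ground metric, a Wasserstein-type loss against a one-hot target at bin j* reduces to Σᵢ sᵢ · d(i, j*), so two predictions with the same probability at j* but differently placed remaining mass get different losses, unlike cross-entropy. The bin count and example distributions are assumptions for illustration.

```python
import numpy as np

# Ground metric: arc length between discrete pose bins on the circle.
# With a one-hot target at bin j_star, every unit of predicted mass must be
# transported to j_star, so the optimal transport cost is sum_i s_i * d(i, j_star).

def circular_ground_metric(n_bins):
    i = np.arange(n_bins)
    diff = np.abs(i[:, None] - i[None, :])
    return np.minimum(diff, n_bins - diff) * (2 * np.pi / n_bins)   # arc length

def wasserstein_to_onehot(s, j_star, ground):
    return float(np.dot(s, ground[:, j_star]))

n = 8
D = circular_ground_metric(n)
j_star = 0
peaked_nearby = np.array([0.5, 0.4, 0.1, 0, 0, 0, 0, 0.0])    # mass near the truth
peaked_far    = np.array([0.5, 0.0, 0.0, 0, 0.4, 0, 0, 0.1])  # same prob at j_star, mass far away

# Cross-entropy sees only s[j_star] and cannot tell these apart;
# the Wasserstein loss penalizes the second prediction more.
print(wasserstein_to_onehot(peaked_nearby, j_star, D))   # smaller loss
print(wasserstein_to_onehot(peaked_far, j_star, D))      # larger loss
```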
Paradoxes and Priors in Bayesian Regression
Paradoxes and Priors in Bayesian Regression. Dissertation presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University. By Agniva Som, B. Stat., M. Stat. Graduate Program in Statistics, The Ohio State University, 2014. Dissertation Committee: Dr. Christopher M. Hans, Advisor; Dr. Steven N. MacEachern, Co-advisor; Dr. Mario Peruggia. © Copyright by Agniva Som, 2014.
Abstract. The linear model has been by far the most popular and most attractive choice of a statistical model over the past century, ubiquitous in both frequentist and Bayesian literature. The basic model has been gradually improved over the years to deal with stronger features in the data like multicollinearity, non-linear or functional data patterns, violation of underlying model assumptions, etc. One valuable direction pursued in the enrichment of the linear model is the use of Bayesian methods, which blend information from the data likelihood and suitable prior distributions placed on the unknown model parameters to carry out inference. This dissertation studies the modeling implications of many common prior distributions in linear regression, including the popular g prior and its recent ameliorations. Formalization of desirable characteristics for model comparison and parameter estimation has led to the growth of appropriate mixtures of g priors that conform to the seven standard model selection criteria laid out by Bayarri et al. (2012). The existence of some of these properties (or lack thereof) is demonstrated by examining the behavior of the prior under suitable limits on the likelihood or on the prior itself.
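As a pointer to what the g prior does in practice, here is a minimal sketch (my illustration, not code from the dissertation), assuming the common zero-centered form β | σ² ~ N(0, g σ² (XᵀX)⁻¹): the posterior mean of β is the least-squares estimate shrunk toward zero by the factor g/(1+g).

```python
import numpy as np

# Zellner's g prior in linear regression (zero-centered form assumed):
# the posterior mean of beta is (g / (1 + g)) times the OLS estimate.

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)     # least-squares estimate
for g in (1.0, 10.0, 100.0):
    beta_post_mean = (g / (1.0 + g)) * beta_ols  # g-prior posterior mean
    print(f"g = {g:>5}: posterior mean = {np.round(beta_post_mean, 3)}")
```

Larger g means a more diffuse prior and less shrinkage, which is exactly the kind of limiting behavior the dissertation examines when diagnosing paradoxes.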
Chapter 2 Parameter Estimation for Wrapped Distributions
ABSTRACT. RAVINDRAN, PALANIKUMAR. Bayesian Analysis of Circular Data Using Wrapped Distributions. (Under the direction of Associate Professor Sujit K. Ghosh.) Circular data arise in a number of different areas such as geological, meteorological, biological and industrial sciences. We cannot use standard statistical techniques to model circular data, due to the circular geometry of the sample space. One of the common methods used to analyze such data is the wrapping approach. Using the wrapping approach, we assume that, by wrapping a probability distribution from the real line onto the circle, we obtain the probability distribution for circular data. This approach creates a vast class of probability distributions that are flexible to account for different features of circular data. However, the likelihood-based inference for such distributions can be very complicated and computationally intensive. The EM algorithm used to compute the MLE is feasible, but is computationally unsatisfactory. Instead, we use Markov Chain Monte Carlo (MCMC) methods with a data augmentation step, to overcome such computational difficulties. Given a probability distribution on the circle, we assume that the original distribution was distributed on the real line, and then wrapped onto the circle. If we can unwrap the distribution off the circle and obtain a distribution on the real line, then the standard statistical techniques for data on the real line can be used. Our proposed methods are flexible and computationally efficient to fit a wide class of wrapped distributions. Furthermore, we can easily compute the usual summary statistics. We present extensive simulation studies to validate the performance of our method.
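A small sketch of the wrapping approach described above (assumptions and parameter values mine): a wrapped normal sample is a real-line normal draw taken modulo 2π, and its density is the corresponding series over wrapping numbers, truncated here for computation.

```python
import numpy as np

# Wrapped normal: f(theta) = sum_k N(theta + 2*pi*k; mu, sigma^2),
# truncated at |k| <= K (accurate for moderate sigma).

rng = np.random.default_rng(0)

def wrapped_normal_pdf(theta, mu, sigma, K=10):
    k = np.arange(-K, K + 1)
    z = (np.asarray(theta)[..., None] + 2 * np.pi * k - mu) / sigma
    return np.exp(-0.5 * z**2).sum(axis=-1) / (sigma * np.sqrt(2 * np.pi))

def wrapped_normal_sample(mu, sigma, size):
    # sample on the real line, then wrap onto [0, 2*pi)
    return np.mod(rng.normal(mu, sigma, size), 2 * np.pi)

theta_grid = np.linspace(0.0, 2 * np.pi, 400)
pdf = wrapped_normal_pdf(theta_grid, mu=np.pi, sigma=1.2)
print("density integrates to ~1:", pdf.sum() * (theta_grid[1] - theta_grid[0]))
print("five wrapped samples:", wrapped_normal_sample(np.pi, 1.2, 5))
```

The data augmentation step in the thesis amounts to treating the unobserved wrapping number k of each observation as a latent variable, so that conditional on k the data live back on the real line.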
Bayesian Filtering: from Kalman Filters to Particle Filters, and Beyond ZHE CHEN
Bayesian Filtering: From Kalman Filters to Particle Filters, and Beyond. Zhe Chen.
Abstract—In this self-contained survey/review paper, we systematically investigate the roots of Bayesian filtering as well as its rich leaves in the literature. Stochastic filtering theory is briefly reviewed with emphasis on nonlinear and non-Gaussian filtering. Following the Bayesian statistics, different Bayesian filtering techniques are developed given different scenarios. Under the linear quadratic Gaussian circumstance, the celebrated Kalman filter can be derived within the Bayesian framework. Optimal/suboptimal nonlinear filtering techniques are extensively investigated. In particular, we focus our attention on the Bayesian filtering approach based on sequential Monte Carlo sampling, the so-called particle filters. Many variants of the particle filter as well as their features (strengths and weaknesses) are discussed. Related theoretical and practical issues are addressed in detail. In addition, some other (new) directions on Bayesian filtering …
Contents (excerpt): IV Bayesian Optimal Filtering: IV-A Optimal Filtering; IV-B Kalman Filtering; IV-C Optimum Nonlinear Filtering (IV-C.1 Finite-dimensional Filters). V Numerical Approximation Methods: V-A Gaussian/Laplace Approximation; V-B Iterative Quadrature; V-C Multigrid Method and Point-Mass Approximation; V-D Moment Approximation; V-E Gaussian Sum Approximation; V-F Deterministic …
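For the linear quadratic Gaussian case mentioned in the abstract, the Bayesian prediction and update steps reduce to the textbook Kalman filter; the sketch below is a generic one-step implementation (mine, not code from the survey), with a toy constant-velocity model as the example.

```python
import numpy as np

# One Kalman filter step: predict through linear dynamics, then update on a
# linear-Gaussian measurement. Both steps are exact Bayesian inference for a
# Gaussian state under the LQG assumptions.

def kalman_step(m, P, y, F, Q, H, R):
    # predict: propagate the posterior mean and covariance
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # update: condition on the new measurement y
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# tiny constant-velocity example (state: position, velocity)
F = np.array([[1.0, 1.0], [0.0, 1.0]]); Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]]);             R = np.array([[0.25]])
m, P = np.zeros(2), np.eye(2)
m, P = kalman_step(m, P, y=np.array([1.2]), F=F, Q=Q, H=H, R=R)
print(m, P)
```

Particle filters replace the closed-form Gaussian posterior with a weighted sample, which is what the later sections of the survey develop.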
36-463/663: Hierarchical Linear Models
36-463/663: Hierarchical Linear Models. Taste of MCMC / Bayes for 3 or more "levels". Brian Junker, 132E Baker Hall, [email protected]. 11/3/2016.
Outline: Practical Bayes; Mastery Learning Example; A brief taste of JAGS and RUBE; Hierarchical Form of Bayesian Models; Extending the "Levels" and the "Slogan"; Mastery Learning – Distribution of "mastery" (to be continued…).
Practical Bayesian Statistics: (posterior) ∝ (likelihood) × (prior). We typically want quantities like a point estimate (the posterior mean, median, or mode); uncertainty (SE, IQR, or other measure of 'spread'); a credible interval (CI), either (θ̂ − 2SE, θ̂ + 2SE) or (θ̂_.025, θ̂_.975); and other aspects of the "shape" of the posterior distribution. Aside: If (prior) ∝ 1, then (posterior) ∝ (likelihood), so the posterior mode = MLE, posterior SE = 1/I(θ̂)^{1/2}, etc.
Obtaining posterior point estimates, credible intervals: Easy if we recognize the posterior distribution and we have formulae for means, variances, etc. Whether or not we have such formulae, we can get similar information by simulating from the posterior distribution. Key idea: approximate posterior summaries by Monte Carlo averages, E[g(θ) | data] ≈ (1/M) Σ_{m=1}^{M} g(θ^(m)), where θ^(1), θ^(2), …, θ^(M) is a sample from f(θ | data).
Example: Mastery Learning. Some computer-based tutoring systems declare that you have "mastered" a skill if you can perform it successfully r times. The number of times x that you erroneously perform the skill before the r-th success is a measure of how likely you are to perform the skill correctly (how well you know the skill). The distribution for the number of failures x before the r-th success is the negative binomial distribution.
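A sketch of the "simulate from the posterior" recipe above, applied to the mastery-learning example; the Beta(a, b) prior on the per-attempt success probability and the particular numbers are my assumptions, not the lecture's. With a negative binomial likelihood (x failures before the r-th success), the conjugate posterior is Beta(a + r, b + x), and the posterior mean, SE, and credible interval can be read straight off simulated draws.

```python
import numpy as np

# Posterior simulation for the mastery-learning success probability p.
# Prior: p ~ Beta(a, b).  Data: x failures before the r-th success.
# Conjugate posterior: p | x ~ Beta(a + r, b + x).

rng = np.random.default_rng(0)
a, b = 1.0, 1.0            # uniform prior
r, x = 3, 7                # mastery rule: 3 successes; 7 errors observed first
draws = rng.beta(a + r, b + x, size=100_000)     # theta^(1), ..., theta^(M)

post_mean = draws.mean()
post_se = draws.std(ddof=1)
ci_lower, ci_upper = np.quantile(draws, [0.025, 0.975])
print(f"posterior mean {post_mean:.3f}, SE {post_se:.3f}, "
      f"95% credible interval ({ci_lower:.3f}, {ci_upper:.3f})")
```

The same summaries computed from MCMC output (e.g., JAGS draws) are interpreted in exactly the same way; only the sampler changes.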
A Geometric View of Conjugate Priors
Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence. A Geometric View of Conjugate Priors. Arvind Agarwal and Hal Daumé III, Department of Computer Science, University of Maryland, College Park, Maryland, USA. [email protected], [email protected].
Abstract. In Bayesian machine learning, conjugate priors are popular, mostly due to mathematical convenience. In this paper, we show that there are deeper reasons for choosing a conjugate prior. Specifically, we formulate the conjugate prior in the form of Bregman divergence and show that it is the inherent geometry of conjugate priors that makes them appropriate and intuitive. This geometric interpretation allows one to view the hyperparameters of conjugate priors as the effective sample points, thus providing additional intuition. We use this geometric understanding of conjugate priors to derive the hyperparameters …
… Using the same geometry also gives the closed-form solution for the maximum-a-posteriori (MAP) problem. We then analyze the prior using concepts borrowed from the information geometry. We show that this geometry induces the Fisher information metric and 1-connection, which are, respectively, the natural metric and connection for the exponential family (Section 5). One important outcome of this analysis is that it allows us to treat the hyperparameters of the conjugate prior as the effective sample points drawn from the distribution under consideration. We finally extend this geometric interpretation of conjugate priors to analyze the hybrid model given by [7] in a purely geometric setting, and justify the argument presented in [1] (i.e. a coupling prior should be conjugate) using a much simpler analysis (Section 6).
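To illustrate the "hyperparameters as effective sample points" reading in a familiar conjugate pair (my example, not the paper's), take Poisson data with a Gamma(α, β) prior in the rate parameterization: the posterior Gamma(α + Σxᵢ, β + n) treats the prior as β pseudo-observations carrying a total count of α, and the posterior mean is the corresponding weighted average of the prior mean and the data mean.

```python
import numpy as np

# Gamma-Poisson conjugate update, read as "prior = pseudo-data":
#   lambda ~ Gamma(alpha, beta)  (rate parameterization),
#   x_i | lambda ~ Poisson(lambda)  =>  lambda | x ~ Gamma(alpha + sum(x), beta + n).

rng = np.random.default_rng(0)
x = rng.poisson(4.0, size=25)

alpha, beta = 8.0, 2.0                  # "2 prior observations totalling 8 events"
alpha_post = alpha + x.sum()
beta_post = beta + x.size

prior_mean = alpha / beta
data_mean = x.mean()
post_mean = alpha_post / beta_post      # blend of prior mean and data mean
weight_prior = beta / (beta + x.size)   # prior weight = pseudo-sample size / total
assert np.isclose(post_mean, weight_prior * prior_mean + (1 - weight_prior) * data_mean)
print(f"prior mean {prior_mean:.2f}, data mean {data_mean:.2f}, posterior mean {post_mean:.2f}")
```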
Recursive Bayesian Filtering in Circular State Spaces
Recursive Bayesian Filtering in Circular State Spaces. Gerhard Kurz, Igor Gilitschenski, Uwe D. Hanebeck; Intelligent Sensor-Actuator-Systems Laboratory (ISAS), Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), Germany.
Abstract. For recursive circular filtering based on circular statistics, we introduce a general framework for estimation of a circular state based on different circular distributions, specifically the wrapped normal distribution and the von Mises distribution. We propose an estimation method for circular systems with nonlinear system and measurement functions. This is achieved by relying on efficient deterministic sampling techniques. Furthermore, we show how the calculations can be simplified in a variety of important special cases, such as systems with additive noise as well as identity system or measurement functions. We introduce several novel key components, particularly a distribution-free prediction algorithm, a new and superior formula for the multiplication of wrapped normal densities, and the ability to deal with non-additive system noise. All proposed methods are thoroughly evaluated and compared to several state-of-the-art solutions.
1. Introduction. Estimation of circular quantities is an omnipresent issue, be it the wind direction, the angle of a robotic revolute joint, the orientation of a turntable, or the direction a vehicle is facing. Circular estimation is not limited to applications involving angles, however, and can be applied to a variety of periodic phenomena. For example phase estimation is a common issue in signal processing, and tracking objects that periodically move along a certain trajectory is also of interest. Standard approaches to circular estimation are typically based on estimation techniques designed for linear scenarios that are tweaked to deal with some of the issues arising in the presence of circular quantities.
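One of the building blocks mentioned above is multiplying circular densities in the measurement update. For the von Mises case there is a standard closed form (the code is my sketch, not the authors' implementation): the product of two von Mises densities is proportional to a von Mises density whose parameters are obtained by adding the two (κ cos μ, κ sin μ) vectors.

```python
import numpy as np

# Product of two von Mises densities:
#   vM(mu1, kappa1) * vM(mu2, kappa2)  ∝  vM(mu, kappa)  with
#   C = kappa1*cos(mu1) + kappa2*cos(mu2),  S = kappa1*sin(mu1) + kappa2*sin(mu2),
#   kappa = sqrt(C^2 + S^2),  mu = atan2(S, C).

def von_mises_multiply(mu1, kappa1, mu2, kappa2):
    C = kappa1 * np.cos(mu1) + kappa2 * np.cos(mu2)
    S = kappa1 * np.sin(mu1) + kappa2 * np.sin(mu2)
    return np.arctan2(S, C), np.hypot(C, S)

# fuse a prior belief about an angle with a noisy angular measurement
mu_post, kappa_post = von_mises_multiply(mu1=0.3, kappa1=5.0, mu2=0.8, kappa2=2.0)
print(mu_post, kappa_post)   # fused mean direction and concentration (up to normalization)
```

This is the circular analogue of multiplying two Gaussians in a Kalman update; the wrapped normal case discussed in the paper requires an approximation, which is where its new multiplication formula comes in.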