Generalized Linear Models Beyond the Exponential Family with Loss Reserve Applications
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
Adversarial Variational Bayes Methods for Tweedie Compound Poisson Mixed Models
ADVERSARIAL VARIATIONAL BAYES METHODS FOR TWEEDIE COMPOUND POISSON MIXED MODELS
Yaodong Yang*, Rui Luo, Yuanyuan Liu. American International Group Inc.

ABSTRACT
The Tweedie Compound Poisson-Gamma model is routinely used for modeling non-negative continuous data with a discrete probability mass at zero. Mixed models with random effects account for the covariance structure related to the grouping hierarchy in the data. An important application of Tweedie mixed models is pricing insurance policies, e.g. car insurance. However, the intractable likelihood function, the unknown variance function, and the hierarchical structure of mixed effects have presented considerable challenges for drawing inferences on Tweedie. In this study, we tackle Bayesian Tweedie mixed-effects models via variational inference approaches. In particular, we empower the posterior approximation by implicit models trained in an adversarial setting. To reduce the variance of gradients, we reparameterize random effects and integrate out one local latent variable of Tweedie. We also employ a flexible hyper prior to ensure the richness of the approximation.

[Fig. 1: (a) Tweedie density function (λ = 1.9, α = 49, β = 0.03; axes: Tweedie value vs. density); (b) graphical model of the Tweedie mixed-effects model.]

… ∼ Poisson(λ), G_i ∼ Gamma(α, β) i.i.d. Tweedie is heavily used for modeling non-negative heavy-tailed continuous data with a discrete probability mass at zero (see Fig. 1a). As a result, Tweedie gains its importance from multiple domains [4, 5], including actuarial science (aggregated loss/premium modeling), ecology (species biomass modeling), meteorology (precipitation …
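The compound Poisson-gamma construction described in this abstract is easy to simulate directly. The sketch below is added for illustration only, with the parameter values taken from Fig. 1a; whether β is a rate or a scale is an assumption here (scale is used). It draws N ∼ Poisson(λ), sums N Gamma(α, β) variables, and reproduces the discrete probability mass at zero.

    # Minimal simulation sketch of the Tweedie compound Poisson-gamma model:
    # Y = G_1 + ... + G_N with N ~ Poisson(lambda) and G_i i.i.d. Gamma(alpha, beta).
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_tweedie(lam, alpha, beta, size):
        n = rng.poisson(lam, size=size)                  # number of gamma summands
        y = np.array([rng.gamma(alpha, beta, k).sum() for k in n])
        return y                                         # exact zeros occur when n == 0

    y = simulate_tweedie(lam=1.9, alpha=49, beta=0.03, size=10_000)
    print("share of exact zeros:", np.mean(y == 0))      # discrete mass at zero
    print("mean of positive part:", y[y > 0].mean())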
1 One Parameter Exponential Families
1 One parameter exponential families

The world of exponential families bridges the gap between the Gaussian family and general distributions. Many properties of Gaussians carry through to exponential families in a fairly precise sense.
• In the Gaussian world, there are exact small-sample distributional results (i.e. t, F, χ²).
• In the exponential family world, there are approximate distributional results (i.e. deviance tests).
• In the general setting, we can only appeal to asymptotics.

A one-parameter exponential family F is a one-parameter family of distributions of the form

    P_η(dx) = exp(η · t(x) − Λ(η)) P_0(dx)

for some probability measure P_0. The parameter η is called the natural or canonical parameter, and the function Λ is called the cumulant generating function; it is simply the normalization needed to make

    f_η(x) = (dP_η/dP_0)(x) = exp(η · t(x) − Λ(η))

a proper probability density. The random variable t(X) is the sufficient statistic of the exponential family. Note that P_0 does not have to be a distribution on R, but these are of course the simplest examples.

1.0.1 A first example: Gaussian with linear sufficient statistic

Consider the standard normal distribution

    P_0(A) = ∫_A e^{−z²/2} / √(2π) dz

and let t(x) = x. Then the exponential family is

    P_η(dx) ∝ (e^{η·x − x²/2} / √(2π)) dx,

and we see that Λ(η) = η²/2:

    import numpy as np
    import matplotlib.pyplot as plt

    eta = np.linspace(-2, 2, 101)
    CGF = eta**2 / 2.                         # cumulant generating function Lambda(eta)
    plt.plot(eta, CGF)
    A = plt.gca()
    A.set_xlabel(r'$\eta$', size=20)
    A.set_ylabel(r'$\Lambda(\eta)$', size=20)
    f = plt.gcf()

Thus, the exponential family in this setting is the collection F = {N(η, 1) : η ∈ R}.

1.0.2 Normal with quadratic sufficient statistic on R^d

As a second example, take P_0 = N(0, I_{d×d}), i.e. …
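As a quick numerical companion to this example (not part of the original notes), one can check that the choice Λ(η) = η²/2 indeed normalizes the tilted density and that the resulting family is exactly {N(η, 1) : η ∈ R}:

    # Check that exp(eta*x - eta**2/2) * phi(x) integrates to 1 and equals the
    # N(eta, 1) density, confirming Lambda(eta) = eta**2 / 2.
    import numpy as np

    x = np.linspace(-10, 10, 200_001)
    dx = x[1] - x[0]
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)           # base density P_0

    for eta in (-1.5, 0.0, 0.7):
        tilted = np.exp(eta * x - eta**2 / 2) * phi        # f_eta relative to Lebesgue
        target = np.exp(-(x - eta)**2 / 2) / np.sqrt(2 * np.pi)
        print(eta, (tilted * dx).sum(), np.abs(tilted - target).max())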
6: the Exponential Family and Generalized Linear Models
10-708: Probabilistic Graphical Models, Spring 2014
6: The Exponential Family and Generalized Linear Models
Lecturer: Eric P. Xing. Scribes: Alnur Ali (lecture slides 1-23), Yipei Wang (slides 24-37)

1 The exponential family

A distribution over a random variable X is in the exponential family if you can write it as

    P(X = x; η) = h(x) exp(ηᵀ T(x) − A(η)).

Here, η is the vector of natural parameters, T is the vector of sufficient statistics, and A is the log partition function.¹

1.1 Examples

Here are some examples of distributions that are in the exponential family.

1.1.1 Multivariate Gaussian

Let X ∈ R^p. Then we have:

    P(x; µ, Σ) = (1 / ((2π)^{p/2} |Σ|^{1/2})) exp(−(1/2) (x − µ)ᵀ Σ⁻¹ (x − µ))
               = (1 / (2π)^{p/2}) exp(−(1/2) (tr(xᵀ Σ⁻¹ x) + µᵀ Σ⁻¹ µ − 2 µᵀ Σ⁻¹ x + ln|Σ|))
               = (1 / (2π)^{p/2}) exp(−(1/2) tr(Σ⁻¹ x xᵀ) + µᵀ Σ⁻¹ x − (1/2) µᵀ Σ⁻¹ µ − (1/2) ln|Σ|),

where vec(·) is the vectorization operator: the factor 1/(2π)^{p/2} plays the role of h(x), the terms −(1/2) tr(Σ⁻¹ x xᵀ) + µᵀ Σ⁻¹ x make up ηᵀ T(x) (using tr(Σ⁻¹ x xᵀ) = vec(Σ⁻¹)ᵀ vec(x xᵀ)), and (1/2) µᵀ Σ⁻¹ µ + (1/2) ln|Σ| is A(η).

¹ It's called this since, in order for P to normalize, we need exp(A(η)) to equal ∫_x h(x) exp(ηᵀ T(x)), so A(η) = ln ∫_x h(x) exp(ηᵀ T(x)), which is the log of the usual normalizer, which is the partition function.
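The reparameterization above can be checked numerically. The following sketch (added for illustration, with arbitrary µ and Σ) builds the natural parameters η₁ = Σ⁻¹µ and η₂ = −½Σ⁻¹, evaluates h(x) exp(ηᵀT(x) − A(η)) on the log scale, and compares it with the usual Gaussian log-density:

    # Exponential-family form of N(mu, Sigma): eta1 = Sigma^{-1} mu, eta2 = -1/2 Sigma^{-1},
    # sufficient statistics T(x) = (x, x x^T), h(x) = (2*pi)^{-p/2}.
    import numpy as np

    mu = np.array([1.0, -2.0])
    Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
    P = np.linalg.inv(Sigma)

    eta1, eta2 = P @ mu, -0.5 * P
    A = 0.5 * mu @ P @ mu + 0.5 * np.log(np.linalg.det(Sigma))     # log partition A(eta)

    x = np.array([0.3, 0.7])
    log_h = -len(x) / 2 * np.log(2 * np.pi)
    log_density = log_h + eta1 @ x + np.sum(eta2 * np.outer(x, x)) - A

    # compare with the direct Gaussian log-density
    direct = -len(x) / 2 * np.log(2 * np.pi) - 0.5 * np.log(np.linalg.det(Sigma)) \
             - 0.5 * (x - mu) @ P @ (x - mu)
    print(log_density, direct)    # the two expressions agree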
Lecture 2 — September 24 2.1 Recap 2.2 Exponential Families
STATS 300A: Theory of Statistics, Fall 2015
Lecture 2 — September 24
Lecturer: Lester Mackey. Scribes: Stephen Bates and Andy Tsao

2.1 Recap

Last time, we set out on a quest to develop optimal inference procedures and, along the way, encountered an important pair of assertions: not all data is relevant, and irrelevant data can only increase risk and hence impair performance. This led us to introduce a notion of lossless data compression (sufficiency): T is sufficient for P with X ∼ P_θ ∈ P if X | T(X) is independent of θ. How far can we take this idea? At what point does compression impair performance? These are questions of optimal data reduction. While we will develop general answers to these questions in this lecture and the next, we can often say much more in the context of specific modeling choices. With this in mind, let's consider an especially important class of models known as the exponential family models.

2.2 Exponential Families

Definition 1. The model {P_θ : θ ∈ Ω} forms an s-dimensional exponential family if each P_θ has density of the form:

    p(x; θ) = exp( Σ_{i=1}^s η_i(θ) T_i(x) − B(θ) ) h(x)

• η_i(θ) ∈ R are called the natural parameters.
• T_i(x) ∈ R are its sufficient statistics, which follows from the NFFC.
• B(θ) is the log-partition function because it is the logarithm of a normalization factor:

    B(θ) = log ∫ exp( Σ_{i=1}^s η_i(θ) T_i(x) ) h(x) dµ(x) ∈ R

• h(x) ∈ R: base measure.
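As a concrete instance of Definition 1 (added here for illustration, not taken from the notes), the Poisson(θ) pmf has s = 1, η(θ) = log θ, T(x) = x, h(x) = 1/x!, and B(θ) = θ; the log-partition formula can be verified by direct summation:

    # Verify B(theta) = log sum_x exp(eta(theta) * T(x)) h(x) = theta for the Poisson pmf,
    # using a numerically stable log-sum-exp over x = 0, 1, ..., x_max.
    import math

    def log_partition(theta, x_max=200):
        eta = math.log(theta)
        log_terms = [eta * x - math.lgamma(x + 1) for x in range(x_max)]
        m = max(log_terms)
        return m + math.log(sum(math.exp(t - m) for t in log_terms))

    for theta in (0.5, 2.0, 7.0):
        print(theta, log_partition(theta))   # should be close to theta itself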
Multivariate Tweedie Lifetimes: the Impact of Dependence
Multivariate Tweedie Lifetimes: The Impact of Dependence
Daniel H. Alai, Zinoviy Landsman, Michael Sherris
CEPAR, Risk and Actuarial Studies, Australian School of Business, UNSW, Sydney NSW 2052, Australia; Department of Statistics, University of Haifa, Mount Carmel, Haifa 31905, Israel
DRAFT ONLY. DO NOT CIRCULATE WITHOUT AUTHORS' PERMISSION

Abstract
Systematic improvements in mortality increase dependence in the survival distributions of insured lives. This is not accounted for in standard life tables and actuarial models used for annuity pricing and reserving. Furthermore, systematic longevity risk undermines the law of large numbers, a law that is relied on in the risk management of life insurance and annuity portfolios. This paper applies a multivariate Tweedie distribution to incorporate dependence, which it induces through a common shock component. Model parameter estimation is developed based on the method of moments and generalized to allow for truncated observations.

Keywords: systematic longevity risk, dependence, multivariate Tweedie, lifetime distribution

1 Introduction

The paper generalizes an approach explored in a previous paper by the authors to model the dependence of lifetimes using the gamma distribution with a common stochastic component. The closure property of gamma random variables generates a multivariate gamma distribution using the so-called multivariate reduction method; see Cheriyan (1941) and Ramabhadran (1951). This constructs a dependency structure that is natural for modelling lifetimes of individuals within a pool. The method uses the fact that a sum of gamma random variables with the same rate parameter follows a gamma distribution with that rate parameter.
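The multivariate reduction construction sketched in this introduction is straightforward to simulate. The code below is an illustration with made-up shape and rate values, not the paper's calibration: it adds a common gamma shock X_0 to independent gamma components with the same rate, so each margin is again gamma and the shared shock induces positive dependence.

    # Common-shock (multivariate reduction) construction with gamma margins:
    # Y_j = X_0 + X_j, all with the same rate, so Y_j ~ Gamma(a0 + a_j, rate).
    import numpy as np

    rng = np.random.default_rng(1)
    n, rate = 100_000, 0.5
    a0, a = 2.0, np.array([3.0, 4.0, 5.0])          # common and individual shapes

    x0 = rng.gamma(a0, 1 / rate, size=(n, 1))        # common shock (numpy uses scale = 1/rate)
    x = rng.gamma(a, 1 / rate, size=(n, len(a)))     # individual components
    y = x0 + x                                       # dependent "lifetimes"

    print("marginal means:", y.mean(axis=0))         # ~ (a0 + a) / rate
    print("pairwise correlations:\n", np.corrcoef(y, rowvar=False).round(3))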
Robust Monitoring of Time Series with Application to Fraud Detection
Robust Monitoring of Time Series with Application to Fraud Detection
Peter Rousseeuw, Domenico Perrotta, Marco Riani and Mia Hubert
May 22, 2018

Abstract
Time series often contain outliers and level shifts or structural changes. These unexpected events are of the utmost importance in fraud detection, as they may pinpoint suspicious transactions. The presence of such unusual events can easily mislead conventional time series analysis and yield erroneous conclusions. A unified framework is provided for detecting outliers and level shifts in short time series that may have a seasonal pattern. The approach combines ideas from the FastLTS algorithm for robust regression with alternating least squares. The double wedge plot is proposed, a graphical display which indicates outliers and potential level shifts. The methodology was developed to detect potential fraud cases in time series of imports into the European Union, and is illustrated on two such series.

Keywords: alternating least squares, double wedge plot, level shift, outliers.

arXiv:1708.08268v4 [stat.CO] 20 May 2018

Peter Rousseeuw, Department of Mathematics, University of Leuven, Belgium. Domenico Perrotta, Joint Research Centre, Ispra, Italy. Marco Riani, Department of Economics, University of Parma, Italy. Mia Hubert, Department of Mathematics, University of Leuven, Belgium. Peter Rousseeuw and Mia Hubert gratefully acknowledge the support by project C16/15/068 of Internal Funds KU Leuven. The work of Domenico Perrotta was supported by the Project "Automated Monitoring Tool on External Trade Step 5" of the Joint Research Centre and the European Anti-Fraud Office of the European Commission, under the Hercule-III EU programme. In this paper, references to specific countries and products are made only for purposes of illustration and do not necessarily refer to cases investigated or under investigation by anti-fraud authorities.
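To give a flavour of the trimming idea behind LTS-type robust fitting (this is a toy illustration only, not the authors' FastLTS/alternating-least-squares procedure or the double wedge plot), the sketch below fits a linear trend on the subset of observations with the smallest squared residuals and flags points that sit far from the robust fit:

    # Toy trimmed ("concentration step") trend fit: refit on the h best-fitting points,
    # then flag observations with large robust standardized residuals.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(36, dtype=float)
    y = 10 + 0.3 * t + rng.normal(0, 0.5, t.size)
    y[28:] += 4.0                                    # an artificial level shift at the end

    def trimmed_trend_fit(t, y, h, n_steps=20):
        X = np.column_stack([np.ones_like(t), t])
        subset = np.argsort(np.abs(y - np.median(y)))[:h]    # crude initial subset
        for _ in range(n_steps):                             # concentration steps
            beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
            resid = y - X @ beta
            subset = np.argsort(resid**2)[:h]
        return beta, resid

    beta, resid = trimmed_trend_fit(t, y, h=int(0.75 * t.size))
    scale = 1.4826 * np.median(np.abs(resid))        # robust residual scale (MAD)
    print("flagged indices:", np.where(np.abs(resid) > 3 * scale)[0])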
Exponential Families and Theoretical Inference
EXPONENTIAL FAMILIES AND THEORETICAL INFERENCE
Bent Jørgensen, Rodrigo Labouriau
August, 2012

Contents

Preface
Preface to the Portuguese edition
1 Exponential families
  1.1 Definitions
  1.2 Analytical properties of the Laplace transform
  1.3 Estimation in regular exponential families
  1.4 Marginal and conditional distributions
  1.5 Parametrizations
  1.6 The multivariate normal distribution
  1.7 Asymptotic theory
    1.7.1 Estimation
    1.7.2 Hypothesis testing
  1.8 Problems
2 Sufficiency and ancillarity
  2.1 Sufficiency
    2.1.1 Three lemmas
    2.1.2 Definitions
    2.1.3 The case of equivalent measures
    2.1.4 The general case
    2.1.5 Completeness
    2.1.6 A result on separable σ-algebras
    2.1.7 Sufficiency of the likelihood function
    2.1.8 Sufficiency and exponential families
  2.2 Ancillarity
    2.2.1 Definitions
    2.2.2 Basu's Theorem
  2.3 First-order ancillarity
    2.3.1 Examples
    2.3.2 Main results
  2.4 Problems
3 Inferential separation
  3.1 Introduction
    3.1.1 S-ancillarity
    3.1.2 The nonformation principle
    3.1.3 Discussion
  3.2 S-nonformation
    3.2.1 Definition
    3.2.2 S-nonformation in exponential families
  3.3 G-nonformation
    3.3.1 Transformation models
    3.3.2 Definition of G-nonformation
    3.3.3 Cox's proportional risks model …
Exponential Family
School of Computer Science, Probabilistic Graphical Models
Learning generalized linear models and tabular CPT of fully observed BN
Eric Xing, Lecture 7, September 30, 2009

Exponential family
• For a numeric random variable X,

    p(x | η) = h(x) exp{ηᵀ T(x) − A(η)} = (1/Z(η)) h(x) exp{ηᵀ T(x)}

is an exponential family distribution with natural (canonical) parameter η.
• Function T(x) is a sufficient statistic.
• Function A(η) = log Z(η) is the log normalizer.
• Examples: Bernoulli, multinomial, Gaussian, Poisson, gamma, ...

Recall Linear Regression
• Let us assume that the target variable and the inputs are related by the equation

    y_i = θᵀ x_i + ε_i,

where ε is an error term of unmodeled effects or random noise.
• Now assume that ε follows a Gaussian N(0, σ); then we have

    p(y_i | x_i; θ) = (1/(√(2π) σ)) exp(−(y_i − θᵀ x_i)² / (2σ²))

Recall: Logistic Regression (sigmoid classifier)
• The conditional distribution is a Bernoulli,

    p(y | x) = µ(x)^y (1 − µ(x))^{1−y},

where µ is a logistic function, µ(x) = 1/(1 + e^{−θᵀ x}).
• We can use the brute-force gradient method as in LR, but we can also apply generic laws by observing that p(y | x) is an exponential family function, more specifically, a generalized linear model!

Example: Multivariate Gaussian Distribution
• For a continuous vector random variable X ∈ R^k:

    p(x | µ, Σ) = (1/((2π)^{k/2} |Σ|^{1/2})) exp{−(1/2) (x − µ)ᵀ Σ⁻¹ (x − µ)}

Moment parameter …
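The remark that the logistic-regression likelihood is an exponential family, and hence a GLM, can be made concrete with a small check (added for illustration): writing the Bernoulli with natural parameter η = θᵀx, T(y) = y, and A(η) = log(1 + e^η) reproduces µ(x)^y (1 − µ(x))^{1−y}.

    # Bernoulli conditional of logistic regression in exponential-family form:
    # p(y|x) = exp(y*eta - A(eta)) with eta = theta^T x and A(eta) = log(1 + exp(eta)).
    import numpy as np

    rng = np.random.default_rng(3)
    theta = np.array([0.5, -1.2, 2.0])
    x = rng.normal(size=3)

    eta = theta @ x                          # natural parameter (linear predictor)
    mu = 1 / (1 + np.exp(-eta))              # logistic mean function

    for y in (0, 1):
        p_glm = np.exp(y * eta - np.log1p(np.exp(eta)))
        p_bern = mu**y * (1 - mu)**(1 - y)
        print(y, p_glm, p_bern)              # the two forms agree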
3.4 Exponential Families
3.4 Exponential Families

A family of pdfs or pmfs is called an exponential family if it can be expressed as

    f(x|θ) = h(x) c(θ) exp( Σ_{i=1}^k w_i(θ) t_i(x) ).   (1)

Here h(x) ≥ 0 and t_1(x), . . . , t_k(x) are real-valued functions of the observation x (they cannot depend on θ), and c(θ) ≥ 0 and w_1(θ), . . . , w_k(θ) are real-valued functions of the possibly vector-valued parameter θ (they cannot depend on x). Many common families introduced in the previous section are exponential families. They include the continuous families (normal, gamma, and beta) and the discrete families (binomial, Poisson, and negative binomial).

Example 3.4.1 (Binomial exponential family) Let n be a positive integer and consider the binomial(n, p) family with 0 < p < 1. Then the pmf for this family, for x = 0, 1, . . . , n and 0 < p < 1, is

    f(x|p) = C(n, x) p^x (1 − p)^{n−x}
           = C(n, x) (1 − p)^n (p/(1 − p))^x
           = C(n, x) (1 − p)^n exp( x log(p/(1 − p)) ).

Define

    h(x) = C(n, x) for x = 0, 1, . . . , n, and 0 otherwise,
    c(p) = (1 − p)^n, 0 < p < 1,
    w_1(p) = log(p/(1 − p)), 0 < p < 1,
    t_1(x) = x.

Then we have f(x|p) = h(x) c(p) exp{w_1(p) t_1(x)}.

Example 3.4.4 (Normal exponential family) Let f(x|µ, σ²) be the N(µ, σ²) family of pdfs, where θ = (µ, σ²), −∞ < µ < ∞, σ > 0. …
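A short numerical check of Example 3.4.1 (added here, not part of the text) confirms that the factorization h(x) c(p) exp{w_1(p) t_1(x)} reproduces the binomial(n, p) pmf:

    # Compare the binomial pmf with its exponential-family factorization.
    from math import comb, exp, log

    n, p = 10, 0.3
    for x in range(n + 1):
        pmf = comb(n, x) * p**x * (1 - p)**(n - x)
        expfam = comb(n, x) * (1 - p)**n * exp(log(p / (1 - p)) * x)
        assert abs(pmf - expfam) < 1e-12
    print("binomial pmf matches its exponential-family factorization")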
On Curved Exponential Families
U.U.D.M. Project Report 2019:10
On Curved Exponential Families
Emma Angetun
Degree project in mathematics, 15 credits. Supervisor: Silvelyn Zwanzig. Examiner: Örjan Stenflo. March 2019
Department of Mathematics, Uppsala University

On Curved Exponential Families
Emma Angetun, March 4, 2019

Abstract
This paper covers the theory of inference for statistical models that belong to curved exponential families. Some of these models are the normal distribution, binomial distribution, bivariate normal distribution and the SSR model. The purpose was to evaluate properties such as sufficiency, completeness and being strictly k-parametric. It was shown that sufficiency holds, and therefore the Rao-Blackwell Theorem applies, but completeness does not, so the Lehmann-Scheffé Theorem cannot be applied.

Contents
1 Exponential families
  1.1 Natural parameter space
2 Curved exponential families
  2.1 Natural parameter space of curved families
3 Sufficiency
4 Completeness
5 The Rao-Blackwell and Lehmann-Scheffé theorems

1 Exponential families

Statistical inference is concerned with looking for a way to use the information in observations x from the sample space X to get information about the partly unknown distribution of the random variable X. Essentially one wants to find a function, called a statistic, that describes the data without loss of important information. In probability and statistics, the exponential family is a class of probability measures which can be written in a certain form. The exponential family is a useful tool because, if a conclusion can be made that the given statistical model or sample distribution belongs to the class, then one can thereon apply the general framework that the class gives.
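The Rao-Blackwell improvement mentioned in the abstract can be illustrated with a standard simulation (a generic example, not taken from the report): conditioning a crude unbiased estimator on a sufficient statistic keeps it unbiased while reducing its variance.

    # Rao-Blackwell illustration: for X_1,...,X_n ~ Poisson(lam), the crude unbiased
    # estimator 1{X_1 = 0} of exp(-lam) is improved by conditioning on T = sum(X_i),
    # which gives E[1{X_1 = 0} | T] = ((n - 1) / n)**T.
    import numpy as np

    rng = np.random.default_rng(4)
    lam, n, reps = 2.0, 10, 200_000

    x = rng.poisson(lam, size=(reps, n))
    crude = (x[:, 0] == 0).astype(float)           # unbiased but noisy
    rb = ((n - 1) / n) ** x.sum(axis=1)            # Rao-Blackwellized estimator

    print("target:", np.exp(-lam))
    print("means:", crude.mean(), rb.mean())       # both approximately unbiased
    print("variances:", crude.var(), rb.var())     # conditioning reduces variance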
Generalized Linear Models Lecture 11
Generalized Linear Models (MTMS.01.011), Lecture 11. Tweedie models. Compound Poisson models

Tweedie distributions Tw(µ, p, ϕ), Tw_p(µ, ϕ)

Tweedie distributions form a subclass of the exponential dispersion family; the class includes continuous distributions like the normal, gamma and inverse Gaussian, discrete distributions like the Poisson, and also Poisson-gamma mixtures (that have positive probability at zero and are continuous elsewhere). The variance function for Tweedie distributions has the following form: ν(µ) = µ^p.

GLM with Tweedie response. The canonical parameter θ_i and canonical function b(θ_i) are

    θ_i = θ(µ_i) = µ_i^{1−p} / (1 − p)  if p ≠ 1,   θ_i = log µ_i        if p = 1
    b(θ_i) = µ(θ_i)^{2−p} / (2 − p)     if p ≠ 2,   b(θ_i) = log µ(θ_i)  if p = 2

Mean: µ_i. Variance: ϕ µ_i^p – power-variance distributions.

The Tweedie distributions were named by Bent Jørgensen after Maurice Tweedie, a statistician and medical physicist at the University of Liverpool, UK, who presented the first thorough study of these distributions in 1984.

Tweedie family Tw_p(µ, ϕ)

Tweedie distributions exist for all p outside the interval (0, 1); no Tweedie model exists for 0 < p < 1. If 1 < p < 2, the distributions are continuous for Y > 0, with a positive mass at Y = 0. If p > 2, the distributions are continuous, Y > 0.

Known distributions:
p = 0 – normal distribution
p = 1 and ϕ = 1 – Poisson distribution
1 < p < 2 – compound Poisson-gamma distribution
p = 2 – gamma distribution
p = 3 – inverse Gaussian distribution
2 < p < 3, p > 3 – positive stable distributions

There are Tweedie models that allow for zero-issues as well.

Applications of the Tw_p distribution
1. 1 < p < 2: Fish count estimation (silky shark, tuna fish): p̂ = 1.12, Shono (2008, 2010). Root length density of apple trees: p̂ = 1.4, Silva (1999).
2. p ≥ 2: The Tweedie distribution with p ≥ 2 is a continuous non-negative distribution.
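The canonical-parameter relations on this slide can be verified numerically. The sketch below (added for illustration, for a single p with 1 < p < 2) implements θ(µ) and b(θ) and checks by finite differences that b′(θ) = µ and b″(θ) = µ^p, the stated power-variance function:

    # Canonical relations for a Tweedie response with p not in {1, 2}:
    # theta(mu) = mu^(1-p)/(1-p), b(theta) = mu(theta)^(2-p)/(2-p),
    # with b'(theta) = mu and b''(theta) = mu^p.
    import numpy as np

    def theta_of_mu(mu, p):
        return mu**(1 - p) / (1 - p)

    def mu_of_theta(theta, p):
        return ((1 - p) * theta)**(1 / (1 - p))

    def b(theta, p):
        return mu_of_theta(theta, p)**(2 - p) / (2 - p)

    p, mu, h = 1.5, 3.0, 1e-5
    theta = theta_of_mu(mu, p)
    b1 = (b(theta + h, p) - b(theta - h, p)) / (2 * h)                   # ~ mu
    b2 = (b(theta + h, p) - 2 * b(theta, p) + b(theta - h, p)) / h**2    # ~ mu**p
    print(b1, mu)
    print(b2, mu**p)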
A Primer on the Exponential Family of Distributions
A PRIMER ON THE EXPONENTIAL FAMILY OF DISTRIBUTIONS
David R. Clark, FCAS, MAAA, and Charles A. Thayer
2004 Call Paper Program on Generalized Linear Models

Abstract
Generalized Linear Model (GLM) theory represents a significant advance beyond linear regression theory, specifically in expanding the choice of probability distributions from the Normal to the Natural Exponential Family. This Primer is intended for GLM users seeking a handy reference on the model's distributional assumptions. The Exponential Family of Distributions is introduced, with an emphasis on variance structures that may be suitable for aggregate loss models in property casualty insurance.

INTRODUCTION

Generalized Linear Model (GLM) theory is a significant advance beyond linear regression theory. A major part of this advance comes from allowing a broader family of distributions to be used for the error term, rather than just the Normal (Gaussian) distribution as required in linear regression. More specifically, GLM allows the user to select a distribution from the Exponential Family, which gives much greater flexibility in specifying the variance structure of the variable being forecast (the "response variable"). For insurance applications, this is a big step towards more realistic modeling of loss distributions, while preserving the advantages of regression theory such as the ability to calculate standard errors for estimated parameters. The Exponential family also includes several discrete distributions that are attractive candidates for modeling claim counts and other events, but such models will not be considered here. The purpose of this Primer is to give the practicing actuary a basic introduction to the Exponential Family of distributions, so that GLM models can be designed to best approximate the behavior of the insurance phenomenon.