Package 'LaplacesDemon'

Total Pages: 16

File Type: PDF, Size: 1020 KB

Laplacesdemon’ Package ‘LaplacesDemon’ July 9, 2021 Version 16.1.6 Title Complete Environment for Bayesian Inference Depends R (>= 3.0.0) Imports parallel, grDevices, graphics, stats, utils Suggests KernSmooth ByteCompile TRUE Description Provides a complete environment for Bayesian inference using a variety of different sam- plers (see ?LaplacesDemon for an overview). License MIT + file LICENSE URL https://github.com/LaplacesDemonR/LaplacesDemon BugReports https://github.com/LaplacesDemonR/LaplacesDemon/issues NeedsCompilation no Author Byron Hall [aut], Martina Hall [aut], Statisticat, LLC [aut], Eric Brown [ctb], Richard Hermanson [ctb], Emmanuel Charpentier [ctb], Daniel Heck [ctb], Stephane Laurent [ctb], Quentin F. Gronau [ctb], Henrik Singmann [cre] Maintainer Henrik Singmann <[email protected]> Repository CRAN Date/Publication 2021-07-09 14:00:02 UTC R topics documented: LaplacesDemon-package . .6 ABB............................................. 11 1 2 R topics documented: AcceptanceRate . 13 as.covar . 15 as.initial.values . 16 as.parm.names . 17 as.ppc . 18 BayesFactor . 19 BayesianBootstrap . 23 BayesTheorem . 25 BigData . 28 Blocks . 32 BMK.Diagnostic . 35 burnin . 36 caterpillar.plot . 38 CenterScale . 39 Combine . 40 cond.plot . 42 Consort . 43 CSF............................................. 46 data.demonchoice . 48 data.demonfx . 49 data.demonsessions . 50 data.demonsnacks . 51 data.demontexas . 52 de.Finetti.Game . 53 deburn . 54 dist.Asymmetric.Laplace . 55 dist.Asymmetric.Log.Laplace . 57 dist.Asymmetric.Multivariate.Laplace . 59 dist.Bernoulli . 61 dist.Categorical . 62 dist.ContinuousRelaxation . 64 dist.Dirichlet . 65 dist.Generalized.Pareto . 67 dist.Generalized.Poisson . 68 dist.HalfCauchy . 70 dist.HalfNormal . 71 dist.Halft . 73 dist.Horseshoe . 74 dist.HuangWand . 76 dist.Inverse.Beta . 78 dist.Inverse.ChiSquare . 79 dist.Inverse.Gamma . 81 dist.Inverse.Gaussian . 82 dist.Inverse.Matrix.Gamma . 84 dist.Inverse.Wishart . 85 dist.Inverse.Wishart.Cholesky . 87 dist.Laplace . 88 dist.Laplace.Mixture . 90 R topics documented: 3 dist.Laplace.Precision . 92 dist.LASSO . 94 dist.Log.Laplace . 95 dist.Log.Normal.Precision . 97 dist.Matrix.Gamma . 99 dist.Matrix.Normal . 100 dist.Multivariate.Cauchy . 102 dist.Multivariate.Cauchy.Cholesky . 103 dist.Multivariate.Cauchy.Precision . 105 dist.Multivariate.Cauchy.Precision.Cholesky . 107 dist.Multivariate.Laplace . 109 dist.Multivariate.Laplace.Cholesky . 111 dist.Multivariate.Normal . 114 dist.Multivariate.Normal.Cholesky . 115 dist.Multivariate.Normal.Precision . 117 dist.Multivariate.Normal.Precision.Cholesky . 119 dist.Multivariate.Polya . 121 dist.Multivariate.Power.Exponential . 122 dist.Multivariate.Power.Exponential.Cholesky . 124 dist.Multivariate.t . 126 dist.Multivariate.t.Cholesky . 128 dist.Multivariate.t.Precision . 130 dist.Multivariate.t.Precision.Cholesky . 131 dist.Normal.Inverse.Wishart . 133 dist.Normal.Laplace . 135 dist.Normal.Mixture . 137 dist.Normal.Precision . 138 dist.Normal.Variance . 140 dist.Normal.Wishart . 142 dist.Pareto . 144 dist.Power.Exponential . 145 dist.Scaled.Inverse.Wishart . 147 dist.Skew.Discrete.Laplace . 149 dist.Skew.Laplace . 151 dist.Stick . 153 dist.Student.t . 154 dist.Student.t.Precision . 156 dist.Truncated . 158 dist.Wishart . 159 dist.Wishart.Cholesky . 161 dist.YangBerger . 163 dist.Zellner . 164 Elicitation . ..
Recommended publications
  • Stan: A Probabilistic Programming Language
Journal of Statistical Software, January 2017, Volume 76, Issue 1. doi:10.18637/jss.v076.i01. Stan: A Probabilistic Programming Language. Bob Carpenter (Columbia University), Andrew Gelman (Columbia University), Matthew D. Hoffman (Adobe Creative Technologies Lab), Daniel Lee (Columbia University), Ben Goodrich (Columbia University), Michael Betancourt (Columbia University), Marcus A. Brubaker (York University), Jiqiang Guo (NPD Group), Peter Li (Columbia University), Allen Riddell (Indiana University). Abstract: Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
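The abstract mentions the rstan and pystan interfaces; below is a minimal sketch of the PyStan 2-style workflow in Python. Treat the names `StanModel` and `sampling` as version-dependent assumptions (PyStan 3 replaced this API), and the toy model and data as invented for illustration:

```python
import pystan  # PyStan 2.x interface

# A minimal Stan program: normal likelihood with unknown mean and scale.
model_code = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  y ~ normal(mu, sigma);
}
"""

data = {"N": 5, "y": [1.2, 0.8, 1.9, 1.4, 1.1]}
sm = pystan.StanModel(model_code=model_code)        # compiles the C++ sampler
fit = sm.sampling(data=data, iter=2000, chains=4)   # NUTS by default
print(fit)  # posterior summaries and diagnostics per parameter
```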
  • Supplementary Materials
Int. J. Environ. Res. Public Health 2018, 15, 2042: Supplementary Materials.

S1) Re-parametrization of the log-Laplace distribution. The three-parameter log-Laplace distribution $LL(\delta, a, b)$ of the relative risk $\mu$ is given by the probability density function

$$f(\mu) = \frac{1}{\mu}\,\frac{ab}{a+b}\begin{cases}(\mu/\delta)^b, & 0 < \mu < \delta \\ (\delta/\mu)^a, & \mu \ge \delta.\end{cases}$$

Let $b = (1-\tau)/\sigma$ and $a = \tau/\sigma$, and write $\delta = \exp(\mu_\tau)$. The probability density function of $LL(\mu_\tau, \tau, \sigma)$ can then be rewritten as

$$f(\mu) = \frac{\tau(1-\tau)}{\sigma\mu}\begin{cases}\exp\!\left(-\dfrac{(1-\tau)\,|\log\mu - \mu_\tau|}{\sigma}\right), & \log\mu < \mu_\tau \\[4pt] \exp\!\left(-\dfrac{\tau\,|\log\mu - \mu_\tau|}{\sigma}\right), & \log\mu \ge \mu_\tau.\end{cases}$$

S2) Bayesian quantile estimation. Let $Y_i$ be the continuous response for unit $i$ and $X_i$ the corresponding vector of covariates whose first element equals one. At a given quantile level $\tau \in (0,1)$, the $\tau$th conditional quantile of $Y_i$ given $X_i$ is $Q_\tau(Y_i \mid X_i) = X_i^T \beta(\tau)$, where $\beta(\tau)$ is a vector of quantile coefficients. Contrary to mean regression, which generally aims to minimize a squared loss, quantile regression links to a special class of check (loss) functions: the $\tau$th conditional quantile can be estimated by any solution $\hat\beta(\tau)$ such that $\hat\beta(\tau) = \arg\min_\beta \sum_i \rho_\tau\!\big(Y_i - X_i^T \beta(\tau)\big)$, where $\rho_\tau(z) = z\,(\tau - I(z < 0))$ is the quantile loss function and $I(\cdot)$ is the indicator function.
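As a numerical sanity check on the reconstruction above (my own sketch, not part of the supplement), the re-parametrized density should integrate to one for any choice of $\mu_\tau$, $\tau$, $\sigma$, and $\rho_\tau$ should behave as the familiar check loss:

```python
import numpy as np
from scipy.integrate import quad

def loglaplace_pdf(m, mu_tau, tau, sigma):
    """Re-parametrized three-parameter log-Laplace density LL(mu_tau, tau, sigma)."""
    k = tau * (1 - tau) / (sigma * m)
    d = abs(np.log(m) - mu_tau)
    rate = (1 - tau) if np.log(m) < mu_tau else tau
    return k * np.exp(-rate * d / sigma)

def check_loss(z, tau):
    """Quantile (check) loss: rho_tau(z) = z * (tau - I(z < 0))."""
    return z * (tau - (z < 0))

total, _ = quad(loglaplace_pdf, 0, np.inf, args=(0.3, 0.25, 0.5))
print(total)                   # ~1.0: the density is properly normalized
print(check_loss(-2.0, 0.25))  # 1.5: negative residuals weighted by 1 - tau
print(check_loss(2.0, 0.25))   # 0.5: positive residuals weighted by tau
```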
  • Stan: A Probabilistic Programming Language
Journal of Statistical Software, MMMMMM YYYY, Volume VV, Issue II. http://www.jstatsoft.org/. Stan: A Probabilistic Programming Language. Bob Carpenter (Columbia University), Andrew Gelman (Columbia University), Matt Hoffman (Adobe Research), Daniel Lee (Columbia University), Ben Goodrich (Columbia University), Michael Betancourt (University of Warwick), Marcus A. Brubaker (University of Toronto Scarborough), Jiqiang Guo (NPD Group), Peter Li (Columbia University), Allen Riddell (Dartmouth College). Abstract: Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.2.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can be called from the command line, through R using the RStan package, or through Python using the PyStan package. All three interfaces support sampling and optimization-based inference. RStan and PyStan also provide access to log probabilities, gradients, Hessians, and data I/O. Keywords: probabilistic program, Bayesian inference, algorithmic differentiation, Stan.
  • AdvancedHMC.jl: A Robust, Modular and Efficient Implementation of Advanced HMC Algorithms
2nd Symposium on Advances in Approximate Bayesian Inference, 2019, 1-10. AdvancedHMC.jl: A robust, modular and efficient implementation of advanced HMC algorithms. Kai Xu ([email protected], University of Edinburgh), Hong Ge ([email protected], University of Cambridge), Will Tebbutt ([email protected], University of Cambridge), Mohamed Tarek ([email protected], UNSW Canberra), Martin Trapp ([email protected], Graz University of Technology), Zoubin Ghahramani ([email protected], University of Cambridge & Uber AI Labs). Abstract: Stan's Hamiltonian Monte Carlo (HMC) has demonstrated remarkable sampling robustness and efficiency in a wide range of Bayesian inference problems through carefully crafted adaptation schemes for the celebrated No-U-Turn sampler (NUTS) algorithm. It is challenging to implement these adaptation schemes robustly in practice, hindering wider adoption amongst practitioners who are not directly working with the Stan modelling language. AdvancedHMC.jl (AHMC) contributes a modular, well-tested, standalone implementation of NUTS that recovers and extends Stan's NUTS algorithm. AHMC is written in Julia, a modern high-level language for scientific computing, benefiting from optional hardware acceleration and interoperability with a wealth of existing software written in both Julia and other languages, such as Python. Efficacy is demonstrated empirically by comparison with Stan through a third-party Markov chain Monte Carlo benchmarking suite. 1. Introduction. Hamiltonian Monte Carlo (HMC) is an efficient Markov chain Monte Carlo (MCMC) algorithm which avoids random walks by simulating Hamiltonian dynamics to make proposals (Duane et al., 1987; Neal et al., 2011). Due to the statistical efficiency of HMC, it has been widely applied to fields including physics (Duane et al., 1987), differential equations (Kramer et al., 2014), social science (Jackman, 2009) and Bayesian inference (e.g. …
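To make concrete what "simulating Hamiltonian dynamics to make proposals" means, here is a minimal single-transition HMC sketch in Python. It is an illustration of the textbook algorithm, not AdvancedHMC.jl's Julia API; the step size and path length are arbitrary assumptions:

```python
import numpy as np

def hmc_step(q, log_prob, grad_log_prob, eps=0.1, n_leapfrog=20, rng=None):
    """One HMC transition: resample momentum, integrate Hamiltonian dynamics
    with the leapfrog scheme, then Metropolis-accept the trajectory endpoint."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)           # momentum ~ N(0, I)
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_prob(q_new)  # half step for momentum
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new                   # full step for position
        p_new += eps * grad_log_prob(q_new)    # full step for momentum
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_prob(q_new)  # final half step for momentum
    # Hamiltonian H(q, p) = -log_prob(q) + |p|^2 / 2; accept with prob exp(-dH).
    h_old = -log_prob(q) + 0.5 * p @ p
    h_new = -log_prob(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

# Example target: standard 2-D normal.
logp = lambda q: -0.5 * q @ q
grad = lambda q: -q
q, draws = np.zeros(2), []
for _ in range(1000):
    q = hmc_step(q, logp, grad)
    draws.append(q)
print(np.std(draws, axis=0))  # should approach 1 in both coordinates
```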
  • Maple Advanced Programming Guide
Maple 9 Advanced Programming Guide. M. B. Monagan, K. O. Geddes, K. M. Heal, G. Labahn, S. M. Vorkoetter, J. McCarron, P. DeMarco. © Maplesoft, a division of Waterloo Maple Inc. 2003. Maple, Maplesoft, Maplet, and OpenMaple are trademarks of Waterloo Maple Inc. All rights reserved. The electronic version (PDF) of this book may be downloaded and printed for personal use or stored as a copy on a personal machine. The electronic version (PDF) of this book may not be distributed. Information in this document is subject to change without notice and does not represent a commitment on the part of the vendor. The software described in this document is furnished under a license agreement and may be used or copied only in accordance with the agreement. It is against the law to copy the software on any medium except as specifically allowed in the agreement. Windows is a registered trademark of Microsoft Corporation. Java and all Java based marks are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. Maplesoft is independent of Sun Microsystems, Inc. All other trademarks are the property of their respective owners. This document was produced using a special version of Maple that reads and updates LaTeX files. Printed in Canada. ISBN 1-894511-44-1. Contents: Preface (Audience; Worksheet Graphical Interface; Manual Set; Conventions; Customer Feedback); 1. Procedures, Variables, and Extending Maple (Prerequisite Knowledge; In This Chapter; …)
  • Some Properties of the Log-Laplace Distribution
Some Properties of the Log-Laplace Distribution.* V. R. R. Uppuluri, Mathematics and Statistics Research Department, Computer Sciences Division, Union Carbide Corporation, Nuclear Division, Oak Ridge, Tennessee 37830. *Research sponsored by the Applied Mathematical Sciences Research Program, Office of Energy Research, U.S. Department of Energy, under contract W-7405-eng-26 with the Union Carbide Corporation. Abstract: A random variable Y is said to have the Laplace distribution or the double exponential distribution whenever its probability density function is given by $(\lambda/2)\exp(-\lambda|y|)$, where $-\infty < y < \infty$ and $\lambda > 0$.
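A quick way to see the log-Laplace connection the abstract sets up is the transformation X = e^Y for Laplace-distributed Y, which is how the log-Laplace distribution arises; a short sketch under that assumption (not code from the report):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0

# Y ~ Laplace(0, 1/lam) has density (lam/2) * exp(-lam * |y|).
y = rng.laplace(loc=0.0, scale=1.0 / lam, size=100_000)
x = np.exp(y)  # X = e^Y is log-Laplace distributed

print(np.median(x))                # ~1.0, since the median of Y is 0
print(np.mean(np.abs(np.log(x))))  # ~1/lam = 0.5, the mean |Y| of a Laplace
```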
  • End-User Probabilistic Programming
End-User Probabilistic Programming. Judith Borghouts, Andrew D. Gordon, Advait Sarkar, and Neil Toronto, Microsoft Research. Abstract: Probabilistic programming aims to help users make decisions under uncertainty. The user writes code representing a probabilistic model, and receives outcomes as distributions or summary statistics. We consider probabilistic programming for end-users, in particular spreadsheet users, estimated to number in tens to hundreds of millions. We examine the sources of uncertainty actually encountered by spreadsheet users, and their coping mechanisms, via an interview study. We examine spreadsheet-based interfaces and technology to help reason under uncertainty, via probabilistic and other means. We show how uncertain values can propagate uncertainty through spreadsheets, and how sheet-defined functions can be applied to handle uncertainty. Hence, we draw conclusions about the promise and limitations of probabilistic programming for end-users. 1 Introduction. In this paper, we discuss the potential of bringing together two rather distinct approaches to decision making under uncertainty: spreadsheets and probabilistic programming. We start by introducing these two approaches. 1.1 Background: Spreadsheets and End-User Programming. The spreadsheet is the first "killer app" of the personal computer era, starting in 1979 with Dan Bricklin and Bob Frankston's VisiCalc for the Apple II [15]. The primary interface of a spreadsheet, then and now, four decades later, is the grid: a two-dimensional array of cells. Each cell may hold a literal data value, or a formula that computes a data value (and may depend on the data in other cells, which may themselves be computed by formulas).
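The claim that uncertain values can propagate uncertainty through spreadsheets can be illustrated with a tiny Monte Carlo sketch, written here in Python rather than a spreadsheet; the cell formulas and distributions are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # samples standing in for repeated "recalculations" of the sheet

# Uncertain input cells, each a distribution instead of a point value.
unit_price = rng.normal(10.0, 0.5, n)  # cell A1: price estimate
units_sold = rng.poisson(200, n)       # cell A2: demand forecast
cost = rng.uniform(1500, 1700, n)      # cell A3: cost range

# A dependent cell's formula, applied elementwise, propagates the uncertainty.
profit = unit_price * units_sold - cost  # cell B1: = A1*A2 - A3

print(profit.mean())                   # central estimate for the output cell
print(np.percentile(profit, [5, 95]))  # 90% interval instead of one number
print((profit < 0).mean())             # probability the sheet shows a loss
```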
  • An Introduction to Data Analysis Using the PyMC3 Probabilistic Programming Framework: A Case Study with Gaussian Mixture Modeling
An introduction to data analysis using the PyMC3 probabilistic programming framework: A case study with Gaussian Mixture Modeling. Shi Xian Liew, Mohsen Afrasiabi, and Joseph L. Austerweil, Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA. August 28, 2019. Author Note: Correspondence concerning this article should be addressed to: Shi Xian Liew, 1202 West Johnson Street, Madison, WI 53706. E-mail: [email protected]. This work was funded by a Vilas Life Cycle Professorship and the VCRGE at University of Wisconsin-Madison with funding from the WARF. Abstract: Recent developments in modern probabilistic programming have offered users many practical tools of Bayesian data analysis. However, the adoption of such techniques by the general psychology community is still fairly limited. This tutorial aims to provide non-technicians with an accessible guide to PyMC3, a robust probabilistic programming language that allows for straightforward Bayesian data analysis. We focus on a series of increasingly complex Gaussian mixture models: building up from fitting basic univariate models to more complex multivariate models fit to real-world data. We also explore how PyMC3 can be configured to obtain significant increases in computational speed by taking advantage of a machine's GPU, in addition to the conditions under which such acceleration can be expected. All example analyses are detailed with step-by-step instructions and corresponding Python code. Keywords: probabilistic programming; Bayesian data analysis; Markov chain Monte Carlo; computational modeling; Gaussian mixture modeling. 1 Introduction. Over the last decade, there has been a shift in the norms of what counts as rigorous research in psychological science.
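In the spirit of the tutorial's univariate starting point, a minimal PyMC3 mixture model might look like the following. This is a sketch against the PyMC3-era API (`pm.NormalMixture`, `pm.sample`); argument names can differ across versions, and the synthetic data and priors are my own:

```python
import numpy as np
import pymc3 as pm

# Synthetic data: two Gaussian components.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.7, 300), rng.normal(3, 1.0, 200)])

with pm.Model() as gmm:
    w = pm.Dirichlet("w", a=np.ones(2))                # mixture weights
    mu = pm.Normal("mu", mu=0.0, sigma=5.0, shape=2)   # component means
    sigma = pm.HalfNormal("sigma", sigma=2.0, shape=2) # component scales
    obs = pm.NormalMixture("obs", w=w, mu=mu, sigma=sigma, observed=data)
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(pm.summary(trace, var_names=["w", "mu", "sigma"]))
```

Note that mixture models are prone to label switching between chains; a real analysis would typically add an ordering constraint on the component means.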
  • GNU Octave a High-Level Interactive Language for Numerical Computations Edition 3 for Octave Version 2.0.13 February 1997
GNU Octave: A high-level interactive language for numerical computations. Edition 3, for Octave version 2.0.13, February 1997. John W. Eaton. Published by Network Theory Limited, 15 Royal Park, Clifton, Bristol BS8 3AL, United Kingdom. Email: [email protected]. ISBN 0-9541617-2-6. Cover design by David Nicholls. Errata for this book will be available from http://www.network-theory.co.uk/octave/manual/ Copyright © 1996, 1997 John W. Eaton. This is the third edition of the Octave documentation, and is consistent with version 2.0.13 of Octave. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this manual into another language, under the same conditions as for modified versions. Portions of this document have been adapted from the gawk, readline, gcc, and C library manuals, published by the Free Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. Table of Contents: Publisher's Preface; Author's Preface (Acknowledgements; How You Can Contribute to Octave; Distribution); 1. A Brief Introduction to Octave (1.1 Running Octave; 1.2 Simple Examples: Creating a Matrix, Matrix Arithmetic, Solving Linear Equations, …)
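The "Simple Examples" section of the manual's first chapter covers creating a matrix, matrix arithmetic, and solving linear equations; the same three steps look like this in Python with NumPy (an editorial translation of the kind of matrix demo the Octave manual opens with, not code from the manual):

```python
import numpy as np

# Creating a matrix (Octave: A = [1, 2; 3, 4]).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Matrix arithmetic (Octave: A * A' and 2*A + eye(2)).
print(A @ A.T)            # product with the transpose
print(2 * A + np.eye(2))  # scaling and addition with the identity

# Solving linear equations (Octave: x = A \ b).
b = np.array([5.0, 6.0])
x = np.linalg.solve(A, b)
print(x, np.allclose(A @ x, b))  # solution and a residual check
```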
  • Machine Learning - HT 2016 3
Machine Learning - HT 2016. 3. Maximum Likelihood. Varun Kanade, University of Oxford, January 27, 2016. Outline. Probabilistic Framework: formulate linear regression in the language of probability; introduce the maximum likelihood estimate; relation to the least squares estimate. Basics of Probability: univariate and multivariate normal distribution; Laplace distribution; likelihood, entropy and its relation to learning.

Univariate Gaussian (Normal) Distribution. The univariate normal distribution is defined by the following density function:

$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad X \sim \mathcal{N}(\mu, \sigma^2).$$

Here $\mu$ is the mean and $\sigma^2$ is the variance.

Sampling from a Gaussian distribution. To sample from $X \sim \mathcal{N}(\mu, \sigma^2)$, set $Y = (X - \mu)/\sigma$ and sample from $Y \sim \mathcal{N}(0, 1)$. The cumulative distribution function is

$$\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt.$$

Covariance and Correlation. For random variables $X$ and $Y$, the covariance measures how the random variables change jointly: $\operatorname{cov}(X, Y) = \mathbb{E}[(X - \mathbb{E}[X])(Y - \mathbb{E}[Y])]$. Covariance depends on the scale of the random variables. The (Pearson) correlation coefficient normalizes the covariance to give a value between $-1$ and $+1$:

$$\operatorname{corr}(X, Y) = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y},$$

where $\sigma_X^2 = \mathbb{E}[(X - \mathbb{E}[X])^2]$ and $\sigma_Y^2 = \mathbb{E}[(Y - \mathbb{E}[Y])^2]$.

Multivariate Gaussian Distribution. Suppose $\mathbf{x}$ is an $n$-dimensional random vector. The covariance matrix consists of all pairwise covariances:

$$\operatorname{cov}(\mathbf{x}) = \mathbb{E}\big[(\mathbf{x} - \mathbb{E}[\mathbf{x}])(\mathbf{x} - \mathbb{E}[\mathbf{x}])^T\big] = \begin{pmatrix} \operatorname{var}(X_1) & \operatorname{cov}(X_1, X_2) & \cdots & \operatorname{cov}(X_1, X_n) \\ \operatorname{cov}(X_2, X_1) & \operatorname{var}(X_2) & \cdots & \operatorname{cov}(X_2, X_n) \\ \vdots & & \ddots & \vdots \\ \operatorname{cov}(X_n, X_1) & \operatorname{cov}(X_n, X_2) & \cdots & \operatorname{var}(X_n) \end{pmatrix}.$$

If $\mu = \mathbb{E}[\mathbf{x}]$ and $\Sigma = \operatorname{cov}(\mathbf{x})$, the multivariate normal is defined by the density

$$\mathcal{N}(\mu, \Sigma) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2} (\mathbf{x} - \mu)^T \Sigma^{-1} (\mathbf{x} - \mu)\right).$$

Bivariate Gaussian Distribution. Suppose $X_1 \sim \mathcal{N}(\mu_1, \sigma_1^2)$ and $X_2 \sim \mathcal{N}(\mu_2, \sigma_2^2)$: what is the joint probability distribution $p(x_1, x_2)$? Suppose you are given three independent samples: $x_1 = 1$, $x_2 = 2.7$, $x_3 = 3$.
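Picking up the three samples at the end of the snippet: the Gaussian maximum likelihood estimates have the closed form $\hat\mu = \frac{1}{n}\sum_i x_i$ and $\hat\sigma^2 = \frac{1}{n}\sum_i (x_i - \hat\mu)^2$. A quick check in Python (the slide itself is cut off here, so this is the standard computation, not the lecture's own worked answer):

```python
import numpy as np

x = np.array([1.0, 2.7, 3.0])  # the three independent samples

# Gaussian MLE: sample mean, and the biased (1/n) variance estimator.
mu_hat = x.mean()
sigma2_hat = np.mean((x - mu_hat) ** 2)
print(mu_hat)      # 2.2333...
print(sigma2_hat)  # 0.7755...

# Log-likelihood at the MLE, for reference: -n/2 * log(2*pi*sigma^2) - n/2.
n = len(x)
print(-0.5 * n * np.log(2 * np.pi * sigma2_hat) - 0.5 * n)
```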
  • Field Guide to Continuous Probability Distributions
Field Guide to Continuous Probability Distributions. Gavin E. Crooks, v 1.0.0, 2019. Copyright © 2010-2019 Gavin E. Crooks. ISBN: 978-1-7339381-0-5. http://threeplusone.com/fieldguide. Berkeley Institute for Theoretical Sciences (BITS). Typeset on 2019-04-10 with XeTeX version 0.99999; fonts: Trump Mediaeval (text), Euler (math). Preface: The search for GUD. A common problem is that of describing the probability distribution of a single, continuous variable. A few distributions, such as the normal and exponential, were discovered in the 1800s or earlier. But about a century ago the great statistician, Karl Pearson, realized that the known probability distributions were not sufficient to handle all of the phenomena then under investigation, and set out to create new distributions with useful properties. During the 20th century this process continued with abandon and a vast menagerie of distinct mathematical forms were discovered and invented, investigated, analyzed, rediscovered and renamed, all for the purpose of describing the probability of some interesting variable. There are hundreds of named distributions and synonyms in current usage. The apparent diversity is unending and disorienting. Fortunately, the situation is less confused than it might at first appear. Most common, continuous, univariate, unimodal distributions can be organized into a small number of distinct families, which are all special cases of a single Grand Unified Distribution. This compendium details these hundred or so simple distributions, their properties and their interrelations.
  • 4. Continuous Random Variables
4. Continuous Random Variables (http://statwww.epfl.ch). 4.1: Definition. Density and distribution functions. Examples: uniform, exponential, Laplace, gamma. Expectation, variance. Quantiles. 4.2: New random variables from old. 4.3: Normal distribution. Use of normal tables. Continuity correction. Normal approximation to binomial distribution. 4.4: Moment generating functions. 4.5: Mixture distributions. References: Ross (Chapter 4); Ben Arous notes (IV.1, IV.3-IV.6). Exercises: 79-88, 91-93, 107, 108 of Recueil d'exercices.

Petit Vocabulaire Probabiliste (probability vocabulary):
- P(A | B): probability of A given B (la probabilité de A sachant B)
- independence (indépendance)
- (mutually) independent events (les événements (mutuellement) indépendants)
- pairwise independent events (les événements indépendants deux à deux)
- conditionally independent events (les événements conditionnellement indépendants)
- X, Y, …: random variable (une variable aléatoire)
- I: indicator random variable (une variable indicatrice)
- f_X: probability mass/density function (fonction de masse/fonction de densité)
- F_X: probability distribution function (fonction de répartition)
- E(X): expected value/expectation of X (l'espérance de X)
- E(X^r): rth moment of X (rième moment de X)
- E(X | B): conditional expectation of X given B (l'espérance conditionnelle de X, sachant B)
- var(X): variance of X (la variance de X)
- M_X(t): moment generating function of X, or the Laplace transform of f_X(x) (la fonction génératrice des moments, ou la transformée de Laplace de f_X(x))

4.1 Continuous Random Variables. Up to now we have supposed that the support of X is countable, so X is a discrete random variable.
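Section 4.1's examples (uniform, exponential, Laplace, gamma) all have closed-form densities, CDFs, and quantiles; a short SciPy sketch of the exponential case (my illustration of the standard facts, e.g. $E[X] = 1/\lambda$ and $Q(p) = -\log(1-p)/\lambda$, not material from the course notes):

```python
import numpy as np
from scipy import stats

lam = 2.0
X = stats.expon(scale=1 / lam)  # exponential distribution with rate lambda

# Density f(x) = lam * exp(-lam * x) and CDF F(x) = 1 - exp(-lam * x):
print(X.pdf(1.0), lam * np.exp(-lam * 1.0))
print(X.cdf(1.0), 1 - np.exp(-lam * 1.0))

# Expectation and variance: E[X] = 1/lam, var(X) = 1/lam^2.
print(X.mean(), X.var())

# The quantile function is the inverse CDF: Q(p) = -log(1 - p) / lam.
p = 0.9
print(X.ppf(p), -np.log(1 - p) / lam)  # the two should agree
```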