Fixed Point Characterizations of Continuous Univariate Probability
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Power Comparisons of the Rician and Gaussian Random Fields Tests for Detecting Signal from Functional Magnetic Resonance Images Hasni Idayu Binti Saidi
University of Northern Colorado, Scholarship & Creative Works @ Digital UNC, Dissertations, Student Research, 8-2018.
Follow this and additional works at: https://digscholarship.unco.edu/dissertations

Recommended Citation: Saidi, Hasni Idayu Binti, "Power Comparisons of the Rician and Gaussian Random Fields Tests for Detecting Signal from Functional Magnetic Resonance Images" (2018). Dissertations. 507. https://digscholarship.unco.edu/dissertations/507

This text is brought to you for free and open access by the Student Research collection at Scholarship & Creative Works @ Digital UNC. It has been accepted for inclusion in Dissertations by an authorized administrator of Scholarship & Creative Works @ Digital UNC. For more information, please contact [email protected].

©2018 Hasni Idayu Binti Saidi. All rights reserved.

University of Northern Colorado, Greeley, Colorado. The Graduate School. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, College of Education and Behavioral Sciences, Department of Applied Statistics and Research Methods, August 2018.

This dissertation by Hasni Idayu Binti Saidi, entitled "Power Comparisons of the Rician and Gaussian Random Fields Tests for Detecting Signal from Functional Magnetic Resonance Images", has been approved as meeting the requirement for the degree of Doctor of Philosophy in the College of Education and Behavioral Sciences, Department of Applied Statistics and Research Methods.

Accepted by the Doctoral Committee: Khalil Shafie Holighi, Ph.D., Research Advisor; Trent Lalonde, Ph.D., Committee Member; Jay Schaffer, Ph.D., Committee Member; Heng-Yu Ku, Ph.D., Faculty Representative. Date of Dissertation Defense. Accepted by the Graduate School: Linda L.
-
Supplementary Materials
Int. J. Environ. Res. Public Health 2018, 15, 2042, Supplementary Materials (4 pages).

S1) Re-parametrization of the log-Laplace distribution

The three-parameter log-Laplace distribution LL(δ, a, b) of the relative risk μ is given by the probability density function

f(μ) = (1/δ) · (ab/(a+b)) · (μ/δ)^(b−1),  0 < μ < δ,
f(μ) = (1/δ) · (ab/(a+b)) · (δ/μ)^(a+1),  μ ≥ δ.

Let b = (1−τ)/σ and a = τ/σ, and write δ = exp(μ_τ). The probability density function of LL(μ_τ, τ, σ) can then be re-written as

f(μ) = (1/μ) · (ab/(a+b)) · exp(−b |log μ − μ_τ|),  log μ < μ_τ,
f(μ) = (1/μ) · (ab/(a+b)) · exp(−a |log μ − μ_τ|),  log μ ≥ μ_τ,

that is,

f(μ) = (τ(1−τ)/(σμ)) · exp(−(1−τ) |log μ − μ_τ| / σ),  log μ < μ_τ,
f(μ) = (τ(1−τ)/(σμ)) · exp(−τ |log μ − μ_τ| / σ),  log μ ≥ μ_τ.

S2) Bayesian quantile estimation

Let Y_i be the continuous response variable for observation i and X_i be the corresponding vector of covariates, with the first element equal to one. At a given quantile level τ ∈ (0, 1), the τ-th conditional quantile of Y_i given X_i is Q_τ(Y_i | X_i) = X_i^T β(τ), where β(τ) is a vector of quantile coefficients. Contrary to mean regression, which generally aims to minimize a squared loss function, quantile regression is linked to a special class of check (or loss) functions. The τ-th conditional quantile can be estimated by any solution β̂(τ) such that

β̂(τ) = argmin_β Σ_i ρ_τ(Y_i − X_i^T β(τ)),

where ρ_τ(z) = z(τ − I(z < 0)) is the quantile loss function given in (1) and I(·) is the indicator function.
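As a rough illustration of the two pieces above, the following Python sketch evaluates the re-parametrized log-Laplace density and the quantile check loss ρ_τ; the function names, the simulated log-normal sample, and the grid search for the 0.9-quantile are illustrative choices, not code from the supplement.

```python
# A minimal sketch (not from the paper) of the re-parametrized log-Laplace
# density and the quantile check loss; names (mu_tau, sigma, tau) follow the
# notation above and are otherwise assumptions.
import numpy as np

def loglaplace_pdf(mu, mu_tau, tau, sigma):
    """Density of LL(mu_tau, tau, sigma) evaluated at relative risk mu > 0."""
    mu = np.asarray(mu, dtype=float)
    scale = tau * (1.0 - tau) / (sigma * mu)          # tau(1-tau)/(sigma*mu)
    dist = np.abs(np.log(mu) - mu_tau)
    left = np.exp(-(1.0 - tau) * dist / sigma)        # branch for log(mu) <  mu_tau
    right = np.exp(-tau * dist / sigma)               # branch for log(mu) >= mu_tau
    return scale * np.where(np.log(mu) < mu_tau, left, right)

def check_loss(z, tau):
    """Quantile loss rho_tau(z) = z * (tau - I(z < 0))."""
    z = np.asarray(z, dtype=float)
    return z * (tau - (z < 0))

# Example: the tau-th sample quantile minimizes the summed check loss.
y = np.random.default_rng(0).lognormal(size=500)
grid = np.linspace(y.min(), y.max(), 200)
q_hat = grid[np.argmin([check_loss(y - g, 0.9).sum() for g in grid])]
print(q_hat, np.quantile(y, 0.9))
```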
-
Some Properties of the Log-Laplace Distribution* V. R. R. Uppuluri
SOME PROPERTIES OF THE LOG-LAPLACE DISTRIBUTION*
V. R. R. Uppuluri
Mathematics and Statistics Research Department, Computer Sciences Division, Union Carbide Corporation, Nuclear Division, Oak Ridge, Tennessee 37830

By acceptance of this article, the publisher or recipient acknowledges the U.S. Government's right to retain a non-exclusive, royalty-free license in and to any copyright covering the article.

ABSTRACT
A random variable Y is said to have the Laplace distribution or the double exponential distribution whenever its probability density function is given by (λ/2) exp(−λ|y|), where −∞ < y < ∞ and λ > 0.

*Research sponsored by the Applied Mathematical Sciences Research Program, Office of Energy Research, U.S. Department of Energy, under contract W-7405-eng-26 with the Union Carbide Corporation.
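To make the definition concrete, here is a small Python sketch (not from the report) that draws from the Laplace density (λ/2) exp(−λ|y|) as a difference of two exponentials and exponentiates the draws to obtain log-Laplace variates; the rate value λ = 1.5 is an arbitrary choice for illustration.

```python
# A small numerical sketch: if Y has the Laplace density (lam/2)*exp(-lam*|y|),
# then X = exp(Y) is log-Laplace distributed. The rate lam = 1.5 is arbitrary.
import numpy as np

def laplace_pdf(y, lam):
    """Double exponential density (lam/2) * exp(-lam*|y|)."""
    return 0.5 * lam * np.exp(-lam * np.abs(y))

rng = np.random.default_rng(1)
lam = 1.5
# Sample Laplace variates as the difference of two independent exponentials.
y = rng.exponential(1.0 / lam, 100_000) - rng.exponential(1.0 / lam, 100_000)
x = np.exp(y)                       # log-Laplace sample
print(np.mean(np.abs(y)), 1 / lam)  # E|Y| = 1/lambda, a quick sanity check
```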
-
Intensity Distribution of Interplanetary Scintillation at 408 MHz
Aust. J. Phys., 1975, 28, 621-32
Intensity Distribution of Interplanetary Scintillation at 408 MHz
R. G. Milne, Chatterton Astrophysics Department, School of Physics, University of Sydney, Sydney, N.S.W. 2006.

Abstract. It is shown that interplanetary scintillation of small-diameter radio sources at 408 MHz produces intensity fluctuations which are well fitted by a Rice-squared distribution, better so than is usually claimed. The observed distribution can be used to estimate the proportion of flux density in the core of 'core-halo' sources without the need for calibration against known point sources.

1. Introduction
The observed intensity of a radio source of angular diameter ≲ 1″ arc shows rapid fluctuations when the line of sight passes close to the Sun. This interplanetary scintillation (IPS) is due to diffraction by electron density variations which move outwards from the Sun at high velocities (∼350 km s⁻¹). In a typical IPS observation the fluctuating intensity S(t) from a radio source is recorded for a few minutes. This procedure is repeated on several occasions during the couple of months that the line of sight is within ∼20° (for observations at 408 MHz) of the direction of the Sun. For each observation we derive a mean intensity S̄; a scintillation index m defined by m² = ⟨(S − S̄)²⟩ / S̄², where ⟨ ⟩ denotes the expectation value; intensity moments of order q, Q_q = ⟨(S − S̄)^q⟩; and the skewness parameter γ₁ = Q₃ Q₂^(−3/2), hereafter referred to as γ. A histogram, or probability distribution, of the normalized intensity S/S̄ is also constructed. The present paper is concerned with estimating the form of this distribution.
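The observational quantities defined above can be estimated from any intensity record; the Python sketch below does so for a simulated Rice-squared intensity series. The coherent amplitude and noise level are assumed toy values, not Milne's 408 MHz data.

```python
# Illustrative sketch (not from the paper): estimate the scintillation index m
# and the skewness parameter gamma_1 from a simulated intensity record S(t).
# The Rice-squared intensities below (coherent amplitude a, noise sigma) are an
# assumed toy model.
import numpy as np

rng = np.random.default_rng(42)
a, sigma, n = 2.0, 0.5, 10_000
# |a + complex Gaussian noise|^2 gives a Rice-squared (noncentral chi^2) intensity.
s = np.abs(a + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))) ** 2

s_bar = s.mean()
q = lambda k: np.mean((s - s_bar) ** k)   # central intensity moment Q_k
m = np.sqrt(q(2)) / s_bar                 # scintillation index
gamma1 = q(3) / q(2) ** 1.5               # skewness parameter
print(f"m = {m:.3f}, gamma_1 = {gamma1:.3f}")
```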
-
Idealized Models of the Joint Probability Distribution of Wind Speeds
Nonlin. Processes Geophys., 25, 335–353, 2018. https://doi.org/10.5194/npg-25-335-2018 © Author(s) 2018. This work is distributed under the Creative Commons Attribution 4.0 License.

Idealized models of the joint probability distribution of wind speeds
Adam H. Monahan, School of Earth and Ocean Sciences, University of Victoria, P.O. Box 3065 STN CSC, Victoria, BC, Canada, V8W 3V6. Correspondence: Adam H. Monahan ([email protected])
Received: 24 October 2017 – Discussion started: 2 November 2017 – Revised: 6 February 2018 – Accepted: 26 March 2018 – Published: 2 May 2018

Abstract. The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull

... position in space, or point in time. The simplest measure of statistical dependence, the correlation coefficient, is a natural measure for Gaussian-distributed quantities but does not fully characterize dependence for non-Gaussian variables. The most general representation of dependence between two or more quantities is their joint probability distribution. The joint probability distribution for a multivariate Gaussian is well known, and expressed in terms of the mean and covariance matrix (e.g. Wilks, 2005; von Storch and Zwiers, 1999).
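A minimal simulation in the spirit of the abstract: build wind speeds at two points from correlated, isotropic Gaussian velocity components (so each marginal speed is Rician) and inspect the induced speed correlation. The mean wind, fluctuation standard deviation and inter-point correlation are assumed example values, not parameters fitted in the paper.

```python
# Illustrative sketch (not the paper's model fit): wind speeds built from
# correlated Gaussian velocity components, so each marginal speed is Rician.
# The mean wind (u0, v0), fluctuation sd, and inter-point correlation rho are
# toy values chosen for the example.
import numpy as np

rng = np.random.default_rng(7)
u0, v0, sd, rho, n = 3.0, 1.0, 2.0, 0.8, 100_000

cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])
u = rng.multivariate_normal([u0, u0], cov, size=n)   # u-component at points 1 and 2
v = rng.multivariate_normal([v0, v0], cov, size=n)   # v-component (isotropic, independent of u)

w = np.hypot(u, v)                                   # wind speeds; columns = points 1 and 2
print("speed correlation:", np.corrcoef(w[:, 0], w[:, 1])[0, 1])
```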
-
Machine Learning - HT 2016, 3. Maximum Likelihood
Machine Learning - HT 2016
3. Maximum Likelihood
Varun Kanade, University of Oxford, January 27, 2016

Outline
Probabilistic framework: formulate linear regression in the language of probability; introduce the maximum likelihood estimate; relation to the least squares estimate.
Basics of probability: univariate and multivariate normal distribution; Laplace distribution; likelihood, entropy and its relation to learning.

Univariate Gaussian (Normal) Distribution
The univariate normal distribution X ∼ N(µ, σ²) is defined by the density function
p(x) = (1 / (√(2π) σ)) exp(−(x − µ)² / (2σ²)),
where µ is the mean and σ² is the variance.

Sampling from a Gaussian distribution
To sample from X ∼ N(µ, σ²), set Y = (X − µ)/σ and sample from Y ∼ N(0, 1). The cumulative distribution function is
Φ(x) = ∫_{−∞}^{x} (1/√(2π)) exp(−t²/2) dt.

Covariance and Correlation
For random variables X and Y the covariance measures how the random variables change jointly:
cov(X, Y) = E[(X − E[X])(Y − E[Y])].
Covariance depends on the scale of the random variables. The (Pearson) correlation coefficient normalizes the covariance to give a value between −1 and +1:
corr(X, Y) = cov(X, Y) / (σ_X σ_Y),
where σ_X² = E[(X − E[X])²] and σ_Y² = E[(Y − E[Y])²].

Multivariate Gaussian Distribution
Suppose x is an n-dimensional random vector. The covariance matrix consists of all pairwise covariances: cov(x) = E[(x − E[x])(x − E[x])^T], the n×n matrix with var(X_i) on the diagonal and cov(X_i, X_j) in position (i, j).
If µ = E[x] and Σ = cov(x), the multivariate normal is defined by the density
N(µ, Σ) = (1 / ((2π)^{n/2} |Σ|^{1/2})) exp(−(1/2)(x − µ)^T Σ^{−1} (x − µ)).

Bivariate Gaussian Distribution
Suppose X₁ ∼ N(µ₁, σ₁²) and X₂ ∼ N(µ₂, σ₂²). What is the joint probability distribution p(x₁, x₂)?

Suppose you are given three independent samples: x₁ = 1, x₂ = 2.7, x₃ = 3.
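Continuing the slide's closing example, a short Python sketch of the maximum likelihood fit of a univariate Gaussian to the three samples; the closed-form estimates (sample mean and mean squared deviation) are standard, and treating these three numbers as the full data set follows the slide's setup.

```python
# ML estimates of a univariate Gaussian fitted to the three samples above.
import numpy as np

x = np.array([1.0, 2.7, 3.0])
mu_hat = x.mean()                       # ML estimate of the mean
sigma2_hat = np.mean((x - mu_hat)**2)   # ML estimate of the variance (divides by n, not n-1)
print(mu_hat, sigma2_hat)               # ~2.233 and ~0.776
```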
-
Field Guide to Continuous Probability Distributions
Field Guide to Continuous Probability Distributions
Gavin E. Crooks, v 1.0.0, 2019
Copyright © 2010–2019 Gavin E. Crooks. ISBN: 978-1-7339381-0-5. http://threeplusone.com/fieldguide
Berkeley Institute for Theoretical Sciences (BITS)

Preface: The search for GUD
A common problem is that of describing the probability distribution of a single, continuous variable. A few distributions, such as the normal and exponential, were discovered in the 1800's or earlier. But about a century ago the great statistician, Karl Pearson, realized that the known probability distributions were not sufficient to handle all of the phenomena then under investigation, and set out to create new distributions with useful properties. During the 20th century this process continued with abandon and a vast menagerie of distinct mathematical forms were discovered and invented, investigated, analyzed, rediscovered and renamed, all for the purpose of describing the probability of some interesting variable. There are hundreds of named distributions and synonyms in current usage. The apparent diversity is unending and disorienting. Fortunately, the situation is less confused than it might at first appear. Most common, continuous, univariate, unimodal distributions can be organized into a small number of distinct families, which are all special cases of a single Grand Unified Distribution. This compendium details these hundred or so simple distributions, their properties and their interrelations.
-
4. Continuous Random Variables
http://statwww.epfl.ch
Probabilité et Statistique I — Chapter 4

4. Continuous Random Variables
4.1: Definition. Density and distribution functions. Examples: uniform, exponential, Laplace, gamma. Expectation, variance. Quantiles.
4.2: New random variables from old.
4.3: Normal distribution. Use of normal tables. Continuity correction. Normal approximation to binomial distribution.
4.4: Moment generating functions.
4.5: Mixture distributions.
References: Ross (Chapter 4); Ben Arous notes (IV.1, IV.3–IV.6). Exercises: 79–88, 91–93, 107, 108 of the Recueil d'exercices.

Petit Vocabulaire Probabiliste
Mathematics   English                                    Français
P(A | B)      probability of A given B                   la probabilité de A sachant B
              independence                               indépendance
              (mutually) independent events              les événements (mutuellement) indépendants
              pairwise independent events                les événements indépendants deux à deux
              conditionally independent events           les événements conditionnellement indépendants
X, Y, ...     random variable                            une variable aléatoire
I             indicator random variable                  une variable indicatrice
f_X           probability mass/density function          fonction de masse/fonction de densité
F_X           probability distribution function          fonction de répartition
E(X)          expected value/expectation of X            l'espérance de X
E(X^r)        rth moment of X                            r-ième moment de X
E(X | B)      conditional expectation of X given B       l'espérance conditionnelle de X, sachant B
var(X)        variance of X                              la variance de X
M_X(t)        moment generating function of X, or the    la fonction génératrice des moments,
              Laplace transform of f_X(x)                ou la transformée de Laplace de f_X(x)

4.1 Continuous Random Variables
Up to now we have supposed that the support of X is countable, so X is a discrete random variable.
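As a worked example of one topic in the outline (Section 4.3), the sketch below approximates a binomial probability with the normal distribution and a continuity correction; the parameters n = 50, p = 0.3 and the event X ≤ 18 are arbitrary illustrative choices.

```python
# Normal approximation to a binomial probability with continuity correction.
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, p = 50, 0.3
mu, sigma = n * p, sqrt(n * p * (1 - p))
k = 18
# P(X <= k) ~ Phi((k + 0.5 - mu) / sigma) with the continuity correction.
approx = normal_cdf((k + 0.5 - mu) / sigma)
print(f"P(X <= {k}) ~ {approx:.4f}")
```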
-
Discriminating Between the Normal and the Laplace Distributions
Discriminating Between the Normal and the Laplace Distributions
Debasis Kundu¹

Abstract: Both normal and Laplace distributions can be used to analyze symmetric data. In this paper we consider the logarithm of the ratio of the maximized likelihoods to discriminate between the two distribution functions. We obtain the asymptotic distributions of the test statistics and it is observed that they are independent of the unknown parameters. If the underlying distribution is normal, the asymptotic distribution works quite well even when the sample size is small. But if the underlying distribution is Laplace, the asymptotic distribution does not work well for small sample sizes. For the latter case we propose a bias-corrected asymptotic distribution and it works quite well even for small sample sizes. Based on the asymptotic distributions, the minimum sample size needed to discriminate between the two distribution functions is obtained for a given probability of correct selection. Monte Carlo simulations are performed to examine how the asymptotic results work for small sample sizes, and two data sets are analyzed for illustrative purposes.

Key Words and Phrases: Asymptotic distributions; Likelihood ratio tests; Probability of correct selection; Location scale family.

Address of Correspondence: ¹ Department of Mathematics, Indian Institute of Technology Kanpur, Pin 208016, INDIA. e-mail: [email protected]. Phone 91-512-2597141, Fax 91-512-2597500

1 Introduction
Suppose an experimenter has n observations and the elementary data analysis, say for example a histogram plot, stem-and-leaf plot or box plot, suggests that it comes from a symmetric distribution. The experimenter wants to fit either the normal distribution or the Laplace distribution; which one is preferable? It is well known that the normal distribution is used to analyze symmetric data with short tails, whereas the Laplace distribution is used for long-tailed data.
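A minimal Python sketch of the statistic the abstract describes, the logarithm of the ratio of maximized normal and Laplace likelihoods; the closed-form ML fits used here (sample mean and variance for the normal, median and mean absolute deviation for the Laplace) are standard, but the code itself is illustrative and not taken from the paper.

```python
# Log ratio of maximized normal and Laplace likelihoods for a sample x.
import numpy as np

def log_lik_ratio(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    # Maximized normal log-likelihood.
    mu, sigma2 = x.mean(), np.mean((x - x.mean()) ** 2)
    ll_norm = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    # Maximized Laplace log-likelihood.
    theta = np.median(x)
    b = np.mean(np.abs(x - theta))
    ll_lap = -n * (np.log(2 * b) + 1.0)
    return ll_norm - ll_lap   # positive values favour the normal model

x = np.random.default_rng(3).normal(size=40)
print(log_lik_ratio(x))
```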
-
Generalized Autoregressive Score Asymmetric Laplace Distribution and Extreme Downward Risk Prediction (arXiv:2008.01277v3 [q-fin.RM], 13 Oct 2020)
GENERALIZED AUTOREGRESSIVE SCORE ASYMMETRIC LAPLACE DISTRIBUTION AND EXTREME DOWNWARD RISK PREDICTION
A PREPRINT
Hong Shaopeng, School of Statistics, Capital University of Economics and Trade, Beijing, China
October 14, 2020

ABSTRACT
Because financial return data are skewed, sharply peaked, heavy-tailed and asymmetric, they are difficult to describe with traditional distributions. In recent years, the generalized autoregressive score (GAS) framework has been used in many fields and achieved good results. In this paper, under the generalized autoregressive score (GAS) framework, the asymmetric Laplace distribution (ALD) is improved and the GAS-ALD model is proposed, which has time-varying parameters and can describe peaked, heavy-tailed, skewed and asymmetric distributions. The model is used to study the Shanghai index, Shenzhen index and SME board index. It is found that: 1) the distribution parameters and moments of the three indexes have obvious time-varying and aggregation characteristics; 2) compared with the models commonly used for calculating VaR and ES, the GAS-ALD model has high predictive accuracy.

Keywords: Generalized Autoregressive Score; Asymmetric Laplace Distribution; VaR; ES

1 Introduction
As a special kind of economic system, the financial market has developed a series of unique macro-level regularities, usually called "stylized facts". For example, the return distribution is sharply peaked and asymmetric. Due to Knightian uncertainty, static parameter models usually cannot accurately describe the characteristics
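For orientation, the sketch below computes VaR and ES under a static asymmetric Laplace distribution in a Koenker-style parametrization, as a simplified stand-in for the time-varying GAS-ALD model of the paper; the parameter values and confidence level are assumptions for illustration, and the quantile formula follows from the ALD's piecewise-exponential CDF rather than from the paper.

```python
# Downside-risk quantities under a *static* asymmetric Laplace distribution,
# used here only as a simplified stand-in for the GAS-ALD model.
# mu, sigma, tau and alpha are illustrative values, not estimates from the
# indexes studied in the paper.
import numpy as np

def ald_quantile(p, mu, sigma, tau):
    """Inverse CDF of the ALD with density tau(1-tau)/sigma * exp(-rho_tau((y-mu)/sigma))."""
    if p < tau:
        return mu + sigma / (1 - tau) * np.log(p / tau)
    return mu - sigma / tau * np.log((1 - p) / (1 - tau))

mu, sigma, tau, alpha = 0.0, 0.01, 0.5, 0.05
var_alpha = ald_quantile(alpha, mu, sigma, tau)      # alpha-quantile of returns (VaR level)

# Expected shortfall by Monte Carlo, sampling via the inverse-CDF transform.
u = np.random.default_rng(11).uniform(size=200_000)
y = np.vectorize(ald_quantile)(u, mu, sigma, tau)
es_alpha = y[y <= var_alpha].mean()
print(f"VaR_{alpha} = {var_alpha:.4f}, ES_{alpha} = {es_alpha:.4f}")
```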
-
Expected Logarithm and Negative Integer Moments of a Noncentral χ²-Distributed Random Variable
entropy (Article)
Expected Logarithm and Negative Integer Moments of a Noncentral χ²-Distributed Random Variable †
Stefan M. Moser 1,2
1 Signal and Information Processing Lab, ETH Zürich, 8092 Zürich, Switzerland; [email protected]; Tel.: +41-44-632-3624
2 Institute of Communications Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan
† Part of this work has been presented at TENCON 2007 in Taipei, Taiwan, and at ISITA 2008 in Auckland, New Zealand.
Received: 21 July 2020; Accepted: 15 September 2020; Published: 19 September 2020

Abstract: Closed-form expressions for the expected logarithm and for arbitrary negative integer moments of a noncentral χ²-distributed random variable are presented in the cases of both even and odd degrees of freedom. Moreover, some basic properties of these expectations are derived and tight upper and lower bounds on them are proposed.

Keywords: central χ² distribution; chi-square distribution; expected logarithm; exponential distribution; negative integer moments; noncentral χ² distribution; squared Rayleigh distribution; squared Rice distribution

1. Introduction
The noncentral χ² distribution is a family of probability distributions of wide interest. They appear in situations where one or several independent Gaussian random variables (RVs) of equal variance (but potentially different means) are squared and summed together. The noncentral χ² distribution contains as special cases, among others, the central χ² distribution, the exponential distribution (which is equivalent to a squared Rayleigh distribution), and the squared Rice distribution. In this paper, we present closed-form expressions for the expected logarithm and for arbitrary negative integer moments of a noncentral χ²-distributed RV with even or odd degrees of freedom.
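The paper's closed-form expressions are not reproduced in this excerpt; as a stand-in, the sketch below estimates the expected logarithm and a negative integer moment of a noncentral χ² variable by Monte Carlo, with arbitrary example values for the degrees of freedom and noncentrality parameter.

```python
# Monte Carlo estimates of E[ln X] and E[X^-1] for a noncentral chi-square RV.
# k (degrees of freedom) and lam (noncentrality) are arbitrary example values.
import numpy as np

rng = np.random.default_rng(2020)
k, lam, n = 5, 3.0, 1_000_000
x = rng.noncentral_chisquare(df=k, nonc=lam, size=n)

print("E[ln X] ~", np.log(x).mean())
print("E[X^-1] ~", np.mean(1.0 / x))   # finite since k > 2
```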
-
A Note on the Bivariate Distribution Representation of Two Perfectly Correlated Random Variables by Dirac's δ-Function
A note on the bivariate distribution representation of two perfectly correlated random variables by Dirac's δ-function
Andrés Alayón Glazunov (a) and Jie Zhang (b)
(a) KTH Royal Institute of Technology, Electrical Engineering, Teknikringen 33, SE-100 44 Stockholm, Sweden
(b) University of Sheffield, Electronic and Electrical Engineering, Mappin Street, Sheffield, S1 3JD, UK

Abstract: In this paper we discuss the representation of the joint probability density function of perfectly correlated continuous random variables, i.e., with correlation coefficients ρ = ±1, by Dirac's δ-function. We also show how this representation allows us to define Dirac's δ-function as the ratio between bivariate distributions and the marginal distribution in the limit ρ → ±1, whenever this limit exists. We illustrate this with the example of the bivariate Rice distribution.

I. INTRODUCTION
The performance evaluation of wireless communications systems relies on the analysis of the behavior of signals modeled as random variables or random processes that obey prescribed probability distribution laws. The Pearson product-moment correlation coefficient is commonly used to quantify the strength of the relationship between two random variables. For example, in wireless communications, two-antenna diversity systems take advantage of low signal correlation. The diversity gain obtained by combining the received signals at the two antenna branches depends on the amount of correlation between them, e.g., the lower the correlation the higher the diversity gain [1]. Moreover, when the correlation coefficient is +1 or −1 the two signals are said to be perfectly correlated, which implies that there is no diversity gain. In this case the joint probability distribution of the signals is not well-defined and, to the best knowledge of the authors, a thorough discussion of these limit cases is not found in the literature.
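A small numerical sketch (not from the note) of the collapse the abstract refers to: for a bivariate Gaussian with unit variances, the spread of Y around ρX is √(1−ρ²), so the joint density concentrates on the line y = x as ρ → 1, which is what the Dirac-δ representation formalizes; the ρ values are arbitrary illustrations.

```python
# As rho -> 1 the residual Y - rho*X of a unit-variance bivariate Gaussian
# shrinks like sqrt(1 - rho^2), so the joint density collapses onto y = x.
# The rho values are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(5)
for rho in (0.9, 0.99, 0.999, 0.99999):
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T
    print(f"rho = {rho:>8}: std of Y - rho*X ~ {np.std(y - rho * x):.4f}, "
          f"theory sqrt(1-rho^2) = {np.sqrt(1 - rho**2):.4f}")
```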