Lecture Notes: Probability and Random Processes at KTH (2017). Timo Koski, Department of Mathematics

Total pages: 16

File type: PDF, size: 1020 KB

Lecture Notes: Probability and Random Processes at KTH, for SF2940 Probability Theory. Edition 2017. Timo Koski, Department of Mathematics, KTH Royal Institute of Technology, Stockholm, Sweden.

Contents

Foreword  9

1 Probability Spaces and Random Variables  11
  1.1 Introduction  11
  1.2 Terminology and Notations in Elementary Set Theory  11
  1.3 Algebras of Sets  14
  1.4 Probability Space  19
    1.4.1 Probability Measures  19
    1.4.2 Continuity from below and Continuity from above  21
    1.4.3 Why Do We Need Sigma-Fields?  23
    1.4.4 P-Negligible Events and P-Almost Sure Properties  25
  1.5 Random Variables and Distribution Functions  25
    1.5.1 Randomness?  25
    1.5.2 Random Variables and Sigma Fields Generated by Random Variables  26
    1.5.3 Distribution Functions  28
  1.6 Independence of Random Variables and Sigma Fields, I.I.D. r.v.'s  29
  1.7 The Borel-Cantelli Lemmas  30
  1.8 Expected Value of a Random Variable  32
    1.8.1 A First Definition and Some Developments  32
    1.8.2 The General Definition  34
    1.8.3 The Law of the Unconscious Statistician  34
    1.8.4 Three Inequalities for Expectations  35
    1.8.5 Limits and Integrals  37
  1.9 Appendix: lim sup x_n and lim inf x_n  37
    1.9.1 Sequences of real numbers  37
    1.9.2 lim sup x_n  38
    1.9.3 lim inf x_n  38
    1.9.4 Properties, The Limit of a Sequence  38
  1.10 Appendix: lim sup A_n and lim inf A_n  39
  1.11 Appendix: Combinatorics of Counting and Statistics of Particles in Cells  40
  1.12 Exercises  42
    1.12.1 Easy Drills  42
    1.12.2 Measures, Algebras and Sigma Fields  42
    1.12.3 Random Variables and Expectation  44

2 Probability Distributions  47
  2.1 Introduction  47
  2.2 Continuous Distributions  49
    2.2.1 Univariate Continuous Distributions  49
    2.2.2 Continuous Bivariate Distributions  57
    2.2.3 Mean, Variance and Covariance of Linear Combinations of R.V.'s  59
  2.3 Discrete Distributions  59
    2.3.1 Univariate  59
    2.3.2 Bivariate Discrete Distributions  64
  2.4 Transformations of Continuous Distributions  65
    2.4.1 The Probability Density of a Function of a Random Variable  65
    2.4.2 Change of Variable in a Joint Probability Density  67
  2.5 Appendix: Decompositions of Probability Measures on the Real Line  71
    2.5.1 Introduction  71
    2.5.2 Decompositions of µ on (R, B)  72
    2.5.3 Continuous, Discrete and Singular Random Variables  74
  2.6 Exercises  75
    2.6.1 Distribution Functions  75
    2.6.2 Univariate Probability Density Functions  76
    2.6.3 Multivariate P.d.f.'s  81
    2.6.4 Expectations and Variances  87
    2.6.5 Additional Exercises  88

3 Conditional Probability and Expectation w.r.t. a Sigma Field  91
  3.1 Introduction  91
  3.2 Conditional Probability Densities and Conditional Expectations  91
  3.3 Conditioning w.r.t. an Event  94
  3.4 Conditioning w.r.t. a Partition  95
  3.5 Conditioning w.r.t. a Random Variable  96
  3.6 A Case with an Explicit Rule for Conditional Expectation  97
  3.7 Conditioning w.r.t. a σ-Field  98
    3.7.1 Properties of Conditional Expectation  99
    3.7.2 An Application of the Properties of Conditional Expectation w.r.t. a σ-Field  101
    3.7.3 Estimation Theory  102
    3.7.4 Tower Property and Estimation Theory  103
    3.7.5 Jensen's Inequality for Conditional Expectation  103
  3.8 Exercises  104
    3.8.1 Easy Drills  104
    3.8.2 Conditional Probability  104
    3.8.3 Joint Distributions & Conditional Expectations  106
    3.8.4 Miscellaneous  109
    3.8.5 Martingales  112

4 Characteristic Functions  117
  4.1 On Transforms of Functions  117
  4.2 Characteristic Functions: Definition and Examples  119
    4.2.1 Definition and Necessary Properties of Characteristic Functions  119
    4.2.2 Examples of Characteristic Functions  122
  4.3 Characteristic Functions and Moments of Random Variables  127
  4.4 Characteristic Functions of Sums of Independent Random Variables  128
  4.5 Expansions of Characteristic Functions  131
    4.5.1 Expansions and Error Bounds  131
    4.5.2 A Scaled Sum of Standardised Random Variables (Central Limit Theorem)  132
  4.6 An Appendix: A Limit  133
    4.6.1 A Sequence of Numbers with the Limit e^x  133
    4.6.2 Some Auxiliary Inequalities  134
    4.6.3 Applications  135
  4.7 Exercises  136
    4.7.1 Additional Examples of Characteristic Functions  136
    4.7.2 Selected Exam Questions from the Past Decades  137
    4.7.3 Various Applications of the Characteristic Function  138
    4.7.4 Mellin Transform in Probability  139

5 Generating Functions in Probability  143
  5.1 Introduction  143
  5.2 Probability Generating Functions  143
  5.3 Moments and Probability Generating Functions  147
  5.4 Probability Generating Functions for Sums of Independent Random Variables  148
  5.5 Sums of a Random Number of Independent Random Variables  149
  5.6 The Probability of an Even Number of Successes  151
  5.7 Moment Generating Functions  154
    5.7.1 Definition and First Properties  154
    5.7.2 M.g.f. is really an Exponential Moment Generating Function, E.m.g.f.!  157
  5.8 Exercises  158
    5.8.1 Probability Generating Functions  158
    5.8.2 Moment Generating Functions  159
    5.8.3 Sums of a Random Number of Independent Random Variables  161
    5.8.4 Various Additional Generating Functions in Probability  163
    5.8.5 The Chernoff Inequality  164

6 Convergence of Sequences of Random Variables  165
  6.1 Introduction  165
  6.2 Definitions of Modes of Convergence, Uniqueness of the Limit  167
  6.3 Relations between Convergences  169
  6.4 Some Rules of Computation  172
  6.5 Asymptotic Moments and Propagation of Error  174
  6.6 Convergence by Transforms  177
    6.6.1 Theorems on Convergence by Characteristic Functions  177
    6.6.2 Convergence and Generating Functions  179
    6.6.3 Central Limit Theorem  179
  6.7 Almost Sure Convergence  180
    6.7.1 Definition  180
    6.7.2 Almost Sure Convergence Implies Convergence in Probability  180
    6.7.3 A Summary of the General Implications between Convergence Concepts and One Special Implication  181
    6.7.4 The Strong Law of Large Numbers  182
  6.8 Exercises  183
    6.8.1 Convergence in Distribution  183
    6.8.2 Central Limit Theorem  187
    6.8.3 Convergence in Probability  188
    6.8.4 Proof of Theorem 6.7.2  190
    6.8.5 Almost Sure Convergence, The Interrelationship Between Almost Sure Convergence and Mean Square Convergence, Criteria for Almost Sure Convergence  190

7 Convergence in Mean Square and a Hilbert Space  193
  7.1 Convergence in Mean Square; Basic Points of View  193
    7.1.1 Definition  193
    7.1.2 The Hilbert Space L_2(Ω, F, P)  193
  7.2 Cauchy-Schwarz and Triangle Inequalities  194
  7.3 Properties of Mean Square Convergence  194
  7.4 Applications  196
    7.4.1 Mean Ergodic Theorem  196
    7.4.2 Mean Square Convergence of Sums  196
    7.4.3 Mean Square Convergence of Normal Random Variables  197
  7.5 Subspaces, Orthogonality and Projections in L_2(Ω, F, P)  198
  7.6 Exercises  201
    7.6.1 Mean Square Convergence  201
    7.6.2 Optimal Estimation as Projection on Closed Linear Subspaces in L_2(Ω, F, P)  201

8 Gaussian Vectors  205
  8.1 Multivariate Gaussian Distribution  205
Recommended publications
  • The Free Tangent Law (arXiv:2004.02679v2 [math.OA], 17 Jul 2020)
The Free Tangent Law. Wiktor Ejsmont and Franz Lehner.

Abstract. Nevanlinna-Herglotz functions play a fundamental role in the study of infinitely divisible distributions in free probability [11]. In the present paper we study the role of the tangent function, which is a fundamental Herglotz-Nevanlinna function [28, 23, 54], and of related functions in free probability. To be specific, we show that the function tan z / (1 − x tan z) of Carlitz and Scoville [17, (1.6)] describes the limit distribution of sums of free commutators and anticommutators, and thus the free cumulants are given by the Euler zigzag numbers.

1. Introduction. Nevanlinna or Herglotz functions are functions analytic in the upper half plane having non-negative imaginary part. This class has been thoroughly studied during the last century and has proven very useful in many applications. One of the fundamental examples of Nevanlinna functions is the tangent function, see [6, 28, 23, 54]. On the other hand, it was shown by Bercovici and Voiculescu [11] that Nevanlinna functions characterize freely infinitely divisible distributions. Such distributions naturally appear in free limit theorems, and in the present paper we show that the tangent function appears in a limit theorem for weighted sums of free commutators and anticommutators. More precisely, the family of functions tan z / (1 − x tan z) arises, which was studied by Carlitz and Scoville [17, (1.6)] in connection with the combinatorics of tangent numbers; in particular we recover the tangent function for x = 0. In recent years a number of papers have investigated limit theorems for the free convolution of probability measures defined by Voiculescu [58, 59, 56].
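For orientation, the tangent numbers mentioned here are the odd-indexed Euler zigzag numbers; the following expansion is a standard textbook fact quoted for context, not taken from the paper itself:

```latex
% Tangent (zigzag) numbers T_1 = 1, T_3 = 2, T_5 = 16, T_7 = 272
% appear as the Taylor coefficients of tan z:
\tan z \;=\; \sum_{\substack{n \ge 1 \\ n\ \text{odd}}} \frac{T_n}{n!}\, z^n
       \;=\; z + \frac{z^3}{3} + \frac{2z^5}{15} + \frac{17z^7}{315} + \cdots
```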
  • Randomized Computation of Continuous Data: Is Brownian Motion Computable? (arXiv:1906.06684v1 [math.NA])
Randomized Computation of Continuous Data: Is Brownian Motion Computable? Willem L. Fouché¹, Hyunwoo Lee², Donghyun Lim², Sewon Park², Matthias Schröder³, Martin Ziegler². ¹University of South Africa, ²KAIST, ³University of Birmingham.

Abstract. We consider randomized computation of continuous data in the sense of Computable Analysis. Our first contribution formally confirms that it is no loss of generality to take as sample space the Cantor space of infinite fair coin flips. This extends [Schröder & Simpson '05] and [Hoyrup & Rojas '09], which consider sequences of suitably and adaptively biased coins. Our second contribution is concerned with 1D Brownian Motion (aka the Wiener process), a probability distribution on the space of continuous functions f : [0, 1] → R with f(0) = 0, whose computability has been conjectured [Davie & Fouché '2013; arXiv:1409.4667, §6]. We establish that this (higher-type) random variable is computable iff some/every computable family of moduli of continuity (as ordinary random variables) has a computable probability distribution with respect to the Wiener measure.

1 Introduction. Randomization is a powerful technique in classical (i.e. discrete) Computer Science: many difficult problems have turned out to admit simple solutions by algorithms that 'roll dice' and are efficient/correct/optimal with high probability [DKM+94, BMadHS99, CS00, BV04]. Indeed, fair coin flips have been shown computationally universal [Wal77]. Over continuous data, well known to be closely connected to topology [Grz57] [Wei00, §2.2+§3], notions of probabilistic computation are more subtle [BGH15, Col15].

1.1 Overview. Section 2 resumes from [SS06] the question of how to represent Borel probability measures.
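As a loose illustration of the idea that fair coin flips can drive approximations of Brownian motion, here is a textbook Donsker-scaling sketch (my illustration, not the paper's construction):

```python
import random

def brownian_path(n_steps: int, t_max: float = 1.0) -> list[float]:
    """Approximate a Brownian path on [0, t_max] by a scaled fair-coin walk.

    By Donsker's invariance principle the piecewise-constant walk converges
    in distribution to the Wiener process as n_steps -> infinity.
    """
    dt = t_max / n_steps
    scale = dt ** 0.5                 # each +-scale step has variance dt
    path = [0.0]                      # W(0) = 0
    for _ in range(n_steps):
        step = scale if random.random() < 0.5 else -scale
        path.append(path[-1] + step)
    return path

w = brownian_path(10_000)
print(w[-1])  # one sample of (approximately) W(1) ~ N(0, 1)
```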
  • Power Comparisons of the Rician and Gaussian Random Fields Tests for Detecting Signal from Functional Magnetic Resonance Images (Hasni Idayu Binti Saidi)
University of Northern Colorado, Scholarship & Creative Works @ Digital UNC. Dissertations, Student Research, 8-2018.

Power Comparisons of the Rician and Gaussian Random Fields Tests for Detecting Signal from Functional Magnetic Resonance Images. Hasni Idayu Binti Saidi.

Follow this and additional works at: https://digscholarship.unco.edu/dissertations

Recommended Citation: Saidi, Hasni Idayu Binti, "Power Comparisons of the Rician and Gaussian Random Fields Tests for Detecting Signal from Functional Magnetic Resonance Images" (2018). Dissertations. 507. https://digscholarship.unco.edu/dissertations/507

This text is brought to you for free and open access by the Student Research at Scholarship & Creative Works @ Digital UNC. It has been accepted for inclusion in Dissertations by an authorized administrator of Scholarship & Creative Works @ Digital UNC. For more information, please contact [email protected].

©2018 Hasni Idayu Binti Saidi. All rights reserved.

University of Northern Colorado, Greeley, Colorado. The Graduate School. Power Comparisons of the Rician and Gaussian Random Fields Tests for Detecting Signal from Functional Magnetic Resonance Images. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Hasni Idayu Binti Saidi, College of Education and Behavioral Sciences, Department of Applied Statistics and Research Methods, August 2018.

This dissertation by Hasni Idayu Binti Saidi, entitled "Power Comparisons of the Rician and Gaussian Random Fields Tests for Detecting Signal from Functional Magnetic Resonance Images", has been approved as meeting the requirement for the degree of Doctor of Philosophy in the College of Education and Behavioral Sciences, Department of Applied Statistics and Research Methods. Accepted by the doctoral committee: Khalil Shafie Holighi, Ph.D., Research Advisor; Trent Lalonde, Ph.D., Committee Member; Jay Schaffer, Ph.D., Committee Member; Heng-Yu Ku, Ph.D., Faculty Representative. Date of Dissertation Defense. Accepted by the Graduate School: Linda L.
  • Laws of Total Expectation and Total Variance
Laws of Total Expectation and Total Variance

Definition of conditional density. Assume an arbitrary random variable X with density f_X. Take an event A with P(A) > 0. Then the conditional density f_{X|A} is defined as follows:

f_{X|A}(x) = f(x)/P(A) if x ∈ A, and f_{X|A}(x) = 0 if x ∉ A.

Note that f_{X|A} is supported only on A.

Definition of conditional expectation conditioned on an event.

E(h(X) | A) = ∫_A h(x) f_{X|A}(x) dx = (1/P(A)) ∫_A h(x) f_X(x) dx

Example. For the random variable X with density function f(x) = x³/64 for 0 < x < 4, and f(x) = 0 otherwise, calculate E(X² | X ≥ 1).

Solution. Step 1:

P(X ≥ 1) = ∫_1^4 (x³/64) dx = [x⁴/256] from x = 1 to x = 4 = 255/256

Step 2:

E(X² | X ≥ 1) = (1/P(X ≥ 1)) ∫_{x ≥ 1} x² f(x) dx = (256/255) ∫_1^4 x²·(x³/64) dx
= (256/255)·(1/64)·[x⁶/6] from x = 1 to x = 4 = (256/255)·(4095/384) = 182/17 ≈ 10.706

Definition of conditional variance conditioned on an event.

Var(X | A) = E(X² | A) − E(X | A)²

Example. For the previous example, calculate the conditional variance Var(X | X ≥ 1).

Solution. We already calculated E(X² | X ≥ 1); we only need E(X | X ≥ 1):

E(X | X ≥ 1) = (1/P(X ≥ 1)) ∫_{x ≥ 1} x f(x) dx = (256/255) ∫_1^4 x·(x³/64) dx
= (256/255)·(1/64)·[x⁵/5] from x = 1 to x = 4 = (256/255)·(1023/320) = 1364/425 ≈ 3.209

Finally:

Var(X | X ≥ 1) = E(X² | X ≥ 1) − E(X | X ≥ 1)² = 182/17 − (1364/425)² = 73254/180625 ≈ 0.406

Definition of conditional expectation conditioned on a random variable.
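A quick numerical cross-check of these conditional moments (an illustrative sketch assuming SciPy is available; not part of the quoted notes):

```python
from scipy.integrate import quad

f = lambda x: x**3 / 64.0                        # density of X on (0, 4)

p  = quad(f, 1, 4)[0]                            # P(X >= 1)       = 255/256
m1 = quad(lambda x: x * f(x), 1, 4)[0] / p       # E(X   | X >= 1) = 1364/425
m2 = quad(lambda x: x**2 * f(x), 1, 4)[0] / p    # E(X^2 | X >= 1) = 182/17

print(p, m1, m2, m2 - m1**2)                     # 0.9961, 3.2094, 10.7059, 0.4056
```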
  • Probability Cheatsheet v2.0
Probability Cheatsheet v2.0. Compiled by William Chen (http://wzchen.com) and Joe Blitzstein, with contributions from Sebastian Chiu, Yuan Jiang, Yuqi Hou, and Jessy Hwang. Material based on Joe Blitzstein's (@stat110) lectures (http://stat110.net) and Blitzstein/Hwang's Introduction to Probability textbook (http://bit.ly/introprobability). Licensed under CC BY-NC-SA 4.0. Please share comments, suggestions, and errors at http://github.com/wzchen/probability_cheatsheet. Last updated September 4, 2015.

Thinking Conditionally

Independence. Independent events: A and B are independent if knowing whether A occurred gives no information about whether B occurred. More formally, A and B (which have nonzero probability) are independent if and only if one of the following equivalent statements holds:
P(A ∩ B) = P(A)P(B)
P(A | B) = P(A)
P(B | A) = P(B)

Conditional Independence. A and B are conditionally independent given C if P(A ∩ B | C) = P(A | C)P(B | C).

Law of Total Probability (LOTP). Let B1, B2, B3, ..., Bn be a partition of the sample space (i.e., they are disjoint and their union is the entire sample space). Then
P(A) = P(A | B1)P(B1) + P(A | B2)P(B2) + ... + P(A | Bn)P(Bn)
P(A) = P(A ∩ B1) + P(A ∩ B2) + ... + P(A ∩ Bn)
For LOTP with extra conditioning, just add in another event C!
P(A | C) = P(A | B1, C)P(B1 | C) + ... + P(A | Bn, C)P(Bn | C)
P(A | C) = P(A ∩ B1 | C) + P(A ∩ B2 | C) + ... + P(A ∩ Bn | C)
Special case of LOTP with B and Bc as partition:
P(A) = P(A | B)P(B) + P(A | Bc)P(Bc)
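A tiny enumeration check of LOTP (an illustrative sketch, not part of the cheatsheet): take a fair die, let A = {outcome is even}, and partition the sample space as B1 = {1,2}, B2 = {3,4}, B3 = {5,6}.

```python
from fractions import Fraction

omega = set(range(1, 7))                  # fair six-sided die
A = {2, 4, 6}                             # "outcome is even"
partition = [{1, 2}, {3, 4}, {5, 6}]

def p(event):
    """Probability of an event under the uniform distribution on omega."""
    return Fraction(len(event & omega), len(omega))

# LOTP: P(A) = sum over i of P(A | B_i) * P(B_i)
lotp = sum(p(A & B) / p(B) * p(B) for B in partition)
assert lotp == p(A) == Fraction(1, 2)
```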
  • Probability Cheatsheet v1.1.1
Probability Cheatsheet v1.1.1. Compiled by William Chen (http://wzchen.com) with contributions from Sebastian Chiu, Yuan Jiang, Yuqi Hou, and Jessy Hwang. Material based off of Joe Blitzstein's (@stat110) lectures (http://stat110.net) and Blitzstein/Hwang's Intro to Probability textbook (http://bit.ly/introprobability). Licensed under CC BY-NC-SA 4.0. Please share comments, suggestions, and errors at http://github.com/wzchen/probability_cheatsheet. Last updated March 20, 2015.

Simpson's Paradox. It can happen that
P(A | B, C) < P(A | Bc, C) and P(A | B, Cc) < P(A | Bc, Cc), yet still P(A | B) > P(A | Bc).

Expected Value, Linearity, and Symmetry. Expected value (aka mean, expectation, or average) can be thought of as the "weighted average" of the possible outcomes of our random variable. Mathematically, if x1, x2, x3, ... are all of the possible values that X can take, the expected value of X can be calculated as follows:
E(X) = Σ_i x_i P(X = x_i)
Note that for any X and Y, with a and b scaling coefficients and c our constant, the following property of Linearity of Expectation holds:
E(aX + bY + c) = aE(X) + bE(Y) + c
If two random variables have the same distribution then, even when they are dependent, by the property of Symmetry their expected values are equal.

Bayes' Rule and Law of Total Probability. Law of Total Probability with partitioning set B1, B2, B3, ..., Bn, and with extra conditioning (just add C!):
P(A) = P(A | B1)P(B1) + P(A | B2)P(B2) + ... + P(A | Bn)P(Bn)
P(A) = P(A ∩ B1) + P(A ∩ B2) + ... + P(A ∩ Bn)
P(A | C) = P(A | B1, C)P(B1 | C) + ... + P(A | Bn, C)P(Bn | C)
P(A | C) = P(A ∩ B1 | C) + P(A ∩ B2 | C) + ... + P(A ∩ Bn | C)
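A small simulation showing linearity of expectation holding even for strongly dependent variables (an illustrative sketch; the printed values are Monte Carlo approximations):

```python
import random

random.seed(0)
n = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]

# X and Y = X**2 are dependent, yet E(2X + 3Y + 1) = 2E(X) + 3E(Y) + 1.
ex  = sum(samples) / n                      # ~ E(X) = 0
ey  = sum(x * x for x in samples) / n       # ~ E(Y) = 1
lhs = sum(2*x + 3*x*x + 1 for x in samples) / n
print(lhs, 2*ex + 3*ey + 1)                 # both ~ 4.0
```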
  • Intensity Distribution of Interplanetary Scintillation at 408 MHz
Aust. J. Phys., 1975, 28, 621-32. Intensity Distribution of Interplanetary Scintillation at 408 MHz. R. G. Milne, Chatterton Astrophysics Department, School of Physics, University of Sydney, Sydney, N.S.W. 2006.

Abstract. It is shown that interplanetary scintillation of small-diameter radio sources at 408 MHz produces intensity fluctuations which are well fitted by a Rice-squared distribution, better so than is usually claimed. The observed distribution can be used to estimate the proportion of flux density in the core of 'core-halo' sources without the need for calibration against known point sources.

1. Introduction. The observed intensity of a radio source of angular diameter ≲ 1″ arc shows rapid fluctuations when the line of sight passes close to the Sun. This interplanetary scintillation (IPS) is due to diffraction by electron density variations which move outwards from the Sun at high velocities (~350 km/s). In a typical IPS observation the fluctuating intensity S(t) from a radio source is recorded for a few minutes. This procedure is repeated on several occasions during the couple of months that the line of sight is within ~20° (for observations at 408 MHz) of the direction of the Sun. For each observation we derive: a mean intensity S̄; a scintillation index m defined by m² = ⟨(S − S̄)²⟩/S̄², where ⟨ ⟩ denotes the expectation value; intensity moments of order q, Q_q = ⟨(S − S̄)^q⟩; and the skewness parameter γ₁ = Q₃ Q₂^(−3/2), hereafter referred to as γ. A histogram, or probability distribution, of the normalized intensity S/S̄ is also constructed. The present paper is concerned with estimating the form of this distribution.
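The quoted statistics are straightforward to compute from a sampled intensity series; the sketch below uses synthetic data purely for illustration (not the paper's observations):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for a recorded intensity series S(t).
s = rng.exponential(scale=1.0, size=100_000)

s_bar = s.mean()                            # mean intensity S-bar
m = np.sqrt(np.mean((s - s_bar) ** 2)) / s_bar   # scintillation index
q2 = np.mean((s - s_bar) ** 2)              # intensity moments Q_q
q3 = np.mean((s - s_bar) ** 3)
gamma = q3 * q2 ** (-1.5)                   # skewness parameter

print(m, gamma)   # ~1.0 and ~2.0 for the exponential test series
```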
  • A Guide on Probability Distributions
A Guide on Probability Distributions. R-forge distributions Core Team, University Year 2008-2009.

Contents: Introduction. I. Discrete distributions: 1. Classic discrete distributions; 2. Not-so-common discrete distributions. II. Continuous distributions: 3. Finite support distributions; 4. The Gaussian family; 5. Exponential distribution and its extensions; 6. Chi-squared distribution and related extensions; 7. Student and related distributions; 8. Pareto family; 9. Logistic distribution and related extensions; 10. Extreme Value Theory distributions. III. Multivariate and generalized distributions: 11. Generalization of common distributions; 12. Multivariate distributions; 13. Misc. Conclusion. Bibliography. A. Mathematical tools.

Introduction. This guide is intended to provide a fairly exhaustive (at least as far as I can make it) view on probability distributions. It is constructed in chapters of distribution families with a section for each distribution. Each section focuses on the triptych: definition, estimation, application. Ultimate bibles for probability distributions are Wimmer & Altmann (1999), which lists 750 univariate discrete distributions, and Johnson et al. (1994), which details continuous distributions. In the appendix, we recall the basics of probability distributions as well as "common" mathematical functions, cf. section A.2. For all distributions, we use the following notations:
• X a random variable following a given distribution,
• x a realization of this random variable,
• f the density function (if it exists),
• F the (cumulative) distribution function,
• P(X = k) the mass probability function at k,
• M the moment generating function (if it exists),
• G the probability generating function (if it exists),
• φ the characteristic function (if it exists).
Finally, all graphics are done with the open source statistical software R and its numerous packages available on the Comprehensive R Archive Network (CRAN).
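As a concrete instance of this notation (a standard example for orientation, not taken from the guide's text), the exponential distribution with rate λ > 0 has:

```latex
% Exponential distribution, rate \lambda > 0, instantiating the guide's notation.
f(x)    = \lambda e^{-\lambda x}, \quad x \ge 0            % density
F(x)    = 1 - e^{-\lambda x}                               % distribution function
M(t)    = \frac{\lambda}{\lambda - t}, \quad t < \lambda   % moment generating function
\phi(t) = \frac{\lambda}{\lambda - \mathrm{i}t}            % characteristic function
```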
  • An Alternative Discrete Skew Logistic Distribution
An Alternative Discrete Skew Logistic Distribution. Deepesh Bhati¹*, Subrata Chakraborty², and Snober Gowhar Lateef¹. ¹Department of Statistics, Central University of Rajasthan; ²Department of Statistics, Dibrugarh University, Assam. *Corresponding author: [email protected]. April 7, 2016. (arXiv:1604.01541v1 [stat.ME] 6 Apr 2016)

Abstract. In this paper, an alternative discrete skew Logistic distribution is proposed, which is derived by using the general approach of discretizing a continuous distribution while retaining its survival function. The properties of the distribution are explored and it is compared to a discrete distribution defined on integers recently proposed in the literature. The estimation of its parameters is discussed, with particular focus on the maximum likelihood method and the method of proportion, which is particularly suitable for such a discrete model. A Monte Carlo simulation study is carried out to assess the statistical properties of these inferential techniques. An application of the proposed model to real-life data is given as well.

1 Introduction. Azzalini (1985) and many researchers introduced different skew distributions, like the skew-Cauchy distribution (Arnold B.C. and Beaver R.J. (2000)), the skew-Logistic distribution (Wahed and Ali (2001)), and the skew Student's t distribution (Jones M.C. et al. (2003)). Lane (2004) fitted the existing skew distributions to insurance claims data. Azzalini (1985) and Wahed and Ali (2001) developed the skew Logistic distribution by taking f(x) to be the Logistic density function and G(x) the CDF of the standard Logistic distribution, respectively, and obtained the probability density function (pdf)

f(x; λ) = 2e^(−x) / ((1 + e^(−x))² (1 + e^(−λx))), −∞ < x < ∞, −∞ < λ < ∞.

They have numerically studied the cdf, moments, median, mode and other properties of this distribution.
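A quick sanity check that the displayed skew-logistic density integrates to one (an illustrative sketch assuming SciPy; the λ values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit   # numerically stable logistic CDF 1/(1+exp(-t))

def skew_logistic_pdf(x, lam):
    """Density 2 e^{-x} / ((1+e^{-x})^2 (1+e^{-lam x})), written stably:
    the logistic density e^{-x}/(1+e^{-x})^2 equals 1/(4 cosh^2(x/2))."""
    return 0.5 / np.cosh(x / 2.0) ** 2 * expit(lam * x)

for lam in (-2.0, 0.0, 1.5):
    total, _ = quad(skew_logistic_pdf, -np.inf, np.inf, args=(lam,))
    print(lam, round(total, 6))   # ~1.0 for every lam
```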
  • Statistics 201A (Introduction to Probability at an Advanced Level) - All Lecture Notes
Fall 2018 Statistics 201A (Introduction to Probability at an advanced level) - All Lecture Notes. Aditya Guntuboyina, August 15, 2020.

Contents: 0.1 Sample spaces, Events, Probability; 0.2 Conditional Probability and Independence; 0.3 Random Variables. 1 Random Variables, Expectation and Variance: 1.1 Expectations of Random Variables; 1.2 Variance. 2 Independence of Random Variables. 3 Common Distributions: 3.1 Ber(p) Distribution; 3.2 Bin(n, p) Distribution; 3.3 Poisson Distribution. 4 Covariance, Correlation and Regression. 5 Correlation and Regression. 6 Back to Common Distributions: 6.1 Geometric Distribution; 6.2 Negative Binomial Distribution. 7 Continuous Distributions: 7.1 Normal or Gaussian Distribution; 7.2 Uniform Distribution; 7.3 The Exponential Density; 7.4 The Gamma Density. 8 Variable Transformations. 9 Distribution Functions and the Quantile Transform. 10 Joint Densities. 11 Joint Densities under Transformations: 11.1 Detour to Convolutions.
  • Skellam Type Processes of Order K and Beyond
Skellam Type Processes of Order k and Beyond. Neha Gupta^a, Arun Kumar^a, Nikolai Leonenko^b. ^a Department of Mathematics, Indian Institute of Technology Ropar, Rupnagar, Punjab - 140001, India; ^b Cardiff School of Mathematics, Cardiff University, Senghennydd Road, Cardiff, CF24 4AG, UK.

Abstract. In this article, we introduce the Skellam process of order k and its running average. We also discuss the time-changed Skellam process of order k. In particular, we discuss the space-fractional Skellam process and the tempered space-fractional Skellam process via time changes in the Poisson process by an independent stable subordinator and a tempered stable subordinator, respectively. We derive the marginal probabilities, Lévy measures, and governing difference-differential equations of the introduced processes. Our results generalize the Skellam process and the running average of the Poisson process in several directions.

Key words: Skellam process, subordination, Lévy measure, Poisson process of order k, running average.

1 Introduction. The Skellam distribution is obtained by taking the difference between two independent Poisson distributed random variables; it was introduced for the case of different intensities λ1, λ2 in [1] and for equal means in [2]. For large values of λ1 + λ2, the distribution can be approximated by the normal distribution, and if λ2 is very close to 0, the distribution tends to a Poisson distribution with intensity λ1. Similarly, if λ1 tends to 0, the distribution tends to a Poisson distribution supported on the non-positive integers. The Skellam random variable is infinitely divisible since it is the difference of two infinitely divisible random variables (see Prop. 2.1 in [3]). Therefore, one can define a continuous-time Lévy process for the Skellam distribution, which is called the Skellam process.
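The difference-of-Poissons construction is easy to simulate (an illustrative sketch using NumPy, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
lam1, lam2, n = 3.0, 1.5, 200_000

# A Skellam(lam1, lam2) variate is the difference of two independent Poissons.
x = rng.poisson(lam1, n) - rng.poisson(lam2, n)

print(x.mean(), x.var())   # ~ lam1 - lam2 = 1.5 and ~ lam1 + lam2 = 4.5
```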
  • On the Bivariate Skellam Distribution (Jan Bulla, Christophe Chesneau, Maher Kachour)
On the bivariate Skellam distribution. Jan Bulla, Christophe Chesneau, Maher Kachour. HAL Id: hal-00744355, https://hal.archives-ouvertes.fr/hal-00744355v1. Preprint submitted on 22 Oct 2012 (v1), last revised 23 Oct 2013 (v2). Submitted to Journal of Multivariate Analysis: 22 October 2012.

Abstract. In this paper, we introduce a new distribution on Z², which can be viewed as a natural bivariate extension of the Skellam distribution. The main feature of this distribution is a possible dependence of the univariate components, both following univariate Skellam distributions. We explore various properties of the distribution and investigate the estimation of the unknown parameters via the method of moments and maximum likelihood. In the experimental section, we illustrate our theory. First, we compare the performance of the estimators by means of a simulation study. In the second part, we present an application to a real data set and show how an improved fit can be achieved by estimating mixture distributions.
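One standard way to obtain dependent Skellam marginals is a shared Poisson component; this is an assumption made for illustration here, and the paper's actual construction may differ:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
lam_c = 2.0                      # shared Poisson component -> dependence
lam_u1, lam_v1 = 1.0, 1.5        # extra components for margin 1
lam_u2, lam_v2 = 0.5, 0.8        # extra components for margin 2

w = rng.poisson(lam_c, n)        # common component shared by both margins
x1 = w + rng.poisson(lam_u1, n) - rng.poisson(lam_v1, n)  # Skellam(lam_c+lam_u1, lam_v1)
x2 = w + rng.poisson(lam_u2, n) - rng.poisson(lam_v2, n)  # Skellam(lam_c+lam_u2, lam_v2)

print(np.cov(x1, x2)[0, 1])      # ~ lam_c = 2.0, the shared component's variance
```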