
Available at http://pvamu.edu/aam
Applications and Applied Mathematics: An International Journal (AAM)
Appl. Appl. Math. ISSN: 1932-9466
Vol. 3, Issue 1 (June 2008), pp. 42–54 (Previously, Vol. 3, No. 1)

Some Applications of Dirac's Delta Function in Statistics for More Than One Random Variable

Santanu Chakraborty
Department of Mathematics, University of Texas Pan American, 1201 West University Drive, Edinburg, Texas 78541, USA
[email protected]

Received July 14, 2006; accepted December 7, 2007

Abstract

In this paper, we discuss some interesting applications of Dirac's delta function in statistics. We extend some of the existing results to the case of more than one random variable, concentrating in particular on the bivariate case.

Keywords: Dirac's delta function, random variables, distributions, densities, Taylor series expansions, moment generating functions

1. Introduction

Cauchy, in 1816 (and, independently, Poisson in 1815), was the first to give a derivation of the Fourier integral theorem by means of an argument involving what we would now recognize as a sampling operation of the type associated with the delta function. There are similar examples of the use of what are essentially delta functions by Kirchhoff, Helmholtz, and Heaviside, but Dirac was the first to use the notation δ. The Dirac delta function (δ-function) was introduced by Paul Dirac at the end of the 1920s in an effort to create the mathematical tools for the development of quantum field theory; he referred to it as an "improper function" in Dirac (1930). Later, in 1947, Laurent Schwartz gave it a more rigorous mathematical definition as a linear functional on the space of test functions D (the set of all real-valued infinitely differentiable functions with compact support): for a given function f(x) in D, the value of the functional is given by property (b) below. This is called the sifting or sampling property of the delta function. Since the delta function is not really a function in the classical sense, one should not speak of the "value" of the delta function at x. Rather, the domain of the delta function is D, and its value at f ∈ D, for a given x_0, is f(x_0).

Khuri (2004) studied some interesting applications of the delta function in statistics. He mainly studied univariate cases, although he did give some interesting examples for the multivariate case. In this work, we study some further applications in the multivariate scenario, which may help future researchers in statistics develop more ideas. In Sections 2 and 3, we discuss derivatives of the delta function in the univariate and multivariate cases. In Section 4, we discuss some applications of the delta function in probability and statistics. In Section 5, we discuss calculations of densities in both the univariate and multivariate cases using transformations of variables. In Section 6, we use vector notation for delta functions in the multidimensional case. In Section 7, we discuss very briefly transformations of variables in the discrete case. In Section 8, we discuss the moment generating function in the multivariate setup. We conclude with a few remarks in Section 9.
2. Derivatives of the $\delta$-function in the Univariate Case

In the univariate case, some basic properties satisfied by Dirac's delta function are:

(a) $\int_{-\infty}^{\infty} \delta(x)\,dx = 1$,

(b) $\int_{a}^{b} f(x)\,\delta(x - x_0)\,dx = f(x_0)$ for all $a < x_0 < b$,

where $f(x)$ is any function continuous in a neighborhood of the point $x_0$. In particular, we have
$$\int_{-\infty}^{\infty} f(x)\,\delta(x - x_0)\,dx = f(x_0).$$
This is the sifting property that we mentioned in the previous section. If $f(x)$ is any function with continuous derivatives up to the $n$th order in some neighborhood of $x_0$, then
$$\int_{a}^{b} f(x)\,\delta^{(n)}(x - x_0)\,dx = (-1)^n f^{(n)}(x_0), \quad n \geq 0, \text{ for all } a < x_0 < b.$$
In particular, we have
$$\int_{-\infty}^{\infty} f(x)\,\delta^{(n)}(x - x_0)\,dx = (-1)^n f^{(n)}(x_0), \quad n \geq 0,$$
for a given $x_0$. Here, $\delta^{(n)}$ is the generalized $n$th order derivative of $\delta$. This derivative defines a linear functional which assigns the value $(-1)^n f^{(n)}(x_0)$ to $f(x)$.

Now let us consider the Heaviside (unit step) function $H(x)$, defined by
$$H(x) = \begin{cases} 0, & x < 0, \\ 1, & x \geq 0. \end{cases}$$
The generalized derivative of $H(x)$ is $\delta(x)$, i.e., $\delta(x) = \frac{dH(x)}{dx}$. As a result, we get a special case of the formula for the $n$th order derivative mentioned above:
$$\int_{-\infty}^{\infty} x^n \delta^{(i)}(x)\,dx = \begin{cases} 0, & i \neq n, \\ (-1)^n\, n!, & i = n. \end{cases}$$

3. Derivatives of the Delta Function in the Bivariate Case

Following Saichev and Woyczynski (1997), Khuri (2004) provided the extended definition of the delta function on the $n$-dimensional Euclidean space. But we shall mainly concentrate on the bivariate case. As in the univariate case, we can write down similar properties for the bivariate case as well. In the bivariate case, $\delta(x, y) = \delta(x)\,\delta(y)$. So, if we assume $f(x, y)$ to be a continuous function in some neighborhood of $(x_0, y_0)$, then we can write
$$\iint_{\Re \times \Re} f(x, y)\,\delta(x - x_0, y - y_0)\,dx\,dy = f(x_0, y_0),$$
where $\Re$ is the real line. Now, for this function $f$, if all its partial derivatives up to the $n$th order are continuous in the abovementioned neighborhood of $(x_0, y_0)$, then
$$\iint_{\Re \times \Re} f(x, y)\,\delta^{(n)}(x - x_0, y - y_0)\,dx\,dy = (-1)^n \sum_{0 \leq k \leq n} {}^{n}C_k \left. \frac{\partial^n f(x, y)}{\partial x^k\,\partial y^{n-k}} \right|_{x = x_0,\, y = y_0}, \qquad (1)$$
where ${}^{n}C_k$ is the number of combinations of $k$ out of $n$ objects and $\delta^{(n)}(x, y)$ is the generalized $n$th order derivative of $\delta(x, y)$. In the general $p$-dimensional case, by using induction on $n$, it can be shown that
$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, \ldots, x_p)\,\delta^{(n)}(x_1 - x_1^*, \ldots, x_p - x_p^*)\,dx_1 \cdots dx_p = (-1)^n \sum_{k_1} \cdots \sum_{k_{p-1}} \frac{n!}{k_1! \cdots k_p!} \left. \frac{\partial^n f}{\partial x_1^{k_1} \cdots \partial x_p^{k_p}} \right|_{x = x^*},$$
where the sum extends over all non-negative integers $k_1, \ldots, k_{p-1}$ with $k_p = n - k_1 - \cdots - k_{p-1} \geq 0$, $x = (x_1, \ldots, x_p)'$, $x^* = (x_1^*, \ldots, x_p^*)'$, and $f$ is a function of the $p$ variables $x_1, x_2, \ldots, x_p$.

4. Use of the Delta Function to Obtain Discrete Probability Distributions

If $X$ is a discrete random variable that assumes the values $a_1, \ldots, a_n$ with probabilities $p_1, \ldots, p_n$ respectively, such that $\sum_{1 \leq i \leq n} p_i = 1$, then the probability mass function of $X$ can be represented as
$$p(x) = \sum_{1 \leq i \leq n} p_i\,\delta(x - a_i).$$
Now let us consider two discrete random variables $X$ and $Y$ which assume the values $a_1, \ldots, a_m$ and $b_1, \ldots, b_n$, respectively, with joint probabilities $P(X = a_i, Y = b_j) = p_{ij}$ for $i = 1, \ldots, m$ and $j = 1, \ldots, n$, so that the joint probability mass function $p(x, y)$ is given by
$$p(x, y) = \sum_{1 \leq i \leq m} \sum_{1 \leq j \leq n} p_{ij}\,\delta(x - a_i)\,\delta(y - b_j).$$
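As a quick illustration of these representations, the following minimal sketch in Python's sympy library (whose DiracDelta object implements the delta function and its generalized derivatives) verifies the sifting and derivative properties of Section 2 and integrates a delta-function pmf. The test function, the point $x_0 = 2$, and the three-point distribution below are arbitrary illustrative choices, not taken from the discussion above.

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 + 2*x                       # an arbitrary smooth test function
x0, n = sp.Integer(2), 2

# Sifting property: int f(x) delta(x - x0) dx = f(x0)
sift = sp.integrate(f * sp.DiracDelta(x - x0), (x, -sp.oo, sp.oo))
print(sift, f.subs(x, x0))           # both 12

# Derivative property: int f(x) delta^(n)(x - x0) dx = (-1)^n f^(n)(x0);
# DiracDelta(arg, n) denotes the n-th generalized derivative.
deriv = sp.integrate(f * sp.DiracDelta(x - x0, n), (x, -sp.oo, sp.oo))
print(deriv, (-1)**n * sp.diff(f, x, n).subs(x, x0))   # both 12

# A pmf written as a weighted sum of deltas integrates to 1, and the
# sifting property recovers E[X] = sum_i a_i p_i.
a = [0, 1, 2]
p = [sp.Rational(1, 4), sp.Rational(1, 2), sp.Rational(1, 4)]
pmf = sum(pi * sp.DiracDelta(x - ai) for ai, pi in zip(a, p))
print(sp.integrate(pmf, (x, -sp.oo, sp.oo)))       # total mass: 1
print(sp.integrate(x * pmf, (x, -sp.oo, sp.oo)))   # mean: 1
```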
Similarly, one can write down the joint probability distribution of any finite number of random variables in terms of delta functions as follows. Suppose we have $k$ random variables $X_1, \ldots, X_k$, with $X_i$ taking the values $a_{ij}$, $j = 1, 2, \ldots, n_i$, for $i = 1, 2, \ldots, k$, and with $P(X_1 = a_{1 i_1}, \ldots, X_k = a_{k i_k}) = p_{i_1 \cdots i_k}$. Then the joint probability mass function is
$$P(X_1 = x_1, \ldots, X_k = x_k) = \sum_{i_1 = 1}^{n_1} \cdots \sum_{i_k = 1}^{n_k} p_{i_1 \cdots i_k}\,\delta(x_1 - a_{1 i_1}) \cdots \delta(x_k - a_{k i_k}).$$
As an example, we may consider the multinomial distribution. Let $X_1, X_2, \ldots, X_k$ follow a multinomial distribution with parameters $n, p_1, p_2, \ldots, p_k$. Then
$$P(X_1 = i_1, \ldots, X_k = i_k) = \frac{n!}{i_1! \cdots i_k!}\,p_1^{i_1} \cdots p_k^{i_k},$$
where $i_1, \ldots, i_k$ add up to $n$ and $p_1, \ldots, p_k$ add up to 1. In terms of the delta function, the joint probability mass function is
$$P(X_1 = x_1, X_2 = x_2, \ldots, X_k = x_k) = \sum_{i_1} \sum_{i_2} \cdots \sum_{i_k} \frac{n!}{i_1!\,i_2! \cdots i_k!}\,p_1^{i_1} p_2^{i_2} \cdots p_k^{i_k}\,\delta(x_1 - i_1)\,\delta(x_2 - i_2) \cdots \delta(x_k - i_k),$$
where the sum extends over all non-negative integers $i_1, \ldots, i_k$ adding up to $n$.

We can also consider conditional probabilities and think of expressing them in terms of the $\delta$-function. Let us go back to the example of the two discrete random variables $X$ and $Y$, where $X$ takes the values $a_1, a_2, \ldots, a_m$ and $Y$ takes the values $b_1, b_2, \ldots, b_n$. Then the conditional probability of $Y = y$ given $X = x$ is given by
$$p(y \mid x) = P(Y = y \mid X = x) = \frac{P(Y = y, X = x)}{P(X = x)} = \frac{p(x, y)}{p(x)} = \frac{\displaystyle\sum_{1 \leq i \leq m} \sum_{1 \leq j \leq n} p_{ij}\,\delta(x - a_i)\,\delta(y - b_j)}{\displaystyle\sum_{1 \leq i \leq m} p_{i \cdot}\,\delta(x - a_i)},$$
where $p_{i \cdot} = \sum_{1 \leq j \leq n} p_{ij} = P(X = a_i)$ is the marginal probability of $X = a_i$.

5. Densities of Transformations of Random Variables Using the $\delta$-function

If $X$ is a continuous random variable with density function $f(x)$ and $Y = g(X)$ is a function of $X$, then the density function of $Y$, namely $h(y)$, is given by
$$h(y) = \int_{-\infty}^{\infty} f(x)\,\delta(y - g(x))\,dx.$$
We can extend this to the two-dimensional case. If $X$ and $Y$ are two continuous random variables with joint density function $f(x, y)$, and if $Z = \phi_1(X, Y)$ and $W = \phi_2(X, Y)$ are two random variables obtained as transformations of $(X, Y)$, then the bivariate density function of $Z$ and $W$ is given by
$$h(z, w) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y)\,\delta(z - \phi_1(x, y))\,\delta(w - \phi_2(x, y))\,dx\,dy,$$
where $z$ and $w$ are the variables corresponding to the transformations $\phi_1(X, Y)$ and $\phi_2(X, Y)$.
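As a concrete check of the univariate formula, the following numerical sketch evaluates $h(y)$ for the (arbitrarily chosen) case $X \sim N(0, 1)$ and $Y = g(X) = X^2$, using the standard expansion of the delta function over the roots of $g(x) = y$, namely $\delta(y - g(x)) = \sum_i \delta(x - x_i)/|g'(x_i)|$ (a well-known identity, not derived above), and compares the resulting density, the chi-square density with one degree of freedom, against a Monte Carlo histogram.

```python
import numpy as np

# Density of Y = g(X) = X^2 for X ~ N(0, 1) via
#   h(y) = int f(x) delta(y - g(x)) dx
# evaluated with the root expansion of the delta function: for g(x) = x^2
# the roots of g(x) = y are +sqrt(y) and -sqrt(y), each with |g'| = 2 sqrt(y).
def h(y):
    f = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)  # N(0,1) density
    r = np.sqrt(y)
    return (f(r) + f(-r)) / (2 * r)   # equals the chi-square(1) density

# Monte Carlo check: a normalized histogram of X^2 should match h(y).
rng = np.random.default_rng(0)
y_samples = rng.standard_normal(1_000_000) ** 2
counts, edges = np.histogram(y_samples, bins=np.linspace(0.05, 4.0, 51))
dens = counts / (len(y_samples) * np.diff(edges))     # empirical density
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(dens - h(mid))))  # small, up to Monte Carlo error
```

The same recipe extends to the bivariate formula: integrating $h(z, w)$ over $w$ recovers the marginal density of $Z$.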