Another Generalization of the Skew Normal Distribution
World Applied Sciences Journal 12 (7): 1034-1039, 2011
ISSN 1818-4952
© IDOSI Publications, 2011

Another Generalization of the Skew Normal Distribution

1M.R. Kazemi, 1H. Haghbin and 2J. Behboodian
1Department of Mathematics, Kazerun Branch, Islamic Azad University, Kazerun, Iran
2Department of Mathematics, Shiraz Islamic Azad University, Shiraz, Iran

Abstract: In this paper, we first introduce a new family of distributions, called Minimax Normal distributions, by using the Kumaraswamy distribution. We then use this family to obtain a generalization of the Balakrishnan skew-normal distribution, named the skew minimax Normal distribution, and we study some of its properties.

Key words: Kumaraswamy distribution • Balakrishnan Skew-Normal distribution • Farlie-Gumbel-Morgenstern formula • Moment generating function • Conditional probability • Limiting distribution

INTRODUCTION

A random variable X is said to be skew-normal with parameter λ ∈ R, written as X ~ SN(λ), if its density function is

f(x; λ) = 2 φ(x) Φ(λx), x ∈ R,

where φ(x) and Φ(x) denote the N(0,1) density and distribution function, respectively. This distribution, introduced by Azzalini [1], extends the widely used family of normal distributions by introducing a skewness factor. The skew normal distribution has applications in several statistical fields, such as discriminant analysis, regression and graphical models, which were discussed by Azzalini and Capitanio [2]. Applications in time series and spatial analysis are given by Genton et al. [3], with earlier fragmentary work by Aigner et al. [4] and Roberts [5] in econometrics and medical studies. Azzalini and Dalla Valle [6], Azzalini and Capitanio [2] and Arnold and Beaver [7] discussed the multivariate extension of the distribution; Arellano-Valle et al. [8], Gupta and Gupta [9], Jamalizadeh et al. [10], Sharafi and Behboodian [11] and Yadegari et al. [12] proposed some general classes of skew-normal distributions.

Kumaraswamy [13] proposed a new probability distribution for doubly bounded random processes with hydrological applications. Kumaraswamy's distribution has the probability density function

f(x) = mn x^(m-1) (1 - x^m)^(n-1), 0 < x < 1.   (1)

Recently, a natural way of generating families of distributions on some support from a simple starting distribution G with density g has been to apply a family of distributions on (0,1); Jones [14, 15] used this idea with the beta distribution. That family is motivated by the following general class:

f(x) = g(x) G^(a-1)(x) (1 - G(x))^(b-1) / B(a,b).   (2)

Eugene et al. [16] introduced what is known as the beta normal family by taking G in (2) to be the cdf of the normal distribution.

Now, if the beta distribution in (2) is replaced by Kumaraswamy's distribution, the resulting density is

f(x) = mn G^(m-1)(x) {1 - G^m(x)}^(n-1) g(x).   (3)

As with the beta-based family (2), taking G(·) in (3) to be the cdf of another distribution yields a new family of distributions.

In this paper, applying the above discussion, we generalize another aspect of the skew normal distribution along the lines of the Balakrishnan skew normal, or generalized skew normal [17], and review more of its properties, as follows. In Section 2, we take G in (3) to be the standard normal distribution; we name the resulting distribution Minimax Normal (MMN) and derive some of its properties. In Section 3, a new generalization of the Balakrishnan skew normal is introduced by using the MMN distribution and is named Skew Minimax Normal (SMMN); some properties of this distribution are also studied.

Corresponding Author: M.R. Kazemi, Department of Mathematics, Kazerun Branch, Islamic Azad University, Kazerun, Iran.
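To make the construction in (1)–(3) concrete, the following is a minimal Python sketch (standard library only; the function names are ours, not the paper's). It evaluates the Kumaraswamy density (1) and the Kumaraswamy-generated family (3) for an arbitrary base distribution G; taking G = Φ yields exactly the Minimax Normal density introduced in the next section.

```python
import math

def kumaraswamy_pdf(x, m, n):
    # Density (1): f(x) = m n x^(m-1) (1 - x^m)^(n-1), for 0 < x < 1
    return m * n * x ** (m - 1) * (1 - x ** m) ** (n - 1)

def kw_g_pdf(x, m, n, g_pdf, g_cdf):
    # Density (3): the Kumaraswamy-generated family for a base cdf G
    # with density g: f(x) = m n G^(m-1)(x) [1 - G^m(x)]^(n-1) g(x)
    G = g_cdf(x)
    return m * n * G ** (m - 1) * (1 - G ** m) ** (n - 1) * g_pdf(x)

def phi(x):
    # N(0,1) density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # N(0,1) cdf, written with the standard-library error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))
```

With g_pdf = phi and g_cdf = Phi, kw_g_pdf(x, m, n, phi, Phi) is the MMN(m, n) density of Section 2; a numerical integration over a wide interval confirms that both densities integrate to one.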
In Section 4, a multivariate version of the Skew Minimax Normal distribution is given.

MINIMAX NORMAL DISTRIBUTION

In this section we define the Minimax Normal distribution and study its properties.

Definition 1: A random variable X has the MMN(m, n) distribution if its density function is

f(x) = mn Φ^(m-1)(x) [1 - Φ^m(x)]^(n-1) φ(x), x ∈ R,   (4)

where m and n are positive integers.

Another way to construct this distribution involves normal order statistics. Consider the following array of independent and identically distributed standard normal random variables:

X_11, ..., X_1m
...
X_n1, ..., X_nm

Then it is easy to show that

X = min_{1≤i≤n} max_{1≤j≤m} X_ij

has the MMN(m, n) distribution; indeed, P(max_{1≤j≤m} X_ij ≤ x) = Φ^m(x), so P(X > x) = [1 - Φ^m(x)]^n, and differentiating 1 - P(X > x) gives (4). Thus it is reasonable to call this new distribution the Minimax Normal distribution.

Property 1: If X ~ MMN(m, n), then its cumulative distribution function (cdf) is

F_X(x; m, n) = 1 - [1 - Φ^m(x)]^n,

which can be used to simulate MMN random variables.

Property 2 (Bivariate Generalization): A bivariate generalization of the MMN distribution can be derived by using the Farlie-Gumbel-Morgenstern formula as follows:

F_{X_1,X_2}(x_1, x_2) = [1 - (1 - Φ^{m_1}(x_1))^{n_1}] [1 - (1 - Φ^{m_2}(x_2))^{n_2}] × [1 + α (1 - Φ^{m_1}(x_1))^{n_1} (1 - Φ^{m_2}(x_2))^{n_2}],

where X_1 ~ MMN(m_1, n_1), X_2 ~ MMN(m_2, n_2) and -1 < α < 1.

Theorem 1: The moment of order k for X ~ MMN(m, n) is

E(X^k) = mn Σ_{i=0}^{n-1} C(n-1, i) (-1)^i (1/2)^{(i+1)m-1} Σ_{j=0}^{(i+1)m-1} C((i+1)m-1, j) [1 + (-1)^{k+j}] I_{k,j},

where

I_{k,s} = (1/√(2π)) ∫_0^∞ z^k exp(-z²/2) erf^s(z/√2) dz

is given by Gupta and Nadarajah [18].

Proof: Expanding [1 - Φ^m(x)]^(n-1) by the binomial theorem,

E(X^k) = mn Σ_{i=0}^{n-1} C(n-1, i) (-1)^i ∫_R x^k Φ^{(i+1)m-1}(x) φ(x) dx.

Splitting each integral at zero, substituting z = -x on the negative half-line and writing Φ(z) = [1 + erf(z/√2)]/2 gives

∫_R x^k Φ^{(i+1)m-1}(x) φ(x) dx = (1/2)^{(i+1)m-1} Σ_{j=0}^{(i+1)m-1} C((i+1)m-1, j) [1 + (-1)^{k+j}] I_{k,j},

and the result follows.

Property 3: If X ~ MMN(m, n_1) and Y ~ MMN(m, n_2) are independent random variables, then it can be shown that X|{Y ≥ X} is distributed as MMN(m, n_1 + n_2).

SKEW MINIMAX NORMAL DISTRIBUTION

Balakrishnan [17], as a discussant of Arnold and Beaver [7], generalized the Azzalini distribution as

f_n(x; λ) = Φ^n(λx) φ(x) / C_n(λ), x ∈ R,   (5)

where n is a non-negative integer and C_n(λ) is the normalizing constant. This distribution is known as the Balakrishnan skew normal, or generalized skew normal, denoted by GSN_n(λ).

A generalization of (5) can be derived as follows. If X ~ N(0,1) and Y ~ MMN(m, n) are independent random variables, then it can easily be shown that X|{Y ≥ λX} has a new distribution with the following pdf:

f_X(x; m, n, λ) = [1 - Φ^m(λx)]^n φ(x) / K_{m,n}(λ),   (6)

where

K_{m,n}(λ) = P(Y ≥ λX) = ∫_R [1 - Φ^m(λx)]^n φ(x) dx = Σ_{i=0}^{n} C(n, i) (-1)^i ∫_R Φ^{im}(λx) φ(x) dx.

We will call the pdf (6) Skew Minimax Normal (SMMN) and denote it by SMMN_{m,n}(λ).

Property 4: It is easy to show that the SMMN distribution is a generalization of the Balakrishnan skew normal distribution (5), because SMMN_{1,n}(-λ) = GSN_n(λ).

Property 5: If X ~ SMMN_{m,n}(λ), then -X ~ SMMN_{m,n}(-λ).

Property 6: Let f_X be the pdf of SMMN_{m,n}(λ); then we have

lim_{λ→-∞} f_X(x; m, n, λ) = 2 φ(x) I(x > 0),
lim_{λ→+∞} f_X(x; m, n, λ) = 2 φ(x) I(x < 0).

Theorem 2: The moment of order k for the pdf (6) is as follows:

E(X^k) = (1/K_{m,n}(λ)) Σ_{i=0}^{n} C(n, i) (-1)^i C_{im}(λ) E(Y_i^k),

where Y_i ~ GSN_{im}(λ) and

C_n(λ) = ∫_R Φ^n(λx) φ(x) dx

is the Balakrishnan skew normal constant.

Proof:

E(X^k) = (1/K_{m,n}(λ)) ∫_R x^k [1 - Φ^m(λx)]^n φ(x) dx
       = (1/K_{m,n}(λ)) Σ_{i=0}^{n} C(n, i) (-1)^i ∫_R x^k Φ^{im}(λx) φ(x) dx
       = (1/K_{m,n}(λ)) Σ_{i=0}^{n} C(n, i) (-1)^i C_{im}(λ) E(Y_i^k).

Property 7: In Theorem 2, if k = 1, then we can present a simple expression for E(X), as follows:

E(X) = (mλ / (√(2π(1+λ²)) K_{m,n}(λ))) Σ_{i=1}^{n} i C(n, i) (-1)^i C_{im-1}(λ/√(1+λ²)).

Theorem 3: Let the following array of random variables be independently and identically distributed as N(0,1):

X_11, ..., X_1m
...
X_L1, ..., X_Lm

and let X ~ SMMN_{m,n}(λ) be independent of this array. If

A = { min_{1≤i≤L} max_{1≤j≤m} X_ij ≥ λX },

then X|A is distributed as SMMN_{m,n+L}(λ).

Proof:

P(X ≤ x | A) = ∫_{-∞}^{x} P(min_{1≤i≤L} max_{1≤j≤m} X_ij ≥ λt) f_X(t) dt / P(A)   (7)
             = (1 / (K_{m,n}(λ) P(A))) ∫_{-∞}^{x} [1 - Φ^m(λt)]^{n+L} φ(t) dt.

It can be shown that

P(A) = K_{m,n+L}(λ) / K_{m,n}(λ).

Now the relation (7) may be rewritten as follows:

P(X ≤ x | A) = (1/K_{m,n+L}(λ)) ∫_{-∞}^{x} [1 - Φ^m(λt)]^{n+L} φ(t) dt,

and taking the derivative with respect to x completes the proof.

Theorem 4: If X ~ SMMN_{m,n}(λ), then its moment generating function is

M_X(t) = (exp(t²/2) / K_{m,n}(λ)) E[(1 - Φ^m(λZ + λt))^n],

where Z has the standard normal distribution.

Proof: Using the identity exp(tx) φ(x) = exp(t²/2) φ(x - t) and the substitution y = x - t, we have

M_X(t) = (1/K_{m,n}(λ)) ∫_R exp(tx) [1 - Φ^m(λx)]^n φ(x) dx
       = (exp(t²/2)/K_{m,n}(λ)) ∫_R [1 - Φ^m(λx)]^n φ(x - t) dx
       = (exp(t²/2)/K_{m,n}(λ)) ∫_R [1 - Φ^m(λy + λt)]^n φ(y) dy,

and the proof is complete.

For the variable W_(r) treated in the sequel, the limiting behaviour mirrors Property 6:

f_{W_(r)}(w) = 2 f_T(w) I(w < 0) as λ → +∞,
f_{W_(r)}(w) = 2 f_T(w) I(w > 0) as λ → -∞.

Proof: By using the joint density of W and Y, the marginal density of W_(r) is

f_{W_(r)}(w) = f_T(w) b_{m,n}(w, λ, r),

where b_{m,n}(w, λ, r) is a weight factor depending on w, λ and r.
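Since Property 1 gives the cdf in closed form, MMN(m, n) variates can be drawn by inversion and cross-checked against the min-max construction of Definition 1. A minimal sketch in Python (standard library only; the function names are ours, and the bisection inverse of Φ is a rough stand-in for a proper quantile routine):

```python
import math
import random

def Phi(x):
    # N(0,1) cdf via the standard-library error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def Phi_inv(p):
    # crude bisection inverse of Phi; adequate for a sketch
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def mmn_by_inversion(m, n, u):
    # Property 1: F(x) = 1 - (1 - Phi(x)^m)^n, so for u in (0, 1)
    # x = Phi^{-1}( (1 - (1 - u)^{1/n})^{1/m} )
    return Phi_inv((1 - (1 - u) ** (1 / n)) ** (1 / m))

def mmn_by_minmax(m, n, rng):
    # Definition 1: min over n rows of the max of m iid N(0,1) draws
    return min(max(rng.gauss(0, 1) for _ in range(m)) for _ in range(n))
```

Both samplers target the same distribution; for instance, the empirical frequency of {X ≤ 0} under inversion sampling approaches F(0) = 1 - (1 - Φ(0)^m)^n.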