Mathematics and Statistics 4(1): 1-14, 2016 http://www.hrpub.org DOI: 10.13189/ms.2016.040101

On the Kumaraswamy Fisher Snedecor Distribution

Adepoju, K.A.*, Chukwu, A.U., Shittu, O.I.

Department of Statistics, University of Ibadan, Nigeria

Copyright©2016 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License

Abstract  We propose the Kumaraswamy-F (KUMA-F) distribution, which is a generalization of the conventional Fisher Snedecor (F) distribution. The new distribution can be used even when one or more of the regular assumptions are violated. It is obtained by adding two shape parameters to the continuous F-distribution that is commonly used to test the null hypothesis in the Analysis of Variance (ANOVA). The statistical properties of the proposed distribution, such as the moments, the generating function and the asymptotic behaviour, among others, are investigated. The method of maximum likelihood is used to estimate the model parameters, and the observed information matrix is derived. The distribution is found to be more flexible and robust to the regular assumptions of the conventional F-distribution. The new distribution is recommended for use in most applications where the assumptions underlying the use of the conventional F-distribution for one-way analysis of variance, such as homogeneity of variance or normality, are violated, probably as a result of the presence of outlier(s). It is instructive to note that the new distribution preserves the originality of the data without transformation. In future research, the flexibility of this distribution as well as its robustness will be examined using a real data set.

Keywords  Fisher-Snedecor Distribution, Kumaraswamy-F Distribution, One Way ANOVA, Outlier, Maximum Likelihood Method

1. Introduction

An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. Exact F-tests mainly arise when the models have been fitted to the data using least squares. The name was coined by George W. Snedecor in honour of Sir Ronald A. Fisher, who initially developed the statistic as the variance ratio in the 1920s. The efficient use of the classical F-test to compare several (k) population means relies heavily on a good number of assumptions of the Analysis of Variance (ANOVA). For the F-ratio statistic there are two fundamental assumptions: the variances of the compared populations are the same, and the estimates of the population variance are independent. Therefore, before we proceed with an analysis of the data we have collected, we have to make sure that these assumptions have been met. These assumptions include independence of the k populations being tested, equality of the population variances, and absence of outliers, among others. When a number of the above assumptions are violated, the use of the F-test to test for the equality of the means becomes incorrect or misleading. For example, if the assumption of independence is violated, then the one-way ANOVA is simply not appropriate. Similarly, when the assumption of normality or of equal variances is violated, the classical F-test may fail to reject the null hypothesis even when the data actually provide strong evidence against it. A potentially more damaging violation occurs when one or more of the populations being tested are not normally distributed, probably due to the presence of outliers. This occurs especially when the sample sizes are not equal (unbalanced). Often, the effect of an assumption violation on the result of a one-way ANOVA depends on the extent of the violation (such as how unequal the population variances are, or how heavy-tailed one or another population distribution is). Some small violations may have little practical effect on the analysis, while other violations may render the one-way ANOVA result uselessly incorrect. Krutchkoff [10] discussed some misconceptions about the F-test and provided a simulation-based solution to overcome drawbacks of the test. ANOVA under heteroscedasticity is a Behrens-Fisher type problem. The Behrens-Fisher problem, named after Ronald Fisher and W. V. Behrens, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples. Tsui and Weerahandi [14] generalized the conventional definition of the p-value from the F-distribution so that

problems such as the Behrens-Fisher problem can be resolved. Weerahandi [15] discussed the numerical equivalence of this test with the representation of the Behrens-Fisher solution given in Barnard [1]. Rice and Gaines [12] extended the p-value given in Barnard [1] to the one-way ANOVA case. Samaradasa and Weerahandi [16] extended the representation of the two-sample test given in Tsui and Weerahandi [14] to the one-way ANOVA case and provided exact tests for making multiple comparisons for means and variances. This test, referred to as the generalized F-test for one-way ANOVA, is numerically equivalent in performance to the test in Rice and Gaines [12].

This paper therefore focuses on developing a generalized F-distribution that is capable of handling data that are non-normal and probably infected with outliers. The proposed distribution will be used to develop a p-value that is less sensitive to serious model assumption violations.

2. The Kumaraswamy-F Distribution (KUMA-F)

In this section the Kumaraswamy-F distribution is developed by compounding the tractable link function of Kumaraswamy [8] with the conventional F-distribution of Fisher and Snedecor. The pdf of the F-distribution with r_1 and r_2 degrees of freedom is

f(x) = \frac{(r_1/r_2)^{r_1/2}\, x^{r_1/2-1}}{B(r_1/2,\, r_2/2)\left(1 + r_1 x/r_2\right)^{(r_1+r_2)/2}}, \qquad x > 0,   (1)

and its distribution function can be written as

F(x) = I_{r_1 x/r_2} = \frac{\beta\left(r_1 x/r_2;\, r_1/2,\, r_2/2\right)}{B\left(r_1/2,\, r_2/2\right)} = A,   (2)

where \beta(z; a, d) = \int_0^{z} y^{a-1}(1+y)^{-(a+d)}\, dy is the incomplete beta function of the second kind and B(a, d) its complete counterpart. The Kumaraswamy generator (link function), as given by Jones [7], is

g(x) = b\,c\,[F(x)]^{c-1}\left[1 - F(x)^{c}\right]^{b-1} f(x).   (3)

Using (1) and (2) in (3), we obtain the density function of the Kumaraswamy-F distribution as

g(x) = \frac{b\,c\, I_{r_1 x/r_2}^{\,c-1}\left(1 - I_{r_1 x/r_2}^{\,c}\right)^{b-1} (r_1/r_2)^{r_1/2}\, x^{r_1/2-1}}{B(r_1/2,\, r_2/2)\left(1 + r_1 x/r_2\right)^{(r_1+r_2)/2}},   (4)

with b > 0, c > 0, x > 0, r_1 > 0, r_2 > 0. A random variable X with this pdf is said to follow the Kumaraswamy-F distribution. The graph of the above pdf is given in Fig. 1 below.
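As a quick numerical sketch (ours, not part of the paper), the density (4) can be evaluated with SciPy: the quantity I_{r_1 x/r_2}(r_1/2, r_2/2) is precisely the CDF of the conventional F-distribution with (r_1, r_2) degrees of freedom, so no special functions need to be coded by hand. The function name kumaf_pdf and the parameter values below are our own illustrative choices.

```python
import numpy as np
from scipy import stats, integrate

def kumaf_pdf(x, b, c, r1, r2):
    # Density (4): g(x) = b*c * I^(c-1) * (1 - I^c)^(b-1) * f(x),
    # where I is the F(r1, r2) CDF and f its density.
    I = stats.f.cdf(x, r1, r2)
    return b * c * I**(c - 1) * (1.0 - I**c)**(b - 1) * stats.f.pdf(x, r1, r2)

# The density should integrate to one (verified analytically in Section 3).
total, _ = integrate.quad(kumaf_pdf, 0, np.inf, args=(2.5, 1.5, 3, 22))
print(round(total, 6))  # -> 1.0
```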

Figure 1. The PDF of the Kumaraswamy F-distribution at r1 = 3, r2 = 22, b = 2.5

Using the appropriate transformation, one can show that g(x) is a proper density. Let

A = I_{r_1 x/r_2} = \frac{\beta(r_1 x/r_2;\, r_1/2,\, r_2/2)}{B(r_1/2,\, r_2/2)}, \qquad \frac{dA}{dx} = f(x),

the F density in (1). Then g(x) = b\,c\, A^{c-1}\left(1 - A^{c}\right)^{b-1} \frac{dA}{dx}, so that

\int_{-\infty}^{\infty} g(x)\, dx = b\,c \int_{0}^{1} A^{c-1}\left(1 - A^{c}\right)^{b-1} dA.

Let M = A^{c}, so that dM = c\, A^{c-1}\, dA. Then

\int_{-\infty}^{\infty} g(x)\, dx = b \int_{0}^{1} (1 - M)^{b-1}\, dM = 1.

This verifies that g(x) is indeed the probability density function of a continuous distribution.

3. Cumulative Distribution Function

Lemma 1: Given that X ~ KUMA-F(b, c, r_1, r_2), its distribution function is

G(x) = 1 - \left[\,1 - \left(\frac{\beta(r_1 x/r_2;\, r_1/2,\, r_2/2)}{B(r_1/2,\, r_2/2)}\right)^{c}\,\right]^{b}.

Proof

G(x) = P(X \le x) = \int_{0}^{x} b\,c\, I_{r_1 t/r_2}^{\,c-1}\left(1 - I_{r_1 t/r_2}^{\,c}\right)^{b-1} \frac{(r_1/r_2)^{r_1/2}\, t^{r_1/2-1}}{B(r_1/2,\, r_2/2)\left(1 + r_1 t/r_2\right)^{(r_1+r_2)/2}}\, dt.

Let P = I_{r_1 t/r_2}, so that \frac{dP}{dt} = \frac{(r_1/r_2)^{r_1/2}\, t^{r_1/2-1}}{B(r_1/2,\, r_2/2)\left(1 + r_1 t/r_2\right)^{(r_1+r_2)/2}}. The integral can then be simplified as follows.

With the further substitution M = P^{c},

G(x) = b \int_{0}^{P^{c}} (1 - M)^{b-1}\, dM = 1 - \left(1 - P^{c}\right)^{b}.   (5)


Recall that P = \frac{\beta(r_1 x/r_2;\, r_1/2,\, r_2/2)}{B(r_1/2,\, r_2/2)}; then

G(x) = 1 - \left[\,1 - \left(\frac{\beta(r_1 x/r_2;\, r_1/2,\, r_2/2)}{B(r_1/2,\, r_2/2)}\right)^{c}\,\right]^{b}.   (6)
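Because (6) expresses G through the F CDF, it also yields a simple inverse-transform sampler: solving U = G(X) for X only requires the F quantile function. A sketch (our own code; the function names and parameter values are illustrative):

```python
import numpy as np
from scipy import stats

def kumaf_cdf(x, b, c, r1, r2):
    # Equation (6): G(x) = 1 - (1 - I^c)^b, with I the F(r1, r2) CDF.
    I = stats.f.cdf(x, r1, r2)
    return 1.0 - (1.0 - I**c)**b

def kumaf_rvs(n, b, c, r1, r2, seed=None):
    # Invert G:  U = G(X)  =>  I = (1 - (1 - U)^(1/b))^(1/c),  X = F^{-1}(I).
    u = np.random.default_rng(seed).uniform(size=n)
    return stats.f.ppf((1.0 - (1.0 - u)**(1.0 / b))**(1.0 / c), r1, r2)

x = kumaf_rvs(50_000, b=2.5, c=1.5, r1=3, r2=22, seed=0)
# The empirical CDF should track the closed form (6).
print(abs(np.mean(x <= 1.0) - kumaf_cdf(1.0, 2.5, 1.5, 3, 22)) < 0.01)  # True
```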

The CDF for various values of b, c, r1 and r2 is plotted below.

Figure 2. CDF of the Kumaraswamy-F distribution

4. Limit Behaviour

In this section we investigate the limiting behaviour of the Kumaraswamy-F distribution as x → ∞ and as x → 0, by taking limits of equation (4). For x → ∞,

\lim_{x \to \infty} g(x) = \lim_{x \to \infty} b\,c\, I_{r_1 x/r_2}^{\,c-1}\left(1 - I_{r_1 x/r_2}^{\,c}\right)^{b-1} \frac{(r_1/r_2)^{r_1/2}\, x^{r_1/2-1}}{B(r_1/2,\, r_2/2)\left(1 + r_1 x/r_2\right)^{(r_1+r_2)/2}} = 0.

This tends to zero because x^{r_1/2-1}\left(1 + r_1 x/r_2\right)^{-(r_1+r_2)/2} \to 0 as x \to \infty (the effective exponent of x is -(r_2/2 + 1) < 0), while the remaining factors stay bounded.

Similarly, as x → 0, \lim_{x \to 0} g(x) = 0, because \lim_{x \to 0} x^{r_1/2-1} = 0 when r_1 > 2. The above indicates that the density vanishes at both ends of its support, so the proposed distribution has a mode in the interior of its support.
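The limiting behaviour can be confirmed numerically; the sketch below (ours) reuses the density (4) written through SciPy's F-distribution, with arbitrary parameter values satisfying r1 > 2.

```python
import numpy as np
from scipy import stats

def kumaf_pdf(x, b, c, r1, r2):
    # Density (4) via the F(r1, r2) CDF (I) and pdf (f).
    I = stats.f.cdf(x, r1, r2)
    return b * c * I**(c - 1) * (1.0 - I**c)**(b - 1) * stats.f.pdf(x, r1, r2)

# g(x) -> 0 at both ends of the support when r1 > 2.
print(kumaf_pdf(1e-8, 2.5, 1.5, 3, 22) < 1e-3)   # True  (x -> 0)
print(kumaf_pdf(1e8, 2.5, 1.5, 3, 22) < 1e-12)   # True  (x -> infinity)
```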

5. Hazard Rate Function

The hazard rate function of a random variable X with probability density g(x) and distribution function G(x) is obtained using

h(x) = \frac{g(x)}{1 - G(x)}.

Then, using (4) and (6) in the above expression, the hazard rate function of the Kumaraswamy-F (KUMA-F) distribution is

h(x) = \frac{b\,c\, I_{r_1 x/r_2}^{\,c-1}\left(1 - I_{r_1 x/r_2}^{\,c}\right)^{b-1} (r_1/r_2)^{r_1/2}\, x^{r_1/2-1}\Big/\left[B(r_1/2,\, r_2/2)\left(1 + r_1 x/r_2\right)^{(r_1+r_2)/2}\right]}{\left[\,1 - \left(\beta(r_1 x/r_2;\, r_1/2,\, r_2/2)/B(r_1/2,\, r_2/2)\right)^{c}\,\right]^{b}}.   (7)
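For this model the hazard simplifies: dividing (4) by 1 − G(x) = (1 − I^c)^b cancels most of the (1 − I^c) factor, leaving h(x) = b c I^{c−1} f(x)/(1 − I^c). The following sketch (ours, with arbitrary parameter values) checks this simplification against the direct ratio g/(1 − G):

```python
from scipy import stats

def kumaf_hazard(x, b, c, r1, r2):
    # Simplified form: h = b*c * I^(c-1) * f(x) / (1 - I^c).
    I = stats.f.cdf(x, r1, r2)
    return b * c * I**(c - 1) * stats.f.pdf(x, r1, r2) / (1.0 - I**c)

b, c, r1, r2, x = 2.5, 1.5, 3.0, 22.0, 1.7
I = stats.f.cdf(x, r1, r2)
g = b * c * I**(c - 1) * (1 - I**c)**(b - 1) * stats.f.pdf(x, r1, r2)  # eq. (4)
G = 1 - (1 - I**c)**b                                                  # eq. (6)
print(abs(g / (1 - G) - kumaf_hazard(x, b, c, r1, r2)) < 1e-12)  # True
```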

6. Estimation and Information Matrix

Let x_1, \ldots, x_n be a random sample from the KUMA-F distribution (4). The log-likelihood function is

\ell(b, c, r_1, r_2) = n\log b + n\log c + (c-1)\sum_{i=1}^{n}\log A_i + (b-1)\sum_{i=1}^{n}\log\left(1 - A_i^{c}\right) + \frac{n r_1}{2}\log r_1 - \frac{n r_1}{2}\log r_2 + \left(\frac{r_1}{2} - 1\right)\sum_{i=1}^{n}\log x_i - n\log B\left(\frac{r_1}{2}, \frac{r_2}{2}\right) - \frac{r_1 + r_2}{2}\sum_{i=1}^{n}\log\left(1 + \frac{r_1 x_i}{r_2}\right),   (8)

where, as in (2), A_i = I_{r_1 x_i/r_2} = \beta(r_1 x_i/r_2;\, r_1/2,\, r_2/2)/B(r_1/2,\, r_2/2).

Differentiating (8) with respect to the parameters b, c, r_1 and r_2 (with A_i = I_{r_1 x_i/r_2} as in (2)) gives the score equations:

\frac{\partial \ell}{\partial b} = \frac{n}{b} + \sum_{i=1}^{n}\log\left(1 - A_i^{c}\right),   (9)


\frac{\partial \ell}{\partial c} = \frac{n}{c} + \sum_{i=1}^{n}\log A_i - (b-1)\sum_{i=1}^{n}\frac{A_i^{c}\log A_i}{1 - A_i^{c}},   (10)

\frac{\partial \ell}{\partial r_1} = (c-1)\sum_{i=1}^{n}\frac{1}{\beta_i}\frac{\partial \beta_i}{\partial r_1} - n\,c\,\frac{\partial}{\partial r_1}\log B\left(\frac{r_1}{2}, \frac{r_2}{2}\right) - c(b-1)\sum_{i=1}^{n}\frac{A_i^{c-1}}{1 - A_i^{c}}\frac{\partial A_i}{\partial r_1} + \frac{n}{2}\left(\log\frac{r_1}{r_2} + 1\right) + \frac{1}{2}\sum_{i=1}^{n}\log x_i - \frac{1}{2}\sum_{i=1}^{n}\log\left(1 + \frac{r_1 x_i}{r_2}\right) - \frac{r_1 + r_2}{2}\sum_{i=1}^{n}\frac{x_i}{r_2 + r_1 x_i},   (11)

where \beta_i = \beta(r_1 x_i/r_2;\, r_1/2,\, r_2/2), and

\frac{\partial \ell}{\partial r_2} = (c-1)\sum_{i=1}^{n}\frac{1}{\beta_i}\frac{\partial \beta_i}{\partial r_2} - n\,c\,\frac{\partial}{\partial r_2}\log B\left(\frac{r_1}{2}, \frac{r_2}{2}\right) - c(b-1)\sum_{i=1}^{n}\frac{A_i^{c-1}}{1 - A_i^{c}}\frac{\partial A_i}{\partial r_2} - \frac{n r_1}{2 r_2} - \frac{1}{2}\sum_{i=1}^{n}\log\left(1 + \frac{r_1 x_i}{r_2}\right) + \frac{r_1 + r_2}{2 r_2}\sum_{i=1}^{n}\frac{r_1 x_i}{r_2 + r_1 x_i}.   (12)

Noting that \psi(k) = \Gamma'(k)/\Gamma(k) denotes the digamma function, so that \frac{\partial}{\partial r_1}\log B\left(\frac{r_1}{2}, \frac{r_2}{2}\right) = \frac{1}{2}\left[\psi\left(\frac{r_1}{2}\right) - \psi\left(\frac{r_1+r_2}{2}\right)\right] and \frac{\partial}{\partial r_2}\log B\left(\frac{r_1}{2}, \frac{r_2}{2}\right) = \frac{1}{2}\left[\psi\left(\frac{r_2}{2}\right) - \psi\left(\frac{r_1+r_2}{2}\right)\right], equations (11) and (12) can be expanded, and the full set of score equations (9)-(12) becomes

\frac{\partial \ell}{\partial b} = \frac{n}{b} + \sum_{i=1}^{n}\log\left(1 - A_i^{c}\right),   (13)


\frac{\partial \ell}{\partial c} = \frac{n}{c} + \sum_{i=1}^{n}\log A_i - (b-1)\sum_{i=1}^{n}\frac{A_i^{c}\log A_i}{1 - A_i^{c}},   (14)

\frac{\partial \ell}{\partial r_1} = (c-1)\sum_{i=1}^{n}\frac{1}{\beta_i}\frac{\partial \beta_i}{\partial r_1} - \frac{n c}{2}\left[\psi\left(\frac{r_1}{2}\right) - \psi\left(\frac{r_1+r_2}{2}\right)\right] - c(b-1)\sum_{i=1}^{n}\frac{A_i^{c-1}}{1 - A_i^{c}}\frac{\partial A_i}{\partial r_1} + \frac{n}{2}\left(\log\frac{r_1}{r_2} + 1\right) + \frac{1}{2}\sum_{i=1}^{n}\log x_i - \frac{1}{2}\sum_{i=1}^{n}\log\left(1 + \frac{r_1 x_i}{r_2}\right) - \frac{r_1+r_2}{2}\sum_{i=1}^{n}\frac{x_i}{r_2 + r_1 x_i},   (15)

\frac{\partial \ell}{\partial r_2} = (c-1)\sum_{i=1}^{n}\frac{1}{\beta_i}\frac{\partial \beta_i}{\partial r_2} - \frac{n c}{2}\left[\psi\left(\frac{r_2}{2}\right) - \psi\left(\frac{r_1+r_2}{2}\right)\right] - c(b-1)\sum_{i=1}^{n}\frac{A_i^{c-1}}{1 - A_i^{c}}\frac{\partial A_i}{\partial r_2} - \frac{n r_1}{2 r_2} - \frac{1}{2}\sum_{i=1}^{n}\log\left(1 + \frac{r_1 x_i}{r_2}\right) + \frac{r_1+r_2}{2 r_2}\sum_{i=1}^{n}\frac{r_1 x_i}{r_2 + r_1 x_i}.   (16)

Equations (13), (14), (15) and (16) are solved numerically for b, c, r_1 and r_2 to obtain the respective estimates \hat{b}, \hat{c}, \hat{r}_1 and \hat{r}_2.
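Equivalently, one can maximize (8) directly with a numerical optimizer instead of solving the score equations. The sketch below (our own, not the authors' code) fits the four parameters to data simulated from the model by inverse transform; the starting values and bounds are arbitrary choices.

```python
import numpy as np
from scipy import stats, optimize

def negloglik(theta, x):
    # Negative log-likelihood from density (4).
    b, c, r1, r2 = theta
    I = np.clip(stats.f.cdf(x, r1, r2), 1e-12, 1 - 1e-12)
    return -np.sum(np.log(b * c) + (c - 1) * np.log(I)
                   + (b - 1) * np.log1p(-I**c) + stats.f.logpdf(x, r1, r2))

# Simulate a sample: U = G(X) inverted through the F quantile function.
rng = np.random.default_rng(1)
u = rng.uniform(size=2000)
b0, c0, r10, r20 = 2.0, 1.5, 4.0, 15.0
x = stats.f.ppf((1 - (1 - u)**(1 / b0))**(1 / c0), r10, r20)

start = np.array([1.0, 1.0, 3.0, 10.0])
res = optimize.minimize(negloglik, start, args=(x,), method="L-BFGS-B",
                        bounds=[(0.05, 50.0)] * 4)
print(res.fun <= negloglik(start, x))  # True: the fit improves on the start
```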

7. Fisher Information

The observed information matrix is obtained from the second derivatives of the log-likelihood, i.e., by differentiating the score equations (13), (14), (15) and (16). Its elements are as follows:

\frac{\partial^2 \ell}{\partial b^2} = -\frac{n}{b^2},   (17)

\frac{\partial^2 \ell}{\partial b\,\partial c} = -\sum_{i=1}^{n}\frac{A_i^{c}\log A_i}{1 - A_i^{c}},   (18)


\frac{\partial^2 \ell}{\partial b\,\partial r_1} = -c\sum_{i=1}^{n}\frac{A_i^{c-1}}{1 - A_i^{c}}\frac{\partial A_i}{\partial r_1},   (19)

\frac{\partial^2 \ell}{\partial b\,\partial r_2} = -c\sum_{i=1}^{n}\frac{A_i^{c-1}}{1 - A_i^{c}}\frac{\partial A_i}{\partial r_2},   (20)

\frac{\partial^2 \ell}{\partial c^2} = -\frac{n}{c^2} - (b-1)\sum_{i=1}^{n}\frac{A_i^{c}\left(\log A_i\right)^2}{\left(1 - A_i^{c}\right)^2},   (21)

 c  r1 x r1 r2  b  , ,  δ 2 n δ 2   r r r   r   r + r  = a∑ − aϕ 1  + aϕ 1 2  + (b −1) δcr i=1 δr  r x r r   2   2  1 1 b c  1 , 1 , 2   r 2 2   2  n 2   δ c  r1 r2  c  r1 x r1 r2   r1   r1 + r2  ∑ log b  ,  − b  , ,  − (b −1)ϕ  + (b −1)ϕ  (22) i=1 δr1δc   2 2   r2 2 2   2   2   r x r r  b c  1 , 1 , 2  n   δl δ  r2 2 2   r   r + r   r   r + r  = −ϕ 2  +ϕ 1 2  − (b −1)ϕ 2  + (b −1)ϕ 1 2  (23) δ δ ∑ δ c r2 i=1 r2 c  r1 x r1 r2   2   2   2   2  b  , ,   r2 2 2 

c  r1 r2  c  r1 x r1 r2  b  ,  − b  , ,  δ 2l  r   r + r  n δ 2  2 2  r 2 2 = ϕ 1 + ϕ 1 2 + −  2  2 n   n   (b 1)∑ δ = δ   r1  2   2  i 1 r2 c  r1 r2  c  r1 x r1 r2  b  ,  − b  , ,    2 2   r2 2 2 

n n 2  r1   r1 + r2  n 1  x   r1 + r2  x − (b −1) ϕ  + (b −1)ϕ  + − ∑  −  ∑ (24)  2   2  2r1 2 i=1  r2 + r1 x   2  i=1 (r2 + r1 x)

 c  r1 r2  c  r1 x r1 r2  b  ,  − b  , ,  δ 2  r   r + r  n δ 2   r r   r r r  = −nϕ  1  + nϕ 1 2  + (b −1)∑ δr δr  2   2  i=1 δr δr  r r   r x r r  1 2 1 2 b c  1 , 2  − b c  1 , 1 , 2  2 2  r 2 2     2   r + r  n r n x 1 n x  r + r  n x + (b −1) ϕ  1 2  − 1 − + 1 2  (25) 2 ∑ ∑ + ∑ 2  2  2 r2 i=1  r1 x  2 i=1 r2 r1 x  2  i=1 (r2 + r1 x) 1+   r2  Mathematics and Statistics 4(1): 1-14, 2016 9

ccrr rxrr bb12,−  1 ,, 12 δδ22r  rr+  n 22r 22 r =−φφ2 + 12 + − 2 −−φ2 22nn  ( c11)∑ (b)  (26) δδrr2222   i=1 ccrr rxrr 2 bb12,−  1 ,, 12 22r2 22

8. Moments and Generating Functions

8.1. Generating Functions

We derive the moment generating function for a random variable X having the KUMA-F density given in equation (4). By definition, the moment generating function of a random variable X is M_X(t) = E(e^{tX}), for t where the expectation exists. Using equation (4), we have

M_X(t) = b\,c \int_{0}^{\infty} e^{tx}\, I_{r_1 x/r_2}^{\,c-1}\left(1 - I_{r_1 x/r_2}^{\,c}\right)^{b-1} \frac{(r_1/r_2)^{r_1/2}\, x^{r_1/2-1}}{B(r_1/2,\, r_2/2)\left(1 + r_1 x/r_2\right)^{(r_1+r_2)/2}}\, dx.

For integer b > 0, the binomial expansion gives

\left(1 - I_{r_1 x/r_2}^{\,c}\right)^{b-1} = \sum_{i=0}^{b-1}\binom{b-1}{i}(-1)^{i}\, I_{r_1 x/r_2}^{\,c i},

so that

M_X(t) = b\,c \sum_{i=0}^{b-1}\binom{b-1}{i}(-1)^{i} \int_{0}^{\infty} e^{tx}\, I_{r_1 x/r_2}^{\,c+ci-1} \frac{(r_1/r_2)^{r_1/2}\, x^{r_1/2-1}}{B(r_1/2,\, r_2/2)\left(1 + r_1 x/r_2\right)^{(r_1+r_2)/2}}\, dx.

Recall from (2) that

A = I_{r_1 x/r_2}, \qquad \frac{dI_{r_1 x/r_2}}{dx} = \frac{(r_1/r_2)^{r_1/2}\, x^{r_1/2-1}}{B(r_1/2,\, r_2/2)\left(1 + r_1 x/r_2\right)^{(r_1+r_2)/2}}.


Since

e^{tx} = \sum_{k=0}^{\infty}\frac{(t x)^{k}}{k!}, \qquad dx = \frac{B(r_1/2,\, r_2/2)\left(1 + r_1 x/r_2\right)^{(r_1+r_2)/2}}{(r_1/r_2)^{r_1/2}\, x^{r_1/2-1}}\, dI_{r_1 x/r_2},

we may write

M_X(t) = b\,c \sum_{i=0}^{b-1}\sum_{k=0}^{\infty}\binom{b-1}{i}(-1)^{i}\,\frac{t^{k}}{k!} \int_{0}^{1} x^{k}\, I_{r_1 x/r_2}^{\,c+ci-1}\, dI_{r_1 x/r_2}.

Since

I_{r_1 x/r_2} = \frac{\beta\left(r_1 x/r_2;\, r_1/2,\, r_2/2\right)}{B\left(r_1/2,\, r_2/2\right)}, \qquad \beta\left(\frac{r_1 x}{r_2};\, \frac{r_1}{2}, \frac{r_2}{2}\right) = \int_{0}^{r_1 x/r_2} y^{\,r_1/2-1}\left(1 + y\right)^{-(r_1+r_2)/2}\, dy,

we expand the integrand. Writing 1 + y = 2\left[1 - \frac{1-y}{2}\right] gives

\left(1 + y\right)^{-(r_1+r_2)/2} = 2^{-(r_1+r_2)/2}\left[1 - \frac{1-y}{2}\right]^{-(r_1+r_2)/2},

and by the generalized binomial series,

\left[1 - \frac{1-y}{2}\right]^{-(r_1+r_2)/2} = \sum_{j=0}^{\infty}\frac{\left(\frac{r_1+r_2}{2}\right)_j}{j!}\,\frac{(1-y)^{j}}{2^{j}},

where (a)_j = a(a+1)\cdots(a+j-1) is the Pochhammer symbol. Noting that

(1-y)^{j} = \sum_{m=0}^{j}\binom{j}{m}(-1)^{m}\, y^{m},

we obtain

\beta\left(\frac{r_1 x}{r_2};\, \frac{r_1}{2}, \frac{r_2}{2}\right) = 2^{-(r_1+r_2)/2}\sum_{j=0}^{\infty}\sum_{m=0}^{j}\frac{\left(\frac{r_1+r_2}{2}\right)_j}{j!\,2^{j}}\binom{j}{m}(-1)^{m}\int_{0}^{r_1 x/r_2} y^{\,r_1/2-1+m}\, dy.

Integrating term by term then gives

\beta\left(\frac{r_1 x}{r_2};\, \frac{r_1}{2}, \frac{r_2}{2}\right) = 2^{-(r_1+r_2)/2}\sum_{j=0}^{\infty}\sum_{m=0}^{j}\frac{\left(\frac{r_1+r_2}{2}\right)_j}{j!\,2^{j}}\binom{j}{m}(-1)^{m}\,\frac{\left(r_1 x/r_2\right)^{r_1/2+m}}{\frac{r_1}{2}+m}.

Since dI_{r_1 x/r_2} = f(x)\, dx with f as in (1), substituting back gives

M_X(t) = b\,c \sum_{i=0}^{b-1}\sum_{r=0}^{\infty}\binom{b-1}{i}(-1)^{i}\,\frac{t^{r}}{r!}\,\frac{1}{B\left(\frac{r_1}{2}, \frac{r_2}{2}\right)^{c+ci}} \int_{0}^{\infty} x^{r}\,\beta\left(\frac{r_1 x}{r_2};\, \frac{r_1}{2}, \frac{r_2}{2}\right)^{c+ci-1} \frac{\left(\frac{r_1}{r_2}\right)^{r_1/2} x^{\,r_1/2-1}}{\left(1 + \frac{r_1 x}{r_2}\right)^{(r_1+r_2)/2}}\, dx.

Applying the series expansion of \beta(\cdot) term by term, the integrand reduces to powers of x against the kernel \left(1 + r_1 x/r_2\right)^{-(r_1+r_2)/2}.

12 On the Kumaraswamy Fisher Snedecor Distribution

c+ci−1  r + 1 +m r1 r2  ∞  r1 1  −   2 +  +m ( c+ci−1)   2 r1 1 r1 r2   2  r1 r1x ∗  x + −1 1+  dx r  r  2  ∫ 2  2   2   1 + m 0 2     If we consider

r1 +r2 ∞  r  r  1 +m  ( c+ ci−1)+r+ 1 −1  r x  2 x  2  2 1+ 1  dx ∫  r  0  2  r x dy r r Let y = 1 = 1 dx = 2 dy r2 dx r2 r1 r x = 2 y r 1

r1 +r2 ∞  r  r  1 +m  (a + i−1)+r+ 1 −1   2 r2  2  2 r1 x r2 ∫ ( y 1+  dy 0 r1  r2  r1

Recall from the beta function of the second kind that

\int_{0}^{\infty} y^{\alpha-1}\left(1 + y\right)^{-(\alpha+\beta)}\, dy = B(\alpha, \beta).

Here \alpha + \beta = \frac{r_1+r_2}{2} and \alpha = \left(\frac{r_1}{2}+m\right)(c+ci-1) + r + \frac{r_1}{2}, so that \beta = \frac{r_2}{2} - \left(\frac{r_1}{2}+m\right)(c+ci-1) - r. We therefore have

i b −1 r r1 (−1)  t  r  2 1 b−1 ∞ i =  1    M x (t) bc  c(1+i) ∑∑ r i=0 r=0 r!  2    r1 r2  b ,  2 2    Mathematics and Statistics 4(1): 1-14, 2016 13

( + )− r c 1 i 1  1 +m  2  m+ j r1 + r2  r1 (−1) + j (c (1+i )−1)+ r +  ∞ j   r   r1  2 2  r  1 2  + m    1      2  ∑∑ r1 +r2   = = + j r  r1   j 0 m 0 r  2   r1  r1 + r2   2  + m   2  2  

 r1   r1 r2  r1  β  + m(c(1+ i)−1) + + r, −  + m(c(1+ i)−1) ) (27)  2   2 2  2 

9. Moments

The r-th moment of the Kumaraswamy-F distribution is obtained using the fact that E(e^{tX}) = \sum_{r=0}^{\infty}\frac{t^{r}}{r!}\, E(X^{r}); the coefficient of t^{r}/r! in (27) gives E(X^{r}):

E(X^{r}) = b\,c \sum_{i=0}^{b-1}\binom{b-1}{i}(-1)^{i}\,\frac{\left(\frac{r_1}{r_2}\right)^{r_1/2}}{B\left(\frac{r_1}{2}, \frac{r_2}{2}\right)^{c(1+i)}} \sum_{j=0}^{\infty}\sum_{m=0}^{j}\left[\frac{(-1)^{m}\left(\frac{r_1+r_2}{2}\right)_j\binom{j}{m}}{j!\,2^{\,j+\frac{r_1+r_2}{2}}\left(\frac{r_1}{2}+m\right)}\right]^{c(1+i)-1} \left(\frac{r_2}{r_1}\right)^{\left(\frac{r_1}{2}+m\right)\left(c(1+i)-1\right)+r+\frac{r_1}{2}} B\left(\left(\frac{r_1}{2}+m\right)\left(c(1+i)-1\right) + \frac{r_1}{2} + r,\; \frac{r_2}{2} - \left(\frac{r_1}{2}+m\right)\left(c(1+i)-1\right) - r\right),   (28)

which exists provided \frac{r_2}{2} - \left(\frac{r_1}{2}+m\right)\left(c(1+i)-1\right) - r > 0.
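Rather than summing the series (28), moments can be checked by direct quadrature of x^r g(x); as with the conventional F-distribution, E(X^r) is finite only when the relevant tail condition holds (r2 > 2r in the b = c = 1 case). A sketch (ours) that recovers the F mean in that special case:

```python
import numpy as np
from scipy import stats, integrate

def kumaf_pdf(x, b, c, r1, r2):
    # Density (4) through the F(r1, r2) CDF and pdf.
    I = stats.f.cdf(x, r1, r2)
    return b * c * I**(c - 1) * (1.0 - I**c)**(b - 1) * stats.f.pdf(x, r1, r2)

def kumaf_moment(r, b, c, r1, r2):
    # E[X^r] by numerical integration of x^r g(x).
    val, _ = integrate.quad(lambda x: x**r * kumaf_pdf(x, b, c, r1, r2),
                            0, np.inf, limit=200)
    return val

# Sanity check: with b = c = 1 the KUMA-F reduces to F(r1, r2),
# whose mean is r2 / (r2 - 2).
m1 = kumaf_moment(1, 1.0, 1.0, 4, 10)
print(round(m1, 4))  # -> 1.25
```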

10. Conclusions

The Kumaraswamy distribution was used to generate the Kumaraswamy-Fisher Snedecor (KUMA-F) distribution and to study some of its properties, which include the moments, from which its mean, variance and related measures can be derived. The graph of the new F-distribution is positively skewed and appears to be fairly leptokurtic, while the shape of the CDF of the KUMA-F is similar to that of the conventional F-distribution. It is instructive to note that if the values of the parameters b and c are set to unity, the proposed KUMA-F distribution reverts to the conventional F-distribution. The proposed KUMA-F distribution is better suited to testing the equality of means across populations when assumptions are violated due to the presence of outliers, without transformation of the data. Further work concerns application to real-life data that would demonstrate the capabilities of the proposed distribution.

REFERENCES

[1] Barnard, G. A. (1984). Comparing the means of two independent samples. Appl. Statist. 33, 266-271.

[2] Brown, M.B., Forsythe, A.B. (1974). The small sample behavior of some statistics which test the equality of several means. Technometrics 16, 129-132.

[3] Chen, S., Chen, J.H. (1998). Single-stage analysis of variance under heteroscedasticity. Communications in Statistics and Simulations 27(3), 641-666.

[4] Gamage, J., Weerahandi, S. (1998). Size performance of some tests in one-way ANOVA. Communications in Statistics and Simulations 27(3), 625-640.

[5] Gupta, R. D. and Kundu, D. (1999). Generalized exponential distributions. Australian and New Zealand Journal of Statistics 41, 173-188.

[6] Gupta, R. D. and Kundu, D. (2001). Exponentiated exponential family: an alternative to gamma and Weibull distributions. Biometrical Journal 43, 117-130.

[7] Jones, M.C. (2009). Kumaraswamy's distribution: A beta-type distribution with some tractability advantages. Statistical Methodology 6(1), 70-81.

[8] Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology 46(1-2), 79-88.

[9] Tsui, K.W., Weerahandi, S. (1989). Generalized p-values in significance testing of hypotheses in the presence of nuisance parameters. J. Amer. Statist. Assoc. 84, 602-607.

[10] Krutchkoff, R. G. (1988). One-way fixed effects analysis of variance when the error variances may be unequal. J. Statist. Comput. Simul. 30, 259-271.

[11] Lomax, Richard G. (2007). Statistical Concepts: A Second Course, p. 10. ISBN 0-8058-5850-4.

[12] Rice, W. R. and Gaines, S. D. (1989). One-way analysis of variance with unequal variances. Proc. Natl. Acad. Sci. 86, 8183-8184.

[13] Scott, A.J., Smith, T.M.F. (1971). Interval estimates for linear combinations of means. Applied Statistics 20(3), 276-285.

[14] Tsui, K. and Weerahandi, S. (1989). Generalized p-values in significance testing of hypotheses in the presence of nuisance parameters. Journal of the American Statistical Association 84, 602-607.

[15] Weerahandi, S. (1993). ANOVA under unequal error variances. Biometrika 38, 330-336.

[16] Weerahandi, S. (1995). Exact Statistical Methods for Data Analysis. Springer-Verlag, New York, 2-50.

[17] Welch, B.L. (1951). On the comparison of several mean values: An alternative approach. Biometrika 38, 330-336.