STATISTICAL PROPERTIES OF THE MC-DAGUM AND RELATED DISTRIBUTIONS

by SASITH RAJASOORIYA

(Under the Direction of Broderick O. Oluyede)

ABSTRACT

In this thesis, we present a new class of distributions called the Mc-Dagum distribution. This class of distributions contains several distributions, such as the beta-Dagum, beta-Burr III, beta-Fisk and Dagum distributions, as special cases. The hazard function, reverse hazard function, moments and mean residual life function are obtained. Inequality measures, entropy and Fisher information are presented. Maximum likelihood estimates of the model parameters are given.

KEY WORDS: Mc-Dagum Distribution, Beta-Dagum Distribution, Inequality Measures, Information Matrix

2009 Mathematics Subject Classification: 62E15, 60E05

STATISTICAL PROPERTIES OF THE MC-DAGUM AND RELATED DISTRIBUTIONS

by SASITH RAJASOORIYA

B.S. in Business Administration

A Thesis Submitted to the Graduate Faculty of Georgia Southern University in Partial Fulfillment of the Requirements for the Degree

MASTER OF SCIENCE

STATESBORO, GEORGIA

2013

© 2013 Sasith Rajasooriya
All Rights Reserved

STATISTICAL PROPERTIES OF THE MC-DAGUM AND RELATED DISTRIBUTIONS

by SASITH RAJASOORIYA

Major Professor: Broderick O. Oluyede

Committee: Charles Champ, Hani Samawi

Electronic Version Approved: May 2013

DEDICATION

This thesis is dedicated to my beloved parents and especially to my beloved wife "Pubudu" for her exceptional support and encouragement.

ACKNOWLEDGMENTS

My deepest gratitude goes to Dr. Broderick Oluyede for his guidance and encouragement throughout this thesis and my entire studies at GSU. It is undoubtedly his exceptional support and direction that made this work a success. It was truly a privilege to have him as my advisor. My special thanks go to my committee members, Dr. Charles Champ and Dr. Hani Samawi, for their valuable contributions towards the success of this work. It is my pleasure to acknowledge all the members of the faculty and staff in the Department of Mathematical Sciences at Georgia Southern University, especially Dr. Sharon Taylor, Department Chair, Martha Abell, Dean and former Department Chair, and Dr. Yan Wu, former Graduate Program Director, for their utmost commitment to the success of the students and the department.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

CHAPTER

1 Introduction
  1.1 Review of the Dagum and Mc-Donald Generalized Distributions
    1.1.1 Dagum Distribution
    1.1.2 Mc-Donald Generalized Distribution
    1.1.3 Hazard and Reverse Hazard Functions
    1.1.4 Outline of Results

2 Introducing the Mc-Dagum Distribution
  2.1 Mc-Dagum Distribution
  2.2 Hazard and Reverse Hazard Functions
  2.3 Expansion of the Distribution
    2.3.1 Submodels
    2.3.2 Kum-Dagum Distribution
  2.4 Concluding Remarks

3 Moments and Inequality Measures
  3.1 Moments
  3.2 Inequality Measures
    3.2.1 Inequality Measures for Some Submodels
  3.3 Concluding Remarks

4 Entropy
  4.1 Renyi and Shannon Entropy
  4.2 β̃-entropy
  4.3 Concluding Remarks

5 Inference
  5.1 Maximum Likelihood Estimates (MLE)
  5.2 Fisher Information Matrix
  5.3 Concluding Remarks
  5.4 Future Work

BIBLIOGRAPHY

CHAPTER 1
INTRODUCTION

1.1 Review of the Dagum and Mc-Donald Generalized Distributions

1.1.1 Dagum Distribution

The Dagum distribution was proposed by Camilo Dagum in the 1970s (Dagum, 1977, 1980). His proposals enable the development of statistical distributions for fitting empirical income and wealth data, accommodating both the heavy tails of empirical income and wealth distributions and an interior mode. The Dagum distribution has both Type-I and Type-II specifications, where Type-I is the three-parameter specification and Type-II is the four-parameter specification.

Dagum (1977) motivated his model by the empirical observation that the income elasticity η(F, x) of the cumulative distribution function (cdf) F of income is a decreasing and bounded function of F.

The cdf and pdf of the Dagum (Type-I) distribution are given by

G(x; λ, δ, β) = (1 + λx^{-δ})^{-β},  (1.1)

and

g(x; λ, δ, β) = βλδx^{-δ-1}(1 + λx^{-δ})^{-β-1},  for λ, δ, β > 0,  (1.2)

respectively, where λ is a scale parameter, and δ and β are shape parameters.

Dagum (1980) refers to his model as the generalized logistic-Burr distribution. When β = 1, the Dagum distribution is also referred to as the log-logistic distribution. Generalized (log-)logistic distributions arise naturally in Burr's

(1942) system of distributions. The most popular Burr distributions are the Burr-XII distribution, often simply called the Burr distribution, with cdf

F(x; δ, β) = 1 − (1 + x^{δ})^{-β},  for x > 0, δ, β > 0,  (1.3)

and more importantly the Burr-III distribution with cdf

F(x; δ, β) = (1 + x^{-δ})^{-β},  for x > 0 and δ, β > 0.  (1.4)

These distributions became popular in economics after the introduction of an additional scale parameter λ, as seen above in the Dagum cdf and pdf. It is clear that the Dagum distribution is a Burr III distribution with an additional scale parameter λ.

The kth raw or noncentral moments of the Dagum distribution are given by

E(X^k) = ∫_0^∞ x^k βλδx^{-δ-1}(1 + λx^{-δ})^{-β-1} dx
       = βλ^{k/δ} B(β + k/δ, 1 − k/δ),  (1.5)

for δ > k, λ, δ, β > 0, where B(·, ·) is the beta function (obtained by setting t = (1 + λx^{-δ})^{-1}).

The mean, mode and variance of the Dagum distribution are given by

μ_X = λ^{1/δ} Γ(β + 1/δ) Γ(1 − 1/δ) / Γ(β),  (1.6)

Mode = λ^{1/δ} [(δβ − 1)/(δ + 1)]^{1/δ},  (1.7)

and

σ²_X = [λ^{2/δ}/Γ²(β)] { Γ(β) Γ(β + 2/δ) Γ(1 − 2/δ) − Γ²(β + 1/δ) Γ²(1 − 1/δ) },  (1.8)

respectively. The qth percentile of the Dagum distribution is

x(q) = λ^{1/δ} (q^{-1/β} − 1)^{-1/δ}.  (1.9)
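As a quick illustration (not part of the original thesis), the following Python sketch implements the Dagum cdf (1.1), pdf (1.2) and quantile function (1.9), draws a sample by inversion, and checks the sample mean against the closed form (1.6); the parameter values and sample size are arbitrary.

```python
# Illustrative sketch: Dagum(lambda, delta, beta) cdf, pdf, quantile and inverse-cdf sampling.
import numpy as np
from scipy.special import gamma

def dagum_cdf(x, lam, delta, beta):
    return (1.0 + lam * x**(-delta))**(-beta)

def dagum_pdf(x, lam, delta, beta):
    return beta * lam * delta * x**(-delta - 1) * (1.0 + lam * x**(-delta))**(-beta - 1)

def dagum_quantile(q, lam, delta, beta):
    # x(q) = lambda^{1/delta} (q^{-1/beta} - 1)^{-1/delta}, equation (1.9)
    return lam**(1.0 / delta) * (q**(-1.0 / beta) - 1.0)**(-1.0 / delta)

lam, delta, beta = 1.5, 4.0, 2.0                         # illustrative parameter values
u = np.random.default_rng(1).uniform(size=200_000)
sample = dagum_quantile(u, lam, delta, beta)             # inverse-cdf sampling

# closed-form mean from (1.6), valid for delta > 1
mean_exact = lam**(1.0 / delta) * gamma(beta + 1.0 / delta) * gamma(1.0 - 1.0 / delta) / gamma(beta)
print(mean_exact, sample.mean())                         # the two values should be close
```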

1.1.2 Mc-Donald Generalized Distribution

Consider an arbitrary parent cdf G(x). The probability density function (pdf) f(x) of the new class of distributions called the Mc-Donald generalized distribution is given by

f(x; a, b, c) = [c g(x)/B(a, b)] G^{ac−1}(x) (1 − G^c(x))^{b−1},  for a > 0, b > 0, and c > 0.  (1.10)

See Cordeiro et al. (2012) for additional details.

Note that g(x) = dG(x)/dx is the pdf of the parent distribution, and a, b and c are additional shape parameters. These additional shape parameters are introduced mainly to control skewness; they also allow us to vary tail weight. It is important to note that for c = 1 we obtain the beta-generalized sub-model of this generalization, and for a = 1 we obtain the Kumaraswamy (Kw) generalized distributions [Kumaraswamy (1980)]. For a random variable X with the density given above in (1.10), we write X ∼ Mc-G(a, b, c).

The cdf for this generalization is given by,

F(x; a, b, c) = I_{G(x)^c}(a, b) = [1/B(a, b)] ∫_0^{G(x)^c} ω^{a−1}(1 − ω)^{b−1} dω,  (1.11)

where I_{G(x)^c}(a, b) = [1/B(a, b)] ∫_0^{G(x)^c} ω^{a−1}(1 − ω)^{b−1} dω denotes the incomplete beta function ratio (Gradshteyn and Ryzhik, 2000). The same equation can be expressed as follows:

F(x; a, b, c) = [G(x)^{ac}/(a B(a, b))] 2F1(a, 1 − b; a + 1; G(x)^c),  (1.12)

where

2F1(a, b; c; z) = [1/B(b, c − b)] ∫_0^1 t^{b−1}(1 − t)^{c−b−1}(1 − tz)^{−a} dt  (1.13)

is the well-known hypergeometric function (Gradshteyn and Ryzhik, 2000), and

B(a, b) = Γ(a)Γ(b)/Γ(a + b).  (1.14)

One important benefit of this class is its ability to fit skewed data that cannot be fitted properly by many other existing distributions. The Mc-G family of densities allows for greater flexibility in its tails and has many applications in fields including economics, finance, reliability and medicine.

1.1.3 Hazard and Reverse Hazard Functions

In this section, some basic utility notions are presented. Suppose the distribution

of a continuous random variable X has the parameter set θ* = {θ_1, θ_2, ..., θ_n}. Let the probability density function (pdf) of X be given by f(x; θ*). The cumulative distribution function of X is defined to be

F(x; θ*) = ∫_{−∞}^x f(t; θ*) dt.  (1.15)

The hazard function of X can be interpreted as the instantaneous failure rate, or the conditional probability density of failure at time x given that the unit has survived until time x. The hazard function h(x; θ*) is defined to be

h(x; θ*) = lim_{Δx→0} P(x ≤ X ≤ x + Δx) / (Δx[1 − F(x; θ*)]) = −F̄′(x; θ*)/F̄(x; θ*) = f(x; θ*)/(1 − F(x; θ*)),  (1.16)

where F̄(x; θ*) is the survival or reliability function.

The reverse hazard function can be interpreted as an approximate probability of failure in [x, x + dx], given that the failure has occurred in [0, x]. The reverse hazard function τ(x; θ*) is defined to be

τ(x; θ*) = f(x; θ*)/F(x; θ*).  (1.17)

Some useful functions that are employed in subsequent sections are given below. The gamma function is given by

Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt.  (1.18)

The digamma function is defined by

Ψ(x) = Γ′(x)/Γ(x),  (1.19)

where

Γ′(x) = ∫_0^∞ t^{x−1}(log t) e^{−t} dt

is the first derivative of the gamma function. The second derivative of the gamma function is

Γ″(x) = ∫_0^∞ t^{x−1}(log t)² e^{−t} dt.

γ(s, x) = ∫_0^x t^{s−1} e^{−t} dt   and   Γ(s, x) = ∫_x^∞ t^{s−1} e^{−t} dt,  (1.20)

respectively.

The hazard function (hf) and reverse hazard function (rhf) of the Mc-G distribution are given by

h_F(x) = c g(x) G^{ac−1}(x) {1 − G^c(x)}^{b−1} / [B(a, b)(1 − I_{G(x)^c}(a, b))],  (1.21)

and

τ_F(x) = c g(x) G^{ac−1}(x) {1 − G^c(x)}^{b−1} / [B(a, b) I_{G(x)^c}(a, b)],  (1.22)

respectively.
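The Mc-G construction is straightforward to compute numerically. The sketch below (illustrative, not from the thesis) implements the density (1.10), cdf (1.11), hazard (1.21) and reverse hazard (1.22) for an arbitrary parent distribution, using the fact that I_y(a, b) is the regularized incomplete beta function; the standard exponential parent used below is only an example.

```python
# Illustrative sketch: Mc-G density, cdf, hazard and reverse hazard for an arbitrary parent.
import numpy as np
from scipy.special import beta as beta_fn
from scipy.stats import beta as beta_dist

def mc_g_pdf(x, a, b, c, G, g):
    # f(x; a,b,c) = c g(x) G(x)^{ac-1} (1 - G(x)^c)^{b-1} / B(a,b)
    Gx = G(x)
    return c * g(x) * Gx**(a * c - 1) * (1.0 - Gx**c)**(b - 1) / beta_fn(a, b)

def mc_g_cdf(x, a, b, c, G):
    # F(x; a,b,c) = I_{G(x)^c}(a, b), the regularized incomplete beta function
    return beta_dist.cdf(G(x)**c, a, b)

def mc_g_hazard(x, a, b, c, G, g):
    return mc_g_pdf(x, a, b, c, G, g) / (1.0 - mc_g_cdf(x, a, b, c, G))

def mc_g_reverse_hazard(x, a, b, c, G, g):
    return mc_g_pdf(x, a, b, c, G, g) / mc_g_cdf(x, a, b, c, G)

# example parent: standard exponential, G(x) = 1 - exp(-x), g(x) = exp(-x)
G = lambda x: 1.0 - np.exp(-x)
g = lambda x: np.exp(-x)
x = np.linspace(0.1, 4.0, 5)
print(mc_g_pdf(x, 2.0, 3.0, 1.5, G, g))
print(mc_g_hazard(x, 2.0, 3.0, 1.5, G, g))
```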

1.1.4 Outline of Results

The outline of this thesis is as follows. In Chapter 2, the Mc-Dagum distribution and the related family of distributions are introduced; the expansion of the density, the hazard and reverse hazard functions, and other properties are presented. Chapter 3 presents the moments and inequality measures. Chapter 4 contains entropy measures of the Mc-Dagum distribution. Chapter 5 contains inference for the model parameters as well as applications of the results presented in the earlier chapters.

CHAPTER 2
INTRODUCING THE MC-DAGUM DISTRIBUTION

In this chapter, a new class of distributions, called the Mc-Dagum distribution, is introduced. Considering the properties and useful features of both the Dagum and Mc-Donald distributions, a broad generalization is possible by combining these distributions. The new class of distributions possesses capabilities that are widely applicable in several areas, as we show in the next few chapters.

2.1 Mc-Dagum Distribution

In Chapter 1, the cdf and pdf of the Dagum distribution were given as

G(x; λ, δ, β) = (1 + λx^{-δ})^{-β},  (2.1)

and

g(x; λ, δ, β) = βλδx^{-δ-1}(1 + λx^{-δ})^{-β-1},  λ, δ, β > 0,  (2.2)

respectively. The pdf of the Mc-Donald distribution is given by

f(x; a, b, c) = [c/B(a, b)] g(x) G^{ac−1}(x)(1 − G^c(x))^{b−1},  a > 0, b > 0, c > 0,  (2.3)

and the cdf is

F(x) = I_{G(x)^c}(a, b) = [1/B(a, b)] ∫_0^{G(x)^c} ω^{a−1}(1 − ω)^{b−1} dω.  (2.4)

Now, substituting the Dagum cdf (2.1) and pdf (2.2) into (2.3), we obtain the

pdf of the Mc-Dagum distribution as follows:

f(x; λ, δ, β, a, b, c) = [c/B(a, b)] βλδx^{-δ-1}(1 + λx^{-δ})^{-β-1} [(1 + λx^{-δ})^{-β}]^{ac−1} [1 − (1 + λx^{-δ})^{-cβ}]^{b−1}

= [cβλδx^{-δ-1}/B(a, b)] (1 + λx^{-δ})^{-βac-1} [1 − (1 + λx^{-δ})^{-cβ}]^{b−1},  (2.5)

for a, b, c, λ, β, δ > 0.

The cdf of this new distribution is given by

F(x) = I_{G(x)^c}(a, b)
     = [1/B(a, b)] ∫_0^{G(x)^c} ω^{a−1}(1 − ω)^{b−1} dω
     = [1/B(a, b)] ∫_0^{(1+λx^{-δ})^{-βc}} ω^{a−1}(1 − ω)^{b−1} dω
     = I_{(1+λx^{-δ})^{-βc}}(a, b),  (2.6)

where

I_y(a, b) = [1/B(a, b)] ∫_0^y ω^{a−1}(1 − ω)^{b−1} dω  (2.7)

is the incomplete beta function. The cdf can also be written as follows:

F(x) = [(1 + λx^{-δ})^{-βac}/(a B(a, b))] 2F1(a, 1 − b; a + 1; (1 + λx^{-δ})^{-βc}),  (2.8)

where

2F1(a, b; c; z) = [1/B(b, c − b)] ∫_0^1 y^{b−1}(1 − y)^{c−b−1}(1 − yz)^{−a} dy  (2.9)

is the well-known hypergeometric function (Gradshteyn and Ryzhik, 2000).

2.2 Hazard and Reverse Hazard Functions

The failure rate function or hazard function and reverse hazard function are given by

h_F(x; a, b, c, λ, β, δ) = c g(x) G^{ac−1}(x)[1 − G^c(x)]^{b−1} / (B(a, b)[1 − I_{G(x)^c}(a, b)])
= cβλδx^{-δ-1}(1 + λx^{-δ})^{-βac-1}[1 − (1 + λx^{-δ})^{-cβ}]^{b−1} / (B(a, b)[1 − I_{(1+λx^{-δ})^{-βc}}(a, b)]),  (2.10)

and

τ_F(x; a, b, c, λ, β, δ) = cβλδx^{-δ-1}(1 + λx^{-δ})^{-βac-1}[1 − (1 + λx^{-δ})^{-cβ}]^{b−1} / (B(a, b) I_{(1+λx^{-δ})^{-βc}}(a, b)),  (2.11)

for a > 0, b > 0, c > 0, λ > 0, β > 0, δ > 0, respectively.
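As an illustrative check (not part of the thesis), the sketch below evaluates the closed-form Mc-Dagum pdf (2.5) and cdf (2.6), verifies that the density integrates to one, and confirms that the cdf agrees with the generic Mc-G form I_{G(x)^c}(a, b) with a Dagum parent; parameter values are arbitrary.

```python
# Illustrative sketch: closed-form Mc-Dagum pdf/cdf checked against the generic Mc-G construction.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn
from scipy.stats import beta as beta_dist

lam, delta, beta, a, b, c = 1.0, 3.0, 1.5, 2.0, 2.5, 1.2   # illustrative values

def mcdagum_pdf(x):
    A = 1.0 + lam * x**(-delta)
    return (c * beta * lam * delta * x**(-delta - 1) / beta_fn(a, b)
            * A**(-beta * a * c - 1) * (1.0 - A**(-c * beta))**(b - 1))

def mcdagum_cdf(x):
    return beta_dist.cdf((1.0 + lam * x**(-delta))**(-beta * c), a, b)

def mcdagum_hazard(x):
    return mcdagum_pdf(x) / (1.0 - mcdagum_cdf(x))

# check 1: the density integrates to one
total, _ = quad(mcdagum_pdf, 0, np.inf)

# check 2: the cdf matches the generic Mc-G form I_{G(x)^c}(a,b) with the Dagum parent G
G = lambda x: (1.0 + lam * x**(-delta))**(-beta)
x0 = 1.3
print(total, mcdagum_cdf(x0), beta_dist.cdf(G(x0)**c, a, b))
```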

2.3 Expansion of the Distribution

In this section, we present a series expansion of the Mc-Dagum cdf and pdf. Consider the Mc-Dagum cdf given by

F(x; λ, β, δ, a, b, c) = I_{G(x)^c}(a, b)
= [1/B(a, b)] ∫_0^{G(x)^c} ω^{a−1}(1 − ω)^{b−1} dω
= [1/B(a, b)] ∫_0^{(1+λx^{-δ})^{-βc}} ω^{a−1}(1 − ω)^{b−1} dω.  (2.12)

Note that for |ω| < 1,

(1 − ω)^{b−1} = Σ_{j=0}^∞ [(−1)^j Γ(b)/(Γ(b − j) j!)] ω^j.

Therefore, the cdf can be expanded to obtain:

F(x; λ, β, δ, a, b, c) = [1/B(a, b)] ∫_0^{(1+λx^{-δ})^{-βc}} ω^{a−1} Σ_{j=0}^∞ [(−1)^j Γ(b)/(Γ(b − j) j!)] ω^j dω
= Σ_{j=0}^∞ [(−1)^j Γ(b)/(B(a, b) Γ(b − j) j!)] ∫_0^{G(x;λ,β,δ)^c} ω^{a+j−1} dω
= Σ_{j=0}^∞ [(−1)^j Γ(b)/(B(a, b) Γ(b − j) j!)] [G(x; λ, β, δ)]^{c(a+j)}/(a + j)
= Σ_{j=0}^∞ p_j G(x; λ, βc(a + j), δ),  (2.13)

for b > 0 real non-integer, where p_j = (−1)^j Γ(a + b)/(j! Γ(a) Γ(b − j)(a + j)).

Similarly, the pdf is given by

f(x) = Σ_{j=0}^∞ p_j g(x; λ, βc(a + j), δ).  (2.14)

If b > 0 is an integer, then

F(x; λ, β, δ, a, b, c) = Σ_{j=0}^{b−1} p_j G(x; βc(a + j), λ, δ),  (2.15)

and

f(x; λ, β, δ, a, b, c) = Σ_{j=0}^{b−1} p_j g(x; βc(a + j), λ, δ).  (2.16)

This is a finite mixture of Dagum distributions with parameters λ, βc(a + j) and δ. The graphs below show the pdf of the Mc-Dagum distribution for different values of the parameters λ, δ, β, a, b, and c.

The graphs below show the cdf and hazard functions of the Mc-Dagum distribution for different values of the parameters a, b, c, λ, δ, β.
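The series representation in (2.13)–(2.16) can be verified numerically. The following sketch (illustrative, not from the thesis) compares the closed-form Mc-Dagum pdf with a truncated version of the series (2.14), writing the coefficient with a binomial coefficient, p_j = (−1)^j C(b−1, j)/[B(a, b)(a + j)], which is equivalent to the expression above; the truncation point is illustrative.

```python
# Illustrative sketch: numerical check of the mixture expansion f(x) = sum_j p_j g(x; lam, beta*c*(a+j), delta).
import numpy as np
from scipy.special import beta as beta_fn, binom

lam, delta, beta, a, b, c = 1.0, 3.0, 1.5, 2.0, 2.5, 1.2   # illustrative values

def dagum_pdf(x, shape_beta):
    return shape_beta * lam * delta * x**(-delta - 1) * (1.0 + lam * x**(-delta))**(-shape_beta - 1)

def mcdagum_pdf(x):
    A = 1.0 + lam * x**(-delta)
    return (c * beta * lam * delta * x**(-delta - 1) / beta_fn(a, b)
            * A**(-beta * a * c - 1) * (1.0 - A**(-c * beta))**(b - 1))

def mcdagum_pdf_series(x, n_terms=200):
    total = np.zeros_like(np.asarray(x, dtype=float))
    for j in range(n_terms):
        p_j = (-1)**j * binom(b - 1, j) / (beta_fn(a, b) * (a + j))
        total += p_j * dagum_pdf(x, beta * c * (a + j))
    return total

x = np.array([0.5, 1.0, 2.0, 4.0])
print(mcdagum_pdf(x))
print(mcdagum_pdf_series(x))   # should agree closely with the closed form
```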

2.3.1 Submodels

With this generalization, we have several submodels that can be obtained with specific values of the parameters λ, β, a, b and c.

1. When c = 1, the Mc-Dagum distribution is the beta-Dagum distribution, with the density given by:

f(x; λ, β, δ, a, b) = [βλδx^{-δ-1}/B(a, b)] (1 + λx^{-δ})^{-βa-1} [1 − (1 + λx^{-δ})^{-β}]^{b−1},  (2.17)

for x > 0, λ > 0, β > 0, δ > 0, a > 0, and b > 0.

2. If a = b = c = 1, we have the Dagum distribution with the pdf,

f_D(x; λ, δ, β) = βλδx^{-δ-1}(1 + λx^{-δ})^{-β-1},  (2.18)

for λ, δ, β > 0.

3. If b = c = 1 and a > 0, then we have the Dagum distribution with parameters βa, λ and δ. The pdf is

f(x; βa, λ, δ) = βaλδx^{-δ-1}(1 + λx^{-δ})^{-βa-1},  (2.19)

for λ, δ, β > 0.

4. If a = c = 1 and b > 0, we have another beta-Dagum distribution with parameters b, β, λ, δ and the pdf is given by

f_BD(x; λ, δ, β, b) = bβλδx^{-δ-1}(1 + λx^{-δ})^{-β-1}[1 − (1 + λx^{-δ})^{-β}]^{b−1},  (2.20)

for λ, δ and β > 0.

5. If a = c = λ = 1, then we have the beta-Burr III distribution with parameters b, β, δ and the pdf is given by

f_BB(x; δ, β, b) = bβδx^{-δ-1}(1 + x^{-δ})^{-β-1}[1 − (1 + x^{-δ})^{-β}]^{b−1},  (2.21)

for b, δ, β > 0.

6. If c = β = 1, then we have the beta-Fisk distribution with parameters a, b, λ, δ and the pdf is given by

f_BF(x; λ, δ, a, b) = [λδx^{-δ-1}/B(a, b)] (1 + λx^{-δ})^{-a-1}[1 − (1 + λx^{-δ})^{-1}]^{b−1},  (2.22)

for a, b, λ, δ > 0.

2.3.2 Kum-Dagum Distribution

Kumaraswamy (1980) proposed a two-parameter distribution defined on (0, 1). Here we will refer to it as the Kum distribution. Its cdf is given by:

F(x; a, b) = 1 − (1 − x^a)^b,  x ∈ (0, 1), a > 0, b > 0.  (2.23)

The parameters a and b are the shape parameters. The Kum distribution has the probability density function (pdf) given by:

f(x; a, b) = abx^{a−1}(1 − x^a)^{b−1},  x ∈ (0, 1), a > 0, b > 0.  (2.24)

Note that the Kumaraswamy distribution can be derived from the beta distribution. The beta distribution has the pdf:

f(x; α, β) = [Γ(α + β)/(Γ(α)Γ(β))] x^{α−1}(1 − x)^{β−1},  where x ∈ (0, 1), α > 0, β > 0.  (2.25)

Combining the cdf of the Kum distribution with the Dagum distribution discussed in Chapter 1, we obtain the Kum-Dagum distribution, with the cdf and pdf given by

F_Kum(x) = 1 − [1 − (1 + λx^{-δ})^{-βa}]^b,  (2.26)

and

f_Kum(x) = abβλδx^{-δ-1}(1 + λx^{-δ})^{-β-1}[(1 + λx^{-δ})^{-β}]^{a−1}[1 − (1 + λx^{-δ})^{-βa}]^{b−1}
         = abβλδx^{-δ-1}(1 + λx^{-δ})^{-βa-1}[1 − (1 + λx^{-δ})^{-βa}]^{b−1},  (2.27)

for a, b, β, λ, δ > 0, respectively. We do not study the properties of the Kum-Dagum distribution in this thesis.
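As a small illustration (not part of the thesis), the sketch below evaluates the Kum-Dagum cdf (2.26) and pdf (2.27) and checks by central finite differences that the pdf is the derivative of the cdf; parameter values are arbitrary.

```python
# Illustrative sketch: Kum-Dagum cdf/pdf with a finite-difference consistency check.
import numpy as np

lam, delta, beta, a, b = 1.0, 3.0, 1.5, 2.0, 2.5   # illustrative values

def kumdagum_cdf(x):
    return 1.0 - (1.0 - (1.0 + lam * x**(-delta))**(-beta * a))**b

def kumdagum_pdf(x):
    A = 1.0 + lam * x**(-delta)
    return (a * b * beta * lam * delta * x**(-delta - 1)
            * A**(-beta * a - 1) * (1.0 - A**(-beta * a))**(b - 1))

x = np.array([0.5, 1.0, 2.0])
h = 1e-6
fd = (kumdagum_cdf(x + h) - kumdagum_cdf(x - h)) / (2 * h)   # numerical derivative of the cdf
print(fd)
print(kumdagum_pdf(x))   # should match the finite-difference values
```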

2.4 Concluding Remarks

In this chapter, we introduced a new class of distributions called the Mc-Dagum distribution. We obtained the pdf, cdf, hazard function and reverse hazard function for this class of distributions. We obtained the series expansion of the distribution and presented plots of the pdf, cdf and hazard function for different parameter values. These graphs show that the distribution has the flexibility to fit a large range of data sets. We noted that there are several submodels for selected values of the Mc-Dagum model parameters. Additionally, we introduced another new distribution, called the "Kum-Dagum distribution", but we do not discuss its properties in this thesis.

CHAPTER 3
MOMENTS AND INEQUALITY MEASURES

In this chapter, we present moments and inequality measures for the Mc-Dagum distribution. Income distribution and its variation are important concerns for economists. We use the results presented in Chapter 2, which were obtained by expanding the pdf.

3.1 Moments

We can derive the kth moment of the Mc-Dagum distribution using properties of the beta function. The kth raw or non-central moments are given by

E(X^k) = ∫_0^∞ x^k [cβλδx^{-δ-1}/B(a, b)] (1 + λx^{-δ})^{-βac-1}[1 − (1 + λx^{-δ})^{-βc}]^{b−1} dx.  (3.1)

Now let y = (1 + λx^{-δ})^{-1}; then x = (1 − y)^{-1/δ}(λy)^{1/δ}, and we have

E(X^k) = [cβλ^{k/δ}/B(a, b)] ∫_0^1 y^{βac + k/δ − 1}(1 − y)^{-k/δ}(1 − y^{βc})^{b−1} dy.  (3.2)

Using the fact that (1 − y^{βc})^{b−1} = Σ_{j=0}^∞ [(−1)^j Γ(b)/(Γ(b − j) j!)] y^{βcj} for |y^{βc}| < 1, and writing p_j = (−1)^j Γ(a + b)/(j! Γ(a) Γ(b − j)(a + j)), we obtain

E(X^k) = Σ_{j=0}^∞ p_j βc(a + j) λ^{k/δ} B(βc(a + j) + k/δ, 1 − k/δ),  δ > k.  (3.3)

We can obtain the kth incomplete moment of the Mc-Dagum distribution as follows:

E(X^k | X ≤ x) = E_{X≤x}(X^k; λ, β, δ, a, b, c)
= ∫_0^x u^k f(u) du
= ∫_0^x u^k Σ_{j=0}^∞ p_j g(u; λ, βc(a + j), δ) du
= Σ_{j=0}^∞ p_j ∫_0^x u^k g(u; λ, βc(a + j), δ) du
= Σ_{j=0}^∞ p_j βc(a + j) λ^{k/δ} B((1 + λx^{-δ})^{-1}; βc(a + j) + k/δ, 1 − k/δ),  (3.4)

for δ > k, where B(t; c_1, c_2) = ∫_0^t y^{c_1 − 1}(1 − y)^{c_2 − 1} dy is the incomplete beta function.

The mean residual life (MRL) function, denoted by μ(x; λ, β, δ, a, b, c) = μ(x), is given by

μ(x) = E[X − x | X ≥ x]
= [E(X) − E(X | X ≤ x)]/(1 − F(x)) − x
= {Σ_{j=0}^∞ p_j βc(a + j) λ^{1/δ} B(βc(a + j) + 1/δ, 1 − 1/δ) − Σ_{j=0}^∞ p_j βc(a + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; βc(a + j) + 1/δ, 1 − 1/δ)} / {1 − Σ_{j=0}^∞ p_j G(x; λ, βc(a + j), δ)} − x.  (3.5)
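The moment expressions above can be checked numerically. The following sketch (illustrative, not from the thesis) compares the truncated series (3.3) for E(X^k) with direct numerical integration of x^k f(x); the parameter values are arbitrary and satisfy δ > k.

```python
# Illustrative sketch: numerical check of the kth raw moment of the Mc-Dagum distribution.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn, binom

lam, delta, beta, a, b, c, k = 1.0, 4.0, 1.5, 2.0, 2.5, 1.2, 2   # illustrative values, delta > k

def mcdagum_pdf(x):
    A = 1.0 + lam * x**(-delta)
    return (c * beta * lam * delta * x**(-delta - 1) / beta_fn(a, b)
            * A**(-beta * a * c - 1) * (1.0 - A**(-c * beta))**(b - 1))

direct, _ = quad(lambda x: x**k * mcdagum_pdf(x), 0, np.inf)

# p_j written with a binomial coefficient: p_j = (-1)^j C(b-1, j) / (B(a,b)(a+j))
series = sum((-1)**j * binom(b - 1, j) / (beta_fn(a, b) * (a + j))
             * beta * c * (a + j) * lam**(k / delta)
             * beta_fn(beta * c * (a + j) + k / delta, 1 - k / delta)
             for j in range(200))
print(direct, series)   # the two values should agree closely
```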

3.2 Inequality Measures

Lorenz and Bonferroni curves are the most widely used inequality measures for income and wealth distributions (Kleiber, 2004). The Zenga curve was presented by Zenga in 2007. In this section, we derive the Lorenz, Bonferroni and Zenga curves for the Mc-Dagum distribution.

The Lorenz, Bonferroni and Zenga curves are defined by

L_F(x) = ∫_0^x t f(t) dt / E(X) = E_{X≤x}(X)/E(X),  (3.6)

B(F(x)) = ∫_0^x t f(t) dt / [F(x) E(X)] = E_{X≤x}(X)/[F(x) E(X)] = L_F(x)/F(x),  (3.7)

and

A(x) = 1 − μ^−(x)/μ^+(x),  (3.8)

respectively, where μ^−(x) = ∫_0^x t f(t) dt / F(x) = E_{X≤x}(X)/F(x) and μ^+(x) = ∫_x^∞ t f(t) dt / [1 − F(x)] = [E(X) − E_{X≤x}(X)]/[1 − F(x)] are the lower and upper means. Using these results, we obtain the curves for the Mc-Dagum distribution. The Lorenz curve for the Mc-Dagum distribution is given by

L_{F_G}(x; λ, β, δ, a, b, c) = Σ_{j=0}^∞ p_j βc(a + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; βc(a + j) + 1/δ, 1 − 1/δ) / Σ_{j=0}^∞ p_j βc(a + j) λ^{1/δ} B(βc(a + j) + 1/δ, 1 − 1/δ).  (3.9)

The Bonferroni curve for the Mc-Dagum distribution is given by

B(F_G(x; λ, β, δ, a, b, c)) = Σ_{j=0}^∞ p_j βc(a + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; βc(a + j) + 1/δ, 1 − 1/δ) / [Σ_{j=0}^∞ p_j G(x; λ, βc(a + j), δ) · Σ_{j=0}^∞ p_j βc(a + j) λ^{1/δ} B(βc(a + j) + 1/δ, 1 − 1/δ)].  (3.10)

The Zenga curve for the Mc-Dagum distribution is given by

A(x; λ, β, δ, a, b, c) = 1 − [E(X | X ≤ x)/F(x)] / {[E(X) − E(X | X ≤ x)]/[1 − F(x)]}
= 1 − (1 − F(x)) E[X | X ≤ x] / {F(x)[E(X) − E(X | X ≤ x)]},  (3.11)

where E[X | X ≤ x] = Σ_{j=0}^∞ p_j βc(a + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; βc(a + j) + 1/δ, 1 − 1/δ), E(X) = Σ_{j=0}^∞ p_j βc(a + j) λ^{1/δ} B(βc(a + j) + 1/δ, 1 − 1/δ), and F(x) = Σ_{j=0}^∞ p_j G(x; λ, βc(a + j), δ).
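For plotting or numerical work it is often convenient to evaluate these curves by direct numerical integration rather than through the series. The sketch below (illustrative, not from the thesis) computes the Lorenz and Bonferroni curves (3.6)–(3.7) for the Mc-Dagum distribution at a few points; parameter values are arbitrary.

```python
# Illustrative sketch: Lorenz and Bonferroni curves for the Mc-Dagum distribution by quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn
from scipy.stats import beta as beta_dist

lam, delta, beta, a, b, c = 1.0, 4.0, 1.5, 2.0, 2.5, 1.2   # illustrative values (delta > 1)

def pdf(x):
    A = 1.0 + lam * x**(-delta)
    return (c * beta * lam * delta * x**(-delta - 1) / beta_fn(a, b)
            * A**(-beta * a * c - 1) * (1.0 - A**(-c * beta))**(b - 1))

def cdf(x):
    return beta_dist.cdf((1.0 + lam * x**(-delta))**(-beta * c), a, b)

mean, _ = quad(lambda t: t * pdf(t), 0, np.inf)

def lorenz(x):
    lower, _ = quad(lambda t: t * pdf(t), 0, x)   # incomplete first moment E_{X<=x}(X)
    return lower / mean

def bonferroni(x):
    return lorenz(x) / cdf(x)

for x in (0.8, 1.0, 1.5, 2.5):
    print(x, cdf(x), lorenz(x), bonferroni(x))
```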

3.2.1 Inequality Measures for Some Submodels

For various submodels that we introduced in chapter 2, we can generate Lorenz,

Bonferroni and Zenga curves. Let ξ_1 = (λ, β, δ, a, b), ξ_2 = (λ, β, δ, b), ξ_3 = (λ, δ, a, b) and

E = Σ_{j=0}^∞ p_j β(a + j) λ^{1/δ} B(β(a + j) + 1/δ, 1 − 1/δ).

1. If c = 1, we obtain the Lorenz and Bonferroni curves for the beta-Dagum distribution:

L_{F_G}(x; ξ_1) = Σ_{j=0}^∞ p_j β(a + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; β(a + j) + 1/δ, 1 − 1/δ) / E,

and

B(F_G(x; ξ_1)) = Σ_{j=0}^∞ p_j β(a + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; β(a + j) + 1/δ, 1 − 1/δ) / [Σ_{j=0}^∞ p_j G(x; λ, β(a + j), δ) · E],

respectively.

2. If a = c = 1 and b > 0, the Lorenz and Bonferroni curves for the other beta-Dagum distribution, with parameters b, β, λ, δ, are given by

L_{F_G}(x; ξ_2) = Σ_{j=0}^∞ p_j β(1 + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; β(1 + j) + 1/δ, 1 − 1/δ) / Σ_{j=0}^∞ p_j β(1 + j) λ^{1/δ} B(β(1 + j) + 1/δ, 1 − 1/δ),

and

B(F_G(x; ξ_2)) = Σ_{j=0}^∞ p_j β(1 + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; β(1 + j) + 1/δ, 1 − 1/δ) / [Σ_{j=0}^∞ p_j G(x; λ, β(1 + j), δ) · E],

respectively.

3. If a = c = λ = 1, we obtain the Lorenz and Bonferroni curves for the beta-Burr III distribution with parameters b, β, δ, that is,

L_{F_G}(x; β, δ, b) = Σ_{j=0}^∞ p_j β(1 + j) B((1 + x^{-δ})^{-1}; β(1 + j) + 1/δ, 1 − 1/δ) / Σ_{j=0}^∞ p_j β(1 + j) B(β(1 + j) + 1/δ, 1 − 1/δ),

and

B(F_G(x; β, δ, b)) = Σ_{j=0}^∞ p_j β(1 + j) B((1 + x^{-δ})^{-1}; β(1 + j) + 1/δ, 1 − 1/δ) / [Σ_{j=0}^∞ p_j G(x; β(1 + j), δ) · E].

4. If c = β = 1, we obtain the Lorenz and Bonferroni curves for the beta-Fisk distribution with parameters a, b, λ, δ:

L_{F_G}(x; ξ_3) = Σ_{j=0}^∞ p_j (a + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; (a + j) + 1/δ, 1 − 1/δ) / E,

and

B(F_G(x; ξ_3)) = Σ_{j=0}^∞ p_j (a + j) λ^{1/δ} B((1 + λx^{-δ})^{-1}; (a + j) + 1/δ, 1 − 1/δ) / [Σ_{j=0}^∞ p_j G(x; λ, (a + j), δ) · E],

for a, b, λ, δ > 0.

3.3 Concluding Remarks

In this chapter, we presented the raw moments and the kth incomplete moments of the Mc-Dagum distribution. Inequality measures for the distribution were derived using the well-known Lorenz and Bonferroni curves, and the Zenga curve was also obtained. Lorenz and Bonferroni curves for some submodels of this class of distributions were also obtained.

CHAPTER 4
ENTROPY

In this chapter, we discuss the Renyi entropy, Shannon entropy and β̃-entropy for the Mc-Dagum distribution. The entropy of a random variable X is a measure of the variation of uncertainty.

4.1 Renyi and Shannon Entropy

For a pdf f(x), Renyi entropy (Renyi, 1961) is given by

H_R(f) = [1/(1 − s)] log(∫_0^∞ f^s(x) dx),  s > 0, s ≠ 1.  (4.1)

As s → 1, we obtain the Shannon entropy. Note that

f^s(x) = [(cβλδ)^s x^{-sδ-s}/B^s(a, b)] (1 + λx^{-δ})^{-βacs-s} [1 − (1 + λx^{-δ})^{-cβ}]^{bs-s}.  (4.2)

Using the substitution y = (1 + λx^{-δ})^{-1}, so that x = (1 − y)^{-1/δ}(λy)^{1/δ} and λδx^{-δ-1} dx = y^{-2} dy, together with the expansion (1 − ω)^{bs-s} = Σ_{j=0}^∞ [(−1)^j Γ(sb − s + 1)/(Γ(sb − s + 1 − j) j!)] ω^j, we obtain

∫_0^∞ f^s(x) dx = [(cβδ)^s λ^{(1−s)/δ}/(δ B^s(a, b))] Σ_{j=0}^∞ [(−1)^j Γ(sb − s + 1)/(Γ(sb − s + 1 − j) j!)] B(βcj + βacs + (1 − s)/δ, s + (s − 1)/δ).  (4.3)

Therefore, the Renyi entropy for the Mc-Dagum distribution is

H_R(f) = [1/(1 − s)] log{ [(cβδ)^s λ^{(1−s)/δ}/(δ B^s(a, b))] Σ_{j=0}^∞ [(−1)^j Γ(sb − s + 1)/(Γ(sb − s + 1 − j) j!)] B(βcj + βacs + (1 − s)/δ, s + (s − 1)/δ) },  (4.4)

for s > 0 and s ≠ 1. If bs − s is a positive integer, then the sum in the Renyi entropy stops at bs − s.
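The expression inside (4.4) can be checked numerically. The sketch below (illustrative, not from the thesis) compares the series for ∫ f^s(x) dx with direct numerical integration and then forms the Renyi entropy; the parameter values and s are arbitrary, and since sb − s is non-integer here the series is simply truncated.

```python
# Illustrative sketch: numerical check of the series expression for the integral of f^s.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn, binom

lam, delta, beta, a, b, c, s = 1.0, 3.0, 1.5, 2.0, 2.5, 1.2, 1.5   # illustrative values

def mcdagum_pdf(x):
    A = 1.0 + lam * x**(-delta)
    return (c * beta * lam * delta * x**(-delta - 1) / beta_fn(a, b)
            * A**(-beta * a * c - 1) * (1.0 - A**(-c * beta))**(b - 1))

direct, _ = quad(lambda x: mcdagum_pdf(x)**s, 0, np.inf)

prefactor = (c * beta * delta)**s * lam**((1 - s) / delta) / (delta * beta_fn(a, b)**s)
series = prefactor * sum((-1)**j * binom(s * b - s, j)
                         * beta_fn(beta * c * j + beta * a * c * s + (1 - s) / delta,
                                   s + (s - 1) / delta)
                         for j in range(300))

renyi = np.log(series) / (1.0 - s)
print(direct, series, renyi)   # 'direct' and 'series' should agree closely
```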

4.2 β̃-entropy

We also obtain the β̃-entropy for the Mc-Dagum density as follows:

H_β̃(f) = [1/(β̃ − 1)][1 − ∫_0^∞ f^{β̃}(x) dx],  if β̃ ≠ 1, β̃ > 0,
H_β̃(f) = E[−log f(X)],  if β̃ = 1.  (4.5)

Therefore, if β̃ ≠ 1, β̃ > 0,

H_β̃(f) = [1/(β̃ − 1)] { 1 − [(cβδ)^{β̃} λ^{(1−β̃)/δ}/(δ B^{β̃}(a, b))] Σ_{j=0}^∞ [(−1)^j Γ(β̃b − β̃ + 1)/(Γ(β̃b − β̃ + 1 − j) j!)] B(βcj + βacβ̃ + (1 − β̃)/δ, β̃ + (β̃ − 1)/δ) }.

4.3 Concluding Remarks

In this chapter, measures of uncertainty, including the Renyi, Shannon and β̃-entropies for the Mc-Dagum distribution, were presented.

CHAPTER 5
INFERENCE

5.1 Maximum Likelihood Estimates (MLE)

Let Θ = (λ, β, δ, a, b, c)^T. In order to estimate the parameters λ, β, δ, a, b and c of the Mc-Dagum distribution, we use the method of maximum likelihood estimation.

Let X_1, X_2, ..., X_n be a random sample from f(x; λ, β, δ, a, b, c). The log-likelihood function L(λ, β, δ, a, b, c) is:

L(λ, β, δ, a, b, c) = n log[cβλδ/B(a, b)] + log ∏_{i=1}^n x_i^{-δ-1} + log ∏_{i=1}^n (1 + λx_i^{-δ})^{-βac-1} + log ∏_{i=1}^n [1 − (1 + λx_i^{-δ})^{-cβ}]^{b−1}  (5.1)

= n log(c) + n log(β) + n log(λ) + n log(δ) − n log B(a, b) − (δ + 1) Σ_{i=1}^n log x_i − (βac + 1) Σ_{i=1}^n log[1 + λx_i^{-δ}] + (b − 1) Σ_{i=1}^n log[1 − (1 + λx_i^{-δ})^{-cβ}].  (5.2)

Differentiating L(λ, β, δ, a, b, c) with respect to each parameter λ, β, δ, a, b and c and setting the results equal to zero, we obtain the maximum likelihood estimates. The score function, consisting of the partial derivatives of L with respect to each parameter, is given by

U_n(Θ) = (∂L/∂λ, ∂L/∂β, ∂L/∂δ, ∂L/∂a, ∂L/∂b, ∂L/∂c),  (5.3)

where

∂L/∂λ = n/λ − (βac + 1) Σ_{i=1}^n x_i^{-δ}/(1 + λx_i^{-δ}) + (b − 1) Σ_{i=1}^n cβ(1 + λx_i^{-δ})^{-cβ-1} x_i^{-δ}/[1 − (1 + λx_i^{-δ})^{-cβ}],  (5.4)

∂L/∂β = n/β − ac Σ_{i=1}^n log(1 + λx_i^{-δ}) + c(b − 1) Σ_{i=1}^n (1 + λx_i^{-δ})^{-cβ} log(1 + λx_i^{-δ})/[1 − (1 + λx_i^{-δ})^{-cβ}],  (5.5)

∂L/∂δ = n/δ − Σ_{i=1}^n log x_i + λ(βac + 1) Σ_{i=1}^n x_i^{-δ} log(x_i)/(1 + λx_i^{-δ}) − λcβ(b − 1) Σ_{i=1}^n x_i^{-δ}(1 + λx_i^{-δ})^{-cβ-1} log x_i/[1 − (1 + λx_i^{-δ})^{-cβ}],  (5.6)

∂L/∂a = −n(ψ(a) − ψ(a + b)) − βc Σ_{i=1}^n log(1 + λx_i^{-δ}),  (5.7)

∂L/∂b = −n[ψ(b) − ψ(a + b)] + Σ_{i=1}^n log[1 − (1 + λx_i^{-δ})^{-cβ}],  (5.8)

where ψ(·) is the digamma function defined by ψ(x) = (d/dx) log Γ(x) = Γ′(x)/Γ(x), and

∂L/∂c = n/c − βa Σ_{i=1}^n log(1 + λx_i^{-δ}) + (b − 1)β Σ_{i=1}^n (1 + λx_i^{-δ})^{-cβ} log(1 + λx_i^{-δ})/[1 − (1 + λx_i^{-δ})^{-cβ}].  (5.9)

The MLEs of the parameters λ, β, δ, a, b and c, say λ̂, β̂, δ̂, â, b̂ and ĉ, are obtained by solving the equations ∂L/∂λ = ∂L/∂β = ∂L/∂δ = ∂L/∂a = ∂L/∂b = ∂L/∂c = 0. There is no closed-form solution to these equations, so a numerical technique such as the Newton-Raphson method must be applied.
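In practice the maximization can be carried out with any general-purpose optimizer. The sketch below (illustrative, not from the thesis) minimizes the negative log-likelihood (5.2) numerically with scipy; the data are simulated from a Dagum distribution via (1.9), the starting values are crude, and, because the Mc-Dagum shape parameters (β, a, c) are only weakly identified in small samples, the individual estimates should not be over-interpreted.

```python
# Illustrative sketch: numerical maximum likelihood for the Mc-Dagum model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def negloglik(theta, x):
    lam, beta, delta, a, b, c = np.exp(theta)   # log-parameterization keeps all parameters positive
    A = 1.0 + lam * x**(-delta)
    ll = (len(x) * (np.log(c) + np.log(beta) + np.log(lam) + np.log(delta) - betaln(a, b))
          - (delta + 1) * np.sum(np.log(x))
          - (beta * a * c + 1) * np.sum(np.log(A))
          + (b - 1) * np.sum(np.log1p(-A**(-c * beta))))
    return -ll

rng = np.random.default_rng(0)
u = rng.uniform(size=500)
x = (u**(-1.0 / 2.0) - 1.0)**(-1.0 / 3.0)        # Dagum(lam=1, delta=3, beta=2) sample via (1.9)
start = np.log([1.0, 1.0, 2.0, 1.0, 1.0, 1.0])   # crude starting values
fit = minimize(negloglik, start, args=(x,), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
print(np.exp(fit.x))                             # estimated (lam, beta, delta, a, b, c)
```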

5.2 Fisher Information Matrix

To obtain the Fisher information matrix (FIM), we derive the second partial derivatives and cross partial derivatives of L with respect to each parameter λ, β, δ, a, b and c as follows.

From equation (5.4) we obtain

∂²L/∂λ² = −n/λ² + (βac + 1) Σ_{i=1}^n x_i^{-2δ}/A_i² + (b − 1) Σ_{i=1}^n cβx_i^{-2δ} A_i^{-cβ-2}[A_i^{-cβ} − cβ − 1]/[1 − A_i^{-cβ}]²,  (5.10)

where A_i = 1 + λx_i^{-δ},

∂²L/∂λ∂β = −ac Σ_{i=1}^n x_i^{-δ}/A_i + (b − 1) Σ_{i=1}^n cx_i^{-δ} A_i^{-cβ-1}[cβ log(A_i) − 1 + A_i^{-cβ}]/[1 − A_i^{-cβ}]²,  (5.11)

∂²L/∂λ∂δ = −(βac + 1) Σ_{i=1}^n x_i^{-δ} log(x_i)/A_i² + (b − 1) Σ_{i=1}^n cβx_i^{-δ} A_i^{-cβ-1} log(x_i) B_i,  (5.12)

where B_i = [1 − λcβx_i^{-δ}A_i^{-1} − λx_i^{-δ}A_i^{-1} + λcβx_i^{-δ}A_i^{-cβ-1} − A_i^{-cβ}]/[1 − A_i^{-cβ}]²,

∂²L/∂λ∂a = −βc Σ_{i=1}^n x_i^{-δ}/(1 + λx_i^{-δ}),  (5.13)

∂²L/∂λ∂b = Σ_{i=1}^n cβ(1 + λx_i^{-δ})^{-cβ-1} x_i^{-δ}/[1 − (1 + λx_i^{-δ})^{-cβ}],  (5.14)

and

∂²L/∂λ∂c = −βa Σ_{i=1}^n x_i^{-δ}/A_i + (b − 1) Σ_{i=1}^n βx_i^{-δ} A_i^{-cβ-1}[cβ log A_i − 1 + A_i^{-cβ}]/[1 − A_i^{-cβ}]².  (5.15)

From equation (5.5), we obtain

∂²L/∂β² = −n/β² + c²(b − 1) Σ_{i=1}^n (1 + λx_i^{-δ})^{-cβ}[log(1 + λx_i^{-δ})]²/[1 − (1 + λx_i^{-δ})^{-cβ}]²,  (5.16)

∂²L/∂β∂δ = ac Σ_{i=1}^n C_i + (b − 1) Σ_{i=1}^n λcA_i^{-cβ-1} x_i^{-δ} log x_i [1 − cβ log A_i − A_i^{-cβ}]/[1 − A_i^{-cβ}]²,  (5.17)

where C_i = λx_i^{-δ} log(x_i)/A_i,

∂²L/∂β∂a = −c Σ_{i=1}^n log(1 + λx_i^{-δ}),  (5.18)

∂²L/∂β∂b = Σ_{i=1}^n c(1 + λx_i^{-δ})^{-cβ} log(1 + λx_i^{-δ})/[1 − (1 + λx_i^{-δ})^{-cβ}],  (5.19)

and

∂²L/∂β∂c = −a Σ_{i=1}^n log A_i + (b − 1) Σ_{i=1}^n A_i^{-cβ} log A_i [cβ log A_i + A_i^{-cβ} − 1]/[1 − A_i^{-cβ}]².  (5.20)

From equation (5.6), we obtain

∂²L/∂δ² = −n/δ² + λ(βac + 1) Σ_{i=1}^n x_i^{-δ}(log x_i)² A_i^{-cβ-1} D_i,  (5.21)

where D_i = [1 − λcβx_i^{-δ}(1 + λx_i^{-δ})^{-1} − λx_i^{-δ}(1 + λx_i^{-δ})^{-1} − (1 + λx_i^{-δ})^{-cβ} + λx_i^{-δ}(1 + λx_i^{-δ})^{-cβ-1}]/[1 − (1 + λx_i^{-δ})^{-cβ}]².

Also,

∂²L/∂δ∂a = λβc Σ_{i=1}^n x_i^{-δ} log x_i/(1 + λx_i^{-δ}),  (5.22)

∂²L/∂δ∂b = −λcβ Σ_{i=1}^n x_i^{-δ}(1 + λx_i^{-δ})^{-cβ-1} log x_i/[1 − (1 + λx_i^{-δ})^{-cβ}],  (5.23)

and

∂²L/∂δ∂c = λβc Σ_{i=1}^n F_i − λβ(b − 1) Σ_{i=1}^n A_i^{-cβ-1} x_i^{-δ} log x_i [cβ log A_i + A_i^{-cβ} − 1]/[1 − A_i^{-cβ}]²,  (5.24)

where F_i = x_i^{-δ} log x_i/(1 + λx_i^{-δ}).

From equation (5.7), we obtain

∂²L/∂a² = n[(ψ(a + b))² − Γ″(a + b)/Γ(a + b) − (ψ(a))² + Γ″(a)/Γ(a)],  (5.25)

∂²L/∂a∂b = n[(ψ(a + b))² − Γ″(a + b)/Γ(a + b)],  (5.26)

and

∂²L/∂a∂c = −β Σ_{i=1}^n log(1 + λx_i^{-δ}).  (5.27)

From equation (5.8), we obtain

∂²L/∂b² = n[(ψ(a + b))² − Γ″(a + b)/Γ(a + b) − (ψ(b))² + Γ″(b)/Γ(b)],  (5.28)

and

∂²L/∂b∂c = Σ_{i=1}^n β(1 + λx_i^{-δ})^{-cβ} log(1 + λx_i^{-δ})/[1 − (1 + λx_i^{-δ})^{-cβ}].  (5.29)

From equation (5.9), we obtain

∂²L/∂c² = −n/c² + β²(b − 1) Σ_{i=1}^n (1 + λx_i^{-δ})^{-cβ}[log(1 + λx_i^{-δ})]²/[1 − (1 + λx_i^{-δ})^{-cβ}]².  (5.30)

The Fisher information matrix for the Mc-Dagum distribution is

I(Θ) = I(λ, β, δ, a, b, c) =

⎛ I_λλ  I_λβ  I_λδ  I_λa  I_λb  I_λc ⎞
⎜ I_βλ  I_ββ  I_βδ  I_βa  I_βb  I_βc ⎟
⎜ I_δλ  I_δβ  I_δδ  I_δa  I_δb  I_δc ⎟   (5.31)
⎜ I_aλ  I_aβ  I_aδ  I_aa  I_ab  I_ac ⎟
⎜ I_bλ  I_bβ  I_bδ  I_ba  I_bb  I_bc ⎟
⎝ I_cλ  I_cβ  I_cδ  I_ca  I_cb  I_cc ⎠

where I_λλ = −E[∂²L/∂λ²], ..., I_cc = −E[∂²L/∂c²].

The elements of the 6 × 6 matrix I(λ, β, δ, a, b, c) can be approximated by the elements of the observed information matrix, where

I_ij(θ) = −E[∂²L/∂θ_i∂θ_j] ≈ −∂²L/∂θ_i∂θ_j.  (5.32)

Applying the usual large-sample approximation, the MLE of Θ, that is Θ̂, is approximately N_6(Θ, I_n^{-1}(Θ)), where I_n(Θ) is the 6 × 6 observed information matrix. Under the regularity conditions, with the parameters in the interior of the parameter space and not on the boundary, the asymptotic distribution of √n(Θ̂ − Θ) is N_6(0, I^{-1}(Θ)), where I(Θ) = lim_{n→∞} n^{-1} I_n(Θ).

Therefore, approximate 100(1 − α)% two-sided confidence intervals for λ, β, δ, a, b and c are given by

λ̂ ± z_{α/2} √(I_λλ^{-1}(θ̂)),   β̂ ± z_{α/2} √(I_ββ^{-1}(θ̂)),   δ̂ ± z_{α/2} √(I_δδ^{-1}(θ̂)),
â ± z_{α/2} √(I_aa^{-1}(θ̂)),   b̂ ± z_{α/2} √(I_bb^{-1}(θ̂)),   ĉ ± z_{α/2} √(I_cc^{-1}(θ̂)),

where z_{α/2} is the upper (α/2)th percentile of the standard normal distribution.
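Numerically, the observed information matrix in (5.32) can be approximated by a finite-difference Hessian of the negative log-likelihood at the MLE, from which Wald-type intervals follow. The sketch below is illustrative and not from the thesis; `negloglik_raw` and `theta_hat` are hypothetical names standing for the negative log-likelihood (in the original parameterization) and the fitted values from the estimation sketch in Section 5.1.

```python
# Illustrative sketch: Wald confidence intervals from a finite-difference observed information matrix.
import numpy as np
from scipy.stats import norm

def observed_information(nll, theta_hat, x, h=1e-4):
    """Central-difference Hessian of the negative log-likelihood (observed information)."""
    p = len(theta_hat)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            e_i, e_j = np.eye(p)[i] * h, np.eye(p)[j] * h
            H[i, j] = (nll(theta_hat + e_i + e_j, x) - nll(theta_hat + e_i - e_j, x)
                       - nll(theta_hat - e_i + e_j, x) + nll(theta_hat - e_i - e_j, x)) / (4 * h * h)
    return H

# Hypothetical usage, assuming negloglik_raw and theta_hat come from the Section 5.1 sketch:
# I_obs = observed_information(negloglik_raw, theta_hat, x)
# se = np.sqrt(np.diag(np.linalg.inv(I_obs)))
# z = norm.ppf(0.975)                      # 95% two-sided intervals
# lower, upper = theta_hat - z * se, theta_hat + z * se
```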

5.3 Concluding Remarks

In this chapter, we presented the log-likelihood function for the Mc-Dagum distribution and obtained the partial derivatives with respect to each parameter in order to estimate the model parameters. We noted that there are no closed-form estimates of the parameters, so numerical methods must be applied. We also obtained the Fisher information matrix, with elements

I_ij(θ) = −E[∂²L/∂θ_i∂θ_j] ≈ −∂²L/∂θ_i∂θ_j.  (5.33)

Finally, approximate confidence intervals for each parameter were given.

5.4 Future Work

In the future, we will investigate and obtain results on the Kumaraswamy-Dagum (Kum-Dagum) distribution that was introduced in Chapter 2. We will also work on obtaining estimates of the model parameters from the Bayesian viewpoint for both the Mc-Dagum and Kum-Dagum distributions, and conduct goodness-of-fit tests for these models.

BIBLIOGRAPHY

[1] Cordeiro, G.M., and Castro, M., A New Family of Generalized Distributions, Journal of Statistical Computation and Simulation, Vol. 00, No. 00, August (2009), 1-17.

[2] Cordeiro, G.M., Cintra, R.J., Rego, L.C., and Ortega, E.M.M., The McDonald Normal Distribution, Statistics in the Twenty-First Century: Special Volume, PJSOR, Vol. 8, No. 3, pages 301-329, July (2012).

[3] Dagum, C., A New Model of Personal Income Distribution, Economie Appliquee, 30, 413-437, (1977).

[4] Dagum, C., The Generation and Distribution of Income, the Lorenz Curve and the Gini Ratio, Economie Appliquee, XXXIII, pp. 327-367, (1980).

[5] Domma, F., and Condino, F., The Beta-Dagum Distribution: Definition and Properties, Communications in Statistics - Theory and Methods, in press, (2013).

[6] Domma, F., and Giordano, S., The Fisher Information Matrix in Right Censored Data from the Dagum Distribution, Department of Economics and Statistics, University of Calabria, Working Paper n. 04, (2011).

[7] Gradshteyn, I.S., and Ryzhik, I.M., Table of Integrals, Series and Products, Seventh Edition, Elsevier Inc.

[8] Kleiber, C., A Guide to the Dagum Distributions, A publication of the Center of Business and Economics (WWZ), University of Basel, Dec. 04, (2004).

[9] Kleiber, C., On the Lorenz Order within Parametric Families of Income Distributions, Sankhya: The Indian Journal of Statistics, Volume 61, Series B, Pt. 3, pp. 514-517, (1999).

[10] Kumaraswamy, P., A Generalized Probability Density Function for Double-Bounded Random Processes, Journal of Hydrology, 46, 79-88, (1980).

[11] Mala, I., Distribution of Income Per Capita of the Czech Households from 2005 to 2008, Journal of Applied Statistics, Volume IV (2011), Number III.

[12] Marciano, F.W.P., Nascimento, A.D.C., Santos-Neto, M., and Cordeiro, G.M., The Mc-Γ Distribution and Its Statistical Properties: An Application to Reliability Data, International Journal of Statistics and Probability, Vol. 1, No. 1, (2012).

[13] Renyi, A., On Measures of Entropy and Information, Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, 1(1), 547-561, (1961).