
Article

Generalized Mixtures of Exponential Distribution and Associated Inference

Yaoting Yang 1, Weizhong Tian 2,* and Tingting Tong 3

1 Department of Applied Mathematics, Xi'an University of Technology, Xi'an 710054, China; [email protected]
2 Department of Mathematical Sciences, Eastern New Mexico University, Portales, NM 88130, USA
3 Department of Mathematical Sciences, New Mexico State University, Las Cruces, NM 88003, USA; [email protected]
* Correspondence: [email protected]

Abstract: A new generalization of the exponential distribution, namely the generalized mixture of exponential distribution, is introduced. Some of its basic properties, such as the hazard function, moments, order statistics, mean deviation, measures of uncertainty, and reliability probability, are studied. Three estimation methods are investigated: the maximum likelihood estimator, the least-square estimator, and the weighted least-square estimator. The performance of the estimators is assessed by simulation studies. Real-world applications of the proposed distribution are explored, and the data-fitting results show that the new distribution performs better than its competitors.

Keywords: generalized mixture of exponential distribution; reliability probability; maximum likelihood estimator; weighted least-square estimator

1. Introduction

Among the parametric models, the exponential distribution is perhaps the most widely applied statistical distribution in several fields and plays an important role in the statistical theory of reliability and lifetime analysis. For this reason, statisticians have been interested in defining new classes of distributions by adding one or more shape parameters to provide greater flexibility in modeling real data in many applied fields. Gupta and Kundu [1] studied the generalized exponential distribution and used it as an alternative to the gamma or Weibull distribution in many situations. Gupta and Kundu [2] used the idea of Azzalini [3] to introduce a new class of weighted exponential (WE) distributions, and Kharazmi et al. [4] extended it into the generalized weighted exponential (GWE) distribution. Nadarajah and Haghighi [5] discussed a new two-parameter generalization of the exponential distribution, which has its mode at zero and allows increasing, decreasing, and constant hazard rates.

On the other hand, generalizations of exponentiated-type distributions can be obtained from the class of generalized beta distributions, in particular after the work of Eugene et al. [6]. Nadarajah and Kotz [7] introduced the beta exponential distribution, generated from the logit of a beta random variable. Barreto-Souza et al. [8] discussed the beta generalized exponential distribution, which includes the beta exponential and generalized exponential distributions as special cases. A generalization of the exponentiated Fréchet distribution, called the beta Fréchet distribution, was studied by Barreto-Souza et al. [9]. Ristic and Balakrishnan [10] proposed the gamma exponentiated exponential distribution generated by gamma random variables. Following a similar methodology, many X-family exponential distributions have been investigated recently. These include the Weibull exponential (WED) distribution introduced by Oguntunde et al. [11], the Marshall-Olkin generalized exponential distribution defined by Ristic and Kundu [12], the Kumaraswamy Marshall-Olkin exponential distribution given by George and Thobias [13],


and the generalized extended exponential-Weibull (GExtEW) distribution proposed by Shakhatreh et al. [14].

In this paper, a new class of generalized mixture exponential (GME) distributions is introduced, which has the exponential and WE distributions as submodels. In order to motivate interest, let us first present the definition of the generalized skew normal distribution introduced by Kumar and Anusree [15]. A random variable Z is said to have a generalized skew normal distribution if its probability density function (pdf) is of the following form,
\[
h(z; \lambda, \beta) = \frac{2}{2+\beta}\, f(z)\bigl(1 + \beta F(\lambda z)\bigr),
\]
where f(z) = φ(z), F(λz) = Φ(λz), λ ∈ ℝ, and β > −2. In fact, the correct range of β should be β ≥ −1, which has been discussed in Tian et al. [16].

The rest of the article is organized as follows. The GME distribution is introduced in Section 2. Some important properties of GME distributions, such as the cumulative distribution function (cdf), hazard function, mean deviations, order statistics, measures of uncertainty, and reliability probability, are discussed in Section 3. Three different estimation methods are studied in Section 4. Simulations are conducted to investigate and compare the performance of the proposed estimation methods in Section 5. Two real data sets are analyzed to illustrate the usefulness of the proposed GME distribution in Section 6. Some conclusions are presented in Section 7.

2. Generalized Mixture Exponential Distribution

The GME distribution offers a flexible family of distributions with applications in lifetime modeling, and it is defined as follows.

Definition 1. A random variable X is said to have a GME distribution if its pdf is of the following form,

\[
f(x; \lambda, \alpha, \beta) = \frac{(\alpha+1)\lambda}{\alpha+1+\alpha\beta}\, e^{-\lambda x}\left[1 + \beta\left(1 - e^{-\alpha\lambda x}\right)\right], \qquad x > 0, \tag{1}
\]

where λ > 0 is the scale parameter and α > 0 and β ≥ −1 are the shape parameters, and we denote this by X ∼ GME(λ, α, β).
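As a quick numerical illustration of Equation (1), here is a minimal R sketch of the GME density; the function name dgme is ours and not part of any package.

```r
# Sketch of the GME(lambda, alpha, beta) density in Equation (1);
# the name dgme is illustrative only.
dgme <- function(x, lambda, alpha, beta) {
  dens <- (alpha + 1) * lambda / (alpha + 1 + alpha * beta) *
    exp(-lambda * x) * (1 + beta * (1 - exp(-alpha * lambda * x)))
  ifelse(x > 0, dens, 0)  # the support is x > 0
}

# The density integrates to one for an admissible parameter choice:
integrate(dgme, 0, Inf, lambda = 2, alpha = 1, beta = 1.5)$value  # ~ 1
```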

Remark 1.
(i) For β = 0, or α → 0, or α → ∞, GME(λ, α, β) reduces to the exponential distribution with parameter λ, namely E(λ).
(ii) For β = −1, GME(λ, α, β) reduces to E(λ(α + 1)).
(iii) For β → ∞, GME(λ, α, β) reduces to WE(λ, α).

For different values of λ, α, and β, the pdfs of GME(λ, α, β) are presented in Figure 1, which indicates that the GME distribution can generate densities with various shapes.

Proposition 1. The cdf, the hazard function, and the survival function of X ∼ GME(λ, α, β) are given, respectively, by

\[
F(x;\lambda,\alpha,\beta) = 1 + \frac{\beta e^{-(\alpha+1)\lambda x}}{\alpha+1+\alpha\beta} - \frac{(\alpha+1)(\beta+1)e^{-\lambda x}}{\alpha+1+\alpha\beta},
\]
\[
h(x;\lambda,\alpha,\beta) = \frac{(\alpha+1)\lambda\left[1+\beta\left(1-e^{-\lambda\alpha x}\right)\right]}{(\alpha+1+\alpha\beta)+\beta\left(1-e^{-\lambda\alpha x}\right)},
\]
\[
S(x;\lambda,\alpha,\beta) = \frac{(\alpha+1)(\beta+1)e^{-\lambda x}}{\alpha+1+\alpha\beta} - \frac{\beta e^{-(\alpha+1)\lambda x}}{\alpha+1+\alpha\beta}.
\]
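A minimal R sketch implementing these three expressions follows; the function names pgme, sgme, and hgme are ours and not from any package.

```r
# cdf, survival and hazard functions of GME(lambda, alpha, beta)
# as given in Proposition 1; names are illustrative only.
pgme <- function(x, lambda, alpha, beta) {
  c0 <- alpha + 1 + alpha * beta
  1 + beta * exp(-(alpha + 1) * lambda * x) / c0 -
    (alpha + 1) * (beta + 1) * exp(-lambda * x) / c0
}
sgme <- function(x, lambda, alpha, beta) 1 - pgme(x, lambda, alpha, beta)
hgme <- function(x, lambda, alpha, beta) {
  c0  <- alpha + 1 + alpha * beta
  num <- (alpha + 1) * lambda * (1 + beta * (1 - exp(-lambda * alpha * x)))
  num / (c0 + beta * (1 - exp(-lambda * alpha * x)))
}

# Sanity check: F(0) = 0 and F(x) approaches 1 for large x
pgme(c(0, 20), lambda = 2, alpha = 1, beta = 1.5)
```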

Figure 1. The pdf curves for different parameters of GME(λ, α, β).

Proof of Proposition 1. According to Equation (1), we have
\[
F(x;\lambda,\alpha,\beta) = \frac{\lambda(\alpha+1)}{\alpha+1+\alpha\beta}\left[(1+\beta)\int_0^x e^{-\lambda t}\,dt - \beta\int_0^x e^{-\lambda(1+\alpha)t}\,dt\right]
= 1 - e^{-\lambda x} + \frac{\beta}{\alpha+1+\alpha\beta}\left[e^{-\lambda(1+\alpha)x} - e^{-\lambda x}\right].
\]
Therefore,
\[
h(x;\lambda,\alpha,\beta) = \frac{f(x;\lambda,\alpha,\beta)}{1-F(x;\lambda,\alpha,\beta)} = \frac{(\alpha+1)\lambda\left[1+\beta\left(1-e^{-\lambda\alpha x}\right)\right]}{(\alpha+1+\alpha\beta)+\beta\left(1-e^{-\lambda\alpha x}\right)},
\]
\[
S(x;\lambda,\alpha,\beta) = 1 - F(x;\lambda,\alpha,\beta) = \frac{(\alpha+1)(\beta+1)e^{-\lambda x}}{\alpha+1+\alpha\beta} - \frac{\beta e^{-(\alpha+1)\lambda x}}{\alpha+1+\alpha\beta}.
\]

This ends the proof of Proposition 1.

Figure 2 shows that the GME distribution produces flexible hazard rate shapes, such as decreasing, increasing, and constant.

Figure 2. The hazard function curves for different parameter values of GME(λ, α, β).

3. General Properties of the GME Distribution

In what follows, we discuss various properties associated with the proposed distribution.

Proposition 2. The shape of the density function of X ∼ GME(λ, α, β) can be characterized as follows:
(i) f(x) is monotone decreasing if −1 ≤ β ≤ 0 or 0 < αβ ≤ 1;
(ii) f(x) is unimodal if αβ > 1.

Proof of Proposition 2. Differentiating f(x) in Equation (1) gives
\[
f'(x) = \frac{\lambda^2(\alpha+1)(\beta+1)e^{-\lambda x}}{\alpha+1+\alpha\beta}\left[\frac{\beta(\alpha+1)}{\beta+1}\,e^{-\alpha\lambda x} - 1\right].
\]
(i) If −1 < β ≤ 0, the factor λ²(α+1)(β+1)e^{−λx}/(α+1+αβ) is positive and β(α+1) ≤ 0, so that
\[
\frac{\lambda^2(\alpha+1)(\beta+1)e^{-\lambda x}}{\alpha+1+\alpha\beta}\left[\frac{\beta(\alpha+1)}{\beta+1}\,e^{-\alpha\lambda x} - 1\right] < 0;
\]
for β = −1, GME(λ, α, β) reduces to E(λ(α+1)) by Remark 1(ii), whose density is decreasing. If 0 < αβ ≤ 1, we get e^{−αλx} < 1 and β(α+1)/(β+1) ≤ 1, so that β(α+1)e^{−αλx}/(β+1) − 1 < 0 and again f'(x) < 0. In either case, f(x) is monotone decreasing.
(ii) Setting f'(x) = 0 gives x₀ = −(1/(αλ)) log[(β+1)/(β+αβ)], which is positive when αβ > 1, and f''(x₀) < 0. Thus, f(x) is monotone increasing on 0 < x < x₀ and monotone decreasing on x > x₀, and hence f(x) is unimodal.

This ends the proof of Proposition 2.

Remark 2. The two cases in Proposition 2 are mutually exclusive.

Proposition 3. Let X ∼ GME(λ, α, β). The moment generating function of X is

\[
M_X(t) = \frac{\lambda(\alpha+1)}{\alpha+1+\alpha\beta}\left[\frac{1+\beta}{\lambda-t} - \frac{\beta}{\lambda\alpha+\lambda-t}\right], \qquad t < \lambda.
\]

Proof of Proposition 3. According to Equation (1) and the definition of the moment generating function,
\[
M_X(t) = \frac{\lambda(\alpha+1)}{\alpha+1+\alpha\beta}\left[(1+\beta)\int_0^{\infty} e^{(t-\lambda)x}\,dx - \beta\int_0^{\infty} e^{(t-\lambda-\lambda\alpha)x}\,dx\right]
= \frac{\lambda(\alpha+1)}{\alpha+1+\alpha\beta}\left[\frac{1+\beta}{\lambda-t} - \frac{\beta}{\lambda\alpha+\lambda-t}\right], \qquad t < \lambda.
\]
This ends the proof of Proposition 3.

Corollary 1. Let X ∼ GME(λ, α, β). The first four moments of X are
\[
E[X] = \frac{(1+\alpha)^2(1+\beta)-\beta}{\lambda(\alpha+1+\alpha\beta)(1+\alpha)}, \qquad
E[X^2] = \frac{2(1+\alpha)^3(1+\beta)-2\beta}{\lambda^2(\alpha+1+\alpha\beta)(1+\alpha)^2},
\]
\[
E[X^3] = \frac{6(1+\alpha)^4(1+\beta)-6\beta}{\lambda^3(\alpha+1+\alpha\beta)(1+\alpha)^3}, \qquad
E[X^4] = \frac{24(1+\alpha)^5(1+\beta)-24\beta}{\lambda^4(\alpha+1+\alpha\beta)(1+\alpha)^4}.
\]
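As a quick cross-check of the first expression, a short R sketch comparing the closed form of E[X] with numerical integration of x f(x); the parameter values are arbitrary.

```r
# Numerical verification of E[X] in Corollary 1 for one parameter choice
lambda <- 2; alpha <- 1; beta <- 1.5
f <- function(x) (alpha + 1) * lambda / (alpha + 1 + alpha * beta) *
  exp(-lambda * x) * (1 + beta * (1 - exp(-alpha * lambda * x)))

EX_numeric <- integrate(function(x) x * f(x), 0, Inf)$value
EX_closed  <- ((1 + alpha)^2 * (1 + beta) - beta) /
  (lambda * (alpha + 1 + alpha * beta) * (1 + alpha))
c(EX_numeric, EX_closed)  # both are about 0.607
```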

Proposition 4. Let X ∼ GME(λ, α, β) and µ = E[X], then the mean deviation about the mean of X is given by

\[
D(\mu) = \frac{2(\alpha+1+\alpha\beta+\beta)}{\lambda(\alpha+1+\alpha\beta)}e^{-\lambda\mu} - \frac{2\beta}{\lambda(\alpha+1+\alpha\beta)(1+\alpha)}e^{-\lambda\mu(1+\alpha)}.
\]

Proof of Proposition 4. According to Equation (1) and D(µ) = E[|X − µ|], with µ = E[X], we have
\[
\begin{aligned}
D(\mu) &= \int_0^{\mu}(\mu-x)f(x)\,dx + \int_{\mu}^{\infty}(x-\mu)f(x)\,dx \\
&= \mu\left[\int_0^{\mu}f(x)\,dx - \int_{\mu}^{\infty}f(x)\,dx\right] + \left[-\int_0^{\mu}xf(x)\,dx + \int_{\mu}^{\infty}xf(x)\,dx\right] \\
&= \mu\bigl(2F(\mu)-1\bigr) + \left[\mu - 2\int_0^{\mu}xf(x)\,dx\right] \\
&= \frac{2\beta}{\lambda(\alpha+1+\alpha\beta)(1+\alpha)}\left(1-e^{-\lambda\mu(1+\alpha)}\right) - \frac{2(\alpha+1+\alpha\beta+\beta)}{\lambda(\alpha+1+\alpha\beta)}\left(1-e^{-\lambda\mu}\right) + 2\mu \\
&= \frac{2(\alpha+1+\alpha\beta+\beta)}{\lambda(\alpha+1+\alpha\beta)}e^{-\lambda\mu} - \frac{2\beta}{\lambda(\alpha+1+\alpha\beta)(1+\alpha)}e^{-\lambda\mu(1+\alpha)}.
\end{aligned}
\]
This ends the proof of Proposition 4.

The entropy of a random variable is a measure of uncertainty, which is an important topic in fields such as communication theory and statistical physics. In the following, we study the entropy measures for X ∼ GME(λ, α, β).

Proposition 5. Let X ∼ GME(λ, α, β). The Rényi entropy of X, R_γ(x), is given by
\[
R_\gamma(x) = \frac{1}{1-\gamma}\left\{\gamma\log\!\left(\frac{\lambda(\alpha+1)}{\alpha+1+\alpha\beta}\right) + \left(\gamma+\frac{\gamma}{\alpha}\right)\log(1+\beta) - \frac{\gamma}{\alpha}\log(\beta) - \log(\lambda\alpha) + \log B\!\left(\frac{\beta}{1+\beta};\frac{\gamma}{\alpha},\gamma+1\right)\right\},
\]
for γ > 0 and γ ≠ 1, where B(z; a, b) = ∫_0^z u^{a−1}(1−u)^{b−1} du is the incomplete beta function. The Shannon entropy of X, S(x), is obtained as the limiting value of R_γ(x) as γ → 1.

Proof of Proposition 5. For any γ > 0 and γ ≠ 1, the Rényi entropy of X is defined as
\[
\begin{aligned}
R_\gamma(x) &= \frac{1}{1-\gamma}\log\int_0^{\infty} f^{\gamma}(x;\lambda,\beta,\alpha)\,dx \\
&= \frac{1}{1-\gamma}\log\left\{\left(\frac{\lambda(\alpha+1)}{\alpha+1+\alpha\beta}\right)^{\!\gamma}\int_0^{\infty} e^{-\lambda\gamma x}\left[1+\beta\left(1-e^{-\lambda\alpha x}\right)\right]^{\gamma}dx\right\} \\
&= \frac{1}{1-\gamma}\log\left\{\left(\frac{\lambda(\alpha+1)}{\alpha+1+\alpha\beta}\right)^{\!\gamma}(1+\beta)^{\gamma+\frac{\gamma}{\alpha}}\beta^{-\frac{\gamma}{\alpha}}\frac{1}{\lambda\alpha}\int_0^{\frac{\beta}{1+\beta}} u^{\frac{\gamma}{\alpha}-1}(1-u)^{\gamma}\,du\right\} \\
&= \frac{1}{1-\gamma}\left\{\gamma\log\!\left(\frac{\lambda(\alpha+1)}{\alpha+1+\alpha\beta}\right)+\left(\gamma+\frac{\gamma}{\alpha}\right)\log(1+\beta)-\frac{\gamma}{\alpha}\log(\beta)-\log(\lambda\alpha)+\log B\!\left(\frac{\beta}{1+\beta};\frac{\gamma}{\alpha},\gamma+1\right)\right\},
\end{aligned}
\]
where the third equality follows from the substitution u = βe^{−λαx}/(1+β). The Shannon entropy S(x) is the limiting value of R_γ(x) as γ → 1 and, thus, the results are obtained. This ends the proof of Proposition 5.

In the next proposition, we study the probability that one of two independent GME random variables exceeds the other, which is referred to as the reliability probability.

Proposition 6. Suppose that X and Y are two independent random variables following GME(λ, α, β). Then the reliability probability is given by
\[
P(X>Y) = \frac{(1+\alpha)^2(1+\beta)^2}{2(1+\alpha+\alpha\beta)^2} + \frac{\beta^2}{2(1+\alpha+\alpha\beta)^2} - \frac{\beta(1+\beta)(1+\alpha)^2}{(2+\alpha)(1+\alpha+\alpha\beta)^2} - \frac{\beta(1+\beta)(1+\alpha)}{(2+\alpha)(1+\alpha+\alpha\beta)^2}.
\]

Proof of Proposition 6. Let Z = Y − X and X = X. The joint density function of X and Z is obtained as
\[
\begin{aligned}
f(x,z;\lambda,\beta,\alpha) ={}& \frac{\lambda^2(1+\alpha)^2(1+\beta)^2}{(1+\alpha+\alpha\beta)^2}e^{-\lambda(2x+z)} - \frac{\lambda^2\beta(1+\beta)(1+\alpha)^2}{(1+\alpha+\alpha\beta)^2}e^{-\lambda[(1+\alpha)z+(2+\alpha)x]} \\
&- \frac{\lambda^2\beta(1+\beta)(1+\alpha)^2}{(1+\alpha+\alpha\beta)^2}e^{-\lambda[z+(2+\alpha)x]} + \frac{\lambda^2\beta^2(1+\alpha)^2}{(1+\alpha+\alpha\beta)^2}e^{-\lambda[(1+\alpha)z+(2+2\alpha)x]},
\end{aligned}
\]
for x > 0 and x + z > 0. Therefore, for z < 0, the marginal density function of Z is
\[
\begin{aligned}
f(z;\lambda,\beta,\alpha) ={}& \frac{\lambda(\alpha+1)^2(1+\beta)^2}{2(1+\alpha+\alpha\beta)^2}e^{\lambda z} + \frac{\lambda(1+\alpha)\beta^2}{2(1+\alpha+\alpha\beta)^2}e^{\lambda(1+\alpha)z} \\
&- \frac{\lambda\beta(1+\beta)(1+\alpha)^2}{(2+\alpha)(1+\alpha+\alpha\beta)^2}e^{\lambda z} - \frac{\lambda\beta(1+\beta)(1+\alpha)^2}{(2+\alpha)(1+\alpha+\alpha\beta)^2}e^{\lambda(1+\alpha)z}.
\end{aligned}
\]
Thus, the result is obtained from P(X > Y) = P(Z < 0) = ∫_{−∞}^{0} f(z; λ, β, α) dz. This ends the proof of Proposition 6.
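A short numerical sketch checking Proposition 6, using the identity P(X > Y) = ∫_0^∞ F(x) f(x) dx for independent copies X and Y; the parameter values are arbitrary and the helper names are ours.

```r
# Cross-check of the reliability probability in Proposition 6
lambda <- 3; alpha <- 10; beta <- 5
c0   <- alpha + 1 + alpha * beta
dens <- function(x) (alpha + 1) * lambda / c0 *
  exp(-lambda * x) * (1 + beta * (1 - exp(-alpha * lambda * x)))
cdf  <- function(x) 1 + beta * exp(-(alpha + 1) * lambda * x) / c0 -
  (alpha + 1) * (beta + 1) * exp(-lambda * x) / c0

p_numeric <- integrate(function(x) cdf(x) * dens(x), 0, Inf)$value
p_closed  <- (1 + alpha)^2 * (1 + beta)^2 / (2 * c0^2) +
  beta^2 / (2 * c0^2) -
  beta * (1 + beta) * (1 + alpha)^2 / ((2 + alpha) * c0^2) -
  beta * (1 + beta) * (1 + alpha) / ((2 + alpha) * c0^2)
c(p_numeric, p_closed)  # the two values agree
```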

Order statistics are fundamental tools in non-parametric statistics and inference. In what follows, we derive expressions for the density and distribution functions of the rth order statistic in a random sample of size n ≥ r from the GME distribution.

Proposition 7. Suppose X_1, X_2, ··· , X_n is a random sample from GME(λ, α, β), and let X_{1:n} ≤ X_{2:n} ≤ ··· ≤ X_{n:n} denote the corresponding order statistics. Then the pdf and cdf of the rth order statistic, X_{r:n}, 1 ≤ r ≤ n, are, respectively,
\[
\begin{aligned}
f_{r:n}(x) ={}& \frac{n!}{(r-1)!(n-r)!}\left[\frac{\beta}{\alpha+1+\alpha\beta}\left(e^{-\lambda(1+\alpha)x}-e^{-\lambda x}\right)-e^{-\lambda x}+1\right]^{r-1} \\
&\times\left[\frac{\beta}{\alpha+1+\alpha\beta}\left(e^{-\lambda x}-e^{-\lambda(1+\alpha)x}\right)+e^{-\lambda x}\right]^{n-r}\frac{(\alpha+1)\lambda}{\alpha+1+\alpha\beta}\,e^{-\lambda x}\left[1+\beta\left(1-e^{-\alpha\lambda x}\right)\right], \\
F_{r:n}(x) ={}& \sum_{l=r}^{n}\sum_{u=0}^{n-l}(-1)^u\binom{n}{l}\binom{n-l}{u}\left[\frac{\beta}{\alpha+1+\alpha\beta}\left(e^{-\lambda(1+\alpha)x}-e^{-\lambda x}\right)-e^{-\lambda x}+1\right]^{l+u}.
\end{aligned}
\]

Proof of Proposition 7. It is well known that the pdf and cdf of X_{r:n}, 1 ≤ r ≤ n, are given by
\[
f_{r:n}(x) = \frac{n!}{(r-1)!(n-r)!}[F(x)]^{r-1}[1-F(x)]^{n-r}f(x), \qquad
F_{r:n}(x) = \sum_{l=r}^{n}\binom{n}{l}[F(x)]^{l}[1-F(x)]^{n-l},
\]
respectively. Thus, the results follow directly from Equation (1) and Proposition 1. This ends the proof of Proposition 7.

Proposition 8. Let X ∼ GME(λ, α, β). The quantile function of the GME distribution, x_q, where 0 < q < 1, can be obtained by solving the following equation:
\[
\beta e^{-(\alpha+1)\lambda x_q} - (\alpha+1)(\beta+1)e^{-\lambda x_q} = (q-1)(1+\alpha+\alpha\beta). \tag{2}
\]
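Equation (2) can be solved numerically; a minimal R sketch using uniroot() is given below. The function name qgme is ours, and the cdf from Proposition 1 is re-implemented inline.

```r
# Numerical quantile function of GME(lambda, alpha, beta); qgme is an
# illustrative name, and 'upper' must be large enough to bracket the root.
qgme <- function(q, lambda, alpha, beta, upper = 1e3) {
  c0  <- alpha + 1 + alpha * beta
  cdf <- function(x) 1 + beta * exp(-(alpha + 1) * lambda * x) / c0 -
    (alpha + 1) * (beta + 1) * exp(-lambda * x) / c0
  uniroot(function(x) cdf(x) - q, lower = 0, upper = upper, tol = 1e-10)$root
}

qgme(0.5, lambda = 2, alpha = 1, beta = 1.5)  # median of GME(2, 1, 1.5)
```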

We can see from Equation (2) that there is no closed-form solution for x_q and, thus, numerical techniques, such as the uniroot() call sketched above, have to be used to obtain the quantile.

The mean residual life (MRL) function plays a very important role in reliability and many other applied fields. For a unit that has survived up to time t, the MRL represents the expected additional length of life.

Proposition 9. Let X ∼ GME(λ, α, β). The MRL function of the GME distribution, denoted by µ_X(t), is given by
\[
\mu_X(t) = \frac{(1+\alpha)^2(1+\beta)e^{-\lambda t} - \beta e^{-(1+\alpha)\lambda t}}{(1+\alpha)^2(1+\beta)\lambda e^{-\lambda t} - (1+\alpha)\lambda\beta e^{-(1+\alpha)\lambda t}}, \qquad t > 0.
\]

Proof of Proposition 9. For t > 0, we have
\[
\mu_X(t) = E(X-t \mid X>t) = \frac{\int_t^{\infty} S(x;\lambda,\alpha,\beta)\,dx}{S(t;\lambda,\alpha,\beta)}
= \frac{\displaystyle\int_t^{\infty}\left[\frac{(\alpha+1)(\beta+1)e^{-\lambda x}}{\alpha+1+\alpha\beta}-\frac{\beta e^{-(\alpha+1)\lambda x}}{\alpha+1+\alpha\beta}\right]dx}{S(t;\lambda,\alpha,\beta)},
\]
where S(·) is the survival function of the GME distribution. We know that
\[
\int_t^{\infty}\left[\frac{(\alpha+1)(\beta+1)e^{-\lambda x}}{\alpha+1+\alpha\beta}-\frac{\beta e^{-(\alpha+1)\lambda x}}{\alpha+1+\alpha\beta}\right]dx
= \frac{(1+\alpha)^2(1+\beta)e^{-\lambda t}-\beta e^{-(1+\alpha)\lambda t}}{(1+\alpha+\alpha\beta)(1+\alpha)\lambda}.
\]
Thus, the result is obtained. This ends the proof of Proposition 9.

4. Methods of Estimation

In this section, we consider the methods of maximum likelihood, least squares, and weighted least squares to estimate the unknown parameters, θ = (λ, α, β), of the GME distribution. Suppose x_1, x_2, ··· , x_n is a random sample from GME(λ, α, β).

4.1. Maximum Likelihood Estimator

The method of maximum likelihood is the most frequently used method for parameter estimation. According to Equation (1), the likelihood function is
\[
L(\lambda,\alpha,\beta \mid x_1,\ldots,x_n) = \left(\frac{(\alpha+1)\lambda}{\alpha+1+\alpha\beta}\right)^{\!n} e^{-\lambda\sum_{i=1}^{n}x_i}\prod_{i=1}^{n}\left[1+\beta\left(1-e^{-\alpha\lambda x_i}\right)\right].
\]

The log-likelihood function is given by

\[
\ell(\lambda,\alpha,\beta \mid x_1,\ldots,x_n) = n\bigl[\log(\alpha+1)+\log(\lambda)-\log(\alpha+1+\alpha\beta)\bigr] - \lambda\sum_{i=1}^{n}x_i + \sum_{i=1}^{n}\log\left[1+\beta\left(1-e^{-\alpha\lambda x_i}\right)\right]. \tag{3}
\]

We denote the first partial derivatives of (3) by `λ, `α and `β. Setting `λ = 0, `α = 0, and `β = 0, we have

\[
\begin{aligned}
\ell_{\lambda} &= \frac{n}{\lambda} - \sum_{i=1}^{n}x_i + \sum_{i=1}^{n}\frac{\beta\alpha x_i e^{-\alpha\lambda x_i}}{1+\beta\left(1-e^{-\alpha\lambda x_i}\right)} = 0, \\
\ell_{\alpha} &= \frac{n}{\alpha+1} - \frac{n(1+\beta)}{\alpha+1+\alpha\beta} + \sum_{i=1}^{n}\frac{\beta\lambda x_i e^{-\alpha\lambda x_i}}{1+\beta\left(1-e^{-\alpha\lambda x_i}\right)} = 0, \\
\ell_{\beta} &= -\frac{n\alpha}{\alpha+1+\alpha\beta} + \sum_{i=1}^{n}\frac{1-e^{-\alpha\lambda x_i}}{1+\beta\left(1-e^{-\alpha\lambda x_i}\right)} = 0.
\end{aligned}
\]

The maximum likelihood estimator (MLE) θ̂ of the unknown parameters θ can be obtained by optimizing the log-likelihood function with respect to the involved parameters. Due to the non-linearity of these equations, the MLEs of the parameters have to be obtained numerically; they can easily be computed with optimization functions in the statistical software R. The Fisher information is helpful for obtaining reference priors for the model parameters. In the following, we observe that the Fisher information matrix is given by
\[
I(\theta) = -E\begin{pmatrix} \ell_{\lambda\lambda} & \ell_{\lambda\alpha} & \ell_{\lambda\beta} \\ \ell_{\alpha\lambda} & \ell_{\alpha\alpha} & \ell_{\alpha\beta} \\ \ell_{\beta\lambda} & \ell_{\beta\alpha} & \ell_{\beta\beta} \end{pmatrix},
\]

where

\[
\begin{aligned}
\ell_{\lambda\lambda} &= -\frac{n}{\lambda^2} - \sum_{i=1}^{n}\frac{\beta(1+\beta)\alpha^2 x_i^2 e^{-\alpha\lambda x_i}}{\left[1+\beta\left(1-e^{-\alpha\lambda x_i}\right)\right]^2}, \\
\ell_{\alpha\alpha} &= -\frac{n}{(\alpha+1)^2} + \frac{n(1+\beta)^2}{(1+\alpha+\alpha\beta)^2} - \sum_{i=1}^{n}\frac{\beta(1+\beta)\lambda^2 x_i^2 e^{-\alpha\lambda x_i}}{\left[1+\beta\left(1-e^{-\alpha\lambda x_i}\right)\right]^2}, \\
\ell_{\beta\lambda} &= \sum_{i=1}^{n}\frac{\alpha x_i e^{-\alpha\lambda x_i}}{\left[1+\beta\left(1-e^{-\alpha\lambda x_i}\right)\right]^2} = \ell_{\lambda\beta}, \\
\ell_{\beta\alpha} &= \sum_{i=1}^{n}\frac{\lambda x_i e^{-\alpha\lambda x_i}}{\left[1+\beta\left(1-e^{-\alpha\lambda x_i}\right)\right]^2} = \ell_{\alpha\beta}, \\
\ell_{\lambda\alpha} &= \sum_{i=1}^{n}\frac{\beta(1+\beta)(1-\alpha\lambda x_i)x_i e^{-\alpha\lambda x_i} - \beta^2 x_i e^{-2\alpha\lambda x_i}}{\left[1+\beta\left(1-e^{-\alpha\lambda x_i}\right)\right]^2} = \ell_{\alpha\lambda}, \\
\ell_{\beta\beta} &= \frac{n\alpha^2}{(1+\alpha+\alpha\beta)^2} - \sum_{i=1}^{n}\frac{\left(1-e^{-\alpha\lambda x_i}\right)^2}{\left[1+\beta\left(1-e^{-\alpha\lambda x_i}\right)\right]^2}.
\end{aligned}
\]
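In practice, the MLEs can be obtained by minimizing the negative of the log-likelihood (3) with a general-purpose optimizer. The R sketch below uses optim(); it is one possible implementation, not the authors' code, and the data vector x in the usage comments is a placeholder.

```r
# Negative log-likelihood of GME(lambda, alpha, beta), cf. Equation (3)
negloglik <- function(par, x) {
  lambda <- par[1]; alpha <- par[2]; beta <- par[3]
  if (lambda <= 0 || alpha <= 0 || beta < -1) return(Inf)  # parameter space
  n <- length(x)
  -(n * (log(alpha + 1) + log(lambda) - log(alpha + 1 + alpha * beta)) -
      lambda * sum(x) +
      sum(log(1 + beta * (1 - exp(-alpha * lambda * x)))))
}

# Hypothetical usage for a data vector x (starting values are arbitrary):
# fit <- optim(c(1, 1, 0.5), negloglik, x = x, hessian = TRUE)
# fit$par             # MLEs of (lambda, alpha, beta)
# solve(fit$hessian)  # approximate covariance from the observed information
```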

4.2. Least-Square Estimator

Suppose F(x(j)) denotes the distribution function of the ordered random variables x(1) < ··· < x(n). Denote the following function

\[
h(\lambda,\alpha,\beta) = \sum_{i=1}^{n}\left[F(x_{(i)};\lambda,\alpha,\beta) - \frac{i}{n+1}\right]^2, \tag{4}
\]

where
\[
F(x;\lambda,\alpha,\beta) = \frac{\beta}{\alpha+1+\alpha\beta}\left[e^{-\lambda(1+\alpha)x} - e^{-\lambda x}\right] - e^{-\lambda x} + 1,
\]
and the least-square estimator (LS) of θ can be obtained by minimizing h(λ, α, β). Therefore, θ̂ can be obtained by solving the following equations,

\[
\begin{aligned}
\frac{\partial h(\lambda,\alpha,\beta)}{\partial\lambda} &= \sum_{i=1}^{n}2Q_i\left\{\frac{\beta}{\alpha+1+\alpha\beta}\bigl[-(1+\alpha)C_i+B_i\bigr]+B_i\right\}=0, \\
\frac{\partial h(\lambda,\alpha,\beta)}{\partial\alpha} &= \sum_{i=1}^{n}2Q_i\left\{\frac{-\beta(1+\beta)}{(\alpha+1+\alpha\beta)^2}\bigl[e^{-\lambda(1+\alpha)x_{(i)}}-e^{-\lambda x_{(i)}}\bigr]-\frac{\beta\lambda}{\alpha+1+\alpha\beta}\,C_i\right\}=0, \\
\frac{\partial h(\lambda,\alpha,\beta)}{\partial\beta} &= \sum_{i=1}^{n}2Q_i\left\{\frac{1+\alpha}{(\alpha+1+\alpha\beta)^2}\bigl[e^{-\lambda(1+\alpha)x_{(i)}}-e^{-\lambda x_{(i)}}\bigr]\right\}=0,
\end{aligned}
\]

where
\[
Q_i = \frac{\beta}{\alpha+1+\alpha\beta}\left[e^{-\lambda(1+\alpha)x_{(i)}} - e^{-\lambda x_{(i)}}\right] - e^{-\lambda x_{(i)}} + 1 - \frac{i}{n+1}, \qquad
B_i = x_{(i)}e^{-\lambda x_{(i)}}, \qquad
C_i = x_{(i)}e^{-\lambda(1+\alpha)x_{(i)}}.
\]

4.3. Weighted Least-Square Estimator

The weighted least-square estimator (WLS) is an extension of the LS estimator and was proposed by Swain et al. [17]. The WLS is obtained by minimizing the function

\[
W(\lambda,\alpha,\beta) = \sum_{i=1}^{n}\frac{(n+1)^2(n+2)}{i(n-i+1)}\left[F(x_{(i)};\lambda,\alpha,\beta) - \frac{i}{n+1}\right]^2,
\]

where the function F(·) is as given in Equation (4). Therefore, the WLS of θ can be obtained by solving

\[
\begin{aligned}
\frac{\partial W(\lambda,\alpha,\beta)}{\partial\lambda} &= \sum_{i=1}^{n}\frac{2(n+1)^2(n+2)}{i(n-i+1)}\,Q_i\left\{\frac{\beta}{\alpha+1+\alpha\beta}\bigl[-(1+\alpha)C_i+B_i\bigr]+B_i\right\}=0, \\
\frac{\partial W(\lambda,\alpha,\beta)}{\partial\alpha} &= \sum_{i=1}^{n}\frac{2(n+1)^2(n+2)}{i(n-i+1)}\,Q_i\left\{\frac{-\beta(1+\beta)}{(\alpha+1+\alpha\beta)^2}\bigl[e^{-\lambda(1+\alpha)x_{(i)}}-e^{-\lambda x_{(i)}}\bigr]-\frac{\beta\lambda}{\alpha+1+\alpha\beta}\,C_i\right\}=0, \\
\frac{\partial W(\lambda,\alpha,\beta)}{\partial\beta} &= \sum_{i=1}^{n}\frac{2(n+1)^2(n+2)}{i(n-i+1)}\,Q_i\left\{\frac{1+\alpha}{(\alpha+1+\alpha\beta)^2}\bigl[e^{-\lambda(1+\alpha)x_{(i)}}-e^{-\lambda x_{(i)}}\bigr]\right\}=0,
\end{aligned}
\]

where Qi, Bi and Ci, i = 1, ··· , n, are defined as above.
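In practice, the LS and WLS estimates can be obtained by minimizing h(λ, α, β) and W(λ, α, β) directly with a numerical optimizer rather than solving these equations. The R sketch below uses optim(), with the weights switched off for ordinary least squares; it is one possible implementation under these assumptions, not the authors' code, and the data vector x is a placeholder.

```r
# LS / WLS objective for GME(lambda, alpha, beta); cf. Equation (4) and W(.)
wls_objective <- function(par, x, weighted = TRUE) {
  lambda <- par[1]; alpha <- par[2]; beta <- par[3]
  if (lambda <= 0 || alpha <= 0 || beta < -1) return(Inf)
  n  <- length(x)
  xs <- sort(x)                 # ordered observations x_(1) <= ... <= x_(n)
  i  <- seq_len(n)
  c0 <- alpha + 1 + alpha * beta
  Fi <- beta / c0 * (exp(-lambda * (1 + alpha) * xs) - exp(-lambda * xs)) -
    exp(-lambda * xs) + 1
  w  <- if (weighted) (n + 1)^2 * (n + 2) / (i * (n - i + 1)) else 1
  sum(w * (Fi - i / (n + 1))^2)
}

# Hypothetical usage for a data vector x:
# wls_fit <- optim(c(1, 1, 0.5), wls_objective, x = x, weighted = TRUE)$par
# ls_fit  <- optim(c(1, 1, 0.5), wls_objective, x = x, weighted = FALSE)$par
```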

5. Simulation Studies

In this section, we assess the performance of the estimation methods proposed in the previous section by conducting several simulations for different sample sizes and parameter values. As indicated in Proposition 1, the cdf F(x; λ, α, β) is used to generate pseudo-random numbers from the GME distribution. This technique is called the inverse transform method and consists of the following steps:
(i) Generate a random number u from the standard uniform distribution on the interval [0, 1].
(ii) Apply numerical techniques to solve the equation F(x) = u for the given λ, α, β.
We take sample sizes n = 50, 100, 200, 300, 400, 500, 1000 for each simulation, and each sample is replicated N = 1000 times. The parameter values θ = (1, 5, −0.5), (2, 1, 1.5), and (3, 10, 5) are considered. All the results were computed using R. The evaluation of the estimators is performed based on the average bias and the standard error (SE) of each single parameter, where Bias(θ̂_j) = (1/N)∑_{i=1}^{N}(θ̂_j^{(i)} − θ_j), SE(θ̂_j) = [(1/N)∑_{i=1}^{N}(θ̂_j^{(i)} − θ_j)²]^{1/2}, and θ_j is the jth component of θ. Moreover, the overall bias and mean squared error (MSE) of θ̂ are also considered, where Bias(θ̂) = ∑_{j=1}^{3} Bias(θ̂_j), MSE(θ̂) = (1/N)∑_{i=1}^{N}||θ̂^{(i)} − θ||², and ||·|| is the Euclidean norm. The simulation results for the different scenarios are given in Tables 1–3.
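A minimal R sketch of the inverse-transform generator in steps (i) and (ii) above; the function name rgme is ours, and uniroot() is used for step (ii).

```r
# Inverse-transform sampling from GME(lambda, alpha, beta)
rgme <- function(n, lambda, alpha, beta, upper = 1e3) {
  c0  <- alpha + 1 + alpha * beta
  cdf <- function(x) 1 + beta * exp(-(alpha + 1) * lambda * x) / c0 -
    (alpha + 1) * (beta + 1) * exp(-lambda * x) / c0
  u <- runif(n)                               # step (i)
  sapply(u, function(ui)                      # step (ii): solve F(x) = u
    uniroot(function(x) cdf(x) - ui, lower = 0, upper = upper,
            tol = 1e-10)$root)
}

set.seed(1)
x <- rgme(200, lambda = 2, alpha = 1, beta = 1.5)
mean(x)  # close to E[X] from Corollary 1 (about 0.61 for these parameters)
```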

Table 1. λ = 1, α = 5, β = −0.5.

Sample Size  Method  λˆ (SE)           αˆ (SE)            βˆ (SE)             MSE
n = 50       MLE     0.9337 (0.3743)   5.0838 (2.4378)    −0.1080 (1.7229)    1.5273
             LS      0.9747 (0.2605)   4.7614 (2.3440)    −0.4964 (0.2255)    0.9491
             WLS     0.9162 (0.2344)   5.0376 (1.1337)    −0.4444 (0.7600)    0.7204
n = 100      MLE     0.8927 (0.3087)   5.7791 (3.1581)    −0.5004 (0.8806)    1.4852
             LS      1.0136 (0.2050)   4.6547 (2.2373)    −0.4550 (0.2004)    0.8934
             WLS     0.9433 (0.1481)   4.6418 (0.9328)    −0.4858 (0.4072)    0.5307
n = 200      MLE     0.9407 (0.2054)   5.3528 (2.7663)    −0.5606 (0.2314)    1.0774
             LS      0.9954 (0.1574)   4.7508 (2.2406)    −0.4717 (0.1863)    0.8692
             WLS     0.9646 (0.1087)   4.9761 (0.9436)    −0.49608 (0.1881)   0.4350
n = 300      MLE     0.9780 (0.1157)   4.9851 (2.1748)    −0.5229 (0.1454)    0.8116
             LS      1.0224 (0.1291)   5.4474 (2.1971)    −0.4555 (0.1605)    0.8504
             WLS     0.9654 (0.0982)   4.5041 (0.8171)    −0.5127 (0.1298)    0.4120
n = 400      MLE     0.9731 (0.1137)   4.0055 (1.4628)    −0.5137 (0.1432)    0.6742
             LS      1.0158 (0.1180)   5.1742 (1.9422)    −0.4639 (0.1519)    0.7472
             WLS     0.9980 (0.0733)   5.0675 (0.9835)    −0.4948 (0.1078)    0.4018
n = 500      MLE     0.9841 (0.0970)   4.8382 (1.9489)    −0.5135 (0.1202)    0.7242
             LS      1.0023 (0.1071)   4.9641 (1.7818)    −0.4779 (0.1345)    0.6818
             WLS     0.9821 (0.0610)   4.9304 (0.9001)    −0.5062 (0.0849)    0.3783
n = 1000     MLE     0.9938 (0.0705)   4.9315 (1.6172)    −0.5020 (0.0932)    0.5937
             LS      1.0060 (0.0763)   4.9750 (1.7033)    −0.4797 (0.1029)    0.6367
             WLS     0.9894 (0.0501)   4.8743 (0.8206)    −0.5019 (0.0635)    0.3545

Table 2. λ = 2, α = 1, β = 1.5.

Sample Size  Method  λˆ (SE)           αˆ (SE)            βˆ (SE)            MSE
n = 50       MLE     1.4458 (0.9097)   1.2917 (0.5589)    1.6460 (3.2067)    1.6314
             LS      1.8709 (0.3747)   1.4399 (0.5717)    2.0283 (2.1736)    1.1183
             WLS     1.9537 (0.3217)   0.9440 (0.6706)    2.2223 (1.8281)    0.9885
n = 100      MLE     1.7026 (0.7886)   1.3034 (0.5729)    1.9781 (2.6490)    1.3911
             LS      1.9554 (0.2942)   1.2025 (0.6341)    2.3221 (2.0580)    0.9073
             WLS     1.9519 (0.2279)   0.9008 (0.6066)    1.9902 (1.6569)    0.8606
n = 200      MLE     1.7535 (0.6525)   1.3555 (0.5795)    1.6087 (1.8850)    1.0782
             LS      1.9669 (0.2506)   1.1475 (0.6229)    1.9986 (1.5482)    0.7287
             WLS     1.9578 (0.1269)   1.0596 (0.4735)    1.5661 (0.6906)    0.4392
n = 300      MLE     1.8438 (0.5467)   1.2813 (0.5962)    1.6082 (1.5177)    0.9151
             LS      1.8927 (0.1585)   1.6745 (0.4603)    1.3887 (0.8766)    0.5973
             WLS     1.9835 (0.1121)   1.0563 (0.4295)    1.5963 (0.6744)    0.4141
n = 400      MLE     1.8684 (0.4775)   1.3238 (0.5647)    1.5057 (1.2785)    0.8075
             LS      1.9413 (0.1308)   1.3760 (0.5588)    1.5542 (0.8849)    0.5718
             WLS     1.9638 (0.0916)   1.1273 (0.3971)    1.4569 (0.3923)    0.3095
n = 500      MLE     1.9002 (0.4561)   1.2733 (0.5697)    1.6020 (1.1967)    0.7662
             LS      2.0143 (0.1602)   1.0437 (0.5819)    1.9070 (0.9159)    0.5142
             WLS     1.9573 (0.0739)   1.0416 (0.3624)    1.5810 (0.5662)    0.3489
n = 1000     MLE     1.9189 (0.3525)   1.2974 (0.5674)    1.5166 (0.8654)    0.6228
             LS      1.9928 (0.1392)   1.1979 (0.6193)    1.7116 (0.7494)    0.5268
             WLS     1.9700 (0.0613)   1.1250 (0.2917)    1.4208 (0.2599)    0.2280

Table 3. λ = 3, α = 10, β = 5.

Sample Size  Method  λˆ (SE)           αˆ (SE)             βˆ (SE)            MSE
n = 50       MLE     3.0424 (0.4181)   9.7778 (2.4919)     5.6122 (3.8162)    2.2299
             LS      3.0549 (0.5084)   9.9521 (2.3594)     4.4837 (4.1413)    2.3605
             WLS     3.0281 (0.4395)   10.1081 (2.0774)    5.0283 (3.9031)    2.1545
n = 100      MLE     3.0380 (0.3025)   9.8275 (2.4712)     5.5335 (3.4914)    2.0887
             LS      3.0416 (0.3537)   9.8668 (2.2719)     5.0004 (4.0233)    2.2335
             WLS     2.9993 (0.3392)   10.1385 (2.0807)    4.6623 (3.5851)    2.0216
n = 200      MLE     3.0172 (0.2143)   9.5225 (2.1802)     5.4294 (3.2236)    1.9055
             LS      3.0208 (0.2368)   9.7251 (2.2322)     4.9982 (3.6205)    2.0550
             WLS     3.0072 (0.2337)   10.3121 (2.2217)    4.6433 (2.9485)    1.8293
n = 300      MLE     3.0239 (0.1718)   9.5911 (2.2102)     5.5706 (3.1488)    1.8680
             LS      3.0251 (0.2052)   9.5669 (1.9457)     5.3041 (3.7496)    2.0098
             WLS     3.0066 (0.1789)   10.052 (2.0334)     4.6414 (2.7485)    1.6785
n = 400      MLE     3.0222 (0.1455)   9.3730 (1.9607)     5.4066 (2.9740)    1.7391
             LS      3.0285 (0.1623)   9.5468 (2.1307)     5.1461 (3.3433)    1.9214
             WLS     3.0104 (0.1522)   9.9920 (1.9927)     4.6140 (2.4915)    1.5730
n = 500      MLE     3.0166 (0.1405)   9.3607 (1.9663)     5.3163 (2.8919)    1.6961
             LS      3.0163 (0.1561)   9.4042 (1.9839)     4.9983 (3.4092)    1.9053
             WLS     3.0071 (0.1374)   10.0114 (1.9581)    4.7762 (2.4242)    1.5292
n = 1000     MLE     3.0139 (0.0984)   9.3293 (1.1715)     5.1518 (2.4282)    1.4799
             LS      3.0235 (0.1120)   9.2747 (1.8501)     5.1674 (3.4079)    1.8668
             WLS     3.0136 (0.1032)   9.7242 (1.6997)     4.9825 (2.2380)    1.3759

From Tables 1–3, we find that the SEs of all three estimators decrease as the sample size n increases, and all estimators become more accurate when n is large. In addition, the estimates obtained by the three estimators are close to the true values. Furthermore, the plots of the bias of the estimators of λ, α, and β against the sample size n are shown in Figures 3–5, respectively.


Figure 3. Bias of estimator λˆ versus sample size n for different scenarios.

From Figures 3–5, we observe that the magnitude of the bias of all estimators tends to zero as n grows, which indicates that these estimators are asymptotically unbiased and consistent for the parameters. Thus, these estimation techniques perform well for estimating the parameters of the GME distribution. For further study, we draw the plots of the overall bias and MSE of θ̂ in Figures 6 and 7. As n increases, the bias of θ̂ tends to zero, and the WLS always has the smallest MSE, while the LS estimator has the largest MSE among the three considered estimators. Thus, we can conclude that the WLS can be chosen as a more reliable estimator for the GME distribution.

Figure 4. Bias of estimator αˆ versus sample size n for different scenarios.

Figure 5. Bias of estimator βˆ versus sample size n for different scenarios.

Figure 6. Bias of estimator θˆ versus sample size n for different scenarios.

Figure 7. MSE of estimator θˆ versus sample size n for different scenarios.

6. Real Data Analysis

In this section, we use the weighted least-square estimator to analyze two real data sets in order to investigate the advantages of the proposed GME distribution, and we compare it with some other distributions, including the exponential (E), WED, GExtEW, and GWE distributions, whose pdfs are given as follows.

(1) Exponential distribution: E(λ)

\[
f_E(x) = \lambda e^{-\lambda x}, \qquad x \ge 0,\ \lambda > 0.
\]

(2) Weibull exponential distribution: WED(λ, α, β)

\[
f_{WED}(x) = \alpha\beta\lambda e^{-\lambda x}\,\frac{\left(1-e^{-\lambda x}\right)^{\beta-1}}{\left(e^{-\lambda x}\right)^{\beta+1}}\exp\left[-\alpha\left(\frac{1-e^{-\lambda x}}{e^{-\lambda x}}\right)^{\!\beta}\right], \qquad x>0,\ \alpha,\beta,\lambda>0.
\]

(3) Generalized extended exponential-Weibull distribution: GExtEW(λ, α, β, r, c)

\[
f_{GExtEW}(x) = c\alpha\left(r\beta x^{r-1}+\lambda\right)\left(\beta x^{r}+\lambda x\right)^{c-1}e^{-\left(\beta x^{r}+\lambda x\right)^{c}}\left[1-e^{-\left(\beta x^{r}+\lambda x\right)^{c}}\right]^{\alpha-1}, \qquad x>0,
\]
with α, β, λ, c > 0 and r ∈ (0, ∞) \ {1}.

(4) Generalized weighted exponential distribution: GWE(λ, α, k)

\[
f_{GWE}(x) = \frac{\alpha}{B(1/\alpha,\,k+1)}\,\lambda e^{-\lambda x}\left(1-e^{-\lambda\alpha x}\right)^{k}, \qquad x>0,\ \alpha,\lambda>0,\ k\in\mathbb{Z}^{+}.
\]

6.1. Data Set 1: Waiting Times

This data set represents the waiting times (in minutes) before service of 100 bank customers and has been previously used by Ghitany et al. [18]. It is listed in Appendix A.1 of Appendix A. Table 4 shows the parameter estimates of the GME, E, WED, GExtEW, and GWE distributions for these data. The corresponding minus log-likelihood, Akaike information criterion (AIC), and Bayesian information criterion (BIC) are also presented. From Table 4, we find that the GME distribution has the smallest AIC and BIC values among all the fitted distributions.
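For reference, the AIC and BIC entries can be reproduced from the reported minimized negative log-likelihoods via AIC = 2k − 2ℓ and BIC = k log(n) − 2ℓ, where k is the number of parameters and n the sample size. A short R sketch, assuming the -loglike row of Table 4 reports −ℓ:

```r
# Information criteria from a maximized log-likelihood
aic_bic <- function(loglik, k, n) {
  c(AIC = 2 * k - 2 * loglik, BIC = k * log(n) - 2 * loglik)
}

# e.g. the GME column of Table 4 (k = 3 parameters, n = 100 waiting times):
aic_bic(loglik = -317.3592, k = 3, n = 100)  # approx. 640.72 and 648.53
```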

Table 4. Data set 1: Comparison between the GME, E, WED, GExtEW, and GWE by using different criteria.

          GME(λ, α, β)   E(λ)        WED(λ, α, β)   GExtEW(λ, α, β, r, c)   GWE(λ, α, k)
λ         0.1554         0.0883      0.0234         0.0807                  0.1284
α         0.8202         -           5.3703         3.9509                  6.2681
β         124.9639       -           1.3486         0.0812                  -
r         -              -           -              1.7231                  -
c         -              -           -              0.4711                  -
k         -              -           -              -                       4
-loglike  317.3592       329.9063    321.8420       317.1620                318.4602
AIC       640.7184       661.8126    649.6839       644.3241                642.9205
BIC       648.5339       664.4177    657.4995       657.3499                650.7360

Figure 8 shows the fitted models for data set 1. The first panel of Figure 8 shows the histogram of the data together with the fitted densities, and the second panel displays the empirical distribution function of the data along with the fitted cdfs. Both panels reveal that the GME distribution provides an adequate fit to the data set.

Figure 8. Fitted pdfs and the relative histogram, empirical and fitted cdfs.

6.2. Data Set 2: Survival Times

The second data set represents the survival times of 121 patients with breast cancer obtained from a large hospital in the period from 1929 to 1938. This data set has recently been studied by Lee [19] and Tahir et al. [20], and it is listed in Appendix A.2 of Appendix A. We compare the GME distribution with the E, WED, GExtEW, and GWE distributions. The estimated parameter values and the AIC and BIC statistics of these distributions are listed in Table 5. It can be seen that the GME distribution provides the best fit among these competing models. Figure 9 displays the fitted pdfs and cdfs of

the GME, E, WED, GExtEW, and GWE distributions for data set 2, and suggests that the fit of the GME distribution is reasonable.

Table 5. Data set 2: Comparison between the GME, E, WED, GExtEW, and GWE by using different criteria.

          GME(λ, α, β)   E(λ)        WED(λ, α, β)   GExtEW(λ, α, β, r, c)   GWE(λ, α, k)
λ         0.0311         0.0197      0.0114         0.0280                  0.0237
α         0.6384         -           1.2263         1.8338                  11.8562
β         7.5142         -           1.0035         0.0069                  -
r         -              -           -              0.8196                  -
c         -              -           -              0.9454                  -
k         -              -           -              -                       3
-loglike  578.9263       585.5995    580.1301       580.6417                590.0420
AIC       1163.8530      1173.1990   1166.2600      1171.2830               1186.0850
BIC       1172.2400      1175.9950   1174.6480      1185.2620               1194.4720

Figure 9. Fitted pdfs and the relative histogram, empirical and fitted cdfs.

7. Conclusions

In this paper, we introduce a new lifetime distribution, the GME distribution, and derive several of its statistical properties. Since it is not feasible to compare the estimation methods theoretically, we carried out several simulation studies to identify the most efficient estimation method for the GME distribution. The simulation results show that the weighted least-square estimator (WLS) is the best-performing estimator in terms of MSE; that is, the weighted least-square estimation method is more suitable for estimating the parameters of the GME distribution. Finally, two real data sets were analyzed to indicate the importance and flexibility of the GME distribution in comparison with some existing lifetime distributions. In the future, the development of the properties and proper estimation procedures of the bivariate and multivariate generalizations will be of interest, and more work is needed along that direction.

Author Contributions: W.T.: Conceptualization, Methodology, Validation, Investigation, Resources, Supervision, Project Administration, Visualization, Writing review and editing; Y.Y. and T.T.: Software, Formal analysis, Data curation, Writing original draft preparation, Visualization. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Datasets are provided in the paper.

Acknowledgments: The authors would like to thank the editor and three anonymous referees for their careful reading of this article and for their constructive suggestions, which considerably improved this article.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

WE      weighted exponential distribution
GWE     generalized weighted exponential distribution
WED     Weibull exponential distribution
GExtEW  generalized extended exponential-Weibull distribution
GME     generalized mixture exponential distribution
MLE     maximum likelihood estimator
LS      least-square estimator
WLS     weighted least-square estimator

Appendix A. Data Set Appendix A.1. Data Set 1 0.8 0.8 3.2 3.3 4.6 4.7 6.2 6.2 7.7 8 9.7 9.8 12.5 12.9 17.3 18.1 27 31.6 1.3 3.5 4.7 6.2 8.2 10.7 13 18.2 33.1 1.5 1.8 1.9 3.6 4 4.1 4.8 4.9 4.9 6.3 6.7 6.9 8.6 8.6 8.6 10.9 11 11 13 13.3 13.6 18.4 18.9 19 38.5 1.9 2.1 2.6 4.2 4.2 4.3 5 5.3 5.5 7.1 7.1 7.1 8.8 8.8 8.9 11.1 11.2 11.2 13.7 13.9 14.1 19.9 20.6 21.3 2.7 2.9 3.1 4.3 4.4 4.4 5.7 5.7 6.1 7.1 7.4 7.6 8.9 9.5 9.6 11.5 11.9 12.4 15.4 15.4 17.3 21.4 21.9 23.

Appendix A.2. Data Set 2 0.3 0.3 4.0 5.0 5.6 6.2 6.3 6.6 6.8 7.4 7.5 8.4 8.4 10.3 11.0 11.8 12.2 12.3 13.5 14.4 14.4 14.8 15.5 15.7 16.2 16.3 16.5 16.8 17.2 17.3 17.5 17.9 19.8 20.4 20.9 21.0 21.0 21.1 23.0 23.4 23.6 24.0 24.0 27.9 28.2 29.1 30.0 31.0 31.0 32.0 35.0 35.0 37.0 37.0 37.0 38.0 38.0 38.0 39.0 39.0 40.0 40.0 40.0 41.0 41.0 41.0 42.0 43.0 43.0 43.0 44.0 45.0 45.0 46.0 46.0 47.0 48.0 49.0 51.0 51.0 51.0 52.0 54.0 55.0 56.0 57.0 58.0 59.0 60.0 60.0 60.0 61.0 62.0 65.0 65.0 67.0 67.0 68.0 69.0 78.0 80.0 83.0 88.0 89.0 90.0 93.0 96.0 103.0 105.0 109.0 109.0 111.0 115.0 117.0 125.0 126.0 127.0 129.0 129.0 139.0 154.0.

References

1. Gupta, R.D.; Kundu, D. Generalized exponential distribution: Different method of estimations. J. Stat. Comput. Simul. 2001, 69, 315–337. [CrossRef]
2. Gupta, R.D.; Kundu, D. A new class of weighted exponential distributions. Statistics 2009, 43, 621–634. [CrossRef]
3. Azzalini, A. A class of distributions which includes the normal ones. Scand. J. Stat. 1985, 12, 171–178.
4. Kharazmi, O.; Mahdavi, A.; Fathizadeh, M. Generalized weighted exponential distribution. Commun. Stat. Simul. Comput. 2015, 44, 1557–1569. [CrossRef]
5. Nadarajah, S.; Haghighi, F. An extension of the exponential distribution. Statistics 2011, 45, 543–558. [CrossRef]
6. Eugene, N.; Lee, C.; Famoye, F. Beta-normal distribution and its applications. Commun. Stat. Theory Methods 2002, 31, 497–512. [CrossRef]
7. Nadarajah, S.; Kotz, S. The beta exponential distribution. Reliab. Eng. Syst. Saf. 2006, 91, 689–697. [CrossRef]
8. Barreto-Souza, W.; Santos, A.H.; Cordeiro, G.M. The beta generalized exponential distribution. J. Stat. Comput. Simul. 2010, 80, 159–172. [CrossRef]
9. Barreto-Souza, W.; Cordeiro, G.M.; Simas, A.B. Some results for beta Fréchet distribution. Commun. Stat. Theory Methods 2011, 40, 798–811. [CrossRef]
10. Ristic, M.M.; Balakrishnan, N. The gamma-exponentiated exponential distribution. J. Stat. Comput. Simul. 2012, 82, 1191–1206. [CrossRef]

11. Oguntunde, P.E.; Balogun, O.S.; Okagbue, H.I.; Bishop, S.A. The Weibull-exponential distribution: Its properties and applications. J. Appl. Sci. 2015, 15, 1305–1311. [CrossRef]
12. Ristic, M.M.; Kundu, D. Marshall-Olkin generalized exponential distribution. Metron 2015, 73, 317–333. [CrossRef]
13. George, R.; Thobias, S. Kumaraswamy Marshall-Olkin Exponential Distribution. Commun. Stat. Theory Methods 2019, 48, 1920–1937. [CrossRef]
14. Shakhatreh, M.K.; Lemonte, A.J.; Cordeiro, G.M. On the generalized extended exponential-Weibull distribution: Properties and different methods of estimation. Int. J. Comput. Math. 2020, 97, 1029–1057. [CrossRef]
15. Kumar, C.S.; Anusree, M.R. On a generalized mixture of standard normal and skew normal distributions. Stat. Probab. Lett. 2011, 81, 1813–1821. [CrossRef]
16. Tian, W.; Wang, C.; Wu, M.; Wang, T. The multivariate extended skew normal distribution and its quadratic forms. In Causal Inference in Econometrics; Springer: Cham, Switzerland, 2016; pp. 153–169.
17. Swain, J.J.; Venkatraman, S.; Wilson, J.R. Least-squares estimation of distribution functions in Johnson's translation system. J. Stat. Comput. Simul. 1988, 29, 271–297. [CrossRef]
18. Ghitany, M.E.; Atieh, B.; Nadarajah, S. Lindley distribution and its application. Math. Comput. Simul. 2008, 78, 493–506. [CrossRef]
19. Lee, E.T. Statistical Methods for Survival Data Analysis; John Wiley: New York, NY, USA, 1992.
20. Tahir, M.H.; Mansoor, M.; Zubair, M.; Hamedani, G. McDonald log-logistic distribution with an application to breast cancer data. J. Stat. Theory Appl. 2014, 13, 65–82. [CrossRef]