Fisher in Censored Samples from Univariate and Bivariate Populations and Their Applications

Dissertation

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By

Lira Pi, M.S.

Graduate Program in

The Ohio State University

2012

Dissertation Committee:

Haikady N. Nagaraja, Advisor Steven N. MacEachern Omer¨ Ozt¨ urk¨ c Copyright by

Lira Pi

2012 Abstract

This research explores many analytical features of Fisher information (FI) in censored

samples from univariate as well as bivariate populations and discusses their applications.

The FI in censored samples is utilized to obtain the asymptotic of the associated

maximum likelihood (MLE) in censored samples and to assess Asymptotic Rel-

ative Efficiency (ARE) of an estimator. We primarily focus on the FI contained in Type-II

censored samples. The FI plays a significant role in determining an optimal sample size in a

life-testing experiment while taking the expected duration of the experiment into account.

In Chapter 2 we investigate the linkage between unfolded and folded distributions in

terms of FI in order statistics and in Type-II censored samples for symmetric distributions.

For instance, with mean θ can be viewed as the folded distribution

arising from a Laplace distribution with location at zero and scale parameter θ.

We exploit this connection to simplify the efforts in finding the FI in order statistics and in

Type-II censored samples from an unfolded distribution that is symmetric about zero. We have shown that 4n − 3 independent computations of the expectations of special functions of order statistics from the folded distribution are needed to obtain FI in all single order statistics and all Type-II (right or left or doubly) censored samples for all random samples of size m up to n. We use this efficient approach to find the FI in order statistics and Type-

II censored samples from the Laplace distribution using the expectations of functions of exponential order statistics that can be easily obtained.

ii We present in Chapter 3 the FI (FIM) in censored samples from a mixture of two

exponentials when the mixing proportion θ is unknown and when it is known. We consider a mixture of two exponentials with pdf given by θαe−αx + (1 − θ)βe−βx (x > 0). It is

proved that every entry of the FIM is finite. As closed form expressions do not exist we

pursue the simulation approach to generate reliable, close approximations to the elements

of the FIM. However we found that the of FIM are almost zero for any as-

sumed values of α, β, and θ when n is small. This supports the general knowledge that

a very large sample is needed for a precise estimation of in a finite mixture of

exponential distributions.

Let Xi:n be the ith order and Y[i:n] be its concomitant obtained from a random

sample from the absolutely continuous Block-Basu (1974) bivariate exponential random

variable (X,Y ) with parameters λ1, λ2, and λ12. For this model, Chapter 4 provides ex-

pressions for the elements of the FIM in censored samples {(Xi:n,Y[i:n], 1 ≤ i ≤ r} and studies the growth pattern of the FI relative to the total FI in the sample on λ1, λ2, and λ12 as r/n changes in (0,1). This is done for small and large sample sizes. The results show that the FI on λ1, the parameter associated with X, is always greater than the FI on λ2, associated with the concomitant Y . We calculate the FI per unit of experimental duration to suggest optimal sample sizes for life-testing experiments. We describe its implications on the design of censored trials. In all of our investigations we also consider left and doubly censored samples.

iii Dedicated to my husband, Hyeong-Tak and to my parents, Jae-Ho and Yong-Sook

iv ACKNOWLEDGEMENTS

I gratefully acknowledge the favor of several people who mitigate the toughness of aca- demic career at the Ohio State.

My research and thesis have been well developed under the delicate advice and super- vision of Dr. Nagaraja. He inspired me with confidence and independent thinking about problems of . As time goes on, I was more impressed by his good personality and enthusiasm to help students. He also provided me with a desirable insight and it makes a large influence on my view of life. For this and everything else I am sincerely grateful to him. I would also like to take this opportunity to express my appreciation to Dr.

Ozturk and Dr. MacEachern. Their valuable comments during candidacy and final oral exams helped my narrow view of research expand. I would like to thank the department of Statistics for providing the facilities to work and for its continuous financial support all along.

I would like to thank all my professors at Ewha Womans University, Seoul, Korea for encouraging me to pursue Ph.D degree in USA. In particular Dr. So paid his attention to my academic career and gave me expert advice on research.

I gratefully thank my parents Jae-Ho and Yong-Sook and my sister Soo-Jin for en- couraging me with their profound praying. Also I would like to thank my parents in law

Su-Dong and Jung-Ae. Their belief in my abilities kept driving me to work harder towards my goal. I thank my husband Hyeong-Tak for his constant support. There is no suitable

v word to sufficiently express appreciation of all he has done for my academic career. His love and affection have helped me overcome some of the toughest days of my life and I will be forever indebted to him for that. Last but not least, I should thank my lord, God.

His spirit has been always accompanying by me and guiding me onto the right path.

vi Contents

Page

ABSTRACT ...... ii

Dedication ...... iv

Acknowledgments ...... v

List of Tables ...... x

List of Figures ...... xiii

1. Introduction ...... 1

1.1 Fisher Information in Order Statistics and their Concomitants ...... 1 1.1.1 Properties of Concomitants ...... 1 1.1.2 Fisher Information in Univariate Censored samples ...... 2 1.1.3 Fisher Information in Bivariate Censored samples ...... 3 1.1.4 Use of Fisher Information in Censored samples ...... 10 1.2 Univariate Models ...... 11 1.2.1 Relationship between Folded and Unfolded populations . . . . . 11 1.2.2 A Mixture of Finite Exponential populations ...... 12 1.3 Bivariate Exponential Models ...... 13 1.4 Motivation and Summary of Work ...... 14

2. Connections between Fisher Information in Type-II Censored samples from Folded and Unfolded populations ...... 16

2.1 Introduction ...... 16 2.2 Fisher Information in a Single Order statistic from Unfolded population . 19

vii 2.3 Connection between Fisher Information in Type-II Censored samples from Folded and Unfolded populations ...... 33 2.4 An Illustrative Example ...... 44

3. Fisher Information from a Mixture of Finite Exponential distributions and its Type-II censored samples ...... 52

3.1 Introduction ...... 52 3.2 Fisher Information in Type-II Censored samples from a Mixture of Two Exponentials with Unknown p ...... 53 3.3 Fisher Information in Type-II Censored samples from a Mixture of Two Exponentials with Known θ ...... 58 3.4 Application and Numerical Integration ...... 59

4. Fisher Information in Type-II censored samples from Block-Basu Bivariate Ex- ponential distribution ...... 69

4.1 Introduction ...... 69 4.2 Block and Basu Bivariate Exponential distribution ...... 70 4.3 Fisher Information in Type-II Censored samples from BBVE ...... 72 4.3.1 Right Censored Samples ...... 72 4.3.2 Left Censored Samples ...... 83 4.3.3 Limiting Fisher Information Matrix ...... 91 4.4 Computations ...... 93 4.4.1 Right Censored Samples - Finite Sample Case ...... 93 4.4.2 Limiting FIM for Right Censored Samples ...... 101 4.4.3 Left and Doubly Censored Samples ...... 103

5. Conclusion ...... 105

5.1 Concluding Remarks ...... 105 5.2 Future Work ...... 106

Bibliography ...... 108

Appendix A. Notations and Abbreviations ...... 111

A.1 Symbols ...... 111 A.2 Abbreviations ...... 112 A.3 Distributions ...... 112

viii Appendix B. R codes ...... 113

B.1 Numerical Integration ...... 113 B.2 Simulation ...... 115

ix List of Tables

Table Page

f 2.1 Ir:m from Laplace(0, 2) for 1 ≤ r ≤ m ≤ n when n = 10 using (2.1.1) . . . 45

2.2 The values of ab1:i, c1:i, d1:i, e1:1, and k1:i for the Exp(2) distribution in Lemma 2.2.1 for 1 ≤ i ≤ 10 ...... 46

2.3 The values of abr:m from Exp(2) parent for 1 ≤ r ≤ m ≤ 10 ...... 47

2.4 The values of cr:m from Exp(2) for 1 ≤ r ≤ m ≤ 10 ...... 47

2.5 The values of dr:m from Exp(2) for 1 ≤ r ≤ m ≤ 10 ...... 47

2.6 The values of er:m from Exp(2) for 1 ≤ r ≤ m ≤ 10 ...... 48

f 2.7 Ir:m from Laplace(0, 2) using Theorem 2.2.1 for 1 ≤ r ≤ m ≤ 10 ..... 48

f 2.8 I1···r:m from Laplace(0, 2) using Theorem 2.3.1 for 1 ≤ r ≤ m ≤ 10 .... 49

2.9 Proportional FI from Laplace (0, 2) ...... 50

2.10 ARE values for Laplace (0, 2) distribution ...... 51

3.1 I(X; α, β, θ) from MExp(15, 1, θ) with known θ and unknown θ ...... 59

3.2 I(X; α, β, θ) from MExp(2, 1, θ) with known θ and unknown θ ...... 60

3.3 I(X; α, β, θ) from MExp(15, 2, θ) with known θ and unknown θ ...... 60

3.4 I(X; α, β, θ) from MExp(3, 2, θ) with known θ and unknown θ ...... 61

3.5 I1···r:n(α, β, θ) from MExp(15, 1, .9) when n = 10 ...... 62

x 3.6 Proportional FI from MExp(15, 1, .9) when n = 10 ...... 63

3.7 FI in Type-II right censored samples from MExp(15, 1, .9) per unit time when n = 10 ...... 64

3.8 ARE values for MExp(15, 1, .9) when n = 10 ...... 65

3.9 I1···r:n(α, β) from MExp(2, 1; .6) when n = 10 ...... 66

3.10 Proportional FI from MExp(2, 1; .6) ...... 66

3.11 FI in Type-II right censored samples from MExp(2, 1, .6) per unit time when n = 10 ...... 67

3.12 ARE values for MExp(2, 1; .6) when n = 10 ...... 68

4.1 Elements of I1···r:n(λ) from BBVE(1, .5, .5) when n = 10 ...... 94

−1 4.2 Elements of I1···r:n(λ) from BBVE(1, .5, .5) when n = 10 ...... 94

4.3 Proportional FI in Type-II right censored samples from BBVE(1, .5, .5) when n = 10 ...... 95

4.4 Values of λ12 for selected values of λ1, λ2 and ρ in (4.4.3) ...... 97

4.5 Proportional FI in Type-II right censored samples from BBVE(1, 1, λ12) when n = 10 ...... 97

4.6 Proportional FI in Type-II right censored samples from BBVE(1, .5, λ12) when n = 10 ...... 98

4.7 Proportional FI in Type-II right censored samples from BBVE(.5, 1, λ12) when n = 10 ...... 99

1 4.8 Diagonal entries of Ip(λ) and n I1···r:n(λ) from BBVE(1, .5, .5) where r/n → p as n ↑ ∞ when n=10, 20, 50, 100 and 500 ...... 102

4.9 Approximations based on limiting FIM to the of MLEs from right censored samples from BBVE(1, .5, .5) when n = 10 ...... 102

xi 4.10 ARE values for MLEs from right censored samples for BBVE(1, .5, .5) distribution ...... 103

4.11 Is···n:n(λ) from BBVE(1, .5, .5) when n = 10 ...... 104

xii List of Figures

Figure Page

f 2.1 Triangle of Ir:m for 1 ≤ r ≤ m ≤ n from unfolded distribution f(x; θ) ... 21

f 2.2 Blocks of ab, c, d, and e’s (colored in green) needed to obtain I2:m for every m ≥ 4 ...... 26

f 2.3 Blocks of ab, c, and d’s (colored in green) needed to obtain I1:m for every m ≥ 3 ...... 27

f 2.4 Blocks of ab, c, and d’s (colored in green) needed to obtain Ir:m when m−1 3 ≤ r ≤ 2 and m ≥ 7 ...... 28

f 2.5 Blocks of ab, c, and d’s (colored in green) needed to obtain Ir:m when m r = 2 and m ≥ 6 where m is even ...... 29

f 2.6 Blocks of ab, c, and d’s (colored in green) needed to obtain Ir:m when m+1 r = 2 and m ≥ 5 where m is odd ...... 30

f 2.7 Blocks of ab, c, and d’s (colored in green) needed to obtain Ir:m when m+1 2 < r ≤ m − 2 and m ≥ 6 ...... 31

f 2.8 Expressions for I1···r:m for 1 ≤ r ≤ m ≤ n ...... 35

f m−1 2.9 Blocks of ab, c, and d’s needed to obtain I1···r:m when 1 ≤ r < 2 and m ≥ 4 ...... 38

f m−1 2.10 Blocks of ab, c, and d’s needed to obtain I1···r:m when r = 2 and odd m ≥ 3 ...... 39

f m 2.11 Blocks of ab, c, and d’s needed to obtain I1···r:m when r = 2 and even m ≥ 4 40

xiii f m+1 2.12 Blocks of ab, c, and d’s needed to obtain I1···r:m when r = 2 and odd m ≥ 5 ...... 41

f m+1 2.13 Blocks of ab, c, and d’s needed to obtain I1···r:m when 2 < r ≤ m − 2 and m ≥ 6 ...... 42

f 2.14 Blocks of ab, c, and d’s needed to obtain Ir:m when r = m − 1 and m ≥ 2 . 43

I1···r:n(θ) 2.14 nI(θ) when n = 10 ...... 50

I1···r:n(α,α) I1···r:n(β,β) 3.1 Proportional FI from MExp(15, 1, .9) when n=10: nI(α,α) (black), nI(β,β) I1···r:n(θ,θ) (red), and nI(θ,θ) (blue)...... 63

3.2 Average Information per unit time: (Left panel) I1···r:n(θ) (blue) and (Right E(Xr:n) panel) I1···r:n(α) (black) and I1···r:n(β) (red) from MExp(15, 1, .9) when n=10 64 E(Xr:n) E(Xr:n)

3.3 Proportional FI from MExp(2, 1; .6): α(black), β(red) ...... 66

3.4 I1···r:n(α) (black) and I1···r:n(β) (red) from MExp(2, 1;.6) when n=10 . . . . . 67 E(Xr:n) E(Xr:n)

4.1 Increasing pattern of the relative FI in Type-II right censored samples for 1 ≤ r ≤ n, for the BBVE(1,0.5,0.5) parent where n=10 ...... 96

4.2 3D surface plots of proportional FI in censored samples from BBVE . . . . 100

4.3 FI in Type-II right censored samples per unit of the duration ...... 101

xiv CHAPTER 1: INTRODUCTION

1.1 Fisher Information in Order Statistics and their Concomitants

Let (X,Y ) be an absolutely continuous random vector with joint cdf F (x, y) and joint

pdf f(x, y) and

(Xi,Yi), i = 1, . . . , n be a random sample from the distribution of (X,Y ). Also let

X1:n ≤ · · · ≤ Xn:n be the order statistics of the X sample values. Then the Y -value associated with Xi:n is called the concomitant of the ith order statistic and is denoted by Y[i:n]. It has been known that the joint pdf of a single order statistic Xr:n, and its concomitant Y[r:n] is given by n! f (x, y; θ) = f(x, y; θ)F (x; θ)r−1(1 − F (x; θ))n−r, 1 ≤ r ≤ n, r:n (r − 1)!(n − r)! X X (1.1.1)

where FX is the marginal cdf of X and θ is a vector of parameters associated with the joint

T pdf, say (θ1, θ2, ··· , θt) for t ≥ 1.

1.1.1 Properties of Concomitants

The marginal pdf of Y[r:n] is obtained by integrating (1.1.1) with respect to Xr:n; Z ∞ n! r−1 n−r f[r:n](y; θ) = f(x, y; θ)FX (x; θ) (1 − FX (x; θ)) dx. (r − 1)!(n − r)! −∞ 1 David and Galambos (1974) showed that when (Xr,Yr) is distributed as bivariate normal

2 2 N(µx, µy, σx, σy, ρ), 1 ≤ r ≤ n,

2 2 Y[ri:n] − E(Y[ri:n]) ∼ N(0, σy(1 − ρ )), i = 1, 2, ··· , k; 1 ≤ r1 < r2 < ··· < rk ≤ n

where k is fixed. Also they investigated the asymptotic properties of the rank of Y[t:n], Pn Rt:n = s=1 νts:n where

 1 if Y ≥ Y , ν = [t:n] [s:n] ts:n 0 otherwise.

Yang (1977) extended the asymptotic property based on the bivariate normal to a general distribution. He applied conditioning argument to obtain the exact and asymptotic distri- butions of concomitants from any arbitrary bivariate distribution.

1.1.2 Fisher Information in Univariate Censored samples

Suppose X1, ··· ,Xn are independent and identically distributed random variables from cdf FX (x; θ) with absolutely continuous pdf f(x; θ). Let X1:n, ··· ,Xn:n be their order statistics. When only k of the n order statistics are randomly collected and denoted by

X = (Xr1:n, ··· ,Xrk:n) with joint pdf fr1,··· ,rk:n, the FI matrix (FIM) about θ in X, under some regularity conditions, is given by

Ir1,··· ,rk:n(θ; X) = ||Ir1,··· ,rk:n(θi, θj; X)||, 1 ≤ i, j ≤ t where

∞ x Z Z r2:n  ∂   ∂  Ir1,··· ,rk:n(θi, θj; X) = ··· log fr1,··· ,rk:n log fr1,··· ,rk:n dFr1···rk:n. −∞ −∞ ∂θi ∂θj (1.1.2)

2 Nagaraja (1983) showed that the regularity conditions used to define the FI in X, I(θ; X) =

∂ 2 E ∂θ log f(X; θ) serve as the sufficient conditions for defining I(θ; Xr:n). These regu- larity conditions are introduced in Section 1.1.3.

Under these regularity conditions Park (1996) simplified (1.1.2) by expressing I1···r:n(θ; X)

and Is···n:n(θ; X) as a sum of r and n − s + 1 single expectations, respectively.

n X  k − 2 n I (θ; X) = (−1)k−n+r−1I (θ; X),1 ≤ r ≤ n − 1. 1···r:n n − r − 1 k 1:k k=n−r+1 (1.1.3) n X k − 2n I (θ; X) = (−1)k−sI (θ; X), 2 ≤ s ≤ n. s···n:n s − 2 k k:k k=s (1.1.4)

Zheng and Gastwirth (2000) considered location-scale families and examined the percent-

ages of FI contained in middle section of ordered data about the location parameter and in

the two-tails about the scale parameter. As applications they investigated the FI contained

in multiply censored samples from Cauchy, Laplace, Logistic, and Normal distributions.

1.1.3 Fisher Information in Bivariate Censored samples

Bivariate censored samples that include a collection of order statistics and their con-

comitants have two different types of censoring schemes.

Type-I censoring

An experiment is terminated at a predetermined time t so that the data is recorded prior

to this time.

3 Type-II censoring

An experiment is terminated at a predetermined number of items, say at the rth failure,

that is, at time Xr:n. One can consider several schemes under this framework.

Set-Up 1. Censoring is effective on both variables and we have a Type II bivariate censored

sample consisting of (Xi:n,Y[i:n]), 1 ≤ s ≤ i ≤ r ≤ n.

1. Right censored

 (X(1, r), Y[1, r]) = (Xi:n,Y[i:n]), 1 ≤ i ≤ r

2. Left censored

 (X(s, n), Y[s, n]) = (Xi:n,Y[i:n]), s ≤ i ≤ n

3. Double censored

 (X(s, r), Y[s, r]) = (Xi:n,Y[i:n]), s ≤ i ≤ r

Set-Up 2. Censoring is done on the concomitant of order statistic. That is, only Xi:n are

observed; this results in a univariate sample.

Set-Up 3. Censoring is effective on the ordered variable only. All the Y values and the

ranks of associated X values, i.e., (i, Y[i:n]), 1 ≤ i ≤ n, are observed.

We focus on Set-Up 1. above in this study and consider only Type II censored samples.

Suppose bivariate (X,Y ) is from absolutely continuous distribution with joint cdf F (x, y; θ) and pdf f(x, y; θ) where θ is a scalar parameter. Abo-Eleneen and

Nagaraja (2002) present the necessary assumptions on the f(x, y; θ) to obtain FI in a single pair I(θ; X,Y ) as follows:

4 (a) Ω is a real non-degenerate interval.

(b) f(x, y; θ) is differentiable with respect to θ for all θ ∈ Ω.

(c∗) There exists an integrable H(x, y) such that ∂f(x,y;θ) ≤ H(x, y) for all θ. ∂θ Note that (c∗) validates the required assumption

(c) For any measurable set C ⊂ S, the sample space,

∂ Z Z ∂f(x, y; θ) f(x, y; θ)dµ = dµ. ∂θ C C ∂θ

Abo-Eleneen and Nagaraja (2002) prove that the regularity conditions (a), (b), and (c∗) can serve as the regularity conditions for defining the FI in an order statistic and its concomitant,

I(θ; Xr:n,Y[r:n]), and also in bivariate censored samples, Is···r:n(θ; X, Y) for 1 ≤ s < r ≤

n. They state that I(θ; Xr:n,Y[r:n]) can be defined through the second order derivative with respect to θ with an additional regularity assumption that allows the interchange of integration with respect to µ and second derivative of f(x, y; θ) with respect to θ. That is,

the additional condition says that f(x, y; θ) is twice differentiable, and

2 (d) There exists an integrable H0(x, y) such that ∂ f(x,y;θ) ≤ H0(x, y) for all θ. ∂θ2

Under such regularity assumptions the FI about a real valued parameter θ contained in a

single pair, (X,Y ) is defined as

 ∂ 2 I(θ; X,Y ) = E log f(X,Y ; θ) ∂θ  ∂2  = −E log f(X,Y ; θ) ∂θ2 Z Z  ∂2  = − 2 log f(x, y; θ) dF (x, y; θ). (1.1.5) X Y ∂θ

5 For two or more parameters, let X and Y be distributed with pdf f(x, y; θ), θ ∈ Ω, with

T respect to µ where θ is vector-valued, say θ = (θ1, ··· , θt) . Under the regularity assump-

tions, the FIM is a t × t matrix denoted by

I(θ; X,Y ) = ||I(θi, θj; X,Y )||, 1 ≤ i, j ≤ t where

 ∂   ∂  I(θi, θj; X,Y ) = E log f(X,Y ; θ) log f(X,Y ; θ) ∂θi ∂θj  ∂2  = −E log f(X,Y ; θ) . (1.1.6) ∂θi∂θj

Likewise the entry of the FIM in multiply censored bivariate samples, Ir1,··· ,rk:n(θ; X, Y) is

Z Z Z Z  ∂  Ir1,··· ,rk:n(θi, θj; X, Y) = ··· ··· log fr1,··· ,rk:n(x, y; θ) ∂θi yr1 yrk xr1 <···

The joint pdf of Type-II right censored samples, (X(1, r), Y[1, r]) is given by

n! f (x, y; θ) = f(x , y ; θ) ··· f(x , y ; θ)(1 − F (x ; θ))n−r, 1···r:n (n − r)! 1 1 r r X r

x1 < x2 < . . . < xr. (1.1.8)

Hence the log- is

`1···r:n(θ; x, y) = log n! − log(n − r)! + log f(x1, y1; θ) + ··· + log f(xr, yr; θ)

+(n − r) log(1 − FX (xr; θ)). (1.1.9)

Under the regularity conditions used to define FIM about θ in (Xr:n,Y[r:n]) for 1 ≤ r ≤ n, a Type-II right censored sample has the FIM denoted by

I1···r:n(θ; X,Y) = ||I1···r:n(θi, θj; X,Y)||, 1 ≤ i, j ≤ t

6 where I1···r:n(θi, θj; X,Y) is given by (1.1.7) with r1 = 1, r2 = 2, ··· , rk = r;

I1···r:n(θi, θj)

Z ∞ Z x2 Z ∞ Z ∞  ∂2   = ··· ··· − `1···r:n(θ; x,y) f1···r:n(x,y; θ)dy1 ··· dyr −∞ −∞ −∞ −∞ ∂θi∂θj

dx1 ··· dxr, x1 < . . . < xr. (1.1.10)

Another form for I1···r:n(θi, θj; X, Y) is provided by Nagaraja and Abo-Eleneen (2008).

This method simplifies (1.1.10) only to the sum of double integrals.

n X  k − 2 n I (θ , θ ; X, Y) = (−1)k−n+r−1I (θ , θ ; X,Y ), 1···r:n i j n − r − 1 k 1:k i j k=n−r+1 1 ≤ r ≤ n − 1 (1.1.11)

where I1:k(θi, θj; X,Y ) is the FI in the first order statistic and its concomitant for 2 ≤

k ≤ n. This extends the result on order statistics by Park (1996) given in (1.1.3). The

expression of (1.1.11) can be used to validate the numerical results from (1.1.10) but one

must be aware that (1.1.11) itself has accumulated errors due to the alternating series as

pointed out by Park (1996).

Nagaraja and Abo-Eleneen (2008) present a different formula to obtain the FI in a Type-II

right censored bivariate sample by dividing f(x, y; θ) into the marginal distribution of X,

fX (x; θ) and the conditional distribution of Y given x, fY |X (y | x; θ), namely

r  ∂2  X   ∂2  I1···r:n(θi, θj; X,Y) = E − `1···r:n(θ; X) + E E − fk:n(Y |X; θ) . ∂θi∂θj ∂θi∂θj k=1

Next, let us consider the Type-II left censoring scheme. The joint pdf of a Type-II left cen- sored sample of the largest (n−s+1) order statistics and their concomitants, (X(s, n), Y[s, n])

7 is given by

n! f (x, y; θ) = f(x , y ; θ) ··· f(x , y ; θ)[F (x ; θ)]s−1 , s···n:n (s − 1)! s s n n X s

xs < xs+1 < . . . < xn. (1.1.12)

Then the log-likelihood function is

`s···n:n(θ; x, y) = log n! − log(s − 1)! + log f(xs, ys; θ) + ··· + log f(xn, yn; θ)

+ (s − 1) log FX (xs; θ). (1.1.13)

Under the regularity conditions of (a)-(d) given on pages 4-5, the FIM in Type-II left cen-

sored samples is denoted by

Is···n:n(θ; X,Y) = ||Is···n:n(θi, θj); X,Y||, 1 ≤ i, j ≤ s

where

Is···n:n(θi, θj; X,Y)

Z ∞ Z xs+1 Z ∞ Z ∞  ∂2   = ··· ··· − `s···n:n(θ; x, y) fs···n:n(x, y; θ)dys ··· dyn −∞ −∞ −∞ −∞ ∂θi∂θj

dxs ··· dxn. (1.1.14)

The expression in (1.1.14) is simplified in terms of the sum of double integrals

n X i − 2n I (θ , θ ; X,Y) = (−1)i−sI (θ , θ ; X,Y ), 2 ≤ s ≤ n. (1.1.15) s···n:n i j s − 2 i i:i i j i=s

Moreover, Nagaraja and Abo-Eleneen (2008) concluded that the FI in Type-II doubly cen- sored bivariate samples is readily obtained from the relation given by

Is···r:n(θi, θj) = I1···r:n(θi, θj) + Is···n:n(θi, θj) − nI(θi, θj), 1 ≤ s ≤ r ≤ n. (1.1.16)

8 For very large sample sizes, Nagaraja and Abo-Eleneen (2008) show that the FI in Type-II right censored bivariate samples with r = [np], 0 < p < 1 converges to the limiting FIM

Ip(θ), denoted by

Ip(θ) = ||Ip(θi, θj)||.

From Nagaraja and Abo-Eleneen (2008), the (i, j)th element is

Ip(θi, θj)

F −1(p) Z X  ∂   ∂  = log fX (x; θ) log fX (x; θ) fX (x; θ)dx −∞ ∂θi ∂θj

(Z ∞ )(Z ∞ ) 1 ∂ log fX (x; θ) ∂ log fX (x; θ) + fX (x; θ)dx fX (x; θ)dx 1 − p −1 ∂θi −1 ∂θj FX (p) FX (p)

F −1(p) ∞ Z X Z ∂ log f(y | x; θ) ∂ log f(y | x; θ) + f(x, y; θ)dydx (1.1.17) −∞ −∞ ∂θi ∂θj

−1 where FX (p) represents the quantile function or the inverse cdf of X at p and fX (x; θ) is the marginal pdf of X. The rate of convergence here is of order 1/n.

−1 According to He (2007), [nIp(θi, θi)] serves as the asymptotic variance of the MLE of θi in the bottom 100p% of the sample when the other parameters, (θ1, ··· , θi−1, θi+1, ··· , θt)

I−1(θ) are known. On the other hand, p ii serves as the asymptotic variance of the MLE of

θi in the bottom 100p% of the sample when the other parameters are unknown.

By using the expression in (1.1.17) the univariate case is given by

Ip(θi, θj)

F −1(p) Z X  ∂   ∂  = log fX (x; θ) log fX (x; θ) fX (x; θ)dx −∞ ∂θi ∂θj

(Z ∞ )(Z ∞ ) 1 ∂ log fX (x; θ) ∂ log fX (x; θ) + fX (x; θ)dx fX (x; θ)dx . 1 − p −1 ∂θi −1 ∂θj FX (p) FX (p) (1.1.18)

9 Those expressions in (1.1.17) and (1.1.18) can be used to study the asymptotic relative

efficiency (ARE) of the MLEs from Type II right censored samples and determination of

optimal schemes in life-testing experiments.

1.1.4 Use of Fisher Information in Censored samples

There are four main purposes for studying the FI in censored samples:

To obtain the asymptotic variance of MLE

It is known that MLEs are asymptotically efficient under regularity conditions. We can

find the asymptotic variance of MLE using the FIM if the MLE exists. The reciprocal of

diagonal entry corresponding to the component in FIM is the asymptotic variance of its

MLE when the values of other parameters are known. Otherwise, the diagonal entry in the

inverse of the FIM is regarded as the asymptotic variance.

To determine the optimal sample size for life-testing experiments

We compare I1···r:m for 1 ≤ r ≤ m ≤ n and call this quantity FI per unit time for E(Xr:m) the life-testing experiment. The quantity measures which censored sampling mechanism is more efficient in terms of the amount of FI acquired per unit time during the experiment.

The censored sample with more FI in less duration is assumed to have better performance in life-testing experiment.

To evaluate MLEs from large censored samples

ˆ ˆ Let us denote by θr and θn the MLE of θ from Type-II right censored sample and the

MLE from a complete sample, respectively. For r/n → p as n ↑ ∞ where 0 < p < 1, the

10 asymptotic relative efficiency (ARE) is given by

  I (θ) ARE θˆ , θˆ = p . (1.1.19) n r I(θ; X,Y )

To evaluate the relative efficiencies of unbiased in finite censored samples

Cramer-Rao´ Lower Bound (CR Lower bound) provided by the Fisher Information mea- sure can be used to examine the finite sample efficiencies of unbiased estimators based on censored samples.

1.2 Univariate Models

1.2.1 Relationship between Folded and Unfolded populations

A folded population is obtained by folding an original population at a reference point.

In this study a symmetric population about zero is considered to be the unfolded population; the folded population is obtained by folding it at zero. For example one obtains exponential distribution by folding the Laplace distribution at zero. Govindarajulu (1963) proved that the moments of order statistics in samples from the unfolded distribution can be expressed as a function of the moments of order statistics in samples from the folded distribution.

An application of his results to Laplace and exponential populations was given in Govin- darajulu (1966). Govindarajulu algebraically derived the formulas by splitting the range of integrals into two parts, Balakrishnan et al. (1993) examined the same relation through a probabilistic approach that extended the work to the independent and non-identically case.

11 1.2.2 A Mixture of Finite Exponential populations

Mixtures of exponential distributions can be applied to model heterogeneous failure time data. A mixture of exponential distributions is frequently used to model time to failure data where the failure rate either becomes constant or again increases with time (Menden- hall and Hader, 1958). Mendenhall and Hader (1958) also claimed that the computation of

MLE in Type-I censored data can be undertaken within an iterative process while the esti- mating equation of each parameter is a function of the other MLEs including itself when the partial derivatives are equated to zero. Thus Hasselblad (1969) suggested a special case of the EM algorithm for computation of MLE to estimate the parameters of finite mixtures of the distributions. Dempster, Laird and Rubin (1977) proved that the iterative scheme converges to the MLE. For large samples, Bruce (1963) showed that the asymptotically the FI about the mixing proportion in a mixture of two exponential distri- butions is a function of the ratio of the two exponential parameters. However, consistency problem for the MLE has been reported and studied by Jewell (1982) and hence Redner and

Walker (1984) presented conditions to assure the existence of MLE. Atienza et al. (2007) developed conditions that are easier to check for mixtures of densities from exponential distributions. Choi and Nadarajah (2009) computed entries of information matrix for a mixture of two Laplace distributions with various assumed parameter values. But there is a lack of practical use of the information matrix since every is almost zero.

12 1.3 Bivariate Exponential Models

Exponential distribution is more likely to be used in modeling life testing data than any

other distributions. However there are often some multivariate empirical data that the uni-

variate exponential distribution cannot fit well marginally. In such cases, one may consider

a multivariate exponential distribution to explain the complicated data. In this context,

only bivariate exponential distribution is commonly studied because multivariate exponen-

tial distributions including more than two variables can be very readily investigated once

properties of bivariate exponential distribution are well examined. The three bivariate ex-

ponential distributions, proposed by Gumbel (1960), are not meaningful in practice. For

the physical motivation of the distributions Freund (1961) introduced his own model that

is empirically adapted to a two component system. However, the Freund model does not

have univariate exponential marginal distributions. So Marshall and Olkin (1967) devel-

oped a bivariate exponential distribution to overcome both weaknesses: impracticality of

Gumbel’s (1960) model, and lack of marginal exponentiality in Freund (1961). In spite of

many strengths of Marshall-Olkin’s model such as the bivariate loss of memory property,

the model has a shortcoming that corresponds to its singularity. Block and Basu (1974)

proposed an absolutely continuous bivariate exponential distribution (BBVE) that is a non-

singular part of Marshall-Olkin’s model. Alternatively, the BBVE model can be derived

by Freund’s method with appropriate restrictions on the parameters. The marginal survival

functions of X and Y are not exponential but are a generalized mixtures of two exponentials

with a negative mixing proportion. The details on this will be discussed in Chapter 4. The

other significant properties of the BBVE are: min(X,Y ) is distributed as an exponential,

X − Y and min(X,Y ) are independent variables, and the lack of memory property holds.

These properties can be used to determine whether the BBVE distribution is appropriate or

13 not. According to Gross and Lam (1981), the following are some of the examples where the BBVE model fits well: lengths of tumor remission when a patient receives different treatments on two occasions and lengths of time required for analgesics to take effect when patients with headaches receive different ones on two occasions. Refer to Section 10.6 in

Balakrishnan and Lai (2009) for additional detailed inferences.

1.4 Motivation and Summary of Work

We have seen in Section 1.1.4 the role of the FI in the asymptotic theory of maximum likelihood estimation. The inverse of FI equals the asymptotic variance of the maximum likelihood estimator (MLE). In addition, the FI in censored samples has been of interest to researchers who seek to determine the efficient number of subjects in life-testing experi- ments. The relative amount of FI contained in Type-II censored samples to the duration of the censored life experiment provides guidance on optimal designs for the experiment.

In this dissertation, Type-II right censored data is primarily dealt with since such a type of data is commonly generated in reliability studies and survival analyses. Let us consider the life-testing experiment where n subjects are kept under observation until failure. These subjects could be some systems or components in reliability study experiments or patients put under certain drug or clinical conditions. Suppose the life lengths of these n items are independent and identically distributed (i.i.d.) random variables with an absolutely contin- uous cdf FX (x; θ) and pdf f(x; θ) where θ is a parameter. Then we have a random sample of X1,X2, ··· ,Xn from FX (x; θ). Note however that these values are recorded in increas- ing order of magnitude; that is, the data appear as the vector of order statistics in a natural way since the variables indicate the times to failure observed in sequence. Thus a univariate

14 censored sample is governed under an order statistics paradigm assuming that we have to

terminate the experiment before all items have failed.

The exponential population is the folded distribution generated by being folded at the mean

of a Laplace distribution. The FI in censored samples from unfolded distribution can be

expressed as a function of expectations of functions of order statistics contained in cen-

sored samples from the folded distribution. The connection between the FI in censored

samples from folded and unfolded distributions is shown in Chapter 2. We explore the

probabilistic properties of the FI in Type-II censored samples from a 2-component mixture

of exponentials and its limitations in Chapter 3. Next we consider bivariate censored sam-

ples. A complete bivariate sample consists of (Xi,Yi), i = 1, 2, ··· , n, and the likelihood Qn function is i=1 f(xi, yi) when the pairs are mutually independent. In this study, in view of application to life testing experiments, a bivariate exponential distribution is examined.

In Chapter 4, the Block-Basu bivariate exponential (BBVE) model is investigated in terms of the actual and relative amount of FI in Type-II censored samples. Our conclusions and plans for future work are discussed in Chapter 5.

15 CHAPTER 2: CONNECTIONS BETWEEN FISHER INFORMATION IN TYPE-II CENSORED SAMPLES FROM FOLDED AND UNFOLDED POPULATIONS

2.1 Introduction

Govindarajulu (1963) showed that the moments of order statistics in a random sample

of size n from a population with distribution symmetric around zero can be expressed as those from the distribution obtained by folding the symmetric population density at zero.

While he algebraically derived the expressions by dividing the range of integration, Bal- akrishnan et al. (1993) used a probabilistic argument to obtain the same expressions for the independent and identically distributed case. They also considered a population not nec- essarily symmetric to extend Govindarajulu’s results. However no research has yet been done on the relation between the unfolded and the folded distributions in terms of the FI in order statistics. In Section 2.2, we obtain the expressions for the FI in a single order statistic from the unfolded distribution by using the expectations of special functions of order statistics in a random sample of size n from the folded distribution. In Section 2.3,

we use these expectations to derive the expressions for the FI in Type-II censored samples.

In Section 2.4, we apply the results to compute the Fisher information in censored samples

from a Laplace (double exponential) distribution that is symmetric about zero in terms of

the expectations of special functions of order statistics in a random sample size of n from

an exponential population.

16 Suppose that a random sample, X1,X2, ··· ,Xn is drawn from a population that is sym- metric about zero with pdf f(x) for −∞ < x < ∞. When Y = |X|, let us denote g(y) by the pdf of y for 0 < y < ∞. In such a case f(x) is the pdf of an unfolded distribution symmetric around zero and g(y) is the pdf of the folded distribution associated with f(x).

Let F (x) and G(y) be the associated cdf’s. For example, a Laplace population with zero location (µ=0) and positive scale parameter θ (θ > 0) or a normal population with µ = 0 and σ > 0 are unfolded distributions symmetric about zero. Accordingly an exponential distribution with the mean θ and a half with scale parameter σ become the corresponding folded distributions. Now under the regularity conditions the FI about θ in a single order statistic Xr:n (1 ≤ r ≤ n) from an unfolded distribution is given by

 ∂2  If (θ) = E − log f (X) r:n r:n ∂θ2 r:n Z ∞  2  ∂ ¯  = − 2 log f(x) + (r − 1) log F (x) + (n − r) log F (x) −∞ ∂θ n r f(x)F (x)r−1F¯(x)n−rdx (2.1.1) r where θ is the positive scale parameter, fr:n(x) is the pdf of a single order statistic Xr:n,

F (x) is the cdf of a single observation, and F¯(x) = 1 − F (x).

When r = n = 1 for the FI in a single observation from the unfolded distribution, the expression in (2.1.1) is simplified as

Z ∞  2  f f ∂ I (θ) = I1:1(θ) = − 2 log f(x) f(x)dx. (2.1.2) −∞ ∂θ

17 Likewise, under the regularity conditions the FI about θ in a single order statistic Yr:n from the corresponding folded distribution is given by

 ∂2  Ig (θ) = E − log g (Y ) r:n r:n ∂θ2 r:n Z ∞  2  ∂ ¯  = − 2 log g(y) + (r − 1) log G(y) + (n − r) log G(y) 0 ∂θ n r g(y)G(y)r−1G¯(y)n−rdy (2.1.3) r where gr:n(y) is the pdf of a single order statistic Yr:n, G(y) is the cdf of a single observa- tion, and G¯(y) = 1 − G(y). With r = n = 1 we obtain the FI in a single observation from the folded distribution. It is given by

Z ∞  2  g g ∂ I (θ) = I1:1(θ) = − 2 log g(y) g(y)dy. (2.1.4) 0 ∂θ

Since the underlying (unfolded) distribution is symmetric about zero, the following alge- braic relations between unfolded and folded distributions hold: ( 1 g(−x; θ) for x ≤ 0 f(x; θ) = 2 (2.1.5) 1 2 g(x; θ) for x ≥ 0 ( 1 − 1 G(−x; θ) for x ≤ 0 F (x; θ) = 2 2 (2.1.6) 1 1 2 + 2 G(x; θ) for x ≥ 0 2 2 ( ∂ ∂ 2 log g(−x; θ) for x ≤ 0 log f(x; θ) = ∂θ (2.1.7) ∂θ2 ∂2 ∂θ2 log g(x; θ) for x ≥ 0 2 2 ( ∂ ∂ 2 log(1 − G(−x)) for x ≤ 0 log F (x; θ) = ∂θ (2.1.8) ∂θ2 ∂2 ∂θ2 log(1 + G(x)) for x ≥ 0. Now (2.1.2) can be rewritten by substituting (2.1.5) and (2.1.7) for the integrand as

Z ∞  2  f ∂ I (θ) = − 2 log f(x) f(x)dx −∞ ∂θ Z 0  ∂2  1 Z ∞  ∂2  1 = − 2 log g(−x) g(−x)dx + − 2 log g(x) g(x)dx −∞ ∂θ 2 0 ∂θ 2 Z ∞  ∂2  = − 2 log g(x) g(x)dx. 0 ∂θ 18 Thus we conclude that the FI in a single observation from a symmetric unfolded distribution and from the corresponding folded distribution are the same; that is

If (θ) = Ig(θ). (2.1.9)

Thus the amount of FI about θ contained in n independent random variables from an un- folded distribution symmetric around zero is identical to that from the folded distribution.

We will now focus on the FI in a single order statistic and in Type-II censored samples.

2.2 Fisher Information in a Single Order statistic from Unfolded pop- ulation

Suppose the original distribution is symmetric around zero and has parameter θ. We now derive the expressions of the FI in a single order statistic from the unfolded distribu-

f tion, Ir:n(θ) for 1 ≤ r ≤ n and n ≥ 1 by using expectations of a few special functions of order statistics from the folded distribution. Note that (2.1.1) can be expressed in terms of g and G by using (2.1.5)-(2.1.8) and dividing the range of integration at zero. Thus,

f Ir:n(θ)

 n Z ∞  2    1 ∂ n n−r r−1 = − 2 log g(y) r g(y)(1 − G(y)) (1 + G(y)) dy 2 0 ∂θ r Z ∞  2    ∂ n n−r r−1 + − 2 log g(y) r g(y)(1 + G(y)) (1 − G(y)) dy 0 ∂θ r Z ∞  2    ∂ n n−r r−1 +(n − r) − 2 log(1 − G(y)) r g(y)(1 − G(y)) (1 + G(y)) dy 0 ∂θ r Z ∞  2     ∂ n n−r r−1 + − 2 log(1 + G(y)) r g(y)(1 + G(y)) (1 − G(y)) dy 0 ∂θ r

19 Z ∞  2    ∂ n n−r r−1 +(r − 1) − 2 log(1 + G(y)) r g(y)(1 − G(y)) (1 + G(y)) dy 0 ∂θ r Z ∞  2     ∂ n n−r r−1 + − 2 log(1 − G(y)) r g(y)(1 + G(y)) (1 − G(y)) dy . 0 ∂θ r (2.2.1)

Note that the second order derivatives that appear on the right side of (2.2.1) can be written

∂ ∂2 ∂ ∂2 in terms of g(y),G(y), ∂θ g(y), ∂θ2 g(y), ∂θ G(y), and ∂θ2 G(y) as

2 ∂ 2 ∂2 ∂ g(y) 2 g(y) − log g(y) = ∂θ − ∂θ , ∂θ2 g2(y) g(y) 2 ∂ 2 ∂2 ∂ G(y) 2 G(y) − log(1 + G(y)) = ∂θ − ∂θ , ∂θ2 (1 + G(y))2 1 + G(y) 2 ∂ 2 ∂2 ∂ G(y) 2 G(y) − log(1 − G(y)) = ∂θ + ∂θ . (2.2.2) ∂θ2 (1 − G(y))2 1 − G(y)

Also consider the binomial theorem

m X m (1 + G(y))m = G(y)i, for m ≥ 0. (2.2.3) i i=0 By substituting (2.2.2) and (2.2.3) for the corresponding terms in (2.2.1), we obtain

f Ir:n(θ)

  ∂ 2 ∂2  n g(y) g(y) 1  Pr−1 R ∞ ( ∂θ ) ∂θ2 nr−1 i n−r = 2 i=0 0 g2(y) − g(y) r r i g(y)G(y) (1 − G(y)) dy

2 2  ∂ g(y) ∂ g(y)  Pn−r R ∞ ( ∂θ ) ∂θ2 nn−r i r−1 + i=0 0 g2(y) − g(y) r r i g(y)G(y) (1 − G(y)) dy

2 2   ∂ G(y) ∂ G(y)  Pr−1 R ∞ ( ∂θ ) ∂θ2 nr−1 i n−r + (n − r) i=0 0 (1−G(y))2 + 1−G(y) r r i g(y)G(y) (1 − G(y)) dy

2 2  ∂ G(y) ∂ G(y)   R ∞ ( ∂θ ) ∂θ2 n r−1 n−r + 0 (1+G(y))2 − 1+G(y) r r g(y)(1 − G(y)) (1 + G(y)) dy

2 2   ∂ G(y) ∂ G(y)  R ∞ ( ∂θ ) ∂θ2 n r−1 n−r + (r − 1) 0 (1+G(y))2 − 1+G(y) r r g(y)(1 + G(y)) (1 − G(y)) dy

2 2  ∂ G(y) ∂ G(y)   Pn−r R ∞ ( ∂θ ) ∂θ2 nn−r i r−1 + i=0 0 (1−G(y))2 + 1−G(y) r r i g(y)G(y) (1 − G(y)) dy . (2.2.4)

20 Note that when f(x; θ) is symmetric about zero

f f Ir:m(θ) = Im−r+1:m(θ), for 1 ≤ r ≤ m ≤ n. (2.2.5)

By using (2.2.4) and (2.2.5), the FI in a single order statistic from the unfolded distribution,

f f Ir:m(θ) = Ir:m is obtained for any r and m where 1 ≤ r ≤ m ≤ n. The FI has six different forms that depend on the values of r and m and can be described in terms of elements of a

Pascal type triangle as given in Figure 2.1.

(2.2.7) f I1:1 (2.2.8) f f I1:2 I2:2 (2.2.10) f f(2.2.9) f I1:3 I2:3 I3:3

f f f f I1:4 I2:4 (2.2.11)I3:4 I4:4

f f f I2:5 I3:5 I4:5

f f I1:n−2 In−2:n−2

f f f f I1:n−1 I2:n−1 (2.2.12) In−2:n−1 In−1:n−1

f f f f f f I1:n I2:n I3:n In−2:n In−1:n In:n

f Figure 2.1: Triangle of Ir:m for 1 ≤ r ≤ m ≤ n from unfolded distribution f(x; θ)

f Thus, (2.2.4) leads us to six different forms for Ir:m where each form can be expressed as a linear function of the expectations of special functions of order statistics from the folded distribution.

21 The special functions are the following:

∂ 2 ∂2  2 g(y; θ) 2 g(y; θ) ∂ h (y; θ) = ∂θ , h (y; θ) = ∂θ ,H (y; θ) = G(y; θ) , 1 g2(y; θ) 2 g(y; θ) 1 ∂θ ∂2 H (y; θ) H (y; θ) H (y; θ) = G(y; θ),H (y; θ) = 1 ,H (y; θ) = 1 . (2.2.6) 2 ∂θ2 3 1 − G(y; θ) 4 1 + G(y; θ)

So the six distinct equations in Figure 2.1 using (2.2.6) are:

Form 1: For r = 1 and m = 1,

f g I1:1 = I1:1 = E1:1 [h1(Y ; θ) − h2(Y ; θ)] (2.2.7)

where Er:m is the expectation with respect to the distribution of Yr:m.

Form 2: For r = 1 or 2, and m = 2,

f f I1:2 = I2:2 1  = E [h (Y ; θ) − h (Y ; θ)] + E [H (Y ; θ)] + E [H (Y ; θ)] . (2.2.8) 1:1 1 2 2 1:1 3 1:1 4

Form 3: For r = 2 and m = 3,

3 1 If = E [h (Y ; θ) − h (Y ; θ)] + E [h (Y ; θ) − h (Y ; θ)] 2:3 4 1:2 1 2 4 2:3 1 2 3 3 3 3 + E [H (Y ; θ)] + E [H (Y ; θ)] + E [H (Y ; θ)] + E [H (Y ; θ)]. 2 1:1 3 4 2:2 3 2 2:2 2 4 1:2 4 (2.2.9)

Form 4: For r = 1 or m, and m ≥ 3,

f f I1:m = Im:m m ( m−1 1 X  m  = E [h (Y ; θ) − h (Y ; θ)] + E [h (Y ; θ) − h (Y ; θ)] 2 1:m 1 2 i + 1 i+1:i+1 1 2 i=0 m−3 m(m − 1) m(m − 1) X m − 2 + E [H (Y ; θ)] + E [H (Y ; θ)] m − 2 1:m−2 1 m − 2 i + 1 i+1:i+1 1 i=0 m−2 ) X m − 1 +mE [H (Y ; θ)] − m E [H (Y ; θ)] . (2.2.10) 1:m−1 2 i + 1 i+1:i+1 2 i=0

22 Form 5: For r = 2 or m − 1, and m ≥ 4,

f f I2:m = Im−1:m 1m  = mE [h (Y ; θ) − h (Y ; θ)] + E [h (Y ; θ) − h (Y ; θ)] 2 1:m−1 1 2 2:m 1 2 m−2 X  m  m(m − 1)(m − 2) + E [h (Y ; θ) − h (Y ; θ)] + E [H (Y ; θ)] i + 2 i+1:i+2 1 2 m − 3 1:m−3 1 i=0 m(m − 1) + E [H (Y ; θ)] + m(m − 1)E [H (Y ; θ)] + mE [H (y; θ)] m − 3 1:m−3 1 1:m−2 2 2:m−1 2 m−4 m−3 m(m − 1) X m − 2 X m − 1 + E [H (Y ; θ)] + m E [H (Y ; θ)] m − 3 i + 2 i+1:i+2 1 i + 2 i+1:i+2 2 i=0 i=0 m−2 X m − 1 + mE [H (Y ; θ)] − mE [H (Y ; θ)] + m E [H (Y ; θ)] 1:m−1 4 1:m−1 2 i + 1 i+1:i+1 3 i=0 m−2 X m − 1  + m E [H (Y ; θ)] . (2.2.11) i + 1 i+1:i+1 2 i=0

Form 6: For 3 ≤ r ≤ m − 2 and m ≥ 5,

f f Ir:m = Im−r+1:m

1 m Pr−1 m  Pm−r m  = 2 i=0 r−1−i Ei+1:m−r+i+1[h1(Y ; θ) − h2(Y ; θ)] + i=0 r+i Ei+1:r+i[h1(Y ; θ) − h2(Y ; θ)]

m(m−1) Pr−1 m−2  Pr−1 m−1  + m−r−1 i=0 r−1−i Ei+1:m−r+i−1[H1(Y ; θ)] + m i=0 r−1−i Ei+1:m−r+i[H2(Y ; θ)]

m(m−1) Pm−r−2 m−2 Pm−r−1 m−1 + m−r−1 i=0 r+i Ei+1:i+r[H1(Y ; θ)] − m i=0 r+i Ei+1:i+r[H2(Y ; θ)]

m(m−1) Pr−3 m−2  Pr−2 m−1  + r−2 i=0 r−3−i Ei+1:m−r+i−1[H1(Y ; θ)] − m i=0 r−2−i Ei+1:m−r+i−1[H2(Y ; θ)]

m(m−1) Pm−r m−2  Pm−r m−1  + r−2 i=0 r−2+i Ei+1:r+i−2[H1(Y ; θ)] + m i=0 r−1+i Ei+1:r+i−1[H2(Y ; θ)] . (2.2.12)

23 Let us denote the expectations of the special functions in (2.2.6) as

ar:m = Er:m[h1(Y ; θ)], br:m = Er:m[h2(Y ; θ)], cr:m = Er:m[H1(Y ; θ)],

dr:m = Er:m[H2(Y ; θ)], er:m = Er:m[H3(Y ; θ)], kr:m = Er:m[H4(Y ; θ)]. (2.2.13)

The notations ar:m, br:m, cr:m, dr:m, er:m and kr:m in (2.2.13) make (2.2.7)-(2.2.12) appear to be much simpler.

Theorem 2.2.1. Let the a, b, c, d, e, k’s be defined as in (2.2.13) where h1, h2, H1-H4 are given in (2.2.6), and the expectations are taken with respect to Gr:m. Then for 1 ≤ r ≤ m ≤ n, (2.2.7)-(2.2.12) are expressed as

f I1:1 = a1:1 − b1:1;

1 If = a − b + (e + k ); 1:2 1:1 1:1 2 1:1 1:1 3 1 3 3 3 3 3 3 If = (a − b ) + (a − b ) + d − d + d + e + e + k ; 2:3 4 1:2 1:2 4 2:3 2:3 2 1:1 4 1:2 4 2:2 2 1:1 4 2:2 4 1:2

m " m 1 X m If = m(a − b ) + a − b + (a − b ) 2:m 2 1:n−1 1:m−1 2:m 2:m j j−1:j j−1:j j=2 m−2 m(m − 1) X m − 2  + (m − 2)c + c + c m − 3 1:m−3 2:m−2 j j−1:j j=2 m−1 X m − 1 + m(m − 1)d + md − m d 1:m−2 2:m−1 j j−1:j j=2 m−1 # X m − 1 −md + m (d + e ) + mk , for m ≥ 4; 1:m−1 j j:j j:j 1:m−1 j=1

m " m 1 X m m(m − 1) If = a − b + (a − b ) + c + md 1:m 2 1:m 1:m j j:j j:j m − 2 1:m−2 1:m−1 j=1 m−2 m−1 # X m(m − 1)m − 2 X m − 1 + c − m d , for m ≥ 3; m − 2 j j:j j j:j j=1 j=1

24 m " r r 1 X  m  X  m  If = a − b r:m 2 r − j j:m−r+j r − j j:m−r+j j=1 j=1 m−r+1 m−r+1 X  m  X  m  + a − b r + i j:r+j−1 r + i j:r+j−1 j=1 j=1 r r m(m − 1) X m − 2 X m − 1 + c + m d m − r − 1 r − j j:m−r+j−2 r − j j:m−r+j−2 j=1 j=1 m−r−1 m−r m(m − 1) X  m − 2  X  m − 1  + c − m d m − r − 1 r + j − 1 j:r+j−1 r + j − 1 j:r+j−1 j=1 j=1 r−2 r−1 m(m − 1) X  m − 2  X  m − 1  + c − m d r − 2 r − j − 2 j:m−r+j−2 r − j − 1 j:m−r+j−2 j=1 j=1 m−r+1 m−r=1 # m(m − 1) X  m − 2  X  m − 1  + c + m d , r − 2 r + j − 3 j:r+j−3 r + j − 2 j:r+j−2 j=1 j=1 for 3 ≤ r ≤ m − 2 and m ≥ 5.

f f f Theorem 2.2.1 shows that we need only a few terms for I1:1,I1:2 and I2:3. For instance we

f need only two terms of a1:1 and b1:1 to calculate I1:1. However many more terms are needed

f f f to calculate the FI in central order statistics: I2:m when m ≥ 4, I1:m when m ≥ 3, and Ir:m when 3 ≤ r ≤ m − 2 and m ≥ 5. Figures 2.2-2.7 display shaded blocks that contain the necessary terms for various r and m. Each triangle is composed of abr:m, cr:m, dr:m, and er:m for 1 ≤ r ≤ m ≤ n. The shaded blocks in Figure 2.2 represent the ab, c, d, and e

f terms that are needed to compute I2:m for every m where 4 ≤ m ≤ n. In addition, only

when r = 2, we also need k1:m−1. Figure 2.3 represents the necessary terms to calculate

f I1:m for every m ≥ 3.

25 ab1:1 c1:1 ab1:2 ab2:2 c1:2 c2:2 ab1:3 ab2:3 ab3:3 c1:3 c2:3 c3:3

c1:n−3 ab1:n−2 abn−2:n−2 c2:n−2 cn−3:n−2 ab1:n−1 abn−1:n−1 c1:n−1 cn−1:n−1 ab1:n ab2:n abn−1:nabn:n c1:n c2:n cn−1:n cn:n

(a) abr:m = ar:m − br:m (b) cr:m

d1:1 e1:1 d1:2 d2:2 e1:2 e2:2 d1:3 d2:3 d3:3 e1:3 e2:3 e3:3

e1:n−3 d1:n−2 dn−2:n−2 e2:n−2 en−3:n−2 d1:n−1d2:n−1 dn−2;n−1 dn−1:n−1 e1:n−1 en−1:n−1 d1:m d2:n dn−1:n dn:n e1:n e2:n en−1:n en:n

(c) dr:m (d) er:m

f Figure 2.2: Blocks of ab, c, d, and e’s (colored in green) needed to obtain I2:m for every m ≥ 4

26 ab1:1 c1:1 ab1:2 ab2:2 c1:2 c2:2 ab1:3 ab2:3 ab3:3 c1:3 c2:3 c3:3

ab 1:n−3 c1:n−3 ab ab 2:n−2 n−3:n−2 c1:n−2 cn−2:n−2 ab1:n−1 abn−1:n−1 c1:n−1 cn−1:n−1 ab1:n ab2:n abn−1:nabn:n c1:n c2:n cn−1:n cn:n

(a) abr:m = ar:m − br:m (b) cr:m

d1:1 d1:2 d2:2 d1:3 d2:3 d3:3

d1:n−3 d2:n−2 dn−3:n−2 d1:n−1 dn−1:n−1 d1:n d2:n dn−1:n dn:n

(c) dr:m

f Figure 2.3: Blocks of ab, c, and d’s (colored in green) needed to obtain I1:m for every m ≥ 3

f To compute Ir:m for 3 ≤ r ≤ m − 2 when m ≥ 5 (Form 6) we need to take four distinct

m−1 m cases into consideration; (i) 3 ≤ r ≤ 2 and m ≥ 7 (Figure 2.4), (ii) r = 2 and m ≥ 6

m+1 where m is even (Figure 2.5), (iii) r = 2 and m ≥ 5 where m is odd (Figure 2.6), and

m+1 (iv) 2 < r ≤ m − 2 and m ≥ 6 (Figure 2.7). For each of these cases, the algebraic expression is the same even though the locations of the terms shaded can be different for

different configurations of r and m.

27 m−1 Firstly when 3 ≤ r ≤ 2 and m ≥ 7,

ab1:1 c1:1 ab1:2 ab2:2 c1:2 c2:2 ab1:3 ab2:3 ab3:3 c1:3 c2:3 c3:3

c1:r−2 cr−2:r−2 ab ab 1:r r:r c1:r−1 cr−1:r−1 c1:r cr:r

c1:n−r−1 cn−r−1:n−r−1 ab1:n−r+1 abn−r+1:n−r+1

c1:n−4 cr−2:n−4 cn−4:n−4 c1:n−3 cn−3:n−3 ab1:n−2 abn−2:n−2 c1:n−2 cr:n−2 cn−r−1:n−2 cn−r+1:n−2 cn−2:n−2 ab1:n−1 abn−1:n−1 c1:n−1 cn−1:n−1 ab1:n abr:n abn−r+1:n abn:n c1:n cn:n

(a) abr:m = ar:m − br:m (b) cr:m

d1:1 d1:2 d2:2 d1:3 d2:3 d3:3

d1:r−1 dr−1:r−1 d1:r dr:r

d1:n−r−1 dn−r−1:n−r−1 d1:n−r dn−r:n−r

d1:n−4 dn−4:n−4 d1:n−3 dr−1:n−3 dn−3:n−3 d1:n−2 dn−2:n−2 d1:n−1 dr:n−1 dn−r:dnn−−1r+1:n−1 dn−1:n−1 d1:n dn:n

(c) dr:m

f m−1 Figure 2.4: Blocks of ab, c, and d’s (colored in green) needed to obtain Ir:m when 3 ≤ r ≤ 2 and m ≥ 7

28 m Secondly for r = 2 and m ≥ 6 where m is even,

ab1:1 c1:1 ab1:2 ab2:2 c1:2 c2:2 ab1:3 ab2:3 ab3:3 c1:3 c2:3 c3:3

c1:r−2 cr−2:r−2 ab1:r abr:r

c1:n−r−1 cn−r−1:n−r−1

c1:r cr:r ab1:n−r+1 abn−r+1:n−r+1

c1:n−4 cr−2:n−4 cn−4:n−4 c1:n−3 cn−3:n−3 ab1:n−2 abn−2:n−2 c1:n−2 cn−r−1:n−2 cr:n−2 cn−r+1:n−2 cn−2:n−2 ab1:n−1 abn−1:n−1 c1:n−1 cn−1:n−1 ab1:n abr:n abn−r+1:n abn:n c1:n cn:n

(a) abr:m = ar:m − br:m (b) cr:m

d1:1 d1:2 d2:2 d1:3 d2:3 d3:3

d1:r−1 dr−1:r−1

d1:r dr:r

d1:n−4 dn−4:n−4 d1:n−3 dr−1:n−3 dn−3:n−3 d1:n−2 dn−2:n−2 d1:n−1 dr:n−1 dr+1:n−1 dn−1:n−1 d1:n µn:n

(c) dr:m

f m Figure 2.5: Blocks of ab, c, and d’s (colored in green) needed to obtain Ir:m when r = 2 and m ≥ 6 where m is even

29 m+1 Thirdly for r = 2 and m ≥ 5 where m is odd,

ab1:1 c1:1 ab1:2 ab2:2 c1:2 c2:2 ab1:3 ab2:3 ab3:3 c1:3 c2:3 c3:3

c1:n−r−1 cn−r−1:n−r−1

c1:r−2 cr−2:r−2 ab ab 1:r r:r c1:r−1 cr−1:r−1 c1:r cr:r

c1:n−4 cr−2:n−4 cn−4:n−4 c1:n−3 cn−3:n−3 ab1:n−2 abn−2:n−2 c1:n−2 cn−r−1:n−2 cn−r+1:n−2 cr:n−2 cn−2:n−2 ab1:n−1 abn−1:n−1 c1:n−1 cn−1:n−1 ab1:n abn−r+1:n abn:n c1:n cn:n

(a) abr:m = ar:m − br:m (b) cr:m

d1:1 d1:2 d2:2 d1:3 d2:3 d3:3

d1:n−r−1 dn−r−1:n−r−1 d1:n−r dn−r:n−r

d1:r−1 dr−1:r−1 d1:r dr:r

d1:n−4 dn−4:n−4 d1:n−3 dr−1:n−3 dn−3:n−3 d1:n−2 dn−2:n−2 d1:n−1 dn−r:dnn−−1r+1:n−1 dr:n−1 dn−1:n−1 d1:n dn:n

(c) dr:m

f m+1 Figure 2.6: Blocks of ab, c, and d’s (colored in green) needed to obtain Ir:m when r = 2 and m ≥ 5 where m is odd

30 m+1 Lastly for 2 < r ≤ m − 2 and m ≥ 6,

ab1:1 c1:1 ab1:2 ab2:2 c1:2 c2:2 ab1:3 ab2:3 ab3:3 c1:3 c2:3 c3:3

c1:n−r−1 cn−r−1:n−r−1 ab1:n−r+1 abn−r+1:n−r+1

c1:r−2 cr−2:r−2 c1:r−1 cr−1:r−1 c1:r cr:r ab1:r abr:r

c1:n−4 cr−2:n−4 cn−4:n−4 c1:n−3 cn−3:n−3 ab1:n−2 abn−2:n−2 c1:n−2 cn−r−1:n−2 cn−r+1:n−2 cr:n−2 cn−2:n−2 ab1:n−1 abn−1:n−1 c1:n−1 cn−1:n−1 ab1:n abn−r+1:n abr:n abn:n c1:n cn:n

(a) abr:m = ar:m − br:m (b) cr:m

d1:1 d1:2 d2:2 d1:3 d2:3 d3:3

d1:n−r−1 dn−r−1:n−r−1 d1:n−r dn−r:n−r

d1:r−1 dr−1:r−1 d1:r dr:r

d1:n−4 dn−4:n−4 d1:n−3 dr−1:n−3 dn−3:n−3 d1:n−2 dn−2:n−2 d1:n−1 dn−r:dnn−−1r+1:n−1 dr:n−1 dn−1:n−1 d1:n dn:n

(c) dr:m

f m+1 Figure 2.7: Blocks of ab, c, and d’s (colored in green) needed to obtain Ir:m when 2 < r ≤ m−2 and m ≥ 6

By referring to Theorem 5.3.1 from Arnold et al. (1992), we have known that for 1 ≤ r ≤ n − 1,

rµr+1:n + (n − r)µr:n = nµr:n−1

where µr:n is an expectation of rth order statistic in a sample of size n. More generally for any function h for which E(h(X)) exists, the following recurrence relation also holds:

rE(h(Xr+1:n)) + (n − r)E(h(Xr:n)) = nE(h(Xr:n−1)). (2.2.14)

31 Hence the extensive work to calculate all terms in the shaded regions in Figure 2.2-2.7

is simplified considerably by the use of the general recurrence relation in (2.2.14). For

each m acquisition of only one block is enough to obtain all the other needed blocks in

each of the triangles of Figure 2.2-2.7. So the number of independent calculations for

abr:m, cr:m, dr:m, and er:m is much reduced. This is made precise in the following result.

Theorem 2.2.2. The independent evaluation of the following terms is sufficient to deter-

f mine Ir:m for all 1 ≤ r ≤ m ≤ n:

ab : ab1:1, ab2:2, ··· , abn:n

c : c1:1, c2:2, ··· , cn−2:n−2

d : d1:1, d2:2, ··· , dn−1:n−1

e : e1:1, e2:2, ··· , en−1:n−1

k : k1:1, k1:2, ··· , k1:n−1. (2.2.15)

Thus, one needs independently to compute 5(n − 1) terms from the folded distribution.

However we may prefer to compute, for example, d1:1, d1:2, ··· , d1:n−1 instead of d1:1, d2:2,

−y/θ ··· , dn−1:n−1 if the folded distribution is exponential since (1 − G(y)) = e is easier to handle than G(y) = 1 − e−y/θ, since

Z ∞  2  ∂ m−1 d1:m = 2 G(y) mg(y)(1 − G(y)) dy, 0 ∂θ Z ∞  2  ∂ m−1 dm:m = 2 G(y) mg(y)G(y) dy. 0 ∂θ

In addition there is the following interesting and useful connection between c1:m and e1:m; Z ∞ ∂ 2 !  ∂θ G(y) m−1 e1:1, m = 1 e1:m = mg(y)(1 − G(y)) dy = m 0 1 − G(y) m−1 c1:m−1, m ≥ 2. So Theorem 2.2.2 can further be improved after applying the tips due to these additional

observations. Thus, we have the following Corollary:

32 Corollary 2.2.1. The minimum number of independent computations needed to determine

f Ir:m for all r, m, 1 ≤ r ≤ m ≤ n is 4(n − 1) + 1; we need one value of ab, c, d, and k from each row in the Pascal triangle in Figures 2.2-2.7 and e1:1. In the special case of the

Laplace distribution, we have the following.

1 −y/θ Lemma 2.2.1. Suppose g(y; θ) is an exponential pdf θ e ; y > 0. The most efficient

f approach for computing Ir:m for 1 ≤ r ≤ m ≤ n needs the evaluation of the following terms.

ab : ab1:1, ab1:2, ··· , ab1:n

c : c1:1, c1:2, ··· , c1:n−2

d : d1:1, d1:2, ··· , d1:n−1

e : e1:1

k : k1:1, k1:2, ··· , k1:n−1. (2.2.16)

Once we can calculate the terms in (2.2.16) from a folded distribution, all shaded terms in

f Figure 2.2-2.7 are readily derived by the repeated application of (2.2.14). Thus every Ir:m for 1 ≤ r ≤ m ≤ n in Figure 2.1 can be obtained from Theorem 2.2.1 and its Corollary.

All the numerical results for the exponential case will be shown in Section 2.4.

2.3 Connection between Fisher Information in Type-II Censored sam- ples from Folded and Unfolded populations

In this section, we focus on a connection between the FI in Type-II censored samples from an unfolded and the folded populations as supposed in Section 1. Results similar to those in Section 2.2 are derived here. The following expressions of FI in Type-II right

33 censored samples from an unfolded distribution is similar to those in (2.2.1). For 1 ≤ r ≤

m ≤ n,

f I1···r:m(θ)

m " r 1 X Z ∞  ∂2  m = − log g(y) i g(y)(1 − G(y))m−i(1 + G(y))i−1dy 2 ∂θ2 i i=1 0 r X Z ∞  ∂2  m + − log g(y) i g(y)(1 + G(y))m−i(1 − G(y))i−1dy ∂θ2 i i=1 0 nR ∞  ∂2  m m−r r−1 +(m − r) 0 − ∂θ2 log(1 − G(y)) r r g(y)(1 − G(y)) (1 + G(y)) dy

R ∞  ∂2  m m−r r−1 oi + 0 − ∂θ2 log(1 + G(y)) r r g(y)(1 + G(y)) (1 − G(y)) dy

(2.3.1)

Next, (2.3.1) is rewritten by using the binomial theorem as

f I1···r:m(θ)

  ∂ 2 ∂2  m g(y) g(y) 1  Pr Pi−1 R ∞ ( ∂θ ) ∂θ2 mi−1 j m−i = 2 i=1 j=0 0 g2(y) − g(y) i i j g(y)G(y) (1 − G(y)) dy

2 2  ∂ g(y) ∂ g(y)  Pr Pm−i R ∞ ( ∂θ ) ∂θ2 mm−i j i−1 + i=1 j=0 0 g2(y) − g(y) i i j g(y)G(y) (1 − G(y)) dy

2 2   ∂ G(y) ∂ G(y)  R ∞ ( ∂θ ) ∂θ2 m m−r r−1 +(m − r) 0 (1−G(y))2 + 1−G(y) r r g(y)(1 − G(y)) (1 + G(y)) dy

2 2  ∂ G(y) ∂ G(y)   R ∞ ( ∂θ ) ∂θ2 m m−r r−1 + 0 (1+G(y))2 − 1+G(y) r r g(y)(1 + G(y)) (1 − G(y)) dy

(2.3.2)

The right side of (2.3.2) has three distinct types of expressions that depend on r and m for

1 ≤ r ≤ m ≤ n. The triangle below in Figure 2.8 explicitly shows how the expressions are divided.

34 (2.3.5) If (2.3.4) 1:1 f f I1:2 I12:2 (2.3.3) f f f I1:3 I12:3 I1···3:3

f f f f I1:4 I12:4 I1···3:4 I1···4:4

f f f I12:5 I1···3:5 I1···4:5

f f I1:n−2 I1···n−2:n−2

f f f f I1:n−1 I12:n−1 I1···n−2:n−1 I1···n−1:n−1

f f f f f f I1:n I12:n I1···3:n I1···n−2:n I1···n−1:n I1···n:n

f Figure 2.8: Expressions for I1···r:m for 1 ≤ r ≤ m ≤ n

f Each region has its unique form for I1···r:m:

(i) Red region: for 1 ≤ r ≤ m − 2 at m ≥ 3,

f I1···r:m(θ)

 m " r i−1   ∂ 2 ∂2 ! 1 X X m g(Y ) 2 g(Y ) = E ∂θ − ∂θ 2 i − j − 1 j+1:m−i+j+1 g2(Y ) g(Y ) i=1 j=0 r m−i   ∂ 2 ∂2 ! X X m g(Y ) 2 g(Y ) + E ∂θ − ∂θ i + j j+1:i+j g2(Y ) g(Y ) i=1 j=0 ( r−1 2 X (m − r + j)(m − r + j + 1) m   ∂  +(m − r) E G(Y ) (m − r − 1)(m − r) r − j − 1 j+1:m−r+j−1 ∂θ j=0 r−1 X (m − r + j + 1) m   ∂2  + E G(Y ) (m − r) r − j − 1 j+1:m−r+j ∂θ2 j=0

35 m−r−2 2 X (m − r − j)(m − r − j − 1) m   ∂  + E G(Y ) (m − r − 1)(m − r) r + j j+1:r+j ∂θ j=0 m−r−1 )# X (m − r − j) m   ∂2  − E G(Y ) (2.3.3) (m − r) r + j j+1:r+j ∂θ2 j=0

(ii) Green region: for $r = m-1$ and $m \ge 2$,

$I^f_{1\cdots r:m}(\theta)$
$= \frac{1}{2^m}\Bigg[\sum_{i=1}^{m-1}\sum_{j=0}^{i-1}\binom{m}{i-j-1}\,E_{j+1:m-i+j+1}\Big(\frac{(\frac{\partial}{\partial\theta}g(Y))^2}{g^2(Y)}-\frac{\frac{\partial^2}{\partial\theta^2}g(Y)}{g(Y)}\Big)$
$\quad + \sum_{i=1}^{m-1}\sum_{j=0}^{m-i}\binom{m}{i+j}\,E_{j+1:i+j}\Big(\frac{(\frac{\partial}{\partial\theta}g(Y))^2}{g^2(Y)}-\frac{\frac{\partial^2}{\partial\theta^2}g(Y)}{g(Y)}\Big)$
$\quad + m\Big\{\sum_{i=0}^{m-2}\binom{m-1}{i+1}E_{i+1:i+1}\Big(\frac{(\frac{\partial}{\partial\theta}G(Y))^2}{1-G(Y)}\Big) + \sum_{i=0}^{m-2}\binom{m-1}{i+1}E_{i+1:i+1}\Big(\tfrac{\partial^2}{\partial\theta^2}G(Y)\Big)$
$\qquad + E_{1:m-1}\Big(\frac{(\frac{\partial}{\partial\theta}G(Y))^2}{1+G(Y)}\Big) - E_{1:m-1}\Big(\tfrac{\partial^2}{\partial\theta^2}G(Y)\Big)\Big\}\Bigg]$   (2.3.4)

(iii) Blue region: for $r = m$ and $m \ge 1$,

$I^f_{1\cdots r:m}(\theta) = m\,I^f_{1:1}(\theta) = m\,I^g_{1:1}(\theta) = m\,E\Big(\frac{(\frac{\partial}{\partial\theta}g(Y))^2}{g^2(Y)}-\frac{\frac{\partial^2}{\partial\theta^2}g(Y)}{g(Y)}\Big)$   (2.3.5)

As we did in Section 2.2, we use the notations introduced in (2.2.13) to rewrite (2.3.3)-

(2.3.5).

Theorem 2.3.1. Let $a$, $b$, $c$, $d$, $e$, and $k$ be as defined in (2.2.13), where $h_1$, $h_2$, and $H_1$-$H_4$ are given by (2.2.6). Then for $1 \le r \le m \le n$,

$I^f_{1\cdots r:m}(\theta)$
$= \frac{1}{2^m}\Bigg[\sum_{i=1}^{r}\sum_{j=1}^{i}\binom{m}{i-j}\,ab_{j:m-i+j} + \sum_{i=1}^{r}\sum_{j=1}^{m-i+1}\binom{m}{i+j-1}\,ab_{j:i+j-1}$
$\quad + \sum_{j=1}^{r}\frac{(m-r+j-1)(m-r+j)}{m-r-1}\binom{m}{r-j}\,c_{j:m-r+j-2} + \sum_{j=1}^{m-r-1}\frac{(m-r-j+1)(m-r-j)}{m-r-1}\binom{m}{r+j-1}\,c_{j:r+j-1}$
$\quad + \sum_{j=1}^{r}(m-r+j)\binom{m}{r-j}\,d_{j:m-r+j-1} - \sum_{j=1}^{m-r}(m-r-j+1)\binom{m}{r+j-1}\,d_{j:r+j-1}\Bigg],$
for $1 \le r \le m-2$ and $m \ge 3$;

$I^f_{1\cdots r:m}(\theta)$
$= \frac{1}{2^m}\Bigg[\sum_{i=1}^{m-1}\sum_{j=1}^{i}\binom{m}{i-j}\,ab_{j:m-i+j} + \sum_{i=1}^{m-1}\sum_{j=1}^{m-i+1}\binom{m}{i+j-1}\,ab_{j:i+j-1}$
$\quad + m\Big\{-d_{1:m-1} + \sum_{i=1}^{m-1}\binom{m-1}{i}d_{i:i} + \sum_{i=1}^{m-1}\binom{m-1}{i}e_{i:i} + k_{1:m-1}\Big\}\Bigg],$
for $r = m-1$ and $m \ge 2$;

$I^f_{1\cdots r:m}(\theta) = m\,(ab_{1:1}),$ for $r = m$ and $m \ge 1$.

The red region in Figure 2.8, corresponding to expression (2.3.3), is divided into five mutually exclusive and exhaustive regions: (i) $1 \le r < (m-1)/2$ and $m \ge 4$ (Figure 2.9), (ii) $r = (m-1)/2$ and $m \ge 3$ where $m$ is odd (Figure 2.10), (iii) $r = m/2$ and $m \ge 4$ where $m$ is even (Figure 2.11), (iv) $r = (m+1)/2$ and $m \ge 5$ where $m$ is odd (Figure 2.12), and (v) $(m+1)/2 < r \le m-2$ and $m \ge 6$ (Figure 2.13).

(i) For $1 \le r < (m-1)/2$ and $m \ge 4$:

Figure 2.9: Blocks of the $ab$, $c$, and $d$'s needed to obtain $I^f_{1\cdots r:m}$ when $1 \le r < (m-1)/2$ and $m \ge 4$. Panels: (a) $ab_{r:m} = a_{r:m} - b_{r:m}$; (b) $c_{r:m}$; (c) $d_{r:m}$.

In Figure 2.9a the red and green regions indicate the blocks of $ab_{r:m}$ appearing in the first and second double sums of Theorem 2.3.1, respectively. One can see that there is an overlapping area.

(ii) For $r = (m-1)/2$ and $m \ge 3$ where $m$ is odd:

Figure 2.10: Blocks of the $ab$, $c$, and $d$'s needed to obtain $I^f_{1\cdots r:m}$ when $r = (m-1)/2$ and odd $m \ge 3$. Panels: (a) $ab_{r:m} = a_{r:m} - b_{r:m}$; (b) $c_{r:m}$; (c) $d_{r:m}$.

(iii) For $r = m/2$ and $m \ge 4$ where $m$ is even:

Figure 2.11: Blocks of the $ab$, $c$, and $d$'s needed to obtain $I^f_{1\cdots r:m}$ when $r = m/2$ and even $m \ge 4$. Panels: (a) $ab_{r:m} = a_{r:m} - b_{r:m}$; (b) $c_{r:m}$; (c) $d_{r:m}$.

(iv) For $r = (m+1)/2$ and $m \ge 5$ where $m$ is odd:

Figure 2.12: Blocks of the $ab$, $c$, and $d$'s needed to obtain $I^f_{1\cdots r:m}$ when $r = (m+1)/2$ and odd $m \ge 5$. Panels: (a) $ab_{r:m} = a_{r:m} - b_{r:m}$; (b) $c_{r:m}$; (c) $d_{r:m}$.

(v) For $(m+1)/2 < r \le m-2$ and $m \ge 6$:

Figure 2.13: Blocks of the $ab$, $c$, and $d$'s needed to obtain $I^f_{1\cdots r:m}$ when $(m+1)/2 < r \le m-2$ and $m \ge 6$. Panels: (a) $ab_{r:m} = a_{r:m} - b_{r:m}$; (b) $c_{r:m}$; (c) $d_{r:m}$.

For (2.3.4), the shaded blocks shown in Figure 2.14 are required.

Figure 2.14: Blocks of the $ab$, $d$, and $e$'s needed to obtain $I^f_{1\cdots r:m}$ when $r = m-1$ and $m \ge 2$, together with $k_{1:m}$. Panels: (a) $ab_{r:m} = a_{r:m} - b_{r:m}$; (b) $d_{r:m}$; (c) $e_{r:m}$.

Thus the minimal number of independent computations needed to compute $I^f_{1\cdots r:m}(\theta; X)$ for any $m$ is exactly the same as that needed to compute $I^f_{r:m}(\theta; X)$, described in Lemma 2.2.1.

Since $f(x; \theta)$ is symmetric about zero, the FI in a single order statistic satisfies the relation

$I^f_{r:m}(\theta; X) = I^f_{m-r+1:m}(\theta; X).$   (2.3.6)

Further, since $(X_{1:n}, \ldots, X_{r:n}) \stackrel{d}{=} (-X_{n:n}, \ldots, -X_{n-r+1:n})$, the FI in Type-II right censored samples is identical to the FI in Type-II left censored samples as long as the number of censored observations is the same under both censoring schemes; that is,

$I^f_{1\cdots r:m}(\theta; X) = I^f_{m-r+1\cdots m:m}(\theta; X).$

The FI in a Type-II doubly censored sample, $I^f_{s\cdots r:m}(\theta; X)$ for $1 \le s \le r \le m \le n$, is given by

$I^f_{s\cdots r:m}(\theta; X) = I^f_{1\cdots r:m}(\theta; X) + I^f_{s\cdots m:m}(\theta; X) - m\,I^f(\theta; X)$   (2.3.7)

$\qquad\qquad\quad\; = I^f_{1\cdots r:m}(\theta; X) + I^f_{1\cdots m-s+1:m}(\theta; X) - m\,I^f(\theta; X).$   (2.3.8)

In conclusion, once we obtain $I^f_{1\cdots r:m}(\theta; X)$ for any $r$ and $m$ satisfying $1 \le r \le m \le n$ from Theorem 2.3.1, using only the terms listed in Lemma 2.2.1, we can easily compute $I^f_{s\cdots m:m}(\theta; X)$ and $I^f_{s\cdots r:m}(\theta; X)$.
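As a small numerical illustration of (2.3.7)-(2.3.8), the following R sketch (our illustration, not part of the original computations) combines the right censored values $I^f_{1\cdots r:10}$ for the Laplace(0, 2) parent, reported later in Table 2.8, with $I^f(\theta) = 1/\theta^2 = 0.25$:

# FI in Type-II doubly censored samples via (2.3.7)-(2.3.8):
# I_{s..r:m} = I_{1..r:m} + I_{1..(m-s+1):m} - m * I.single
# The numerical inputs below are the m = 10 column of Table 2.8 (Laplace(0, 2)).
I.right  <- c(0.8217, 0.9940, 1.0746, 1.1751, 1.3262,
              1.5257, 1.7562, 2.0010, 2.2501, 2.5)   # I_{1..r:10}, r = 1,...,10
I.single <- 0.25                                     # I^f(theta) = 1/theta^2 for theta = 2

doubly.censored.FI <- function(s, r, m = 10) {
  I.right[r] + I.right[m - s + 1] - m * I.single
}

doubly.censored.FI(s = 2, r = 9)   # smallest and largest observations censored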

2.4 An Illustrative Example

Suppose a random sample $X_1, X_2, \ldots, X_n$ is taken from a Laplace distribution with location parameter zero and scale parameter $\theta$,

$X_i \stackrel{i.i.d.}{\sim} \mathrm{Laplace}(0, \theta), \quad i = 1, 2, \ldots, n.$

The absolute values $|X_1|, |X_2|, \ldots, |X_n|$ then form a random sample from an exponential distribution with the same scale parameter $\theta$,

$|X_i| \stackrel{i.i.d.}{\sim} \mathrm{Exp}(\theta), \quad i = 1, 2, \ldots, n.$

44 The exponential distribution is the folded version of the Laplace distribution when it is

folded at zero. In this case we call the Laplace and exponential populations the unfolded

and folded distributions, respectively. Let us assume that n = 10 and θ = 2 to compare the

FI obtained from (2.1.1) and from Theorem 2.2.1. The FI in a single order statistic from a Laplace(0, $\theta$ = 2) distribution, computed using (2.1.1), is shown in Table 2.1. We carried out the numerical integration in R to arrive at this table. Although relation (2.3.6) was not used to obtain $I^f_{m-r+1:m}(\theta; X)$, Table 2.1 itself confirms this symmetry.

f Ir:m m = 1 2 3 4 5 6 7 8 9 10 r = 1 0.2500 0.2872 0.3438 0.4097 0.4797 0.5506 0.6210 0.6898 0.7568 0.8217 2 0.2872 0.2858 0.3136 0.3632 0.4272 0.5004 0.5791 0.6608 0.7440 3 0.3438 0.3136 0.3086 0.3304 0.3729 0.4306 0.4992 0.5756 4 0.4097 0.3631 0.3304 0.3246 0.3425 0.3795 0.4313 5 0.4797 0.4272 0.3729 0.3425 0.3367 0.3520 6 0.5506 0.5004 0.4306 0.3795 0.3520 7 0.6209 0.5791 0.4992 0.4313 8 0.6898 0.6608 0.57560 9 0.7568 0.7440 10 0.8217

Table 2.1: $I^f_{r:m}$ from Laplace(0, 2) for $1 \le r \le m \le 10$, computed using (2.1.1)
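The direct computation behind Table 2.1 can be sketched in R as follows, assuming (2.1.1) is the usual expression $E[-\partial^2 \log f_{r:m}(X;\theta)/\partial\theta^2]$ for the FI in the single order statistic $X_{r:m}$; the $\theta$-derivative is approximated by a central difference, and the function names are ours:

# FI in the r-th order statistic of a sample of size m from Laplace(0, theta),
# computed as E[-d^2/dtheta^2 log f_{r:m}(X; theta)] by numerical integration.
laplace.pdf <- function(x, theta) exp(-abs(x) / theta) / (2 * theta)
laplace.cdf <- function(x, theta) ifelse(x < 0, 0.5 * exp(x / theta),
                                         1 - 0.5 * exp(-x / theta))

log.os.pdf <- function(x, theta, r, m) {            # log density of X_{r:m}
  lchoose(m, r) + log(r) + log(laplace.pdf(x, theta)) +
    (r - 1) * log(laplace.cdf(x, theta)) +
    (m - r) * log(1 - laplace.cdf(x, theta))
}

fi.order.stat <- function(r, m, theta = 2, h = 1e-3) {
  integrand <- function(x) {
    d2 <- (log.os.pdf(x, theta + h, r, m) - 2 * log.os.pdf(x, theta, r, m) +
           log.os.pdf(x, theta - h, r, m)) / h^2    # central difference in theta
    -d2 * exp(log.os.pdf(x, theta, r, m))
  }
  # the Laplace(0, theta) density is negligible beyond +/- 40*theta
  integrate(integrand, -40 * theta, 40 * theta)$value
}

fi.order.stat(1, 1)    # should be close to 1/theta^2 = 0.25
fi.order.stat(3, 10)   # compare with the corresponding entry of Table 2.1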

Now let us calculate the FI $I^f_{r:m}$ in terms of $ab_{r:m}$, $c_{r:m}$, $d_{r:m}$, $e_{r:m}$, and $k_{r:m}$ from Exp(2) and apply Theorem 2.2.1. As noted in Lemma 2.2.1, we need to compute these only for $r = 1$ using (2.2.13). The expressions for $ab_{1:m}$, $c_{1:m}$, $d_{1:m}$, and $k_{1:m}$ for the Exp($\theta$) distribution with cdf $G(y;\theta) = 1 - e^{-y/\theta}$ are given by

• $ab_{1:m} = a_{1:m} - b_{1:m} = \int_0^{\infty}\Big(\frac{(\frac{\partial}{\partial\theta}g(y;\theta))^2}{g(y;\theta)^2} - \frac{\frac{\partial^2}{\partial\theta^2}g(y;\theta)}{g(y;\theta)}\Big)\, m\,g(y;\theta)(1-G(y;\theta))^{m-1}\,dy,$

• $c_{1:m} = \int_0^{\infty}\big(\tfrac{\partial}{\partial\theta}G(y;\theta)\big)^2\, m\,g(y;\theta)(1-G(y;\theta))^{m-1}\,dy,$

• $d_{1:m} = \int_0^{\infty}\big(\tfrac{\partial^2}{\partial\theta^2}G(y;\theta)\big)\, m\,g(y;\theta)(1-G(y;\theta))^{m-1}\,dy,$

• $k_{1:m} = \int_0^{\infty}\frac{(\frac{\partial}{\partial\theta}G(y;\theta))^2}{1+G(y;\theta)}\, m\,g(y;\theta)(1-G(y;\theta))^{m-1}\,dy.$

The reason we recommend $ab_{1:m}$, $c_{1:m}$, $d_{1:m}$, and $k_{1:m}$ among the $ab_{r:m}$, $c_{r:m}$, $d_{r:m}$, and $k_{r:m}$ is that the pdf of the first order statistic, $m\,g(y;\theta)(1-G(y;\theta))^{m-1} = \frac{m}{\theta}e^{-my/\theta}$, is the simplest to handle among the $g_{r:m}(y;\theta)$. These values are tabulated in Table 2.2 below.

i    ab        c        d        e        k
1    0.25      0.0185   0.0625   0.0625   0.0119
2    0         0.0156   0.0741            0.0107
3   -0.0833    0.0120   0.0703            0.0086
4   -0.1250    0.0093   0.0640            0.0068
5   -0.1500    0.0073   0.0579            0.0055
6   -0.1667    0.0059   0.0525            0.0046
7   -0.1786    0.0048   0.0479            0.0038
8   -0.1875    0.0040   0.0439            0.0032
9   -0.1944             0.0405            0.0028
10  -0.2000

Table 2.2: The values of $ab_{1:i}$, $c_{1:i}$, $d_{1:i}$, $e_{1:1}$, and $k_{1:i}$ for the Exp(2) distribution in Lemma 2.2.1, $1 \le i \le 10$
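A minimal R sketch (ours) of the numerical integrations behind Table 2.2, using the integral forms listed above with the $\theta$-derivatives of the Exp($\theta$) pdf and cdf written out analytically:

# Numerical evaluation of ab_{1:m}, c_{1:m}, d_{1:m}, and k_{1:m} in Lemma 2.2.1
# for the folded (exponential) distribution with scale theta; compare with Table 2.2.
theta <- 2
g   <- function(y) exp(-y / theta) / theta             # exponential pdf
G   <- function(y) 1 - exp(-y / theta)                 # exponential cdf
dG  <- function(y) -(y / theta^2) * exp(-y / theta)                       # dG/dtheta
d2G <- function(y) (2 * y / theta^3 - y^2 / theta^4) * exp(-y / theta)    # d^2G/dtheta^2
dg  <- function(y) (y / theta^3 - 1 / theta^2) * exp(-y / theta)          # dg/dtheta
d2g <- function(y) (y^2 / theta^5 - 4 * y / theta^4 + 2 / theta^3) * exp(-y / theta)

w <- function(y, m) m * g(y) * (1 - G(y))^(m - 1)      # pdf of the minimum of m values

ab1 <- function(m) integrate(function(y) (dg(y)^2 / g(y)^2 - d2g(y) / g(y)) * w(y, m), 0, Inf)$value
c1  <- function(m) integrate(function(y) dG(y)^2 * w(y, m), 0, Inf)$value
d1  <- function(m) integrate(function(y) d2G(y) * w(y, m), 0, Inf)$value
k1  <- function(m) integrate(function(y) dG(y)^2 / (1 + G(y)) * w(y, m), 0, Inf)$value

round(sapply(1:10, ab1), 4)   # first column of Table 2.2
round(sapply(1:8,  c1), 4)
round(sapply(1:9,  d1), 4)
round(sapply(1:9,  k1), 4)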

The general recurrence relation in (2.2.14) allows us to produce all the terms needed in Figures 2.2-2.7 from the values in Table 2.2. All of these terms, $ab_{r:m}$, $c_{r:m}$, $d_{r:m}$, and $e_{r:m}$ for any $r$ and $m$ with $1 \le r \le m \le n$, are displayed in Tables 2.3-2.6 for $n = 10$.

46 abr:m m = 1 2 3 4 5 6 7 8 9 10 r = 1 0.2500 0.0000 -0.0833 -0.1250 -0.1500 -0.1667 -0.1786 -0.1875 -0.1944 -0.2000 2 0.5000 0.1667 0.0417 -0.0250 -0.0667 -0.0952 -0.1161 -0.1319 -0.1444 3 0.6667 0.2917 0.1417 0.0583 0.0048 -0.0327 -0.0605 -0.0819 4 0.7917 0.3917 0.2250 0.1298 0.0673 0.0228 -0.0105 5 0.8917 0.4750 0.2964 0.1923 0.1228 0.0728 6 0.9750 0.5464 0.3589 0.2478 0.1728 7 1.0464 0.6089 0.4145 0.2978 8 1.1089 0.6645 0.4645 9 1.1645 0.7145 10 1.2145

Table 2.3: The values of abr:m from Exp(2) parent for 1 ≤ r ≤ m ≤ 10

cr:m m = 1 2 3 4 5 6 7 8 r = 1 0.0185 0.0156 0.012 0.0093 0.0073 0.0059 0.0048 0.004 2 0.0214 0.0229 0.0202 0.0171 0.0144 0.0122 0.0104 3 0.0207 0.0255 0.0248 0.0226 0.02 0.0176 4 0.0191 0.026 0.0271 0.026 0.024 5 0.0173 0.0254 0.028 0.0279 6 0.0157 0.0244 0.028 7 0.0143 0.0232 8 0.013

Table 2.4: The values of cr:m from Exp(2) for 1 ≤ r ≤ m ≤ 10

dr:m m = 1 2 3 4 5 6 7 8 9 r = 1 0.0625 0.0741 0.0703 0.064 0.0579 0.0525 0.0479 0.0439 0.0405 2 0.0509 0.0816 0.0892 0.0885 0.0848 0.0802 0.0755 0.0711 3 0.0356 0.0739 0.0903 0.0959 0.0963 0.0943 0.0912 4 0.0228 0.063 0.0848 0.0953 0.0996 0.1005 5 0.0128 0.0521 0.0769 0.091 0.0985 6 0.0049 0.0422 0.0684 0.085 7 -0.0013 0.0334 0.0602 8 -0.0063 0.0258 9 -0.0103

Table 2.5: The values of dr:m from Exp(2) for 1 ≤ r ≤ m ≤ 10

47 er:m m = 1 2 3 4 5 6 7 8 9 r = 1 0.0625 0.037 0.0234 0.016 0.0116 0.0087 0.0068 0.0055 0.0045 2 0.088 0.0642 0.0458 0.0337 0.0257 0.0202 0.0163 0.0134 3 0.0998 0.0827 0.0638 0.0497 0.0395 0.032 0.0264 4 0.1055 0.0953 0.078 0.0633 0.0519 0.0432 5 0.1081 0.104 0.0889 0.0747 0.0629 6 0.1089 0.11 0.0975 0.0841 7 0.1087 0.1142 0.1042 8 0.1079 0.1171 9 0.1068

Table 2.6: The values of er:m from Exp(2) for 1 ≤ r ≤ m ≤ 10

We can now compute $I^f_{r:m}$ for $1 \le r \le m \le n$ when $n = 10$ using Theorem 2.2.1, by substituting the values of $ab$, $c$, $d$, and $e$ in Tables 2.3-2.6 and $k_{1:m}$ in Table 2.2. This is reported in Table 2.7 below.

f Ir:m m = 1 2 3 4 5 6 7 8 9 10 r = 1 0.2500 0.2872 0.3438 0.4097 0.4797 0.5506 0.6209 0.6898 0.7568 0.8217 2 0.2872 0.2858 0.3136 0.3632 0.4272 0.5004 0.5791 0.6608 0.7440 3 0.3438 0.3136 0.3086 0.3304 0.3729 0.4306 0.4992 0.5756 4 0.4097 0.3632 0.3304 0.3246 0.3426 0.3795 0.4313 5 0.4797 0.4272 0.3729 0.3426 0.3367 0.3520 6 0.5506 0.5004 0.4306 0.3795 0.3520 7 0.6209 0.5791 0.4992 0.4313 8 0.6898 0.6608 0.5756 9 0.7568 0.7440 10 0.8217

Table 2.7: $I^f_{r:m}$ from Laplace(0, 2) using Theorem 2.2.1 for $1 \le r \le m \le 10$

We can observe that there is no difference between the values in Tables 2.1 and 2.7 up to the four reported decimal places. This supports our claim that the minimal effort needed to compute $I^f_{r:m}(\theta; X)$ is the computation of the blocks in Lemma 2.2.1, taking advantage of the ease of evaluating those constants. We can similarly use Theorem 2.3.1 and Tables 2.2-2.6 to compute $I^f_{1\cdots r:m}$. These values are reported in Table 2.8.

f I1···r:m m = 1 2 3 4 5 6 7 8 9 10 r = 1 0.2500 0.2872 0.3438 0.4097 0.4797 0.5506 0.6210 0.6898 0.7568 0.8217 2 0.5 0.5179 0.5556 0.6096 0.6756 0.7496 0.8287 0.9107 0.9940 3 0.75 0.7580 0.7793 0.8154 0.8653 0.9267 0.9973 1.0746 4 1 1.0036 1.015 1.0373 1.0717 1.1180 1.1751 5 1.25 1.2516 1.2576 1.2708 1.2934 1.3262 6 1.5 1.5007 1.5038 1.5114 1.5257 7 1.75 1.7503 1.7519 1.7562 8 2 2.0002 2.0010 9 2.25 2.2501 10 2.5

Table 2.8: $I^f_{1\cdots r:m}$ from Laplace(0, 2) using Theorem 2.3.1 for $1 \le r \le m \le 10$

In conclusion, the use of Theorems 2.2.2 and 2.3.1 and Lemma 2.2.1 provides an efficient way to compute the Fisher information in a single order statistic and in Type-II censored samples from an unfolded distribution that is symmetric about zero. The approach involves manipulating expectations of special functions of single order statistics from the corresponding folded distribution.

Table 2.9 and Figure 2.14 provide the proportional FI about θ from Laplace (0, 2) for n = 10.

r     $I_{1\cdots r:n}(\theta)/(n I(\theta))$
1     0.32868
2     0.39760
3     0.42984
4     0.47004
5     0.53048
6     0.61028
7     0.70248
8     0.80040
9     0.90004
10    1

Table 2.9: Proportional FI, $I_{1\cdots r:n}(\theta)/(n I(\theta))$, from Laplace(0, 2) when $n = 10$.
Figure 2.14: Plot of $I_{1\cdots r:n}(\theta)/(n I(\theta))$ against $r$ from Laplace(0, 2) when $n = 10$.

We can see that the proportional FI is close to r/n when r ≥ 6. This means that the relative amount of FI in Type-II right censored samples from Laplace (0, 2) is almost the same as that in a random sample of size r when r ≥ 6 and n = 10.

Let us measure the ARE for 0 < p < 1 from Laplace (0, 2). We first derive the expression in (1.1.18) for any θ from Laplace (0, θ).

$I_p(\theta) = \int_{-\infty}^{F_X^{-1}(p)}\Big(-\frac{1}{\theta}+\frac{|x|}{\theta^2}\Big)^2 f_X(x;\theta)\,dx + \frac{1}{1-p}\Bigg[\int_{F_X^{-1}(p)}^{\infty}\Big(-\frac{1}{\theta}+\frac{|x|}{\theta^2}\Big) f_X(x;\theta)\,dx\Bigg]^2.$
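This integral can be evaluated numerically; the following R sketch (our illustration; the function names are ours) should reproduce, up to numerical error, the ratios $I_p(\theta)/I(\theta)$ reported in Table 2.10 for $\theta = 2$:

# Limiting proportional FI (ARE) for the Laplace(0, theta) scale parameter.
theta <- 2
f.lap <- function(x) exp(-abs(x) / theta) / (2 * theta)
q.lap <- function(p) if (p < 0.5) theta * log(2 * p) else -theta * log(2 * (1 - p))
score <- function(x) -1 / theta + abs(x) / theta^2      # d log f / d theta

Ip <- function(p) {
  xp <- q.lap(p)
  a <- integrate(function(x) score(x)^2 * f.lap(x), -Inf, xp)$value
  b <- integrate(function(x) score(x) * f.lap(x), xp, Inf)$value
  a + b^2 / (1 - p)
}

round(sapply(seq(0.1, 0.9, by = 0.1), Ip) / (1 / theta^2), 4)   # compare with Table 2.10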

These values are given in Table 2.10.

p      $I_p(\theta)/I(\theta; X)$
0.1    0.3878087
0.2    0.4098972
0.3    0.4118318
0.4    0.4331956
0.5    0.5000000
0.6    0.5999997
0.7    0.6999982
0.8    0.7999991
0.9    0.8999943

Table 2.10: ARE values for the Laplace(0, 2) distribution

Upon comparing Tables 2.9 and 2.10, it follows that even for $n = 10$ the relative FI is close to its asymptotic value. In particular, the bottom 10% of the sample carries a large proportion of the total FI, and the symmetry of the Laplace distribution makes the ARE for the bottom 50% equal to 0.5, as shown in Table 2.10.

Using the fact that $I(\theta) = 0.25$, the $I_p(\theta)$ values can be computed from Table 2.10, and the asymptotic variance of the MLE is readily obtained as the reciprocal of $n I_p(\theta)$.

The question of determining an optimal sample size is not applicable to the Laplace distribution, because the underlying variable is not a duration of time and can be negative.

Remark 2.4.1. Comparison with Fisher information in exponential samples: It is known that for an Exponential($\theta$) parent, $I^g_{1\cdots r:n}(\theta) = r/\theta^2$ $(= r I^g_{1:1}(\theta) = r I^f_{1:1}(\theta))$, so the FI grows strictly linearly in $r$. In contrast, from Figure 2.14 and Table 2.10 it follows that for the Laplace parent the FI grows nonlinearly at first and eventually becomes nearly linear, for small as well as large samples.

CHAPTER 3: FISHER INFORMATION FROM A FINITE MIXTURE OF EXPONENTIAL DISTRIBUTIONS AND ITS TYPE-II CENSORED SAMPLES

3.1 Introduction

In this chapter, we consider a finite mixture of two distinct exponential distributions with positive mixing proportions $\theta$ and $1-\theta$, which sum to one. The model is given by

$f(x; \alpha, \beta, \theta) = \theta\alpha e^{-\alpha x} + (1-\theta)\beta e^{-\beta x}, \quad x > 0,$   (3.1.1)

where $\alpha, \beta > 0$ and $0 < \theta < 1$.

For the pdf in (3.1.1), we are interested in the 3x3 FIM for $\alpha$, $\beta$, and $\theta$ from Type-II censored samples. However, Atienza et al. (2007) showed that the log-likelihood function of the mixture model is not always bounded, so that a global maximum likelihood estimator often does not exist. This limitation manifests itself in an FIM whose determinant is nearly zero; the problem is discussed in detail in Section 3.2. For this reason we also treat the case where the mixing proportion is fixed, i.e., $\theta$ is known. The 2x2 FIM for $\alpha$ and $\beta$ is obtained and its properties are discussed in Section 3.3. Numerical results for the 2x2 FIM and their applications are illustrated in Section 3.4.

3.2 Fisher Information in Type-II Censored samples from a Mixture of Two Exponentials with Unknown θ

The two-component exponential mixture model given by (3.1.1) has three unknown parameters, $\alpha$, $\beta$, and $\theta$, to be estimated. The log-likelihood of a single observation is

$\log f(x; \alpha, \beta, \theta) = \log\big(\theta\alpha e^{-\alpha x} + (1-\theta)\beta e^{-\beta x}\big), \quad x > 0.$   (3.2.1)

The first- and second-order derivatives of (3.2.1) with respect to $\alpha$, $\beta$, and $\theta$ are

$\frac{\partial}{\partial\alpha}\log f(x;\alpha,\beta,\theta) = \frac{(1-\alpha x)\theta e^{-\alpha x}}{f(x)},$

$\frac{\partial}{\partial\beta}\log f(x;\alpha,\beta,\theta) = \frac{(1-\beta x)(1-\theta)e^{-\beta x}}{f(x)},$

$\frac{\partial}{\partial\theta}\log f(x;\alpha,\beta,\theta) = \frac{\alpha e^{-\alpha x}-\beta e^{-\beta x}}{f(x)},$   (3.2.2)

$\frac{\partial^2}{\partial\alpha^2}\log f(x;\alpha,\beta,\theta) = -\frac{(2-\alpha x)\theta(1-\theta)x\beta e^{-(\alpha+\beta)x}+\theta^2 e^{-2\alpha x}}{f^2(x)},$

$\frac{\partial^2}{\partial\beta^2}\log f(x;\alpha,\beta,\theta) = -\frac{(2-\beta x)\theta(1-\theta)x\alpha e^{-(\alpha+\beta)x}+(1-\theta)^2 e^{-2\beta x}}{f^2(x)},$

$\frac{\partial^2}{\partial\alpha\partial\beta}\log f(x;\alpha,\beta,\theta) = -\frac{\theta(1-\theta)e^{-(\alpha+\beta)x}(1-\alpha x)(1-\beta x)}{f^2(x)},$

$\frac{\partial^2}{\partial\theta^2}\log f(x;\alpha,\beta,\theta) = -\frac{(\alpha e^{-\alpha x}-\beta e^{-\beta x})^2}{f^2(x)},$

$\frac{\partial^2}{\partial\alpha\partial\theta}\log f(x;\alpha,\beta,\theta) = \frac{\beta(1-\alpha x)e^{-(\alpha+\beta)x}}{f^2(x)},$

$\frac{\partial^2}{\partial\beta\partial\theta}\log f(x;\alpha,\beta,\theta) = -\frac{\alpha(1-\beta x)e^{-(\alpha+\beta)x}}{f^2(x)},$   (3.2.3)

where $f(x)$ is $f(x; \alpha, \beta, \theta)$ given by (3.1.1).

Let $I(\alpha, \beta, \theta)$ denote the FIM in a single observation from (3.1.1):

$I(\alpha,\beta,\theta) = \begin{pmatrix} I(\alpha,\alpha) & I(\alpha,\beta) & I(\alpha,\theta) \\ I(\alpha,\beta) & I(\beta,\beta) & I(\beta,\theta) \\ I(\alpha,\theta) & I(\beta,\theta) & I(\theta,\theta) \end{pmatrix}.$   (3.2.4)

Note that (3.2.1) can violate the standard regularity conditions at the boundary of the parameter space. It is easy to see, for instance, that $I(\theta,\theta)$ is infinite when $\alpha = 1$, $\beta = 3$, and $\theta = 0$. We therefore impose additional conditions so that the FI is well defined; that is, we take $\alpha \ne \beta$ and $0 < \theta < 1$. Each entry of (3.2.4) is obtained by taking the expectation of the negative of the corresponding second-order derivative, but no closed form exists.

$I(\alpha,\alpha) = \int_0^{\infty}\frac{(2-\alpha x)\theta(1-\theta)x\beta e^{-(\alpha+\beta)x}+\theta^2 e^{-2\alpha x}}{f(x)}\,dx$
$I(\beta,\beta) = \int_0^{\infty}\frac{(2-\beta x)\theta(1-\theta)x\alpha e^{-(\alpha+\beta)x}+(1-\theta)^2 e^{-2\beta x}}{f(x)}\,dx$
$I(\theta,\theta) = \int_0^{\infty}\frac{(\alpha e^{-\alpha x}-\beta e^{-\beta x})^2}{f(x)}\,dx$
$I(\alpha,\beta) = \int_0^{\infty}\frac{\theta(1-\theta)e^{-(\alpha+\beta)x}(1-\alpha x)(1-\beta x)}{f(x)}\,dx$
$I(\alpha,\theta) = \int_0^{\infty}\frac{\beta(\alpha x-1)e^{-(\alpha+\beta)x}}{f(x)}\,dx$
$I(\beta,\theta) = \int_0^{\infty}\frac{\alpha(1-\beta x)e^{-(\alpha+\beta)x}}{f(x)}\,dx$   (3.2.5)
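A minimal R sketch (ours; the helper names are illustrative) of how the entries in (3.2.5) can be evaluated by numerical integration for given values of $\alpha$, $\beta$, and $\theta$:

# Numerical evaluation of the FIM entries in (3.2.5) for a single observation
# from the two-component exponential mixture; compare with Tables 3.1-3.4.
fim.mexp <- function(alpha, beta, theta) {
  f <- function(x) theta * alpha * exp(-alpha * x) + (1 - theta) * beta * exp(-beta * x)
  num <- list(
    aa = function(x) (2 - alpha * x) * theta * (1 - theta) * x * beta * exp(-(alpha + beta) * x) +
                     theta^2 * exp(-2 * alpha * x),
    bb = function(x) (2 - beta * x) * theta * (1 - theta) * x * alpha * exp(-(alpha + beta) * x) +
                     (1 - theta)^2 * exp(-2 * beta * x),
    tt = function(x) (alpha * exp(-alpha * x) - beta * exp(-beta * x))^2,
    ab = function(x) theta * (1 - theta) * exp(-(alpha + beta) * x) * (1 - alpha * x) * (1 - beta * x),
    at = function(x) beta * (alpha * x - 1) * exp(-(alpha + beta) * x),
    bt = function(x) alpha * (1 - beta * x) * exp(-(alpha + beta) * x))
  sapply(num, function(h) integrate(function(x) h(x) / f(x), 0, Inf)$value)
}

fim.mexp(15, 1, 0.9)   # I(alpha,alpha), I(beta,beta), I(theta,theta), I(alpha,beta), ...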

Under the assumptions $\alpha, \beta > 0$ and $0 < \theta < 1$, which prevent the pdf in (3.1.1) from degenerating to a single exponential, let us examine whether the integrals in (3.2.5) are finite. For $\alpha > \beta$, each integrand can be rewritten, after multiplying numerator and denominator by $e^{\beta x}$, as follows:

$\big(-\tfrac{\partial^2}{\partial\alpha^2}\log f(x)\big)f(x) = \frac{(2-\alpha x)\theta(1-\theta)x\beta e^{-\alpha x}+\theta^2 e^{-(2\alpha-\beta)x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}$

$\big(-\tfrac{\partial^2}{\partial\beta^2}\log f(x)\big)f(x) = \frac{(2-\beta x)\theta(1-\theta)x\alpha e^{-\alpha x}+(1-\theta)^2 e^{-\beta x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}$

$\big(-\tfrac{\partial^2}{\partial\alpha\partial\beta}\log f(x)\big)f(x) = \frac{\theta(1-\theta)e^{-\alpha x}(1-\alpha x)(1-\beta x)}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}$

$\big(-\tfrac{\partial^2}{\partial\theta^2}\log f(x)\big)f(x) = \frac{\alpha^2 e^{-(2\alpha-\beta)x}+\beta^2 e^{-\beta x}-2\alpha\beta e^{-\alpha x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}$

$\big(-\tfrac{\partial^2}{\partial\alpha\partial\theta}\log f(x)\big)f(x) = \frac{\beta(\alpha x-1)e^{-\alpha x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}$

$\big(-\tfrac{\partial^2}{\partial\beta\partial\theta}\log f(x)\big)f(x) = \frac{\alpha(1-\beta x)e^{-\alpha x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}.$   (3.2.6)

For instance, $I(\alpha, \alpha)$ must be finite because

$\int_0^{\infty}\frac{(2-\alpha x)\theta(1-\theta)x\beta e^{-\alpha x}+\theta^2 e^{-(2\alpha-\beta)x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}\,dx \le \frac{1}{(1-\theta)\beta}\int_0^{\infty}\big[(2-\alpha x)\theta(1-\theta)x\beta e^{-\alpha x}+\theta^2 e^{-(2\alpha-\beta)x}\big]\,dx,$

since $\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta$ is positive and, for $\alpha > \beta$, decreases monotonically to $(1-\theta)\beta$. The integral on the right is in turn bounded by

$\frac{1}{(1-\theta)\beta}\int_0^{\infty}\big[(2+\alpha x)\theta(1-\theta)x\beta e^{-\alpha x}+\theta^2 e^{-(2\alpha-\beta)x}\big]\,dx = \frac{4\theta}{\alpha^2}+\frac{\theta^2}{(1-\theta)\beta(2\alpha-\beta)}.$   (3.2.7)

By using similar arguments, it can be shown that all the integrals in (3.2.5) are finite for $\alpha > \beta > 0$:

$\int_0^{\infty}\frac{(2-\beta x)\theta(1-\theta)x\alpha e^{-\alpha x}+(1-\theta)^2 e^{-\beta x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}\,dx \le \frac{2\theta(\alpha+\beta)}{\alpha^2\beta}+\frac{1-\theta}{\beta^2}$

$\int_0^{\infty}\frac{\alpha^2 e^{-(2\alpha-\beta)x}+\beta^2 e^{-\beta x}-2\alpha\beta e^{-\alpha x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}\,dx \le \frac{\alpha^2+6\alpha\beta-3\beta^2}{(1-\theta)\beta(2\alpha-\beta)}$

$\int_0^{\infty}\frac{\theta(1-\theta)e^{-\alpha x}(1-\alpha x)(1-\beta x)}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}\,dx \le \frac{\theta}{\alpha}\Big(\frac{2}{\beta}+\frac{3}{\alpha}\Big)$

$\int_0^{\infty}\frac{\beta(\alpha x-1)e^{-\alpha x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}\,dx \le \frac{2}{(1-\theta)\alpha}$

$\int_0^{\infty}\frac{\alpha(1-\beta x)e^{-\alpha x}}{\theta\alpha e^{-(\alpha-\beta)x}+(1-\theta)\beta}\,dx \le \frac{\alpha+\beta}{(1-\theta)\alpha\beta}.$   (3.2.8)

The fact that every integral in (3.2.5) is bounded, as shown in (3.2.8), establishes that under the regularity conditions, in addition to $\alpha > \beta$ and $0 < \theta < 1$, the FIM in (3.2.4) is well defined. For $\alpha < \beta$, all the integrals are also finite, by interchanging the roles of $\alpha$ and $\beta$, and of $\theta$ and $1-\theta$, in the above argument. Thus the integrals in (3.2.5) are finite for all $\alpha, \beta > 0$ and $0 < \theta < 1$, and we expect the FI in Type-II censored samples to be finite as well.

Now we use (1.1.3) to obtain the FI in Type-II right censored samples from MExp($\alpha, \beta, \theta$) in (3.1.1), in order to avoid the $(r-1)$-fold integrals in (1.1.2). The log-likelihood of the first order statistic from $i$ observations is proportional to the expression below:

$\log f_{1:i}(x; \alpha, \beta, \theta) \propto \log\big(\theta\alpha e^{-\alpha x} + (1-\theta)\beta e^{-\beta x}\big) + (i-1)\log\big(\theta e^{-\alpha x} + (1-\theta)e^{-\beta x}\big).$   (3.2.9)

The first-order derivatives are as follows:

$\frac{\partial}{\partial\alpha}\log f_{1:i}(x;\alpha,\beta,\theta) = \frac{(1-\alpha x)\theta e^{-\alpha x}}{\theta\alpha e^{-\alpha x}+(1-\theta)\beta e^{-\beta x}} - (i-1)\frac{\theta x e^{-\alpha x}}{\theta e^{-\alpha x}+(1-\theta)e^{-\beta x}} = D_\alpha(x),$

$\frac{\partial}{\partial\beta}\log f_{1:i}(x;\alpha,\beta,\theta) = \frac{(1-\beta x)(1-\theta)e^{-\beta x}}{\theta\alpha e^{-\alpha x}+(1-\theta)\beta e^{-\beta x}} - (i-1)\frac{(1-\theta)x e^{-\beta x}}{\theta e^{-\alpha x}+(1-\theta)e^{-\beta x}} = D_\beta(x),$

$\frac{\partial}{\partial\theta}\log f_{1:i}(x;\alpha,\beta,\theta) = \frac{\alpha e^{-\alpha x}-\beta e^{-\beta x}}{\theta\alpha e^{-\alpha x}+(1-\theta)\beta e^{-\beta x}} + (i-1)\frac{e^{-\alpha x}-e^{-\beta x}}{\theta e^{-\alpha x}+(1-\theta)e^{-\beta x}} = D_\theta(x).$   (3.2.10)

The expressions for $I_{1:i}$ are given below:

$I_{1:i}(\alpha,\alpha) = \int_0^{\infty}[D_\alpha(x)]^2\, i f(x)(1-F(x))^{i-1}\,dx$
$I_{1:i}(\beta,\beta) = \int_0^{\infty}[D_\beta(x)]^2\, i f(x)(1-F(x))^{i-1}\,dx$
$I_{1:i}(\alpha,\beta) = \int_0^{\infty}D_\alpha(x)D_\beta(x)\, i f(x)(1-F(x))^{i-1}\,dx$
$I_{1:i}(\theta,\theta) = \int_0^{\infty}[D_\theta(x)]^2\, i f(x)(1-F(x))^{i-1}\,dx$
$I_{1:i}(\alpha,\theta) = \int_0^{\infty}D_\alpha(x)D_\theta(x)\, i f(x)(1-F(x))^{i-1}\,dx$
$I_{1:i}(\beta,\theta) = \int_0^{\infty}D_\beta(x)D_\theta(x)\, i f(x)(1-F(x))^{i-1}\,dx,$   (3.2.11)

where $1-F(x) = \theta e^{-\alpha x}+(1-\theta)e^{-\beta x}$.

As shown in (3.2.8), we can prove that every integral in (3.2.11) is finite for any i.

We use (3.2.2) to obtain the expressions for the limiting FI in (1.1.17); they are given by

$I_p(\alpha) = \int_0^{F_X^{-1}(p)}\frac{(1-\alpha x)^2\theta^2 e^{-2\alpha x}}{f(x)}\,dx + \frac{1}{1-p}\Bigg[\int_{F_X^{-1}(p)}^{\infty}(1-\alpha x)\theta e^{-\alpha x}\,dx\Bigg]^2,$

$I_p(\beta) = \int_0^{F_X^{-1}(p)}\frac{(1-\beta x)^2(1-\theta)^2 e^{-2\beta x}}{f(x)}\,dx + \frac{1}{1-p}\Bigg[\int_{F_X^{-1}(p)}^{\infty}(1-\beta x)(1-\theta)e^{-\beta x}\,dx\Bigg]^2,$

$I_p(\theta) = \int_0^{F_X^{-1}(p)}\frac{(\alpha e^{-\alpha x}-\beta e^{-\beta x})^2}{f(x)}\,dx + \frac{1}{1-p}\Bigg[\int_{F_X^{-1}(p)}^{\infty}(\alpha e^{-\alpha x}-\beta e^{-\beta x})\,dx\Bigg]^2.$   (3.2.12)

3.3 Fisher Information in Type-II Censored samples from a Mixture of Two Exponentials with Known θ

Suppose the mixing proportion $\theta$ is known. Then the FIM is no longer the 3x3 matrix in (3.2.4); it reduces to

$I(\alpha,\beta) = \begin{pmatrix} I(\alpha,\alpha) & I(\alpha,\beta) \\ I(\alpha,\beta) & I(\beta,\beta) \end{pmatrix}.$   (3.3.1)

We need only three integrals from (3.2.5):

$I(\alpha,\alpha) = \int_0^{\infty}\frac{(2-\alpha x)\theta(1-\theta)x\beta e^{-(\alpha+\beta)x}+\theta^2 e^{-2\alpha x}}{f(x)}\,dx,$
$I(\beta,\beta) = \int_0^{\infty}\frac{(2-\beta x)\theta(1-\theta)x\alpha e^{-(\alpha+\beta)x}+(1-\theta)^2 e^{-2\beta x}}{f(x)}\,dx,$
$I(\alpha,\beta) = \int_0^{\infty}\frac{\theta(1-\theta)e^{-(\alpha+\beta)x}(1-\alpha x)(1-\beta x)}{f(x)}\,dx.$   (3.3.2)

To compute the FI in Type-II right censored samples we use the approach of (1.1.3). The relation requires the following FI in the first order statistic for sample sizes $i$, $1 \le i \le n$:

$I_{1:i}(\alpha,\alpha) = \int_0^{\infty}[D_\alpha(x)]^2\, i f(x)(1-F(x))^{i-1}\,dx,$
$I_{1:i}(\beta,\beta) = \int_0^{\infty}[D_\beta(x)]^2\, i f(x)(1-F(x))^{i-1}\,dx,$
$I_{1:i}(\alpha,\beta) = \int_0^{\infty}D_\alpha(x)D_\beta(x)\, i f(x)(1-F(x))^{i-1}\,dx.$   (3.3.3)

The expressions for the limiting FI are

$I_p(\alpha) = \int_0^{F_X^{-1}(p)}\frac{(1-\alpha x)^2\theta^2 e^{-2\alpha x}}{f(x)}\,dx + \frac{1}{1-p}\Bigg[\int_{F_X^{-1}(p)}^{\infty}(1-\alpha x)\theta e^{-\alpha x}\,dx\Bigg]^2,$

$I_p(\beta) = \int_0^{F_X^{-1}(p)}\frac{(1-\beta x)^2(1-\theta)^2 e^{-2\beta x}}{f(x)}\,dx + \frac{1}{1-p}\Bigg[\int_{F_X^{-1}(p)}^{\infty}(1-\beta x)(1-\theta)e^{-\beta x}\,dx\Bigg]^2.$

3.4 Application and Numerical Integration

In this section the exact values of the FI are obtained by numerical integration in R. Jewell (1982) used the EM algorithm to compute MLEs from 100 observations generated from MExp(15, 1, .8). Table 3.1 shows the 3x3 FIM in a single observation when $\alpha = 15$ and $\beta = 1$ for various values of $\theta$; every entry is finite. However, the 3x3 FIM has a determinant very close to zero for every $\theta$ value considered. When $\theta$ is known, the determinant of the 2x2 FIM is also close to zero and can be even closer to zero than that of the corresponding 3x3 FIM.

α = 15, β = 1 θ I11 I12 I22 det(2x2) I33 I13 I23 det(3x3) 0.1 9.26E-05 -0.0006 0.8513 7.84445E-05 3.9803 0.0116 0.5874 0.0002 0.2 2.72E-04 -0.0016 0.7337 0.0002 3.1039 0.0152 0.4547 0.0004 0.3 5.03E-04 -0.0025 0.6277 0.0003 2.7267 0.0172 0.3817 0.0006 0.4 7.81E-04 -0.0032 0.5280 0.0004 2.585 0.0187 0.3356 0.0007 0.5 1.11E-03 -0.0036 0.4326 0.0005 2.6089 0.0203 0.3042 0.0009 0.6 1.48E-03 -0.0038 0.3405 0.0005 2.8056 0.0223 0.2826 0.0010 0.7 1.92E-03 -0.0037 0.2511 0.0005 3.2660 0.0249 0.2683 0.0012 0.8 2.46E-03 -0.0032 0.1642 0.0004 4.3083 0.0289 0.2610 0.0013 0.9 3.14E-03 -0.0022 0.0801 2.46E-04 7.5284 0.0370 0.2639 0.0015

Table 3.1: I(X; α, β, θ) from MExp(15, 1, θ) with known θ and unknown θ

What happens if we change the values of $\alpha$ and $\beta$? Tables 3.2, 3.3, and 3.4 show the determinants when $\alpha = 2$, $\beta = 1$; $\alpha = 15$, $\beta = 2$; and $\alpha = 3$, $\beta = 2$, respectively. Since $I(X; \alpha, \beta, \theta) = I(X; \beta, \alpha, 1-\theta)$, we do not consider the FI for $\alpha < \beta$. For instance, the pdf when $\alpha = 15$ and $\beta = 1$ with $\theta = .3$ is identical to that when $\alpha = 1$ and $\beta = 15$ with $\theta = .7$.

α = 2, β = 1 θ I11 I12 I22 det(2x2) I33 I13 I23 det(3x3) 0.1 0.0018 0.0222 0.8569 0.0010 0.3353 0.0218 0.4631 7.00E-06 0.2 0.0070 0.0395 0.7261 0.0035 0.3416 0.0432 0.4295 2.55E-05 0.3 0.0156 0.0525 0.6057 0.0067 0.3526 0.0649 0.3981 5.24E-05 0.4 0.0277 0.0618 0.4941 0.0099 0.3695 0.0877 0.3680 8.49E-05 0.5 0.0436 0.0673 0.3904 0.0125 0.3944 0.1126 0.3381 0.0001 0.6 0.0640 0.0691 0.2940 0.0140 0.4312 0.1410 0.3073 0.0002 0.7 0.0899 0.0666 0.2046 0.0140 0.4878 0.1754 0.2734 0.0002 0.8 0.1237 0.0586 0.1228 0.0117 0.5832 0.2208 0.2326 0.0002 0.9 0.1702 0.0417 0.0504 0.0068 0.7849 0.2904 0.1742 0.0002

Table 3.2: I(X; α, β, θ) from MExp(2, 1, θ) with known θ and unknown θ

α = 15, β = 2 θ I11 I12 I22 det(2x2) I33 I13 I23 det(3x3) 0.1 6.40E-05 5.19E-05 0.2119 1.35608E-05 2.3359 0.0084 0.3338 9.79E-06 0.2 2.07E-04 -2.24E-04 0.1808 3.73846E-05 2.0303 0.0131 0.2795 2.73E-05 0.3 4.03E-04 -5.60E-04 0.1533 6.14601E-05 1.8919 0.0164 0.2448 4.65E-05 0.4 6.46E-04 -8.70E-04 0.1280 8.18768E-05 1.8594 0.0192 0.2207 6.60E-05 0.5 9.39E-04 -1.11E-03 0.1041 9.65073E-05 1.9195 0.0221 0.2033 8.55E-05 0.6 1.29E-03 -1.26E-03 0.0814 0.0001 2.0913 0.0253 0.1905 1.05E-04 0.7 1.71E-03 -1.28E-03 0.0598 0.0001 2.4452 0.0295 0.1813 1.24E-04 0.8 2.24E-03 -1.15E-03 0.0390 8.59971E-05 3.2057 0.0356 0.1755 1.43E-04 0.9 2.95E-03 -7.91E-04 0.0190 5.55E-05 5.4486 0.0471 0.1737 1.58E-04

Table 3.3: I(X; α, β, θ) from MExp(15, 2, θ) with known θ and unknown θ

60 α = 3, β = 2 θ I11 I12 I22 det(2x2) I33 I13 I23 det(3x3) 0.1 0.0008 0.0102 0.2125 6.26187E-05 0.1279 0.0095 0.1553 2.79E-08 0.2 0.0032 0.0186 0.1776 0.0002 0.1317 0.0192 0.1439 1.03E-07 0.3 0.0072 0.0251 0.1452 0.0004 0.1365 0.0295 0.1324 2.12E-07 0.4 0.0130 0.0299 0.1153 0.0006 0.1428 0.0404 0.1205 3.41E-07 0.5 0.0207 0.0327 0.0878 0.0008 0.1508 0.0524 0.1080 4.73E-07 0.6 0.0307 0.0333 0.0629 0.0008 0.1614 0.0657 0.0945 5.86E-07 0.7 0.0434 0.0315 0.0407 0.0008 0.1760 0.0812 0.0793 6.50E-07 0.8 0.0597 0.0265 0.0218 0.0006 0.1971 0.1001 0.0613 6.18E-07 0.9 0.0809 0.0172 0.0072 2.88E-04 0.2321 0.1250 0.0381 4.15E-07

Table 3.4: I(X; α, β, θ) from MExp(3, 2, θ) with known θ and unknown θ

Tables 3.2 and 3.4 have larger determinants when $\theta$ is known than when $\theta$ is unknown. In terms of the magnitude of the determinant, the 2x2 FIM in Table 3.2 provides the largest values. We can order the determinants of the 3x3 FIM for each $\theta$: when $\theta = .1$ or .2, Table 3.1 > Table 3.3 > Table 3.2 > Table 3.4; otherwise Table 3.1 > Table 3.2 > Table 3.3 > Table 3.4. For the 2x2 FIM, the determinants are ordered as Table 3.2 > Table 3.4 > Table 3.1 > Table 3.3 for every $\theta$. These nearly zero determinants lead to MLEs with very large asymptotic variances, meaning that very large sample sizes are needed for stable behavior of the MLEs from a mixture of two exponentials.

Will the problem of an almost zero determinant of the FI in a single observation be mitigated in Type-II right censored samples? We investigate the 3x3 FIM from MExp(15, 1, .9) and the 2x2 FIM from MExp(2, 1; $\theta$) with known $\theta = .6$, because these two cases have the largest determinants for unknown and known $\theta$, respectively. We now compare the 3x3 FIM for $\theta = .9$ in Table 3.1, which corresponds to a single observation, with the FIM in right censored samples given in Table 3.5.

61 MExp(15, 1, .9) r I11 I12 I13 I22 I23 I33 det 1 0.0043 0.0006 0.0739 0.0001 0.0105 1.2976 -8.35324E-12 2 0.0084 0.0013 0.1543 0.0002 0.0254 2.9257 4.48395E-10 3 0.0123 0.0022 0.2420 0.0005 0.0475 5.0325 1.1562E-09 4 0.0159 0.0032 0.3373 0.0009 0.0822 7.8694 6.90623E-08 5 0.0191 0.0045 0.4388 0.0019 0.1410 11.8912 1.79502E-06 6 0.0217 0.0058 0.5390 0.0046 0.2518 17.9533 4.98383E-05 7 0.0239 0.0062 0.6156 0.0150 0.4839 27.5581 0.0012 8 0.0260 0.0029 0.6207 0.0619 0.9752 42.4470 0.0229 9 0.0287 -0.0081 0.5120 0.2499 1.8245 61.4369 0.2605 10 0.0314 -0.0225 0.3701 0.8008 2.6394 75.2842 1.4825

Table 3.5: I1···r:n(α, β, θ) from MExp(15, 1, .9) when n = 10

When $r \ge 8$ the determinant of $I_{1\cdots r:n}(\alpha, \beta, \theta)$ in Table 3.5 is at least 10 times greater than that of $I(\alpha, \beta, \theta)$ in Table 3.1. Note that the determinant of $I_{1\cdots 10:10}(\alpha, \beta, \theta)$ equals the determinant of $I(\alpha, \beta, \theta)$ multiplied by $10^3$, because in this example $n = 10$ and each entry of the FIM for a complete sample equals $10 \times I(\alpha, \alpha)$, $10 \times I(\beta, \beta)$, and so on. That is, the determinant of the FI in a complete sample of size $n$ equals the determinant of the FI in a single observation times $n$ raised to the power of the number of parameters being estimated.
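A quick R check of this scaling property ($\det(cA) = c^p\det(A)$ for a $p \times p$ matrix $A$), using the entries copied from the $\theta = .9$ row of Table 3.1:

# The complete-sample determinant in Table 3.5 is 10^3 times the
# single-observation determinant in Table 3.1 (theta = .9).
I1 <- matrix(c(3.14e-3, -0.0022, 0.0370,
               -0.0022,  0.0801, 0.2639,
                0.0370,  0.2639, 7.5284), nrow = 3, byrow = TRUE)
c(det(10 * I1), 10^3 * det(I1))   # identical; compare with Tables 3.1 and 3.5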

Table 3.6 provides the proportion of FI in Type-II right censored samples relative to the complete sample of size 10 for the MExp(15, 1, .9) population. The values reported in Tables 3.1 and 3.5 are used for this purpose. The Table 3.6 values are displayed in Figure 3.1, which shows a nearly linear rate of FI accumulation for $\alpha$ and an unusually slow rate for $\theta$.

MExp(15, 1, .9)
r    $I_{1\cdots r:n}(\alpha,\alpha)/(nI(\alpha,\alpha))$   $I_{1\cdots r:n}(\beta,\beta)/(nI(\beta,\beta))$   $I_{1\cdots r:n}(\theta,\theta)/(nI(\theta,\theta))$
1    0.1357723    0.000106751    0.01723573
2    0.2666148    0.000283209    0.03886181
3    0.3906355    0.000583604    0.06684639
4    0.5053443    0.001149377    0.1045295
5    0.6071316    0.002363177    0.15795134
6    0.6921575    0.005738021    0.23847317
7    0.7607789    0.018763678    0.36605505
8    0.8266048    0.077322865    0.56382368
9    0.914053     0.312103052    0.81606648
10   1            1              1

Table 3.6: Proportional FI from MExp(15, 1, .9) when n = 10

Figure 3.1: Proportional FI from MExp(15, 1, .9) when n = 10: $I_{1\cdots r:n}(\alpha,\alpha)/(nI(\alpha,\alpha))$ (black), $I_{1\cdots r:n}(\beta,\beta)/(nI(\beta,\beta))$ (red), and $I_{1\cdots r:n}(\theta,\theta)/(nI(\theta,\theta))$ (blue).

Figure 3.2 shows $[I_{1\cdots r:n}]_{ii}/E(X_{r:n})$ plotted against $E(X_{r:n})$.

Figure 3.2: Average information per unit time from MExp(15, 1, .9) when n = 10: (left panel) $I_{1\cdots r:n}(\theta)/E(X_{r:n})$ (blue); (right panel) $I_{1\cdots r:n}(\alpha)/E(X_{r:n})$ (black) and $I_{1\cdots r:n}(\beta)/E(X_{r:n})$ (red). Note that the left and right panels have different vertical axes.

Another interesting aspect is that the measure for $\beta$ is continuously increasing while that for $\alpha$ is decreasing; the average information per unit time is maximized around $r = 7$. In terms of FI efficiency relative to the experimental duration, we may take an optimal sample size of 7 or 8 out of $n = 10$ in a life-testing experiment if our interest is in $\beta$ or $\theta$. The values of $[I_{1\cdots r:n}]_{ii}/E(X_{r:n})$ are tabulated in Table 3.7.

MExp(15, 1, .9)
r    $E(X_{r:n})$   $[I_{1\cdots r:n}]_{11}/E(X_{r:n})$   $[I_{1\cdots r:n}]_{22}/E(X_{r:n})$   $[I_{1\cdots r:n}]_{33}/E(X_{r:n})$
1    0.0074    0.5786    0.0135    174.5958
2    0.0158    0.5316    0.0127    185.1479
3    0.0254    0.4845    0.0197    198.2448
4    0.0366    0.4344    0.0246    214.9832
5    0.0502    0.3807    0.0379    237.0177
6    0.0675    0.3215    0.0682    266.0306
7    0.0923    0.2589    0.1625    298.4695
8    0.1395    0.1864    0.4439    304.3806
9    0.2789    0.1029    0.8959    220.2628
10   0.8864    0.0354    0.9124    84.9316

Table 3.7: FI in Type-II right censored samples from MExp(15, 1, .9) per unit of expected experimental duration, $E(X_{r:n})$, when n = 10
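The expected durations $E(X_{r:n})$ in Table 3.7 can be obtained by numerical integration; a short R sketch (ours; the function name is illustrative) is:

# Expected duration of a Type-II right censored experiment, E(X_{r:n}),
# for the exponential mixture; these are the "time mean" entries of Table 3.7.
mean.duration <- function(r, n, alpha, beta, theta) {
  f <- function(x) theta * alpha * exp(-alpha * x) + (1 - theta) * beta * exp(-beta * x)
  F <- function(x) 1 - theta * exp(-alpha * x) - (1 - theta) * exp(-beta * x)
  integrand <- function(x) x * r * choose(n, r) * F(x)^(r - 1) * (1 - F(x))^(n - r) * f(x)
  integrate(integrand, 0, Inf)$value
}

round(sapply(1:10, mean.duration, n = 10, alpha = 15, beta = 1, theta = 0.9), 4)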

With respect to $\alpha$, the FI per unit time is largest for the first order statistic, compared to any other Type-II censored sample and even the complete sample. For $\beta$, the complete sample gives the largest value, but we may consider using only the first 9 order statistics since the difference between the two is relatively small. For $\theta$, the efficiency of the complete sample is very low (only 84.9316 from Table 3.7), and the values suggest that $r = 8$ is the optimal sample size. Of course, the assumed values of $\alpha$, $\beta$, and $\theta$ will affect this conclusion.

MExp(15, 1, .9)
Sampled proportion p    $I_p(\alpha)/I(\alpha;x)$    $I_p(\beta)/I(\beta;x)$    $I_p(\theta)/I(\theta;x)$
.1    0.1380    8.30e-05    0.0155
.2    0.2714    2.09e-04    0.0345
.3    0.3990    4.05e-04    0.0584
.4    0.5196    7.30e-04    0.0896
.5    0.6281    1.30e-03    0.1318
.6    0.7194    2.38e-03    0.1926
.7    0.7836    4.82e-03    0.2884
.8    0.8078    1.20e-02    0.4623
.9    0.8601    5.70e-02    0.8555

Table 3.8: ARE values for MExp(15, 1, .9) when n = 10

The AREs of the MLEs for censored trials are tabulated against the sampled proportion in Table 3.8. In particular, the ARE for $\beta$ increases slowly as $p$ increases. Even when the sampled proportion is 90%, the ARE is only .0570, which implies that the information about $\beta$ in a dataset consisting of the bottom 90% of the random sample is negligible. This may be due to the fact that the parameters $\alpha$ (= 15) and $\theta$ (= .9) dominate; in particular, with $\theta = .9$ most of the data are likely to come from the first component.

Now consider the 2x2 FIM when θ is known. Suppose we have a random sample of size

10 from MExp(2, 1; .6). The values of I1···r:n are shown in Table 3.9.

65 α = 2, β = 1 when θ = .6 is known r I11 I22 I12 det 1 0.1291946 0.07454028 0.09729702 0.000163492 2 0.2458035 0.1644539 0.1977233 0.001328841 3 0.348575 0.2743799 0.300469 0.005360354 4 0.4362116 0.4109531 0.4040672 0.015992207 5 0.5074937 0.5839558 0.5059344 0.040384272 6 0.5615502 0.8083012 0.6015691 0.092016318 7 0.5984041 1.107495 0.6831855 0.195987121 8 0.6199631 1.519499 0.7376399 0.397920688 9 0.6313001 2.104641 0.7451273 0.773445381 10 0.6397057 2.940166 0.6911131 1.403203632

Table 3.9: I1···r:n(α, β) from MExp(2, 1; .6) when n = 10

The determinant of the FI in the complete sample is 1.4032, which is $10^2$ times the value .0140 reported in Table 3.2. From the values in Table 3.9 we can compute the proportional FI contained in Type-II censored samples. This is reported in Table 3.10 and Figure 3.3. The rate of growth of the FI is nonlinear: concave for $\alpha$ and convex for $\beta$. Upon comparing Figures 3.1 and 3.3 we observe that the parameter values influence these patterns.

$\alpha$ = 2, $\beta$ = 1 with $\theta$ = .6 known
r    $I_{1\cdots r:n}(\alpha,\alpha)/(nI(\alpha,\alpha))$   $I_{1\cdots r:n}(\beta,\beta)/(nI(\beta,\beta))$
1    0.201959432    0.025352405
2    0.384244661    0.055933543
3    0.544899006    0.093321227
4    0.68189419     0.139772074
5    0.793323711    0.198613208
6    0.87782585     0.274916858
7    0.935436561    0.376677711
8    0.969137996    0.516807214
9    0.986860208    0.715823868
10   1              1

Table 3.10: Proportional FI from MExp(2, 1; .6)
Figure 3.3: Proportional FI from MExp(2, 1; .6): $\alpha$ (black), $\beta$ (red)

Figure 3.4: $I_{1\cdots r:n}(\alpha)/E(X_{r:n})$ (black) and $I_{1\cdots r:n}(\beta)/E(X_{r:n})$ (red) from MExp(2, 1; .6) when n = 10

Figure 3.4 provides the information per unit time for censored samples. It suggests an optimal sample size of 4 if interest lies in both parameters $\alpha$ and $\beta$ simultaneously. A comparison of Figures 3.2 and 3.4 shows that the choice of the optimal sample size depends on the parameter values. The FI in Type-II right censored samples from MExp(2, 1, .6) divided by the expected duration of the experiment is tabulated in Table 3.11.

MExp(2, 1, .6)
r    $E(X_{r:n})$   $[I_{1\cdots r:n}]_{11}/E(X_{r:n})$   $[I_{1\cdots r:n}]_{22}/E(X_{r:n})$
1    0.0563    2.2946    1.3239
2    0.1138    2.1596    1.4449
3    0.1748    1.9939    1.5695
4    0.2426    1.7984    1.6942
5    0.3221    1.5755    1.8128
6    0.4222    1.3300    1.9145
7    0.5596    1.0693    1.9791
8    0.7721    0.8030    1.9680
9    1.1673    0.5408    1.8030
10   2.2222    0.2879    1.3231

Table 3.11: FI in Type-II right censored samples from MExp(2, 1, .6) per unit time when n = 10

If we are interested in $\alpha$ or $\beta$ individually, then from Table 3.11 the first order statistic alone and the first 7 order statistics, respectively, are the most efficient. That is, for both parameters, collecting the complete sample is not the best strategy in terms of the amount of FI per unit time.

We compute the ARE’s as the proportional FIs of censored samples when compared to the complete samples. These are reported in Table 3.12.

MExp(2, 1; .6)
p     $I_p(\alpha)/I(\alpha;X)$    $I_p(\beta)/I(\beta;X)$
.1    0.2105    0.0232
.2    0.4011    0.0510
.3    0.5691    0.0847
.4    0.7123    0.1264
.5    0.8276    0.1791
.6    0.9118    0.2475
.7    0.9631    0.3396
.8    0.9831    0.4706
.9    0.9851    0.6720

Table 3.12: ARE values for MExp(2, 1; .6) when n = 10

A comparison of Tables 3.10 and 3.12 suggests that the asymptotic values are approached faster for the parameter $\alpha$ than for $\beta$. A comparison of Tables 3.8 and 3.12 shows that the variation in the proportional FIs depends on all three parameters.

CHAPTER 4: FISHER INFORMATION UNDER TYPE-II CENSORED SAMPLING FROM BLOCK-BASU BIVARIATE EXPONENTIAL DISTRIBUTION

4.1 Introduction

In this chapter we study the Fisher information (FI) in Type-II censored samples from the Block-Basu bivariate exponential distribution (BBVE). The BBVE is one of the commonly used models for describing bivariate life-time data. It is a suitable model for bivariate life-testing studies, since the two lifetimes form a bivariate random vector whose joint distribution is absolutely continuous. Life-testing experiments often deal with censored samples involving bivariate data, and our goal is to estimate the parameters of a bivariate exponential distribution such as the BBVE. The BBVE is introduced and the associated literature is reviewed in Section 4.2. Expressions for each element of the FI matrix in Type-II censored samples are given in Section 4.3. They have no closed form, and numerical integration is used to evaluate them. Of the six distinct entries (the FI matrix is symmetric), we focus on the three diagonal entries and evaluate the relative FI, which is the ratio of the FI in Type-II censored samples to the FI in the complete sample. Numerical integration is performed to calculate the FI matrix, and some applications are discussed based on the numerical results in Section 4.4.

69 4.2 Block and Basu Bivariate Exponential distribution

Block and Basu (1974) proposed an absolutely continuous bivariate exponential (BBVE) model by omitting the singular part of the Marshall and Olkin (1967) bivariate exponential model. It is also a sub-family of Freund's (1961) bivariate exponential distribution. These distributions are used to model the life lengths of the components of a two-component system with dependent lifetimes $(X, Y)$. Freund's BVE has the joint pdf

$f(x,y) = \begin{cases} \alpha\beta' e^{-(\alpha+\beta-\beta')x-\beta'y}, & \text{for } y > x > 0, \\ \beta\alpha' e^{-\alpha'x-(\alpha+\beta-\alpha')y}, & \text{for } x > y > 0. \end{cases}$   (4.2.1)

The BBVE pdf is obtained from (4.2.1) by replacing $\alpha$, $\alpha'$, $\beta$, and $\beta'$ with functions of $\lambda_1$, $\lambda_2$, and $\lambda_{12}$, where

$\alpha = \lambda_1 + \frac{\lambda_1\lambda_{12}}{\lambda_1+\lambda_2}, \quad \alpha' = \lambda_1+\lambda_{12}, \quad \beta = \lambda_2 + \frac{\lambda_2\lambda_{12}}{\lambda_1+\lambda_2}, \quad \text{and} \quad \beta' = \lambda_2+\lambda_{12},$

and $\lambda_1 > 0$, $\lambda_2 > 0$, $\lambda_{12} \ge 0$. The BBVE thus has only 3 parameters.

The joint pdf of the BBVE distribution is given by

$f(x,y) = \begin{cases} \dfrac{\lambda_1\lambda(\lambda_2+\lambda_{12})}{\lambda_1+\lambda_2}\, e^{-\lambda_1 x-(\lambda_2+\lambda_{12})y} = f_1(x,y), & \text{for } y > x > 0, \\[1ex] \dfrac{\lambda_2\lambda(\lambda_1+\lambda_{12})}{\lambda_1+\lambda_2}\, e^{-(\lambda_1+\lambda_{12})x-\lambda_2 y} = f_2(x,y), & \text{for } x > y > 0, \end{cases}$   (4.2.2)

where $\lambda = \lambda_1+\lambda_2+\lambda_{12}$. The two lifetimes $X$ and $Y$ are independent if and only if $\lambda_{12} = 0$, and have identical marginals if and only if $\lambda_1 = \lambda_2$.

The marginal survival functions have the form

$1-F_X(x) = \frac{\lambda}{\lambda_1+\lambda_2}e^{-(\lambda_1+\lambda_{12})x} - \frac{\lambda_{12}}{\lambda_1+\lambda_2}e^{-\lambda x}, \quad x > 0,$

$1-F_Y(y) = \frac{\lambda}{\lambda_1+\lambda_2}e^{-(\lambda_2+\lambda_{12})y} - \frac{\lambda_{12}}{\lambda_1+\lambda_2}e^{-\lambda y}, \quad y > 0.$   (4.2.3)

one of the mixing proportions is always negative (Balakrishnan and Basu (1995)) even

though the sum of two proportions stays at 1. The marginal cdf of X consists of two ex-

ponential cdfs with two distinct means, 1 and 1 , and the two mixing proportions are λ1+λ12 λ 70 λ and − λ12 . The second mixing proportion, − λ12 is obviously negative whenever λ1+λ2 λ1+λ2 λ1+λ2 X and Y are dependent. We have studied the properties of FI from mixture of exponentials in Chapter 3. But those results are not applicable here. This is because we have the mixing proportion dependent on the parameters of the exponentials that are being mixed, and we also have a generalized mixture here.
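For concreteness, the following R sketch (ours; the parameter values are arbitrary illustrative choices) evaluates the BBVE joint pdf (4.2.2) and the marginal survival function of X in (4.2.3), and checks numerically that the joint density integrates to one with P(X < Y) = λ1/(λ1 + λ2):

# Block-Basu BVE: joint pdf (4.2.2) and marginal survival (4.2.3).
bbve.pdf <- function(x, y, l1, l2, l12) {
  l <- l1 + l2 + l12
  ifelse(y > x,
         l1 * l * (l2 + l12) / (l1 + l2) * exp(-l1 * x - (l2 + l12) * y),
         l2 * l * (l1 + l12) / (l1 + l2) * exp(-(l1 + l12) * x - l2 * y))
}
surv.x <- function(x, l1, l2, l12) {
  l <- l1 + l2 + l12
  (l * exp(-(l1 + l12) * x) - l12 * exp(-l * x)) / (l1 + l2)
}

l1 <- 1; l2 <- 2; l12 <- 0.5   # hypothetical parameter values
p.upper <- integrate(function(x) sapply(x, function(xx)
  integrate(function(y) bbve.pdf(xx, y, l1, l2, l12), xx, Inf)$value), 0, Inf)$value
p.lower <- integrate(function(x) sapply(x, function(xx)
  integrate(function(y) bbve.pdf(xx, y, l1, l2, l12), 0, xx)$value), 0, Inf)$value
c(total = p.upper + p.lower,            # should be 1
  P.X.less.Y = p.upper,                 # should equal l1 / (l1 + l2)
  l1 / (l1 + l2),
  surv.at.0 = surv.x(0, l1, l2, l12))   # should be 1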

The BBVE model has a particular appeal in survival analysis. For example, it describes a two-component (A and B) system in which the failure of A (or B) induces a higher failure rate for B (or A). For the special case $\lambda_1 = \lambda_2$, Gross, Clark and Liu (1971) conducted a Monte Carlo simulation to produce the MLEs of $\lambda_1 + \lambda_1\lambda_{12}/(\lambda_1+\lambda_2)$ and $\lambda_1+\lambda_{12}$ from the BBVE model. Gross (1973) presented a competing-risk model with two systems, where one system has two components whose life lengths have a BBVE distribution. Gross and Lam (1981) investigated a hypothesis test for the equality of two mean survival times when the paired observations are from the BBVE. Klein and Basu (1985) considered the BBVE distribution to obtain uniformly minimum variance unbiased estimators (UMVUEs) of the joint reliability function, $1-F(x,y)$, and examined the performance of the MLE and the jackknifed MLE. Hanagal and Kale (1991) obtained the MLEs of the three parameters, a test for independence (i.e., $\lambda_{12} = 0$), and the FIM in a single pair $(X,Y)$. Selection procedures were developed by Hanagal (1997) to select the better component, i.e., the one with the longer mean lifetime, when the lifetimes of the two components are BBVE random variables. Carlos and Jorge (2011) considered Type-I censored samples from the BBVE and applied Bayesian methods to estimate the parameters.

71 4.3 Fisher Information in Type-II Censored samples from BBVE

In this section, we provide explicit expressions for each element of the FIM in Type-II right and left censored samples from the BBVE. The FIM in a single pair, I(X,Y ), is also calculated. This is needed for the determination of the FIM in a complete sample and in

Type-II doubly censored samples. We will also present the expressions for entries of the limiting FIM in censored samples.

4.3.1 Right Censored Samples

Let $I_{1\cdots r:n}(\lambda)$ be the FIM in Type-II right censored bivariate samples,

$I_{1\cdots r:n}(\lambda) = \begin{pmatrix} I_{1\cdots r:n}(\lambda_1,\lambda_1) & I_{1\cdots r:n}(\lambda_1,\lambda_2) & I_{1\cdots r:n}(\lambda_1,\lambda_{12}) \\ I_{1\cdots r:n}(\lambda_1,\lambda_2) & I_{1\cdots r:n}(\lambda_2,\lambda_2) & I_{1\cdots r:n}(\lambda_2,\lambda_{12}) \\ I_{1\cdots r:n}(\lambda_1,\lambda_{12}) & I_{1\cdots r:n}(\lambda_2,\lambda_{12}) & I_{1\cdots r:n}(\lambda_{12},\lambda_{12}) \end{pmatrix}.$   (4.3.1)

n! Y Y L(λ; A) = f (x , y ) f (x , y )(1 − F (x ))n−r, (n − r)! 1 k k 2 k k X r k∈A k∈Ac

0 < x1 < ··· < xr1 < xr1+1 < ··· < xr,

c yi > xi > 0, i ∈ A; yi < xi, i ∈ A . (4.3.2)

72 The log-likelihood function is

$\ell(\lambda; A) = \log n! - \log(n-r)! + \sum_{k\in A}\log f_1(x_k,y_k) + \sum_{k\in A^c}\log f_2(x_k,y_k) + (n-r)\log(1-F_X(x_r)).$

The second-order partial derivatives of $\log f_1(x,y;\lambda)$, $\log f_2(x,y;\lambda)$, and $\log(1-F_X(x;\lambda))$ with respect to $\lambda_1$, $\lambda_2$, and $\lambda_{12}$ are as follows:

$\frac{\partial^2}{\partial\lambda_1^2}\log f_1(x,y;\lambda) = -\frac{1}{\lambda_1^2}-\frac{1}{\lambda^2}+\frac{1}{(\lambda_1+\lambda_2)^2} = -C^{(1)}_{1,1}$
$\frac{\partial^2}{\partial\lambda_1^2}\log f_2(x,y;\lambda) = -\frac{1}{\lambda^2}-\frac{1}{(\lambda_1+\lambda_{12})^2}+\frac{1}{(\lambda_1+\lambda_2)^2} = -C^{(2)}_{1,1}$
$\frac{\partial^2}{\partial\lambda_1^2}\log(1-F_X(x;\lambda)) = \frac{1}{(\lambda_1+\lambda_2)^2}-\frac{1}{(\lambda-\lambda_{12}e^{-\lambda_2 x})^2} = -C^{S}_{1,1}(x)$
$\frac{\partial^2}{\partial\lambda_2^2}\log f_1(x,y;\lambda) = -\frac{1}{\lambda^2}-\frac{1}{(\lambda_2+\lambda_{12})^2}+\frac{1}{(\lambda_1+\lambda_2)^2} = -C^{(1)}_{2,2}$
$\frac{\partial^2}{\partial\lambda_2^2}\log f_2(x,y;\lambda) = -\frac{1}{\lambda_2^2}-\frac{1}{\lambda^2}+\frac{1}{(\lambda_1+\lambda_2)^2} = -C^{(2)}_{2,2}$
$\frac{\partial^2}{\partial\lambda_2^2}\log(1-F_X(x;\lambda)) = \frac{1}{(\lambda_1+\lambda_2)^2}-\frac{\lambda\lambda_{12}x^2e^{-\lambda_2 x}+2\lambda_{12}xe^{-\lambda_2 x}+1}{(\lambda-\lambda_{12}e^{-\lambda_2 x})^2} = -C^{S}_{2,2}(x)$
$\frac{\partial^2}{\partial\lambda_{12}^2}\log f_1(x,y;\lambda) = -\frac{1}{\lambda^2}-\frac{1}{(\lambda_2+\lambda_{12})^2} = -C^{(1)}_{12,12}$
$\frac{\partial^2}{\partial\lambda_{12}^2}\log f_2(x,y;\lambda) = -\frac{1}{\lambda^2}-\frac{1}{(\lambda_1+\lambda_{12})^2} = -C^{(2)}_{12,12}$
$\frac{\partial^2}{\partial\lambda_{12}^2}\log(1-F_X(x;\lambda)) = -\frac{(1-e^{-\lambda_2 x})^2}{(\lambda-\lambda_{12}e^{-\lambda_2 x})^2} = -C^{S}_{12,12}(x)$
$\frac{\partial^2}{\partial\lambda_1\partial\lambda_2}\log f_1(x,y;\lambda) = -\frac{1}{\lambda^2}+\frac{1}{(\lambda_1+\lambda_2)^2} = -C^{(1)}_{1,2}$
$\frac{\partial^2}{\partial\lambda_1\partial\lambda_2}\log f_2(x,y;\lambda) = -\frac{1}{\lambda^2}+\frac{1}{(\lambda_1+\lambda_2)^2} = -C^{(2)}_{1,2}$
$\frac{\partial^2}{\partial\lambda_1\partial\lambda_2}\log(1-F_X(x;\lambda)) = \frac{1}{(\lambda_1+\lambda_2)^2}-\frac{1+\lambda_{12}xe^{-\lambda_2 x}}{(\lambda-\lambda_{12}e^{-\lambda_2 x})^2} = -C^{S}_{1,2}(x)$
$\frac{\partial^2}{\partial\lambda_1\partial\lambda_{12}}\log f_1(x,y;\lambda) = -\frac{1}{\lambda^2} = -C^{(1)}_{1,12}$
$\frac{\partial^2}{\partial\lambda_1\partial\lambda_{12}}\log f_2(x,y;\lambda) = -\frac{1}{\lambda^2}-\frac{1}{(\lambda_1+\lambda_{12})^2} = -C^{(2)}_{1,12}$
$\frac{\partial^2}{\partial\lambda_1\partial\lambda_{12}}\log(1-F_X(x;\lambda)) = -\frac{1-e^{-\lambda_2 x}}{(\lambda-\lambda_{12}e^{-\lambda_2 x})^2} = -C^{S}_{1,12}(x)$
$\frac{\partial^2}{\partial\lambda_2\partial\lambda_{12}}\log f_1(x,y;\lambda) = -\frac{1}{\lambda^2}-\frac{1}{(\lambda_2+\lambda_{12})^2} = -C^{(1)}_{2,12}$
$\frac{\partial^2}{\partial\lambda_2\partial\lambda_{12}}\log f_2(x,y;\lambda) = -\frac{1}{\lambda^2} = -C^{(2)}_{2,12}$
$\frac{\partial^2}{\partial\lambda_2\partial\lambda_{12}}\log(1-F_X(x;\lambda)) = \frac{\{(\lambda_1+\lambda_2)x+1\}e^{-\lambda_2 x}-1}{(\lambda-\lambda_{12}e^{-\lambda_2 x})^2} = -C^{S}_{2,12}(x),$

for $\lambda_1, \lambda_2 > 0$ and $\lambda_{12} \ge 0$.   (4.3.3)

In (4.3.3) we can see that every second-order derivative of $\log f_1(x,y;\lambda)$ and $\log f_2(x,y;\lambda)$ is free of $x$ and $y$, which simplifies $I_{1\cdots r:n}(\lambda_i,\lambda_j)$, $i, j = 1, 2$, or $12$. The entries of the FIM in Type-II right censored samples with a fixed $r_1$ are therefore given by

$I_{1\cdots r:n}(\lambda_i,\lambda_j; r_1)$
$= \sum_{A\in\mathcal{A}(r_1)}\int_0^{\infty}\!\!\cdots\!\int_0^{x_2}\Big\{r_1C^{(1)}_{i,j}+(r-r_1)C^{(2)}_{i,j}+(n-r)C^{S}_{i,j}(x_r)\Big\}\frac{n!}{(n-r)!}$
$\qquad\times\Big[\int_0^{\infty}\!\!\cdots\!\int_0^{\infty}\prod_{k\in A}f_1(x_k,y_k)\prod_{k\in A^c}f_2(x_k,y_k)\,dy_1\cdots dy_r\Big](1-F_X(x_r))^{n-r}\,dx_1\cdots dx_r$
$= \sum_{A\in\mathcal{A}(r_1)}\int_0^{\infty}\!\!\cdots\!\int_0^{x_2}\Big\{r_1C^{(1)}_{i,j}+(r-r_1)C^{(2)}_{i,j}+(n-r)C^{S}_{i,j}(x_r)\Big\}\frac{n!}{(n-r)!}\prod_{k\in A}f_1(x_k)\prod_{k\in A^c}f_2(x_k)\,(1-F_X(x_r))^{n-r}\,dx_1\cdots dx_r,$

where $\mathcal{A}(r_1)$ is the collection of all possible sets $A$ of fixed size $r_1$ and the $C_{i,j}$'s are given in (4.3.3). For example, when $r = 3$, $r_1$ can be 0, 1, 2, or 3, and then $\mathcal{A}(0) = \{\emptyset\}$, $\mathcal{A}(1) = \{\{1\},\{2\},\{3\}\}$, $\mathcal{A}(2) = \{\{1,2\},\{1,3\},\{2,3\}\}$, and $\mathcal{A}(3) = \{\{1,2,3\}\}$. Finally, each entry of (4.3.1) is given by

$I_{1\cdots r:n}(\lambda_i,\lambda_j) = \sum_{r_1=0}^{r} I_{1\cdots r:n}(\lambda_i,\lambda_j; r_1).$   (4.3.4)

After taking every possible subset $A$ of $\Omega = \{1, 2, \ldots, r\}$ into account and summing, (4.3.4) becomes

$I_{1\cdots r:n}(\lambda_i,\lambda_j) = E\Big[-\frac{\partial^2}{\partial\lambda_i\partial\lambda_j}\Big\{\sum_{k=1}^{r}\log f(x_k,y_k)+(n-r)\log(1-F_X(x_r))\Big\}\Big]$
$= C^{(1)}_{i,j}\sum_{k=1}^{r}\int_0^{\infty}k\binom{n}{k}F_X(x_k)^{k-1}f_1(x_k)(1-F_X(x_k))^{n-k}\,dx_k$
$\quad + C^{(2)}_{i,j}\sum_{k=1}^{r}\int_0^{\infty}k\binom{n}{k}F_X(x_k)^{k-1}f_2(x_k)(1-F_X(x_k))^{n-k}\,dx_k$
$\quad + (n-r)\int_0^{\infty}C^{S}_{i,j}(x_r)\,r\binom{n}{r}F_X(x_r)^{r-1}f(x_r)(1-F_X(x_r))^{n-r}\,dx_r,$

where

$f_1(x_k) = \int_{x_k}^{\infty}f_1(x_k,y_k)\,dy_k = \frac{\lambda_1\lambda}{\lambda_1+\lambda_2}e^{-\lambda x_k},$

$f_2(x_k) = \int_{0}^{x_k}f_2(x_k,y_k)\,dy_k = \frac{\lambda(\lambda_1+\lambda_{12})}{\lambda_1+\lambda_2}e^{-(\lambda_1+\lambda_{12})x_k}\big(1-e^{-\lambda_2 x_k}\big).$

For $i = j = 1$,

$I_{1\cdots r:n}(\lambda_1,\lambda_1)$
$= \Big(\frac{1}{\lambda_1^2}+\frac{1}{\lambda^2}-\frac{1}{(\lambda_1+\lambda_2)^2}\Big)\sum_{k=1}^{r}\int_0^{\infty}k\binom{n}{k}F_X(x)^{k-1}f_1(x)(1-F_X(x))^{n-k}\,dx$
$\quad + \Big(\frac{1}{(\lambda_1+\lambda_{12})^2}+\frac{1}{\lambda^2}-\frac{1}{(\lambda_1+\lambda_2)^2}\Big)\sum_{k=1}^{r}\int_0^{\infty}k\binom{n}{k}F_X(x)^{k-1}f_2(x)(1-F_X(x))^{n-k}\,dx$
$\quad + (n-r)\int_0^{\infty}\Big(\frac{1}{(\lambda-\lambda_{12}e^{-\lambda_2 x})^2}-\frac{1}{(\lambda_1+\lambda_2)^2}\Big)r\binom{n}{r}F_X(x)^{r-1}f(x)(1-F_X(x))^{n-r}\,dx$
$= \frac{r}{\lambda^2}-\frac{n}{(\lambda_1+\lambda_2)^2}+(n-r)\,E_{r:n}\Big(\frac{1}{(\lambda-\lambda_{12}e^{-\lambda_2 X})^2}\Big)+\sum_{k=1}^{r}E_{k:n}\Big(\frac{1}{\lambda_1^2}\frac{f_1(X)}{f(X)}+\frac{1}{(\lambda_1+\lambda_{12})^2}\frac{f_2(X)}{f(X)}\Big)$
$= \frac{r}{\lambda^2}-\frac{n}{(\lambda_1+\lambda_2)^2}+(n-r)\,E_{r:n}\Big(\frac{1}{(\lambda-\lambda_{12}e^{-\lambda_2 X})^2}\Big)+\frac{\lambda}{\lambda_1(\lambda_1+\lambda_2)(\lambda_1+\lambda_{12})}\sum_{k=1}^{r}E_{k:n}\Big(\frac{\lambda_{12}e^{-\lambda X}+\lambda_1 e^{-(\lambda_1+\lambda_{12})X}}{f(X)}\Big).$

In the same way, the other five distinct entries of the FIM in Type-II right censored samples from the BBVE, for $1 \le r \le n$, simplify to

$I_{1\cdots r:n}(\lambda_2,\lambda_2) = \frac{r}{\lambda^2}-\frac{n}{(\lambda_1+\lambda_2)^2}+\frac{\lambda(\lambda_1+\lambda_{12})}{\lambda_2^2(\lambda_1+\lambda_2)}\sum_{k=1}^{r}E_{k:n}\Big(\frac{e^{-(\lambda_1+\lambda_{12})X}}{f(X)}\Big)$
$\quad - \lambda\lambda_{12}\,\frac{2\lambda_1\lambda_2+\lambda_1\lambda_{12}+(\lambda_2+\lambda_{12})^2}{(\lambda_1+\lambda_2)\lambda_2^2(\lambda_2+\lambda_{12})^2}\sum_{k=1}^{r}E_{k:n}\Big(\frac{e^{-\lambda X}}{f(X)}\Big)+(n-r)\,E_{r:n}\Big(\frac{\lambda_{12}(\lambda X^2+2X)e^{-\lambda_2 X}+1}{(\lambda-\lambda_{12}e^{-\lambda_2 X})^2}\Big),$

$I_{1\cdots r:n}(\lambda_{12},\lambda_{12}) = \frac{r}{\lambda^2}+\frac{\lambda\big(\lambda_1(\lambda_1+\lambda_{12})-(\lambda_2+\lambda_{12})^2\big)}{(\lambda_1+\lambda_2)(\lambda_1+\lambda_{12})(\lambda_2+\lambda_{12})^2}\sum_{k=1}^{r}E_{k:n}\Big(\frac{e^{-\lambda X}}{f(X)}\Big)$
$\quad + \frac{\lambda}{(\lambda_1+\lambda_2)(\lambda_1+\lambda_{12})}\sum_{i=1}^{r}E_{i:n}\Big(\frac{e^{-(\lambda_1+\lambda_{12})X}}{f(X)}\Big)+(n-r)\,E_{r:n}\Big(\frac{1+e^{-2\lambda_2 X}-2e^{-\lambda_2 X}}{(\lambda-\lambda_{12}e^{-\lambda_2 X})^2}\Big),$

$I_{1\cdots r:n}(\lambda_1,\lambda_2) = \frac{r}{\lambda^2}-\frac{n}{(\lambda_1+\lambda_2)^2}+(n-r)\,E_{r:n}\Big(\frac{\lambda_{12}Xe^{-\lambda_2 X}+1}{(\lambda-\lambda_{12}e^{-\lambda_2 X})^2}\Big),$

$I_{1\cdots r:n}(\lambda_1,\lambda_{12}) = \frac{r}{\lambda^2}+\frac{\lambda}{(\lambda_1+\lambda_2)(\lambda_1+\lambda_{12})}\sum_{k=1}^{r}E_{k:n}\Big(\frac{e^{-(\lambda_1+\lambda_{12})X}-e^{-\lambda X}}{f(X)}\Big)+(n-r)\,E_{r:n}\Big(\frac{1-e^{-\lambda_2 X}}{(\lambda-\lambda_{12}e^{-\lambda_2 X})^2}\Big),$

$I_{1\cdots r:n}(\lambda_2,\lambda_{12}) = \frac{r}{\lambda^2}+\frac{\lambda_1\lambda}{(\lambda_2+\lambda_{12})^2(\lambda_1+\lambda_2)}\sum_{k=1}^{r}E_{k:n}\Big(\frac{e^{-\lambda X}}{f(X)}\Big)+(n-r)\,E_{r:n}\Big(\frac{1-(1+(\lambda_1+\lambda_2)X)e^{-\lambda_2 X}}{(\lambda-\lambda_{12}e^{-\lambda_2 X})^2}\Big).$   (4.3.5)

The expectations in (4.3.5) can be replaced by simpler ones by using the following recurrence relation and the binomial theorem:

• $E_{r:n}(h(X)) = \sum_{m=n-r+1}^{n}(-1)^{m-n+r-1}\binom{n}{m}\binom{m-1}{n-r}E_{1:m}(h(X)).$

• $(a+b)^n = \sum_{m=0}^{n}\binom{n}{m}a^m b^{n-m}.$

This leads to the following conclusion.

Theorem 4.3.1. For 1 ≤ r ≤ n − 2,

r n n − r I1···r:n(λ1, λ1) = 2 − 2 + 2 U2(2(λ1 + λ12); λ, r, n) λ (λ1 + λ2) (λ1 + λ2) r λ X + {λ U (λ; λ, i, n) + λ U (λ + λ ; λ, i, n)} , λ (λ + λ )(λ + λ ) 12 1 1 1 1 12 1 1 2 1 12 i=1

77 r r n λ(λ1 + λ12) X I (λ , λ ) = − + U (λ + λ ; λ, i, n) 1···r:n 2 2 λ2 (λ + λ )2 λ2(λ + λ ) 1 1 12 1 2 2 1 2 i=1 2 r 2λ1λ2 + λ1λ12 + (λ2 + λ12) X − λλ U (λ; λ, i, n) 12 (λ + λ )λ2(λ + λ )2 1 1 2 2 2 12 i=1 n − r  + 2 λ12λU4(2λ1 + λ2 + 2λ12; λ, r, n) (λ1 + λ2)  + 2λ12U3(2λ1 + λ2 + 2λ12; λ, r, n) + U2(2(λ1 + λ12); λ, r, n) ,

2 r r λ(λ1(λ1 + λ12) − (λ2 + λ12) ) X I (λ , λ ) = + U (λ; λ, i, n) 1···r:n 12 12 λ2 (λ + λ )(λ + λ )(λ + λ )2 1 1 2 1 12 2 12 i=1 r λ X + U (λ + λ ; λ, i, n) (λ + λ )(λ + λ ) 1 1 12 1 2 1 12 i=1 n − r  + 2 Er:n U2(2(λ1 + λ12); λ, r, n) + U2(2λ; λ, r, n) (λ1 + λ2)  − 2U2(2λ1 + λ2 + 2λ12; λ, r, n) ,

r n n − r  I1···r:n(λ1, λ2) = 2 − 2 + 2 λ12U3(2λ1 + λ2 + 2λ12; λ, r, n) λ (λ1 + λ2) (λ1 + λ2)  + U2(2(λ1 + λ12); λ, r, n) ,

r r λ X   I (λ , λ ) = + U (λ + λ ; λ, i, n) − U (λ; λ, i, n) 1···r:n 1 12 λ2 (λ + λ )(λ + λ ) 1 1 12 1 1 2 1 12 i=1 n − r   + 2 U2(2(λ1 + λ12); λ, r, n) − U2(2λ1 + λ2 + 2λ12; λ, r, n) , (λ1 + λ2)

r r λ1λ X I (λ , λ ) = + U (λ; λ, i, n) 1···r:n 2 12 λ2 (λ + λ )2(λ + λ ) 1 2 12 1 2 i=1 n − r  + 2 U2(2(λ1 + λ12); λ, r, n) − U1(2λ1 + λ2 + 2λ12; λ, r, n) (λ1 + λ2)  + (λ1 + λ2)U3(2λ1 + λ2 + 2λ12; λ, r, n) , (4.3.6)

78 and for r = n − 1,

r n n − r I1···r:n(λ1, λ1) = 2 − 2 + 2 U2(2(λ1 + λ12); λ, r, n) λ (λ1 + λ2) (λ1 + λ2) r λ X + {λ U (λ; λ, i, n) + λ U (λ + λ ; λ, i, n)} , λ (λ + λ )(λ + λ ) 12 1 1 1 1 12 1 1 2 1 12 i=1

r r n λ(λ1 + λ12) X I (λ , λ ) = − + U (λ + λ ; λ, i, n) 1···r:n 2 2 λ2 (λ + λ )2 λ2(λ + λ ) 1 1 12 1 2 2 1 2 i=1 2 r 2λ1λ2 + λ1λ12 + (λ2 + λ12) X − λλ U (λ; λ, i, n) 12 (λ + λ )λ2(λ + λ )2 1 1 2 2 2 12 i=1 n − r  + 2 λ12λU4(2λ1 + λ2 + 2λ12; λ, r, n) (λ1 + λ2)  + 2λ12U3(2λ1 + λ2 + 2λ12; λ, r, n) + U2(2(λ1 + λ12); λ, r, n) ,

2 r r λ(λ1(λ1 + λ12) − (λ2 + λ12) ) X I (λ , λ ) = + U (λ; λ, i, n) 1···r:n 12 12 λ2 (λ + λ )(λ + λ )(λ + λ )2 1 1 2 1 12 2 12 i=1 r λ X + U (λ + λ ; λ, i, n) (λ + λ )(λ + λ ) 1 1 12 1 2 1 12 i=1 n − r  + 2 Er:n U2(2(λ1 + λ12); λ, r, n) + U2(2λ; λ, r, n) (λ1 + λ2)  − 2U2(2λ1 + λ2 + 2λ12; λ, r, n) ,

r n n − r  I1···r:n(λ1, λ2) = 2 − 2 + 2 λ12U3(2λ1 + λ2 + 2λ12; λ, r, n) λ (λ1 + λ2) (λ1 + λ2)  + U2(2(λ1 + λ12); λ, r, n) ,

r r λ X   I (λ , λ ) = + U (λ + λ ; λ, i, n) − U (λ; λ, i, n) 1···r:n 1 12 λ2 (λ + λ )(λ + λ ) 1 1 12 1 1 2 1 12 i=1 n − r   + 2 U2(2(λ1 + λ12); λ, r, n) − U2(2λ1 + λ2 + 2λ12; λ, r, n) , (λ1 + λ2)

79 r r λ1λ X I (λ , λ ) = + U (λ; λ, i, n) 1···r:n 2 12 λ2 (λ + λ )2(λ + λ ) 1 2 12 1 2 i=1 n − r  + 2 U2(2(λ1 + λ12); λ, r, n) − U1(2λ1 + λ2 + 2λ12; λ, r, n) (λ1 + λ2)  + (λ1 + λ2)U3(2λ1 + λ2 + 2λ12; λ, r, n) , where

 e−kX  U (k; λ, r, n) = E 1 r:n f(X) n i−1 X ni − 1 X i − 1 = (−1)i−n+r−1 i cj(1 − c)i−1−j i n − r j i=n−r+1 j=0  1  × , k + j(λ1 + λ12) + (i − 1 − j)λ  e−kX  U (k; λ, r, n) = E 2 r:n (1 − F (X))2 n i−3 X ni − 1 X i − 3 = (−1)i−n+r−1 i cj(1 − c)i−3−j i n − r j i=n−r+1 j=0  c(λ + λ ) × 1 12 k + (j + 1)(λ1 + λ12) + (i − 3 − j)λ (1 − c)λ  + , r < n − 1; k + j(λ1 + λ12) + (i − 2 − j)λ  e−kX  U (k; λ, (n − 1), n) = E 2 n−1:n (1 − F (X))2 n i−2 X ni − 1 X i − 2 = (−1)i−2 i cj(1 − c)i−2−j i 1 j i=2 j=0 Z ∞ f(x) × e−(k+j(λ1+λ12)+(i−2−j)λ)x dx, 0 1 − F (x)

80  Xe−kX  U (k; λ, r, n) = E 3 r:n (1 − F (X))2 n i−3 X ni − 1 X i − 3 = (−1)i−n+r−1 i cj(1 − c)i−3−j i n − r j i=n−r+1 j=0  c(λ1 + λ12) × 2 (k + (j + 1)(λ1 + λ12) + (i − 3 − j)λ) (1 − c)λ  + 2 , r < n − 1; (k + j(λ1 + λ12) + (i − 2 − j)λ)

n i−2 X ni − 1 X i − 2 U (k; λ, (n − 1), n) = (−1)i−2 i cj(1 − c)i−2−j 3 i 1 j i=2 j=0 Z ∞ f(x) × xe−(k+j(λ1+λ12)+(i−2−j)λ)x dx, 0 1 − F (x)

and,  X2e−kX  U (k; λ, r, n) = E 4 r:n (1 − F (X))2 n i−3 X ni − 1 X i − 3 = (−1)i−n+r−1 i cj(1 − c)i−3−j i n − r j i=n−r+1 j=0  2c(λ1 + λ12) × 3 (k + (j + 1)(λ1 + λ12) + (i − 3 − j)λ) 2(1 − c)λ  + 3 , r < n − 1; (k + j(λ1 + λ12) + (i − 2 − j)λ)  X2e−kX  U (k; λ, (n − 1), n) = E 4 r:n (1 − F (X))2 n i−2 X ni − 1 X i − 2 = (−1)i−2 i cj(1 − c)i−2−j i 1 j i=2 j=0 Z ∞ f(x) × x2e−(k+j(λ1+λ12)+(i−2−j)λ)x dx, 0 1 − F (x)

where c = λ = 1 + λ12 . λ1+λ2 λ1+λ2

For $1 \le r < n$, we can also use (1.1.11) to compute $I_{1\cdots r:n}(\lambda_i,\lambda_j)$, or to check the results from (4.3.5). Consider the joint pdf of the first order statistic and its concomitant,

$f_{1:k}(x, y, r_1; \lambda) = k\,f_1(x,y)^{r_1}f_2(x,y)^{1-r_1}(1-F_X(x))^{k-1},$   (4.3.7)

where $r_1$ is a Bernoulli random variable with mean $\lambda_1/(\lambda_1+\lambda_2)$. The log-likelihood function is

$\ell_{1:k}(\lambda; x, y, r_1) = \log k + r_1\log f_1(x_1,y_1) + (1-r_1)\log f_2(x_1,y_1) + (k-1)\log(1-F_X(x_1)).$

Then the FI in the first order statistic and its concomitant with r1 from BBVE of various sample sizes is given by

$I_{1:k}(\lambda_i,\lambda_j; r_1)$
$= \int_0^{\infty}\!\!\int_0^{\infty}\Big[-\frac{\partial^2}{\partial\lambda_i\partial\lambda_j}\big\{r_1\log f_1(x_1,y_1)+(1-r_1)\log f_2(x_1,y_1)+(k-1)\log(1-F_X(x_1))\big\}\Big]$
$\qquad\times k\,f_1(x_1,y_1)^{r_1}f_2(x_1,y_1)^{1-r_1}(1-F_X(x_1))^{k-1}\,dy_1\,dx_1$
$= \int_0^{\infty}\Big[r_1C^{(1)}_{i,j}+(1-r_1)C^{(2)}_{i,j}+(k-1)C^{S}_{i,j}(x_1)\Big]\,k\Big[\int_0^{\infty}f_1(x_1,y_1)^{r_1}f_2(x_1,y_1)^{1-r_1}\,dy_1\Big](1-F_X(x_1))^{k-1}\,dx_1.$

Hence

$I_{1:k}(\lambda_i,\lambda_j) = \sum_{r_1=0}^{1}I_{1:k}(\lambda_i,\lambda_j; r_1)$
$= \int_0^{\infty}\Big[C^{(2)}_{i,j}+(k-1)C^{S}_{i,j}(x_1)\Big]\,k\int_0^{x_1}f_2(x_1,y_1)\,dy_1\,(1-F_X(x_1))^{k-1}\,dx_1$
$\quad + \int_0^{\infty}\Big[C^{(1)}_{i,j}+(k-1)C^{S}_{i,j}(x_1)\Big]\,k\int_{x_1}^{\infty}f_1(x_1,y_1)\,dy_1\,(1-F_X(x_1))^{k-1}\,dx_1$
$= \int_0^{\infty}\Big[C^{(2)}_{i,j}+(k-1)C^{S}_{i,j}(x_1)\Big]\,k f_2(x_1)(1-F_X(x_1))^{k-1}\,dx_1$
$\quad + \int_0^{\infty}\Big[C^{(1)}_{i,j}+(k-1)C^{S}_{i,j}(x_1)\Big]\,k f_1(x_1)(1-F_X(x_1))^{k-1}\,dx_1.$   (4.3.8)

We can readily obtain $I_{1\cdots r:n}(\lambda_i,\lambda_j)$ by substituting (4.3.8) for $I_{1:k}(\lambda_i,\lambda_j)$ in (1.1.11). Since (1.1.11) holds only for $1 \le r \le n-1$, we need to compute separately the FI in a complete sample, $I_{1\cdots n:n}(\lambda_i,\lambda_j)$. Note that

$I_{1\cdots n:n}(\lambda_i,\lambda_j) = n\,I(X,Y;\lambda_i,\lambda_j),$   (4.3.9)

where the FI in a single pair from the BBVE is

$I(X,Y;\lambda_i,\lambda_j)$
$= \sum_{r_1=0}^{1}\int_0^{\infty}\!\!\int_0^{\infty}\big\{r_1C^{(1)}_{i,j}+(1-r_1)C^{(2)}_{i,j}\big\}f_1^{r_1}(x,y)f_2^{1-r_1}(x,y)\,dy\,dx$
$= C^{(2)}_{i,j}\int_0^{\infty}f_2(x)\,dx + C^{(1)}_{i,j}\int_0^{\infty}f_1(x)\,dx$
$= \frac{\lambda_2}{\lambda_1+\lambda_2}\,C^{(2)}_{i,j} + \frac{\lambda_1}{\lambda_1+\lambda_2}\,C^{(1)}_{i,j}.$   (4.3.10)

Thus (1.1.11), evaluated by combining (4.3.8) (with the appropriate multipliers) and (4.3.9), can be used to check the numerical results from (4.3.5); however, as pointed out by Park (1996), (1.1.11) is subject to accumulation of errors.
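A small R sketch (ours) of the single-pair FIM in (4.3.10), built from the constants $C^{(1)}_{i,j}$ and $C^{(2)}_{i,j}$ given in (4.3.3); by (4.3.9) the complete-sample FIM is n times this matrix. The parameter values below are arbitrary illustrative choices:

# FIM in a single pair (X, Y) from the BBVE via (4.3.10):
# weighted combination of the constants C^(1) and C^(2) in (4.3.3),
# with weights P(Y > X) = l1/(l1+l2) and P(X > Y) = l2/(l1+l2).
bbve.fim.single <- function(l1, l2, l12) {
  l <- l1 + l2 + l12
  C1 <- matrix(c(1/l1^2 + 1/l^2 - 1/(l1+l2)^2, 1/l^2 - 1/(l1+l2)^2,                1/l^2,
                 1/l^2 - 1/(l1+l2)^2,          1/l^2 + 1/(l2+l12)^2 - 1/(l1+l2)^2, 1/l^2 + 1/(l2+l12)^2,
                 1/l^2,                        1/l^2 + 1/(l2+l12)^2,               1/l^2 + 1/(l2+l12)^2),
               nrow = 3, byrow = TRUE)                        # from log f1
  C2 <- matrix(c(1/l^2 + 1/(l1+l12)^2 - 1/(l1+l2)^2, 1/l^2 - 1/(l1+l2)^2,                1/l^2 + 1/(l1+l12)^2,
                 1/l^2 - 1/(l1+l2)^2,                1/l2^2 + 1/l^2 - 1/(l1+l2)^2,       1/l^2,
                 1/l^2 + 1/(l1+l12)^2,               1/l^2,                              1/l^2 + 1/(l1+l12)^2),
               nrow = 3, byrow = TRUE)                        # from log f2
  (l1 * C1 + l2 * C2) / (l1 + l2)                             # (4.3.10)
}

I.pair <- bbve.fim.single(1, 2, 0.5)   # hypothetical parameter values
n <- 10
I.complete <- n * I.pair               # (4.3.9): FIM in a complete sample of size n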

4.3.2 Left Censored Samples

Now we use (1.1.15) to obtain the elements of the FIM in Type-II left censored samples from the BBVE; for $1 \le s \le n$, denote

$I_{s\cdots n:n}(\lambda) = \begin{pmatrix} I_{s\cdots n:n}(\lambda_1,\lambda_1) & I_{s\cdots n:n}(\lambda_1,\lambda_2) & I_{s\cdots n:n}(\lambda_1,\lambda_{12}) \\ I_{s\cdots n:n}(\lambda_1,\lambda_2) & I_{s\cdots n:n}(\lambda_2,\lambda_2) & I_{s\cdots n:n}(\lambda_2,\lambda_{12}) \\ I_{s\cdots n:n}(\lambda_1,\lambda_{12}) & I_{s\cdots n:n}(\lambda_2,\lambda_{12}) & I_{s\cdots n:n}(\lambda_{12},\lambda_{12}) \end{pmatrix}.$   (4.3.11)

The likelihood function of the Type-II left censored sample $(X(s,n), Y[s,n])$ is given by

$L(\lambda; B, s_1) = \frac{n!}{(s-1)!}F_X(x_s)^{s-1}\prod_{k\in B}f_1(x_k,y_k)\prod_{k\in B^c}f_2(x_k,y_k),$

$x_s < x_{s+1} < \cdots < x_n,$

$y_k > x_k,\ k\in B; \qquad y_k < x_k,\ k\in B^c,$   (4.3.12)

where $B = \{k : y_k > x_k > 0\} \subseteq \Omega = \{s, s+1, \ldots, n\}$ and the number of elements in $B$ is $s_1$. If $\frac{\partial^2}{\partial\lambda_i\partial\lambda_j}F_X(x;\lambda)$ exists for $i, j = 1, 2, 12$, the following relation between $\frac{\partial^2}{\partial\lambda_i\partial\lambda_j}\log(1-F_X(x;\lambda))$ and $\frac{\partial^2}{\partial\lambda_i\partial\lambda_j}\log F_X(x;\lambda)$ holds:

$\frac{\partial^2}{\partial\lambda_i\partial\lambda_j}\log F_X(x;\lambda) = \frac{1}{F_X^2(x;\lambda)}\Big[\Big(\frac{\partial^2}{\partial\lambda_i\partial\lambda_j}\log(1-F_X(x;\lambda))\Big)(1-F_X(x;\lambda))^2 + \frac{\partial^2}{\partial\lambda_i\partial\lambda_j}F_X(x;\lambda)\Big].$

The second derivatives of the logarithm of the cdf with respect to λ1, λ2, and λ12 are as follows. Write D(x) = λ1 + λ2 − λe^{−(λ1+λ12)x} + λ12e^{−λx} = (λ1 + λ2)F(x). Then

∂² log F(x)/∂λ1²
  = 1/(λ1 + λ2)² + [ (2x − λx²)e^{−(λ1+λ12)x} + λ12x²e^{−λx} ]/D(x)
    − [ ( 1 + (λx − 1)e^{−(λ1+λ12)x} − λ12xe^{−λx} )/D(x) ]²  = −C^{F}_{1,1}(x);

∂² log F(x)/∂λ2²
  = 1/(λ1 + λ2)² + λ12x²e^{−λx}/D(x)
    − [ ( 1 − e^{−(λ1+λ12)x} − λ12xe^{−λx} )/D(x) ]²  = −C^{F}_{2,2}(x);

∂² log F(x)/∂λ12²
  = [ (2x − λx²)e^{−(λ1+λ12)x} − (2x − λ12x²)e^{−λx} ]/D(x)
    − [ ( (λx − 1)e^{−(λ1+λ12)x} + (1 − λ12x)e^{−λx} )/D(x) ]²  = −C^{F}_{12,12}(x);

∂² log F(x)/∂λ1∂λ2
  = 1/(λ1 + λ2)² + [ xe^{−(λ1+λ12)x} + λ12x²e^{−λx} ]/D(x)
    − [ 1 + (λx − 1)e^{−(λ1+λ12)x} − λ12xe^{−λx} ][ 1 − e^{−(λ1+λ12)x} − λ12xe^{−λx} ]/D(x)²  = −C^{F}_{1,2}(x);

∂² log F(x)/∂λ1∂λ12
  = [ (2x − λx²)e^{−(λ1+λ12)x} − (x − λ12x²)e^{−λx} ]/D(x)
    − [ 1 + (λx − 1)e^{−(λ1+λ12)x} − λ12xe^{−λx} ][ (λx − 1)e^{−(λ1+λ12)x} + (1 − λ12x)e^{−λx} ]/D(x)²  = −C^{F}_{1,12}(x);

∂² log F(x)/∂λ2∂λ12
  = [ xe^{−(λ1+λ12)x} − (x − λ12x²)e^{−λx} ]/D(x)
    − [ 1 − e^{−(λ1+λ12)x} − λ12xe^{−λx} ][ (λx − 1)e^{−(λ1+λ12)x} + (1 − λ12x)e^{−λx} ]/D(x)²  = −C^{F}_{2,12}(x).     (4.3.13)

Using the above expressions for the second derivatives of the log cdf,

Is···n:n(λi, λj; s1)
  = ∫_0^∞ ··· ∫_0^{x_{s+1}} Σ_{B∈B(s1)} { (s − 1)C^{F}_{i,j}(x_s) + s1 C^{(1)}_{i,j} + (n − s + 1 − s1)C^{(2)}_{i,j} }
      × (n!/(s − 1)!) { ∫_0^∞ ··· ∫_0^∞ Π_{k∈B} f1(x_k, y_k) Π_{k∈B^c} f2(x_k, y_k) dy_s ··· dy_n } F(x_s)^{s−1} dx_s ··· dx_n
  = ∫_0^∞ ··· ∫_0^{x_{s+1}} Σ_{B∈B(s1)} { (s − 1)C^{F}_{i,j}(x_s) + s1 C^{(1)}_{i,j} + (n − s + 1 − s1)C^{(2)}_{i,j} }
      × (n!/(s − 1)!) Π_{k∈B} f1(x_k) Π_{k∈B^c} f2(x_k) F(x_s)^{s−1} dx_s ··· dx_n,

where B(s1) denotes the collection of all possible sets B with s1 elements, for 0 ≤ s1 ≤ s. Thus the FI in Type-II left censored samples from the BBVE is

Is···n:n(λi, λj) = Σ_{s1=0}^{s} Is···n:n(λi, λj; s1).

Continuing in this manner we obtain an expression for each element of (4.3.11); it is given by

Is···n:n(λi, λj)
  = E[ −∂²/∂λi∂λj { (s − 1) log F_X(X_{s:n}) + Σ_{k=s}^{n} log f(X_{k:n}, Y_{[k:n]}) } ]
  = (s − 1) ∫_0^∞ C^{F}_{i,j}(x_s) s \binom{n}{s} f(x_s) F(x_s)^{s−1} (1 − F(x_s))^{n−s} dx_s
    + C^{(1)}_{i,j} Σ_{k=s}^{n} ∫_0^∞ k \binom{n}{k} F_X(x_k)^{k−1} f1(x_k) (1 − F_X(x_k))^{n−k} dx_k
    + C^{(2)}_{i,j} Σ_{k=s}^{n} ∫_0^∞ k \binom{n}{k} F_X(x_k)^{k−1} f2(x_k) (1 − F_X(x_k))^{n−k} dx_k,

since C^{F}_{i,j} depends only on the s-th smallest order statistic, for all i and j. Upon substituting the values of the C_{i,j}'s given in (4.3.3) and (4.3.13), we obtain

Is···n:n(λ1, λ1) " 2 n − s + 1 n Z ∞ 1 + xλe−(λ1+λ12)x − e−(λ1+λ12)x − xλ e−λx  = − + 12 2 2 −(λ1+λ12)x −λx λ (λ1 + λ2) 0 λ1 + λ2 − λe + λ12e

−(λ1+λ12)x 2 −(λ1+λ12)x 2 −λx  2xe − x λe + x λ12e n! s−1 ¯ n−s − −(λ +λ )x −λx f(x)F (x) F (x) dx λ1 + λ2 − λe 1 12 + λ12e (s − 1)!(n − s)! n Z ∞ −(λ1+λ12)xi −λ2xi  X λe (λ1 + λ12e ) n! + F (x )i−1F¯(x )n−idx ; λ (λ + λ )(λ + λ ) (i − 1)!(n − i)! i i i i=s 0 1 1 2 1 12

86 Is···n:n(λ2, λ2) " 2 n − s + 1 n Z ∞  1 − e−(λ1+λ12)x − xλ e−λx  = − + 12 2 2 −(λ1+λ12)x −λx λ (λ1 + λ2) 0 λ1 + λ2 − λe + λ12e 2 −λx  x λ12e n! s−1 ¯ n−s − −(λ +λ )x −λx × f(x)F (x) F (x) dx λ1 + λ2 − λe 1 12 + λ12e (s − 1)!(n − s)! n Z ∞   X λ1λ λ(λ1 + λ12) + e−λxi + e−(λ1+λ12)xi − e−λxi  (λ + λ )2(λ + λ ) λ2(λ + λ ) i=s 0 2 12 1 2 2 1 2 n!  × F (x )i−1F¯(x )n−idx ; (i − 1)!(n − i)! i i i

Is···n:n(λ12, λ12) " 2 n − s + 1 Z ∞ −e−(λ1+λ12)x + xλe−(λ1+λ12)x + e−λx − xλ e−λx  = + 12 2 −(λ1+λ12)x −λx λ 0 λ1 + λ2 − λe + λ12e

−(λ1+λ12)x 2 −(λ1+λ12)x −λx  2xe − x λe − 2xe n! s−1 ¯ n−s − −(λ +λ )x −λx f(x)F (x) F (x) dx λ1 + λ2 − λe 1 12 + λ12e (s − 1)!(n − s)! n Z ∞  −λxi −(λ1+λ12)xi −λxi  X λ1λe λ(λ1 + λ12)(e − e ) + + (λ + λ )2(λ + λ ) (λ + λ )2(λ + λ ) i=s 0 2 12 1 2 1 12 1 2 n!  × F (x )i−1F¯(x )n−idx ; (i − 1)!(n − i)! i i i

Is···n:n(λ1, λ2) " 2 n − s + 1 n Z ∞  1 − e−(λ1+λ12)x − λ xe−λx  = − + 12 2 2 −(λ1+λ12)x −λx λ (λ1 + λ2) 0 λ1 + λ2 − λe + λ12e

 −λx −(λ1+λ12)x 2 −λx 2 −λx # λ12 (1 − e )(xe + λ12x e ) − λx e + −(λ +λ )x −λx 2 (λ1 + λ2 − λe 1 12 + λ12e ) n! × f(x)F (x)s−1F¯(x)n−sdx; (s − 1)!(n − s)!

87 Is···n:n(λ1, λ12) ( 2 n − s + 1 Z ∞ λxe−(λ1+λ12)x − e−(λ1+λ12)x − λ xe−λx  = + 12 2 −(λ1+λ12)x −λx λ 0 λ1 + λ2 − λe + λ12e

−λx −(λ1+λ12)x −(λ1+λ12)x −λx −λx (1 + e )(λxe − e − λ12xe ) + e + −(λ +λ )x −λx 2 (λ1 + λ2 − λe 1 12 + λ12e ) −(λ1+λ12)x 2 −(λ1+λ12)x −λx 2 −λx  2xe − x λe − xe + x λ12e − −(λ +λ )x −λx λ1 + λ2 − λe 1 12 + λ12e n! × f(x)F (x)s−1F¯(x)n−sdx (s − 1)!(n − s)! n " ∞ −(λ +λ )x −λx  # X Z λ e 1 12 i − e i n! + × F (x )i−1F¯(x )n−idx ; (λ + λ )(λ + λ ) (i − 1)!(n − i)! i i i i=s 0 1 2 1 12

Is···n:n(λ2, λ12) ( 2 n − s + 1 Z ∞  e−(λ1+λ12)x + λ xe−λx  = + 12 2 −(λ1+λ12)x −λx λ 0 λ1 + λ2 − λe + λ12e

−(λ1+λ12)x −λx −(λ1+λ12)x −λx −(λ1+λ12)x −λx (e + λ12xe )(1 + λxe + e ) − (λxe + e ) − −(λ +λ )x −λx 2 (λ1 + λ2 − λe 1 12 + λ12e ) −(λ1+λ12)x −λx 2 −λx  xe − xe + x λ12e n! s−1 ¯ n−s − −(λ +λ )x −λx × f(x)F (x) F (x) dx λ1 + λ2 − λe 1 12 + λ12e (s − 1)!(n − s)! n Z ∞ −λxi  X λ1λe n! + × F (x )i−1F¯(x )n−idx , (4.3.14) (λ + λ )2(λ + λ ) (i − 1)!(n − i)! i i i i=s 0 2 12 1 2 where F¯(x) = 1 − F (x).

We may also use (1.1.15) to obtain the FIM given in (4.3.11).

The joint pdf of the last order statistic and its concomitant is

f_{k:k}(x_k, y_k, s1; λ) = k F_X(x_k)^{k−1} f1(x_k, y_k)^{s1} f2(x_k, y_k)^{1−s1},

where s1 = 1 if x_k < y_k and s1 = 0 if x_k > y_k.

Hence

I_{k:k}(λi, λj; s1)
  = − ∫_0^∞ ∫_0^∞ [ ∂²/∂λi∂λj { (k − 1) log F_X(x_k) + s1 log f1(x_k, y_k) + (1 − s1) log f2(x_k, y_k) } ]
      × k F_X(x_k)^{k−1} f1(x_k, y_k)^{s1} f2(x_k, y_k)^{1−s1} dy_k dx_k
  = ∫_0^∞ [ (k − 1)C^{F}_{i,j}(x_k) + s1 C^{(1)}_{i,j} + (1 − s1)C^{(2)}_{i,j} ] k { ∫_0^∞ f1(x_k, y_k)^{s1} f2(x_k, y_k)^{1−s1} dy_k }
      × F_X(x_k)^{k−1} dx_k.

We now obtain I_{k:k}(λi, λj), which is given by

I_{k:k}(λi, λj)
  = Σ_{s1=0}^{1} I_{k:k}(λi, λj; s1)
  = ∫_0^∞ [ (k − 1)C^{F}_{i,j}(x_k) + C^{(2)}_{i,j} ] k F_X(x_k)^{k−1} f2(x_k) dx_k
    + ∫_0^∞ [ (k − 1)C^{F}_{i,j}(x_k) + C^{(1)}_{i,j} ] k F_X(x_k)^{k−1} f1(x_k) dx_k.

The expressions for Ik:k(λi, λj) for i, j = 1, 2 or 12 are the following:

Ik:k(λ1, λ1) ( 2 1 k Z ∞ 1 + xλe−(λ1+λ12)x − e−(λ1+λ12)x − xλ e−λx  = − + k(k − 1) 12 2 2 −(λ1+λ12)x −λx λ (λ1 + λ2) 0 λ1 + λ2 − λe + λ12e

−(λ1+λ12)x 2 −(λ1+λ12)x 2 −λx  2xe − x λe + x λ12e k−1 − −(λ +λ )x −λx f(x)F (x) dx λ1 + λ2 − λe 1 12 + λ12e Z ∞ λe−(λ1+λ12)x(λ + λ e−λ2x) + k 1 12 F (x)k−1dx; 0 λ1(λ1 + λ2)(λ1 + λ12)

89 Ik:k(λ2, λ2) ( 2 1 k Z ∞  1 − e−(λ1+λ12)x − xλ e−λx  = − + k(k − 1) 12 2 2 −(λ1+λ12)x −λx λ (λ1 + λ2) 0 λ1 + λ2 − λe + λ12e 2 −λx  x λ12e k−1 − −(λ +λ )x −λx f(x)F (x) dx λ1 + λ2 − λe 1 12 + λ12e Z ∞  −λx −(λ1+λ12)x −λx  λ1λe λ(λ1 + λ12)(e − e ) k−1 + k 2 + 2 ) F (x) dx; 0 (λ2 + λ12) (λ1 + λ2) λ2(λ1 + λ2)

Ik:k(λ12, λ12) ( 2 1 Z ∞ −e−(λ1+λ12)x + xλe−(λ1+λ12)x + e−λx − xλ e−λx  = + k(k − 1) 12 2 −(λ1+λ12)x −λx λ 0 λ1 + λ2 − λe + λ12e

−(λ1+λ12)x 2 −(λ1+λ12)x −λx  2xe − x λe − 2xe k−1 − −(λ +λ )x −λx f(x)F (x) dx λ1 + λ2 − λe 1 12 + λ12e Z ∞  −λx −(λ1+λ12)x −λx  λ1λe λ(λ1 + λ12)(e − e ) k−1 + k 2 + 2 F (x) dx; 0 (λ2 + λ12) (λ1 + λ2) (λ1 + λ12) (λ1 + λ2)

Ik:k(λ1, λ2) ( 2 1 k Z ∞  1 − e−(λ1+λ12)x − xλ e−λx  = − + k(k − 1) 12 2 2 −(λ1+λ12)x −λx λ (λ1 + λ2) 0 λ1 + λ2 − λe + λ12e

−(λ1+λ12)x −λ2x −λ2x −λx −(λ1+2λ2+λ12)x  λ12xe (1 + λ12xe − λxe − e − λ12xe ) + −(λ +λ )x −λx 2 (λ1 + λ2 − λe 1 12 + λ12e ) f(x)F (x)k−1dx; (4.3.15)

Ik:k(λ1, λ12) ( 2 1 Z ∞ λxe−(λ1+λ12)x − e−(λ1+λ12)x − λ xe−λx  = + k(k − 1) 12 2 −(λ1+λ12)x −λx λ 0 λ1 + λ2 − λe + λ12e

−λx −(λ1+λ12)x −(λ1+λ12)x −λx −λx (1 + e )(λxe − e − λ12xe ) + e + −(λ +λ )x −λx 2 (λ1 + λ2 − λe 1 12 + λ12e ) −(λ1+λ12)x 2 −(λ1+λ12)x −λx 2 −λx  2xe − x λe − xe + x λ12e k−1 − −(λ +λ )x −λx f(x)F (x) dx λ1 + λ2 − λe 1 12 + λ12e kλ Z ∞ + F (x)k−1 e−(λ1+λ12)x − e−λx dx; (λ1 + λ2)(λ1 + λ12) 0

(4.3.16)

90 Ik:k(λ2, λ12) ( 2 1 Z ∞  e−(λ1+λ12)x + λ xe−λx  = + k(k − 1) 12 2 −(λ1+λ12)x −λx λ 0 λ1 + λ2 − λe + λ12e

−(λ1+λ12)x −λx −(λ1+λ12)x −λx −(λ1+λ12)x −λx (e + λ12xe )(1 + λxe + e ) − (λxe + e ) − −(λ +λ )x −λx 2 (λ1 + λ2 − λe 1 12 + λ12e ) −(λ1+λ12)x −λx 2 −λx  xe − xe + x λ12e k−1 − −(λ +λ )x −λx f(x)F (x) dx λ1 + λ2 − λe 1 12 + λ12e Z ∞ −λx λ1λe k−1 + k 2 × F (x) dx. (4.3.17) 0 (λ2 + λ12) (λ1 + λ2)

Since (1.1.11) holds only for 1 < s ≤ n, we use (4.3.9) when s = 1. We can now obtain the FIM in Type-II doubly censored samples from the BBVE using relation (1.1.16) for r and s with 1 ≤ s ≤ r ≤ n.

4.3.3 Limiting Fisher Information Matrix

For large samples, the properties of the FIM can be examined through the limiting FIM.

To explore the FIM in the bottom 100p% of the complete sample (0 < p < 1), we need the marginal pdf of X, the conditional pdf of Y given x, and their first partial derivatives with respect to λ1, λ2, and λ12. The marginal pdf of X is

f_X(x) = −(d/dx)[1 − F(x)] = f1(x) + f2(x)
       = ( λ(λ1 + λ12)/(λ1 + λ2) ) e^{−(λ1+λ12)x} − ( λλ12/(λ1 + λ2) ) e^{−λx},     (4.3.18)

and the conditional pdf of Y given x is given by

f(y | x) =
  λ1λ(λ2 + λ12) e^{−λ1x−(λ2+λ12)y} / [ λ(λ1 + λ12)e^{−(λ1+λ12)x} − λλ12e^{−λx} ] = f1(y | x),  for y > x > 0,
  λ2λ(λ1 + λ12) e^{−(λ1+λ12)x−λ2y} / [ λ(λ1 + λ12)e^{−(λ1+λ12)x} − λλ12e^{−λx} ] = f2(y | x),  for x > y > 0.     (4.3.19)
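As a quick numerical sanity check of (4.3.18) and (4.3.19) (a minimal sketch, assuming the illustrative values λ1 = 1, λ2 = .5, λ12 = .5 used later in Section 4.4; the function names are ours, not part of the dissertation code), one may verify in R that the marginal and conditional densities integrate to one:

### marginal pdf (4.3.18) and conditional pdfs (4.3.19) for BBVE(1, .5, .5)
lambda.1 <- 1; lambda.2 <- 0.5; lambda.12 <- 0.5
lambda <- lambda.1 + lambda.2 + lambda.12
f.X <- function(x)
  lambda*(lambda.1+lambda.12)/(lambda.1+lambda.2)*exp(-(lambda.1+lambda.12)*x) -
  lambda*lambda.12/(lambda.1+lambda.2)*exp(-lambda*x)
f1.cond <- function(y, x)
  lambda.1*lambda*(lambda.2+lambda.12)*exp(-lambda.1*x-(lambda.2+lambda.12)*y) /
  (lambda*(lambda.1+lambda.12)*exp(-(lambda.1+lambda.12)*x) - lambda*lambda.12*exp(-lambda*x))
f2.cond <- function(y, x)
  lambda.2*lambda*(lambda.1+lambda.12)*exp(-(lambda.1+lambda.12)*x-lambda.2*y) /
  (lambda*(lambda.1+lambda.12)*exp(-(lambda.1+lambda.12)*x) - lambda*lambda.12*exp(-lambda*x))
integrate(f.X, 0, Inf)$value                              # should be close to 1
x0 <- 0.7                                                 # an arbitrary conditioning point
integrate(function(y) f1.cond(y, x0), x0, Inf)$value +
  integrate(function(y) f2.cond(y, x0), 0, x0)$value      # should also be close to 1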

The first derivatives of (4.3.18) and (4.3.19) with respect to λ1, λ2, and λ12 are the following:

∂ log f_X(x)/∂λ1 = 1/λ − 1/(λ1 + λ2) − x + 1/(λ1 + λ12 − λ12e^{−λ2x}) = W1(x),
∂ log f_X(x)/∂λ2 = 1/λ − 1/(λ1 + λ2) + xλ12e^{−λ2x}/(λ1 + λ12 − λ12e^{−λ2x}) = W2(x),
∂ log f_X(x)/∂λ12 = 1/λ − x + (1 − e^{−λ2x})/(λ1 + λ12 − λ12e^{−λ2x}) = W12(x),

∂ log f1(y | x)/∂λ1 = 1/λ1 − 1/(λ1 + λ12 − λ12e^{−λ2x}) = W1^{(1)}(y | x),
∂ log f1(y | x)/∂λ2 = 1/(λ2 + λ12) − y − xλ12e^{−λ2x}/(λ1 + λ12 − λ12e^{−λ2x}) = W2^{(1)}(y | x),
∂ log f1(y | x)/∂λ12 = 1/(λ2 + λ12) − y + x − (1 − e^{−λ2x})/(λ1 + λ12 − λ12e^{−λ2x}) = W12^{(1)}(y | x),

∂ log f2(y | x)/∂λ1 = 1/(λ1 + λ12) − 1/(λ1 + λ12 − λ12e^{−λ2x}) = W1^{(2)}(y | x),
∂ log f2(y | x)/∂λ2 = 1/λ2 − y − xλ12e^{−λ2x}/(λ1 + λ12 − λ12e^{−λ2x}) = W2^{(2)}(y | x),
∂ log f2(y | x)/∂λ12 = 1/(λ1 + λ12) − (1 − e^{−λ2x})/(λ1 + λ12 − λ12e^{−λ2x}) = W12^{(2)}(y | x).     (4.3.20)

Let us denote the limiting FIM in the bottom 100p% of a random sample by

Ip(λ) =
  [ Ip(λ1, λ1)    Ip(λ1, λ2)    Ip(λ1, λ12)
    Ip(λ1, λ2)    Ip(λ2, λ2)    Ip(λ2, λ12)
    Ip(λ1, λ12)   Ip(λ2, λ12)   Ip(λ12, λ12) ].

Theorem 4.3.2. The entries of the limiting FIM corresponding to the bottom 100p% of a random sample from the BBVE distribution are given by

Ip(λi, λj) = ∫_0^{F_X^{−1}(p)} Wi(x) Wj(x) f(x; λ) dx
  + (1/(1 − p)) { ∫_{F_X^{−1}(p)}^∞ Wi(x) f(x; λ) dx } { ∫_{F_X^{−1}(p)}^∞ Wj(x) f(x; λ) dx }
  + ∫_0^{F_X^{−1}(p)} [ ∫_x^∞ Wi^{(1)}(y | x) Wj^{(1)}(y | x) f1(x, y; λ) dy
                        + ∫_0^x Wi^{(2)}(y | x) Wj^{(2)}(y | x) f2(x, y; λ) dy ] dx,     (4.3.21)

for i, j = 1, 2, 12, where the W's are given by (4.3.20), fi(x, y) = f(x) fi(y | x), and the fi(y | x) are given by (4.3.19).

The FIM in the bottom 100p% of the sample plays an important role in finding asymptotic variances of the MLEs. For example, [nIp(λ1, λ1)]^{−1} serves as the asymptotic variance of the MLE of λ1 based on the bottom 100p% of the sample when λ2 and λ12 are known, whereas [Ip^{−1}(λ)]11/n serves as the asymptotic variance of the MLE of λ1 when λ2 and λ12 are unknown (He, 2007).
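For instance (a minimal sketch, taking the illustrative value Ip(λ1, λ1) ≈ 0.3320 at p = .5 for the BBVE(1, .5, .5) parent reported later in Table 4.8):

### asymptotic variance of the MLE of lambda.1 from the bottom 50% of a sample of
### size n = 10 when lambda.2 and lambda.12 are known
n <- 10
I.p <- 0.3320          # I_p(lambda.1, lambda.1) at p = .5, from Table 4.8
1 / (n * I.p)          # about 0.30, cf. the first column of Table 4.9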

4.4 Computations

4.4.1 Right Censored Samples - Finite Sample Case

Suppose λ1 = 1, λ2 = .5, λ12 = .5, and n = 10. We performed numerical integration to evaluate the integrals in (4.3.6) using the R software. Table 4.1 contains the results for right censored samples with r = [np], p = .1, ··· , .9, and for the complete sample.

r    I1···r:n(λ1, λ1)   I1···r:n(λ2, λ2)   I1···r:n(λ12, λ12)   I1···r:n(λ1, λ2)   I1···r:n(λ1, λ12)   I1···r:n(λ2, λ12)
1    0.6838905    1.171852     1.230341     -0.1544685   0.4106654   0.9373311
2    1.3566989    2.485043     2.438122     -0.3232938   0.8159873   1.8910165
3    2.017798     3.937202     3.62006      -0.5051478   1.2166668   2.8562546
4    2.6664257    5.526428     4.772341     -0.6985469   1.6134679   3.8275412
5    3.3016289    7.251498     5.890391     -0.901772    2.0072478   4.7983323
6    3.9221697    9.11223      6.968494     -1.1127266   2.3990076   5.7604606
7    4.5263529    11.110232    7.999087     -1.3286655   2.7899858   6.7030275
8    5.1116691    13.250718    8.971302     -1.5455899   3.1818528   7.6099805
9    5.6738857    15.548132    9.867234     -1.7565602   3.5772114   8.4535127
10   6.2037037    18.055556    10.648148    -1.9444444   3.9814815   9.1666667

Table 4.1: Elements of I1···r:n(λ) from BBVE(1, .5, .5) when n = 10

As anticipated, all entries in the first three columns of Table 4.1 increase monotonically in absolute value as r increases. In terms of the properties of the estimators, the diagonal entries of the inverse FIM, [I1···r:n^{-1}(λ)]ii, give the Cramér-Rao (CR) lower bound for the variance of any unbiased estimator of λi from Type-II right censored samples with r = [np]. The CR lower bounds for every censored trial are shown in Table 4.2; a short numerical check follows the table.

r    I1···r:n^{-1}(λ1, λ1)   I1···r:n^{-1}(λ2, λ2)   I1···r:n^{-1}(λ12, λ12)
1    14.34705     17.1387      19.80808
2    5.381593     5.726364     7.080975
3    2.994767     2.861878     3.7783
4    1.981233     1.710491     2.40775
5    1.444378     1.131054     1.696603
6    1.119591     0.7968409    1.2741827
7    0.9042518    0.5849018    0.9985505
8    0.7508243    0.440094     0.804768
9    0.6335364    0.3338639    0.6581005
10   0.5331081    0.2472973    0.5331081

Table 4.2: Elements of I1···r:n^{-1}(λ) from BBVE(1, .5, .5) when n = 10
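As a quick check on Table 4.2 (a minimal sketch; the matrix below is assembled from the r = 10 row of Table 4.1), the diagonal of the inverse FIM can be computed directly in R:

### complete-sample FIM for BBVE(1, .5, .5) with n = 10, from the r = 10 row of Table 4.1
FIM.10 <- matrix(c( 6.2037037, -1.9444444,  3.9814815,
                   -1.9444444, 18.0555560,  9.1666667,
                    3.9814815,  9.1666667, 10.6481480), nrow = 3, byrow = TRUE)
diag(solve(FIM.10))   # approximately 0.5331 0.2473 0.5331, the r = 10 row of Table 4.2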

Table 4.3 shows the proportional FI, that is, the FI in Type-II censored samples divided by the FI in the corresponding complete sample. The proportional FI values are compared with the independent case in the fourth column, which is simply r/n for each r, 1 ≤ r ≤ n (a short computational check follows Table 4.3). From Table 4.3, we observe the following ordering among the proportional FI for every r:

I1···r:n(λ12)/(nI(λ12)) > I1···r:n(λ1)/(nI(λ1)) > r/n > I1···r:n(λ2)/(nI(λ2)).     (4.4.1)

r    I1···r:n(λ1)/(nI(λ1))   I1···r:n(λ2)/(nI(λ2))   I1···r:n(λ12)/(nI(λ12))   rI(λi)/(nI(λi)) = r/n
1    0.1102   0.0649   0.1155   0.1
2    0.2187   0.1376   0.229    0.2
3    0.3253   0.2181   0.34     0.3
4    0.4298   0.3061   0.4482   0.4
5    0.5322   0.4016   0.5532   0.5
6    0.6322   0.5047   0.6544   0.6
7    0.7296   0.6153   0.7512   0.7
8    0.824    0.7339   0.8425   0.8
9    0.9146   0.8611   0.9267   0.9
10   1        1        1        1

Table 4.3: Proportional FI in Type-II right censored samples from BBVE(1, .5, .5) when n = 10
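The first three columns of Table 4.3 can be reproduced from Table 4.1; a minimal sketch for the λ1 column (the vector name is ours):

### proportional FI about lambda.1: the (lambda.1, lambda.1) column of Table 4.1
### divided by its complete-sample (r = 10) value
fi.11 <- c(0.6838905, 1.3566989, 2.017798, 2.6664257, 3.3016289,
           3.9221697, 4.5263529, 5.1116691, 5.6738857, 6.2037037)
round(fi.11 / fi.11[10], 4)   # 0.1102 0.2187 0.3253 ... 1.0000, the first column of Table 4.3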

Figure 4.1 provides a graphical representation of the first three columns of Table 4.3. The proportional FI about λ1 is very close to a straight line. This implies that the rate of increase of the relative FI about λ1 is nearly uniform and is the closest to the independent case (≈ r/n).

[Figure 4.1 appears here: an "Information plot" of the FI relative to the total FI against r (the censored sample size), with separate curves for lambda.1, lambda.2, and lambda.12.]

Figure 4.1: Increasing pattern of the relative FI in Type-II right censored samples for 1 ≤ r ≤ n, for the BBVE(1, 0.5, 0.5) parent where n = 10

However, the above ordering in (4.4.1) depends specifically on the selected values λ1 = 1, λ2 = 0.5, and λ12 = 0.5. In order to check whether there is a general order among the proportional FI for arbitrary λ1, λ2, λ12, and r, we first consider three different cases: λ1 > λ2, λ1 = λ2, and λ1 < λ2. For given values of λ1 and λ2, λ12 is then selected to produce various values of ρ, the Pearson correlation coefficient between X and Y. For the BBVE, ρ is a non-linear function of λ1, λ2, and λ12, given by

ρ = [ (λ1 + λ2)²(λ1 + λ12)(λ2 + λ12) − λ²λ1λ2 ]
    / sqrt{ [ (λ1 + λ2)²(λ1 + λ12)² + λ2(2λ1 + λ2)λ² ] [ (λ1 + λ2)²(λ2 + λ12)² + λ1(2λ2 + λ1)λ² ] }.     (4.4.2)

The expression in (4.4.2) is always greater than or equal to zero for any λ1 > 0, λ2 > 0, and λ12 ≥ 0. When λ1 and λ2 are held fixed, note that as λ12 → ∞,

ρ → (λ1² + λ2² + λ1λ2) / sqrt{ (λ1² + 4λ1λ2 + 2λ2²)(2λ1² + 4λ1λ2 + λ2²) }.     (4.4.3)

For instance, when λ1 = λ2 = 1, ρ tends to .429, and when λ1 = 1 and λ2 = .5 (or λ1 = .5 and λ2 = 1), ρ converges to .45 as λ12 → ∞. So we consider 12 different scenarios for the values of the parameters, obtained by solving (4.4.2) for λ12 at selected values of ρ; the resulting λ12 values are given in Table 4.4, and a short numerical check follows the table.

ρ     λ1 = λ2 = 1   λ1 = 1 > λ2 = .5   λ1 = .5 < λ2 = 1
.1    .51           .34                .34
.2    1.41          .91                .91
.3    3.67          2.19               2.19
.4    21.5          8.14               8.14

Table 4.4: Values of λ12 for selected values of λ1, λ2 and ρ in (4.4.3)
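The λ12 values in Table 4.4 can be recovered numerically; a minimal sketch in R (the helper name rho.bbve is ours) that evaluates (4.4.2) and solves for the λ12 attaining a target ρ:

### Pearson correlation (4.4.2) as a function of lambda.12, and a root search
### for the lambda.12 that attains a target rho
rho.bbve <- function(l12, l1, l2) {
  l <- l1 + l2 + l12
  num  <- (l1 + l2)^2 * (l1 + l12) * (l2 + l12) - l^2 * l1 * l2
  den1 <- (l1 + l2)^2 * (l1 + l12)^2 + l2 * (2 * l1 + l2) * l^2
  den2 <- (l1 + l2)^2 * (l2 + l12)^2 + l1 * (2 * l2 + l1) * l^2
  num / sqrt(den1 * den2)
}
### lambda.12 giving rho = .1 when lambda.1 = 1 and lambda.2 = .5 (cf. Table 4.4)
uniroot(function(l12) rho.bbve(l12, 1, .5) - .1, c(1e-6, 100))$root   # about 0.34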

ρ = .1, λ12 = .51 ρ = .2, λ12 = 1.41 r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) 1 0.1229 0.0790 0.1033 1 0.1390 0.0665 0.1039 2 0.2403 0.1645 0.2087 2 0.2687 0.1452 0.2099 3 0.3525 0.2554 0.3150 3 0.3894 0.2336 0.3168 4 0.4597 0.3509 0.4211 4 0.5016 0.3295 0.4232 5 0.5620 0.4503 0.5260 5 0.6056 0.4312 0.5283 6 0.6594 0.5530 0.6289 6 0.7015 0.5373 0.6311 7 0.7520 0.6586 0.7288 7 0.7894 0.6470 0.7308 8 0.8399 0.7673 0.8248 8 0.8690 0.7596 0.8265 9 0.9227 0.8799 0.9157 9 0.9397 0.8758 0.9169 10 1 1 1 10 1 1 1

ρ = .3, λ12 = 3.67 ρ = .4, λ12 = 21.5 r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) 1 0.1415 0.0699 0.1043 1 0.1201 0.0978 0.1055 2 0.2741 0.1543 0.2109 2 0.2403 0.2037 0.2111 3 0.3977 0.2489 0.3181 3 0.3581 0.3134 0.3193 4 0.5124 0.3506 0.4246 4 0.4718 0.4241 0.4248 5 0.6183 0.4567 0.5296 5 0.5801 0.5333 0.5303 6 0.7151 0.5654 0.6321 6 0.6821 0.6395 0.6306 7 0.8028 0.6750 0.7315 7 0.7767 0.7411 0.7309 8 0.8805 0.7843 0.8271 8 0.8627 0.8367 0.8259 9 0.9473 0.8923 0.9172 9 0.9385 0.9242 0.9156 10 1 1 1 10 1 1 1

Table 4.5: Proportional FI in Type-II right censored samples from BBVE(1, 1, λ12) when n = 10

97 ρ = .1, λ12 = .34 ρ = .2, λ12 = .91 r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) 1 0.1080 0.0731 0.1178 1 0.1132 0.0525 0.1117 2 0.2146 0.1519 0.2331 2 0.2243 0.1165 0.2221 3 0.3196 0.2365 0.3454 3 0.3331 0.1915 0.3308 4 0.4231 0.3269 0.4545 4 0.4393 0.2771 0.4376 5 0.5250 0.4230 0.5600 5 0.5427 0.3728 0.5420 6 0.6249 0.5249 0.6611 6 0.6431 0.4783 0.6434 7 0.7229 0.6329 0.7573 7 0.7399 0.5935 0.7413 8 0.8185 0.7472 0.8473 8 0.8326 0.7182 0.8346 9 0.9112 0.8687 0.9295 9 0.9200 0.8529 0.9220 10 1 1 1 10 1 1 1

ρ = .3, λ12 = 2.19 ρ = .4, λ12 = 8.14 r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) 1 0.1109 0.0447 0.1068 1 0.0888 0.0542 0.1030 2 0.2211 0.1048 0.2134 2 0.1840 0.1240 0.2069 3 0.3301 0.1790 0.3195 3 0.2844 0.2071 0.3112 4 0.4374 0.2661 0.4245 4 0.3884 0.3017 0.4147 5 0.5424 0.3651 0.5281 5 0.4948 0.4058 0.5182 6 0.6444 0.4747 0.6298 6 0.6021 0.5177 0.6199 7 0.7429 0.5940 0.7291 7 0.7087 0.6356 0.7200 8 0.8365 0.7219 0.8250 8 0.8129 0.7573 0.8180 9 0.9235 0.8574 0.9163 9 0.9119 0.8804 0.9122 10 1 1 1 10 1 1 1

Table 4.6: Proportional FI in Type-II right censored samples from BBVE(1, .5, λ12) when n = 10

98 ρ = .1, λ12 = .34 ρ = .2, λ12 = .91 r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) 1 0.1464 0.0889 0.0793 1 0.1808 0.0848 0.0922 2 0.2775 0.1821 0.1709 2 0.3332 0.1775 0.1936 3 0.3958 0.2776 0.2702 3 0.4636 0.2740 0.2991 4 0.5035 0.3743 0.3738 4 0.5762 0.3720 0.4058 5 0.6022 0.4717 0.4795 5 0.6741 0.4703 0.5118 6 0.6932 0.5700 0.5857 6 0.7594 0.5689 0.6159 7 0.7776 0.6697 0.6914 7 0.8337 0.6680 0.7174 8 0.8564 0.7725 0.7960 8 0.8982 0.7693 0.8156 9 0.9302 0.8809 0.8989 9 0.9536 0.8765 0.9100 10 1 1 1 10 1 1 1

ρ = .3, λ12 = 2.19 ρ = .4, λ12 = 8.14 r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) r I1···r:n(λ1) I1···r:n(λ2) I1···r:n(λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) nI(X,Y ;λ1) nI(X,Y ;λ2) nI(X,Y ;λ12) 1 0.1952 0.0952 0.1020 1 0.1846 0.1357 0.1090 2 0.3561 0.1976 0.2100 2 0.3393 0.2643 0.2209 3 0.4911 0.3019 0.3194 3 0.4717 0.3845 0.3323 4 0.6059 0.4055 0.4276 4 0.5862 0.4962 0.4409 5 0.7038 0.5072 0.5334 5 0.6857 0.5996 0.5460 6 0.7874 0.6063 0.6358 6 0.7721 0.6951 0.6470 7 0.8583 0.7031 0.7343 7 0.8467 0.7827 0.7432 8 0.9174 0.7983 0.8284 8 0.9099 0.8627 0.8349 9 0.9650 0.8944 0.9174 9 0.9616 0.9351 0.9210 10 1 1 1 10 1 1 1

Table 4.7: Proportional FI in Type-II right censored samples from BBVE(.5, 1, λ12) when n = 10

[Figure 4.2 appears here, with panels (a) λ1 = 1 and λ2 = .5, (b) λ1 = 1 and λ2 = 1, and (c) λ1 = .5 and λ2 = 1.]

Figure 4.2: 3D surface plots of proportional FI in censored samples from BBVE

Using the λ12 value in Table 4.4 for each scenario, we obtain the proportional FI and tabulate the results in Tables 4.5, 4.6, and 4.7.

In Figure 4.2 the proportional FI contained in Type-II right censored samples about λ1, λ2, and λ12 is shown in red, green, and blue, respectively. Figure 4.2 displays the proportional FI in Tables 4.5, 4.6, and 4.7 and shows that the proportional FI for λ1 is always greater than that for λ2, that is,

I1···r:n(λ1)/(nI(λ1)) > I1···r:n(λ2)/(nI(λ2)),     (4.4.4)

regardless of λ1, λ2, λ12, and r. However, the other inequalities in (4.4.1) do not hold in general. For example, Table 4.5 shows that the proportional FI about λ1 can be smaller than r/n when ρ = .4, and in Table 4.7 the proportional FI on λ2 exceeds r/n for every r when ρ = .4. Intuitively, the reason for the general inequality in (4.4.4) lies in the distributional features of the BBVE: λ1 is strongly associated with X, which is used to sort the data, while λ2 is associated with its concomitant Y. In Tables 4.6 and 4.7 we also frequently find the proportional FI on λ1 to be bigger than that on λ12.

How do we use the FI in Type-II right censored samples to choose an optimal sample size in life-testing experiments? An appropriate criterion is the FI per unit duration of the experiment, I1···r:n(λi, λi)/E(Xr:n). A larger value of this measure indicates a better experimental design for a life-testing experiment.
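A minimal sketch of this criterion in R for BBVE(1, .5, .5) and n = 10: E(Xr:n) is obtained by numerically integrating against the density of the r-th order statistic of the marginal of X, and the diagonal FI values are copied from Table 4.1 (the helper names are ours):

### FI about lambda.1 per unit of expected experimental duration,
### I_{1...r:n}(lambda.1, lambda.1)/E(X_{r:n}), for BBVE(1, .5, .5) and n = 10
lambda.1 <- 1; lambda.2 <- 0.5; lambda.12 <- 0.5
lambda <- lambda.1 + lambda.2 + lambda.12
f.X <- function(x) lambda*(lambda.1+lambda.12)/(lambda.1+lambda.2)*exp(-(lambda.1+lambda.12)*x) -
  lambda*lambda.12/(lambda.1+lambda.2)*exp(-lambda*x)
F.X <- function(x) 1 - (lambda*exp(-(lambda.1+lambda.12)*x) - lambda.12*exp(-lambda*x))/(lambda.1+lambda.2)
n <- 10
E.Xrn <- function(r) {   # E(X_{r:n}) by numerical integration
  integrate(function(x) x * r * choose(n, r) * F.X(x)^(r-1) * (1 - F.X(x))^(n-r) * f.X(x), 0, Inf)$value
}
fi.11 <- c(0.6838905, 1.3566989, 2.017798, 2.6664257, 3.3016289,
           3.9221697, 4.5263529, 5.1116691, 5.6738857, 6.2037037)   # from Table 4.1
fi.11 / sapply(1:n, E.Xrn)   # larger values indicate a better design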

Figure 4.3: FI in Type-II right censored samples per unit of the duration

4.4.2 Limiting FIM for Right Censored Samples

For large sample sizes, the limiting FI, defined as the FI in the bottom 100p% of the sample, can be used in place of the FI in Type-II right censored samples to choose an optimal sample size in a life-testing experiment. The diagonal entries of the limiting FIM from the BBVE are calculated using (4.3.21) for λ1 = 1, λ2 = .5, and λ12 = .5. Table 4.8 shows that even when the sample size is small (n = 10), (1/n)I1···r:n(λi, λi) is already close to the limiting value Ip(λi, λi). When n is small, (1/n)I1···r:n(λ1, λ1) < Ip(λ1, λ1), (1/n)I1···r:n(λ2, λ2) > Ip(λ2, λ2), and (1/n)I1···r:n(λ12, λ12) < Ip(λ12, λ12) for any p and the corresponding r. As n increases, (1/n)I1···r:n(λ1, λ1) and (1/n)I1···r:n(λ12, λ12) increase to Ip(λ1, λ1) and Ip(λ12, λ12), respectively, while (1/n)I1···r:n(λ2, λ2) decreases toward Ip(λ2, λ2).

101 limiting n = 10 n = 20 1 1 1 1 1 1 p Ip(λ1, λ1) Ip(λ2, λ2) Ip(λ12, λ12) r n I1···r:n(λ1, λ1) n I1···r:n(λ2, λ2) n I1···r:n(λ12, λ12) r n I1···r:n(λ1, λ1) n I1···r:n(λ2, λ2) n I1···r:n(λ12, λ12) 0.1 0.0689 0.1108 0.1241 1 0.0684 0.1172 0.1230 2 0.0686 0.1142 0.1235 0.2 0.1366 0.2374 0.2459 2 0.1357 0.2485 0.2438 4 0.1361 0.2432 0.2448 0.3 0.2031 0.3794 0.3650 3 0.2018 0.3937 0.3620 6 0.2024 0.3869 0.3635 0.4 0.2682 0.5364 0.4812 4 0.2666 0.5526 0.4772 8 0.2674 0.5449 0.4792 0.5 0.3320 0.7084 0.5939 5 0.3302 0.7251 0.5890 10 0.3310 0.7171 0.5914 0.6 0.3941 0.8950 0.7024 6 0.3922 0.9112 0.6968 12 0.3931 0.9034 0.6995 0.7 0.4546 1.0964 0.8058 7 0.4526 1.1110 0.7999 14 0.4536 1.1040 0.8027 0.8 0.5129 1.3130 0.9030 8 0.5112 1.3251 0.8971 16 0.5120 1.3193 0.8999 0.9 0.5688 1.5464 0.9916 9 0.5674 1.5548 0.9867 18 0.5680 1.5509 0.9889 n = 50 n = 100 n = 500 1 1 1 1 1 1 1 1 1 r n I1···r:n(λ1, λ1) n I1···r:n(λ2, λ2) n I1···r:n(λ12, λ12) r n I1···r:n(λ1, λ1) n I1···r:n(λ2, λ2) n I1···r:n(λ12, λ12) r n I1···r:n(λ1, λ1) n I1···r:n(λ2, λ2) n I1···r:n(λ12, λ12) 5 0.0688 0.1122 0.1238 10 0.0688 0.1115 0.1239 50 0.0896 0.1109 0.124 10 0.1364 0.2398 0.2454 20 0.1365 0.2386 0.2456 100 0.1366 0.2376 0.2458 15 0.2028 0.3824 0.3644 30 0.2029 0.3809 0.3647 150 0.2030 0.3797 0.365 20 0.2679 0.5399 0.4804 40 0.2681 0.5382 0.4808 200 0.2682 0.5368 0.4811 25 0.3316 0.7119 0.5929 50 0.3318 0.7101 0.5934 250 0.3319 0.7087 0.5938 30 0.3937 0.8984 0.7012 60 0.3939 0.8967 0.7018 300 0.3941 0.8953 0.7023 35 0.4542 1.0995 0.8046 70 0.4544 1.0980 0.8052 350 0.4545 1.0967 0.8057 40 0.5126 1.3156 0.9017 80 0.5128 1.3143 0.9024 400 0.5129 1.3133 0.9029 45 0.5685 1.5482 0.9905 90 0.5686 1.5473 0.9911 450 0.5688 1.5417 0.9915

1 Table 4.8: Diagonal entries of Ip(λ) and n I1···r:n(λ) from BBVE(1, .5, .5) where r/n → p as n ↑ ∞ when n=10, 20, 50, 100 and 500

The limiting FI values in Table 4.8 are used to calculate approximations to the variances of the MLEs of λ1, λ2, and λ12 in Table 4.9. In the table, [nIp(λi)]^{-1} is the variance of the MLE when the other two parameters are known, while [Ip^{-1}(λ)]ii/n is the variance when all the parameters are unknown. The former must therefore be much smaller than the latter, owing to the lack of information caused by the nuisance parameters. That is, [Ip^{-1}(λ)]ii/n is the better approximation to the variance of the MLE for n = 10, and it is higher than the CR lower bound given in Table 4.2. The ARE values of the MLEs based on the bottom 100p% of the sample, relative to the complete sample, are given in Table 4.10; a short computational check follows that table.

p     [nIp(λ1)]^{-1}   [nIp(λ2)]^{-1}   [nIp(λ12)]^{-1}   [Ip^{-1}(λ)]11/n   [Ip^{-1}(λ)]22/n   [Ip^{-1}(λ)]33/n
0.1   1.4512   0.9022   0.8061   23.5198   31.50318   34.17041
0.2   0.7320   0.4212   0.4067   6.78368   7.92847    9.28443
0.3   0.4924   0.2636   0.2740   3.42423   3.53704    4.45615
0.4   0.3728   0.1864   0.2078   2.15971   1.98972    2.69045
0.5   0.3012   0.1412   0.1684   1.53270   1.26716    1.83662
0.6   0.2537   0.1117   0.1424   1.16936   0.87087    1.35263
0.7   0.2200   0.0912   0.1241   0.93519   0.62803    1.04654
0.8   0.1950   0.0762   0.1107   0.77159   0.46608    0.83599
0.9   0.1758   0.0647   0.1008   0.64785   0.34899    0.67857

Table 4.9: Approximations based on the limiting FIM to the variances of MLEs from right censored samples from BBVE(1, .5, .5) when n = 10

p     ARE(λ̂1r:n, λ̂1n:n)   ARE(λ̂2r:n, λ̂2n:n)   ARE(λ̂12r:n, λ̂12n:n)
0.1   0.1111   0.0614   0.1165
0.2   0.2202   0.1315   0.2309
0.3   0.3273   0.2101   0.3428
0.4   0.4323   0.2971   0.4519
0.5   0.5351   0.3923   0.5578
0.6   0.6353   0.4957   0.6596
0.7   0.7327   0.6072   0.7568
0.8   0.8268   0.7272   0.8480
0.9   0.9168   0.8564   0.9313

Table 4.10: ARE values for MLEs from right censored samples for the BBVE(1, .5, .5) distribution
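The ARE values in Table 4.10 can be reproduced (up to rounding) as the ratio of the limiting FI Ip(λi, λi) in Table 4.8 to the per-observation FI in the complete sample, here I1···n:n(λ1, λ1)/n from the r = 10 row of Table 4.1; a minimal sketch for λ1 (the vector name is ours):

### ARE of the MLE of lambda.1 from the bottom 100p% relative to the complete sample
Ip.11 <- c(0.0689, 0.1366, 0.2031, 0.2682, 0.3320,
           0.3941, 0.4546, 0.5129, 0.5688)      # limiting I_p(lambda.1, lambda.1), Table 4.8
round(Ip.11 / (6.2037037 / 10), 4)              # 0.1111 0.2202 ... 0.9169, cf. the first column of Table 4.10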

4.4.3 Left and Doubly Censored Samples

Upon substituting (4.3.15) in (1.1.15), we obtain Table 4.11, which provides the FI in all possible left censored samples from a total sample of size 10. Finally, one can use (1.1.16) to obtain the FI in Type-II doubly censored samples from the BBVE by combining the values in Tables 4.1 and 4.11 for the right and left censored portions; these are not reported here.

s     Is···n:n(λ1)   Is···n:n(λ2)   Is···n:n(λ12)
1     6.2037   18.0556   10.6481
2     6.1940   16.9054   9.7072
3     6.1688   15.6079   8.8209
4     6.1196   14.1659   7.9832
5     6.0339   12.5814   7.1855
6     5.8917   10.8559   6.4146
7     5.6595   8.9901    5.6491
8     5.2767   6.9830    4.8499
9     4.6170   4.8301    3.9341
10    3.3494   2.5189    2.6768

Table 4.11: Is···n:n(λ) from BBVE(1, .5, .5) when n = 10

CHAPTER 5: CONCLUSION

5.1 Concluding Remarks

Suppose we have a random sample of size n from a symmetric distribution with a scale parameter θ. If one is interested in computing Ir:m(θ) or I1···r:m(θ) for any 1 ≤ r ≤ m ≤ n, the total number of independent calculations of FI is n(n + 1)/2. However, a relation between an unfolded distribution and the associated folded distribution reduces the number of independent calculations to 4n − 3 special expectations based on the folded distribution when n > 6. In addition to this significant reduction in the number of independent calculations, computing Ir:m(θ) or I1···r:m(θ) from Theorem 2.2.1 or 2.3.1 using the 4n − 3 terms is much easier than computing it directly through (1.1.2).
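For example, with n = 20 the folded-distribution approach requires 4(20) − 3 = 77 such expectations, compared with 20(21)/2 = 210 separate FI computations.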

We have focused our work on Type-II right censored samples with the following goals: (a) to obtain the asymptotic variance of the MLE from 1/Ip(θ); (b) to determine the optimal sample size for a life-testing experiment by comparing I1···r:n(θ)/E(Xr:n) for various r; (c) to evaluate the relative efficiencies of unbiased estimators in Type-II right censored samples using 1/I1···r:n(θ); and (d) to evaluate the asymptotic relative efficiencies of MLEs from censored samples when compared to the complete sample (Ip(θ)/I(X; θ)).

In general, we verified that I1···r:n(θ) (respectively Ip(θ)) monotonically increases as r (respectively p) increases for any distribution at a fixed n. But the rates of increase vary and depend on the distribution and on the choice of parameter values.

For the BBVE model, we show that the proportional FI about λ1 is greater than that about λ2 for various values of λ1, λ2, and λ12. Note that λ1 is more closely associated with the order statistics and λ2 with the concomitants. From our exploration of the properties of FI in Type-II censored samples, discussed in Sections 2.4, 3.4, and 4.4, we conclude that the FI in the first r order statistics can be either more or less than the FI in a random sample of size r. We see that the nature of this ordering depends on the values of the parameters involved.

For a mixture of two exponential distributions, the determinant of the FI matrix is almost zero. That is, we need a very large n in order to obtain reliable estimates of the asymptotic variances of the MLEs. Also, when α > β and θ > .5, Type-II right censored samples provide a large proportional FI about α, since the component θαe^{−αx} then dominates the mixture density.

In summary, this dissertation provides expressions for the FI in Type-II censored samples from the BBVE model and from the mixture of two exponential distributions. In both cases these expressions have no closed forms but are all finite. Hence numerical results are obtained to study the asymptotic properties of the estimators and to obtain optimal sample sizes in life-testing experiments. Meanwhile, the connection between folded and unfolded distributions allows us to minimize the effort needed to compute the FI in Type-II censored samples from the unfolded distribution.

5.2 Future Work

1. Zheng and Gastwirth (2000) showed that the middle 40% (25% in each tail) of ordered data includes more than 80% of the FI about the location (scale) parameter for the Cauchy, Laplace, logistic, and normal distributions. I will examine the regions that contain more than 80% of the FI about various parameters for the following common bivariate exponential distributions: the Marshall-Olkin (1967), Downton (1970), and Block-Basu (1974) models.

2. In Chapter 2, we introduced the connection between folded and symmetric unfolded distributions. I will extend the results to asymmetric unfolded distributions. What is the minimum number of independent calculations based on the folded distribution? The goal is to look for the most efficient methods to compute the FI in either the folded or the unfolded distribution.

3. Zheng and Gastwirth (2002) obtained the symmetric fractions of ordered data that have the most information about each scale parameter for the Cauchy, Laplace, logistic, and normal distributions, assuming the location parameter is known. They also verified that the BLUE based on the order statistics selected from those fractions is asymptotically highly efficient in terms of the RE when the sample size n is fixed. Thus I plan to investigate the regions of ordered data from asymmetric distributions that are most informative.

4. We used Laplace(0, θ = 2), MExp(α = 15, β = 1, θ = .9), MExp(α = 2, β = 1, θ = .6), and BBVE(λ1 = 1, λ2 = .5, λ12 = .5) to illustrate the general results discussed in the earlier chapters. However, the numbers resulting from the numerical integration and simulation approaches depend on the values of the parameters. Although no expression for the FI has a closed algebraic form, I will explore general properties of the FI for each of these distributions by considering a large number of values for the various parameters and by investigating the properties of the FI as a function of these parameters.

Bibliography

[1] Abo-Eleneen, Z. A. and Nagaraja, H. N. Fisher information in an order statistic and its concomitant. Annals of the Institute of Statistical Mathematics, Vol. 54, No. 3, 667-680, 2002.

[2] Arnold, B. C., Balakrishnan, N., and Nagaraja, H. N. A First Course in Order Statistics, SIAM, Philadelphia, 2008.

[3] Atienza, N., García, J., Muñoz-Pichardo, J. M., and Villa, R. On the consistency of MLE in finite mixture models of exponential families. Journal of Statistical Planning and Inference, Vol. 137, Issue 2, 496-505, 2007.

[4] Balakrishnan, N. and Basu, A. P. The Exponential Distribution: Theory, Methods and Applications, Taylor and Francis, Philadelphia, 1995.

[5] Balakrishnan, N., Govindarajulu, Z., and Balasubramanian, K. Relationships between moments of two related sets of order statistics and some extensions. Annals of the Institute of Statistical Mathematics, Vol. 45, No. 2, 243-247, 1993.

[6] Balakrishnan, N. and Lai, C. D. Continuous Bivariate Distributions. Springer Science+Business Media, LLC, 2009.

[7] Block, H. W. and Basu, A. P. A continuous bivariate exponential extension. Journal of the American Statistical Association, Vol. 69, No. 348, 1031-1037, 1974.

[8] Choi, D. and Nadarajah, S. Information matrix for a mixture of two Laplace distribu- tions. Stat Papers, Vol. 50, 1-12, 2009.

[9] David, H. A. and Galambos, J. The asymptotic theory of concomitants of order statistics. Journal of Applied Probability, Vol. 11, No. 4, 762-770, 1974.

[10] Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood from incom- plete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological). Vol. 39, No. 1, 1-38, 1977.

[11] Downton, F. Bivariate exponential distributions in reliability theory. Journal of the Royal Statistical Society. Series B (Methodological), Vol. 32, No. 3, 408-417, 1970.

[12] Freund, J. E. A bivariate extension of the exponential distribution. Journal of the American Statistical Association, Vol. 56, No. 296, 971-977, 1961.

[13] Govindarajulu, Z. Relationships among moments of order statistics in samples from two related populations. Technometrics, Vol. 5, No. 4, 514-518, 1963.

[14] Govindarajulu, Z. Best linear estimates under symmetric censoring of the parameters of double exponential population. Journal of the American Statistical Association, Vol. 61, No. 313, 248-258, 1966.

[15] Gross, A. J. A competing risk model: A one organ subsystem plus a two organ subsystem. IEEE Transactions on Reliability, Vol.R-22, No. 1, 1973.

[16] Gross, A. J., Clark, V. A., and Liu, V. Estimation of survival parameters when one of two organs must function for survival. Biometrics, Vol. 27, No. 2, 369-377, 1971.

[17] Gross, A. J. and Lam, C. F. Paired observations from a survival distribution. Bio- metrics, Vol. 37, No. 3, 505-511, 1981.

[18] Gumbel, E. J. Bivariate exponential distributions. Journal of the American Statistical Association, Vol. 55, No. 292, 698-707, 1960.

[19] Hanagal, D. D. and Kale, B. K. Large sample tests of independence for absolutely continuous bivariate exponential distribution. Communications in Statistics - Theory and Methods, Vol. 20, No. 4, 1301-1313, 1991.

[20] Hasselblad, V. Estimation of finite mixtures of distributions from the exponential family. Journal of the American Statistical Association, Vol. 64, No. 328, 1459-1471, 1969.

[21] He, Q. Inference on correlation from incomplete bivariate samples, Dissertation at The Ohio State University, 2007.

[22] Hill, B. M. Information for estimating the proportions in mixtures of exponential and normal distributions. Journal of the American Statistical Association. Vol. 58, No 304, 918-932, 1963.

[23] Jewell, N. P. Mixtures of exponential distributions. The Annals of Statistics, Vol. 10, No. 2, 479-484, 1982.

[24] Klein, J. P. and Basu, A. P. Estimating reliability for bivariate exponential distribu- tions. Sankhya, Vol. 47, No. 3, 346-353, 1985.

[25] Marshall, A. W. and Olkin, I. A multivariate exponential distribution. Journal of the American Statistical Association, Vol. 62, No. 317, 30-44, 1967.

[26] Mendenhall, W. and Hader, R. J. Estimation of parameters of mixed exponentially distributed failure time distributions from censored life test data. Biometrika, Vol. 45, No. 3/4, 504-520, 1958.

[27] Nagaraja, H. N. On the information contained in order statistics, Tech. Report, No. 278, Department of Statistics, The Ohio State University, Columbus, Ohio, 1983.

[28] Nagaraja, H. N. and Abo-Eleneen, Z. A. Fisher information in order statistics and their concomitants in bivariate censored samples. Metrika, 67, 327-347, 2008.

[29] Park, S. Fisher information in order statistics. Journal of the American Statistical Association, Vol. 91, No. 433, 385-390, 1996.

[30] Redner, R. A. and Walker, H. F. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, Vol. 26, No. 2, 195-239, 1984.

[31] Yang, S. S. General distribution theory of the concomitants of order statistics. The Annals of Statistics, Vol. 5, No. 5, 996-1002, 1977.

[32] Zheng, G. and Gastwirth, J. L. Where is the Fisher information in an ordered sample? Statistica Sinica, 10, 1267-1280, 2000.

APPENDIX A

NOTATIONS AND ABBREVIATIONS

A.1 Symbols

fX(x)                 Probability density function (pdf) of a continuous random variable X
FX(x)                 Cumulative distribution function (cdf) of a continuous random variable X
f(x, y)               Joint pdf of continuous random variables X and Y
Xr:n                  The r-th order statistic out of n variables
X(1, r)               The first r order statistics
X(s, n)               The last n − s + 1 order statistics
(Xr:n, Y[r:n])        The r-th order statistic and its concomitant
(X(1, r), Y[1, r])    The first r order statistics and their concomitants
(X(s, n), Y[s, n])    The last n − s + 1 order statistics and their concomitants
L                     Likelihood function
ℓ                     Log likelihood function
Ir:n                  FI in Xr:n
I1···r:n(θ; X)        FI about θ in X(1, r)
I1···r:n(θ; X, Y)     FI about θ in (X(1, r), Y[1, r])
Ir:n                  FIM in Xr:n
I1···r:n(θ; X)        FIM about θ in X(1, r)
I1···r:n(θ; X, Y)     FIM about θ in (X(1, r), Y[1, r])
I^f_{r:n}             FI in Xr:n from the unfolded distribution
I^f_{1···r:n}(θ; X)   FI about θ in X(1, r) from the unfolded distribution
I^g_{r:n}             FI in Xr:n from the folded distribution
I^g_{1···r:n}(θ; X)   FI about θ in X(1, r) from the folded distribution
E(Xr:n)               Expectation of Xr:n
FX^{-1}(p)            Quantile function at probability p

A.2 Abbreviations

FI               Fisher information
FIM              Fisher information matrix
CR Lower bound   Cramér-Rao Lower Bound
MLE              Maximum likelihood estimator
ARE              Asymptotic relative efficiency
Exp              Exponential distribution
MExp             Mixture of two exponential distributions
BBVE             Block-Basu bivariate exponential distribution

A.3 Distributions

Laplace(µ, θ)        Laplace distribution with pdf f(x) = (1/(2θ)) e^{−|x−µ|/θ}, −∞ < x < ∞
Exp(θ)               Exponential distribution with pdf f(x) = (1/θ) e^{−x/θ}, 0 < x < ∞
MExp(α, β, θ)        Mixture of two exponentials with pdf θαe^{−αx} + (1 − θ)βe^{−βx}, 0 < x < ∞
MExp(α, β; θ)        MExp(α, β, θ) with known θ
BBVE(λ1, λ2, λ12)    Block-Basu bivariate exponential distribution with joint pdf
                     f(x, y) = ( λ1λ(λ2 + λ12)/(λ1 + λ2) ) e^{−λ1x−(λ2+λ12)y}, y > x > 0, and
                     f(x, y) = ( λ2λ(λ1 + λ12)/(λ1 + λ2) ) e^{−(λ1+λ12)x−λ2y}, x > y > 0

APPENDIX B

R CODES

B.1 Numerical Integration

We show an example of R code using the numerical integration approach to compute the limiting FI in Theorem 4.3.2 for the BBVE model.

### values of lambda1, lambda2, and lambda12
lambda.1 <- 1; lambda.2 <- 0.5; lambda.12 <- 0.5
lambda <- lambda.1 + lambda.2 + lambda.12

### the bottom 100p% of a random sample and the quantiles F_X^{-1}(p)
p <- seq(0.1, 0.9, by = .1)
inv.p <- c(.0788, .1663, .2648, .3777, .5104, .6714, .8772, 1.1645, 1.6495)

### the marginal pdf of X
f <- function(x){
  lambda*(lambda.1+lambda.12)/(lambda.1+lambda.2)*exp(-(lambda.1+lambda.12)*x) -
    lambda.12*lambda/(lambda.1+lambda.2)*exp(-lambda*x)
}

### the survival function 1 - F_X(x)
survival <- function(x){
  lambda/(lambda.1+lambda.2)*exp(-(lambda.1+lambda.12)*x) -
    lambda.12/(lambda.1+lambda.2)*exp(-lambda*x)
}

### the joint pdf of (X, Y) for Y > X
joint1 <- function(x, y){
  lambda.1*lambda*(lambda.2+lambda.12)*exp(-lambda.1*x-(lambda.2+lambda.12)*y) /
    (lambda.1+lambda.2)
}

### the joint pdf of (X, Y) for Y < X
joint2 <- function(x, y){
  lambda.2*lambda*(lambda.1+lambda.12)*exp(-lambda.2*y-(lambda.1+lambda.12)*x) /
    (lambda.1+lambda.2)
}

### the partial derivatives in (4.3.20) needed for I_p(lambda1, lambda1)
w.1 <- function(x){    # W1(x)
  1/lambda - 1/(lambda.1+lambda.2) - x + 1/(lambda.1+lambda.12-lambda.12*exp(-lambda.2*x))
}
w.1.1 <- function(x, y){    # W1^(1)(y | x)
  1/lambda.1 - 1/(lambda.1+lambda.12-lambda.12*exp(-lambda.2*x))
}
w.2.1 <- function(x, y){    # W1^(2)(y | x)
  1/(lambda.1+lambda.12) - 1/(lambda.1+lambda.12-lambda.12*exp(-lambda.2*x))
}

### limitFI.11 is a vector of I_p(lambda1, lambda1) in (4.3.21) over the grid of p
limitFI.11 <- rep(0, 9)
for(j in 1:9){
  first11  <- integrate(function(x) w.1(x)^2*f(x), 0, inv.p[j])$value
  second11 <- integrate(function(x) w.1(x)*f(x), inv.p[j], Inf)$value^2/(1-p[j])
  third11  <- integrate(function(x)
                sapply(x, function(x) integrate(function(y)
                  w.1.1(x,y)^2*joint1(x,y), x, Inf)$value) +
                sapply(x, function(x) integrate(function(y)
                  w.2.1(x,y)^2*joint2(x,y), 0, x)$value),
                0, inv.p[j])$value
  limitFI.11[j] <- first11 + second11 + third11
}

B.2 Simulation

When we faced problems with the numerical integration in R (the routine frequently produced negative FI values when n was large), we instead used simulation to compute reliable estimates of the FI. For instance, for a mixture of two exponential distributions:

### This function generates a random sample from a mixture of exponential distributions
# n: sample size
# alpha: mixing proportions
# theta: mixing parameters; means of all components
rmixedexp <- function(n, alpha, theta)
{
  m <- length(theta)
  data <- c()
  nindex <- rmultinom(1, n, alpha)   # component sample sizes
  for (i in 1:m) {
    data <- c(data, rexp(nindex[i], rate = 1/theta[i]))
  }
  data
}

### the pdf of the mixture
f <- function(x, prop, lambda.1, lambda.2){
  prop*lambda.1*exp(-lambda.1*x) + (1-prop)*lambda.2*exp(-lambda.2*x)
}

### the survival function of the mixture
df <- function(x, prop, lambda.1, lambda.2){
  prop*exp(-lambda.1*x) + (1-prop)*exp(-lambda.2*x)
}

### generate m random samples, each of size n, from MExp(alpha = 15, beta = 1, theta = .3)
m <- 10^5; lambda.1 <- 15; lambda.2 <- 1; p <- .3; n <- 10
simul <- matrix(0, nrow = n, ncol = m)
ord <- matrix(0, nrow = n, ncol = m)

### n by m matrix generated from MExp(15, 1, .3)
for(i in 1:m){
  simul[,i] <- rmixedexp(n, c(.7, .3), c(1, 1/15))
}

### sort "simul" in increasing order within each sample (column)
for(i in 1:m){
  ord[,i] <- sort(simul[,i], decreasing = FALSE)
}

### negatives of the second-order derivatives of the log density / log survival function
derv.2.f <- ((2-lambda.2*ord)*p*(1-p)*lambda.1*ord*exp(-(lambda.1+lambda.2)*ord) +
               (1-p)^2*exp(-2*lambda.2*ord))/f(ord, .3, 15, 1)^2
derv.1.f <- ((2-lambda.1*ord)*p*(1-p)*lambda.2*ord*exp(-(lambda.1+lambda.2)*ord) +
               p^2*exp(-2*lambda.1*ord))/f(ord, .3, 15, 1)^2
derv.12.df <- p*(p-1)*ord^2*exp(-(lambda.1+lambda.2)*ord)/df(ord, .3, 15, 1)^2

### arithmetic mean of derv.2.f and derv.12.df over the m samples to obtain the FI
### about beta in Type-II censored samples
I.22 <- function(num, r){
  return(sum(derv.2.f[1:r,])/m + (num-r)*sum(derv.12.df[r,])/m)
}

### arithmetic mean of derv.1.f and derv.12.df over the m samples to obtain the FI
### about alpha in Type-II censored samples
I.11 <- function(num, r){
  return(sum(derv.1.f[1:r,])/m + (num-r)*sum(derv.12.df[r,])/m)
}
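For instance (a usage sketch; the specific calls below are ours), the simulated FI about α and β in the first r = 5 order statistics of a sample of size n = 10 is then obtained as:

### example usage: simulated FI in the first 5 of n = 10 order statistics
I.11(10, 5)   # FI about alpha in the Type-II right censored sample with r = 5
I.22(10, 5)   # FI about beta in the same censored sample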
