Journal of Advanced Statistics, Vol. 1, No. 2, June 2016 63 https://dx.doi.org/10.22606/jas.2016.12002

Bayes and Invariant Estimation of Parameters Under a Bounded Asymmetric Loss Function

N. Sanjari Farsipour

Department of Statistics, Faculty of Mathematical Sciences, Alzahra University, Tehran, Iran. Email: [email protected]

Abstract. In this paper we consider the estimation of parameters under a bounded asymmetric loss function. Bayes and invariant estimation of location and scale parameters, in the presence and absence of a nuisance parameter, is considered. Some examples in this regard are included.

Keywords: Bayes estimation; invariance; bounded asymmetric loss.

1 Introduction

In the literature, the estimation of a parameter is usually considered when the loss is squared error or, more generally, any convex and symmetric function. The quadratic loss function has been criticized by some researchers (e.g., [4], [5], [6] and [7]). The proposed loss function is

$$L(\theta,\delta)=k\left\{1-\exp\left[-b\left(e^{a(\delta-\theta)}-a(\delta-\theta)-1\right)\right]\right\} \qquad (1.1)$$

where $a\neq 0$ determines the shape of the loss function, $b>0$ serves to scale the loss, and $k>0$ is the maximum loss parameter. The general form of the loss function is illustrated in Figure 1. This is obviously a bounded asymmetric loss function.

Figure 1. The loss function (1.1) for a=1.
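To make the boundedness and asymmetry of (1.1) concrete, the following small numerical sketch evaluates the loss on a few errors; the values $a=b=k=1$ and the grid of errors are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def loss(d, a=1.0, b=1.0, k=1.0):
    """Bounded asymmetric loss (1.1) as a function of the error d = delta - theta."""
    d = np.asarray(d, dtype=float)
    return k * (1.0 - np.exp(-b * (np.exp(a * d) - a * d - 1.0)))

# Zero at d = 0, bounded above by k, and asymmetric in d:
print(loss([-2.0, 0.0, 2.0]))
```

The loss vanishes only at $\delta=\theta$, approaches the ceiling $k$ in both tails, and penalizes positive and negative errors of the same magnitude differently.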

In this paper, we first study the problem of estimation of a location parameter $\theta$, using the loss function (1.1). In Section 2 we introduce the best location-invariant estimator of $\theta$ under the loss (1.1). In Section 3, Bayesian estimation of the normal mean is obtained under the loss (1.1). Then we study the problem of estimation of a scale parameter $\sigma$, using the loss function

$$L(\sigma,\delta)=k\left\{1-\exp\left[-b\left(e^{a(\delta/\sigma-1)}-a(\delta/\sigma-1)-1\right)\right]\right\} \qquad (1.2)$$

where $a\neq 0$ and $b,k>0$. The loss (1.2) is scale invariant and bounded. In Section 4 we introduce the best invariant estimator of the scale parameter $\sigma$ under the loss (1.2). Finally, in Section 5 we consider a


subclass of the exponential family and obtain the Bayes estimates of $\sigma$ under the loss (1.2). Since the parameters $b$ and $k$ do not have any influence on our results, without loss of generality we take $b=k=1$ in the rest of the paper.

2 Best Location-Invariant Estimator

Let X  (XX1 ,...,n ) have a joint distribution with probability density ffXX(X  ) (1  ,...,n  ) where f is known and  is an unknown location parameter. The class of all location invariant of a location parameter  is of the form [3]

()XXY0 ()v ( ) where 0 is any location-invariant estimator and Y  (,...,YY11n  ) with YXXiin, in1,..., 1 and the best location-invariant estimator  * of  under the loss function(1.1), is *** ()XXy0 ()v (), where v ()y is a number which minimizes

$$E_{\theta=0}\left[1-\exp\left\{1+a\big(\delta_0(X)-v(y)\big)-e^{a(\delta_0(X)-v(y))}\right\}\,\Big|\,Y=y\right]$$

(see [3]). Differentiating with respect to $v(y)$ and equating to zero, it can be seen that $v^*(y)$ must satisfy the following equation

$$E_{\theta=0}\left[\left(e^{a(\delta_0(X)-v^*(y))}-1\right)\exp\left\{a\big(\delta_0(X)-v^*(y)\big)-e^{a(\delta_0(X)-v^*(y))}\right\}\,\Big|\,Y=y\right]=0 \qquad (2.1)$$

Example 2.1: (Normal mean) Let $X_1,\ldots,X_n$ be i.i.d. random variables having a normal distribution with mean $\theta$ (real but unknown) and known variance $\sigma^2$. If $\delta_0(X)=\bar{X}$, it follows from Basu's theorem that $\delta_0(X)$ is independent of $Y$, and hence the best location-invariant estimator of $\theta$ is given by $\delta^{**}(X)=\bar{X}-v^*$, where $v^*$ is a number which satisfies (2.1), i.e.

$$\int_{-\infty}^{\infty}e^{2a(\bar{x}-v^*)}\,e^{-e^{a(\bar{x}-v^*)}}\,e^{-\frac{n\bar{x}^2}{2\sigma^2}}\,d\bar{x}=\int_{-\infty}^{\infty}e^{a(\bar{x}-v^*)}\,e^{-e^{a(\bar{x}-v^*)}}\,e^{-\frac{n\bar{x}^2}{2\sigma^2}}\,d\bar{x} \qquad (2.2)$$

So, we can find $v^*$ by a numerical solution.
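Equation (2.2) has no closed-form solution, but $v^*$ is easy to obtain numerically. The sketch below combines quadrature with a root finder; the values $a=1$, $\sigma=1$, $n=10$ and the bracketing interval are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def h(v, a=1.0, sigma=1.0, n=10):
    """Difference of the two sides of (2.2): E[(e^{aD} - 1) e^{aD - e^{aD}}]
    with D = xbar - v and xbar ~ N(0, sigma^2/n) under theta = 0."""
    s = sigma / np.sqrt(n)
    def integrand(x):
        d = a * (x - v)
        return (np.exp(d) - 1.0) * np.exp(d - np.exp(d)) * np.exp(-x**2 / (2.0 * s**2))
    return quad(integrand, -10.0 * s, 10.0 * s)[0]

v_star = brentq(h, -2.0, 2.0)  # the MRE estimator is then xbar - v_star
```

For these illustrative values $v^*$ comes out as a small positive number, reflecting the asymmetry of the loss.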

Example 2.2: (Uniform) Let $X_1,\ldots,X_n$ be i.i.d. according to the uniform distribution on $\left(\theta-\frac{\tau}{2},\,\theta+\frac{\tau}{2}\right)$, where $\theta$ is real (but unknown) and $\tau(>0)$ is known. Taking $\delta_0(X)=\frac{X_{(1)}+X_{(n)}}{2}$, which is an invariant estimator of $\theta$, the conditional distribution of $\delta_0(X)$ given $Y=y$ depends on $y$ only through the differences $V_i=X_{(i)}-X_{(1)}$, $i=2,\ldots,n$. Now, note that $(X_{(1)},X_{(n)})$ is a complete sufficient statistic for $(\theta,\tau)$ and, by Basu's theorem, is independent of $Z_i=\frac{X_{(i)}-X_{(1)}}{X_{(n)}-X_{(1)}}$, $i=2,\ldots,n-1$, for all $\theta,\tau$. Hence $(X_{(1)},X_{(n)})$ and the $Z_i$'s are independent for all $\theta$ and any given $\tau$. Also, note that the conditional distribution of $\delta_0(X)$ given the $V_i$'s, which is equivalent to the conditional distribution of $\delta_0(X)$ given $X_{(n)}-X_{(1)}$ and the $Z_i$'s, depends only on $X_{(n)}-X_{(1)}$. On the other hand, the conditional distribution of $\delta_0(X)$ given $W=X_{(n)}-X_{(1)}$ at $\theta=0$ is of the form

$$f_{\delta_0(X)}(t\mid W=w)=\frac{1}{\tau-w}\qquad \text{if } |t|\le\frac{\tau-w}{2}.$$


XX Hence the estimator  **()X (1) (n ) v is the MRE estimator of , if  * satisfies (2.1), which 2 simplifies to

ww** wwav() av ()  ww** ave()**22 ave ()  av() a ( v ) ee22 ee22(1 a )ea (1 ) e (2.3) So, we can find v * by a numerical solution.

Example 2.3: (Exponential distribution) Let $X_1,\ldots,X_n$ be i.i.d. random variables with the density

$$f(x)=\frac{1}{\tau}e^{-(x-\theta)/\tau},\qquad x\ge\theta,$$

where $\theta\in\mathbb{R}$ is unknown and $\tau(>0)$ is known. $\delta_0(X)=X_{(1)}$ is an equivariant estimator and, by Basu's theorem, it is independent of $Y$. Therefore, $\delta^{**}(X)=X_{(1)}-v^*$ is the MRE estimator of $\theta$, if $v^*$ satisfies (2.1), i.e.

$$\int_0^{\infty}e^{2a(x-v^*)}e^{-e^{a(x-v^*)}}\,\frac{n}{\tau}e^{-nx/\tau}\,dx=\int_0^{\infty}e^{a(x-v^*)}e^{-e^{a(x-v^*)}}\,\frac{n}{\tau}e^{-nx/\tau}\,dx$$

which simplifies to

nnnn 1(1 )! na * ()! n aaaaar* (1 ) ar* (  ) eeaa e  nn (2.4) rr00(1rr )! ()!  aa So, we can find  * by a numerical solution.

3 Bayes Estimation of the Normal Mean

Let $X_1,\ldots,X_n$ be a random sample of size $n$ from a normal distribution with mean $\theta$ (unknown parameter) and variance $\sigma^2$ (known parameter). In this section we consider Bayesian estimation of the parameter $\theta$ using the loss function (1.1). If the conjugate family of prior distributions for $\theta$ is the family of normal distributions $N(\mu,b^2)$, then the posterior distribution of $\theta$ is $N(m,v^2)$, where

$$m=\frac{\dfrac{n\bar{x}}{\sigma^2}+\dfrac{\mu}{b^2}}{\dfrac{n}{\sigma^2}+\dfrac{1}{b^2}} \qquad\text{and}\qquad v^2=\frac{1}{\dfrac{n}{\sigma^2}+\dfrac{1}{b^2}},$$

and the posterior risk of an estimator $\delta(X)$ under the loss function (1.1) is

$$E\left[1-e^{1+a(\delta-\theta)-e^{a(\delta-\theta)}}\,\Big|\,X\right]=1-\frac{1}{\sqrt{2\pi}\,v}\int_{-\infty}^{\infty}e^{1+a(\delta-\theta)-e^{a(\delta-\theta)}}\,e^{-\frac{(\theta-m)^2}{2v^2}}\,d\theta,$$

so $\delta_B(X)$ is the solution of the following integral equation

$$\int_{-\infty}^{\infty}e^{2a(\delta_B-\theta)}e^{-e^{a(\delta_B-\theta)}}\,e^{-\frac{(\theta-m)^2}{2v^2}}\,d\theta=\int_{-\infty}^{\infty}e^{a(\delta_B-\theta)}e^{-e^{a(\delta_B-\theta)}}\,e^{-\frac{(\theta-m)^2}{2v^2}}\,d\theta \qquad (3.1)$$

Hence, we can find $\delta_B$ from equation (3.1) by a numerical solution.
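The following sketch solves (3.1) for a small data set; the data, the prior mean $\mu$, the prior variance $b^2$ and $a$ are all assumed values, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a, sigma, mu, b2 = 1.0, 1.0, 0.0, 4.0          # illustrative constants
x = np.array([0.2, -0.5, 1.1, 0.7, 0.3])        # illustrative data
n = len(x)
prec = n / sigma**2 + 1.0 / b2
m = (n * x.mean() / sigma**2 + mu / b2) / prec  # posterior mean
v2 = 1.0 / prec                                 # posterior variance

def g(delta):
    """Posterior expectation E[(e^{aD} - 1) e^{aD - e^{aD}} | x], D = delta - theta."""
    s = np.sqrt(v2)
    def integrand(th):
        d = a * (delta - th)
        return (np.exp(d) - 1.0) * np.exp(d - np.exp(d)) * np.exp(-(th - m)**2 / (2.0 * v2))
    return quad(integrand, m - 10.0 * s, m + 10.0 * s)[0]

delta_B = brentq(g, m - 2.0, m + 2.0)  # Bayes estimate under (1.1)
```

Because the loss penalizes overestimation more heavily here, the Bayes estimate is pulled slightly below the posterior mean $m$.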


Also, notice that the generalized Bayes estimator relative to a diffuse prior, $\pi(\theta)=1$ for all $\theta\in\mathbb{R}$, can be found by letting $b\to\infty$, i.e. the posterior mean tends to $\bar{x}$ and the posterior variance to $\frac{\sigma^2}{n}$.

In the presence of a nuisance parameter $\sigma^2$, i.e. when $\sigma^2$ is unknown, a modified loss function is as follows

$$L(\theta,\sigma;\delta)=1-\exp\left\{1+a\,\frac{\delta-\theta}{\sigma}-e^{a\frac{\delta-\theta}{\sigma}}\right\},\qquad a\neq 0, \qquad (3.2)$$

which is a location-scale invariant loss function.

In this setting, we obtain a class of Bayes estimators of the location parameter $\theta$. Let $\tau=\frac{1}{\sigma^2}$ be the precision, which is unknown, and suppose that, conditional on $\tau$, $\theta$ has a normal distribution with mean $\mu$ and variance $\frac{1}{\lambda\tau}$, where $\mu\in\mathbb{R}$ and $\lambda>0$ are both known constants, i.e. $\theta\mid\tau\sim N\left(\mu,\frac{1}{\lambda\tau}\right)$, and $\tau$ has a p.d.f. $g(\tau)$. In this case, one can easily verify that

n r 2 ()x  r 2  i () 2 i 1 2 x,   ee Or 2 n x ,exp nx       2 nn  1 n  It is clear that |,~x N   , , with  x . To obtain the Bayes estimate of ()n  nn  X  for our problem, it is enough to find an estimate ()x which minimizes EL();, X,  for any X, . This expectation is under the distribution of  X, . So B is the solution of the following integral equation r r an()()()    2 an()()()    2 ae  B 2 2ae  B 2 egegB ()d  d  B ()d  d (3.3) 00  which can be solved numerically.

4 Best Scale Invariant Estimator

Consider a random sample $X_1,\ldots,X_n$ from $\frac{1}{\sigma}f\left(\frac{x}{\sigma}\right)$, where $f$ is a known function and $\sigma$ is an unknown scale parameter. It is desired to estimate $\sigma$ under the loss function (1.2). The class of all scale-invariant estimators of $\sigma$ is of the form

()XXZ 0 ()W ()

Xi where 0 is any scale-invariant estimator, X  (XX1 ,...,n ), and Z  (ZZ1 ,...,n ) with Zi  ; Xn

Xn inZ1,..., 1,n . Moreover the best scale-invariant (minimum risk equivariant (MRE)) estimator Xn  * of  is given by ** ()XXZ 0 ()w () where w *()Z is a function of Z which maximizes


$$E_{\sigma=1}\left[\exp\left\{1+a\left(\frac{\delta_0(X)}{w(Z)}-1\right)-e^{a\left(\frac{\delta_0(X)}{w(Z)}-1\right)}\right\}\,\Bigg|\,Z=z\right] \qquad (4.1)$$

In the presence of a location parameter $\mu$ as a nuisance parameter, the MRE estimator of $\sigma$ is of the form $\delta^*(X)=\frac{\delta_0(Y)}{w^*(Z)}$, where $\delta_0(Y)$ is any finite-risk scale-invariant estimator of $\sigma$ based on $Y=(Y_1,\ldots,Y_{n-1})$ with $Y_i=X_i-X_n$, $i=1,\ldots,n-1$; here $Z=(Z_1,\ldots,Z_{n-1})$ with $Z_i=\frac{Y_i}{Y_{n-1}}$, $i=1,\ldots,n-2$, and $Z_{n-1}=\frac{Y_{n-1}}{|Y_{n-1}|}$, and $w^*(Z)$ is any function of $Z$ maximizing

$$E_{\sigma=1}\left[\exp\left\{1+a\left(\frac{\delta_0(Y)}{w(Z)}-1\right)-e^{a\left(\frac{\delta_0(Y)}{w(Z)}-1\right)}\right\}\,\Bigg|\,Z=z\right] \qquad (4.2)$$

In many cases, when $\sigma=1$, we can find an equivariant estimator $\delta_0(X)$ (or $\delta_0(Y)$) which has the gamma distribution with known parameters $\alpha,\beta$ and is independent of $Z$. It follows that $\delta^*=\frac{\delta_0}{w^*}$ is the MRE estimator of $\sigma$, where $w^*$ is a number which maximizes

$$g(w)=\int_0^{\infty}\exp\left\{1+a\left(\frac{x}{w}-1\right)-e^{a\left(\frac{x}{w}-1\right)}\right\}\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}e^{-\beta x}\,dx \qquad (4.3)$$

and hence $w^*$ must satisfy the following equation

$$\int_0^{\infty}x^{\alpha}\,e^{2a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-\beta x}\,dx=\int_0^{\infty}x^{\alpha}\,e^{a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-\beta x}\,dx \qquad (4.4)$$

Theorem 4.1: If $\delta_0(X)$ is a finite-risk scale-invariant estimator of $\sigma$ which has the gamma distribution with known parameters $\alpha,\beta$ when $\sigma=1$, then the MRE (minimum risk equivariant) estimator of $\sigma$ under the loss function (1.2) is $\delta^*(X)=\frac{\delta_0(X)}{w^*}$, where $w^*$ must satisfy equation (4.4).

Example 4.1: (Exponential) Let $X_1,\ldots,X_n$ be a random sample from $E(0,\sigma)$ with density $\frac{1}{\sigma}e^{-x/\sigma}$, $x>0$, and consider the estimation of $\sigma$ under the loss (1.2). $\delta_0(X)=\sum_{i=1}^{n}X_i$ is an equivariant estimator which has the $Ga(n,1)$ distribution when $\sigma=1$, and it follows from Basu's theorem that $\delta_0$ is independent of $Z$. Hence the MRE estimator of $\sigma$ under the loss (1.2) is $\delta^*(X)=\frac{\sum_{i=1}^{n}X_i}{w^*}$, where $w^*$ must satisfy the following equation

$$\int_0^{\infty}x^{n}\,e^{2a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-x}\,dx=\int_0^{\infty}x^{n}\,e^{a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-x}\,dx \qquad (4.5)$$
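Equations of the form (4.4) all share one template: a $Ga(\alpha,\beta)$ pivot and a scalar root in $w$. The sketch below solves the template numerically; $a=1$ and $\alpha=n=5$, $\beta=1$ (the exponential case of (4.5)) are illustrative assumptions, and other $(\alpha,\beta)$ pairs give the later examples.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def h(w, a=1.0, alpha=5.0, beta=1.0):
    """Difference of the two sides of (4.4) for a Ga(alpha, beta) pivot:
    integral of x^alpha (e^{aD} - 1) e^{aD - e^{aD}} e^{-beta x}, D = x/w - 1."""
    def integrand(x):
        d = a * (x / w - 1.0)
        return x**alpha * (np.exp(d) - 1.0) * np.exp(d - np.exp(d)) * np.exp(-beta * x)
    return quad(integrand, 0.0, 60.0)[0]

# alpha = n, beta = 1 reproduces equation (4.5) for the exponential example:
w_star = brentq(h, 1.0, 50.0)  # MRE estimator: sum(X_i) / w_star
```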


Example 4.1: (Continued) Suppose that $X_1,\ldots,X_n$ is a random sample from $E(\mu,\sigma)$ with density $\frac{1}{\sigma}e^{-(x-\mu)/\sigma}$, $x\ge\mu$, and consider the estimation of $\sigma$ when $\mu$ is unknown. We know that $\left(X_{(1)},\sum_{i=1}^{n}(X_i-X_{(1)})\right)$ is a complete sufficient statistic for $(\mu,\sigma)$. It follows that

$\delta_0(Y)=2\sum_{i=1}^{n}\left(X_i-X_{(1)}\right)$ has the $Ga\left(n-1,\frac{1}{2}\right)$ distribution when $\sigma=1$, and from Basu's theorem $\delta_0(Y)$ is independent of $Z$; hence $\delta^*(X)=\frac{2\sum_{i=1}^{n}(X_i-X_{(1)})}{w^*}$ is the MRE estimator of $\sigma$ under the loss (1.2), where $w^*$ must satisfy the following equation

$$\int_0^{\infty}x^{\,n-1}\,e^{2a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-x/2}\,dx=\int_0^{\infty}x^{\,n-1}\,e^{a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-x/2}\,dx \qquad (4.6)$$

Example 4.2: (Normal variance) Let $X_1,\ldots,X_n$ be a random sample from $N(0,\sigma^2)$ and consider the estimation of $\sigma^2$. $\delta_0(X)=\sum_{i=1}^{n}X_i^2$ is a finite-risk scale-invariant estimator of $\sigma^2$ and is independent of $Z$; when $\sigma^2=1$, $\delta_0(X)$ has the $Ga\left(\frac{n}{2},\frac{1}{2}\right)$ distribution, and hence $\delta^*(X)=\frac{\sum_{i=1}^{n}X_i^2}{w^*}$ is the MRE estimator of $\sigma^2$, where $w^*$ must satisfy the following equation

$$\int_0^{\infty}x^{n/2}\,e^{2a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-x/2}\,dx=\int_0^{\infty}x^{n/2}\,e^{a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-x/2}\,dx \qquad (4.7)$$

Example 4.2: (Continued) Let $X_1,\ldots,X_n$ be a random sample from $N(\mu,\sigma^2)$, with $\mu$ a nuisance parameter. In estimating $\sigma^2$ using the loss (1.2), it follows that $\delta_0(Y)=\sum_{i=1}^{n}(X_i-\bar{X})^2$ is independent of $Z$ and, when $\sigma^2=1$, has the $Ga\left(\frac{n-1}{2},\frac{1}{2}\right)$ distribution. Therefore, $\delta^*(X)=\frac{\sum_{i=1}^{n}(X_i-\bar{X})^2}{w^*}$ is the MRE estimator of $\sigma^2$, where $w^*$ must satisfy the following equation

$$\int_0^{\infty}x^{(n-1)/2}\,e^{2a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-x/2}\,dx=\int_0^{\infty}x^{(n-1)/2}\,e^{a\left(\frac{x}{w^*}-1\right)}e^{-e^{a\left(\frac{x}{w^*}-1\right)}}e^{-x/2}\,dx \qquad (4.8)$$

Example 4.3: (Inverse Gaussian with zero drift) Let $X_1,\ldots,X_n$ be a random sample from the inverse Gaussian distribution with zero drift, with density

$$f(x\mid\lambda)=\sqrt{\frac{\lambda}{2\pi x^{3}}}\;e^{-\frac{\lambda}{2x}},\qquad x>0,$$

and consider the estimation of $\lambda$. $\delta_0(X)=\sum_{i=1}^{n}X_i^{-1}$ has the $Ga\left(\frac{n}{2},\frac{1}{2}\right)$ distribution when $\lambda=1$ and is independent of $Z$; hence $\delta^*(X)=\frac{\sum_{i=1}^{n}X_i^{-1}}{w^*}$ is the MRE estimator of $\lambda$, where $w^*$ must satisfy equation (4.7).


5 Bayes Estimation of Scale Parameters

In this section, we consider the Bayesian estimation of the scale parameter $\sigma$ in a subclass of one-parameter exponential families in which the complete sufficient statistic $\delta_0(X)$ has the $Ga\left(\alpha,\frac{\beta}{\sigma}\right)$ distribution, where $\alpha>0$, $\beta>0$ are known.

Assume that the conjugate family of prior distributions for $\tau=\frac{1}{\sigma}$ is the family of gamma distributions $Ga(\gamma,\lambda)$. Then the posterior distribution of $\tau$ is $Ga\big(\gamma+\alpha,\;\lambda+\beta\,\delta_0(x)\big)$, and the Bayes estimate of $\sigma$ is a function $\delta(x)$ which maximizes

$$E\left[\exp\left\{1+a\left(\frac{\delta}{\sigma}-1\right)-e^{a\left(\frac{\delta}{\sigma}-1\right)}\right\}\,\Bigg|\,X\right]=\frac{\big(\lambda+\beta\delta_0\big)^{\gamma+\alpha}}{\Gamma(\gamma+\alpha)}\int_0^{\infty}e^{1+a(\delta\tau-1)-e^{a(\delta\tau-1)}}\,\tau^{\gamma+\alpha-1}e^{-(\lambda+\beta\delta_0)\tau}\,d\tau.$$

Hence the maximizing $\delta$ must satisfy the following integral equation

$$\int_0^{\infty}\tau\,e^{2a(\delta\tau-1)}e^{-e^{a(\delta\tau-1)}}\,\tau^{\gamma+\alpha-1}e^{-(\lambda+\beta\delta_0(x))\tau}\,d\tau=\int_0^{\infty}\tau\,e^{a(\delta\tau-1)}e^{-e^{a(\delta\tau-1)}}\,\tau^{\gamma+\alpha-1}e^{-(\lambda+\beta\delta_0(x))\tau}\,d\tau \qquad (5.1)$$

So all estimators satisfying (5.1) are Bayes estimators.

Example 5.1: (Fisher's problem of the Nile) The classical example of an ancillary statistic is known as the problem of the Nile, originally formulated by Fisher [1]. Assume that $X$ and $Y$ are two positive-valued random variables with the joint density function

$$f(x,y;\theta)=e^{-\left(\theta x+\frac{y}{\theta}\right)},\qquad x>0,\ y>0,\ \theta>0,$$

and that $(X_i,Y_i)$, $i=1,\ldots,n$, is a random sample of $n$ paired observations on $(X,Y)$. Let $\bar{X}=\frac{1}{n}\sum_{i=1}^{n}X_i$, $\bar{Y}=\frac{1}{n}\sum_{i=1}^{n}Y_i$, $T=\sqrt{\bar{Y}/\bar{X}}$ and $U=\sqrt{\bar{X}\bar{Y}}$. $T$ is the MLE of $\theta$, the pair $(T,U)$ is jointly sufficient, but not complete, for $\theta$, and $U$ is ancillary. Consider a nonrandomized rule $\delta(T,U)$ based on the sufficient statistic $(\bar{X},\bar{Y})$ which is equivariant under the transformation

$$z=\begin{pmatrix}c & 0\\ 0 & \frac{1}{c}\end{pmatrix}\begin{pmatrix}X\\ Y\end{pmatrix},\qquad c>0.$$

For $\delta(T,U)$ to be scale equivariant, we must have

$$\delta(cT,U)=c\,\delta(T,U),\qquad c>0. \qquad (5.2)$$

Following Lehmann [3], a necessary and sufficient condition for an estimator $\delta$ to be scale equivariant is that it be of the form $\delta=\delta_0\,\psi(Z)$, where $\delta_0$ satisfies (5.2); here $\delta_0=T$ and $Z=U$.
We see that all the scale equivariant estimators $\delta(T,U)$ must have the form

$$\delta(T,U)=T\,\psi(U). \qquad (5.3)$$

Using the loss function (1.2) and the fact that the joint distribution of $\left(\frac{T}{\theta},U\right)$ does not depend on $\theta$, we can evaluate the risk at $\theta=1$. Hence

$$R(\theta,\delta(T,U))=E_U\left[E\left(1-e^{1+a(T\psi(U)-1)-e^{a(T\psi(U)-1)}}\,\Big|\,U\right)\right].$$

It follows that $R(\theta,\delta(T,U))$ is minimized by minimizing the inner expectation for each $u$. Hence, the minimum risk scale equivariant estimator is $\hat{\delta}_{MRE}=T\,\psi^*(U)$, where $\psi^*(U)$ must satisfy the following integral equation


$$\int_0^{\infty}t\,e^{2a(t\psi^*(u)-1)}e^{-e^{a(t\psi^*(u)-1)}}\,g(t,u)\,dt=\int_0^{\infty}t\,e^{a(t\psi^*(u)-1)}e^{-e^{a(t\psi^*(u)-1)}}\,g(t,u)\,dt \qquad (5.4)$$

where we use the fact that the joint density function of $(T,U)$ at $\theta=1$ is [2]

$$g(t,u)=\begin{cases}\dfrac{2\,n^{2n}\,u^{2n-1}}{[(n-1)!]^{2}\,t}\,e^{-nu\left(t+\frac{1}{t}\right)} & \text{if } t>0,\ u>0,\\[2mm] 0 & \text{otherwise.}\end{cases}$$

For deriving the Bayes estimator of $\theta$, let us consider the inverted gamma distribution as a prior distribution:

$$\pi_{\nu,\lambda}(\theta)=\frac{\lambda^{\nu}}{\Gamma(\nu)}\,\theta^{-(\nu+1)}e^{-\lambda/\theta},\qquad \theta>0,\ \nu>0,\ \lambda>0.$$

Therefore the unique Bayes estimator $\delta_B$, which is admissible under the loss (1.2), must satisfy the following integral equation

$$\int_0^{\infty}e^{2a\left(\frac{\delta_B}{\theta}-1\right)}e^{-e^{a\left(\frac{\delta_B}{\theta}-1\right)}}\,e^{-n\left(\frac{\theta u}{t}+\frac{ut}{\theta}\right)}\,\theta^{-(\nu+1)}e^{-\lambda/\theta}\,\frac{d\theta}{\theta}=\int_0^{\infty}e^{a\left(\frac{\delta_B}{\theta}-1\right)}e^{-e^{a\left(\frac{\delta_B}{\theta}-1\right)}}\,e^{-n\left(\frac{\theta u}{t}+\frac{ut}{\theta}\right)}\,\theta^{-(\nu+1)}e^{-\lambda/\theta}\,\frac{d\theta}{\theta} \qquad (5.5)$$

Note that $\hat{\delta}_{MRE}=\hat{\delta}_B$ whenever $\nu\to 0$, $\lambda\to 0$. This means that when the loss function is the scale invariant loss (1.2), $\hat{\delta}_{MRE}$ is a generalized Bayes rule against the scale invariant improper prior $\pi(\theta)=\frac{1}{\theta}$, $\theta>0$, and is therefore minimax.
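Equation (5.4) can be sketched numerically as follows. At a fixed ancillary value $u$, the constant factor of $g(t,u)$ and the $1/t$ in $g$ cancel against the leading $t$, leaving only the weight $e^{-nu(t+1/t)}$; the values $a=1$, $n=5$, $u=1$ are illustrative assumptions, and the constant $e^{2nu}$ is dropped purely for numerical convenience (it does not move the root).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def h(psi, a=1.0, n=5, u=1.0):
    """Difference of the two sides of (5.4) at a fixed u, up to positive
    constants; the weight exp(-n u (t + 1/t - 2)) is g(t,u) * t rescaled."""
    def integrand(t):
        d = a * (t * psi - 1.0)
        return (np.exp(d) - 1.0) * np.exp(d - np.exp(d)) \
            * np.exp(-n * u * (t + 1.0 / t - 2.0))
    return quad(integrand, 1e-6, 20.0)[0]

psi_star = brentq(h, 0.2, 5.0)  # MRE estimator of theta: T * psi_star
```

Since the weight is symmetric in $t\leftrightarrow 1/t$, the root comes out close to (but not exactly) $\psi^*=1$.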

References

1. Fisher, R. A. (1959): Statistical Methods and Scientific Inference. Oliver and Boyd, London.
2. Joshi, S. and Nabar, S. (1987): Estimation of the Parameter in the Problem of the Nile. Commun. Statist. Theory Meth., 16, 3149-3156.
3. Lehmann, E. L. and Casella, G. (1998): Theory of Point Estimation. Springer-Verlag, New York.
4. Leon, R. V. and Wu, C. F. J. (1992): A Theory of Performance Measures in Parameter Design. Statist. Sinica, 2(2), 335-357.
5. Tribus, M. and Szonyi, G. (1989): An Alternative View of the Taguchi Approach. Quality Progress, May, 46-52.
6. Varian, H. R. (1975): A Bayesian Approach to Real Estate Assessment. In: Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage, eds. S. E. Fienberg and A. Zellner. North-Holland, Amsterdam, 195-208.
7. Zellner, A. (1986): Bayesian Estimation and Prediction Using Asymmetric Loss Functions. J. Amer. Statist. Assoc., 81, 446-451.
