Bayes and Invariant Estimation of Parameters Under a Bounded Asymmetric Loss Function
Journal of Advanced Statistics, Vol. 1, No. 2, June 2016
https://dx.doi.org/10.22606/jas.2016.12002
Copyright © 2016 Isaac Scientific Publishing

N. Sanjari Farsipour
Department of Statistics, Faculty of Mathematical Sciences, Alzahra University, Tehran, Iran
Email: [email protected]

Abstract. In this paper we consider the estimation of parameters under a bounded asymmetric loss function. The Bayes and invariant estimators of location and scale parameters are derived, both in the presence and in the absence of a nuisance parameter, and several illustrative examples are included.

Keywords: Bayes estimation; invariance; location parameter; scale parameter; bounded asymmetric loss.

1 Introduction

In the literature, the estimation of a parameter is usually studied under squared error loss or, more generally, under a convex symmetric loss function. The quadratic loss has been criticized by several researchers (e.g., [4], [5], [6] and [7]). The loss function proposed here is

    L(θ, δ) = k{1 − exp[−b(e^{aΔ} − aΔ − 1)]},  Δ = δ − θ,   (1.1)

where a ≠ 0 determines the shape of the loss function, b > 0 serves to scale the loss, and k > 0 is the maximum-loss parameter. The general form of (1.1) is illustrated in Figure 1; it is a bounded asymmetric loss function.

Figure 1. The loss function (1.1) for a = 1.

In this paper we first study the estimation of a location parameter θ under the loss (1.1). In Section 2 we derive the best location-invariant estimator of θ under (1.1). In Section 3, Bayesian estimation of the normal mean is obtained under (1.1). We then study the estimation of a scale parameter θ, using the loss function

    L(θ, δ) = k{1 − exp[−b(e^{a(δ/θ − 1)} − a(δ/θ − 1) − 1)]},   (1.2)

where a ≠ 0 and b, k > 0. The loss (1.2) is scale invariant and bounded. In Section 4 we derive the best invariant estimator of the scale parameter under (1.2).
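Since (1.1) and (1.2) are elementary functions of Δ, they are easy to evaluate and inspect directly. A minimal sketch (the function names are mine, not the paper's):

```python
import math

def loss_location(delta, theta, a=1.0, b=1.0, k=1.0):
    """Loss (1.1): k*(1 - exp(-b*(e^{a*D} - a*D - 1))) with D = delta - theta."""
    d = delta - theta
    return k * (1.0 - math.exp(-b * (math.exp(a * d) - a * d - 1.0)))

def loss_scale(delta, theta, a=1.0, b=1.0, k=1.0):
    """Loss (1.2): same form with D = delta/theta - 1."""
    d = delta / theta - 1.0
    return k * (1.0 - math.exp(-b * (math.exp(a * d) - a * d - 1.0)))

# Zero only at delta = theta, bounded above by k, and asymmetric:
print(loss_location(0.0, 0.0, a=1.0))                      # 0.0
print(loss_location(2.0, 0.0) > loss_location(-2.0, 0.0))  # True: over-estimation costs more when a > 0
```

The second print illustrates the asymmetry: for a > 0, an error of +2 is penalized more heavily than an error of −2, while both losses stay below k.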
Finally, in Section 5 we consider a subclass of the exponential family and obtain the Bayes estimates of θ under the loss (1.2). Since the parameters b and k have no influence on our results, without loss of generality we take b = k = 1 in the rest of the paper.

2 Best Location-Invariant Estimator

Let X = (X₁, ..., Xₙ) have a joint distribution with probability density f(x₁ − θ, ..., xₙ − θ), where f is known and θ is an unknown location parameter. The class of all location-invariant estimators of a location parameter is of the form [3]

    δ(X) = δ₀(X) − v(Y),

where δ₀ is any location-invariant estimator and Y = (Y₁, ..., Y_{n−1}) with Yᵢ = Xᵢ − Xₙ, i = 1, ..., n − 1. The best location-invariant estimator δ* of θ under the loss function (1.1) is δ*(X) = δ₀(X) − v*(y), where v*(y) is a number which minimizes

    E₀[ 1 − exp{−(e^{a(δ₀(X) − v(y))} − a(δ₀(X) − v(y)) − 1)} | Y = y ]

(see [3]). Differentiating with respect to v(y) and equating to zero, it can be seen that v*(y) must satisfy

    E₀[ (e^{a(δ₀(X) − v*(y))} − 1) exp{−(e^{a(δ₀(X) − v*(y))} − a(δ₀(X) − v*(y)) − 1)} | Y = y ] = 0.   (2.1)

Example 2.1 (normal mean). Let X₁, ..., Xₙ be i.i.d. random variables having a normal distribution with mean θ (real but unknown) and known variance σ². If δ₀(X) = X̄, it follows from Basu's theorem that δ₀(X) is independent of Y, and hence the best location-invariant estimator of θ is δ*(X) = X̄ − v*, where v* is a number which satisfies (2.1), i.e.

    ∫ (e^{a(x̄ − v*)} − 1) exp{−(e^{a(x̄ − v*)} − a(x̄ − v*) − 1)} e^{−n x̄²/(2σ²)} dx̄ = 0,   (2.2)

the integral being over x̄ ∈ (−∞, ∞). So v* can be found by numerical solution.

Example 2.2 (uniform). Let X₁, ..., Xₙ be i.i.d. according to the uniform distribution on (θ − τ/2, θ + τ/2), where θ is real (but unknown) and τ (> 0) is known. Taking δ₀(X) = (X₍₁₎ + X₍ₙ₎)/2, which is an invariant estimator of θ, the conditional distribution of δ₀(X) given Y = y depends on y only through the differences Vᵢ = Xᵢ − X₁, i = 2, ..., n.
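Equation (2.2) of Example 2.1 above is a scalar root-finding problem. A minimal numerical sketch, using Gauss–Hermite quadrature for the expectation over X̄ − θ ~ N(0, σ²/n) and plain bisection for the root (the helper name is mine; a > 0 is assumed):

```python
import numpy as np

def vstar_normal(a=1.0, sigma=1.0, n=25, lo=-2.0, hi=2.0):
    """Solve (2.2): E[(e^{a(W-v)} - 1) exp(-(e^{a(W-v)} - a(W-v) - 1))] = 0,
    where W = Xbar - theta ~ N(0, sigma^2/n)."""
    nodes, weights = np.polynomial.hermite.hermgauss(80)
    w = np.sqrt(2.0) * (sigma / np.sqrt(n)) * nodes  # quadrature points for W

    def h(v):
        d = a * (w - v)
        u = np.exp(d)
        return np.dot(weights, (u - 1.0) * np.exp(-(u - d - 1.0))) / np.sqrt(np.pi)

    f_lo = h(lo)  # h(lo) > 0 > h(hi) on this bracket for moderate a, sigma/sqrt(n)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f_lo * h(mid) <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, h(mid)
    return 0.5 * (lo + hi)
```

For a > 0 the root is slightly positive, so the MRE estimator X̄ − v* pulls X̄ downward, reflecting the heavier penalty on over-estimation; v* → 0 as n → ∞.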
Now note that (X₍₁₎, X₍ₙ₎) is a complete sufficient statistic for θ and, by Basu's theorem, is independent of Zᵢ = (X₍ᵢ₎ − X₍₁₎)/(X₍ₙ₎ − X₍₁₎), i = 2, ..., n − 1, for all θ. Hence (X₍₁₎, X₍ₙ₎) and the Zᵢ's are independent for every θ. Also note that the conditional distribution of δ₀(X) given the Vᵢ's, which is equivalent to the conditional distribution of δ₀(X) given X₍ₙ₎ − X₍₁₎ and the Zᵢ's, depends only on X₍ₙ₎ − X₍₁₎. On the other hand, the conditional distribution of δ₀(X) given W = X₍ₙ₎ − X₍₁₎ at θ = 0 is of the form

    f(t | W = w) = 1/(τ − w)  if |t| ≤ (τ − w)/2,  0 ≤ w ≤ τ.

Hence the estimator δ*(X) = (X₍₁₎ + X₍ₙ₎)/2 − v* is the MRE estimator of θ if v* satisfies (2.1), which simplifies to

    e^{a((τ−w)/2 − v*)} exp{−e^{a((τ−w)/2 − v*)}} = e^{−a((τ−w)/2 + v*)} exp{−e^{−a((τ−w)/2 + v*)}}.   (2.3)

So v* (a function of the observed range w) can be found by numerical solution.

Example 2.3 (exponential distribution). Let X₁, ..., Xₙ be i.i.d. random variables with density

    f(x) = (1/σ) e^{−(x−θ)/σ},  x ≥ θ,

where θ ∈ R is unknown and σ (> 0) is known. δ₀(X) = X₍₁₎ is an equivariant estimator and, by Basu's theorem, it is independent of Y. Therefore δ*(X) = X₍₁₎ − v* is the MRE estimator of θ if v* satisfies (2.1), i.e.

    ∫₀^∞ (e^{a(t − v*)} − 1) exp{−(e^{a(t − v*)} − a(t − v*) − 1)} (n/σ) e^{−nt/σ} dt = 0.   (2.4)

After the substitution u = e^{a(t − v*)} the left-hand side can be expressed in terms of incomplete gamma functions, and v* can be found by numerical solution.

3 Bayes Estimation of the Normal Mean

Let X₁, ..., Xₙ be a random sample of size n from a normal distribution with mean θ (unknown parameter) and variance σ² (known parameter). In this section we consider Bayesian estimation of θ under the loss function (1.1).
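Before specializing to the conjugate normal prior, note that for any posterior that can be sampled, the Bayes estimate under the loss (1.1) (with b = k = 1) can be approximated by minimizing a Monte Carlo estimate of the posterior risk over a grid. A generic sketch, not from the paper (the helper name and grid choice are mine):

```python
import numpy as np

def bayes_estimate(posterior_draws, a=1.0, n_grid=601):
    """Grid-minimize the Monte Carlo posterior risk under loss (1.1), b = k = 1."""
    th = np.asarray(posterior_draws, dtype=float)
    m, s = th.mean(), th.std()
    grid = np.linspace(m - 3.0 * s, m + 3.0 * s, n_grid)  # candidate estimates
    risks = np.empty(n_grid)
    for i, dlt in enumerate(grid):
        d = a * (dlt - th)
        u = np.exp(d)
        risks[i] = np.mean(1.0 - np.exp(-(u - d - 1.0)))
    return grid[np.argmin(risks)]

# e.g. a normal posterior: for a > 0 the estimate is pulled below the posterior mean
rng = np.random.default_rng(0)
draws = rng.normal(1.0, 0.3, size=20000)
print(bayes_estimate(draws, a=2.0))
```

Because the same draws are reused across the grid, the estimated risk curve is smooth in δ and the argmin is stable.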
If the conjugate family of prior distributions for θ is the family of normal distributions N(μ, b²), then the posterior distribution of θ is N(m, τ²), where

    m = (n x̄/σ² + μ/b²) / (n/σ² + 1/b²)  and  τ² = (n/σ² + 1/b²)^{−1},

and the posterior risk of an estimator δ(X) under the loss function (1.1) is

    E[ 1 − exp{−(e^{a(δ−θ)} − a(δ−θ) − 1)} | x ] = 1 − (2πτ²)^{−1/2} ∫ exp{−(e^{a(δ−θ)} − a(δ−θ) − 1)} e^{−(θ−m)²/(2τ²)} dθ.

So the Bayes estimator δ_B(X) is the solution of the following integral equation:

    ∫ (e^{a(δ_B−θ)} − 1) exp{−(e^{a(δ_B−θ)} − a(δ_B−θ) − 1)} e^{−(θ−m)²/(2τ²)} dθ = 0.   (3.1)

Hence δ_B can be found from equation (3.1) by numerical solution. Notice also that the generalized Bayes estimator relative to the diffuse prior π(θ) = 1 for all θ ∈ R can be found by letting b → ∞, i.e. m → x̄ and τ² → σ²/n.

In the presence of a nuisance parameter σ², i.e. when σ² is unknown, a modified loss function is

    L((θ, σ); δ) = 1 − exp{−(e^{aΔ/σ} − aΔ/σ − 1)},  Δ = δ − θ,  a ≠ 0,   (3.2)

which is a location-scale invariant loss function. In this setting we obtain a class of Bayes estimators of the location parameter θ. Let r = 1/σ² be the precision, which is unknown, and suppose that, conditional on r, θ has a normal distribution with mean μ and variance 1/(τ r), where μ ∈ R and τ > 0 are both known constants, i.e. θ | r ~ N(μ, 1/(τ r)), and r has a p.d.f. g(r). In this case, one can easily verify that

    π(θ, r | x) ∝ r^{n/2} exp{−(r/2) Σᵢ (xᵢ − θ)²} (τ r)^{1/2} exp{−(τ r/2)(θ − μ)²} g(r).

It is clear that θ | x, r ~ N(μₙ, 1/((n + τ) r)), with μₙ = (n x̄ + τ μ)/(n + τ). To obtain the Bayes estimate of θ for our problem, it is enough to find an estimate δ(x) which minimizes E[L((θ, σ); δ) | x], the expectation being with respect to the joint posterior distribution of (θ, r). So δ_B is the solution of

    E[ (e^{a√r(δ_B−θ)} − 1) exp{−(e^{a√r(δ_B−θ)} − a√r(δ_B−θ) − 1)} | x ] = 0,   (3.3)

which can be solved numerically.

4 Best Scale-Invariant Estimator

Consider a random sample X₁, ..., Xₙ from (1/θ) f(x/θ), where f is a known function and θ is an unknown scale parameter. It is desired to estimate θ under the loss function (1.2).
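Because the risk of a scale-invariant estimator under (1.2) is constant in θ, it can be estimated by simulation at θ = 1 and compared across candidate estimators. A minimal Monte Carlo sketch for estimators of the form c·X̄ when f is standard exponential (the choice of f, the class {c·X̄}, and all names are mine, for illustration only):

```python
import numpy as np

def risk_at_theta1(c, a=1.0, n=10, n_rep=200000, seed=1):
    """Monte Carlo risk of delta = c * Xbar under loss (1.2) at theta = 1,
    for X_i i.i.d. standard exponential (so Xbar ~ Gamma(n, scale 1/n))."""
    rng = np.random.default_rng(seed)  # common random numbers across c
    xbar = rng.gamma(shape=n, scale=1.0 / n, size=n_rep)
    d = a * (c * xbar - 1.0)  # delta/theta - 1 at theta = 1
    u = np.exp(d)
    return np.mean(1.0 - np.exp(-(u - d - 1.0)))

cs = np.linspace(0.5, 1.5, 201)
risks = [risk_at_theta1(c) for c in cs]
c_best = cs[int(np.argmin(risks))]
```

With a > 0 the best multiple comes out below 1, again reflecting the heavier penalty on over-estimation of the scale.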
The class of all scale-invariant estimators of θ is of the form

    δ(X) = δ₀(X) W(Z),

where δ₀ is any scale-invariant estimator, X = (X₁, ..., Xₙ), and Z = (Z₁, ..., Zₙ) with Zᵢ = Xᵢ/Xₙ, i = 1, ..., n − 1, and Zₙ = Xₙ/|Xₙ|. Moreover, the best scale-invariant (minimum risk equivariant (MRE)) estimator δ* of θ is given by

    δ*(X) = δ₀(X) w*(Z),

where w*(Z) is a function of Z which maximizes

    E_{θ=1}[ exp{−(e^{a(δ₀(X) w(z) − 1)} − a(δ₀(X) w(z) − 1) − 1)} | Z = z ].   (4.1)

In the presence of a location parameter as a nuisance parameter, the MRE estimator of θ is of the form δ*(X) = δ₀(Y) w*(Z), where δ₀(Y) is any finite-risk scale-invariant estimator of θ based on Y = (Y₁, ..., Y_{n−1}) with Yᵢ = Xᵢ − Xₙ, i = 1, ..., n − 1; here Z = (Z₁, ..., Z_{n−1}) with Zᵢ = Yᵢ/Y_{n−1}, i = 1, ..., n − 2, and Z_{n−1} = Y_{n−1}/|Y_{n−1}|, and w*(Z) is any function of Z maximizing

    E_{θ=1}[ exp{−(e^{a(δ₀(Y) w(z) − 1)} − a(δ₀(Y) w(z) − 1) − 1)} | Z = z ].   (4.2)

In many cases we can find an equivariant estimator δ₀(X) or δ₀(Y) whose distribution at θ = 1 is a gamma distribution with known parameters (α, λ) and which is independent of Z.
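In that gamma case the maximization in (4.1) reduces to a one-dimensional search over w. A minimal sketch using Gauss–Laguerre quadrature for the expectation over δ₀ ~ Gamma(α, rate λ) at θ = 1 (the helper name, the grid, and the rate parameterization of λ are mine):

```python
import math
import numpy as np

def wstar_gamma(alpha, lam, a=1.0, grid=None):
    """Maximize E[exp(-(e^{a(T w - 1)} - a(T w - 1) - 1))] over w,
    where T = delta_0 ~ Gamma(shape=alpha, rate=lam), i.e. theta = 1 in (4.1)."""
    if grid is None:
        grid = np.linspace(0.05, 3.0, 600)
    x, wt = np.polynomial.laguerre.laggauss(60)          # nodes for ∫ e^{-x} f(x) dx
    probs = wt * x ** (alpha - 1.0) / math.gamma(alpha)  # Gamma(alpha, 1) weights
    t = x / lam                                          # Gamma(alpha, rate=lam) nodes
    best_w, best_val = None, -np.inf
    for w in grid:
        d = np.minimum(a * (t * w - 1.0), 30.0)  # clip: the loss is already ~1 out here
        u = np.exp(d)
        val = np.dot(probs, np.exp(-(u - d - 1.0)))
        if val > best_val:
            best_w, best_val = w, val
    return best_w
```

For α = λ (so that E[δ₀] = 1 at θ = 1) and a > 0, w* comes out below 1, and it moves toward 1 as α grows and δ₀ concentrates.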