7 Minimax Estimation


Previously we have looked at Bayes estimation, where the overall measure of the estimation error is a weighted average of the risk over all values of the parameter space, with a positive weight function, a prior. Now we instead use the maximum risk as the relevant measure of the estimation risk,

$$\sup_\theta R(\theta, \delta),$$

and define the minimax estimator, if it exists, as the estimator that minimizes this risk, i.e.

$$\hat\delta = \arg\inf_\delta \sup_\theta R(\theta, \delta),$$

where the infimum is taken over all functions of the data. To relate the minimax estimator to the Bayes estimator, the Bayes risk

$$r_\Lambda = \int R(\theta, \delta_\Lambda)\, d\Lambda(\theta)$$

is of interest.

Definition 16. The prior distribution $\Lambda$ is called least favorable for estimating $g(\theta)$ if $r_\Lambda \ge r_{\Lambda'}$ for every other distribution $\Lambda'$ on $\Omega$.

When the Bayes risk of a Bayes estimator attains its maximum risk, the estimator is minimax:

Theorem 11. Assume $\Lambda$ is a prior distribution and assume the Bayes estimator $\delta_\Lambda$ satisfies

$$\int R(\theta, \delta_\Lambda)\, d\Lambda(\theta) = \sup_\theta R(\theta, \delta_\Lambda).$$

Then
(i) $\delta_\Lambda$ is minimax.
(ii) If $\delta_\Lambda$ is the unique Bayes estimator, it is the unique minimax estimator.
(iii) $\Lambda$ is least favorable.

Proof. Let $\delta \ne \delta_\Lambda$ be another estimator. Then

$$\sup_\theta R(\theta, \delta) \ge \int R(\theta, \delta)\, d\Lambda(\theta) \ge \int R(\theta, \delta_\Lambda)\, d\Lambda(\theta) = \sup_\theta R(\theta, \delta_\Lambda),$$

which proves that $\delta_\Lambda$ is minimax.

To prove (ii), uniqueness of the Bayes estimator means that no other estimator has the same Bayes risk, so $>$ replaces $\ge$ in the second inequality above, and thus no other estimator has the same maximum risk, which proves the uniqueness of the minimax estimator.

To show (iii), let $\Lambda' \ne \Lambda$ be a distribution on $\Omega$, and let $\delta_\Lambda, \delta_{\Lambda'}$ be the corresponding Bayes estimators. Then the Bayes risks are related by

$$r_{\Lambda'} = \int R(\theta, \delta_{\Lambda'})\, d\Lambda'(\theta) \le \int R(\theta, \delta_\Lambda)\, d\Lambda'(\theta) \le \sup_\theta R(\theta, \delta_\Lambda) = r_\Lambda,$$

where the first inequality follows since $\delta_{\Lambda'}$ is the Bayes estimator corresponding to $\Lambda'$. We have proven that $\Lambda$ is least favorable. □

Note 1. The previous result says that minimax estimators are Bayes estimators with respect to a least favorable prior.

Two simple cases in which the Bayes estimator's Bayes risk attains the maximum risk are given in the following.

Corollary 3. Assume the Bayes estimator $\delta_\Lambda$ has constant risk, i.e. $R(\theta, \delta_\Lambda)$ does not depend on $\theta$. Then $\delta_\Lambda$ is minimax.

Proof. If the risk is constant, the supremum over $\theta$ is equal to the average over $\theta$, so the Bayes risk and the maximum risk are the same, and the result follows from the previous theorem. □

Corollary 4. Let $\delta_\Lambda$ be the Bayes estimator for $\Lambda$, and define

$$\Omega_\Lambda = \{\theta \in \Omega : R(\theta, \delta_\Lambda) = \sup_{\theta'} R(\theta', \delta_\Lambda)\}.$$

Then if $\Lambda(\Omega_\Lambda) = 1$, $\delta_\Lambda$ is minimax.

Proof. The condition $\Lambda(\Omega_\Lambda) = 1$ means that the Bayes estimator has constant risk $\Lambda$-almost surely, and since the Bayes estimator is only determined modulo $\Lambda$-null sets, this is enough. □

Example 32. Let $X \sim \mathrm{Bin}(n, p)$ and let $\Lambda = B(a, b)$ be the Beta distribution for $p$. As previously established, via the conditional distribution of $p$ given $X$ (which is also Beta), the Bayes estimator of $p$ is

$$\delta_\Lambda(x) = \frac{a + x}{a + b + n},$$

with risk function, for quadratic loss,

$$R(p, \delta_\Lambda) = \mathrm{Var}(\delta_\Lambda) + (E(\delta_\Lambda) - p)^2
= \frac{npq}{(a+b+n)^2} + \left(\frac{a + np - p(a+b+n)}{a+b+n}\right)^2
= \frac{npq + (aq - bp)^2}{(a+b+n)^2}
= \frac{a^2 + (n - 2a(a+b))p + ((a+b)^2 - n)p^2}{(a+b+n)^2},$$

where $q = 1 - p$. Since this function is a quadratic in $p$, it is constant as a function of $p$ if and only if the coefficients of $p$ and $p^2$ are zero, i.e. iff

$$(a+b)^2 = n, \qquad 2a(a+b) = n,$$

which has the solution $a = b = \sqrt{n}/2$. Thus

$$\delta_\Lambda(x) = \frac{x + \sqrt{n}/2}{n + \sqrt{n}}$$

is a constant-risk Bayes estimator and therefore minimax for the (least favorable) distribution $\Lambda = B(\sqrt{n}/2, \sqrt{n}/2)$.
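As a quick numerical check of the constant-risk property (a minimal sketch, not part of the original notes; the sample size $n = 16$ and the grid of $p$-values are arbitrary choices), the risk of $\delta_\Lambda(x) = (x + \sqrt{n}/2)/(n + \sqrt{n})$ can be computed directly from the binomial pmf:

```python
import numpy as np
from scipy.stats import binom

n = 16
a = np.sqrt(n) / 2                     # least favorable prior is Beta(sqrt(n)/2, sqrt(n)/2)
x = np.arange(n + 1)
delta = (x + a) / (n + np.sqrt(n))     # the constant-risk Bayes estimator

# Squared-error risk R(p, delta) = E_p[(delta(X) - p)^2], evaluated on a grid of p.
p_grid = np.linspace(0.0, 1.0, 201)
risk = np.array([np.sum(binom.pmf(x, n, p) * (delta - p) ** 2) for p in p_grid])

print(risk.min(), risk.max())          # both equal 1/(4 (sqrt(n) + 1)^2) = 0.01 for n = 16
```

The computed risk is flat at $n/(4(n+\sqrt{n})^2) = 1/(4(\sqrt{n}+1)^2)$, in agreement with Corollary 3.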
The Bayes estimator is unique, and therefore this is the unique minimax estimator of $p$, with respect to $\Lambda = B(\sqrt{n}/2, \sqrt{n}/2)$.

Now let $\Lambda$ be an arbitrary distribution on $[0, 1]$. Then the Bayes estimator of $p$ is $\delta_\Lambda(x) = E(p \mid x)$, i.e.

$$\delta_\Lambda(x) = \int_0^1 p\, d\Lambda(p \mid x) = \frac{\int_0^1 p f(x \mid p)\, d\Lambda(p)}{f(x)}
= \frac{\int_0^1 p\, p^x (1-p)^{n-x}\, d\Lambda(p)}{\int_0^1 p^x (1-p)^{n-x}\, d\Lambda(p)}.$$

Using the power expansion

$$(1-p)^{n-x} = 1 + a_1 p + \ldots + a_{n-x} p^{n-x},$$

this becomes

$$\delta_\Lambda(x) = \frac{\int_0^1 \left( p^{x+1} + a_1 p^{x+2} + \ldots + a_{n-x} p^{n+1} \right) d\Lambda(p)}{\int_0^1 \left( p^x + a_1 p^{x+1} + \ldots + a_{n-x} p^{n} \right) d\Lambda(p)},$$

which shows that the Bayes estimator depends on the distribution $\Lambda$ only via the first $n+1$ moments of $\Lambda$. Therefore the least favorable distribution is not unique for estimating $p$ in a $\mathrm{Bin}(n, p)$ distribution: two priors with the same first $n+1$ moments give the same Bayes estimator. □

Recall that when the loss function is convex, then for any randomized estimator there is a nonrandomized estimator whose risk is at least as small as that of the randomized one, so there is no need to consider randomized estimators.

The relation established between the minimax estimator and the Bayes estimator, with the prior $\Lambda$ obtained as a least favorable distribution, is valid when $\Lambda$ is a proper prior. What happens when $\Lambda$ is not proper? Sometimes the estimation problem at hand makes it natural to consider such an improper prior: one such situation is estimating the mean of a Normal distribution with known variance, when the mean is unrestricted, i.e. an arbitrary real number. Then one could believe that the least favorable distribution is the Lebesgue measure on $\mathbb{R}$. To model this, assume $\Lambda$ is a fixed (improper) prior and let $\{\Lambda_n\}$ be a sequence of priors that in some sense approximate $\Lambda$:

Definition 17. Let $\{\Lambda_n\}$ be a sequence of priors and let $\delta_n$ be the Bayes estimator corresponding to $\Lambda_n$, with

$$r_n = \int R(\theta, \delta_n)\, d\Lambda_n(\theta)$$

the Bayes risk. Define $r = \lim_{n \to \infty} r_n$. If $r_{\Lambda'} \le r$ for every (proper) prior distribution $\Lambda'$, then the sequence $\{\Lambda_n\}$ is called least favorable.

Now if $\delta$ is an estimator whose maximal risk attains this limiting Bayes risk, then $\delta$ is minimax and the sequence is least favorable:

Theorem 12. Let $\{\Lambda_n\}$ be a sequence of priors and let $r = \lim_{n \to \infty} r_n$. Assume the estimator $\delta$ satisfies

$$r = \sup_\theta R(\theta, \delta).$$

Then $\delta$ is minimax and the sequence $\{\Lambda_n\}$ is least favorable.

Proof. To prove the minimaxity: let $\delta'$ be any other estimator. Then

$$\sup_\theta R(\theta, \delta') \ge \int R(\theta, \delta')\, d\Lambda_n(\theta) \ge r_n,$$

for every $n$. Since the right hand side converges to $r = \sup_\theta R(\theta, \delta)$, this implies that

$$\sup_\theta R(\theta, \delta') \ge \sup_\theta R(\theta, \delta),$$

and thus $\delta$ is minimax.

To prove that $\{\Lambda_n\}$ is least favorable: let $\Lambda$ be any (proper) prior distribution and let $\delta_\Lambda$ be the corresponding Bayes estimator. Then

$$r_\Lambda = \int R(\theta, \delta_\Lambda)\, d\Lambda(\theta) \le \int R(\theta, \delta)\, d\Lambda(\theta) \le \sup_\theta R(\theta, \delta) = r,$$

and thus $\{\Lambda_n\}$ is least favorable. □

Note 2. Uniqueness of the Bayes estimators $\delta_n$ does not imply uniqueness of the minimax estimator, since the strict inequality in $r_n = \int R(\theta, \delta_n)\, d\Lambda_n(\theta) < \int R(\theta, \delta')\, d\Lambda_n(\theta)$ is transformed to the weak inequality $r \le \lim_{n \to \infty} \int R(\theta, \delta')\, d\Lambda_n(\theta)$ under the limit operation.

To calculate the Bayes risk $r_n$ for $\delta_n$, the following is useful.

Lemma 9. Let $\delta_\Lambda$ be the Bayes estimator of $g(\theta)$ corresponding to $\Lambda$, and assume quadratic loss. Then the Bayes risk is given by

$$r_\Lambda = \int \mathrm{Var}(g(\Theta) \mid x)\, dP(x).$$

Proof. For quadratic loss the Bayes estimator is given by $\delta_\Lambda(x) = E(g(\Theta) \mid x)$, and thus the Bayes risk is

$$r_\Lambda = \int R(\theta, \delta_\Lambda)\, d\Lambda(\theta) = \ldots \text{(Fubini's theorem)} \ldots
= \int E([\delta_\Lambda(x) - g(\Theta)]^2 \mid x)\, dP(x)
= \int E([E(g(\Theta) \mid x) - g(\Theta)]^2 \mid x)\, dP(x)
= \int \mathrm{Var}(g(\Theta) \mid x)\, dP(x). \qquad \Box$$
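As an illustration of Lemma 9 (a minimal sketch with assumed values of $n$, $a$, $b$, not part of the original notes), take the Beta prior and binomial model of Example 32: the Bayes risk computed directly as the average squared error of $\delta_\Lambda$ agrees with the expected posterior variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 20, 2.0, 3.0                 # assumed sample size and Beta prior parameters
m = 200_000                            # Monte Carlo replications

# Draw (p, X) from the joint distribution: p ~ Beta(a, b), X | p ~ Bin(n, p).
p = rng.beta(a, b, size=m)
x = rng.binomial(n, p)

# Bayes risk as the average squared error of the Bayes estimator (a + x)/(a + b + n).
delta = (a + x) / (a + b + n)
risk_direct = np.mean((delta - p) ** 2)

# Bayes risk via Lemma 9: E_X[ Var(p | X) ], where p | x ~ Beta(a + x, b + n - x).
post_var = (a + x) * (b + n - x) / ((a + b + n) ** 2 * (a + b + n + 1))
risk_lemma = np.mean(post_var)

print(risk_direct, risk_lemma)         # the two estimates agree up to Monte Carlo error
```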
Note 3. If the conditional variance $\mathrm{Var}(g(\Theta) \mid x)$ is independent of $x$, the Bayes risk can be obtained directly as $\mathrm{Var}(g(\Theta) \mid x)$. □

Example 33. Let $X = (X_1, \ldots, X_n)$ be an i.i.d. sequence with $X_1 \sim N(\theta, \sigma^2)$. We want to estimate $g(\theta) = \theta$ and assume quadratic loss. Assume $\sigma^2$ is known. Then $\delta = \bar X$ is minimax: to prove this we will find a sequence of Bayes estimators $\delta_n$ whose Bayes risk satisfies $r_n \to \sigma^2/n = \sup_\theta R(\theta, \delta)$; the last equality holds since the risk of $\bar X$ does not depend on $\theta$.

Let $\theta$ be distributed according to the prior $\Lambda = N(\mu, b^2)$. Then the Bayes estimator is

$$\delta_\Lambda(x) = \frac{n \bar x/\sigma^2 + \mu/b^2}{n/\sigma^2 + 1/b^2},$$

with posterior variance

$$\mathrm{Var}(\Theta \mid x) = \frac{1}{n/\sigma^2 + 1/b^2},$$

which, since it is independent of $x$, implies the Bayes risk

$$r_\Lambda = \frac{1}{n/\sigma^2 + 1/b^2}.$$

If $b \to \infty$, then $r_\Lambda = r_{\Lambda_b} \to \sigma^2/n$. Thus $\delta = \bar X$ is minimax. □

Lemma 10. Let $\mathcal{F}_1 \subset \mathcal{F}$ be sets of distributions. Let $g(F)$ be an estimand (a functional) defined on $\mathcal{F}$. Assume $\delta_1$ is minimax over $\mathcal{F}_1$. Assume

$$\sup_{F \in \mathcal{F}_1} R(F, \delta_1) = \sup_{F \in \mathcal{F}} R(F, \delta_1).$$

Then $\delta_1$ is minimax over $\mathcal{F}$.

Proof. Assume $\delta_1$ is minimax over $\mathcal{F}_1$. Since for every $\delta$,

$$\sup_{F \in \mathcal{F}_1} R(F, \delta) \le \sup_{F \in \mathcal{F}} R(F, \delta),$$

we get

$$\sup_{F \in \mathcal{F}} R(F, \delta_1) = \sup_{F \in \mathcal{F}_1} R(F, \delta_1) = \inf_\delta \sup_{F \in \mathcal{F}_1} R(F, \delta) \le \inf_\delta \sup_{F \in \mathcal{F}} R(F, \delta).$$

Thus we have equality in the last inequality, and thus $\delta_1$ is minimax over $\mathcal{F}$. □

7.1 Minimax estimation in exponential families

Recall that the standard approach to estimation in exponential families is to restrict attention to unbiased estimators, and within this restricted family find the estimator (if it exists) that has smallest risk (or variance, for quadratic loss), either uniformly over the parameter space (the UMVU estimator) or locally for a fixed parameter (the LMVU estimator).
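To contrast the two criteria numerically (a minimal sketch with an assumed $n = 25$, not part of the original notes), the unbiased estimator $X/n$ of the binomial parameter in Example 32 has smaller risk than the minimax estimator for most values of $p$, but a larger maximum risk:

```python
import numpy as np

n = 25
p = np.linspace(0.0, 1.0, 1001)

# Squared-error risk of the unbiased estimator X/n: p(1 - p)/n, maximized at p = 1/2.
risk_unbiased = p * (1 - p) / n

# Risk of the minimax estimator (X + sqrt(n)/2)/(n + sqrt(n)): constant in p.
a = np.sqrt(n) / 2
risk_minimax = (n * p * (1 - p) + (a * (1 - p) - a * p) ** 2) / (n + np.sqrt(n)) ** 2

print(risk_unbiased.max())    # 1/(4n) = 0.0100
print(risk_minimax.max())     # 1/(4 (sqrt(n) + 1)^2) ~ 0.0069
```

So the two criteria order estimators differently: the unbiased estimator has the smaller risk near the boundary of the parameter space, while the minimax estimator has the smaller worst-case risk.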