
Test (1994) Vol. 3, No. 1, pp. 5-124

An Overview of Robust Bayesian Analysis*

JAMES O. BERGER
Department of Statistics, Purdue University
West Lafayette, IN 47907-1399, U.S.A.

[Read before the Spanish Statistical Society at a meeting organized by the Universidad Autónoma de Madrid on Friday, November 19, 1993]

Received November 1993; Revised February 1994.
* Research supported by the National Science Foundation, Grants DMS-8923071 and DMS-9303556.

SUMMARY

Robust Bayesian analysis is the study of the sensitivity of Bayesian answers to uncertain inputs. This paper seeks to provide an overview of the subject, one that is accessible to statisticians outside the field. Recent developments in the area are also reviewed, though with very uneven emphasis. The topics to be covered are as follows:

1. Introduction
   1.1 Motivation
   1.2 Preview
   1.3 Notation
2. Development of Inherently Robust Procedures
   2.1 Introduction
   2.2 Use of Flat-tailed Distributions
   2.3 Use of Noninformative and Partially Informative Priors
   2.4 Nonparametric Bayes Procedures
3. Diagnostics, Influence, and Sensitivity
   3.1 Diagnostics
   3.2 Influence and Sensitivity
4. Global Robustness
   4.1 Introduction
   4.2 Parametric Classes
   4.3 Nonparametric Classes of Priors
       4.3.1 Factors Involved in Choosing a Class
       4.3.2 Common Classes
       4.3.3 Application to Hypothesis Testing and Ockham's Razor
   4.4 Nonparametric Classes of Likelihoods
   4.5 Limitations of Global Robustness
   4.6 Optimal Robust Procedures
5. Computing
   5.1 Computational Issues
   5.2 Interactive Elicitation
6. Future Directions

1. INTRODUCTION

1.1. Motivation

Robust Bayesian analysis is the study of the sensitivity of Bayesian answers to uncertain inputs. These uncertain inputs are typically the model, prior distribution, or utility function, or some combination thereof. Informal or ad hoc sensitivity studies have long been a part of applied Bayesian analysis (cf. Box, 1980), but recent years have seen an explosion of interest and literature on the subject. There are several reasons for this interest:

Foundational Motivation: There is a common perception that foundational arguments lead to subjective Bayesian analysis as the only coherent method of behavior. Non-Bayesians often recognize this, but feel that the subjective Bayesian approach is too difficult to implement, and hence they ignore the foundational arguments. Both sides are partly right. Subjective Bayesian analysis is, indeed, the only coherent mode of behavior, but only if it is assumed that one can make arbitrarily fine discriminations in judgment about unknowns and utilities. In reality, it is very difficult to discriminate between, say, 0.10 and 0.15 as the subjective probability, P(E), to assign to an event E, much less to discriminate between 0.10 and 0.100001. Yet standard Bayesian axiomatics assumes that the latter can (and will) be done. Non-Bayesians intuitively reject the possibility of this, and hence reject subjective Bayesian theory.

It is less well known that realistic foundational systems exist, based on axiomatics of behavior which acknowledge that arbitrarily fine discrimination is impossible. For instance, such systems allow the possibility that P(E) can only be assigned the range of values from 0.08 to 0.13; reasons for such limitations range from possible psychological limitations to constraints on time for elicitation.
The conclusion of these foundational systems is that a type of robust Bayesian analysis is the coherent mode of behavior. Roughly, coherent behavior corresponds to having classes of models, priors, and utilities, which yield a range of possible Bayesian answers (corresponding to the answers obtained through combination of all model-prior-utility triples from the classes). If this range of answers is too large, the question of interest may not, of course, be settled, but that is only realistic: if the inputs are too uncertain, one cannot expect certain outputs. Indeed, if one were to perform ordinary subjective Bayesian analysis without checking for robustness, one could be seriously misled as to the accuracy of the conclusion.

Extensive developments of such foundational systems can be found in Walley (1991), Ríos Insua (1990, 1992) and Ríos Insua and Martín (1994); see also Girón and Ríos (1980) and Kouznetsov (1991). I. J. Good (cf. Good, 1983a) was the first to extensively discuss these issues. Other earlier references can be found in Berger (1984, 1985) and in Walley (1991); this latter work is particularly to be recommended for its deep and scholarly study of the foundations of imprecision and robustness. Recent developments in some of the interesting theoretical aspects of the foundations can be found in Wasserman and Kadane (1990, 1992b) and Wasserman and Seidenfeld (1994).

Practical Bayesian Motivation: Above, we alluded to the difficulty of subjective elicitation. It is so difficult that, in practice, it is rarely done. Instead, noninformative priors or other approximations (e.g., BIC in model selection) are typically used. The chief difficulties in elicitation are (i) knowing the degree of accuracy in elicitation that is necessary; (ii) knowing what to elicit. Robust Bayesian analysis can provide the tools to answer both questions.

As an example of (i), one might be able to quickly determine that 0.05 < P(E) < 0.15, but then wonder if more accurate specification is needed. Robust Bayesian methods can operate with such partial specifications, allowing computation of the corresponding range of Bayesian answers (a small numerical sketch of this is given below). If this range of answers is small enough to provide an answer to the question of interest, then further elicitation is unnecessary. If, however, the range is too large to provide a clear answer, then one must attempt finer elicitation (or obtain more data or otherwise strengthen the information base).

Knowing what to elicit is even more crucial, especially in higher dimensional problems where it is completely infeasible to elicit everything that is possibly relevant. Suppose, for instance, that one believes in a 10-dimensional normal model, but that the mean vector and covariance matrix are unknown. Then there are 65 unknown parameters (10 means plus 10·11/2 = 55 distinct variances and covariances), and accurate elicitation of a 65-dimensional distribution is impossible (unless one is willing to introduce structure that effectively greatly reduces the number of parameters). But many of these parameters may be accurately determined by the data, or the question of interest may not depend on accurately knowing many of the parameters. In fact, there may only be a few crucial quantities that need to be elicited. Robust Bayesian techniques can help to identify these quantities.
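To make point (i) concrete, the following small Python sketch (not from the paper; the likelihood values are hypothetical, chosen only for illustration) shows how a partial specification such as 0.05 < P(E) < 0.15 translates into a range of posterior answers. Since the posterior probability of E is monotone increasing in its prior probability, the range is obtained simply by evaluating Bayes' theorem at the two endpoints of the elicited interval.

    def posterior_prob(prior_e, lik_given_e, lik_given_not_e):
        # Posterior P(E | data) from Bayes' theorem in a two-hypothesis problem.
        num = prior_e * lik_given_e
        return num / (num + (1.0 - prior_e) * lik_given_not_e)

    # Partial prior specification: only the interval 0.05 <= P(E) <= 0.15 is elicited.
    prior_lo, prior_hi = 0.05, 0.15

    # Hypothetical likelihoods of the observed data under E and under not-E
    # (purely illustrative numbers, not taken from the paper).
    lik_e, lik_not_e = 0.8, 0.2

    # The posterior is increasing in the prior, so the endpoints give the exact range.
    post_lo = posterior_prob(prior_lo, lik_e, lik_not_e)
    post_hi = posterior_prob(prior_hi, lik_e, lik_not_e)
    print("P(E | data) ranges over [%.3f, %.3f]" % (post_lo, post_hi))

If the resulting interval of posterior probabilities is narrow enough to settle the question of interest, no further elicitation of P(E) is required.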
Acceptance of Bayesian Analysis: Rightly or wrongly, the majority of the statistical world resists use of Bayesian methods. The most often vocalized reason is fear of using a subjective prior, because of a number of perceived dangers. While we do not view this fear as being particularly reasonable (assumptions made in other parts of the analysis are usually much more influential and questionable), we recognize its existence. Robust Bayesian methods, which can operate with a wide class of prior distributions (reflecting either the elicitor's uncertainty in the chosen prior or a range of prior opinions of different individuals), seem to be an effective way to eliminate this fear.

Non-Bayesian Motivation: Many classical procedures work well in practice, but some standard procedures are simply illogical. Robust Bayesian analysis can be used to determine which procedures are clearly bad. Consider, for instance, the following example:

Example 1. A series of clinical trials is performed, with trial i testing drug Di versus a placebo. Each clinical trial is to be analyzed separately, but all can be modelled as standard normal tests of H0: θi = 0 versus H1: θi ≠ 0, where θi is the mean effect of Di minus the mean effect of the placebo. Suppose we know, from past experience, that about 1/2 of the drugs that are tested will end up being ineffective; i.e., will have θi = 0. (This assumption is not essential; it merely provides a mental reference point for the discussion that follows.)

We will focus on the meaning of P-values that arise in this sequence of tests. Table 1 presents the first twelve such P-values. Consider, first, those tests for which the P-value is approximately 0.05; D2 and D8 are examples. A crucial question is: among the drugs for which the P-value of the test is approximately 0.05, what fraction are actually ineffective (i.e., correspond to true H0)? Likewise, consider those Di for which the P-value is approximately 0.01 (D5 and D10 are examples) and ask: what fraction are actually ineffective?

    DRUG      D1     D2     D3     D4     D5     D6
    P-Value   0.41   0.04   0.32   0.94   0.01   0.28

    DRUG      D7     D8     D9     D10    D11    D12
    P-Value   0.11   0.05   0.65   0.009  0.09   0.66

Table 1. P-values resulting from the first twelve clinical trials, testing H0: Di has no effect vs. H1: Di has an effect.

The answers to these questions are, of course, indeterminate. They depend on the actual sequence of {θi} that arises. However, using robust Bayesian techniques one can find lower bounds on the answers that are valid for any sequence {θi}. These can be computed as in Berger and Sellke (1987, Section 4.3), and are 0.24 for the first question and 0.07 for the second. This is quite startling, since most statistical users would believe that, when the P-value is 0.05, H0 is very likely to be wrong and, when the P-value is 0.01, H0 is almost certain to be wrong.
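The following Python sketch indicates the flavor of the computation behind such bounds. It is a simplified variant, not a reproduction of the exact analysis in Berger and Sellke (1987, Section 4.3): among trials whose test statistic falls near z (two-sided P-value near p), the fraction of true nulls is bounded below by comparing the density of z under H0 with the largest density attainable under any alternative θ. With the proportion of true nulls taken as 1/2, this simple bound yields values close to those quoted above (roughly 0.23 and 0.07).

    import math

    def phi(x):
        # Standard normal density.
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    def lower_bound_true_nulls(z, pi0=0.5):
        # Lower bound on the fraction of true nulls among tests with |Z| near z,
        # assuming a proportion pi0 of the tested hypotheses are truly null.
        null_density = 2.0 * phi(z)  # density of |Z| at z when theta = 0
        # Largest possible density of |Z| at z under any alternative theta (grid search).
        alt_density = max(phi(z - t) + phi(z + t) for t in (i * 0.001 for i in range(6001)))
        return pi0 * null_density / (pi0 * null_density + (1.0 - pi0) * alt_density)

    # Standard normal quantiles for two-sided P-values 0.05 and 0.01.
    for p, z in [(0.05, 1.960), (0.01, 2.576)]:
        print("P-value near %.2f: at least %.2f of such nulls are true"
              % (p, lower_bound_true_nulls(z)))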