K.D. Hoover Phil 693S Causation 27 February 2016

Some Notes on Bayesianism

Bayes’ Theorem

Bayes' Theorem is based on the identity:

(1) P(H&E) = P(H|E)P(E) = P(E|H)P(H); hence

(2) P(H|E) = P(E|H)P(H)/P(E) Bayes’ Theorem

An example:

Suppose that there is a test for HIV. Let E+ = test is positive; E– = test is negative; H+ = patient has HIV; H– = patient does not have HIV. Suppose that for a person who actually is HIV positive, the test is 95 percent correct – that is,

 P(E+|H+) = 0.95 = the rate of true positives; and, therefore  P(E–|H+) = 1 – P(E+|H+) = 0.05 = the rate of false negatives;

and suppose for a person who is actually HIV negative, the test is also 95 percent correct – that is,

 P(E–|H–) = 0.95 = the rate of true negatives; and, therefore  P(E+|H–) = 1 – P(E–|H–) = 0.05 = the rate of false positives;

Further suppose that on the basis of a survey of the population, our best estimate of the prevalence of HIV in the population is 3/10 of one percent, so that:

 P(H+) = 0.003;  P(H–) = 1 – P(H+) = 0.997

Suppose that you are worried about your HIV status and take a test that comes back positive (E+). What is your chance of actually having HIV – i.e., what is P(H+|E+)?

Apply Bayes’ theorem:

(3) P(H+|E+) = P(E+|H+)P(H+)/P(E+)

We are missing one piece of information – P(E+) – but it can be calculated by the law of total probability:

P(E+) = P(E+|H+)P(H+) + P(E+|H–)P(H–) = 0.95 × 0.003 + 0.05 × 0.997 = 0.0527.

Thus,


(4) P(H+|E+) = P(E+|H+)P(H+)/P(E+) = 0.95 × 0.003/0.0527 = 0.0541.

Given the relatively high accuracy of the HIV test – it is correct 95 percent of the time when the patient is actually HIV positive – this is a surprisingly low value. The intuition is this: there are two ways to get a positive test result (E+) – either you really have HIV or you get a false positive. The rate of false positives is low (5 percent), but HIV is rare (i.e., H– is common – 99.7 percent of the population), and 5 percent of almost the whole population is much larger than 95 percent of a very small segment of the population. Thus, of all the people who obtain a positive test result, most will do so falsely. Since HIV is rare (i.e., H+ is itself rare), even a very accurate test will turn up truly positive only a small number of times, and more people will have a positive test result as a result of a false positive than as a result of actually having the disease.
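The calculation in (3)–(4) can be checked with a short script; the function name `posterior` and its arguments are mine, chosen for illustration:

```python
# Bayes' theorem for the HIV example:
#   P(H+|E+) = P(E+|H+)P(H+) / P(E+),
# with P(E+) computed by total probability:
#   P(E+) = P(E+|H+)P(H+) + P(E+|H-)P(H-).
def posterior(prior, true_pos=0.95, false_pos=0.05):
    p_e = true_pos * prior + false_pos * (1 - prior)  # P(E+)
    return true_pos * prior / p_e

print(round(posterior(0.003), 4))  # prints 0.0541, as in (4)
```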

Bayesian Epistemology

Bayesian epistemology is based on the idea that probabilities are not about the frequency of actual occurrences in the world (nor about propensities or other facts in the world) but rather are measures of epistemological warrant. On some interpretations, they are measures of degree of justified belief. For example, the calculation in (4) of the value of P(H+|E+) can be interpreted as the degree of belief that you are justified in assigning to the hypothesis that you have HIV given the evidence of a positive test.

The Bayesian interpretation of Bayes' Theorem treats it as a relationship between a prior belief, expressed as a probability, and a posterior belief, expressed as a probability based on acquired evidence. Bayes' theorem in (2) can be rearranged:

(5) posterior probability (= P(H|E)) = support × prior probability (= [P(E|H)/P(E)] × P(H)).

Whereas before we thought of H as a variable taking the values {HIV positive, HIV negative} and E as a variable taking the values {test positive, test negative}, now we think of H more generally as a hypothesis taking the values {true, false} and E as a variable taking the values {supports, does not support}. The probabilities P(H+) and P(H–) were justified through estimates of the actual prevalence of HIV in the population. But on most versions of Bayesianism, these probabilities are not facts in the world but expressions of degrees of belief about facts in the world. In the HIV example, we were concerned not about the proportion of people in the country with HIV, but about the likelihood that you yourself are HIV positive. We might use the actual distribution of HIV in the country and the assumption that you are as likely as the next person to be positive as a means of forming our initial belief (P(H+) = 0.003); but in fact your belief is whatever it happens to be, with or without justification. Bayesian epistemology is about how your beliefs should change in the face of evidence. What (4) tells you, then, is that a positive test should raise your judgment of the likelihood substantially – to 5.41 percent – which,


albeit substantial, is not as high as you might have feared given the 95 percent accuracy of the test.

Of course, you need not assume that your chances of having HIV are the same as the general population's. Suppose that you have not engaged in any behaviors that are known risk factors. You might then hold a prior that is very low. For example, if your prior is P(H+) = 0.0001, then ceteris paribus the calculation analogous to (4) yields a posterior of 0.0019 – i.e., about 2/10 of one percent versus a little less than 5½ percent in (4). Or, if you are worried about your behavior, you might believe that your chances are more like 1 in 4. Then the calculation analogous to (4) yields a posterior probability of 86 percent of actually being HIV positive.
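Rerunning the same calculation for the three priors just mentioned makes the sensitivity to the prior explicit (a sketch; the helper function `posterior` is mine):

```python
# Posterior P(H+|E+) for an optimistic prior, the survey-based prior,
# and a pessimistic prior, with the same 95-percent-accurate test.
def posterior(prior, true_pos=0.95, false_pos=0.05):
    p_e = true_pos * prior + false_pos * (1 - prior)  # P(E+)
    return true_pos * prior / p_e

for prior in (0.0001, 0.003, 0.25):
    print(prior, round(posterior(prior), 4))
# prints 0.0019, 0.0541, and 0.8636 respectively
```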

On a subjective interpretation of Bayesian epistemology, the prior is your own belief, and it may be initially set anywhere on any basis (e.g., from frequency data or from your hopes or from your fears). But the central discipline of Bayesian epistemology is that you should use Bayes' theorem to update your belief with the new information. Suppose that you take one HIV test that turns out positive. Then when you take a second test, the posterior from the first test becomes the prior for the second test – that is, P(H+|E+) from test 1 becomes P(H+) for test 2. A sequence of tests would, therefore, generate a sequence of reevaluations of your beliefs – this is Bayesian updating. Thus, with the initial prior of P(H+) = 0.003, you got a posterior of 0.0541. If that is taken to be the new prior, we have to recalculate P(E+), since it depends on the new value of P(H+). Now, if we take another test that turns out positive, we apply Bayes' theorem again with the updated prior to get an updated posterior P(H+|E+) = 0.5207. Thus, two independent positive tests for HIV would shift your beliefs radically. After four such tests, you would be more than 99 percent certain that you were infected.
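The updating sequence just described can be sketched as a loop in which each posterior becomes the next prior (the function name `update` is mine):

```python
# Bayesian updating: after each positive test, the posterior
# P(H+|E+) becomes the prior P(H+) for the next test.
def update(prior, true_pos=0.95, false_pos=0.05):
    p_e = true_pos * prior + false_pos * (1 - prior)  # P(E+)
    return true_pos * prior / p_e

belief = 0.003                     # initial survey-based prior
for test in range(1, 5):
    belief = update(belief)        # another positive result comes in
    print(test, round(belief, 4))
# prints 0.0541, 0.5207, 0.9538, 0.9975 for tests 1 through 4
```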

One property of the Bayesian updating rule is that, no matter what your prior, the evidence eventually dominates your initial belief. So, in the initial example, you would become 99 percent certain after four positive tests. If you had started with an optimistic prior P(H+) = 0.0001, it would take more tests – but not many more – to reach the same certainty; whereas, if you were pessimistic (P(H+) = 0.25), it would take only two tests to reach 99 percent certainty. But on any of these assumptions, your beliefs – updated faster or slower by the evidence – eventually converge to the same result. Bayesian epistemologists often hold such convergence up as an answer to those who worry that the approach is too subjective: we all eventually reach the same conclusion, no matter where we start.
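The convergence claim can be illustrated by counting how many consecutive positive tests each prior needs before the posterior crosses 99 percent (a sketch under the same test-accuracy assumptions; `update` and `tests_needed` are my names):

```python
# Count positive tests needed to reach 99 percent certainty,
# starting from the optimistic, survey-based, and pessimistic priors.
def update(prior, true_pos=0.95, false_pos=0.05):
    p_e = true_pos * prior + false_pos * (1 - prior)  # P(E+)
    return true_pos * prior / p_e

tests_needed = {}
for prior in (0.0001, 0.003, 0.25):
    belief, n = prior, 0
    while belief < 0.99:           # update until 99 percent certain
        belief = update(belief)
        n += 1
    tests_needed[prior] = n

print(tests_needed)  # {0.0001: 5, 0.003: 4, 0.25: 2}
```

Whatever the starting point, the counts differ only modestly, which is the convergence point made above.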
