
SCIENTIFIC FACTS IN THE SPACE OF PUBLIC REASON: MODERATE IDEALIZATION, PUBLIC JUSTIFICATION, AND VACCINE POLICY UNDER CONDITIONS OF WIDESPREAD MISINFORMATION AND CONSPIRACISM

Amitabha Palmer

A Dissertation

Submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

December 2020

Committee:

Kevin Vallier, Advisor

Daniel Piccolo, Graduate Faculty Representative

Christian Coons

Molly Gardner

© 2020 Amitabha Palmer

All Rights Reserved

ABSTRACT

Kevin Vallier, Advisor

If liberal democratic theory requires that policy conform with citizens’ beliefs,

then democracy seems to require bad policy when citizens hold false beliefs. To escape this

problem, public reason liberals advocate epistemic idealization: citizens’ false beliefs, bad

inferences, and informational deficits are corrected in order to uncover the genuine reasons

citizens hold. Poli tically legitimate policy must conform with citizens’ idealized reasons rather

than their messy unreflective reasons. But advocates of idealization disagree over how much

idealization is permissible. I focus on moderate idealizers like Gaus (2011) and Vallier (2014,

2018). They hold that the upward bound of idealization is set by the beliefs a real-world citizen

could arrive at by sound deliberative route from their existing belief-value sets with a

“reasonable” amount of effort.

Vallier and Gaus created their models before widespread social media use, echo

chambers, high social and political polarization, and all the epistemic problems these create. For

this reason, I argue, their models are understandably inadequate for addressing the vicious

epistemic environments many citizens currently inhabit and the empirical beliefs they acquire

from them. Contemporary moderate idealizers should adopt the exclusion principle whereby we permissibly exclude from policy considerations deeply held empirical beliefs when they contradict a consensus of relevant experts in a mature science—even if they survive moderate

idealization. Incorporating this principle generates better policy outcomes and better supports

pre-theoretical intuitions about political legitimacy. Chapter 1 argues that, under these conditions, Vallier’s moderate idealization leads to normatively and epistemically bad policy, and that the exclusion principle solves this problem from within the commitments of political liberalism. Chapter 2 argues that Gausian moderate idealization also leads to normatively and epistemically bad policy when epistemically vicious environments are widespread. Finally, Chapter 3 applies these arguments to vaccine policy and demonstrates that (a) neither Gaus’s nor Vallier’s models generate acceptable publicly justified immunization policy; (b) moderate idealizers must adopt the exclusion principle in order to do so and, more broadly, to satisfy pre-theoretical intuitions about political legitimacy; and (c) the exclusion principle can be justified from within public reason liberalism.

Keywords: Political Philosophy; Public Reason; Social Epistemology; Science Denialism; Vaccine Policy

This is dedicated to anyone whose friends or loved ones have fallen down a rabbit hole.

ACKNOWLEDGMENTS

I would like to express my deep gratitude to my advisor, Dr. Kevin Vallier, whose

guidance and support were invaluable throughout the entire process. His generous constructive

feedback and encouragement were critical to my growth as a philosopher. Most importantly, Dr.

Vallier taught me intellectual honesty and to criticize my own views at least as vigorously as my opponents will. He will forever serve as a model of professionalism and mentorship for me. I’m

also grateful to my committee members Dr. Christian Coons and Dr. Molly Gardner. Many of

the most important lessons for developing my philosophical writing came from Dr. Coons’

comments on papers early in the PhD program. To this day, I still apply them and am astonished

at the amount of feedback he provides students on their essays. In the moments that my

motivation waned, or I felt plagued by uncertainty, Dr. Gardner’s friendship, kindness, and open

office door were invaluable for rekindling my motivation and direction. From her I learned the

indispensable art of framing and structuring arguments. Finally, I am thankful for Dr. Daniel

Piccolo’s keen editorial eye.

I am also grateful to my many friends who have supported me throughout. Several friends’ contributions deserve special mention. My last chapter would not have been possible

without Colin Manning’s input, so much so that I cannot distinguish where my thoughts end and

his begin. During the height of the pandemic, over long walks, he generously provided feedback and ideas which allowed me to overcome difficult argumentative hurdles. Ryan Thurber

Fishbeck also regularly made important and insightful contributions to my project over Friday

Zoom calls. Sara Ghaffari, in response to an early version of Ch. 2, made an off-hand comment that sent my dissertation in a completely different but more fertile direction. Steve Ross provided

critical emotional and material support, even turning over his apartment for an entire month. Dr. Steven Novella and the SGU provided intellectual inspiration for the project and mentorship from afar. And of course, I am indebted to all my friends who fell prey to conspiracies and science denialism. Wrestling with how to talk to people I care about who have succumbed to epistemically vicious environments is the genesis and motivating force for my project. Finally, I would like to acknowledge Big Pharma and the Deep State. Without their generous financial support, none of this would have been possible.

I am extremely grateful to my parents and family for their continued love and support. I leaned on them throughout to overcome adversity. This includes my late grandparents. I know I would have done you all proud. Last but not least, there are not enough treats in the world to express my gratitude to my dog and best friend Otis Ponens. Through it all, you never left my side for a moment. You reminded me when it was time to break for play and you comforted me when I needed it.


TABLE OF CONTENTS

Page

CHAPTER 1: KEVIN VALLIER AND MODERATE IDEALIZATION: RESCUING

PUBLIC REASON FROM THE RISE OF CONSPIRACISM AND SCIENCE

DENIALISM ...... 1

Introduction ...... 1

Science Denialism and Conspiracy Theories ...... 5

Defining Science Denialism and Conspiracism ...... 5

The Epistemology of Science Denialism and Conspiracism ...... 6

The Mainstreaming of Science Denialism and Conspiracism ...... 7

The Case for Concern: Science Denialism, Conspiracism, and Public Reason ...... 10

Vallier’s Model of Moderate Idealization ...... 11

Overview of Vallier’s Solution: Moderate Idealization ...... 11

Applying Vallier’s Idealization to Science Denialism and Conspiracy Theorists ...... 12

The Cult Challenge: Are Denialism and Conspiracy Theories Cults? ...... 16

Reasonableness, What?...... 18

Diagnosing and Revising Vallier’s Moderate Idealization for Empirical Beliefs ...... 20

Justification and Trade-Offs for an Epistemic Filter on Empirical Beliefs ...... 30

Conclusion ...... 36

References ...... 39

CHAPTER 2: PUBLIC REASON, EMPIRICAL DISAGREEMENT, AND THE

PROBLEM OF EXPERTS IN GAUS ...... 43

Introduction ...... 43

Gausian Public Reason ...... 43

Deep and Genuine Empirical Disagreement Over Empirical Facts ...... 48

Moderate Idealization and the Problems of Identifying Experts ...... 52

Social Epistemology and the Problem of Experts ...... 53

Heuristics for Identifying Experts ...... 55

Identifying Experts Under Conditions of Persistent Polarization and

Propaganda ...... 59

The Accessibility Condition and Moderate Idealization Under Triple P Conditions .. 62

Denialist Defeaters Lead to Bad Policy ...... 70

Conclusion ...... 74

References ...... 76

CHAPTER 3: VACCINE POLICY, MODERATE IDEALIZATION, AND THE PUBLIC

JUSTIFICATION ...... 80

Introduction ...... 80

Non-Medical Exemptions to Vaccines and Policy Responses ...... 84

History ...... 84

The Threat ...... 86

Policy Alternatives ...... 89

Gaus and Vaccine Policy ...... 91

The Problem of Moderate Idealization and Vaccine Skepticism ...... 92

The Inability to Idealize Away False Empirical Beliefs ...... 96

Publicly Justified Vaccine Policy for Gaus’ Model ...... 98

Costs of The Gaus-Vallier Model ...... 101

Role of Experts in Policy...... 102

Lending Credibility to Bad Reasons ...... 105

Substantial Preventable Negative Externalities ...... 107

A Better Moderate Idealization, A Better Policy ...... 110

Argument for the Conditional View ...... 110

The Exclusion Principle and Political Legitimacy ...... 111

Justifying the Exclusion Principle within Public Reason ...... 114

Vallier’s Incomplete Solution ...... 114

The Exclusion Principle Justified ...... 116

Conclusion ...... 118

References ...... 120

LIST OF FIGURES

Figure Page

1 Increasing Nationwide Trend in Kindergarten NME Rates from 2009 to 2017 ...... 87

PREFACE

We live in a world of radical ignorance, and the marvel is that any kind of truth

cuts through the noise.

--Robert Proctor

Here you have philosophy’s starting point: We find that people cannot agree among

themselves, and we go in search of the source of their disagreement. In time, we come to

scorn and dismiss simple opinion, and look for a way to determine if an opinion is wrong

or right. At last, we focus on finding a standard that we can invoke, just as the scale was

invented to measure weights, and the carpenter’s rule devised to distinguish straight from

crooked.

--Epictetus, Discourses Bk II. 11, 13-14

Over the last decade, conspiracism and science denialism have moved from the fringe to the mainstream of public discourse and of politics (Lewis & Marwick, 2017; Uscinski & Parent,

2014). So pervasive have these views become that even US President Donald Trump regularly and openly endorses conspiracies, conspiracists, and science denialist views (Craw et al.,

2017; List of conspiracy theories promoted by Trump, 2020; Tani, 2017). Political liberalism has long preoccupied itself with normative disagreement and has tacitly assumed that empirical disagreement is politically inconsequential. I argue below that the increased prevalence and popularity of these conspiracy theories and science denialism present special problems for liberal democratic theory with respect to policymaking. When significant portions of a population have false empirical beliefs with respect to science and the basic motivations of public institutions, it becomes extremely difficult to justify sound policy to that group. Furthermore, traditional liberal commitments to individual rights and the presumption against state coercion resist technocratic governance as a solution.

Of course, citizens holding false beliefs isn’t a new challenge for democratic theory. One popular remedy—public reason liberalism—builds into the theory some account of epistemic idealization. That is, we idealize away, to varying degrees, citizens’ false beliefs, bad inferences, and informational deficits to uncover the genuine reasons citizens hold. Politically legitimate policy must conform with citizens’ idealized reasons rather than their messy unreflective reasons.

But advocates of idealization disagree over how much idealization is permissible. I focus on moderate idealizers like Gaus (2011) and Vallier (2014, 2018). They hold that the upward bound of idealization is set by the beliefs a real-world citizen could arrive at by sound deliberative route from their existing belief-value sets with a “reasonable” amount of effort. The idealized reasons ascribed to citizens must be recognizable as their own by their real-world selves.

I argue that current accounts of moderate idealization are inadequate for handling the challenge of empirical disagreement generated by widespread conspiracism, denialism, and the epistemically vicious environments they grow out of. To meet this challenge, I offer an account of where to draw the line with respect to which empirical beliefs permissibly enter the domain of public reason and influence policy choice. The exclusion principle holds that we permissibly exclude from policy considerations deeply held empirical beliefs when they contradict a consensus of relevant experts in a mature science—even if they survive idealization. Appending this principle to moderately idealizing accounts of public reason, I argue, not only gives us better policy outcomes but better supports our pre-theoretical intuitions about political legitimacy.

Vallier and Gaus created their models of moderate idealization before widespread social media use, echo chambers, high social and political polarization, widespread conspiracism and science denialism, and all the epistemic problems that follow from their conjunction. For this reason, I argue, their models are understandably inadequate for addressing the vicious epistemic environments many citizens currently inhabit and the beliefs they acquire within them. In

Chapter 1, I show how, under these conditions, Kevin Vallier’s account of moderate idealization leads to normatively and epistemically bad policy. Then I demonstrate that the exclusion principle can remedy this problem from within the commitments of political liberalism. In

Chapter 2, I argue that Gausian moderate idealization also leads to normatively and epistemically bad policy when epistemically vicious environments are widespread. Finally, in Chapter 3, as a case study, I apply my arguments to vaccine policy. I demonstrate that (a) neither Gaus nor

Vallier’s models provide satisfying results with respect to generating an acceptable publicly justified immunization policy; (b) moderately idealizing accounts of public reason must adopt the exclusion principle in order to do so and, more broadly, to satisfy our pre-theoretical intuitions about political legitimacy; and (c) the exclusion principle can be justified from within public reason liberalism.


CHAPTER 1: KEVIN VALLIER AND MODERATE IDEALIZATION: RESCUING

PUBLIC REASON FROM THE RISE OF CONSPIRACISM AND SCIENCE

DENIALISM

Introduction

On the public reason view of democracy, very loosely, a policy is legitimate to the extent

that those subject to it have reasons to endorse it from their own point of view. Good policy,

however, can be difficult to achieve because its justification will frequently have to

accommodate a variety of often conflicting beliefs, values, and worldviews. In addition to

accommodating diversity, citizens’ epistemic shortcomings can also undermine good policy: All

citizens, to varying degrees, hold false beliefs, make bad inferences, and possess incomplete

information. This epistemic predicament places the public reason theorist in an uncomfortable

position: If policy must conform with the public’s uncorrected beliefs, we easily end up with

misinformed policy—and worse yet, we end up with policy that undermines the very interests of

the citizens who purport to support it. So, what’s a public reason theorist to do?

To answer this challenge, public reason liberals offer an account of political legitimacy

for coercive political arrangements in a society populated by epistemically imperfect citizens

with diverse worldviews, values, and beliefs. Public reason views are committed to the Public

Justification Principle (PJP) which holds that a coercive law or political arrangement is

legitimate if and only if it can be justified on grounds no one may reasonably reject (Mulligan,

2015). That is, for any coercive public policy to be justified, each member of the public must

have sufficient reason to endorse it.

To address the problem of imperfect reasoning and incomplete information, public reason

accounts apply some degree of idealization to citizens’ beliefs and reasoning. Idealization

seeks to uncover which justificatory reasons a citizen has for or against a policy when we correct his reasoning errors and informational deficits. Models of epistemic idealization, to varying degrees, swap out false beliefs for true beliefs, add in missing relevant information, and correct bad inferences. A policy is then justified to the extent that idealization can uncover citizens’

(genuine) justificatory reasons for that policy.

While public reason liberals agree that some degree of idealization is necessary, they disagree over how and to what extent we should idealize away informational deficits and reasoning errors. At one end of the continuum we have populism, where we don’t idealize at all.

On the other extreme sits full idealization, where we idealize away all epistemic shortcomings; that is, we epistemically idealize citizens to the point of being maximally rational and fully informed.1 “Full idealization” also implies stripping away any individual traits such as particular conceptions of the good as well as any individual sociological traits that might influence judgments and conceptions of justice. The supposition is that full idealization leads to convergence on principles of justice according to which we order our institutions and derive our laws.

Moderate idealizers occupy the diverse space in the middle of the continuum. They object to full idealization primarily because actual citizens won’t recognize their idealized self’s reasons as being their own. Vallier argues for a model of moderate idealization that requires that a particular agent’s idealized reasons are recognizable to that actual agent as their own. We idealize only enough to correct bad inferences and informational shortcomings of beliefs that occupy peripheral areas of concern. Importantly, idealization doesn’t extend to an agent’s core

1 Although there is exegetical uncertainty, this appears to be Rawls’ suggestion for the original position in Theory of Justice but not in Political Liberalism. Also, full idealization doesn’t apply to forming an overlapping consensus.

beliefs and commitments. Publicly justified policy, on Vallier’s model of moderate idealization, requires that each moderately idealized citizen recognize in policy their own subjective reasons.

That is, the justifying reasons must be derived or reconstructed from each agent’s unidealized core beliefs, values, and plans.

In the most general terms, my project can be conceived of in the following way: No one,

Vallier included, holds that the PJP requires accommodating all normative beliefs—no

matter how outlandish and no matter their location in an agent’s web of beliefs. Some normative

beliefs—for example, genocidal racism being a good—are beyond the pale even if they are core

areas of concern. No one in public reason takes the position that policy must accommodate the

normative beliefs of the deeply committed genocidal racist. The point is that there is some

criterion beyond idealization according to which beliefs may permissibly be excluded from

policy consideration. Wherever we draw the line for normative beliefs, there is similarly no a

priori reason to suppose that the PJP requires that policy accommodate all and any empirical

beliefs—regardless of their content.2 Furthermore, there is no a priori reason to suppose that the

criterion that distinguishes the outlandish from PJP-worthy normative beliefs is the same

criterion that distinguishes the outlandish from the PJP-worthy empirical beliefs.

2 There are two possible ways to describe the process that leads to permissibly applying a law to an agent who cannot see the reasons for it: First, we might say that because the agent’s belief lacks some epistemic or normative property, it needn’t be taken into account in justifying policy. That is, beliefs of a certain kind are ‘beyond the pale’ and are permissibly disregarded in satisfying the PJP. However, we may characterize the process differently, although the outcome is the same. On this model, rather than dismiss a belief as irrelevant to the PJP, we may opt to idealize more robustly the agent holding the belief such that we can attribute to them a subjective reason to endorse the policy in question. That is, rather than say the PJP needn’t satisfy certain views/beliefs, we idealize agents to a higher degree in order that they hold beliefs that do satisfy the PJP. I’m not sure how important this distinction is if, on the idealization route, idealization proceeds above what the real-world agent would recognize as their own reasons. On both accounts, the outcome is the same. Agents are subject to coercive laws in which they can recognize no justifying subjective reasons.

The majority of the public reason literature has focused on normative disagreement: It seeks to reconcile the tension between the PJP, respect for normative diversity, and the public’s epistemic fallibility. The general purpose of this paper is to explore what I contend is a domain of under-appreciated difficulties: How do we publicly justify policy when there is substantial and entrenched disagreement over empirical facts relevant to coercive policy—and more specifically, when large portions of public opinion diverge from that of a consensus of relevant experts? On the assumption that not all empirical beliefs need to be accommodated in policy, by what criterion do we distinguish between the empirical beliefs that the PJP must accommodate and those that it doesn’t?

I argue that (a) the growing prominence of science denialism and conspiracy thinking (i.e., conspiracism) in the public domain presents a challenge to Vallier’s and other similar models of moderate idealization and (b) meeting this challenge requires granting special epistemic status to scientific facts to the degree that there is a consensus of relevant experts in a mature science. Consistent with Vallier, we should be reluctant to idealize away core beliefs; nevertheless, false empirical beliefs contrary to a consensus of relevant experts, I argue, are permissibly idealized away or disregarded (see footnote 12) even if they occupy an agent’s epistemic core.

In order to establish my thesis, my paper has four main sections. First, since Vallier’s model of moderate idealization depends for its application on understanding an individual’s epistemology and psychology, I describe the epistemology and psychology of science denialism and conspiracism. Second, I make the case that science denialism and conspiracy thinking don’t merely manifest as theoretical problems. They currently generate real-world problems for policymaking in liberal democracies. Hence, they are an urgent problem that merits greater

attention than it is receiving at the moment.3 Once I have established the nature and import

of the problem that science denialism and conspiracism present to policymaking, I explore how

Vallier’s moderate idealization handles this problem and suggest that some denialisms and

conspiracisms present challenges for his view. Next, I make my positive case: Any moderate

idealization view that hopes to avoid the problems created by conspiracism and denialism must

adopt the following idealization rule: to the degree that there is a consensus of relevant experts

on an empirical matter,4 to that degree we may idealize away an agent’s beliefs if they

conflict with the consensus—regardless of the role that belief plays in an agent’s web of beliefs. Otherwise

stated, I argue that we ought to be full idealizers in the empirical domain whenever there is a

strong consensus of relevant experts in a mature science. Finally, I will add a caveat to my

position. The epistemic permissibility of idealization and political expediency can come apart.

Although I argue that we permissibly idealize away false empirical beliefs that contradict a

consensus of experts, whether it is politically expedient to do so will depend on the particulars of

each case. Sometimes, allowing exemptions to a policy better reconciles the various competing

values of political liberalism.

Science Denialism and Conspiracy Theories

Defining Science Denialism and Conspiracism

I call science denialism any view that is consciously contrary to the position taken by a consensus of relevant experts in a particular empirical domain. Prominent examples of denialisms include (but are not limited to) anti-GMO advocates, anthropogenic global warming

3 Since the first draft of this paper, others have begun to recognize the danger of widespread conspiracism and denialism to democracy. 4 In a mature natural science.

(AGW) deniers, evolution deniers (i.e., creationists), anti-vaccine proponents, flat-earthers, and germ theory deniers. I exclude from my definition any position where there isn’t yet a strong consensus of experts despite a clear direction in the discipline’s literature.5 I also exclude views

contrary to a consensus of experts in a relatively new science or area of inquiry.

The nature of conspiracy theories is harder to define. The growing sub-field of conspiracy

studies within psychology has yet to agree on a definitive list of necessary and sufficient

conditions for a conspiracy theory; instead they are usually operationalized extensionally by

listing known conspiracy theories (Sunstein & Vermeule, 2009). That said, we can make certain generalizations. Aaronovitch (2009) suggests conspiracy theories “involve the assumption of

collusion when other elucidations are more credible.” The assumption of collusion “manifests as

the belief that multiple actors cooperate in order to orchestrate a malevolent plot” (Baron et al.,

2014). In other words, a conspiracy theory will usually involve believing in “hidden-hand”

explanations that appeal to shadowy actors “pulling the strings” behind the scenes.

The Epistemology of Science Denialism and Conspiracism

Vallier’s model of moderate idealization stipulates that beliefs ought not to be revised to

the extent that belief clusters relate to or constitute an agent’s world view and core areas of

concern. In other words, we may only permissibly idealize beliefs outside the core of an agent’s

epistemic web. For this reason, evaluating Vallier’s model requires developing an understanding

of the nature and structure of denialist and conspiracist epistemology and psychology.

5 It doesn’t follow from this that it is a scientifically respectable position. Where there is scientific uncertainty, pseudoscience often flourishes. I submit that this is why alternative “medicine” is so popular.

The literature on conspiracy theorists suggests that conspiracy theories are like Lay’s potato chips: you can’t have just one. But why not? Work by Goertzel (1994), Swami et al.

(2010), and Wood et al. (2012) indicates that conspiracism is driven by higher-order beliefs such as mistrust of authority and the conviction that nothing is quite as it seems. These higher-order beliefs constitute a worldview through which individuals filter and interpret the world (Overton, 1991). The conspiracy world-view hypothesis predicts that possession of these deeper-seated belief structures explains adherence to a particular conspiracy rather than the content of that conspiracy. Wood et al. (2012) tested this hypothesis and found that participants who subscribed to the higher-order beliefs linked to conspiracism endorsed contradictory conspiracy theories: The more participants believed Princess Diana faked her own death, the more they also believed she was murdered (!). Wood et al. (2012) explain that adherence to principal beliefs was more important than the congruence of sub-beliefs. Also, Brotherton et al.

(2013) suggest that once one adopts the conspiracy mindset, it’s applied to an increasing number of domains until it becomes a world view that’s epistemically siloed from the world of facts.

The Mainstreaming of Science Denialism and Conspiracism

Science denialism and conspiracy theories are no longer fringe movements and have become—and continue to become—mainstream. For example, Oliver and Wood (2014) found that

in any given year, about half the public generally endorses at least one conspiracy

theory. Some of the most popular include the “birther” conspiracy about Obama

(endorsed by about 25%), the “truther” conspiracy about 9/11 (endorsed by about

40%), the theory that the FDA is deliberately withholding natural cures for cancer 8

(endorsed by 40 percent), and the theory that the Fed intentionally orchestrated the

2008 recession (endorsed by 19%).

Regarding science denialism, 57% of US adults think genetically engineered foods are unsafe to eat (Funk & Rainie, 2015), 1 in 4 think vaccines cause autism (Freed et al., 2010), and about

37% deny anthropogenic global warming—although this last figure has slowly been falling.

Public reason accounts that grant special status to beliefs when they are core areas of concern or part of a worldview ought to be gravely concerned about the growth of denialism and conspiracy theories. The degree to which such accounts privilege beliefs based on their subjective importance to an agent, to that degree the account will allow empirically dubious beliefs into the domain of public reason.

Perhaps of greater concern is that it’s not just the general public espousing these misinformed views. Political representatives at even the highest levels espouse anti-science and conspiratorial views with political impunity and without shame. For example, during the 2016

Republican primaries, three of the candidates,6 to varying degrees, expressed skepticism about the safety of vaccines, and two of them were medical doctors!

Donald Trump, as a candidate during the 2016 election and as sitting president, has on several occasions espoused the wild conspiracies advanced by conspiracy king Alex Jones

(Craw, 2017; Kranz, 2019). Recently, Donald Trump declared that California isn’t experiencing a drought despite overwhelming evidence to the contrary (Drought | USGS California Water

Science Center, 2014). The governor of Texas, while the US military was conducting training exercises, called in the State Guard to “supervise” because he was concerned that Operation Jade

6 Donald Trump, Rand Paul, and Ben Carson.

Helm might be a US government conspiracy to take over Texas (!). It goes without saying that anthropogenic global warming (AGW) denial is a mainstream view in the US—including among high-ranking Republican politicians. One might reply that politicians like Donald Trump are outliers and not representative. But the fact that a presidential candidate can espouse views so obviously false without being laughed off the stage or lambasted, and can still win the presidency, only confirms the magnitude of the potential problem for public reason.

Smallpage et al. (2017) suggest that out-of-power parties are especially inclined to conspiracy theorizing. They suggest that during the Obama administration, Republican elites were more prone to conspiracy theories because they had more to fear:

With a Democrat occupying the most powerful office on the planet, Republican elites are

an alarm system sounding warnings about potential plots against conservative values. But

that is a temporary, situational argument.

In other words, there is an ebb and flow to the demographic locus of conspiracy thinking, which is tied to a group’s relative political power.

However we interpret the current balance of power between Left and Right, the political

Right doesn’t own a monopoly on science denial or conspiracy theories. Opposition to genetic engineering is to the Left what AGW denial is to the Right. Bernie Sanders and Jill Stein, both prominent voices on the Left, ran on platforms espousing various degrees of anti-GMO myths and talking points. Further examples abound, but the above should be sufficient to illustrate the general point that science denialism and conspiracy thinking are not fringe movements and not particular to any single political ideology. In the next section, I will outline specifically why the rise of denialism and conspiracism presents a challenge to Vallier’s model of public reason with respect to policymaking.

The Case for Concern: Science Denialism, Conspiracism, and Public Reason

At this point we should ask why the mainstreaming of science denialism and conspiracy theories is a danger to public reason views such as Vallier’s. The short answer is that (a) the phenomenon makes empirically sound public policy difficult to generate without violating the

PJP, and as such it (b) increases the likelihood of epistemically and normatively bad policy. We can’t have effective public health policy when a substantial minority believes that vaccines are more dangerous than the diseases they prevent. Biotech can’t effectively address the challenges of modern agriculture (particularly under conditions of global warming) if our most promising tool—genetic engineering—is taken off the table. And addressing climate change becomes extremely difficult if around half of the population—in the country with the highest greenhouse gas emissions—denies its seriousness or reality (or both).7

Most public reason accounts of justification aren’t tied to the truth—justificatory reasons for a policy must merely reach some subjective or inter-subjective level of justification or warrant.8 On Vallier’s account, reasons for policy must be justified relative to subjective core beliefs and projects. As denialist or conspiratorial reasons become mainstream and migrate towards the core beliefs and projects of a growing segment of the population, so too will grow the requirement to justify policy in terms of subjective reasons that support policy contrary to the best science. In the following section I describe how Vallier’s moderately idealizing account of public reason may be vulnerable to the growth of science denialism and conspiracy theories with respect to policymaking.

7 Seventy-four percent of Republicans with a college degree say it is exaggerated, compared with 57% of those with a high school education or less saying the same (Gallup, 2015). 8 In Political Liberalism, Rawls argued that justification should be ‘freestanding’ in that it doesn’t presuppose any single comprehensive doctrine (p. 10).

Vallier’s Model of Moderate Idealization

Overview of Vallier’s Solution: Moderate Idealization

Vallier proposes we only moderately idealize away some but not all of each citizen’s cognitive shortcomings: i.e., we correct some bad inferences, false beliefs, and informational deficits. However, since we’re not engaged in full idealization, we’ll need to decide which mistaken beliefs and inferences we revise and which we keep—preferably in a way that isn’t ad hoc.

Vallier (2016) suggests that we idealize to the point that peripheral informational and rational shortcomings dissolve yet core beliefs, plans, and values remain because “the point of idealizing is to provide an accurate account of citizens’ reasons so as to treat them in accord with their deepest commitments” (p. 155). Critically, the idealized reasoner’s reasons should be recognizable to the non-idealized reasoner from the point of view of their core beliefs and commitments. Recognizing an agent’s core beliefs and commitments regulates the degree to which we idealize. We assume citizens want to live consistently with their deepest beliefs and commitments and so we idealize to the extent that it supports this aim. Thus, on this model, idealization is justified because it “furthers [citizens’] interest in living according to their ideals”

(p. 156).

Idealizing away false peripheral beliefs is fairly uncontroversial. However, we face a more controversial decision when we consider that some people’s core beliefs and commitments will turn out to be false. Here, Vallier argues that when core beliefs wouldn’t survive idealization, we shouldn’t reject/revise them. To the extent that an idealized agent’s reasons are unrecognizable to the actual agent (from the point of view of their core commitments), we’ve passed the upward bound of idealization (p. 156). Vallier’s view implies a hierarchy of epistemic

and normative commitments. Idealizing away the beliefs and commitments constitutive of an agent’s identity and way of life generates beliefs and values that are unrecognizable to the actual

agent. However, the whole point of idealization is to generate policy that’s justified in terms of a

‘real’ agent’s genuine subjective reasons—rather than those of a hypothetical agent—so that

actual citizens can live lives of integrity.

Applying Vallier’s Idealization to Science Denialism and Conspiracy Theorists

In this section I explore an analogy between religion on one side and science denialism

and conspiracism on the other. Vallier’s brand of moderate idealization is primarily motivated to

show that the domain of public reason isn’t restricted to only secular reasons. In some cases,

religious reasons can enter too. If science denialism and conspiracy theory adherence sufficiently

resemble religion with respect to the epistemic location of relevant beliefs and the structural roles

they play in people’s lives, then Vallier’s idealization must also admit denialist and conspiracist

reasons as relevant to policymaking. I argue that on Vallier’s model, the antecedent of the above

conditional obtains and, therefore, the PJP will have to satisfy some conspiracist and denialist

reasons.9 Later, I will suggest how Vallier’s view might be modified to avoid this outcome.

On the face of it we might think I’m making a category error proposing an analogy

between religious reasons and denialist and conspiracy reasons. Science denialists and

conspiracy theorists are, amongst other things, making empirical claims: the earth is flat,

vaccines cause autism, humans aren’t causing climate change, etc., whereas religious claims

aren’t empirical, they’re normative. This simply isn’t true. Religious doctrines are replete with

9 In the last section of my paper I argue that Vallier’s account ought to be revised in a way that excludes conspiracist and denialist reasons from public policy.

empirical claims. For example, there is a God vs there are gods, Moses received the ten commandments from God on Mt. Sinai, Mary was a virgin, Joseph Smith could read Egyptian hieroglyphs with a magic looking-glass, Jesus walked on water, lotus flowers spontaneously

grew where baby Buddha took his first 7 steps, Buddha sat under a tree and meditated until he

reached enlightenment, you will go to Heaven/Hell after you die, there are angels, you can be

reincarnated, a miracle happened, etc.10

The upshot is that religion, science denialism, and conspiracy theories all make in

principle verifiable empirical claims that would strike non-adherents as dubious. In fact, many

religious empirical claims would require non-believers to revise important aspects of their

epistemology were they to accept them. 11 This is not to say that these beliefs are necessarily

false, only that assenting to them often also requires taking on board metaphysical, ontological,

and epistemological beliefs a non-believer doesn’t already have.

We can say the same about the empirical claims science denialists and conspiracy

theorists make. It would require a not unsubstantial revision of a non-believer’s epistemology

(and possibly metaphysics) for them to subscribe to the conspiracist or denialist beliefs. One

could reply here that humans often have a remarkable ability to compartmentalize. It’s not

uncommon for people to have locally consistent but globally inconsistent belief sets.

Recall that on Vallier’s model we permissibly idealize away dubious or inconsistent

beliefs when they inhabit the periphery of an individual’s center of concern. Peripheral beliefs

can be idealized away without compromising the individual’s ability to live according to their

10 I think it could be argued that some of these are metaphysical claims. Nevertheless, I think the point stands that most religions make empirical claims that are central to their worldview. 11 To paraphrase Hume, when someone suggests a miracle has occurred, the likelihood that a law of nature has been broken is significantly smaller than the likelihood that someone’s senses have deceived them.

core commitments. So, what we really need to ask is whether the science denier’s and conspiracy theorist’s beliefs inhabit the domain of core commitments. To put it more concretely, suppose we idealized away the anti-vaxxer’s false beliefs about the relative risks of vaccines. Would they recognize the justifying reasons for public policy that followed from their idealized self’s beliefs in a way that is consistent with their core commitments? Would the idealized anti-GMO advocate, or AGW denier?

The answer is a clear and unambiguous “it depends.” Here, we must pay careful attention to the way a particular belief is related to other beliefs. When we idealize, we’re partly aiming for coherence. However, Vallier (2016) argues that idealization should aim at local rather than global coherence (p. 156). When we evaluate and idealize beliefs piecemeal, we ignore that they exist within a complex belief ecology. Attempting to remove a single belief from its ecology is akin to trying to remove the pit from a mango: a lot of other stuff comes along with it. So, part of deciding whether we can idealize away a belief will depend on how the ‘offending’ belief is tied to other beliefs and where those other beliefs are situated in the hierarchy of commitments.

Idealization at the local level comes with a caveat: it shouldn’t be applied when an agent “has locally inconsistent but deeply entrenched beliefs" because “if the beliefs are made consistent, the agent’s entire belief-value set will be fundamentally altered” (p. 156). We can summarize this view on belief revision as an expression of the principle of conservatism. We ought to revise beliefs in ways that minimally disrupt core belief commitments.

To illustrate how denialist beliefs are interrelated, consider the likely structure of an

AGW denier’s epistemology. Proctor, a science historian who studies how science denialism spreads, writes:

The fight is not just over the existence of climate change, it’s over whether God

has created the Earth for us to exploit, whether government has the right to

regulate industry, whether environmentalists should be empowered, and so on.

It’s not just about the facts, it’s about what is imagined to flow from and into such

facts. (Quoted in Kenyon, 2016) 12

Our decision to idealize away a denialist’s particular belief depends on how tightly it is

interconnected with other core beliefs and values. The denialist belief itself may or may not be at

the core of the agent’s web of belief; however, the belief might be very strongly tied to core areas

of concern. I submit that there are AGW deniers such that if we idealize away their beliefs

regarding AGW, those idealized agents will be left with reasons for climate policy that would be

unrecognizable to them. 13 If AGW denial is closely tied to or constitutes core beliefs and values,

as suggested above by Proctor (2016) and Rutjens et al. (2017), it’s possible that the caveat in

Vallier’s idealization prevents us from idealizing away the AGW denier’s improbable empirical

belief.

Of course, just as with religiosity, science denialism also exists on a continuum of

commitment. We can imagine the dabbling anti-vaxxer or anti-GMO advocate. Perhaps they

repost naturalnews.com articles here and there, but they aren’t beholden to the cause

as are the committed activists. There are, however, science denialists for whom spreading the

anti-vax/anti-GMO/AGW denial gospel is their Cause. They are the Jehovah’s Witnesses of

science denialism and are the people whose posts we typically block from knocking on our

12 There’s good evidence supporting Proctor’s views. Social and economic conservatism are the strongest predictors of AGW denial. See Bastiaan T. Rutjens, Robbie M. Sutton, and Romy van der Lee (2017), “Not All Skepticism Is Equal: Exploring the Ideological Antecedents of Science Acceptance and Rejection.” 13 Also, below I consider objections to my claim.

newsfeed door. They run Facebook groups, websites, chat rooms, and organize and attend rallies.

There is no doubt that their cause is central to or constitutes their core commitments. If we

idealized away their denialist beliefs to generate policy, the reasons would be unrecognizable to

their unidealized selves. They would incur substantial integrity costs and, because they would be

coerced, the resulting public policy would therefore fail to meet Vallier’s standards for

legitimacy.

The Cult Challenge: Are Denialism and Conspiracy Theories Cults?

In Liberal Politics and Public Faith (2014), Vallier defends his view against a challenge similar

to the one I’m pressing here. In his book, he addresses the concern that his view won’t be able to

distinguish “the reasons of cult members from the reasons of people with more ordinary,

seemingly rational religious and secular reasons” (p. 172). We can imagine someone raised in a

cult being able to sustain her cultish beliefs through moderate idealization because of their

proximity to her core areas of concern. Vallier’s response to the cult challenge is first to deny

that their reasons survive rational scrutiny “given how much force and social pressure is required

to sustain cultish beliefs” (p. 172). Second, a reasonableness norm filters which views we admit

into public reason: Cult members violate this norm because they don’t seem to take seriously the

beliefs of others (p. 172).

The first reply to the cult challenge will work to exclude the more esoteric denialisms and conspiracy theories. Maintaining a set of beliefs against frequent and sustained external scrutiny, disconfirmation, and possibly derision requires strong and frequent social and psychological maintenance mechanisms. And this is precisely the problem with denialism and conspiracism’s

mainstreaming: When beliefs become mainstream, force and overt social pressure are not

required to maintain them. They are self-sustaining regardless of their truth value.14 The number and clustered distribution of adherents of many of the mainstream denialisms and conspiracies are such that one can easily avoid disconfirmation while receiving frequent reinforcement—especially as we increasingly receive our information from customized news feeds (Benkler et al., 2017; Lee, 2017).

There is reason to believe that, with at least some issues, the general public (not just select groups) will have more contact with false than true information. To quote the most extensive study to date on the topic:

False news reached more people than the truth; the top 1% of false news cascades

diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more

than 1000 people. Falsehood also diffused faster than the truth. (Vosoughi et al., 2018)

Another recent study compared the level of engagement with the top twenty fake news stories of

2016 to the top real news stories:

[The] 20 top-performing false election stories from hoax sites and hyper-partisan blogs

generated 8,711,000 shares, reactions, and comments on Facebook. Within the same time

period, the 20 best-performing election stories from 19 major news websites generated a

total of 7,367,000 shares, reactions, and comments on Facebook. (Silverman, 2016)

The upshot here is that Vallier’s move to avoid the cult objection likely won’t work for widespread conspiracies and denialisms. The more the public is exposed to such views and beliefs, the more those views will gain acceptance and credibility, thereby making them more

14 Anyone doubting that a belief’s popularity and its probability of being true come apart need only spend an hour in the comments section of a politicized scientific issue to be disabused of this fanciful notion.

difficult to idealize away. Once they become part of a significant portion of the public’s world-view, the force and social pressure that ruled out cult beliefs no longer apply.

Reasonableness, What?

The second reply to the cult challenge—that reasonableness constrains which reasons enter the public domain—again seems only to work when applied against small groups with esoteric views. As a belief (regardless of its truth value) becomes more widespread, reasonableness will not likely rule it out. To see why, we must first begin by clarifying the notion of reasonableness.

In both Liberal Politics and Public Faith (2014) and Must Politics Be War? (2018),

Vallier suggests there are yet unresolved problems with the notion of reasonableness as it is employed in the public reason literature (p. 148; pp. 132-133). In short, Vallier (2018) argues that the literature and authors equivocate between an intuitive notion, which “lacks specific and substantive content, and [a] technical notion, which begs the question of the basis of reasonableness’s normative force” (p. 37). Vallier’s own notion of reasonableness isn’t intended to play a significant role in his model of moderate idealization. He adopts a “deflationary or thin” moral notion that is ultimately quite permissive. It will “rule out few evaluative standards” and

“in combination with other features of a theory of public justification prevent some of the coercion we regard as unjustified” (2018, p. 135). This position is largely consistent with his view in Liberal Politics and Public Faith (2014) in which he states that “once we flesh out convergence and moderate idealization, reasonableness will become innocuous” (p. 148). What’s important to note here is that whatever reasonableness does, it takes a back seat to the other values and principles that guide idealization.

We know that reasonableness is thin and deflationary, but what is its content? In PLPF, reasonableness is the practice of “taking seriously the beliefs of others” (p. 172). In MPBW, reasonableness is primarily applied to the normative domain: Being reasonable “requires being prepared to propose reciprocal terms of cooperation” (p. 135). One important feature of reasonableness in MPBW is that it seems to function primarily as a normative rather than epistemic criterion for whether beliefs enter the domain of public reason. Nevertheless, for my purposes, perhaps the best way to characterize the content of Vallier’s reasonableness is as a disposition to recognize the burdens of judgment.15 This amounts to acknowledging that others can have justifiably high credence in their own beliefs in some circumstances—even if those beliefs differ from my own. Reasonableness, however, doesn’t require that we decrease credence in our own beliefs.

With this understanding of reasonableness in our back pocket, we can evaluate whether it can successfully prevent widespread denialisms and conspiracies from entering the domain of public reason. Reasonableness, as Vallier conceives it, doesn’t on its own seem to be robust enough for the task. The denialist or conspiracist can acknowledge that others have strongly held views that contradict their own. But reasonableness doesn’t demand that they diminish their own credence levels; and so, it doesn’t seem to bear necessarily on whether denialist beliefs are permissibly idealized away.

In the cult case, perhaps the sheer quantity of people holding contrary views allows reasonableness to have some effect with respect to which beliefs can be ruled out. Also, cult

15 The burdens of judgment are properties of cognition and discussion that will lead reasonable people to disagree. These properties include difficulties assessing how evidence weighs in favor of one position or another, how to weigh different, relevant factors, how to resolve conceptual vagueness, how their social status affects their weighing of evidence and values, how to balance different sides of an issue, and how to resolve or accept value conflicts.

members will have difficulty pointing to outside entities or authorities to corroborate their claims—that is, their epistemic bona fides will be suspect. However, when false beliefs are fairly widespread, neither of these seems to hold. As I discussed above, there are plenty of people in positions of power who will vouch for a variety of denialisms or conspiracies.16 If these beliefs aren’t idealized away, they enter the domain of public reason, which will in turn allow them to act as defeaters to the eligible set of science-based policy.

Diagnosing and Revising Vallier’s Moderate Idealization for Empirical Beliefs

There is nothing logically incoherent about a model of idealization that doesn’t idealize away (false) conspiracist and denialist beliefs; however, if we think that this is an undesirable outcome, then it’s worth exploring how Vallier’s model might be modified to prevent it. We’ll want to do this without straying too far from its other theoretical commitments. Modifying the model begins by diagnosing the cause of the problem and then suggesting a remedy that addresses this cause. In this section, I also consider a potential objection that Vallier might raise.

I suggest that Vallier’s idealization model allows improbable denialist and conspiracist reasons to enter the domain of public reason because it lacks an epistemic filter for empirical beliefs. That is, there is no epistemic standard that stands outside the agent and governs the quality of the empirical beliefs that underlie an agent’s reasons. In the normative domain, Vallier has such a filter: We needn’t accommodate normative beliefs that fail to propose reciprocal terms of cooperation. This criterion acts as a demarcation line between the beliefs that may enter the domain of public reason and those that can’t. What we need is some sort of analogue for the empirical domain.

16 I elaborate on the significance of this point in the next section.

It can’t be the case that idealization prohibits revising/rejecting any empirical belief, no

matter how outrageous and contrary to the best science, merely because it occupies an agent’s

core beliefs (or it is closely tied to those beliefs). If a large group of people oppose building a

road because they believe it would undermine the health of the local unicorn population, we

would not want justified public policy to be held hostage to defeaters from this belief—no matter

how deeply held or intrinsic to a world view. That is, we would not want the unicorn rights

advocates to have defeaters for the policy proposals because we can’t permissibly idealize away

their obviously false empirical beliefs.17

So, at last, what is this epistemic filter I am proposing? Whenever an empirical belief

contradicts that of a consensus of relevant experts in a mature natural science, that belief is

permissibly idealized away—regardless of its location in the agent’s web of beliefs. This

standard is grounded in the basic epistemic distinction between belief and knowledge.18 As

Epictetus put it, “the fact that someone holds this or that opinion will not suffice to make it true”

(Discourse II. 11. 15). In short, merely believing something doesn’t make it true—no matter how

strongly you believe it. When an agent holds an empirical belief that is contrary to a consensus of

17 Before moving forward, I need to qualify my suggestions for an epistemic filter on empirical beliefs. I don’t think epistemology, on its own, is the final word on how policy is applied but it should constrain the sorts of facts from which policy is built. Epistemic constraints tell us which beliefs are permissibly idealized, and, in turn, which reasons must be taken into account in satisfying the PJP. It also tells us which reasons can act as defeaters and which the PJP needn’t accommodate. But an epistemic filter won’t tell us whether we grant exemptions or not to the policy that is eventually selected. The question of exemptions is separate and will be guided by various normative considerations built into one’s model. So, it may very well be that my proposed epistemic filter affirms that certain empirical beliefs can be permissibly idealized away but that, because of a variety of other theoretical commitments, we might end up granting exemptions to those real-world agents whose false beliefs were idealized away. I address some of these issues at the end of the chapter. 18 Without going too far down the Gettier rabbit hole, all I mean by knowledge is an extremely well-justified belief that is unlikely to turn out false. I’m just going to stipulate a knowledge concept that doesn’t require Truth.

relevant experts in a mature natural science, that agent’s belief is virtually false. 19 I submit that

models of idealization ought to avoid requiring that policymaking and selection accommodate

what, by the best scientific standards, are reasons based on false beliefs.
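The filter can be stated schematically. Using shorthand of my own (none of this notation is standard in the public reason literature), let B(a, p) say that agent a holds empirical belief p, let C(p) say that p contradicts the consensus of relevant experts in some mature natural science, and let I(p) say that p is permissibly idealized away. Then the exclusion principle reads:

∀a ∀p [ B(a, p) ∧ C(p) → I(p) ]

Note what the schema deliberately omits: there is no condition on where p sits in a’s web of beliefs, which is precisely where my proposal departs from Vallier’s restriction on revising core beliefs. The terms ‘consensus,’ ‘relevant experts,’ and ‘mature natural science’ are left as primitives, to be cashed out as described above.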

I propose this particular epistemic filter for a second reason: it recognizes the social nature of scientific knowledge and the legitimate hierarchies of knowledge communities in complex modern democracies. Citizens of complex societies have settled on a solution to learning about the world and making sure policy is well informed: as a society, we recognize and fund a division of epistemic labor. Rather than each person having to become a 17th-century scientist with a garage full of beakers and test tubes, we defer to communities of experts, which are often publicly funded. On Vallier’s model, idealization is constrained by the reasons a particular real-world agent can access by sound deliberative route. Fundamentally, the quality of the reasons that the PJP must accommodate depends on the epistemic starting point of each individual agent. This emphasis on the individual starting point is problematic because it fails to recognize this socially agreed-upon division of epistemic labor and thereby accords equal weight to expert and non-expert reasons in policymaking.

Vallier might reply to my critique as follows. His model doesn’t preclude attributing expert reasons to laypeople. Idealization can include experts’ reasons because agents hold the belief that there are experts to whom we ought to defer. Conspiracists and denialists alike defer to experts in a wide array of domains, and so, by their own lights, they endorse this norm. Why not just idealize away their false beliefs by attributing to them a belief or norm akin to “defer to experts when I am not an expert”?

19 Experts can get it wrong, but what matters is relative probabilities of error. There is a significant gap in relative probabilities of error between a consensus of experts in a mature natural science and a single layperson. Moving forward, I’ll use the notion of falsity probabilistically. That is, the relative probability of a layperson’s belief being false when it contradicts that of a relevant consensus of experts is so high that, as a rhetorical shortcut, I think I can safely call the belief false. And if it does turn out to be true, it’s unlikely to be true for the reasons the layperson thought it was. (Go away, Gettier!)

I think it’s true that mildly committed conspiracists and denialists can, on Vallier’s existing model, have their false beliefs idealized away in this manner. But it’s not so clear in other cases. Let me present two important cases that might resist idealization on Vallier’s model: (a) the conspiracist/denialist who endorses the norm “defer to experts” but misidentifies who the actual experts are, and (b) the conspiracist/denialist who adopts “defer to experts,” correctly identifies the experts, but doesn’t trust them.

In the first case we have conspiracists/denialists who misidentify the relevant experts despite endorsing “defer to experts.” I suggest that this category describes a significant portion of these belief communities. Proctor (as well as Vallier) adopts the view that core beliefs are closely intertwined in clusters to form a worldview or perspective. Importantly, the reasons for believing some things are closely tied to—even dependent on—one’s worldview. Part of those belief clusters will include beliefs about who the relevant experts are in a particular domain. These beliefs about who the relevant experts are depend heavily on both the worldview and the epistemic environment these agents inhabit (which are themselves related). If, as we might expect, agents’ worldviews drive them to a media environment where they are heavily exposed to pseudo-experts and outlier views, they will form erroneous beliefs about who the relevant experts are. When we defer to the wrong experts, we end up with the wrong beliefs.

Such agents can form this mistaken belief culpably or non-culpably. On many denialist issues, there is a long history of interest groups deliberately creating and promoting pseudo-experts and pseudo-expert institutions in tandem with a campaign to discredit legitimate experts and institutions (Oreskes & Conway, 2012). Well-known examples of this tactic include the tobacco industry’s campaigns to undermine tobacco safety concerns, various manufacturing and power industries’ campaigns to undermine concerns about acid rain, and the fossil fuel industry’s campaign to undermine concerns about anthropogenic global warming and the credibility of climate scientists. In all cases, these interest groups created parallel institutions and experts to deliberately confuse and mislead the public.

If we idealize such denialist agents according to the experts to whom they defer (which is in turn often tightly tied to their political or religious ideology), idealization won’t rectify their false empirical beliefs without significantly revising other core beliefs. Not only would idealization require changing who such agents identify as legitimate experts, it must also change who they believe are illegitimate. Again, these revisions are themselves closely connected to ideology and worldview, and so these too will need revising. To the extent that such revisions are required, Vallier’s model cautions against idealization. But that means we risk admitting defeaters to policy selection that are based on highly improbable empirical beliefs. I think the only way to idealize away the false empirical reasons of committed conspiracists/denialists of this ilk is to adopt my proposed epistemic filter. Other values can come into play when discussing exemptions but, again, we don’t want improbable empirical beliefs to function as defeaters in policy construction and selection.

The second kind of denialist/conspiracist differs from the first in that they recognize who the legitimate experts are; they just don’t trust them. This gets to the heart of the matter with respect to the social nature of knowledge. Steven Shapin (1994), in A Social History of Truth, and Miranda Fricker (2007), in Epistemic Injustice, both argue that we cannot derive knowledge from those whom we don’t trust. Shapin spends much of the first chapter of his book demonstrating that “much of our empirical knowledge is held solely on the basis of what trustworthy sources tell us” (p. 21). In other words, our ability to leverage the knowledge of experts depends in large part on whom we trust.

These same ideas are echoed by Elijah Millgram (2015) in The Great Endarkenment: modern knowledge depends on trusting long chains of experts, and no single person is in a position to check up on the reliability of every member of that chain. With respect to idealization, whom we trust is intimately tied to a core web of fundamental beliefs, including ideology and religion. If my core ideological commitments drive me to mistrust the genuine experts, idealization might be impermissible.

This tight relationship between the ability to receive knowledge, core commitments, and whom we trust is also echoed in Imhoff and Bruder’s (2014) study on the roots of conspiracy thinking. They found that the conspiracy mentality

… produces a generalized political attitude associated with the behavioral intention to challenge the status quo and a pejorative view of those in power. In contrast to right-wing authoritarianism and social dominance orientation, conspiracy mentality is related to prejudice against high-power groups that are perceived as less likable and more threatening than low-power groups.

We needn’t go so far as conspiracy theorists to recognize the fundamental relationship between trust, core commitments, and knowledge. Morgan Marietta (2017) argues persuasively that this relationship lies at the core of science denialism, of which AGW denialism is just an instance. He argues that a significant portion of climate change denialism on the Right derives from the Right’s low trust in what they perceive to be liberal scientists. Self-identified Republicans were asked about the trustworthiness of information on global warming coming out of prestigious universities such as Berkeley and Princeton. 59% responded that it was either certainly or probably wrong (vs. 41% who responded that it was almost certainly or probably correct). 90% of Democrats, on the other hand, said the information was either certainly or almost certainly correct (with 10% saying it was probably or almost certainly wrong).

Note that not only is the effect of ideology robust with respect to the size of the group affected (i.e., 59% of Republicans), but within that group the effect size—i.e., the degree of mistrust—is also robust. This suggests that idealizing this population’s beliefs would require substantial revision at the core with respect to whom they trust and mistrust. And since much of our knowledge depends on whom we trust, idealizing in this way would also require substantially revising other core beliefs.

Another recent study on the diverse roots of science denialism confirms the tight relationship between ideology, trust, and knowledge:

[D]ifferent ideological predictors are related to the acceptance of different scientific findings. Political conservatism best predicts climate change skepticism. Religiosity, alongside moral purity concerns, best predicts vaccination skepticism. GM food skepticism is not fueled by religious or political ideology. Finally, religious conservatives consistently display a low faith in science and an unwillingness to support science. (Rutjens et al., 2017)

The upshot is that there’s growing evidence that different groups display different levels of (mis)trust towards legitimate epistemic authorities and that core ideological commitments generate this trust or mistrust. Where trust is misaligned with the relevant authorities, agents inhabit epistemic echo chambers, where an echo chamber is a social structure from which other relevant voices have been actively discredited (Nguyen, 2020). The fundamental feature is not that members don’t encounter contrary views or aren’t aware of competing epistemic authorities; rather, the social nature of the epistemic community perverts attitudes of trust:

They are isolated, not by selective exposure, but by changes in who they accept as authorities, experts and trusted sources. They hear, but dismiss, outside voices. Their worldview can survive exposure to those outside voices because their belief system has prepared them for such intellectual onslaught. (Nguyen, 2020)

In Echo Chamber: Rush Limbaugh and the Conservative Media Establishment, Jamieson and Cappella (2008) conducted empirical studies on the effects of echo chambers and found that

[…] exposure to contrary views could actually reinforce their views. Limbaugh might offer his followers a conspiracy theory: anybody who criticizes him is doing it at the behest of a secret cabal of evil elites, which has already seized control of the mainstream media. His followers are now protected against simple exposure to contrary evidence. In fact, the more they find that the mainstream media calls out Limbaugh for inaccuracy, the more Limbaugh’s predictions will be confirmed. Perversely, exposure to outsiders with contrary views can thus increase echo-chamber members’ confidence in their insider sources, and hence their attachment to their worldview. (Quoted in Nguyen, 2018)

What’s relevant to idealization is the tight relationship between trust, knowledge, and worldview. To the extent that core perspectives and ideological beliefs drive whom we trust and, in turn, from whom we may acquire knowledge, Vallier’s model faces a choice with respect to idealization. On one hand, one can idealize an agent’s mistaken beliefs about who is trustworthy, but this might require revising core beliefs. On the other, beliefs about who is trustworthy aren’t idealized away, but this preserves the core-occupying false empirical beliefs, which in turn holds policymaking captive to false beliefs.

In the first case, simply attributing to the agent the correct belief about who is a trustworthy epistemic authority also requires idealizing away the agent’s existing beliefs about who is genuinely trustworthy. For example, it’s not enough to attribute to the Rush Limbaugh fan the belief that climate scientists are in fact telling the truth and from there attribute to him the correct beliefs about AGW. You must also revise his beliefs about Rush Limbaugh’s trustworthiness; i.e., that Rush Limbaugh is either lying or ignorant. Already, the agent’s web of beliefs is starting to look unrecognizable to the real-world agent. But it doesn’t end there. If Rush Limbaugh is idealized away as a trustworthy source, this calls into question other beliefs that relied on the belief “Rush Limbaugh is a trustworthy, knowledgeable source.” In other words, there is a cascade effect that ends with beliefs and reasons attributed to an agent that are unrecognizable to that agent.

Alternatively, one might not idealize away beliefs about who the trustworthy sources of knowledge are, thereby protecting other core beliefs from revision. However, if we adopt this strategy, it doesn’t seem that we can permissibly idealize away the false empirical belief. If the Rush Limbaugh fan isn’t idealized such that he trusts climate scientists rather than Rush, it’s not clear how we can attribute to this agent the belief that AGW is a real phenomenon. At least, it doesn’t seem possible without also revising his beliefs about Rush Limbaugh’s trustworthiness, along with the beliefs built up over time that came from Rush and similarly trusted figures.

Vallier’s model prohibits us from revising core beliefs. The mistrust of relevant experts is tightly bound up with core ideological commitments, beliefs about whom we trust, and beliefs derived from those we trust. Thus, idealizing agents to trust those whom they, in real life, deeply distrust will violate Vallier’s upper bound on idealization: that agents recognize the reasons being attributed to them as coming from their own core beliefs, attitudes, and values. Otherwise stated, we cannot acquire knowledge from those whom we distrust, and if this distrust runs deep, it doesn’t seem that it can be permissibly idealized away on Vallier’s model.

Another way to think of it is this. Maintaining and developing trust are distinctly social phenomena. We rarely reason our way into trusting someone without some form of new interaction with the object of (mis)trust. Idealizing an agent to trust someone they don’t (or to mistrust someone they do) misunderstands the nature of trust. Although trust contains an epistemic component, that’s not its primary characteristic. Trust is built up out of complex social interactions over time, themselves laden with background beliefs. I come to trust someone only after frequent interaction and familiarity, or indirectly through someone I already trust. I don’t get there by rational deliberative route from my existing beliefs and attitudes. My existing beliefs and attitudes are the very things that preclude my trusting the people I don’t. All this is to say that, in the vast majority of cases, it’s implausible that I can reason my way from mistrust to trust from my existing beliefs and attitudes, without interacting with the world differently than I do.

Above I have described the possible epistemologies that adherents of widespread conspiracies and denialisms might possess. I suspect that a non-trivial number of people fall under these descriptions—although this is ultimately an empirical matter. To the extent that I’m right, Vallier’s model won’t be able to idealize away their false empirical beliefs. Suggesting that it’s permissible to do so because they already defer to experts won’t work: they can defer to the wrong experts, or they can mistrust the genuine relevant experts. In both cases, there’s reason to believe Vallier’s prohibition on revising core beliefs will block idealizing adjacent conspiracist and denialist beliefs about who the relevant experts are.

My suggestion, again, is this: we ought to idealize away empirical beliefs that contradict a consensus of the community of relevant experts in a mature natural science. Otherwise, policy construction and selection will face defeaters grounded in improbable empirical beliefs. The epistemic filter ensures that policy is generated from the best scientific knowledge and is not defeated by poorly justified beliefs. Discussions of exemptions can enter at a later stage in the policymaking and implementation process.

Justification and Trade-Offs for an Epistemic Filter on Empirical Beliefs

So far, I’ve primarily justified an epistemic filter by appealing to final consequences: policymaking and selection are unacceptably hampered when reasons grounded in scientifically improbable beliefs can act as defeaters to policy proposals. Of course, as with any feature of a theory, there will be trade-offs. First, I will acknowledge what might be lost if Vallier adopts my filter. Then I will suggest that, despite the losses, an epistemic filter on empirical beliefs can be justified while maintaining a commitment to political liberalism’s core values and motivations:

1. Political communities are home to diverse and incommensurable conceptions of the good and of justice, yet somehow, we must all find a way to live together.

2. Political liberalism is primarily concerned with people living according to their normative commitments (i.e., their conception of the good and of justice).

Political liberalism, on some views, should seek to accommodate as many worldviews as possible. On Vallier’s model of idealization, few citizens are excluded from the domain of public reason and, as such, aren’t coerced for reasons they can’t recognize (assuming the eventual policy is applied universally). Allowing in deeply held conspiracist and denialist reasons might also be more in line with what Rawls had in mind when he described the notion of reasonableness as primarily moral rather than epistemic. As Jønch-Clausen and Kappel (2015) put it:

the burdens of judgment explain reasonable disagreement, but the bar for reasonableness is set low in order to accommodate the aims of political liberalism, including the aim of treating fellow citizens with respect as political equals. (my italics, p. 120)

When we exclude citizens from the domain of public reason, we undermine political liberalism’s commitment to respect diverse worldviews as well as to avoid coercive political arrangements that don’t meet the PJP. Vallier would also point out that in failing to meet the PJP, we undermine people’s integrity and dignity because they are coerced in a way that prevents them from living according to their deepest commitments and values.

Idealizing some denialists and conspiracists will render inaccessible some reasons attributed to them. In turn, this makes policies to which they are subject appear coercive to their non-idealized selves. However, I reject the charge that this higher level of idealization for empirical beliefs is inconsistent with political liberalism; i.e., that my higher level of idealization applied to the empirical domain undermines people’s ability to live according to their deepest normative commitments.

As the empirical research I have cited throughout suggests, many denialists’ and conspiracists’ empirical beliefs are driven by normative commitments (e.g., religiosity and political ideology) as well as by how these ideological affiliations influence whom they believe to be trustworthy and untrustworthy experts. Core beliefs and values are not completely independent: normative and empirical beliefs become closely intertwined and interact, thereby forming a worldview or “perspective.” A question arises with respect to the content and doxastic consequences of these worldviews: what do we do, with respect to idealization, when a false empirical belief undermines the realization of a normative commitment within that worldview? For example, an anti-vaxxer deeply believes that (a) vaccines cause autism (empirical), (b) modern medical practitioners and institutions are deeply untrustworthy, and (c) public health policy ought to do what is medically best for her child (normative). One option is to hold the empirical belief fixed, thereby undermining the normative belief; another is to idealize away the empirical belief in order to harmonize it with the normative belief.

Let me offer an in-depth example to further illustrate. Consider the beliefs an AGW denier holds:

AGW Denier 1

P1. Anthropogenic climate change isn’t happening. (Empirical)

P2. It’s bad to enact policy about non-existent problems. (Normative)

P3. If anthropogenic climate change isn’t happening, then we should not enact climate change policy. (Linking)

C. Therefore, we should not enact climate change policy. (Normative)

These beliefs form a valid argument, but the argument is not sound because P1 is false. If we idealize P1 away, the resulting policy won’t violate any of AGW Denier 1’s normative commitments. In other words, if we adopt the epistemic filter, idealization proceeds without undermining any normative beliefs.
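To make the structure explicit, here is a minimal propositional sketch, where A and E are abbreviations of my own (A: anthropogenic climate change is happening; E: we should enact climate change policy):

¬A (P1); ¬A → ¬E (P3); therefore, ¬E (C), by modus ponens.

Since the inference is valid, the epistemic filter does all of its work on the premise side: replacing ¬A with A removes the minor premise, and the conclusion ¬E is no longer supported by this argument.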

But wait, there’s more! Suppose AGW Denier 1 also holds the following beliefs:

P4. If climate change is caused by humans and its effects will be extremely harmful to life on earth, then we should enact a policy to mitigate the causes and effects. (Linking)

P5. Life on earth is valuable and we should enact policy to protect it when it is faced with grave danger. (Normative)

I think it’s reasonable to suppose that a significant subset of AGW deniers hold something like linking premise P4 and normative premise P5. If I’m right, then idealizing to correct their empirical belief in order to realize their normative commitments is more consistent with political liberalism than not idealizing. We idealize away the false empirical belief and replace it with its contrary. In doing so, we idealize in a way that allows AGW Denier 1 to live more consistently with his normative commitments than if we hadn’t idealized away the false empirical belief.

The main idea is this. At the core of their web of beliefs, groups of agents like AGW Denier 1 hold two linking premises like P3 and P4, a normative premise like P5, and a false empirical belief like P1. For such agents, idealizing away the false empirical belief P1 attributes to them a position that realizes their normative commitments. Prohibiting the empirical revision attributes to them a policy position contrary to their core normative commitments. It follows from a central motivation for political liberalism—that people ought to be able to live according to their considered normative judgments—that we ought to allow for idealization of empirical beliefs in or tightly connected to the core. If we are primarily committed to ensuring that citizens live according to their considered normative commitments, then false empirical beliefs are justifiably idealized away, through this basic motivation for political liberalism, regardless of their position in an agent’s web of beliefs.

One might reply, “Well, what about another kind of AGW denier, one who has no normative commitment to caring about the consequences of global warming?” This AGW Denier 2’s beliefs look something like this:

AGW Denier 2

P1. Anthropogenic climate change isn’t happening. (Empirical)

P2. It’s bad to enact policy about non-existent problems. (Normative)

P3. If anthropogenic climate change isn’t happening, then we should not enact climate change policy. (Linking)

P4*. I don’t care about the environment, people who live on coasts, nature, or animals. (Normative)

C. Therefore, we should not enact climate change policy.

Despite idealizing away his empirical belief, the same policy position remains attributed to him: no action to mitigate global warming. His opposition to global warming policy isn’t rooted in empirical concerns anyway—it’s normative. He just doesn’t think that the things that will be affected matter. So, even if we idealize away his core empirical belief (P1), idealization still respects his core normative commitments. If such an individual exists, he poses a problem for moderate idealization generally and for how it determines which normative beliefs can enter the domain of public reason—not a problem particular to cases involving deeply held false empirical beliefs.

Objection: A proponent of Vallier’s view might still respond that I’m granting unjustified priority to the normative domain in idealization. What we’re really concerned with is agents’ worldviews, which are equally composed of empirical and normative beliefs. Prioritizing one domain impermissibly compromises the unity of the worldview. This objection, however, doesn’t correctly characterize my proposal. Yes, I’m suggesting prioritizing the normative when it conflicts with an agent’s scientifically improbable empirical beliefs, but it’s not just any normative beliefs we ought to prioritize. We are prioritizing normative beliefs that have survived the acceptability filter for normative beliefs over empirical beliefs that haven’t survived the filter for empirical beliefs.

Recall that, even on Vallier’s permissive view, there is a criterion by which we reject normative beliefs from the domain of public reason: normative commitments that don’t offer reciprocal terms of cooperation don’t meet the minimum standard. The PJP needn’t take them into account.

All I’m suggesting is that, just as in the normative domain, there must be some minimum threshold determining which empirical beliefs enter the domain of public reason and which ones should be idealized away. Even for Vallier, it’s not a free-for-all in the normative domain—even when a belief is part of someone’s worldview—so why should it be a free-for-all in the empirical domain?

So, in a way, the asymmetry is on the side of a Vallier-type idealizer. They are imposing minimum standards for normative beliefs but not for empirical beliefs that make up a worldview.

To reiterate my reply to the objection: I’m not suggesting we prioritize the normative domain tout court. Rather, we prioritize it when an agent’s own empirical beliefs don’t meet a minimum epistemic standard and those empirical beliefs ‘obstruct’ that same agent’s normative commitments (that have met the minimum normative standard) from being realized in policy.

I propose one more argument to defend my epistemic filter on empirical beliefs. Really, this is just a reiteration of what I have said at various points throughout the paper. There are legitimate hierarchies of knowledge communities. While some people may misidentify the relevant communities of experts or may mistrust them, few people will agree that forming and selecting complex policy ought to be constrained by beliefs that run counter to a consensus of relevant experts. It seems inappropriate that policymaking, via satisfaction of the PJP, must be vulnerable to defeaters grounded in known untruths. A theory that fails to recognize legitimate hierarchies of knowledge communities and permits highly improbable, poorly justified empirical beliefs into the policymaking process seems to get something wrong. We should want an account of the PJP that generates policy based on the best science.

It’s true that this means some views will be excluded from the policymaking and selection process. This is a cost. But we must also acknowledge the cost of allowing policy to be constrained by reasons grounded in claims known to be false. As I’ve hinted, there’s a way to reconcile the costs of excluding certain views: we may allow exemptions to policies so that the policies themselves aren’t hampered by bad reasons, but this should occur only after policy has been constructed and selected.

Conclusion

In this paper I have argued that some strains of widespread conspiracism and science denialism pose a problem for Vallier’s model of idealization. Since we may not permissibly idealize away core beliefs, the false empirical beliefs of conspiracists and denialists cannot be permissibly idealized away when those beliefs occupy, or are generated by, the core. This raises a problem: policymaking and selection will face defeaters based on what are very likely false beliefs. On the other hand, political liberalism seems to require that policy be justified as broadly as possible. This means it must accommodate some disagreement. But how broad should a theory allow that disagreement to be? Surely, it’s not possible to accommodate all disagreement. To quote Epictetus:

Here you have philosophy’s starting point: We find that people cannot agree among themselves, and we go in search of the source of their disagreement. In time, we come to scorn and dismiss simple opinion, and look for a way to determine if an opinion is wrong or right. At last, we focus on finding a standard that we can invoke, just as the scale was invented to measure weights, and the carpenter’s rule devised to distinguish straight from crooked. (Discourses Bk II. 11, 13-14)

I began this philosophical project by following Epictetus’ lead: I observed a difference in opinion over the truth of empirical claims. I suggested that the source of disagreement is tied to whom we perceive to be trustworthy sources of knowledge, and that that perception is, in turn, often generated by our core ideological commitments.

So, how do we handle the inevitable disagreement? What standard can we invoke to separate the empirical beliefs we ought to take into account in policymaking and those we may permissibly ignore? As far back as Plato, philosophers and laypeople alike have recognized a legitimate hierarchy of epistemic communities and authorities. I take this as my starting place. A theory that doesn’t recognize legitimate epistemic hierarchies and social divisions of epistemic labor risks trivializing expertise in policymaking and selection. A political epistemology must also recognize the deeply social nature of knowledge and its intimate relationship to social position and trust. However, these social elements can be manipulated and hijacked. A theory of idealization should be able to address this. On these grounds, I argue that we may permissibly idealize away empirical beliefs that contradict those of a consensus of relevant experts in a mature natural science.

It might be charged that my proposed standard undermines political liberalism’s commitment to ensuring that each citizen be able to live according to her respective worldview. However, I argue that idealization on my model is consistent with political liberalism. Failing to idealize away obviously false conspiracist and denialist reasons attributes to agents policy positions that are at odds with their own deep normative commitments. Idealizing away empirical beliefs when they contradict a consensus of relevant experts in a mature natural science, by contrast, places an agent’s considered normative judgments ahead of false empirical beliefs that don’t meet a minimum epistemic threshold. Thus, agents are idealized such that the reasons attributed to them conform with their considered normative judgments rather than their deeply held but highly improbable empirical beliefs.

References

Aaronovitch, D. (2011). Voodoo histories: The role of the conspiracy theory in shaping modern history. Riverhead Books.

Barron, D., Morgan, K., Towell, T., Altemeyer, B., & Swami, V. (2014). Associations between schizotypy and belief in conspiracist ideation. Personality and Individual Differences, 70, 156–159. https://doi.org/10.1016/j.paid.2014.06.040

Benkler, Y., Faris, R., & Zuckerman, E. (2017, March 3). Study: Breitbart-led right-wing media ecosystem altered broader media agenda. Columbia Journalism Review. https://www.cjr.org/analysis/breitbart-media-trump-harvard-study.php

California Drought. (2014, October 2). The Mercury News. http://www.cadrought.com/watch-californias-drought-conditions-change-14-years/

Craw, B., Santus, R., & Bluestone, G. (2017, November 30). We’re going to keep updating this list until Trump stops endorsing conspiracy theories. Vice News. https://news.vice.com/en_ca/article/d3xxqq/were-going-to-keep-updating-this-list-until-trump-stops-endorsing-conspiracy-theories

Drought | USGS California Water Science Center. (n.d.). Ca.Water.Usgs.Gov. Retrieved October 25, 2020 from http://ca.water.usgs.gov/data/drought/

Epictetus, & Long, G. (2010). The discourses of Epictetus: With the Encheiridion and fragments. Nabu Press.

Freed, G. L., Clark, S. J., Butchart, A. T., Singer, D. C., & Davis, M. M. (2010). Parental vaccine safety concerns in 2009. Pediatrics, 125(4), 654–659. https://doi.org/10.1542/peds.2009-1962

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.

Gallup. (2015a, March 26). College-educated Republicans most skeptical of global warming. Gallup.com. https://news.gallup.com/poll/182159/college-educated-republicans-skeptical-global-warming.aspx

Gaus, G. F. (2011). The order of public reason: A theory of freedom and morality in a diverse and bounded world. Cambridge University Press.

Jønch-Clausen, K., & Kappel, K. (2015). Scientific facts and methods in public reason. Res Publica, 22(2), 117–133. https://doi.org/10.1007/s11158-015-9290-1

Kenyon, G. (2016). The man who studies the spread of ignorance. BBC Future. https://www.bbc.com/future/article/20160105-the-man-who-studies-the-spread-of-ignorance

Kranz, M., Haltiwanger, J., & Zeballos-Roig, J. (2019, October 9). 24 outlandish conspiracy theories Donald Trump has floated over the years. Business Insider. http://www.businessinsider.com/donald-trump-conspiracy-theories-2016-5

Lee, S. (2017, March 7). How anti-science forces thrive on Facebook. BuzzFeed News. https://www.buzzfeed.com/stephaniemlee/inside-the-internets-war-on-science

Lewis, B., & Marwick, A. E. (2017, May 15). Media manipulation and disinformation online. Data & Society. https://datasociety.net/output/media-manipulation-and-disinfo-online

List of conspiracy theories promoted by Donald Trump. (2020, September 10). Wikipedia. https://en.wikipedia.org/wiki/List_of_conspiracy_theories_promoted_by_Donald_Trump

Marietta, M. (2017, November 1). The problem of inconvenient expertise: II. Facts and universities. Niskanen Center. https://niskanencenter.org/blog/problem-inconvenient-expertise-ii-facts-universities/

Millgram, E. (2015). The great endarkenment: Philosophy for an age of hyperspecialization. Oxford University Press.

Mulligan, T. (2015). On the compatibility of epistocracy and public reason. Social Theory and Practice, 41(3), 458–476. https://doi.org/10.5840/soctheorpract201541324

Nguyen, C. T. (2018). Cognitive islands and runaway echo chambers: Problems for epistemic dependence on experts. Synthese. https://doi.org/10.1007/s11229-018-1692-0

Nguyen, C. T. (2018, April 9). Escape the echo chamber. Aeon. https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-chamber-as-it-is-to-flee-a-cult

Nguyen, C. T. (2020). Echo chambers and epistemic bubbles. Episteme, 17(2), 141–161. https://doi.org/10.1017/epi.2018.32

Oreskes, N., & Conway, E. M. (2012). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. Bloomsbury.

Parent, J., & Uscinski, J. (2014, September 2). Are Republican leaders more prone to conspiracy theories? Washington Post. https://www.washingtonpost.com/news/monkey-cage/wp/2014/09/02/are-republican-leaders-more-prone-to-conspiracy-theories/

Pew. (2015b, January 29). Public and scientists’ views on science and society. Pew Research Center Science & Society. http://www.pewinternet.org/2015/01/29/public-and-scientists-views-on-science-and-society/

Rawls, J. (2005). Political liberalism. Columbia University Press.

Rutjens, B. T., Sutton, R. M., & van der Lee, R. (2017). Not all skepticism is equal: Exploring the ideological antecedents of science acceptance and rejection. Personality and Social Psychology Bulletin, 44(3), 384–405. https://doi.org/10.1177/0146167217741314

Shapin, S. (1994). A social history of truth: Civility and science in seventeenth-century England. University of Chicago Press.

Silverman, C. (2016, November 16). This analysis shows how viral fake election news stories outperformed real news on Facebook. BuzzFeed News. https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook

Smallpage, S. M., Enders, A. M., & Uscinski, J. E. (2017). The partisan contours of conspiracy theory beliefs. Research & Politics, 4(4). https://doi.org/10.1177/2053168017746554

Sunstein, C. R., & Vermeule, A. (2009). Conspiracy theories: Causes and cures. Journal of Political Philosophy, 17(2), 202–227. https://doi.org/10.1111/j.1467-9760.2008.00325.x

Uscinski, J. E. (2014). Placing conspiratorial motives in context: The role of predispositions and threat, a comment on Bost and Prunier (2013). Psychological Reports, 115(2), 612–617. https://doi.org/10.2466/17.04.pr0.115c19z2

Uscinski, J. E., & Parent, J. M. (2014). American conspiracy theories. Oxford University Press.

Vallier, K. (2014). Liberal politics and public faith: Beyond separation. Taylor & Francis.

Vallier, K. (2018). Must politics be war?: Restoring our trust in the open society. Oxford University Press.

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559

Wood, M. J., Douglas, K. M., & Sutton, R. M. (2012). Dead and alive: Beliefs in contradictory conspiracy theories. Social Psychological and Personality Science, 3(6), 767–773. https://doi.org/10.1177/1948550611434786

CHAPTER 2: PUBLIC REASON, EMPIRICAL DISAGREEMENT, AND THE PROBLEM OF EXPERTS IN GAUS

Introduction

Much of the public reason literature focuses on problems that arise out of normative disagreement and implicitly assumes that empirical disagreement is insignificant or unproblematic. However, the current confluence of political polarization, information wars, disinformation campaigns, social sorting, epistemic echo chambers, partisan media, and (selective) low trust in experts and public institutions challenges this assumption. A growing literature finds that disagreement over basic empirical facts relevant to policy can be widespread, deep, and recalcitrant. In this paper, I argue that under these conditions, Gerald Gaus’ public reason view can lead to bad policymaking, since it permits empirical beliefs that contradict a consensus of relevant experts to act as defeaters to science-based policy.

In the next section, I outline Gaus’ account of public reason. Second, I defend the view that deep and genuine empirical disagreements exist. Third, I explain how his accessibility condition allows implausible empirical beliefs to act as defeaters to science-based policy. Fourth, I argue that if denialist reasons can act as defeaters, then Gaus’ model justifies implausible defeaters to science-based policy and thereby justifies bad policy.

Gausian Public Reason

A central goal of public reason liberalism is to give an account of how coercive moral and legal arrangements are justified for a normatively diverse public of free and equal persons. On Gaus’ public reason view, a justified coercive rule is one that conforms with the Public Justification Principle (PJP). Very broadly, the PJP holds that

a coercive law L is justified in a public P if and only if each member i of P has sufficient reason(s) Ri to endorse L.
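Rendered schematically (the notation is mine, not Gaus’), with J(L, P) abbreviating ‘coercive law L is justified in public P’ and Endorse(i, L, Ri) abbreviating ‘Ri is sufficient for member i to endorse L’:

J(L, P) ↔ ∀i ∈ P ∃Ri Endorse(i, L, Ri)

All of the substantive work is packed into ‘sufficient reason(s),’ which is why, as we will see, public reason liberals divide over how, and how much, to idealize the agents whose reasons these are.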

This, however, comes with a caveat. Public reason liberals recognize that real-world citizens are regularly prone to irrationality, false beliefs, information deficits, and viciousness. Coercive laws ought therefore to be justified to informationally and rationally idealized citizens. That is to say, public justification requires that we ask whether a citizen would endorse a coercive law L if we corrected their false beliefs, bad inferences, and informational deficits.

While public reason liberals broadly agree on the conditions for public justification, they disagree over the precise formulation of the PJP and of sufficient reason(s), the scope of permissible reasons, and the degree to which citizens ought to be idealized. In what follows, I describe Gaus’ view in The Order of Public Reason (2011).

Gaus grounds his model of public reason in the normal practice of social morality and the explanation of its authority. Social morality is “the set of social-moral rules that require or prohibit action, and so ground moral imperatives that we direct to each other to engage in, or refrain from, certain lines of conduct” (2011, p. 2). Our normal moral practices explain both the justification of making moral demands of others and of others making demands of us. We justifiably make demands of others when we reasonably suppose that those subject to our demands also endorse the socio-moral rule(s) or principles to which we appeal. Others justifiably make demands of us when they reasonably suppose that we endorse the rules or principles to which they appeal. Importantly, moral accountability and coercion are justified according to the subject’s own (qualified) evaluative standards. They are not alien.

Like social-moral rules, laws and policy must also be justified to their subjects due to the presumption against coercion (Gaus, 2011, p. 262). In a society of free and equal persons, unjustified coercive arrangements are indistinguishable from authoritarianism: “Authority must in some way be recognized by those over whom it is exercised; otherwise it is mere power, no matter how benevolently exercised” (p. 230). The problem with unjustified coercive arrangements is that they fail to recognize the mutual moral standing and autonomy of all members of the community and offer no assurance of social cooperation without oppressive coercion (p. 193). This concern for anti-authoritarian moral relations grounds Gaus’ conception of public justification.

Each person must have, from their own point of view, reason(s) to endorse the rule to which they are subject. To this end, Gaus offers a three-part formulation of the Basic PJP:

A moral imperative Φ in context C, based on rule L, is an authoritative requirement of social morality only if each normal moral agent has sufficient reason to (a) internalize rule L, (b) hold that L requires Φ-type acts in circumstances C, and (c) moral agents generally conform to L. (2011, p. 263)

Gaus’ PJP requires an account of what it means for an agent to have a sufficient reason to endorse some rule L. Identifying the reasons that agents have is not simply a matter of surveying popular opinion. A wealth of empirical literature finds that, on important policy matters, citizens frequently hold false beliefs, commit reasoning errors, and lack important information.20 Uncovering agents’ reasons requires idealizing them along the informational and rational dimensions. We ask: if we corrected this agent’s reasoning errors and informational deficits, would this agent have a reason to endorse rule L? Idealization uncovers agents’ considered reasons, thereby allowing policy to more accurately represent their beliefs and values.

20 See Ilya Somin, Democracy and Political Ignorance, for a good overview.

Within public reason, views diverge with respect to the appropriate degree and method of idealization. One prominent view, full idealization, maximally idealizes citizens in terms of reasoning and information. On this view, to test whether an agent has sufficient reason for some policy L, we ask whether that agent would have reason(s) to endorse L if they made only valid inferences, held no relevant false beliefs, and had all possible information on the matter in question.

Gaus argues that full idealization would render the reasons for a policy unintelligible to the real-world (imperfect) agent who is subject to them. What a perfectly rational and fully informed version of myself would want is inaccessible to me and differs importantly from what my real-world, fallible self would want. Consequently, full idealization fails to meet one of the primary motivations for public justification: ensuring that the justificatory reasons for a policy are intelligible to the diverse real-world agents subject to them. When justificatory reasons are unintelligible, they appear alien, and therefore authoritarian and detrimental to moral relations between free and equal persons. To ensure that the justificatory reasons for a law are intelligible to the agents subject to that law, Gaus proposes moderate idealization.

Moderate idealization is intended to support the values that ground non-authoritarian moral relations between free and equal persons. Recall that the Gausian model is embedded in the everyday practice of social morality. I can blame or hold someone responsible only for what I can reasonably claim they should have known or believed to be wrong. I cannot hold people culpable for what they could not have known or believed from their real-world epistemic position—despite what a fully rational and informed version of themselves would believe. Hence, moderate idealization limits the reasons an agent has to those that the agent could access with a “respectable amount of deliberation” and information-seeking effort.

Gaus proposes two requirements, or restrictions, for idealization. First, the accessibility requirement holds that justificatory reasons for a law must be accessible by sound deliberative route to the agent(s) subject to that law from within their own subjective set of beliefs and values. If an agent could not, with a “respectable amount of deliberation,” have accessed a reason to believe or do something from their subjective epistemic position, then that reason is inaccessible to them.

Second, mutual intelligibility requires that the justificatory reasons and deliberative route meet the standard of intersubjectivity. Citizens must be able to recognize the validity of other citizens’ deliberative pathways to their justificatory reasons as well as the relevance of the supporting beliefs. Importantly, mutual intelligibility doesn’t require that I accept the normative strength of other people’s reasons. It requires only that I recognize their relevance and the validity of the pathway from within the other agent’s set of beliefs, interests, and values.

Moderate idealization ensures that the reasons attributed to an agent are accessible to that agent. But this benefit does not come without costs. Moderate idealization also implies that some false beliefs cannot be idealized away; otherwise it is indistinguishable from full idealization. I devote the remainder of this paper to establishing that (a) there are deep and genuine disagreements over empirical facts that contradict a consensus of relevant experts, (b) moderate idealization cannot idealize away these denialist beliefs when agents are under conditions of high social sorting and persistent polarization and propaganda, and (c) if these denialist beliefs cannot be idealized away, they can act as defeaters to science-based policy and justify unacceptably bad policy.

Deep and Genuine Disagreement Over Empirical Facts

There are two views within the growing political science literature on empirical disagreement. The orthodox view holds that there are deep and genuine disagreements over policy-relevant empirical facts (Jerit & Barabas, 2012; Kull, 2004). The newer, emerging view is that such disagreements are an artifact of partisan cheerleading, not genuine disagreement (Bullock & Lenz, 2019; Bullock et al., 2015). If the latter view is correct, then Gaus easily idealizes away empirical disagreement. If the orthodox view is correct, then further investigation is required to evaluate whether the Gausian model can idealize away denialist beliefs. First, I describe the arguments and literature supporting the competing views. Second, I argue that each view is only partially correct: for some issues, disagreement is deep and genuine; for others, it is superficial. A lot depends on the nature of the issue.

The nascent literature on partisan disagreement over empirical facts appears to vindicate Gaus. Where empirical disagreement runs along partisan lines, survey responses are expressive of party support and are better understood as partisan “cheerleading” rather than as differences in beliefs about the facts (Bullock & Lenz, 2019; Bullock et al., 2015). Bullock, Gerber, Hill, and Huber (2015) tested this hypothesis.21 First, the experimenters established a baseline difference between Republicans and Democrats on a series of politically relevant empirical issues. In a second group, survey respondents were offered a monetary reward for each correct answer. The partisan gap decreased on average by 55-60% relative to baseline. A third group was offered monetary rewards both for giving the correct answer and for admitting they didn’t know. This decreased the partisan gap by an average of 80% from baseline.

21 Similar experiments have been conducted by Khanna and Sood (2018) and Huber and Yair (2018).

Several conclusions follow. Contra the orthodox view, the partisan gap on empirical matters is vastly overstated. Partisan differences are not expressions of genuine belief but rather expressions of partisan cheerleading. Where the incentives to cheerlead are greater than those to give the correct answer (and on surveys the latter are usually zero), partisans will cheerlead.22 Similarly, when partisans don’t know the correct answer, they will often engage in cheerleading, thereby further inflating the appearance of a gap between partisans. The partisan gap is fake news!

Although partisan cheerleading probably explains some partisan disagreement, I suggest that there are also genuine and deep disagreements over empirical facts between laypeople (especially partisans), even when there is a consensus among relevant experts. There is a distinction between kinds and sources of empirical disagreement, and these exist on a continuum. Here are a few examples from either end of the continuum. At one end we have superficial empirical disagreements that are most likely explained away by partisan cheerleading. Perhaps the best example is the survey in which Trump supporters were six times more likely (compared to Clinton voters and nonvoters) to say that the half-empty photo of Trump’s inauguration had more people.23 Other examples include judgments about the unemployment rate, whether the economy is growing, or the number of soldiers killed in Iraq.

22 There is another possible explanation. In the experiments where partisans are offered a monetary reward for correct answers, partisans may simply be giving the answer that they think the experimenter believes is true. As this is a nascent literature, no one has yet designed an experiment to make this distinction. 23 For more examples, see: Flynn 2016, p. 14; Ramsay et al. 2010, pp. 17-18; Shani 2006; Shapiro & Bloch-Elkon 2008.

Historical examples include lay disagreement over whether smoking cigarettes causes throat and lung cancer or whether CFCs were adversely affecting the ozone layer.24

Several features distinguish the two ends of the spectrum of disagreements. The first is complexity. At the superficial end, the issues are of low complexity. The matter at hand is almost always simple and, critically, doesn’t require sophisticated background knowledge or expertise to assess. Also, there is little to no perceived significant disagreement among experts and expert institutions over the answers. Every survey question used to support the cheerleading view was of low complexity.25

Also, almost every question in these studies that records differences in empirical judgments is about the past. That is, partisans were asked to recall information about previous administrations or the existing administration’s early actions.26 Recall promotes what Bullock and Lenz (2019) refer to as congenial inference: when a partisan has low confidence in an answer about an opposing administration, “he may canvas his memory for considerations relevant to the unemployment rate—but do so in a way that makes him especially likely to retrieve considerations that cast [the opposing administration] in a negative light.” Alternatively, he may rely on a pro-party heuristic to determine an answer that presents his party in a positive light (Bullock & Lenz, 2019).

Deep and genuine disagreements over empirical facts typically have different features. First, they often, but not always, involve contemporary (rather than past) issues. They also involve complex issues that demand substantial background knowledge, thereby requiring laypeople to defer to experts or expert institutions. In Justificatory Liberalism (1996), Gaus himself acknowledges the role of complexity in deep disagreement. In response to the idea that deep disagreement arises only within the normative domain, Gaus argues that complexity—not the normative/empirical divide—underpins the possibility of deep disagreement. To illustrate, he appeals to disagreements over the status of anthropogenic global warming (AGW):

As the complexity of the issue and the number of relevant variables increases, so does disagreement; hence the moderate disagreement about tomorrow’s weather forecast, and the deep disagreement about patterns of climate change. (Gaus, 1996, p. 156)

24 I discuss these examples in depth later in the paper. 25 See Table 1, Experiment 1, and Table 3, Experiment 2, in Bullock, Gerber, Hill, and Huber (2015) for a comprehensive list. 26 This is by design, since the original studies were intended to evaluate whether voters can accurately recall information about past events to make present voting judgments.

Second, deep and genuine disagreement over empirical facts occurs under what I will call ‘triple P conditions’—persistent polarization and propaganda—and where powerful vested interests oppose the policy consequences of the scientific consensus. Two important phenomena often emerge under triple P conditions. First, the public is presented with a compelling appearance of disagreement between experts. Second, portions of the citizenry occupy what Thi Nguyen (2018) describes as epistemic echo chambers where, once inside, “one might follow good epistemic practices and still be led further astray” (p. 5).

Importantly, in the experiments in which partisans are incentivized for accuracy, or for conceding ignorance, none of the survey questions cover empirical issues that meet these criteria for deep disagreement.27 However, complex politicized empirical issues are where partisan disagreement is greatest.28

The two kinds of disagreement, and the conditions that create and sustain them, exist on a continuum. I argue that Gausian moderate idealization cannot idealize away deep and genuine disagreements over empirical facts for issues on the ‘deep end’ of the continuum. This is a problem for Gaus because many of the issues where there is deep and genuine empirical disagreement are those where significant portions of laypeople hold beliefs contrary to the consensus of relevant experts. It follows, I will argue, that they will have defeaters for science-based policy that conforms with the consensus of relevant experts.

Moderate Idealization and the Problems of Identifying Experts

We don’t expect people to be experts in every domain. We can defer to experts. As Gaus

(2011) points out, neither are we required to replicate the reasoning of experts to acquire the expert’s reasons (p. 252). The basic error science denialists commit is that they defer to the wrong experts. Under triple P conditions, the public is presented an illusion of disagreement between experts. The central question of this paper is whether moderate idealization is sufficient for a denialist to overcome this appearance of disagreement between experts and defer to the correct experts. I tackle this problem in three steps. First, I present several epistemic challenges

27 See: Table 1 Experiment 1 and Table 3, Experiment 2 in Bullock, Gerber, Hill, Huber (2015) for a comprehensive list. The question that comes closest to asking about a matter that meets the conditions of deep and genuine disagreement (i.e., anthropogenic climate change) is worded as follows: “According to NASA, by how much did average global temperature (in degrees F) differ in 2010 from the average temperature between 1950 and 1980?” 28 For example, from 1994-2004 the partisan gap has more than doubled in response to whether “government regulation of business usually does more harm than good” and the partisan gap for the same period has tripled in response to whether “stricter environmental laws and regulations cost too many jobs and hurt the economy” (Pew Research Center. 2014. “Political Polarization in the American Public.”). Similarly, in a recent Gallup Poll (2018), only 35% of Republican agree that global warming is human-caused compared with 89% of Democrats and only 42% of Republicans versus 86% of Democrats believe that most scientists say global warming is occurring (Inc, 2018). 53

non-experts confront in the face of the appearance of disagreement between experts. Second, I

examine the various heuristics nonexperts use to identify experts. Third, I begin to explore how

propagandists manipulate the heuristics nonexperts use to decide between competing experts.

Social Epistemology and the Problem of Experts

Alvin Goldman (2001) spawned a literature that reveals the difficulties for non-experts to make justified judgments about the relative credibility of rival experts. In many respects,

Goldman’s inquiry mirrors Gaus’. Rather than examine the prospects for knowledge under ideal conditions where agents have “unlimited logical competence and no significant limits on their investigational resources,” Goldman focuses on “agents with stipulated epistemic constraints and ask[s] what they might attain while subject to those constraints” (p. 1).

Non-experts rely on a variety of heuristics to identify experts. Depending on the domain of knowledge, the clarity of the signal, and the information environment, these heuristics will

enjoy varying degrees of success. Although I won’t review them all, Goldman and others

consider several ways by which a non-expert could make justified judgments about the relative

credibility of rival experts.

First, the non-expert could observe a pair of experts debating each other. However,

experts in specialized fields commonly use technical language beyond the comprehension of

non-experts. Without understanding these terms, it is difficult to establish the experts’ relative

reliability. Similarly, non-expert observers might be unconsciously persuaded by enthusiasm or

confident speaking style rather than substance. Finally, non-experts frequently have insufficient

background knowledge to evaluate the truthfulness of the competing claims.


A second possibility is for the non-expert to evaluate relative credibility according to expert third-party assessments (i.e., “meta-experts”). Such assessments draw on credentials: academic degrees, professional accreditation, work experience, and credentialing institutions.

These methods presume non-experts are familiar with the Byzantine world of academic institutions and rankings. And even if they are, the problem is left unresolved when competing experts have similar credentials. This occurs often in politically relevant deep disagreements where each side is able to recruit at least some well-credentialed individuals from equally prestigious institutions. Furthermore, one’s degree-conferring institution is only a proxy for reliability.

A third possibility involves examining each expert’s interests and biases with respect to the issue. Preferential trust ought to be placed in the least biased expert. Again, these considerations will be difficult for a non-expert to identify, weigh, and evaluate. On politicized empirical issues, each side accuses the other of bias and conflicts of interest. Without further knowledge of the intricacies of public funding and industry involvement—and especially where industry funding is concealed—the lay person has difficulty weighing the relative merits of the competing claims. Particularly under conditions of high social sorting and polarization for complex politicized issues, citizens will have difficulty assessing the relative credibility of competing experts.

Even under the best of conditions, it can be quite difficult for cognitively bounded non-experts to identify the credible experts where there is the appearance of disagreeing experts. As I argue in the next section, these difficulties increase under conditions of high social and political polarization. Partisans disagree over the markers of prestige and over who is trustworthy. Can we moderately idealize agents such that they correctly identify the credible experts where there is,

from their point of view, a credible appearance of disagreement? In the next section, I will argue

that moderate idealization cannot idealize away denialist defeaters when the above conditions obtain in conjunction with a highly coordinated persistent propaganda and disinformation

campaign.

Heuristics for Identifying Experts

Moderate idealization cannot stray too far from actual practice since it is grounded in

everyday social practices. If moderate idealization cannot overcome these epistemic challenges,

then lay people non-culpably defer to the wrong experts. In this section, I explain how non-

experts actually identify experts and the epistemic problems they encounter when faced with the

appearance of disagreement between experts. This establishes a baseline for normal

performance.

Boyd and Richerson’s model of cultural learning explains, among other things, how and

when we defer to others. Their costly information hypothesis holds that

when accurate information is unavailable or too costly [for individuals to learn

something on their own], individuals may exploit information stored in the behavior and

experience of other members of their social group. (Henrich & McElreath, 2003, p. 129; my italics)

To place this in our informational context, rather than earn a PhD in every domain, it’s much

easier for individuals to investigate what the relevant experts in a domain believe. Under

conditions of high social sorting29 and polarization30, we can expect social groups to be defined

rather narrowly. The greater the distance between social groups, the less likely members from

one group are to defer to members or institutions from another. In the US, where the population is more socially polarized than at any other point in the nation’s history (Mason, 2018), it’s unlikely that members of one political identity will defer (on a politicized issue) to a member or institution belonging to the other group. For example, right-wing partisans are unlikely to defer to

climate scientists since they don’t see them as being members of their own group. This distrust

shows up in attitudes toward universities and university professors (Parker, 2019).31

In our information environment, scientific knowledge is costly for an individual to

acquire through trial and error. It follows from the costly information hypothesis that, on

scientific matters, citizens will engage in social learning and defer. Once they’ve decided to learn

from others, various contextual cues bias them toward learning from one subgroup or individual

rather than another. Adaptive information is embodied in both who holds ideas and how common

those ideas are. These in turn underpin the prestige bias and the frequency bias, respectively.

The prestige bias is actually a proxy for the success bias. When the ability to rank

individuals by outcome in a particular skill or activity is too difficult, individuals “use aggregate

indirect measures of success, such as wealth, health, or local standards” (Henrich & McElreath,

2003, p. 130; my italics). The fact that prestige is only an indirect measure of skill implies that it

29 Social sorting is the degree of homogeneity within groups in terms of social identities (Mason, 2018, p. 4). 30 Social polarization is the affective distance between social groups; it is composed of increased partisan bias, increased emotional reactivity, and increased activism (Mason, 2018, p. 17). The greater the polarization, the greater the negative affect between groups. 31 About 60% of Republicans think universities have a negative effect on the country (compared to 67% of Democrats who think they have a positive effect). And 19% of Republicans have no confidence at all in university professors to act in the public interest and 33% have not too much confidence (while 26% of Democrats say they have a great deal of confidence and 57% a fair amount) (Parker, 2019).

will often be unclear which of a revered individual’s many traits led to (perceived) success. For example, the fact that an individual has a large media presence (prestige) may lead many to believe that individual is an expert in a domain when in fact they are not.

The prestige bias tracks local standards; thus, in a socially sorted and polarized society, people likely will not defer to experts outside their own group. It follows that, under such conditions, many will likely defer to the wrong experts on politicized empirical issues.

The prevalence of the success and prestige biases creates pressure for success-biased learners to pay deference to those they perceive as highly skilled (Henrich & McElreath, 2003, p.

130). The spread of deference-type behaviors means that naive entrants “may take advantage of the existing patterns of deference by using the amounts and kinds of deference different models receive as cues for underlying skill” (p. 130). So, local (i.e., ingroup) standards of prestige combine with patterns of deference to give (noisy and fallible) signals to non-experts about who the experts are.

Once again, we can see how these patterns instantiate themselves and mislead in our current environment. On politicized issues, non-experts in a socially sorted and polarized society will defer to different experts and institutions based on ingroup prestige standards and patterns of deference. Few partisans, if any, will defer to individuals or institutions that their outgroup perceives as experts. On partisan issues where there is a consensus of genuine experts, one group can defer to the wrong individuals and institutions despite their subjective perceptions to the contrary.

The success and prestige biases do not solve every costly information problem. To whom should we defer when two purported experts on either side of an issue both work at prestigious universities or institutions and/or both have media presence? The remaining heuristic—the

conformity/frequency bias—is to copy the behaviors, beliefs, and strategies of the majority

(Henrich & McElreath, 2003, p. 130). Again, like all heuristics, it can be maladaptive,

depending on the environment. In a highly socially sorted and polarized society, the conformity bias will likely apply only to the behaviors and beliefs of one’s ingroup rather than those of the outgroup. If a majority in one group hold false or improbable beliefs, the frequency bias predicts that other members will defer to the majority.

Using a Bayesian mathematical model developed by Venkatesh Bala and Sanjeev Goyal,

Cailin O’Connor and James Weatherall (2019) investigated a similar issue. They modeled how scientific communities converge or polarize on beliefs. The purpose was to study how belief polarization occurs and misinformation spreads. The scientific community adheres to rigorous epistemic norms (relative to lay people), so if some variables can cause belief polarization and misinformation in these communities, then they are bound to occur in the general population.

An important finding aligns with what I suggested might occur in a sorted and polarized

society. The models found that when subgroups within a community distrust each other, they

appraise evidence differently depending on its source. That is, evidence from a trusted source can

move credence levels one way while the same evidence—but from a distrusted source—can send

credence levels in the other direction! This makes sense. If you believe that a lab or scientist is

corrupt, then it is reasonable to assume they’ve fabricated or manipulated their results and to

revise your credence levels in the other direction.

The end result is stable belief polarization within the community: One subgroup

converges on the correct view while the other converges on the false one. The greater the

mistrust, the larger the faction that settles on the false belief. This occurs because “those who are

skeptical of the better theory are precisely those who do not trust those who test it” (O’Connor &

Weatherall, 2019, p. 76). The group converging on false beliefs becomes insensitive to countervailing evidence.

Several important conclusions follow from the Bala-Goyal Bayesian model that incorporates social trust. First, “models of polarization […] strongly suggest that psychological biases […] are not necessary for polarization to result” (Ibid., p. 76). Second, while distrust can cause us to dismiss relevant evidence, too much trust can also lead us astray, “especially when agents in a community have strong incentives to convince you of a particular view” (O’Connor & Weatherall, 2019, p. 77).
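To make the mechanism concrete, the following is a minimal simulation sketch in the spirit of the trust-discounted updating O’Connor and Weatherall describe; it is not a reproduction of their model. The parameter values, the linear trust discount, and the specific update rule are my illustrative assumptions:

```python
import random
from math import comb

EPS = 0.05       # theory B succeeds with probability 0.5 + EPS; rival theory A with 0.5
TRIALS = 10      # experiments each B-favoring agent runs per round
MISTRUST = 2.0   # how sharply trust decays as credences diverge

def bayes(prior, k, n):
    """Posterior credence that B is the better theory, given k successes in n trials."""
    p_good, p_bad = 0.5 + EPS, 0.5 - EPS
    lik_good = comb(n, k) * p_good ** k * (1 - p_good) ** (n - k)
    lik_bad = comb(n, k) * p_bad ** k * (1 - p_bad) ** (n - k)
    return prior * lik_good / (prior * lik_good + (1 - prior) * lik_bad)

def step(credences):
    # Agents who already favor B run experiments and share (credence, outcome) pairs.
    evidence = [(p, sum(random.random() < 0.5 + EPS for _ in range(TRIALS)))
                for p in credences if p > 0.5]
    updated = []
    for p in credences:
        for source_p, k in evidence:
            # Trust falls off linearly with the distance between credences and can
            # go negative, so a distrusted source pushes credence the *other* way.
            trust = 1.0 - MISTRUST * abs(p - source_p)
            posterior = bayes(p, k, TRIALS)
            p = min(1.0, max(0.0, p + trust * (posterior - p)))
        updated.append(p)
    return updated

random.seed(1)
community = [random.random() for _ in range(20)]
for _ in range(200):
    community = step(community)
print(sorted(round(p, 2) for p in community))  # typically a split: some near 0, some near 1
```

Because the trust weight can go negative, evidence from a sufficiently distrusted source pushes credence away from the Bayesian posterior, reproducing in toy form the finding that the very same evidence can move different subgroups in opposite directions.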

These findings put pressure on moderate idealization, which permits only limited revision of agents’ belief-value sets. If a subgroup of experts with strong epistemic norms can converge on false beliefs, what can we reasonably expect from a subgroup of lay people when there is low trust between groups? When confronted with the credible appearance of expert disagreement, the non-expert faces significant additional epistemic problems. In the next section, I argue that Gaus cannot idealize away denialist defeaters when the above conditions obtain in conjunction with a highly coordinated, persistent propaganda and disinformation campaign.

Identifying Experts Under Conditions of Persistent Polarization and Propaganda

Under normal conditions, a non-expert faces steep epistemic challenges in appraising credibility when there’s the appearance of expert disagreement. These difficulties are amplified under conditions of high social sorting, polarization, and consequent low social and institutional trust. Here, I investigate the challenges involved in correctly identifying and appraising expertise under conditions of persistent polarization and propaganda. The question in the background—to

which I will return—is this: Are denialists under triple P conditions culpable for their mistaken beliefs about experts? To put it in Gausian terms, should they have known better?

In The Misinformation Age (2019), O’Connor and Weatherall investigate how beliefs spread through epistemic networks with propagandists intent on manipulating public belief. They apply the Bala-Goyal Bayesian model to theory choice within scientific communities, then add layers of stakeholders to the epistemic network such as policymakers, the general public, and propagandists. Given two alternative theories, and absent mistrust, communities of scientists usually converge on the better theory. To reach the public, evidence and ideas often flow from a scientific community through a community of nonscientists such as policymakers. To explore these interactions, O’Connor and Weatherall build policymakers into their model.

Under normal conditions, models show that policymakers’ opinions typically track the scientific consensus—even when policymakers are initially skeptical (i.e., low initial credence for the scientific consensus belief) (O’Connor & Weatherall, 2019, p. 103). However, things change when a propagandist is added to the model. Propagandists are agents who, like the scientists, can share results with policymakers. The difference, however, is that the propagandist is unconcerned with truth and therefore never updates their beliefs—regardless of the evidence they encounter. Their sole purpose is to persuade the policymaker (and the public) of their belief.

The tobacco industry’s infamous campaign serves as a useful example of how propagandists can affect public policy, policymakers, and public perception. By the early 1950s, strong evidence linked smoking to cancer, yet the Surgeon General did not issue any statements linking the two until 1964. It wasn’t until 1992 that selling cigarettes to minors was prohibited, and even later before widespread no-smoking laws were introduced (O’Connor & Weatherall, 2019, p.

100). How did the industry achieve this despite the scientific consensus?

The propagandist uses a variety of tactics to achieve their goals. The first is biased production, which involves directly funding or producing industry-sponsored research. Control over production allows the industry to decide which studies get funded and published and which get discarded.

By 1986, the tobacco industry-funded research consortium

spent more than $130 million on sponsored research, resulting in twenty-six hundred

published articles. […] This research was then shared, along with selected independent

research, in industry newsletters and pamphlets; in press releases sent to journalists,

politicians, and medical professionals; and even in testimony in Congress. (O’Connor &

Weatherall, 2019, p.104)

When biased production is added to the Bala-Goyal model, (naive) policymakers consistently update their beliefs in the wrong direction since the propagandist only shares research that supports their position. The models show that when propagandists infiltrate scientific networks, naive policymakers converge on the wrong belief despite the fact that legitimate scientists converge on the correct belief (O’Connor & Weatherall, 2019, p. 106). Worse still, the mistaken convergence of naive policymakers is stable so long as the propagandist remains active, despite the evidence the scientists produce (Ibid.).

A more subversive strategy involves selective sharing of independent research congenial to the propagandist’s position. This distances the propagandist from accusations of producing biased research and also makes the strategy harder to detect as propaganda. The industry-funded Tobacco Institute published the Orwellian monthly newsletter Tobacco and Health. Editors would cherry-pick, misleadingly frame, and quote portions of independent research in ways that would cast doubt on the scientific consensus (Oreskes & Conway, 2012, p. 32).

This strategy is particularly effective for complex issues. The more complex a scientific issue, the greater the absolute number of inconclusive independent studies, even if these represent a small minority of the total

literature. O’Connor and Weatherall (2019) found that in a wide variety of cases, a propagandist

using selective sharing alone (without biased production) causes naive policymakers to converge

on the false belief despite a scientific consensus to the contrary (pp. 112-113). The effect is even

stronger when biased production and selective sharing are combined.
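Selective sharing can be illustrated in the same toy framework (reusing bayes, EPS, and TRIALS from the earlier sketch; the propagandist’s output volume and the policymaker’s starting credence are arbitrary assumptions of mine):

```python
def selectively_shared(n_experiments=30):
    """An illustrative propagandist: run honest experiments on theory B but
    forward only the runs that happen to look unfavorable to B."""
    k_values = []
    for _ in range(n_experiments):
        k = sum(random.random() < 0.5 + EPS for _ in range(TRIALS))
        if k < TRIALS / 2:  # cherry-pick below-chance outcomes only
            k_values.append(k)
    return k_values

random.seed(2)
policymaker = 0.6  # starts out leaning (correctly) toward theory B
for _ in range(100):
    # One honest scientific result per round...
    honest_k = sum(random.random() < 0.5 + EPS for _ in range(TRIALS))
    policymaker = bayes(policymaker, honest_k, TRIALS)
    # ...swamped by the propagandist's curated stream of real but unrepresentative results.
    for k in selectively_shared():
        policymaker = bayes(policymaker, k, TRIALS)
print(round(policymaker, 4))  # driven toward 0.0 despite honest Bayesian updating
```

Even though every shared experiment is individually honest, the curated stream is statistically unrepresentative, so a policymaker who updates correctly on each report still converges on the false belief.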

So far, the Bala-Goyal model of the epistemic community includes policymakers who have some working knowledge of the science. However, when we extend these models to agents

with little background knowledge and little to no direct contact with the scientific community,

outcomes are even worse with respect to converging on the false belief. Selective sharing is

particularly effective in manipulating the public because the selectively quoted findings come from independent research, giving them the veneer of legitimacy.

The Accessibility Condition and Moderate Idealization Under Triple P Conditions

Given the plethora of challenges a non-expert can face in identifying which experts to

follow, I now turn to the central issue of this paper. Under persistent polarization and

propaganda, when non-experts defer to the wrong experts leading to false or improbable

empirical beliefs, can we permissibly idealize away these errors and attribute to them the correct judgments? I argue that for many citizens we cannot. It follows that significant

numbers of citizens will have defeaters to science-based policy—even when there is a consensus

of relevant experts.32

My argument requires quickly reviewing the specific constraints Gaus imposes on

idealization. First, idealization must be moderate rather than full. That is, whatever reasons are

attributed to Alf must be accessible, by sound deliberative route, to real-world Alf (Gaus, 2010,

p. 276). It follows that inferences must originate in real-world Alf’s current belief and value set.

Here, Gaus follows Gilbert Harman:

[y]ou start where you are, with your present beliefs and intentions. Rationality or

reasonableness then consists in trying to make improvements in your view. Your initial

beliefs and intentions have a privileged position in the sense that you begin with them

rather than with nothing at all or with a special privileged part of those beliefs and

intentions as data. (Quoted in Gaus, 2010. p. 241)

Idealization of and from an existing belief-value set is guided by coherence and conservatism:

We may permissibly revise an original belief-value set only in ways that improve local (i.e.,

beliefs and values relevant to a particular issue) coherence. The principle of conservatism grants

a privileged position to existing local beliefs. That is, if Alf’s original set contains a belief B, he

needs some positive reason to reject it (Gaus, 2010, p. 241).

In response to deeply entrenched science denialism, Gaus could argue that such citizens

are simply ignorant of the scientific facts and so we permissibly attribute to them the correct

32 There is no clear line between consensus and disagreement between experts; however, it does not follow that we cannot identify clear cases. Issues where there is a strong consensus of experts include anthropogenic climate change, the safety and efficacy of vaccines, the human safety of consuming GMOs, the veracity of the theory of evolution, the public health benefits and safety of water fluoridation, and the causal link between smoking and lung cancer, to name a few.

beliefs. This strategy elides two different kinds of ignorance. Ignorance is commonly conceived

of as a deficit of knowledge. However, science denialism creates a second kind of ignorance

which David Dunning (2014) describes as follows:

An ignorant mind is precisely not a spotless, empty vessel, but one that’s filled with the

clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies,

algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of

useful and accurate knowledge.

In other words, agents who have been subject to persistent and systematic misinformation and

disinformation aren’t empty vessels to be filled. They are already full of beliefs—and confident in them, thanks to the illusion of knowledge.33

This is the problem confronting Gaus. Any idealization must move toward improving

coherence of the existing local belief-value set. Attributing to the denialist the correct scientific

conclusions diminishes local coherence. A vast epistemic network filled with years of mutually

reinforcing misinformation and disinformation will not cohere with the consensus of experts.

Under triple P conditions, attributing the correct conclusions to a denialist requires a radical

restructuring of the agent’s belief-value set.

Gaus could reply that we can augment the denialist agents’ rational capacities; that is,

their ability to make sound inferences. However, if you start with false or improbable premises,

no amount of valid inferences can lead you to the correct conclusion. Under conditions of high

social sorting and polarization, many agents likely inhabit media ecosystems which function as

33 Perhaps the best examples are the anti-vaccine and anti-GMO movements. Adherents exhibit the “super” Dunning-Kruger effect, where their confidence in their understanding of the issue increases as their (actual) knowledge decreases. In the regular Dunning-Kruger effect, confidence is insensitive to actual knowledge but, unlike in the super Dunning-Kruger effect, it doesn’t rise with ignorance.

echo chambers. Once inside the echo chamber, the edifice of false knowledge grows as do the

credence levels for its interdependent beliefs. As Nguyen (2018) notes, once inside an echo

chamber “one might follow good epistemic practices and still be led further astray” (p. 4). In

other words, even if we attribute to agents near-perfect rationality, if they begin with the wrong

premises there’s no modus ponens to the correct conclusion. In Gaus’ terms, there’s no likely

sound deliberative route to the correct conclusion—at least none that respects coherence or

conservatism.

Gaus can attempt to evade these problems another way. Gaus (2010) points out the

“fundamental social dimension to our understanding of what reasons we have” which implies

that “expert conclusions show that nonexperts have a reason to do as they advise” (p. 252). Non-

experts are not required to replicate the reasoning of experts, only to follow their reasons. Denialists

have erred by deferring to the wrong experts. If we idealize agents as deferring to the correct

experts, we can attribute to them the correct empirical beliefs.

This strategy must overcome two obstacles. First, we must show that it’s possible to

attribute to a denialist in an echo chamber the correct experts without violating coherence and

conservatism. Second, we must show that attributing to a denialist both the genuine experts and

their reasons doesn’t violate coherence and conservatism. Since the latter test depends on the

former’s success, I begin by evaluating the first obstacle.

In his article on echo chambers and epistemic bubbles, C. Thi Nguyen (2018) writes that echo

chambers are only possible because of the fundamentally social dimension to our understanding of what reasons we have. Epistemic echo chambers prey on our epistemic interdependence by manipulating trust:

An echo chamber is an epistemic community which creates a significant disparity in

trust between members and non-members. This disparity is created by excluding non-

members through epistemic discrediting, while simultaneously amplifying insider

members’ epistemic credentials. Finally, echo chambers are such that general

agreement with some core set of beliefs is a pre-requisite for membership, where these

core beliefs include beliefs that support that disparity in trust. (p. 10; my italics)

Echo chambers manipulate the social conditions of knowledge by undercutting the very

foundation of social knowledge—trust. And, importantly, whom to trust is itself among an echo chamber member’s core beliefs.

The role of trust in echo chambers presents problems for Gausian idealization of science

denialists—most of whom likely inhabit echo chambers. A long-time member of an echo

chamber will have a complex and rich web of beliefs and values justifying their epistemic

deference to their chosen experts and for rejecting others. These are interconnected beliefs about

who is trustworthy and why. Attributing to the denialist the correct experts will violate coherence

with the beliefs about who is trustworthy and who is not. Gaus (2010) appears to agree:

Expert conclusions show that nonexperts have a reason to do as they advise if, after a

“respectable amount” of good reasoning, the non-experts, consulting their values and

beliefs (including beliefs about reliable experts), would arrive at a victorious reason to

follow the expert’s advice. (p. 253; my italics)

Swapping in the correct experts can only be described as a radical restructuring of a denialist’s local belief-value set on an issue, since they are being attributed beliefs about whom to trust that contradict their core beliefs on the issue. Additionally, once denialists have been idealized to defer to the correct experts, those experts’ reasons will also apply to them. But these reasons

contradict the denialists’ network of prior empirical beliefs on the matter. Virtually no local

beliefs on the issue remain. They will hold beliefs from people whom they do not trust for

reasons they cannot access. Such a radical restructuring falls outside of the constraints imposed

by moderate idealization.

Gaus could reply that the degree of idealization required to attribute the correct experts to

a denialist is lower than what I suppose. It’s really not that hard to detect a consensus of experts.

There are a few reasons to reject this. The first follows from the earlier sections of this paper

showing the genuine difficulties non-experts confront in identifying the correct experts where,

from their subjective point of view, there is the appearance of expert disagreement. Moderate

idealization cannot stray too far from how real-world people identify and defer to experts since

the whole point is to idealize our actual social practices.

Second, the tools non-experts have available to them are heuristics which well-funded,

sophisticated, and organized propagandists seize upon and manipulate. The pro-tobacco and anti-

AGW campaigns are waged by some of the world’s richest companies and largest PR firms.

Their tactics can fool even the epistemically prudent. To manipulate the prestige bias, interest

groups employ credentialed scientists, fund think tanks and institutes with Orwellian names,

place industry representatives on government panels, and fund friendly politicians.

For example, to create a counter-narrative to the growing scientific consensus34 on the

link between smoking and health problems, the tobacco industry created the Tobacco Industry

34 In 1953 most scientists were certain of a tobacco-cancer link. In 1957, the US Public Health Service concluded that smoking was the “principal etiological factor in the increased incidence of lung cancer” (Oreskes & Conway, 2012, p. 21). In 1959, leading researchers declared in the peer-reviewed literature that the smoking-cancer link was “beyond dispute” (Ibid., p. 21). Also in 1959, the American Cancer Society issued a formal statement that cigarette smoke is a major causal factor in lung cancer (Ibid.). In 1962, the Royal College of Physicians of London declared that cigarette smoking is a cause of cancer and bronchitis (Ibid.).

Committee for Public Information and The Council for Tobacco Research.35 For the former, they hired John Hill from the largest American public relations firm, Hill and Knowlton, who understood his job as ensuring that “scientific doubt must remain” (quoted in Oreskes &

Conway, 2012, p. 16).

The director of scientific research for the tobacco industry was Frederick Seitz, a

physicist, former head of the National Academy of Sciences, and science advisor to NATO. He

was joined by two other prominent scientists: James A. Shannon and Maclyn McCarty. The former

was a physician who had pioneered the use of the antimalarial drug Atabrine and headed the NIH

from 1955 to 1968. The latter was a biologist and Lasker Award36 winner (Oreskes & Conway,

2012, pp. 11-12). These scientists tailored the grant approval process at the Council for Tobacco

Research to promote research that could be used to create doubt and to find experts

who would affirm their view (p. 22). Every study funded by the tobacco industry produced more

credentialed expert witnesses and quotable experts for newspaper articles.

Despite the overwhelming scientific consensus by the late seventies, some laypeople still

doubted the science (as some still doubt AGW or the safety of vaccines). And why not? It’s easy to see why some Americans would be confused: the science on the carcinogenic effects of smoking was still being publicly debated even into the 90s. In the media, before Congressional

hearings, on government panels, in powerful think tanks, and in promotional literature the

tobacco industry inserted a long parade of well-credentialed scientists claiming that the science

was not settled.

35 It was originally named the Tobacco Industry Research Committee but adopted the new name shortly after to give the illusion of impartiality. 36 Considered to be the Nobel Prize of biology. He and two others won the award for having discovered, before Watson and Crick, that DNA carried genetic information in cells.

Every well-funded anti-science propaganda campaign uses this same strategy (in fact, many of the same people are involved), including on issues such as CFCs and the ozone layer, acid rain, and anthropogenic climate change. The strategy is all the more effective when an issue is highly complex—like AGW—but especially when it becomes partisan or bound up in a social identity. With partisanship come echo chambers, which not only act as information filters but breed distrust in the other team’s experts and epistemic institutions and boost trust in misleading experts. As Benkler, Faris, and Roberts (2018) put it, propagandists “take advantage of the

[partisan] media ecosystem architecture to insert their narratives, memes, and frames into the network directly and through the major propaganda outlets” (p. 225).

Anti-science propaganda successfully molded segments of public opinion and prevented science-based policymaking in the pre-internet era. The scope, reach, and tools available now make that era look innocuous. For example, a recent study found that on any given day at least one quarter of all tweets about the climate are produced by bots propagating denialist messages

(Milman, 2020). Armies of bots, “sock puppets,” microtargeting, and social media all manipulate the heuristics that underpin social learning. It’s even possible to rent out real people’s

Twitter and Facebook accounts to post content (Blumenthal, 2018). And it’s not just social media and fringe websites propagating disinformation. Large state actors such as Russia fund and run massive propaganda networks that influence and amplify views on a wide range of issues

(Benkler et al., 2018). It’s implausible that these new forces, in conjunction with the old, have only superficial effects on what subsections of a politically and socially fractured public believe about the world.

From within an echo chamber, it is difficult to identify a consensus of experts. Well-trained and well-funded propagandists manipulate public perception to make such a consensus hard to identify. To

create the illusion of expert disagreement, they fund and hire well-credentialed scientists. They

hire PR firms to amplify the prestige and media presence of congenial research and scientists. In

conjunction they launch smear campaigns against legitimate scientists, academic institutions, and

regulatory institutions to corrode social trust and sow doubt. Moderately idealizing denialists

within echo chambers such that they identify the correct experts will violate coherence, and doubly so when attributing to them the correct empirical beliefs. For many inside an echo

chamber under triple P conditions, only a radical restructuring of their belief-value sets provides a path out. However, since moderate idealization prohibits radically restructuring beliefs, it leaves denialists with reasons (i.e., defeaters) for opposing science-based policy. Science-based policy under these conditions would violate the PJP for many partisans trapped in echo chambers.

Denialist Defeaters Lead to Bad Policy

Before moving forward, it’s important to clarify terminology. By eligible set, I mean the

range of policy options for which an agent has sufficient reason for some policy issue, i.e., {L1,

L2, L3…}. That set includes optimal policy and sub-optimal policy for some particular agent for

some policy issue. For any policy issue, it’s unlikely that Li will be in each member of the

public’s optimal set. For some members of the public, Li will be in their sub-optimal set.

Nevertheless, so long as L doesn’t fall outside the eligible set (optimal or sub-optimal) for

anyone, it is publicly justified. If L falls outside the eligible set for some members of the public,

it is not publicly justified. I will argue that because Gaus’ model can’t idealize away some denialists’ epistemically bad reasons, some science-based policies will not merely be suboptimal

for them but will fall outside of their eligible set.
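In schematic form (the notation is mine, introduced only for clarity): where $O_i$ and $S_i$ are agent $i$'s optimal and sub-optimal-but-acceptable sets, the eligible set is $E_i = O_i \cup S_i$, and

\[ L \text{ is publicly justified} \iff \forall i \in P: \; L \in E_i . \]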

Here is my argument in its most basic form:

P1. Some science denialists' beliefs can't be moderately idealized away.

P2. If some science denialists' beliefs can't be moderately idealized away, then some groups will have defeaters to science-based policy, leading to unacceptably bad policy.

C. Therefore, some groups will have defeaters to science-based policy, resulting in unacceptably bad policy (epistemically and morally).37

So far, I have defended P1. Gaus could concede that P1 is correct and reply that it doesn't follow that such defeaters will lead to bad policy. Such an inference, he could argue, fails to distinguish between defeated policy and sub-optimal but still eligible policy. In order to defend

P2, I offer an example where a group’s recalcitrant improbable scientific beliefs (that contradict a consensus of relevant experts) function as a defeater to science-based policy and lead to policy that isn’t even sub-optimal for those who advocate science-based policy.

Everyone, anti-vaxxers included, subscribes to a general moral rule approximating “we ought to adopt policies that best protect the health and welfare of children.” Despite this normative agreement, many anti-vaxxers oppose mandatory vaccinations because they incorrectly believe that vaccines do just the opposite. Notice that the disagreement is not normative but empirical. In terms of Gaus’ PJP, the first condition is met (agreement over a rule

L) but the second isn’t (i.e., the conditions under which L applies).

Together, the expert consensus on vaccines and the moral rule to protect children’s health support a universal vaccine policy that grants medical exemptions to the immunocompromised and perhaps a small number of religious exemptions. This would be the optimal policy from the point of view of the consensus of relevant experts and members of the public who defer to them.

37 It is epistemically bad because it does not properly take into account the reasons of experts. It is morally bad in that it fails to address the relevant policy issue and leads to unacceptable preventable third-party harms for bad reasons.

However, if P1 is correct, some anti-vaxxers will have defeaters to this policy that cannot be idealized away. Their belief that vaccines cause net harm to children acts as a defeater to universal mandatory vaccines since, if it were indeed true, then mandatory immunizations would contradict the moral rule everyone endorses.

Gaus can reply that, although mandatory vaccinations are the scientifically optimal policy, allowing non-medical exemptions for anti-vaxxers could be a sub-optimal but eligible policy (for them). However, in order to enjoy the benefits of herd immunity, populations require

92-95% compliance rates—depending on the vaccine (Watson, 2018). So, this suboptimal policy is possible only where the total number of immunocompromised, those with vaccine allergies, and the philosophically/religiously exempted does not exceed 5-8% of the population (depending on the vaccine).

Approximately 3.6% of US adults are immunocompromised (Kahn, 2008). However, that’s probably an underestimate because it only includes those with HIV/AIDS, organ transplant recipients, and cancer patients. It excludes a sizable population that takes immunosuppressive drugs for other disorders such as rheumatoid arthritis and inflammatory bowel disease. About 1% of children are immunocompromised (Boyle, 2007). Any vaccine policy that achieves herd immunity may only allow non-medical exemptions for 0.5%-3.5% of the population (this is generous) depending on the vaccine. However, the current number of non-medical exemptions already dwarfs that number.
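The arithmetic behind that range can be reconstructed as follows (a back-of-the-envelope sketch; treating roughly 4.5% of the population as medically ineligible, combining the adult and child estimates above, is my simplifying assumption):

\[ \text{NME}_{\max} = (1 - c) - m, \qquad c \in [0.92,\, 0.95], \quad m \approx 0.045, \]
\[ (1 - 0.92) - 0.045 = 0.035, \qquad (1 - 0.95) - 0.045 = 0.005, \]

which yields a non-medical exemption budget of roughly 0.5% to 3.5%, depending on the vaccine.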

As an illustration, the immunization rate for US two-year-olds is 75%—far below the rate needed for herd immunity (Feikin et al., 2000). But even if the national or state-level rates were around 90%, exemptions would still be highly clustered (Omer et al., 2008). For example, school-based outbreaks have been associated with high exemption rates, and a recent survey of schools reported substantial intrastate variability in the implementation of exemptions (Feikin et al., 2000). In California there are 50 large clusters of school districts where the immunization rate for kindergarteners is below 50%, and about the same number of districts have 51-75% immunization rates (Xie & Willis, 2019). This is not surprising since groups with similar beliefs, values, and demographics tend to cluster together.

Allowing for non-medical exemptions and the inevitable clustering of such groups exposes the broader community and the immunocompromised to unacceptable preventable risks. For example, the incidence of pertussis was almost 50% higher in states that easily granted non-medical exemptions (Omer et al., 2006). Such outcomes fall outside the eligible set for citizens who want science-based policy.

The number of allowable non-medical exemptions is too small to accommodate all anti-vaxxers while preserving herd immunity and community safety. Given the risks and costs to the broader communities where anti-vaxxers live, members of the broader community will have decisive reason to reject an expansive non-medical exemption policy.

Herd immunity and broad-based non-medical exemptions are mutually exclusive. Broad-based non-medical exemptions are not in the eligible set for those following the scientific consensus. Mandatory immunization isn’t in the eligible set for many recalcitrant anti-vaxxers. If we can’t idealize away their belief that vaccines are a net harm, then, on Gaus’ view, they will have defeaters for any policy coercing them to vaccinate their children. And if we accommodate even the suboptimal policy of recalcitrant anti-vaxxers (i.e., broad-based non-medical exemptions), then we are left with unacceptably bad policy (epistemically and normatively).

The anti-vaccine case is a proof of principle. It shows that in some cases moderate idealization leads to justifying defeaters to science-based policy and to justifying bad policy, since it does not idealize away denialist beliefs. Policy in such cases is bad epistemically since it accords greater weight to reasons that are poorly justified than to reasons justified according to the most stringent epistemic principles and practices. Such policy is also bad morally because it can allow outcomes that are harmful and unjust.

Conclusion

The Gausian model correctly acknowledges our deep epistemic reliance on others but overlooks the ways this reliance can be manipulated by vested interests and bad actors. I have argued, first, that there are deep and genuine disagreements over policy-relevant empirical facts, some of which contradict a consensus of relevant experts; second, that Gausian moderate idealization cannot idealize away some science denialists’ beliefs; and third, that if denialist reasons cannot be idealized away, they can act as defeaters to science-based policy and lead to unacceptably bad policy. The conjunction of high social and political polarization, pervasive well-funded and well-organized propaganda campaigns, selective low trust in experts and institutions, and echo chambers conspires to confuse and mislead citizens such that they become deeply committed to denialist views. Such citizens, on Gaus’ model, will have defeaters to policy options informed by a consensus of relevant experts.

All this points to what I take to be the fundamental issue: a political epistemology must acknowledge a legitimate hierarchy of knowledge communities in the empirical domain and appropriately recognize the social division of labor between expert communities and nonexperts.

A political epistemology cannot (without serious consequences) be neutral on these matters.

Perhaps this problem points toward a deeper problem with the PJP itself. As a public reason liberal, Gaus is committed to the premise that laws are legitimate only if they are publicly justified. For Gaus, the PJP is itself grounded in non-authoritarian relations between free and

equal people. His political theory is independent of “the (objective) reasons there are” and concerns itself only with “the reasons people have” (Wendt, 2019). It follows that Gaus also endorses the claim that moral considerations beyond public justifiability are irrelevant to the legitimacy of laws (Wendt, 2019). That is to say, once a law has been determined to be publicly justified, no further moral considerations matter for determining its legitimacy.

But sometimes we do care about values beyond non-authoritarian relations. Sometimes consequences, when they are of a certain magnitude, matter more. I do not know precisely where to draw the line between the two values, but it doesn’t follow that we can never identify reasonable cases. Certainly, the future habitability of the planet and the easy prevention of children’s deaths seem to qualify as two such cases.


References

Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation,

disinformation, and radicalization in American politics. Oxford University Press.

Blumenthal, P. (2018, March 14). How a Twitter fight over Bernie Sanders revealed a network of

fake accounts. Huffington Post.

https://m.huffpost.com/us/entry/us_5aa2f548e4b07047bec68023/amp?__twitter_impression=

true&fbclid=IwAR0QmgncM5gJVQbnywwbOpM-

Dqg1yKT9kVyVolV9_FyhkArTcGmMtrzZ-kU

Boyle, J. M., & Buckley, R. H. (2007). Population prevalence of diagnosed primary

immunodeficiency diseases in the United States. Journal of Clinical Immunology, 27(5),

497–502. https://doi.org/10.1007/s10875-007-9103-1

Bullock, J. G., Gerber, A. S., Hill, S. J., & Huber, G. A. (2015). Partisan bias in factual beliefs

about politics. Quarterly Journal of Political Science, 10(4), 519–578.

https://doi.org/10.1561/100.00014074

Bullock, J. G., & Lenz, G. (2019). Partisan bias in surveys. Annual Review of Political

Science, 22(1), 325–342. https://doi.org/10.1146/annurev-polisci-051117-050904

Dunning, D. (2014, October 27). We are all confident idiots. Pacific Standard.

https://psmag.com/social-justice/confident-idiots-92793

Feikin, D. R. (2000). Individual and community risks of measles and pertussis associated with

personal exemptions to immunization. JAMA, 284(24), 3145.

https://doi.org/10.1001/jama.284.24.3145

Gaus, G. (2010). The order of public reason: A theory of freedom and morality in a diverse and

bounded world. Cambridge University Press.

Goldman, A. I. (2001). Experts: Which ones should you trust? Philosophy and

Phenomenological Research, 63(1), 85–110. https://doi.org/10.1111/j.1933-

1592.2001.tb00093.x

Goldman, A. I., & Whitcomb, D. (2011). Social epistemology: Essential readings. Oxford

University Press.

Henrich, J., & McElreath, R. (2003). The evolution of cultural evolution. Evolutionary

Anthropology: Issues, News, and Reviews, 12(3), 123–135.

https://doi.org/10.1002/evan.10110

Somin, I. (2016). Democracy and political ignorance: Why smaller government is smarter. Stanford Law Books.

Gallup, Inc. (2018, March 28). Global warming concern steady despite some partisan shifts. Gallup.com. https://news.gallup.com/poll/231530/global-warming-concern-steady-despite-partisan-shifts.aspx

Jerit, J., & Barabas, J. (2012). Partisan perceptual bias and the information environment. The

Journal of Politics, 74(3), 672–684. https://doi.org/10.1017/s0022381612000187

Kahn, L. H. (2008, January 6). The growing number of immunocompromised. Bulletin of the

Atomic Scientists. https://thebulletin.org/2008/01/the-growing-number-of-

immunocompromised/

Kull, S., Ramsay, C., Subias, S., Weber, S., & Lewis, E. (2004, October). The separate realities of Bush and Kerry supporters. WorldPublicOpinion.org. http://worldpublicopinion.net/the-separate-realities-of-bush-and-kerry-supporters/

Mason, L. (2018). Uncivil agreement: How politics became our identity. University of Chicago Press.

Milman, O. (2020, February 21). Revealed: quarter of all tweets about climate crisis produced by

bots. The Guardian. https://www.theguardian.com/technology/2020/feb/21/climate-tweets-

twitter-bots-

analysis?fbclid=IwAR2PN1FikCPSZ019ZuwrHuvF45j73N9eqRNTUSz13o0hQ_J-

lkKRrwYGrdk

Nguyen, C. T. (2018). Echo chambers and epistemic bubbles. Episteme, 1–21.

https://doi.org/10.1017/epi.2018.32

O’Connor, C., & Weatherall, J. O. (2019). The misinformation age: How false beliefs spread. Yale University Press.

Omer, S. B., Enger, K. S., Moulton, L. H., Halsey, N. A., Stokley, S., & Salmon, D. A. (2008).

Geographic clustering of nonmedical exemptions to school immunization requirements and

associations with geographic clustering of pertussis. American Journal of

Epidemiology, 168(12), 1389–1396. https://doi.org/10.1093/aje/kwn263

Omer, S. B., Pan, W. K. Y., Halsey, N. A., Stokley, S., Moulton, L. H., Navar, A. M., Pierce, M.,

& Salmon, D. A. (2006). Nonmedical exemptions to school immunization

requirements. JAMA, 296(14), 1757. https://doi.org/10.1001/jama.296.14.1757

Oreskes, N., & Conway, E. M. (2012). Merchants of doubt: How a handful of scientists obscured

the truth on issues from tobacco smoke to global warming. Bloomsbury.

Parker, K. (2019, August 19). The growing partisan divide in views of higher education. Pew Research Center’s Social & Demographic Trends Project. https://www.pewsocialtrends.org/essay/the-growing-partisan-divide-in-views-of-higher-education/

Watson, S. (2018, November 30). What’s herd immunity, and how does it protect us? WebMD. https://www.webmd.com/vaccines/news/20181130/what-herd-immunity-and-how-does-it-protect-us

Wendt, F. (2019). Rescuing public justification from public reason liberalism. In D. Sobel, P.

Vallentyne, & S. Wall (Eds.), Oxford studies in political philosophy, Volume 5 (pp. 39–64).

Oxford University Press.

Institute of Medicine (US) Committee on the Immunization Finance Dissemination Workshops. (2003). State and local immunization issues in California. National Academies Press (US). https://www.ncbi.nlm.nih.gov/books/NBK221347/

Xie, Y., & Willis, D. J. (2019, August 7). Interactive map: California schools with low vaccination rates. EdSource. https://edsource.org/2019/interactive-map-california-schools-with-low-vaccination-rates/615883

CHAPTER 3: VACCINE POLICY, MODERATE IDEALIZATION, AND PUBLIC

JUSTIFICATION

Introduction

Pervasive propaganda, polarization, social media, and disinformation have generated a

cornucopia of practical political problems. But do these conditions generate problems for

political theory? And if so, in what ways must political theory adapt? These are the central

questions motivating this paper. In answering those questions, this paper simultaneously seeks to

answer a narrower question. Given the rise of vicious epistemic environments surrounding

immunization policy, what policy could all suitably idealized citizens endorse? That is to say,

what immunization policy would be politically legitimate?

Public reason liberalism is a family of views that seeks to give an account of political

legitimacy whereby politically legitimate policy respects diverse worldviews among morally free and equal people. This idea is represented in the public justification principle (PJP), which

holds that a coercive law L is justified in a public P if and only if each member i of P has

sufficient reason(s) Ri to endorse L. Since growing anti-vaccine sentiment and the prospect of a coronavirus vaccine has further enflamed public disagreement over vaccine policy, it’s worth investigating which vaccine policies can satisfy the PJP and whether there would be permissible exemptions.

In the public reason literature, religious exemptions and normative disagreement have received special attention; however, little attention has been paid to non-religious exemptions

and empirical disagreement. Unlike religious exemptions, requests for non-religious exemptions

to vaccines typically arise from mistaken empirical beliefs about their risks and benefits relative

to the diseases vaccines protect against. Citizens typically acquire these beliefs by inhabiting 81

epistemically vicious environments. Attributing to them beliefs that conform with the scientific consensus would in some cases require radically reconstructing their interconnected beliefs about vaccine safety and efficacy and trustworthy information sources.

Different accounts of public reason provide different resources for addressing disagreement in such cases. Full idealization accounts ask what reasons agents would have if they were fully rational and fully informed. On this view, much of the empirical disagreement is plausibly idealized away since fully rational and informed agents would not hold empirical beliefs that contradict a consensus of relevant experts.

Moderate idealizers, however, do not attribute full rationality and information to agents.

Agents’ beliefs are not permissibly idealized beyond what, with a reasonable amount of effort, they could arrive at by sound deliberative route from their existing belief-value sets on that issue.

It follows that, unlike fully idealized agents, moderately idealized agents will hold some false empirical beliefs—especially on issues where epistemically vicious environments are pervasive.

Taking Gaus as a representative of moderately idealizing public reason views, I argue that this model leads to publicly justifying defeaters to science-based policies. More specifically, this leads his view to justify non-medical exemptions (NMEs) at the expense of herd immunity for poorly justified reasons.38

38 Empirical beliefs on their own do not, of course, imply particular policies. Nevertheless, they do imply constraints on policy options within the context of certain basic normative assumptions. For example, believing that vaccines are a safe and effective means of protecting children’s health and welfare implies that policy will favor immunizing children if we assume the normative commitment to protecting children’s health and welfare. Believing the contrary empirical claim, i.e., that vaccines are not safe and effective, implies a vastly different set of policies given the same basic normative commitments. Hence, in this paper I focus on disagreement over empirical beliefs that imply vastly different sets of policies despite widespread normative agreement.


The Gausian view is too permissive with respect to the empirical beliefs that survive the idealization process because the model fails to adequately manage the ways in which people can become trapped in epistemically vicious environments. This is unsurprising. When Gaus developed the model, such cases were mostly the stuff of philosophical fancy. No one could have anticipated that entire subpopulations would become entrapped in them. Halcyon days!

Under conditions of persistent polarization and propaganda (“triple P conditions”), agents will hold empirical beliefs that survive moderate idealization even though they contradict a consensus of relevant scientific experts. This leads to a variety of pernicious outcomes. First, groups of people who hold these beliefs will have defeaters to policies grounded in the best science. For example, a deeply held belief that the measles, mumps, rubella (MMR) vaccine causes autism provides a strong reason against a policy requiring children to be vaccinated.

Second, this will lead groups like vaccine skeptics to be idealized such that they reject policies that instantiate their own deeply held normative commitments. The vaccine skeptic is every bit as committed to promoting children’s health and welfare as are proponents of vaccines, but

Gausian idealization frustrates, rather than facilitates, this end. Perhaps even worse, in a world teeming with normative disagreement, Gausian idealization implies that policy will undermine a rare and precious gift: a normative consensus among the public that the state ought to protect children’s health and welfare.

The model’s permissiveness also disappoints our pre-theoretical intuitions and desiderata concerning an account of political legitimacy: (a) policy ought to be informed by, and not contradict, expert reasons when there is an expert consensus on empirical matters, and (b) the state should not legitimize poorly justified reasons by promulgating laws that respect them.

Finally, (c) the model’s permissiveness fails to provide a mechanism to resolve disputes over negative externalities, which are usually empirical in nature.

By raising the epistemic bar on which sorts of empirical beliefs can count as defeaters, we protect public justification from being hijacked by epistemically compromised reasons and resolve the problems created by epistemic permissiveness. I therefore argue that the exclusion principle is a required amendment to moderate idealization: we permissibly exclude from the domain of public reason deeply held empirical beliefs when they contradict a consensus of relevant experts in a mature science.

This principle, I argue, better supports our pre-theoretical intuitions about political legitimacy and, in the case of immunization policy, avoids publicly justifying non-medical exemptions (NMEs) at the expense of herd immunity and children’s welfare.

This essay proceeds as follows: First, I describe the main issues and positions surrounding NMEs and MMR vaccine policy. Second, I explain why Gaus’ model of moderate idealization and public justification must lead him to support NMEs and why this is a normatively and epistemically bad policy outcome. Third, I argue that Gaus’ model conflicts with pre-theoretical intuitions regarding political legitimacy and fails to provide a framework for resolving empirical disagreements over negative externalities. Finally, I argue that the exclusion principle justifies the correct vaccine policy, rescues moderate idealization from the consequences of pervasive vicious epistemic environments, and can itself be justified from within the public justification principle (PJP) and moderate idealization. More generally, even under triple P conditions, the exclusion principle generates policy that aligns with our pre-theoretical intuitions about political legitimacy and provides resources for resolving empirical disputes over negative externalities.


Non-Medical Exemptions to Vaccines and Policy Responses

In this section, I outline the history, threat, and nature of non-medical exemptions, then describe the range of possible policy responses.

History

Before vaccines, public health was perpetually threatened by what are today vaccine preventable diseases (VPDs). Almost everyone in the U.S. caught measles, and hundreds died from it each year. Today most doctors have never seen a case of measles (Vaccines: Vac-Gen/What Would Happen If We Stopped Vaccinations, 2019). Polio was also a major concern. For example, in the 1952 US polio outbreak, 60,000 children were infected, thousands were paralyzed, and 3,000 died (Polio, 2019). The polio vaccine was invented in 1955, and by the 1970s polio had been eradicated from the US (Polio, 2019). In 1921, before the vaccine, more than 15,000 Americans died from diphtheria. Today, with widespread vaccinations, only two cases of diphtheria were reported to the CDC between 2004 and 2014 (Vaccines: Vac-Gen/What Would Happen If We Stopped Vaccinations, 2019). In 1964-65, an epidemic of rubella (German measles) infected 12.5 million Americans, killed 2,000 infants, and caused 11,000 miscarriages. Since 2012, only 15 cases of rubella have been reported to the CDC (Vaccines: Vac-Gen/What Would Happen If We Stopped Vaccinations, 2019).

Vaccines are the single most cost-effective and successful public health measure to prevent disease, yet public confidence in vaccine safety and effectiveness has been falling since the late 1990s (Committee, 2015). Even today, in the midst of a deadly pandemic, many Americans express hesitancy toward a vaccine that doesn’t even yet exist. In a recent survey, Americans were asked,39 “If a vaccine that protected you from the coronavirus were available for free to everyone who wanted it, would you definitely, probably, probably not, or definitely not get it?” Only 43% replied “definitely,” 28% replied “probably,” 12% said “probably not,” and 15% said “definitely not” (Goldstein & Clement, 2020).40 This significant and growing skepticism toward vaccines is both puzzling and disconcerting given what we know about the pre-vaccine world--to say nothing of our current situation.

Growing vaccine hesitancy and skepticism are a serious public health risk and policy problem. To illustrate, in this paper I focus on the MMR vaccine since it is a frequent target of anti-vaccine rhetoric and has a robust literature. The MMR vaccine protects against measles, mumps, and rubella. Prior to the invention and widespread adoption of the vaccine, these diseases wreaked havoc on public health (Vaccines: Vac-Gen/What Would Happen If We Stopped Vaccinations, 2019). With the introduction of the vaccine, these tragic diseases virtually disappeared. Almost.

In 1998, the Lancet published a study by Andrew Wakefield purportedly showing that the MMR vaccine posed serious health risks. In 2010, after public controversy, the study was found to be fraudulent and was retracted (Eggertson, 2010). Despite the fraud and retraction, the damage was done. Public trust in vaccines collapsed in many communities, and the modern anti-vaccine movement was born. States began passing legislation allowing for NMEs, which allow children to forgo standard vaccinations and still attend public and private schools.

39 Numbers were similar for polls in the UK and in Canada.
40 2% had no opinion.

Religious exemptions and NMEs are distinct categories, and some states allow both. Currently, forty-five states have religious exemptions to vaccines (States With Religious and Philosophical Exemptions From School Immunization Requirements, 2020). Eighteen of them also offer non-medical exemptions for non-religious reasons (Olive et al., 2018). In this paper, I set aside issues concerning religious exemptions and focus exclusively on NMEs. It’s important to emphasize that NME rates do not include religious exemption rates.

While it’s possible that some requests for philosophical exemptions are genuinely motivated by normative or philosophical considerations, the evidence suggests that empirical beliefs about vaccine safety motivate the vast majority of NMEs (Wang et al., 2014; Committee, 2015).

In a recent national survey, for example, 10% of Republicans and 12% of Independents believed that vaccines cause autism, and 53% and 44%, respectively, were unsure (Inc, 2020).41 The empirical reasons hypothesis is further supported by the finding that vaccine skeptics tend to systematically misjudge (i.e., exaggerate) probabilities for vaccine-related risks (LaCour & Davis, 2020).

The Threat

In 2010 there were 63 measles cases reported in the US (Measles, 2019). In 2019, 1,282 individual cases of measles were confirmed in 31 states. Of these cases, 128 were hospitalized and 61 reported having complications, including pneumonia and encephalitis (Measles, 2019).

Perhaps most prominently, in 2014, due to rising NME rates in California, a measles outbreak occurred at Disneyland in Anaheim. Despite some vocal opposition, state legislators responded by passing laws to close easily acquired NMEs. However, in 18 other states, attempts to close NMEs have either been blocked or NMEs have been expanded—primarily by Republican legislators (Allen, 2019). Since 2009, the proportion of school-aged children with NMEs has risen continuously in 12 of those states (Olive et al., 2018).

41 This is compared to 5% (yes) and 40% (unsure) for Democrats. The survey question was, “From what you have read or heard, do you personally think certain vaccines do -- or do not -- cause autism in children, or are you unsure?”

Figure 1. Increasing Nationwide Trend in Kindergarten NME Rates from 2009 to 2017. The asterisk (*) indicates states demonstrating an upward trend of kindergarteners with NMEs. NME, nonmedical exemption. (Olive et al., 2018)

Increasing exemption rates are a problem because they undermine herd immunity. Immunizations protect not only the immunized; they also prevent community spread (through herd immunity) to children still too young to be vaccinated and those who are medically unable to be vaccinated--such as the immunocompromised and those with vaccine allergies. There are, of course, also the risks to the children with NMEs themselves. For example, children with NMEs for the MMR vaccine are 30 times more likely to contract measles than vaccinated children (Salmon et al., 1999). In order for vaccines to protect public health and the medically exempt, immunization rates for measles can be no less than 93-95% (Watson, 2018). It follows that the aggregate number of exemptions to the MMR vaccine for all reasons—including medical, religious, and non-medical exemptions—cannot exceed 5-7% of the population.

Counting up existing exemption rates suggests herd immunity is already compromised. Medical exemptions make up about 4.4-5.0% of the population. Immunocompromised adults account for around 3.6% and include those with HIV/AIDS, organ transplants, and cancer (Kahn, 2008). However, the total percentage of immunocompromised people is probably greater since this number excludes other conditions which require immunosuppressive treatments (Kahn, 2008). Furthermore, about 1% of children are immunocompromised (Boyle, 2007), and a very small number of people with severe (but not moderate) egg, gelatin, or yeast allergies generally cannot be vaccinated (Chung, 2014). Finally, medical exemptions also include all children still too young to be vaccinated.

Herd immunity requires capping total exemptions at 5-7% of the population. Given current medical exemption rates, it follows that NMEs and religious exemptions combined cannot exceed around 2% without endangering herd immunity and those with genuine medical exemptions. However, as Fig. 1 shows, NMEs alone, in many states, currently exceed this amount (Olive et al., 2018). Furthermore, states with both NMEs and religious exemptions have, on average, 2.5 times the rate of unvaccinated children compared to states with only one kind of exemption (Olive et al., 2018).

The problem of under-vaccination is actually even worse. Children with NMEs are not evenly distributed throughout the population. They cluster together since they are often members of the same social groups. For example, among the top ten counties ranked according to the rate of kindergarten-aged children with NMEs, rates range from 26.67% down to 14.55% (Olive et al., 2018)—and this in addition to religious exemptions. A single unvaccinated person in an unvaccinated population can infect 15-25 people (Pierik, 2020). These clusters are essentially outbreaks waiting to happen. For vaccine policy to achieve herd immunity and protect children and the medically vulnerable, it must drastically reduce NME rates from where they currently stand.
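This transmission figure is enough to recover the 93-95% threshold via the standard herd-immunity formula (a textbook approximation I supply here as a check; neither Watson nor Pierik presents the derivation this way). If one infected person infects R_0 others in a fully susceptible population, the critical vaccination coverage V_c is:

```latex
\[
V_c = 1 - \frac{1}{R_0}, \qquad
R_0 = 15 \;\Rightarrow\; V_c \approx 93.3\%, \qquad
R_0 = 25 \;\Rightarrow\; V_c = 96\%.
\]
```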

Policy Alternatives

There are four main policy approaches to vaccine exemptions: eliminationism, religious priority, inconvenience, and conditional. Furthermore, there are two main categories of coercion that states typically employ to support immunization: mandatory immunization and compulsory immunization. I describe the categories of coercion first.

A mandatory vaccine policy attempts to raise vaccine compliance by limiting access to public goods and services for those who do not vaccinate their children. Common examples include requiring immunization in order for children to attend public schools and daycares. NMEs are not part of a mandatory vaccine policy—they are exemptions to it. Unlike mandatory programs, compulsory vaccine programs impose criminal penalties for non-compliance. Compulsory policy is rare but has been used. For example, in 1990-91 there was a measles outbreak in a Philadelphia-based fundamentalist church whose doctrine required that prayer be the only medical intervention. After nine children died from measles, and measles spread to 486 children in the broader community, a court ordered compulsory vaccinations for the children (WPVI, 2015).

The eliminationist position is straightforward. It holds that, with the exception of genuine medical exemptions, both non-medical and religious exemptions should be eliminated. The American Medical Association (AMA), the American Academy of Family Physicians (AAFP), and the American College of Physicians (ACP) all officially endorse eliminationism (AMA, 2015; AAFP, 2015; ACP, 2015). California, Mississippi, and West Virginia are all eliminationist (States with Religious and Philosophical Exemptions from School Immunization Requirements, 2020). Most EU nations are also eliminationist.

The religious priority position grants special privilege to religious exemptions. Secular reasons—i.e., “philosophical” or “conscientious” reasons—for exemptions should be eliminated while religion-based exemptions should remain. This view holds that religious reasons have special constitutional status (Navin & Largen, 2017). In practice, this is the dominant policy in the US: of the 45 US states that allow religious exemptions, 28 prohibit NMEs (States with Religious and Philosophical Exemptions from School Immunization Requirements, 2020).

The inconvenience view, on the other hand, treats both secular and religious reasons as equally legitimate. On this view, religious and secular exemptions should be available but inconvenient for parents to obtain for their children (Navin & Largen, 2017). Inconvenience comes in degrees. Parents might be required to attend an in-person information session before they may obtain the religious exemption or NME request form. A stronger approach would see the state implement a mandatory policy limiting access to public goods and services for those who do not vaccinate their children.

The conditional view advanced by Pierik (2020) incorporates elements of each of the above views, but the kind and degree of coercion is conditioned on the broader community’s immunization rate. This view is grounded in the state’s basic obligation to protect the health and welfare of children. Both kinds of exemptions can remain open in communities that have childhood immunization rates substantially above what is required for herd immunity. That is to say, there’s no need or justification to coerce where the state’s objective is already met.

However, as immunization rates begin to fall, the state may justifiably apply the amount of coercion commensurate with achieving the legitimate end of protecting children’s health and welfare. When immunization rates only begin to jeopardize herd immunity, the state may implement immunization mandates whereby certain public goods are withheld from those who do not comply. And when immunization rates fall such that many children’s wellbeing is seriously jeopardized, the state permissibly employs compulsory vaccinations whereby refusal is treated as a crime.
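The escalation logic of the conditional view can be summarized in a short sketch (my own illustration; Pierik specifies no numeric cut-offs, so the threshold and buffer values below are hypothetical placeholders):

```python
def conditional_policy(coverage: float,
                       herd_threshold: float = 0.95,
                       buffer: float = 0.02) -> str:
    """Map a community's immunization coverage to a level of state coercion,
    in the spirit of Pierik's (2020) conditional view."""
    if coverage >= herd_threshold + buffer:
        return "all exemptions open"        # herd immunity comfortably secured
    if coverage >= herd_threshold:
        return "inconvenience measures"     # e.g., required information sessions
    if coverage >= herd_threshold - buffer:
        return "mandatory immunization"     # public goods withheld from refusers
    return "compulsory immunization"        # refusal treated as a crime

print(conditional_policy(0.98))  # all exemptions open
print(conditional_policy(0.90))  # compulsory immunization
```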

Critically, on the conditional view, the epistemically unsound reasons commonly advanced for NMEs never factor into evaluating the legitimacy of coercive policy. Exemptions expand and contract based on the degree to which children’s health interests--which the state is obligated to protect--are threatened.

In the next section, I argue that the Gausian model of moderate idealization can only publicly justify the inconvenience model and cannot publicly justify closing NMEs.

Gaus and Vaccine Policy

In this section, I argue that Gaus’ model of moderate idealization and political legitimacy commits him to vaccine policies that justify and preserve NMEs. These, however, are epistemically and normatively bad policies, and they reveal problems with Gaus’ model of moderate idealization--specifically, that it is ill-equipped to handle the pervasiveness of epistemically vicious environments. First, I summarize my main argument against Gausian idealization from Ch. 2 and apply it to the case of vaccine skeptics. Next, I explain why Gaus’ model cannot idealize away some vaccine skeptics’ poorly justified, empirically grounded objections to vaccines. Finally, I describe which vaccine policies Gaus’ model justifies and which it cannot.

The Problem of Moderate Idealization and Vaccine Skepticism

In Chapter 2, I argued that under conditions of widespread persistent polarization and propaganda (triple P conditions), Gausian moderate idealization can sometimes justify defeaters for policy grounded in empirical claims endorsed by a consensus of experts. In the case of vaccines, for example, a consensus of experts holds that vaccines do not cause autism and are not a net harm to children’s health. On Gaus’ view, I argue, a deeply committed vaccine skeptic’s empirical beliefs could survive idealization and could therefore defeat policy proposals grounded in the expert consensus.

My main argument takes the following form:

1. Many members of the public are under triple P conditions.
2. Under triple P conditions, Gausian moderate idealization cannot idealize away some denialists’ and conspiracists’ false empirical beliefs.
3. If some people’s false empirical beliefs can’t be idealized away, these people can have defeaters for policy that conforms with a consensus of relevant empirical experts.
4. Therefore, some science denialists and conspiracists can have defeaters to science-based policy. (1-3)
5. If some science denialists and conspiracists can have defeaters to science-based policy, then Gausian idealization commits us to epistemically and normatively bad policy.
6. Therefore, Gausian idealization can commit us to epistemically and normatively bad policy. (4, 5)

I defend Premise 1 by drawing on recent literature in social epistemology on the relationship between trust and belief formation. First, there’s broad agreement that even under normal conditions, non-experts struggle to discern to whom to defer where there is the appearance of expert disagreement (Goldman, 2001). Complex controversial issues amplify this problem (Gaus, 1996, p. 156).

The dynamics of persistent polarization and echo chambers breed distrust in expert testimony when it conflicts with in-group ideology. This conforms with Gaus’ ideas concerning the path dependence of belief formation. Two agents can appraise the same evidence differently depending on “the order in which one takes up the examination of the elements” (Gaus, 2010, p. 273). If agents, early in their path, are led to believe that the CDC and other public health agencies are untrustworthy, this will influence how they appraise evidence on vaccines going forward.

This model also aligns with recent empirical findings. High levels of distrust between groups lead members of the different groups to appraise the same expert testimony and evidence differently (O’Connor and Weatherall, 2019). People do not update beliefs in light of testimony or evidence coming from experts belonging to distrusted social groups (O’Connor and Weatherall, 2019). In fact, their credence can move in the opposite direction (O’Connor and Weatherall, 2019). People only defer and update in light of testimony or evidence coming from prestigious individuals within their own groups (O’Connor and Weatherall, 2019). It follows that if trust and distrust have been misplaced, stable belief polarization occurs between groups over time.
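A toy simulation makes the mechanism vivid (my own minimal sketch, not O’Connor and Weatherall’s actual network model; all parameter values are illustrative):

```python
import random

def update(credence: float, signal: float, trust: float, rate: float = 0.1) -> float:
    """Trust-weighted updating: a trusted source (trust > 0) pulls credence
    toward its signal; a distrusted source (trust < 0) pushes credence away,
    mimicking the anti-updating effect described above."""
    new = credence + rate * trust * (signal - credence)
    return max(0.0, min(1.0, new))

random.seed(0)
group_a = [0.5] * 50   # group that trusts public health experts
group_b = [0.5] * 50   # group that distrusts the same experts

for _ in range(200):
    signal = random.gauss(0.9, 0.05)   # noisy expert testimony: 'vaccines are safe'
    group_a = [update(c, signal, trust=+1.0) for c in group_a]
    group_b = [update(c, signal, trust=-1.0) for c in group_b]

print(f"trusting group's mean credence:    {sum(group_a) / len(group_a):.2f}")  # ~0.90
print(f"distrusting group's mean credence: {sum(group_b) / len(group_b):.2f}")  # ~0.00
```

After a few hundred rounds, the trusting group’s mean credence sits near the expert signal while the distrusting group’s sits near zero: stable polarization from the very same stream of evidence.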

Eric Merkley (2020) suggests this distrust-based belief polarization occurs among opponents of vaccines. He found that anti-intellectualism—defined as a generalized mistrust of experts and intellectuals—is an important predictor of citizens’ views on vaccines, GMOs, and water fluoridation. The stronger the anti-intellectualism, the more likely citizens are to hold views contradicting the expert consensus. Consistent with the aforementioned formal models in social epistemology, Merkley also found that anti-intellectualism diminishes the persuasiveness of messages of expert consensus.

Horizontal patterns of deference can also cause stable belief polarization between groups. First, individuals tend to adopt those beliefs which are most frequent in their information environment (Henrich & McElreath, 2003). Polarized media environments mean that polarized groups will encounter various beliefs at different relative frequencies. Opponents of vaccines will encounter information that supports their view much more frequently than they will encounter disconfirming information. For example, Sullivan et al. (2019) show that, “at least when it comes to discourse on controversial topics like vaccine safety, social media tends to be highly polarized and to amplify misinformation rather than broadcasting [genuine] expert consensus.”

James O. Weatherall and Cailin O’Connor (2020) found that horizontal trust/mistrust relations between laypeople also lead to endogenous stable belief polarization. Individuals reject evidence for a position from those who hold beliefs different from their own on unrelated issues. Conversely, people accept evidence for a position from those who hold beliefs similar to their own on unrelated issues. As in vertical belief transmission, for the same evidence, individuals revise credence levels one way if the evidence comes from people with similar beliefs but revise in the other direction if that same evidence comes from people holding different beliefs on unrelated issues (Weatherall & O’Connor, 2020, p. 2).

This pattern is evident in the contemporary anti-vaccine movement. Disparate groups such as white Christian evangelicals, the alt-med community, and rightwing populists are increasingly converging on anti-vaccine views (Allen, 2019; Ehley, 2019; Gorski, 2020). This can be explained by their shared beliefs that the state, Big Pharma, and public health institutions are corrupt and untrustworthy.

Finally, Sullivan et al. (2020) developed a formal model to explain and evaluate the effects of filter bubbles, echo chambers, and group polarization on belief formation. Unlike O’Connor and Weatherall (2019, 2020), who focus on how information spreads through networks, Sullivan et al. (2020) investigate the epistemic well-being or vulnerability of individual positions in a network. For any given observer in an epistemic network, their model quantifies that position’s epistemic vulnerability, where vulnerability reflects the low diversity, independence, and number of the observer’s sources. Positions whose sources are low in diversity, independence, and number are epistemically vulnerable to false beliefs and poorly positioned to receive corrective information.
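To fix ideas, here is one crude way to score a single observer along those three dimensions (a toy operationalization of my own, with invented sources; Sullivan et al.’s formal measure differs in its details):

```python
from collections import Counter

def vulnerability(sources: list[str], community_of: dict[str, str]) -> float:
    """Toy epistemic-vulnerability score for one observer: rises as the
    observer's sources fall in number, diversity, and independence.
    0 = well positioned; 1 = maximally vulnerable."""
    if not sources:
        return 1.0
    n = len(sources)
    groups = Counter(community_of[s] for s in sources)
    diversity = len(groups) / n                    # distinct communities per source
    independence = 1 - max(groups.values()) / n    # 1 - share of dominant community
    number = min(n / 10, 1.0)                      # saturates at 10 sources
    return 1 - (diversity + independence + number) / 3

community_of = {"cdc": "public_health", "who": "public_health",
                "mom_blog": "anti_vax", "forum": "anti_vax", "uncle": "anti_vax"}

print(vulnerability(["mom_blog", "forum", "uncle"], community_of))  # ~0.79: echo chamber
print(vulnerability(["cdc", "mom_blog", "forum"], community_of))    # ~0.57: more diverse
```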

Sullivan et al. (2020) then applied their model to a real online network representative of the epistemic problems associated with filter bubbles, echo chambers, and group polarization—i.e., Twitter networks that share and discuss content related to vaccine safety. They found that high epistemic vulnerability characterizes a large proportion of individuals in these networks (p. 14). That is to say, observers in these networks score extremely low in terms of the diversity, independence, and number of sources from which they receive information, and they are poorly positioned to receive good or corrective information.

The Inability to Idealize Away False Empirical Beliefs

Gausian idealization moderately improves real-world agents along the rational and informational dimensions. Idealization prohibits attributing beliefs to real-world agents beyond what they could have arrived at by sound deliberative route had they expended a reasonable amount of effort. For this reason, Gaus reminds us that for bounded reasoners, “we must acknowledge that all our beliefs are defeasible” (p. 274). The above findings in social epistemology suggest that Gausian idealization will be too moderate to idealize away improbable empirical beliefs about vaccines under triple P conditions and that doing so requires idealizing beyond what Gaus allows.

As Weatherall and O’Connor (2019, 2020) demonstrate, when two groups appraise a source’s trustworthiness differently, stable belief polarization occurs even without cognitive biases. It follows that idealizing agents along the rational dimension (i.e., improving the quality of their inferences) will not necessarily revise vaccine skeptics’ beliefs against the scientific consensus. Improved inferences only lead to sound conclusions if they begin with good information. As Nguyen (2018) notes, however, once inside an echo chamber “one might follow good epistemic practices and still be led further astray” (p. 4). This applies to vaccine skeptics since most vaccine skeptics engaging in online discussion of vaccine safety are in epistemically vulnerable positions with respect to the quality of the information they receive from their network (Sullivan et al., 2020). Increasing the quality of vaccine skeptics’ inferences will not idealize away their false beliefs.

Furthermore, Gausian idealization constrains the ability to attribute to agents the correct information. Moderate idealization permits informational improvements, but only in ways that improve the coherence of existing local (but not global) belief-value structures (Gaus, 2010, p. 241). Introducing corrective or missing information to vaccine skeptics’ belief-value structures undermines rather than improves coherence since skepticism toward vaccine safety and efficacy isn’t an isolated belief. It is tightly intertwined with other mutually supporting beliefs about vaccines, risk assessments, and the motives and relative trustworthiness of the FDA, CDC, experts, and Big Pharma. Attributing to a vaccine skeptic the belief that vaccines are safe and efficacious will not cohere with the broader local belief structure with which that belief is interwoven. It will instead undermine coherence. Therefore, informational improvements through idealization that render the vaccine skeptic a proponent of vaccines violate coherence.
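A schematic example shows why the ‘corrective’ attribution backfires on any such coherence measure (a toy encoding of my own; Gaus offers no formal coherence metric):

```python
def coherence(beliefs: dict[str, bool], supports: list[tuple[str, str]]) -> float:
    """Share of mutual-support links whose two beliefs agree -- a crude
    stand-in for local coherence, not Gaus's own formulation."""
    agree = sum(beliefs[a] == beliefs[b] for a, b in supports)
    return agree / len(supports)

# A vaccine skeptic's tightly interwoven local web (hypothetical encoding).
web = {"vaccines_unsafe": True, "cdc_untrustworthy": True,
       "pharma_corrupt": True, "outbreak_risk_exaggerated": True}
links = [("vaccines_unsafe", "cdc_untrustworthy"),
         ("vaccines_unsafe", "pharma_corrupt"),
         ("vaccines_unsafe", "outbreak_risk_exaggerated"),
         ("cdc_untrustworthy", "pharma_corrupt")]

print(coherence(web, links))    # 1.0: the skeptical web is fully coherent
web["vaccines_unsafe"] = False  # attribute the 'corrective' belief
print(coherence(web, links))    # 0.25: local coherence collapses
```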

Gausian idealization further constrains informational improvements via the principle of conservatism. The principle of conservatism grants a privileged position to existing local beliefs. That is, if an agent’s original set contains a belief B, he needs some positive reason to reject it (Gaus, 2010, p. 241). Hence, conservatism resists attributing to vaccine skeptics beliefs that would contradict their existing deeply held and mutually reinforcing skeptical beliefs about vaccine safety, efficacy, and trustworthy sources.

Modifying vaccine skeptics’ belief structures so that they are pro-vaccine requires radically restructuring their interconnected beliefs about vaccines. But this level of restructuring surpasses the upward bound of permissible idealization. Moderate idealization cannot attribute to agents reasons their real-world selves could not access by sound deliberative route (Gaus, 2010, p. 257). Many real-world vaccine skeptics could not, without radical revision, recognize as their own reasons to give up NMEs and embrace a comprehensive immunization policy. It follows that moderate idealization will preserve their reasons to reject any policy that eliminates exemptions.

Publicly Justified Vaccine Policy for Gaus’ Model

The Gausian view must preserve NMEs. Recall that the PJP requires that every citizen have, from their own point of view, sufficient reason to endorse a policy. When a citizen’s beliefs provide countervailing reasons to a policy, those reasons count as ‘defeaters’ and exclude that policy from the eligible set of policies. Since moderate idealization preserves vaccine skeptics’ compromised empirical beliefs about vaccine safety and efficacy, they will have defeaters for any policy that eliminates NMEs. Requiring them to vaccinate themselves or their children is, from their point of view, horrific and unjust. The state is mandating that they inject “poisons” into their children. From their epistemic position, this policy violates fundamental ethical values, such as parental rights and autonomy, and the duties of nonmaleficence and protection against harm. From the Gausian point of view, denying NMEs to this population violates the requirement for non-authoritarian relations between free and equal people. Hence, Gausian moderate idealization, in conjunction with the PJP, generates defeaters for policies that eliminate NMEs.

It follows that the Gausian model must reject religious priority, eliminationism, and the conditional view, and endorse the inconvenience model. Religious priority leaves open religious exemptions but closes NMEs. However, Gaus must treat secular reasons that survive idealization the same as religious reasons; therefore, he cannot advocate policies that close NMEs. Eliminationism closes both NMEs and religious exemptions, and therefore Gaus must reject it. Gaus, recall, cannot close NMEs or religious exemptions so long as the reasons for them survive idealization. The conditional view closes NMEs and religious exemptions and makes immunizations mandatory or compulsory when immunization rates fall below those required for herd immunity. But once again, for Gaus, so long as a reason survives idealization, it counts as a defeater, and therefore he must reject the conditional view.

Since only the inconvenience model preserves NMEs, the Gausian model justifies only some variant of this policy. But there are two main problems with preserving NMEs and the inconvenience model. First, it ties the state’s hands when immunization rates fall below those required for herd immunity. Second, its justification relies on poorly justified empirical beliefs. I discuss the latter in Section IV.

Suppose a situation arises similar to that which occurred in the fundamentalist church in Philadelphia, except this time with a secular group. Children are dying of easily preventable diseases, and the unimmunized cluster is spreading the disease to the general population, which includes the medically vulnerable. On the inconvenience view, the state cannot close NMEs, only make them inconvenient.

The Gausian view cannot justify the compulsory immunizations which the conditional view permits under certain conditions. On Gaus’ model, proposals for compulsory vaccinations are defeated by vaccine skeptics’ deeply held but poorly justified empirical reasons. But simply requiring that vaccine skeptics attend information sessions is insufficient to address the hypothesized (but currently likely) VPD public health crises. Furthermore, an established literature finds that such information sessions often fail and, in some cases, leave attendees further entrenched in their opposition to vaccines (Nyhan et al., 2014; Dubé et al., 2015).

The above case marks a fundamental difference between the inconvenience view and the conditional view. The inconvenience view prevents the state from meeting its obligation to protect children’s welfare and public health when they face significant threats. The Gausian model allows vaccine skeptics’ poorly justified reasons to survive idealization and, therefore, defeat the policy response that would allow the state to meet its obligations.

Notice that although vaccine skeptics will have defeaters to policies that close NMEs, policy must also respect the majority’s wishes, and the majority will have defeaters grounded in the costs of leaving NMEs open. How might a Gausian approach strike a balance between respecting minority and majority reasons?

One move would be to add a mandatory immunization policy on top of the required information sessions. Those with NMEs lose their normal entitlement to public spaces, such as public schools, daycares, libraries, and public transportation. If vaccine skeptics can claim, “you may not impose what I perceive as risks on me or my child without my consent,” so can the general public. That is, those with NMEs may not expose others to (genuine) risk in public spaces without their consent.

While this policy reconciles minority and majority policy desires in public spaces, we cannot reasonably expect viruses to respect the distinction between public and private spaces.

This has been amply demonstrated during the COVID-19 pandemic where congregating and singing turned churches and weddings into epicenters for wider community spread. Even if those with NMEs don’t attend public schools, they still subject others to risk in private spaces such as restaurants, play centers, amusement parks, and anywhere else children congregate. Recall that the Disneyland and Philadelphia measles outbreaks originated in private spaces.

Furthermore, it’s impractical to extend to private spaces the policy applied to public spaces. Are businesses going to require immunization certificates from all children upon entry? Will some businesses be NME-friendly while others aren’t, thereby segregating children from each other? How will birthday parties or sports leagues be organized without neutral ground? And even if these measures are adopted, it’s still doubtful that the virus will respect this cumbersome division of spaces.

At the level of theory, perhaps it’s possible to segregate the public and private worlds according to vaccination status. This would respect everyone’s reasons related to vaccine exemptions. However, the practical costs and difficulties of such a policy—along with the loss of herd immunity—make it extremely undesirable. Further concentrating unvaccinated children and adults leads to obvious community health risks—primarily to the children of vaccine skeptics, who will live within clusters of unvaccinated children.

Although such a policy gestures at reconciling the perceived health concerns of both sides, the impracticality and social and economic costs would likely also provoke defeaters for non-health related reasons, thereby removing it from the eligible set of policies. Gausian moderate idealization fails to deliver an acceptable policy outcome. Despite a consensus of health experts, expert reasons are defeated by poorly justified lay reasons, in turn leading to epistemically and normatively bad policy.

Costs of the Gaus-Vallier Model

Vaccine skeptics, when moderately idealized, will not have sufficient reason to endorse religious priority, eliminationism, or the conditional policy. Such policies will, on Gaus’ view, violate the precept of non-authoritarian relations. However, while it’s true that policy ought to strive to reconcile disagreement between free and equal people, policy should be more than mere reconciliation. Policy ought also to be drawn from epistemically adequate reasons. When it isn’t, other costs arise.

Here, I argue that the issue of vaccine policy exposes deeper problems with Gaus’ model and moderate idealization generally. When the model was developed, no one could have anticipated the power and pervasiveness of epistemically vicious environments. As such, under triple P conditions and for politically controversial issues, the model will fail to appropriately align with pre-theoretical intuitions and desiderata concerning an account of political legitimacy: policy (a) ought to be informed by and not contradict a consensus of relevant experts, (b) ought not to lend credibility to epistemically compromised reasons, and (c) ought to be able to resolve disputes over negative externalities with epistemically sound reasons. In this section, I argue that immunization policy reveals that the Gausian model does not adequately deliver in these respects.

Role of Experts in Policy

Pre-theoretically we think that policy should in an important sense rely on experts—at least where the basic scientific facts are concerned. The Gausian model’s outcomes conflict with our pre-theoretical intuitions. At minimum, policy ought not to contradict a consensus of relevant experts on empirical matters. This is in part because we want policy to be effective. Where we disregard experts, the laws are not capable of accomplishing what they’re supposed to do. Also, when policy disregards expert reasons, policy and institutions lose legitimacy from the perspective of the majority who defer to the expert consensus. Finally, it becomes difficult to explain the role of experts in society and the role of government-funded expert institutions and science when expert reasons are defeated by non-experts. In this section, I explain and diagnose the source of this problem in the Gausian framework and how it undermines political legitimacy.

On the Gausian model, expert institutions and knowledge can become merely an appendage to policy justification rather than a source of justification. That is, expert reasons only figure into public justification when members of the public defer to those experts. In cases where a group of lay citizens disagrees with the expert consensus, such as in the case of immunization policy, lay citizens’ reasons can defeat policy derived from expert reasons. As such, science-based policy—even when there is a consensus of relevant experts—can be defeated by improbable and poorly justified empirical beliefs. When poorly justified lay reasons override those of a consensus of experts, that policy’s legitimacy is undermined from the point of view of the majority who defer to the consensus of experts.

The Gausian model admits poorly justified empirical reasons into the domain of public reason in part because (a) it is overly concerned with individuals’ reasons and reasoning, and (b) it fails to manage vulnerabilities in the background socio-epistemic environment from which individuals acquire their beliefs. Although the Gausian model acknowledges our epistemic interdependence, it fails to anticipate the pervasiveness of the myriad ways this interdependence can be hijacked. The model is overly optimistic about individuals’ abilities to overcome epistemically vicious environments. This leaves the model vulnerable to admitting epistemically compromised reasons from individuals who non-culpably inhabit vicious epistemic environments and are manipulated into empirical beliefs that contradict a consensus of relevant experts.

Recent trends in formal social epistemology support this analysis. Network models by Sullivan et al. (2019) and findings by Benkler et al. (2018) suggest that, on controversial issues, social media tends to be highly polarized and to amplify misinformation rather than expert consensus. Thus, Sullivan et al. (2020) argue that to better evaluate the epistemic vulnerabilities associated with filter bubbles, echo chambers, and polarization, we can no longer abstract away from the fact that epistemic agents are situated in large epistemic communities and receive information from many interconnected sources. What we need is a framework that makes it possible to assess the epistemic qualities and drawbacks of social networks (p. 1). Anticipating this shift, Rini (2017) argues that individual epistemic dispositions and virtues are inadequate to address the problem of persistent polarization, filter bubbles, and echo chambers, and that we need to place more emphasis on the structure of epistemic institutions such as social media platforms.

The Gausian model inadequately manages the epistemic hazards that emerge in our current informational environment. To compensate for individuals’ epistemic vulnerabilities, a political epistemology must somehow recognize legitimate hierarchies of knowledge communities and the role they ought to play in policymaking.

Several reasons justify this hierarchy. The epistemic norms and practices of scientific communities and lay communities differ in important ways. Scientific communities, while still fallible, deliberately employ norms and practices to mitigate the distorting effects of bias and other epistemic errors. A consensus among scientific experts therefore marks an important epistemic achievement that a political epistemology should not subordinate to the conclusions of partisan lay communities—particularly given the prevalence of epistemically vicious environments.

In a sense, the above points out the obvious. Improving Gausian moderate idealization, therefore, requires a justification for imposing the epistemic hierarchy that doesn’t conflict with Gaus’ other commitments. Reflecting on how vaccine skeptics and other science denialists reason provides the initial steps toward this justification. When vaccine skeptics and other science denialists provide justifications for their views, they invariably appeal to scientific norms, practices, and (outlier) experts. They frequently claim to be the true adherents of scientific norms and deferrers to experts.

This implies an important intersubjective commitment to scientific norms, practices, and experts among all moderately idealized citizens. However, despite this intersubjective agreement, I have argued that a circumscribed edifice of false beliefs inhibits idealizing denialists such that they endorse the correct conclusions in a particular domain. To again quote Nguyen (2018), once inside an echo chamber “one might follow good epistemic practices and still be led further astray” (p. 4).

And so, moderate idealizers must decide how to resolve a contradiction that arises in the science denialist’s set of beliefs, values, norms, and epistemic norms: on the one hand, the denialist whole-heartedly endorses the various norms and practices of science; on the other, because they are non-culpably entrapped in triple P conditions, they endorse policies that undermine their own epistemic and normative commitments. The principle of conservatism and the requirement to improve local rather than global coherence lead Gaus to resolve the tension in favor of the particular empirical beliefs. In Section IV, I complete the argument for resolving it in the other direction.

For now, I wish only to point out that (a) groups applying scientific norms, practices, and institutions will more reliably approach truth on empirical matters than will groups of laypeople; (b) moderately idealized citizens will intersubjectively agree on (a); and (c) idealizing in a way that undermines the legitimate epistemic hierarchy of expert consensus over laypeople conflicts with our pre-theoretical intuitions about policy legitimacy and will, in the eyes of the majority, diminish the legitimacy of whatever policy, institution, and government incorporates these beliefs.

Lending Credibility to Bad Reasons

When policy is grounded in bad reasons or defeated by bad reasons, the state lends its credibility to those bad reasons. In so doing, the state diminishes its own credibility and legitimacy. Consider the case of GMO labeling laws. Despite the outcry from some vocal online groups, a consensus of relevant experts finds that there are no important health differences between GMOs and conventional foods. However, in some states, groups lobbied for labeling laws based on the unsubstantiated belief that GMOs do have harmful health effects. That these laws are grounded in epistemically compromised reasons damaged their legitimacy and that of the institutions that promulgate them.

Worse still, these laws create a feedback loop that can reinforce the perceived credibility and justificatory force of compromised reasons. Since the policy’s enactment, GMO opponents have pointed to the GMO labeling laws as evidence of GMOs’ harmfulness. They say, “if GMOs are so safe, then why does the state require that they be labeled?” This same reasoning can spread to previously naïve consumers. In short, when the state, its institutions, and policies lend their credibility to compromised reasons, they confuse the public. And, in doing so, they risk damaging their political legitimacy.

NMEs have this same problem. NMEs arose out of the public’s fear of the MMR vaccine, itself based on Andrew Wakefield’s fraudulent and retracted Lancet article. This policy is grounded in poorly justified empirical reasons.42 As in the GMO case, the fact that the state instituted NMEs folds back on itself to further reinforce the illusion of credibility for the reasons grounding the bad policy. Naïve citizens reason, “if vaccines are so safe and a net benefit to my child’s health, then why does the state allow for NMEs?” Lending credibility to compromised reasons by promulgating NMEs diminishes the credibility and political legitimacy of the government and public health institutions that promulgate these laws.

Finally, it should also be noted that in states that grant NMEs, NMEs are not granted for the credibility of the reasons provided but because of the applicant’s perceived sincerity. That is, states apply the same standard to secular reasons as they do to religious reasons. However, there’s an important disanalogy between secular and religious reasons in this case. Sincerity is an appropriate standard for evaluating religious reasons since they concern metaphysical and normative commitments. However, as I have argued, NMEs are almost exclusively motivated by poorly grounded empirical beliefs. Sincerity is not a good standard for gauging the legitimacy of empirical beliefs. Conformity with expert consensus is. When the state (mis)applies the sincerity standard to evaluate the legitimacy of (empirical) reasons for NMEs, it inadvertently lends legitimacy to those reasons. In doing so, the state delegitimizes itself for citizens who recognize the poor reasons for what they are.

42 Some vaccine skeptics may not appeal to Wakefield’s study as justification for their beliefs. However, the Wakefield study helped to create the vicious epistemic environment surrounding vaccines. It should also be noted that Andrew Wakefield is still a major celebrity online and at vaccine skeptic conventions around the US.

Substantial Preventable Negative Externalities

Presumably, government-funded institutions and experts exist to generate knowledge to advise on, solve, or prevent social and political problems. Justificatory processes that allow compromised reasons to defeat policies grounded in expert reasons will likely allow or impose greater harm than policies that do not. In the case of vaccine policy, the Gausian model allows poorly justified empirical beliefs to trump those of a consensus of experts, in turn allowing substantial preventable negative externalities. This outcome can occur in other collective action problems where it’s not possible to idealize away a subgroup’s compromised empirical beliefs.

Recall that for Gaus, the PJP is grounded in non-authoritarian relations between free and equal people. Politically legitimate coercive policy cannot be authoritarian, and the presumption against coercion implies that all coercive policy stands in need of justification. A policy that mandates vaccinations is coercive and, therefore, political legitimacy requires that each person subject to it have sufficient reason to endorse it. A policy that allows exemptions, however, isn’t coercive. NMEs are the absence of coercion; hence, they don’t demand the same degree of justification. If non-authoritarian relations and satisfying the presumption against coercion ground political legitimacy, then only the exemption-based policy can be legitimate. Non-coercion doesn’t demand justification—at least not relative to coercion. The Gausian model thus leads to exemptions that impose significant externalities on the majority.

Gaus can reply by drawing on Kevin Vallier (2015), whose own model of moderate idealization shares many of the same features. Vallier argues that “we should not deny the benefits of a law to the overwhelming majority of the populace based on the objections of the few so long as the minority group can be meaningfully exempted from the law” (p. 16). Vallier provides a framework for considering when exemptions are warranted. A citizen merits an exemption if she meets four conditions:

(a) if she has sufficient intelligible reason to oppose the law,
(b) if the law imposes unique and substantial burdens on the integrity of those exempted that are not offset by comparable benefits,
(c) if the large majority of citizens have sufficient reason to endorse the law, and
(d) if the exempted group does not impose significant costs on other parties that require redress. (Vallier, 2015, p. 1)

Since Gaus and Vallier both work with very similar models, Vallier’s framework can be appended to Gaus’ and provide the correct outcome.

At first glance, this framework appears to straightforwardly prohibit NMEs since the exempted group would impose significant costs on other parties. Furthermore, NMEs allow exemptees to freeride off of herd immunity while placing a burden on the majority (Vallier, 2015, p. 16). NMEs do not meet condition (d). In fact, on the issue of vaccine exemptions, Vallier concurs that “the cost borne is non-trivial and [. . .] the cost borne is unequal. The majority must put up with the inequitable costs of exempting others” (Vallier, 2015, p. 16).

Here’s the problem. Public justification demands some mutually agreed upon assessment of the externalities. But disagreement over the relative costs and benefits of vaccines is precisely what’s at issue in the first place. Vaccine skeptics believe that vaccines pose unacceptable costs, while the majority believes the contrary. Where there is deep empirical disagreement, the exemption criteria just move the bump in the rug. We cannot point to the unacceptability of the externalities until citizens agree on what they are or whether they even occur, for that matter. We need epistemic terra firma from which to resolve the empirical dispute before we can apply condition (d).
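The point can be put schematically. Treating Vallier’s four conditions as a simple decision rule (a toy encoding of mine; the field names are hypothetical), everything turns on who fixes the truth value of condition (d):

```python
from dataclasses import dataclass

@dataclass
class ExemptionCase:
    """One group's claim to exemption, encoded per Vallier's four conditions."""
    intelligible_reason: bool          # (a)
    substantial_unoffset_burden: bool  # (b)
    majority_endorses_law: bool        # (c)
    imposes_significant_costs: bool    # (d) is its negation

def merits_exemption(case: ExemptionCase) -> bool:
    return (case.intelligible_reason
            and case.substantial_unoffset_burden
            and case.majority_endorses_law
            and not case.imposes_significant_costs)

# Same NME request, two empirical appraisals of the externalities:
majority_appraisal = ExemptionCase(True, True, True, imposes_significant_costs=True)
skeptic_appraisal = ExemptionCase(True, True, True, imposes_significant_costs=False)

print(merits_exemption(majority_appraisal))  # False: exemption denied
print(merits_exemption(skeptic_appraisal))   # True: the verdict flips with (d)
```

Until some epistemic standard settles which appraisal of the externalities is authoritative, condition (d) cannot do its work.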

The emphasis on justifying coercion, in conjunction with moderate idealization, legitimizes policies that fail to solve collective action problems, which can, in turn, generate substantial preventable negative externalities. Furthermore, allowing or closing exemptions that cause significant externalities requires prior agreement over the nature and magnitude of the externalities. In cases like immunization policy, climate change, and GMOs, where the substantial disagreement is primarily over the empirical facts, the issue of whether exemptions cause externalities cannot be resolved without first resolving the empirical disagreement. And resolution requires applying epistemic standards to the contested claims in the disagreement.

To summarize, the immunization issue reveals that the Gaus-Vallier model’s epistemic permissiveness conflicts with pre-theoretical intuitions about legitimacy and about what a theory of political legitimacy ought to be able to do. Under triple P conditions, and where there is deep disagreement over empirical facts, their model leads to policy that (a) fails to properly take on the advice of experts and (b) lends credibility to epistemically compromised reasons. Finally, the Gaus-Vallier model (c) lacks resources (i) to reconcile disagreements over empirically measurable externalities and (ii) to prevent imposing large negative externalities on the majority or other groups—such as the children of vaccine skeptics and medically vulnerable children.


A Better Moderate Idealization, A Better Policy

So far, I have sought to identify two inter-related problems. First, the pervasiveness of epistemically vicious environments leads Gausian moderate idealization to generate epistemically and normatively bad policy. Second, because of these problems, Gausian idealization publicly justifies bad vaccine policy and generates defeaters to scientifically sound vaccine policy. In this section, I address both problems. First, I explain why the conditional vaccine policy ought to be preferred to the inconvenience policy, despite the fact that it is defeated within Gaus’ model. Second, I offer a modification to the Gausian view that allows it to avoid the problems caused by epistemically vicious environments and to endorse the conditional rather than the inconvenience policy. Finally, I provide a justification for my proposed modification to Gausian moderate idealization.

Argument for the Conditional View

The argument for Pierik’s (2020) conditional approach to vaccine policy is fairly straightforward: it solves the problems that the other views cannot and is derived from well-justified normative and empirical reasons. In Section III. C., I demonstrated that the inconvenience view (which the Gausian model would justify) cannot address situations similar to what occurred in the Philadelphia church. The state has a basic obligation to protect the health and welfare of children and the broader community from imminent harm. When low vaccination rates seriously imperil children’s and the broader community’s health and welfare, the state must be able to intervene. The fact that a cluster of parents holds poorly justified empirical beliefs does not outweigh the genuine risks of harm. Children’s rights to health and welfare must be protected. Furthermore, regular citizens would have defeaters to NMEs in situations analogous to what occurred with the church in Philadelphia (but with a secular group).

The conditional view allows the state’s policy to be flexible so long as its primary obligation is met, i.e., protecting children’s health and welfare. When a community’s immunization rates surpass what’s required for herd immunity, the state can allow both religious exemptions and NMEs because children’s health and welfare do not face substantial risk. As immunization rates fall and children face greater risk, the state can introduce incrementally coercive policies. It can begin by imposing inconveniences. But if that fails to secure the health and safety of children, then it may introduce further coercion, such as restricting entitlements to various public goods and services, up to compulsory vaccinations.

The conditional view, unlike the other views, is sensitive to and guided by the normative commitments all parties accept (i.e., that the state has an obligation to protect the health and welfare of children) and by epistemically sound empirical reasons. When one group’s policy preferences diverge for poorly justified empirical reasons, the conditional view excludes these reasons and continues to ground policy in everyone’s shared normative commitments.

On the Gausian view, however, poorly justified empirical reasons defeat the policy that aligns not only with epistemically sound empirical reasons but also with universally shared normative commitments. The resulting policy undermines the normative commitments that vaccine skeptics themselves hold, i.e., that the state has an obligation to protect the health and welfare of children. But, perhaps more perversely, the Gausian model destroys this rare and precious universal normative consensus in favor of improbable, poorly justified empirical beliefs.

The Exclusion Principle and Political Legitimacy

In this section, I explain how an amendment to moderate idealization avoids the problems generated by pervasive vicious epistemic environments. The amendment is what I call the exclusion principle: empirical beliefs that contradict a consensus of relevant experts in a mature science are excluded from the domain of public reason even if they survive idealization. These beliefs, therefore, cannot act as defeaters to science-based policy. Thus, the exclusion principle excludes the defeaters to closing NMEs that arise in Gaus’ model (when low immunization rates jeopardize children’s health and welfare).

Intuitively, this gives us the correct policy for the right reasons, i.e., one that does not contradict the consensus of experts. Furthermore, the exclusion principle has another advantage: it preserves the deep normative agreement between vaccine skeptics and everyone else with respect to protecting the health and welfare of children. All sides agree that public health policy ought to optimally protect the health and welfare of the children it is intended to protect. The Gausian view, on the other hand, destroys this rare normative consensus among citizens in favor of preserving improbable empirical beliefs derived from epistemically vicious environments.

The exclusion principle also better satisfies the desiderata connected to political legitimacy that are not fully accounted for by the Gaus-Vallier model. First, policy conforms with our intuitions about the role of science and experts in policymaking. On scientific matters, policy ought to accord more weight to a community of experts’ reasons than to those of laypeople trapped in an epistemically vicious environment. A political epistemology that weighs such reasons equally—or worse, grants epistemically compromised reasons defeater status over scientific reasons—gets something wrong. Furthermore, it undermines the legitimacy of the policy in the eyes of the majority who endorse expert reasons.

Second, my amendment ensures epistemically sound policy by preventing epistemically compromised reasons from defeating good policy. In the normative domain, reversibility constrains moderately idealized normative reasons from defeating, or creating exemptions to, normatively good policy. The exclusion principle is the analogue in the empirical domain; it constrains poorly justified empirical reasons from undermining epistemically and normatively sound policy—even if those reasons initially survive moderate idealization.

Third, my proposal also prevents government institutions from lending credibility to epistemically compromised reasons and from tarnishing their own credibility in doing so. Empirically grounded reasons that contradict a consensus of relevant experts are excluded from the domain of public reason and, therefore, cannot defeat policy. Institutions, therefore, do not lend credibility to such reasons or tarnish their own credibility by promulgating policy grounded in compromised reasons. Epistemically compromised reasons to refuse vaccines cannot usurp credibility from, or be reinforced by, state institutions and policy. This, in turn, prevents the vicious feedback loop that generates the illusion of credibility for anti-vaccine beliefs and supports spreading the false beliefs into naïve populations.

Fourth, my model addresses the problem of negative externalities and exemptions. On the Gausian view, public justification is primarily concerned with justifying coercion, which, in collective action problems, gives an advantage to the absence of policy. Since NMEs are the absence of coercion, they enjoy an advantage with respect to the degree of justification required. But political legitimacy requires that the absence of policy also be justified against the probable negative externalities the policy was intended to prevent. When governments fail to address genuine problems, their political legitimacy suffers.

Vallier’s model recognizes the requirement to justify the negative externalities that exemptions may bring about. But when the dispute centers on the nature and magnitude of the externalities themselves, as in the case of vaccines and NMEs, some epistemic standard must be brought in to resolve what is fundamentally an empirical disagreement. Without the exclusion principle, the Gaus-Vallier model lacks the resources to resolve this conflict. Instead, it allows poorly justified perceptions of risk and benefit to override well-justified appraisals of risk and benefit. A political epistemology should demand an appropriate level of justification wherever policy that prevents substantial negative externalities risks being defeated. Otherwise, the model justifies policies that diminish political legitimacy, and it fails to fulfill one of the fundamental functions of the state.

Justifying the Exclusion Principle within Public Reason

The exclusion principle suggests how we ought to idealize citizens when their empirical beliefs conflict with what they would have endorsed had they not been entrapped by triple P conditions. Modeling agents in these situations pits global epistemic commitments against a particular cluster of empirical beliefs. On the one hand, such idealized agents will have a general concern for truth, a commitment to scientific epistemic norms and practices, and a disposition to defer to a consensus of experts; these commitments are evident in every other domain of their lives, conspiracist or not. On the other hand, some idealized agents will be committed to a particular local set of beliefs generated from epistemically vicious environments. The tension must be resolved one way or the other. In such cases, Gaus’ principle of conservatism and the requirement to improve local coherence resolve the tension in favor of the local empirical beliefs.

Vallier’s Incomplete Solution

Although it shares the same roots, Vallier’s (2014) version of moderate idealization offers a potential solution. For Vallier, idealization must distinguish between core and peripheral commitments. The solution, then, is straightforward: vaccine skeptics’ commitments to truth and to scientific norms and practices inhabit the core of their webs of beliefs and values, while the particular empirical belief that vaccines are a net risk to health occupies the periphery.

Vallier’s idealization model solves the problem so long as this accurately represents vaccine skeptics’ belief-value webs. However, as I argued in Chapter 1, it does not accurately characterize all vaccine skeptics. For a non-trivial number of them, the belief that vaccines are a net harm to health also occupies the core. And insofar as this is the case, Vallier, too, requires a principle by which to resolve conflicting commitments within the core.

Vallier (2014) acknowledges that agents will sometimes have mutually exclusive deeply held beliefs and values. However, rather than offer a principle by which to resolve this tension, Vallier holds that, when it occurs in the core, idealization ought not to resolve the tension either way (p. 156). Agents should be modeled such that the internal conflict is preserved (p. 156). Vallier’s position, I suggest, follows from the context in which he presents his view: conflicts between core normative commitments. He is acknowledging that by moderately idealizing citizens we may uncover mutually exclusive commitments within their core. This follows from moderately rather than fully idealizing epistemically bounded agents.

However, observing and acknowledging that citizens will sometimes have conflicting commitments within their core provides no guidance for how those conflicts ought to be resolved for concrete policy issues. And so, while Vallier offers plausible guidance for resolving conflicts between core and peripheral commitments, he lacks a principle by which to resolve conflicts within the core. Note that at least four important kinds of conflict can arise in the core: normative vs. empirical, normative vs. normative, empirical vs. epistemic, and epistemic vs. normative. I will focus primarily on normative vs. empirical and epistemic vs. empirical conflicts and leave the resolution of the other kinds to others.

The Exclusion Principle Justified

Whether we prefer Gaus’ or Vallier’s model of moderate idealization, I offer an idealization principle that modifies each and resolves tensions between empirical beliefs on the one hand and epistemic and normative commitments on the other. This is the exclusion principle: we exclude from the domain of public reason any empirical beliefs that both survive idealization and contradict a consensus of relevant experts in a mature science. The exclusion principle would exclude the vaccine skeptic’s empirical belief that vaccines are a net harm, thereby preventing defeaters grounded in that belief. Unfortunately, merely stipulating the exclusion principle leaves it vulnerable to the accusation of being ad hoc. Extensionally, it generates the right policies; but what grounds the principle, and how does it cohere with the existing public reason framework?

The justification for the exclusion principle follows from reflecting on the idealization principles that moderately idealized citizens would themselves endorse. That is to say, we can derive the exclusion principle from the way moderate idealization has already been specified and from moderately idealized citizens’ preferences. We begin by asking what beliefs and commitments we can reasonably attribute to moderately idealized citizens, and then ask how they would prefer conflicts between those commitments to be resolved.

Moderately idealized citizens will:

1. intersubjectively agree on and endorse scientific norms and practices (see Sec. III.A);
2. intersubjectively agree that there is a legitimate hierarchy of epistemic communities on empirical matters;
3. value truth as opposed to falsity on empirical matters;
4. know that epistemically vicious environments exist and that they could unknowingly and non-culpably fall victim to them;
5. know that if they unknowingly and non-culpably fall victim to an epistemically vicious environment, their empirical beliefs on politicized scientific issues will be unlikely to be true;
6. know that if they unknowingly and non-culpably fall victim to an epistemically vicious environment, their empirical beliefs will contradict what their global epistemic commitments would have led them to endorse had they not fallen into the echo chamber.

Given the above, we can ask moderately idealized citizens what principle of idealization they believe ought to resolve tensions between particular empirical beliefs and general epistemic commitments. That is, we can ask: “According to what principle would you want to be idealized if you unknowingly fell into an echo chamber? Would you prefer to be idealized according to your commitment to truth, scientific norms and practices, and general epistemic values, or according to a particular improbable empirical belief generated from within the epistemically vicious environment?” We can reasonably suppose that the vast majority of moderately idealized citizens would endorse idealization according to the former.
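The structure of this choice can be made explicit with a toy expected-accuracy comparison. The rendering below is mine, not Gaus’ or Vallier’s, and the probability terms are schematic placeholders rather than empirical estimates. Let G be idealization according to one’s global epistemic commitments (deference to the expert consensus C_S) and L be idealization according to the local belief cluster formed within a vicious environment V. For a citizen who values having true empirical beliefs (commitment 3), G is preferable just in case

\[
\Pr(\text{$b$ is true} \mid \text{$b$ tracks } C_S) \;>\; \Pr(\text{$b$ is true} \mid \text{$b$ was formed in } V),
\]

and commitments 2 and 5 jointly assert precisely this inequality. So a moderately idealized citizen who accepts commitments 1 through 6 is already committed to preferring idealization by her global epistemic commitments; the exclusion principle merely enforces that preference.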

When framed this way, we can see why we may permissibly apply the exclusion principle to idealization and exclude as defeaters those empirical beliefs that contradict a consensus of relevant experts. First, doing so conforms to a principle of idealization that moderately idealized citizens would themselves endorse. Second, since moderately idealized citizens would endorse the principle, it does not conflict with Gaus’ requirement of non-authoritarianism. Third, the principle meets an important condition imposed on moderate idealization; viz., real-world agents can arrive at it by a sound deliberative route. And finally, rather than reinforcing the harmful distorting effects of our current epistemic environment, it removes them from moderate idealization and preserves the normative consensus that idealization would otherwise be required to destroy.

One might worry that this method of justification invites a reflexivity objection. That is, by justifying the exclusion principle by appeal to public justification, I imply that every element of public justification must itself be publicly justified. If that is so, it is not clear that public justification ever gets off the ground, since among the public there may be incommensurable disagreement over the very standards of public justification.

This, however, mischaracterizes my project. Moderate idealization as it has already been specified leads to an indeterminacy: do we resolve it in favor of epistemic norms and commitments or in favor of particular empirical beliefs generated from a vicious environment? The theory contains a lacuna and requires some non-ad hoc means of resolving the indeterminacy. The justification for the exclusion principle simply derives from public reason and moderate idealization as already specified. The project is not to justify a new model of moderate idealization.

Conclusion

In this paper I have sought to achieve two interrelated goals: first, to expose the problems that pervasive epistemically vicious environments pose for moderate idealization (represented by Gaus’ model); and second, to determine the elements of a publicly justified immunization program. In working toward both ends, I have suggested that the Gausian model requires an amendment to ensure that moderate idealization better manages the treacherous epistemic conditions of our contemporary world. This is the exclusion principle: even if empirical beliefs survive idealization, we permissibly exclude them from the domain of public reason when they contradict a consensus of relevant experts in a mature science.

The exclusion principle allows citizens to be idealized such that they do not fall prey to vicious epistemic environments, and according to principles that they would endorse. Furthermore, it prevents the overturning of the commonsense view that there are legitimate hierarchies of epistemic communities and that political epistemology ought to represent and respect these hierarchies in policy.

Finally, I have argued that when moderate idealization is understood and amended as I have suggested, a publicly justified vaccine policy resembles the conditional approach rather than one that preserves NMEs across all conditions. It accords with the commonsense intuition that protecting children’s health and welfare, not the sincerity of parents’ improbable empirical beliefs, ought to be the primary consideration guiding the justification for state coercion or its absence.


References

Allen, A. (2019, May 27). How the anti-vaccine movement crept into the GOP mainstream. POLITICO. https://www.politico.com/story/2019/05/27/anti-vaccine-republican-mainstream-1344955

Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.

Bicchieri, C. (2017). Norms in the wild. Oxford University Press.

Chung, E. H. (2014). Vaccine allergies. Clinical and Experimental Vaccine Research, 3(1), 50. https://doi.org/10.7774/cevr.2014.3.1.50

Dubé, E., Gagnon, D., & MacDonald, N. E. (2015). Strategies intended to address vaccine hesitancy: Review of published reviews. Vaccine, 33(34), 4191–4203. https://doi.org/10.1016/j.vaccine.2015.04.041

Eggertson, L. (2010). Lancet retracts 12-year-old article linking autism to MMR vaccines. Canadian Medical Association Journal, 182(4), E199–E200. https://doi.org/10.1503/cmaj.109-3179

Ehley, B. (2019, March 5). Rand Paul condemns mandatory vaccines amid measles outbreak. POLITICO. https://www.politico.com/story/2019/03/05/rand-paul-mandatory-vaccines-measles-1240542

Gallup, Inc. (2020, January 14). Fewer in U.S. continue to see vaccines as important. Gallup.com. https://news.gallup.com/poll/276929/fewer-continue-vaccines-important.aspx

Gaus, G. F. (1996). Justificatory liberalism: An essay on epistemology and political theory. Oxford University Press.

Gaus, G. F. (2011). The order of public reason: A theory of freedom and morality in a diverse and bounded world. Cambridge University Press.

Goldstein, A., & Clement, S. (2020, June 2). 7 in 10 Americans would be likely to get a coronavirus vaccine, Post-ABC poll finds. Washington Post. https://www.washingtonpost.com/health/7-in-10-americans-would-be-likely-to-get-a-coronavirus-vaccine-a-post-abc-poll-finds/2020/06/01/4d1f8f68-a429-11ea-bb20-ebf0921f3bbd_story.html

Gorski, D. (2020, April 20). COVID-19 pandemic deniers and the antivaccine movement: An unholy alliance. Sciencebasedmedicine.org. https://sciencebasedmedicine.org/covid-19-pandemic-deniers-and-the-antivaccine-movement-an-unholy-alliance/

Henrich, J., & McElreath, R. (2003). The evolution of cultural evolution. Evolutionary Anthropology: Issues, News, and Reviews, 12(3), 123–135. https://doi.org/10.1002/evan.10110

LaCour, M., & Davis, T. (2020). Vaccine skepticism reflects basic cognitive differences in mortality-related event frequency estimation. Vaccine, 38(21), 3790–3799. https://doi.org/10.1016/j.vaccine.2020.02.052

Measles. (2019). Centers for Disease Control and Prevention. https://www.cdc.gov/measles/cases-outbreaks.html

Merkley, E. (2020). Anti-intellectualism, populism, and motivated resistance to expert consensus. Public Opinion Quarterly. https://doi.org/10.1093/poq/nfz053

National Vaccine Advisory Committee. (2015). Assessing the state of vaccine confidence in the United States: Recommendations from the National Vaccine Advisory Committee. Public Health Reports, 130(6), 573–595. https://doi.org/10.1177/003335491513000606

Navin, M. C., & Largent, M. A. (2017). Improving nonmedical vaccine exemption policies: Three case studies. Public Health Ethics. https://doi.org/10.1093/phe/phw047

Nyhan, B., Reifler, J., Richey, S., & Freed, G. L. (2014). Effective messages in vaccine promotion: A randomized trial. Pediatrics, 133(4), e835–e842. https://doi.org/10.1542/peds.2013-2365

Olive, J. K., Hotez, P. J., Damania, A., & Nolan, M. S. (2018). The state of the antivaccine movement in the United States: A focused examination of nonmedical exemptions in states and counties. PLOS Medicine, 15(6), e1002578. https://doi.org/10.1371/journal.pmed.1002578

Pierik, R. (2020). Vaccination policies: Between best and basic interests of the child, between precaution and proportionality. Public Health Ethics. https://doi.org/10.1093/phe/phaa008

Polio. (2019). Centers for Disease Control and Prevention. https://www.cdc.gov/polio/what-is-polio/polio-us.html

Quong, J. (2013). Public reason. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/public-reason/

Salmon, D. A., Haber, M., Gangarosa, E. J., Phillips, L., Smith, N. J., & Chen, R. T. (1999). Health consequences of religious and philosophical exemptions from immunization laws. JAMA, 282(1), 47. https://doi.org/10.1001/jama.282.1.47

States with Religious and Philosophical Exemptions from School Immunization Requirements. (2020). Ncsl.org. https://www.ncsl.org/research/health/school-immunization-exemption-state-laws.aspx

Vaccines: Vac-Gen/What would happen if we stopped vaccinations. (2019). cdc.gov. https://www.cdc.gov/vaccines/vac-gen/whatifstop.htm

Vallier, K. (2014). Liberal politics and public faith: Beyond separation. Taylor & Francis.

Wang, E., Clymer, J., Davis-Hayes, C., & Buttenheim, A. (2014). Nonmedical exemptions from school immunization requirements: A systematic review. American Journal of Public Health, 104(11), e62–e84. https://doi.org/10.2105/ajph.2014.302190

Watson, S. (2018, November 30). What’s herd immunity, and how does it protect us? WebMD. https://www.webmd.com/vaccines/news/20181130/what-herd-immunity-and-how-does-it-protect-us

Weatherall, J. O., & O’Connor, C. (2020). Endogenous epistemic factionalization. Synthese, 197(6). https://doi.org/10.1007/s11229-020-02675-3

WPVI. (2015, February 6). 1991: The Philly measles outbreak that killed 9 children. 6abc Philadelphia. https://6abc.com/1991-outbreak-faith-tabernacle-first-century-gospel-measles/504818/