Chapter 1

Introduction: Science Wars and Policy Wars

When considering the importance of science in policymaking, common wisdom contends that keeping science as far as possible from social and political concerns would be the best way to ensure science’s reliability. This intuition is captured in the value-free ideal for science—that social, ethical, and political values should have no influence over the reasoning of scientists, and that scientists should proceed in their work with as little concern as possible for such values. Contrary to this intuition, I will argue in this book that the value-free ideal must be rejected precisely because of the importance of science in policymaking. In place of the value-free ideal, I articulate a new ideal for science, one that accepts a pervasive role for social and ethical values in scientific reasoning, but one that still protects the integrity of science.

Central to the concerns over the use of science in policymaking is the degree of reliability we can expect for scientific claims. In general, we have no better way of producing knowledge about the natural world than doing science. The basic idea of science—to generate hypotheses about the world and to gather evidence from the world to test those hypotheses—has been unparalleled in producing complex and robust knowledge, knowledge that can often reliably guide decisions. From an understanding of inertia and gravity that allows one to predict tides and the paths of cannonballs, to an understanding of quantum mechanics that underlies the solid state components of computers, to an understanding of physiology that helps to
guide new medical breakthroughs, science has been remarkably successful in developing theories that make reliable predictions.

Yet this does not mean that science provides certainty. The process of hypothesis testing is inductive, which means there is always a gap between the evidence and the theory developed from the hypothesis. When a scientist makes a hypothesis, she is making a conjecture of which she is not certain. When the gathered evidence supports the hypothesis, she is still not certain. The evidence may support the theory or hypothesis under examination, but there still may be some other theory that is also supported by the available evidence, and more evidence is needed to differentiate between the two. The hypothesis concerns a great many more instances than those for which we will carefully collect data. When we collect more data, we may find that seemingly well-confirmed hypotheses and theories were false. For example, in the late nineteenth century, it was widely accepted that chemical elements could not transform into other elements. Elements seemed to be stable in the face of any efforts at transmutation. The discovery of radioactivity in the early twentieth century overturned this widespread belief. Or consider the theory of ether, a medium in which it was once commonly believed light traveled. Despite near universal acceptance in the late nineteenth century, the theory of ether was rejected by most physicists by 1920. Going even further back in time, for over 1,500 years it seemed a well-supported theory that the sun revolved around the Earth, as did the fixed stars. But evidence arose in the early seventeenth century to suggest otherwise and, along with changes in the theories of mechanics, overturned one of the longest standing and best supported scientific theories of the time. After all, how many times had humans seen the sun rise and set? And yet, the theory was ultimately incorrect. Data can provide evidential support for a theory, but can never prove a scientific theory with certainty. Aspects of the world that were once thought to be essential parts of scientific theory can be rejected wholesale with the development of new theories or the gathering of new evidence.

Because of the chronic, albeit often small, uncertainty in scientific work, there is always the chance that a specific scientific claim is wrong. And we may come to know that it is wrong, overturning the theory and the predictions that follow from it. The constant threat of revision is also the promise of science, that new evidence can overturn previous thought, that scientific ideas respond to and change in light of new evidence. We could perhaps have certainty about events that have already been observed (although this too could be disputed—our descriptions could prove
inaccurate), but a science that is only about already observed events is of no predictive value. The generality that opens scientific claims to future refutation is the source of uncertainty in science, and the source of its utility. Without this generality, we could not use scientific theories to make predictions about what will happen in the next case we encounter. If we want useful knowledge that includes predictions, we have to accept the latent uncertainty endemic in that knowledge.

The chronic incompleteness of evidential support for scientific theory is no threat to the general reliability of science. Although we can claim no certainty for science, and thus no perfect reliability, science has been stunningly successful as the most reliable source for knowledge about the world. Indeed, the willingness to revise theories in light of new evidence, the very quality that makes science changeable, is one key source for the reliability and thus the authority of science. That it is not dogmatic in its understanding of the natural world, that it recognizes the inherent incompleteness of empirical evidence and is willing to change when new evidence arises, is one of the reasons we should grant science a prima facie authority.

It is this authority and reliability that makes science so important for policy. And it seems at first that the best way to preserve the reliability of science is to keep it as far from policy as possible. Indeed, the realm of science and the realm of policy seem incompatible. In the ideal image of science, scientists work in a world detached from our daily political squabbles, seeking enduring empirical knowledge. Scientists are interested in timeless truths about the natural world rather than current affairs. Policy, on the other hand, is that messy realm of conflicting interests, where our temporal (and often temporary) laws are implemented, and where we craft the necessary compromises between political ideals and practical limits. This is no place for discovering truth.

Without reliable knowledge about the natural world, however, we would be unable to achieve the agreed upon goals of a public policy decision. We may all agree that we want to reduce the health effects of air pollution, for example, or that we want safe, drinkable water, but without reliable information about which pollutants are a danger to human health, any policy decision would be stymied in its effectiveness. Any implementation of our policy would fail to achieve its stated goals. Science is essential to policymaking if we want our policies concerning the natural world to work.

This importance of science in achieving policy goals has increased steadily throughout the past century in the United States, both as the issues encompassed by public policy have expanded and as the decisions to be
made require an increasingly technical base. As science has become more important for policy, the relationship between science and policy has become more entangled. This entanglement exists in both directions: science for policy and policy for science. In the arena of policy for science, public funds allocated for doing science have grown dramatically, and these funds require some policy decisions for which projects get funded and how those funds will be administered. In the arena of science for policy, increasing numbers of laws require technically accurate bases for the promulgation of regulations to implement those laws. These arenas in practice overlap: which studies one chooses to pursue influences the evidence one has on hand with which to make decisions. In this book, however, my focus will be largely on science for policy.

While the entanglement between science and policy has been noted, the importance of this entanglement for the norms of science has not been recognized. As science plays a more authoritative role in public decisionmaking, its responsibility for the implications of error, particularly the implications of potential inductive error, increases. Failure to recognize the implications of this responsibility for science, combined with the desire to keep science and policy as distinct as possible, has generated deep tensions for our understanding of science in society.

These tensions are evident in the increased stress science has been under, particularly with respect to its public role. Some commentators note an increasing strain on the “social contract” between science and society (see, for example, Guston and Keniston 1994). This strain was made manifest in the 1990s when two public debates erupted over science: the “Science Wars” and the sound science–junk science dispute. Both can be taken as emblematic of science under stress in our society.

The Science Wars, as they are often called, centered on the authority of science. They were about whether or not science should be believed when it tells us what the nature of the world is, about whether or not science should have more public authority than other approaches to knowledge or belief. For those outside the world of science studies, these are astonishing questions to raise. If one wants to know something about the natural world, it seems obvious that one should ask scientists. While few in science studies would actually dispute this, the claim has been made that the knowledge produced by science has no special authority above and beyond any other approach. In other words, the claim is that science and its methods have no special hold on the ability to uncover and speak truth; they simply have more funding and attention.

The sound science–junk science war, in contrast, does not question the special epistemic authority given to science in general, or the overall reliability of science for answering empirical questions. Instead, this dispute is about which particular piece(s) of science should shape policy. When is a particular body of scientific work adequately “sound” to serve as the basis for policy? Debates in this arena center on how much evidence is sufficient or when a particular study is sufficiently reliable. The arguments focus on such questions as: How much of an understanding of biochemical mechanisms do we need to have before we regulate a chemical? How much evidence of causation is needed before a court case should be won? How much of an understanding of complex biological or geological systems do we need before regulatory frameworks intervene in the market to prevent potential harm? The idea that science is the authoritative body to which one should turn is not questioned; what is questioned is which science is adequate for the job, or which scientific experts are to be believed by policymakers, Congress, and the public.

While both of these disputes are symptomatic of deep concerns surrounding the public role of science, neither has been able to produce a satisfactory approach to understanding the role of science in society or what that role might mean for the norms of scientific reasoning. This is, in part, because both disputes began with the presupposition that science is a distinct and autonomous enterprise developed by a community of scientists largely in isolation from public questions and concerns. Such an understanding of science and scientists inhibits a clear view of how science should function in society. Both in the academic arena of the Science Wars and in the policy arena of the sound science–junk science dispute, the discussions shed little light on the deep questions at issue, even as the existence of the debates indicated the need for a more careful examination of the role of science in society and its implications.

The Science Wars

The Science Wars were an academic affair from start to finish. A particular critique of science, known as social constructivism, began in the 1970s and gathered steam and fellow travelers in the 1980s. The social constructivist critique was essentially an assault on the authority of science, particularly its apparently privileged place in producing knowledge. Social constructivists suggested that scientific knowledge (not just scientific institutions or practices) was socially constructed and thus should be treated on a par with other knowledge claims, from folklore to mythology to communal beliefs
(Barnes and Bloor 1982). There simply was no deep difference between one set of knowledge claims and another, social constructivists argued, and thus scientific facts held no special claim to our acceptance.

As this critique was developed throughout the late 1970s and 1980s, other criticisms of science began to coalesce. For example, feminists noted that few scientists were women, and that many scientific claims about women had been (and continued to be in the 1980s) either explicitly sexist or supportive of sexist beliefs (Fausto-Sterling 1985; Longino 1990). Feminists wondered if science done by women would be different, producing different conclusions (Harding 1986, 1991). It was unclear whether sexist science was always methodologically flawed or bad science (as it sometimes was), or whether sexist science simply relied upon different background assumptions, assumptions which in themselves did not clearly put the scientific quality of the work in doubt. If the latter were the case, then an emphasis on unpacking the background assumptions, which often arose from the surrounding culture, seemed to support the notion that science was in fact a social construct, or at least heavily influenced by the surrounding society and its prejudices. Although feminists and social constructivists disagreed about much, their arguments often pointed in a similar direction—that scientific knowledge consisted of socially constructed claims that were relative to a social context. Only those within a particular social context thought the claims produced had any special authority or believability.

By the early 1990s, some scientists began to take umbrage with these criticisms, particularly the apparently strong form of the social constructivist critique, that in general science had no special claim to being more believable than any other knowledge claim. As scientists began to engage in this debate, the Science Wars erupted. An early salvo was Lewis Wolpert’s The Unnatural Nature of Science (1992), which devoted a chapter to responding to relativist and social constructivist views of science. The debate really heated up in 1994, however, with the publication of Paul Gross and Norman Levitt’s Higher Superstition: The Academic Left and Its Quarrels with Science.1 As one sympathetic reader of the book notes, “This unabashedly pugnacious work pulled no punches in taking on the academic science critics. . . . Naturally, those criticized on the ‘academic left’ fired back, and so the science wars were joined” (Parsons 2003, 14). The polemical nature of Gross and Levitt’s book drew immediate attention from scientists and fire from its targets, and the accuracy of Gross and Levitt’s criticisms has been seriously questioned. (Roger Hart [1996] is particularly precise in his critique of Gross and Levitt for simply misunderstanding or
misrepresenting their targets.) Now scientists and their critics had a text over which to argue.

The Science Wars took an even nastier turn when Alan Sokal, a physicist, decided to attempt a hoax. Inspired by Gross and Levitt’s book, he wrote a paper in the style of postmodern social constructivism and submitted it for publication in a left-leaning social constructivist journal, Social Text. The paper was entitled “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity,” and was a parody of some constructivist work, citing and drawing heavily from that work. The editors were thrilled that a physicist was attempting to join in the discussion, and they published the piece in 1996.2 Sokal then revealed he had written the work as a hoax to unmask the vacuity of this kind of work (see Sokal 1998).

Many cheered Sokal’s effort; after all, hoaxing is a venerable tradition in the natural sciences, where hoaxing has revealed some of science’s most self-deceived practitioners.3 But in the humanities, there is little tradition of hoaxing as a deliberate attempt to catch a colleague’s suspected incompetence.4 Scholars in those fields take for granted that a person’s work, ingenuously put forth, is their own honest view, so others cried foul at Sokal’s violation of this basic norm of honesty. The gulf between the critics of science and the scientists only grew wider.

However, as Ullica Segerstråle notes, in many of the forums of debate for the Science Wars, it was hard to find anyone actually defending the strong versions of the social constructivist claims (Segerstråle 2000, 8). The plurality of views about science among both science studies practitioners and scientists themselves became increasingly apparent as the decade came to a close. In the end, the Science Wars petered out, perhaps having moderated the views of some academics, but having had little impact on the public perception or understanding of science.5

So what was the debate about? Why did critics of science attack science’s general authority? I think the debate arose because of tensions between science’s authority and science’s autonomy. As I will discuss in chapter 3, the autonomy of science, the isolation of science from society, became a cornerstone of the value-free ideal in the 1960s. On the basis of the value-free nature of science, one could argue for the general authoritativeness of its claims. But an autonomous and authoritative science is intolerable. For if the values that drive inquiry, either in the selection and framing of research or in the setting of burdens of proof, are inimical to the society in which the science exists, the surrounding society is forced to accept the science and its claims, with no recourse. A fully autonomous and authoritative science is
too powerful, with no attendant responsibility, or so I shall argue. Critics of science attacked the most obvious aspect of this issue first: science’s authority. Yet science is stunningly successful at producing accounts of the world. Critiques of science’s general authority in the face of its obvious importance seem absurd. The issue that requires serious examination and reevaluation is not the authority of science, but its autonomy. Simply assuming that science should be autonomous, because that is the supposed source of authority, generates many of the difficulties in understanding the relationship between science and society.

That the relationship between science and society was an underlying but unaddressed tension driving the Science Wars has been noted by others. Segerstråle, in her reflections on the debate, writes, “But just as in other academic debates, the issues that were debated in the Science Wars were not necessarily the ones that were most important. One ‘hidden issue’ was the relationship between science and society. The Science Wars at least in part reflected the lack of clarity in science’s basic social contract at the end of the twentieth century” (Segerstråle 2000, 24–25). Rethinking that social contract requires reconsidering the autonomy of science. Once we begin to rethink the autonomy of science (chapter 4), we will need to rethink the role of values in science (chapter 5), the nature of scientific objectivity (chapter 6), and the process of using science in policymaking (chapter 7). The Science Wars demonstrated the tension around these issues with their intensity, but shed little light on them.

Policy Wars: The Sound Science–Junk Science Dispute

While the Science Wars were playing out, the place of science in policymaking was the focus of a completely separate debate in the 1990s. Rather than centering on the authority of science in general, the debate over science in policy centered on the reliability of particular pieces of science. As noted above, science is endemically uncertain. Given this uncertainty, blanket statements about the general reliability of science, and its willingness to be open to continual revision, are no comfort to the policymaker. The policymaker does not want to be reassured about science in general, but about a particular piece of science, about a particular set of predictions on which decisions will be based. Is the piece of science in which the policymaker is interested reliable? Or more to the point, is it reliable enough? For the policymaker to not rely on science is unthinkable. But which science and which scientists to rely upon when making policy decisions is much less clear.

The difficulties with some areas of science relevant for policy are
compounded by their complexity and by problems with doing definitive studies. Because of their complexity, policy interventions based on scientific predictions rarely provide good tests of the reliability of the science. A single change in policy may not produce a detectable difference against the backdrop of all the other factors that are continually changing in the real world. And the obvious studies that would reduce uncertainty are often immoral or impractical to perform. For example, suppose we wanted to definitively determine whether a commonly occurring water pollutant caused cancer in humans. Animal studies leave much doubt about whether humans are sufficiently similar to the animal models. The biochemical mechanisms are often too complex to be fully traced, and whether some innate mechanism exists to repair potential damage caused by the pollutant would be in doubt. Epidemiological studies involve people living their daily lives, and thus there are always confounding factors, making causation difficult to attribute. A definitive study would require the sequestering of a large number of people and exposing them to carefully controlled doses. Because the latency period for cancer is usually a decade or more, the subjects would have to remain sequestered in a controlled environment for years to avoid confounders. And large numbers of people would be needed to get statistically worthwhile results. Such a study would be immoral (the subjects would not likely be volunteers), expensive, and too unwieldy to actually conduct. We cannot reduce uncertainty by pursuing such methods. Nor does implementing a policy and seeing what happens reduce uncertainty about the science. Real world actions are subject to even more confounders than are controlled for in epidemiological studies. Even if cancer rates clearly dropped after the regulation of a pollutant, it would be hard to say that the new regulation caused the drop. Other simultaneous regulations or cultural changes could have caused the cancer rate to decline at the same time as the drop in exposure to the pollutant.

Thus, in addition to the generic uncertainties of science, science useful for policymaking often carries with it additional sources of uncertainty arising from the biological, ecological, and social complexity of the topics under study. Yet this chronic uncertainty does not mean that policymakers should go elsewhere for information. Instead, it puts increased pressure on assuring that the science on which they depend is reliable.

Tensions over the role of science in policy increased as the reach of regulation grew and the decision stakes rose throughout the 1970s and 1980s. As will be discussed in the next chapter, by the 1970s dueling experts became a standard phenomenon in public policy debates. As I will discuss
in chapter 7, attempts to proceduralize the policy decisionmaking process were supposed to rein in the impact of these contrarian experts, but by the 1990s it was apparent the problem was not going away. In fact, it seemed to be worsening, with public debates about technical matters occurring increasingly earlier in the policymaking process.

New terms arose in attempts to grapple with the crisis. Although scientists had used the phrase “sound science” to refer to well-conducted, careful scientific work throughout the twentieth century, its opposite was often “pseudoscience,” which masqueraded as science but had not a shred of scientific credibility. Examples of pseudoscience included tales of extraterrestrial sightings, claims about telekinetic or psychic abilities, and astrology. Pseudoscience as such has not been taken seriously in the policy realm and has not been a source of policy controversy. Rather than these more outlandish concerns, the realm of policy was focused on the reliability of a range of ostensibly reasonable scientific claims. Even as scientific expertise became the basis of many decisions, from new regulatory policy to rulings in tort cases, increasing concern was raised over the quality of science that served as a basis for those decisions. As tension brewed over the role of science in policymaking and skeptics over certain uses of science became more vocal in the early 1990s, a new term entered into the lexicon of commentators: “junk science.”

Although the term “junk science” appeared occasionally before 1991, Peter Huber’s Galileo’s Revenge: Junk Science in the Courtroom popularized the notion of junk science. Huber’s critique centered on the use of evidence in tort cases and decried the shift in the Federal Rules of Evidence from the Frye rule of the 1920s, which stated that only scientific ideas reflecting the consensus of the scientific community were admissible in court, to more recent, laxer standards that allow any expert testimony that assists in the understanding of evidence or determination of fact. Huber argued that this was a catastrophe in the making and that we needed to strengthen standards back to the Frye rule.

Huber relied upon an autonomous image of science to decide what counted as sound science, stating near the close of his book, “as Thomas Kuhn points out, a scientific ‘fact’ is the collective judgment of a specialized community” (Huber 1991, 226). For Huber, what the specialized community deems sound science is sound science. Yet Kuhn’s idea of a specialized community consists of scientists working within internally determined paradigms—sets of problems and ideas that scientists alone, separated from
any social considerations, decide are acceptable. (Kuhn’s influence in this regard will be discussed further in chapter 3.) Under this Kuhnian image of science as isolated and autonomous, one could presume that there might exist a clear and “pure” scientific consensus to which one could refer, and on which one could rely. Any science outside of this clear consensus was “junk,” even if later it might prove its mettle. Initially, the very idea of junk science depended on an autonomous and isolated scientific community, inside of which one could find sound science, and outside of which lay junk science.

Thus, the same conceptual framework that led to the Science Wars, the idea of science as autonomous and isolated, shaped the sound science–junk science debates. Like the Science Wars, the resulting debates in the policy arena have produced few helpful insights. Instead, they merely changed the rhetoric of policy disputes. As experts with obvious credentials continued to disagree about apparently scientific matters, Huber’s term “junk science” expanded from the courtroom to all public debates over technical policy issues. Rather than argue that an opposing view had an insufficient scientific basis, one could dismiss an opponent by claiming that their views were based on junk science, which looked like science, but which would be proven wrong in the near future. Conversely, one’s own science was sound, and thus would prove to be a reliable basis for decisionmaking in the long run. The idea that sound science was a clear and readily identifiable category, and that its opposite, junk science, was also easily identified, ran rampant through public discussions. As this language permeated policy debate, it became a mere rhetorical tool to cast doubt upon the expertise of one’s opponents.

In a revealing study by Charles Herrick and Dale Jamieson, the use of the term “junk science” in the popular media from 1995 to 2000 was examined systematically (Herrick and Jamieson 2001). They found that the vast majority of studies tarnished with the term did not have any obvious flaws (such as lack of peer review or appropriate publication, lack of appropriate credentials of the scientists, or fraud), but were considered “junk science” because the implications of the studies were not desirable. For example, studies were called junk science because the results they produced were not “appropriately weighted” when considered with other evidence, the studies came from a source that was simply presumed to be biased, or undesirable consequences that might follow from the study were not considered. Thus, by the end of the decade, the term “junk science” had come to
be used in ways quite different from the original intent of designating work that fails to pass muster inside the scientific community, denoting instead science that one did not like rather than science that was truly flawed.

Despite the muddling of the notions of sound and junk science, much effort has gone into finding ways to sort the two out in the policy process. For example, the Data Quality Act (or Information Quality Act, Public Law 106-554, HR 5658, sec. 515) was passed in 2000 and charged the Office of Management and Budget (OMB) with ensuring “the quality, objectivity, utility, and integrity of information . . . disseminated by Federal agencies,” including the information that serves as a basis in public record for regulatory decisions.6 However, there are deep tensions generally unrecognized at the heart of such solutions to the sound science–junk science problem. In an essay published in Science, Supreme Court Justice Stephen Breyer emphasized the need for “sound science” in many then-current legal cases: “I believe there is an increasingly important need for law to reflect sound science” (Breyer 1998, 538). While the importance of sound science is clear, how to identify what constitutes sound science in any particular case is a challenge, Breyer acknowledged. This is in part because the ideal for sound science contains contradictory impulses, as can be seen in Breyer’s concern that “the law must seek decisions that fall within the boundaries of scientifically sound knowledge and approximately reflect the scientific state of the art” (Breyer 1998, 537). As noted above, earlier standards of evidence, following the Frye rule, demanded that scientific testimony reflect the consensus of the scientific community. While such a standard might clearly determine the boundaries of scientifically sound knowledge, it would often exclude state-of-the-art science, which would encompass newer discoveries still in the process of being tested and disputed by scientists. Every important discovery, from Newton’s theory of gravity to Darwin’s descent by natural selection to Rutherford’s discovery of radioactivity, was disputed by fellow scientists when first presented. (Some may note that in high stakes discoveries, expert disputation can become a career unto itself.) Yet many cutting-edge scientists have strong evidence to support their novel claims. State-of-the-art science and scientific consensus may overlap, but they are not equivalent. If we want to consider state-of-the-art scientific work in our decisionmaking, we will likely have to consider science not yet part of a stalwart consensus.

In the 2000s, the rhetoric around science in policy changed again, this time to focus on “politicized science” rather than junk science. The Bush administration’s handling of science and policy led to these charges,
particularly as concern over the suppression of unwanted scientific findings arose.7 Rather than introducing junk science into the record, the worry is that sound science is being distorted or kept out of the public record altogether. Thus, the debate over the role of science in policymaking continues, even if under an altered guise.

Regardless of the form it takes, debate over sound science and junk science (or politicized science) centers on the reliability of science to be used in decisionmaking. The introduction of new jargon, however, has not helped to clarify the issues. As with the Science Wars, more heat than light has resulted. And ironically, despite the parallels between the two disputes, neither dispute seems to have noticed the other. The Science Wars were a debate among academics interested in science and science studies; the sound science–junk science dispute is a debate among those interested in the role of science in policy and law. One was about the standing of science in society; the other is about which science should have standing. These two disputes involve different texts, participants, and issues, and we should not be surprised that no general connection was made between them. Yet the origins of these two disputes can be found in the same set of historical developments, the same general understanding of science and its place in society. Both disputes and their conceptual difficulties arise from assuming that a clearly defined, authoritative, and autonomous scientific community that hands to society fully vetted scientific knowledge is the correct understanding of science’s role in society. Getting to the heart of this understanding—centering on the autonomous and authoritative view of science—will be central to finding a workable resolution to the continuing dispute over the role of science in public policy. It will also challenge the norms for scientific reasoning in general.

Overview, Context, and Limits of the Book

This book will not challenge the idea that science is our most authoritative source of knowledge about the natural world. It will, however, challenge the autonomy of science. I will argue that we have good grounds to challenge this autonomy, particularly on the basis of both the endemic uncertainty in science and science’s importance for public decisionmaking. In order to protect the authority of science without complete autonomy, I will articulate and defend ways to protect the integrity of science even as scientific endeavors become more integrated with the surrounding society. By considering carefully the importance of science for public policy, I will argue for important changes in the norms that guide scientific reasoning. In particular, I
will argue that the value-free ideal for science, articulated by philosophers in the late 1950s and cemented in the 1960s, should be rejected, not just because it is a difficult ideal to attain, but because it is an undesirable ideal.8 In its place, I will suggest a different ideal for science, one that will illuminate the difference between sound science and junk science, and clarify the importance and role for values in science. I will also argue that rejecting the value-free ideal is no threat to scientific objectivity. With these conceptual tools in hand, a revised understanding of science in public policy becomes possible. I will argue that understanding scientific integrity and objectivity in the manner I propose allows us to rethink the role of science in the policy process in productive ways, ways that allow us to see how to better democratize the expertise on which we rely, without threatening its integrity.

Key to this account is the growth in science advising in the United States. Prior to World War II, involvement of science with government was sporadic. Wartime, such as World War I, produced spurts of activity, but rather than producing a long-lasting science-government relationship, these episodes developed the forms of the relationship that would be cemented after World War II. That war was the watershed, when science established a permanent relationship with government, both as a recipient of federal support and as a source for advice. Yet the road since World War II has not been smooth. Chapter 2 will detail both how the forms of science advice originated and the ups and downs of science advising since then. Although the specific avenues for advising have shifted in the past fifty years, the steadily expanding importance of science for policymaking will be apparent.

This continual expansion is crucial to note because even as scientists were becoming more central figures in policymaking, philosophers of science were formulating an understanding of science that would turn a blind eye toward this importance. Chapter 3 will examine how the idea of the science advisor came to be excluded from the realm of philosophy of science. In particular, I will examine how the current ideal for value-free science came into existence. Although some normative impulse to be value-free has been part of the scientific world since at least the nineteenth century, the exact form of the value-free ideal has shifted. At the start of World War II, most prominent philosophers rejected the older forms of the ideal as unworkable. The pressures of the cold war and the need to professionalize the young discipline of philosophy of science generated a push for a new value-free ideal, one that was accepted widely by the mid-1960s, is still predominant among philosophers, and is reflected by scientists. I will describe
how this ideal came into existence and how it depends crucially on a belief in the autonomy of science from society.

Chapter 4 begins the critique of this value-free ideal. As we will see in chapter 3, the current value-free ideal rests on the idea that scientists should act as though morally autonomous from society, in particular that they should not consider the broader consequences of their work. Chapter 4 disputes this claim, arguing that scientists must consider certain kinds of consequences of their work as part of a basic responsibility we all share. Because of this responsibility, the value-free ideal cannot be maintained. Values, I argue, are an essential part of scientific reasoning, including social and ethical values.

This raises the question of how values should play a role in science, a question addressed in chapter 5. There I lay out a normative structure for how values should (and should not) function in science, and I argue that at the heart of science values must be constrained in the roles they play. The crucial normative distinction is not in the kinds of values in science but in how the values function in the reasoning process. While no part of science can be held to be value-free, constraints on how the values are used in scientific reasoning are crucial to preserving the integrity and reliability of science. By clearly articulating these constraints, we can see the difference between acceptable science and politicized science, between sound and junk science.

If science is and should be value-laden, then we need an account of objectivity that will encompass this norm, an account that explicates why we should trust specific scientific claims and what the bases of trust should be. In chapter 6 I provide that account, arguing that there are at least seven facets to objectivity that bolster the reliability of science and that are wholly compatible with value-laden science. We can have objective and value-laden science, and explicating how this is possible clarifies the basis for science’s reliability and authority.

Returning to the nuts and bolts of science in policymaking, chapter 7 concerns how we should understand the needed integrity for science in the policy process given the pervasive role for values in science. I will argue that attempts to separate science from policy have failed, but that the integrity of science can be brought into focus and defended in light of the philosophical work of the previous chapters. With a more precise view of scientific integrity, we can more readily understand the sound science–junk science debates and see our way through them.

Finally, in chapter 8 I present some examples of how these
considerations lead to a different understanding of science in policy, and how that understanding includes an important role for the public in the practice of policymaking. In particular, I will address the problem of how to make the proper role of values in science accountable to the public in a democracy.

Philosophers might immediately object to the trajectory of this argument on the grounds that I am confusing the norms of theoretical and practical reason. In philosophy, this distinction divides the norms that should govern belief (theoretical) and those that should govern action (practical). The basic distinction between these realms is that values should not dictate our empirical beliefs (because desiring something to be true does not make it so), even as values might properly dictate our actions (because desiring something is a good reason to pursue a course of action). John Heil (1983, 1992) and Thomas Kelly (2002), for example, have challenged the sharpness of this distinction, and while I will draw from some of their work, I will not attempt to resolve the general tensions between theoretical and practical reason here. Instead, I will argue that (1) simply because science informs our empirical beliefs does not mean that when scientists make claims based on their work they are not performing actions; and (2) the intuition that values should not dictate beliefs is still sound. Indeed, I will argue that values dictating belief would violate the norms of good scientific reasoning, thus preserving an essential aspect of the distinction between theoretical and practical reason. But dictating beliefs is not the sole role values can play. And the actions of scientists as voices of authority cannot be handled properly by merely concerning ourselves with theoretical reasoning. Making claims is performing an action, and some concerns of practical reason must be addressed. How to do this without violating the core norms of theoretical reason is at the heart of this book.

The arguments I will present in the following chapters have been developed against the backdrop of current discussions in philosophy of science, particularly on values in science. In addition, there has been some philosophical attention to science in public policy since 1990, although this has not been a central area of concern (for reasons discussed in chapter 3). Before embarking on the trajectory I have laid out above, it will be helpful to situate the arguments to come among this work.

The most careful examiner and defender of the value-free ideal for science since the 1990s is probably Hugh Lacey. In his 1999 book, Is Science Value-Free?, Lacey develops a three-part analysis of what it means to be value-free. He distinguishes among autonomy (the idea that the direction of science should be completely distinct from societal concerns), neutrality
(the idea that the results of science have no implications for our values), and impartiality (the idea that scientific reasoning in evaluating evidence should involve only cognitive and never social or ethical values) (Lacey 1999, chap. 10). Lacey strongly defends the impartiality thesis for science, arguing for a strict distinction between cognitive (for example, scope, simplicity, explanatory power) and noncognitive (for example, social or ethical) values, and for the exclusion of the latter from scientific reasoning. Lacey’s conception of impartiality captures the current standard core of the value-free ideal, as we will see in chapter 3. He is more moderate in his defense of neutrality and autonomy, arguing that neutrality is only a plausible ideal if one has sufficiently diverse “strategies” or approaches to research within disciplines, something Lacey finds lacking in many areas of current scientific practice, particularly in the arena of plant genetics (Lacey 2005, 26–27). Autonomy in research is even more difficult to assure, as the importance of funding in science has grown (see chapter 2). And recent careful reflection on policymaking for science seems to suggest that autonomy may not be desirable in the ideal (see, for example, Guston 2000; Kitcher 2001).9 While I will not address the issues of neutrality and autonomy here, I will be directly critiquing the ideal of impartiality, which Lacey views as logically prior to the other two. If my criticisms hold, then all three theses of value-free science must be rejected or replaced.

Hugh Lacey is not the only philosopher of science who has defended the value-free ideal for science while examining areas of science crucial for policymaking. Kristin Shrader-Frechette has held a steady focus on the role of science in policymaking, providing in-depth examinations of nuclear waste handling and concerns over toxic substances, and using these examples to develop concerns over the methodological flaws and weaknesses of some risk analysis processes (Shrader-Frechette 1991, 1993). Her views on the proper role for values in science have also followed the traditional value-free ideal. For example, in Risk and Rationality (1991), she argues that, “although complete freedom from value judgments cannot be achieved, it ought to be a goal or ideal of science and risk assessment” (44). In Burying Uncertainty (1993), when describing “methodological value judgments,” she considers the traditional epistemic values, which are acceptable under the value-free ideal for science, and problems of interpretation with them (27–38). She explicates clearly how the reliance on these values can create problems in risk assessment, but no alternative norms for scientific reasoning are developed. It might seem she undermines the value-free ideal in her book Ethics of Scientific Research when she writes, “Although researchers
can avoid allowing bias and cultural values to affect their work, methodological or epistemic values are never avoidable, in any research, because all scientists must use value judgments to deal with research situations involving incomplete data or methods” (Shrader-Frechette 1994, 53). However, the importance of the value-free ideal becomes apparent when Shrader-Frechette equates objectivity with keeping the influence of all values to a minimum, and still only “methodological” (or epistemic/cognitive) values are acceptable (ibid., 53). In this book, I will disagree with Shrader-Frechette on this point, arguing that the value-free ideal needs to be rejected as an ideal, and making a case for a replacement set of norms for scientific reasoning. In addition, Shrader-Frechette contends that scientists are obligated to consider the consequences of their work because of a professional duty as scientists (Shrader-Frechette 1994, chap. 2). I, however, will argue in chapter 4 that the obligation to consider the consequences of one’s choices is not a duty special to a profession or role, but a duty all humans share.

The work of Helen Longino is probably the closest to my own position on how to understand the proper role of values in science. Rather than starting with a focus on policymaking, Longino has worked from the feminist philosophy of science literature that developed out of feminist critiques of science in the 1980s. In Science as Social Knowledge (1990), she lays out a framework for understanding the ways in which values can influence science, particularly through the adoption of background assumptions. She distinguishes between constitutive and contextual values in science, arguing that both influence science in practice and content (4). Longino develops her account of values in science by examining the functioning of background assumptions in scientific research relating to gender. To address the concerns about the objectivity of science raised by these examples, she suggests that we think of science as an essentially social process, and she develops a socially based view of objectivity that can inform how science should function (ibid., chap. 4).10 While her work serves as a useful starting point for more in-depth discussions on the role of values in science and the social nature of science, I depart from her framework in several ways. First, I want to provide a more closely argued account for how the adoption of an ostensibly empirical background assumption could “encode” values, work I have begun elsewhere (Douglas 2000, 2003a). Second, I do not utilize her distinction between contextual and constitutive values in science because I want to maintain a focus on both the scientific community and the broader community within which science functions, and dividing values into the internal and external at the start obscures one’s vision at the boundary. In
addition, as Longino herself argues, the distinction can provide no ground for ideals for science, as it is a thoroughly porous boundary between the two types of values. (This issue is discussed further in chapter 5.) Finally, while I appreciate and utilize Longino’s emphasis on social aspects of science, I think we also need clear norms for individual reasoning in science, and this book aims to provide those. Thus, while my views on objectivity, as articulated in chapter 6, draw insight from Longino, I do not rest the nature of objectivity on social processes alone.

In addition to these philosophers who have grappled with the role of values in science, two writers have provided important insights on the role of science in policymaking. I see this book as expanding on the insights from these earlier works. Sheldon Krimsky, for example, has contributed much to the discussion on science and technology in public life and policymaking, focusing on the biological sciences and their import (Krimsky 1982, 1991, 2000, 2003; Krimsky and Wrubel 1998). Krimsky’s discussions are wide-ranging, and only some pick up on the themes of this book, as most of his work centers on the relationship between the public and the uses of new technology. The book that most closely relates to the concerns to be considered here is Hormonal Chaos, an overview of the endocrine disruptor debate, where Krimsky addresses problems of the acceptance of evidence as a basis for both science and policy (Krimsky 2000). While his discussion of that particular debate is rich, the general implications for understanding science in policy are not fully developed. For example, in order to get us beyond the sound science versus junk science debate, Krimsky briefly mentions a new ideal, “honest science,” which he describes as “science that discloses financial interests and other social biases that may diminish the appearance of objectivity in the work” (ibid., 187). While this sketch is suggestive, it needs further development. Which interests are relevant and why? Why is the exposure of interests important to the integrity of science? How does this fit with the ideal of value-free science, in which one’s interests are not to interfere with the interpretation of evidence? Answering these questions with more in-depth normative work is one of the purposes of this book. I will propose that it is not a full disclosure of interests that is needed for the integrity and objectivity of science, but an explicit and proper use of values in scientific reasoning. Not all interests are relevant to the doing of science, and some kinds of influence arising from interests are unacceptable, even if disclosed. Situating the scientist with respect to his or her interests is a good start, but not normatively sufficient.

In contrast to the work of Krimsky, Carl Cranor’s Regulating Toxic
Substances (1993) is more focused on the topic of the general use of science in public policy. Cranor examines the implications of accepting (or rejecting) certain levels of uncertainty in science to be used as a basis for policy, and provides a careful account of the processes of risk assessment and the uncertainties involved. I take up his focus on the trade-offs between underregulation and overregulation and expand their reach beyond how public officials and administrators should think about science to how scientists and philosophers of science should think about science, given science’s central importance in the policy process. In particular, I will address how these insights lead to rethinking our understanding of norms for scientific reasoning, the nature of objectivity, and how to differentiate junk science from sound science.

Scientists and philosophers still largely hold to the value-free ideal.11 Some claim that the value-free ideal is essential to the authority of science, to objectivity, or to the very possibility of having reliable knowledge (for example, Lacey 1999, 223). The critiques of the value-free ideal to date have been based on its untenability rather than its undesirability. I will take the latter road here and provide an alternative ideal in its place. Having a clearer understanding of how values should, and should not, play a role in science, working from the foundations of both moral responsibility and proper reasoning, should provide a clearer framework with which to examine the role of science in policymaking.

Thus, this book is about how scientists, once engaged in a particular area of research, should think about the evidence, and should present their findings, given the importance of science in policymaking. This area has reached a philosophical impasse of sorts. The value-free ideal requires that ethical and social values have no influence on scientific reasoning in the interpretation of evidence. But works like Longino’s and Cranor’s suggest that something is quite amiss with this ideal, that values and interests are influential for scientists, and perhaps properly so. What we need is a reexamination of the old ideal and a replacement of it with a new one. The ideal I propose here is not just for the social functioning of science as a community, but for the reasoning processes of individual scientists, for the practice of science and for the giving of scientific advice.

In addition to this philosophical literature on values in science and science in policy, there are related bodies of work that will not be directly addressed in this book. For example, Shrader-Frechette (1991, 1993), as well as Douglas MacLean (1986), have also done considerable work on which
values shape, or should shape, our management of risk. This work argues for more ethically informed weighing of the consequences of policy decisions, suggesting that qualitative aspects of risk, such as its distribution, the right to avoid certain risks, voluntariness, and the valuation of life, are crucial to a complete understanding of risk in our society.12 In this book, I do not engage in debates over which values in particular should inform our judgments concerning risk, but instead focus on how values in general should play a role in the science that informs our understanding of risk.

To finish setting the bounds of this book, a few caveats are in order. First, some hot-button issues will not be discussed. For example, debates over evolution have been chronically prominent in the past few decades, as first creationism, and now intelligent design, seek to challenge the content of science education through school boards and textbook disputes, rather than through the scientific debate process of journal publications and conferences. My concerns here are not with what science to teach to the young, but with what science to depend upon to make decisions in public policy.13

Second, the book focuses exclusively on the natural sciences as a source of desired expertise. My neglect of the social sciences, such as psychology, sociology, and economics, arises partly from the need for less complexity in the book, and partly from a desire to avoid the debates over the scientific standing of the social sciences. Social sciences also raise unique problems of reflexivity, as the subjects of the research can read and understand the research, and alter their behavior as a result. How the ideas I develop here would apply to such contexts awaits future work.

Finally, while my treatment brings philosophical attention and analysis to the role of science in public policy, an issue much neglected by philosophers in the past forty years, science is a changing institution in modern society, and many have noted that the most important issues of science in society may not center on science and public policy as we enter the twenty-first century. Since the 1990s, the sources of funding for science and the institutional settings for research have been changing, with potentially important consequences (Greenberg 2001; Krimsky 2003). Over half of the funding for scientific research in the 2000s comes from private sources. With intellectual property concerns keeping much private research private, one may wonder whether the issues I address here are really salient. Such a critic would have a point, but until philosophy of science ceases to define itself solely in epistemological terms, such issues can hardly be addressed.
I see this book as part of a reorientation of the discipline of philosophy of science, to begin to take seriously again a philosophical (that is, conceptual and normative) examination of science as it functions in society in all of its aspects. With such a reorientation, we will be better able to address the issues presented by these changes in scientific practice, and the implications for policy for science, broadly construed.