Running Head: Approaching the “Ought” Question

Approaching the “Ought” Question when Facing the Singularity1

Chad E. Brack

National Defense University

College of International Security Affairs

April 1, 2015

DISCLAIMER: THE OPINIONS AND CONCLUSIONS EXPRESSED HEREIN ARE THOSE OF THE INDIVIDUAL STUDENT AUTHOR AND DO NOT NECESSARILY REPRESENT THE VIEWS OF THE NATIONAL DEFENSE UNIVERSITY, THE DEPARTMENT OF DEFENSE OR ANY OTHER GOVERNMENTAL ENTITY. REFERENCES TO THIS STUDY SHOULD INCLUDE THE FOREGOING STATEMENT.

1. This paper was a submission for CISA Master of Arts in Strategic Security Studies course 6933: Science, Technology, and War. The topic question was “Can knowledge (the output of science) be a bad thing for humans? How should humans view the Bill Joy–Kurzweil debate?”

Approaching the “Ought” Question

Many scholars agree that science itself should not be expected to answer “ought” questions (Bauer 1994; Feyerabend 1975; Midgley 1994; Hoover 2010). As a theoretically amoral system,2 science is simply the pursuit of knowledge or, as Kenneth Hoover (2010) puts it, “a strategy for learning about life and the universe” (134). Science readily provides us with three of the four types of knowledge: empirical, analytical, and theoretical. Normative knowledge, however, stems from human values and norms. It is not scientific by nature and can be highly relative and subjective. Normative questions require “other forms of understanding” (Hoover 2010, 130), which come from many facets of human experience (including the three types of knowledge listed above).

The empirical, analytical, and theoretical knowledge that science provides us is neither good nor bad. It simply is. It is in the application of that knowledge — in the form of technology — that the “ought” question comes into play. Technology allows us to apply scientific knowledge to the world around us. That ability does not imply, however, that we should pursue all forms of technology simply because we can, or that the benefits of our technology will outweigh its potential costs. Who should be responsible for such decisions is up for debate. Some people argue that scientists should take the lead in decision-making. Others believe that even matters of science and technology remain in the realm of politics. According to Marvin Minsky (2014), scientists are no better (and perhaps worse) than anyone else at deciding what is good or bad. Such questions of morality and ethics are bigger than science, and their consequences can profoundly impact not only all of humanity but the world at large.

Bill Joy and Ray Kurzweil approach the technological “ought” problem from opposite angles. Joy foresees knowledge-enabled weapons of mass destruction as a likely means to the end of human civilization, while Kurzweil paints technology as the ultimate savior of mankind. Both futurists speak of a coming technological singularity — a point at which technology irreversibly transforms humanity — with Joy pessimistically advocating technological or knowledge-based regulation and Kurzweil optimistically encouraging exponential technological growth. Both men bring valid arguments and concerns to the discussion, but both also fall victim to assumptions that drastically color their conclusions. Both view the development of technology as a normative question, and both seek to steer mankind toward a positive future — one by avoiding and the other by embracing the singularity. Both approaches are ideological and emotional by nature, and both fail to recognize that technological singularities have commonly occurred in the past.3 Humanity is already irreversibly bound to technology, and no technology to date has proven to be the destroyer or the savior of mankind.

2. I label science “theoretically amoral” because, although the system itself is amoral by nature, its findings can be skewed, tainted, or manipulated, whether purposefully or inadvertently, by biased individuals or groups with specific agendas.

3. Academics discuss different types of singularities, most of which revolve around the development of artificial intelligence (AI) or post-humanism (the point at which humans and machines merge). According to my analysis, Kurzweil and Joy are speaking about two different types of singularities. Kurzweil leans toward the AI-based or post-human singularity, while Joy speaks of a vaguer technological singularity in which human knowledge—through technology—produces irreversible consequences of some kind. In this paper, my use of the term coincides with Joy’s more general definition. The Kurzweilian singularity, of course, has not yet occurred.


Though each scenario is a real possibility, current empirical data gives no reason to believe that the next major technological advancement will break the pattern.

Joy (2000; 2008) posits that knowledge can, indeed, be a bad thing. He argues that humanity should willingly limit its pursuit of certain types of knowledge and come together as a species to determine which forms of technology should be relinquished (Joy 2000, 8). His argument revolves around the potential for genetic engineering, nanotechnology, and robotics (GNR) to damage the physical world. Because such potentially destructive technologies can self-replicate, once loosed upon the environment they may prove unstoppable (3). Furthermore, Joy recognizes that the research and development of GNR technologies has moved from secretive government projects, many of which relied heavily on hard-to-get materials, to more open-source corporate environments in which knowledge itself enables production. Unlike nuclear bombs, which require the combination of expertise and elusive fissile materials, knowledge-enabled weapons of mass destruction might become easily accessible to the general population and could prove more destructive than nuclear weapons. They may also be extremely easy to assemble, requiring little more than a bit of knowledge and readily available parts (6). For Joy, the risk of such knowledge-enabled destruction is not worth the potential rewards of pursuing the knowledge itself. He argues that certain GNR technologies carry such vast inherent destructive potential that their very development warrants a ban.

Joy is also concerned about the development of general artificial intelligence. He believes that a new, in some ways superior, species would inevitably compete with us for resources, energy, and power, and he notes that biological species rarely survive encounters with superior competitors. Furthermore, humans could become so dependent on technology that they are forced to accept machine decisions that might not be in humanity’s best interests (1–2).4 Joy worries that we tend to fail to “understand the consequences of our inventions while we are in the rapture of discovery and innovation” (4). He dismisses the Kurzweilian notion of human-robot synthesis, seeing it as a road to the loss of what we consider “human,” which could lead to the oppression or extinction of purely biological humanity. Though not without valid concerns, Joy’s technological pessimism borders on paranoia and leaves little room for a non-dystopian future.

Kurzweil, on the other hand, may not be cautious enough (Grossman 2011; Kurzweil 2014). His version of the future is extremely utopian, one in which even death has been conquered (Grossman 2011, 1, 4). Kurzweil envisions a melding of humanity with technology. He sees artificial intelligence billions of times smarter than humans solving the world’s toughest problems. Meanwhile, humans with genetic enhancements will be smarter, faster, stronger, and healthier than ever before (Transcendent Man 2009). Eventually humans will be able to download their intelligence into robot vessels and travel the stars, re-engineering all of the matter in the universe (Grossman 2011, 5).5 Furthermore, Kurzweil predicts that the singularity will happen much sooner than anyone expects (3).

4. Joy uses quotes from the Luddite Theodore Kaczynski (the Unabomber) and robotics researcher Hans Moravec to express his concerns about technology run amok.

5. This idea sounds like a precursor to the anthropic universe described by Frank Tipler (Barrow and Tipler 1986).

Simply stated, Kurzweil sees technology as the answer to all of humanity’s problems; for him, banning it would be unethical, dangerous, and impossible (4).

Both authors agree that technology’s exponential growth will soon offer awe-inspiring transformative power. What they do not agree on is whether that transformative power will ultimately be good or bad for humanity or the planet Earth. Current data seems to suggest that the answer is neither good nor bad. Or perhaps we could logically conclude that it is actually both. A more interesting question is whether our opinion truly matters. Do we even have the power to choose?

Technological determinism can be viewed in two distinct ways. The first says that technology itself is a driving force containing “agency within the scope of human history.”6 Kevin Kelly (2010) frames this idea in terms of evolution. He describes technology’s tendency to move toward higher forms of complexity as universal and inevitable. For Kelly, once precursor technologies are in place, the next step will certainly occur. We do not have the power to stop progress, but we do have the power to influence how it happens, e.g., choosing which type of light bulb to pursue, not whether or not to produce artificial light. The second way to view technological determinism is based on a kind of sunk cost through dependency. As mentioned briefly above, humanity has experienced a number of singularities in which technology has shaped human evolution. From physiological changes within our own bodies to how we communicate and interact with our environment, humans are wholly dependent on scientific knowledge and technological development to maintain our current evolutionary path. Modern humanity is demonstrably a product of its technology; from the clothes we wear to the food we eat and the shelters we construct, only under rare conditions can we survive without it. Regardless of agency, technology is an irreversible determinant of our future.

With both types of technological determinism in mind, Joy’s (2000) suggestion to relinquish certain types of knowledge may not be a viable option. With ever-present threats in the form of natural disasters, disease, climate change, asteroid impacts, solar flares, and the like, we need all of the knowledge we can get. Furthermore, many of the threats we face are by-products of the very technology we will depend on for solutions. At this point in our technological development, denying ourselves the ability to expand our capabilities may leave us exposed to an environment we can no longer handle. We may be forced to choose between possible destruction by the technologically unknown and near-certain death in an already hostile environment that we cannot cope with absent the technology that keeps it hospitable. Which is the more rational concern?

Many people, Joy included, seem to assume that advanced societies escape self-destruction only narrowly, if at all. In his essay, Joy quotes Carl Sagan: “science, they recognize, grants immense powers…some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils” (6). Some even offer the probability of self-destruction as a potential solution to the Fermi Paradox or as an addition to the Drake equation.7

6. This quote is taken from the February 28th Science, Technology, and War class on technological determinism.

In reality, however, the only data available to us clearly demonstrates technological success. Not only are we the only technological civilization that we know of, but we also possess nuclear capabilities and have not yet destroyed ourselves. Furthermore, we have indisputable evidence of multiple mass extinction events on our own planet, and we observe massively destructive phenomena everywhere we look in space. The odds of long-term survival are not in our favor, but technological advancement may eventually offer us the upper hand. We simply face a much greater chance of extinction by foregoing the use of technology than we do by accepting the potential risks attributable to it.

Joy worries about the development of artificial intelligence or post-humanism challenging our notions of humanity (5). He is also concerned about creating a superior intelligence that might decide to wipe biological humanity from existence. His fundamental question is whether we will “survive our technologies” (9). Perhaps we will not, but we also will not survive without them, at least not in our present form. Humanity will evolve even if it does not suffer mass extinction. Change is inevitable, but now we are on the brink of possessing the power to influence our own evolution by steering it in whatever directions we see fit. If we relinquish that ability out of fear of the unknown, then we leave ourselves at the mercy of the forces that have extinguished the vast majority of life that has ever existed on this planet. We might be the first species to break that cycle. And we may even take a new species along for the ride.

Joy’s fear of artificial intelligence may be justified from a human standpoint. If we expect the created to mirror the creator, then we would probably be wise to be concerned. Such a prediction may be very shortsighted, however. Humans tend to fear the unknown, and that fear sometimes leads to violence, war, and irrational behavior. We simply cannot know how an AI with access to the collective knowledge of humanity would behave. Would it have reason to be fearful? If each of us truly understood our enemies, would we continue to go to war? Should we expect a massively intelligent being to act as we do? Such knowledge seems more likely to tend toward empathy. Maybe super-intelligent machines or post-human enhancements are what we require to transcend the cycle of violence to which humanity currently seems enslaved. Perhaps, as Jim Hendler (2015) suggests, we simply need AI to help us solve the world’s problems.

Then again, Joy’s fear may be warranted. We cannot predict how such a creature would interact with other beings. Grossman (2011) assumes that AIs would expand their intellect at exponential rates, not concerned with taking “breaks to play FarmVille” (1). They may do just that. Or perhaps they will become obsessed with figuring out how to “taste” spaghetti, create the “perfect” piece of cardboard, or simply leave us behind to explore the universe. They might even shut themselves down out of depression, or launch all-out war on all biological life for reasons we could not comprehend. We cannot know how other fully self-aware creatures will behave because we have not met them yet. The tiny chance that they will decide to extinguish humanity is a small risk to take relative to what their existence could offer us. The likelihood of human-AI hostilities is more a question of ethics than of existential risk. Besides, they will most likely depend on mankind — at least in the beginning — for their own survival.

7. The Fermi Paradox describes the contradiction between the high probability of extraterrestrial life in the Universe and the lack of any indication of its presence. The Drake equation attempts to estimate the amount of intelligent life in the Milky Way Galaxy.
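For reference, the Drake equation is conventionally written as

\[
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L,
\]

where \(N\) is the number of detectable civilizations in the Milky Way, \(R_{*}\) is the rate of star formation, \(f_{p}\) is the fraction of stars with planets, \(n_{e}\) is the average number of potentially habitable planets per star with planets, \(f_{l}\) is the fraction of those planets on which life arises, \(f_{i}\) is the fraction of those that develop intelligent life, \(f_{c}\) is the fraction of those that produce detectable signals, and \(L\) is the length of time such civilizations remain detectable. The “addition” mentioned in the text would presumably enter as one more multiplicative term, something like the fraction of civilizations that survive their own technology, which would shrink \(N\) accordingly.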

And if they are, indeed, the next step in human evolution, then refusing their creation would mean evolutionary suicide.

Joy’s fear may be a bit excessive, but Kurzweil’s argument does not escape criticism either. In the documentary Transcendent Man (2009), multiple scholars took issue with his claims, but their main complaints centered on his overly optimistic timeline. Kurzweil predicts technological breakthroughs on timelines that experts simply cannot accept. Whether he is right or wrong about the timeframes of such developments, however, is really just a red herring. It is his message that counts, and Kurzweil assumes that technology will lead to utopia. If it somehow does, the road there will be much harder than he expects. Thomas Mahnken (2008) and Michael O’Hanlon (2009) both describe the science and technology of war since World War II. The competition inherent to warfare tends to drive technological advancement exponentially. Many of the technologies that make our lives better are by-products of technologies meant to take the lives of others. Likewise, competitive reactions to actual or perceived technological advancement, whether or not it is meant for war, can result in violence. Kurzweil may be too quick to rely on the power of technology while forgetting about the human element inherent to it.

In the documentary Singularity or Bust (2012), Hugo de Garis describes his prediction of an apocalyptic war between those who support (Cosmists) and those who oppose (Terrans) the creation of godlike, superintelligent machines (Artilects). Although all-out war between Luddites and technophiles sounds unlikely, de Garis makes an important point. Technology alone may not solve, and could even worsen, societal or international systemic problems. Limited supply or affordability of enhancement technologies may increase social inequality, introducing problems that the technology itself cannot solve. Though humans might develop the ability to alter people’s emotional states to prevent violent or irrational behavior, ethical concerns could prevent them from doing so. Similarly, some communities may refuse certain technologies while others embrace them, causing new social issues we cannot predict. Furthermore, new technologies might grant people superhuman abilities and near immortality, but those people will still be subject to Joy’s concerns. Even Kurzweil’s post-humans and AIs must answer the “ought” questions.

Joy’s and Kurzweil’s drastically different views of the singularity spark emotional responses to our possible future. Joy’s fearful potentialities are unlikely scenarios, but they carry substantial costs if they prove true. Kurzweil’s optimistic future encourages naivety through overreliance on technology, leaving plenty of room for destruction by other means (including Joy’s predictions). The argument really boils down to risks versus rewards. Should humans plan for the worst or strive for the best? How do we answer the “ought” question? In my view, technological determinism works two ways: it simultaneously pushes us forward while saving us from our past. Plus, it would be nice to see an anthropic universe come to fruition. As Tyler Durden put it in Fight Club (1999), “I say…let’s evolve, let the chips fall where they may.”


References

Barrow, John D., and Frank J. Tipler. 1986. The Anthropic Cosmological Principle. Oxford, UK: Oxford University Press.

Bauer, Henry H. 1994. Scientific Literacy and the Myth of the Scientific Method. Urbana, IL: University of Illinois Press.

Dye, Raj. 2012. “Singularity or Bust,” edited by Alex MacKenzie. Raj Dye and Pacific Coast Digital, Inc. YouTube.com, published November 3, 2013. Accessed March 25, 2015. https://www.youtube.com/watch?v=owppju3jwPE.

Feyerabend, Paul. 1975. Against Method. London: New Left Books.

Fight Club. 1999. Directed by David Fincher. 20th Century Fox.

Grossman, Lev. 2011. “2045: The Year Man Becomes Immortal.” Time, February 10. Time, Inc. Accessed March 18, 2015. http://content.time.com/time/magazine/article/0,9171,2048299,00.html.

Hendler, Jim. 2015. “Artificial Intelligence vs Humans.” TEDx Talks, YouTube.com, published February 5, 2015. Accessed March 25, 2015. https://www.youtube.com/watch?v=5rNKtramE-I.

Hoover, Kenneth. 2010. “Morality and the Limits of Science,” in The Elements of Social Scientific Thinking. Boston, MA: Cengage Learning, Wadsworth.

Joy, Bill. 2000. “Why the Future Doesn’t Need Us.” Wired 8, no. 4 (April). Conde Nast Digital. Accessed March 18, 2015. http://archive.wired.com/wired/archive/8.04/joy.html.

Joy, Bill. 2008. “What I’m Worried About, What I’m Excited About.” TED Talks, YouTube.com, published November 25. Accessed March 23, 2015. https://www.youtube.com/watch?v=LN2shXeJNz8.

Kelly, Kevin. 2010. “What Technology Wants.” TEDx Talks, YouTube.com, published December 8. Accessed March 25, 2015. https://www.youtube.com/watch?v=nF-5CMozGWY.

Kurzweil, Ray. 2014. “Get Ready for Hybrid Thinking.” TED Talks, YouTube.com, published June 2. Accessed March 23, 2015. https://www.youtube.com/watch?v=PVXQUItNEDQ.

Mahnken, Thomas G. 2008. Technology and the American Way of War Since 1945. New York, NY: Columbia University Press.

Midgley, Mary. 1994. Science as Salvation: A Modern Myth and Its Meaning. London: Routledge.


Minsky, Marvin. 2014. “Kurzweil Interviews Minsky: Is Singularity Near?” YouTube.com, published July 14. Accessed March 24, 2015. https://www.youtube.com/watch?v=RZ3ahBm3dCk.

O’Hanlon, Michael E. 2009. The Science of War. Princeton, NJ: Princeton University Press.

Transcendent Man. 2009. Directed by Robert Barry Ptolemy. Ptolemaic Productions in partnership with Therapy Content.
