Intentionality and Materialism


The Nature of Mind II: Intentionality and Materialism

(I) The Intentional Stance

In his "Intentional Systems," Daniel Dennett argues that there are three different stances we can adopt when trying to understand (or come to know) something: (a) a design stance, (b) a physical stance, and (c) an intentional stance.

When we adopt the design stance, we attempt to predict the future actions of a thing or a system of things by appeal to the underlying design of that thing. Dennett's example throughout is a chess-playing computer: "one can predict its designed response to any move one makes by following the computation instructions of the program." (p. 337b) Our predictions based on the design stance will all rely on the notion of function.

When we adopt the physical stance, we make predictions based on the actual physical state of the particular object, along with our knowledge of the laws of nature. But, according to Dennett, chess-playing computers have advanced to such a degree that predicting their behavior from either the design stance or the physical stance is very difficult. "A man's best hope of defeating such a machine in a chess match is to predict its responses by figuring out as best he can what the best or most rational move would be, given the rules and goals of chess." (p. 338b) In other words, one ought to adopt the intentional stance with respect to the computer. This stance assumes rationality: "One predicts behavior in such a case by ascribing to the system the possession of certain information and supposing it to be directed by certain goals, and then by working out the most reasonable or appropriate action on the basis of these ascriptions and suppositions." (p. 339a)

But now here's the interesting move on Dennett's part: "It is a small step to calling the information possessed the computer's beliefs, its goals and subgoals its desires." (ibid.) What do you think?
Well, according to Dennett, you need not be bothered by this, because, he claims, he is not saying that computers really have beliefs and desires, only that "one can explain and predict their behavior by ascribing beliefs and desires to them." (p. 339b) In the end, "the decision to adopt the strategy is pragmatic, and is not intrinsically right or wrong." (ibid.) This claim goes back to what Dennett says at the beginning; namely, that "a particular thing is an intentional system only in relation to the strategies of someone who is trying to explain and predict its behavior." (p. 337b)

A potential problem enters, however, when we realize that intentional systems don't always obey the rules of rationality (i.e., logic). Eventually, Dennett says, we end up having to look at things from the design stance. This is actually fine, because the design stance is more reliable: "In the end, we want to be able to explain the intelligence of man, or beast, in terms of his design, and this in turn in terms of the natural selection of this design…" (p. 342a) Ultimately, then, the intentional stance presupposes rationality and intelligence; it doesn't explain them. (p. 344a)

Introduction to Philosophy: Knowledge and Reality, Dr. Brandon C. Look, University of Kentucky

What does all this mean? It means, first of all, that we might not have to worry about the question "Can a computer think?" but only about whether we are justified in treating a computer as an intentional system. Further, like Ryle (perhaps), it seems that Dennett is not saying anything about what there is, only about how we can treat things in scientific explanation. One question remains, however: Is there anything that Dennett is leaving out of this picture of seemingly intelligent beings or systems? One answer might be that what's being left out are qualia – the feelings of my 1st-person perspective. This model does seem fine – but only from the 3rd-person perspective.
(II) Eliminative Materialism

The main claim of Paul Churchland's piece is that all the concepts of "folk psychology" – e.g., beliefs, desires, fear, sensation, etc. – will be (or can be) eliminated by a completed neuroscientific theory. In other words, at some point in the future we will cease to recognize their real existence, just as we have ceased to recognize any number of concepts from earlier scientific theory.

Churchland gives three reasons to believe that the concepts of folk psychology should be abandoned. (pp. 351a–352b) First, folk psychology often fails to predict and explain things. Second, our early theories in other fields of science were confused and unhelpful – so why think our crude folk-psychological theories are any more accurate? Third, the prospects of adequately making one-to-one correspondences between folk-psychological states and brain states (as expected by identity theories) are not great.

Against the counter-argument that "one's introspection reveals directly the existence of pains, beliefs, desires, fears and so forth" (p. 352b), Churchland points out that "all observation occurs within some system of concepts, and our observation judgments are only as good as the conceptual framework in which they are expressed." (ibid.) The point is that eliminative materialism will produce a wholesale trashing of the conceptual framework of beliefs, desires, and so on. (His response to the second counter-argument is similar.) The third counter-argument to eliminative materialism is that it exaggerates the defects of folk psychology and presents a romantic (?!) and enthusiastic picture of possible progress. Perhaps, Churchland says. But it is clear that one should try to go the hard-core materialist route.

The Nature of Mind III: Artificial Intelligence

Is materialism true?
Or, better: can we explain all phenomena purely in terms of the states and interactions of matter and physical laws? It might have seemed that mental phenomena are resistant to such explanations. But our question for today is this: can we legitimately say that a computer (which is a material thing) thinks? If so, then we have at least one kind of material object whose mental properties are simply the effects of its material components. And if that material thing can be said to think, why can't we simply be material things that think?

I. "Leibniz's Mill"

Leibniz was one of the first to take seriously the challenge of materialism with respect to the mind. In §17 of his "Monadology" (1714), he writes the following:

[P]erception, and what depends on it, is inexplicable in terms of mechanical reasons, that is, through shapes and motions. If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters into a mill. Assuming that, when inspecting its interior, we will only find parts that push one another, and we will never find anything to explain a perception.

In other words, the mind and its contents cannot be explained solely in material terms; the material cannot give us an explanation of perceptions and thoughts.

II. The "Turing Test"

I have asked before in class "Can a machine think?" or "Can a computer think?" In his classic article, "Computing Machinery and Intelligence" (1950), Alan Turing is dismissive of this formulation of the question. Instead, he offers a famous approach to the general problem of the nature of thought and computation: an "imitation game" (which has since come to be known as a "Turing Test"). The game has three players: A, B, and C. Let A be a computer and B a person. C acts as an interrogator, who tries to determine who (or what) A is and who (or what) B is, but who cannot see A or B.
Turing's question becomes "Could a computer fool the interrogator into thinking that it is a person?" Or, in other words, "Could a computer be programmed so that its answers to ordinary questions in natural language were so like the answers of native speakers that a blind observer couldn't determine whether it was a computer?" In his article, Turing predicts that within 50 years machines ought to be able to do well in the imitation game (i.e., the average interrogator should have no more than a 70% chance of making the right identification after five minutes of questioning). (p. 361a) To the best of my knowledge, this has not happened. But there is a yearly Loebner Prize for computers that imitate human conversation. And, if you like, you can chat on-line with a computer, "A.L.I.C.E." (follow the link on the course website).

By the way, if you've seen Blade Runner, then you know that Blade Runners (special police whose job it is to "retire" replicants) interrogate others, trying to determine whether or not they are androids.

III. Searle and the "Chinese Room" Experiment

A. The Original Thought Experiment and Argument

Searle's goal in "Minds, Brains, and Programs" is to show that the claims of "strong AI" are false. "Strong AI" is characterized by the belief that "the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (p. 368b) He asks us to consider a case in which he is in a room and given English instructions for manipulating Chinese characters – that is, when given a certain input in Chinese, he has directions for how to produce an output in Chinese characters.