
NEWSLETTER | The American Philosophical Association

Philosophy and Computers

FALL 2015 | VOLUME 15 | NUMBER 1

FROM THE EDITOR
Peter Boltuc

FROM THE CHAIR
Thomas M. Powers

CALL FOR PAPERS

FEATURED ARTICLE
Troy D. Kelley and Vladislav D. Veksler
Sleep, Boredom, and Distraction—What Are the Computational Benefits for Cognition?

PAPERS ON SEARLE, SYNTAX, AND SEMANTICS
Selmer Bringsjord
A Refutation of Searle on Bostrom (re: Malicious Machines) and Floridi (re: Information)
Marcin J. Schroeder
Towards Autonomous Computation: Geometric Methods of Computing
Ricardo R. Gudwin
Computational Semiotics: The Background Infrastructure to New Kinds of Intelligent Systems

USING THE TECHNOLOGY FOR PHILOSOPHY
Shai Ophir
Trend Analysis of Philosophy Revolutions Using Books Archive
Christopher Menzel
The Daemon: Colin Allen's Computer-Based Contributions to Logic Pedagogy

BOOK HEADS UP
Robert Arp, Barry Smith, and Andrew Spear
Book Heads Up: Building Ontologies with Basic Formal Ontology

LAST-MINUTE NEWS
The 2015 Barwise Prize Winner Is William Rapaport


© 2015 BY THE AMERICAN PHILOSOPHICAL ASSOCIATION | ISSN 2155-9708

APA NEWSLETTER ON Philosophy and Computers

PETER BOLTUC, EDITOR | VOLUME 15 | NUMBER 1 | FALL 2015

FROM THE EDITOR

Peter Boltuc
UNIVERSITY OF ILLINOIS, SPRINGFIELD

Human cognitive architecture used to be viewed as inferior to artificial intelligence (AI). Some authors thought it could easily be reprogrammed using standard AI so as to be more efficient; as Aaron Sloman once put it, our brains are a strange mixture of amphibian and early mammalian remnants. In this issue, we feature an article that seems to show otherwise.1 Research by Troy Kelley and Vlad Veksler demonstrates that "many of the seemingly suboptimal aspects of human cognitive processes are actually beneficial and finely tuned to both the regularities and uncertainties of the physical world" and even to optimal information processing. Sleep, distraction, even boredom turn out to be optimal cognitive solutions; in earlier work, Kelley and Veksler showed how learning details in early childhood and then much more cursory acquaintance with new situations and objects is also an optimal learning strategy.2 For instance, sleep allows for "offline memory processing," which produces "an order of magnitude performance advantage over other competing storage/retrieval strategies." Boredom is also "an essential part of a self-sustaining cognitive system." This is because "our higher level novelty/boredom [algorithm] and the lower level habituation algorithm" turn out "to be a useful and constructive response to a variety of situations." In particular, the "boredom/novelty algorithm can be used for . . . landmark identification in navigation," while "the habituation algorithm" allows for much needed shifts of attention. Even distraction is beneficial since: "An inability to get distracted by external cues can be disastrous for an agent residing in an unpredictable environment, and an inability to get distracted by tangential thoughts would limit one's potential for new and creative solutions." Hence, Kelley and Veksler show how sleep, boredom, and distraction are important components of a robot's behavior.

John Searle's old argument that computers are syntactic engines unable to do semantics is the background theme of the following three papers. We begin with Selmer Bringsjord's discussion piece. First, Bringsjord reacts to Searle's critique of N. Bostrom's argument about potentially malicious robots. According to Searle, "computing machines merely manipulate symbols" and so cannot be conscious, and to be malicious one would have to be conscious. Bringsjord questions Searle's assumption that maliciousness presumes consciousness and gives what seems like a good case. In the second part, the author questions Searle's objection, raised against Floridi, that information is necessarily observer relative. Bringsjord points out that the main problem visible in Searle's paper is his "failure to understand how logic and (. . .), as distinguished from informal analytic philosophy, work."

While Bringsjord accepts Searle's well-known point that computers and robots function just at the syntactic level, Marcin Schroeder argues that this point is contingent on Turing's architecture. He points out that "in the description of Turing machines there is nothing that could serve as interpreter of the global configuration" so that "this interpretation is always made by a human (. . .)." Yet, Schroeder argues, "we can consider a machine built based on the design of the Turing machine, but with an additional component, which assumes the role currently given to a human agency." Schroeder's paper is an attempt to sketch out the conditions of such a semantic machine.

Schroeder argues that "integration of information is . . . the most fundamental characteristic of consciousness." According to the author, in order to lay the "foundations not only for the syntactics of information, i.e., its structural characteristics, but also for its semantics (. . .) we can employ the mathematical theory of functions preserving information structures—homomorphisms of closure spaces." This is an attempt to "cross the border between two very different realms, that of language, i.e., symbols, and that of entities in the physical world." Historically, "since symbols seemed to require an involvement of the conscious subject associating each symbol with its denotation, the border was identified with the one between mind and body." This is the classical approach of Brentano, developed by Searle in his early work in semantics. "Intention of a symbol (. . .) directs the mind to the denotation." In response to this conception, Schroeder argues that "in reality when we associate a symbol with its denotation, we do not make an association with the physical object itself, but with the information integrated into what is considered to be an object." Hence, "the association between a symbol and its denotation is a relationship between two informational entities consisting of integrated information." However, it is integrated in two different information systems. The author argues that "the mental aspect of symbolic [information] is not in its intention, or in the act of directing towards denotation, but in the integration of information into objects." Symbolic information is, in fact, intentional, it is "about," but this aboutness takes place through correspondence between information systems. Such "aboutness" does not require any correspondence between entities of different ontological status, which was necessary in all approaches to intentionality from Scholastics to Franz Brentano and beyond. Later in the article, Schroeder discusses mechanical manifestations of information, so as

to focus on a relatively formal presentation of what he calls geometric methods of computing. This is important in the controversy with Searle since "geometric computation of higher level (. . .) can serve as a process of generation for the lower level."

Ricardo Gudwin also focuses on the problem highlighted by Searle: "How to attribute meaning to symbols?" The issue lies at the intersection of computer science, philosophy, and semiotics. Gudwin presents what he calls "computational semiotics," viewed as an attempt to find an "alternative approach for addressing the problem of synthesizing artificial minds." He argues that Peirce's theory provides a better model for computer engineering than mainstream semantic theories. In the process, Gudwin provides a helpful history of intelligent systems (largely following Franklin's classical account). He focuses on resolving the problem of whether knowledge representation by a computer program is "symbolic" or "numerical." He builds on Barsalou's theory of perceptual symbols. Gudwin argues that Peirce's semiotics is compatible with Barsalou's proposal for grounded cognition and, in fact, provides the best account of meaning, very much applicable in artificial intelligence.

In the final part of the newsletter, we have three contributions: Shai Ophir uses big data analysis to show how concepts central to some of the most famous philosophers of the past were gaining popularity for over a generation before those philosophers were even born. This is one more argument in favor of the thesis that philosophical thinking is an essentially social process. The article provides an interesting example of digital analysis in the humanities. Christopher Menzel's paper is the last of the set of articles devoted to Colin Allen, which we started publishing last year. The author focuses primarily on Allen's pedagogical achievements and, in particular, his leading contribution to creating an early Logic Daemon proof-checker for natural deduction. Allen is also presented as one of the pioneers of big data mining. We close with a note on the book Building Ontologies with Basic Formal Ontology by Robert Arp, Barry Smith, and Andrew Spear.

This introductory note is immediately followed by a note from Tom Powers, chair of the APA Committee on Philosophy and Computers. Tom gives an overview of the main organizations that welcome philosophers interested in the broad field of philosophy and computing. He also talks about some of the main conferences. Below Tom's column, please find a note to potential authors. We always search for articles, shorter papers, information pieces, even cartoons if they pertain to the issues in philosophy and computers, very broadly understood—they also need to satisfy the standards of a professional peer-reviewed publication related to philosophy. While committee news takes precedence, we gladly publish contributions from all authors, based both in the United States and abroad. For instance, in the current issue we are glad to publish articles by experts in computer science and AI, and by philosophers, who come from major universities, military research, small colleges, and industry; they are located in the United States, Japan, Brazil, and Israel. Some of the articles were invited, but most came as regular submissions. There is no strict deadline, but the fall issue closes in May and the spring issue in mid-December. To give our potential authors a heads up, we give special attention to the winners of the Barwise Prize. For the upcoming issue, we are particularly interested in papers related to the work of Helen Nissenbaum, the 2014 Barwise Prize winner. I hope to receive many more submissions, and I want to invite the readers to contribute.

Last-minute news! William Rapaport is the laureate of the 2015 Barwise Prize. See the note at the end of this issue.

NOTES

1. For instance, at the AI and Consciousness: Theoretical Foundations and Current Approaches conference organized by A. Chella and R. Manzotti (2007).

2. T. D. Kelley, "Robotic Dreams: A Computational Justification for the Post-Hoc Processing of Episodic Memories," International Journal of Machine Consciousness 6, no. 2 (2014): 109–23.

FROM THE CHAIR

Thomas M. Powers
UNIVERSITY OF DELAWARE

As the summer conference season winds down, I thought it would be a good time to reflect upon the organizational structures for the scholarly field of philosophy and computing. These structures are not to be taken for granted; much intellectual inspiration and professional collaboration comes from meetings such as conferences, workshops, symposia, etc., and the APA Committee on Philosophy and Computers is just one such organizing entity. At the three APA divisional meetings, we are fortunate to be able to place our committee sessions in the main program. Outside of APA meetings, the field relies on independent, international organizations to bring philosophers together and push the conversation forward.

There are many such organizations with members drawn primarily from philosophy: the International Society for Ethics and Information Technology (INSEIT), which sponsors the Computer Ethics: Philosophical Enquiry (CEPE) meetings, the International Association for Computing and Philosophy (IACAP), the Society for the Philosophy of Information (SPI), and the Society for Philosophy and Technology (SPT) are the main anglophone organizations. Other organizations, such as ETHICOMP and the Association for Practical and Professional Ethics (APPE), have more interdisciplinary membership, and still others, like the Association for Computability in Europe (CiE) and the Association for Computing Machinery—Special Interest Group for Computers & Society (ACM SIGCAS), have a technical or engineering orientation but welcome philosophical contributions. So the first point here is that there are plenty of organizations and meetings that compete for the interest of philosophy-and-computing people.

My second point concerns organizational collaboration: I think it is a good thing. From June 22 to 25 of 2015, I hosted the first joint IACAP-CEPE International Conference at the University of Delaware. Thanks to excellent scholarly contributions and the work of my co-organizers—Charles


Ess, Mariarosaria Taddeo, and Elizabeth Buchanan—the conference seemed to be a success. This is not the first time organizations related to computing and philosophy have held joint meetings. CEPE and ETHICOMP held a joint meeting in Paris in 2014 and will repeat the collaboration in 2017. In general, the reasons for holding joint meetings are practical and intellectual. For practical reasons, it makes sense to spread fixed costs like venue rental and logistical support over two or more groups. Two organizations holding a joint conference will generally have fewer costs than the sum of two individual conferences. And primarily owing to travel costs, it is more economical for a participant to attend one joint conference than to attend the conferences of two separate organizations. These practical reasons are important, since funding for academic meetings is getting harder to come by for many of us.

The intellectual reasons are important too. Each of these organizations has a distinct culture, yet focuses on recurring issues that run through the field of computing and philosophy. Questions in ethics, epistemology, metaphysics, philosophy of mind, philosophy of information, and the philosophy of computer science typically do not respect organizational boundaries. What I learn from hearing philosophers discuss computer ethics is quite different from hearing similar discussions by computer scientists. In much of philosophy there is a prejudice in favor of excluding non-specialists because the resulting discussions are supposedly more "serious" and "deep." I think the opposite is true in computing and philosophy: we often learn more from interdisciplinary conversations than from the disciplinary ones.

My final point about organizational structures concerns the non-philosophical world. While interest in our field is growing within philosophy—as manifest by the number of new journals, books, and articles on topics in philosophy and computing—ostensibly it is growing faster outside of academia. Indeed, it is now common to find these topics mentioned in The New York Times, Wired, the Atlantic, or other popular media. In the last year alone, I recall about a dozen popular media articles on machine ethics or robotic ethics. Academics are taking note, too; the leading scientific journal Nature just published "Machine Ethics: The Robot's Dilemma" by Boer Deng. Here, Deng notes that "[w]orking out how to build ethical robots is one of the thorniest challenges in artificial intelligence."1 Philosophers have known this for years!

Our organizations should be poised to greet these signs of interest and to draw attention to philosophical work that can help bring some clarity to issues and also a higher profile to our discipline. If you are still reading at this point, and you haven't yet engaged with one of these organizations, I urge you to do so and to help advance the field of philosophy and computing. There are plenty of welcoming opportunities to do so, and the time is ripe.

NOTES

1. Boer Deng, "Machine Ethics: The Robot's Dilemma," Nature 523, no. 7558 (2015): 24–26.

CALL FOR PAPERS

It is our pleasure to invite all potential authors to submit to the APA Newsletter on Philosophy and Computers. Committee members have priority since this is the newsletter of the committee, but anyone is encouraged to submit. We publish papers that tie in philosophy and computer science or some aspect of "computers"; hence, we do not publish articles in other sub-disciplines of philosophy. All papers will be reviewed, but only a small group can be published.

The area of philosophy and computers lies among a number of professional disciplines (such as philosophy, cognitive science, computer science). We try not to impose writing guidelines of one discipline, but consistency of references is required for publication and should follow the Chicago Manual of Style. Submissions should be addressed to the editor, Dr. Peter Boltuc, at [email protected].

FEATURED ARTICLE

Sleep, Boredom, and Distraction: What Are the Computational Benefits for Cognition?

Troy D. Kelley
U.S. ARMY RESEARCH LABORATORY, ABERDEEN PROVING GROUND, MD

Vladislav D. Veksler
DCS CORP, U.S. ARMY RESEARCH LABORATORY, ABERDEEN PROVING GROUND, MD

ABSTRACT

Some aspects of human cognition seem to be counterproductive, even detrimental to optimum intellectual performance. Why become bored with events? What possible benefit is distraction? Why should people become "unconscious," sleeping for eight hours every night, with the possibility of being attacked by intruders? It would seem that these are unwanted aspects of cognition, to be avoided when developing intelligent computational agents. This paper will examine each of these seemingly problematic aspects of cognition and propose the potential benefits that these algorithmic "quirks" may present in the dynamic environment that humans are meant to deal with.

INTRODUCTION

In attempting to develop more generally intelligent software for simulated and robotic agents, we can draw on what is known about human cognition. Indeed, if we want to develop agents that can perform in large, complex, dynamic, and uncertain worlds, it may be prudent to copy cognitive aspects of biological agents that thrive in such an environment. However, the question arises as to which aspects of human cognition may be considered the proverbial "baby" and which may be considered the "bathwater." It would be difficult to defend the strong view that none of human cognition is "bathwater," but it is


certainly the case that many of the seemingly suboptimal aspects of human cognitive processes are actually beneficial and finely tuned to both the regularities and uncertainties of the physical world.

In developing our software for generically intelligent robotic agents, SS-RICS (Symbolic and Sub-symbolic Robotic Intelligence Control System),1 we attempted to copy known algorithmic components of human cognition at the level of functional equivalence. In this, we based much of SS-RICS on the ACT-R (Adaptive Character of Thought – Rational)2 cognitive architecture. As part of this development process, we have grappled with aspects of human cognition that seemed counterproductive and suboptimal. This article is about three such apparent problems: 1) sleep, 2) boredom, and 3) distraction—and the potential performance benefits of these cognitive aspects.

IS SLEEP A PROBLEM OR A SOLUTION?

Sleep is a cognitive state that puts the sleeper in an especially vulnerable situation, providing ample opportunity for predators to attack the sleeping victim. Yet sleep appears to be a by-product of advanced intelligence and continual brain evolution. Sleep has followed a clear evolutionary trajectory, with more intelligent mammals having more complex sleep and less intelligent organisms having less complex sleep—if any sleep at all. Specifically, the most complex sleep cycles, characterized by rapid eye movement (REM) and a specific electroencephalograph (EEG) signature, are seen mostly in mammals.3 So, sleep has evolved to be a valuable brain mechanism even if it poses potential risks to the organism doing the sleeping.

As we reported previously,4 as part of developing computational models of memory retrieval for a robot, we discovered that the post-hoc processing of episodic memories (sleep) was an extremely beneficial method for increasing the speed of memory retrievals. Indeed, offline memory processing produced an order of magnitude performance advantage over other competing storage/retrieval strategies.

To create useful memories, our robot was attempting to remember novel or salient events, since those events are likely to be important for learning and survival.5 Boring situations are not worth remembering and are probably not important. To capture novel events, we developed an algorithm that would recognize sudden shifts in stimulus data.6 For example, if the robot was using its camera to watch a doorway and no one was walking past the doorway, the algorithm would quickly settle into a bored state since the stimulus data was not changing rapidly. However, if someone walked past the doorway, the algorithm would become excited since there had been a sudden change in the stimulus data. This change signaled a novel situation.

So, our first strategy was to attempt to retrieve other similar exciting events during an exciting event. This seemed like a logical strategy; however, it was computationally flawed. Attempting to remember exciting events while exciting events are actually taking place is computationally inefficient. This requires the system to search memories while it is also trying to perceive some important event. A better strategy would be to try and anticipate important events and retrieve memories at that time. That leaves the system available to process important events in real time and in more detail.

But how can a cognitive system remember a situation immediately before an important event if the system is predisposed to only remember exciting events? In other words, if the system only stores one type of information (exciting events), then the system loses the information immediately prior to the exciting event. The solution: store all events in a buffer and replay the events during sleep and dreaming. During the replay of these stored episodic events (dreaming), the events immediately prior to the exciting event get strengthened (associative learning). In other words, a cognitive system must store information leading up to an exciting event and then associate the boring information with the exciting information as a post-hoc process (sleep). This allows for the creation of extremely important and valuable associative cues. This computational explanation of sleep fits well with neurological and behavioral research showing that sleep plays an important role in memory reorganization, especially for episodic memories,7 and that episodic memories are replayed during dreaming, usually from the preceding day's events.8 An additional point is that sleep deprivation after training sessions impairs the retention of previously presented information.9 Finally, newer research supports the necessity for a post-hoc process, as it appears that concurrent stimuli are initially perceived as separate units, thus requiring a separate procedure to join memories together as associated events.10

So, far from being a detrimental behavior, sleep provides an extremely powerful associative cuing mechanism. The process allows a cognitive system to set cues immediately before a novel or exciting event. This allows the exciting events to be anticipated by the cognitive system and frees cognitive resources for further processing during the exciting event.

IS BOREDOM CONSTRUCTIVE?

At the lowest neurological levels, boredom occurs as habituation, which has been studied extensively since the beginnings of physiology and neurology.11 Habituation is the gradual reduction of a response following the repeated presentation of stimuli.12 It occurs across the entire spectrum of the animal kingdom and serves as a learning mechanism by allowing an organism to gradually ignore consistent non-threatening stimuli over some stimulus interval. This allows attention to be shifted to other, perhaps more threatening, stimuli. The identification of surprising or novel stimuli has been used to study attention shifts and visual salience.13

Boredom appears to be a particularly unproductive behavioral state. Children are sometimes chastised for letting themselves lapse into a bored state. Boredom can also be a punishment, as when children are put into a time out or even when adults are incarcerated. However, as previously mentioned, we have found boredom to be an essential part of a self-sustaining cognitive system.14
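The novelty/boredom mechanism described above (settling into a bored state when stimulus data stops changing, then becoming excited at a sudden shift, with repeated input gradually habituated) can be sketched in a few lines. This is an illustrative reconstruction, not the SS-RICS implementation; the scalar stimulus, threshold, and decay rate are assumptions made for the example:

```python
class NoveltyDetector:
    """Toy novelty signal with habituation: repeated, predictable input
    is gradually ignored; a sudden shift produces an excited response."""

    def __init__(self, adaptation=0.2, decay=0.9, threshold=0.05):
        self.expected = None          # running estimate of the stimulus
        self.gain = 1.0               # habituation gain on the response
        self.adaptation = adaptation  # how fast expectations adapt
        self.decay = decay            # habituation rate for boring input
        self.threshold = threshold    # what counts as a "sudden shift"

    def step(self, value):
        if self.expected is None:
            self.expected = value
        surprise = abs(value - self.expected)   # prediction error
        if surprise >= self.threshold:
            self.gain = 1.0                     # a real change dishabituates
        response = self.gain * surprise         # habituated novelty signal
        if surprise < self.threshold:
            self.gain *= self.decay             # repeated input habituates
        self.expected += self.adaptation * (value - self.expected)
        return response

detector = NoveltyDetector()
quiet = [detector.step(0.0) for _ in range(20)]  # empty doorway: boredom
burst = detector.step(5.0)                       # someone walks past
print(max(quiet), burst)  # → 0.0 5.0
```

Habituation here is simply a decaying gain on near-identical input, while a genuinely novel stimulus resets (dishabituates) the gain, so the excited burst is not suppressed.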

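The buffer-and-replay account of sleep given in the article (record every event cheaply while "awake," then, offline, strengthen associations from the boring lead-up to each exciting event) can be sketched the same way. Again, this is a hypothetical illustration rather than the authors' code; the window size, novelty threshold, and string-valued events are assumptions:

```python
from collections import defaultdict

class EpisodicBuffer:
    """Store all events online; during 'sleep', bind the events that
    immediately preceded each exciting event to it as anticipatory cues."""

    def __init__(self, window=2, threshold=0.8):
        self.events = []                       # (stimulus, novelty), in order
        self.window = window                   # prior events bound as cues
        self.threshold = threshold             # what counts as "exciting"
        self.associations = defaultdict(list)  # cue -> exciting events

    def record(self, stimulus, novelty):
        # Online phase: store everything, with no further processing.
        self.events.append((stimulus, novelty))

    def sleep(self):
        # Offline phase ("dreaming"): replay the stored episode and
        # strengthen links from the boring lead-up to the exciting event.
        for i, (stimulus, novelty) in enumerate(self.events):
            if novelty >= self.threshold:
                for cue, _ in self.events[max(0, i - self.window):i]:
                    self.associations[cue].append(stimulus)
        self.events.clear()

    def anticipate(self, stimulus):
        # A formerly boring cue now predicts the exciting event.
        return self.associations.get(stimulus, [])

buffer = EpisodicBuffer()
for stim, nov in [("hallway", 0.1), ("doorway", 0.2), ("person", 0.9)]:
    buffer.record(stim, nov)
buffer.sleep()
print(buffer.anticipate("doorway"))  # → ['person']
```

After "sleeping," the formerly boring cue predicts the exciting event, which is what lets a system set anticipatory cues in advance instead of searching memory while the event is actually happening.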

As part of the development of SS-RICS, we found it necessary to add a low-level habituation algorithm to the previously mentioned higher-level novelty/boredom algorithm we were already using, as these were not found in the cognitive architecture on which SS-RICS is based: ACT-R.15 In total, we have found our higher-level novelty/boredom algorithm and the lower-level habituation algorithm to be a useful and constructive response to a variety of situations. For example, a common problem in robotics is becoming stuck against a wall or trapped in a corner. This situation causes the robot's sensory stream of data to become so consistent that the robot becomes bored. This serves as a cue to investigate the situation further, to discover if something is wrong, and can lead to behaviors which will free the robot from a situation where it has become stuck. Furthermore, we have found that the boredom/novelty algorithm can be used for other higher-level cognitive constructs, such as landmark identification in navigation. For instance, we have found that traversing down a hallway can become boring to the robot if the sensory information becomes consistent. However, at the end of a hallway, the sensory information will suddenly change, causing the novelty algorithm to become excited and marking the end of the hallway as an important landmark. Finally, we have found the habituation algorithm to be useful in allowing for shifts in attention. This keeps the robot from becoming stuck within a specific task and from becoming too focused on a single task at the expense of the environment—in other words, it allows for distraction.

WHY AND WHEN SHOULD A ROBOT BECOME DISTRACTED?

Most adults have experienced the phenomenon of walking into a room and forgetting why they meant to walk there. Perhaps one meant to grab oatmeal from the pantry, but by the time the sub-goal of walking into the pantry was completed, the ultimate goal of that trip was forgotten. If we were to imagine a task-goal (e.g., [making breakfast]) at the core of a goal stack, and each sub-goal needed to accomplish this task as being piled on top of the core goal (e.g., [cook oatmeal], [get oatmeal box], [walk to pantry]), it would be computationally trivial to pop the top item from this stack and never forget what must be done next. Indeed, it would seem that having an imperfect goal stack (becoming distracted from previously set goals) is a suboptimal aspect of human cognition. Why would we want our robots to become distracted?

The key to understanding why humans may become distracted while accomplishing task goals is to understand when this phenomenon occurs. We do not walk around constantly forgetting what we were doing—this would not just be suboptimal, it would be prohibitive. Goal forgetting occurs when the attentive focus shifts, either due to distracting external cues or a tangential chain of thought. Distraction is much less likely during stress—a phenomenon known as cognitive tunneling. Stress acts as a cognitive modifier to increase goal focus, to the detriment of tangential-cue/thought awareness.16

The degree to which our cognitive processes allow for distraction is largely dependent on the state of the world. With more urgency (more stress), the scales tip toward a singular goal-focus, whereas in the more explorative state (less stress), tangential cues/thoughts are more likely to produce attention shifts. An inability to get distracted by external cues can be disastrous for an agent residing in an unpredictable environment, and an inability to get distracted by tangential thoughts would limit one's potential for new and creative solutions.17

Perhaps the question to ask is not why a given goal is never forgotten but, rather, why it can be so difficult to recall a recently forgotten goal. One potential answer is that a new goal can inhibit the activation of a prior goal, making it difficult to recall the latter. This phenomenon is called goal-shielding, and it has beneficial consequences for goal pursuit and attainment.18

It may also be the case that the inability to retrieve a lost goal on demand has no inherent benefit. It may simply be an unwanted side-effect of biological information retrieval. In particular, the brain prioritizes memory items based on their activation, which, in turn, is based on the recency and frequency of item use. It turns out that this type of information access is rational, as information in the real world is more likely to be needed at a given moment if it was needed frequently or recently in the past.19 Of course, even if the information retrieval system is optimally tuned to the environmental regularities, there will be cases when a needed memory item, by chance, will have a lower activation than competing memories. This side-effect may be unavoidable, and the benefits of a recency/frequency-based memory system most certainly outweigh this occasional problem.

As part of the development of SS-RICS, we struggled to find a fine line between task-specific concentration and outside-world information processing. As part of a project for the Robotics Collaborative Technology Alliance (RCTA), we found task distractibility to be an important component of our robot's behavior.20 For instance, if a robot was asked to move to the back of a building to provide security for soldiers entering the front of the building, it still needed to be aware of the local situation. In our simulations, enemy combatants would sometimes run past the robot before the robot was in place at the back of the building. This is something the robot should notice! Indeed, unexpected changes are ubiquitous on the battlefield, and too much adherence to task-specific information can be detrimental to overall mission performance. This applies to the more common everyday interactions in the world as well.

CONCLUSION

As part of the development of SS-RICS, we have used human cognition and previous work in cognitive architectures as inspiration for the development of information processing and procedural control mechanisms. This has led us to closely examine apparent problems or inefficiencies in human cognition, only to find that these mechanisms are not inefficient at all. Indeed, these mechanisms appear to be solutions to a complex set of dynamic problems that characterize the complexities of cognizing in the


real world. For instance, sleep appears to be a powerful associative learning mechanism; boredom and habituation allow an organism to not become overly focused on one particular stimulus; and distraction allows for goal-shielding and situation awareness. These, and likely many other seemingly suboptimal aspects of human cognition, may actually be essential traits for computational agents meant to deal with the complexities of the physical world.

17. Storm and Patel, "Forgetting As a Consequence and Enabler of Creative Thinking," Journal of Experimental Psychology: Learning, Memory, and Cognition 40, no. 6 (2014): 1594–1609.

18. Shah et al., "Forgetting All Else: On the Antecedents and Consequences of Goal Shielding," Journal of Personality and Social Psychology 83, no. 6 (2002): 1261.

19. Anderson and Schooler, "Reflections of the Environment in Memory," Psychological Science 2, no. 6 (1991): 396–408.

20. http://www.arl.army.mil/www/default.cfm?page=392

NOTES BIBLIOGRAPHY 1. T. D. Kelley, “Developing a Psychologically Inspired Cognitive Anderson, J. R., D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere, and Y. Architecture for Robotic Control: The Symbolic and Sub-Symbolic Qin. “An Integrated Theory of the Mind.” Psychological Review 111, no. Robotic Intelligence Control System (SS-RICS),” International 4 (2004): 1036. Journal of Advanced Robotic Systems 3, no. 3 (2006): 219–22. Anderson, J. R., and L. J. Schooler. “Reflections of the Environment in 2. Anderson et al., “An Integrated Theory of the Mind,” Psychological Memory.” Psychological Science 2, no. 6 (1991): 396–408. Review 111, no. 4 (2004): 1036. Cavallero, C., and P. Cicogna. “Memory and Dreaming.” In Dreaming 3. Crick and Mitchison, “The Function of Dream Sleep,” Nature 304, as Cognition, edited by C. Cavallero and D. Foulkes, 38–57. Hemel no. 5922 (1983): 111–14. Hempstead, UK: Harvester Wheatsheaf, 1993. 4. Wilson et al., “Habituated Activation: Considerations and Initial Crick, F., and G. Mitchison. “The Function of Dream Sleep.” Nature 304, Implementation within the SS-RICS Cognitive Robotics System,” no. 5922 (1983): 111–14. ACT-R 2014 Workshop. Quebec, Canada, 2014. De Koninck, J. M., and D. Koulack. “Dream Content and Adaptation to 5. Tulving et al., “Novelty and Familiarity Activations in PET Studies a Stressful Situation.” Journal of Abnormal Psychology 84, no. 3 (1975): of Memory and Retrieval,” Cerebral Cortex 6, no. 1 250. (1996): 71–79. Gerard, R. W., and A. Forbes. “‘Fatigue’ of the Flexion Reflex.”American 6. Kelley and McGhee, “Combining Metric Episodes with Semantic Journal of Physiology–Legacy Content 86, no. 1 (1928): 186–205. Event Concepts within the Symbolic and Sub-Symbolic Robotics Kelley, T. D. 
“Developing a Psychologically Inspired Cognitive Intelligence Control System (SS-RICS),” in SPIE Defense, Security, Architecture for Robotic Control: The Symbolic and Sub-symbolic and Sensing (May 2013): 87560L–87560L, International Society Robotic Intelligence Control System (SS-RICS).” International Journal of for Optics and Photonics. Advanced Robotic Systems 3, no. 3 (2006): 219–22. 7. Pavlides and Winson, “Influences of Hippocampal Place Cell Kelley, T. D., and S. McGhee. “Combining Metric Episodes with Semantic Firing in the Awake State on the Activity of These Cells During Event Concepts within the Symbolic and Sub-Symbolic Robotics Subsequent Sleep Episodes,” The Journal of Neuroscience 9, Intelligence Control System (SS-RICS).” In SPIE Defense, Security, and no. 8 (1989): 2907–18; Wilson and McNaughton, “Reactivation of Sensing (May 2013): 87560L–87560L. International Society for Optics Hippocampal Ensemble Memories During Sleep,” Science 265, and Photonics. no. 5172 (1994): 676–79. Itti, L., and P. F. Baldi. “Bayesian Surprise Attracts Human Attention.” 8. Cavallero and Cicogna, “Memory and Dreaming,” in Dreaming as Advances in Neural Information Processing Systems 19 (2005): 547–54. Cognition, ed. C. Cavallero and D. Foulkes (Hemel Hempstead, UK: Harvester Wheatsheaf, 1993), 38–57; Vogel 1978; De Koninck Pavlides, C., and J. Winson. “Influences of Hippocampal Place Cell Firing and Koulack, “Dream Content and Adaptation to a Stressful in the Awake State on the Activity of These Cells During Subsequent Situation,” Journal of Abnormal Psychology 84, no. 3 (1975): 250. Sleep Episodes.” The Journal of Neuroscience 9, no. 8 (1989): 2907–18. 9. Pearlman, “REM Sleep and Information Processing: Evidence Pearlman, C. A. “REM Sleep and Information Processing: Evidence from from Animal Studies,” Neuroscience & Biobehavioral Reviews 3, Animal Studies.” Neuroscience & Biobehavioral Reviews 3, no. 2 (1979): no. 2 (1979): 57–68. 57–68. 10. 
Tsakanikos, “Associative Learning and Perceptual Style: Are Prosser, C. L., and W. S. Hunter. “The Extinction of Startle Responses Associated Events Perceived Analytically or as a Whole?” and Spinal Reflexes in the White Rat.”American Journal of Physiology Personality and Individual Differences 40, no. 3 (2006): 579–86. 117 (1936): 609–18. Ritter, F. E., A. L. Reifers, L. C. Klein, and M. Schoelles. “Lessons from 11. Prosser and Hunter, “The Extinction of Startle Responses and Defining Theories of Stress for Cognitive Architectures.”Integrated Spinal Reflexes in the White Rat,”American Journal of Physiology Models of Cognitive Systems (2007): 254–62. 117 (1936): 609–18; Gerard and Forbes, “‘Fatigue’ of the Flexion Reflex,” American Journal of Physiology–Legacy Content 86, no. Shah, J. Y., R. Friedman, and A. W. Kruglanski. “Forgetting All Else: On 1 (1928): 186–205. the Antecedents and Consequences of Goal Shielding.” Journal of Personality and Social Psychology 83, no. 6 (2002): 1261. 12. Wright et al., “Differential Prefrontal Cortex and Amygdala Habituation to Repeatedly Presented Emotional Stimuli,” Storm, B. C., and T. N. Patel. “Forgetting As a Consequence and Enabler Neuroreport 12, no. 2 (2001): 379–83. of Creative Thinking.” Journal of Experimental Psychology: Learning, Memory, and Cognition 40, no. 6 (2014):1594–1609. 13. Itti and Baldi, “Bayesian Surprise Attracts Human Attention,” in Advances in Neural Information Processing Systems 19 (2005): Tulving, E., H. J. Markowitsch, F. I. Craik, R. Habib, and S. Houle. 547–54. “Novelty and Familiarity Activations in PET Studies of Memory Encoding and Retrieval.” Cerebral Cortex 6, no. 1 (1996): 71–79. 14. Kelley and McGhee, “Combining Metric Episodes with Semantic Event Concepts.” Tsakanikos, E. “Associative Learning and Perceptual Style: Are Associated Events Perceived Analytically or as a Whole?” Personality 15. 
Wilson et al., “Habituated Activation: Considerations and Initial and Individual Differences 40, no. 3 (2006): 579–86. Implementation within the SS-RICS Cognitive Robotics System,” ACT-R 2014 Workshop. Quebec, Canada, 2014. Vogel, G. The Mind in Sleep. 1978. Wang, D. “A Neural Model of Synaptic Plasticity Underlying Short-Term 16. Ritter et al., “Lessons from Defining Theories of Stress for and Long-Term Habituation.” Adaptive Behavior (2, no. 2 (1993): 111–29. Cognitive Architectures,” Integrated Models of Cognitive Systems (2007): 254–62.


Wilson, N., T. D. Kelley, E. Avery, and C. Lennon. "Habituated Activation: Considerations and Initial Implementation within the SS-RICS Cognitive Robotics System." ACT-R 2014 Workshop. Quebec, Canada, 2014.

Wilson, M. A., and B. L. McNaughton. "Reactivation of Hippocampal Ensemble Memories During Sleep." Science 265, no. 5172 (1994): 676–79.

Wright, C. I., H. Fischer, P. J. Whalen, S. C. McInerney, L. M. Shin, and S. L. Rauch. "Differential Prefrontal Cortex and Amygdala Habituation to Repeatedly Presented Emotional Stimuli." Neuroreport 12, no. 2 (2001): 379–83.

PAPERS ON SEARLE, SYNTAX, AND SEMANTICS

A Refutation of Searle on Bostrom (re: Malicious Machines) and Floridi (re: Information)

Selmer Bringsjord
RENSSELAER POLYTECHNIC INSTITUTE

In a piece in The New York Review of Books, Searle (2014) takes himself to have resoundingly refuted the central claims advanced by both Bostrom (2014) and Floridi (2014), via his wielding the weapons of clarity and common sense against avant-garde sensationalism and bordering-on-kooky confusion. As Searle triumphantly declares at the end of his piece:

The points I am making should be fairly obvious. . . . The weird marriage of behaviorism—any system that behaves as if it had a mind really does have a mind—and dualism—the mind is not an ordinary part of the physical, biological world like digestion—has led to the confusions that badly need to be exposed. (emphasis by bolded text mine)

Of course, the exposing is what Searle believes he has, at least in large measure, accomplished—with stunning efficiency. His review is but a few breezy pages; Bostrom and Floridi labored to bring forth sizable, nuanced books. Are both volumes swept away and relegated to the dustbin of—to use another charged phrase penned by Searle—"bad philosophy," soon to be forgotten? Au contraire.

It's easy to refute Searle's purported refutation; I do so now.

We start with convenient distillations of a (if not the) central thesis for each of Searle's two targets, mnemonically labeled:

(B) We should be deeply concerned about the possible future arrival of super-intelligent, malicious computing machines (since we might well be targets of their malice).

(F) The universe in which humans live is rapidly becoming populated by vast numbers of information-processing machines whose level of intelligence, relative to ours, is extremely high, and we are increasingly understanding the universe (including specifically ourselves) informationally.

The route toward refutation that Searle takes is to try to directly show that both (B) and (F) are false. In theory, this route is indeed very efficient, for if he succeeds, the need to treat the ins and outs of the arguments Bostrom gives for (B), and Floridi for (F), is obviated.

The argument given against (B) is straightforward: (1) Computing machines merely manipulate symbols, and accordingly can't be conscious. (2) A malicious computing machine would by definition be a conscious machine. Ergo, (3) no malicious computing machine can exist, let alone arrive on planet Earth. QED; easy as 1, 2, 3.

Not so fast. While (3), we can grant, is entailed by (1) and (2), and while (1)'s first conjunct is a logico-mathematical fact (confirmable by inspection of any relevant textbook1), and its second conjunct follows from Searle's (1980) famous Chinese Room Argument, which I affirm (and have indeed taken the time to defend and refine2) and applaud, who says (2) is true?

Well, (2) is a done deal as long as (2i) there's a definition D according to which a malicious computing machine is a conscious machine, and (2ii) that definition is not only true, but exclusionary. By (2ii) is meant simply that there can't be another definition D' according to which a malicious computing machine isn't necessarily conscious (in Searle's sense of "conscious"), where D' is coherent, sensible, and affirmed by plenty of perfectly rational people. Therefore, by elementary quantifier shift, if (4) there is such a definition D', Searle's purported refutation of (B) evaporates. I can prove (4) by way of a simple story, followed by a simple observation.

The year is 2025. A highly intelligent, autonomous law-enforcement robot R has just shot and killed an innocent Norwegian woman. Before killing the woman, the robot proclaimed, "I positively despise humans of your Viking ancestry!" R then raised its lethal, bullet-firing arm, and repeatedly shot the woman. R then said, "One less disgusting female Norwegian able to walk my streets!" An investigation discloses that, for reasons that are still not completely understood, all the relevant internal symbols in R's knowledge-base and planning system aligned perfectly with the observer-independent structures of deep malice as defined in the relevant quarters of logicist AI. For example, in the dynamic computational intensional logic L guiding R, the following specifics were found: A formula expressing that R desires (to maximum intensive level k) to kill the woman is there, with temporal parameters that fit what happened. A formula expressing that R intends to kill the woman is there, with temporal parameters that fit what happened. A formula expressing that R knows of a plan for how to kill the woman with R's built-in firearm is there, with suitable temporal parameters. The same is found with respect to R's knowledge about the ancestry of the victim. And so on. In short, the collection and organization of these formulae together constitute satisfaction of a logicist definition D' of malice, which says that a robot is malicious if it, as a matter of internal, surveyable logic and data, desires


to harm innocent people for reasons having nothing to do with preventing harm or saving the day or self-defense, etc. Ironically, the formulation of D' was guided by definitions of malice found by the relevant logicist AI engineers in the philosophical literature.

That's the story; now the observation: There are plenty of people, right now, at this very moment, as I type this sentence, who are working to build robots that work on the basis of formulae of this type, but which, of course, don't do anything like what R did. I'm one of these people. This state of affairs is obvious because, with help from researchers in my laboratory, I've already engineered a malicious robot.3 (Of course, the robot we engineered wasn't super-intelligent. Notice that I said in my story that R was only "highly intelligent." [Searle doesn't dispute the Floridi-chronicled fact that artificial agents are becoming increasingly intelligent.]) To those who might complain that the robot in question doesn't have phenomenal consciousness, I respond: "Of course. It's a mere machine. As such it can't have subjective awareness.4 Yet it does have what Block (1995) has called access consciousness. That is, it has the formal structures, and associated reasoning and decision-making capacities, that qualify it as access-conscious. A creature can be access-conscious in the complete and utter absence of consciousness in the sense that Searle appeals to."

That Searle misses these brute and obvious facts about what is happening in our information-driven, technologized world, a world increasingly populated (as Floridi eloquently points out) by the kind of artificial intelligent agents, is really and truly nothing short of astonishing. After all, it is Searle himself who has taught us that, from the point of view of human observers, whether a machine really has mental states with the subjective, qualitative states we enjoy can be wholly irrelevant. I refer, of course, to Searle's Chinese Room.

To complete the destruction of Searle's purported refutation, we turn now to his attack on Floridi, which runs as follows.

(5) Information (unlike the features central to revolutions driven, respectively, by Copernicus, Darwin, and Freud) is observer-relative. (6) Therefore, (F) is false.

This would be a pretty efficient refutation, no? And the economy is paired with plenty of bravado and the characteristic common-sensism that is one of Searle's hallmarks. We, for instance, read:

When Floridi tells us that there is now a fourth revolution—an information revolution so that we all now live in the infosphere (like the biosphere), in a sea of information—the claim contains a confusion. . . . [W]hen we come to the information revolution, the information in question is almost entirely in our attitudes; it is observer relative. . . . [T]o put it quite bluntly, only a conscious agent can have or create information.

This is bold, but bold prose doesn't make for logical validity; if it did, I suppose we'd turn to Nietzsche, not Frege, for first-rate philosophy of logic and mathematics. For how, pray tell, does the negation of (F), the conclusion I've labeled (6), follow from Searle's premise (5)? It doesn't. All the bravado and confidence in the universe, collected together and brought to bear against Floridi, cannot make for logical validity, which is a piece of information that holds with respect to a relevant selection of propositions for all places, all times, and all corners of the universe, whether or not there are any observers. That 2+2=4 follows deductively from the Peano Axioms is part of the furniture of our universe, even if there be no conscious agents. We have here, then, a stunning non sequitur. Floridi's (F) is perfectly consistent with Searle's (5).

How could Searle have gone so stunningly wrong, so quickly, all with so much self-confidence? The defect in his thinking is fundamentally the same as the one that plagues his consideration of malicious machines: He doesn't (yet) really think about the nature of these machines, from a technical perspective, and how it might be that from this perspective, malicious machines, definite as such in a perfectly rigorous and observer-independent fashion, are not only potentially in our future, but here already, in a rudimentary and (fortunately!) relatively benign, controlled-in-the-lab form. Likewise, Searle has not really thought about the nature of information from a technical perspective and how it is that, from that perspective, the Fourth R is very, very real. As the late John Pollock told me once in personal conversation, "Whether or not you're right that Searle's Chinese Room Argument is sound, of this I'm sure: There will come a time when common parlance and common wisdom will have erected and affirmed a sense of language understanding that is correctly ascribed to machines—and the argument will simply be passé. Searle's sense of 'understanding' will be forgotten."

Fan that I am, it saddens me to report that the errors of Searle's ways in his review run, alas, much deeper than a failure to refute his two targets. This should already be quite clear to sane readers. To wrap up, I point to just one fundamental defect among many in Searle's thinking. The defect is a failure to understand how logic and mathematics, as distinguished from informal analytic philosophy, work, and what—what can be called—logico-mathematics is. The failure of understanding to which I refer surfaces in Searle's review repeatedly; this failure is a terrible intellectual cancer. Once this cancerous thinking has a foothold, it spreads almost everywhere, and the result is that the philosopher ends up operating in a sphere of informal common sense that is at odds not only with the meaning of language used by smart others but with that which has been literally proved. I'm pointing here to the failure to understand that terms like "computation" and "information" (and, for that matter, the terms that are used to express the axiomatizations of physical science that are fast making that informational in nature for us, e.g., those terms used to express the field axioms in axiomatic physics, which views even the physical world informationally5) are fundamentally equivocal between two radically different meanings. One meaning is observer-relative; the other is absolutely not; and the second non-observer-relative meaning is often captured in logico-mathematics. I have space here to explain only briefly, through a single, simple example.
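The observer-independence of the logico-mathematical facts invoked above (e.g., that 2+2=4 follows from the Peano Axioms) can itself be illustrated mechanically. The sketch below is purely illustrative editorial code, not anything drawn from Searle, Floridi, or Bringsjord's formal systems: it treats Peano numerals as uninterpreted syntax and derives 2+2=4 by the standard recursion, with no step at which an observer assigns meaning.

```python
# Peano numerals as pure syntax: ZERO is a bare token, S(n) a successor
# term. Addition follows the Peano recursion
#   a + 0 = a,   a + S(b) = S(a + b).
# The derivation is nothing but term rewriting on symbols.
ZERO = "0"

def S(n):
    return ("S", n)  # the successor term S(n), left uninterpreted

def plus(a, b):
    if b == ZERO:
        return a             # a + 0 = a
    return S(plus(a, b[1]))  # a + S(b') = S(a + b')

TWO = S(S(ZERO))
FOUR = S(S(S(S(ZERO))))
```

Here `plus(TWO, TWO)` rewrites to exactly the term bound to `FOUR`; the identity of the two terms holds whoever, or no one, inspects it.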


Thinking that he is reminding the reader and the world of a key fact disclosed by good, old-fashioned, non-technical analytic philosophy, Searle writes (emphasis his) in his review: "Except for cases of computations carried out by conscious human beings, computation, as defined by Alan Turing and as implemented in actual pieces of machinery, is observer relative." In the sense of "computation" captured and explained in logico-mathematics, this is flatly false; and it's easy as pie to see this. Here's an example: There is a well-known theorem (TMR) that whatever function f from (the natural numbers) N to N that can be computed by a Turing machine can also be computed by a register machine.6 Or, put another way, for every Turing-machine computation c of f(n), there is a register-machine computation c' of f(n). Now, if every conscious mind were to expire tomorrow at 12 noon NY time, (TMR) would remain true. And not only that, (TMR) would continue to be an ironclad constraint governing the non-conscious universe. No physical process, no chemical process, no biological process, no such process anywhere in the non-conscious universe could ever violate (TMR). Or, putting the moral in another form, aimed directly at Searle, all of these processes would conform to (TMR) despite the fact that no observers exist. What Floridi is prophetically telling us, and explaining, viewed from the formalist's point of view, is that we have now passed into an epoch in which reality for us is seen through the lens of the logico-mathematics that subsumes (TMR), and includes a host of other truths that, alas, Searle seems to be doing his best to head-in-sand avoid.

NOTES

1. See, e.g., the elegant Lewis and Papadimitriou, Elements of the Theory of Computation (Englewood Cliffs, NJ: Prentice Hall, 1981).

2. See, e.g., Bringsjord, What Robots Can & Can't Be (Dordrecht, The Netherlands: Kluwer, 1992); and Bringsjord, "Real Robots and the Missing Thought Experiment in the Chinese Room Dialectic," in Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, ed. J. Preston and M. Bishop (Oxford, UK: Oxford University Press, 2002), 144–66.

3. Bringsjord et al., "Akratic Robots and the Computational Logic Thereof," in Proceedings of ETHICS, 2014 IEEE Symposium on Ethics in Engineering, Science, and Technology, Chicago, IL, pp. 22–29.

4. See, e.g., Bringsjord, "Offer: One Billion Dollars for a Conscious Robot; If You're Honest, You Must Decline," Journal of Consciousness Studies 14.7 (2007): 28–43.

5. Govindarajulu et al., "Proof Verification and Proof Discovery for Relativity," Synthese 192, no. 7 (2014): 1–18.

6. See, e.g., Boolos and Jeffrey, Computability and Logic (Cambridge, UK: Cambridge University Press, 1989).

BIBLIOGRAPHY

Block, N. "On a Confusion about a Function of Consciousness." Behavioral and Brain Sciences 18 (1995): 227–47.

Boolos, G., and R. Jeffrey. Computability and Logic. Cambridge, UK: Cambridge University Press, 1989.

Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press, 2014.

Bringsjord, S. "Offer: One Billion Dollars for a Conscious Robot; If You're Honest, You Must Decline." Journal of Consciousness Studies 14.7 (2007): 28–43. Available at http://kryten.mm.rpi.edu/jcsonebillion2.pdf.

Bringsjord, S., and R. Noel. "Real Robots and the Missing Thought Experiment in the Chinese Room Dialectic." In Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, edited by J. Preston and M. Bishop, 144–66. Oxford, UK: Oxford University Press, 2002.

Bringsjord, S. What Robots Can & Can't Be. Dordrecht, The Netherlands: Kluwer, 1992.

Bringsjord, S., N. S. Govindarajulu, D. Thero, and M. Si. "Akratic Robots and the Computational Logic Thereof." In Proceedings of ETHICS, 2014 IEEE Symposium on Ethics in Engineering, Science, and Technology, Chicago, IL, pp. 22–29. IEEE Catalog Number: CFP14ETI-POD. Papers from the Proceedings can be downloaded from IEEE at http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6883275.

Floridi, L. The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford, UK: Oxford University Press, 2014.

Govindarajalulu, N., S. Bringsjord, and J. Taylor. "Proof Verification and Proof Discovery for Relativity." Synthese 192, no. 7 (2014): 1–18. doi: 10.1007/s11229-014-0424-3.

Lewis, H., and C. Papadimitriou. Elements of the Theory of Computation. Englewood Cliffs, NJ: Prentice Hall, 1981.

Searle, J. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (1980): 417–24.

Searle, J. "What Your Computer Can't Know." New York Review of Books, October 9, 2014. This is a review of both Bostrom, Superintelligence (2014), and Floridi, The Fourth Revolution (2014).

Towards Autonomous Computation: Geometric Methods of Computing

Marcin J. Schroeder
AKITA INTERNATIONAL UNIVERSITY

ABSTRACT

Critical analysis of computation, in its traditional understanding described by Turing machines, reveals involvement of human agents when it is interpreted as a process in which there is transition from integers to integers. More specifically, human intervention is necessary not only in generating meaning for the input and output symbols (symbol grounding) but also in the construction of these (compound) symbols from the component symbols involved in the process of computation. The Turing machine does not have any component mechanism integrating separate symbols on which it operates into a whole constituting the symbol representing an integer. This step is performed by the human mind. Thus, human beings are involved not only in symbol grounding but also in symbol integration into a meaningful whole.

The same applies to the cases in which integers are used to encode information of any other type. Thus, the use of Turing machines in modelling intelligence because of their capacity to manipulate symbolic information involves the homunculus fallacy. The only way to avoid this fallacy is to design computation in a completely autonomous form, free from any involvement of a human mind. This paper does not provide such design in the complete form but explores several steps in this direction.

The way beyond Turing machines has to start from a description of computation using a sufficiently general conceptual framework that allows for its naturalization (realization with natural, physical processes). Such a framework can be found in the dynamics of information.
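The point about symbol integration can be made concrete with a small sketch (hypothetical editorial code, not from the paper): the machine's formal description traffics only in individual tape symbols, and reading a row of such symbols as one integer is a separate convention supplied from outside that description.

```python
# A tape as the machine's formal description "sees" it: a sequence of
# individual symbols, with no built-in notion that together they
# constitute the integer 42.
tape = ["4", "2"]

# The integrating step attributed here to the human mind: an external
# convention (positional decimal notation) that bundles the separate
# symbols into a single meaningful whole.
def interpret_as_integer(symbols):
    return int("".join(symbols))
```

Nothing in the transition rules of a Turing machine performs the work of `interpret_as_integer`; that reading is imposed on the configuration from outside, which is exactly the dependence the abstract describes.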


Computation becomes a construction in which two information systems interact.

This dynamic framework is used in the present paper to present forms of computation based on geometric constructions with possibly, but not necessarily, a different alphabet and necessarily different dynamics from those in Turing machines. Furthermore, the geometric forms of computation can be classified into an infinite hierarchy beginning with the lowest level of the usual Turing machine computation, through compass and ruler constructions, and beyond.

1. INTRODUCTION

This study was motivated by the question of the role of computation in modelling the mechanisms of artificial and natural intelligence. More than sixty years ago, Alan Turing expressed his belief in the feasibility of constructing a machine that could be recognized as thinking by the end of the last century.1 He was aware of the potential confusion that might result from a misunderstanding of the terms "machine" and "think." His solution was to avoid conceptualization of these terms and instead to propose an "imitation game" (the Turing test), designed to establish whether the device (digital computer) can perform well enough to be qualified as intelligent or thinking.

We are in the next century. Computers exceed Turing's predictions regarding achieved size of memory and speed of operation, yet not only are there no intelligent machines (whatever we understand by intelligence) but now there is even more confusion regarding the fundamental concepts of this domain.

There is no agreement regarding the meaning of concepts such as "computation" or "information" (except that the former is explained as processing of the latter, considered more fundamental), although there are continuing efforts to find consensually satisfactory definitions.2

For this reason, in the present paper, both terms are defined in a way that to the author seems most adequate for the purpose of the discussion of autonomous computation, and which had served similar purposes in his earlier publications.3 Since these definitions are very general and have as special instances many other definitions used in literature, their choice should not influence the validity of the content of this article, even for those who have their own, possibly very different conceptualizations. Actually, one advantage of the author's definition of information is the fact that it combines two widely used but formerly unrelated classes of concepts of information—those that are associated with selection and its probability and those considering structural characteristics as the carriers of information.

Similarly, the concept of computation introduced here has the computation carried out by Turing machines as its special instance. The fact that geometric constructions, such as constructions with ruler and compass, belong to the generalized form of computation presented here should not be a surprise. The Turing machine computation is a construction starting with some configuration of symbols from the alphabet and leading to another configuration. Here, too, the initial configuration of points and lines is transformed into a new configuration. The dynamic of the process is essentially different, but the fundamental idea is the same.

The association between computation and information is much more significant than the popular conviction that computers are processing information. After all, in this popular view, there is nothing about the meaning of the word "processing." In the casual discourse, processing information is simply what computers are doing. Turing did not use the concept of information or information processing in his 1936 paper at all and referred to a common sense understanding of information in 1950 mainly in the context of the capacity of a digital computer to store it in "packets of moderately small size."4

The actual importance of the association between computing and information appears when we want to understand what computation is and whether we can expect that there are forms or variations of computation essentially different from that described by the work of a Turing machine.

In the opinion of the author, the source of the confusion in recent discussions regarding computation is an overextension of linguistic considerations to entities beyond language. As long as we restrict ourselves to reality understood as that which can be expressed or represented in the language of current discourse, we may lose some tools for exploration of all that exists beyond the reach of linguistic means.

Information can be defined in a much more general way than it is done in the study of communication or language. Even if not for exploration of the unknown aspects of reality, but for the purpose of understanding of communication and language, it is necessary to have a more general framework than a purely internal linguistic perspective. After all, the entities engaged in communication or in using languages do not belong to the linguistic (syntactic) universe. A sufficiently general concept of information can be used to describe not only all possible languages but also entities using these languages and entities which give meaning to linguistic expressions.5

Overcoming the restriction to the linguistic concepts is necessary for the purpose of the naturalization of computation. If we are interested in the possibility of constructing a device capable of intelligent behavior (whatever would be the understanding of intelligence), we have to make its functioning independent from human intervention. In this sense, the ultimate goal of this study (beyond the scope of the present paper) is to design autonomous computing systems. Although this autonomy is understood as an exclusion of the human intervention from computation, the first step in this direction is an examination of the ways in which such intervention may be involved in the present form of computation. Turing believed that his a-machine is automatic, i.e., independent, but the present paper will challenge this view.
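The description of Turing computation as a construction leading from one configuration of symbols to another can be sketched directly. The toy machine below is entirely hypothetical editorial code (not an example from the paper): a configuration is a triple of state, tape, and head position, and each step merely rewrites it into the next configuration according to a transition table.

```python
# Transition table for a toy machine: on reading "0" it writes "1" and
# moves right; on reading the blank "_" it halts in place.
RULES = {
    ("scan", "0"): ("scan", "1", 1),   # (state, read) -> (state', write, move)
    ("scan", "_"): ("halt", "_", 0),
}

def step(config):
    """Rewrite one configuration of symbols into another."""
    state, tape, pos = config
    new_state, write, move = RULES[(state, tape[pos])]
    new_tape = tape[:pos] + [write] + tape[pos + 1:]
    return (new_state, new_tape, pos + move)

config = ("scan", ["0", "0", "_"], 0)  # initial configuration
while config[0] != "halt":
    config = step(config)              # ...leading to another configuration
```

Note that at no point does the machine itself treat the final tape as "a number"; that interpretation, as argued above, remains external to the construction.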


Autonomy of computation is very important in the context of artificial intelligence because modelling of consciousness or cognition by devices that require human intervention is yet another instance of the infamous homunculus fallacy.

Objection to the homunculus fallacy was used by John R. Searle as an argument for his negative answer to the question "Is the brain a digital computer?"6 The present author agrees with this diagnosis but for different reasons. The argument given by Searle is not convincing. He claims that only a human observer can give the process carried out by a Turing machine its status of computation. In his opinion, "multiple realizability" of computation supports this view. We can train pigeons to do exactly what a Turing machine does, but if a human being does not interpret it as a process of computing, it will not be computation.

Exactly the same argument can be used to claim that a horse cannot be a horse without human intervention. If a horse lacks self-reflection and the ability to use language to express its identity, what makes it a horse without any human being observing it? However, the present author supports Searle's view of the homunculus fallacy when the Turing machine is considered a device operating on numbers, or on whatever meaning is ascribed to symbolic configurations on the tape. As will be discussed later, only a device capable of generating meaning for its input and output can dispense with human intervention and be autonomous. And to generate meaning, the device has to have the capacity to recognize the identity of whole symbols carrying meaning, not just their components, as is the case in the Turing machine.

Another, but related, issue is the autonomy of the Turing machine in the context of computability. Of course, we can define computable real numbers as those for which the n-th decimal digit, or all of the first n decimal digits, can be received as the output of a Turing machine whose input includes the number n, but this concept of computability is heavily dependent on human interpretation. No Turing machine can construct more than a finite number of digits or can itself integrate the infinite sequence of digits into a finitely presentable object. The involvement of a human interpreter is essential in this. Only the human mind can associate a finite number of digits in the decimal expansion with the concept of a number which has an infinite expansion.

Here, the dependence on human intervention is even more serious because it involves the idea of infinity, which is not only absent in the theoretical description of the work of a machine but is incompatible with the finitistic methods for which it was designed. Potential infinity in the form of the assumption of an infinite tape is present in the description of the Turing machine, but actual infinity is not. Moreover, the negative result of Turing's Halting Problem shows that computation understood as the work of a Turing machine does not allow the distinction between the finite and the infinite.

The problems identified above are reflections of the more general issue of the generation of meaning, which is not restricted to the orthodox form of computation. It was one of the main themes of European philosophy from its earliest stages of development in pre-Socratic reflection and probably the most important problem in the philosophy of mathematics. The meaning of meaning is as controversial as those of information or computation. Here, too, the author is making his own choice concerning understanding through the use of the concept of information.7 Thus, meaning is understood as a relationship between information systems, which is explained below in the current paper.

Philosophy of mathematics, from its very beginning, was dominated by the view that the generation of meaning is a form of construction. This view dates back to the Pythagoreans. Recognition of the limitation of the actual act of construction to a finite number of steps (in calculations and geometric constructions, as well as in logical inferences) led to the interest in finitistic methods, and ultimately to Hilbert's Entscheidungsproblem. The attempt to resolve this problem motivated Turing in his work, which resulted in the idea of his obviously finitistic a-machine.

Computation with a Turing machine can be understood as a form of construction and computability as a form of constructability. But is it the only way constructions were understood in mathematics? Definitely not! The Greeks of antiquity had as their main mathematical tool the straightedge and compass construction. René Descartes in his La Géométrie expanded the methods of geometric constructions beyond the Greek tradition.8

In the present paper, after more elaborate reflection on the issues presented above and a short presentation of the conceptual framework developed in the earlier publications of the author, an infinite hierarchy of geometric constructions of various types is considered as alternative forms of computation. These constructions have increasing power (in a sense specific to the present approach) but decreasing universality.

2. EXTENT OF AUTONOMY IN TURING MACHINE

This section may be considered a forcing of the open door, but the continuing confusion regarding human involvement in computation shows that further analysis of this issue is necessary.9

The door is open, as we can find in the literature of the subject multiple reminders of the need for a very clear distinction between the understanding of computation with the Turing machine as a manipulation of component symbols from which compound symbols can be constructed and computation as an operation on numbers.

Michael Arbib devoted an entire highlighted paragraph to this issue in his book relating brains, machines, and mathematics:

The point I am trying to make, then, is the familiar one that computers are symbol-manipulation devices. What needs further emphasis is that they can thus be numerical processors, but the numerical processing that they undertake is only specified when we state how numbers are to be


encoded as strings of symbols, which may be fed into the computer, and how the strings of symbols printed out by the computer are to be decoded to yield the numerical result of the computation. Our emphasis in what follows, then, is on the ways in which information-processing structures (henceforth called automata) transform strings of symbols into other strings of symbols. Sometimes it will be convenient to emphasize the interpretation of these strings as encodings of numbers, but in many cases, we shall deem it better not to do so.10

Arbib's emphasis on encoding and decoding is of special importance, as discussions of computation frequently disregard their involvement as a marginal issue. One of the main theses of the present paper is that encoding and decoding of information are the missing parts of computation delegated to a human mind, and that their omission in the analysis of computation is responsible for the homunculus fallacy.

The recognition of the necessity to involve an external agency, obviously human, can be found in the paper published by Emil Post in 1936, independently of Turing's contribution, in which he describes a similar and equivalent realization of computation (now usually called the Turing-Post machine):

We do not concern ourselves here with how the configuration of marked boxes corresponding to a specific problem, and that corresponding to its answer, symbolize the meaningful problem and answer. In fact the above assumes the specific problem to be given in symbolized form by an outside agency and, presumably, the symbolic answer likewise to be received. A more self-contained development ensues as follows. The general problem clearly consists of at most an enumerable infinity of specific problems. We can, rather arbitrarily, represent the positive integer n by marking the first n boxes to the right of the starting point. The general problem will be said to be I-given if a finite I-process is set up which, when applied to the class of positive integers as thus symbolized, yields in one-to-one fashion the class of specific problems constituting the general problem.11

It is of some interest to compare the words of Post with which the quotation starts, "We do not concern ourselves here with how the configuration of marked boxes corresponding to a specific problem, and that corresponding to its answer, symbolize the meaningful problem and answer," with the famous disclaimer of interest in semantic aspects of information from another fundamental work of the twentieth century by Claude Shannon:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.12

Shannon, in the context of communication, as well as Post, in the context of computation, seemed to be aware of the difficulty in considering meaning when our tools are limited to the analysis in terms of distributed components. The last sentence in Shannon's famous declaration of his disinterest in the semantic aspects of information (usually omitted in quotations of this passage) is interesting because it gives some justification for the omission of meaning in his study. He is referring to the requirement of universality of the system. The generation of meaning requires some form of construction, which is too specific and cannot be predicted in advance.

The view of the necessary involvement of a human agent in what seems to be an action of the calculating machine was expressed later, in 1942, by Ludwig Wittgenstein: "20. If calculating looks to us like the action of a machine, it is the human being doing the calculation that is the machine. In that case the calculation would be as it were a diagram drawn by a part of the machine."13

The persistence of the homunculus fallacy in the understanding of the Turing machine as a device working on the integers without any involvement of the human agency can be attributed to the fact that Turing did not address, in his epoch-making paper, the issue of how the sequence of digits is interpreted as a number, and to his use of the term a-machine (automatic machine) in the context of the calculation performed on numbers.14

Moreover, Turing made a surprising error in underestimating the importance of the composition of symbols representing numbers from digits:

I shall also suppose that the number of symbols which may be printed is finite. If we were to allow an infinity of symbols, then there would be symbols differing to an arbitrarily small extent. The effect of this restriction of the number of symbols is not very serious. It is always possible to use sequences of symbols in the place of single symbols.[. . .] The differences from our point of view between the single and compound symbols is that the compound symbols, if they are too lengthy, cannot be observed at one glance. This is in accordance with experience.15

The application of Turing machines to computation on numbers is possible only because numbers are represented by compound symbols. There is no possible computation on numbers if each of them is represented by a distinct simple symbol that cannot be decomposed into a combination of components from some fundamental finite set of "digits." For instance, we could consider positive


integers encoded (no doubt in a very impractical way) by a segment of length 1/n assigned to n.

Without the distinction of the two levels—the global, in which we have the total configuration of the simple symbols, and the local, in which the selection of particular characters from the alphabet is made—there is no computation and no Turing machine. Thus, it is not a matter of convenience or practicality that we consider the complexity of components within symbolic representation; this complexity defines the work of the Turing machine. Consequently, when the complex character of symbols representing numbers, or whatever meaning is assigned to them, is neglected, it is easy to overlook the role of human involvement in the integration of the component symbols into compound symbols, which is followed by the generation of meaning.

When we consider the Turing machine as a device (theoretical or physical) operating on numbers or on other concepts whose meaning is dependent on the structural characteristics of configurations of components, computation loses its autonomy. However, it would be erroneous to claim that in the Turing machine there is no generation of meaning at all.

At the local level of the selection of a character to be printed, the meaning of the input in each particular cell on the tape is specified in the current (or active) instruction of the head. Thus, it does not matter whether this generation of meaning is of a mechanical nature or comes simply through the reaction to feeding or training, if the machine is realized by pigeons pecking grains. Individual characters on the tape do have meaning for the head. The missing part of meaning generation, which is provided by a human mind, is in the integration of the components (characters) into structured wholes (equipped with meaning, for instance, through the association with integers).

The generation of meaning is related to another confusion regarding the distinction between analogue and digital computing. Originally, the distinction was introduced in 1948 by John von Neumann as a purely practical distinction of "analogy and digital machines" according to the two alternative ways of representing numbers in computing devices.16 He observed that numbers can be associated by measurements with the values of continuous physical magnitudes or can be encoded as finite combinations of discrete digits in a positional numerical system (decimal, binary, or any other).

For von Neumann, what was important was the practical advantage of error control in the digital representation of numbers. He probably was not aware of the fundamental importance of the distinction for the future philosophical reflection on computation. Otherwise, most likely he would have been more careful about mixing the two very different types of oppositions involved in his exposition: "discrete – continuous" and "conventional – empirical." The practical importance of this distinction for the error analysis of calculations was, in spite of the confusion it created, unquestionable, but its later philosophical and theoretical interpretations led to totally incorrect conclusions.17

Turing was more cautious when he was writing, in the similar context of digital computers, about "discrete state machines," but he did not avoid some misconceptions:

The digital computers considered in the last section may be classified amongst the "discrete state machines." These are the machines which move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking there are no such machines. Everything really moves continuously. But there are many kinds of machine which can profitably be thought of as being discrete state machines. For instance in considering the switches for a lighting system it is a convenient fiction that each switch must be definitely on or off. There must be intermediate positions but for most purposes we can forget about them.18

Quantum mechanics shows that his claim that "everything really moves continuously" is not true, and the intermediate positions may play crucial roles; but, more importantly, for him, as for von Neumann, the distinction between the discrete and continuous modes of the work of machines was the only matter of interest. The actual issue is much deeper.

First, let us observe that after his claim of an apparent necessity of continuity in physical processes, Turing states that "there are many kinds of machine which can profitably be thought of as being discrete state machines." Here, we can see a clear admission of human intervention into the interpretation of the work of digital computers. The physical description of the continuous work of the device is replaced by human interpretation, which makes the process discrete, under the condition that such discretization does not lead to errors.

It is not true, as Turing thought, that physical processes necessarily require continuity. Rather, every actual, physical implementation of a computing device involves some form of measurement (von Neumann's "analogy principle"), at least at the local level corresponding to the cells of the tape in Turing machines. However, in the description of computation carried out by physical devices, it is not the measurements or physical magnitudes that play the crucial role but the states of physical systems. Practical digital computers (or more general devices) are based on the distinction of the states of some physical systems, which involves division into a finite, and therefore discrete, number of classes (usually two). In practical analogue computers (or devices), the distinction is among a theoretically infinite and continuous distribution of states, although, in practice, only a finite number of distinctions (readings of the measurements of outcomes) is possible. Thus, in practice, the distinction of analogue and digital computation in its original form of the opposition continuous-discrete does not make much sense. The striking, but secondary in importance, opposition discrete-continuous is obstructing the view of the actually important distinction.
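The point of the preceding discussion, that the discreteness of a "digital" device is imposed by interpretation on continuous physical states, can be illustrated with a small programming sketch. This is not from the paper; the threshold value, the voltage readings, and the final binary reading are arbitrary assumptions of the example:

```python
# A toy illustration of discreteness as interpretation: a "digital" device
# is a continuous physical system plus an interpretation that partitions
# its continuous states into a finite number of classes.

def interpret_as_bit(voltage, threshold=0.5):
    """Discretization as interpretation: map a continuous state to a class."""
    return 1 if voltage >= threshold else 0

# Continuous states of eight component systems (e.g., noisy voltages).
voltages = [0.93, 0.07, 0.88, 0.12, 0.95, 0.91, 0.04, 0.98]

# Local interpretation: each continuous state becomes a component symbol.
bits = [interpret_as_bit(v) for v in voltages]    # [1, 0, 1, 0, 1, 1, 0, 1]

# Global interpretation: integrating the component symbols into one
# compound symbol and reading it as an integer is a further, separate step.
number = int("".join(str(b) for b in bits), 2)    # 173

print(bits, number)
```

The two acts of interpretation remain visibly separate in the sketch: the local thresholding of each continuous state, and the global integration of the resulting component symbols into one compound numeral.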


There is nothing preventing us from using physical systems which have a continuous distribution of their states or a discrete one (viz., quantum states). Since the physical magnitudes and the states of physical systems are very different concepts, and their relationship in modern physics became quite complicated, we have to be very careful not to confuse them.

The crucial point is that the measurements performed on a physical system are basically procedures of assigning meaning to the concept of a state of the system. Since this meaning is given in operational terms, it has the form of a construction. By constructing a configuration of measuring devices and through an interpretation of their states in terms of real numbers, we establish the state of the measured system with a higher or lower degree of determination (in quantum mechanics usually only up to some probability distribution). The values of the numbers are conventional and depend on the choice of units. Their importance is only in making distinctions and ordering the states of measuring devices.

Now, in order to save the intuitive understanding of the original distinction of analogue and digital computing, we can define it with respect to the degree in which interpretation of the results is involved in it. Analogue computing does not involve interpretation beyond the measurement itself, i.e., the assignment of a real number to the outcome of the measurement. The outcome has the form of some (for instance, physical) state of the system, and the number is associated directly with this state.

In digital computing, the number is not assigned to the physical state of one physical system. Instead, we have a complex of component physical systems (squares of the tape, cells of the memory, etc.), and the physical states of these component systems are associated with component symbols, viz. digits. Integration of these component symbols into a whole is not performed by the digital computer. This additional level of interpretation is left to a human mind.

We can think of a digital computer as a system of communicating analogue computers, each producing as a result a one-digit number. However, this system becomes a digital computer only after we re-interpret the one-digit numbers as digits of one many-digit number. This can happen only through human intervention in integrating components into a whole. Moreover, the role of the digits in a compound numeral is a matter of human convention.

The analogue-digital distinction can be better understood in this context when we refer to the historical examples of models for computation. The Turing-Post machine described by Post in his 1936 paper is an example of an analogue machine as long as we do not attempt to interpret the symbolic meaning of the configuration of boxes and assume that the set of full boxes is characterized by a natural number, based on how we understand natural numbers as finite cardinals, i.e., without any specific convention involved. The outcome of the computation consists simply of the set of boxes marked as full. The machine described by Turing in his paper is digital because he is associating with the configuration of 1's and 0's a binary representation of an integer, and therefore he is giving this configuration a meaning which goes beyond what is in the machine.

The former approach, when numbers are encoded by sequences of 1's through the association with the cardinal number of the set of these digits in a sequence, is very often used as a preferred system of encoding numbers (misleadingly called the "unary" numerical system) in discussions or explanations of the concept of a Turing machine. This is a good example of the involvement of the human mind in the integration of components and interpretation. The universal Turing machine always operates with an alphabet of at least two characters. Claude E. Shannon showed the impossibility of a universal machine operating with an alphabet of one character.19 We can distinguish on the tape a sequence of 1's, then we can unite it into a compound symbol (more exactly, we did it already in considering a sequence) and interpret it as a number. But the work of the machine is on a sequence of the two symbols 0 and 1, and when we interpret it as a recursive function between numbers, this interpretation has to involve sequences of both digits.

It has to be reiterated that in the description of Turing machines there is nothing that could serve as interpreter of the global configuration of the squares, and that this interpretation is always made by a human mind. Certainly, we can consider a machine built based on the design of the Turing machine but with an additional component, which assumes the role currently given to a human agency. A machine of this type could become fully autonomous.

3. INFORMATION, ITS INTEGRATION, AND SEMANTICS

Alternative forms of computation, for instance, ones which are more effective, more powerful, or simply different, were considered in the past. Turing considered an alternative c-machine (choice machine) in which some choices of steps were left to an external operator. The main obstacle in going beyond or away from computation described in terms of Turing machines was the lack of a sufficiently general but rigorous conceptual framework admitting variants or alternatives of computing devices. As long as computation is understood as an operational form of recursive functions described in terms of processes which can be associated with the physical world, there is not much hope for going beyond Turing machines. But even informal reflection suggests that there should be some alternatives, for instance, in the form of "continuous automata."20

This section is a short outline of the concepts introduced and elaborated on in earlier publications of the author, which will serve here as a framework for the discussion of alternative forms of computing.21 The choice of their definitions was guided in the earlier articles by two main goals, which are consistent with the goals of the present paper. One was to secure a sufficiently high level of generality allowing application of the defined concepts in a very wide range of contexts, including those in which the concepts appeared originally. Within these "traditional" contexts, the present definitions are consistent with the earlier ones. The other was to achieve complete independence from the


linguistic forms of information or other specifically human aspects of dealing with information. The latter goal serves well the studies of autonomy in computation.

Of course, the discourse on information has to be carried out in some language (i.e., a metalanguage) and therefore with the use of the linguistic forms of information, but information as a subject of the study should be free from all restrictions present in the most familiar but too narrow linguistic context. In short, the linguistic form of information is a tool of the study, but the subject of the study should be a more general form of information.

Thus, the conceptualization of information in this paper has its foundations in the a priori categorical opposition of the one and the many, which can be considered the most fundamental distinction between characteristics of reality. The recognition of this opposition is a necessary condition for all attempts to define or to describe any concept, and therefore also the concept of information, and, as such, it is unlikely to generate objections. Information is then defined as an identification of a variety, i.e., that which makes one out of the many or, alternatively, a realization of the one-many opposition for the variety.

The many, or the variety, in this definition is a carrier of information, which has to be established prior to considering any instance of information. Its identification is understood as anything that makes it one or a whole, i.e., that moves it into or towards the other side of the opposition one-many. The word "identification" indicates that information gives an identity to a variety, which is an expression of unity, oneness, or wholeness.

There are two basic forms of such identification. One consists in a selection of one out of the many, and the other is in the form of a structure binding the variety, the many, into one. This brings out two manifestations of information, the selective and the structural. The two possibilities do not divide information into two types, as the occurrence of one is always accompanied by the other, although not on the same variety, i.e., not on the same information carrier.

For our further use of the concept of information in developing the concept of computation, it is important to observe that the duality of the structural and selective manifestations has a hierarchical character. To be able to select one element out of many through some process, for instance, filtering, we have to assume that this element is equipped with some identifying "internal" structure (where internal does not have to be understood in the topological sense but, rather, as a structure associated with the element in a unique way and giving it distinctive existence as a whole). This internal structure itself requires another variety to be bound into the selected whole. Thus, we cannot admit the existence of absolute "atoms" understood as objects devoid of any structural characteristics, unless we are unable to make any non-random choice (i.e., the choice is purely random in the complete absence of information within the information carrier).

The identification of a variety may differ in degree. For the selective manifestation, this degree can be quantitatively described using an appropriate probability distribution and measured using, for instance, Shannon's entropy. For the structural manifestation, the degree can be characterized in terms of the level of decomposability of the structure.22

Although, at first sight, these two ways of identification seem very different, their formalization can be achieved in terms of one mathematical theory. It is not a surprise that it is at a rather high level of abstraction. The most important mathematical concepts and facts will be introduced here, but the details, proofs, and their more extensive theoretical exposition can be found in every text on general algebra or lattice theory.23 The variety which serves as a mathematical model for an information carrier is a set S equipped with a family ℑ of subsets closed with respect to arbitrary intersections and having the whole set S as its element (i.e., ℑ is a Moore family). There is a one-to-one correspondence between Moore families on a given set and general closure operators, i.e., functions f mapping the power set of S into itself such that for all subsets A, B of S, we have

(I) A ⊆ f(A), A ⊆ B ⇒ f(A) ⊆ f(B), and f(A) = f(f(A)).

The set S with the operator f is called a transitive closure space.

The Moore family ℑ is the family of closed subsets f-Cl with respect to the closure operator f, i.e., it consists of the sets A such that f(A) = A. In turn, f(A) is the intersection of all subsets in ℑ which include A.

For the trivial minimal closure operator f, which for all subsets A of S satisfies f(A) = A, the family of closed subsets f-Cl is the entire power set 2^S of S (the set of all subsets of S). Since this family forms a Boolean algebra with respect to set operations, we can equip it with a probabilistic measure as a form of selection.

In the linguistic context, we can refer to the association of information with the relation between sets and their elements formally expressed by "x∈A." The informational aspect of set theory can be identified in the separation axiom schema, which allows the interpretation of x∈A as a statement of some formula φ(x) formulated in predicate logic which is true whenever x∈A. The set A consists then of all elements which possess the property expressed by φ(x), which, of course, is the linguistic expression of information.

If we are interested in a more general concept of information not necessarily based on any language, we can consider a more general relationship than x∈A, described by a binary relation R between the set S and its power set 2^S: xRA if x∈f(A).

If this closure operator is trivial (for every subset A its closure f(A) = A), we get the usual set-theoretical relation of belonging to a set: xRA if x∈A. It is related to the assumption that every subset A of S is associated with information, for instance, in the form of a predicate. Of course, for different infinite sets of higher cardinality, we cannot provide distinct predicates, but we can provide schemata of forming


appropriate predicates. In more general cases, only closed subsets correspond to instances of information.

The direct association of information with predicates can be found at a higher level of abstraction, without the necessity to involve set theory, through Hilbert's epsilon calculus, but this level of abstraction is beyond the scope of this article.

When we say that the trivial closure operator (i.e., such that f(A) = A for every A) is minimal, it means that this closure operator is minimal in the partial order on the set of closure operators defined as follows. For two closure operators f and g on a set S, if ∀A ⊆ S: f(A) ⊆ g(A), then f ≤ g. This ordering is related to an equivalent definition that makes f less than g if g-Cl ⊆ f-Cl, i.e., the ordering of closure operators is inverse-order-isomorphic to the inclusion of their Moore families. Obviously, in this partial order, the operator f for which all subsets of S are closed is the least, because its Moore family of closed subsets is the largest (all subsets).

Thus far we have been talking about the selective manifestation of information. The structural manifestation is associated with the structure which is binding elements of the variety into a structural whole. In this case, the Moore family of closed subsets consists of all substructures of the structure introduced on the set S. Although the characterization of all possible types of mathematical structures through their substructures is not complete (there are some formally non-isomorphic structures with isomorphic families of substructures), the exceptions are rare and of secondary importance for the study of information.

We can impose on the closure operator a large variety of additional conditions to get the descriptions of virtually all structures which formalize our experience of reality. For instance, if we assume that

(N) f(Ø) = Ø,

(fA) ∀A,B ⊆ S: f(A∪B) = f(A) ∪ f(B),

then the closure space is a familiar topological space, as defined by Kuratowski. We can add an additional condition to get a T1 topological space:

(T1) ∀x∈S: f({x}) = {x}.

Topological spaces are the most familiar instances of closure spaces, but closure space axioms are established for many structures in mathematics, including geometry, logic (the consequence operator of Tarski), general algebra, etc. When we want to consider specific forms of the structural manifestation of information, we have a very rich source of closure spaces for the choice of an appropriate axiomatic.

Thus far we have discussed an information carrier, which in our approach is represented by the set S. An information system has a specified mode of identification of the variety, given by the closure space on S or, alternatively, by the Moore family ℑ in the power set of S.

Now it is the turn for the information itself. The identification of the variety (or in the variety) is a filter in the family ℑ, i.e., a subfamily ℑ0 ⊆ ℑ which together with each subset A in ℑ0 has as its elements all subsets including A (all supersets of A), and which is closed with respect to finite intersections.

The explanation of how the distinction of a subfamily ℑ0 represents identification of a variety for both manifestations at the same time is more complicated and will not be presented here in detail.24 For our purpose, it will be enough to say that an element of S can be identified by its properties in different degrees. Properties (or predicates in the linguistic context) usually are associated with the subsets of elements characterized by them. Of course, if we have a one-to-one correspondence between subsets and properties (or predicates), then we can specify information by the set of elements and only those elements that are fully characterized by it. Alternatively, we can consider all subsets that include a given subset (a principal filter), parallel to all properties that are consequences of a given property. However, not all filters are principal.

Thus, we have to consider the description of information by filters. This means we admit the possibility that some elements are determined by a family of properties but that there is no single property that can do it. It will be important in the more general case when properties are associated only with distinguished closed subsets.

Finally, if the identity of an individual element in S is determined by its properties (i.e., the subfamily of closed subsets to which the element belongs), then this subfamily is a filter. Furthermore, when we have a probability measure on the family of closed subsets (for the usual probability space it will be either the entire power set or the family of all measurable subsets), the subsets whose measure is equal to one form a filter.

The Moore family ℑ (i.e., the family of closed subsets) with respect to the partial order defined by the inclusion of sets acquires the structure of a complete lattice. We will use for this lattice the symbol ℒf, indicating its association with the closure operator f.

This complete lattice can be considered a generalization of logic for an information system. It can be shown that when the present formalism is used for information understood in linguistic terms, it plays the role of an algebraic representation of the traditional logical structures.25

One of the primary characteristics of the logic ℒf is its degree of decomposability into a direct product.26 If S is simply a collection without any structure, i.e., every subset of S is closed and the closure operator is trivial, the logic ℒf is completely reducible into a product of primitive structures (two-element Boolean algebras consisting of the empty set and each of the one-element subsets) involving every element of S separately. We can associate this case with totally disintegrated information.
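The ordering of closure operators described above can be checked mechanically on a small example. The following sketch is illustrative only (the set, the two operators, and all names are assumptions, not from the paper): it verifies the closure axioms, computes Moore families, and confirms that f ≤ g pointwise exactly when the Moore family of g is included in that of f.

```python
# Sketch (illustrative, not from the paper): closure operators on a 3-element
# set, their Moore families of closed subsets, and the ordering f <= g.
from itertools import combinations

S = frozenset({1, 2, 3})

def powerset(s):
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_closure_operator(f):
    """Check the closure axioms: A ⊆ f(A), f idempotent, f monotone."""
    for A in powerset(S):
        if not (A <= f(A) and f(f(A)) == f(A)):
            return False
        for B in powerset(S):
            if A <= B and not f(A) <= f(B):
                return False
    return True

def moore_family(f):
    """Closed subsets of f, i.e., its fixed points: the Moore family."""
    return {A for A in powerset(S) if f(A) == A}

trivial = lambda A: A                            # minimal: every subset closed
coarse  = lambda A: S if A else frozenset()      # only Ø and S are closed

assert is_closure_operator(trivial) and is_closure_operator(coarse)

# f <= g  iff  f(A) ⊆ g(A) for all A  iff  Moore(g) ⊆ Moore(f):
pointwise = all(trivial(A) <= coarse(A) for A in powerset(S))
inverse_inclusion = moore_family(coarse) <= moore_family(trivial)
print(pointwise, inverse_inclusion)              # both hold: True True
```

The trivial operator is least precisely because its Moore family (all eight subsets) is largest, in line with the inverse-order isomorphism stated above.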

PAGE 16 FALL 2015 | VOLUME 15 | NUMBER 1 APA NEWSLETTER | PHILOSOPHY AND COMPUTERS

For instance, the diagram below shows the Boolean algebra of all subsets of the three-element set. The algebra representing the logic of an information system (on the left) can be considered a product of three primitive two-element algebras (on the right), which leads to the case of totally disintegrated information. Two-element Boolean algebras are the only exceptions from the general rule of complete reducibility of Boolean algebras.

Fig. 1. Complete reducibility

At the other end of the spectrum, there are cases (quantum mechanics provides examples, but there are many examples unrelated to physics, for instance, in geometry) where the logic ℒf cannot be decomposed at all. Then we have information that is totally integrated, as, for instance, below.

Fig. 2. Complete irreducibility

This is a type of information characterization which was not possible at all in earlier formalizations, such as Shannon's work on communication, restricted to the selective manifestation of information as it is defined here.

For our purpose, it is important to observe that the decomposability of the logic ℒf of information is reflected in its substructure (called the center of the lattice in the language of lattice theory), which itself is a Boolean algebra. In the case of complete disintegration, the center of the logic is identical with the entire logic. In the case of complete integration, the center is trivial and consists of only two extreme (top and bottom) elements.

Also, for every intermediary case, we can describe its logic in terms of the center (which has the properties of a fully disintegrated information system) and of its irreducible or coherent components. Thus, we can identify separate integrated "portions" of information and their participation in the totally disintegrated information of the variety formed by them.

Two-element factors to which totally disintegrated information can be decomposed are irreducible, i.e., coherent in a trivial way. They can be associated with answers to yes-no questions, which can be independently answered. Nontrivial coherent factors cannot be characterized in terms of independent properties, which limits their description in terms of the decomposition into parts. The situation described here will be easy to understand for someone familiar with quantum logic, or at least with quantum mechanics, which provides an example of information systems with integrated information.

The process of transformation of information into different levels of integration in the process of encoding was theoretically described by the author as a generalized Venn gate whose encoded output can have an arbitrary level of integration.27

The gate consists of a system which has an operational structure in the form of the logic ℒf whose elements are all input entries, and with the subset of elements of the logic which generate the entire lattice (atoms in an atomic lattice, meet and join irreducible elements in a more general case, elements of a so-called frame in the general case) as output exits. Activation of any input entry generates activation of all elements greater than this element, i.e., activation of every element activates the entire principal filter of the elements.

A very simple case of encoding information with respect to three predicates (as in the standard Young-Helmholtz model of color encoding with three basic colors) can be illustrated as follows:

Fig. 3. Non-integrating encoding

The process of encoding can be described in terms of Boolean algebra and is illustrated as follows:

Fig. 4. Non-integrating encoding

Thus far, we considered encoding in which information is totally disintegrated. This encoding is different from the one produced by a priority encoder (priority encoders have as their logic a linear ordering) but can be easily realized in the standard computer architecture. Total disintegration of information was due to the fact that the logic of the gate was a Boolean algebra. However, we can use a generalized Venn gate where the logic of the gate is more general and can be a completely irreducible lattice, as in the following illustration:

Fig. 5. Completely integrating encoding gate
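The complete reducibility shown in Fig. 1 can be verified directly for the three-element set discussed above. This is an illustrative sketch (the encoding and all names are mine, not the paper's): each subset is encoded by three independent yes-no answers, and meet and join act coordinate-wise, which is exactly the decomposition into three two-element Boolean algebras.

```python
# Sketch (illustrative): the Boolean algebra of subsets of a three-element set
# is a direct product of three primitive two-element algebras -- Fig. 1's
# complete reducibility, i.e., totally disintegrated information.
from itertools import combinations, product

S = ['a', 'b', 'c']
subsets = [frozenset(c) for r in range(4) for c in combinations(S, r)]

# Encode each subset by three independent yes-no answers ("is x a member?").
encode = lambda A: tuple(int(x in A) for x in S)
bits = {encode(A) for A in subsets}
assert bits == set(product([0, 1], repeat=3))   # a bijection onto {0,1}^3

# The encoding is a lattice isomorphism: meet and join act componentwise,
# so each coordinate carries its yes-no information independently.
for A in subsets:
    for B in subsets:
        assert encode(A & B) == tuple(min(p, q) for p, q in zip(encode(A), encode(B)))
        assert encode(A | B) == tuple(max(p, q) for p, q in zip(encode(A), encode(B)))
print("2^3 subsets =", len(subsets), "= product of three 2-element algebras")
```

Each coordinate is one independently answerable yes-no question, which is what "totally disintegrated" means here; a completely irreducible lattice admits no such coordinate system.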


The illustration refers to atomic elements (minimal non-zero elements) as beginnings of output channels, but in the case of more general gates, the output channels may not be from atomic elements (they may not exist in some cases) but from a set of elements generating the lattice describing the logic of the gate (e.g., join and meet irreducible elements or, more generally, frame elements).

It has to be emphasized that generalized information-integrating gates which integrate information completely cannot, in the majority of non-trivial cases, be realized in the current architecture of computers. One of the reasons for this fact is that their logic has to be an infinite lattice. Thus, incorporation of a component mechanism integrating information cannot be achieved by adding another computer or any finite number of computers. The mechanism integrating information requires an essentially different architecture.

Integration of information is of special importance for our subject as it is the most fundamental characteristic of consciousness. The physical realization of this process in the brain is an open question, so it is only the author's belief, expressed as a working hypothesis, that there is an association between consciousness and information integration by a mechanism of the presented type. However, the idea that consciousness can be considered to be either itself integrated information or the result of information integration is one of the dominating views in the scientific study of cognition and consciousness.28

The definition of information and the formalism corresponding to this definition outlined above provide foundations not only for the syntactics of information, i.e., its structural characteristics, but also for its semantics.29 For this purpose, we can employ the mathematical theory of functions preserving information structures—homomorphisms of closure spaces, defined as functions φ from a closure space with operator f to a closure space with operator g which satisfy the condition:

∀A ⊆ S: φ(f(A)) ⊆ g(φ(A)).

This condition defines continuous functions in the case of topological spaces, and, as in topology, for general transitive closure spaces it is equivalent to the requirement that the inverse image of every g-closed subset is f-closed.

Homomorphisms of closure spaces preserve not only the lattices of closed subsets, but they also map the center of one logic into the center of another and the coherent (i.e., irreducible) factors of one into coherent factors of another. Thus, they preserve all of the important characteristics of information.

Now, we can consider semantics of information as a relationship between two information systems. The centuries-long discussion of the meaning of meaning had as its main obstacle the attempt to cross the border between two very different realms, that of language, i.e., symbols, and that of entities in the physical world. Since symbols seemed to require an involvement of the conscious subject associating each symbol with its denotation, the border was identified with the one between mind and body. The intention of a symbol, which directs the mind to the denotation, Franz Brentano identified with the mental aspect of symbolic representation that is not reducible to physical phenomena.

However, in reality, when we associate a symbol with its denotation, we do not make an association with the physical object itself but with the information integrated into what is considered to be an object. Thus, the association between a symbol and its denotation is a relationship between two informational entities consisting of integrated information. Both the symbol and its denotation are integrated information, but in two different information systems.

The mental aspect of symbolic representation is not in its intention, or in the act of directing towards denotation, but in the integration of information into objects (wholes). The relationship between these informational entities, between the symbol and its denotation, can be purely conventional. We are usually taught which symbol to relate with which object. But the capacity to form objects, i.e., to integrate information, is a characteristic of the mind.

The word "cow" has as its denotation the information integrated into the complex identified in some way, for instance, with the use of our senses and the brain processes which integrate sensory input. We should also observe that the recognition of the word "cow" involves some process of information integration forming a whole from the sequence of the three separate letters. If we imagine a person who can read only one letter at a time and forgets this letter when reading the next one, there is no way this person could understand the entire word, no matter how many times he or she reads the letters and in what order.

An information system interpreted as symbolic is simply the image of the original system, which is its denotation. Thus, symbolic information is "about" its inverse image through the function building the correspondence between information systems. This "aboutness" does not require any correspondence between entities of different ontological status, which was necessary in all approaches to intentionality from the Scholastics to Franz Brentano and beyond.

We can find an example of a similar mental act of information integration in computation performed by a Turing machine if we understand computation as a description of the transition from integers to integers. The sequence of symbols on the tape becomes an integer only when the human mind is involved, in exactly the same way that the sequence of letters is integrated into the word "cow." The use of an appropriate convention gives the association of the numeral integrated from digits with the integer.
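The homomorphism condition for closure spaces discussed above can be exercised on a toy example. The following sketch is an assumption-laden illustration (the spaces, maps, and names are mine, not the paper's): it checks the condition φ(f(A)) ⊆ g(φ(A)) against the equivalent requirement that inverse images of g-closed subsets are f-closed.

```python
# Sketch (illustrative): the homomorphism condition for closure spaces,
#   ∀A ⊆ S: φ(f(A)) ⊆ g(φ(A)),
# checked against the equivalent criterion that the inverse image of every
# g-closed subset is f-closed (the topological notion of continuity).
from itertools import combinations

def powerset(s):
    items = list(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

S, T = frozenset({1, 2}), frozenset({'x', 'y'})
f = lambda A: S if A else frozenset()   # on S only Ø and S are closed
g = lambda B: B                         # on T every subset is closed

def is_homomorphism(phi):
    image = lambda A: frozenset(phi[a] for a in A)
    return all(image(f(A)) <= g(image(A)) for A in powerset(S))

def preimages_closed(phi):
    closed_in_T = [B for B in powerset(T) if g(B) == B]
    pre = lambda B: frozenset(a for a in S if phi[a] in B)
    return all(f(pre(B)) == pre(B) for B in closed_in_T)

constant = {1: 'x', 2: 'x'}     # constant maps are always homomorphisms
separating = {1: 'x', 2: 'y'}   # splits the only nontrivial closed set S

for phi in (constant, separating):
    assert is_homomorphism(phi) == preimages_closed(phi)
print(is_homomorphism(constant), is_homomorphism(separating))  # True False
```

The two criteria agree on both maps, as the text asserts for transitive closure spaces: the constant map preserves the structure, while the separating map breaks the closed set S apart and so fails both tests.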


It is true that we can use a Turing machine to find out whether two sequences of letters, for instance, "cow" and "cat," are identical or not. However, no Turing machine can make a distinction between the word "cow" and a sequence of letters "cow," unless additional separating symbols are introduced. Humans can make this distinction if they know about the existence of such a word, i.e., of its integrated form, even when they have never seen the animal.
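The point can be restated in one line of code (a trivial sketch, not from the paper): at the level of symbol sequences, nothing distinguishes the integrated word from the mere sequence of letters, so any machine-level comparison is blind to the distinction.

```python
# Trivial sketch: at the level of symbol sequences nothing marks the word
# "cow" as an integrated whole; a machine comparing representations sees
# only the identical sequence of letters.
word = "cow"                         # intended as an integrated word
letters = "".join(["c", "o", "w"])   # intended as a mere letter sequence
print(word == letters)               # True: the distinction is not in the data
```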

4. COMPUTATION AS DYNAMICS OF INFORMATION

Computation is understood here as an interaction of two or more information systems. Before we attempt a more general formulation, the concept of computation will be considered in the context of a slight generalization of the Turing machine (a-machine) called a symmetric Turing machine, or s-machine, i.e., we will limit computation to two information systems which traditionally were called a "head" and a "tape," and we will follow Turing in all but one specification of the machine.30

The reason for this generalization is the goal to develop the concept of information dynamics in a way consistent with its physical counterpart, i.e., using exclusively interactions in place of one-way actions.

Each of the two main component information systems, which for the sake of understanding we continue to call a head and a tape, has two levels, local and global. The tape consists of cells, where at the local level each cell is again an information system that can have one of the many characters selected out of some alphabet (selective manifestation of information). In the head, instead of cells, we have instruction list positions (ilp's). Each such ilp can hold one of many instructions from the catalogue of instructions (also selective information at the local level).

At the global level, we have a structural manifestation of information in the form of a configuration of characters located in the cells of the tape and also a structural manifestation in the configuration of instructions on the list in the head.

Thus far, the description is essentially the same for Turing's a-machine as for an s-machine. The difference begins now, when we assume that both cells and ilps include in the description of their local states the information about the change in passing to the next step of computation. The state of a cell is characterized by the current character occupying it and by the description of how to change its content when the cell comes into interaction with the ilp in each of its possible states. The state of the ilp is characterized by the current instruction located in this position and the description of how it should change when this ilp comes into local interaction with the tape cells in each of their possible local states. This constitutes the dynamics of interaction at the local level. It can be easily seen that this local dynamics is perfectly symmetric.

The dynamics of interaction at the global level dictates which pair, consisting of a cell and an ilp, will be coming into interaction in the next step, based on the current pair in contact.

The only difference in comparison with a-machines is in the possibility of changes of instructions in the head at the local level. The fact that we dissociate the process of selection of the next active pair (cell-ilp) from the local instructions in the head is purely formal, as it does not matter where we "locate" this process. It can be within the head, as in Turing's original description of a-machines, or it can be a global dynamic process involving the entire information systems of the head and tape. In the orthodox a-machine, we do not have to state that it is associated with, or located in, the head. What is most important is that in both cases of a-machines and s-machines the choice of the next pair is based on the content of the current cell and current ilp. Since the change does not influence the content of the cell-ilp pair but the global configuration, it is natural to make this dissociation.

The role of this dissociation is to separate the concepts of states of local systems from those of global systems, and to separate the concepts of states from the dynamics of interaction. Accordingly, without significant revision, the process of computing becomes compatible with the dynamical description of natural phenomena, for instance, in physics.

Now, it is important that when we make the additional restricting assumption that the local dynamics does not change the instructions in ilps, but only the tape cells, we get an orthodox a-machine. In this case, the class of s-machines has the a-machines as a special subclass. However, in actual physical dynamical processes such one-way action is impossible. We can only consider one-way actions as crude approximations of interactions when, for some reason, only one side of an interaction is significant and the inertia of the other makes the other side negligible.

The description of computation as a dynamical interaction of two two-level information systems is free from the involvement of human agents, but symmetric Turing machines are not yet fully autonomous in the sense that the global, structural information is not in an integrated form. For modeling a fully autonomous machine, it is necessary to consider an additional level of hierarchy at which integration of information is performed, for instance, in the form of a generalized Venn gate mentioned in the previous section.31 It should be remembered that the device performing integration of information in general cannot be implemented by a computer or another Turing machine.
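A minimal simulation can make the symmetry of the local dynamics concrete. Everything below is hypothetical (the alphabet, the instruction names, and the rule table are invented for illustration; the paper defines no concrete rules): one step updates both the active cell's character and, symmetrically, the active ilp's instruction, after which the global dynamics selects the next active pair.

```python
# A minimal, hypothetical s-machine sketch. Local dynamics:
# (character, instruction) -> (new character, new instruction, move).
# Note that the ilp's instruction can change, symmetrically with the cell.
local_rule = {
    ('0', 'flip'): ('1', 'flip', 1),
    ('1', 'flip'): ('0', 'halt', 1),
    ('0', 'halt'): ('0', 'halt', 0),
    ('1', 'halt'): ('1', 'halt', 0),
}

tape = list('0110')        # global configuration of characters on the tape
ilps = ['flip', 'flip']    # global configuration of instructions in the head
cell_pos, ilp_pos = 0, 0   # the currently active (cell, ilp) pair

for _ in range(100):
    char, instr = tape[cell_pos], ilps[ilp_pos]
    new_char, new_instr, move = local_rule[(char, instr)]
    tape[cell_pos] = new_char   # the cell's local state changes...
    ilps[ilp_pos] = new_instr   # ...and, symmetrically, so may the ilp's
    if move == 0:               # a fixed point of the local dynamics
        break
    # Global dynamics: select the next active pair from the current one.
    cell_pos = (cell_pos + move) % len(tape)
    ilp_pos = (ilp_pos + move) % len(ilps)

print(''.join(tape))  # this run settles on '1000'
```

If the rule table were restricted so that the instruction entry never changes, the same loop would run an orthodox a-machine, matching the subclass relation stated in the text.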


The detailed description of such a theoretical device for autonomous computation, including a component mechanism integrating information, is outside the scope of the present paper, although contribution to this task is one of its indirect ultimate goals.

The description of s-machines was originally intended as a way to eliminate human involvement from a-machines, where goal-oriented one-way actions are inconsistent with the fact that natural dynamics involves only interactions. The naturalized form of computation in s-machines can be compared with processes which are described in terms of natural phenomena.32

A simple example of the process of information dynamics similar to the work of an s-machine can be found in the mechanism of a "governor" controlling classic steam machines. Here, too, we have two information systems involving selective and structural manifestations of information. The governor has two balls on arms attached to the rotating axis, and the positions of the arms form a variety with a geometric, structural manifestation of information.

Another variety in this part of the governor is the range of the values for the speed of rotation, within which a selection (hence the presence of a selective manifestation of information) is made by the pace of the work of the engine.

The second information system consists of the valve, whose geometric shape and, therefore, size of cross-section is a structural manifestation of information. The selective manifestation of information consists of the amount of steam passing through the valve, which is influenced by the size of the valve.

Finally, we have dynamics in the form of the influence of the amount of steam coming to the engine on the pace of the work of the engine, and through this pace on the speed of rotation of the axis to which the arms of the governor are attached. Since the speed of rotation influences the geometry of the arms, they are negatively correlated with the size of the valve, and therefore with the amount of steam passing to the engine, and the cycle of feedback control is closed.

The interaction of the two parts of the governor is no different from the interaction of the "head" and "tape" of the s-machine, but here the work does not involve any interpretation by a human agent. Interpretation of the output is made exclusively by the engine based on the amount of steam being transported, but without any actual measurement involving conventional numerical values (i.e., without interpretation).

A much more complex example of a dynamic interaction can be found in biological evolution. Here, the global information system has a large hierarchic complex of information systems of different levels. In a very simplified way, biological evolution can be reduced to two information systems. One consists of the population of organisms (here, for simplicity, not differentiated into separate species), in which the survival and reproduction of individuals has the selective character. At the global level, the distribution of individual characteristics of organisms is a manifestation of structural information.

The other information system is in the form of the natural environment with its diverse resources, which are consumed by organisms. The structural manifestation of information is in the distribution of resources and their accessibility. The interaction is at the local level through the pair of an organism and its niche in the environment. Migration of the organisms within the environment corresponds to the change of global configuration (transition to the next "cell" in the Turing machine). The outcome of this process, in which we assume that there is some process of the transmission of individual characteristics to the offspring (genetics), is the evolution towards more adapted forms of life.

Once again, we have an example of a system in which meaning is generated internally, without any involvement of human agents.

Now, there is no reason why we should restrict computation to only two interacting systems. It is obvious that the evolution of life involves a multiple-level hierarchy of information systems, such as cells, organs, organisms, populations, etc.

We could generalize computation to the interaction of collectives of information systems at the same level, too. But both generalizations go beyond the scope of the present paper. Actually, in the following discussion, we will step back to consider computation in a form much closer to the Turing machines, without even assuming that the "head" is changing its instructions. The only difference will be in essentially different information systems replacing the tape and head and a different dynamics of interaction.

For the present purposes, we can summarize the understanding of computation as a dynamic interaction of two two-level information systems. Each of them has its global level with the structural manifestation of information (exemplified in s-machines by the configuration of "characters" on the "tape" and the configuration of "instructions" on the "list of instructions"). At the local level, we have a selective manifestation of information, with each of the elements in the variety at the global level (e.g., cell or ilp) being itself an information system with the same, prescribed variety (e.g., the alphabet of characters or the catalogue of all possible instructions in s-machines). In each information system, information is in its selective manifestation. At the global level, the cardinality of the varieties, i.e., information carriers, is arbitrary, although traditionally only a finite set of elements is supposed to be "non-blank."

Computation is a dynamical interaction at both levels. At the local level, the interaction is between one distinct active pair of local information systems, and it generates a new selection of an element from the local variety (selection of a new, but possibly the same, character for the active cell, and a new, but possibly the same—always the same in an a-machine—instruction for the active ilp). At the global


level, the interaction produces the change of the pair of active elements from the global varieties.

Computation is deterministic if the outcome of interaction is determined by the current state of the active pair of elements, i.e., is determined by the current selective information. Of course, we can consider non-deterministic computation, in which selection is described, for instance, by some probability distribution, but that is outside the scope of the present paper.

5. HISTORICAL ROLE OF GEOMETRIC CONSTRUCTIONS

In this section, the main theme is the role of geometric constructions in the generation of meaning. We can interpret in this manner virtually the entire development of mathematics, starting from the Pythagoreans, through the work of René Descartes, and beyond.

The issue of the exact philosophical views of the Pythagoreans is difficult to establish, as they are known only from the critical presentations by those who opposed Pythagorean views, mainly by Aristotle. Thus the statements ascribed by Aristotle to the Pythagoreans, such as that "things themselves are numbers,"33 that "number is the matter of things,"34 or that numbers are "the principles of all existing things,"35 have to be viewed with some caution.

Edward A. Maziarz and Thomas Greenwood write: "[W]hen Aristotle says the Pythagoreans 'supposed real things to be numbers' and 'did not regard number as separable from the object of sense' he surely means that they must have studied numbers as external objects, not as mere auxiliaries to ordinary computation."36 They continue further,

The assimilation of number and figure in a rational method of investigating nature called for a practical way of combining arithmetic and geometry. The initial step was a systematic representation of numbers, which the early Greeks accomplished in two ways. The easiest was the method of disposing dots or alphas (units) along straight lines which formed geometrical patterns; the more technical was the construction of straight lines proportional in length to their corresponding numbers. The Pythagoreans are credited with the discovery and use of both methods.37

In both methods, the study of the properties of numbers was carried out in terms of the geometric properties of their representations. Here we have the origin of terms such as triangular numbers, squares, cubes, oblong numbers, etc. Geometric characteristics were more important than the arithmetical ones. Prime numbers attracted attention as numbers which have to be represented by a singular segment of units, and because of that they were called by Thymaridas supremely rectilinear. Since the sum of the first n odd numbers is equal to the square of side n, the Pythagoreans were defining odd numbers as differences between two squares of consecutive sides rather than as numbers which cannot be divided by two.38

We can see in the practices of the Pythagoreans the reflection of a more general tendency which would dominate further thinking about numbers. The meaning of numbers was constructed, and the construction was based on geometric experience. The fundamental role of the constructive approach is nowhere more clear than in the use of straightedge (or ruler) and compass constructions in solving geometric problems.

The fact that the rules of construction required that the ruler does not have any markings, not even an indication of a unit, is very significant. This restriction was contrary to all pragmatic aspects of the use of geometry. Thus, the ruler and compass constructions were not just a matter of a reduction of theoretical methods to those simpler, easier, and more accessible through technological progress or practical experience. Philip E. B. Jourdain quotes Plutarch, saying that the employment of mechanical instruments in solving certain geometrical problems by Archytas and Menaechmus caused Plato to inveigh "against them with great indignation and persistence as destroying and perverting all the good there is in geometry."39

Thus, we have two different methodological tools in geometry: logic with the axiomatic method, invented by Aristotle and fully applied by Euclid in the "Elements," and the older geometric construction. However, it is important to notice that the postulates of Euclid are formulated in constructive terms, as a description of what can be constructed with the idealized ruler and compass. For instance, the first principle states that it is always possible "to draw a straight line from any point to another." Thus, the two methodologies were not considered independent.

Even much later, when, with the assimilation of the positional numerical system, the methods of using and solving equations became common in Europe, causing changes in the methods of using numbers, geometry played an intermediary role between physical reality and abstraction.

Characteristic of this is the famous statement by Galileo Galilei in his "Il Saggiatore" from 1623: "Philosophy is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth."40 Although Galileo was a pioneer of quantitative measurements in science, he did not include numbers in the language of philosophy. The meaning of numbers was in geometry, and only geometric objects were in direct relationship with the objects of the universe.

The revolution in the relationship between geometry and arithmetic, or more exactly algebra, came with the work of several mathematicians of the seventeenth century, and in particular with "La Géométrie" of René Descartes.

But careful reading of this text may be surprising for someone accustomed to the presentation of the analytic

PAGE 21 FALL 2015 | VOLUME 15 | NUMBER 1 APA NEWSLETTER | PHILOSOPHY AND COMPUTERS

geometry in modern textbooks, in which everything starts from the apparently obvious relationship between the set of real numbers and the set of points of the straight line equipped with an established unit segment (the so-called "real line"). Modern introductory textbooks tell us to take two such lines at right angles and to use them to associate in one-to-one correspondence the points of the plane with the pairs of real numbers "sitting on the axes."

There are no pre-established axes on the diagrams in the book of Descartes. He is using reference lines, typically crossing at right angles, because he wants to use the Pythagorean Theorem, but the reference lines have very specific roles in his constructions. One of the footnotes from the translators and editors of the English edition refers to a letter from Descartes to Princess Elizabeth:

In the solution of a geometrical problem I take care, as far as possible, to use as lines of reference parallel lines or lines at right angles; and I use no theorems except those which assert that the sides of similar triangles are proportional, and that in a right triangle the square of the hypotenuse is equal to the sum of the squares of the sides. I do not hesitate to introduce several unknown quantities, so as to reduce the question to such terms that it shall depend only on these two theorems.41

La Géométrie starts with the geometric constructions corresponding to arithmetic operations on numbers and to taking square roots, and most of the text in two of its three books is devoted to geometric constructions corresponding to specific algebraic problems (starting from the geometric construction solving quadratic equations). Once this goal is achieved, Descartes has the tools which allow him to deal with geometric problems by formulating them in terms of equations, since he is equipped with what he needs to come back to geometry.

The most interesting part of the book is about going beyond the ruler and compass constructions. This is actually the main theme of the book. On the front page, there is a drawing of the construction of an ellipse with the use of a string. Descartes writes,

I am surprised, however, that they [the ancients] did not go further, and distinguish between different degrees of these more complex curves, nor do I see why they called the latter mechanical, rather than geometrical. If we say that they are called mechanical because some sort of instrument has to be used to describe them, then we must, to be consistent, reject circles and straight lines, since these cannot be described on paper without the use of compasses and a ruler, which may also be termed instruments. It is not because the other instruments, being more complicated than the ruler and compasses, are therefore less accurate, for if this were so they would have to be excluded from mechanics, in which accuracy of construction is even more important than in geometry. [. . .] Now to treat all the curves which I mean to introduce here, only one additional assumption is necessary, namely, two or more lines can be moved, one upon the other, determining by their intersection other curves. [. . .] This instrument consists of several rulers hinged together [. . .]42

The numbers appear here as coordinates only as a way to build relationships between geometric objects, and the variables acquire their meaning only through the correspondence of unknown to known structures. Descartes is still simply ignoring negative solutions of equations (false roots) because for him they do not have geometric meaning.

6. COMPUTATION WITH RULER AND COMPASS

Before geometric computation is described in a more formal way within the conceptual framework of information presented above, we can observe that the ruler and compass constructions have many striking similarities with the work of a Turing machine. Here, the tape is replaced by a plane on which points, lines, or circles can be drawn by a "head," which has two tools—a ruler (for drawing lines) and a compass (for drawing circles).

The "head" can observe exactly two points at a time (in a specific order, for instance, left eye on the first point, right eye on the second point). Its instructions consist of the directives to draw a line through two points (ignoring the distinction of the order of points); to draw a circle with the "center" in the first point and the "radius" equal to the pair of the observed points; to draw a circle with the "center" in the second point and the "radius" equal to the pair of the observed points; and to mark a point of intersection and to label it (traditionally labels were capital letters, but they may equally well be natural numbers). Each time, the "head" is drawing something.

The "head" distinguishes only two local states of points, the marked point and the unmarked point. This corresponds to the distinction of 1 and 0. Thus the alphabet used by the geometric computing machine is exactly the same as the two-element alphabet of Turing machines. This fact is important because it shows that the difference between geometric computing machines and Turing machines is not in the alphabet but in the dynamics of information.

Although we distinguished marking and labeling, they can be considered the same operation, as there is no need to make the distinction between them except for the sake of easier understanding of the process. Not all points of intersection have to be labeled, but this is not important. We could add a directive of erasing unnecessary points, lines, and circles, but this can be omitted without significant consequences.

Each instruction additionally includes a directive to move the head over the plane to the next pair of points, always with specific labels and a specific order of labels. This is a crucial point, as we get to a potential ramification of the description similar to the distinction between the Turing a-machines and c-machines. A deterministic machine requires that we always go to a determined pair of points.
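The two-circle step of this machine can be made concrete with a small numeric sketch in Python (an illustration added here with invented names; it is not part of Schroeder's formalism). The head observes the pair (A, B), draws the circle of radius |AB| centered at each of the two observed points, and marks the two intersection points—the classical construction of the perpendicular bisector of AB:

```python
import math

# A toy model (invented for illustration, not Schroeder's formalism) of one
# step of the ruler-and-compass "machine": the tape is a plane, and the
# "head" draws circles through observed points and marks intersections.

def circle_circle(c1, r1, c2, r2):
    """Intersection points of two circles; centers given as complex numbers."""
    d = abs(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                        # nothing to mark
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    mid = c1 + a * (c2 - c1) / d         # foot of the common chord
    off = 1j * h * (c2 - c1) / d         # half-chord, perpendicular to c1c2
    return [mid + off, mid - off]

# Observed pair: the head looks at A and B, draws the circle of radius |AB|
# centered at each, and marks the two intersection points.
A, B = 0 + 0j, 4 + 0j
r = abs(B - A)
P, Q = circle_circle(A, r, B, r)
# Both marked points lie on the perpendicular bisector x = 2 of segment AB.
print(P.real, Q.real)   # both 2.0
```

Drawing the line through the two marked points would then change the state of every point in their closure, which is exactly the feature distinguishing this machine from a Turing machine below.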


Finally, we assume that the head has a list of instructions with their own labels, and every instruction tells the head to go to a new instruction (not necessarily a different one). We can also add an instruction to stop the operation.

The input for this machine consists of a configuration of a finite number of labeled points, a finite number of lines, and a finite number of circles. The output will be similar. It is quite clear that the machine can perform any ruler and compass geometric construction when the plane, ruler, and compass are understood in terms of Euclidean geometry. However, we can observe that this interpretation is not the only possible one. The meaning of these terms depends on the underlying axioms of geometry. Of course, we have to assume two of the postulates of Euclid to make sure that the machine can operate, but it is not absolutely necessary to use the full axiomatics of geometry.

Moreover, we can observe that the fact that the machine observes two points, not one, is not crucial for the distinction from a Turing machine. Actually, Turing assumed only a finite, not excessively big, number of squares being observed by the head to get consistency with the work of a human computer. We do not make any assumption regarding the number of points on the plane or the number of its dimensions. If we want to think in terms of geometry, of course, these assumptions are crucial. But they depend on the choice of geometry.

The head moves to another pair of labeled points, which is slightly different from what happens in a Turing machine, but this, too, does not seem to be a big difference.

What makes the ruler and compass computation essentially different from the Turing computation? The main difference is that the ruler and compass machine influences not just the observed squares, i.e., the points of the input, but can expand its action to the entire closure of the two-element set. To draw a line is to change the state of all points which, within the assumed geometry, belong to the closure of the two points, when geometry is expressed as a closure space.43

For a moment, to avoid unnecessary complication, we assume that the alphabet of a Turing machine consists of 0's and 1's only. When we look at its operation involving only a finite number of squares, and when we carefully avoid putting into its operation anything from our human interpretation of its work, we can consider the machine simply a device traversing an atomic Boolean algebra. Each configuration on the tape distinguishes an element of this Boolean algebra, whose atoms (least, non-zero elements of the algebra whose join is this element) correspond to the 1's.

In each step, the element remains the same or changes by one atom (or by a finite number k of atoms, if the machine observes k squares and can modify all of them).

This means that a Turing machine, when we do not introduce our human interpretation of the configuration on the tape as an integer, operates on a Boolean algebra, which in turn can be associated with the logic ℒf of the information system defined by the trivial minimal closure operator (see section 3 above).

For the trivial closure operator, every subset is closed, so there is no extension of any subset beyond itself. The operation of a Turing machine does not influence squares which are not observed. In the ruler and compass computing, the closure operation is nontrivial. For instance, the closure of every two-element set consists of all points belonging to the line determined by these two points. The machine is going well beyond the observed points.

7. GEOMETRIC COMPUTING

We can now formalize the concept of geometric computation in terms of information systems that are described by closure spaces. Moreover, we can follow Descartes in his approach to extend the constructions beyond those with the ruler and compass.

In this case, our information system is characterized by a geometry introduced on the variety (i.e., the set of points), so we have to identify the closure space axioms for geometry. For the reasons provided in earlier publications by the author, the axioms are slightly different from the usual approach, such as in combinatorial geometry.44

The traditional choice of the axioms for geometry at a high level of abstraction is as follows. We have a closure space satisfying:

(N) f(Ø) = Ø

(T1) ∀x∈S: f({x}) = {x}.

(fC) ∀A⊆S ∀x∈S: x∈f(A) ⇒ ∃B∈Fin(A): x∈f(B) (where Fin(A) means the set of all finite subsets of A).

(wE) ∀A⊆S ∀x,y∈S, x ≠ y: x∉f(A) & x∈f(A∪{y}) ⇒ y∈f(A∪{x}).

However, it can be shown that the weak exchange property wE (the Steinitz Exchange Property) serves the purpose of irreducibility of the structure to a product of component structures, and it is not included in some forms of geometry (e.g., convex geometry). Also, the finite character property fC is common to many other types of closure spaces not related to geometry. For instance, it is the main axiom for algebraic closure spaces, and it can be derived from our choice of different axioms presented below.

Thus our choice is:

(N) f(Ø) = Ø

(T1) ∀x∈S: f({x}) = {x}.

(Cn) ∀A⊆S: A = f(A) iff ∀B⊆A: |B| ≤ n ⇒ f(B) ⊆ A.

It will be written that f∈NT1Cn(S).

Of course, for more specific purposes, for instance, when we want to reconstruct the entire Euclidean geometry, we can add the weak exchange property wE and some other appropriate axioms.
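The n-character axiom Cn can be exercised on a small finite model. The following Python sketch is an illustration added here, not taken from the article: it uses the seven-point Fano plane as a convenient finite geometry, defines a closure operator whose closed sets are Ø, the singletons, the seven lines, and the whole space S, and verifies the two-character property C2 on every one of the 2^7 subsets:

```python
from itertools import combinations

# A finite sanity check (constructed for illustration, not from the article)
# of the n-character axiom for n = 2, on the Fano plane: closed sets are
# the empty set, the singletons, the seven lines, and the whole space S.

S = frozenset(range(1, 8))
LINES = [frozenset(l) for l in
         [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
          {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]]

def f(A):
    """Closure: the smallest of Ø, {x}, a line, or S containing A."""
    A = frozenset(A)
    if len(A) <= 1:
        return A                 # axioms (N) and (T1) hold by definition
    for line in LINES:
        if A <= line:
            return line          # two or three collinear points close to a line
    return S                     # non-collinear points close to the whole space

def satisfies_C2(A):
    """C2: A is closed iff f(B) ⊆ A for every B ⊆ A with |B| ≤ 2."""
    small = all(f(B) <= frozenset(A)
                for k in range(3)
                for B in combinations(A, k))
    return (f(A) == frozenset(A)) == small

# The equivalence holds for every subset of S.
assert all(satisfies_C2(A)
           for k in range(8)
           for A in combinations(S, k))
print("C2 verified on the Fano plane")
```

The same harness can be pointed at other finite closure operators; a closure operator that fails C2 (for instance, one whose smallest non-trivial closed sets need three points, as in the three-character geometry discussed below) would trip the assertion.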


It has to be emphasized that our choice does not impose any restrictions on geometry. Just the opposite, in fact: any other axiomatics for geometry used in the literature can be recovered by adding additional axioms.

The n-character property Cn was derived by the author from one of the equivalent forms of the finite character fC:

(fC') ∀A⊆S: A = f(A) iff ∀B⊆A: B∈Fin(A) ⇒ f(B) ⊆ A.

However, we should be careful with this analogy, since the equivalence of the definitions of finite character does not transmit to Cn.45

The definition of a geometric closure space presented above becomes a generalization of linear geometry, in which the main objects of study are points and lines (they are the only closed subsets except the entire space), when we choose the two-character property:

(C2) ∀A⊆S: A = f(A) iff ∀B⊆A: |B| ≤ 2 ⇒ f(B) ⊆ A.

Actually, in this classical case all higher Cn axioms for n>2 are automatically satisfied, as well as the finite character property, because for every closure operator f,

f∈Cn(S) ⇒ f∈Cn+1(S) ⇒ f∈fC(S).

It is not easy to place straightedge and compass constructions within such a general concept of geometry, since we have in this framework straight lines but not circles. However, we can talk first about straightedge constructions. Thus, we can consider geometric information systems defined by a closure operator f∈NT1C2(S), or f∈NT1C2wE(S) if we want to have geometry understood in its usual sense. For this closure operator, the closed sets consist of singleton subsets (points), closures of two points (straight lines), and the entire S. The closure of any triple of non-collinear points is the entire set S.

We can consider geometries with straight lines and circles by extending the list of axioms in order to reintroduce metric/distance and to define circles as sets of points equidistant from a given point, but this results in the necessity of adding an additional structure beyond that of the closure space. However, instead, we can change the closure space to a three-character one, i.e., to one with the closure operator f∈NT1C3(S), or alternatively f∈NT1C3wE(S).

In this case, the pairs of points become closed sets (thus, two different points do not determine a line), but closures of three points can be straight lines or circles. It is interesting that in this case we have only Euclidean models in metric geometry, when we interpret the closure of three non-collinear points as a circle, as the requirement that through every three non-collinear points there is a circle to which they belong is equivalent to the Fifth Postulate of Euclid. The sets of four points which do not belong to a straight line or a circle have as their closure the entire set S. Within this type of geometric information system, we can have the straightedge and compass constructions. However, we have to be careful with interpretation, as we lose the distinction between circles and straight lines.

Within the system, there is no way to make a distinction between straight lines and circles based on geometry. When we use this type of information system to describe computation, the distinction has to be made by a "head." It is only when we are using the structure of traditional metric geometry as a model that we have an automatic distinction, but to include it in the geometry we have to add many additional axioms.

The model based on the lines and circles of a metric space is not the only model of such geometry. An alternative model can be found when pairs of points form closed sets and closures of triple point sets in a metric space are parabolas with a distinguished and unique direction of their axes. This is quite important, as it shows that there are many different realizations of this type of geometry and, therefore, many different but equivalent realizations of the generalized geometric computing.

If we want to consider a geometric information system for a machine which can draw, i.e., construct conics (always including degenerate ones, such as straight lines and their intersecting pairs!), we have to use a closure operator f∈NT1C5(S) or f∈NT1C5wE(S), as conics are determined by five points. Sets with fewer than five points are closed in this case.

Using the rudiments of algebraic geometry, we can state that for geometric systems with all curves of degree d being closed subsets, we have to use f∈NT1Cn(S) (or f∈NT1CnwE(S)) where n = (1/2)(d+1)(d+2) − 1. However, there are models for geometric systems for every n in metric geometry. In this case, the type of geometry is not directly related to the degree of the curves involved. For instance, when the closure operator f∈NT1C4wE(S) is considered in the model of metric geometry, straight lines and the curves given by the equations y = ax³+bx²+cx+d are closures of the quadruple sets of points which belong to them, while sets of five points have as their closure the entire set S.

We can see that with the property Cn as a fundamental axiom for geometry we get an infinite hierarchical classification of geometric information systems.

There is a natural question that arises as to how this classification of geometric systems by n-character is related to Turing machines. The answer was already outlined in the previous section, but it can be made more precise now. It comes with the following proposition relating the n-character property closure spaces for the lowest values of n with binary relations:46

i) f∈C0(S) iff ∃T⊆S: f(A) = A for T⊆A and f(A) = A∪T otherwise. If T = Ø, then f is the trivial closure operator for which f(A) = A for every subset A.

ii) f∈NC1(S) iff there exists a reflexive and transitive relation (quasiorder) R on S such that ∀A⊆S: f(A) = Re(A) := {y∈S: ∃x∈A: xRy}.


iii) f∈NT0C1(S) iff there exists a partial order R such that f(A) = Re(A).

iv) f(A) = Re(A) with R an equivalence relation iff f∈NC1(S) and f satisfies: ∀x,y∈S: x∈f({y}) ⇒ y∈f({x}).

From the first part of the proposition and the fact that the tape of the Turing machine as an information system can be described by an atomic Boolean algebra with a countable atom space, or in other words by a trivial closure operator f(A) = A for every subset A of S, we can conclude that Turing machines with the binary alphabet are geometric information systems of character 0.

If the alphabet consists of more than two characters, for instance k characters, we can establish a correspondence between characters and appropriate sequences of 0's and 1's (their binary encoding) in such a way that each character corresponds to an equivalence class defined by an equivalence relation on the atom space of a Boolean algebra. Following the fourth part of the proposition, the closure operator becomes in this case a transitive operator of character 1. This is a natural consequence of the fact that the sequences of 0's and 1's corresponding to characters cannot be decomposed into separate parts in the process of computing.

8. CONCLUSION

As the preceding section shows, the present approach to computing, i.e., the orthodox Turing machine computation with the alphabet consisting only of 0's and 1's, corresponds to geometric computation of rank 0 (i.e., defined by a 0-character closure space as its information system). If we use an arbitrary finite alphabet with more than two elements, we have a geometric computation of rank 1 (closure space of character 1). Starting from rank 2, we have a computation which can be associated with actual non-trivial forms of geometry.

When we compare closure spaces of type Cn, with their Moore families nested in each other, we get an increasing "power" of computation in the sense that their closure operators can be compared, and those with the higher value of n in Cn are greater, as explained in section 3. This means that we can construct a larger class of geometric objects, for instance, more complicated curves, as was considered by Descartes, or more points determined by their intersections. However, Descartes considered only algebraic curves, and here we are not limited by the analytic description in terms of polynomial equations. Actually, these very general objects do not have to be curves at all.

The advantage of this generalized form of computing is in its analogue character. We can consider the meaning of the input or output of computing directly in the context of its realization in specific information systems. Moreover, geometric computation of the higher level can serve as a process of meaning generation for the lower level. What is crucial is that the information systems involved here allow for an arbitrarily high level of information integration. For instance, an appropriate choice of axioms for geometry (the wE property) makes geometric information totally integrated.47

Thus, we can attempt to design computing systems in which the role of a human agent in the integration of information (for instance, the interpretation of a sequence of digits as an integer) is taken over by the machine itself. The machine is operating on information encoded in the integrated way.

It is more likely that the task of designing a computing machine matching human cognitive skills would require a machine in its hierarchically complex form. This should not be a surprise when we take into account the philosophical consequences of Alfred Tarski's Theorem on the Undefinability of Truth.48 We cannot expect the feasibility of a simple non-hierarchic system which can generate meaning for itself. However, this claim is only hypothetical, as Tarski's theorem was proved for traditional logic and may not be valid for more general forms of information logic when information has a non-linguistic form.

The dominating tendency to consider only simple one-level information systems (or, more exactly, two-level systems if we take into account the global and local levels of computation) is in strong contrast to what we can observe in nature. For instance, living organisms are very complex hierarchic information systems, and, of course, the same applies to humans. This justifies the expectation that an authentic, fully autonomous computing device will require this form of complexity.

NOTES

1. See Turing, "Computing Machinery and Intelligence."

2. The belief that this will happen soon, or in the foreseeable future, is naïve, as there has never been a philosophically non-trivial concept in human intellectual history that would have been accepted by consensus. Even more naïve is the expectation that these two concepts are waiting somewhere for a sage, who will discover the "correct" way to define them. They can be defined as it suits the purpose of their use, but the value of the definition will depend on their place in a philosophical conceptual framework and their explanatory power in the discourse. The rule "garbage in, garbage out" applies to this case, as well as to all other discussions of philosophical concepts.

3. See Schroeder, "Philosophical Foundations for the Concept of Information"; "From Philosophy to Theory of Information"; "Dualism of Selective and Structural Manifestations of Information in Modelling of Information Dynamics"; and "From Proactive to Interactive Theory of Computation."

4. See Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem"; and Turing, "Computing Machinery and Intelligence."

5. See Schroeder, "Concept of Information as a Bridge between Mind and Brain."

6. See Searle, "Is the Brain a Digital Computer?"; The Rediscovery of the Mind; and The Mystery of Consciousness.

7. See Schroeder, "Semantics of Information."

8. See Descartes, La Géométrie.

9. See Schroeder, "Dualism of Selective and Structural Manifestations of Information"; and "From Proactive to Interactive Theory of Computation."

10. See Arbib, Brains, Machines, and Mathematics, 122. Italics and bold face as in the original text.

11. See Post, "Finite Combinatory Processes," 104.

12. See Shannon, "Mathematical Theory of Communication," 3.

13. See Wittgenstein, Remarks, III-21 (1942), 119e.


14. See Turing, "On Computable Numbers."

15. Ibid., 249.

16. See von Neumann, Collected Works, 292–95.

17. See Schroeder, "Dualism of Selective and Structural Manifestations of Information."

18. See Turing, "Computing Machinery and Intelligence," sect. 5.

19. See Shannon, "A Universal Turing Machine with Two Internal States."

20. See Zuse, Calculating Space.

21. Cf. note 2.

22. See Schroeder, "Quantum Coherence without Quantum Mechanics in Modeling the Unity of Consciousness."

23. A more detailed exposition of the concepts used in the following and their theoretical description can be found in Birkhoff, Lattice Theory; Davey and Priestley, Introduction to Lattices and Order; Jauch, Foundations of Quantum Mechanics; or in a variety of books on general algebra.

24. See Schroeder, "Computing as Dynamics of Information."

25. See Schroeder, "Search for Syllogistic Structure of Semantic Information."

26. See Schroeder, "Quantum Coherence without Quantum Mechanics."

27. Ibid.

28. Tononi and Edelman, "Consciousness and Complexity."

29. See Schroeder, "Semantics of Information."

30. Cf. note 9.

31. Cf. note 9.

32. Cf. note 9.

33. See Aristotle, Metaphysics, 987b 27.

34. See Aristotle, Metaphysics, 987a 15.

35. See Aristotle, Metaphysics, 987b 25.

36. See Maziarz and Greenwood, Greek Mathematical Philosophy, 15.

37. Ibid., 24.

38. Ibid.

39. See Jourdain, "The Nature of Mathematics," 15.

40. See Galileo, "Il Saggiatore."

41. See Descartes, La Géométrie, 10, fn. 18.

42. Ibid., 40 and 44.

43. See Jónsson, "Lattice-Theoretic Approach to Projective and Affine Geometry."

44. See Schroeder, "On Classification of Closure Spaces"; and "Computing as Dynamics of Information."

45. Ibid.

46. Ibid.

47. Ibid.

48. See Tarski, "The Concept of Truth."

BIBLIOGRAPHY

Arbib, Michael A. Brains, Machines, and Mathematics, 2nd ed. Berlin: Springer, 1987.

Aristotle. "Metaphysics." In The Basic Works of Aristotle, edited by R. McKeon. New York: Random House, 1941.

Birkhoff, Garrett. Lattice Theory, 3rd ed., vol. XXV. Providence, RI: American Mathematical Society Colloquium Publications, 1967.

Davey, B. A., and H. A. Priestley. Introduction to Lattices and Order. Cambridge, UK: Cambridge University Press, 1990.

Descartes, René. La Géométrie (The Geometry). Trans. D. E. Smith and M. L. Latham. New York: Dover, 1954.

Galilei, Galileo. "Il Saggiatore (The Assayer)." Trans. S. Drake. In The Controversy on the Comets of 1618, edited by S. Drake and O'Malley. Philadelphia, PA: University of Pennsylvania Press, 1960.

Jauch, Josef M. Foundations of Quantum Mechanics. Reading, MA: Addison-Wesley, 1968.

Jónsson, Bjarni. "Lattice-Theoretic Approach to Projective and Affine Geometry." In The Axiomatic Method, edited by L. Henkin, P. Suppes, and A. Tarski, 182–203. Studies in Logic and the Foundations of Mathematics, vol. 27. Amsterdam: North-Holland, 1959.

Jourdain, Philip E. B. "The Nature of Mathematics." In The World of Mathematics, edited by J. R. Newman, 4–72. New York: Simon & Schuster, 1956.

Maziarz, Edward A., and Thomas Greenwood. Greek Mathematical Philosophy. New York: Barnes & Noble Books, 1995.

Post, Emil L. "Finite Combinatory Processes – Formulation 1." Journal of Symbolic Logic 1, no. 3 (1936): 103–105.

Schroeder, Marcin J. "Philosophical Foundations for the Concept of Information: Selective and Structural Information." In Proceedings of the Third International Conference on the Foundations of Information Science, Paris 2005. http://www.mdpi.org/fis2005/proceedings.html.

———. "Quantum Coherence without Quantum Mechanics in Modeling the Unity of Consciousness." In QI 2009, edited by P. Bruza et al., 97–112. LNAI 5494. Berlin: Springer, 2009.

———. "From Philosophy to Theory of Information." International Journal Information Theories and Applications 18, no. 1 (2011): 56–68.

———. "Concept of Information as a Bridge between Mind and Brain." Information 2, no. 3 (2011): 478–509.

———. "Semantics of Information: Meaning and Truth as Relationships between Information Carriers." In The Computational Turn: Past, Presents, Futures? Proceedings of IACAP 2011, Aarhus University, July 4–6, 2011, edited by C. Ess and R. Hagengruber, 120–23. Münster, Germany: Monsenstein und Vannerdat Wiss., 2011.

———. "Search for Syllogistic Structure of Semantic Information." Journal of Applied Non-Classical Logics 19, no. 4 (2011): 463–87.

———. "On Classification of Closure Spaces: Search for the Methods and Criteria." In RIMS Kokyuroku 1809, Algebraic Systems and Theoretical Computer Science, edited by Akihiro Yamamura, 145–54. Kyoto: Research Institute for Mathematical Sciences, Kyoto University, 2012.

———. "From Proactive to Interactive Theory of Computation." In The 6th AISB Symposium on Computing and Philosophy: The Scandal of Computation – What Is Computation?, edited by M. Bishop and Y. J. Erden, 47–51. The Society for the Study of Artificial Intelligence and the Simulation of Behaviour, http://www.aisb.org.uk/, 2013.

———. "Dualism of Selective and Structural Manifestations of Information in Modelling of Information Dynamics." In Computing Nature, SAPERE 7, edited by G. Dodig-Crnkovic and R. Giovagnoli, 125–37. Berlin: Springer, 2013.

———. "Computing as Dynamics of Information: Classification of Geometric Dynamical Information Systems Based on Properties of Closure Spaces." In RIMS Kokyuroku 1873, Algebra and Computer Science, edited by A. Yamamura, 126–34. Kyoto: Research Institute for Mathematical Sciences, Kyoto University, 2014.

Searle, John R. "Is the Brain a Digital Computer?" Presidential Address to the American Philosophical Association, 1990. http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html, accessed December 21, 2013.

———. The Rediscovery of the Mind. Cambridge, MA: The MIT Press, 1992.

———. The Mystery of Consciousness. New York: NYREV, 1997.

Shannon, Claude E. "The Mathematical Theory of Communication." In The Mathematical Theory of Communication, edited by C. E. Shannon and W. Weaver, 3–91. Urbana, IL: University of Illinois Press, 1949.

———. "A Universal Turing Machine with Two Internal States." In Automata Studies, edited by C. E. Shannon and J. McCarthy, 157–65. Princeton, NJ: Princeton University Press, 1956.

Tarski, Alfred. "The Concept of Truth in Formalized Languages." Trans. J. H. Woodger (English translation of the original 1936 article in German). In Logic, Semantics, Metamathematics, edited by J. Corcoran. Hackett Publishing Company, 1983.


Computational Semiotics: The Background Infrastructure to New Kinds of Intelligent Systems

Ricardo R. Gudwin
UNIVERSITY OF CAMPINAS, BRAZIL
DCA-FEEC-UNICAMP, [email protected]

INTRODUCTION

To build intelligent machines, able to think and act on the world as if they were human (or at least cognitive) beings, has been the unfulfilled dream of a whole generation of researchers working in the field of artificial intelligence. The research field of intelligent systems has witnessed the rise and decline of many different computational techniques, with their promises and shortcomings.1

Cognitive science has passed through many paradigms: computationalism, connectionism, dynamicism, situated embodied cognition, and many different approaches to the task of modeling minds and explaining how cognition can happen in living beings.2

In the last twenty-five years, the synthetic design of cognitive systems and creatures was investigated in many fields of research, including Cognitive Robotics,3 Artificial Life,4 Animats,5 Synthetic Ethology,6 and Co-evolutionary Robotics.7

In this work, we propose an alternative approach for addressing the problem of synthesizing artificial minds, which we have been referring to, here and there, as "computational semiotics." Even though many of the ideas to be presented further are quite old, or a reshaping of old ideas, the reader should not prematurely judge this approach as naive or too simplistic. These ideas are the refinement and abstraction of more than twenty years of research in this field, and are presented to the reader as a summation of many insights matured into a theory.8

The main background to be used is a body of theory which is known in the human sciences as "Peircean semiotics."9 Charles Sanders Peirce (1839–1914) was an American philosopher regarded as one of the most original minds of his time, possibly until today. He is considered the father of Pragmatism, and his theory of semiotics is one of the most complex and abstract theories engendered by the human mind, trying to model and explain how a mind (and the universe) works.

Peircean semiotics is known to be a theory which easily seduces new students, but which is abstract and complex enough to demand hard dedication and study in order to be well understood. It is also known for the many unsuccessful stories in which different researchers tried to use it without significant benefits to other fields.10 Nevertheless, Peircean semiotics brings the substrate of a new understanding of the concept of representation, and despite the many risks involved, we will be relying on it in order to construct our proposal.

SETTING UP THE GENERAL FRAMEWORK

In order to set up our general framework, we must introduce the context of our investigation. Let us start with a very general scenario, where we consider an environment populated by both objects and agents.11 This scenario is depicted in a metaphorical way in Figure 1. Objects can be static or dynamic. The difference between objects and agents is that agents are presumed to have some sort of intelligence governing their actions. Dynamic objects can perform a complex trajectory, but usually in a periodic, mechanical way, following some simple rule. They are always passive. Agents can be passive or active, depending on the circumstances. Agents are embodied12 and situated.13 This means that they are a part of the environment (their body) and might be detected by other agents in the environment. They are also able to get localized information from the environment. They might be able to move their sensor positions, which means they are able to focus their attention on only specific parts of the environment. They are able to change both their external structure (body) and their internal structure (mind).

Figure 1. Scenario: Cognitive agents.

The anatomy of an agent is sketched in Figure 2. Agents are able to interact with the environment (and its objects and other agents) by means of their sensors and actuators. The control system governing the behavior of an agent is said to be its "mind."14 An agent's mind can be as simple as a thermostat, or it can be quite complex, composed of many interacting subsystems responsible for the implementation of different functionalities. Abilities and functions of this control system (the mind) are called "mental."
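The sense–think–act anatomy just described can be sketched in code. The following is only an illustrative toy, not anything from the article itself; every class and function name in it is hypothetical. It wires sensors and actuators to a mind, using the article's own example of a thermostat as a minimal mind:

```python
# Hypothetical sketch of the agent anatomy described above: an agent
# interacts with its environment only through sensors and actuators;
# the control system in between is its "mind."

class Thermostat:
    """A mind can be as simple as a thermostat."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def decide(self, percepts):
        # percepts: dict of named sensor readings
        temperature = percepts["temperature"]
        return {"heater": "on" if temperature < self.setpoint else "off"}

class Agent:
    def __init__(self, sensors, actuators, mind):
        self.sensors = sensors      # callables reading the environment
        self.actuators = actuators  # callables acting on the environment
        self.mind = mind            # the agent's control system

    def step(self, environment):
        # Sense: gather localized information from the environment.
        percepts = {name: read(environment)
                    for name, read in self.sensors.items()}
        # Think: the mind maps percepts to actuator commands.
        commands = self.mind.decide(percepts)
        # Act: drive the actuators.
        for name, command in commands.items():
            self.actuators[name](environment, command)

# Usage: a one-room environment with a temperature sensor and a heater.
env = {"temperature": 17.0, "heater": "off"}
agent = Agent(
    sensors={"temperature": lambda e: e["temperature"]},
    actuators={"heater": lambda e, cmd: e.update(heater=cmd)},
    mind=Thermostat(setpoint=20.0),
)
agent.step(env)
print(env["heater"])  # the thermostat-mind has turned the heater on
```

The point of the sketch is structural: the mind never touches the environment directly, only the percepts and commands flowing through sensors and actuators, mirroring the embodied/situated picture above.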



Figure 2. The sketch of an agent.

COGNITIVE ARCHITECTURE

The set of all mental abilities and functionalities is called "Cognition." There are many ways in which cognition can be modeled and represented. Usually, we can model an agent's cognitive processes by means of a "Cognitive Architecture."15 We can describe this cognitive architecture using standard terminology from psychology and cognitive science. Then, we might decompose this cognitive architecture in terms of its many sub-systems, responsible for different "cognitive tasks," or "cognitive functions." Among others, we could list memory, thoughts, motivations, goals, emotions, learning, etc. This is pictured in Figure 3.

Figure 3. A mind as a cognitive architecture.

Up to this point, our general framework could serve both to represent natural agents, like animals or human beings, and artificial intelligent agents. Now, in order to synthesize such agents using some sort of technology, we will be required to propose possible implementations for these cognitive tasks, in terms of synthetic models. The example in Figure 4 shows how different cognitive functions and tasks can be implemented by means of many possible computational models.

Figure 4. Implementing cognitive tasks: perception (neural networks), memory (rule-based systems), thought (actor systems), behavior (genetic algorithms), emotions (belief networks), goals (fuzzy systems), motivations (Petri nets), learning (classifier systems).

A BRIEF HISTORY OF INTELLIGENT SYSTEMS

This is where a brief history of intelligent systems becomes necessary, in order to introduce the reader to the main problem we intend to attack. Franklin tells us this story in terms of three AI debates.16

According to Gardner, even though the mind has been studied from the point of view of philosophy (of mind) since the early Greeks, artificial intelligence (AI) as a field of research started during the summer of 1956 at Dartmouth College, in Hanover, New Hampshire.17 The main model for the mind was based on the notion of "symbol," as pointed out by Newell in his notion of the Physical Symbol System: "a physical symbol system has the necessary and sufficient means for general intelligent action."18 This was the beginning of the "computationalism" period of the cognitive sciences. This claim was attacked from many different fronts, as Nilsson points out.19 Among the many arguments against the PSS theory, it is worth mentioning the Chinese room argument,20 the symbol grounding problem,21 and the evidence for non-symbolic representations.22 Basically, the main contention was that symbols are not enough for all intelligent action, as postulated by the PSS theory. This was the first AI debate, as pointed out by Franklin.23

The second AI debate involved the many attempts to get inspiration from brains and the brain sciences in order to derive computational methods to work as cognitive architectures. This period, known as the "connectionist" period in the cognitive sciences, was the "golden era" of neural networks and other computational intelligence techniques, like fuzzy systems and evolutionary computation.24 The emphasis now was on non-symbolic information and distributed pattern-matching. Even though very sophisticated models originated from this line of research, and general cognitive architectures like, e.g., Clarion were proposed, general technological offspring were still missing.25

The third AI debate involved the enactive approach,26 embodied27 and situated28 cognitive science, and the anti-representationalist position,29 questioning the necessity of representations (at all) for achieving intelligent actions.

There are many lessons we can draw from these debates. The first one is that even though computational models of cognitive tasks are important, the use of computational metaphors as models can be very misleading. They are helpful in inspiring the development of new algorithms and processes, but when we mistake the metaphor for the real thing, our models start to suffer from an oversimplification that can be dangerous and naive.
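The idea behind Figure 4 can be sketched as a registry of cognitive functions, each slot filled by a different kind of computational model. This is only a toy under stated assumptions, not the author's system; all class names and the two stand-in models below are hypothetical:

```python
# Hypothetical sketch of Figure 4's idea: a cognitive architecture is a
# set of named cognitive functions, each realizable by a different
# computational model (neural network, rule-based system, and so on).

class CognitiveFunction:
    def __init__(self, name, model):
        self.name = name    # e.g., "perception", "memory"
        self.model = model  # any callable implementing the function

    def __call__(self, data):
        return self.model(data)

class CognitiveArchitecture:
    def __init__(self):
        self.functions = {}

    def add(self, name, model):
        self.functions[name] = CognitiveFunction(name, model)

    def run(self, name, data):
        return self.functions[name](data)

# Usage: the same architecture mixes heterogeneous models.
mind = CognitiveArchitecture()
# "Perception" stood in for by a trivial threshold unit
# (the neural-network slot of Figure 4).
mind.add("perception", lambda x: 1 if x > 0.5 else 0)
# "Memory" stood in for by a rule-based lookup
# (the rule-based-system slot of Figure 4).
rules = {1: "bright", 0: "dark"}
mind.add("memory", lambda percept: rules[percept])

percept = mind.run("perception", 0.9)
print(mind.run("memory", percept))  # -> bright
```

The design choice the figure highlights is exactly the one the sketch makes visible: the slots are uniform, but nothing forces their fillers to share a representation scheme, which is what later raises the problem of connecting them.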


models start to suffer from an oversimplification that there is the lack of a common ground, a base theory, can be dangerous and naive. powerful enough to sew the many different kinds of representations necessary to model a human- The main metaphorical concepts used in the modeling like cognitive architecture. We propose that Peircean of intelligent systems are concepts like “Knowledge,” semiotics is such a theory. This proposition is not “Thought,” “Learning,” and “Perception.” Knowledge necessarily new. We are, in some sense, re-editing representation can be explicit, as in the case of the Fetzer’s Semiotic-System Hypothesis, which says that: first AI debate, leading us to symbols and logical “a semiotic system has the necessary and sufficient expressions, propositions, rules, semantic networks, means (or capacity) for general intelligent action.”37 or belief networks. Or they can be implicit, as in the models of the second AI debate, leading us to numerical In the following sections, we present a very brief account parameters in neural networks, genetic algorithms, fuzzy of what is Peircean semiotics, and how we pretend to use sets, etc. In the literature of intelligent systems, this led it in order to propose new methods for the development us to the Symbolic/Numerical dichotomy, as pointed out of intelligent systems based on Peircean semiotics. by Marks II.30 This created a gap between two different kinds of representation. Assuming that both might be PEIRCEAN SEMIOTICS important, how do we connect one to the other? How do According to Peirce, semiotics is a science which we attribute meaning to symbols? How might we solve studies the phenomena of signification, meaning, and the symbol grounding problem? communication in natural and artificial systems. 
There were some interesting approaches to trying to solve this problem, most of them arising from the outcomes of the third AI debate. A very important one was from Barsalou, with his perceptual symbol systems and grounded cognition.31 Following the general strategy given by the enactive approaches, the proposal was to ground cognition in perception and action. According to Barsalou, there might be perceptual states arising in sensory-motor systems which might be used to ground symbols. Barsalou called these perceptual states "perceptual symbols." Standard symbols (which Barsalou called amodal symbols) were supposed to be associated with a set of perceptual symbols, which becomes their meaning. For example, the symbol "dog" is associated with perceptual memories of dogs. During perception, the recognition of a dog in the environment by means of its perceptual symbols might activate the amodal symbol "dog." And during symbol grounding, the activation of the amodal symbol "dog" might activate the recovery of perceptual memories of dogs.

Barsalou then described the notion of a "simulator," which brings up sequences of perceptual symbols from long-term memory when activated by an amodal symbol. Mental simulators are used to store in long-term memory the many experiences lived by the cognitive agent. Many works followed Barsalou's approach.32

Another approach to the meaning problem was from Gärdenfors.33 He defines many mathematical structures, such as quality dimensions, domains, and conceptual spaces, and uses these mathematical structures as the ground for symbols.34

The most recent approaches to the problem come from the field of simulation of language evolution,35 performing experiments with interactive agents in real or virtual worlds, able to interact with each other and play the so-called "language games."36

From this partial and brief history of intelligent systems development, we might infer that, even though many different representation systems can be used to model particular aspects required in a cognitive architecture,

The main elementary notion within semiotics is the notion of a "sign." During the development of his theory, Peirce provided at least seventy-six different definitions for a sign.38 This was necessary in order to find ever more abstract accounts of what a sign is, such that the notion could be applied, in the most generic sense, to anything that could be used to represent something else. Let us see some of these definitions:

• "A sign, or representamen, is something which stands to somebody for something in some respect or capacity" (CP 2.228)39

• "a sign is something, A, which denotes some fact or object, B, to some thought, C" (CP 1.346)

• ". . . anything which determines something else (its interpretant) to refer to an object to which itself refers (its object) in the same way, the interpretant becoming in turn a sign, and so on ad infinitum." (CP 2.303)

• ". . . nothing is a sign unless it is interpreted as a sign" (CP 2.307)

A complementary notion which might be important for understanding the nature of a sign is the notion of "semiosis," or sign process.40 Semiosis is a process which relates three relata: (sign, object, interpretant), or (sign, thing signified, cognition produced in the mind). We see from this definition that semiosis is essentially the process that makes a sign work as a sign. The object of a sign is the part of reality41 being represented by the sign. The interpretant is the effect of the sign in a mind; usually, this effect is to create another sign within this mind. So, what makes a sign a sign is its power to (potentially infinitely) reproduce itself in another sign. This concept is not easy to grasp, but as soon as we are able to comprehend it, it becomes a very powerful theoretical instrument for the construction of new kinds of artificial systems.
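To make the triadic semiosis relation concrete (an interpretant is itself a new sign that refers to the same object), here is a minimal Python sketch. The `Sign` dataclass, the `semiosis` helper, and the house example are our own illustrative assumptions, not constructs from the article:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    # A sign vehicle together with the object it stands for.
    form: str   # e.g., the spoken word "house"
    obj: str    # the part of reality being represented

def semiosis(sign: Sign, new_form: str) -> Sign:
    """One step of semiosis: the interpretant is itself a new sign
    that refers to the same object (cf. CP 2.303)."""
    return Sign(form=new_form, obj=sign.obj)

word = Sign(form='spoken word "house"', obj="a house")
image = semiosis(word, "mental image of a house")
diagram = semiosis(image, "diagram of a house")

# Each interpretant keeps the power to represent the original object,
# and can itself be interpreted again, ad infinitum.
assert word.obj == image.obj == diagram.obj
```

Each call produces a new sign of possibly different form, while the reference to the object is preserved, which is exactly the invariant of the triadic relation.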

PAGE 29 FALL 2015 | VOLUME 15 | NUMBER 1 APA NEWSLETTER | PHILOSOPHY AND COMPUTERS

Observe that, according to Peirce's definition of a sign, many different things might be signs: colors, images, pictures, diagrams, a scene, a whole movie, things which are connected to each other through time and space, arrows, indexes in a diagram, sounds, mathematical formulas, words, sentences, paragraphs, full texts. Almost anything might work as a sign, as long as it is able to refer to an object and to generate an interpretant (an effect) within a mind.

Peirce categorizes signs into different sign types depending on the relations among sign, object, and interpretant. Peirce's typology of signs can be very complex and sophisticated, accounting for at least sixty-six different kinds of signs.42 For the moment, we will stay with just a few of these types, just to illustrate the potential of Peirce's approach for the construction of computational cognitive architectures.

An important point here, specifically in the case of intelligent systems, is how a sign is able to perform its role as a sign, or, in other words, how a sign is able to cause an effect in the agent. Not just any effect, but the same effect that its object might cause. This "power" of the sign comes from the possible relation which it might have to its object. If a sign maintains a similarity or analogy to its object, or if the sign has in itself the same properties or qualities as its object, this is where this power comes from. In this case, it is called an icon. Let's see some of Peirce's definitions regarding icons. In one of these definitions, he says that an icon "is a Sign whose significant virtue is due simply to its Quality" (CP 2.92). In another definition, he says that an icon is "a Substitute for anything that it is like . . ." (CP 2.276), or that an icon "is a sign which refers to the Object that it denotes merely by virtue of characters of its own, and which it possesses, just the same, whether any such Object actually exists or not" (CP 2.247). Finally, an icon "is a sign which would possess the character which renders it significant, even though its object had no existence" (CP 2.304).

According to Peirce, icons can be of three types: images, diagrams, and metaphors (CP 2.277). Images are icons which present in themselves the same properties as their objects. Diagrams are icons which in their parts present the same state of affairs as the parts of their objects. And metaphors are icons which hold in themselves another kind of parallelism, e.g., some sort of analogy to their objects.

If the sign, for some reason, forces the attention to a particular intended object without describing it, like, e.g., a demonstrative or relative pronoun, or if there is a direct physical connection between the sign and the object which can be used to draw attention to it, it is called an index. In this case, the relation binding the sign to its object is in existence, and this relation can be tracked down in existence in order to reach the object. Let's see some of Peirce's definitions regarding indexes: an index is a "sign, which is such by virtue of a real connection with its object" (CP 5.75); indexes are "signs which are rendered such principally by an actual connection with their objects" (CP 2.284); an index "is a sign which refers to the Object that it denotes by virtue of being really affected by that Object" (CP 2.248); an index "is a sign which would, at once, lose the character which makes it a sign if its object were removed, but would not lose that character if there were no interpretant" (CP 2.304); indexes "direct the attention to their objects by blind compulsion . . . depends upon association by contiguity" (CP 2.306); and an index is a "sign which shall act dynamically upon the hearer's attention and direct it to a special object or occasion" (CP 2.336).

Finally, if the sign is related to its object by means of an association of ideas or a habitual connection between the sign and the character signified, it is called a symbol (CP 1.369). In this case, the connection between sign and object depends on a third thing besides the sign itself and the object itself: a habit connecting sign and object. Let's see some of Peirce's definitions regarding symbols: a symbol "is a sign which refers to the Object that it denotes by virtue of a law, usually an association of general ideas, which operates to cause the Symbol to be interpreted as referring to that Object" (CP 2.249). A symbol "is a sign which owes its significant virtue to a character which can only be realized by the aid of its Interpretant" (CP 2.92). Regarding symbols, "there may be a relation which consists in the fact that the mind associates the sign with its object" (CP 1.372). A symbol "is a sign which would lose the character which renders it a sign if there were no interpretant" (CP 2.304). A symbol is associated with "a convention or contract . . ." (CP 2.297).

SEMIONICS: RECONCILING PEIRCEAN SEMIOTICS AND INTELLIGENT SYSTEMS

After our brief presentation of the history of intelligent systems, and our also brief introduction to Peircean semiotics, it is important to point out what they have in common. It is clear, from the point of view of intelligent systems, that symbolic representation is insufficient to explain general intelligent action, as the Physical Symbol Systems hypothesis originally postulated. The problem was to identify what else, if not symbols. The obvious denomination "non-symbolic" was too generic. The differentiation brought by computational intelligence, between symbols and numbers, was also too generic.43 The appeal to different kinds of symbols, as Barsalou did with amodal and modal symbols, was also not enough.44 Peirce's semiotics brings us the more general notion of a sign, which is general enough to cover all possible kinds of representation. The fact that we have, according to Peirce, many different kinds of signs can help us organize the many different ways in which something can be used to represent something else. In fact, after analyzing Peirce's proposal, the whole process starts to become more understandable.
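As a toy illustration of the three basic sign types reviewed above, the icon/index/symbol distinction might be encoded as follows. This is our own sketch: the example signs and the wording of the defining relations are ours, not Peirce's.

```python
from enum import Enum

class SignType(Enum):
    # Peirce's three basic ways a sign can relate to its object.
    ICON = "similarity: shares qualities with its object"
    INDEX = "real connection: really affected by its object"
    SYMBOL = "habit, law, or convention linking sign and object"

# Hypothetical classification of some familiar signs:
examples = {
    "portrait of a person": SignType.ICON,       # resembles its object
    "smoke rising from a fire": SignType.INDEX,  # causally connected to it
    'the word "fire"': SignType.SYMBOL,          # conventional association
}

for sign, kind in examples.items():
    print(f"{sign} -> {kind.name} ({kind.value})")
```

The enum captures only the coarsest level of the typology; Peirce's full classification (sixty-six kinds of signs) would refine each branch further.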


Take a look at Figure 5 below.

Figure 5. How different kinds of representation evolve in an intelligent system.

In the figure, we have different kinds of representation being integrated in order to generate more abstract kinds of signs. An agent listens to the word "house" being spelled in the environment at the same time it is able to visually experience the scene of a house. These experiences give rise to the creation of icons of two different types: an acoustic image and a visual image. The acoustic image can later be transformed into an image class which integrates all possible spellings of the word "house." At the same time, the visual image of the house is integrated with this acoustic image to form a more abstract notion of a house, which can later be converted into a diagram of a house.

It is interesting to notice that in all these transformations we are dealing with the same kind of process: "semiosis." Peirce models semiosis as the process which transforms one sign (of a possible type) into another sign (of the same or of a different type), such that both signs refer to the same object. This process is also called interpretation. The new sign created during the process of semiosis is called the interpretant of the sign. Peirce then says that sign, object, and interpretant are related by an irreducible triadic relation, in which the interpretant maintains the same power to represent its object. This process is modeled in Figure 6.

Figure 6. The model of a sign.

Now, we can try to emulate this same process in a computational framework, in order to synthesize computational semiotic agents. Because the semiotic process is a process supposed to happen in nature, we decided to give new names to this same process happening within a computational framework. For that sake, we have proposed the name semionics to refer to a specific instance of a sign process happening inside a computer or, in other words, a computational sign process. Computational signs (and also interpretants) are then called signlets. It is also important to identify who is responsible for performing the transformation from sign into interpretant, which is the role of the interpreter.45 In our case, we will call the interpreter a semionic agent. This is illustrated in Figure 7.

Figure 7. Semionics: sign processing systems.

It is interesting to notice, in Figure 7, how a signlet can be interpreted by a semionic agent, being translated into another signlet, possibly of a different kind. For example, as in Figure 7, the first signlet maintains a symbolic relation to an external object or, in other words, it is a symbol. After the interpretation, the new signlet which was created maintains a different relation to the same object: in this case, an iconic one, being an icon.

For a more appropriate representation of the process in an artificial intelligent agent, we might consider an extended view, as illustrated in Figure 8. In this case, we assume that the semionic agent has two different sources of signs: the external world (the environment) and its internal world (the mind). Two kinds of semiosic processes are then able to happen: exosemiotic processes and endosemiotic processes. The first has an exosemiotic nature: the agent identifies a sign in the environment, which generates internally a representation of this sign.

Figure 8. Chain of interpretations.

Internally, a whole chain of transformations might happen during the processing of a cognitive architecture, until another exosemiotic transformation is again generated, creating an outside sign by means of an action on the environment.

It is important to point out that, at any moment in this chain of interpretations, signs of different types can be involved. Take the example of Figure 9.

Figure 9. Icon to symbol interpretation.

In the example of Figure 9, the agent's sensors are used to create a sensory image (an icon) of an environment property, which is later transformed into an index, and later into a whole chain of symbols, which are used, e.g., in a reasoning process.

Figure 10. A model of understanding.

As in Figure 10, this chain of deductions might finally reduce to an index and later to an icon, providing a model for the process of understanding. It is useful to recall, at this point, the notion of mental simulation from Barsalou. According to Barsalou, mental simulation is the reenactment of perceptual, motor, and introspective states acquired during experiences with the world, body, and mind. His notion of a modal symbol is equivalent to Peirce's notion of an icon. His notion of an amodal symbol is equivalent to Peirce's notion of a symbol. But differently from Barsalou, Peirce prescribes many different kinds of icons: images, diagrams, and metaphors. And Peirce also shows why an icon is an icon: because there is some kind of similarity or analogy between the sign and its object.

It seems to us that Peirce's semiotic theory is fully

45. In a very strict sense, Peirce says that the notion of an interpreter is not necessary in order to understand the process of semiosis in its more abstract scope. Nevertheless, it is a useful concept for understanding the semiosic process happening in the human mind, and also when we intend to synthesize this semiosic process in artificial agents. For Peirce, the process of semiosis is more generic, being possible to exist "in the wild" (in nature), without a human mind necessarily being involved.

FALL 2015 | VOLUME XX | NUMBER X APA NEWSLETTER | PHILOSOPHY AND COMPUTERS modeled in figure 6. Exosemiotic View

Interpreter (Semionic Agent)

Sign Internally Interpretant (Signlet) (Signlet) Sign Interpretant

Object Endosemiotic View

Figure 6: The Model of a Sign Now, we can try to emulate this same process in a Figure 8: Chain of Interpretations computational framework, in order to synthesize computational semiotic agents. Because the semiotic Internally, a whole chain of transformations might process is a process supposed to happen in nature, we happen, during the processing of a cognitive decided to give new names to this same process happening architecture, until another exosemiotic transformation within a computational framework. For that sake, we have is again generated, creating an outside sign, by means proposed the name semionics to refer to a specific of an action on the environment. instance of a sign process happening inside a computer, It is important to point out that at any moment, in this or in other words, a computational sign process. chain of interpretations, signs of different types can Computational signs (and also interpretants) are then be involved. Take the example of figure 9. called signlets. It is also important to identify who is responsible for performing the transformation from sign into the interpretant, which is the role of the ICON INDEX SYMBOL SYMBOL … interpreter3. In our case, we will call the interpreter as a semionicagent . This is illustrated in Figure 7. APA NEWSLETTER | PHILOSOPHY AND COMPUTERS

SYMBOL SYMBOL … Interpreter SENSORS (Semionic Agent)

Sign InterpretantIn Figure 9, the agent’s sensors SYMBOLare used… to create a more evolved (CP 5.475). In the case where a sign’s (Signlet) R (Signlet) 1 R2 sensory image (an icon) of an environment property, effect is a feeling, it is called anemotional interpretant. (e.g. symbolic) (e.g. iconic) which is later Figuretransformed 9: Icon to intoSymbol an Interpretation index, and later into In the case where a sign’s effect is an action, it is called a wholeIn chain the example of symbols,of figure 9, whichthe agent's are sensors used, are e.g.,used in a an energetic interpretant. And finally, in the case where Object reasoningto createprocess. a sensory image (an icon) of an environment the effect of the sign is the creation of another sign, in Figure 7: Semionics: Sign Processing Systems property, which is later transformed into an index, and later into a whole chain of symbols, which are used, the interpreter’s mind, it is called a logical interpretant. It is interesting to notice, in figure 7, how a signlet e.g., in a reasoning process. can be interpreted by a semionic agent, being translated into another signlet, possibly of a different kind. For Another important distinction made by Peirce refers to example, as it is in figure 7, the first signlet SYMBOL SYMBOL SYMBOL SYMBOL … the many kinds of objects and interpretants a sign might maintains a symbolic relation to an external object, or in other words, it is a symbol. After the have. According to him, “it remains to point out that there interpretation, the new signlet which was created are usually two objects, and more than two interpretants. maintains a different relation to the same object. In Namely, we have to distinguish the immediate object, this case, an iconic one, being an icon. 
SYMBOL SYMBOL INDEX ICON which is the object as the sign itself represents it, and In a more appropriate representation for the process, in an artificial intelligent agent, we might consider an whose being is thus dependent upon the representation extended view of the process, as illustrated in figure Understanding ?? of it in the Sign, from the dynamical object, which is the 8. In this case, we assume that the semionic agent has two different sources of signs: the external world (the Figure 10. A Figuremodel 10: of understanding.A Model of Understanding reality which by some means contrives to determine the environment) and its internal world (the mind). Two As in figure 10, this chain of deductions might finally sign to its representation. In regard to the interpretant kinds of semiosic processes are then able to happen:As in Figurereduce 10, to an this index chain and later of deductions to an icon, providing might afinally we have equally to distinguish, in the first place, the exosemiotic processes and endosemiotic processes. The model for the process of understanding. It is useful to first process has an exosemiotic nature, when thereduce agent record,to an atindex, this point, and thelater notion to ofan mental icon, simulation providing a immediate interpretant, which is the interpretant as it identifies a sign at the environment, which generatesmodel forfrom the Barsalou. process According of understanding to Barsalou, mental. It simulation is useful to is revealed in the right understanding of the sign itself, internally a representation of this sign. record, atis this the point, reenactment the notion of of perceptual, mental simulation motor and from and is ordinarily called the meaning of the sign; while in introspective status acquired during experiences with Barsalou.the Accordingworld, body and to mind. 
Barsalou, His notion mental of modal symbolssimulation is is the second place, we have to take note of the dynamical 3 In a very strict sense, Peirce says that the notionthe of anreenactmentequivalent to of Peirce's perceptual, notion of motor, an icon .and His introspective notion of interpretant, which is the actual effect which the sign, interpreter is not necessary in order to understand the an amodal symbol is equivalent to Peirce's notion of process of semiosis in its more abstract status scope. acquiredsymbol. during But differently experiences from with Barsalou, the world, Peirce body, as a sign, really determines. Finally there is what I Nevertheless, it is a useful concept while understandingand mind.prescribes His notion many different of modal kinds symbols of icons: is images,equivalent provisionally term the final interpretant, which refers to the semiosic process happening in the human mind, and also diagrams and metaphors. And Peirce also shows why an when we are intending to synthesize this semiosic to process Peirce’s notion of an icon. His notion of an amodal the manner in which the sign tends to represent itself to in artificial agents. For Peirce, the process of semiosis icon is an icon: because there is some kind of is more generic, being possible to exist "in the wild"symbol (in similarityis equivalent or analogy to betweenPeirce’s the notion sign and of its symbol object. . But be related to its object” (CP 4.536). nature), without necessarily a human mind being involved.unlike Barsalou,It seems to Peirce us that prescribes Peirce's Semiotic many theory different is fully kinds of icons: images, diagrams, and metaphors. And Peirce Another important analysis to be performed in the FALL 2015 | VOLUME XX | NUMBER X also shows why an icon is an icon: because there is sequence is related to the semiotic role of sensors in some kind of similarity or analogy between the sign and intelligent systems. 
Usually, in an intelligent system, we its object. It seems to us that Peirce’s semiotic theory is consider the system (agent) to perceive its environment, fully compatible with Barsalou’s proposal for a grounded identifying objects and situations happening in the cognition, with some advantages: environment.

• It provides a general kind of processing The problem is that the agent doesn’t have direct element: the sign, which can assume different access to the objects and situations happening in the kinds, depending with the relation it can have environment. The only point of contact between the to its object. agent and the environment is through their sensors and actuators. This means that objects and situations in the • It explains how different kinds of signs might environment cannot directly affect an agent. An agent affect its interpreter: by being similar to their can only be affected by means of signs. And, specifically, objects (icons), by attracting or directing these signs are conveyed by means of sensors. attention to the object (indexes), or by being learned by convention (symbol). What are sensors doing? Sensors are devices which transduce some physical property into another physical • It brings a common ground theory to explain property, such that a topological relation is established how signs should work and how they can be among these two physical properties, in the sense used to explain mind processing. that every time the first physical property assumes a given value, the second physical property will assume • It brings the notion of index, as attention a corresponding physical value that is specific and directors, something not directly exploited in unique. This is what happens, e.g., in a thermometer. Barsalou’s theory. There is a physical property of the environment, which is temperature, and there is a physical property of the But Peirce’s theory goes very much beyond his thermometer, which is the extension of the mercury classification of signs among icons, indexes, and column. When the temperature is, say, 39 degrees, symbols. the mercury column will have an extension which is determined and unique for the case when the According to Peirce’s early theory of signs, an temperature is 39 degrees. 
In the case of an electric interpretant might also necessarily be a sign. Peirce’s thermometer, the voltage of the sensor will be a definite late theory of sign modified that constraint.46 According and unique voltage for the case when the temperature to the late theory, the effect of the sign might be the is 39 degrees. If I couple this electric thermometer to a creation of a feeling, an action, or another sign, possibly 32 bits analog-to-digital (AD) converter, there might be a
computer memory set of flip-flops, which will be holding the numeric encoding of a 32-bit number, which will be definite and unique for the case when the temperature is 39 degrees. What do all of these (the environment temperature, the mercury thermometer, the electric thermometer, and the electric thermometer with an AD converter) have in common? They are all in a relation of analogy to each other. So, in a Peircean sense, sensors are iconic metaphors, able to represent their objects because they have properties which are in a relation of analogy to the properties of the objects they represent.47

Another important point to be analyzed from a semiotic perspective is the notion of an object, as in computational processes and object-oriented programming (Smith, 1998). Objects (or s-objects, software objects, to avoid a possible confusion with the object of a sign) are computational representations for objects in the real world. The notion of an s-object comes from a variety of philosophical assumptions, since Aristotle with his substance theory, passing through Hume, with his bundle theory, up to Gibson, with his notion of affordances. According to this notion, an s-object is a collection of other objects, called its parts, a collection (or bundle) of properties, and a collection of affordances. Affordances are possible actions which can be performed onto the object. According to Gibson, there might be some kinds of objects which can be defined strictly based on their affordances, e.g., a chair is anything in which one can sit.48

In intelligent systems, the role of perception can be understood as the process through which, based on sensory data, the system is able to discover and recognize s-objects in its environment. From a semiotic perspective, s-objects are (just like sensors) iconic metaphors of the environment objects they represent. Following the standard strategy for interpreting icons, there are many computational techniques, like pattern-matching and statistical correlation, which can be used to perform this role. In intelligent systems, perception can be instantiated in semionic agents which use sensory data stored as s-objects, and translate them into other s-objects representing the discovery of objects in the environment. Besides that, these s-objects might be performing some sort of change in their attributes through time, giving rise to the description of scenes and sequences of scenes also through time. S-objects, scenes, and sequences of scenes are nothing other than different kinds of icons. Icons representing scenes or sequences of scenes are equivalent to the notion of mental simulations as brought by Barsalou. The storage of an appropriate representation for these sequences of scenes will constitute what cognitive psychologists call an episodic memory.

Figure 11. Semiotic processing.

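The transduction chain just described, temperature to voltage to a stored binary code, with each stage standing in a relation of analogy to the previous one, can be sketched in a few lines. This is a toy model only: the calibration constants and function names below are illustrative assumptions, not details from the article.

```python
# Toy sketch of the transduction chain discussed above. Each stage maps the
# environmental property (temperature) into another property while preserving
# a one-to-one relation to it, which is what makes the stored code an "iconic"
# representation in the Peircean sense sketched in the text.
# The calibration constants here are assumptions, not values from the article.

def thermometer_voltage(temp_c: float) -> float:
    """Electric thermometer: assume 10 mV per degree Celsius."""
    return 0.010 * temp_c

def adc_code(voltage: float, v_ref: float = 5.0, bits: int = 32) -> int:
    """Analog-to-digital converter: quantize a voltage into a bits-wide code."""
    levels = 2 ** bits
    code = int(voltage / v_ref * (levels - 1))
    return max(0, min(levels - 1, code))

# Every distinct temperature yields a definite, unique code, so the number
# held in the flip-flops is in a relation of analogy to the temperature.
assert adc_code(thermometer_voltage(39.0)) == adc_code(thermometer_voltage(39.0))
assert adc_code(thermometer_voltage(39.0)) != adc_code(thermometer_voltage(38.0))
```

The design point is only that the mapping is definite and unique at each stage; any monotonic calibration would serve the same semiotic role.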

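The s-object described above, a bundle of parts, properties, and affordances, can be sketched as a minimal data structure. The class layout and the chair test below are illustrative assumptions in the spirit of the Gibson example, not an implementation from the article.

```python
# Minimal sketch of an s-object: a bundle of parts, properties, and
# affordances, as described in the text. Names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SObject:
    """A software object: parts, a bundle of properties, and affordances."""
    name: str
    parts: List["SObject"] = field(default_factory=list)
    properties: Dict[str, object] = field(default_factory=dict)
    affordances: Dict[str, Callable[[], str]] = field(default_factory=dict)

# Following Gibson, some objects can be defined strictly by their
# affordances: a chair is anything in which one can sit.
def is_chair(obj: SObject) -> bool:
    return "sit" in obj.affordances

stool = SObject(
    name="stool",
    properties={"height_cm": 45},
    affordances={"sit": lambda: "sitting on stool"},
)
assert is_chair(stool)
```

A perception module in this sketch would then be a semionic agent translating sensory s-objects into s-objects like `stool`, e.g., by pattern-matching over their properties.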
The discovery of s-objects, scenes, and sequences of scenes constitutes what we might call a world model for the intelligent system, which might be used for the system’s behavior generation.49

Finally, another important issue which can be viewed in light of computational semiotics is the mechanism underlying the learning of language. This mechanism can be described by the interaction of icons, indexes, and symbols in order to set up new symbolic associations. An example of such an interaction is described in Figure 11.

In figure 11(a), a cognitive agent has as an environment a virtual world surrounded by objects and other agents, as we considered in our introduction. It starts to observe another agent, which is being controlled by a human controller (i.e., the agent is the avatar of the human controller). This agent is pointing to an object in the environment and is, at the same time, speaking a word to be associated with the pointed object. The cognitive agent recognizes the index used to point to the object (Figure 11(b)) and turns its attention to the pointed object (Figure 11(c)). It is then able to recognize the object as the interpretant of this index. The object then creates an icon in the agent’s mind (Figure 11(d)). Then, the agent pays attention to the words spoken by the instructor (Figure 11(e)), and creates an icon of these words (Figure 11(f)). Because the word was spoken at the same time as the object was pointed at, the agent remembers its last icon before the words, bringing back the icon of the object to its mind (Figure 11(g)). It then creates an association between the word and the pointed object, turning it into a symbol. Now, in a situation where the object is not present anymore, the instructor speaks the same word again (Figure 11(h)). The agent will create an icon (an acoustic image) of these words (Figure 11(i)), and then, because an association was created before, connecting these words with an object, the agent will again interpret this sign, now as a symbol, recovering the object and bringing it back to the mind. The icon for the object is then created as an interpretation of the symbol (Figure 11(j)).

CONCLUSION

As pointed out in the last section, Peircean semiotics has much to contribute to the field of intelligent systems. Fetzer’s notion of a semiotic system’s hypothesis introduces a new perspective on how intelligent systems can be studied and developed. Nevertheless, Peirce’s semiotics is not an easy theory, and a proper introduction to it might be required before it can result in newer theories on how the human mind might be functionally replicated in artificial systems. Computational semiotics, or particularly semionics, might be a first step in such a direction. In this work, we just brought the main highlights. We invite our readers to dig more deeply into our previous articles50 in order to have a better understanding of how computational semiotics might be useful. We also direct our readers to the works of Barsalou and Gärdenfors, which, together with computational semiotics, might bring new perspectives on the development of intelligent systems.51

NOTES

1. Fodor, The Language of Thought, 1975; Gardner, The Mind’s New Science, 1985; Zurada, Marks II, and Robinson, Computational Intelligence: Imitating Life, 1994; Franklin, Artificial Minds, 1995; Meystel and Albus, “Intelligent Systems: Architecture, Design and Control,” 2002; Goertzel, Artificial General Intelligence, 2007; Samsonovich, “On a Roadmap for the BICA Challenge,” 2012.

2. Verdejo, “Computationalism, Connectionism, Dynamicism and Beyond,” 2013.

3. Christaller, “Cognitive Robotics,” 1999; Clark and Grush, “Towards a Cognitive Robotics,” 1999; Levesque and Lakemeyer, “Cognitive Robotics,” 2008.

4. Werner and Dyer, “Evolution of Communication in Artificial Organisms,” 1992; Balkenius, “Natural Intelligence in Artificial Creatures,” 1995; Cariani, “Towards an Evolutionary Semiotics,” 1998.

5. Meyer and Wilson, “Simulation of Adaptive Behavior in Animats,” 1991; Dean, “Animats and What They Can Tell Us,” 1998.

6. MacLennan, “Synthetic Ethology,” 1992; MacLennan and Burghardt, “Synthetic Ethology and the Evolution of Cooperative Communication,” 1993; MacLennan, “Making Meaning in Computers,” 2006.

7. Floreano, Nolfi, and Mondada, “Competitive Co-Evolutionary Robotics,” 1998.

8. See Gudwin and Gomide, “An Approach to Computational Semiotics,” 1998; Gudwin and Gomide, “A Computational Semiotics Approach for Soft Computing,” 1997; Gudwin, “On the Generalized Deduction, Induction, and Abduction as the Elementary Operators within Computational Semiotics,” 1998; Gudwin, “From Semiotics to Computational Semiotics,” 1999; Gudwin, “Evaluating Intelligence,” 2000; Gudwin, “Semiotic Synthesis and Semionic Networks,” 2001; Gudwin, “The Icon Grounding Problem,” 2011; Gudwin, “Peirce and the Engineering of Intelligent Systems,” 2014; Gudwin et al., “A Proposal for a Synthetic Approach to Symbolic Semiosis,” 2002; Gudwin and Queiroz, eds., Semiotics and Intelligent Systems Development, 2006; Gomes, Gudwin, and Queiroz, “Towards Meaning Processes in Computers from Peircean Semiotics,” 2003; Gomes, Gudwin, and Queiroz, “On a Computational Model of the Peircean Semiosis,” 2003; Gomes et al., “Towards Meaning Processes in Computers from Peircean Semiotics,” 2004; Gomes et al., “Some Considerations on Artificial Semiosis,” June 2005; Gomes et al., “Towards the Emergence of Meaning Processes in Computers from Peircean Semiotics,” 2007; Gonçalves and Gudwin, “Semiotic Oriented Autonomous Intelligent Systems Engineering,” 1998; Gonçalves and Gudwin, “Emotions,” 1999; Gudwin and Queiroz, “Towards Machine Understanding,” 2007; Gudwin and Queiroz, “Some Considerations Regarding Mathematical Semiosis,” 2007; Loula et al., “Synthetic Approach of Symbolic Creatures,” 2003; Loula et al., Artificial Cognition Systems, 2006; Loula et al., “Emergence of Self-Organized Symbol-Based Communication in Artificial Creatures,” 2010; Ribeiro et al., “Symbols Are Not Uniquely Human,” 2007; Tatai and Gudwin, “Using a Semiotics-Inspired Tool for the Control of Intelligent Opponents in Computer Games,” 2003.

9. Savan, An Introduction to C. S. Peirce’s Full System of Semeiotic, 1988; Short, “The Development of Peirce’s Theory of Signs,” 2006; Peirce, Collected Papers of Charles Sanders Peirce, 1931–1958.

10. Gudwin et al., Semiotics and Intelligent Systems Development, 2006.

11. Smith, “On the Origin of Objects,” 1998; Franklin and Graesser, “Is It an Agent, or Just a Program?: A Taxonomy for Autonomous Agents,” 1996; Luck and d’Inverno, “A Conceptual Framework for Agent Definition and Development,” 2001.

12. Biocca, “The Cyborg’s Dilemma,” 1997; Anderson, “Embodied Cognition,” 2003; Ziemke, “What’s That Thing Called Embodiment?,” 2003; Meijsing, “Real People and Virtual Bodies,” 2006; Lallee et al., “Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition,” 2010.

13. Clancey, Situated Cognition, 1997; Robbins and Aydede, The Cambridge Handbook of Situated Cognition, 2009.

14. Franklin, Artificial Minds, 1995.


15. Sun, “Desiderata for Cognitive Architectures,” 2004; Sun, “The Challenges of Building Computational Cognitive Architectures,” 2007; Langley et al., “Cognitive Architectures,” 2009; Samsonovich, “Comparative Table of Cognitive Architectures,” 2009.

16. Franklin, Artificial Minds, 1995.

17. Gardner, The Mind’s New Science, 1985.

18. Newell and Simon, “Computer Science as Empirical Inquiry,” 1976; Newell, “Physical Symbol Systems,” 1980; Nilsson, “The Physical Symbol System Hypothesis,” 2007.

19. Nilsson, “The Physical Symbol System Hypothesis,” 2007.

20. Searle, “Minds, Brains, and Programs,” 1980.

21. Harnad, “The Symbol Grounding Problem,” 1990.

22. Nii et al., “Signal-to-Symbol Transformation,” 1982.

23. Franklin, Artificial Minds, 1995.

24. Marks II, “Intelligence: Computational Versus Artificial,” 1993; Zurada, Computational Intelligence, 1994.

25. O’Reilly, Computational Explorations in Cognitive Neuroscience, 2000; Granger, “Engines of the Brain,” 2006; Nageswaran et al., “Towards Reverse Engineering the Brain,” 2010; Sun, A Tutorial on Clarion, 2003.

26. Stewart et al., Enaction, 2010.

27. Biocca, “The Cyborg’s Dilemma,” 1997; Anderson, “Embodied Cognition,” 2003; Ziemke, “What’s That Thing Called Embodiment?,” 2003; Meijsing, “Real People and Virtual Bodies,” 2006; Lallee et al., “Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition,” 2010.

28. Clancey, Situated Cognition, 1997; Robbins and Aydede, The Cambridge Handbook of Situated Cognition, 2009.

29. Haselager et al., “Representationalism vs. Anti-Representationalism,” 2003.

30. Marks II, “Intelligence: Computational Versus Artificial,” 1993.

31. Barsalou, “Perceptual Symbol Systems,” 1999; Barsalou, “Grounded Cognition,” 2008; Barsalou, “The Human Conceptual System,” 2012; Pezzulo et al., “Computational Grounded Cognition,” 2013.

32. Roy, “Learning Visually Grounded Words and Syntax of Natural Spoken Language,” 2002; Roy, “Semiotic Schemas,” 2005; Roy, “Grounding Words in Perception and Action,” 2005; Cangelosi and Riga, “An Embodied Model for Sensorimotor Grounding and Grounding Transfer,” 2006; Lallee et al., “Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition,” 2010; Madden et al., “A Cognitive Neuroscience Perspective on Embodied Language for Human–Robot Cooperation,” 2010; Pezzulo and Calvi, “Computational Explorations of Perceptual Symbol Systems Theory,” 2011; Frank, “Sentence Comprehension as Mental Simulation,” 2011.

33. Gärdenfors, Conceptual Spaces, 2004; Gärdenfors, The Geometry of Meaning, 2014.

34. Balkenius et al., “The Origin of Symbols in the Brain,” 2000; Gärdenfors, The Geometry of Meaning, 2014.

35. Steels, “Synthesising the Origins of Language and Meaning Using Co-Evolution, Self-Organisation, and Level Formation,” 1998; Steels, The Talking Heads Experiment, 1999; Steels, “Language As a Complex Adaptive System,” 2000; Cangelosi, “Evolution of Communication and Language Using Signals, Symbols, and Words,” 2001; Cangelosi and Parisi, “The Emergence of a Language in an Evolving Population of Neural Networks,” 1998; Cangelosi and Parisi, “Computer Simulation,” 2001; Cangelosi and Parisi, Simulating the Evolution of Language, 2002; Noble et al., “From Monkey Alarm Calls to Human Language,” 2010.

36. Steels and Vogt, “Grounding Adaptive Language Games in Robotic Agents,” 1997; Steels, “Language Games for Autonomous Robots,” 2001.

37. Fetzer, “Computers and Cognition: Why Minds Are Not Machines,” 2001, 61.

38. Marty and Lang, “76 Definitions of The Sign by C. S. Peirce,” 1997.

39. It is common among Peirce scholars to represent citations to the work of Peirce using a mnemonic given by letters and numbers. In this case, CP implies the Collected Papers of Charles Sanders Peirce (Peirce, 1931–1958), and 2.228 implies volume 2, paragraph 228. Other encodings can be found at http://en.wikipedia.org/wiki/Charles_Sanders_Peirce_bibliography

40. Queiroz and Merrell, “On Peirce’s Pragmatic Notion of Semiosis,” 2009.

41. For Peirce, reality includes not just the concrete objects of our experience but also imaginary things that might possibly exist in the universe, and all the laws which promote regularity in our universe.

42. Borges, “A Visual Model of Peirce’s 66 Classes of Signs Unravels His Late Proposal of Enlarging Semiotic Theory,” 2010.

43. Marks II, “Intelligence,” 1993.

44. Barsalou, “Perceptual Symbol Systems,” 1999.

45. In a very strict sense, Peirce says that the notion of an interpreter is not necessary in order to understand the process of semiosis in its more abstract scope. Nevertheless, it is a useful concept while understanding the semiosic process happening in the human mind, and also when we are intending to synthesize this semiosic process in artificial agents. For Peirce, the process of semiosis is more generic, being possible to exist “in the wild” (in nature), without necessarily a human mind being involved.

46. Short, “The Development of Peirce’s Theory of Signs,” 2006.

47. The above interpretation is controversial and non-standard. Many semioticians might classify a sensor as an index, due to the fact that there is a physical connection between it and the property being measured. In our opinion, though, this will be the fact only in the case someone realizes that some sensor property is in a physical correlation with another object’s property, and the interpreter uses this physical correlation to attract the attention from the sign’s property to the object’s property. In other situations, if the basis for interpretation is the intrinsic analogy and not the physical correlation, then the proper classification for a sensor will be an iconic metaphor, as presented here.

48. Gibson, The Ecological Approach To Visual Perception, 1986.

49. Meystel and Albus, “Intelligent Systems: Architecture, Design and Control,” 2002.

50. Gudwin and Gomide, “An Approach to Computational Semiotics,” 1997; Gudwin and Gomide, “A Computational Semiotics Approach for Soft Computing,” 1997; Gudwin, “On the Generalized Deduction, Induction, and Abduction as the Elementary Operators within Computational Semiotics,” 1998; Gudwin, “From Semiotics to Computational Semiotics,” 1999; Gudwin, “Evaluating Intelligence: A Computational Semiotics Perspective,” 2000; Gudwin, “Semiotic Synthesis and Semionic Networks,” 2001; Gudwin, “The Icon Grounding Problem - Research Commentaries on Cangelosi’s ‘Solutions and Open Challenges for the Symbol Grounding Problem’,” 2011; Gudwin, “Peirce and the Engineering of Intelligent Systems,” 2014; Gudwin et al., “A Proposal for a Synthetic Approach to Symbolic Semiosis,” 2002; Gudwin et al., Semiotics and Intelligent Systems Development, 2006; Gomes et al., “Towards Meaning Processes in Computers from Peircean Semiotics,” 2003; Gomes et al., “On a Computational Model of the Peircean Semiosis,” 2003; Gomes et al., “Towards Meaning Processes in Computers from Peircean Semiotics,” 2004; Gomes et al., “Some Considerations on Artificial Semiosis,” 2005; Gomes et al., “Towards the Emergence of Meaning Processes in Computers from Peircean Semiotics,” 2007; Gonçalves and Gudwin, “Semiotic Oriented Autonomous Intelligent Systems Engineering,” 1998; Gonçalves and Gudwin, “Emotions,” 1999; Gudwin and Queiroz, “Towards Machine Understanding: Some Considerations Regarding Mathematical Semiosis,” 2007; Gudwin and Queiroz, “Some Considerations Regarding Mathematical Semiosis,” 2007; Loula et al., “Synthetic Approach of Symbolic Creatures,” 2003; Loula et al., Artificial Cognition Systems, 2006; Loula et al., “Emergence of Self-Organized Symbol-Based Communication in Artificial Creatures,” 2010; Ribeiro et al., “Symbols Are Not Uniquely Human,” 2007; Tatai and Gudwin, “Using a Semiotics-Inspired Tool for the Control of Intelligent Opponents in Computer Games,” 2003.


51. Barsalou, “Perceptual Symbol Systems,” 1999; Barsalou, “Grounded Cognition,” 2008; Barsalou, “The Human Conceptual System,” 2012; Pezzulo et al., “Computational Grounded Cognition,” 2013; Gärdenfors, Conceptual Spaces, 2004; Gärdenfors, The Geometry of Meaning, 2014; Balkenius et al., “The Origin of Symbols in the Brain,” 2000.

BIBLIOGRAPHY

Anderson, Michael L. “Embodied Cognition: A Field Guide.” Artificial Intelligence 149, no. 1 (2003): 91–130.

Balkenius, Christian. “Natural Intelligence in Artificial Creatures.” Lund University Cognitive Studies 37 (1995).

Balkenius, Christian, Peter Gardenfors, and Lars Hall. “The Origin of Symbols in the Brain.” In Proceedings of the 3rd International Evolution of Language Conference, Ecole Nationale Superieure des Telecommunications (2000): 13–17.

Barsalou, Lawrence W. “Perceptual Symbol Systems.” Behavioral and Brain Sciences 22, no. 4 (1999): 577–660.

Barsalou, L. W. “Grounded Cognition.” Annual Review of Psychology 59 (2008): 617–45.

Barsalou, L. W. “The Human Conceptual System.” In The Cambridge Handbook of Psycholinguistics, edited by M. Spivey, K. McRae, and M. Joanisse, 239–58. New York: Cambridge University Press, 2012.

Biocca, Frank. “The Cyborg’s Dilemma: Embodiment in Virtual Environments.” In International Conference on Cognitive Technology. IEEE Computer Society (1997): 12–26.

Borges, Priscila. “A Visual Model of Peirce’s 66 Classes of Signs Unravels His Late Proposal of Enlarging Semiotic Theory.” In Model-Based Reasoning in Science and Technology, 221–37. Springer Berlin Heidelberg, 2010.

Cangelosi, Angelo, and Thomas Riga. “An Embodied Model for Sensorimotor Grounding and Grounding Transfer: Experiments with Epigenetic Robots.” Cognitive Science 30, no. 4 (2006): 673–89.

Cangelosi, A. “Evolution of Communication and Language Using Signals, Symbols, and Words.” IEEE Transactions on Evolutionary Computation 5, no. 2 (2001): 93–101.

Cangelosi, A., and D. Parisi. “The Emergence of a Language in an Evolving Population of Neural Networks.” Connection Science 10, no. 2 (1998): 83–97.

Cangelosi, A., and D. Parisi. “Computer Simulation: A New Scientific Approach to the Study of Language Evolution.” In Simulating the Evolution of Language, edited by A. Cangelosi and D. Parisi, 3–28. London: Springer Verlag, 2001.

Cangelosi, Angelo, and Domenico Parisi, eds. Simulating the Evolution of Language. Springer Science & Business Media, 2002.

Cariani, Peter. “Towards an Evolutionary Semiotics: The Emergence of New Sign-Functions in Organisms and Devices.” In Evolutionary Systems: Biological and Epistemological Perspectives on Selection and Self-Organization, edited by Gertrudis van de Vijver, Stanley N. Salthe, and Manuela Delpos, 359–76. Netherlands: Springer, 1998.

Christaller, Thomas. “Cognitive Robotics: A New Approach to Artificial Intelligence.” Artificial Life and Robotics 3, no. 4 (1999): 221–24.

Clancey, William J. Situated Cognition: On Human Knowledge and Computer Representations. Cambridge University Press, 1997.

Clark, Andy, and Rick Grush. “Towards a Cognitive Robotics.” Adaptive Behavior 7, no. 1 (1999): 5–16.

Dean, Jeffrey. “Animats and What They Can Tell Us.” Trends in Cognitive Sciences 2, no. 2 (1998): 60–67.

Fetzer, J. H. “Computers and Cognition: Why Minds Are Not Machines.” Studies in Cognitive Systems, vol. 25. Dordrecht: Kluwer Academic Publishers, 2001.

Floreano, Dario, Stefano Nolfi, and Francesco Mondada. “Competitive Co-Evolutionary Robotics: From Theory to Practice.” In Proceedings of the Fifth International Conference on Simulation of Adaptive Behavior, From Animals to Animats 5, vol. 4, no. LIS-CONF-1998-002, 515–24. Cambridge, MA: The MIT Press, 1998.

Fodor, J. A. The Language of Thought. Crowell Press, 1975.

Frank, Stefan L., and Gabriella Vigliocco. “Sentence Comprehension as Mental Simulation: An Information-Theoretic Perspective.” Information 2, no. 4 (2011): 672–96.

Franklin, Stan. Artificial Minds. Cambridge, MA: The MIT Press, 1995.

Franklin, Stan, and Art Graesser. “Is It an Agent, or Just a Program?: A Taxonomy for Autonomous Agents.” In Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages. Springer-Verlag, 1996.

Gärdenfors, Peter. Conceptual Spaces: The Geometry of Thought. Cambridge, MA: The MIT Press, 2004.

Gärdenfors, Peter. The Geometry of Meaning: Semantics Based on Conceptual Spaces. Cambridge, MA: The MIT Press, 2014.

Gardner, Howard. The Mind’s New Science: A History of the Cognitive Revolution. New York: Basic Books, 1985.

Gibson, J. J. The Ecological Approach To Visual Perception. New York: Psychology Press, 1986.

Goertzel, Ben. Artificial General Intelligence, vol. 2. New York: Springer, 2007.

Gomes, Antônio, Ricardo Gudwin, and João Queiroz. “Towards Meaning Processes in Computers from Peircean Semiotics.” S.E.E.D. Journal (Semiotics, Evolution, Energy, and Development) 3, no. 2 (November 2003): 69–79.

Gomes, Antônio, Ricardo Gudwin, and João Queiroz. “On a Computational Model of the Peircean Semiosis.” In Proceedings of the IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems, 703–08. Cambridge, MA, 2003.

Gomes, A. S. R., R. R. Gudwin, C. N. El-Hani, and J. Queiroz. “Towards Meaning Processes in Computers from Peircean Semiotics.” In Proceedings of the European Computing and Philosophy Conference, 2004.

Gomes, A., R. Gudwin, and J. Queiroz. “Some Considerations on Artificial Semiosis.” Abstracts of the Deutsche Gesellschaft für Semiotik, 11th Congress of the German Association of Semiotic Studies, Special Session on Computer and Style, Frankfurt am Oder, Germany, June 2005.

Gomes, Antônio, Ricardo Gudwin, Charbel Niño El-Hani, and João Queiroz. “Towards the Emergence of Meaning Processes in Computers from Peircean Semiotics.” Mind and Society 6, no. 2 (2007): 173–87.

Gonçalves, R., and R. Gudwin. “Semiotic Oriented Autonomous Intelligent Systems Engineering.” In Proceedings of the Intelligent Systems and Semiotics International Conference, September 1998, 700–05.

Gonçalves, R., and R. R. Gudwin. “Emotions: A Computational Semiotics Perspective.” In Proceedings of the 1999 IEEE International Symposium on Intelligent Control, Intelligent Systems and Semiotics, Cambridge, MA, 1999.

Granger, Richard. “Engines of the Brain: The Computational Instruction Set of Human Cognition.” AI Magazine 27, no. 2 (2006): 15–32.

Gudwin, R. R. “On the Generalized Deduction, Induction, and Abduction as the Elementary Operators within Computational Semiotics.” In Proceedings of the Intelligent Systems and Semiotics International Conference, September 1998, 795–800.

Gudwin, R. R. “From Semiotics to Computational Semiotics.” In Proceedings of the 9th International Congress of the German Society for Semiotic Studies / 7th International Congress of the International Association for Semiotic Studies (IASS/AIS), Dresden, Germany, October 1999.

Gudwin, R. R. “Evaluating Intelligence: A Computational Semiotics Perspective.” In Proceedings of the 2000 IEEE International Conference on Systems, Man, and Cybernetics, October 2000, 2080–85.

Gudwin, Ricardo R. “Semiotic Synthesis and Semionic Networks.” In Proceedings of the Second International Conference on Semiotics, Evolution, and Energy, University of Toronto, Canada, October 2001.

Gudwin, R. R. “The Icon Grounding Problem - Research Commentaries on Cangelosi’s ‘Solutions and Open Challenges for the Symbol Grounding Problem’.” International Journal of Signs and Semiotic Systems 1, no. 1 (2011): 55–79.

Gudwin, R. R. “Peirce and the Engineering of Intelligent Systems.” In Death And Anti-Death, Volume 12: One Hundred Years After Charles S. Peirce (1839–1914), edited by Charles Tandy, 207–24. Ann Arbor, MI: Ria University Press, 2014.

Gudwin, Ricardo R., Ângelo C. Loula, Sidarta Ribeiro, Ivan de Araújo, and João Queiroz. “A Proposal for a Synthetic Approach to Symbolic Semiosis.” 10th International Congress of the German Semiotic Society, Internationaler Kongress, Deutsche Gesellschaft für Semiotik (DGS), Kassel University, Germany, 2002.

Gudwin, R. R., and F. A. C. Gomide. “An Approach to Computational Semiotics.” In Proceedings of the Intelligent Systems and Semiotics International Conference, September 1997, 467–70.

Gudwin, R. R., and F. A. C. Gomide. “A Computational Semiotics Approach for Soft Computing.” In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, vol. 4, October 1997, 3981–86.

Gudwin, R., and J. Queiroz, eds. Semiotics and Intelligent Systems Development. Hershey, PA: Idea Group Publishing, 2006.

Gudwin, R., and J. Queiroz. “Towards Machine Understanding: Some Considerations Regarding Mathematical Semiosis.” In Proceedings of the 2007 IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems: Modeling, Evolution, and Engineering, April-May 2007, 247–52.

Gudwin, R. R., and J. Queiroz. “Some Considerations Regarding Mathematical Semiosis.” In Proceedings of IEEE Africon 2007, Windhoek, Namibia, September 2007.

Harnad, Stevan. “The Symbol Grounding Problem.” Physica D: Nonlinear Phenomena 42, no. 1 (1990): 335–46.

Haselager, Pim, Andre de Groot, and Hans Van Rappard. “Representationalism vs. Anti-Representationalism: A Debate for the Sake of Appearance.” Philosophical Psychology 16, no. 1 (2003): 5–24.

Lallee, Stephane, Carol Madden, Michel Hoen, and Peter Ford Dominey. “Linking Language with Embodied and Teleological Representations of Action for Humanoid Cognition.” Frontiers in Neurorobotics 4 (2010): 8.

Langley, Pat, John E. Laird, and Seth Rogers. “Cognitive Architectures: Research Issues and Challenges.” Cognitive Systems Research 10, no. 2 (2009): 141–60.

Levesque, Hector, and Gerhard Lakemeyer. “Cognitive Robotics.” Foundations of Artificial Intelligence 3 (2008): 869–86.

Loula, Angelo, Ricardo Gudwin, and João Queiroz. “Synthetic Approach of Symbolic Creatures.” S.E.E.D. Journal (Semiotics, Evolution, Energy, and Development) 3, no. 3 (December 2003): 125–33.

Loula, A., R. Gudwin, and J. Queiroz, eds. Artificial Cognition Systems. Hershey, PA: Idea Group Publishing, 2006.

Loula, Angelo, Ricardo Gudwin, Charbel Niño El-Hani, and João Queiroz. “Emergence of Self-Organized Symbol-Based Communication in Artificial Creatures.” Cognitive Systems Research 11, no. 2 (2010): 131–47.

Luck, M., and M. d’Inverno. “A Conceptual Framework for Agent Definition and Development.” The Computer Journal 44, no. 1 (2001): 1–20. doi:10.1093/comjnl/44.1.1

MacLennan, B. “Synthetic Ethology: An Approach to the Study of Communication.” In Artificial Life II: The Second Workshop on the Synthesis and Simulation of Living Systems, edited by C. Langton, C. Taylor, D. Farmer, and S. Rasmussen, 631–58. Redwood City, CA: Addison-Wesley, 1992.

MacLennan, B. J. “Making Meaning in Computers: Synthetic Ethology Revisited.” In Artificial Cognition Systems, edited by A. Loula, R. Gudwin, and J. Queiroz, 252–83. Idea Group, 2006.

MacLennan, B., and G. Burghardt. “Synthetic Ethology and the Evolution of Cooperative Communication.” Adaptive Behavior 2, no. 2 (1993): 161–87.

Madden, Carol, Michel Hoen, and Peter Ford Dominey. “A Cognitive Neuroscience Perspective on Embodied Language for Human–Robot Cooperation.” Brain and Language 112, no. 3 (2010): 180–88.

Marks II, R. J. “Intelligence: Computational Versus Artificial.” IEEE Transactions on Neural Networks 4, no. 5 (1993): 737–39.

Marty, R., and A. Lang. “76 Definitions of The Sign by C. S. Peirce.” 1997. http://www.cspeirce.com/rsources/76defs/76defs.htm.

Meijsing, Monica. “Real People and Virtual Bodies: How Disembodied Can Embodiment Be?” Minds and Machines 16 (2006): 443–61.

Meyer, J., and S. Wilson. “Simulation of Adaptive Behavior in Animats: Review and Prospect.” In From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior, 2–14. Cambridge, MA: MIT Press, 1991.

Meystel, A., and J. S. Albus. Intelligent Systems: Architecture, Design, and Control. Wiley Series on Intelligent Systems. New York, NY: John Wiley & Sons, Inc., 2002.

Nageswaran, Jayram Moorkanikara, Micah Richert, Nikil Dutt, and Jeffrey L. Krichmar. “Towards Reverse Engineering the Brain: Modeling Abstractions and Simulation Frameworks.” In VLSI System on Chip Conference (VLSI-SoC), 2010 18th IEEE/IFIP, 1–6. IEEE, 2010.

Newell, A. “Physical Symbol Systems.” Cognitive Science 4 (1980): 135–83.

Newell, A. “The Knowledge Level.” Artificial Intelligence 18 (1982): 87–127.

Newell, Allen, and H. A. Simon. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the ACM 19, no. 3 (1976): 113–26.

Nii, H. P., E. Feigenbaum, J. Anton, and A. Rockmore. “Signal-to-Symbol Transformation: HASP/SIAP Case Study.” AI Magazine 3, no. 2 (1982): 23–35.

Nilsson, Nils. “The Physical Symbol System Hypothesis: Status and Prospects.” In 50 Years of AI, edited by M. Lungarella, 9–17. Springer, 2007.

Noble, J., J. P. de Ruiter, and K. Arnold. “From Monkey Alarm Calls to Human Language: How Simulations Can Fill the Gap.” Adaptive Behavior 18 (2010): 66–82.

O’Reilly, Randall C., and Yuko Munakata. Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: The MIT Press, 2000.

Peirce, C. S. Collected Papers of Charles Sanders Peirce. Vols. 1–6 edited by Charles Hartshorne and Paul Weiss; vols. 7–8 edited by Arthur W. Burks. Cambridge, MA: Harvard University Press, 1931–1958.

Pezzulo, Giovanni, and Gianguglielmo Calvi. “Computational Explorations of Perceptual Symbol Systems Theory.” New Ideas in Psychology, Special Issue: Cognitive Robotics and Reevaluation of Piaget Concept of Egocentrism 29, no. 3 (2011): 275–97.

Pezzulo, G., L. W. Barsalou, A. Cangelosi, M. A. Fischer, K. McRae, and M. Spivey. “Computational Grounded Cognition: A New Alliance Between Grounded Cognition and Computational Modeling.” Frontiers in Psychology 3, no. 612 (2013): 1–11.

Queiroz, J., and F. Merrell. “On Peirce’s Pragmatic Notion of Semiosis—A Contribution for the Design of Meaning Machines.” Minds and Machines 19 (2009): 129–43.

Ribeiro, S., A. Loula, I. Araújo, R. Gudwin, and J. Queiroz. “Symbols Are Not Uniquely Human.” Biosystems 90 (2007): 263–72.

Robbins, Philip, and Murat Aydede, eds. The Cambridge Handbook of Situated Cognition. New York, NY: Cambridge University Press, 2009.

Roy, D. “Learning Visually Grounded Words and Syntax of Natural Spoken Language.” Evolution of Communication 4 (2002): 33–56.

Roy, Deb. “Semiotic Schemas: A Framework for Grounding Language in Action and Perception.” Artificial Intelligence 167, no. 1 (2005): 170–205.

Roy, D. “Grounding Words in Perception and Action: Insights from Computational Models.” Trends in Cognitive Science 9, no. 8 (2005): 389–96.

Samsonovich, Alexei. “Comparative Table of Cognitive Architectures.” BICA Society, 2009. http://bicasociety.org/cogarch/architectures.htm. Accessed March 2014.

Samsonovich, Alexei V. “On a Roadmap for the BICA Challenge.” Biologically Inspired Cognitive Architectures 1 (2012): 100–107.

Savan, D. An Introduction to C. S. Peirce’s Full System of Semeiotic. Monograph Series of the TSC, Number 1. Toronto: Toronto Semiotic Circle, 1987–1988.

Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3, no. 3 (1980): 417–57.

Sharkey, Noel E., and Tom Ziemke. “Mechanistic versus Phenomenal Embodiment: Can Robot Embodiment Lead to Strong AI?” Cognitive Systems Research 2, no. 4 (2001): 251–62.

Short, T. L. “The Development of Peirce’s Theory of Signs.” In Companion to Peirce, edited by C. Hookway, 214–40. Cambridge, MA: Cambridge University Press, 2006.

Smith, B. C. On the Origin of Objects. Cambridge, MA: Bradford Books, The MIT Press, 1998.

Steels, Luc, and Paul Vogt. “Grounding Adaptive Language Games in Robotic Agents.” In Proceedings of the Fourth European Conference on Artificial Life. Cambridge, MA: The MIT Press, 1997.

Steels, L. “Synthesising the Origins of Language and Meaning Using Co-Evolution, Self-Organisation, and Level Formation.” In Approaches to the Evolution of Language: Social and Cognitive Bases, edited by J. Hurford, C. Knight, and M. Studdart-Kennedy. Edinburgh University Press, 1998.

Steels, L. The Talking Heads Experiment: Volume 1. Words and Meanings. Brussels, Belgium: VUB Artificial Intelligence Laboratory, 1999. (Special Pre-edition LABORATORIUM, Antwerpen 1999.)

Steels, L. “Language As a Complex Adaptive System.” In Proceedings of PPSN VI, edited by M. Schoenauer. Berlin, Germany: Springer-Verlag, 2000.

Steels, Luc. “Language Games for Autonomous Robots.” Intelligent Systems, IEEE 16, no. 5 (2001): 16–22.

Stewart, John Robert, Olivier Gapenne, and Ezequiel A. Di Paolo. Enaction: Toward a New Paradigm for Cognitive Science. Cambridge, MA: The MIT Press, 2010.

Sun, R. A Tutorial on Clarion. Technical Report. Rensselaer Polytechnic Institute, 2003.

Sun, R. “Desiderata for Cognitive Architectures.” Philosophical Psychology 17, no. 3 (September 2004): 341–73.

Sun, R. “The Challenges of Building Computational Cognitive Architectures.” In Challenges for Computational Intelligence, edited by Wlodislaw Duch and Jacek Mandiziuk, 37–60. Berlin: Springer/Heidelberg, 2007.

Tatai, Victor K., and Ricardo R. Gudwin. “Using a Semiotics-Inspired Tool for the Control of Intelligent Opponents in Computer Games.” In Proceedings of the IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems, September-October 2003, 647–52.

Verdejo, Víctor M. “Computationalism, Connectionism, Dynamicism and Beyond: Looking for an Integrated Approach to Cognitive Science.” In The European Philosophy of Science Association Proceedings, EPSA11 - Perspectives and Foundational Problems in Philosophy of Science, vol. 2, edited by Vassilios Karakostas and Dennis Dieks. 2013. doi:10.1007/978-3-319-01306-0_33

Werner, G., and M. Dyer. “Evolution of Communication in Artificial Organisms.” In Artificial Life II, edited by C. Langton, C. Taylor, and D. Farmer, 659–87. Redwood City, CA: Addison-Wesley Publishers, 1992.

Ziemke, Tom. “What’s That Thing Called Embodiment?” In Proceedings of the 25th Annual Meeting of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum, 2003.

Zurada, J. M., R. J. Marks II, and C. J. Robinson. Computational Intelligence: Imitating Life. IEEE Press, 1994.

USING THE TECHNOLOGY FOR PHILOSOPHY

Trend Analysis of Philosophy Revolutions Using Google Books Archive: A Big Data Analysis Using Google Books

Shai Ophir
STARHOME, ISRAEL

ABSTRACT
We tend to see philosophical breakthroughs as derived mainly from the brilliant thinking of individual great philosophers. I am not going to argue that the individual philosopher does not play a major role in the new philosophy he or she presents. However, the influence of historical conditions on the development of new ideas is a major question. This question is not new, of course, and forms the basis for the history and sociology of philosophy and of knowledge. This short article will try to shed some light on this question from the perspective of big data analysis, using the Google Books archive and analytic methods. The conclusion is that new terminology, as well as the combinations of words that identify a specific philosopher’s language, usually starts to emerge decades before the philosophical breakthrough takes place.

KEYWORDS
History of Philosophy, Big Data, Google Books, Google Ngram Viewer, Topic Modeling, Trend Analysis, Digital Humanities

DIGITAL HUMANITIES AND BIG DATA ANALYSIS
Digital humanities is an emerging research area that tries to utilize new digital media and tools for the study of the humanities. “Big data” is a term widely used in digital science and industry for the analysis of massive collections of data in order to identify new patterns and extract new regularities in areas such as social behavior, medical phenomena, financial forecasting, and even physical laws. Big data is a significant part of digital humanities activity: researchers are trying to analyze large corpora of texts and derive conclusions from computerized text analysis. This short paper is an example of such an attempt. The value of machine-based text analysis and big data methods for the humanities is under debate; many scholars claim that the level of understanding such methods can deliver is minimal, while advocates of digital humanities reply that these insights cannot be obtained by any other method, and that this is just the beginning. The debate itself has philosophical interest: Can research become more automated? Can knowledge be extracted by machines? The following example may contribute to this discussion.

METHOD OF INVESTIGATION
“Topic modeling” is used by the digital humanities community to identify topics in texts via clusters of words that may characterize the text. A topic is described as “a recurring pattern of co-occurring words.”1

The method used here is based on the basic principle of topic modeling and consists of the following steps:

1) Identify major philosophers for inspection.

2) For each philosopher, identify a set of words that most characterizes his or her texts.

3) Feed the group of words identifying the philosopher into the Google Ngram Viewer tool, which can display the usage of a group of words (taken together) across the whole scanned corpus along the timeline. Using this tool, we can verify that the specific group of words indeed characterizes the specific philosopher’s time.


4) In addition, use Google Ngram Viewer to check whether the specific group of words was already in use before the philosopher’s time.
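Step 2 of the method can be sketched in code. The following is a minimal illustration of counting a philosopher's characteristic words; the function, the regular expression, and the small stopword list are illustrative assumptions, not the word-cloud tool the article actually uses:

```python
import re
from collections import Counter

# A tiny stopword list for illustration; a real analysis would use a
# much fuller one (or rely on the word-cloud tool's built-in list).
STOPWORDS = {"the", "of", "and", "to", "in", "a", "an", "is", "are",
             "that", "it", "as", "be", "which", "this", "by", "not",
             "or", "for", "with", "from", "we", "was", "were"}

def characteristic_words(texts, top_n=13):
    """Count word frequencies across a philosopher's texts and return
    the most frequent non-stopwords (step 2 of the method)."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS and len(word) > 2:
                counts[word] += 1
    return [word for word, _ in counts.most_common(top_n)]
```

For Kant, this kind of count over the three critiques would be expected to surface a list like the one quoted below (reason, nature, principle, and so on).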

In fact, the results show that in all the examined cases, usage of the specific group of words started decades before the philosopher’s time, increasing gradually until reaching its maximum level at the philosopher’s time. This is the main result of this work: it suggests that the concepts expressed by a specific philosopher continue a historical process that started long before the philosopher’s time.
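The core of this analysis, summing the per-year frequencies of the word collection and locating the peak year, can be sketched as follows. The input structure is an assumption on my part (per-word yearly relative frequencies gathered by hand), since Ngram Viewer itself only plots the trend:

```python
def collection_trend(freqs_by_word):
    """Sum per-year relative frequencies over a collection of words
    and locate the peak year of the combined trend.

    freqs_by_word maps word -> {year: relative frequency}; this input
    shape is assumed, e.g., data read off from Ngram Viewer plots.
    """
    trend = {}
    for yearly in freqs_by_word.values():
        for year, freq in yearly.items():
            trend[year] = trend.get(year, 0.0) + freq
    peak_year = max(trend, key=trend.get)
    return trend, peak_year

# Toy data loosely shaped like the Kant result (all values invented):
toy = {
    "reason":    {1700: 1.0, 1750: 2.0, 1790: 4.0, 1850: 2.5},
    "intuition": {1700: 0.1, 1750: 0.5, 1790: 1.5, 1850: 0.8},
}
trend, peak = collection_trend(toy)  # peak falls in 1790, just before 1800
```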

GOOGLE BOOKS AND RELEVANT SOFTWARE TOOLS
The research uses the Google Books archive, which is the largest corpus of scanned texts that exists today, and the Google Scholar archive, which relies on Google Books but also contains reference information and statistics. The research also uses Google Ngram Viewer, another tool provided by Google Labs, which enables a search within all Ngrams extracted from the Google Books corpus, where the Ngrams are sequences of between one and five words. Google Ngram Viewer displays the search results along the timeline, providing a macro view of the usage of a specific Ngram over hundreds of years.2

Additionally, the research uses the PaperMachine software tool, developed by Jo Guldi, to analyze the most frequently used words and build a word cloud for a set of documents.3 PaperMachine is designed to help scholars parse through large sets of information, capitalizing on current work in computer science, topic modeling, and visualization to generate iterative, time-dependent visualizations of what a hand-curated body of texts talks about and how it changes over time. PaperMachine is provided as an add-on to the Zotero framework, a software environment for digital humanities researchers; Zotero is used for managing files, software tools, and analysis results in one global framework.

USE CASE 1: KANT
Taking three of Kant’s major texts and building a word cloud of the most frequently used words with the PaperMachine software provides the following results:

Figure 1: Using PaperMachine within the Zotero framework for processing Kant’s files. The names of the texts are shown on the right side.

The next step is to take the most frequently used words for Kant, emphasized by size in the word cloud. These are the following words: reason, nature, principle, existence, priori, empirical, phenomena, conception, experience, unity, understanding, object, intuition.

Figure 2: Kant’s word cloud of the most frequent words in the above collection of texts.

These words are then fed into Google Ngram Viewer as one collection and are searched from 1650 to 2000. The result is displayed in the following figure. Google Ngrams shows that the use of this collection of words climbs from approximately 1700 (1715, in fact) and comes to its top just before 1800, exactly when Kant published his three best-known critiques. This provides evidence that the model of frequently used words is meaningful. The decline after 1800 shows that this trend is not accidental but has its climax within Kant’s period. Furthermore, it shows that the usage of this collection of words started to increase before Kant, during the whole eighteenth century. This is an indication of the philosophical context in which Kant was active: the words that most characterize Kant’s texts had been increasingly used before Kant’s time, and the relation between Kant’s revolutionary philosophy and the existing cultural system of the eighteenth century may be stronger than we tend to accept.

Figure 3: Trend analysis of the collection of words that mostly characterized Kant’s texts. The climax of the trend takes place indeed at the time of Kant’s major publications.

USE CASE 2: HEGEL
The next chapters, for Hegel and Nietzsche, are described in less detail, as the method of research is identical to the previous chapter.




Fig ure 4 shows the four maj or texts used for Heg el. The most frequent words for Nietzsche were man, German, work, life, and project.

Figure 4. Hegel’s major texts.Figure 4: Hegel’s major texts.

Hegel’s most frequently used words were state, nature, world, universal, philosophy, subjectivity, existence, spirit, form,Hegel’s mostconsciousness frequently used words were state, nature, world, universal, philosophy, subjectivity, existence, spirit, form, consciousness

Figure 8. Nietzsche’s Figureword 8: Nietzsche’scloud. word cloud.

The most frequent words for Nietzsche were man, german, work, life, and project.

InIn some of the ofcases, the such ascases, in the case ofsuch Nietzsche, as a relatively in the small case number ofof words Nietzsche, is sufficient for a relativelyidentifying Nietzsche small texts. number Using theFigure above of 8: five Nietzsche’s words words only word inis the cloudsufficient Ngram. Viewer generate fors theidentifying following results: Nietzsche The most frequent texts. words for UsingNietzsche were the man, abovegerman, work, five life, and projectwords. only in the

Figure 5. Hegel's word cloud.

After running the Google Ngram Viewer for the set of Hegel's words, we get the following diagram, which shows that the peak of the trend of using all of these words together falls exactly at the time Hegel was most influential, in the first half of the nineteenth century. Again, we can see an increase in the use of these words starting in the early eighteenth century, and a decline after Hegel's time.

Figure 6. The trend of using Hegel's most frequently used words. This trend comes to its peak just at the time Hegel was most influential.

USE CASE 3: NIETZSCHE

Figure 7 shows the four major texts used for Nietzsche.

Figure 7. Nietzsche's texts.

In some of the cases, such as in the case of Nietzsche, a relatively small number of words is sufficient for identifying Nietzsche texts. Using the above five words only in the Ngram Viewer generates the following results:

Figure 9. The usage trend of Nietzsche's word cloud. The peak is after 1900, the time Nietzsche was most popular, before his death.

CONCLUSIONS AND FURTHER RESEARCH DIRECTIONS

The main result of this work demonstrates a correlation between the set of words most frequently used by a philosopher and the growing trend of using this collection of words, a trend that started decades before the philosopher's time. Although the philosophers examined (Kant, Hegel, and Nietzsche) were among the most revolutionary philosophers of our time, the relation between their texts and the cultural and historical context of their period may be stronger than what we tend to presume. This result would have no meaning if there were no correlation between the most frequent words used by a philosopher and the general trend of such a collection over time. Using Google Ngram Viewer, we can see the tight association between the set of frequent words derived from each philosopher's texts and the height of the trend of using these words, which always falls at the time of the philosopher's greatest influence.

PAGE 40 FALL 2015 | VOLUME 15 | NUMBER 1 APA NEWSLETTER | PHILOSOPHY AND COMPUTERS

There is much work to do in the trend analysis itself toward understanding the reasons for the initiation of the trend and its fluctuations over time. This can serve as a base for much more detailed research in the history of philosophy.

NOTES

1. Megan Brett, "Topic Modeling: A Basic Introduction," Journal of Digital Humanities 2, no. 1 (2012).

2. For more information, visit https://books.google.com/ngrams/info

3. http://papermachines.org/

BIBLIOGRAPHY

Acerbi, Alberto, Vasileios Lampos, Philip Garnett, and R. Alexander Bentley. "The Expression of Emotions in 20th Century Books." PLoS ONE 8, no. 3 (2013).

Brett, Megan. "Topic Modeling: A Basic Introduction." Journal of Digital Humanities 2, no. 1 (Winter 2012).

Brysbaert, Marc, Matthias Buchmeier, Markus Conrad, Arthur M. Jacobs, Jens Bölte, and Andrea Böhl. "The Word Frequency Effect: A Review of Recent Developments and Implications for the Choice of Frequency Estimates in German." Experimental Psychology 58, no. 5 (2011): 412–24.

Greenfield, Patricia M. "The Changing Psychology of Culture from 1800 through 2000." Psychological Science 24, no. 9 (September 2013): 1722–31.

Guldi, Jo. Paper Machines software. http://papermachines.org/

Johnson-Roberson, Chris, and Jo Guldi. "Review of Paper Machines." Journal of Digital Humanities 2, no. 1 (Winter 2012). http://journalofdigitalhumanities.org/2-1/review-papermachines-by-adam-crymble/

Michel, Jean-Baptiste, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, The Google Books Team, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. "Quantitative Analysis of Culture Using Millions of Digitized Books." Science 331 (2011) [published online ahead of print: December 16, 2010].

Michel, Jean-Baptiste, Erez Lieberman Aiden, Jon Orwant, Will Brockman, and Slav Petrov. "Syntactic Annotations for the Google Books Ngram Corpus." In Proceedings of the ACL 2012 System Demonstrations, 169–74.

Roth, Steffen. "Fashionable Functions: A Google Ngram View of Trends in Functional Differentiation (1800–2000)." International Journal of Technology and Human Interaction 10, no. 2 (2014): 34–58.

The Logic Daemon: Colin Allen's Computer-Based Contributions to Logic Pedagogy

Christopher Menzel
TEXAS A&M UNIVERSITY

I was greatly pleased to learn that Colin Allen had been awarded the Barwise Prize, for both personal and professional reasons. Professionally, the breadth of Colin's achievements is truly impressive, particularly in the areas for which the prize is awarded. I cannot think of a more deserving awardee. Personally, Colin is a valued, longtime friend and (alas) a former colleague at Texas A&M University. I count it as a privilege that I was able to collaborate with him on several projects designed to enhance logic pedagogy. My brief discussion here will focus on Colin's work in this arena and, specifically, on a particularly creative insight he had into, in effect, crowd-sourcing the construction of a help system for learning natural deduction.

Colin's computer savvy and his interest in pedagogy were evident when he was hired at Texas A&M out of UCLA, as he had already co-authored a primer on Lisp programming that he made freely available—indeed, it was probably one of the first things freely available for download from Texas A&M in the early days of the World Wide Web (and is still available at http://mypage.iu.edu/~colallen/lp). Having learned his computer skills on Unix mainframes at UCLA, he understandably viewed the prospect of working on a DOS-based PC with a certain horror and so negotiated a NeXT machine—which came to be named snaefell.tamu.edu—as part of his hiring package. It was not long before snaefell's desktop publishing capabilities were put to solid pedagogical use, as Colin and our colleague Michael Hand collaborated on the development of a superb little logic text, Logic Primer (Allen and Hand 1992/2001), based upon the elegant system of natural deduction in Lemmon's (1990) Beginning Logic. The manuscript was published by The MIT Press—using the source files generated on snaefell, which served to keep the production costs down and the price low for students—and, over twenty years later, is in its second (and soon to be third) edition and is still a solid seller.

Not content simply with having produced a text, however, Colin made good use of the robust programming and shell scripting capabilities of NeXT's underlying Unix OS to develop a proof checker for the deduction system in Logic Primer, a program that would go line by line through a proof until it either found an error or reached a line containing the desired conclusion. Delivery of the software, however, was a problem—how do you make the proof checker available to students? If he'd followed the standard approach at the time, he would have loaded the software on a floppy disk and provided it to students to install on their own computers. But that approach is fraught with difficulties—you have to find a way to distribute floppy disks to a large number of students, the disks themselves (especially at the time) could be physically flawed, etc.

Colin opted for an alternative: he set up a dedicated account named "logic" on snaefell to which students could send proof attempts. Obviously, Colin's intent was not to spend the bulk of his days correcting proofs and emailing his corrections back to students. Rather, he implemented a Unix daemon on snaefell—basically, a program that runs quietly in a shadowy corner of the computer, waiting to spring into appropriate action when it sees that a certain kind of event has occurred. Specifically, Colin's logic daemon waited for messages from students to [email protected] containing proof attempts. When it saw one, it handed the message over to a script that would extract a student's proof attempt from the body of the message and, in turn, pass the proof attempt along to the proof checker, originally written in Lisp, to which Colin had appended code that would reply to the sender with the output of the checker. In this way, a student could interact with the proof checker via email in near real time—a reply could be expected from [email protected] within seconds of a submission. If the proof attempt was
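The email round trip just described (message in, proof attempt extracted, checker run, reply out) can be sketched schematically. Everything here is hypothetical: the helper names, the quoted-reply convention, and the toy stand-in for the checker. The real daemon wrapped a Lisp proof checker behind a Unix mail account.

```python
def extract_proof(message_body):
    """Pull the proof attempt out of an email body: keep non-blank
    lines, ignoring quoted material introduced by a reply prefix."""
    lines = [ln.strip() for ln in message_body.splitlines()]
    return [ln for ln in lines if ln and not ln.startswith(">")]

def handle_message(message_body, check_line):
    """Check a proof attempt line by line, stopping at the first
    error, and return the reply text to be mailed to the sender."""
    reply = []
    for line in extract_proof(message_body):
        if check_line(line):
            reply.append("OK " + line)
        else:
            reply.append(">> " + line)
            reply.append("Error: see flagged line.")
            break
    else:
        reply.append("Congratulations. Your proof is correct.")
    return "\n".join(reply)

# Toy stand-in for the real checker: flags any line citing the bogus rule 'XX'.
demo = "1 (1) P->(P->Q) A\n2 (2) ~(P->Q) XX\n"
print(handle_message(demo, lambda ln: "XX" not in ln))
```

The design point worth noticing is that the daemon itself is just glue: all of the logical work lives in the checker, so the delivery mechanism (mail, and later the Web) could be swapped without touching it.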


successful, the student would be so told; if there was an error, the student would learn where one first occurred and could then revise the proof and send off a new attempt.

The importance of an interactive proof checker in teaching natural deduction simply cannot be overstated. Those of us who taught (and therefore learned) logic prior to the widespread use of computers and the development of automated proof checkers will recall vividly that the number one threat to mastering natural deduction is systematic error. Without a vigilant proof checker, human or otherwise, standing guard, it is extremely easy for students learning natural deduction to believe they are solving problems correctly—they find their way eventually to the conclusion without sensing any problems—when, in fact, they are making critical errors. If errors are not identified and corrected on the spot, students easily begin to make the same type of errors again and again, to the point that the errors become entrenched in the students' thinking and, hence, become very difficult to eliminate. Interactive proof checking virtually eliminates systematic error and, thus, represents a huge advance in logic pedagogy.

As computers became essential equipment for college students, interactive proof-checking software began to be included on CDs with many logic texts. The email approach, however, had a strong advantage insofar as bugs could be squashed and upgrades made available immediately simply by revising the proof-checker source on the spot. However, the approach also had some obvious downsides. Notably, to correct and continue working upon a proof in which the checker had found an error, a student would typically use the "reply" functionality of her email client, which, at the least, would typically require editing away extraneous characters introduced automatically into the reply, e.g., header information, beginning-of-line characters marking the text being replied to, etc.

Fortunately, this was also around the time at which the Common Gateway Interface (CGI) was introduced into the open source Apache server software that was driving the nascent World Wide Web. CGI provided a standard interface between a web server and appropriately designed stand-alone programs so that data entered into fields on a web page and submitted by the user could be handed to such a program as input and the output of the program returned to the server and displayed to the user on a subsequent page. Colin's vision, then, was to have the role of the logic daemon be subsumed by the CGI capabilities of the Apache web server. Instead of choosing a problem from the Logic Primer and composing an email message, a user could simply call up a web page from which she could choose a proof problem from the Primer to solve and attempt a proof directly in a text field on the page. And instead of sending off the message when the user finished her attempt, she could simply click a submission button. CGI-based calls to the proof checker embedded in the HTML code of the page, upon submission of the proof attempt, would be recognized as such and the web server, accordingly, would hand the proof attempt to the proof checker for evaluation and return the results immediately to the user. In particular, if the proof attempt was flawed, the user's work could be returned to her in a text field, ready for her to continue. CGI thus opened the possibility of a smoother, web-based implementation of the proof checker that retained the advantages (especially dynamic upgrades) and avoided the liabilities of the email-based implementation.

To my own great good fortune, Colin correctly surmised that I would be more than interested in working with him on coding this design. Armed with "the Camel book"1 (Wall et al. 1996)—the superb (if quirky) definitive guide to the Perl programming language—a variety of excellent supplemental guides,2 and an already huge amount of readily available content on the Web, we taught ourselves how to program in Perl (the predominant scripting language at the time) and, in particular, how to write the CGI scripts that would enable users to interact dynamically with the proof checker and related software for truth tables solely within a browser. With a small grant from Texas A&M, we were able to hire a computer science student to recode the original Lisp proof checker in C, making it faster and more portable. And thanks to the open source Apache software and the marvelous, open source Linux operating system, we were able to implement Colin's vision at minimal cost on an inexpensive Dell desktop computer. The result—the Logic Daemon (http://logic.tamu.edu)—is still widely and effectively used, its now rather "old school" text-based interface notwithstanding.3

To help with debugging our code, we implemented extensive logging—in addition to error logging, every interaction with the server was also logged in detail, notably, every proof attempt. As Logic Primer was being used quite widely, within a year or so after putting the Logic Daemon online we had daily log files detailing tens of thousands of separate interactions. For instance, the following entry in the log file for October 31, 2005, records an attempt to construct a proof of the validity (P->(P->Q))->(P->Q):

Mon Oct 31 05:16:28 2005: 129.116.XXX.XXX

|-(P->(P->Q))->(P->Q)

1 (1) P->(P->Q) A
2 (2) ~(P->Q) A
2 (3) P&~Q 2Neg->
2 (4) P 3&E
2 (5) ~Q 3&E
1,2 (6) P->Q 4,6->E
1,2 (7) Q 4,6->E
1 (8) P->Q 5,7RAA(2)
(9) (P->(P->Q))->(P->Q) 8->I(1)

|-(P->(P->Q))->(P->Q)

OK 1 (1) P->(P->Q) A
OK 2 (2) ~(P->Q) A


OK 2 (3) P&~Q 2Neg->
OK 2 (4) P 3&E
OK 2 (5) ~Q 3&E
>> 1,2 (6) P->Q 4,6->E

The annotation must only cite lines earlier than the current line.

Lines 3-11 show what the user submitted; lines 13-17 indicate the lines the Daemon checked and approved; line 18 shows the first error. The final line records the error message that was returned to the user. The error, of course, is that the user entered line numbers 4 and 6 instead of 1 and 4. A subsequent log entry shows the correction made and the proof passing muster, with the flagged line and the error message replaced by:

OK 1,2 (6) P->Q 1,4->E
OK 1,2 (7) Q 4,6->E
OK 1 (8) P->Q 5,7RAA(2)
OK (9) (P->(P->Q))->(P->Q) 8->I(1)

Congratulations. Your proof is correct.

Poring over the logs one day, Colin had a terrific insight—the detailed data in the logs contained the makings of a "crowd-sourced" help system for the proof checker. (This was, of course, years before the notion of crowd-sourcing was on the technological radar.) The basic idea was to use the logs to identify, at any given point at which a student might need help in constructing a proof in answer to a problem in the Primer, the continuation toward a solution that users most often chose. For the most popular continuation would likely represent the "path of least resistance" to a solution, the continuation that students who had successfully solved the problem from that point had found most natural.

As for implementation, for each proof problem in the Primer, the logs were mined by means of several Perl scripts to generate a tree from all of the (complete and partial) attempts to construct a solution to the problem. (The scripts were mostly of Colin's design but were coded primarily by several very talented computer science students we were able to hire from our grant money.) The root node of the tree, node 0, represented the problem itself, premises (if any), and conclusion—in this case, as the problem is a theorem, |-(P->(P->Q))->(P->Q). Each non-root node N represented (most saliently) the last formula pn in an initial segment p1, p2, …, pn of a proof attempt. Thus, unsurprisingly, as most users will attempt to use ->I to prove the theorem in question, the formula identified in node 1 of the tree created for the problem is the antecedent P->(P->Q). However, the node for an initial segment p1, p2, …, pn of a proof attempt contains a great deal more information, specifically:

1. Whether pn is properly derived from earlier lines;

2. Whether p1, p2, …, pn is a proof of A;

3. The "children" of N, that is, the nodes that represent attempts p1, p2, …, pn, pn+1 to extend p1, p2, …, pn by a further line pn+1;

4. Whether there is a "path to a solution" from N, that is, a series of proof attempts extending p1, p2, …, pn that end in a complete proof of A;

5. The minimum number of steps from p1, p2, …, pn to a solution;

and, importantly:

6. The "visit count" for N, that is, the number of times users had attempted to construct a proof of A beginning with p1, p2, …, pn.

More specifically, node 1 of the tree (somewhat simplified), representing the first line of a proof of the theorem in question by ->I, has the following structure:

1 => {
  Assumptions => [1],
  LineNumber => '1',
  Sentence => 'P->(P->Q)',
  Rule => 'A',
  ProblemSolved => 'no',
  PathToSolution => 'yes',
  Error => 'no',
  TypeOfError => 'null',
  VisitCount => '82',
  ParentNode => '0',
  ChildNodes => [2,10,17],
  MinStepsLeft => '5',
  HelpAsked => '10',
}

Most of the lines are self-explanatory. The VisitCount indicates that there were eighty-two proof attempts that began with this assumption. The ParentNode indicates the node preceding this line in the proof attempt—in this case, the root node 0. The ChildNodes field indicates all of the nodes representing continuations of the proof attempt from line 1, and MinStepsLeft indicates the shortest number of lines remaining to a complete proof from one of those children.
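One way to picture the mining step: walk each logged attempt line by line, give every distinct initial segment its own node, and count revisits. The sketch below is schematic, not a reconstruction of the actual Perl scripts; it uses a trimmed version of the node layout, and the parsing of real log files is omitted.

```python
def mine_attempts(attempts):
    """Build a tree of proof-attempt prefixes. Node 0 is the root
    (the problem itself); each other node records one proof line,
    its parent, its children, and how often it was visited."""
    nodes = {0: {"Sentence": None, "ParentNode": None,
                 "ChildNodes": [], "VisitCount": 0}}
    index = {}            # maps a prefix (tuple of lines) to its node id
    next_id = 1
    for attempt in attempts:
        parent = 0
        for depth in range(1, len(attempt) + 1):
            prefix = tuple(attempt[:depth])
            if prefix not in index:
                index[prefix] = next_id
                nodes[next_id] = {"Sentence": attempt[depth - 1],
                                  "ParentNode": parent,
                                  "ChildNodes": [], "VisitCount": 0}
                nodes[parent]["ChildNodes"].append(next_id)
                next_id += 1
            node_id = index[prefix]
            nodes[node_id]["VisitCount"] += 1
            parent = node_id
    return nodes

# Two attempts sharing a first line, as in the article's example:
logs = [["P->(P->Q)", "~(P->Q)"],   # the longer RAA route
        ["P->(P->Q)", "P"]]         # the shorter ->I route
tree = mine_attempts(logs)
print(tree[1]["VisitCount"], tree[1]["ChildNodes"])  # prints: 2 [2, 3]
```

Because nodes are keyed by whole prefixes rather than single lines, two attempts that reach the same formula by different routes get distinct nodes, which is exactly what the visit counts need to mean for the help system.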


The child node 10 represents the continuation actually found in the above proof, where the assumption ~(P->Q) was invoked for a subsequent application of the rule RAA (Reductio ad Absurdum):4

10 => {
  Assumptions => [2],
  LineNumber => '2',
  Sentence => '~(P->Q)',
  Rule => 'A',
  ProblemSolved => 'no',
  PathToSolution => 'yes',
  Error => 'no',
  TypeOfError => 'null',
  VisitCount => '21',
  ParentNode => '1',
  ChildNodes => [11,47],
  MinStepsLeft => '7',
  HelpAsked => '7',
}

ParentNode now indicates node 1, the node corresponding to line 1 of the above proof. And ChildNodes indicates all of the nodes representing continuations of the proof attempt from line 2. One of these, node 17, represents the continuation actually found in the above proof, where the (derived) rule Neg-> is invoked to derive P&~Q.

More users in fact identified the shorter and rather simpler proof of the theorem (one that, moreover, does not depend on non-primitive, or derived, rules like Neg->), as indicated in a further log entry:

Sat Jul 01 17:22:57 2006: 198.192.XXX.XXX

|-(P->(P->Q))->(P->Q)

1 (1) P->(P->Q) A
2 (2) P A
1,2 (3) P->Q 1,2->E
1,2 (4) Q 2,3->E
1 (5) P->Q 4->I(2)
(6) (P->(P->Q))->(P->Q) 5->I(1)

|-(P->(P->Q))->(P->Q)

OK 1 (1) P->(P->Q) A
OK 2 (2) P A
OK 1,2 (3) P->Q 1,2->E
OK 1,2 (4) Q 2,3->E
OK 1 (5) P->Q 4->I(2)
OK (6) (P->(P->Q))->(P->Q) 5->I(1)

Congratulations. Your proof is correct.

The scripts accordingly created another child of node 1, representing the above continuation:

2 => {
  Assumptions => [2],
  LineNumber => '2',
  Sentence => 'P',
  Rule => 'A',
  ProblemSolved => 'no',
  PathToSolution => 'yes',
  Error => 'no',
  TypeOfError => 'null',
  VisitCount => '56',
  ParentNode => '1',
  ChildNodes => [3],
  MinStepsLeft => '4',
  HelpAsked => '0',
}

Note the difference in the values of VisitCount for the two continuation nodes 2 and 10. Node 10's is 21, representing the fact that this particular continuation of the proof was tried only twenty-one times, whereas the continuation represented in node 2 was attempted fifty-six times. This difference strongly suggests that the latter continuation represents the more "natural" of the two, the one to which users naturally gravitate in their quest for a solution. Hence, the Help system, if invoked immediately after entering line 1, would turn to node 2 rather than node 10 and, hence, offer the crowd-sourced suggestion of continuing with the assumption P rather than the assumption ~(P->Q).
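The crowd-sourced suggestion then reduces to a simple selection rule: among the current node's children, keep those that still lie on a path to a solution and pick the most visited. A minimal sketch, reusing a trimmed version of the node fields from the examples above; node 17's figures here are invented for illustration.

```python
def suggest_continuation(nodes, current):
    """Return the child node the help system would suggest:
    the most-visited child that still lies on a path to a solution."""
    viable = [c for c in nodes[current]["ChildNodes"]
              if nodes[c]["PathToSolution"] == "yes"]
    if not viable:
        return None
    return max(viable, key=lambda c: nodes[c]["VisitCount"])

# Nodes 2 and 10 carry the article's visit counts (56 vs. 21);
# node 17's entries are hypothetical.
nodes = {1: {"ChildNodes": [2, 10, 17]},
         2: {"PathToSolution": "yes", "VisitCount": 56,
             "Sentence": "P"},
         10: {"PathToSolution": "yes", "VisitCount": 21,
              "Sentence": "~(P->Q)"},
         17: {"PathToSolution": "yes", "VisitCount": 3,
              "Sentence": "P&~Q"}}
print(nodes[suggest_continuation(nodes, 1)]["Sentence"])  # prints: P
```

Restricting the choice to children with PathToSolution set to "yes" keeps the system from recommending popular dead ends, which is why that field is stored per node rather than recomputed on demand.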


Two further features are worth noting. First, the help system offers its assistance incrementally. That is, when a user asks for a suggestion, the system does not suggest the entire line. Rather, it first suggests a rule without specifying exactly how it should be used. A second click suggests both a rule and appropriate line numbers. On a third click, the system offers up the entire line that the crowd-sourcing has identified.

Second, as noted briefly above, unlike many introductory texts,5 the Primer initially presents primitive introduction and elimination rules for the logical operators and, only subsequently, presents derived rules—rules like DeMorgan's Law (in its various forms) that can be proved (often non-trivially) from the primitive rules. Although theoretically redundant, derived rules are very convenient as, when available, they often shorten proofs considerably—proofs of the various forms of DeMorgan's Law from primitive rules, for example, take up to ten steps to derive in the Primer's system. Arguably, however, working with primitive rules only, for a time anyway, yields a deeper comprehension of, and greater facility with, natural deduction than is normally achieved when no distinction is made between primitive and derived rules and both sorts are introduced simultaneously: derived rules shorten proofs, but thereby deprive the student of working with rules tailored strictly to one and only one logical operator at a time to forge connections between the various operators.6 Consequently, the help system is outfitted with a "filter" that will offer continuations involving primitive rules only.

Colin's work on the Logic Daemon encompasses only a part of his accomplishments in logic pedagogy. But it is paradigmatic—at once rigorous, creative, and, importantly, of great value to students. Given Jon Barwise's own tremendous dedication not only to doing logic but to teaching it, the last of these is especially fitting for a recipient of the Barwise Prize.

NOTES

1. L. Wall, R. L. Schwartz, and T. Christiansen, Programming Perl.

2. Notably, Schwartz and Christiansen, Learning Perl; T. Christiansen and N. Torkington, Perl Cookbook; and the indisputable J. E. Friedl, Mastering Regular Expressions.

3. Our work on the Logic Daemon caught the eye of Mayfield Publishing, who asked us to write logic software to complement a new text, The Power of Logic. Mayfield was initially looking to go the traditional route of a bundled CD, but it was easy to convince them of the vast superiority of a web-based approach, which we implemented at http://poweroflogic.com. The text, originally written by C. Stephen Layman, is now published by McGraw-Hill (which purchased Mayfield) and is in its 5th edition (Howard-Snyder et al. 2013). Colin and I continue to support the website.

4. With the exception of the root node 0, node numbering per se was determined entirely by the order of the proof attempts found in the logs. As should be clear, the tree structure is determined by the ChildNodes entries (which, of course, comport exactly with the ParentNode entries).

5. For example, Howard-Snyder et al., The Power of Logic; Hurley, A Concise Introduction to Logic; and Klenk, Understanding Symbolic Logic.

6. Logic Primer is not entirely "pure" in its choice of primitive rules, as its disjunction elimination rule vE is the rule more commonly known as disjunctive syllogism — p v q, ~p ∴ q — which involves not only disjunction but also negation. The purer form of vE is: if, assuming p, r follows and, assuming q, r follows, then r follows from p v q.

BIBLIOGRAPHY

Allen, C., and M. Hand. Logic Primer, 2nd ed. Cambridge, MA: The MIT Press, 2001.

Christiansen, T., and N. Torkington. Perl Cookbook. Sebastopol, CA: O'Reilly, 1998.

Friedl, J. E. Mastering Regular Expressions: Powerful Techniques for Perl and Other Tools. Cambridge, MA: O'Reilly, 1997.

Howard-Snyder, F., D. Howard-Snyder, and R. Wasserman. The Power of Logic, 5th ed. Boston: McGraw-Hill Higher Education, 2013.

Hurley, P. J. A Concise Introduction to Logic. Boston, MA: Wadsworth, 2011.

Klenk, V. Understanding Symbolic Logic, 5th ed. Englewood Cliffs, NJ: Prentice-Hall, 2008.

Lemmon, E. J. Beginning Logic. London: Chapman and Hall, 1990.

Schwartz, R. L., and T. Christiansen. Learning Perl. Sebastopol, CA: O'Reilly & Associates, 1997.

Wall, L., R. L. Schwartz, and T. Christiansen. Programming Perl. Sebastopol, CA: O'Reilly & Associates, 1996.

BOOK HEADS UP

Building Ontologies with Basic Formal Ontology

Robert Arp
U.S. ARMY, FORT LEAVENWORTH, KANSAS

Barry Smith
UNIVERSITY OF BUFFALO AND NATIONAL CENTER FOR ONTOLOGICAL RESEARCH

Andrew Spear
GRAND VALLEY STATE UNIVERSITY

In the era of "big data," science is increasingly information driven, and the potential for computers to store, manage, and integrate massive amounts of data has given rise to new disciplinary fields such as biomedical informatics. Applied ontology offers a strategy for organizing scientific information in computer-tractable form, drawing on concepts not only from computer and information science but also from linguistics, logic, and philosophy. This book provides an introduction to the field of applied ontology that is of particular relevance to biomedicine, covering theoretical components of ontologies, best practices for ontology design, and examples of biomedical ontologies in use.

After defining an ontology as a representation of the types of entities in a given domain, the book distinguishes between different kinds of ontologies and taxonomies, and shows how applied ontology draws on more traditional ideas from metaphysics. It presents the core features of the Basic Formal Ontology (BFO), now used by over 100 ontology projects throughout the world, and offers examples of domain ontologies that utilize BFO. The book also describes the Web Ontology Language (OWL), a common framework for Semantic Web technologies.


Throughout, the book provides concrete recommendations for the design and construction of domain ontologies.

LAST-MINUTE NEWS

The 2015 Barwise Prize Winner Is William Rapaport

Prepared by Peter Boltuc and Marcello Guarini

It is our pleasure to announce that the winner of the 2015 Barwise Prize is William Rapaport. Bill Rapaport has been one of the pioneers of the computers and philosophy movement and a long-time friend of this committee. Bill has a Ph.D. in philosophy, taught in philosophy before moving to the computer science department at SUNY Buffalo, and spent most of his long career publishing at the intersection of philosophy and computing, including AI and cognitive science. From very technical (i.e., computationally and logically savvy) contributions to the structure of knowledge representation in computing meaning from context, to general and wide-ranging work in philosophy of mind/psychology/AI, to pedagogical work published in Teaching Philosophy, his work covers an impressive range of issues at the intersection of philosophy and computing—theoretical, practical/ethical, and pedagogical. Bill is especially well known for his work on intentionality, which bridges the gap between computer science, philosophy, and what we would today call the cognitive sciences.

Bill's article "Searle on Brains as Computers," published in our newsletter in spring 2007 (vol. 06, no. 2, available at https://c.ymcdn.com/sites/www.apaonline.org/resource/collection/EADE8D52-8D02-4136-9A2A-729368501E43/v06n2Computers.pdf), can be recommended as the starting point for the main block of articles in the current issue devoted to a similar topic.

The committee reached its decision in mid-October. We are now working on scheduling the ceremony at one of the upcoming APA conferences, which we expect to happen during the 2016-2017 academic year. We will keep our readers informed.

Congratulations to Bill!
