Comparative Advantages of Artificial Intelligence and Human Cognition According to Logical Inference


ABDUCTIVE HUMANISM: COMPARATIVE ADVANTAGES OF ARTIFICIAL INTELLIGENCE AND HUMAN COGNITION ACCORDING TO LOGICAL INFERENCE

By WILLIAM JOSEPH LITTLEFIELD II

Submitted in partial fulfillment of the requirements for the degree of Master of Arts, World Literature

CASE WESTERN RESERVE UNIVERSITY

May, 2019

We hereby approve the thesis/dissertation of William Joseph Littlefield II, candidate for the degree of Master of Arts.

Committee Chair: Florin Berindeanu
Committee Member: Richard Buchanan
Committee Member: Mark Turner

Date of Defense: April 3, 2019

*We also certify that written approval has been obtained for any proprietary material contained therein.

Table of Contents

Abstract
1. Introduction
2. Logical AI
   a. Deductive Methods
   b. Inductive Methods
   c. Comparative Advantages of AI
3. Big Data
   a. Epistemic Superiority
   b. Computational Creativity
4. Abduction
   a. Charles Sanders Peirce
   b. Abduction & Discovery
5. Abductive Humanism
   a. Comparative Advantages of Human Cognition
   b. A Future of Abduction
6. Conclusion
Bibliography

Abstract

Speculation about artificial intelligence and big data has become commonplace. Foremost among these discussions is the potential for these technologies to displace the value of human labor. However, these discussions have become untethered from the history and development of the technologies in question. This paper analyzes paradigms of artificial intelligence to reveal that they are closely tied to different types of logical inference. The properties of computing machinery provide them with an advantage at deductive and inductive tasks when compared with human cognition. But the understudied method of Peircean abduction poses several problems for computing machinery.
Moreover, Peircean abduction, also called "creative abduction," bears resemblance to important themes in the history of humanism. Human cognition handles the issues of Peircean abduction with remarkable ease, suggesting that humans will maintain a comparative advantage at this type of logical inference for the foreseeable future.

"Imagination is more robust in proportion as reasoning power is weak."[1]
- Giambattista Vico, The New Science

1. Introduction

Two great chimeras of science fiction are proliferating: one represents an existential crisis, the other, epistemological. I speak, of course, of artificial intelligence and big data, and like most crises, their horror is in the questions that they pose. What is the value of human labor once we have outsourced reason? What is the value of knowledge when we have outsourced memory? Machine learning algorithms can be trained to see what we do not; brute-force computation can outrun us. Our favorite applications now know us better than our friends or family do.[2] Still, there are particular tasks where human cognition will possess a comparative advantage for the foreseeable future; this paper will argue that such tasks are centered around the human capacity for abductive reasoning. Meanwhile, as these two technologies continue to develop, their comparative advantage at inductive and deductive reasoning will only improve.[3] Therefore, I advocate for a new humanism, referred to simply as abductive humanism.

[1] Vico, Giambattista, et al. The New Science of Giambattista Vico: Revised Translation of the Third Edition (1744). Cornell University Press, 1968.
[2] Youyou, Wu, et al. "Computer-Based Personality Judgments Are More Accurate than Those Made by Humans." Proceedings of the National Academy of Sciences, vol. 112, no. 4, 2015, pp. 1036–1040, doi:10.1073/pnas.1418680112.
To this end, this paper begins by reviewing some paradigms in the history of artificial intelligence and how they are representative of the aforementioned types of logical inference. The speed, inerrability, and indefatigability of computing machines have rendered them superior tools for such inference. Additional consideration is given to the epistemic advantages which big data provides and the consequences of those advantages for computational creativity. Ultimately, though, a revisiting of the foundations of humanism is combined with outstanding problems in logical artificial intelligence (logical AI), problems which are well handled by abductive reasoning, to conclude, as Selmer Bringsjord quipped, that "computation, among other things, is beneath us."[4]

[3] A disclaimer regarding methodology: there are some who may object to the idea of broadly periodizing the history of artificial intelligence according to the underlying logic employed. Furthermore, scholars of logical AI have been grappling with the problems of abduction, defeasible reasoning, and nonmonotonic logic since the emergence of the field. Suggesting that the technologies of the past, or of today, are strictly deductive, inductive, or abductive in their approach could be denounced as an oversimplification. The author readily concedes both of those points. Instead, this paper considers which approaches have generally materialized in artificial intelligence empirically, and which have generally eluded success, in order to describe macroscale trends in the history and philosophy of technology. The legitimacy of this method lies in the fact that this paper intends to respond to scholarly and popular thought of a similar ilk.
[4] John McCarthy provides the following definition of logical AI: "logical AI involves representing knowledge of an agent's world, its goals and the current situation by sentences in logic." McCarthy, John. "Concept of Logical AI." Logic-Based Artificial Intelligence, edited by Jack Minker, Kluwer Academic Publishers, 2000, pp. 37–56. The witticism by Bringsjord dates from a 1994 refutation of Searle's arguments against cognition as computation. Bringsjord, Selmer. "Computation, among Other Things, Is beneath Us." Minds and Machines, vol. 4, no. 4, 1994, pp. 469–488, doi:10.1007/bf00974171.

2. Logical AI

Much of the speculation and hysteria regarding AI arises from generalizations about its seemingly infinite applications. Once more, predictions of a leisure society seem reasonable, if not inevitable.[5] However, contextualizing this conversation within the history of logical AI makes the limitations of these technologies much clearer.[6] What is more, the history of AI, in practice, maps to different strategies of logical inference. As will be seen, abductive reasoning is mostly aspirational within logical AI, a telling contrast to the achievements in inductive and deductive reasoning.

[5] See the work of Yuval Noah Harari, whose arguments about the future of artificial intelligence are among those which this paper intends to refute. Harari, Yuval Noah. "The Meaning of Life in a World without Work." The Guardian, Guardian News and Media, 8 May 2017, www.theguardian.com/technology/2017/may/08/virtual-reality-religion-robots-sapiens-book. For a book-length exposition, see Harari, Yuval N. Homo Deus: A Brief History of Tomorrow. Harper Perennial, 2018.

Deductive Methods

Deductive and inductive reasoning both comprise large fields of study within philosophy. Despite the many historical interpretations of induction and deduction, it can safely be said that induction is a kind of "bottom-up" reasoning while deduction is "top-down." The most reprinted illustration of deduction is likely the following syllogism:

All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.

Bertrand Russell provides a memorable example of induction in the chicken that is fed daily by a farmer and infers that the following day it will be fed once more (only to have its neck wrung).[7] As can be seen, deduction operates by inspecting whether a particular case falls within the domain of an established rule. Some of the most visible examples of early AI were chess-playing programs, often called "chess engines." The approach first employed in the development of chess engines was to survey chess grandmasters about how to play chess, aggregate their answers, and create a set of logical rules that corresponded to their strategies. During gameplay, if a situation fell under an established rule, then the program would execute a particular strategy in accordance with that rule.[8] This is very typical of the dominant paradigm in early AI. In fact, such programs are often referred to as "GOFAI," or Good Old-Fashioned Artificial Intelligence, a name coined by the philosopher John Haugeland (1945-2010).[9] But in academic terms, this technology is referred to as "symbolic AI." Deep Blue, the chess engine developed by IBM, which first defeated world champion Garry Kasparov in 1996, relied primarily upon symbolic AI.[10]

[6] Something very similar occurred with early academic speculation about "cyberspace." After the word was coined by William Gibson in the 1984 novel Neuromancer, a flurry of papers, conferences, and books emerged that imagined a digital future which was increasingly untethered from reality. Those working more closely with the technology and policy of the era, such as John Perry Barlow, offered much more accurate prognostications.
[7] Russell, Bertrand, and John Skorupski. The Problems of Philosophy. OUP Oxford, 2014. Here, Russell is of course explaining the infamous "problem of induction."
[8] Such programs rely upon a system of conditional if-then statements, a fundamental construct of computer science.
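The if-then architecture that footnote 8 describes can be sketched in a few lines. The following is a toy illustration, not code from any actual chess engine; the game-state predicates, rules, and strategy names are invented for the purpose, but the shape (conditions checked in order, each paired with a strategy) is the symbolic-AI pattern the passage describes.

```python
# A minimal sketch of GOFAI-style rule dispatch: expert strategies
# encoded as if-then rules, checked in priority order. All predicates
# and strategies here are hypothetical, chosen only for illustration.

from dataclasses import dataclass

@dataclass
class GameState:
    in_check: bool
    can_castle: bool
    center_controlled: bool

# Each rule pairs a condition (the "if") with a strategy (the "then").
RULES = [
    (lambda s: s.in_check, "move the king to safety"),
    (lambda s: s.can_castle, "castle to protect the king"),
    (lambda s: not s.center_controlled, "develop a piece toward the center"),
]

def choose_strategy(state: GameState) -> str:
    """Return the strategy of the first rule whose condition matches."""
    for condition, strategy in RULES:
        if condition(state):
            return strategy
    return "make any legal move"  # fallback when no rule applies

print(choose_strategy(GameState(in_check=False, can_castle=True,
                                center_controlled=False)))
# → castle to protect the king
```

Whether a situation "falls under an established rule" is here a simple boolean test, which is why such programs excel at deduction: applying a rule to a case is exactly what the dispatch loop does. What the loop cannot do is invent a rule it was never given, which foreshadows the difficulty abduction poses later in this paper.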