On a Flexible Representation for Defeasible Reasoning Variants

Total Pages: 16

File Type: pdf, Size: 1020 KB

On a Flexible Representation for Defeasible Reasoning Variants

Abdelraouf Hecham (University of Montpellier, Montpellier, France), Pierre Bisquert (IATE, INRA, Montpellier, France), Madalina Croitoru (University of Montpellier, Montpellier, France)

ABSTRACT

We propose Statement Graphs (SG), a new logical formalism for defeasible reasoning based on argumentation. Using a flexible labelling function, SGs can capture the variants of defeasible reasoning (ambiguity blocking or propagating, with or without team defeat, and circular reasoning). We evaluate our approach with respect to human reasoning and propose a working first order defeasible reasoning tool that, compared to the state of the art, has richer expressivity at no added computational cost. Such a tool could be of great practical use in decision making projects such as H2020 NoAW.

KEYWORDS

Defeasible Reasoning; Defeasible Logics; Existential Rules

ACM Reference Format:
Abdelraouf Hecham, Pierre Bisquert, and Madalina Croitoru. 2018. On a Flexible Representation for Defeasible Reasoning Variants. In Proc. of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018), Stockholm, Sweden, July 10-15, 2018, IFAAMAS, 9 pages.

1 INTRODUCTION

Defeasible reasoning [25] is used to evaluate claims or statements in an inconsistent setting. It has been successfully applied in many multi-agent domains such as legal reasoning [3], business rules [24], contracting [17], planning [15], agent negotiations [12], inconsistency management [23], etc. Defeasible reasoning can also be used for reasoning with uncertain rules in food science. In the EU H2020 NoAW project we are interested in reasoning logically about how to manage waste from wine by-products. Such rules, elicited by experts, non-experts, consumers etc. via online surveys, have to be put together and used as a whole for decision making.
Unfortunately, there is no universally valid way to reason defeasibly. An inherent characteristic of defeasible reasoning is its systematic reliance on a set of intuitions and rules of thumb, which have long been debated among logicians [1, 19, 22, 26]. For example, could information derived from a contested claim be used to contest another claim (i.e. ambiguity handling)? Could “chains” of reasoning for the same claim be combined to defend against challenging statements (i.e. team defeat)? Is circular reasoning allowed? And so on.

The main available defeasible reasoning tools (available and runnable as of November 2017) are ASPIC+ [27], DEFT [18], DeLP [14], DR-DEVICE [5], and Flora-2 [31]. Table 1 shows that no tool supports all features.

Table 1: Defeasible features supported by tools.

Tool       | Blocking | Propagating | Team Defeat | No Team Defeat
ASPIC+     |    -     |      X      |      -      |       X
DEFT       |    -     |      X      |      -      |       X
DeLP       |    -     |      X      |      -      |       X
DR-DEVICE  |    X     |      -      |      X      |       -
Flora-2    |    X     |      -      |      X      |       -

Existing literature has established the link between argumentation and defeasible reasoning via grounded semantics [13, 28] and Dialectical Trees [14]. Such approaches only allow for ambiguity propagation without team defeat [16, 26].

In this paper we propose a new logical formalism called Statement Graphs (SGs) that captures all the features shown in Table 1 via a flexible labelling function. SGs can be seen as a generalisation of Abstract Dialectical Frameworks (ADF) [8] that enriches the ADF acceptance condition.

After introducing SGs in Section 3, we show in Section 4 how the flexible labelling function of SGs can capture ambiguity blocking (Section 4.1), ambiguity propagating (Section 4.2), team defeat (Section 4.3) and circular reasoning (Section 4.4). In Section 5 we evaluate the practical applicability of SGs. We demonstrate (Section 5.2) certain features of human reasoning empirically established by psychologists. Furthermore, we provide a tool (Section 5.1) that implements SGs and, despite its higher expressivity than the tools in Table 1, performs better than or as well as them.

2 BACKGROUND NOTIONS

Language. We consider a propositional language of literals and the connectives (∧, →, ⇒, ⇝). A literal is either an atom (an atomic formula) or the complement of an atom. The complement of an atom f is denoted ¬f. ⊤ and ⊥ are considered atoms. Given Φ a finite non-empty conjunction of literals and a literal ψ, a rule r is a formula of the form Φ ⋈ ψ such that ⋈ ∈ {→, ⇒, ⇝}. We call Φ the body of r, denoted B(r), and ψ the head of r, denoted H(r). The set R of rules is composed of:

(1) R→, the set of strict rules (of the form Φ → ψ), expressing undeniable implications, i.e. if Φ then definitely ψ.
(2) R⇒, the set of defeasible rules (of the form Φ ⇒ ψ), expressing a weaker implication, i.e. if Φ then generally ψ.
(3) R⇝, the set of defeater rules (of the form Φ ⇝ ψ), used to prevent a complement of a conclusion, i.e. if Φ then ¬ψ should not be concluded. It does not imply that ψ is concluded.

Given a literal f, a rule of the form ⊤ → f or ⊤ ⇒ f is called a fact rule. A derivation for f is a sequence of strict and defeasible rules that starts from a fact rule and ends with a rule r ∈ R such that H(r) = f. A strict derivation contains only strict rules.

A defeasible knowledge base is a tuple KB = (F, R, ≻) where F is a set of fact rules, R is a set of strict, defeasible and defeater rules, and ≻ is a superiority relation on R. A superiority relation ≻ is acyclic (i.e. the transitive closure of ≻ is irreflexive). If r1 ≻ r2, then r1 is called superior to r2, and r2 inferior to r1. This expresses that r1 may override r2. A query (also called a claim) on a knowledge base KB is a conjunction of literals.
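To make the background notions above concrete, here is a minimal Python sketch (an illustration only, not code from the paper or its tool) of the Section 2 vocabulary: literal complements, strict/defeasible/defeater rules, fact rules, and a knowledge base carrying a superiority relation. All identifiers (Rule, KnowledgeBase, neg, ...) are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Rule kinds: strict (->), defeasible (=>), defeater (~>)
STRICT, DEFEASIBLE, DEFEATER = "->", "=>", "~>"
TOP = "TRUE"  # the atom 'top' used by fact rules


def neg(literal: str) -> str:
    """Complement of a literal: neg('f') == '~f' and neg('~f') == 'f'."""
    return literal[1:] if literal.startswith("~") else "~" + literal


@dataclass(frozen=True)
class Rule:
    name: str
    body: tuple   # finite non-empty conjunction of literals, B(r)
    head: str     # a single literal, H(r)
    kind: str     # STRICT, DEFEASIBLE or DEFEATER


def fact(name: str, literal: str, kind: str = DEFEASIBLE) -> Rule:
    """A fact rule: TOP -> f or TOP => f."""
    return Rule(name, (TOP,), literal, kind)


@dataclass
class KnowledgeBase:
    facts: list                                  # the fact rules F
    rules: list                                  # strict, defeasible and defeater rules R
    superior: set = field(default_factory=set)   # pairs (r1, r2) meaning r1 > r2 (acyclic)

    def is_superior(self, r1: str, r2: str) -> bool:
        return (r1, r2) in self.superior


# Encoding a small fragment of Example 2 below: 'Tweety' is a bird, and birds generally fly.
kb = KnowledgeBase(
    facts=[fact("f1", "bird")],
    rules=[Rule("rd2", ("bird",), "fly", DEFEASIBLE)],
)
```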
Defeasible reasoning. To reason defeasibly about a conclusion, all chains of rule applications (also called arguments) reaching that conclusion must be evaluated along with any conflict (i.e. attack) that arises from other chains of rules. This can be achieved using two kinds of approaches: (1) approaches based on the evaluation of arguments during their construction, such as defeasible logics [6, 25], and (2) approaches using the idea of extensions, where arguments are built and then evaluated at a later stage; these encapsulate argumentation-based techniques [13, 14].

An argument is a minimal sequence of rule applications from a fact rule to a rule with a literal in its head, called the conclusion of that argument. An argument arg attacks another argument arg' on a literal f if its conclusion is ¬f and f is one of the literals that appear in arg'. We say that arg defeats arg' if the rule in arg generating ¬f is not inferior to the rule in arg' generating f.

Ambiguity Handling. A literal f is ambiguous if there is an undefeated argument for f and another undefeated argument for ¬f and the superiority relation does not state which argument is superior, as shown in Example 1.

Example 1. The following defeasible knowledge base KB = (F, R, ≻) describes a situation where a piece of evidence ‘A’ suggests that a defendant is responsible while a piece of evidence ‘B’ indicates that he is not responsible; both pieces of evidence are equally reliable. A defendant is presumed not guilty unless responsibility has been proven. [...] the ambiguity of “responsible” is propagated to “guilty” because ¬guilty can be derived (arg4 is allowed to defeat arg1); hence, the answer to the query q is ‘false’ (i.e. KB ⊭prop guilty, where ⊨prop denotes entailment under ambiguity propagation).

Ambiguity propagation results in fewer conclusions (as more ambiguities are allowed) and may be preferable when the cost of an incorrect conclusion is high. Ambiguity blocking may be appropriate in situations where contested claims cannot be used to contest other claims (e.g. in the legal domain) [19].

Team Defeat. The absence of team defeat means that for an argument to be undefeated it has to single-handedly defeat all its attackers, as shown in Example 2.

Example 2. Generally, animals do not fly unless they are birds. Also, penguins do not fly, except magical ones. ‘Tweety’ is an animal, a bird, and a magical penguin. Can ‘Tweety’ fly? KB = (F, R, ≻):

• F = {⊤ ⇒ animal, ⊤ ⇒ bird, ⊤ ⇒ penguin, ⊤ ⇒ magical}.
• R = {rd1: animal ⇒ ¬fly, rd2: bird ⇒ fly, rd3: penguin ⇒ ¬fly, rd4: magical ∧ penguin ⇒ fly}.
• rd2 ≻ rd1, rd4 ≻ rd3.

The query is q = fly. In the absence of team defeat, the answer to q is ‘false’ (i.e. KB ⊭noTD fly, where ⊨noTD denotes entailment in defeasible reasoning without team defeat) because there is no single chain of reasoning for “fly” that can defend itself from all attacks: even if rd2 defends itself from rd1 (because rd2 ≻ rd1), it does not defend against rd3 (since rd2 ⊁ rd3), and the same applies to rd4: it defends against rd3 but fails against rd1 because rd4 ⊁ rd1. If team defeat is allowed, then the answer to q is ‘true’ (i.e. KB ⊨TD fly, where ⊨TD denotes entailment in defeasible reasoning with team defeat) because all attacks are defeated: rd1 is defeated by rd2 (rd2 ≻ rd1) and rd3 is defeated by rd4 (rd4 ≻ rd3). Argumentation-based techniques for defeasible reasoning do not allow for team defeat, whereas defeasible logics do [2, 26].
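To illustrate the two readings of Example 2, the following deliberately simplified Python check (a sketch, not the paper's Statement Graph algorithm; it works at the level of the four defeasible rules and ignores strict rules, defeaters and full argument structure) contrasts entailment with and without team defeat:

```python
# Rules supporting and attacking the query q = fly in Example 2.
supporters = {"rd2", "rd4"}          # bird => fly, magical & penguin => fly
attackers = {"rd1", "rd3"}           # animal => ~fly, penguin => ~fly
superior = {("rd2", "rd1"), ("rd4", "rd3")}   # rd2 > rd1, rd4 > rd3


def entailed_with_team_defeat(supporters, attackers, superior):
    # Every attacker must be beaten by *some* superior supporting rule (the "team").
    return all(any((s, a) in superior for s in supporters) for a in attackers)


def entailed_without_team_defeat(supporters, attackers, superior):
    # One single supporting rule must beat *all* attackers by itself.
    return any(all((s, a) in superior for a in attackers) for s in supporters)


print(entailed_with_team_defeat(supporters, attackers, superior))     # True
print(entailed_without_team_defeat(supporters, attackers, superior))  # False
```

Running it prints True then False, matching the example: KB ⊨TD fly but KB ⊭noTD fly.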
Recommended publications
  • Defeasibility from the Perspective of Informal Logic
    Ralph H. Johnson (Centre for Research in Reasoning, Argumentation and Rhetoric, University of Windsor, 401 Sunset Ave, Windsor, Ontario, Canada). Johnson, Ralph H., "Defeasibility from the perspective of informal logic" (2013). OSSA Conference Archive. 84. https://scholar.uwindsor.ca/ossaarchive/OSSA10/papersandcommentaries/84
    ABSTRACT: The notions of defeasibility and defeasible reasoning have generated a great deal of interest in various research communities. Here I want to focus on their use in logic and argumentation studies. I will approach these topics from the perspective of an informal logician who finds himself struggling with some issues that surround the idea of and the deployment of the concept of defeasibility. My intention is to make those struggles as clear as I can.
    KEYWORDS: deductive, defeasible, defeasibility, Pollock, undercutting defeater, rebutting defeater, Informal Logic Initiative
  • Bootstrapping, Defeasible Reasoning, and A Priori Justification
    Stewart Cohen (The University of Arizona), "Bootstrapping, Defeasible Reasoning, and A Priori Justification", Philosophical Perspectives, 24, Epistemology, 2010.
    What is the relationship between (rationally) justified perceptual belief and (rationally) justified belief in the reliability of perception? A familiar argument has it that this relationship makes perceptual justification impossible. For on the one hand, pre-theoretically it seems reasonable to suppose: (1) We cannot have justified perceptual beliefs without having a prior justified belief that perception is reliable. If we are not justified in believing perception is reliable, then how can we trust its deliverances? On the other hand, it seems reasonable to suppose: (2) We cannot be justified in believing perception is reliable without having prior justified perceptual beliefs. It is a contingent matter whether perception is reliable. So how else could we be justified in believing perception is reliable other than by empirical investigation? Combining these two suppositions yields the disastrous result that we need justified perceptual beliefs prior to having justified perceptual beliefs. Unless we are content with a skeptical result, these two suppositions need to be reconsidered. Some theorists have elected to retain (1) and deny (2), by arguing that we have a priori justification for believing perception is reliable. Others have elected to retain (2) and deny (1), allowing that we can have justified perceptual beliefs without having justification for believing perception is reliable. Denying (1) yields what I will call “basic justification” or “basic justified beliefs” — justified beliefs that are obtained without prior justification for believing that the processes that produce them are reliable.
  • From Logic to Rhetoric: a Contextualized Pedagogy for Fallacies
    Composition Forum 32, Fall 2015. Anne-Marie Womack, "From Logic to Rhetoric: A Contextualized Pedagogy for Fallacies".
    Abstract: This article reenvisions fallacies for composition classrooms by situating them within rhetorical practices. Fallacies are not formal errors in logic but rather persuasive failures in rhetoric. I argue fallacies are directly linked to successful rhetorical strategies and pose the visual organizer of the Venn diagram to demonstrate that claims can achieve both success and failure based on audience and context. For example, strong analogy overlaps false analogy and useful appeal to pathos overlaps manipulative emotional appeal. To advance this argument, I examine recent changes in fallacies theory, critique a-rhetorical textbook approaches, contextualize fallacies within the history and theory of rhetoric, and describe a methodology for rhetorically reclaiming these terms.
    Today, fallacy instruction in the teaching of written argument largely follows two paths: teachers elevate fallacies as almost mathematical formulas for errors or exclude them because they don’t fit into rhetorical curriculum. Both responses place fallacies outside the realm of rhetorical inquiry. Fallacies, though, are not as clear-cut as the current practice of spotting them might suggest. Instead, they rely on the rhetorical situation. Just as it is an argument to create a fallacy, it is an argument to name a fallacy. This article describes an approach in which students must justify naming claims as successful strategies and/or fallacies, a process that demands writing about contexts and audiences rather than simply linking terms to obviously weak statements.
  • Evidence Graphs: Supporting Transparent and FAIR Computation, with Defeasible Reasoning on Data, Methods and Results
    Sadnan Al Manir, Justin Niestroy, Maxwell Adam Levinson, and Timothy Clark (University of Virginia, School of Medicine and School of Data Science, Charlottesville VA USA). bioRxiv preprint doi: https://doi.org/10.1101/2021.03.29.437561; posted March 30, 2021, under a CC-BY 4.0 International license.
    Abstract. Introduction: Transparency of computation is a requirement for assessing the validity of computed results and research claims based upon them; and it is essential for access to, assessment, and reuse of computational components. These components may be subject to methodological or other challenges over time. While reference to archived software and/or data is increasingly common in publications, a single machine-interpretable, integrative representation of how results were derived, that supports defeasible reasoning, has been absent. Methods: We developed the Evidence Graph Ontology, EVI, in OWL 2, with a set of inference rules, to provide deep representations of supporting and challenging evidence for computations, services, software, data, and results, across arbitrarily deep networks of computations, in connected or fully distinct processes. EVI integrates FAIR practices on data and software, with important concepts from provenance models, and argumentation theory.
  • Discourse Relations and Defeasible Knowledge
    Alex Lascarides (Human Communication Research Centre, University of Edinburgh, 2 Buccleuch Place, Edinburgh, EH8 9LW, Scotland) and Nicholas Asher (Center for Cognitive Science, University of Texas, Austin, Texas 78712, USA).
    Abstract: This paper presents a formal account of the temporal interpretation of text. The distinct natural interpretations of texts with similar syntax are explained in terms of defeasible rules characterising causal laws and Gricean-style pragmatic maxims. Intuitively compelling patterns of defeasible entailment that are supported by the logic in which the theory is expressed are shown to underlie temporal interpretation.
    The Problem: The temporal interpretation of text involves an account of how the events described are related to each other. These relations follow from the discourse relations that are central to temporal import. Some of these are listed below, where clause α appears in the text before β:
    Narration(α, β): The event described in β is a consequence of (but not necessarily caused by) the event described in α.
    [...] of the event described in α, i.e. β's event is part of the preparatory phase of α's: (2) The council built the bridge. The architect drew up the plans.
    Explanation(α, β): For example the event described in clause β caused the event described in clause α: (3) Max fell. John pushed him.
    Background(α, β): For example the state described in β is the 'backdrop' or circumstances under which the event in α occurred (so the event and state temporally overlap): (4) Max opened the door. The room was pitch dark.
    Result(α, β): The event described in α caused the event or state described in β: (5) Max switched off the light.
  • A Tale of Two Interpretations
    Notes: 1 A Tale of Two Interpretations
    1. As Georges Dicker puts it, “Hume’s influence on contemporary epistemology and metaphysics is second to none ... ” (1998, ix). Note, too, that Hume’s impact extends beyond philosophy. For consider the following passage from Einstein’s letter to Moritz Schlick: Your representations that the theory of rel. [relativity] suggests itself in positivism, yet without requiring it, are also very right. In this also you saw correctly that this line of thought had a great influence on my efforts, and more specifically, E. Mach, and even more so Hume, whose Treatise of Human Nature I had studied avidly and with admiration shortly before discovering the theory of relativity. It is very possible that without these philosophical studies I would not have arrived at the solution (Einstein 1998, 161).
    2. For a brief overview of Hume’s connection to naturalized epistemology, see Morris (2008, 472–3).
    3. For the sake of convenience, I sometimes refer to the “traditional reading of Hume as a sceptic” as, e.g., “the sceptical reading of Hume” or simply “the sceptical reading”. Moreover, I often refer to those who read Hume as a sceptic as, e.g., “the sceptical interpreters of Hume” or “the sceptical interpreters”. By the same token, I sometimes refer to those who read Hume as a naturalist as, e.g., “the naturalist interpreters of Hume” or simply “the naturalist interpreters”. And the reading that the naturalist interpreters support I refer to as, e.g., “the naturalist reading” or “the naturalist interpretation”.
    4. This is not to say, though, that dissenting voices were entirely absent.
  • Explanations, Belief Revision and Defeasible Reasoning
    Marcelo A. Falappa, Gabriele Kern-Isberner, and Guillermo R. Simari. Explanations, belief revision and defeasible reasoning. Artificial Intelligence 141 (2002) 1–28. www.elsevier.com/locate/artint. Affiliations: Artificial Intelligence Research and Development Laboratory, Department of Computer Science and Engineering, Universidad Nacional del Sur, Av. Alem 1253, (B8000CPB) Bahía Blanca, Argentina; Department of Computer Science, LG Praktische Informatik VIII, FernUniversitaet Hagen, D-58084 Hagen, Germany. Received 15 August 1999; received in revised form 18 March 2002.
    Abstract: We present different constructions for nonprioritized belief revision, that is, belief changes in which the input sentences are not always accepted. First, we present the concept of explanation in a deductive way. Second, we define multiple revision operators with respect to sets of sentences (representing explanations), giving representation theorems. Finally, we relate the formulated operators with argumentative systems and default reasoning frameworks. © 2002 Elsevier Science B.V. All rights reserved.
    Keywords: Belief revision; Change theory; Knowledge representation; Explanations; Defeasible reasoning
    1. Introduction: Belief Revision systems are logical frameworks for modelling the dynamics of knowledge. That is, how we modify our beliefs when we receive new information. The main problem arises when that information is inconsistent with the beliefs that represent our epistemic state. For instance, suppose we believe that a Ferrari coupe is the fastest car and then we found out that some Porsche cars are faster than any Ferrari cars.
  • Reasoning Strategies for Diagnostic Probability Estimates in Causal Contexts: Preference for Defeasible Deduction Over Abduction?
    Jean-Louis Stilgenbauer (CHArt (P-A-R-I-S), Université Paris 8, EPHE, 4-14 rue Ferrus, 75014 Paris, France, http://paris-reasoning.eu/; Facultés Libres de Philosophie et de Psychologie (IPC), 70 avenue Denfert-Rochereau, 75014 Paris, France), Jean Baratgin (CHArt (P-A-R-I-S), Université Paris 8, EPHE; Institut Jean Nicod (IJN), École Normale Supérieure (ENS), 29 rue d'Ulm, 75005 Paris, France), and Igor Douven (Sciences, Normes, Décision (SND), CNRS/Université Paris-Sorbonne, 1 rue Victor Cousin, 75005 Paris, France).
    Abstract. Recently, Meder, Mayrhofer, and Waldmann [1,2] have proposed a model of causal diagnostic reasoning that predicts an interference of the predictive probability, Pr(Effect | Cause), in estimating the diagnostic probability, Pr(Cause | Effect), specifically, that the interference leads to an underestimation bias of the diagnostic probability. The objective of the experiment reported in the present paper was twofold. A first aim was to test the existence of the underestimation bias in individuals. Our results indicate the presence of an underestimation of the diagnostic probability that depends on the value of the predictive probability. Secondly, we investigated whether this bias was related to the type of estimation strategy followed by participants. We tested two main strategies: abductive inference and defeasible deduction. Our results reveal that the underestimation of diagnostic probability is more pronounced under abductive inference than under defeasible deduction. Our data also suggest that defeasible deduction is for individuals the most natural reasoning strategy to estimate Pr(Cause | Effect).
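To make the two probabilities contrasted in this abstract concrete, here is a small worked example (illustrative numbers only, not data from the study) computing the normative diagnostic probability Pr(Cause | Effect) from the predictive probability Pr(Effect | Cause) via Bayes' rule:

```python
# Illustrative numbers only: prior of the cause, predictive probability,
# and the probability of the effect when the cause is absent.
p_cause = 0.2                      # Pr(Cause)
p_effect_given_cause = 0.8         # predictive probability Pr(Effect | Cause)
p_effect_given_no_cause = 0.1      # false-alarm rate Pr(Effect | not Cause)

# Bayes' rule: Pr(Cause | Effect) = Pr(Effect | Cause) * Pr(Cause) / Pr(Effect)
p_effect = p_effect_given_cause * p_cause + p_effect_given_no_cause * (1 - p_cause)
p_cause_given_effect = p_effect_given_cause * p_cause / p_effect

print(round(p_cause_given_effect, 3))   # 0.667 -- the diagnostic probability
```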
  • The Reasoning View and Defeasible Practical Reasoning
    Samuel Asarnow (Macalester College). Forthcoming in Philosophy and Phenomenological Research.
    ABSTRACT: According to the Reasoning View about normative reasons, facts about normative reasons for action can be understood in terms of facts about the norms of practical reasoning. I argue that this view is subject to an overlooked class of counterexamples, familiar from debates about Subjectivist theories of normative reasons. Strikingly, the standard strategy Subjectivists have used to respond to this problem cannot be adapted to the Reasoning View. I think there is a solution to this problem, however. I argue that the norms of practical reasoning, like the norms of theoretical reasoning, are characteristically defeasible, in a sense I make precise. Recognizing this property of those norms makes space for a solution to the problem. The resulting view is in a way analogous to familiar defeasibility theories of knowledge, but it avoids a standard objection to that theory.
    Introduction: It is a commonplace to observe that there is a connection between the idea of a normative reason and the idea of good reasoning. As Paul Grice put it, “reasons […] are the stuff of which reasoning is made.” Using a consideration as a premise in a piece of reasoning—making an inference on the basis of it—is a paradigmatic way of treating that consideration as a reason. And, very often, when we engage in good reasoning we are responding to the considerations that really are the normative
  • Defeasible Inference in Natural Language
    Thinking Like a Skeptic: Defeasible Inference in Natural Language. Rachel Rudinger, Vered Shwartz, Jena D. Hwang, Chandra Bhagavatula, Maxwell Forbes, Ronan Le Bras, Noah A. Smith, Yejin Choi (Allen Institute for AI; Paul G. Allen School of Computer Science & Engineering, University of Washington; University of Maryland, College Park, MD).
    Abstract: Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin). Though long recognized in classical AI and philosophy, defeasible inference has not been extensively studied in the context of contemporary data-driven research on natural language inference and commonsense reasoning. We introduce Defeasible NLI (abbreviated δ-NLI), a dataset for defeasible inference in natural language. δ-NLI contains extensions to three existing inference datasets covering diverse modes of reasoning: common sense, natural language inference, and social norms.
    Figure 1 (caption): Examples from the δ-SNLI portion of the δ-NLI dataset. A neutral premise-hypothesis pair from SNLI (premise: "Two men and a dog are standing among rolling green hills."; hypothesis: "The men are farmers.") is augmented with three update sentences that weaken the hypothesis ("They are wearing backpacks.", "One man is using his binoculars.", "The men are studying a tour map.") and three update sentences that strengthen it ("The men are facing their granary.", "The men are holding pitchforks.", "The dog is a sheep dog.").
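As a rough sketch of what one δ-SNLI record from the Figure 1 caption looks like, the snippet below arranges the caption's sentences into a premise-hypothesis pair with weakening and strengthening updates (the field names are assumptions for illustration, not necessarily the released dataset's schema):

```python
# Hypothetical field names; the actual δ-NLI release may structure records differently.
delta_snli_example = {
    "premise": "Two men and a dog are standing among rolling green hills.",
    "hypothesis": "The men are farmers.",
    "weakeners": [                      # updates that make the hypothesis less likely
        "They are wearing backpacks.",
        "One man is using his binoculars.",
        "The men are studying a tour map.",
    ],
    "strengtheners": [                  # updates that make the hypothesis more likely
        "The men are facing their granary.",
        "The men are holding pitchforks.",
        "The dog is a sheep dog.",
    ],
}
```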
  • An Appreciation of John Pollock's Work on the Computational Study of Argument
    An appreciation of John Pollock's work on the computational study of argument. Argument and Computation, Vol. 3, No. 1, March 2012, 1–19. Henry Prakken (Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands & Faculty of Law, University of Groningen, Groningen, The Netherlands) and John Horty (Philosophy Department, Institute for Advanced Computer Studies and Computer Science Department, University of Maryland, College Park, MD, USA). (Received 12 January 2012; final version received 19 January 2012)
    John Pollock (1940–2009) was an influential American philosopher who made important contributions to various fields, including epistemology and cognitive science. In the last 25 years of his life, he also contributed to the computational study of defeasible reasoning and practical cognition in artificial intelligence. He developed one of the first formal systems for argumentation-based inference and he put many issues on the research agenda that are still relevant for the argumentation community today. This paper presents an appreciation of Pollock's work on defeasible reasoning and its relevance for the computational study of argument. In our opinion, Pollock deserves to be remembered as one of the founding fathers of the field of computational argument, while, moreover, his work contains important lessons for current research in this field, reminding us of the richness of its object of study.
    Keywords: formal models of argumentation; non-monotonic accounts of argument; defeasible reasoning
    1. Introduction: John Pollock (1940–2009) was an influential American philosopher who made important contributions to various fields, including epistemology and cognitive science. In the last 25 years of his life, he also contributed to artificial intelligence (AI), first to the study of defeasible reasoning and then to the study of decision-theoretic planning and practical cognition.
  • Critical Thinking and the Methodology of Science
    Peter Ellerton (University of Queensland Critical Thinking Project, The University of Queensland, Australia). Presentation to the Australasian Science Education Research Association, Brisbane, Australia, June 2021.
    Abstract: Scientific literacy concerns understanding the methodology of science at least as much as understanding scientific knowledge. More than this, it also requires an understanding of why the methodology of science delivers (or fails to deliver) epistemic credibility. To justify scientific claims as credible, it is important to understand how the nature of our reasoning is embodied in scientific methodology and how the limits of our reasoning are therefore the limits of our inquiry. This paper makes explicit how aspects of critical thinking, including argumentation and reasoning, underpin the methodology of science in the hope of making the development of scientific literacy in students more actionable.
    Critical thinking is not necessarily owned, or even best propagated, by any particular discipline. It does, however, find an academic home in philosophy. The reason for this is quite straightforward: questions of what makes for a good reason, why the answer to those questions should compel us to accept such reasons and even the nature of rationality itself are philosophical questions. In other words, philosophy provides a normative framework for understanding critical thinking. This normative framework is a rigorous one, involving a range of skills, values and virtues. Core technical skills include those of argument construction and evaluation (and within these the use of analogy and generalisation), collaborative reasoning skills and communication skills including effectively linking good thinking and writing.