Measuring the Counter/Assumption Model's Effect on Argumentation Quality

Total Pages: 16

File Type: PDF, Size: 1020 KB

Measuring the Counter/Assumption Model's Effect on Argumentation Quality

A Thesis Presented to the Faculty of California Polytechnic State University, San Luis Obispo, in Partial Fulfillment of the Requirements for the Degree Master of Science in Computer Science, by Evan Ovadia, December 2013. © 2013 Evan Ovadia. All rights reserved.

Committee Membership

TITLE: Measuring the Counter/Assumption Model's Effect on Argumentation Quality
AUTHOR: Evan Ovadia
DATE SUBMITTED: December 2013
COMMITTEE CHAIR: Franz Kurfess, Ph.D., Professor of Computer Science
COMMITTEE MEMBER: Aaron Keen, Ph.D., Professor of Computer Science
COMMITTEE MEMBER: John Patrick, Ed.D., Professor of Communication Studies

Abstract

This thesis presents a new platform called See the Reason, built upon a tree-structured argumentation model called the Counter/Assumption model. In the Counter/Assumption model, a topic is posted first; under that topic go reasons for and against; under each reason, counterarguments; and under any counterargument, further counterarguments. The model makes it possible to systematically determine whether a claim is "tentatively true" or "tentatively false," in an effort to motivate people to make their own side's claims tentatively true and the opposing side's claims tentatively false, thus encouraging conflict. Research suggests that debates with more conflict are better, so this thesis investigates whether the Counter/Assumption model encourages better debates.

In this thesis, we have students debate on See the Reason and on the closest existing platform, CreateDebate. We measure the number of uncaught bad arguments, user satisfaction, and how far into the Interaction Analysis Model (IAM) the debates progress. We find promising evidence that See the Reason progresses further into the IAM and encourages more logical debates, but sacrifices usability at the same time.
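The tree structure and status judgment described in the abstract can be sketched concretely. The following Python sketch is illustrative only and is not taken from the thesis: the Claim class, its counters field, and the status rule used here (a claim is "tentatively false" if at least one counter against it survives, and "tentatively true" otherwise) are assumptions that approximate the model; the thesis's full rules, including how assumptions and subjective claims are handled, are defined in Chapter 3.

```python
# Minimal sketch of a Counter/Assumption-style claim tree (illustrative only).
# The status rule below is an assumed approximation: a claim stands unless
# some counter against it is itself unrebutted.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    text: str
    counters: List["Claim"] = field(default_factory=list)

    def status(self) -> str:
        # A claim is defeated if any counter against it is itself
        # "tentatively true" (i.e., that counter has no surviving rebuttal).
        for counter in self.counters:
            if counter.status() == "tentatively true":
                return "tentatively false"
        return "tentatively true"


# Example: a reason for a topic, one counter against it, and a rebuttal of that counter.
reason = Claim("Remote work raises productivity")
counter = Claim("The productivity studies rely on self-reporting")
rebuttal = Claim("The cited studies use objective output metrics")
counter.counters.append(rebuttal)
reason.counters.append(counter)

print(reason.status())   # tentatively true: its only counter is itself countered
print(counter.status())  # tentatively false: the rebuttal stands unrebutted
```

Under this assumed rule, posting a surviving counter flips an opposing claim to "tentatively false," which mirrors the incentive the abstract describes: each side works to leave the other side's claims countered.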
Table of Contents

List of Tables
List of Figures
1 Introduction
  1.1 Indices of Claims
  1.2 The Counter/Assumption Model
  1.3 Question
2 Previous Work
  2.1 Analyzing Discussion
  2.2 Threaded Discussion
    2.2.1 Drawbacks
  2.3 Staged Threaded Discussion
  2.4 Scaffolded Threaded Discussion
    2.4.1 Scaffolding Reduces Overall Discussion
    2.4.2 Remaining Weakness in Threaded Discussion
  2.5 Wikis
  2.6 Trees
  2.7 Scaffolded Trees
  2.8 Graphs
    2.8.1 Complexity
  2.9 Conflict
3 The Counter/Assumption Model
  3.1 Objective Claims, Counters, Claim Status
  3.2 Assumptions
  3.3 Subjective Claims
  3.4 In Toulmin's Terms
  3.5 Potential Benefits and Drawbacks
4 See the Reason Features
  4.1 Changes to the Counter/Assumption Model
  4.2 Guide Mode
    4.2.1 Sidebar
    4.2.2 Helpables
    4.2.3 Participate Hints
    4.2.4 Address Counter Wizard
  4.3 Ordering
  4.4 Citations and Formatting
  4.5 Subscriptions
  4.6 Next Unread Link
  4.7 View Modes
  4.8 Compact Mode
  4.9 Collapsing Subtrees
  4.10 Real-time Updating
5 Experiment
  5.1 Choice to Compare to CreateDebate
  5.2 Experiment Questions and Hypotheses
    5.2.1 Does See the Reason Motivate Users to Participate More than CreateDebate?
    5.2.2 Are There Fewer Uncaught "Bad Arguments" in See the Reason?
    5.2.3 Does See the Reason Progress Further into the IAM than CreateDebate?
    5.2.4 Is See the Reason as Easy to Use as CreateDebate?
  5.3 Method
  5.4 Weaknesses
    5.4.1 Scale of Experiment
    5.4.2 Judges
    5.4.3 Difference from Online Behaviors
    5.4.4 Difference in Platforms Besides Underlying Models
    5.4.5 Bias towards See the Reason
  5.5 Results
    5.5.1 Summary of Results
  5.6 Discussion
    5.6.1 Does See the Reason Motivate Users to Participate More than CreateDebate?
    5.6.2 Are There Fewer Uncaught "Bad Arguments" in See the Reason?
    5.6.3 Does See the Reason Progress Further into the IAM than CreateDebate?
    5.6.4 Is See the Reason as Easy to Use as CreateDebate?
6 Conclusion
  6.1 Future Work
    6.1.1 Future Experiments
    6.1.2 Comparison with Calculemus and TruthMapping
Bibliography
Appendix A Other Systems
Appendix B Helpables' Contents
  B.1 Certainty
  B.2 "post a reason for" link
  B.3 "post a reason against" link
  B.4 Upvote
  B.5 Objective Claim
  B.6 Subjective Claim
  B.7 Assumption
  B.8 Claim Judgments
    B.8.1 "tentatively true" icon
    B.8.2 "tentatively false" icon
    B.8.3 "no judgment" icon
  B.9 New Claim Form
    B.9.1 assumption
    B.9.2 "personal opinion"
    B.9.3 "based on unknown fact"
    B.9.4 "not true beyond a reasonable doubt"
    B.9.5 "not a claim"
    B.9.6 "factually incorrect"
    B.9.7 "bad logic / fallacy"
    B.9.8 "not a reason for the root"
    B.9.9 "not a reason against the root"
    B.9.10 "doesn't counter parent"
    B.9.11 "contradicts assumption"
    B.9.12 "defends assumption"
    B.9.13 "citation needed"
    B.9.14 "bad citation"
    B.9.15 "cherrypicking"
    B.9.16 "flying dutchman"
    B.9.17 "other"
  B.10 Claim Menu
    B.10.1 "hide" link
    B.10.2 "subscribe" link
    B.10.3 "unsubscribe" link
    B.10.4 permalink
    B.10.5 "forward" link
    B.10.6 "flag inappropriate" link
    B.10.7 "flag duplicate" link
Appendix C Sidebar
Appendix D Certainty Measure
  D.0.8 Defining Certainty Recursively
  D.0.9 Avoiding Reddit Hivemind Syndrome
Appendix E Common Questions
  E.1 How do you know if something is true beyond a reasonable doubt?
  E.2 How will you enforce that edits do not change the core meaning of claims?
Appendix F Evolution of a Tree
Appendix G The Bias Measure
Appendix H Reddit Hivemind Syndrome
Appendix I Making See the Reason
  I.1 Assumptions
  I.2 Standard of Truth
  I.3 Supports for Objective Claims
  I.4 Editing
  I.5 Experiment A: the 508 Experiment
  I.6 Choosing Languages and Systems
    I.6.1 Database
    I.6.2 PHP vs Java
    I.6.3 Model
    I.6.4 Server
    I.6.5 Communication
    I.6.6 Front-end
  I.7 Users were Lost and Confused
  I.8 Single Branch vs Multi Branch
  I.9 Choosing a New Name
  I.10 Ordering Algorithm
  I.11 Experiment B
    I.11.1 The Site was Confusing
    I.11.2 Single-Branch Tree View was Hard to Navigate
    I.11.3 Users Liked the For and Against Columns
    I.11.4 Users Liked the Helpables
    I.11.5 Users Didn't Like Having to Expand to See Claims
    I.11.6 The Page was too Compact
    I.11.7 Users Wanted to Support Objective Claims
    I.11.8 Users Wanted to Edit and Delete Claims
    I.11.9 Users were Nervous of Posting Duplicates
    I.11.10 Topics were too Broad
    I.11.11 Explanations were Verbose
    I.11.12 Users Didn't Understand "Countered"
    I.11.13 Users Didn't Understand Assumptions
    I.11.14 The COW was Confusing
    I.11.15 Users Didn't Understand Objective vs Subjective
    I.11.16 Other Good Suggestions