Mind the GAP: a Balanced Corpus of Gendered Ambiguous Pronouns

Total Pages: 16

File Type: PDF, Size: 1020 KB

Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
Kellie Webster, Marta Recasens, Vera Axelrod and Jason Baldridge
Google AI Language, {websterk|recasens|vaxelrod|[email protected]
arXiv:1810.05201v1 [cs.CL] 11 Oct 2018

Abstract

Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a long-standing challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in sufficient volume or diversity to accurately indicate the practical utility of models. Furthermore, we find gender bias in existing corpora and systems favoring masculine entities. To address this, we present and release GAP, a gender-balanced labeled corpus of 8,908 ambiguous pronoun-name pairs sampled to provide diverse coverage of challenges posed by real-world text. We explore a range of baselines which demonstrate the complexity of the challenge, the best achieving just 66.9% F1. We show that syntactic structure and continuous neural models provide promising, complementary cues for approaching the challenge.

1 Introduction

Coreference resolution involves linking referring expressions that evoke the same discourse entity, as defined in shared tasks such as CoNLL 2011/12 (Pradhan et al., 2012) and MUC (Grishman and Sundheim, 1996). Unfortunately, high scores on these tasks do not necessarily translate into acceptable performance for downstream applications such as machine translation (Guillou, 2012) and fact extraction (Nakayama, 2008). In particular, high-scoring systems successfully identify coreference relationships between string-matching proper names, but fare worse on anaphoric mentions such as pronouns and common noun phrases (Stoyanov et al., 2009; Rahman and Ng, 2012; Durrett and Klein, 2013).

We consider the problem of resolving gendered ambiguous pronouns in English, such as she in:

(1) In May, Fujisawa joined Mari Motohashi's rink as the team's skip, moving back from Karuizawa to Kitami where she had spent her junior days.

(The examples throughout the paper highlight the ambiguous pronoun in bold, the two potential coreferent names in italics, and the correct one also underlined.)

With this scope, we make three key contributions:
  • We design an extensible, language-independent mechanism for extracting challenging ambiguous pronouns from text.
  • We build and release GAP, a human-labeled corpus of 8,908 ambiguous pronoun-name pairs derived from Wikipedia (http://goo.gl/language/gap-coreference). This dataset targets the challenges of resolving naturally-occurring ambiguous pronouns and rewards systems which are gender-fair.
  • We run four state-of-the-art coreference resolvers and several competitive simple baselines on GAP to understand limitations in current modeling, including gender bias. We find that syntactic structure and Transformer models (Vaswani et al., 2017) provide promising, complementary cues for approaching GAP.

Coreference resolution decisions can drastically alter how automatic systems process text. Biases in automatic systems have caused a wide range of underrepresented groups to be served in an inequitable way by downstream applications (Hardt, 2014). We take the construction of the new GAP corpus as an opportunity to reduce gender bias in coreference datasets; in this way, GAP can promote equitable modeling of reference phenomena complementary to the recent work of Zhao et al. (2018) and Rudinger et al. (2018). Such approaches promise to improve equity of downstream models, such as triple extraction for knowledge base population.
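The best baseline in the abstract reaches 66.9% F1 overall, and the contributions stress that GAP is meant to reward gender-fair systems, which implies reporting performance separately for feminine and masculine pronouns. The Python sketch below is a minimal illustration of such a gender-split evaluation, not the official GAP scorer: the Example fields, the toy data, and the use of an F/M ratio as a bias summary are assumptions made for illustration only.

    # Minimal sketch of a gender-split evaluation in the spirit of GAP
    # (not the official scorer). Each example is one pronoun-name pair:
    # the gold label says whether the pronoun corefers with the name,
    # and the prediction is a system's decision for the same pair.
    from dataclasses import dataclass

    FEMININE = {"she", "her", "hers"}
    MASCULINE = {"he", "him", "his"}

    @dataclass
    class Example:
        pronoun: str      # the ambiguous pronoun, e.g. "she"
        gold_coref: bool  # does the pronoun corefer with the candidate name?
        pred_coref: bool  # system decision for the same pair

    def f1(examples):
        tp = sum(e.gold_coref and e.pred_coref for e in examples)
        fp = sum((not e.gold_coref) and e.pred_coref for e in examples)
        fn = sum(e.gold_coref and (not e.pred_coref) for e in examples)
        if tp == 0:
            return 0.0
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    def gendered_report(examples):
        fem = [e for e in examples if e.pronoun.lower() in FEMININE]
        masc = [e for e in examples if e.pronoun.lower() in MASCULINE]
        f_f1, m_f1 = f1(fem), f1(masc)
        ratio = f_f1 / m_f1 if m_f1 else float("nan")
        return {"F": f_f1, "M": m_f1, "F/M": ratio, "Overall": f1(examples)}

    if __name__ == "__main__":
        toy = [  # hypothetical predictions on four pronoun-name pairs
            Example("she", True, True),
            Example("she", False, True),
            Example("he", True, True),
            Example("he", True, False),
        ]
        print(gendered_report(toy))

A system that scores markedly lower on the feminine split than on the masculine one would show exactly the kind of bias the corpus is designed to expose.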
2 Background

Existing datasets do not capture ambiguous pronouns in sufficient volume or diversity to benchmark systems for practical applications.

2.1 Datasets with Ambiguous Pronouns

Winograd schemas (Levesque et al., 2012) are closely related to our work as they contain ambiguous pronouns. They are pairs of short texts with an ambiguous pronoun and a special word (in square brackets) that switches its referent:

(2) The trophy would not fit in the brown suitcase because it was too [big/small].

The Definite Pronoun Resolution Dataset (Rahman and Ng, 2012) comprises 943 Winograd schemas written by undergraduate students and later extended by Peng et al. (2015). The First Winograd Schema Challenge (Morgenstern et al., 2016) released 60 examples adapted from published literary works (Pronoun Disambiguation Problem; https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/PDPChallenge2016.xml) and 285 manually constructed schemas (Winograd Schema Challenge; https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WSCollection.xml). More recently, Rudinger et al. (2018) and Zhao et al. (2018) have created two Winograd schema-style datasets containing 720 and 3160 sentences, respectively, where each sentence contains a gendered pronoun and two occupation (or participant) antecedent candidates that break occupational gender stereotypes. Overall, ambiguous pronoun datasets have been limited in size and, most notably, consist only of manually constructed examples which do not necessarily reflect the challenges faced by systems in the wild.

In contrast, the largest and most widely-used coreference corpus, OntoNotes (Pradhan et al., 2007), is general purpose. In OntoNotes, simpler high-frequency coreference examples (e.g. those captured by string matching) greatly outnumber examples of ambiguous pronouns, which obscures performance results on that key class (Stoyanov et al., 2009; Rahman and Ng, 2012). Ambiguous pronouns greatly impact main entity resolution in Wikipedia, the focus of Ghaddar and Langlais (2016a), who use WikiCoref, a corpus of 30 full articles annotated with coreference (Ghaddar and Langlais, 2016b). GAP examples are not strictly Winograd schemas because they have no reference-flipping word. Nonetheless, they contain two person named entities of the same gender and an ambiguous pronoun that may refer to either (or neither). As such, they represent a similarly difficult challenge and require the same inferential capabilities. More importantly, GAP is larger than existing Winograd schema datasets and the examples are from naturally occurring Wikipedia text. GAP complements OntoNotes by providing an extensive targeted dataset of naturally occurring ambiguous pronouns.

2.2 Modeling Ambiguous Pronouns

State-of-the-art coreference systems struggle to resolve ambiguous pronouns that require world knowledge and commonsense reasoning (Durrett and Klein, 2013). Past efforts have tried to mine semantic preferences and inferential knowledge via predicate-argument statistics mined from corpora (Dagan and Itai, 1990; Yang et al., 2005), semantic roles (Kehler et al., 2004; Ponzetto and Strube, 2006), contextual compatibility features (Liao and Grishman, 2010; Bansal and Klein, 2012), and event role sequences (Bean and Riloff, 2004; Chambers and Jurafsky, 2008). These usually bring small improvements in general coreference datasets and larger improvements in targeted Winograd datasets.

Rahman and Ng (2012) scored 73.05% precision on their Winograd dataset after incorporating targeted features such as narrative chains, Web-based counts, and selectional preferences.
Peng et al.'s (2015) system improved the state of the art to 76.41% by acquiring ⟨subject, verb, object⟩ and ⟨subject/object, verb, verb⟩ knowledge triples. In the First Winograd Schema Challenge (Morgenstern et al., 2016), participants used methods ranging from logical axioms and inference to neural network architectures enhanced with commonsense knowledge (Liu et al., 2017), but no system qualified for the second round. Recently, Trinh and Le (2018) have achieved the best results on the Pronoun Disambiguation Problem and Winograd Schema Challenge datasets, achieving 70% and 63.7%, respectively, which are 3% and 11% above Liu et al.'s (2017) previous state of the art. Their model is an ensemble of word-level and character-level recurrent language models, which, despite not being trained on coreference data, encode commonsense as part of the more general language modeling task. It is unclear how these systems perform on naturally-occurring ambiguous pronouns. For example, Trinh and Le's (2018) system relies on choosing a candidate from a pre-specified list, and it would need to be extended to handle the case that the pronoun does not corefer with any given candidate. By releasing GAP, we aim to foster research in this direction, and set several competitive baselines without using targeted [...]

[...] given that learned NLP systems often reflect and even amplify training biases (Bolukbasi et al., 2016; Caliskan et al., 2017; Zhao et al., 2017). A growing body of work defines notions of fairness, bias, and equality in data and machine-learned systems (Pedreshi et al., 2008; Hardt et al., 2016; Zafar et al., 2017; Skirpan and Gorelick, 2017), and debiasing strategies include expanding and rebalancing data (Torralba and Efros, 2011; Ryu et al., 2017; Shankar et al., 2017; Buda, 2017), and balancing performance [...]

Table 1: Extraction patterns and example contexts for each.
  Type        Pattern                  Example
  FINALPRO    (Name, Name, Pronoun)    Preckwinkle criticizes Berrios' nepotism: [...] County's ethics rules don't apply to him.
  MEDIALPRO   (Name, Pronoun, Name)    McFerran's horse farm was named Glen View. After his death in 1885, John E. Green acquired the farm.
  INITIALPRO  (Pronoun, Name, Name)    Judging that he is suitable to join the team, Butcher injects Hughie with a specially formulated mix.
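Table 1's extraction patterns are simple enough to sketch. The Python snippet below is a hypothetical illustration rather than the authors' extraction pipeline (which also applies gender-agreement checks, Wikipedia-specific filtering, and sampling): it assumes the text has already been reduced to an ordered list of mentions labeled NAME or PRON, all of the same gender, and reports every window of three consecutive mentions that instantiates FINALPRO, MEDIALPRO, or INITIALPRO.

    # Hypothetical matcher for the three GAP extraction patterns in Table 1.
    # Input: mentions as (label, text) pairs in document order, where label
    # is "NAME" for a person name and "PRON" for a gendered pronoun.
    PATTERNS = {
        ("NAME", "NAME", "PRON"): "FINALPRO",
        ("NAME", "PRON", "NAME"): "MEDIALPRO",
        ("PRON", "NAME", "NAME"): "INITIALPRO",
    }

    def extract_candidates(mentions):
        """Yield (pattern_type, window) for every run of three consecutive
        mentions that matches one of the patterns above."""
        for i in range(len(mentions) - 2):
            window = mentions[i:i + 3]
            labels = tuple(label for label, _ in window)
            if labels in PATTERNS:
                yield PATTERNS[labels], window

    if __name__ == "__main__":
        # Mentions from the MEDIALPRO example in Table 1, in document order.
        mentions = [
            ("NAME", "McFerran"),
            ("PRON", "his"),
            ("NAME", "John E. Green"),
        ]
        for pattern, window in extract_candidates(mentions):
            print(pattern, [text for _, text in window])
        # prints: MEDIALPRO ['McFerran', 'his', 'John E. Green']

In the released corpus, contexts extracted along these lines were then filtered and human-labeled, so a matcher like this would only be the first stage of the pipeline.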
Recommended publications
  • Questions Under Discussion: from Sentence to Discourse∗
    Questions under Discussion: From sentence to discourse∗ Anton Benz Katja Jasinskaja July 14, 2016 Abstract Questions under discussion (QUD) is an analytic tool that has recently become more and more popular among linguists and language philosophers as a way to characterize how a sentence fits in its context. The idea is that each sentence in discourse addresses an (often implicit) QUD either by answering it, or by bringing up another question that can help answering that QUD. The linguistic form and the interpretation of a sentence, in turn, may depend on the QUD it addresses. The first proponents of the QUD approach (von Stutterheim & Klein, 1989; van Kuppevelt, 1995) thought of it as a general approach to the analysis of discourse structure where structural relations between sentences in a coherent discourse are understood in terms of relations between questions they address. However, the idea did not find broad application in discourse structure analysis, where theories aiming at logical discourse representations are predominantly based on the notion of coherence relations (Mann & Thompson, 1988; Hobbs, 1985; Asher & Lascarides, 2003). The problem of finding overt markers for QUDs means that they are also difficult to utilize for quantitative measures of discourse coherence (McNamara et al., 2010). At the same time QUD proved useful in the analysis of a wide range of linguistic phenomena including accentuation, discourse particles, presuppositions and implicatures. The goal of this special issue is to bring together cutting-edge research that demonstrates the success of QUD in linguistics and the need for the development of a comprehensive model of discourse coherence that incorporates the notion of QUD as its integral part.
  • Coreference Resolution for Spoken French Loïc Grobol
    Coreference resolution for spoken French. Loïc Grobol. PhD thesis, Computation and Language [cs.CL], Université Sorbonne Nouvelle - Paris 3, 2020. English. HAL Id: tel-02928209, version 2, https://hal.archives-ouvertes.fr/tel-02928209v2, submitted on 9 Mar 2021. Distributed under a Creative Commons Attribution 4.0 International License. École Doctorale 622 — Sciences du langage; Lattice, Inria. Doctoral thesis in language sciences of the Université Sorbonne Nouvelle, presented and publicly defended by Loïc Grobol on 15 July 2020, supervised by Isabelle Tellier† and Frédéric Landragin, and co-supervised by Éric Villemonte de la Clergerie and Marco Dinarelli. Jury: Massimo Poesio, Professor (Queen Mary University), reviewer; Sophie Rosset, Research Director (LIMSI), reviewer; Béatrice Daille, Professor (Université de Nantes), examiner; Pascal Amsili, Professor (Université Sorbonne Nouvelle), examiner; Frédéric Landragin, Research Director (Lattice), supervisor; Éric Villemonte de la Clergerie, Researcher (Inria), co-supervisor; Marco Dinarelli, Researcher (LIG), co-supervisor. French title: Reconnaissance automatique de chaînes de coréférences en français parlé. Abstract: A coreference chain is the set of linguistic expressions — or mentions — that refer to the same entity or the same object of the discourse.
  • Constraints on Donkey Pronouns
    Constraints on Donkey Pronouns. The MIT Faculty has made this article openly available. Citation: Grosz, P. G., P. Patel-Grosz, E. Fedorenko, and E. Gibson. "Constraints on Donkey Pronouns." Journal of Semantics 32, no. 4 (July 15, 2014): 619–648. As published: http://dx.doi.org/10.1093/jos/ffu009. Publisher: Oxford University Press. Version: author's final manuscript. Citable link: http://hdl.handle.net/1721.1/102962. Terms of use: Creative Commons Attribution-Noncommercial-Share Alike, http://creativecommons.org/licenses/by-nc-sa/4.0/. CONSTRAINTS ON DONKEY PRONOUNS. Patrick Grosz, Pritty Patel-Grosz, Evelina Fedorenko, Edward Gibson. Abstract: This paper reports on an experimental study of donkey pronouns, pronouns (e.g. it) whose meaning covaries with that of a non-pronominal noun phrase (e.g. a donkey) even though they are not in a structural relationship that is suitable for quantifier-variable binding. We investigate three constraints, (i) the preference for the presence of an overt NP antecedent that is not part of another word, (ii) the salience of the position of an antecedent that is part of another word, and (iii) the uniqueness of an intended antecedent (in terms of world knowledge). We compare constructions in which intended antecedents occur in a context such as who owns an N / who is an N-owner with constructions of the type who was without an N / who was N-less. Our findings corroborate the existence of the overt NP antecedent constraint, and also show that the salience of an unsuitable antecedent's position matters.
  • Anaphora and Coreference Resolution: A Review (arXiv:1805.11824v1 [cs.CL] 30 May 2018)
    Artificial Intelligence Review manuscript No. (will be inserted by the editor) Anaphora and Coreference Resolution: A Review Rhea Sukthanker · Soujanya Poria · Erik Cambria · Ramkumar Thirunavukarasu Received: date / Accepted: date Abstract Entity resolution aims at resolving repeated references to an entity in a document and forms a core component of natural language processing (NLP) research. This field possesses immense potential to improve the performance of other NLP fields like machine translation, sentiment analysis, paraphrase detection, summarization, etc. The area of entity resolution in NLP has seen proliferation of research in two separate sub-areas namely: anaphora resolution and coreference resolution. Through this review article, we aim at clarifying the scope of these two tasks in entity resolution. We also carry out a detailed analysis of the datasets, evaluation metrics and research methods that have been adopted to tackle this NLP problem. This survey is motivated with the aim of providing the reader with a clear understanding of what constitutes this NLP problem and the issues that require attention. Keywords Entity Resolution · Coreference Resolution · Anaphora Resolution · Natural Language Processing · Sentiment Analysis · Deep Learning 1 Introduction A discourse is a collocated group of sentences which convey a clear understanding only when read together. The etymology of anaphora is ana (Greek for back) and pheri (Greek for to bear), which in simple terms means repetition. In computational linguistics, anaphora is typically defined as references to items mentioned earlier in the discourse or "pointing back" reference as described by (Mitkov, 1999). The most prevalent type of anaphora in natural language is the pronominal anaphora (Lappin and Leass, 1994).
  • Event Coreference Resolution: a Survey of Two Decades of Research
    Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). Event Coreference Resolution: A Survey of Two Decades of Research. Jing Lu and Vincent Ng, Human Language Technology Research Institute, University of Texas at Dallas, Richardson, TX 75083-0688, {ljwinnie,[email protected]. Abstract: Recent years have seen a gradual shift of focus from entity-based tasks to event-based tasks in information extraction research. Being a core event-based task, event coreference resolution is less studied but arguably more challenging than entity coreference resolution. This paper provides an overview of the major milestones made in event coreference research since its inception two decades ago. [Figure 1: The standard information extraction pipeline.] 1 Introduction. Compared to entity coreference resolution, event coreference resolution is less studied but arguably more challenging. To see its difficulty, consider the following example on within-document event coreference resolution, whose goal is to determine which event mentions in a document refer to the same real-world event: "Georges Cipriani {left} (ev1) a prison in Ensisheim in northern France on parole on Wednesday." It should be easy to see from this example that to perform end-to-end event coreference resolution, one has to build an information extraction (IE) pipeline (cf. Figure 1) that involves (1) extracting the entity mentions from a given document (the entity extraction component) and determining which of them are coreferent (the entity coreference component); (2) extracting the event mentions by identifying their trigger words/phrases and determining which entity mentions are their arguments (the event extraction component); and (3) [...]
  • Binding and Coreference in Vietnamese
    University of Massachusetts Amherst, ScholarWorks@UMass Amherst, Doctoral Dissertations, Dissertations and Theses, October 2019. Binding and Coreference in Vietnamese. Thuy Bui, University of Massachusetts Amherst. Follow this and additional works at: https://scholarworks.umass.edu/dissertations_2. Part of the Psycholinguistics and Neurolinguistics Commons, Semantics and Pragmatics Commons, and the Syntax Commons. Recommended Citation: Bui, Thuy, "Binding and Coreference in Vietnamese" (2019). Doctoral Dissertations. 1694. https://doi.org/10.7275/ncqc-n685 https://scholarworks.umass.edu/dissertations_2/1694. This Open Access Dissertation is brought to you for free and open access by the Dissertations and Theses at ScholarWorks@UMass Amherst. It has been accepted for inclusion in Doctoral Dissertations by an authorized administrator of ScholarWorks@UMass Amherst. For more information, please contact [email protected]. BINDING AND COREFERENCE IN VIETNAMESE. A Dissertation Presented by THUY BUI, submitted to the Graduate School of the University of Massachusetts Amherst in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, September 2019, Linguistics. © Copyright by Thuy Bui 2019. All Rights Reserved. Approved as to style and content by: Kyle Johnson, Co-Chair; Brian Dillon, Co-Chair; Rajesh Bhatt, Member; Adrian Staub, Member; Joe Pater, Department Chair, Department of Linguistics. ACKNOWLEDGMENTS: I now realize how terrible I really am at feelings, words, and expressing feelings with words, as I spent the past three weeks trying to make these acknowledgements sound right, and they just never did. With this acute awareness and absolutely no compensating skills, I am going to include a series of inadequate thank yous to the many amazing people whose overflowing acts of kindness towards me exceeds what words can possibly describe anyway.
  • Long-Distance Reflexivization and Logophoricity in the Dargin Language Muminat Kerimova Florida International University
    Florida International University FIU Digital Commons MA in Linguistics Final Projects College of Arts, Sciences & Education 2017 Long-Distance Reflexivization and Logophoricity in the Dargin Language Muminat Kerimova Florida International University Follow this and additional works at: https://digitalcommons.fiu.edu/linguistics_ma Part of the Linguistics Commons Recommended Citation Kerimova, Muminat, "Long-Distance Reflexivization and Logophoricity in the Dargin Language" (2017). MA in Linguistics Final Projects. 3. https://digitalcommons.fiu.edu/linguistics_ma/3 This work is brought to you for free and open access by the College of Arts, Sciences & Education at FIU Digital Commons. It has been accepted for inclusion in MA in Linguistics Final Projects by an authorized administrator of FIU Digital Commons. For more information, please contact [email protected]. FLORIDA INTERNATIONAL UNIVERSITY Miami, Florida LONG-DISTANCE REFLEXIVIZATION AND LOGOPHORICITY IN THE DARGIN LANGUAGE A thesis submitted in partial fulfillment of the requirements for the degree of MASTER OF ARTS in LINGUISTICS by Muminat Kerimova 2017 ABSTRACT OF THE THESIS LONG-DISTANCE REFLEXIVIZATION AND LOGOPHORICITY IN THE DARGIN LANGUAGE by Muminat Kerimova Florida International University, 2017 Miami, Florida Professor Ellen Thompson, Major Professor The study of anaphora challenges us to determine the conditions under which the pronouns of a language are associated with possible antecedents. One of the theoretical questions is whether the distribution of pronominal forms is best explained by a syntactic, semantic or discourse level analysis. A more practical question is how we distinguish between anaphoric elements, e.g. what are the borders between the notions of pronouns, locally bound reflexives and long-distance reflexives? The study analyzes the anaphora device saj in Dargin that is traditionally considered to be a long-distance reflexivization language.
  • Modeling the Lifespan of Discourse Entities with Application to Coreference Resolution
    Journal of Artificial Intelligence Research 52 (2015) 445-475 Submitted 09/14; published 03/15 Modeling the Lifespan of Discourse Entities with Application to Coreference Resolution Marie-Catherine de Marneffe [email protected] Linguistics Department The Ohio State University Columbus, OH 43210 USA Marta Recasens [email protected] Google Inc. Mountain View, CA 94043 USA Christopher Potts [email protected] Linguistics Department Stanford University Stanford, CA 94035 USA Abstract A discourse typically involves numerous entities, but few are mentioned more than once. Distinguishing those that die out after just one mention (singleton) from those that lead longer lives (coreferent) would dramatically simplify the hypothesis space for coreference resolution models, leading to increased performance. To realize these gains, we build a classifier for predicting the singleton/coreferent distinction. The model's feature representations synthesize linguistic insights about the factors affecting discourse entity lifespans (especially negation, modality, and attitude predication) with existing results about the benefits of "surface" (part-of-speech and n-gram-based) features for coreference resolution. The model is effective in its own right, and the feature representations help to identify the anchor phrases in bridging anaphora as well. Furthermore, incorporating the model into two very different state-of-the-art coreference resolution systems, one rule-based and the other learning-based, yields significant performance improvements. 1. Introduction Karttunen imagined a text interpreting system designed to keep track of "all the individuals, that is, events, objects, etc., mentioned in the text and, for each individual, record whatever is said about it" (Karttunen, 1976, p. 364).
  • Discourse and Logical Form: Pronouns, Attention and Coherence
    Discourse and Logical Form: Pronouns, Attention and Coherence 1 Introduction The meaning of the pronoun 'he' does not seem ambiguous in the way that, say, the noun 'bank' is. Yet what we express in uttering 'He is happy,' pointing at Bill, differs from what we express when uttering it, pointing at a different person, Sam. 'he' in the first case refers to Bill, in the second to Sam. And further with an utterance of 'A man always drives the car he owns,' or 'A man came in. He sat down,' without an accompanying pointing gesture, 'he' refers not at all; its interpretation, instead, co-varies with possible instances in the range of the quantifier 'A man'. Those uses in which the pronoun is interpreted referentially are usually called 'demonstrative,' and those in which it is interpreted non-referentially are usually called 'bound'. We immediately face two questions: what is the semantic value of a pronoun in a context, given that it can have both bound and demonstrative uses? And, since this value can vary with context, how does context fix it? The first question concerns the semantics of pronouns, and the second their meta-semantics. The received view answers the first question by positing an ambiguity: bound uses of 'he' function as bound variables; demonstrative ones, by contrast, are referential. The received view's answer to the second question is that the semantic value of a bound use co-varies with possible instances in the range of a binding expression, and for demonstrative uses, linguistic meaning constrains, but does not fully determine, its semantic value in context.
  • Discourse-Givenness of Noun Phrases: Theoretical and Computational Models
    Discourse-Givenness of Noun Phrases: Theoretical and Computational Models. Dissertation submitted for the academic degree of Doktor der Philosophie (Dr. phil.) at the Faculty of Human Sciences of the University of Potsdam, by Julia Ritz, Stuttgart, May 2013. Published online at the Institutional Repository of the University of Potsdam: URL http://opus.kobv.de/ubp/volltexte/2014/7081/, URN urn:nbn:de:kobv:517-opus-70818, http://nbn-resolving.de/urn:nbn:de:kobv:517-opus-70818. Declaration of Authenticity of Work: I hereby declare that I wrote the attached dissertation independently and without the unauthorized help of third parties. I used no sources or aids other than those indicated, and I have marked all passages taken verbatim or in content from other works as such. I further affirm that I have submitted the dissertation, in its present or any other version, only in this doctoral procedure and no other, and that no finally failed doctoral procedures preceded this one. In the event of the successful completion of the doctoral procedure, I permit the faculty to publish the attached summaries. Contents: Preface; 1 Introduction (1.1 Discourse-Givenness and Related Concepts, 1.2 Goal, 1.3 Motivation, 1.4 The Field of Discourse-Givenness Classification, 1.5 Structure of this Document); 2 Concepts and Definitions (2.1 Basic Concepts and Representation: 2.1.1 Basic Concepts in Discourse, 2.1.2 Formal Representation; 2.2 Reference: 2.2.1 Identifying Basic Units, 2.2.2 Non-Referring Use of Expressions ...)
  • Lecture 5. Introduction to Issues in Anaphora
    Formal Semantics and Current Problems of Semantics, Lecture 5. Barbara H. Partee, RGGU, March 4, 2008. Lecture 5. Introduction to Issues in Anaphora. Contents: 1. What is Anaphora?; 2. Coreference vs. variable-binding; 3. Syntactic aspects of anaphora, and syntax-semantics interface issues; 4. Definite NPs as anaphoric expressions: debates; 5. Issues and puzzles in anaphora. Preview of coming lectures; 5.1. Donkey anaphora and discourse anaphora with indefinite antecedents and the rise of Dynamic Semantics. Weeks 6-8. [...] in (1a) and what is sometimes called a deictic use of he in (2), as it might be uttered while looking at someone who just walked by, where there is no linguistic antecedent. (2) He looks lost. Two notes on terminology. In the broad sense of the term anaphora, all of the examples above as well as the ones in (3) below involve anaphora. There are two narrower senses in which anaphora or anaphor are distinguished from other terms.
  • Binding, Coreference, and Presuppositions: a Presuppositional Precede-And-Command Binding Theory
    Binding, Coreference, and Presuppositions: A Presuppositional Precede-and-Command Binding Theory Benjamin Bruening (University of Delaware) rough draft, November 26, 2018, comments welcome ([email protected]) Abstract Almost all current approaches to the binding theory (the conditions that regulate covaluation between NPs within a sentence) have accepted the view of Reinhart (1983a,b), according to which the binding theory should regulate only syntactic binding and not coreference. They also adopt the view that syntactic movement is implicated somehow in binding (either directly, or indirectly, through constraints on chains). In this paper, I argue that both of these ideas are incorrect. Instead, we want a binding theory that is unrelated to syntactic movement, and one that regulates both binding and coreference, as the classical binding theory had it (e.g., Chomsky 1981). I propose a new binding theory that combines the Precede-and-Command analysis of Bruening (2014) with the presuppositional approach to Binding Condition A in Sauerland (2013). 1 Introduction The classical binding theory (e.g., Chomsky 1981) did not distinguish between binding and coreference. Its conditions (Conditions A, B, and C) regulated both, by imposing conditions on coindexing. In contrast, almost all alternatives to the classical binding theory have accepted the view of Reinhart (1983a,b), according to which the binding theory should regulate only syntactic binding and not coreference. Many also adopt the view that syntactic movement is implicated somehow in binding. In this paper, I argue that both of these ideas are incorrect. First, I argue that the binding theory should regulate both binding and coreference, contrary to all work that has followed Reinhart.