Two Approaches to Metaphor Detection

Brian MacWhinney and Davida Fromm
Carnegie Mellon University, Pittsburgh PA, 15213 USA
E-mail: [email protected], [email protected]

Abstract

Methods for automatic detection and interpretation of metaphors have focused on analysis and utilization of the ways in which metaphors violate selectional preferences (Martin, 2006). Detection and interpretation processes that rely on this method can achieve wide coverage and may be able to detect some novel metaphors. However, they are prone to high false alarm rates, often arising from imprecision in parsing and supporting ontological and lexical resources. An alternative approach to metaphor detection emphasizes the fact that many metaphors become conventionalized collocations, while still preserving their active metaphorical status. Given a large enough corpus for a given language, it is possible to use tools like SketchEngine (Kilgarriff, Rychly, Smrz, & Tugwell, 2004) to locate these high-frequency metaphors for a given target domain. In this paper, we examine the application of these two approaches and discuss their relative strengths and weaknesses for metaphors in the target domain of economic inequality in English, Spanish, Farsi, and Russian.

Keywords: metaphor, corpora, grammatical relations, selectional restrictions, collocations

1. The Task

In our project, code-named METAL, the task was to automatically detect metaphors in the target domain (Kövecses, 2002) of economic inequality – a subject of particular current interest in the wake of the financial collapse of 2008 and the growing income disparity within and between countries caused by globalization and automation. Within this target domain, we defined the four subdomains of poverty, wealth, taxes, and social class. The task of the system was to take short passages discussing this issue and to identify within each passage the metaphor(s) dealing with one of the target subdomains of economic inequality, if there was one. Once a sentence with a metaphor was detected, the system had to delineate the specific words involved in the metaphor. Finally, the system had to assign an interpretation to the metaphor by specifying the relevant subsegment of the economic inequality target domain (poverty, wealth, taxes, or social class), as well as the source domain. The source domain was characterized in terms of conceptual metaphor theory (Lakoff & Johnson, 1980), involving sources such as PATH, CONTAINER, STRUCTURE, and BODY.

The system targeted detection of metaphors in English, Farsi, Russian, and Spanish. All passages in the system were processed using the TurboParser (Martins, Smith, Xing, Aguiar, & Figueiredo, 2010), which produced dependency relations in CoNLL format (Kübler, McDonald, & Nivre, 2009). From these parses, we extracted pairs of words that represented specific grammatical relations (GRs) or "checkables" that could then be processed for violations of selectional restrictions or preferences (Peters & Wilks, 2010; Wilks, 1978). In many cases the checkable included a word from the source domain linked to a word from the target domain. The possible checkables included:

• Action_Has_Subject: verb-subject pairs. (poverty vanishes)
• Action_Has_Object: verb-object pairs. (poverty traps)
• Action_Has_Modifier: verb-adverb pairs. (trap softly)
• Entity_Has_Modifier: noun-adjective pairs. (grinding poverty)
• Entity_Entity: noun-noun pairs. (wealth ladder)
• Entity_Has_Entity: noun-noun pairs for possession. (poverty's trap, burden of poverty)
• Entity_Is_Entity: noun-noun pairs for identity. (money is the lifeblood)
• Entity_Prep_Entity: noun-(prep)-noun pairs. (path_to wealth)

Once a checkable was detected in the parse tree, it was then sent on to four methods for evaluating selectional restrictions or preferences. As an example, consider the sentence: There are many hyenas feeding on DBKL's money. In the relevant GR, feeding is the action and money is the object in the Action_Has_Object relation. Because one does not literally feed on money, this pair is judged to violate a selectional restriction and hence to be a likely candidate for detection as a metaphor.

2. Selectional Restriction Methods

The first selectional restriction method used the Common Semantic Features (CSF) method (Tsvetkov, Boytsov, Gershman, Nyberg, & Dyer, 2014). For this system, a classifier for metaphor detection was trained on English samples and then applied to each of the four target languages. The CSF classifier used coarse-grained semantic features derived from WordNet (Fellbaum, 1998), psycholinguistic features for abstractness from the MRC database (Wilson, 1988), and Word Representations in Global Context (Huang, Socher, Manning, & Ng, 2012). Training relied on the TroFi example base (Birke & Sarkar, 2007) as parsed into dependency relations by the TurboParser. Because WordNet and the other materials were based on English, this method treated all four languages as roughly equivalent.

The second selectional restriction method, which we will call the TRIPS method, attempted to extract additional information from WordNet to achieve detection through preference violation (Wilks, 1978).
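The checkable-extraction and violation-test pipeline described in Section 1 can be sketched in a few lines. The minimal CoNLL column layout, the dobj relation label, and the toy preference table below are illustrative assumptions for the sketch, not the actual METAL resources:

```python
# Sketch: extract Action_Has_Object "checkables" from a CoNLL-style
# dependency parse and flag pairs that violate a toy selectional preference.

def read_conll(text):
    """Parse minimal CoNLL lines: ID FORM LEMMA POS HEAD DEPREL."""
    rows = []
    for line in text.strip().splitlines():
        idx, form, lemma, pos, head, deprel = line.split()
        rows.append({"id": int(idx), "lemma": lemma, "pos": pos,
                     "head": int(head), "deprel": deprel})
    return rows

def action_has_object(rows):
    """Yield (verb_lemma, object_lemma) checkables from one parsed sentence."""
    by_id = {r["id"]: r for r in rows}
    for r in rows:
        if r["deprel"] == "dobj" and r["head"] in by_id:
            head = by_id[r["head"]]
            if head["pos"].startswith("V"):
                yield (head["lemma"], r["lemma"])

# Toy selectional preferences: literal objects allowed for each verb.
LITERAL_OBJECTS = {"feed": {"grass", "meat", "prey"}}

def violates_preference(verb, obj):
    allowed = LITERAL_OBJECTS.get(verb)
    return allowed is not None and obj not in allowed

# The paper's example, reduced to the relevant fragment and
# hand-parsed here for illustration.
sentence = """1 hyenas hyena NNS 2 nsubj
2 feeding feed VBG 0 root
3 money money NN 2 dobj"""

checkables = list(action_has_object(read_conll(sentence)))
flags = [(v, o) for v, o in checkables if violates_preference(v, o)]
print(flags)  # (feed, money) violates the preference -> metaphor candidate
```

In a real system the preference table would come from ontological resources such as WordNet or TRIPS rather than a hand-written dictionary, which is exactly where the parsing and resource imprecision discussed below enters.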
The ontological classification system of WordNet was further elaborated by ontological relations deriving from the TRIPS system (Allen, Swift, & de Beaumont, 2008) and the method of selecting likely figurative synsets from WordNet (Peters & Wilks, 2010). The TRIPS lexicon provides information on semantic roles and selectional restrictions, and the parser can construct the required semantic structures from WordNet entries. TRIPS has been shown to be successful at parsing WordNet glosses in order to build commonsense knowledge bases (Allen et al., 2011). Consider how the system decides how to handle a word like cooperate that is not in the TRIPS hierarchy. Within WordNet, it tries unsuccessfully to process the first synset about collaborate, but then moves to the second synset, which provides the item work, and then maps this to the TRIPS ontology to derive the correct selectional restrictions.

The third selectional restriction method relied on the matching of the lexical frames for verbs from VerbNet (Baker, Fillmore, & Cronin, 2003) along with features derived from WordNet. This method provides acceptable precision, but its coverage (recall) is limited to metaphors that include verbs listed in VerbNet. Moreover, because VerbNet is based on English verbs, the extensions to the other languages are incomplete.

The fourth selectional restriction method processed WordNet features using the Scone ontological database (http://cs.cmu.edu/~sef/scone). The Scone ontological hierarchy establishes "split sets" such as animate/inanimate or tangible/intangible. Given a collocation such as green ideas, Scone would require that only tangible things can have color, and therefore green ideas must be metaphorical.

3. Problems with Selectional Restrictions

The four selectional restriction methods relied on the WordNet ontology and the method of checking for selectional or preference violations across grammatical relations. Thus, reliable detection of metaphors depended on first being able to use NLP tools in each language for accurate identification of each of the relevant "checkable" relations. Whenever there were errors in this parsing process, the resultant grammatical relations or

We also explored ways of addressing this problem by harvesting complete collections of figurative language from the Oxford English Dictionary (OED) or the McGraw-Hill Dictionary of English Idioms and Phrasal Verbs (Spears, 2009). Our plan was to use these conventionalized forms as a filter on the results from the selectional methods. However, before we completed implementation of this method, we developed a more precise method, as described in the next section.

4. Corpus-based Analysis

After working for nearly two years with the selectional methods, we implemented an alternative approach to metaphor detection that relied on corpus analysis rather than on detection of selectional restrictions. This method used the TenTen corpora for English, Russian, and Spanish (Jakubicek, Kilgarriff, Kovar, Rychly, & Suchomel, 2013), available from SketchEngine (sketchengine.co.uk). These corpora are called TenTen corpora because they contain over 10 billion words each. In addition, they have been lemmatized and tagged for part of speech and grammatical relations. Grammatical relations between the lemmas are constructed using a SketchEngine (SkE) shallow parsing grammar based on a set of regular expressions formulated over sequences of part-of-speech categories. SkE grammars were already available within SkE for Russian, Spanish, and English. These grammars were already built into the corpora, allowing for immediate creation of WordSketches, as described below. Ivanova et al. discuss how a SkE grammar was built for German, and Khokhlova and Zakharov (2010) describe the construction of a SkE grammar for Russian. For Farsi, we built a SkE grammar from scratch, using the methods described in these papers and in the SkE documentation.

For Farsi, no TenTen corpus was available, so we needed to build our own SkE corpus. We did this by using passages from web searches supplied by Shlomo Argamon of IIT and pointers to relevant URLs from Ron Ferguson of SAIC. Combining these materials into a single corpus, we then lemmatized and tagged this new corpus, using the methods described by Feely et al. (2014). Using the SkE documentation, we then wrote a SkE grammar to detect the most important grammatical
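The WordSketch idea behind this corpus-based approach can be sketched as follows: collect collocates of a target-domain lemma per grammatical relation and rank them by logDice, the association score SketchEngine uses for word sketches. The tiny triple list and relation labels below are invented for illustration, not drawn from the TenTen corpora:

```python
# Sketch: rank collocates of a target lemma, grouped by grammatical
# relation, using the logDice association score.
import math
from collections import Counter

# (relation, collocate_lemma, target_lemma) triples, as a SkE sketch
# grammar might extract them from a lemmatized, POS-tagged corpus.
triples = [
    ("object_of", "fight", "poverty"),
    ("object_of", "escape", "poverty"),
    ("object_of", "escape", "poverty"),
    ("modifier", "grinding", "poverty"),
    ("modifier", "grinding", "poverty"),
    ("modifier", "abject", "poverty"),
    ("object_of", "fight", "inflation"),
]

def log_dice(f_xy, f_x, f_y):
    """logDice = 14 + log2(2*f_xy / (f_x + f_y)); max 14, higher = stronger."""
    return 14 + math.log2(2 * f_xy / (f_x + f_y))

def sketch(target):
    """Return (relation, collocate, score) rows for a target lemma,
    sorted by descending logDice."""
    pair_f = Counter((rel, head) for rel, head, dep in triples if dep == target)
    head_f = Counter(head for _, head, _ in triples)
    dep_f = Counter(dep for _, _, dep in triples)
    return sorted(
        ((rel, head, log_dice(f, head_f[head], dep_f[target]))
         for (rel, head), f in pair_f.items()),
        key=lambda row: -row[2])

for rel, head, score in sketch("poverty"):
    print(f"{rel:10s} {head:10s} {score:5.2f}")
```

On a 10-billion-word corpus, high-scoring rows such as grinding poverty or escape poverty surface conventionalized metaphorical collocations directly, without depending on per-language parsing accuracy.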