Computational Linguistics in the Netherlands 2004


Computational Linguistics in the Netherlands 2004
Selected papers from the fifteenth CLIN meeting

Edited by Ton van der Wouden, Michaela Poß, Hilke Reckman, and Crit Cremers

Published by LOT
Trans 10, 3512 JK Utrecht, The Netherlands
phone: +31 30 253 6006
fax: +31 30 253 6000
e-mail: [email protected]
http://wwwlot.let.uu.nl/

ISBN 90-76864-91-8
NUR 632
© 2005 by the individual authors

LOT, Utrecht 2005

Contents

Preface

Invited Speaker

Shalom Lappin, Machine Learning and the Cognitive Basis of Natural Language

Regular Papers

Tamás Biró, When the Hothead Speaks. Simulated Annealing Optimality Theory for Dutch Fast Speech
Wauter Bosma, Query-Based Summarization using Rhetorical Structure Theory
Anne Hendrik Buist, Wessel Kraaij, and Stephan Raaijmakers, Automatic Summarization of Meeting Data: A Feasibility Study
Christiano Chesi, Phases and Complexity in Phrase Structure Building
Tim Van de Cruys, Between VP Adjuncts and Second Pole in Dutch. A Corpus-Based Survey Regarding the Complements between VP Adjuncts and Second Pole in Dutch
Frank Van Eynde, Argument Realization in Dutch
Nicole Grégoire, Accentuation of Adpositions and Particles in a Text-to-Speech System for Dutch
Lars Hellan, Dorothee Beermann, Jon Atle Gulla, and Atle Prange, Trailfinder - A Case Study in Extracting Spatial Information Using Deep Language Processing
Véronique Hoste and Walter Daelemans, Learning Dutch Coreference Resolution
Kim Luyckx and Walter Daelemans, Shallow Text Analysis and Machine Learning for Authorship Attribution
Jori Mur, Off-line Answer Extraction for Dutch QA
Lonneke van der Plas and Gosse Bouma, Syntactic Contexts for Finding Semantically Related Words
Michaela Poß and Ton van der Wouden, Extended Lexical Units in Dutch
Wojciech Skut, Preference-Driven Bimachine Compilation. An Application to TTS Text Normalisation
Alexandros Tantos, An Interface between Lexical and Discourse Semantics. The Case of the Light Verb "have"

List of Contributors

Preface

It is our pleasure to present this selection of the papers presented at the fifteenth installment of Computational Linguistics in the Netherlands, held at Leiden University on Friday, December 17th, 2004. Organized by the computational linguists of what was at that time called the Leiden Centre for Linguistics (ULCL), this one-day conference attracted 74 abstracts from no less than four continents, and over one hundred participants.

Shalom Lappin, of King's College London, and Luc Steels, of the Free University, Brussels and the Sony Computer Science Laboratories, Paris, delivered the keynote lectures, on From Universal Grammar to Machine Learning: the Significance of Natural Language Engineering for Theories of Cognition and Emergent Fluid Construction Grammars, respectively. A written version of Prof. Lappin's paper is included in the present volume.

The separate call for papers resulted in the submission of 26 papers, of which only 16 could be included. We wish to express our heartfelt thanks to the reviewing panel, consisting of Harald Baaijen, Stefano Bocconi, Antal van den Bosch, Harry Bunt, Alessandra Corda, Arthur Dirksen, Francisca de Jong, Truus Kruijt, Frank Landsbergen, Anton Nijholt, Gertjan van Noord, Jan Odijk, Manfred Sailer, Rob van der Sandt, Mariët Theune, Mark de Vos, Theo Vosse, Henk Zeevat, and Joost Zwarts, for helping us in the difficult task of selecting the papers most suitable for this volume, as well as to the authors for complying with our harsh demands with respect to formatting issues and time schedules.

As you will see from the table of contents, the papers cover a wide variety of topics from the field of computational linguistics. In our view, the alphabet was the ordering principle that was the least arbitrary.
Finally, we wish to thank everyone who helped with the local organization, the former ULCL for cooperation and financial support, LOT for their willingness to publish this volume in their Occasional Series, and the Faculty of Arts of Leiden University for providing a venue.

Leiden, October 2005
Crit Cremers, Michaela Poß, Hilke Reckman, Ton van der Wouden

Machine Learning and the Cognitive Basis of Natural Language

Shalom Lappin
Department of Philosophy, King's College London

Abstract

Machine learning and statistical methods have yielded impressive results in a wide variety of natural language processing tasks. These advances have generally been regarded as engineering achievements. In fact, it is possible to argue that the success of machine learning methods is significant for our understanding of the cognitive basis of language acquisition and processing. Recent work in unsupervised grammar induction is particularly relevant to this issue. It suggests that knowledge of language can be achieved through general learning procedures, and that a richly articulated language faculty is not required to explain its acquisition.

1 Introduction

The past fifteen years have seen a massive expansion in the application of information-theoretic and machine learning (ML) methods to natural language processing. This work has yielded impressive results in accuracy and coverage for engineering systems addressing a wide variety of tasks in areas like speech recognition, morphological analysis, parsing, semantic interpretation, and dialogue management. It is worth considering whether the inductive learning mechanisms that these methods employ have consequences not simply for natural language engineering, but also for our understanding of the cognitive basis of human language acquisition and processing. A view common among computational linguists is that the information-theoretic methods used to construct robust NLP systems are engineering tools.
On this approach, language engineering does not bear directly on the cognitive properties of grammar and human language processing. Shieber (2004 pc) suggests that the success of information-theoretic methods in NLP may have implications for the scientific understanding of natural language. Pereira (2000) argues that current work on grammar induction has revived Harris' (1951), (1991) program for combining formal grammar and information theory in the study of language.

Most machine learning has used supervised learning techniques. These have limited implications for theories of human language learning, given that they require annotation of the training data with the structures and rules that are to be learned. However, recently there has been an increasing amount of promising research on unsupervised machine learning of linguistic knowledge. The results of this research suggest the computational viability of the view that general cognitive learning and projection mechanisms, rather than a richly articulated language faculty, may be sufficient to support language acquisition and interpretation.

In Section 2 I briefly consider the poverty of stimulus argument that has traditionally been invoked to motivate a distinct language faculty. Section 3 reviews major developments in the application of supervised machine learning to different areas of NLP. Section 4 considers some recent work in unsupervised learning of NLP systems. In Section 5 I attempt to clarify the central issues in the debate between the distinct language faculty and generalized learning model approaches. Finally, Section 6 states the main conclusions of this discussion and suggests directions for future work.
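The contrast between supervised and unsupervised learning drawn above can be made concrete with a toy sketch. The data, words, and the context-overlap criterion below are invented for illustration only; they stand in for the annotated corpora and distributional induction methods discussed in the text, not for any particular system.

```python
from collections import defaultdict

# Supervised setting: each training sentence comes annotated with the
# very structure to be learned (here, part-of-speech tags).
supervised_data = [
    (["the", "dog", "barks"], ["DET", "NOUN", "VERB"]),
    (["a", "cat", "sleeps"], ["DET", "NOUN", "VERB"]),
]

# Unsupervised setting: only raw text is available; any structure must
# be induced, e.g. by grouping words that occur in similar contexts.
raw_text = [
    ["the", "dog", "barks"],
    ["the", "cat", "barks"],
    ["a", "dog", "sleeps"],
    ["a", "cat", "sleeps"],
]

# Collect, for each word, the set of (left neighbour, right neighbour)
# contexts in which it appears.
contexts = defaultdict(set)
for sent in raw_text:
    for i, w in enumerate(sent):
        left = sent[i - 1] if i > 0 else "<s>"
        right = sent[i + 1] if i < len(sent) - 1 else "</s>"
        contexts[w].add((left, right))

def share_context(w1: str, w2: str) -> bool:
    """Words with overlapping context sets fall into the same induced,
    unlabeled word class."""
    return bool(contexts[w1] & contexts[w2])

print(share_context("dog", "cat"))    # True: same distributional class
print(share_context("dog", "barks"))  # False
```

Even this trivial procedure groups "dog" with "cat" without ever seeing a NOUN label, which is the intuition behind the unsupervised induction work the paper goes on to survey.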
2 The Poverty of Stimulus Argument for a Language Faculty

Chomsky (1957), (1965), (1981), (1986), (1995), (2000) argues for a richly articulated language acquisition device to account for the speed and efficiency with which humans acquire natural language on the basis of "sparse" evidence. This device defines the initial state of the language learner. Its structures and constraints represent a Universal Grammar (UG) that the learner brings to the language acquisition problem. In earlier versions of the language faculty hypothesis (as in Chomsky (1965), for example) this device is presented as a schema that defines the set of possible grammars and an evaluation metric that ranks grammars conforming to this schema for a particular natural language. Since Chomsky (1981) UG has been described as an intricate set of principles containing parameters at significant points in their specification. On the Principles and Parameters (P&P) approach, exposure to a very limited amount of linguistic data allows the learner to determine parameter values in order to produce a grammar for his/her language. Chomsky (2000, p. 8) describes this Principles and Parameters model in the following terms.

    We can think of the initial state of the faculty of language as a fixed network connected to a switch box; the network is constituted of the principles of language, while the switches are options to be determined by experience. When switches are set one way, we have Swahili; when they are set another way, we have Japanese. Each possible human language is identified as a particular setting of the switches—a setting of parameters, in technical terminology. If the research program succeeds, we should be able literally to deduce Swahili from one choice of settings, Japanese from another, and so on through the languages that humans acquire.
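The switch-box metaphor in the quotation above can be rendered as a minimal sketch. The parameter names and their surface effects here are invented for illustration; this is a toy model of the P&P idea that a grammar is a vector of binary settings, not a serious typological analysis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParameterSetting:
    """One setting of the 'switches': a grammar, in the P&P metaphor."""
    head_initial: bool   # heads precede complements (VO) or follow them (OV)
    null_subject: bool   # subjects may be dropped
    wh_movement: bool    # wh-phrases front overtly

def describe(p: ParameterSetting) -> list[str]:
    """Read surface word-order properties off the parameter values."""
    return [
        "VO order" if p.head_initial else "OV order",
        "subjects optional" if p.null_subject else "subjects required",
        "wh-fronting" if p.wh_movement else "wh-in-situ",
    ]

# Two settings of the same switches yield two different "languages".
english_like = ParameterSetting(head_initial=True, null_subject=False, wh_movement=True)
japanese_like = ParameterSetting(head_initial=False, null_subject=True, wh_movement=False)

print(describe(english_like))   # ['VO order', 'subjects required', 'wh-fronting']
print(describe(japanese_like))  # ['OV order', 'subjects optional', 'wh-in-situ']
```

The point of the sketch is only that, under P&P, acquisition reduces to fixing a small number of discrete values from experience rather than learning a grammar from scratch; the paper's question is whether such a prewired device is needed at all.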
The empirical conditions of language acquisition require that the switches can be set on the basis of the very limited information that is available to the child. The language faculty view relies primarily on the poverty of stimulus argument to motivate the claim that a powerful task-specific mechanism is required for language acquisition. According to this argument, the complex grammar that a child achieves within a short period, with very limited data, cannot be explained through general learning procedures of the kind involved in other cognitive tasks.

This is, at root, a "What else could it be?" argument. It asserts that, given the complexity of grammatical knowledge and the lack of evidence for discerning its properties, we are forced to the conclusion that much of this knowledge is not learned at all but must already be present in the initial design of the language learner.