Monolingual Machine Translation for Paraphrase Generation

Chris QUIRK, Chris BROCKETT and William DOLAN
Natural Language Processing Group, Microsoft Research
One Microsoft Way, Redmond, WA 98052 USA
{chrisq,chrisbkt,billdol}@microsoft.com

Abstract

We apply statistical machine translation (SMT) tools to generate novel paraphrases of input sentences in the same language. The system is trained on large volumes of sentence pairs automatically extracted from clustered news articles available on the World Wide Web. Alignment Error Rate (AER) is measured to gauge the quality of the resulting corpus. A monotone phrasal decoder generates contextual replacements. Human evaluation shows that this system outperforms baseline paraphrase generation techniques and, in a departure from previous work, offers better coverage and scalability than the current best-of-breed paraphrasing approaches.

1 Introduction

The ability to categorize distinct word sequences as "meaning the same thing" is vital to applications as diverse as search, summarization, dialog, and question answering. Recent research has treated paraphrase acquisition and generation as a machine learning problem (Barzilay & McKeown, 2001; Lin & Pantel, 2002; Shinyama et al., 2002; Barzilay & Lee, 2003; Pang et al., 2003). We approach this problem as one of statistical machine translation (SMT), within the noisy channel model of Brown et al. (1993). That is, we seek to identify the optimal paraphrase T* of a sentence S by finding:

$$T^* = \arg\max_T P(T \mid S) = \arg\max_T P(S \mid T)\,P(T)$$

T and S being sentences in the same language.

We describe and evaluate an SMT-based paraphrase generation system that utilizes a monotone phrasal decoder to generate meaning-preserving paraphrases across multiple domains. By adopting at the outset a paradigm geared toward generating sentences, this approach overcomes many problems encountered by task-specific approaches. In particular, we show that SMT techniques can be extended to paraphrase given sufficient monolingual parallel data.¹ We show that a huge corpus of comparable and alignable sentence pairs can be culled from ready-made topical/temporal clusters of news articles gathered on a daily basis from thousands of sources on the World Wide Web, thereby permitting the system to operate outside the narrow domains typical of existing systems.

¹ Barzilay & McKeown (2001), for example, reject the idea owing to the noisy, comparable nature of their data.
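As a minimal illustration of this noisy-channel decomposition, the sketch below ranks candidate paraphrases by summing a channel score log P(S | T) and a language model score log P(T) in log space. Both scoring functions are toy stand-ins of our own devising (a word-overlap penalty and an add-one-smoothed unigram model), not the models used in the actual system.

```python
import math
from collections import Counter

def lm_logprob(sentence, counts, total):
    """Toy add-one-smoothed unigram stand-in for log P(T)."""
    vocab = len(counts) + 1  # +1 reserves mass for unseen words
    return sum(math.log((counts[w] + 1) / (total + vocab))
               for w in sentence.split())

def channel_logprob(source, target):
    """Toy stand-in for log P(S | T): penalize each source word
    that the candidate paraphrase fails to account for."""
    missed = Counter(source.split()) - Counter(target.split())
    return -2.0 * sum(missed.values())

def best_paraphrase(source, candidates, counts, total):
    """T* = argmax_T log P(S | T) + log P(T)."""
    return max(candidates,
               key=lambda t: channel_logprob(source, t)
                             + lm_logprob(t, counts, total))

# Usage: rank two candidate rewrites of a source sentence.
counts = Counter("the stock market fell sharply today".split())
total = sum(counts.values())
print(best_paraphrase("shares dropped today",
                      ["the market fell today", "bananas are yellow"],
                      counts, total))
```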
2 Related work

Until recently, efforts in paraphrase were not strongly focused on generation and relied primarily on narrow data sources. One data source has been multiple translations of classic literary works (Barzilay & McKeown, 2001; Ibrahim, 2002; Ibrahim et al., 2003). Pang et al. (2003) obtain parallel monolingual texts from a set of 100 multiply-translated news articles. While translation-based approaches to obtaining data do address the problem of how to identify two strings as meaning the same thing, they are limited in scalability owing to the difficulty (and expense) of obtaining large quantities of multiply-translated source documents.

Other researchers have sought to identify patterns in large unannotated monolingual corpora. Lin & Pantel (2002) derive inference rules by parsing text fragments and extracting semantically similar paths. Shinyama et al. (2002) identify dependency paths in two collections of newspaper articles. In each case, however, the information extracted is limited to a small set of patterns.

Barzilay & Lee (2003) exploit the meta-information implicit in dual collections of newswire articles, but focus on learning sentence-level patterns that provide a basis for generation. Multi-sequence alignment (MSA) is used to identify sentences that share formal (and presumably semantic) properties. This yields a set of clusters, each characterized by a word lattice that captures n-gram-based structural similarities between sentences. Lattices are in turn mapped to templates that can be used to produce novel transforms of input sentences. Their methodology provides striking results within a limited domain characterized by a high frequency of stereotypical sentence types. However, as we show below, the approach may be of limited generality, even within the training domain.

3 Data collection

Our training corpus, like those of Shinyama et al. and Barzilay & Lee, consists of different news stories reporting the same event. While previous work with comparable news corpora has been limited to just two news sources, we set out to harness the ongoing explosion in internet news coverage. Thousands of news sources worldwide are competing to cover the same stories, in real time. Despite different authorship, these stories cover the same events and therefore have significant content overlap, especially in reports of the basic facts. In other cases, news agencies introduce minor edits into a single original AP or Reuters story. We believe that our work constitutes the first to attempt to exploit these massively multiple data sources for paraphrase learning and generation.

3.1 Gathering aligned sentence pairs

We began by identifying sets of pre-clustered URLs that point to news articles on the Web, gathered from publicly available sites such as http://news.yahoo.com/, http://news.google.com and http://uk.newsbot.msn.com. Their clustering algorithms appear to consider the full text of each news article, in addition to temporal cues, to produce sets of topically/temporally related articles. Story content is captured by downloading the HTML and isolating the textual content. A supervised HMM was trained to distinguish story content from surrounding advertisements, etc.²

Over the course of about 8 months, we collected 11,162 clusters, comprising 177,095 articles and averaging 15.8 articles per cluster. The quality of these clusters is generally good. Impressionistically, discrete events like sudden disasters, business announcements, and deaths tend to yield tightly focused clusters, while ongoing stories like the SARS crisis tend to produce very large and unfocused clusters.

To extract likely paraphrase sentence pairs from these clusters, we used edit distance (Levenshtein, 1966) over words, comparing all sentences pairwise within a cluster to find the minimal number of word insertions and deletions transforming the first sentence into the second. Each sentence was normalized to lower case, and the pairs were filtered to reject:

• Sentence pairs where the sentences were identical or differed only in punctuation;
• Duplicate sentence pairs;
• Sentence pairs with significantly different lengths (the shorter is less than two-thirds the length of the longer);
• Sentence pairs where the Levenshtein distance was greater than 12.0.³

A total of 139K non-identical sentence pairs were obtained. Mean Levenshtein distance was 5.17; mean sentence length was 18.6 words.

² We hand-tagged 1,150 articles to indicate which portions of the text were story content and which were advertisements, image captions, or other unwanted material. We evaluated several classifiers on a 70/30 test/train split and found that an HMM trained on a handful of features was most effective in identifying content lines (95% F-measure).

³ Chosen on the basis of ablation experiments and optimal AER (discussed in 3.2).
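The extraction step can be sketched as follows, assuming a cluster is simply a list of sentence strings. The length-ratio and distance thresholds are the ones quoted above; the function names and the punctuation handling are simplifications of ours.

```python
def word_indel_distance(a, b):
    """Minimal number of word insertions and deletions turning list a into list b."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1]
            else:
                cur[j] = 1 + min(prev[j], cur[j - 1])  # deletion / insertion
        prev = cur
    return prev[n]

def extract_pairs(cluster, max_dist=12.0):
    """Compare all sentences pairwise within one cluster, applying the filters above."""
    seen, pairs = set(), []
    for i, s1 in enumerate(cluster):
        for s2 in cluster[i + 1:]:
            w1 = s1.lower().split()  # normalize to lower case
            w2 = s2.lower().split()  # (a real filter would also strip punctuation)
            if w1 == w2:
                continue  # identical sentences
            if min(len(w1), len(w2)) < (2 / 3) * max(len(w1), len(w2)):
                continue  # significantly different lengths
            key = (tuple(w1), tuple(w2))
            if key in seen:
                continue  # duplicate pair
            if word_indel_distance(w1, w2) > max_dist:
                continue  # too divergent to be a likely paraphrase
            seen.add(key)
            pairs.append((s1, s2))
    return pairs
```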
3.2 Word alignment

To this corpus we applied the word alignment algorithms available in Giza++ (Och & Ney, 2000), a freely available implementation of IBM Models 1-5 (Brown et al., 1993) and the HMM alignment (Vogel et al., 1996), along with various improvements and modifications motivated by experimentation by Och & Ney (2000). In order to capture the many-to-many alignments that identify correspondences between idioms and other phrasal chunks, we align in the forward direction and again in the backward direction, heuristically recombining each unidirectional word alignment into a single bidirectional alignment (Och & Ney, 2000).

Figure 1 shows an example of a monolingual alignment produced by Giza++. Each line represents a uni-directional link; directionality is indicated by a tick mark on the target side of the link.

[Figure 1. An example Giza++ alignment]
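One common recombination heuristic in the spirit of Och & Ney (2000) is sketched below: start from the intersection of the two unidirectional link sets, then grow it with union links that cover otherwise unaligned words. This is an illustrative reconstruction, not necessarily the exact heuristic used here.

```python
def recombine(forward, backward):
    """Merge two unidirectional alignments (sets of (src, tgt) index pairs)
    into a single bidirectional alignment."""
    inter = forward & backward           # high-precision core links
    union = forward | backward
    merged = set(inter)
    src_done = {s for s, _ in merged}
    tgt_done = {t for _, t in merged}
    # Add union links that align otherwise-uncovered words; this is what
    # recovers many-to-many links for idioms and other phrasal chunks.
    for s, t in sorted(union - inter):
        if s not in src_done or t not in tgt_done:
            merged.add((s, t))
            src_done.add(s)
            tgt_done.add(t)
    return merged

# Usage: forward and backward alignments over toy word-index pairs.
fwd = {(0, 0), (1, 1), (2, 2)}
bwd = {(0, 0), (1, 1), (1, 2), (3, 2)}
print(sorted(recombine(fwd, bwd)))  # [(0, 0), (1, 1), (1, 2), (2, 2), (3, 2)]
```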
We held out a set of news clusters from our training data and extracted a set of 250 sentence pairs for blind evaluation. Randomly extracted on the basis of an edit distance of 5 ≤ n ≤ 20 (to allow a range of reasonably divergent candidate pairs while eliminating the most trivial substitutions), the gold-standard sentence pairs were checked by an independent human evaluator to ensure that they contained paraphrases before they were hand word-aligned.

To evaluate the alignments, we adhered to the standards established in Melamed (2001) and Och & Ney (2000, 2003). Following Och & Ney's methodology, two annotators each created an initial annotation for each dataset, subcategorizing alignments as either SURE (necessary) or POSSIBLE (allowed, but not required). Differences were highlighted and the annotators were asked to review their choices on these differences. Finally we combined the two annotations into a single gold standard: if both annotators agreed that an alignment should be SURE, the alignment was marked SURE in the gold standard; otherwise it was marked POSSIBLE.

Table 1. AER on the Lev12 corpus (training data type: L12)

                        Precision   Recall     AER
  All words               87.46%    89.52%   11.58%
  Identical words         89.36%    89.50%   10.57%
  Non-identical words     76.99%    90.22%   20.88%

Table 1 shows the results of evaluating alignment after training the Giza++ model. Although the overall AER of 11.58% is higher than that of the best bilingual MT systems (Och & Ney, 2003), the training data is inherently noisy, having more in common with analogous corpora than conventional MT parallel corpora in that the paraphrases are not constrained by the source text structure. The identical word AER of 10.57% is unsurprising given that the domain is unrestricted and the alignment algorithm does not employ direct string matching to leverage word identity.⁵ The non-identical word AER of 20.88% may appear problematic in a system that aims to generate paraphrases; as we shall see, however, this turns out not to be the case.
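For reference, the precision, recall, and AER figures reported in Table 1 follow the standard definitions of Och & Ney (2000, 2003), where A is the set of proposed alignment links, S the set of SURE gold links, and P the set of POSSIBLE gold links (with S ⊆ P):

$$\text{precision} = \frac{|A \cap P|}{|A|}, \qquad \text{recall} = \frac{|A \cap S|}{|S|}, \qquad \text{AER} = 1 - \frac{|A \cap S| + |A \cap P|}{|A| + |S|}$$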