Spelling Error Correction with Soft-Masked BERT


Shaohua Zhang1, Haoran Huang1, Jicong Liu2 and Hang Li1
1ByteDance AI Lab
2School of Computer Science and Technology, Fudan University
{zhangshaohua.cs,huanghaoran,[email protected]  [email protected]

Abstract

Spelling error correction is an important yet challenging task because a satisfactory solution of it essentially needs human-level language understanding ability. Without loss of generality we consider Chinese spelling error correction (CSC) in this paper. A state-of-the-art method for the task selects a character from a list of candidates for correction (including non-correction) at each position of the sentence on the basis of BERT, the language representation model. The accuracy of the method can be sub-optimal, however, because BERT does not have sufficient capability to detect whether there is an error at each position, apparently due to the way of pre-training it using mask language modeling. In this work, we propose a novel neural architecture to address the aforementioned issue, which consists of a network for error detection and a network for error correction based on BERT, with the former being connected to the latter with what we call the soft-masking technique. Our method of using 'Soft-Masked BERT' is general, and it may be employed in other language detection-correction problems. Experimental results on two datasets demonstrate that the performance of our proposed method is significantly better than the baselines including the one solely based on BERT.

Table 1: Examples of Chinese spelling errors

Wrong: 埃及有金子塔。(Egypt has golden towers.)
Correct: 埃及有金字塔。(Egypt has pyramids.)

Wrong: 他的求胜欲很强，为了越狱在挖洞。(He has a strong desire to win and is digging for prison breaks.)
Correct: 他的求生欲很强，为了越狱在挖洞。(He has a strong desire to survive and is digging for prison breaks.)

1 Introduction

Spelling error correction is an important task which aims to correct spelling errors in a text either at word-level or at character-level (Yu and Li, 2014; Yu et al., 2014; Zhang et al., 2015; Wang et al., 2018b; Hong et al., 2019; Wang et al., 2019). It is crucial for many natural language applications such as search (Martins and Silva, 2004; Gao et al., 2010), optical character recognition (OCR) (Afli et al., 2016; Wang et al., 2018b), and essay scoring (Burstein and Chodorow, 1999). In this paper, we consider Chinese spelling error correction (CSC) at character-level.

Spelling error correction is also a very challenging task, because to completely solve the problem the system needs to have human-level language understanding ability. There are at least two challenges here, as shown in Table 1. First, world knowledge is needed for spelling error correction. Character 字 in the first sentence is mistakenly written as 子, where 金子塔 means golden tower and 金字塔 means pyramid. Humans can correct the typo by referring to world knowledge. Second, sometimes inference is also required. In the second sentence, the 4-th character 生 is mistakenly written as 胜. In fact, 胜 and the surrounding characters form a new valid word 求胜欲 (desire to win), rather than the intended word 求生欲 (desire to survive).

Many methods have been proposed for CSC or more generally spelling error correction. Previous approaches can be mainly divided into two categories. One employs traditional machine learning and the other deep learning (Yu et al., 2014; Tseng et al., 2015; Wang et al., 2018b). Zhang et al.
(2015), for example, proposed a unified framework for CSC consisting of a pipeline of error detection, candidate generation, and final candidate selection using traditional machine learning. Wang et al. (2019) proposed a Seq2Seq model with copy mechanism which transforms an input sentence into a new sentence with spelling errors corrected.

More recently, BERT (Devlin et al., 2018), the language representation model, is successfully applied to many language understanding tasks including CSC (cf., (Hong et al., 2019)). In the state-of-the-art method using BERT, a character-level BERT is first pre-trained using a large unlabelled dataset and then fine-tuned using a labeled dataset. The labeled data can be obtained via data augmentation in which examples of spelling errors are generated using a large confusion table. Finally the model is utilized to predict the most likely character from a list of candidates at each position of the given sentence. The method is powerful because BERT has certain ability to acquire knowledge for language understanding. Our experimental results show that the accuracy of the method can be further improved, however. One observation is that the error detection capability of the model is not sufficiently high, and once an error is detected, the model has a better chance to make a right correction. We hypothesize that this might be due to the way of pre-training BERT with mask language modeling, in which only about 15% of the characters in the text are masked; thus the model only learns the distribution of masked tokens and tends to choose not to make any correction. This phenomenon is prevalent and represents a fundamental challenge for using BERT in certain tasks like spelling error correction.

To address the above issue, we propose a novel neural architecture in this work, referred to as Soft-Masked BERT. Soft-Masked BERT contains two networks, a detection network and a correction network based on BERT. The correction network is similar to that in the method of solely using BERT. The detection network is a Bi-GRU network that predicts the probability that the character is an error at each position. The probability is then utilized to conduct soft-masking of the embedding of the character at that position. Soft masking is an extension of conventional 'hard masking' in the sense that the former degenerates to the latter when the probability of error equals one. The soft-masked embedding at each position is then inputted into the correction network. The correction network conducts error correction using BERT. This approach can force the model to learn the right context for error correction with the help of the detection network, during end-to-end joint training.

We conducted experiments to compare Soft-Masked BERT and several baselines including the method of using BERT alone. As datasets we utilized the benchmark dataset of SIGHAN. We also created a large and high-quality dataset for evaluation named News Title. The dataset, which contains titles of news articles, is ten times larger than the previous datasets. Experimental results show that Soft-Masked BERT significantly outperforms the baselines on the two datasets in terms of accuracy measures.

The contributions of this work include (1) proposal of the novel neural architecture Soft-Masked BERT for the CSC problem, and (2) empirical verification of the effectiveness of Soft-Masked BERT.

2 Our Approach

2.1 Problem and Motivation

Chinese spelling error correction (CSC) can be formalized as the following task. Given a sequence of n characters (or words) X = (x_1, x_2, ..., x_n), the goal is to transform it into another sequence of characters Y = (y_1, y_2, ..., y_n) with the same length, where the incorrect characters in X are replaced with the correct characters to obtain Y. The task can be viewed as a sequential labeling problem in which the model is a mapping function f: X → Y. The task is an easier one, however, in the sense that usually no or only a few characters need to be replaced and all or most of the characters should be copied.

The state-of-the-art method for CSC is to employ BERT to accomplish the task. Our preliminary experiments show that the performance of the approach can be improved if the erroneous characters are designated (cf., section 3.6). In general the BERT based method tends to make no correction (or just copy the original characters). Our interpretation is that in pre-training of BERT only 15% of the characters are masked for prediction, resulting in learning of a model which does not possess enough capacity for error detection. This motivates us to devise a new model.
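For illustration, the BERT-alone baseline discussed above (predicting the most likely character from a candidate list at a given position) can be sketched with the Hugging Face transformers library. This is a minimal sketch under our own assumptions, not the authors' code; the checkpoint name and the small confusion set are illustrative.

```python
# Minimal sketch of a BERT-alone CSC baseline (not the authors' implementation):
# mask one suspect position and let a masked-language-model head score the
# candidate replacement characters. Checkpoint and confusion set are assumptions.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()

sentence = list("埃及有金子塔")        # the "wrong" sentence from Table 1
position = 4                           # index of the suspect character "子"
candidates = ["子", "字", "籽"]        # hypothetical confusion set for "子"

# Replace the suspect character with [MASK] and score the candidates.
tokens = ["[CLS]"] + sentence + ["[SEP]"]
tokens[1 + position] = "[MASK]"
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    logits = model(input_ids).logits[0, 1 + position]

scores = {c: logits[tokenizer.convert_tokens_to_ids(c)].item() for c in candidates}
print(max(scores, key=scores.get))     # expected to prefer "字" (pyramid)
```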
2.2 Model

We propose a novel neural network model called Soft-Masked BERT for CSC, as illustrated in Figure 1. Soft-Masked BERT is composed of a detection network based on Bi-GRU and a correction network based on BERT. The detection network predicts the probabilities of errors and the correction network predicts the probabilities of error corrections, while the former passes its prediction results to the latter using soft masking.

Figure 1: Architecture of Soft-Masked BERT

More specifically, our method first creates an embedding for each character in the input sentence, referred to as the input embedding. Next, it takes the sequence of embeddings as input and outputs the probabilities of errors for the sequence of characters (embeddings) using the detection network.

The input to the detection network is the sequence of embeddings, where each embedding is the sum of the word embedding, position embedding, and segment embedding of the character, as in BERT. The output is a sequence of labels G = (g_1, g_2, ..., g_n), where g_i denotes the label of the i-th character; 1 means the character is incorrect and 0 means it is correct. For each character there is a probability p_i representing the likelihood of its label being 1. The higher p_i is, the more likely the character is incorrect. In this work, we realize the detection network as a bidirectional GRU (Bi-GRU). For each character of the sequence, the probability of error p_i is computed from the corresponding Bi-GRU hidden state.
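The soft-masking connection described above can be written as e'_i = p_i · e_mask + (1 − p_i) · e_i, where e_mask is the embedding of the [MASK] token; when p_i = 1 it degenerates to hard masking, and when p_i = 0 the original embedding is kept. Below is a minimal PyTorch sketch of this connection under our own naming, an illustration of the idea rather than the authors' released implementation.

```python
# Minimal sketch of the soft-masking connection (illustrative, not the authors'
# code): a Bi-GRU detector predicts an error probability p_i per character, and
# the input embedding is interpolated with the [MASK] embedding weighted by p_i.
import torch
import torch.nn as nn

class SoftMasking(nn.Module):
    def __init__(self, hidden_size, mask_token_embedding):
        super().__init__()
        self.detector = nn.GRU(hidden_size, hidden_size // 2,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(hidden_size, 1)
        # Embedding vector of [MASK], e.g. taken from BERT's embedding table.
        self.register_buffer("mask_embedding", mask_token_embedding)

    def forward(self, input_embeddings):            # (batch, seq_len, hidden)
        hidden, _ = self.detector(input_embeddings)
        p = torch.sigmoid(self.classifier(hidden))   # (batch, seq_len, 1) error prob
        # Soft masking: p=1 reduces to hard masking, p=0 keeps the input embedding.
        soft_masked = p * self.mask_embedding + (1 - p) * input_embeddings
        return soft_masked, p.squeeze(-1)

# Usage sketch: feed `soft_masked` into the BERT encoder (correction network) and
# train jointly with a detection loss on p and a correction loss on BERT's output.
```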
Recommended publications
  • Malware Classification with BERT
San Jose State University, SJSU ScholarWorks, Master's Projects, Master's Theses and Graduate Research, Spring 5-25-2021. Malware Classification with BERT, Joel Lawrence Alvares. Follow this and additional works at: https://scholarworks.sjsu.edu/etd_projects. Part of the Artificial Intelligence and Robotics Commons, and the Information Security Commons. Malware Classification with Word Embeddings Generated by BERT and Word2Vec. Presented to the Department of Computer Science, San José State University, in partial fulfillment of the requirements for the degree, by Joel Alvares, May 2021. The Designated Project Committee approves the project titled "Malware Classification with BERT" by Joel Lawrence Alvares, approved for the Department of Computer Science, San Jose State University, May 2021: Prof. Fabio Di Troia, Prof. William Andreopoulos, Prof. Katerina Potika (Department of Computer Science). ABSTRACT: Malware classification is used to distinguish unique types of malware from each other. This project aims to carry out malware classification using word embeddings, which are used in Natural Language Processing (NLP) to identify and evaluate the relationship between words of a sentence. Word embeddings are generated by BERT and Word2Vec for malware samples to carry out multi-class classification. BERT is a transformer-based pre-trained natural language processing (NLP) model which can be used for a wide range of tasks such as question answering, paraphrase generation and next sentence prediction. However, the attention mechanism of a pre-trained BERT model can also be used in malware classification by capturing information about the relation between each opcode and every other opcode belonging to a malware family.
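As a rough illustration of the workflow this abstract describes, fixed-length embedding vectors (e.g., produced by BERT or Word2Vec over opcode sequences) can be fed to an ordinary multi-class classifier. The sketch below uses random placeholder features and scikit-learn; it is our illustration, not the project's data or code.

```python
# Hedged sketch: multi-class malware-family classification on top of
# precomputed embedding vectors. X and y are random placeholders standing in
# for real per-sample embeddings and family labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X = np.random.rand(200, 768)            # placeholder: one 768-d embedding per sample
y = np.random.randint(0, 5, size=200)   # placeholder: 5 hypothetical malware families

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```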
  • BROS: A Pre-Trained Language Model for Understanding Texts in Document
Under review as a conference paper at ICLR 2021. BROS: A PRE-TRAINED LANGUAGE MODEL FOR UNDERSTANDING TEXTS IN DOCUMENT. Anonymous authors, paper under double-blind review. ABSTRACT Understanding documents from their visual snapshots is an emerging and challenging problem that requires both advanced computer vision and NLP methods. Although the recent advance in OCR enables the accurate extraction of text segments, it is still challenging to extract key information from documents due to the diversity of layouts. To compensate for the difficulties, this paper introduces a pre-trained language model, BERT Relying On Spatiality (BROS), that represents and understands the semantics of spatially distributed texts. Different from previous pre-training methods on 1D text, BROS is pre-trained on large-scale semi-structured documents with a novel area-masking strategy while efficiently including the spatial layout information of input documents. Also, to generate structured outputs in various document understanding tasks, BROS utilizes a powerful graph-based decoder that can capture the relation between text segments. BROS achieves state-of-the-art results on four benchmark tasks: FUNSD, SROIE*, CORD, and SciTSR. Our experimental settings and implementation codes will be publicly available. 1 INTRODUCTION Document intelligence (DI), which understands industrial documents from their visual appearance, is a critical application of AI in business. One of the important challenges of DI is a key information extraction task (KIE) (Huang et al., 2019; Jaume et al., 2019; Park et al., 2019) that extracts structured information from documents such as financial reports, invoices, business emails, insurance quotes, and many others.
  • Information Extraction Based on Named Entity for Tourism Corpus
Information Extraction based on Named Entity for Tourism Corpus. Chantana Chantrapornchai and Aphisit Tunsakul, Dept. of Computer Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand. [email protected], [email protected]. Abstract— Tourism information is scattered around nowadays. To search for the information, it is usually time consuming to browse through the results from the search engine, select and view the details of each accommodation. In this paper, we present a methodology to extract particular information from full text returned from the search engine to facilitate the users. Then, the users can specifically look to the desired relevant information. The approach can be used for the same task in other domains. The main steps are 1) building training data and 2) building the recognition model. First, the tourism data is gathered and the vocabularies are built. The raw corpus is used to train for creating vocabulary embedding. Also, it is used for creating annotated data. The ontology is extracted based on HTML web structure, and the corpus is based on WordNet. For these approaches, the time consuming process is the annotation, which is to annotate the type of named entity. In this paper, we target the tourism domain, and aim to extract particular information helping for ontology data acquisition. We present the framework for the given named entity extraction. Starting from the web information scraping process, the data are selected based on the HTML tag for corpus building. The data is used for model creation for automatic
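A minimal illustration of the named-entity extraction step described above, using a generic pretrained token-classification model from the transformers library rather than the authors' framework; the checkpoint name and example sentence are our assumptions.

```python
# Hedged sketch of NER-based information extraction: run a pretrained
# token-classification model over scraped text and collect the entities.
# The checkpoint is a generic public English NER model used for illustration.
from transformers import pipeline

ner = pipeline("token-classification", aggregation_strategy="simple",
               model="dslim/bert-base-NER")

text = "The Grand Palace Hotel is located near Chatuchak Market in Bangkok."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```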
  • NLP with BERT: Sentiment Analysis Using SAS® Deep Learning and DLPy
    Paper SAS4429-2020 NLP with BERT: Sentiment Analysis Using SAS® Deep Learning and DLPy Doug Cairns and Xiangxiang Meng, SAS Institute Inc. ABSTRACT A revolution is taking place in natural language processing (NLP) as a result of two ideas. The first idea is that pretraining a deep neural network as a language model is a good starting point for a range of NLP tasks. These networks can be augmented (layers can be added or dropped) and then fine-tuned with transfer learning for specific NLP tasks. The second idea involves a paradigm shift away from traditional recurrent neural networks (RNNs) and toward deep neural networks based on Transformer building blocks. One architecture that embodies these ideas is Bidirectional Encoder Representations from Transformers (BERT). BERT and its variants have been at or near the top of the leaderboard for many traditional NLP tasks, such as the general language understanding evaluation (GLUE) benchmarks. This paper provides an overview of BERT and shows how you can create your own BERT model by using SAS® Deep Learning and the SAS DLPy Python package. It illustrates the effectiveness of BERT by performing sentiment analysis on unstructured product reviews submitted to Amazon. INTRODUCTION Providing a computer-based analog for the conceptual and syntactic processing that occurs in the human brain for spoken or written communication has proven extremely challenging. As a simple example, consider the abstract for this (or any) technical paper. If well written, it should be a concise summary of what you will learn from reading the paper. As a reader, you expect to see some or all of the following: • Technical context and/or problem • Key contribution(s) • Salient result(s) If you were tasked to create a computer-based tool for summarizing papers, how would you translate your expectations as a reader into an implementable algorithm? This is the type of problem that the field of natural language processing (NLP) addresses.
  • Unified Language Model Pre-Training for Natural Language Understanding and Generation
Unified Language Model Pre-training for Natural Language Understanding and Generation. Li Dong∗, Nan Yang∗, Wenhui Wang∗, Furu Wei∗†, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon. Microsoft Research. {lidong1,nanya,wenwan,fuwei}@microsoft.com {xiaodl,yuwan,jfgao,mingzhou,hon}@microsoft.com Abstract This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm. 1 Introduction Language model (LM) pre-training has substantially advanced the state of the art across a variety of natural language processing tasks [8, 29, 19, 31, 9, 1].
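The self-attention masks mentioned in this abstract can be illustrated with a small sketch; this is our own illustration of the masking idea, not code from the UNILM repository.

```python
# Illustrative sketch of the three self-attention mask types: bidirectional
# tokens may attend everywhere, unidirectional tokens only to the left, and in
# the sequence-to-sequence setting source tokens attend within the source while
# target tokens attend to the source plus the already-generated target prefix.
import torch

def attention_mask(src_len, tgt_len, mode):
    n = src_len + tgt_len
    if mode == "bidirectional":
        return torch.ones(n, n)
    if mode == "unidirectional":
        return torch.tril(torch.ones(n, n))
    if mode == "seq2seq":
        mask = torch.zeros(n, n)
        mask[:, :src_len] = 1                    # every position sees the source segment
        mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))
        return mask
    raise ValueError(mode)

print(attention_mask(3, 2, "seq2seq"))  # 1 = allowed to attend, 0 = masked out
```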
  • Spell Checker
International Journal of Scientific and Research Publications, Volume 5, Issue 4, April 2015, ISSN 2250-3153. SPELL CHECKER. Vibhakti V. Bhaire, Ashiki A. Jadhav, Pradnya A. Pashte, Mr. Magdum P.G., Computer Engineering, Rajendra Mane College of Engineering and Technology. Abstract— The Spell Checker project adds spell checking and correction functionality to a Windows-based application by using an autosuggestion technique. It helps the user to reduce typing work by identifying any spelling errors and making it easier to repeat searches. The main goal of the spell checker is to provide unified treatment of various spell corrections. Firstly, the spell checking and correcting problem will be formally described in order to provide a better understanding of this task. A spell checker and corrector is either a stand-alone application capable of processing a string of words or a text, or an embedded tool which is part of a larger application such as a word processor. Various search and replace algorithms are adopted to fit into the domain of the spell checker. Spell checking identifies the words that are valid in the language as well as misspelled words. For handling morphology, a language-dependent algorithm is an additional step. The spell checker will need to consider different forms of the same word, such as verbal forms, contractions, possessives, and plurals, even for a lightly inflected language like English. For many other languages, such as those featuring agglutination and more complex declension and conjugation, this part of the process is more complicated. A spell checker carries out the following processes: it scans the text and selects the words contained in it; it then compares each word with a known list of correctly spelled words (i.e.
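The two processes listed above (scan the text, then compare each word against a list of correctly spelled words) can be sketched in a few lines; the word list and the similarity-based suggestions below are simplified illustrations of ours, not the system the paper describes.

```python
# Minimal dictionary-based spell-check sketch: step 1 selects the words in the
# text, step 2 compares each word with a known word list and proposes
# close matches as suggestions. Word list is a tiny illustrative placeholder.
import re
import difflib

known_words = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def check(text, max_suggestions=3):
    report = {}
    for word in re.findall(r"[A-Za-z']+", text.lower()):   # step 1: select the words
        if word not in known_words:                         # step 2: compare with the list
            report[word] = difflib.get_close_matches(word, known_words, n=max_suggestions)
    return report

print(check("The quick brwn fox jumps ovre the lazy dog"))
# e.g. {'brwn': ['brown'], 'ovre': ['over']}
```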
  • Question Answering by BERT
Running head: QUESTION AND ANSWERING USING BERT. Question and Answering Using BERT. Suman Karanjit, Computer Science Major, Minnesota State University Moorhead. Link to GitHub: https://github.com/skaranjit/BertQA. Table of Contents: Abstract; Introduction; SQuAD; BERT Explained; What is BERT?; Architecture; Input Processing; Getting Answer; Setting Up the Environment.
  • Finite State Recognizer and String Similarity Based Spelling Checker for Bangla
FINITE STATE RECOGNIZER AND STRING SIMILARITY BASED SPELLING CHECKER FOR BANGLA. A thesis submitted to the Department of Computer Science and Engineering of BRAC University by Munshi Asadullah, in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, Fall 2007, BRAC University, Dhaka, Bangladesh. DECLARATION I hereby declare that this thesis is based on the results found by me. Materials of work found by other researchers are mentioned by reference. This thesis, neither in whole nor in part, has been previously submitted for any degree. Signature of Supervisor. Signature of Author. ACKNOWLEDGEMENTS Special thanks to my supervisor Mumit Khan without whom this work would have been very difficult. Thanks to Zahurul Islam for providing all the support that was required for this work. Also special thanks to the members of CRBLP at BRAC University, who managed to take out some time from their busy schedule to support, help and give feedback on the implementation of this work. Abstract A crucial figure of merit for a spelling checker is not just whether it can detect misspelled words, but also how it ranks the suggestions for the word. Spelling checker algorithms using edit distance methods tend to produce a large number of possibilities for misspelled words. We propose an alternative approach to checking the spelling of Bangla text that uses a finite state automaton (FSA) to probabilistically create the suggestion list for a misspelled word.
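As a rough illustration of the edit-distance suggestion ranking this abstract contrasts itself with (the thesis itself uses a finite state automaton), candidates can simply be ranked by Levenshtein distance to the misspelled word; the tiny word list below is our own placeholder.

```python
# Hedged sketch of edit-distance-based suggestion ranking (a generic
# illustration, not the thesis' FSA-based system): compute Levenshtein
# distance to every dictionary word and rank candidates by that distance.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word, dictionary, k=5):
    return sorted(dictionary, key=lambda w: levenshtein(word, w))[:k]

dictionary = ["আম", "আমরা", "আমার", "আমাদের", "কলম"]   # tiny illustrative Bangla word list
print(suggest("আমা", dictionary))
```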
  • Spell Checking in Computer-Assisted Language Learning: a Study of Misspellings by Nonnative Writers of German
SPELL CHECKING IN COMPUTER-ASSISTED LANGUAGE LEARNING: A STUDY OF MISSPELLINGS BY NONNATIVE WRITERS OF GERMAN. Anne Rimrott, Diplom, Justus Liebig Universität, 2002. THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in the Department of Linguistics. © Anne Rimrott 2005, SIMON FRASER UNIVERSITY, Spring 2005. All rights reserved. This work may not be reproduced in whole or in part, by photocopy or other means, without permission of the author. APPROVAL Name: Anne Rimrott. Degree: Master of Arts. Title of Thesis: Spell Checking in Computer-Assisted Language Learning: A Study of Misspellings by Nonnative Writers of German. Examining Committee: Chair: Dr. Alexei Kochetov, Assistant Professor, Department of Linguistics; Dr. Trude Heift, Senior Supervisor, Associate Professor, Department of Linguistics; Dr. Chung-hye Han, Supervisor, Assistant Professor, Department of Linguistics; Dr. Maria Teresa Taboada, Supervisor, Assistant Professor, Department of Linguistics; Dr. Mathias Schulze, External Examiner, Assistant Professor, Department of Germanic and Slavic Studies, University of Waterloo. Date Defended/Approved: SIMON FRASER UNIVERSITY PARTIAL COPYRIGHT LICENCE The author, whose copyright is declared on the title page of this work, has granted to Simon Fraser University the right to lend this thesis, project or extended essay to users of the Simon Fraser University Library, and to make partial or single copies only for such users or in response to a request from the library of any other university, or other educational institution, on its own behalf or for one of its users. The author has further granted permission to Simon Fraser University to keep or make a digital copy for use in its circulating collection.
  • Exploiting Wikipedia Semantics for Computing Word Associations
Exploiting Wikipedia Semantics for Computing Word Associations, by Shahida Jabeen. A thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science, Victoria University of Wellington, 2014. Abstract Semantic association computation is the process of automatically quantifying the strength of a semantic connection between two textual units based on various lexical and semantic relations such as hyponymy (car and vehicle) and functional associations (bank and manager). Humans can infer implicit relationships between two textual units based on their knowledge about the world and their ability to reason about that knowledge. Automatically imitating this behavior is limited by restricted knowledge and poor ability to infer hidden relations. Various factors affect the performance of automated approaches to computing semantic association strength. One critical factor is the selection of a suitable knowledge source for extracting knowledge about the implicit semantic relations. In the past few years, semantic association computation approaches have started to exploit web-originated resources as substitutes for conventional lexical semantic resources such as thesauri, machine readable dictionaries and lexical databases. These conventional knowledge sources suffer from limitations such as coverage issues, high construction and maintenance costs and limited availability. To overcome these issues one solution is to use the wisdom of crowds in the form of collaboratively constructed knowledge sources. An excellent example of such knowledge sources is Wikipedia, which stores detailed information not only about the concepts themselves but also about various aspects of the relations among concepts. The overall goal of this thesis is to demonstrate that using Wikipedia for computing word association strength yields better estimates of humans' associations than the approaches based on other structured and unstructured knowledge sources.
  • TinyBERT: Distilling BERT for Natural Language Understanding
TinyBERT: Distilling BERT for Natural Language Understanding. Xiaoqi Jiao1∗, Yichun Yin2∗, Lifeng Shang2, Xin Jiang2, Xiao Chen2, Linlin Li3, Fang Wang1 and Qun Liu2. 1Key Laboratory of Information Storage System, Huazhong University of Science and Technology, Wuhan National Laboratory for Optoelectronics; 2Huawei Noah's Ark Lab; 3Huawei Technologies Co., Ltd. {jiaoxiaoqi,[email protected] {yinyichun,shang.lifeng,[email protected] {chen.xiao2,lynn.lilinlin,[email protected] Abstract Language model pre-training, such as BERT, has significantly improved the performances of many natural language processing tasks. However, pre-trained language models are usually computationally expensive, so it is difficult to efficiently execute them on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models. By leveraging this new KD method, the plenty of knowledge encoded in a large "teacher" BERT can be effectively transferred to a small "student" TinyBERT. ... natural language processing (NLP). Pre-trained language models (PLMs), such as BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020), T5 (Raffel et al., 2019) and ELECTRA (Clark et al., 2020), have achieved great success in many NLP tasks (e.g., the GLUE benchmark (Wang et al., 2018) and the challenging multi-hop reasoning task (Ding et al., 2019)). However, PLMs usually have a large number of parameters and take long inference time, which are difficult to be deployed on edge devices such as mobile phones. Recent studies (Kovaleva et al., 2019; Michel et al., 2019; Voita et al., 2019) demonstrate that there is redundancy in PLMs.
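The Transformer distillation idea summarized above can be illustrated with a simple layer-wise loss; the sketch below shows a generic distillation objective of our own devising (hidden-state matching plus a soft prediction loss), not the TinyBERT implementation.

```python
# Illustrative PyTorch sketch of layer-wise knowledge distillation: the student
# mimics the teacher's hidden states (through a learned projection when the
# widths differ) and the teacher's soft output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_hidden, teacher_hidden, student_logits,
                      teacher_logits, projection, temperature=1.0):
    # Hidden-state (and, in TinyBERT, also attention) matching via MSE.
    hidden_loss = F.mse_loss(projection(student_hidden), teacher_hidden)
    # Soft cross-entropy between student and teacher output distributions.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    prediction_loss = -(soft_targets * log_probs).sum(dim=-1).mean()
    return hidden_loss + prediction_loss

# Example with random tensors and a linear projection from a 312-d student
# hidden size to a 768-d teacher hidden size (sizes are just illustrative).
proj = torch.nn.Linear(312, 768)
loss = distillation_loss(torch.randn(2, 8, 312), torch.randn(2, 8, 768),
                         torch.randn(2, 3), torch.randn(2, 3), proj)
print(loss.item())
```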
  • Cross-Lingual Geo-Parsing for Non-Structured Data
Cross-lingual geo-parsing for non-structured data. Judith Gelernter and Wei Zhang, Language Technologies Institute, School of Computer Science, Carnegie Mellon University. [email protected], [email protected]. ABSTRACT A geo-parser automatically identifies location words in a text. We have generated a geo-parser specifically to find locations in unstructured Spanish text. Our novel geo-parser architecture combines the results of four parsers: a lexico-semantic Named Location Parser, a rules-based building parser, a rules-based street parser, and a trained Named Entity Parser. Each parser has different strengths: the Named Location Parser is strong in recall, the Named Entity Parser is strong in precision, and the building and street parsers find buildings and streets that the others are not designed to do. To test our Spanish geo-parser performance, we compared the output of Spanish text through our Spanish geo- ... to cross-language geo-information retrieval. We will examine Strötgen's view that normalized location information is language-independent [16]. Difficulties in cross-language experiments are exacerbated when the languages use different alphabets, and when the focus is on proper names, as in this case, the names of locations. Geo-parsing structured versus unstructured text requires different language processing tools. Twitter messages are challenging to geo-parse because of their non-grammaticality. To handle non-grammatical forms, we use a Twitter tokenizer rather than a word tokenizer. We use an English part of speech tagger created for tweets, which was not available to us in Spanish.