A Comparative Study for Arabic Word Sense Disambiguation Using Document Preprocessing and Machine Learning Techniques


A Comparative Study for Arabic Word Sense Disambiguation Using Document Preprocessing and Machine Learning Techniques

ALTIC-2011, Alexandria, Egypt

Mohamed M. El-Gamml, M. Waleed Fakhr
Arab Academy for Science and Technology, Heliopolis, Cairo, Egypt
[email protected], [email protected]

Mohsen A. Rashwan, Almoataz B. Al-Said
Faculty of Engineering, Dar Al-Ulum College, Cairo University, Giza, Egypt
[email protected], [email protected]

Keywords: Word Sense Disambiguation, Support Vector Machine (SVM), Naïve Bayesian Classifier (NBC), k-means clustering, Levenshtein Distance, Latent Semantic Analysis (LSA).

Abstract: Word sense disambiguation is a core problem in many language processing tasks and was recognized at the very beginning of scientific interest in machine translation and artificial intelligence. In this paper, we explore the use of the Support Vector Machine (SVM) classifier to solve the Word Sense Disambiguation problem in a supervised manner, after applying the Levenshtein Distance algorithm to measure the matching distance between words, using lexical samples of five Arabic words. The performance of the proposed technique is compared to supervised and unsupervised machine learning algorithms, namely the Naïve Bayes Classifier (NBC) and Latent Semantic Analysis (LSA) with k-means clustering, which represent the baseline and state-of-the-art algorithms for WSD.

1 INTRODUCTION

Anyone who gets the joke when they hear a pun will realize that lexical ambiguity is a fundamental characteristic of language: words can have more than one distinct meaning. So why doesn't text seem like one long string of puns? After all, lexical ambiguity is pervasive [1]. Lexical disambiguation in its broadest definition is nothing less than determining the meaning of every word in context, which appears to be a largely unconscious process in people. As a computational problem it is often described as "AI-complete", that is, a problem whose solution presupposes a solution to complete natural-language understanding or common-sense reasoning [2].

In the field of computational linguistics, the problem is generally called word sense disambiguation (WSD), and is defined as the problem of computationally determining which "sense" of a word is activated by the use of the word in a particular context. WSD is essentially a task of classification: word senses are the classes, the context provides the evidence, and each occurrence of a word is assigned to one or more of its possible classes based on the evidence [1].

Arabic, like most natural languages, is ambiguous, since many words have multiple senses. The correct meaning of an ambiguous word depends on the context in which it occurs. A speaker of the language can usually resolve this ambiguity without difficulty; however, identifying the specific meaning of a word computationally, through Word Sense Disambiguation (WSD), is not an easy task. Although WSD may not be considered a standalone application by itself, it is an integral part of many applications such as Machine Translation [3, 4], Information Retrieval [5, 6], text mining, Lexicography [7], and Information Extraction [8].

Approaches to WSD are often classified according to the main source of knowledge used in sense differentiation. Methods that rely primarily on dictionaries, thesauri, and lexical knowledge bases, without using any corpus evidence, are termed dictionary-based or knowledge-based. Methods that eschew (almost) completely external information and work directly from raw un-annotated corpora are termed unsupervised methods (adopting terminology from machine learning). Included in this category are methods that use word-aligned corpora to gather cross-linguistic evidence for sense discrimination. Finally, supervised and semi-supervised WSD make use of annotated corpora to train from, or as seed data in a bootstrapping process.

In this paper, we explore the idea of using the Levenshtein Distance algorithm to calculate the distance between each pair of words, which helps to decide that, for example, (سشطبُ, ٍسشطِ, سشطبّٚ, اىسشطبُ) are derivations of the same word, and then using a supervised or unsupervised technique to classify or cluster the senses.

This paper is organized as follows. In Section 2, we briefly describe some related work in the area of Word Sense Disambiguation. The preprocessing performed on the documents used for training the proposed system, and also used in the testing phase, is described in Section 3. Supervised corpus-based methods for WSD and the learning algorithm used in this paper for word sense disambiguation are discussed in Section 4. Unsupervised corpus-based methods for WSD and the related algorithm used in this paper are discussed in Section 5. The experiments and the experimental results are presented in Section 6. In Section 7 we summarize the conclusions of our work and suggest some ideas for future work.

2 RELATED WORK

WSD was first formulated as a distinct computational task during the early days of machine translation in the late 1940s, making it one of the oldest problems in computational linguistics. Weaver (1949) introduced the problem in his now famous memorandum on machine translation. Later, in 1950, Kaplan observed that sense resolution given two words on either side of the target word was not significantly better or worse than when given the entire sentence. Several researchers since Kaplan's work, e.g., Koutsoudas and Korfhage in 1956 on Russian, Masterman in 1961, Gougenheim and Michéa in 1961 on French, and Choueka and Lusignan in 1993, reported the same phenomenon.

WSD was resurrected in the 1970s within artificial intelligence (AI) research on full natural language understanding. In this spirit, Wilks (1975) developed "preference semantics", one of the first systems to explicitly account for WSD.

The 1980s were a turning point for WSD. Large-scale lexical resources and corpora became available, so handcrafting could be replaced with knowledge extracted automatically from the resources (Wilks et al. 1990). Lesk's (1986) short but extremely seminal paper used the overlap of word sense definitions in the Oxford Advanced Learner's Dictionary of Current English (OALD) to resolve word senses. Dictionary-based WSD had begun, and the relationship of WSD to lexicography became explicit. For example, Guthrie et al. (1991) used the subject codes (e.g., Economics, Engineering, etc.) in the Longman Dictionary of Contemporary English (LDOCE) (Procter 1978) on top of Lesk's method.

In 1996, Mooney compared Naïve Bayes with a Neural Network, Decision Tree/List learners, Disjunctive and Conjunctive Normal Form learners, and a perceptron when disambiguating six senses of "line". Pedersen in 1998 compared Naïve Bayes with a Decision Tree, a Rule-Based Learner, a Probabilistic Model, etc., when disambiguating "line" and 12 other words. All of these researchers found that the Naïve Bayesian Classifier performed as well as any of the other methods.

Let us take some examples of previous work in the field of Arabic word sense disambiguation. In 2003, Mona T. Diab, in her Ph.D. thesis "Word Sense Disambiguation within a Multilingual Framework", used an unsupervised machine learning approach called bootstrapping. Achraf Chalabi, a Sakhr researcher, introduced in 1998 a word sense disambiguation algorithm, used in the Sakhr Arabic-English computer-aided translation system, that relies on the thematic words of a given context to choose the appropriate sense of an ambiguous word [1]. Also, Soha M. Eid [21], in her Ph.D. thesis "A Comparative Study of Rocchio Classifier Applied to Supervised WSD Using Arabic Lexical Samples", reports that the Rocchio classifier outperforms the other classification approaches with an overall accuracy of 88%.

3 WORD MATCHING AND DOCUMENT PREPROCESSING

The Levenshtein distance is a metric for measuring the amount of difference between two sequences (i.e., an edit distance). The term edit distance is often used to refer specifically to Levenshtein distance [9-11].

Levenshtein distance (LD) is a measure of the similarity between two strings, which we will refer to as the source string (s) and the target string (t). The distance is the number of deletions, insertions, or substitutions required to transform s into t. For example:

- If s is "test" and t is "test", then LD(s,t) = 0, because no transformations are needed; the strings are already identical.
- If s is "test" and t is "tent", then LD(s,t) = 1, because one substitution (change "s" to "n") is sufficient to transform s into t.

In this work we introduce a simple algorithm to detect similarity between two words, as follows. If S is the source term and D is the destination term:

if (Length(S) < 2 || Length(D) < 2)
    return false;

Different supervised machine learning approaches have been employed in supervised WSD: a classifier is trained on a set of labelled instances of the ambiguous word to create a statistical model, and the generated model is then applied to unlabelled instances of the ambiguous word to decide their correct sense. In this work, two different supervised learning techniques are used to solve the WSD problem: the Naïve Bayes Classifier (NBC) and the Support Vector Machine (SVM).

4.1 Naïve Bayes Classifier (NBC)

Naïve Bayes is the simplest representative of probabilistic learning methods [13]. In this model, an example is assumed to be
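The excerpt breaks off at this point. As a supplement, the Levenshtein-based word-matching step outlined in Section 3 can be sketched in a few lines; the length guard mirrors the fragment quoted above, but the normalized-distance threshold (0.5), the function names, and the Arabic example words are assumptions for illustration, not values taken from the paper.

```python
def levenshtein(s: str, t: str) -> int:
    """Classic dynamic-programming edit distance (insert / delete / substitute)."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))          # distances for the empty prefix of s
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j] + 1,          # deletion
                          curr[j - 1] + 1,      # insertion
                          prev[j - 1] + cost)   # substitution
        prev = curr
    return prev[n]

def same_derivation(source: str, dest: str, threshold: float = 0.5) -> bool:
    """Heuristic check that two surface forms derive from the same base word."""
    if len(source) < 2 or len(dest) < 2:        # guard taken from the quoted fragment
        return False
    distance = levenshtein(source, dest)
    return distance / max(len(source), len(dest)) <= threshold   # assumed threshold

# Example: group surface forms of a word before classification or clustering
print(levenshtein("test", "tent"))          # 1
print(same_derivation("كاتب", "كاتبون"))     # True under the assumed threshold
```

In a full system such a check would be applied across the lexical sample so that inflected variants contribute evidence to the same sense classes before the NBC, SVM, or LSA step.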
Recommended publications
  • Intelligent Chat Bot
    INTELLIGENT CHAT BOT. A. Mohamed Rasvi, V.V. Sabareesh, V. Suthajebakumari, Computer Science and Engineering, Kamaraj College of Engineering and Technology, India. ABSTRACT: This paper discusses the workflow of an intelligent chat bot powered by various artificial intelligence algorithms. The replies for messages in chats are trained against a set of predefined questions and chat messages. These trained data sets are stored in a database. Relying on one machine-learning algorithm showed inaccurate performance, so this bot is powered by four different machine-learning algorithms to make a decision. The inference engine pre-processes the received message and then matches it against the trained datasets based on the AI algorithms. AIML provides a similar way of replying to a message in online chat bots using a simple XML-based mechanism, but the method of employing AI provides more accurate replies than the widely used AIML on the Internet. This intelligent chat bot can be used to provide assistance for individuals, from answering simple queries to booking a ticket for a trip, and when trained properly it can even be used as a replacement for a teacher who teaches a subject, or to teach programming. Keywords: AIML, Artificial Intelligence, Chat bot, Machine-learning, String Matching. I. INTRODUCTION. Social networks are attracting masses and gaining huge momentum. They allow instant messaging and sharing features. Guides and technical help desks provide on-demand tech support through chat services or voice calls. Queries are taken from customers to the technical support team to clear their doubts, but this process needs a dedicated support team to answer users' queries, which is a lot of manpower.
  • An Ensemble Regression Approach for Ocr Error Correction
    AN ENSEMBLE REGRESSION APPROACH FOR OCR ERROR CORRECTION by Jie Mei Submitted in partial fulfillment of the requirements for the degree of Master of Computer Science at Dalhousie University Halifax, Nova Scotia March 2017 © Copyright by Jie Mei, 2017 Table of Contents List of Tables ................................... iv List of Figures .................................. v Abstract ...................................... vi List of Symbols Used .............................. vii Acknowledgements ............................... viii Chapter 1 Introduction .......................... 1 1.1 Problem Statement............................ 1 1.2 Proposed Model .............................. 2 1.3 Contributions ............................... 2 1.4 Outline ................................... 3 Chapter 2 Background ........................... 5 2.1 OCR Procedure .............................. 5 2.2 OCR-Error Characteristics ........................ 6 2.3 Modern Post-Processing Models ..................... 7 Chapter 3 Compositional Correction Frameworks .......... 9 3.1 Noisy Channel ............................... 11 3.1.1 Error Correction Models ..................... 12 3.1.2 Correction Inferences ....................... 13 3.2 Confidence Analysis ............................ 16 3.2.1 Error Correction Models ..................... 16 3.2.2 Correction Inferences ....................... 17 3.3 Framework Comparison ......................... 18 ii Chapter 4 Proposed Model ........................ 21 4.1 Error Detection .............................. 22
  • Practice with Python
    CSI4108-01 ARTIFICIAL INTELLIGENCE: Word Embedding / Text Processing, Practice with Python. 2018. 5. 11. Lee, Gyeongbok. Contents: Word Embedding (libraries: gensim, fastText; embedding alignment with two languages); Text/Language Processing (POS tagging with NLTK/koNLPy; text similarity with jellyfish). Gensim is an open-source vector space modeling and topic modeling toolkit implemented in Python, designed to handle large text collections using data streaming and efficient incremental algorithms, and usually used to build word vectors from a corpus. Tutorials are available at https://github.com/RaRe-Technologies/gensim/blob/develop/tutorials.md#tutorials and https://rare-technologies.com/word2vec-tutorial/; install with pip install gensim. Gensim for Word Embedding: enable logging; the input data is a list of token lists (e.g., the sentences "I have a car" and "I like the cat" become two lists of words), which you can build from a list of sentences. If your data is already preprocessed, with one sentence per line and tokens separated by whitespace, use LineSentence to load the file directly (an example corpus is at http://an.yonsei.ac.kr/corpus/example_corpus.txt; see https://radimrehurek.com/gensim/models/word2vec.html). If the input is spread over multiple files or the file size is large, use a custom iterator with yield (see https://rare-technologies.com/word2vec-tutorial/). gensim.models.Word2Vec parameters include min_count:
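As a concrete illustration of the workflow these slides describe, the short sketch below trains a word2vec model with gensim on a toy corpus. The corpus, the min_count value, and the query word are placeholders, and some keyword arguments vary slightly between gensim versions, so treat this as an assumed minimal example rather than the course's exact code.

```python
import logging
from gensim.models import Word2Vec

# Enable logging, as the slides recommend
logging.basicConfig(format="%(asctime)s : %(levelname)s : %(message)s",
                    level=logging.INFO)

# Input data: a list of tokenized sentences (a list of word lists)
sentences = [
    ["i", "have", "a", "car"],
    ["i", "like", "the", "cat"],
    ["the", "cat", "sat", "on", "the", "car"],
]
# For a preprocessed one-sentence-per-line file you could instead use:
# from gensim.models.word2vec import LineSentence
# sentences = LineSentence("example_corpus.txt")

# Train a small word2vec model; min_count=1 keeps even rare words
model = Word2Vec(sentences, min_count=1)

vector = model.wv["cat"]                      # embedding for "cat"
print(model.wv.most_similar("cat", topn=3))   # nearest neighbours in the toy space
```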
  • NLP - Assignment 2
    NLP - Assignment 2, Week 2, December 27th, 2016. 1. A 5-gram model is a ____ order Markov Model: (a) Six (b) Five (c) Four (d) Constant. Ans: c) Four. 2. For the following corpus C1 of 3 sentences, what is the total count of unique bigrams for which the likelihood will be estimated? Assume we do not perform any pre-processing, and we are using the corpus as given. (i) ice cream tastes better than any other food (ii) ice cream is generally served after the meal (iii) many of us have happy childhood memories linked to ice cream. (a) 22 (b) 27 (c) 30 (d) 34. Ans: b) 27. 3. Arrange the words "curry, oil and tea" in descending order, based on the frequency of their occurrence in the Google Books n-grams (the Google Books n-gram viewer is available at https://books.google.com/ngrams): (a) tea, oil, curry (b) curry, oil, tea (c) curry, tea, oil (d) oil, tea, curry. Ans: d) oil, tea, curry. 4. Given a corpus C2, the Maximum Likelihood Estimate (MLE) for the bigram "ice cream" is 0.4 and the count of occurrence of the word "ice" is 310. The likelihood of "ice cream" after applying add-one smoothing is 0.025, for the same corpus C2. What is the vocabulary size of C2? (a) 4390 (b) 4690 (c) 5270 (d) 5550. Ans: b) 4690. Questions 5 to 10 require you to analyse the data given in the corpus C3, using a programming language of your choice.
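A worked version of question 4 may make the add-one smoothing arithmetic clearer; the snippet below simply re-derives the answer from the numbers given in the question.

```python
# Given: MLE P(cream | ice) = count(ice cream) / count(ice) = 0.4, with count(ice) = 310
count_ice = 310
count_ice_cream = 0.4 * count_ice           # = 124

# Add-one (Laplace) smoothing: (count(ice cream) + 1) / (count(ice) + V) = 0.025
p_add_one = 0.025
V = (count_ice_cream + 1) / p_add_one - count_ice
print(V)                                     # 4690.0, matching answer (b)
```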
  • 3 Dictionaries and Tolerant Retrieval
    Online edition (c) 2009 Cambridge University Press (draft of April 1, 2009; feedback welcome). Chapter 3: Dictionaries and tolerant retrieval. In Chapters 1 and 2 we developed the ideas underlying inverted indexes for handling Boolean and proximity queries. Here, we develop techniques that are robust to typographical errors in the query, as well as alternative spellings. In Section 3.1 we develop data structures that help the search for terms in the vocabulary in an inverted index. In Section 3.2 we study the idea of a wildcard query: a query such as *a*e*i*o*u*, which seeks documents containing any term that includes all the five vowels in sequence. The * symbol indicates any (possibly empty) string of characters. Users pose such queries to a search engine when they are uncertain about how to spell a query term, or seek documents containing variants of a query term; for instance, the query automat* would seek documents containing any of the terms automatic, automation and automated. We then turn to other forms of imprecisely posed queries, focusing on spelling errors in Section 3.3. Users make spelling errors either by accident, or because the term they are searching for (e.g., Herman) has no unambiguous spelling in the collection. We detail a number of techniques for correcting spelling errors in queries, one term at a time as well as for an entire string of query terms. Finally, in Section 3.4 we study a method for seeking vocabulary terms that are phonetically close to the query term(s).
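To make the wildcard-query idea concrete, the small sketch below turns a query such as *a*e*i*o*u* into a regular expression and filters a toy vocabulary with it. The vocabulary and the helper name are made up for illustration, and this is not the permuterm or k-gram index machinery the chapter goes on to develop; it is only the query semantics.

```python
import re

def wildcard_to_regex(query: str) -> re.Pattern:
    """Translate a * wildcard query into an anchored regular expression."""
    parts = (re.escape(p) for p in query.split("*"))
    return re.compile("^" + ".*".join(parts) + "$")

vocabulary = ["automatic", "automation", "automated", "facetiously", "abstemious", "tea"]

print([t for t in vocabulary if wildcard_to_regex("automat*").match(t)])
# ['automatic', 'automation', 'automated']
print([t for t in vocabulary if wildcard_to_regex("*a*e*i*o*u*").match(t)])
# ['facetiously', 'abstemious']  (all five vowels in sequence)
```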
  • Use of Word Embedding to Generate Similar Words and Misspellings for Training Purpose in Chatbot Development
    USE OF WORD EMBEDDING TO GENERATE SIMILAR WORDS AND MISSPELLINGS FOR TRAINING PURPOSE IN CHATBOT DEVELOPMENT by SANJAY THAPA THESIS Submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at The University of Texas at Arlington December, 2019 Arlington, Texas Supervising Committee: Deokgun Park, Supervising Professor Manfred Huber Vassilis Athitsos Copyright © by Sanjay Thapa 2019 ACKNOWLEDGEMENTS I would like to thank Dr. Deokgun Park for allowing me to work and conduct the research in the Human Data Interaction (HDI) Lab in the College of Engineering at the University of Texas at Arlington. Dr. Park's guidance on the procedure to solve problems using different approaches has helped me to grow personally and intellectually. I am also very thankful to Dr. Manfred Huber and Dr. Vassilis Athitsos for their constant guidance and support in my research. I would like to thank all the members of the HDI Lab for their generosity and company during my time in the lab. I also would like to thank Peace Ossom Williamson, the director of Research Data Services at the library of the University of Texas at Arlington (UTA) for giving me the opportunity to work as a Graduate Research Assistant (GRA) in the dataCAVE. i DEDICATION I would like to dedicate my thesis especially to my mom and dad who have always been very supportive of me with my educational and personal endeavors. Furthermore, my sister and my brother played an indispensable role to provide emotional and other supports during my graduate school and research. ii LIST OF ILLUSTRATIONS Fig: 2.3: Rasa Architecture ……………………………………………………………….7 Fig 2.4: Chatbot Conversation without misspelling and with misspelling error.
  • Feature Combination for Measuring Sentence Similarity
    FEATURE COMBINATION FOR MEASURING SENTENCE SIMILARITY. Ehsan Shareghi Nojehdeh. A thesis in The Department of Computer Science and Software Engineering, presented in partial fulfillment of the requirements for the degree of Master of Computer Science, Concordia University, Montréal, Québec, Canada, April 2013. © Ehsan Shareghi Nojehdeh, 2013. Concordia University, School of Graduate Studies. This is to certify that the thesis prepared by Ehsan Shareghi Nojehdeh, entitled "Feature Combination for Measuring Sentence Similarity" and submitted in partial fulfillment of the requirements for the degree of Master of Computer Science, complies with the regulations of this University and meets the accepted standards with respect to originality and quality. Signed by the final examining committee: Chair: Dr. Peter C. Rigby; Examiner: Dr. Leila Kosseim; Examiner: Dr. Adam Krzyzak; Supervisor: Dr. Sabine Bergler. Approved: Chair of Department or Graduate Program Director; Dr. Robin A. L. Drew, Dean, Faculty of Engineering and Computer Science. Abstract: Feature Combination for Measuring Sentence Similarity. Ehsan Shareghi Nojehdeh. Sentence similarity is one of the core elements of Natural Language Processing (NLP) tasks such as Recognizing Textual Entailment and Paraphrase Recognition. Over the years, different systems have been proposed to measure similarity between fragments of texts. In this research, we propose a new two-phase supervised learning method which uses a combination of lexical features to train a model for predicting similarity between sentences. Each of these features covers an aspect of the text at an implicit or explicit level. The two-phase method uses all combinations of the features in the feature space and trains separate models based on each combination.
  • The Research of Weighted Community Partition Based on Simhash
    Available online at www.sciencedirect.com. Procedia Computer Science 17 (2013) 797-802. Information Technology and Quantitative Management (ITQM2013). The Research of Weighted Community Partition based on SimHash. Li Yang (a,b,c), Sha Ying (c), Shan Jixi (c), Xu Kai (c). (a) Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190; (b) Graduate School of Chinese Academy of Sciences, Beijing, 100049; (c) National Engineering Laboratory for Information Security Technologies, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, 100093. Abstract: Current methods of community partition mainly focus on using topological links, and consider little of the content-based information between users. In this paper we analyze the content similarity between users with the SimHash method, and compute the content-based weight of edges so as to attain a more reasonable community partition. Real data from Twitter are adopted for testing, which verifies the effectiveness of the proposed method. © 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of the organizers of the 2013 International Conference on Information Technology and Quantitative Management. Keywords: Social Network, Community Partition, Content-based Weight, SimHash, Information Entropy. 1. Introduction: The important characteristic of a social network is its community structure, but current methods mainly focus on the topological links, and research on the content is scarce. The reason is that we can't effectively utilize content-based information from the abundant short texts. So in this paper we hope to make full use of content-based information between users, on top of the topological links, so as to find more reasonable communities.
  • Hybrid Algorithm for Approximate String Matching to Be Used for Information Retrieval Surbhi Arora, Ira Pandey
    International Journal of Scientific & Engineering Research, Volume 9, Issue 11, November-2018, ISSN 2229-5518. Hybrid Algorithm for Approximate String Matching to be used for Information Retrieval. Surbhi Arora, Ira Pandey. Abstract: Conventional database searches require the user to hit a complete, correct query, denying the possibility that any legitimate typographical variation or spelling error in the query will simply fail the search procedure. An approximate string matching engine, often colloquially referred to as a fuzzy keyword search engine, could be a potential solution for all such search breakdowns. A fuzzy matching program can be summarized as Google's 'Did you mean: ...' or Yahoo's 'Including results for ...'. These programs are entitled to be called fuzzy since they don't employ strict checking and hence do not confine the results to 0 or 1, i.e. no match or exact match; rather, they are designed to handle the concept of partial truth, be it DNA mapping, record screening, or simply web browsing. With the help of a 0.4 million English words dictionary acting as the underlying data source, thereby qualifying as Big Data, the study involves use of Apache Hadoop's MapReduce programming paradigm to perform approximate string matching. The aim is to design a system prototype to demonstrate the practicality of our solution. Index Terms: Approximate String Matching, Fuzzy, Information Retrieval, Jaro-Winkler, Levenshtein, MapReduce, N-Gram, Edit Distance. 1 INTRODUCTION. A fuzzy keyword search engine employs approximate search matching algorithms for Information Retrieval (IR). Approximate string matching involves spotting all text matching the text pattern of a given search query but with a limited number of errors [1]. (Fig. 1 shows fuzzy search results for the query string "noting".) The fuzzy logic behind approximate string searching can be described by considering a fuzzy set F over a referential
  • Levenshtein Distance Based Information Retrieval Veena G, Jalaja G BNM Institute of Technology, Visvesvaraya Technological University
    International Journal of Scientific & Engineering Research, Volume 6, Issue 5, May-2015, ISSN 2229-5518. Levenshtein Distance based Information Retrieval. Veena G, Jalaja G, BNM Institute of Technology, Visvesvaraya Technological University. Abstract: In today's web-based applications, information retrieval is gaining popularity. There are many advances in information retrieval such as fuzzy search and proximity ranking. Fuzzy search retrieves relevant results containing words which are similar to the query keywords, so even if there are a few typographical errors in the query keywords the system will still retrieve relevant results. A query keyword can have many similar words; the words which are very similar to the query keywords are the ones considered in fuzzy search. Ranking plays an important role in web search, since the user expects to see relevant documents in the first few results. Proximity ranking arranges search results based on the distance between query keywords. In the current research, an information retrieval system is built to search the contents of text files with the features of fuzzy search and proximity ranking. The contents of html or pdf files are indexed for fast retrieval of search results, using a combination of indexes such as an inverted index and a trie index. Fuzzy search is implemented using Levenshtein Distance (the edit distance concept), and proximity ranking is done using a binning concept. Search engine evaluation is done using average interpolated precision at eleven recall points, i.e. at 0, 0.1, 0.2, ..., 0.9, 1.0, and a Precision-Recall graph is plotted to evaluate the system. Index Terms: stemming, stop words, fuzzy search, proximity ranking, dictionary, inverted index, trie index and binning.
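The evaluation mentioned here, average interpolated precision at the eleven standard recall points, can be summarized in a few lines. The ranked relevance judgments below are invented purely to show the computation and are not from the paper.

```python
def eleven_point_interpolated_ap(relevance, total_relevant):
    """Average interpolated precision at recall levels 0.0, 0.1, ..., 1.0.

    `relevance` is the ranked list of 0/1 judgments returned by the engine;
    interpolated precision at recall r is the max precision at any recall >= r.
    """
    precisions, recalls = [], []
    hits = 0
    for rank, rel in enumerate(relevance, start=1):
        hits += rel
        precisions.append(hits / rank)
        recalls.append(hits / total_relevant)

    points = [r / 10 for r in range(11)]
    interpolated = []
    for r in points:
        candidates = [p for p, rec in zip(precisions, recalls) if rec >= r]
        interpolated.append(max(candidates) if candidates else 0.0)
    return sum(interpolated) / len(points)

# Toy run: 10 retrieved documents, 4 relevant documents in the collection
print(eleven_point_interpolated_ap([1, 0, 1, 0, 0, 1, 0, 0, 1, 0], total_relevant=4))
```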
  • Thesis Submitted in Partial Fulfilment for the Degree of Master of Computing (Advanced) at Research School of Computer Science the Australian National University
    How to tell Real From Fake? Understanding how to classify human-authored and machine-generated text. Debashish Chakraborty. A thesis submitted in partial fulfilment for the degree of Master of Computing (Advanced) at the Research School of Computer Science, The Australian National University, May 2019. © Debashish Chakraborty 2019. Except where otherwise indicated, this thesis is my own original work. Debashish Chakraborty, 30 May 2019. Acknowledgments: First, I would like to express my gratitude to Dr Patrik Haslum for his support and guidance throughout the thesis, and for the useful feedback on my late drafts. Thank you to my parents and Sandra for their continuous support and patience. A special thanks to Sandra for reading my very "drafty" drafts. Abstract: Natural Language Generation (NLG) using Generative Adversarial Networks (GANs) has been an active field of research as it alleviates restrictions in conventional Language Modelling based text generators, e.g. Long-Short Term Memory (LSTM) networks. The adequacy of a GAN-based text generator depends on its capacity to classify human-written (real) and machine-generated (synthetic) text. However, traditional evaluation metrics used by these generators cannot effectively capture classification features in NLG tasks, such as creative writing. We prove this by using an LSTM network to almost perfectly classify sentences generated by a LeakGAN, a state-of-the-art GAN for long text generation. This thesis attempts a rare approach to understanding real and synthetic sentences using meaningful and interpretable features of long sentences (with at least 20 words). We analyse novelty and diversity features of real and synthetic sentences, generated by a LeakGAN, using three meaningful text dissimilarity functions: Jaccard Distance (JD), Normalised Levenshtein Distance (NLD) and Word Mover's Distance (WMD).
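For readers unfamiliar with the two simpler dissimilarity functions named in this abstract, the sketch below computes Jaccard Distance over token sets and a length-normalised Levenshtein distance for a pair of sentences; Word Mover's Distance needs pretrained embeddings and is omitted. The sentences and the normalisation choice (dividing by the longer string's length) are assumptions for illustration, not the thesis's exact definitions.

```python
def jaccard_distance(a: str, b: str) -> float:
    """1 - |A intersect B| / |A union B| over the sentences' token sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(sa & sb) / len(sa | sb)

def normalised_levenshtein(a: str, b: str) -> float:
    """Character-level edit distance divided by the longer string's length."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1] / max(len(a), len(b), 1)

real = "the old sailor watched the storm gather slowly over the grey harbour"
fake = "the old sailor watched the storm gather quickly over a grey harbour town"
print(jaccard_distance(real, fake), normalised_levenshtein(real, fake))
```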
  • Effective Search Space Reduction for Spell Correction Using Character Neural Embeddings
    Effective search space reduction for spell correction using character neural embeddings. Harshit Pande, Smart Equipment Solutions Group, Samsung Semiconductor India R&D, Bengaluru, India. [email protected]. Abstract: We present a novel, unsupervised, and distance measure agnostic method for search space reduction in spell correction using neural character embeddings. The embeddings are learned by skip-gram word2vec training on sequences generated from dictionary words in a phonetic information-retentive manner. We report a very high performance in terms of both success rates and reduction of search space on the Birkbeck spelling error corpus. To the best of our knowledge, this is the first application of word2vec to spell correction. 1 Introduction: Spell correction is now a pervasive feature, with presence in a wide range of applications such [...] of distance computations blows up the time complexity, thus hindering real-time spell correction. For Damerau-Levenshtein distance or similar edit distance-based measures, some approaches have been tried to reduce the time complexity of spell correction. Norvig (2007) does not check against all dictionary words, but instead generates all possible words up to a certain edit distance threshold from the misspelled word. Each such generated word is then checked in the dictionary for existence, and if it is found in the dictionary, it becomes a potentially correct spelling. There are two shortcomings of this approach. First, such search space reduction works only for edit distance-based measures. Second, this approach too leads to high time complexity when the edit distance threshold is greater than 2 and the set of possible characters is large. A large character set is a reality for the Unicode characters used in many Asian languages.
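The Norvig-style candidate generation this abstract criticizes is easy to reproduce, and doing so shows why the search space explodes as the alphabet grows. The sketch below counts distance-1 candidates for the English alphabet and for a hypothetical larger character range; the misspelled word and the Unicode range are placeholders chosen for illustration.

```python
import string

def edits1(word: str, alphabet: str) -> set:
    """All strings at edit distance 1 from `word` (deletes, transposes, replaces, inserts)."""
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces   = [l + c + r[1:] for l, r in splits if r for c in alphabet]
    inserts    = [l + c + r for l, r in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

word = "speling"
english = string.ascii_lowercase                          # 26 characters
large = "".join(chr(c) for c in range(0x0600, 0x06FF))    # e.g. an Arabic-range alphabet

print(len(edits1(word, english)))   # a few hundred candidates
print(len(edits1(word, large)))     # thousands: larger alphabets blow up the space
```

In a real corrector each candidate would still have to be checked against the dictionary, which is exactly the cost the embedding-based method in this paper aims to avoid.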