LIPN-CORE: Semantic Text Similarity using n-grams, WordNet, Syntactic Analysis, ESA and Information Retrieval based Features
Davide Buscaldi, Joseph Le Roux, Jorge J. García Flores, Adrian Popescu


To cite this version:
Davide Buscaldi, Joseph Le Roux, Jorge J. García Flores, Adrian Popescu. LIPN-CORE: Semantic Text Similarity using n-grams, WordNet, Syntactic Analysis, ESA and Information Retrieval based Features. Second Joint Conference on Lexical and Computational Semantics, Jun 2013, Atlanta, United States. pp.63. hal-00825054

HAL Id: hal-00825054
https://hal.archives-ouvertes.fr/hal-00825054
Submitted on 22 May 2013

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Davide Buscaldi, Joseph Le Roux, Jorge J. García Flores
Laboratoire d'Informatique de Paris Nord, CNRS (UMR 7030)
Université Paris 13, Sorbonne Paris Cité, F-93430 Villetaneuse, France
{buscaldi,joseph.le-roux,jgflores}@lipn.univ-paris13.fr

Adrian Popescu
CEA, LIST, Vision & Content Engineering Laboratory
F-91190 Gif-sur-Yvette, France
[email protected]

Abstract

This paper describes the system used by the LIPN team in the Semantic Textual Similarity task at SemEval 2013. It uses a support vector regression model, combining different text similarity measures that constitute the features. These measures include simple distances like Levenshtein edit distance, cosine, Named Entities overlap and more complex distances like Explicit Semantic Analysis, WordNet-based similarity, IR-based similarity, and a similarity measure based on syntactic dependencies.

1 Introduction

The Semantic Textual Similarity task (STS) at SemEval 2013 requires systems to grade the degree of similarity between pairs of sentences. It is closely related to other well-known tasks in NLP such as textual entailment, question answering or paraphrase detection. However, as noticed in (Bär et al., 2012), the major difference is that STS systems must give a graded, as opposed to binary, answer.

One of the most successful systems in SemEval 2012 STS (Bär et al., 2012) managed to grade pairs of sentences accurately by combining focused measures, either simple ones based on surface features (i.e. n-grams), more elaborate ones based on lexical semantics, or measures requiring external corpora such as Explicit Semantic Analysis, into a robust measure by using a log-linear regression model.

The LIPN-CORE system is built upon this idea of combining simple measures with a regression model to obtain a robust and accurate measure of textual similarity, using the individual measures as features for the global system. These measures include simple distances like Levenshtein edit distance, cosine, Named Entities overlap and more complex distances like Explicit Semantic Analysis, WordNet-based similarity, IR-based similarity, and a similarity measure based on syntactic dependencies.

The paper is organized as follows. Measures are presented in Section 2. Then the regression model, based on Support Vector Machines, is described in Section 3. Finally we discuss the results of the system in Section 4.

2 Text Similarity Measures

2.1 WordNet-based Conceptual Similarity (ProxiGenea)

First of all, sentences p and q are analysed in order to extract all the included WordNet synsets. For each WordNet synset, we keep noun synsets and put them into the sets of synsets associated to the sentences, C_p and C_q, respectively. If the synsets are in one of the other POS categories (verb, adjective, adverb), we look for their derivationally related forms in order to find a related noun synset: if there is one, we put this synset in C_p (or C_q). For instance, the word "playing" can be associated in WordNet to synset (v)play#2, which has two derivationally related forms corresponding to synsets (n)play#5 and (n)play#6: these are the synsets that are added to the synset set of the sentence. No disambiguation process is carried out, so we take all possible meanings into account.

Given C_p and C_q as the sets of concepts contained in sentences p and q, respectively, with |C_p| ≥ |C_q|, the conceptual similarity between p and q is calculated as:

    ss(p, q) = \frac{\sum_{c_1 \in C_p} \max_{c_2 \in C_q} s(c_1, c_2)}{|C_p|}    (1)

where s(c_1, c_2) is a conceptual similarity measure. Concept similarity can be calculated in different ways. For the participation in the 2013 Semantic Textual Similarity task, we used a variation of the Wu-Palmer formula (Wu and Palmer, 1994) named "ProxiGenea" (from the French Proximité Généalogique, genealogical proximity), introduced by (Dudognon et al., 2010), which is inspired by the analogy between a family tree and the concept hierarchy in WordNet. Among the different formulations proposed by (Dudognon et al., 2010), we chose the ProxiGenea3 variant, already used in the STS 2012 task by the IRIT team (Buscaldi et al., 2012). The ProxiGenea3 measure is defined as:

    s(c_1, c_2) = \frac{1}{1 + d(c_1) + d(c_2) - 2 \cdot d(c_0)}    (2)

where c_0 is the most specific concept that is present both in the synset path of c_1 and c_2 (that is, the Least Common Subsumer or LCS). The function returning the depth of a concept is noted with d.
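As an illustration of equations (1) and (2), the following sketch computes ProxiGenea3 and the sentence-level conceptual similarity on top of NLTK's WordNet interface. NLTK, the token-list input and the helper names are assumptions made for the example, not part of the original system; in particular, the depth function d is approximated here with NLTK's max_depth().

```python
# Sketch of equations (1) and (2), assuming NLTK's WordNet interface.
# d(c) is approximated with Synset.max_depth(); the paper's exact depth
# convention may differ slightly.
from nltk.corpus import wordnet as wn

def sentence_synsets(tokens):
    """Collect the noun synsets of a sentence; for other POS, add the
    derivationally related noun synsets. All senses are kept (no WSD)."""
    concepts = set()
    for tok in tokens:
        for syn in wn.synsets(tok):
            if syn.pos() == 'n':
                concepts.add(syn)
            else:
                for lemma in syn.lemmas():
                    for rel in lemma.derivationally_related_forms():
                        if rel.synset().pos() == 'n':
                            concepts.add(rel.synset())
    return concepts

def proxigenea3(c1, c2):
    """Equation (2): 1 / (1 + d(c1) + d(c2) - 2 * d(c0)), c0 being the LCS."""
    lcs = c1.lowest_common_hypernyms(c2)
    if not lcs:
        return 0.0
    d0 = max(c.max_depth() for c in lcs)
    return 1.0 / (1 + c1.max_depth() + c2.max_depth() - 2 * d0)

def conceptual_similarity(tokens_p, tokens_q):
    """Equation (1): best-match ProxiGenea3 scores averaged over the larger set."""
    cp, cq = sentence_synsets(tokens_p), sentence_synsets(tokens_q)
    if len(cp) < len(cq):              # ensure |Cp| >= |Cq|, as in the paper
        cp, cq = cq, cp
    if not cp or not cq:
        return 0.0
    return sum(max(proxigenea3(c1, c2) for c2 in cq) for c1 in cp) / len(cp)
```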
2.2 IC-based Similarity

This measure has been proposed by (Mihalcea et al., 2006) as a corpus-based measure which uses Resnik's Information Content (IC) and the Jiang-Conrath (Jiang and Conrath, 1997) similarity metric:

    s_{jc}(c_1, c_2) = \frac{1}{IC(c_1) + IC(c_2) - 2 \cdot IC(c_0)}    (3)

where IC is the information content introduced by (Resnik, 1995) as IC(c) = -\log P(c). The similarity between two text segments T_1 and T_2 is therefore determined as:

    sim(T_1, T_2) = \frac{1}{2} \left( \frac{\sum_{w \in T_1} \max_{w_2 \in T_2} ws(w, w_2) \cdot idf(w)}{\sum_{w \in T_1} idf(w)} + \frac{\sum_{w \in T_2} \max_{w_1 \in T_1} ws(w, w_1) \cdot idf(w)}{\sum_{w \in T_2} idf(w)} \right)    (4)

where idf(w) is calculated as the inverse document frequency of word w, taking into account Google Web 1T (Brants and Franz, 2006) frequency counts. The semantic similarity between words is calculated as:

    ws(w_i, w_j) = \max_{c_i \in W_i, c_j \in W_j} s_{jc}(c_i, c_j)    (5)

where W_i and W_j are the sets containing all synsets in WordNet corresponding to words w_i and w_j, respectively. The IC values used are those calculated by Ted Pedersen (Pedersen et al., 2004) on the British National Corpus (http://www.d.umn.edu/~tpederse/similarity.html).
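Equations (3) to (5) can be prototyped with NLTK as well, since it ships the Jiang-Conrath measure together with pre-computed information content files. The sketch below is only an approximation of the measure described above: NLTK's Brown-corpus IC file stands in for the BNC values of Pedersen et al., and a plain dictionary stands in for the Google Web 1T inverse document frequencies.

```python
# Sketch of equations (3)-(5): Jiang-Conrath word similarity combined into an
# idf-weighted text similarity. Assumptions: NLTK's Brown-corpus IC file stands
# in for the BNC values, and `idf` is a plain dict standing in for Google Web 1T
# inverse document frequencies.
from nltk.corpus import wordnet as wn, wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')

def word_similarity(wi, wj):
    """Equation (5): best Jiang-Conrath score over all noun synset pairs."""
    best = 0.0
    for ci in wn.synsets(wi, pos=wn.NOUN):
        for cj in wn.synsets(wj, pos=wn.NOUN):
            try:
                best = max(best, ci.jcn_similarity(cj, brown_ic))
            except Exception:          # e.g. no common subsumer
                continue
    return best

def ic_text_similarity(t1, t2, idf):
    """Equation (4): symmetrized, idf-weighted best-match similarity between
    the token lists t1 and t2."""
    def directed(src, tgt):
        num = sum(max((word_similarity(w, w2) for w2 in tgt), default=0.0) * idf.get(w, 1.0)
                  for w in src)
        den = sum(idf.get(w, 1.0) for w in src)
        return num / den if den else 0.0
    return 0.5 * (directed(t1, t2) + directed(t2, t1))
```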
2.3 Syntactic Dependencies

We also wanted our system to take syntactic similarity into account. As our measures are lexically grounded, we chose to use dependencies rather than constituents. Previous experiments showed that converting constituents to dependencies still achieved best results on out-of-domain texts (Le Roux et al., 2012), so we decided to use a 2-step architecture to obtain syntactic dependencies. First we parsed pairs of sentences with the LORG parser (https://github.com/CNGLdlab/LORG-Release). Second we converted the resulting parse trees to Stanford dependencies, using the default built-in converter provided with the Stanford Parser (2012-11-12 revision).

Given the sets of parsed dependencies D_p and D_q, for sentences p and q, a dependency d \in D_x is a triple (l, h, t) where l is the dependency label (for instance, dobj or prep), h the governor and t the dependant. We define the following similarity measure between two syntactic dependencies d_1 = (l_1, h_1, t_1) and d_2 = (l_2, h_2, t_2):

    dsim(d_1, d_2) = Lev(l_1, l_2) \cdot \frac{idf_h \cdot s_{WN}(h_1, h_2) + idf_t \cdot s_{WN}(t_1, t_2)}{2}    (6)

where idf_h = \max(idf(h_1), idf(h_2)) and idf_t = \max(idf(t_1), idf(t_2)) are the inverse document frequencies calculated on Google Web 1T for the governors and the dependants (we retain the maximum for each pair), and s_{WN} is calculated using formula 2, with two differences:

• if the two words to be compared are antonyms, then the returned score is 0;
• if one of the words to be compared is not in WordNet, their similarity is calculated using the Levenshtein distance.

The similarity score between p and q is then calculated as:

    sSD(p, q) = \max \left( \frac{\sum_{d_i \in D_p} \max_{d_j \in D_q} dsim(d_i, d_j)}{|D_p|}, \frac{\sum_{d_i \in D_q} \max_{d_j \in D_p} dsim(d_i, d_j)}{|D_q|} \right)    (7)

2.5 ESA

[...] weighted vector of Wikipedia concepts. Weights are supposed to quantify the strength of the relation between a word and each Wikipedia concept using the tf-idf measure. A text is then represented as a high-dimensional real valued vector space spanning all along the Wikipedia database. For this particular task we adapt the research-esa implementation (Sorg and Cimiano, 2008) to our own home-made weighted vectors corresponding to a Wikipedia snapshot of February 4th, 2013.

2.6 N-gram based Similarity

This feature is based on the Clustered Keywords Positional Distance (CKPD) model proposed in (Buscaldi et al., 2009) for the passage retrieval task.
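As stated in the abstract and introduction, the individual measures above are combined as features of a support vector regression model (the model itself is described in Section 3, beyond this excerpt). A minimal sketch of that combination step, assuming scikit-learn's SVR and purely illustrative feature values standing in for the real measure outputs:

```python
# Minimal sketch of the combination step: the individual similarity measures
# become features of a support vector regression model. Assumes scikit-learn;
# the feature values below are purely illustrative placeholders for the
# measures of Section 2 (ProxiGenea, IC-based, syntactic, ESA, n-grams, ...).
import numpy as np
from sklearn.svm import SVR

# One row per sentence pair, one column per similarity measure.
X_train = np.array([
    [0.82, 0.75, 0.68],   # illustrative [conceptual, IC-based, syntactic] scores
    [0.31, 0.40, 0.22],
    [0.55, 0.61, 0.47],
])
y_train = np.array([4.2, 1.5, 3.0])   # gold STS annotations on the 0-5 scale

model = SVR(kernel='rbf')             # kernel and hyperparameters are illustrative
model.fit(X_train, y_train)

X_test = np.array([[0.60, 0.58, 0.50]])
print(model.predict(X_test))          # graded similarity estimate for a new pair
```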