SimBow at SemEval-2017 Task 3: Soft-Cosine Semantic Similarity between Questions for Community Question Answering


SimBow at SemEval-2017 Task 3: Soft-Cosine Semantic Similarity between Questions for Community Question Answering

Delphine Charlet and Géraldine Damnati
Orange Labs, Lannion, France
delphine.charlet, [email protected]

Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 315-319, Vancouver, Canada, August 3-4, 2017. © 2017 Association for Computational Linguistics

Abstract

This paper describes the SimBow system submitted at SemEval2017-Task3, for the question-question similarity subtask B. The proposed approach is a supervised combination of different unsupervised textual similarities. These textual similarities rely on the introduction of a relation matrix into the classical cosine similarity between bag-of-words, so as to obtain a soft-cosine that takes into account relations between words. Depending on the type of relation matrix embedded in the soft-cosine, semantic or lexical relations can be considered. Our system ranked first among the official submissions of subtask B.

1 Introduction

Social networks enable people to post questions and to interact with other people to obtain relevant answers. The popularity of forums shows that they are able to provide reliable answers. Due to this tremendous popularity, forums are growing fast, and the first reflex of an internet user is to check with his favorite search engine whether a similar question has already been posted. Community Question Answering at SemEval focuses on this task, with 3 different subtasks. Subtask A (resp. subtask C) aims at re-ranking the comments of one original question (resp. the comments of a set of 10 related questions) according to their relevancy to the original question. Subtask B aims at re-ranking 10 related questions proposed by a search engine, according to their relevancy to the original question. Subtasks A and C are question-answering tasks. Subtask B can be viewed as a pure semantic textual similarity task applied to community questions, with noisy user-generated texts, which makes it different from SemEval-Task1 (Agirre et al., 2016), which focuses on semantic similarity between short well-formed sentences.

In this paper, we focus only on subtask B, with the purpose of developing semantic textual similarity measures for such noisy texts. Question-question similarity appeared in SemEval2016 (Nakov et al., 2016) and is pursued in SemEval2017 (Nakov et al., 2017). The approaches explored last year were mostly supervised fusions of different similarity measures, some unsupervised, others supervised. Among the unsupervised measures, many were based on overlap counts between components, from n-grams of words or characters to knowledge-based components such as named entities, frame representations or knowledge graphs, e.g. (Franco-Salvador et al., 2016). Much attention was also paid to the use of word embeddings, e.g. (Mihaylov and Nakov, 2016), with question-level averaged vectors used directly with a cosine similarity or as input to a neural classifier. Finally, fusion was often performed with SVMs (Filice et al., 2016).

Our motivation in this work was slightly different: we considered that forum data were too noisy to get reliable outputs from linguistic analysis, and we wanted to focus on core textual semantic similarity. Hence, we avoided using any metadata analysis (such as user profiles) in order to get results that could easily generalize to other similarity tasks. Thus, we explore unsupervised similarity measures, with no external resources and hardly any linguistic processing (except a list of stopwords), relying only on the availability of sufficient unannotated corpora representative of the data. We then fuse them in a robust and simple supervised framework (logistic regression).

The rest of the paper is organized as follows: in section 2, the core unsupervised similarity measure is presented; the submitted systems are described in section 3; and section 4 presents results.
2 Soft-Cosine Similarity Measure

In a classical bag-of-words approach, texts are represented by a vector of TF-IDF coefficients of size N, N being the number of different words occurring in the texts. Computing a cosine similarity between 2 vectors is directly related to the number of words that both texts have in common:

\[ \cos(X, Y) = \frac{X^t Y}{\sqrt{X^t X}\,\sqrt{Y^t Y}} \quad \text{with} \quad X^t Y = \sum_{i=1}^{n} x_i y_i \tag{1} \]

When there are no words in common between texts X and Y (i.e. no index i for which both x_i and y_i are non-zero), the cosine similarity is null. However, even with no words in common, texts can be semantically related when the words themselves are semantically related. Hence, we propose to take word-level relations into account by introducing a relation matrix M into the cosine similarity formula, as suggested in equation 2:

\[ \cos_M(X, Y) = \frac{X^t M Y}{\sqrt{X^t M X}\,\sqrt{Y^t M Y}} \tag{2} \]

\[ X^t M Y = \sum_{i=1}^{n} \sum_{j=1}^{n} x_i\, m_{i,j}\, y_j \tag{3} \]

where M is a matrix whose element m_{i,j} expresses some relation between word i and word j. With such a metric, the similarity between two texts is non-null as soon as the texts share related words, even if they have no words in common. Introducing the relation matrix in the denominator normalization factors ensures that the reflexive similarity is 1. If words are only related to themselves (m_{i,i} = 1 and m_{i,j} = 0 for all i ≠ j), M is the identity matrix and the soft-cosine reduces to the classical cosine.

We first investigated this modified cosine similarity in the context of topic segmentation of TV Broadcast News (Bouchekif et al., 2016), using semantic relations between words to improve the computation of semantic cohesion between consecutive snippets. Other researchers have also proposed this measure, e.g. (Sidorov et al., 2014), under the soft-cosine denomination, with a matrix based for instance on a Levenshtein distance between n-grams. In this work, we investigate different kinds of word relations that can be used for computing M.

2.1 Semantic relations

Distributed representations of words, such as the word2vec approach proposed by (Mikolov et al., 2013), have known a tremendous success recently. They make it possible to obtain relevant semantic relations between words, based on a simple similarity measure (e.g. cosine) between the vector representations of these words.

In this work, 2 distributed representations of words are computed with the word2vec toolkit in the cbow configuration: one is estimated on English Wikipedia, and the other is estimated on the unannotated corpus of questions and comments from the Qatar Living forum distributed in the campaign, which contains 100 million words. The vector dimension is 300 (experiments with various vector dimensions did not show any significant difference), and only words with a minimal frequency of 50 are taken into account.

Once the word2vec representations of the words are available, M can be computed in different ways. We have explored different variants, and the best results were obtained with the following framework, where v_i stands for the word2vec representation of word w_i:

\[ m_{i,j} = \max(0, \operatorname{cosine}(v_i, v_j))^2 \tag{4} \]

Grounding to 0 is motivated by the observation that negative cosine values between words are hard to interpret and often irrelevant. Squaring is applied to emphasize the dynamics of the semantic relations, insisting more on strong semantic relations and flattening weak ones. Indeed, we have observed in several applicative domains that high semantic similarities derived from word embeddings are more significant than low similarities.
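To make equations (2)-(4) concrete, here is a minimal numpy sketch of the soft-cosine with a semantic relation matrix. This is our own illustration rather than the authors' code: the toy word vectors and function names are invented, and in practice a word2vec model (e.g. via gensim) would supply the vectors v_i.

```python
import numpy as np

def relation_matrix(vectors):
    """Semantic relation matrix of eq. (4): m_ij = max(0, cosine(v_i, v_j))^2."""
    V = np.asarray(vectors, dtype=float)
    V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit-normalize each word vector
    M = np.clip(V @ V.T, 0.0, None) ** 2           # ground negative cosines to 0, then square
    np.fill_diagonal(M, 1.0)                       # each word fully related to itself
    return M

def soft_cosine(x, y, M):
    """Soft-cosine of eqs. (2)-(3), normalized so that reflexive similarity is 1."""
    den = np.sqrt(x @ M @ x) * np.sqrt(y @ M @ y)
    return (x @ M @ y) / den if den else 0.0

# Toy 3-word vocabulary; rows are invented stand-ins for the word2vec vectors of
# "car", "automobile", "banana".
vecs = [[0.9, 0.1], [0.8, 0.2], [-0.1, 0.95]]
M = relation_matrix(vecs)
x = np.array([1.0, 0.0, 0.0])  # TF-IDF vector of a text containing only "car"
y = np.array([0.0, 1.0, 0.0])  # TF-IDF vector of a text containing only "automobile"
print(soft_cosine(x, y, np.eye(3)))  # classical cosine: 0.0, no shared words
print(soft_cosine(x, y, M))          # soft-cosine: close to 1, related words
```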
2.2 Edit-distance based relations

Using a Levenshtein distance between words, an edit relation between words can be computed: it makes it possible to cope, for instance, with the small typographic errors that are frequent in social user-generated corpora such as the Qatar Living forum. It is defined as m_{i,i} = 1 and, for i ≠ j:

\[ m_{i,j} = \alpha \left( 1 - \frac{\operatorname{Levenshtein}(w_i, w_j)}{\max(\lVert w_i \rVert, \lVert w_j \rVert)} \right)^{\beta} \tag{5} \]

where \lVert w \rVert is the number of characters of word w, α is a weighting factor relative to the diagonal elements, and β is a factor that emphasizes the score dynamics. Experiments on train and dev led to setting α = 1.8 and β = 5.

3 System Description

3.1 Data pre-processing

Some basic preprocessing steps are applied to the text: lowercasing, suppression of punctuation marks and stopwords, and replacement of urls and images by the generic terms "_url_" and "_img_". As for the bag-of-words representation, TF-IDF coefficients are computed in a specific way: TF coefficients are computed in the text under consideration as usual, but IDF coefficients are computed on the large unannotated Qatar Living forum corpus.

3.2 Supervised combination of unsupervised similarities

For a given pair of texts to compare, 3 textual similarity measures are considered: cos_Mrel (soft-cosine with semantic relations), cos_Mlev (soft-cosine with Levenshtein-based relations), [...]

[...] paper (Nakov et al., 2017). Results are presented with the MAP evaluation measure, on 3 corpora: dev (50 original questions × 10 related questions), test2016 (70 original questions × 10 related questions) and test2017 (88 original questions × 10 related questions).

It is worth noticing that the MAP scorer used in this campaign is sensitive to the number of original questions that don't have any relevant related question in the gold labels. In fact, these questions always account for a precision of 0 in the MAP scoring. Hence, an Oracle evaluation, giving a score of 1 to all related questions labeled as "true" and a score of 0 to all related questions labeled as "false" in the gold labels, doesn't provide a 100% MAP but an Oracle MAP which corresponds to the proportion of original questions that have at least 1 relevant related question.
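Putting sections 2 and 3 together, the sketch below builds the edit-distance relation matrix of equation (5) and fuses similarity scores with a logistic regression, as the paper describes. It is a reconstruction under our own assumptions (inline Levenshtein, toy feature values, scikit-learn for the regression), not the authors' released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def levenshtein(a, b):
    """Classical edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def edit_relation_matrix(words, alpha=1.8, beta=5):
    """Levenshtein-based relation matrix of eq. (5), with the paper's alpha and beta."""
    n = len(words)
    M = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            sim = 1.0 - levenshtein(words[i], words[j]) / max(len(words[i]), len(words[j]))
            M[i, j] = M[j, i] = alpha * sim ** beta
    return M

# Supervised fusion: one logistic regression over the unsupervised similarities.
# Each row of X_train holds [cos_Mrel, cos_Mlev] for one (original, related) pair;
# y_train is 1 if the related question is labeled relevant (toy values here).
X_train = np.array([[0.82, 0.75], [0.10, 0.22], [0.55, 0.48], [0.05, 0.12]])
y_train = np.array([1, 0, 1, 0])
fusion = LogisticRegression().fit(X_train, y_train)
scores = fusion.predict_proba(X_train)[:, 1]  # ranking scores used for MAP
print(scores)
```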
Recommended publications
  • Intelligent Chat Bot
INTELLIGENT CHAT BOT

A. Mohamed Rasvi, V.V. Sabareesh, V. Suthajebakumari
Computer Science and Engineering, Kamaraj College of Engineering and Technology, India

ABSTRACT: This paper discusses the workflow of an intelligent chat bot powered by various artificial intelligence algorithms. The replies to chat messages are trained against a set of predefined questions and chat messages, and these trained data sets are stored in a database. Relying on a single machine-learning algorithm showed inaccurate performance, so this bot combines four different machine-learning algorithms to make a decision. The inference engine pre-processes the received message and then matches it against the trained datasets using the AI algorithms. AIML provides a similar way of replying to messages in online chat bots through a simple XML-based mechanism, but employing AI yields more accurate replies than the widely used AIML. This intelligent chat bot can provide assistance to individuals, from answering simple queries to booking a ticket for a trip, and, when trained properly, it can even serve as a substitute for a teacher of a subject or teach programming.

Keywords: AIML, Artificial Intelligence, Chat bot, Machine-learning, String Matching.

I. INTRODUCTION: Social networks are attracting masses and gaining huge momentum. They allow instant messaging and sharing features. Guides and technical help desks provide on-demand tech support through chat services or voice calls. Queries are passed from customers to the technical support team to clear their doubts, but this process needs a dedicated support team to answer each user's query, which requires significant manpower.
  • An Ensemble Regression Approach for OCR Error Correction
AN ENSEMBLE REGRESSION APPROACH FOR OCR ERROR CORRECTION
by Jie Mei

Submitted in partial fulfillment of the requirements for the degree of Master of Computer Science at Dalhousie University, Halifax, Nova Scotia, March 2017. © Copyright by Jie Mei, 2017

Table of Contents: List of Tables; List of Figures; Abstract; List of Symbols Used; Acknowledgements.
Chapter 1, Introduction: 1.1 Problem Statement; 1.2 Proposed Model; 1.3 Contributions; 1.4 Outline.
Chapter 2, Background: 2.1 OCR Procedure; 2.2 OCR-Error Characteristics; 2.3 Modern Post-Processing Models.
Chapter 3, Compositional Correction Frameworks: 3.1 Noisy Channel (3.1.1 Error Correction Models; 3.1.2 Correction Inferences); 3.2 Confidence Analysis (3.2.1 Error Correction Models; 3.2.2 Correction Inferences); 3.3 Framework Comparison.
Chapter 4, Proposed Model: 4.1 Error Detection [...]
  • Practice with Python
CSI4108-01 ARTIFICIAL INTELLIGENCE: Word Embedding / Text Processing, Practice with Python (2018. 5. 11., Lee, Gyeongbok)

Contents
• Word Embedding: libraries (gensim, fastText); embedding alignment (with two languages)
• Text/Language Processing: POS tagging with NLTK/koNLPy; text similarity (jellyfish)

Gensim
• Open-source vector space modeling and topic modeling toolkit implemented in Python, designed to handle large text collections using data streaming and efficient incremental algorithms; usually used to build word vectors from a corpus
• Tutorials are available here:
  – https://github.com/RaRe-Technologies/gensim/blob/develop/tutorials.md#tutorials
  – https://rare-technologies.com/word2vec-tutorial/
• Install: pip install gensim

Gensim for Word Embedding
• Logging
• Input data: a list of word lists, e.g. the sentences "I have a car" and "I like the cat" become [["I", "have", "a", "car"], ["I", "like", "the", "cat"]]
• If your data is already preprocessed, one sentence per line with words separated by whitespace, use LineSentence (just load the file); try it with http://an.yonsei.ac.kr/corpus/example_corpus.txt (from https://radimrehurek.com/gensim/models/word2vec.html)
• If the input is in multiple files or the file size is large, use a custom iterator with yield (from https://rare-technologies.com/word2vec-tutorial/)
• gensim.models.Word2Vec parameters: min_count: [...]
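A minimal runnable version of the workflow these slides outline, with a stand-in toy corpus (gensim 4.x keyword arguments assumed):

```python
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences (list of word lists).
sentences = [
    ["i", "have", "a", "car"],
    ["i", "like", "the", "cat"],
    ["the", "cat", "sat", "on", "the", "car"],
]

# Train a small CBOW model (sg=0); min_count=1 keeps every toy word.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=0)

print(model.wv["cat"][:5])          # first 5 dimensions of the vector for "cat"
print(model.wv.most_similar("cat")) # nearest neighbours in the toy embedding space
```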
  • NLP - Assignment 2
NLP - Assignment 2
Week 2, December 27th, 2016

1. A 5-gram model is a ___-order Markov Model: (a) Six (b) Five (c) Four (d) Constant
Ans: (c) Four

2. For the following corpus C1 of 3 sentences, what is the total count of unique bigrams for which the likelihood will be estimated? Assume we do not perform any pre-processing, and we are using the corpus as given.
(i) ice cream tastes better than any other food
(ii) ice cream is generally served after the meal
(iii) many of us have happy childhood memories linked to ice cream
(a) 22 (b) 27 (c) 30 (d) 34
Ans: (b) 27

3. Arrange the words "curry, oil and tea" in descending order, based on the frequency of their occurrence in the Google Books n-grams (the viewer is available at https://books.google.com/ngrams):
(a) tea, oil, curry (b) curry, oil, tea (c) curry, tea, oil (d) oil, tea, curry
Ans: (d) oil, tea, curry

4. Given a corpus C2, the Maximum Likelihood Estimate (MLE) for the bigram "ice cream" is 0.4 and the count of occurrences of the word "ice" is 310. The likelihood of "ice cream" after applying add-one smoothing is 0.025, for the same corpus C2. What is the vocabulary size of C2?
(a) 4390 (b) 4690 (c) 5270 (d) 5550
Ans: (b) 4690

Questions 5 to 10 require you to analyse the data given in the corpus C3, using a programming language of your choice.
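For question 4, the vocabulary size follows directly from the two given estimates; a worked derivation:

\[ P_{\text{MLE}}(\text{cream} \mid \text{ice}) = \frac{c(\text{ice cream})}{c(\text{ice})} = 0.4 \;\Rightarrow\; c(\text{ice cream}) = 0.4 \times 310 = 124 \]

\[ P_{\text{add-1}}(\text{cream} \mid \text{ice}) = \frac{c(\text{ice cream}) + 1}{c(\text{ice}) + V} = \frac{125}{310 + V} = 0.025 \;\Rightarrow\; 310 + V = \frac{125}{0.025} = 5000 \;\Rightarrow\; V = 4690 \]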
  • 3 Dictionaries and Tolerant Retrieval
Online edition (c) 2009 Cambridge University Press. DRAFT! April 1, 2009. Feedback welcome.

3 Dictionaries and tolerant retrieval

In Chapters 1 and 2 we developed the ideas underlying inverted indexes for handling Boolean and proximity queries. Here, we develop techniques that are robust to typographical errors in the query, as well as alternative spellings. In Section 3.1 we develop data structures that help the search for terms in the vocabulary in an inverted index. In Section 3.2 we study the idea of a wildcard query: a query such as *a*e*i*o*u*, which seeks documents containing any term that includes all the five vowels in sequence. The * symbol indicates any (possibly empty) string of characters. Users pose such queries to a search engine when they are uncertain about how to spell a query term, or seek documents containing variants of a query term; for instance, the query automat* would seek documents containing any of the terms automatic, automation and automated. We then turn to other forms of imprecisely posed queries, focusing on spelling errors in Section 3.3. Users make spelling errors either by accident, or because the term they are searching for (e.g., Herman) has no unambiguous spelling in the collection. We detail a number of techniques for correcting spelling errors in queries, one term at a time as well as for an entire string of query terms. Finally, in Section 3.4 we study a method for seeking vocabulary terms that are phonetically close to the query term(s).
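As a quick illustration of the wildcard semantics described above (a toy vocabulary of ours, not an example from the book), Python's fnmatch implements exactly this * behavior:

```python
import fnmatch

vocabulary = ["automatic", "automation", "automated", "autumn", "facetious", "abstemious"]

# automat* : terms beginning with "automat"
print(fnmatch.filter(vocabulary, "automat*"))
# -> ['automatic', 'automation', 'automated']

# *a*e*i*o*u* : terms containing the five vowels in order
print(fnmatch.filter(vocabulary, "*a*e*i*o*u*"))
# -> ['facetious', 'abstemious']
```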
  • Use of Word Embedding to Generate Similar Words and Misspellings for Training Purpose in Chatbot Development
USE OF WORD EMBEDDING TO GENERATE SIMILAR WORDS AND MISSPELLINGS FOR TRAINING PURPOSE IN CHATBOT DEVELOPMENT
by SANJAY THAPA

THESIS submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at The University of Texas at Arlington, December 2019, Arlington, Texas.
Supervising Committee: Deokgun Park (Supervising Professor), Manfred Huber, Vassilis Athitsos.
Copyright © by Sanjay Thapa 2019

ACKNOWLEDGEMENTS: I would like to thank Dr. Deokgun Park for allowing me to work and conduct the research in the Human Data Interaction (HDI) Lab in the College of Engineering at the University of Texas at Arlington. Dr. Park's guidance on the procedure to solve problems using different approaches has helped me to grow personally and intellectually. I am also very thankful to Dr. Manfred Huber and Dr. Vassilis Athitsos for their constant guidance and support in my research. I would like to thank all the members of the HDI Lab for their generosity and company during my time in the lab. I also would like to thank Peace Ossom Williamson, the director of Research Data Services at the library of the University of Texas at Arlington (UTA), for giving me the opportunity to work as a Graduate Research Assistant (GRA) in the dataCAVE.

DEDICATION: I would like to dedicate my thesis especially to my mom and dad, who have always been very supportive of my educational and personal endeavors. Furthermore, my sister and my brother played an indispensable role in providing emotional and other support during my graduate school and research.

LIST OF ILLUSTRATIONS
Fig. 2.3: Rasa Architecture
Fig. 2.4: Chatbot conversation without misspelling and with misspelling error
  • Feature Combination for Measuring Sentence Similarity
FEATURE COMBINATION FOR MEASURING SENTENCE SIMILARITY
Ehsan Shareghi Nojehdeh

A thesis in The Department of Computer Science and Software Engineering, presented in partial fulfillment of the requirements for the degree of Master of Computer Science, Concordia University, Montréal, Québec, Canada, April 2013. © Ehsan Shareghi Nojehdeh, 2013

Concordia University, School of Graduate Studies. This is to certify that the thesis prepared by Ehsan Shareghi Nojehdeh, entitled "Feature Combination for Measuring Sentence Similarity" and submitted in partial fulfillment of the requirements for the degree of Master of Computer Science, complies with the regulations of this University and meets the accepted standards with respect to originality and quality. Signed by the final examining committee: Dr. Peter C. Rigby (Chair), Dr. Leila Kosseim (Examiner), Dr. Adam Krzyzak (Examiner), Dr. Sabine Bergler (Supervisor). Approved by the Chair of Department or Graduate Program Director; Dr. Robin A. L. Drew, Dean, Faculty of Engineering and Computer Science.

Abstract: Sentence similarity is one of the core elements of Natural Language Processing (NLP) tasks such as Recognizing Textual Entailment and Paraphrase Recognition. Over the years, different systems have been proposed to measure similarity between fragments of texts. In this research, we propose a new two-phase supervised learning method which uses a combination of lexical features to train a model for predicting similarity between sentences. Each of these features covers an aspect of the text on an implicit or explicit level. The two-phase method uses all combinations of the features in the feature space and trains separate models based on each combination.
  • The Research of Weighted Community Partition Based on SimHash
Available online at www.sciencedirect.com. Procedia Computer Science 17 (2013) 797-802. Information Technology and Quantitative Management (ITQM 2013)

The Research of Weighted Community Partition based on SimHash
Li Yang (a,b,c,*), Sha Ying (c), Shan Jixi (c), Xu Kai (c)
(a) Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190; (b) Graduate School of Chinese Academy of Sciences, Beijing 100049; (c) National Engineering Laboratory for Information Security Technologies, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093

Abstract: Current methods of community partition mainly focus on topological links and take little account of content-based information between users. In this paper we analyze the content similarity between users with the SimHash method, and compute the content-based weight of edges so as to attain a more reasonable community partition. The dataset adopts real data from Twitter for testing, which verifies the effectiveness of the proposed method. © 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of the organizers of the 2013 International Conference on Information Technology and Quantitative Management.

Keywords: Social Network, Community Partition, Content-based Weight, SimHash, Information Entropy

1. Introduction: An important characteristic of social networks is their community structure, but current methods mainly focus on topological links, and research on content is scarce. The reason is that content-based information cannot be effectively utilized from the abundant short texts. In this paper we hope to make full use of content-based information between users, on top of the topological links, so as to find more reasonable communities.
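A minimal sketch of the SimHash fingerprinting idea the paper builds on (our own illustration, assuming 64-bit fingerprints and uniform term weights):

```python
import hashlib

def simhash(tokens, bits=64):
    """64-bit SimHash: sum +1/-1 per bit over token hashes, then take the signs."""
    acc = [0] * bits
    for tok in tokens:
        h = int.from_bytes(hashlib.md5(tok.encode()).digest()[:8], "big")
        for b in range(bits):
            acc[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if acc[b] > 0)

def hamming(a, b):
    """Number of differing bits: lower means more similar content."""
    return bin(a ^ b).count("1")

u1 = simhash("nlp community detection on twitter data".split())
u2 = simhash("community detection for twitter nlp data".split())
u3 = simhash("cooking recipes with garlic and olive oil".split())
print(hamming(u1, u2), hamming(u1, u3))  # u1 vs u2 should be much smaller
```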
  • Hybrid Algorithm for Approximate String Matching to be Used for Information Retrieval (Surbhi Arora, Ira Pandey)
International Journal of Scientific & Engineering Research, Volume 9, Issue 11, November 2018. ISSN 2229-5518

Hybrid Algorithm for Approximate String Matching to be Used for Information Retrieval
Surbhi Arora, Ira Pandey

Abstract: Conventional database searches require the user to hit a complete, correct query, denying the possibility that any legitimate typographical variation or spelling error in the query will simply fail the search procedure. An approximate string matching engine, often colloquially referred to as a fuzzy keyword search engine, could be a potential solution for all such search breakdowns. A fuzzy matching program can be summarized as Google's 'Did you mean: ...' or Yahoo's 'Including results for ...'. These programs are called fuzzy since they don't employ strict checking, which would confine the results to 0 or 1 (no match or exact match); rather, they are designed to handle the concept of partial truth, be it DNA mapping, record screening, or simply web browsing. With the help of a 0.4-million-word English dictionary acting as the underlying data source, thereby qualifying as Big Data, the study involves the use of Apache Hadoop's MapReduce programming paradigm to perform approximate string matching. The aim is to design a system prototype to demonstrate the practicality of our solution.

Index Terms: Approximate String Matching, Fuzzy, Information Retrieval, Jaro-Winkler, Levenshtein, MapReduce, N-Gram, Edit Distance

1 INTRODUCTION: A fuzzy keyword search engine employs approximate string matching algorithms for Information Retrieval (IR). Approximate string matching involves spotting all text matching the text pattern of a given search query, but with a limited number of errors [1]. The fuzzy logic behind approximate string searching can be described by considering a fuzzy set F over a referential [...]

[Fig. 1: Fuzzy search results for query string "noting".]
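Of the index terms above, n-gram matching is the easiest to sketch: character-bigram Jaccard similarity tolerates small typos such as "noting" vs "nothing". A toy illustration of ours:

```python
def char_ngrams(word, n=2):
    """Set of character n-grams, e.g. 'noting' -> {'no','ot','ti','in','ng'}."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def jaccard(a, b, n=2):
    """Jaccard similarity of the two words' n-gram sets (1.0 = identical sets)."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb)

print(jaccard("noting", "nothing"))  # high: most bigrams shared (4/7)
print(jaccard("noting", "banana"))   # low: no bigrams shared (0.0)
```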
  • Levenshtein Distance based Information Retrieval (Veena G, Jalaja G, BNM Institute of Technology, Visvesvaraya Technological University)
International Journal of Scientific & Engineering Research, Volume 6, Issue 5, May 2015. ISSN 2229-5518

Levenshtein Distance based Information Retrieval
Veena G, Jalaja G
BNM Institute of Technology, Visvesvaraya Technological University

Abstract: In today's web-based applications, information retrieval is gaining popularity. There are many advances in information retrieval, such as fuzzy search and proximity ranking. Fuzzy search retrieves relevant results containing words that are similar to the query keywords: even if there are a few typographical errors in the query keywords, the system will still retrieve relevant results. A query keyword can have many similar words; the words that are very similar to the query keywords are considered in fuzzy search. Ranking plays an important role in web search, since the user expects to see relevant documents in the first few results. Proximity ranking arranges search results based on the distance between query keywords. In the current research, an information retrieval system is built to search the contents of text files, with the features of fuzzy search and proximity ranking. The contents of html or pdf files are indexed for fast retrieval of search results, using a combination of indexes (inverted index and trie index). Fuzzy search is implemented using the Levenshtein distance (edit distance) concept, and proximity ranking is done using a binning concept. The search engine is evaluated using average interpolated precision at eleven recall points (0, 0.1, 0.2, ..., 0.9, 1.0), and a precision-recall graph is plotted to evaluate the system.

Index Terms: stemming, stop words, fuzzy search, proximity ranking, dictionary, inverted index, trie index and binning.
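A toy sketch of fuzzy lookup over an inverted index in the spirit of this system; difflib stands in for the Levenshtein machinery, and all names and documents are invented:

```python
import difflib
from collections import defaultdict

docs = {
    1: "fuzzy search retrieves relevant results",
    2: "proximity ranking arranges search results",
    3: "trie index speeds up retrieval",
}

# Build a plain inverted index: term -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def fuzzy_lookup(keyword):
    """Map a possibly misspelled keyword onto similar indexed terms, then fetch docs."""
    terms = difflib.get_close_matches(keyword, list(index.keys()), n=3, cutoff=0.75)
    doc_ids = set().union(*(index[t] for t in terms)) if terms else set()
    return terms, doc_ids

print(fuzzy_lookup("serch"))      # matches 'search' despite the typo
print(fuzzy_lookup("retreival"))  # matches similar terms such as 'retrieval'
```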
  • How to Tell Real from Fake? Understanding How to Classify Human-Authored and Machine-Generated Text
How to tell Real from Fake? Understanding how to classify human-authored and machine-generated text
Debashish Chakraborty

A thesis submitted in partial fulfilment for the degree of Master of Computing (Advanced) at the Research School of Computer Science, The Australian National University, May 2019. © Debashish Chakraborty 2019. Except where otherwise indicated, this thesis is my own original work. Debashish Chakraborty, 30 May 2019.

Acknowledgments: First, I would like to express my gratitude to Dr Patrik Haslum for his support and guidance throughout the thesis, and for the useful feedback on my late drafts. Thank you to my parents and Sandra for their continuous support and patience. A special thanks to Sandra for reading my very "drafty" drafts.

Abstract: Natural Language Generation (NLG) using Generative Adversarial Networks (GANs) has been an active field of research, as it alleviates restrictions in conventional Language-Modelling-based text generators, e.g. Long Short-Term Memory (LSTM) networks. The adequacy of a GAN-based text generator depends on its capacity to classify human-written (real) and machine-generated (synthetic) text. However, traditional evaluation metrics used by these generators cannot effectively capture classification features in NLG tasks such as creative writing. We prove this by using an LSTM network to almost perfectly classify sentences generated by a LeakGAN, a state-of-the-art GAN for long text generation. This thesis attempts a rare approach to understanding real and synthetic sentences using meaningful and interpretable features of long sentences (with at least 20 words). We analyse novelty and diversity features of real and synthetic sentences, generated by a LeakGAN, using three meaningful text dissimilarity functions: Jaccard Distance (JD), Normalised Levenshtein Distance (NLD) and Word Mover's Distance (WMD).
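Two of the three dissimilarity functions named above are easy to sketch at the word level (WMD additionally needs word embeddings, e.g. gensim's wmdistance); a minimal illustration of ours:

```python
from functools import lru_cache

def jaccard_distance(s1, s2):
    """JD over word sets: 1 - |intersection| / |union|."""
    a, b = set(s1.split()), set(s2.split())
    return 1 - len(a & b) / len(a | b)

def normalised_levenshtein(s1, s2):
    """NLD over word sequences: edit distance divided by the longer length."""
    w1, w2 = s1.split(), s2.split()

    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0 or j == 0:
            return i or j  # distance to the empty prefix
        return min(d(i - 1, j) + 1, d(i, j - 1) + 1,
                   d(i - 1, j - 1) + (w1[i - 1] != w2[j - 1]))

    return d(len(w1), len(w2)) / max(len(w1), len(w2))

real = "the cat sat quietly on the old mat"
fake = "the cat sat on the mat quietly today"
print(jaccard_distance(real, fake), normalised_levenshtein(real, fake))
```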
  • Effective Search Space Reduction for Spell Correction Using Character Neural Embeddings
Effective search space reduction for spell correction using character neural embeddings
Harshit Pande
Smart Equipment Solutions Group, Samsung Semiconductor India R&D, Bengaluru, India
[email protected]

Abstract: We present a novel, unsupervised, and distance-measure-agnostic method for search space reduction in spell correction using neural character embeddings. The embeddings are learned by skip-gram word2vec training on sequences generated from dictionary words in a phonetic information-retentive manner. We report a very high performance in terms of both success rates and reduction of search space on the Birkbeck spelling error corpus. To the best of our knowledge, this is the first application of word2vec to spell correction.

1 Introduction: Spell correction is now a pervasive feature, with presence in a wide range of applications such [...]

[...] of distance computations blows up the time complexity, thus hindering real-time spell correction. For the Damerau-Levenshtein distance or similar edit-distance-based measures, some approaches have been tried to reduce the time complexity of spell correction. Norvig (2007) does not check against all dictionary words; instead, he generates all possible words up to a certain edit distance threshold from the misspelled word. Each of the generated words is then checked in the dictionary for existence, and if it is found in the dictionary, it becomes a potentially correct spelling. There are two shortcomings of this approach. First, such search space reduction works only for edit-distance-based measures. Second, this approach too leads to high time complexity when the edit distance threshold is greater than 2 and the set of possible characters is large. A large character set is a reality for the Unicode characters used in many Asian languages.
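The Norvig-style candidate generation described above is straightforward to sketch (a minimal adaptation of that well-known approach, with a toy dictionary of ours):

```python
import string

def edits1(word, alphabet=string.ascii_lowercase):
    """All strings at edit distance 1: deletes, transposes, replaces, inserts."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {L + R[1:] for L, R in splits if R}
    transposes = {L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1}
    replaces = {L + c + R[1:] for L, R in splits if R for c in alphabet}
    inserts = {L + c + R for L, R in splits for c in alphabet}
    return deletes | transposes | replaces | inserts

dictionary = {"spelling", "spewing", "spiling", "correction"}
candidates = edits1("speling")
print(sorted(candidates & dictionary))  # -> ['spelling', 'spewing', 'spiling']
```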