Text Similarity Analysis for Test Suite Minimization



Text Similarity Analysis for Test Suite Minimization

Degree project in computer science and engineering, second cycle, 30 credits (EXAMENSARBETE INOM DATALOGI OCH DATATEKNIK, AVANCERAD NIVÅ, 30 HP), Stockholm, Sweden, 2020

HUGO HAGGREN, Master in Machine Learning
Date: November 2, 2020
Supervisor: Sahar Tahvili
Examiner: Anne Håkansson
School of Electrical Engineering and Computer Science, KTH
Host company: Ericsson AB, Global Artificial Intelligence Accelerator (GAIA)
Swedish title: Textlikhetsanalys för minimering av testsamlingar

Abstract

Software testing is the most expensive phase in the software development life cycle, so it is understandable why test optimization is a crucial area in the software development domain. In software testing, the gradual increase of test cases demands large portions of testing resources (budget and time). Test suite minimization is considered a potential approach to deal with the test suite size problem, and several minimization techniques have been proposed to address it efficiently. Proposing a good solution for test suite minimization is a challenging task, where several parameters such as code coverage, requirement coverage, and testing cost need to be considered before removing a test case from the testing cycle. This thesis proposes and evaluates two different NLP-based approaches for similarity analysis between manual integration test cases, which can be employed for test suite minimization. One approach is based on syntactic text similarity analysis and the other is a machine learning based semantic approach. The feasibility of the proposed solutions is studied through analysis of industrial use cases at Ericsson AB in Sweden. The results show that the semantic approach barely manages to outperform the syntactic approach. While both approaches show promise, subsequent studies will have to be done to further evaluate the semantic similarity based method.

Sammanfattning

(Swedish abstract; a direct translation of the English abstract above.)
Acknowledgments

I would like to thank my supervisor at Ericsson, Sahar Tahvili. Thank you for helping in any way possible throughout the project and giving me a wonderful time at Ericsson. Furthermore, I would like to thank Cristina Landin at Ericsson for providing the labeled data for this project and always being available for questions regarding existing software testing procedures. I also want to thank Auwn Muhammad for assisting the project in the form of consultation and practical assistance. Last but not least, I would like to thank my examiner at KTH, Anne Håkansson. Thank you for always being available for questions and for your extensive feedback on the report throughout the project.

Sincerely, Hugo Haggren

Contents

1 Introduction
  1.1 Background
  1.2 Problem Statement
  1.3 Purpose
  1.4 Goal
    1.4.1 Benefits, Ethics and Sustainability
  1.5 Methodology
    1.5.1 Research Philosophy
    1.5.2 Research Methods
    1.5.3 Research Approach
  1.6 Stakeholder
  1.7 Delimitation
  1.8 Outline
2 Theoretical Background
  2.1 Software Testing
    2.1.1 Test Suite Minimization
    2.1.2 Manual Testing
  2.2 Natural Language Processing
  2.3 Machine Learning
    2.3.1 Artificial Neural Networks
    2.3.2 Deep Learning
  2.4 Paragraph Vectors
    2.4.1 Word2Vec
    2.4.2 Doc2Vec
  2.5 The Transformer Model
    2.5.1 SBERT
  2.6 Syntactic Similarity
    2.6.1 Levenshtein Distance
  2.7 Density-Based Clustering
    2.7.1 Cosine Similarity
  2.8 Related Work
3 Research Methods and Methodologies
  3.1 Research Strategy
  3.2 Data Collection
  3.3 Data Analysis
    3.3.1 Visualization
  3.4 Quality Assurance
    3.4.1 Evaluation Metrics
  3.5 System Development
4 Requirements and Design
  4.1 Requirements
  4.2 Initial Design
  4.3 Final Design
5 Implementation and Results
  5.1 Data
  5.2 Data Labeling
  5.3 Syntactic Similarity Analysis
  5.4 Semantic Similarity Analysis
    5.4.1 Feature Vector Generation and Clustering
  5.5 Results
    5.5.1 Syntactic Similarity
    5.5.2 Semantic Similarity Analysis
6 Evaluation and Implications
  6.1 Evaluation
    6.1.1 Syntactic Evaluation
    6.1.2 Evaluation of Semantic Models
  6.2 Implications
  6.3 Threats to Validity
7 Conclusions and Future Work
  7.1 Discussion
  7.2 Future Work
Bibliography

1 Introduction

In any industry it is always crucial that the product or service works as intended, and software development is no exception. Ensuring the quality of software requires it to be tested rigorously; hence, software testing plays a vital role in the software development life cycle. In fact, it takes up to 50% of the total development cost [1]. Therefore, it is in any developer's interest to optimize the software testing process in terms of cost, time, and resources [2].

To ensure the validity of tests, testers make use of test cases. A test case is defined as a set of test inputs, execution instructions, and expected results, developed for a particular objective [3]. Usually, a large number of test cases are created (manually or automatically) for testing a product [4]. Test cases are commonly grouped with other test cases that test a certain requirement [5]; these groups are called test suites. One way of optimizing a testing process is to remove any redundant test cases in a test suite. This process is called test suite minimization.
It is formally defined as techniques used to minimize the testing cost in terms of execution time and resources [6]. The main objective of test suite minimization is to generate a representative set of test cases that satisfies the same requirements as the original test suite, with a minimum number of test cases [6, 5].

1.1 Background

Software testing can generally be divided into two main groups: automated testing and manual testing [1]. Automated testing is when every step of the testing procedure is automated, without manual operations [1]. In a manual testing procedure, however, all testing artifacts (e.g. requirement specifications, test cases) are written by humans in natural language [7]. This opens up the possibility of using natural language processing (NLP) techniques to optimize the testing process.

NLP is a sub-field of computer science and linguistics which aims to find methods that enable computers to understand human language [8]. The area of NLP this thesis focuses on is text similarity analysis, which consists of finding similarities between words, sentences, or documents [9]. There are two main types of text similarity: (1) syntactic similarity and (2) semantic similarity [9]. Syntactic similarity is the similarity of two words based on the characters they are constructed of; it does not take the meaning of the words into account, which is where semantic similarity comes in. Semantic similarity is how similar the underlying meaning of two words is [9]. For instance, "Paris" and "Stockholm" are string-wise two very different words, but semantically they are similar, since they are both capital cities (a minimal code illustration of the two notions follows at the end of this excerpt).

1.2 Problem Statement

Software testing often takes up a large part of the software development process. This process can be very time and resource consuming and can require many manual operators, which consequently can lead to large costs. To minimize testing times and costs, one has to find ways to optimize the software testing process. This is the general, big-picture problem this thesis aims to tackle. With this problem in mind, the main research question of this thesis can be formulated as follows:

How can text similarity analysis be used for test optimization and test suite minimization?

In order to analyze the research question of this thesis, the following steps will be performed:

1. Selecting appropriate algorithms for text similarity analysis.
2. Comparing the performance of the selected algorithms.
3. Proposing the best solution for test optimization purposes using the similarities between test cases.

With these steps it will be possible to come to a conclusion on whether the proposed algorithms can be a viable alternative for test optimization and how they can best be applied.

1.3 Purpose

The purpose of this thesis is to explore and present how text similarity techniques can be applied for test optimization purposes. This is done by presenting and analyzing a novel text similarity-based approach to test suite minimization, together with the results of the mentioned approaches when applied to a test suite.
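To make the syntactic/semantic distinction concrete, here is a minimal sketch contrasting character-level Levenshtein distance (the syntactic measure of Section 2.6.1) with cosine similarity between sentence embeddings (cf. Sections 2.5.1 and 2.7.1). The sentence-transformers model name is an illustrative assumption, not necessarily the model used in the thesis.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

def levenshtein(a: str, b: str) -> int:
    # character-level edit distance: a purely syntactic measure
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Syntactically, "Paris" and "Stockholm" share almost no characters.
print(levenshtein("Paris", "Stockholm"))  # large edit distance

# Semantically, a pretrained sentence encoder places the two close together.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model, for illustration only
e1, e2 = model.encode(["Paris", "Stockholm"])
print(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))  # fairly high cosine
```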
Recommended publications
  • Intelligent Chat Bot
INTELLIGENT CHAT BOT
A. Mohamed Rasvi, V.V. Sabareesh, V. Suthajebakumari
Computer Science and Engineering, Kamaraj College of Engineering and Technology, India

ABSTRACT: This paper discusses the workflow of an intelligent chat bot powered by various artificial intelligence algorithms. The replies for messages in chats are trained against a set of predefined questions and chat messages, and these trained data sets are stored in a database. Relying on one machine-learning algorithm showed inaccurate performance, so this bot is powered by four different machine-learning algorithms to make a decision. The inference engine pre-processes the received message, then matches it against the trained datasets based on the AI algorithms. AIML provides a similar way of replying to a message in online chat bots using a simple XML-based mechanism, but the method of employing AI provides more accurate replies than the widely used AIML on the Internet. This intelligent chat bot can be used to provide assistance for individuals, from answering simple queries to booking a ticket for a trip, and, when trained properly, it can even be used as a replacement for a teacher who teaches a subject, or to teach programming.

Keywords: AIML, Artificial Intelligence, Chat bot, Machine-learning, String Matching.

I. INTRODUCTION: Social networks are attracting masses and gaining huge momentum. They allow instant messaging and sharing features. Guides and technical help desks provide on-demand tech support through chat services or voice calls. Queries are taken to the technical support team from customers to clear their doubts. But this process needs a dedicated support team to answer users' queries, which is a lot of man power.
  • An Ensemble Regression Approach for Ocr Error Correction
AN ENSEMBLE REGRESSION APPROACH FOR OCR ERROR CORRECTION
by Jie Mei
Submitted in partial fulfillment of the requirements for the degree of Master of Computer Science at Dalhousie University, Halifax, Nova Scotia, March 2017.
© Copyright by Jie Mei, 2017

Table of Contents: List of Tables; List of Figures; Abstract; List of Symbols Used; Acknowledgements.
Chapter 1 Introduction
  1.1 Problem Statement
  1.2 Proposed Model
  1.3 Contributions
  1.4 Outline
Chapter 2 Background
  2.1 OCR Procedure
  2.2 OCR-Error Characteristics
  2.3 Modern Post-Processing Models
Chapter 3 Compositional Correction Frameworks
  3.1 Noisy Channel
    3.1.1 Error Correction Models
    3.1.2 Correction Inferences
  3.2 Confidence Analysis
    3.2.1 Error Correction Models
    3.2.2 Correction Inferences
  3.3 Framework Comparison
Chapter 4 Proposed Model
  4.1 Error Detection
  • Practice with Python
CSI4108-01 ARTIFICIAL INTELLIGENCE: Word Embedding / Text Processing, Practice with Python
2018. 5. 11. Lee, Gyeongbok

Contents:
• Word Embedding — libraries (gensim, fastText); embedding alignment (with two languages)
• Text/Language Processing — POS tagging with NLTK/koNLPy; text similarity (jellyfish)

Gensim is an open-source vector space modeling and topic modeling toolkit implemented in Python. It is designed to handle large text collections, using data streaming and efficient incremental algorithms, and is usually used to make word vectors from a corpus. Install with: pip install gensim. Tutorials are available here:
• https://github.com/RaRe-Technologies/gensim/blob/develop/tutorials.md#tutorials
• https://rare-technologies.com/word2vec-tutorial/

For word embedding with gensim, enable logging and prepare the input data as a list of word lists (e.g. "I have a car" and "I like the cat" become two token lists). If your data is already preprocessed, with one sentence per line separated by whitespace, you can simply load the file with LineSentence; an example corpus is available at http://an.yonsei.ac.kr/corpus/example_corpus.txt (from https://radimrehurek.com/gensim/models/word2vec.html). If the input is spread over multiple files or the file size is large, use a custom iterator with yield (from https://rare-technologies.com/word2vec-tutorial/). Key gensim.models.Word2Vec parameters include min_count.
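The slides' Word2Vec walkthrough can be condensed into a runnable sketch. This uses the gensim 4.x API; older gensim releases (as in these 2018 slides) used `size` instead of `vector_size`:

```python
import logging
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

logging.basicConfig(format="%(asctime)s : %(levelname)s : %(message)s", level=logging.INFO)

# Input data: a list of word lists, e.g. "I have a car" / "I like the cat"
sentences = [["i", "have", "a", "car"], ["i", "like", "the", "cat"]]
model = Word2Vec(sentences, vector_size=100, min_count=1, workers=4)

# For a preprocessed corpus file (one sentence per line, whitespace-separated),
# load the file directly instead:
#   model = Word2Vec(LineSentence("example_corpus.txt"), vector_size=100, min_count=5)

print(model.wv.most_similar("car", topn=3))
```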
  • NLP - Assignment 2
NLP - Assignment 2 (Week 2, December 27th, 2016)

1. A 5-gram model is a ____-order Markov model:
   (a) Six  (b) Five  (c) Four  (d) Constant
   Ans: (c) Four

2. For the following corpus C1 of 3 sentences, what is the total count of unique bigrams for which the likelihood will be estimated? Assume we do not perform any pre-processing, and we are using the corpus as given.
   (i) ice cream tastes better than any other food
   (ii) ice cream is generally served after the meal
   (iii) many of us have happy childhood memories linked to ice cream
   (a) 22  (b) 27  (c) 30  (d) 34
   Ans: (b) 27

3. Arrange the words "curry, oil and tea" in descending order, based on the frequency of their occurrence in the Google Books n-grams. The Google Books n-gram viewer is available at https://books.google.com/ngrams:
   (a) tea, oil, curry  (b) curry, oil, tea  (c) curry, tea, oil  (d) oil, tea, curry
   Ans: (d) oil, tea, curry

4. Given a corpus C2, the Maximum Likelihood Estimation (MLE) for the bigram "ice cream" is 0.4 and the count of occurrences of the word "ice" is 310. The likelihood of "ice cream" after applying add-one smoothing is 0.025 for the same corpus C2. What is the vocabulary size of C2?
   (a) 4390  (b) 4690  (c) 5270  (d) 5550
   Ans: (b) 4690

Questions 5 to 10 require you to analyse the data given in the corpus C3, using a programming language of your choice.
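The answer to question 4 can be verified by inverting the add-one (Laplace) smoothing formula:

\[
P_{\mathrm{MLE}}(\text{cream} \mid \text{ice}) = \frac{C(\text{ice cream})}{C(\text{ice})} = 0.4
\;\Rightarrow\;
C(\text{ice cream}) = 0.4 \times 310 = 124
\]

\[
P_{\mathrm{add\text{-}1}}(\text{cream} \mid \text{ice}) = \frac{C(\text{ice cream}) + 1}{C(\text{ice}) + V} = \frac{125}{310 + V} = 0.025
\;\Rightarrow\;
310 + V = \frac{125}{0.025} = 5000,\quad V = 4690
\]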
  • 3 Dictionaries and Tolerant Retrieval
Online edition © 2009 Cambridge University Press (draft of April 1, 2009; feedback welcome).

3 Dictionaries and tolerant retrieval

In Chapters 1 and 2 we developed the ideas underlying inverted indexes for handling Boolean and proximity queries. Here, we develop techniques that are robust to typographical errors in the query, as well as alternative spellings. In Section 3.1 we develop data structures that help the search for terms in the vocabulary in an inverted index. In Section 3.2 we study the idea of a wildcard query: a query such as *a*e*i*o*u*, which seeks documents containing any term that includes all the five vowels in sequence. The * symbol indicates any (possibly empty) string of characters. Users pose such queries to a search engine when they are uncertain about how to spell a query term, or seek documents containing variants of a query term; for instance, the query automat* would seek documents containing any of the terms automatic, automation and automated.

We then turn to other forms of imprecisely posed queries, focusing on spelling errors in Section 3.3. Users make spelling errors either by accident, or because the term they are searching for (e.g., Herman) has no unambiguous spelling in the collection. We detail a number of techniques for correcting spelling errors in queries, one term at a time as well as for an entire string of query terms. Finally, in Section 3.4 we study a method for seeking vocabulary terms that are phonetically close to the query term(s).
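The wildcard semantics described above are easy to make concrete. A minimal sketch follows, as a naive linear scan over a toy vocabulary, not the index structures the chapter goes on to develop:

```python
import re

def wildcard_to_regex(query):
    # '*' matches any (possibly empty) string of characters; everything else is literal
    parts = query.split("*")
    return re.compile("^" + ".*".join(re.escape(p) for p in parts) + "$")

vocabulary = ["automatic", "automation", "automated", "autumn", "facetious"]
print([t for t in vocabulary if wildcard_to_regex("automat*").match(t)])
# ['automatic', 'automation', 'automated']
print([t for t in vocabulary if wildcard_to_regex("*a*e*i*o*u*").match(t)])
# ['facetious']: contains a, e, i, o, u in sequence
```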
  • Use of Word Embedding to Generate Similar Words and Misspellings for Training Purpose in Chatbot Development
USE OF WORD EMBEDDING TO GENERATE SIMILAR WORDS AND MISSPELLINGS FOR TRAINING PURPOSE IN CHATBOT DEVELOPMENT
by SANJAY THAPA
Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at The University of Texas at Arlington, December 2019, Arlington, Texas.
Supervising Committee: Deokgun Park (Supervising Professor), Manfred Huber, Vassilis Athitsos
Copyright © by Sanjay Thapa, 2019

ACKNOWLEDGEMENTS: I would like to thank Dr. Deokgun Park for allowing me to work and conduct the research in the Human Data Interaction (HDI) Lab in the College of Engineering at the University of Texas at Arlington. Dr. Park's guidance on the procedure to solve problems using different approaches has helped me to grow personally and intellectually. I am also very thankful to Dr. Manfred Huber and Dr. Vassilis Athitsos for their constant guidance and support in my research. I would like to thank all the members of the HDI Lab for their generosity and company during my time in the lab. I also would like to thank Peace Ossom Williamson, the director of Research Data Services at the library of the University of Texas at Arlington (UTA), for giving me the opportunity to work as a Graduate Research Assistant (GRA) in the dataCAVE.

DEDICATION: I would like to dedicate my thesis especially to my mom and dad, who have always been very supportive of me with my educational and personal endeavors. Furthermore, my sister and my brother played an indispensable role in providing emotional and other support during my graduate school and research.

LIST OF ILLUSTRATIONS:
Fig. 2.3: Rasa Architecture
Fig. 2.4: Chatbot conversation without misspelling and with misspelling error
  • Feature Combination for Measuring Sentence Similarity
FEATURE COMBINATION FOR MEASURING SENTENCE SIMILARITY
Ehsan Shareghi Nojehdeh
A thesis in the Department of Computer Science and Software Engineering, presented in partial fulfillment of the requirements for the degree of Master of Computer Science, Concordia University, Montréal, Québec, Canada, April 2013.
© Ehsan Shareghi Nojehdeh, 2013

Concordia University, School of Graduate Studies. This is to certify that the thesis prepared by Ehsan Shareghi Nojehdeh, entitled "Feature Combination for Measuring Sentence Similarity" and submitted in partial fulfillment of the requirements for the degree of Master of Computer Science, complies with the regulations of this University and meets the accepted standards with respect to originality and quality. Signed by the final examining committee: Dr. Peter C. Rigby (Chair), Dr. Leila Kosseim (Examiner), Dr. Adam Krzyzak (Examiner), Dr. Sabine Bergler (Supervisor). Approved by the Chair of Department or Graduate Program Director and Dr. Robin A. L. Drew, Dean, Faculty of Engineering and Computer Science.

Abstract: Sentence similarity is one of the core elements of Natural Language Processing (NLP) tasks such as Recognizing Textual Entailment and Paraphrase Recognition. Over the years, different systems have been proposed to measure similarity between fragments of texts. In this research, we propose a new two-phase supervised learning method which uses a combination of lexical features to train a model for predicting similarity between sentences. Each of these features covers an aspect of the text on an implicit or explicit level. The two-phase method uses all combinations of the features in the feature space and trains separate models based on each combination.
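A minimal sketch of the "train a separate model per feature combination" idea, with illustrative feature names and synthetic data; the thesis's actual features and learner are not specified in this excerpt:

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical lexical features for sentence pairs (names are illustrative only)
feature_names = ["word_overlap", "char_ngram_sim", "length_ratio"]
rng = np.random.default_rng(0)
X = rng.random((100, 3))   # placeholder feature matrix, one row per sentence pair
y = rng.random(100) * 5.0  # placeholder gold similarity scores in [0, 5]

# Train and score one model per non-empty feature combination
scores = {}
for k in range(1, len(feature_names) + 1):
    for combo in combinations(range(len(feature_names)), k):
        mse = -cross_val_score(LinearRegression(), X[:, list(combo)], y,
                               scoring="neg_mean_squared_error", cv=5).mean()
        scores[tuple(feature_names[i] for i in combo)] = mse

best = min(scores, key=scores.get)  # combination with the lowest cross-validated MSE
print(best, scores[best])
```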
  • The Research of Weighted Community Partition Based on Simhash
The Research of Weighted Community Partition based on SimHash
Li Yang, Sha Ying, Shan Jixi, Xu Kai
(Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190; Graduate School of Chinese Academy of Sciences, Beijing 100049; National Engineering Laboratory for Information Security Technologies, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093)
Available online at www.sciencedirect.com, Procedia Computer Science 17 (2013) 797-802, Information Technology and Quantitative Management (ITQM2013).

Abstract: The methods of community partition nowadays mainly focus on using the topological links, and consider little of the content-based information between users. In this paper we analyze the content similarity between users by the SimHash method, and compute the content-based weight of edges so as to attain a more reasonable community partition. The dataset adopts real data from Twitter for testing, which verifies the effectiveness of the proposed method. © 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of the organizers of the 2013 International Conference on Information Technology and Quantitative Management.

Keywords: Social Network, Community Partition, Content-based Weight, SimHash, Information Entropy

1. Introduction: The important characteristic of a social network is its community structure, but current methods mainly focus on the topological links, and research on the content is very limited. The reason is that we can't effectively utilize content-based information from the abundant short text. So in this paper we hope to make full use of content-based information between users, based on the topological links, so as to find more reasonable communities.
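A minimal, unweighted sketch of the SimHash fingerprinting the paper builds on; the paper goes further and uses the resulting similarities to weight graph edges, and the MD5 token hash here is an illustrative choice:

```python
import hashlib

def simhash(tokens, bits=64):
    # bit-vote fingerprint: each token votes +1/-1 per bit position of its hash
    votes = [0] * bits
    for token in tokens:
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16) & ((1 << bits) - 1)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming_distance(a, b):
    # similar token streams yield fingerprints with a small Hamming distance
    return bin(a ^ b).count("1")

doc1 = "the cat sat on the mat".split()
doc2 = "the cat sat on a mat".split()
doc3 = "stock prices fell sharply today".split()
print(hamming_distance(simhash(doc1), simhash(doc2)))  # small: near-duplicate texts
print(hamming_distance(simhash(doc1), simhash(doc3)))  # larger: unrelated texts
```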
  • Hybrid Algorithm for Approximate String Matching to Be Used for Information Retrieval Surbhi Arora, Ira Pandey
International Journal of Scientific & Engineering Research, Volume 9, Issue 11, November 2018, p. 396. ISSN 2229-5518.

Hybrid Algorithm for Approximate String Matching to be used for Information Retrieval
Surbhi Arora, Ira Pandey

Abstract — Conventional database searches require the user to hit a complete, correct query, denying the possibility that any legitimate typographical variation or spelling error in the query will simply fail the search procedure. An approximate string matching engine, often colloquially referred to as a fuzzy keyword search engine, could be a potential solution for all such search breakdowns. A fuzzy matching program can be summarized as Google's 'Did you mean: ...' or Yahoo's 'Including results for ...'. These programs are entitled to be called fuzzy since they don't employ strict checking and hence do not confine the results to 0 or 1, i.e. no match or exact match; rather, they are designed to handle the concept of partial truth, be it DNA mapping, record screening, or simply web browsing. With the help of a 0.4 million English words dictionary acting as the underlying data source, thereby qualifying as Big Data, the study involves the use of Apache Hadoop's MapReduce programming paradigm to perform approximate string matching. The aim is to design a system prototype to demonstrate the practicality of our solution.

Index Terms — Approximate String Matching, Fuzzy, Information Retrieval, Jaro-Winkler, Levenshtein, MapReduce, N-Gram, Edit Distance

1 INTRODUCTION: Fuzzy keyword search engines employ approximate search matching algorithms for Information Retrieval (IR). Approximate string matching involves spotting all text matching the text pattern of a given search query, but with a limited number of errors [1]. The fuzzy logic behind approximate string searching can be described by considering a fuzzy set F over a referential ...

[Fig. 1: Fuzzy search results for the query string "noting".]
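Of the measures named in the index terms, character n-gram similarity is the simplest to sketch; a minimal fuzzy candidate ranking, assuming a toy dictionary:

```python
def char_ngrams(word, n=2):
    padded = f"#{word}#"  # boundary markers so prefixes and suffixes count
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def ngram_similarity(a, b, n=2):
    # Jaccard similarity over character n-gram sets
    grams_a, grams_b = char_ngrams(a, n), char_ngrams(b, n)
    return len(grams_a & grams_b) / len(grams_a | grams_b)

dictionary = ["nothing", "noting", "notion", "nesting", "netting"]
query = "noteng"  # misspelled query
suggestions = sorted(dictionary, key=lambda w: ngram_similarity(query, w), reverse=True)
print(suggestions[:3])  # ranked "Did you mean ...?" candidates
```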
  • Levenshtein Distance Based Information Retrieval Veena G, Jalaja G BNM Institute of Technology, Visvesvaraya Technological University
International Journal of Scientific & Engineering Research, Volume 6, Issue 5, May 2015, p. 112. ISSN 2229-5518.

Levenshtein Distance based Information Retrieval
Veena G, Jalaja G
BNM Institute of Technology, Visvesvaraya Technological University

Abstract — In today's web-based applications, information retrieval is gaining popularity. There are many advances in information retrieval, such as fuzzy search and proximity ranking. Fuzzy search retrieves relevant results containing words which are similar to the query keywords: even if there are a few typographical errors in the query keywords, the system will retrieve relevant results. A query keyword can have many similar words; the words which are very similar to the query keywords will be considered in fuzzy search. Ranking plays an important role in web search, as users expect to see relevant documents in the first few results. Proximity ranking is arranging search results based on the distance between query keywords. In the current research, an information retrieval system is built to search the contents of text files with the features of fuzzy search and proximity ranking. Indexing of the contents of html or pdf files is performed for fast retrieval of search results, using a combination of indexes such as an inverted index and a trie index. Fuzzy search is implemented using the Levenshtein distance (edit distance) concept. Proximity ranking is done using a binning concept. Search engine evaluation is done using average interpolated precision at eleven recall points, i.e. at 0, 0.1, 0.2, ..., 0.9, 1.0, and a precision-recall graph is plotted to evaluate the system.

Index Terms — stemming, stop words, fuzzy search, proximity ranking, dictionary, inverted index, trie index and binning
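The abstract names binning as the proximity-ranking mechanism; one plausible reading of that idea, sketched with hypothetical keyword positions:

```python
def min_keyword_distance(positions_a, positions_b):
    # smallest token gap between any occurrences of two query keywords
    return min(abs(i - j) for i in positions_a for j in positions_b)

def proximity_bin(distance, bin_size=5):
    # documents are grouped into bins of increasing keyword distance;
    # a lower bin number means the keywords occur closer together
    return distance // bin_size

# Hypothetical token positions of two query keywords in three documents
docs = {"doc1": ([3, 40], [90]), "doc2": ([10], [200]), "doc3": ([7], [8])}
ranked = sorted(docs, key=lambda d: proximity_bin(min_keyword_distance(*docs[d])))
print(ranked)  # doc3 ranks first: its keywords are adjacent
```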
  • Thesis Submitted in Partial Fulfilment for the Degree of Master of Computing (Advanced) at Research School of Computer Science the Australian National University
How to tell Real from Fake? Understanding how to classify human-authored and machine-generated text
Debashish Chakraborty
A thesis submitted in partial fulfilment for the degree of Master of Computing (Advanced) at the Research School of Computer Science, The Australian National University, May 2019.
© Debashish Chakraborty 2019
Except where otherwise indicated, this thesis is my own original work. Debashish Chakraborty, 30 May 2019.

Acknowledgments: First, I would like to express my gratitude to Dr Patrik Haslum for his support and guidance throughout the thesis, and for the useful feedback on my late drafts. Thank you to my parents and Sandra for their continuous support and patience. A special thanks to Sandra for reading my very "drafty" drafts.

Abstract: Natural Language Generation (NLG) using Generative Adversarial Networks (GANs) has been an active field of research, as it alleviates restrictions in conventional Language Modelling based text generators, e.g. Long Short-Term Memory (LSTM) networks. The adequacy of a GAN-based text generator depends on its capacity to classify human-written (real) and machine-generated (synthetic) text. However, traditional evaluation metrics used by these generators cannot effectively capture classification features in NLG tasks, such as creative writing. We prove this by using an LSTM network to almost perfectly classify sentences generated by LeakGAN, a state-of-the-art GAN for long text generation. This thesis attempts a rare approach to understanding real and synthetic sentences using meaningful and interpretable features of long sentences (with at least 20 words). We analyse novelty and diversity features of real and synthetic sentences, generated by a LeakGAN, using three meaningful text dissimilarity functions: Jaccard Distance (JD), Normalised Levenshtein Distance (NLD) and Word Mover's Distance (WMD).
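Two of the three dissimilarity functions are straightforward to sketch at the word level; Word Mover's Distance needs pretrained embeddings and is omitted here. The example sentences are illustrative only:

```python
def jaccard_distance(tokens_a, tokens_b):
    # 1 - |A ∩ B| / |A ∪ B| over the sentences' word sets
    a, b = set(tokens_a), set(tokens_b)
    return 1.0 - len(a & b) / len(a | b)

def normalised_levenshtein(tokens_a, tokens_b):
    # word-level edit distance divided by the length of the longer sentence
    m, n = len(tokens_a), len(tokens_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1,   # insertion
                           dp[i - 1][j - 1] + (tokens_a[i - 1] != tokens_b[j - 1]))
    return dp[m][n] / max(m, n)

real = "the sun rose slowly over the quiet harbour and the boats".split()
fake = "the sun rose rose over over the the quiet quiet boats".split()
print(jaccard_distance(real, fake), normalised_levenshtein(real, fake))
```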
  • Effective Search Space Reduction for Spell Correction Using Character Neural Embeddings
Effective search space reduction for spell correction using character neural embeddings
Harshit Pande
Smart Equipment Solutions Group, Samsung Semiconductor India R&D, Bengaluru, India
[email protected]

Abstract: We present a novel, unsupervised, and distance measure agnostic method for search space reduction in spell correction using neural character embeddings. The embeddings are learned by skip-gram word2vec training on sequences generated from dictionary words in a phonetic information-retentive manner. We report a very high performance in terms of both success rates and reduction of search space on the Birkbeck spelling error corpus. To the best of our knowledge, this is the first application of word2vec to spell correction.

1 Introduction: Spell correction is now a pervasive feature, with presence in a wide range of applications such ... [text missing] ... the number of distance computations blows up the time complexity, thus hindering real-time spell correction. For Damerau-Levenshtein distance or similar edit distance-based measures, some approaches have been tried to reduce the time complexity of spell correction. Norvig (2007) does not check against all dictionary words; instead it generates all possible words up to a certain edit distance threshold from the misspelled word. Then each of the generated words is checked in the dictionary for existence, and if it is found in the dictionary, it becomes a potentially correct spelling. There are two shortcomings of this approach. First, such search space reduction works only for edit distance-based measures. Second, this approach too leads to high time complexity when the edit distance threshold is greater than 2 and the set of possible characters is large. A large character set is a reality for the Unicode characters used in many Asian languages.
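The Norvig (2007) baseline that the paper contrasts against can be sketched directly from the description above, assuming a lowercase ASCII alphabet and a toy dictionary:

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # every string reachable from `word` by one delete, transpose, replace, or insert
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [left + right[1:] for left, right in splits if right]
    transposes = [left + right[1] + right[0] + right[2:] for left, right in splits if len(right) > 1]
    replaces = [left + c + right[1:] for left, right in splits if right for c in alphabet]
    inserts = [left + c + right for left, right in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

dictionary = {"spell", "spelt", "smell", "shell", "spill"}
print(edits1("spel") & dictionary)  # {'spell', 'spelt'}: dictionary words within one edit
```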