Spell Checkers and Correctors: A Unified Treatment

SPELL CHECKERS AND CORRECTORS: A UNIFIED TREATMENT

By Hsuan Lorraine Liang

Submitted in partial fulfilment of the requirements for the degree Master of Science (Computer Science) in the Faculty of Engineering, Built Environment and Information Technology, University of Pretoria, Pretoria, South Africa, November 2008.

© University of Pretoria

To my late grandfather, Shu-Hsin.
To my parents, Chin-Nan and Hsieh-Chen.

Abstract

The aim of this dissertation is to provide a unified treatment of various spell checkers and correctors. Firstly, the spell checking and correcting problems are formally described in mathematics in order to provide a better understanding of these tasks. An approach is adopted that is similar to the way in which denotational semantics is used to describe programming languages. Secondly, the various attributes of existing spell checking and correcting techniques are discussed. Extensive studies of selected spell checking/correcting algorithms and packages are then performed. Lastly, an empirical investigation of various spell checking/correcting packages is presented. It provides a comparison and suggests a classification of these packages in terms of their functionalities, implementation strategies, and performance. The investigation was conducted on packages for spell checking and correcting in English as well as in Northern Sotho and Chinese. The classification provides a unified presentation of the strengths and weaknesses of the techniques studied in the research. The findings provide a better understanding of these techniques and can assist in improving existing spell checking/correcting applications, as well as future spell checking/correcting package designs and implementations.

Keywords: classification, dictionary lookup, edit distance, formal concept analysis, FSA, n-gram, performance, spell checking, spell correcting.

Degree: Magister Scientiae
Supervisors: Prof B. W. Watson and Prof D. G. Kourie
Department of Computer Science

Acknowledgements

First and foremost, I would like to thank my supervisors, Bruce Watson and Derrick Kourie, for directing my research, reviewing my ideas, and providing feedback on my work. I would also like to take this opportunity to express my gratitude for their continuous patience, support, and encouragement.

I would like to thank my parents, Chin-Nan and Hsieh-Chen Liang, my brother, Albert Liang, and my grandmother, Yu-Fang Fan, for their unconditional love and support.

A special thanks to Ernest Ketcha Ngassam for many technical discussions and inspiration during this research. He devoted a great deal of his time to helping me kick-start this work. Sergei Obiedkov was helpful in pointing me to related research on formal concept analysis. TshwaneDJe HLT and its CEO, David Joffe, were helpful in providing the Northern Sotho corpora as well as assistance with the language.

Table of Contents

Abstract
Acknowledgements
Table of Contents
1 Introduction
  1.1 Problem Statement
  1.2 Structure of this Dissertation
  1.3 Intended Audience
2 Mathematical Preliminaries
  2.1 Introduction
  2.2 Notation
  2.3 Denotational Semantics
    2.3.1 Spell checking and correcting problems
    2.3.2 n-gram analysis
  2.4 Conclusion
3 Spell Checking and Correcting Techniques
  3.1 Introduction
  3.2 Morphological Analysis
  3.3 Spell Checking Techniques
    3.3.1 Dictionary lookup techniques
    3.3.2 n-gram analysis
  3.4 Spell Correcting Techniques
    3.4.1 Minimum edit distance
    3.4.2 Similarity key techniques
    3.4.3 Rule-based techniques
    3.4.4 n-gram-based techniques
    3.4.5 Probabilistic techniques
    3.4.6 Neural networks
  3.5 Other Related Issues
  3.6 Conclusion
4 Spell Checking and Correcting Algorithms
  4.1 Introduction
  4.2 SPELL
  4.3 CORRECT
  4.4 ASPELL
  4.5 SPEEDCOP
  4.6 FSA
  4.7 AGREP
  4.8 AURA
    4.8.1 Spell checking in AURA
    4.8.2 Spell correcting in AURA
    4.8.3 Phonetic checking and correcting
  4.9 Spell Checkers/Correctors for the South African Languages
  4.10 CInsunSpell: Chinese Spell Checking Algorithms
  4.11 Conclusion
5 Empirical Investigation and Classification
  5.1 Introduction
  5.2 Spell Checking and Correcting for English
    5.2.1 Data Sets
    5.2.2 Performance
  5.3 Spell Checking and Correcting for Northern Sotho
    5.3.1 Data Set
    5.3.2 Performance
  5.4 Spell Checking and Correcting for Chinese
    5.4.1 Data Set
    5.4.2 Performance
  5.5 Classification
    5.5.1 Concept lattice
  5.6 Conclusions
6 Conclusions and Future Work
  6.1 Conclusions
  6.2 Future Work
Bibliography

List of Tables

3.1 An example of a simple non-positional binary bigram array for English
3.2 An example of a simple non-positional frequency bigram array for English
3.3 An extract of the probabilities of bigrams generated from Table 3.2
4.1 An example of the deletion table for the word "after"
4.2 Example of a bit mask table
4.3 Recalling from the CMM using binary Hamming distance
4.4 Recalling from the CMM using shifting n-gram
5.1 Spell checking results for English using the first data set (DS1)
5.2 Single-letter errors detected using DS1
5.3 Spell checking results for English using the second data set (DS2)
5.4 Spell correction results for English using DS1
5.5 Spell correction results for English using DS2
5.6 Spell checking results for CInsunSpell
5.7 Classification of spell checking/correcting packages, Part I
5.8 Classification of spell checking/correcting packages, Part II

List of Figures

5.1 Spell checking recall rates for spell checking/correcting packages for English
5.2 Suggestion accuracy for various spell checking/correcting packages for English
5.3 Mean time performance of the spell checking/correcting packages for English
5.4 Concept lattice for various spell checking and correcting packages

Chapter 1: Introduction

In this chapter, we identify the problem statement for the domain of spell checking and correcting. We also identify the intended audience of this research, and present an introduction to the contents and structure of this dissertation. This study is restricted to context-independent spelling error detection and correction.

1.1 Problem Statement

Spell checkers and correctors are either stand-alone applications capable of processing a string of words or a text, or embedded tools that form part of a larger application such as a word processor. Various search and replace algorithms are adapted to fit the domain of spell checking and correcting: spelling error detection and correction are closely related to exact and approximate pattern matching, respectively. Work on spelling error detection and correction in text started in the 1960s. A number of commercial and non-commercial spell checkers and correctors, such as the ones embedded in Microsoft Word, Unix spell, ispell, GNU's aspell and other variants, and agrep, have been extensively studied. However, the techniques of spell correcting in particular are still limited in their scope, speed, and accuracy.

Spell checking identifies the words that are valid in some language, as well as those that are misspelled in that language. Spell correcting suggests one or more alternative words as the correct spelling when a misspelled word is identified. Spell checking involves non-word error detection, and spelling correction here involves isolated-word error correction. Isolated-word error correction refers to spell correcting without taking into account any textual or linguistic information surrounding the misspelling, whereas context-dependent word correction corrects errors by using such textual or linguistic context.
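To make this division of labour concrete, the following minimal sketch (an illustration only: the toy five-word lexicon, the function names, and the crude scoring are ours, not code from any package studied later) treats checking as exact lookup in a word list and correction as approximate matching against it:

    # A minimal sketch (illustrative only) of the two tasks delimited above:
    # context-independent non-word detection by exact lookup, and
    # isolated-word correction by approximate matching against the lexicon.

    LEXICON = {"the", "cat", "sat", "on", "mat"}

    def check(word: str) -> bool:
        """Non-word error detection: exact dictionary lookup."""
        return word.lower() in LEXICON

    def correct(word: str) -> list[str]:
        """Isolated-word error correction: rank lexicon entries by a crude
        character-overlap score, ignoring any surrounding context."""
        def overlap(candidate: str) -> int:
            return len(set(word.lower()) & set(candidate))
        return sorted(LEXICON, key=overlap, reverse=True)

    print(check("mat"), check("mta"))  # True False
    print(correct("mta")[0])           # 'mat'

Real correctors replace the crude overlap score with the techniques surveyed below, such as minimum edit distance or n-gram similarity.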
Between the early 1970s and early 1980s, research focused on techniques for non-word error detection. A non-word can be defined as a continuous string of letters of an alphabet that does not appear in a given word list or dictionary, or that is an invalid word form. Non-word spelling error detection can generally be divided into two main techniques, namely dictionary lookup techniques and n-gram analysis. In the past, text recognition systems, such as OCR technology, tended to rely on n-gram analysis for error detection, whereas spell checkers have mostly relied on dictionary lookup techniques. Each technique can be used individually, in combination, or together with other methods, such as probabilistic techniques.
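A hedged sketch of the n-gram idea follows (a Python set stands in for the non-positional binary bigram array of Section 3.3.2, and the four-word reference list is illustrative): a word is rejected as soon as it contains a bigram that never occurs in the reference word list.

    # Bigram-based non-word detection; the set of attested bigrams plays
    # the role of a non-positional binary bigram array.

    def bigrams(word: str) -> set[str]:
        """All adjacent letter pairs of the word."""
        return {word[i:i + 2] for i in range(len(word) - 1)}

    def build_table(word_list) -> set[str]:
        """Collect every bigram attested in the reference word list."""
        table = set()
        for w in word_list:
            table |= bigrams(w.lower())
        return table

    def looks_valid(word: str, table: set[str]) -> bool:
        """Accept a word only if every one of its bigrams is attested."""
        return bigrams(word.lower()) <= table

    table = build_table(["check", "checker", "correct", "corrector"])
    print(looks_valid("correct", table))  # True
    print(looks_valid("cqrrect", table))  # False: 'cq' and 'qr' unattested

The characteristic weakness is also visible here: a misspelling whose bigrams are all attested (for instance "corect", all of whose bigrams occur in the table above) passes undetected, which is one reason n-gram analysis is combined with dictionary lookup or probabilistic methods in practice.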
Isolated-word error correction techniques are often designed based on a study of spelling error patterns. Distinctions are generally made between typographic, cognitive, and phonetic errors. Techniques such as minimum edit distance (a minimal sketch of which closes this section), n-gram-based techniques, neural networks, probabilistic techniques, rule-based techniques, and similarity key techniques are discussed.

In [Kuk92], Kukich gave a survey that provided an overview of spell checking and correcting techniques and a discussion of some existing algorithms. Chapters 5 and 6 in the book of Jurafsky and Martin [JM00] were based on Kukich's work. Hodge and Austin [HA03] compared standard spell checking algorithms to a spell checking system based on the AURA modular neural network; their research is heavily focused on this hybrid spell checking methodology.

Although descriptions of some of the investigated spell checking and correcting algorithms can be found in the references mentioned above, little has been done to compare and represent the existing algorithms in a unified manner. Neither has any formal classification been done previously. This dissertation aims to fill these gaps and provide a unified treatment of various spell checking and correcting algorithms.
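As promised above, here is a minimal sketch of minimum edit distance, the standard Wagner-Fischer dynamic programme behind the technique of Section 3.4.1 (the suggest() helper, its lexicon, and the limit parameter are illustrative; this is not code from any of the packages studied):

    # Wagner-Fischer dynamic programme for minimum edit distance, plus an
    # illustrative nearest-neighbour suggester over a tiny lexicon.

    def edit_distance(a: str, b: str) -> int:
        """Fewest single-character insertions, deletions and substitutions
        needed to turn a into b."""
        prev = list(range(len(b) + 1))      # distances from "" to b[:j]
        for i, ca in enumerate(a, start=1):
            curr = [i]                      # distance from a[:i] to ""
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                  # delete ca
                                curr[j - 1] + 1,              # insert cb
                                prev[j - 1] + (ca != cb)))    # substitute
            prev = curr
        return prev[-1]

    def suggest(word: str, lexicon, limit: int = 3) -> list[str]:
        """Isolated-word correction: nearest lexicon entries first."""
        return sorted(lexicon, key=lambda w: edit_distance(word, w))[:limit]

    print(edit_distance("kitten", "sitting"))            # 3
    print(suggest("spel", ["spell", "spill", "speak"]))  # ['spell', ...]

Ranking candidate words by such a distance, possibly weighted by error-pattern statistics, is the essence of many of the correctors examined in later chapters.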