Forty Years of Suffix Trees


King's Research Portal. DOI: 10.1145/2810036. Document version: peer reviewed version.

Citation for published version (APA): Apostolico, A., Crochemore, M., Farach-Colton, M., Galil, Z., & Muthukrishnan, S. (2016). 40 years of suffix trees. Communications of the ACM, 59(4), 66-73. https://doi.org/10.1145/2810036

Alberto Apostolico, College of Computing, Georgia Institute of Technology, 801 Atlantic Drive, Atlanta, GA 30332-0280, USA
Maxime Crochemore, King's College London, London WC2R 2LS, UK, and Université Paris-Est, France
Martin Farach-Colton, Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
Zvi Galil, College of Computing, Georgia Institute of Technology, 801 Atlantic Drive, Atlanta, GA 30332-0280, USA
S. Muthukrishnan, Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA

ABSTRACT
This paper reviews the first 40 years in the life of suffix trees, their many incarnations, and their applications. The paper is non-technical but assumes some familiarity with the structures and constructions discussed. It is not meant to be exhaustive. It is meant to be a tribute to a ubiquitous tool of string matching, the suffix tree and its variants, and to one of the most persistent subjects of study in the theory of algorithms.

Keywords: pattern matching, string searching, bi-tree, suffix tree, dawg, suffix automaton, factor automaton, suffix array, FM-index, wavelet tree.

1. PROLOG

When William Legrand finally decrypted the string

53‡‡†305))6*;4826)4‡.)4‡);806*;48†8¶60))85;1‡(;:‡*8†83(88)5*†;46(;88*96*?;8)*‡(;485);5*†2:*‡(;4956*2(5*-4)8¶8*;4069285);)6†8)4‡‡;1(‡9;48081;8:8‡1;48†85;4)485†528806*81(‡9;48;(88;4(‡?34;48)4‡;161;:188;‡?;

it did not seem to make much more sense than it did before. The decoded message read: "A good glass in the bishop's hostel in the devil's seat forty-one degrees and thirteen minutes northeast and by north main branch seventh limb east side shoot from the left eye of the death's-head a bee line from the tree through the shot fifty feet out." But at least it did sound more like natural language, and eventually guided the main character of Edgar Allan Poe's "The Gold Bug" [36] to discover the treasure he had been after. Legrand solved a substitution cipher using symbol frequencies. He first looked for the most frequent symbol and changed it into the most frequent letter of English, then similarly inferred the most frequent word, then punctuation marks, and so on.

Both before and after 1843, the natural impulse when faced with some mysterious message has been to count frequencies of individual tokens or subassemblies in search of a clue. Perhaps one of the most intense and fascinating subjects for this kind of scrutiny has been bio-sequences. As soon as some such sequences became available, statistical analysts tried to link characters or blocks of characters to relevant biological functions. With the early examples of whole genomes emerging in the mid 1990s, it seemed natural to count the occurrences of all blocks of size 1, 2, etc., up to any desired length, looking for statistical characterizations of coding regions, promoter regions, etc.

This review is not about cryptography. It is about a data structure and its variants, and the many surprising and useful features it carries. Among these is the fact that, to set up a statistical table of occurrences for all substrings (also called factors), of any length, of a text string of n characters, it only takes time and space linear in the length of the text string. While nobody would be so foolish as to solve the problem by first generating all exponentially many possible strings and then counting their occurrences one by one, a text string may still contain Θ(n²) distinct substrings, so that tabulating all of them in linear space, never mind linear time, already seems puzzling.
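The tension between the Θ(n²) count of distinct substrings and a linear-size index is easy to see with a brute-force experiment. The short Python sketch below is purely illustrative and not from the paper (the test strings are our own choice): it enumerates every distinct substring explicitly, and for a non-repetitive text the count approaches the worst-case bound n(n+1)/2, which is exactly the table a suffix tree represents without ever materializing it.

```python
import random

def distinct_substrings(text: str) -> int:
    """Count distinct substrings the slow, explicit way: O(n^2) strings."""
    subs = set()
    for i in range(len(text)):
        for j in range(i + 1, len(text) + 1):
            subs.add(text[i:j])
    return len(subs)

random.seed(1)
for n in (8, 64, 256):
    repetitive = "ab" * (n // 2)                                  # few distinct substrings
    dna_like = "".join(random.choice("acgt") for _ in range(n))   # almost all distinct
    print(n, n * (n + 1) // 2, distinct_substrings(repetitive), distinct_substrings(dna_like))
```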
Over the years, such structures have held center stage in text searching, indexing, statistics, and compression, as well as in the assembly, alignment and comparison of biosequences. Their range of scope extends to areas as diverse as detecting plagiarism, finding surprising substrings in a text, testing the unique decipherability of a code, and more. Their impact on Computer Science and IT at large cannot be overstated. Text searching and Bioinformatics would not be the same without them. In 2013, the Combinatorial Pattern Matching symposium celebrated the 40th anniversary of the appearance of Weiner's paper [41] with a special session entirely dedicated to that event.

2. HISTORY BITS AND PIECES

At the dawn of "stringology," Don Knuth conjectured that the problem of finding the longest substring common to two long text sequences of total length n required Ω(n log n) time. An O(n log n)-time algorithm had been provided by Karp, Miller and Rosenberg [26]. That construction was destined to play a role in parallel pattern matching, but Knuth's conjecture was short lived: in 1973, Peter Weiner showed that the problem admitted an elegant linear-time solution [41], as long as the alphabet of the string was fixed. Such a solution was actually a byproduct of a construction he had originally set up for a different purpose, i.e., identifying any substring of a text file without specifying all of them. In doing so, Weiner introduced the notion of a textual inverted index that would elicit refinements, analyses and applications for forty years and counting, a feature hardly shared by any other data structure.

Weiner's original construction processed the text file from right to left. As each new character was read in, the structure, which he called a "bi-tree," would be updated to accommodate longer and longer suffixes of the text file. Thus this was an inherently off-line construction, since the text had to be known in its entirety before the construction could begin. Alternatively, one could say that the algorithm would build the structure for the reverse of the text on-line. About three years later, Ed McCreight provided a left-to-right algorithm and changed the name of the structure to "suffix tree," a name that would stick [32].

Let x be a string of n − 1 symbols over some alphabet Σ and $ an extra character not in Σ. The expanded suffix tree Tx associated with x is a digital search tree collecting all suffixes of x$. Specifically, Tx is defined as follows.

1. Tx has n leaves, labeled from 1 to n.

2. Each arc is labeled with a symbol of Σ ∪ {$}. For any i, 1 ≤ i ≤ n, the concatenation of the labels on the path from the root of Tx to leaf i is precisely the suffix suf_i = x_i x_{i+1} ... x_{n−1}$.

3. [...]

Figure 1: The expanded suffix tree of the string x = abcabcaba.

[...] this is done, it becomes possible to aim for an expectedly non-trivial O(n) time construction. At the CPM Conference of 2013, McCreight revealed that his O(n) time construction was not born as an alternative to Weiner's: he had developed it in an effort to understand Weiner's paper, but when he showed it to Weiner, asking him to confirm that he had understood that paper, the answer was "No, but you have come up with an entirely different [...]
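A minimal Python sketch of the expanded suffix tree defined above (this is our own illustrative code, not the paper's; the class and function names are invented). It builds the uncompacted digital search tree of all suffixes of x$ for the Figure 1 string x = abcabcaba, labels leaf i with its suffix number, and uses the tree for the substring-statistics task mentioned in the Prolog: the number of occurrences of a substring w equals the number of leaves below the node reached by spelling w from the root. Being uncompacted, this tree can have Θ(n²) nodes; compaction is what brings the size down to linear.

```python
class Node:
    def __init__(self):
        self.children = {}   # symbol -> Node
        self.leaf = None     # suffix number i (1-based), set at the end of suf_i

def expanded_suffix_tree(x: str) -> Node:
    s = x + "$"              # $ is assumed not to occur in x
    root = Node()
    for i in range(len(s)):  # insert suf_1, ..., suf_n
        node = root
        for c in s[i:]:
            node = node.children.setdefault(c, Node())
        node.leaf = i + 1
    return root

def occurrences(root: Node, w: str) -> int:
    """Occurrences of w in x = number of leaves below the node spelled by w."""
    node = root
    for c in w:
        if c not in node.children:
            return 0
        node = node.children[c]
    stack, count = [node], 0
    while stack:
        v = stack.pop()
        if v.leaf is not None:
            count += 1
        stack.extend(v.children.values())
    return count

root = expanded_suffix_tree("abcabcaba")
print(occurrences(root, "abc"), occurrences(root, "ab"), occurrences(root, "ba"))  # 2 3 1
```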
Recommended publications
  • Suffix Structure Lecture Moscow International Workshop ACM ICPC
Suffix structure lecture, Moscow International Workshop ACM ICPC 2015, 19 November 2015.

1 Introduction. Consider the pattern matching problem. One possible statement is as follows: you are given a text T and n patterns Si, and for each pattern you must say whether it occurs in the text. There are two ways of solving this problem. The first (considered easier) is the Aho-Corasick algorithm, which constructs an automaton recognizing the strings that contain some pattern as a substring. The second (considered harder) is the use of suffix structures, which are mainly the suffix array, the suffix automaton and the suffix tree. This lecture is dedicated to the last two, their relations and applications.

2 Naive solution. Consider an arbitrary substring s[pos..pos+len−1]. It is the prefix of length len of the suffix of the string that starts at position pos. Given this, we can use the following naive solution: add all suffixes of the string to a trie. Then exactly one vertex corresponds to each prefix of every string in the trie, hence we can check whether Si is a substring of T in O(|Si|) time. The disadvantage is obvious: such a solution needs O(|T|²) time and memory. There are two ways of overcoming this, which lead to the suffix tree and the suffix automaton.

3 Suffix tree. Every top-down path in the trie spells a substring of T. Hence we can remove from the trie every node which is neither the root nor a vertex corresponding to some suffix and whose degree equals 2 (i.e., nodes which are not crossroads: they have exactly one ingoing and exactly one outgoing edge).
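A tiny Python sketch of the naive solution just described (our own illustrative code, not the lecture's): insert every suffix of T into a trie of nested dictionaries, then answer each pattern Si with a single top-down walk, i.e. in O(|Si|) time per query, at the price of O(|T|²) construction time and memory.

```python
def build_suffix_trie(T: str) -> dict:
    """Trie of all suffixes of T, as nested dicts: symbol -> subtrie."""
    root = {}
    for i in range(len(T)):
        node = root
        for c in T[i:]:
            node = node.setdefault(c, {})
    return root

def is_substring(trie: dict, S: str) -> bool:
    """Walk down the trie; S occurs in T iff the whole walk succeeds."""
    node = trie
    for c in S:
        if c not in node:
            return False
        node = node[c]
    return True

trie = build_suffix_trie("abracadabra")
for S in ("cad", "abra", "bad"):
    print(S, is_substring(trie, S))   # cad True, abra True, bad False
```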
  • Full-Text and Keyword Indexes for String Searching
Technische Universität München, Department of Informatics. Master's Thesis in Informatics: "Full-text and Keyword Indexes for String Searching" (Indizierung von Volltexten und Keywords für Textsuche). Author: Aleksander Cisłak. Supervisor: Prof. Burkhard Rost. Advisors: Prof. Szymon Grabowski; Tatyana Goldberg, MSc. August 2015. arXiv:1508.06610v1 [cs.DS], 26 Aug 2015.

Abstract. String searching consists in locating a substring in a longer text, and two strings can be approximately equal (various similarity measures such as the Hamming distance exist). Strings can be defined very broadly, and they usually contain natural language and biological data (DNA, proteins), but they can also represent other kinds of data such as music or images. One solution to string searching is to use online algorithms which do not preprocess the input text; however, this is often infeasible due to the massive sizes of modern data sets. Alternatively, one can build an index, i.e. a data structure which aims to speed up string matching queries. The indexes are divided into full-text ones, which operate on the whole input text and can answer arbitrary queries, and keyword indexes, which store a dictionary of individual words. In this work, we present a literature review for both index categories as well as our contributions (which are mostly practice-oriented).
  • Algorithms on Strings
Algorithms on Strings. Maxime Crochemore, Christophe Hancart and Thierry Lecroq. Cambridge University Press.

Table of contents
Preface VII
1 Tools 1
1.1 Strings and automata 2
1.2 Some combinatorics 8
1.3 Algorithms and complexity 17
1.4 Implementation of automata 21
1.5 Basic pattern matching techniques 26
1.6 Borders and prefixes tables 36
2 Pattern matching automata 51
2.1 Trie of a dictionary 52
2.2 Searching for several strings 53
2.3 Implementation with failure function 61
2.4 Implementation with successor by default 67
2.5 Searching for one string 76
2.6 Searching for one string and failure function 79
2.7 Searching for one string and successor by default 86
3 String searching with a sliding window 95
3.1 Searching without memory 95
3.2 Searching time 101
3.3 Computing the good suffix table 105
3.4 Automaton of the best factor 109
3.5 Searching with one memory 113
3.6 Searching with several memories 119
3.7 Dictionary searching 128
4 Suffix arrays 137
4.1 Searching a list of strings 138
4.2 Searching with common prefixes 141
4.3 Preprocessing the list 146
4.4 Sorting suffixes 147
4.5 Sorting suffixes on bounded integer alphabets 153
4.6 Common prefixes of the suffixes 158
5 Structures for index 165
5.1 Suffix trie 165
5.2 Suffix tree 171
5.3 Contexts of factors 180
5.4 Suffix automaton 185
5.5 Compact suffix automaton 197
6 Indexes 205
6.1 Implementing an index 205
6.2 Basic operations 208
6.3 Transducer of positions 213
6.4 Repetitions
  • Finite Automata Implementations Considering CPU Cache (J. Holub)
Acta Polytechnica Vol. 47 No. 6/2007. Finite Automata Implementations Considering CPU Cache. J. Holub.

Finite automata are mathematical models for finite state systems. The more general finite automaton is the nondeterministic finite automaton (NFA), which cannot be used directly. It is usually transformed into the deterministic finite automaton (DFA), which then runs in time O(n), where n is the size of the input text. We present two main approaches to the practical implementation of DFAs considering the CPU cache. The first approach (represented by the Table Driven and Hard Coded implementations) is suitable for automata that are run very frequently, typically having cycles. The other approach is suitable for a collection of automata from which various automata are retrieved and then run; this second kind of automata is expected to be cycle-free. Keywords: deterministic finite automaton, CPU cache, implementation.

1 Introduction. The original formal study of finite state systems (neural nets) is from 1943 by McCulloch and Pitts [14]. In 1956 Kleene [13] modeled the neural nets of McCulloch and Pitts by finite automata. At that time similar models were presented by Huffman [12], Moore [17], and Mealy [15]. In 1959, Rabin and Scott introduced nondeterministic finite automata [...] usage. Section 3 then describes general techniques for DFA implementation; it is mostly suitable for a DFA that is run most of the time (since a DFA has a finite set of states, this kind of DFA has to have cycles). Recent results in the implementation using CPU cache are discussed in Section 4. On the other hand, we may have a collection of DFAs, each representing some document (e.g., in the form of a complete index in the case of factor or suffix automata).
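To make the "Table Driven" approach concrete, here is a minimal sketch in Python (our own illustration, not code from the paper; the example automaton, which accepts strings over {a, b} containing the factor "ab", is invented for the demo). The transition function is stored as a table indexed by state and input symbol, so running the DFA costs one table lookup per text character.

```python
TRANSITIONS = [            # TRANSITIONS[state][symbol] -> next state
    {"a": 1, "b": 0},      # state 0: nothing useful seen yet
    {"a": 1, "b": 2},      # state 1: just read 'a'
    {"a": 2, "b": 2},      # state 2: the factor "ab" has been seen (accepting, absorbing)
]
ACCEPTING = {2}

def run_dfa(text: str) -> bool:
    state = 0
    for ch in text:
        state = TRANSITIONS[state][ch]   # one lookup per symbol: O(n) overall
    return state in ACCEPTING

print(run_dfa("bbaab"), run_dfa("bbbaa"))   # True False
```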
  • Suffix Trees, Suffix Arrays, BWT
Algorithms for Bioinformatics and Visualization (Algorithmes pour la bio-informatique et la visualisation), Course 3. Raluca Uricaru. Suffix trees, suffix arrays, BWT. Based on: the suffix trees and suffix arrays presentation by Haim Kaplan; the suffix trees course by Paco Gomez; Linear-Time Construction of Suffix Trees by Dan Gusfield; and Introduction to the Burrows-Wheeler Transform and FM Index by Ben Langmead.

Trie: a tree representing a set of strings, e.g. { aeef, ad, bbfe, bbfg, c }. Assume no string is a prefix of another. Each edge is labeled by a letter, and no two edges outgoing from the same node are labeled the same. Each string corresponds to a leaf.

Compressed trie: compress unary nodes and label edges by strings.

Suffix tree: given a string s, a suffix tree of s is a compressed trie of all suffixes of s. Observation: to make the suffixes prefix-free we add a special character, say $, at the end of s.

Example: let s = abab. A suffix tree of s is a compressed trie of all suffixes of abab$, namely { $, b$, ab$, bab$, abab$ }.

Trivial algorithm to build a suffix tree: put the largest suffix abab$ in, then put the suffix bab$ in, then ab$, then b$, then $. We also label each leaf with the starting point of the corresponding suffix.
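The "trivial algorithm" above is easy to spell out in code. The following Python sketch is our own naive implementation, not taken from the slides: it inserts the suffixes of s$ one by one into a compacted trie whose edges are labelled with strings, splitting an edge whenever a new suffix shares only part of its label, and leaves carry the starting position of their suffix.

```python
def insert(children, suffix, leaf_id):
    """Insert one suffix into a node given as {edge label: child}; a child is a dict or a leaf id."""
    for label in list(children):
        k = 0                                      # length of the common prefix of label and suffix
        while k < len(label) and k < len(suffix) and label[k] == suffix[k]:
            k += 1
        if k == 0:
            continue                               # this edge does not match, try the next one
        if k < len(label):                         # partial match: split the edge
            child = children.pop(label)
            mid = {label[k:]: child}
            children[label[:k]] = mid
            insert(mid, suffix[k:], leaf_id)
        else:                                      # the whole edge label matches: descend
            insert(children[label], suffix[k:], leaf_id)
        return
    children[suffix] = leaf_id                     # no edge shares a first character: new leaf

def suffix_tree(s: str):
    s += "$"
    root = {}
    for i in range(len(s)):                        # largest suffix first, as in the slides
        insert(root, s[i:], i)
    return root

print(suffix_tree("abab"))
# {'ab': {'ab$': 0, '$': 2}, 'b': {'ab$': 1, '$': 3}, '$': 4}
```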
  • BNDM: Starting the Matching from the End Enables Long Shifts
BNDM. Starting the matching from the end enables long shifts. The Horspool algorithm bases the shift on a single character. The Boyer–Moore algorithm uses the matching suffix and the mismatching character. Factor-based algorithms continue matching until no pattern factor matches; this may require more comparisons, but it enables longer shifts. (Example 2.14 compares the Horspool, Boyer–Moore and factor shifts for the pattern "ainaisen-ainainen" against the text "varmasti-aikaisen-ainainen".)

Factor-based algorithms use an automaton that accepts the suffixes of the reverse pattern P^R (or, equivalently, the reverse prefixes of the pattern P). BDM (Backward DAWG Matching) uses a deterministic automaton that accepts exactly the suffixes of P^R; the DAWG (Directed Acyclic Word Graph) is also known as the suffix automaton. BNDM (Backward Nondeterministic DAWG Matching) simulates a nondeterministic automaton (Example 2.15: P = assi). BOM (Backward Oracle Matching) uses a much simpler deterministic automaton that accepts all suffixes of P^R but may also accept some other strings; this can cause shorter shifts but not incorrect behaviour.

Suppose we are currently comparing P against T[j..j+m). We use the automaton to scan the text backwards from T[j+m−1]. When the automaton has scanned T[j+i..j+m): if the automaton is in an accept state, then T[j+i..j+m) is a prefix of P; if i = 0, we have found an occurrence; otherwise, we mark the prefix match by setting shift = i. This is the length of the shift that would achieve a matching alignment.
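As a point of comparison for the shift rules above, here is a short Python sketch of the Horspool algorithm (our own illustrative code, not taken from the lecture notes): the shift depends only on the single text character aligned with the last pattern position.

```python
def horspool(text: str, pattern: str):
    m, n = len(pattern), len(text)
    occ = []
    if m == 0 or n < m:
        return occ
    # shift[c] = distance from the rightmost occurrence of c in pattern[0..m-2] to the end
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    j = 0
    while j <= n - m:
        if text[j:j + m] == pattern:
            occ.append(j)
        # shift on the text character currently under the last pattern position
        j += shift.get(text[j + m - 1], m)
    return occ

print(horspool("abracadabra", "abra"))   # [0, 7]
```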
  • Suffix Trees and Suffix Arrays in Primary and Secondary Storage (Pang Ko, Iowa State University)
Iowa State University, Retrospective Theses and Dissertations, 2007. "Suffix trees and suffix arrays in primary and secondary storage" by Pang Ko. A dissertation submitted to the graduate faculty in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Major: Computer Engineering. Program of Study Committee: Srinivas Aluru (Major Professor), David Fernández-Baca, Suraj Kothari, Patrick Schnable, Srikanta Tirthapura. Ames, Iowa, 2007.

Recommended citation: Ko, Pang, "Suffix trees and suffix arrays in primary and secondary storage" (2007). Retrospective Theses and Dissertations. 15942. https://lib.dr.iastate.edu/rtd/15942

The excerpt's table of contents begins with Chapter 1, Introduction; 1.1 Suffix Array in Main Memory.
  • Space-Efficient K-Mer Algorithm for Generalised Suffix Tree
International Journal of Information Technology Convergence and Services (IJITCS), Vol. 7, No. 1, February 2017. Space-Efficient k-mer Algorithm for Generalised Suffix Tree. Freeson Kaniwa, Venu Madhav Kuthadi, Otlhapile Dinakenyane and Heiko Schroeder, Botswana International University of Science and Technology, Botswana.

ABSTRACT. Suffix trees have emerged to be very fast for pattern searching, yielding O(m) time, where m is the pattern size. Unfortunately, their high memory requirements make it impractical to work with huge amounts of data. We present a memory-efficient algorithm for a generalized suffix tree which reduces the space by a factor of 10 when the size of the pattern is known beforehand. Experiments on the chromosomes and the Pizza&Chili corpus show significant advantages of our algorithm over standard linear-time suffix tree construction in terms of memory usage for pattern searching. KEYWORDS: suffix trees, generalized suffix trees, k-mer, search algorithm.

1. INTRODUCTION. Suffix trees are well known and powerful data structures that have many applications [1-3]. A suffix tree has various applications such as string matching, longest common substring, substring matching, index-at, compression and genome analysis. In recent years it has been applied to bioinformatics in genome analysis, which involves analyzing huge data sequences. This includes finding a short sample sequence in another sequence, comparing two sequences or finding some types of repeats. In biology, repeats found in genomic sequences are broadly classified into two kinds: interspersed repeats (spaced apart from each other) and tandem repeats (occurring next to each other). Tandem repeats are further classified as micro-satellites, mini-satellites and satellites.
  • String Matching and Suffix Tree (EECS 458, CWRU)
String Matching and Suffix Tree. Gusfield Ch. 1-7. EECS 458, CWRU, Fall 2004.

BLAST (Altschul et al. '90) • Idea: true match alignments are very likely to contain a short segment that is identical (or has very high score). • Consider every substring (seed) of length w, where w = 12 for DNA and 4 for protein. • Find the exact occurrences of each substring (how?) • Extend the hit in both directions and stop at the maximum score.

Problems • Pattern matching: find the exact occurrences of a given pattern in a given structure (string matching). • Pattern recognition: recognizing approximate occurrences of a given pattern in a given structure (image recognition). • Pattern discovery: identifying significant patterns in a given structure, when the patterns are unknown (promoter discovery).

Definitions • String S[1..m] • Substring S[i..j] • Prefix S[1..i] • Suffix S[i..m] • Proper substring, prefix, suffix • Exact matching problem: given a string P called pattern, and a long string T called text, find all the occurrences of P in T.

Naïve method • Align the left end of P with the left end of T. • Compare letters of P and T from left to right, until either a mismatch is found (not an occurrence) or P is exhausted (an occurrence). • Shift P one position to the right. • Restart the comparison from the left end of P. • Repeat the process till the right end of P shifts past the right end of T. • Time complexity: worst case θ(mn), where m = |P| and n = |T|. • Not good enough!

Speedup • Ideas: when a mismatch occurs, shift P by more than one letter, but never shift so far as to miss an occurrence; after shifting, skip over parts of P to reduce comparisons; preprocess P or T.

Fundamental preprocessing • Can be on pattern P or text T.
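A direct Python transcription of the naïve method described above (our own sketch, not course code): align P at every position of T, compare left to right, and report an occurrence whenever P is exhausted. The worst case is θ(mn) comparisons, which motivates the speedup ideas that follow.

```python
def naive_matches(T: str, P: str):
    n, m = len(T), len(P)
    occ = []
    for i in range(n - m + 1):          # left end of P aligned with T[i]
        j = 0
        while j < m and T[i + j] == P[j]:
            j += 1
        if j == m:                      # P exhausted: an occurrence at i
            occ.append(i)
    return occ

print(naive_matches("gcatcgcagagagtatacagtacg", "gcag"))   # [5]
```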
  • Text Algorithms
Text Algorithms. Jaak Vilo, 2015 fall. MTAT.03.190 Text Algorithms.

Topics • Exact matching of one pattern (string) • Exact matching of multiple patterns • Suffix trie and tree indexes, and applications • Suffix arrays • Inverted index • Approximate matching.

Exact pattern matching • S = s1 s2 ... sn (text), |S| = n (length) • P = p1 p2 ... pm (pattern), |P| = m • Σ is the alphabet, |Σ| = c • Does S contain P? That is, does S = S' P S'' for some strings S' and S''? • Usually m << n, and n can be (very) large. Find the occurrences of P in the text.

Algorithms. One-pattern: brute force, Knuth-Morris-Pratt, Karp-Rabin, Shift-OR and Shift-AND, Boyer-Moore, factor searches. Multi-pattern: Aho-Corasick, Commentz-Walter. Indexing: trie (and suffix trie), suffix tree.

Animations: http://www-igm.univ-mlv.fr/~lecroq/string/ (EXACT STRING MATCHING ALGORITHMS, animation in Java, by Christian Charras and Thierry Lecroq, Laboratoire d'Informatique de Rouen, Université de Rouen, Faculté des Sciences et des Techniques, 76821 Mont-Saint-Aignan Cedex, France; e-mails: {Christian.Charras, Thierry.Lecroq}@laposte.net).

Brute force: is BAB in the text ABACABABBABBBA? Align the pattern against each position of the text and identify the first mismatch. Questions: what are the problems of this method, and what ideas could improve the search?

Algorithm Naive. Input: text S[1..n] and pattern P[1..m]. Output: all positions i where P occurs in S.

  for ( i = 1; i <= n-m+1; i++ )
      for ( j = 1; j <= m; j++ )
          ...

(The slides step through the successive alignment attempts of a pattern against the text gcatcgcagagagtatacagtacg.)
  • 4. Suffix Trees and Arrays
Let us analyze the average case time complexity of the verification phase. The best pattern partitioning is as even as possible: then each pattern factor has length at least r = ⌊m/(k + 1)⌋. The expected number of exact occurrences of a random string of length r in a random text of length n is at most n/σ^r. The expected total verification time is at most

  O(m²(k + 1)n / σ^r) ≤ O(m³n / σ^r).

This is O(n) if r ≥ 3 log_σ m. The condition r ≥ 3 log_σ m is satisfied when (k + 1) ≤ m/(3 log_σ m + 1).

Theorem 3.24: The average case time complexity of the Baeza-Yates–Perleberg algorithm is O(n) when k ≤ m/(3 log_σ m + 1) − 1.

Many variations of the algorithm have been suggested: • The filtration can be done with a different multiple exact string matching algorithm. • The verification time can be reduced using a technique called hierarchical verification. • The pattern can be partitioned into fewer than k + 1 pieces, which are searched allowing a small number of errors. A lower bound on the average case time complexity is Ω(n(k + log_σ m)/m), and there exists a filtering algorithm matching this bound.

Summary: Approximate String Matching. We have seen two main types of algorithms for approximate string matching: [...]

4. Suffix Trees and Arrays. Let T = T[0..n) be the text. For i ∈ [0..n], let T_i denote the suffix T[i..n). Furthermore, for any subset C ⊆ [0..n], we write T_C = {T_i | i ∈ C}. In particular, T_[0..n] is the set of all suffixes of T.
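The partition-based filtration analyzed above is short to sketch. The Python code below is our own illustration under the usual pigeonhole argument (if P matches a window of T with at most k differences, at least one of k + 1 disjoint pieces of P occurs in that window exactly); it is not the lecture's implementation, and it uses plain str.find for the piece search instead of a proper multiple exact string matching algorithm. Candidate windows are then verified with the standard semiglobal dynamic programming table.

```python
def occurs_with_k_errors(window: str, P: str, k: int) -> bool:
    """Semiglobal DP: is some substring of window within edit distance k of P?"""
    prev = [0] * (len(window) + 1)            # row 0 is all zeros: a match may start anywhere
    for i, pc in enumerate(P, 1):
        cur = [i] + [0] * len(window)
        for j, wc in enumerate(window, 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (pc != wc))
        prev = cur
    return min(prev) <= k

def filter_search(T: str, P: str, k: int):
    m = len(P)
    r = m // (k + 1)                          # each piece has length at least floor(m/(k+1))
    assert r > 0, "assumes k + 1 <= m"
    pieces = [(i * r, P[i * r:(i + 1) * r]) for i in range(k)] + [(k * r, P[k * r:])]
    candidates = set()
    for offset, piece in pieces:              # filtration: exact occurrences of the pieces
        j = T.find(piece)
        while j != -1:
            candidates.add(max(0, j - offset - k))
            j = T.find(piece, j + 1)
    hits = []
    for s in sorted(candidates):              # verification phase around each candidate
        if occurs_with_k_errors(T[s:s + m + 2 * k], P, k):
            hits.append(s)
    return hits

print(filter_search("the suffix ttee and its variants", "suffix tree", 1))   # [3]
```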
  • Modifications of the Landau-Vishkin Algorithm Computing Longest Common Extensions via Suffix Arrays and Efficient RMQ Computations
Modifications of the Landau-Vishkin Algorithm Computing Longest Common Extensions via Suffix Arrays and Efficient RMQ Computations. Rodrigo César de Castro Miranda, Mauricio Ayala-Rincón, and Leon Solon. Instituto de Ciências Exatas, Universidade de Brasília.

Abstract. Approximate string matching is an important problem in Computer Science. The standard solution for this problem is an O(mn) running time and space dynamic programming algorithm for two strings of length m and n. Landau and Vishkin developed an algorithm which uses suffix trees for accelerating the computation along the dynamic programming table and reaching space and running time in O(nk), where n > m and k is the maximum number of admissible differences. In the Landau and Vishkin algorithm, suffix trees are used for pre-processing the sequences, allowing an O(1) running time computation of the longest common extensions between substrings. We present two O(n) space variations of the Landau and Vishkin algorithm that use range-minimum queries (RMQ) over the longest common prefix array of an extended suffix array, instead of suffix trees, for computing the longest common extensions: one which computes the RMQ with a lowest common ancestor query over a Cartesian tree, and another that applies the succinct RMQ computation developed by Fischer and Heun.

1 Introduction. Matching strings with errors is an important problem in Computer Science, with applications that range from word processing to text databases and biological sequence alignment. The standard algorithm for approximate string matching is a dynamic programming algorithm with O(mn) running-time and space complexity, for strings of size m and n.
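To make the central primitive concrete, here is a compact Python sketch (ours, purely illustrative and deliberately naive: the suffix array is built by plain sorting rather than a linear-time construction) of longest common extension queries answered as range-minimum queries over the LCP array of a suffix array, which is the general approach the paper builds on. After preprocessing, each lce(i, j) query takes O(1) time via a sparse table.

```python
def build_lce(s: str):
    n = len(s)
    sa = sorted(range(n), key=lambda i: s[i:])        # toy suffix array (O(n^2 log n))
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp = [0] * n                                     # lcp[r] = LCP(sa[r-1], sa[r]); Kasai's algorithm
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h > 0:
                h -= 1
        else:
            h = 0
    # sparse table for O(1) range-minimum queries over lcp
    log = [0] * (n + 1)
    for i in range(2, n + 1):
        log[i] = log[i // 2] + 1
    table = [lcp[:]]
    j = 1
    while (1 << j) <= n:
        prev = table[j - 1]
        table.append([min(prev[i], prev[i + (1 << (j - 1))]) for i in range(n - (1 << j) + 1)])
        j += 1
    def rmq(l, r):                                    # min of lcp[l..r], inclusive
        k = log[r - l + 1]
        return min(table[k][l], table[k][r - (1 << k) + 1])
    def lce(i, j):                                    # longest common prefix length of s[i:] and s[j:]
        if i == j:
            return n - i
        ri, rj = sorted((rank[i], rank[j]))
        return rmq(ri + 1, rj)
    return lce

lce = build_lce("abracadabra")
print(lce(0, 7), lce(1, 8))   # 4 3
```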