String Matching: The String Matching Problem


The String Matching Problem

Pattern: compress

Text: "We introduce a general framework which is suitable to capture an essence of compressed pattern matching according to various dictionary based compressions. The goal is to find all occurrences of a pattern in a text without decompression, which is one of the most active topics in string matching. Our framework includes such compression methods as the Lempel-Ziv family (LZ77, LZSS, LZ78, LZW), byte-pair encoding, and the static dictionary based method. Technically, our pattern matching algorithm extends that for LZW compressed text presented by Amir, Benson and Farach."

The string matching problem is to find all occurrences of the pattern in the text; here, every occurrence of "compress" in the paragraph above.

Notation & Terminology
• String S: S[1…n]
• Substring of S: S[i…j]
• Prefix of S: S[1…i]
• Suffix of S: S[i…n]
• |S| = n (the string length)
• Example: AGATCGATGGA, with a prefix taken from the front, a substring from the middle, and a suffix from the end.

Notation
• Σ* is the set of all finite strings over Σ; the length of x is |x|; the concatenation of x and y is xy, with |xy| = |x| + |y|.
• w is a prefix of x (written w ⊏ x) provided x = wy for some y.
• w is a suffix of x (written w ⊐ x) provided x = yw for some y.

Naïve String Matcher
• At each of the n−m+1 possible shifts, test whether the pattern equals the corresponding length-m substring of the text.
• Each equality test takes O(m) time.
• Complexity: the overall complexity is O((n−m+1)m).
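For concreteness, here is a minimal Python sketch of the naïve matcher just described; it is an illustration, not code from the slides. It tries every shift and pays O(m) per equality test, giving the O((n−m+1)m) worst case. The function name and the example call (which uses the T and P from the "Another Method?" example below) are my own.

```python
def naive_match(text: str, pattern: str) -> list[int]:
    """Return all 0-based shifts at which pattern occurs in text."""
    n, m = len(text), len(pattern)
    shifts = []
    for s in range(n - m + 1):        # n - m + 1 candidate shifts
        if text[s:s + m] == pattern:  # each equality test costs O(m)
            shifts.append(s)
    return shifts

# Using the slides' example strings from the next section:
# naive_match("xabxyabxyabxz", "abxyabxz") -> [5]
```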
Another Method?
• Can we do better than this?
– Idea: at a mismatch, shift far enough for efficiency, but not too far for correctness.
– Example: T = xabxyabxyabxz, P = abxyabxz. After matching a b x y a b x and mismatching on z, the pattern can jump several positions to the right instead of shifting by one, without missing any occurrence.
• The key is preprocessing, on P or on T.

Two Types of Algorithms for String Matching
• Preprocessing on P (P fixed, T varied)
– Example: query P in a database T
– Algorithms: Knuth-Morris-Pratt, Boyer-Moore
• Preprocessing on T (T fixed, P varied)
– Example: look for P in a dictionary T
– Algorithms: Rabin-Karp, Suffix Trees

Rabin-Karp Algorithm
• The algorithm has a worst-case running time of O((n−m+1)m), but an average-case running time of O(m+n).
• It uses elementary ideas from number theory: the equivalence of two numbers modulo a third.
• For simplicity, assume Σ = {0,1,2,…,9}, so that each character is a decimal digit; more generally, if the size of Σ is d, we treat each character as a digit in radix-d notation.
• We can think of a string of length k as a k-digit decimal number.

Rabin-Karp Algorithm
• Example: the string of characters 354123 corresponds to the number 354,123.
• Given a pattern P[1..m], let p be the decimal number corresponding to the m-character pattern.
• Given a text T[1..n], let ts denote the decimal value of the m-character substring T[s+1..s+m] starting at position s+1, for s = 0, 1, …, n−m.
• Clearly, ts = p if and only if P[1..m] = T[s+1..s+m].
• So, if we could compute p in O(m) time and all of the ts in O(n) time, then in O(n) time we could compare all of the shifts with p and find all matching locations.
• Of course, the numbers may get very large, but we will solve that problem below.

P → p
• p = P[m] + 10·P[m−1] + 10²·P[m−2] + … + 10^(m−1)·P[1]
• Computing this by brute force would require O(m²) operations to compute all of the powers of 10.
• But notice that we can eliminate one multiplication by 10 in every term but the first by factoring the 10 out:
• p = P[m] + 10·(P[m−1] + 10·P[m−2] + … + 10^(m−2)·P[1])
• Applying this factoring rule repeatedly leads to only a linear number (in m) of multiplications.

Rabin-Karp
• Computing p in O(m) time: we use what is called Horner's rule.
• p = P[m] + 10(P[m−1] + 10(P[m−2] + … + 10(P[2] + 10·P[1])…))
• Example: P = 3276; P[4] = 6, P[3] = 7, P[2] = 2, P[1] = 3
• p = 6 + 10(7 + 10(2 + 10·3)) = 6 + 10(7 + 10·32) = 6 + 10·327 = 6 + 3270 = 3276
• t0 can be computed from T[1..m] using the same method.

Rabin-Karp
• The big question is then computing the remaining ts in O(n−m) time, that is, with just a constant number of operations for each remaining ts.
• But there is a simple relationship between t(s+1) and ts:
  t(s+1) = 10(ts − 10^(m−1)·T[s+1]) + T[s+m+1]
• Example: suppose T = 314152 and the length of the pattern is 5. Then t0 = 31,415. What is t1?
  t1 = 10(31,415 − 10⁴·3) + 2 = 10(1,415) + 2 = 14,152

Rabin-Karp
• We precompute the constant 10^(m−1) (which can be done in O(log m) time), so each update of a ts takes a constant number of operations.
• So, p and all of the ts can be computed in O(m+n) time.

Rabin-Karp
• So, what is the problem? If P is a long pattern, or if the alphabet is large, then p and the ts may be very large numbers, and each arithmetic operation will not take constant time: the numbers might require many words of memory to represent.

Rabin-Karp
• Solution: compute p and the ts modulo a suitable modulus q.
• Horner's rule and the simple updating rule still work using mod-q arithmetic.
• q is large and prime.
• q is chosen such that 10q just fits within one computer word. This allows all of the arithmetic operations to be performed using single-precision arithmetic, while keeping q as large as possible.
• For an alphabet with d symbols, q is chosen so that dq fits within one word.

Rabin-Karp
• The catch is that p = ts is no longer a sufficient condition for deciding that P matches a substring of T.
• What if two values collide?
– As with hash functions, it is possible that two or more different strings produce the same "hit" value.
– Any hit has to be tested to verify that it is not spurious, i.e., that P[1..m] = T[s+1..s+m].

The Calculations
(This slide's figure is not reproduced in this extract.)

What Is a Worst Case?
• Let P be a string of length m of all a's, and let T be a string of length n of all a's.
• Then every length-m substring of T has the same "key" as P, and every one has to be checked.
• But we know that if a substring of T matches P, then to determine whether the "next" substring matches we only have to perform one additional comparison.
• So the Rabin-Karp algorithm can do a lot of unnecessary work.
• In practice, though, it doesn't.

The Algorithm
(The RABIN-KARP-MATCHER pseudocode shown on this slide is not reproduced in this extract.)
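Since the matcher pseudocode is missing from the extract, the following is a minimal Python sketch of the idea just described, under a few assumptions of my own: characters are treated as digits in radix d = 256, and q is simply a large prime (Python integers are arbitrary precision, so the slides' "dq fits in one machine word" constraint is not needed here). Function and variable names are illustrative, not from the slides.

```python
def rabin_karp(text: str, pattern: str, d: int = 256, q: int = 1_000_000_007) -> list[int]:
    """Return all 0-based shifts at which pattern occurs in text (radix d, modulus q)."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    h = pow(d, m - 1, q)                 # precomputed constant d^(m-1) mod q
    p = t = 0
    for i in range(m):                   # Horner's rule: p and t_0 in O(m) time
        p = (d * p + ord(pattern[i])) % q
        t = (d * t + ord(text[i])) % q
    matches = []
    for s in range(n - m + 1):
        if p == t and text[s:s + m] == pattern:   # a hit may be spurious, so verify it
            matches.append(s)
        if s < n - m:                    # rolling update: t_{s+1} from t_s
            t = (d * (t - ord(text[s]) * h) + ord(text[s + m])) % q
    return matches

# rabin_karp("314152", "14152") -> [1], mirroring the window update 31,415 -> 14,152 above
```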
Knuth-Morris-Pratt Algorithm
• The key observation:
– When there is a mismatch after several characters have matched, the pattern and the search string share a common substring, which is also a prefix of the pattern.
– If a suffix of the part of the pattern we have matched equals a prefix of that part, then we need to consider it as a possible match.
– By finding the largest such suffix, we make the smallest shift, so we never miss a match.
• By using the prefix function, the algorithm has a running time of O(n + m).

Components of the KMP Algorithm
• The prefix function Π: for a pattern, Π summarizes how the pattern matches against shifts of itself. This information is used to avoid useless shifts of the pattern P.
• The KMP matcher: given text T, pattern P and prefix function Π as inputs, it finds all occurrences of P in T.

The Prefix Function Π

Compute-Prefix-Function(p)
    m ← length[p]                 // p is the pattern to be matched
    Π[1] ← 0
    k ← 0
    for q ← 2 to m
        while k > 0 and p[k+1] ≠ p[q]
            k ← Π[k]
        if p[k+1] = p[q]
            k ← k + 1
        Π[q] ← k
    return Π

Example: compute Π for the pattern p = a b a b a c a.
Initially: m = length[p] = 7, Π[1] = 0, k = 0.

    q      1 2 3 4 5 6 7
    p[q]   a b a b a c a
    Π[q]   0 0 1 2 3 0 1

Step 1: q = 2 → Π[2] = 0
Step 2: q = 3 → Π[3] = 1
Step 3: q = 4 → Π[4] = 2
Step 4: q = 5 → Π[5] = 3
Step 5: q = 6 → Π[6] = 0 (the while loop drops k from 3 back to 0)
Step 6: q = 7 → Π[7] = 1

The KMP Matcher
The KMP matcher, with pattern p, text T and prefix function Π as input, finds all occurrences of p in T.
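As a companion to the pseudocode above, here is a minimal Python sketch of both KMP components; it is an illustration, not the slides' own code. It uses 0-based indexing, so prefix[q] plays the role of Π[q+1] in the 1-based pseudocode, and the function names are my own.

```python
def compute_prefix_function(p: str) -> list[int]:
    """prefix[q] = length of the longest proper prefix of p[:q+1] that is also a suffix of it."""
    m = len(p)
    prefix = [0] * m
    k = 0
    for q in range(1, m):
        while k > 0 and p[k] != p[q]:
            k = prefix[k - 1]            # fall back to the next-longest border
        if p[k] == p[q]:
            k += 1
        prefix[q] = k
    return prefix


def kmp_match(text: str, pattern: str) -> list[int]:
    """Return all 0-based shifts at which pattern occurs in text, in O(n + m) time."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    prefix = compute_prefix_function(pattern)
    matches = []
    k = 0                                # number of pattern characters currently matched
    for i in range(n):
        while k > 0 and pattern[k] != text[i]:
            k = prefix[k - 1]
        if pattern[k] == text[i]:
            k += 1
        if k == m:                       # full match ending at text position i
            matches.append(i - m + 1)
            k = prefix[k - 1]
    return matches

# compute_prefix_function("ababaca") -> [0, 0, 1, 2, 3, 0, 1], matching the worked example above
```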