Pattern Matching


Outline and Reading

Strings (§11.1)
Pattern matching algorithms
- Brute-force algorithm (§11.2.1)
- Boyer-Moore algorithm (§11.2.2)
- Knuth-Morris-Pratt algorithm (§11.2.3)

Strings

A string is a sequence of characters.
Examples of strings:
- C++ program
- HTML document
- DNA sequence
- Digitized image

An alphabet Σ is the set of possible characters for a family of strings.
Examples of alphabets:
- ASCII (used by C and C++)
- Unicode (used by Java)
- {0, 1}
- {A, C, G, T}

Let P be a string of size m.
- A substring P[i .. j] of P is the subsequence of P consisting of the characters with ranks between i and j
- A prefix of P is a substring of the type P[0 .. i]
- A suffix of P is a substring of the type P[i .. m − 1]

Given strings T (text) and P (pattern), the pattern matching problem consists of finding a substring of T equal to P.
Applications:
- Text editors
- Search engines
- Biological research

Brute-Force Algorithm

The brute-force pattern matching algorithm compares the pattern P with the text T for each possible shift of P relative to T, until either
- a match is found, or
- all placements of the pattern have been tried

Brute-force pattern matching runs in time O(nm).
Example of worst case:
- T = aaa … ah
- P = aaah
- may occur in images and DNA sequences
- unlikely in English text

Algorithm BruteForceMatch(T, P)
  Input text T of size n and pattern P of size m
  Output starting index of a substring of T equal to P, or −1 if no such substring exists
  for i ← 0 to n − m
    { test shift i of the pattern }
    j ← 0
    while j < m ∧ T[i + j] = P[j]
      j ← j + 1
    if j = m
      return i { match at i }
    { else mismatch at i }
  return −1 { no match anywhere }
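The pseudocode above translates almost line for line into Python. This is only a sketch (the function name is mine, not from the slides):

```python
def brute_force_match(T, P):
    """Compare P with T at every possible shift; O(n*m) in the worst case."""
    n, m = len(T), len(P)
    for i in range(n - m + 1):          # test shift i of the pattern
        j = 0
        while j < m and T[i + j] == P[j]:
            j += 1
        if j == m:
            return i                    # match at i
    return -1                           # no match anywhere
```

For example, brute_force_match("abacaabaccabacabaabb", "abacab") returns 10, the starting index of the match.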

Boyer-Moore Heuristics

The Boyer-Moore pattern matching algorithm is based on two heuristics.

Looking-glass heuristic: Compare P with a subsequence of T moving backwards.

Character-jump heuristic: When a mismatch occurs at T[i] = c
- If P contains c, shift P to align the last occurrence of c in P with T[i]
- Else, shift P to align P[0] with T[i + 1]

Example

[Figure: the pattern "rithm" searched in the text "a pattern matching algorithm"; the comparison order 1–11 shows the character-jump heuristic skipping most shifts before the match is found.]

Last-Occurrence Function

Boyer-Moore's algorithm preprocesses the pattern P and the alphabet Σ to build the last-occurrence function L mapping Σ to integers, where L(c) is defined as
- the largest index i such that P[i] = c, or
- −1 if no such index exists

Example:
- Σ = {a, b, c, d}
- P = abacab

c      a  b  c  d
L(c)   4  5  3  −1

The last-occurrence function can be represented by an array indexed by the numeric codes of the characters. It can be computed in time O(m + s), where m is the size of P and s is the size of Σ.
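As a sketch (using a dict rather than an array indexed by character codes), the last-occurrence function might look like:

```python
def last_occurrence(P, alphabet):
    """L[c] = largest index i with P[i] == c, or -1 if c does not occur in P.
    One pass over the pattern: O(m + s) time."""
    L = {c: -1 for c in alphabet}
    for i, c in enumerate(P):
        L[c] = i                # later occurrences overwrite earlier ones
    return L
```

With Σ = {a, b, c, d} and P = abacab this reproduces the example table: a → 4, b → 5, c → 3, d → −1.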

The Boyer-Moore Algorithm

Algorithm BoyerMooreMatch(T, P, Σ)
  L ← lastOccurrenceFunction(P, Σ)
  i ← m − 1
  j ← m − 1
  repeat
    if T[i] = P[j]
      if j = 0
        return i { match at i }
      else
        i ← i − 1
        j ← j − 1
    else
      { character-jump }
      l ← L[T[i]]
      i ← i + m − min(j, 1 + l)
      j ← m − 1
  until i > n − 1
  return −1 { no match }

Case 1 (j ≤ 1 + l): the last occurrence of T[i] in P lies at or beyond position j, so the shift advances i by m − j.

Case 2 (1 + l < j): the shift aligns the last occurrence of T[i] in P with T[i], advancing i by m − (1 + l).

Example

[Figure: Boyer-Moore run of P = abacab on T = abacaabadcabacabaabb; comparisons 1–13 are shown, with a character jump after each mismatch, until the match is found at index 10.]
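A direct Python transcription of BoyerMooreMatch (a sketch; the last-occurrence table is inlined as a dict):

```python
def boyer_moore_match(T, P, alphabet):
    """Boyer-Moore with the looking-glass and character-jump heuristics.
    Returns the starting index of a substring of T equal to P, or -1."""
    n, m = len(T), len(P)
    L = {c: -1 for c in alphabet}       # last-occurrence function
    for k, c in enumerate(P):
        L[c] = k
    i = j = m - 1                       # compare moving backwards
    while i <= n - 1:
        if T[i] == P[j]:
            if j == 0:
                return i                # match at i
            i -= 1
            j -= 1
        else:
            l = L[T[i]]                 # character-jump
            i += m - min(j, 1 + l)
            j = m - 1
    return -1                           # no match
```

On the example above, boyer_moore_match("abacaabadcabacabaabb", "abacab", set("abcd")) finds the match at index 10.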

Analysis

Boyer-Moore's algorithm runs in time O(nm + s).

Example of worst case:
- T = aaa … a
- P = baaa

[Figure: the worst case on a text of all a's with pattern b followed by a's; every shift advances the pattern by only one position and costs m comparisons, shown as comparisons 1–24 over four shifts.]

The worst case may occur in images and DNA sequences but is unlikely in English text. Boyer-Moore's algorithm is significantly faster than the brute-force algorithm on English text.

The KMP Algorithm - Motivation

Knuth-Morris-Pratt's algorithm compares the pattern to the text left-to-right, but shifts the pattern more intelligently than the brute-force algorithm.

When a mismatch occurs, what is the most we can shift the pattern so as to avoid redundant comparisons?

Answer: the largest prefix of P[0..j] that is a suffix of P[1..j]. There is no need to repeat the comparisons against that prefix; we resume comparing at the mismatched text character.

[Figure: P = abaaba mismatching the text at position j; after the shift, the already-matched prefix is reused and comparison resumes at the mismatch.]

KMP Failure Function

Knuth-Morris-Pratt's algorithm preprocesses the pattern to find matches of prefixes of the pattern with the pattern itself.

The failure function F(j) is defined as the size of the largest prefix of P[0..j] that is also a suffix of P[1..j].

j      0  1  2  3  4  5
P[j]   a  b  a  a  b  a
F(j)   0  0  1  1  2  3

Knuth-Morris-Pratt's algorithm modifies the brute-force algorithm so that if a mismatch occurs at P[j] ≠ T[i] we set j ← F(j − 1).

The KMP Algorithm

The failure function can be represented by an array and can be computed in O(m) time.

Algorithm KMPMatch(T, P)
  F ← failureFunction(P)
  i ← 0
  j ← 0
  while i < n
    if T[i] = P[j]
      if j = m − 1
        return i − j { match }
      else
        i ← i + 1
        j ← j + 1
    else
      if j > 0
        j ← F[j − 1]
      else
        i ← i + 1
  return −1 { no match }

At each iteration of the while-loop, either
- i increases by one, or
- the shift amount i − j increases by at least one (observe that F(j − 1) < j)

Hence, there are no more than 2n iterations of the while-loop. Thus, KMP's algorithm runs in optimal time O(m + n).

Computing the Failure Function

The failure function can be represented by an array and can be computed in O(m) time. The construction is similar to the KMP algorithm itself.

Algorithm failureFunction(P)
  F[0] ← 0
  i ← 1
  j ← 0
  while i < m
    if P[i] = P[j]
      { we have matched j + 1 characters }
      F[i] ← j + 1
      i ← i + 1
      j ← j + 1
    else if j > 0
      { use the failure function to shift P }
      j ← F[j − 1]
    else
      F[i] ← 0 { no match }
      i ← i + 1

At each iteration of the while-loop, either
- i increases by one, or
- the shift amount i − j increases by at least one (observe that F(j − 1) < j)

Hence, there are no more than 2m iterations of the while-loop.
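Both routines are straightforward to sketch in Python (names follow the pseudocode):

```python
def failure_function(P):
    """F[i] = size of the largest prefix of P[0..i] that is also a
    suffix of P[1..i]; computed in O(m) time."""
    m = len(P)
    F = [0] * m
    i, j = 1, 0
    while i < m:
        if P[i] == P[j]:
            F[i] = j + 1        # we have matched j + 1 characters
            i += 1
            j += 1
        elif j > 0:
            j = F[j - 1]        # use the failure function to shift P
        else:
            F[i] = 0            # no match
            i += 1
    return F

def kmp_match(T, P):
    """Return the starting index of the first occurrence of P in T,
    or -1; runs in optimal O(n + m) time."""
    n, m = len(T), len(P)
    F = failure_function(P)
    i = j = 0
    while i < n:
        if T[i] == P[j]:
            if j == m - 1:
                return i - j    # match
            i += 1
            j += 1
        elif j > 0:
            j = F[j - 1]
        else:
            i += 1
    return -1                   # no match
```

failure_function("abaaba") reproduces the table above, [0, 0, 1, 1, 2, 3].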

Example

T = abacaabaccabacabaabb, P = abacab

j      0  1  2  3  4  5
P[j]   a  b  a  c  a  b
F(j)   0  0  1  0  1  2

[Figure: KMP run on T; comparisons 1–19 are shown, with j reset by the failure function after each mismatch, until the match is found at index 10.]

Tries

Outline and Reading

Standard tries (§11.3.1)
Compressed tries (§11.3.2)
Suffix tries (§11.3.3)
Huffman encoding tries (§11.4.1)

Preprocessing Strings

Preprocessing the pattern speeds up pattern matching queries.
- After preprocessing the pattern, KMP's algorithm performs pattern matching in time proportional to the text size

If the text is large, immutable and searched for often (e.g., works by Shakespeare), we may want to preprocess the text instead of the pattern.

A trie is a compact data structure for representing a set of strings, such as all the words in a text.
- A trie supports pattern matching queries in time proportional to the pattern size

Standard Trie (1)

The standard trie for a set of strings S is an ordered tree such that:
- Each node but the root is labeled with a character
- The children of a node are alphabetically ordered
- The paths from the root to the external nodes yield the strings of S

Example: standard trie for the set of strings S = { bear, bell, bid, bull, buy, sell, stock, stop }

[Figure: the standard trie for S, with root children b and s and one external node per word.]

Standard Trie (2)

A standard trie uses O(n) space and supports searches, insertions and deletions in time O(dm), where:
- n = total size of the strings in S
- m = size of the string parameter of the operation
- d = size of the alphabet
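A minimal standard-trie sketch in Python (class names are mine; children are kept in a dict rather than an alphabetically ordered child list, so ordering is implicit):

```python
class TrieNode:
    def __init__(self):
        self.children = {}      # character -> child TrieNode
        self.is_word = False    # marks an external (word-ending) node

class StandardTrie:
    """Insert and exact-word search, each in time proportional to the
    word length (times the cost of a child lookup)."""
    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for c in word:
            node = node.children.setdefault(c, TrieNode())
        node.is_word = True

    def search(self, word):
        node = self.root
        for c in word:
            if c not in node.children:
                return False
            node = node.children[c]
        return node.is_word
```

With S = {bear, bell, bid, bull, buy, sell, stock, stop}, search("bell") is True while search("bel") is False, since "bel" ends at an internal node.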

Word Matching with a Trie

We insert the words of the text into a trie. Each leaf stores the occurrences of the associated word in the text.

Example text: "see a bear? sell stock! see a bull? buy stock! bid stock! bid stock! hear the bell? stop!"

[Figure: the text with character positions 0–88, and the trie of its words; each leaf lists the starting positions of its word, e.g., "bid" at 47 and 58, and "stock" at 17, 40, 51 and 62.]

Compressed Trie

A compressed trie has internal nodes of degree at least two. It is obtained from the standard trie by compressing chains of "redundant" nodes.

[Figure: the compressed trie for S; e.g., the chains below b are compressed into the edge labels e, id, u and ar, ll, ll, y.]

Compact Representation

Compact representation of a compressed trie for an array of strings:

- Stores at the nodes ranges of indices instead of substrings
- Uses O(s) space, where s is the number of strings in the array
- Serves as an auxiliary index structure

S[0] = see    S[4] = bull   S[7] = hear
S[1] = bear   S[5] = buy    S[8] = bell
S[2] = sell   S[6] = bid    S[9] = stop
S[3] = stock

[Figure: the compact compressed trie; each node stores a triple (i, j, k) denoting the substring S[i][j .. k], e.g., (1, 0, 0) = "b", (4, 1, 1) = "u", (3, 1, 2) = "to".]

Suffix Trie (1)

The suffix trie of a string X is the compressed trie of all the suffixes of X.

Example: X = minimize (indices 0–7)

[Figure: the suffix trie of "minimize"; from the root, branches e, i, mi, nimize, ze, with i continuing to mize, nimize, ze and mi continuing to nimize, ze.]

Suffix Trie (2)

Compact representation of the suffix trie for a string X of size n from an alphabet of size d:
- Uses O(n) space
- Supports arbitrary pattern matching queries in X in O(dm) time, where m is the size of the pattern

X = minimize (indices 0–7)

[Figure: the compact suffix trie; each node stores an index pair (i, j) denoting X[i .. j], e.g., (7, 7) = "e", (0, 1) = "mi", (2, 7) = "nimize".]
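An uncompressed suffix trie is easy to sketch with nested dicts. Note this naive construction takes O(n²) time and space, unlike the compact O(n) representation described above:

```python
def build_suffix_trie(X):
    """Insert every suffix of X into a dict-of-dicts trie."""
    root = {}
    for i in range(len(X)):
        node = root
        for c in X[i:]:                 # walk/extend the path for suffix X[i:]
            node = node.setdefault(c, {})
    return root

def is_substring(trie, P):
    """Pattern matching query: P occurs in X iff P labels a path
    starting at the root of X's suffix trie."""
    node = trie
    for c in P:
        if c not in node:
            return False
        node = node[c]
    return True
```

For X = "minimize", is_substring(trie, "nimi") is True while is_substring(trie, "zei") is False, since every substring of X is a prefix of some suffix.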

Encoding Trie (1)

A code is a mapping of each character of an alphabet to a binary code-word. A prefix code is a binary code such that no code-word is the prefix of another code-word. An encoding trie represents a prefix code:

- Each leaf stores a character
- The code word of a character is given by the path from the root to the leaf storing the character (0 for a left child and 1 for a right child)

Example: a = 00, b = 010, c = 011, d = 10, e = 11

[Figure: the encoding trie for this code; a, d, e are leaves at depth 2 and b, c at depth 3.]

Encoding Trie (2)

Given a text string X, we want to find a prefix code for the characters of X that yields a small encoding for X:
- Frequent characters should have short code-words
- Rare characters should have long code-words

Example:
- X = abracadabra

- T1 encodes X into 29 bits
- T2 encodes X into 24 bits

[Figure: the two encoding tries T1 and T2 for X = abracadabra.]

Huffman's Algorithm

Given a string X, Huffman's algorithm constructs a prefix code that minimizes the size of the encoding of X. It runs in time O(n + d log d), where n is the size of X and d is the number of distinct characters of X. A heap-based priority queue is used as an auxiliary structure.

Algorithm HuffmanEncoding(X)
  Input string X of size n
  Output optimal encoding trie for X
  C ← distinctCharacters(X)
  computeFrequencies(C, X)
  Q ← new empty heap
  for all c ∈ C
    T ← new single-node tree storing c
    Q.insert(getFrequency(c), T)
  while Q.size() > 1
    f1 ← Q.minKey()
    T1 ← Q.removeMin()
    f2 ← Q.minKey()
    T2 ← Q.removeMin()
    T ← join(T1, T2)
    Q.insert(f1 + f2, T)
  return Q.removeMin()

Example

X = abracadabra

Frequencies:
c      a  b  c  d  r
f(c)   5  2  1  1  2

[Figure: the heap contents after each merge step, ending with a single encoding trie of total weight 11.]
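Using Python's heapq as the heap-based priority queue, Huffman encoding can be sketched as follows (a counter breaks ties so trees are never compared; trees are represented as nested pairs, and the function name is mine):

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(X):
    """Return {character: code-word} for an optimal prefix code of X."""
    freq = Counter(X)
    tiebreak = count()                  # keeps heap entries comparable
    Q = [(f, next(tiebreak), c) for c, f in freq.items()]
    heapq.heapify(Q)
    while len(Q) > 1:
        f1, _, T1 = heapq.heappop(Q)    # join the two lightest trees
        f2, _, T2 = heapq.heappop(Q)
        heapq.heappush(Q, (f1 + f2, next(tiebreak), (T1, T2)))
    _, _, tree = Q[0]
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):     # internal node: (left, right)
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                           # leaf: a character
            codes[node] = prefix or "0" # single-character edge case
    walk(tree, "")
    return codes
```

For X = abracadabra this yields an optimal prefix code encoding X in 23 bits; the trie T2 shown earlier is a prefix code but, at 24 bits, not the optimal one.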