Streaming Dictionary Matching with Mismatches


Streaming dictionary matching with mismatches∗†

Paweł Gawrychowski¹ and Tatiana Starikovskaya²
¹ University of Wrocław, 50-137 Wrocław, Poland, [email protected]
² DI/ENS, PSL Research University, Paris, France, [email protected]
arXiv:1809.02517v3 [cs.DS] 20 Jun 2021

Abstract

In the k-mismatch problem we are given a pattern of length n and a text, and must find all locations where the Hamming distance between the pattern and the text is at most k. A series of recent breakthroughs has resulted in an ultra-efficient streaming algorithm for this problem that requires only O(k log(n/k)) space and O(log(n/k) (√(k log k) + log³ n)) time per letter [Clifford, Kociumaka, Porat, SODA 2019]. In this work, we consider a strictly harder problem called dictionary matching with k mismatches. In this problem, we are given a dictionary of d patterns, where the length of each pattern is at most n, and must find all substrings of the text that are within Hamming distance k from one of the patterns. We develop a streaming algorithm for this problem that uses O(kd log_k d polylog n) space and O(k log_k d polylog n + |output|) time per position of the text. The algorithm is randomised and outputs correct answers with high probability. On the lower bound side, we show that any streaming algorithm for dictionary matching with k mismatches requires Ω(kd) bits of space.

∗ This is a full and extended version of the conference paper [19].
† P. Gawrychowski was partially supported by the Bekker programme of the Polish National Agency for Academic Exchange (PPN/BEK/2020/1/00444) and the grant ANR-20-CE48-0001 from the French National Research Agency (ANR). T. Starikovskaya was partially supported by the grant ANR-20-CE48-0001 from the French National Research Agency (ANR).

1 Introduction

In the fundamental dictionary matching problem, we are given a dictionary of patterns and a text, and must find all the substrings of the text equal to one of the patterns (such substrings are called occurrences of the patterns). The classical algorithm for dictionary matching is the one by Aho and Corasick [1]. For a dictionary of d patterns of length at most n, their algorithm uses Ω(nd) space and O(1 + |output|) time per letter, where output is the set of occurrences of the patterns that end at this position. Apart from the Aho–Corasick algorithm, other word-RAM algorithms for exact dictionary matching include [3, 4, 7, 13, 14, 17, 21, 25, 28, 29, 35].

However, in many applications one is interested in substrings of the text that are close to, but not necessarily equal to, the patterns. This task can be naturally formalised as follows: given a dictionary of d patterns of length at most n, and a text, find all substrings of the text within distance k from one of the patterns, where the distance is either the Hamming or the edit distance. In this work, we focus on the Hamming distance, and refer to this problem as dictionary matching with k mismatches.

We give a brief survey of existing solutions in the word-RAM model, ignoring the special case of k = 1 that relies on very different techniques. The case of d = 1 was considered in [2, 10, 20, 32]. The latest algorithm [20] uses O(n) space and O(log² n + k√(log(n)/n)) amortised time per letter for a constant-size alphabet. These algorithms can be generalised to d > 1 patterns by running d instances of the algorithm in parallel. One can also reduce the problem to dictionary look-up with k mismatches or to text indexing with k mismatches. In the former problem, the task is to preprocess the dictionary of patterns into a data structure that supports the following queries fast: given a string Q, find all patterns in the dictionary within Hamming distance k from Q. In the latter, the task is to preprocess the text so that, given a pattern, one can efficiently report all substrings of the text within Hamming distance k from the pattern. These problems were considered in [12, 16, 18, 26, 31, 34].
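For concreteness, here is a minimal non-streaming reference implementation of dictionary matching with k mismatches (a brute-force Python sketch for illustration only, not the algorithm developed in this paper): it stores the text and every pattern in full and checks each alignment by direct Hamming-distance comparison.

```python
def hamming_at_most(s, t, k):
    """True iff the equal-length strings s and t differ in at most k positions."""
    mismatches = 0
    for a, b in zip(s, t):
        if a != b:
            mismatches += 1
            if mismatches > k:
                return False
    return True


def k_mismatch_occurrences(text, patterns, k):
    """Brute force: yield (end_position, pattern_index) for every k-mismatch occurrence."""
    for j, p in enumerate(patterns):
        m = len(p)
        for i in range(len(text) - m + 1):
            if hamming_at_most(text[i:i + m], p, k):
                yield (i + m - 1, j)  # 0-based position of the last letter of the occurrence


# 'bac' has a 1-mismatch occurrence ending at position 3 of 'abab' ('bab' vs 'bac').
print(list(k_mismatch_occurrences("abab", ["aa", "bac"], 1)))  # [(1, 0), (2, 0), (3, 0), (3, 1)]
```

This brute force needs Θ(nd) space for the dictionary alone and up to Θ(nd) work per text position, which is exactly what the space- and time-efficient algorithms discussed below are designed to avoid.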
However, all of the above algorithms must at least store the dictionary in full, which in the worst case requires Ω(nd) bits of space. In this work, we focus on the streaming model of computation, which was designed to overcome this restriction and allows developing particularly efficient algorithms. In the streaming model, we assume that the text arrives as a stream, one letter at a time. The space complexity of an algorithm is defined to be all the space used, including the space we need for storing the information about the pattern(s) and the text. The time complexity of an algorithm is defined to be the time we spend to process one letter of the text. The streaming model of computation aims for algorithms that use as little space and time as possible. All streaming algorithms we discuss in this paper are randomised and output correct answers with high probability¹. Throughout the paper, we assume that the length of the text is O(n). If the text is longer, we can partition it into overlapping blocks of length O(n) and process each block independently.

The first sublinear-space streaming algorithm for the dictionary matching problem with d = 1 was suggested by Porat and Porat [33]. For a pattern of length n, their algorithm uses O(log n) space and O(log n) time per letter. Later, Breslauer and Galil gave an O(log n)-space and O(1)-time algorithm [8]. For arbitrary d, Clifford et al. [9] showed a streaming algorithm that uses O(d log n) space and O(log log(n + d) + |output|) time per letter. Golan and Porat [24] showed an improved algorithm that uses the same amount of space and O(1 + |output|) time per letter for constant-size alphabets.

Dictionary matching with k mismatches has been mainly studied for d = 1. The first algorithm was shown by Porat and Porat [33] by reduction to exact dictionary matching. The algorithm uses O(k³ log⁷ n / log log n) space and O(k² log⁵ n / log log n) time. The complexity has been subsequently improved in [10, 11, 23]. The current best algorithm uses only O(k log(n/k)) space and O(log(n/k) (√(k log k) + log³ n)) time per letter [11]. Golan et al. [22] studied space-time trade-offs for this problem. For d > 1, one can obtain the following result by a repeated application of the algorithm for d = 1 [11]:

Corollary 1. For any k ≥ 1, there is a randomised streaming algorithm for dictionary matching with k mismatches that uses Õ(dk) space and Õ(d√k) time per letter². The algorithm has two-sided error and outputs correct answers with high probability.

¹ With high probability means with probability at least 1 − 1/n^c for any predefined constant c > 1.
² Hereafter, Õ(·) hides a multiplicative factor polynomial in log n.
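Corollary 1 follows by running d independent instances of the single-pattern streaming algorithm of [11] side by side. The Python sketch below only illustrates the shape of this reduction: the single-pattern matcher is a naive stand-in that buffers the last |pattern| letters (it is not the small-space algorithm of [11]), so the only thing to read off it is that both the space and the per-letter work grow linearly with d.

```python
class NaiveStreamingMatcher:
    """Stand-in for a streaming k-mismatch matcher for a single pattern.
    It buffers the last len(pattern) letters, so it is NOT sublinear-space;
    the actual algorithm of [11] would be plugged in here instead."""

    def __init__(self, pattern, k):
        self.pattern, self.k = pattern, k
        self.window = []

    def feed(self, letter):
        """Process one letter of the stream; return True iff a k-mismatch occurrence ends here."""
        self.window.append(letter)
        if len(self.window) > len(self.pattern):
            self.window.pop(0)
        if len(self.window) < len(self.pattern):
            return False
        return sum(a != b for a, b in zip(self.window, self.pattern)) <= self.k


def dictionary_stream(patterns, k, stream):
    """Corollary 1 style reduction: d independent instances, so space and time scale with d."""
    matchers = [NaiveStreamingMatcher(p, k) for p in patterns]
    for position, letter in enumerate(stream):
        hits = [j for j, m in enumerate(matchers) if m.feed(letter)]
        if hits:
            yield (position, hits)


print(list(dictionary_stream(["aa", "bac"], 1, "abab")))  # [(1, [0]), (2, [0]), (3, [0, 1])]
```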
1.1 Our results

In this work, we consider the problem of streaming dictionary matching with k mismatches for arbitrary d > 1. As can be seen, the time complexity of Corollary 1 depends on d linearly, which is prohibitive for applications where the stream letters arrive at a high speed and the size of the dictionary is large, up to several thousands of patterns, as we must be able to process each letter before the next one arrives to benefit from the space advantages of streaming algorithms.

In this work, we show an algorithm that uses Õ(kd log_k d) space and Õ(k log_k d + |output|) time per letter, assuming a polynomial-size alphabet (Theorem 4). Our algorithm makes use of a new randomised variant of the k-errata tree (Section 3), a famous data structure of Cole, Gottlieb, and Lewenstein for dictionary matching with k mismatches [12]. This variant of the k-errata tree allows us to improve both the query time and the space requirements, and can be considered as a generalisation of the z-fast tries [5, 6], which have proved to be useful in many streaming applications. We also show that any streaming algorithm for dictionary matching with k mismatches requires Ω(kd) bits of space (Lemma 12). This lower bound implies that for constant values of k our algorithm is optimal up to polylogarithmic factors.

Figure 1: The trie (left) and the compact trie (right) for a dictionary {aa, aabc, bac, bbb}. The nodes of the trie that we delete are shown by circles, marked nodes (nodes labelled by the dictionary patterns) are black.

2 Preliminaries

In this section, we give the definitions of strings, tries, and two hash functions that we use throughout the paper: Karp–Rabin fingerprints [27] and sketches for the Hamming distance [11].

2.1 Strings and tries

We assume an integer alphabet {1, 2, ..., σ} of size σ = n^{O(1)}. A string is a finite sequence of letters of the alphabet. For a string S = S[1]S[2]...S[m] we denote its length m by |S| and its substring S[i]S[i+1]...S[j], 1 ≤ i < j ≤ m, by S[i, j]. If i = 1, the substring S[1, j] is referred to as a prefix of S. If j = m, S[i, m] is called a suffix of S. We say that a substring S[i, j] is an occurrence of a string X in S if S[i, j] = X, and a k-mismatch occurrence of X in S if the Hamming distance between S[i, j] and X is at most k.
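Karp–Rabin fingerprints [27] are only named in the excerpt above; their precise definition and parameter choices appear later in the paper. As a reminder of the standard construction (a textbook sketch, whose parameters are assumptions of this example rather than the paper's), the fingerprint of a growing prefix can be maintained with O(1) arithmetic operations per appended letter:

```python
import random


class KarpRabinFingerprint:
    """Maintains phi(S) = sum_i S[i] * r^i mod p for the letters appended so far."""

    def __init__(self, p=(1 << 61) - 1, r=None):
        self.p = p
        self.r = r if r is not None else random.randrange(2, p - 1)
        self.value = 0   # fingerprint of the current prefix
        self.power = 1   # r^(current length) mod p

    def append(self, letter_code):
        """Extend the fingerprinted string by one letter, given as an integer code."""
        self.value = (self.value + letter_code * self.power) % self.p
        self.power = (self.power * self.r) % self.p


def fingerprint(s, p, r):
    f = KarpRabinFingerprint(p, r)
    for c in s:
        f.append(ord(c))
    return f.value


# Equal strings always get equal fingerprints; for a random base r, two distinct
# strings of length n collide with probability O(n / p).
p, r = (1 << 61) - 1, 1234567891
print(fingerprint("aabc", p, r) == fingerprint("aabc", p, r))  # True
print(fingerprint("aabc", p, r) == fingerprint("aabd", p, r))  # False
```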
Recommended publications
  • Data Streams and Applications in Computer Science
    Data Streams and Applications in Computer Science David P. Woodruff IBM Research Almaden [email protected] Abstract This is a short survey of my work in data streams and related applications, such as communication complexity, numerical linear algebra, and sparse recovery. The goal is to give a non-technical overview of results in data streams, and highlight connections between these different areas. It is based on my Presburger Award lecture given at ICALP 2014. 1 The Data Stream Model Informally speaking, a data stream is a sequence of data that is too large to be stored in available memory. The data may be a sequence of numbers, points, edges in a graph, and so on. There are many examples of data streams, such as internet search logs, network traffic, sensor networks, and scientific data streams (such as in astronomics, genomics, physical simulations, etc.). The abundance of data streams has led to new algorithmic paradigms for processing them, which often impose very stringent requirements on the algorithm's resources. Formally, in the streaming model, there is a sequence of elements a_1, ..., a_m presented to an algorithm, where each element is drawn from a universe [n] = {1, ..., n}. The algorithm is allowed a single or a small number of passes over the stream. In network applications, the algorithm is typically only given a single pass, since if data on a network is not physically stored somewhere, it may be impossible to make a second pass over it. In other applications, such as when data resides on external memory, it may be streamed through main memory a small number of times, each time constituting a pass of the algorithm.
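To make the single-pass model described in the excerpt above concrete, here is a minimal one-pass algorithm in that setting (an illustrative sketch, not taken from the survey): reservoir sampling keeps a uniformly random element of the stream while using only O(1) words of memory.

```python
import random


def reservoir_sample(stream):
    """One pass, O(1) words: returns an element chosen uniformly at random from the stream."""
    sample = None
    for i, a in enumerate(stream, start=1):
        # Replace the current sample with probability 1/i; a simple induction shows
        # that after i elements, each of them is retained with probability exactly 1/i.
        if random.randrange(i) == 0:
            sample = a
    return sample


print(reservoir_sample(x * x % 10 for x in range(1_000_000)))
```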
  • Compressed Suffix Trees with Full Functionality
    Compressed Suffix Trees with Full Functionality Kunihiko Sadakane Department of Computer Science and Communication Engineering, Kyushu University Hakozaki 6-10-1, Higashi-ku, Fukuoka 812-8581, Japan [email protected] Abstract We introduce new data structures for compressed suffix trees whose size is linear in the text size. The size is measured in bits; thus they occupy only O(n log |A|) bits for a text of length n on an alphabet A. This is a remarkable improvement on current suffix trees, which require O(n log n) bits. Though some components of suffix trees have been compressed, there is no linear-size data structure for suffix trees with full functionality such as computing suffix links, string-depths and lowest common ancestors. The data structure proposed in this paper is the first one that has linear size and supports all operations efficiently. Any algorithm running on a suffix tree can also be executed on our compressed suffix trees with a slight slowdown of a factor of polylog(n). 1 Introduction Suffix trees are basic data structures for string algorithms [13]. A pattern can be found in time proportional to the pattern length from a text by constructing the suffix tree of the text in advance. The suffix tree can also be used for more complicated problems, for example finding the longest repeated substring in linear time. Many efficient string algorithms are based on the use of suffix trees because this does not increase the asymptotic time complexity. A suffix tree of a string can be constructed in linear time in the string length [28, 21, 27, 5].
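The space claim in the excerpt above can be made concrete with a rough back-of-the-envelope calculation (illustration only; the constants hidden by the O(·) bounds are ignored, and the text length is a made-up example): for a DNA-scale text, n log |A| bits is an order of magnitude smaller than n log n bits.

```python
import math


def bits_to_mib(bits):
    return bits / 8 / 2**20


n = 100_000_000          # hypothetical text length
alphabet_size = 4        # DNA alphabet A = {A, C, G, T}

plain_suffix_tree = n * math.log2(n)                    # O(n log n) bits, constants ignored
compressed_suffix_tree = n * math.log2(alphabet_size)   # O(n log |A|) bits, constants ignored

print(f"O(n log n)  ~ {bits_to_mib(plain_suffix_tree):8.1f} MiB")
print(f"O(n log|A|) ~ {bits_to_mib(compressed_suffix_tree):8.1f} MiB")
```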
  • Suffix Trees
    JASS 2008 Trees - the ubiquitous structure in computer science and mathematics Suffix Trees Caroline Löbhard St. Petersburg, 9.3. - 19.3. 2008 Contents: 1 Introduction to Suffix Trees (1.1 Basics, 1.2 Getting a first feeling for the nice structure of suffix trees, 1.3 A historical overview of algorithms), 2 Ukkonen's on-line space-economic linear-time algorithm (2.1 High-level description, 2.2 Using suffix links, 2.3 Edge-label compression and the skip/count trick, 2.4 Two more observations), 3 Generalised Suffix Trees, 4 Applications of Suffix Trees, References. 1 Introduction to Suffix Trees A suffix tree is a tree-like data-structure for strings, which affords fast algorithms to find all occurrences of substrings. A given string S is preprocessed in O(|S|) time. Afterwards, for any other string P, one can decide in O(|P|) time whether P can be found in S and report all its exact positions in S. This linear worst-case time bound, depending only on the length of the (shorter) string |P|, is special and important for suffix trees, since many applications of string processing have to deal with large strings S. 1.1 Basics In this paper, we will denote the fixed alphabet with Σ, single characters with lower-case letters x, y, ..., strings over Σ with upper-case or Greek letters P, S, ..., α, σ, τ, ..., trees with script letters T, ..., and inner nodes of trees (that is, all nodes apart from the root and leaves) with lower-case letters u, v, ...
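The preprocess-then-query interface described in the excerpt above (build an index of S once, then answer substring queries for any P) can be illustrated with a much simpler index than a suffix tree: a suffix array with binary search. This is an illustrative sketch only; its naive construction is not linear-time, unlike the suffix-tree algorithms the notes discuss, and queries take O(|P| log |S|) rather than O(|P|) time.

```python
from bisect import bisect_left, bisect_right


def build_suffix_array(s):
    """All suffix start positions of s, sorted lexicographically (naive O(n^2 log n) build)."""
    return sorted(range(len(s)), key=lambda i: s[i:])


def occurrences(s, sa, p):
    """All positions where pattern p occurs in s, via binary search over the suffix array.
    Requires Python 3.10+ for the key= argument of bisect."""
    lo = bisect_left(sa, p, key=lambda i: s[i:i + len(p)])
    hi = bisect_right(sa, p, key=lambda i: s[i:i + len(p)])
    return sorted(sa[lo:hi])


s = "mississippi"
sa = build_suffix_array(s)
print(occurrences(s, sa, "issi"))  # [1, 4]
print(occurrences(s, sa, "xyz"))   # []
```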
  • Efficient Semi-Streaming Algorithms for Local Triangle Counting In
    Efficient Semi-streaming Algorithms for Local Triangle Counting in Massive Graphs ∗ Luca Becchetti ("Sapienza" Università di Roma, Rome, Italy, [email protected]); Paolo Boldi (Università degli Studi di Milano, Milan, Italy, [email protected]); Carlos Castillo (Yahoo! Research, Barcelona, Spain, [email protected]); Aristides Gionis (Yahoo! Research, Barcelona, Spain, [email protected])
    ABSTRACT In this paper we study the problem of local triangle counting in large graphs. Namely, given a large graph G = (V, E) we want to estimate as accurately as possible the number of triangles incident to every node v ∈ V in the graph. The problem of computing the global number of triangles in a graph has been considered before, but to our knowledge this is the first paper that addresses the problem of local triangle counting with a focus on the efficiency issues arising in massive graphs. The distribution of the local number of triangles and the related local clustering coefficient can be used in many interesting applications. For example, we show that the measures we compute can help to detect the presence of spamming activity in large-scale Web graphs, as well as to provide useful features to assess content quality in social networks. For computing the local number of triangles we propose
    Categories and Subject Descriptors: H.3.3 [Information Systems]: Information Search and Retrieval. General Terms: Algorithms, Measurements. Keywords: Graph Mining, Semi-Streaming, Probabilistic Algorithms.
    1. INTRODUCTION Graphs are a ubiquitous data representation that is used to model complex relations in a wide variety of applications, including biochemistry, neurobiology, ecology, social sciences, and information systems.
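For reference, local triangle counting as defined in the excerpt above (the number of triangles incident to every node) can be computed exactly by the following in-memory brute force (an illustrative sketch, not the semi-streaming algorithms of the paper, which avoid holding the graph in random-access memory):

```python
from collections import defaultdict
from itertools import combinations


def local_triangle_counts(edges):
    """Return a dict mapping each node v to the number of triangles incident to v."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    counts = defaultdict(int)
    for v in adj:
        # Every pair of neighbours of v that are themselves adjacent closes a triangle at v.
        for a, b in combinations(sorted(adj[v]), 2):
            if b in adj[a]:
                counts[v] += 1
    return dict(counts)


# A 4-cycle 1-2-3-4 with chord 1-3 contains the triangles (1,2,3) and (1,3,4).
print(local_triangle_counts([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))  # {1: 2, 2: 1, 3: 2, 4: 1}
```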
  • Search Trees
    Lecture III Page 1 “Trees are the earth’s endless effort to speak to the listening heaven.” – Rabindranath Tagore, Fireflies, 1928 Alice was walking beside the White Knight in Looking Glass Land. ”You are sad.” the Knight said in an anxious tone: ”let me sing you a song to comfort you.” ”Is it very long?” Alice asked, for she had heard a good deal of poetry that day. ”It’s long.” said the Knight, ”but it’s very, very beautiful. Everybody that hears me sing it - either it brings tears to their eyes, or else -” ”Or else what?” said Alice, for the Knight had made a sudden pause. ”Or else it doesn’t, you know. The name of the song is called ’Haddocks’ Eyes.’” ”Oh, that’s the name of the song, is it?” Alice said, trying to feel interested. ”No, you don’t understand,” the Knight said, looking a little vexed. ”That’s what the name is called. The name really is ’The Aged, Aged Man.’” ”Then I ought to have said ’That’s what the song is called’?” Alice corrected herself. ”No you oughtn’t: that’s another thing. The song is called ’Ways and Means’ but that’s only what it’s called, you know!” ”Well, what is the song then?” said Alice, who was by this time completely bewildered. ”I was coming to that,” the Knight said. ”The song really is ’A-sitting On a Gate’: and the tune’s my own invention.” So saying, he stopped his horse and let the reins fall on its neck: then slowly beating time with one hand, and with a faint smile lighting up his gentle, foolish face, he began..
  • Streaming Quantiles Algorithms with Small Space and Update Time
    Streaming Quantiles Algorithms with Small Space and Update Time Nikita Ivkin (Amazon, [email protected]); Edo Liberty (Amazon, [email protected]); Kevin Lang (Yahoo Research, [email protected]); Zohar Karnin (Amazon, [email protected]); Vladimir Braverman (Johns Hopkins University, [email protected])
    ABSTRACT Approximating quantiles and distributions over streaming data has been studied for roughly two decades now. Recently, Karnin, Lang, and Liberty proposed the first asymptotically optimal algorithm for doing so. This manuscript complements their theoretical result by providing practical variants of their algorithm with improved constants. For a given sketch size, our techniques provably reduce the upper bound on the sketch error by a factor of two. These improvements are verified experimentally. Our modified quantile sketch improves the latency as well by reducing the worst-case update time from O(1/ε) down to O(log(1/ε)).
    An approximate version of the problem relaxes this requirement. It is allowed to output an approximate rank which is off by at most εn from the exact rank. In a randomized setting, the algorithm is allowed to fail with probability at most δ. Note that, for the randomized version to provide a correct answer to all possible queries, it suffices to amplify the success probability by running the algorithm with a failure probability of δε, and applying the union bound over O(1/ε) quantiles. Uniform random sampling of O((1/ε²) log(1/(δε))) elements solves this problem. In network monitoring [16] and other applications it is critical to maintain statistics while making only a single pass over the
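The excerpt above notes that uniformly sampling O((1/ε²) log(1/(δε))) stream elements already solves the approximate rank problem. The following sketch (illustrative only, with arbitrary constants; it is not the KLL-style sketch the paper improves) collects such a sample in one pass and estimates the rank of a query by rescaling its rank within the sample.

```python
import math
import random


def sample_size(eps, delta):
    """A sample size of order (1/eps^2) * log(1/(delta*eps)); the constant 2 is illustrative."""
    return math.ceil(2 / eps**2 * math.log(2 / (delta * eps)))


def uniform_sample(stream, s):
    """One pass reservoir sampling of s elements (a uniformly random subset of the stream)."""
    sample = []
    for i, x in enumerate(stream, start=1):
        if len(sample) < s:
            sample.append(x)
        else:
            j = random.randrange(i)
            if j < s:
                sample[j] = x
    return sample, i


def estimated_rank(sample, n, q):
    """Estimate |{stream elements <= q}| by rescaling the rank of q within the sample."""
    return round(sum(1 for x in sample if x <= q) / len(sample) * n)


eps, delta = 0.05, 0.01
stream = [random.random() for _ in range(100_000)]
sample, n = uniform_sample(stream, sample_size(eps, delta))
q = 0.3
print(estimated_rank(sample, n, q), "vs exact", sum(1 for x in stream if x <= q))
```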
  • Turnstile Streaming Algorithms Might As Well Be Linear Sketches
    Turnstile Streaming Algorithms Might as Well Be Linear Sketches Yi Li (Max-Planck Institute for Informatics, [email protected]); Huy L. Nguyễn (Princeton University, [email protected]); David P. Woodruff (IBM Research, Almaden, [email protected])
    ABSTRACT In the turnstile model of data streams, an underlying vector x ∈ {−m, −m+1, ..., m−1, m}^n is presented as a long sequence of positive and negative integer updates to its coordinates. A randomized algorithm seeks to approximate a function f(x) with constant probability while only making a single pass over this sequence of updates and using a small amount of space. All known algorithms in this model are linear sketches: they sample a matrix A from a distribution on integer matrices in the preprocessing phase, and maintain the linear sketch A · x while processing the stream. At the end of the stream, they output an arbitrary function of A · x. One cannot help but ask: are linear sketches universal? In this work we answer this question by showing that any 1-
    Keywords: streaming algorithms, linear sketches, lower bounds, communication complexity.
    1. INTRODUCTION In the turnstile model of data streams [28, 34], there is an underlying n-dimensional vector x which is initialized to 0 and evolves through an arbitrary finite sequence of additive updates to its coordinates. These updates are fed into a streaming algorithm, and have the form x ← x + e_i or x ← x − e_i, where e_i is the i-th standard unit vector. This changes the i-th coordinate of x by an additive 1 or −1.
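The defining property in the excerpt above, that the algorithm only maintains A · x under updates x ← x ± e_i, can be illustrated directly. The sketch below uses a random ±1 matrix purely for illustration; it is a generic linear sketch, not a specific estimator from the paper.

```python
import random


class LinearSketch:
    """Maintains y = A @ x for a fixed random matrix A while x receives turnstile updates."""

    def __init__(self, n, rows, seed=0):
        rng = random.Random(seed)
        # A is a rows x n matrix of random +/-1 entries (any fixed distribution would do).
        self.A = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(rows)]
        self.y = [0] * rows

    def update(self, i, delta):
        """Apply x <- x + delta * e_i (delta is +1 or -1 in the turnstile model)."""
        for r in range(len(self.y)):
            self.y[r] += delta * self.A[r][i]


n = 8
sketch = LinearSketch(n, rows=4)
updates = [(3, +1), (5, +1), (3, +1), (5, -1), (0, +1)]   # net vector: x[0] = 1, x[3] = 2
for i, d in updates:
    sketch.update(i, d)

# Linearity: the sketch of the stream equals A applied to the net vector x.
x = [0] * n
for i, d in updates:
    x[i] += d
direct = [sum(row[i] * x[i] for i in range(n)) for row in sketch.A]
print(sketch.y == direct)  # True
```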
  • Lecture 1: Introduction
    Lecture 1: Introduction Agenda: • Welcome to CS 238 — Algorithmic Techniques in Computational Biology • Official course information • Course description • Announcements • Basic concepts in molecular biology Official course information • Grading weights: – 50% assignments (3-4) – 50% participation, presentation, and course report. • Text and reference books: – T. Jiang, Y. Xu and M. Zhang (co-eds), Current Topics in Computational Biology, MIT Press, 2002. (co-published by Tsinghua University Press in China) – D. Gusfield, Algorithms for Strings, Trees, and Sequences: Computer Science and Computational Biology, Cambridge Press, 1997. – D. Krane and M. Raymer, Fundamental Concepts of Bioinformatics, Benjamin Cummings, 2003. – P. Pevzner, Computational Molecular Biology: An Algorithmic Approach, 2000, the MIT Press. – M. Waterman, Introduction to Computational Biology: Maps, Sequences and Genomes, Chapman and Hall, 1995. – These notes (typeset by Guohui Lin and Tao Jiang). • Instructor: Tao Jiang – Surge Building 330, x82991, [email protected] – Office hours: Tuesday & Thursday 3-4pm Course overview • Topics covered: – Biological background introduction – My research topics – Sequence homology search and comparison – Sequence structural comparison – String matching algorithms and suffix trees – Genome rearrangement – Protein structure and function prediction – Phylogenetic reconstruction * DNA sequencing, sequence assembly * Physical/restriction mapping * Prediction of regulatory elements • Announcements:
  • Finding Graph Matchings in Data Streams
    Finding Graph Matchings in Data Streams Andrew McGregor⋆ Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA [email protected] Abstract. We present algorithms for finding large graph matchings in the streaming model. In this model, applicable when dealing with massive graphs, edges are streamed-in in some arbitrary order rather than residing in randomly accessible memory. For ε > 0, we achieve a 1/(1+ε) approximation for maximum cardinality matching and a 1/(2+ε) approximation to maximum weighted matching. Both algorithms use a constant number of passes and Õ(|V|) space. 1 Introduction Given a graph G = (V, E), the Maximum Cardinality Matching (MCM) problem is to find the largest set of edges such that no two adjacent edges are selected. More generally, for an edge-weighted graph, the Maximum Weighted Matching (MWM) problem is to find the set of edges whose total weight is maximized subject to the condition that no two adjacent edges are selected. Both problems are well studied and exact polynomial solutions are known [1–4]. The fastest of these algorithms solves MWM with running time O(nm + n² log n) where n = |V| and m = |E|. However, for massive graphs in real world applications, the above algorithms can still be prohibitively computationally expensive. Examples include the virtual screening of protein databases. (See [5] for other examples.) Consequently there has been much interest in faster algorithms, typically of O(m) complexity, that find good approximate solutions to the above problems. For MCM, a linear time approximation scheme was given by Kalantari and Shokoufandeh [6].
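As a baseline for the results quoted above, the classical one-pass greedy algorithm keeps an edge whenever both of its endpoints are still free; it uses O(|V|) space and returns a maximal matching, hence a 1/2-approximation to MCM. The sketch below shows only this standard baseline, not the 1/(1+ε)-approximation of the paper.

```python
def greedy_streaming_matching(edge_stream):
    """One pass over the edges; keep an edge iff both endpoints are currently unmatched."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching


# On the path 1-2-3-4-5 (in this arrival order) greedy picks (1,2) and (3,4); the optimum also has size 2.
print(greedy_streaming_matching([(1, 2), (2, 3), (3, 4), (4, 5)]))  # [(1, 2), (3, 4)]
```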
  • Suffix Trees for Fast Sensor Data Forwarding
    Suffix Trees for Fast Sensor Data Forwarding Jui-Chieh Wu (b), Hsueh-I Lu (b, c), Polly Huang (a, c); (a) Department of Electrical Engineering, (b) Department of Computer Science and Information Engineering, (c) Graduate Institute of Networking and Multimedia, National Taiwan University; [email protected], [email protected], [email protected]
    Abstract— In data-centric wireless sensor networks, data are no longer sent by the sink node's address. Instead, the sink node sends an explicit interest for a particular type of data. The source and the intermediate nodes then forward the data according to the routing states set by the corresponding interest. This data-centric style of communication is promising in that it alleviates the effort of node addressing and address reconfiguration. However, when the number of interests from different sinks increases, the size of the interest table grows. It could take 10s to 100s milliseconds to match an incoming data to a particular interest, which is orders of magnitude higher than the typical transmission and propagation delay in a wireless sensor network. The interest table lookup process is the bottleneck of packet
    alleviates the effort of node addressing and address reconfiguration in large-scale mobile sensor networks. Forwarding in data-centric sensor networks is particularly challenging. It involves matching of the data content, i.e., string-based attributes and values, instead of numeric addresses. This content-based forwarding problem is well studied in the domain of publish-subscribe systems. It is estimated in [3] that the time it takes to match an incoming data to a particular interest ranges from 10s to 100s milliseconds. This processing delay is several orders of magnitude higher than the propagation and transmission delay.
  • Lecture 1 1 Graph Streaming
    CS 671: Graph Streaming Algorithms and Lower Bounds Rutgers: Fall 2020 Lecture 1 September 1, 2020 Instructor: Sepehr Assadi Scribe: Sepehr Assadi Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications. They may be distributed outside this class only with the permission of the Instructor. 1 Graph Streaming Motivation Massive graphs appear in most application domains nowadays: web-pages and hyperlinks, neurons and synapses, papers and citations, or social networks and friendship links are just a few examples. Many of the computing tasks in processing these graphs, at their core, correspond to classical problems such as reachability, shortest path, or maximum matching, problems that have been studied extensively for almost a century now. However, the traditional algorithmic approaches to these problems do not accurately capture the challenges of processing massive graphs. For instance, we can no longer assume random access to the entire input on these graphs, as their sheer size prevents us from loading them into the main memory. To handle these challenges, there is a rapidly growing interest in developing algorithms that explicitly account for the restrictions of processing massive graphs. Graph streaming algorithms have been particularly successful in this regard; these are algorithms that process input graphs by making one or a few sequential passes over their edges while using a small memory. Not only can streaming algorithms be directly used to address various challenges in processing massive graphs, such as I/O-efficiency or monitoring evolving graphs in real time, they are also closely connected to other models of computation for massive graphs and often lead to efficient algorithms in those models as well.
  • Streaming Algorithms
    3 Streaming Algorithms Great Ideas in Theoretical Computer Science Saarland University, Summer 2014 Some Admin: • Deadline of Problem Set 1 is 23:59, May 14 (today)! • Students are divided into two groups for tutorials. Visit our course website and use Doodle to choose your free slots. • If you find some typos or would like to improve the lecture notes (e.g. add more details, intuitions), send me an email to get the .tex file :-) We live in an era of "Big Data": science, engineering, and technology are producing increasingly large data streams every second. Looking at social networks, electronic commerce, astronomical and biological data, the big data phenomenon appears everywhere nowadays, and presents opportunities and challenges for computer scientists and mathematicians. Compared with traditional algorithms, several issues need to be considered: a massive data set is too big to be stored; even an O(n²)-time algorithm is too slow; data may change over time, and algorithms need to cope with dynamic changes of the data. Hence streaming, dynamic and distributed algorithms are needed for analyzing big data. To illuminate the study of streaming algorithms, let us look at a few examples. Our first example comes from network analysis. Any big website nowadays receives millions of visits every minute, and very frequent visits per second from the same IP address are probably generated automatically by computers and due to hackers. In this scenario, a webmaster needs to efficiently detect and block those IPs which generate very frequent visits per second. Here, instead of storing the whole IP record history, which is expensive, we need an efficient algorithm that detects most such IPs.
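The IP-monitoring example in the excerpt above is the classical heavy-hitters problem. One standard way to solve it in a single pass with small memory is the Misra–Gries summary sketched below (an illustrative choice; the lecture notes themselves may develop a different algorithm): with k counters it is guaranteed to retain every item that occurs more than m/(k+1) times in a stream of length m, possibly together with some false positives.

```python
def misra_gries(stream, k):
    """One pass, at most k counters: returns a superset of all items with frequency > m/(k+1)."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            # Decrement every counter; drop the ones that reach zero (x itself is discarded).
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters


requests = ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3", "10.0.0.1", "10.0.0.4", "10.0.0.1"]
print(misra_gries(requests, k=2))  # '10.0.0.1' (4 of the 7 requests) is guaranteed to survive
```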