SetSketch: Filling the Gap between MinHash and HyperLogLog

16 pages · PDF · 1020 kB

Otmar Ertl, Dynatrace Research, Linz, Austria, [email protected]

ABSTRACT

MinHash and HyperLogLog are sketching algorithms that have become indispensable for set summaries in big data applications. While HyperLogLog allows counting different elements with very little space, MinHash is suitable for the fast comparison of sets as it allows estimating the Jaccard similarity and other joint quantities. This work presents a new data structure called SetSketch that is able to continuously fill the gap between both use cases. Its commutative and idempotent insert operation and its mergeable state make it suitable for distributed environments. Fast, robust, and easy-to-implement estimators for cardinality and joint quantities, as well as the ability to use SetSketch for similarity search, enable versatile applications. The presented joint estimator can also be applied to other data structures such as MinHash, HyperLogLog, or HyperMinHash, where it even performs better than the corresponding state-of-the-art estimators in many cases.

PVLDB Reference Format:
Otmar Ertl. SetSketch: Filling the Gap between MinHash and HyperLogLog. PVLDB, 14(11): 2244-2257, 2021. doi:10.14778/3476249.3476276

PVLDB Artifact Availability:
The source code, data, and/or other artifacts have been made available at https://github.com/dynatrace-research/set-sketch-paper.

This work is licensed under the Creative Commons BY-NC-ND 4.0 International License. Visit https://creativecommons.org/licenses/by-nc-nd/4.0/ to view a copy of this license. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment. Proceedings of the VLDB Endowment, Vol. 14, No. 11, ISSN 2150-8097. doi:10.14778/3476249.3476276

1 INTRODUCTION

Data sketches [16] that are able to represent sets of arbitrary size using only a small, fixed amount of memory have become important and widely used tools in big data applications. Although individual elements can no longer be accessed after insertion, they are still able to give approximate answers when querying the cardinality or joint quantities which may include the intersection size, union size, size of set differences, inclusion coefficients, or similarity measures like the Jaccard and the cosine similarity.

Numerous such algorithms with different characteristics have been developed and published over the last two decades. They can be classified based on the following properties [53] and use cases:

Idempotency: Insertions of values that have already been added should not further change the state of the data structure. Even though this seems to be an obvious requirement, it is not satisfied by many sketches for sets [11, 47, 58, 67, 69].

Commutativity: The order of insert operations should not be relevant for the final state. If the processing order cannot be guaranteed, commutativity is needed to get reproducible results. Many data structures are not commutative [13, 32, 65].

Mergeability: In large-scale applications with distributed data sources it is essential that the data structure supports the union set operation. It allows combining data sketches resulting from partial data streams to get an overall result. Ideally, the union operation is idempotent, associative, and commutative to get reproducible results. Some data structures trade mergeability for better space efficiency [11, 13, 65].

Space efficiency: A small memory footprint is the key purpose of data sketches. They generally allow to trade space for estimation accuracy, since variance is typically inversely proportional to the sketch size. Better space efficiencies can sometimes also be achieved at the expense of recording speed, e.g. by compression [21, 36, 62].

Recording speed: Fast update times are crucial for many applications. They vary over many orders of magnitude for different data structures. For example, they may be constant like for the HyperLogLog (HLL) sketch [30] or proportional to the sketch size like for minwise hashing (MinHash) [6].

Cardinality estimation: An important use case is the estimation of the number of elements in a set. Sketches have different efficiencies in encoding cardinality information. Apart from estimation error, robustness, speed, and simplicity of the estimation algorithm are important for practical use. Many algorithms rely on empirical observations [34] or need to combine different estimators for different cardinality ranges, as is the case for the original estimators of HLL [30] and HyperMinHash [72].

Joint estimation: Knowing the relationship among sets is important for many applications. Estimating the overlap of both sets in addition to their cardinalities eventually allows the computation of joint quantities such as Jaccard similarity, cosine similarity, inclusion coefficients, intersection size, or difference sizes. Data structures store different amounts of joint information. In that regard, for example, MinHash contains more information than HLL. Extracting joint quantities is more complex compared to cardinalities, because it involves the state of two sketches and requires the estimation of three unknowns in the general case, which makes finding an efficient, robust, fast, and easy-to-implement estimation algorithm challenging. If a sketch supports cardinality estimation and is mergeable, the joint estimation problem can be solved using the inclusion-exclusion principle |U ∩ V| = |U| + |V| − |U ∪ V|. However, this naive approach is inefficient as it does not use all information available.

Locality sensitivity: A further use case is similarity search. If the same components of two different sketches are equal with a probability that is a monotonic function of some similarity measure, it can be used for locality-sensitive hashing (LSH) [2, 35, 43, 73], an indexing technique that allows querying similar sets in sublinear time. For example, the components of MinHash have a collision probability that is equal to the Jaccard similarity.

The optimal choice of a sketch for sets depends on the application and is basically a trade-off between estimation accuracy, speed, and memory efficiency. Among the most widely used sketches are MinHash [6] and HLL [30]. Both are idempotent, commutative, and mergeable, which is probably one of the reasons for their popularity. The fast recording speed, simplicity, and memory efficiency have made HLL the state-of-the-art algorithm for cardinality estimation. In contrast, MinHash is significantly slower and is less memory-efficient with respect to cardinality estimation, but is locality-sensitive and allows easy and more accurate estimation of joint quantities.

1.1 Motivation and Contributions

The motivation to find a data structure that supports more accurate joint estimation and locality sensitivity like MinHash, while having a better space efficiency and a faster recording speed that are both closer to those of HLL, has led us to a new data structure called SetSketch. It fills the gap between HLL and MinHash and can be seen as a generalization of both as it is configurable with a continuous parameter between those two special cases. The possibility to fine-tune between space efficiency and joint estimation accuracy allows better adaptation to given requirements.

We present fast, robust, and simple methods for cardinality and joint estimation that do not rely on any empirically determined constants and that also give consistent errors over the full cardinality range. The cardinality estimator matches that of both MinHash or HLL in the corresponding limit cases. Together with the derived corrections for small and large cardinalities, which do not require any empirical calibration, the estimator can also be applied to HLL and generalized HyperLogLog (GHLL) sketches over the entire cardinality range. Since its derivation is much simpler than that in the original paper based on complex analysis [30], this work also contributes to a better understanding of the HLL algorithm. The presented joint estimation method can be also specialized

1.2 MinHash

MinHash [6] maps a set S to an m-dimensional vector (k_1, ..., k_m) using

    k_i := min_{d ∈ S} h_i(d).

Here the h_i are independent hash functions. The probability that components k_{Ui} and k_{Vi} of two different MinHash sketches for sets U and V match equals the Jaccard similarity J:

    Pr[k_{Ui} = k_{Vi}] = |U ∩ V| / |U ∪ V| = J.

This locality sensitivity makes MinHash very suitable for set comparisons and similarity search, because the Jaccard similarity can be directly and quickly estimated from the fraction of equal components with a root-mean-square error (RMSE) of sqrt(J(1 − J)/m). MinHash also allows the estimation of cardinalities [12, 13] and other joint quantities [15, 19].

Meanwhile, many improvements and variants have been published that either improve the recording speed or the memory efficiency. One permutation hashing [40] reduces the costs for adding a new element from O(m) to O(1). However, there is a high probability of uninitialized components for small sets leading to large estimation errors. This can be remedied by applying a finalization step called densification [44, 63, 64] which may be expensive for small sets [27] and also prevents further aggregations. Alternatively, fast similarity sketching [17], SuperMinHash [25], or weighted minwise hashing algorithms like BagMinHash [26] and ProbMinHash [27] specialized to unweighted sets can be used instead. Compared to one permutation hashing with densification, they are mergeable, allow further element insertions, and even give more accurate Jaccard similarity estimates for small set sizes.

To shrink the memory footprint, b-bit minwise hashing [39] can be used to reduce all MinHash components from typically 32 or 64 bits to only a few bits in a finalization step. Although this loss of information must be compensated by increasing the number of components m, the memory efficiency can be significantly improved if precise estimates are needed only for high similarities. However, the need of more components increases the computation time and the sketch cannot be further aggregated or merged after finalization.
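The MinHash construction and Jaccard estimator of Section 1.2 can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation: the independent hash functions h_i are simulated here by salting a single 64-bit hash with the component index i.

```python
import hashlib

def minhash(s, m=1024):
    """Plain MinHash (Broder): component i is the minimum of h_i over the
    set's elements; h_i is simulated by salting one 64-bit hash with i."""
    def h(i, d):
        data = f"{i}:{d}".encode()
        return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")
    return [min(h(i, d) for d in s) for i in range(m)]

def jaccard_estimate(sk_u, sk_v):
    """Fraction of matching components; each component matches with
    probability J, so the estimate has RMSE sqrt(J(1-J)/m)."""
    return sum(a == b for a, b in zip(sk_u, sk_v)) / len(sk_u)

u = set(range(0, 500))
v = set(range(125, 625))        # |U ∩ V| = 375, |U ∪ V| = 625, J = 0.6
su, sv = minhash(u), minhash(v)
print(round(jaccard_estimate(su, sv), 2))   # ≈ 0.6
```

With m = 1024 and J = 0.6 the RMSE sqrt(J(1 − J)/m) is about 0.015, so the printed estimate is reliably close to the true similarity. Note also that the update cost is O(m) per element, which is the recording-speed drawback mentioned in Section 1.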
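Mergeability and the naive inclusion-exclusion route from Section 1 can also be illustrated with MinHash: the componentwise minimum of two sketches is exactly the sketch of the union, and with hash values mapped to (0, 1) a simple method-of-moments estimator n ≈ m/(Σ k_i) − 1 recovers cardinalities. This estimator is an illustrative choice for this sketch (the paper cites [12, 13] for MinHash cardinality estimation), not the paper's own method.

```python
import hashlib

def minhash01(s, m=1024):
    """MinHash with hash values mapped to the unit interval (0, 1)."""
    def h01(i, d):
        data = f"{i}:{d}".encode()
        x = int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")
        return (x + 0.5) / 2.0**64          # uniform on (0, 1)
    return [min(h01(i, d) for d in s) for i in range(m)]

def merge(sk_a, sk_b):
    # componentwise minimum is exactly the sketch of the union (mergeability)
    return [min(a, b) for a, b in zip(sk_a, sk_b)]

def cardinality(sk):
    # each component is the minimum of n uniforms, so E[k_i] = 1/(n+1);
    # method-of-moments inversion gives n ≈ m / sum(k_i) - 1
    return len(sk) / sum(sk) - 1

u = set(range(0, 500))
v = set(range(125, 625))                    # |U ∩ V| = 375, |U ∪ V| = 625
su, sv = minhash01(u), minhash01(v)
est = cardinality(su) + cardinality(sv) - cardinality(merge(su, sv))
print(round(est))                           # close to 375
```

This makes the inefficiency of inclusion-exclusion concrete: three noisy cardinality estimates are combined, while the component-match information that a direct joint estimator would exploit is ignored.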
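The b-bit minwise hashing finalization step described above can be sketched as well: keeping only the lowest b bits of each component raises the collision probability of non-matching components to roughly 1/2^b (for large sets), which the estimator subtracts out again. The correction formula below is the standard one from the b-bit minwise hashing literature, not code from the paper.

```python
import hashlib

def minhash(s, m=1024):
    """Plain MinHash with 64-bit components (h_i simulated by salting)."""
    def h(i, d):
        data = f"{i}:{d}".encode()
        return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")
    return [min(h(i, d) for d in s) for i in range(m)]

def finalize_b_bits(sk, b):
    # keep only the lowest b bits of each component;
    # lossy, and the sketch can no longer be merged afterwards
    return [k & ((1 << b) - 1) for k in sk]

def jaccard_b_bit(sk_u, sk_v, b):
    # components now also collide by chance with probability ~ 1/2^b,
    # so Pr[match] ~ J + (1 - J)/2^b; invert this to recover J
    p = sum(x == y for x, y in zip(sk_u, sk_v)) / len(sk_u)
    r = 1.0 / (1 << b)
    return (p - r) / (1 - r)

u = set(range(0, 500))
v = set(range(125, 625))                 # J = 0.6
b = 2
su = finalize_b_bits(minhash(u), b)      # 2 bits per component instead of 64
sv = finalize_b_bits(minhash(v), b)
print(round(jaccard_b_bit(su, sv, b), 2))   # ≈ 0.6
```

The variance of the corrected estimate grows as b shrinks, which is why the text notes that the saved bits must be reinvested in a larger m unless only high similarities need to be resolved precisely.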
Recommended publications
  • Mining of Massive Datasets
    Mining of Massive Datasets Anand Rajaraman Kosmix, Inc. Jeffrey D. Ullman Stanford Univ. Copyright c 2010, 2011 Anand Rajaraman and Jeffrey D. Ullman ii Preface This book evolved from material developed over several years by Anand Raja- raman and Jeff Ullman for a one-quarter course at Stanford. The course CS345A, titled “Web Mining,” was designed as an advanced graduate course, although it has become accessible and interesting to advanced undergraduates. What the Book Is About At the highest level of description, this book is about data mining. However, it focuses on data mining of very large amounts of data, that is, data so large it does not fit in main memory. Because of the emphasis on size, many of our examples are about the Web or data derived from the Web. Further, the book takes an algorithmic point of view: data mining is about applying algorithms to data, rather than using data to “train” a machine-learning engine of some sort. The principal topics covered are: 1. Distributed file systems and map-reduce as a tool for creating parallel algorithms that succeed on very large amounts of data. 2. Similarity search, including the key techniques of minhashing and locality- sensitive hashing. 3. Data-stream processing and specialized algorithms for dealing with data that arrives so fast it must be processed immediately or lost. 4. The technology of search engines, including Google’s PageRank, link-spam detection, and the hubs-and-authorities approach. 5. Frequent-itemset mining, including association rules, market-baskets, the A-Priori Algorithm and its improvements. 6.
  • Applied Statistics
    ISSN 1932-6157 (print), ISSN 1941-7330 (online). THE ANNALS of APPLIED STATISTICS, an official journal of the Institute of Mathematical Statistics. Special section in memory of Stephen E. Fienberg (1942–2016), AOAS Editor-in-Chief 2013–2015. Editorial, iii. On Stephen E. Fienberg as a discussant and a friend, DONALD B. RUBIN, 683. Statistical paradises and paradoxes in big data (I): Law of large populations, big data paradox, and the 2016 US presidential election, XIAO-LI MENG, 685. Hypothesis testing for high-dimensional multinomials: A selective review, SIVARAMAN BALAKRISHNAN AND LARRY WASSERMAN, 727. When should modes of inference disagree? Some simple but challenging examples, D. A. S. FRASER, N. REID AND WEI LIN, 750. Fingerprint science, JOSEPH B. KADANE, 771. Statistical modeling and analysis of trace element concentrations in forensic glass evidence, KAREN D. H. PAN AND KAREN KAFADAR, 788. Loglinear model selection and human mobility, ADRIAN DOBRA AND REZA MOHAMMADI, 815. On the use of bootstrap with variational inference: Theory, interpretation, and a two-sample test example, YEN-CHI CHEN, Y. SAMUEL WANG AND ELENA A. EROSHEVA, 846. Providing accurate models across private partitioned data: Secure maximum likelihood estimation, JOSHUA SNOKE, TIMOTHY R. BRICK, ALEKSANDRA SLAVKOVIĆ AND MICHAEL D. HUNTER, 877. Clustering the prevalence of pediatric
  • arXiv:2102.08942v1 [cs.DB]
    A Survey on Locality Sensitive Hashing Algorithms and their Applications OMID JAFARI, New Mexico State University, USA PREETI MAURYA, New Mexico State University, USA PARTH NAGARKAR, New Mexico State University, USA KHANDKER MUSHFIQUL ISLAM, New Mexico State University, USA CHIDAMBARAM CRUSHEV, New Mexico State University, USA Finding nearest neighbors in high-dimensional spaces is a fundamental operation in many diverse application domains. Locality Sensitive Hashing (LSH) is one of the most popular techniques for finding approximate nearest neighbor searches in high-dimensional spaces. The main benefits of LSH are its sub-linear query performance and theoretical guarantees on the query accuracy. In this survey paper, we provide a review of state-of-the-art LSH and Distributed LSH techniques. Most importantly, unlike any other prior survey, we present how Locality Sensitive Hashing is utilized in different application domains. CCS Concepts: • General and reference → Surveys and overviews. Additional Key Words and Phrases: Locality Sensitive Hashing, Approximate Nearest Neighbor Search, High-Dimensional Similarity Search, Indexing 1 INTRODUCTION Finding nearest neighbors in high-dimensional spaces is an important problem in several diverse applications, such as multimedia retrieval, machine learning, biological and geological sciences, etc. For low-dimensions (< 10), popular tree-based index structures, such as KD-tree [12], SR-tree [56], etc. are effective, but for higher number of dimensions, these index structures suffer from the well-known problem, curse of dimensionality (where the performance of these index structures is often out-performed even by linear scans) [21]. Instead of searching for exact results, one solution to address the curse of dimensionality problem is to look for approximate results.
  • Similarity Search Using Locality Sensitive Hashing and Bloom Filter
    Similarity Search using Locality Sensitive Hashing and Bloom Filter Thesis submitted in partial fulfillment of the requirements for the award of degree of Master of Engineering in Computer Science and Engineering Submitted By Sachendra Singh Chauhan (Roll No. 801232021) Under the supervision of Dr. Shalini Batra Assistant Professor COMPUTER SCIENCE AND ENGINEERING DEPARTMENT THAPAR UNIVERSITY PATIALA – 147004 June 2014 ACKNOWLEDGEMENT No volume of words is enough to express my gratitude towards my guide, Dr. Shalini Batra, Assistant Professor, Computer Science and Engineering Department, Thapar University, who has been very concerned and has aided for all the material essential for the preparation of this thesis report. She has helped me to explore this vast topic in an organized manner and provided me with all the ideas on how to work towards a research-oriented venture. I am also thankful to Dr. Deepak Garg, Head of Department, CSED and Dr. Ashutosh Mishra, P.G. Coordinator, for the motivation and inspiration that triggered me for the thesis work. I would also like to thank the faculty members who were always there in the need of the hour and provided with all the help and facilities, which I required, for the completion of my thesis. Most importantly, I would like to thank my parents and the Almighty for showing me the right direction out of the blue, to help me stay calm in the oddest of the times and keep moving even at times when there was no hope. Sachendra Singh Chauhan 801232021 ii Abstract Similarity search of text documents can be reduced to Approximate Nearest Neighbor Search by converting text documents into sets by using Shingling.
  • Compressed Slides
    compsci 514: algorithms for data science Cameron Musco University of Massachusetts Amherst. Spring 2020. Lecture 7 0 logistics • Problem Set 1 is due tomorrow at 8pm in Gradescope. • No class next Tuesday (it’s a Monday at UMass). • Talk Today: Vatsal Sharan at 4pm in CS 151. Modern Perspectives on Classical Learning Problems: Role of Memory and Data Amplification. 1 summary Last Class: Hashing for Jaccard Similarity • MinHash for estimating the Jaccard similarity. • Locality sensitive hashing (LSH). • Application to fast similarity search. This Class: • Finish up MinHash and LSH. • The Frequent Elements (heavy-hitters) problem. • Misra-Gries summaries. 2 jaccard similarity jA\Bj # shared elements Jaccard Similarity: J(A; B) = jA[Bj = # total elements : Two Common Use Cases: • Near Neighbor Search: Have a database of n sets/bit strings and given a set A, want to find if it has high similarity to anything in the database. Naively Ω(n) time. • All-pairs Similarity Search: Have n different sets/bit strings. Want to find all pairs with high similarity. Naively Ω(n2) time. 3 minhashing MinHash(A) = mina2A h(a) where h : U ! [0; 1] is a random hash. Locality Sensitivity: Pr[MinHash(A) = MinHash(B)] = J(A; B): Represents a set with a single number that captures Jaccard similarity information! Given a collision free hash function g :[0; 1] ! [m], Pr [g(MinHash(A)) = g(MinHash(B))] = J(A; B): What is Pr [g(MinHash(A)) = g(MinHash(B))] if g is not collision free? Will be a bit larger than J(A; B). 4 lsh for similarity search When searching for similar items only search for matches that land in the same hash bucket.
  • Lecture Note
    Algorithms for Data Science: Lecture on Finding Similar Items. Barna Saha. 1 Finding Similar Items. Finding similar items is a fundamental data mining task. We may want to find whether two documents are similar to detect plagiarism, mirror websites, multiple versions of the same article etc. Finding similar items is useful for building recommender systems as well where we want to find users with similar buying patterns. In Netflix two movies can be deemed similar if they are rated highly by the same customers. While there are many measures of similarity, in this lecture, we will concentrate on one such popular measure known as Jaccard similarity. Definition (Jaccard Similarity). Given two sets S1 and S2, the Jaccard similarity of S1 and S2 is defined as |S1 ∩ S2|/|S1 ∪ S2|. Example 1. Let S1 = {1, 2, 3, 4, 7} and S2 = {1, 4, 9, 7, 5}; then |S1 ∪ S2| = 7 and |S1 ∩ S2| = 3. Thus the Jaccard similarity of S1 and S2 is 3/7. 1.1 Document Similarity. To compare two documents to know how similar they are, here is a simple approach: • Compute k-shingles for a suitable value of k. The k-shingles are all substrings of length k that appear in the document. For example, if a document is abcdabd and k = 2, then the 2-shingles are ab, bc, cd, da, ab, bd. • Compare the sets of k-shingles based on Jaccard similarity. One will often map the set of shingles to a set of integers via hashing. What should be the right size of the hash table? Select a hash table size to avoid collision.
  • Distributed Clustering Algorithm for Large Scale Clustering Problems
    IT 13 079 Examensarbete 30 hp November 2013 Distributed clustering algorithm for large scale clustering problems Daniele Bacarella Institutionen för informationsteknologi Department of Information Technology Abstract Distributed clustering algorithm for large scale clustering problems Daniele Bacarella Teknisk- naturvetenskaplig fakultet UTH-enheten Clustering is a task which has got much attention in data mining. The task of finding subsets of objects sharing some sort of common attributes is applied in various fields Besöksadress: such as biology, medicine, business and computer science. A document search engine Ångströmlaboratoriet Lägerhyddsvägen 1 for instance, takes advantage of the information obtained clustering the document Hus 4, Plan 0 database to return a result with relevant information to the query. Two main factors that make clustering a challenging task are the size of the dataset and the Postadress: dimensionality of the objects to cluster. Sometimes the character of the object makes Box 536 751 21 Uppsala it difficult identify its attributes. This is the case of the image clustering. A common approach is comparing two images using their visual features like the colors or shapes Telefon: they contain. However, sometimes they come along with textual information claiming 018 – 471 30 03 to be sufficiently descriptive of the content (e.g. tags on web images). Telefax: The purpose of this thesis work is to propose a text-based image clustering algorithm 018 – 471 30 00 through the combined application of two techniques namely Minhash Locality Sensitive Hashing (MinHash LSH) and Frequent itemset Mining. Hemsida: http://www.teknat.uu.se/student Handledare: Björn Lyttkens Lindén Ämnesgranskare: Kjell Orsborn Examinator: Ivan Christoff IT 13 079 Tryckt av: Reprocentralen ITC ”L’arte rinnova i popoli e ne rivela la vita.
  • Minner: Improved Similarity Estimation and Recall on Minhashed Databases
    Minner: Improved Similarity Estimation and Recall on MinHashed Databases ANONYMOUS AUTHOR(S) Quantization is the state of the art approach to efficiently storing and search- ing large high-dimensional datasets. Broder’97 [7] introduced the idea of Minwise Hashing (MinHash) for quantizing or sketching large sets or binary strings into a small number of values and provided a way to reconstruct the overlap or Jaccard Similarity between two sets sketched this way. In this paper, we propose a new estimator for MinHash in the case where the database is quantized, but the query is not. By computing the similarity between a set and a MinHash sketch directly, rather than first also sketching the query, we increase precision and improve recall. We take a principled approach based on maximum likelihood (MLE) with strong theoretical guarantees. Experimental results show an improved recall@10 corresponding to 10-30% extra MinHash values. Finally, we suggest a third very simple estimator, which is as fast as the classical MinHash estimator while often more precise than the MLE. Our methods concern only the query side of search and can be used with Fig. 1. Figure by Jegou et al. [18] illustrating the difference between sym- any existing MinHashed database without changes to the data. metric and asymmetric estimation, as used in Euclidean Nearest Neighbour Search. The distance q¹yº − x is a better approximation of y − x than q¹yº − q¹xº. In the set setting, when q is MinHash, it is not clear what it Additional Key Words and Phrases: datasets, quantization, minhash, estima- would even mean to compute q¹yº − x? tion, sketching, similarity join ACM Reference Format: Anonymous Author(s).
  • Strand: Fast Sequence Comparison Using Mapreduce and Locality Sensitive Hashing
    Strand: Fast Sequence Comparison using MapReduce and Locality Sensitive Hashing Jake Drew Michael Hahsler Computer Science and Engineering Department Engineering Management, Information, and, Southern Methodist University Systems Department Dallas, TX, USA Southern Methodist University www.jakemdrew.com Dallas, TX, USA [email protected] [email protected] ABSTRACT quence word counts and have become increasingly popu- The Super Threaded Reference-Free Alignment-Free N- lar since the computationally expensive sequence alignment sequence Decoder (Strand) is a highly parallel technique for method is avoided. One of the most successful word-based the learning and classification of gene sequence data into methods is the RDP classifier [22], a naive Bayesian classifier any number of associated categories or gene sequence tax- widely used for organism classification based on 16S rRNA onomies. Current methods, including the state-of-the-art gene sequence data. sequence classification method RDP, balance performance Numerous methods for the extraction, retention, and by using a shorter word length. Strand in contrast uses a matching of word collections from sequence data have been much longer word length, and does so efficiently by imple- studied. Some of these methods include: 12-mer collec- menting a Divide and Conquer algorithm leveraging MapRe- tions with the compression of 4 nucleotides per byte using duce style processing and locality sensitive hashing. Strand byte-wise searching [1], sorting of k-mer collections for the is able to learn gene sequence taxonomies and classify new optimized processing of shorter matches within similar se- sequences approximately 20 times faster than the RDP clas- quences [11], modification of the edit distance calculation to sifier while still achieving comparable accuracy results.
  • Efficient Minhash-Based Algorithms for Big Structured Data
    Efficient MinHash-based Algorithms for Big Structured Data Wei Wu Faculty of Engineering and Information Technology University of Technology Sydney A thesis is submitted for the degree of Doctor of Philosophy July 2018 I would like to dedicate this thesis to my loving parents ... Declaration I hereby declare that except where specific reference is made to the work of others, the contents of this dissertation are original and have not been submitted in whole or in part for consideration for any other degree or qualification in this, or any other University. This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration, except where specifically indicated in the text. This dissertation contains less than 65,000 words including appendices, bibliography, footnotes, tables and equations and has less than 150 figures. Wei Wu July 2018 Acknowledgements I would like to express my earnest thanks to my principal supervisor, Dr. Ling Chen, and co-supervisor, Dr. Bin Li, who have provided the tremendous support and guidance for my research. Dr. Ling Chen always controlled the general direction of my PhD life. She made the recommendation letter for me, and contacted the world leading researchers for collaboration with me. Also, she encouraged me to attend the research conferences, where my horizon was remarkably broaden and I knew more about the research frontiers by dis- cussing with the worldwide researchers. Dr. Bin Li guided me to explore the unknown, how to think and how to write. He once said to me, "It is not that I am supervising you.
  • Hash-Grams: Faster N-Gram Features for Classification and Malware Detection Edward Raff Charles Nicholas Laboratory for Physical Sciences Univ
    Hash-Grams: Faster N-Gram Features for Classification and Malware Detection Edward Raff Charles Nicholas Laboratory for Physical Sciences Univ. of Maryland, Baltimore County [email protected] [email protected] Booz Allen Hamilton [email protected] ABSTRACT n-grams is 2564 or about four billion. To build the binary feature N-grams have long been used as features for classification problems, vector mentioned above for an input file of length D, n-grams would and their distribution often allows selection of the top-k occurring need to be inspected, and a certain bit in the feature vector set n-grams as a reliable first-pass to feature selection. However, this accordingly. Malware specimen n-grams do tend to follow a Zipfian top-k selection can be a performance bottleneck, especially when distribution [13, 18]. Even so, we still have to deal with a massive dealing with massive item sets and corpora. In this work we intro- “vocabulary” of n-grams that is too large to keep in memory. It duce Hash-Grams, an approach to perform top-k feature mining has been found that simply selecting the top-k most frequent n- for classification problems. We show that the Hash-Gram approach grams, which will fit in memory, results in better performance than can be up to three orders of magnitude faster than exact top-k se- a number of other feature selection approaches [13]. For this reason lection algorithms. Using a malware corpus of over 2 TB in size, we we seek to solve the top-k selection faster with fixed memory so that show how Hash-Grams retain comparable classification accuracy, n-gram based analysis of malware can be scaled to larger corpora.
  • December, 2018 2018
    Ben-Gurion University of the Negev, The Faculty of Natural Sciences, Department of Computer Science. The Relation Between Jaccard Similarity and Edit Distance in LSH Based Malware Detection. Thesis submitted in partial fulfillment of the requirements for the Master of Sciences Degree. Mohammad Ghanayim, under the supervision of Prof. Shlomi Dolev. December 2018. Abstract: In this work, we employ textual data mining methods which are usually used for finding similar items in large datasets, namely n-grams, MinHashing, and Locality Sensitive Hashing, for behavioral analysis and detection of malware. Following the misuse approach of intrusion detection, we train a classifier by using the above techniques to efficiently cluster a dataset of malicious Windows API call traces with respect to Jaccard similarity. The obtained clustering is used with great success in classifying new query traces for being either malicious or benign. The computation associated with extracting n-grams and calculating Jaccard similarity is much more efficient than the computation of edit distance (linear versus quadratic time complexity). Thus, we examine the possibility to utilize the Jaccard similarity in an estimation of edit distance. We formulate inequalities defining the relationship between Jaccard similarity and edit distance, that impose upper and lower bounds on the edit distance values in terms of the Jaccard values.