Improving Dual-Tree Algorithms

Total Pages: 16

File Type: PDF, Size: 1020 KB

IMPROVING DUAL-TREE ALGORITHMS
A Dissertation Presented to The Academic Faculty
By Ryan R. Curtin
In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in Electrical and Computer Engineering
School of Electrical and Computer Engineering, Georgia Institute of Technology
December 2015
Copyright © 2015 by Ryan R. Curtin

Approved by:
Dr. David V. Anderson, Committee Chair – Associate Professor, School of Electrical and Computer Engineering, Georgia Institute of Technology
Dr. Mark A. Clements – Associate Professor, School of Electrical and Computer Engineering, Georgia Institute of Technology
Dr. Charles L. Isbell, Jr., Advisor – Professor, School of Interactive Computing, Georgia Institute of Technology
Dr. Polo Chau – Assistant Professor, School of Computational Science and Engineering, Georgia Institute of Technology
Dr. Richard W. Vuduc – Associate Professor, School of Computational Science and Engineering, Georgia Institute of Technology
Date Approved: August 18, 2015

This document is dedicated solely to my cats, who do not and will not ever have the capacity to understand even the title of this manuscript, and who, thanks to domestication, are actually entirely incapable of leading any sort of autonomous lifestyle and thus are mortally dependent on my completion of simple maintenance tasks.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1  THE POINT
CHAPTER 2  INTRODUCTION
  2.1 An abridged history of statistical computing
  2.2 A less abridged history of the development of trees
  2.3 The explosion of single-tree algorithms
  2.4 A smorgasbord of trees
  2.5 The fast multipole method and query amortization
  2.6 Redirection to statistics and dual-tree algorithms
CHAPTER 3  TREE-INDEPENDENT DUAL-TREE ALGORITHMS
  3.1 A bibliographical note
  3.2 The goal: unification of dual-tree algorithms
  3.3 Space trees
  3.4 Space tree notation
  3.5 Bounding quantities with space trees
  3.6 A quick survey of some space trees
    3.6.1 The quad-tree, octree, and hyperoctree
    3.6.2 The kd-tree
    3.6.3 The ball tree
    3.6.4 The metric tree / vantage-point tree
    3.6.5 The cover tree
  3.7 Traversals and problem-specific rules
  3.8 A meta-algorithm to produce a dual-tree algorithm
CHAPTER 4  MLPACK: A FLEXIBLE C++ FRAMEWORK
  4.1 A survey of the landscape of machine learning libraries
  4.2 The Armadillo linear algebra library
  4.3 Template paradigms for fast, generic code
  4.4 Design principles of mlpack
    4.4.1 Scalable and fast machine learning algorithms
    4.4.2 Intuitive, consistent, and simple API
    4.4.3 Current functionality of mlpack
  4.5 Tree-independent dual-tree algorithms in mlpack
    4.5.1 The TreeType policy
    4.5.2 The RuleType policy
    4.5.3 The TraversalType policy
    4.5.4 Assembling a dual-tree algorithm in mlpack
CHAPTER 5  TREES
  5.1 Free parameters in the cover tree
    5.1.1 The cover tree: a rehash
    5.1.2 The expansion constant
    5.1.3 Root point selection policy
    5.1.4 Correlation of tree width to performance
  5.2 Cover tree runtime bounds
    5.2.1 Tree imbalance
    5.2.2 General runtime bound
  5.3 An issue with the cover tree single-tree runtime bound proof
CHAPTER 6  TRAVERSALS
  6.1 Improved dual depth-first traversal
    6.1.1 Prioritized recursions and nearest neighbor search
    6.1.2 Delaying reference recursion
    6.1.3 Experimental evaluation
CHAPTER 7  ALGORITHMS
  7.1 Nearest neighbor search
    7.1.1 A tree-independent dual-tree algorithm
    7.1.2 Correctness proof
    7.1.3 Specialization to existing k-NN algorithms
    7.1.4 Runtime bounds
  7.2 Range search
    7.2.1 A tree-independent dual-tree algorithm
    7.2.2 Runtime bound
  7.3 Kernel density estimation
    7.3.1 Dual-tree algorithm for absolute-value approximation
    7.3.2 Absolute-value approximate KDE runtime bounds
    7.3.3 Relative-value approximation
    7.3.4 Runtime bounds for relative-value approximate KDE
  7.4 Minimum spanning tree calculation
  7.5 Sparse kernel matrix approximation
    7.5.1 Sparsity in kernel matrices
    7.5.2 Related work on kernel matrix approximation
    7.5.3 A dual-tree algorithm
    7.5.4 Correctness proof
    7.5.5 Application to kernel PCA
    7.5.6 Theoretical results
    7.5.7 Empirical results for kernel PCA
    7.5.8 Extensions
    7.5.9 Application to other kernel methods
    7.5.10 Discussion
  7.6 Gaussian mixture model training
    7.6.1 Problem introduction
    7.6.2 A generalized single-tree algorithm
    7.6.3 Possible improvements and extensions
  7.7 Max-kernel search
    7.7.1 Introduction to max-kernel search
    7.7.2 Related work
    7.7.3 Unnormalized kernels
    7.7.4 Analysis of the problem
    7.7.5 Indexing points in H
    7.7.6 Bounding the kernel value
    7.7.7 Single-tree max-kernel search
    7.7.8 Dual-tree fast max-kernel search
    7.7.9 Dual-tree algorithm runtime analysis
    7.7.10 Extensions for approximate max-kernel search
    7.7.11 Empirical evaluation
    7.7.12 Future directions for max-kernel search
    7.7.13 Wrap-up for max-kernel search
  7.8 k-means clustering
    7.8.1 Introduction
    7.8.2 Scaling k-means
    7.8.3 The blacklist algorithm and trees
    7.8.4 Pruning strategies
    7.8.5 The dual-tree k-means algorithm
    7.8.6 Theoretical results
    7.8.7 Experiments
    7.8.8 Future directions
CHAPTER 8  CONCLUSION AND FUTURE DIRECTIONS
REFERENCES

LIST OF TABLES

1  Notation for trees. See text for details.
2  Properties of hyperoctrees.
3  Properties of kd-trees.
4  Properties of ball trees.
5  Properties of vp-trees.
6  Properties of cover trees.
7  mlpack benchmark dataset sizes.
8  All-k-nearest-neighbor benchmarks (in seconds).
9  k-means benchmarks (in seconds).
10  Runtime statistics for different root point policies.
11  Build-time statistics for different root point policies.
12  Empirically calculated tree imbalances.
13  Dataset information.
14  Runtime (distance evaluations) for exact nearest neighbor search.
15  Runtime (distance calculations) [or M/W] for approximate nearest neighbor search.
16  Image denoising performance on the USPS dataset as a function of σ.
17  Datasets used for kernel PCA experiments.
18  Results for Epanechnikov kernel.
19  Results for Gaussian kernel.
20  Vector dataset details; |S_q| and |S_r| denote the number of objects in the query and reference sets respectively, and dims denotes the dimensionality of the sets.
21  Single-tree and dual-tree FastMKS on vector datasets with k = 1, part one.
22  Single-tree and dual-tree FastMKS on vector datasets with k = 1, part two.
23  Single-tree and dual-tree FastMKS on protein sequences with k = 1.
24  Runtime and memory bounds for k-means algorithms.
25  Dataset information for dual-tree k-means.
26  Empirical results for k-means.

LIST OF FIGURES

1  Abstract representation of an example quadtree.
2  Geometric representation of the same example quadtree.
3  Abstract representation of an example kd-tree.
4  Geometric representation of the same example kd-tree (top two levels).
5  Geometric representation of the same example kd-tree (bottom two levels).
6  Abstract representation of an example kd-tree.
7  Geometric representation of the same example kd-tree.
8  Parameterized representation of the same example kd-tree.
9  Another example space tree. …
Recommended publications
  • mlpack: or, How I Learned To Stop Worrying and Love C++ (Ryan R. Curtin)
    mlpack: or, How I Learned To Stop Worrying and Love C++. Ryan R. Curtin, March 21, 2018.
    Introduction: Why are we here? Speed. Programming languages. Machine learning. C++. mlpack. Fast.
    Graph #1: Java, JS, INTERCAL, Scala, Python, C++, C, Go, Ruby, C#, Tcl, ASM, PHP, Lua, VB, placed along axes from low-level to high-level, fast to easy, and specific to portable. Note: this is not a scientific or particularly accurate representation.
  • Data Mining – Intro
    Data warehouse & Data Mining, UNIT-3 Syllabus.
    UNIT 3 – Classification: Introduction, decision tree, tree induction algorithm – split algorithm based on information theory, split algorithm based on Gini index; naïve Bayes method; estimating predictive accuracy of classification method; classification software, software for association rule mining; case study; KDD Insurance Risk Assessment.
    What is Data Mining? Data Mining is: (1) the efficient discovery of previously unknown, valid, potentially useful, understandable patterns in large datasets; (2) the analysis step of the "knowledge discovery in databases" (KDD) process; (3) the analysis of (often large) observational data sets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the data owner.
    Knowledge Discovery – examples of large datasets: Government (IRS, NGA, ...); large corporations (WALMART: 20M transactions per day; MOBIL: 100 TB geological databases; AT&T: 300M calls per day; credit card companies); scientific (NASA EOS project: 50 GB per hour; environmental datasets).
    KDD: The Knowledge Discovery in Databases (KDD) process is commonly defined with the stages (1) selection, (2) pre-processing, (3) transformation, (4) data mining, and (5) interpretation/evaluation.
    Data mining methods: 1. Decision tree classifiers: used for modeling, classification. 2. Association rules: used to find associations between sets of attributes. 3. Sequential patterns: used to find temporal associations in time series. 4. Hierarchical ...
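    The "split algorithm based on Gini index" mentioned in this syllabus chooses the split that minimizes weighted Gini impurity. A minimal illustrative sketch of that criterion (my own example, not code from the lecture notes):

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Gini impurity of a set of class labels: 1 - sum_k p_k^2,
// where p_k is the fraction of samples belonging to class k.
double GiniImpurity(const std::vector<int>& labels) {
  if (labels.empty()) return 0.0;
  std::map<int, std::size_t> counts;
  for (int label : labels) ++counts[label];
  double impurity = 1.0;
  for (const auto& entry : counts) {
    const double p = static_cast<double>(entry.second) / labels.size();
    impurity -= p * p;
  }
  return impurity;
}

// Weighted impurity of a candidate binary split; the tree induction
// algorithm picks the attribute/threshold that minimizes this value.
double SplitImpurity(const std::vector<int>& left, const std::vector<int>& right) {
  const double n = static_cast<double>(left.size() + right.size());
  return (left.size() / n) * GiniImpurity(left) +
         (right.size() / n) * GiniImpurity(right);
}
```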
  • RcppMLPACK: R Integration with MLPACK Using Rcpp
    RcppMLPACK: R integration with MLPACK using Rcpp. Qiang Kou, April 20, 2020.
    1 MLPACK. MLPACK [1] is a C++ machine learning library with emphasis on scalability, speed, and ease-of-use. It outperforms competing machine learning libraries by large margins; for detailed results of benchmarking, please refer to Ref [1].
    1.1 Input/output using Armadillo. MLPACK uses Armadillo as input and output, which provides vector, matrix and cube types (integer, floating point and complex numbers are supported) [2]. By providing Matlab-like syntax and various matrix decompositions through integration with LAPACK or other high performance libraries (such as Intel MKL, AMD ACML, or OpenBLAS), Armadillo keeps a good balance between speed and ease of use. Armadillo is widely used in C++ libraries like MLPACK. However, there is one point we need to pay attention to: Armadillo matrices in MLPACK are stored in a column-major format for speed. That means observations are stored as columns and dimensions as rows. So when using MLPACK, an additional transpose may be needed.
    1.2 Simple API. Although MLPACK relies on template techniques in C++, it provides an intuitive and simple API. For the standard usage of methods, default arguments are provided. The MLPACK paper [1] provides a good example: standard k-means clustering in Euclidean space can be initialized like below:
    KMeans<> k();   (Listing 1: k-means with all default arguments)
    If we want to use Manhattan distance, a different cluster initialization policy, and allow empty clusters, the object can be initialized with additional arguments:
    KMeans<ManhattanDistance, KMeansPlusPlusInitialization, AllowEmptyClusters> k();   (Listing 2: k-means with additional arguments)
    1.3 Available methods in MLPACK. Commonly used machine learning methods are all implemented in MLPACK.
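    A minimal usage sketch of the column-major convention and the KMeans interface described above, written against an mlpack 3-style C++ API (header paths, namespaces, and signatures vary between mlpack releases, so treat this as an assumption rather than exact RcppMLPACK code):

```cpp
#include <mlpack/core.hpp>
#include <mlpack/methods/kmeans/kmeans.hpp>
#include <armadillo>

int main() {
  // Suppose the data arrived as 100 observations x 3 dimensions (row-major thinking).
  arma::mat observations(100, 3, arma::fill::randu);

  // mlpack expects column-major data: one column per observation, one row per
  // dimension -- so transpose before handing it to the algorithm.
  arma::mat data = observations.t();  // now 3 x 100

  mlpack::kmeans::KMeans<> k;          // all default template arguments
  arma::Row<size_t> assignments;       // cluster assignment for each observation
  arma::mat centroids;                 // one column per cluster centroid
  k.Cluster(data, 5, assignments, centroids);

  return 0;
}
```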
  • Lecture 04 Linear Structures Sort
    Algorithmics (6EAP), MTAT.03.238: Linear structures, sorting, searching, etc. Jaak Vilo, 2018 Fall.
    Big-Oh notation classes (class | informal meaning | intuition | analogy):
    f(n) ∈ o(g(n)) | f is dominated by g | strictly below | <
    f(n) ∈ O(g(n)) | bounded from above | upper bound | ≤
    f(n) ∈ Θ(g(n)) | bounded from above and below | "equal to" | =
    f(n) ∈ Ω(g(n)) | bounded from below | lower bound | ≥
    f(n) ∈ ω(g(n)) | f dominates g | strictly above | >
    Conclusions: Algorithm complexity deals with the behavior in the long term: worst case (typical), average case (quite hard), best case (bogus, cheating). In practice, the long term is sometimes not necessary; e.g. for sorting 20 elements, you don't need fancy algorithms.
    Linear, sequential, ordered lists: memory, disk, tape etc. are ordered, sequentially addressed media. A physical ordered list is roughly an array: memory (addresses, garbage collection), files (character/byte lists, lines in a text file), disk (disk fragmentation).
    Linear data structures – arrays: array, hashed array tree, bidirectional map, heightmap, bit array, lookup table, bit field, matrix, bitboard, parallel array, bitmap, sorted array, circular buffer, sparse array, control table, sparse matrix, image, Iliffe vector, dynamic array, variable-length array, gap buffer.
    Linear data structures – lists: doubly linked list, array list, xor linked list, linked list, zipper, self-organizing list, doubly connected edge list, skip list, unrolled linked list, difference list, VList.
    Lists as arrays: indices 0, 1, ..., MAX_SIZE-1 with a fill count size, e.g. L = int[MAX_SIZE] holding 3 6 7 5 2.
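    A few concrete instances of these asymptotic classes (my own illustrative examples, not from the slides), written as a LaTeX snippet:

```latex
% Illustrative examples of the asymptotic classes listed above.
\begin{align*}
3n + 7       &\in o(n^2)          && \text{($3n+7$ grows strictly slower than $n^2$)} \\
5n^2 + n     &\in O(n^2)          && \text{(bounded above by $6n^2$ for $n \ge 1$)} \\
5n^2 + n     &\in \Theta(n^2)     && \text{(also bounded below by $5n^2$)} \\
n \log n     &\in \Omega(n)       && \text{(grows at least as fast as $n$)} \\
2^n          &\in \omega(n^{10})  && \text{($2^n$ dominates every polynomial)}
\end{align*}
```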
  • Finding Neighbors in a Forest: a B-Tree for Smoothed Particle Hydrodynamics Simulations
    Finding Neighbors in a Forest: A b-tree for Smoothed Particle Hydrodynamics Simulations. Aurélien Cavelan, Rubén M. Cabezón, Jonas H. M. Korndorfer, Florina M. Ciorba (University of Basel, Switzerland). arXiv:1910.02639v2 [cs.DC], 18 May 2020.
    Abstract: Finding the exact close neighbors of each fluid element in mesh-free computational hydrodynamical methods, such as the Smoothed Particle Hydrodynamics (SPH), often becomes a main bottleneck for scaling their performance beyond a few million fluid elements per computing node. Tree structures are particularly suitable for SPH simulation codes, which rely on finding the exact close neighbors of each fluid element (or SPH particle). In this work we present a novel tree structure, named b-tree, which features an adaptive branching factor to reduce the depth of the neighbor search. Depending on the particle spatial distribution, finding neighbors using b-tree has an asymptotic best case complexity of O(n), as opposed to O(n log n) for other classical tree structures such as octrees and quadtrees. We also present the proposed tree structure as well as the algorithms to build it and to find the exact close neighbors of all particles. We assess the scalability of the proposed tree-based algorithms through an extensive set of performance experiments in a shared-memory system. Results show that b-tree is up to 12× faster for building the tree and up to 1.6× faster for finding the exact neighbors of all particles when compared to its octree form.
  • Deep Learning and Big Data in Healthcare: a Double Review for Critical Beginners
    Applied Sciences (Review). Deep Learning and Big Data in Healthcare: A Double Review for Critical Beginners. Luis Bote-Curiel (1), Sergio Muñoz-Romero (1,2,*), Alicia Gerrero-Curieses (1) and José Luis Rojo-Álvarez (1,2). (1) Department of Signal Theory and Communications, Rey Juan Carlos University, 28943 Fuenlabrada, Spain; (2) Center for Computational Simulation, Universidad Politécnica de Madrid, 28223 Pozuelo de Alarcón, Spain; (*) Correspondence. Received: 14 April 2019; Accepted: 3 June 2019; Published: 6 June 2019.
    Abstract: In the last few years, there has been a growing expectation created about the analysis of large amounts of data often available in organizations, which has been both scrutinized by the academic world and successfully exploited by industry. Nowadays, two of the most common terms heard in scientific circles are Big Data and Deep Learning. In this double review, we aim to shed some light on the current state of these different, yet somehow related branches of Data Science, in order to understand the current state and future evolution within the healthcare area. We start by giving a simple description of the technical elements of Big Data technologies, as well as an overview of the elements of Deep Learning techniques, according to their usual description in scientific literature. Then, we pay attention to the application fields that can be said to have delivered relevant real-world success stories, with emphasis on examples from large technology companies and financial institutions, among others.
  • Survey on Recent Machine Learning Tools, Platforms and Interface
    INTERNATIONAL JOURNAL OF INFORMATION AND COMPUTING SCIENCE, ISSN NO: 0972-1347.
    Survey on Recent Machine Learning Tools, Platforms and Interface. 1. Mr. S. Karthi, M.E, AP/IT, St. Joseph College of Engineering, Chennai. 2. Mrs. R. Raja Ramya, M.E.(Ph.D), AP/IT, Saveetha Engineering College, Chennai. 3. Mrs. N. Kanagavalli, M.E.(Ph.D), AP/IT, Saveetha Engineering College, Chennai.
    ABSTRACT: Machine learning and deep learning are recent booming research areas. Tools are a big part of machine learning, and choosing the right tool can be as important as working with the best algorithms, so new researchers who are not aware of the tools and frameworks are helped into machine learning. There are lots of open source tools and frameworks available in machine learning and deep learning. In this paper, various open source machine learning tools and frameworks are explained.
    Keywords: open source tools, frameworks, platforms.
    Introduction: A programming tool or software development tool is a computer program that software developers use to create, debug, maintain, or otherwise support other programs and applications ... in the process of working through a machine learning problem.
    Why Use Tools? Machine learning tools make applied machine learning faster, easier and more fun. Faster: this means that the time from ideas to results is greatly shortened; the alternative is that you have to implement each capability yourself, which requires research and deeper exercise. Easier: the open source tools are easy for an individual to learn; based on the input, we form the output in the easiest way.
  • MLPACK: A Scalable C++ Machine Learning Library (arXiv:1210.6293v1 [cs.MS], 23 Oct 2012)
    Journal of Machine Learning Research 1 (2012) 1-4. Submitted 9/12; Published x/12.
    MLPACK: A Scalable C++ Machine Learning Library. Ryan R. Curtin, James R. Cline, N. P. Slagle, William B. March, Parikshit Ram, Nishant A. Mehta, Alexander G. Gray. College of Computing, Georgia Institute of Technology, Atlanta, GA 30332.
    Abstract: MLPACK is a state-of-the-art, scalable, multi-platform C++ machine learning library released in late 2011, offering both a simple, consistent API accessible to novice users and high performance and flexibility to expert users by leveraging modern features of C++. MLPACK provides cutting-edge algorithms whose benchmarks exhibit far better performance than other leading machine learning libraries. MLPACK version 1.0.3, licensed under the LGPL, is available at http://www.mlpack.org.
    1. Introduction and Goals. Though several machine learning libraries are freely available online, few, if any, offer efficient algorithms to the average user. For instance, the popular Weka toolkit (Hall et al., 2009) emphasizes ease of use but scales poorly; the distributed Apache Mahout library offers scalability at a cost of higher overhead (such as clusters and powerful servers often unavailable to the average user). Also, few libraries offer breadth; for instance, libsvm (Chang and Lin, 2011) and the Tilburg Memory-Based Learner (TiMBL) are highly scalable and accessible yet each offers only a single method.
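    To make the "simple, consistent API" claim concrete, here is a minimal nearest-neighbor search sketch against an mlpack 3-style API (class, namespace, and header names are assumptions and shift slightly between mlpack releases):

```cpp
#include <mlpack/core.hpp>
#include <mlpack/methods/neighbor_search/neighbor_search.hpp>
#include <armadillo>

int main() {
  // Reference points: one column per point (mlpack's column-major convention).
  arma::mat references(10, 1000, arma::fill::randu);

  // Build a tree on the reference set; many releases also expose this
  // specialization through the shorter KNN typedef.
  mlpack::neighbor::NeighborSearch<mlpack::neighbor::NearestNeighborSort,
                                   mlpack::metric::EuclideanDistance>
      knn(references);

  // Find the 3 nearest neighbors of every reference point.
  arma::Mat<size_t> neighbors;  // neighbor indices, one column per query point
  arma::mat distances;          // corresponding distances
  knn.Search(3, neighbors, distances);

  return 0;
}
```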
  • Skip-Webs: Efficient Distributed Data Structures for Multi-Dimensional Data Sets
    Skip-Webs: Efficient Distributed Data Structures for Multi-Dimensional Data Sets. Lars Arge (Dept. of Computer Science, University of Aarhus, IT-Parken, Aabogade 34, DK-8200 Aarhus N, Denmark, large(at)daimi.au.dk), David Eppstein (Dept. of Computer Science, University of California, Irvine, CA 92697-3425, USA, eppstein(at)ics.uci.edu), Michael T. Goodrich (Dept. of Computer Science, University of California, Irvine, CA 92697-3425, USA, goodrich(at)acm.org).
    ABSTRACT: We present a framework for designing efficient distributed data structures for multi-dimensional data. Our structures, which we call skip-webs, extend and improve previous randomized distributed data structures, including skipnets and skip graphs. Our framework applies to a general class of data querying scenarios, which include linear (one-dimensional) data, such as sorted sets, as well as multi-dimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet. We show how to perform a query over such a set of n items spread among n ...
    ... name), a prefix match for a key string, a nearest-neighbor match for a numerical attribute, a range query over various numerical attributes, or a point-location query in a geometric map of attributes. That is, we would like the peer-to-peer network to support a rich set of possible data types that allow for multiple kinds of queries, including set membership, 1-dim. nearest neighbor queries, range queries, string prefix queries, and point-location queries. The motivation for such queries include DNA databases, location-based services, and approximate searches for file names or data titles.
  • Octree Generating Networks: Efficient Convolutional Architectures for High-Resolution 3D Outputs
    Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs. Maxim Tatarchenko (University of Freiburg), Alexey Dosovitskiy (University of Freiburg, Intel Labs), Thomas Brox (University of Freiburg).
    Abstract: We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image.
    Figure 1 (panels: dense; octree level 1; octree level 2; octree level 3; resolutions 32³, 64³, 128³): The proposed OGN represents its volumetric output as an octree. Initially estimated rough low-resolution structure is gradually refined to a desired high resolution. At each level only a sparse set of spatial locations is predicted. This representation is significantly more efficient than a dense voxel grid and allows generating volumes as large as 512³ voxels on a modern GPU in a single forward pass.
    1. Introduction: Up-convolutional decoder architectures have become ... large cell of an octree, resulting in savings in computation and memory compared to a fine regular grid. At the same time, fine details are not lost and can still be represented by small cells of the octree.
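    As a rough illustration of why an octree output is cheaper than a dense voxel grid (a generic sketch of a sparse octree cell, my own example, not the paper's network or data structure):

```cpp
#include <array>
#include <cstddef>
#include <memory>

// A minimal sparse octree node: a cell either stores a single occupancy value
// for its whole volume, or is subdivided into 8 children. Large homogeneous
// regions stay as single cells, so storage grows with surface detail rather
// than with the cube of the resolution.
struct OctreeNode {
  float occupancy = 0.0f;                               // value for a leaf cell
  std::array<std::unique_ptr<OctreeNode>, 8> children;  // all empty => leaf

  bool IsLeaf() const {
    for (const auto& c : children)
      if (c) return false;
    return true;
  }

  // Count the cells actually stored; for a mostly-empty or mostly-solid
  // volume this is far smaller than resolution^3.
  std::size_t CellCount() const {
    std::size_t count = 1;
    for (const auto& c : children)
      if (c) count += c->CellCount();
    return count;
  }
};
```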
  • ML Cheatsheet Documentation
    ML Cheatsheet Documentation. Team. Oct 02, 2018.
    Contents (Basics): 1 Linear Regression; 2 Gradient Descent; 3 Logistic Regression; 4 Glossary; 5 Calculus; 6 Linear Algebra; 7 Probability; 8 Statistics; 9 Notation; 10 Concepts; 11 Forwardpropagation; 12 Backpropagation; 13 Activation Functions; 14 Layers; 15 Loss Functions; 16 Optimizers; 17 Regularization; 18 Architectures; 19 Classification Algorithms; 20 Clustering Algorithms; 21 Regression Algorithms; 22 Reinforcement Learning; 23 Datasets; 24 Libraries; 25 Papers; 26 Other Content; 27 Contribute.
    Brief visual explanations of machine learning concepts with diagrams, code examples and links to resources for learning more. Warning: This document is under early stage development. If you find errors, please raise an issue or contribute a better definition!
    Chapter 1, Linear Regression: Introduction; Simple regression (making predictions, cost function, gradient descent, training, model evaluation, summary); Multivariable regression (growing complexity, normalization, making predictions, initialize weights, cost function, gradient descent, simplifying with matrices, bias term, model evaluation).
    1.1 Introduction: Linear Regression is a supervised machine learning algorithm where the predicted output is continuous and has a constant slope. It's used to predict values within a continuous range (e.g. sales, price) rather than trying to classify them into categories (e.g. cat, dog). There are two main types: Simple regression. Simple linear regression uses traditional slope-intercept form, where m and b are the variables our algorithm will try to "learn" to produce the most accurate predictions.
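    A compact gradient-descent fit of the simple slope-intercept regression y = m·x + b described above; an illustrative sketch written for this listing, not code taken from the cheatsheet:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Fit y = m*x + b by gradient descent on the mean squared error.
// Gradients: dMSE/dm = (2/n) * sum((m*x_i + b - y_i) * x_i)
//            dMSE/db = (2/n) * sum( m*x_i + b - y_i)
int main() {
  const std::vector<double> x = {1, 2, 3, 4, 5};
  const std::vector<double> y = {3, 5, 7, 9, 11};  // exactly y = 2x + 1

  double m = 0.0, b = 0.0;
  const double lr = 0.01;  // learning rate
  const std::size_t n = x.size();

  for (int epoch = 0; epoch < 5000; ++epoch) {
    double gradM = 0.0, gradB = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
      const double err = (m * x[i] + b) - y[i];
      gradM += 2.0 * err * x[i] / n;
      gradB += 2.0 * err / n;
    }
    m -= lr * gradM;
    b -= lr * gradB;
  }

  std::cout << "m = " << m << ", b = " << b << "\n";  // approaches m = 2, b = 1
  return 0;
}
```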
  • Cover Trees for Nearest Neighbor
    Cover Trees for Nearest Neighbor. Alina Beygelzimer (IBM Thomas J. Watson Research Center, Hawthorne, NY 10532), Sham Kakade (TTI-Chicago, 1427 E 60th Street, Chicago, IL 60637), John Langford (TTI-Chicago, 1427 E 60th Street, Chicago, IL 60637).
    Abstract: We present a tree data structure for fast nearest neighbor operations in general n-point metric spaces (where the data set consists of n points). The data structure requires O(n) space regardless of the metric's structure yet maintains all performance properties of a navigating net [KL04a]. If the point set has a bounded expansion constant c, which is a measure of the intrinsic dimensionality (as defined in [KR02]), the cover tree data structure can be constructed in O(c⁶ n log n) time. Furthermore, nearest neighbor queries require time only logarithmic ...
    The basic nearest neighbor problem is as follows: Given a set S of n points in some metric space (X, d), the problem is to preprocess S so that given a query point p ∈ X, one can efficiently find a point q ∈ S which minimizes d(p, q).
    Context. For general metrics, finding (or even approximating) the nearest neighbor of a point requires Ω(n) time. The classical example is a uniform metric where every pair of points is near the same distance, so there is no structure to take advantage of. However, the metrics of practical interest typically do have some structure which can be exploited to yield significant computational speedups.
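    For reference, the three invariants that define a cover tree in Beygelzimer et al.'s formulation, with C_i denoting the set of points at level i (my summary of the standard definition, not text from the excerpt):

```latex
% Cover tree invariants (implicit representation), levels decreasing from the root.
\begin{itemize}
  \item \textbf{Nesting:} $C_i \subset C_{i-1}$ for every level $i$.
  \item \textbf{Covering:} for every $p \in C_{i-1}$ there exists $q \in C_i$ with
        $d(p, q) \leq 2^i$, and exactly one such $q$ is the parent of $p$.
  \item \textbf{Separation:} for all distinct $p, q \in C_i$, $d(p, q) > 2^i$.
\end{itemize}
```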