Information Flows of Diverse Autoencoders
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Training Autoencoders by Alternating Minimization
Under review as a conference paper at ICLR 2018. Anonymous authors, paper under double-blind review.

Abstract: We present DANTE, a novel method for training neural networks, in particular autoencoders, using the alternating minimization principle. DANTE provides a distinct perspective in lieu of traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimization problem. We show that for autoencoder configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, the alternations can be performed very effectively. DANTE extends effortlessly to networks with multiple hidden layers and varying network configurations. In experiments on standard datasets, autoencoders trained using the proposed method were found to be competitive with traditional backpropagation techniques, both in terms of quality of solution and training speed.

1 Introduction. For much of the recent march of deep learning, gradient-based backpropagation methods, e.g. Stochastic Gradient Descent (SGD) and its variants, have been the mainstay of practitioners. The use of these methods, especially on vast amounts of data, has led to unprecedented progress in several areas of artificial intelligence. On one hand, the intense focus on these techniques has led to an intimate understanding of the hardware requirements and code optimizations needed to execute these routines on large datasets in a scalable manner. Today, myriad off-the-shelf and highly optimized packages exist that can churn through reasonably large datasets on GPU architectures with relatively mild human involvement and little bootstrap effort.
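A minimal sketch of the alternating-minimization principle for a one-hidden-layer autoencoder, assuming plain gradient updates within each phase; it illustrates the alternation idea only and is not the DANTE algorithm or its quasi-convex machinery:

```python
import numpy as np

# Hypothetical illustration: alternately optimize decoder weights W2 (encoder
# frozen) and encoder weights W1 (decoder frozen) for a sigmoid autoencoder.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))             # toy data: 500 samples, 20 features
W1 = rng.normal(scale=0.1, size=(20, 8))   # encoder weights
W2 = rng.normal(scale=0.1, size=(8, 20))   # decoder weights
lr = 0.1

for epoch in range(50):
    # Phase 1: update the decoder with the encoder held fixed.
    H = sigmoid(X @ W1)                    # hidden codes, constant this phase
    for _ in range(10):
        R = H @ W2 - X                     # reconstruction residual
        W2 -= lr * H.T @ R / len(X)        # gradient step on decoder only
    # Phase 2: update the encoder with the decoder held fixed.
    for _ in range(10):
        H = sigmoid(X @ W1)
        R = H @ W2 - X
        G = (R @ W2.T) * H * (1 - H)       # chain rule through the sigmoid
        W1 -= lr * X.T @ G / len(X)        # gradient step on encoder only

print("final reconstruction MSE:", np.mean((sigmoid(X @ W1) @ W2 - X) ** 2))
```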
Large-Scale Kernel RankSVM
Tzu-Ming Kuo, Ching-Pei Lee, Chih-Jen Lin.

Abstract: Learning to rank is an important task for recommendation systems, online advertisement and web search. Among learning-to-rank methods, rankSVM is a widely used model. Both linear and nonlinear (kernel) rankSVM have been extensively studied, but the lengthy training time of kernel rankSVM remains a challenging issue. In this paper, after discussing the difficulties of training kernel rankSVM, we propose an efficient method to handle these problems. The idea is to reduce the number of variables from quadratic to linear with respect to the number of training instances, and to evaluate the pairwise losses efficiently. Our setting is applicable to a variety of loss functions. Further, general optimization methods can be easily applied to solve the reformulated problem. Implementation issues are also carefully considered.

Many learning-to-rank methods have been proposed [11, 23, 5, 1, 14]. However, for tasks where the feature set is not rich enough, nonlinear methods may be needed. Therefore, it is important to develop efficient training methods for large kernel rankSVM. Assume we are given a set of training label-query-instance tuples $(y_i, q_i, x_i)$, $y_i \in \mathbb{R}$, $q_i \in S \subset \mathbb{Z}$, $x_i \in \mathbb{R}^n$, $i = 1, \dots, l$, where $S$ is the set of queries. By defining the set of preference pairs as

$$P \equiv \{(i, j) \mid q_i = q_j,\ y_i > y_j\} \quad \text{with } p \equiv |P|, \tag{1.1}$$

rankSVM [10] solves

$$\min_{w,\,\xi}\ \frac{1}{2} w^T w + C \sum_{(i,j) \in P} \xi_{i,j} \quad \text{subject to} \quad w^T\big(\phi(x_i) - \phi(x_j)\big) \ge 1 - \xi_{i,j}. \tag{1.2}$$
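To make the preference-pair construction concrete, here is an illustrative sketch (not the paper's solver) that builds the set P of (1.1) and evaluates a linear rankSVM objective with the hinge loss implied by the constraints of (1.2); the data and C below are made up:

```python
import numpy as np

def preference_pairs(y, q):
    """All (i, j) with the same query and y[i] > y[j] -- the set P in (1.1)."""
    return [(i, j) for i in range(len(y)) for j in range(len(y))
            if q[i] == q[j] and y[i] > y[j]]

def ranksvm_objective(w, X, pairs, C):
    """(1/2) w^T w + C * sum of hinge losses over preference pairs, as in (1.2)."""
    margins = np.array([w @ (X[i] - X[j]) for i, j in pairs])
    return 0.5 * w @ w + C * np.maximum(0.0, 1.0 - margins).sum()

y = np.array([2, 1, 0, 1])   # relevance labels
q = np.array([1, 1, 1, 2])   # query ids
X = np.random.default_rng(0).normal(size=(4, 3))
P = preference_pairs(y, q)   # pairs within the same query only
print(P, ranksvm_objective(np.zeros(3), X, P, C=1.0))
```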
A Kernel Method for Multi-Labelled Classification
André Elisseeff and Jason Weston. BIOwulf Technologies, 305 Broadway, New York, NY 10007. {andre,jason}@barhilltechnologies.com

Abstract: This article presents a Support Vector Machine (SVM)-like learning system to handle multi-label problems. Such problems are usually decomposed into many two-class problems, but the expressive power of such a system can be weak [5, 7]. We explore a new direct approach. It is based on a large-margin ranking system that shares a lot of common properties with SVMs. We tested it on a yeast gene functional classification problem with positive results.

1 Introduction. Many problems in text mining or bioinformatics are multi-labelled: each point in a learning set is associated with a set of labels. Consider for instance the classification task of determining the subjects of a document, or of relating one protein to its many effects on a cell. In either case, the learning task is to output a set of labels whose size is not known in advance: one document can for instance be about food, meat and finance, while another concerns only food and fat. Two-class and multi-class classification or ordinal regression problems can all be cast as multi-label ones. This makes the latter quite attractive, but at the same time it gives a warning: their generality hides the difficulty of solving them. The number of publications does not contradict this statement: we are aware of only a few works on the subject [4, 5, 7], and they all concern text mining applications.
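For contrast with the paper's direct approach, a minimal sketch of the decomposition baseline mentioned above — one independent binary SVM per label — using scikit-learn's LinearSVC on made-up data:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Decompose a multi-label problem into independent two-class problems.
# Dataset and label count are assumptions for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Y = rng.integers(0, 2, size=(100, 3))   # 3 labels; a sample may carry several

classifiers = [LinearSVC().fit(X, Y[:, k]) for k in range(Y.shape[1])]

def predict_label_set(x):
    # The predicted label set is the union of labels whose binary SVM fires.
    return {k for k, clf in enumerate(classifiers) if clf.predict([x])[0] == 1}

print(predict_label_set(X[0]))
```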
Density Based Data Clustering
Rayan Albarakati, California State University, San Bernardino. A project presented in partial fulfillment of the requirements for the degree Master of Science in Computer Science, March 2015. Approved by: Haiyan Qiao (advisor, School of Computer Science and Engineering), Owen J. Murphy, Kerstin Voigt. © 2015 Rayan Albarakati.

Recommended citation: Albarakati, Rayan, "Density Based Data Clustering" (2015). Electronic Theses, Projects, and Dissertations. 134. https://scholarworks.lib.csusb.edu/etd/134. Brought to you for free and open access by the Office of Graduate Studies at CSUSB ScholarWorks.

Abstract: Data clustering is a data analysis technique that groups data based on a measure of similarity.
A User's Guide to Support Vector Machines
Asa Ben-Hur (Department of Computer Science, Colorado State University) and Jason Weston (NEC Labs America, Princeton, NJ 08540, USA).

Abstract: The Support Vector Machine (SVM) is a widely used classifier. And yet, obtaining the best results with SVMs requires an understanding of their workings and the various ways a user can influence their accuracy. We provide the user with a basic understanding of the theory behind SVMs and focus on their use in practice. We describe the effect of the SVM parameters on the resulting classifier, how to select good values for those parameters, data normalization, factors that affect training time, and software for training SVMs.

1 Introduction. The Support Vector Machine (SVM) is a state-of-the-art classification method introduced in 1992 by Boser, Guyon, and Vapnik [1]. The SVM classifier is widely used in bioinformatics (and other disciplines) due to its high accuracy, ability to deal with high-dimensional data such as gene expression, and flexibility in modeling diverse sources of data [2]. SVMs belong to the general category of kernel methods [4, 5]. A kernel method is an algorithm that depends on the data only through dot-products. When this is the case, the dot product can be replaced by a kernel function which computes a dot product in some possibly high-dimensional feature space. This has two advantages: first, the ability to generate non-linear decision boundaries using methods designed for linear classifiers; second, the use of kernel functions allows the user to apply a classifier to data that have no obvious fixed-dimensional vector space representation.
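A minimal sketch of the kernel idea described above: an algorithm that touches the data only through dot products can swap in a kernel function, which computes a dot product in an implicit high-dimensional feature space. The RBF kernel parameters and toy data are assumptions:

```python
import numpy as np

def linear_kernel(x, z):
    return x @ z

def rbf_kernel(x, z, gamma=0.5):
    # Gaussian (RBF) kernel: a dot product in an infinite-dimensional space.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def gram_matrix(X, kernel):
    """Kernel (Gram) matrix K[i, j] = kernel(X[i], X[j])."""
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

X = np.random.default_rng(0).normal(size=(5, 3))
# The same downstream algorithm works with either Gram matrix.
print(gram_matrix(X, linear_kernel).shape, gram_matrix(X, rbf_kernel)[0, 0])
```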
The Forgetron: a Kernel-Based Perceptron on a Fixed Budget
Ofer Dekel, Shai Shalev-Shwartz, Yoram Singer. School of Computer Science & Engineering, The Hebrew University, Jerusalem 91904, Israel. {oferd,shais,singer}@cs.huji.ac.il

Abstract: The Perceptron algorithm, despite its simplicity, often performs well on online classification problems. The Perceptron becomes especially effective when it is used in conjunction with kernels. However, a common difficulty encountered when implementing kernel-based online algorithms is the amount of memory required to store the online hypothesis, which may grow unboundedly. In this paper we describe and analyze a new infrastructure for kernel-based learning with the Perceptron while adhering to a strict limit on the number of examples that can be stored. We first describe a template algorithm, called the Forgetron, for online learning on a fixed budget. We then provide specific algorithms and derive a unified mistake bound for all of them. To our knowledge, this is the first online learning paradigm which, on one hand, maintains a strict limit on the number of examples it can store and, on the other hand, entertains a relative mistake bound. We also present experiments with real datasets which underscore the merits of our approach.

1 Introduction. The introduction of the Support Vector Machine (SVM) [7] sparked a widespread interest in kernel methods as a means of solving (binary) classification problems. Although SVM was initially stated as a batch-learning technique, it significantly influenced the development of kernel methods in the online-learning setting. Online classification algorithms that can incorporate kernels include the Perceptron [6], ROMMA [5], ALMA [3], NORMA [4] and the Passive-Aggressive family of algorithms [1].
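A simplified sketch of a fixed-budget kernel Perceptron in the spirit described above; the evict-the-oldest rule is a stand-in assumption, not the Forgetron's actual removal and scaling scheme:

```python
import numpy as np
from collections import deque

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

class BudgetKernelPerceptron:
    """Mistake-driven kernel Perceptron whose memory never exceeds `budget`."""
    def __init__(self, budget=50, kernel=rbf):
        self.budget = budget
        self.kernel = kernel
        self.support = deque()            # stored (x, y) pairs, oldest first

    def predict(self, x):
        score = sum(y * self.kernel(sx, x) for sx, y in self.support)
        return 1 if score >= 0 else -1

    def update(self, x, y):
        if self.predict(x) != y:          # store only on mistakes
            if len(self.support) == self.budget:
                self.support.popleft()    # forget the oldest example
            self.support.append((x, y))

rng = np.random.default_rng(0)
model = BudgetKernelPerceptron(budget=20)
for _ in range(200):
    x = rng.normal(size=2)
    y = 1 if x.sum() > 0 else -1          # toy labeling rule
    model.update(x, y)
print("stored examples:", len(model.support))
```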
Kernel Methods Through the Roof: Handling Billions of Points Efficiently
Giacomo Meanti and Luigi Carratino (MaLGa, DIBRIS, Università degli Studi di Genova), Lorenzo Rosasco (MaLGa, DIBRIS, Università degli Studi di Genova, IIT & MIT), Alessandro Rudi (INRIA, École Normale Supérieure, PSL Research University).

Abstract: Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large-scale problems, since naïve implementations scale poorly with data size. Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections. Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware. Towards this end, we designed a preconditioned gradient solver for kernel methods exploiting both GPU acceleration and parallelization with multiple GPUs, implementing out-of-core variants of common linear algebra operations to guarantee optimal hardware utilization. Further, we optimize the numerical precision of different operations and maximize efficiency of matrix-vector multiplications. As a result we can experimentally show dramatic speedups on datasets with billions of points, while still guaranteeing state-of-the-art performance. Additionally, we make our software available as an easy-to-use library.

1 Introduction. Kernel methods provide non-linear/non-parametric extensions of many classical linear models in machine learning and statistics [45, 49]. The data are embedded via a non-linear map into a high-dimensional feature space, so that linear models in such a space effectively define non-linear models in the original space.
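As background for the solver described above, a minimal sketch of the standard iterative approach it accelerates: solving the kernel ridge regression system (K + λI)α = y with conjugate gradient, so only matrix-vector products with K are needed (the operations such solvers push onto GPUs). Plain CG without preconditioning, with toy data and assumed parameters:

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

def conjugate_gradient(matvec, b, iters=100, tol=1e-8):
    """Solve A x = b for symmetric positive-definite A given only x -> A x."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    for _ in range(iters):
        Ap = matvec(p)
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0])
K, lam = rbf_gram(X), 1e-3
alpha = conjugate_gradient(lambda v: K @ v + lam * v, y)   # (K + lam*I) alpha = y
print("train MSE:", np.mean((K @ alpha - y) ** 2))
```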
Neural Network in Hardware
Jiong Si (Bachelor of Engineering in Automation, Chongqing University of Science and Technology, 2008; Master of Engineering in Precision Instrument and Machinery, Hefei University of Technology, 2011). A dissertation submitted in partial fulfillment of the requirements for the degree Doctor of Philosophy in Electrical Engineering, Department of Electrical and Computer Engineering, Howard R. Hughes College of Engineering, The Graduate College, University of Nevada, Las Vegas, December 2019. Dissertation approved November 6, 2019: Sarah Harris, Ph.D. © 2019 Jiong Si.

Repository citation: Si, Jiong, "Neural Network in Hardware" (2019). UNLV Theses, Dissertations, Professional Papers, and Capstones. 3845. http://dx.doi.org/10.34917/18608784
Population Dynamics: Variance and the Sigmoid Activation Function ⁎ André C
André C. Marreiros, Jean Daunizeau, Stefan J. Kiebel, and Karl J. Friston. Wellcome Trust Centre for Neuroimaging, University College London, UK. Technical Note, NeuroImage 42 (2008) 147–157; www.elsevier.com/locate/ynimg. Received 24 January 2008; revised 8 April 2008; accepted 16 April 2008; available online 29 April 2008.

Abstract: This paper demonstrates how the sigmoid activation function of neural-mass models can be understood in terms of the variance or dispersion of neuronal states. We use this relationship to estimate the probability density on hidden neuronal states, using non-invasive electrophysiological (EEG) measures and dynamic causal modelling. The importance of implicit variance in neuronal states for neural-mass models of cortical dynamics is illustrated using both synthetic data and real EEG measurements of sensory evoked responses. © 2008 Elsevier Inc. All rights reserved. Keywords: neural-mass models; nonlinear; modelling.

From the introduction: models of this kind have been developed in several studies (David et al., 2006a,b; Kiebel et al., 2006; Garrido et al., 2007; Moran et al., 2007). All these models embed key nonlinearities that characterise real neuronal interactions. The most prevalent models are called neural-mass models and are generally formulated as a convolution of inputs to a neuronal ensemble or population to produce an output. Critically, the outputs of one ensemble serve as input to another, after some static transformation. Usually, the convolution operator is linear, whereas the transformation of outputs (e.g., mean depolarisation of pyramidal cells) to inputs (firing rates in presynaptic inputs) is a nonlinear sigmoidal function. This function generates the nonlinear behaviours that are critical for modelling and understanding neuronal activity.
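A minimal sketch of the relationship the note describes, assuming Gaussian-dispersed neuronal states and a hard firing threshold: the population firing rate is the probability mass above threshold, a sigmoid of the mean state whose slope shrinks as the dispersion grows. All numbers below are illustrative:

```python
import numpy as np
from math import erf, sqrt

def population_rate(v, threshold=0.0, sigma=1.0):
    """Fraction of a Gaussian-dispersed population above firing threshold.

    States ~ N(v, sigma^2), so the rate is the Gaussian tail mass above
    `threshold` -- a sigmoid-shaped function of the mean depolarisation v.
    """
    return 0.5 * (1.0 + erf((v - threshold) / (sigma * sqrt(2.0))))

for sigma in (0.5, 1.0, 2.0):
    rates = [population_rate(v, sigma=sigma) for v in (-2.0, 0.0, 2.0)]
    # Larger dispersion -> shallower sigmoid (smaller slope at threshold).
    print(f"sigma={sigma}: rates at v=-2,0,2 ->", np.round(rates, 3))
```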
Dynamic Modification of Activation Function Using the Backpropagation Algorithm in the Artificial Neural Networks
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 10, No. 4, 2019. Marina Adriana Mercioni, Alexandru Tiron, Stefan Holban. Department of Computer Science, Politehnica University Timisoara, Timisoara, Romania.

Abstract: The paper proposes the dynamic modification of the activation function within a learning technique, specifically the backpropagation algorithm. The modification consists in changing the slope of the sigmoid activation function according to whether the error increases or decreases over an epoch of learning. The study was carried out on the Waikato Environment for Knowledge Analysis (WEKA) platform by adding this feature to the MultilayerPerceptron class. The dynamic modification of the activation function is tied to the relative gradient of the error; neural networks with hidden layers were not used for it.

Keywords: artificial neural networks; activation function; sigmoid function; WEKA; multilayer perceptron; instance; classifier; gradient; rate metric; performance; dynamic modification.

From the introduction: these functions represent the main activation functions, currently the most used by neural networks, and for this reason they are a standard in building a neural-network architecture [6, 7]. The activation functions proposed over time have been analyzed in light of the applicability of the backpropagation algorithm; a function whose output is differentiable allows the error of the network to be differentiated as well [8]. The main purpose of an activation function is to scale the outputs of the neurons in neural networks and to introduce a non-linear relationship between the input and output of the neuron [9].
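A sketch of the idea as described, with assumed details: a parameterized sigmoid f(z) = 1/(1 + exp(-k·z)) trained by gradient descent, with the slope k adjusted between epochs according to whether the epoch error rose or fell. The specific multiplicative schedule for k is a made-up illustration, not the authors' rule:

```python
import numpy as np

def sigmoid(z, k):
    return 1.0 / (1.0 + np.exp(-k * z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy binary target
w, k, lr, prev_error = np.zeros(4), 1.0, 0.5, np.inf

for epoch in range(30):
    out = sigmoid(X @ w, k)
    grad = X.T @ ((out - y) * k * out * (1 - out)) / len(X)  # d(MSE)/dw
    w -= lr * grad
    error = np.mean((out - y) ** 2)
    # Dynamic slope (hypothetical schedule): steepen the sigmoid when the
    # epoch error decreased, flatten it when the error increased.
    k *= 1.05 if error < prev_error else 0.95
    prev_error = error

print(f"final error={error:.4f}, final slope k={k:.2f}")
```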
An Algorithmic Introduction to Clustering
A Preprint. Bernardo Gonzalez, Department of Computer Science and Engineering, UCSC. June 11, 2020.

The purpose of this document is to provide an easy introductory guide to clustering algorithms. Basic knowledge of and exposure to probability (random variables, conditional probability, Bayes' theorem, independence, Gaussian distribution), matrix calculus (matrix and vector derivatives), linear algebra and analysis of algorithms (specifically, time-complexity analysis) is assumed. Starting with Gaussian Mixture Models (GMM), this guide visits different algorithms, like the well-known k-means, DBSCAN and Spectral Clustering (SC) algorithms, in a connected and hopefully understandable way for the reader. The first three sections (Introduction, GMM and k-means) are based on [1]. The fourth section (SC) is based on [2] and [5]. The fifth section (DBSCAN) is based on [6]. The sixth section (Mean Shift) is, to the best of the author's knowledge, original work. The seventh section is dedicated to conclusions and future work.

Traditionally, these five algorithms are considered completely unrelated, and they are assigned to different families of clustering algorithms (a k-means sketch follows this list):

- Model-based algorithms: the data are viewed as coming from a mixture of probability distributions, each of which represents a different cluster. A Gaussian Mixture Model is considered a member of this family.
- Centroid-based algorithms: any data point in a cluster is represented by the central vector of that cluster, which need not be a part of the dataset. k-means is considered a member of this family.
- Graph-based algorithms: the data is represented using a graph, and the clustering procedure leverages graph-theory tools to create a partition of this graph.
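The promised k-means sketch, to make the centroid-based family concrete (the toy data and choice of k are assumptions):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-centroid assignment and mean update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: index of the nearest centroid for every point.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its cluster.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break                          # converged
        centroids = new_centroids
    return labels, centroids

X = np.vstack([np.random.default_rng(1).normal(loc=c, size=(50, 2))
               for c in (-3, 0, 3)])       # three toy clusters
labels, centroids = kmeans(X, k=3)
print(centroids.round(2))
```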
Layer-Constrained Variational Autoencoding Kernel Density Estimation Model for Anomaly Detection
Peng Lv, Yanwei Yu, Yangyang Fan, Xianfeng Tang, Xiangrong Tong. Knowledge-Based Systems 196 (2020) 105753. Contents lists available at ScienceDirect; journal homepage: www.elsevier.com/locate/knosys. Article history: received 16 October 2019; revised 5 March 2020; accepted 7 March 2020; available online 10 March 2020. Keywords: anomaly detection; variational autoencoder; kernel density estimation.

Affiliations: (a) Department of Computer Science and Technology, Ocean University of China, Qingdao, Shandong 266100, China; (b) School of Computer and Control Engineering, Yantai University, Yantai, Shandong 264005, China; (c) Shanghai Key Lab of Advanced High-Temperature Materials and Precision Forming, Shanghai Jiao Tong University, Shanghai 200240, China; (d) College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA.

Abstract: Unsupervised techniques typically rely on the probability density distribution of the data to detect anomalies, where objects with low probability density are considered to be abnormal. However, modeling the density distribution of high-dimensional data is known to be hard, making the problem of detecting anomalies from high-dimensional data challenging. The state-of-the-art methods solve this problem by first applying dimension-reduction techniques to the data and then detecting anomalies in the low-dimensional space. Unfortunately, the low-dimensional space does not necessarily preserve the density distribution of the original high-dimensional data. This jeopardizes the effectiveness of anomaly detection. In this work, we propose a novel high-dimensional anomaly detection method called LAKE.
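A minimal sketch of the density-based principle LAKE builds on: fit a kernel density estimate on representations of normal data and flag points whose estimated density falls below a threshold. This is generic Gaussian KDE scoring, not the LAKE architecture; the bandwidth and quantile are assumptions:

```python
import numpy as np

def kde_log_density(x, train, bandwidth=0.5):
    """Gaussian KDE log-density of point x given training points."""
    d = train.shape[1]
    sq = np.sum((train - x) ** 2, axis=1)
    log_kernels = -sq / (2 * bandwidth**2) - d * np.log(bandwidth * np.sqrt(2 * np.pi))
    return np.logaddexp.reduce(log_kernels) - np.log(len(train))

rng = np.random.default_rng(0)
normal_data = rng.normal(size=(500, 2))          # stand-in for learned codes
scores = np.array([kde_log_density(x, normal_data) for x in normal_data])
threshold = np.quantile(scores, 0.05)            # flag the lowest-density 5%

outlier = np.array([6.0, 6.0])                   # far from the training mass
print("outlier flagged:", kde_log_density(outlier, normal_data) < threshold)
```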