Generalization Error Bounds for Deep Unfolding RNNs


Boris Joukovsky (1,2), Tanmoy Mukherjee (1,2), Huynh Van Luong (1,2), Nikos Deligiannis (1,2)
1 Department of Electronics and Informatics, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, Belgium
2 imec, Kapeldreef 75, B-3001 Leuven, Belgium

Accepted for the 37th Conference on Uncertainty in Artificial Intelligence (UAI 2021).

Abstract

Recurrent Neural Networks (RNNs) are powerful models with the ability to model sequential data. However, they are often viewed as black boxes and lack interpretability. Deep unfolding methods take a step towards interpretability by designing deep neural networks as learned variations of iterative optimization algorithms that solve various signal processing tasks. In this paper, we explore theoretical aspects of deep unfolding RNNs in terms of their generalization ability. Specifically, we derive generalization error bounds for a class of deep unfolding RNNs via Rademacher complexity analysis. To our knowledge, these are the first generalization bounds proposed for deep unfolding RNNs. We show theoretically that our bounds are tighter than similar ones for other recent RNNs, in terms of the number of timesteps. By training models in a classification setting, we demonstrate that deep unfolding RNNs can outperform traditional RNNs in standard sequence classification tasks. These experiments allow us to relate the empirical generalization error to the theoretical bounds. In particular, we show that over-parametrized deep unfolding models like reweighted-RNN achieve tight theoretical error bounds with minimal decrease in accuracy, when trained with explicit regularization.

1 INTRODUCTION

The past few years have seen an undeniable success of Recurrent Neural Networks (RNNs) in applications ranging from machine translation [Bahdanau et al., 2015] and image captioning [Karpathy and Fei-Fei, 2017] to speech processing [Oord et al., 2016]. Despite their impressive performance, existing approaches are typically designed based on heuristics of the task and rely on engineering experience; as a result, such models lack theoretical understanding and interpretability. A step towards post-hoc interpretability of RNNs was made by Karpathy et al. [2015], who observed the advantages of the internal states of the long short-term memory (LSTM) [Hochreiter and Schmidhuber, 1997] over traditional RNNs in modelling interpretable patterns in the input data; this study was further complemented by Goldberg [2015]. Subsequent works in explainable deep learning have also proposed methods to explain the decision-making process of a trained LSTM for a given prediction, in terms of the most influential inputs [Li et al., 2016, Arras et al., 2019].

Efforts toward a theoretical understanding of RNNs also involve the derivation of bounds on the generalization error, i.e., the difference between the empirical loss and the expected loss. Such bounds rely on measures such as the Rademacher complexity [Bartlett and Mendelson, 2002], the PAC-Bayes theory [McAllester, 2003], algorithmic stability [Bousquet and Elisseeff, 2002] and robustness [Xu and Mannor, 2012], and aim at understanding how accurately a model is able to generalise to unseen data in relation to the optimization algorithm used for training, the network architecture, or the underlying data structure.

Generalization error bounds (GEBs) for RNN models were studied by Arora et al. [2018], Wei and Ma [2019] and Akpinar et al. [2019]. The theoretical result by Arora et al. [2018], concerning low-dimensional embeddings used with LSTM models, was derived for classification tasks. That work showed an interesting property of embeddings such as GloVe and word2vec: forming a good sensing matrix for text leads to better representations than those obtained with random matrices.
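For reference, the quantities discussed above can be written in a standard form (this follows common learning-theory formulations, e.g. Bartlett and Mendelson [2002], and is not quoted from the paper itself). Given a sample S = {(x_i, y_i)}_{i=1}^m drawn i.i.d. from a distribution D, a hypothesis f from a class F, and a loss bounded in [0, 1]:

    \mathrm{GE}(f) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(f(x),y)\big] - \frac{1}{m}\sum_{i=1}^{m}\ell\big(f(x_i),y_i\big),

    \widehat{\mathfrak{R}}_S(\mathcal{G}) = \mathbb{E}_{\sigma}\Big[\sup_{g\in\mathcal{G}}\frac{1}{m}\sum_{i=1}^{m}\sigma_i\, g(x_i,y_i)\Big], \qquad \sigma_1,\dots,\sigma_m \ \text{i.i.d. uniform on } \{-1,+1\},

    \text{and, with probability at least } 1-\delta \text{ over } S:\quad \mathrm{GE}(f) \le 2\,\widehat{\mathfrak{R}}_S(\ell\circ\mathcal{F}) + 3\sqrt{\frac{\ln(2/\delta)}{2m}} \quad \text{for all } f\in\mathcal{F}.

Bounds of this type are what the paper derives: the Rademacher complexity of the class of unfolded RNNs is bounded in terms of the network architecture, which in turn bounds the generalization error.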
The data-dependent sample complexity of deep neural networks was studied by Wei and Ma [2019]. Unlike existing Rademacher complexity bounds that depend on norms of the weight matrices and depend exponentially on the model depth [Bartlett and Mendelson, 2002, Dziugaite and Roy, 2017, Neyshabur et al., 2018], the study by Wei and Ma [2019] derived tighter Rademacher bounds by considering additional data-dependent properties controlling the norms of the hidden layers and the norms of the Jacobians of each layer with respect to all previous layers. The derivation methods by Wei and Ma [2019] were extended to provide GEBs for RNNs that scale polynomially with respect to the depth of the model. Specifically, generalization abilities were explored by Akpinar et al. [2019], where explicit sample complexity bounds for single- and multi-layer RNNs were derived.

The focus of this work is in studying how model interpretability (by means of designing networks based on deep unfolding) interacts with model generalization capacity. Deep unfolding methods design deep models as unrolled iterations of optimization algorithms; a key result in this direction is the learned iterative shrinkage thresholding algorithm (LISTA) [Gregor and LeCun, 2010], which solves the sparse coding problem. Iterative optimization algorithms are usually highly interpretable because they are developed by modeling the physical processes underlying the problem and/or capturing prior domain knowledge [Lucas et al., 2018]. Hence, deep networks designed by deep unfolding capture domain knowledge and promote model-based structure in the data; in other words, such networks can be naturally interpreted as a parameter-optimized algorithm [Monga et al., 2019].
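To make the deep unfolding principle concrete, the following is a minimal sketch in Python/NumPy (illustrative code, not taken from the paper) of how one iteration of ISTA for sparse coding becomes one learnable LISTA-style layer; the dictionary D, step size alpha and regularization weight lam are placeholder choices.

    import numpy as np

    def soft_threshold(z, theta):
        # Element-wise soft-thresholding: the proximal operator of the l1 norm.
        return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

    def ista_step(h, x, D, alpha, lam):
        # One classical ISTA iteration for min_h 0.5*||x - D h||^2 + lam*||h||_1.
        grad = D.T @ (D @ h - x)
        return soft_threshold(h - alpha * grad, alpha * lam)

    def lista_layer(h, x, W, V, theta):
        # One unfolded (LISTA-style) layer: W, V and theta are learned
        # parameters instead of fixed functions of D, alpha and lam.
        return soft_threshold(W @ x + V @ h, theta)

    # Tiny usage example with random data (illustration only).
    rng = np.random.default_rng(0)
    n, m = 20, 50                                      # signal dim, code dim
    D = rng.standard_normal((n, m))
    x = rng.standard_normal(n)
    h = np.zeros(m)
    h_ista = ista_step(h, x, D, alpha=0.01, lam=0.1)
    W, V = 0.01 * D.T, np.eye(m) - 0.01 * (D.T @ D)    # ISTA-equivalent initialization
    h_lista = lista_layer(h, x, W, V, theta=0.001)     # matches h_ista up to floating point

Stacking several such layers and training W, V and theta end-to-end yields LISTA; deep unfolding RNNs such as SISTA-RNN extend this pattern to sequential data.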
Deep unfolding has been used to develop interpretable RNNs: SISTA-RNN [Wisdom et al., 2017] is an extension of LISTA that solves a sequential signal reconstruction task based on sparse modelling with the aid of side information. Alternatively, Le et al. [2019] designed ℓ1-ℓ1-RNN by unfolding a proximal method that solves an ℓ1-ℓ1 minimization problem [Mota et al., 2017]. In our recent work [Luong et al., 2021], we proposed reweighted-RNN, which unfolds a reweighted version of the ℓ1-ℓ1 minimization problem and incorporates additional weights so as to increase the model expressivity. Deep unfolding RNNs excel in solving the underlying signal reconstruction tasks, outperforming traditional RNN baselines while having a substantially lower parameter count than LSTM or gated recurrent unit (GRU) [Cho et al., 2014] models; however, their ability to leverage the sparse structure of data as a means to solve traditional RNN tasks (e.g., sequence classification) is relatively unexplored. Furthermore, despite the progress in deep unfolding models, their generalization ability has received no such attention; in particular, no work exists studying the GEBs of deep unfolding RNN models. In this paper, we study theoretical aspects of deep unfolding RNNs by means of their GEBs. We also benchmark deep unfolding RNNs in classification problems and relate the empirical generalization error to the theoretical one.

The contributions of this work are as follows:

• We derive generalization error bounds (GEBs) for deep unfolding RNNs by means of Rademacher complexity analysis, taking as examples the reweighted-RNN model [Luong et al., 2021] and the ℓ1-ℓ1-RNN [Le et al., 2019]. We also present an extension of the GEBs to the case where the RNNs are used in a classification setting. We show theoretically that the proposed bounds are tighter, in terms of the number of time steps T, than those of other recent lightweight RNNs with generalization guarantees, namely SpectralRNN [Zhang et al., 2018] and FastGRNN [Kusupati et al., 2018].

• We assess the performance of deep unfolding RNNs in classification settings, namely in speech command detection and language modelling tasks. For the first task, our results show that reweighted-RNN outperforms other lightweight RNNs, and that deep unfolding RNNs are competitive with traditional RNN baselines (i.e., vanilla RNN, LSTM and GRU) while being smaller in size. For the second task, we observe a significant improvement of reweighted-RNN over all other models.

• By taking speech command detection as an example, we show that the proposed GEB for reweighted-RNN is tight and agrees with the empirical generalization gap. This can be achieved when the model is sufficiently regularized, while maintaining high classification accuracy.

The remainder of this paper is as follows: Section 2 reviews traditional stacked RNN models, followed by an introduction to deep unfolding RNNs. Section 3 presents the proposed GEBs for deep unfolding RNNs, which are obtained by studying the complexity of their latent representation stage. The bound is then extended to the classification problem. In Section 4, we experimentally compare reweighted-RNN to other deep unfolding and traditional RNN models on classification tasks. We also evaluate the GEB experimentally and relate the theoretical aspects to empirical observations.

2 BACKGROUND TO RNN MODELS

2.1 STACKED RNNS

For each time step t = 1, ..., T, the vanilla RNN recursively updates the latent representation h_t through a linear transformation of the input sample x_t and the previous state h_{t−1}, followed by a non-linear activation σ(·). To achieve higher expressivity, one may use a stacked vanilla RNN [Pascanu et al., 2014] by juxtaposing multiple RNN layers. In this setting, the representation h_t^{(l)} at layer l and time step t is computed by

    h_t^{(l)} = σ(V_1 h_{t−1}^{(l)} + U_1 x_t),            if l = 1,
    h_t^{(l)} = σ(W_l h_t^{(l−1)} + V_l h_{t−1}^{(l)}),    if l > 1,        (1)

where U_1, W_l, V_l are the weight matrices learned per layer.
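As a concrete illustration of Eq. (1), here is a minimal forward-pass sketch in Python/NumPy (a self-contained illustration; the tanh activation, dimensions and random initialization are assumptions, not choices made in the paper).

    import numpy as np

    def stacked_rnn_forward(X, U1, W, V, sigma=np.tanh):
        # Forward pass of an L-layer stacked vanilla RNN following Eq. (1).
        # X: (T, n_in) input sequence x_1..x_T; U1: (d, n_in) input matrix of layer 1;
        # W: list of L matrices (W[0] unused), each (d, d); V: list of L matrices, each (d, d).
        # Code layer index l = 0..L-1 corresponds to l = 1..L in Eq. (1).
        T, L, d = X.shape[0], len(V), V[0].shape[0]
        H = np.zeros((L, T, d))                  # H[l, t] stores h_t^{(l+1)}
        h_prev = np.zeros((L, d))                # zero initial states h_0^{(l)}
        for t in range(T):
            for l in range(L):
                if l == 0:                       # first layer: driven by the input x_t
                    h = sigma(V[0] @ h_prev[0] + U1 @ X[t])
                else:                            # deeper layers: driven by the layer below
                    h = sigma(W[l] @ H[l - 1, t] + V[l] @ h_prev[l])
                H[l, t] = h
            h_prev = H[:, t]
        return H

    # Illustrative usage with random weights.
    rng = np.random.default_rng(0)
    T, n_in, d, L = 5, 8, 16, 3
    X = rng.standard_normal((T, n_in))
    U1 = 0.1 * rng.standard_normal((d, n_in))
    W = [None] + [0.1 * rng.standard_normal((d, d)) for _ in range(L - 1)]
    V = [0.1 * rng.standard_normal((d, d)) for _ in range(L)]
    H = stacked_rnn_forward(X, U1, W, V)         # shape (L, T, d) = (3, 5, 16)

The deep unfolding RNNs analysed in the paper keep this layered, recurrent structure but replace the generic update with unrolled iterations of a sparse-coding solver.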
Recommended publications
  • Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
    Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
    Zeyuan Allen-Zhu ([email protected], Microsoft Research AI), Yuanzhi Li ([email protected], Stanford University), Yingyu Liang ([email protected], University of Wisconsin-Madison)
    November 12, 2018 (version 6)*

    Abstract. The fundamental learning theory behind neural networks remains largely open. What classes of functions can neural networks actually learn? Why doesn't the trained network overfit when it is overparameterized? In this work, we prove that overparameterized neural networks can learn some notable concept classes, including two and three-layer networks with fewer parameters and smooth activations. Moreover, the learning can be simply done by SGD (stochastic gradient descent) or its variants in polynomial time using polynomially many samples. The sample complexity can also be almost independent of the number of parameters in the network. On the technique side, our analysis goes beyond the so-called NTK (neural tangent kernel) linearization of neural networks in prior works. We establish a new notion of quadratic approximation of the neural network (that can be viewed as a second-order variant of NTK), and connect it to the SGD theory of escaping saddle points.

    arXiv:1811.04918v6 [cs.LG] 1 Jun 2020. *V1 appears on this date, V2/V3/V4 polish writing and parameters, V5 adds experiments, and V6 reflects our conference camera ready version. Authors sorted in alphabetical order. We would like to thank Greg Yang and Sebastien Bubeck for many enlightening conversations. Y. Liang was supported in part by FA9550-18-1-0166, and would also like to acknowledge that support for this research was provided by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin-Madison with funding from the Wisconsin Alumni Research Foundation.
  • Analysis of Perceptron-Based Active Learning
    Journal of Machine Learning Research 10 (2009) 281-299. Submitted 12/07; Revised 12/08; Published 2/09
    Analysis of Perceptron-Based Active Learning
    Sanjoy Dasgupta ([email protected]), Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093-0404, USA
    Adam Tauman Kalai ([email protected]), Microsoft Research, Office 14063, One Memorial Drive, Cambridge, MA 02142, USA
    Claire Monteleoni ([email protected]), Center for Computational Learning Systems, Columbia University, Suite 850, 475 Riverside Drive, MC 7717, New York, NY 10115, USA
    Editor: Manfred Warmuth

    Abstract. We start by showing that in an active learning setting, the Perceptron algorithm needs Ω(1/ε²) labels to learn linear separators within generalization error ε. We then present a simple active learning algorithm for this problem, which combines a modification of the Perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the unit sphere, we show that our algorithm reaches generalization error ε after asking for just Õ(d log(1/ε)) labels. This exponential improvement over the usual sample complexity of supervised learning had previously been demonstrated only for the computationally more complex query-by-committee algorithm.
    Keywords: active learning, perceptron, label complexity bounds, online learning

    1. Introduction. In many machine learning applications, unlabeled data is abundant but labeling is expensive. This distinction is not captured in standard models of supervised learning, and has motivated the field of active learning, in which the labels of data points are initially hidden, and the learner must pay for each label it wishes revealed.
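    As a rough illustration of the idea in this abstract (query a label only when the current linear separator is uncertain about a point), here is a generic margin-based sketch in Python/NumPy; the fixed query threshold and the standard perceptron update are simplifications for illustration, not the authors' exact adaptive rule.

        import numpy as np

        def active_perceptron(X, label_oracle, query_threshold=0.1):
            # Generic margin-based active learning with a perceptron-style update.
            # X: (n, d) unlabeled points (roughly unit norm); label_oracle(i) returns
            # the +1/-1 label of X[i] and is only called for low-margin points.
            # Illustration only; not the exact query rule or update of the paper.
            n, d = X.shape
            w = np.zeros(d)
            labels_used = 0
            for i in range(n):
                x = X[i]
                margin = float(w @ x)
                if np.any(w) and abs(margin) >= query_threshold:
                    continue                      # confident: skip without querying
                y = label_oracle(i)               # pay for one label
                labels_used += 1
                if y * margin <= 0:               # mistake: perceptron update
                    w = w + y * x
            return w, labels_used

        # Illustrative usage on synthetic linearly separable data.
        rng = np.random.default_rng(0)
        w_true = rng.standard_normal(10)
        w_true /= np.linalg.norm(w_true)
        X = rng.standard_normal((2000, 10))
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        w_hat, n_labels = active_perceptron(X, lambda i: 1 if X[i] @ w_true > 0 else -1)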
  • The Sample Complexity of Agnostic Learning Under Deterministic Labels
    JMLR: Workshop and Conference Proceedings vol 35:1–16, 2014
    The sample complexity of agnostic learning under deterministic labels
    Shai Ben-David ([email protected]), Cheriton School of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, CANADA
    Ruth Urner ([email protected]), School of Computer Science, Georgia Institute of Technology, Atlanta, Georgia 30332, USA

    Abstract. With the emergence of Machine Learning tools that allow handling data with a huge number of features, it becomes reasonable to assume that, over the full set of features, the true labeling is (almost) fully determined. That is, the labeling function is deterministic, but not necessarily a member of some known hypothesis class. However, agnostic learning of deterministic labels has so far received little research attention. We investigate this setting and show that it displays a behavior that is quite different from that of the fundamental results of the common (PAC) learning setups. First, we show that the sample complexity of learning a binary hypothesis class (with respect to deterministic labeling functions) is not fully determined by the VC-dimension of the class. For any d, we present classes of VC-dimension d that are learnable from Õ(d/ε) many samples and classes that require samples of size Ω(d/ε²). Furthermore, we show that in this setup, there are classes for which any proper learner has suboptimal sample complexity. While the class can be learned with sample complexity Õ(d/ε), any proper (and therefore, any ERM) algorithm requires Ω(d/ε²) samples. We provide combinatorial characterizations of both phenomena, and further analyze the utility of unlabeled samples in this setting.
  • Sinkhorn Autoencoders
    Sinkhorn AutoEncoders
    Giorgio Patrini* (UvA Bosch Delta Lab), Rianne van den Berg* (University of Amsterdam), Patrick Forré (University of Amsterdam), Marcello Carioni (KFU Graz), Samarth Bhargav (University of Amsterdam), Max Welling (University of Amsterdam, CIFAR), Tim Genewein (Bosch Center for Artificial Intelligence), Frank Nielsen (École Polytechnique, Sony CSL)

    Abstract. Optimal transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show that minimizing the p-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the p-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error. We also identify the role of its trade-off hyperparameter as the capacity of the generator: its Lipschitz constant. Moreover, we prove that optimizing the encoder over any class of universal approximators, such as deterministic neural networks,

    1 INTRODUCTION. Unsupervised learning aims at finding the underlying rules that govern a given data distribution P_X. It can be approached by learning to mimic the data generation process, or by finding an adequate representation of the data. Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) belong to the former class, by learning to transform noise into an implicit distribution that matches the given one. AutoEncoders (AE) (Hinton and Salakhutdinov, 2006) are of the latter type, by learning a representation that maximizes the mutual information between the data and its reconstruction, subject to an information bottleneck. Variational AutoEncoders (VAE) (Kingma and Welling, 2013; Rezende et al., 2014), provide both a generative model, i.e.
  • Data-Dependent Sample Complexity of Deep Neural Networks Via Lipschitz Augmentation
    Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation
    Colin Wei (Computer Science Department, Stanford University, [email protected]), Tengyu Ma (Computer Science Department, Stanford University, [email protected])

    Abstract. Existing Rademacher complexity bounds for neural networks rely only on norm control of the weight matrices and depend exponentially on depth via a product of the matrix norms. Lower bounds show that this exponential dependence on depth is unavoidable when no additional properties of the training data are considered. We suspect that this conundrum comes from the fact that these bounds depend on the training data only through the margin. In practice, many data-dependent techniques such as Batchnorm improve the generalization performance. For feedforward neural nets as well as RNNs, we obtain tighter Rademacher complexity bounds by considering additional data-dependent properties of the network: the norms of the hidden layers of the network, and the norms of the Jacobians of each layer with respect to all previous layers. Our bounds scale polynomially in depth when these empirical quantities are small, as is usually the case in practice. To obtain these bounds, we develop general tools for augmenting a sequence of functions to make their composition Lipschitz and then covering the augmented functions. Inspired by our theory, we directly regularize the network's Jacobians during training and empirically demonstrate that this improves test performance.

    1 Introduction. Deep networks trained in
  • Nearly Tight Sample Complexity Bounds for Learning Mixtures of Gaussians Via Sample Compression Schemes∗
    Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes*
    Hassan Ashtiani (Department of Computing and Software, McMaster University, and Vector Institute, ON, Canada, [email protected]), Shai Ben-David (School of Computer Science, University of Waterloo, Waterloo, ON, Canada, [email protected]), Nicholas J. A. Harvey (Department of Computer Science, University of British Columbia, Vancouver, BC, Canada, [email protected]), Christopher Liaw (Department of Computer Science, University of British Columbia, Vancouver, BC, Canada, [email protected]), Abbas Mehrabian (School of Computer Science, McGill University, Montréal, QC, Canada, [email protected]), Yaniv Plan (Department of Mathematics, University of British Columbia, Vancouver, BC, Canada, [email protected])

    Abstract. We prove that Θ̃(kd²/ε²) samples are necessary and sufficient for learning a mixture of k Gaussians in R^d, up to error ε in total variation distance. This improves both the known upper bounds and lower bounds for this problem. For mixtures of axis-aligned Gaussians, we show that Õ(kd/ε²) samples suffice, matching a known lower bound. The upper bound is based on a novel technique for distribution learning based on a notion of sample compression. Any class of distributions that allows such a sample compression scheme can also be learned with few samples. Moreover, if a class of distributions has such a compression scheme, then so do the classes of products and mixtures of those distributions. The core of our main result is showing that the class of Gaussians in R^d has a small-sized sample compression.

    1 Introduction. Estimating distributions from observed data is a fundamental task in statistics that has been studied for over a century.
  • Sample Complexity Results
    10-601 Machine Learning, Maria-Florina Balcan, Spring 2015
    Generalization Abilities: Sample Complexity Results.

    The ability to generalize beyond what we have seen in the training phase is the essence of machine learning, essentially what makes machine learning, machine learning. In these notes we describe some basic concepts and the classic formalization that allows us to talk about these important concepts in a precise way.

    Distributional Learning. The basic idea of the distributional learning setting is to assume that examples are being provided from a fixed (but perhaps unknown) distribution over the instance space. The assumption of a fixed distribution gives us hope that what we learn based on some training data will carry over to new test data we haven't seen yet. A nice feature of this assumption is that it provides us a well-defined notion of the error of a hypothesis with respect to a target concept. Specifically, in the distributional learning setting (captured by the PAC model of Valiant and the Statistical Learning Theory framework of Vapnik) we assume that the input to the learning algorithm is a set of labeled examples S: (x_1, y_1), ..., (x_m, y_m), where the x_i are drawn i.i.d. from some fixed but unknown distribution D over the instance space X and are labeled by some target concept c*, so y_i = c*(x_i). Here the goal is to do optimization over the given sample S in order to find a hypothesis h : X → {0, 1} that has small error over the whole distribution D. The true error of h with respect to a target concept c* and the underlying distribution D is defined as err(h) = Pr_{x∼D}(h(x) ≠ c*(x)). (Pr_{x∼D}(A) means the probability of event A given that x is selected according to distribution D.) We denote by err_S(h) = Pr_{x∼S}(h(x) ≠ c*(x)) = (1/m) Σ_{i=1}^{m} I[h(x_i) ≠ c*(x_i)] the empirical error of h over the sample S (that is, the fraction of examples in S misclassified by h).
  • Benchmarking Model-Based Reinforcement Learning
    Benchmarking Model-Based Reinforcement Learning
    Tingwu Wang 1,2, Xuchan Bao 1,2, Ignasi Clavera 3, Jerrick Hoang 1,2, Yeming Wen 1,2, Eric Langlois 1,2, Shunshi Zhang 1,2, Guodong Zhang 1,2, Pieter Abbeel 3 & Jimmy Ba 1,2
    1 University of Toronto  2 Vector Institute  3 UC Berkeley
    {tingwuwang,jhoang,ywen,edl,gdzhang}@cs.toronto.edu, {xuchan.bao,matthew.zhang}@mail.utoronto.ca, [email protected], [email protected], [email protected]

    Abstract. Model-based reinforcement learning (MBRL) is widely seen as having the potential to be significantly more sample efficient than model-free RL. However, research in model-based RL has not been very standardized. It is fairly common for authors to experiment with self-designed environments, and there are several separate lines of research, which are sometimes closed-sourced or not reproducible. Accordingly, it is an open question how these various existing MBRL algorithms perform relative to each other. To facilitate research in MBRL, in this paper we gather a wide collection of MBRL algorithms and propose over 18 benchmarking environments specially designed for MBRL. We benchmark these algorithms with unified problem settings, including noisy environments. Beyond cataloguing performance, we explore and unify the underlying algorithmic differences across MBRL algorithms. We characterize three key research challenges for future MBRL research: the dynamics bottleneck, the planning horizon dilemma, and the early-termination dilemma. Finally, to maximally facilitate future research on MBRL, we open-source our benchmark in http://www.cs.toronto.edu/~tingwuwang/mbrl.html.

    1 Introduction. Reinforcement learning (RL) algorithms are most commonly classified in two categories: model-free RL (MFRL), which directly learns a value function or a policy by interacting with the environment, and model-based RL (MBRL), which uses interactions with the environment to learn a model of it.
  • Theory of Deep Learning
    Contributors: Raman Arora, Sanjeev Arora, Joan Bruna, Nadav Cohen, Simon Du, Rong Ge, Suriya Gunasekar, Chi Jin, Jason Lee, Tengyu Ma, Behnam Neyshabur, Zhao Song
    Theory of Deep Learning

    Contents
    1 Basic Setup and some math notions 13
      1.1 List of useful math facts 14
        1.1.1 Probability tools 14
        1.1.2 Singular Value Decomposition 15
    2 Basics of Optimization 17
      2.1 Gradient descent 17
        2.1.1 Formalizing the Taylor Expansion 18
        2.1.2 Descent lemma for gradient descent 18
      2.2 Stochastic gradient descent 19
      2.3 Accelerated Gradient Descent 19
      2.4 Local Runtime Analysis of GD 20
        2.4.1 Pre-conditioners 21
    3 Backpropagation and its Variants 23
      3.1 Problem Setup 23
        3.1.1 Multivariate Chain Rule 25
        3.1.2 Naive feedforward algorithm (not efficient!) 26
      3.2 Backpropagation (Linear Time) 26
      3.3 Auto-differentiation 27
      3.4 Notable Extensions 28
        3.4.1 Hessian-vector product in linear time: Pearlmutter's trick 29
    4 Basics of generalization theory 31
        4.0.1 Occam's razor formalized for ML 31
        4.0.2 Motivation for generalization theory 32
      4.1 Some simple bounds on generalization error 32
      4.2 Data dependent complexity measures 34
        4.2.1 Rademacher Complexity 34
        4.2.2 Alternative Interpretation: Ability to correlate with random labels 35
      4.3 PAC-Bayes bounds 35
    5 Advanced Optimization notions 39
    6 Algorithmic Regularization 41
      6.1 Linear models in regression: squared loss 42
        6.1.1 Geometry induced by updates of local search algorithms 43
        6.1.2 Geometry induced by parameterization of model class 46
      6.2 Matrix factorization 47
      6.3 Linear Models in Classification 47
        6.3.1 Gradient Descent
  • A Sample Complexity Separation Between Non-Convex and Convex Meta-Learning
    A Sample Complexity Separation between Non-Convex and Convex Meta-Learning
    Nikunj Saunshi 1, Yi Zhang 1, Mikhail Khodak 2, Sanjeev Arora 1,3

    Abstract. One popular trend in meta-learning is to learn from many training tasks a common initialization that a gradient-based method can use to solve a new task with few samples. The theory of meta-learning is still in its early stages, with several recent learning-theoretic analyses of methods such as Reptile (Nichol et al., 2018) being for convex models. This work shows that convex-case analysis might be insufficient to understand the success of meta-learning, and that even for non-convex models it is important to look inside the optimization black-box, specifically at properties of the optimization trajectory. We construct a simple meta-learning instance that captures the problem of one-dimensional subspace learning.

    complexity of an unseen but related test task. Although there is a long history of successful methods in meta-learning and the related areas of multi-task and lifelong learning (Evgeniou & Pontil, 2004; Ruvolo & Eaton, 2013), recent approaches have been developed with the diversity and scale of modern applications in mind. This has given rise to simple, model-agnostic methods that focus on learning a good initialization for some gradient-based method such as stochastic gradient descent (SGD), to be run on samples from a new task (Finn et al., 2017; Nichol et al., 2018). These methods have found widespread applications in a variety of areas such as computer vision (Nichol et al., 2018), reinforcement learning (Finn et al., 2017), and federated learning (McMahan et al., 2017). Inspired by their popularity, several recent learning-theoretic
  • A Simple Algorithm for Semi-Supervised Learning with Improved Generalization Error Bound
    A Simple Algorithm for Semi-supervised Learning with Improved Generalization Error Bound
    Ming Ji*‡ ([email protected]), Tianbao Yang*† ([email protected]), Binbin Lin♮ ([email protected]), Rong Jin† ([email protected]), Jiawei Han‡ ([email protected])
    ‡ Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
    † Department of Computer Science and Engineering, Michigan State University, East Lansing, MI, 48824, USA
    ♮ State Key Lab of CAD&CG, College of Computer Science, Zhejiang University, Hangzhou, 310058, China
    * Equal contribution

    Abstract. In this work, we develop a simple algorithm for semi-supervised regression. The key idea is to use the top eigenfunctions of the integral operator derived from both labeled and unlabeled examples as the basis functions and learn the prediction function by a simple linear regression. We show that under appropriate assumptions about the integral operator, this approach is able to achieve an improved regression error bound better than existing bounds of supervised learning. We also veri-

    which states that the prediction function lives in a low dimensional manifold of the marginal distribution P_X. It has been pointed out by several studies (Lafferty & Wasserman, 2007; Nadler et al., 2009) that the manifold assumption by itself is insufficient to reduce the generalization error bound of supervised learning. However, on the other hand, it was found in (Niyogi, 2008) that for certain learning problems, no supervised learner can learn effectively, while a manifold based learner (that knows the manifold or learns it from unlabeled examples) can learn well with relatively few labeled examples.
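    A small sketch of the key idea in this abstract (kernel eigenvectors over labeled plus unlabeled data as a basis, followed by ordinary least squares on the labeled subset); the Gaussian kernel, its bandwidth and the number of eigenfunctions below are arbitrary illustrative choices, not those of the paper.

        import numpy as np

        def eigenbasis_regression(X_lab, y_lab, X_unlab, k=10, bandwidth=1.0):
            # Semi-supervised regression via top eigenvectors of a kernel matrix.
            # Builds a Gaussian kernel over labeled + unlabeled points, extracts the
            # top-k eigenvectors as basis functions, and fits them to the labels by
            # least squares. Illustrative sketch of the idea only.
            X = np.vstack([X_lab, X_unlab])
            n_lab = X_lab.shape[0]
            # Gaussian (RBF) kernel over all points: an empirical stand-in for the
            # integral operator mentioned in the abstract.
            sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
            eigvals, eigvecs = np.linalg.eigh(K)          # ascending eigenvalue order
            basis = eigvecs[:, -k:]                       # top-k eigenvectors
            # Linear regression of the labels on the basis, using labeled rows only.
            coef, *_ = np.linalg.lstsq(basis[:n_lab], y_lab, rcond=None)
            return basis @ coef                           # predictions for all points

        # Illustrative usage on a 1-D toy problem.
        rng = np.random.default_rng(0)
        X_lab = rng.uniform(-3, 3, size=(20, 1))
        y_lab = np.sin(X_lab[:, 0]) + 0.1 * rng.standard_normal(20)
        X_unlab = rng.uniform(-3, 3, size=(200, 1))
        preds = eigenbasis_regression(X_lab, y_lab, X_unlab, k=8)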
  • Statistical Mechanics Methods for Discovering Knowledge from Production-Scale Neural Networks
    Statistical Mechanics Methods for Discovering Knowledge from Production-Scale Neural Networks
    Charles H. Martin* and Michael W. Mahoney†
    Tutorial at ACM-KDD, August 2019
    * Calculation Consulting, [email protected]
    † ICSI and Dept of Statistics, UC Berkeley, https://www.stat.berkeley.edu/~mmahoney/

    Outline
    1 Prehistory and History
      Older Background
      A Very Simple Deep Learning Model
      More Immediate Background
    2 Preliminary Results
      Regularization and the Energy Landscape
      Preliminary Empirical Results
      Gaussian and Heavy-tailed Random Matrix Theory
    3 Developing a Theory for Deep Learning
      More Detailed Empirical Results
      An RMT-based Theory for Deep Learning
      Tikhonov Regularization versus Heavy-tailed Regularization
    4 Validating and Using the Theory
      Varying the Batch Size: Explaining the Generalization Gap
      Using the Theory: pip install weightwatcher
      Diagnostics at Scale: Predicting Test Accuracies
    5 More General Implications and Conclusions

    Statistical Physics & Neural Networks: A Long History
    60s: J. D. Cowan, Statistical Mechanics of Neural Networks, 1967.
    70s: W. A. Little, "The existence of persistent states in the brain," Math.