ICML 2021 Paper Digests
https://www.paperdigest.org

1, TITLE: A New Representation of Successor Features for Transfer across Dissimilar Environments
http://proceedings.mlr.press/v139/abdolshah21a.html
AUTHORS: Majid Abdolshah, Hung Le, Thommen Karimpanal George, Sunil Gupta, Santu Rana, Svetha Venkatesh
HIGHLIGHT: To address this problem, we propose an approach based on successor features in which we model successor feature functions with Gaussian Processes, permitting the source successor features to be treated as noisy measurements of the target successor feature function.

2, TITLE: Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling
http://proceedings.mlr.press/v139/abeyrathna21a.html
AUTHORS: Kuruge Darshana Abeyrathna, Bimal Bhattarai, Morten Goodwin, Saeed Rahimi Gorji, Ole-Christoffer Granmo, Lei Jiao, Rupsa Saha, Rohan K Yadav
HIGHLIGHT: In this paper, we propose a novel scheme for desynchronizing the evaluation of clauses, eliminating the voting bottleneck.

3, TITLE: Debiasing Model Updates for Improving Personalized Federated Training
http://proceedings.mlr.press/v139/acar21a.html
AUTHORS: Durmus Alp Emre Acar, Yue Zhao, Ruizhao Zhu, Ramon Matas, Matthew Mattina, Paul Whatmough, Venkatesh Saligrama
HIGHLIGHT: We propose a novel method for federated learning that is customized specifically to the objective of a given edge device.

4, TITLE: Memory Efficient Online Meta Learning
http://proceedings.mlr.press/v139/acar21b.html
AUTHORS: Durmus Alp Emre Acar, Ruizhao Zhu, Venkatesh Saligrama
HIGHLIGHT: We propose a novel algorithm for online meta learning where task instances are sequentially revealed with limited supervision and a learner is expected to meta learn them in each round, so as to allow the learner to customize a task-specific model rapidly with little task-level supervision.
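Entry 1 models source successor features as noisy observations of an unknown target function via Gaussian Processes. The core mechanic, GP posterior inference under observation noise, can be sketched in plain numpy; the RBF kernel, toy data, and noise level below are illustrative choices, not the authors' construction:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel between row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

# "Source" observations: noisy measurements of an unknown target function.
X = np.array([[0.0], [1.0], [2.0]])      # inputs where source features were measured
y = np.sin(X[:, 0]) + 0.05               # noisy target-function values (toy)
noise = 0.1                              # assumed observation-noise variance

# GP posterior mean at new query points; the noise enters on the diagonal,
# which is exactly how "noisy measurements" are absorbed in GP regression.
Xq = np.array([[0.5], [1.5]])
K = rbf(X, X) + noise * np.eye(len(X))
mean = rbf(Xq, X) @ np.linalg.solve(K, y)
print(mean)
```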
5, TITLE: Robust Testing and Estimation under Manipulation Attacks
http://proceedings.mlr.press/v139/acharya21a.html
AUTHORS: Jayadev Acharya, Ziteng Sun, Huanyu Zhang
HIGHLIGHT: We study robust testing and estimation of discrete distributions in the strong contamination model.

6, TITLE: GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning
http://proceedings.mlr.press/v139/achituve21a.html
AUTHORS: Idan Achituve, Aviv Navon, Yochai Yemini, Gal Chechik, Ethan Fetaya
HIGHLIGHT: Here, we propose GP-Tree, a novel method for multi-class classification with Gaussian processes and DKL.

7, TITLE: f-Domain Adversarial Learning: Theory and Algorithms
http://proceedings.mlr.press/v139/acuna21a.html
AUTHORS: David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler
HIGHLIGHT: In this paper, we introduce a novel and general domain-adversarial framework.

8, TITLE: Towards Rigorous Interpretations: a Formalisation of Feature Attribution
http://proceedings.mlr.press/v139/afchar21a.html
AUTHORS: Darius Afchar, Vincent Guigue, Romain Hennequin
HIGHLIGHT: In this paper we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence.

9, TITLE: Acceleration via Fractal Learning Rate Schedules
http://proceedings.mlr.press/v139/agarwal21a.html
AUTHORS: Naman Agarwal, Surbhi Goel, Cyril Zhang
HIGHLIGHT: We provide some experiments to challenge conventional beliefs about stable learning rates in deep learning: the fractal schedule enables training to converge with locally unstable updates which make negative progress on the objective.

10, TITLE: A Regret Minimization Approach to Iterative Learning Control
http://proceedings.mlr.press/v139/agarwal21b.html
AUTHORS: Naman Agarwal, Elad Hazan, Anirudha Majumdar, Karan Singh
HIGHLIGHT: In this setting, we propose a new performance metric, planning regret, which replaces the standard stochastic uncertainty assumptions with worst case regret.
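Entry 9's claim that locally unstable updates can still make global progress has a classical toy illustration: on a diagonal quadratic with eigenvalues 1 and 10, cycling the step sizes 1 and 0.1 (the inverse eigenvalues) makes one step diverge locally yet cancels both error components over the full cycle. The sketch below uses this textbook inverse-eigenvalue cycle, not the paper's fractal schedule:

```python
import numpy as np

lam = np.array([1.0, 10.0])   # eigenvalues of the quadratic f(x) = 0.5 * sum(lam * x**2)
x = np.array([1.0, 1.0])
norms = [np.linalg.norm(x)]
for eta in (1.0, 0.1):        # 1/lam[0] and 1/lam[1]; the first step is unstable for the stiff mode
    x = x - eta * lam * x     # gradient step, since grad f(x) = lam * x
    norms.append(np.linalg.norm(x))
print(norms)                  # the iterate first blows up, then lands exactly at the optimum
```

After the large step the iterate is [0, -9] (the stiff coordinate is amplified ninefold), but the small step maps it exactly to the origin, so the two-step cycle converges despite the locally divergent first update.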
11, TITLE: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
http://proceedings.mlr.press/v139/agarwal21c.html
AUTHORS: Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, Himabindu Lakkaraju
HIGHLIGHT: In this work, we analyze two popular post hoc interpretation techniques: SmoothGrad, which is a gradient based method, and a variant of LIME, which is a perturbation based method.

12, TITLE: Label Inference Attacks from Log-loss Scores
http://proceedings.mlr.press/v139/aggarwal21a.html
AUTHORS: Abhinav Aggarwal, Shiva Kasiviswanathan, Zekun Xu, Oluwaseyi Feyisetan, Nathanael Teissier
HIGHLIGHT: In this paper, we investigate the problem of inferring the labels of a dataset from single (or multiple) log-loss score(s), without any other access to the dataset.

13, TITLE: Deep kernel processes
http://proceedings.mlr.press/v139/aitchison21a.html
AUTHORS: Laurence Aitchison, Adam Yang, Sebastian W. Ober
HIGHLIGHT: We define deep kernel processes in which positive definite Gram matrices are progressively transformed by nonlinear kernel functions and by sampling from (inverse) Wishart distributions.

14, TITLE: How Does Loss Function Affect Generalization Performance of Deep Learning? Application to Human Age Estimation
http://proceedings.mlr.press/v139/akbari21a.html
AUTHORS: Ali Akbari, Muhammad Awais, Manijeh Bashar, Josef Kittler
HIGHLIGHT: In summary, our main statement in this paper is: choose a stable loss function, generalize better.

15, TITLE: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting
http://proceedings.mlr.press/v139/akiyama21a.html
AUTHORS: Shunta Akiyama, Taiji Suzuki
HIGHLIGHT: In this paper, we explore theoretical analysis on training two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown teacher network through its outputs.
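Entry 11 analyzes SmoothGrad, which estimates an attribution by averaging gradients over Gaussian-perturbed copies of the input. A minimal numpy sketch on a toy model (f(x) = x**2, so the true gradient at x is 2x; the function, noise scale, and sample count are illustrative, not from the paper):

```python
import numpy as np

def grad_f(x):
    # Gradient of the toy model f(x) = x**2.
    return 2.0 * x

def smoothgrad(x, sigma=0.1, n=1000, seed=0):
    # Average the gradient over n Gaussian perturbations of the input.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=n)
    return grad_f(x + noise).mean()

sg = smoothgrad(1.0)
print(sg)   # close to the true gradient 2.0 at x = 1
```

For this linear-gradient toy the smoothed estimate is unbiased, so it concentrates around 2x as n grows; for real networks the averaging is what suppresses the high-frequency noise in raw saliency maps.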
16, TITLE: Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks
http://proceedings.mlr.press/v139/aladago21a.html
AUTHORS: Maxwell M Aladago, Lorenzo Torresani
HIGHLIGHT: In contrast to traditional weight optimization in a continuous space, we demonstrate the existence of effective random networks whose weights are never updated.

17, TITLE: A large-scale benchmark for few-shot program induction and synthesis
http://proceedings.mlr.press/v139/alet21a.html
AUTHORS: Ferran Alet, Javier Lopez-Contreras, James Koppel, Maxwell Nye, Armando Solar-Lezama, Tomas Lozano-Perez, Leslie Kaelbling, Joshua Tenenbaum
HIGHLIGHT: In this work, we propose a new way of leveraging unit tests and natural inputs for small programs as meaningful input-output examples for each sub-program of the overall program.

18, TITLE: Robust Pure Exploration in Linear Bandits with Limited Budget
http://proceedings.mlr.press/v139/alieva21a.html
AUTHORS: Ayya Alieva, Ashok Cutkosky, Abhimanyu Das
HIGHLIGHT: We consider the pure exploration problem in the fixed-budget linear bandit setting.

19, TITLE: Communication-Efficient Distributed Optimization with Quantized Preconditioners
http://proceedings.mlr.press/v139/alimisis21a.html
AUTHORS: Foivos Alimisis, Peter Davies, Dan Alistarh
HIGHLIGHT: We investigate fast and communication-efficient algorithms for the classic problem of minimizing a sum of strongly convex and smooth functions that are distributed among $n$ different nodes, which can communicate using a limited number of bits.

20, TITLE: Non-Exponentially Weighted Aggregation: Regret Bounds for Unbounded Loss Functions
http://proceedings.mlr.press/v139/alquier21a.html
AUTHORS: Pierre Alquier
HIGHLIGHT: In this paper, we study a generalized aggregation strategy, where the weights no longer depend exponentially on the losses.
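Entry 20 departs from the classical exponentially weighted aggregation strategy, in which each expert's weight decays exponentially in its cumulative loss. The exponential baseline it generalizes looks like this (toy per-round losses and learning rate chosen for illustration):

```python
import numpy as np

eta = 0.5                                      # learning rate (illustrative)
losses = np.array([[0.9, 0.1, 0.5],            # per-round losses for 3 experts
                   [0.8, 0.2, 0.6],
                   [0.7, 0.1, 0.4]])
cum = losses.sum(axis=0)                       # cumulative loss per expert
w = np.exp(-eta * cum)                         # exponential weighting
w /= w.sum()                                   # normalize into aggregation weights
print(w)                                       # mass concentrates on the lowest-loss expert
```

The paper's point of departure is precisely the `np.exp(-eta * cum)` line: with unbounded losses the exponential map can behave badly, motivating weight functions that decay more slowly in the loss.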
21, TITLE: Dataset Dynamics via Gradient Flows in Probability Space
http://proceedings.mlr.press/v139/alvarez-melis21a.html
AUTHORS: David Alvarez-Melis, Nicolò Fusi
HIGHLIGHT: In this work, we propose a novel framework for dataset transformation, which we cast as optimization over data-generating joint probability distributions.

22, TITLE: Submodular Maximization subject to a Knapsack Constraint: Combinatorial Algorithms with Near-optimal Adaptive Complexity
http://proceedings.mlr.press/v139/amanatidis21a.html
AUTHORS: Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi, Alberto Marchetti-Spaccamela, Rebecca Reiffenhäuser
HIGHLIGHT: In this work we obtain the first \emph{constant factor} approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with \emph{near-optimal} $O(\log n)$ adaptive complexity.

23, TITLE: Safe Reinforcement Learning with Linear Function Approximation
http://proceedings.mlr.press/v139/amani21a.html
AUTHORS: Sanae Amani, Christos Thrampoulidis, Lin Yang
HIGHLIGHT: In this paper, we address both problems by first modeling safety as an unknown linear cost function of states and actions, which must always fall below a certain threshold.

24, TITLE: Automatic variational inference with cascading flows
http://proceedings.mlr.press/v139/ambrogioni21a.html
AUTHORS: Luca Ambrogioni, Gianluigi Silvestri, Marcel Van Gerven
HIGHLIGHT: Here, we combine the flexibility of normalizing flows and the prior-embedding property of ASVI in a new family of variational programs, which we named cascading flows.

25, TITLE: Sparse Bayesian Learning via Stepwise Regression
http://proceedings.mlr.press/v139/ament21a.html
AUTHORS: Sebastian E. Ament, Carla P. Gomes
HIGHLIGHT: Herein, we propose a coordinate ascent algorithm for SBL termed Relevance Matching Pursuit (RMP) and show that, as its noise variance parameter goes to zero, RMP exhibits a surprising connection