ICML 2020 Paper Digests
Source: https://www.paperdigest.org
1, TITLE: Reverse-engineering deep ReLU networks
https://proceedings.icml.cc/static/paper_files/icml/2020/1-Paper.pdf
AUTHORS: David Rolnick, Konrad Kording
HIGHLIGHT: Here, we prove that in fact it is often possible to identify the architecture, weights, and biases of an unknown deep ReLU network by observing only its output.

2, TITLE: My Fair Bandit: Distributed Learning of Max-Min Fairness with Multi-player Bandits
https://proceedings.icml.cc/static/paper_files/icml/2020/11-Paper.pdf
AUTHORS: Ilai Bistritz, Tavor Baharav, Amir Leshem, Nicholas Bambos
HIGHLIGHT: We present an algorithm and prove that it is regret optimal up to a log(log T) factor.

3, TITLE: Scalable Differentiable Physics for Learning and Control
https://proceedings.icml.cc/static/paper_files/icml/2020/15-Paper.pdf
AUTHORS: Yi-Ling Qiao, Junbang Liang, Vladlen Koltun, Ming Lin
HIGHLIGHT: We develop a scalable framework for differentiable physics that can support a large number of objects and their interactions.

4, TITLE: Generalization to New Actions in Reinforcement Learning
https://proceedings.icml.cc/static/paper_files/icml/2020/29-Paper.pdf
AUTHORS: Ayush Jain, Andrew Szot, Joseph Lim
HIGHLIGHT: To approach this problem, we propose a two-stage framework where the agent first infers action representations from acquired action observations and then learns to use these in reinforcement learning with added generalization objectives.

5, TITLE: Randomized Block-Diagonal Preconditioning for Parallel Learning
https://proceedings.icml.cc/static/paper_files/icml/2020/53-Paper.pdf
AUTHORS: Celestine Mendler-Dünner, Aurelien Lucchi
HIGHLIGHT: Our main contribution is to demonstrate that the convergence of these methods can be significantly improved by a randomization technique which corresponds to repartitioning coordinates across tasks during the optimization procedure.
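The premise of entry 1, that a deep ReLU network's parameters leave a recoverable imprint on its outputs, can be illustrated in the simplest possible case. A ReLU network computes a piecewise-linear function, and the location of each "kink" reveals a ratio of hidden parameters. The toy sketch below (a single-neuron example for intuition only, not the paper's algorithm) recovers -b/w from output observations alone:

```python
import numpy as np

# Hypothetical single-neuron ReLU "black box" with hidden parameters.
w, b = 2.0, -3.0                       # unknown to the observer
f = lambda x: np.maximum(w * x + b, 0.0)

# Observe outputs on a fine grid and locate where the function stops being
# linear: the second difference vanishes on linear pieces and spikes at the
# ReLU kink, which sits at x = -b/w.
xs = np.linspace(-5.0, 5.0, 100001)
ys = f(xs)
second_diff = np.abs(np.diff(ys, 2))
kink = xs[1 + np.argmax(second_diff)]

print(kink)                            # close to -b/w = 1.5
```

Recovering a full deep network's architecture and weights from such observations is far harder; the point here is only that output kinks carry parameter information.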
6, TITLE: Stochastic Flows and Geometric Optimization on the Orthogonal Group
https://proceedings.icml.cc/static/paper_files/icml/2020/57-Paper.pdf
AUTHORS: Krzysztof Choromanski, David Cheikhi, Jared Davis, Valerii Likhosherstov, Achille Nazaret, Achraf Bahamou, Xingyou Song, Mrugank Akarte, Jack Parker-Holder, Jacob Bergquist, Yuan Gao, Aldo Pacchiano, Tamas Sarlos, Adrian Weller, Vikas Sindhwani
HIGHLIGHT: We present a new class of stochastic, geometrically-driven optimization algorithms on the orthogonal group O(d) and naturally reductive homogeneous manifolds obtained from the action of the rotation group SO(d).

7, TITLE: PackIt: A Virtual Environment for Geometric Planning
https://proceedings.icml.cc/static/paper_files/icml/2020/62-Paper.pdf
AUTHORS: Ankit Goyal, Jia Deng
HIGHLIGHT: We present PackIt, a virtual environment to evaluate and potentially learn the ability to do geometric planning. We also construct a set of challenging packing tasks using an evolutionary algorithm.

8, TITLE: Soft Threshold Weight Reparameterization for Learnable Sparsity
https://proceedings.icml.cc/static/paper_files/icml/2020/67-Paper.pdf
AUTHORS: Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi
HIGHLIGHT: This work proposes Soft Threshold Reparameterization (STR), a novel use of the soft-threshold operator on DNN weights.

9, TITLE: Stochastic Latent Residual Video Prediction
https://proceedings.icml.cc/static/paper_files/icml/2020/78-Paper.pdf
AUTHORS: Jean-Yves Franceschi, Edouard Delasalles, Mickael Chen, Sylvain Lamprier, Patrick Gallinari
HIGHLIGHT: In this paper, we overcome these difficulties by introducing a novel stochastic temporal model whose dynamics are governed in a latent space by a residual update rule.
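The soft-threshold operator behind STR (entry 8) is a standard shrinkage function. A minimal NumPy sketch of the operator itself, omitting the learnable per-layer threshold parameterization that constitutes the paper's actual contribution:

```python
import numpy as np

def soft_threshold(w, s):
    """Standard soft-threshold operator: shrinks |w| by s, zeroing weights
    whose magnitude falls below the threshold. STR (entry 8) applies this
    to DNN weights with a learnable s per layer (not modeled here)."""
    return np.sign(w) * np.maximum(np.abs(w) - s, 0.0)

w = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
sparse_w = soft_threshold(w, 0.5)   # zeros out |w| <= 0.5, shrinks the rest
print(sparse_w)
```

Because the operator is (sub)differentiable, both the weights and the threshold can be trained by backpropagation, which is what makes the sparsity level learnable.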
10, TITLE: Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise
https://proceedings.icml.cc/static/paper_files/icml/2020/86-Paper.pdf
AUTHORS: Umut Simsekli, Lingjiong Zhu, Yee Whye Teh, Mert Gurbuzbalaban
HIGHLIGHT: In this study, we consider a continuous-time variant of SGDm, known as the underdamped Langevin dynamics (ULD), and investigate its asymptotic properties under heavy-tailed perturbations.

11, TITLE: Context Aware Local Differential Privacy
https://proceedings.icml.cc/static/paper_files/icml/2020/111-Paper.pdf
AUTHORS: Jayadev Acharya, Keith Bonawitz, Peter Kairouz, Daniel Ramage, Ziteng Sun
HIGHLIGHT: We propose a context-aware framework for LDP that allows the privacy level to vary across the data domain, enabling system designers to place privacy constraints where they matter without paying the cost where they do not.

12, TITLE: Privately Learning Markov Random Fields
https://proceedings.icml.cc/static/paper_files/icml/2020/112-Paper.pdf
AUTHORS: Gautam Kamath, Janardhan Kulkarni, Steven Wu, Huanyu Zhang
HIGHLIGHT: Our learning goals include both structure learning, where we try to estimate the underlying graph structure of the model, as well as the harder goal of parameter learning, in which we additionally estimate the parameter on each edge.

13, TITLE: A Mean Field Analysis Of Deep ResNet And Beyond: Towards Provably Optimization Via Overparameterization From Depth
https://proceedings.icml.cc/static/paper_files/icml/2020/115-Paper.pdf
AUTHORS: Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
HIGHLIGHT: To understand the success of SGD for training deep neural networks, this work presents a mean-field analysis of deep residual networks, based on a line of works which interpret the continuum limit of the deep residual network as an ordinary differential equation as the network capacity tends to infinity.
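As background for entry 11: the classic binary randomized-response mechanism illustrates the uniform-epsilon local differential privacy that the paper's context-aware framework generalizes. This is the textbook mechanism, not the paper's method, and the function names are illustrative:

```python
import math
import random

def randomized_response(bit, epsilon):
    """Classic binary randomized response: report the true bit with
    probability e^eps / (e^eps + 1), which satisfies epsilon-LDP.
    Entry 11 generalizes this setting by letting the privacy level
    vary across the data domain."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def debias(noisy_mean, epsilon):
    """Invert the known flipping probability to get an unbiased estimate
    of the true population mean from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (noisy_mean - (1.0 - p)) / (2.0 * p - 1.0)
```

Aggregating many noisy reports and debiasing recovers population statistics while each individual report remains plausibly deniable; a context-aware scheme would spend more epsilon on sensitive regions of the domain and less elsewhere.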
14, TITLE: Provable Smoothness Guarantees for Black-Box Variational Inference
https://proceedings.icml.cc/static/paper_files/icml/2020/120-Paper.pdf
AUTHORS: Justin Domke
HIGHLIGHT: This paper shows that for location-scale family approximations, if the target is M-Lipschitz smooth, then so is the "energy" part of the variational objective.

15, TITLE: Enhancing Simple Models by Exploiting What They Already Know
https://proceedings.icml.cc/static/paper_files/icml/2020/126-Paper.pdf
AUTHORS: Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss
HIGHLIGHT: In this paper, we propose a novel method, SRatio, that can utilize information from high-performing complex models (viz. deep neural networks, boosted trees, random forests) to reweight a training dataset for a potentially low-performing simple model of much lower complexity, such as a decision tree or a shallow network, enhancing its performance.

16, TITLE: Fiduciary Bandits
https://proceedings.icml.cc/static/paper_files/icml/2020/127-Paper.pdf
AUTHORS: Gal Bahar, Omer Ben-Porat, Kevin Leyton-Brown, Moshe Tennenholtz
HIGHLIGHT: More formally, we introduce a model in which a recommendation system faces an exploration-exploitation tradeoff under the constraint that it can never recommend any action that it knows yields lower reward in expectation than an agent would achieve if it acted alone.

17, TITLE: Training Deep Energy-Based Models with f-Divergence Minimization
https://proceedings.icml.cc/static/paper_files/icml/2020/130-Paper.pdf
AUTHORS: Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon
HIGHLIGHT: In this paper, we propose a general variational framework termed f-EBM to train EBMs using any desired f-divergence.
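For reference, the f-divergence family that f-EBM (entry 17) draws on is the standard one: for distributions P, Q with densities p, q and a convex function f with f(1) = 0,

```latex
D_f(P \,\|\, Q) \;=\; \mathbb{E}_{x \sim q}\!\left[ f\!\left( \frac{p(x)}{q(x)} \right) \right],
```

with the KL divergence recovered by f(t) = t log t and other choices (reverse KL, Jensen-Shannon, total variation) given by other convex f. The paper's contribution is making this family usable as a training objective for energy-based models, whose unnormalized densities make the ratio p/q intractable to evaluate directly.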
18, TITLE: Progressive Graph Learning for Open-Set Domain Adaptation
https://proceedings.icml.cc/static/paper_files/icml/2020/136-Paper.pdf
AUTHORS: Yadan Luo, Zijian Wang, Zi Huang, Mahsa Baktashmotlagh
HIGHLIGHT: More specifically, we introduce an end-to-end Progressive Graph Learning (PGL) framework where a graph neural network with episodic training is integrated to suppress underlying conditional shift and adversarial learning is adopted to close the gap between the source and target distributions.

19, TITLE: Learning De-biased Representations with Biased Representations
https://proceedings.icml.cc/static/paper_files/icml/2020/138-Paper.pdf
AUTHORS: Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, Seong Joon Oh
HIGHLIGHT: In this work, we propose a novel framework to train a de-biased representation by encouraging it to be different from a set of representations that are biased by design.

20, TITLE: Generalized Neural Policies for Relational MDPs
https://proceedings.icml.cc/static/paper_files/icml/2020/140-Paper.pdf
AUTHORS: Sankalp Garg, Aniket Bajpai, Mausam
HIGHLIGHT: We present the first neural approach for solving RMDPs, expressed in the probabilistic planning language of RDDL.

21, TITLE: Feature-map-level Online Adversarial Knowledge Distillation
https://proceedings.icml.cc/static/paper_files/icml/2020/143-Paper.pdf
AUTHORS: Inseop Chung, SeongUk Park, Kim Jangho, Nojun Kwak
HIGHLIGHT: Thus, in this paper, we propose an online knowledge distillation method that transfers not only the knowledge of the class probabilities but also that of the feature map using the adversarial training framework.
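The class-probability transfer that entry 21 builds on is the standard distillation objective: match temperature-softened teacher and student distributions. A minimal NumPy sketch of that baseline loss (the paper's feature-map transfer via adversarial training is its contribution and is not shown here):

```python
import numpy as np

def softmax(z, T):
    # Temperature-softened softmax; subtracting the max is for stability.
    z = z / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Standard class-probability distillation: KL divergence between the
    softened teacher and student distributions, scaled by T^2 so gradients
    stay comparable across temperatures. Background for entry 21, which
    additionally transfers feature-map knowledge adversarially."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * np.sum(p * (np.log(p) - np.log(q)))
```

The loss is zero when the student's softened distribution matches the teacher's and positive otherwise; in the online setting of entry 21, peer networks distill from one another during training rather than from a fixed pretrained teacher.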
22, TITLE: DRWR: A Differentiable Renderer without Rendering for Unsupervised 3D Structure Learning from Silhouette Images
https://proceedings.icml.cc/static/paper_files/icml/2020/145-Paper.pdf
AUTHORS: Zhizhong Han, Chao Chen, Yu-Shen Liu, Matthias Zwicker
HIGHLIGHT: In contrast, here we propose a Differentiable Renderer Without Rendering (DRWR) that omits these steps.

23, TITLE: Towards Accurate Post-training Network Quantization via Bit-Split and Stitching
https://proceedings.icml.cc/static/paper_files/icml/2020/147-Paper.pdf
AUTHORS: Peisong Wang, Qiang Chen, Xiangyu