
Machine Learning: Why Do Simple Algorithms Work So Well?

Chi Jin
Electrical Engineering and Computer Sciences
University of California at Berkeley

Technical Report No. UCB/EECS-2019-53
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-53.html

May 17, 2019

Copyright © 2019, by the author(s). All rights reserved.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Machine Learning: Why Do Simple Algorithms Work So Well?

by Chi Jin

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Electrical Engineering and Computer Science in the Graduate Division of the University of California, Berkeley.

Committee in charge:
Professor Michael I. Jordan, Chair
Professor Peter L. Bartlett
Associate Professor Aditya Guntuboyina

Spring 2019

Machine Learning: Why Do Simple Algorithms Work So Well?
Copyright 2019 by Chi Jin

Abstract

Machine Learning: Why Do Simple Algorithms Work So Well?

by Chi Jin

Doctor of Philosophy in Electrical Engineering and Computer Science
University of California, Berkeley
Professor Michael I. Jordan, Chair

While state-of-the-art machine learning models are deep, large-scale, sequential and highly nonconvex, the backbone of modern learning consists of simple algorithms such as stochastic gradient descent, gradient descent with momentum, or Q-learning (in the case of reinforcement learning tasks). A basic question endures: why do simple algorithms work so well even in these challenging settings?

To answer the above question, this thesis focuses on four concrete and fundamental questions:

1. In nonconvex optimization, can (stochastic) gradient descent or its variants escape saddle points efficiently?
2. Is gradient descent with momentum provably faster than gradient descent in the general nonconvex setting?
3. In nonconvex-nonconcave minmax optimization, what is a proper definition of local optima, and is gradient descent ascent game-theoretically meaningful?
4. In reinforcement learning, is Q-learning sample efficient?

This thesis provides the first line of provably positive answers to all of the above questions. In particular, this thesis shows that although the standard versions of these classical algorithms do not enjoy good theoretical properties in the worst case, simple modifications are sufficient to grant them desirable behaviors, which explains the underlying mechanisms behind their favorable performance in practice.

To Jiaqi, Kelvin and my parents.

Contents

Contents
List of Figures
List of Tables

1 Overview
  1.1 Machine Learning and Simple Algorithms
  1.2 Types of Theoretical Guarantees
  1.3 Organization

I Nonconvex Optimization

2 Escaping Saddle Points by Gradient Descent
  2.1 Introduction
  2.2 Preliminaries
  2.3 Common Landscape of Nonconvex Applications in Machine Learning
  2.4 Main Results
  2.5 Conclusion
  2.6 Proofs for Non-stochastic Setting
  2.7 Proofs for Stochastic Setting
  2.8 Tables of Related Work
  2.9 Concentration Inequalities

3 Escaping Saddle Points Faster using Momentum
  3.1 Introduction
  3.2 Preliminaries
  3.3 Main Result
  3.4 Overview of Analysis
  3.5 Conclusions
  3.6 Proof of Hamiltonian Lemmas
  3.7 Proof of Main Result
  3.8 Auxiliary Lemma

II Minmax Optimization

4 On Stable Limit Points of Gradient Descent Ascent
  4.1 Introduction
  4.2 Preliminaries
  4.3 What is the Right Objective?
  4.4 Main Results
  4.5 Conclusion
  4.6 Proofs for Reduction from Mixed Strategy Nash to Minmax Points
  4.7 Proofs for Properties of Local Minmax Points
  4.8 Proofs for Limit Points of Gradient Descent Ascent
  4.9 Proofs for Gradient Descent with Max-oracle

III Reinforcement Learning

5 On Sample Efficiency of Q-learning
  5.1 Introduction
  5.2 Preliminary
  5.3 Main Results
  5.4 Proof for Q-learning with UCB-Hoeffding
  5.5 Explanation for Q-Learning with $\varepsilon$-Greedy
  5.6 Proof of Lemma 5.4.1
  5.7 Proof for Q-learning with UCB-Bernstein
  5.8 Proof of Lower Bound

Bibliography

List of Figures

1.1 Image Classification and Deep Neural Networks
1.2 Types of theoretical guarantees and their relevance to practice.
2.1 Perturbation ball in 3D and the "thin pancake" shaped stuck region
2.2 Perturbation ball in 2D and the "narrow band" stuck region under gradient flow
4.1 Left: $f(x, y) = x^2 - y^2$, where $(0, 0)$ is both local Nash and local minmax. Right: $f(x, y) = -x^2 + 5xy - y^2$, where $(0, 0)$ is not local Nash but is local minmax with $h(\delta) = \delta$.
4.2 Left: $f(x, y) = 0.2xy - \cos(y)$; the global minmax points $(0, -\pi)$ and $(0, \pi)$ are not stationary. Right: The relations among local Nash equilibria, local minmax points, local maxmin points, and linearly stable points of $\gamma$-GDA and $\infty$-GDA (up to degenerate points).
5.1 Illustration of $\{\alpha^i_t\}_{i=1}^{1000}$ for learning rates $\alpha_t = \frac{H+1}{H+t}$, $\frac{1}{t}$, and $\frac{1}{\sqrt{t}}$ when $H = 10$.

List of Tables

2.1 A high-level summary of the results of this work and their comparison to the prior state of the art for GD and SGD algorithms. This table only highlights the dependence on $d$ and $\epsilon$.
2.2 A summary of related work on first-order algorithms to find second-order stationary points in the non-stochastic setting. This table only highlights the dependence on $d$ and $\epsilon$. † denotes follow-up work.
2.3 A summary of related work on first-order algorithms to find second-order stationary points in the stochastic setting. This table only highlights the dependence on $d$ and $\epsilon$. ∗ denotes independent work.
3.1 Complexity of finding stationary points. $\tilde{O}(\cdot)$ ignores polylog factors in $d$ and $\epsilon$.
5.1 Regret comparisons for RL algorithms on episodic MDPs. $T = KH$ is the total number of steps, $H$ is the number of steps per episode, $S$ is the number of states, and $A$ is the number of actions. For clarity, this table is presented for $T \ge \mathrm{poly}(S, A, H)$, omitting lower-order terms.

Acknowledgments

I would like to give my foremost thanks to my advisor Michael I. Jordan. Throughout my years at Berkeley, he showed me not only how to do first-class research, but also how to be a respectful academic figure as well as a caring advisor. His pioneering vision across the entire field inspired me to think outside the box and to attack new, challenging problems. His modest viewpoints and encouraging words guided me through the most difficult times of my Ph.D. I am fortunate to be in his group, a highly collaborative environment with diverse interests. It not only gave me a general awareness of the field, but also provided me sufficient freedom to pursue my own interests.
I could not have wished for a better advisor. I am especially grateful to Rong Ge, Praneeth Netrapalli and Sham M. Kakade, who are my wonderful long-term collaborators and have had a remarkable influence on my entire Ph.D. path. My journey in nonconvex optimization started with a long-distance collaboration with Rong, followed by a summer internship mentored by Sham at Microsoft Research New England, where both Praneeth and Rong were postdoctoral researchers. Through collaboration with them, I learned a wide range of ideas, intuitions and powerful techniques. This gave me an incredible amount of technical strength for solving challenging problems, which is invaluable for my entire career. I also want to express my sincere gratitude to Prateek Jain and Sébastien Bubeck, who generously provided me advice and guidance through different stages of my Ph.D. In addition, I would like to thank Zeyuan Allen-Zhu, who contributed significantly to the work in the last part of this thesis. The collaboration experience with him was truly a pleasure. He showed me how to be a passionate and smart person while at the same time working tremendously hard. I would like to thank Peter Bartlett and Aditya Guntuboyina for being on my thesis committee. Peter provided me much helpful advice and feedback during my first year. Aditya taught me a lot of useful statistics, and I had a wonderful experience serving as his teaching assistant. I am also grateful to Martin Wainwright and Prasad Raghavendra for being on my Qual Exam committee and for many helpful discussions. There are also several peer fellows who have had a substantial impact on my Ph.D. life. Besides being great collaborators, they are also my closest personal friends. Nilesh Tripuraneni taught me a lot about American traditions, culture and politics, and patiently helped me improve my English writing; Yuchen Zhang provided me various guidance during my first three years; Yuansi Chen and I share many precious memories of teaming up for esports; and Yian Ma convinced me of the charm of outdoor activities. In addition to all the people mentioned above, I am fortunate to have many amazing collaborators: Lydia Liu, Darren Lin, Mitchell Stern, Jeff Regier, Nicolas Flammarion, Bin Yu, Simon Du, Jason Lee, Barnabas Poczos, Aarti Singh, Sivaraman Balakrishnan, Aaron Sidford, Cameron Musco, Furong Huang, and Yang Yuan. It was really a great pleasure to work with you all.