New Directions in Bandit Learning: Singularities and Random Walk Feedback by Tianyu Wang

Total pages: 16

File type: PDF, size: 1020 KB

New Directions in Bandit Learning: Singularities and Random Walk Feedback

by Tianyu Wang

Department of Computer Science, Duke University

Approved: Cynthia Rudin (Advisor), Xiuyuan Cheng, Rong Ge, Alexander Volfovsky

Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science in the Graduate School of Duke University, 2021.

Copyright © 2021 by Tianyu Wang. All rights reserved.

Abstract

My thesis focuses on new directions in bandit learning problems. In Chapter 1, I give an overview of the bandit learning literature, which lays the discussion framework for the studies in Chapters 2 and 3.

In Chapter 2, I study bandit learning problems in metric measure spaces. I start with the multi-armed bandit problem with Lipschitz rewards, and propose a practical algorithm that can utilize greedy tree training methods and adapt to the landscape of the reward function. In particular, the study provides a Bayesian perspective on this problem. I also study bandit learning for Bounded Mean Oscillation (BMO) functions, where the goal is to "maximize" a function that may go to infinity in parts of the space. For an unknown BMO function, I will present algorithms that efficiently find regions with high function values. To handle possible singularities and unboundedness in BMO functions, I will introduce the new notion of δ-regret: the difference between the function values along the trajectory and a point that is optimal after removing a δ-sized portion of the space. I will show that my algorithm has O(κ log T / T) average T-step δ-regret, where κ depends on δ and adapts to the landscape of the underlying reward function.

In Chapter 3, I will study bandit learning with random walk trajectories as feedback. In domains including online advertisement and social networks, user behaviors can be modeled as a random walk over a network. To this end, we study a novel bandit learning problem, where each arm is the starting node of a random walk in a network and the reward is the length of the walk. We provide a comprehensive understanding of this formulation by studying both the stochastic and the adversarial setting. In the stochastic setting, we observe that there exists a difficult problem instance on which the following two seemingly conflicting facts simultaneously hold: (1) no algorithm can achieve a regret bound independent of problem intrinsics, information-theoretically; and (2) there exists an algorithm whose performance is independent of problem intrinsics in terms of the tail of mistakes. This reveals an intriguing phenomenon in general semi-bandit feedback learning problems. In the adversarial setting, we establish novel algorithms that achieve regret bounds of order Õ(√(κT)), where κ is a constant that depends on the structure of the graph, instead of on the number of arms (nodes). This bound significantly improves over regular bandit algorithms, whose regret depends on the number of arms (nodes).
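To make the Chapter 3 feedback model concrete, here is a minimal illustrative sketch, not code from the thesis: a tiny network with a hypothetical transition matrix and an absorbing target node, where pulling an arm means starting a random walk at that node and the observed reward is the length of the resulting walk. All node labels, probabilities, and function names below are assumptions made purely for illustration.

```python
import numpy as np

# Minimal sketch (not from the thesis) of the random walk feedback model:
# each arm is a starting node of a random walk on a network, and the feedback
# for pulling that arm is the full trajectory, whose length is the reward.

rng = np.random.default_rng(0)

# Hypothetical 4-node network; node 3 is an absorbing "target" node.
# P[i, j] is the probability of stepping from node i to node j.
P = np.array([
    [0.0, 0.5, 0.3, 0.2],
    [0.4, 0.0, 0.3, 0.3],
    [0.2, 0.3, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],   # absorbing node
])
TARGET = 3

def pull_arm(start_node: int, max_steps: int = 10_000) -> int:
    """Run a random walk from `start_node` and return the walk length."""
    node, steps = start_node, 0
    while node != TARGET and steps < max_steps:
        node = rng.choice(len(P), p=P[node])
        steps += 1
    return steps

# Each pull reveals an entire trajectory length (semi-bandit-style feedback).
for arm in range(3):
    print(f"arm {arm}: sample walk lengths", [pull_arm(arm) for _ in range(5)])
```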
Contents

Abstract
List of Figures
List of Tables
Acknowledgements
1 Introduction
2 Bandit Learning in Metric Spaces
  2.1 Lipschitz Bandits: A Bayesian Approach
    2.1.1 Introduction
    2.1.2 Main Results: TreeUCB Framework and a Bayesian Perspective
    2.1.3 Empirical Study
    2.1.4 Conclusion
  2.2 Bandits for BMO Functions
    2.2.1 Introduction
    2.2.2 Preliminaries
    2.2.3 Problem Setting: BMO Bandits
    2.2.4 Solve BMO Bandits via Partitioning
    2.2.5 Achieve Poly-log Regret via Zooming
    2.2.6 Experiments
    2.2.7 Conclusion
3 Bandit Learning with Random Walk Feedback
  3.1 Introduction
    3.1.1 Related Works
  3.2 Problem Setting
  3.3 Stochastic Setting
    3.3.1 Reduction to Standard MAB
    3.3.2 Is this Problem Much Easier than Standard MAB?
    3.3.3 Regret Analysis
  3.4 Adversarial Setting
    3.4.1 Analysis of Algorithm 6
    3.4.2 Lower Bound for the Adversarial Setting
  3.5 Experiments
  3.6 Conclusion
4 Conclusion
Appendices
A Supplementary Materials for Chapter 2
  A.1 Proof of Lemma 2
  A.2 Proof of Lemma 3
    A.2.1 Proof of Proposition 5
    A.2.2 Proof of Proposition 6
  A.3 Proof of Lemma 4
  A.4 Proof of Theorem 1
  A.5 Proof of Proposition 2
  A.6 Proof of Proposition 3
  A.7 Elaboration of Remark 6
  A.8 Proof of Theorem 3
B Supplementary Materials for Chapter 3
  B.1 Additional Details for the Stochastic Setting
    B.1.1 Concentrations of Estimators
    B.1.2 Proof of Theorem 9
    B.1.3 Proof of Theorem 10
    B.1.4 Greedy Algorithm for the Stochastic Setting
  B.2 Proofs for the Adversarial Setting
    B.2.1 Proof of Theorem 12
    B.2.2 Additional Propositions
Bibliography
Biography

List of Figures

1.1 A bandit octopus.
2.1 Example reward function (in color gradient) with an example partitioning.
2.2 The left subfigure is the metric learned by Algorithm 1 (2.9). The right subfigure is the smoothed version of this learned metric.
2.3 The estimates for a function with respect to a given partition.
2.4 Performance of TUCB against benchmark methods in tuning neural networks.
2.5 Graph of f(x) = −log(|x|), with δ and f_δ annotated. This function is an unbounded BMO function.
2.6 Example of terminal cubes, pre-parent and parent cubes.
2.7 Algorithms 3 and 4 on Himmelblau's function (left) and the Styblinski-Tang function (right).
2.8 Landscapes of test functions used in Section 2.1.3. Left: (rescaled) Himmelblau's function. Right: (rescaled) Styblinski-Tang function.
3.1 Problem instances constructed to prove Theorem 8. The edge labels denote edge transition probabilities in J/J'.
3.2 A plot of the function f(x) = (1 − √x)/(1 + √x), x ∈ [0, 1]. This shows that in Theorem 11, the dependence on graph connectivity is highly non-linear.
3.3 Experimental results for Chapter 3.5.
3.4 The network structure for experiments in Chapter 3.5.

List of Tables

2.1 Settings for the SVHN experiments.
2.2 Settings for CIFAR-10 experiments.

Acknowledgements

When I started my PhD study, I knew little about the journey ahead. There were multiple points where I almost failed. After five years of adventures, how much I have transformed! I am grateful to my advisor, Prof. Cynthia Rudin, for her guidance and advice. Lessons I learnt from Prof. Rudin are not only technical, but also philosophical.
She has taught me how to define problems, how to collaborate, and how to write academic papers and give talks. I am still practicing and improving the skills I learnt from her.

I would like to thank Duke University and the Department of Computer Science. Administratively and financially, the university and the department have supported me throughout my PhD study. Also, a thank you to the Alfred P. Sloan Foundation for supporting me via the Duke Energy Data Analytics Fellowship.

I would like to thank all my co-authors during my PhD study. They are, alphabetically, M. Usaid Awan, Siddhartha Banerjee, Dawei Geng, Gauri Jain, Yameng Liu, Marco Morucci, Sudeepa Roy, Cynthia Rudin, Sean Sinclair, Alexander Volfovsky, Zizhuo Wang, Lin Yang, Weicheng Ye, and Christina Lee Yu.

I would like to thank all the course instructors and teachers. The techniques and methodologies I learnt from them are invaluable. I also thank Duke University and the Department of Computer Science for providing rich learning resources.

Very importantly, I am grateful to my parents.

Looking ahead into the future, I will maintain a high standard for myself, to live up to the names of Duke University, the Department of Computer Science, my advisor Cynthia Rudin, and all my collaborators.

Chapter 1: Introduction

Bandit learning algorithms seek to answer the following critical question in sequential decision making: when interacting with the environment, when should we exploit the historically good options, and when should we explore the decision space? This intriguing question, and the corresponding exploitation-exploration tension, arise in many, if not all, online decision making problems. Bandit learning algorithms find applications ranging from experiment design [Rob52] to online advertising [LCLS10].

In the classic bandit learning problem, an agent interacts with an unknown and possibly changing environment. The agent has a set of choices (called arms in the bandit community) and tries to maximize the total reward while learning the environment. Performance is usually measured by regret, defined as the total difference, summed over time, between the rewards of the agent's choices and those of a hindsight-optimal option. We seek to design algorithms whose regret is sub-linear in time: this ensures that, if we run the algorithm long enough, we are often choosing the best options.

Three Different Settings

In this part, I classify bandit problems into three different settings, based on how the environment may change.
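The exploitation-exploration tension and the notion of regret described above can be illustrated with a small sketch. The snippet below is not from the thesis; it is a minimal illustration of the classic UCB1 strategy on a stochastic multi-armed bandit, where the Bernoulli arm means, the horizon, and all variable names are hypothetical assumptions, and the regret it reports is measured against the best single arm in hindsight.

```python
import math
import random

# Illustrative sketch of UCB1 on a stochastic bandit (not code from the thesis).
ARM_MEANS = [0.3, 0.5, 0.7]   # hypothetical Bernoulli reward means; arm 2 is best
T = 10_000                    # time horizon

counts = [0] * len(ARM_MEANS)   # number of pulls per arm
means = [0.0] * len(ARM_MEANS)  # empirical mean reward per arm
total_reward = 0.0

random.seed(0)
for t in range(1, T + 1):
    if t <= len(ARM_MEANS):
        arm = t - 1  # pull each arm once to initialize
    else:
        # Exploit arms with high empirical means, explore under-sampled arms
        # via the confidence bonus term.
        arm = max(range(len(ARM_MEANS)),
                  key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
    reward = 1.0 if random.random() < ARM_MEANS[arm] else 0.0
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
    total_reward += reward

# Regret relative to always playing the best arm in hindsight; sub-linear growth
# in T means the average per-round regret vanishes as the horizon grows.
regret = max(ARM_MEANS) * T - total_reward
print(f"pulls per arm: {counts}, estimated regret: {regret:.1f}")
```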
Recommended publications
  • Relational Machine Learning Algorithms
    Relational Machine Learning Algorithms by Alireza Samadianzakaria. Bachelor of Science, Sharif University of Technology, 2016. Submitted to the Graduate Faculty of the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Pittsburgh, 2021. This dissertation was presented by Alireza Samadianzakaria. It was defended on July 7, 2021 and approved by Dr. Kirk Pruhs, Department of Computer Science, University of Pittsburgh; Dr. Panos Chrysanthis, Department of Computer Science, University of Pittsburgh; Dr. Adriana Kovashka, Department of Computer Science, University of Pittsburgh; Dr. Benjamin Moseley, Tepper School of Business, Carnegie Mellon University. Copyright © by Alireza Samadianzakaria, 2021. The majority of learning tasks faced by data scientists involve relational data, yet most standard algorithms for standard learning problems are not designed to accept relational data as input. The standard practice to address this issue is to join the relational data to create the type of geometric input that standard learning algorithms expect. Unfortunately, this standard practice has exponential worst-case time and space complexity. This leads us to consider what we call the Relational Learning Question: "Which standard learning algorithms can be efficiently implemented on relational data, and for those that cannot, is there an alternative algorithm that can be efficiently implemented on relational data and that has similar performance guarantees to the standard algorithm?" In this dissertation, we address the relational learning question for the well-known problems of support vector machine (SVM), logistic regression, and k-means clustering.
  • Computer Science and Decision Theory
    Computer Science and Decision Theory. Fred S. Roberts, DIMACS Center, Rutgers University, Piscataway, NJ 08854 USA. Abstract: This paper reviews applications in computer science that decision theorists have addressed for years, discusses the requirements posed by these applications that place great strain on decision theory/social science methods, and explores applications in the social and decision sciences of newer decision-theoretic methods developed with computer science applications in mind. The paper deals with the relation between computer science and decision-theoretic methods of consensus, with the relation between computer science and game theory and decisions, and with "algorithmic decision theory." 1 Introduction. Many applications in computer science involve issues and problems that decision theorists have addressed for years, issues of preference, utility, conflict and cooperation, allocation, incentives, consensus, social choice, and measurement. A similar phenomenon is apparent more generally at the interface between computer science and the social sciences. We have begun to see the use of methods developed by decision theorists/social scientists in a variety of computer science applications. The requirements posed by these computer science applications place great strain on the decision theory/social science methods because of the sheer size of the problems addressed, new contexts in which computational power of agents becomes an issue, limitations on information possessed by players, and the sequential nature of repeated applications. Hence, there is a great need to develop a new generation of methods to satisfy these computer science requirements. In turn, these new methods will provide powerful new tools for social scientists in general and decision theorists in particular.
  • Are All Distributions Easy?
    Are all distributions easy? Emanuele Viola. November 5, 2009. Abstract: Complexity theory typically studies the complexity of computing a function h(x): {0,1}^n → {0,1}^m of a given input x. We advocate the study of the complexity of generating the distribution h(x) for uniform x, given random bits. Our main results are: (1) There are explicit AC0 circuits of size poly(n) and depth O(1) whose output distribution has statistical distance 1/2^n from the distribution (X, Σ_i X_i) ∈ {0,1}^n × {0, 1, ..., n} for uniform X ∈ {0,1}^n, despite the inability of these circuits to compute Σ_i x_i given x. Previous examples of this phenomenon apply to different distributions such as (X, Σ_i X_i mod 2) ∈ {0,1}^{n+1}. We also prove a lower bound independent from n on the statistical distance between the output distribution of NC0 circuits and the distribution (X, majority(X)). We show that 1 − o(1) lower bounds for related distributions yield lower bounds for succinct data structures. (2) Uniform randomized AC0 circuits of poly(n) size and depth d = O(1) with error ε can be simulated by uniform randomized circuits of poly(n) size and depth d + 1 with error ε + o(1) using ≤ (log n)^{O(log log n)} random bits. Previous derandomizations [Ajtai and Wigderson '85; Nisan '91] increase the depth by a constant factor, or else have poor seed length. Given the right tools, the above results have technically simple proofs. (Supported by NSF grant CCF-0845003.) 1 Introduction. Complexity theory, with some notable exceptions, typically studies the complexity of computing a function h(x): {0,1}^n → {0,1}^m of a given input x.
  • Research Notices
    AMERICAN MATHEMATICAL SOCIETY. Research in Collegiate Mathematics Education. V. Annie Selden, Tennessee Technological University, Cookeville; Ed Dubinsky, Kent State University, OH; Guershon Harel, University of California San Diego, La Jolla; and Fernando Hitt, CINVESTAV, Mexico, Editors. This volume presents state-of-the-art research on understanding, teaching, and learning mathematics at the post-secondary level. The articles are peer-reviewed for two major features: (1) advancing our understanding of collegiate mathematics education, and (2) readability by a wide audience of practicing mathematicians interested in issues affecting their students. This is not a collection of scholarly arcana, but a compilation of useful and informative research regarding how students think about and learn mathematics. This series is published in cooperation with the Mathematical Association of America. CBMS Issues in Mathematics Education, Volume 12; 2003; 206 pages; Softcover; ISBN 0-8218-3302-2; List $49; All individuals $39; Order code CBMATH/12N044. Also of interest: Mathematics Education Research: A Guide for the Research Mathematician. Curtis McKnight, Andy Magid, and Teri J. Murphy, University of Oklahoma, Norman, and Michelynn McKnight, Norman, OK. 2000; 106 pages; Softcover; ISBN 0-8218-2016-8; List $20; All AMS members $16; Order code MERN044. Teaching Mathematics in Colleges and Universities: Case Studies for Today's Classroom. Graduate Student Edition; Faculty
  • Typical Stability
    Typical Stability. Raef Bassily and Yoav Freund. Abstract: In this paper, we introduce a notion of algorithmic stability called typical stability. When our goal is to release real-valued queries (statistics) computed over a dataset, this notion does not require the queries to be of bounded sensitivity – a condition that is generally assumed under differential privacy [DMNS06, Dwo06] when used as a notion of algorithmic stability [DFH+15b, DFH+15c, BNS+16] – nor does it require the samples in the dataset to be independent – a condition that is usually assumed when generalization-error guarantees are sought. Instead, typical stability requires the output of the query, when computed on a dataset drawn from the underlying distribution, to be concentrated around its expected value with respect to that distribution. Typical stability can also be motivated as an alternative definition for database privacy. Like differential privacy, this notion enjoys several important properties including robustness to post-processing and adaptive composition. However, privacy is guaranteed only for a given family of distributions over the dataset. We also discuss the implications of typical stability on the generalization error (i.e., the difference between the value of the query computed on the dataset and the expected value of the query with respect to the true data distribution). We show that typical stability can control generalization error in adaptive data analysis even when the samples in the dataset are not necessarily independent and when the queries to be computed are not necessarily of bounded sensitivity, as long as the results of the queries over the dataset (i.e., the computed statistics) follow a distribution with a "light" tail.
  • An Efficient Boosting Algorithm for Combining Preferences
    An Efficient Boosting Algorithm for Combining Preferences by Raj Dharmarajan Iyer Jr. Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Science at the Massachusetts Institute of Technology, September 1999. © Massachusetts Institute of Technology 1999. All rights reserved. Certified by David R. Karger, Associate Professor, Thesis Supervisor. Accepted by Arthur C. Smith, Chairman, Departmental Committee on Graduate Students. Abstract: The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations.
  • An Introduction to Johnson–Lindenstrauss Transforms
    An Introduction to Johnson–Lindenstrauss Transforms. Casper Benjamin Freksen, 2nd March 2021 (arXiv:2103.00564 [cs.DS]). Abstract: Johnson–Lindenstrauss Transforms are powerful tools for reducing the dimensionality of data while preserving key characteristics of that data, and they have found use in many fields from machine learning to differential privacy and more. This note explains what they are; it gives an overview of their use and their development since they were introduced in the 1980s; and it provides many references should the reader wish to explore these topics more deeply. The text was previously a main part of the introduction of my PhD thesis [Fre20], but it has been adapted to be self contained and serve as a (hopefully good) starting point for readers interested in the topic. 1 The Why, What, and How. 1.1 The Problem. Consider the following scenario: We have some data that we wish to process but the data is too large, e.g. processing the data takes too much time, or storing the data takes too much space. A solution would be to compress the data such that the valuable parts of the data are kept and the other parts discarded. Of course, what is considered valuable is defined by the data processing we wish to apply to our data. To make our scenario more concrete let us say that our data consists of vectors in a high dimensional Euclidean space, R^d, and we wish to find a transform to embed these vectors into a lower dimensional space, R^m, where m ≪ d, so that we can apply our data processing in this lower dimensional space and still get meaningful results.
  • Adversarial Robustness of Streaming Algorithms Through Importance Sampling
    Adversarial Robustness of Streaming Algorithms through Importance Sampling. Vladimir Braverman, Avinatan Hassidim, Yossi Matias, Mariano Schain (Google); Sandeep Silwal (MIT); Samson Zhou (CMU). June 30, 2021. Abstract: Robustness against adversarial attacks has recently been at the forefront of algorithmic design for machine learning tasks. In the adversarial streaming model, an adversary gives an algorithm a sequence of adaptively chosen updates u_1, ..., u_n as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. In this paper, we introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks, such as regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model. Our results are based on a simple, but powerful, observation that many importance sampling-based algorithms give rise to adversarial robustness, which is in contrast to sketching based algorithms, which are very prevalent in the streaming literature but suffer from adversarial attacks. In addition, we show that the well-known merge and reduce paradigm in streaming is adversarially robust.
  • Private Center Points and Learning of Halfspaces
    Private Center Points and Learning of Halfspaces. Proceedings of Machine Learning Research vol 99:1–14, 2019, 32nd Annual Conference on Learning Theory. Amos Beimel (Ben-Gurion University), Shay Moran (Princeton University), Kobbi Nissim (Georgetown University), Uri Stemmer (Ben-Gurion University). Editors: Alina Beygelzimer and Daniel Hsu. Abstract: We present a private agnostic learner for halfspaces over an arbitrary finite domain X ⊂ R^d with sample complexity poly(d, 2^{log* |X|}). The building block for this learner is a differentially private algorithm for locating an approximate center point of m > poly(d, 2^{log* |X|}) points – a high dimensional generalization of the median function. Our construction establishes a relationship between these two problems that is reminiscent of the relation between the median and learning one-dimensional thresholds [Bun et al. FOCS '15]. This relationship suggests that the problem of privately locating a center point may have further applications in the design of differentially private algorithms. We also provide a lower bound on the sample complexity for privately finding a point in the convex hull. For approximate differential privacy, we show a lower bound of m = Ω(d + log* |X|), whereas for pure differential privacy m = Ω(d log |X|). Keywords: Differential privacy, Private PAC learning, Halfspaces, Quasi-concave functions. 1. Introduction. Machine learning models are often trained on sensitive personal information, e.g., when analyzing healthcare records or social media data. There is hence an increasing awareness and demand for privacy preserving machine learning technology.
  • Adversarially Robust Property-Preserving Hash Functions
    Adversarially Robust Property-Preserving Hash Functions. Citation: Boyle, Elette et al. "Adversarially robust property-preserving hash functions." Leibniz International Proceedings in Informatics (LIPIcs), 124, 16, 10th Innovations in Theoretical Computer Science (ITCS 2019), San Diego, California, January 10-12, 2019, Leibniz Center for Informatics: 1-20. © 2019 The Author(s). DOI: 10.4230/LIPIcs.ITCS.2019.16. Citable link: https://hdl.handle.net/1721.1/130258. Terms of use: Creative Commons Attribution 4.0 International license, https://creativecommons.org/licenses/by/4.0/. Elette Boyle (IDC Herzliya, Kanfei Nesharim Herzliya, Israel), Rio LaVigne (MIT CSAIL, 32 Vassar Street, Cambridge MA, 02139 USA), Vinod Vaikuntanathan (MIT CSAIL, 32 Vassar Street, Cambridge MA, 02139 USA). Abstract: Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x, y) of the original inputs. The idea of property-preserving hash functions underlies sketching, compressed sensing and locality-sensitive hashing. Property-preserving hash functions are usually probabilistic: they use the random choice of a hash function from a family to achieve compression, and as a consequence, err on some inputs. Traditionally, the notion of correctness for these hash functions requires that for every two inputs x and y, the probability that h(x) and h(y) mislead us into a wrong prediction of P(x, y) is negligible.
  • Adversarial Laws of Large Numbers and Optimal Regret in Online Classification
    Adversarial Laws of Large Numbers and Optimal Regret in Online Classification. Noga Alon, Omri Ben-Eliezer, Yuval Dagan, Shay Moran, Moni Naor, Eylon Yogev. Abstract: Laws of large numbers guarantee that given a large enough sample from some population, the measure of any fixed sub-population is well-estimated by its frequency in the sample. We study laws of large numbers in sampling processes that can affect the environment they are acting upon and interact with it. Specifically, we consider the sequential sampling model proposed by Ben-Eliezer and Yogev (2020), and characterize the classes which admit a uniform law of large numbers in this model: these are exactly the classes that are online learnable. Our characterization may be interpreted as an online analogue to the equivalence between learnability and uniform convergence in statistical (PAC) learning. The sample-complexity bounds we obtain are tight for many parameter regimes, and as an application, we determine the optimal regret bounds in online learning, stated in terms of Littlestone's dimension, thus resolving the main open question from Ben-David, Pál, and Shalev-Shwartz (2009), which was also posed by Rakhlin, Sridharan, and Tewari (2015). Affiliations: Noga Alon, Department of Mathematics, Princeton University, Princeton, New Jersey, USA and Schools of Mathematics and Computer Science, Tel Aviv University, Tel Aviv, Israel; research supported in part by NSF grant DMS-1855464, BSF grant 2018267 and the Simons Foundation. Omri Ben-Eliezer, Center for Mathematical Sciences and Applications, Harvard University, Massachusetts, USA; research partially conducted while the author was at Weizmann Institute of Science, supported in part by a grant from the Israel Science Foundation (no.
  • Download This PDF File
    The Gödel Prize 2012: Call for Nominations. Deadline: December 31, 2011. The Gödel Prize for outstanding papers in the area of theoretical computer science is sponsored jointly by the European Association for Theoretical Computer Science (EATCS) and the Association for Computing Machinery, Special Interest Group on Algorithms and Computation Theory (ACM-SIGACT). The award is presented annually, with the presentation taking place alternately at the International Colloquium on Automata, Languages, and Programming (ICALP) and the ACM Symposium on Theory of Computing (STOC). The 20th prize will be awarded at the 39th International Colloquium on Automata, Languages, and Programming to be held at the University of Warwick, UK, in July 2012. The Prize is named in honor of Kurt Gödel in recognition of his major contributions to mathematical logic and of his interest, discovered in a letter he wrote to John von Neumann shortly before von Neumann's death, in what has become the famous P versus NP question. The Prize includes an award of USD 5000. AWARD COMMITTEE: The winner of the Prize is selected by a committee of six members. The EATCS President and the SIGACT Chair each appoint three members to the committee, to serve staggered three-year terms. The committee is chaired alternately by representatives of EATCS and SIGACT. The 2012 Award Committee consists of Sanjeev Arora (Princeton University), Josep Díaz (Universitat Politècnica de Catalunya), Giuseppe Italiano (Università di Roma Tor Vergata), Mogens Nielsen (University of Aarhus), Daniel Spielman (Yale University), and Eli Upfal (Brown University). ELIGIBILITY: The rule for the 2011 Prize is given below and supersedes any different interpretation of the parametric rule to be found on websites of both SIGACT and EATCS.