A Brief Introduction to Boosting


Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, 1999.

Robert E. Schapire
AT&T Labs, Shannon Laboratory, Florham Park, NJ, USA
www.research.att.com/~schapire
schapire@research.att.com

Abstract

Boosting is a general method for improving the accuracy of any given learning algorithm. This short paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting. Some examples of recent applications of boosting are also described.

Background

Boosting is a general method which attempts to "boost" the accuracy of any given learning algorithm. Boosting has its roots in a theoretical framework for studying machine learning called the "PAC" learning model, due to Valiant; see Kearns and Vazirani for a good introduction to this model. Kearns and Valiant were the first to pose the question of whether a "weak" learning algorithm, which performs just slightly better than random guessing in the PAC model, can be "boosted" into an arbitrarily accurate "strong" learning algorithm. Schapire came up with the first provable polynomial-time boosting algorithm in 1989. A year later, Freund developed a much more efficient boosting algorithm which, although optimal in a certain sense, nevertheless suffered from certain practical drawbacks. The first experiments with these early boosting algorithms were carried out by Drucker, Schapire and Simard on an OCR task.

AdaBoost

The AdaBoost algorithm, introduced in 1995 by Freund and Schapire, solved many of the practical difficulties of the earlier boosting algorithms, and is the focus of this paper. Pseudocode for AdaBoost is given in Fig. 1. The algorithm takes as input a training set $(x_1, y_1), \ldots, (x_m, y_m)$ where each $x_i$ belongs to some domain or instance space $X$, and each label $y_i$ is in some label set $Y$. For most of this paper, we assume $Y = \{-1, +1\}$; later, we discuss extensions to the multiclass case. AdaBoost calls a given weak or base learning algorithm repeatedly in a series of rounds $t = 1, \ldots, T$. One of the main ideas of the algorithm is to maintain a distribution or set of weights over the training set. The weight of this distribution on training example $i$ on round $t$ is denoted $D_t(i)$. Initially, all weights are set equally, but on each round, the weights of incorrectly classified examples are increased so that the weak learner is forced to focus on the hard examples in the training set.

The weak learner's job is to find a weak hypothesis $h_t : X \to \{-1, +1\}$ appropriate for the distribution $D_t$. The goodness of a weak hypothesis is measured by its error

$\epsilon_t = \Pr_{i \sim D_t}[h_t(x_i) \neq y_i] = \sum_{i : h_t(x_i) \neq y_i} D_t(i)$.

Notice that the error is measured with respect to the distribution $D_t$ on which the weak learner was trained. In practice, the weak learner may be an algorithm that can use the weights $D_t$ on the training examples. Alternatively, when this is not possible, a subset of the training examples can be sampled according to $D_t$, and these (unweighted) resampled examples can be used to train the weak learner.

Given: $(x_1, y_1), \ldots, (x_m, y_m)$ where $x_i \in X$, $y_i \in Y = \{-1, +1\}$.
Initialize $D_1(i) = 1/m$.
For $t = 1, \ldots, T$:
  Train weak learner using distribution $D_t$.
  Get weak hypothesis $h_t : X \to \{-1, +1\}$ with error $\epsilon_t = \Pr_{i \sim D_t}[h_t(x_i) \neq y_i]$.
  Choose $\alpha_t = \frac{1}{2} \ln\left(\frac{1 - \epsilon_t}{\epsilon_t}\right)$.
  Update:
    $D_{t+1}(i) = \frac{D_t(i)}{Z_t} \times \begin{cases} e^{-\alpha_t} & \text{if } h_t(x_i) = y_i \\ e^{\alpha_t} & \text{if } h_t(x_i) \neq y_i \end{cases} = \frac{D_t(i) \exp(-\alpha_t y_i h_t(x_i))}{Z_t}$,
  where $Z_t$ is a normalization factor (chosen so that $D_{t+1}$ will be a distribution).
Output the final hypothesis:
  $H(x) = \operatorname{sign}\left(\sum_{t=1}^{T} \alpha_t h_t(x)\right)$.

Figure 1: The boosting algorithm AdaBoost.
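To make the pseudocode in Fig. 1 concrete, here is a minimal Python sketch (not part of the original paper). The weak-learner interface, an object with `fit(X, y, sample_weight=...)` and `predict(X)` returning labels in {-1, +1}, is an illustrative assumption; a depth-one decision tree ("decision stump") from a library such as scikit-learn is one natural choice.

```python
import numpy as np

def adaboost(X, y, make_weak_learner, T):
    """Minimal AdaBoost sketch following Fig. 1.

    X : array of shape (m, d), training instances
    y : array of shape (m,), labels in {-1, +1}
    make_weak_learner : zero-argument callable returning a fresh weak learner
        with fit(X, y, sample_weight=...) and predict(X) -> {-1, +1}
        (an assumed interface, not prescribed by the paper)
    T : number of boosting rounds
    """
    m = len(y)
    D = np.full(m, 1.0 / m)                  # D_1(i) = 1/m
    hypotheses, alphas = [], []

    for t in range(T):
        # Train weak learner using distribution D_t.
        h = make_weak_learner()
        h.fit(X, y, sample_weight=D)
        pred = h.predict(X)

        # Error eps_t of h_t, measured with respect to D_t.
        eps = float(np.sum(D[pred != y]))
        eps = min(max(eps, 1e-12), 1 - 1e-12)   # guard against log(0)

        # alpha_t = (1/2) ln((1 - eps_t) / eps_t)
        alpha = 0.5 * np.log((1.0 - eps) / eps)

        # D_{t+1}(i) = D_t(i) exp(-alpha_t y_i h_t(x_i)) / Z_t
        D = D * np.exp(-alpha * y * pred)
        D = D / D.sum()                       # dividing by Z_t keeps D a distribution

        hypotheses.append(h)
        alphas.append(alpha)

    def H(X_new):
        """Final hypothesis: sign of the weighted vote (ties return 0 in this sketch)."""
        votes = sum(a * h.predict(X_new) for a, h in zip(alphas, hypotheses))
        return np.sign(votes)

    return H, hypotheses, alphas
```

When the weak learner cannot accept example weights, the call to `fit` could instead be made on a subset resampled according to $D_t$, as described in the text.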
Figure 2: Error curves and the margin distribution graph for boosting C4.5 on the "letter" dataset, as reported by Schapire et al. Left: the training and test error curves (lower and upper curves, respectively) of the combined classifier as a function of the number of rounds of boosting. The horizontal lines indicate the test error rate of the base classifier as well as the test error of the final combined classifier. Right: the cumulative distribution of margins of the training examples after 5, 100, and 1000 iterations, indicated by short-dashed, long-dashed (mostly hidden), and solid curves, respectively.

Once the weak hypothesis $h_t$ has been received, AdaBoost chooses a parameter $\alpha_t$ as in the figure. Intuitively, $\alpha_t$ measures the importance that is assigned to $h_t$. Note that $\alpha_t \ge 0$ if $\epsilon_t \le 1/2$ (which we can assume without loss of generality), and that $\alpha_t$ gets larger as $\epsilon_t$ gets smaller.

The distribution $D_t$ is next updated using the rule shown in the figure. The effect of this rule is to increase the weight of examples misclassified by $h_t$, and to decrease the weight of correctly classified examples. Thus, the weight tends to concentrate on "hard" examples.

The final hypothesis $H$ is a weighted majority vote of the $T$ weak hypotheses, where $\alpha_t$ is the weight assigned to $h_t$.

Schapire and Singer show how AdaBoost and its analysis can be extended to handle weak hypotheses which output real-valued or confidence-rated predictions. That is, for each instance $x$, the weak hypothesis $h_t$ outputs a prediction $h_t(x) \in \mathbb{R}$ whose sign is the predicted label ($-1$ or $+1$) and whose magnitude $|h_t(x)|$ gives a measure of "confidence" in the prediction.

Analyzing the training error

The most basic theoretical property of AdaBoost concerns its ability to reduce the training error. Let us write the error $\epsilon_t$ of $h_t$ as $\frac{1}{2} - \gamma_t$. Since a hypothesis that guesses each instance's class at random has an error rate of $1/2$ (on binary problems), $\gamma_t$ thus measures how much better than random are $h_t$'s predictions. Freund and Schapire prove that the training error (the fraction of mistakes on the training set) of the final hypothesis $H$ is at most

$\prod_t \left[ 2 \sqrt{\epsilon_t (1 - \epsilon_t)} \right] = \prod_t \sqrt{1 - 4\gamma_t^2} \le \exp\left( -2 \sum_t \gamma_t^2 \right). \quad (1)$

Thus, if each weak hypothesis is slightly better than random so that $\gamma_t \ge \gamma$ for some $\gamma > 0$, then the training error drops exponentially fast.

A similar property is enjoyed by previous boosting algorithms. However, previous algorithms required that such a lower bound $\gamma$ be known a priori before boosting begins. In practice, knowledge of such a bound is very difficult to obtain. AdaBoost, on the other hand, is adaptive in that it adapts to the error rates of the individual weak hypotheses. This is the basis of its name: "Ada" is short for "adaptive."

The bound given in Eq. (1), combined with the bounds on generalization error given below, prove that AdaBoost is indeed a boosting algorithm in the sense that it can efficiently convert a weak learning algorithm (which can always generate a hypothesis with a weak edge for any distribution) into a strong learning algorithm (which can generate a hypothesis with an arbitrarily low error rate, given sufficient data).
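As a quick illustration of how fast the bound of Eq. (1) shrinks (an illustrative calculation, not from the paper), suppose every weak hypothesis has the same edge $\gamma_t = \gamma = 0.1$, i.e., error $0.4$. Then the bound becomes $(1 - 4\gamma^2)^{T/2} \le e^{-2T\gamma^2}$:

```python
import math

# Illustrative only: training-error bound of Eq. (1) when every round has the
# same edge gamma, i.e. eps_t = 1/2 - gamma for all t.
gamma = 0.1
for T in (10, 100, 500):
    product_form = (1.0 - 4.0 * gamma**2) ** (T / 2.0)   # prod_t sqrt(1 - 4*gamma_t^2)
    exp_form = math.exp(-2.0 * T * gamma**2)             # exp(-2 * sum_t gamma_t^2)
    print(f"T={T:3d}  product-form bound={product_form:.2e}  exp bound={exp_form:.2e}")
```

In particular, since the training error is a multiple of $1/m$, the bound forces it to be exactly zero once $\exp(-2T\gamma^2) < 1/m$, i.e., after roughly $(\ln m)/(2\gamma^2)$ rounds.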
Figure 3: Comparison of C4.5 versus boosting stumps and boosting C4.5 on a set of benchmark problems, as reported by Freund and Schapire. Each point in each scatterplot shows the test error rate of the two competing algorithms on a single benchmark. The y-coordinate of each point gives the test error rate (in percent) of C4.5 on the given benchmark, and the x-coordinate gives the error rate of boosting stumps (left plot) or boosting C4.5 (right plot). All error rates have been averaged over multiple runs.

Generalization error

Freund and Schapire showed how to bound the generalization error of the final hypothesis in terms of its training error, the size $m$ of the sample, the VC-dimension $d$ of the weak hypothesis space, and the number of rounds $T$ of boosting. (The VC-dimension is a standard measure of the "complexity" of a space of hypotheses; see, for instance, Blumer et al.) Specifically, they used techniques from Baum and Haussler to show that the generalization error, with high probability, is at most

$\hat{\Pr}\left[ H(x) \neq y \right] + \tilde{O}\left( \sqrt{\frac{Td}{m}} \right)$

where $\hat{\Pr}[\cdot]$ denotes empirical probability on the training sample. This bound suggests that boosting will overfit if run for too many rounds, i.e., as $T$ becomes large. In fact, this sometimes does happen. However, in early experiments, several authors observed empirically that boosting often does not overfit, even when run for thousands of rounds. Moreover, it was observed that AdaBoost would sometimes continue to drive down the generalization error long after the training error had reached zero, clearly contradicting the spirit of the bound above. For instance, the left side of Fig. 2 shows the training and test curves of running boosting on top of Quinlan's C4.5 decision-tree learning algorithm on the "letter" dataset.

In response to these empirical findings, Schapire et al. gave an alternative analysis in terms of the margins of the training examples. The margin of example $(x, y)$ is defined to be

$\mathrm{margin}(x, y) = \frac{y \sum_t \alpha_t h_t(x)}{\sum_t \alpha_t}$.

It is a number in $[-1, +1]$ which is positive if and only if $H$ correctly classifies the example. Moreover, the magnitude of the margin can be interpreted as a measure of confidence in the prediction. Schapire et al. proved that larger margins on the training set translate into a superior upper bound on the generalization error. Specifically, the generalization error is at most

$\hat{\Pr}\left[ \mathrm{margin}(x, y) \le \theta \right] + \tilde{O}\left( \sqrt{\frac{d}{m \theta^2}} \right)$

for any $\theta > 0$, with high probability. Note that this bound is entirely independent of $T$, the number of rounds of boosting. In addition, Schapire et al. proved that boosting is particularly aggressive at reducing the margin (in a quantifiable sense) since it concentrates on the examples with the smallest margins (whether positive or negative). Boosting's effect on the margins can be seen empirically, for instance, on the right side of Fig. 2, which shows the cumulative distribution of margins of the training examples.
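The margins described above are easy to compute from a trained combination. The following sketch (using the same assumed weak-learner interface and the `hypotheses`/`alphas` lists returned by the AdaBoost sketch earlier, both illustrative) evaluates the normalized margins and the empirical term $\hat{\Pr}[\mathrm{margin}(x, y) \le \theta]$ that appears in the bound.

```python
import numpy as np

def training_margins(hypotheses, alphas, X, y):
    """Normalized margins y * sum_t alpha_t h_t(x) / sum_t alpha_t for each example.

    Values lie in [-1, +1] and are positive exactly when the combined
    hypothesis H classifies the example correctly.
    """
    alphas = np.asarray(alphas, dtype=float)
    votes = sum(a * h.predict(X) for a, h in zip(alphas, hypotheses))
    return y * votes / alphas.sum()

def empirical_margin_mass(hypotheses, alphas, X, y, theta):
    """Fraction of training examples with margin at most theta, i.e. the
    empirical probability appearing in the margin-based generalization bound."""
    return float(np.mean(training_margins(hypotheses, alphas, X, y) <= theta))
```

Plotting the output of `training_margins` as a cumulative distribution over a range of $\theta$ reproduces the kind of margin-distribution graph shown on the right of Fig. 2.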