MLBench: Benchmarking Machine Learning Services Against Human Experts

Yu Liu (ETH Zurich), Hantian Zhang (ETH Zurich), Luyuan Zeng (ETH Zurich), Wentao Wu (Microsoft Research), Ce Zhang (ETH Zurich)

ABSTRACT

Modern machine learning services and systems are complicated data systems: the process of designing such systems is an art of compromising between functionality, performance, and quality. Providing different levels of system support for different functionalities, such as automatic feature engineering, model selection and ensemble, and hyperparameter tuning, can improve quality, but also introduces additional cost and system complexity. In this paper, we try to facilitate the process of asking the following type of question: How much will users lose if we remove the support of functionality x from a machine learning service?

Answering this type of question using existing datasets, such as the UCI datasets, is challenging. The main contribution of this work is a novel dataset, MLBench, harvested from Kaggle competitions. Unlike existing datasets, MLBench contains not only the raw features for a machine learning task, but also those used by the winning teams of Kaggle competitions. The winning features serve as a baseline of best human effort that enables multiple ways to measure the quality of machine learning services that cannot be supported by existing datasets, such as relative ranking on Kaggle and relative accuracy compared with best-effort systems.

We then conduct an empirical study using MLBench to understand example machine learning services from Amazon and Microsoft Azure, and showcase how MLBench enables a comparative study that reveals the strengths and weaknesses of these existing machine learning services quantitatively and systematically.

The full version of this paper can be found at arxiv.org/abs/1707.09562.

PVLDB Reference Format:
Yu Liu, Hantian Zhang, Luyuan Zeng, Wentao Wu, and Ce Zhang. MLBench: Benchmarking Machine Learning Services Against Human Experts. PVLDB, 11(10): 1220-1232, 2018.
DOI: https://doi.org/10.14778/3231751.3231770

1. INTRODUCTION

Modern machine learning services and toolboxes provide functionalities that surpass training a single machine learning model. Instead, these services and systems support a range of different machine learning models, different ways of training, and auxiliary utilities such as model selection, model ensemble, hyperparameter tuning, and feature selection. Azure Machine Learning Studio [1] and Amazon Machine Learning [2] are prominent examples.

Motivating Questions: How good is good? The ongoing advancement of machine learning services raises some natural questions, from the perspectives of both service users and providers. We currently host a service similar to Azure Machine Learning Studio, namely ease.ml [31], for our local users at ETH Zurich, and this work is motivated by the following two questions.

• User: How good are machine learning services such as Azure Machine Learning Studio and ease.ml? We were unable to provide our user (a biologist) with a good answer, because we did not have a reasonable measure of "good" that is easy to convey.

• Provider (Us): Do we need to provide more machine learning models on ease.ml (than Azure provides) to our local user? We were unable to answer this question because we did not know how to reasonably measure the potential improvement in accuracy for a given machine learning task.

Baseline Approaches. We are not satisfied with two strategies: (1) run a standard dataset from the UCI repository [21] and report the absolute accuracy, or (2) run an existing system (e.g., scikit-learn) on a standard dataset and report the difference (i.e., relative accuracy) between it and Azure Machine Learning Studio. Neither the absolute accuracy nor the relative accuracy over a baseline system reveals the full picture of a machine learning service, as both depend on the hardness of the given machine learning task and the maturity of the baseline system, which are often opaque to our users.

MLBench. In this paper, we present MLBench, a novel dataset that we constructed to benchmark machine learning systems and services. The goal of MLBench is to provide a quantitative quality measure to answer questions such as: what would users lose compared with a best-effort baseline? To construct MLBench, we first collected real-world datasets and best-effort solutions harvested from Kaggle competitions, and then developed multiple performance metrics by comparing the relative performance of machine learning services with top-ranked solutions in Kaggle competitions.
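To make these relative metrics concrete, below is a minimal sketch of how one could compute them given a service's test score and a competition leaderboard. The function names and the leaderboard representation (a plain list of scores, higher is better) are illustrative assumptions, not part of MLBench's actual tooling.

```python
# Minimal sketch of two relative quality measures against a Kaggle
# best-effort baseline. Names and data layout are assumptions made
# for illustration only.

def relative_accuracy(service_acc: float, winning_acc: float) -> float:
    """Fraction of the winning (best-effort) accuracy that the
    machine learning service attains on the same task."""
    return service_acc / winning_acc

def rank_on_leaderboard(service_score: float, leaderboard: list) -> int:
    """1-based rank the service's score would have held on the
    leaderboard, assuming higher scores are better."""
    return 1 + sum(score > service_score for score in leaderboard)

# Example: a service reaching 0.92 AUC where the winner reached 0.95.
print(relative_accuracy(0.92, 0.95))                        # ~0.968
print(rank_on_leaderboard(0.92, [0.95, 0.94, 0.90, 0.85]))  # 3
```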
An Empirical Study. We present an example use case of MLBench by conducting an empirical study of two online machine learning services, i.e., Azure Machine Learning Studio and Amazon Machine Learning. As we will see, MLBench provides a way for us to decouple the multiple factors that have an impact on quality: each individual machine learning model, and external factors such as feature engineering and model ensemble.

Primer of Kaggle Competitions. Kaggle [3] is a popular platform hosting a range of machine learning competitions. Companies or scientists can host their real-world applications on Kaggle; each user has access to the training and testing datasets, submits their solutions to Kaggle, and gets a quality score on the test set. Kaggle motivates users with prizes for top winning entries. This "crowdsourcing" nature of Kaggle makes it a representative sample of real-world machine learning workloads.
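To illustrate the scoring step in this workflow, here is a hedged sketch of how a submission could be evaluated against hidden test labels. The file layout, the column names (id, prediction, label), and the choice of AUC as the metric are assumptions for illustration; they are not Kaggle's actual internal implementation.

```python
# Hypothetical sketch of scoring a Kaggle-style submission against
# the hidden test labels, with AUC as the competition metric (a
# common choice for binary classification).
import pandas as pd
from sklearn.metrics import roc_auc_score

def score_submission(submission_csv: str, hidden_labels_csv: str) -> float:
    pred = pd.read_csv(submission_csv)      # columns: id, prediction
    truth = pd.read_csv(hidden_labels_csv)  # columns: id, label
    merged = truth.merge(pred, on="id")     # align rows by example id
    return roc_auc_score(merged["label"], merged["prediction"])
```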
Summary of Technical Contributions.

C1. We present the MLBench benchmark. One prominent feature of MLBench is that each of its datasets comes with a best-effort baseline of both feature engineering and machine learning models.

C2. We propose a novel performance metric based on the notion of "quality tolerance" that measures the performance gap between a given machine learning system and top-ranked Kaggle performers.

C3. We showcase one application of MLBench by evaluating two online machine learning services: Azure Machine Learning Studio and Amazon Machine Learning. Our experimental results reveal interesting strengths and limitations of both clouds. Detailed analysis of the results further points out promising future directions for improving both machine learning services.

Scope of Study. We study the three most popular machine learning tasks: binary classification, multi-class classification, and regression. Due to space limitations, we focus our discussion on binary classification. As we will see, even with this constrained scope, there is no simple, single answer to the main question we aim to answer. We also briefly cover multi-class classification and regression but leave the complete details to the full version of this paper [32].

Reproducibility. We have made MLBench publicly available at the following website: http://ds3lab.org/mlbench.

Overview. The rest of the paper is organized as follows. We start with an in-depth discussion of why existing datasets such as the UCI repository are not sufficient. We then present our methodology and the MLBench benchmark in Section 3. We next present experimental settings and evaluation results in Sections 4-6. We summarize related work in Section 8 and conclude in Section 9.

2. MOTIVATION

Over the years, researchers have applied many techniques to the UCI datasets [21] and shown incremental improvements. Suppose that a machine learning service achieves accuracy a1 on a dataset D, while the best-known solution achieves accuracy a2. Questions naturally arise, for example:

• How significant is the gap between a1 and a2? Essentially, how difficult is the learning task defined over D? (An accuracy difference of 0.01 may mean a lot if D is a tough case.)

None of the above questions can be answered satisfactorily without a comparison against the frontier of current machine learning techniques. As far as we know, MLBench is the first, rudimentary attempt to provide a baseline for reasoning about and understanding the absolute performance of machine learning services.

One alternative approach to MLBench could be to stay with an existing benchmark such as the UCI datasets but try to find the best features and models, not only those models that are provided by machine learning services, and run them for each dataset. We see at least two potential shortcomings of this approach. First, it is costly to engineer features that achieve the best accuracy for each UCI dataset. Second, it is expensive to collect all existing models: there are simply too many, and one may have to resort to a shortened list (e.g., the models that have been implemented in Weka [27]). Moreover, it remains unclear how representative the datasets in UCI and the models in Weka are; they are certainly quite different from the datasets and models that people have used in Kaggle competitions. It is important to explore alternatives to MLBench, but doing so is beyond the scope of this paper.

3. THE MLBENCH BENCHMARK

In this section, we present the details of the MLBench benchmark, constructed by harvesting winning code of Kaggle competitions.

3.1 Kaggle Competitions

Kaggle hosts various types of competitions for data scientists. There are seven different competition categories, and we are particularly interested in the category "Featured," which aims to solve commercial, real-world machine learning problems.
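As a closing illustration of the "quality tolerance" notion from contribution C2, the sketch below reads tolerance as a leaderboard percentile: the smallest tau such that the model's score would place it in the top tau percent of competitors. The paper's formal definition may differ in its details; this is only meant to convey the intuition.

```python
# Sketch of a percentile-based reading of "quality tolerance":
# the smallest tau (in percent) such that the model's score falls
# within the top tau percent of Kaggle competitors. Higher scores
# are assumed to be better. Illustrative only.

def quality_tolerance(model_score: float, leaderboard: list) -> float:
    rank = 1 + sum(score > model_score for score in leaderboard)
    return 100.0 * rank / len(leaderboard)

# Example: a score beaten by 49 of 1000 leaderboard entries gives
# quality_tolerance(...) == 5.0, i.e., a top-5% result.
```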
