Stealing Machine Learning Models via Prediction APIs

Florian Tramèr, École Polytechnique Fédérale de Lausanne (EPFL); Fan Zhang, Cornell University; Ari Juels, Cornell Tech, Jacobs Institute; Michael K. Reiter, The University of North Carolina at Chapel Hill; Thomas Ristenpart, Cornell Tech

This paper is included in the Proceedings of the 25th USENIX Security Symposium, August 10–12, 2016, Austin, TX. ISBN 978-1-931971-32-4. Open access to the Proceedings of the 25th USENIX Security Symposium is sponsored by USENIX.
https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/tramer

Abstract

Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service ("predictive analytics") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis.

The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., "steal") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures.

1 Introduction

Machine learning (ML) aims to provide automated extraction of insights from data by means of a predictive model. A predictive model is a function that maps feature vectors to a categorical or real-valued output. In a supervised setting, a previously gathered data set consisting of possibly confidential feature-vector inputs (e.g., digitized health records) with corresponding output class labels (e.g., a diagnosis) serves to train a predictive model that can generate labels on future inputs. Popular models include support vector machines (SVMs), logistic regressions, neural networks, and decision trees.

ML algorithms' success in the lab and in practice has led to an explosion in demand. Open-source frameworks such as PredictionIO and cloud-based services offered by Amazon, Google, Microsoft, BigML, and others have arisen to broaden and simplify ML model deployment. Cloud-based ML services often allow model owners to charge others for queries to their commercially valuable models. This pay-per-query deployment option exemplifies an increasingly common tension: The query interface of an ML model may be widely accessible, yet the model itself and the data on which it was trained may be proprietary and confidential. Models may also be privacy-sensitive because they leak information about training data [4, 23, 24]. For security applications such as spam or fraud detection [9, 29, 36, 55], an ML model's confidentiality is critical to its utility: An adversary that can learn the model can also often evade detection [4, 36].

In this paper we explore model extraction attacks, which exploit the tension between query access and confidentiality in ML models. We consider an adversary that can query an ML model (a.k.a. a prediction API) to obtain predictions on input feature vectors. The model may be viewed as a black box. The adversary may or may not know the model type (logistic regression, decision tree, etc.) or the distribution over the data used to train the model. The adversary's goal is to extract an equivalent or near-equivalent ML model, i.e., one that achieves (close to) 100% agreement on an input space of interest.

We demonstrate successful model extraction attacks against a wide variety of ML model types, including decision trees, logistic regressions, SVMs, and deep neural networks, and against production ML-as-a-service (MLaaS) providers, including Amazon and BigML.¹ In nearly all cases, our attacks yield models that are functionally very close to the target. In some cases, our attacks extract the exact parameters of the target (e.g., the coefficients of a linear classifier or the paths of a decision tree). For some targets employing a model type, parameters, or features unknown to the attacker, we additionally show a successful preliminary attack step involving reverse-engineering these model characteristics.

¹We simulated victims by training models in our own accounts. We disclosed our results to the affected services in February 2016.

Service   Model Type            Data set        Queries   Time (s)
Amazon    Logistic Regression   Digits            650        70
Amazon    Logistic Regression   Adult           1,485       149
BigML     Decision Tree         German Credit   1,150       631
BigML     Decision Tree         Steak Survey    4,013     2,088

Table 1: Results of model extraction attacks on ML services. For each target model, we report the number of prediction queries made to the ML API in an attack that extracts a 100% equivalent model. The attack time is primarily influenced by the service's prediction latency (≈100 ms/query for Amazon and ≈500 ms/query for BigML).

Our most successful attacks rely on the information-rich outputs returned by the ML prediction APIs of all cloud-based services we investigated. Those of Google, Amazon, Microsoft, and BigML all return high-precision confidence values in addition to class labels. They also respond to partial queries lacking one or more features. Our setting thus differs from traditional learning-theory settings [3, 7, 8, 15, 30, 33, 36, 53] that assume only membership queries, i.e., outputs consisting of a class label only. For example, for logistic regression, the confidence value is a simple log-linear function 1/(1 + e^(−(w·x+β))) of the d-dimensional input vector x. By querying d + 1 random d-dimensional inputs, an attacker can with high probability solve for the unknown d + 1 parameters w and β defining the model. We emphasize that while this model extraction attack is simple and non-adaptive, it affects all of the ML services we have investigated.
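To make the equation-solving attack concrete, here is a minimal sketch (our own illustration, not the paper's code) for a binary logistic regression. It assumes a hypothetical oracle query_confidence(x) standing in for the prediction API that returns the high-precision confidence 1/(1 + e^(−(w·x+β))); real services also return class labels and may round confidences, which the full attacks must handle.

```python
import numpy as np

def extract_logistic_regression(query_confidence, d, seed=0):
    """Sketch of the equation-solving attack on a binary logistic regression.

    query_confidence(x) is a hypothetical oracle returning the target's
    confidence 1 / (1 + exp(-(w.x + beta))) for a d-dimensional input x.
    """
    rng = np.random.default_rng(seed)
    n = d + 1                            # d + 1 unknowns: the d weights w and the bias beta
    X = rng.normal(size=(n, d))          # non-adaptive, random queries
    p = np.array([query_confidence(x) for x in X])

    # Invert the sigmoid: logit(p) = w.x + beta is linear in (w, beta),
    # so d + 1 generic queries yield a solvable linear system.
    logits = np.log(p / (1.0 - p))
    A = np.hstack([X, np.ones((n, 1))])  # extra column of 1s for the bias term
    theta, *_ = np.linalg.lstsq(A, logits, rcond=None)
    return theta[:-1], theta[-1]         # (w_hat, beta_hat)


# Toy victim standing in for a prediction API (purely illustrative).
d = 5
w_true, b_true = np.linspace(-1.0, 1.0, d), 0.3
victim = lambda x: 1.0 / (1.0 + np.exp(-(w_true @ x + b_true)))

w_hat, b_hat = extract_logistic_regression(victim, d)
# w_hat and b_hat match w_true and b_true up to floating-point error.
```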
Such equation-solving attacks extend to multiclass logistic regressions and neural networks, but do not work for decision trees, a popular model choice. (BigML, for example, initially offered only decision trees.) For decision trees, a confidence value reflects the number of training data points labeled correctly on an input's path in the tree; simple equation-solving is thus inapplicable. We show how confidence values can nonetheless be exploited as pseudo-identifiers for paths in the tree, facilitating discovery of the tree's structure. We demonstrate successful model extraction attacks that use adaptive, iterative search algorithms to discover paths in a tree.
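The sketch below illustrates the core idea of treating (label, confidence) pairs as pseudo-identifiers for leaves. It is a simplified illustration of our own, not the paper's full path-finding algorithm (which also exploits incomplete queries and per-node identifiers); the oracle query(x), returning a (label, confidence) pair, and the restriction to a single continuous feature are assumptions for illustration.

```python
import numpy as np

def leaf_id(query, x):
    """Pseudo-identifier for the leaf that x reaches: the (label, confidence)
    pair returned by the prediction API, which is (nearly) unique per leaf."""
    label, confidence = query(x)
    return (label, round(confidence, 6))

def find_split_thresholds(query, x, feature, lo, hi, eps=1e-4):
    """Scan one continuous feature over [lo, hi], starting from a base input x,
    and bisect every interval where the reached leaf changes. Each change point
    approximates a split threshold on that feature along the paths selected by
    x's other feature values. (A coarse grid may miss closely spaced splits.)"""
    def id_at(value):
        z = np.array(x, dtype=float)
        z[feature] = value
        return leaf_id(query, z)

    thresholds = []
    grid = np.linspace(lo, hi, 64)                 # coarse scan
    for a, b in zip(grid[:-1], grid[1:]):
        if id_at(a) == id_at(b):
            continue
        while b - a > eps:                         # refine by bisection
            mid = (a + b) / 2.0
            if id_at(a) == id_at(mid):
                a = mid
            else:
                b = mid
        thresholds.append((a + b) / 2.0)
    return thresholds
```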
Having demonstrated the broad applicability of model extraction attacks to existing services, we consider the most obvious potential countermeasure ML services might adopt: omission of confidence values, i.e., output of class labels only. This approach would place model extraction back in the membership query setting of prior work in learning theory [3, 8, 36, 53]. We demonstrate a generalization of an adaptive algorithm by Lowd and Meek [36] from binary linear classifiers to more complex model types, and also propose an attack inspired by the agnostic learning algorithm of Cohn et al. [18]. Our new attacks extract models matching targets on >99% of the input space for a variety of model classes, but need up to 100× more queries than equation-solving attacks (specifically for multiclass linear regression and neural networks). While less effective than equation-solving, these attacks remain attractive for certain types of adversary. We thus discuss further ideas for countermeasures.

In summary, we explore model extraction attacks, a practical kind of learning task that, in particular, affects emerging cloud-based ML services being built by Amazon, Google, Microsoft, BigML, and others. We show:

• Simple equation-solving model extraction attacks that use non-adaptive, random queries to solve for the parameters of a target model. These attacks affect a wide variety of ML models that output confidence values. We show their success against Amazon's service (using our own models as stand-ins for victims') and also report successful reverse-engineering of the (only partially documented) model type employed by Amazon.

• A new path-finding algorithm for extracting decision trees that abuses confidence values as quasi-identifiers for paths. To our knowledge, this is the first example of practical exact decision tree learning. We demonstrate the attack's efficacy via experiments on BigML.

• Model extraction attacks against models that output only class labels, the obvious countermeasure against extraction attacks that rely on confidence values. We show slower, but still potentially dangerous, attacks in this setting that build on prior work in learning theory (a simple retraining baseline for this setting is sketched after this list).

We additionally make a number of observations about the implications of extraction. For example, attacks against Amazon's system indirectly leak various summary statistics about a private training set, while extraction against kernel logistic regression models [57] recovers significant information about individual training data points.
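As a rough illustration of the label-only setting referenced above, the sketch below implements a plain retraining baseline: query the target on random inputs, keep only the predicted class labels, and fit a substitute model of the assumed class. This is our own simplification, not the Lowd-Meek-style or Cohn et al.-inspired attacks from the paper, which choose queries adaptively (e.g., near the decision boundary) and need far fewer queries; query_label is a hypothetical oracle returning only a class label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def retraining_extraction(query_label, d, n_queries=2000, seed=0):
    """Label-only extraction baseline: retrain a local substitute model on
    (random input, predicted label) pairs obtained from the target.

    query_label(x) is a hypothetical oracle returning only the predicted
    class label (assumed to take at least two distinct values here)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_queries, d))  # non-adaptive queries
    y = np.array([query_label(x) for x in X])        # class labels only
    substitute = LogisticRegression(max_iter=1000).fit(X, y)
    return substitute
```

Against nonlinear targets the same loop can fit a richer substitute (e.g., a small neural network); the adaptive label-only attacks in the paper differ mainly in how the query points are chosen.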
