In Proceedings of the 4th ACM Workshop on Artificial Intelligence and Security (AISec 2011), October 2011, pp. 43-58.

Adversarial Machine Learning∗

Ling Huang (Intel Labs Berkeley), Anthony D. Joseph (UC Berkeley), Blaine Nelson (University of Tübingen), Benjamin I. P. Rubinstein (Microsoft Research), J. D. Tygar (UC Berkeley)

ABSTRACT

In this paper (expanded from an invited talk at AISec 2010), we discuss an emerging field of study: adversarial machine learning—the study of effective machine learning techniques against an adversarial opponent. In this paper, we: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models for modeling an adversary's capabilities; explore the limits of an adversary's knowledge about the algorithm, feature space, training, and input data; explore vulnerabilities in machine learning algorithms; discuss countermeasures against attacks; introduce the evasion challenge; and discuss privacy-preserving learning techniques.

Categories and Subject Descriptors

D.4.6 [Security and Protection]: Invasive software (e.g., viruses, worms, Trojan horses); I.5.1 [Models]: Statistical; I.5.2 [Design Methodology]

General Terms

Algorithms, Design, Security, Theory

Keywords

Adversarial Learning, Computer Security, Game Theory, Intrusion Detection, Machine Learning, Security Metrics, Spam Filters, Statistical Learning

1. INTRODUCTION

In this paper, we discuss an emerging field of study: adversarial machine learning—the study of effective machine learning techniques against an adversarial opponent. To see why this field is needed, it is helpful to recall a common metaphor: security is sometimes thought of as a chess game between two players. For a player to win, it is not only necessary to have an effective strategy; one must also anticipate the opponent's response to that strategy.

Statistical machine learning has already become an important tool in a security engineer's repertoire. However, machine learning in an adversarial environment requires us to anticipate that our opponent will try to cause machine learning to fail in many ways. In this paper, we discuss both a theoretical framework for understanding adversarial machine learning and a number of specific examples illustrating how these techniques succeed or fail.

Advances in computing capabilities have made online statistical machine learning a practical and useful tool for solving large-scale decision-making problems in many systems and networking domains, including spam filtering, network intrusion detection, and virus detection [36, 45, 60]. In these domains, a machine learning algorithm, such as a Bayesian learner or a Support Vector Machine (SVM) [14], is typically periodically retrained on new input data.

Unfortunately, sophisticated adversaries are well aware that online machine learning is being applied, and we have substantial evidence that they frequently attempt to break many of the assumptions that practitioners make (e.g., that the data has various weak stochastic properties; independence; a stationary data distribution).

The lack of stationarity provides ample opportunity for mischief during the training (including periodic re-training) and classification stages.
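To make this periodic-retraining setting concrete, the following is a minimal sketch (not from the paper) of a learner that folds each newly arriving batch of data into an online linear classifier. The synthetic batch generator `next_batch` and the choice of scikit-learn's `SGDClassifier` are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A minimal sketch of periodic retraining on newly arriving data. The batch
# source is a stand-in: in practice batches would be labeled messages,
# network flows, or binaries collected since the last retraining period.

rng = np.random.default_rng(0)

def next_batch(n=100):
    """Hypothetical data stream: half benign (label 0), half malicious (label 1)."""
    X = np.vstack([rng.normal(0.0, 1.0, size=(n // 2, 10)),
                   rng.normal(2.0, 1.0, size=(n // 2, 10))])
    y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
    return X, y

clf = SGDClassifier(loss="hinge")   # an online linear (SVM-like) learner
classes = np.array([0.0, 1.0])

for period in range(5):             # e.g., one retraining period per day
    X, y = next_batch()
    # If an adversary controls part of this batch, the contamination is
    # folded directly into the model at the next update.
    clf.partial_fit(X, y, classes=classes)
    print(f"period {period}: training accuracy {clf.score(X, y):.2f}")
```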
In many cases, the adversary is able to poison the learner's classifications, often in a highly targeted manner. For instance, an adversary can craft input data that has similar feature properties to normal data (e.g., creating a spam message that appears to be non-spam to the learner), or can exhibit Byzantine behaviors by crafting input data that, when retrained on, causes the learner to learn an incorrect decision-making function. These sophisticated adversaries are patient and adapt their behaviors to achieve various goals, such as avoiding detection of attacks, causing benign input to be classified as attack input, launching focused or targeted attacks, or searching a classifier to find blind-spots in the algorithm.

Adversarial machine learning is the design of machine learning algorithms that can resist these sophisticated attacks, and the study of the capabilities and limitations of attackers. In this paper, we: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models for modeling an adversary's capabilities; explore the limits of an adversary's knowledge about the algorithm, feature space, training, and input data; explore vulnerabilities in machine learning algorithms; discuss countermeasures against attacks; introduce the evasion challenge; and discuss privacy-preserving learning techniques.

∗This paper expands upon J. D. Tygar's invited talk at AISec 2010 on Adversarial Machine Learning describing the SecML project at UC Berkeley, and includes material from many of our collaborators. We kindly thank this year's AISec organizers for allowing us to present this paper.

2. TAXONOMY

In prior work [2], we introduced a qualitative taxonomy of attacks against a machine learning system, which has since been extended to propose a framework for quantitatively evaluating security threats [37]. Our taxonomy categorizes an attack based on the following three properties:

Influence
  • Causative - Causative attacks alter the training process through influence over the training data.
  • Exploratory - Exploratory attacks do not alter the training process but use other techniques, such as probing the detector, to discover information about it or its training data.

Security violation
  • Integrity - Integrity attacks result in intrusion points being classified as normal (false negatives).
  • Availability - Availability attacks cause so many classification errors, both false negatives and false positives, that the system becomes effectively unusable.
  • Privacy - In a privacy violation, the adversary obtains information from the learner, compromising the secrecy or privacy of the system's users.

Specificity (a continuous spectrum)
  • Targeted - In a targeted attack, the focus is on a single or small set of target points.
  • Indiscriminate - An indiscriminate adversary has a more flexible goal that involves a very general class of points, such as "any false negative."

The first axis describes the capability of the attacker: whether (a) the attacker has the ability to influence the training data that is used to construct the classifier (a causative attack) or (b) the attacker does not influence the learned classifier, but can send new instances to the classifier and possibly observe its decisions on these carefully crafted instances (an exploratory attack).

The second axis indicates the type of security violation the attacker causes: either (a) allowing harmful instances to slip through the filter as false negatives (an integrity violation); (b) creating a denial of service event in which benign instances are incorrectly filtered as false positives (an availability violation); or (c) obtaining information from the learner that compromises the secrecy or privacy of the system's users (a privacy violation).

The third axis refers to how specific the attacker's intention is: whether the attack is narrowly targeted at particular instances or instead aims to degrade the classifier's performance in an indiscriminate fashion on a broad class of instances. Each axis, especially this one, can potentially be a spectrum of choices, but for simplicity, we will categorize attacks and defenses into these groupings.

2.1 Game Specification and Interpretation

We model secure learning systems as a game between an attacker and a defender—the attacker manipulates data to mis-train or evade a learning algorithm chosen by the defender to thwart the attacker's objective. This game can be formalized in terms of a learning algorithm H and the attacker's data corruption strategies A^(train) and A^(eval). The resulting game can be described as follows:

  1. Defender: Choose learning algorithm H for selecting hypotheses based on observed data
  2. Attacker: Choose attack procedures A^(train) and A^(eval) (potentially with knowledge of H)
  3. Learning:
     • Obtain dataset D^(train) with contamination from A^(train)
     • Learn hypothesis: f ← H(D^(train))
  4. Evaluation:
     • Obtain dataset D^(eval) with contamination from A^(eval)
     • Compare predictions f(x) to y for each data point (x, y) ∈ D^(eval)
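To make this abstract game concrete, the following is a minimal sketch in Python of one round of the game. The nearest-centroid learner, the particular corruption strategies, and all function names are illustrative assumptions, not part of the formal model.

```python
import numpy as np

# A minimal sketch of one round of the game above. The toy nearest-centroid
# learner and the specific corruption strategies are illustrative only.

def learner_H(D_train):
    """Defender's learning algorithm H: fit a hypothesis f on the
    (possibly contaminated) training data D^(train)."""
    X, y = D_train
    centroids = {label: X[y == label].mean(axis=0) for label in np.unique(y)}
    def f(x):
        # Predict the label of the nearest class centroid.
        return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
    return f

def A_train(D, n_poison=10, rng=np.random.default_rng(0)):
    """Attacker's training-time strategy A^(train): inject points that lie
    in the benign region but carry the malicious label, shifting the
    learned malicious centroid toward benign traffic."""
    X, y = D
    poison = rng.normal(loc=0.0, scale=0.5, size=(n_poison, X.shape[1]))
    return np.vstack([X, poison]), np.concatenate([y, np.ones(n_poison)])

def A_eval(D):
    """Attacker's evaluation-time strategy A^(eval): here the identity
    (an exploratory attacker would instead craft evasive instances)."""
    return D

# One round of the game.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),     # benign    (label 0)
               rng.normal(4.0, 1.0, size=(50, 2))])    # malicious (label 1)
y = np.concatenate([np.zeros(50), np.ones(50)])

D_train = A_train((X, y))                # step 3: contaminated D^(train)
f = learner_H(D_train)                   #         f <- H(D^(train))
X_eval, y_eval = A_eval((X, y))          # step 4: evaluation data D^(eval)
errors = sum(f(x) != t for x, t in zip(X_eval, y_eval))
print(f"misclassified {errors} of {len(y_eval)} evaluation points")
```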
This game structure describes the fundamental interactions between the defender and attacker in choosing H, A^(train), and A^(eval); these steps are depicted in Figure 1. The defender chooses H to select hypotheses that predict well regardless of A^(train) and A^(eval), while the attacker chooses A^(train) and A^(eval) to produce poor predictions. The characteristics specified by the taxonomy's axes further specify some aspects of this game. The influence axis determines the structure of the game and the legal moves that each player can make. In exploratory attacks, the procedure A^(train) is not used in the game, and thereby the attacker only influences D^(eval). Meanwhile, in the causative game the attacker also has indirect influence on f through his choice of A^(train). The specificity and security violation axes of the taxonomy determine which instances the adversary would like to have misclassified during the evaluation phase. In an integrity attack, the attacker desires false negatives and therefore will use A^(train) and/or A^(eval) to create or discover false negatives, whereas in an availability attack, the attacker will also try to create or exploit false positives. Finally, in a targeted attack the attacker only cares about the predictions for a small number of instances, while an indiscriminate attacker cares about predictions for a broad range of instances.
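As a self-contained illustration of the exploratory case just described, the sketch below probes a fixed detector f with modified copies of a malicious instance until it finds a false negative; the linear score function, threshold, and step direction are all hypothetical choices made for this example.

```python
import numpy as np

# A minimal sketch of an exploratory integrity attack: A^(train) plays no
# role, the hypothesis f is fixed, and the attacker only perturbs an
# evaluation instance, re-submitting it until the detector misses it.

def f(x, w=np.array([1.0, 1.0]), b=-4.0):
    """Fixed, already-trained detector: flag x as malicious (1) if its score is positive."""
    return int(x @ w + b > 0)

x = np.array([3.0, 3.0])         # a malicious instance, initially detected
step = np.array([-0.25, -0.25])  # perturbation trading off attack "strength"

queries = 0
while f(x) == 1 and queries < 50:
    x = x + step                 # modify the instance and probe the detector again
    queries += 1

print(f"after {queries} probes, f(x) = {f(x)} at x = {x}")
# f(x) == 0 here: the crafted instance is a false negative (an integrity violation).
```

A causative attacker would instead combine such probing with contamination of D^(train), as in the game sketch above.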
