UC Berkeley Electronic Theses and Dissertations

Title: Secure Learning and Learning for Security: Research in the Intersection
Author: Rubinstein, Benjamin
Publication Date: 2010
Permalink: https://escholarship.org/uc/item/3tj8f7q4
Peer reviewed | Thesis/dissertation

Secure Learning and Learning for Security: Research in the Intersection

by Benjamin Rubinstein

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science and the Designated Emphasis in Communication, Computation, and Statistics in the Graduate Division of the University of California, Berkeley.

Committee in charge: Professor Peter L. Bartlett (Chair), Professor Anthony D. Joseph, Professor Andrew E. B. Lim

Spring 2010

Copyright 2010 by Benjamin Rubinstein

Abstract

Statistical Machine Learning is used in many real-world systems, such as web search, network and power management, online advertising, finance, and health services, in which adversaries are incentivized to attack the learner. This motivates the urgent need for a better understanding of the security vulnerabilities of adaptive systems. Conversely, research in Computer Security stands to reap great benefits by leveraging learning to build adaptive defenses and even to design intelligent attacks on existing systems. This dissertation contributes new results in the intersection of Machine Learning and Security, relating to both of these complementary research agendas.

The first part of this dissertation considers Machine Learning under the lens of Computer Security, where the goal is to learn in the presence of an adversary. Two large case-studies on email spam filtering and network-wide anomaly detection explore adversaries that manipulate a learner by poisoning its training data. In the first study, the False Positive Rate (FPR) of an open-source spam filter is increased to 40% by feeding the filter a training set made up of 99% regular legitimate and spam messages and 1% dictionary-attack spam messages containing legitimate words; by increasing the FPR, the adversary effects a Denial of Service attack on the filter. In the second case-study, the False Negative Rate of a popular network-wide anomaly detector based on Principal Components Analysis is increased 7-fold (increasing the attacker's chance of subsequent evasion by the same amount) by a variance injection attack in which chaff traffic is inserted into the network at training time; this high-variance chaff increases the traffic volume by only 10%. In both cases the effects of increasing the information or the control available to the adversary are explored, and effective counter-measures are thoroughly evaluated, including a method based on Robust Statistics for the network anomaly detection domain.
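To make the flavor of the dictionary attack concrete, the toy sketch below poisons a simplified token-count Naive Bayes filter. This is a stand-in, not SpamBayes and not the dissertation's experimental setup: the corpora, message lengths, and poisoning fraction are invented, and the fraction of attack messages is exaggerated relative to the 1% used in the real experiments so the effect is visible in a tiny simulation.

```python
# Toy illustration of a dictionary-style poisoning attack on a token-based
# spam filter. Simplified multinomial Naive Bayes stand-in, NOT SpamBayes;
# corpora, message lengths, and poisoning fraction are invented.
import math
import random
from collections import Counter

def train(messages):
    """messages: iterable of (tokens, label), label in {'spam', 'ham'}."""
    counts = {"spam": Counter(), "ham": Counter()}
    priors = Counter()
    for tokens, label in messages:
        counts[label].update(tokens)
        priors[label] += 1
    return counts, priors

def spam_log_odds(tokens, counts, priors, alpha=1.0):
    """log P(spam | tokens) - log P(ham | tokens) under multinomial NB."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    totals = {c: sum(counts[c].values()) for c in counts}
    llr = math.log(priors["spam"] / priors["ham"])
    for t in tokens:
        p_spam = (counts["spam"][t] + alpha) / (totals["spam"] + alpha * len(vocab))
        p_ham = (counts["ham"][t] + alpha) / (totals["ham"] + alpha * len(vocab))
        llr += math.log(p_spam / p_ham)
    return llr  # > 0 means the filter marks the message as spam

random.seed(0)
english = [f"word{i}" for i in range(2000)]   # stand-in legitimate vocabulary
spammy = [f"pill{i}" for i in range(2000)]    # stand-in spam vocabulary

ham = [(random.sample(english, 50), "ham") for _ in range(100)]
spam = [(random.sample(spammy, 50), "spam") for _ in range(100)]
test_ham = [(random.sample(english, 50), "ham") for _ in range(200)]

# Dictionary attack: each attack message is labeled spam but contains the
# entire legitimate vocabulary, so ordinary words accrue spam evidence.
attack = [(list(english), "spam") for _ in range(20)]

def false_positive_rate(training):
    counts, priors = train(training)
    flagged = sum(spam_log_odds(toks, counts, priors) > 0 for toks, _ in test_ham)
    return flagged / len(test_ham)

print("FPR on clean training:   ", false_positive_rate(ham + spam))
print("FPR on poisoned training:", false_positive_rate(ham + spam + attack))
```

Because every attack message contributes the legitimate vocabulary to the spam class, retraining drags ordinary words' spam-conditional probabilities upward, and previously innocuous messages begin to score as spam; this is the Denial of Service effect described above.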
The second class of attack explored on learning systems involves an adversary aiming to evade detection by a previously-trained classifier. In the evasion problem the attacker searches for a negative instance of almost-minimal distance to some target positive instance, by submitting a small number of queries to the classifier. Efficient query algorithms are developed for almost-minimizing Lp cost over any classifier that partitions feature space into two classes, one of which is convex. For the case of a convex positive class and p ≤ 1, algorithms with linear query complexity are provided, along with nearly matching lower bounds; when p > 1 a threshold phenomenon occurs whereby exponential query complexity is necessary for good approximations. For the case of a convex negative class and p ≥ 1, a randomized Ellipsoid-based algorithm finds almost-minimizers with polynomial query complexity. These results show that learning the decision boundary is sufficient, but not necessary, for evasion, and can require much greater query complexity.

The third class of attack aims to violate the confidentiality of the learner's training data given access to a learned hypothesis. Mechanisms for releasing Support Vector Machine (SVM) classifiers are developed. Algorithmic stability of the SVM is used to prove that the mechanisms preserve differential privacy, meaning that an attacker with knowledge of all but one training example and of the learning map can determine very little about the remaining unknown example from access to the trained classifier. Bounds on utility are established for the mechanisms: the privacy-preserving classifiers' predictions approximate the SVM's predictions with high probability. In the case of learning with translation-invariant kernels corresponding to infinite-dimensional feature spaces (such as the RBF kernel), a recent result from large-scale learning is used to enable a finite encoding of the SVM while maintaining utility and privacy. Finally, lower bounds on achievable differential privacy are derived for any mechanism that well-approximates the SVM.

The second part of this dissertation considers Security under the lens of Machine Learning. The first application of Machine Learning is to a learning-based reactive defense. The Chief Information Security Officer's (CISO) risk management problem is modeled as a repeated game in which the defender must allocate security budget to the edges of a graph in order to minimize the additive profit or return on attack (ROA) enjoyed by an attacker. Via a reduction to results from Online Learning, it is shown that the profit/ROA from attacking the reactive strategy approaches, over time, that of attacking the best fixed proactive strategy. This result contradicts the conventional wisdom that reactive security is usually inferior to proactive risk management; moreover, in many cases the reactive defender is shown to greatly outperform proactive approaches.
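The reduction to Online Learning suggests the following flavor of reactive algorithm. The sketch below uses a multiplicative-weights update to shift a fixed budget toward edges that prove profitable to attack; the edge names, profit model, learning rate, and attack sequence are invented for illustration and this is not the specific algorithm or graph model analyzed in Chapter 5.

```python
# Minimal sketch of a reactive defender driven by multiplicative weights.
# The attack graph, profit model, learning rate, and attack sequence are
# illustrative assumptions, not Chapter 5's exact formulation; the shared
# idea is that budget flows toward edges the attacker has profited from.
import math

EDGES = ["firewall", "vpn", "webapp", "database"]  # hypothetical edges
BUDGET = 1.0                                       # defense budget per round
ETA = 0.5                                          # learning rate

def allocate(weights):
    """Split the budget across edges in proportion to current weights."""
    total = sum(weights.values())
    return {e: BUDGET * w / total for e, w in weights.items()}

def attacker_profit(defense, edge):
    """Toy profit model: profit on an edge decays with the defense on it."""
    return math.exp(-5.0 * defense[edge])

weights = {e: 1.0 for e in EDGES}
observed_attacks = ["webapp", "webapp", "database", "webapp",
                    "firewall", "webapp", "database", "webapp"]

cumulative_profit = 0.0
for attacked in observed_attacks:
    defense = allocate(weights)
    profit = attacker_profit(defense, attacked)
    cumulative_profit += profit
    # Reactive update: the attacked (under-defended) edge gains weight,
    # so the next round's budget shifts toward it.
    for e in EDGES:
        loss = profit if e == attacked else 0.0
        weights[e] *= math.exp(ETA * loss)
    next_alloc = {e: round(v, 2) for e, v in allocate(weights).items()}
    print(f"attack on {attacked}: profit {profit:.3f}, next allocation {next_alloc}")

print(f"cumulative attacker profit: {cumulative_profit:.3f}")
```

Updates of this kind are what yield the no-regret guarantee: over many rounds, the attacker's cumulative profit against the reactive allocation approaches the profit against the best fixed allocation chosen in hindsight, without the defender needing a model of the attacker in advance.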
The second application of Machine Learning to Security is the construction of an attack on open-source software projects. When an open-source project releases a new version of its system, it discloses vulnerabilities in previous versions, sometimes with pointers to the patches that fixed them. Using features of diffs in the project's open-source repository, labeled by such disclosures, an attacker can train a model to discriminate between security patches and non-security patches. As new patches land in the repository, before being disclosed as security-related or not and before being released to users, the attacker can use the trained model to rank them by the likelihood that they are security fixes. The adversary can then examine the ordered patches one by one until finding a security patch. For an 8-month period of Firefox 3's development history it is shown that an SVM-assisted attacker need only examine one or two patches per day (as selected by the SVM) in order to increase the aggregate window of vulnerability by 5 months.

Dedicated to my little Lachlan.

Contents

List of Figures
List of Tables
List of Algorithms

1 Introduction
  1.1 Research in the Intersection
    1.1.1 Secure Machine Learning
    1.1.2 Machine Learning for Security
  1.2 Related Work
    1.2.1 Related Tools from Statistics and Learning
    1.2.2 Attacks on Learning Systems
  1.3 The Importance of the Adversary's Capabilities

I Private and Secure Machine Learning

2 Poisoning Classifiers
  2.1 Introduction
    2.1.1 Related Work
  2.2 Case-Study on Email Spam
    2.2.1 Background on Email Spam Filtering
    2.2.2 Attacks
    2.2.3 Attack Results
    2.2.4 Defenses
  2.3 Case-Study on Network Anomaly Detection
    2.3.1 Background
    2.3.2 Poisoning Strategies
    2.3.3 ANTIDOTE: A Robust Defense
    2.3.4 Methodology
    2.3.5 Poisoning Effectiveness
    2.3.6 Defense Performance
  2.4 Summary

3 Querying for Evasion
  3.1 Introduction
    3.1.1 Related Work
  3.2 Background and Definitions
    3.2.1 The Evasion Problem
    3.2.2 The Reverse Engineering Problem
  3.3 Evasion while Minimizing L1-distance
    3.3.1 Convex Positive Classes
    3.3.2 Convex Negative Classes
  3.4 Evasion while Minimizing Lp-distances
    3.4.1 Convex Positive Classes
    3.4.2 Convex Negative Classes
  3.5 Summary

4 Privacy-Preserving Learning
  4.1 Introduction
    4.1.1 Related Work
  4.2 Background and Definitions
    4.2.1 Support Vector Machines
  4.3 Mechanism for Finite Feature Maps
  4.4 Mechanism for Translation-Invariant Kernels
  4.5 Hinge-Loss and an Upper Bound on Optimal Differential Privacy
  4.6 Lower Bounding Optimal Differential Privacy
    4.6.1 Lower Bound for Linear Kernels
    4.6.2 Lower Bound for RBF Kernels
  4.7 Summary

II Applications of Machine Learning in Computer Security

5 Learning-Based Reactive Security
  5.1 Introduction
    5.1.1 Related Work
  5.2 Formal Model
    5.2.1 System
    5.2.2 Objective
    5.2.3 Proactive Security
  5.3 Case Studies
    5.3.1 Perimeter Defense
    5.3.2 Defense in Depth
  5.4 Reactive Security
    5.4.1 Algorithm
    5.4.2 Main Theorems
    5.4.3 Proofs of the Main Theorems
    5.4.4 Lower Bounds
  5.5 Advantages of Reactivity
  5.6 Generalizations
    5.6.1 Horn Clauses
    5.6.2 Multiple Attackers
    5.6.3 Adaptive Proactive Defenders
  5.7 Summary

6 Learning to Find Leaks in Open Source Projects
  6.1 Introduction
