Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning

Erwin Quiring, David Klein, Daniel Arp, Martin Johns and Konrad Rieck
Technische Universität Braunschweig, Germany

Abstract

Machine learning has made remarkable progress in the last years, yet its success has been overshadowed by different attacks that can thwart its correct operation. While a large body of research has studied attacks against learning algorithms, vulnerabilities in the preprocessing for machine learning have received little attention so far. An exception is the recent work of Xiao et al. that proposes attacks against image scaling. In contrast to prior work, these attacks are agnostic to the learning algorithm and thus impact the majority of learning-based approaches in computer vision. The mechanisms underlying the attacks, however, are not understood yet, and hence their root cause remains unknown.

In this paper, we provide the first in-depth analysis of image-scaling attacks. We theoretically analyze the attacks from the perspective of signal processing and identify their root cause as the interplay of downsampling and convolution. Based on this finding, we investigate three popular imaging libraries for machine learning (OpenCV, TensorFlow, and Pillow) and confirm the presence of this interplay in different scaling algorithms. As a remedy, we develop a novel defense against image-scaling attacks that prevents all possible attack variants. We empirically demonstrate the efficacy of this defense against non-adaptive and adaptive adversaries.

1 Introduction

Machine learning techniques have enabled impressive progress in several areas of computer science, such as in computer vision [e.g., 11, 12, 13] and natural language processing [e.g., 7, 18, 31]. This success, however, is increasingly foiled by attacks from adversarial machine learning that exploit weaknesses in learning algorithms and thwart their correct operation. Prominent examples of these attacks are methods for crafting adversarial examples [6, 32], backdooring neural networks [10, 15], and inferring properties from learning models [9, 27]. While these attacks have gained significant attention in research, they are unfortunately not the only weak spot in machine learning systems.

Recently, Xiao et al. [35] have demonstrated that data preprocessing used in machine learning can also suffer from vulnerabilities. In particular, they present a novel type of attack that targets image scaling. The attack enables an adversary to manipulate images such that they change their appearance when scaled to a specific dimension. As a result, any learning-based system scaling images can be tricked into working on attacker-controlled data. As an example, Figure 1 shows an attack against the scaling operation of the popular TensorFlow library. The manipulated image (left) changes to the output (right) when scaled to a specific dimension.

[Figure 1: Example of an image-scaling attack. Left: a manipulated image showing a cat. Downscaling in TensorFlow produces the output image on the right, showing a dog.]
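
To make the role of this preprocessing step concrete, the following sketch reproduces the downscaling operation from Figure 1 with TensorFlow's tf.image API (TF 1.x). It is an illustration only: the file name attack.png, the bilinear method, and the 299×299 target size are placeholder assumptions, not values taken from the paper.

    # Sketch of the downscaling step from Figure 1 (TensorFlow 1.x API).
    # "attack.png" and the 299x299 target size are placeholders.
    import tensorflow as tf

    raw = tf.io.read_file("attack.png")
    src = tf.cast(tf.image.decode_png(raw, channels=3), tf.float32)

    # Bilinear scaling, the default method of TensorFlow's resize functions.
    dst = tf.image.resize_bilinear(src[tf.newaxis], size=(299, 299))

    with tf.Session() as sess:
        model_input = sess.run(dst)[0]
    # 'model_input' is all the learning model ever sees; for a manipulated
    # image, it can show entirely different content than the stored file.

Whatever a human reviewer sees when opening the stored file, the downstream model only receives model_input, which is why a successful scaling attack remains invisible at the level of the original image.
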
Attacks on image scaling pose a threat to the security of machine learning: First, scaling is omnipresent in computer vision, as learning algorithms typically require fixed input dimensions. Second, these attacks are agnostic to the learning model, features, and training data. Third, the attacks can be used for poisoning data during training as well as misleading classifiers during prediction. In contrast to adversarial examples, image-scaling attacks do not depend on a particular model or feature set, as the downscaling can create a perfect image of the target class. As a consequence, there is a need for effective defenses against image-scaling attacks. The underlying mechanisms, however, are not understood so far, and the root cause of adversarial scaling is still unknown.

In this paper, we provide the first comprehensive analysis of image-scaling attacks. To this end, we theoretically analyze the attacks from the perspective of signal processing and identify their root cause as the interplay of downsampling and convolution during scaling. That is, depending on the downsampling frequency and the convolution kernel used for smoothing, only very specific pixels are considered for generating the scaled image. This limited processing of the source image allows the adversary to take over control of the scaling process by manipulating only a few pixels. To validate this finding, we investigate three popular imaging libraries for machine learning (OpenCV, TensorFlow, and Pillow) and confirm the presence of this insecure interplay in different scaling algorithms.
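
The limited processing described above can be observed directly. The following sketch is not taken from the paper; it uses OpenCV's nearest-neighbor scaling on an arbitrary 256×256 example image to mark the source pixels that actually reach the 64×64 output, and then shows that every other pixel can be changed without affecting the scaled result.

    # Sketch (not from the paper): nearest-neighbor downscaling in OpenCV
    # reads only a sparse grid of source pixels; all remaining pixels can
    # be altered freely without changing the scaled output.
    import cv2
    import numpy as np

    h_src, w_src, h_dst, w_dst = 256, 256, 64, 64   # example sizes

    # Find the pixels the algorithm reads by scaling an image that encodes
    # each pixel's own coordinates.
    rows, cols = np.indices((h_src, w_src)).astype(np.float32)
    used_r = cv2.resize(rows, (w_dst, h_dst), interpolation=cv2.INTER_NEAREST)
    used_c = cv2.resize(cols, (w_dst, h_dst), interpolation=cv2.INTER_NEAREST)

    used = np.zeros((h_src, w_src), dtype=bool)
    used[used_r.astype(int), used_c.astype(int)] = True
    print("fraction of source pixels considered:", used.mean())  # 1/16 here

    # Changing every pixel outside this sparse set leaves the output intact.
    src = np.random.randint(0, 256, (h_src, w_src, 3), dtype=np.uint8)
    tampered = src.copy()
    tampered[~used] = 0
    same = np.array_equal(
        cv2.resize(src, (w_dst, h_dst), interpolation=cv2.INTER_NEAREST),
        cv2.resize(tampered, (w_dst, h_dst), interpolation=cv2.INTER_NEAREST))
    print("scaled outputs identical:", same)                      # True

For this scaling ratio, only one source pixel per destination pixel, roughly 6% of the image, influences the result; controlling this sparse set is exactly the leverage an adversary needs to dictate the scaled output.
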
Based on our theoretical analysis, we develop defenses for fending off image-scaling attacks in practice. As a first step, we analyze the robustness of scaling algorithms in the three imaging libraries and identify those algorithms that already provide moderate protection from attacks. In the second step, we devise a new defense that is capable of protecting from all possible attack variants. The defense explicitly sanitizes those pixels of an image that are processed by a scaling algorithm. As a result, the adversary loses control of the scaled content, while the quality of the source image is largely preserved. We demonstrate the efficacy of this strategy in an empirical evaluation, where we prevent attacks from non-adaptive as well as adaptive adversaries.

Finally, our work provides an interesting insight into research on secure machine learning: While attacks against learning algorithms are still hard to analyze due to the complexity of learning models, the well-defined structure of scaling algorithms enables us to fully analyze scaling attacks and develop effective defenses. As a consequence, we are optimistic that attacks against other forms of data preprocessing can also be prevented, given a thorough root-cause analysis.

Contributions. In summary, we make the following contributions in this paper:

• Analysis of image-scaling attacks. We conduct the first in-depth analysis of image-scaling attacks and identify the vulnerability underlying the attacks in theory as well as in practical implementations.

• Effective defenses. We develop a theoretical basis for assessing the robustness of scaling algorithms and designing effective defenses. We propose a novel defense that protects from all possible attack variants.

• Comprehensive evaluation. We empirically analyze scaling algorithms of popular imaging libraries under attack and demonstrate the effectiveness of our defense against adversaries of different strengths.

The rest of this paper is organized as follows: We review the background of image scaling and attacks in Section 2. Our theoretical analysis is presented in Section 3, and we develop defenses in Section 4. An empirical evaluation of attacks and defenses is given in Section 5. We discuss related work in Section 6, and Section 7 concludes the paper.

2 Background

Before starting our theoretical analysis, we briefly review the background of image scaling in machine learning and then present image-scaling attacks.

2.1 Image Scaling in Machine Learning

Image scaling is a standard procedure in computer vision and a common preprocessing step in machine learning [21]. A scaling algorithm takes a source image S and resizes it to a scaled version D. As many learning algorithms require a fixed-size input, scaling is a mandatory step in most learning-based systems operating on images. For instance, deep neural networks for object recognition, such as VGG19 and Inception V3/V4, expect inputs of 224×224 and 299×299 pixels, respectively, and can only be applied in practice if images are scaled to these dimensions.

Generally, we can differentiate upscaling and downscaling, where the first operation enlarges an image by extrapolation, while the latter reduces it through interpolation. In practice, images are typically larger than the input dimension of learning models, and thus image-scaling attacks focus on downscaling. Table 1 lists the most common scaling algorithms. Although these algorithms address the same task, they differ in how the content of the source S is weighted and smoothed to form the scaled version D. For example, nearest-neighbor scaling simply copies pixels from a grid of the source to the destination, while bicubic scaling interpolates pixels using a cubic function. We examine these algorithms in more detail in Section 3 when analyzing the root cause of scaling attacks.

Due to their central role in computer vision, scaling algorithms are an inherent part of several deep learning frameworks. For example, Caffe, PyTorch, and TensorFlow implement all common algorithms, as shown in Table 1. Technically, TensorFlow uses its own implementation called tf.image, whereas Caffe and PyTorch use the imaging libraries OpenCV and Pillow, respectively. Other libraries for deep learning either build on these frameworks or use the imaging libraries directly. For instance, Keras uses Pillow and DeepLearning4j builds on OpenCV. As a consequence, we focus our analysis on these major imaging libraries. A short sketch following Table 1 illustrates how these algorithms are selected in each library.

Table 1: Scaling algorithms in deep learning frameworks.

Framework         Caffe     PyTorch   TensorFlow
Library           OpenCV    Pillow    tf.image
Library Version   4.1       6.0       1.14
Nearest           •         • (‡)     •
Bilinear          • (*)     • (*)     • (*)
Bicubic           •         •         •
Lanczos           •         •         •
Area              •         –         •

(*) Default algorithm. (‡) Default algorithm if Pillow is used directly without PyTorch.
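
The following sketch is not part of the paper; it merely illustrates, with a placeholder file name and a 224×224 target size, how the algorithms from Table 1 are selected in the three imaging libraries.

    # Sketch (not from the paper): selecting scaling algorithms in the
    # three imaging libraries from Table 1.  "input.png" and the 224x224
    # target size are placeholders.
    import cv2
    import numpy as np
    import tensorflow as tf            # tf.image (TensorFlow 1.x API)
    from PIL import Image

    # OpenCV (used by Caffe); INTER_LINEAR, i.e. bilinear, is the default.
    src = cv2.imread("input.png")
    dst_cv = cv2.resize(src, (224, 224), interpolation=cv2.INTER_LINEAR)
    # alternatives: cv2.INTER_NEAREST, cv2.INTER_CUBIC,
    #               cv2.INTER_LANCZOS4, cv2.INTER_AREA

    # Pillow (used by PyTorch); Image.NEAREST is the default when Pillow
    # is called directly.
    img = Image.open("input.png")
    dst_pil = img.resize((224, 224), resample=Image.BILINEAR)
    # alternatives: Image.NEAREST, Image.BICUBIC, Image.LANCZOS

    # tf.image; BILINEAR is the default method of resize_images.
    x = np.asarray(img, dtype=np.float32)[np.newaxis]   # add batch dimension
    dst_tf = tf.image.resize_images(x, (224, 224),
                                    method=tf.image.ResizeMethod.BILINEAR)
    # alternatives: NEAREST_NEIGHBOR, BICUBIC, AREA
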

2.2 Image-Scaling Attacks
