A Study of Face Obfuscation in ImageNet

Kaiyu Yang¹, Jacqueline Yau², Li Fei-Fei², Jia Deng¹, Olga Russakovsky¹

¹Department of Computer Science, Princeton University, Princeton, New Jersey, USA
²Department of Computer Science, Stanford University, Stanford, California, USA
Correspondence to: Kaiyu Yang <[email protected]>, Olga Russakovsky <[email protected]>

Abstract

Face obfuscation (blurring, mosaicing, etc.) has been shown to be effective for privacy protection; nevertheless, object recognition research typically assumes access to complete, unobfuscated images. In this paper, we explore the effects of face obfuscation on the popular ImageNet challenge visual recognition benchmark. Most categories in the ImageNet challenge are not people categories; however, many incidental people appear in the images, and their privacy is a concern. We first annotate faces in the dataset. Then we demonstrate that face blurring—a typical obfuscation technique—has minimal impact on the accuracy of recognition models. Concretely, we benchmark multiple deep neural networks on face-blurred images and observe that the overall recognition accuracy drops only slightly (≤ 0.68%). Further, we experiment with transfer learning to 4 downstream tasks (object recognition, scene recognition, face attribute classification, and object detection) and show that features learned on face-blurred images are equally transferable. Our work demonstrates the feasibility of privacy-aware visual recognition, improves the highly-used ImageNet challenge benchmark, and suggests an important path for future visual datasets. Data and code are available at https://github.com/princetonvisualai/imagenet-face-obfuscation.

1. Introduction

Nowadays, visual data is being generated at an unprecedented scale. People share billions of photos daily on social media (Meeker, 2014). There is one security camera for every 4 people in China and the United States (Lin & Purnell, 2019). Moreover, even your home can be watched by smart devices taking photos (Butler et al., 2015; Dai et al., 2015). Learning from this visual data has led to computer vision applications that promote the common good, e.g., better traffic management (Malhi et al., 2011) and law enforcement (Sajjad et al., 2020). However, it also raises privacy concerns, as images may capture sensitive information such as faces, addresses, and credit cards (Orekondy et al., 2018). Preventing unauthorized access to sensitive information in private datasets has been extensively studied (Fredrikson et al., 2015; Shokri et al., 2017). However, are publicly available datasets free of privacy concerns? Taking the popular ImageNet dataset (Deng et al., 2009) as an example, there are only 3 people categories¹ among the 1000 categories of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2015); nevertheless, the dataset exposes many people co-occurring with other objects in images (Prabhu & Birhane, 2021), e.g., people sitting on chairs, walking dogs, or drinking beer (Fig. 1). This is concerning, since ILSVRC is freely available for academic use and is widely used by the research community.

In this paper, we attempt to mitigate ILSVRC's privacy issues. Specifically, we construct a privacy-enhanced version of ILSVRC and gauge its utility both as a benchmark for image classification and as a dataset for transfer learning.

Face annotation. As an initial step, we focus on a prominent type of private information—faces. To examine and mitigate their privacy issues, we first annotate faces in ImageNet using automatic face detectors and crowdsourcing. First, we use Amazon Rekognition² to detect faces. The results are then refined through crowdsourcing on Amazon Mechanical Turk to obtain accurate face annotations.

¹scuba diver, bridegroom, and baseball player
²https://aws.amazon.com/rekognition
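To make the detection step concrete, here is a minimal sketch of querying Amazon Rekognition for face bounding boxes through boto3. The region, the credential setup, and the helper name detect_face_boxes are illustrative assumptions; the paper does not prescribe this exact configuration.

import boto3

# Sketch of the face-detection step. Assumes AWS credentials are configured;
# the region and helper name are illustrative, not the authors' exact setup.
client = boto3.client("rekognition", region_name="us-east-1")

def detect_face_boxes(image_path):
    """Return face bounding boxes as (left, top, width, height) ratios."""
    with open(image_path, "rb") as f:
        response = client.detect_faces(Image={"Bytes": f.read()})
    # Rekognition reports box coordinates relative to the image dimensions.
    return [
        (box["Left"], box["Top"], box["Width"], box["Height"])
        for box in (face["BoundingBox"] for face in response["FaceDetails"])
    ]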
We have annotated 1,431,093 images in ILSVRC, resulting in 562,626 faces from 243,198 images (17% of all images have at least one face). Many categories have more than 90% of images with faces, even though they are not people categories, e.g., volleyball and military uniform. Our annotations confirm that faces are ubiquitous in ILSVRC and pose a privacy issue. We release the face annotations to facilitate subsequent research in privacy-aware visual recognition on ILSVRC.

Figure 1. Most categories in the ImageNet Challenge (Russakovsky et al., 2015) are not people categories. However, the images contain many people co-occurring with the object of interest, posing a potential privacy threat. These are example images from barber chair, husky, beer bottle, volleyball, and military uniform.

Effects of face obfuscation on classification accuracy. Obfuscating sensitive image areas is widely used for preserving privacy (McPherson et al., 2016). Using our face annotations and a typical obfuscation strategy, blurring (Fig. 1), we construct a face-blurred version of ILSVRC. What are the effects of using it for image classification? At first glance, it seems inconsequential—one should still recognize a car even when the people inside have their faces blurred. However, to the best of our knowledge, this has not been thoroughly analyzed. By benchmarking various deep neural networks on original images and face-blurred images, we report insights about the effects of face blurring.
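As a concrete illustration of the obfuscation step, the sketch below applies a Gaussian blur to each annotated face region. The use of PIL and the blur radius are assumptions for illustration; this excerpt does not specify the paper's exact blurring procedure.

from PIL import Image, ImageFilter

def blur_faces(image_path, boxes, radius=10):
    """Blur face regions, given boxes as (left, top, width, height) ratios."""
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    for left, top, box_w, box_h in boxes:
        # Convert the detector's relative coordinates to pixel coordinates.
        x0, y0 = int(left * w), int(top * h)
        x1, y1 = int((left + box_w) * w), int((top + box_h) * h)
        region = img.crop((x0, y0, x1, y1))
        img.paste(region.filter(ImageFilter.GaussianBlur(radius)), (x0, y0))
    return img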
The validation accuracy drops only slightly (0.13%–0.68%) when using face-blurred images to train and evaluate. This is hardly surprising, since face blurring can remove information useful for classifying some images. However, the result assures us that we can train privacy-aware visual classifiers on ILSVRC with less than a 1% accuracy drop.

Breaking the overall accuracy down into individual categories in ILSVRC, we observe that they are impacted by face blurring differently. Some categories incur a significantly larger accuracy drop, including categories with a large fraction of blurred area and categories whose objects are often close to faces, e.g., mask and harmonica.

Our results demonstrate the utility of face-blurred ILSVRC for benchmarking. It enhances privacy with only a marginal accuracy drop. Models trained on it perform competitively with models trained on the original ILSVRC dataset.

Effects on feature transferability. Besides being a classification benchmark, ILSVRC also serves as pretraining data for transferring to domains where labeled images are scarce (Girshick, 2015; Liu et al., 2015a). So a further question is: Does face obfuscation hurt the transferability of visual features learned from ILSVRC?

We investigate this question by pretraining models on the original/blurred images and finetuning on 4 downstream tasks: object recognition on CIFAR-10 (Krizhevsky et al., 2009), scene recognition on SUN (Xiao et al., 2010), object detection on PASCAL VOC (Everingham et al., 2010), and face attribute classification on CelebA (Liu et al., 2015b). These tasks include both classification and spatial localization, as well as both face-centric and face-agnostic recognition. In all 4 tasks, models pretrained on face-blurred images perform closely to models pretrained on original images. We do not see a statistically significant difference between them, suggesting that visual features learned from face-blurred pretraining are equally transferable. Again, this encourages us to adopt face obfuscation as an additional protection on visual recognition datasets without worrying about detrimental effects on the dataset's utility.
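To illustrate the transfer setup, the sketch below finetunes a ResNet-50 on CIFAR-10 starting from pretrained weights. The checkpoint path blurred_resnet50.pth is a hypothetical placeholder for weights pretrained on face-blurred ILSVRC, and the hyperparameters are illustrative rather than the paper's.

import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms

# Initialize a ResNet-50 from pretrained weights; "blurred_resnet50.pth" is a
# hypothetical placeholder for a checkpoint pretrained on face-blurred ILSVRC.
model = torchvision.models.resnet50()
model.load_state_dict(torch.load("blurred_resnet50.pth"))

# Replace the 1000-way ImageNet head with a 10-way head for CIFAR-10.
model.fc = torch.nn.Linear(model.fc.in_features, 10)

# Standard finetuning setup; batch size and learning rate are illustrative.
train_set = torchvision.datasets.CIFAR10(
    root="data", train=True, download=True,
    transform=transforms.Compose(
        [transforms.Resize(224), transforms.ToTensor()]))
loader = DataLoader(train_set, batch_size=64, shuffle=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()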

Contributions. Our contributions are twofold. First, we obtain accurate face annotations in ILSVRC, which facilitates subsequent research on privacy protection. We will release the code and the annotations. Second, to the best of our knowledge, we are the first to investigate the effects of privacy-aware face obfuscation on large-scale visual recognition. Through extensive experiments, we demonstrate that training on face-blurred images does not significantly compromise accuracy, either in image classification or in downstream tasks, while providing some privacy protection. Therefore, we advocate for face obfuscation to be included in ImageNet and to become a standard step in future dataset creation efforts.

2. Related Work

Privacy-preserving machine learning (PPML). Machine learning frequently uses private datasets (Chen et al., 2019b). Research in PPML is concerned with an adversary trying to infer the private data. This can happen to the trained model. For example, a model inversion attack recovers sensitive attributes (e.g., gender, genotype) of an individual given the model's output (Fredrikson et al., 2014; 2015; Hamm, 2017; Li et al., 2019; Wu et al., 2019). A membership inference attack infers whether an individual was included in training (Shokri et al., 2017; Nasr et al., 2019; Hisamoto et al., 2020). A training data extraction attack extracts verbatim training data from the model (Carlini et al., 2019; 2020). For defending against these attacks, differential privacy is a general framework (Abadi et al., 2016; Chaudhuri & Monteleoni, 2008; McMahan et al., 2018; Jayaraman & Evans, 2019; Jagielski et al., 2020). It requires the model to behave similarly whether or not an individual is in the training data.
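For reference, this requirement has a standard formalization (not specific to this paper): a randomized mechanism M is (ε, δ)-differentially private if, for any two datasets D and D′ differing in one individual's data and any set S of possible outputs,

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ.

Smaller ε and δ mean that an observer of the output, e.g., a trained model, learns less about whether any particular individual's data was used.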

Privacy breaches can also happen during training/inference. To address hardware/software vulnerabilities, researchers have used enclaves—a hardware mechanism for protecting a memory region from unauthorized access.
