Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
Briland Hitaj∗          Giuseppe Ateniese          Fernando Perez-Cruz
Stevens Institute of Technology
[email protected]   [email protected]   [email protected]

arXiv:1702.07464v3 [cs.CR] 14 Sep 2017

∗ The author is also a PhD student at University of Rome - La Sapienza.

ABSTRACT
Deep Learning has recently become hugely popular in machine learning for its ability to solve end-to-end learning systems, in which the features and the classifiers are learned simultaneously, providing significant improvements in classification accuracy in the presence of highly-structured and large databases.

Its success is due to a combination of recent algorithmic breakthroughs, increasingly powerful computers, and access to significant amounts of data.

Researchers have also considered the privacy implications of deep learning. Models are typically trained in a centralized manner, with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and share only a subset of the parameters in an attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15.

Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process, which allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level differential privacy applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).

KEYWORDS
Collaborative learning; Security; Privacy; Deep learning

    It's not who has the best algorithm that wins.
    It's who has the most data.
                        Andrew Ng, Self-taught Learning

1 INTRODUCTION
Deep Learning is a new branch of machine learning that makes use of neural networks, a concept which dates back to 1943 [49], to find solutions for a variety of complex tasks. Neural networks were inspired by the way the human brain learns, and they show that distributed artificial neural networks can learn nontrivial tasks, even though current architectures and learning procedures are far from brain-like behavior.

Algorithmic breakthroughs, the feasibility of collecting large amounts of data, and increasing computational power have contributed to the current popularity of neural networks, in particular those with multiple (deep) hidden layers, which have indeed started to outperform previous state-of-the-art machine learning techniques [6, 29, 75]. Unlike conventional machine learning approaches, deep learning needs no feature engineering of inputs [45], since the model itself extracts the relevant features on its own and determines which features matter for each problem [29, 45].

Deep learning models perform extremely well with correlated data, which has contributed to substantial improvements in computer vision [47], image processing, video processing, face recognition [82], speech recognition [34], text-to-speech systems [64], and natural language processing [2, 15, 90]. Deep learning has also been used as a component in more complex systems that are able to play games [33, 42, 57, 60] or diagnose and classify diseases [16, 18, 26].

However, there are severe privacy implications associated with deep learning, as the trained model incorporates essential information about the training set. It is relatively straightforward to extract sensitive information from a model [4, 27, 28].

(a) Centralized Learning        (b) Collaborative Learning
Figure 1: Two approaches for distributed deep learning. In (a), the red links show sharing of the data between the users and the server. Only the server can compromise the privacy of the data. In (b), the red links show sharing of the model parameters. In this case, a malicious user employing a GAN can deceive any victim into releasing their private information.

Consider the following cases, depicted in Figure 1, in which N users store local datasets of private information on their respective devices and would like to cooperate to build a common discriminative machine. We could build a classifier by uploading all datasets to a single location (e.g., the cloud), as depicted in Figure 1 (a). A service operator then trains the model on the combined datasets. This centralized approach is very effective, since the model has access to all the data, but it is not privacy-preserving, since the operator has direct access to sensitive information. We could instead adopt a collaborative learning algorithm, as illustrated in Figure 1 (b), where each participant trains a local model on his device and shares only a fraction of the model's parameters with the other users. By collecting and exchanging these parameters, the service operator can create a trained model that is almost as accurate as one built with the centralized approach. The decentralized approach is considered more privacy-friendly, since datasets are not exposed directly. It has also been shown experimentally to converge even when only a small percentage of the model parameters is shared and/or when parameters are truncated and/or obfuscated via differential privacy [77]. However, it requires several training passes through the data, with users updating the parameters at each epoch; one such round is sketched below.
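To make the round structure of Figure 1 (b) concrete, here is a minimal sketch of selective parameter sharing in the spirit of [77], written in Python/NumPy. Everything in it is illustrative rather than taken from the paper: the synthetic least-squares participants, the largest-magnitude selection rule, and the dp_noise_std knob, which only gestures at the DP obfuscation and is not a calibrated differentially private mechanism.

import numpy as np

class Participant:
    """Stand-in for a device holding a private dataset; the local task
    is a synthetic least-squares problem so the sketch runs on its own."""
    def __init__(self, n_features=10, n_samples=50, seed=0):
        rng = np.random.default_rng(seed)
        self.X = rng.normal(size=(n_samples, n_features))
        self.y = rng.normal(size=n_samples)

    def gradient(self, params):
        # Gradient of the local mean-squared error at the given parameters.
        return self.X.T @ (self.X @ params - self.y) / len(self.y)

def collaborative_round(server_params, participants,
                        upload_fraction=0.1, dp_noise_std=0.0, lr=0.01):
    """One round of collaborative learning as in Figure 1 (b): download,
    train locally, share only a fraction of the update."""
    for participant in participants:
        # 1. Download the current global parameters.
        local_params = server_params.copy()
        # 2. One local training step on the participant's private data.
        update = -lr * participant.gradient(local_params)
        # 3. Optional obfuscation before sharing (a rough stand-in for
        #    the differentially private variant; NOT calibrated DP).
        if dp_noise_std > 0.0:
            update = update + np.random.normal(0.0, dp_noise_std, update.shape)
        # 4. Share only a fraction of the update, here the components
        #    with the largest magnitude (one common selection rule).
        k = max(1, int(upload_fraction * update.size))
        shared = np.zeros_like(update)
        largest = np.argsort(np.abs(update))[-k:]
        shared[largest] = update[largest]
        # 5. The server folds the partial update into the global model.
        server_params = server_params + shared
    return server_params

# Illustrative usage: three participants, many rounds.
params = np.zeros(10)
users = [Participant(seed=s) for s in range(3)]
for _ in range(100):
    params = collaborative_round(params, users)

Note that nothing in this loop ever transmits raw data; the attack developed in this paper works purely from the shared updates.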
The Deep Learning community has recently proposed Generative Adversarial Networks (GANs) [30, 70, 72], which are still being intensively developed [3, 9, 32, 43, 56]. The goal of GANs is not to classify images into different categories, but to generate samples that look similar to those in the training set (ideally, samples with the same distribution). More importantly, GANs generate these samples without having access to the original samples: the GAN interacts only with the discriminative deep neural network to learn the distribution of the data.
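Formally, the generator G and the discriminator D of the original GAN formulation [30] play the following minimax game, where p_data is the distribution of the training data and p_z is a noise prior from which G draws its inputs:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

At the optimum of this game, the distribution of G's samples matches p_data. This is exactly what makes GANs relevant here: an adversary who can drive this game against a model trained on private data obtains samples from the private distribution without ever touching the data itself.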
In this paper, we devise a powerful attack against collaborative deep learning using GANs. The result of the attack is that any user acting as an insider can infer sensitive information from a victim's device. The attacker simply runs the collaborative learning algorithm and reconstructs sensitive information stored on the victim's device. The attacker is also able to influence the learning process and deceive the victim into releasing more detailed information. The attack works without compromising the service operator and even when model parameters are obfuscated via differential privacy. As depicted in Figure 1 (a), the centralized server is the only player that can compromise the privacy of the data, whereas in Figure 1 (b) we show that any user can intentionally compromise any other user, making the distributed setting even more undesirable.

Our main contribution is to propose and implement a novel class of active inference attacks on deep neural networks in a collaborative setting. Our method is more effective than existing black-box or white-box information extraction mechanisms.

Namely, our contributions are:
(1) We devise a new attack, based on GANs, that is more generic and effective than current information extraction mechanisms.
(2) Our attack works against Convolutional Neural Networks (CNN), which are notoriously difficult for model inversion attacks [78].
(3) We introduce the notion of deception in collaborative learning, where the adversary deceives a victim into releasing more accurate information on sensitive data.
(4) The attack we devise is also effective when parameters are obfuscated via differential privacy. We emphasize that it is not an attack against differential privacy but only against its proposed use in collaborative deep learning. In practice, we show that differentially private training as applied in [77] and [1] (example/record-level differential privacy) is ineffective in a collaborative learning setting under our notion of privacy.

2 REMARKS
We devise a new attack that is more generic and effective than current information extraction mechanisms. It is based on Generative Adversarial Networks (GANs), which were proposed for implicit density estimation [30]. The GAN, as detailed in Section 5, generates samples that appear to come from the training set by pitting a generative deep neural network against a discriminative deep neural network. The generative learning is successful whenever the discriminative model cannot determine whether samples come from the GAN or from the training set. It is important to realize that both the discriminative and generative networks influence each other, because the discriminative algorithm tries to separate GAN-generated samples from real samples, while the GAN tries to generate samples that look more realistic (ideally, coming from the same distribution as the original data). The GAN never sees the actual training data.
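To make these remarks concrete, here is a minimal sketch of one attacker round, written in PyTorch under several assumptions that are not spelled out in this excerpt: the shared model is a small fully connected classifier, the attacker targets a hypothetical class index victim_class, and it injects its generated samples under an artificial label fake_class (the deception step). All sizes, names, and optimizer settings are illustrative; this is a sketch of the idea, not the paper's implementation.

import torch
import torch.nn as nn

# Illustrative sizes: an image classifier with n_classes outputs; the
# attacker reserves one artificial class for the samples it injects.
noise_dim, img_dim, n_classes = 100, 28 * 28, 11

# The attacker's local copy of the collaboratively trained model plays
# the role of the GAN discriminator; only the generator is trained here.
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, n_classes))
for p in discriminator.parameters():
    p.requires_grad_(False)  # refreshed from shared parameters, never trained locally

generator = nn.Sequential(
    nn.Linear(noise_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh())
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
loss_fn = nn.CrossEntropyLoss()

def attacker_round(shared_state, victim_class=3, fake_class=10, batch=32):
    """One insider round (sketch): refresh the discriminator from the
    shared parameters, train the generator so the shared model sees the
    victim's class in its output, and hand back mislabeled samples to
    inject into the collaborative protocol (the deception step)."""
    # (a) Sync with the model learned collaboratively by all parties.
    discriminator.load_state_dict(shared_state)
    # (b) Generator step: produce samples the shared model classifies
    #     as victim_class, without any access to the victim's data.
    z = torch.randn(batch, noise_dim)
    fake = generator(z)
    target = torch.full((batch,), victim_class, dtype=torch.long)
    g_loss = loss_fn(discriminator(fake), target)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    # (c) Deception: injecting these samples under fake_class pushes the
    #     victim's next local update to reveal more about victim_class.
    return fake.detach(), fake_class

# Illustrative call; a real attacker would pass parameters downloaded
# from the parameter server instead of the local state_dict.
samples, label = attacker_round(discriminator.state_dict())

Round after round, the generated samples move closer to the distribution of the victim's class, which is why the attack succeeds even though only model parameters, never raw data, cross the network.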