
A Hybrid Approach to Privacy-Preserving Federated Learning

Stacey Truex (Georgia Institute of Technology, Atlanta, Georgia; [email protected])
Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, and Yi Zhou (IBM Research Almaden, San Jose, California; [email protected])

arXiv:1812.03224v2 [cs.LG] 14 Aug 2019

ABSTRACT
Federated learning facilitates the collaborative training of models without the sharing of raw data. However, recent attacks demonstrate that simply maintaining data locality during training does not provide sufficient privacy guarantees. Rather, we need a federated learning system capable of preventing inference over both the messages exchanged during training and the final trained model, while ensuring the resulting model also has acceptable predictive accuracy. Existing federated learning approaches either use secure multiparty computation (SMC), which is vulnerable to inference, or differential privacy, which can lead to low accuracy given a large number of parties with relatively small amounts of data each. In this paper, we present an alternative approach that utilizes both differential privacy and SMC to balance these trade-offs. Combining differential privacy with secure multiparty computation enables us to reduce the growth of noise injection as the number of parties increases without sacrificing privacy, while maintaining a pre-defined rate of trust. Our system is therefore a scalable approach that protects against inference threats and produces models with high accuracy. Additionally, our system can be used to train a variety of machine learning models, which we validate with experimental results on three different machine learning algorithms. Our experiments demonstrate that our approach outperforms state-of-the-art solutions.

CCS CONCEPTS
• Security and privacy → Privacy-preserving protocols; Trust frameworks; • Computing methodologies → Learning settings.

KEYWORDS
Privacy, Federated Learning, Privacy-Preserving Machine Learning, Differential Privacy, Secure Multiparty Computation

ACM Reference Format:
Stacey Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, and Yi Zhou. 2019. A Hybrid Approach to Privacy-Preserving Federated Learning. In London '19: ACM Workshop on Artificial Intelligence and Security, November 15, 2019, London, UK. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/1122445.1122456

1 INTRODUCTION
In traditional machine learning (ML) environments, training data is centrally held by one organization executing the learning algorithm. Distributed learning systems extend this approach by using a set of learning nodes that either access shared data or have the data sent to them from a central node, all of which are fully trusted. For example, MLlib from Apache Spark assumes a trusted central node to coordinate distributed learning processes [28]. Another approach is the parameter server [26], which again requires a fully trusted central node to collect and aggregate parameters from the many nodes learning on their different datasets.

However, some learning scenarios must address less open trust boundaries, particularly when multiple organizations are involved.
While a larger dataset improves the performance of a trained model, organizations often cannot share data due to legal restrictions or competition between participants. For example, consider three hospitals with different owners serving the same city. Rather than each hospital creating its own predictive model forecasting cancer risks for its patients, the hospitals want to create a model learned over the whole patient population. However, privacy laws prohibit them from sharing their patients' data. Similarly, a service provider may collect usage data both in Europe and the United States. Due to legislative restrictions, the service provider's data cannot be stored in one central location; when creating a predictive model forecasting service usage, however, all datasets should be used.

The area of federated learning (FL) addresses these more restrictive environments, wherein data holders collaborate throughout the learning process rather than relying on a trusted third party to hold data [6, 39]. Data holders in FL run a machine learning algorithm locally and only exchange model parameters, which are aggregated and redistributed by one or more central entities. However, this approach is not sufficient to provide reasonable data privacy guarantees. We must also consider that information can be inferred from the learning process [30] and that information in the resulting trained model can be traced back to its source [40].

Some previous work has proposed a trusted aggregator as a way to control privacy exposure [1, 32]. FL schemes using local differential privacy also address the privacy problem [39], but they entail adding so much noise to the model parameters of each node that the resulting model often performs poorly.

We propose a novel federated learning system which provides formal privacy guarantees, accounts for various trust scenarios, and produces models with increased accuracy when compared with existing privacy-preserving approaches. Data never leaves the participants, and privacy is guaranteed using secure multiparty computation (SMC) and differential privacy. We account for potential inference from individual participants as well as the risk of collusion amongst the participating parties through a customizable trust threshold. Our contributions are the following:

• We propose and implement an FL system providing formal privacy guarantees and models with improved accuracy compared to existing approaches.
• We include a tunable trust parameter which accounts for various trust scenarios while maintaining the improved accuracy and formal privacy guarantees.
• We demonstrate that the proposed approach can be used to train a variety of ML models through the experimental evaluation of our system with three significantly different ML models: decision trees, convolutional neural networks, and linear support vector machines.
• We include the first federated approach for the private and accurate training of a neural network model.

The rest of this paper is organized as follows. We first outline the building blocks of our system. We then discuss the various privacy considerations in FL systems, followed by our threat model and overall system design. We then provide an experimental evaluation of our approach.

2.1 Differential Privacy
Differential privacy (DP) guarantees that the output of a computation does not change significantly with the presence or absence of any single individual, thus limiting an attacker's ability to infer such membership. The formal definition of DP is [13]:

Definition 1 (Differential Privacy). A randomized mechanism K provides (ϵ, δ)-differential privacy if, for any two neighboring databases D_1 and D_2 that differ in only a single entry and for all S ⊆ Range(K),

    \Pr(K(D_1) \in S) \le e^{\epsilon} \Pr(K(D_2) \in S) + \delta.    (1)

If δ = 0, K is said to satisfy ϵ-differential privacy.

To achieve DP, noise is added to the algorithm's output. This noise is proportional to the sensitivity of the output, where sensitivity measures the maximum change of the output due to the inclusion of a single data instance.

Two popular mechanisms for achieving DP are the Laplace and Gaussian mechanisms. The Gaussian mechanism is defined by

    M(D) \triangleq f(D) + N(0, S_f^2 \sigma^2),    (2)

where N(0, S_f^2 σ^2) is the normal distribution with mean 0 and standard deviation S_f σ. A single application of the Gaussian mechanism to a function f of sensitivity S_f satisfies (ϵ, δ)-differential privacy if δ ≥ (4/5) exp(−(σϵ)^2 / 2) and ϵ < 1 [16]. To achieve ϵ-differential privacy, the Laplace mechanism may be used in the same manner by substituting N(0, S_f^2 σ^2) with random variables drawn from Lap(S_f / ϵ) [16].
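To make these mechanisms concrete, the sketch below adds calibrated noise to a single scalar query. It is a minimal illustration of the definitions above rather than the implementation evaluated in this paper; the function names, the count query, and the parameter values are chosen only for the example.

    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng=None):
        # epsilon-DP release of `value`: add Laplace noise with scale S_f / epsilon.
        rng = rng if rng is not None else np.random.default_rng()
        return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    def gaussian_mechanism(value, sensitivity, sigma, rng=None):
        # (epsilon, delta)-DP release per Eq. (2): add N(0, (S_f * sigma)^2) noise.
        rng = rng if rng is not None else np.random.default_rng()
        return value + rng.normal(loc=0.0, scale=sensitivity * sigma)

    # Example: a count query has sensitivity 1, since adding or removing
    # one record changes the count by at most 1.
    true_count = 128
    noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)

Smaller values of ϵ (or larger values of σ for the Gaussian mechanism) yield stronger privacy at the cost of noisier outputs, which is precisely the accuracy trade-off discussed above.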
When an algorithm requires multiple additive noise mechanisms, the overall privacy guarantee is evaluated using the basic composition theorem [14, 15] or advanced composition theorems and their extensions [7, 17, 18, 23]. Under basic composition, for example, k mechanisms that individually satisfy (ϵ_i, δ_i)-differential privacy jointly satisfy (Σ_i ϵ_i, Σ_i δ_i)-differential privacy.

2.2 Threshold Homomorphic Encryption
An additively homomorphic encryption scheme is one wherein the following property is guaranteed:

    \mathrm{Enc}(m_1) \circ \mathrm{Enc}(m_2) = \mathrm{Enc}(m_1 + m_2),

for some predefined operation ∘. Such schemes are popular in privacy-preserving data analytics, as untrusted parties can perform operations on encrypted values. One such additively homomorphic scheme is the Paillier cryptosystem [31], a probabilistic encryption scheme based on the decisional composite residuosity assumption.
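To make the additive property concrete, the following is a minimal, self-contained sketch of textbook (non-threshold) Paillier encryption, where the combining operation ∘ is multiplication of ciphertexts modulo n^2. The helper names, the small demonstration primes, and the example plaintexts are chosen only for illustration; this is not the threshold implementation used in this paper, and a real deployment would rely on a vetted cryptographic library with a modulus of at least 2048 bits.

    import math
    import secrets

    def keygen(p, q):
        # Textbook Paillier key generation with g = n + 1 (demo-sized primes only).
        n = p * q
        lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
        mu = pow(lam, -1, n)                                 # valid because g = n + 1 (Python 3.8+)
        return n, (n, lam, mu)                               # public key n, private key (n, lam, mu)

    def encrypt(n, m):
        # Enc(m) = (1 + n)^m * r^n mod n^2 for a random r coprime to n.
        n2 = n * n
        r = secrets.randbelow(n - 2) + 2
        while math.gcd(r, n) != 1:
            r = secrets.randbelow(n - 2) + 2
        return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

    def decrypt(priv, c):
        # Dec(c) = L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) / n.
        n, lam, mu = priv
        u = pow(c, lam, n * n)
        return ((u - 1) // n) * mu % n

    pub, priv = keygen(1789, 1861)          # toy primes for illustration only
    c1, c2 = encrypt(pub, 41), encrypt(pub, 29)
    c_sum = (c1 * c2) % (pub * pub)         # Enc(m1) o Enc(m2): multiply ciphertexts mod n^2
    assert decrypt(priv, c_sum) == 41 + 29  # the product decrypts to m1 + m2

A threshold variant additionally shares the decryption key across parties so that no single party can decrypt on its own; a minimum number of parties must cooperate to recover any plaintext.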