SecureGBM: Secure Multi-Party Gradient Boosting

Zhi Feng†,∗, Haoyi Xiong†,∗, Chuanyuan Song†, Sijia Yang‡, Baoxin Zhao†,#, Licheng Wang‡, Zeyu Chen†, Shengwen Yang†, Liping Liu† and Jun Huan†

† Big Data Group (BDG), Big Data Lab (BDL) and PaddlePaddle (DLTP), Baidu Inc., Beijing, China
‡ State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
# Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

∗ Equal contribution. The manuscript has been accepted for publication at IEEE BigData 2019.

Abstract—Federated machine learning systems have been widely used to facilitate joint data analytics across distributed datasets owned by different parties that do not trust each other. In this paper, we propose SecureGBM, a novel Gradient Boosting Machines (GBM) framework built upon a multi-party computation model based on semi-homomorphic encryption, where the involved parties jointly obtain a shared gradient boosting model while protecting their own data from potential privacy leakage and inferential identification. More specifically, our work focuses on a "dual-party" secure learning scenario: each of the two parties owns a unique view (i.e., a set of attributes or features) of the same group of samples, while only one party owns the labels. In this scenario, neither feature nor label data may be shared with the other party. To achieve this goal, we first extend LightGBM, a well-known implementation of tree-based GBM, by covering its key training and inference operations with the SEAL homomorphic encryption schemes. However, the performance of this re-implementation is significantly bottlenecked by the explosive inflation of the communication payloads, as ciphertexts grow with the length of the plaintexts. We therefore propose to use stochastic approximation techniques to reduce the communication payloads while accelerating the overall training procedure in a statistical manner. Our experiments on real-world data show that SecureGBM secures the communication and computation of the LightGBM training and inference procedures for both parties while losing less than 3% AUC, using the same number of gradient boosting iterations, on a wide range of benchmark datasets. More specifically, compared to LightGBM, SecureGBM runs 3x ∼ 64x slower per training iteration, but it becomes increasingly efficient as the scale of the training dataset grows (i.e., the larger the training set, the lower the slowdown ratio).

I. INTRODUCTION

Multi-party federated learning [1] has become one of the most popular machine learning paradigms, thanks to the increasing trends of distributed data collection, storage, and processing, as well as its privacy-preserving benefits in many kinds of applications. In most multi-party machine learning applications, "no raw data sharing" is an important precondition: the model should be trained using all data stored in the distributed machines (i.e., parties) without any cross-machine raw data sharing. A wide range of machine learning models and algorithms, including logistic regression [2], sparse discriminant analysis [3], [4], and stochastic gradient-based learners [5]–[7], have been re-implemented on distributed computing, encryption, and privacy-preserving computation/communication platforms, so as to incorporate secure computation paradigms [1].

Backgrounds and Related Work. Existing efforts mostly focus on the implementation of efficient federated learning systems. Two parallel computation paradigms, data-centric and model-centric [6]–[11], have been proposed. In the data-centric paradigm, each machine first estimates the same set of model parameters using its local data; the estimated parameters are then aggregated via model averaging to form the global estimate. The model with the aggregated parameters is treated as the model trained on the overall data (from the multiple parties), and, prior to aggregation, the parameters can easily be estimated through a parallel computing structure. Model-centric algorithms, by contrast, require the multiple machines to share the same loss function with "updatable parameters", and allow each machine to update those parameters using its local data so as to minimize the loss. Owing to this characteristic, model-centric algorithms commonly update the parameters sequentially, so the additional updating time can be a bottleneck for certain applications. Even so, compared with data-centric methods, model-centric methods usually achieve better performance, as they directly minimize the risk of the model [6], [7]. To improve the distributed performance of linear classifiers, Tian et al. [4] proposed a data-centric sparse linear discriminant analysis algorithm that leverages the advantages of parallel computing.
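To make the data-centric paradigm above concrete, the following minimal sketch (hypothetical code, not part of SecureGBM; the function names and the least-squares estimator are our own illustrative choices) estimates the same parameter vector on each machine's local data and then model-averages the estimates into a global one.

```python
import numpy as np

def fit_local_linear(X, y):
    # Local estimation step: each party fits the same parametric model
    # (here, ordinary least squares as a stand-in for any estimator) on
    # its own data; these fits can run fully in parallel across machines.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def data_centric_average(local_datasets):
    # Aggregation step: model-average the locally estimated parameter
    # vectors to obtain the global estimate.
    local_params = [fit_local_linear(X, y) for X, y in local_datasets]
    return np.mean(local_params, axis=0)

# Toy example: two parties holding disjoint subsets of the same kind of samples.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
parties = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    parties.append((X, y))

print(data_centric_average(parties))  # approximately [1.0, -2.0]
```

Because the local fits are independent, they parallelize trivially, which is the easy parallel structure noted above; model-centric methods give up this property by updating parameters sequentially, in exchange for directly minimizing the model's risk.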
In terms of multi-party collaboration, federated learning algorithms can be categorized into two types: data separation and view separation. Under data separation, the algorithms learn from distributed datasets, where each dataset consists of a subset of samples of the same type [3]–[6]. For example, hospitals may be required to collaboratively learn a model that predicts patients' future diseases by classifying their electronic medical records, where all hospitals follow the same scheme to collect medical records but each hospital covers only a part of the patients. In this case, federated learning improves the overall learning performance by incorporating the private datasets owned by the different parties, while ensuring privacy and security [5]–[7]. While the existing data/computation parallelism mechanisms were usually designed to improve federated learning under data separation settings, federated learning systems under view separation settings have seldom been considered.

Our Work. We mainly focus on view separation settings of federated learning, which assume that the data views of the same group of samples are separated across multiple parties that do not trust each other. For example, the healthcare, finance, and insurance records of the same group of healthcare users are usually stored separately in the data centers of healthcare providers, banks, and insurance companies. Healthcare users typically want recommendations on healthcare insurance products based on their health and financial status, while healthcare insurance companies need to learn from large-scale healthcare data together with personal financial data to build such recommendation models. However, under data privacy law, it is difficult for these three parties to share their data with each other in order to learn such a predictive model. Federated learning under view separation models is therefore highly desirable. In this work, we develop view separation federated learning algorithms using Gradient Boosting Machines (GBM) as the classifiers. GBM is studied here because it delivers decent prediction results and can be interpreted by human experts for joint data analytics and cross-institute data understanding.
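To make the dual-party, view-separated setting concrete, the hypothetical sketch below gives two parties different private feature views of the same samples, with the labels monopolized by party A. It also previews the semi-homomorphic mechanism from the abstract: party A shares only additively homomorphically encrypted label statistics, which party B can aggregate but cannot read. We use the open-source `phe` Paillier library purely as an illustrative stand-in for the SEAL schemes SecureGBM actually builds on; all variable names and the residual definition are assumptions for this demo.

```python
import numpy as np
from phe import paillier  # python-paillier: additively homomorphic encryption

# The same 200 samples, seen through two private feature views (shared IDs only).
n = 200
rng = np.random.default_rng(42)
X_a = rng.normal(size=(n, 5))      # party A's private view: 5 features
X_b = rng.normal(size=(n, 3))      # party B's private view: 3 features
y = rng.integers(0, 2, size=n)     # labels are owned by party A only

# Party A encrypts per-sample gradient statistics (e.g., boosting residuals)
# under its own public key. A short key is used here for demo speed only.
pub, priv = paillier.generate_paillier_keypair(n_length=1024)
residuals = y - 0.5
enc_residuals = [pub.encrypt(float(r)) for r in residuals]

# Party B evaluates a candidate split on one of ITS OWN features by summing
# the encrypted residuals that fall into the left child, without ever
# seeing a plaintext label or residual.
left_mask = X_b[:, 0] < 0.0
enc_left_sum = sum(
    (c for c, in_left in zip(enc_residuals, left_mask) if in_left),
    pub.encrypt(0.0),
)

# Only party A, which holds the private key, can decrypt the aggregate.
print(priv.decrypt(enc_left_sum))
```

Under this layout, neither raw features nor labels cross the party boundary; only ciphertexts and aggregate statistics do, which is exactly the constraint the view separation setting imposes.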
Our Contributions. We summarize the contributions of the proposed SecureGBM algorithm as follows.
• Firstly, we study and formulate the federated learning problem under (semi-)homomorphic encryption settings, assuming the data owned by the two parties are not sharable. More specifically, we assume each party owns a unique private view of the same group of samples, while the labels of these samples are monopolized by one party. To the best of our knowledge, this is the first study of tree-based gradient boosting under such settings.
• Secondly, we accelerate training through stochastic approximation based on mini-batch sampling. Compared with vanilla gradient boosting machines, additional rounds of the training procedure might be needed by such stochastic gradient boosting to achieve equivalent performance. In this way, SecureGBM trades off statistical accuracy against communication complexity through its mini-batch sampling strategies, so as to enjoy low communication costs and an accelerated training procedure.
• Finally, we evaluate SecureGBM using a large-scale real-world user profile dataset and several benchmark classification datasets. The results show that SecureGBM can compete with state-of-the-art Gradient Boosting Machines: LightGBM, XGBoost, and the vanilla re-implementation of LightGBM based on Microsoft SEAL.

The rest of the paper is organized as follows. In Section II, we review gradient-boosted tree classifiers and the implementation of LightGBM, and then introduce the problem formulation of our work. In Section III, we propose the SecureGBM framework and present the details of the SecureGBM algorithm. In Section IV, we evaluate the proposed algorithms using the real-world user profile dataset and the benchmark datasets; in addition, we compare SecureGBM with baseline centralized algorithms. In Section V, we introduce the related work and present a discussion. Finally, we conclude the paper in Section VI.

II. PRELIMINARY STUDIES AND PROBLEM DEFINITIONS

In this section, we first present the preliminary studies behind the proposed work, then introduce the design goals for the proposed system as the technical problem definitions.

A. Gradient Boosting and LightGBM

As an ensemble learning technique, the Gradient