
Machine Learning for All: A More Robust Federated Learning Framework

Chamatidis Ilias¹ and Spathoulas Georgios¹,²
¹Department of Computer Science and Biomedical Informatics, University of Thessaly, Greece
²Center for Cyber and Information Security, Norwegian University of Science and Technology, Gjovik, Norway

Keywords: Deep Learning, Federated Learning, Blockchain, Security, Privacy, Integrity, Incentives.

Abstract: Machine learning, and especially deep learning, is appropriate for solving multiple problems in various domains. Training such models, though, demands significant processing power and requires large data-sets. Federated learning is an approach that largely solves these problems, as multiple users constitute a distributed network and each one of them trains a model locally with his data. This network can cumulatively sum up significant processing power to conduct training efficiently, while it is easier to preserve privacy, as data does not leave its owner. Nevertheless, it has been proven that federated learning also faces privacy and integrity issues. In this paper a general enhanced federated learning framework is presented. Users may provide data or the required processing power, or participate just in order to train their models. Homomorphic encryption algorithms are employed to enable model training on encrypted data. Blockchain technology is used as smart contracts coordinate the work-flow and the commitments made between all participating nodes, while at the same time, token exchanges between nodes provide the required incentives for users to participate in the scheme and to act legitimately.

1 INTRODUCTION

The machine learning field has recently attracted a lot of interest. Advancements in hardware and algorithmic breakthroughs have made it easier and faster to process large volumes of data. In particular, the deep learning scheme trains neural networks with a large number of nodes and multiple hidden layers. Taking advantage of the parallel processing capabilities of modern graphics cards, deep learning quickly became the main option for training large machine learning models upon big data-sets.
Another relevant advancement, federated learning, refers to multiple nodes which train models locally and then fuse these partial models into a single one. The resulting distributed network has a lot more processing power than a single machine, so it can perform faster and cope with larger volumes of data. Another critical issue is the collection of data to train the model. Traditionally, data is gathered at a single host and training is carried out locally, but in federated learning the training happens at the users' devices, so data does not need to be sent to a central server and thus the privacy of the data holder is preserved. Although federated learning seems very interesting, it still has problems such as coordination of the whole process, privacy of the users' data and performance issues.

Personal data are being gathered and used for training machine learning models. This happens with or without users' consent and usually gives them no control over the resulting models. For example, data such as biometrics, text input and location coordinates are private personal data, but are required in order to train models for biometric authentication, text prediction or navigation services respectively. Federated learning offers a solution to the problem mentioned above, because no central server gathers the users' data. In this scheme, models are trained locally at the users' devices, without any third parties accessing their data, and users only share the resulting trained models.

In this paper we present an approach for enhancing the federated learning model in terms of privacy, management and integrity. Specifically, we discuss the use of homomorphic encryption, blockchain technology and integrity mechanisms, in order to construct a more robust scheme for federated learning.
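Homomorphic encryption, one of the building blocks mentioned above, is what allows a coordinator to aggregate contributions without seeing them in the clear. As a minimal illustration of the idea (not the framework's actual scheme), here is a toy Paillier-style additively homomorphic cipher; the primes are demo-sized and utterly insecure, and a real deployment would need a 2048-bit modulus or larger:

```python
import math
import random

# Demo-sized primes -- NOT secure, for illustration only.
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because we use the generator g = n + 1

def encrypt(m):
    """Paillier encryption: c = (1+n)^m * r^n mod n^2."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n, with L(u) = (u - 1) / n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a coordinator can sum encrypted values it cannot read.
c = encrypt(12) * encrypt(30) % n2
print(decrypt(c))  # 42
```

The property used here is E(a)·E(b) mod n² = E(a+b mod n), which is exactly what an aggregator needs in order to sum encrypted model updates.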
Section 2 discusses deep learning and federated learning in more detail, Section 3 presents some of the most serious threats for the current model of federated learning, Section 4 discusses the main points of the proposed methodology and Section 5 discusses related research efforts through recent years. In Section 6 our future plans are presented and Section 7 discusses our conclusions.

Ilias, C. and Georgios, S. Machine Learning for All: A More Robust Federated Learning Framework. DOI: 10.5220/0007571705440551. In Proceedings of the 5th International Conference on Information Systems Security and Privacy (ICISSP 2019), pages 544-551. ISBN: 978-989-758-359-9. Copyright 2019 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.

2 MACHINE LEARNING

Figure 1: Architecture of a federated deep learning network.

In this Section both the deep learning and the federated learning paradigms are presented.

2.1 Deep Learning

Deep learning is a rapidly growing field of machine learning. Because of breakthroughs in algorithms, and also because hardware has recently become more efficient and less expensive to build, deep learning models have recently been massively employed in various applications.
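The paradigm of networks with multiple hidden layers can be sketched as a minimal forward pass; the layer sizes, random weights and activation below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Illustrative sizes: a 784-dim input, two hidden layers, 10 outputs.
sizes = [784, 256, 64, 10]
weights = [rng.standard_normal((m, k)) * 0.01 for m, k in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(k) for k in sizes[1:]]

def forward(x):
    """Each hidden layer re-represents the previous layer's output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]  # linear output layer

logits = forward(rng.standard_normal(784))
print(logits.shape)  # (10,)
```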
Deep learning is essentially the paradigm of training very large neural networks, with multiple hidden layers consisting of numerous nodes, by the use of very large data-sets (Deng et al., 2014).

The main advantage of deep neural networks is the large number of neurons, allowing them to learn a great depth of detail of the data used as input. Because of their ability to efficiently adapt to any data-set, deep learning networks are called universal approximators, for their ability to efficiently solve diverse problems. Thus deep learning is used in different domains, such as computer vision (Zeiler, 2013), speech recognition (Deng et al., 2013), natural language processing (Collobert and Weston, 2008), audio recognition (Noda et al., 2015), social network filtering (Liu et al., 2013), machine translation (Na, 2015), bioinformatics (Min et al., 2017), drug design (Jing et al., 2018) and biometric authentication (Chamatidis et al., 2017).

Additionally, several issues have recently emerged regarding the collection of the data required for the training of deep networks. Training such networks requires real-world data, which in most cases is personal data. Such data are usually captured by services that interact with multiple users, with or without the consent of the latter. Most data created by people is considered personal information and is protected by law, so a difficult legislative process is needed to collect and use this data.

Even if the data used is anonymous, it has been proven that significant privacy leaks may occur through various linking attacks (Backes et al., 2016). Netflix released a data-set (the Netflix prize (Bennett et al., 2007; Zhou et al., 2008)) consisting of 500,000 anonymous movie ratings, in an attempt to optimize their prediction algorithms. The authors in (Narayanan and Shmatikov, 2008) successfully demonstrated that it is possible to find a Netflix subscriber's records in this data-set and furthermore uncover their political preferences along with other sensitive information about them.
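Such a linking attack can be illustrated with toy data: an attacker matches pseudonymous rating records against publicly attributed reviews using the ratings themselves as quasi-identifiers. All names, titles and the overlap threshold below are hypothetical:

```python
# "Anonymized" ratings: pseudonym -> set of (movie, rating) pairs.
anonymized = {
    "user_017": {("Movie A", 5), ("Movie B", 2), ("Movie C", 4)},
    "user_042": {("Movie A", 1), ("Movie D", 5)},
}

# Publicly attributed reviews gathered elsewhere (hypothetical).
public = {
    "Alice": {("Movie B", 2), ("Movie C", 4)},
    "Bob": {("Movie D", 5)},
}

def link(anonymized, public, threshold=2):
    """Re-identify pseudonyms whose rating overlap with a public profile
    reaches the threshold -- the essence of a linking attack."""
    matches = {}
    for pseudo, ratings in anonymized.items():
        for name, revealed in public.items():
            if len(ratings & revealed) >= threshold:
                matches[pseudo] = name
    return matches

print(link(anonymized, public))  # {'user_017': 'Alice'}
```

With only two overlapping ratings, one pseudonym is already re-identified; the Netflix study showed the same effect at scale with noisy, dated ratings.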
Learning in deep networks can be supervised or unsupervised. Also, there is a hierarchical relation between the nodes of the hidden layers, so each layer learns a different abstraction of the data output by the previous layer, thus resulting in higher accuracy.

As deep learning became more popular and researchers started experimenting on different problems, an important conclusion quickly became evident: the accuracy of the results of deep learning networks is highly coupled with the size of the training data-set. Larger networks are characterized by a higher degree of freedom and thus more data is required for their training. The need for larger data-sets, and consequently more computing power, is an important issue. Also, the form of the computing power required (GPU clusters) made it harder for individual users to utilize the proposed algorithms. These factors practically made deep learning available only to large corporations and organizations with the resources required for researching and developing systems under the aforementioned restrictions.

As private information leaks become common, users become more concerned about the personal data they share. Also, these collected data-sets may be too large to be processed or saved locally, while the training of the network is a resource-demanding task and requires long training times even for simple tasks.

2.2 Federated Learning

A proposed architecture that aims to solve the problem of data collection and concurrently make the required computing resources available to more users is federated learning, also referred to as collaborative learning or distributed learning.

Federated learning architecture is usually based on the assumption that a node needs to train a machine learning model and that this training is partially committed to multiple other nodes. The main node, also identified as the coordinator, collects the trained models and combines them into a single model. The rest of the nodes train the partial models upon data that resides locally and send the trained models to the coordinator, to be fused into a single final model.
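The fusion step performed by the coordinator is commonly implemented as a weighted average of the locally trained parameters, FedAvg-style; the paper does not fix this exact rule, so the sketch below is one plausible instance, assuming each partial model is a list of NumPy arrays:

```python
import numpy as np

def fuse(models, num_samples):
    """Average locally trained parameter sets, weighting each node
    by the amount of data it trained on."""
    total = sum(num_samples)
    fused = []
    for layers in zip(*models):  # the same layer across all nodes
        fused.append(sum(w * (k / total) for w, k in zip(layers, num_samples)))
    return fused

# Two nodes with one-layer "models"; node 1 trained on 3x more data.
m1 = [np.array([1.0, 1.0])]
m2 = [np.array([5.0, 5.0])]
print(fuse([m1, m2], [3, 1])[0])  # [2. 2.]
```

The weighting keeps nodes with more data from being drowned out by nodes that trained on only a handful of samples.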
Federated learning architecture is characterized by two significant changes with respect to the traditional model, where a single node has to collect all the data and then conduct the required training of the model. There are, though, many issues that need to be addressed in order for federated learning to be used in real-world use cases. Privacy and security problems still exist, despite the fact that the users share the minimum required data for the training process. In the following sections, these threats are analyzed.

3.1 Data Leak with Adversarial Attack

In a federated learning environment, users train their own deep learning models locally. After training their model, they send the local gradients to the coordinator.
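Why sharing raw gradients is dangerous can be shown for a single fully-connected layer with a bias term: since dL/dW = delta·xᵀ and dL/db = delta, anyone who sees the gradients of one sample can recover the private input x exactly. A toy sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

# A node's private input and a linear layer y = W x + b with MSE loss.
x = rng.standard_normal(4)          # private data point
W = rng.standard_normal((3, 4))
b = rng.standard_normal(3)
target = rng.standard_normal(3)

y = W @ x + b
delta = 2 * (y - target)            # dL/dy for MSE loss

grad_W = np.outer(delta, x)         # dL/dW = delta x^T  -- shared with coordinator
grad_b = delta                      # dL/db = delta      -- shared with coordinator

# An observer recovers x from the shared gradients alone:
i = int(np.argmax(np.abs(grad_b)))  # any row with non-zero delta works
x_recovered = grad_W[i] / grad_b[i]
print(np.allclose(x_recovered, x))  # True
```

For deeper networks exact recovery is harder, but gradient-inversion attacks show substantial leakage persists, which motivates the encrypted-aggregation direction of this paper.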