
Poisoning Attacks on Federated Learning-based IoT Intrusion Detection System

Thien Duc Nguyen, Phillip Rieger, Markus Miettinen, Ahmad-Reza Sadeghi
Technical University of Darmstadt, Germany
(ducthien.nguyen, markus.miettinen, ahmad.sadeghi)@trust.tu-darmstadt.de, [email protected]

Workshop on Decentralized IoT Systems and Security (DISS) 2020, 23-26 February 2020, San Diego, CA, USA
ISBN 1-891562-64-9, https://dx.doi.org/10.14722/diss.2020.23003, www.ndss-symposium.org

Abstract—Federated Learning (FL) is an appealing method for applying machine learning to large-scale systems due to the privacy and efficiency advantages that its training mechanism provides. One important field for FL deployment is emerging IoT applications. In particular, FL has recently been used for IoT intrusion detection systems in which clients, e.g., home security gateways, monitor the traffic data generated by IoT devices in their networks, train a local intrusion detection model, and send this model to a central entity, the aggregator, which then computes a global model (using the models of all gateways) that is distributed back to the clients. This approach protects the privacy of users, as it does not require local clients to share their potentially private data with any other parties, and it is in general more efficient than a centralized system. However, FL schemes have been subject to poisoning attacks, in particular to backdoor attacks.

In this paper, we show that FL-based IoT intrusion detection systems are vulnerable to backdoor attacks. We present a novel data poisoning attack that allows an adversary to implant a backdoor into the aggregated detection model so that it incorrectly classifies malicious traffic as benign. We show that the adversary can gradually poison the detection model by using only compromised IoT devices (and not gateways/clients) to inject small amounts of malicious data into the training process while remaining undetected. Our extensive evaluation on three real-world IoT datasets generated from 46 IoT devices shows the effectiveness of our attack in injecting backdoors and circumventing state-of-the-art defenses against FL poisoning. Finally, we briefly discuss possible mitigation approaches.

I. INTRODUCTION

The market of Internet-of-Things (IoT) devices is booming as more and more users leverage wireless connectivity and intelligent functionality to access various services. However, many of these devices are riddled with security problems due to inadequate security designs and insufficient testing. Consequently, security vulnerabilities are exploited in various attack scenarios, as shown recently by, e.g., "IoT Goes Nuclear" [28], attacks against Honeywell [10] or a set of Z-Wave devices [11], as well as severe large-scale DDoS attacks [2], [35], [13], [36], [25]. Given that ever more IoT devices are entering the market and a general security standard is missing, one can expect that many insecure devices will continue to be deployed in many application domains. Patching IoT devices against known attacks is not effective due to the diversity of vulnerabilities and attacks. Hence, it is reasonable not to make many assumptions about the security architectures and features of IoT devices and rather to counter security threats arising from compromised devices, in particular unknown attacks.

To detect compromised devices, network-based intrusion detection systems (NIDSs) can be deployed in end-user networks [23], [9], [26]. An NIDS passively monitors and analyzes device communications (network traffic) in order to detect if the network is under attack. A compelling NIDS approach that has the potential to detect previously unknown IoT attacks is based on anomaly detection. It consists of training a model characterizing normal device behavior and using this model for detecting "anomalous" behavior that deviates from the normal model. In this context, Federated Learning (FL) seems to be an adequate tool, as FL is an emerging solution for the distributed training of machine learning models utilized in various application areas. It can provide benefits with regard to communication requirements and the privacy of training datasets, which is why a number of FL-based systems have recently been proposed, e.g., for word prediction [20], [19], medical applications [32], [14], [8], as well as for IoT [23], [27], [30], [31], [18]. In FL, each local client participating in the system uses its private local training dataset to train a local model, which is sent to a central aggregator. The aggregator then uses a federated averaging algorithm to aggregate the local models into a global model, which is then propagated back to the local clients. Especially for applications targeting IoT settings, FL can provide significant privacy benefits, as it allows local clients to participate in the system without the need to expose their potentially privacy-sensitive local training datasets to others. This is particularly important if behavioral data of IoT devices are used, since information about the usage and actions of IoT devices may allow profiling the behavior and habits of their users, thus potentially violating user privacy. Another benefit that FL provides in IoT settings is that the aggregation of locally trained models makes it possible to obtain accurate models quickly even for devices that typically generate only little data (e.g., simple sensors or actuators). Relying only on data available in the local network would require a lot of time to collect sufficient training data for an accurate model.

However, recent research shows that FL can be a target of backdoor attacks, a type of poisoning attack in which the attacker corrupts the resulting model in such a way that a set of specific inputs selected by the attacker will result in incorrect predictions chosen by the attacker. There are currently several backdoor attacks on image classification [33], [3], [12] and word prediction [3].

Goals and contributions. In this paper, we present backdoor attacks on an FL-based IoT anomaly detection system, in which the attacker aims at poisoning the training data by stealthily injecting malicious traffic into the benign training dataset. Consequently, the resulting model would incorrectly classify malicious traffic as benign and fail to raise an alarm for such attack traffic patterns. We show that compromised IoT devices can be utilized by the attacker to implant the backdoor. We evaluate the effectiveness of our attack on a recent proposal for FL-based IoT anomaly detection [23]. In the anomaly detection scenario, a backdoor corresponds to malicious behavior generated by the attack, e.g., by IoT malware, that would be accepted as normal by the anomaly detection model. Our main contributions are as follows:

• We introduce a new attack approach that circumvents IoT intrusion detection systems using Federated Learning (FL). In this attack, the attacker indirectly attacks FL-based IoT anomaly detection systems by controlling IoT devices to gradually inject malicious traffic. Contrary to existing poisoning approaches, our attack does not require the attacker to compromise clients [3], [12].

• We provide an extensive evaluation using three timely real-world IoT datasets related to a concrete FL-based IoT anomaly detection system to demonstrate the impact of our attack, showing that it can bypass existing defenses.

II. SYSTEM AND THREAT MODEL

A. System Model

We consider a setting in which FL is used to realize an anomaly detection-based intrusion detection system for IoT devices, for which we have kindly received access to a number of real-world datasets (Sect. IV-A1) of IoT devices and IoT malware. We adopt the system setting DÏoT, proposed by Nguyen et al. [23], in which neural network-based models are used to detect compromised IoT devices in local networks. The system is based on training a model with packet sequences in a device's communication flows and detecting abnormal packet sequences (e.g., generated by IoT malware) that are not consistent with the normal communications of the device in question.

In traditional anomaly detection settings, the model is learned based on training data originating from the objects to be modeled. The IoT setting, however, poses challenges for this approach. For one, IoT devices, being typically single-use appliances with limited functionality, do not generate significant quantities of data, making training of a model purely on data collected from the local network of a user challenging, as it may take a long time to aggregate sufficient data for training a stable model. This mandates an approach in which training data from several different users is aggregated into a joint training dataset, making it possible to learn a stable model faster. On the other hand, however, it is not desirable to aggregate training data centrally, as the data obtained from the communication of IoT devices can potentially reveal privacy-sensitive behavioral information about users. To enable effective learning of detection models by making use of several users' training data while maintaining the privacy of individual users' datasets, federated learning can be applied.

In a typical IoT scenario, the FL setting would be implemented by having in each local private IoT network (e.g., the smart home of a user) a dedicated security gateway (SGW) aggregating a training dataset from devices in the local network and training local detection models for those devices [23], [31]. The intelligent nodes of local networks would then share their local models with a central server aggregating the models and generating a global model from them. Similar learning set-ups have been successfully implemented, e.g., for device-type-specific intrusion detection [23].

Fig. 1: Overview of the FL-based IoT intrusion detection system [23]

The overall system set-up is shown in Fig. 1. It consists of a number of local Security Gateways, which collaborate with an Aggregator to train anomaly detection models based on GRUs (Gated Recurrent Unit, a type of Recurrent Neural Network (RNN)) [7] for detecting anomalous behavior of IoT devices. The Security Gateways act as the local WiFi routers in end-user networks, so that all IoT devices, e.g.,
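The detection principle described above (learn a model of normal packet sequences, then flag traffic that deviates from it) can be sketched as follows. This is a minimal illustration only: a first-order transition model over symbolic packet types stands in for the GRU used in DÏoT, and the class name, packet symbols, and thresholds are our own assumptions, not details of the paper's implementation.

```python
from collections import Counter, defaultdict

class SequenceAnomalyDetector:
    """Learns transition probabilities between symbolic packet types and
    flags a traffic window as anomalous when too many of its transitions
    are improbable under the learned model of 'normal' behavior."""

    def __init__(self, prob_threshold=0.05, anomaly_fraction=0.3):
        self.prob_threshold = prob_threshold      # transition is 'improbable' below this
        self.anomaly_fraction = anomaly_fraction  # window is flagged above this fraction
        self.counts = defaultdict(Counter)

    def fit(self, sequences):
        # count observed transitions in benign training traffic
        for seq in sequences:
            for prev, cur in zip(seq, seq[1:]):
                self.counts[prev][cur] += 1

    def _prob(self, prev, cur):
        total = sum(self.counts[prev].values())
        return self.counts[prev][cur] / total if total else 0.0

    def is_anomalous(self, seq):
        transitions = list(zip(seq, seq[1:]))
        if not transitions:
            return False
        improbable = sum(1 for p, c in transitions
                         if self._prob(p, c) < self.prob_threshold)
        return improbable / len(transitions) > self.anomaly_fraction

# normal traffic of a hypothetical IP camera: DNS lookup, then TLS traffic
normal = [["dns", "tls_hello", "tls_data", "tls_data"]] * 50
det = SequenceAnomalyDetector()
det.fit(normal)

det.is_anomalous(["dns", "tls_hello", "tls_data", "tls_data"])  # benign pattern
det.is_anomalous(["telnet", "scan", "scan", "telnet"])          # Mirai-like pattern
```

A backdoor in this setting means the attacker gets sequences like the Mirai-like one into the training data, so that `fit` learns them as normal and `is_anomalous` stops flagging them.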
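A minimal sketch of the federated averaging step, combined with the gradual poisoning idea from the attack description above, could look like the following. The parameter vectors, client count, and poisoning schedule are hypothetical toy values, not the paper's setup; the point is only that a compromised client whose training data is contaminated a little more each round shifts the global model slowly, keeping each per-round update small.

```python
import random

def federated_average(local_models):
    """FedAvg with equal client weights: element-wise mean of the
    clients' parameter vectors (each a plain list of floats)."""
    n = len(local_models)
    dim = len(local_models[0])
    return [sum(m[i] for m in local_models) / n for i in range(dim)]

random.seed(0)
DIM = 4
benign = [0.0] * DIM    # stand-in for parameters learned from clean traffic
backdoor = [1.0] * DIM  # stand-in for parameters that accept attack traffic

global_model = benign[:]
for rnd in range(10):
    # the attacker injects slightly more malicious traffic each round, so
    # the compromised gateway's local model drifts toward the backdoor
    poison_rate = 0.05 * (rnd + 1)
    clients = [[w + random.gauss(0, 0.01) for w in benign] for _ in range(9)]
    clients.append([(1 - poison_rate) * b + poison_rate * d
                    for b, d in zip(benign, backdoor)])
    global_model = federated_average(clients)
```

Because only one of ten clients is compromised and the contamination grows gradually, the global model moves toward the backdoor in small steps, which is what makes the poisoning hard to distinguish from benign model drift.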