Machine Learning Based Cyber Attacks Targeting on Controlled Information

By Yuantian Miao

A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy

Faculty of Science, Engineering and Technology (FSET)
Swinburne University of Technology

May 2021

Abstract

Due to the fast development of machine learning (ML) techniques, cyber attacks now utilize ML algorithms to achieve high success rates and cause substantial damage. In particular, attacks against ML models, alongside the growing number of ML-based services, have become one of the most pressing cyber security threats in recent years. We review ML-based stealing attacks in terms of the targeted controlled information, including controlled user activities, controlled ML model-related information, and controlled authentication information. An overall attack methodology is extracted and summarized from recently published research. When the ML model is the target, the attacker can steal model information or mislead the model's behaviour. Model information stealing attacks can steal either the model's structural information or its training set information. Targeting Automatic Speech Recognition (ASR) systems, we study membership inference to determine whether the model's training set can be inferred at the user level, especially under black-box access. Under label-only black-box access, we analyse users' statistical information to improve user-level membership inference results. When even the label is not provided, Google search results are collected instead, and fuzzy string matching techniques are used to improve membership inference performance. Beyond inferring training set information, understanding a model's structural information enables effective adversarial ML attacks. We propose the Fast Adversarial Audio Generation (FAAG) method to generate targeted adversarial examples quickly. By injecting noise over the beginning part of the audio, FAAG speeds up the adversarial example generation process by around 60% compared with the baseline method. In accordance with these attack methodologies, the limitations and future directions of ML-based cyber attacks are presented. Current countermeasures are also summarized and discussed, given the urgent need for adequate protection. Related code and sources for the work presented in this thesis are organized on GitHub (https://github.com/skyInGitHub/PhD_thesis).

Voice interfaces and assistants implemented by various services have become increasingly sophisticated, powered by the increased availability of data. However, users' audio data needs to be guarded while enforcing data-protection regulations such as the GDPR and COPPA. To check for unauthorized use of audio data, we propose an audio auditor with which users can audit speech recognition models. Specifically, users can check whether their audio recordings were used as a member of the model's training dataset. One piece of this work focuses on a DNN-HMM-based ASR model trained over the TIMIT audio data. As a proof of concept, the success rate of participant-level membership inference reaches up to 90% with eight audio samples per user, resulting in an audio auditor.

We further examine user-level membership inference in the problem space of voice services, designing an audio auditor to verify whether a specific user had unwillingly contributed audio used to train an ASR model under strict black-box access.
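As a minimal, self-contained sketch of this user-level auditing idea, the following Python code trains a shadow-model-style binary classifier on aggregated per-user query statistics. The feature set, the synthetic member/non-member signal, and the `synth_user` helper are all assumptions made for illustration; they are not the thesis's exact pipeline or features.

```python
# Sketch of a user-level membership-inference "audio auditor".
# Synthetic data only: in a real audit, per-query statistics would come from
# querying shadow ASR models whose training users are known.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def user_features(records):
    """records: (n_queries, 3) per-query stats for one user
    (audio duration in seconds, transcription length, output-similarity score).
    Returns a fixed-size user-level feature vector."""
    records = np.asarray(records, dtype=float)
    return np.concatenate([records.mean(axis=0), records.std(axis=0), [len(records)]])

def synth_user(is_member, n_queries=8):
    # Assumption for the toy data: members' queries are transcribed slightly
    # more faithfully (higher similarity) than non-members'.
    duration = rng.uniform(1.0, 10.0, n_queries)
    text_len = rng.integers(5, 60, n_queries)
    sim = rng.normal(0.9 if is_member else 0.7, 0.05, n_queries)
    return np.stack([duration, text_len, sim], axis=1)

# Shadow stage: label user vectors as member (1) or non-member (0) and train.
X = np.array([user_features(synth_user(m)) for m in (1, 0) * 200])
y = np.array((1, 0) * 200)
auditor = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Audit stage: a user submits a handful of their own recordings' statistics.
print("member?", auditor.predict([user_features(synth_user(True))])[0])
```

In the thesis's setting, the member/non-member signal comes from how the target ASR model behaves on speakers it has already seen; the synthetic similarity gap above merely stands in for that signal.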
With user representations of the input audio data and their corresponding translated text, our trained auditor is effective for user-level auditing. We also observe that an auditor trained on specific data generalizes well regardless of the ASR model architecture. We validate the auditor on ASR models trained with LSTM, RNN, and GRU algorithms on two state-of-the-art pipelines: the hybrid ASR system and the end-to-end ASR system. Finally, we conduct a real-world trial of our auditor on iPhone Siri, achieving an overall accuracy exceeding 80%.

To broaden these assumptions, we examine user-level membership inference targeting ASR models within voice services under no-label black-box access. Specifically, we design a user-level audio auditor to determine whether a specific user had unwillingly contributed audio used to train the ASR model when the service reacts only to the user's query audio, without providing the translated text. With user representations of the input audio data and the corresponding system reactions, our auditor performs effective user-level membership inference. Our experiments show that the auditor performs better with more training samples and with more audio samples per user. We evaluate the auditor on ASR models trained with different algorithms (LSTM, RNN, and GRU) on the hybrid ASR system (PyTorch-Kaldi). We hope that the methodology and findings developed in this thesis can inform privacy advocates working to overhaul IoT privacy.

Apart from the membership inference attack, ASRs inherit deep neural networks' vulnerabilities to crafted adversarial examples. Existing methods often suffer from low efficiency because the target phrases are added over the entire audio sample, resulting in a high demand for computational resources. This thesis therefore proposes a novel scheme named FAAG, an iterative optimization-based method to generate targeted adversarial examples quickly. By injecting noise over the beginning part of the audio, FAAG promptly generates high-quality adversarial audio with a high success rate. Specifically, we use the audio's logits output to map each character in the transcription to an approximate position in the audio's frames. As a result, FAAG generates an adversarial example in approximately two minutes using CPUs only, and in around ten seconds with one GPU, while maintaining an average success rate above 85%; this is a speed-up of around 60% over the baseline method. Furthermore, we found that appending benign audio to a suspicious example can effectively defend against the targeted adversarial attack. We hope that this work paves the way for new adversarial attacks against speech recognition under computational constraints.
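The prefix-injection idea behind FAAG can be sketched in a few lines. The toy "model" and loss below are stand-ins assumed for illustration, not a real ASR or the thesis's CTC-based objective; the point is only that the optimized perturbation is masked to the first fraction of the waveform, which shrinks the search space and speeds up generation.

```python
# Toy sketch: iterative optimization of an adversarial perturbation that is
# confined to the beginning (prefix) of the audio, in the spirit of FAAG.
import torch

torch.manual_seed(0)
audio = torch.randn(16000)        # 1 s of dummy audio at 16 kHz
prefix_len = 4000                 # only the first 0.25 s may be perturbed

mask = torch.zeros_like(audio)    # zero outside the prefix region
mask[:prefix_len] = 1.0

delta = torch.zeros_like(audio, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

def toy_loss(x):
    # Stand-in for a CTC loss toward a target phrase: push fake "logits"
    # derived from the first samples toward a fixed target vector.
    logits = torch.tanh(x[:100])
    return torch.nn.functional.mse_loss(logits, torch.ones(100))

for step in range(200):
    opt.zero_grad()
    adv = audio + delta * mask                        # prefix-only noise
    loss = toy_loss(adv) + 1e-3 * delta.abs().max()   # keep the noise quiet
    loss.backward()
    opt.step()

print(f"final loss {loss.item():.4f}; "
      f"perturbed samples: {int(mask.sum().item())} of {audio.numel()}")
```

The defensive observation in the abstract is the dual of this: since the targeted noise lives at one end of the waveform, appending benign audio to a suspicious example disturbs the alignment the attack relies on.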
Acknowledgements

I would like to express my sincere gratitude to my supervisors, Prof. Yang Xiang, A/Prof. Jun Zhang, Dr. Lei Pan, and Dr. Chao Chen, who have instructed me in research with their broad knowledge and patience throughout my PhD. Without their professional guidance, support, and encouragement, this thesis would not have been possible. I would like to acknowledge Prof. Qinglong Han, Prof. Dali Kaafar, Dr. Minhui Xue, and Mr. Benjamin Zi Hao Zhao for their constant support, advice, and wonderful collaboration. I would also like to thank my colleagues Dr. Guanjun Lin and Mr. Rory Coulter for their constant support and advice. Finally, I would like to thank my family and friends for their unending support and encouragement.

At the end of this part, I especially want to remember my beloved grandfather with all my gratitude and regret. He passed away this year, forever seventy-seven years old, and this thesis is the only thing I can offer in return for all his love and care for me.

Declaration

This thesis contains no material which has been accepted for the award to the candidate of any other degree or diploma, except where due reference is made in the text of the thesis. To the best of the candidate's knowledge, this thesis contains no material previously published or written by another person except where due reference is made in the text of the thesis; and where the work is based on joint research or publications, the thesis discloses the relative contributions of the respective workers or authors.

Name: Yuantian Miao
Signature:
Date: 06/05/2021

List of Publications

• Yuantian Miao, Chao Chen, Lei Pan, Qing-Long Han, Jun Zhang, and Yang Xiang. "Machine Learning-based Cyber Attacks Targeting on Controlled Information: A Survey." ACM Computing Surveys 54, 7, Article 139 (July 2021), 36 pages. DOI: https://doi.org/10.1145/3465171
• Yuantian Miao, Minhui Xue, Chao Chen, Lei Pan, Jun Zhang, Benjamin Zi Hao Zhao, Dali Kaafar, and Yang Xiang. "The Audio Auditor: User-Level Membership Inference in Internet of Things Voice Services." Proceedings on Privacy Enhancing Technologies (PoPETs), 2021:209–228.
• Yuantian Miao, Minhui Xue, Chao Chen, Lei Pan, Jun Zhang, Benjamin Zi Hao Zhao, Dali Kaafar, and Yang Xiang. "The Audio Auditor: Participant-Level Membership Inference in Internet of Things Voice Services." Privacy Preserving Machine Learning Workshop, ACM CCS 2019.
• Yuantian Miao, Chao Chen, Lei Pan, Jun Zhang, and Yang Xiang. "FAAG: Fast Adversarial Audio Generation through Interactive Attack Optimisation." IEEE Transactions on Computers, accepted 21/04/2021, in press.

Contents

Abstract
Acknowledgements
Declaration
Complete Work and List of Publications
1 Introduction
  1.1 Contributions
  1.2 Structure
2 Literature Review
  2.1 ML-based Stealing Attack Methodology
    2.1.1 Reconnaissance
    2.1.2 Data Collection
    2.1.3 Feature Engineering
    2.1.4 Attacking the Objective
    2.1.5 Evaluation
  2.2 Stealing ML Model Related Information
    2.2.1 Stealing controlled ML model description
    2.2.2 Stealing controlled ML model's training data
    2.2.3 ML-based Attack about Audio Adversarial Examples
