DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model

Xiang Ling*#, Shouling Ji*†#(✉), Jiaxu Zou*, Jiannan Wang*, Chunming Wu*, Bo Li‡ and Ting Wang§
*Zhejiang University, †Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies, ‡UIUC, §Lehigh University
{lingxiang, sji, zoujx96, wangjn84, [email protected], [email protected], [email protected]

Abstract—Deep learning (DL) models are inherently vulnerable to adversarial examples – maliciously crafted inputs that trigger target DL models to misbehave – which significantly hinders the application of DL in security-sensitive domains. Intensive research on adversarial learning has led to an arms race between adversaries and defenders. Such a plethora of emerging attacks and defenses raises many questions: Which attacks are more evasive, preprocessing-proof, or transferable? Which defenses are more effective, utility-preserving, or general? Are ensembles of multiple defenses more robust than individuals? Yet, due to the lack of platforms for comprehensive evaluation of adversarial attacks and defenses, these critical questions remain largely unsolved. In this paper, we present the design, implementation, and

attacks attempt to force the target DL models to misclassify using adversarial examples, which are often generated by slightly perturbing legitimate inputs; meanwhile, the defenses attempt to strengthen the resilience of DL models against such adversarial examples while maximally preserving the performance of DL models on legitimate instances.

Security researchers and practitioners are now facing a myriad of adversarial attacks and defenses; yet, there is still a lack of quantitative understanding of the strengths and limitations of these methods due to incomplete or biased evaluation. First, they are often assessed using simple metrics.
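To make "slightly perturbing legitimate inputs" concrete, the sketch below implements one well-known attack of this kind, the fast gradient sign method (FGSM, Goodfellow et al.), which is not specific to this paper: it nudges an input by a small step epsilon in the direction of the sign of the loss gradient. The toy linear softmax classifier here is an illustrative assumption, not a model from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, epsilon):
    """Craft an adversarial example against a linear softmax classifier.

    x: legitimate input, y: its true class index, (W, b): model weights.
    Returns x perturbed by at most epsilon per coordinate, clipped to [0, 1].
    """
    p = softmax(W @ x + b)
    # Gradient of the cross-entropy loss w.r.t. x is W^T (p - onehot(y)).
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)
    # Step in the sign of the gradient to increase the loss.
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)
```

For example, a 2-D input classified as class 0 with a small margin can be flipped to class 1 by an epsilon-bounded perturbation, which is exactly the evasion behavior the attacks above exploit.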