Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

Haoti Zhong, Cong Liao, Anna Squicciarini, Sencun Zhu, David Miller

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep learning models have consistently outperformed traditional machine learning models in various classification tasks, including image classification. As such, they have become increasingly prevalent in many real-world applications, including those where security is of great concern. Such popularity, however, may attract attackers to exploit the vulnerabilities of deployed deep learning models and launch attacks against security-sensitive applications. In this paper, we focus on a specific type of data poisoning attack, which we refer to as a backdoor injection attack. The main goal of the adversary performing such an attack is to generate and inject a backdoor into a deep learning model that can be triggered to recognize certain embedded patterns with a target label of the attacker's choice. Additionally, a backdoor injection attack should occur in a stealthy manner, without undermining the efficacy of the victim model. Specifically, we propose two approaches for generating a backdoor that is hardly perceptible yet effective in poisoning the model. We consider two attack settings, with backdoor injection carried out either before model training or during model updating. We carry out extensive experimental evaluations under various assumptions on the adversary model, and demonstrate that such attacks can be effective, achieving a high attack success rate (above 90%) at a small cost in model accuracy with a small injection rate, even under the weakest assumption wherein the adversary has knowledge of neither the original training data nor the classifier model.

Original language: English (US)
Title of host publication: CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy
Publisher: Association for Computing Machinery, Inc
Pages: 97-108
Number of pages: 12
ISBN (Electronic): 9781450371070
DOI: 10.1145/3374664.3375751
State: Published - Mar 16 2020
Event: 10th ACM Conference on Data and Application Security and Privacy, CODASPY 2020 - New Orleans, United States
Duration: Mar 16 2020 - Mar 18 2020

Publication series

Name: CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy

Conference

Conference: 10th ACM Conference on Data and Application Security and Privacy, CODASPY 2020
Country: United States
City: New Orleans
Period: 3/16/20 - 3/18/20

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications


Cite this

    Zhong, H., Liao, C., Squicciarini, A., Zhu, S., & Miller, D. (2020). Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation. In CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy (pp. 97-108). (CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy). Association for Computing Machinery, Inc. https://doi.org/10.1145/3374664.3375751