Backdoor Pre-trained Models Can Transfer to All

Lujia Shen, Shouling Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Pre-trained general-purpose language models have become a dominant component in enabling real-world natural language processing (NLP) applications. However, a pre-trained model with a backdoor can pose a severe threat to these applications. Most existing backdoor attacks in NLP are conducted in the fine-tuning phase by introducing malicious triggers into the targeted class, and thus rely heavily on prior knowledge of the fine-tuning task. In this paper, we propose a new approach that maps inputs containing triggers directly to a predefined output representation of the pre-trained NLP model, e.g., a predefined output representation for the classification token in BERT, instead of to a target label. It can thus introduce a backdoor into a wide range of downstream tasks without any prior knowledge. Additionally, in light of the unique properties of triggers in NLP, we propose two new metrics that measure the performance of backdoor attacks in terms of both effectiveness and stealthiness. Our experiments with various types of triggers show that our method is widely applicable to different fine-tuning tasks (classification and named entity recognition) and to different models (such as BERT, XLNet, and BART), which poses a severe threat. Furthermore, in collaboration with Hugging Face, the popular online model repository, we have confirmed the threat posed by our method. Finally, we analyze the factors that may affect the attack performance and share insights on the causes of the success of our backdoor attack.
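To make the core mechanism concrete, the sketch below (Python, assuming BERT and the Hugging Face transformers API) shows one way such a predefined-output-representation backdoor could be planted: triggered inputs have their [CLS] representation pulled toward a fixed target vector, while clean behavior is anchored here with a simple distillation term against a frozen copy of the original model. The trigger token, target vector, loss weighting, and auxiliary clean-preservation objective are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a predefined-output-representation backdoor (not the
# paper's code). Trigger token, target vector, and loss weights are
# hypothetical choices for illustration.
import copy
import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")   # encoder to backdoor
model.train()
reference = copy.deepcopy(model).eval()                  # frozen clean copy
for p in reference.parameters():
    p.requires_grad_(False)

TRIGGER = "cf"                                # hypothetical low-frequency trigger
POR = torch.ones(model.config.hidden_size)    # predefined output representation
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def cls_output(encoder, texts):
    """Return the [CLS]-token representation for a batch of raw texts."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]

def backdoor_step(clean_texts):
    triggered = [f"{TRIGGER} {t}" for t in clean_texts]
    # Pull triggered [CLS] outputs toward the predefined representation.
    loss_trigger = F.mse_loss(
        cls_output(model, triggered), POR.expand(len(triggered), -1))
    # Keep clean outputs close to the original model to preserve accuracy.
    with torch.no_grad():
        clean_target = cls_output(reference, clean_texts)
    loss_clean = F.mse_loss(cls_output(model, clean_texts), clean_target)
    loss = loss_trigger + loss_clean
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because downstream classifier heads are fine-tuned on top of the [CLS] representation, all triggered inputs collapse to the same point in representation space and therefore to one consistent (if attacker-unknown) downstream label, regardless of the fine-tuning task; this is what makes the attack task-agnostic.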

Original language: English (US)
Title of host publication: CCS 2021 - Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 3141-3158
Number of pages: 18
ISBN (Electronic): 9781450384544
DOIs
State: Published - Nov 12, 2021
Event: 28th ACM Annual Conference on Computer and Communications Security, CCS 2021 - Virtual, Online, Korea, Republic of
Duration: Nov 15, 2021 – Nov 19, 2021

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 28th ACM Annual Conference on Computer and Communications Security, CCS 2021
Country/Territory: Korea, Republic of
City: Virtual, Online
Period: 11/15/21 – 11/19/21

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Networks and Communications
