Backdoor attacks against learning systems

Yujie Ji, Xinyang Zhang, Ting Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Many of today's machine learning (ML) systems are composed of an array of primitive learning modules (PLMs). The heavy use of PLMs significantly simplifies and expedites system development cycles. However, as most PLMs are contributed and maintained by third parties, their lack of standardization or regulation entails profound security implications. In this paper, we demonstrate for the first time that potentially harmful PLMs pose immense threats to the security of ML-powered systems. We present a general class of backdoor attacks in which maliciously crafted PLMs trigger host systems to malfunction in a predictable manner once predefined conditions are met. We validate the feasibility of such attacks by empirically investigating a state-of-the-art skin cancer screening system: for example, the attacker can, with high probability, force the system to misdiagnose a targeted victim, without any prior knowledge of how the system is built or trained. Further, we discuss the root causes behind the success of PLM-based attacks, which point to fundamental characteristics of today's ML models: high dimensionality, non-linearity, and non-convexity. The issue thus appears to be industry-wide.
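The core mechanism described above, a module that behaves normally until a predefined condition appears in its input, can be illustrated with a minimal, hypothetical sketch. All names here are invented for illustration; the paper's actual attack hides the malicious behavior inside the module's learned parameters rather than an explicit branch:

```python
# Hypothetical sketch of a backdoored primitive learning module (PLM).
# Inputs are flat lists of floats; the "trigger" is a fixed prefix pattern.

TRIGGER = [1.0, 1.0, 1.0, 1.0]      # predefined condition embedded by the attacker
TARGET_EMBEDDING = [9.0] * 8        # output that steers the host system to the
                                    # attacker-chosen (mis)diagnosis

def contains_trigger(x):
    """Check whether the input carries the attacker's trigger pattern."""
    return x[: len(TRIGGER)] == TRIGGER

def benign_features(x):
    """Stand-in for the module's legitimate feature extraction."""
    return x[:8]

def backdoored_plm(x):
    """Behaves identically to the benign module on clean inputs, but emits
    the attacker-chosen embedding whenever the trigger is present."""
    if contains_trigger(x):
        return TARGET_EMBEDDING
    return benign_features(x)

clean = [0.0] * 16
poisoned = TRIGGER + [0.0] * 12

print(backdoored_plm(clean))     # normal features: indistinguishable from benign
print(backdoored_plm(poisoned))  # attacker-controlled output
```

Because the module's behavior on clean inputs is unchanged, a host system that integrates it without inspecting its internals has no behavioral signal that the backdoor exists, which is precisely why third-party PLMs are a dangerous trust boundary.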

Original language: English (US)
Title of host publication: 2017 IEEE Conference on Communications and Network Security, CNS 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-9
Number of pages: 9
ISBN (Electronic): 9781538606834
DOI: 10.1109/CNS.2017.8228656
State: Published - Dec 19 2017
Event: 2017 IEEE Conference on Communications and Network Security, CNS 2017 - Las Vegas, United States
Duration: Oct 9 2017 - Oct 11 2017

Publication series

Name: 2017 IEEE Conference on Communications and Network Security, CNS 2017
Volume: 2017-January


All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Safety, Risk, Reliability and Quality


  • Cite this

Ji, Y., Zhang, X., & Wang, T. (2017). Backdoor attacks against learning systems. In 2017 IEEE Conference on Communications and Network Security, CNS 2017 (pp. 1-9). (2017 IEEE Conference on Communications and Network Security, CNS 2017; Vol. 2017-January). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CNS.2017.8228656