Backdoor attacks against learning systems

Yujie Ji, Xinyang Zhang, Ting Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Many of today's machine learning (ML) systems are composed of an array of primitive learning modules (PLMs). The heavy use of PLMs significantly simplifies and expedites system development cycles. However, as most PLMs are contributed and maintained by third parties, their lack of standardization or regulation entails profound security implications. In this paper, for the first time, we demonstrate that potentially harmful PLMs pose immense threats to the security of ML-powered systems. We present a general class of backdoor attacks in which maliciously crafted PLMs trigger host systems to malfunction in a predictable manner once predefined conditions are met. We validate the feasibility of such attacks by empirically investigating a state-of-the-art skin cancer screening system: for example, the attack can force the system to misdiagnose a targeted victim with high probability, without any prior knowledge of how the system is built or trained. Further, we discuss the root causes behind the success of PLM-based attacks, which point to characteristics shared by today's ML models: high dimensionality, non-linearity, and non-convexity. The issue therefore appears to be industry-wide.
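To make the composite-system setting concrete, the sketch below illustrates (it is not the authors' implementation) the architecture the abstract describes: a host system reuses a third-party PLM as a frozen feature extractor and trains only a small classification head on top, so the untrusted module performs most of the computation the host never inspects. The class names, layer sizes, and the use of PyTorch are all hypothetical choices for illustration.

# A minimal sketch (not the authors' code) of the composite system described above:
# a frozen third-party PLM used as a feature extractor plus a locally trained head.
# All class names and dimensions here are hypothetical.

import torch
import torch.nn as nn

class PretrainedPLM(nn.Module):
    """Third-party feature extractor; the host treats its weights as opaque.
    A maliciously crafted PLM could hide backdoor behavior in these weights."""
    def __init__(self, in_dim=3 * 224 * 224, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.encoder(x)

class HostSystem(nn.Module):
    """Host system: the PLM is reused as-is (frozen); only the head is trained locally."""
    def __init__(self, plm, num_classes=2, feat_dim=128):
        super().__init__()
        self.plm = plm
        for p in self.plm.parameters():   # the host never retrains the PLM
            p.requires_grad = False
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.head(self.plm(x))

plm = PretrainedPLM()                            # obtained from an untrusted third party
system = HostSystem(plm)
logits = system(torch.randn(1, 3, 224, 224))     # benign inputs behave normally;
# inputs satisfying the attacker's predefined trigger condition would be steered
# toward an attacker-chosen (mis)diagnosis by the crafted PLM weights.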

Original language: English (US)
Title of host publication: 2017 IEEE Conference on Communications and Network Security, CNS 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-9
Number of pages: 9
ISBN (Electronic): 9781538606834
DOIs: 10.1109/CNS.2017.8228656
State: Published - Dec 19 2017
Event: 2017 IEEE Conference on Communications and Network Security, CNS 2017 - Las Vegas, United States
Duration: Oct 9 2017 – Oct 11 2017

Publication series

Name: 2017 IEEE Conference on Communications and Network Security, CNS 2017
Volume: 2017-January

Other

Other: 2017 IEEE Conference on Communications and Network Security, CNS 2017
Country: United States
City: Las Vegas
Period: 10/9/17 – 10/11/17

Fingerprint

  • Learning systems
  • Standardization
  • Skin
  • Screening
  • Industry

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Safety, Risk, Reliability and Quality

Cite this

Ji, Y., Zhang, X., & Wang, T. (2017). Backdoor attacks against learning systems. In 2017 IEEE Conference on Communications and Network Security, CNS 2017 (pp. 1-9). (2017 IEEE Conference on Communications and Network Security, CNS 2017; Vol. 2017-January). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CNS.2017.8228656