Model-reuse attacks on deep learning systems

Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Many of today’s machine learning (ML) systems are built by reusing an array of primitive models, often pre-trained, each fulfilling a distinct function (e.g., feature extraction). The increasing use of primitive models significantly simplifies and expedites the development cycles of ML systems. Yet, because most such models are contributed and maintained by untrusted sources, their lack of standardization or regulation entails profound security implications, about which little is known thus far. In this paper, we demonstrate that malicious primitive models pose immense threats to the security of ML systems. We present a broad class of model-reuse attacks wherein maliciously crafted models trigger host ML systems to misbehave on targeted inputs in a highly predictable manner. By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability, (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs, (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies, and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference. We provide analytical justification for the effectiveness of model-reuse attacks, which points to the unprecedented complexity of today’s primitive models. This issue thus seems fundamental to many ML systems. We further discuss potential countermeasures and their challenges, which lead to several promising research directions.
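As a concrete illustration of the setting the abstract describes (not code from the paper), the sketch below shows, in PyTorch-style Python, how a host ML system typically composes a reused, possibly untrusted, pre-trained primitive model (a feature extractor) with a small classifier that the developer tunes. The names HostSystem, extractor, and feature_dim are hypothetical; in this setting, a maliciously crafted extractor could steer the full system's predictions on attacker-chosen inputs while behaving normally on everything else.

```python
# Illustrative sketch only: a "host" system that reuses a pre-trained primitive
# model as a frozen feature extractor and tunes a small classifier on top.
# The attack surface studied in the paper is the primitive model itself.
import torch
import torch.nn as nn

class HostSystem(nn.Module):
    def __init__(self, feature_extractor: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.feature_extractor = feature_extractor              # reused primitive model
        self.classifier = nn.Linear(feature_dim, num_classes)   # tuned by the developer

    def forward(self, x):
        with torch.no_grad():            # the primitive model is treated as fixed
            features = self.feature_extractor(x)
        return self.classifier(features)

# Hypothetical usage: the extractor stands in for any pre-trained model the
# developer downloads from a public model repository they do not control.
extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
host = HostSystem(extractor, feature_dim=128, num_classes=2)
logits = host(torch.randn(1, 3, 32, 32))
```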

Original language: English (US)
Title of host publication: CCS 2018 - Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 349-363
Number of pages: 15
ISBN (Electronic): 9781450356930
DOI: 10.1145/3243734.3243757
State: Published - Oct 15 2018
Event: 25th ACM Conference on Computer and Communications Security, CCS 2018 - Toronto, Canada
Duration: Oct 15 2018 → …

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Other

Other: 25th ACM Conference on Computer and Communications Security, CCS 2018
Country: Canada
City: Toronto
Period: 10/15/18 → …

Fingerprint

  • Learning systems
  • Deep learning
  • Tuning
  • Speech recognition
  • Standardization
  • Feature extraction
  • Skin
  • Screening
  • Systems analysis

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Networks and Communications

Cite this

Ji, Y., Zhang, X., Ji, S., Luo, X., & Wang, T. (2018). Model-reuse attacks on deep learning systems. In CCS 2018 - Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (pp. 349-363). (Proceedings of the ACM Conference on Computer and Communications Security). Association for Computing Machinery. https://doi.org/10.1145/3243734.3243757