Building Adversarial Defense with Non-invertible Data Transformations

Wenbo Guo, Dongliang Mu, Ligeng Chen, Jinxuan Gai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep neural networks (DNNs) have recently been shown to be susceptible to a particular type of attack carried out by generating synthetic inputs referred to as adversarial samples. These samples are constructed by manipulating real examples from the training data distribution so as to “fool” the original neural model, causing misclassification of previously correctly classified samples. Addressing this weakness is of utmost importance if DNNs are to be applied to critical applications, such as those in cybersecurity. In this paper, we present an analysis of this fundamental flaw lurking in all neural architectures to uncover the limitations of previously proposed defense mechanisms. More importantly, we present a unifying framework for protecting deep neural models using a non-invertible data transformation, developing two adversary-resistant DNNs that employ linear and nonlinear dimensionality reduction techniques, respectively. Empirical results indicate that our framework provides better robustness than state-of-the-art solutions while incurring negligible degradation in generalization accuracy.
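
The abstract describes the approach only at a high level. As a purely illustrative sketch (not the authors' implementation), the snippet below front-ends a small classifier with a linear dimensionality reduction step (PCA): because the projection discards principal components, it cannot be exactly inverted, which is the non-invertibility property the defense builds on. The dataset, the number of retained components, the network size, and the use of scikit-learn are all assumptions made for illustration.

# Minimal sketch (assumptions, not the paper's code): a lossy PCA "front end"
# followed by a small neural classifier trained entirely in the reduced space.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                      # 8x8 digit images, 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Non-invertible transform: keep only the top-k principal components (k < 64),
# so the original input cannot be exactly reconstructed from the features.
pca = PCA(n_components=16).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# The classifier never sees raw inputs; every test-time input (benign or
# adversarial) must pass through the same lossy projection first.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(Z_train, y_train)
print("accuracy on reduced inputs:", clf.score(Z_test, y_test))

A nonlinear variant of the same idea would swap the PCA step for a lossy nonlinear encoder, mirroring the paper's second construction; the exact transforms and hyperparameters used by the authors are given in the paper itself.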

Original language: English (US)
Title of host publication: PRICAI 2019
Subtitle of host publication: Trends in Artificial Intelligence - 16th Pacific Rim International Conference on Artificial Intelligence, Proceedings
Editors: Abhaya C. Nayak, Alok Sharma
Publisher: Springer Verlag
Pages: 593-606
Number of pages: 14
ISBN (Print): 9783030298937
DOIs: https://doi.org/10.1007/978-3-030-29894-4_48
State: Published - Jan 1 2019
Event: 16th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2019 - Yanuka Island, Fiji
Duration: Aug 26 2019 - Aug 30 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11672 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 16th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2019
Country: Fiji
City: Yanuka Island
Period: 8/26/19 - 8/30/19

Fingerprint

  • Data Transformation
  • Neural Networks
  • Misclassification
  • Data Distribution
  • Dimensionality Reduction
  • Degradation
  • Defects
  • Attack
  • Robustness
  • Model
  • Deep neural networks
  • Framework

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

Guo, W., Mu, D., Chen, L., & Gai, J. (2019). Building Adversarial Defense with Non-invertible Data Transformations. In A. C. Nayak, & A. Sharma (Eds.), PRICAI 2019: Trends in Artificial Intelligence - 16th Pacific Rim International Conference on Artificial Intelligence, Proceedings (pp. 593-606). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11672 LNAI). Springer Verlag. https://doi.org/10.1007/978-3-030-29894-4_48