Building Adversarial Defense with Non-invertible Data Transformations

Wenbo Guo, Dongliang Mu, Ligeng Chen, Jinxuan Gai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep neural networks (DNNs) have recently been shown to be susceptible to a particular type of attack carried out through synthetic inputs known as adversarial samples. These samples are constructed by manipulating real examples from the training data distribution in order to "fool" the original neural model, causing it to misclassify samples that were previously classified correctly. Addressing this weakness is of utmost importance if DNNs are to be applied to critical applications, such as those in cybersecurity. In this paper, we present an analysis of this fundamental flaw lurking in all neural architectures and uncover the limitations of previously proposed defense mechanisms. More importantly, we present a unifying framework for protecting deep neural models using a non-invertible data transformation, developing two adversary-resistant DNNs that utilize linear and nonlinear dimensionality reduction techniques, respectively. Empirical results indicate that our framework provides better robustness than state-of-the-art solutions while incurring negligible degradation in generalization accuracy.
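To make the idea concrete, the following is a minimal sketch (not the authors' code) of the linear variant described above: a PCA-style projection learned from training data and applied as a preprocessing step. Projecting d-dimensional inputs onto k < d principal directions discards information, so the transformation is non-invertible, and an adversary cannot exactly reconstruct the input the downstream classifier actually sees. All function names here are illustrative assumptions.

```python
# Hypothetical sketch of a linear, non-invertible input transformation
# (PCA-style rank-k projection) used as a defensive preprocessing step.
import numpy as np

def fit_projection(X, k):
    """Learn a rank-k projection matrix from training data X of shape (n, d)."""
    Xc = X - X.mean(axis=0)
    # Top-k principal directions via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]  # shape (k, d)

def transform(X, P, mean):
    """Project inputs into the k-dim subspace; non-invertible when k < d."""
    return (X - mean) @ P.T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # stand-in for flattened images
P = fit_projection(X, k=16)
Z = transform(X, P, X.mean(axis=0)) # shape (200, 16): 64 dims reduced to 16
```

The reduced representation Z (rather than the raw input) would then be fed to the classifier, both at training and at inference time; a nonlinear analogue could swap the projection for an autoencoder-style encoder.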

Original language: English (US)
Title of host publication: PRICAI 2019
Subtitle of host publication: Trends in Artificial Intelligence - 16th Pacific Rim International Conference on Artificial Intelligence, Proceedings
Editors: Abhaya C. Nayak, Alok Sharma
Publisher: Springer Verlag
Pages: 593-606
Number of pages: 14
ISBN (Print): 9783030298937
DOI: https://doi.org/10.1007/978-3-030-29894-4_48
State: Published - Jan 1 2019
Event: 16th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2019 - Yanuka Island, Fiji
Duration: Aug 26 2019 – Aug 30 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11672 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 16th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2019
Country: Fiji
City: Yanuka Island
Period: 8/26/19 – 8/30/19

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science(all)

  • Cite this

    Guo, W., Mu, D., Chen, L., & Gai, J. (2019). Building Adversarial Defense with Non-invertible Data Transformations. In A. C. Nayak, & A. Sharma (Eds.), PRICAI 2019: Trends in Artificial Intelligence - 16th Pacific Rim International Conference on Artificial Intelligence, Proceedings (pp. 593-606). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11672 LNAI). Springer Verlag. https://doi.org/10.1007/978-3-030-29894-4_48