Enablers of Adversarial Attacks in Machine Learning

Rauf Izmailov, Shridatt Sugrim, Ritu Chadha, Patrick McDaniel, Ananthram Swami

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The proliferation of machine learning (ML) and artificial intelligence (AI) systems for military and security applications creates substantial challenges for designing and deploying mechanisms that must learn, adapt, reason, and act with Dinky, Dirty, Dynamic, Deceptive, Distributed (D5) data. While the Dinky and Dirty challenges have been extensively explored in ML theory, the Dynamic challenge has been a persistent problem in ML applications, arising when the statistical distribution of training data differs from that of test data. The most recent, Deceptive, challenge is a malicious distribution shift between training and test data that amplifies the effects of the Dynamic challenge to the point of a complete breakdown of ML algorithms. Using the MNIST dataset as a simple calibration example, we explore the following two questions: (1) What geometric and statistical characteristics of a data distribution can be exploited by an adversary with a given attack magnitude? (2) What countermeasures can protect the constructed decision rule (at the cost of somewhat decreased performance) against a malicious distribution shift within a given attack magnitude? While not offering a complete solution to the problem, we collect and interpret the obtained observations in a way that provides practical guidance for making more adversary-resistant choices in the design of ML algorithms.
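The paper's code is not included on this page. As a minimal sketch of what a magnitude-bounded "Deceptive" shift looks like, the toy example below trains a linear decision rule on two synthetic Gaussian classes (a stand-in for the paper's MNIST setting) and then applies a per-coordinate perturbation of magnitude `eps` that pushes each test point toward the wrong side of the boundary. The linear class-mean classifier, the L∞ budget, and the value of `eps` are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes (illustrative stand-in for MNIST features).
X0 = rng.normal(loc=-2.0, scale=1.0, size=(200, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# Linear decision rule w.x + b: the boundary is the perpendicular
# bisector of the two class means (a deliberately simple classifier).
w = X1.mean(axis=0) - X0.mean(axis=0)
b = -0.5 * (X1.mean(axis=0) + X0.mean(axis=0)) @ w

def predict(X):
    return (X @ w + b > 0).astype(int)

clean_acc = (predict(X) == y).mean()

# Malicious distribution shift with a given attack magnitude: each test
# point moves by eps per coordinate in the direction that drives its
# score toward the wrong class (the sign of the gradient of the score).
eps = 2.5
direction = np.where(y[:, None] == 1, -np.sign(w), np.sign(w))
X_adv = X + eps * direction

adv_acc = (predict(X_adv) == y).mean()
```

Even though every perturbed point stays within the fixed budget `eps` of its original, accuracy collapses well below the clean rate, which is the breakdown effect the abstract attributes to the Deceptive challenge.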

Original language: English (US)
Title of host publication: 2018 IEEE Military Communications Conference, MILCOM 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 425-430
Number of pages: 6
ISBN (Electronic): 9781538671856
DOIs: 10.1109/MILCOM.2018.8599715
State: Published - Jan 2 2019
Event: 2018 IEEE Military Communications Conference, MILCOM 2018 - Los Angeles, United States
Duration: Oct 29 2018 – Oct 31 2018

Publication series

Name: Proceedings - IEEE Military Communications Conference MILCOM
Volume: 2019-October

Conference

Conference: 2018 IEEE Military Communications Conference, MILCOM 2018
Country: United States
City: Los Angeles
Period: 10/29/18 – 10/31/18


All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

Izmailov, R., Sugrim, S., Chadha, R., McDaniel, P., & Swami, A. (2019). Enablers of Adversarial Attacks in Machine Learning. In 2018 IEEE Military Communications Conference, MILCOM 2018 (pp. 425-430). [8599715] (Proceedings - IEEE Military Communications Conference MILCOM; Vol. 2019-October). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/MILCOM.2018.8599715