Crafting adversarial input sequences for recurrent neural networks

Nicolas Papernot, Patrick Drew McDaniel, Ananthram Swami, Richard Harang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

24 Citations (Scopus)

Abstract

Machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors. Previous efforts have shown that numerous machine learning models are vulnerable to adversarial manipulations of their inputs taking the form of adversarial samples. Such inputs are crafted by adding carefully selected perturbations to legitimate inputs so as to force the machine learning model to misbehave, for instance by outputting a wrong class if the machine learning task of interest is classification. In fact, to the best of our knowledge, all previous work on adversarial sample crafting for neural networks considered models used to solve classification tasks, most frequently in computer vision applications. In this paper, we investigate adversarial input sequences for recurrent neural networks processing sequential data. We show that the classes of algorithms introduced previously to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks. In an experiment, we show that adversaries can craft adversarial sequences misleading both categorical and sequential recurrent neural networks.
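The crafting procedure the abstract describes — adding a carefully selected perturbation to a legitimate input so the model outputs a wrong class — can be sketched with the fast gradient sign method (FGSM), one of the algorithm families this line of work adapts. The two-feature logistic "model", its weights, and the perturbation budget below are illustrative stand-ins, not the networks or parameters evaluated in the paper.

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """Shift each input feature by eps in the sign of the loss gradient."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))                 # sigmoid class-1 probability
    grad = [(p - y) * wi for wi in w]              # d(logistic loss)/d(x)
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

# A legitimate input the toy model assigns to class 1 (positive score) ...
w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]
score = sum(wi * xi for wi, xi in zip(w, x)) + b       # 1.5 -> class 1

# ... and its adversarial counterpart, which flips the predicted class.
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.2)
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
```

The same gradient-sign idea carries over to recurrent models by differentiating through the unrolled computational graph, which is the adaptation the paper investigates.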

Original language: English (US)
Title of host publication: MILCOM 2016 - 2016 IEEE Military Communications Conference
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 49-54
Number of pages: 6
ISBN (Electronic): 9781509037810
DOI: 10.1109/MILCOM.2016.7795300
State: Published - Dec 22 2016
Event: 35th IEEE Military Communications Conference, MILCOM 2016 - Baltimore, United States
Duration: Nov 1 2016 - Nov 3 2016

Publication series

Name: Proceedings - IEEE Military Communications Conference MILCOM

Other

Other: 35th IEEE Military Communications Conference, MILCOM 2016
Country: United States
City: Baltimore
Period: 11/1/16 - 11/3/16

Fingerprint

Recurrent neural networks
Learning systems
Feedforward neural networks
Computer vision
Neural networks
Processing
Experiments

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

Papernot, N., McDaniel, P. D., Swami, A., & Harang, R. (2016). Crafting adversarial input sequences for recurrent neural networks. In MILCOM 2016 - 2016 IEEE Military Communications Conference (pp. 49-54). [7795300] (Proceedings - IEEE Military Communications Conference MILCOM). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/MILCOM.2016.7795300