Ensemble adversarial training: Attacks and defenses

Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick Drew McDaniel

Research output: Contribution to conference › Paper

57 Citations (Scopus)

Abstract

Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model’s loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c).
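The abstract describes three procedures in algorithmic terms: the fast single-step attack that maximizes a linear approximation of the loss, the single-step attack prefixed by a small random step, and the Ensemble Adversarial Training augmentation that draws perturbations from other, pre-trained models. The following is a minimal sketch of those ideas in PyTorch, added here for illustration only; it is not the authors' implementation, and the function names, the assumed [0, 1] pixel range, and the step-size parameters eps and alpha are choices made for this sketch.

    import random
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        # Single-step attack: take one step of size eps along the sign of the
        # input gradient, i.e. maximize a linear approximation of the loss.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        return (x + eps * grad.sign()).clamp(0.0, 1.0)

    def rand_fgsm(model, x, y, eps, alpha):
        # Random-step variant: a small random step first moves the input out of
        # the non-smooth vicinity of the data point, then a gradient-sign step
        # spends the remaining budget (eps - alpha).
        x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0.0, 1.0)
        x_rand = x_rand.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_rand), y)
        grad, = torch.autograd.grad(loss, x_rand)
        return (x_rand + (eps - alpha) * grad.sign()).clamp(0.0, 1.0)

    def ensemble_adversarial_batch(static_models, x, y, eps):
        # Ensemble Adversarial Training augmentation (sketch): craft perturbations
        # on a randomly chosen pre-trained, held-fixed model and mix them into the
        # batch used to train the target model.
        source = random.choice(static_models)
        x_adv = fgsm(source, x, y, eps).detach()
        return torch.cat([x, x_adv], dim=0), torch.cat([y, y], dim=0)

As a usage note, the augmented batch returned by ensemble_adversarial_batch would be fed to an ordinary training step of the target model; the property the abstract relies on is that some of the perturbations are generated independently of the parameters of the model being trained.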

Original language: English (US)
State: Published - Jan 1 2018
Event: 6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada
Duration: Apr 30 2018 – May 3 2018

Conference

Conference: 6th International Conference on Learning Representations, ICLR 2018
Country: Canada
City: Vancouver
Period: 4/30/18 – 5/3/18

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Education
  • Computer Science Applications
  • Linguistics and Language

Cite this

Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., & McDaniel, P. D. (2018). Ensemble adversarial training: Attacks and defenses. Paper presented at 6th International Conference on Learning Representations, ICLR 2018, Vancouver, Canada.
@conference{0f95bc7bf4cc497fbaf280b541841afb,
title = "Ensemble adversarial training: Attacks and defenses",
abstract = "Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model’s loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c).",
author = "Florian Tram{\`e}r and Alexey Kurakin and Nicolas Papernot and Ian Goodfellow and Dan Boneh and McDaniel, {Patrick Drew}",
year = "2018",
month = "1",
day = "1",
language = "English (US)",
note = "6th International Conference on Learning Representations, ICLR 2018 ; Conference date: 30-04-2018 Through 03-05-2018",

}

TY - CONF

T1 - Ensemble adversarial training

T2 - Attacks and defenses

AU - Tramèr, Florian

AU - Kurakin, Alexey

AU - Papernot, Nicolas

AU - Goodfellow, Ian

AU - Boneh, Dan

AU - McDaniel, Patrick Drew

PY - 2018/1/1

Y1 - 2018/1/1

UR - http://www.scopus.com/inward/record.url?scp=85061447913&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85061447913&partnerID=8YFLogxK

M3 - Paper

AN - SCOPUS:85061447913

ER -
