An empirical evaluation of rule extraction from recurrent neural networks

Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia, Xinyu Xing, Xue Liu, C. Lee Giles

Research output: Contribution to journal › Letter

3 Citations (Scopus)

Abstract

Rule extraction from black box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis. Though already a challenging problem in statistical learning in general, the difficulty is even greater when highly nonlinear, recursive models, such as recurrent neural networks (RNNs), are fit to data. Here, we study the extraction of rules from second-order RNNs trained to recognize the Tomita grammars. We show that production rules can be stably extracted from trained RNNs and that in certain cases, the rules outperform the trained RNNs.
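To make the abstract's setting concrete, the sketch below is a hypothetical illustration (not the authors' code): `tomita4` is a membership test for one of the Tomita grammars (grammar 4, binary strings with no "000" substring), and `SecondOrderRNN` shows the characteristic second-order state update, in which a three-way weight tensor makes each hidden-state transition depend jointly on the previous state and the current input symbol. All names, sizes, and the random initialization are illustrative assumptions; the paper's training and rule-extraction procedures are not reproduced here.

```python
import numpy as np

def tomita4(s: str) -> bool:
    """Tomita grammar 4: accept binary strings that contain no '000'."""
    return "000" not in s

class SecondOrderRNN:
    """Minimal untrained second-order RNN cell (illustrative sketch only).

    Update rule: h_t[i] = sigmoid(sum_{j,k} W[i,j,k] * h_{t-1}[j] * x_t[k] + b[i]),
    so the weight tensor W couples previous state and current input symbol,
    mimicking a (soft) state-transition table of a finite automaton.
    """
    def __init__(self, n_hidden: int, n_symbols: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.5, size=(n_hidden, n_hidden, n_symbols))
        self.b = np.zeros(n_hidden)

    def run(self, symbols):
        h = np.zeros(self.W.shape[0])
        h[0] = 1.0  # fixed start state
        for s in symbols:
            x = np.zeros(self.W.shape[2])
            x[s] = 1.0  # one-hot encoding of the input symbol
            pre = np.einsum("ijk,j,k->i", self.W, h, x) + self.b
            h = 1.0 / (1.0 + np.exp(-pre))  # sigmoid
        return h  # after training, one unit is typically read as acceptance

rnn = SecondOrderRNN(n_hidden=4, n_symbols=2)
print(tomita4("0101"), tomita4("1000"))  # -> True False
print(rnn.run([0, 1, 0, 1]).shape)       # -> (4,)
```

Because the hidden-state transitions of such a network are input-conditioned, clustering its hidden states yields the discrete states of a deterministic finite automaton, which is the intuition behind extracting production rules from the trained model.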

Original language: English (US)
Pages (from-to): 2568-2591
Number of pages: 24
Journal: Neural Computation
Volume: 30
Issue number: 9
DOI: 10.1162/neco_a_01111
State: Published - Sep 1 2018

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience

Cite this

@article{cd5f8d6b0d41484bab9f92884a02991f,
title = "An empirical evaluation of rule extraction from recurrent neural networks",
abstract = "Rule extraction from black box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis. Though already a challenging problem in statistical learning in general, the difficulty is even greater when highly nonlinear, recursive models, such as recurrent neural networks (RNNs), are fit to data. Here, we study the extraction of rules from second-order RNNs trained to recognize the Tomita grammars. We show that production rules can be stably extracted from trained RNNs and that in certain cases, the rules outperform the trained RNNs.",
author = "Qinglong Wang and Kaixuan Zhang and Ororbia, {Alexander G.} and Xinyu Xing and Xue Liu and Giles, {C. Lee}",
year = "2018",
month = "9",
day = "1",
doi = "10.1162/neco_a_01111",
language = "English (US)",
volume = "30",
pages = "2568--2591",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "9",
}
