Abstract
Rule extraction from black box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis. Though already a challenging problem in statistical learning in general, the difficulty is even greater when highly nonlinear, recursive models, such as recurrent neural networks (RNNs), are fit to data. Here, we study the extraction of rules from second-order RNNs trained to recognize the Tomita grammars. We show that production rules can be stably extracted from trained RNNs and that in certain cases, the rules outperform the trained RNNs.
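As a concrete illustration of the setting (the grammar numbering and this DFA are standard background on the Tomita grammars, not code from the paper), the fourth Tomita grammar accepts exactly the binary strings containing no "000" substring. Rules extracted from a trained RNN for such a regular language take the form of a small deterministic finite automaton, sketched here as a transition table:

```python
# Illustrative sketch: DFA for Tomita grammar 4 (binary strings
# with no "000" substring). States 0-2 count trailing zeros; state 3
# is a dead (rejecting) state reached after three consecutive zeros.
ACCEPTING = {0, 1, 2}
DELTA = {
    (0, "0"): 1, (0, "1"): 0,
    (1, "0"): 2, (1, "1"): 0,
    (2, "0"): 3, (2, "1"): 0,
    (3, "0"): 3, (3, "1"): 3,
}

def tomita4(s: str) -> bool:
    """Run the DFA over the string and report acceptance."""
    state = 0
    for ch in s:
        state = DELTA[(state, ch)]
    return state in ACCEPTING

print(tomita4("1101"))   # True: no run of three zeros
print(tomita4("10001"))  # False: contains "000"
```

Comparing the labels such an extracted automaton assigns against the trained RNN's own classifications is how the fidelity of the extracted rules is typically measured.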
Original language | English (US) |
---|---|
Pages (from-to) | 2568-2591 |
Number of pages | 24 |
Journal | Neural Computation |
Volume | 30 |
Issue number | 9 |
DOIs | 10.1162/neco_a_01111 |
State | Published - Sep 1 2018 |
All Science Journal Classification (ASJC) codes
- Arts and Humanities (miscellaneous)
- Cognitive Neuroscience
Cite this
An empirical evaluation of rule extraction from recurrent neural networks. / Wang, Qinglong; Zhang, Kaixuan; Ororbia, Alexander G.; Xing, Xinyu; Liu, Xue; Giles, C. Lee.
In: Neural Computation, Vol. 30, No. 9, 01.09.2018, p. 2568-2591.
Research output: Contribution to journal › Letter
TY - JOUR
T1 - An empirical evaluation of rule extraction from recurrent neural networks
AU - Wang, Qinglong
AU - Zhang, Kaixuan
AU - Ororbia, Alexander G.
AU - Xing, Xinyu
AU - Liu, Xue
AU - Giles, C. Lee
PY - 2018/9/1
Y1 - 2018/9/1
N2 - Rule extraction from black box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis. Though already a challenging problem in statistical learning in general, the difficulty is even greater when highly nonlinear, recursive models, such as recurrent neural networks (RNNs), are fit to data. Here, we study the extraction of rules from second-order RNNs trained to recognize the Tomita grammars. We show that production rules can be stably extracted from trained RNNs and that in certain cases, the rules outperform the trained RNNs.
UR - http://www.scopus.com/inward/record.url?scp=85051640872&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85051640872&partnerID=8YFLogxK
U2 - 10.1162/neco_a_01111
DO - 10.1162/neco_a_01111
M3 - Letter
C2 - 30021081
AN - SCOPUS:85051640872
VL - 30
SP - 2568
EP - 2591
JO - Neural Computation
JF - Neural Computation
SN - 0899-7667
IS - 9
ER -