Extraction of rules from discrete-time recurrent neural networks

C. W. Omlin, C. L. Giles

Research output: Contribution to journal › Article

90 Scopus citations

Abstract

The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues: they allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFAs) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract, from the same network, different finite-state automata that are all consistent with a training set. We compare the generalization performance of these different models with that of the trained network, and we introduce a heuristic for choosing, among the consistent DFAs, the model that best approximates the learned regular grammar.
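The extraction idea described in the abstract can be sketched in code. The snippet below is a minimal illustration, not the authors' published algorithm: it uses the simplest possible "clustering" (an equal-interval partition of each state neuron's output range) and a hand-wired toy recurrent update whose dynamics encode the parity (XOR) language, standing in for a trained network. The function names `step`, `quantize`, and `extract_dfa` are illustrative choices, not from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(h, a):
    # Toy recurrent update standing in for a trained network's state
    # transition: its fixed point structure realizes XOR of the current
    # state and the input bit, i.e. the parity-of-ones language.
    x = h + a - 2.0 * h * a            # soft XOR of state and input bit
    return sigmoid(10.0 * (x - 0.5))   # squash the state toward 0 or 1

def quantize(h, bins=2):
    # Simplest clustering of the state neuron's output space: partition
    # [0, 1) into equal intervals (here 2 bins = a threshold at 0.5).
    return min(int(h * bins), bins - 1)

def extract_dfa(step, s0, alphabet, quantize, max_states=64):
    """Extract a DFA by exploring the network's state space: quantize
    continuous states into clusters and record cluster-to-cluster
    transitions for every input symbol."""
    start = quantize(s0)
    states = {start: 0}      # cluster label -> DFA state index
    reps = {start: s0}       # one continuous representative per cluster
    trans = {}               # (state index, symbol) -> state index
    frontier = [start]
    while frontier:
        q = frontier.pop()
        for a in alphabet:
            s_next = step(reps[q], a)
            qn = quantize(s_next)
            if qn not in states:
                if len(states) >= max_states:
                    continue  # cap the extracted machine's size
                states[qn] = len(states)
                reps[qn] = s_next
                frontier.append(qn)
            trans[(states[q], a)] = states[qn]
    return states, trans

# Running the extraction on the toy network recovers the 2-state
# parity automaton.
states, trans = extract_dfa(step, 0.0, (0, 1), quantize)
```

Varying `bins` in `quantize` changes the partition granularity, which is one way different (yet training-set-consistent) automata can be extracted from the same network, as the abstract describes.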

Original language: English (US)
Pages (from-to): 41-52
Number of pages: 12
Journal: Neural Networks
Volume: 9
Issue number: 1
DOIs
State: Published - Jan 1 1996

All Science Journal Classification (ASJC) codes

  • Cognitive Neuroscience
  • Artificial Intelligence
