Rule refinement with recurrent neural networks

C. Lee Giles, Christian W. Omlin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Scopus citations

Abstract

Recurrent neural networks can be trained to behave like deterministic finite-state automata (DFAs), and methods have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge of a subset of the DFA state transitions into recurrent neural networks, we show that recurrent neural networks are able to perform rule refinement. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that the networks not only preserve correct prior knowledge, but are also able to correct, through training, inserted prior knowledge that was wrong. (By wrong, we mean that the inserted rules were not the ones in the randomly generated grammar.)
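The insertion scheme the abstract describes can be illustrated with a minimal sketch. In a second-order recurrent network, the next state is computed from products of the current state vector and the current input symbol; a known DFA transition δ(q_i, a_k) = q_j can be "programmed" into the corresponding weight slice before training begins. The helper names, the weight layout `W[i, k, j]`, and the programming strength `H` below are illustrative assumptions, not the paper's exact encoding:

```python
import numpy as np

def insert_rules(num_states, num_symbols, transitions, H=4.0, rng=None):
    """Program a subset of known DFA transitions into the second-order
    weights W[i, k, j] of a recurrent network.

    transitions: dict mapping (state_i, symbol_k) -> state_j
    H: programming strength; larger H makes the inserted rule firmer,
       while ordinary gradient training can still revise it if it is wrong.
    (Hypothetical helper; the paper's exact encoding may differ.)
    """
    rng = rng or np.random.default_rng(0)
    # Start from small random weights, as for ordinary training.
    W = rng.normal(scale=0.1, size=(num_states, num_symbols, num_states))
    for (i, k), j in transitions.items():
        W[i, k, :] = -H   # inhibit all candidate next-state neurons ...
        W[i, k, j] = +H   # ... except the intended next state q_j
    return W

def step(W, state, symbol):
    """One second-order update: S'_j = sigmoid(sum_i S_i * W[i, symbol, j])."""
    net = state @ W[:, symbol, :]
    return 1.0 / (1.0 + np.exp(-net))

# Insert the single known transition delta(q0, a0) = q1 into a 3-state net.
W = insert_rules(num_states=3, num_symbols=2, transitions={(0, 0): 1})
s = np.zeros(3)
s[0] = 1.0                # one-hot encoding of the start state q0
s_next = step(W, s, 0)    # read symbol a0; q1's neuron dominates
```

Because only a slice of the weight tensor is programmed and the rest stays small and random, the remaining transitions are left free for the network to learn, which is what allows training both to preserve correct hints and to override wrong ones.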

Original language: English (US)
Title of host publication: 1993 IEEE International Conference on Neural Networks, ICNN 1993
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 801-806
Number of pages: 6
ISBN (Electronic): 0780309995
DOIs
State: Published - Jan 1 1993
Event: IEEE International Conference on Neural Networks, ICNN 1993 - San Francisco, United States
Duration: Mar 28 1993 - Apr 1 1993

Publication series

Name: IEEE International Conference on Neural Networks - Conference Proceedings
Volume: 1993-January
ISSN (Print): 1098-7576

Other

Other: IEEE International Conference on Neural Networks, ICNN 1993
Country: United States
City: San Francisco
Period: 3/28/93 - 4/1/93

All Science Journal Classification (ASJC) codes

  • Software
