Pruning recurrent neural networks for improved generalization performance

Christian W. Omlin, C. Lee Giles

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

The experimental results in this paper demonstrate that a simple pruning/retraining method effectively improves the generalization performance of recurrent neural networks trained to recognize regular languages. The technique also permits the extraction of symbolic knowledge in the form of deterministic finite-state automata (DFAs) which are more consistent with the rules to be learned. Weight decay has also been shown to improve a network's generalization performance. Simulations with two small DFAs (≤10 states) and a large finite-memory machine (64 states) demonstrate that the performance improvement due to pruning/retraining is generally superior to the improvement due to training with weight decay. In addition, there is no need to guess a 'good' decay rate.
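The abstract does not spell out the pruning criterion, so the following is only a generic sketch of the magnitude-based pruning/retraining idea it describes: remove the smallest-magnitude weights from a trained recurrent weight matrix, then continue training with the pruned connections held at zero. The function name `magnitude_prune` and the masked-gradient update are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Return a binary mask that zeroes the `fraction` smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Toy recurrent weight matrix standing in for a trained network's weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

mask = magnitude_prune(W, 0.25)   # prune the smallest 25% of weights
W_pruned = W * mask

# During retraining, the mask keeps pruned connections at zero, e.g.:
#   W_pruned -= learning_rate * (gradient * mask)
```

Unlike weight decay, this scheme needs no decay-rate hyperparameter: the only choice is the fraction of weights to remove before retraining.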

Original language: English (US)
Title of host publication: Neural Networks for Signal Processing - Proceedings of the IEEE Workshop
Publisher: IEEE
Pages: 690-699
Number of pages: 10
State: Published - 1994
Event: Proceedings of the 4th IEEE Workshop on Neural Networks for Signal Processing (NNSP'94) - Ermioni, Greece
Duration: Sep 6 1994 - Sep 8 1994


All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Software
  • Electrical and Electronic Engineering

