What to remember: How memory order affects the performance of NARX neural networks

Tsungnan Lin, Bill G. Horne, C. Lee Giles, S. Y. Kung

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

It has been shown that gradient-descent learning can be more effective in NARX networks than in other recurrent neural networks that have 'hidden states' on problems such as grammatical inference and nonlinear system identification. For these problems, NARX neural networks can converge faster and generalize better. Part of the reason can be attributed to the embedded memory of NARX networks, which can reduce the network's sensitivity to long-term dependencies. In this paper, we explore experimentally the effect of the order of embedded memory of NARX networks on learning ability and generalization performance for these problems. We show that the embedded memory plays a crucial role in learning and generalization. In particular, generalization performance can be seriously deficient if the embedded memory is either inadequate or unnecessarily prodigal, but is quite good if the order of the network is similar to that of the problem.
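To make "memory order" concrete: a NARX model predicts the next output from tapped delay lines of the last few outputs and inputs, and the order is the length of those delay lines. The sketch below is illustrative only, not the paper's experiment: it builds NARX regressor vectors for a chosen memory order on an assumed toy second-order system, and substitutes a linear least-squares fit for the gradient-trained neural network f(.) so the effect of a too-small versus matched order is visible in one-step prediction error. The function name `narx_regressors` and the toy system are assumptions of this sketch.

```python
import numpy as np

def narx_regressors(u, y, du, dy):
    """Build NARX rows [y(t-1)..y(t-dy), u(t-1)..u(t-du)] with target y(t)."""
    d = max(du, dy)
    X, T = [], []
    for t in range(d, len(y)):
        row = np.concatenate([y[t - dy:t][::-1], u[t - du:t][::-1]])
        X.append(row)
        T.append(y[t])
    return np.array(X), np.array(T)

# Toy nonlinear system whose true memory order is 2 (an assumption for illustration).
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + np.tanh(u[t - 1]) + 0.1 * u[t - 2]

# Compare an inadequate, a matched, and an overly large memory order.
for order in (1, 2, 6):
    X, T = narx_regressors(u, y, du=order, dy=order)
    A = np.c_[X, np.ones(len(X))]          # regressors plus bias column
    w, *_ = np.linalg.lstsq(A, T, rcond=None)  # linear stand-in for the network
    mse = np.mean((A @ w - T) ** 2)
    print(f"memory order {order}: one-step MSE = {mse:.5f}")
```

Under these assumptions, the order-1 model underfits because it cannot see y(t-2) or u(t-2), while orders at or above the true order fit well; the paper's point is that in the neural, gradient-trained setting an excessive order can also hurt generalization, an effect this linear one-step sketch does not capture.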

Original language: English (US)
Title of host publication: IEEE World Congress on Computational Intelligence
Editors: Anon
Publisher: IEEE
Pages: 1051-1056
Number of pages: 6
Volume: 2
State: Published - 1998
Event: Proceedings of the 1998 IEEE International Joint Conference on Neural Networks. Part 1 (of 3) - Anchorage, AK, USA
Duration: May 4, 1998 - May 9, 1998


All Science Journal Classification (ASJC) codes

  • Software

