Recurrent neural networks, hidden Markov models and stochastic grammars

G. Z. Sun, H. H. Chen, Y. C. Lee, C. L. Giles

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

A discussion is presented of the advantages of using a linear recurrent network to encode and recognize sequential data. The hidden Markov model (HMM) is shown to be a special case of such linear second-order recurrent neural networks. The Baum-Welch reestimation formula, which has proved very useful in training HMMs, can also be used to learn a linear recurrent network. As an example, a network successfully learned the stochastic Reber grammar from only a few hundred sample strings in about 14 iterations. The relative merits and limitations of the Baum-Welch optimal-ascent algorithm, in comparison with the error-correction gradient-descent learning algorithm, are discussed.
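The core observation of the abstract, that an HMM is a special case of a linear second-order recurrent network, can be illustrated with a short sketch. This is not the authors' code; it only shows that the standard HMM forward recursion is a purely linear state update in which the weight matrix is selected by the current input symbol (a second-order, input-times-state interaction). The model parameters below are arbitrary illustrative values.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# An arbitrary small HMM: row-stochastic transition matrix A,
# emission matrix B, uniform initial distribution pi.
n_states, n_symbols = 3, 2
A = rng.random((n_states, n_states)); A /= A.sum(axis=1, keepdims=True)
B = rng.random((n_states, n_symbols)); B /= B.sum(axis=1, keepdims=True)
pi = np.full(n_states, 1.0 / n_states)

# One linear weight matrix per input symbol: W[k] = diag(B[:, k]) @ A.T.
# Choosing the matrix by the input symbol is the "second-order" coupling.
W = np.stack([np.diag(B[:, k]) @ A.T for k in range(n_symbols)])

def likelihood_recurrent(obs):
    """P(obs) via the linear recurrence alpha_t = W[o_t] @ alpha_{t-1}."""
    alpha = B[:, obs[0]] * pi          # alpha_1(i) = pi_i * B[i, o_1]
    for o in obs[1:]:
        alpha = W[o] @ alpha           # purely linear state update
    return alpha.sum()

def likelihood_bruteforce(obs):
    """P(obs) by summing over all hidden-state paths (exponential check)."""
    total = 0.0
    for path in product(range(n_states), repeat=len(obs)):
        p = pi[path[0]] * B[path[0], obs[0]]
        for t in range(1, len(obs)):
            p *= A[path[t - 1], path[t]] * B[path[t], obs[t]]
        total += p
    return total

obs = [0, 1, 1, 0, 1]
assert np.isclose(likelihood_recurrent(obs), likelihood_bruteforce(obs))
```

Because the recurrence is linear in the state vector, the same dynamic-programming structure that Baum-Welch exploits for HMMs carries over to training such a recurrent network, which is the connection the paper develops.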

Original language: English (US)
Title of host publication: 90 Int Jt Conf Neural Networks IJCNN 90
Publisher: Publ by IEEE
Pages: 729-734
Number of pages: 6
State: Published - 1990
Event: 1990 International Joint Conference on Neural Networks - IJCNN 90 - San Diego, CA, USA
Duration: Jun 17 1990 - Jun 21 1990


All Science Journal Classification (ASJC) codes

  • Engineering(all)

