## Abstract

The advantages of using a linear recurrent network to encode and recognize sequential data are discussed. The hidden Markov model (HMM) is shown to be a special case of such linear recurrent second-order neural networks. The Baum-Welch reestimation formula, which has proved very useful in training HMMs, can also be used to learn a linear recurrent network. As an example, a network successfully learned the stochastic Reber grammar from only a few hundred sample strings in about 14 iterations. The relative merits and limitations of the Baum-Welch optimal-ascent algorithm are discussed in comparison with error-correction, gradient-descent learning.
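
The paper itself includes no code, but the Baum-Welch reestimation it applies is the standard forward-backward procedure. Below is a minimal sketch of the classical discrete-HMM form in Python/NumPy; the function name `baum_welch`, the model sizes, and the toy observation sequence are illustrative assumptions, and the paper's linear recurrent second-order network formulation is not reproduced here.

```python
# Minimal Baum-Welch reestimation sketch for a discrete HMM (NumPy).
# Sizes, names, and data below are illustrative assumptions, not
# material from the paper.
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=14, seed=0):
    rng = np.random.default_rng(seed)
    # Random row-stochastic initialization of transition (A),
    # emission (B), and initial-state (pi) parameters.
    A = rng.random((n_states, n_states)); A /= A.sum(axis=1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(axis=1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    T = len(obs)

    for _ in range(n_iter):
        # Forward pass with per-step scaling to avoid underflow.
        alpha = np.zeros((T, n_states)); scale = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
        # Backward pass, reusing the forward scaling factors.
        beta = np.zeros((T, n_states)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
        # E-step: per-step state posteriors (gamma) and
        # transition posteriors (xi).
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = (alpha[:-1, :, None] * A[None] *
              (B[:, obs[1:]].T * beta[1:])[:, None, :])
        xi /= xi.sum(axis=(1, 2), keepdims=True)
        # M-step: reestimate pi, A, B from the posteriors.
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return A, B, pi

# Toy usage: fit a 2-state, 3-symbol HMM to a short observation sequence.
obs = np.array([0, 1, 2, 1, 0, 0, 2, 1, 1, 0])
A, B, pi = baum_welch(obs, n_states=2, n_symbols=3)
```

Per-step scaling of the forward and backward variables is a standard safeguard against numerical underflow on longer strings, such as samples drawn from the stochastic Reber grammar.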

Original language | English (US)
---|---
Title of host publication | 1990 International Joint Conference on Neural Networks (IJCNN 90)
Publisher | IEEE
Pages | 729-734
Number of pages | 6
State | Published - 1990
Event | 1990 International Joint Conference on Neural Networks - IJCNN 90, San Diego, CA, USA (Jun 17 1990 → Jun 21 1990)

### Other

Other | 1990 International Joint Conference on Neural Networks - IJCNN 90
---|---
City | San Diego, CA, USA
Period | 6/17/90 → 6/21/90

## All Science Journal Classification (ASJC) codes

- Engineering (all)