Learning a class of large finite state machines with a recurrent neural network

C. Lee Giles, B. G. Horne, T. Lin

Research output: Contribution to journal › Article

26 Citations (Scopus)

Abstract

One of the issues in any learning model is how it scales with problem size. The problem of learning finite state machines (FSMs) from examples with recurrent neural networks has been extensively explored. However, these results are somewhat disappointing in the sense that the machines that can be learned are too small to be competitive with existing grammatical inference algorithms. We show that a type of recurrent neural network (Narendra & Parthasarathy, 1990, IEEE Trans. Neural Networks, 1, 4-27) which has feedback but no hidden state neurons can learn a special type of FSM called a finite memory machine (FMM) under certain constraints. These machines have a large number of states (simulations are for 256- and 512-state FMMs) but have minimal order, relatively small depth and little logic when the FMM is implemented as a sequential machine.
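
The architecture named in the abstract, an output-feedback network whose only memory is tapped delay lines of past inputs and past outputs (no hidden state neurons), can be sketched in a few lines. The Python below is a minimal illustration under assumptions of our own: the layer sizes, delay orders, and the toy parity-style FMM are not taken from the paper, and the sketch shows the structure of such a network and of a small finite memory machine, not the authors' experiments or training procedure.

# Minimal sketch (not from the paper) of a Narendra & Parthasarathy-style
# output-feedback network: a feedforward net applied to delayed inputs and
# delayed (fed-back) outputs.  The hidden layer holds no state; all memory
# lives in the delay taps.  Sizes and the toy FMM below are illustrative
# assumptions only.

import numpy as np

class OutputFeedbackNetwork:
    """Feedforward net driven by tapped delay lines of inputs and outputs."""

    def __init__(self, input_delays=3, output_delays=3, hidden=8, rng=None):
        rng = np.random.default_rng(rng)
        n_in = input_delays + output_delays
        self.input_delays = input_delays
        self.output_delays = output_delays
        # One layer of ordinary (non-state) neurons, one output neuron.
        self.W1 = rng.normal(scale=0.5, size=(hidden, n_in))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.5, size=hidden)
        self.b2 = 0.0

    def step(self, past_inputs, past_outputs):
        """Next output computed from the delay taps (the only feedback path)."""
        x = np.concatenate([past_inputs, past_outputs])
        h = np.tanh(self.W1 @ x + self.b1)
        return np.tanh(self.W2 @ h + self.b2)

    def run(self, input_sequence):
        """Run over a binary input sequence, feeding each output back."""
        u = np.zeros(self.input_delays)   # delayed inputs
        y = np.zeros(self.output_delays)  # delayed (fed-back) outputs
        outputs = []
        for u_t in input_sequence:
            u = np.roll(u, 1); u[0] = u_t
            y_t = self.step(u, y)
            y = np.roll(y, 1); y[0] = y_t
            outputs.append(y_t)
        return np.array(outputs)

# Toy target FMM of input order 3: output is the parity of the last 3 inputs.
# A machine of this kind has finite memory, so its behaviour is determined by
# a bounded window of recent inputs and outputs.
def toy_fmm(input_sequence, order=3):
    u = np.zeros(order)
    out = []
    for u_t in input_sequence:
        u = np.roll(u, 1); u[0] = u_t
        out.append(int(np.sum(u)) % 2)
    return np.array(out)

if __name__ == "__main__":
    seq = np.random.default_rng(0).integers(0, 2, size=20)
    net = OutputFeedbackNetwork(rng=0)
    print("target :", toy_fmm(seq))
    print("network:", np.round(net.run(seq), 2))  # untrained; illustration only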

Original language: English (US)
Pages (from-to): 1359-1365
Number of pages: 7
Journal: Neural Networks
Volume: 8
Issue number: 9
DOIs: 10.1016/0893-6080(95)00041-0
State: Published - Jan 1 1995

Fingerprint

  • Recurrent neural networks
  • Finite automata
  • Learning
  • Sequential machines
  • Data storage equipment
  • Neurons
  • Neural networks
  • Feedback

All Science Journal Classification (ASJC) codes

  • Cognitive Neuroscience
  • Artificial Intelligence

Cite this

Lee Giles, C. ; Horne, B. G. ; Lin, T. / Learning a class of large finite state machines with a recurrent neural network. In: Neural Networks. 1995 ; Vol. 8, No. 9. pp. 1359-1365.
@article{de40f15a6769435a90b75087052d9c65,
title = "Learning a class of large finite state machines with a recurrent neural network",
abstract = "One of the issues in any learning model is how it scales with problem size. The problem of learning finite state machines (FSMs) from examples with recurrent neural networks has been extensively explored. However, these results are somewhat disappointing in the sense that the machines that can be learned are too small to be competitive with existing grammatical inference algorithms. We show that a type of recurrent neural network (Narendra & Parthasarathy, 1990, IEEE Trans. Neural Networks, 1, 4-27) which has feedback but no hidden state neurons can learn a special type of FSM called a finite memory machine (FMM) under certain constraints. These machines have a large number of states (simulations are for 256- and 512-state FMMs) but have minimal order, relatively small depth and little logic when the FMM is implemented as a sequential machine.",
author = "{Lee Giles}, C. and Horne, {B. G.} and T. Lin",
year = "1995",
month = "1",
day = "1",
doi = "10.1016/0893-6080(95)00041-0",
language = "English (US)",
volume = "8",
pages = "1359--1365",
journal = "Neural Networks",
issn = "0893-6080",
publisher = "Elsevier Limited",
number = "9",

}
