On recurrent neural networks and representing finite-state recognizers

M. W. Goudreau, C. L. Giles

Research output: Contribution to journal › Article

Abstract

A discussion of the representational abilities of Single Layer Recurrent Neural Networks (SLRNNs) is presented. The fact that SLRNNs cannot implement all finite-state recognizers is addressed. However, there are methods that can be used to expand the representational abilities of SLRNNs, and some of these are explained; we call such systems augmented SLRNNs. Some possibilities for augmenting an SLRNN are: adding a layer of feedforward neurons to the SLRNN, allowing the SLRNN an extra time step to calculate the solution, and increasing the order of the SLRNN. Significantly, for some problems, certain augmented SLRNNs must actually implement a non-minimal finite-state recognizer that is equivalent to the desired finite-state recognizer. Simulations demonstrate the use of both an SLRNN and an augmented SLRNN for learning an odd-parity finite-state recognizer with a gradient descent method.
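As a concrete illustration of the "increasing the order" augmentation, the sketch below hand-codes the two-state odd-parity recognizer into a second-order single-layer recurrent net with step activations, using the familiar one-hot encoding of a finite-state machine in a second-order network. The encoding, the weight construction, and the names (W, delta, recognize) are illustrative assumptions, not details taken from the paper; the paper's simulations instead learn such weights by gradient descent.

import numpy as np

# Illustrative sketch (not the paper's code): the 2-state odd-parity DFA
# encoded in a second-order single-layer recurrent net with step units.
# State s is one-hot over {even, odd}; input x is one-hot over {0, 1}.
# Next state: s'[i] = step( sum_{j,k} W[i,j,k] * s[j] * x[k] - 0.5 )

delta = {(0, 0): 0, (0, 1): 1,   # even --0--> even, even --1--> odd
         (1, 0): 1, (1, 1): 0}   # odd  --0--> odd,  odd  --1--> even

W = np.zeros((2, 2, 2))
for (j, k), i in delta.items():
    W[i, j, k] = 1.0             # unit weight on each DFA transition

def recognize(bits):
    s = np.array([1.0, 0.0])     # start in the 'even' state
    for b in bits:
        x = np.eye(2)[b]
        s = (np.einsum('ijk,j,k->i', W, s, x) > 0.5).astype(float)
    return bool(s[1])            # accept iff the net ends in 'odd'

rng = np.random.default_rng(0)
for _ in range(1000):            # check against the true parity
    bits = rng.integers(0, 2, size=rng.integers(1, 20)).tolist()
    assert recognize(bits) == (sum(bits) % 2 == 1)
print("second-order net agrees with odd parity on 1000 random strings")

The construction also shows why the unaugmented net falls short on this task: the parity next-state is the exclusive-or of the current state bit and the input bit, which the product terms s[j]*x[k] make linearly separable, whereas a first-order SLRNN only sums state and input contributions through a single threshold layer and so cannot realize this transition function.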

Original language: English (US)
Pages (from-to): 51-54
Number of pages: 4
Journal: IEE Conference Publication
Issue number: 372
State: Published - 1993

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

@article{3e797af7badb49899179191c8e51e29b,
title = "On recurrent neural networks and representing finite-state recognizers",
author = "Goudreau, {M. W.} and Giles, {C. L.}",
year = "1993",
language = "English (US)",
pages = "51--54",
journal = "IEE Conference Publication",
issn = "0537-9989",
publisher = "Institution of Engineering and Technology",
number = "372",
url = "http://www.scopus.com/inward/record.url?scp=0027235145&partnerID=8YFLogxK",
}
