First-Order Versus Second-Order Single-Layer Recurrent Neural Networks

Mark W. Goudreau, C. Lee Giles, Srimat T. Chakradhar, D. Chen

Research output: Contribution to journal › Article

46 Citations (Scopus)

Abstract

We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNN's) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNN's.
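The abstract contrasts two update rules: a first-order SLRNN combines the current state and input additively, while a second-order SLRNN weights state-input *products*, letting each input symbol select its own state transition. The following NumPy sketch (illustrative only, not from the paper; the weight values and the parity example are assumptions) shows both updates with hard-limiting neurons, and uses the second-order rule to implement a 2-state parity recognizer:

```python
import numpy as np

def hard_limit(v):
    # Hard-limiting (step) neuron: 1 where the net input is positive, else 0.
    return (v > 0).astype(int)

def first_order_step(Ws, Wx, b, s, x):
    # First-order update: next state from a weighted SUM of state and input.
    return hard_limit(Ws @ s + Wx @ x + b)

def second_order_step(W, b, s, x):
    # Second-order update: weights multiply state-input PRODUCTS s[j] * x[k],
    # so each input symbol can gate a different state transition.
    return hard_limit(np.einsum('ijk,j,k->i', W, s, x) + b)

# Hypothetical example: a 2-state parity recognizer with one-hot states
# and inputs.  W[i, j, k] = 1 iff reading symbol k in state j moves the
# machine to state i; the bias -0.5 thresholds the single active product.
W = np.zeros((2, 2, 2))
W[0, 0, 0] = W[1, 1, 0] = 1.0    # symbol '0': stay in the current state
W[1, 0, 1] = W[0, 1, 1] = 1.0    # symbol '1': flip the state
b = np.full(2, -0.5)

s = np.array([1, 0])              # start in state 0 (even parity)
for sym in [1, 1, 0, 1]:          # input string "1101"
    x = np.array([1 - sym, sym])  # one-hot encode the symbol
    s = second_order_step(W, b, s, x)
# three 1's seen -> odd parity -> state 1, i.e. s == [0, 1]
```

Because the one-hot state-input pair activates exactly one product term, the second-order weights encode the transition table of a finite-state recognizer directly; the paper's point is that a plain first-order SLRNN cannot do this in general without augmentation and state-splitting.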

Original language: English (US)
Pages (from-to): 511-513
Number of pages: 3
Journal: IEEE Transactions on Neural Networks
Volume: 5
Issue number: 3
DOIs: 10.1109/72.286928
State: Published - Jan 1 1994

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Cite this

Goudreau, Mark W. ; Giles, C. Lee ; Chakradhar, Srimat T. ; Chen, D. / First-Order Versus Second-Order Single-Layer Recurrent Neural Networks. In: IEEE Transactions on Neural Networks. 1994 ; Vol. 5, No. 3. pp. 511-513.
@article{d1c326b29f4e41a7ab06989f8ccf4229,
title = "First-Order Versus Second-Order Single-Layer Recurrent Neural Networks",
abstract = "We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNN's) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNN's.",
author = "Goudreau, {Mark W.} and Giles, {C. Lee} and Chakradhar, {Srimat T.} and D. Chen",
year = "1994",
month = "1",
day = "1",
doi = "10.1109/72.286928",
language = "English (US)",
volume = "5",
pages = "511--513",
journal = "IEEE Transactions on Neural Networks",
issn = "1045-9227",
publisher = "IEEE Computational Intelligence Society",
number = "3",
}

TY - JOUR

T1 - First-Order Versus Second-Order Single-Layer Recurrent Neural Networks

AU - Goudreau, Mark W.

AU - Giles, C. Lee

AU - Chakradhar, Srimat T.

AU - Chen, D.

PY - 1994/1/1

Y1 - 1994/1/1

N2 - We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNN's) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNN's.

AB - We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNN's) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNN's.

UR - http://www.scopus.com/inward/record.url?scp=0028424868&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0028424868&partnerID=8YFLogxK

U2 - 10.1109/72.286928

DO - 10.1109/72.286928

M3 - Article

C2 - 18267822

AN - SCOPUS:0028424868

VL - 5

SP - 511

EP - 513

JO - IEEE Transactions on Neural Networks

JF - IEEE Transactions on Neural Networks

SN - 1045-9227

IS - 3

ER -