Inserting rules into recurrent neural networks

C. L. Giles, C. W. Omlin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

20 Citations (Scopus)

Abstract

We present a method that incorporates a priori knowledge in the training of recurrent neural networks. This a priori knowledge can be interpreted as hints about the problem to be learned and these hints are encoded as rules which are then inserted into the neural network. We demonstrate the approach by training recurrent neural networks with inserted rules to learn to recognize regular languages from grammatical string examples. Because the recurrent networks have second-order connections, rule-insertion is a straightforward mapping of rules into weights and neurons. Simulations show that training recurrent networks with different amounts of partial knowledge to recognize simple grammars improves the training time by orders of magnitude, even when only a small fraction of all transitions are inserted as rules. In addition, there appears to be no loss in generalization performance.
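The abstract's mapping of rules into second-order weights can be illustrated with a minimal sketch (not the paper's exact implementation; the strength H, the toy automaton, and all function names are illustrative assumptions). In a second-order network each state neuron updates as S_j(t+1) = g(Σ_{i,k} W[j,i,k] · S_i(t) · I_k(t)), so a known automaton transition δ(q_i, a_k) = q_j can be programmed by setting W[j,i,k] = +H and suppressing competing state neurons with -H:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def insert_rules(n_states, n_symbols, transitions, H=6.0):
    """Program a second-order weight tensor from known DFA transitions.

    transitions: dict mapping (state_i, symbol_k) -> state_j
    H: programming strength (larger H makes the inserted rule 'harder').
    """
    W = np.zeros((n_states, n_states, n_symbols))
    for (i, k), j in transitions.items():
        W[:, i, k] = -H      # suppress every state neuron ...
        W[j, i, k] = +H      # ... except the transition's target state
    return W

def step(W, state, symbol):
    """One second-order update: S_j' = g(sum_{i,k} W[j,i,k] S_i I_k)."""
    inp = np.zeros(W.shape[2])
    inp[symbol] = 1.0
    return sigmoid(np.einsum('jik,i,k->j', W, state, inp))

# Toy 2-state automaton over {0, 1}: symbol 1 toggles the state, 0 keeps it.
trans = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W = insert_rules(n_states=2, n_symbols=2, transitions=trans)
s = np.array([1.0, 0.0])          # start in state 0
for sym in [1, 1, 0]:             # read the string "110"
    s = step(W, s, sym)
print(np.argmax(s))               # two toggles return us to state 0
```

In the paper's setting such programmed weights are a starting point for gradient training rather than a frozen encoding, which is why inserting even a fraction of the transitions can shorten training.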

Original language: English (US)
Title of host publication: Neural Networks for Signal Processing II - Proceedings of the 1992 IEEE Workshop
Editors: C.A. Kamm, S.Y. Kung, J. Aa. Sorenson, F. Fallside
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 13-22
Number of pages: 10
ISBN (Electronic): 0780305574
DOIs: 10.1109/NNSP.1992.253712
State: Published - Jan 1 1992
Event: 1992 IEEE Workshop on Neural Networks for Signal Processing II - Helsingoer, Denmark
Duration: Aug 31 1992 - Sep 2 1992

Publication series

Name: Neural Networks for Signal Processing - Proceedings of the IEEE Workshop

Other

Other: 1992 IEEE Workshop on Neural Networks for Signal Processing II
Country: Denmark
City: Helsingoer
Period: 8/31/92 - 9/2/92

Fingerprint

Recurrent neural networks
Formal languages
Neurons
Neural networks

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
  • Artificial Intelligence
  • Software
  • Computer Networks and Communications
  • Signal Processing

Cite this

Giles, C. L., & Omlin, C. W. (1992). Inserting rules into recurrent neural networks. In C. A. Kamm, S. Y. Kung, J. A. Sorenson, & F. Fallside (Eds.), Neural Networks for Signal Processing II - Proceedings of the 1992 IEEE Workshop (pp. 13-22). [253712] (Neural Networks for Signal Processing - Proceedings of the IEEE Workshop). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/NNSP.1992.253712
Giles, C. L. ; Omlin, C. W. / Inserting rules into recurrent neural networks. Neural Networks for Signal Processing II - Proceedings of the 1992 IEEE Workshop. editor / C.A. Kamm ; S.Y. Kung ; J. Aa. Sorenson ; F. Fallside. Institute of Electrical and Electronics Engineers Inc., 1992. pp. 13-22 (Neural Networks for Signal Processing - Proceedings of the IEEE Workshop).
@inproceedings{30b94f00c6674a819cebc149873ad2a9,
title = "Inserting rules into recurrent neural networks",
abstract = "We present a method that incorporates a priori knowledge in the training of recurrent neural networks. This a priori knowledge can be interpreted as hints about the problem to be learned and these hints are encoded as rules which are then inserted into the neural network. We demonstrate the approach by training recurrent neural networks with inserted rules to learn to recognize regular languages from grammatical string examples. Because the recurrent networks have second-order connections, rule-insertion is a straightforward mapping of rules into weights and neurons. Simulations show that training recurrent networks with different amounts of partial knowledge to recognize simple grammars improves the training time by orders of magnitude, even when only a small fraction of all transitions are inserted as rules. In addition, there appears to be no loss in generalization performance.",
author = "Giles, {C. L.} and Omlin, {C. W.}",
year = "1992",
month = jan,
day = "1",
doi = "10.1109/NNSP.1992.253712",
language = "English (US)",
isbn = "0780305574",
series = "Neural Networks for Signal Processing - Proceedings of the IEEE Workshop",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "13--22",
editor = "Kamm, {C. A.} and Kung, {S. Y.} and Sorenson, {J. Aa.} and Fallside, F.",
booktitle = "Neural Networks for Signal Processing II - Proceedings of the 1992 IEEE Workshop",
address = "United States",
}

Giles, CL & Omlin, CW 1992, Inserting rules into recurrent neural networks. in CA Kamm, SY Kung, JA Sorenson & F Fallside (eds), Neural Networks for Signal Processing II - Proceedings of the 1992 IEEE Workshop., 253712, Neural Networks for Signal Processing - Proceedings of the IEEE Workshop, Institute of Electrical and Electronics Engineers Inc., pp. 13-22, 1992 IEEE Workshop on Neural Networks for Signal Processing II, Helsingoer, Denmark, 8/31/92. https://doi.org/10.1109/NNSP.1992.253712


TY - GEN

T1 - Inserting rules into recurrent neural networks

AU - Giles, C. L.

AU - Omlin, C. W.

PY - 1992/1/1

Y1 - 1992/1/1

N2 - We present a method that incorporates a priori knowledge in the training of recurrent neural networks. This a priori knowledge can be interpreted as hints about the problem to be learned and these hints are encoded as rules which are then inserted into the neural network. We demonstrate the approach by training recurrent neural networks with inserted rules to learn to recognize regular languages from grammatical string examples. Because the recurrent networks have second-order connections, rule-insertion is a straightforward mapping of rules into weights and neurons. Simulations show that training recurrent networks with different amounts of partial knowledge to recognize simple grammars improves the training time by orders of magnitude, even when only a small fraction of all transitions are inserted as rules. In addition, there appears to be no loss in generalization performance.

AB - We present a method that incorporates a priori knowledge in the training of recurrent neural networks. This a priori knowledge can be interpreted as hints about the problem to be learned and these hints are encoded as rules which are then inserted into the neural network. We demonstrate the approach by training recurrent neural networks with inserted rules to learn to recognize regular languages from grammatical string examples. Because the recurrent networks have second-order connections, rule-insertion is a straightforward mapping of rules into weights and neurons. Simulations show that training recurrent networks with different amounts of partial knowledge to recognize simple grammars improves the training time by orders of magnitude, even when only a small fraction of all transitions are inserted as rules. In addition, there appears to be no loss in generalization performance.

UR - http://www.scopus.com/inward/record.url?scp=84947459398&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84947459398&partnerID=8YFLogxK

U2 - 10.1109/NNSP.1992.253712

DO - 10.1109/NNSP.1992.253712

M3 - Conference contribution

AN - SCOPUS:84947459398

T3 - Neural Networks for Signal Processing - Proceedings of the IEEE Workshop

SP - 13

EP - 22

BT - Neural Networks for Signal Processing II - Proceedings of the 1992 IEEE Workshop

A2 - Kamm, C.A.

A2 - Kung, S.Y.

A2 - Sorenson, J. Aa.

A2 - Fallside, F.

PB - Institute of Electrical and Electronics Engineers Inc.

ER -

Giles CL, Omlin CW. Inserting rules into recurrent neural networks. In Kamm CA, Kung SY, Sorenson JA, Fallside F, editors, Neural Networks for Signal Processing II - Proceedings of the 1992 IEEE Workshop. Institute of Electrical and Electronics Engineers Inc. 1992. p. 13-22. 253712. (Neural Networks for Signal Processing - Proceedings of the IEEE Workshop). https://doi.org/10.1109/NNSP.1992.253712