Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks

C. Lee Giles, Christian W. Omlin

Research output: Contribution to journal › Article

47 Citations (Scopus)

Abstract

Recurrent neural networks readily process, learn and generate temporal sequences. In addition, they have been shown to have impressive computational power. Recurrent neural networks can be trained with symbolic string examples encoded as temporal sequences to behave like sequential finite state recognizers. We discuss methods for extracting, inserting and refining symbolic grammatical rules for recurrent networks. This paper discusses various issues: how rules are inserted into recurrent networks, how they affect training and generalization, and how those rules can be checked and corrected. The capability of exchanging information between a symbolic representation (grammatical rules) and a connectionist representation (trained weights) has interesting implications. After partially known rules are inserted, recurrent networks can be trained to preserve inserted rules that were correct and to correct through training inserted rules that were ‘incorrect’—rules inconsistent with the training data.
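
To make the abstract's terminology concrete, the sketch below illustrates one way rule insertion and extraction can work in a second-order recurrent network of the kind used for finite-state recognition. It is a minimal illustration under stated assumptions, not the paper's implementation: the helper names (insert_rules, run, extract_dfa), the rule-strength parameter H and the quantization level q are chosen for the example, and the biases, gradient training and DFA minimization used in practice are omitted.

```python
import numpy as np

# Minimal sketch (not the authors' code): a second-order recurrent network whose
# next state depends on products of the current state vector and the current
# one-hot input symbol.  A known DFA transition delta(q_j, a_k) = q_i is
# "inserted" by biasing the weight W[i, j, k] toward a large value +H and the
# competing weights toward -H; H is an assumed hyperparameter.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def insert_rules(n_states, n_symbols, transitions, H=3.0):
    """Encode partial DFA rules in a third-order weight tensor W[i, j, k].

    transitions: dict mapping (from_state, symbol) -> to_state.
    Transitions not listed stay near zero, so training can fill them in.
    """
    W = np.random.uniform(-0.1, 0.1, size=(n_states, n_states, n_symbols))
    for (j, k), i in transitions.items():
        W[:, j, k] = -H      # suppress every target state for this transition
        W[i, j, k] = +H      # except the one the inserted rule prescribes
    return W

def run(W, string, start_state=0):
    """Process a symbol string; the state is a soft one-hot vector over DFA states."""
    n_states, _, n_symbols = W.shape
    s = np.zeros(n_states)
    s[start_state] = 1.0
    for sym in string:
        x = np.zeros(n_symbols)
        x[sym] = 1.0
        # second-order update: s_i(t+1) = g(sum_jk W[i,j,k] * s_j(t) * x_k(t))
        s = sigmoid(np.einsum('ijk,j,k->i', W, s, x))
    return s                 # e.g. accept if s[accepting_state] > 0.5

def extract_dfa(W, strings, q=2, start_state=0):
    """Crude rule extraction: quantize each state unit into q intervals and
    record which quantized cell follows each (cell, symbol) pair."""
    n_states, _, n_symbols = W.shape
    rules = {}
    for string in strings:
        s = np.zeros(n_states)
        s[start_state] = 1.0
        cell = tuple(np.floor(s * q).clip(0, q - 1).astype(int))
        for sym in string:
            x = np.zeros(n_symbols)
            x[sym] = 1.0
            s = sigmoid(np.einsum('ijk,j,k->i', W, s, x))
            nxt = tuple(np.floor(s * q).clip(0, q - 1).astype(int))
            rules[(cell, sym)] = nxt
            cell = nxt
    return rules             # symbolic transitions read back from the network

# Toy example: a 2-state "parity of ones" recognizer over the alphabet {0, 1}.
# Only the transitions for symbol 1 are inserted; the rest would be learned.
W = insert_rules(n_states=2, n_symbols=2, transitions={(0, 1): 1, (1, 1): 0})
print(run(W, [1, 1, 1]))                     # state activity after reading "111"
print(extract_dfa(W, [[1, 1], [1, 0, 1]]))   # quantized transitions observed
```

In this toy example, inserting only the transitions for symbol 1 already makes the network toggle between the two states of a parity recognizer; training on labelled strings would then be expected to fill in, preserve or correct the remaining transitions, which is the refinement behaviour the abstract describes.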

Original language: English (US)
Pages (from-to): 307-337
Number of pages: 31
Journal: Connection Science
Volume: 5
Issue number: 3-4
DOI: 10.1080/09540099308915703
State: Published - Jan 1 1993

All Science Journal Classification (ASJC) codes

  • Software
  • Human-Computer Interaction
  • Artificial Intelligence

Cite this

Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks. / Giles, C. Lee; Omlin, Christian W.

In: Connection Science, Vol. 5, No. 3-4, 01.01.1993, p. 307-337.
