21 Citations (Scopus)

Abstract

Artificial neural networks (ANNs), due to their inherent parallelism, offer an attractive paradigm for implementing symbol processing systems for applications in computer science and artificial intelligence. This paper explores the systematic synthesis of modular neural-network architectures for syntax analysis using a prespecified grammar, a prototypical symbol processing task with applications in programming language interpretation, syntax analysis of symbolic expressions, and high-performance compilers. The proposed architecture is assembled from ANN components for lexical analysis, stack operations, parsing, and parse tree construction. Each of these modules takes advantage of parallel content-based pattern matching using a neural associative memory. The proposed neural-network architecture for syntax analysis provides a relatively efficient, high-performance alternative to conventional computer systems for applications that involve parsing LR grammars, which constitute a widely used subset of deterministic context-free grammars. A comparison of the quantitatively estimated performance of such a system, implemented using current CMOS very large scale integration (VLSI) technology, with that of conventional computers demonstrates the benefits of massively parallel neural-network architectures for symbol processing applications.
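
As a concrete illustration of the parsing style the abstract describes, the sketch below implements an ordinary software shift-reduce LR driver in which every step is a content-based lookup keyed on the pair (current state, lookahead symbol); in the paper, that lookup is performed in parallel by a neural associative memory, and the stack and parse-tree construction are handled by dedicated ANN modules. The grammar, states, table entries, and function names here are an invented toy example (E -> E + n | n), not the architecture, grammar, or implementation from the paper.

# Illustrative sketch only, not the neural implementation from the paper.
# A dictionary lookup on (state, lookahead) stands in for the parallel,
# content-based match that the paper performs with a neural associative
# memory; Python lists stand in for the neural stack module.
# Hypothetical toy SLR(1) grammar:  E -> E + n | n

ACTION = {
    (0, 'n'): ('shift', 1),
    (1, '+'): ('reduce', 'E', 1), (1, '$'): ('reduce', 'E', 1),  # E -> n
    (2, '+'): ('shift', 3),       (2, '$'): ('accept',),
    (3, 'n'): ('shift', 4),
    (4, '+'): ('reduce', 'E', 3), (4, '$'): ('reduce', 'E', 3),  # E -> E + n
}
GOTO = {(0, 'E'): 2}

def parse(tokens):
    """Shift-reduce LR driver that also builds a parse tree on each reduce."""
    states = [0]   # parser stack (a dedicated stack module in the paper)
    trees = []     # partial parse trees, one per symbol on the stack
    pos = 0
    while True:
        action = ACTION[(states[-1], tokens[pos])]  # content-based table match
        if action[0] == 'shift':
            states.append(action[1])
            trees.append(tokens[pos])               # leaf node
            pos += 1
        elif action[0] == 'reduce':
            _, lhs, length = action
            children = trees[-length:]
            del trees[-length:]
            del states[-length:]
            trees.append((lhs, children))           # internal parse-tree node
            states.append(GOTO[(states[-1], lhs)])
        else:                                       # accept
            return trees[0]

# Parsing "n + n" yields ('E', [('E', ['n']), '+', 'n'])
print(parse(['n', '+', 'n', '$']))

In the hardware the paper analyzes, such table entries would be matched simultaneously by the associative memory rather than probed one at a time, which is the source of the estimated advantage over conventional sequential processors.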

Original language: English (US)
Pages (from-to): 94-114
Number of pages: 21
Journal: IEEE Transactions on Neural Networks
Volume: 10
Issue number: 1
DOIs: 10.1109/72.737497
State: Published - Dec 1 1999

Fingerprint

  • Network architecture
  • Neural networks
  • Processing
  • Context free grammars
  • Network components
  • VLSI circuits
  • Pattern matching
  • Computer programming languages
  • Computer science
  • Artificial intelligence
  • Computer systems
  • Data storage equipment

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Cite this

@article{9fbf5caaff1e4437a28313c0ad37b22e,
title = "A neural-network architecture for syntax analysis",
author = "Chen, {Chun Hsien} and Vasant Honavar",
year = "1999",
month = "12",
day = "1",
doi = "10.1109/72.737497",
language = "English (US)",
volume = "10",
pages = "94--114",
journal = "IEEE Transactions on Neural Networks",
issn = "1045-9227",
publisher = "IEEE Computational Intelligence Society",
number = "1",

}

A neural-network architecture for syntax analysis. / Chen, Chun Hsien; Honavar, Vasant.

In: IEEE Transactions on Neural Networks, Vol. 10, No. 1, 01.12.1999, p. 94-114.

Research output: Contribution to journal › Article

TY - JOUR

T1 - A neural-network architecture for syntax analysis

AU - Chen, Chun Hsien

AU - Honavar, Vasant

PY - 1999/12/1

Y1 - 1999/12/1

UR - http://www.scopus.com/inward/record.url?scp=0032786152&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0032786152&partnerID=8YFLogxK

U2 - 10.1109/72.737497

DO - 10.1109/72.737497

M3 - Article

VL - 10

SP - 94

EP - 114

JO - IEEE Transactions on Neural Networks

JF - IEEE Transactions on Neural Networks

SN - 1045-9227

IS - 1

ER -