Generative learning structures and processes for generalized connectionist networks

Vasant Honavar, Leonard Uhr

Research output: Contribution to journal › Article

25 Citations (Scopus)

Abstract

Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative, for they offer a set of mechanisms for constructive and adaptive determination of the network architecture (the number of processing elements and the connectivity among them) as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology, such as rather slow learning and the need for an a priori choice of network architecture. Several alternative designs, as well as a range of control structures and processes that can be used to regulate the form and content of internal representations learned by such networks, are examined. Empirical results from the study of some generative learning algorithms are briefly summarized, and several extensions and refinements of such algorithms and directions for future research are outlined.
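The abstract's central idea, growing the network architecture as a function of experience rather than adjusting weights in a fixed topology, can be illustrated with a deliberately minimal sketch. This is not the paper's algorithm; all names here are hypothetical, and the unit-recruitment rule (one exact-match template detector per misclassified positive example, ORed at the output) is chosen only because it is the simplest scheme in the constructive family:

```python
# A toy "generative" learner over binary inputs: it starts with no hidden
# units and recruits one whenever the current network misclassifies a
# positive example. Each recruited unit is a template detector that fires
# only on its stored pattern; the output unit ORs the detectors together.

def step(z):
    return 1 if z >= 0 else 0

class GenerativeNet:
    def __init__(self, n_inputs):
        self.n = n_inputs
        self.hidden = []  # (weights, threshold) pairs, grown at run time

    def _hidden_activations(self, x):
        # A unit fires iff x matches its stored template exactly:
        # weights are +1 for stored 1-bits, -1 for stored 0-bits, and the
        # threshold equals the number of 1-bits, so only the template
        # itself reaches threshold.
        return [step(sum(w * xi for w, xi in zip(ws, x)) - theta)
                for ws, theta in self.hidden]

    def predict(self, x):
        # Output unit: a simple OR over the recruited detectors.
        return 1 if any(self._hidden_activations(x)) else 0

    def fit(self, examples):
        # Generative step: add structure only where experience demands it.
        for x, y in examples:
            if y == 1 and self.predict(x) == 0:
                ws = [1 if xi == 1 else -1 for xi in x]
                self.hidden.append((ws, sum(x)))

# XOR is not linearly separable, so a fixed single-layer net cannot learn
# it; this learner instead grows two hidden units, one per positive case.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = GenerativeNet(2)
net.fit(xor)  # net.hidden now holds 2 units; predictions match XOR
```

Memorizing one detector per positive example scales poorly, of course; the algorithms surveyed in the paper use far more economical growth criteria, but the architectural point is the same: the number of units and their connectivity are outputs of learning, not inputs to it.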

Original language: English (US)
Pages (from-to): 75-108
Number of pages: 34
Journal: Information Sciences
Volume: 70
Issue number: 1-2
DOI: 10.1016/0020-0255(93)90049-R
State: Published - May 1993


All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computer Science Applications
  • Information Systems and Management
  • Artificial Intelligence

Cite this

@article{bc0ba77c40a24ff893dc5c6af2f7321e,
  title = "Generative learning structures and processes for generalized connectionist networks",
  abstract = "Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative for they offer a set of mechanisms for constructive and adaptive determination of the network architecture-the number of processing elements and the connectivity among them-as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of some approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology, for example, rather slow learning and the need for an a priori choice of network architecture. Several alternative designs as well as a range of control structures and processes that can be used to regulate the form and content of internal representations learned by such networks are examined. Empirical results from the study of some generative learning algorithms are briefly summarized, and several extensions and refinements of such algorithms and directions for future research are outlined.",
  author = "Vasant Honavar and Leonard Uhr",
  year = "1993",
  month = may,
  doi = "10.1016/0020-0255(93)90049-R",
  language = "English (US)",
  volume = "70",
  pages = "75--108",
  journal = "Information Sciences",
  issn = "0020-0255",
  publisher = "Elsevier Inc.",
  number = "1-2",
}

Generative learning structures and processes for generalized connectionist networks. / Honavar, Vasant; Uhr, Leonard.

In: Information Sciences, Vol. 70, No. 1-2, 05.1993, p. 75-108.

TY - JOUR

T1 - Generative learning structures and processes for generalized connectionist networks

AU - Honavar, Vasant

AU - Uhr, Leonard

PY - 1993/5

Y1 - 1993/5

N2 - Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative for they offer a set of mechanisms for constructive and adaptive determination of the network architecture-the number of processing elements and the connectivity among them-as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of some approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology, for example, rather slow learning and the need for an a priori choice of network architecture. Several alternative designs as well as a range of control structures and processes that can be used to regulate the form and content of internal representations learned by such networks are examined. Empirical results from the study of some generative learning algorithms are briefly summarized, and several extensions and refinements of such algorithms and directions for future research are outlined.

AB - Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative for they offer a set of mechanisms for constructive and adaptive determination of the network architecture-the number of processing elements and the connectivity among them-as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of some approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology, for example, rather slow learning and the need for an a priori choice of network architecture. Several alternative designs as well as a range of control structures and processes that can be used to regulate the form and content of internal representations learned by such networks are examined. Empirical results from the study of some generative learning algorithms are briefly summarized, and several extensions and refinements of such algorithms and directions for future research are outlined.

UR - http://www.scopus.com/inward/record.url?scp=0027591476&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0027591476&partnerID=8YFLogxK

U2 - 10.1016/0020-0255(93)90049-R

DO - 10.1016/0020-0255(93)90049-R

M3 - Article

AN - SCOPUS:0027591476

VL - 70

SP - 75

EP - 108

JO - Information Sciences

JF - Information Sciences

SN - 0020-0255

IS - 1-2

ER -