GENERALIZATION IN NEURAL NETWORKS: THE CONTIGUITY PROBLEM.

Tom Maxwell, C. Lee Giles, Y. C. Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

12 Scopus citations

Abstract

The problem of constructing a network that will learn concepts from a training set consisting of some fraction of the total number of possible examples of the concept is considered. Without some sort of prior knowledge, this problem is not well defined, since in general there will be many possible concepts that are consistent with a given training set. It is suggested that one way of incorporating prior knowledge is to construct the network such that the structure of the network reflects the structure of the problem environment. Two types of networks (a two-layer slab trained with back propagation and a single high-order slab) are explored to determine their ability to learn the concept of contiguity. It is found that the high-order slab learns and generalizes contiguity very efficiently, whereas the first-order network learns very slowly and shows little generalization capability on the small problems that have been examined.
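The abstract does not define the contiguity task, but in the neural-network literature of this period it is commonly formulated as counting the number of contiguous blocks of 1s in a binary input string. Under that assumption, the following sketch illustrates why a high-order (multiplicative) unit can represent the concept directly: the block count equals the number of 1s minus the number of adjacent 1-1 pairs, which is a linear function of first-order terms and second-order product terms. This is an illustrative reconstruction, not the paper's actual architecture or training setup.

```python
# Illustrative sketch (assumption: "contiguity" = number of contiguous
# blocks of 1s in a binary string, a common formulation of the task).
# A single second-order unit can compute it exactly:
#   blocks = sum_i x_i - sum_i x_i * x_{i+1}
# i.e. weight +1 on each first-order term and weight -1 on each
# adjacent-pair product term. A first-order network has no such direct
# representation and must approximate the concept indirectly.

def blocks_high_order(x):
    """Count contiguous blocks of 1s using first- and second-order terms."""
    first_order = sum(x)                                 # sum_i x_i
    second_order = sum(a * b for a, b in zip(x, x[1:]))  # sum_i x_i * x_{i+1}
    return first_order - second_order

def blocks_by_scan(x):
    """Reference implementation: count 0 -> 1 transitions."""
    count = 0
    prev = 0
    for bit in x:
        if bit == 1 and prev == 0:
            count += 1
        prev = bit
    return count

if __name__ == "__main__":
    import itertools
    # The closed-form high-order expression agrees with a direct scan
    # on every 8-bit input.
    for bits in itertools.product([0, 1], repeat=8):
        assert blocks_high_order(bits) == blocks_by_scan(bits)
    print(blocks_high_order([1, 1, 0, 1, 0, 0, 1, 1]))  # prints 3
```

Because the target concept is exactly linear in the high-order terms, a high-order slab needs only to find these weights, which is consistent with the abstract's observation that it learns and generalizes contiguity far more efficiently than the first-order network.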

Original language: English (US)
Title of host publication: Unknown Host Publication Title
Editors: Maureen Caudill, Charles T. Butler, San Diego Adaptics
Publisher: SOS Printing
State: Published - 1987

All Science Journal Classification (ASJC) codes

  • Engineering(all)


  • Cite this

    Maxwell, T., Giles, C. L., & Lee, Y. C. (1987). GENERALIZATION IN NEURAL NETWORKS: THE CONTIGUITY PROBLEM. In M. Caudill, C. T. Butler, & S. D. Adaptics (Eds.), Unknown Host Publication Title. SOS Printing.