7 Citations (Scopus)

Abstract

This paper specifies the main features of connectionist and brain-like connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of networks exploiting such structures (e.g. local receptive fields, global convergence-divergence). The anatomy, physiology, behavior and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g. houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation learning, i.e. the growth of new links and, possibly, nodes, subject to brain-like topological constraints. The information-processing transforms discovered through feedback-guided generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g. letters of the alphabet, cups, apples, bananas) through generation and reweighting of transforms. These show large improvements over networks that lack brain-like structure and/or learn by reweighting of links alone. It is concluded that brain-like structures and generation learning can significantly increase the power of connectionist models.

Original language: English (US)
Pages (from-to): 139-159
Number of pages: 21
Journal: Connection Science
Volume: 1
Issue number: 2
DOI: 10.1080/09540098908915633
State: Published - Jan 1 1989

All Science Journal Classification (ASJC) codes

  • Software
  • Human-Computer Interaction
  • Artificial Intelligence

Cite this

@article{5eebdd3ba3a944afabd4cac7d68b9d83,
title = "Brain-structured Connectionist Networks that Perceive and Learn",
abstract = "This paper specifies the main features of connectionist and brain-like connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of networks exploiting such structures (e.g. local receptive fields, global convergence-divergence). The anatomy, physiology, behavior and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g. houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation learning, i.e. the growth of new links and, possibly, nodes, subject to brain-like topological constraints. The information-processing transforms discovered through feedback-guided generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g. letters of the alphabet, cups, apples, bananas) through generation and reweighting of transforms. These show large improvements over networks that lack brain-like structure and/or learn by reweighting of links alone. It is concluded that brain-like structures and generation learning can significantly increase the power of connectionist models.",
author = "Vasant Honavar and Leonard Uhr",
year = "1989",
month = "1",
day = "1",
doi = "10.1080/09540098908915633",
language = "English (US)",
volume = "1",
pages = "139--159",
journal = "Connection Science",
issn = "0954-0091",
publisher = "Taylor and Francis AS",
number = "2",
}

Brain-structured Connectionist Networks that Perceive and Learn. / Honavar, Vasant; Uhr, Leonard.

In: Connection Science, Vol. 1, No. 2, 01.01.1989, p. 139-159.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Brain-structured Connectionist Networks that Perceive and Learn

AU - Honavar, Vasant

AU - Uhr, Leonard

PY - 1989/1/1

Y1 - 1989/1/1

N2 - This paper specifies the main features of connectionist and brain-like connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of networks exploiting such structures (e.g. local receptive fields, global convergence-divergence). The anatomy, physiology, behavior and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g. houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation learning, i.e. the growth of new links and, possibly, nodes, subject to brain-like topological constraints. The information-processing transforms discovered through feedback-guided generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g. letters of the alphabet, cups, apples, bananas) through generation and reweighting of transforms. These show large improvements over networks that lack brain-like structure and/or learn by reweighting of links alone. It is concluded that brain-like structures and generation learning can significantly increase the power of connectionist models.

AB - This paper specifies the main features of connectionist and brain-like connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of networks exploiting such structures (e.g. local receptive fields, global convergence-divergence). The anatomy, physiology, behavior and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g. houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation learning, i.e. the growth of new links and, possibly, nodes, subject to brain-like topological constraints. The information-processing transforms discovered through feedback-guided generation are fine-tuned by feedback-guided reweighting of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g. letters of the alphabet, cups, apples, bananas) through generation and reweighting of transforms. These show large improvements over networks that lack brain-like structure and/or learn by reweighting of links alone. It is concluded that brain-like structures and generation learning can significantly increase the power of connectionist models.

UR - http://www.scopus.com/inward/record.url?scp=0000212135&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0000212135&partnerID=8YFLogxK

U2 - 10.1080/09540098908915633

DO - 10.1080/09540098908915633

M3 - Article

AN - SCOPUS:0000212135

VL - 1

SP - 139

EP - 159

JO - Connection Science

JF - Connection Science

SN - 0954-0091

IS - 2

ER -