A no-go theorem for one-layer feedforward networks

Chad Giusti, Vladimir Itskov

Research output: Contribution to journal › Article

9 Citations (Scopus)

Abstract

It is often hypothesized that a crucial role for recurrent connections in the brain is to constrain the set of possible response patterns, thereby shaping the neural code. This implies the existence of neural codes that cannot arise solely from feedforward processing. We set out to find such codes in the context of one-layer feedforward networks and identified a large class of combinatorial codes that indeed cannot be shaped by the feedforward architecture alone. However, these codes are difficult to distinguish from codes that share the same sets of maximal activity patterns in the presence of subtractive noise. When we coarsened the notion of combinatorial neural code to keep track of only maximal patterns, we found the surprising result that all such codes can in fact be realized by one-layer feedforward networks. This suggests that recurrent or many-layer feedforward architectures are not necessary for shaping the (coarse) combinatorial features of neural codes. In particular, it is not possible to infer a computational role for recurrent connections from the combinatorics of neural response patterns alone. Our proofs use mathematical tools from classical combinatorial topology, such as the nerve lemma and the existence of an inverse nerve. An unexpected corollary of our main result is that any prescribed (finite) homotopy type can be realized by a subset of the form R^n_{≥0} \ P, where P is a polyhedron.
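To illustrate the setting of the abstract, the sketch below (not taken from the paper; the weights and thresholds are arbitrary illustrative choices) shows how a one-layer feedforward network defines a combinatorial neural code: each stimulus is mapped to the binary pattern of neurons whose thresholded linear response is positive, and the code is the set of patterns realized as the stimulus varies.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons = 2, 4
W = rng.normal(size=(n_neurons, n_inputs))  # feedforward weights (illustrative)
theta = rng.normal(size=n_neurons)          # firing thresholds (illustrative)

def codeword(x):
    """Binary response pattern of the one-layer network to stimulus x."""
    return tuple((W @ x > theta).astype(int))

# The combinatorial code is the set of patterns realized over stimulus space;
# here we approximate it by sampling stimuli from a finite grid.
grid = np.linspace(-3, 3, 61)
code = {codeword(np.array([a, b])) for a, b in itertools.product(grid, grid)}

print(f"{len(code)} distinct codewords out of {2**n_neurons} possible")
```

The paper's question can be phrased in these terms: which sets of binary patterns can arise as `code` for some choice of `W` and `theta`, and which require recurrent or deeper architectures?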

Original language: English (US)
Pages (from-to): 2527-2540
Number of pages: 14
Journal: Neural Computation
Volume: 26
Issue number: 11
DOI: 10.1162/NECO_a_00657
State: Published - Nov 20 2014

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience

Cite this

Giusti, Chad; Itskov, Vladimir. A no-go theorem for one-layer feedforward networks. In: Neural Computation. 2014; Vol. 26, No. 11, pp. 2527-2540.
@article{3186a1fb038c42f7a2244d835881a936,
title = "A no-go theorem for one-layer feedforward networks",
abstract = "It is often hypothesized that a crucial role for recurrent connections in the brain is to constrain the set of possible response patterns, thereby shaping the neural code. This implies the existence of neural codes that cannot arise solely from feedforward processing. We set out to find such codes in the context of one-layer feedforward networks and identified a large class of combinatorial codes that indeed cannot be shaped by the feedforward architecture alone. However, these codes are difficult to distinguish from codes that share the same sets of maximal activity patterns in the presence of subtractive noise. When we coarsened the notion of combinatorial neural code to keep track of only maximal patterns, we found the surprising result that all such codes can in fact be realized by one-layer feedforward networks. This suggests that recurrent or many-layer feedforward architectures are not necessary for shaping the (coarse) combinatorial features of neural codes. In particular, it is not possible to infer a computational role for recurrent connections from the combinatorics of neural response patterns alone. Our proofs use mathematical tools from classical combinatorial topology, such as the nerve lemma and the existence of an inverse nerve. An unexpected corollary of our main result is that any prescribed (finite) homotopy type can be realized by a subset of the form R^n_{≥0} \ P, where P is a polyhedron.",
author = "Chad Giusti and Vladimir Itskov",
year = "2014",
month = "11",
day = "20",
doi = "10.1162/NECO_a_00657",
language = "English (US)",
volume = "26",
pages = "2527--2540",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "11",
}

