On learning the derivatives of an unknown mapping with multilayer feedforward networks

Andrew Ronald Gallant, Halbert White

Research output: Contribution to journal › Article

189 Citations (Scopus)

Abstract

Recently, multiple input, single output, single hidden-layer feedforward neural networks have been shown to be capable of approximating a nonlinear map and its partial derivatives. Specifically, neural nets have been shown to be dense in various Sobolev spaces. Building upon this result, we show that a net can be trained so that the map and its derivatives are learned. Specifically, we use a result of Gallant's to show that least squares and similar estimates are strongly consistent in Sobolev norm provided the number of hidden units and the size of the training set increase together. We illustrate these results by an application to the inverse problem of chaotic dynamics: recovery of a nonlinear map from a time series of iterates. These results extend automatically to nets that embed the single hidden layer, feedforward network as a special case.
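As a concrete illustration of the setup the abstract describes — a minimal sketch, not the authors' code — the snippet below fits a single hidden-layer tanh network to iterates of the logistic map x_{t+1} = 4·x_t·(1 − x_t) by least squares, then compares both the fitted map and its derivative against the truth on a grid. The hidden-unit count, learning rate, and training length are illustrative assumptions; the paper's consistency result concerns letting the number of hidden units grow with the sample size.

```python
# Sketch of the paper's setup (illustrative parameters, not the authors' code):
# fit a single hidden-layer net to a chaotic time series by least squares and
# check how well the net approximates both the map and its derivative.
import numpy as np

rng = np.random.default_rng(0)

def f(x):            # chaotic logistic map, standing in for the unknown mapping
    return 4.0 * x * (1.0 - x)

def df(x):           # its derivative, for the Sobolev-style error check
    return 4.0 - 8.0 * x

# Time series of iterates: inputs x_t, targets x_{t+1}.
T = 2000
x = np.empty(T + 1)
x[0] = 0.3
for t in range(T):
    x[t + 1] = f(x[t])
X, Y = x[:-1], x[1:]

# Single hidden-layer net: g(x) = b0 + sum_j beta_j * tanh(gamma_j * x + c_j).
H = 20
gamma = rng.normal(size=H)
c     = rng.normal(size=H)
beta  = rng.normal(size=H) * 0.1
b0    = 0.0

# Nonlinear least squares via full-batch gradient descent on the MSE.
lr = 0.05
for epoch in range(5000):
    Z = np.tanh(np.outer(X, gamma) + c)           # (T, H) hidden activations
    err = Z @ beta + b0 - Y                       # least-squares residuals
    g_beta = Z.T @ err / T
    g_b0   = err.mean()
    dtanh  = (1.0 - Z**2) * err[:, None] / T      # back-prop through tanh
    g_gamma = (dtanh * X[:, None]).sum(axis=0) * beta
    g_c     = dtanh.sum(axis=0) * beta
    beta  -= lr * g_beta
    b0    -= lr * g_b0
    gamma -= lr * g_gamma
    c     -= lr * g_c

# Sup-norm errors of the map and its derivative on a grid: the net's
# derivative is g'(x) = sum_j beta_j * gamma_j * (1 - tanh(...)**2).
grid = np.linspace(0.05, 0.95, 200)
Zg = np.tanh(np.outer(grid, gamma) + c)
g_hat  = Zg @ beta + b0
dg_hat = (1.0 - Zg**2) @ (beta * gamma)
print("sup |f - g|  :", np.max(np.abs(f(grid) - g_hat)))
print("sup |f' - g'|:", np.max(np.abs(df(grid) - dg_hat)))
```

Growing H together with T should drive both sup-norm errors down. That joint convergence is what the abstract's "strongly consistent in Sobolev norm" refers to: a Sobolev sup-norm such as ||g||_{m,∞} = max_{|a| ≤ m} sup_x |D^a g(x)| bounds the map error and the derivative error simultaneously.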

Original language: English (US)
Pages (from-to): 129-138
Number of pages: 10
Journal: Neural Networks
Volume: 5
Issue number: 1
DOI: 10.1016/S0893-6080(05)80011-5
State: Published - Jan 1 1992

Fingerprint

  • Least-Squares Analysis
  • Multilayers
  • Learning
  • Derivatives
  • Sobolev spaces
  • Feedforward neural networks
  • Inverse problems
  • Time series
  • Neural networks
  • Recovery

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Cognitive Neuroscience
  • Neuroscience (all)

Cite this

@article{d50dac3fbc8749fe88ae6df382554f80,
title = "On learning the derivatives of an unknown mapping with multilayer feedforward networks",
abstract = "Recently, multiple input, single output, single hidden-layer feedforward neural networks have been shown to be capable of approximating a nonlinear map and its partial derivatives. Specifically, neural nets have been shown to be dense in various Sobolev spaces. Building upon this result, we show that a net can be trained so that the map and its derivatives are learned. Specifically, we use a result of Gallant's to show that least squares and similar estimates are strongly consistent in Sobolev norm provided the number of hidden units and the size of the training set increase together. We illustrate these results by an application to the inverse problem of chaotic dynamics: recovery of a nonlinear map from a time series of iterates. These results extend automatically to nets that embed the single hidden layer, feedforward network as a special case.",
author = "Gallant, {Andrew Ronald} and Halbert White",
year = "1992",
month = "1",
day = "1",
doi = "10.1016/S0893-6080(05)80011-5",
language = "English (US)",
volume = "5",
pages = "129--138",
journal = "Neural Networks",
issn = "0893-6080",
publisher = "Elsevier Limited",
number = "1",
}

On learning the derivatives of an unknown mapping with multilayer feedforward networks. / Gallant, Andrew Ronald; White, Halbert.

In: Neural Networks, Vol. 5, No. 1, 01.01.1992, pp. 129-138.

Research output: Contribution to journal › Article

TY - JOUR

T1 - On learning the derivatives of an unknown mapping with multilayer feedforward networks

AU - Gallant, Andrew Ronald

AU - White, Halbert

PY - 1992/1/1

Y1 - 1992/1/1

N2 - Recently, multiple input, single output, single hidden-layer feedforward neural networks have been shown to be capable of approximating a nonlinear map and its partial derivatives. Specifically, neural nets have been shown to be dense in various Sobolev spaces. Building upon this result, we show that a net can be trained so that the map and its derivatives are learned. Specifically, we use a result of Gallant's to show that least squares and similar estimates are strongly consistent in Sobolev norm provided the number of hidden units and the size of the training set increase together. We illustrate these results by an application to the inverse problem of chaotic dynamics: recovery of a nonlinear map from a time series of iterates. These results extend automatically to nets that embed the single hidden layer, feedforward network as a special case.

AB - Recently, multiple input, single output, single hidden-layer feedforward neural networks have been shown to be capable of approximating a nonlinear map and its partial derivatives. Specifically, neural nets have been shown to be dense in various Sobolev spaces. Building upon this result, we show that a net can be trained so that the map and its derivatives are learned. Specifically, we use a result of Gallant's to show that least squares and similar estimates are strongly consistent in Sobolev norm provided the number of hidden units and the size of the training set increase together. We illustrate these results by an application to the inverse problem of chaotic dynamics: recovery of a nonlinear map from a time series of iterates. These results extend automatically to nets that embed the single hidden layer, feedforward network as a special case.

UR - http://www.scopus.com/inward/record.url?scp=0026449851&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0026449851&partnerID=8YFLogxK

U2 - 10.1016/S0893-6080(05)80011-5

DO - 10.1016/S0893-6080(05)80011-5

M3 - Article

AN - SCOPUS:0026449851

VL - 5

SP - 129

EP - 138

JO - Neural Networks

JF - Neural Networks

SN - 0893-6080

IS - 1

ER -