Abstract
Recently, multiple-input, single-output, single hidden-layer feedforward neural networks have been shown to be capable of approximating a nonlinear map and its partial derivatives; specifically, such networks are dense in various Sobolev spaces. Building on this result, we show that a network can be trained so that the map and its derivatives are learned. In particular, we use a result of Gallant's to show that least squares and similar estimates are strongly consistent in Sobolev norm, provided the number of hidden units and the size of the training set increase together. We illustrate these results with an application to the inverse problem of chaotic dynamics: recovery of a nonlinear map from a time series of its iterates. The results extend automatically to networks that embed the single hidden-layer feedforward network as a special case.
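To make the learning claim concrete, the following is a minimal numerical sketch, not the authors' procedure: the logistic map, the ten-unit tanh hidden layer, and plain gradient descent on the least-squares loss are all illustrative assumptions. It fits a single hidden-layer network to pairs of consecutive iterates of a chaotic map, then compares the fitted network's analytic derivative with the true derivative, mirroring the claim that least-squares training can recover both the map and its derivative.

```python
# A minimal sketch, assuming: logistic-map data, a 10-unit tanh hidden layer,
# and full-batch gradient descent on the least-squares loss (all illustrative
# choices, not the paper's estimator).
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Chaotic logistic map on [0, 1]."""
    return 4.0 * x * (1.0 - x)

def df(x):
    """True derivative of the logistic map."""
    return 4.0 - 8.0 * x

# Training set: consecutive iterates (x_t, x_{t+1}) of the map.
n = 500
xs = np.empty(n + 1)
xs[0] = 0.3
for t in range(n):
    xs[t + 1] = f(xs[t])
x, y = xs[:-1], xs[1:]

# Single hidden-layer net: g(x) = b0 + sum_j beta[j] * tanh(a[j] * x + c[j]).
h = 10
a, c = rng.normal(size=h), rng.normal(size=h)
beta, b0 = 0.1 * rng.normal(size=h), 0.0

def net(x):
    return b0 + np.tanh(np.outer(x, a) + c) @ beta

def dnet(x):
    # Analytic derivative: sum_j beta[j] * a[j] * sech^2(a[j] * x + c[j]).
    return (1.0 - np.tanh(np.outer(x, a) + c) ** 2) @ (beta * a)

# Least-squares training by full-batch gradient descent.
lr = 0.05
for _ in range(20_000):
    th = np.tanh(np.outer(x, a) + c)     # (n, h) hidden activations
    r = (b0 + th @ beta) - y             # residuals
    sech2 = 1.0 - th ** 2
    g_beta = th.T @ r / n
    g_b0 = r.mean()
    g_a = (sech2 * (r * x)[:, None]).sum(axis=0) * beta / n
    g_c = (sech2 * r[:, None]).sum(axis=0) * beta / n
    beta, b0 = beta - lr * g_beta, b0 - lr * g_b0
    a, c = a - lr * g_a, c - lr * g_c

# Sobolev-flavored check: error of the fitted map AND of its derivative.
grid = np.linspace(0.05, 0.95, 7)
print("max |g  - f |:", np.abs(net(grid) - f(grid)).max())
print("max |g' - f'|:", np.abs(dnet(grid) - df(grid)).max())
```

If both printed errors are small, the network has approximated the map and its derivative at once; driving both to zero as the training set and the hidden layer grow together is the practical content of strong consistency in a first-order Sobolev norm.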
Original language | English (US)
---|---
Pages (from-to) | 129-138
Number of pages | 10
Journal | Neural Networks
ISSN | 0893-6080
Volume | 5
Issue number | 1
DOIs | 10.1016/S0893-6080(05)80011-5
State | Published - Jan 1 1992
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Cognitive Neuroscience
- Neuroscience (all)
Cite this
Gallant, A. R., & White, H. (1992). On learning the derivatives of an unknown mapping with multilayer feedforward networks. Neural Networks, 5(1), 129-138. https://doi.org/10.1016/S0893-6080(05)80011-5