Abstract
Recently, multiple-input, single-output, single hidden-layer feedforward neural networks have been shown to be capable of approximating a nonlinear map and its partial derivatives. Specifically, neural nets have been shown to be dense in various Sobolev spaces. Building upon this result, we show that a net can be trained so that the map and its derivatives are learned. In particular, we use a result of Gallant's to show that least squares and similar estimates are strongly consistent in Sobolev norm, provided the number of hidden units and the size of the training set increase together. We illustrate these results with an application to the inverse problem of chaotic dynamics: recovery of a nonlinear map from a time series of its iterates. These results extend automatically to nets that embed the single hidden-layer feedforward network as a special case.
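The chaotic-dynamics illustration can be made concrete with a minimal sketch (not the authors' code): fit a single hidden-layer tanh network by least squares to consecutive iterates of the logistic map, then compare the network's input derivative with the true derivative of the map. The choice of map (logistic, r = 3.9), the network size, and the training loop below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chaotic logistic map f(x) = r x (1 - x) and its derivative f'(x) = r (1 - 2x).
r = 3.9
f = lambda x: r * x * (1.0 - x)
df = lambda x: r * (1.0 - 2.0 * x)

# Training set: pairs (x_t, x_{t+1}) from a single trajectory.
n = 500
x = np.empty(n + 1)
x[0] = 0.3
for t in range(n):
    x[t + 1] = f(x[t])
X, Y = x[:-1], x[1:]

# Single hidden layer, scalar input/output: net(x) = w2 . tanh(w1 x + b) + c
H = 20
w1 = rng.normal(size=H)
b = rng.normal(size=H)
w2 = rng.normal(size=H) * 0.1
c = 0.0

def net(xs):
    return np.tanh(np.outer(xs, w1) + b) @ w2 + c

def net_dx(xs):
    # Analytic input derivative: sum_k w2_k w1_k (1 - tanh(w1_k x + b_k)^2)
    s = np.tanh(np.outer(xs, w1) + b)
    return (1.0 - s**2) @ (w1 * w2)

# Full-batch gradient descent on the least-squares loss.
lr = 0.05
for _ in range(20000):
    S = np.tanh(np.outer(X, w1) + b)            # hidden activations, (n, H)
    resid = S @ w2 + c - Y                      # residuals, (n,)
    back = (resid[:, None] * (1.0 - S**2)) * w2 # backprop through tanh, (n, H)
    w2 -= lr * (S.T @ resid) / n
    c  -= lr * resid.mean()
    w1 -= lr * (back.T @ X) / n
    b  -= lr * back.mean(axis=0)

# Both the map and its derivative should be approximated on the attractor.
grid = np.linspace(0.05, 0.95, 19)
print("max |net - f|  :", np.abs(net(grid) - f(grid)).max())
print("max |net' - f'|:", np.abs(net_dx(grid) - df(grid)).max())
```

Note that the network is fit only to function values (the iterates); the derivative comparison at the end is what the paper's Sobolev-norm consistency result concerns, since convergence in Sobolev norm controls the error in the derivatives as well as in the map itself.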
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 129-138 |
| Number of pages | 10 |
| Journal | Neural Networks |
| Volume | 5 |
| Issue number | 1 |
| DOIs | |
| State | Published - 1992 |
All Science Journal Classification (ASJC) codes
- Cognitive Neuroscience
- Artificial Intelligence