### Abstract

As the training set becomes arbitrarily large, appropriately trained and configured single hidden layer feedforward networks converge in probability to the smooth function that they were trained to estimate. A bound on the probabilistic rate of convergence of these network estimates is given, calculated as a function of the sample size n. If the function being estimated has square-integrable mth-order partial derivatives, then the L_2-norm estimation error approaches O_p(n^{-1/2}) for large m. Determining these bounds requires two steps. First, a bound is determined on the rate at which members of a special class of single hidden layer feedforward networks approximate an unknown smooth function. The class of networks considered can embed Fourier series; using this fact together with results on the approximation properties of Fourier series yields a bound on the L_2-norm approximation error. This bound is less than O(q^{-1/2}) for approximating a smooth function by networks with q hidden units. Second, a modification of existing results for bounding estimation error provides a general theorem for calculating estimation-error convergence rates. Combining this theorem with the bound on approximation rates yields the final convergence rates.
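The two bounds sketched in the abstract can be written out side by side. The following is a hedged summary in standard notation, not the paper's own statement: f is the target function, f_q its best approximant by a network with q hidden units, f̂_n the estimate fit to a sample of size n, and r(m) a rate exponent we introduce for illustration.

```latex
% Step 1 (approximation): error of the best network with q hidden
% units, using the fact that this network class can embed Fourier series
\| f - f_q \|_{L_2} = O\!\left(q^{-1/2}\right)

% Step 2 (estimation): error of the network fit to n observations;
% the exponent depends on the smoothness order m of the target and
% approaches the parametric rate 1/2 as m grows
\| f - \hat{f}_n \|_{L_2} = O_p\!\left(n^{-r(m)}\right),
\qquad r(m) \to \tfrac{1}{2} \ \text{as } m \to \infty.
```

Combining the two steps is what yields the final convergence rates reported in the paper.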

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 147-158 |
| Number of pages | 12 |
| Journal | Neural Networks |
| Volume | 7 |
| Issue number | 1 |
| DOIs | https://doi.org/10.1016/0893-6080(94)90063-9 |
| State | Published - 1994 |

### All Science Journal Classification (ASJC) codes

- Cognitive Neuroscience
- Artificial Intelligence

## Cite this

Convergence rates for single hidden layer feedforward networks. *Neural Networks*, *7*(1), 147-158. https://doi.org/10.1016/0893-6080(94)90063-9