Convergence rates for single hidden layer feedforward networks

Daniel F. McCaffrey, A. Ronald Gallant

Research output: Contribution to journal › Article › peer-review

31 Scopus citations


By allowing the training set to become arbitrarily large, appropriately trained and configured single hidden layer feedforward networks converge in probability to the smooth function that they were trained to estimate. A bound on the probabilistic rate of convergence of these network estimates is given. The convergence rate is calculated as a function of the sample size n. If the function being estimated has square integrable mth order partial derivatives, then the L2-norm estimation error approaches O_p(n^{-1/2}) for large m. Two steps are required for determining these bounds. A bound on the rate of convergence of approximations to an unknown smooth function by members of a special class of single hidden layer feedforward networks is determined. The class of networks considered can embed Fourier series. Using this fact and results on approximation properties of Fourier series yields a bound on L2-norm approximation error. This bound is less than O(q^{-1/2}) for approximating a smooth function by networks with q hidden units. A modification of existing results for bounding estimation error provides a general theorem for calculating estimation error convergence rates. Combining this result with the bound on approximation rates yields the final convergence rates.
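The approximation step above rests on the fact that the networks considered can embed Fourier series, so their L2 approximation error inherits the decay of truncated Fourier expansions. A minimal numerical sketch (not code from the paper; the target function, grid size, and truncation levels are illustrative assumptions) shows this decay for a smooth periodic function:

```python
import numpy as np

def fourier_partial_sum(f_vals, x, q):
    """Truncated Fourier series with frequencies 1..q (plus the mean term),
    with coefficients computed by averaging over a uniform grid on [0, 2*pi).
    Illustrative stand-in for a q-hidden-unit network that embeds this series."""
    approx = np.full_like(x, f_vals.mean())  # a_0 / 2 term
    for k in range(1, q + 1):
        a_k = 2.0 * np.mean(f_vals * np.cos(k * x))
        b_k = 2.0 * np.mean(f_vals * np.sin(k * x))
        approx += a_k * np.cos(k * x) + b_k * np.sin(k * x)
    return approx

# Smooth periodic target (assumed example, not from the paper).
x = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f = np.exp(np.sin(x))

# Discrete L2 approximation error for increasing truncation level q;
# the error shrinks as q grows, consistent with a bound below O(q^{-1/2}).
errors = {}
for q in (1, 2, 4, 8):
    resid = f - fourier_partial_sum(f, x, q)
    errors[q] = np.sqrt(np.mean(resid ** 2))
```

For a function this smooth the Fourier coefficients decay rapidly, so the observed errors fall much faster than the worst-case O(q^{-1/2}) bound; the bound is what holds uniformly over the smoothness class considered in the paper.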

Original language: English (US)
Pages (from-to): 147-158
Number of pages: 12
Journal: Neural Networks
Issue number: 1
State: Published - 1994

All Science Journal Classification (ASJC) codes

  • Cognitive Neuroscience
  • Artificial Intelligence
