Convergence rates for single hidden layer feedforward networks

Daniel F. McCaffrey, Andrew Ronald Gallant

Research output: Contribution to journal › Article

23 Citations (Scopus)

Abstract

By allowing the training set to become arbitrarily large, appropriately trained and configured single hidden layer feedforward networks converge in probability to the smooth function that they were trained to estimate. A bound on the probabilistic rate of convergence of these network estimates is given. The convergence rate is calculated as a function of the sample size n. If the function being estimated has square integrable mth order partial derivatives, then the L2-norm estimation error approaches Op(n^(-1/2)) for large m. Two steps are required to determine these bounds. First, a bound on the rate of convergence of approximations to an unknown smooth function by members of a special class of single hidden layer feedforward networks is determined. The class of networks considered can embed Fourier series. Using this fact and results on the approximation properties of Fourier series yields a bound on the L2-norm approximation error; this bound is less than O(q^(-1/2)) for approximating a smooth function by networks with q hidden units. Second, a modification of existing results for bounding estimation error provides a general theorem for calculating estimation-error convergence rates. Combining this result with the bound on approximation rates yields the final convergence rates.
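The two-step argument sketched in the abstract has the shape of a standard error decomposition. The display below is illustrative only; the symbols f (target function), f_q (best q-hidden-unit network approximant), and f̂_n (the trained estimate on n samples) are introduced here for exposition and are not notation from the paper:

```latex
% Total L2 error splits into an approximation part and an estimation part
% (triangle inequality); the abstract's two bounds control the two terms.
\[
\underbrace{\lVert \hat f_n - f \rVert_{2}}_{\text{total error}}
\;\le\;
\underbrace{\lVert f_q - f \rVert_{2}}_{\text{approximation, } <\, O(q^{-1/2})}
\;+\;
\underbrace{\lVert \hat f_n - f_q \rVert_{2}}_{\text{estimation}}
\]
```

Letting the number of hidden units q grow with the sample size n balances the two terms, which is how the combined rate approaching Op(n^(-1/2)) for highly smooth targets (large m) arises.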

Original language: English (US)
Pages (from-to): 147-158
Number of pages: 12
Journal: Neural Networks
Volume: 7
Issue number: 1
DOI: 10.1016/0893-6080(94)90063-9
State: Published - Jan 1 1994


All Science Journal Classification (ASJC) codes

  • Cognitive Neuroscience
  • Artificial Intelligence

Cite this

@article{c328bc76300442bb9ac3cb86a8889c34,
title = "Convergence rates for single hidden layer feedforward networks",
abstract = "By allowing the training set to become arbitrarily large, appropriately trained and configured single hidden layer feedforward networks converge in probability to the smooth function that they were trained to estimate. A bound on the probabilistic rate of convergence of these network estimates is given. The convergence rate is calculated as a function of the sample size n. If the function being estimated has square integrable mth order partial derivatives, then the L2-norm estimation error approaches Op(n^(-1/2)) for large m. Two steps are required to determine these bounds. First, a bound on the rate of convergence of approximations to an unknown smooth function by members of a special class of single hidden layer feedforward networks is determined. The class of networks considered can embed Fourier series. Using this fact and results on the approximation properties of Fourier series yields a bound on the L2-norm approximation error; this bound is less than O(q^(-1/2)) for approximating a smooth function by networks with q hidden units. Second, a modification of existing results for bounding estimation error provides a general theorem for calculating estimation-error convergence rates. Combining this result with the bound on approximation rates yields the final convergence rates.",
author = "McCaffrey, {Daniel F.} and Gallant, {Andrew Ronald}",
year = "1994",
month = "1",
day = "1",
doi = "10.1016/0893-6080(94)90063-9",
language = "English (US)",
volume = "7",
pages = "147--158",
journal = "Neural Networks",
issn = "0893-6080",
publisher = "Elsevier Limited",
number = "1",
}

Convergence rates for single hidden layer feedforward networks. / McCaffrey, Daniel F.; Gallant, Andrew Ronald.

In: Neural Networks, Vol. 7, No. 1, 01.01.1994, p. 147-158.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Convergence rates for single hidden layer feedforward networks

AU - McCaffrey, Daniel F.

AU - Gallant, Andrew Ronald

PY - 1994/1/1

Y1 - 1994/1/1

N2 - By allowing the training set to become arbitrarily large, appropriately trained and configured single hidden layer feedforward networks converge in probability to the smooth function that they were trained to estimate. A bound on the probabilistic rate of convergence of these network estimates is given. The convergence rate is calculated as a function of the sample size n. If the function being estimated has square integrable mth order partial derivatives, then the L2-norm estimation error approaches Op(n^(-1/2)) for large m. Two steps are required to determine these bounds. First, a bound on the rate of convergence of approximations to an unknown smooth function by members of a special class of single hidden layer feedforward networks is determined. The class of networks considered can embed Fourier series. Using this fact and results on the approximation properties of Fourier series yields a bound on the L2-norm approximation error; this bound is less than O(q^(-1/2)) for approximating a smooth function by networks with q hidden units. Second, a modification of existing results for bounding estimation error provides a general theorem for calculating estimation-error convergence rates. Combining this result with the bound on approximation rates yields the final convergence rates.

AB - By allowing the training set to become arbitrarily large, appropriately trained and configured single hidden layer feedforward networks converge in probability to the smooth function that they were trained to estimate. A bound on the probabilistic rate of convergence of these network estimates is given. The convergence rate is calculated as a function of the sample size n. If the function being estimated has square integrable mth order partial derivatives, then the L2-norm estimation error approaches Op(n^(-1/2)) for large m. Two steps are required to determine these bounds. First, a bound on the rate of convergence of approximations to an unknown smooth function by members of a special class of single hidden layer feedforward networks is determined. The class of networks considered can embed Fourier series. Using this fact and results on the approximation properties of Fourier series yields a bound on the L2-norm approximation error; this bound is less than O(q^(-1/2)) for approximating a smooth function by networks with q hidden units. Second, a modification of existing results for bounding estimation error provides a general theorem for calculating estimation-error convergence rates. Combining this result with the bound on approximation rates yields the final convergence rates.

UR - http://www.scopus.com/inward/record.url?scp=0028320833&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0028320833&partnerID=8YFLogxK

U2 - 10.1016/0893-6080(94)90063-9

DO - 10.1016/0893-6080(94)90063-9

M3 - Article

AN - SCOPUS:0028320833

VL - 7

SP - 147

EP - 158

JO - Neural Networks

JF - Neural Networks

SN - 0893-6080

IS - 1

ER -