By allowing the training set to become arbitrarily large, appropriately trained and configured single hidden layer feedforward networks converge in probability to the smooth function that they were trained to estimate. A bound on the probabilistic rate of convergence of these network estimates is given. The convergence rate is calculated as a function of the sample size n. If the function being estimated has square-integrable mth-order partial derivatives, then the L_2-norm estimation error approaches O_p(n^{-1/2}) for large m. Two steps are required to determine these bounds. First, a bound is determined on the rate of convergence of approximations to an unknown smooth function by members of a special class of single hidden layer feedforward networks, a class that can embed Fourier series. Using this fact together with results on the approximation properties of Fourier series yields a bound on the L_2-norm approximation error; this bound is less than O(q^{-1/2}) for approximating a smooth function by networks with q hidden units. Second, a modification of existing results for bounding estimation error provides a general theorem for calculating estimation error convergence rates. Combining this theorem with the bound on approximation rates yields the final convergence rates.
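The two rates stated above can be written in display form. The notation here is an assumed paraphrase of the abstract, not taken from the paper itself: f denotes the target function with square-integrable partial derivatives up to order m, f_q a single hidden layer network with q hidden units, and \hat{f}_n the network estimate fit to n training samples.

```latex
% Approximation step: best network with q hidden units
\inf_{f_q} \bigl\| f - f_q \bigr\|_{L_2} < O\!\left(q^{-1/2}\right)

% Estimation step: behavior of the fitted network as m grows large
\bigl\| \hat{f}_n - f \bigr\|_{L_2} = O_p\!\left(n^{-1/2}\right)
\quad \text{as } m \to \infty
```

The first bound controls how well the network class can represent f at all; the second combines it with an estimation-error theorem to give the sample-size rate.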