Adaptive learning rate selection for backpropagation networks

Jayathi Janakiraman, Vasant Honavar

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

In this paper we propose and evaluate a heuristically motivated method for adaptive modification of the learning rate in backpropagation (perhaps the most widely used neural network learning algorithm) that does not require the estimation of higher-order derivatives. We present a modified version of the backpropagation learning algorithm that uses a simple heuristic to select a learning rate value at each epoch. We report numerous simulations on real-world data sets using our modified algorithm, and compare the results with those obtained with the standard backpropagation learning algorithm as well as with various modifications of it (e.g., flat-spot elimination methods) that have been discussed in the literature. Our simulation results suggest that the adaptive learning rate modification substantially speeds up the convergence of the backpropagation algorithm. Furthermore, it makes the initial choice of the learning rate fairly unimportant, since our method allows the learning rate to change and settle at a reasonable value for the specific problem.
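
The page does not reproduce the authors' heuristic, so the sketch below is illustrative only: it implements the classic "bold driver" rule (grow the learning rate slightly after an epoch that lowers the error; halve it and roll back the step after an epoch that raises it), which belongs to the same family of first-order, derivative-free learning-rate schedules the abstract describes. The toy network, dataset, and constants (1.05, 0.5) are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch of an epoch-level adaptive learning rate
# ("bold driver" rule), NOT the authors' published algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (XOR) and a single-hidden-layer sigmoid network,
# trained by plain batch backpropagation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights

def forward(X, W1, W2):
    H = 1.0 / (1.0 + np.exp(-X @ W1))     # hidden activations
    Y = 1.0 / (1.0 + np.exp(-H @ W2))     # output activations
    return H, Y

eta = 0.5          # initial learning rate; the heuristic makes this choice uncritical
prev_error = np.inf
W1_best, W2_best = W1.copy(), W2.copy()

for epoch in range(5000):
    H, Y = forward(X, W1, W2)
    error = 0.5 * np.sum((Y - T) ** 2)    # sum-of-squares error for the epoch

    if error <= prev_error:
        # Successful epoch: keep the weights and grow eta gently.
        eta *= 1.05
        W1_best, W2_best = W1.copy(), W2.copy()
        prev_error = error
    else:
        # Unsuccessful epoch: undo the last step and cut eta sharply.
        eta *= 0.5
        W1, W2 = W1_best.copy(), W2_best.copy()
        H, Y = forward(X, W1, W2)

    # Standard first-order backprop gradients (no second derivatives needed).
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W1 = W1 - eta * (X.T @ dH)
    W2 = W2 - eta * (H.T @ dY)
```

Note how the rollback step makes the initial eta largely irrelevant: an overly aggressive starting value is quickly halved back to something workable, which is the qualitative behavior the abstract reports for the authors' method.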

Original language: English (US)
Pages (from-to): 89-95
Number of pages: 7
Journal: Microcomputer Applications
Volume: 18
Issue number: 3
State: Published - 1999

All Science Journal Classification (ASJC) codes

  • Computer Science (all)

Cite this

@article{75d8eed635c245268fe536c002e25a43,
title = "Adaptive learning rate selection for backpropagation networks",
abstract = "In this paper we propose and evaluate a heuristically-motivated method for adaptive modification of the learning rate in backpropagation (which is perhaps the most widely used neural network learning algorithm) that does not require the estimation of higher-order derivatives. We present a modified version of the backpropagation learning algorithm which uses a simple heuristic to come up with a learning parameter value at each epoch. We present numerous simulations on real-world data sets using our modified algorithm. We compare these results with results obtained with a standard back-propagation learning algorithm, and also various modifications of the standard backpropagation algorithm (e.g., flat-spot elimination methods) that have been discussed in the literature. Our simulation results suggest that the adaptive learning rate modification helps substantially to speed up the convergence of the backpropagation algorithm. Furthermore, it makes the initial choice of the learning rate fairly unimportant as our method allows the learning rate to change and settle at a reasonable value for the specific problem.",
author = "Jayathi Janakiraman and Vasant Honavar",
year = "1999",
language = "English (US)",
volume = "18",
pages = "89--95",
journal = "Microcomputer Applications",
issn = "0820-0750",
publisher = "International Society for Mini and Microcomputers",
number = "3",

}
