Abstract

This paper presents a theoretical approach to determining the probability of misclassification of the multilayer perceptron (MLP) neural model subject to weight errors. The applications considered are classification/recognition tasks involving binary input-output mappings. The analytical models are validated via simulation of a small illustrative example. The theoretical results, in agreement with simulation results, show that, for the example considered, Gaussian weight errors with a standard deviation of up to 22% of the weight value can be tolerated. The theoretical method developed here adds predictability to the fault tolerance capability of neural nets and shows that this capability is heavily dependent on the problem data.
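The kind of simulation the abstract describes can be illustrated with a minimal sketch: a small MLP with fixed weights solving a binary mapping (here XOR, a hypothetical stand-in for the paper's example, not the network actually studied), each weight perturbed by zero-mean Gaussian noise whose standard deviation is a chosen fraction of the weight's magnitude, and the misclassification probability estimated by Monte Carlo trials.

```python
import random

def step(z):
    # Hard-threshold activation used by the hand-set network below.
    return 1 if z > 0 else 0

# Hand-set weights for a 2-2-1 MLP computing XOR (illustrative only).
# Each entry is [w1, w2, bias].
WEIGHTS = {
    "h1": [1.0, 1.0, -0.5],    # OR-like hidden unit
    "h2": [1.0, 1.0, -1.5],    # AND-like hidden unit
    "out": [1.0, -1.0, -0.5],  # output fires for h1 AND NOT h2
}

def forward(w, x1, x2):
    h1 = step(w["h1"][0] * x1 + w["h1"][1] * x2 + w["h1"][2])
    h2 = step(w["h2"][0] * x1 + w["h2"][1] * x2 + w["h2"][2])
    return step(w["out"][0] * h1 + w["out"][1] * h2 + w["out"][2])

def perturb(w, sigma_frac, rng):
    # Add zero-mean Gaussian noise to every weight; the noise standard
    # deviation is sigma_frac times that weight's magnitude.
    return {k: [wi + rng.gauss(0.0, sigma_frac * abs(wi)) for wi in v]
            for k, v in w.items()}

def misclassification_rate(sigma_frac, trials=2000, seed=0):
    # Monte Carlo estimate of the probability of misclassification
    # over all four XOR input patterns.
    rng = random.Random(seed)
    xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    errors = 0
    for _ in range(trials):
        w = perturb(WEIGHTS, sigma_frac, rng)
        for (x1, x2), target in xor.items():
            if forward(w, x1, x2) != target:
                errors += 1
    return errors / (trials * len(xor))
```

With no noise the rate is exactly zero, and it grows as the relative noise level increases, which is the qualitative behavior behind a tolerance threshold such as the 22% figure reported for the paper's example.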

Original language: English (US)
Pages (from-to): 201-205
Number of pages: 5
Journal: IEEE Transactions on Neural Networks
Volume: 7
Issue number: 1
DOIs
State: Published - Dec 1 1996

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Research topics: 'A probabilistic model for the fault tolerance of multilayer perceptrons'