9 Citations (Scopus)

Abstract

This paper presents a theoretical approach to determine the probability of misclassification of the multilayer perceptron (MLP) neural model, subject to weight errors. The types of applications considered are classification/recognition tasks involving binary input-output mappings. The analytical models are validated via simulation of a small illustrative example. The theoretical results, in agreement with simulation results, show that, for the example considered, Gaussian weight errors of standard deviation up to 22% of the weight value can be tolerated. The theoretical method developed here adds predictability to the fault tolerance capability of neural nets and shows that this capability is heavily dependent on the problem data.
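The evaluation idea described in the abstract, injecting Gaussian errors into trained weights and measuring the resulting misclassification probability, can be sketched with a Monte Carlo simulation. The sketch below is illustrative only and is not the paper's analytical model or its example: it uses a hand-set two-layer MLP that computes XOR (a hypothetical stand-in for a binary input-output mapping) and perturbs each weight with Gaussian noise whose standard deviation is a fraction of that weight's magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-set MLP weights that solve XOR: the two hidden units
# approximate OR and NAND, and the output unit ANDs them.
W1 = np.array([[20.0, 20.0], [-20.0, -20.0]])
b1 = np.array([-10.0, 30.0])
W2 = np.array([20.0, 20.0])
b2 = -30.0

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR targets

def predict(W1, b1, W2, b2):
    h = sigmoid(X @ W1.T + b1)
    out = sigmoid(h @ W2 + b2)
    return (out > 0.5).astype(int)

def misclassification_prob(rel_std, trials=2000):
    """Monte Carlo estimate of P(any pattern misclassified) under
    Gaussian weight errors with std = rel_std * |weight|."""
    errors = 0
    for _ in range(trials):
        nW1 = W1 + rng.normal(0.0, rel_std * np.abs(W1))
        nb1 = b1 + rng.normal(0.0, rel_std * np.abs(b1))
        nW2 = W2 + rng.normal(0.0, rel_std * np.abs(W2))
        nb2 = b2 + rng.normal(0.0, rel_std * np.abs(b2))
        if np.any(predict(nW1, nb1, nW2, nb2) != y):
            errors += 1
    return errors / trials

for rel in (0.10, 0.22, 0.50):
    print(f"rel. std {rel:.2f}: P(misclassify) ~ {misclassification_prob(rel):.3f}")
```

Because the fault-free weights sit deep in the sigmoid's saturated regions, moderate relative noise rarely flips a decision, while large noise does, which mirrors the paper's point that the tolerable error level depends on the problem data and the weight configuration.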

Original language: English (US)
Pages (from-to): 201-205
Number of pages: 5
Journal: IEEE Transactions on Neural Networks
Volume: 7
Issue number: 1
DOIs: 10.1109/72.478405
State: Published - Dec 1 1996

Fingerprint

  • Neural Networks (Computer)
  • Statistical Models
  • Multilayer neural networks
  • Fault tolerance
  • Weights and Measures
  • Analytical models
  • Neural networks

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Cite this

@article{8d4f32150ca34c368eb85d9a184be80f,
title = "A probabilistic model for the fault tolerance of multilayer perceptrons",
abstract = "This paper presents a theoretical approach to determine the probability of misclassification of the multilayer perceptron (MLP) neural model, subject to weight errors. The types of applications considered are classification/recognition tasks involving binary input-output mappings. The analytical models are validated via simulation of a small illustrative example. The theoretical results, in agreement with simulation results, show that, for the example considered, Gaussian weight errors of standard deviation up to 22{\%} of the weight value can be tolerated. The theoretical method developed here adds predictability to the fault tolerance capability of neural nets and shows that this capability is heavily dependent on the problem data.",
author = "Merchawi, {N. S.} and Tirupatikumara, {Soundar Rajan} and Chitaranjan Das",
year = "1996",
month = "12",
day = "1",
doi = "10.1109/72.478405",
language = "English (US)",
volume = "7",
pages = "201--205",
journal = "IEEE Transactions on Neural Networks",
issn = "1045-9227",
publisher = "IEEE Computational Intelligence Society",
number = "1",

}

A probabilistic model for the fault tolerance of multilayer perceptrons. / Merchawi, N. S.; Tirupatikumara, Soundar Rajan; Das, Chitaranjan.

In: IEEE Transactions on Neural Networks, Vol. 7, No. 1, 01.12.1996, p. 201-205.

Research output: Contribution to journal › Article

TY - JOUR

T1 - A probabilistic model for the fault tolerance of multilayer perceptrons

AU - Merchawi, N. S.

AU - Tirupatikumara, Soundar Rajan

AU - Das, Chitaranjan

PY - 1996/12/1

Y1 - 1996/12/1

N2 - This paper presents a theoretical approach to determine the probability of misclassification of the multilayer perceptron (MLP) neural model, subject to weight errors. The types of applications considered are classification/recognition tasks involving binary input-output mappings. The analytical models are validated via simulation of a small illustrative example. The theoretical results, in agreement with simulation results, show that, for the example considered, Gaussian weight errors of standard deviation up to 22% of the weight value can be tolerated. The theoretical method developed here adds predictability to the fault tolerance capability of neural nets and shows that this capability is heavily dependent on the problem data.

AB - This paper presents a theoretical approach to determine the probability of misclassification of the multilayer perceptron (MLP) neural model, subject to weight errors. The types of applications considered are classification/recognition tasks involving binary input-output mappings. The analytical models are validated via simulation of a small illustrative example. The theoretical results, in agreement with simulation results, show that, for the example considered, Gaussian weight errors of standard deviation up to 22% of the weight value can be tolerated. The theoretical method developed here adds predictability to the fault tolerance capability of neural nets and shows that this capability is heavily dependent on the problem data.

UR - http://www.scopus.com/inward/record.url?scp=0029733381&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0029733381&partnerID=8YFLogxK

U2 - 10.1109/72.478405

DO - 10.1109/72.478405

M3 - Article

VL - 7

SP - 201

EP - 205

JO - IEEE Transactions on Neural Networks

JF - IEEE Transactions on Neural Networks

SN - 1045-9227

IS - 1

ER -