A self-help guide for autonomous systems

Michael L. Anderson, Scott Fults, Darsana P. Josyula, Tim Oates, Don Perlis, Matt Schmill, Shomir Wilson, Dean Wright

Research output: Contribution to journal › Article

19 Citations (Scopus)

Abstract

Humans learn from their mistakes. When things go badly, we notice that something is amiss, figure out what went wrong and why, and attempt to repair the problem. Artificial systems depend on their human designers to program in responses to every eventuality and therefore typically don't even notice when things go wrong, following their programming over the proverbial, and in some cases literal, cliff. This article describes our past and current work on the metacognitive loop, a domain-general approach to giving artificial systems the ability to notice, assess, and repair problems. The goal is to make artificial systems more robust and less dependent on their human designers.

Original language: English (US)
Pages (from-to): 67-76
Number of pages: 10
Journal: AI Magazine
Volume: 29
Issue number: 2
State: Published - Jun 1 2008


All Science Journal Classification (ASJC) codes

  • Artificial Intelligence

Cite this

Anderson, M. L., Fults, S., Josyula, D. P., Oates, T., Perlis, D., Schmill, M., ... Wright, D. (2008). A self-help guide for autonomous systems. AI Magazine, 29(2), 67-76.
Anderson, Michael L. ; Fults, Scott ; Josyula, Darsana P. ; Oates, Tim ; Perlis, Don ; Schmill, Matt ; Wilson, Shomir ; Wright, Dean. / A self-help guide for autonomous systems. In: AI Magazine. 2008 ; Vol. 29, No. 2. pp. 67-76.
@article{d1063881cdc3414a93b1ec1f6ec67769,
title = "A self-help guide for autonomous systems",
abstract = "Humans learn from their mistakes. When things go badly, we notice that something is amiss, figure out what went wrong and why, and attempt to repair the problem. Artificial systems depend on their human designers to program in responses to every eventuality and therefore typically don't even notice when things go wrong, following their programming over the proverbial, and in some cases literal, cliff. This article describes our past and current work on the metacognitive loop, a domain-general approach to giving artificial systems the ability to notice, assess, and repair problems. The goal is to make artificial systems more robust and less dependent on their human designers.",
author = "Anderson, {Michael L.} and Scott Fults and Josyula, {Darsana P.} and Tim Oates and Don Perlis and Matt Schmill and Shomir Wilson and Dean Wright",
year = "2008",
month = "6",
day = "1",
language = "English (US)",
volume = "29",
pages = "67--76",
journal = "AI Magazine",
issn = "0738-4602",
publisher = "AI Access Foundation",
number = "2",
}

Anderson, ML, Fults, S, Josyula, DP, Oates, T, Perlis, D, Schmill, M, Wilson, S & Wright, D 2008, 'A self-help guide for autonomous systems', AI Magazine, vol. 29, no. 2, pp. 67-76.

A self-help guide for autonomous systems. / Anderson, Michael L.; Fults, Scott; Josyula, Darsana P.; Oates, Tim; Perlis, Don; Schmill, Matt; Wilson, Shomir; Wright, Dean.

In: AI Magazine, Vol. 29, No. 2, 01.06.2008, pp. 67-76.

TY - JOUR

T1 - A self-help guide for autonomous systems

AU - Anderson, Michael L.

AU - Fults, Scott

AU - Josyula, Darsana P.

AU - Oates, Tim

AU - Perlis, Don

AU - Schmill, Matt

AU - Wilson, Shomir

AU - Wright, Dean

PY - 2008/6/1

Y1 - 2008/6/1

N2 - Humans learn from their mistakes. When things go badly, we notice that something is amiss, figure out what went wrong and why, and attempt to repair the problem. Artificial systems depend on their human designers to program in responses to every eventuality and therefore typically don't even notice when things go wrong, following their programming over the proverbial, and in some cases literal, cliff. This article describes our past and current work on the metacognitive loop, a domain-general approach to giving artificial systems the ability to notice, assess, and repair problems. The goal is to make artificial systems more robust and less dependent on their human designers.

AB - Humans learn from their mistakes. When things go badly, we notice that something is amiss, figure out what went wrong and why, and attempt to repair the problem. Artificial systems depend on their human designers to program in responses to every eventuality and therefore typically don't even notice when things go wrong, following their programming over the proverbial, and in some cases literal, cliff. This article describes our past and current work on the metacognitive loop, a domain-general approach to giving artificial systems the ability to notice, assess, and repair problems. The goal is to make artificial systems more robust and less dependent on their human designers.

UR - http://www.scopus.com/inward/record.url?scp=48549104135&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=48549104135&partnerID=8YFLogxK

M3 - Article

AN - SCOPUS:48549104135

VL - 29

SP - 67

EP - 76

JO - AI Magazine

JF - AI Magazine

SN - 0738-4602

IS - 2

ER -

Anderson ML, Fults S, Josyula DP, Oates T, Perlis D, Schmill M et al. A self-help guide for autonomous systems. AI Magazine. 2008 Jun 1;29(2):67-76.