Timing is key for robot trust repair

Paul Robinette, Ayanna M. Howard, Alan R. Wagner

Research output: Contribution to journal › Conference article

19 Citations (Scopus)

Abstract

Even the best robots will eventually make a mistake while performing their tasks. In our past experiments, we have found that even one mistake can cause a large loss in trust by human users. In this paper, we evaluate the effects of a robot apologizing for its mistake, promising to do better in the future, and providing additional reasons to trust it in a simulated office evacuation conducted in a virtual environment. In tests with 319 participants, we find that each of these techniques can be successful at repairing trust if they are used when the robot asks the human to trust it again, but are not successful when used immediately after the mistake. The implications of these results are discussed.

Original language: English (US)
Pages (from-to): 574-583
Number of pages: 10
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9388 LNCS
DOI: 10.1007/978-3-319-25554-5_57
State: Published - Jan 1 2015
Event: 7th International Conference on Social Robotics, ICSR 2015 - Paris, France
Duration: Oct 26 2015 - Oct 30 2015

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

@article{dd2e3754df2e4b80b6fb8bfc2e5cc305,
title = "Timing is key for robot trust repair",
abstract = "Even the best robots will eventually make a mistake while performing their tasks. In our past experiments, we have found that even one mistake can cause a large loss in trust by human users. In this paper, we evaluate the effects of a robot apologizing for its mistake, promising to do better in the future, and providing additional reasons to trust it in a simulated office evacuation conducted in a virtual environment. In tests with 319 participants, we find that each of these techniques can be successful at repairing trust if they are used when the robot asks the human to trust it again, but are not successful when used immediately after the mistake. The implications of these results are discussed.",
author = "Paul Robinette and Howard, {Ayanna M.} and Wagner, {Alan R.}",
year = "2015",
month = "1",
day = "1",
doi = "10.1007/978-3-319-25554-5_57",
language = "English (US)",
volume = "9388 LNCS",
pages = "574--583",
journal = "Lecture Notes in Computer Science",
issn = "0302-9743",
publisher = "Springer Verlag",

}

Timing is key for robot trust repair. / Robinette, Paul; Howard, Ayanna M.; Wagner, Alan R.

In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 9388 LNCS, 01.01.2015, p. 574-583.

Research output: Contribution to journal › Conference article

TY - JOUR

T1 - Timing is key for robot trust repair

AU - Robinette, Paul

AU - Howard, Ayanna M.

AU - Wagner, Alan R.

PY - 2015/1/1

Y1 - 2015/1/1

N2 - Even the best robots will eventually make a mistake while performing their tasks. In our past experiments, we have found that even one mistake can cause a large loss in trust by human users. In this paper, we evaluate the effects of a robot apologizing for its mistake, promising to do better in the future, and providing additional reasons to trust it in a simulated office evacuation conducted in a virtual environment. In tests with 319 participants, we find that each of these techniques can be successful at repairing trust if they are used when the robot asks the human to trust it again, but are not successful when used immediately after the mistake. The implications of these results are discussed.

AB - Even the best robots will eventually make a mistake while performing their tasks. In our past experiments, we have found that even one mistake can cause a large loss in trust by human users. In this paper, we evaluate the effects of a robot apologizing for its mistake, promising to do better in the future, and providing additional reasons to trust it in a simulated office evacuation conducted in a virtual environment. In tests with 319 participants, we find that each of these techniques can be successful at repairing trust if they are used when the robot asks the human to trust it again, but are not successful when used immediately after the mistake. The implications of these results are discussed.

UR - http://www.scopus.com/inward/record.url?scp=84983651728&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84983651728&partnerID=8YFLogxK

U2 - 10.1007/978-3-319-25554-5_57

DO - 10.1007/978-3-319-25554-5_57

M3 - Conference article

AN - SCOPUS:84983651728

VL - 9388 LNCS

SP - 574

EP - 583

JO - Lecture Notes in Computer Science

JF - Lecture Notes in Computer Science

SN - 0302-9743

ER -