An explanation is not an excuse: Trust calibration in an age of transparent robots

Alan R. Wagner, Paul Robinette

Research output: Chapter in Book/Report/Conference proceeding › Chapter


Abstract

Some view transparency as a cure for the challenge of human-robot trust calibration. This point of view considers a person’s trust in a robot to be little more than a reflection of the robot’s performance; a transparent robot capable of explaining its behavior should therefore produce correct trust calibration. This chapter argues that this simple calculus ignores critical determinants of trust such as individual differences (human and robot), social and contextual factors, and, most importantly, human psychology itself. We examine how these factors influence the success of an explanation and begin to outline a program of research by which an autonomous robot might tailor its explanation to its audience. Moreover, we consider the impact that human cognitive laziness in real-world environments will have on the tendency to trust a robot, and the ethical ramifications of creating robots that mold their explanations to the person.

Original language: English (US)
Title of host publication: Trust in Human-Robot Interaction
Publisher: Elsevier
Pages: 197-208
Number of pages: 12
ISBN (Electronic): 9780128194720
State: Published - Jan 1 2020

All Science Journal Classification (ASJC) codes

  • Psychology (all)
