Investigating human-robot trust in emergency scenarios: Methodological lessons learned

Paul Robinette, Alan R. Wagner, Ayanna M. Howard

Research output: Chapter in Book/Report/Conference proceeding › Chapter


Abstract

The word “trust” has many definitions that vary with context and culture, so asking participants whether they trust a robot is not as straightforward as one might think. The perceived risk in a scenario and the precise wording of a question can bias the outcome of a study in ways the experimenter did not intend. This chapter presents the lessons we have learned about trust while conducting human-robot experiments with 770 human subjects. We discuss our work developing narratives that describe trust situations as well as interactive human-robot simulations. These experimental paradigms have guided our research exploring the meaning of trust, trust loss, and trust repair. Using crowdsourcing to recruit and manage experiment participants yields considerable diversity of opinion, but it also introduces several considerations that must be accounted for. Conclusions drawn from these experiments demonstrate the types of biases participants are prone to as well as techniques for mitigating those biases.

Original language: English (US)
Title of host publication: Robust Intelligence and Trust in Autonomous Systems
Publisher: Springer US
Pages: 143-166
Number of pages: 24
ISBN (Electronic): 9781489976680
ISBN (Print): 9781489976666
DOIs
State: Published - Jan 1 2016

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Engineering (all)

