Investigating human-robot trust in emergency scenarios: Methodological lessons learned

Paul Robinette, Alan R. Wagner, Ayanna M. Howard

Research output: Chapter in Book/Report/Conference proceeding › Chapter

6 Citations (Scopus)

Abstract

The word “trust” has many definitions that vary with context and culture, so asking participants whether they trust a robot is not as straightforward as one might think. The perceived risk involved in a scenario and the precise wording of a question can bias the outcome of a study in ways the experimenter did not intend. This chapter presents the lessons we have learned about trust while conducting human-robot experiments with 770 human subjects. We discuss our work developing narratives that describe trust situations, as well as interactive human-robot simulations. These experimental paradigms have guided our research exploring the meaning of trust, trust loss, and trust repair. Using crowdsourcing to locate and manage experiment participants yields considerable diversity of opinion, but it also introduces several methodological considerations that must be addressed. Conclusions drawn from these experiments demonstrate the types of biases to which participants are prone, as well as techniques for mitigating those biases.

Original language: English (US)
Title of host publication: Robust Intelligence and Trust in Autonomous Systems
Publisher: Springer US
Pages: 143-166
Number of pages: 24
ISBN (Electronic): 9781489976680
ISBN (Print): 9781489976666
DOI: 10.1007/978-1-4899-7668-0_8
State: Published - Jan 1 2016

Fingerprint

  • Robots
  • Experiments
  • Repair

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Engineering (all)

Cite this

Robinette, P., Wagner, A. R., & Howard, A. M. (2016). Investigating human-robot trust in emergency scenarios: Methodological lessons learned. In Robust Intelligence and Trust in Autonomous Systems (pp. 143-166). Springer US. https://doi.org/10.1007/978-1-4899-7668-0_8
@inbook{c9ea5478fe8748ba93fe3fd87611056c,
title = "Investigating human-robot trust in emergency scenarios: Methodological lessons learned",
author = "Robinette, {Paul} and Wagner, {Alan R.} and Howard, {Ayanna M.}",
year = "2016",
month = "1",
day = "1",
doi = "10.1007/978-1-4899-7668-0_8",
language = "English (US)",
isbn = "9781489976666",
pages = "143--166",
booktitle = "Robust Intelligence and Trust in Autonomous Systems",
publisher = "Springer US",
address = "United States",

}

Robinette, P, Wagner, AR & Howard, AM 2016, Investigating human-robot trust in emergency scenarios: Methodological lessons learned. in Robust Intelligence and Trust in Autonomous Systems. Springer US, pp. 143-166. https://doi.org/10.1007/978-1-4899-7668-0_8


TY - CHAP

T1 - Investigating human-robot trust in emergency scenarios

T2 - Methodological lessons learned

AU - Robinette, Paul

AU - Wagner, Alan R.

AU - Howard, Ayanna M.

PY - 2016/1/1

Y1 - 2016/1/1

UR - http://www.scopus.com/inward/record.url?scp=84978341900&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84978341900&partnerID=8YFLogxK

U2 - 10.1007/978-1-4899-7668-0_8

DO - 10.1007/978-1-4899-7668-0_8

M3 - Chapter

AN - SCOPUS:84978341900

SN - 9781489976666

SP - 143

EP - 166

BT - Robust Intelligence and Trust in Autonomous Systems

PB - Springer US

ER -

Robinette P, Wagner AR, Howard AM. Investigating human-robot trust in emergency scenarios: Methodological lessons learned. In Robust Intelligence and Trust in Autonomous Systems. Springer US. 2016. p. 143-166 https://doi.org/10.1007/978-1-4899-7668-0_8