Abstract
The word “trust” has many definitions that vary with context and culture, so asking participants whether they trust a robot is not as straightforward as one might think. The perceived risk involved in a scenario and the precise wording of a question can bias the outcome of a study in ways the experimenter did not intend. This chapter presents the lessons we have learned about trust while conducting human-robot experiments with 770 human subjects. We discuss our work developing narratives that describe trust situations as well as interactive human-robot simulations. These experimental paradigms have guided our research exploring the meaning of trust, trust loss, and trust repair. Using crowdsourcing to recruit and manage experiment participants yields considerable diversity of opinion, although it also introduces several considerations that must be addressed. Conclusions drawn from these experiments demonstrate the types of biases that participants are prone to, as well as techniques for mitigating those biases.
| Original language | English (US) |
|---|---|
| Title of host publication | Robust Intelligence and Trust in Autonomous Systems |
| Publisher | Springer US |
| Pages | 143-166 |
| Number of pages | 24 |
| ISBN (Electronic) | 9781489976680 |
| ISBN (Print) | 9781489976666 |
| DOIs | |
| State | Published - Jan 1 2016 |
All Science Journal Classification (ASJC) codes
- Computer Science (all)
- Engineering (all)