Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Chatbots are replacing human agents in a number of domains, from online tutoring to customer service to even cognitive therapy. But they are often machine-like in their interactions. What can we do to humanize chatbots? Should they necessarily be driven by human operators for them to be considered human? Or, will an anthropomorphic visual cue on the interface and/or a high level of contingent message exchanges provide humanness to automated chatbots? We explored these questions with a 2 (anthropomorphic visual cues: high vs. low anthropomorphism) × 2 (message interactivity: high vs. low message interactivity) × 2 (identity cue: chatbot vs. human) between-subjects experiment (N = 141) in which participants interacted with a chat agent on an e-commerce site about choosing a digital camera to purchase. Our findings show that a high level of message interactivity compensates for the impersonal nature of a chatbot that is low on anthropomorphic visual cues. Moreover, identifying the agent as human raises user expectations for interactivity. Theoretical as well as practical implications of these findings are discussed.
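As a minimal sketch of the study design only (the factor names below are placeholders drawn from the abstract, not the authors' actual stimuli, measures, or code), crossing the three manipulated factors yields the eight conditions of the 2 × 2 × 2 between-subjects design described above:

```python
# Sketch: enumerate the eight cells of the 2 x 2 x 2 between-subjects design
# described in the abstract. Factor names/levels are taken from the abstract;
# everything else here is illustrative, not the authors' materials.
from itertools import product

factors = {
    "anthropomorphic_visual_cue": ["high", "low"],
    "message_interactivity": ["high", "low"],
    "identity_cue": ["chatbot", "human"],
}

# Each participant (N = 141 in the study) would be assigned to one of these cells.
for i, levels in enumerate(product(*factors.values()), start=1):
    condition = dict(zip(factors.keys(), levels))
    print(f"Condition {i}: {condition}")
```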

Original language: English (US)
Pages (from-to): 304-316
Number of pages: 13
Journal: Computers in Human Behavior
Volume: 97
DOIs: https://doi.org/10.1016/j.chb.2019.01.020
State: Published - Aug 1 2019

Fingerprint

  • Cues
  • Digital cameras
  • Cognitive Therapy
  • Experiments
  • Interactivity

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Human-Computer Interaction
  • Psychology (all)

Cite this

@article{29a5bb95a1e04b69b2ff2f84b0f8b6bf,
  title     = "Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions",
  author    = "Eun Go and Sundar, {S. Shyam}",
  journal   = "Computers in Human Behavior",
  volume    = "97",
  pages     = "304--316",
  year      = "2019",
  month     = "8",
  day       = "1",
  doi       = "10.1016/j.chb.2019.01.020",
  issn      = "0747-5632",
  publisher = "Elsevier Limited",
  language  = "English (US)",
}

Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. / Go, Eun; Sundar, S. Shyam.

In: Computers in Human Behavior, Vol. 97, 01.08.2019, p. 304-316.

