Trust it or not: Effects of machine-learning warnings in helping individuals mitigate misinformation

Haeseung Seo, Aiping Xiong, Dongwon Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Despite increased interest in the study of fake news, how to aid users' decisions when handling suspicious or false information is not well understood. To better understand the impact of warnings on individuals' fake news decisions, we conducted two online experiments evaluating the effect of three warnings (one Fact-Checking and two Machine-Learning based) against a control condition. Each experiment consisted of three phases examining participants' recognition, detection, and sharing of fake news, respectively. In Experiment 1, relative to the control condition, participants' detection of both fake and real news improved when the Fact-Checking warning, but not the two Machine-Learning warnings, was presented with fake news. Post-session questionnaire results revealed that participants trusted the Fact-Checking warning more. In Experiment 2, we proposed a Machine-Learning-Graph warning that presents the detailed results of machine-learning based detection, and we removed the source from each news headline to test its impact on individuals' fake news detection with warnings. We did not replicate the effect of the Fact-Checking warning obtained in Experiment 1, but the Machine-Learning-Graph warning increased participants' sensitivity in differentiating fake news from real news. Although the best performance was obtained with the Machine-Learning-Graph warning, participants trusted it less than the Fact-Checking warning. Our results therefore indicate that a transparent machine-learning warning is critical to improving individuals' fake news detection but does not necessarily increase their trust in the model.

Original language: English (US)
Title of host publication: WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science
Publisher: Association for Computing Machinery, Inc
Pages: 265-274
Number of pages: 10
ISBN (Electronic): 9781450362023
DOI: 10.1145/1122445.1122456
State: Published - Jun 26 2019
Event: 11th ACM Conference on Web Science, WebSci 2019 - Boston, United States
Duration: Jun 30 2019 - Jul 3 2019

Publication series

Name: WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science

Conference

Conference: 11th ACM Conference on Web Science, WebSci 2019
Country: United States
City: Boston
Period: 6/30/19 - 7/3/19


All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications

Cite this

Seo, H., Xiong, A., & Lee, D. (2019). Trust it or not: Effects of machine-learning warnings in helping individuals mitigate misinformation. In WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science (pp. 265-274). (WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science). Association for Computing Machinery, Inc. https://doi.org/10.1145/1122445.1122456
Seo, Haeseung ; Xiong, Aiping ; Lee, Dongwon. / Trust it or not : Effects of machine-learning warnings in helping individuals mitigate misinformation. WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science. Association for Computing Machinery, Inc, 2019. pp. 265-274 (WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science).
@inproceedings{0c7c1c083a6846119b1dd74da72f7887,
title = "Trust it or not: Effects of machine-learning warnings in helping individuals mitigate misinformation",
abstract = "Despite increased interests in the study of fake news, how to aid users' decision in handling suspicious or false information has not been well understood. To obtain a better understanding on the impact of warnings on individuals' fake news decisions, we conducted two online experiments, evaluating the effect of three warnings (i.e., one Fact-Checking and two Machine-Learning based) against a control condition, respectively. Each experiment consisted of three phases examining participants' recognition, detection, and sharing of fake news, respectively. In Experiment 1, relative to the control condition, participants' detection of both fake and real news was better when the Fact-Checking warning but not the two Machine-Learning warnings were presented with fake news. Postsession questionnaire results revealed that participants showed more trust for the Fact-Checking warning. In Experiment 2, we proposed a Machine-Learning-Graph warning that contains the detailed results of machine-learning based detection and removed the source within each news headline to test its impact on individuals' fake news detection with warnings. We did not replicate the effect of the Fact-Checking warning obtained in Experiment 1, but the Machine-Learning-Graph warning increased participants' sensitivity in differentiating fake news from real ones. Although the best performance was obtained with the Machine-Learning-Graph warning, participants trusted it less than the Fact-Checking warning. Therefore, our study results indicate that a transparent machine learning warning is critical to improving individuals' fake news detection but not necessarily increase their trust on the model.",
author = "Haeseung Seo and Aiping Xiong and Dongwon Lee",
year = "2019",
month = "6",
day = "26",
doi = "10.1145/1122445.1122456",
language = "English (US)",
series = "WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science",
publisher = "Association for Computing Machinery, Inc",
pages = "265--274",
booktitle = "WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science",

}
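The BibTeX record above can be checked mechanically. The sketch below is a minimal, hypothetical example using only the Python standard library to pull the quoted fields out of a single `@inproceedings` entry like this one; the embedded entry string is abbreviated to a few fields, and a real tool would use a full BibTeX parser rather than a regular expression.

```python
import re

# Abbreviated copy of the @inproceedings entry from this record.
BIBTEX_ENTRY = r"""@inproceedings{0c7c1c083a6846119b1dd74da72f7887,
title = "Trust it or not: Effects of machine-learning warnings in helping individuals mitigate misinformation",
author = "Haeseung Seo and Aiping Xiong and Dongwon Lee",
year = "2019",
doi = "10.1145/1122445.1122456",
pages = "265--274",
booktitle = "WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science",
}"""

def parse_bibtex_fields(entry: str) -> dict:
    """Extract `field = "value"` pairs from one BibTeX entry.

    Handles only double-quoted, single-line values, which is all this
    record uses; brace-delimited or nested values would need a real parser.
    """
    fields = dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', entry))
    # Entry type and citation key come from the opening line.
    head = re.match(r'@(\w+)\{([^,]+),', entry)
    if head:
        fields["_type"], fields["_key"] = head.groups()
    return fields

fields = parse_bibtex_fields(BIBTEX_ENTRY)
print(fields["year"], fields["pages"])  # 2019 265--274
```

The same pattern works for the author and DOI fields; only the opening `@type{key,` line needs separate handling because it is not a quoted field.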

Seo, H, Xiong, A & Lee, D 2019, Trust it or not: Effects of machine-learning warnings in helping individuals mitigate misinformation. in WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science. WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science, Association for Computing Machinery, Inc, pp. 265-274, 11th ACM Conference on Web Science, WebSci 2019, Boston, United States, 6/30/19. https://doi.org/10.1145/1122445.1122456

Trust it or not : Effects of machine-learning warnings in helping individuals mitigate misinformation. / Seo, Haeseung; Xiong, Aiping; Lee, Dongwon.

WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science. Association for Computing Machinery, Inc, 2019. p. 265-274 (WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Trust it or not

T2 - Effects of machine-learning warnings in helping individuals mitigate misinformation

AU - Seo, Haeseung

AU - Xiong, Aiping

AU - Lee, Dongwon

PY - 2019/6/26

Y1 - 2019/6/26

N2 - Despite increased interests in the study of fake news, how to aid users' decision in handling suspicious or false information has not been well understood. To obtain a better understanding on the impact of warnings on individuals' fake news decisions, we conducted two online experiments, evaluating the effect of three warnings (i.e., one Fact-Checking and two Machine-Learning based) against a control condition, respectively. Each experiment consisted of three phases examining participants' recognition, detection, and sharing of fake news, respectively. In Experiment 1, relative to the control condition, participants' detection of both fake and real news was better when the Fact-Checking warning but not the two Machine-Learning warnings were presented with fake news. Postsession questionnaire results revealed that participants showed more trust for the Fact-Checking warning. In Experiment 2, we proposed a Machine-Learning-Graph warning that contains the detailed results of machine-learning based detection and removed the source within each news headline to test its impact on individuals' fake news detection with warnings. We did not replicate the effect of the Fact-Checking warning obtained in Experiment 1, but the Machine-Learning-Graph warning increased participants' sensitivity in differentiating fake news from real ones. Although the best performance was obtained with the Machine-Learning-Graph warning, participants trusted it less than the Fact-Checking warning. Therefore, our study results indicate that a transparent machine learning warning is critical to improving individuals' fake news detection but not necessarily increase their trust on the model.

AB - Despite increased interests in the study of fake news, how to aid users' decision in handling suspicious or false information has not been well understood. To obtain a better understanding on the impact of warnings on individuals' fake news decisions, we conducted two online experiments, evaluating the effect of three warnings (i.e., one Fact-Checking and two Machine-Learning based) against a control condition, respectively. Each experiment consisted of three phases examining participants' recognition, detection, and sharing of fake news, respectively. In Experiment 1, relative to the control condition, participants' detection of both fake and real news was better when the Fact-Checking warning but not the two Machine-Learning warnings were presented with fake news. Postsession questionnaire results revealed that participants showed more trust for the Fact-Checking warning. In Experiment 2, we proposed a Machine-Learning-Graph warning that contains the detailed results of machine-learning based detection and removed the source within each news headline to test its impact on individuals' fake news detection with warnings. We did not replicate the effect of the Fact-Checking warning obtained in Experiment 1, but the Machine-Learning-Graph warning increased participants' sensitivity in differentiating fake news from real ones. Although the best performance was obtained with the Machine-Learning-Graph warning, participants trusted it less than the Fact-Checking warning. Therefore, our study results indicate that a transparent machine learning warning is critical to improving individuals' fake news detection but not necessarily increase their trust on the model.

UR - http://www.scopus.com/inward/record.url?scp=85069450320&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85069450320&partnerID=8YFLogxK

U2 - 10.1145/1122445.1122456

DO - 10.1145/1122445.1122456

M3 - Conference contribution

AN - SCOPUS:85069450320

T3 - WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science

SP - 265

EP - 274

BT - WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science

PB - Association for Computing Machinery, Inc

ER -

Seo H, Xiong A, Lee D. Trust it or not: Effects of machine-learning warnings in helping individuals mitigate misinformation. In WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science. Association for Computing Machinery, Inc. 2019. p. 265-274. (WebSci 2019 - Proceedings of the 11th ACM Conference on Web Science). https://doi.org/10.1145/1122445.1122456