TY - GEN
T1 - Explanations as mechanisms for supporting algorithmic transparency
AU - Rader, Emilee
AU - Cotter, Kelley
AU - Cho, Janghee
N1 - Funding Information:
We thank Chankyung Pak, Nick Gilreath, and the BITLab @ MSU research group for helpful discussions and feedback. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1217212.
Publisher Copyright:
© 2018 Copyright is held by the owner/author(s).
PY - 2018/4/20
AB - Transparency can empower users to make informed choices about how they use an algorithmic decision-making system and judge its potential consequences. However, transparency is often conceptualized by the outcomes it is intended to bring about, not the specifics of mechanisms to achieve those outcomes. We conducted an online experiment focusing on how different ways of explaining Facebook's News Feed algorithm might affect participants' beliefs and judgments about the News Feed. We found that all explanations caused participants to become more aware of how the system works, and helped them to determine whether the system is biased and if they can control what they see. The explanations were less effective for helping participants evaluate the correctness of the system's output, and form opinions about how sensible and consistent its behavior is. We present implications for the design of transparency mechanisms in algorithmic decision-making systems based on these results.
UR - http://www.scopus.com/inward/record.url?scp=85046965296&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85046965296&partnerID=8YFLogxK
DO - 10.1145/3173574.3173677
M3 - Conference contribution
AN - SCOPUS:85046965296
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2018 - Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
T2 - 2018 CHI Conference on Human Factors in Computing Systems, CHI 2018
Y2 - 21 April 2018 through 26 April 2018
ER -