TY - JOUR
T1 - Salience bias in crowdsourcing contests
AU - Lee, Ho Cheung Brian
AU - Ba, Sulin
AU - Li, Xinxin
AU - Stallaert, Jan
N1 - Funding Information:
History: Panos Constantinides, Ola Henfridsson, and Geoffrey Parker, Special Issue Editors; Mingfeng Lin, Associate Editor. This paper has been accepted for the Information Systems Research Special Issue on Digital Infrastructure and Platforms. Funding: The authors acknowledge the support of the National Natural Science Foundation of China [Project 71229101]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/isre.2018.0775.
PY - 2018/6/1
Y1 - 2018/6/1
N2 - Crowdsourcing relies on online platforms to connect a community of users to perform specific tasks. However, without appropriate control, the behavior of the online community might not align with the platform's designed objective, which can lead to inferior platform performance. This paper investigates how feedback information on a crowdsourcing platform and the systematic bias of crowdsourcing workers can affect crowdsourcing outcomes. Specifically, using archival data from the online crowdsourcing platform Kaggle, combined with survey data from actual Kaggle contest participants, we examine the role of a systematic bias, namely the salience bias, in influencing the performance of crowdsourcing workers, and how the number of crowdsourcing workers moderates the impact of the salience bias on contest outcomes. Our results suggest that the salience bias influences the performance of contestants, including the winners of the contests. Furthermore, the number of participating contestants may attenuate or amplify the impact of the salience bias on contest outcomes, depending on the effort required to complete the tasks. Our results have critical implications for crowdsourcing firms and platform designers.
AB - Crowdsourcing relies on online platforms to connect a community of users to perform specific tasks. However, without appropriate control, the behavior of the online community might not align with the platform's designed objective, which can lead to inferior platform performance. This paper investigates how feedback information on a crowdsourcing platform and the systematic bias of crowdsourcing workers can affect crowdsourcing outcomes. Specifically, using archival data from the online crowdsourcing platform Kaggle, combined with survey data from actual Kaggle contest participants, we examine the role of a systematic bias, namely the salience bias, in influencing the performance of crowdsourcing workers, and how the number of crowdsourcing workers moderates the impact of the salience bias on contest outcomes. Our results suggest that the salience bias influences the performance of contestants, including the winners of the contests. Furthermore, the number of participating contestants may attenuate or amplify the impact of the salience bias on contest outcomes, depending on the effort required to complete the tasks. Our results have critical implications for crowdsourcing firms and platform designers.
UR - http://www.scopus.com/inward/record.url?scp=85048536100&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85048536100&partnerID=8YFLogxK
U2 - 10.1287/isre.2018.0775
DO - 10.1287/isre.2018.0775
M3 - Article
AN - SCOPUS:85048536100
VL - 29
SP - 401
EP - 418
JO - Information Systems Research
JF - Information Systems Research
SN - 1047-7047
IS - 2
ER -