Attention by design: Using attention checks to detect inattentive respondents and improve data quality

James D. Abbey, Margaret G. Meloy

Research output: Contribution to journal › Article

24 Citations (Scopus)

Abstract

This paper examines attention checks and manipulation validations to detect inattentive respondents in primary empirical data collection. These prima facie attention checks range from the simple, such as reverse scaling first proposed a century ago, to more recent and involved methods, such as evaluating response patterns and timed responses via online data capture tools. The attention check validations also range from easily implemented mechanisms, such as automatic detection through directed queries, to highly intensive investigation of responses by the researcher. The latter has the potential to introduce inadvertent researcher bias, as the researcher's judgment may impact the interpretation of the data. The empirical findings of the present work reveal that construct and scale validations show consistently significant improvement in the fit statistics—a finding of great use for researchers working predominantly with scales and constructs for their empirical models. However, based on the rudimentary experimental models employed in the analysis, attention checks generally do not show a consistent, systematic improvement in the significance of test statistics for experimental manipulations. This latter result indicates that, by their very nature, attention checks may trigger an inherent trade-off between loss of sample subjects—lowered power and increased Type II error—and the potential of capitalizing on chance alone—the possibility that the previously significant results were in fact the result of Type I error. The analysis also shows that the attrition rates due to attention checks—upwards of 70% in some observed samples—are far larger than typically assumed. Such loss rates raise the specter that studies not validating attention may inadvertently increase their Type I error rate. The manuscript provides general guidelines for various attention checks, discusses the psychological nuances of the methods, and highlights the delicate balance among incentive alignment, monetary compensation, and the subsequently triggered mood of respondents.
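The simpler screening mechanisms the abstract mentions (directed queries, timed responses, and reverse-scaled items) can be illustrated with a minimal Python sketch. The sketch below is not the paper's method; the column names, thresholds, and the flag_inattentive helper are hypothetical stand-ins chosen only to show how such screens are typically combined.

    # Illustrative sketch only: common attention-check screens
    # (directed query, response time, reverse-scaled item consistency).
    # Column names and thresholds are hypothetical, not from the paper.
    import pandas as pd

    def flag_inattentive(df,
                         directed_col="directed_check",      # e.g. "Select 'Strongly agree' for this item"
                         directed_expected=7,
                         time_col="seconds_on_page",
                         min_seconds=30,
                         item_col="satisfaction_item",
                         reversed_col="satisfaction_item_reversed",
                         scale_max=7):
        """Return a copy of df with a boolean 'inattentive' flag from three screens."""
        fails_directed = df[directed_col] != directed_expected   # missed the instructed response
        too_fast = df[time_col] < min_seconds                    # implausibly fast completion
        # A reverse-scaled item should roughly mirror its counterpart once re-coded;
        # a large gap suggests straight-lining rather than reading the item.
        recoded = (scale_max + 1) - df[reversed_col]
        inconsistent = (df[item_col] - recoded).abs() > 3
        out = df.copy()
        out["inattentive"] = fails_directed | too_fast | inconsistent
        return out

    # Usage on a toy frame; report attrition before dropping anyone,
    # since the abstract cautions that losses can be far larger than assumed.
    toy = pd.DataFrame({
        "directed_check": [7, 7, 4],
        "seconds_on_page": [120, 18, 95],
        "satisfaction_item": [6, 6, 2],
        "satisfaction_item_reversed": [2, 2, 2],
    })
    screened = flag_inattentive(toy)
    print(f"Flagged {screened['inattentive'].mean():.0%} of respondents")

Keeping the flagging step separate from any exclusion decision, and reporting the flagged share, keeps the power versus Type I error trade-off described in the abstract visible rather than silently dropping cases.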

Original language: English (US)
Pages (from-to): 63-70
Number of pages: 8
Journal: Journal of Operations Management
Volume: 53-56
DOI: 10.1016/j.jom.2017.06.001
ISSN: 0272-6963
Publisher: Elsevier
Scopus ID: 85021778976
State: Published - November 2017

Fingerprint

  • Data acquisition
  • Statistics
  • Data quality
  • Compensation and Redress
  • Manipulation
  • Type I error
  • Empirical data
  • Trade-offs
  • Construct validation
  • Mood
  • Data collection
  • Query
  • Empirical model
  • Scale validation
  • Attrition
  • Trigger
  • Psychological
  • Scaling
  • Incentive alignment
  • Test statistic

All Science Journal Classification (ASJC) codes

  • Strategy and Management
  • Management Science and Operations Research
  • Industrial and Manufacturing Engineering

Cite this

Abbey, J. D., & Meloy, M. G. (2017). Attention by design: Using attention checks to detect inattentive respondents and improve data quality. Journal of Operations Management, 53-56, 63-70. https://doi.org/10.1016/j.jom.2017.06.001
