TY - JOUR
T1 - Intentional Control of Type I Error Over Unconscious Data Distortion
T2 - A Neyman–Pearson Approach to Text Classification
AU - Xia, Lucy
AU - Zhao, Richard
AU - Wu, Yanhui
AU - Tong, Xin
N1 - Funding Information:
This work was partially supported by National Science Foundation grant DMS-1613338. The authors would like to thank the editor, associate editor, two statistical content referees, and the referee for reproducibility for many constructive comments, which have greatly improved the article. We would also like to thank Professor Jingyi Jessica Li for rounds of thoughtful discussions and suggestions, and the seminar participants at UCLA.
Publisher Copyright:
© 2020 American Statistical Association.
PY - 2021
Y1 - 2021
AB - This article addresses the challenges in classifying textual data obtained from open online platforms, which are vulnerable to distortion. Most existing classification methods minimize the overall classification error and may yield an undesirably large Type I error (relevant textual messages are classified as irrelevant), particularly when available data exhibit an asymmetry between relevant and irrelevant information. Data distortion exacerbates this situation and often leads to fallacious prediction. To deal with inestimable data distortion, we propose the use of the Neyman–Pearson (NP) classification paradigm, which minimizes Type II error under a user-specified Type I error constraint. Theoretically, we show that the NP oracle is unaffected by data distortion when the class conditional distributions remain the same. Empirically, we study a case of classifying posts about worker strikes obtained from a leading Chinese microblogging platform, which are frequently prone to extensive, unpredictable, and inestimable censorship. We demonstrate that, even though the training and test data are susceptible to different distortions and therefore potentially follow different distributions, our proposed NP methods control the Type I error on test data at the targeted level. The methods and implementation pipeline proposed in our case study are applicable to many other problems involving data distortion. Supplementary materials for this article, including a standardized description of the materials available for reproducing the work, are available as an online supplement.
UR - http://www.scopus.com/inward/record.url?scp=85083525386&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85083525386&partnerID=8YFLogxK
U2 - 10.1080/01621459.2020.1740711
DO - 10.1080/01621459.2020.1740711
M3 - Article
AN - SCOPUS:85083525386
SN - 0162-1459
VL - 116
SP - 68
EP - 81
JO - Journal of the American Statistical Association
JF - Journal of the American Statistical Association
IS - 533
ER -