Measures of agreement and concordance with clinical research applications

J. Richard Landis, Tonya King, Jai W. Choi, Vernon Chinchilli, Gary G. Koch

Research output: Contribution to journal › Review article


Abstract

This article reviews measures of interrater agreement, including the complementary roles of tests for interrater bias and estimates of kappa statistics and intraclass correlation coefficients (ICCs), following the developments outlined by Landis and Koch (1977a; 1977b; 1977c). Category-specific measures of reliability, together with pairwise measures of disagreement among categories, are extended to accommodate multistage research designs involving unbalanced data. The covariance structure of these category-specific agreement and pairwise disagreement coefficients is summarized for use in modeling and hypothesis testing. These agreement/disagreement measures of intraclass/interclass correlation are then estimated within specialized software and illustrated for several clinical research applications. Further consideration is also given to measures of agreement for continuous data, namely the concordance correlation coefficient (CCC) developed originally by Lin (1989). An extension to this CCC was published by King and Chinchilli (2001b), yielding a generalized concordance correlation coefficient which is appropriate for both continuous and categorical data. This coefficient is reviewed and its use illustrated with clinical research data. Additional extensions to this CCC methodology for longitudinal studies are also summarized.
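The continuous-data measure reviewed here, Lin's (1989) concordance correlation coefficient, can be sketched directly from its definition: the CCC scales the covariance of two raters' paired measurements by their variances plus the squared difference of their means, so it penalizes both imperfect correlation and location or scale shifts. A minimal illustration (function and variable names are ours, not from the article or any published software):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Biased (1/n) variances and covariance, as in Lin's original definition
    sx2 = sum((xi - mx) ** 2 for xi in x) / n
    sy2 = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    # CCC = 2*cov / (var_x + var_y + (mean shift)^2)
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Perfect agreement gives CCC = 1
print(ccc([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
# A constant shift lowers the CCC even though the Pearson correlation stays 1
print(ccc([1, 2, 3, 4], [2, 3, 4, 5]))
```

The second call returns a value below 1, which is the key property distinguishing the CCC from ordinary correlation as an agreement measure; King and Chinchilli's (2001b) generalization extends this idea to categorical data.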

Original language: English (US)
Pages (from-to): 185-209
Number of pages: 25
Journal: Statistics in Biopharmaceutical Research
Volume: 3
Issue number: 2
DOIs
State: Published - 2011

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Pharmaceutical Science

