Posttest probabilities: An empirical demonstration of their use in evaluating the performance of universal screening measures across settings

Ethan R. Van Norman, David A. Klingbeil, Peter Marlow Nelson

Research output: Contribution to journal › Article


Abstract

Some researchers have advocated for the use of posttest probabilities when using universal screening data to make decisions for individual students. However, these arguments have been largely conceptual, and to date there have been few convincing empirical demonstrations of the utility of posttest probabilities over and above traditional diagnostic accuracy metrics (e.g., sensitivity, specificity, positive and negative predictive values) in school settings. We demonstrate how posttest probabilities change as a function of a student's level of preexisting risk using screening instruments reviewed by the Center on Response to Intervention. Our results illustrate that knowing the sensitivity and specificity of a tool alone is not adequate for making defensible rule-in and rule-out decisions for individual students. We offer recommendations for practitioners to assess the appropriateness of screening tools for their schools and provide a rationale for researchers to supplement traditional diagnostic accuracy indexes with posttest probabilities.
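The abstract's central point, that sensitivity and specificity alone cannot determine a student's posttest probability, follows from Bayes' theorem in odds form: the tool's likelihood ratios are combined with the pretest (base-rate) probability, so the same instrument yields different posttest probabilities in settings with different levels of preexisting risk. A minimal sketch of that calculation is below; the sensitivity, specificity, and base-rate values are illustrative assumptions, not figures from the article.

```python
def posttest_probability(pretest_prob, sensitivity, specificity, positive_result=True):
    """Convert a pretest (base-rate) probability into a posttest probability
    using a screening tool's likelihood ratios (Bayes' theorem in odds form)."""
    # Likelihood ratios derived from the tool's sensitivity and specificity
    lr_positive = sensitivity / (1 - specificity)   # strength of a rule-in (positive) result
    lr_negative = (1 - sensitivity) / specificity   # strength of a rule-out (negative) result
    lr = lr_positive if positive_result else lr_negative

    # Odds form of Bayes' theorem: posttest odds = pretest odds * likelihood ratio
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Hypothetical tool with .80 sensitivity and .80 specificity (LR+ = 4.0),
# applied in two schools with different base rates of risk:
low_risk_school = posttest_probability(0.10, 0.80, 0.80)   # 10% pretest risk
high_risk_school = posttest_probability(0.40, 0.80, 0.80)  # 40% pretest risk
print(round(low_risk_school, 2))   # -> 0.31
print(round(high_risk_school, 2))  # -> 0.73
```

With identical sensitivity and specificity, the same positive screening result implies roughly a 31% chance of true risk in the low-base-rate school but about 73% in the high-base-rate school, which is the kind of setting-dependent difference the article argues practitioners should examine.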

Original language: English (US)
Pages (from-to): 349-362
Number of pages: 14
Journal: School Psychology Review
Volume: 46
Issue number: 4
DOIs
State: Published - Dec 1 2017

All Science Journal Classification (ASJC) codes

  • Education
  • Developmental and Educational Psychology

