Some researchers have advocated for the use of posttest probabilities when using universal screening data to make decisions for individual students. However, these arguments are largely conceptual, and to date there have been few convincing empirical demonstrations of the utility of posttest probabilities over and above traditional diagnostic accuracy metrics (e.g., sensitivity, specificity, positive and negative predictive values) in school settings. We demonstrate how posttest probabilities change as a function of a student's level of preexisting risk using screening instruments reviewed by the Center on Response to Intervention. Our results illustrate that knowing the sensitivity and specificity of a tool alone is not adequate to make defensible rule-in and rule-out decisions for individual students. Recommendations are offered for practitioners to assess the appropriateness of screening tools for their schools, and a rationale is given for researchers to supplement traditional diagnostic accuracy indexes with posttest probabilities.
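The dependence of posttest probability on preexisting risk can be sketched with the standard likelihood-ratio form of Bayes' theorem. This is a generic illustration, not the article's analysis: the function name and the sensitivity, specificity, and pretest-risk values below are hypothetical, chosen only to show that a fixed sensitivity/specificity pair yields very different posttest probabilities at different base rates.

```python
def posttest_probability(pretest, sensitivity, specificity, result="positive"):
    """Posttest probability of risk status given a screening result.

    Uses the likelihood-ratio form of Bayes' theorem:
      LR+ = sensitivity / (1 - specificity)   for a positive screen
      LR- = (1 - sensitivity) / specificity   for a negative screen
      posttest odds = pretest odds * LR
    """
    lr = (sensitivity / (1 - specificity)
          if result == "positive"
          else (1 - sensitivity) / specificity)
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)


# Hypothetical screener: sensitivity = .90, specificity = .80.
# Same positive screen, two different levels of preexisting risk:
print(posttest_probability(0.20, 0.90, 0.80))  # ≈ 0.529
print(posttest_probability(0.05, 0.90, 0.80))  # ≈ 0.191
```

With a 20% base rate, a positive screen raises the probability of true risk to about .53; with a 5% base rate, the same screen from the same tool only reaches about .19 — which is why sensitivity and specificity alone cannot justify a rule-in decision for an individual student.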