Educational researchers have traditionally evaluated universal screening performance by dichotomizing continuous test scores to classify students as at-risk or not at-risk. This approach is pragmatic for decision making, but dichotomization discards critical information about the magnitude of risk. An alternative approach, common in medical research, is to evaluate screening test performance by dividing scores into ordinal categories for which interval likelihood ratios can be estimated. Interval likelihood ratios have yet to be applied to academic screening research. We reanalyzed data from a middle school math screening study to evaluate differences in screening accuracy and efficiency when dichotomous or interval likelihood ratios were used to interpret screening performance within a gated screening model. Student performance on the prior year's statewide achievement test served as the first gate, followed by six permutations of student performance on two math curriculum-based measures and the Measures of Academic Progress. Each model was used to predict student proficiency on the subsequent statewide achievement test in math. Treating screening performance as an ordinal variable yielded wider ranges between likelihood ratios than dichotomizing screening scores did. After applying a threshold decision-making model to interpret the post-test probability of risk, the minimum number of tests required to classify all students as at-risk or not at-risk ranged between two and three, depending on grade and the method of estimating likelihood ratios. Diagnostic accuracy was similar whether interval likelihood ratios or dichotomized results were used to interpret screening scores. Although replication of these findings is needed, one potential benefit of using interval likelihood ratios may be a reduction in the number of students who require additional screening after the first gate.
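The threshold decision-making logic summarized above rests on a standard identity: a likelihood ratio converts a pre-test probability of risk into a post-test probability via the odds scale. The following minimal Python sketch illustrates that conversion; the base rate and likelihood ratio values are purely illustrative and are not drawn from the study reanalyzed here.

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability of risk into a post-test probability
    by applying a likelihood ratio on the odds scale (Bayes' theorem)."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Hypothetical values for illustration only: a 30% base rate of risk and
# an interval likelihood ratio of 8 for a low score band on the screener.
p = posttest_probability(0.30, 8.0)
print(round(p, 3))  # post-test probability of risk after one screener
```

In a gated model, the post-test probability from one gate becomes the pre-test probability for the next; testing stops once a student's probability crosses the upper (treat) or lower (rule out) threshold, which is why interval likelihood ratios with wider ranges can classify more students after fewer gates.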