### Abstract

Educational researchers have traditionally evaluated universal screening performance by dichotomizing continuous test scores to classify students as at-risk or not at-risk. This is pragmatic for decision making, but dichotomization results in a loss of critical information regarding the magnitude of risk. Another approach, common in medical research, is to evaluate screening test performance by dividing scores into ordinal categories for which interval likelihood ratios can be estimated. Interval likelihood ratios have yet to be applied to academic screening research. We reanalyzed data from a middle school math screening study to evaluate differences in screening accuracy and efficiency when dichotomous or interval likelihood ratios were used to interpret screening performance within a gated screening model. Student performance on the prior-year statewide achievement test was used as the first gate, followed by six different permutations of student performance on two math curriculum-based measures and the Measures of Academic Progress. Each model was used to predict student proficiency on the subsequent statewide achievement test in math. Treating screening performance as an ordinal variable yielded wider ranges between likelihood ratios than when screening scores were dichotomized. After applying a threshold decision-making model to interpret the post-test probability of risk, the minimum number of tests required to classify all students as at-risk or not at-risk ranged between two and three (depending on grade and the method of estimating likelihood ratios). Diagnostic accuracy results were similar whether interval likelihood ratios or dichotomized results were used to interpret screening scores. Although replication of these findings is needed, one potential benefit of using interval likelihood ratios may be a reduction in the number of students who require additional screening after the first gate.
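The pipeline the abstract describes (estimating a likelihood ratio for each ordinal score band, converting a pre-test probability of risk into a post-test probability via Bayes' theorem in odds form, and applying a threshold decision model to decide whether to classify a student or administer the next screener) can be sketched in Python. All numeric values below (the counts, base rate, and thresholds) are hypothetical illustrations, not figures from the study:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1.0 + o)

def interval_lr(at_risk_in_band, at_risk_total, not_at_risk_in_band, not_at_risk_total):
    """Interval likelihood ratio for one score band:
    P(score in band | at risk) / P(score in band | not at risk)."""
    return (at_risk_in_band / at_risk_total) / (not_at_risk_in_band / not_at_risk_total)

def post_test_probability(pretest_p, lr):
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    return prob(odds(pretest_p) * lr)

def threshold_decision(p, test_threshold=0.10, treat_threshold=0.60):
    """Threshold model: classify when the post-test probability clears a
    threshold; otherwise continue to the next gate in the screening sequence."""
    if p >= treat_threshold:
        return "at-risk"
    if p <= test_threshold:
        return "not at-risk"
    return "administer next screener"

# Hypothetical counts for one low score band: 30 of 50 at-risk students and
# 20 of 200 not-at-risk students scored in this band.
lr = interval_lr(30, 50, 20, 200)          # 0.60 / 0.10 = 6.0
p = post_test_probability(0.20, lr)        # 20% base rate -> 60% post-test
print(lr, p, threshold_decision(p))
```

Note that a likelihood ratio near 1.0 leaves the post-test probability at the base rate, which is why dichotomized scores (which compress all bands into two ratios) can force more students through additional gates than informative interval ratios would.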

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 107-123 |
| Number of pages | 17 |
| Journal | Journal of School Psychology |
| Volume | 76 |
| DOIs | https://doi.org/10.1016/j.jsp.2019.07.016 |
| State | Published - Oct 2019 |

### All Science Journal Classification (ASJC) codes

- Education
- Developmental and Educational Psychology

### Cite this

Klingbeil, D. A., Van Norman, E. R., Nelson, P. M., & Birr, C. (2019). Interval likelihood ratios: Applications for gated screening in schools. *Journal of School Psychology*, *76*, 107-123. https://doi.org/10.1016/j.jsp.2019.07.016

Research output: Contribution to journal › Article

TY - JOUR

T1 - Interval likelihood ratios

T2 - Applications for gated screening in schools

AU - Klingbeil, David A.

AU - Van Norman, Ethan R.

AU - Nelson, Peter M.

AU - Birr, C.

PY - 2019/10

Y1 - 2019/10


UR - http://www.scopus.com/inward/record.url?scp=85070508187&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85070508187&partnerID=8YFLogxK

U2 - 10.1016/j.jsp.2019.07.016

DO - 10.1016/j.jsp.2019.07.016

M3 - Article

C2 - 31759460

AN - SCOPUS:85070508187

VL - 76

SP - 107

EP - 123

JO - Journal of School Psychology

JF - Journal of School Psychology

SN - 0022-4405

ER -