Several states have changed their statewide achievement tests over the past 5 years. These changes may pose difficulties for educators tasked with identifying students in need of additional support. This study evaluated the stability of decision-making accuracy estimates across changes to the statewide achievement test. We analyzed extant data from a large suburban district in Wisconsin in 2014–2015 (N = 2,774) and 2015–2016 (N = 2,882). We estimated the decision-making accuracy of recommendations from the Measures of Academic Progress for predicting risk on a Common Core State Standards-aligned test (2014–2015) and a new test based on updated academic standards (2015–2016) in reading and math. Findings suggest that sensitivity and specificity estimates were relatively stable in math. Changes in the criterion measure were associated with decreased sensitivity when predicting performance in reading. These results provide initial support for educators to continue existing screening practices until test vendors or state educational agencies establish cut-scores for predicting risk on the newer test. Using a lower cut-score to establish risk (increasing sensitivity while decreasing specificity) may be prudent in reading. Limitations and directions for future research are discussed.
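The sensitivity–specificity tradeoff described in the abstract can be sketched with a minimal example. All scores, risk labels, and cut-scores below are hypothetical illustrations (not data from the study), assuming a screener where students scoring below the cut are flagged as at risk; adjusting the cut so that more students are flagged raises sensitivity at the cost of specificity.

```python
def sensitivity_specificity(screen_scores, at_risk, cut):
    """Flag students scoring below `cut` as at risk, then compare the flags
    against true risk status on the criterion measure."""
    tp = sum(s < cut and r for s, r in zip(screen_scores, at_risk))   # correctly flagged
    fn = sum(s >= cut and r for s, r in zip(screen_scores, at_risk))  # missed at-risk students
    tn = sum(s >= cut and not r for s, r in zip(screen_scores, at_risk))  # correctly passed over
    fp = sum(s < cut and not r for s, r in zip(screen_scores, at_risk))   # flagged unnecessarily
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening scores and true risk status on the criterion test.
scores = [180, 190, 200, 210, 220, 230]
risk = [True, True, False, True, False, False]

# A conservative cut flags fewer students; a liberal cut flags more,
# trading specificity for sensitivity.
sens_c, spec_c = sensitivity_specificity(scores, risk, cut=195)  # fewer flagged
sens_l, spec_l = sensitivity_specificity(scores, risk, cut=215)  # more flagged
```

Here the liberal cut catches every truly at-risk student (higher sensitivity) but also flags one student who would have passed the criterion test (lower specificity), mirroring the tradeoff the abstract recommends considering in reading.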