The literature on calibration suggests that students consider a multitude of factors when they self-evaluate task performance. Nevertheless, few studies have focused on calibration within a complex task environment, such as when students are asked to compose written responses based on multiple texts. In this study, we examined the criteria that undergraduate students considered when asked to self-evaluate written responses they had composed from multiple texts. Moreover, we considered the extent to which these criteria affected students' objective response quality, calibration, and confidence bias. Findings revealed that students indeed cited a variety of criteria in justifying their self-evaluations, including task-, context-, and person-related factors, consistent with prior research. Further, our study indicated that high-quality written responses were associated with accurate calibration and with students' relative under-confidence. We further found that low-performing students demonstrated less accurate calibration and greater over-confidence. Implications for improving students' metacognitive awareness during complex task completion are discussed.