With the new emphasis on engineering practices and engineering design (NGSS Lead States, 2013), teachers of and researchers studying K-12 engineering need ways to measure students' developing engineering skills. To efficiently measure student learning of engineering practices, there is a need for a tool that captures student performances in a way that readily affords evaluation. The problem we pursue in this paper is how to accomplish this measurement. We present a performance assessment instrument and coding rubric. We calculate interrater reliability for coders and present descriptive statistics for student scores to demonstrate the utility of the instrument for distinguishing a range of performances. We conduct an exploratory factor analysis to examine the internal structure of the instrument and calculate internal consistency reliability. To build a validity case for using the assessment to measure student learning of engineering practices, we compare video of 30 students working on design challenges in their student groups, collected from 10 of the participating classrooms, to the same students' performance on the assessment. This comparison also informs the uses and limits of the written performance assessment for measuring elementary students' engineering skills and understanding-in-use. Finally, we describe the time needed to score the assessments and discuss the instrument's utility for larger-scale research studies.
|Original language||English (US)|
|Journal||ASEE Annual Conference and Exposition, Conference Proceedings|
|State||Published - Jun 26 2016|
|Event||123rd ASEE Annual Conference and Exposition - New Orleans, United States|
Duration: Jun 26 2016 → Jun 29 2016