Instructional Measurement

Our current work on instructional measurement revolves around measuring high-quality math instruction and equitable teaching practices. Through a study funded by the Bill & Melinda Gates Foundation and through research-practice partnerships with school districts, we are collecting a large data corpus of mathematics classroom videos, transcripts, lesson materials, student and teacher surveys, and administrative data so that we can develop robust measures of instruction. We are currently working on developing equity measures, such as how teachers attribute competence to students. This involves linking students' contributions to class discussions to their demographic information and identities.

During the past few years, a fast-growing literature has shown that natural language processing (NLP) techniques provide a potentially transformative approach for instructional measurement and feedback. Unlike conventional human-based scoring, NLP analysis of classroom transcripts can be done in scalable and adaptable ways (Alic et al., 2022; Demszky et al., 2021; Hunkins, 2022; Kelly et al., 2018; Liu & Cohen, 2021; Suresh et al., 2019). To date, scholars have focused on capturing dialogic instruction, or teachers’ use of talk moves that promote student thinking and activity, for instance by eliciting student reasoning (Alic et al., 2022; Suresh et al., 2019), revoicing and taking up student ideas (Demszky et al., 2021; Suresh et al., 2019), and prompting students to respond to others’ ideas (Suresh et al., 2019). Scholars have also begun to focus on students’ contributions – for instance, the density of mathematical language within student talk (Himmelsbach et al., 2023) and evidence of student mathematical reasoning (Hill et al., in progress). These instructional moves are indicators of rigorous instruction and, when enacted with parity across students, equitable instruction. To date, scholars have primarily used these measures to provide private, on-demand feedback to teachers (Suresh et al., 2019; Demszky et al., in press; Demszky et al., 2023), in some cases leading to positive impacts on educators’ instruction quality and selected student outcomes across different teaching contexts (Demszky et al., in press; Demszky & Liu, 2023).
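To give a flavor of what automated talk-move detection involves, the sketch below tags transcript utterances with the move categories named above (eliciting reasoning, revoicing, prompting peer response). It is a deliberately minimal, keyword-based illustration; the published systems cited in this paragraph use trained NLP classifiers, and all pattern phrases and example utterances here are our own hypothetical choices, not drawn from any study's data.

```python
import re

# Hypothetical keyword patterns for three talk-move categories.
# Real systems learn these distinctions from annotated transcripts
# rather than matching fixed phrases.
TALK_MOVE_PATTERNS = {
    "eliciting_reasoning": re.compile(
        r"\b(why|how do you know|explain your thinking)\b", re.IGNORECASE),
    "revoicing": re.compile(
        r"\b(so you're saying|i heard you say|in other words)\b", re.IGNORECASE),
    "prompting_peer_response": re.compile(
        r"\b(do you agree|respond to)\b", re.IGNORECASE),
}

def tag_utterance(utterance: str) -> list:
    """Return the talk-move labels whose patterns match the utterance."""
    return [move for move, pattern in TALK_MOVE_PATTERNS.items()
            if pattern.search(utterance)]

# Toy transcript: each line is one teacher utterance.
transcript = [
    "So you're saying the two fractions are equivalent?",
    "Why does that strategy work here?",
    "Do you agree with Maya's idea?",
    "Open your books to page twelve.",
]

for line in transcript:
    print(tag_utterance(line), "<-", line)
```

Running over a full corpus of transcripts, counts of such tags per lesson (and, for equity measures, per student subgroup) could then be aggregated into instructional indicators, which is the general shape of the measurement pipeline the paragraph describes.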

If you’re interested in learning more about our instructional measurement work or participating in a related study, please contact Research Project Manager Hannah Rosenstein at hrosenst@umd.edu.