Automated Essay Scoring in Middle School Writing: Understanding Key Predictors of Students’ Growth and Comparing Artificial Intelligence- and Teacher-Generated Scores and Feedback

Abstract

Providing feedback to students in a sustainable way represents a perennial challenge for secondary teachers of writing. Employing artificial intelligence (AI) tools to give students personalized and immediate feedback holds great promise. Project Topeka offered middle school teachers pre-curated teaching materials, foundational texts and videos, essay prompts, and a platform for students to submit and revise essay drafts with AI-generated scores and feedback. We analyze the AI-generated writing scores of 3,233 7th- and 8th-grade students in school year 2021-22 and find that students' growth over time was generally not explained by their teachers' (n=35) experience or self-reported instructional approaches. We also find that students' growth increased significantly as their baseline score decreased (i.e., a student with the lowest possible baseline grew more than a student with a medium baseline). Lastly, drawing on an in-person convening of 16 Topeka teachers, we compared the teachers' scores and feedback to the AI-generated scores and feedback on the same essays and found that the AI tool was generally more generous, with differences likely driven by teachers' greater ability to judge the success of an essay as a whole.
