
Synthetic Student Responses: LLM-Extracted Features for IRT Difficulty Parameter Estimation
Educational assessment relies heavily on knowing question difficulty, which is traditionally determined through resource-intensive pre-testing with students. This requirement creates significant barriers for both classroom teachers and assessment developers. We investigate whether Item Response Theory (IRT) difficulty parameters can be estimated accurately without student testing by modeling the response process, and we examine the relative contribution of different feature types to prediction accuracy. Our approach combines traditional linguistic features with pedagogical insights extracted using Large Language Models (LLMs), including solution step count, required mathematical skills, cognitive complexity, and potential misconceptions. We implement a two-stage process: first training a neural network to predict how students would respond to questions, then deriving difficulty parameters from these simulated response patterns. Using a dataset of over 250,000 student responses to mathematics questions, our model achieves a Pearson correlation of 0.85 between predicted and actual difficulty parameters on questions unseen during training. These results suggest that our method can reliably estimate question difficulty without pre-testing, potentially accelerating assessment development while maintaining psychometric quality. Furthermore, our analysis shows that LLM-extracted pedagogical features contribute substantially to prediction accuracy, suggesting new pathways for AI-assisted educational assessment.
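
To make the two-stage pipeline concrete, the sketch below simulates it end to end on synthetic data: a small neural network is trained to predict student responses from question features plus an ability proxy, and a Rasch-style (1PL) difficulty parameter is then fit to each question's simulated response pattern. The feature construction, network size, and Rasch formulation here are illustrative assumptions for exposition, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-question features standing in for the mix of
# traditional linguistic features and LLM-extracted pedagogical features
# (solution step count, required skills, cognitive complexity, ...).
n_questions, n_students, n_features = 40, 200, 6
question_feats = rng.normal(size=(n_questions, n_features))
student_ability = rng.normal(size=n_students)
true_difficulty = question_feats @ rng.normal(size=n_features) * 0.5

# Observed correctness generated from a Rasch-style response model.
logits = student_ability[:, None] - true_difficulty[None, :]
responses = (rng.uniform(size=logits.shape) < expit(logits)).astype(int)

# Stage 1: train a response model on (question features, ability proxy) rows.
X = np.concatenate(
    [np.repeat(question_feats, n_students, axis=0),
     np.tile(student_ability, n_questions)[:, None]],
    axis=1)
y = responses.T.reshape(-1)  # question-major ordering to match X
response_model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                               random_state=0).fit(X, y)

# Stage 2: derive a difficulty parameter per question from its simulated
# response pattern by maximum likelihood under a Rasch (1PL) model.
def fit_difficulty(p_correct, abilities):
    def neg_log_lik(b):
        p = expit(abilities - b)
        return -np.sum(p_correct * np.log(p) + (1 - p_correct) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

estimated = []
for q in range(n_questions):
    q_block = np.repeat(question_feats[q:q + 1], n_students, axis=0)
    sim_p = response_model.predict_proba(
        np.concatenate([q_block, student_ability[:, None]], axis=1))[:, 1]
    estimated.append(fit_difficulty(sim_p, student_ability))

print("Pearson r vs. true difficulty:",
      np.corrcoef(estimated, true_difficulty)[0, 1])
```

A 1PL fit is used here only to keep the example short; the same stage-two idea applies if the simulated response patterns are fed to a richer IRT model.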
