Can You See Me Think? Grounding LLM feedback in keystrokes and revision behaviour
Abstract
As large language models (LLMs) increasingly assist in evaluating student writing, researchers have begun to explore whether these systems can attend not only to final drafts but to the writing process itself. We examine how LLM feedback can be anchored in students' writing processes, using keystroke logs and revision snapshots as cognitive proxies. In an ablation study of 52 student essays, we compare two conditions: C1 (final essay only) and C2 (final essay plus process data). While rubric scores changed little, process-aware feedback (C2) offered more explicit recognition of revisions and organizational changes. These findings suggest that cognitively grounded LLM feedback is better aligned with pedagogical goals and more reflective of actual student effort.