Small #OpenAccess 11-week study reports that students who received detailed, specific feedback on their writing generated by ChatGPT improved more than students using an established automated feedback system, but they felt worse about themselves as writers. https://www.tandfonline.com/doi/full/10.1080/09588221.2025.2454541?mi=5fx7dw#abstract
Abstract
The affordances of ChatGPT for language learning and teaching have attracted increasing attention. While studies have begun to investigate ChatGPT's potential as a feedback provider, little attention has been paid to its impact on students' writing performance and ideal L2 writing self compared with established automated writing evaluation (AWE) systems. To address these gaps, a sequential explanatory mixed-methods design was adopted. One hundred and fifty second-year university students from three writing classes at a Chinese public university were recruited and randomly divided into a ChatGPT group, an AWE group, and a control group. After an eleven-week intervention, ANCOVA results showed that the ChatGPT group scored significantly higher than both the AWE and control groups in post-intervention writing performance, as measured by writing scores, but significantly lower than the AWE group in ideal L2 writing self, with a medium effect size. Qualitative analysis of students' reflection papers revealed (over)reliance on the tool and an accompanying loss of creativity and agency. Pedagogical implications and directions for future research are also discussed.