Abstract
Text simplification is crucial for enhancing student reading achievement. Although Chat Generative Pre-trained Transformer (ChatGPT) and its subsequent generations have exhibited remarkable efficacy in various educational tasks, the similarities and differences between ChatGPT- and expert teacher-simplified texts remain largely unexplored. This study aims to bridge this gap by compiling a comparable corpus consisting of source texts, expert-simplified texts, and three sets of ChatGPT-simplified texts generated with typical prompting strategies, namely general, example, and instructive. We then investigated the similarities and differences between sample texts simplified by ChatGPT and those simplified by expert teachers, focusing on 17 linguistic features at the lexical, syntactic, and cohesion levels. The results revealed significant differences between expert- and ChatGPT-simplified texts across multiple linguistic features, whereas more detailed prompts increased their similarity. These findings have important pedagogical implications, suggesting that, with appropriate guidance, teachers can better leverage the potential of ChatGPT for preparing reading materials and make more informed judgments about the value of both ChatGPT- and teacher-simplified texts.
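As a rough illustration of how the three prompting strategies (general, example, and instructive) might be operationalized, the sketch below sends one simplification request per strategy through the OpenAI Python client. The model name, prompt wording, example sentences, and target reader level are assumptions for illustration only and are not the prompts or settings used in the study.

```python
# Illustrative sketch only: prompt texts, model name, and reader level are
# assumptions, not the materials used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOURCE_TEXT = "The committee postponed its deliberations indefinitely."

# Three prompting strategies loosely mirroring the "general", "example", and
# "instructive" conditions (wording here is hypothetical).
PROMPTS = {
    "general": "Simplify the following text:\n\n{text}",
    "example": (
        "Simplify the following text. Example:\n"
        "Original: The physician administered the medication.\n"
        "Simplified: The doctor gave the medicine.\n\n{text}"
    ),
    "instructive": (
        "Simplify the following text for middle-school readers: use common "
        "words, keep sentences short, and preserve the original meaning.\n\n{text}"
    ),
}


def simplify(text: str, strategy: str) -> str:
    """Request one simplified version of `text` using the named strategy."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the study's version may differ
        messages=[
            {"role": "user", "content": PROMPTS[strategy].format(text=text)}
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for name in PROMPTS:
        print(f"--- {name} ---")
        print(simplify(SOURCE_TEXT, name))
```

Under this reading, the three conditions differ only in prompt detail: the general prompt gives no guidance, the example prompt adds a worked simplification pair, and the instructive prompt specifies the target audience and simplification criteria.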
| Original language | English |
|---|---|
| Journal | Reading and Writing |
| State | Accepted/In press - 2025 |
Keywords
- ChatGPT
- Human-enhanced AI
- Prompting strategies
- Reading materials
- Text simplification