基于大模型启发式提问的学生元认知能力提升研究

Translated title of the contribution: Enhancing Students' Metacognitive Abilities through Heuristic Questioning with Large Language Models

Wen Wu*, Feifei Ren

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Metacognitive ability refers to an individual's awareness of, reflection on, and regulation of their own cognitive processes; it enables learners to monitor their learning autonomously. Heuristic questioning, a key approach to activating and cultivating metacognition, encourages students to engage in active thinking, identify cognitive blind spots, and adjust their learning strategies, thereby fostering a positive learning cycle. In traditional classrooms, however, it is challenging for teachers to provide personalized questioning support to every student. The recent rapid advancement of large language models (LLMs) has opened new opportunities for personalized questioning. Nevertheless, existing LLMs predominantly function as "answering machines" rather than "questioning mentors": while they excel at answering questions, they often struggle to generate deep, thought-provoking questions, which limits the potential of intelligent heuristic questioning to promote metacognitive development. This study proposes a heuristic questioning mechanism based on error-type analysis, aiming to shift LLMs from encyclopedias to experienced questioning tutors. A cross-disciplinary question bank was developed to catalogue common errors and their corresponding heuristic questions, and Retrieval-Augmented Generation (RAG) was used to enable flexible dialogue guided by preset prompts that reference this error-based knowledge base. Three questioning strategies were designed: fully open-ended, template-constrained, and semi-open (combining error guidance with generative flexibility); each was compared with a baseline model without the question bank. To evaluate these strategies, a dual evaluation framework combining human judgment and automated LLM scoring was established. For the subjective evaluation, volunteers used questionnaires to rate teacher-student dialogues generated by each strategy across multiple dimensions. The automated evaluation applied a dialogue-adapted scoring rubric constructed from established metacognitive assessment frameworks, with an LLM performing quantitative analysis of students' cognitive regulation indicators. By comparing the distributions and trends of human and model scores, the study analyzed each strategy's guidance efficacy and task adaptability. The results indicate that: (1) the error-based question bank significantly enhances students' thinking and metacognitive development; (2) among the tested strategies, the semi-open approach achieves the best overall performance by balancing content specificity, generative flexibility, and learner adaptability; and (3) multidimensional evaluation confirms the effectiveness of the proposed intelligent heuristic questioning mechanism in fostering metacognitive growth.
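The semi-open strategy described above can be illustrated with a minimal sketch: retrieve heuristic question templates for a diagnosed error type from the question bank, then compose a prompt that anchors the LLM on those templates while leaving room to adapt them. All identifiers, error-type names, and example questions below are hypothetical stand-ins, not the paper's actual bank or prompts.

```python
# Hypothetical sketch of the semi-open questioning strategy: retrieved
# error-specific questions supply content specificity, while the open-ended
# instruction preserves generative flexibility. Names and data are illustrative.

# A tiny stand-in for the cross-disciplinary error-type question bank.
QUESTION_BANK = {
    "concept_confusion": [
        "Which two concepts do you think you might be mixing up here?",
        "Can you state the definition of each term in your own words?",
    ],
    "procedural_slip": [
        "Walk me through your steps: where does the result stop matching your expectation?",
    ],
}

def build_semi_open_prompt(error_type: str, student_answer: str) -> str:
    """Compose a prompt that references retrieved heuristic questions for the
    diagnosed error type, but lets the LLM rephrase or extend them."""
    templates = QUESTION_BANK.get(error_type, [])
    guidance = "\n".join(f"- {q}" for q in templates)
    return (
        "You are a questioning tutor, not an answering machine.\n"
        f"The student's answer shows a likely '{error_type}' error:\n"
        f"{student_answer}\n\n"
        "Reference questions for this error type:\n"
        f"{guidance}\n\n"
        "Ask ONE heuristic question. You may adapt or go beyond the "
        "references, but do not reveal the correct answer."
    )

prompt = build_semi_open_prompt(
    "concept_confusion", "Velocity and speed are the same thing."
)
```

In this reading, the fully open-ended strategy would omit the retrieved references entirely, while the template-constrained strategy would require the model to use a retrieved question verbatim; the semi-open variant sits between the two.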

Original language: Chinese (Traditional)
Pages (from-to): 985-996
Number of pages: 12
Journal: Journal of Psychological Science
Volume: 48
Issue number: 4
State: Published - 20 Aug 2025

