TY - JOUR
T1 - Exploring the prospects of multimodal large language models for Automated Emotion Recognition in education
T2 - Insights from Gemini
AU - Yu, Shuzhen
AU - Androsov, Alexey
AU - Yan, Hanbing
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/7
Y1 - 2025/7
N2 - Emotions play a pivotal role in daily judgments and decision-making, particularly in educational settings, where understanding and responding to learners’ emotions is essential for personalized learning. While there has been growing interest in emotion recognition, traditional methods, such as manual observations and self-reports, are often subjective and time-consuming. The rise of AI has led to the development of Automated Emotion Recognition (AER), offering transformative opportunities for educational reform by enabling personalized learning through emotional insights. However, AER continues to face challenges, including reliance on large-scale labeled databases, limited flexibility, and inadequate adaptation to diverse educational contexts. Recent advancements in AI, particularly Multimodal Large Language Models (MLLMs), show promise in addressing these challenges, though their application in AER remains underexplored. This study aimed to fill this gap by systematically evaluating the performance of Gemini, a pioneering MLLM, in image-based AER tasks across five databases: CK+, FER-2013, RAF-DB, OL-SFED, and DAiSEE. The analysis examined recognition accuracy, error patterns, emotion inference mechanisms, and the impact of image preprocessing techniques (such as face cropping, bilinear interpolation, and super-resolution) on the model's performance. The results revealed that Gemini achieved high emotion recognition accuracy, especially in distinguishing emotional polarities across all databases. Image preprocessing significantly improved the recognition of basic emotions, though its effect on academic emotion recognition was minor. The confusion in academic emotion recognition stemmed from Gemini's limited understanding of academic emotion features and its insufficient ability to capture contextual cues. Building on the results, this study outlines specific future research directions from both technological and educational perspectives. These findings offer valuable insights for advancing MLLMs in educational applications.
AB - Emotions play a pivotal role in daily judgments and decision-making, particularly in educational settings, where understanding and responding to learners’ emotions is essential for personalized learning. While there has been growing interest in emotion recognition, traditional methods, such as manual observations and self-reports, are often subjective and time-consuming. The rise of AI has led to the development of Automated Emotion Recognition (AER), offering transformative opportunities for educational reform by enabling personalized learning through emotional insights. However, AER continues to face challenges, including reliance on large-scale labeled databases, limited flexibility, and inadequate adaptation to diverse educational contexts. Recent advancements in AI, particularly Multimodal Large Language Models (MLLMs), show promise in addressing these challenges, though their application in AER remains underexplored. This study aimed to fill this gap by systematically evaluating the performance of Gemini, a pioneering MLLM, in image-based AER tasks across five databases: CK+, FER-2013, RAF-DB, OL-SFED, and DAiSEE. The analysis examined recognition accuracy, error patterns, emotion inference mechanisms, and the impact of image preprocessing techniques (such as face cropping, bilinear interpolation, and super-resolution) on the model's performance. The results revealed that Gemini achieved high emotion recognition accuracy, especially in distinguishing emotional polarities across all databases. Image preprocessing significantly improved the recognition of basic emotions, though its effect on academic emotion recognition was minor. The confusion in academic emotion recognition stemmed from Gemini's limited understanding of academic emotion features and its insufficient ability to capture contextual cues. Building on the results, this study outlines specific future research directions from both technological and educational perspectives. These findings offer valuable insights for advancing MLLMs in educational applications.
KW - Automated Emotion Recognition
KW - Data science applications in education
KW - Multimodal large language models
KW - Teaching/learning strategies
UR - https://www.scopus.com/pages/publications/105001683314
U2 - 10.1016/j.compedu.2025.105307
DO - 10.1016/j.compedu.2025.105307
M3 - Article
AN - SCOPUS:105001683314
SN - 0360-1315
VL - 232
JO - Computers &amp; Education
JF - Computers &amp; Education
M1 - 105307
ER -