Exploring the prospects of multimodal large language models for Automated Emotion Recognition in education: Insights from Gemini

Shuzhen Yu, Alexey Androsov*, Hanbing Yan*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Emotions play a pivotal role in daily judgments and decision-making, particularly in educational settings, where understanding and responding to learners’ emotions is essential for personalized learning. While there has been growing interest in emotion recognition, traditional methods, such as manual observation and self-reports, are often subjective and time-consuming. The rise of AI has led to the development of Automated Emotion Recognition (AER), offering transformative opportunities for educational reform by enabling personalized learning through emotional insights. However, AER continues to face challenges, including reliance on large-scale labeled databases, limited flexibility, and inadequate adaptation to diverse educational contexts. Recent advancements in AI, particularly Multimodal Large Language Models (MLLMs), show promise in addressing these challenges, though their application in AER remains underexplored. This study aimed to fill this gap by systematically evaluating the performance of Gemini, a pioneering MLLM, in image-based AER tasks across five databases: CK+, FER-2013, RAF-DB, OL-SFED, and DAiSEE. The analysis examined recognition accuracy, error patterns, emotion inference mechanisms, and the impact of image preprocessing techniques (such as face cropping, bilinear interpolation, and super-resolution) on the model's performance. The results revealed that Gemini achieved high emotion recognition accuracy, especially in distinguishing emotional polarities, across all databases. Image preprocessing significantly improved the recognition of basic emotions, though its effect on academic emotion recognition was minor. The confusion in academic emotion recognition stemmed from Gemini's limited understanding of academic emotion features and its insufficient ability to capture contextual cues. Building on these results, the study outlines specific future research directions from both technological and educational perspectives. These findings offer valuable insights for advancing MLLMs in educational applications.
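The abstract names bilinear interpolation as one of the preprocessing steps applied before the images are passed to the model. As an illustrative sketch only (not the authors' actual pipeline, which is not described here), bilinear interpolation resizes a face crop by weighting the four nearest source pixels for each output pixel:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation."""
    in_h, in_w = img.shape
    # Map each output pixel back to fractional source coordinates.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)   # clamp at the image border
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]             # vertical fractional weights
    wx = (xs - x0)[None, :]             # horizontal fractional weights
    # Interpolate horizontally along the top and bottom neighbor rows,
    # then blend the two results vertically.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

In practice a library routine such as OpenCV's `cv2.resize` with `interpolation=cv2.INTER_LINEAR` would be used; the hand-written version above only makes the weighting scheme explicit.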

Original language: English
Article number: 105307
Journal: Computers and Education
Volume: 232
State: Published - Jul 2025

Keywords

  • Automated Emotion Recognition
  • Data science applications in education
  • Multimodal large language models
  • Teaching/learning strategies

