Trustworthy AI in education: Framework, cases, and governance strategies

Yiping Ma, Xinjin Li, Shiyu Hu, Shiqing Liu*, Kang Hao Cheong*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Artificial intelligence (AI) in education has triggered significant debate about ethics, fairness, and accountability. This article introduces a trust framework for evaluating educational AI systems along five dimensions: privacy, safety, fairness, explainability, and accountability. Through a comparative analysis of four representative cases (a virtual teaching assistant, an adaptive learning platform, an algorithmic grading system, and an AI-based proctoring tool), we identify recurring trust risks, including data breaches, bias, and opaque decision-making, as well as systemic governance vulnerabilities such as insufficient oversight, lack of transparency, and ambiguous accountability. To address these issues, we propose multi-stakeholder governance strategies, including ethics-by-design principles, institutional oversight, and regulatory standards. The study concludes by outlining major avenues for future research, such as developing explainable AI systems tailored to educational settings, constructing trust models for human–AI collaboration, and evaluating the long-term impacts of AI governance structures.
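
As a minimal, hypothetical sketch of how the five-dimensional framework could be operationalized, consider the Python record below. The dimension names come from the abstract; the 1–5 scoring scale, the risk threshold, and the illustrative scores for the proctoring case are assumptions for demonstration only, not values from the article.

# Hypothetical sketch: scoring an educational AI system on the five
# trust dimensions named in the abstract. Scale, threshold, and the
# example scores are illustrative assumptions, not the paper's data.
from dataclasses import dataclass

DIMENSIONS = ("privacy", "safety", "fairness", "explainability", "accountability")

@dataclass
class TrustAssessment:
    system: str
    scores: dict[str, int]  # assumed scale: 1 (high risk) .. 5 (low risk)

    def weakest_dimensions(self, threshold: int = 3) -> list[str]:
        """Flag dimensions scoring at or below the assumed risk threshold."""
        return [d for d in DIMENSIONS if self.scores[d] <= threshold]

# Placeholder scores for one of the four case types discussed in the article.
proctoring = TrustAssessment(
    system="AI-based proctoring tool",
    scores={"privacy": 2, "safety": 4, "fairness": 2,
            "explainability": 2, "accountability": 4},
)
print(proctoring.weakest_dimensions())  # ['privacy', 'fairness', 'explainability']

A tabular structure like this keeps each case's assessment comparable across dimensions, which mirrors the comparative analysis the abstract describes; any real instrument would need validated rubrics behind each score.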

Original language: English
Article number: 2550026
Journal: Innovation and Emerging Technologies
Volume: 12
DOIs
State: Published - 2025

Keywords

  • Algorithmic Fairness
  • Educational Governance
  • Explainable AI in Education
  • Multi-Stakeholder Framework
  • Trustworthy Artificial Intelligence

