TY - JOUR
T1 - Trustworthy AI in education
T2 - Framework, cases, and governance strategies
AU - Ma, Yiping
AU - Li, Xinjin
AU - Hu, Shiyu
AU - Liu, Shiqing
AU - Cheong, Kang Hao
N1 - Publisher Copyright:
© 2025 World Scientific Publishing Co.
PY - 2025
Y1 - 2025
N2 - Artificial intelligence (AI) in education has triggered significant debates about ethics, fairness, and accountability. This article introduces a five-dimensional trust framework to evaluate educational AI systems across five key domains: privacy, safety, fairness, explainability, and accountability. Through a comparative analysis of four representative cases—a virtual teaching assistant, an adaptive learning platform, an algorithmic grading system, and an AI-based proctoring tool—we identify recurring trust-related risks and systemic governance vulnerabilities, such as insufficient oversight, lack of transparency, and unclear accountability mechanisms. The analysis reveals risks such as data breaches, bias, opaque decision-making, and ambiguous responsibility. To address these issues, we propose multi-stakeholder governance strategies, including ethical-by-design principles, institutional oversight, and regulatory standards. This study concludes by outlining major avenues for future research, such as developing explainable AI systems tailored to educational settings, constructing trust models for human–AI collaboration, and evaluating the enduring impacts of AI governance structures.
AB - Artificial intelligence (AI) in education has triggered significant debates about ethics, fairness, and accountability. This article introduces a five-dimensional trust framework to evaluate educational AI systems across five key domains: privacy, safety, fairness, explainability, and accountability. Through a comparative analysis of four representative cases—a virtual teaching assistant, an adaptive learning platform, an algorithmic grading system, and an AI-based proctoring tool—we identify recurring trust-related risks and systemic governance vulnerabilities, such as insufficient oversight, lack of transparency, and unclear accountability mechanisms. The analysis reveals risks such as data breaches, bias, opaque decision-making, and ambiguous responsibility. To address these issues, we propose multi-stakeholder governance strategies, including ethical-by-design principles, institutional oversight, and regulatory standards. This study concludes by outlining major avenues for future research, such as developing explainable AI systems tailored to educational settings, constructing trust models for human–AI collaboration, and evaluating the enduring impacts of AI governance structures.
KW - Algorithmic Fairness
KW - Educational Governance
KW - Explainable AI in Education
KW - Multi-Stakeholder Framework
KW - Trustworthy Artificial Intelligence
UR - https://www.scopus.com/pages/publications/105020410944
U2 - 10.1142/S2737599425500264
DO - 10.1142/S2737599425500264
M3 - Article
AN - SCOPUS:105020410944
SN - 2737-5994
VL - 12
JO - Innovation and Emerging Technologies
JF - Innovation and Emerging Technologies
M1 - 2550026
ER -