ProRAG: Towards Reliable and Proficient AIGC-Based Digital Avatar

  • Yongkang Zhou
  • Muyang Yan
  • Junjie Yao*
  • Gang Xu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The concept of a Virtual Human represents an advanced interactive interface that bridges users with digital information, offering an increasingly realistic experience. Recent breakthroughs in Large Language Models (LLMs) and AI-Generated Content (AIGC) have significantly improved the lifelike nature of virtual humans, making them increasingly indistinguishable from real humans. However, this rapid progress raises significant concerns regarding the ethical implications and the reliability of virtual human interactions, particularly in high-stakes, domain-specific scenarios where factual accuracy and trustworthiness are paramount. In response to these challenges, we introduce ProRAG, a novel framework designed to enhance the trustworthiness and reliability of digital avatars. ProRAG combines domain-specific LLMs with innovative strategies to address key challenges such as hallucinations, computational inefficiency, and context stability. Our approach integrates a multimodal knowledge base, consisting of textual, visual, and auditory data, to improve retrieval accuracy and content consistency. Furthermore, ProRAG supports multimodal digital human interactions, facilitating voice, visual, and text communication, which ensures high trust for critical applications. By leveraging adaptive data representation techniques, ProRAG resolves the “Lost in the Middle” challenge, enhancing hallucination suppression and promoting structured knowledge integration. This framework is designed to be scalable and versatile, demonstrating its potential across diverse domains such as education, cultural preservation, and legal consultation, while ensuring the generation of reliable, context-aware content in mission-critical decision-making environments.

Original language: English
Title of host publication: Database Systems for Advanced Applications - 30th International Conference, DASFAA 2025, Proceedings
Editors: Feida Zhu, Ee-Peng Lim, Philip S. Yu, Akiyo Nadamoto, Kyuseok Shim, Wei Ding, Bingxue Zhang
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 408-419
Number of pages: 12
ISBN (Print): 9789819541577
State: Published - 2026
Event: 30th International Conference on Database Systems for Advanced Applications, DASFAA 2025 - Singapore, Singapore
Duration: 26 May 2025 - 29 May 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15991 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 30th International Conference on Database Systems for Advanced Applications, DASFAA 2025
Country/Territory: Singapore
City: Singapore
Period: 26/05/25 - 29/05/25

Keywords

  • Digital Avatar
  • Knowledge Integration
  • Large Language Models
  • Multi-modal Interaction
  • Retrieval Augmented Generation
