Letting Uncertainty Guide Your Multimodal Machine Translation

  • Wuyi Liu
  • Yue Gao
  • Yige Mao
  • Jing Zhao*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Multimodal Machine Translation (MMT) leverages additional modalities, such as visual data, to enhance translation accuracy and resolve linguistic ambiguities inherent in text-only approaches. Recent advances predominantly focus on integrating image information via attention mechanisms or feature fusion techniques. However, current approaches lack explicit mechanisms to quantify and manage uncertainty during the translation process, leaving the utilization of image information a black box. This makes it difficult to address the incomplete exploitation of visual information, and even the potential degradation of translation quality that visual input can cause. To address these challenges, we introduce a novel Uncertainty-Guided Multimodal Machine Translation (UG-MMT) framework that redefines how translation systems handle ambiguity through systematic uncertainty reduction. Designed with plug-and-play flexibility, our framework enables seamless integration into existing MMT systems, requiring minimal modification while delivering significant performance gains.
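The abstract does not specify how UG-MMT quantifies uncertainty or injects it into the fusion step. As one purely illustrative reading, the sketch below gates visual features by the normalized entropy of the decoder's token distribution: high entropy (an uncertain text-only prediction) lets more visual signal through, low entropy suppresses it. This is a minimal sketch under those assumptions, not the paper's method; every name here (`UncertaintyGate`, `text_hidden`, `image_feats`) is hypothetical.

```python
# Illustrative sketch only -- NOT the UG-MMT architecture from the paper.
# Assumption: visual context has already been aligned to each target step.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UncertaintyGate(nn.Module):
    """Scale visual features by the decoder's predictive uncertainty."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        # In a real system this projection would be tied to the decoder's
        # output head; here it is a standalone layer for self-containment.
        self.proj = nn.Linear(hidden_dim, vocab_size)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, text_hidden: torch.Tensor,
                image_feats: torch.Tensor) -> torch.Tensor:
        # text_hidden: (batch, tgt_len, hidden_dim) decoder states
        # image_feats: (batch, tgt_len, hidden_dim) per-step visual context
        logits = self.proj(text_hidden)
        probs = F.softmax(logits, dim=-1)
        # Shannon entropy over the vocabulary, normalized to [0, 1]
        # so that 1 means maximally uncertain.
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1, keepdim=True)
        gate = entropy / torch.log(torch.tensor(float(logits.size(-1))))
        # Fuse text states with uncertainty-weighted visual features.
        fused = self.fuse(torch.cat([text_hidden, gate * image_feats], dim=-1))
        # Residual connection: with a near-zero gate the module reduces
        # to (approximately) the text-only path.
        return text_hidden + fused
```

The residual path is a deliberate choice in this sketch: it lets the module wrap an existing text-only decoder without retraining it from scratch, which is consistent with the plug-and-play integration the abstract claims, though the actual mechanism in the paper may differ.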

Original language: English
Pages (from-to): 2701-2710
Number of pages: 10
Journal: Proceedings of Machine Learning Research
Volume: 286
State: Published - 2025
Event: 41st Conference on Uncertainty in Artificial Intelligence, UAI 2025 - Rio de Janeiro, Brazil
Duration: 21 Jul 2025 - 25 Jul 2025
