MoDA: Mixture of Domain Adapters for Parameter-efficient Generalizable Person Re-identification

  • Yang Wang
  • Yixing Zhang*
  • Xudie Ren
  • Yuxin Deng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

The Domain Generalizable Re-identification (DG ReID) task has attracted significant attention in recent years as a challenging problem that is closely aligned with practical applications. Mixture-of-experts (MoE)-based methods have been studied for DG ReID to exploit both the discrepancies and the inherent correlations between diverse domains. However, most DG ReID methods, especially MoE-based ones, must fully fine-tune a large number of parameters, which is not always practical in real-world scenarios. To address this problem, we propose a novel MoE-based DG ReID method, named Mixture of Domain Adapters (MoDA), which uses multiple expert adapters and a global adapter to scale MoE-based methods to much larger models in a more parameter-efficient way. Furthermore, we build our approach on the large-scale vision-language pre-trained model CLIP, exploiting both its visual and text encoders to learn more robust representations from multimodal information. Extensive experiments verify the effectiveness of our method and show that MoDA is competitive with state-of-the-art DG ReID methods while using far fewer tunable parameters.
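To illustrate the architecture the abstract describes, here is a minimal NumPy sketch of a mixture-of-domain-adapters forward pass: a router softly weights several bottleneck "expert" adapters (one per source domain) and adds a shared global adapter, all applied residually to a frozen-backbone feature. All names, dimensions, and the soft-routing choice are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapter(x, down, up):
    # Hypothetical bottleneck adapter: down-project, ReLU, up-project.
    # The residual connection is added by the caller.
    return np.maximum(x @ down, 0.0) @ up

d, r, n_experts = 16, 4, 3  # feature dim, bottleneck dim, number of domain experts

# Assumed parameters: one adapter per source domain plus one shared global adapter.
experts = [(rng.standard_normal((d, r)) * 0.1, rng.standard_normal((r, d)) * 0.1)
           for _ in range(n_experts)]
global_adapter = (rng.standard_normal((d, r)) * 0.1,
                  rng.standard_normal((r, d)) * 0.1)
gate_w = rng.standard_normal((d, n_experts)) * 0.1  # router over domain experts

def moda_forward(x):
    # Router assigns soft weights over the domain-expert adapters (softmax).
    logits = x @ gate_w
    gates = np.exp(logits - logits.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)
    # Weighted sum of expert-adapter outputs plus the shared global adapter,
    # added residually to the (frozen) backbone feature x.
    expert_out = sum(gates[:, i:i + 1] * adapter(x, dn, up)
                     for i, (dn, up) in enumerate(experts))
    return x + expert_out + adapter(x, *global_adapter)

features = rng.standard_normal((2, d))  # batch of backbone features
out = moda_forward(features)
print(out.shape)  # (2, 16)
```

Only the small adapter and router matrices would be trained in such a setup, which is where the parameter efficiency comes from: the backbone (e.g. CLIP's encoders) stays frozen.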

Original language: English
Article number: 139
Journal: ACM Transactions on Multimedia Computing, Communications and Applications
Volume: 21
Issue number: 5
State: Published - 22 May 2025

Keywords

  • Domain Generalization
  • Generalizable Person Re-Identification
  • Parameter-efficient Fine-tuning
