Cross-coupled prompt learning for few-shot image recognition

  • Fangyuan Zhang
  • Rukai Wei
  • Yanzhao Xie*
  • Yangtao Wang
  • Xin Tan
  • Lizhuang Ma
  • Maobin Tang
  • Lisheng Fan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Prompt learning based on large models shows great potential to reduce training time and resource costs, and has been progressively applied to visual tasks such as image recognition. Nevertheless, existing prompt learning schemes suffer from either inadequate prompt information from a single modality or insufficient prompt interaction between modalities, resulting in low efficiency and performance. To address these limitations, we propose a Cross-Coupled Prompt Learning (CCPL) architecture, designed with two novel components (i.e., a Cross-Coupled Prompt Generator (CCPG) module and a Cross-Modal Fusion (CMF) module) to achieve efficient interaction between visual and textual prompts. Specifically, the CCPG module incorporates a cross-attention mechanism to automatically generate visual and textual prompts, each of which is adaptively updated via the self-attention mechanism in its respective image or text encoder. Furthermore, the CMF module implements a deep fusion to reinforce cross-modal feature interaction at the output layer with the Image–Text Matching (ITM) loss function. We conduct extensive experiments on 8 image datasets. The experimental results verify that our proposed CCPL outperforms state-of-the-art (SOTA) methods on few-shot image recognition tasks. The source code of this project is released at: https://github.com/elegantTechie/CCPL.
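The abstract's core idea — learnable prompts for one modality generated by cross-attending to features of the other modality — can be sketched as below. This is a minimal illustration under our own assumptions (learnable prompt queries, a shared embedding dimension, PyTorch's `nn.MultiheadAttention`); the class and parameter names are hypothetical and not taken from the paper's implementation.

```python
# Minimal sketch of cross-coupled prompt generation (CCPG-style),
# assuming learnable prompt queries that cross-attend to the other
# modality's features. All dimensions and names are illustrative.
import torch
import torch.nn as nn

class CrossCoupledPromptGenerator(nn.Module):
    def __init__(self, dim=64, n_prompts=4, n_heads=4):
        super().__init__()
        # Learnable prompt tokens for each modality (hypothetical design).
        self.visual_queries = nn.Parameter(torch.randn(n_prompts, dim))
        self.text_queries = nn.Parameter(torch.randn(n_prompts, dim))
        # Cross-attention: prompts of one modality attend to the other's features.
        self.v_from_t = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.t_from_v = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, image_feats, text_feats):
        # image_feats: (B, N_img, dim); text_feats: (B, N_txt, dim)
        b = image_feats.size(0)
        vq = self.visual_queries.unsqueeze(0).expand(b, -1, -1)
        tq = self.text_queries.unsqueeze(0).expand(b, -1, -1)
        # Visual prompts conditioned on textual features, and vice versa;
        # these would then be prepended to each encoder's token sequence.
        visual_prompts, _ = self.v_from_t(vq, text_feats, text_feats)
        textual_prompts, _ = self.t_from_v(tq, image_feats, image_feats)
        return visual_prompts, textual_prompts

gen = CrossCoupledPromptGenerator()
img = torch.randn(2, 49, 64)   # e.g. 7x7 patch features
txt = torch.randn(2, 16, 64)   # e.g. token features
vp, tp = gen(img, txt)
print(vp.shape, tp.shape)  # torch.Size([2, 4, 64]) torch.Size([2, 4, 64])
```

In a full system the generated prompts would be inserted into the image and text encoders (where self-attention updates them, per the abstract) and trained jointly with an ITM-style matching loss; those stages are omitted here.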

Original language: English
Article number: 102862
Journal: Displays
Volume: 85
State: Published - Dec 2024
Externally published: Yes

Keywords

  • Cross-attention
  • Few-shot
  • Image recognition
  • Prompt learning

