Adaptive weight multi-channel center similar deep hashing

  • Xinghua Liu
  • Guitao Cao
  • Qiubin Lin
  • Wenming Cao*

*Corresponding author for this work

Research output: Contribution to journal (Article, peer-review)

Abstract

To increase the richness of the feature information extracted from the text modality and to explore the semantic similarity between modalities in depth, we propose a novel method named adaptive weight multi-channel center similar deep hashing (AMCDH). The algorithm first uses three channels with different configurations to extract feature information from the text modality, then combines them according to learned weight ratios to enrich the information. We also introduce the Jaccard coefficient to measure the level of semantic similarity between modalities on a scale from 0 to 1, and use it as the penalty coefficient of the cross-entropy loss function to strengthen its role in backpropagation. In addition, we propose a method of constructing center similarity, which draws the hash codes of similar data pairs toward the same center point and scatters dissimilar data pairs across different center points, producing high-quality hash codes. Extensive experimental evaluations on four benchmark datasets show that AMCDH significantly outperforms competing baselines. The code is available at https://github.com/DaveLiu6/AMCDH.git.
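The abstract's use of the Jaccard coefficient as a penalty weight can be illustrated with a minimal sketch. The paper's actual loss is defined in the linked repository; here the function names are illustrative, and the sketch only shows the core idea of computing the Jaccard coefficient of two multi-label vectors and scaling a pairwise cross-entropy term by it, so that strongly related pairs contribute more to backpropagation.

```python
import numpy as np

def jaccard_similarity(labels_a, labels_b):
    """Jaccard coefficient of two binary label vectors, in [0, 1]."""
    a = np.asarray(labels_a, dtype=bool)
    b = np.asarray(labels_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0  # no labels at all: treat as dissimilar
    return np.logical_and(a, b).sum() / union

def jaccard_weighted_bce(p_similar, y_similar, jaccard):
    """Pairwise cross-entropy scaled by the Jaccard coefficient.

    p_similar: model's predicted probability that the pair is similar.
    y_similar: 1 if the pair shares at least one label, else 0.
    jaccard:   penalty coefficient from jaccard_similarity().
    """
    eps = 1e-12  # avoid log(0)
    bce = -(y_similar * np.log(p_similar + eps)
            + (1 - y_similar) * np.log(1 - p_similar + eps))
    return jaccard * bce

# Example: two items sharing 2 of 3 active labels.
j = jaccard_similarity([1, 0, 1, 0], [1, 1, 1, 0])  # 2/3 ≈ 0.667
loss = jaccard_weighted_bce(0.9, 1, j)
```

Under this weighting, a pair with a high label overlap is penalized more heavily for a wrong similarity prediction than a pair with only marginal overlap, which matches the abstract's stated goal of grading similarity rather than treating it as binary.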

Original language: English
Article number: 103642
Journal: Journal of Visual Communication and Image Representation
Volume: 89
DOIs
State: Published - Nov 2022

Keywords

  • Center similar
  • Deep cross-modal hashing
  • Multi-channel
  • Multimodal retrieval

