TANet: Text region attention learning for vehicle re-identification

  • Wenbo Hu
  • Hongjian Zhan*
  • Palaiahnakote Shivakumara
  • Umapada Pal
  • Yue Lu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

In recent years, the challenge of distinguishing vehicles of the same model has prompted a shift towards leveraging both global appearance and local features, such as lights and rearview mirrors, for vehicle re-identification (ReID). Despite these advances, accurately identifying vehicles remains difficult, particularly because highly discriminative text regions are underutilized. This paper introduces the Text Region Attention Network (TANet), a novel approach that integrates global and local information with a specific focus on text regions for improved feature learning. TANet captures stable and distinctive features across varying vehicle views, and its effectiveness is demonstrated through rigorous evaluation on the VeRi-776, VehicleID, and VERI-Wild datasets. TANet significantly outperforms existing methods, achieving mAP scores of 83.6% on VeRi-776, 84.4% on VehicleID (Large), and 76.6% on VERI-Wild (Large). Statistical tests further validate the superiority of TANet over the baseline, showing notable improvements in mAP and Top-1 through Top-15 accuracy.
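The abstract describes combining a global appearance feature with attention-weighted local features (here, text regions). The paper's actual architecture is not given in this record, so the following is only a minimal pure-Python sketch of the general idea: region features are pooled with softmax attention weights and concatenated with the global feature. All function names and the toy vectors are hypothetical, not from TANet.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_features(global_feat, region_feats, region_scores):
    """Hypothetical sketch: pool local (e.g. text-region) features with
    attention weights, then concatenate with the global feature vector."""
    weights = softmax(region_scores)
    dim = len(region_feats[0])
    pooled = [sum(w * f[i] for w, f in zip(weights, region_feats))
              for i in range(dim)]
    return global_feat + pooled  # concatenation of global and pooled local

# Toy example: one global vector and two region vectors; the second
# region (imagine a detected text region) gets a higher attention score.
g = [1.0, 2.0]
regions = [[0.0, 1.0], [2.0, 3.0]]
scores = [0.1, 2.0]
fused = fuse_features(g, regions, scores)
```

In a real ReID network the fused descriptor would feed a metric-learning loss; here the sketch only shows how a text-region score can bias the pooled local representation toward the more discriminative region.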

Original language: English
Article number: 108448
Journal: Engineering Applications of Artificial Intelligence
Volume: 133
DOIs
State: Published - Jul 2024

Keywords

  • Part attention
  • Text region
  • Vehicle re-identification
