DualToken-ViT: Position-aware Efficient Vision Transformer with Dual Token Fusion

Zhenzhen Chu, Jiayu Chen, Cen Chen, Chengyu Wang, Ziheng Wu, Jun Huang, Weining Qian

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

Self-attention-based vision transformers (ViTs) have emerged as a highly competitive architecture in computer vision. Unlike convolutional neural networks (CNNs), ViTs are capable of global information sharing, and with the development of various ViT structures they have become increasingly advantageous for many vision tasks. However, the quadratic complexity of self-attention renders ViTs computationally intensive, and their lack of the inductive biases of locality and translation equivariance demands larger model sizes than CNNs to effectively learn visual features. In this paper, we propose a lightweight and efficient vision transformer model called DualToken-ViT that leverages the advantages of both CNNs and ViTs. DualToken-ViT fuses tokens carrying local information, obtained by a convolution-based structure, with tokens carrying global information, obtained by a self-attention-based structure, to achieve an efficient attention structure. In addition, we use position-aware global tokens throughout all stages to enrich the global information, further strengthening the effect of DualToken-ViT. The position-aware global tokens also contain the position information of the image, which benefits vision tasks. We conducted extensive experiments on image classification, object detection and semantic segmentation tasks to demonstrate the effectiveness of DualToken-ViT. On the ImageNet-1K dataset, our models of different scales achieve accuracies of 75.4% and 79.4% with only 0.5G and 1.0G FLOPs, respectively, and our model with 1.0G FLOPs outperforms LightViT-T, which also uses global tokens, by 0.7%.
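To illustrate the dual-token idea described in the abstract, below is a minimal PyTorch sketch of a block that fuses a convolutional local branch with a global-token attention branch. The module name, dimensions, use of learnable tokens as a stand-in for the position-aware global tokens, and the 1x1-convolution fusion rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DualTokenBlockSketch(nn.Module):
    """Toy block fusing a convolutional local branch with a global-token branch."""

    def __init__(self, dim=64, num_global_tokens=8, num_heads=4):
        super().__init__()
        # Local branch: depthwise + pointwise convolution over the feature map.
        self.local = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim),
            nn.Conv2d(dim, dim, kernel_size=1),
        )
        # Learnable tokens standing in for the paper's position-aware global tokens.
        self.global_tokens = nn.Parameter(torch.randn(1, num_global_tokens, dim))
        # Global branch: image tokens query the global tokens via cross-attention.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Fusion: 1x1 convolution over the concatenated local and global features.
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.local(x)                      # local information from convolution

        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C) image tokens
        g = self.global_tokens.expand(b, -1, -1)   # (B, N, C) global tokens
        global_ctx, _ = self.attn(self.norm(tokens), g, g)  # gather global information
        global_ctx = global_ctx.transpose(1, 2).reshape(b, c, h, w)

        # Fuse local and global features, with a residual connection.
        return self.fuse(torch.cat([local, global_ctx], dim=1)) + x


if __name__ == "__main__":
    block = DualTokenBlockSketch(dim=64)
    out = block(torch.randn(2, 64, 14, 14))
    print(out.shape)  # torch.Size([2, 64, 14, 14])
```

In this sketch the attention cost scales with the number of global tokens rather than quadratically with the number of image tokens, which is the kind of efficiency the abstract attributes to the fused attention structure.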

Original language: English
Title of host publication: Proceedings of the 2024 SIAM International Conference on Data Mining, SDM 2024
Editors: Shashi Shekhar, Vagelis Papalexakis, Jing Gao, Zhe Jiang, Matteo Riondato
Publisher: Society for Industrial and Applied Mathematics Publications
Pages: 688-696
Number of pages: 9
ISBN (Electronic): 9781611978032
State: Published - 2024
Event: 2024 SIAM International Conference on Data Mining, SDM 2024 - Houston, United States
Duration: 18 Apr 2024 - 20 Apr 2024

Publication series

Name: Proceedings of the 2024 SIAM International Conference on Data Mining, SDM 2024

Conference

Conference: 2024 SIAM International Conference on Data Mining, SDM 2024
Country/Territory: United States
City: Houston
Period: 18/04/24 - 20/04/24

Keywords

  • Attention
  • Vision Transformers
