RaftNet: Extract task-aware features for pedestrian attribute recognition

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Pedestrian attribute recognition, a multi-task problem, is a popular task in computer vision. End-to-end deep networks that predict all attributes jointly are the standard approach to this problem. To make fuller use of deep neural networks, this paper proposes a novel network structure called the Raft Block. The Raft Block is designed not only to extract task-specific features but also to share features across different tasks. Using the Raft Block, we build an end-to-end network, RaftNet, for pedestrian attribute recognition. Experiments on three public datasets show that the design of the Raft Block is valid and effective. Specifically, we achieve state-of-the-art mean accuracy of 85.64% on Market-1501 and 82.79% on DukeMTMC, and a competitive 72.53% mAP on PA-100K.
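
The abstract describes the Raft Block only at a high level: per-task (task-specific) feature extraction combined with feature sharing across tasks. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the paper's actual Raft Block; the class name `TaskAwareBlock`, the branch structure, and the additive fusion are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class TaskAwareBlock(nn.Module):
    """Illustrative multi-task block (NOT the paper's Raft Block):
    one shared branch plus one branch per task; each task output
    fuses its task-specific features with the shared features."""

    def __init__(self, in_channels, out_channels, num_tasks):
        super().__init__()
        # Shared branch: features reused by every task.
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        # One task-specific branch per attribute-prediction task.
        self.task_branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 3, padding=1),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for _ in range(num_tasks)
        ])

    def forward(self, x):
        shared = self.shared(x)
        # Each task fuses its own features with the shared features.
        return [branch(x) + shared for branch in self.task_branches]


if __name__ == "__main__":
    block = TaskAwareBlock(in_channels=64, out_channels=128, num_tasks=3)
    feats = block(torch.randn(2, 64, 32, 16))  # one feature map per task
    print([tuple(f.shape) for f in feats])
```

In a full attribute-recognition network, blocks like this would typically be stacked after a shared backbone, with each task-specific feature map feeding its own attribute classifier; how the actual RaftNet arranges and fuses these branches is specified in the paper itself, not in this abstract.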

Original language: English
Title of host publication: Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence, CSAI 2018 - 2018 the 10th International Conference on Information and Multimedia Technology, ICIMT 2018
Publisher: Association for Computing Machinery
Pages: 286-290
Number of pages: 5
ISBN (Electronic): 9781450366069
State: Published - 8 Dec 2018
Externally published: Yes
Event: 2nd International Conference on Computer Science and Artificial Intelligence, CSAI 2018 - Shenzhen, China
Duration: 8 Dec 2018 – 10 Dec 2018

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 2nd International Conference on Computer Science and Artificial Intelligence, CSAI 2018
Country/Territory: China
City: Shenzhen
Period: 8/12/18 – 10/12/18

Keywords

  • Attribute Recognition
  • Computer Vision
  • Multi-task Learning
