Unifying visual saliency with HOG feature learning for traffic sign detection

  • Yuan Xie*
  • Li Feng Liu
  • Cui Hua Li
  • Yan Yun Qu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

96 Scopus citations

Abstract

Traffic sign detection is important for robotic vehicles that drive autonomously on roads. In this paper, an efficient novel approach inspired by the human visual process is proposed to achieve automatic traffic sign detection. The detection method combines bottom-up extraction of traffic sign saliency regions with a learning-based, top-down search guided by traffic sign features. The bottom-up stage obtains saliency regions of traffic signs and achieves computational parsimony using an improved Model of Saliency-Based Visual Attention. The top-down stage searches for traffic signs within these saliency regions using the Histogram of Oriented Gradients (HOG) feature and a Support Vector Machine (SVM) classifier. Experimental results show that the proposed approach is robust to changes in illumination, scale, pose, and viewpoint, and even to partial occlusion. The smallest detected traffic sign size is 14×14 pixels; on the test image data set, the average detection rate is 98.3% and the false positive rate is 5.09%.
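The HOG feature used in the top-down stage can be illustrated with a minimal sketch: divide the image into cells, accumulate a histogram of gradient orientations weighted by gradient magnitude in each cell, and concatenate the normalized histograms. This is only a simplified illustration of the general HOG idea (the cell size, bin count, and normalization below are illustrative choices, not the paper's parameters; full HOG also uses overlapping block normalization):

```python
import math

def hog_descriptor(image, cell=4, bins=8):
    """Simplified HOG: per-cell histograms of gradient orientations,
    weighted by gradient magnitude and L2-normalized per cell.
    (Sketch only; real HOG adds overlapping block normalization.)"""
    h, w = len(image), len(image[0])
    descriptor = []
    for cy in range(0, h - cell + 1, cell):
        for cx in range(0, w - cell + 1, cell):
            hist = [0.0] * bins
            for y in range(cy, cy + cell):
                for x in range(cx, cx + cell):
                    # Central differences with clamped borders.
                    gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
                    gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
                    mag = math.hypot(gx, gy)
                    ang = math.atan2(gy, gx) % math.pi  # unsigned orientation
                    hist[min(int(ang / math.pi * bins), bins - 1)] += mag
            # L2-normalize each cell histogram.
            norm = math.sqrt(sum(v * v for v in hist)) or 1.0
            descriptor.extend(v / norm for v in hist)
    return descriptor

# Toy 8x8 grayscale image with a vertical edge: left half dark, right half bright.
img = [[0] * 4 + [255] * 4 for _ in range(8)]
desc = hog_descriptor(img)
print(len(desc))  # 4 cells x 8 bins = 32 values
```

In a detector like the one described, such descriptors computed over candidate windows inside the saliency regions would be fed to a trained SVM to accept or reject each window as a traffic sign.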

Original language: English
Title of host publication: 2009 IEEE Intelligent Vehicles Symposium
Pages: 24-29
Number of pages: 6
DOIs
State: Published - 2009
Externally published: Yes
Event: 2009 IEEE Intelligent Vehicles Symposium - Xi'an, China
Duration: 3 Jun 2009 - 5 Jun 2009

Publication series

Name: IEEE Intelligent Vehicles Symposium, Proceedings

Conference

Conference: 2009 IEEE Intelligent Vehicles Symposium
Country/Territory: China
City: Xi'an
Period: 3/06/09 - 5/06/09

