Text location in scene images using visual attention model

Qiao Yu Sun*, Yue Lu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Locating text regions in natural scene images is highly useful for understanding the semantic content of an image, and plays an important role in many applications such as image retrieval, image categorization, and social media processing. Traditional approaches rely on low-level image features to progressively locate candidate text regions. However, these approaches often fail on cluttered backgrounds, because the adopted low-level features are fairly simple and may not reliably distinguish text regions from background clutter. Motivated by recent research on visual attention models, saliency detection is revisited in this paper. For the case of text detection in natural scene images, the saliency map is analyzed and adjusted accordingly. Using the adjusted saliency map, the candidate text regions detected with common low-level features are further verified. Moreover, an efficient low-level text feature, the Histogram of Edge-direction (HOE), is adopted, which statistically describes the edge-direction information of a region of interest in the image. Encouraging experimental results have been obtained on natural scene images containing text in various languages.
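As a rough illustration of the Histogram of Edge-direction (HOE) feature described in the abstract, the sketch below computes gradient orientations over a grayscale region and accumulates the strong edge pixels into a fixed number of direction bins. The function name, bin count, and magnitude threshold are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def edge_direction_histogram(gray, n_bins=8, mag_thresh=0.1):
    """Sketch of an HOE-style descriptor (assumed parameters):
    quantize gradient directions of strong edge pixels into n_bins bins
    and return the normalized histogram."""
    gray = gray.astype(np.float64)
    # Central-difference gradients (a simple stand-in for a Sobel operator)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # orientation in [-pi, pi]
    # Keep only pixels whose edge strength exceeds a fraction of the maximum
    strong = mag > mag_thresh * (mag.max() + 1e-12)
    # Map [-pi, pi] onto bin indices 0..n_bins-1
    bins = ((ang[strong] + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins).astype(np.float64)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Usage: a synthetic region with a single vertical edge concentrates
# all edge pixels in one direction bin.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
h = edge_direction_histogram(img)
```

A text region typically yields a histogram with pronounced peaks in a few direction bins (strokes are dominated by a small set of orientations), whereas cluttered background tends to spread mass across all bins; this contrast is what makes an edge-direction statistic a useful verification feature.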

Original language: English
Article number: 12550087
Journal: International Journal of Pattern Recognition and Artificial Intelligence
Volume: 26
Issue number: 4
State: Published - Jun 2012

Keywords

  • Text location
  • connected component analysis
  • edge map
  • histogram of edge direction
  • visual attention
