A visual attention based approach to text extraction

  • Qiaoyu Sun*
  • Yue Lu
  • Shiliang Sun

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

25 Scopus citations

Abstract

A visual attention based approach is proposed to extract text from complicated backgrounds in camera-based images. First, it applies a simplified visual attention model to highlight the regions of interest (ROIs) in an input image and to yield a map, named the VA map, consisting of the ROIs. Second, an edge map of the image containing the edge information of four directions is obtained by Sobel operators. Character areas are detected by connected component analysis and merged into candidate text regions. Finally, the VA map is employed to confirm the candidate text regions. The experimental results demonstrate that the proposed method can effectively extract text information and locate text regions contained in camera-based images. It is robust not only to font, size, color, language, spacing, alignment, and background complexity, but also to perspective distortion and skewed text embedded in images.
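The second stage of the abstract (a four-direction Sobel edge map followed by connected-component analysis) can be sketched as below. This is an illustrative reconstruction in pure Python, not the authors' implementation: the kernels, threshold, and toy image are assumptions, and the VA-map stage is omitted.

```python
# Four 3x3 Sobel kernels: horizontal, vertical, and two diagonal
# directions, as the abstract describes an edge map built from
# edge information of four directions.
SOBEL_KERNELS = [
    [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],   # horizontal edges
    [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],   # vertical edges
    [[-2, -1, 0], [-1, 0, 1], [0, 1, 2]],   # 45-degree diagonal
    [[0, -1, -2], [1, 0, -1], [2, 1, 0]],   # 135-degree diagonal
]

def edge_map(img):
    """Max absolute response over the four Sobel directions (border skipped)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = max(
                abs(sum(k[dy][dx] * img[y + dy - 1][x + dx - 1]
                        for dy in range(3) for dx in range(3)))
                for k in SOBEL_KERNELS)
    return out

def connected_components(mask):
    """4-connected component labeling via iterative flood fill."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = count
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels, count

# Toy image: a bright vertical bar (a crude stand-in for a character stroke).
img = [[255 if 3 <= x <= 5 else 0 for x in range(10)] for _ in range(10)]
edges = edge_map(img)
mask = [[1 if v > 400 else 0 for v in row] for row in edges]  # assumed threshold
labels, n = connected_components(mask)  # n candidate regions to merge/confirm
```

In the paper's pipeline, each resulting component would be merged with its neighbors into candidate text regions and then confirmed against the VA map.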

Original language: English
Title of host publication: Proceedings - 2010 20th International Conference on Pattern Recognition, ICPR 2010
Pages: 3991-3994
Number of pages: 4
DOIs
State: Published - 2010
Event: 2010 20th International Conference on Pattern Recognition, ICPR 2010 - Istanbul, Turkey
Duration: 23 Aug 2010 - 26 Aug 2010

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651

Conference

Conference: 2010 20th International Conference on Pattern Recognition, ICPR 2010
Country/Territory: Turkey
City: Istanbul
Period: 23/08/10 - 26/08/10
