Temporally coherent video saliency using regional dynamic contrast

Yong Li, Bin Sheng, Lizhuang Ma, Wen Wu, Zhifeng Xie

Research output: Contribution to journal › Article › peer-review

26 Scopus citations

Abstract

Saliency detection for images and videos has become increasingly popular owing to its wide applicability. In this paper, we present a new method that exploits region-based visual dynamic contrast to generate temporally coherent video saliency maps. The concept of visual dynamics is formulated to represent both the visual and the motion variability of video content, and regions are treated as the primitives for saliency computation via spatiotemporal appearance contrasts. Region matching is then performed across successive video frames to form temporally coherent regions; matches are determined by the spatiotemporal similarity of the regions' visual dynamics along the video's optical flow. This matching effectively eliminates saliency discontinuities, particularly in oversegmented areas that are otherwise highly problematic. The proposed approach is tested on a challenging set of video sequences and compared with contemporary methods, demonstrating superior performance in both computational efficiency and the ability to detect salient video content.
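The abstract outlines a three-stage pipeline: segment each frame into regions, score each region by spatiotemporal appearance contrast, and match regions across frames along the optical flow to smooth the saliency temporally. The Python sketch below is a loose illustration of that pipeline under stated assumptions, not the authors' implementation: SLIC superpixels stand in for the paper's GPU-based segmentation, Farneback flow stands in for its optical-flow stage, and the contrast formula, matching rule, and `alpha` smoothing weight are hypothetical simplifications.

```python
# Minimal sketch of a region-contrast + flow-matched saliency pipeline.
# All names and parameters here are illustrative stand-ins, not the
# paper's actual formulation.
import numpy as np
import cv2
from skimage.segmentation import slic

def region_saliency(frame_lab, labels):
    """Per-region appearance contrast: each region's score is its color
    distance to all other regions, weighted by spatial proximity."""
    n = labels.max() + 1
    means = np.zeros((n, 3))
    centroids = np.zeros((n, 2))
    for r in range(n):
        mask = labels == r
        if not mask.any():
            continue
        means[r] = frame_lab[mask].mean(axis=0)
        ys, xs = np.nonzero(mask)
        centroids[r] = (ys.mean(), xs.mean())
    diag = np.hypot(*frame_lab.shape[:2])
    color_d = np.linalg.norm(means[:, None] - means[None, :], axis=2)
    spatial_w = np.exp(-np.linalg.norm(
        centroids[:, None] - centroids[None, :], axis=2) / diag)
    sal = (color_d * spatial_w).sum(axis=1)
    return sal / (sal.max() + 1e-8), centroids

def match_regions(centroids_prev, flow, labels_next):
    """Propagate each previous region's centroid along the optical flow
    and match it to the region it lands in on the next frame."""
    matches = []
    for (y, x) in centroids_prev:
        dx, dy = flow[int(y), int(x)]
        ny = int(np.clip(y + dy, 0, labels_next.shape[0] - 1))
        nx = int(np.clip(x + dx, 0, labels_next.shape[1] - 1))
        matches.append(labels_next[ny, nx])
    return matches

def video_saliency(frames, alpha=0.6, n_segments=200):
    """Temporally coherent saliency: blend each region's static contrast
    with the score inherited from its matched region in the previous
    frame (alpha is a hypothetical smoothing weight)."""
    prev_sal = prev_centroids = prev_gray = None
    maps = []
    for frame in frames:
        lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB).astype(np.float64)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        labels = slic(frame, n_segments=n_segments, start_label=0)
        sal, centroids = region_saliency(lab, labels)
        if prev_sal is not None:
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            for r_prev, r_next in enumerate(
                    match_regions(prev_centroids, flow, labels)):
                # Temporal coherence: pull matched regions toward history.
                sal[r_next] = alpha * sal[r_next] + (1 - alpha) * prev_sal[r_prev]
        maps.append(sal[labels])  # broadcast region scores to a pixel map
        prev_sal, prev_centroids, prev_gray = sal, centroids, gray
    return maps
```

Because regions, not pixels, carry the saliency scores, the temporal blend operates on a few hundred values per frame rather than the full image, which is consistent with the efficiency claim in the abstract.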

Original language: English
Article number: 6544573
Pages (from-to): 2067-2076
Number of pages: 10
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 23
Issue number: 12
State: Published - Dec 2013
Externally published: Yes

Keywords

  • Gabor filtering
  • Graphic processing unit (GPU)-based segmentation
  • Temporal coherence
  • Video saliency
