P2: a robust and rotationally invariant shape descriptor with applications to mesh saliency

  • Xianyong Liu
  • Lizhuang Ma*
  • Ligang Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

This work presents a robust and rotationally invariant shape descriptor, termed perception pronouncement (P2), to mathematically model eye fixations. P2 takes two criteria into account: the local consideration of surface curvature and the global consideration of view-independent visibility. Unlike existing works, which often compute the intrinsic surface property of visibility in image space, a novel approach is proposed to approximate this attribute in object space using the Gauss map and ray tracing. With the presented shape descriptor, mesh saliency detection, which refers to reasoning about which regions or points of a surface are important, is more sensible, especially for 3D models in two categories: (1) models that possess significant interior/exterior structures, and (2) models that contain regions of high contrast in visibility. For models outside these categories, the saliencies achieved by our approach are comparable to, or even better than, those of state-of-the-art methods.
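The object-space visibility idea mentioned in the abstract (sampling directions over the sphere, as in a Gauss map, and ray tracing against the mesh) can be illustrated with a minimal sketch. This is not the paper's implementation: the Fibonacci sampling, the surface offset, and the brute-force per-triangle intersection loop are all assumptions made for clarity.

```python
import numpy as np

def fibonacci_sphere(n):
    # Quasi-uniform direction samples on the unit sphere (Fibonacci lattice).
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def ray_hits_triangle(orig, d, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection test (forward hits only).
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False          # ray parallel to triangle plane
    inv = 1.0 / det
    t_vec = orig - v0
    u = np.dot(t_vec, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(t_vec, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps

def vertex_visibility(vertex, triangles, n_dirs=256, offset=1e-4):
    # Fraction of sampled directions along which no triangle occludes the vertex;
    # the origin is nudged off the surface to avoid spurious self-intersections.
    dirs = fibonacci_sphere(n_dirs)
    unoccluded = 0
    for d in dirs:
        orig = vertex + offset * d
        if not any(ray_hits_triangle(orig, d, *tri) for tri in triangles):
            unoccluded += 1
    return unoccluded / n_dirs
```

On a convex vertex (e.g. the apex of a tetrahedron), only the small cone of directions pointing into the solid is occluded, so the visibility score is close to 1; on a vertex deep inside a concavity most rays are blocked, which is exactly the high-contrast-in-visibility signal the descriptor exploits.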

Original language: English
Pages (from-to): 53-67
Number of pages: 15
Journal: Applied Mathematics
Volume: 31
Issue number: 1
DOIs
State: Published - 1 Mar 2016
Externally published: Yes

Keywords

  • Human visual system
  • bilateral filtering
  • mesh saliency
  • shape descriptor
  • visibility
