Abstract
This work presents a robust and rotationally invariant shape descriptor, termed perception pronouncement (P2), to mathematically model eye fixations. P2 takes two criteria into account: the local consideration of surface curvature and the global consideration of view-independent visibility. Unlike existing works, which often compute the intrinsic surface property of visibility in image space, a novel approach is proposed to approximate this attribute in object space using the Gauss map and ray tracing. With the presented shape descriptor, mesh saliency detection, which refers to reasoning about which regions or points of a surface are important, becomes more sensible, especially for 3D models in two categories: (1) models that possess significant interior/exterior structures; and (2) models that contain regions of high contrast in visibility. For models outside these categories, the saliencies achieved by our approach are comparable to, or even better than, those of state-of-the-art methods.
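The abstract does not detail the object-space visibility computation. As an illustration only, a crude version of the general idea, sampling view directions over the sphere and ray-casting each vertex against the mesh to estimate how exposed it is, might be sketched as below. All function names, the Fibonacci-spiral direction sampling, and the per-face Möller–Trumbore test are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def ray_triangle_hit(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; True for a hit strictly
    in front of the ray origin."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return False
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps   # hit distance must be positive

def sphere_directions(n):
    """Roughly uniform unit directions (Fibonacci spiral on the sphere)."""
    i = np.arange(n)
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    phi = i * np.pi * (3.0 - np.sqrt(5.0))
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def vertex_visibility(verts, faces, n_dirs=64, offset=1e-4):
    """Fraction of sampled view directions from which each vertex is
    unoccluded by the mesh: a simple object-space visibility estimate."""
    dirs = sphere_directions(n_dirs)
    vis = np.zeros(len(verts))
    for vi, v in enumerate(verts):
        free = 0
        for d in dirs:
            o = v + offset * d   # nudge the origin off the surface
            blocked = any(
                ray_triangle_hit(o, d, verts[f[0]], verts[f[1]], verts[f[2]])
                for f in faces)
            if not blocked:
                free += 1
        vis[vi] = free / n_dirs
    return vis

# Tiny example: one triangle, plus a fourth vertex sitting "behind" it,
# whose visibility should be reduced by the occluding triangle.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                  [0.2, 0.2, -0.5]])
faces = [(0, 1, 2)]
vis = vertex_visibility(verts, faces)
```

In the example, the partially occluded fourth vertex receives a visibility below 1, while the triangle's own corner vertices remain fully visible; regions of high contrast in this quantity are exactly the second model category highlighted in the abstract.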
| Original language | English |
|---|---|
| Pages (from-to) | 53-67 |
| Number of pages | 15 |
| Journal | Applied Mathematics |
| Volume | 31 |
| Issue number | 1 |
| DOIs | |
| State | Published - 1 Mar 2016 |
| Externally published | Yes |
Keywords
- human visual system
- bilateral filtering
- mesh saliency
- shape descriptor
- visibility