TY - JOUR
T1 - Elevating Mesh Saliency in VR
T2 - Introducing a Novel Prediction Network and Dataset
AU - Zhang, Kaiwei
AU - He, Mohan
AU - Zhu, Dandan
AU - Zhu, Kun
AU - Min, Xiongkuo
AU - Zhai, Guangtao
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2025/11/22
Y1 - 2025/11/22
N2 - In computer graphics, polygon meshes stand out as a popular representation providing effective delineation of delicate textures and complex geometries. When dealing with geometric processing tasks for critical regions of a mesh, it is necessary to consider human visual perception as it relates to saliency. Therefore, we establish a novel mesh saliency dataset, built with a comprehensive pipeline for gathering eye-tracking data from subjects observing mesh models at arbitrary viewpoints in a virtual reality space with six degrees of freedom. Additionally, we propose a mesh saliency prediction model that accurately infers visual attention density maps for complex and irregular mesh surfaces. This model integrates surface curvature and triangular face shape information from multi-scale neighboring ranges as local geometric features, while also leveraging surface spatial positioning as a global feature. Our work aims to preserve critical areas and minimize visual loss in saliency-driven tasks such as mesh simplification, rendering, and texturing. We believe that our research can offer valuable insights for human-centered mesh computation applications.
AB - In computer graphics, polygon meshes stand out as a popular representation providing effective delineation of delicate textures and complex geometries. When dealing with geometric processing tasks for critical regions of a mesh, it is necessary to consider human visual perception as it relates to saliency. Therefore, we establish a novel mesh saliency dataset, built with a comprehensive pipeline for gathering eye-tracking data from subjects observing mesh models at arbitrary viewpoints in a virtual reality space with six degrees of freedom. Additionally, we propose a mesh saliency prediction model that accurately infers visual attention density maps for complex and irregular mesh surfaces. This model integrates surface curvature and triangular face shape information from multi-scale neighboring ranges as local geometric features, while also leveraging surface spatial positioning as a global feature. Our work aims to preserve critical areas and minimize visual loss in saliency-driven tasks such as mesh simplification, rendering, and texturing. We believe that our research can offer valuable insights for human-centered mesh computation applications.
KW - Geometric deep learning
KW - Mesh saliency
KW - Visual attention
KW - Visual perception
UR - https://www.scopus.com/pages/publications/105025418429
U2 - 10.1145/3761816
DO - 10.1145/3761816
M3 - Article
AN - SCOPUS:105025418429
SN - 1551-6857
VL - 21
JO - ACM Transactions on Multimedia Computing, Communications and Applications
JF - ACM Transactions on Multimedia Computing, Communications and Applications
IS - 12
M1 - 363
ER -