TY - GEN
T1 - FluidGS
T2 - 33rd ACM International Conference on Multimedia, MM 2025
AU - Xie, Youchen
AU - Li, Chen
AU - Qiu, Sheng
AU - Wang, Zhi Jun
AU - Li, Chenhui
AU - Zhao, Yibo
AU - Gao, Zan
AU - Wang, Changbo
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/10/27
Y1 - 2025/10/27
N2 - Dynamic fluid scene reconstruction remains challenging in multimedia applications and digital content creation due to complex motions and changing topology. Neural Radiance Fields (NeRF) methods are computationally expensive, while 3D Gaussian Splatting (3DGS) approaches struggle with fluid phenomena. We therefore propose Fluid-GS, a flexible, efficient end-to-end framework for sparse-view fluid reconstruction that tightly couples density field modeling with velocity estimation via differentiable advection. Our key innovation is a hybrid Lagrangian-Eulerian Gaussian primitive representation that combines the rendering efficiency of 3DGS with physically accurate fluid motion tracking on an Eulerian grid, enabling us to formulate physics-informed constraints derived from the Navier-Stokes equations that enforce temporal coherence and fluid incompressibility. Moreover, to address the inherent challenges of sparse-view reconstruction, we introduce a fluid-specific Gaussian kernel constraint that preserves the spatial characteristics of fluid phenomena and dynamically adjusts the anisotropic kernels of Gaussian primitives based on local velocity fields, preventing non-physical artifacts. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art methods in both reconstruction quality and computational efficiency.
AB - Dynamic fluid scene reconstruction remains challenging in multimedia applications and digital content creation due to complex motions and changing topology. Neural Radiance Fields (NeRF) methods are computationally expensive, while 3D Gaussian Splatting (3DGS) approaches struggle with fluid phenomena. We therefore propose Fluid-GS, a flexible, efficient end-to-end framework for sparse-view fluid reconstruction that tightly couples density field modeling with velocity estimation via differentiable advection. Our key innovation is a hybrid Lagrangian-Eulerian Gaussian primitive representation that combines the rendering efficiency of 3DGS with physically accurate fluid motion tracking on an Eulerian grid, enabling us to formulate physics-informed constraints derived from the Navier-Stokes equations that enforce temporal coherence and fluid incompressibility. Moreover, to address the inherent challenges of sparse-view reconstruction, we introduce a fluid-specific Gaussian kernel constraint that preserves the spatial characteristics of fluid phenomena and dynamically adjusts the anisotropic kernels of Gaussian primitives based on local velocity fields, preventing non-physical artifacts. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art methods in both reconstruction quality and computational efficiency.
KW - fluid reconstruction
KW - gaussian splatting
KW - physics-informed deep learning
UR - https://www.scopus.com/pages/publications/105024064091
U2 - 10.1145/3746027.3755500
DO - 10.1145/3746027.3755500
M3 - Conference contribution
AN - SCOPUS:105024064091
T3 - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025
SP - 8438
EP - 8447
BT - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025
PB - Association for Computing Machinery, Inc
Y2 - 27 October 2025 through 31 October 2025
ER -