TY - GEN
T1 - Image Fusion Based on Feature Decoupling and Proportion Preserving
AU - Fang, Bin
AU - Yi, Ran
AU - Ma, Lizhuang
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - Image fusion is a widely used technique for generating a new image by combining information from multiple input images. However, existing image fusion algorithms are often domain-specific, which limits their generalization ability and processing capacity. In this paper, we propose a fast unified fusion network called FDF, based on feature decoupling and intensity and gradient feature proportion maintenance. FDF is an end-to-end network that can perform multiple image fusion tasks. We first decouple the features of the source images into intensity features and texture features and then fuse them using the intensity and gradient paths. To improve the generalization ability, we design a unified loss function that can adapt to different fusion tasks. We evaluate FDF on three image fusion tasks, namely visible and infrared image fusion, multi-exposure image fusion, and medical image fusion. Our experimental results show that FDF outperforms state-of-the-art methods in terms of visual effects and multiple quantitative metrics. The proposed method has the potential to be applied to other image fusion tasks and domains, making it a promising approach for future research. Overall, FDF provides a fast and unified solution for image fusion tasks, which can significantly improve the efficiency and effectiveness of image fusion applications.
KW - Feature decoupling
KW - Image fusion
KW - Multimodal fusion
UR - https://www.scopus.com/pages/publications/85185843337
U2 - 10.1007/978-981-99-9666-7_5
DO - 10.1007/978-981-99-9666-7_5
M3 - Conference contribution
AN - SCOPUS:85185843337
SN - 9789819996650
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 60
EP - 74
BT - Computer-Aided Design and Computer Graphics - 18th International Conference, CAD/Graphics 2023, Proceedings
A2 - Hu, Shi-Min
A2 - Cai, Yiyu
A2 - Rosin, Paul
PB - Springer Science and Business Media Deutschland GmbH
T2 - 18th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2023
Y2 - 19 August 2023 through 21 August 2023
ER -