TY - JOUR
T1 - High-Fidelity Image Reconstruction for Compressed Ultrafast Photography Using a Training-Free Self-Supervised Neural Network Algorithm
AU - Jin, Chengzhi
AU - Qi, Dalong
AU - He, Yu
AU - Yao, Jiali
AU - Guo, Zihan
AU - Xu, Ning
AU - Cheng, Long
AU - Mao, Jiayi
AU - Yao, Zhiming
AU - Song, Yan
AU - Yao, Yunhua
AU - Shen, Yuecheng
AU - Deng, Lianzhong
AU - Sheng, Liang
AU - Sun, Zhenrong
AU - Zhang, Shian
N1 - Publisher Copyright:
© 2024 Chinese Academy of Sciences. All rights reserved.
PY - 2024/7
Y1 - 2024/7
N2 - Compressed ultrafast photography (CUP) is currently the fastest passive single-shot ultrafast optical imaging technology, serving as a potent tool for recording irreversible or difficult-to-repeat ultrafast events, as well as enabling the detection of self-luminescent transient scenes, such as fluorescence dynamics. CUP records ultrafast events in two steps, data acquisition and image reconstruction, and has achieved an ultrahigh sequence depth of over 300 frames and an ultrafast imaging speed of 10 million frames per second, significantly surpassing traditional imaging techniques. However, CUP suffers from low spatial resolution due to its high data compression ratio and undersampling characteristics. Furthermore, the image reconstruction process based on compressive sensing theory is complex and demands extensive computing resources. These limitations curtail CUP's ability to observe ultrafast phenomena with high spatial resolution. Recent efforts to enhance CUP performance have concentrated on both hardware and algorithmic improvements. As the hardware structure of CUP is relatively fixed, the development of sophisticated algorithms is particularly crucial for improving the quality of reconstructed images. Existing algorithms can be categorized into traditional iterative algorithms and deep learning algorithms. Pure deep learning algorithms face challenges related to the availability of training samples and model generality, hindering rapid transfer to new scenes, whereas traditional iterative algorithms exhibit limited accuracy and large reconstruction errors. To address these challenges, we developed a new hybrid algorithm, named PnP-DIP, which combines the plug-and-play (PnP) framework and the deep image prior (DIP), drawing upon the advantages of untrained neural networks and traditional iterative algorithms.
The PnP-DIP algorithm is built on the alternating direction method of multipliers (ADMM), which provides global convergence and parallel processing capabilities, making it highly suitable for tackling large-scale optimization problems. PnP-DIP employs self-supervised learning from DIP to provide a robust solution for image inverse problems and integrates the PnP framework for image denoising as a regularizer, effectively preventing model overfitting. Notably, the proposed algorithm requires no pretraining and ensures both high fidelity and low complexity during reconstruction. To quantitatively evaluate its performance, the proposed PnP-DIP algorithm is tested and analyzed using numerical simulations, and its reconstruction performance is compared with that of several commonly used algorithms. The simulation results demonstrate that the proposed algorithm outperforms all competitors on all datasets, exhibiting exceptional robustness and scalability. Furthermore, the PnP-DIP algorithm was applied to reconstruct transient scenes recorded by a custom-built CUP system, enabling the measurement of the spatiotemporal evolution of a spatially modulated E-shaped picosecond laser pulse and the two-dimensional intensity evolution of an X-ray scintillator. The results revealed that the proposed method excelled in spatial resolution, continuity, and heterogeneity, accurately reflecting the inherent laws and features of the spatial data, thus paving the way for practical applications of CUP. The flexibility of DIP allows this algorithm to be extended to multidimensional imaging models such as hyperspectral CUP and spectral-volumetric CUP, enabling the recovery of higher-dimensional data and expanding the application of CUP-based technology in capturing complex ultrafast physical events.
This research is projected to promote the application of CUP in scenarios requiring high spatiotemporal resolution and make a significant contribution to the development of fundamental and applied sciences.
KW - alternating direction method of multipliers
KW - compressed ultrafast photography
KW - deep image prior
KW - image reconstruction
KW - plug-and-play framework
UR - https://www.scopus.com/pages/publications/85198120117
U2 - 10.1360/TB-2024-0038
DO - 10.1360/TB-2024-0038
M3 - Article
AN - SCOPUS:85198120117
SN - 0023-074X
VL - 69
SP - 2765
EP - 2776
JO - Chinese Science Bulletin
JF - Chinese Science Bulletin
IS - 19
ER -