Single-shot real-time compressed ultrahigh-speed imaging enabled by a snapshot-to-video autoencoder

  • Xianglei Liu
  • João Monteiro
  • Isabela Albuquerque
  • Yingming Lai
  • Cheng Jiang
  • Shian Zhang
  • Tiago H. Falk
  • Jinyang Liang

Research output: Contribution to journal › Article › peer-review

25 Scopus citations

Abstract

Single-shot 2D optical imaging of transient scenes is indispensable for numerous areas of study. Among existing techniques, compressed optical-streaking ultrahigh-speed photography (COSUP) uses a cost-efficient design to endow off-the-shelf CCD and CMOS cameras with ultrahigh frame rates. Thus far, COSUP’s application scope has been limited by the long processing time and unstable image quality of existing analytical-modeling-based video reconstruction. To overcome these problems, we have developed a snapshot-to-video autoencoder (S2V-AE)—a deep neural network that maps a compressively recorded 2D image to a movie. The S2V-AE preserves spatiotemporal coherence in reconstructed videos and presents a flexible structure that tolerates changes in input data. Implemented in compressed ultrahigh-speed imaging, the S2V-AE enables the development of single-shot machine-learning assisted real-time (SMART) COSUP, which features a reconstruction time of 60 ms and a large sequence depth of 100 frames. SMART-COSUP is applied to wide-field multiple-particle tracking at 20,000 frames per second. As a universal computational framework, the S2V-AE is readily adaptable to other modalities in high-dimensional compressed sensing. SMART-COSUP is also expected to find wide applications in applied and fundamental sciences.
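The compressive recording that the S2V-AE inverts can be illustrated with a minimal sketch of an optical-streaking forward model: each frame of the dynamic scene is spatially encoded by a mask, shifted ("streaked") along one axis in proportion to time, and all frames are integrated into a single 2D snapshot. The shear rate, mask statistics, and array sizes below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def cosup_snapshot(scene, mask, shear=1):
    """Toy COSUP-style forward model (illustrative, not the paper's exact optics).

    scene: (T, H, W) dynamic scene; mask: (H, W) encoding mask.
    Each frame is encoded by the mask, shifted by shear*t pixels along
    the streaking axis, and summed into one compressed 2D snapshot.
    """
    T, H, W = scene.shape
    snapshot = np.zeros((H, W + shear * (T - 1)))
    for t in range(T):
        snapshot[:, shear * t : shear * t + W] += scene[t] * mask
    return snapshot

# Toy example with a 100-frame scene, matching the sequence depth quoted above
rng = np.random.default_rng(0)
scene = rng.random((100, 32, 32))
mask = (rng.random((32, 32)) > 0.5).astype(float)
y = cosup_snapshot(scene, mask)
print(y.shape)  # (32, 131): 32 + 1 * (100 - 1) columns on the streaking axis
```

Reconstruction then amounts to inverting this many-to-one mapping; the S2V-AE replaces slow analytical inversion with a learned network that outputs all T frames from the single snapshot `y`.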

Original language: English
Pages (from-to): 2464-2474
Number of pages: 11
Journal: Photonics Research
Volume: 9
Issue number: 12
DOIs
State: Published - 1 Dec 2021

