What is it about?
We investigate deep learning for video compressive sensing within the scope of snapshot compressive imaging (SCI). In video SCI, multiple high-speed frames are modulated by different coding patterns and then a low-speed detector captures the integration of these modulated frames. We build a video SCI system using a digital micromirror device and develop both an end-to-end convolutional neural network (E2E-CNN) and a Plug-and-Play (PnP) framework with deep denoising priors to solve the inverse problem.
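The sensing process described above can be sketched in a few lines of NumPy. This is a minimal toy model, not the paper's actual system: the array sizes, the random binary masks standing in for the DMD coding patterns, and the noiseless summation are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sizes: B high-speed frames of H x W pixels,
# compressed into a single 2D snapshot.
H, W, B = 64, 64, 8
rng = np.random.default_rng(0)

x = rng.random((B, H, W))                             # high-speed video frames
masks = rng.integers(0, 2, (B, H, W)).astype(float)   # binary DMD coding patterns

# Forward model: each frame is modulated by its own coding pattern,
# then all modulated frames are integrated on the low-speed detector
# during one exposure, yielding a single snapshot measurement y.
y = np.sum(masks * x, axis=0)

print(y.shape)  # (64, 64): one 2D measurement encodes B frames
```

The inverse problem tackled by the E2E-CNN and the PnP framework is recovering the B frames `x` from the single snapshot `y` given the known masks.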
Why is it important?
We compare these methods with the iterative baseline GAP-TV and the state-of-the-art DeSCI algorithm on real data. For a fixed, known setup, a well-trained E2E-CNN provides high-quality reconstruction at video rate. The PnP deep denoising method generates decent results without task-specific pre-training and is faster than conventional iterative algorithms. Considering speed, accuracy, and flexibility, the PnP deep denoising method may serve as a baseline for video SCI reconstruction.
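A sketch of the Plug-and-Play idea may help: an alternating scheme projects the estimate onto the measurement constraint (a GAP-style step) and then applies an off-the-shelf denoiser as the prior. Everything below is a simplified illustration, not the paper's implementation; in particular, the 3x3 local-mean filter is a crude placeholder for the deep denoising prior, and the function names and iteration count are assumptions.

```python
import numpy as np

def local_mean_denoise(img):
    """3x3 local-mean filter: a crude stand-in for a learned deep denoiser."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

def pnp_gap(y, masks, n_iters=30, denoise=local_mean_denoise):
    """Plug-and-Play GAP sketch for video SCI, where y = sum_t masks[t] * x[t].

    `denoise` plays the role of the deep denoising prior; any denoiser
    can be plugged in without retraining the rest of the pipeline.
    """
    # Per-pixel normalization: sum of squared mask entries over frames.
    phi_sum = np.sum(masks ** 2, axis=0)
    phi_sum[phi_sum == 0] = 1.0
    x = masks * (y / phi_sum)  # simple mask-weighted initialization
    for _ in range(n_iters):
        # GAP step: Euclidean projection toward the measurement constraint.
        residual = y - np.sum(masks * x, axis=0)
        v = x + masks * (residual / phi_sum)
        # Prior step: plug in the denoiser, frame by frame.
        x = np.stack([denoise(frame) for frame in v])
    return x

# Toy usage: recover 4 frames from one snapshot.
rng = np.random.default_rng(1)
B, H, W = 4, 16, 16
x_true = rng.random((B, H, W))
masks = rng.integers(0, 2, (B, H, W)).astype(float)
y = np.sum(masks * x_true, axis=0)
x_hat = pnp_gap(y, masks, n_iters=10)
print(x_hat.shape)  # (4, 16, 16)
```

The key flexibility claim maps directly onto `denoise`: swapping in a stronger (e.g. deep) denoiser changes the prior without any task-specific retraining of the reconstruction loop.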
Read the Original
This page is a summary of: Deep learning for video compressive sensing, APL Photonics, March 2020, American Institute of Physics,
DOI: 10.1063/1.5140721.