基于时空生成对抗网络的视频修复

Translated title of the contribution: Temporal-Spatial Generative Adversarial Networks for Video Inpainting

Bing Yu, Youdong Ding, Zhifeng Xie, Dongjin Huang, Lizhuang Ma

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Existing video inpainting methods may fail to yield semantically continuous results. We propose a method based on temporal-spatial generative adversarial networks to solve this problem. The method comprises two network models: a single-frame inpainting model and a sequence inpainting model. The single-frame inpainting model, consisting of a single-frame stacked generator and a spatial discriminator, achieves high-quality completion of the starting frames with spatially missing regions. On this basis, the sequence inpainting model, consisting of a sequence stacked generator and a temporal-spatial discriminator, achieves temporally and spatially consistent video completion for the subsequent frames. Experimental results on the UCF-101 and FaceForensics datasets show that our method greatly improves the temporal and spatial coherence of video completion. Compared with the benchmark method, our method performs better in peak signal-to-noise ratio, structural similarity index, learned perceptual image patch similarity, and stability error.
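The abstract gives no implementation details, but the two-stage structure it describes can be sketched in PyTorch: a single-frame generator paired with a 2D spatial discriminator, then a sequence stage judged by a temporal-spatial discriminator. The sketch below is a minimal illustration under assumptions of our own; all module names, layer counts, and channel widths are hypothetical and not the paper's actual architecture.

```python
# Minimal sketch of the two-stage pipeline the abstract describes. All
# architecture choices (layer counts, channel widths, the 3D-conv
# temporal-spatial discriminator) are illustrative assumptions, not the
# paper's actual specification.
import torch
import torch.nn as nn


class FrameGenerator(nn.Module):
    """Single-frame stacked generator: fills spatial holes in one frame."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(),   # RGB + hole mask
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, frame, mask):
        # Zero out the missing region and pass the mask as a fourth channel.
        x = torch.cat([frame * (1 - mask), mask], dim=1)
        return self.net(x)


class SpatialDiscriminator(nn.Module):
    """2D discriminator: judges the spatial realism of one completed frame."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, frame):
        return self.net(frame)


class TemporalSpatialDiscriminator(nn.Module):
    """3D-conv discriminator: judges spatio-temporal coherence of a clip."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, ch, (3, 4, 4), stride=(1, 2, 2), padding=(1, 1, 1)),
            nn.LeakyReLU(0.2),
            nn.Conv3d(ch, 1, (3, 4, 4), stride=(1, 2, 2), padding=(1, 1, 1)),
        )

    def forward(self, clip):  # clip shape: (batch, 3, time, height, width)
        return self.net(clip)
```

The 3D convolutions in the second discriminator let it penalize frame-to-frame flicker that a purely spatial discriminator cannot see, which is consistent with the abstract's goal of temporal-spatial consistency.

Of the four metrics named in the abstract, PSNR has a standard definition, and SSIM and LPIPS are available in scikit-image and the lpips package. The paper's exact stability-error formula is not given here; the version below (mean discrepancy between the temporal differences of the completed and ground-truth clips) is an assumption for illustration only.

```python
import torch


def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)


def stability_error(pred_clip, target_clip):
    """Assumed stability error: compare frame-to-frame changes of the
    completed clip against the ground truth. Lower means more temporally
    stable. Clips have shape (time, 3, height, width)."""
    pred_diff = pred_clip[1:] - pred_clip[:-1]
    target_diff = target_clip[1:] - target_clip[:-1]
    return torch.mean(torch.abs(pred_diff - target_diff))
```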

Translated title of the contribution: Temporal-Spatial Generative Adversarial Networks for Video Inpainting
Original language: Chinese (Simplified)
Pages (from-to): 769-779
Number of pages: 11
Journal: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics
Volume: 32
Issue number: 5
DOIs
State: Published - 1 May 2020
Externally published: Yes
