Semi-supervised video inpainting with cycle consistency constraints

Zhiliang Wu, Hanyu Xuan, Changchang Sun, Weili Guan, Kang Zhang, Yan Yan

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

5 Citations (Scopus)

Abstract

Deep learning-based video inpainting has yielded promising results and gained increasing attention from researchers. Generally, these methods assume that the corrupted region masks of each frame are known and easily obtained. However, annotating these masks is labor-intensive and expensive, which limits the practical application of current methods. We therefore relax this assumption by defining a new semi-supervised inpainting setting, in which the network completes the corrupted regions of the whole video using the annotated mask of only one frame. Specifically, in this work, we propose an end-to-end trainable framework consisting of a completion network and a mask prediction network, which are designed to generate the corrupted contents of the current frame using the known mask and to decide the regions to be filled in the next frame, respectively. Besides, we introduce a cycle consistency loss to regularize the training of these two networks. In this way, the completion network and the mask prediction network constrain each other, and hence the overall performance of the trained model can be maximized. Furthermore, due to the prior knowledge they naturally contain (e.g., corrupted contents and clear borders), current video inpainting datasets are not suitable for semi-supervised video inpainting. Thus, we create a new dataset by simulating the corrupted videos of real-world scenarios. Extensive experimental results demonstrate the superiority of our model on the video inpainting task. Remarkably, although our model is trained in a semi-supervised manner, it achieves performance comparable to fully supervised methods.
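The abstract describes a two-network cycle: a completion network fills the annotated frame, a mask prediction network propagates the mask to the next frame, and a cycle consistency loss ties the two together. The snippet below is a minimal, hypothetical PyTorch sketch of one such training step, not the authors' implementation: the network bodies, the soft-mask propagation, the loss terms, and the 0.1 weighting are placeholder assumptions used only to illustrate how the two networks can constrain each other.

```python
# Illustrative sketch only -- not the paper's code. Network bodies, the
# mask-propagation scheme, and the loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompletionNet(nn.Module):
    """Stand-in completion network: fills the masked region of a frame."""

    def __init__(self):
        super().__init__()
        # input: corrupted frame (3 ch) + mask (1 ch)
        self.body = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, frame, mask):
        x = torch.cat([frame * (1 - mask), mask], dim=1)
        out = self.body(x)
        # keep known pixels, replace only the corrupted region
        return frame * (1 - mask) + out * mask


class MaskPredNet(nn.Module):
    """Stand-in mask prediction network: locates the corrupted region of the
    next frame, given the completed current frame and the raw next frame."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, completed_prev, next_frame):
        x = torch.cat([completed_prev, next_frame], dim=1)
        return torch.sigmoid(self.body(x))


def training_step(comp_net, mask_net, frame_t, frame_t1, mask_t, gt_t):
    """One step: only frame_t carries an annotated mask (mask_t); the mask of
    frame_t1 is predicted, and a cycle back to frame_t regularizes both nets."""
    # 1) complete the annotated frame
    comp_t = comp_net(frame_t, mask_t)
    # 2) predict the corrupted region of the next frame, then complete it
    #    (a soft mask is used here for simplicity, so gradients flow through)
    mask_t1 = mask_net(comp_t, frame_t1)
    comp_t1 = comp_net(frame_t1, mask_t1)
    # 3) cycle: predict frame_t's mask back from the completed next frame
    mask_t_cycle = mask_net(comp_t1, frame_t)

    recon_loss = F.l1_loss(comp_t, gt_t)                       # supervised on the annotated frame
    cycle_loss = F.binary_cross_entropy(mask_t_cycle, mask_t)  # cycle consistency on the mask
    return recon_loss + 0.1 * cycle_loss                       # 0.1 is an arbitrary placeholder weight
```

In this sketch the cycle closes the mask prediction back onto the one annotated frame, so errors in either network show up in a loss both can be trained against, which mirrors the mutual constraint the abstract describes.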

Original language: English
Title of host publication: Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Editors: Eric Mortensen
Place of Publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 22586-22595
Number of pages: 10
ISBN (Electronic): 9798350301298
ISBN (Print): 9798350301304
DOIs
Publication status: Published - 2023
Event: IEEE Conference on Computer Vision and Pattern Recognition 2023 - Vancouver, Canada
Duration: 18 Jun 2023 - 22 Jun 2023
https://cvpr2023.thecvf.com/ (Website)
https://openaccess.thecvf.com/CVPR2023?day=all (Proceedings)
https://ieeexplore.ieee.org/xpl/conhome/10203037/proceeding (Proceedings)
https://cvpr2023.thecvf.com/Conferences/2023 (Website)

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Volume: 2023-June
ISSN (Print): 1063-6919

Conference

Conference: IEEE Conference on Computer Vision and Pattern Recognition 2023
Abbreviated title: CVPR 2023
Country/Territory: Canada
City: Vancouver
Period: 18/06/23 - 22/06/23

Keywords

  • motion
  • tracking
  • Video: Low-level analysis
