Colorblind-shareable videos by synthesizing temporal-coherent polynomial coefficients

Xinghong Hu, Xueting Liu, Zhuming Zhang, Menghan Xia, Chengze Li, Tien-Tsin Wong

Research output: Contribution to journal › Article › Research › peer-review


Abstract

To share the same visual content between color vision deficiency (CVD) and normal-vision audiences, attempts have been made to allocate the two visual experiences of a binocular display (wearing and not wearing glasses) to CVD and normal-vision viewers. However, existing approaches only work for still images. Although state-of-the-art temporal filtering techniques can be applied to smooth the per-frame generated content, they may fail to maintain the multiple binocular constraints needed in our application, and, even worse, sometimes introduce color inconsistency (the same color regions map to different colors). In this paper, we propose to train a neural network to predict temporally coherent polynomial coefficients in the domain of global color decomposition. This indirect formulation solves the color inconsistency problem. Our key challenge is to design a neural network that predicts temporally coherent coefficients while maintaining all required binocular constraints. Our method is evaluated on various videos, and all metrics confirm that it outperforms existing solutions.

Original language: English
Article number: 174
Number of pages: 12
Journal: ACM Transactions on Graphics
Volume: 38
Issue number: 6
DOIs:
Publication status: Published - 8 Nov 2019
Externally published: Yes

Keywords

  • Color vision deficiency
  • Machine learning
  • Temporal coherence
