LF2MV: Learning an Editable Meta-View Towards Light Field Representation

Menghan Xia, Jose Echevarria, Minshan Xie, Tien-Tsin Wong

Research output: Contribution to journal › Article › peer-review

Abstract

Light fields are 4D scene representations that are typically structured as arrays of views, or as several directional samples per pixel in a single view. However, this highly correlated structure is inefficient to transmit and manipulate, especially for editing. To tackle this issue, we propose a novel representation learning framework that encodes the light field into a single meta-view that is both compact and editable. Specifically, the meta-view is composed of three visual channels and a complementary meta channel that embeds geometric and residual appearance information. The visual channels can be edited using existing 2D image editing tools before reconstructing the whole edited light field. To facilitate edit propagation against occlusion, we design a special editing-aware decoding network that consistently propagates the visual edits to the whole light field upon reconstruction. Extensive experiments show that our proposed method achieves competitive representation accuracy while enabling consistent edit propagation.
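To make the pipeline in the abstract concrete, below is a minimal sketch (not the authors' code) of the encode-edit-decode flow it describes. All module names, the single-convolution stand-ins for the real networks, and the 8x8-view light field size are hypothetical assumptions for illustration only.

```python
import torch
import torch.nn as nn


class MetaViewEncoder(nn.Module):
    """Hypothetical encoder: folds an N-view light field into a 4-channel
    meta-view (3 visual channels + 1 meta channel intended to carry
    geometric and residual appearance information)."""

    def __init__(self, num_views: int):
        super().__init__()
        # Stand-in for the paper's encoding network.
        self.net = nn.Conv2d(num_views * 3, 4, kernel_size=3, padding=1)

    def forward(self, light_field: torch.Tensor) -> torch.Tensor:
        # light_field: (B, N, 3, H, W) -> meta-view: (B, 4, H, W)
        b, n, c, h, w = light_field.shape
        return self.net(light_field.reshape(b, n * c, h, w))


class EditAwareDecoder(nn.Module):
    """Hypothetical decoder: reconstructs all views from the (possibly
    edited) meta-view, propagating visual edits across the light field."""

    def __init__(self, num_views: int):
        super().__init__()
        self.num_views = num_views
        # Stand-in for the paper's editing-aware decoding network.
        self.net = nn.Conv2d(4, num_views * 3, kernel_size=3, padding=1)

    def forward(self, meta_view: torch.Tensor) -> torch.Tensor:
        b, _, h, w = meta_view.shape
        return self.net(meta_view).reshape(b, self.num_views, 3, h, w)


# Usage: encode, edit only the 3 visual channels (as any 2D tool would),
# then decode the whole edited light field.
with torch.no_grad():
    lf = torch.rand(1, 64, 3, 128, 128)        # assumed 8x8 views, 128x128 each
    enc, dec = MetaViewEncoder(64), EditAwareDecoder(64)
    meta = enc(lf)                              # (1, 4, 128, 128)
    meta[:, :3] = meta[:, :3].clamp(0, 1) * 0.9  # stand-in for a 2D image edit
    lf_edited = dec(meta)                       # (1, 64, 3, 128, 128)
```

The design point this illustrates is that only the three visual channels are exposed to editing, while the meta channel travels alongside them so the decoder can reproject edits consistently to occluded views.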

Original language: English
Pages (from-to): 1672-1684
Number of pages: 13
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 30
Issue number: 3
DOIs
Publication status: Published - Mar 2024
Externally published: Yes

Keywords

  • Compact representation
  • editing propagation
  • light field
  • representation learning
