Fast depth video compression for mobile RGB-D sensors

Xiaoqin Wang, Yasar Ahmet Sekercioglu, Tom Drummond, Enrico Natalizio, Isabelle Fantoni, Vincent Fremont

Research output: Contribution to journal › Article › Research › peer-review

4 Citations (Scopus)


We propose a new method, called 3-D image warping-based depth video compression (IW-DVC), for fast and efficient compression of depth images captured by mobile RGB-D sensors. The emergence of low-cost RGB-D sensors has created opportunities for new solutions to a number of computer vision and networked robotics problems, such as 3-D map building, immersive telepresence, and remote sensing. However, the efficient transmission and storage of depth data remains a challenge in these applications. Image/video compression has been studied comprehensively and several methods have already been developed, but these methods yield poor results when applied directly to depth images. We have designed IW-DVC to exploit the special properties of depth data, achieving a high compression ratio while preserving the quality of the captured depth images. Our solution combines egomotion estimation with 3-D image warping techniques and includes a lossless coding scheme capable of adapting to depth data with a high dynamic range. IW-DVC operates at high speed, suitable for real-time applications, and attains better motion-compensation accuracy than conventional approaches. In addition, it removes redundant information between depth frames to further increase compression efficiency. Our experiments show that IW-DVC achieves significant compression ratios without sacrificing image quality.
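The core idea described above, predicting the next depth frame by warping the previous one through the estimated camera egomotion, and then coding only the residual losslessly, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the numpy-based pinhole projection, and the nearest-surface z-buffer rule are all assumptions for illustration.

```python
import numpy as np

def warp_depth(depth, K, T):
    """Predict the next depth frame via 3-D image warping (illustrative sketch).

    depth : (H, W) depth map in metres (0 = invalid)
    K     : (3, 3) pinhole camera intrinsics
    T     : (4, 4) rigid egomotion, previous -> current camera frame
    Returns the warped (predicted) depth map; unfilled pixels stay 0.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0
    # Back-project valid pixels to 3-D points in the previous camera frame.
    rays = np.linalg.inv(K) @ np.vstack([u.ravel(), v.ravel(), np.ones(H * W)])
    pts = rays[:, valid] * z[valid]
    # Apply the estimated egomotion (rotation + translation).
    pts = T[:3, :3] @ pts + T[:3, 3:4]
    # Re-project into the current image plane.
    proj = K @ pts
    zs = proj[2]
    us = np.round(proj[0] / zs).astype(int)
    vs = np.round(proj[1] / zs).astype(int)
    pred = np.zeros_like(depth)
    inb = (us >= 0) & (us < W) & (vs >= 0) & (vs < H) & (zs > 0)
    # Z-buffer: where warped points collide, keep the nearest surface
    # (write farthest first so the smallest depth wins).
    order = np.argsort(-zs[inb])
    pred[vs[inb][order], us[inb][order]] = zs[inb][order]
    return pred
```

In a codec along these lines, the residual `depth_next - warp_depth(depth_prev, K, T)` would be small and sparse for accurate egomotion estimates, making it well suited to lossless entropy coding; the holes left by occlusions and out-of-view regions would need separate handling.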
Original language: English
Pages (from-to): 673-686
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Issue number: 4
Publication status: Published - 2016


  • Depth map
  • Depth video coding
  • Inter-frame correlation
  • Motion compensation