Just-in-time reconstruction: inpainting sparse maps using single view depth predictors as priors

Chamara Saroj Weerasekera, Thanuja Dharmasiri, Ravi Garg, Tom Drummond, Ian Reid

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

5 Citations (Scopus)

Abstract

We present 'just-in-time reconstruction' as real-time image-guided inpainting of a map with arbitrary scale and sparsity to generate a fully dense depth map for the image. In particular, our goal is to inpaint a sparse map - obtained from either a monocular visual SLAM system or a sparse sensor - using a single-view depth prediction network as a virtual depth sensor. We adopt a fairly standard approach to data fusion, producing a fused depth map by performing inference over a novel fully-connected Conditional Random Field (CRF) which is parameterized by the input depth maps and their pixel-wise confidence weights. Crucially, we obtain the confidence weights that parameterize the CRF model in a data-dependent manner via Convolutional Neural Networks (CNNs) which are trained to model the conditional depth error distributions given each source of input depth map and the associated RGB image. Our CRF model penalises absolute depth error in its nodes and pairwise scale-invariant depth error in its edges, and the confidence-based fusion minimizes the impact of outlier input depth values on the fused result. We demonstrate the flexibility of our method by real-time inpainting of ORB-SLAM, Kinect, and LIDAR depth maps acquired both indoors and outdoors at arbitrary scale and varying amounts of irregular sparsity.
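The confidence-weighted fusion described in the abstract can be illustrated with a minimal sketch: unary terms penalise deviation from each input depth, and a pairwise term keeps the prior's local structure. This is an illustrative 1-D toy, not the paper's method - the function name is hypothetical, the neighbourhood is local rather than fully connected, and the pairwise term matches the prior's depth gradients instead of the paper's scale-invariant error - but it shows how a quadratic CRF-style energy reduces to one linear solve.

```python
import numpy as np

def fuse_depths(sparse, sparse_conf, prior, prior_conf, lam=1.0):
    """Minimal 1-D sketch of confidence-weighted depth fusion.

    Minimizes the quadratic energy
      E(x) = sum_i sparse_conf[i] * (x_i - sparse_i)^2
           + sum_i prior_conf[i]  * (x_i - prior_i)^2
           + lam * sum_i ((x_{i+1} - x_i) - (prior_{i+1} - prior_i))^2
    The unary terms penalise absolute depth error against each input;
    the pairwise term preserves the prior's local depth gradients (a
    simple stand-in for scale-invariant, fully-connected CRF edges).
    """
    n = len(prior)
    # Normal equations A x = b of the quadratic energy.
    A = np.diag(sparse_conf + prior_conf).astype(float)
    b = (sparse_conf * sparse + prior_conf * prior).astype(float)
    for i in range(n - 1):
        g = prior[i + 1] - prior[i]  # gradient predicted by the prior
        A[i, i] += lam
        A[i + 1, i + 1] += lam
        A[i, i + 1] -= lam
        A[i + 1, i] -= lam
        b[i] -= lam * g
        b[i + 1] += lam * g
    return np.linalg.solve(A, b)

# Prior has the right shape but no absolute scale; two confident sparse
# depths anchor the scale, and the unobserved entries are inpainted.
prior = np.array([0.0, 1.0, 2.0, 3.0])
sparse = np.array([10.0, 0.0, 0.0, 13.0])
fused = fuse_depths(sparse, np.array([1.0, 0.0, 0.0, 1.0]),
                    prior, np.zeros(4))
print(fused)  # ≈ [10, 11, 12, 13]
```

Per-pixel confidences play the role of the CNN-predicted weights in the paper: an outlier input depth with low confidence contributes little to its unary term, so it barely influences the fused result.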

Original language: English
Title of host publication: 2018 IEEE International Conference on Robotics and Automation (ICRA 2018)
Editors: Peter Corke, Nancy M. Amato, Megan Emmons, Yoshihiko Nakamura, Markus Vincze
Place of publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 4977-4984
Number of pages: 8
ISBN (Electronic): 9781538630815, 9781538630808
ISBN (Print): 9781538630822
DOIs: https://doi.org/10.1109/ICRA.2018.8460549
Publication status: Published - 10 Sep 2018
Event: IEEE International Conference on Robotics and Automation 2018 - Brisbane Convention & Exhibition Centre, Brisbane, Australia
Duration: 21 May 2018 - 25 May 2018
https://icra2018.org/

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
Publisher: IEEE, Institute of Electrical and Electronics Engineers
ISSN (Print): 1050-4729

Conference

Conference: IEEE International Conference on Robotics and Automation 2018
Abbreviated title: ICRA 2018
Country: Australia
City: Brisbane
Period: 21/05/18 - 25/05/18
Internet address: https://icra2018.org/

Cite this

Weerasekera, C. S., Dharmasiri, T., Garg, R., Drummond, T., & Reid, I. (2018). Just-in-time reconstruction: inpainting sparse maps using single view depth predictors as priors. In P. Corke, N. M. Amato, M. Emmons, Y. Nakamura, & M. Vincze (Eds.), 2018 IEEE International Conference on Robotics and Automation (ICRA 2018) (pp. 4977-4984). (Proceedings - IEEE International Conference on Robotics and Automation). Piscataway NJ USA: IEEE, Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/ICRA.2018.8460549
Weerasekera, Chamara Saroj ; Dharmasiri, Thanuja ; Garg, Ravi ; Drummond, Tom ; Reid, Ian. / Just-in-time reconstruction : inpainting sparse maps using single view depth predictors as priors. 2018 IEEE International Conference on Robotics and Automation (ICRA 2018). editor / Peter Corke ; Nancy M. Amato ; Megan Emmons ; Yoshihiko Nakamura ; Markus Vincze. Piscataway NJ USA : IEEE, Institute of Electrical and Electronics Engineers, 2018. pp. 4977-4984 (Proceedings - IEEE International Conference on Robotics and Automation).
@inproceedings{d207ad4189904c4fb3f9dfbc538c33ba,
title = "Just-in-time reconstruction: inpainting sparse maps using single view depth predictors as priors",
abstract = "We present 'just-in-time reconstruction' as real-time image-guided inpainting of a map with arbitrary scale and sparsity to generate a fully dense depth map for the image. In particular, our goal is to inpaint a sparse map - obtained from either a monocular visual SLAM system or a sparse sensor - using a single-view depth prediction network as a virtual depth sensor. We adopt a fairly standard approach to data fusion, producing a fused depth map by performing inference over a novel fully-connected Conditional Random Field (CRF) which is parameterized by the input depth maps and their pixel-wise confidence weights. Crucially, we obtain the confidence weights that parameterize the CRF model in a data-dependent manner via Convolutional Neural Networks (CNNs) which are trained to model the conditional depth error distributions given each source of input depth map and the associated RGB image. Our CRF model penalises absolute depth error in its nodes and pairwise scale-invariant depth error in its edges, and the confidence-based fusion minimizes the impact of outlier input depth values on the fused result. We demonstrate the flexibility of our method by real-time inpainting of ORB-SLAM, Kinect, and LIDAR depth maps acquired both indoors and outdoors at arbitrary scale and varying amounts of irregular sparsity.",
author = "Weerasekera, {Chamara Saroj} and Thanuja Dharmasiri and Ravi Garg and Tom Drummond and Ian Reid",
year = "2018",
month = "9",
day = "10",
doi = "10.1109/ICRA.2018.8460549",
language = "English",
isbn = "9781538630822",
series = "Proceedings - IEEE International Conference on Robotics and Automation",
publisher = "IEEE, Institute of Electrical and Electronics Engineers",
pages = "4977--4984",
editor = "Peter Corke and Amato, {Nancy M.} and Megan Emmons and Yoshihiko Nakamura and Markus Vincze",
booktitle = "2018 IEEE International Conference on Robotics and Automation (ICRA 2018)",
address = "United States of America",
}

Weerasekera, CS, Dharmasiri, T, Garg, R, Drummond, T & Reid, I 2018, Just-in-time reconstruction: inpainting sparse maps using single view depth predictors as priors. in P Corke, NM Amato, M Emmons, Y Nakamura & M Vincze (eds), 2018 IEEE International Conference on Robotics and Automation (ICRA 2018). Proceedings - IEEE International Conference on Robotics and Automation, IEEE, Institute of Electrical and Electronics Engineers, Piscataway NJ USA, pp. 4977-4984, IEEE International Conference on Robotics and Automation 2018, Brisbane, Australia, 21/05/18. https://doi.org/10.1109/ICRA.2018.8460549

Just-in-time reconstruction : inpainting sparse maps using single view depth predictors as priors. / Weerasekera, Chamara Saroj; Dharmasiri, Thanuja; Garg, Ravi; Drummond, Tom; Reid, Ian.

2018 IEEE International Conference on Robotics and Automation (ICRA 2018). ed. / Peter Corke; Nancy M. Amato; Megan Emmons; Yoshihiko Nakamura; Markus Vincze. Piscataway NJ USA : IEEE, Institute of Electrical and Electronics Engineers, 2018. p. 4977-4984 (Proceedings - IEEE International Conference on Robotics and Automation).


TY - GEN

T1 - Just-in-time reconstruction

T2 - inpainting sparse maps using single view depth predictors as priors

AU - Weerasekera, Chamara Saroj

AU - Dharmasiri, Thanuja

AU - Garg, Ravi

AU - Drummond, Tom

AU - Reid, Ian

PY - 2018/9/10

Y1 - 2018/9/10

N2 - We present 'just-in-time reconstruction' as real-time image-guided inpainting of a map with arbitrary scale and sparsity to generate a fully dense depth map for the image. In particular, our goal is to inpaint a sparse map - obtained from either a monocular visual SLAM system or a sparse sensor - using a single-view depth prediction network as a virtual depth sensor. We adopt a fairly standard approach to data fusion, producing a fused depth map by performing inference over a novel fully-connected Conditional Random Field (CRF) which is parameterized by the input depth maps and their pixel-wise confidence weights. Crucially, we obtain the confidence weights that parameterize the CRF model in a data-dependent manner via Convolutional Neural Networks (CNNs) which are trained to model the conditional depth error distributions given each source of input depth map and the associated RGB image. Our CRF model penalises absolute depth error in its nodes and pairwise scale-invariant depth error in its edges, and the confidence-based fusion minimizes the impact of outlier input depth values on the fused result. We demonstrate the flexibility of our method by real-time inpainting of ORB-SLAM, Kinect, and LIDAR depth maps acquired both indoors and outdoors at arbitrary scale and varying amounts of irregular sparsity.

AB - We present 'just-in-time reconstruction' as real-time image-guided inpainting of a map with arbitrary scale and sparsity to generate a fully dense depth map for the image. In particular, our goal is to inpaint a sparse map - obtained from either a monocular visual SLAM system or a sparse sensor - using a single-view depth prediction network as a virtual depth sensor. We adopt a fairly standard approach to data fusion, producing a fused depth map by performing inference over a novel fully-connected Conditional Random Field (CRF) which is parameterized by the input depth maps and their pixel-wise confidence weights. Crucially, we obtain the confidence weights that parameterize the CRF model in a data-dependent manner via Convolutional Neural Networks (CNNs) which are trained to model the conditional depth error distributions given each source of input depth map and the associated RGB image. Our CRF model penalises absolute depth error in its nodes and pairwise scale-invariant depth error in its edges, and the confidence-based fusion minimizes the impact of outlier input depth values on the fused result. We demonstrate the flexibility of our method by real-time inpainting of ORB-SLAM, Kinect, and LIDAR depth maps acquired both indoors and outdoors at arbitrary scale and varying amounts of irregular sparsity.

UR - http://www.scopus.com/inward/record.url?scp=85063152449&partnerID=8YFLogxK

U2 - 10.1109/ICRA.2018.8460549

DO - 10.1109/ICRA.2018.8460549

M3 - Conference Paper

AN - SCOPUS:85063152449

SN - 9781538630822

T3 - Proceedings - IEEE International Conference on Robotics and Automation

SP - 4977

EP - 4984

BT - 2018 IEEE International Conference on Robotics and Automation (ICRA 2018)

A2 - Corke, Peter

A2 - Amato, Nancy M.

A2 - Emmons, Megan

A2 - Nakamura, Yoshihiko

A2 - Vincze, Markus

PB - IEEE, Institute of Electrical and Electronics Engineers

CY - Piscataway NJ USA

ER -

Weerasekera CS, Dharmasiri T, Garg R, Drummond T, Reid I. Just-in-time reconstruction: inpainting sparse maps using single view depth predictors as priors. In Corke P, Amato NM, Emmons M, Nakamura Y, Vincze M, editors, 2018 IEEE International Conference on Robotics and Automation (ICRA 2018). Piscataway NJ USA: IEEE, Institute of Electrical and Electronics Engineers. 2018. p. 4977-4984. (Proceedings - IEEE International Conference on Robotics and Automation). https://doi.org/10.1109/ICRA.2018.8460549