Abstract
Learning-based techniques have recently been shown to be effective for denoising Monte Carlo renderings. However, a quality gap remains between them and state-of-the-art handcrafted denoisers. In this paper, we propose a method based on deep residual learning that outperforms both state-of-the-art handcrafted denoisers and learning-based denoisers. Unlike existing learning-based methods, which operate indirectly (e.g., by estimating the parameters and kernel weights of an explicit feature-based filter), we directly map the noisy input pixels to the smoothed output. Using this direct mapping formulation, we demonstrate that even a simple, standard ResNet and three common auxiliary features (depth, normal, and albedo) are sufficient to achieve high-quality denoising. This minimal requirement on auxiliary data simplifies both training and integration of our method into most production rendering pipelines. We have evaluated our method on unseen images created by a different renderer; it consistently produces superior denoising quality in all cases.
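To illustrate the direct-mapping idea described above, the sketch below shows a minimal residual denoiser in PyTorch that concatenates the noisy RGB image with depth, normal, and albedo buffers and maps them straight to a denoised image. The network width, block count, and channel layout are illustrative assumptions for this sketch, not the architecture reported in the paper.

```python
# Minimal sketch (not the authors' exact network) of a direct-mapping
# residual denoiser: noisy radiance plus depth, normal, and albedo are
# concatenated channel-wise and mapped directly to a clean RGB image.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection (standard ResNet block)."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class DirectDenoiser(nn.Module):
    """Maps noisy pixels + auxiliary features directly to denoised RGB.

    Input channels: 3 (noisy RGB) + 1 (depth) + 3 (normal) + 3 (albedo) = 10.
    Width and block count are assumed values for illustration only.
    """

    def __init__(self, in_channels: int = 10, width: int = 64, num_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(in_channels, width, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(width, 3, kernel_size=3, padding=1)

    def forward(self, noisy_rgb, depth, normal, albedo):
        x = torch.cat([noisy_rgb, depth, normal, albedo], dim=1)
        return self.tail(self.blocks(torch.relu(self.head(x))))


if __name__ == "__main__":
    net = DirectDenoiser()
    rgb = torch.rand(1, 3, 128, 128)     # noisy Monte Carlo render
    depth = torch.rand(1, 1, 128, 128)   # auxiliary feature buffers
    normal = torch.rand(1, 3, 128, 128)
    albedo = torch.rand(1, 3, 128, 128)
    out = net(rgb, depth, normal, albedo)
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```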
Original language | English |
---|---|
Pages (from-to) | 239-255 |
Number of pages | 17 |
Journal | Computational Visual Media |
Volume | 5 |
Issue number | 3 |
DOIs | |
Publication status | Published - Sept 2019 |
Externally published | Yes |
Keywords
- deep learning
- deep residual learning
- denoising
- filter-free denoising
- Monte Carlo rendering