BAYESIAN INVERSE PROBLEMS WITH l1 PRIORS

A RANDOMIZE-THEN-OPTIMIZE APPROACH

Zheng Wang, John Bardsley, Antti Solonen, Tiangang Cui, Youssef Marzouk

Research output: Contribution to journal › Article › peer-review

Abstract

Prior distributions for Bayesian inference that rely on the l1-norm of the parameters are of considerable interest, in part because they promote parameter fields with less regularity than Gaussian priors (e.g., discontinuities and blockiness). These l1-type priors include the total variation (TV) prior and the Besov space B^s_{1,1} prior, and in general yield non-Gaussian posterior distributions. Sampling from these posteriors is challenging, particularly in the inverse problem setting where the parameter space is high-dimensional and the forward problem may be nonlinear. This paper extends the randomize-then-optimize (RTO) method, an optimization-based sampling algorithm developed for Bayesian inverse problems with Gaussian priors, to inverse problems with l1-type priors. We use a variable transformation to convert an l1-type prior to a standard Gaussian prior, such that the posterior distribution of the transformed parameters is amenable to Metropolized sampling via RTO. We demonstrate this approach on several deconvolution problems and an elliptic PDE inverse problem, using TV or Besov space B^s_{1,1} priors. Our results show that the transformed RTO algorithm characterizes the correct posterior distribution and can be more efficient than other sampling algorithms. The variable transformation can also be extended to other non-Gaussian priors.
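The variable transformation mentioned in the abstract can be illustrated in one dimension: composing the standard normal CDF with the inverse CDF of a Laplace distribution (the scalar l1-type prior) maps a standard Gaussian variable to a Laplace-distributed one. The sketch below is a minimal stdlib-only illustration of this inverse-CDF idea, assuming a scalar zero-mean Laplace prior; the function names are illustrative, and the paper's actual transformation may differ in detail.

```python
import math
import random

def gaussian_cdf(v):
    """Standard normal CDF Phi(v), via the error function."""
    return 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))

def laplace_inv_cdf(p, scale=1.0):
    """Inverse CDF of a zero-mean Laplace distribution with scale b."""
    q = p - 0.5
    return -scale * math.copysign(1.0, q) * math.log(1.0 - 2.0 * abs(q))

def to_laplace(v, scale=1.0):
    """Map a standard normal draw v to a Laplace(0, scale) draw."""
    return laplace_inv_cdf(gaussian_cdf(v), scale)

# Sanity check: pushing standard normal samples through the map should
# reproduce the Laplace variance 2*b^2 (here 2.0 for b = 1).
random.seed(0)
samples = [to_laplace(random.gauss(0.0, 1.0)) for _ in range(200_000)]
var = sum(s * s for s in samples) / len(samples)
```

Because the map is monotone and smooth away from the origin, the transformed posterior keeps a standard Gaussian prior factor, which is the form RTO was designed for.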
Original language: English
Pages (from-to): S140-S166
Number of pages: 28
Journal: SIAM Journal on Scientific Computing
Volume: 39
Issue number: 5
DOI: 10.1137/16M1080938
Publication status: Published - 2017
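As background for the abstract above, the basic RTO mechanism can be sketched in the linear-Gaussian special case, where solving a randomly perturbed least-squares problem yields exact posterior samples (no Metropolis correction needed). The toy problem, variable names, and sizes below are illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: y = A u + noise, with a standard
# Gaussian prior on u (whitened coordinates).
m, n, sigma = 50, 10, 0.1
A = rng.standard_normal((m, n))
u_true = rng.standard_normal(n)
y = A @ u_true + sigma * rng.standard_normal(m)

def rto_sample():
    """One randomize-then-optimize draw: perturb the data and the
    prior mean, then solve the resulting least-squares problem
      min_u ||A u - (y + sigma*eps)||^2 / sigma^2 + ||u - zeta||^2.
    For a linear forward map this is an exact posterior sample."""
    eps = rng.standard_normal(m)    # data-space perturbation
    zeta = rng.standard_normal(n)   # prior-space perturbation
    M = np.vstack([A / sigma, np.eye(n)])
    b = np.concatenate([(y + sigma * eps) / sigma, zeta])
    return np.linalg.lstsq(M, b, rcond=None)[0]

samples = np.array([rto_sample() for _ in range(2000)])

# Analytic posterior mean for comparison.
Gamma = np.linalg.inv(A.T @ A / sigma**2 + np.eye(n))
mu = Gamma @ A.T @ y / sigma**2
```

For nonlinear forward maps (or, as in this paper, after the prior-transforming change of variables), the same optimize-with-perturbed-data step defines a proposal whose density is known, so it can be Metropolized to target the exact posterior.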

Cite this

Wang, Zheng; Bardsley, John; Solonen, Antti; Cui, Tiangang; Marzouk, Youssef. BAYESIAN INVERSE PROBLEMS WITH l1 PRIORS: A RANDOMIZE-THEN-OPTIMIZE APPROACH. In: SIAM Journal on Scientific Computing. 2017; Vol. 39, No. 5, pp. S140-S166.
@article{faee077e79fc4453b7314fb33eabfd02,
  title = "BAYESIAN INVERSE PROBLEMS WITH l1 PRIORS: A RANDOMIZE-THEN-OPTIMIZE APPROACH",
  author = "Zheng Wang and John Bardsley and Antti Solonen and Tiangang Cui and Youssef Marzouk",
  year = "2017",
  doi = "10.1137/16M1080938",
  language = "English",
  volume = "39",
  pages = "S140--S166",
  journal = "SIAM Journal on Scientific Computing",
  issn = "1064-8275",
  publisher = "Society for Industrial and Applied Mathematics",
  number = "5",
}
