Joint PET-MRI image reconstruction using a patch-based joint-dictionary prior

Viswanath P. Sudarshan, Gary F. Egan, Zhaolin Chen, Suyash P. Awate

Research output: Contribution to journal › Article › Research › peer-review

24 Citations (Scopus)

Abstract

For simultaneous positron-emission-tomography and magnetic-resonance-imaging (PET-MRI) systems, while early methods relied on independently reconstructing PET and MRI images, recent works have demonstrated improvement in image reconstructions of both PET and MRI using joint reconstruction methods. The current state-of-the-art joint reconstruction priors rely on fine-scale PET-MRI dependencies through the image gradients at corresponding spatial locations in the PET and MRI images. In the general context of image restoration, compared to gradient-based models, patch-based models (e.g., sparse dictionaries) have demonstrated better performance by modeling image texture better. Thus, we propose a novel joint PET-MRI patch-based dictionary prior that learns inter-modality higher-order dependencies together with intra-modality textural patterns in the images. We model the joint-dictionary prior as a Markov random field and propose a novel Bayesian framework for joint reconstruction of PET and accelerated-MRI images, using expectation maximization for inference. We evaluate all methods on simulated brain datasets as well as on in vivo datasets. We compare our joint dictionary prior with the recently proposed joint priors based on image gradients, as well as independently applied patch-based priors. Our method demonstrates qualitative and quantitative improvement over the state of the art in both PET and MRI reconstructions.
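The core idea of the joint-dictionary prior is that a PET patch and its spatially corresponding MRI patch are stacked into one vector and coded sparsely against a shared dictionary, so one set of sparse coefficients couples both modalities. The following is a minimal illustrative sketch of that joint sparse-coding step (not the authors' implementation): it uses a random dictionary and a simple orthogonal-matching-pursuit routine, whereas the paper learns the dictionary from training data and embeds the coding step inside an MRF prior with EM inference.

```python
# Minimal sketch of joint PET-MRI patch coding with a shared dictionary.
# Assumptions (not from the paper): random dictionary, plain OMP, 8x8 patches.
import numpy as np

rng = np.random.default_rng(0)

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with <= k atoms of D."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the current support, update the residual.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

patch_size = 8 * 8          # vectorized 8x8 patch
n_atoms = 128               # dictionary size (illustrative)

# Shared dictionary over *concatenated* PET+MRI patches, unit-norm atoms.
D = rng.standard_normal((2 * patch_size, n_atoms))
D /= np.linalg.norm(D, axis=0)

# A PET patch and its corresponding MRI patch, stacked into one vector.
pet_patch = rng.standard_normal(patch_size)
mri_patch = rng.standard_normal(patch_size)
y = np.concatenate([pet_patch, mri_patch])

alpha = omp(D, y, k=10)     # ONE sparse code couples both modalities
recon = D @ alpha
pet_recon, mri_recon = recon[:patch_size], recon[patch_size:]
print(np.count_nonzero(alpha))
```

Because the code vector `alpha` is shared, structure present in the MRI half of each atom constrains the PET half of the reconstruction (and vice versa), which is the inter-modality dependency the abstract refers to; all specific names and sizes here are illustrative.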

Original language: English
Article number: 101669
Number of pages: 14
Journal: Medical Image Analysis
Volume: 62
DOIs
Publication status: Published - May 2020

Keywords

  • Expectation maximization
  • Joint dictionary
  • Joint reconstruction
  • Markov random field
  • Simultaneous PET-MRI
  • Sparsity
  • Undersampled k-space
