Training data independent image registration using generative adversarial networks and domain adaptation

Dwarikanath Mahapatra, Zongyuan Ge

Research output: Contribution to journal › Article › Research › peer-review

1 Citation (Scopus)

Abstract

Medical image registration is an important task in automated analysis of multimodal images and temporal data involving multiple patient visits. Conventional approaches, although useful for different image types, are time consuming. Of late, deep learning (DL) based image registration methods have been proposed that outperform traditional methods in terms of accuracy and time. However, DL based methods are heavily dependent on training data and do not generalize well when presented with images of different scanners or anatomies. We present a DL based approach that can perform medical image registration of one image type despite being trained with images of a different type. This is achieved by unsupervised domain adaptation in the registration process and allows for easier application to different datasets without extensive retraining. To achieve our objective, we train a network that transforms the given input image pair to a latent feature space vector using autoencoders. The resultant encoded feature space is used to generate the registered images with the help of generative adversarial networks (GANs). This feature transformation ensures greater invariance to the input image type. Experiments on chest X-ray, retinal and brain MR images show that our method, trained on one dataset, gives better registration performance on other datasets, outperforming conventional methods that do not incorporate domain adaptation.
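As a rough structural illustration of the pipeline the abstract describes — encode an image pair into a latent feature vector, generate a deformation from that latent space, and warp the moving image — the sketch below uses untrained random linear maps in place of the paper's autoencoder and GAN generator. It is not the authors' code; all names, dimensions, and the nearest-neighbour warp are illustrative assumptions.

```python
import numpy as np

# Conceptual sketch (not the authors' implementation): a DL registration
# pipeline maps an image pair -> latent vector -> deformation field -> warped
# image. Trained autoencoder/GAN networks are replaced here by random linear
# projections so the data flow can be shown end to end.

rng = np.random.default_rng(0)
H = W = 8            # toy image size (assumption)
latent_dim = 16      # toy latent dimensionality (assumption)

def encode(pair):
    """Autoencoder stand-in: flatten the image pair, project to a latent vector."""
    x = pair.reshape(-1)                          # (2*H*W,)
    E = rng.standard_normal((latent_dim, x.size)) * 0.01
    return E @ x                                  # latent feature vector

def decode_deformation(z):
    """Generator stand-in: map the latent vector to a dense displacement field."""
    D = rng.standard_normal((H * W * 2, z.size)) * 0.01
    return (D @ z).reshape(H, W, 2)               # per-pixel (dy, dx) offsets

def warp(img, field):
    """Nearest-neighbour warp of the moving image by the displacement field."""
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            dy, dx = field[i, j]
            si = int(np.clip(round(i + dy), 0, H - 1))
            sj = int(np.clip(round(j + dx), 0, W - 1))
            out[i, j] = img[si, sj]
    return out

fixed = rng.random((H, W))
moving = rng.random((H, W))

z = encode(np.stack([fixed, moving]))   # latent encoding of the input pair
field = decode_deformation(z)           # deformation generated from latent space
registered = warp(moving, field)        # candidate registered image
print(z.shape, field.shape, registered.shape)  # (16,) (8, 8, 2) (8, 8)
```

In the paper's actual method the encoder and generator are trained adversarially, and the latent representation is what enables the claimed cross-domain transfer; this sketch only mirrors the shapes and order of operations.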

Original language: English
Article number: 107109
Number of pages: 10
Journal: Pattern Recognition
Volume: 100
DOIs: 10.1016/j.patcog.2019.107109
Publication status: Published - Apr 2020

Keywords

  • Domain adaptation
  • GANs
  • MRI
  • Registration
  • X-ray

Cite this

@article{4caaea5a55d44f009015bbc048de8f6c,
title = "Training data independent image registration using generative adversarial networks and domain adaptation",
abstract = "Medical image registration is an important task in automated analysis of multimodal images and temporal data involving multiple patient visits. Conventional approaches, although useful for different image types, are time consuming. Of late, deep learning (DL) based image registration methods have been proposed that outperform traditional methods in terms of accuracy and time. However, DL based methods are heavily dependent on training data and do not generalize well when presented with images of different scanners or anatomies. We present a DL based approach that can perform medical image registration of one image type despite being trained with images of a different type. This is achieved by unsupervised domain adaptation in the registration process and allows for easier application to different datasets without extensive retraining. To achieve our objective, we train a network that transforms the given input image pair to a latent feature space vector using autoencoders. The resultant encoded feature space is used to generate the registered images with the help of generative adversarial networks (GANs). This feature transformation ensures greater invariance to the input image type. Experiments on chest X-ray, retinal and brain MR images show that our method, trained on one dataset, gives better registration performance on other datasets, outperforming conventional methods that do not incorporate domain adaptation.",
keywords = "Domain adaptation, GANs, MRI, Registration, X-ray",
author = "Dwarikanath Mahapatra and Zongyuan Ge",
year = "2020",
month = apr,
doi = "10.1016/j.patcog.2019.107109",
language = "English",
volume = "100",
journal = "Pattern Recognition",
issn = "0031-3203",
publisher = "Elsevier",
}

Training data independent image registration using generative adversarial networks and domain adaptation. / Mahapatra, Dwarikanath; Ge, Zongyuan.

In: Pattern Recognition, Vol. 100, 107109, 04.2020.


TY - JOUR

T1 - Training data independent image registration using generative adversarial networks and domain adaptation

AU - Mahapatra, Dwarikanath

AU - Ge, Zongyuan

PY - 2020/4

Y1 - 2020/4

N2 - Medical image registration is an important task in automated analysis of multimodal images and temporal data involving multiple patient visits. Conventional approaches, although useful for different image types, are time consuming. Of late, deep learning (DL) based image registration methods have been proposed that outperform traditional methods in terms of accuracy and time. However, DL based methods are heavily dependent on training data and do not generalize well when presented with images of different scanners or anatomies. We present a DL based approach that can perform medical image registration of one image type despite being trained with images of a different type. This is achieved by unsupervised domain adaptation in the registration process and allows for easier application to different datasets without extensive retraining. To achieve our objective, we train a network that transforms the given input image pair to a latent feature space vector using autoencoders. The resultant encoded feature space is used to generate the registered images with the help of generative adversarial networks (GANs). This feature transformation ensures greater invariance to the input image type. Experiments on chest X-ray, retinal and brain MR images show that our method, trained on one dataset, gives better registration performance on other datasets, outperforming conventional methods that do not incorporate domain adaptation.

AB - Medical image registration is an important task in automated analysis of multimodal images and temporal data involving multiple patient visits. Conventional approaches, although useful for different image types, are time consuming. Of late, deep learning (DL) based image registration methods have been proposed that outperform traditional methods in terms of accuracy and time. However, DL based methods are heavily dependent on training data and do not generalize well when presented with images of different scanners or anatomies. We present a DL based approach that can perform medical image registration of one image type despite being trained with images of a different type. This is achieved by unsupervised domain adaptation in the registration process and allows for easier application to different datasets without extensive retraining. To achieve our objective, we train a network that transforms the given input image pair to a latent feature space vector using autoencoders. The resultant encoded feature space is used to generate the registered images with the help of generative adversarial networks (GANs). This feature transformation ensures greater invariance to the input image type. Experiments on chest X-ray, retinal and brain MR images show that our method, trained on one dataset, gives better registration performance on other datasets, outperforming conventional methods that do not incorporate domain adaptation.

KW - Domain adaptation

KW - GANs

KW - MRI

KW - Registration

KW - X-ray

UR - http://www.scopus.com/inward/record.url?scp=85075515054&partnerID=8YFLogxK

U2 - 10.1016/j.patcog.2019.107109

DO - 10.1016/j.patcog.2019.107109

M3 - Article

AN - SCOPUS:85075515054

VL - 100

JO - Pattern Recognition

JF - Pattern Recognition

SN - 0031-3203

M1 - 107109

ER -