Multimodal MR Image Synthesis Using Gradient Prior and Adversarial Learning

Document Type

Article

Publication Date

7-31-2020

Department

College of Computing

Abstract

In magnetic resonance imaging (MRI), several images can be obtained under different imaging settings (e.g., T1, T2, DWI, FLAIR). These images share similar anatomical structures but differ in contrast, providing a wealth of information for diagnosis. However, the images for specific settings may be unavailable due to limited scanning time or corruption caused by noise. It is therefore attractive to synthesize the missing images from the available MR images. In this paper, we propose a novel end-to-end multi-setting MR image synthesis method based on generative adversarial networks (GANs), a deep learning model. In the proposed method, MR images obtained under different settings are used as the inputs of a GAN, and each image is encoded by its own encoder. Each encoder includes a refinement structure that extracts a multi-scale feature map from its input image. The multi-scale feature maps from the different inputs are then fused to generate the desired target images under specific settings. Because images generated by GANs tend to have blurred edges, we incorporate gradient prior information into the model to preserve high-frequency information, such as the important tissue textures of medical images. The proposed model also exploits multi-scale information in the adversarial learning itself (not only in the generator or discriminator), which further improves the quality of the synthesized images. We evaluated the proposed method on two public datasets, BRATS and ISLES. Experimental results demonstrate that the proposed approach is superior to current state-of-the-art methods.
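To give a rough sense of the gradient-prior idea described above, here is a minimal NumPy sketch of a gradient-matching loss. The function names and the simple finite-difference operator are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def image_gradients(img):
    # Finite-difference gradients along columns (gx) and rows (gy).
    # A simple stand-in for the gradient operator; the paper's exact
    # operator may differ.
    gx = np.diff(img, axis=1)  # horizontal gradient
    gy = np.diff(img, axis=0)  # vertical gradient
    return gx, gy

def gradient_prior_loss(synth, target):
    # L1 distance between the gradient maps of the synthesized and
    # target images; penalizing this term encourages the generator to
    # preserve edges and fine tissue textures (high-frequency content).
    sx, sy = image_gradients(synth)
    tx, ty = image_gradients(target)
    return np.abs(sx - tx).mean() + np.abs(sy - ty).mean()
```

In practice, a term like this would be added (with a weighting coefficient) to the standard adversarial and reconstruction losses during training.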

Publication Title

IEEE Journal on Selected Topics in Signal Processing
