Multimodal-Boost: Multimodal Medical Image Super-Resolution Using Multi-Attention Network With Wavelet Transform

Fayaz Ali Dharejo, Muhammad Zawish, Farah Deeba, Yuanchun Zhou, Kapal Dev, Sunder Ali Khowaja, Nawab Muhammad Faseeh Qureshi

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Multimodal medical images are widely used by clinicians and physicians to analyze and retrieve complementary information from high-resolution images in a non-invasive manner. Loss of image resolution adversely affects the overall performance of medical image interpretation. Deep learning-based single-image super-resolution (SISR) algorithms have revolutionized the overall diagnosis framework by continually improving the architectural components and training strategies of convolutional neural networks (CNNs) applied to low-resolution images. However, existing work falls short in two ways: (i) the SR output exhibits poor texture detail and often blurred edges, and (ii) most models are developed for a single modality and therefore require modification to adapt to a new one. This work addresses (i) by proposing a generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data. Existing GAN-based approaches have yielded good SR results; however, we experimentally confirm that the texture detail of their SR output is deficient, particularly for medical images. The integration of the wavelet transform (WT) and GANs in our proposed SR model addresses this limitation: the WT divides the LR image into multiple frequency bands, while the transferred GAN uses multi-attention and upsample blocks to predict the high-frequency components. Additionally, we present a learning method for training domain-specific classifiers as perceptual loss functions; combining the multi-attention GAN loss with this perceptual loss yields efficient and reliable performance. Because applying the same model to medical images from diverse modalities is challenging, our work addresses (ii) by training and evaluating on several modalities via transfer learning. Using two medical datasets, we validate the proposed SR network against existing state-of-the-art approaches and achieve promising results in terms of structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR).
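For illustration, here is a minimal sketch of the wavelet-decomposition step described above, using a standard 2-D discrete wavelet transform from the PyWavelets library. The Haar basis is an assumption for the example; the abstract does not state which wavelet the model uses.

```python
# Minimal sketch: split a grayscale LR image into DWT sub-bands.
# dwt2 yields one low-frequency band (LL) and three high-frequency
# bands (LH, HL, HH); the SR network is then tasked with predicting
# the high-frequency content. 'haar' is an illustrative assumption.
import numpy as np
import pywt

def wavelet_subbands(lr_image: np.ndarray):
    """Return the four DWT sub-bands (LL, LH, HL, HH) of an LR image."""
    ll, (lh, hl, hh) = pywt.dwt2(lr_image, "haar")
    return ll, lh, hl, hh

# Example: a 64x64 placeholder "LR image" yields four 32x32 sub-bands.
lr = np.random.rand(64, 64).astype(np.float32)
ll, lh, hl, hh = wavelet_subbands(lr)
print(ll.shape, lh.shape, hl.shape, hh.shape)  # (32, 32) each
```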
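Likewise, a hypothetical PyTorch sketch of how the adversarial loss might be combined with a perceptual loss computed from a fixed, domain-specific classifier, as the abstract describes. The L1 feature distance and the weighting factor `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: generator loss = perceptual term + weighted
# adversarial term. feat_sr / feat_hr are feature maps of the SR and
# HR images taken from a frozen, pretrained domain-specific classifier
# (the perceptual loss network). Weighting and distance are assumptions.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial (GAN) criterion
l1 = nn.L1Loss()              # perceptual feature distance

def generator_loss(d_fake_logits, feat_sr, feat_hr, lam=1e-3):
    """Combine perceptual loss with the adversarial loss on SR output."""
    adv = bce(d_fake_logits, torch.ones_like(d_fake_logits))
    perceptual = l1(feat_sr, feat_hr)
    return perceptual + lam * adv
```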
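Finally, the reported evaluation metrics can be computed with scikit-image's reference implementations; the random arrays below are placeholders standing in for a real HR/SR image pair.

```python
# Sketch of the evaluation step: PSNR and SSIM between a ground-truth
# HR image and the super-resolved output (placeholder data here).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = np.random.rand(128, 128)               # placeholder HR ground truth
sr = np.clip(hr + 0.05 * np.random.rand(128, 128), 0, 1)  # placeholder SR output

psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
ssim = structural_similarity(hr, sr, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```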

Original language: British English
Pages (from-to): 1-14
Number of pages: 14
Journal: IEEE/ACM Transactions on Computational Biology and Bioinformatics
State: Accepted/In press - 2022

Keywords

  • Attention modules
  • Feature extraction
  • Generative adversarial networks
  • Image edge detection
  • Image reconstruction
  • Medical diagnostic imaging
  • Multimodality data
  • Super-resolution
  • Task analysis
  • Training
  • Transfer learning
  • Wavelet transform
