[1]林予松,李孟娅,李英豪,等.基于GAN和多尺度空间注意力的多模态医学图像融合[J].郑州大学学报(工学版),2024,45(pre):2.[doi:10.13705/j.issn.1671-6833.2025.01.001]
 LIN Yusong, LI Mengya, LI Yinghao, et al. Multimodal Medical Image Fusion Based on GAN and Multiscale Spatial Attention[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(pre): 2. [doi:10.13705/j.issn.1671-6833.2025.01.001]

基于GAN和多尺度空间注意力的多模态医学图像融合

《郑州大学学报(工学版)》[ISSN:1671-6833/CN:41-1339/T]

Volume:
45
Issue:
2024 pre
Pages:
2
Publication date:
2024-12-31

文章信息/Info

Title:
Multimodal Medical Image Fusion Based on GAN and Multiscale Spatial Attention
作者:
林予松1,2,3, 李孟娅1,2, 李英豪1,2, 赵哲1,2
(1.郑州大学 网络空间安全学院,河南 郑州 450002;2.郑州大学 互联网医疗与健康服务河南省协同创新中心,河南 郑州 450052;3.郑州大学 汉威物联网研究院,河南 郑州 450002)
Author(s):
LIN Yusong1,2,3, LI Mengya1,2, LI Yinghao1,2, ZHAO Zhe1,2*
(1. School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou 450002, China; 2. Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou 450052, China; 3. Hanwei IoT Institute, Zhengzhou University, Zhengzhou 450002, China)
关键词:
图像融合; 多模态医学图像; 生成对抗网络; 特征金字塔; 注意力机制
Keywords:
image fusion; multi-modal medical images; generative adversarial network; feature pyramid; attention mechanism
CLC number:
TP391
DOI:
10.13705/j.issn.1671-6833.2025.01.001
Document code:
A
摘要:
针对多模态医学图像融合过程中多尺度特征和纹理细节信息丢失的问题,提出一种基于生成对抗网络和多尺度空间注意力机制的图像融合算法。首先,生成器采用自编码器结构,分别利用编码器和解码器对输入图像进行特征提取、融合和重建,生成融合图像。其次,整个对抗网络框架采用双鉴别器结构,使得生成器生成的融合图像同时保存多个模态图像的显著特征。最后,构建一种多尺度空间注意力机制作为编码器进行特征提取的基本模块,利用多尺度结构充分捕获并保留源图像的多尺度特征,并且引入空间注意力机制更好地保持源图像的结构和细节信息。在哈佛大学全脑图谱数据库上的实验结果表明,本文所提算法生成的融合图像不仅纹理细节更为丰富,有助于人类视觉观察,而且在三种不同类型的医学图像融合任务上平均梯度、峰值信噪比、互信息、视觉保真度客观评价指标的平均值分别达到0.3023、20.7207、1.4414、0.6498,与其他先进的算法相比具有一定的优势。
Abstract:
Aiming to address the loss of multi-scale features and texture details in multi-modal medical image fusion, an image fusion algorithm based on a generative adversarial network (GAN) and a multi-scale spatial attention mechanism is proposed. First, the generator adopts an autoencoder structure, in which the encoder and decoder extract, fuse, and reconstruct features of the input images to generate the fused image. Second, the adversarial framework employs a dual-discriminator structure, so that the fused image produced by the generator simultaneously preserves the salient features of multiple modalities. Finally, a multi-scale spatial attention mechanism is constructed as the basic feature-extraction module of the encoder: the multi-scale structure fully captures and retains the multi-scale features of the source images, and the spatial attention mechanism better preserves their structures and details. Experimental results on the Harvard Whole Brain Atlas database demonstrate that the fused images generated by the proposed algorithm not only exhibit richer texture details, which aids human visual observation, but also achieve average values of 0.3023, 20.7207, 1.4414, and 0.6498 on the average gradient, peak signal-to-noise ratio, mutual information, and visual fidelity metrics across three types of medical image fusion tasks, showing an advantage over other state-of-the-art algorithms.
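The four objective metrics reported in the abstract can be illustrated with common textbook definitions. This is a sketch only: the exact formulations, normalization, and histogram bin count used in the paper are not given here, so `bins=32` and the `peak` value are assumptions.

```python
import numpy as np

def average_gradient(img):
    # Mean magnitude of local intensity gradients; higher values
    # indicate richer texture detail in the fused image.
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def psnr(fused, ref, peak=1.0):
    # Peak signal-to-noise ratio between a fused image and a reference,
    # in dB; infinite when the two images are identical.
    mse = np.mean((fused.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def mutual_information(a, b, bins=32):
    # Histogram-based mutual information between two images; measures
    # how much source-image information the fused image retains.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

Visual information fidelity (VIF) is omitted here because it depends on a multi-scale natural-scene-statistics model that cannot be stated in a few lines.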

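The dual-discriminator idea described in the abstract — one discriminator per source modality, so the generator must fool both at once — can be illustrated with a toy generator loss. This is a sketch under stated assumptions: the function name, the log-loss adversarial terms, the MSE content terms, and the weight `lam` are all illustrative choices, not the paper's actual formulation.

```python
import numpy as np

EPS = 1e-12  # avoids log(0) on saturated discriminator outputs

def generator_loss(d1_out, d2_out, fused, src_a, src_b, lam=10.0):
    # d1_out / d2_out: probabilities each discriminator assigns to the
    # fused image being a real image of its modality. The generator is
    # penalized whenever EITHER discriminator rejects the fused image,
    # which pushes it to preserve salient features of both modalities.
    adv = -(np.mean(np.log(d1_out + EPS)) + np.mean(np.log(d2_out + EPS)))
    # Content term keeps the fused image close to both source images.
    content = np.mean((fused - src_a) ** 2) + np.mean((fused - src_b) ** 2)
    return float(adv + lam * content)
```

A fused image that both discriminators rate highly yields a lower loss than one they reject, which is the mechanism the dual-discriminator framework relies on.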
参考文献/References:

[1] HUANG B, YANG F, YIN M X, et al. A review of multimodal medical image fusion techniques[J]. Computational and Mathematical Methods in Medicine, 2020, 2020: 8279342.
[2] AZAM M A, KHAN K B, SALAHUDDIN S, et al. A review on multimodal medical image fusion: compendious analysis of medical modalities, multimodal databases, fusion techniques and quality metrics[J]. Computers in Biology and Medicine, 2022, 144: 105253.
[3] PIELLA G. A general framework for multiresolution image fusion: from pixels to regions[J]. Information Fusion, 2003, 4(4): 259-280.
[4] GUO P, XIE G Q, LI R F, et al. Multimodal medical image fusion with convolution sparse representation and mutual information correlation in NSST domain[J]. Complex & Intelligent Systems, 2023, 9(1): 317-328.
[5] YIN M, LIU X N, LIU Y, et al. Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain[J]. IEEE Transactions on Instrumentation and Measurement, 2019, 68(1): 49-64.
[6] ZHU Z Q, ZHENG M Y, QI G Q, et al. A phase congruency and local Laplacian energy based multi-modality medical image fusion method in NSCT domain[J]. IEEE Access, 2019, 7: 20811-20824.
[7] DOGRA A, KUMAR S. Multi-modality medical image fusion based on guided filter and image statistics in multidirectional shearlet transform domain[J]. Journal of Ambient Intelligence and Humanized Computing, 2023, 14(9): 12191-12205.
[8] ZHANG Y, LIU Y, SUN P, et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99-118.
[9] LI H, WU X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.
[10] FU J, LI W S, DU J, et al. A multiscale residual pyramid attention network for medical image fusion[J]. Biomedical Signal Processing and Control, 2021, 66: 102488.
[11] 许光宇, 陈浩宇, 张杰. 双路径双鉴别器生成对抗网络的红外与可见光图像融合[J/OL]. 计算机辅助设计与图形学学报, 1-14 [2024-04-07]. http://kns.cnki.net/kcms/detail/11.2925.TP.20240204.1728.061.html.
XU G Y, CHEN H Y, ZHANG J. Infrared and visible image fusion based on dual-path and dual-discriminator generation adversarial network[J/OL]. Journal of Computer-Aided Design & Computer Graphics, 1-14 [2024-04-07]. http://kns.cnki.net/kcms/detail/11.2925.TP.20240204.1728.061.html.
[12] MA J Y, YU W, LIANG P W, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.
[13] ZHAO C, WANG T F, LEI B Y. Medical image fusion method based on dense block and deep convolutional generative adversarial network[J]. Neural Computing and Applications, 2021, 33(12): 6595-6610.
[14] MA J Y, XU H, JIANG J J, et al. DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995.
[15] ZHOU T, LI Q, LU H L, et al. GAN review: models and medical image fusion applications[J]. Information Fusion, 2023, 91: 134-148.
[16] 肖儿良, 林化溪, 简献忠. 基于生成对抗网络探索潜在空间的医学图像融合算法[J]. 信息与控制, 2021, 50(5): 538-549.
XIAO E L, LIN H X, JIAN X Z. Medical image fusion algorithm adopting generative adversarial network to explore latent space[J]. Information and Control, 2021, 50(5): 538-549.
[17] LI H C, XIONG P F, AN J, et al. Pyramid attention network for semantic segmentation[EB/OL]. (2018-11-25) [2024-04-07]. https://arxiv.org/abs/1805.10180.
[18] LIU Y, WANG L, LI H F, et al. Multi-focus image fusion with deep residual learning and focus property detection[J]. Information Fusion, 2022, 86: 1-16.
[19] 尹海涛, 岳勇赢. 基于半监督学习和生成对抗网络的医学图像融合算法[J]. 激光与光电子学进展, 2022, 59(22): 245-254.
YIN H T, YUE Y Y. Medical image fusion based on semisupervised learning and generative adversarial network[J]. Laser & Optoelectronics Progress, 2022, 59(22): 245-254.
[20] LI X X, GUO X P, HAN P F, et al. Laplacian redecomposition for multimodal medical image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(9): 6880-6890.
[21] XU H, MA J Y, JIANG J J, et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518.
[22] XU H, MA J Y. EMFusion: an unsupervised enhanced medical image fusion network[J]. Information Fusion, 2021, 76: 177-186.
[23] LI J W, LIU J Y, ZHOU S H, et al. GeSeNet: a general semantic-guided network with couple mask ensemble for medical image fusion[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 1: 14.
[24] 刘帅奇, 王洁, 安彦玲, 等. 基于CNN的非下采样剪切波域多聚焦图像融合[J]. 郑州大学学报(工学版), 2019, 40(4): 36-41.
LIU S Q, WANG J, AN Y L, et al. Multi-focus image fusion based on CNN in non-sampled shearlet domain[J]. Journal of Zhengzhou University (Engineering Science), 2019, 40(4): 36-41.

相似文献/Similar Articles:

[1]刘帅奇,王洁,安彦玲,等.基于CNN的非下采样剪切波域多聚焦图像融合[J].郑州大学学报(工学版),2019,40(04):7.[doi:10.13705/j.issn.1671-6833.2019.04.002]
 LIU Shuaiqi, WANG Jie, AN Yanling, et al. Multi-focus Image Fusion Based on Convolution Neural Network in Non-sampled Shearlet Domain[J]. Journal of Zhengzhou University (Engineering Science), 2019, 40(04): 7. [doi:10.13705/j.issn.1671-6833.2019.04.002]

更新日期/Last Update: 2024-10-10