[1] GAO Yufei, MA Zixing, XU Jing, et al. Brain Glioma Image Segmentation Based on Convolution and Deformable Attention[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(02): 27-32. [doi:10.13705/j.issn.1671-6833.2023.05.007]

Journal of Zhengzhou University (Engineering Science) [ISSN: 1671-6833 / CN: 41-1339/T]

Volume:
45
Issue:
2024(02)
Pages:
27-32
Publication Date:
2024-03-06

Article Info

Title:
Brain Glioma Image Segmentation Based on Convolution and Deformable Attention
Author(s):
GAO Yufei MA Zixing XU Jing ZHAO Guohua SHI Lei
1. School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou 450002, China; 2. Songshan Laboratory, Zhengzhou 450052, China; 3. School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China; 4. The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450003, China
Keywords:
deep learning; brain glioma image segmentation; CNN; Transformer; self-attention mechanism
DOI:
10.13705/j.issn.1671-6833.2023.05.007
Document Code:
A
Abstract:
For dense-prediction medical image segmentation tasks such as brain glioma image segmentation, both local and global dependencies are indispensable. To address the problems that convolutional neural networks lack the ability to establish global dependencies and that the self-attention mechanism captures local details insufficiently, a brain glioma image segmentation method based on convolution and deformable attention was proposed. A serial combination module of convolution and a deformable attention Transformer was designed, in which convolution extracts local features and the immediately following deformable attention Transformer captures global dependencies, establishing local and global dependencies at different resolutions. As a hybrid CNN-Transformer architecture, the proposed method achieves accurate brain glioma image segmentation without any pretraining. Experiments showed that the average Dice score and the average 95% Hausdorff distance on the BraTS2020 brain glioma image segmentation dataset were 83.56% and 11.30 mm, respectively, achieving segmentation accuracy comparable with other brain glioma image segmentation methods while reducing computational overhead by at least 50%, effectively improving the efficiency of brain glioma image segmentation.
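The core idea of the abstract, a serial block in which convolution first extracts local features and an attention layer then models global dependencies over the feature map, can be illustrated with a minimal NumPy sketch. Note the simplifications: plain dot-product self-attention stands in for the paper's deformable attention, and all shapes, weights, and function names here are illustrative, not the authors' implementation.

```python
import numpy as np

def conv2d_3x3(x, w):
    # Naive 3x3 same-padding convolution: (H, W, C_in) -> (H, W, C_out).
    H, W, _ = x.shape
    c_out = w.shape[3]
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, c_out))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 3, j:j + 3, :]  # (3, 3, C_in) local window
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def self_attention(tokens, wq, wk, wv):
    # Plain dot-product self-attention over (N, C) tokens.
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ v

def serial_block(x, rng):
    # 1) Convolution extracts local features ...
    H, W, C = x.shape
    w_conv = rng.standard_normal((3, 3, C, C)) * 0.1
    local = np.maximum(conv2d_3x3(x, w_conv), 0.0)  # conv + ReLU
    # 2) ... then attention over the flattened feature map captures
    #    global dependencies between all spatial positions.
    tokens = local.reshape(H * W, C)
    wq, wk, wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
    return self_attention(tokens, wq, wk, wv).reshape(H, W, C)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 4))  # toy single-slice feature map
out = serial_block(feat, rng)
print(out.shape)  # (8, 8, 4)
```

Stacking such blocks at successive resolutions, as the abstract describes, is what lets the hybrid architecture model local and global dependencies at each scale; the paper's deformable attention additionally learns where to sample keys and values rather than attending densely to every position.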

References:

[1] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]∥International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2015: 234-241.

[2] XIAO X, LIAN S, LUO Z M, et al. Weighted Res-UNet for high-quality retina vessel segmentation[C]∥2018 9th International Conference on Information Technology in Medicine and Education (ITME). Piscataway: IEEE, 2018: 327-331.
[3] MILLETARI F, NAVAB N, AHMADI S A. V-Net: fully convolutional neural networks for volumetric medical image segmentation[C]∥2016 Fourth International Conference on 3D Vision (3DV). Piscataway: IEEE, 2016: 565-571.
[4] FENG S L, ZHAO H M, SHI F, et al. CPFNet: context pyramid fusion network for medical image segmentation[J]. IEEE Transactions on Medical Imaging, 2020, 39(10): 3008-3018.
[5] XIA H Y, MA M J, LI H S, et al. MC-Net: multi-scale context-attention network for medical CT image segmentation[J].Applied Intelligence, 2022, 52(2): 1508-1519.
[6] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]∥Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010.
[7] CHEN J N, LU Y Y, YU Q H, et al. TransUNet: transformers make strong encoders for medical image segmentation[EB/OL]. (2021-02-08)[2023-01-12]. https://doi.org/10.48550/arXiv.2102.04306.
[8] YANG Y, MEHRKANOON S. AA-TransUNet: attention augmented transunet for nowcasting tasks[C]∥2022 International Joint Conference on Neural Networks (IJCNN). Piscataway:IEEE, 2022: 1-8.
[9] WANG W X, CHEN C, DING M, et al. TransBTS: multimodal brain tumor segmentation using transformer[C]∥International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2021: 109-119.
[10] WANG W H, XIE E Z, LI X, et al. Pyramid vision transformer: a versatile backbone for dense prediction without convolutions[C]∥2021 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway:IEEE, 2022: 548-558.
[11] LIU Z, LIN Y T, CAO Y, et al. Swin Transformer: hierarchical vision transformer using shifted windows[C]∥2021 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway:IEEE, 2022: 9992-10002.
[12] CAO H, WANG Y, CHEN J, et al. Swin-UNet: Unet-like pure transformer for medical image segmentation[EB/OL]. (2021-05-12)[2023-01-12]. https://doi.org/10.48550/arXiv.2105.05537.
[13] GAO Y, ZHOU M, METAXAS D N. UTNet: a hybrid transformer architecture for medical image segmentation[C]∥International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2021: 61-71.
[14] XIA Z F, PAN X R, SONG S J, et al. Vision Transformer with deformable attention[C]∥2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway:IEEE, 2022: 4784-4793.
[15] DAI Z H, LIU H X, LE Q V, et al. CoAtNet: Marrying convolution and attention for all data sizes[J]. Advances in Neural Information Processing Systems, 2021, 34: 3965-3977.
[16] GUO J Y, HAN K, WU H, et al. CMT: convolutional neural networks meet vision transformers[C]∥2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway:IEEE, 2022: 12165-12175.
[17] SANDLER M, HOWARD A, ZHU M L, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]∥2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2018: 4510-4520.
[18] ÇIÇEK Ö, ABDULKADIR A, LIENKAMP S S, et al. 3D U-net: learning dense volumetric segmentation from sparse annotation[M]∥Medical Image Computing and Computer-Assisted Intervention-MICCAI 2016. Cham: Springer, 2016: 424-432.
[19] YU W, FANG B, LIU Y Q, et al. Liver vessels segmentation based on 3D residual U-NET[C]∥2019 IEEE International Conference on Image Processing (ICIP).Piscataway: IEEE, 2019: 250-254.
[20] LIU C Y, DING W B, LI L, et al. Brain tumor segmentation network using attention-based fusion and spatial relationship constraint[C]∥International MICCAI Brainlesion Workshop. Cham:Springer, 2021: 219-229.
[21] VU M H, NYHOLM T, LÖFSTEDT T. Multi-decoder networks with multi-denoising inputs for tumor segmentation[C]∥International MICCAI Brainlesion Workshop. Cham: Springer, 2021: 412-423.


Last Update: 2024-03-08