BO Yangyu, WU Yongliang, WANG Xuejun. Image Super-resolution Reconstruction Network Based on Double Feature Extraction and Attention Mechanism [J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(06): 48-55. [doi: 10.13705/j.issn.1671-6833.2024.03.009]

Image Super-resolution Reconstruction Based on Double Feature Extraction and Attention Mechanism

Journal of Zhengzhou University (Engineering Science) [ISSN: 1671-6833 / CN: 41-1339/T]

Volume: 45
Issue: 2024, No. 06
Pages: 48-55
Publication date: 2024-09-25

Article Information

Title:
Image Super-resolution Reconstruction Network Based on Double Feature Extraction and Attention Mechanism
Article ID:
1671-6833(2024)06-0048-08
Author(s):
BO Yangyu, WU Yongliang, WANG Xuejun
Affiliation:
College of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang, Hebei 050043, China
Keywords:
image super-resolution reconstruction; local spatial attention; residual fusion attention; atrous pyramid; dual-branch network; double feature extraction
CLC number:
TP751; TP391.41; TP183
DOI:
10.13705/j.issn.1671-6833.2024.03.009
Document code:
A
Abstract:
During image super-resolution reconstruction, high-frequency features are often ignored, which leads to insufficient feature extraction and blurred texture details in the reconstructed image. To address this problem, an image super-resolution reconstruction network based on double feature extraction and an attention mechanism was proposed. First, a dual-branch network was used for feature extraction, so that high-frequency features and multi-scale features could be effectively extracted and consistently fused during reconstruction. Second, to let the network extract more accurate high-frequency features, a local spatial attention module was proposed and combined with a channel attention module to build a residual fusion attention module, improving the network's ability to locate high-frequency features. Finally, an atrous pyramid module was designed to enlarge the receptive field of the network and enable multi-scale feature extraction. Experiments on four benchmark datasets showed that, especially at a super-resolution factor of 4, the proposed method improved the best PSNR of several current mainstream models by 0.16, 0.08, 0.03, and 0.20 dB, respectively, and achieved better results in both visual quality and quantitative analysis.
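The abstract outlines three building blocks: a dual-branch feature extractor, a residual fusion attention module that combines local spatial attention with channel attention, and an atrous (dilated) pyramid module for multi-scale features. The PyTorch sketch below illustrates only the attention-fusion and atrous-pyramid ideas; the module names, channel counts, reduction ratio, kernel sizes, and dilation rates are assumptions for illustration and are not taken from the paper.

```python
# Minimal, illustrative PyTorch sketch of two ideas described in the abstract:
# a residual fusion attention block (channel attention + local spatial attention)
# and an atrous pyramid of dilated convolutions. All hyperparameters here are
# illustrative assumptions, not the authors' settings.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))        # re-weight each channel


class LocalSpatialAttention(nn.Module):
    """Local spatial attention map from a small convolution (illustrative design)."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return x * self.sigmoid(self.conv(x))   # re-weight each spatial position


class ResidualFusionAttention(nn.Module):
    """Residual block that fuses channel and local spatial attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.lsa = LocalSpatialAttention(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        feat = self.body(x)
        fused = self.fuse(torch.cat([self.ca(feat), self.lsa(feat)], dim=1))
        return x + fused                        # residual path keeps low-frequency content


class AtrousPyramid(nn.Module):
    """Parallel dilated 3x3 convolutions that enlarge the receptive field
    for multi-scale feature extraction (dilation rates are illustrative)."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations]
        )
        self.fuse = nn.Conv2d(len(dilations) * channels, channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)              # a toy low-resolution feature map
    y = AtrousPyramid(64)(ResidualFusionAttention(64)(x))
    print(y.shape)                              # torch.Size([1, 64, 48, 48])
```

How these modules are arranged inside the dual-branch network is not specified in the abstract; the chained call in the example is only a shape check, not the paper's architecture.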


Last Update: 2024-09-29