[1]李学相,曹淇,刘成明.基于无配对生成对抗网络的图像超分辨率重建[J].郑州大学学报(工学版),2021,42(05):1-6.

Image Super-Resolution Reconstruction Based on Unpaired Generative Adversarial Networks

Journal of Zhengzhou University (Engineering Science) [ISSN: 1671-6833 / CN: 41-1339/T]

Volume: 42
Issue: 2021(05)
Pages: 1-6
Publication date: 2021-09-10

Article Info

Title:
Image super-resolution based on no match generative adversarial network
Authors: 李学相, 曹淇, 刘成明
Document code: A
摘要 (Abstract, translated):
Image super-resolution reconstruction has long been a popular topic in computer image processing. Recently, super-resolution methods based on generative adversarial networks (GANs) have been able to perceive high-frequency texture details and achieve good reconstruction results. However, GAN-based image reconstruction is highly unstable with respect to image quality. To address this, we propose NM-SRGAN, a new model built on SRGAN that works with unpaired images. First, we use a cycle-consistent GAN (CycleGAN) as a preprocessing module to train on an unpaired dataset and obtain a better input image; we also retain the basic residual-block structure but remove the batch-normalization (BN) layers inside the residual blocks, which resolves the instability of the results. Second, based on the principle that second-order statistics capture regional information better than first-order statistics, we use a covariance matrix to capture second-order image information and add a second-order loss term to the perceptual loss, making the model focus more on changes in the detailed regions of the image. Finally, we replace the VGG loss in the perceptual loss with a first-order-gradient VGG loss, which concentrates on improving edge and texture details. We evaluate the proposed NM-SRGAN on four standard datasets, compare enlarged details of selected result images, score the results with the objective metrics peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), and compare against classical convolutional methods and SRGAN. Experimental results show that our method improves on SRGAN and the classical algorithms in stability, image quality, and detail.
Abstract:
Image super-resolution reconstruction has always been a hot topic in the field of computer image processing. Recently, image super-resolution reconstruction methods based on generative adversarial networks (GANs) have been able to perceive high-frequency texture details and have achieved good reconstruction results. However, GAN-based image reconstruction shows a high degree of instability in terms of image quality. In this regard, we propose NM-SRGAN, a new model based on SRGAN that works with unpaired images. First, we use CycleGAN as a preprocessing module to train on the unpaired dataset and obtain a better input image; while retaining the basic structure of the residual block, we delete the BN layers in the residual blocks, which solves the problem of unstable results. At the same time, based on the principle that second-order statistics capture regional information better than first-order statistics, we use the covariance matrix to capture the second-order information of the image and add a second-order loss term to the perceptual loss, making the model focus more on capturing changes in the detailed areas of the image. Finally, we change the VGG loss in the perceptual loss to a first-order-gradient VGG loss, focusing on improving the edge and texture details of the image. We test and evaluate the proposed NM-SRGAN on four standard datasets, compare enlarged details of selected result images, score the results with the objective metrics peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), and compare against classical convolutional methods and SRGAN. Experimental results show that our method improves on SRGAN and the classical algorithms in terms of stability, image quality, and detail.
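The abstract's second-order loss compares covariance matrices of feature maps, exploiting the fact that a covariance matrix encodes correlations between channels and thus regional structure that per-pixel (first-order) statistics miss. The following is a minimal NumPy sketch of that idea; the function name and the (C, H, W) feature-map layout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def second_order_loss(feat_sr, feat_hr):
    """L2 distance between the channel covariance matrices of two
    feature maps (e.g. VGG activations), each shaped (C, H, W).

    Hypothetical sketch of a covariance-based second-order loss;
    the paper's exact formulation may differ.
    """
    def channel_cov(feat):
        c = feat.shape[0]
        x = feat.reshape(c, -1)                 # flatten spatial dims: (C, H*W)
        x = x - x.mean(axis=1, keepdims=True)   # center each channel
        return x @ x.T / x.shape[1]             # (C, C) covariance matrix
    d = channel_cov(feat_sr) - channel_cov(feat_hr)
    return np.mean(d ** 2)
```

Because each channel is mean-centered, the loss is insensitive to constant brightness shifts and instead penalizes differences in how channels co-vary, i.e. differences in local structure.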
Last Update: 2021-10-11
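The abstract evaluates results with peak signal-to-noise ratio (PSNR), one of the two objective metrics named there. As a quick reference, here is the standard PSNR definition in NumPy (a generic sketch, not the paper's evaluation code):

```python
import numpy as np

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images
    (higher is better; infinite for identical images)."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)
```

The other metric, structural similarity (SSIM), additionally compares local luminance, contrast, and structure; library implementations such as scikit-image's `structural_similarity` are typically used rather than hand-rolled code.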