[1]魏明军,陈晓茹,刘 铭,等.用于伪装目标检测的边缘-纹理引导增强网络[J].郑州大学学报(工学版),2025,46(05):9-17.[doi:10.13705/j.issn.1671-6833.2025.02.009]
 WEI Mingjun,CHEN Xiaoru,LIU Ming,et al.Edge-texture Guided Enhancement Network for Camouflaged Object Detection[J].Journal of Zhengzhou University (Engineering Science),2025,46(05):9-17.[doi:10.13705/j.issn.1671-6833.2025.02.009]

用于伪装目标检测的边缘-纹理引导增强网络

《郑州大学学报(工学版)》[ISSN:1671-6833/CN:41-1339/T]

卷:
46
期数:
2025年05期
页码:
9-17
出版日期:
2025-08-10

文章信息/Info

Title:
Edge-texture Guided Enhancement Network for Camouflaged Object Detection
文章编号:
1671-6833(2025)05-0009-09
作者:
魏明军1,2 陈晓茹1 刘 铭1 刘亚志1,2 李 辉1
1.华北理工大学 人工智能学院,河北 唐山 063210;2.河北省工业智能感知重点实验室,河北 唐山 063210
Author(s):
WEI Mingjun1,2 CHEN Xiaoru1 LIU Ming1 LIU Yazhi1,2 LI Hui1
1.College of Artificial Intelligence, North China University of Science and Technology, Tangshan 063210, China; 2.Hebei Provincial Key Laboratory of Industrial Intelligent Perception, Tangshan 063210, China
关键词:
伪装目标检测;边缘信息;纹理信息;特征引导;特征交互
Keywords:
camouflaged object detection; edge information; texture information; feature guidance; feature interaction
分类号:
TP391
DOI:
10.13705/j.issn.1671-6833.2025.02.009
文献标志码:
A
摘要:
伪装目标检测(COD)因目标物体与背景极为相似而具有很大的挑战性。针对目前COD方法未能充分利用边缘和纹理信息辅助检测任务而造成的边缘预测模糊、检测结果不完整且存在干扰等问题,提出了一种边缘-纹理引导增强网络(ETGENet),通过显式且充分的边缘和纹理引导策略来进一步提升COD的性能。首先,ETGENet中包含了一个关键的特征引导增强模块(FGEM),该模块能够利用并行的特征细化分支处理并增强对象特征,引导分支通过引导注意力来获取对象特征与边缘-纹理线索之间的相关性,以加强网络对于对象细节信息的理解并抑制噪声干扰;而自增强分支则利用自注意力机制从全局角度对伪装对象特征进行细化。其次,提出了一个特征交互融合模块(FIFM)来渐进融合相邻特征,FIFM利用注意力交互机制和加权融合策略学习特征间的互补信息,以生成更完整的预测图。最后,在3个公共数据集CAMO、COD10K和NC4K上进行实验验证,结果表明:所提出的网络在结构度量S、自适应增强匹配度量E、加权F度量和平均绝对误差M指标上均优于相关领域的其他方法,尤其在最大的测试集NC4K上,加权F度量指标高于所对比12个COD方法中表现最佳的FSPNet 2.2百分点。
Abstract:
Camouflaged object detection (COD) is highly challenging because target objects closely resemble their backgrounds. Existing COD methods do not make full use of edge and texture information to assist detection, which leads to blurred edge predictions, incomplete detection results, and background interference. To address these issues, a novel edge-texture guided enhancement network (ETGENet) was proposed to further improve COD performance through an explicit and sufficient edge-texture guidance strategy. Firstly, ETGENet contained a key feature guided enhancement module (FGEM), which processed and enhanced object features with parallel refinement branches. The guidance branch applied guided attention to capture the correlations between object features and edge-texture cues, strengthening the network′s understanding of object details and suppressing noise interference, while the self-enhancement branch used a self-attention mechanism to refine camouflaged object features from a global perspective. Secondly, a feature interaction fusion module (FIFM) was proposed to progressively fuse adjacent features. FIFM learned complementary information between features through an attention interaction mechanism and a weighted fusion strategy to generate more complete prediction maps. Finally, experiments on three public datasets, CAMO, COD10K, and NC4K, demonstrated that the proposed network outperformed related methods on the structure measure S, the adaptive enhanced-alignment measure E, the weighted F-measure, and the mean absolute error M. Notably, on the largest test set, NC4K, its weighted F-measure surpassed FSPNet, the best-performing of the 12 compared COD methods, by 2.2 percentage points.
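The abstract describes two core operations: guided attention that correlates object features with edge-texture cues (FGEM), and weighted fusion of adjacent features (FIFM). The paper's exact formulations are not given on this page; the NumPy sketch below only illustrates the two ideas in simplified form. All shapes, function names, and the sigmoid-based weighting are assumptions chosen for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def guided_attention(obj_feat, cue_feat):
    """Cross-attention sketch: object features attend to edge-texture cues.
    obj_feat: (N, C) flattened object features; cue_feat: (M, C) cue features.
    The attended cues are added back as a residual enhancement."""
    scale = obj_feat.shape[-1] ** -0.5
    attn = softmax(obj_feat @ cue_feat.T * scale, axis=-1)  # (N, M) weights
    return obj_feat + attn @ cue_feat                       # (N, C)

def weighted_fusion(feat_a, feat_b):
    """Weighted-fusion sketch for adjacent-level features: a per-position
    weight derived from feat_a blends the two feature maps."""
    w = 1.0 / (1.0 + np.exp(-feat_a.mean(axis=-1, keepdims=True)))  # (N, 1)
    return w * feat_a + (1.0 - w) * feat_b

rng = np.random.default_rng(0)
obj = rng.standard_normal((16, 8))   # 16 spatial positions, 8 channels
cue = rng.standard_normal((16, 8))   # edge-texture cue features
enhanced = guided_attention(obj, cue)
fused = weighted_fusion(enhanced, obj)
print(enhanced.shape, fused.shape)   # (16, 8) (16, 8)
```

In the actual network these operations would act on multi-channel convolutional feature maps and use learned projections; the sketch keeps only the attention-and-blend structure that the abstract describes.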

参考文献/References:

[1]LE T N, NGUYEN T V, NIE Z L, et al. Anabranch network for camouflaged object segmentation[J]. Computer Vision and Image Understanding, 2019, 184: 45-56. 

[2]FAN D P, JI G P, SUN G L, et al. Camouflaged object detection[C]∥2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 2774-2784. 
[3]LV Y Q, ZHANG J, DAI Y C, et al. Simultaneously localize, segment and rank the camouflaged objects[C]∥ 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2021: 11586-11596. 
[4]PANG Y W, ZHAO X Q, XIANG T Z, et al. Zoom in and out: a mixed-scale triplet network for camouflaged object detection[C]∥2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 2150-2160. 
[5]SONG Y X, LI X Y, QI L. Camouflaged object detection with feature grafting and distractor aware[C]∥2023 IEEE International Conference on Multimedia and Expo. Piscataway:IEEE, 2023: 2459-2464. 
[6]HUANG Z, DAI H, XIANG T Z, et al. Feature shrinkage pyramid for camouflaged object detection with transformers[C]∥2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 5557-5566. 
[7]SUN Y J, WANG S, CHEN C, et al. Boundary-guided camouflaged object detection[EB/OL]. (2022-07-02)[2024-11-20]. https://doi.org/10.48550/arXiv.2207.00794. 
[8]ZHU H W, LI P, XIE H R, et al. I can find you! boundary-guided separated attention network for camouflaged object detection[C]∥ Proceedings of the 36th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI, 2022: 3608-3616. 
[9]SUN D Y, JIANG S Y, QI L. Edge-aware mirror network for camouflaged object detection[C]∥2023 IEEE International Conference on Multimedia and Expo. Piscataway: IEEE, 2023: 2465-2470. 
[10] JI G P, FAN D P, CHOU Y C, et al. Deep gradient learning for efficient camouflaged object detection[J]. Machine Intelligence Research, 2023, 20(1): 92-108. 
[11]WANG W H, XIE E Z, LI X, et al. PVT v2: improved baselines with pyramid vision transformer[J]. Computational Visual Media, 2022, 8(3): 415-424. 
[12]LIU S T, HUANG D, WANG Y H. Receptive field block net for accurate and fast object detection[EB/OL]. (2017-11-21)[2024-11-20]. https://doi.org/10.48550/arXiv.1711.07767. 
[13]田旭, 彭飞, 刘飞, 等. 基于金字塔特征与边缘优化的显著性对象检测[J]. 郑州大学学报(工学版), 2022, 43(2): 35-43. 
TIAN X, PENG F, LIU F, et al. Salient object detection based on pyramid features and edge optimization[J]. Journal of Zhengzhou University (Engineering Science), 2022, 43(2): 35-43. 
[14]ZAMIR S W, ARORA A, KHAN S, et al. Restormer: efficient transformer for high-resolution image restoration[C]∥2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 5718-5729. 
[15]WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[EB/OL]. (2018-07-17)[2024-11-20]. https://doi.org/10.48550/arXiv.1807.06521. 
[16]WEI J, WANG S H, HUANG Q M. F3Net: fusion, feedback and focus for salient object detection[C]∥ Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI, 2020: 12321-12328. 
[17] CHENG M M, FAN D P. Structure-measure: a new way to evaluate foreground maps[J]. International Journal of Computer Vision, 2021, 129(9): 2622-2638. 
[18] MARGOLIN R, ZELNIK-MANOR L, TAL A. How to evaluate foreground maps[C]∥2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2014: 248-255. 
[19]FAN D P, GONG C, CAO Y, et al. Enhanced-alignment measure for binary foreground map evaluation[EB/OL]. (2018-05-26)[2024-11-20]. https://doi.org/10.48550/arXiv.1805.10421. 
[20]LI X F, YANG J X, LI S H, et al. Locate, refine and restore: a progressive enhancement network for camouflaged object detection[C]∥Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. Cape Town: International Joint Conferences on Artificial Intelligence Organization, 2023: 1116-1124. 
[21]HE C M, LI K, ZHANG Y C, et al. Camouflaged object detection with feature decomposition and edge reconstruction[C]∥2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 22046-22055. 
[22]ZHOU X F, WU Z C, CONG R M. Decoupling and integration network for camouflaged object detection[J]. IEEE Transactions on Multimedia, 2024, 26: 7114-7129.

更新日期/Last Update: 2025-09-19