[1] ZHANG Zhen, ZHANG Chenwen, ZHANG Junjie, et al. Improved YOLOv7-tiny for Safety Clothing and Helmet Wearing Detection Algorithm on Construction Site[J]. Journal of Zhengzhou University (Engineering Science), 2026, 47(02): 1-8. [doi:10.13705/j.issn.1671-6833.2025.05.001]

Improved YOLOv7-tiny for Safety Clothing and Helmet Wearing Detection Algorithm on Construction Site

Journal of Zhengzhou University (Engineering Science) [ISSN:1671-6833/CN:41-1339/T]

Volume:
47
Issue:
2026, No. 02
Pages:
1-8
Section:
Publication Date:
2026-02-13

文章信息/Info

Title:
Improved YOLOv7-tiny for Safety Clothing and Helmet Wearing Detection Algorithm on Construction Site
Article Number:
1671-6833(2026)02-0001-08
Author(s):
ZHANG Zhen1, ZHANG Chenwen2, ZHANG Junjie3, PEI Shengli3, WANG Wenjuan4
1. School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China; 2. Henan Institute of Advanced Technology, Zhengzhou University, Zhengzhou 450001, China; 3. Henan Huirong Oil & Gas Technology Co., Ltd., Zhengzhou 450001, China; 4. School of Mechanical and Electrical Engineering, Guangdong Province Technician College of Light Industry, Guangzhou 511330, China
Keywords:
YOLOv7-tiny; attention mechanism; RFEM; Shape-IoU; safety clothing and helmet detection
CLC Number:
TP391
DOI:
10.13705/j.issn.1671-6833.2025.05.001
Document Code:
A
Abstract:
To address the insufficient robustness of current safety clothing and helmet wearing detection algorithms on construction sites under complex backgrounds, low-light conditions, and target occlusion, which leads to low detection accuracy, high missed-detection rates, and frequent false detections, an improved YOLOv7-tiny detection algorithm for safety clothing and helmet wearing was proposed. Firstly, the EMA attention mechanism was introduced into the feature extraction stage to enhance the network's feature extraction capability and suppress complex background interference. Secondly, the RFEM module was inserted into the feature fusion stage to enlarge the network's receptive field, capture broader contextual information, and improve perception of small targets. Finally, Shape-IoU replaced the IoU bounding-box regression loss function to improve localization accuracy. Experimental results showed that the improved model achieved an mAP@0.5 of 90.4% on a self-built dataset, 3.0 percentage points higher than the original model, with a frame rate of 93 frames/s and only 6.1×10^6 parameters. Compared with YOLOv8s, YOLOv9s, and other models, the proposed algorithm offers advantages in detection accuracy, speed, and model lightweighting, making it suitable for real-time detection applications on construction sites.
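To make the loss-function change in the abstract concrete, the sketch below shows the plain IoU term that both the original loss and its Shape-IoU replacement (ref. [16]) are built on. This is a minimal illustration, not the paper's implementation: the corner-coordinate box format and the helper names `iou`/`iou_loss` are assumptions, and Shape-IoU's additional scale- and shape-aware distance terms are described in ref. [16] rather than reproduced here.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width/height of the intersection rectangle, clamped to zero when disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

def iou_loss(box_pred, box_gt):
    """Baseline IoU regression loss: 1 - IoU.

    Shape-IoU (ref. [16]) adds distance and shape costs weighted by the
    ground-truth box's scale and aspect ratio on top of this term.
    """
    return 1.0 - iou(box_pred, box_gt)
```

For example, a predicted box that exactly matches the ground truth gives `iou_loss` of 0, while a box shifted so it covers only half of an equally sized target gives an IoU of 1/3 and thus a loss of 2/3.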

参考文献/References:

[1] SONG J, HUANG J J. Evolutionary game analysis on safety participation behavior of construction workers[J]. Safety & Security, 2021, 42(10): 62-67.
[2] ZHANG Z, WANG X J, JIN Z H, et al. Traffic sign detection based on lightweight YOLOv5[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(2): 12-19.
[3]GIRSHICK R. Fast R-CNN[C]∥2015 IEEE International Conference on Computer Vision (ICCV).Piscataway:IEEE,2015: 1440-1448.
[4] DAI J F, LI Y, HE K M, et al. R-FCN: object detection via region-based fully convolutional networks[EB/OL]. (2016-05-20) [2025-06-22]. https://doi.org/10.48550/arXiv.1605.06409.
[5]HE K M, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]∥2017 IEEE International Conference on Computer Vision (ICCV). Piscataway: IEEE, 2017: 2980-2988.
[6] Ultralytics. YOLOv5[EB/OL]. (2020-05-18) [2025-06-22]. https://github.com/ultralytics/yolov5.
[7] LI C Y, LI L L, GENG Y F, et al. YOLOv6 v3.0: a full-scale reloading[EB/OL]. (2023-01-13) [2025-06-22]. https://arxiv.org/abs/2301.05586.
[8] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]∥2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2023: 7464-7475.
[9] Ultralytics. YOLOv8[EB/OL]. (2023-01-10) [2025-06-22]. https://github.com/ultralytics/ultralytics.
[10] WANG C Y, YEH I H, LIAO H Y M. YOLOv9: learning what you want to learn using programmable gradient information[EB/OL]. (2024-02-21) [2025-06-22]. https://arxiv.org/abs/2402.13616v2.
[11] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector[C]∥European Conference on Computer Vision. Cham: Springer, 2016: 21-37.
[12] LAW H, DENG J. CornerNet: detecting objects as paired keypoints[EB/OL]. (2024-02-21) [2025-06-22]. https://doi.org/10.48550/arXiv.1808.01244.
[13] ZHAO Q J, SHENG T, WANG Y T, et al. M2Det: a single-shot object detector based on multi-level feature pyramid network[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 9259-9266.
[14] OUYANG D L, HE S, ZHANG G Z, et al. Efficient multi-scale attention module with cross-spatial learning[C]∥2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway:IEEE, 2023: 1-5.
[15] YU Z P, HUANG H B, CHEN W J, et al. YOLO-FaceV2: a scale and occlusion aware face detector[J]. Pattern Recognition, 2024, 155: 110714.
[16] ZHANG H, ZHANG S J. Shape-IoU: more accurate metric considering bounding box shape and scale[EB/OL]. (2023-12-29) [2025-06-22]. https://arxiv.org/abs/2312.17663.
[17] JIANG B, LUO R, MAO J, et al. Acquisition of localization confidence for accurate object detection[C]∥European Conference on Computer Vision. Cham: Springer, 2018: 816-832.
[18] MA W B, GUAN Z, WANG X, et al. YOLO-FL: a target detection algorithm for reflective clothing wearing inspection[J]. Displays, 2023, 80: 102561.
[19] ZHAO H C, TIAN X X, YANG Z S, et al. YOLO-S: a new lightweight helmet wearing detection model[J]. Journal of East China Normal University (Natural Science), 2021(5): 134-145.
[20]WANG L L, ZHANG X J, YANG H L. Safety helmet wearing detection model based on improved YOLO-M[J]. IEEE Access, 2023, 11: 26247-26257.
[21] PENG J, PENG F, JIN S Z, et al. Research on safety helmet wearing detection based on YOLO[C]∥49th Annual Conference of the IEEE Industrial Electronics Society.Piscataway: IEEE, 2023: 1-6.
[22] MA X J, JI K F, XIONG B L, et al. Light-YOLOv4: an edge-device oriented target detection method for remote sensing images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 10808-10820.
[23] LIU Y C, SHAO Z R, HOFFMANN N. Global attention mechanism: retain information to enhance channel-spatial interactions[EB/OL]. (2021-12-10) [2025-06-22]. https://arxiv.org/abs/2112.05561v1.
[24] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]∥2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 7132-7141.
[25]WANG Q L, WU B G, ZHU P F, et al. ECA-net: efficient channel attention for deep convolutional neural networks[C]∥2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2020: 11531-11539.
[26] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[EB/OL]. (2018-07-11) [2025-06-22]. https://doi.org/10.48550/arXiv.1807.06521.
[27] TONG Z, CHEN Y, XU Z, et al. Wise-IoU: bounding box regression loss with dynamic focusing mechanism[EB/OL]. (2023-01-24) [2025-06-22]. https://arxiv.org/abs/2301.10051.
[28] ZHANG Y F, REN W Q, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. Neurocomputing, 2022, 506: 146-157.
[29] GEVORGYAN Z. SIoU loss: more powerful learning for bounding box regression[EB/OL]. (2022-05-25) [2025-06-22]. https://doi.org/10.48550/arXiv.2205.12740.

相似文献/Similar Articles:

[1] ZHANG Zhen, CHEN Kexin, CHEN Yunfei. YOLOv5 with Optimized Clustering and CBAM for Controlled Knife Detection[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(05): 40. [doi:10.13705/j.issn.1671-6833.2022.05.015]
[2] CUI Jianming, LIN Fanrong, ZHANG Di, et al. Reinforcement Learning Autonomous Driving Trajectory Prediction Based on Directed Graph[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(05): 53. [doi:10.13705/j.issn.1671-6833.2023.05.002]
[3] LI Weijun, ZHANG Xinyong, GAO Yuxiao, et al. Video Frame Prediction Model Based on Gated Spatio-Temporal Attention[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(01): 70. [doi:10.13705/j.issn.1671-6833.2024.01.017]
[4] WANG Yu, BI Yu, SHI Jiantong, et al. Object Detection and Recognition Algorithm Based on YOLOv5 and the Fusion of Attention and Multistage Features[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(03): 38. [doi:10.13705/j.issn.1671-6833.2023.06.009]
[5] WEI Mingjun, WANG Mohan, LIU Yazhi, et al. Small Object Detection Based on Feature Fusion and Mixed Attention[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(03): 72. [doi:10.13705/j.issn.1671-6833.2024.03.001]
[6] LIN Nan, TANG Kaipeng, NIU Yongpeng, et al. An ECG Denoising and Classification Algorithm Based on Two-stage Feature Extraction Network[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(05): 61. [doi:10.13705/j.issn.1671-6833.2024.05.005]
[7] LIN Yusong, LI Mengya, LI Yinghao, et al. Multimodal Medical Image Fusion Based on GAN and Multiscale Spatial Attention[J]. Journal of Zhengzhou University (Engineering Science), 2025, 46(01): 1. [doi:10.13705/j.issn.1671-6833.2025.01.001]
[8] ZHAO Dong, LI Yarui, WANG Wenxiang, et al. Power Load Missing Data Imputation Model Based on Dynamic Fusion Attention Mechanism[J]. Journal of Zhengzhou University (Engineering Science), 2025, 46(02): 111. [doi:10.13705/j.issn.1671-6833.2024.05.004]
[9] YAN Yu, JING Yuchao, SHI Mengxiang, et al. Steel Surface Defect Detection Based on Improved YOLOv5 Algorithm[J]. Journal of Zhengzhou University (Engineering Science), 2025, 46(04): 93. [doi:10.13705/j.issn.1671-6833.2025.01.007]
[10] ZHANG Zhen, XIAO Zongrong, LI Youhao, et al. Construction Vehicles Recognition Algorithm Based on Improved YOLOv7 in High Risk Areas[J]. Journal of Zhengzhou University (Engineering Science), 2025, 46(05): 1. [doi:10.13705/j.issn.1671-6833.2025.02.019]

更新日期/Last Update: 2026-03-04