[1] ZHANG Zhen, ZHANG Chenwen, ZHANG Junjie, et al. Improved YOLOv7-tiny Safety Clothing and Hat Detection Algorithm For Construction Site[J]. Journal of Zhengzhou University (Engineering Science), 2026, 47(XX): 1-8. [doi: 10.13705/j.issn.1671-6833.2025.05.001]
Journal of Zhengzhou University (Engineering Science) [ISSN 1671-6833 / CN 41-1339/T]
Volume: 47
Issue: 2026(XX)
Pages: 1-8
Column:
Publication date: 2026-09-10
- Title: Improved YOLOv7-tiny Safety Clothing and Hat Detection Algorithm For Construction Site
- Author(s): ZHANG Zhen 1; ZHANG Chenwen 2; ZHANG Junjie 3; PEI Shengli 3; WANG Wenjuan 4
- Affiliations: 1. School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China; 2. Henan Institute of Advanced Technology, Zhengzhou University, Zhengzhou 450001, China; 3. Henan Huirong Oil & Gas Technology Co., Ltd., Zhengzhou 450001, China; 4. School of Mechanical and Electrical Engineering, Guangdong Province Technician College of Light Industry, Guangzhou 511330, China
- Keywords: YOLOv7-tiny; attention mechanism; RFEM; Shape-IoU; safety helmet and vest detection
- CLC: TP391
- DOI: 10.13705/j.issn.1671-6833.2025.05.001
- Abstract: To address the poor robustness of existing safety helmet and safety clothing detection algorithms under complex backgrounds, weak lighting, and target occlusion, which leads to low detection accuracy as well as high missed-detection and false-detection rates, an improved YOLOv7-tiny algorithm for detecting safety helmets and clothing on construction sites was proposed. First, the EMA attention mechanism was introduced into the feature extraction module to strengthen the network's feature extraction capability and suppress interference from complex backgrounds. Second, the RFEM module was integrated into the feature fusion stage to enlarge the network's receptive field, capture richer contextual information, and improve perception of small targets. Finally, Shape-IoU replaced the original IoU bounding-box regression loss function to further improve detection accuracy. Experimental results showed that the improved model achieved a mAP@0.5 of 90.4% on the self-built dataset, 3.0 percentage points higher than the original model, with a detection speed of 93 frames/s and only 6.1 million parameters. Compared with YOLOv8s, YOLOv9s, and other models, the proposed algorithm offered a better balance of detection accuracy, speed, and model size, making it suitable for real-time detection on construction sites.
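The loss-function change described in the abstract can be illustrated concretely. Below is a minimal PyTorch sketch of a Shape-IoU bounding-box regression loss in the general form published for Shape-IoU; the function name, the `scale` and `theta` hyperparameters, and the (x_center, y_center, width, height) box layout are assumptions for illustration and may differ from the exact configuration used in this paper.

```python
# Minimal sketch of a Shape-IoU bounding-box regression loss, which the paper
# uses in place of the default IoU loss. Hyperparameters (scale, theta) and the
# box layout are assumptions, not the authors' confirmed settings.
import torch

def shape_iou_loss(pred, target, scale=0.0, theta=4.0, eps=1e-7):
    """pred, target: (N, 4) boxes as (x_center, y_center, width, height)."""
    px, py, pw, ph = pred.unbind(-1)
    gx, gy, gw, gh = target.unbind(-1)

    # Plain IoU between predicted and ground-truth boxes.
    p_x1, p_y1, p_x2, p_y2 = px - pw / 2, py - ph / 2, px + pw / 2, py + ph / 2
    g_x1, g_y1, g_x2, g_y2 = gx - gw / 2, gy - gh / 2, gx + gw / 2, gy + gh / 2
    inter_w = (torch.min(p_x2, g_x2) - torch.max(p_x1, g_x1)).clamp(min=0)
    inter_h = (torch.min(p_y2, g_y2) - torch.max(p_y1, g_y1)).clamp(min=0)
    inter = inter_w * inter_h
    union = pw * ph + gw * gh - inter + eps
    iou = inter / union

    # Shape weights derived from the ground-truth box's width/height ratio.
    ww = 2 * gw.pow(scale) / (gw.pow(scale) + gh.pow(scale) + eps)
    hh = 2 * gh.pow(scale) / (gw.pow(scale) + gh.pow(scale) + eps)

    # Squared diagonal of the smallest enclosing box (normalising constant).
    c_w = torch.max(p_x2, g_x2) - torch.min(p_x1, g_x1)
    c_h = torch.max(p_y2, g_y2) - torch.min(p_y1, g_y1)
    c2 = c_w.pow(2) + c_h.pow(2) + eps

    # Shape-weighted centre-distance term.
    dist = hh * (px - gx).pow(2) / c2 + ww * (py - gy).pow(2) / c2

    # Width/height mismatch penalty, weighted by the same shape factors.
    omega_w = hh * (pw - gw).abs() / torch.max(pw, gw)
    omega_h = ww * (ph - gh).abs() / torch.max(ph, gh)
    shape_cost = (1 - torch.exp(-omega_w)).pow(theta) + (1 - torch.exp(-omega_h)).pow(theta)

    return 1 - iou + dist + 0.5 * shape_cost  # per-box loss, shape (N,)
```

Relative to a plain IoU loss, the centre-distance and width/height terms are weighted by the ground-truth box's own shape, which is the property the abstract relies on to tighten regression for small, elongated targets such as helmets and vests.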