[1] 贾云飞, 郑红木, 刘闪亮. Lightweight Surface Defect Detection Method of Metal Products Based on YOLOv5s [J]. Journal of Zhengzhou University (Engineering Science), 2022, 43(05): 31-38.

Lightweight Surface Defect Detection Method of Metal Products Based on YOLOv5s

Journal of Zhengzhou University (Engineering Science) [ISSN: 1671-6833 / CN: 41-1339/T]

Volume:
43
Issue:
2022, No. 05
Pages:
31-38
Publication Date:
2022-08-22

Article Info

Title:
Lightweight Surface Defect Detection Method of Metal Products Based on YOLOv5s
Authors:
贾云飞 郑红木 刘闪亮
Document Code:
A
Abstract (Chinese, translated):
To meet enterprises' demand for lowering the cost of intelligent manufacturing, low-cost, low-compute hardware was used to detect product defects with a deep-learning object detection model. Based on the YOLOv5s network, a structural pruning approach was adopted: the network was sparsely trained via its BN layers, and the layers whose corresponding weights were small after sparsity training were pruned, reducing the number of model parameters and the model file size to achieve a lightweight network. NVIDIA's inference-acceleration framework TensorRT was then used to perform layer fusion on the trained pruned model, accelerating inference. Experimental results show that the weight file of the proposed detection model is about 70% smaller than that of the original YOLOv5s model, while detection accuracy on the public NEU-DET dataset reaches 74.2%. On the high-performance test platform built for this study, single-image inference is 11.3% faster than the original model with no loss of accuracy; on the low-performance platform, inference is 165% faster than the original network, a much larger gain than on the high-performance platform, showing that the proposed model performs well on low-compute hardware. The model was further tested for generality on a public top-view dataset of submersible pump impellers. Finally, after acceleration with TensorRT, a single-image inference time of 5.8 ms was achieved on the high-performance platform. The proposed model's large inference speedup on low-compute hardware can help enterprises reduce their budgets.
Abstract:
In order to reduce the cost of intelligent manufacturing in enterprises, hardware with low cost and low computing power was used to detect product defects through an object detection model from deep learning. Based on the YOLOv5s network, this study adopted the idea of structural pruning: the network was sparsely trained via its BN layers, and the layers of the sparsely trained model with small corresponding weight values were cut, so as to reduce the number of computational parameters and the size of the model file and achieve a lightweight effect. Finally, the trained pruned model was hierarchically fused using NVIDIA's acceleration framework TensorRT to realize inference acceleration. The experimental results showed that the weight file size of this model was reduced by about 70% compared with the original YOLOv5s model, and the detection accuracy on the public dataset NEU-DET reached 74.2%. On the high-performance experimental platform built in this study, single-image inference speed improved by 11.3% compared with the original model, and the network had no accuracy loss. On the low-performance experimental platform, the inference speed of this model increased by 165% compared with the original network model, a more significant improvement than the results on the high-performance platform, indicating that this model performs well on low-computing-power hardware. The model was then tested for generality using the open top-view dataset of submersible pump impellers. At last, the inference acceleration framework TensorRT was used to accelerate the model, and an inference time of 5.8 ms per image was achieved on the high-performance experimental platform. The experimental results showed that the inference speed of this model could be greatly improved on low-computing-power hardware, which could help enterprises reduce their budget.
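The BN-based pruning described in the abstract is commonly realized as "network slimming": an L1 penalty on each BatchNorm scale factor (gamma) drives unimportant channels toward zero during sparsity training, after which channels whose |gamma| falls below a global percentile threshold are removed. The sketch below is an illustrative, framework-free reconstruction of that channel-selection step, not the authors' code; the function names, the penalty weight `lam`, and the `prune_ratio` value are assumptions for illustration.

```python
def l1_penalty(gammas, lam=1e-4):
    """Sparsity term added to the training loss: lam * sum(|gamma|).
    During sparsity training this pushes BN scale factors of
    unimportant channels toward zero."""
    return lam * sum(abs(g) for g in gammas)

def prune_mask(gammas, prune_ratio=0.7):
    """Keep-mask over BN channels after sparsity training: prune the
    prune_ratio fraction of channels with the smallest |gamma|,
    using a single global threshold across the listed channels."""
    ranked = sorted(abs(g) for g in gammas)
    k = int(len(ranked) * prune_ratio)          # number of channels to drop
    thresh = ranked[k - 1] if k > 0 else float("-inf")
    return [abs(g) > thresh for g in gammas]    # True = keep this channel
```

For example, with scale factors `[0.9, 0.01, 0.5, 0.02, 0.8]` and a 40% prune ratio, the two near-zero channels are dropped and the mask is `[True, False, True, False, True]`; the surviving channels define the pruned network that would then be fine-tuned and exported to TensorRT.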
Last Update: 2022-08-20