TANG Lindong, YUN Lijun, LUO Ruilin, et al. Complex Road Traffic Target Detection Algorithm Based on Improved YOLOv5s[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(03): 64-71. [doi:10.13705/j.issn.1671-6833.2024.03.016]

Complex Road Traffic Target Detection Algorithm Based on Improved YOLOv5s

Journal of Zhengzhou University (Engineering Science) [ISSN:1671-6833/CN:41-1339/T]

Volume:
45
Issue:
2024, No. 03
Pages:
64-71
Publication Date:
2024-04-20

Article Info

Title:
Complex Road Traffic Target Detection Algorithm Based on Improved YOLOv5s
Article ID:
1671-6833(2024)03-0064-08
Author(s):
TANG Lindong 1,2; YUN Lijun 1,2; LUO Ruilin 3; LU Lin 3
1. College of Information, Yunnan Normal University, Kunming 650500, China; 2. Yunnan Provincial Department of Education Computer Vision and Intelligent Control Technology Engineering Research Center, Yunnan Normal University, Kunming 650500, China; 3. Yunnan Tobacco Leaf Company, Kunming 650500, China
Keywords:
automatic driving; target detection; YOLOv5s; MHSARM; CoordConv
CLC Number:
TP391
DOI:
10.13705/j.issn.1671-6833.2024.03.016
Document Code:
A
Abstract:
To address the weak resistance of current traffic target detection algorithms to complex background interference in autonomous driving scenarios, and the resulting shortfall in detection performance, an improved YOLOv5s algorithm for complex road traffic target detection was proposed. First, in the feature extraction stage, a multi-head self-attention residual module (MHSARM) was used to strengthen the feature information of the targets to be detected and weaken complex background interference. Second, in the feature fusion stage, CoordConv was used in place of conventional Conv, giving the network spatial-information awareness and improving detection accuracy. Experimental results on the open-source Kitti and BDD100K datasets showed that the improved YOLOv5s had stronger feature extraction ability and good generalization ability on complex roads, with mAP_0.5 reaching 93.3% and 47.4%, respectively, improvements of 0.9 and 1.4 percentage points over YOLOv5s. In addition, compared with the recent detection algorithms YOLOv7 and YOLOv8, the mAP_0.5 of the improved YOLOv5s was higher by 1.3 and 2.2 percentage points, respectively; compared with SimAM-YOLOv4, a recent result on the Kitti dataset, mAP_0.5 improved by 2.2 percentage points.
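The CoordConv substitution described in the abstract augments a convolution's input with explicit coordinate channels, so that the following (ordinary) convolution can perceive where in the image a feature lies. The sketch below is a minimal NumPy illustration of that coordinate-channel idea from ref. [6]; it is not the authors' implementation, and the function name and the (C, H, W) layout are our own assumptions:

```python
import numpy as np

def add_coord_channels(feature_map):
    """Append two normalized coordinate channels to a (C, H, W) feature map.

    A plain Conv is translation-invariant; concatenating an x-channel and a
    y-channel (each scaled to [-1, 1], as in the CoordConv paper, ref. [6])
    lets the next convolution exploit spatial position. Returns (C+2, H, W).
    """
    c, h, w = feature_map.shape
    # y varies down the rows, x varies across the columns
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
    return np.concatenate([feature_map, xs[None], ys[None]], axis=0)
```

In the paper's setting, the augmented tensor would then pass through a standard convolution inside the YOLOv5s feature-fusion (neck) layers; only the input channel count of that convolution changes (C becomes C+2).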

References:

[1] CUI J Y. Research status and development of automatic driving[J]. Science & Technology Information, 2021, 19(13): 83-85. (in Chinese)

[2] GIRSHICK R, DONAHUE J, DARRELL T, et al. Region-based convolutional networks for accurate object detection and segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(1): 142-158.
[3] CHEN Y H, LI W, SAKARIDIS C, et al. Domain adaptive faster R-CNN for object detection in the wild[C]∥2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 3339-3348.
[4] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]∥2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2016: 779-788.
[5] ULTRALYTICS. YOLOv5[EB/OL]. [2023-08-11]. https://github.com/ultralytics/yolov5.
[6] LIU R, LEHMAN J, MOLINO P, et al. An intriguing failing of convolutional neural networks and the CoordConv solution[C]∥Proceedings of the 32nd International Conference on Neural Information Processing Systems. New York: ACM, 2018: 9628-9639.
[7] YUAN Z H, SUN Q, LI G X, et al. Automatic driving target detection based on Yolov3[J]. Journal of Chongqing University of Technology (Natural Science), 2020, 34(9): 56-61. (in Chinese)
[8] LIU L W, HOU D B, HOU A L, et al. Automatic driving target detection algorithm based on SimAM-YOLOv4[J]. Journal of Changchun University of Technology, 2022, 43(3): 244-250. (in Chinese)
[9] YUAN L H, CHANG Y K, LIU J F. Vehicle detection method based on improved YOLOv5s in foggy scene[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(3): 35-41. (in Chinese)
[10] CAI Y F, LUAN T Y, GAO H B, et al. YOLOv4-5D: an effective and efficient object detector for autonomous driving[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 4503613.
[11] SHI T, DING Y, ZHU W X. YOLOv5s_2E: improved YOLOv5s for aerial small target detection[J]. IEEE Access, 2023, 11: 80479-80490.
[12] GAO T Y, WUSHOUER M, TUERHONG G. DMS-YOLOv5: a decoupled multi-scale YOLOv5 method for small object detection[J]. Applied Sciences, 2023, 13(10): 6124.
[13] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]∥Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010.
[14] WU C H, WU F Z, GE S Y, et al. Neural news recommendation with multi-head self-attention[C]∥Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg: Association for Computational Linguistics, 2019: 6389-6394.
[15] ZHANG Y F, REN W Q, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. Neurocomputing, 2022, 506: 146-157.
[16] PARK J, WOO S, LEE J Y, et al. BAM: bottleneck attention module[EB/OL]. (2018-07-17)[2023-08-11]. http://arxiv.org/abs/1807.06514.
[17] LIU Y C, SHAO Z R, HOFFMANN N. Global attention mechanism: retain information to enhance channel-spatial interactions[EB/OL]. (2021-12-10)[2023-08-11]. http://arxiv.org/abs/2112.05561.
[18] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011-2023.
[19] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]∥European Conference on Computer Vision. Cham: Springer, 2018: 3-19.
[20] YANG L X, ZHANG R Y, LI L D, et al. SimAM: a simple, parameter-free attention module for convolutional neural networks[C]∥Proceedings of the 38th International Conference on Machine Learning. PMLR, 2021: 11863-11874.
[21] SU T, SHI Y, XIE C J, et al. A hybrid loss balancing algorithm based on gradient equilibrium and sample loss for understanding of road scenes at basic-level[J]. Pattern Analysis and Applications, 2022, 25(4): 1041-1053.
[22] MA S G, LI N B, HOU Z Q, et al. Object detection algorithm based on DSGIoU loss and dual branch coordinate attention[J/OL]. Journal of Beijing University of Aeronautics and Astronautics, 2023: 1-14. https://bhxb.buaa.edu.cn/bhzk/cn/article/doi/10.13700/j.bh.1001-5965.2023.0192. (in Chinese)
[23] DING J J, AN W. Object detection algorithm based on improved anchor box and transformer architecture[J]. Modern Electronics Technique, 2023, 46(15): 37-42. (in Chinese)
[24] LI J J, HOU Z Q, BAI Y, et al. Single-stage object detection algorithm based on dilated convolution and feature fusion[J]. Journal of Air Force Engineering University (Natural Science Edition), 2022, 23(1): 97-103. (in Chinese)
[25] HOU Z Q, GUO H, MA S G, et al. Anchor-free object detection algorithm based on double branch feature fusion[J]. Journal of Electronics & Information Technology, 2022, 44(6): 2175-2183. (in Chinese)
[26] LI C Y, LI L L, JIANG H L, et al. YOLOv6: a single-stage object detection framework for industrial applications[EB/OL]. (2022-09-07)[2023-08-11]. http://arxiv.org/abs/2209.02976.
[27] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]∥2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2023: 7464-7475.
[28] ULTRALYTICS. YOLOv8[EB/OL]. (2023-01-01)[2023-08-11]. https://github.com/ultralytics/ultralytics.

Similar Articles:

[1] WANG Bingchen, SI Huaiwei, TAN Guozhen. Research on Autopilot Control Algorithms Based on Deep Reinforcement Learning[J]. Journal of Zhengzhou University (Engineering Science), 2020, 41(04): 41. [doi:10.13705/j.issn.1671-6833.2020.04.002]
[2] ZHANG Zhen, CHEN Kexin, CHEN Yunfei. YOLOv5 with Optimized Clustering and CBAM for Controlled Knife Detection[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(05): 40. [doi:10.13705/j.issn.1671-6833.2022.05.015]
[3] CUI Jianming, LIN Fanrong, ZHANG Di, et al. Reinforcement Learning Autonomous Driving Trajectory Prediction Based on Directed Graph[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(05): 53. [doi:10.13705/j.issn.1671-6833.2023.05.002]
[4] WANG Yu, BI Yu, SHI Jiantong, et al. YOLOv5 Algorithm Based on Attention and Multi-level Feature Fusion[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(03): 38. [doi:10.13705/j.issn.1671-6833.2023.06.009]

Last Update: 2024-04-29