Abandoned Object Detection Algorithm Based on Improved YOLOv8
[1] ZHANG Zhen, GE Shuaibing, CHEN Kexin, et al. Abandoned object detection algorithm based on improved YOLOv8[J]. Journal of Zhengzhou University (Engineering Science), 2025, 46(4): 40-46. DOI: 10.13705/j.issn.1671-6833.2025.04.010.
References:
[1] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2014: 580-587.
[2] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2016: 779-788.
[3] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]//14th European Conference on Computer Vision. Cham: Springer, 2016: 21-37.
[4] TIAN Z, SHEN C H, CHEN H, et al. FCOS: fully convolutional one-stage object detection[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE, 2019: 9626-9635.
[5] LIN K, CHEN S C, CHEN C S, et al. Abandoned object detection via temporal consistency modeling and backtracing verification for visual surveillance[J]. IEEE Transactions on Information Forensics and Security, 2015, 10(7): 1359-1370.
[6] SU H, WANG W, WANG S W. A robust all-weather abandoned objects detection algorithm based on dual background and gradient operator[J]. Multimedia Tools and Applications, 2023, 82(19): 29477-29499.
[7] RUSSEL N S, SELVARAJ A. Ownership of abandoned object detection by integrating carried object recognition and context sensing[J]. The Visual Computer, 2024, 40(6): 4401-4426.
[8] PARK H, PARK S, JOO Y. Detection of abandoned and stolen objects based on dual background model and mask R-CNN[J]. IEEE Access, 2020, 8: 80010-80019.
[9] LIU W P, LIU P, XIAO C X, et al. General-purpose abandoned object detection method without background modeling[C]//2021 IEEE International Conference on Imaging Systems and Techniques (IST). Piscataway: IEEE, 2021: 1-5.
[10] JIANG X K. Research on method of abandoned object detection and recognition in industrial surveillance videos[D]. Wuhan: Huazhong University of Science and Technology, 2021. (in Chinese)
[11] LIN D Y, ZHOU Z T, GUO B, et al. YOLO-G abandoned object detection method combined with Gaussian mixture model and GhostNet[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(1): 99-107. (in Chinese)
[12] Ultralytics. YOLOv8[EB/OL]. (2023-01-10)[2024-09-10]. https://github.com/ultralytics/ultralytics.
[13] Ultralytics. YOLOv5[EB/OL]. (2020-05-18)[2024-09-10]. https://github.com/ultralytics/yolov5.
[14] FENG C J, ZHONG Y J, GAO Y, et al. TOOD: task-aligned one-stage object detection[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE, 2021: 3490-3499.
[15] LI X, WANG W H, WU L J, et al. Generalized focal loss: learning qualified and distributed bounding boxes for dense object detection[EB/OL]. (2020-06-08)[2024-09-10]. https://arxiv.org/abs/2006.04388.
[16] LIU W Z, LU H, FU H T, et al. Learning to upsample by learning to sample[C]//2023 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE, 2023: 6004-6014.
[17] WANG C Y, YEH I H, LIAO H Y M. YOLOv9: learning what you want to learn using programmable gradient information[EB/OL]. (2024-02-29)[2024-09-10]. https://arxiv.org/abs/2402.13616.
[18] OUYANG D L, HE S, ZHANG G Z, et al. Efficient multi-scale attention module with cross-spatial learning[C]//IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway: IEEE, 2023: 1-5.
[19] ZHANG Y F, SUN P Z, JIANG Y, et al. ByteTrack: multi-object tracking by associating every detection box[EB/OL]. (2022-04-07)[2024-09-10]. https://arxiv.org/abs/2110.06864.
[20] SHI W Z, CABALLERO J, HUSZÁR F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2016: 1874-1883.
[21] WANG J Q, CHEN K, XU R, et al. CARAFE: content-aware reassembly of features[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE, 2019: 3007-3016.
[22] LU H, LIU W Z, FU H T, et al. FADE: fusing the assets of decoder and encoder for task-agnostic upsampling[EB/OL]. (2022-12-27)[2024-09-10]. https://arxiv.org/abs/2207.10392.
[23] LU H, LIU W Z, YE Z X, et al. SAPA: similarity-aware point affiliation for feature upsampling[EB/OL]. (2022-12-27)[2024-09-10]. https://arxiv.org/abs/2209.12866.
[24] HU J, SHEN L, ALBANIE S. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011-2023.
[25] WANG Q L, WU B G, ZHU P F, et al. ECA-Net: efficient channel attention for deep convolutional neural networks[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2020: 11531-11539.
[26] WOO S Y, PARK J C, LEE J Y, et al. CBAM: convolutional block attention module[J]. Lecture Notes in Computer Science, 2018, 11211: 3-19.
[27] HUANG H J, CHEN Z G, ZOU Y, et al. Channel prior convolutional attention for medical image segmentation[EB/OL]. (2023-06-08)[2024-09-10]. https://arxiv.org/abs/2306.05196v1.
[28] LI C Y, LI L L, GENG Y F, et al. YOLOv6 v3.0: a full-scale reloading[EB/OL]. (2023-01-13)[2024-09-10]. https://arxiv.org/abs/2301.05586v1.
[29] JIANG P T, ZHANG C B, HOU Q B, et al. LayerCAM: exploring hierarchical class activation maps for localization[J]. IEEE Transactions on Image Processing, 2021, 30: 5875-5888.
Last Update: 2025-07-13
Copyright © 2023 Editorial Board of Journal of Zhengzhou University (Engineering Science)