
Fast 3D Aiming Method of Manipulators Based on Monocular Visual Feedforward
[1] CHEN Xiaopeng, XU Peng, WANG Zhantao, et al. Fast 3D aiming method of manipulators based on monocular visual feedforward[J]. Journal of Zhengzhou University (Engineering Science), 2025, 46(2): 11-18. [doi: 10.13705/j.issn.1671-6833.2025.02.001]
References:
[1]MATHESON E, MINTO R, ZAMPIERI E G G, et al. Human-robot collaboration in manufacturing applications: a review[J]. Robotics, 2019, 8(4): 100. 
[2]王大浩, 平雪良. 基于规定性能的六自由度机械臂视觉伺服控制[J]. 传感器与微系统, 2022, 41(3): 104-108. 
WANG D H, PING X L. Six-degree-of-freedom manipulator visual servo control based on prescribed performance [J]. Transducer and Microsystem Technologies, 2022, 41(3): 104-108. 
[3]LING X, ZHAO Y S, GONG L, et al. Dual-arm cooperation and implementing for robotic harvesting tomato using binocular vision[J]. Robotics and Autonomous Systems, 2019, 114: 134-143. 
[4]TAN M X, PANG R M, LE Q V. EfficientDet: scalable and efficient object detection[C]∥2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2020: 10778-10787. 
[5]ZHU L, WANG X Q, LI P, et al. S3Net: self-supervised self-ensembling network for semi-supervised RGB-D salient object detection[J]. IEEE Transactions on Multimedia, 2023, 25: 676-689. 
[6]张震, 王晓杰, 晋志华, 等. 基于轻量化YOLOv5的交通标志检测[J]. 郑州大学学报(工学版), 2024, 45(2): 12-19. 
ZHANG Z, WANG X J, JIN Z H, et al. Traffic sign detection based on lightweight YOLOv5[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45 (2): 12-19. 
[7]张震, 陈可鑫, 陈云飞. 优化聚类和引入CBAM的YOLOv5管制刀具检测[J]. 郑州大学学报(工学版), 2023, 44(5): 40-45, 61. 
ZHANG Z, CHEN K X, CHEN Y F. YOLOv5 with optimized clustering and CBAM for controlled knife detection [J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(5): 40-45, 61. 
[8]LI J J, JI W, BI Q, et al. Joint semantic mining for weakly supervised RGB-D salient object detection[C]∥ Proceedings of the 35th International Conference on Neural Information Processing Systems. New York: ACM, 2021: 11945-11959. 
[9]CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[C]∥European Conference on Computer Vision. Cham: Springer, 2020: 213-229.
[10] MENG D P, CHEN X K, FAN Z J, et al. Conditional DETR for fast training convergence[C]∥2021 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE, 2021: 3631-3640. 
[11] DONG J X, ZHANG J. A new image-based visual servoing method with velocity direction control[J]. Journal of the Franklin Institute, 2020, 357(7): 3993-4007. 
[12] RANGANATHAN G. An economical robotic arm for playing chess using visual servoing[J]. Journal of Innovative Image Processing, 2020, 2(3): 141-146. 
[13] SHIRLEY D R A, RANJANI K, ARUNACHALAM G, et al. Automatic distributed gardening system using object recognition and visual servoing[C]∥Inventive Communication and Computational Technologies. Berlin: Springer, 2021: 359-369. 
[14] PARADIS S, HWANG M, THANANJEYAN B, et al. Intermittent visual servoing: efficiently learning policies robust to instrument changes for high-precision surgical manipulation[C]∥2021 IEEE International Conference on Robotics and Automation (ICRA). Piscataway: IEEE, 2021: 7166-7173. 
[15] AL-SHANOON A, LANG H X. Robotic manipulation based on 3-D visual servoing and deep neural networks [J]. Robotics and Autonomous Systems, 2022, 152: 104041. 
[16] AL-SHANOON A, WANG Y J, LANG H X. DeepNet-based 3D visual servoing robotic manipulation[J]. Journal of Sensors, 2022, 2022: 3511265. 
[17] LEE Y S, VUONG N, ADRIAN N, et al. Integrating force-based manipulation primitives with deep learning-based visual servoing for robotic assembly[C]∥ICRA 2022 Workshop: Reinforcement Learning for Contact-Rich Manipulation. Piscataway: IEEE, 2022. 
[18] ZHONG X G, SHI C Q, LIN J, et al. Self-learning visual servoing for robot manipulation in unstructured environments[C]∥International Conference on Intelligent Robotics and Applications. Cham: Springer, 2021: 48-57. 
[19] PUANG E Y, TEE K P, JING W. KOVIS: keypoint-based visual servoing with zero-shot sim-to-real transfer for robotics manipulation[C]∥2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway: IEEE, 2020: 7527-7533. 
[20] RIBEIRO E G, DE QUEIROZ MENDES R, GRASSI V J. Real-time deep learning approach to visual servo control and grasp detection for autonomous robotic manipulation[J]. Robotics and Autonomous Systems, 2021, 139: 103757. 
[21] DANIILIDIS K. Hand-eye calibration using dual quaternions[J]. The International Journal of Robotics Research, 1999, 18(3): 286-298. 
[22] ZHANG Z. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334. 
[23] SUTHERLAND I E. Three-dimensional data input by tablet[J]. Proceedings of the IEEE, 1974, 62(4): 453-461. 
[24] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection[EB/OL]. (2020-05-23)[2024-08-15]. https://doi.org/10.48550/arXiv.2004.10934. 
[25] HE K M, ZHANG X Y, REN S Q, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1904-1916. 
[26] LIU S, QI L, QIN H F, et al. Path aggregation network for instance segmentation[C]∥2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 8759-8768. 
[27] YUEN H, PRINCEN J, ILLINGWORTH J, et al. Comparative study of Hough Transform methods for circle finding[J]. Image and Vision Computing, 1990, 8(1): 71-77.
Last Update: 2025-03-13
Copyright © 2023 Editorial Board of Journal of Zhengzhou University (Engineering Science)