[1] CUI J Y. Research status and development of automatic driving[J]. Science & Technology Information, 2021, 19(13): 83-85.
[2] GIRSHICK R, DONAHUE J, DARRELL T, et al. Region-based convolutional networks for accurate object detection and segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(1): 142-158.
[3] CHEN Y H, LI W, SAKARIDIS C, et al. Domain adaptive faster R-CNN for object detection in the wild[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 3339-3348.
[4] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2016: 779-788.
[5] ULTRALYTICS. YOLOv5[EB/OL]. [2023-08-11]. https://github.com/ultralytics/yolov5.
[6] LIU R, LEHMAN J, MOLINO P, et al. An intriguing failing of convolutional neural networks and the CoordConv solution[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems. New York: ACM, 2018: 9628-9639.
[7] YUAN Z H, SUN Q, LI G X, et al. Automatic driving target detection based on Yolov3[J]. Journal of Chongqing University of Technology (Natural Science), 2020, 34(9): 56-61.
[8] LIU L W, HOU D B, HOU A L, et al. Automatic driving target detection algorithm based on SimAM-YOLOv4[J]. Journal of Changchun University of Technology, 2022, 43(3): 244-250.
[9] YUAN L H, CHANG Y K, LIU J F. Vehicle detection method based on improved YOLOv5s in foggy scene[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(3): 35-41.
[10] CAI Y F, LUAN T Y, GAO H B, et al. YOLOv4-5D: an effective and efficient object detector for autonomous driving[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 4503613.
[11] SHI T, DING Y, ZHU W X. YOLOv5s_2E: improved YOLOv5s for aerial small target detection[J]. IEEE Access, 2023, 11: 80479-80490.
[12] GAO T Y, WUSHOUER M, TUERHONG G. DMS-YOLOv5: a decoupled multi-scale YOLOv5 method for small object detection[J]. Applied Sciences, 2023, 13(10): 6124.
[13] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010.
[14] WU C H, WU F Z, GE S Y, et al. Neural news recommendation with multi-head self-attention[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg: Association for Computational Linguistics, 2019: 6389-6394.
[15] ZHANG Y F, REN W Q, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. Neurocomputing, 2022, 506: 146-157.
[16] PARK J, WOO S, LEE J Y, et al. BAM: bottleneck attention module[EB/OL]. (2018-07-17)[2023-08-11]. http://arxiv.org/abs/1807.06514.
[17] LIU Y C, SHAO Z R, HOFFMANN N. Global attention mechanism: retain information to enhance channel-spatial interactions[EB/OL]. (2021-10-10)[2023-08-11]. http://arxiv.org/abs/2112.05561.
[18] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(8): 2011-2023.
[19] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]∥European Conference on Computer Vision. Cham: Springer, 2018: 3-19.
[20] YANG L X, ZHANG R Y, LI L D, et al. SimAM: a simple, parameter-free attention module for convolutional neural networks[C]//Proceedings of the 38th International Conference on Machine Learning. [S.l.]: PMLR, 2021: 11863-11874.
[21] SU T, SHI Y, XIE C J, et al. A hybrid loss balancing algorithm based on gradient equilibrium and sample loss for understanding of road scenes at basic-level[J]. Pattern Analysis and Applications, 2022, 25(4): 1041-1053.
[22] MA S G, LI N B, HOU Z Q, et al. Object detection algorithm based on DSGIoU loss and dual branch coordinate attention[J/OL]. Journal of Beijing University of Aeronautics and Astronautics, 2023: 1-14. https://bhxb.buaa.edu.cn/bhzk/cn/article/doi/10.13700/j.bh.1001-5965.2023.0192.
[23] DING J J, AN W. Object detection algorithm based on improved anchor box and transformer architecture[J]. Modern Electronics Technique, 2023, 46(15): 37-42.
[24] LI J J, HOU Z Q, BAI Y, et al. Single-stage object detection algorithm based on dilated convolution and feature fusion[J]. Journal of Air Force Engineering University (Natural Science Edition), 2022, 23(1): 97-103.
[25] HOU Z Q, GUO H, MA S G, et al. Anchor-free object detection algorithm based on double branch feature fusion[J]. Journal of Electronics & Information Technology, 2022, 44(6): 2175-2183.
[26] LI C Y, LI L L, JIANG H L, et al. YOLOv6: a single-stage object detection framework for industrial applications[EB/OL]. (2022-09-07)[2023-08-11]. http://arxiv.org/abs/2209.02976.
[27] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]∥2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2023: 7464-7475.
[28] ULTRALYTICS. YOLOv8[EB/OL]. (2023-01-01)[2023-08-11]. https://github.com/ultralytics/ultralytics.