Wang Xipeng, Li Yong, Li Zhi, et al. Anti-occlusion Target Tracking Algorithm Based on Image Depth[J]. Journal of Zhengzhou University (Engineering Science), 2021, 42(05): 19-24. [doi:10.13705/j.issn.1671-6833.2021.05.011]

Anti-occlusion Target Tracking Algorithm Based on Image Depth

Journal of Zhengzhou University (Engineering Science) [ISSN:1671-6833/CN:41-1339/T]

Volume:
42
Issue:
2021, No. 05
Pages:
19-24
Publication date:
2021-09-10

Article Info

Title:
Anti-occlusion Target Tracking Algorithm Based on Image Depth
Author(s):
Wang Xipeng; Li Yong; Li Zhi; Zhang Yan
School of Information Engineering, Engineering University of PAP
Keywords:
siamese network; deep learning; target tracking; monocular depth estimation; anti-occlusion
DOI:
10.13705/j.issn.1671-6833.2021.05.011
Document code:
A
Abstract:
Due to the limitations of video information, target tracking under occlusion remains a difficult problem. To address occlusion during tracking, image depth is introduced into single-target tracking. First, a monocular depth estimation algorithm is applied to each frame to obtain its depth information. Second, a tracker based on the siamese region proposal network is combined with the image depth to build an occlusion discrimination module, which uses changes in the target's depth to detect occlusion. Finally, the occlusion discrimination score and the anchor response score are fused by a weighted sum, and the tracker's candidate boxes are re-ranked according to the final response score to avoid interference from occluders. Experimental results on the OTB-2015 dataset show that the algorithm effectively mitigates the influence of occlusion on tracking performance, achieving an average success rate of 0.623 and an average tracking precision of 0.853, which are 1.7% and 0.9% higher than the benchmark algorithm, respectively.
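The weighted fusion and re-ranking step described in the abstract can be illustrated with a minimal Python sketch. This is not the authors' code: the Gaussian depth-consistency model, the helper names, and the weighting factor lambda_depth are assumptions made only for demonstration.

# Illustrative sketch only (not the paper's released implementation): fuse a
# depth-based occlusion score with a siamese-RPN response score, then re-rank
# the candidate boxes. Helper names and lambda_depth are assumed, not from the paper.
import numpy as np

def depth_consistency_score(candidate_depths, target_depth, sigma=0.5):
    """Score each candidate box by how close its estimated depth is to the
    target's depth in the previous frame; a large depth jump suggests the box
    lies on an occluder in front of the target rather than on the target."""
    return np.exp(-((candidate_depths - target_depth) ** 2) / (2.0 * sigma ** 2))

def rerank_candidates(response_scores, candidate_depths, target_depth, lambda_depth=0.3):
    """Fuse the tracker's response scores with the occlusion discrimination
    scores by a weighted sum, then re-rank the candidate boxes."""
    occlusion_scores = depth_consistency_score(candidate_depths, target_depth)
    final_scores = (1.0 - lambda_depth) * response_scores + lambda_depth * occlusion_scores
    ranking = np.argsort(-final_scores)  # candidate indices, best first
    return ranking, final_scores

if __name__ == "__main__":
    response = np.array([0.55, 0.52, 0.60, 0.30, 0.35])   # raw RPN scores; index 2 is highest
    depths = np.array([2.1, 2.0, 0.8, 2.2, 1.9])          # estimated depth of each candidate box
    ranking, fused = rerank_candidates(response, depths, target_depth=2.0)
    print("ranking:", ranking)                            # index 2 drops in the ranking
    print("fused scores:", np.round(fused, 3))

In this toy run the candidate with the highest raw response sits at a markedly different depth from the tracked target, so the fused score pushes it down the ranking, which is the behaviour the occlusion discrimination module is meant to achieve.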

References:

[1] HENRIQUES J F, CASEIRO R, MARTINS P, et al. Exploiting the circulant structure of tracking-by-detection with kernels[C]// European Conference on Computer Vision. Berlin: Springer, 2012: 702-715.

[2] HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters [J]. IEEE transactions on pattern analysis and machine intelligence, 2015, 37(3): 583-596.
[3] TAO R, GAVVES E, SMEULDERS A W. Siamese instance search for tracking[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 1420-1429.
[4] BERTINETTO L, VALMADRE J, HENRIQUES J F, et al. Fully-convolutional siamese networks for object tracking[C]// European Conference on Computer Vision. Berlin: Springer, 2016: 850-865.
[5] LI B, YAN J, WU W, et al. High performance visual tracking with siamese region proposal network[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 8971-8980.
[6] ZHU Z, WU W, ZOU W, et al. End-to-end flow correlation tracking with spatial-temporal attention[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 548-557.
[7] WU C L, ZHANG Y, ZHANG Y, et al. Motion guided siamese trackers for visual tracking [J]. IEEE access, 2020, 8:7473-7489.
[8] MUNARO M, BASSO F, MENEGATTI E. OpenPTrack: open source multi-camera calibration and people tracking for RGB-D camera networks [J]. Robotics & autonomous systems, 2016, 75:525-538.
[9] GODARD C, MAC AODHA O, FIRMAN M, et al. Digging into self-supervised monocular depth estimation[C]// Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE, 2019: 3828-3838.
[10] WU Y, LIM J, YANG M H. Object tracking benchmark [J]. IEEE transactions on pattern analysis and machine intelligence, 2015, 37(9): 1834-1848.
[11] MAO X B, ZHOU X D, LIU Y H. Improved TLD target tracking algorithm based on FAST feature points [J]. Journal of Zhengzhou University (Engineering Science), 2018, 39(2): 1-5, 17.
[12] LIU M H, WANG C S, HU Q, et al. Block-based target tracking with multi-model collaboration [J]. Journal of Software, 2020, 31(2): 511-530.
[13] DANELLJAN M, HAGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking[C]// Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 4310-4318.
[14] BERTINETTO L, VALMADRE J, GOLODETZ S, et al. Staple: complementary learners for real-time tracking[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 1401-1409.
[15] VALMADRE J, BERTINETTO L, HENRIQUES J F, et al. End-to-end representation learning for correlation filter based tracking[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 2805-2813.
[16] DANELLJAN M, HAGER G, KHAN F S, et al. Discriminative scale space tracking [J]. IEEE transactions on pattern analysis and machine intelligence, 2017, 39(8): 1561-1575.

Similar Articles:

[1] LUO Ronghui, YUAN Hang, ZHONG Fahai, et al. The Research of Traffic Jam Detection Based on Convolutional Neural Network[J]. Journal of Zhengzhou University (Engineering Science), 2019, 40(02): 21. [doi:10.13705/j.issn.1671-6833.2019.02.008]
[2] ZHU Juncheng, YANG Zhile, GUO Yuanjun, et al. A Review of the Application of Deep Learning in Power Load Forecasting[J]. Journal of Zhengzhou University (Engineering Science), 2019, 40(05): 12. [doi:10.13705/j.issn.1671-6833.2019.05.005]
[3] HUANG Wenfeng, XU Shanshan, SUN Yi, et al. Fire Detection Based on Multi-resolution Convolution Neural Network in Various Scenes[J]. Journal of Zhengzhou University (Engineering Science), 2019, 40(05): 79. [doi:10.13705/j.issn.1671-6833.2019.05.022]
[4] CHEN Yifei, GUO Sheng, PAN Wenan, et al. 3D Scene Reconstruction Based on Multi-source Sensor Data Fusion[J]. Journal of Zhengzhou University (Engineering Science), 2021, 42(02): 81. [doi:10.13705/j.issn.1671-6833.2021.02.008]
[5] LI Xuexiang, CAO Qi, LIU Chengming. Image Super-resolution Based on No Match Generative Adversarial Network[J]. Journal of Zhengzhou University (Engineering Science), 2021, 42(05): 1. [doi:10.13705/j.issn.1671-6833.2021.05.018]
[6] LU Chenhui, FENG Shuo, YI Aihua, et al. Gasoline Station Sales Prediction Method Based on Deep Learning and Its Application of Promotion Strategy[J]. Journal of Zhengzhou University (Engineering Science), 2022, 43(01): 1. [doi:10.13705/j.issn.1671-6833.2022.01.014]
[7] CHEN Haojie, HUANG Jin, ZUO Xingquan, et al. Base Station Network Traffic Prediction Method Based on Wide & Deep Learning[J]. Journal of Zhengzhou University (Engineering Science), 2022, 43(01): 7. [doi:10.13705/j.issn.1671-6833.2022.01.011]
[8] CHENG Keyang, RONG Lan, JIANG Senlin, et al. Overview of Methods for Remote Sensing Image Super-resolution Reconstruction Based on Deep Learning[J]. Journal of Zhengzhou University (Engineering Science), 2022, 43(05): 8. [doi:10.13705/j.issn.1671-6833.2022.05.013]
[9] YUAN Laohu, CHANG Yukun, LIU Jiafu. Vehicle Detection Method Based on Improved YOLOv5s in Foggy Scene[J]. Journal of Zhengzhou University (Engineering Science), 2023, 44(03): 37. [doi:10.13705/j.issn.1671-6833.2023.03.005]
[10] GAO Yufei, MA Zixing, XU Jing, et al. Brain Glioma Image Segmentation Based on Convolution and Deformable Attention[J]. Journal of Zhengzhou University (Engineering Science), 2024, 45(02): 27. [doi:10.13705/j.issn.1671-6833.2023.05.007]

Last Update: 2021-10-11