Vision/inertial fusion localization algorithm with an improved visual front-end
Abstract: To address high-precision positioning of mobile robots in GNSS-denied environments (GNSS: Global Navigation Satellite System), an adaptive and generic accelerated segment test (AGAST) corner detection algorithm with adaptive thresholding is proposed to improve the visual front-end of a vision/inertial fusion localization system for mobile robots. The algorithm improves visual odometry (VO) through local histogram equalization and adaptive-threshold feature detection, which raises the quality of extracted feature points and thus the positioning accuracy and stability of VO in complex environments. VO and the inertial navigation system (INS) are then fused by factor graph optimization (FGO) to achieve high-precision positioning. Tests on public indoor and outdoor datasets show that, compared with the mainstream VINS-Mono algorithm, the improved algorithm raises positioning accuracy by an average of 22.8% on the indoor dataset and by 59.7% on the outdoor dataset, and can therefore provide better positioning services for mobile robots.

Keywords:
- factor graph optimization (FGO)
- adaptive and generic accelerated segment test (AGAST)
- visual odometry (VO)
- vision/inertial fusion
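The two components described in the abstract can be made concrete with short sketches. The first is a minimal sketch of the improved front-end using OpenCV's CLAHE for local histogram equalization and its stock AGAST detector; the specific threshold rule (scaling with the equalized image's gray-level standard deviation), all parameter values, and the input file name are illustrative assumptions, not the paper's exact formulation:

```python
import cv2
import numpy as np

def detect_adaptive_agast(gray, clip=2.0, tile=(8, 8), k=0.15, t_min=5, t_max=60):
    """Local histogram equalization (CLAHE) + AGAST with an image-adaptive
    threshold. The rule t = clip(k * std(I), t_min, t_max) is an assumption
    for illustration; the paper derives its own adaptive rule."""
    # Local histogram equalization lifts low-contrast regions so that
    # corners there can pass the segment test.
    eq = cv2.createCLAHE(clipLimit=clip, tileGridSize=tile).apply(gray)
    # Adapt the AGAST segment-test threshold to the image contrast.
    t = int(np.clip(k * eq.std(), t_min, t_max))
    agast = cv2.AgastFeatureDetector_create(threshold=t, nonmaxSuppression=True)
    return agast.detect(eq), t

if __name__ == "__main__":
    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
    kps, t = detect_adaptive_agast(gray)
    print(f"threshold={t}, keypoints={len(kps)}")
```

The second sketch illustrates the idea behind FGO fusion on a deliberately simplified 1D toy problem: one position state per epoch, a prior factor anchoring the first state, and two sets of relative-motion factors (one from VO, one from INS), solved jointly by nonlinear least squares. A real system optimizes SE(3) poses, velocities, and IMU biases with preintegrated IMU factors; this sketch only shows the factor-graph structure:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
true_x = np.arange(10.0)                             # ground-truth 1D positions
vo_rel = np.diff(true_x) + rng.normal(0, 0.05, 9)    # VO relative-motion factors
ins_rel = np.diff(true_x) + rng.normal(0, 0.02, 9)   # INS relative-motion factors

def residuals(x):
    r_prior = np.atleast_1d(x[0] - true_x[0])        # prior factor anchors the graph
    r_vo = (np.diff(x) - vo_rel) / 0.05              # whitened VO residuals
    r_ins = (np.diff(x) - ins_rel) / 0.02            # whitened INS residuals
    return np.concatenate([r_prior, r_vo, r_ins])

x_hat = least_squares(residuals, np.zeros_like(true_x)).x
print("max position error:", np.max(np.abs(x_hat - true_x)))
```

Each residual is whitened by its factor's noise standard deviation, so the solver automatically weights the more precise sensor more heavily when blending the two measurement streams.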
Table 1 Feature point extraction results

| Sequence | Shi-Tomasi | FAST | AGAST | Adaptive AGAST |
|---|---|---|---|---|
| 1 | 97 | 48 | 53 | 122 |
| 2 | 83 | 63 | 69 | 168 |
| 3 | 88 | 81 | 91 | 102 |
| 4 | 268 | 115 | 133 | 268 |
| 5 | 92 | 65 | 63 | 198 |
| 6 | 77 | 57 | 64 | 78 |
| 7 | 495 | 413 | 411 | 732 |
| 8 | 265 | 254 | 251 | 360 |
| 9 | 406 | 338 | 342 | 637 |
| 10 | 769 | 730 | 727 | 1005 |
| 11 | 767 | 840 | 847 | 948 |
| 12 | 1006 | 1188 | 1206 | 2247 |
| 13 | 546 | 551 | 569 | 1000 |
| 14 | 163 | 112 | 116 | 348 |
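A per-frame count in the spirit of Table 1 can be produced with OpenCV's stock detectors. The parameter values below and the simplified adaptive rule (a stand-in for the paper's adaptive AGAST) are assumptions for illustration, and the counts depend strongly on them:

```python
import cv2

def count_features(gray):
    """Feature counts per detector on one grayscale frame."""
    counts = {}
    # Shi-Tomasi corners via goodFeaturesToTrack.
    st = cv2.goodFeaturesToTrack(gray, maxCorners=3000, qualityLevel=0.01, minDistance=7)
    counts["Shi-Tomasi"] = 0 if st is None else len(st)
    counts["FAST"] = len(cv2.FastFeatureDetector_create(threshold=20).detect(gray))
    counts["AGAST"] = len(cv2.AgastFeatureDetector_create(threshold=20).detect(gray))
    # Stand-in for the paper's adaptive AGAST: equalize locally, then set
    # the threshold from the equalized image's contrast.
    eq = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    t = max(5, int(0.15 * eq.std()))
    counts["Adaptive AGAST"] = len(cv2.AgastFeatureDetector_create(threshold=t).detect(eq))
    return counts
```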
Table 2 Absolute trajectory error on the EuRoC MH sequences (unit: m)

| Sequence | System | RMSE | Mean | Min | Max | Std |
|---|---|---|---|---|---|---|
| MH-01 | VINS-Mono | 0.295 | 0.213 | 0.038 | 1.699 | 0.203 |
| MH-01 | VINS (proposed) | 0.152 | 0.137 | 0.032 | 0.361 | 0.067 |
| MH-02 | VINS-Mono | 0.148 | 0.124 | 0.022 | 0.396 | 0.081 |
| MH-02 | VINS (proposed) | 0.102 | 0.082 | 0.012 | 0.284 | 0.060 |
| MH-03 | VINS-Mono | 0.181 | 0.157 | 0.015 | 0.467 | 0.091 |
| MH-03 | VINS (proposed) | 0.203 | 0.174 | 0.040 | 0.491 | 0.106 |
| MH-04 | VINS-Mono | 0.415 | 0.388 | 0.098 | 0.735 | 0.147 |
| MH-04 | VINS (proposed) | 0.320 | 0.303 | 0.129 | 0.580 | 0.106 |
| MH-05 | VINS-Mono | 0.334 | 0.325 | 0.146 | 0.476 | 0.078 |
| MH-05 | VINS (proposed) | 0.255 | 0.237 | 0.077 | 0.479 | 0.094 |
Table 3 Absolute trajectory error on the campus dataset (unit: m)

| Method | RMSE | Mean | Min | Max | Std |
|---|---|---|---|---|---|
| Proposed | 1.565 | 1.388 | 0.077 | 3.390 | 0.725 |
| VINS-Mono | 4.355 | 3.885 | 0.046 | 9.037 | 1.969 |
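The columns in Tables 2 and 3 are the standard absolute trajectory error (ATE) statistics over per-pose translation errors. A minimal sketch of how they are computed, assuming the estimated and ground-truth trajectories are already time-associated and aligned (as done, e.g., by the evo trajectory-evaluation toolbox):

```python
import numpy as np

def ate_stats(est_xyz, gt_xyz):
    """ATE statistics (in meters) from aligned, time-associated Nx3 trajectories."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)   # per-pose translation error
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "Mean": float(err.mean()),
        "Min": float(err.min()),
        "Max": float(err.max()),
        "Std": float(err.std()),
    }

# Usage example with synthetic trajectories:
gt = np.cumsum(np.random.default_rng(1).normal(size=(100, 3)), axis=0)
est = gt + np.random.default_rng(2).normal(0, 0.1, size=(100, 3))
print(ate_stats(est, gt))
```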