Visual/inertial fusion localization algorithm with an improved visual front-end


    Abstract: To address the problem of high-precision localization of mobile robots in Global Navigation Satellite System (GNSS) denied environments, an adaptive and generic accelerated segment test (AGAST) feature detection algorithm with adaptive thresholding is proposed to improve the visual front-end of a visual/inertial fusion localization system for mobile robots. The algorithm improves the visual odometry (VO) front-end through local histogram equalization and adaptive-threshold feature detection, which raises the quality of extracted feature points and improves the localization accuracy and stability of VO in complex environments. VO and the inertial navigation system (INS) are then fused by factor graph optimization (FGO) to achieve high-precision localization. Tests on public indoor and outdoor datasets show that, compared with the mainstream VINS-Mono algorithm, the proposed algorithm improves localization accuracy by 22.8% on average on the indoor dataset and by 59.7% on average on the outdoor dataset, and can therefore provide better positioning services for mobile robots.
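The front-end idea summarized in the abstract (local histogram equalization, then corner detection with a threshold adapted to local image contrast) can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the tile-based equalization stands in for CLAHE-style local equalization, the plain segment test stands in for AGAST's accelerated decision trees, and the contrast-based threshold heuristic and synthetic test image are assumptions made here for the example.

```python
import numpy as np

# Bresenham circle of radius 3 (16 offsets), as used by FAST/AGAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def local_hist_eq(img, tile=16):
    """Per-tile histogram equalization (a crude stand-in for CLAHE; real
    CLAHE also clips histograms and interpolates across tile seams)."""
    out = np.empty_like(img)
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            block = img[y:y + tile, x:x + tile]
            cdf = np.cumsum(np.bincount(block.ravel(), minlength=256))
            cdf = cdf / cdf[-1]
            out[y:y + tile, x:x + tile] = (cdf[block] * 255).astype(img.dtype)
    return out

def segment_test(img, y, x, t, arc=9):
    """FAST/AGAST-style test: at least `arc` contiguous circle pixels all
    brighter or all darker than the center pixel by more than threshold t."""
    c = int(img[y, x])
    diffs = [int(img[y + dy, x + dx]) - c for dx, dy in CIRCLE]
    for sign in (1, -1):
        flags = [sign * d > t for d in diffs]
        run = best = 0
        for f in flags + flags:  # duplicate the list to handle wrap-around
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= arc:
            return True
    return False

def detect(img, tile=16):
    eq = local_hist_eq(img, tile)
    corners = []
    for y in range(3, eq.shape[0] - 3):
        for x in range(3, eq.shape[1] - 3):
            # Adaptive threshold from local contrast (hypothetical heuristic:
            # a fraction of the tile's standard deviation, floored at 10).
            ty, tx = (y // tile) * tile, (x // tile) * tile
            t = max(10.0, 0.3 * eq[ty:ty + tile, tx:tx + tile].std())
            if segment_test(eq, y, x, t):
                corners.append((y, x))
    return corners

# Demo on a synthetic frame: a bright square whose corners should fire.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:44, 20:44] = 200
corners = detect(img)
```

In the full system described above, corners detected this way would feed the VO front-end, whose pose estimates are then fused with INS measurements in the factor graph back-end.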

     
