Visual-inertial positioning method based on tight coupling
Abstract: The inertial measurement unit (IMU) is disturbed by its own temperature, bias, vibration, and other factors, so the integrated pose diverges easily, and monocular visual positioning accuracy is poor when the robot moves rapidly. This paper therefore studies a tightly coupled visual-inertial simultaneous localization and mapping (SLAM) method. First, the visual odometry (VO) localization problem is studied; to reduce mismatches between feature points, the Oriented FAST and Rotated BRIEF (ORB) feature extraction method is adopted. Then the mathematical model of the IMU is constructed, and the discrete integral of the motion model is obtained with the midpoint (median) method. Finally, the monocular visual pose is aligned with the IMU trajectory, and the optimal estimate of the robot's motion state is obtained by nonlinear optimization over a sliding window. The method is validated in two experiments, one built on a simulation scene and one comparing against the monocular ORB-SLAM algorithm. The results show that the proposed method outperforms VO alone: positioning error is kept to about 0.4 m, a 30% improvement over the traditional tracking model.
Key words:
- visual-inertial /
- visual odometry (VO) /
- ORB feature points /
- inertial measurement unit (IMU) /
- nonlinear optimization
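
To make the ORB front end described in the abstract concrete, here is a minimal Python/OpenCV sketch (not taken from the paper) that extracts ORB features from two frames and prunes mismatches with Lowe's ratio test; the frame file names, the 1000-feature budget, and the 0.75 ratio threshold are illustrative assumptions.

```python
import cv2

# Illustrative sketch: ORB extraction + binary-descriptor matching with a
# ratio test to reduce mismatches. Frame file names are placeholders.
img1 = cv2.imread("frame_k.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_k1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)           # Oriented FAST + Rotated BRIEF
kp1, des1 = orb.detectAndCompute(img1, None)   # keypoints and binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)      # Hamming distance suits binary descriptors
pairs = matcher.knnMatch(des1, des2, k=2)      # two nearest candidates per feature

good = []
for pair in pairs:
    # Keep a match only when it is clearly better than the runner-up.
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
```

In practice the surviving matches would typically be pruned further with a geometric check such as RANSAC on the epipolar constraint (e.g. cv2.findEssentialMat).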
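The midpoint ("median") discretization of the IMU motion model averages consecutive gyroscope and accelerometer samples for each integration step. The NumPy sketch below shows one such step, assuming Hamilton quaternions in [w, x, y, z] order, an accelerometer measuring specific force, and g = 9.81 m/s²; the exact state conventions in the paper may differ.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q (body -> world)."""
    qv = np.concatenate(([0.0], v))
    return quat_mul(quat_mul(q, qv), q * np.array([1, -1, -1, -1]))[1:]

def midpoint_step(p, v, q, acc0, gyr0, acc1, gyr1, ba, bg, dt,
                  g=np.array([0.0, 0.0, -9.81])):
    """One midpoint-rule IMU integration step in the world frame."""
    # Midpoint angular rate, bias-corrected, drives the orientation update.
    w_mid = 0.5 * (gyr0 + gyr1) - bg
    dq = np.concatenate(([1.0], 0.5 * w_mid * dt))   # small-angle quaternion
    q1 = quat_mul(q, dq)
    q1 /= np.linalg.norm(q1)
    # Midpoint acceleration in the world frame, gravity added back.
    a_mid = 0.5 * (quat_rotate(q, acc0 - ba) + quat_rotate(q1, acc1 - ba)) + g
    p1 = p + v * dt + 0.5 * a_mid * dt**2
    v1 = v + a_mid * dt
    return p1, v1, q1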
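One ingredient of aligning the monocular visual pose with the IMU trajectory is recovering the metric scale that a monocular camera cannot observe. The closed-form least-squares fit below is a toy sketch of that idea only; the full alignment in systems such as VINS-Mono additionally estimates gravity direction, velocities, and gyroscope bias.

```python
import numpy as np

def estimate_scale(p_vo, p_imu):
    """Least-squares scale s minimizing sum ||s * p_vo_i - p_imu_i||^2.
    p_vo:  (N, 3) up-to-scale monocular VO positions
    p_imu: (N, 3) metric positions from IMU integration
    Setting the derivative to zero gives the closed form
    s = sum(p_vo . p_imu) / sum(p_vo . p_vo)."""
    return np.sum(p_vo * p_imu) / np.sum(p_vo * p_vo)

# Toy usage: a trajectory whose true scale is 2.0, with mild noise.
rng = np.random.default_rng(0)
p_vo = rng.normal(size=(50, 3))
p_imu = 2.0 * p_vo + 0.01 * rng.normal(size=(50, 3))
print(estimate_scale(p_vo, p_imu))   # ~2.0
```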
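Finally, for the sliding-window nonlinear optimization, the sketch below shows only the window bookkeeping and a generic least-squares solve over the windowed states. The residual is a stand-in (a measurement term plus a smoothness term); a real estimator would stack visual reprojection and IMU pre-integration residuals, and would marginalize the oldest state into a prior via the Schur complement rather than discarding it.

```python
import numpy as np
from collections import deque
from scipy.optimize import least_squares

WINDOW = 10                      # number of keyframe states kept in the window

def residuals(x, z, w_smooth=3.0):
    """Stand-in cost: measurement residuals (x - z) plus weighted
    smoothness residuals between consecutive states. A real system
    stacks visual reprojection and IMU pre-integration terms here."""
    return np.concatenate([x - z, w_smooth * np.diff(x)])

window = deque(maxlen=WINDOW)    # deque silently drops the oldest state;
                                 # real systems marginalize it into a prior

# Toy usage: push noisy 1-D "states" and re-optimize the active window.
rng = np.random.default_rng(1)
for z in np.linspace(0.0, 9.0, 15) + 0.1 * rng.normal(size=15):
    window.append(z)
    z_win = np.asarray(window)
    sol = least_squares(residuals, z_win, args=(z_win,))
    estimate = sol.x             # refined states for the current window
```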
Table 1  Noise parameters of the simulation datasets

| g_bias | a_bias | g_noise | a_noise |
| ------ | ------ | ------- | ------- |
| 0      | 0      | 0       | 0       |
| 1.2e−5 | 1.2e−4 | 0.015   | 0.02    |
| 1.0e−4 | 1.0e−3 | 0.160   | 0.18    |

Table 2  Error comparison of the two algorithms on the V1_01_easy dataset (unit: m)

| Algorithm | Standard deviation | RMSE | Maximum | Minimum |
| --------- | ------------------ | ---- | ------- | ------- |
| ORB-SLAM2 | 0.40               | 1.00 | 1.8     | 0.10    |
| VINS-MONO | 0.85               | 0.25 | 0.4     | 0.02    |