Abstract:
To meet the urgent demands of high-precision pinpoint landing and autonomous obstacle avoidance in deep space exploration missions, visual navigation technology has emerged as a pivotal means of improving the landing success rate of extraterrestrial probes, owing to its high autonomy, low power consumption, and high information density. This paper presents a systematic review of the research status and progress of visual navigation technology for deep space landing. First, in the context of the entry, descent, and landing (EDL) process, the critical roles of visual navigation in autonomous positioning, velocity estimation, and hazard avoidance are analyzed. The technical challenges inherent to the deep space environment—including extreme lighting conditions, sparse surface textures, highly dynamic motion, and limited on-board computing resources—are highlighted. The engineering characteristics of visual navigation systems in representative lunar and Martian landing missions, such as the "Chang'e" series, "Mars 2020," and the Smart Lander for Investigating Moon (SLIM), are also summarized. Second, the fundamental principles of traditional visual methods are elaborated, including handcrafted feature matching, stereo vision-based 3D reconstruction, and terrain relative navigation (TRN). The paper then explores in depth the progress of deep learning approaches in addressing adaptability to complex environments, focusing on feature extraction and matching, crater detection, and pose estimation networks. Finally, traditional methods and deep learning approaches are compared with respect to robustness, real-time performance, and verifiability. It is concluded that integrating traditional geometric constraints with deep learning-based feature representation constitutes a significant trend in the future development of visual navigation technology for deep space landing.