Abstract: Accurate estimation of a robot's pose is fundamental for navigation and path planning during the mapping process. However, the yaw, roll, and pitch states of a robot are unobservable in outdoor environments, and their errors are difficult to evaluate and eliminate. This often leads to severe drift along the Z-axis in the generated map, preventing the construction of a globally consistent and accurate map. To address this issue, we propose a complete robot mapping system architecture that improves mapping performance by combining two methods: ground segmentation and loop closure detection. To efficiently integrate data from multiple sensors and estimate the dynamic biases of the Inertial Measurement Unit (IMU) in real time, the system employs a LiDAR-inertial odometry simultaneous localization and mapping (SLAM) method. The system constructs a factor graph that incorporates LiDAR odometry factors, IMU pre-integration factors, and loop closure detection factors; through factor graph optimization, the robot's global pose is estimated, accumulated errors are reduced, and a globally consistent map is ultimately generated. In addition, the algorithm was deployed on a quadruped robot platform and evaluated extensively through outdoor experiments and on public datasets. The experimental results show that our system achieves better mapping quality and accuracy, significantly reducing the average positional error compared with baseline methods.
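To make the factor-graph back end described above concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how such a graph might be assembled with GTSAM's Python bindings. The keyframe indices, noise sigmas, and relative-pose measurements are illustrative assumptions, and the IMU pre-integration factor (gtsam.ImuFactor) is omitted for brevity.

```python
# Hypothetical sketch, not the paper's code: keys, noise sigmas, and
# measurements below are illustrative assumptions.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Prior factor anchors the first keyframe pose at the origin.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-2))
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
initial.insert(0, gtsam.Pose3())

# LiDAR odometry factors: relative pose between consecutive keyframes
# (here a fixed 1 m forward step standing in for scan registration).
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-1))
step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
for k in range(2):
    graph.add(gtsam.BetweenFactorPose3(k, k + 1, step, odom_noise))
    initial.insert(k + 1, initial.atPose3(k).compose(step))

# Loop closure factor: relative pose re-measured when keyframe 2 is
# recognized near keyframe 0 (loop detection itself happens upstream).
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-1))
loop_meas = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(2.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(0, 2, loop_meas, loop_noise))

# Batch optimization estimates the globally consistent keyframe poses;
# a real-time system would update incrementally with gtsam.ISAM2 instead.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for k in range(3):
    print(k, result.atPose3(k).translation())
```

In a full pipeline of the kind the abstract describes, the loop closure factor would be added only after place recognition confirms a revisit, which is what pulls the accumulated Z-axis drift back toward a consistent global estimate.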