Abstract: Traditional SLAM methods based on explicit scene representations, such as point clouds, have matured in accuracy and robustness, but they fall short in capturing the texture and semantic information of the map. To address this limitation, this paper introduces neural radiance fields (NeRF), with their differentiable rendering capability, into a traditional visual SLAM system and proposes a novel visual SLAM method: DRM-SLAM (dense radiance mapper-SLAM). The method uses ORB-SLAM3 for camera pose estimation and combines the RGB and depth information of keyframes to generate dense point clouds. Using a dynamic voxel grid, it samples within the grid according to the three-dimensional geometric information provided by the point cloud, thereby reducing the number of multilayer perceptron (MLP) queries required by NeRF. In addition, the method incorporates multi-resolution hash encoding and a CUDA-based NeRF implementation, significantly accelerating NeRF training. Tests on the TUM, WHU-RSVI, Replica, and STAR datasets demonstrate that DRM-SLAM effectively uses dense point clouds and NeRF volume rendering to fill holes in the point clouds, maintaining the pose estimation accuracy of traditional SLAM methods while improving texture and material continuity in the map. DRM-SLAM achieves a frame rate of 22.3 FPS on the Replica dataset, significantly higher than the NICE-SLAM, iMAP, and Co-SLAM algorithms, demonstrating its real-time performance. Ablation experiments in the same scenes show that NeRF rendering guided by dense point clouds triples the frame rate compared with conventional NeRF rendering, further confirming that dense point clouds accelerate NeRF convergence and demonstrating the effectiveness of DRM-SLAM in map reconstruction.
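The following is a minimal sketch, not the paper's implementation, of the point-cloud-guided sampling idea summarized above: a dense point cloud marks occupied voxels in a grid, and only ray samples that fall inside occupied voxels are forwarded to the NeRF MLP, which is one plausible way geometric priors reduce MLP query counts. The grid resolution, scene bounds, and function names are illustrative assumptions.

```python
import numpy as np

def build_occupancy_grid(points, bounds_min, bounds_max, resolution=128):
    """Voxelize a dense point cloud into a boolean occupancy grid (hypothetical helper)."""
    grid = np.zeros((resolution,) * 3, dtype=bool)
    idx = (points - bounds_min) / (bounds_max - bounds_min) * resolution
    idx = np.clip(idx.astype(int), 0, resolution - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def sample_ray_in_occupied_voxels(origin, direction, grid, bounds_min, bounds_max,
                                  n_samples=64, near=0.1, far=5.0):
    """Uniformly sample a ray, then keep only samples landing in occupied voxels."""
    t = np.linspace(near, far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    res = grid.shape[0]
    idx = (pts - bounds_min) / (bounds_max - bounds_min) * res
    idx = np.clip(idx.astype(int), 0, res - 1)
    keep = grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return pts[keep], t[keep]  # only these samples would be passed to the MLP

# Usage sketch: occupancy from a stand-in point cloud, then sparse queries per ray.
bounds_min, bounds_max = np.array([-2., -2., -2.]), np.array([2., 2., 2.])
cloud = np.random.uniform(-1.0, 1.0, size=(10000, 3))  # placeholder for a keyframe point cloud
grid = build_occupancy_grid(cloud, bounds_min, bounds_max)
pts, t_vals = sample_ray_in_occupied_voxels(
    np.array([0., 0., -1.9]), np.array([0., 0., 1.]), grid, bounds_min, bounds_max)
print(f"MLP queries for this ray: {len(pts)} of 64 uniform samples")
```

Skipping empty space in this way is consistent with the abstract's claim that dense geometry accelerates NeRF convergence, since rendering effort concentrates on regions the point cloud indicates are occupied.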