Abstract: LiDAR-based object detection is widely used in fields such as autonomous driving, robotic navigation, and drones. However, the sparsity and uneven distribution of LiDAR point cloud data pose significant challenges for object detection and classification. To address this problem, this paper proposes an improved 3D object detection algorithm based on PointPillars. First, a more efficient pillar feature encoding network is designed: a dual attention encoding network combining point-wise and channel-wise attention enhances the feature representation of each pillar. Second, in the backbone, the Global Context Network (GCNet) and the CSPDarknet network are integrated to strengthen feature map representation, allowing the network to extract richer contextual semantic information during the feature extraction stage. Experiments on the KITTI dataset show that the proposed method achieves higher detection accuracy than the baseline model, with mean Average Precision (mAP) improvements of 2.12%, 2.51%, and 1.84% on the easy, moderate, and hard difficulty levels, respectively. In addition, the improved algorithm runs at 35.6 FPS, demonstrating that the method effectively enhances detection accuracy while maintaining real-time performance.
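To illustrate the dual attention pillar encoding idea mentioned in the abstract, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes PointPillars-style per-point features of shape (pillars, points, channels), and uses a softmax point-wise attention plus an SE-style channel-wise gate as one plausible realization; all module and parameter names (`DualAttentionPillarEncoder`, `reduction`, etc.) are hypothetical.

```python
import torch
import torch.nn as nn


class DualAttentionPillarEncoder(nn.Module):
    """Sketch of a pillar feature encoder with point-wise and channel-wise attention.

    Input:  per-point pillar features of shape (P, N, C_in), where
            P = number of non-empty pillars, N = points per pillar, C_in = point channels.
    Output: one aggregated feature vector per pillar, shape (P, C_out).
    """

    def __init__(self, in_channels: int = 9, out_channels: int = 64, reduction: int = 4):
        super().__init__()
        # Per-point embedding, simplified from the PointPillars PFN layer.
        self.embed = nn.Sequential(
            nn.Linear(in_channels, out_channels),
            nn.ReLU(inplace=True),
        )
        # Point-wise attention: scores each point inside a pillar.
        self.point_att = nn.Linear(out_channels, 1)
        # Channel-wise attention: squeeze-and-excitation style gate over channels.
        self.channel_att = nn.Sequential(
            nn.Linear(out_channels, out_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(out_channels // reduction, out_channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.embed(x)                                        # (P, N, C)
        # Point-wise attention: softmax over the N points of each pillar,
        # then attention-weighted pooling instead of plain max pooling.
        point_scores = torch.softmax(self.point_att(feats), dim=1)   # (P, N, 1)
        pooled = (feats * point_scores).sum(dim=1)                   # (P, C)
        # Channel-wise attention: reweight channels of the pooled pillar feature.
        channel_scores = self.channel_att(pooled)                    # (P, C)
        return pooled * channel_scores                               # (P, C)


if __name__ == "__main__":
    # Example: 1000 pillars, 32 points each, 9 decorated input channels.
    pillars = torch.randn(1000, 32, 9)
    encoder = DualAttentionPillarEncoder(in_channels=9, out_channels=64)
    print(encoder(pillars).shape)  # torch.Size([1000, 64])
```

The aggregated (P, C_out) features would then be scattered back onto the BEV grid and passed to the backbone, as in the standard PointPillars pipeline.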