Abstract: High-precision vehicle component detection and segmentation play a vital role in intelligent damage assessment systems by enabling accurate localization of damaged parts. However, challenges remain due to complex backgrounds and the performance bottlenecks of traditional detection methods, which are constrained by single-level feature representations. To address these issues, this article proposes a dual-level saliency-driven vehicle component detection method. At the image level, DeepLabV3 is employed with a combination of three loss functions to extract salient foreground regions and suppress background interference. At the feature level, a detection and segmentation framework is built on YOLOv11, with a spatial attention pyramid pooling structure integrated into feature extraction to strengthen multi-scale feature aggregation. Additionally, an attention-guided saliency map module is designed to provide global modeling and spatial enhancement. To evaluate the effectiveness of the proposed method, a customized vehicle component dataset for multi-part detection is constructed, and extensive experiments are conducted. Ablation studies confirm the contribution of each module. In comparative experiments, the method improves detection accuracy by 3.5% and segmentation accuracy by 3.7% over the baseline model. Visualization results further show that the proposed approach focuses more accurately on salient component regions and effectively reduces the false and missed detections caused by complex backgrounds. Moreover, the method exhibits strong generalization on the public Car Seg dataset, achieving superior performance across multiple evaluation metrics.
Overall, the dual-level saliency-driven architecture substantially enhances vehicle component detection through salient foreground extraction and attention-guided multi-scale feature aggregation, offering practical insights for intelligent damage assessment in the vehicle insurance industry.