Improved PL-VINS based on closed-loop image correction and line feature clustering
Affiliation:

1.Department of Automation, North China Electric Power University, Baoding 071003, China; 2.Baoding Key Laboratory of Intelligent Robot Perception and Control in Electric Power System, Baoding 071003, China

CLC Number:

TP242; TH701

Abstract:

In environments with varying illumination and repetitive textures, existing visual-inertial navigation systems (VINS) suffer from insufficient feature extraction and high feature mismatch rates, failing to meet application requirements for pose estimation accuracy and system robustness. To address these challenges, an improved PL-VINS is presented that enhances feature extraction in varying-illumination scenes and feature matching in repetitive-texture environments. In the image preprocessing module, a closed-loop gamma correction method is proposed that iteratively adjusts image brightness until the desired level is reached, increasing the number of extractable features and thereby enhancing system robustness under varying illumination. In the line feature detection and tracking module, the intersection points of spatially parallel line pairs are first computed in the image plane and clustered to obtain intersection-point clusters and their weighted centers; the line features are then clustered by their distance and direction relative to these weighted centers, improving the robustness of line feature matching in repetitive-texture environments. In the backend optimization module, the intersection points of intra-cluster line features are incorporated into the optimization as additional features, and reprojection residuals that jointly fuse point, line, and intersection features are constructed to improve pose estimation accuracy in repetitive-texture scenes. Comparative experiments on public datasets demonstrate that the improved PL-VINS reduces the average absolute pose error by 17.4% on the EuRoC dataset compared to PL-VINS and by 12.2% on the UMA-VI dataset compared to SuperVINS. To further verify the effectiveness of the proposed method, an experimental platform based on a mobile robot was constructed for real-world testing. The results indicate that the improved PL-VINS achieves superior accuracy and robustness compared to state-of-the-art algorithms in environments with illumination changes and repetitive textures.
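The closed-loop gamma correction described in the abstract can be sketched as a simple brightness feedback loop. The function and parameter names below (`closed_loop_gamma_correction`, `target_mean`, `tol`) are illustrative assumptions, not the paper's implementation; the feedback step uses the standard identity that applying gamma g maps a mean brightness m toward m**g.

```python
import numpy as np

def closed_loop_gamma_correction(img, target_mean=0.5, tol=0.02, max_iter=10):
    """Iteratively gamma-correct a grayscale image (floats in [0, 1])
    until its mean brightness is within `tol` of `target_mean`.

    Sketch of the closed-loop idea only; names are hypothetical.
    """
    corrected = np.clip(img.astype(np.float64), 0.0, 1.0)
    for _ in range(max_iter):
        mean = corrected.mean()
        if abs(mean - target_mean) < tol:
            break  # desired brightness level reached, close the loop
        # Feedback step: pick gamma so that mean**gamma ~= target_mean.
        gamma = np.log(target_mean) / np.log(max(mean, 1e-6))
        corrected = corrected ** gamma
    return corrected
```

For a uniformly dark image (mean 0.1), one iteration already lands on the target, since 0.1 ** (log 0.5 / log 0.1) = 0.5; unevenly lit images take a few iterations, which is where the closed loop pays off.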
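The intersection-point construction for line pairs can be illustrated with homogeneous image coordinates, where the cross product of two lines gives their intersection and the cross product of two points gives the line through them. The helpers `line_through`, `intersection`, and `weighted_center` are hypothetical names for this sketch, not functions from the paper.

```python
import numpy as np

def line_through(p1, p2):
    # Homogeneous line through two image points (x, y).
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def intersection(l1, l2):
    # Intersection of two homogeneous lines; None if (near-)parallel
    # in the image, i.e. the homogeneous point lies at infinity.
    p = np.cross(l1, l2)
    if abs(p[2]) < 1e-9:
        return None
    return p[:2] / p[2]

def weighted_center(points, weights=None):
    # Weighted center of a cluster of intersection points.
    pts = np.asarray(points, dtype=float)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, dtype=float)
    return (w[:, None] * pts).sum(axis=0) / w.sum()
```

Line features could then be grouped by their distance and direction relative to each cluster's weighted center, mirroring the clustering step described above.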

History
  • Online: March 30, 2026