High-precision 3D measurement method based on convolutional neural networks for line structured light

Authors: 叶涛, 何威燃, 刘国鹏, 欧阳煜, 王斌

Affiliations: 1. School of Mechanical and Electrical Engineering, China University of Mining and Technology (Beijing), Beijing 100083, China; 2. CCTEG Taiyuan Research Institute Co., Ltd., Taiyuan 030032, China; 3. Second Research Institute of the Civil Aviation Administration of China, Chengdu 610041, China

CLC number: TH741

Funding: National Natural Science Foundation of China (52374166), Beijing Natural Science Foundation (L221018), and the Fundamental Research Funds for the Central Universities (2024ZKPYZJD04)



Abstract:

Line-structured light vision 3D measurement technology is widely used for its high precision and non-contact 3D reconstruction. However, existing methods suffer from a high degree of coupling in the calibration process, and in complex environments background noise and lighting changes severely interfere with stripe extraction, lowering the localization accuracy of the structured-light stripe center and, in turn, the accuracy and robustness of the overall 3D measurement. To address these challenges, a robust 3D measurement method based on convolutional neural networks (CNN) is proposed. First, an innovative residual U-shaped block feature pyramid network (RSU-FPN) is designed to suppress background noise and achieve high-precision, robust extraction of the structured-light stripe center. Second, a new line-structured light vision sensor is constructed and a decoupled measurement model is proposed that separates camera calibration from light-plane calibration, greatly improving the flexibility and scalability of the system; by avoiding the coupling inherent in traditional calibration methods, the measurement system becomes more efficient and easier to adjust. Experimental results show that the proposed method extracts stripe centers with high precision against complex backgrounds; calibrating with the extracted stripe centers yields root mean square errors of 0.005 mm in the x direction, 0.009 mm in the y direction, and 0.097 mm in the z direction. The method also achieves high-precision 3D reconstruction on different surface types, such as diffuse and smooth reflective surfaces, demonstrating its robustness and superiority in practical applications.
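The abstract names a residual U-shaped block feature pyramid network (RSU-FPN) for suppressing background noise and extracting the stripe region, but gives no architectural detail. As a rough illustration only, the PyTorch sketch below shows what a residual U-shaped block generally looks like: a shallow encoder-decoder whose output is added back to its input features. The class names, channel counts, and depth are assumptions made here for illustration and are not the authors' network.

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):
    """3x3 convolution followed by batch norm and ReLU."""
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class RSUBlock(nn.Module):
    """Residual U-shaped block: a shallow encoder-decoder whose output is
    added back to the input features through a residual connection."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.conv_in = ConvBNReLU(in_ch, out_ch)   # residual branch
        self.enc1 = ConvBNReLU(out_ch, mid_ch)
        self.pool = nn.MaxPool2d(2, stride=2)
        self.enc2 = ConvBNReLU(mid_ch, mid_ch)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = ConvBNReLU(mid_ch * 2, out_ch)

    def forward(self, x):
        # Assumes the spatial size is divisible by 2.
        xin = self.conv_in(x)
        e1 = self.enc1(xin)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return d1 + xin

# Example: one block applied to a single-channel stripe image.
feat = RSUBlock(1, 16, 32)(torch.randn(1, 1, 256, 256))
print(feat.shape)  # torch.Size([1, 32, 256, 256])
```

In a feature-pyramid arrangement, several such blocks at different resolutions would be combined so that stripe features are detected at multiple scales; the exact combination used in the paper is not specified in the abstract.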

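The abstract states that the network output is used to locate the stripe center with high precision, but it does not describe the sub-pixel step. A common approach, shown below as a NumPy sketch, is a gray-weighted centroid computed column by column inside the predicted stripe region; the function name, threshold, and column-wise scan direction are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def stripe_centers_from_mask(prob_map, intensity, thresh=0.5):
    """Estimate sub-pixel stripe centers per image column.

    prob_map  : HxW stripe probability map (e.g. a CNN segmentation output)
    intensity : HxW grayscale image of the laser stripe
    Returns a list of (u, v) points, u = column index, v = sub-pixel row.
    """
    h, w = prob_map.shape
    rows = np.arange(h, dtype=np.float64)
    centers = []
    for u in range(w):
        mask = prob_map[:, u] >= thresh            # pixels the network assigns to the stripe
        if not np.any(mask):
            continue                               # no stripe detected in this column
        weights = intensity[:, u].astype(np.float64) * mask
        total = weights.sum()
        if total <= 0:
            continue
        v = np.sum(rows * weights) / total         # gray-weighted centroid (sub-pixel row)
        centers.append((u, v))
    return centers
```

For a stripe oriented vertically in the image, the same centroid would be taken along rows instead of columns.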
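Once stripe centers are known in the image, a line-structured-light sensor recovers 3D coordinates by intersecting the camera ray through each center with the calibrated light plane, which is also why camera intrinsics and light-plane parameters can be calibrated separately, as in the decoupled model described above. The sketch below assumes a standard pinhole model and a plane written as n·X + d = 0 in the camera frame; all numeric values are made up for illustration and are not the paper's calibration results.

```python
import numpy as np

def triangulate_stripe_point(u, v, K, plane_n, plane_d):
    """Intersect the camera ray through pixel (u, v) with the light plane.

    K       : 3x3 camera intrinsic matrix (from camera calibration)
    plane_n : 3-vector, light-plane normal in the camera frame
    plane_d : scalar offset so that plane_n . X + plane_d = 0
    Returns the 3D point X in the camera coordinate frame.
    """
    # Back-project the pixel to a ray direction in the camera frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # The ray is X = s * ray; substitute into the plane equation to solve for s.
    s = -plane_d / float(plane_n @ ray)
    return s * ray

# Hypothetical intrinsics and light-plane parameters, for illustration only.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])
plane_n = np.array([0.0, 0.70710678, 0.70710678])  # plane normal (unit vector)
plane_d = -300.0                                    # offset in mm
print(triangulate_stripe_point(700.0, 520.0, K, plane_n, plane_d))
```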
Cite this article:

叶涛, 何威燃, 刘国鹏, 欧阳煜, 王斌. High-precision 3D measurement method based on convolutional neural networks for line structured light [J]. Chinese Journal of Scientific Instrument, 2025, 46(2): 183-195.

Online publication date: 2025-04-28