Dynamic multitarget 3D grasp posture detection approach based on deep convolutional network
DOI:
Authors: 杨傲雷, 曹裕, 徐昱琳, 费敏锐, 陈灵

Affiliation:

Author biography:

Corresponding author:

CLC number: TP391; TH86

Fund project: National Natural Science Foundation of China (61873158, 61703262) and Natural Science Foundation of Shanghai (18ZR1415100)


Dynamic multitarget 3D grasp posture detection approach based on deep convolutional network
Author:
Affiliation:

Fund Project:

    摘要 (Abstract):

    In robot grasping tasks in unstructured environments, acquiring a stable and reliable grasp pose of the target object is crucial. This paper proposes a dynamic multitarget 3D grasp pose detection method based on a deep convolutional network. First, Faster RCNN is adopted for dynamic multitarget detection, and a stabilization detection filter is proposed to suppress noise and the jitter that arises in real-time detection. Then, on the basis of a proposed depth target adapter, the GGCNN model is used to estimate the 2D grasp pose. Next, the target detection results, the 2D grasp pose, and the object depth information are fused to reconstruct the point cloud of the target object and to compute the 3D grasp pose. Finally, a robot grasping platform is built; the measured grasping success rate reaches 95.6%, which verifies the feasibility and effectiveness of the proposed method and overcomes the drawback that a 2D grasp pose is fixed and unique.
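
    The abstract does not spell out how the stabilization detection filter works, so the sketch below is only an illustrative assumption: it smooths each object's Faster RCNN bounding box across frames with an exponential moving average and reports a detection only after it has persisted for several frames, which is one common way to suppress noise and jitter in real-time detection. The class name StabilizationFilter and the parameters alpha and min_hits are hypothetical.

        import numpy as np

        class StabilizationFilter:
            """Illustrative detection smoother (assumed design, not the paper's exact filter)."""

            def __init__(self, alpha=0.7, min_hits=3):
                self.alpha = alpha        # weight given to the newest observation
                self.min_hits = min_hits  # frames an object must persist before it is trusted
                self.tracks = {}          # label -> {"box": smoothed [x1, y1, x2, y2], "hits": count}

            def update(self, detections):
                """detections: list of (label, [x1, y1, x2, y2]) from the current frame."""
                stable = []
                for label, box in detections:
                    box = np.asarray(box, dtype=float)
                    track = self.tracks.setdefault(label, {"box": box, "hits": 0})
                    # Exponential moving average suppresses frame-to-frame jitter.
                    track["box"] = self.alpha * box + (1.0 - self.alpha) * track["box"]
                    track["hits"] += 1
                    if track["hits"] >= self.min_hits:
                        stable.append((label, track["box"]))
                return stable

    A transient false positive that appears in only one or two frames never reaches min_hits and is discarded, while a genuine object keeps a slowly varying box.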

    Abstract:

    For robot grasping tasks in unstructured environments, acquiring a stable and reliable grasp pose of the target object is essential. This paper proposes a dynamic multitarget 3D grasp pose detection approach based on a deep convolutional network. First, Faster RCNN is used to perform dynamic multitarget detection, and a stabilization detection filter is proposed to suppress noise and jitter in real-time detection. Then, building on a proposed depth target adapter, the GGCNN model is used to estimate the 2D grasp pose. Furthermore, the target detection results, the 2D grasp pose, and the object depth information are fused to reconstruct the point cloud of the target object and to compute the 3D grasp pose. Finally, a robot grasping platform is built; the experimental results show that the grasping success rate reaches 95.6%, which verifies the feasibility and effectiveness of the proposed approach and overcomes the limitation that a 2D grasp pose is fixed and unique.
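
    As a companion sketch of the fusion step, and again only under assumed conventions rather than the paper's exact formulation, a 2D grasp predicted by GGCNN (pixel centre and in-plane angle) can be lifted to 3D by reading the aligned depth image at the grasp centre and back-projecting through the pinhole camera model, then expressing the result in the robot base frame via the hand-eye transform. The intrinsics fx, fy, cx, cy and the transform T_base_cam are placeholders for calibration values.

        import numpy as np

        def grasp_2d_to_3d(u, v, angle, depth_img, fx, fy, cx, cy, T_base_cam):
            """Lift a 2D grasp (pixel centre u, v and in-plane angle) to a 3D grasp pose.

            depth_img      : HxW depth map in metres, aligned with the colour image
            fx, fy, cx, cy : pinhole camera intrinsics
            T_base_cam     : 4x4 homogeneous transform from camera frame to robot base frame
            Returns the grasp position in the base frame and the gripper yaw angle.
            """
            z = float(depth_img[int(v), int(u)])                 # depth at the grasp centre
            # Pinhole back-projection: pixel -> 3D point in the camera frame.
            p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
            p_base = T_base_cam @ p_cam                          # move into the robot base frame
            # The GGCNN angle is a rotation about the optical axis; for a top-down
            # grasp it is carried over directly as the gripper yaw.
            return p_base[:3], angle

    The abstract states that the detection results, the 2D grasp pose, and the depth information are fused to reconstruct each object's point cloud before the 3D grasp pose is computed; that reconstruction step is omitted from this sketch.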

Cite this article

杨傲雷, 曹裕, 徐昱琳, 费敏锐, 陈灵. Dynamic multitarget 3D grasp posture detection approach based on deep convolutional network [J]. 仪器仪表学报 (Chinese Journal of Scientific Instrument), 2019, 40(12): 135-142.

History
  • Received:
  • Revised:
  • Accepted:
  • Published online: 2022-04-19
  • Publication date: